{"id":611,"date":"2010-06-23T16:09:46","date_gmt":"2010-06-23T20:09:46","guid":{"rendered":"https:\/\/wpmu2.mit.local\/?p=611"},"modified":"2010-06-23T16:11:05","modified_gmt":"2010-06-23T20:11:05","slug":"computer-vision-for-vehicles","status":"publish","type":"post","link":"https:\/\/wpmu2.mit.local\/computer-vision-for-vehicles\/","title":{"rendered":"Computer Vision for Vehicles"},"content":{"rendered":"
\"Figure<\/a>

Figure 1: Segmentation result from a single camera for urban daytime driving<\/p><\/div>\n

Under the pressure of a growing population, crowded traffic, the energy crisis, and environmental concerns, current transportation systems face serious challenges in safety, security, efficiency, mobile access, and environmental impact [1<\/a>]<\/sup>. There have been over 200,000 pedestrian fatalities in the US over the last 30 years, and eighty percent of police reports [1<\/a>]<\/sup> cited driver error as the primary cause of vehicle crashes. With the availability of faster computers, better sensor technology, and wider wireless-network coverage, Intelligent Vehicles and Intelligent Transportation Systems (ITS) are increasingly seen as a crucial innovation for improving safety and reducing damage. It is estimated that equipping vehicles with collision-avoidance systems could prevent 1.1 million accidents in the US each year (17 percent of all traffic accidents), saving 17,500 lives and $26 billion in accident-related costs [2<\/a>]<\/sup>. The demand for in-car electronic products is growing: electronics now account for around 35 percent of the cost of assembling a car [3<\/a>]<\/sup>.<\/p>\n

\"Figure<\/a>

Figure 2: Segmentation results for urban daytime driving, night driving, and pedestrian tracking.<\/p><\/div>\n

Environment-understanding technology is vital for giving Intelligent Vehicles the ability to respond automatically to fast-changing environments and dangerous situations. To obtain such perceptual abilities, a vehicle must automatically detect static and dynamic obstacles and estimate their related information, such as location, speed, collision\/occlusion likelihood, and other current and historical dynamics. Conventional methods detect each piece of information independently, and the resulting estimates are typically noisy and unreliable. Instead, we propose a layered, fusion-based information-retrieval methodology that systematically detects obstacles and estimates their location and timing information in both visible and infrared sequences. The proposed obstacle-detection methods exploit the connections between different kinds of information to improve the accuracy of obstacle-information estimation, thereby improving environment understanding and driving safety. Three examples are shown in Figures 1 and 2. [4<\/a>]<\/sup> [5<\/a>]<\/sup> [6<\/a>]<\/sup> [7<\/a>]<\/sup><\/p>\n
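One concrete piece of the timing information mentioned above is time-to-contact (TTC), which reference [6] derives directly from image brightness derivatives, with no feature tracking. For pure camera translation toward a frontal planar surface, brightness constancy with the radially expanding flow field reduces per pixel to G = -T * E_t, where G = x*E_x + y*E_y is the radial gradient and T is the TTC. A minimal sketch of the resulting least-squares estimator follows; it is an illustration under these assumptions, and the function name and synthetic-data setup are ours, not taken from the cited paper's code.

```python
import numpy as np

def time_to_contact(E_x, E_y, E_t, x, y):
    """Least-squares time-to-contact (in frame intervals) for pure
    translation toward a frontal planar surface.

    Brightness constancy with the expanding flow u = x/T, v = y/T gives
    G/T + E_t = 0 per pixel, where G = x*E_x + y*E_y. Minimizing
    sum((G/T + E_t)**2) over 1/T yields the closed form below.
    """
    G = x * E_x + y * E_y                      # radial image gradient
    return -np.sum(G * G) / np.sum(G * E_t)

# Synthetic check: derivatives constructed to satisfy the model exactly,
# with a known TTC of 20 frame intervals.
x, y = np.meshgrid(np.linspace(-1.0, 1.0, 32), np.linspace(-1.0, 1.0, 32))
E_x = np.cos(3.0 * x) + 2.0                    # arbitrary smooth spatial gradients
E_y = np.sin(2.0 * y) + 1.0
T_true = 20.0
E_t = -(x * E_x + y * E_y) / T_true            # temporal derivative implied by the model

print(round(time_to_contact(E_x, E_y, E_t, x, y), 6))  # 20.0
```

In practice E_x, E_y, and E_t would come from finite differences of consecutive video frames, and the sums would be restricted to a region of interest around a detected obstacle.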


References
  1. Intelligent Transportation Society of America, \u201cNational Intelligent Transportation System Program Plan: A Ten-Year Vision,\u201d United States Department of Transportation, Tech. Rep., January 2002. [↩<\/a>] [↩<\/a>]<\/li>
  2. The Intelligent Vehicle Initiative: Advancing “Human-Centered” Smart Vehicles. Available: http:\/\/www.tfhrc.gov\/pubrds\/pr97-10\/p18.htm [↩<\/a>]<\/li>
  3. \u201cAsia – New Hotbed for Consumer Automotive Electronics.\u201d Available: http:\/\/www.technewsworld.com\/story\/52539.html [↩<\/a>]<\/li>
  4. Yajun Fang, Sumio Yokomitsu, Berthold Horn, Ichiro Masaki, \u201cA Layered-based Fusion-based Approach to Detect and Track the Movements of Pedestrians through Partially Occluded Situations.\u201d IEEE Intelligent Vehicles Symposium 2009 (IV2009). [↩<\/a>]<\/li>
  5. Y. Fang, B.K.P. Horn, I. Masaki, \u201cSystematic information fusion methodology for static and dynamic obstacle detection in ITS.\u201d 15th World Congress on ITS, 2008. [↩<\/a>]<\/li>
  6. B.K.P. Horn, Y. Fang, I. Masaki, \u201cTime to Contact Relative to a Planar Surface.\u201d IEEE Intelligent Vehicles Symposium 2007. [↩<\/a>]<\/li>
  7. Y. Fang, K. Yamada, Y. Ninomiya, B.K.P. Horn, and I. Masaki, \u201cA Shape-Independent Method for Pedestrian Detection with Far-Infrared Images.\u201d Special issue on \u201cIn-Vehicle Computer Vision Systems\u201d of IEEE Transactions on Vehicular Technology, Vol. 53, No. 6, Nov. 2004, pp. 1679-1697. [↩<\/a>]<\/li><\/ol><\/div>","protected":false},"excerpt":{"rendered":"

    Under the pressure of increasing population, crowded traffic, the energy crisis, and environmental concerns, current transportation systems have run into…<\/p>\n<\/div>","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[26],"tags":[4033,59,4034,4032],"_links":{"self":[{"href":"https:\/\/wpmu2.mit.local\/wp-json\/wp\/v2\/posts\/611"}],"collection":[{"href":"https:\/\/wpmu2.mit.local\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/wpmu2.mit.local\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/wpmu2.mit.local\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/wpmu2.mit.local\/wp-json\/wp\/v2\/comments?post=611"}],"version-history":[{"count":1,"href":"https:\/\/wpmu2.mit.local\/wp-json\/wp\/v2\/posts\/611\/revisions"}],"predecessor-version":[{"id":2374,"href":"https:\/\/wpmu2.mit.local\/wp-json\/wp\/v2\/posts\/611\/revisions\/2374"}],"wp:attachment":[{"href":"https:\/\/wpmu2.mit.local\/wp-json\/wp\/v2\/media?parent=611"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/wpmu2.mit.local\/wp-json\/wp\/v2\/categories?post=611"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/wpmu2.mit.local\/wp-json\/wp\/v2\/tags?post=611"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}