
dc.contributor.advisor: Huang, Junzhou
dc.creator: Deng, Zhifei
dc.date.accessioned: 2020-01-10T17:53:43Z
dc.date.available: 2020-01-10T17:53:43Z
dc.date.created: 2019-12
dc.date.issued: 2019-12-10
dc.date.submitted: December 2019
dc.identifier.uri: http://hdl.handle.net/10106/28835
dc.description.abstract: Autonomous driving is about to shape the future of our lives. Self-driving vehicles produced by Waymo and many other companies have demonstrated excellent driving capabilities on the road. However, accidents still happen. Correctly recognising traffic signs, such as stop signs, is critical for a self-driving vehicle; failing to recognise them could lead to fatal accidents. Meanwhile, computer vision has made huge progress since the advent of deep learning, for example in image classification, object detection, and instance segmentation. Efforts have been made to develop faster and more accurate object detection methods, and Faster R-CNN stands out as one of the most popular frameworks. Although frameworks like Faster R-CNN achieve state-of-the-art results in generic object detection, few endeavours have targeted traffic sign detection. Detecting traffic signs in street view images is much more challenging than detecting generic objects in natural images: street view images have high resolution, while traffic signs tend to be small in them, and their complex backgrounds add further difficulty. In this thesis, we propose a novel two-stage object detection method for the challenging problem of detecting traffic signs in large street view images. In the first stage, we detect coarse candidate regions that might contain traffic signs; we then zoom into those regions and find the exact locations of the traffic signs in the second stage. The proposed method achieves an AP (average precision) of 0.85 on a large street view dataset from an industry partner, greatly outperforming Faster R-CNN, whose AP is around 0.13. The result reflects the potential of the two-stage approach for detecting small objects in high-resolution images.
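The two-stage pipeline the abstract describes can be sketched as follows. This is a minimal illustration only, not the thesis's implementation: `coarse_detect` and `refine_detect` are hypothetical stand-ins for the first-stage detector (run on a downscaled copy of the large image) and the second-stage detector (run on zoomed-in candidate crops); both would be learned models in practice.

```python
def coarse_detect(image):
    """Stage 1 (placeholder): detect rough candidate regions that
    might contain traffic signs, in full-resolution coordinates."""
    h, w = image["height"], image["width"]
    # Pretend the coarse detector found one candidate region.
    return [(w // 4, h // 4, w // 2, h // 2)]  # (x, y, box_w, box_h)

def refine_detect(candidate, image):
    """Stage 2 (placeholder): zoom into a candidate region at full
    resolution and return precise traffic-sign boxes inside it."""
    x, y, w, h = candidate
    # Pretend a precise box was located inside the candidate region.
    return [(x + w // 8, y + h // 8, w // 4, h // 4)]

def two_stage_detect(image):
    """Coarse candidates first, then precise detections inside each
    zoomed-in candidate region."""
    detections = []
    for candidate in coarse_detect(image):
        detections.extend(refine_detect(candidate, image))
    return detections

# Usage on a mock high-resolution street-view image:
image = {"width": 4096, "height": 2048}
print(two_stage_detect(image))
```

The key design point is that the expensive precise detector never sees the whole high-resolution image; it only processes the small crops proposed by the cheap coarse stage, which is what makes small-object detection in large images tractable.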
dc.format.mimetype: application/pdf
dc.subject: Object detection
dc.subject: Traffic sign detection
dc.subject: Autonomous driving
dc.title: DETECT TRAFFIC SIGNS FROM LARGE STREET VIEW IMAGES WITH DEEP LEARNING
dc.type: Thesis
dc.degree.department: Computer Science and Engineering
dc.degree.name: Master of Science in Computer Science
dc.date.updated: 2020-01-10T17:53:44Z
thesis.degree.department: Computer Science and Engineering
thesis.degree.grantor: The University of Texas at Arlington
thesis.degree.level: Masters
thesis.degree.name: Master of Science in Computer Science
dc.type.material: text

