Abstract: The growing number of autonomous vehicles and the rapid development of
computer vision technologies make research on the accuracy of traffic sign
recognition particularly important. Numerous studies in this field have already
achieved significant results, demonstrating high effectiveness in traffic sign
recognition tasks. However, the task
becomes considerably more complex when a sign is partially obscured by
surrounding objects, such as tree branches, billboards, or other elements of
the urban environment. In our study, we investigated how partial occlusion of
traffic signs affects their recognition. For this purpose, we collected a
dataset comprising 5,746 images, including both fully visible and partially
occluded signs, and made it publicly available. Using this dataset, we compared
the performance of our custom convolutional neural network (CNN), which
achieved 96% accuracy, with models trained using transfer learning. The best
result was obtained by VGG16 with full layer unfreezing, reaching 99% accuracy.
Additional experiments revealed that models trained solely on fully visible
signs lose effectiveness when recognizing occluded signs. This highlights the
critical importance of incorporating real-world data with partial occlusion
into training sets to ensure robust model performance in complex practical
scenarios and to enhance the safety of autonomous driving.
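The best-performing configuration above, VGG16 fine-tuned with full layer unfreezing, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the input size, classifier head, class count, and learning rate are assumptions chosen for the sketch, and only the overall technique (a VGG16 backbone with every layer trainable) comes from the abstract.

```python
# Sketch of transfer learning with VGG16 and full layer unfreezing.
# Hyperparameters here (224x224 input, 256-unit head, Adam at 1e-5)
# are illustrative assumptions, not values reported in the paper.
import tensorflow as tf
from tensorflow.keras import layers, models


def build_vgg16_classifier(num_classes: int,
                           weights: str = "imagenet") -> tf.keras.Model:
    """VGG16 backbone with every layer trainable ('full unfreezing')."""
    base = tf.keras.applications.VGG16(
        include_top=False, weights=weights, input_shape=(224, 224, 3)
    )
    base.trainable = True  # unfreeze all convolutional layers

    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(256, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    # A small learning rate is typical when fine-tuning all layers,
    # so the pretrained features are not destroyed early in training.
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
        loss="categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model
```

With pretrained weights, training would proceed with `model.fit(...)` on a dataset that mixes fully visible and partially occluded signs, which is the regime the abstract argues is necessary for robustness.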