2026, Vol. 43, No. 02: 47–55
Experimental design of traffic sign detection and recognition in complex scenes based on YOLO-BSNM
DOI: 10.16791/j.cnki.sjg.2026.02.006
Abstract:

[Objective] This study aims to address the critical challenges of recognizing small traffic signs in autonomous driving scenarios. These challenges are particularly pronounced in highly dynamic, visually cluttered environments, under adverse meteorological conditions, and when signs are far away.

[Methods] To enhance recognition performance, this study adopted an improved systematic detection method based on the YOLO v11n architecture. An optimized framework for complex scenarios, YOLO-BSNM, was constructed by integrating a multichannel attention (MCA) module built on a channel–height–width three-branch dynamic fusion mechanism, a normalized Wasserstein distance loss (NWDLoss), a 160×160-pixel high-resolution small-object detection head, and a BiFPN feature fusion module. The MCA module's three-branch dynamic fusion enhanced the discriminability of small-object features. NWDLoss addressed the localization sensitivity of traditional IoU-based losses through probabilistic distribution matching. The small-object detection head incorporated a sampling–fusion–extraction layer to mitigate the detail loss caused by resolution decay in deep features. Finally, the BiFPN module performed weighted fusion of multiresolution feature maps to reduce the loss of critical features.

[Results] The experimental results demonstrated that the improved YOLO-BSNM algorithm achieved a precision (P) of 81.7%, a recall (R) of 75.4%, and an mAP50 of 83.4% on the custom dataset, a substantial advance over the baseline algorithm. Lightweight optimization also reduced the model size by 0.62 million parameters, yielding higher detection accuracy with fewer computational resources. The framework further showed enhanced adaptability to challenging scenarios such as blurring and occlusion. The algorithm's edge-device deployment capability offers a robust foundation for meeting the real-time detection needs of intelligent transportation systems.

[Conclusions] This study developed an efficient and accurate traffic sign recognition technology based on the YOLO-BSNM model. The technology overcame the limitations of traditional recognition methods, improving both the efficiency and precision of traffic sign recognition, and provides substantial support for advancing autonomous driving. Deployment and performance verification on an embedded Raspberry Pi 4B platform demonstrated that the optimized model achieved a good balance between a lightweight architecture and detection accuracy, effectively reducing the false alarm and missed detection rates and thereby meeting the objective of enhancing performance while minimizing parameter overhead. Through the optimized combination of an attention mechanism and a feature fusion strategy, YOLO-BSNM simultaneously improved micro-object feature discrimination and multiscale information integration, while the probabilistic distribution matching–based loss function enhanced the stability of bounding box regression. These technical innovations yield an efficient detection framework and a novel approach for detecting micro-objects in complex environments. The lightweight detection system represents a promising solution for deployment in resource-constrained scenarios, such as drone-based remote sensing and satellite image analysis. Integrating multimodal data from infrared sensors and LiDAR with dynamic environment adaptation algorithms could further enable the model to overcome detection challenges under variable lighting and extreme weather conditions. This research provides a technical framework for autonomous driving applications and opens new avenues for investigating universal micro-object detection solutions in computer vision.
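The channel–height–width three-branch fusion behind the MCA module can be illustrated with a minimal sketch. This is an illustrative assumption, not the paper's implementation: each branch pools the feature map down to a descriptor along one dimension (channel, height, or width), applies a sigmoid gate along that dimension, and the three gated maps are combined with normalized scalar weights standing in for the dynamic fusion.

```python
import numpy as np

def _branch_gate(x, axis):
    # Pool over every axis except `axis` to get a 1-D descriptor,
    # then sigmoid-gate the feature map along that dimension.
    other = tuple(a for a in range(x.ndim) if a != axis)
    desc = x.mean(axis=other, keepdims=True)
    gate = 1.0 / (1.0 + np.exp(-desc))   # sigmoid attention weights
    return x * gate                      # broadcasts back over the map

def three_branch_attention(x, weights=(1.0, 1.0, 1.0)):
    """x: feature map of shape (C, H, W). One attention branch per
    dimension; outputs are fused with normalized scalar weights."""
    branches = [_branch_gate(x, axis) for axis in range(3)]
    w = np.asarray(weights, dtype=np.float64)
    w = w / w.sum()
    return sum(wi * b for wi, b in zip(w, branches))
```

In a trained network the fusion weights would be learned (and in the dynamic variant, input-dependent) rather than fixed scalars as here.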
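The probabilistic distribution matching in NWDLoss can be sketched with the common normalized-Wasserstein formulation: a box (cx, cy, w, h) is modeled as the 2-D Gaussian N([cx, cy], diag(w²/4, h²/4)), and the 2-Wasserstein distance between two such Gaussians has a closed form. The constant `c` below is a dataset-dependent hyperparameter assumed for illustration; the paper's exact loss configuration is not reproduced here.

```python
import math

def nwd(box_a, box_b, c=12.8):
    """Normalized Wasserstein distance between boxes (cx, cy, w, h),
    each modeled as a 2-D Gaussian N([cx, cy], diag(w^2/4, h^2/4))."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Closed-form squared 2-Wasserstein distance between the Gaussians
    w2 = ((ax - bx) ** 2 + (ay - by) ** 2
          + ((aw - bw) / 2) ** 2 + ((ah - bh) / 2) ** 2)
    return math.exp(-math.sqrt(w2) / c)

def nwd_loss(pred, target):
    # Falls to 0 as the predicted box converges to the target,
    # and degrades smoothly even when the boxes do not overlap.
    return 1.0 - nwd(pred, target)
```

Unlike IoU-based losses, this similarity stays informative for tiny boxes whose IoU collapses to zero under small localization offsets, which matches the sensitivity issue described above.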
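The weighted multiresolution fusion in the BiFPN module is commonly implemented as "fast normalized fusion": each input feature map gets a learnable non-negative scalar weight, and the weights are normalized by their sum plus a small epsilon. A minimal sketch, assuming the inputs have already been resized to a common resolution:

```python
import numpy as np

def bifpn_fuse(features, weights, eps=1e-4):
    """Fast normalized fusion: out = sum_i(w_i * f_i) / (eps + sum_i w_i),
    with weights clipped to be non-negative (ReLU)."""
    w = np.maximum(np.asarray(weights, dtype=np.float32), 0.0)
    w = w / (eps + w.sum())
    return sum(wi * f for wi, f in zip(w, features))
```

The learned weights let the network down-weight feature levels that contribute little at a given scale, which is how the module reduces the loss of critical small-object features during cross-scale aggregation.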


Basic Information:


China Classification Code: TP183; TP391.41; U463.6

Citation Information:

[1] SONG Jun, CHU Zhihan, FAN Zihao, et al. Experimental design of traffic sign detection and recognition in complex scenes based on YOLO-BSNM[J]. Experimental Technology and Management, 2026, 43(02): 47-55. DOI: 10.16791/j.cnki.sjg.2026.02.006.

Fund Information:

2024 Nanjing Forestry University Graduate High-Quality Teaching Resource Construction Project (164070020); National Natural Science Foundation of China (62261040)

Published:  

2026-02-06

