CRD-YOLOv5s-based object detection in traffic scenario

长安大学学报(自然科学版) / Journal of Chang'an University (Natural Science Edition) [ISSN: 1671-8879]

Issue:
2025, No. 5
Page:
186-199
Research Field:
Traffic Engineering
Info

Title:
CRD-YOLOv5s-based object detection in traffic scenario
Author(s):
ZHOU Li, DAI Liang, YANG Jie, LING Zhi-kai
(School of Electronics and Control Engineering, Chang'an University, Xi'an 710064, Shaanxi, China)
Keywords:
object detection in traffic scenario; YOLOv5s; convolutional block attention module; receptive field block; decoupled structure; smooth intersection over union
CLC Number:
U495
DOI:
10.19721/j.cnki.1671-8879.2025.05.016
Abstract:
To address the missed and false detections of multi-scale targets in traffic scenarios with complex road environments, an improved model based on YOLOv5s, named CRD-YOLOv5s, was proposed. To tackle missed detections of small targets, a convolutional block attention module (CBAM) was introduced into the backbone network to enhance feature extraction for small objects, and the spatial pyramid pooling module was replaced with a receptive field block (RFB) to strengthen the capability to detect multiple objects. The coupled structure of the original head network was substituted with a decoupled structure (DS) to enhance the sufficiency of feature representation and the efficiency of information propagation. The smooth intersection over union (SIoU) loss function was adopted to improve the convergence speed and detection accuracy of the network during training. Precision, recall and mean average precision were used to evaluate model performance, and the test dataset was categorized into pedestrians, cars, traffic lights, and traffic signs. The research results demonstrate that in the car category test, the accuracy of CRD-YOLOv5s is 1.1% higher than that of YOLOv5s, and the average precision is 0.7% higher. For multi-class detection, the mean average precision of CRD-YOLOv5s reaches 53.3%, exceeding YOLOv5s by 1.3%. CRD-YOLOv5s outperforms YOLOv5s in multi-object detection, edge-object detection and false-detection reduction in complex environments. It significantly improves detection accuracy from the driver's perspective, effectively reduces missed and false detections, and is well suited to object detection in traffic scenarios. While maintaining competitive performance in single-object detection, it exhibits superior capability in multi-object and complex-environment detection. The research results provide an effective mathematical model for practical traffic detection applications and offer reliable technical support for intelligent traffic detection and traffic safety. 7 tabs, 14 figs, 33 refs.
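As a rough illustration of the attention mechanism named in the abstract, the sketch below implements a standard CBAM block (channel attention followed by spatial attention, after Woo et al., ref [28]) in PyTorch. The framework, class names and hyper-parameters (reduction ratio 16, 7×7 spatial kernel) are assumptions taken from the original CBAM paper, not the authors' CRD-YOLOv5s code; the point is only to show how the module reweights backbone features before detection.

import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelAttention(nn.Module):
    """Reweights channels using globally average- and max-pooled descriptors."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Shared two-layer MLP implemented with 1x1 convolutions
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )

    def forward(self, x):
        avg = self.mlp(F.adaptive_avg_pool2d(x, 1))
        mx = self.mlp(F.adaptive_max_pool2d(x, 1))
        return torch.sigmoid(avg + mx)             # (N, C, 1, 1)


class SpatialAttention(nn.Module):
    """Reweights spatial locations using channel-wise mean and max maps."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)          # (N, 1, H, W)
        mx, _ = x.max(dim=1, keepdim=True)         # (N, 1, H, W)
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))


class CBAM(nn.Module):
    """Channel attention then spatial attention, applied to a backbone feature map."""

    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = SpatialAttention(kernel_size)

    def forward(self, x):
        x = x * self.ca(x)    # emphasize informative channels
        x = x * self.sa(x)    # emphasize informative locations, including small objects
        return x


if __name__ == "__main__":
    feat = torch.randn(1, 256, 40, 40)             # dummy backbone feature map
    print(CBAM(256)(feat).shape)                   # torch.Size([1, 256, 40, 40])

Where and how many such blocks are inserted into the YOLOv5s backbone in CRD-YOLOv5s is specified in the full paper; the sketch only shows the module itself.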

References:

[1] 肖雨晴,杨慧敏.目标检测算法在交通场景中应用综述[J].计算机工程与应用,2021,57(6):30-41.
XIAO Yu-qing, YANG Hui-min. Research on application of object detection algorithm in traffic scene[J]. Computer Engineering and Applications, 2021, 57(6): 30-41.
[2]张凯祥,朱 明.基于YOLOv5的多任务自动驾驶环境感知算法[J].计算机系统应用,2022,31(9):226-232.
ZHANG Kai-xiang, ZHU Ming. Environmental perception algorithm for multi-task autonomous driving based on YOLOv5[J]. Computer Systems and Applications, 2022, 31(9): 226-232.
[3]赵 萍,李 欣,朱少武.基于时空图注意力神经网络的交通道路拥塞和异常预测[J].科学技术与工程,2022,22(3):1271-1278.
ZHAO Ping, LI Xin, ZHU Shao-wu. Traffic road congestion and anomaly prediction based on spatio-temporal graph attention neural networks[J]. Science Technology and Engineering, 2022, 22(3): 1271-1278.
[4]何朋朋.基于深度学习的交通场景多目标检测与分类研究[D].西安:长安大学,2018.
HE Peng-peng. Multi-object detection and classification based on deep learning in traffic scene[D]. Xi'an: Chang'an University, 2018.
[5]杨 康.基于视频的车辆检测理论与方法研究[D].西安:长安大学,2013.
YANG Kang. Research on theory and method of vehicle detection based on video[D]. Xi'an: Chang'an University, 2013.
[6]李永豪.基于YOLOv5s的车辆检测改进算法[D].西安:长安大学,2022.
LI Yong-hao. Improved vehicle detection algorithm based on YOLOv5s[D]. Xi'an: Chang'an University, 2022.
[7]LIU W, ANGUELOV D, ERHAN D, et al. SSD: Single shot multibox detector[C]//LEIBE B, MATAS J, SEBE N, et al. Computer Vision-ECCV 2016, Part Ⅰ. Berlin: Springer, 2016: 21-37.
[8]REDMON J, DIVVALA S, GIRSHICK R, et al. You only look once: unified, real-time object detection[C]//IEEE. 2016 IEEE Conference on Computer Vision and Pattern Recognition(CVPR). New York: IEEE, 2016: 779-788.
[9]GIRSHICK R, DONAHUE J, DARRELL T, et al. Rich feature hierarchies for accurate object detection and semantic segmentation[C]//IEEE. 2014 IEEE Conference on Computer Vision and Pattern Recognition(CVPR). New York: IEEE, 2014: 580-587.
[10]HE K M, ZHANG X Y, REN S Q, et al. Spatial pyramid pooling in deep convolutional networks for visual recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(9): 1904-1916.
[11]GIRSHICK R. Fast R-CNN[C]//IEEE. 2015 IEEE International Conference on Computer Vision(ICCV). New York: IEEE, 2015: 1440-1448.
[12]REN S Q, HE K M, GIRSHICK R, et al. Faster R-CNN: Towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137-1149.
[13]REDMON J, FARHADI A. YOLO9000: Better, faster, stronger[C]//IEEE. 2017 IEEE Conference on Computer Vision and Pattern Recognition(CVPR). New York: IEEE, 2017: 6517-6525.
[14]REDMON J, FARHADI A. YOLOv3: An incremental improvement[J]. arXiv, 2018, https://arxiv.org/abs/1804.02767.
[15]BOCHKOVSKIY A, WANG C Y, LIAO H Y M. YOLOv4: Optimal speed and accuracy of object detection[J]. arXiv, 2020, https://arxiv.org/abs/2004.10934.
[16]WANG A, CHEN H, LIU L, et al. YOLOv10: Real-time end-to-end object detection[J]. Advances in Neural Information Processing Systems, 2024, 37: 107984-108011.
[17]KHANAM R, HUSSAIN M. YOLOv11: An overview of the key architectural enhancements[J]. arXiv, 2024, https://arxiv.org/abs/2410.17725.
[18]肖雨晴,杨慧敏.基于改进YOLOv3算法的交通场景目标检测[J].森林工程,2022,38(6):164-171.
XIAO Yu-qing, YANG Hui-min. Object detection based on improved YOLOv3 algorithm in traffic scenes[J]. Forest Engineering, 2022, 38(6): 164-171.
[19]LI Y, LYU C. SS-YOLO: An object detection algorithm based on YOLOv3 and ShuffleNet[C]//XU B, MOU K. Proceedings of 2020 IEEE 4th Information Technology, Networking, Electronic and Automation Control Conference(ITNEC 2020). New York: IEEE, 2020: 769-772.
[20]张丽莹,庞春江,王新颖,等.基于改进YOLOv3的多尺度目标检测算法[J].计算机应用,2022,42(8):2423-2431.
ZHANG Li-ying, PANG Chun-jiang, WANG Xin-ying, et al. Multi-scale object detection algorithm based on improved YOLOv3[J]. Journal of Computer Applications, 2022, 42(8): 2423-2431.
[21]宦 海,陈逸飞,张 琳,等.一种改进的BR-YOLOv3目标检测网络[J].计算机工程,2021,47(10):186-193.
HUAN Hai, CHEN Yi-fei, ZHANG Lin, et al. An improved BR-YOLOv3 object detection network[J]. Computer Engineering, 2021, 47(10): 186-193.
[22]朱程铮.高密度交通场景下智能汽车多目标检测与跟踪算法研究[D].镇江:江苏大学,2022.
ZHU Cheng-zheng. Research on multi-target detection and tracking algorithm of intelligent vehicle in high-density traffic scenario[D]. Zhenjiang: Jiangsu University, 2022.
[23]董 想.复杂场景下基于多尺度特征的目标检测算法研究[D].北京:北京交通大学,2022.
DONG Xiang. Research on object detection algorithm based on multi-scale feature in complex scene[D]. Beijing: Beijing Jiaotong University, 2022.
[24]GE Z, LIU S, WANG F, et al. YOLOX: Exceeding YOLO series in 2021[J]. arXiv, 2021, https://arxiv.org/abs/2107.08430.
[25]李明芳. DMP-YOLO:面向自动驾驶的多尺度目标检测算法[J/OL]. 无线电工程,(2025-08-18)[2025-09-15]. https://link.cnki.net/urlid/13.1097.TN.20250818.1352.002.
LI Ming-fang. DMP-YOLO: A multi-scale object detection algorithm for autonomous driving[J/OL]. Radio Engineering, (2025-08-18)[2025-09-15]. https://link.cnki.net/urlid/13.1097.TN.20250818.1352.002.
[26]赵树恩,龚道元,田卓帅.基于改进YOLOv8模型的复杂交通场景目标检测算法[J/OL].重庆交通大学学报(自然科学版),(2025-05-27)[2025-09-15]. https://link.cnki.net/urlid/50.1190.U.20250527.1407.004.
ZHAO Shu-en, GONG Dao-yuan, TIAN Zhuo-shuai. Algorithm for object detection in complex traffic scenes based on improved YOLOv8 model[J/OL]. Journal of Chongqing Jiaotong University(Natural Science Edition), (2025-05-27)[2025-09-15]. https://link.cnki.net/urlid/50.1190.U.20250527.1407.004.
[27]黄崇庆,徐慧英,张晓雷,等.BGR-YOLO:基于YOLOv8改进的交通场景下目标检测算法[J/OL].计算机工程与科学,(2025-04-08)[2025-09-15]. https://link.cnki.net/urlid/43.1258.TP.20250408.1455.002.
HUANG Chong-qing, XU Hui-ying, ZHANG Xiao-lei, et al. BGR-YOLO: An improved object detection algorithm under traffic scenarios based on YOLOv8[J/OL]. Computer Engineering and Science, (2025-04-08)[2025-09-15]. https://link.cnki.net/urlid/43.1258.TP.20250408.1455.002.
[28]WOO S, PARK J, LEE J Y, et al. CBAM: Convolutional block attention module[C]//FERRARI V, HEBERT M, SMINCHISESCU C, et al. Computer Vision-ECCV 2018, Part Ⅶ. Berlin: Springer, 2018: 3-19.
[29]LIU S T, HUANG D, WANG Y H. Receptive field block net for accurate and fast object detection[C]//FERRARI V, HEBERT M, SMINCHISESCU C, et al. Computer Vision-ECCV 2018, Part Ⅺ. Berlin: Springer, 2018: 404-419.
[30]周 力,惠 飞,张嘉洋,等.基于RDB-YOLOv5的遥感图像车辆检测[J].长安大学学报(自然科学版),2024,44(3):149-160.
ZHOU Li, HUI Fei, ZHANG Jia-yang, et al. Remote sensing images vehicle detection based on RDB-YOLOv5[J]. Journal of Chang'an University(Natural Science Edition), 2024, 44(3): 149-160.
[31]乔 朋,袁 彪,申迎港,等.基于YOLOv5 DeepSORT和虚拟检测区的车轴时空定位方法[J].长安大学学报(自然科学版),2023,43(3):34-44.
QIAO Peng, YUAN Biao, SHEN Ying-gang, et al. Spatio-temporal axle localization method based on YOLOv5 DeepSORT and virtual detection area[J]. Journal of Chang'an University(Natural Science Edition), 2023, 43(3): 34-44.
[32]SZEGEDY C, LIU W, JIA Y Q, et al. Going deeper with convolutions[C]//IEEE. 2015 IEEE Conference on Computer Vision and Pattern Recognition(CVPR). New York: IEEE, 2015: 1-9.
[33]GEVORGYAN Z. SIoU loss: More powerful learning for bounding box regression[J]. arXiv, 2022, https://arxiv.org/abs/2205.12740.
