Pedestrian Detection System using YOLOv5 for Advanced Driver Assistance System (ADAS)
Abstract
Transportation technology is developing rapidly toward self-driving vehicles, and detecting the situation around a vehicle is essential to prevent accidents. This need is not limited to conventional vehicles, in which accidents commonly happen, but also applies to autonomous vehicles. In this paper, we propose a system for detecting pedestrians using a camera and a minicomputer. Pedestrian detection is performed with the YOLOv5 object detection method, which is based on a convolutional neural network. The proposed model is trained with several numbers of epochs to find the optimum training configuration for detecting pedestrians. The lowest object and bounding box losses are obtained when the model is trained for 2000 epochs, but building that model takes at least 3 hours. The optimum configuration is instead obtained with 1000 epochs, which yields the largest object loss reduction (1.49 points) and a moderate bounding box loss reduction (1.5 points) compared with the other epoch counts. The proposed system is implemented on a Raspberry Pi 4 with a monocular camera and is only able to detect objects at 0.9 frames per second. For further development, a more powerful computing device is needed to reach real-time pedestrian detection.
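As an illustration of the deployment step described in the abstract, the following Python sketch loads a trained YOLOv5 model through PyTorch Hub and measures detection throughput (frames per second) on a monocular camera stream. This is a minimal sketch, not the authors' exact implementation: the weights file name best.pt, the confidence threshold, and the 100-frame measurement window are assumptions.

```python
import time

import cv2
import torch

# Load the trained YOLOv5 weights through PyTorch Hub.
# 'best.pt' is a placeholder for the weights produced by the
# pedestrian-detection training run (e.g. the 1000-epoch model).
model = torch.hub.load('ultralytics/yolov5', 'custom', path='best.pt')
model.conf = 0.4  # detection confidence threshold (assumed value)

cap = cv2.VideoCapture(0)  # monocular camera attached to the Raspberry Pi 4
frames, start = 0, time.time()

while frames < 100:  # measure throughput over 100 frames
    ok, frame = cap.read()
    if not ok:
        break

    # YOLOv5's AutoShape wrapper expects RGB input; OpenCV delivers BGR.
    results = model(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    detections = results.xyxy[0]  # each row: [x1, y1, x2, y2, conf, class]
    frames += 1

elapsed = time.time() - start
print(f'processed {frames} frames in {elapsed:.1f} s '
      f'({frames / elapsed:.2f} FPS)')
cap.release()
```

Training with different epoch counts is normally done through the YOLOv5 repository's train.py script, for example `python train.py --data pedestrian.yaml --weights yolov5s.pt --epochs 1000`, where the dataset configuration file pedestrian.yaml is a hypothetical name.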
Copyright (c) 2023 Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi)
This work is licensed under a Creative Commons Attribution 4.0 International License.
Copyright in each article belongs to the author.
- The author acknowledges that Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) is the first publisher of the article, under a Creative Commons Attribution 4.0 International License.
- Authors may enter into separate, non-exclusive distribution arrangements for the version of the manuscript published in this journal (e.g., depositing it in the author's institutional repository or publishing it in a book), with an acknowledgement that the manuscript was first published in Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi).