Perspective-Corrected Extraction of Trajectories from Urban Traffic Camera Using CNN

Abstract

A realistic data basis is crucial for the development of Advanced Driver Assistance Systems and traffic research. Traffic cameras at urban intersections provide a way to efficiently obtain information about complex traffic behavior, with the trajectories of road users being particularly relevant. For this purpose, the established Deep Neural Network Mask R-CNN is trained with a self-generated dataset to segment vehicles, cyclists, and pedestrians frame by frame, achieving an Average Precision of 94.07 % for vehicles. For object tracking, the tracking algorithm SORT was found to be a suitable method. In order to minimize the perspective error of the vehicle position caused by the lateral camera perspective, a novel method is presented that estimates ground planes for vehicles based on the segmentation. The estimated trajectories are evaluated against data from real measurement runs with a reference vehicle across the intersection area, resulting in an average accuracy of 0.57 m. The generated data can be used in traffic simulation software and to create fully defined scenarios for virtual vehicle testing. Furthermore, we provide an open-source implementation of the proposed work at https://github.com/jul095/TrafficMonitoring.
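The paper's segmentation-based ground-plane estimation is only summarized above; as a rough illustration of the underlying perspective-correction idea (not the authors' method), the following minimal Python sketch maps tracked image points onto a flat ground plane via a planar homography. All point correspondences and names are hypothetical placeholders, assuming known image-to-world reference points at the intersection.

```python
import numpy as np
import cv2

# Hypothetical image-to-ground correspondences (pixels -> metres), e.g. surveyed
# lane markings at the intersection; the values below are placeholders.
image_pts = np.array([[412, 780], [1510, 765], [1620, 420], [350, 430]], dtype=np.float32)
world_pts = np.array([[0.0, 0.0], [18.0, 0.0], [18.0, 25.0], [0.0, 25.0]], dtype=np.float32)

# Homography mapping image coordinates onto the assumed flat ground plane.
H, _ = cv2.findHomography(image_pts, world_pts)

def project_to_ground(pixel_xy):
    """Map one image point (e.g. a tracked vehicle's footprint) to ground-plane metres."""
    pt = np.array([[pixel_xy]], dtype=np.float32)  # shape (1, 1, 2) as expected by OpenCV
    return cv2.perspectiveTransform(pt, H)[0, 0]

# Example: convert a short pixel trajectory into metric ground-plane coordinates.
pixel_track = [(600, 700), (640, 690), (685, 678)]
ground_track = [project_to_ground(p) for p in pixel_track]
```

A single homography assumes a planar ground surface and a fixed reference point on each object; the paper instead estimates per-vehicle ground planes from the segmentation masks to reduce exactly this kind of perspective error.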

Publication
2022 International Conference on Connected Vehicle and Expo (ICCVE)