Real Time Vehicle Detection Framework for AVs

March 8, 2020
|

2 min read

[Figure: Real Time Vehicle Detection Fusing Lidar and Camera]


Real-time vehicle detection is essential for driverless systems. However, single-sensor detection is no longer sufficient in complex and changing traffic environments. This paper therefore combines a camera and light detection and ranging (LiDAR) to build a vehicle-detection framework characterized by multi-adaptability, high real-time capacity, and robustness.

First, a multi-adaptive, high-precision depth-completion method was proposed to convert the sparse 2D depth map projected from the LiDAR point cloud into a dense depth map, so that the two sensors are aligned with each other at the data level. Then, the You Only Look Once Version 3 (YOLOv3) real-time object-detection model was used to detect vehicles in both the color image and the dense depth map.
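The data-level alignment step starts from projecting the 3D LiDAR point cloud into the camera image plane to obtain the sparse depth map that depth completion then densifies. A minimal sketch of that projection is below, assuming a 4×4 LiDAR-to-camera extrinsic matrix `T_cam_lidar` and a 3×3 intrinsic matrix `K` (as provided, for example, by the KITTI calibration files); the function name and signature are illustrative, not the paper's code.

```python
import numpy as np

def lidar_to_sparse_depth(points, T_cam_lidar, K, h, w):
    """Project LiDAR points (N, 3) into an (h, w) camera image,
    producing a sparse depth map in meters (0 where no return)."""
    # Homogeneous LiDAR points -> camera frame
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])
    cam = (T_cam_lidar @ pts_h.T)[:3]          # (3, N) in camera coords
    cam = cam[:, cam[2] > 0]                   # keep points in front of camera
    # Perspective projection onto the pixel grid
    uv = (K @ cam) / cam[2]
    u, v, z = uv[0].astype(int), uv[1].astype(int), cam[2]
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth = np.zeros((h, w), dtype=np.float32)
    # Sort far-to-near so the nearest return wins when points collide
    order = np.argsort(-z[inside])
    depth[v[inside][order], u[inside][order]] = z[inside][order]
    return depth
```

Sorting far-to-near before writing ensures occluded points that land on the same pixel are overwritten by the closest surface, which is the depth the camera actually sees.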

Finally, a decision-level fusion method based on bounding-box fusion and improved Dempster–Shafer (D–S) evidence theory was proposed to merge the two detection results and obtain the final vehicle position and distance information, which improves both detection accuracy and the robustness of the whole framework.
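At the core of D–S fusion is Dempster's rule of combination, which merges the confidence (mass) assignments from the two detectors and renormalizes by the conflict between them. The sketch below uses a hypothetical two-hypothesis frame {vehicle, nonvehicle} plus the full frame Θ ("uncertain"); the paper's improved rule will differ in detail.

```python
def dempster_combine(m1, m2):
    """Combine two basic probability assignments over the frame
    {'vehicle', 'nonvehicle'}, with 'uncertain' standing for the
    full frame Θ, using Dempster's rule of combination."""
    hyp = ['vehicle', 'nonvehicle', 'uncertain']
    combined = {h: 0.0 for h in hyp}
    conflict = 0.0
    for a in hyp:
        for b in hyp:
            m = m1[a] * m2[b]
            if a == b:
                combined[a] += m
            elif 'uncertain' in (a, b):
                # Θ intersected with any hypothesis is that hypothesis
                combined[a if b == 'uncertain' else b] += m
            else:
                # vehicle ∩ nonvehicle = ∅: contributes to conflict K
                conflict += m
    # Normalize by (1 - K) to redistribute the conflicting mass
    return {h: v / (1.0 - conflict) for h, v in combined.items()}
```

For example, if the image detector assigns mass 0.8 to "vehicle" and the depth-map detector assigns 0.7, the combined mass exceeds either input, reflecting the mutual reinforcement that makes decision-level fusion more robust than either sensor alone.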

We evaluated our method using the KITTI dataset and the Waymo Open Dataset, and the results show the effectiveness of the proposed depth completion method and multi-sensor fusion strategy.

Although LiDAR and cameras can each detect objects on their own, each sensor has its limitations [15]. LiDAR is susceptible to severe weather such as rain, snow, and fog, and its resolution is quite limited compared to a camera's. Cameras, in turn, are affected by lighting, detection distance, and other factors. The two sensors therefore need to work together to complete the object-detection task in complex and changeable traffic environments.

Object-detection methods based on the fusion of camera and LiDAR can usually be divided, according to the stage at which fusion occurs, into early fusion (data-level and feature-level fusion) and decision-level (late) fusion [16].

For the complete article click here.


About The Author

Gene Roe - founder of Lidar News


