
RGB LIDAR Fusion

Kelvin Kang, Calen Robinson, Jonathan Lord-Fonda, Gerald D'Ascoli

Repository

Motivation

The self-driving problem requires detecting and localizing a variety of object classes with high confidence so that the robot can understand its surroundings and reason about how to interact with the flow of traffic while maintaining both driver and civilian safety. Modern self-driving methods accomplish this by using a mixture of sensor modalities to better understand the car's current environment. Combining sensors strengthens the overall system by covering the shortcomings of each individual sensor. For example, LiDAR sensors are accurate and consistent, but expensive and easily degraded by dust; image data is dense and cheap, but less consistent and less accurate. Modern self-driving vehicles therefore depend on the fusion of multiple modalities to ensure both accuracy and consistency. Training networks that can handle information from multiple modalities, and even exploit their complementary strengths, is essential to improving the capability of self-driving algorithms.




[Figures: example detections (Car, Books)]

Prior Work

Self-driving is an active research area, but relatively few papers address LiDAR-camera sensor fusion for object detection and localization. In [1], Wen and Jo achieve sensor fusion by converting the LiDAR information into voxels and superimposing it onto the camera images. In [2], Yang, Liang, and Urtasun propose a method that either uses high-definition maps as a prior or converts streamed LiDAR data into a 3D map that can subsequently be used for object detection from a top-down perspective. In [3], a variety of architectures are used to combine LiDAR and camera information, specifically early versus late fusion: early fusion methods combine the LiDAR and camera information before passing anything through the networks, while late fusion methods pass raw or transformed data through separate networks to produce features, which are then fused and passed through a detection head. In [4], Vora, Lang, Helou, and Beijbom start by creating a segmentation filter over the 2D image; this semantic painting is then projected onto the LiDAR point cloud, from which 3D bounding boxes are generated. In [5], Salazar-Gomez, Saavedra-Ruiz, and Romero-Cano use frustum region proposals generated by 2D object detectors to segment LiDAR point clouds into point clusters.
[1] Fast and Accurate 3D Object Detection for Lidar-Camera-Based Autonomous Vehicles Using One Shared Voxel-Based Backbone https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9340187
[2] HDNET: Exploiting HD Maps for 3D Object Detection https://arxiv.org/pdf/2012.11704.pdf
[3] LiDAR-Camera Fusion for 3D Object Detection https://www.rit.edu/academicaffairs/facultyscholarship/submit/download_file.php?id=71433
[4] PointPainting: Sequential Fusion for 3D Object Detection https://arxiv.org/pdf/1911.10150.pdf
[5] High-level camera-LiDAR fusion for 3D object detection with machine learning https://arxiv.org/pdf/2105.11060.pdf

Our Idea

What is the idea?

The central idea is to fuse LiDAR and camera data to produce robust 3D object detections (x, y, z, roll, pitch, yaw) for use in self-driving applications. We use both LiDAR and camera data because the camera provides a strong textural understanding of where cars are located in the image, while the LiDAR provides highly accurate depth information. The difficulty of working with LiDAR point clouds is that they are large (~100MB per point cloud) and unstructured, as opposed to image data with constant pixel dimensions. We project features from the camera data onto the LiDAR data, which places all of the information into a common embedding shape and allows us to fuse the data via simple concatenation. This fused data is then processed with a ResNet backbone and a YOLO detection head. We chose YOLO because our network targets online applications, where YOLO's speed is worth more than a small increase in accuracy.

Model Layout

Data Preprocessing
The RGB images come from the KITTI dataset at 1242x375 pixels. We resize each image to a 448x448 pixel square to fit the pre-trained ResNet image encoder and augment the images with color jitter and Gaussian noise to prevent overfitting. The point cloud comes from the KITTI dataset as a binary file, which we translate into an Np x 4 tensor of (x, y, z, R), where Np is the number of points in the cloud. For data augmentation, we add random Gaussian noise to the x, y, z, and R values of each point.
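
A minimal sketch of this preprocessing, assuming a PyTorch/torchvision pipeline; the augmentation strengths and noise level are illustrative placeholders rather than the exact values used in training:

```python
import numpy as np
import torch
from torchvision import transforms

# Image pipeline: resize to the 448x448 encoder input, then jitter color and add Gaussian noise.
# Jitter strengths and the noise scale are illustrative placeholders.
image_transform = transforms.Compose([
    transforms.Resize((448, 448)),
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
    transforms.ToTensor(),
    transforms.Lambda(lambda img: img + 0.01 * torch.randn_like(img)),
])

def load_kitti_pointcloud(bin_path, noise_std=0.01, augment=True):
    """Read a KITTI velodyne .bin file into an (Np, 4) tensor of (x, y, z, reflectance)."""
    points = np.fromfile(bin_path, dtype=np.float32).reshape(-1, 4)
    points = torch.from_numpy(points)
    if augment:
        points = points + noise_std * torch.randn_like(points)  # jitter x, y, z and reflectance
    return points
```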

PCL Embedding

Point Cloud Embedding
The point cloud embedding process is as follows. We create an L x W x N grid, where L is the length, W is the width, and N is the maximum number of LiDAR points that a cell can store. If a cell has fewer than N points, we pad it with zeros and record the number of real points; if it has more than N points, we randomly sample N of them to approximate the distribution. The voxel grid is then unrolled into a P x N x D matrix, where P = L x W and D = 4; the D dimension stores the (x, y, z, r) tuple for each LiDAR point. Next, we augment each point from D = 4 to D = 9: the five new features are the x, y, z offsets from the mean of the points in the corresponding cell, and the x and y offsets from the cell center. This is passed through several linear layers to extract learned features, producing a P x N x C matrix, where C is 128 or 256. We take the max along the N dimension and reshape back to L x W x C, which is critical because the point cloud embedding thereby preserves its spatial relationships.
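
The per-cell augmentation and learned point features can be sketched as a small PyTorch module. The layer widths below are illustrative (the text mentions C of 128 or 256), and the grid construction is assumed to happen upstream:

```python
import torch
import torch.nn as nn

class PointCloudEmbedding(nn.Module):
    """Sketch of the pillar-style embedding described above; shapes follow the text,
    layer sizes are illustrative rather than the exact training configuration."""

    def __init__(self, in_dim=9, out_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, out_dim), nn.ReLU(),
        )

    def forward(self, pillars, num_points, cell_centers):
        # pillars:      (P, N, 4) zero-padded (x, y, z, r) per cell, P = L * W
        # num_points:   (P,)      number of real (non-padded) points per cell
        # cell_centers: (P, 2)    (x, y) center of each grid cell
        mean_xyz = pillars[..., :3].sum(dim=1) / num_points.clamp(min=1).unsqueeze(-1)   # (P, 3)
        offset_from_mean = pillars[..., :3] - mean_xyz.unsqueeze(1)                      # (P, N, 3)
        offset_from_center = pillars[..., :2] - cell_centers.unsqueeze(1)                # (P, N, 2)
        augmented = torch.cat([pillars, offset_from_mean, offset_from_center], dim=-1)   # (P, N, 9)

        features = self.mlp(augmented)                       # (P, N, C), non-negative after the final ReLU
        idx = torch.arange(pillars.shape[1], device=pillars.device)
        valid = (idx.unsqueeze(0) < num_points.unsqueeze(1)).unsqueeze(-1)   # (P, N, 1) real-point mask
        # Zero out padded entries; since features are non-negative, the max over N is unaffected.
        features = (features * valid).max(dim=1).values      # (P, C)
        return features  # caller reshapes to (C, L, W), preserving the spatial grid layout
```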

Image Embedding
For our image embedder, we fine-tuned a ResNet-18, dropping its last three layers. The resulting feature map is C x M x M, where M is the spatial size of the encoder output. We then resize this to C x L x W to match the LiDAR grid.
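
A sketch of this encoder, assuming torchvision's ResNet-18 with its last three children (layer4, average pool, and fully connected layer) removed; the grid size used for resizing is a placeholder:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class ImageEmbedding(nn.Module):
    """ResNet-18 truncated after layer3; for a 448x448 input this yields a (256, 28, 28)
    feature map, which is then resized to the LiDAR grid (grid size here is an assumption)."""

    def __init__(self, grid_size=(160, 160)):
        super().__init__()
        backbone = models.resnet18(pretrained=True)   # newer torchvision prefers the weights= argument
        self.encoder = nn.Sequential(*list(backbone.children())[:-3])  # drop layer4, avgpool, fc
        self.grid_size = grid_size

    def forward(self, images):                         # (B, 3, 448, 448)
        features = self.encoder(images)                # (B, 256, M, M)
        return F.interpolate(features, size=self.grid_size,
                             mode='bilinear', align_corners=False)    # (B, 256, L, W)
```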

Sensor Fusion
We now have a C x L x W LiDAR embedding and a C x L x W image embedding, and we concatenate them along the channel dimension to get a 2C x L x W fused embedding.
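
Because both embeddings share the same L x W grid and channel count C, the fusion step reduces to a single channel-wise concatenation; a minimal illustration with dummy tensors (sizes are placeholders):

```python
import torch

B, C, L, W = 2, 256, 160, 160                       # batch and grid sizes are illustrative
lidar_features = torch.randn(B, C, L, W)            # from the point cloud embedding
image_features = torch.randn(B, C, L, W)            # from the image embedding
fused = torch.cat([lidar_features, image_features], dim=1)   # (B, 2C, L, W)
```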



Sensor Fusion


Detection Head
The detection head at the end of our architecture is a reimplementation of YOLO. YOLO's end-to-end training allows us to quickly generate 3D bounding boxes from our input data, along with a confidence score for each box. As in 2D YOLO, we do not predict the raw values directly, because they would not be numerically stable. For example, objects can be 2, 20, or 120 meters away from our vehicle, spanning orders of magnitude, while other outputs only vary between 0 and 1; YOLO's output is not flexible or precise enough to handle such different ranges across variables. To put the outputs on a common scale, the YOLO head predicts the offset from each cell corner, normalized by the cell height and width. Furthermore, instead of predicting the shape of the bounding box directly, we predict the offset from an 'average' shape, known as an anchor box. Together, these changes keep all of YOLO's diverse outputs roughly between 0 and 1. Lastly, predicting the yaw angle directly as a Euler angle is difficult due to wrap-around: the loss function would consider 0.1 degrees and 359.9 degrees to be far apart when they are nearly identical. We therefore predict the real and imaginary components of the yaw angle (its complex decomposition).

For the loss function, we first create an obj mask marking which cells contain a ground truth object, and a noobj mask that is its inverse. The obj mask ensures that we only penalize the model for incorrect object-related predictions when there is a ground truth object in the given cell, while the noobj loss penalizes false positives. We use mean squared error for each of the loss terms (object confidence, no-object confidence, coordinates, shape, and angle) and tune the weight of each term.
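
A sketch of the yaw encoding and the masked loss described above, assuming per-cell prediction and target tensors; the dictionary keys and loss weights are illustrative, not the exact training configuration:

```python
import torch
import torch.nn.functional as F

def encode_yaw(yaw):
    """Represent yaw by its complex decomposition (cos, sin) to avoid the 0/360 degree wrap-around."""
    return torch.stack([torch.cos(yaw), torch.sin(yaw)], dim=-1)

def detection_loss(pred, target, obj_mask,
                   w_obj=1.0, w_noobj=0.5, w_coord=5.0, w_shape=5.0, w_angle=1.0):
    """Masked MSE loss over per-cell outputs. `pred` and `target` are dicts of tensors shaped
    (B, L, W, ...); `obj_mask` is a (B, L, W) boolean mask of cells containing a ground truth."""
    noobj_mask = ~obj_mask
    loss_obj   = F.mse_loss(pred['conf'][obj_mask],   target['conf'][obj_mask])
    loss_noobj = F.mse_loss(pred['conf'][noobj_mask], target['conf'][noobj_mask])  # false positives
    loss_coord = F.mse_loss(pred['xyz'][obj_mask],    target['xyz'][obj_mask])     # cell-normalized offsets
    loss_shape = F.mse_loss(pred['shape'][obj_mask],  target['shape'][obj_mask])   # offsets from anchor shape
    loss_angle = F.mse_loss(pred['yaw'][obj_mask],    target['yaw'][obj_mask])     # (cos, sin) of yaw
    return (w_obj * loss_obj + w_noobj * loss_noobj +
            w_coord * loss_coord + w_shape * loss_shape + w_angle * loss_angle)
```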

Detection Head

Learning Rate
Our initial learning rate of 1e-2 seemed to work well, but when we plotted the loss on a logarithmic scale it became clear that this rate was only effective for the first 10 or 20 epochs. We therefore added a step-based learning rate scheduler that cuts the learning rate by a factor of ten every ten epochs, down to a minimum of 1e-6. After implementing this scheduler, our model trained much more effectively, continuing to reduce its loss even after 30 or more epochs; in total, the loss dropped from an initial value of 1 down to 1e-4. This is particularly important for our application, since localizing 3D bounding boxes with sufficient IoU is difficult to achieve or improve, and our YOLO detection head has a complex loss that must be minimized along multiple dimensions.
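
A minimal sketch of this schedule in PyTorch; the optimizer choice and the placeholder model are assumptions for illustration:

```python
import torch
import torch.nn as nn

model = nn.Linear(8, 1)                                      # placeholder for the fusion network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)    # optimizer choice is an assumption

# Cut the learning rate by a factor of ten every ten epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)

for epoch in range(45):
    # ... run one training epoch over the fused KITTI batches here ...
    if optimizer.param_groups[0]['lr'] > 1e-6:
        scheduler.step()                                     # stop decaying at the 1e-6 floor
```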

LR Loss

Why does your idea make sense intuitively?

This idea makes intuitive sense because it combines the strengths of different types of sensors. LiDARs are useful for determining the precise 3D coordinates of objects in a scene, and cameras are good at classifying objects. Fusing these two sensors can provide robust classification and localization. However, fusing them requires a good representation of each. Camera embeddings are straightforward, but LiDAR embeddings are still an area of active research. By creating a latent embedding space with the same shape for each sensor, we make fusing the data straightforward.

How does it relate to prior work in the area?

We took ideas for sensor fusion on 3D LiDAR-based datasets and applied newer object detection algorithms to evaluate their performance. The prior work listed above explores a variety of sensor fusion and learning methods; we built on their successes and combined them into a novel system for 3D object detection.

Results

We used the KITTI dataset and trained our model to detect cars.
Output Bounding Boxes

Note: All RGB Images in the validation set are normalized with mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225].


[Figures: qualitative detection results on five validation examples, each shown in a 2D image view and a 3D view]
Comparison with other models
Compared with other published networks that use the image and LiDAR data from this dataset (shown below), our network's mAP is relatively low: it only clearly beats PC-CNN from 2018 and performs similarly to MV3D.
Baseline:
[Figure: published baseline results on this dataset]
Our Results:
Validation mAP after 45 epochs = 0.63296 [63.296%]
Negative Results / Model Challenges
Our model struggled with cars at far distances, largely because the sparsity of the point cloud at range causes YOLO's discretized 3D grid to miss them. For self-driving purposes this is less of an issue, because a car that is too far away to detect is not an immediate collision risk. Additionally, our mAP was reduced when the model identified trucks as cars: the ground truth labels trucks separately from cars, but for our implementation this behavior is arguably correct, since a self-driving application should detect all vehicles. The model output shown below illustrates these challenges, with a detection on a truck that does not match any ground truth car box, as well as an undetected car in the far field.
[Figure: failure case example, shown in 2D and 3D views]

Conclusion

Our method has demonstrated a strong ability to detect and localize vehicles from an input of RGB and LiDAR data. As it stands, the model shows great promise for self-driving applications because it identifies and projects accurate bounding boxes for vehicles at a reasonable success rate. With the implementation of some of the ideas listed under Future Directions, we believe the performance of this model could be greatly improved and could eventually approach the state of the art for vehicle detection and localization from fused sensor data.



Future Directions

In the future, we would like to test our algorithm with different image encoders or backbones, such as other sizes of ResNet, MobileNet, or ConvNeXt. We would also like to increase the number of detections our network makes by searching for multiple bounding boxes per grid cell. We could also ask our network to search for multiple object classes, such as pedestrians and street signs, instead of just cars. In addition, we could try a detection head other than YOLO. Finally, we would like to test our algorithm on different datasets.