Preprocessed High Definition (HD) maps cannot capture changes in road structure, roadworks or debris. Therefore, onboard sensors must be able to estimate a real-time freespace map in order to avoid unexpected obstacles and navigate safely through an environment.
While radar has traditionally been used for object tracking because of its long-distance sensing and accurate Doppler velocity measurement capabilities, the addition of our MIMSO® technology makes it a powerful sensor for gathering dense 3D pointclouds, and at a much lower cost than comparable sensors. The increased angular resolution of MIMSO® radars also enables complex perception tasks, like radar-only freespace estimation.
This whitepaper will explore our approach to radar freespace mapping and estimation, and how it compares to other freespace estimation techniques.
Inverse Sensor Modelling (ISM) defines occupancy as either free, occupied, unobserved or partially observed [1], as seen in Figure 1. These labels are created by casting rays out from the sensor. Occupied represents any region with a sensor detection, free represents the area up to the first detection, partially observed is the region between the first and last detections, and unobserved is the region beyond the last detection.
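As a concrete illustration, the sketch below assigns these four base labels along a single ray, given the ranges at which detections occurred. This is a minimal sketch in our own notation, not production code:

```python
import numpy as np

def label_ray(cell_ranges, detection_ranges):
    """Assign ISM occupancy labels to range cells along one ray.

    cell_ranges: 1D array of cell-centre ranges along the ray (metres).
    detection_ranges: ranges of sensor detections on this ray (metres).
    """
    labels = np.full(cell_ranges.shape, "unobserved", dtype=object)
    if len(detection_ranges) == 0:
        return labels  # nothing seen on this ray
    first, last = min(detection_ranges), max(detection_ranges)
    labels[cell_ranges < first] = "free"  # up to the first detection
    labels[(cell_ranges >= first) & (cell_ranges <= last)] = "partially_observed"
    for r in detection_ranges:  # cells holding a detection
        labels[np.argmin(np.abs(cell_ranges - r))] = "occupied"
    return labels  # cells beyond the last detection stay "unobserved"

# Example: detections at 12m and 30m on a ray sampled every metre.
print(label_ray(np.arange(0.5, 50.0, 1.0), [12.0, 30.0]))
```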
We also include statically free and dynamically free labels to cater for the influence of static and dynamic object detections. This designation is made possible by our robust ego velocity estimation algorithm, which can estimate ground velocity and thereby remove detections that are not static, i.e., dynamic targets. This allows us to create static and dynamic freespace maps, which highlight regions that will always be off limits and regions that might become traversable freespace in the near future.
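The static/dynamic split can be sketched as follows: for a stationary target, the measured Doppler is fully explained by the ego velocity, so detections whose Doppler disagrees with the best-fit ego velocity are flagged as dynamic. This is a generic least-squares illustration of the idea, not Provizio's proprietary implementation; the threshold value is an assumption.

```python
import numpy as np

def split_static_dynamic(directions, doppler, threshold=0.5):
    """Estimate ego velocity from Doppler and flag dynamic detections.

    directions: (N, 3) unit vectors from the radar to each detection.
    doppler:    (N,) measured radial velocities (m/s); for a static
                world each satisfies doppler_i = -directions_i @ v_ego.
    threshold:  residual (m/s) above which a detection is deemed
                dynamic (illustrative value).
    """
    # Least-squares ego velocity from all detections.
    v_ego, *_ = np.linalg.lstsq(directions, -doppler, rcond=None)
    residual = np.abs(doppler + directions @ v_ego)
    static_mask = residual < threshold
    return v_ego, static_mask
```

In practice a robust estimator such as RANSAC is preferable, so that the dynamic targets themselves do not bias the ego velocity fit.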
Our 5D Perception® system is built upon an extensive microservice architecture. These microservices work together to produce perception outputs like freespace estimation. Figure 2 outlines how the data is processed in each microservice: the Odometry, Dynamic Target Filter and SLAM accumulation microservices prepare the radar pointcloud for freespace estimation.
Figure 3 illustrates the effectiveness of radar odometry for pointcloud accumulation. In the leftmost illustration, the road boundaries are very difficult to identify from a single pointcloud frame. In the middle figure, accumulation significantly increases pointcloud density, making the structure of the road clearly visible. Dynamic objects usually cause smearing with this type of accumulation because they move from frame to frame. This does not occur in our case because dynamic detections have already been removed by our Dynamic Target Filter.
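Accumulation itself is straightforward once per-frame poses are available from radar odometry: each frame is transformed into a common world frame and concatenated. A minimal 2D sketch, assuming a simple (x, y, yaw) pose format (our notation, not the production SLAM service):

```python
import numpy as np

def accumulate_frames(frames, poses):
    """Accumulate per-frame 2D pointclouds into one dense cloud.

    frames: list of (N_i, 2) arrays of points in each sensor frame.
    poses:  list of (x, y, yaw) world poses from radar odometry,
            one per frame (hypothetical format).
    """
    world_points = []
    for points, (x, y, yaw) in zip(frames, poses):
        c, s = np.cos(yaw), np.sin(yaw)
        R = np.array([[c, -s], [s, c]])  # sensor-to-world rotation
        world_points.append(points @ R.T + np.array([x, y]))
    return np.vstack(world_points)
```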
Once an accurate accumulated pointcloud has been produced by the microservice system, it is processed by the Freespace Estimation microservice, as described below and as illustrated in Figure 4.
The goal of our freespace estimation algorithm is to identify freespace beyond what can be seen by ray tracing. This makes it possible to estimate freespace that a first-hit raycast would mark as occluded, but that is still visible to the radar.
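One way to picture the difference from a pure raycast: once a dense accumulated static cloud exists, a cell can be judged free from the absence of nearby detections rather than from an unobstructed ray back to the sensor. The toy sketch below does exactly that; it is our illustration of the general idea, not Provizio's algorithm, and all parameter values are assumptions.

```python
import numpy as np

def occupancy_grid_freespace(cloud, extent=50.0, cell=0.5, min_hits=1):
    """Toy 'beyond line-of-sight' freespace: grid cells with no
    accumulated static detections are candidate freespace, whether or
    not a direct ray from the sensor reaches them. A real system would
    also gate on sensor coverage so unobserved areas are not marked
    free. (Illustrative only.)

    cloud: (N, 2) accumulated static points in the world frame (metres).
    """
    n = int(2 * extent / cell)
    idx = np.floor((cloud + extent) / cell).astype(int)
    idx = idx[(idx >= 0).all(axis=1) & (idx < n).all(axis=1)]
    hits = np.zeros((n, n), dtype=int)
    np.add.at(hits, (idx[:, 0], idx[:, 1]), 1)
    return hits < min_hits  # True = candidate freespace cell
```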
Freespace estimation with cameras has been extensively researched, and many off-the-shelf road segmentation algorithms with decent performance already exist. Figure 5 below illustrates a comparison between our radar-only freespace algorithm and a camera-based road segmentation approach at a complex intersection. Road segmentation operates on the camera image and is then projected onto a plane in front of the camera to create a Bird's-Eye-View (BEV). From the satellite image, we can clearly see that the freespace should be represented as the road diverging to the left to exit the roundabout, and the road continuing to the right around the roundabout.
The camera freespace estimate follows the road very accurately and also provides road markings, which radar and some LiDAR cannot do. It is important to have an accurate model of the ground plane and camera intrinsics to project the camera estimate to a BEV. Small changes in pitch angle can cause large errors at long ranges. Radar does not have this issue because range is measured directly. Furthermore, our radar-only freespace estimate can ignore dynamic targets and produce an estimate of static freespace. This is challenging to do using camera because velocity is not directly measured.
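The pitch sensitivity is easy to quantify. For a camera at height h above a flat ground plane, a pixel at depression angle θ maps to ground range r = h / tan θ, so small pitch errors are magnified at long range. A quick check with assumed numbers (1.5m camera height, 0.5° pitch error):

```python
import numpy as np

h = 1.5                      # assumed camera height above ground (m)
r = 50.0                     # true ground range of a pixel (m)
pitch_err = np.deg2rad(0.5)  # assumed calibration/suspension pitch error

theta = np.arctan2(h, r)               # depression angle to the ground point
r_est = h / np.tan(theta - pitch_err)  # range the BEV projection reports
print(f"true {r:.0f}m -> projected {r_est:.1f}m "
      f"({r_est - r:+.1f}m error)")    # ~ +20m error at 50m range
```

Radar avoids this error source entirely because range is measured directly rather than inferred from a projection.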
Figure 6 shows an example of rain reducing the accuracy of the freespace estimate. Water on the camera lens distorts and blocks the image, leading to an incorrect freespace estimate. Radar is not affected as severely by these weather conditions, which makes it a good complementary sensor to camera.
Inverse Sensor Modelling (ISM) is a common method for calculating freespace using LiDAR data. With this approach, a 3D pointcloud is converted to a 2D BEV representation and then passed to the algorithm. Freespace is marked between the sensor and the first detection along a ray. However, since a 2D BEV representation is used, detail that is visible in 3D can be occluded or lost in the 2D view. As a result, curved and complex roads can have reduced freespace estimation range with this method.
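A minimal version of this raycasting step over a BEV occupancy grid (a generic sketch of the method, with illustrative parameters):

```python
import numpy as np

def ism_freespace(occupied, sensor_cell, n_rays=360, max_range=200):
    """2D BEV inverse sensor model: walk each ray out from the sensor
    and mark cells free until the first occupied cell blocks the ray.

    occupied:    2D boolean occupancy grid.
    sensor_cell: (row, col) grid position of the sensor.
    """
    free = np.zeros_like(occupied, dtype=bool)
    r0, c0 = sensor_cell
    for ang in np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False):
        dr, dc = np.sin(ang), np.cos(ang)
        for step in range(1, max_range):
            r, c = int(round(r0 + dr * step)), int(round(c0 + dc * step))
            if not (0 <= r < occupied.shape[0] and 0 <= c < occupied.shape[1]):
                break
            if occupied[r, c]:
                break  # first hit: everything beyond is occluded
            free[r, c] = True
    return free
```

The inner break is exactly why a curved road loses range: the first detection along a ray occludes every cell behind it, even when the sensor can physically see those cells in 3D.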
The same roundabout scene from Figure 5 is shown in Figure 7. Our freespace algorithm follows the shape of the road in both directions to a range of 40m. However, the ISM freespace estimate using LiDAR data fails to follow the curvature of the road, which limits the range and accuracy of the estimate. Even though the side of the road is visible in the LiDAR data, the ISM raycast is blocked by points closer to the sensor.
The freespace algorithm showcased in this whitepaper is a digital signal processing (DSP) algorithm and can therefore run on a low-power CPU. High-performance GPUs, which are not always available, are not needed in this case, unlike some of our deep learning approaches, which can complement this algorithm.
Unlike LiDAR pointclouds and camera images, radar pointclouds do not require ground segmentation. Reflections from the road are usually weak enough that they do not appear as detections. Pointclouds from LiDARs without velocity measurement, and camera images, also require dynamic target segmentation. As a result, pointcloud filtering with radar is simplified significantly because neither ground segmentation nor dynamic target segmentation is necessary. The low cost of this radar processing solution makes it a viable candidate for many industries such as automotive, mining and agriculture.
Figure 8 shows the same scenario as the comparison section above. A scene including curved roads and intersections with islands has been selected because many algorithms tend to fail in these situations. The figure shows the satellite view, the freespace view and an overlay of the two. The overlaid view illustrates how the freespace estimate follows the shape of the roundabout and the shape of the diverging road.
Figure 8 contains a single example of freespace estimation in this environment. The video in Figure 9 contains more examples of freespace estimation around a roundabout. As can be seen in the video, freespace is quickly and consistently estimated as the vehicle approaches each junction.
Radar has excellent range capabilities, which allows obstacles to be detected early. Detection range is important for fast moving vehicles like cars and drones because large distances can be covered in a small amount of time. In Figure 10, we see an example of a truck pulled into the hard shoulder of a curved road. In the corresponding freespace map, we can see how the road visibly narrows next to the obstacle at 125m.
This scenario is difficult because the curve of the road obscures the obstacle. Despite this, the radar first detects the obstacle at greater than 300m, and it is clearly visible in the freespace map at 125m, as seen in Figure 11. The speed limit on this road is 100km/h (27.8m/s). Therefore, the vehicle has 10.8 seconds from first detection to react to the obstacle and 4.5 seconds to avoid it using the freespace map.
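These reaction times fall directly out of distance over speed:

```python
v = 100 / 3.6      # 100km/h in m/s (27.8 m/s)
print(300 / v)     # ~10.8s from first radar detection
print(125 / v)     # ~4.5s from clear visibility in the freespace map
```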
Indoor environments can be challenging to map because they are often dimly lit, cannot avail of GPS and have targets at both close and long range. These environments are particularly common in industrial use cases such as mining and warehouses. Since our approach uses radar-only to estimate the pose of the vehicle, it is not reliant on GPS or other external odometry methods.
Figure 12 shows an indoor car park environment which has harsh lighting, a variety of close and long range targets, and clutter above the radar. The clutter from the ceiling is removed correctly so that it does not block the freespace estimate. Using our technique, spaces between vehicles on the left and a junction to turn right are both detected at greater than 20m.
Figure 12 is a snapshot from a longer video (shown in Figure 13), which illustrates freespace mapping performance when driving through an indoor car park. Freespace is estimated past a range of 30m, including free parking spaces, junctions, large open areas and dead-ends.
Our radar freespace mapping algorithm can be used in a variety of environments, including short range indoor and long range outdoor applications. It has also been shown that radar performs well in challenging lighting and weather conditions, which makes it a good complementary sensor to camera. The low cost of the radar and processing stack makes radar-only freespace a viable option for industries such as automotive, mining and agriculture.
Our radar freespace mapping algorithm can function using just one radar or by combining multiple radar pointclouds. Odometry is calculated from the radar pointcloud and used to accumulate the pointcloud to provide greater density. This is possible because of the accurate Doppler velocity estimate that radar provides and our robust ego velocity estimation algorithm.
Estimate accuracy and range are improved by looking beyond line-of-sight. This allows us to predict where the road will become free in the future, well before the vehicle arrives at that location. Many freespace algorithms fail around curved and complex road structures, but our estimate does not degrade in these conditions. Furthermore, dynamic objects are easily removed, which allows us to provide separate static and dynamic freespace estimates.
[1] "Probably Unknown: Deep Inverse Sensor Modelling In Radar," arXiv:1810.08151. https://arxiv.org/pdf/1810.08151.pdf
Provizio, Future Mobility Campus Ireland, Shannon Free Zone, V14WV82, Ireland
Newlab Michigan Central, Detroit, MI 48216, United States