Kevin McAndrew

Software Engineer at Provizio
Apr 11, 2024
11 minute read

Accurately constructing free space maps using radar only

Preprocessed High Definition (HD) maps cannot capture changes in road structure, roadworks or debris. Therefore, onboard sensors must be able to estimate a real-time freespace map in order to avoid unexpected obstacles and navigate safely through an environment.

While radar has traditionally been used for object tracking because of its long-distance sensing and accurate doppler velocity measurement capabilities, the addition of our MIMSO® technology makes it a powerful sensor for gathering dense 3D pointclouds, and at a much lower cost than comparable sensors. The increased angular resolution capability of MIMSO® radars also enables complex perception tasks, like radar-only freespace estimation.

This whitepaper explores our approach to radar-only freespace estimation and compares it with other freespace estimation techniques.

Definition of Freespace

Inverse Sensor Modelling (ISM) defines occupancy as either free, occupied, unobserved or partially observed [1], as seen in Figure 1. These labels are created by casting rays out from the sensor. Occupied represents any region with a sensor detection, free represents the area up to the first detection, partially observed is the region between the first and last detections, and unobserved is the region beyond the last detection.
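The labelling above can be sketched as a small ray-labelling routine. This is an illustrative sketch only (the function and label names are ours, not from [1]), assuming detections arrive as ranges in metres along a single ray:

```python
def label_ray(cell_ranges, detection_ranges, tol=0.5):
    """Label each cell along a ray as FREE, OCCUPIED, PARTIAL or UNOBSERVED.

    cell_ranges      : ranges of the cells along the ray (metres)
    detection_ranges : ranges of the sensor detections on this ray (metres)
    tol              : how close a cell must be to a detection to count as occupied
    """
    if not detection_ranges:
        # No detections at all: nothing on this ray was observed.
        return ["UNOBSERVED"] * len(cell_ranges)
    first, last = min(detection_ranges), max(detection_ranges)
    labels = []
    for r in cell_ranges:
        if any(abs(r - d) <= tol for d in detection_ranges):
            labels.append("OCCUPIED")      # a detection falls in this cell
        elif r < first:
            labels.append("FREE")          # before the first detection
        elif r <= last:
            labels.append("PARTIAL")       # between first and last detections
        else:
            labels.append("UNOBSERVED")    # beyond the last detection
    return labels
```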

We also include statically free and dynamically free labels to cater for the influence of static and dynamic object detections. This designation is made possible by our robust ego velocity estimation algorithm, which can estimate ground velocity and thereby remove detections that are not static, i.e., dynamic targets. This allows us to create separate static and dynamic freespace maps, which highlight regions that will always be off-limits and regions that might become traversable freespace in the near future.

Figure 1: An example scene with the ego vehicle and vehicle at range along a straight road with a junction. (Left) ISM’s ideal interpretation of freespace. (Right) Our ideal interpretation of freespace.

Technology Overview

Our 5D Perception® system is built upon an extensive microservice architecture. These microservices work together to produce perception outputs like freespace estimation. Figure 2 outlines how the data is processed in each microservice. The Odometry, Dynamic Target Filter and SLAM accumulation microservices prepare the radar pointcloud for freespace estimation as follows:

  • Odometry calculates the position, velocity, orientation angle and rate of orientation angle change from the radar pointcloud only.
  • The Dynamic Target Filter uses the ego velocity estimate from Odometry to calculate the ground velocity of each detection. Dynamic detections (those with a ground velocity greater than 0 m/s) are removed so that only the static scene remains.
  • SLAM accumulation then uses the Odometry output to move old saved pointclouds to their current position with respect to the radar. This increases pointcloud density by overlapping old pointclouds with the newly captured pointcloud data.
Figure 2: High-level radar microservices architecture diagram
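The Dynamic Target Filter step can be sketched as below. This is a minimal illustration, not our production microservice: it assumes detections arrive as Cartesian points with measured Doppler (radial) velocities, and it uses a small tolerance rather than exactly 0 m/s to allow for measurement noise.

```python
import numpy as np

def filter_dynamic(points, doppler, ego_vel, thresh=0.5):
    """Keep only static detections.

    points  : (N, 3) detection positions in the radar frame (metres)
    doppler : (N,) measured radial velocities (m/s, positive = receding)
    ego_vel : (3,) estimated ego velocity in the radar frame (m/s)
    thresh  : illustrative noise tolerance (m/s)
    """
    # Unit direction from the radar to each detection.
    dirs = points / np.linalg.norm(points, axis=1, keepdims=True)
    # For a static target the measured Doppler is minus the ego velocity
    # projected onto the ray, so this residual is the target's ground
    # (over-the-road) radial speed.
    ground_radial = doppler + dirs @ ego_vel
    static = np.abs(ground_radial) <= thresh
    return points[static]
```

For example, with the ego vehicle moving forward at 10 m/s, a static target straight ahead reads a Doppler of -10 m/s, giving a ground radial speed of zero, while a moving vehicle leaves a non-zero residual and is removed.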

Figure 3 illustrates the effectiveness of radar odometry for pointcloud accumulation. In the leftmost illustration, the road boundaries are very difficult to identify from a single pointcloud frame. The middle illustration shows how accumulation significantly increases pointcloud density, making the structure of the road clearly visible. Dynamic objects usually cause smearing with this type of accumulation because they move from frame to frame; this does not occur in our case because dynamic detections have already been removed by our Dynamic Target Filter.

Figure 3: Pointcloud from a single frame (left), accumulated pointcloud using radar odometry (middle) and occupancy gridmap from accumulated pointcloud (right)
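SLAM accumulation as described above amounts to re-expressing previously captured static pointclouds in the current radar frame using the odometry poses. A minimal sketch, assuming each frame's pose is available as a 4x4 homogeneous transform (the function and variable names are illustrative):

```python
import numpy as np

def accumulate(frames, poses):
    """Accumulate past static pointclouds into the current radar frame.

    frames : list of (N_i, 3) pointclouds, oldest first, each in its own frame
    poses  : list of (4, 4) odometry poses T_world_frame, one per frame;
             the last entry is the current frame's pose.
    """
    T_cur_world = np.linalg.inv(poses[-1])
    merged = []
    for pts, T_world_f in zip(frames, poses):
        T = T_cur_world @ T_world_f            # old frame -> current frame
        merged.append(pts @ T[:3, :3].T + T[:3, 3])
    return np.vstack(merged)
```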

Once an accurate accumulated pointcloud has been produced by the microservice system, it is processed by the Freespace Estimation microservice, as described below and as illustrated in Figure 4.

  • Height filtering removes elevated targets (bridges, overpasses or ceilings) that the vehicle can safely pass underneath without obstruction.
  • An occupancy gridmap is created to simplify computation and to create a structured pointcloud with known neighbours (shown in the rightmost illustration of Figure 3 above).
  • Clustering is performed to detect static features such as curbs, fences, walls, buildings and other static obstacles.
  • Finally, a freespace polygon is stitched together from the previous processing stages.
Figure 4: Illustration of processing steps required for freespace estimation
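The first two stages listed above (height filtering and occupancy gridmap creation) can be sketched as below. The parameter values are illustrative assumptions, not our production settings:

```python
import numpy as np

def occupancy_grid(points, cell=0.2, max_height=2.5, extent=50.0):
    """Height-filter a pointcloud, then rasterise it into a 2D occupancy grid.

    points     : (N, 3) static accumulated pointcloud in the radar frame
    cell       : grid resolution (metres), assumed value
    max_height : detections above this (bridges, ceilings) are discarded
    extent     : half-width of the square grid (metres), assumed value
    """
    # Height filtering: drop elevated targets the vehicle can pass under.
    pts = points[points[:, 2] <= max_height]
    # Rasterise x/y into a boolean occupancy grid with known neighbours.
    n = int(2 * extent / cell)
    grid = np.zeros((n, n), dtype=bool)
    ij = np.floor((pts[:, :2] + extent) / cell).astype(int)
    inside = (ij >= 0).all(axis=1) & (ij < n).all(axis=1)
    grid[ij[inside, 0], ij[inside, 1]] = True
    return grid
```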

The goal of our freespace estimation algorithm is to identify freespace beyond what can be seen by ray tracing. This makes it possible to estimate freespace that might be out of direct line-of-sight but that is still visible to the radar.

Comparing Freespace Performance

Radar compared to camera

Camera-based freespace estimation has been extensively researched, and many off-the-shelf road segmentation algorithms with decent performance already exist. Figure 5 below compares our radar-only freespace algorithm with a camera-based road segmentation approach at a complex intersection. Road segmentation operates on the camera image, which is then projected onto a plane in front of the camera to create a Bird's-Eye-View (BEV). From the satellite image, we can clearly see that the freespace should comprise the road diverging to the left to exit the roundabout and the road continuing to the right around the roundabout.

Figure 5: Satellite view of roundabout scene with position and direction of vehicle (left). Comparison between the camera estimate BEV (middle) and our freespace estimate (right) at a diverging road

The camera freespace estimate follows the road very accurately and also provides road markings, which radar and some LiDAR cannot do. It is important to have an accurate model of the ground plane and camera intrinsics to project the camera estimate to a BEV. Small changes in pitch angle can cause large errors at long ranges. Radar does not have this issue because range is measured directly. Furthermore, our radar-only freespace estimate can ignore dynamic targets and produce an estimate of static freespace. This is challenging to do using camera because velocity is not directly measured.
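The pitch sensitivity mentioned above is easy to quantify for a flat-ground BEV projection: a camera at height h maps a pixel ray with depression angle theta to a ground range of h / tan(theta), so a small pitch error shifts distant points by many metres. A quick illustration (the camera height and angles here are assumed values, not measurements from our system):

```python
import math

def ground_range(h, depression_deg):
    """Range at which a pixel ray hits flat ground, for camera height h."""
    return h / math.tan(math.radians(depression_deg))

h = 1.5                                   # assumed camera height (metres)
true_r = ground_range(h, 1.72)            # roughly 50 m ahead
err_r = ground_range(h, 1.72 - 0.2)       # same pixel with a 0.2 deg pitch error
# A 0.2 degree pitch error moves the projected point by several metres at ~50 m,
# whereas radar measures range directly and has no such projection step.
```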

Figure 6: Camera on a wet day with rain on the lens (left) and the freespace estimate from this image (right)

Figure 6 shows an example of rain reducing the accuracy of the freespace estimate. Water on the camera lens distorts and blocks the image, leading to an incorrect freespace estimate. Radar is not affected as severely by these weather conditions, which makes it a good complementary sensor to camera.

Radar compared to LiDAR

Inverse Sensor Modelling (ISM) is a common method for calculating freespace using LiDAR data. With this approach, a 3D pointcloud is converted to a 2D BEV representation and then passed to the algorithm. Freespace is marked between the sensor and the first detection along a ray. However, since a 2D BEV representation is used, there might be detail that is visible in 3D, but blocked/lost in the 2D view. As a result, curved and complex roads can have reduced freespace estimation range with this method.
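The 2D BEV ray casting described above can be sketched as below; it shows why the first detection along a ray terminates the estimate, hiding road that may still be visible in 3D. The grid convention and function names are illustrative:

```python
import math
import numpy as np

def ism_freespace(occ, sensor_ij, angles, max_r=250):
    """Classic 2D ISM: along each ray, mark cells free up to the first
    occupied cell, then stop (everything behind it stays unknown).

    occ       : 2D boolean occupancy grid
    sensor_ij : (row, col) cell of the sensor
    angles    : ray directions in radians
    max_r     : maximum ray length in cells
    """
    free = np.zeros_like(occ, dtype=bool)
    for a in angles:
        di, dj = math.cos(a), math.sin(a)
        for r in range(1, max_r):
            i = int(round(sensor_ij[0] + r * di))
            j = int(round(sensor_ij[1] + r * dj))
            if not (0 <= i < occ.shape[0] and 0 <= j < occ.shape[1]):
                break
            if occ[i, j]:
                break      # first hit: the ray terminates here
            free[i, j] = True
    return free
```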

The same roundabout scene from Figure 5 is shown in Figure 7. Our freespace algorithm follows the shape of the road in both directions to a range of 40m. However, the ISM freespace estimate using LiDAR data fails to follow the curvature of the road, which limits the range and accuracy of the estimate. Even though the side of the road is visible in the LiDAR data, the ISM estimate is blocked by points closer to the sensor.

Figure 7: Satellite view of roundabout scene with position and direction of vehicle (left). Comparison between LiDAR estimate (middle) and our estimate (right) at a diverging road

Use Cases

The freespace algorithm showcased in this whitepaper is a digital signal processing (DSP) algorithm and can therefore run on a low-power CPU. High-performance GPUs are not required, unlike some of our deep learning approaches, which can complement this method but are not always available.

Unlike LiDAR pointclouds and camera images, radar pointclouds do not require ground segmentation. Reflections from the road are usually weak enough that they do not appear as a detection. LiDARs without velocity estimation and camera images also require dynamic target segmentation. As a result, pointcloud filtering with radar is simplified significantly because ground segmentation and dynamic target segmentation are not necessary. The low cost of this radar processing solution makes it a viable candidate for many industries such as automotive, mining and agriculture.

Curved roads and complex junctions

Figure 8 shows the same scenario as the comparison section above. A scene including curved roads and intersections with islands has been selected because many algorithms tend to fail in these situations. Satellite view, freespace view and the two views overlapping are shown in the figure. The overlapped view illustrates how the freespace estimate follows the shape of the roundabout and the shape of the diverging road.

Figure 8: Satellite view (left), freespace view (middle) and overlapping view (right)

Figure 8 contains a single example of freespace estimation in this environment. The video in Figure 9 contains more examples of freespace estimation around a roundabout. As can be seen in the video, freespace is quickly and consistently estimated as the vehicle approaches each junction.

Figure 9: Freespace estimation results (green), radar pointcloud (black) and camera view of challenging road layout scenario

Long range scenarios

Radar has excellent range capabilities, which allows obstacles to be detected early. Detection range is important for fast moving vehicles like cars and drones because large distances can be covered in a small amount of time. In Figure 10, we see an example of a truck pulled into the hard shoulder of a curved road. In the corresponding freespace map, we can see how the road visibly narrows next to the obstacle at 125m.

Figure 10: Truck circled in red on freespace map (left) and camera (right).

This scenario is difficult because the curve of the road obscures the obstacle. Despite this, the radar first detects the obstacle at a range greater than 300m, and it is clearly visible in the freespace map at 125m, as seen in Figure 11. The speed limit on this road is 100km/h (27.8m/s), so the vehicle has 10.8 seconds from first detection to process the obstacle and 4.5 seconds to avoid it using the freespace map.

Figure 11: Vehicle in hard shoulder at range on a curved road
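The reaction-time figures above follow directly from time = range / speed:

```python
# Time budget from the long-range detection example.
speed = 100 / 3.6        # 100 km/h expressed in m/s (~27.8)
t_first = 300 / speed    # time budget from first radar detection at 300 m
t_map = 125 / speed      # time budget once visible in the freespace map at 125 m
```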

Indoor scenarios

Indoor environments can be challenging to map because they are often dimly lit, lack GPS coverage and contain targets at both close and long range. These environments are particularly common in industrial use cases such as mining and warehouses. Since our approach uses radar only to estimate the pose of the vehicle, it is not reliant on GPS or other external odometry methods.

Figure 12 shows an indoor car park environment which has harsh lighting, a variety of close and long range targets, and clutter above the radar. The clutter from the ceiling is removed correctly so that it does not block the freespace estimate. Using our technique, spaces between vehicles on the left and a junction to turn right are both detected at greater than 20m.

Figure 12: Free parking space detected beyond vehicles (orange) and right turn junction detected (blue)

Figure 12 is a snapshot from a longer video (shown in Figure 13), which illustrates freespace mapping performance when driving through an indoor car park. Freespace is estimated past a range of 30m, including free parking spaces, junctions, large open areas and dead-ends.

Figure 13: Indoor car park scene


Conclusion

Our radar-only freespace estimation algorithm can be used in a variety of environments, including short range indoor and long range outdoor applications. It has also been shown that radar performs well in challenging lighting and weather conditions, which makes it a good complementary sensor to camera. The low cost of the radar and processing stack makes radar-only freespace a viable option for industries such as automotive, mining and agriculture.

Our radar-only freespace estimation algorithm can function using just one radar or by combining multiple radar pointclouds together. Odometry is calculated using the radar pointcloud and used to accumulate the pointcloud to provide greater density. This is possible because of the accurate doppler velocity estimate that radar provides and our robust ego velocity estimation algorithm.

Estimate accuracy and range is improved by looking beyond line-of-sight. This allows us to predict where the road will become free in the future, well before the vehicle arrives at that location. Many freespace algorithms fail around curved and complex road structures but our estimate does not degrade in these conditions. Furthermore, dynamic objects are easily removed, which allows us to provide a separate static freespace estimate and dynamic freespace estimate.


[1] R. Weston, S. Cen, P. Newman and I. Posner, "Probably Unknown: Deep Inverse Sensor Modelling in Radar"



Testbed, Deliveries and 5D Perception® Demo Drives

Provizio, Future Mobility Campus Ireland
Shannon Free Zone

Company Information

Provizio Ltd
VAT Number: IE3638928AH
Company Number: 654660 (Registered in Ireland)

Sales & Support

Newlab Michigan Central,
MI 48216,
United States

Copyright © 2024 Provizio, Ltd. All rights reserved.