Accurately constructing free space maps using radar only

Vehicles, cyclists and pedestrians are obstacles that are to be expected on the road. While it is possible to train models to precisely classify these obstacles using on-board sensors, unexpected obstacles like roadworks, fallen trees or traffic accidents are much more difficult to train for due to the rarity of suitable training data. In addition, High Definition (HD) maps lack the real-time data to represent such unexpected obstacles, so a real-time freespace map must be constructed in order to detect them.

Not only is it possible to create freespace maps using radar alone, but there are also many cases where radar is superior. In this blog, we will explore how we create freespace maps and how this approach performs across a number of common but challenging environments.

Our Architecture

Before freespace can be estimated, radar pointclouds must be processed and filtered. Microservices pre-process the radar data for downstream tasks in our 5D Perception® system. Each microservice involved in freespace estimation can be seen in Figure 1:

Figure 1: High-level radar microservices architecture diagram

The freespace estimation algorithm uses the pre-processed pointcloud to create an accurate freespace map. Figure 2 outlines the individual steps of the algorithm. Its goal is to identify freespace beyond what can be seen by simple ray tracing: a generalised approach allows us to incorporate multiple detections along each beam, making it possible to estimate freespace that is out of direct line-of-sight but still visible to the radar.

Figure 2: Illustration of processing steps required for freespace estimation
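To make this generalised approach concrete, below is a minimal sketch of a per-beam freespace update on a polar grid. The grid resolution, thresholds and the `freespace_polar` helper are illustrative assumptions, not our production implementation:

```python
import numpy as np

def freespace_polar(detections, n_beams=360, max_range_m=100.0, cell_m=0.5):
    """Toy per-beam freespace map from a radar pointcloud.

    detections: (N, 2) array of (azimuth_rad, range_m) points.
    Returns an (n_beams, n_cells) int8 grid: 1 = free, 0 = unknown, -1 = occupied.
    """
    n_cells = int(max_range_m / cell_m)
    grid = np.zeros((n_beams, n_cells), dtype=np.int8)

    beam = ((detections[:, 0] % (2 * np.pi)) / (2 * np.pi) * n_beams).astype(int)
    cell = np.clip((detections[:, 1] / cell_m).astype(int), 0, n_cells - 1)

    for b in range(n_beams):
        hits = np.sort(cell[beam == b])
        if hits.size == 0:
            continue
        # Classic ray tracing: mark cells free up to the first return.
        grid[b, :hits[0]] = 1
        grid[b, hits[0]] = -1
        # Generalised step: radar sees past many obstacles, so cells between
        # consecutive returns along the beam are also evidence of freespace.
        for r0, r1 in zip(hits[:-1], hits[1:]):
            grid[b, r0 + 1:r1] = 1
            grid[b, r1] = -1
    return grid
```

In a single pass this recovers freespace beyond the first return on each beam, which is exactly the behaviour that lets the estimate extend past line-of-sight.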

Performance in Challenging Environments

Curved Roads and Complex Junctions

Freespace estimation algorithms like Inverse Sensor Modelling (ISM) struggle to estimate freespace around corners. This is largely due to the data losses incurred when 3D pointcloud data is converted into a 2D Bird’s Eye View (BEV) representation, which can obscure detail at longer ranges. Figure 3 shows a road scenario where ISM has reduced range. Our freespace estimate follows the curvature of the road beyond line-of-sight to 50m+.

Figure 3: Scene with roads diverging to the left and the right. The satellite view with position and direction of the vehicle is shown on the left. An ISM freespace estimate (yellow) using LiDAR data is shown in the centre. Our freespace estimate (green) is on the right.

Obstacles at Long Range

Detection range is important for fast-moving vehicles like cars and drones because large distances can be covered in a small amount of time. Figure 4 shows a vehicle that has stopped in the hard shoulder of a motorway. Using our sensors, this vehicle is first detected at 300m and an accurate freespace map is created at 125m. The motorway speed limit is 100km/h, or 27.8m/s. The ego vehicle therefore has 10.8 seconds from first detection to process the obstacle, and 4.5 seconds to avoid it using the freespace map.

Figure 4: Truck circled in red on freespace map (left) and camera (right).
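The time budgets above are simple distance-over-speed calculations:

```python
speed = 100 / 3.6   # 100 km/h in m/s (approx. 27.8)
print(300 / speed)  # approx. 10.8 s from first detection at 300 m
print(125 / speed)  # approx. 4.5 s from the freespace map at 125 m
```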

Figure 5 shows the camera image and freespace estimate overlaid. The obstacle is shown at closer range to make the freespace easier to see. Freespace narrows as expected beside the obstacle, and the truck itself is precisely removed from the freespace area.

Figure 5: Freespace overlaid over original camera, showing freespace area narrowing around truck

Indoor Car Park

Indoor environments can be challenging to map because they are often dimly lit, lack GPS coverage and contain targets at both close and long range. Figure 6 shows an indoor car park environment which has harsh lighting and clutter above the radar from the car park ceiling. Detections from the ceiling are filtered out correctly and do not block the freespace estimate. Using our technique, spaces between vehicles on the left and a junction to turn right are both detected at more than 20m.

Figure 6: Free parking space detected beyond vehicles (orange) and right turn junction detected (blue)
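A simple way to picture the ceiling filtering is a height gate applied to the pointcloud before freespace estimation. The function name and 1.5m margin below are illustrative assumptions; the real filter uses more context than a single threshold:

```python
import numpy as np

def drop_overhead_clutter(points_xyz, ceiling_margin_m=1.5):
    """Remove returns well above the sensor, e.g. from a car park ceiling.

    points_xyz: (N, 3) array of (x, y, z) in the sensor frame, z up, metres.
    """
    return points_xyz[points_xyz[:, 2] < ceiling_margin_m]

# Example: a detection 2.3 m above the radar is dropped before mapping.
pts = np.array([[5.0, 1.0, 0.2], [8.0, -2.0, 2.3]])
print(drop_overhead_clutter(pts))  # keeps only the first point
```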

Conclusion

It is clear that our new class of MIMSO® imaging radars is more than adequate for freespace estimation. Estimate accuracy and range are improved by looking beyond line-of-sight. Many freespace algorithms fail around curved and complex road structures, but our estimate does not degrade in these conditions.

We have shown how our radar-only freespace estimation algorithm performs indoors and outdoors at a variety of ranges. Furthermore, unlike LiDAR and camera systems, its performance is unaffected by lighting and weather conditions. The low cost of the radar and processing stack makes radar-only freespace a viable option for industries such as automotive, mining and agriculture.

To learn more, read our white paper.

CES 2024: How Provizio Optimised 5D Perception® in a Single Week

Imagine a vehicle that can constantly learn and improve over time, just like we drivers do. That's the future of perception technology and ubiquitous autonomy, and Provizio is at the forefront of this exciting revolution.

Traditionally, radar systems were limited by the hardware they were built with, but with the rise of software-defined radars, we now have the ability to adapt and improve radar performance through fine-tuning and over-the-air (OTA) updates.

At CES 2024, we leveraged these technologies to continually optimise the performance of our 5D Perception® system using the unique driving environment that is Las Vegas. Over the short couple of days we spent demonstrating our latest technology developments to our current and future partners, our perception system was continually learning, adapting and improving. How? Let’s take a look.

A closer look at our technology

5D Perception®

5D Perception® refers to the unique capability of Provizio radar sensors to “see” and perceive the world in five dimensions:

5D Perception® Illustrated

When speaking of optimising our 5D Perception® system at CES, our main focus was to improve performance with respect to object detection, classification, tracking and freespace mapping - the 5th dimension of our perception system. To do this, we needed to re-train our Convolutional Neural Net (CNN) algorithms based on data recorded from Las Vegas roads. The catch? We had to do it LIVE!

Fine-Tuning our Neural Networks

Think of our 5D Perception® system like a student. It needs to be able to learn and adapt to different situations (generalisation), but it also needs to be good at specific tasks (fine-tuning). For example, our radars are trained on a massive dataset of roads from Europe. But the US is a different environment, with wider roads, different markings, and even unfamiliar vehicles like large semi-trucks or pickup trucks.

To address this, we continued to fine-tune our system using data captured directly in Las Vegas while demonstrating to our customers. Using our custom automated data pipeline, data collected from the vehicle was uploaded directly to the cloud. From here, the pipeline extracted and processed the data, making it available to use for fine-tuning our neural network models.

With this data, our engineers created a new dataset by combining data captured in Las Vegas with data from our general balanced dataset. Our engineers then used this data to fine-tune our existing generalised model overnight, seamlessly delivering improved performance for the next day of captures and demos.
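As an illustration of what such an overnight fine-tune can look like, here is a minimal PyTorch-style sketch. The tiny model, dataset sizes and schedule are placeholders, not our actual architecture or pipeline:

```python
import torch
from torch import nn
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

# Placeholders standing in for pre-processed radar frames and labels.
general_subset = TensorDataset(torch.randn(256, 1, 64, 64), torch.randint(0, 3, (256,)))
las_vegas_set = TensorDataset(torch.randn(128, 1, 64, 64), torch.randint(0, 3, (128,)))

# Mix new local captures with a slice of the general balanced dataset so the
# model adapts to Las Vegas without forgetting what it already knows.
loader = DataLoader(ConcatDataset([general_subset, las_vegas_set]),
                    batch_size=32, shuffle=True)

model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 3))
criterion = nn.CrossEntropyLoss()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)  # small LR for fine-tuning

model.train()
for _ in range(5):  # a short, overnight-style schedule
    for frames, labels in loader:
        optimiser.zero_grad()
        criterion(model(frames), labels).backward()
        optimiser.step()
```

In practice the starting weights come from the existing generalised model rather than random initialisation; the key ideas are the blended dataset and the conservative learning rate.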

An illustration of radar data flow from one layer to the next in the Neural Network

The result: fine-tuning the neural networks produced 35% fewer false-negative vehicle detections and improved vehicle detection range by over 20m on Las Vegas streets. We also noted a 45% improvement in pedestrian tracking stability and a 25% improvement in tracking stability for cars and large vehicles.

Optimising Freespace

The Freespace microservice acts as a virtual guide, identifying safe driving paths and highlighting potential obstacles on the road. Since the service relies primarily on data from point-cloud clustering, road boundary estimation and odometry, the optimisation process at CES consisted of raising the clustering threshold as far as possible while still maintaining reliable road boundary estimation.
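To see why this is a balancing act, consider a DBSCAN-style clusterer, where a single `eps` threshold controls how aggressively returns are merged. This is a self-contained toy, not our Freespace microservice:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Toy BEV pointcloud: two obstacle clusters plus scattered noise returns.
rng = np.random.default_rng(0)
points = np.vstack([rng.normal([10, 2], 0.3, (30, 2)),
                    rng.normal([25, -3], 0.3, (30, 2)),
                    rng.uniform([-5, -10], [40, 10], (20, 2))])

# Raising eps merges sparse returns into solid obstacles (fewer false gaps in
# the freespace) but risks swallowing road boundary points; lowering it keeps
# boundaries crisp but fragments objects.
for eps in (0.5, 1.0, 2.0):
    labels = DBSCAN(eps=eps, min_samples=5).fit_predict(points)
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
    print(f"eps={eps}: {n_clusters} clusters, {(labels == -1).sum()} noise points")
```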

As a result of these optimisations, the freespace detection distance for Las Vegas boulevards and large interstate roads went from an average maximum range of 53m to an optimised maximum range of 86m. This is a 62% increase in average maximum range for the detection of freespace - day and night, in all weather conditions.

Freespace is highlighted to indicate boundaries while avoiding other cars on the road.

Conclusion

By leveraging the power of our software-defined radars and OTA updates, we are able to perform near real-time optimisation of our perception system to provide higher performance in a variety of areas - from classification & tracking, to radar-based odometry and freespace detection.

In just a few days, our system demonstrated:

  - 35% fewer false-negative vehicle detections
  - Vehicle detection range improved by over 20m
  - A 45% improvement in pedestrian tracking stability and a 25% improvement for cars and large vehicles
  - A 62% increase in average maximum freespace detection range

This is just a glimpse of the future of radar-based perception technology, where software systems working on-the-edge are constantly learning and adapting. We believe that making the transition to scalable L3+ a reality is a challenge that can be solved by continuously learning and improving our 5D Perception® solution.

To learn more, read our white paper.

How We Leverage AI in Our Products

The integration of Artificial Intelligence (AI) in vehicle perception systems has revolutionised the landscape of autonomous vehicles (AVs). At Provizio, AI is a core component of our innovative 5D Perception technology, enabling on-the-edge object detection, classification, and tracking, as well as enhancing the performance of our sensors. At Autosens this week, our Senior Machine Learning Engineer, Dane Mitrev, took to the stage to present the latest progress on how AI is being leveraged within Provizio. Let’s dive into some key takeaways below:

The Provizio Approach

At Provizio, we are leveraging AI to deliver on three core goals:

  1. To deliver LiDAR-level resolution performance, while leveraging the robust sensing and cost advantages of radar technology.
  2. To deliver powerful on-the-edge perception capabilities while minimising resource demands.
  3. To deliver a solution that can improve over time, leveraging crowdsourced datasets to improve perception capabilities and deliver enhanced Software Over The Air (SOTA) features to our customers.

Our Implementation

With 5D Perception, AI is utilised in a unique Tri-Level design to deliver compound enhancement from the signal level, through to the point cloud, and finally at the fusion level. In this way, we make the most of our hardware systems by using intelligent software to squeeze out the best possible performance from each layer of the stack. Let’s start with the first layer - point cloud denoising.

Neural Networks for Point Cloud Denoising and Enhancement

Stage 1: Training Dataset

The first phase in creating an effective AI model starts with good quality training data. At Provizio, we generate such data using a 3-stage process:

  1. Sensor Synchronisation: Inputs from radar, LiDAR & camera are used to create an accurate source of ground-truth data. Inputs from each respective sensor need to be synchronised, such that objects and their locations are consistent.
  2. Point Cloud Filtering Based on LiDAR Data and Camera Semantic Segmentation: During this process, LiDAR and camera data is compared. Using semantic segmentation, each individual pixel from a video frame is classified into categories like “road”, “pedestrian”, “vehicle” and so on. This is cross-referenced with 3D LiDAR point cloud data to ensure any “noise” (erroneous points in the point cloud) is removed; a minimal sketch of this step follows the list.
  3. Manual Corrections: Human visual inspection is used to ensure the resultant training data is as accurate as possible.
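Below is a rough sketch of the projection-and-lookup step from stage 2 above: project each 3D point into the camera, read off its semantic class, and drop points that land on classes treated as noise. The class IDs, camera model and function name are illustrative assumptions:

```python
import numpy as np

VALID_CLASSES = {0, 1, 2}  # e.g. road, vehicle, pedestrian (illustrative IDs)

def filter_by_segmentation(points_xyz, seg_mask, K):
    """Keep points whose camera projection lands on a meaningful class.

    points_xyz: (N, 3) points in the camera frame, z pointing forward.
    seg_mask:   (H, W) per-pixel semantic class IDs.
    K:          (3, 3) camera intrinsic matrix.
    """
    pts = points_xyz[points_xyz[:, 2] > 0]       # keep points in front of camera
    uvw = (K @ pts.T).T
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)  # pixel coordinates

    h, w = seg_mask.shape
    in_img = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    pts, uv = pts[in_img], uv[in_img]

    keep = np.isin(seg_mask[uv[:, 1], uv[:, 0]], list(VALID_CLASSES))
    return pts[keep]
```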

Stage 2: Lightweight Models for Real-Time Denoising

2D Convolutional Neural Networks (CNN): With this method, 3D radar point cloud data is transformed into 2D projections, which simplifies the training process. A unique CNN architecture is then trained on a dataset where noise in the radar data has been identified and labelled. During this process, the CNN learns to identify patterns in the data that represent noise and as a result, once training is complete, the CNN can be used to identify and filter noise from previously unseen data.
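A toy version of such a denoising CNN, operating on a 2D projection and emitting a per-cell noise score, might look as follows. The architecture is a placeholder, not the unique CNN described above:

```python
import torch
from torch import nn

class DenoiseCNN(nn.Module):
    """Toy 2D CNN that scores each projected cell as noise vs. real return."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),  # one noise logit per cell
        )

    def forward(self, bev):  # bev: (B, 1, H, W) 2D projection of the pointcloud
        return self.net(bev)

bev = torch.randn(1, 1, 128, 128)              # stand-in for a projected radar frame
keep_mask = DenoiseCNN()(bev).sigmoid() < 0.5  # keep cells unlikely to be noise
```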

Stage 3: Point Cloud Super Resolution

In a similar way to how noise patterns are identified and removed using CNNs, patterns that denote real objects can also be used to improve the resolution of point cloud outputs. In this case, during the training process, the CNN learns to understand the spatial relationships within high-resolution ground truth point cloud datasets and predict where additional points should be added to increase resolution. Once trained, the CNN can take lower resolution point clouds and enhance them by adding additional points in a way that increases detail and accuracy.
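Architecturally, the super-resolution step can be pictured as a small learned upsampler. Again, this is a hedged sketch with placeholder shapes, not the production network:

```python
import torch
from torch import nn

# Toy super-resolution head: upsamples a coarse grid 2x, letting the network
# place additional points where the spatial context suggests real structure.
sr_head = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)

coarse = torch.randn(1, 1, 64, 64)  # low-resolution radar frame
fine = sr_head(coarse)              # (1, 1, 128, 128) enhanced grid
```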

Provizio 5D Perception super-resolution neural net

Neural Networks for Object Detection, Tracking and Freespace Estimation

Once the data from our radar sensors is de-noised and enhanced as per the above systems, a further set of neural networks is used to process this data with the goal of understanding the real-world environment it represents. In this respect, our hardware and software teams worked closely together to develop an understanding of how to build a neural network that could extract the most information from the radar point clouds. In doing so, several efficiencies in the process were identified to create a lightweight system, capable of performing advanced perception tasks on-the-edge.

The Provizio Advantage

The above provides a high-level outline of the modular process we use at Provizio to maximise the value output of our products. Not only does this approach enable greater maintainability over time, but by developing both the hardware and software for our devices, we possess a unique ability to produce high quality outputs at a fraction of the cost of our competitors. By leveraging AI within our 5D Perception system, we deliver:

  - LiDAR-level resolution with the robustness and cost advantages of radar
  - Powerful on-the-edge perception with minimal resource demands
  - A solution that improves over time through crowdsourced data and SOTA updates

Conclusion

The application of AI in vehicle perception is a field rich with innovation and challenges. As AI continues to evolve, Provizio is at the forefront of addressing the complex technical hurdles affecting the safety and real-world variability of autonomous systems, such that a future of zero accidents will become possible for all.

5D Perception Will Eliminate Crashes

As a title, that’s a bold statement. After a lot of hard work it’s also a statement the Provizio team are now confident to stand over. As covered in an earlier post, we started Provizio to solve the driving problem; the driving problem being the reasons we crash, the reasons for unconscionable road fatalities or what the autonomous industry often refers to as “the hard bit”.

Autonomous vehicles were meant to end road crashes while simultaneously providing low cost mobility, but this goal depended on ubiquitous autonomy, which we are no closer to today than we were 10 years and hundreds of billions of investment dollars ago. Autonomous Vehicle (AV) groups have made impressive strides in delivering driverless vehicles in geo-fenced regions. However, while this has proved that it is indeed possible to deliver safe autonomy on our roads, the cost associated with these platforms means they cannot be mass deployed and so cannot impact the driving problem in the near term.

Provizio imagined an affordable perception system that could give every driver AV grade perception, 360° insights in all weather conditions and pave a true path to ubiquitous autonomy; that’s why we created 5D Perception®.

Provizio 5D Perception® point cloud delivered by a single forward facing sensor.

The Perception Problem

AVs today use a combination of sensors and compute resources to perceive the world and take the human driver out of the loop.

These sensors are then fused on a central compute where a lot of heavy lifting software takes over, deploying processing algorithms, machine learning (ML) and artificial intelligence (AI) to deliver a passenger safely from A to B.

Unfortunately, there are several disadvantages to this approach:

  1. It’s a complex and expensive platform. LiDAR is largely what allows these vehicles to succeed but as a sensor, it is expensive to build and hard to scale.
  2. LiDAR needs to be seen to see and therefore impacts vehicle aesthetics. A number of companies are making inroads on building scalable solid-state LiDAR, but most focus on forward-facing perception only. The rest of the system also carries a high cost, as it requires external mounting and cleaning.
  3. LiDAR struggles to perform in bad weather. Radar doesn’t have these cost or environmental issues, which is why it is deployed in almost all vehicles on the road today, but today’s radars lack the resolution a LiDAR offers…until now.
  4. LiDAR produces an immense amount of data which places a huge demand on vehicle compute and networking systems in order to process the data in real-time. Automakers are struggling to both reduce manufacturing and materials costs, while simultaneously increasing computational power and network bandwidth.

Introducing 5D Perception®

5D Perception is a sensor-to-perception-level solution designed to target the core constraints impeding the progression of today’s automotive perception technology. Our proprietary super-resolution imaging radar, which sees the world in a LiDAR-like 3D point cloud and also delivers the precise range and velocity of every point, provides the 4-dimensional sensing element of our 5D solution. Uniquely, the entire radar backend is built on a Graphics Processing Unit (GPU), which allows us to deliver AI perception on-the-edge. This is what we call the 5th D in our 5D Perception® solution. This offers multiple advantages to our partners and ultimately all road users.

Levelling up with 5D Perception®

An Accelerated Path to Production

To deliver 5D Perception® and get the most from our high-resolution radar, we developed a perception stack designed to leverage best-in-class camera and LiDAR sensors from partner suppliers. This provides a clear path for OEM and AV partners to develop increasingly complex safety and autonomous solutions.

Teamwork to Make the Dreamwork

Since foundation we promised to partner, partner, partner to achieve our goal of zero road deaths and a true path to autonomy. The real key to this technology achieving mass adoption is the ability for it to be implemented at scale, with Provizio working alongside OEMs and Tier 1 auto manufacturers to bring this technology to series production and onto our roads by 2025.

Our core IP licence model has us working with some very exciting partners already, and we will be announcing additional partners throughout 2023; however, we are always open to more.

By leveraging our proprietary 5D Perception® technology, Provizio can supply class-leading systems for a fraction of the current cost and with low integration complexity. That has led to applications beyond traditional automotive. On any one day, Provizio and our partners are testing on roads in Shannon and Palo Alto, in German cornfields, down Pittsburgh mines or scooting in Stockholm. We’d love to hear about your challenges and how 5D Perception® and MIMSO® could make an impact.