Blog Post
5.26.2020
Rethinking Cruise’s AV Development Loop During COVID-19
On March 16th, we suspended our on-road testing following the initial guidance issued by Mayor London Breed and public health officials. This required us to rethink our AV development loop. Our multi-year investment in simulation has given us robust capabilities that allow us to continue making rapid progress on AV performance while the majority of our vehicles are off the road.
Despite a heavy reliance on simulation, we still prefer to include road testing to tease out gaps in our test coverage and discover things we may not be simulating correctly. Our development cycle typically follows a simple loop (sketched in code after the list):
Gather data by driving our fleet of autonomous vehicles (AVs) throughout San Francisco
Analyze the data for new events, gaps, and opportunities to improve
Update the AV code and simulation code
Test the new AV code in simulation
Deploy code to our AV fleet, and then go back to step one
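For illustration, here is a minimal sketch of that loop in Python; the function names and return values are hypothetical placeholders, not our actual tooling.

```python
# Hypothetical sketch of the development loop described above.
# Function names are illustrative placeholders, not Cruise's real tooling.

def gather_fleet_data():
    return ["drive_log_001"]            # step 1: drive the fleet, collect data

def analyze(logs):
    return {"new_events": logs}         # step 2: find new events, gaps, opportunities

def update_code(findings):
    return "av_code_v2", "sim_code_v2"  # step 3: update AV code and simulation code

def run_simulations(av_code, sim_code):
    return True                         # step 4: test the new AV code in simulation

def deploy_to_fleet(av_code):
    print(f"deploying {av_code}")       # step 5: deploy, then go back to step one

def development_loop(iterations=1):
    for _ in range(iterations):
        logs = gather_fleet_data()
        findings = analyze(logs)
        av_code, sim_code = update_code(findings)
        if run_simulations(av_code, sim_code):
            deploy_to_fleet(av_code)

development_loop()
```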
It is immediately obvious that to continue making progress, we must rely on our simulation software. Our simulation frameworks allow for AV code testing without real-world driving, but it’s not that simple. There are many challenges we must overcome and some caveats we must keep in mind. For starters, accuracy is a core challenge.
AV simulations must replicate what happens in the real world as accurately as possible in order to give engineers correct feedback and signal. We are on a continual quest to close the gap between our simulation results and what we see on the road with our physical test vehicles. To do so, we enforce accurate world rules and ensure our simulated vehicles correctly interpret the world and make good decisions.
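As a hedged illustration of what closing that gap can mean in practice (not our actual validation pipeline), one simple measure is to replay a logged drive in simulation and compute how far the simulated trajectory drifts from the recorded one:

```python
import math

def trajectory_gap(road_xy, sim_xy):
    """Mean position error (meters) between a logged on-road trajectory and
    the simulated replay of the same scenario, sampled at matching timestamps."""
    assert len(road_xy) == len(sim_xy)
    errors = [math.dist(r, s) for r, s in zip(road_xy, sim_xy)]
    return sum(errors) / len(errors)

# Toy example: the simulated vehicle drifts slightly from the logged path.
road = [(0.0, 0.0), (1.0, 0.1), (2.0, 0.3)]
sim  = [(0.0, 0.0), (1.0, 0.0), (2.1, 0.2)]
print(f"mean gap: {trajectory_gap(road, sim):.2f} m")
```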
For every scenario we want to test in simulation, we have to choose how much to invest in accuracy. No vehicle dynamics model is perfectly accurate. These models can get complex, and it would be easy to spend months of development effort resolving extremely minor differences between how the AV behaves in simulation and on the road.
With limited time and resources, we have to make choices. For example, we ask how accurately we should model tires, and whether that is more important than other items in our queue, like modeling LiDAR reflections off of car windshields and rearview mirrors or correctly modeling radar multipath returns.
Tires are complicated, yet essential to explaining vehicle motion. This is why, for example, perhaps the most commonly used tire slip model in industry is called the Magic Tire Formula. Rather than explaining every nuance of tire behavior from first principles, a mix of science and alchemy that remains a very active field of research and has produced a bevy of PhD theses, it provides a simple way to fit empirical data. Tools like these let us accurately predict on-road vehicle behavior due to tire slip without requiring that we become experts on precisely how rubber tires deform, how they depend on the car's mileage, or how they change if we're on Market Street at sunrise or Mission Street at noon. Tires illustrate our overall decision-making process: how accurately to model each component, how to allocate our efforts across the many variables in our models, and what the impact is of getting the modeling wrong.
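The Magic Tire Formula itself is a compact curve fit. A minimal sketch is below, using generic coefficient values purely for illustration rather than any parameters we actually fit:

```python
import math

def magic_formula(slip, B=10.0, C=1.9, D=1.0, E=0.97):
    """Pacejka "Magic Formula": a curve fit mapping tire slip to normalized
    force. B, C, D, E are empirical stiffness/shape/peak/curvature factors;
    the values here are generic illustrations, not fitted parameters."""
    Bx = B * slip
    return D * math.sin(C * math.atan(Bx - E * (Bx - math.atan(Bx))))

# Force rises steeply at small slip, then saturates and tapers past the peak.
for slip in (0.02, 0.05, 0.10, 0.20, 0.40):
    print(f"slip {slip:.2f} -> normalized force {magic_formula(slip):.2f}")
```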
Another way we manage trade-offs is by using a variety of simulation frameworks to serve the different levels of accuracy our tests require. Some frameworks don't need 3D graphics at all because they test subsystems of the AV stack; these simulations are easier to create, cheaper to execute, and can run on commodity hardware at 100x real-time. Other frameworks need full 3D visibility, weather simulation, and elevation data to be certain the simulated AV correctly interprets its surroundings. Based on what a simulation tests for, we know how to prioritize its accuracy needs and compute resources.
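As a rough illustration of that trade-off (with made-up framework names and numbers), routing a test can be as simple as matching its fidelity needs to the cheapest framework that satisfies them:

```python
from dataclasses import dataclass

# Hypothetical illustration of matching tests to simulation frameworks by the
# fidelity they need; the names and numbers are invented for this example.

@dataclass
class Framework:
    name: str
    needs_3d_rendering: bool
    relative_cost: float        # compute cost per simulated second, arbitrary units
    speedup_vs_realtime: float

FRAMEWORKS = [
    Framework("subsystem_sim", needs_3d_rendering=False, relative_cost=1.0, speedup_vs_realtime=100.0),
    Framework("full_world_sim", needs_3d_rendering=True, relative_cost=50.0, speedup_vs_realtime=1.0),
]

def pick_framework(test_needs_perception: bool) -> Framework:
    """Perception-level tests need rendered 3D scenes; planning and controls
    subsystem tests can run on the cheap, faster-than-real-time framework."""
    for fw in FRAMEWORKS:
        if fw.needs_3d_rendering == test_needs_perception:
            return fw
    raise ValueError("no suitable framework")

print(pick_framework(test_needs_perception=False).name)  # -> subsystem_sim
```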
Beyond the above challenges, we also have sensors to model. Decades of video game engines have taught us how to create visual scene data for cameras (which is what you see when you play a video game), but what about LiDAR and radar? Many LiDARs capture 3D data in slices, and radar captures electromagnetic reflections that appear as oddly shaped blobs. The real challenge is modeling LiDAR and radar accurately in real time while balancing fidelity against runtime performance. It's a complex problem involving refraction in the air, reflections, ghosting, and much more than we can discuss today.
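To make the LiDAR case concrete, here is a toy sketch, under heavy simplifying assumptions, of generating one horizontal LiDAR slice by ray casting against a single obstacle; a production sensor model also has to handle reflectivity, beam divergence, multipath, and noise:

```python
import math

def ray_circle_range(azimuth, center, radius, max_range=100.0):
    """Return the range (meters) at which a ray from the origin at `azimuth`
    first hits a circle at `center` with `radius`, or max_range on a miss."""
    dx, dy = math.cos(azimuth), math.sin(azimuth)
    cx, cy = center
    # Solve |t*d - c|^2 = r^2 for the nearest positive t (d is a unit vector).
    b = -2.0 * (dx * cx + dy * cy)
    c = cx * cx + cy * cy - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return max_range
    t = (-b - math.sqrt(disc)) / 2.0
    return t if 0.0 < t < max_range else max_range

# One 360-degree slice at 1-degree resolution, with a 1 m obstacle 10 m ahead.
scan = [ray_circle_range(math.radians(a), center=(10.0, 0.0), radius=1.0)
        for a in range(360)]
print(f"closest return: {min(scan):.1f} m, farthest return: {max(scan):.1f} m")
```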
Regardless of whether our vehicles are on the road, our AV engineers continue to test and develop code using our powerful simulation frameworks. To continuously improve the vehicle's performance, it's crucial for these frameworks to have high-fidelity models, an accurately simulated AV platform, interactive and realistic road users (e.g., drivers, pedestrians, cyclists), and highly detailed environments. We believe continued investment in simulation tools will lead to an even faster rate of improvement, so we will keep investing in these frameworks for years to come.
If you’re interested in being at the epicenter of rapid AV development, consider joining us.