Automotive AI: Applications that come with challenges in creation

BM: We work with OEMs in Detroit in the US, Germany, Japan and other locations, partnering closely with their machine learning, data architecture and AI teams to help power the exterior systems they're building for vehicles. This includes lane keeping and adaptive cruise control, to name a couple of applications, as well as in-cabin systems.

The thing I would say about automakers in 2024 is that they're increasingly becoming software companies, with thousands of software engineers and data scientists on staff. They are building quite sophisticated systems to develop these new technologies, which they see as critical to their company strategies. We have the pleasure of supporting all these teams in building out their visual AI infrastructure, which they have brought in-house rather than outsourcing.

Jason Corso (JC): At a higher level, we examine the teaming between the driver and the automobile. It's important to think about what humans and AI are each good at. Humans are good at adapting to new and changing situations that require reasoning, both dynamic reasoning and ethical judgement. On the other hand, humans are bad at monotonous, repetitive everyday tasks, whereas AI is really good at doing repetitive things over and over again.

What does that mean? For driving, AI has the potential to augment a driver's ability to keep eyes on the road, on their surroundings and on their blind spots when they may not be able to because they're tired or distracted, or for some other reason. This will increase the overall safety of driving and of road systems, and that's in a scenario with AI and humans working in sync. I think that's going to come before we see Level 4 or Level 5 autonomy. We are already seeing a lot of amazing enhancements from this augmentation of human driving with AI.

Which ADAS features are imminent, versus longer term projects for companies?

BM: We're seeing specific technologies being rolled out that can alert drivers to opportune moments to change lanes or overtake other vehicles safely, in a way that doesn't distract them from the road. Lane keeping and adaptive cruise control are features that are already in the market, but they are being constantly evolved and improved, leveraging not just sensor-based systems but also other modalities and visual data to improve their performance, reduce costs and make them available on more types of vehicles.

In-cabin solutions are a key focus of the teams that we support. Think cameras mounted in strategic positions that can identify driver drowsiness and distraction (i.e. when the driver isn't watching the road), then issue alerts or interventions as necessary. This kind of innovation is already on the market and will continue to get more sophisticated as AI-powered systems become more capable and in tune with driver attention and awareness.
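As a rough illustration of how such an in-cabin alert might be structured, here is a minimal sketch of a PERCLOS-style check: it assumes an upstream vision model supplies a per-frame eye-openness score (the estimate_eye_openness stub below is hypothetical) and raises an alert when the eyes have been closed for too large a share of a rolling window. It is not a description of any OEM's production system.

```python
from collections import deque

# Hypothetical stand-in for a real perception model (e.g. a facial-landmark
# or gaze network); here it just reads a precomputed score in [0, 1].
def estimate_eye_openness(frame) -> float:
    return frame["eye_openness"]

CLOSED_THRESHOLD = 0.3     # below this, the eyes are treated as closed
WINDOW_FRAMES = 90         # roughly 3 seconds of video at 30 fps
PERCLOS_ALERT_LEVEL = 0.4  # alert if eyes are closed >40% of the window

def drowsiness_monitor(frames):
    """Yield True for each frame where a drowsiness alert should fire."""
    recent = deque(maxlen=WINDOW_FRAMES)
    for frame in frames:
        recent.append(estimate_eye_openness(frame) < CLOSED_THRESHOLD)
        yield sum(recent) / len(recent) > PERCLOS_ALERT_LEVEL

# Simulated stream standing in for camera frames: the driver's eyes
# gradually close over a few seconds.
stream = [{"eye_openness": max(0.0, 1.0 - i / 100)} for i in range(200)]
for i, alert in enumerate(drowsiness_monitor(stream)):
    if alert:
        print(f"Drowsiness alert raised at frame {i}")
        break
```

In practice the openness score would come from a camera-based model, and the thresholds would be tuned and validated against real driver data rather than hard-coded.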

There are also imminent features that will automate tasks that are hard for human drivers to execute, such as backing a truck into a loading dock. Another innovation is parking assistance where the car can identify if a parking spot is feasible and park in it automatically. We could also see vehicles that warn against hazards as you exit the car, such as a passing cyclist, to prevent collisions.

Some other visual AI features we could see in the next few years are those that help reduce the risk posed by inexperienced or poor drivers, made possible through driver recognition and offered as options that consumers could choose from.

There are also ADAS features that adjust to real-time conditions, which we would expect to be rolled out over the longer term.

What are some of the challenges OEMs face when developing these projects?

JC: I think the ultimate challenge in many AI and visual AI projects is the long tail problem. It's pretty easy to get 80% performance on a typical problem with machine learning or AI these days, but that decent performance is usually based on the typical scenarios drivers are used to. When you think about deploying AI in the real world, you can't deploy a pedestrian avoidance system that works four times out of five. That would be a problem. The difficulty comes from the visual and behavioural complexity of operating in the everyday world.

It's very difficult to build systems that have seen enough examples of all the variations of situations that autonomous systems will be expected to operate in. It is even harder to build a general intelligence capable of reasoning about these myriad situations.

Although accidents happen, humans are rather good at applying adaptive, on-the-fly responses and reflexes that tend to align with what we've experienced as best practices. While the automotive industry aims to get there, today's systems can't quite capture those capabilities. Even a student can download open-source code, use open-source data and quickly train a model that gets decent performance. But getting it to production quality, able to handle the high variability of the real world, requires expertise, time, lots of money, lots of testing, and lots of patience.
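To make the long-tail gap concrete, here is a small hypothetical evaluation: a detector can post a strong aggregate score while failing badly on the rare scenarios that matter most. The scenario names and counts below are invented for illustration, not real benchmark results.

```python
# Hypothetical per-scenario results for a pedestrian-detection model.
# (correct detections, total cases) -- numbers are illustrative only.
scenarios = {
    "clear daytime":       (9_600, 10_000),
    "light rain":          (1_800,  2_000),
    "night, low contrast": (  240,    400),
    "heavy snow":          (   30,    100),
    "occluded by vehicle": (   20,     80),
}

correct = sum(c for c, _ in scenarios.values())
total = sum(n for _, n in scenarios.values())
print(f"Aggregate accuracy: {correct / total:.1%}")      # looks strong overall

for name, (c, n) in scenarios.items():
    print(f"  {name:<20} {c / n:6.1%} over {n} cases")   # the tail does not
```

In this made-up example the aggregate figure lands above 90%, yet the rarest conditions sit at 25-30%, which is exactly where a production system cannot afford to fail.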

That's ultimately the game that I think many of these companies are playing right now: trying to figure out how to cover as many of those hardest cases as possible while ensuring confidence in safety.

What do you see the future holding for the use of AI in the automotive industry? Not just in terms of safety, but more broadly?

BM: As I mentioned earlier, modern automakers are really software companies, and their value to the consumer is measured by safety and reliability. Much of the rest of that value now comes from the software and AI-enabled features that are becoming possible for vehicles.

I'm personally excited that safety is one of the first areas where we're seeing some real progress in terms of what these technologies can do - whether it's in-cabin awareness or keeping vehicles safe and reducing accidents. That's one of the first areas where we're seeing real returns from AI investment.

Automakers are great at thinking long term. The lifespan of a vehicle in the real world is 20-plus years, so roll-outs happen over an extended period. But these automakers are making the right investments today. It starts with gathering the right data to address these challenges and get us to 99.999% reliability. That begins with sophisticated data collection systems that help automakers gather enough data to cover all of the different situations, edge cases and anomalies. We're well into that journey now, and it's exciting to see the automakers make that transition and be positioned for success.
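One common pattern behind the kind of data collection described here is to mine fleet or test-drive footage for the frames the current model is least confident about, or that carry rare-condition tags, and queue those for annotation. The sketch below is a generic, assumed illustration of that triage step, with made-up field names, not any particular company's pipeline.

```python
from dataclasses import dataclass

@dataclass
class FrameRecord:
    frame_id: str
    max_confidence: float        # highest detection confidence from the current model
    scene_tags: tuple[str, ...]  # e.g. weather/lighting tags from onboard metadata

def select_for_labeling(records, confidence_floor=0.5,
                        rare_tags=("snow", "glare", "construction")):
    """Pick frames where the model is uncertain or a rare condition appears."""
    return [
        r for r in records
        if r.max_confidence < confidence_floor
        or any(tag in rare_tags for tag in r.scene_tags)
    ]

# Illustrative log entries standing in for frames captured by a test fleet.
log = [
    FrameRecord("f001", 0.97, ("daytime", "clear")),
    FrameRecord("f002", 0.42, ("night",)),           # uncertain -> label it
    FrameRecord("f003", 0.88, ("daytime", "snow")),  # rare scene -> label it
]
for rec in select_for_labeling(log):
    print(f"Queue {rec.frame_id} for annotation")
```

Routing the selected frames back into labeling and retraining is how coverage of edge cases and anomalies grows over time.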

Is there anything else that either of you want to add?

BM: Although it's easy to focus on machines and algorithms taking complete control when talking about vehicles, humans play a key role in the entire lifecycle of the next generation of automotive technology. People are needed to make sure that the data being fed into these in-vehicle AI systems is accurate and free of bias. It's essential that those working on autonomous vehicles consider whether the system meaningfully increases the quality of both their product and the driver experience. It's not something that can be fully automated right away. We'll all be living with these cars in the future, so it's important that the human element is part of their development from beginning to end.

"Automotive AI: Applications that come with challenges in creation " was originally created and published by Just Auto, a GlobalData owned brand.
