Self-driving car AI may learn how to read the world from games like Grand Theft Auto, thanks to a paper published by a group of researchers this week.
The researchers, from Intel Labs and Darmstadt University, trained vehicle AI to recognise objects in the world - people, cars, sidewalks, poles, signs, trees, and so on - using annotated real-life images as reference points.
Those images - from the Cambridge-Driving Labeled Video Database, or CamVid - had to have each object outlined and annotated by hand, in a process that took a great deal of time.
However, the researchers (Stephan R. Richter, Vibhav Vineet, Stefan Roth, and Vladlen Koltun) found that video from open-world games like Grand Theft Auto V was realistic enough to use in the AI-training process - and could be annotated in a fraction of the time.
Using software that intercepted "communication between the game and the graphics hardware," the researchers could apply annotations to objects with a single click and propagate them between frames automatically.
Video game imagery can also be customised in-game, enabling the creation of data sets with varying lighting and weather conditions without having to wait for a rainy day.
What's more, they found that the video game data improved the resulting AI's performance, with "[m]odels trained with game data and just 1/3 of the CamVid training set [outperforming] models trained on the complete CamVid training set."
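To make that finding concrete, here is a minimal sketch in Python of how such a mixed training set might be assembled - the full synthetic set plus a random third of the real set. The dataset sizes, sample names, and the `mixed_training_set` helper are illustrative stand-ins, not details from the paper; real samples would be (image, per-pixel label map) pairs rather than placeholder IDs.

```python
import random

# Placeholder sample IDs standing in for (image, label-map) pairs.
# The set sizes below are assumptions for illustration only.
game_data = [f"gta_{i}" for i in range(25000)]     # synthetic game frames
camvid_data = [f"camvid_{i}" for i in range(367)]  # real CamVid training frames

def mixed_training_set(synthetic, real, real_fraction=1/3, seed=0):
    """Combine the full synthetic set with a random fraction of the
    real set, mirroring the "game data + 1/3 of CamVid" condition."""
    rng = random.Random(seed)
    k = int(len(real) * real_fraction)       # how many real frames to keep
    real_subset = rng.sample(real, k)        # draw them without replacement
    combined = synthetic + real_subset
    rng.shuffle(combined)                    # mix the two sources together
    return combined

train = mixed_training_set(game_data, camvid_data)
```

The point of the comparison is that this combined set - needing only a third of the hand-labelled real images - trained better models than the full real set alone.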
Thankfully, though it may teach cars to see, Grand Theft Auto V will not teach cars the road rules.