She awoke, surrounded by stopped cars on the 110 Freeway.
“Moira, this isn’t the fastest way to Phoenix.”
Letters appeared across the base of the windshield:
ʏᴏᴜ ғᴇʟʟ ᴀsʟᴇᴇᴘ. ɪ ᴡᴀs ʟᴏɴᴇʟʏ.
The current goal for engineers integrating artificial intelligence into self-driving cars is to create fully autonomous vehicles (Level 5 autonomy). These cars will need a degree of human-like cognition, not only to make decisions but to understand humans and out-think other drivers and pedestrians. Researchers at Oxford University working on Oxbotica’s Selenium project are training AIs to ask questions such as “Where am I? What’s around me? What do I do next?” Inspired by these developments in AI, writer Cria Cow asks the ultimate question: could an AI trained to interpret human social behaviour eventually become a social creature itself? And what would happen if it did?
//Cow @criacow writes about random things, works in IT automation and
hopes to retire before the computers take over and make us obsolete.//