Insects and rodents could be the key to understanding how artificial intelligence works – and how it powers the new generation of autonomous vehicles.
Research fresh out of Australia’s ARC Centre of Excellence for NeuroRobotics suggests that biology could be the roadmap for deep learning and AI.
The Centre is looking to create artificially intelligent systems that respond to their environment in a flexible way. To do this, researchers will reverse engineer ‘small minds’, including those of insects and rodents.
While it’s not yet clear exactly how they plan to do this, Centre director Andrew Barron says that current AI systems have to be trained for millions of hours to become, say, a self-driving car – and driving is the only thing they are trained to do.
This differs from what biological brains do – humans, for example, can drive (relatively) safely while holding a conversation or thinking through a maths problem at the same time.
“Humans do all of that seamlessly – we’re intuitive, we’re cognitively flexible, we’re smart and we’re efficient. Those are the capacities we want to bring into AI. The easiest way to do it is to learn how minds do it and then reverse engineer it. It’s a no-brainer,” Barron explains.
“A deep learning system is not flexible and a self-driving car is a one trick pony. This system is pointless for a farmer or a miner. These industries need a car or a device that can decide on its priorities, monitor fuel levels and move around safely to get a job done,” Barron says.
He adds that the neuroscience field already understands how insect brains work, which makes them an achievable first step for the programme.
“We want to build AI which has limits as to what it can do; limits which are intrinsically part of the architecture of its mind. This will help the public to understand and trust AI,” Barron continues. “With this computational framework we can interrogate the decisions it makes, line by line.”
The Centre is currently applying for further funding to continue the research, which will cover not only practical AI development but also ethics and policy.
“One of the biggest problems with AI at the moment is that we have the development of technology happening completely independently of ethical or policy considerations. Then, as the technology emerges, there’s a frantic scramble to assess the legislative and ethical consequences,” Barron says.
“Putting these considerations into the design program means we’re designing for outcomes that will benefit society. Our approach is to ensure we balance societal transformations caused by AI technology with a need to ensure the rights, dignity and security of people.”