The Psychology of Artificial Intelligence

Stopping robots from banging into walls

Imagine a scene with a robot, some water, and a cup: the robot has to pour the water into the cup. Right now, that is a complex task for a robot. To begin with, the idea that the cup is there for the water to be poured into is not an obvious conclusion for a robot. To its machine vision, the water and the cup are two distinct items, one fluid and one solid, with no particular functional relationship between them, unless that relationship is purposely fed to its mechanical brain.

For humans, however, this comes quite naturally. When we see water, we know that we can drink it, and that it needs to be poured into a cup for us to drink from. This concept is ingrained in our brains. AI research and development company DeepMind is now drawing on human psychology, particularly the theory of affordances, to develop a new approach to reinforcement learning. Such an approach is needed to create a robot that does not have to be fed every possible situation in order to draw its conclusions; instead, it would keep discovering new possibilities as it operates, much as humans do.

While the work is still in its early stages, DeepMind's researchers hope that these initial experiments may lay a theoretical foundation for scaling the idea up to much more complex actions in the future. They believe that one day robots will develop a general understanding of the possibilities an object affords; this is the core of the theory of affordances.

As a psychological concept in visual perception, it is quite simple. Humans, and even animals, perceive the world not only in terms of object shapes and spatial relationships, but also in terms of what we can do with those objects, and this drives action. This is what we want future machines and robots to do without being told to: observe the things around them, surmise the possibilities, and act accordingly.

The researchers created a simple virtual scenario. They placed a virtual agent in a 2D environment with a wall down the middle, and had the agent explore its range of motion until it had learned what the environment would allow it to do – its affordances. The researchers then gave the agent a set of simple objectives to achieve through reinforcement learning, such as moving a certain distance to the right or to the left. They found that, compared with an agent that had not learned the affordances, the affordance-aware agent avoided any moves that would cause it to be blocked by the wall partway through its motion, setting it up to achieve its goal more efficiently.
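
To make this concrete, here is a minimal sketch of the idea in Python. It is purely illustrative and not DeepMind's actual code: the grid size, the `GridWorld` class and the `learn_affordances` helper are assumptions made for this example. The agent wanders around a small grid with a wall down the middle and records which moves actually complete from each cell – its affordances.

```python
import random

ACTIONS = {"left": (-1, 0), "right": (1, 0), "up": (0, -1), "down": (0, 1)}

class GridWorld:
    """Toy 2D world with a vertical wall down the middle (illustrative only)."""

    def __init__(self, width=7, height=5):
        self.width, self.height = width, height
        self.wall_x = width // 2  # column occupied by the wall

    def step(self, state, action):
        """Return the next state; the agent stays put if the move is blocked."""
        x, y = state
        dx, dy = ACTIONS[action]
        nx, ny = x + dx, y + dy
        blocked = (
            nx < 0 or nx >= self.width
            or ny < 0 or ny >= self.height
            or nx == self.wall_x  # bumping into the wall
        )
        return state if blocked else (nx, ny)

def learn_affordances(env, episodes=2000):
    """Explore randomly and record which (state, action) pairs actually move the agent."""
    afford = {}
    for _ in range(episodes):
        state = (random.randrange(env.width), random.randrange(env.height))
        if state[0] == env.wall_x:
            continue  # never start inside the wall
        action = random.choice(list(ACTIONS))
        next_state = env.step(state, action)
        afford[(state, action)] = next_state != state  # True if the move is afforded
    return afford

env = GridWorld()
affordances = learn_affordances(env)
# Moving right from the cell just left of the wall is not afforded:
print(affordances.get(((env.wall_x - 1, 2), "right")))  # False (if visited during exploration)
```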

Apply this to an everyday scenario where a robot needs to move from point A to point B in a factory. Here, the RL algorithm uses the 'affordances' of individual parts of the system to rule out impossibilities, such as the robot moving through walls or furniture. By avoiding moves that would prevent it from reaching its target, the system becomes much more efficient. In other words, instead of banging its head against a wall, the robot travels from point A to point B without incident. This will enable robots and machines to move 'freely' by assessing the possibilities of the environment around them.
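
Continuing the toy sketch above (and reusing its `env` and `affordances` objects), the snippet below shows one hypothetical way such a table could be used: moves that are not afforded are masked out before the agent chooses an action, so it never wastes a step walking into the wall. The `afforded_actions` and `navigate` helpers are illustrative names, not part of any published implementation.

```python
def afforded_actions(affordances, state):
    """Moves the agent has learned it can actually execute from this state."""
    # Unvisited (state, action) pairs are treated conservatively as not afforded.
    return [a for a in ACTIONS if affordances.get((state, a), False)]

def navigate(env, affordances, start, goal, max_steps=50):
    """Greedy navigation that only ever considers afforded moves."""
    state, path = start, [start]
    for _ in range(max_steps):
        if state == goal:
            break
        candidates = afforded_actions(affordances, state)
        if not candidates:
            break  # nothing is afforded here

        def dist_after(action):
            nx, ny = env.step(state, action)
            return abs(nx - goal[0]) + abs(ny - goal[1])

        state = env.step(state, min(candidates, key=dist_after))  # closest-to-goal afforded move
        path.append(state)
    return path

# Both points sit on the same side of the wall in this toy world.
print(navigate(env, affordances, start=(0, 2), goal=(2, 0)))
```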

Image source: 'What can I do here? A Theory of Affordances in Reinforcement Learning' by Khimya Khetarpal, Zafarali Ahmed, Gheorghe Comanici, David Abel and Doina Precup.

Why it matters

The primary reason for using the theory of affordances in reinforcement learning is to achieve two goals: "decreasing the computational complexity of planning and enabling the stable learning of partial models from data, which can generalize better than full models." The researchers believe that, despite the limitations of their current work, it will pave the way for incorporating affordances to streamline machine learning algorithms. Additionally, applying this principle to more complex hierarchical reinforcement learning models may simplify the ever-expanding research in the field over the long run.
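
As a rough, back-of-the-envelope illustration of the first goal (the numbers below are assumed, not taken from the paper), pruning non-afforded actions shrinks the tree of action sequences a planner has to consider:

```python
# Illustrative numbers only: pruning moves that are not afforded
# shrinks the search tree a planner has to expand.
full_actions = 4      # up/down/left/right available everywhere
afforded_avg = 2.5    # assumed average number of afforded moves per state
depth = 6             # planning horizon

print(full_actions ** depth)         # 4096 candidate action sequences
print(round(afforded_avg ** depth))  # ~244 -- roughly a 17x reduction
```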

Ever since the advent of AI, the field has been intertwined with psychology at its core, especially because AI systems are designed with the human brain in mind, using artificial neural networks loosely modelled on networks of neurons to carry out tasks. Given how crucial psychology is to understanding and improving AI, and how many of its concepts are being drawn upon, the field of AI psychology looks rather promising and ready for the road ahead.
