
Baby robots feel the love

by Marianne Freiberger

Researchers have for the first time created robots that can develop and express emotions. If you treat these robots well, they'll form an attachment to you, looking for hugs when they feel sad and responding to reassuring strokes when they are distressed. They are capable of expressing anger, fear, sadness, happiness, excitement and pride and will demonstrate very visible distress if you fail to give them comfort when they need it. And they can even display different personality traits.

Lola Cañamero comforting a sad robot.

"This behaviour is modelled on what a young child does," says Lola Cañamero of the University of Herfordshire, who led the research. "This is also very similar to the way chimpanzees and other non-human primates develop affective bonds with their caregivers." It's the first time that early attachment models of human and other primates have been used to program robots.

The researchers believe that the ability to interact, learn and show emotions is crucial if robots are ever going to become an integral part of human society. Without it, people will find them alien, repetitive and eventually boring. But how do you get emotions, which are far from well-understood in humans, into machines that only understand maths?

First, you need to have a good look at emotional development in humans. Being able to form bonds with people who look after them, usually mothers or fathers, is vital in the emotional and cognitive development of children. Caregivers act as a secure base for infants, giving comfort, soothing distress and providing the courage to explore the environment and learn.

In a 2008 paper Cañamero and her colleague Antoine Hiolle described a basic blueprint for mimicking this aspect of the caregiver-infant relationship. Robots work on a step-by-step basis. At any given time step a robot takes impressions from the outside world as input, for example visual information from cameras and tactile information from sensors on its body. It then uses that information to update variables that mimic emotions, which in turn cue the action it takes next.
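In code, each time step boils down to a sense-update-act loop. The following Python skeleton is a minimal sketch of that architecture, not the researchers' actual implementation: the class names, the random stand-in sensors and the crude arousal update are all invented for illustration.

```python
import random

class Sensors:
    """Stand-in sensors returning random readings."""
    def read_camera(self):
        # Pretend visual input: three colour values between 0 and 1.
        return [random.random() for _ in range(3)]
    def read_touch(self):
        # Pretend tactile input: is the robot being stroked?
        return random.random() < 0.5

class EmotionState:
    """Holds the variables that mimic emotions."""
    def __init__(self):
        self.arousal = 0.5
    def update(self, image, touch):
        # A real model would update comfort, surprise, mastery and
        # arousal here (see below); this stub just nudges arousal.
        delta = -0.1 if touch else 0.1
        self.arousal = min(1.0, max(0.0, self.arousal + delta))
    def choose_action(self):
        return "explore" if self.arousal < 0.8 else "seek caregiver"

sensors, state = Sensors(), EmotionState()
for t in range(5):  # a few time steps
    state.update(sensors.read_camera(), sensors.read_touch())
    print(f"t={t}: arousal={state.arousal:.2f} -> {state.choose_action()}")
```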

In Cañamero and Hiolle's model the level of comfort given by the caregiver at a given time step $t$ is measured by a variable $T_{care}(t)$, which depends on what kind of comfort is being given at time $t$. For example, if the robot values touch over sight, you might define $T_{care}(t)=0.5$ if the robot can see the caregiver with its camera eyes, $T_{care}(t)=0.8$ if it can feel that it's being stroked with the sensors on its body, and $T_{care}(t)=1$ if both are happening at the same time, indicating maximum comfort.

If the caregiver isn't visible or touching, you can set $T_{care}(t)=0$, though for a more sophisticated robot, you might want to make it depend on the level of care given at the previous time step $T_{care}(t-1)$ using the equation $$T_{care}(t)=\beta T_{care}(t-1).$$ Here, $\beta<1$ is a number indicating how long the reassuring effect of comfort lasts. To create a needy robot, you'd choose a small value of $\beta$, meaning that the effects of care decay quickly, and for a more independent personality you'd choose a larger one.
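As a rough illustration, here is how $T_{care}$ might be computed in code. The comfort values 0.5, 0.8 and 1 are the example levels from the text; the function signature and the boolean sensor flags are assumptions made for the sketch.

```python
def t_care(sees_caregiver, feels_touch, prev_t_care, beta=0.5):
    """Comfort level T_care(t) at the current time step.

    The values 0.5, 0.8 and 1 are the example levels from the text,
    for a robot that values touch over sight; beta < 1 controls how
    long the reassuring effect of past comfort lasts.
    """
    if sees_caregiver and feels_touch:
        return 1.0                 # maximum comfort: seen and stroked
    if feels_touch:
        return 0.8                 # touch alone
    if sees_caregiver:
        return 0.5                 # sight alone
    return beta * prev_t_care      # no caregiver: comfort decays

# A needy robot (small beta) loses the effect of comfort quickly:
care = 1.0
for step in range(3):
    care = t_care(False, False, care, beta=0.2)
    print(f"step {step}: T_care = {care:.3f}")
```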

The robot's exploration of its environment can be mimicked using learning algorithms. Its perceptions of the outside world are represented by arrays of numbers, such as the red-green-blue colour values associated with the pixels in an image taken by a camera. Learning algorithms work by comparing the input arrays to previously stored "memories", also represented by number arrays. Depending on how numerically close a new sensory input is to the existing arrays, the algorithm either classifies it along with an existing memory or files it as a new one.

This approach also gives a way of modelling the robot's emotional response to its environment. The numerical discrepancy between sensory inputs, taken in at time step $t$, and existing memories can be used to describe the robot's surprise at what it's perceiving, stored in a variable $Sur(t)$. In their model Cañamero and Hiolle also introduced another variable, $Mas(t)$, which reflects how well the robot is mastering new impressions: it's based on how well the learning algorithm is doing at classifying the new input. The better the robot's mastery of new input, the lower the value of $Mas(t)$.
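A toy version of this surprise-and-mastery computation might look as follows. Treating memories as plain numeric arrays compared by Euclidean distance, and defining mastery as a slow-moving average of recent surprise, are assumptions of this sketch; the paper's actual definitions may differ.

```python
import numpy as np

def classify(inp, memories, threshold=0.3):
    """Match a sensory input (a numeric array, e.g. RGB pixel values,
    assumed scaled to [0, 1]) against stored memories. Returns the
    distance to the closest memory, which stands in for Sur(t)."""
    if not memories:
        memories.append(inp.copy())
        return 1.0                   # nothing known yet: maximal surprise
    best = min(np.linalg.norm(inp - m) for m in memories)
    if best > threshold:
        memories.append(inp.copy())  # novel enough: file a new memory
    return min(best, 1.0)

def update_mastery(prev_mas, surprise, rate=0.1):
    """Mas(t) as a slow-moving average of recent surprise: the better
    the robot is doing at classifying input, the lower it drifts."""
    return (1 - rate) * prev_mas + rate * surprise

memories, mas = [], 0.5
for colour in [np.array([0.9, 0.1, 0.2]), np.array([0.88, 0.12, 0.2])]:
    sur = classify(colour, memories)
    mas = update_mastery(mas, sur)
    print(f"Sur = {sur:.2f}, Mas = {mas:.2f}, memories stored: {len(memories)}")
```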

These two variables, together with the level of comfort given by the caregiver, can now be used to describe the robot's emotional state at time step $t$. It's measured by a variable $A(t)$, where $A$ stands for arousal. If there's no comfort from the caregiver, any arousal is only due to cues from the outside world. You can describe this by defining $A(t)$ only in terms of the robot's surprise and its mastery of the situation, for example by setting $$A(t)=\frac{Sur(t)+Mas(t)}{2} \;\; \mbox{if } T_{care}(t)=0.$$ If the caregiver is giving comfort, then the current level of arousal depends on just how much comfort there is and on how aroused the robot was at the previous time step. You can capture this using the equation $$A(t)=A(t-1)-\alpha T_{care}(t) \;\; \mbox{if } T_{care}(t)>0.$$ Here $\alpha$ is a number describing the soothing effect of the caregiver's comfort. A large value of $\alpha$ means that arousal diminishes quickly as the caregiver comforts the robot, and a small value means that the robot takes longer to calm down after it's been excited.
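The two arousal equations translate almost line for line into code. In this sketch, clamping $A(t)$ so it can't drop below zero is an added assumption, since the soothing equation as written could otherwise make arousal negative.

```python
def arousal(prev_a, sur, mas, t_care, alpha=0.3):
    """A(t) from the two cases above: external cues set arousal when
    no comfort is given; comfort soothes it at a rate alpha."""
    if t_care == 0:
        return (sur + mas) / 2                 # arousal driven by the world
    return max(0.0, prev_a - alpha * t_care)   # assumed floor at zero

a = 0.9  # a distressed robot
for step in range(4):
    a = arousal(a, sur=0.0, mas=0.0, t_care=1.0)
    print(f"step {step}: A = {a:.2f}")  # stroking calms it down
```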

Your robot is now ready for action, based on its emotional state. Low arousal eventually leads to boredom, so you can program your robot to start turning around in search of new stimuli if average arousal has been low over the last few time steps. High levels of arousal indicate distress, so you can make your robot look around for the caregiver, or even "bark" to attract attention, when recent average arousal has been high. If arousal has been neither too low nor too high, then the robot is quite happily entertained by what's going on, and you can program it to simply keep on looking and learning.
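A behaviour rule of this kind can be written in a few lines. The window length and the boredom and distress thresholds below are invented for illustration.

```python
from collections import deque
from statistics import mean

WINDOW = 10            # how many recent time steps to average over
LOW, HIGH = 0.2, 0.8   # illustrative boredom and distress thresholds
recent = deque(maxlen=WINDOW)

def choose_action(current_arousal):
    """Pick a behaviour from the average arousal of recent steps."""
    recent.append(current_arousal)
    avg = mean(recent)
    if avg < LOW:
        return "turn around in search of new stimuli"  # bored
    if avg > HIGH:
        return "look for the caregiver and bark"       # distressed
    return "keep looking and learning"                 # content

for a in [0.1, 0.1, 0.9, 0.9, 0.9]:
    print(f"A = {a}: {choose_action(a)}")
```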

Peekaboo!

In 2009 Cañamero and Hiolle, together with Kim Bard, released simple robots that had been programmed along these lines into the wild. They let them loose in the Science Museum in London, asking people to interact with them and recording their responses. The robots sat on a playmat together with a set of toys to explore. They were able to recognise their caregivers using face recognition technology and to feel their touch using contact sensors on their bodies. They could turn around to look for new adventures and bark to attract attention, and flashing LEDs on their heads indicated their level of stimulation.

"Even though the [people who played with the robots] varied greatly in terms of age and familiarity with the technology, they all engaged in the interaction and reported a high level of enjoyment," the researchers said in their report on the experiment. "This demonstrates how such a simple setting is sufficient to trigger engagement from an adult independent of age, gender, or knowledge of how robots function."

Interestingly, museum visitors preferred the robot that had been programmed to be needy over the more independent one. "The majority of the subjects classified the non-needy robot as boring, less entertaining and even frustrating, since the robot did not seem to solicit and react to them," said the researchers.

Using a caregiver as a safe base from which to explore is just one aspect of the infant-caregiver bond. Another is the ability to form that bond in the first place and to adapt its strength according to the level of care the infant receives. Based on the early attachment process that human and chimpanzee infants undergo when they develop a preference for a primary caregiver, Cañamero and her colleagues have developed programs which endow robots with just that ability. The new prototypes revealed this week can adapt to the actions and mood of their human caregivers and become particularly attached to an individual who interacts with the robot in a way that suits its personality and needs. The more the robot and caregiver interact, and the more appropriate the feedback and engagement the caregiver provides, the stronger the bond becomes and the more the robot learns.

What's more, the new prototypes can express their emotions in a slightly more human way than simply using barks or flashing LEDs. They can hunch their shoulders, raise their arms, or cower in fear. "We are working on non-verbal cues and the emotions are revealed through physical postures, gestures and movements of the body rather than facial or verbal expression," says Cañamero.

But cute as the new baby robots may be, can these machines ever become more than crude imitations of the real thing? Some scientists believe that a few well-chosen behavioural rules can indeed give rise to complex behaviours that are more than the sum of their parts. The idea is that complex behaviours are emergent phenomena: their complexity stems not from the rules that govern them, which can be quite simple, but emerges from the interaction of those rules. It's a bit like the weather: it's easy to describe the individual components that drive it, such as pressure and temperature, but once they get going and interact, there's no way of predicting the outcome more than a few days in advance.

If human cognition, human emotions and human intelligence are indeed emergent phenomena built on a few simple rules, then we may one day be able to build robots that are to all intents and purposes human. It's all about identifying the right set of rules.


More information

The new robots were developed as part of the interdisciplinary project FEELIX GROWING (Feel, Interact, eXpress: a Global approach to development with Interdisciplinary Grounding), funded by the European Commission and coordinated by Lola Cañamero.

The following two papers describe the arousal model mentioned in this article and the experiment in the Science Museum:

The paper Attachment bonds for human-like robots describes how robots can be programmed to develop bonds with the person they first see at "birth".

The paper Constructing Emotions provides a view of emotions as emergent phenomena.

Comments


It's a bit sci-fi, but could building machines that can develop empathy and feelings lead to systems that have an element of self-interest? I realise it's a common theme for movies to have machines with a conscience (good machines) and machines with self-interest (bad machines), but how do engineers and programmers promote and develop this technology without scaring the public?

Although many scientists agree on the benefits of genetic engineering, there is a lot of negativity towards it amongst the general public, fed by a never-ending stream of scare stories, films and so on. I've met people who believe that the "Matrix" could be reality.

And you are very funny, too. The movie "AI" comes to mind. Part of me wants to own one, but like you say, what if it feels I don't play with it enough? Will it beat my face in? And will it age along with me, or will it beat me in the face when I'm an old fogy?