SOCIAL ROBOTICS AND PERSUASIVE TECHNOLOGIES, BY FABIO FOSSA

In this article, scholar and researcher Fabio Fossa elaborates on the problem of using bias – the stereotypes that shape our perception of the world – to promote the acceptance of robots, a technique explicitly employed in social robotics. The aim is to facilitate the introduction of robots into various social contexts, i.e. to ensure that the human user does not perceive too wide a gap between the interaction they are used to – that with other humans – and that with the artificial system.

But, the author asks, is exploiting bias in design to persuade users to adopt social robots an ethical practice? What are the risks, and which human rights might not be respected?

 

SOCIAL ROBOTICS AND PERSUASIVE TECHNOLOGIES

by Fabio Fossa *

It is often thought that the philosophy and ethics of technology are only there to set limits on technological research. This seems to me to be a very limited view. Certainly, limits and boundaries are useful when circumstances require them. But studies on the ethics of digital technology and robotics cannot be reduced to this. There is much more to it than that. Ethical analysis is proactive, seeking integration with engineering and design knowledge. It is about understanding how the technologies we build interact with us in order to develop them better, i.e., in a way that minimises risk and promotes social and individual well-being.

Right now, I am working on various topics related to the ethics of autonomous driving at the Department of Mechanical Engineering of the Politecnico di Milano. However, part of my past research, on which I continue to work, has to do with social robotics. In particular, I am interested in how specific design choices can influence the perception and mental model that users form of a given technological system. This could be done, for instance, to promote acceptance of, or trust in, a robot. The question is: what is permissible in this respect? Where is the line between legitimate influence and manipulation, between support and deception? And what might the consequences be?

 

Exploiting design bias? This is the question

Asking these questions is particularly important because one of the ways to make robots more familiar and user-friendly is to exploit so-called biases, the stereotypes through which we simplify reality and orient ourselves in the social world.

Biases are mental associations that are deeply embedded in the way we interpret the social world. They help us to deal with its complexity. Through them we form expectations about the roles, competences and less obvious characteristics of our fellow human beings. These associations are deeply wired into our brains, which says nothing about their actual soundness; sometimes they are completely unfounded. They work a bit like shortcuts. Shortcuts often get us where we want to go faster and more easily. Other times they lead us astray, and we end up causing harm or getting hurt.

Among the most common are gender biases. For example, we associate caring and understanding with the female gender, and authority and competence with the male gender. Dr Irene Sucameli of the University of Pisa and I asked ourselves: what happens when these biases are used to optimise human-machine relations, for example in the case of conversational agents?

This is not a hypothesis. Exploiting biases concerning our perception of the world to promote robot acceptance is a technique used in social robotics, even explicitly. The aim is to facilitate the introduction of robots into various social contexts, i.e. to ensure that the human user does not perceive too wide a gap between the interaction they are used to – that with other humans – and that with the artificial system.

If continuity between these two worlds can be maintained, users will be able to interact with robots in much the same way as they expect to interact with people. They will not feel the gap too keenly, will accept the system, and will use it as the designers anticipated. A simplistic but effective example might be this: when designing a social robot for home care, where care and understanding are essential, it is advisable to adopt shapes, tones of voice and features usually associated with the female gender. In this way the users’ expectations, although based on prejudices, will be satisfied and the interaction will be successful.

Now, if we think that social robots can bring important benefits to society, we will be motivated to use this strategy to facilitate their acceptance. But what effects do we risk causing?

The suspicion that we are playing with fire seems legitimate. Many stereotypes spread socially without any rational basis. We find them ready-made and reproduce them not only (and perhaps not so much) out of explicit conviction, but also implicitly, as a reflex. Of course, some of these stereotypes may be harmless. Others, however, may convey unjust, offensive and discriminatory beliefs about certain social groups. Consider, for example, the prejudice that competence and authority are masculine traits: a historical and cultural legacy that continues to affect the status of women in our society. Should we not at least admit the possibility that indulging these stereotypes through the design of social robots may end up consolidating and institutionalising them, when it would be much better to get rid of them?

 

What to do about bias?

Of course, the doubt I have just raised rests on an assumption that not everyone accepts. The question only makes sense if one admits the possibility that biased dynamics may transfer from interactions with robots to interactions between humans. Some researchers argue that it is a mistake to admit this permeability between the two dimensions. Others, however, point out that the very idea of using bias as a design resource presupposes that the separation between the two forms of interaction is not so clear-cut: if interactions with robots had nothing to do with interactions between humans, exploiting biases would make no sense in the first place. And if it makes sense to exploit biases to facilitate interactions between humans and robots, then one must also admit the possibility that the same biases can be transferred back into human-to-human relations.

Let’s consider a hypothetical example. A team of programmers needs to develop a conversational agent to manage the most common pathologies in order to reduce the workload of general practitioners. In order to choose the avatar’s characteristics, the team conducts empirical research which shows that the public associates the image of a competent and authoritative doctor with that of a white man in his 60s with white hair and a beard. Consequently, the team adapts the avatar to the users’ expectations, so as to facilitate acceptance and simplify interaction. In doing so, however, the team consolidates users’ prejudiced expectations, reinforcing the association between a certain gender, a certain skin colour, a certain age, and professional qualities such as authority and competence – an association that is unlikely to stand up to scrutiny. What impact will this technology have on patients’ confidence in, for example, young female doctors of colour starting out in the profession? Will the team have contributed to the transmission and consolidation of prejudiced and discriminatory expectations?
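Purely as an illustration of the mechanism at work in this hypothetical, the short Python sketch below shows the “design to expectations” step in its crudest form: collect survey associations and pick, for each attribute, the value most respondents chose. The survey data, attribute names and the function majority_expectation_avatar are invented for the example, not drawn from the article or any real project; the point is only that a majority-based choice mechanically reproduces whatever stereotype the respondents already hold, which is exactly where the consolidation discussed above would enter.

```python
from collections import Counter

# Hypothetical survey: which avatar did each respondent associate with a
# "competent and authoritative doctor"? (Illustrative data only.)
survey_responses = [
    {"gender": "male", "age_band": "60s", "skin_tone": "white"},
    {"gender": "male", "age_band": "60s", "skin_tone": "white"},
    {"gender": "female", "age_band": "30s", "skin_tone": "dark"},
    # ... more responses
]

def majority_expectation_avatar(responses):
    """For each attribute, pick the value most often chosen by respondents.

    This mirrors the design move described in the text: the avatar simply
    reflects the majority expectation, so any stereotype present in the
    survey data is reproduced (and potentially reinforced) by the design.
    """
    attributes = responses[0].keys()
    return {
        attr: Counter(r[attr] for r in responses).most_common(1)[0][0]
        for attr in attributes
    }

if __name__ == "__main__":
    print(majority_expectation_avatar(survey_responses))
    # -> {'gender': 'male', 'age_band': '60s', 'skin_tone': 'white'}
```

A counter-stereotypical design, of the kind mentioned below, would intervene precisely at this selection step instead of simply returning the majority choice.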

If the doubt seems well-founded, it is natural to ask how best to proceed. A first possibility would be to eliminate the use of bias from the design of robots altogether. However, this is an extremely difficult option to pursue: given the pervasiveness of bias, there are good reasons to doubt that it is even possible, and the risk that the product will be rejected by users is high. We could then propose to make discriminatory biases off-limits and allow the use of the more innocuous ones. But who should distinguish between discriminatory and harmless biases? The companies, the designers, the legislator, the policy makers? In the latter case, how can we ensure that lengthy regulatory processes do not hamper the research and development of robots?

Another possible avenue is to devote the design of social robots not only to fostering their acceptance, but also to actively countering social discrimination, by designing robots that challenge and question users’ prejudiced expectations. This is an interesting idea, but one that raises many questions of social ethics: which biases could legitimately be acted upon in this way? And who should decide? Aren’t we undermining the autonomy of users, aren’t we manipulating them without their knowledge, albeit for a good cause?

 

Social robotics and deception

In a sense, the exploitation of bias to facilitate the acceptance of social robots can be described as a form of deception. Deception is an important issue in the ethics of social robotics. I became more aware of this when I wrote a chapter on this very topic for the book Automi e Persone. Introduzione all’etica dell’intelligenza artificiale e della robotica, which I edited together with Viola Schiaffonati and Guglielmo Tamburrini for Carocci and which was published in October 2021.

Some scholars have spoken out strongly against the idea of using robots to meet social needs – for example, to alleviate the loneliness of elderly and isolated people. The most common criticism is that the whole practice amounts to deceiving people who are vulnerable, and therefore prone to falling prey to manipulation. The need for human contact felt by lonely people cannot be satisfied by social robots; passing robots off as a substitute for it is deceptive and disrespectfully exploits conditions of distress and vulnerability.

This is a very strong criticism, and it casts the empirical studies conducted in this area in a rather different light. Many of these studies show that the sense of well-being of the subjects involved actually improves when they are allowed to interact with social robots, even in quite basic ways. One of the first studies in this direction was carried out with Paro, a robot in the shape of a baby seal that makes noises and moves its body in response to being stroked. It has been used in retirement homes and hospitals to alleviate the loneliness of the elderly, many of whom suffer from Alzheimer’s or dementia.

The empirical data on subjects’ perceptions of their own well-being when provided with social robots can be challenged by pointing to their extreme need for human contact. The people involved in the experiments are highly motivated to be deceived, so it is at least legitimate to doubt their self-awareness. In short, empirical data are not enough to resolve the issue: those who are not convinced will say that the deception worked all too well! But it remains to be shown that the subjects’ real needs have genuinely been met, that their dignity has been respected, and that their well-being has been pursued as it deserves. If we add to this the obvious commercial interests behind social robotics, the matter becomes even more complicated.

An interesting point, to me, is that it is very often assumed that the vulnerable person is being deceived unknowingly, that they are in a sense at the mercy of the social robot. But why not instead assume that they are consciously managing their own world of make-believe? The experience of conscious self-deception is not that unusual. We all allow ourselves to be fooled by fictions, for example when scenes of life depicted on a screen or printed on the pages of a book move us as if they were real. Or when the dynamics of a game absorb us completely and we let ourselves be carried away by its internal logic. Even if we are aware that we are immersed in a fiction, the gratifying and beneficial effect does not disappear. On the contrary, I would say.

The crucial point here is the awareness of deception. Unlike the placebo effect, where there is no awareness of the fiction, in play or in some aesthetic experiences we know that the structure is fictitious, but the more seriously we take it, the more rewarding the experience is. And it does not seem to me that in this case there are problems to be raised with respect to personal dignity and autonomy. Why not think that the same could happen with the fiction of social robotics?

However, it remains true that in our debate the focus is on vulnerable subjects, in whose case it is difficult to establish to what extent they are actually able to handle experiences of conscious self-deception. Having said that, it still seems sensible to investigate the issue from this perspective as well.

 

Deception and appearance: the case of anthropomorphism

It can then be noted that some deceptions and fictions are deeply rooted in the way we deal with the complexity of the environment in which we live. If certain appearances are an intimate part of how our minds work, and some of them affect the way robots are perceived and interpreted, perhaps they should not be rejected outright in design. Perhaps it is better to understand how to recognise them, minimise the damage that can follow from them, and exploit the benefits they offer. Here, ethical reflection clearly has a crucial role to play, one that cannot be adequately fulfilled by merely drawing limits and setting boundaries.

An example here is anthropomorphism, the tendency to interpret phenomena according to categories and patterns that pertain to human life. It is an attitude so deeply rooted in us that it seems to be a constitutive part of our way of existing and interacting with the environment. In the case of robotics, anthropomorphism is so natural that trying to oppose it through specific design choices is a hopeless endeavour. We need to learn to come to terms with it, to manage this tendency in a way that helps us to promote the benefits and reduce the risks of social robotics. That means designing robots in a way that supports our tendency to anthropomorphise, but only to the right extent.

As in the case of prejudice, the benefits of anthropomorphism are in fact linked to exploiting it in an intelligent and responsible way. It allows us to approach the robot not as something totally foreign and alien, but as an interlocutor that is, on the whole, familiar, even though different from us. The risks, on the other hand, arise when anthropomorphism is not dosed carefully, so that the user either rejects the robot as a foreign body or loses awareness of the differences that distinguish it from human beings – to the point of treating a tool as if it were a person. The problem is to define the optimal level of anthropomorphism, which depends on the careful handling of the robot’s various design elements. This is a very delicate issue, but a crucial one if the user is to remain aware of the fiction and enjoy the associated benefits.

The question remains of how to define the optimal level of anthropomorphism, and it reveals some serious difficulties. On the one hand, the question seems mainly empirical: one has to experiment. But one cannot expect to do so without ever making a mistake, and in such a sensitive area every error can carry considerable moral harm. It is a complex issue, but one that needs to be addressed, to prevent a more unscrupulous approach from gaining the upper hand – one in which it is users at large who pay the price for experiments in anthropomorphism.

 

Fabio Fossa is a post-doc research fellow at the Department of Mechanical Engineering of the Politecnico di Milano, where he works on the philosophy of artificial agents and the ethics of autonomous vehicles. His research focuses on applied ethics, philosophy of technology, ethics of robotics and AI, and the thought of Hans Jonas. He is the Editor-in-Chief of the journal InCircolo and a founding member of the research group Zetesis.
