Sep 14, 2022
(Nanowerk News) The digital and physical worlds are becoming more and more populated by intelligent computer programmes called agents. Agents have the potential to intelligently automate many daily tasks, such as maintaining an agenda, driving, or interacting with a phone or computer. However, there are many challenges to solve before getting there. One of them is that agents need to recognize and express intentions, Michele Persiani shows in his thesis in computing science, defended at Umeå University, Sweden.
In the future, many of our electronic devices will be populated by at least one agent. However, before that can happen we need to engineer them for human interaction, that is, to make them behave in a way that is understandable to us and to make them understand what we want from them.
In his thesis “Expressing and Recognizing Intentions”, Michele Persiani addresses parts of these challenges by focusing on intentionality. Intentionality is useful to consider when agents perform goal-directed behavior, initiating sequences of actions with the goal of achieving something. Whenever an agent enacts such a behavior, it is crucial that its collaborators understand what it is doing, what its goal is and how it will achieve it. Otherwise, misunderstandings can lead to dangerous situations in which the human is unaware of what the robot is doing, and vice versa.
“This need for understanding arises simply because we can’t have a powerful machine next to us and have no idea of what it is doing,” he says. “Imagine it’s 2050: you wake up in the morning and your butler robot is busy doing something, but you have no idea what it is. Is it cleaning the floor? Preparing a meal? Let’s hope it’s not throwing the cat out of the window.”
In his thesis, Michele Persiani uses an established model from everyday psychology referred to as Theory of Mind, which explains how computer agents can think about other agents and thus form hypotheses about their goals and beliefs. This model is complemented by another important concept from everyday psychology, rationality:
“Applied to our case, we will make the robot think about our goal by making it assume that we are driven by rationality, and vice versa, we will make it reason about how we are trying to understand its goals by assuming that we think it is a rational being,” he says.
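In research on plan and intent recognition, one common way to turn such a rationality assumption into goal recognition is to assume that the observed agent prefers efficient plans, so goals whose best plans match the observed actions become more probable. The short Python sketch below illustrates this generic idea; the goals, costs and Boltzmann-style weighting are illustrative assumptions, not the specific model developed in the thesis.

```python
import math

# Hypothetical goals, each paired with the extra cost the observed actions
# imply for that goal (0.0 means the observed actions lie on an optimal plan).
extra_cost = {
    "prepare_meal": 0.5,
    "clean_floor": 3.0,
    "water_plants": 6.0,
}

BETA = 1.0  # assumed rationality: higher values mean the actor deviates less from optimal plans

def goal_posterior(extra_cost, beta=BETA):
    """P(goal | observed actions) for a Boltzmann-rational actor with a uniform goal prior."""
    weights = {goal: math.exp(-beta * cost) for goal, cost in extra_cost.items()}
    total = sum(weights.values())
    return {goal: weight / total for goal, weight in weights.items()}

print(goal_posterior(extra_cost))
# "prepare_meal" gets the highest probability: a rational actor would deviate
# least from an optimal plan for the goal it is actually pursuing.
```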
These processes of recognizing and expressing intentions are quite complicated, and previous research has commonly treated them with distinct sets of techniques. However, the thesis shows how the two processes can share a common underlying computational architecture, within which they are duals of each other, not only in words but also in the formulas. This is an attempt at unification in a single computational architecture, encompassing what it means to reason about intentionality when more than one agent is involved, and it will hopefully be a milestone for research in the area, Michele Persiani says.
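To give a sense of what such a duality can look like computationally, the sketch below reuses the same kind of noisy-rational observer model in both directions: recognition asks which goal best explains the observed actions, while expression picks the action that makes the agent’s true goal easiest for that observer to recognize. The functions, actions and costs are again illustrative assumptions rather than the architecture presented in the thesis.

```python
import math

def observer_posterior(costs, beta=1.0):
    """Recognition: the observer's belief over goals, assuming a Boltzmann-rational actor."""
    weights = {goal: math.exp(-beta * cost) for goal, cost in costs.items()}
    total = sum(weights.values())
    return {goal: weight / total for goal, weight in weights.items()}

def most_legible_action(candidate_actions, true_goal):
    """Expression: choose the action that maximizes the observer's belief in the true goal."""
    return max(
        candidate_actions,
        key=lambda action: observer_posterior(candidate_actions[action])[true_goal],
    )

# Hypothetical remaining costs toward each goal after taking each action.
candidate_actions = {
    "pick_up_pan":    {"prepare_meal": 0.0, "clean_floor": 5.0},
    "pick_up_sponge": {"prepare_meal": 2.0, "clean_floor": 1.0},
}

print(most_legible_action(candidate_actions, true_goal="prepare_meal"))
# "pick_up_pan": the same observer model that recognizes goals is inverted here
# to pick the action that signals the cooking goal most clearly.
```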
“The integration of intelligent agents into our daily lives is going to be a long process, and we should be optimistic,” he says.