11 May 2023

How to remain Human in the World of Artificial Intelligence

Prof. Ryszard Tadeusiewicz: Thanks to Artificial Intelligence, we can build a future where everyone lives with dignity. But for the world to have justice, we must first believe in that justice.

Author: Maria Mazurek



Artificial Intelligence is already changing the world, and this is just the beginning. A few days ago, US President Joe Biden met with representatives of the most powerful AI companies, calling on them to act responsibly and protect people. Do we have much to fear? An interview with Prof. Ryszard Tadeusiewicz, roboticist and computer scientist, multiple-term rector of the AGH University of Science and Technology in Krakow, popularizer of science, specialist in Artificial Intelligence, publicist, and member of the Forecast Committee of the Polish Academy of Sciences. Prof. Ryszard Tadeusiewicz is also a member of the HTT’s Scientific Board.

Your house is surprising.
Why?

I would expect a specialist in robotics and Artificial Intelligence to have an apartment filled with the latest technology. Yours is filled with art.
For me, the primary source of admiration is the human being and what human hands have created. I leave the robots in the laboratory.

An engineer who is in love with a human?
For an engineer, too, a human should be central. Technology was created to serve man.

Yet some people have been feeling cornered by it lately. Rightly so?
It depends on what they fear.

Let’s start with the darkest scenario: the enslavement of humans by robots, as predicted by many writers or filmmakers.
This vision is intellectually stimulating, but let’s leave it to science-fiction writers. To stay within rational thinking, we must start with the fact that Artificial Intelligence – although built on the model of the human intellect – is free of instincts, dreams, emotions, and desires. Besides, to fight us, it would first need a reason to compete with us for resources. And what would robots want from us? Money? They don’t need it. Our homes? Neither. Food? After all, they don’t feel hunger. Positions and functions? A robot doesn’t need prestige or to have its ego flattered, because it simply doesn’t have one.

And if I ask you, do we have reason to fear for our jobs?
Here the matter is more complicated. For years now, we have been slowly getting used to the robot displacing the laborer, becoming familiar with the sight of deserted production halls in the automotive or armaments industries. Of course, at this point, not all mechanical work can yet be done by robots, but this will change. A few days ago, I was at a presentation of a strawberry-picking robot created by scientists from the Agricultural University of Cracow. Until now, this was too specialized and delicate a job for a machine to take over: you have to search for these strawberries, carefully pick them, and on top of that, select the ripe fruit, leaving the green or rotten ones on the bed. Today, machines can do this. And they will be able to do more and more.

While we’ve been getting used to machines taking jobs away from “blue collars” – that is, labor, production, or lower-level administrative workers – what’s new is that Artificial Intelligence is taking over work done by intellectuals: journalists, scientists, lawyers. It is estimated that the courts will soon be flooded with AI-written lawsuits. Scientists have been using technology to collect and analyze data for some time, but until now, drawing conclusions from that data has remained their job. This is about to change. As for journalists or writers, on the other hand…

ChatGPT has appeared, and it’s pretty good at writing.
Pretty good? It is excellent at it! But if you ask my opinion, I will always prefer texts written by a human. They are imperfect and unique in a charming way, and you can find that “something” in them: a piece of humanity you won’t find in texts created by language bots. Of course, this applies to in-depth reports, interviews, or fiction. When it comes to simply summarizing information or writing a sports report in a hurry, Artificial Intelligence, if it is not already better than a human, soon will be.

We have talked about the future and Artificial Intelligence many times, but that was a few years ago. Back then, it was a niche, somewhat futuristic issue. Now it is the subject of a broad social discussion. Is this just an impression, or has the development of Artificial Intelligence accelerated strongly in recent months?
Both yes and no. Language models are not a new idea: for decades, scientists, myself among them, have been analyzing language structures and designing such models. For the past 30 years, we have been treading the ground, and now humanity is beginning to build impressive structures on it. The development of Artificial Intelligence is a matter of the last three decades. Except that the first AI systems were concerned with proving mathematical theorems. That’s impressive, but let’s be honest: not very practical. And even less attractive to people outside the field. Then, slowly, expert systems began to appear, into which the knowledge of experts in a particular field is “loaded.” The expert provides the knowledge, the user asks the question, and the algorithm quite efficiently associates the facts and provides the answer. Now we have language chat systems that can write highly efficiently, converse with us, and build surprisingly brilliant and accurate answers. And people, sometimes dishonestly, are increasingly willing to use them. I often receive various articles prepared for scientific journals or congresses to review. Do you know what I am most often asked by the editors who send me such texts?

Do you suspect that Artificial Intelligence wrote them?
Exactly so. Teachers face a similar problem when assigning written papers to students.

How can they check whether students or a language model wrote them?
Technically, it’s almost impossible.

Then what can they do?
Talk to each student. Ask him about the content of the essay, why he arrived at this conclusion and not another, and what his own thoughts are. If I were a school teacher, this is what I would do. Of course, a student can sometimes come across credibly in such a conversation even though he used a language chatbot. But if he did, it means that he at least absorbed the content he produced, thought about it, and analyzed it.

Or does asking students to do written work, in the age of ChatGPT, simply no longer make sense?
I believe it does make sense. Written works in education serve several important functions: they test a student’s knowledge, improve his use of language, and teach him to formulate conclusions on his own. If a student uses a language model, there is a danger that he will not gain these competencies. Secondly – and perhaps this is even more dangerous – if he learns to cheat in this matter, there is a risk that he will also cheat in others. We know from neuroscience and psychology that the school period is a time of intense formation and consolidation of behavioral patterns, values, and life attitudes. If we think with concern about the future of the world, including in the context of new technologies and Artificial Intelligence, then we must be serious about shaping children’s morality. You asked how we can “catch” them using Artificial Intelligence. And I would approach this from a completely different perspective…

Which one?
Trust them. Traditional education is not built on trust. Meanwhile, the social and psychological sciences point this out plainly: people who are given trust try not to betray it. In Scandinavian countries – among the few where schooling looks a bit different – students are not under such control. They could easily cheat. Even so, or rather precisely because of that, Scandinavian kids don’t do it. Why? Because they are trusted.

Should new technologies be present in schools?
Yes. In the past, the teacher was the master who passed knowledge on to students. Today, students can instantly find the answer to any question online. A school has to take this into account if it does not want to fall behind; otherwise, it will lose. So the teacher should become a guide: not so much the one who passes on knowledge, but the one who shows how to use it wisely and responsibly. And if he is to do so, technology should be present in the school.

At the same time: the more we are surrounded by technology, the more we should cultivate humanistic attitudes in young people, and the more we should care for the moral backbone of the younger generation. In a world full of robots, we should remain Human.

What does it mean to you: to be Human?
To love. To care about others. To be responsible – for yourself, your loved ones, and the world. To nurture humility and serenity in yourself. To wish others well. You mentioned that more and more people fear Artificial Intelligence. I think that people can only be harmed by other people. Artificial Intelligence is only a tool – yes, a potentially dangerous one – in human hands. In this sense, it is not much different from a hoe with which we dig up the garden. Both serve a human purpose. The main difference is that it is much harder to hurt someone with a hoe: you need direct contact with him, you have to use force, and you may well be noticed. Using AI, or technology in general, we can destroy someone anonymously with a click of the mouse. I’m not just talking about hacking attacks, data theft, and cybercrime. Sometimes it is enough to slander, ridicule, or accuse someone anonymously.

And if robots are to replace us at work, we will be left with free time and frustration to discredit others online.
Time: perhaps. Frustration: not necessarily. I think that those who want to work will find work. Yes, many of the professions in which we are employed today will soon disappear, or only a few workers will remain in them. However, I am convinced that even more will appear, professions I still cannot name or even imagine today. Today’s change is, in a sense, a repeat of one that humanity has already experienced, and relatively recently: the first industrial revolution. Before the weaving machine, all fabrics were woven by hand, and clothes were later sewn from those fabrics, also by hand. When mechanical weaving machines appeared, weavers faced unemployment. They were terrified and angry enough to start destroying the machines en masse and setting fire to the factories. The labor market did change as a result of that revolution, it’s true – but in retrospect, can we conclude that it was for the worse? Does anyone miss doing the job of a weaver? It was hard work. In its place, a new profession emerged: clothing designer. In the same way, new professions will appear in place of those that will soon disappear. In a few decades, we will probably be traveling in autonomous cars, without a driver. I can easily imagine driverless cabs, even air cabs. Consequently, there will be a need to completely reorganize the interiors of our vehicles, perhaps along the lines of mobile lounges. And if so, designers of such machines and their interiors will probably emerge.

Agreed, except that – just as there are fewer clothing designers than there were weavers – we will need fewer designers of autonomous cars than we now need professional drivers.
I didn’t say the work would be for everyone. I said it would be for those who want it.

What about the others?
I imagine that they will be provided with a steady income that will allow them to live with dignity without being employed. Those who work or otherwise make above-average contributions to society will be given additional bonuses.

It sounds like a utopia.
From today’s perspective, yes. But if we look at the economy on a macro scale, technological development makes it possible to produce far more goods than today. So devising this world anew is, first of all, a matter of social justice and responsible distribution of resources; this, among other things, is what we do at the Forecast Committee of the Polish Academy of Sciences. People are afraid of change. That’s natural. It was the same during the first industrial revolution, and during the agrarian revolution as well. Yet I encourage people to see the coming changes as an opportunity, not a threat. I want to believe that, with the development of Artificial Intelligence, the world will become a place where every person can live with dignity.

It may be economically feasible, but my doubt is about something else: is this social justice even possible? Aren’t we facing a repeat of what we have experienced many times in history: that change will only deepen social inequality?
For there to be justice in the world, we must first believe in that justice. The perspective we adopt shapes tomorrow. If we assume the world will not get better, it probably won’t. But we can also build the kind of future we dream of. And here we return to school education: if we pass on the right values to children, then young people – or at least most of them – will live according to those values. And build a fair, better future: a future in a world of Artificial Intelligence that serves the good of all people, not exclusion or destruction.