
“If you rely too much on AI, your identity will be remade before you know it,” an ethicist warns – GIGAZINE




In recent years, the rise of AI has expanded the possibilities of technology, but it has also raised moral, ethical, and philosophical questions. Muriel Leuenberger, who researches the ethics of AI at the University of Zurich in Switzerland and contributed to “AI Morality,” a book collecting essays on the moral dilemmas posed by AI, argues that “if we rely too much on AI, there is a risk that our own identities will be arbitrarily remade.”

AI ‘can stunt the skills necessary for independent self-creation’: Relying on algorithms could reshape your entire identity without you realizing | Live Science
https://www.livescience.com/technology/artificial-intelligence/ai-can-stunt-the-skills-necessary-for-independent-self-creation-relying-on-algorithms-could-reshape-your-entire-identity-without-you-realizing


In today’s society, all kinds of services and apps collect information about who you are friends with, who you have talked to, where you have been, what music, movies, and games you like, what news you have read, and what you have bought with your credit card. This information is already used in AI-powered recommendation features, and large companies such as Google and Facebook can use it to predict things like a person’s political opinions, consumer preferences, personality, employment status, whether they have children, and whether they have a mental illness.

As the use of AI and the digitalization of our lives continue to advance, a future in which AI knows people better than they know themselves is becoming a reality. “An individual user profile generated by an AI system may be able to describe that person’s values, interests, personality traits, prejudices, mental illness, and so on more accurately than the user themselves,” Leuenberger said. “Technology can already provide people with personal information they did not even know about themselves.”

If AI knows more about a person than they know themselves, it seems reasonable to rely on it when choosing a partner or friends, the next job, the parties to attend, the house to buy, and so on. However, Leuenberger argues that relying too much on AI raises two problems: trust and self-creation.


◆How can we trust AI?
For example, suppose friend A introduces you to potential partner B. Before actually meeting B, you should ask yourself whether you can trust friend A. If friend A is drunk, their judgment may be impaired by alcohol, and if friend A’s own love life has been a series of failures, you may want to be cautious. How well friend A knows B, and why they made the introduction, are also important factors.

It is difficult to take these factors into account even when the recommender is a human, and it becomes even more difficult when the recommender is an AI. It is hard to know how well an AI knows you and whether the information it holds can be trusted, and many AI systems are also known to contain biases, so it is wise not to trust AI blindly.

Also, when dealing with a human, you can ask, “Why did you think so?” But this is often not possible with an AI recommendation system that has no chat function, which makes its intent difficult to assess. The algorithms behind AI decisions are typically proprietary to the company and inaccessible to users, and even when they are accessible, they require specialized knowledge to understand. Furthermore, Leuenberger notes that AI behavior has a “black box” quality that even its developers cannot fully understand, making it nearly impossible to interpret its intentions.


◆AI will eliminate the ability to create one’s own identity
Even if a fully reliable AI were to emerge, Leuenberger argues, there would still be concerns about people’s ability to create their own identities. An AI that tells a person what to do is built on the premise that identity is information that the user or the AI can access: in other words, the idea that “who a person is” and “what they should do” can be determined through statistical analysis of personal data and facts about psychology, social systems, human relationships, biology, economics, and so on.

However, this overlooks the fact that people are not passive subjects of their identities; identity is something they actively and dynamically create and choose for themselves. The philosopher Jean-Paul Sartre advocated an existentialism in which “existence precedes essence,” arguing that people have the freedom to envision their own identity and to act toward it.

Leuenberger said, “We are constantly creating ourselves, and this must be done freely and independently. Within the framework of certain facts, such as where you were born, how tall you are, and what you said to a friend yesterday, you are fundamentally free and morally required to construct your own identity and define what is meaningful to you. The goal is not to discover a single correct way of being, but to choose your own identity and take responsibility for it.”

It is true that AI can provide a quantified view of a person and guidelines for their behavior, but it is ultimately up to the individual to decide how to act and what kind of person to become through those actions. By blindly trusting and following AI, we give up the freedom and responsibility to create our own identity.

Leuenberger said that constantly relying on AI to find the music you like, the job you want, or the politician to vote for can inhibit the skills needed to build an independent identity. “Making good choices in life and building an identity that is meaningful and makes you happy is an achievement. By subcontracting this power to AI, we gradually lose responsibility for our lives and for who we are,” Leuenberger said.


Indeed, if we keep following AI recommendation systems, we may be able to live comfortable lives. However, doing so risks ceding the right to create our identities to large tech companies and organizations.

Choosing things for yourself can lead to failure, but being exposed to something that does not match who you are, or being thrown into an environment you are not comfortable with, can also be an opportunity for growth. Leuenberger gives the example that moving to a city you do not like can disrupt the rhythm of your life and prompt you to look for new hobbies, whereas constantly relying on an AI recommendation system can cause your identity to become fixed.

The rigidity of identity caused by reliance on AI is further reinforced when AI profiling becomes a “self-fulfilling prophecy.” In other words, as a person reshapes their identity to match the AI’s predictions, the things the AI recommends will increasingly match their preferences, and the identity formed by the AI becomes entrenched.
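As a rough illustration of this feedback loop (a toy sketch of our own, not something from the article; the genres, numbers, and update rule are all assumptions), the following Python snippet simulates a recommender that always serves whatever genre a user profile already favors, while each accepted recommendation nudges the profile further toward that genre:

```python
import random

# Toy model of a recommendation feedback loop (illustrative assumptions only).
# The "profile" is the system's estimate of how much the user likes each genre.
genres = ["jazz", "rock", "ambient", "hip-hop"]
profile = {g: 0.25 for g in genres}   # the user starts out indifferent
LEARNING_RATE = 0.05                  # how strongly each interaction shifts the profile

random.seed(0)

for step in range(200):
    # The recommender serves whatever the profile already favors most.
    recommended = max(profile, key=profile.get)
    # The user accepts most recommendations, which reinforces that genre.
    liked = random.random() < 0.9
    for g in genres:
        target = 1.0 if (g == recommended and liked) else 0.0
        profile[g] += LEARNING_RATE * (target - profile[g])

print({g: round(score, 2) for g, score in profile.items()})
```

Even though the simulated user starts with no preference at all, the profile collapses onto a single genre, because the recommendation itself generates the evidence that confirms it. That is the self-fulfilling dynamic described above.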

To prevent AI from reshaping your identity, Leuenberger recommends setting recommendation systems aside and choosing your own entertainment and activities. This requires researching options yourself and may be uncomfortable at times, but it can also be an opportunity for growth and a way to develop the ability to form your own identity.
