We fail to distinguish humans from artificial intelligence

It is difficult for us to tell whether a person or a machine has written a presentation. A study indicates that we do no better than chance. Archive image. Photo: Tim Arrow/TT

Technology

Phrases like “I usually walk in the woods” make us think that the texts were written by people.

Researchers are now suggesting a special tone for AI so we don’t feel cheated.

We think that texts containing complex and monotonous sentences are generated by a robot, but if a text draws on personal experiences or describes things in the first person, we conclude that it was written by a human being. That, at least, is the finding of an American study in which 4,600 people read a large number of texts.

The researchers used a number of different AI language models and trained them to write self-descriptive texts. The texts were of three types: job applications, presentations for dating sites, and host presentations on Airbnb, the home rental site. People were also asked to write corresponding texts, and a total of 7,600 presentations were used in the study.

Half right

On average, the participants guessed correctly in half of the cases. But there were patterns in what made them lean one way or the other. When the researchers asked them to justify why they thought a certain text was generated by artificial intelligence, a common answer was that it contained complex sentences.

If a text contained sentences in the first person and related to the writer's own experiences, participants believed it was written by a human. Language perceived as warm was associated with a human writer, and a more monotonous style with artificial intelligence.

Human intuition runs counter to the current design of AI-generated language. Existing AI language models generate text based on common formulations, while we seem to believe that AI-generated text must sound unusual and exotic, says Mor Naaman, one of the researchers behind the study, in an article in the Cornell Chronicle.

Feedback didn’t help

The researchers also tested offering one group of participants a reward for correct answers, simply to see whether it made them try harder and thus answer more accurately.

But the results were the same as for those who were not offered a reward. Nor did it matter whether participants received direct feedback on whether they had answered correctly or incorrectly; they were still right only about 50 percent of the time.

In the article, published in the scientific journal PNAS, the researchers discuss whether AI-generated text needs a special “AI dialect”, so that it is clear that it is not a human but, say, Chat GPT behind the text.

In that way, AI-generated text could fill an important role in communication, providing us with accurate information without making us humans suspect each other or creating uncertainty about whether a machine or a human is behind the content.

Petra Heidbaum/TT

Facts: Chat GPT

Chat GPT was created by Open AI and is a text bot trained to generate text. According to the developers, it can answer follow-up questions and correct itself.

The bot has learned language by analyzing vast amounts of text from the internet, and it made a major impact in the fall of 2022.

Facts: Open AI

Open AI was founded in 2015 by a group of tech entrepreneurs, including current CEO Sam Altman and Elon Musk, and is headquartered in San Francisco in the United States. Its stated mission is to build artificial intelligence that benefits humanity.

The company is currently working on the fourth version of the service, which is rumored to launch in 2023 and to be capable of more advanced things than the version currently available for testing.

