April 19, 2024

Why do AI chatbots tell lies and act strangely? Look in the mirror.

When Microsoft added a chatbot to its Bing search engine this month, people noticed it was providing all kinds of false information about Gap, Mexican nightlife, and singer Billie Eilish.

Then, when journalists and other early testers got into lengthy conversations with Microsoft’s AI bot, it descended into gruff, intimidating, and alarming behavior.

In the days since the Bing bot’s behavior became a worldwide sensation, people have struggled to understand the strangeness of this new creation. More often than not, scientists say humans deserve much of the blame.

But there’s still a bit of a mystery about what the new chatbot can do — and why it would do it. Its complexity makes it difficult to dissect and even harder to predict, and researchers view it through a philosophical lens as well as the hard code of computer science.

Like any other student, an AI system can learn bad information from bad sources. And the strange behavior? It may be a chatbot’s distorted reflection of the words and intentions of the people using it, said Terry Sejnowski, a neuroscientist, psychologist and computer scientist who helped lay the intellectual and technical foundation for modern artificial intelligence.

“This happens when you go deeper and deeper into these systems,” said Dr. Sejnowski, who published a research paper on the phenomenon this month in the scientific journal Neural Computation. “Whatever you are looking for – whatever you desire – they will provide.”

Google also showed off a new chatbot, Bard, this month, but scientists and journalists quickly realized it was writing nonsense about the James Webb Space Telescope. OpenAI, a San Francisco startup, kicked off the chatbot boom in November when it introduced ChatGPT, which doesn’t always tell the truth.

The new chatbots are powered by a technology scientists call a large language model, or LLM. These systems learn by analyzing vast amounts of digital text pulled from the internet, which includes plenty of discredited, biased, and otherwise toxic material. And the text the chatbots learn from is also somewhat outdated, because they have to spend months analyzing it before the public can use them.

As it analyzes this sea of good and bad information online, the LLM learns to do one particular thing: guess the next word in a string of words.

It works like a giant version of the autocomplete technology that suggests the next word as you type an email or instant message on your smartphone. Looking at the sequence “Tom Cruise is ____,” it might guess “actor.”
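
For readers who want to see the idea in code, here is a toy sketch in Python. Everything in it is invented for illustration: it counts which word most often follows which in a scrap of training text, then “autocompletes” with the most frequent follower. Real LLMs do this with neural networks over billions of patterns, but the underlying task is the same next-word guess.

```python
from collections import Counter, defaultdict

# Toy illustration only: count, for each word, which word most often follows it
# in a scrap of training text, then "autocomplete" with the most frequent follower.
# Real LLMs learn these patterns with neural networks, not raw counts.
training_text = (
    "tom cruise is an actor . tom cruise is famous . "
    "billie eilish is a singer . the telescope is in space ."
)

followers = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    followers[current_word][next_word] += 1

def guess_next(word: str) -> str:
    """Return the word that most often followed `word` in the training text."""
    if word not in followers:
        return "<unknown>"
    return followers[word].most_common(1)[0][0]

print(guess_next("cruise"))  # -> "is", the most frequent follower of "cruise"
print(guess_next("is"))      # -> "an" here; ties go to the first follower seen
```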

When you chat with a chatbot, the bot doesn’t just rely on everything it has learned from the internet. It relies on everything you have said to it and everything it has said back. It isn’t just guessing the next word in a sentence; it’s guessing the next word in the long block of text that includes both your words and its words.
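
Here is a minimal sketch of that mechanism, with a placeholder complete() function standing in for a real language model (the function names and the canned reply are invented for illustration): each turn, the user’s message and the bot’s own earlier replies are folded into one growing block of text, and the model simply continues that block.

```python
def complete(prompt: str) -> str:
    """Placeholder for a real language model: returns a continuation of `prompt`.
    It just echoes a canned reply so the example runs on its own."""
    return "I see. Tell me more."

def chat_turn(history: list[str], user_message: str) -> str:
    # The model does not answer the latest message in isolation; it continues one
    # long block of text containing the user's words and its own earlier words.
    history.append(f"User: {user_message}")
    prompt = "\n".join(history) + "\nAssistant:"
    reply = complete(prompt)
    history.append(f"Assistant: {reply}")
    return reply

history: list[str] = []
chat_turn(history, "Tell me something unsettling.")
chat_turn(history, "Go darker.")
# With every turn the prompt grows, so the user's earlier words keep steering
# everything the model says next.
print("\n".join(history))
```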

The longer the conversation goes, the more the user unwittingly influences what the chatbot says. If you want it to get angry, it gets angry, Dr. Sejnowski said. If you coax it into being creepy, it gets creepy.

The alarm over the bizarre behavior of Microsoft’s chatbot has overshadowed an important point: the chatbot has no personality. It delivers instant results spit out by an incredibly complex computer algorithm.

Microsoft appeared to curtail the strangest behavior when it placed a limit on the length of discussions with the Bing chatbot. That was like learning from a test driver that going too fast for too long will burn out the engine. Microsoft’s partner OpenAI and Google are also exploring ways to control the behavior of their bots.
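
A cap on conversation length can be sketched in a few lines of Python. Microsoft has not said exactly how its limit works, so the five-turn cap and the limited_reply helper below are assumptions about the general shape of such a control, not its actual implementation.

```python
MAX_TURNS = 5  # purely illustrative; the real limit and its enforcement are not public

def limited_reply(history: list[str], user_message: str, model_complete) -> str:
    """Stop a conversation once it exceeds the turn cap, forcing a fresh session
    before a long exchange can drift further off course."""
    turns_so_far = sum(1 for line in history if line.startswith("User:"))
    if turns_so_far >= MAX_TURNS:
        return "This conversation has reached its limit. Please start a new topic."
    history.append(f"User: {user_message}")
    reply = model_complete("\n".join(history) + "\nAssistant:")
    history.append(f"Assistant: {reply}")
    return reply

# Example use with a stand-in model that always answers the same way.
history: list[str] = []
for i in range(7):
    print(limited_reply(history, f"message {i + 1}", lambda prompt: "Understood."))
```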

But there is a caveat to this reassurance: because chatbots learn from so much material and piece it together in such a complex way, researchers aren’t entirely clear how chatbots produce their final results. Researchers watch what the bots do and learn to place limits on that behavior — often after it has happened.

Microsoft and OpenAI decided that the only way to find out what the chatbots would do in the real world was to let them loose — and reel them in when they strayed. They believe their big public experiment is worth the risk.

Dr. Sejnowski compared the behavior of Microsoft’s chatbot to the Mirror of Erised, a mysterious artifact in J.K. Rowling’s Harry Potter novels and the many films based on her fictional world of young wizards.

“Erised” is “desire” spelled backward. When people discover the mirror, it seems to provide truth and understanding. But it does not. It shows the deep-seated desires of anyone who stares into it. And some people go mad if they stare too long.

“Because the human being and the LLM both mirror each other, over time they will tend toward a common conceptual state,” said Dr. Sejnowski.

It wasn’t surprising, he said, that journalists began seeing creepy behavior in the Bing chatbot. Consciously or unconsciously, they were nudging the system in an uncomfortable direction. As the chatbots take in our words and reflect them back to us, they can reinforce and amplify our beliefs and coax us into believing what they are telling us.

Dr. Sejnowski was among a small group of researchers in the late 1970s and early 1980s who began seriously exploring a type of artificial intelligence called a neural network, which is driving today’s chatbots.

A neural network is a mathematical system that learns skills by analyzing numerical data. This is the same technology that allows Siri and Alexa to recognize what you say.
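
For the curious, here is a minimal sketch of that learning loop, using only numpy: a tiny two-layer network that learns the XOR pattern from four numerical examples by repeatedly nudging its weights to shrink its errors. The systems behind Siri, Alexa, and the chatbots are vastly larger, but the learn-from-numbers idea is the same in spirit.

```python
import numpy as np

# Four numerical examples (two inputs each) and their XOR labels.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 8))   # input-to-hidden weights
b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1))   # hidden-to-output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 0.5
for step in range(20000):
    # Forward pass: turn the numbers into a prediction.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: measure the error and nudge every weight to reduce it.
    grad_output = (output - y) * output * (1 - output)
    grad_W2 = hidden.T @ grad_output
    grad_b2 = grad_output.sum(axis=0, keepdims=True)
    grad_hidden = (grad_output @ W2.T) * hidden * (1 - hidden)
    grad_W1 = X.T @ grad_hidden
    grad_b1 = grad_hidden.sum(axis=0, keepdims=True)

    W1 -= learning_rate * grad_W1
    b1 -= learning_rate * grad_b1
    W2 -= learning_rate * grad_W2
    b2 -= learning_rate * grad_b2

# After training, the predictions should be close to 0, 1, 1, 0.
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```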

Around 2018, researchers at companies like Google and OpenAI began building neural networks that learned from vast amounts of digital text, including books, Wikipedia articles, chat logs, and other things posted online. By identifying billions of patterns in all of that text, LLMs have learned to create text on their own, including tweets, blog posts, speeches, and computer programs. They can even hold a conversation.

These systems are a reflection of humanity. They learn their skills by analyzing text posted by humans on the Internet.

That’s not the only reason chatbots generate problematic language, said Melanie Mitchell, an AI researcher at the Santa Fe Institute, an independent lab in New Mexico.

When creating text, these systems don’t repeat what is on the internet word for word. They generate new text on their own by combining billions of patterns.

Even if researchers trained these systems solely on peer-reviewed scientific literature, they might still produce statements that are scientifically absurd. Even if they learned only from text that was true, they might still produce falsehoods. Even if they learned only from text that was wholesome, they might still generate something creepy.

“There’s nothing stopping them from doing that,” Dr. Mitchell said. “They’re just trying to produce something that sounds like human language.”

AI experts have long known that this technology exhibits all kinds of unexpected behavior. But they can’t always agree on how to explain that behavior or how quickly the chatbots will improve.

Because these systems learn from so much more data than we humans can comprehend, even AI experts can’t understand why they’re generating a specific piece of text at any given moment.

Dr. Sejnowski said he believes that, in the long run, the new chatbots have the power to make people more efficient and give them ways of doing their jobs better and faster. But this comes with a caveat for both the companies building these chatbots and the people using them: they can also lead us away from the truth and into some dark places.

“This is uncharted territory,” Dr. Sejnowski said. “Humans have never experienced this before.”