
New chatbots can change the world. Can you trust them?

This month, Jeremy Howard, an artificial intelligence researcher, introduced an online chatbot called ChatGPT to his 7-year-old daughter. It had been released a few days earlier by OpenAI, one of the most ambitious AI labs in the world.

He told her to ask the experimental chatbot whatever came to mind. She asked what trigonometry was good for, where black holes came from and why chickens incubated their eggs. Each time, it answered in clear, well-punctuated prose. When she asked for a computer program that could predict the path of a ball thrown through the air, it gave her that, too.
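The article does not reproduce the program the chatbot wrote, but a minimal sketch of such a trajectory calculator, assuming simple projectile motion with no air resistance, might look like this:

```python
import math

def trajectory(speed, angle_deg, steps=20, g=9.81):
    """Return (x, y) points for a ball thrown at `speed` m/s and `angle_deg` degrees."""
    angle = math.radians(angle_deg)
    vx = speed * math.cos(angle)   # horizontal velocity stays constant
    vy = speed * math.sin(angle)   # vertical velocity decreases under gravity
    flight_time = 2 * vy / g       # time until the ball returns to launch height
    points = []
    for i in range(steps + 1):
        t = flight_time * i / steps
        points.append((vx * t, vy * t - 0.5 * g * t * t))
    return points

for x, y in trajectory(speed=15, angle_deg=45):
    print(f"x={x:6.2f} m  y={y:5.2f} m")
```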

Over the next few days, Mr. Howard, a data scientist and professor whose work inspired the creation of ChatGPT and similar technologies, came to see the chatbot as a new kind of personal tutor. It could teach his daughter math, science and English, not to mention a few other important lessons. Chief among them: Do not believe everything you are told.

“It gives me great pleasure to see her learn this way,” he said. “But I also told her: Don’t trust everything it gives you. It can make mistakes.”

OpenAI is among the many companies, academic labs and independent researchers working to build more advanced chatbots. These systems cannot chat exactly like a human, but they often seem to. They can also retrieve and repackage information with a speed humans never could. They can be thought of as digital assistants, like Siri or Alexa, that are better at understanding what you are looking for and giving it to you.

After the release of ChatGPT, which has been used by more than a million people, many experts believe these new chatbots are poised to reinvent or even replace internet search engines such as Google and Bing.

They can serve up information in tight sentences rather than long lists of blue links. They explain concepts in ways people can understand. And they can deliver facts while also generating business plans, term paper topics and other new ideas from scratch.

“You now have a computer that can answer any question in a way that makes sense to a human,” said Aaron Levie, chief executive of the Silicon Valley company Box and one of many executives exploring how these chatbots will change the technology landscape. “It can extrapolate and take ideas from different contexts and merge them together.”

The new chatbots do this with what seems like complete confidence. But they do not always tell the truth. Sometimes they fail at even simple arithmetic. They blend fact with fiction. And as they continue to improve, people could use them to generate and spread falsehoods.

Google recently built a system specifically for conversation, called LaMDA, or Language Model for Dialogue Applications. This spring, a Google engineer claimed that it was sentient. He was wrong, but the claim captured the public’s imagination.

Aaron Margolis, a data scientist in Arlington, Va., was among the limited number of people outside Google who were allowed to use LaMDA through AI Test Kitchen, Google’s experimental app. He was consistently amazed by its talent for open-ended conversation. It kept him entertained. But he cautioned that it could be a bit of a fabulist, as might be expected from a system trained on the vast amounts of information posted to the internet.

“What it gives you is kind of like an Aaron Sorkin movie,” he said. Mr. Sorkin wrote “The Social Network,” a movie often criticized for stretching the truth about the origin of Facebook. “Parts of it will be true, and parts of it won’t be.”

He recently asked both LaMDA and ChatGPT to chat with him as if it were Mark Twain. When he asked LaMDA, it quickly described a meeting between Twain and Levi Strauss, saying the writer had worked for the blue jeans mogul while living in San Francisco in the mid-1800s. It seemed true. But it was not. Twain and Strauss lived in San Francisco at the same time, but they never worked together.

Scientists call that problem “hallucination.” Much like a good storyteller, chatbots have a way of taking what they have learned and reshaping it into something new, with no regard for whether it is true.

LaMDA is what artificial intelligence researchers call a neural network, a mathematical system loosely modeled on the network of neurons in the brain. It is the same technology that translates between French and English on services like Google Translate and identifies pedestrians as self-driving cars navigate city streets.

A neural network learns skills by analyzing data. By pinpointing patterns in thousands of cat photos, for example, it can learn to recognize a cat.
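As an illustrative sketch of that pattern-finding process (using scikit-learn’s small bundled handwritten-digit images in place of cat photos, since the learning principle is the same):

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Load 8x8 images of handwritten digits; each pixel is one input feature.
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# The model "learns a skill" by finding pixel patterns that separate the classes.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

print("accuracy on unseen images:", model.score(X_test, y_test))
```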

Five years ago, researchers at Google and labs like OpenAI started designing neural networks that analyzed enormous amounts of digital text, including books, Wikipedia articles, news stories and online chat logs. Scientists call them “large language models.” By pinpointing billions of distinct patterns in the way people connect words, numbers and symbols, these systems learned to generate text on their own.
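Real language models learn those patterns with neural networks at enormous scale; a toy “bigram” model, counting which word follows which in a tiny stand-in corpus, illustrates the core idea of generating text from observed word-to-word patterns:

```python
import random
from collections import Counter, defaultdict

# A toy corpus standing in for books, Wikipedia articles, news stories and chat logs.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count which word tends to follow which: the "patterns in how people connect words".
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def generate(start, length=8):
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        # Sample the next word in proportion to how often it followed the last one.
        candidates, counts = zip(*options.items())
        words.append(random.choices(candidates, weights=counts)[0])
    return " ".join(words)

print(generate("the"))
```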

Their ability to generate language has surprised many researchers in the field, including many of the researchers who built them. The technology can mimic what people have written and combine disparate concepts. You can ask it to write a “Seinfeld” scene in which Jerry learns an esoteric mathematical technique called a bubble sort algorithm, and it will.

When people tested the system, they were asked to rate its responses. Were they convincing? Were they useful? Were they truthful? Then, through a technique called reinforcement learning, OpenAI used the ratings to fine-tune the system and more carefully define what it would and would not do.
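Neither the article nor OpenAI spells out the training recipe, but a deliberately simplified sketch of the feedback loop (not OpenAI’s actual method) shows the idea: try a response style, collect a human rating, and nudge that style’s score toward the rating, so highly rated behavior wins out over time:

```python
import random

# Hypothetical response styles and a running score for each one.
styles = {"helpful": 0.0, "evasive": 0.0, "confident-but-wrong": 0.0}
learning_rate = 0.1

def human_rating(style):
    # Stand-in for a real rater: rewards helpfulness, penalizes the rest.
    return {"helpful": 1.0, "evasive": -0.5, "confident-but-wrong": -1.0}[style]

for _ in range(200):
    style = random.choice(list(styles))             # try a response style
    reward = human_rating(style)                    # collect a rating
    # Nudge the score toward the observed rating (a simple moving-average update).
    styles[style] += learning_rate * (reward - styles[style])

best = max(styles, key=styles.get)
print("learned preference:", best, styles)
```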

“This allows us to get to the point where the model can interact with you and admit when it’s wrong,” said Mira Murati, OpenAI’s chief technology officer. “It can reject something that is inappropriate, and it can challenge a question or a premise that is incorrect.”

The method is not perfect. OpenAI warns people who use ChatGPT that it “may occasionally generate incorrect information” and “produce harmful instructions or biased content.” But the company plans to keep improving the technology and reminds people who use it that it is still a research project.

Google, Meta and other companies are also addressing accuracy issues. Meta recently removed an online preview of its chatbot, Galactica, because it repeatedly generated incorrect and biased information.

Experts warn that these companies do not control the fate of the technology. Systems like ChatGPT, LaMDA and Galactica are based on ideas, research papers and computer code that have circulated freely for years.

Companies like Google and OpenAI can push the technology forward faster than others. But their latest technologies have been widely reproduced and distributed. They cannot stop people from using these systems to spread misinformation.

Just as Mr. Howard hoped his daughter would learn not to trust everything she read on the Internet, he hoped society would learn the same lesson.

“You could program millions of these bots to appear like humans, having conversations designed to convince people of a particular point of view,” he said. “I have warned about this for years. It is now obvious that this is just waiting to happen.”