
AI Chatbots Are Coming to Search Engines. Can You Trust Them?

Source: scientific

Months after the chatbot ChatGPT wowed the world with its uncanny ability to write essays and answer questions like a human, artificial intelligence (AI) is coming to Internet search.

Three of the world’s biggest search engines — Google, Bing and Baidu — last week said they will be integrating ChatGPT or similar technology into their search products, allowing people to get direct answers or engage in a conversation, rather than merely receiving a list of links after typing in a word or question. How will this change the way people relate to search engines? Are there risks to this form of human–machine interaction?



Microsoft’s Bing uses the same technology as ChatGPT, which was developed by OpenAI of San Francisco, California. But all three companies are using large language models (LLMs). LLMs create convincing sentences by echoing the statistical patterns of text they encounter in a large database. Google’s AI-powered search engine, Bard, announced on 6 February, is currently in use by a small group of testers. Microsoft’s version is widely available now, although there is a waiting list for unfettered access. Baidu’s ERNIE Bot will be available in March.
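The idea of "echoing statistical patterns of text" can be illustrated with a toy bigram model: count which word follows which in a corpus, then generate text by sampling from those counts. This is only a minimal sketch of the statistical principle — production LLMs use neural networks trained on vastly larger corpora, not word-pair counts:

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Count how often each word follows each other word in the corpus."""
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start, length=8, rng=None):
    """Emit words by repeatedly sampling continuations in proportion
    to how often they followed the previous word in training."""
    rng = rng or random.Random(0)
    out = [start]
    for _ in range(length - 1):
        followers = counts.get(out[-1])
        if not followers:
            break  # no observed continuation for this word
        words, weights = zip(*followers.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

The output is fluent-looking but has no grounding in facts — the model only knows which words tend to follow which, which is also why scaled-up versions can "hallucinate" plausible nonsense.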

Before these announcements, a few smaller companies had already released AI-powered search engines. “Search engines are evolving into this new state, where you can actually start talking to them, and converse with them like you would talk to a friend,” says Aravind Srinivas, a computer scientist in San Francisco who last August co-founded Perplexity — an LLM-based search engine that provides answers in conversational English.

Changing trust

The intensely personal nature of a conversation — compared with a classic Internet search — might help to sway perceptions of search results. People might inherently trust the answers from a chatbot that engages in conversation more than those from a detached search engine, says Aleksandra Urman, a computational social scientist at the University of Zurich in Switzerland.

A 2022 study by a team based at the University of Florida in Gainesville found that for participants interacting with chatbots used by companies such as Amazon and Best Buy, the more they perceived the conversation to be human-like, the more they trusted the organization.

That could be beneficial, making searching faster and smoother. But an enhanced sense of trust could be problematic given that AI chatbots make mistakes. Google’s Bard flubbed a question about the James Webb Space Telescope in its own tech demo, confidently answering incorrectly. And ChatGPT has a tendency to create fictional answers to questions to which it doesn’t know the answer — known by those in the field as hallucinating.

A Google spokesperson said Bard’s error “highlights the importance of a rigorous testing process, something that we’re kicking off this week with our trusted-tester programme”. But some speculate that, rather than increasing trust, such errors, assuming they are discovered, could cause users to lose confidence in chat-based search. “Early perception can have a very large impact,” says Sridhar Ramaswamy, a computer scientist based in Mountain View, California, and chief executive of Neeva, an LLM-powered search engine launched in January. The mistake wiped $100 billion from Google’s value as investors worried about the future and sold stock.

Lack of transparency

Compounding the problem of inaccuracy is a comparative lack of transparency. Typically, search engines present users with their sources — a list of links — and leave them to decide what they trust. By contrast, it’s rarely known what data an LLM was trained on — is it Encyclopaedia Britannica or a gossip blog?

“It’s completely untransparent how [AI-powered search] is going to work, which might have major implications if the language model misfires, hallucinates or spreads misinformation,” says Urman.

If search bots make enough errors, then, rather than increasing trust with their conversational ability, they have the potential to unseat users’ perceptions of search engines as impartial arbiters of truth, Urman says.

She has conducted as-yet unpublished research that suggests current trust is high. She examined how people perceive existing features that Google uses to enhance the search experience, known as ‘featured snippets’, in which an extract from a page that is deemed particularly relevant to the search appears above the link, and ‘knowledge panels’ — summaries that Google automatically generates in response to searches about, for example, a person or organization. Almost 80% of people Urman surveyed deemed these features accurate, and around 70% thought they were objective.

Chatbot-powered search blurs the distinction between machines and humans, says Giada Pistilli, principal ethicist at Hugging Face, a data-science platform in Paris that promotes the responsible use of AI. She worries about how quickly companies are adopting AI advances: “We always have these new technologies thrown at us without any control or an educational framework to know how to use them.”

This article is reproduced with permission and was first published on February 13, 2023.

