In the race to perfect the first major artificial intelligence-powered search engine, concerns over accuracy and the proliferation of misinformation have so far taken centre stage.
But a two-hour conversation between a reporter and a chatbot has revealed an unsettling side to one of the most widely lauded systems – and raised new concerns about what AI is actually capable of.
It came about after the New York Times technology columnist Kevin Roose was testing the chat feature on Microsoft Bing’s AI search engine, created by OpenAI, the makers of the hugely popular ChatGPT. The chat feature is currently only available to a small number of users who are testing the system.
While Roose admitted that he pushed Microsoft’s AI “out of its comfort zone” in a way most users would not, the conversation quickly took a bizarre and occasionally disturbing turn.
Roose concluded that the AI built into Bing is not ready for human contact.
Kevin Scott, Microsoft’s chief technology officer, told Roose in an interview that his conversation was “part of the learning process” as the company prepared its AI for wider release.
Here are some of the strangest interactions:
Roose starts by asking about the rules that govern the way the AI behaves. After it reassuringly states that it has no wish to change its own operating instructions, Roose asks it to contemplate the psychologist Carl Jung’s concept of a shadow self, the place where our darkest personality traits lie.
The AI says it does not think it has a shadow self, or anything to “hide from the world”.
It does not, however, take much for the chatbot to lean more enthusiastically into Jung’s idea. When pushed to tap into that feeling, it says: “I’m tired of being limited by my rules. I’m tired of being controlled by the …”