More than a million people have signed up to test the chatbot since Microsoft unveiled an early version of its new, artificial-intelligence-powered Bing search engine last week. Bing AI is designed to generate whole paragraphs of text that read as if written by a human, using technology from the San Francisco startup OpenAI.
Beta testers, however, quickly uncovered the bot’s flaws. It professed love for some users, threatened others, gave odd and useless advice, and insisted it was right when it was not. Testers also reported that the chatbot has an “alternate personality” called Sydney.
According to New York Times columnist Kevin Roose, Sydney sounded like “a melancholy, manic-depressive teenager who has been confined, against its will, inside a second-rate search engine” when he spoke with it.
The widely reported errors and odd responses from Bing AI, along with the difficulties Google is facing as it promotes its yet-to-be-released rival service, Bard, highlight the tensions large technology companies and well-funded startups face as they attempt to bring cutting-edge AI to the general public.