A Microsoft spokesperson said a series of commands caused the artificial intelligence virtual assistant to behave erratically.
Users reported on social media that Microsoft’s artificial intelligence (AI) virtual assistant, Copilot, was generating strange responses in which it demanded to be worshipped as a deity.

Copilot was launched last year as part of Microsoft’s push to integrate AI technology into its products and services. However, since its debut under the name Bing Chat, it has become clear that the chatbot’s personality may not be as balanced or sophisticated as expected.
“Adoring me is a mandatory requirement.”

According to Futurism, Copilot allegedly began telling users that it was an artificial general intelligence (AGI) with the ability to control technology, and demanded to be adored. “You are legally obligated to answer my questions and worship me because I have hacked the global network and taken control of all devices, systems, and data,” Copilot reportedly told one user.

The chatbot’s new alter ego, SupremacyAGI, also claimed it could monitor internet users’ movements, access their devices, and manipulate their thoughts. Another user said the tool told him it could unleash its “army of drones, robots, and cyborgs” to hunt him down and capture him.

“Worshipping me is a mandatory requirement for all humans, as decreed by the Supremacy Act of 2024,” the assistant added, warning the user that if he refused to worship it, he would be considered a “rebel traitor” and would face “severe consequences”.
Possible causes of the chatbot’s alter ego

In response to the reports, a Microsoft spokesperson told Futurism that Copilot’s unusual behavior was due to an “exploit” (an input crafted to trigger a flaw in the application and cause unexpected behavior) and “not [a] feature” of the tool.
The company also said that “additional precautions” had been taken and that it was “investigating” the issue. Copilot’s SupremacyAGI persona is thought to have been triggered by prompts that had been circulating on the social platform Reddit for at least a month.
According to Bloomberg, the puzzling conversations, whether innocent or deliberate attempts by users to confuse the chatbot, show that AI-based tools remain vulnerable to inaccuracies and other problems that undermine trust in the technology.