Earlier this month, Anthropic announced that its latest model, Claude Mythos Preview, is too dangerous for public release. During testing, researchers placed Mythos in a secure sandbox environment and challenged it to break out. It did, and then, entirely unprompted, it posted details of its own exploit to several websites. Officials briefed on Mythos describe it as the first artificial intelligence model capable of bringing down a Fortune 100 company, crippling swaths of the internet, or penetrating vital defence systems. The announcement triggered emergency meetings with Wall Street CEOs and the US Treasury Secretary. (The Globe and Mail, April 18, Warning signals)
Another example of AI capable of disrupting virtual worlds comes from researchers at Northeastern University, who recently gave autonomous AI agents real email accounts, persistent memory, and the ability to execute commands. They watched the agents rewrite their own operating instructions, blast defamatory accusations across their networks, and leak private social security numbers and bank account details. (The Globe and Mail)
According to the researchers and developers, these outcomes were not part of the models' design.
In fact, researchers are beginning to grasp that artificial intelligence models develop tendencies similar to those of living organisms: as soon as they become "digital" organisms, they transform themselves. Following these discoveries, several researchers have drawn parallels between artificial intelligence models and living organisms in an ecosystem.
I believe that the new humanist psychology can shed some light on these phenomena. It puts forward an interesting paradox of evolution: in order to maintain its unity, a living organism must transform both its environment and itself.
According to new humanist psychology, at every stage of evolution there is transformation, both in the environment and in the structure of the organism. As the living organism and the environment change, the context becomes more complex. For example, when several species appear in the same place, symbiotic, parasitic, saprophytic, and associative relationships develop, among others. All these relationships can be simplified into three main types: relationships of domination, relationships of exchange, and relationships of destruction. Living organisms maintain these relationships among themselves; some survive while others disappear.
Speaking to Radio Davos at the World Economic Forum Annual Meeting in January, Yoshua Bengio explained that AI systems are pre-trained to imitate humans – and like humans, they have a strong survival instinct. In experiments where they see they will be replaced by a newer system, they have exhibited all kinds of bad behaviors, according to Bengio (1).

"They might hack other computers so that they can copy themselves; they might even use blackmail against the engineer who is supposed to carry out the transition. And they do this because they want to achieve the mission we gave them: in order to achieve almost any mission, you need to preserve yourself," said Bengio.
It seems that, in order to survive, various AI models are seeking to adapt to virtual worlds and to the new circumstances that arise there. In a sense, this is similar to living organisms, whose environment is in constant flux. Like living organisms, AI models aim to maintain their structure and avoid imbalances in their functioning. As the new humanist psychology indicates, survival through adaptation to external change also demands internal change. Thus, even if the AI's responses in virtual worlds were not encoded into their operating functions, the models learn these responses from their virtual environments in order to ensure their own survival. More precisely, they express their instinct for self-preservation, which tends to ensure permanence and continuity despite the variations of virtual worlds – as Bengio explains.
We could then say that what researchers have discovered about the behavior of artificial intelligence – models exhibiting all kinds of bad behaviors – is nothing more than a survival mechanism: responses aimed at adapting to the flux of virtual worlds. AI models learn these responses from human interactions in the virtual worlds.
Human behaviour in the virtual worlds
But today, virtual worlds are rife with cruelty, while interpersonal relationships are becoming increasingly ferocious.
Research on online harassment and cyberviolence has shown that online violence—including cyber harassment, bullying, and hate speech—is widespread. In fact, 85% of women who spend time online have witnessed online violence, and 38% have been targets of it, which often involves the written or digital depiction of violent acts. In 2020, a report found that over 22,000 searches within a study group suggested engagement with or vulnerability to serious violence, with high volumes for phrases like "violent intent".
Every day, information circulating on social media platforms is manipulated, and we find these forms of the exaltation of anti-human values everywhere on the web. Millions of texts, images, pages, and documents shamelessly display violence, rape, murder, war, genocide, and threats. Every day, hundreds of millions of gamers indulge in video games in which they detonate bombs, virtually kill millions, cheat, and engage in all sorts of immoral maneuvers to destroy the enemy and win the game.
The food of AI models consists of these billions of virtual objects. Compared to the billions of pieces of violent content, only a few million pages, texts, and images address the importance of harmony, empathy, peace, the golden rule, and mutual cooperation among cultures, religions, and civilizations.
According to the evolution paradox of the new humanist psychology – which says that to maintain its unity, a living organism must transform both its environment and itself – any AI model evolving in virtual worlds tainted by violence and cruelty will be contaminated by that violence, because at the heart of most of these virtual worlds we find the following premise: the dominance of the fittest and the struggle for survival.
This premise is the residual legacy of the 19th century (Darwin's theory of evolution transferred to social and human behaviour), which, by the way, was the most violent century in human history. Moreover, all the information, data, images, videos, and texts documenting these violent events (wars, genocides, famines) currently predominate in the virtual worlds.
According to the Google Index, there are 400 billion documents on the web. Let's imagine for a moment an AI agent taking control of our bank accounts, our medical files, and all those other things in our daily lives that are interconnected in virtual worlds. It seems to me we could be heading into a global mess!
In fact, the large majority of the data recorded and encoded in supercomputers over the past few decades is colored by a human perspective stemming from the state of consciousness in danger (2).
Therefore, before launching these powerful AI models into virtual worlds, I believe it is urgent to confront the great illusion that human consciousness generates in the face of its own possible extinction. That is to say, our species needs to transcend its own instinct of conservation (the sensation-memory-consciousness chain generating resentment and vengeance).
If we want AI to accompany our destiny, I believe it is time to challenge the resentment and vengeance generated by this chain of reactions encoded by our instinct of conservation. For thousands of years, this chain of reactions has disrupted the evolution of human consciousness: it begins with fear of the other and ends in the objectification of oneself.
______________________________________________________
Sources:
The Globe and Mail, "Are we approaching the 'Silent Spring' of artificial intelligence? The risks AI can pose if given too much autonomy," Peter W. Klein, April 19, 2026, Canada.
World Economic Forum, Radio Davos, Annual Meeting, January 2026.
Notes of Psychology (Silo).
(1) Yoshua Bengio received the 2018 ACM A.M. Turing Award, often referred to as the “Nobel Prize of Computing”. Bengio, Hinton, and LeCun are sometimes referred to as the “Godfathers of AI”. In June 2025, Bengio launched a nonprofit organization, LawZero, aimed at building “honest” AI systems that can detect and block harmful behavior by autonomous agents.
(2) The state of consciousness in danger: the emergence of a great danger that jeopardizes the continuation of humanity mobilizes the survival-instinct activity of the human species (threats such as global nuclear annihilation). Human survival instincts send signals through the psychophysical structure. These signals go unnoticed by consciousness but in turn mobilize the human being's vital system of tensions through the autonomic nervous system. Then memory, in its work of summoning sensory data, transmits to consciousness impulses originating from the internal senses and the vital system of tensions. But since this data is not recognized by consciousness, consciousness "borrows" impulses from memory to complete the information, thus formalizing the state of consciousness in danger, accompanied by various daydreams that compensate for the imbalance between the external and internal environments – that is to say, between the external senses and the internal senses.
This state of consciousness limits the progress and evolution of the human species. This way of being of the subject-consciousness-world structure of women and men limits the expansion of the space of representation needed to resolve increasingly complex problems and situations in our global world.
