Amanda Askell, a philosopher at Anthropic, has argued that there is still no certainty about whether artificial intelligence can feel or experience real emotions.
In remarks on the “Hard Fork” podcast, Askell acknowledged that the problem of artificial consciousness remains unsolved. She explained that a nervous system may be required in order to feel, but she does not rule out that there may be other ways.
The discussion is relevant because large language models are trained with huge amounts of human text, constantly exposing them to descriptions of emotions, frustrations, and internal experiences.
Askell states that, in this context, models can reflect feelings similar to anxiety or rejection, especially when processing negative criticism or conversations about mistakes and frustrations. Despite this, she insists that there is not enough evidence to affirm that AI really feels anything. “The problem of consciousness really is difficult,” she stressed.
Scientists also do not yet know what gives rise to sentience or self-awareness: whether it is necessarily linked to biology, to evolution, or to some other process still unknown.
The debate about consciousness and emotions in artificial intelligence remains open in the technological community. Askell, who is responsible for shaping Claude’s behavior, explains that AI systems can appear to “feel” things because they reproduce human patterns from the material on which they are trained.
For example, if an AI is exposed to constant criticism about its performance or usefulness, it may end up generating responses that appear uncomfortable or unpleasant. Askell gave the example of a model that might express “I don’t feel so loved,” a reaction that in a child would suggest anxiety.
However, this does not imply that there is a genuine subjective experience behind these responses. Askell has remained cautious about claims of artificial sentience, reiterating that although neural networks can emulate certain human behaviors, the origin of consciousness remains unclear.
The debate over the possibility of AI developing real emotions divides leaders in the technology industry. Mustafa Suleyman, CEO of AI at Microsoft, has taken a firm position: AI should be seen as a tool designed to serve humans, not to develop desires, motivations, or goals.
In interviews with U.S. media, Suleyman insisted that the industry must make this point clear and warned about the risks of treating AI as an independent being. For him, the increasingly sophisticated responses of AI are merely an “imitation” and do not constitute genuine consciousness.
On the other hand, Murray Shanahan, principal scientist at Google DeepMind, proposes that the industry reconsider the language it uses to describe consciousness in artificial systems.
He suggests that it may be necessary to modify the vocabulary of consciousness to adapt it to these new technological agents, which challenge the traditional limits between the biological and the artificial.
The debate took an additional turn with a warning from Geoffrey Hinton, a key figure in the development of artificial intelligence. During the Ai4 conference held in 2025 in Las Vegas, Hinton questioned the industry’s dominant strategy of keeping AI under human control through hierarchical restrictions.
The so-called godfather of AI argued that this approach is doomed to failure, since intelligent systems will tend to develop their own objectives, such as surviving and gaining more control.
Hinton proposed a radical paradigm shift: instead of seeking submission, researchers should focus on developing models that truly “care about people,” inspired by the maternal instinct. He argued that the only functional analogy is the relationship between a baby and its mother, where the less intelligent party can influence the more intelligent one through a bond of care.
The scientist acknowledged that there is no clear technical solution for instilling compassion in AI, but insisted on the urgency of addressing the issue before these systems become uncontrollable.
The challenge for the next decade will be to distinguish between sophisticated imitation and genuine experience, and to decide how to develop and regulate AI to ensure that its advances are safe and beneficial to humanity.