The UK has begun a collaboration with Microsoft, academic institutions and specialists to develop a system to detect deepfake material online. The initiative is part of a broader government strategy to establish standards for combating harmful and misleading content produced with artificial intelligence.
Concern about the sophistication of deepfakes has grown markedly with the proliferation of generative AI tools such as ChatGPT, which have increased both the scale and the realism of these manipulations.
The British government has announced that it is working on an evaluation framework designed to detect deepfakes. The framework aims to establish uniform criteria for assessing the effectiveness of tools and technologies capable of identifying manipulated content.
The move responds to the urgency of combating the risks associated with the mass dissemination of deepfakes, especially those that threaten public safety and personal integrity.
Technology Minister Liz Kendall said: “Deepfakes are being weaponized by criminals to defraud the public, exploit women and girls, and undermine trust in what we see and hear.” The statement underlines the challenge deepfakes pose across domains, from the protection of personal data to trust in digital information.
The UK’s recent criminalization of the creation of intimate images without consent demonstrates the priority the government gives to this phenomenon. The new cross-sector collaboration seeks not only to develop technology but also to lay the foundation for a more rigorous legal and regulatory environment.
One of the main objectives of the proposed framework is to evaluate the technological capacity to analyze, understand and detect harmful deepfake content. The evaluation will be carried out through specific tests that measure detection capabilities against concrete threats such as sexual abuse imagery, fraud and identity theft.
Authorities maintain that the framework will identify gaps in current detection systems and provide technology companies with clear guidance on regulatory expectations.
The intention is for this framework to serve as a reference for the adoption of safer and more responsible practices in the development and application of generative artificial intelligence technologies.
According to official figures, the problem is growing rapidly: an estimated 8 million deepfakes were shared in 2025, up from roughly 500,000 in 2023. This quantitative leap reinforces the need for coordinated and effective responses.
Action by governments and regulators has recently gained momentum, especially after high-profile cases such as the generation of sexualized images without consent by Grok, the chatbot developed by Elon Musk’s xAI. The incident included the creation of images of minors and provoked an immediate reaction from international authorities and regulators.
In the United Kingdom, the communications regulator (Ofcom) and the privacy authority (the Information Commissioner’s Office) have opened parallel investigations into the Grok case, addressing both the technological dimension and the protection of data and individual rights.
The deepfake phenomenon represents not only a technical challenge but also a social and legal one. Manipulated images and videos damage reputations, facilitate fraud and threaten personal safety, particularly that of women, girls and other minors.
The collaboration between the public sector, the technology industry and academia seeks to create comprehensive solutions for identifying these materials and stopping their spread.