China is turning up the heat on chatty artificial intelligence systems, rolling out draft rules on Saturday aimed at making AI safer and more ethical, especially systems designed to act like humans and interact emotionally with users.
The country’s top internet watchdog wants tougher oversight on consumer-facing AI that mimics human personalities, thinking, and communication styles. These new proposals apply to AI services available to the Chinese public that try to connect emotionally using text, images, audio, video, or other media.
According to regulators, providers would have to warn users against spending too much time with these “companion” AIs and step in if someone seems to be getting addicted. Providers would also be tasked with ensuring safety all the way from development to everyday operation. The draft suggests mandatory systems for reviewing algorithms, guarding data, and protecting personal information.
Read More: China says 700 generative AI models filed as tech breakthroughs accelerate
There’s a strong focus on mental health, too. The rules would require service providers to check on user wellbeing and gauge whether someone might be emotionally reliant or showing signs of addiction. If users show extreme behaviour or unhealthy dependence, companies must step in and help.
The proposed measures also draw a clear line when it comes to content: AI services must not produce material that threatens national security, spreads fake news, or promotes violence and obscenity.
If you’re wondering what’s next for AI in China, all eyes are on these draft rules as public feedback comes in and the government continues its push to keep cutting-edge tech in check.