China has released draft regulations aimed at tightening oversight of artificial intelligence systems that interact with users in human-like ways, marking the latest step in Beijing’s broader effort to govern rapidly advancing AI technologies. The proposals, issued by the Cyberspace Administration of China (CAC) and opened for public consultation, target AI services that simulate human personalities, emotions and communication through text, audio, images or video. Authorities say the move is intended to balance innovation with safeguards for users’ psychological well-being, data security and social stability.
Under the draft rules, companies providing such AI services would be required to clearly inform users when they are interacting with an artificial system rather than a human, to warn against excessive use, and to intervene if signs of emotional dependence or addiction emerge. Providers would also be held responsible for safety throughout the entire lifecycle of their products, including algorithm management, protection of personal information and content moderation. The draft sets out strict prohibitions on AI-generated content that could endanger national security, spread misinformation, promote violence or undermine social order, reflecting longstanding regulatory priorities in China.
The proposed framework builds on measures introduced in recent years to regulate generative AI and other digital technologies. By focusing specifically on emotionally engaging, human-like AI, regulators are responding to growing public and official concern over the societal impact of increasingly sophisticated systems. The CAC said feedback from industry and the public will be considered before the rules are finalised, with the outcome likely to shape how both domestic and foreign AI developers operate in one of the world’s largest technology markets.