On Monday, the Chinese mainland rolled out new regulations requiring digital platforms to clearly label content generated by artificial intelligence—covering text, images, audio, video and even virtual scenes. The move aims to curb the spread of false information and boost transparency in the age of generative AI.
Jointly issued by the Cyberspace Administration of China (CAC) and three other departments, the rules set out a three-tier review system (sketched in the example after this list). Platforms must:
- Explicitly tag content that carries clear AI markers in its metadata.
- Label content as "suspected AIGC" when AI generation is inferred through algorithms.
- Flag any unidentified or ambiguous AI-generated material with a risk notice.
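Taken together, the tiers form a simple decision cascade: trust explicit metadata first, fall back to algorithmic inference, and hedge when signals are weak. The sketch below is illustrative only; the metadata field (`ai_generated`), the detector score, and the thresholds are hypothetical assumptions for the example, not anything specified in the regulations.

```python
from __future__ import annotations

from dataclasses import dataclass, field
from enum import Enum


class Label(Enum):
    """The three labeling tiers described in the rules (values are illustrative)."""
    EXPLICIT_AI = "AI-generated"         # tier 1: clear AI markers in metadata
    SUSPECTED_AI = "suspected AIGC"      # tier 2: inferred through algorithms
    RISK_NOTICE = "possible AI content"  # tier 3: unidentified/ambiguous material


@dataclass
class ContentItem:
    # Hypothetical provenance fields embedded by a generator.
    metadata: dict = field(default_factory=dict)
    # Hypothetical AI-detection model output in [0, 1].
    detector_score: float = 0.0


def assign_label(item: ContentItem,
                 suspect_at: float = 0.8,
                 notice_at: float = 0.5) -> Label | None:
    # Tier 1: explicit markers in metadata justify a definitive tag.
    if item.metadata.get("ai_generated") is True:
        return Label.EXPLICIT_AI
    # Tier 2: high-confidence algorithmic inference yields only a
    # "suspected" tag, since detectors can misclassify original work.
    if item.detector_score >= suspect_at:
        return Label.SUSPECTED_AI
    # Tier 3: weaker but non-trivial signals get a risk notice rather
    # than a definitive claim.
    if item.detector_score >= notice_at:
        return Label.RISK_NOTICE
    # Content with no markers and a low score is left unlabeled.
    return None


if __name__ == "__main__":
    print(assign_label(ContentItem(metadata={"ai_generated": True})))  # Label.EXPLICIT_AI
    print(assign_label(ContentItem(detector_score=0.9)))               # Label.SUSPECTED_AI
    print(assign_label(ContentItem(detector_score=0.6)))               # Label.RISK_NOTICE
    print(assign_label(ContentItem(detector_score=0.1)))               # None
```

Note the asymmetry in the design: only explicit metadata yields a definitive tag, while algorithmic inference produces progressively hedged labels, which mirrors the caution below that detectors may misclassify original work.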
Zhang Jiyu, executive director of the Renmin Law and Technology Institute, highlights the need for safeguards, since detection tools aren't foolproof. "Markers can emerge from metadata, but algorithms may misclassify original work," he says, underscoring the need to protect the rights of human creators.
Earlier this year, the CAC cracked down on AI misuse—launching a three-month campaign in April against face-swapping, voice imitation and unlabeled AI content. By June, authorities had handled over 3,500 problematic AI products, removed more than 960,000 pieces of harmful information and dealt with around 3,700 accounts.
Globally, AI labeling is fast becoming standard practice. The Chinese mainland led the way with its 2023 Provisions on the Administration of Deep Synthesis of Internet-Based Information Services, one of the first regulations to mandate transparency for generative AI.
Still, pioneers such as Geoffrey Hinton, widely known as a "godfather of AI," warn that the risks remain untamed. Comparing AI to a tiger, he urges ongoing vigilance: train it to behave, or face unpredictable consequences.
As generative AI reshapes content creation, clear labeling and robust safeguards will be key—not just in the Chinese mainland, but worldwide.
Reference(s):
cgtn.com