At the opening of the latest United Nations General Assembly, more than 200 leading voices – tech pioneers, Nobel Prize winners, and policy experts – joined forces to demand urgent global action on AI safety. Their message? Nations must agree on "red lines" that AI must never cross, and fast.
The open letter, signed by scientists from AI heavyweights Anthropic, Google DeepMind, Microsoft and OpenAI, warns that while AI holds "immense potential to advance human well-being," its unchecked growth poses grave risks. The coalition calls for internationally agreed bans on AI uses deemed too dangerous under any circumstances.
Proposed red lines include:
- Entrusting AI with command of nuclear arsenals or lethal autonomous weapons
- Deploying AI for mass surveillance or social scoring
- Using AI to conduct cyberattacks or impersonate individuals
Signatories argue that without clear boundaries, AI could soon outstrip human control, raising the risk of engineered pandemics, widespread disinformation, and mass manipulation (including the targeting of children), as well as national security threats, mass unemployment, and human rights violations.
With AI capabilities advancing rapidly, the letter urges governments to have red lines in place by the end of 2026, a deadline intended to keep pace with the speed of innovation. As the global community watches, these experts hope to steer AI development toward safety and responsibility before it's too late.
Reference(s):
"Scientists urge global AI 'red lines' as leaders gather at UN," CGTN, cgtn.com