At last November's Web Summit in Lisbon, MIT physicist Max Tegmark delivered a stark warning about the breakneck pace of artificial intelligence research.
He told RAZOR's Neil Cairns that tech firms are racing to build systems that could one day surpass human intelligence, yet these AI projects often move forward without the rigorous safety checks required in other high-stakes industries.
Tegmark noted that today's AI tools remain narrow and task-focused, but researchers are actively pushing toward Artificial General Intelligence (AGI) and even artificial superintelligence. Machines that can learn, adapt and act autonomously pose a unique threat: their combined intelligence and physical capabilities could slip beyond human control.
Despite the looming risks, Tegmark believes we can steer AI onto a safer path. He pointed out that industries from aviation to medicine mandate strict testing and certification before products reach the public, and argued that AI companies should face the same fundamental standards.
Public concern is growing, and experts around the world are already calling for limits on systems that could break free of human oversight. Tegmark argues that if governments act now, AI can still deliver major breakthroughs in science and medicine without putting humanity at risk.
In 2014, Tegmark co-founded the Future of Life Institute to campaign for AI safety and push for regulation of companies developing advanced AI. His message is clear: with timely action and common-sense rules, superintelligent machines can be an asset, not a hazard.
Reference(s):
cgtn.com