A wave of concern has emerged from within the artificial intelligence community as current and former employees of leading AI companies, including OpenAI and Google DeepMind, have publicly highlighted the potential dangers of unregulated AI development.
On Tuesday, eleven current and former employees of Microsoft-backed OpenAI, along with one current and one former employee of Google DeepMind, penned an open letter expressing their worries. They argue that the profit-driven nature of these AI businesses is hindering effective oversight and governance, stating, "We do not believe bespoke structures of corporate governance are sufficient to change this."
The letter outlines several risks associated with unregulated AI, such as the spread of misinformation, the loss of control over autonomous AI systems, and the exacerbation of existing social inequalities. Alarmingly, the authors caution that these unchecked developments could even lead to scenarios threatening human extinction.
Instances have already been observed in which AI-powered image generators from companies such as OpenAI and Microsoft produced disinformation related to voting processes, despite existing policies aimed at preventing such outcomes.
Furthermore, the signatories highlight that AI firms currently have only "weak obligations" to share critical information about their systems' capabilities and limitations with governments. They emphasize that relying on voluntary disclosures from these companies is inadequate for ensuring public safety.
This open letter adds to a growing chorus of voices raising safety concerns around generative AI technologies. These technologies can rapidly produce human-like text, imagery, and audio, making it imperative to address the associated risks proactively.
In related news, OpenAI announced on Thursday that it has successfully disrupted five covert influence operations that attempted to exploit its AI models for deceptive activities across the internet, underscoring the ongoing challenges in safeguarding AI technologies from malicious use.
Reference(s):
OpenAI, Google DeepMind's current and former staff warn of AI risks
cgtn.com