Chinese AI startup DeepSeek has upended the economics of artificial intelligence development by slashing the computing cost of training large language models (LLMs) to roughly 25% of the industry standard. The company achieved this by adapting training that conventionally runs at 32-bit floating-point precision to run effectively at 8 bits, preserving model performance while dramatically reducing energy consumption.
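The article does not detail DeepSeek's actual low-precision training recipe, but the arithmetic behind the 25% figure is straightforward: storing a value in 8 bits takes one quarter of the memory of a 32-bit float. The sketch below (illustrative only, using symmetric int8 quantization in NumPy; the function names are hypothetical) shows that four-fold reduction and the small rounding error it introduces.

```python
import numpy as np

def quantize_int8(x):
    # Symmetric quantization: map the float32 range to integers in [-127, 127],
    # remembering a single scale factor to undo the mapping later.
    scale = float(np.abs(x).max()) / 127.0
    if scale == 0.0:
        scale = 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover approximate float32 values from the int8 representation.
    return q.astype(np.float32) * scale

# A dummy weight matrix standing in for model parameters.
weights = np.random.randn(1024, 1024).astype(np.float32)
q, scale = quantize_int8(weights)

# 4 bytes per float32 value vs 1 byte per int8 value: 25% of the original size.
print(weights.nbytes, q.nbytes)  # 4194304 1048576

# Worst-case reconstruction error is bounded by half the quantization step.
err = float(np.abs(weights - dequantize(q, scale)).max())
```

Real 8-bit (FP8) training is considerably more involved than this storage example, since gradients and optimizer states must also stay numerically stable at low precision, but the memory and bandwidth savings it illustrates are the core of the cost reduction.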
CCG senior researcher Andy Mok highlights that DeepSeek's innovation – driven primarily by China-educated engineers – challenges assumptions about Western dominance in advanced tech R&D. “This proves world-class innovation isn’t confined to Silicon Valley or Ivy League labs,” Mok told myglobalnews.net.
The company's decision to open-source its model amplifies its impact, positioning it as a viable alternative to ChatGPT while enabling developing nations to harness AI tools without prohibitive costs. Analysts suggest this could accelerate digital transformation across the Global South in sectors from education to healthcare.
With global AI infrastructure costs projected to hit $1 trillion by 2030, DeepSeek’s approach offers a potential blueprint for sustainable, accessible AI development worldwide.
Reference(s):
China's DeepSeek challenges AI economics with cost-cutting innovation
cgtn.com