
Microsoft CEO Praises DeepSeek Amid AI Data Use Controversies

In a surprising turn of events, Satya Nadella, the CEO of Microsoft, lauded DeepSeek, the Chinese AI lab whose model has recently stirred the tech industry with its impressive performance and cost-effectiveness.

During Microsoft's quarterly earnings call on Wednesday, Nadella highlighted DeepSeek's advancements, stating that the AI model showcases "some real innovation" and represents "all good news" for the industry. He further remarked that AI development cycles are "no different" from traditional computing, emphasizing the blend of innovation and stability in AI advancements.

On the same day, Microsoft rolled out the DeepSeek-R1 reasoning model to its cloud platform users. This new offering allows users to view the AI's "thought process," providing greater transparency and control over AI interactions.
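The "thought process" here refers to the visible reasoning text that R1-style models emit before their final answer. As a rough illustration only (it assumes the reasoning is wrapped in <think> tags, a common convention for such models rather than anything Microsoft's announcement specifies), the following Python sketch separates the two parts of a response:

    import re

    # Hypothetical raw output from an R1-style reasoning model. The <think> tag
    # format is an assumption used for illustration, not a documented contract.
    raw_output = (
        "<think>The user asks for 12 * 13. 12 * 13 = 156.</think>"
        "12 multiplied by 13 is 156."
    )

    # Pull out the visible reasoning, then strip it to leave the final answer.
    match = re.search(r"<think>(.*?)</think>", raw_output, re.DOTALL)
    thought_process = match.group(1).strip() if match else ""
    final_answer = re.sub(r"<think>.*?</think>", "", raw_output, flags=re.DOTALL).strip()

    print("Thought process:", thought_process)
    print("Answer:", final_answer)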

Accusations Amid Adoption

Despite the praise, Microsoft, a significant investor in OpenAI—DeepSeek's U.S. rival—is investigating allegations that DeepSeek may have accessed data from OpenAI without authorization. OpenAI has informed media outlets of evidence suggesting that DeepSeek utilized their services to train its AI models, potentially violating OpenAI's terms of service. However, neither OpenAI nor Microsoft has presented concrete evidence to support these claims.

The scrutiny extends beyond corporate statements. Howard Lutnick, a Trump administration nominee, has publicly accused DeepSeek of using "stolen" U.S. technology. On Wednesday, Lutnick informed U.S. senators of the government's intent to address these issues. Additionally, Trump's AI adviser, David Sacks, stated in media interviews that there is "substantial evidence" DeepSeek has "distilled the knowledge" from OpenAI's models.


Critics like tech investor and Cornell University lecturer Lutz Finger have weighed in on the matter, pointing out the hypocrisy in Big Tech's stance. Finger noted, "Distillation will violate most terms of service, yet it's ironic – or even hypocritical – that Big Tech is calling it out. Training ChatGPT on Forbes or New York Times content also violated their terms of service."

Understanding 'Distillation'

Distillation refers to the process in which a smaller AI model is trained on the outputs of a larger one, typically by querying it repeatedly and learning to imitate its answers. DeepSeek has described its use of distillation in its public research papers, employing the DeepSeek-R1 reasoning model as the "bigger one" to train smaller models based on Alibaba's Qwen and Meta's Llama, enhancing their reasoning capabilities.
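As a rough sketch of the idea (the function names below are hypothetical placeholders, not DeepSeek's or OpenAI's actual tooling), distillation amounts to harvesting a teacher model's answers and then fine-tuning a student model to imitate them:

    # Minimal sketch of distillation: collect outputs from a large "teacher"
    # model, then fine-tune a smaller "student" model on those prompt/response
    # pairs. query_teacher() and finetune_student() are stand-ins, not real APIs.

    def query_teacher(prompt: str) -> str:
        """Placeholder for a call to the larger model (e.g., a reasoning model)."""
        return "step-by-step answer produced by the teacher"

    def finetune_student(dataset: list[dict]) -> None:
        """Placeholder for supervised fine-tuning of the smaller model."""
        print(f"Fine-tuning student on {len(dataset)} teacher-generated examples")

    prompts = ["Explain why the sky is blue.", "Solve 17 * 24 step by step."]

    # Step 1: harvest the teacher's answers.
    distillation_data = [{"prompt": p, "response": query_teacher(p)} for p in prompts]

    # Step 2: train the student to imitate them.
    finetune_student(distillation_data)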

What sets DeepSeek apart is the availability of its distilled models and the original R1 for free download. This allows users with less powerful hardware, including smartphones, to run these models offline with full control—something not possible with ChatGPT, where the underlying model remains inaccessible even to paid users.
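For readers curious what "running it yourself" looks like in practice, here is a minimal sketch using the Hugging Face Transformers library. The model identifier below is an assumption for illustration; substitute whichever distilled checkpoint you actually download.

    # Minimal sketch of running a downloaded distilled model offline with
    # Hugging Face Transformers. The model ID is an assumed repository name.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed checkpoint

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    prompt = "Briefly explain what model distillation is."
    inputs = tokenizer(prompt, return_tensors="pt")

    # Generation runs entirely on local hardware; no API call leaves the machine.
    outputs = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))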

Users on social media have raised concerns, noting that DeepSeek sometimes identifies itself as ChatGPT, hinting at possible data misappropriation. However, it's important to recognize that, like all AI models, DeepSeek doesn't always provide truthful responses. For instance, a previous version of Google's Gemini chatbot identified itself as Baidu's Ernie bot when queried in Chinese, though Baidu never accused Google of data theft.

As the investigation unfolds, the tech world watches closely to see how these allegations will impact the future of AI development and cross-company collaborations.
