
Microsoft bans DeepSeek app over data and propaganda fears


Brief

  • Microsoft bans the DeepSeek app for staff, citing user data storage on servers in China and propaganda concerns.
  • Despite enabling DeepSeek’s R1 model on Azure, the app remains barred over safety and content control risks.
  • Unlike some AI rivals, DeepSeek faces stricter enforcement as Microsoft weighs business and security needs.

Microsoft has officially confirmed that its employees are prohibited from using DeepSeek, citing significant concerns over data security and the threat of propaganda. Testifying before the Senate, Microsoft Vice Chair and President Brad Smith disclosed the decision to bar the app and confirmed that DeepSeek remains absent from Microsoft’s own app store as a protective measure.

This marks the first time Microsoft has publicly discussed the ban, although other organizations and governments have already imposed similar restrictions. Central to Microsoft’s decision is the fact that DeepSeek stores user data on servers located in China, where the law can compel companies to share that data with intelligence agencies.

Security Risks and Strategic Adjustments

Alongside concerns about where data is stored, the company is also wary of content manipulation driven by state interests, citing a real risk of answers being shaped by Chinese government messaging. The DeepSeek app is also known to heavily restrict information on topics the Chinese government deems controversial.

Interestingly, Microsoft enabled support for DeepSeek’s R1 model through its Azure platform earlier this year, shortly after the tool gained widespread attention. This arrangement differs from distributing the DeepSeek app itself, since the model is open source and anyone can host it independently to avoid direct data transfers to China.

Nevertheless, concerns persist about the broader risks such as the potential for the AI model to spread misinformation or expose vulnerabilities in its generated code. During his testimony, Smith also mentioned that Microsoft had intervened directly in DeepSeek’s model to remove certain damaging elements, though the company did not clarify these changes in detail.

DeepSeek reportedly passed thorough safety reviews before being offered on Azure, including what Microsoft described as intensive red teaming and risk evaluation. Microsoft’s decision to restrict DeepSeek stands apart from its handling of other AI chat competitors, with options like Perplexity remaining available to Windows users.

Notably, similar restrictions are not enforced consistently against every rival AI application: several Google offerings, including its browser and chatbot, are likewise not listed in the Microsoft store. This selective policy highlights the complex calculus of app availability when balancing business interests against national security priorities.
