The Chinese AI model DeepSeek R1 is currently causing quite a stir - not only because of its technological capabilities, but also because of significant data protection risks and ethical concerns. While the model attracts companies with its cost efficiency and open source availability, DeepSeek raises serious questions about data protection and potential political and social biases.
Background and current developments
DeepSeek is a Chinese startup that released its new AI model DeepSeek R1 in January 2025. It is a so-called reasoning model that competes with established AI solutions such as those from OpenAI. DeepSeek R1 is attracting particular attention due to its cost-efficient and resource-saving architecture, which makes the operation of powerful AI applications more affordable. In addition, the model is available as open source, which challenges the AI landscape and could break up existing market structures.
After its release on January 20, 2025, the technology stock markets reacted sharply, as the combination of high efficiency and an open source strategy put established players under pressure. At the same time, DeepSeek was immediately targeted by data protection authorities.
The Italian data protection authority (Garante) has banned the use of DeepSeek in Italy because the provider was unable to give sufficient answers to its data protection questions. Particularly problematic: DeepSeek argued that European data protection law did not apply to its model at all. Regulatory authorities in Germany have also announced that they will examine DeepSeek more closely.
Data protection problems
Storage and use of user data
DeepSeek reserves the right in its terms of use to store all user input (“prompts”) and use it for training purposes. This also applies to sensitive information such as personal data or trade secrets that are inadvertently entered into prompts. In addition to chat histories and uploaded files, the data collected even includes the pattern and rhythm of keystrokes.
Processing in a third country that is not secure under data protection law
DeepSeek is developed and operated from China - a country that does not offer a level of data protection comparable to the GDPR. The provider offers neither a data processing agreement (DPA) nor standard contractual clauses or other mechanisms that could enable GDPR-compliant use.
Security risks and data leaks
In addition to the data protection concerns, a massive data leak was recently discovered at DeepSeek: security researchers found chat histories and secret access keys freely accessible on the internet. Although the vulnerability was quickly closed, the incident shows that DeepSeek has significant security problems.
Potential access by the Chinese authorities
As DeepSeek originates from China, there is a risk that Chinese authorities could gain direct access to stored data. This not only exacerbates the data protection problems, but also raises ethical questions about state surveillance and the possible use of DeepSeek for political or security-related purposes.
Recommendations for practice
In view of the considerable data protection and security risks, companies should exercise a high degree of caution when using DeepSeek.
Use of DeepSeek as a cloud service is not recommended
The use of DeepSeek via the internet (including API access) is currently not GDPR-compliant and should be avoided. Companies expose themselves to high legal risks by using the software.
On-premises installation as an alternative
If a company would like to use DeepSeek anyway, this should only be done on the company's own servers (on-premises) in order to retain control over data processing. However, further data protection measures must be taken into account:
- Data protection impact assessment (DPIA): A DPIA should be carried out before implementation in order to assess potential risks.
- Documentation in the record of processing activities (RoPA): The use must be documented in detail.
- Technical and organizational measures (TOM): Security measures to secure the AI system are essential.
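One technical measure of this kind is screening prompts for obvious personal data before they ever reach an AI model. The following is a minimal sketch, not part of any DeepSeek tooling: the `redact_prompt` helper and its regular expressions are illustrative assumptions, and a real deployment would need far more robust detection (e.g. a dedicated PII-detection service).

```python
import re

# Illustrative patterns only -- real PII detection needs far more than regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "PHONE": re.compile(r"\+\d{6,15}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace obvious personal data with placeholders before the prompt
    is sent to any AI model (local or remote)."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact_prompt("Contact max.mustermann@example.com or +4915112345678."))
# → Contact [EMAIL] or [PHONE].
```

A filter of this kind does not replace a DPIA or proper access controls, but it reduces the risk of personal data inadvertently ending up in training material.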
Internal guidelines for the use of AI
Regardless of DeepSeek, companies should develop clear internal guidelines for the use of AI models. These should define in particular:
- Which AI services may be used,
- Which data may be entered into AI models,
- Which security measures must be followed.
Summary
DeepSeek is a technologically impressive but highly problematic AI platform in terms of data protection. The service stores user input, processes data in a third country that is unsafe under data protection law and has serious security risks.
For companies, this means:
- The use of the cloud service is not GDPR-compliant and should be avoided.
- On-premises solutions can be an alternative, but require comprehensive protective measures.
- Clear internal AI guidelines are essential to ensure the responsible use of AI.
Until DeepSeek offers comprehensive data protection and security guarantees, its use in companies with high data protection requirements is not recommended. Instead, companies should look for GDPR-compliant alternatives.