
Italy’s competition authority has closed its investigation into the Chinese artificial intelligence platform DeepSeek. The decision followed the company’s agreement to strengthen warnings about inaccurate AI-generated responses, often referred to as “hallucinations.” The case reflects Europe’s growing focus on transparency and reliability in generative AI tools.
Why the Investigation Began
The probe began in June 2025 and was led by Italy’s antitrust and consumer protection authority, the Autorità Garante della Concorrenza e del Mercato (AGCM).
Regulators raised concerns that DeepSeek did not clearly inform users about the risk of false or misleading AI-generated outputs. Authorities warned that users could mistakenly treat AI-generated responses as verified information, especially in sensitive contexts.
Hallucinations occur when artificial intelligence systems confidently produce incorrect or fabricated answers. This limitation is widely recognized in large language models and remains a key challenge in generative AI development.
Background: https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)
Commitments Made by DeepSeek
DeepSeek avoided penalties by offering binding commitments to the regulator. The company agreed to introduce clearer and more visible warnings for users. These notices now appear earlier and explain the limits of AI-generated content in simple terms.
The firm also committed to improved disclosures and ongoing compliance checks. The AGCM said these steps made risk information easier to understand and more immediate for users. Based on these improvements, the authority closed the case without imposing fines.
A Wider Regulatory Backdrop
Italy’s decision fits into a broader European push to regulate artificial intelligence. The country had previously taken action against DeepSeek over data protection concerns. Similar investigations are now emerging in other European markets as regulators increase oversight of consumer-facing AI tools.
DeepSeek has gained attention for developing low-cost AI models that compete with Western platforms. However, its rapid expansion has also attracted regulatory scrutiny across jurisdictions.
What This Means for AI Users
The decision sends a clear signal to AI companies operating in Europe. Platforms must actively warn users about potential inaccuracies in AI-generated content, and risk disclosures can no longer remain buried in terms and conditions.
Although no fine was imposed, the regulator retained oversight powers. Authorities can still act if DeepSeek fails to meet its commitments. The case sets a precedent for how AI transparency may be enforced worldwide.