
Artificial intelligence continues to evolve by leaps and bounds, but it is not without its challenges. DeepSeek, an open-source AI chatbot developed in China that competes with giants like OpenAI and Google, is at the centre of a scandal following the revelation of a serious security breach which has put the data of millions of users around the world at risk.
Researchers at cloud security firm Wiz identified a publicly accessible DeepSeek database with no authentication restrictions. The database, running on ClickHouse, an open-source columnar database management system, contained sensitive information such as API keys, plain-text chat messages, internal system logs and user requests. Such exposure represents a significant risk to both users and the company, opening the door to cyberattacks and misuse of personal data.
DeepSeek vulnerabilities raise concerns

Wiz's report details how access to the database was surprisingly easy. With a cursory scan, researchers found the access path through ClickHouse's HTTP interface, which allowed SQL queries to be executed directly from a browser. This level of accessibility is alarming in any context, but it is even more worrying when it comes to a chatbot that handles confidential information.
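To illustrate how low the barrier was, the sketch below shows how an analyst might probe an exposed ClickHouse HTTP interface. The host name is a placeholder rather than the actual endpoint Wiz found, and the queries are generic examples, not the firm's methodology.

```python
# Illustrative only: probing a hypothetical, unauthenticated ClickHouse
# HTTP interface. The endpoint below is a placeholder, not DeepSeek's.
import requests

ENDPOINT = "http://clickhouse.example.com:8123"  # assumed exposed host and port

# ClickHouse's HTTP interface accepts SQL as a plain query parameter, so if
# no password is configured, a single unauthenticated GET returns live data.
tables = requests.get(ENDPOINT, params={"query": "SHOW TABLES"}, timeout=10)
print(tables.text)

# Sampling rows works the same way; FORMAT JSONEachRow makes the output
# easy to inspect programmatically.
sample = requests.get(
    ENDPOINT,
    params={"query": "SELECT name FROM system.tables LIMIT 5 FORMAT JSONEachRow"},
    timeout=10,
)
print(sample.text)
```

Because the same interface can be opened directly in a browser, no specialised tooling is needed, which is precisely why an unauthenticated ClickHouse endpoint is so dangerous.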
According to the specialists, the database contained chat messages, many of them in Chinese, although messages in other languages cannot be ruled out. The researchers also found internal system paths and API keys used to authenticate requests. An attacker could have used this information to manipulate DeepSeek's internal systems, access even more critical data, or compromise the company's entire infrastructure.
In response, the researchers attempted to alert DeepSeek through various channels, including emails and LinkedIn profiles associated with the company. Although they initially received no reply, the public database was locked down about 30 minutes after these attempts, leaving its contents inaccessible to external users.
Impact on trust towards generative AI platforms
The discovered security flaw raises serious questions about DeepSeek's maturity and readiness to handle sensitive information. According to Ami Luttwak, CTO of Wiz, these kinds of errors are unacceptable in systems that seek to earn the trust of users and companies around the world. Although security lapses are not uncommon in the technology sector, the fact that this gap was so easy to find and exploit points to a lack of basic security measures.
The incident has also reopened the debate on the risks associated with AI technologies developed in China. Both Ireland and Italy, among other European Union countries, have requested information on how DeepSeek handles user data and whether it is stored on servers in China. The episode recalls earlier cases, such as the temporary blocking of ChatGPT in Italy in 2023 over similar concerns.
The open source dilemma

As an open-source project, DeepSeek offers certain advantages, such as accessibility for developers and startups. However, this openness also increases the risk of malicious actors exploiting vulnerabilities in its infrastructure, as happened in this case. Security experts warn that models like DeepSeek must be thoroughly audited and continuously monitored to prevent such breaches.
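As an example of what continuous monitoring could look like in practice, the sketch below checks whether a ClickHouse HTTP endpoint answers unauthenticated queries. The endpoint list is hypothetical and the check is a simplified illustration, not a description of DeepSeek's or Wiz's actual tooling.

```python
# Hypothetical recurring check: flag any ClickHouse HTTP endpoint that
# answers SQL without credentials. The endpoints here are placeholders.
import requests

ENDPOINTS = [
    "http://db1.internal.example:8123",
    "http://db2.internal.example:8123",
]

def accepts_unauthenticated_sql(url: str) -> bool:
    """Return True if the endpoint executes a trivial query with no credentials."""
    try:
        resp = requests.get(url, params={"query": "SELECT 1"}, timeout=5)
    except requests.RequestException:
        return False  # unreachable or connection refused: not exposed to us
    # An open ClickHouse instance returns HTTP 200 and the literal result "1".
    return resp.status_code == 200 and resp.text.strip() == "1"

for endpoint in ENDPOINTS:
    if accepts_unauthenticated_sql(endpoint):
        print(f"ALERT: {endpoint} accepts unauthenticated SQL queries")
```

Run on a schedule against a company's own inventory, a check like this would have flagged the exposed database long before outside researchers did.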
As DeepSeek's influence continues to grow, so do concerns about privacy and cybersecurity. The chatbot's terms of service reveal that the company collects and retains user data indefinitely, a fact that could spark further criticism and international scrutiny.
This incident underscores the importance of establishing stricter global standards to ensure the security and privacy of user data. In a world increasingly dependent on these technologies, developers must treat trust and transparency as fundamental pillars of their work.