COMMENTARY: The newest AI model to emerge from China shook up the AI industry this week. DeepSeek was reportedly built on a meager $6 million budget, compared with the $250 billion U.S. companies will spend on AI infrastructure this year. It's efficient: DeepSeek's algorithms apparently cut down on the data processing time needed for model training. And it's open source: developers can access, modify, and contribute to its code, promising greater collaboration and AI innovation.
That said, DeepSeek's cutting-edge capabilities come with significant cybersecurity and privacy risks. Let's examine how threat actors could exploit DeepSeek technology:
- Spear phishing: AI-powered spear phishing attacks can trick more than 50% of their targets. With DeepSeek's advanced language generation capabilities, threat actors, including non-native English speakers, can craft convincing, targeted spear phishing messages that appear to come from legitimate and trusted sources. These messages can be tailored to a target's interests, job role, online interactions, and social media connections. DeepSeek can also help automate such attacks and aid in identifying and selecting targets.
- Bias and misinformation: Observers already allege that DeepSeek carries a pro-China bias and censors outputs in ways that favor Chinese Communist Party narratives. State-sponsored threat actors can leverage DeepSeek to scan social media platforms, online forums, and news trends. The model could then pinpoint divisive topics and generate content aimed at deepening societal polarization and spreading propaganda.
- Impersonation: If DeepSeek gains generative capabilities such as text-to-video or text-to-audio, attackers could in theory use it to create deepfakes (synthetic audio and video) and operationalize them in online fraud. Its text-to-image generator, dubbed Janus-Pro, has already raised concerns about potential misuse for creating deepfakes and promoting disinformation. Attackers can also train the AI tool to mimic someone's writing style and mannerisms, making it appear as though a trusted person is communicating with the target.
- Data security and privacy risks: Employees might unknowingly share critical business data, trade secrets, or customer information while interacting with the AI, and that data could be exposed or fall into the hands of malicious actors if the chatbot's security vulnerabilities are exploited (a minimal pre-prompt screening sketch follows this list). What makes this even more worrisome: DeepSeek is based in China, so its data is stored on Chinese servers, and the Chinese government can access that information when needed.
- Profiling and surveillance campaigns: With its ability to process large datasets, DeepSeek can build detailed profiles of individuals, corporations, and governments. In the event of a data breach, attackers could feed sensitive information such as healthcare records and financial data into the model to sharpen its predictive capabilities. For example, the tool can categorize individuals based on their online activities, and attackers can weaponize those insights to predict movements and deploy targeted social engineering attacks.
- Open-source challenges: As an open-source tool, DeepSeek exposes its code to developers and malicious actors alike. That openness can lead to vulnerabilities being discovered and exploited, since attackers can freely inspect and modify the code for their own nefarious purposes. Furthermore, because anyone can modify or redistribute the tool, malicious versions could be deployed, potentially making the model a vehicle for more devastating cyberattacks.
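To make the data-exposure risk above concrete, here is a minimal sketch of a pre-prompt screening step an organization could run before any text leaves for an external chatbot. The patterns and names are hypothetical placeholders, not a production DLP ruleset; a real deployment would rely on a dedicated DLP engine tuned to the organization's own data.

```python
import re

# Illustrative patterns only (hypothetical, not a vetted ruleset); a real
# deployment would use a proper DLP engine with organization-specific rules.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "internal_marker": re.compile(r"\b(confidential|trade secret|internal only)\b", re.I),
}

def screen_prompt(prompt: str):
    """Return (allowed, findings) for text bound for an external chatbot."""
    findings = [name for name, pattern in SENSITIVE_PATTERNS.items()
                if pattern.search(prompt)]
    return (not findings, findings)

if __name__ == "__main__":
    allowed, findings = screen_prompt(
        "Summarize our Q3 numbers. Card on file: 4111 1111 1111 1111"
    )
    print("allowed" if allowed else f"blocked: {findings}")
```

Running a check like this at a forward proxy or browser extension, rather than trusting users to self-censor, keeps the control outside the chatbot itself.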
How to mitigate the risks posed by DeepSeek
Organizations can take several proactive steps to safeguard themselves from the risks associated with DeepSeek and similar advanced AI models.
- Train employees on AI risks: It's now critical to train employees to identify social engineering attacks such as AI-generated spear phishing and deepfake content. Beyond social engineering, staff should also understand the other threats DeepSeek poses, such as bias and misinformation, data privacy exposure, and online profiling and surveillance.
- Limit access: Unless it's business-critical, organizations should block access to the DeepSeek app, website, and APIs (see the egress-filter sketch after this list). Organizations that want a local deployment should first perform a thorough code audit to identify security weaknesses or vulnerabilities. We also recommend restricting DeepSeek from interacting with sensitive or confidential data, such as trade secrets, healthcare records, financial data, and employee details.
- Set clear AI policies with proactive communications: Companies must establish robust policies and governance around the use of AI. They need to proactively communicate these policies to all employees, detailing acceptable-use guidelines and the approval procedures required to use such tools.
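As one concrete way to enforce the access limits above, the sketch below checks outbound URLs against a domain denylist. The domain list is an assumption for illustration, not a verified inventory of DeepSeek endpoints, and in practice this control belongs at the DNS, proxy, or firewall layer rather than in application code.

```python
from urllib.parse import urlparse

# Illustrative denylist; verify the current DeepSeek endpoints before deploying.
BLOCKED_DOMAINS = {"deepseek.com"}  # subdomains (api., chat., ...) match below

def is_blocked(url: str) -> bool:
    """Return True if the URL's host is a denylisted domain or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

if __name__ == "__main__":
    for url in ("https://chat.deepseek.com/", "https://example.com/"):
        print(url, "->", "BLOCK" if is_blocked(url) else "allow")
```

The same matching logic translates directly into proxy ACLs or DNS sinkhole entries, which also cover non-browser traffic such as scripts calling the API.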
While DeepSeek delivers impressive performance and efficiency at a fraction of the cost, it raises major cybersecurity and privacy concerns. Threat actors will soon harness tools like it to make their attacks more deceptive and successful. To mitigate the risks posed by foreign AI models like DeepSeek, organizations and individuals must remain vigilant, tighten AI usage guidelines, educate and train their teams on AI risks, and, where feasible, block these models from entering the organization or accessing data.
Stu Sjouwerman, founder and CEO, KnowBe4
SC Media Perspectives columns are written by a trusted community of SC Media cybersecurity subject matter experts. Each contribution has a goal of bringing a unique voice to important cybersecurity topics. Content strives to be of the highest quality, objective and non-commercial.