Chinese AI startup DeepSeek has made waves since its debut on 20 January, prompting regulatory scrutiny across Asia Pacific (APAC). Governments in South Korea, Australia, and Taiwan have already taken action, citing concerns over data collection and potential misuse.
As debates over data privacy, ethics, national security, and data leakage intensify, organizations must address these issues even as AI adoption accelerates and investment surges across the region. For example, Singapore recently announced its intention to allocate S$150 million to support AI development and deployment.
Despite this rapid pace of AI adoption and investment, only 16 out of 47 jurisdictions in APAC have some form of AI guidance or regulation. The region lacks a unified framework governing generative AI and large language models (LLMs), making compliance a country-by-country challenge and leaving organizations to set their own guardrails.
Understandably, this makes organizations anxious. Generative AI was singled out as the fastest-growing area of concern in Proofpoint’s 2024 Voice of the CISO Report. In APAC, CISOs in South Korea are the most likely to feel at risk from generative AI.
As generative AI, LLMs, and other advanced applications become more integrated into daily workflows, simply banning generative AI tools is not an option for forward-thinking businesses. Instead, cybersecurity responses must evolve to be flexible, human-centric, and highly tailored. Many users of generative AI tools adopt a ‘reward versus risk’ mindset, unfortunately leaving business and cyber risks as an afterthought. This shift demands that organizations move beyond traditional, content-focused Data Loss Prevention (DLP) products and adopt behavior- and human-centric platforms that protect against careless, malicious, or compromised users across email, endpoint, and cloud.
AI in the workplace: What are the risks?
Data protection and privacy remain central concerns when it comes to LLMs. Tools like DeepSeek and ChatGPT rely on user input – such as uploaded text or prompts – to generate responses.
Take DeepSeek, for example. Users can paste text into its prompt box and split content across multiple prompts for even larger data inputs – increasing the potential for data loss, misuse, and exposure. Seemingly minor tasks, such as running an internal email through generative AI to improve its structure or grammar, can introduce security risks: the content could contain sensitive information such as Personally Identifiable Information (PII) or confidential company financials, and once submitted it may persist outside the organization’s control indefinitely.
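To make this concrete, here is a minimal sketch of a pre-submission check that flags obvious PII before text is pasted into a generative AI prompt. The patterns, names, and blocking logic are illustrative assumptions only, not any vendor's product behavior; commercial DLP tools use far richer detection, but the idea is the same: inspect content before it leaves the organization.

```python
import re

# Illustrative patterns only; real DLP products use far richer detection.
PII_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone number": re.compile(r"\b(?:\+?\d{1,3}[ -]?)?(?:\d[ -]?){8,12}\d\b"),
    "card-like number": re.compile(r"\b(?:\d[ -]?){12,18}\d\b"),
}

def flag_sensitive_content(text: str) -> list[str]:
    """Return the PII categories detected in the given text."""
    return [label for label, pattern in PII_PATTERNS.items() if pattern.search(text)]

# Hypothetical example: a draft email that should never reach an external AI tool.
draft = "Hi team, please charge card 4111 1111 1111 1111 and cc jane.doe@example.com."
findings = flag_sensitive_content(draft)
if findings:
    print("Do not paste into an external AI tool - contains:", ", ".join(findings))
```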
AI systems can unintentionally expose sensitive information in many ways, from overfitting and inadequate data sanitization to unauthorized integration into personal devices. Australia and Taiwan have banned their government agencies from using DeepSeek, while Japan’s digital transformation minister recently cautioned public officials against using the application. In Australia, a hospital in Perth banned its doctors from using ChatGPT after discovering staff had been using the software to write medical notes that were then uploaded to patient record systems. While no confidential information was breached on that occasion, the risks are significant.
Governance and regulation
A major challenge for organizations is maintaining visibility into who is using generative AI technologies, why, and how. Without this visibility, it becomes impossible to monitor data inputs or put proactive cybersecurity policies in place to thwart risks.
Another pressing issue is ensuring compliance with AI regulations and data protection laws across the multiple jurisdictions in which organizations operate. For example, Vietnam’s draft Digital Technology Industry law outlines several requirements for AI deployment, such as mandates for energy efficiency and ethical use. This fragmented regulatory landscape makes compliance an ongoing struggle for organizations operating in multiple APAC markets.
Beyond input risks, AI-generated outputs can be just as fraught. When generative AI tools are used to produce code or content, there is no guarantee that the output is free from plagiarism, vulnerabilities, and inaccuracies – leaving organizations exposed to security breaches and to issues with patents and registered IP.
Threat vectors
Just as we use generative AI to enhance the productivity and quality of our work, so do threat actors, who train AI models on vast datasets – drawn from sources such as social media feeds and chat logs – to hyper-personalise their techniques and create convincing lures.
AI has eliminated common scam giveaways such as mistranslations, spelling mistakes, and grammatical errors. In addition, the open nature of generative AI platforms removes traditional barriers like skill level or cost, making sophisticated cyberattacks accessible to anyone with basic technical knowledge and malicious intent.
In October 2024, OpenAI acknowledged that it had disrupted over 20 “operations and deceptive networks from around the world,” following Proofpoint’s report of the first signs of such activity. This marked the first official confirmation that mainstream AI tools could be used to enhance offensive cyber operations.
A people problem needs a people solution
AI presents a double-edged sword – while it drives innovation, the risk of data leakage is a serious concern. With 1 percent of users responsible for 88 percent of data loss events, it is undeniable that the human element still sits at the very core of cybersecurity.
Mitigating the human factor of data loss requires a three-pronged approach: who, why, and how.
- Know who is using AI: Identify which employees are interacting with AI tools and in what context.
- Understand why they are using it: Are they using AI for productivity, or perhaps to deepen their understanding of a subject? Understanding the context behind AI usage is critical.
- Gain visibility into how it is being used: Ensure that user behavior aligns with organizational security policies (a simple log-analysis sketch follows this list).
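As a rough illustration of what this visibility can look like in practice, the sketch below summarizes generative AI usage from a hypothetical web-gateway log export. The column names, domains, and threshold are assumptions made for the example, not a reference to any specific product or data source.

```python
import csv
from collections import Counter, defaultdict

# Assumed (hypothetical) log format: CSV with columns user, destination_domain, bytes_uploaded.
GENAI_DOMAINS = {"chat.deepseek.com", "chatgpt.com", "gemini.google.com"}  # example domains only

def summarize_genai_usage(log_path: str):
    """Tally visits and upload volume to generative AI domains per user."""
    visits = Counter()
    uploaded = defaultdict(int)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["destination_domain"] in GENAI_DOMAINS:
                visits[row["user"]] += 1
                uploaded[row["user"]] += int(row["bytes_uploaded"])
    return visits, uploaded

visits, uploaded = summarize_genai_usage("gateway_export.csv")
for user, sent in uploaded.items():
    if sent > 5_000_000:  # illustrative threshold: ~5 MB uploaded to AI tools
        print(f"Review: {user} sent {sent} bytes across {visits[user]} visits to generative AI services.")
```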
Building this picture will help address potential risks arising from AI usage. Without context, you cannot determine intent or build evidence around incidents and offending users. Whenever you need to confront a malicious insider, you should have irrefutable evidence. One Proofpoint customer in APAC whose employee took sensitive data explained, “If we found someone leaking company data, we can’t afford to be wrong. Visibility is in the best interest of every single user in the company. We need to walk through what exactly a user was doing.”
Organizations need a blended, multi-layered approach that combines threat intelligence, behavioral AI, detection engineering, and semantic AI to block, detect, and respond to advanced threats and data loss incidents. This holistic, human-centric approach surfaces behaviors that traditional, content-centric systems usually miss, resulting in higher efficacy, greater operational efficiency, and far fewer false positives.
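As a simplified illustration of what behavior-centric detection means in contrast to content-only inspection, the toy check below compares a user's upload volume today against their own historical baseline. It is a deliberately crude z-score test under assumed inputs, not how any particular platform works.

```python
from statistics import mean, stdev

def is_anomalous(history_kb: list[int], today_kb: int, z_threshold: float = 3.0) -> bool:
    """Flag today's upload volume if it deviates sharply from the user's own baseline."""
    if len(history_kb) < 5:
        return False  # not enough history to establish a baseline
    spread = stdev(history_kb)
    if spread == 0:
        return today_kb != history_kb[0]
    return (today_kb - mean(history_kb)) / spread > z_threshold

# A user who normally uploads a few hundred KB a day suddenly moves ~50 MB.
baseline = [220, 310, 180, 260, 240, 290, 205]
print(is_anomalous(baseline, today_kb=50_000))  # True
```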
Finally, while tools matter, they mean little without user awareness and behavioral change. It is vital to address AI risks in DLP policies, set clear parameters for its use and deliver ongoing security training targeted to the vulnerabilities, roles and competencies of your employees. After all, even the world’s most technologically advanced cyber risks are no match for security-savvy staff.
Jennifer Cheng is Cybersecurity Strategist, APJ at Proofpoint.
Jennifer is the Cybersecurity Strategist for Asia Pacific and Japan (APJ) at Proofpoint, where she drives product marketing strategy across the APJ region. In her role, she provides expertise to customers and partners on Proofpoint’s strategies within people-centric security, risk management, data privacy, security awareness, and compliance.
Prior to joining Proofpoint, Jennifer held various strategic and technical roles at Cisco and most recently, WatchDox where she drove all aspects of product go-to-market through to sales enablement and content development.
Jennifer holds a Master of Business Administration in Strategic and Entrepreneurial Management from the Wharton School of the University of Pennsylvania and a BSE in Computer Science from the University of Michigan.
TNGlobal INSIDER publishes contributions relevant to entrepreneurship and innovation. You may submit your own original or published contributions subject to editorial discretion.