APAC enterprises face data security threats from unchecked Shadow AI, warns Mimecast

How do we stop employees from exposing businesses to data leaks and compliance risks through unsanctioned AI tools?

Enterprises are adopting AI and embedding it into operations to drive efficiency and innovation. Yet a growing, often unnoticed challenge is emerging – Shadow AI. Shadow AI occurs when employees use unsanctioned AI tools to streamline tasks, automate processes, and analyse data, often without IT or security approval.

This creates a blind spot for IT leaders, exposing enterprises to security breaches, data leaks, and regulatory non-compliance. DeepSeek, for instance, surpassed 20 million daily active users just 20 days after its launch in January 2025 – roughly 40 percent of OpenAI's ChatGPT.

Mimecast’s vice president and GM of Asia Pacific and Japan, David Sajoto, told iTnews Asia that Shadow AI is already happening in organisations, often without the IT team’s knowledge. Unlike traditional shadow IT, where unauthorised applications can be restricted, AI poses a deeper challenge – once information is fed into an external AI model, it cannot be retrieved or erased.

A huge concern is that employees are unaware that uploading proprietary data into public AI models means losing control over that data forever, Sajoto said.

He noted that DeepSeek, for example, explicitly states that user data is stored in mainland China and governed by Chinese regulations, raising security and compliance concerns.

The need for a human-centric security approach

IT leaders may assume an approved AI tool list is enough to control usage. However, in remote and hybrid work environments, employees can still experiment with external AI tools for document creation, workflow automation, or data analysis.

Sajoto suggests a four-step approach, noting that employees are both an organisation’s biggest asset and its weakest link in cybersecurity.

First, a human-centric security approach helps IT teams understand employee behaviour, identify high-risk actions, and create training programs that prevent security lapses.

“We need to shift the focus from restricting AI usage to actively managing the human risk factor,” said Sajoto.

Second, enterprises should align AI policies with country-specific regulations, ensuring compliance with frameworks such as ISO 42001.

Organisations should also set clear internal policies on AI usage – covering which AI tools are permitted, how data should be handled, and what security controls must be in place – to prevent unauthorised use.

Third, organisations need to deploy AI governance tools that provide real-time monitoring and control over AI usage within their networks, he said.

Solutions such as insider risk management platforms help enterprises track AI interactions, flag suspicious data transfers, and enforce security policies automatically.

Fourth, “as AI tools evolve, ongoing employee training helps staff understand the risks of unsanctioned AI, while regular policy updates address emerging threats,” said Sajoto.

Governments can help manage associated risks

While no specific incidents of Shadow AI-related breaches have been publicly detailed in the APAC or ASEAN region, there is a trend of enterprises and governments working to manage the associated risks.

Sajoto said organisations are developing guidelines at national, regional, and industry levels, while local governments are stepping up efforts by introducing regulatory frameworks.

He cited Singapore’s AI Governance Framework as a recent example.

The framework aims to strike a balance between enabling employees to leverage AI tools and ensuring that organisations maintain control over governance and security.

Traditional training programs fail because they apply a one-size-fits-all approach that ignores individual risk levels.

Enterprises should implement risk-based training programs that assess employee behaviour and assign risk scores.

Continuous monitoring and behavioural analytics help IT teams detect high-risk actions before they escalate into security threats.

- David Sajoto, VP & GM, APJ, Mimecast

He added that by combining technology with employee awareness, organisations build a security-first culture where compliance becomes second nature.

With targeted training, real-time monitoring, and clear AI policies, enterprises can stay ahead of Shadow AI risks and protect sensitive data without stifling innovation, Sajoto said.

© iTnews Asia