
How does shadow or unconscious AI use increase cyber incidents and risk?

Unauthorised, 'unconscious' use of AI, often referred to as "shadow AI", barely existed three years ago, yet it has significantly increased the risk of cyber incidents and other compliance breaches in firms through several mechanisms. The main drivers are unsanctioned data sharing with public AI tools, a lack of policy and enforcement, and the exploitation of these channels by threat actors to access or exfiltrate sensitive data.


  • Higher Breach Rates and Costs: In 2025, shadow AI accounted for approximately 20% of global data breaches, and these incidents were more expensive to resolve, adding an average of $670,000 per breach compared to those involving sanctioned AI. This is partly because shadow AI systems often go undetected within organisations, giving attackers more time to exploit vulnerabilities.


  • Unauthorised Data Exposure: Employees using unsanctioned AI services (e.g., ChatGPT, Gemini, Bard with personal accounts) have been found to paste or upload sensitive data, including proprietary code, customer details, and HR records, without proper security measures. These actions dramatically raise the risk of data leakage, which can then be exploited by cybercriminals for social engineering or other attacks.


  • Lack of Security Controls and Oversight: Nearly all organisations that experienced an AI-related breach lacked adequate AI access controls, with many reporting no solid AI governance or monitoring policies. The absence of visibility into shadow AI usage creates blind spots for security teams, making it harder to prevent or detect data exfiltration or compromise.


  • Exploitable Attack Surface: Attackers target vulnerabilities in AI systems that are unsupervised and not managed by corporate IT, such as downloaded models from open repositories or SaaS AI tools accessed without IT approval. These weak points can be used to inject malicious code, harvest credentials, or access sensitive company information.


  • Amplification of Attacker Capabilities: Cybercriminals themselves leverage AI to craft advanced phishing campaigns, automate reconnaissance, and exploit misconfigured or poorly monitored shadow AI deployments. In 2025, 87% of organisations reported experiencing an AI-driven cyberattack, and multichannel attacks using or targeting shadow AI increased in frequency and sophistication.


  • Unintentional Leakage of IP: Staff using their own AI apps, without the protections of corporate licences, can pass intellectual property into public LLMs, feeding vital information not just to criminals but to the competition.
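One way security teams start to close the visibility gap described above is by mining outbound proxy or DNS logs for traffic to public AI services. The sketch below is a minimal, hypothetical Python example; the domain list, log format, and field order are illustrative assumptions, not a definitive detection method.

```python
# Minimal sketch: flag outbound requests to public AI tools in a proxy log.
# The domain list and the 'timestamp user domain' log format are assumptions.

PUBLIC_AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
}

def find_shadow_ai_events(log_lines):
    """Return (user, domain) pairs for requests to known public AI domains."""
    events = []
    for line in log_lines:
        parts = line.split()
        if len(parts) < 3:
            continue  # skip malformed lines rather than failing the scan
        _, user, domain = parts[0], parts[1], parts[2]
        if domain in PUBLIC_AI_DOMAINS:
            events.append((user, domain))
    return events

sample_log = [
    "2025-06-01T09:14:02 jsmith chatgpt.com",
    "2025-06-01T09:15:10 akhan intranet.example.com",
    "2025-06-01T09:16:44 jsmith claude.ai",
]
print(find_shadow_ai_events(sample_log))
```

In practice such a scan only surfaces usage, not intent; it is a starting point for a conversation and a sanctioned-tools policy, not a substitute for one.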


What common causes link piecemeal AI adoption to increased corporate cyber or regulatory risk?


Piecemeal AI adoption - where AI is implemented independently by various teams or for isolated projects without central coordination - commonly increases corporate cyber and regulatory risk due to several key causes:


  • Shadow AI and Unmonitored Deployments: Employees or business units often adopt unsanctioned or unvetted AI tools, creating shadow IT. This bypasses established IT security policies and increases the attack surface, allowing data leaks, model manipulation, or unauthorised data access to go undetected.


  • Fragmented Data Governance: Siloed AI projects fail to enforce consistent standards on data use, access, and storage. This inconsistency can lead to the improper handling of sensitive information, raising the risk of regulatory violations (such as those related to GDPR or the EU AI Act).


  • Inconsistent Security Controls: Without central oversight, some AI deployments lack necessary encryption, authentication, and access management, making them easier targets for hackers and increasing the chance of breaches.


  • Patchwork Vendor and Cloud Practices: When multiple teams use different AI vendors or cloud platforms, each with separate security and compliance arrangements, the result is gaps in assurance, supply chain vulnerabilities, and overlapping or conflicting compliance requirements.


  • Lack of Monitoring, Audit, and Explainability: Decentralised deployments rarely support robust logging, monitoring, or explainability. This undermines detection of misuse, makes regulatory reporting more difficult, and impedes incident response.


  • Regulatory Gaps: Fragmented AI adoption means compliance teams often do not have a full view of how AI is being used, resulting in missed requirements, errors in legal disclosures, and difficulty demonstrating compliance in audits or investigations.


In summary, a lack of centralised risk management, oversight, and policy enforcement in piecemeal AI adoption often translates directly into heightened cyber exposures and more frequent regulatory lapses for organisations.
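To make the summary concrete: a central register of AI deployments lets one compliance team check every tool against a single baseline instead of relying on each team's separate arrangements. Below is a minimal sketch; the inventory structure and the control names are invented for illustration, not a prescribed governance framework.

```python
# Minimal sketch: audit a central AI inventory against a baseline of controls.
# The control names and inventory layout are illustrative assumptions.

REQUIRED_CONTROLS = {"encryption", "access_management",
                     "audit_logging", "named_owner"}

def audit_inventory(inventory):
    """Return {tool_name: missing_controls} for tools below the baseline.

    'inventory' maps each registered AI tool to the list of controls
    it currently has in place.
    """
    gaps = {}
    for tool, controls in inventory.items():
        missing = REQUIRED_CONTROLS - set(controls)
        if missing:
            gaps[tool] = sorted(missing)
    return gaps

inventory = {
    "marketing-copilot": ["encryption", "access_management"],
    "hr-screening-model": ["encryption", "access_management",
                           "audit_logging", "named_owner"],
}
print(audit_inventory(inventory))
```

The value is less in the code than in the discipline it encodes: every deployment is registered, every gap is visible, and audit evidence is one query away rather than scattered across teams.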


Learn How to Build Your Strategy to Assure AI Transformation Success


The AI Leadership Labs has created a new four-part executive training programme on how to lead your company on its journey to find your 'AI North Star' and achieve top-quartile success in transforming your business with AI. This is the only UK course purpose-designed to help you design and take forward a truly holistic change programme for rapid AI adoption.


                 "Brand-Led AI Mastery: From IP Protection to Customer Delight".  

Discover how to lead your team to stay one step ahead with a 'unified strategy' for securely elevating the value of your brand with AI innovation, scaling operations, sales, and marketing with confidence.


Join the founder of The AI Leadership Labs, together with fellow authors like NXT Horizon's Jonathan Pieterse, Centrica's Steve King, and a team with 200+ years of combined practical experience in leading strategic business change programmes and AI.


Learn how to transform the entire way you lead and build outstanding, trusted brands in the age of AI. The workshops are designed to help you find your 'AI North Star' and convert today's AI challenge into a limitless competitive advantage.




Join the AI Leadership Labs Group and convert ideas and knowledge into peer-reviewed, actionable plans that reduce commercial risk and increase gains:
