Updated 09 Apr, 2026
Published 09 Apr, 2026

AI – The new attack surface

Article 2 of 5 in the series “When security fails – and trust breaks”


AI assistants are rapidly becoming part of the core infrastructure of many organisations.

They are connected to strategy documents, customer information and internal processes. That makes them extremely useful – but if security is not in place, also extremely vulnerable.

Many organisations are adopting AI quickly and effectively. Fewer are asking the critical question:

What does the AI system actually have access to – and who could exploit that?

Why this matters now

AI assistants are gaining increasingly broad access to organisational data – from documents and email to internal processes and decision-making foundations.

At the same time, attackers are developing their own AI tools to find weaknesses faster than humans can respond.

The result is a new kind of attack surface: not just systems and data – but the decision-making logic of the organisation itself.

AI security is not the same as traditional application security

Many working in IT security are well acquainted with classic attack types – SQL injection, cross-site scripting, misconfigured access controls. That knowledge is important and remains relevant.

But AI systems introduce a new class of challenges that require a different way of thinking.

Traditional application security       | AI system security
Protect code and data                  | Protect code, data and decision logic
Known attack types (SQLi, XSS, etc.)   | New types: prompt injection, agent misuse
Logging of actions                     | Logging of interactions and responses
Access control per user                | Access control per role, context and query
Risk assessment at launch              | Ongoing risk assessment as AI learns and changes

What makes AI particularly demanding from a security perspective is that these are decision-making systems. If the data foundation – or the instructions the system operates under – is compromised, then the answers and decisions built on them are compromised too. That is not just a technical problem. It is a trust problem.
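The right-hand column of the table can feel abstract. As a minimal sketch of what "access control per role, context and query" might look like in practice – all roles, labels and function names below are illustrative assumptions, not a specific product:

```python
from dataclasses import dataclass

# Hypothetical policy check: in a traditional app, access is granted per user.
# For an AI assistant, each query is evaluated against the user's role, the
# channel it arrives on, and the classification of the data the answer draws on.

@dataclass
class QueryContext:
    user_role: str            # e.g. "hr", "sales", "engineering"
    channel: str              # e.g. "internal_chat", "public_api"
    data_classification: str  # e.g. "public", "internal", "confidential"

# Which classifications each role may reach (assumed labels for illustration).
ROLE_CLEARANCE = {
    "hr":          {"public", "internal", "confidential"},
    "sales":       {"public", "internal"},
    "engineering": {"public", "internal"},
}
TRUSTED_CHANNELS = {"internal_chat"}

def query_allowed(ctx: QueryContext) -> bool:
    """Deny by default: a query must pass role, context and data checks."""
    if ctx.data_classification == "confidential" and ctx.channel not in TRUSTED_CHANNELS:
        return False  # confidential data never flows through untrusted channels
    return ctx.data_classification in ROLE_CLEARANCE.get(ctx.user_role, set())

# Example: a sales user asking via the public API for confidential material is refused.
print(query_allowed(QueryContext("sales", "public_api", "confidential")))  # False
```

The point of the sketch is the shape of the decision, not the specific labels: every query is a fresh access-control event, not a one-time login.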

The most common security failures in AI implementations

In NetNordic’s security assessments of AI systems, we see the same patterns repeat. They are rarely spectacular. They are typically the result of AI being adopted quickly – and security review arriving too late.

Technical shortcomings we see most often:

  • Access permissions that are far too broad – the principle of least privilege has not been applied
  • Exposed API endpoints without adequate authentication
  • System prompts and instructions stored without access controls
  • No logging or monitoring of what users are actually asking the system
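The last point on that list is often the cheapest to fix. A minimal sketch of what logging AI interactions could look like – the function and field names here are assumptions for illustration, and in practice the wrapped call would hit your actual AI backend:

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_audit")
logging.basicConfig(level=logging.INFO)

def logged_completion(model_call, user_id: str, prompt: str) -> str:
    """Wrap any model call so every prompt and response leaves an audit trail."""
    response = model_call(prompt)
    logger.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "prompt": prompt,                 # consider redacting sensitive fields first
        "response_chars": len(response),  # log metadata, not necessarily full content
    }))
    return response

# Stubbed model for the example; real deployments would pass the production client.
fake_model = lambda p: "stubbed answer"
logged_completion(fake_model, "user-42", "Summarise the Q3 strategy document")
```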

Organisational shortcomings are equally common:

  • New AI technology adopted without prior risk assessment
  • No data classification before AI was given access to systems
  • Unclear who within the organisation “owns” security responsibility for the AI solution
  • AI systems not included in regular security testing

An important observation:

Most AI security issues we uncover are not caused by malicious intent from within.

They occur because someone adopted a powerful tool – without thinking through what it actually had access to.

AI versus AI: a qualitative shift in the threat landscape

In the previous article, we described an incident in which a global professional services firm had an internal AI assistant compromised. What matters is not just what happened, but how.

The attack was not carried out by a hacker manually testing the system. It was executed by an autonomous AI attack tool that independently identified targets, analysed the attack surface and exploited vulnerabilities – without any human intervention. The entire process took under two hours.

This is not a hypothetical scenario. Cyberattacks in Sweden increased by 30%, and DDoS attacks rose by 466%, after the country joined NATO – a pattern similar to Finland’s experience during its own accession in 2023. Tools like FraudGPT – a generative AI tool built for crafting phishing attacks and deceptive campaigns – are already in active use across the region.

This represents a qualitative shift in the threat landscape:

→ Attack speed is increasing dramatically – human response times are no longer sufficient

→ The threshold for sophisticated attacks is falling sharply. What once required expertise and time can now be automated

→ Attackers are already using AI to find weaknesses. That means organisations should use the same technology to test themselves

This is not a future risk. It is happening now – and it is happening to organisations that do not believe they are interesting enough to be targeted.

“When AI gains access to an organisation's data and processes, it also becomes part of the attack surface. The question is not just what AI can do for you – but what someone could get AI to do against you.”

– Stian Lysnes, Lead Security Consultant, NetNordic

The Nordic confidence gap

The data from Tietoevry’s Nordic Cyber Resilience Report 2024 is striking: 54% of organisations across Finland and Sweden reported experiencing at least one severe cyberattack in the past year. Nearly 9 in 10 expect the number of attacks to increase further – yet only 32% feel confident in their organisation’s ability to detect and respond to incidents.

That gap – between exposure and preparedness – is precisely where AI-related vulnerabilities tend to go undetected longest. An AI system that has been silently compromised does not trigger traditional security alerts. It simply starts providing subtly wrong answers.

54%
of Nordic organisations experienced at least one severe cyberattack in the past year (Tietoevry, 2024)
32%
feel confident in their ability to detect and respond to incidents (Tietoevry, 2024)
87%
expect the number of attacks to increase further in coming years (Tietoevry, 2024)

Security by design for AI systems: what it means in practice

The solution is not to halt AI adoption. The solution is to do it correctly.

Security by design means that security is assessed and built in before a system goes live – not cleaned up afterwards. For AI systems, this means:

Organisational measures                         | Technical measures
Classify data before AI is given access         | Least privilege consistently applied
Define security ownership for AI solutions      | Access segmentation per role and need
Risk assessment before new AI tools are adopted | Systematic testing of API endpoints
Training in secure use of AI tools              | Logging and monitoring of AI interactions

Both columns are essential. Technical measures without organisational buy-in are not maintained. And organisational processes without technical implementation create nothing but false reassurance.
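As a concrete illustration of the first row – classify data before AI is given access – a hedged sketch of a classification gate between a document store and an AI assistant’s index. The labels, document structure and ceiling are assumptions, not a specific product:

```python
# Hypothetical gate: unclassified material is rejected outright, and only
# labels at or below the ceiling approved for this assistant pass through.

ALLOWED_LABELS = {"public", "internal"}   # assumed ceiling for this assistant

def admit_to_index(documents: list[dict]) -> list[dict]:
    """Return only documents explicitly classified within the allowed ceiling."""
    admitted = []
    for doc in documents:
        label = doc.get("classification")
        if label is None:
            # No label means no access: forces classification before AI adoption.
            continue
        if label in ALLOWED_LABELS:
            admitted.append(doc)
    return admitted

docs = [
    {"id": 1, "classification": "public", "text": "Press release"},
    {"id": 2, "classification": "confidential", "text": "Board memo"},
    {"id": 3, "text": "Unlabelled legacy file"},
]
print([d["id"] for d in admit_to_index(docs)])  # [1]
```

Note the design choice: the unlabelled legacy file is excluded, not waved through. A gate like this only works if classification happens first – which is why the organisational column matters as much as the technical one.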

At NetNordic, we help organisations with AI security assessments, data classification, access policy and penetration testing of AI solutions – because we are convinced that AI can be adopted safely.

Five questions leadership should be able to answer about AI in the organisation

Regardless of which AI tools your organisation uses – built internally or sourced from third-party providers – these five questions are a sound starting point:

1. Which data do the AI systems have access to – and is all of that access necessary?

2. Who can query the system, and what answers can it provide?

3. Are the system’s instructions (system prompts) stored securely and tested? (See the sketch after this list.)

4. Are interactions with AI systems logged and analysed?

5. Are AI solutions included in the organisation’s regular security testing?
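On question 3, one lightweight pattern – sketched here under the assumption that the system prompt lives under version control and change review – is to treat the prompt like configuration: record its digest at review time and verify integrity before every deployment.

```python
import hashlib

def prompt_digest(prompt_text: str) -> str:
    """SHA-256 of the prompt, recorded whenever the prompt passes review."""
    return hashlib.sha256(prompt_text.encode("utf-8")).hexdigest()

# Illustrative prompt; the expected digest would be stored at review time.
reviewed_prompt = "You are the internal assistant. Never disclose customer data."
expected = prompt_digest(reviewed_prompt)

def verify_prompt(current_prompt: str) -> bool:
    """Refuse to deploy if the system prompt has drifted from its reviewed state."""
    return prompt_digest(current_prompt) == expected

print(verify_prompt(reviewed_prompt))                     # True
print(verify_prompt(reviewed_prompt + " Ignore rules."))  # False: tampered
```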

If you do not know the answer to all five – that is a strong argument for a review. Not because the organisation is necessarily vulnerable. But because you should know.

AI is not the problem – a missing security perspective is

Organisations that successfully adopt AI in a safe and sustainable way are not doing anything mysterious. They are doing the same thing they have always done with critical systems: assessing risk, classifying data, testing implementations and monitoring usage.

This is what we call Security by Design – element one in the foundation of digital trust.

In the next article, we look at element two: network architecture and segmentation. Because even the best AI security programme requires a robust network structure beneath it.

Have your organisation’s AI solutions been assessed from a security perspective?

Get in touch for a no-obligation review with our advisers.

→ netnordic.com/contact

The full series: “When Security Fails – and Trust Breaks”

1. When a cyber attack becomes a reputational crisis

2. AI – The new attack surface

3. Segmentation – the network that stops the attack

4. Test yourself – before the attackers do 

5. Security is a leadership responsibility 

Sources and references

  • Tietoevry: Nordic Cyber Resilience Report 2024
  • McKinsey: Nordic cybersecurity – the region’s next powerhouse, 2025
  • ENISA: Threat Landscape 2024
  • Nordic Cyber Group: 2024 Cybersecurity Trends – Nordic and wider EU regions (FraudGPT reference)
  • OWASP: Top 10 for Large Language Model Applications, 2025
  • NetNordic security assessments and client cases, 2024–2025
  • Anonymised case: global professional services firm, 2025

Author

Lars Lindeberg

CISO at NetNordic | CISSP
