
The Invisible War Against Your Business

Published by Joseph Antoun
April 04, 2025 @ 10:00 AM


Why Hackers Are Winning with AI, and How to Fight Back Before It’s Too Late

Artificial intelligence is changing the dynamics of cybersecurity. Not by creating new attack types, but by making existing ones faster, cheaper, and harder to stop.

This was the focus of CIONET’s recent round-table session in Brussels on April 2nd, 2025, where CIOs, CISOs and digital leaders came together for a closed-door discussion on what AI really means for cyber defence.

Thanks to our business partner TrendMicro, to Nadine Serneels, and especially to Eric Skinner, who joined us all the way from Canada, for making this exchange possible.


AI is not inventing new threats. It’s supercharging the old ones.

The consensus in the room was clear:

  • Phishing emails are now flawless: The grammar is perfect, the language feels familiar, and the tone mimics internal communication styles.
  • AI helps attackers move faster: Recon, scanning, and coding malware are no longer tasks for experts. With the right prompt, anyone can do it.
  • Exploits are easier to create: GenAI tools allow even low-skilled actors to identify vulnerabilities and develop attack paths quickly.
  • Deepfakes are credible enough to work: One participant tested staff with a fake CEO video. Half followed the instructions without question.

Three years ago, phishing emails had typos and odd phrasing. Now they're indistinguishable from real ones.


Trust is the real vulnerability

Most attacks don’t succeed because of technical gaps. They succeed because of human ones.

Participants pointed to several recurring issues:

  • Assumptions replace verification: Staff act on what looks familiar without checking its authenticity.
  • Processes are ignored when they’re unclear: If a security process creates friction, it gets bypassed.
  • Executives are often excluded: Leaders may skip awareness training or override controls, unintentionally weakening the organisation’s defence.

You can have the best tools in place. But if people aren’t confident enough to challenge a suspicious request, none of it matters.

One story involved a spear phishing attempt so realistic that only a bounce-back error revealed the scam. The employee followed protocol and called the CEO to confirm, which prevented the damage. But as others noted, that kind of caution is still the exception.


AI threat types discussed

Several specific risks came up during the discussion:

  • Adversarial inputs: Slight manipulations that trick AI systems (see the sketch at the end of this section)
  • Model inversion: Extracting personal data from public LLMs
  • AI-powered malware: Code that adapts to bypass detection
  • Data poisoning: Feeding bad data to AI to skew its outputs
  • Nation-state tools: Concerns over platforms backed by geopolitical actors

The question isn’t whether we trust Chinese tools. It’s whether we should trust American ones either.
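
To make the first of those risks concrete, here is a minimal, hypothetical sketch of an adversarial input: a bounded nudge to every feature of an input that flips the verdict of a toy linear detector. The model, weights and epsilon budget are illustrative assumptions, not anything demonstrated at the table.

```python
# A toy illustration of an adversarial input (assumed model, not a real product):
# a bounded change of at most `epsilon` per feature that flips the verdict of a
# simple linear "malicious content" detector.
import numpy as np

rng = np.random.default_rng(7)
n_features = 50

# Hypothetical detector: a fixed linear model with a sigmoid score
# (1.0 = certainly malicious, 0.0 = certainly benign).
w = rng.normal(size=n_features)

def malicious_score(x):
    return 1.0 / (1.0 + np.exp(-(x @ w)))

# An input constructed so the detector clearly flags it for this demo.
x = 2.5 * w / (w @ w) + 0.05 * rng.normal(size=n_features)
print(f"score before perturbation: {malicious_score(x):.2f}")      # high -> flagged

# FGSM-style evasion: for a linear model the gradient of the score with respect
# to the input is proportional to w, so stepping each feature against sign(w)
# lowers the score as fast as possible within a per-feature budget of epsilon.
epsilon = 0.15
x_adv = x - epsilon * np.sign(w)
print(f"score after perturbation:  {malicious_score(x_adv):.2f}")  # near zero -> slips through
print(f"largest per-feature change: {np.abs(x_adv - x).max():.2f}")
```

The point for defenders is that AI-based detection is itself a target: a model can be probed and steered like any other system, so it should not be the only control in the chain.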


Third-party risk is growing fast

One member described a breach involving a long-standing supplier. The partner hadn’t kept up with modern controls, and sensitive health data was leaked as a result.

The insight was sharp: you can’t assume long-term partners are secure just because they’ve been around.

 

Most of the risk isn’t in our core. It’s in the layers around us. Vendors, consultants, old integrations.

Visibility across the full ecosystem is now just as critical as internal security.


Blocking AI tools? It’s not enough

Some organisations have started to whitelist approved tools (e.g. Copilot, ChatGPT, Mistral) and block others. But that doesn’t solve the problem:

  • Employees use personal phones and devices
  • GenAI features are already embedded in common platforms
  • Banning tools without visibility leads to shadow usage

A better approach discussed:

  • Monitor usage trends before setting policy (a simple example is sketched at the end of this section)
  • Create clear guidelines tied to data classification
  • Focus on behaviour and outcomes, not blanket restrictions

Blocking gives the illusion of control. It doesn’t reflect how people really work.
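
As a rough illustration of the monitoring-first approach above, the sketch below summarises hypothetical web-proxy records by department and GenAI domain. The domains, departments and field names are assumptions made for the example, not a specific product’s schema.

```python
# A sketch of "monitor before you mandate": summarise which GenAI services are
# actually being reached from the corporate network, using hypothetical
# web-proxy records. Domains, departments and field names are assumptions.
from collections import Counter
from urllib.parse import urlparse

GENAI_DOMAINS = {
    "chat.openai.com",
    "copilot.microsoft.com",
    "chat.mistral.ai",
    "gemini.google.com",
}

# In practice these records would come from a proxy, CASB or firewall export.
proxy_log = [
    {"department": "Finance", "url": "https://chat.openai.com/backend-api/conversation"},
    {"department": "Legal",   "url": "https://copilot.microsoft.com/"},
    {"department": "Finance", "url": "https://intranet.example.com/hr"},
    {"department": "IT",      "url": "https://chat.mistral.ai/chat"},
]

usage = Counter()
for record in proxy_log:
    host = urlparse(record["url"]).hostname
    if host in GENAI_DOMAINS:
        usage[(record["department"], host)] += 1

for (department, host), count in usage.most_common():
    print(f"{department:<10} {host:<28} {count} request(s)")
```

Even a simple count like this shows where GenAI is already part of daily work, which is a better starting point for guidelines tied to data classification than a blanket ban.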


What actually helps

Participants shared what is working inside their organisations:

1. Train judgement, not just compliance

Teach employees to question unusual requests. Role-based training works better than generic modules.

2. Make verification easy and routine

Double-checking shouldn’t feel awkward. It should be expected, especially when money or credentials are involved.

3. Involve leadership

Awareness starts at the top. If the board and CEO skip training, others will follow.

“The CEO should be the first to complete the security module. Not the last.”


5 questions to ask your teams today:

  1. Are staff comfortable challenging unusual requests, even from execs?
  2. Is security awareness training changing behaviour?
  3. Are third-party suppliers held to the same security standards?
  4. Do you know which AI tools are being used across the company?
  5. Are you tracking how sensitive data is being shared with GenAI tools? (See the sketch below.)
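
On that last question, here is a minimal, hypothetical sketch of what "tracking" could mean in practice: a local check that scans an outbound prompt for patterns that look like secrets or payment data before it leaves the network. The patterns and the gateway idea are illustrative assumptions, not the behaviour of any specific DLP product.

```python
# A hypothetical pre-send check: scan an outbound GenAI prompt for patterns
# that look like secrets or payment data and block/log instead of sending.
# The patterns and the "gateway" idea are illustrative, not a DLP product.
import re

SENSITIVE_PATTERNS = {
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key":      re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{16,}\b"),
    "iban":         re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
}

def allow_prompt(user: str, prompt: str) -> bool:
    """Return True if the prompt may leave the network; log a hit otherwise."""
    hits = [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]
    if hits:
        # In a real deployment this event would go to a SIEM, not stdout.
        print(f"blocked prompt from {user}: matched {', '.join(hits)}")
        return False
    return True

allow_prompt("alice", "Summarise this meeting transcript for me.")
allow_prompt("bob", "Why does my call with key sk-AbCdEf1234567890XyZ keep failing?")
```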


With thanks to our participants:

And to our partner TrendMicro: Nadine Serneels and Eric Skinner

Moderated by: Joseph Antoun
Event managed by: Ivana Bradvica

 
