Why Hackers Are Winning with AI, and How to Fight Back Before It’s Too Late
Artificial intelligence is changing the dynamics of cybersecurity: not by creating new attack types, but by making existing ones faster, cheaper, and harder to stop.
This was the focus of CIONET’s recent round table session in Brussels on April 2nd 2025, where CIOs, CISOs and digital leaders came together for a closed-door discussion on what AI really means for cyber defence.
Thanks to our business partner TrendMicro, to Nadine Serneels, and especially to Eric Skinner, who joined us all the way from Canada, for making this exchange possible.
The consensus in the room was clear:
Three years ago, phishing emails had typos and odd phrasing. Now they're indistinguishable from real ones.
Most attacks don’t succeed because of technical gaps. They succeed because of human ones.
Participants pointed to several recurring issues:
You can have the best tools in place. But if people aren’t confident enough to challenge a suspicious request, none of it matters.
One story involved a spear-phishing attempt so realistic that only a bounce-back error revealed the scam. The employee followed protocol and called the CEO to confirm, which prevented any damage. But as others noted, that kind of caution is still the exception.
Several specific risks came up during the discussion:
The question isn’t whether we trust Chinese tools. It’s whether we should trust American ones either.
One member described a breach involving a long-standing supplier. The partner hadn’t kept up with modern controls, and sensitive health data was leaked as a result.
The insight was sharp: you can’t assume long-term partners are secure just because they’ve been around.
Most of the risk isn’t in our core. It’s in the layers around us. Vendors, consultants, old integrations.
Visibility across the full ecosystem is now just as critical as internal security.
Some organisations have started to whitelist approved tools (e.g. Copilot, ChatGPT, Mistral) and block others. But that alone doesn't solve the problem, and a better approach was discussed. As one participant put it:
Blocking gives the illusion of control. It doesn’t reflect how people really work.
Participants shared what is working inside their organisations:
Teach employees to question unusual requests. Role-based training works better than generic modules.
Double-checking shouldn’t feel awkward. It should be expected, especially when money or credentials are involved.
Awareness starts at the top. If the board and CEO skip training, others will follow.
“The CEO should be the first to complete the security module. Not the last.”
With thanks to our participants:
And to our partner TrendMicro: Nadine Serneels and Eric Skinner
Moderated by: Joseph Antoun
Event managed by: Ivana Bradvica