The fundamental flaw of security lies in the network

Published by Zscaler
February 27, 2024 @ 2:01 PM

The internet we know and love today grew out of networks built in the late 1960s to serve as a communication system that could survive a nuclear war. Transmission Control Protocol/Internet Protocol (TCP/IP), invented in the 1970s, specified how computers transfer data from one device to another. However, the whole system was built with the vision that information should be free and that a network could, and should, be trusted.

Fast forward roughly 40 years and the internet was suddenly seen as a viable commercial product that people could use to find information and entertain themselves. But with that evolution came the potential for people to exploit the very technology they relied on, and so cybersecurity began to evolve as well. With limited tools and capabilities, the great lie started to coalesce: that people and businesses can trust their own networks but shouldn't trust the networks outside their business or home. Most security was built on the notion that a network could be secured, but in reality, that has never been true. The only way to make a network secure is never to use it at all.

Despite this fallacy, the race was on in the early 2000s to figure out how to build a secure network that the general consumer could use without fear. Security teams attempted to collapse controls so that anyone trying to enter the network had to overcome a hurdle to gain access. Alternatively, many IT teams decided the best approach was segmentation: breaking one big network into several smaller networks to increase complexity for an attacker. However, both approaches are still fundamentally flawed, because both rest on the original fallacy that a network can be secured.

Chasing weak points

Before we can discuss a potential solution to this flaw, we need to understand how the traditional model functions and how it can be exploited by bad actors. Traditionally, the firewall and its progeny were the basic controls that consumers and IT teams had for network security. At its core, the firewall's job was to block all traffic by default and only allow traffic that matched approved IP addresses and port numbers. In some instances that was a very effective control, particularly if you had two devices that you owned, could physically touch, and simply wanted to talk to each other. However, as soon as you configure the firewall to open ports to the internet, you put the network at risk. It is like drilling a hole in the hull of a boat to drain water from the deck.
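
To make this concrete, here is a minimal, hypothetical sketch of that default-deny model in Python. The rule list and helper function are invented for illustration; real firewalls filter packets in the kernel or on dedicated hardware rather than through function calls.

```python
# Illustrative allow-list; the addresses are from the IP documentation ranges.
ALLOW_RULES = {
    ("203.0.113.10", 443),  # a known partner host, HTTPS only
    ("198.51.100.7", 22),   # an admin workstation, SSH only
}

def firewall_allows(src_ip: str, dst_port: int) -> bool:
    """Default-deny: traffic passes only if a rule explicitly matches."""
    return (src_ip, dst_port) in ALLOW_RULES

print(firewall_allows("203.0.113.10", 443))  # True: explicitly allowed
print(firewall_allows("192.0.2.99", 443))    # False: denied by default
```

The moment a rule opens a port to the whole internet, the default-deny guarantee is gone for that port: anyone, anywhere, can send traffic to it.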

Alongside firewalls, the job of security teams was to analyse where traffic flowed within the network and choke it at key points, while also identifying where the network's sensitive data and applications sat and wrapping them in metaphorical cotton wool to protect them. I like to think of this process as a game of whack-a-mole: no matter how many controls and blocks are put on a network, security teams will never be able to fully trust it, and they will forever be chasing potential weak spots that might open a door for bad actors to attack through.

Taking the network out of your security model

By changing the architecture and no longer using the network as part of the security model, IT teams can begin to peel away the old mentality of implicit trust and start utilising a zero-trust methodology for cybersecurity. But how does a zero-trust model do anything differently? It is all about providing inside-out connectivity: channelling all traffic through a trusted broker that can assess the potential risk of each connection and apply a common, risk-based policy before allowing any connection to be made.
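
What might that broker decision look like? The sketch below is illustrative only, with invented field names and an arbitrary risk threshold; it is not how Zscaler or any other vendor actually implements a trust broker. It shows the shape of the idea: every connection request is scored on its own merits, and the broker connects the user to a single application only if the policy allows it.

```python
from dataclasses import dataclass

@dataclass
class ConnectionRequest:
    user: str
    device_compliant: bool  # e.g. patched OS, disk encryption enabled
    mfa_passed: bool        # did the user complete multi-factor auth?
    target_app: str         # the one application being requested

def risk_score(req: ConnectionRequest) -> int:
    """Score each request on its own merits; higher means riskier."""
    score = 0
    if not req.mfa_passed:
        score += 50
    if not req.device_compliant:
        score += 30
    return score

def broker_decision(req: ConnectionRequest, threshold: int = 40) -> bool:
    """The broker connects the user to one application, never to the
    network itself, and only when the risk policy allows it."""
    return risk_score(req) < threshold

req = ConnectionRequest("alice", device_compliant=True, mfa_passed=True,
                        target_app="payroll")
print(broker_decision(req))  # True: low risk, so the connection is brokered
```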

By removing that assumption of privilege for those already on the network, IT teams can stay ahead of bad actors and stop playing the attackers' game. They can apply controls centrally and automatically, without having to manually review each piece of traffic after it has already breached the firewall. The controls themselves aren't hugely different from how we have secured against attackers for nearly 35 years. It isn't about revolutionising the controls, but about changing the way they are applied. Instead of having to find all the sensitive data and applications on the network and put controls around those specific areas, zero trust flips the model on its head: security teams can stop chasing both their own employees, who are continuously adding new sensitive data that needs to be protected, and attackers, who are looking for weak spots to exploit.
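
To see the difference in where the control sits, compare two toy decision functions, one per model; the names and inputs are invented for illustration. In the traditional model, being on the internal network is effectively the credential. In the zero-trust model, the source network isn't an input to the decision at all.

```python
def perimeter_allows(src_on_internal_network: bool) -> bool:
    # Traditional model: location on the network is the credential.
    return src_on_internal_network

def zero_trust_allows(identity_verified: bool, device_compliant: bool) -> bool:
    # Zero trust: the same policy applies everywhere; the network a
    # request came from is not an input to the decision at all.
    return identity_verified and device_compliant

# An attacker who lands on the internal network passes the first check
# automatically, but gains nothing against the second.
print(perimeter_allows(True))          # True: inside means trusted
print(zero_trust_allows(False, True))  # False: identity is still required
```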

Of course, the zero-trust model isn't foolproof. Attackers will find new and innovative ways to try to exploit security. However, this model puts security teams on the front foot instead of leaving them continuously chasing their tails. It allows them to plot their defences in a much more intelligent way, now that they are no longer using the network as part of their security model.

Conclusion

By accepting the fundamental flaw that a network can never be truly secure, and removing the idea of implicit trust for those within the firewall, IT teams can implement a much more effective model to protect a business's assets and data. The zero-trust model rethinks how traffic should be managed by centralising controls through a trusted broker, allowing security teams to plan against potential attacks rather than constantly playing catch-up as the network evolves.
