Friday, 10 May 2019

On the State of Firewalls: are NGFWs (becoming) obsolete?

Between the last blog post and this one, I’ve moved from K-12 into Higher Education, at the first place in Sub-Saharan Africa to have Internet connectivity. This is a vastly different environment in some ways – in particular, firewalling is quite different. You’re dealing with a user population that is entirely adults. Some of those adults engage in legitimate research into things that some would consider a bad idea (malware) or “morally dubious” (porn, pop-up ads, etc.), or need unfiltered traffic (network telescopes, honeypots, big data “science DMZs”). The particular University I work at has generally had a liberal outlook with regard to personal freedoms (and the concomitant responsibility) – I think that’s generally a good thing, and exactly where higher education should be.

We’re currently looking at doing a hardware refresh of our ~7 year old enterprise firewalls – mainly because the support renewal cost on the current solution is eye-watering. The present solution works fine (although it has quite limited capacity for logging – about 8 hours of our traffic), and it’s approaching vendor EoL status. Interestingly, even moving to a newer (and, thanks to Moore’s law, more performant) hardware platform from the same vendor saves us money over a number of years. So we’re thinking about what we need, and that’s prompted some musings about the state of firewalls…

The argument that you should reduce privacy and security in order to “better filter” traffic does not hold a lot of water in some environments (like ours), and so SSL/TLS interception (MITM) is really a non-starter (for us).

Virtually everything is moving across to SSL/TLS (if it hasn’t already done so). All those lovely application-identification-based NGFW filtering features? Virtually useless. “Content” filtering? Hah! Virus scanning? Nope. DLP? No, sorry.

Unless you’re willing to MITM every single connection (perhaps with some careful exemptions for things like healthcare and personal/business finance), you’re probably done for - at least whilst encrypted traffic is passing through the firewall.

On top of that, implemented in certain ways, features like certificate pinning and HSTS mean you’ll usually have to exempt various sites and services (Google Apps don’t like SSL interception, for instance), which reduces the utility of doing this even further. For example, it’s quite hard to allow enterprise Google Drive and not have someone exfiltrate data through a personal Drive (although not totally impossible if you can see inside every packet). If I were a cloud provider, I’d probably be thinking about selling dedicated IP addresses – like web hosts used to before SNI existed, when you wanted to do SSL – so my enterprise customers had an easier life. Of course, doing this in a DDoS-resilient way that scales across a CDN/cloud edge is not trivial (at least without an on-premises middleware box, or some sort of VPN or tunnel and a guarantee of where traffic can wind up within the cloud service provider). And of course, in the era of the smartphone (virtual “work” sub-systems on devices aside), you’re one 3G data session or coffee shop hotspot away from whatever you’re worried your users are going to do anyway…

There was some temporary hope for some of this – inspecting the SNI (Server Name Indication) field that TLS clients send in the clear during the handshake. Of course, the Internet community have (quite correctly) decided that this ability in and of itself is a privacy violation, so we now have a draft standard that encrypts the SNI (Cloudflare and Firefox support it, for example – see ESNI) – so you have no way of figuring out what that TLS-encrypted packet is about without MITM decryption (or perhaps an invasive browser plugin).
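To make that concrete, here’s a minimal sketch (in Python; `extract_sni` is my own hypothetical helper, not any vendor’s implementation) of the kind of passive parsing a middlebox does today: pull the plaintext server_name out of a TLS ClientHello. Once ESNI encrypts that field, this sort of inspection returns nothing useful.

```python
def extract_sni(client_hello: bytes):
    """Best-effort extraction of the server_name (SNI) from a raw TLS
    ClientHello record. Returns None if absent or unparseable.
    Sketch only: assumes the ClientHello arrives in a single record."""
    try:
        if client_hello[0] != 0x16:                  # not a TLS Handshake record
            return None
        pos = 5                                      # skip record header (type, version, length)
        if client_hello[pos] != 0x01:                # not a ClientHello
            return None
        pos += 4                                     # handshake type + 3-byte length
        pos += 2 + 32                                # client_version + random
        pos += 1 + client_hello[pos]                 # session_id
        pos += 2 + int.from_bytes(client_hello[pos:pos + 2], "big")   # cipher_suites
        pos += 1 + client_hello[pos]                 # compression_methods
        ext_end = pos + 2 + int.from_bytes(client_hello[pos:pos + 2], "big")
        pos += 2
        while pos + 4 <= ext_end:
            ext_type = int.from_bytes(client_hello[pos:pos + 2], "big")
            ext_len = int.from_bytes(client_hello[pos + 2:pos + 4], "big")
            pos += 4
            if ext_type == 0:                        # server_name extension
                # server_name_list: 2-byte list length, 1-byte name type, 2-byte name length
                name_len = int.from_bytes(client_hello[pos + 3:pos + 5], "big")
                return client_hello[pos + 5:pos + 5 + name_len].decode("ascii", "replace")
            pos += ext_len
    except IndexError:
        pass
    return None
```

Real appliances reassemble fragmented handshakes and handle a pile of edge cases, but the point stands: this is the last interesting plaintext a firewall gets to see, and ESNI takes it away.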

A move to gigantic, amorphous clouds makes whitelisting “safe” (or blacklisting “unsafe”) IPs really hard – which is, of course, why we used to look inside the unencrypted packets to find out exactly where a request was going to or coming from.

(Warning: million dollar ideas ahead.) Without MITM, that pretty much just leaves DNS as a place you might be able to exert significant control; the way DNS works is going to make implementing ideas like ESNI hard there, so it’s a fairly long-term bet. I would therefore not be at all surprised if a hardware firewall vendor soon suggests that you make your firewall(s) your clients’ recursive DNS server(s). It’s not much of a jump, software-wise, to turn content filtering lists of domains/URLs and pattern matches into DNS software that returns NXDOMAIN for things you don’t care to allow (see the sketch below), or, conversely, only resolves those you choose to whitelist and otherwise forwards you to a captive portal with an error message. Those who use DNS-based filtering (like OpenDNS/Umbrella) are now arguably ahead of the game (for the time being).

Of course, unless objectionable things are a) within a “boundable” list of IP addresses that are b) dedicated to that function, your clever users’ easiest workaround is simply to use the IP of the service they want, bypassing DNS entirely, with more advanced users hacking their hosts file. A “stateful” approach to that concept might be to include functionality like “only allow requests that pass our stateful filter ruleset, AND which have a recent (within TTL since lookup) corresponding DNS lookup from that client, resolving that dst IP, that wasn’t NXDOMAIN; OR are part of an existing, authorised connection”. Oh, and if you allow VPNs and/or client DNS traffic out from your network, well, good luck with that. Of course, once you mess with DNS, you will inevitably get some ICT researcher being understandably grumpy…
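As a rough illustration of that “firewall as filtering resolver” idea, here’s a minimal sketch in Python (standard library only; the blocklist entries and upstream resolver are placeholders, not recommendations): it answers NXDOMAIN for names on the list and relays everything else upstream. A real implementation would need TCP, EDNS(0), caching, logging and the captive-portal option mentioned above.

```python
import socket

BLOCKLIST = {"bad.example.com", "tracker.example.net"}   # hypothetical blocked names
UPSTREAM = ("9.9.9.9", 53)                                # any recursive resolver you trust

def qname(packet: bytes) -> str:
    """Extract the first query name from a raw DNS packet (after the 12-byte header)."""
    labels, pos = [], 12
    while packet[pos] != 0:
        length = packet[pos]
        labels.append(packet[pos + 1:pos + 1 + length].decode("ascii", "replace"))
        pos += 1 + length
    return ".".join(labels).lower()

def nxdomain(packet: bytes) -> bytes:
    """Build a minimal NXDOMAIN response: copy the ID and question, set RCODE=3."""
    # Flags 0x8183: QR=1, RD=1, RA=1, RCODE=3 (NXDOMAIN); counts: 1 question, nothing else.
    header = packet[:2] + b"\x81\x83\x00\x01\x00\x00\x00\x00\x00\x00"
    end = 12
    while packet[end] != 0:
        end += 1 + packet[end]
    end += 5                                  # terminating zero + QTYPE + QCLASS
    return header + packet[12:end]

def serve(listen=("0.0.0.0", 5353)):          # port 53 in real life; 5353 avoids needing root
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(listen)
    while True:
        query, client = sock.recvfrom(4096)
        if qname(query) in BLOCKLIST:
            sock.sendto(nxdomain(query), client)
            continue
        # Not on the list: relay to the upstream resolver and hand back its answer.
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as upstream:
            upstream.settimeout(3)
            upstream.sendto(query, UPSTREAM)
            answer, _ = upstream.recvfrom(4096)
        sock.sendto(answer, client)

if __name__ == "__main__":
    serve()
```

Pointed at this, `dig @127.0.0.1 -p 5353 bad.example.com` should come back NXDOMAIN while everything else resolves normally – and, as above, none of it helps once a client simply uses a raw IP or someone else’s resolver.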

And of course, people are breaking DNS as a control point, too… DNS over HTTPS and DNS over TLS. I don’t think it’s unreasonable in an enterprise to insist that your DNS servers are used, and to block other resolvers. Things on “public” and even “guest” networks are, arguably, rather different.
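Enforcement ultimately lives in the firewall ruleset (allow port 53 only to and from your resolvers, drop DoT on 853, and good luck fingerprinting DoH on 443), but even just spotting who is ignoring your resolvers is useful. A quick sketch using scapy – the resolver addresses are placeholders for your own:

```python
# Flag clients sending classic DNS (port 53) or DoT (tcp/853) to anything other
# than the sanctioned campus resolvers. Needs scapy and capture privileges.
from scapy.all import sniff, IP, TCP, UDP

CAMPUS_RESOLVERS = {"192.0.2.53", "192.0.2.54"}   # substitute your own resolver IPs

def check(pkt):
    if not pkt.haslayer(IP):
        return
    src, dst = pkt[IP].src, pkt[IP].dst
    # Ignore traffic to our resolvers, and our resolvers' own recursion outbound.
    if dst in CAMPUS_RESOLVERS or src in CAMPUS_RESOLVERS:
        return
    dport = pkt[TCP].dport if pkt.haslayer(TCP) else pkt[UDP].dport
    label = "DoT" if dport == 853 else "DNS"
    print(f"{src} -> {dst}:{dport} ({label}) is bypassing the campus resolvers")

# BPF filter keeps the capture cheap; DoH on tcp/443 is indistinguishable here.
sniff(filter="udp dst port 53 or tcp dst port 53 or tcp dst port 853",
      prn=check, store=False)
```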

Of course what firewall vendors all currently say is “you need to MITM your traffic; look at our expensive, shiny ASICs that accelerate that”. *facepalm*

So, in the absence of magical DNS hacks, what do modern networks that can’t or won’t implement MITM need? Well, it’s back to the 1990s or early 2000s: stateful firewalls, with vendor-maintained lists of naughty (and perhaps “nice”) IP addresses. This suggests that, at least for any organisation that’s not willing to do MITM, you need a more modest set of boxes (or that more modest firewalls can presumably handle far more traffic, at close to multi-gigabit line speeds), and a much more modest service subscription – basically, the vendor should supply lists of IPs so you can drop connections to/from hosts serving e.g. malware C&C, plus any IPs that are just “naughty” for other reasons (within your existing threat/content categories). You’re back to matching on tuples of protocol, src and dst IP, and src and dst ports, with some address lists, and use of connection state (new/established/related/invalid). If some really good open source/crowdsourced shared address-list resources appear, it’s going to make justifying spending big bucks on traditional enterprise firewall vendors hard work (particularly if you’re FUD-resistant).
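For what it’s worth, that “modest box plus IP reputation list” model is easy to prototype. A sketch: pull a plain-text feed of bad IPs (the URL below is a placeholder – substitute whichever reputation feed you actually trust) and render it as an nftables set with drop rules. Illustrative plumbing, not a product.

```python
"""Turn a plain-text feed of 'naughty' IPv4 addresses/CIDRs into an nftables snippet."""
import ipaddress
import urllib.request

FEED_URL = "https://threatfeed.example.org/bad-ips.txt"   # hypothetical feed URL

def fetch_bad_ips(url):
    with urllib.request.urlopen(url, timeout=30) as resp:
        lines = resp.read().decode("utf-8", "replace").splitlines()
    nets = set()
    for line in lines:
        line = line.split("#", 1)[0].strip()      # allow comments in the feed
        if not line:
            continue
        try:
            net = ipaddress.ip_network(line, strict=False)
        except ValueError:
            continue                               # skip anything that isn't an IP/CIDR
        if net.version == 4:                       # this sketch only handles IPv4
            nets.add(str(net))
    return sorted(nets)

def emit_nft(ips):
    """Render a named set of bad prefixes plus drop rules in both directions."""
    elements = ",\n            ".join(ips)
    return f"""table inet reputation {{
    set naughty {{
        type ipv4_addr
        flags interval
        elements = {{
            {elements}
        }}
    }}
    chain forward {{
        type filter hook forward priority 0; policy accept;
        ip daddr @naughty drop
        ip saddr @naughty drop
    }}
}}"""

if __name__ == "__main__":
    print(emit_nft(fetch_bad_ips(FEED_URL)))
```

Piping the output into `nft -f -` gets you the 1990s-style drop list; the interesting question is whether a vendor subscription buys a meaningfully better feed than the community ones.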

For those of you who have to content filter (because think of the children!) or do e.g. DLP for compliance reasons, you’re almost certainly going to have to MITM traffic or do some really draconian “whitelist only” filtering; the latter has copious downsides, of course, but arguably remains the only realistic current alternative to MITM at the network “border”.

So, if you are implementing MITM, NGFWs will continue to work quite well for you.

If you’re not, well, their days are very much numbered.

This (perhaps simplistically) suggests that enterprise networks need to behave, or be treated, more like transit ISPs – they carry your packets regardless of what they are (unless those packets are demonstrably “really bad”, in which case you’ll find them blackholed in some way), and you don’t trust the network. All other filtering and security rests in the hands of your apps, which requires a shift in thinking within the enterprise about how security is achieved. NAC needs to keep bad actors out of your LAN, or at least mitigate the threat they pose; identifying users and devices within your network is increasingly important. The network edge is inside your application now.

For the necessarily paranoid, going back to running things as if you still used enterprise mainframes with dumb terminals (and strip searching people for cameras) might be required and somewhat effective (think SCIF). Right up until you encounter an in-house adversary with an eidetic memory, or your enterprise app can be run on something that can take screenshots… Of course, none of that is realistic in a “modern” enterprise, and most don’t need quite that level of paranoia.

Defence-in-Depth, “no perimeter” or “borderless” modes of thinking and design are increasingly imperative, and the challenges of BYOD multiply. Not all of those challenges are technical – many of them are policy, enforcement and training related.

Fun times.
