This led me to thinking about the need for more thorough consideration of IT security throughout our careers and, in particular, the danger of blindly relying on other people's information.
I embargoed this post until now because it contains a low-detail description of a potential vulnerability, and I can now see that steps which should address it have been taken - hence the publication date; this was written shortly after that talk spurred some thoughts...
I'm a big believer in two things: education/learning, and knowing about a lot of different things. I tend to pick up tangentially relevant information all the time. However, I think one context needs to be pursued much more rigorously by all IT professionals: security and its implications. The other aspect is the rigorous application of that knowledge!
Most IT people have at least some grasp of security. They're rarely going to just run code or script snippets they find online (or do they?) - but they may be less wary depending on the source, particularly if it comes from published books by well-established networking luminaries from top-tier publishers. They're probably doing most, or all, of the other best-practice security stuff, too. They know there are still probably gaps.
But are there times we ourselves make things worse through action rather than inaction?
Have we put something in place that is worse than the default state?
Note: I'll mainly be talking about "traffic" or "packets", but this applies equally to how you handle any "authentication", "data", "trust" or "input", and to IT processes/services of any kind!
Beware the "trusted recipe"
How wary are you of "trusted recipes"?
It appears quite a lot of people aren't wary of a trusted recipe, because the speaker found this compromise worked against several networks of significant scale, all of which probably have very experienced network "architects" and "engineers", most of whom probably think or can demonstrate that they have significant security experience. But this got them, nonetheless. Likely, they assumed the authors had considered (and perhaps even tested) the impacts of the "best practice" configuration they were suggesting, or that it had the manufacturer's tacit blessing.
Unfortunately, the oversight in this "trusted recipe" leads to the ability to directly connect to core router management planes from internet addresses - across the internet (with a few underlying assumptions, which commonly exist). This only happens if you implement a particular part of this best practice hardening script, or come up with the same idea independently to "harden" your router whilst allowing certain justifiable actions to happen. Oops. It's not an instant compromise or RCE, but it is problematic, and exploitable for nefarious subsequent use in a variety of underhanded ways.
As soon as someone context-shifted my brain from "achieve an end goal" to "consider the security implications", I had an immediate light-bulb moment, precisely congruent with the rest of the talk, based on the relevant config snippet: this line was a disaster. It is blindingly obvious once you pause to think about it (as so many transformative experiences are). It is surprising how many people don't have this light-bulb moment - even where we expect to find (very) skilled and experienced professionals. That's the gap we need to address.
In an intentionally very sketchy summary: it requires a crafted (but trivially so) packet that exploits an assumption about the type of traffic being allowed through the router - traffic for a tool hardly anyone uses, but one that arguably has "nice internet citizen" written on it, so letting it work properly through your internet routers is a laudable goal in and of itself.
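To be clear, what follows is not the issue from the talk. It's a deliberately generic, minimal sketch - using only Python's standard library, with made-up port and target values - of the underlying class of assumption: that a header field like the UDP source port tells you what the traffic "is". Anyone can set it:

```python
# Generic illustration ONLY - not the vulnerability discussed above.
# Premise: some ACLs allow packets because the *source port* suggests
# they belong to a "friendly" tool. Any unprivileged process can bind
# that port and send arbitrary payloads from it.
import socket

FRIENDLY_SRC_PORT = 33434           # hypothetical port an allow-rule trusts
TARGET = ("192.0.2.1", 161)         # hypothetical management-plane service
                                    # (192.0.2.0/24 is TEST-NET-1, for docs)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", FRIENDLY_SRC_PORT))  # no special privileges required
sock.sendto(b"not the traffic you assumed", TARGET)
```

A rule that says "only tool X uses that source port, so allow it" is really saying "allow anyone who can type that number".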
The discoverer has done the responsible disclosure thing, contacting the publisher, the equipment vendor, and the networks they found to be vulnerable. You may or may not notice errata or advisories stemming from this at some later date; I hope so. Informed networks, of course, quickly remedied the issue.
I think this underlines the need for (even?) better security awareness and training.
I think we all regularly read and look for known security issues in the products and services we use - and certainly read the heck out of release notes looking for trouble - but how often do we look for the errata of technical publications like books?
There are possibly thousands of code/config snippets that work, solve a problem, and, as a side-effect, open a hole large enough to drive a supertanker through. Some systems even ship in states that are arguably like this. It stands to reason, then, that you're most at risk when you know not what you do (standing proudly atop Dunning-Kruger's Mount Stupid), when you're in a rush, and when you don't stop to evaluate things carefully enough, no matter the reason.
Learn all the things! (Derivative meme of Allie Brosh's "Clean All the Things!")
What's good security training, anyway?
First up, I don't think everyone involved in IT needs to become a certified pen-tester, ethical hacker, or anything like that (although if that interests you, absolutely go for it, and more knowledge here in more people is going to help the entire industry). As with all things, the more you know, the more you can contribute, and the more you do it, the more natural and reflexive it becomes. The more central security is to your current (or desired) profession, the more attention you should pay to it. Security MUST be a consideration for all IT people, no matter your level or specialisation.
What you DO need is three things:
- Firstly, get a decent and thorough grounding in information security. I'd argue CompTIA's Security+ is enough to start with, reasonably cost-effective to find learning materials for and certify in, and anyone with more than 3 years of IT experience should do it if they haven't already.
- Secondly, you need to build from that - keep an eye on even the popular online press, follow some infosec people online, have conversations about this stuff, and you'll see what the big exploits and threats are; add those to your mental database of nefarious ways of pwning expectations; make sure to pause and reflect on how and why they worked, and what mitigations were, or could have been, applied (not only "fix the broken code"). Concentrate first on your most "important" job area(s), then start to learn about the areas that interface with them - and then at least another degree of separation out.
- Thirdly, develop an appropriately devious mind! Think how you could bypass the assumptions you've made to secure your network/app/business process/etc. - and plug those holes. Rinse, repeat, ad nauseam!
You must then marry these three key areas: take the exploits and threats you picked up in the second, apply them to your grounding from the first, and synthesise them through the devious mind of the third! Look particularly for results of the "law of unintended consequences". What is the allow rule you're configuring actually allowing - is that really what you expected, or is it more permissive than it appears at face value? Is there a trivial, effective or realistic way for someone to bypass or exploit that assumption (and how damaging is it if they find one)? You're only really going to catch this kind of problem once you have at least a basic understanding of what the underlying protocols are doing, what the filter rules are examining (and what they are not) and, most importantly, what your assumptions about traffic are - including how it will handle traffic that is either intentionally or unintentionally "odd". If you thought learning the multitudinous parts of a TCP/IP packet or an Ethernet frame in detail was kind of pointless rote learning, here is exactly where that level of knowledge starts to pay off.
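As a contrived example of a rule being more permissive than its face value, here's a minimal sketch using Python's standard library - the prefixes and the fat-fingered mask are entirely hypothetical:

```python
# Audit question: what is this allow rule ACTUALLY allowing?
import ipaddress

intended = ipaddress.ip_network("203.0.113.0/24")                  # what we meant
configured = ipaddress.ip_network("203.0.113.0/16", strict=False)  # typo'd mask
                                                                   # -> 203.0.0.0/16

print(f"Configured: {configured} ({configured.num_addresses:,} addresses)")
print(f"Intended:   {intended} ({intended.num_addresses:,} addresses)")
if configured != intended and configured.supernet_of(intended):
    print("Rule is MORE permissive than intended - stop and review.")
```

A two-character typo quietly turned 256 allowed addresses into 65,536; face-value reading of the rule wouldn't catch it, deliberate examination does.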
Obviously, it's worth adding a bit of risk assessment and management - the traditional likelihood-times-impact formula - and using those results to guide your efforts, but good risk assessments are quite hard to give objectively if you don't understand enough about the technology being assessed or the techniques that may be used against you. Remember, a business can always override a risk with (more or less effective) mitigating controls and management decisions that supersede best practice or your advice - but make certain this is formally adopted at the right levels, and that your informed, considered opinion is noted as relevant (particularly if you say "don't do this, because of these reasons" - sadly, professional CYA is necessary at times). Audit risk logs are handy records here (super-privileged information, because they'll usually highlight ALL the known exploitable holes in your infrastructure). No organisation is perfect, but you need to be able to live with delivering the best possible information on which to assess and manage risk, and to provide mitigations where a specific risk can't otherwise be eliminated.
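For illustration, a toy likelihood-times-impact triage might look like this sketch (the scales and entries are made-up assumptions, not any standard):

```python
# Toy risk triage: score = likelihood x impact, highest first.
risks = [
    # (description, likelihood 1-5, impact 1-5) - illustrative values
    ("Management plane reachable from the internet", 4, 5),
    ("Stale admin account on a legacy box",          3, 4),
    ("Unpatched internal wiki",                      2, 2),
]

for name, likelihood, impact in sorted(risks,
                                       key=lambda r: r[1] * r[2],
                                       reverse=True):
    print(f"score {likelihood * impact:>2}: {name}")
```

The arithmetic is trivial; the hard (and valuable) part is assigning honest likelihood and impact numbers, which is exactly where technical understanding matters.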
Traditionally, IT ran on a "secure the boundaries" model - simplistically, the people looking after the firewalls were trusted to get it right. That's not going to work in a cloud-centric, "borderless" world - security has to be baked in at every level of your organisation, from the first-tier helpdesk tech who resets passwords up to the highest levels of your management structure; as far as possible, all users of your services should also have sufficient awareness not to run straight into the security equivalent of a burning building, shouting "YOLOOOOOOooooooooo.......!".
Whilst there was an early paradigm of "Be conservative in what you do, be liberal in what you accept from others" (Postel's law, a.k.a. the Robustness Principle, often reworded as "Be conservative in what you send, be liberal in what you accept"), it may end up causing you considerable grief, particularly if it hasn't been rigorously implemented elsewhere in your infrastructure "stack"! Also, you can "accept" something and throw it in the bin if it fails subsequent tests; the more (necessary) levels of careful examination everything goes through, the less likely a lapse in one of them leads to problems. Conversely, unnecessary layers are an expanded attack surface and may represent a net decrease in your security.
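A minimal sketch of that "accept it, examine it, bin it" layering, where each layer re-checks rather than trusting the previous one (the function names and limits are hypothetical):

```python
# Each layer validates independently; a lapse in one is caught by another.
def edge_filter(msg: bytes) -> bytes:
    if len(msg) > 4096:                      # hypothetical size limit
        raise ValueError("oversized at edge")
    return msg

def app_validate(msg: bytes) -> str:
    text = msg.decode("utf-8")               # re-checked here, not assumed
    if any(ord(c) < 0x20 and c not in "\r\n\t" for c in text):
        raise ValueError("control characters rejected at app layer")
    return text

def handle(msg: bytes) -> str | None:
    try:
        return app_validate(edge_filter(msg))
    except ValueError:                       # UnicodeDecodeError is included
        return None                          # accepted, examined, binned
```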
In constructing internet services, permissive allowance tends to lead to a world of hurt these days. We've certainly widely moved to "default to deny" stances in things like firewalls - long ago, by necessity. "Assume compromise" is also an increasingly common mindset - how do you detect and then resolve that?
Think about all the attacks that have resulted from crafted packets - things like the Ping of Death, LAND and Smurf attacks, against which most systems have long since been hardened. There will doubtless be more incredibly-obvious-in-hindsight attacks demonstrated in future. You need to use these examples to start examining your assumptions about how you're handling traffic/data - no matter what part you play in the professional IT field, or even, to some degree, as an "end user".
I vividly recall the first "real" hacker I met - at school in the mid-1990s. The guy was about five years younger than I was, writing his own operating systems and OS-like frontends to DOS, and had the kind of devious mind that asked "what if" questions about EVERYTHING. He showed us a hilarious bypass of the school library's brand-new "anti-theft" system, based on RFID-like tags. When you legitimately checked out a book, there was an insert you put into the pocket with the anti-theft tag in it. It turned out that the anti-theft tags in the books cancelled each other out, so long as you aligned the tags right next to each other (just being in general proximity wasn't enough); as long as you "borrowed" in multiples of two, well, checkouts were for dummies. Simples! Of course, we responsibly disclosed this to the horrified librarian, who then swore us to secrecy. That was many years ago now, and the library has since been moved and refurbished, so, well, hopefully they've sorted that out. I have long taken examples like that, my own attempts to get around things (as thought experiments), and the exploits I've heard about, to richly inform ways of raising the bar for anyone trying to exploit my systems! Working in schools, you find a lot of inquisitive teenage minds with time on their hands, many of whom are only too happy to show you where your assumptions fail (universities, even more so)... Obviously, if you work in some industries, the stakes are much, much higher.
Do we examine the assumptions of what we are allowing carefully enough?
I'd argue that we often don't, from two perspectives -
- That we often think other people know more than we do, and are therefore probably right;
- We look for reasonably quick fixes to problems so we can move on to yet more of our never-ending to-do list - sometimes basic functionality is "good enough" - but is that basic functionality achieved dangerously?
The key insight you need to pick up from pen-testing and hacking is that people can modify packets or other data in unexpected ways, and do not necessarily follow your approved or expected way of doing things.
Make sure you're not basing the security of key infrastructure on the assumption that people won't specially craft (or intentionally mess with) packets, or won't do things in odd ways.
- What assumptions have you made?
- Does that assumption fail safely - i.e. if someone messes with a packet in a particular way, does your filter still work and do what you expect, or does it result in an unintentional remote exploit?
- What are you REALLY saying with each and every security rule you put in place, and with the order in which those rules are evaluated? Why is your infrastructure constructed that way? Is there a better way?
- Are you regularly and critically reviewing security controls and infrastructural decisions (not only checking that they are still needed or signed off on, but that they don't have unintended effects)?
Certainly, you need to move towards models like zero trust - but remember, each time you open something, you're granting at least some trust to something else - you need to watch out for when that something isn't quite what you expect!
Bottom line: challenge everything, particularly anything to do with security. Never trust anything anyone else has written unless you're either willing to accept the risk, or prepared to "audit" every single line to confirm it actually meets your intended purpose - and_no_more. (We had a BGP policy called and_no_more: an explicit deny-all, applied after all the preceding export policies had allowed what we definitely wanted, in the ways we wanted it, to avoid any inadvertent leaks.) I've been in several organisations where, even if the platform has an implicit deny-all at the end of a firewall ruleset, an explicit one is configured there, too.
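A minimal sketch of the and_no_more idea - first-match evaluation with an explicit catch-all deny at the end (the rules themselves are hypothetical):

```python
# First-match policy chain with an explicit final deny ("and_no_more").
import ipaddress

RULES = [
    ("allow", ipaddress.ip_network("198.51.100.0/24")),  # sanctioned peer
    ("allow", ipaddress.ip_network("203.0.113.8/32")),   # monitoring host
    ("deny",  ipaddress.ip_network("0.0.0.0/0")),        # and_no_more
]

def decide(src: str) -> str:
    addr = ipaddress.ip_address(src)
    for action, net in RULES:
        if addr in net:
            return action
    return "deny"  # belt and braces: deny even if the explicit rule vanishes

print(decide("198.51.100.7"))  # allow
print(decide("192.0.2.200"))   # deny - caught by the explicit catch-all
```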
A likely common oversight: we are probably far too permissive with outgoing filters in many cases, and it is often only when we're concerned with DLP that we really start paying attention to the garbage that leaves our networks. A basic standard should include things like blocking outgoing tcp/25 from client network addresses that are not sanctioned MTAs, blocking contact with known C&C networks, and making sure spoofed packets aren't leaving our network (e.g. uRPF, or other controls that amount to it). Stricter than that often makes sense - if there are protocols that should NOT be hitting the internet (SMB, anyone?), drop them! This is obviously much easier at network edges than through global tier-one networks, and is best controlled at or near customer edge nodes (if not already done between trust domains internally).
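A rough sketch of egress checks in that spirit - the addresses and port list are illustrative assumptions, not a complete policy:

```python
# Egress policy sketch: only sanctioned MTAs may send SMTP out, and
# protocols that should never reach the internet are dropped outright.
import ipaddress

SANCTIONED_MTAS = {ipaddress.ip_address("203.0.113.25")}  # hypothetical MTA
NEVER_TO_INTERNET = {137, 138, 139, 445}                  # NetBIOS/SMB

def egress_allowed(src: str, dst_port: int, proto: str) -> bool:
    src_addr = ipaddress.ip_address(src)
    if proto == "tcp" and dst_port == 25 and src_addr not in SANCTIONED_MTAS:
        return False           # client boxes don't get to speak SMTP out
    if dst_port in NEVER_TO_INTERNET:
        return False           # SMB, anyone? Drop it.
    return True

print(egress_allowed("203.0.113.25", 25, "tcp"))   # True  - the MTA
print(egress_allowed("203.0.113.50", 25, "tcp"))   # False - random client
print(egress_allowed("203.0.113.50", 445, "tcp"))  # False - SMB outbound
```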
This doesn't mean you have to throw everything other people write in the bin - it calls for you (and your colleagues, to cast more eyes - and brains - over it) to carefully assess each change you're thinking of making, and to recognise that everyone is human and errs from time to time; this particularly applies to any configurations you're borrowing from elsewhere.
You're probably not going to (be able to) audit the code of the operating systems you rely on, nor that of many of your software programs - few people even have sufficient expertise to do that - but you should question your assumptions about how and why you are configuring security devices and policies in a particular way, and take steps to secure the "human element" in particular.
Definitely spend some time thinking about this in code or scripts you write.
If you lurk in service provider communities, you'll find people complaining about how hard it is to secure router control planes, and that even the best available filters people share have gaps or gotchas. Whilst running a more or less air-gapped management network is fairly easy within a campus or an individual datacentre, it's much harder across an internet-scale network (partly because of cost, partly because of complexity - achieving parallel robustness of customer circuits AND control circuits is expensive, and compromising on management resilience is shooting yourself in the foot somewhere you can't simply pop downstairs and poke things). Scale and complexity are potent underminers of security - both because the more moving parts there are, the harder it is to keep everything as secure as possible (assumptions about who is doing what between or within teams are a common human failing, and scale simply increases the difficulty and potential attack surface) and, of course, because larger things are more interesting targets. It's a big field, but you can eat an elephant - one piece at a time.
Whereto from here?
Quite who needs to police these things for the good of the Internet/IT ecosystem is an interesting question, but we can safely assume that at some point in the future, if we don't get our own house in order, policy and regulation will take the place of elective professional standards and norms. Keep this in the RFC and BCP context, not in the realm of law, regulation and policy, for as long as we can!
Considering how critical the Internet is becoming (more utility-like every day), this also raises the question of when IT itself will become a more regulated profession like accounting, actual engineering or medicine, with expected and more or less enforced standards of knowledge and ongoing professional development (within the limits of human error). You certainly have it within your own power to ensure your actions, and (usually) those of your colleagues, are carefully assessed for their assumptions and tested in appropriate ways. Diversity of thought here can be very valuable - but obstructionist "no, because security" on everything can go too far and ultimately undermine your efforts. The rise in people calling for "anti-fragility" and the expurgation of "anti-patterns" strongly suggests we're moving from IT as, in many senses, a pioneering toy to IT as essential infrastructure - like highway bridges, hospitals, water, landlines, gas, aviation, sewerage and electricity, which, if they were managed in a "move fast and break things / live life in beta" manner, would almost certainly leave the world looking rather different than it does. The key difference? Arguably, licensed professionals and rather different attitudes and tolerances towards failure and breakage.
IT in many ways remains a somewhat "apprenticeship"-based industry - you might do some reading here and there, and you may do courses, degrees, certs and so on, but at the end of the day, much of what you do is emulation of those who have gone before you, picked up working side by side with them (or inheriting their code or infrastructural decisions over generations!). This is why we place a big premium on "experience" - the education arguably isn't rigorous, in-depth or practical enough to replace years of "on the job" training, seeing first-hand what works and what does not, and adjusting to the little idiosyncrasies unique to every organisation and "their" way of doing IT. The seniority in our teams comes as much from having seen first-hand why things are the way they are, and why things aren't another way, and being able to predict (with varying degrees of confidence) how further change might perturb a "stable" system - and perhaps from decades of thought, learning and experience that lead to "gut feels" that work surprisingly well.
"Or equivalent experience" is a key phrase here. You are not going to be allowed to design and build huge bridges because you have 15 years of experience laying rebar and concrete for them - you will need a civil, mechanical or structural engineering qualification (depending on what you're designing!), and progressive experience of designing and project managing construction of bridges that don't fall down - and membership to a professional body that enforces standards. There are vaguely similar computer-related bodies in some countries, but I don't think they are yet anywhere near as rigorous as those for more "traditional" professional qualifications. We should expect they will become more like those traditional professional bodies, we should have input into making sure that they are highly regarded and rigorous marks of professional competence. As IT becomes more and more central to "everything" we do, the stakes are ever higher, and our professional responsibilities, must grow to meet those expectations. A key danger we need to guard against, I think, is people who get too far firmly astride the Dunning-Kruger summit of mount stupid - people need to move off that lofty peak and into areas where they learn and grow, and get the holistic experience they need to excel. I've certainly been there...
We must also recognise that moving to professionalise IT in this way raises considerable barriers to entry for under-represented groups and for people without significant financial wealth. Much of IT may end up being shut off to those who cannot afford a multi-year higher degree and more or less "intern"-style early professional practice requirements (with all the challenges that brings where it is un(der)paid). As we consider fighting for the professionalisation of IT, we should also fight for mechanisms that ensure ongoing social justice - at the very least within our profession. At the moment, grit, determination and a little luck can get you aboard a promising career train; it would be a shame to completely lose that path to self-improvement and professional growth. There are few other careers that offer such stratospheric potential from modest beginnings.
Every job in “IT” is an “IT Security” job. To be aware of the context of your decisions on attacker abilities is universal.
— Swift⬡nSecurity (@SwiftOnSecurity) July 11, 2020
At the highest maturity there should hardly be an IT Security department. It’s everybody’s job to provide operational assurance. There’s no delineation.
Thinking & Learning works everywhere...
If you needed another reason to develop such knowledge and patterns of thinking, that same knowledge and method of application makes you a MUCH better troubleshooter and implementer - and the more of the infrastructure you understand, the further those insights stretch, and the better you can deal with weird interactions between all the parts. Eventually, you will - kicking and screaming, perhaps - realise humans are one of those parts and have to learn about them, too!
Learning, thinking and experience are synergistic; the more you have or do of each of them, the better you are able to find solutions (and problems!); this whole is more than the sum of its parts. This is particularly the case where you spend some time engaged in critical self-reflection to reinforce learning and discover gaps and things you're not good at. You may also need to develop a good ability to see when you need to hand over to someone else - and discover where you find and cultivate those people, but that's another topic; in brief, teams, communities of practice and diversity all enrich what we can collectively achieve.
If you're consistently the "smartest person in a room", you need to go and find some other rooms to hang out in, because you'll learn a heck of a lot more that way. If you feel like the dumbest person in the room, it can actually be very motivating to get a significant helping of Clue, ASAP. There are forums that seek to do precisely that - find them; they tend to exist in hacker communities and professional networks. Find the right ones, and you'll pick up on all the latest norms, some cutting edge practices, and a rich history of skeletons that explain why the world is the way it is. Respect those gatherings of minds, and make sure you observe their written and unwritten rules about respect and confidentiality. A lot of what is said is within a certain professional "circle of trust" that is understood will not really go beyond the borders of those walls (virtual or otherwise) until it is appropriate (if it ever is).
Obviously, it is impossible to literally see all the talks and read all the things, but you need to leverage the key concepts and principles that are revealed to you as much as you can - and then go out and get some more. Just like you need to eat and drink every day, you need to learn and think a bit every day too in order that your brain is sustained. If you can find a topic that is so interesting to you that learning it does not seem like work, but is something you would rather do instead of some other leisure activity you'd normally engage in, then you have been blessed with an excellent hobby. If you're exceptionally lucky, those interests will align with topics that are useful to your career, too. Sure, you can do some "learning as a chore", but "learning as fun" is much more sustainable, and, I'd argue, the main type of learning you should do outside of work hours. We've all been grabbed by excitement about a project, looked at the clock, and realised it is somehow 3am (the last time that happened to me, I was trying to learn how to do something I was doing in Bash in Python instead). Watch out for burn-out if all of your hobbies and leisure hours are indistinguishable from aspects of your day job; outside of a tiny minority of people, this does not end well!
I'll leave you with a further industry perspective on this:
Earlier today I was on a small conference call with a peer. And they said this other team didn’t have the knowledge or contextual awareness to make a decision.
— Swift⬡nSecurity (@SwiftOnSecurity) July 10, 2020
So I asked, “Why are you and I qualified but they’re not?”
The other tech thought for a bit.
“We’re always researching.”