Saturday 1 February 2020

Trust Boundaries and Reliable Backups: Ransomware Edition

A network whose administrators I know quite well has been thoroughly compromised: critical files were encrypted, and much of the configuration was destroyed. Even their backups (such as they were) are no more.

This is, to put it mildly, a fairly catastrophic incident for any organisation.

We turned our minds to the issue and thought about how we can prevent similar things happening to us...

The Compromise :(


As far as they can tell, an Internet-facing RDP endpoint was compromised, and then the attackers moved laterally (probably under interactive human control) throughout the network via the most vulnerable systems, deleting or modifying configurations, and trashing or encrypting files and data. PowerShell scripts were part of it. In other words, their entire environment and almost all their data were pretty comprehensively pwned - every IT person's worst nightmare. Somewhere, there was a message about sending bitcoins somewhere, after which you'd get your files back...

One of my acquaintances there had warned their management about pretty much every single thing that was compromised and why they needed controls x, practices y, and resources z. These warnings were dismissed. Fortunately for them, this was all in writing. That doesn't help the organisation one bit right now, but you can bet the esteem in which this person's opinions will be held in future just shot up stratospherically within that organisation.

Over time (weeks!), they've gotten some of their things back through some herculean efforts. A last-ditch effort is to send some hard drives off to a data recovery specialist firm. Some things will, in the end, just be gone.

So what can we do to prevent something like that happening to us?!

Who do you Trust?

There were a few areas where "best practice" was not followed (in some cases, perhaps for "operational reasons"; one doesn't wish to pry!).

An internet-facing RDP host is generally going to be a bad idea. An unpatched RDP host is a really bad idea. If there are sound operational reasons why remote RDP access is required, a reasonably secured (and patched...) VPN should probably be put between it and the Internet - if possible with even the VPN restricted by a firewall to reasonably plausible source IP addresses. Usually, this sort of thing is not done because a vendor claims it causes problems - or, more likely, because of push-back from non-IT staff who find it "makes their job impossible to have to use the VPN as well". You should harden that host (and of course your VPN) as much as possible - there are usually decent guides online for hardening the configuration of virtually everything (e.g. RDP). As painful as many users find MFA, it certainly has a role. But don't do it like this. Obviously, baseline best practices - sane separation of privileges, running with "least privilege", a basic sane firewall rule-set, and patching systems - are going nowhere, and need to be a key part of what good IT teams do all the time. Most hosts in their default state are not great - but most vendors (or helpful blog authors) offer better, more secure suggested configurations for most systems. For all configurations that are "necessarily risky", make sure the organisation, and their auditors, know about and accept the risk. "Unnecessarily risky" configurations MUST be dealt with.
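
As a rough illustration (not a complete hardening guide), something along these lines in PowerShell would enforce Network Level Authentication and scope RDP to a VPN client subnet - the 10.8.0.0/24 range is a made-up example, and many environments would push this via Group Policy instead:

    # Require Network Level Authentication for incoming RDP connections
    Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp' `
        -Name 'UserAuthentication' -Value 1

    # Allow RDP only from the (hypothetical) VPN client subnet, rather than from anywhere
    # (and remember to disable the built-in "Remote Desktop" allow rules that permit any source)
    New-NetFirewallRule -DisplayName 'RDP - VPN clients only' -Direction Inbound `
        -Protocol TCP -LocalPort 3389 -RemoteAddress 10.8.0.0/24 -Action Allow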

Another issue was that credentials were shared between systems and across (what should be) trust boundaries - identical credentials were used for administrative accounts in Windows and for a FreeNAS system being used as an iSCSI target for Windows DPM-based backups. It should be obvious that the change of operating system then does virtually nothing to help prevent compromise of the backup files. Once you understand that attackers can dump credentials, abuse session tokens, or have a great deal of fun once they compromise a domain controller, you're going to want to think about how you can limit the "blast radius" of each potential compromise. If your AD Domain is compromised, it's vital that your backups aren't. This then brings us to Trust Boundaries...

[Image: fake road warning sign (red circle) with a "trust" sticker over a silhouette of a person. Where are YOUR trust boundaries? Photo by @bernardhermant on Unsplash]

Trust Boundaries

It's pretty convenient to have a single account to do everything. 

It's not a good idea. 

I spend a non-zero part of my day fetching credentials out of password safes - either my personal ones, or the network engineering one. My workstation account is admin on that machine, but it's not admin anywhere else (plus we don't use Active Directory, so...). I have different credentials to log into my laptop vs my general central user account. Many systems have a separate login from that. Most systems require me to get a root password to do anything "dangerous". There's still a non-zero risk of compromise for a determined enough adversary (dump a clipboard buffer, or some in-memory attack, perhaps) - my personal password safe empties the clipboard after a set interval, and locks itself again. The central one requires interrogation over SSH with a credential each time - of course, then there's the plaintext on screen, so shoulder-surfing is a risk, as are long-running sessions that aren't locked or cleared, as far as local attackers are concerned (and hey, the .bash_history file might be quite interesting - but we've disabled that feature, and there are various additional protections there too). We've split one huge password safe into several smaller, role-specific ones. None of them would end in a good day if someone were to pwn them, but it's a (very) carefully considered balance of good passwords and workable usability for a sysadmin/neteng ops team. What risks does Single Sign-On (via SAML, or anything else) bring? Are those self-signed certificates and organisation-wide installation of the root certificate a good idea?
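
As a toy illustration of the "empty the clipboard after a set interval" behaviour (a decent password safe does this for you; this is just a sketch, and the 20-second timeout is an arbitrary example):

    # Hold a secret in plaintext only briefly, put it on the clipboard, then overwrite it
    $secret = Read-Host -Prompt 'Secret' -AsSecureString
    $plain  = [System.Net.NetworkCredential]::new('', $secret).Password
    Set-Clipboard -Value $plain
    Start-Sleep -Seconds 20          # arbitrary example timeout
    Set-Clipboard -Value ' '         # overwrite so the secret doesn't linger on the clipboard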

A very useful capability is being able to rapidly and comprehensively change privileged credentials across an organisation. It's not always easy, but it's a worthy project to work out where people have done silly things ("I'll use my Domain Admin account to allow this random service process to Run As me...") and fix them. This falls into the realm of Identity Management (IdM) and related systems and tooling. What do you do when a sysadmin (or someone with "sysadmin level access", even if their job title isn't sysadmin) leaves your team, regardless of whether the departure is amicable or not...?
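
As a quick sketch of hunting for that particular anti-pattern - the domain name CONTOSO below is a placeholder; services running as built-in or managed service accounts won't match:

    # List services that run under a named domain account rather than a built-in or managed service account
    Get-CimInstance Win32_Service |
        Where-Object { $_.StartName -like 'CONTOSO\*' } |
        Select-Object Name, DisplayName, StartName, State |
        Format-Table -AutoSize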

Certainly, best practice suggests that accounts used for day-to-day computing, and those used for privileged tasks, ought not to be the same (and sysadmins running as unprivileged users helps you understand your users' experience better!). The less useful a credential is, the better. In some environments, even using Windows Domain Admin credentials on untrusted hosts (or hosts used for non-sysadmin tasks) may be a bad idea.

If you've not done it, think about your infrastructure and whether a single credential works everywhere to do everything - and if there are areas where certain credentials should not work, and certain boundaries beyond which they MUST NOT work. Having to regularly use different credentials is, in the long run, a lot better than a serious system compromise. 

Bastion hosts or otherwise specially hardened and configured hosts (like SAWs) may be the most appropriate place from which to conduct sysadmin tasks (or anything requiring elevated permissions) - and whether or not those should allow any form of remote access should be very carefully considered. It's typically considerably less of a pain to drive across town at 3AM to fix a glitch than to spend weeks recovering from a full compromise - if you ever can. How much do you "trust" hosts (or users, or applications) in various areas of your infrastructure? Are there clear boundaries between them (or some of them) where you could delineate, enforce and protect a trust boundary? Can you separate the privileges needed to administer desktop systems from those needed for your server infrastructure (does desktop support require Domain Admin? Probably not!)? This means you will end up with personal username/password combinations for the various different roles you may fill as a sysadmin. Figure out how to not let this get in the way too much (and no, the answer certainly isn't "password re-use"!). Can your privileged management networks be accessed from places you would prefer them not to be? Another related idea is that you should only ever go down trust levels - manage from the most trustworthy platform towards the least - and NEVER in the opposite direction (i.e. establishing an administrator-level connection from an untrusted host to a trusted one is a bad idea - like, perhaps, an RDP session from a client desktop PC back to a Domain Controller to "quickly change a setting/check something").

SMMEs are always a significant challenge - they don't have the budget for 24/7/365 operations/security team(s), nor teams large enough across time zones that someone is always in the office. You're going to have to balance being the person who drives across town at 3AM vs privileged remote access - or a management-approved "sorry, we're not fixing that right now" SLA. Distributed teams with remote workers can also certainly make the challenge greater. Make sure your alerting systems follow your SLA - if you're not expected to fix it at 3AM, make sure it's not waking you up at 3AM - and likewise, if you have on-call rotations, make sure the notifications don't go to people who aren't on call outside of their working hours. You're going to need that good night's sleep to clean up the mess in the morning! I sometimes wonder if companies ought to provide an allowance to get critical staff to live closer to the office (because rents are usually higher closer to the office in the middle of a city) - or work like many hospitals, with a dorm room for the on-call medical staff. From the perspective of keeping things safe, things that are turned off or not connected to a network (airgapped) are at low risk from network-based compromise - but don't forget about physical risks like disasters, theft and insider threats from employees or even customers on your premises. Watch out for freebie removable media - and even interface cables. You HAVE to spend some time training your employees/colleagues about these risks.

Sometimes, accidental trust boundaries save your bacon (flaky African power saves a global supergiant). This might also be a random snapshot that was left somewhere, or a copy of a server running on some dev's laptop - that's extremely lucky, not a strategy. Make sure you put intentional trust boundaries in the right places, and carefully consider worst-case scenarios and what the "unthinkable" might be!

The best trust boundary, of course, is an airgap - (tested) WORM-media offline backups, perhaps on a different physical storage medium type (hard drive vs tape vs optical, etc.), probably in a different location, are the "gold standard" for backup and disaster recovery for a reason. Remember not all offsite locations are created equal - in an SMME, the IT manager's or CEO's house might be an OK offsite location, but in a highly regulated industry... not so much. In some limited cases, paper may be the most reliable form, even if it's annoying to re-digitise and has its own risks. (We're good at storing paper.) In some industries, you may have to consider (very) patient APTs or threat actors in your processes.
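
On the "(tested)" part: one minimal sketch is to record a hash manifest when the backup is written, then re-check it from the offline copy before you rely on it. The paths below are hypothetical examples, and real backup products often have their own verification built in:

    # When writing the backup: record a SHA256 manifest alongside (but not only with!) the data
    Get-ChildItem -Path 'E:\Backups\2020-01' -Recurse -File |
        Get-FileHash -Algorithm SHA256 |
        Export-Csv -Path 'E:\Backups\2020-01-manifest.csv' -NoTypeInformation

    # When testing/restoring: verify that what you read back actually matches the manifest
    Import-Csv 'E:\Backups\2020-01-manifest.csv' | ForEach-Object {
        $current = Get-FileHash -Path $_.Path -Algorithm SHA256
        if ($current.Hash -ne $_.Hash) { Write-Warning "Mismatch: $($_.Path)" }
    }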

Towards "Zero Trust"? 

There is absolutely no denying that traditional models of enterprise architecture and security have changed (perhaps beyond recognition) - the hardened perimeter with very tightly managed hosts inside the organisation has completely vanished under the onslaught of a more mobile workforce with frequent remote working, BYOD, a wider range of devices and operating systems - and of course online B2B, B2C and Cloud infrastructure and processes. People want (and arguably need) access to things 24/7/365, wherever they are, on whatever random device they choose to use - how do we adjust our practices to this new paradigm? Can we reasonably limit certain behaviours or practices?

This then suggests a move towards "zero trust" - don't trust anything. Configure everything as if it were on a publicly routable IP address directly on the Internet. And then layer your protections with "defense in depth" best practices and norms on top of that. Make sure you can easily disable access for compromised accounts or hosts. Consider how you can harden the human element, which is often the "weakest link". How can you have strong AAA with little inconvenience? What barriers are there to MFA/2FA, and what can you do about them? What training do (all) the staff need? Work your way up the OSI model when securing your infrastructure, as the lower layers can irreparably undermine even good efforts at the higher ones. Of course, "layer 8" can topple the entire stack. If you're missing skills in your organisation, see if you can hire good outside consultants to help you up your game - and train your people. Try to encourage a learning culture in your team and colleagues. This is normally common in IT, but there are some who don't "naturally" do self-motivated learning.

I don't know if there is a 1:1 relationship between the sort of people who really OUGHT to be using physical security tokens or other multi-factor authentication (MFA), and how likely they are to complain (sometimes loudly) that it "gets in the way" of their work - but there does seem to be some correlation. If people find security "annoying", you can bet that's their systematic attitude to it - their credentials are probably poor quality, and poorly secured, and re-used everywhere, and - probably - on a post-it on their monitor, or written in their PA's "little black book". If you have enough control over the systems in your organisation, you may be able to be increasingly pedantic about what a user needs to do to prove their right to do an increasingly "risky" activity; low risks may have low (or even no) AAA requirements, whereas very risky things may require MFA, access from specific hosts, or even time-limited access (time-of-day more than session duration, though requiring re-auth is not a bad thing). Logging (to a hardened syslog server or similar) helps you figure out what went wrong, but it prevents nothing; SIEM and other log-based monitoring and alerting can help you detect an attack - and then thwart it, or at least give you hints as to what went wrong, where, and what to do to fix it (but, sadly, not totally prevent it). None of that helps if you have alert fatigue in your team and stop paying attention or ignore it (I've personally experienced this with our own monitoring - in other words, I get too many alerts - my colleagues, who often get storms of 300+ low-priority [and, to some extent, irrelevant] alerts at a time, must really not notice message alert tones).

In any case, get (and maintain...) the "basic" hardening, patching, user training / security culture, strong AAA and upgrading obsolete stuff right long before you go down the big league infosec rabbit-hole - so many organisations get that wrong, so you'll be ahead of the pack already - being a hard target tends to help mitigate "script kiddie" drive-bys and low-level intentional attackers (and their bot armies). If you're worried about nation-state actors, well, it's time to look at building out a full, dedicated 24/7/365 infosec team. It's not paranoia if they're out to get you. And "they" are...!

Disaster Recovery Role-Playing

It may be worth setting aside some time within your team to role-play various disasters, following any disaster recovery plans you already have, and perhaps improving those as you discover gaps. If you don't have a DR plan, write one, and start testing it. Red and Blue teams have effectively made this an "all day, every day" practice, often on live systems, but a "desktop RPG" version is also very useful. (Someone has written an actual infosec RPG.) Make sure you consider how you'll rebuild from scorched earth, and how you can ensure that your recreated environment is "known good" and, to the extent possible, free of any lingering traces of the attack or catastrophe. What happens when you plug your air-gapped backup into that host there to start recovery...? Make sure the senior management of the business understands the likely estimated time to recovery (ETR) for various disasters, has some understanding of the likelihood of each disaster happening, is ready for the worst to happen, will support the process of clawing everything back, and has already accepted the time it will take and the business impact. They also need to understand that if they don't like the ETR, the only solution is (almost always) more resources - often on an exponential curve. "Sure, you can have nine nines, but it will take 99% of last year's operating profits to reach it" is not a popular answer (particularly when you later hit the outage after spending the money). Iterate through the low-hanging fruit first, before tackling the harder / more expensive problems. "Badly crimped cable" is the worst outage reason ever in a world of certified, low-cost, moulded cables, relatively cheap access switches and multi-NIC machines - look for fixes and preventative steps like that during your team exercises.

Your organisation gets pwned. How do you recover all the desktop machines? Did people actually follow your instructions to store data in the right place, or did they leave it all on their C drive and now it's gone?
Was there un-encrypted PII on that laptop that was lost - what's the fine going to be, and what do you need to do to report it to the data protection authorities?
All your sysadmin team gets on a single bus that drives off a cliff in dense fog on their way to an off-site DR planning meeting. Now what?

Overall situational awareness is important, and a "chain of command" can be useful. Make sure you model partial awareness in the players and see what chaos erupts (only the "Dungeon Master" should "see the whole board"!). If your organisation is strongly siloed, model that. Then play a round with no silos. Have post-mortem discussions about the results with everyone, and leave them space to have their own "ah-ha!" moments about certain types of change, and where the hidden problems might lie.

One thing you must do during this exercise is consider when actions destroy evidence - and what you should do about that. Destroying evidence is not only legally un-ideal, it's also operationally very problematic, because once it's gone, it's harder (often impossible) to work out what happened and what was compromised. DFIR is hard.

But I don't have a dedicated security FTE, or a SOC! :( 

That certainly doesn't mean all is lost. Over a certain size, and certainly in regulated industries, management should understand the need for security-focussed staff; if they don't, see the section below. :)

There are certainly steps you can take - life-long learning is a key part of IT, so up-skill yourself. Work with your colleagues to figure out how to "bake in" security as much as possible in your day-to-day operations, and in projects. Think like an attacker, and if something you do makes your stomach drop when you consider "what if that gets pwned?", figure out what the remediation or mitigation is. Mentor and learn from your colleagues! There are people on the Internet who write useful articles about security - read them! Large vendors often have good documentation (why they don't ship their product in the most secure configuration, I'm not sure, but eh, legacy, amIright?). Make sure that you're plugged into relevant security feeds from appropriate vendors, and patch away as appropriate in your environment. When infosec twitter isn't a dumpster fire, it's quite informative.

Certainly, K-12 schools are unlikely to have anything like the resources to pull off even a tiny fraction of what large enterprises do - but if you cover the basics, and go one or two steps further, you're going to be a lot better off than the average (and often a harder target than a larger enterprise *with* a SOC - because you can often be more in control, and fully understanding and managing fewer things is easier). MFA/2FA can help a LOT where you(r users) are bad at passwords. Secure your email as much as you can. Train your end users on basic precautions. Use security features, don't turn them off (hello, UAC!). Under GDPR-style legislation, make use of strong encryption (hello, BitLocker) - but make sure you have a way of recovering that data. Iterate - you won't get perfect in one go, and it's a shifting target anyway. Be clear about why spending time on project X instead of project Y is better (which has the greatest result?) - and be very wary of more advanced actions meaning you drop the ball on the basics.
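
On the "make sure you have a way of recovering that data" point, a minimal sketch (using the BitLocker module that ships with recent Windows) to confirm volumes are protected and that a recovery password protector actually exists - and, one hopes, is escrowed somewhere safe:

    # Check encryption/protection status of all volumes
    Get-BitLockerVolume | Select-Object MountPoint, VolumeStatus, ProtectionStatus

    # Confirm the system drive has a recovery password protector you could fall back on
    (Get-BitLockerVolume -MountPoint 'C:').KeyProtector |
        Where-Object { $_.KeyProtectorType -eq 'RecoveryPassword' }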

In the same way we expect end users to take basic security precautions (good password practice, not falling for phishing, following process, policy and procedure, realising when they did a stupid thing and what to do about it, etc.), exemplify those behaviours yourself, and extend that ethos and practice to "professional" or "expert" level. My wife absolutely loses her *&%^ every time she has to log into something, or a UAC prompt comes up, or *anything at all* happens that isn't directly something she wants to do - or, worse, "gets in the way". This is, I think, pretty much a good model of a "normal" user; work to change that, remove friction where it is possible and sensible and safe to do so, and convert them to understanding why those controls are there, what they do and why they are important to that user themselves. "If you want to know who someone really is, see who they are when they use a slow computer".

Even if it's in your spare time (it's almost fun sometimes. OK, OK, it *is* fun!), think about the overall environment and what the threats are. Figure out the risk (in likelihood*impact format), and see if there are some real priority issues you can bring to the appropriate forum to get addressed. Then go down the list, chipping away. Before long, you're better than a significant majority of other enterprises, and, unless you're really interesting, it's likely people will move on to easier pickings. Certainly, you want to get to the point where determined script kiddies are thwarted at all times, and a determined and persistent advanced adversary is the only thing likely to get in. By the time you're operating at that level, you will need a SOC to respond to alerts and mitigate threats in near-real-time. Users tend to be amongst the biggest "wild cards" in securing IT. Privileged users, even more so! If a new threat is hitting mainstream IT media, you know it's something you need to assess for likely impact, and patch/mitigate if it's applicable. In larger environments, compile a "risk register" - obviously, this is privileged information and should not be widely accessible - address the items on it, and get management to formally accept things that cannot be changed or cannot be totally prevented. In some cases, IT's role is to identify problems. Management assigns priority or accepts things as they are. It's not acceptable, I think, for you not to flag problems you identify through the appropriate channel. Sadly, it is sometimes appropriate for management to say "yes, the CEO can have his cat's name as his password, and yes, the cat's name is in his corporate profile on the website because CAT IS LIFE". You might want to opt out of such an organisation... C. Y. A., squared! (Cover your ass by raising your professional opinion on the matter, then "see ya" - get another job.)
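
As a toy illustration of the likelihood*impact idea (the entries and the 1-5 scales below are made-up examples; any consistent scale works):

    # A tiny, hypothetical risk register scored and sorted by likelihood * impact
    $risks = @(
        [pscustomobject]@{ Risk = 'Internet-facing RDP, no MFA';            Likelihood = 4; Impact = 5 }
        [pscustomobject]@{ Risk = 'Shared admin password on backup target'; Likelihood = 3; Impact = 5 }
        [pscustomobject]@{ Risk = 'Unpatched desktop fleet';                Likelihood = 3; Impact = 3 }
    )
    $risks |
        Select-Object Risk, Likelihood, Impact, @{ Name = 'Score'; Expression = { $_.Likelihood * $_.Impact } } |
        Sort-Object Score -Descending |
        Format-Table -AutoSize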

Later versions of PowerShell (5.x) have a lot more controls. PowerShell is useful for managing systems. That also means it's useful for attacking systems. A lot of IT is like this - there are better configurations; there are logging and security controls built in; sysadmin tools can be turned around offensively ("living off the land"). Leverage these as much as you can.
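
For example, a minimal sketch of switching on PowerShell 5.x script block logging and transcription via the policy registry keys - in practice you'd usually push these settings via Group Policy, and the transcript share below is a hypothetical path:

    $base = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\PowerShell'

    # Log the content of executed script blocks to the PowerShell Operational event log
    New-Item -Path "$base\ScriptBlockLogging" -Force | Out-Null
    Set-ItemProperty -Path "$base\ScriptBlockLogging" -Name 'EnableScriptBlockLogging' -Value 1

    # Write session transcripts to a central (write-mostly) share
    New-Item -Path "$base\Transcription" -Force | Out-Null
    Set-ItemProperty -Path "$base\Transcription" -Name 'EnableTranscripting' -Value 1
    Set-ItemProperty -Path "$base\Transcription" -Name 'OutputDirectory' -Value '\\logserver\pstranscripts$'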

If you don't have money to spend on problems, there are cheaper (often free) ways to achieve quite a lot. If, for instance, you don't have money for a heavyweight SIEM, you can accomplish a lot with Windows event logging. Indeed, people have built up a whole system around it - WEFFLES (github). Even fairly basic enterprise firewalls may have features you can use (vulnerability scans, reporting) that help you get a better picture of your environment. I once caught malware simply because one of the client ports looked weirdly busy when I started graphing switch ports with Cacti - a few moments of investigation showed this to be malware traffic (it's long enough ago that I can't recall what it was, and this was before I'd learned to take notes...). Leverage tools that are included in your licensing - WSUS, WDS and MDT, along with Group Policy, are really strong ways of getting most of the way to consistent and secure machine configuration in Microsoft environments once you take the time to configure them correctly. You may find some use in an MDM solution for BYOD, too - I found a lot of very useful knobs in the one Google allows you to set up in G Suite. The less you control BYOD, the less you should trust it (so partition your network(s) appropriately!).
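
Even with nothing fancier than the built-in Security log, you can start asking useful questions. A small sketch counting recent failed logons (event ID 4625) per targeted account - run it elevated on the host (or a Windows Event Forwarding collector) in question:

    # Failed logons in the last 24 hours, grouped by the account that was targeted
    # (property index 5 is TargetUserName in a 4625 event on current Windows versions)
    Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4625; StartTime = (Get-Date).AddDays(-1) } -MaxEvents 500 |
        Select-Object TimeCreated, @{ Name = 'Account'; Expression = { $_.Properties[5].Value } } |
        Group-Object Account |
        Sort-Object Count -Descending |
        Select-Object Count, Name -First 10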

Remember that you can spend a lot of time and effort setting up controls and monitoring - if nobody checks those things, it's wasted effort (and if the alerting is poorly tuned, alert fatigue will erase their benefit). With no FTE or dedicated percentage of a job profile's KPAs for security, you have two basic choices - give up, beyond the absolute basics, OR ensure that your manager understands that "systems administration: 50%" implicitly means "security: 10%" - or whatever is reasonable. A third path, of course, is to fight for the appropriate and necessary change.

If nobody in your team has the skills or experience to systematically think through these issues, IT Auditors will usually cover the basics, and pen-testing firms will certainly find you some things to work out. Many businesses are more likely to "splash out" on external expertise than hire dedicated FTE or up-skill existing employees. It's sometimes tragic how often such a report will say the same things the sysadmin team (or person!) has said for years - but now management believes it.

Working with management

In some organisations, there is a fairly good working relationship between IT and Management - management trusts IT to suggest and deliver business-appropriate IT policy and procedure (and of course systems and services), and IT expects management to back that up through enforcement, political support of "unpopular" decisions, appropriate staffing, training resources and budgeting.

Not everywhere works like this!

I have, for instance, seen a relationship where a sysadmin keeps asking for hard drives (storage is kind of important) in order to run systems, decent backups, and the like, and their manager never approves this. Imagine if this organisation was hit by the scenario above. What chance would the sysadmin have to maintain multiple independent copies of the data - let alone test recovery of a backup set - if they scarcely had enough space for day-to-day operations...? I've never worked in an organisation where this was a problem - a well-reasoned request, explaining why it is needed, to be charged against an approved budget is not usually turned down - even by non-technical managers. But it happens!

On top of this, in larger organisations, IT departments have management that sysadmins and the like have to work through (or, ideally, with). Eventually, you will hit the limits of what they understand about particular technologies or problems. The trick is, I think, to learn to manage your manager or "delegate upwards" - it's possible you know more, or have thought more, about particular types of problems, or know the low-level configuration details that are a potential problem (though you may not know all the business needs, or about some trade secret or nascent deal). Present your ideas in the language of management - risks, rewards, strengths, opportunities, threats; profit, loss and cost (unless you have a very technical manager who already "gets IT", of course). Don't (ever) overplay your hand - and try to be specific to your business. Be "useful" to your manager or the HOD - consider presenting them with good ideas in ways they can easily present to C-level peers in the relevant management fora, that are "implementable" (with budgets, rationales, roadmaps, etc.), and that have sound business-related outcomes. Present the ideas in the format they like. If they're face-to-face people, discuss it in a one-on-one; if they like text, send them a document - etc. Offer to follow up a basic "pitch" with a more developed proposal. If your ideas are smaller projects with less (political) impact, they may just immediately tell you to implement them (yay). Whilst you are likely not a lawyer (and should be careful with drafting policy in highly regulated industries), be familiar with appropriate legislation, policy and regulation, and show how your suggested changes will help meet those, if applicable.

I've never worked in "giant" organisations, but I'm certainly seeing that larger organisations tend to have more politics, silos and other "human factors" that sit between sysadmins with a job to do and getting that job done, or that leave them struggling to influence change which they are otherwise perhaps well placed to inform (if not drive). IT people in small organisations are more like the Robin Williams genie in Aladdin - Phenomenal Cosmic Power, itty bitty living space. In other words, what you can achieve kind of depends on what IT is like where you work. If you're the only IT person, the great thing is you do everything. And the terrible thing is you do everything. It's hard to work on projects when you're filling the printers with paper or dealing with every single desktop end-user issue all day - you immediately need this book. But you're likely to wield a lot of power (hopefully with discretion), so you can make big changes and implement big ideas with a lot less oversight or control - be careful how you wield that! Don't be a loose cannon; get approval for changes; learn how to communicate with non-technical decision-makers. In larger organisations, you might be shielded from "Tier 1" problems - but you probably won't have the entire environment in your head, and you may not have access to the entire infrastructure (this can be a surprising change when you move into increasingly larger organisations).

Even in small organisations, you should run risky or disruptive changes past management for approval. There is always, of course, a risk that you will work with a manager, or in an organisation, that doesn't heed your advice, or refuses to support you in important changes. In such cases, keep a paper trail showing that you've suggested (workable) solutions that mitigate defined risks or meet particular needs, and that they have been turned down. You do not want to be the scapegoat! In rare cases, depending on the organisational culture, it may be necessary or warranted to "skip a level" and go up the organigram - be damn sure you've exhausted the normal avenues, and that it is acceptable, but in some cases it's perhaps a move you need to consider - and realise you may burn (possibly career-limiting) inter-personal bridges doing so. In other cases, you may find that managers are swayed more by group consensus than individual "good ideas" - if the entire team says "we MUST do X", that may help. If your job description says you are "responsible for X" and doing X properly is impossible with the resources you have, make sure you cover this in writing, and exhaust all avenues. In some extreme cases, leaving such an organisation (or manager) may be the only recourse for your sanity or to preserve your personal or professional code. It is of course possible you're wrong, or they are not telling you something vital, so do be wary of how far you push things - but if they are flying in the face of standards and norms and it seems reckless or dangerous, well, as the three-letter initialism goes: C. Y. A.! In larger organisations, you can achieve quite a lot working "horizontally" - go and speak to people in other areas about things you're worried about, or good suggestions they can consider. Also, realise sometimes other people will claim credit; try to move on when that happens to you, as there's not likely much you can do about it. "Making your boss look good" is part of the territory in most (all!) jobs, even if it's not explicitly a key performance area or written into your job profile. And you know you got IT done!

There is a risk that IT is always seen to "cost money" and "say no" - learn how to turn business needs into a "Yes, of course, and this is how you do that securely". That includes business needs that your business doesn't yet understand it has; as an IT professional, it's your job to identify those! Perhaps it seems ridiculous that you have to understand business rather than business understanding IT - but that is what being professional is about - realise you're there to figure out how to make IT work for the business, not merely just work. The self-evident converse - that all businesses are now IT businesses (and therefore need to "get" IT) - probably hasn't yet filtered through, so you need to work around that!

It's similar to how I learned - with regards to science communication - that as interesting and vital as I thought something was, it was important to translate it into words, images or feelings(!) that are relevant to the audience ("what's in it for me?"; "why should *I* care?"). Similarly, if I wanted an academic to write an article, they would spend a lot longer modifying a straw man article I sent to them than it would take them to write the same article from scratch - but they would never start from scratch! Give people information the way they want it, written from their point-of-view and highlighting their interests or needs - you're already most of the way there. Give people a path-of-least-resistance to follow that aligns with their mutual self-interest, and they will follow it! (aka "Make doing the right thing the easy thing").

So, work out how to tailor your internal business communications in ways that are effective, and realise that what works can be surprisingly context-specific. Learning how to do this can drive a great deal of professional satisfaction - doing it well builds another kind of very powerful trust. Of course, if you write too well, you may be turned into "the documentation person" - which isn't necessarily a terrible fate... :)

Good luck out there; google_moar, work with management - and get patching! And don't forget about those trust boundaries...
