With mounting pressure around cyber literacy in the Boardroom, Directors are looking for specifics about what will be expected of them. Likewise, organizations are wondering what is fair for Directors to expect of management. Drawing on experiences from both sides of the table, what follows is a set of reasonable expectations that leverage Director talents to establish effective cyber oversight.
I'll do this using a mnemonic to guide program governance internally and externally - TRIC: Threats, Risks, Incidents, and Compliance. I've found this effectively captures the range of governance discussions around cyber while serving as a quick reference to avoid missing important areas. While this simple acronym can actually make a healthy four-slide agenda for any cyber governance meeting, I'll use it in this four-part series to structure reasonable expectations of and from Directors.
Episode 1: Threats
A threat is different from a risk. Threats are high-level and can be organized into a short list of major categories. I like to bucket these by the objective ("why") of an adversary. Others have attempted to organize threats by threat actors ("who"), techniques ("how"), or elements ("what"), but the result is a never-ending list of overlapping categories such as insider threat, malware, Russia, denial of service, phishing, ransomware, and more each day. That approach is challenging because a given incident or news story will never fit neatly into a single class, and it is difficult to say that your organization cares more about one than others. Organizing by objectives, on the other hand, creates a tight group of six: sabotage, extortion, data theft (with four sub-types), fraud, resource hijacking (repurposing your infrastructure for nefarious purposes), and "watering hole" attacks (compromising your organization to attack your customers). Most cyber attacks will fit neatly into one of those categories, and each of those categories will carry with it a unique set of attack methods and appropriate controls.
A discussion of threats is not only appropriate but essential in the Boardroom. With finite resources and seemingly infinite attacks, a cybersecurity program will only find success if it can determine what to deprioritize.
Director question: What is our cybersecurity mission?
Good answer: (hypothetical) We are concerned about intellectual property theft and extortion, but not so much about sabotage. Read all about it in our Cybersecurity Strategy document.
So how can Directors participate in ratifying the mission and reviewing it periodically? Let's use a classic heat map to set up that discussion.
Here we see our six threat objectives portrayed with inherent likelihood and impact values. This should feel familiar; traditional Enterprise Risk Management uses similar visuals to communicate risks arising from operations, pending litigation, geopolitical strife, exchange rates, climate change, and many other topics. What's important for cyber, however, is the opportunity this presents to bring the Board into the discussion.
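To make the heat map concrete, here is a minimal sketch in Python. The six threat objectives come from the taxonomy above, but the 1-to-5 likelihood and impact scores and the likelihood-times-impact scoring convention are hypothetical placeholders; each organization would set its own scales and values.

```python
# Minimal model of the inherent-risk heat map: each threat objective
# carries a likelihood score (x-axis: how likely is someone to try this
# on us?) and an impact score (y-axis: if it worked, how bad would it
# be here?), both on a hypothetical 1 (low) to 5 (severe) scale.
from dataclasses import dataclass

@dataclass
class ThreatObjective:
    name: str
    inherent_likelihood: int
    inherent_impact: int

    @property
    def inherent_risk(self) -> int:
        # A simple likelihood x impact product, one common scoring convention.
        return self.inherent_likelihood * self.inherent_impact

# Illustrative scores only -- every Board would plot its own values.
heat_map = [
    ThreatObjective("sabotage", 2, 5),
    ThreatObjective("extortion", 4, 4),
    ThreatObjective("data theft", 4, 5),
    ThreatObjective("fraud", 3, 3),
    ThreatObjective("resource hijacking", 3, 2),
    ThreatObjective("watering hole", 2, 4),
]

# Rank the threats for discussion, highest inherent risk first.
for t in sorted(heat_map, key=lambda t: t.inherent_risk, reverse=True):
    print(f"{t.name}: likelihood={t.inherent_likelihood}, "
          f"impact={t.inherent_impact}, risk={t.inherent_risk}")
```

The point of the exercise is not the arithmetic but the conversation: Directors are well positioned to challenge the impact values, as discussed below.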
Director questions: How frequently is this happening? Is this happening to firms similar to ours? May we hear from someone who experienced an incident of this type?
Good answers: Experts who have worked first-hand on incident investigations and response can cite specific incidents and - more importantly - trends that separate one-off random occurrences from true patterns that should be concerning.
It's valuable to view Director oversight of cybersecurity not as a rush to apply newly developed cybersecurity skills, but as an application of deep pre-existing expertise and experience outside cyber. While Directors will rarely keep up with outside experts or senior management when it comes to evaluating the inherent likelihood or x-axis value of threat attempts on the heat map, they are unmatched for assessing the inherent impact should an event unfold. Armed with a description of what an extortion or sabotage attack could look like, Directors will understand financial projections, strategic initiatives, business unit expectations, M&A plans, and similar factors that are essential to evaluate impact. Directors can and should participate in validating and even setting the y-axis values of these threats, and it should be done with an eye toward practicality.
Director question: What assumptions underpin the assessed impact of these threats?
Good answer: We believe we can tolerate a weeklong outage in our core products with a high - but not severe - impact. Would the Board like to discuss that scenario?
But how do we avoid an ultra-conservative approach where we claim every cyber threat would be existential just to dodge potential liability? Enter the risk appetite statement.
Many of us have been on the receiving end of risk appetite setting expectations for years, but rarely has it manifested in something beyond a bravado statement on the website and 10-K about how seriously cyber is taken. That has passed muster in regulatory and third-party examinations to date because more specific examples simply haven't existed. Using the heat map above, however, the risk appetite concept not only takes form, but becomes essential to avoid committing to addressing every threat. Scroll back up to the heat map and take a closer look at the risk appetite line. Threat objectives with inherent risk values above this line define the organization's cyber mission. These threats become intelligence collection priorities, meaning that the Board wishes to remain apprised of news and developments about them. If you've rated extortion in this region, you are expecting quarterly updates to include extortion-related news - perhaps in appendices. Some regulatory authorities are requiring Board level briefings on threat intelligence, and without defined priorities an unfiltered stream of headlines can be overwhelming and overshadow less publicized but more company-relevant information.
While the combination of risk appetite and inherent risk scores for top threats can ensure the Board is subscribed to the right news feeds, an even more valuable product may be simply declaring that top threats must be addressed. This means that testing and control deployment should align with what is known about these threats. If third-party red team (ethical hacking) engagements are commissioned, the threats above the risk appetite line set the objectives for your testers and the research topics for threat intelligence activity.
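The mechanics of the appetite line can be sketched in a few lines: threats whose inherent risk score exceeds the threshold become the mission, the intelligence collection priorities, and the red team objectives. The threshold value and the scores below are hypothetical illustrations, not recommendations.

```python
# Sketch of the risk appetite line: threats whose inherent risk score
# (likelihood x impact, on hypothetical 1-5 scales) exceeds the appetite
# threshold define the program mission and set testing priorities.

RISK_APPETITE = 12  # hypothetical threshold; each Board sets its own

# name -> (inherent_likelihood, inherent_impact), illustrative values only
inherent_scores = {
    "sabotage": (2, 5),
    "extortion": (4, 4),
    "data theft": (4, 5),
    "fraud": (3, 3),
    "resource hijacking": (3, 2),
    "watering hole": (2, 4),
}

def mission_threats(scores: dict, appetite: int) -> list:
    """Return threat objectives above the appetite line, highest risk first."""
    above = [(name, lik * imp) for name, (lik, imp) in scores.items()
             if lik * imp > appetite]
    return [name for name, _ in sorted(above, key=lambda x: x[1], reverse=True)]

# These become collection priorities and red team objectives.
print(mission_threats(inherent_scores, RISK_APPETITE))
# -> ['data theft', 'extortion']
```

Raising or lowering `RISK_APPETITE` expands or shrinks the mission, which is exactly the trade-off the Director questions below are designed to surface.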
Director question: What testing are we performing to see if the techniques that worked elsewhere for these threats would work against us?
Good answer: We have rotated across our top inherent threats in conducting intelligence-driven tests, and here is what we learned and what we did about the results...
So far we've talked about the inherent risk values of threat objectives. To recap, the likelihood (how likely is someone to try this on us?) and impact (if it worked, how bad would it be here?) combine to set the program mission, collection priorities, and the focus of testing. But what about residual risk? There is absolutely a place for that as well!
Using our trusty heat map, we can plot residual values to communicate how our program looks after taking into account the controls we have deployed. At first this can be fairly conceptual, with senior management in an internal cyber governance committee walking through scenarios and candidly assessing if the documented techniques would work against the company. With subject matter experts who know the true story in the room, management can estimate the residual values of threat objectives with some accuracy. But nothing takes the place of hands-on testing, and tests that were prioritized by specific inherent threats will produce concrete data about their potential to manifest. Actual incidents, of course, should also cause management to reassess residual risk values, but the goal is to identify issues before they are exploited. Adding residual values to a heat map can look like this:
Again we have the separate concepts of likelihood and impact. With residual risk, the likelihood, or x-axis, reflects how likely an adversary would be to succeed if they tried to accomplish the threat objective at the company. This should be updated by management regularly to communicate the result of any testing, ranging from vulnerability scanning and bug bounty programs to sophisticated red team engagements from ethical hacking firms. This is the vehicle for management to proactively share that they have become aware of the increased potential for an issue before it manifests into an incident.
But what about residual impact? Segmentation or "blast radius" controls, as my friend Phil Venables would say, reduce residual impact. These controls are less common, so it is not unusual to see minimal abatement of the y-axis values of threats between inherent and residual values. Examples where controls can make a difference include isolating banking accounts to minimize fraud impact or strict network segmentation to isolate the effect of ransomware.
Director question: What have we learned about our program's susceptibility to these threats through testing?
Good answers: We found problems. Every time we test we find new problems, but they are usually limited in severity. We address them quickly, identify thematic issues and implement long-term education and cultural fixes, and repeat the process.
And where does our risk appetite statement fit in when it comes to residual risk? The risk appetite defines the threshold of acceptability on residual risk. Risks sitting above the line represent a mandate for action from the Board to management.
Director question: I see that after recent testing you determined the residual risk of fraud exceeds our appetite. What do you need to drive this value down?
Good answers: We need to delay the promised release of these new products or features to free the resources needed to mitigate these risks in the next quarter. Longer term, we will need to increase engineering resources or lower delivery expectations to maintain an acceptable risk level. We need full support from senior management around policy compliance and a zero-tolerance stance on employees refusing to adopt secure practices.
Director question: If we increase our risk appetite, what negative scenarios will become more likely? If we decrease our risk appetite, what costs and sacrifices would that trigger?
Good answers: If we increase our appetite we will stop testing against the following threats and accept the following residual risks, which will significantly increase the likelihood of this scenario... If we decrease our appetite, our mission will expand to include the following threats, and the following investments will be needed to move residual risk into an acceptable area...
As you can see, there are ample opportunities for Directors to use their business and risk management experience to govern cybersecurity via threat management. Likewise, a simple taxonomy of threats and the application of classic likelihood and impact models can set up a useful Board level discussion on the mission of the cybersecurity program and its status at any given point.
Stay tuned for episode 2, where we will talk about Risks... and how we can boil down thousands of discrete technical issues into a single visual that communicates what the Board can influence - Remediation Agility.