Jun 16, 2018

Briefing Your Board on Cybersecurity Part 1/3: Corporate Governance 101 for Security Professionals

Cybersecurity is undeniably one of the most concerning topics in corporate boardrooms today. Directors are looking not only for assurance around the obvious risks, but for general education around this new and complex subject. They seek clarity around what is expected of Directors, how exposed a firm is to the latest breach on the news, how management assesses cybersecurity risk, and how a firm's program stacks up in independent reviews. A knock-on effect of this is a raft of questions among CISOs and other senior executives around the remit and role of corporate governance in cybersecurity and the cadence, materials, and metrics that best meet those needs. While the answers to these questions are certainly evolving and will always differ among organizations and cultures, I've found some tips and pointers that have resonated with my peers during industry discussions. This collection of tips mirrors a few talks I've given, and I hope fellow CISOs, security practitioners, corporate governance professionals, and corporate Directors may find it helpful. This episode reviews general corporate governance from the perspective of a security professional, while Episodes 2 and 3 will dive into specific strategies and materials that may prove useful when presenting to your Full Board and Board Committees, respectively.

Corporate Governance 101

It doesn't hurt to level-set on the mission and composition of Boards to set the stage for an executive looking to tailor an engagement strategy.

Foremost it is worth remembering that Directors represent shareholders. This holds for public and private companies alike, though of course a privately-held firm is more likely to have Directors representing more specific if not polarized investment interests. A highly-regulated firm - public or private - is likely to have Directors with specific backgrounds and expertise in regulation, and a public company is likely to have Directors who specialize in corporate responsibility and public benefit.

Second, note the distinction among "NEDs", "EDs", and "Experts". NEDs - or Non-Executive Directors - are the traditional Board Directors that come to mind: non-employees who likely serve on other Boards, may hold "day jobs," are likely not integrated into your internal corporate systems, but are nonetheless likely bound by a specific contract (more on that later). EDs - or Executive Directors - are full-time staff such as your CEO, CFO, etc. who serve on Boards and committees and represent company management. Experts are notable as outside resources who may serve on specific product committees or in a consultative capacity for specific subject matter, but who are likely not voting Board members and should thus not be privy to all confidential discussion. These distinctions should be noted in determining the scope of attendance for any cybersecurity engagement.

Boards generally function via specific smaller committees in addition to the full Board, with specific responsibilities delegated to each committee. While traditional committees include areas such as compensation and regulatory oversight, cybersecurity oversight is usually delegated to risk, audit, or technology committees. An emerging trend is the creation of a dedicated cybersecurity committee, though the most common approach today seems to be the appointment of a Director with practical cybersecurity experience as a member or chair of the risk committee. It is worth noting that many firms will maintain multiple Boards to cover subsidiaries that must maintain a degree of independence from the parent firm due to regulatory/jurisdictional requirements or to reflect an ownership structure that differs from the parent's. In those cases it is worth remembering that a subsidiary Board exists for the sake of independence, and cybersecurity must be addressed in the context of the specific subsidiary and its concerns, requirements, and regulation. Subsidiary Boards need to know that their voices are heard and their unique considerations are fully addressed.

The consensus I have heard among CISO peers is that detailed and quantitative cybersecurity reports are usually delivered at the committee level quarterly or semi-annually and focused on measuring and mitigating risk, while reports to the full Board are usually made annually and more focused on subject matter education. My experiences will thus be organized into those relevant for full-Board interaction (Episode 2) versus those relevant for Board committee meetings (Episode 3).

There is a great deal of new material being delivered to a Board audience, and it serves a presenter well to fit material into the language and constructs with which Board members are already familiar rather than trying to teach them a new language.

TIP: Don't teach a new language and a complicated subject at the same time.

To that end, here are some phrases and constructs that are likely to resonate with Directors:

Risk = Likelihood x Impact

As security professionals we tend to scoff at that basic construct and quickly move on to our parlance of threat actors, vectors, TTPs, and threat = motivation x capability. A good example of the "Cyberspeak" you should avoid with the Board is the dense threat-modeling terminology found in the MITRE Cyber Threat Susceptibility Assessment document.

I recommend against using language like that during Board briefings. While we've spent years arguing about that jargon and debating the idiosyncrasies of various terms, stick to what people know. Yes, Risk equals Likelihood times Impact. It's a true story, it's gospel, and Directors are likely to be thinking in this construct regardless of what we present. So stick to it and mold your delivery to it so you can concentrate on your unique message and avoid translation tasks. We can talk about Crown Jewel Analysis at Black Hat.
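If it helps to make the construct concrete, here is a minimal sketch of Risk = Likelihood x Impact expressed as a simple scoring model. The 1-5 scales and the Red/Amber/Green cut-offs are my illustrative assumptions, not a standard; the point is only that the arithmetic Directors expect is deliberately simple:

```python
# A minimal sketch of the Risk = Likelihood x Impact construct on a simple
# 5x5 matrix. The 1-5 scales and the Red/Amber/Green cut-offs are illustrative
# assumptions, not a standard -- calibrate them to your own program.

def risk_score(likelihood: int, impact: int) -> int:
    """Score a risk as likelihood times impact (each rated 1-5)."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be rated 1-5")
    return likelihood * impact

def rag_rating(score: int) -> str:
    """Map a raw score onto the Red/Amber/Green language Boards expect."""
    if score >= 15:
        return "Red"
    if score >= 8:
        return "Amber"
    return "Green"

# Example: a likely threat (4) with moderate impact (3) lands in Amber.
print(rag_rating(risk_score(likelihood=4, impact=3)))  # Amber
```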

Independence

The very existence of Boards is based on "trust but verify" and "let's not take your word for it - we want an outside voice". This will manifest itself in several ways, but an important lesson is to include external support for any strong assertions you make and not to expect your word alone to be authoritative. Cite external references, engage third-party attestations, and make outside experts available to the Board. You will see some concrete examples of this in the next episodes.

Lines of Defence

The Lines of Defence (LoD) model is increasingly popular in risk management and likely to be familiar to at least your Risk Committee members. It definitely picked up steam earlier in the UK via the likes of the Bank of England (hence my continued British spelling of Defence when discussing it), but it is fairly ubiquitous these days. In layperson's terms, the three LoD are well summarized as 1: people who solve problems, 2: people who identify problems, and 3: people who ensure those tasks are being consistently and well performed. More descriptively, outside cyber the roles are often filled by technologists/IT, Enterprise Risk, and Internal Audit respectively.

So where does cybersecurity fit in? A little Googling will reveal material backing security's place in the second line, and a little more Googling will build a case for the first line. I believe both may be technically correct, however, and will reflect the origins of your security program. In my observations, where the CISO position and program were created in reaction to a security incident, the focus will often be on controls and incident response, and thus the program will fit well into a first-line categorization. Where, on the other hand, a firm established the role in response to increased governance, regulatory, and/or customer scrutiny, the focus will often be on maturity, consistency, and documentation, and thus the program will often fit into the second line. I believe these same phenomena also explain dramatically differing reporting lines for CISOs, with first-line groups often reporting to a Chief Information Officer and second-line groups often reporting to a Chief Risk Officer or General Counsel. Still other firms will have "all of the above" within the CISO organization, report as a peer of the CIO and CRO orgs, and preserve independence between the control/first-line functions within security and the proactive assessment/second-line functions. While I believe that last model is the most holistic and "mature", it is all about filling the unmet need in your organization.

What is important for the sake of governance interaction, however, is recognizing how your organization is structured and where you fit, to avoid confusion or, worse, being measured against a role that isn't yours. Define where you fit in the LoD internally, make that clear to governance, and establish clarity with your internal IT and Enterprise Risk groups so you are singing the same tune. Mention the Lines of Defence in your strategy documentation and program discussions with the Board.

Risk Appetite Statements, Limits, and Red/Amber/Green Thresholds

Directors - and particularly Risk Committee members or other Risk Management professionals - will be used to discussing operational risk in the context of quantifiable metrics, constructing a statement about a numerical threshold they are willing to tolerate, and being alerted as that threshold is approached and exceeded. As examples, consider uptime, fraud loss rates, or manufacturing defects. All lend themselves to statements such as "we will tolerate no less than 99.999% uptime measured on a monthly basis...", "the Board will be notified if loss in a calendar month exceeds $25m USD...", or "Q3 saw our scrap rate reduced by 15% from..."

Unfortunately, finding analogs in cybersecurity is challenging. As a security professional it is important to know what you are walking into and being measured against. While it is not reasonable to force yourself into manufacturing data that looks like this, it is entirely beneficial to be aware of what preceded you and to explain if and why you are presenting risk metrics in a different format. If your cybersecurity risk is focused on fraud, by all means report loss rates. If, however, like many of us you prioritize boolean threat objectives such as sabotage, you may find your data wanting. Having a "we will tolerate 0 breaches" statement accompanied by "good news - no breaches again" is not very productive.

What can work in this structure is raw incident counts. Structuring limits around the number of incidents aligned to a well-defined severity can be productive. If, for example, you define high severity incidents as those that have impact OR are the result of targeted malicious intent, and critical incidents as those that meet BOTH criteria, you can effectively model limits around the number of events tolerable at high or critical levels, set the threshold at which the Board/committee is notified immediately, and show trends during quarterly updates. I do not recommend, however, setting a threshold around a metric the firm has no control over. In that regard, use impact as a litmus test. Phishing attempts should not warrant a call to the Board. Phishing failures, however, may.

TIP: Don't set a risk limit around an event that the firm has no control over.
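To make that severity construct concrete, here is a minimal sketch of how such incident limits might be modelled. The field names, the quarterly limits, and the notification rule are hypothetical and purely illustrative; real values belong in a risk appetite statement agreed with your committee:

```python
# A sketch of the severity scheme described above: "high" incidents have impact
# OR result from targeted malicious intent, while "critical" incidents meet BOTH
# criteria. The quarterly limits and the immediate-notification rule are
# illustrative assumptions -- agree real values with your Risk Committee.
from dataclasses import dataclass

@dataclass
class Incident:
    name: str
    had_impact: bool           # e.g. data loss, downtime, fraud loss
    targeted_malicious: bool   # e.g. a deliberate, targeted attack

def severity(incident: Incident) -> str:
    if incident.had_impact and incident.targeted_malicious:
        return "critical"
    if incident.had_impact or incident.targeted_malicious:
        return "high"
    return "low"

# Hypothetical risk-appetite limits on incident counts per quarter.
QUARTERLY_LIMITS = {"critical": 0, "high": 3}

def limit_breaches(incidents: list[Incident]) -> dict[str, bool]:
    """Return which quarterly limits were exceeded, for trend reporting."""
    counts = {"critical": 0, "high": 0}
    for incident in incidents:
        sev = severity(incident)
        if sev in counts:
            counts[sev] += 1
    return {sev: counts[sev] > limit for sev, limit in QUARTERLY_LIMITS.items()}

def notify_board_immediately(incident: Incident) -> bool:
    """Under this illustrative appetite statement, critical incidents go straight to the Board."""
    return severity(incident) == "critical"
```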

Limits work well with lagging indicators; save leading indicators for risk maps

When it comes to managing discrete risks vis-à-vis patching, if you can bundle risks into a digestible number of parallel constructs and truly rate them based on how they affect the likelihood or impact of an event that would concern the Board, it may be feasible to assign service-level-agreement-style limits to the amount of time for which you are willing to allow a risk of a certain level to remain unmitigated, and to solicit Board attention when violations of those thresholds are threatened or occur. Remember, though, that while risk is meant to be a leading indicator to prevent future issues, limits and thresholds are lagging indicators describing past events, even if the spirit is intended to drive changes for future impact. Scrap rate is a past event, uptime is a past event, financial loss is a past event, and cybersecurity incidents are past events. Risks, however, represent potential future events. So I would advise you not to try to shoehorn security metrics into limit metrics beyond incidents, and instead save true risk data for the more qualitative reports we'll discuss in the Board Committee material in Episode 3.
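For completeness, if you do bundle discrete risks into SLA-style remediation limits, a minimal sketch might look like the following. The severity tiers and day counts are illustrative assumptions only:

```python
# A sketch of SLA-style limits on how long a risk of a given severity may
# remain unmitigated before committee attention is warranted. The day counts
# below are illustrative assumptions, not a recommendation.
from datetime import date, timedelta

# Hypothetical maximum days a risk may stay open, by severity.
REMEDIATION_SLA_DAYS = {"critical": 30, "high": 90, "medium": 180}

def sla_status(severity: str, opened: date, today: date) -> str:
    """Classify an open risk as within SLA, approaching breach, or breached."""
    limit = timedelta(days=REMEDIATION_SLA_DAYS[severity])
    age = today - opened
    if age > limit:
        return "breached"      # candidate for committee escalation
    if age > limit * 0.8:
        return "approaching"   # flag before the threshold is crossed
    return "within SLA"

# A high-severity risk open since March 1 has breached a 90-day limit by mid-June.
print(sla_status("high", opened=date(2018, 3, 1), today=date(2018, 6, 16)))  # breached
```

Even then, treat such a construct as supporting detail for committee discussion rather than a substitute for the qualitative risk reporting we'll cover in Episode 3.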