Jan 31, 2022 4 min read CyberOps

Patching is Overrated

Patching became a household term during the Equifax breach and the Congressional hearings that followed. IT maintenance and hygiene have their place in running a secure environment, but over-emphasizing patching can divert limited resources from more important work or introduce operational risk of its own.

Patches are only relevant once a security vulnerability is known and addressed by a vendor. So whether it is a 0-day for which no patch yet exists, or just the unavoidable window between the time a patch is released and the time even the most diligent enterprise can apply it, you can be absolutely certain there will be periods when you are unpatched. Further, making any production change in a panic introduces significant operational risk and can impair intended functionality. You can't survive like that. It's like setting sail on a rubber raft known to pop holes at random - and patching isn't the answer for that one either.

But here is the good news: contrary to popular misconception, vendor security flaws account for only a small percentage of exploitable vulnerabilities. Misconfiguration likely takes first place, with coding errors (many of which are technically misconfiguration as well) a close second. Hacking into enterprise infrastructure - whether in a datacenter or the cloud - is far more likely to abuse missing input validation that allows SQL injection, default credentials on an unprotected product management interface, or an improperly secured debugging interface than to run an exploit against an unpatched product.
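
To make that concrete, here is a minimal sketch in Python (the table, column names, and lookup functions are hypothetical, using the standard sqlite3 module) of the input-validation mistake behind most SQL injection, alongside the parameterized version that closes it. No vendor patch fixes the first function; it is your code.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, ssn TEXT)")
conn.execute("INSERT INTO customers VALUES ('Alice', '123-45-6789')")

def lookup_vulnerable(name: str):
    # String concatenation: attacker-controlled input becomes SQL.
    # A "name" of x' OR '1'='1 returns every row in the table.
    query = "SELECT name, ssn FROM customers WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def lookup_safe(name: str):
    # Parameterized query: the driver treats the input strictly as data.
    return conn.execute(
        "SELECT name, ssn FROM customers WHERE name = ?", (name,)
    ).fetchall()

print(lookup_vulnerable("x' OR '1'='1"))  # dumps the whole table
print(lookup_safe("x' OR '1'='1"))        # returns nothing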

Why is that? First, remember that most products are built to do the terrible things you are trying to protect against. A service holding PII almost certainly exists to disclose that PII - just to authorized individuals. If the data were never going to be used, it should never have been collected in the first place (that's a whole other topic...). So you've already paid many engineers and invested in tons of hardware to make it possible to crank out reports of your sensitive data - just to the right people. An intruder would be silly at best to start from scratch and engineer all the interfaces and logic it would take to query, decrypt, and exfiltrate your data. Rather, they are going to use your existing tooling and just tweak the authorization bit. Maybe that means using a valid administrator login, or maybe it means using SQL injection to query a database that already has stored procedures built to prepare your data.

Pivot from data theft to sabotage and the story is the same: your systems have on/off switches and other consoles you've invested heavily to build. For an attacker, it is much more appealing to just find that interface and click the "off" button. After all, exploit code is generally detectable. In the fairytale land of security theater, product exploits are out there getting stopped by intrusion prevention systems all day. In reality, people aren't running exploits over the wire, and the IPS is sitting around burning up budget. In real life, people are finding the page that says "click here to download all the secret data" and clicking the button. Even ransomware and viral malware like NotPetya made most of their gains by dumping and replaying credentials over authorized communication channels - not by leveraging the exploits they bundled as a backup vector.

So should you abandon your efforts to keep software up to date? Absolutely not. Rather, treat it as preventative maintenance, not a reactive measure. If you find yourself canceling holidays and working all night to deploy patches, something may be lacking in your architectural approach.

  • Limit attack surface. If you have port 445 open to Windows servers from the Internet, you will fail, and the answer will never be patching. You need to lock down ingress to a finite set of essential services you can monitor and maintain.
  • Continuously test that surface. Run bug bounties, pen tests, and red team engagements focused on the specific outcomes you are concerned about.
  • Choke down egress in production. Whether it's a configuration error or an unpatched system, exploits generally need outbound network connections that normal operation does not. If you are just a glutton for canceling weekend plans and working all night, do it this weekend to lock down recursive DNS and stop letting your production servers resolve fully qualified domain names in .club or .su. Killing DNS recursion alone can stop the majority of outbound callbacks and tunnels. To finish the job, use cloud security groups, network ACLs, or - as a last resort - proxy servers to restrict layer 3 communication to the few services and hosts actually needed (a sketch follows this list).
  • Focus your detection investment on behaviors - not signatures. Look for unusual use of credentials, network connections, reconnaissance commands, and data movement instead of exploit strings. After all, if you can load an exploit string into an intrusion detection system, you probably already loaded it into your preventative tools and disabled the capability anyway - so you are just watching failed attempts. If you instead look at everyone running whoami or authenticating to 5+ servers in under 30 seconds, you'll stand a chance no matter how they got in (see the second sketch below).
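
To illustrate the egress point, here is a hedged sketch using boto3 - the region, security group ID, and CIDR below are hypothetical placeholders, not a prescription - that drops the default allow-all egress rule from an AWS security group and re-allows outbound traffic only to a single known dependency.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # hypothetical region

SG_ID = "sg-0123456789abcdef0"       # hypothetical production security group
DEPENDENCY_CIDR = "203.0.113.0/24"   # hypothetical upstream API range

# Drop the default allow-all egress rule that ships with every security group.
ec2.revoke_security_group_egress(
    GroupId=SG_ID,
    IpPermissions=[{
        "IpProtocol": "-1",
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# Re-allow only the outbound traffic production actually needs: HTTPS to one
# dependency. No general DNS resolution, no arbitrary layer 3 callbacks.
ec2.authorize_security_group_egress(
    GroupId=SG_ID,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": DEPENDENCY_CIDR, "Description": "upstream API"}],
    }],
)
```

Network ACLs or a proxy can achieve the same effect; the point is that an exploit's callback has nowhere to go.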

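To illustrate the behavioral detection point, here is a minimal sketch over a hypothetical list of parsed authentication events (the field layout and account names are made up) that flags any account authenticating to five or more distinct servers inside a 30-second window - a signal that holds up regardless of how the intruder got in.

```python
from collections import defaultdict

# Hypothetical parsed auth events: (epoch_seconds, account, server)
events = [
    (1643600000, "svc-backup", "db01"),
    (1643600004, "svc-backup", "db02"),
    (1643600009, "svc-backup", "web01"),
    (1643600013, "svc-backup", "web02"),
    (1643600021, "svc-backup", "dc01"),
    (1643600500, "jsmith", "web01"),
]

WINDOW_SECONDS = 30
THRESHOLD = 5

def flag_credential_spray(events):
    """Flag accounts that hit THRESHOLD+ distinct servers inside WINDOW_SECONDS."""
    by_account = defaultdict(list)
    for ts, account, server in sorted(events):
        by_account[account].append((ts, server))

    alerts = []
    for account, hits in by_account.items():
        for i in range(len(hits)):
            # Servers reached within the window starting at this event.
            window = [srv for ts, srv in hits if 0 <= ts - hits[i][0] <= WINDOW_SECONDS]
            if len(set(window)) >= THRESHOLD:
                alerts.append((account, sorted(set(window))))
                break
    return alerts

print(flag_credential_spray(events))
# [('svc-backup', ['db01', 'db02', 'dc01', 'web01', 'web02'])]
```
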
Patching needs to be a backstop, not a first line of defense. When you analyze the next breaking vulnerability, finding yourself vulnerable should be an indictment of the architectural decisions you made yesterday, not of the software upgrades you need to run tonight.