Patching became a household term during the Equifax security breach and Congressional hearings. While IT maintenance and hygiene have their place in running a secure environment, over-emphasis can distract limited resources from more important tasks or trigger operational risks.
Patches are only relevant once a security vulnerability is known and addressed by a vendor. So whether it is a 0-day vulnerability discovered before any patch exists, or just the unavoidable window between the time a patch is released and the time even the most diligent enterprise can apply it, you can be absolutely certain you will have periods where you are unpatched. Further, making any production change in a panic introduces significant operational risk and can impair intended functionality. You can't survive like that. It's like setting sail on a rubber raft known to pop holes at random. Patching isn't the answer for that one either.
But here is the good news: Contrary to popular misconception, vendor security flaws contribute only a small percentage of exploitable vulnerabilities in reality. Misconfiguration likely takes the prize for first place, with coding errors (many of which are technically misconfiguration as well) a close second. An attacker breaking into enterprise infrastructure - whether in a datacenter or the cloud - is much more likely to abuse a lack of input validation that allows SQL injection, default credentials on an unprotected product management interface, or an improperly secured debugging interface than to run an exploit against an unpatched product.
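To make the input-validation point concrete, here is a minimal sketch of the classic failure mode. The table, column names, and payload are all hypothetical, chosen for illustration; the contrast is between building a query by string concatenation and binding the input as a parameter.

```python
import sqlite3

# Hypothetical table of sensitive records (names and schema are assumptions).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, ssn TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "111-11-1111"), ("bob", "222-22-2222")])

def lookup_vulnerable(name):
    # String concatenation: attacker-controlled input becomes part of the SQL.
    return conn.execute(
        "SELECT name, ssn FROM users WHERE name = '" + name + "'").fetchall()

def lookup_parameterized(name):
    # Placeholder binding: input is treated strictly as data, never as SQL.
    return conn.execute(
        "SELECT name, ssn FROM users WHERE name = ?", (name,)).fetchall()

payload = "nobody' OR '1'='1"
print(lookup_vulnerable(payload))     # the injected OR clause dumps every row
print(lookup_parameterized(payload))  # no user literally has that name: empty
```

No vendor patch fixes the first function; it is a coding error, exactly the kind of misconfiguration-adjacent flaw attackers go looking for first.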
Why is that? First, remember that most products are built to do the terrible things you are trying to protect against. A service with PII almost certainly is made to disclose that PII - but to authorized individuals. If the data were never going to be used, it should never have been collected in the first place (that's a whole other topic...). So you've already paid many engineers and invested in tons of hardware to make it possible to crank out reports of your sensitive data - just to the right people. But as an intruder, it would be silly at best to start from scratch and engineer all the interfaces and logic it would take to query, decrypt, and exfiltrate your data. Rather, you are going to use existing tooling and just tweak the authorization bit. Maybe that means you just use a valid administrator login, or maybe it means you use SQL injection to query a database that has stored procedures embedded to prepare your data.
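"Tweaking the authorization bit" often means nothing more exotic than a missing ownership check on an existing reporting endpoint. A minimal sketch, with entirely hypothetical names and data: the broken version authenticates the caller but never verifies that the requested account belongs to them, so any valid login can export anyone's report.

```python
# Illustrative in-memory stand-ins for a reporting service (all assumptions).
REPORTS = {"acct-1": "alice's report", "acct-2": "bob's report"}
OWNERS = {"acct-1": "alice", "acct-2": "bob"}

def export_report_broken(user, account_id):
    # The user is authenticated, but ownership is never checked:
    # any logged-in user can pull any report just by changing the id.
    return REPORTS[account_id]

def export_report_fixed(user, account_id):
    # Authorization: the caller must own the account they are querying.
    if OWNERS.get(account_id) != user:
        raise PermissionError("account does not belong to this user")
    return REPORTS[account_id]

print(export_report_broken("alice", "acct-2"))  # alice reads bob's report
```

The intruder didn't write any exfiltration machinery; your own report generator did all the work.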
Pivoting from data theft to sabotage, your systems have on/off switches and other consoles you've invested heavily to build. As a hacker it is much more appealing to just find that interface and click the "off" button. After all, exploit code is generally detectable. In the fairytale land of security theater, product exploits are out there getting stopped by intrusion prevention systems all day. In reality, people aren't running exploits over the wire, and IPS is sitting around burning up budget. In real life, people are finding the page that says "click here to download all the secret data" and clicking the button. Even ransomware and viral malware like NotPetya made most of their gains by dumping and replaying credentials over authorized communication channels - not by leveraging the exploits they bundled as a backup vector.
So should you abandon your efforts to keep software up to date? Absolutely not. Rather, you should treat it as preventative maintenance, not a reactive measure. If you find yourself canceling holidays and working all night to deploy patches, something may be lacking in your architectural approach.
Patching needs to be a backstop, not a first line of defense. When you analyze a newly disclosed vulnerability, finding yourself exposed should be an indictment of the architectural decisions you made yesterday, not of the software upgrades you need to run tonight.