I've been using the term "West Coast CISO" a lot lately. While it feels like CISOs used to split into network/infrastructure CISOs and risk-manager CISOs, that split now has to make room for the CISO heavily focused on code security. The image is one of a CISO born in the cloud, focused on delivering (security) bug-free code, and thus oriented architecturally around CI/CD, change control, and automation, to oversimplify. This emphasis on code contrasts with the network controls and the talk of firewalls, DDoS, endpoints, WAF, and VPN/perimeter concepts you'll hear in the financial district. And that's before you even get to compliance, risk management, incident response, or threat and vulnerability management.
Much of this is simply a reflection of the evolution of technology. Many firms today own no firewalls, LANs, or servers, so it makes sense that software agents and access control lists just aren't top of mind. But producing code is nothing new. So when the rest of the infrastructure fell away, did security just get more bandwidth to focus on code, or are exploits and incidents truly leveraging code defects more frequently? It matters because I'm seeing not only security leaders, but business leaders and tech executives, equating cybersecurity with code quality first and foremost. So is this a healthy shift, with security leaders getting up to speed with new architectural paradigms? Or is this a "when all you have is a hammer, everything is a nail" situation, with software engineers turned CISOs simply doing what they do best?
One way of looking at application security, or AppSec, is that you make best efforts to avoid security bugs in engineering, but assume the worst and mitigate the most impactful defect classes - missing input validation, SQL injection, insecure direct object references - outside the deployment stack, in something like a (well-tuned) WAF. You complement that with a ton of testing - bug bounties being my personal favorite - and refocus developer education when you see thematic trends in your findings. Maybe that is the "East Coast approach." The "West Coast approach," on the other hand, is to put 50-75% of your security thought into getting secure code written up front.
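To make the defect classes concrete, here's a minimal sketch contrasting a SQL-injectable query with its parameterized fix. The schema, table, and payload are all invented for illustration; this is the kind of bug the "write it securely up front" camp wants to prevent and the mitigation camp wants a WAF to catch.

```python
import sqlite3

# Illustrative in-memory schema: a single users table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

def find_user_vulnerable(name: str):
    # String interpolation lets attacker-controlled input rewrite the query.
    return conn.execute(
        f"SELECT id, name FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name: str):
    # Parameterized query: the driver treats the input strictly as data.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
print(find_user_vulnerable(payload))  # returns every row in the table
print(find_user_safe(payload))        # returns nothing
```

The classic `' OR '1'='1` payload turns the interpolated WHERE clause into a tautology and dumps the whole table; the parameterized version matches no user named literally `' OR '1'='1`.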
While I love the improvements in code quality this trend is netting, I'm not convinced the threat intelligence supports the shift left. While anyone vending a specific solution will find data to convince you the problem they solve is the most urgent, my anecdotal observations from reading incident dossiers and indictments, and from listening to peer confessionals, are that good old misconfiguration is still king - or at least tied with credential misuse (I'm avoiding the overloaded term "Identity and Access Management," which wants to turn this space into a whole industry as well). To be fair, the engineering-focused security program is going to scoop up some configuration security when it automates the deployment and configuration of infrastructure-as-code. But many configuration items still elude the grasp of automation, up to and including the provisioning of cloud accounts themselves. And credential abuse (including the trend of abusing legacy multifactor authentication methods) generally sits entirely outside the purview of code.

So when an advisory client asked me for recommendations on full security code reviews and build-time code review tools, I had to pause. The more you shift left, the bigger the sea you are trying to cover, the higher the false positive rate, and the more likely you are to find things that simply will not be exploitable. Given finite resources, maybe a mid-sized shop is better off running a cursory IDE-integrated source code checker/recommender but focusing on outside-in testing of the final product. Not only will this approach find coding errors, it will also find configuration issues, authorization issues, and logic flaws that just aren't going to manifest to an automated scanner.
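As a sketch of what outside-in testing catches that a source scanner often won't flag: below is a hypothetical API handler that forgets an ownership check, plus a black-box-style probe that detects the resulting insecure direct object reference simply by requesting another user's record. Every name here is invented for illustration; a real probe would issue HTTP requests against a deployed endpoint.

```python
# Hypothetical in-memory "API": records keyed by id, each owned by a user.
RECORDS = {
    101: {"owner": "alice", "data": "alice's invoice"},
    202: {"owner": "bob", "data": "bob's invoice"},
}

def get_record(session_user: str, record_id: int):
    """Simulated endpoint handler."""
    record = RECORDS.get(record_id)
    if record is None:
        return None
    # BUG: no check that session_user == record["owner"].
    # Nothing here is syntactically wrong, so a pattern-matching
    # scanner has little to latch onto; the flaw is in the logic.
    return record["data"]

def idor_probe(session_user: str, foreign_id: int) -> bool:
    # Outside-in check: can this user read a record they don't own?
    return get_record(session_user, foreign_id) is not None

print(idor_probe("alice", 202))  # True: alice can read bob's record
```

The missing authorization check is a business-logic omission, not a dangerous construct, which is exactly why it tends to manifest only when you test the running product from the outside.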
Nobody likes to admit deprioritizing any facet of security. But maybe the dream of security-flawless code is not only unachievable but also leaves too many unrelated exploit avenues open. Maybe AppSec is OK at 15 to 25 percent of a program. Or maybe it truly is everything?