I can't even express how good it is to be back in the field, solving problems and working with enterprises again. It's interesting how little the landscape changes in software security, and how many of the same challenges that existed during my GE days (2003-2008) are still around today.
The landscape has certainly matured, and the techniques organizations use have become more structured and formalized. Yet as many less mature organizations [think CMMI level 1] struggle to set up repeatable processes that will eventually get them off the hamster wheel, they inadvertently drift toward rigidity and slowness, which puts them at odds with the ever increasingly rapid pace of modern, agile development. This isn't to say that every organization is adopting DevOps practices, but one of security's three big faults is that it tends to slow down development and release processes.
Bearing that in mind, there is a progression that organizations tend to go through: they spend time building and executing software security work, then provide tools to help strengthen their programs. As a sales engineer for a good many years, I've started to pick out patterns that organizations follow as they mature and migrate up the CMMI ladder. Most organizations start at CMMI Level 1: Initial, where you're likely to find ad-hoc testing and a security organization acting largely as a 'security speed bump' (I'll explain later). Where they go from there depends heavily on who's driving the security ship, how well they know the business, and what their background and interpersonal skills are, but I can tell you for certain that the progression up the ladder is not linear in terms of effort, capital, or time.
Interestingly, there still remain primarily two models of operation in software security: the audit model and the contributor model. The basic premises of these models haven't changed in a long time, but as technology and thinking march forward there have been some advancements. Both models are supported by vendors including HP Enterprise Security Products (as you may have guessed), with varying nuances and degrees of focus.
If you're not familiar with the models I'm talking about, here's my viewpoint...
The Auditor Model
Based on the time I've spent in enterprises, the model software security organizations most often adopt first is that of the auditor. If you've been in security for a while, you're probably familiar with this model, although maybe you call it something different. In this model, security is typically the 'problem' of the security organization. It should be no surprise that this is where most organizations start, and many get stuck there. You can tell you're in the auditor model because security is likely a 'tollgate' or a 'checkbox' on the way to production release: once the code is ready and tested (by the QA organization, or internally by a DevOps team), someone from the security organization gets a chance to take a look at it.
Whether you're using dynamic testing methodologies, static analysis, code review, or whatever your method of choice, security is a 'step.' It falls into the bucket of audit verification. You'll hear someone say something like "we're ready for the security verification step. We go live tomorrow." I wish I had a quarter for every time I've heard that.
The auditor model tends to run into issues of time, scale, and fixability (the ability to remediate the issues that are discovered), among others. While the auditor model proves quite problematic, it's still one of the primary ways I see organizations tackling software security even today. An organization stuck in the auditor model for more than 12 months tells you a lot about the importance of security in its software development lifecycle... clearly security isn't succeeding at becoming critical-path.
The Contributor Model
Some organizations that adopt security more holistically, and those finding religion in DevOps, are adopting the contributor model. Instead of security 'verification' being a step on the way to achieving production certification or approval, security runs in lock-step with the other development steps. In the contributor model, developers assess their own code, either with tools inside their IDE or at build time, through scripting that calls a static or dynamic analysis tool as part of a nightly or weekly build process.
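The build-time half of that idea can be sketched in a few lines. This is a minimal, hypothetical example: `sast-scan` stands in for whatever analyzer your shop actually uses, and the JSON report format is assumed, not real. The point is the shape of the hook - run the tool, read its findings, and block the build on criticals - not any particular product.

```python
import json
import subprocess


def gate_findings(findings, blocking_severity="critical"):
    """Return the subset of scan findings that should block the build."""
    return [v for v in findings if v["severity"] == blocking_severity]


def run_security_scan(project_dir, report_path="scan-report.json"):
    """Run the analyzer as a nightly/weekly build step and gate on the results.

    'sast-scan' is a placeholder CLI; swap in your analyzer's real invocation.
    """
    subprocess.run(
        ["sast-scan", "--project", project_dir, "--out", report_path],
        check=True,  # a scanner crash should fail the build too
    )
    with open(report_path) as f:
        findings = json.load(f)
    blockers = gate_findings(findings)
    if blockers:
        raise SystemExit(f"Build blocked: {len(blockers)} critical finding(s)")
    return findings
```

Wired into a nightly job, this makes security a build outcome the developer sees the next morning rather than a surprise at release time.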
In the contributor model the developer is directly engaged, and while the code does get audited at some point, it's not necessarily the final step on the long path to production approval. In fact, many organizations I've worked with don't perform regular audit activity at all because they trust their developers as contributors, and instead perform spot-check audits. Here's how that process works.
As the developer, or 'contributor,' writes code, it's analyzed at regular intervals. Whether through source code peer review, in-IDE tooling, or build-time analysis, security happens throughout the development process. The results are fed into a management system that tracks vulnerabilities, developer self-assessments, trending, and high-level reporting. Applications or developers with high levels of critical issues are flagged for further audit, while those who do a reasonable job mitigating their own issues are spot-checked at some pre-determined interval. This way, security doesn't stand in the way of the release cycle, but security folks still get peace of mind that the code being released meets at least a base-minimum quality bar. Is this perfect? No. Does it raise the bar significantly while allowing for automation and process velocity? Absolutely.
There are countless hybrids of these two models; I've even seen a couple of processes that involve outsourced testing and remediation by a third party. Any combination could be perfectly valid in its own right for a unique situation, but these are the two primary models I have observed and continue to run into. Moving off of the auditor model is difficult, and the key is having good measurements and lots of supporting data. I'm referring to having 3-5 KPIs that answer the question "are we doing a good job?" - a question rarely answered well by "yes, we have fewer critical vulns than we did last year" or similar. In the next post I'll talk more about measuring the right things.
Cross Posted From Following the Wh1t3Rabbit