Business Cases for Software Security Initiatives

Sunday, July 05, 2009

I have covered the topic of business cases for software security initiatives in past articles (ISSA Journal 2006, (IN)SECURE Magazine 2008) as well as in presentations at security conferences (Black Hat in 2006 and OWASP in 2008). When asked how I make the business case for software security, this is how I articulate my answer:
1) Approach the business case from an information risk management perspective
2) Quantify software security failure costs with software engineering data
3) Justify software security expenses with a cost vs. benefit analysis
4) Adopt a long-term investment strategy

Building software security into the organization's software engineering and information security practices is best accomplished by following software security maturity models (e.g. BSIMM or SAMM) and by adopting frameworks that build security into the SDLC. Software security frameworks integrate software security activities in the SDLC with other organizational information security processes such as information risk management, defect management, patch management, and training and awareness.

A prerequisite for the software security initiative business case is the availability of organizational risk data, including risk management and vulnerability metrics, as well as software engineering data such as defect management data. From the software engineering perspective, for example, the assumption is that your organization already measures the cost of fixing software security failures due to known vulnerabilities as well as the cost of fixing those resulting from incidents/exploits. Total software security failure costs include both the business impact of exploited failures (e.g. the cost of a vulnerability exploit that caused harm to the organization, such as a denial of service) and the cost of fixing a known security defect, whether it is a security bug, a design flaw, or a misconfiguration.
The problem with such security metrics is that they imply the organization's software and information security practices are mature enough that data from risk management, fraud management, vulnerability assessment, software engineering/project management, and quality assurance are already measured and correlated.
A metric that correlates the software engineering and information risk management disciplines, for example, implies not only that development teams have already started adopting security in the SDLC (e.g. by using processes such as the MS SDL, OWASP CLASP, or the Cigital Touchpoints) but also that they have started working with security teams to measure software security risks and manage them through the SDLC.
Basically, the business case for the software security initiative needs the very data that the initiative is supposed to provide. In essence this is a chicken-and-egg problem: you can only manage what you measure, and you need metrics to make the business case.
From the information security perspective, the business case for software security needs to start from the organization's information risk management data and business impact analysis, and flag application vulnerabilities as critical when they correlate to business impacts.
For this reason (i.e. the lack of organizational software security data) the business case for software security is a hard one to make. So what are the alternatives in the absence of such risk data? The answer is that you need to make assumptions about software engineering costs, the cost of fixing vulnerabilities, and the financial losses that vulnerability exploits might cause. If the business case needs to be made by engineering and software development teams, for example, you can take a software engineering perspective and refer to public studies that analyze the cost of software defects. A NIST study on the economic impact of inadequate software testing, for example, shows that fixing a defect during system testing is 100 times more expensive than fixing it during coding. You can localize this metric to how much it would cost your organization to fix a vulnerability from a quality/defect management perspective. Assuming your organization has adopted a web application penetration testing process, some vulnerability metrics can also be used and correlated with the cost of fixing them.
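The localization step can be sketched as a simple phase-cost model. This is a minimal illustration, not organizational data: the baseline dollar figure and per-phase multipliers are assumptions, with only the coding-vs-system-test ratio (roughly 100x) taken from the NIST-style figures cited above.

```python
# Hypothetical defect-fix cost model by SDLC phase. The baseline cost and
# multipliers are assumed for illustration; only the ~100x ratio between
# coding and system testing reflects the study cited in the text.

COST_TO_FIX_AT_CODING = 100  # assumed baseline cost per defect, in dollars

PHASE_MULTIPLIERS = {
    "coding": 1,
    "unit_test": 5,       # assumed intermediate multiplier
    "system_test": 100,   # ~100x the coding-phase cost (per the NIST figure)
}

def fix_cost(phase: str, defects: int) -> int:
    """Estimated cost of fixing `defects` defects discovered in `phase`."""
    return defects * COST_TO_FIX_AT_CODING * PHASE_MULTIPLIERS[phase]

print(fix_cost("coding", 10))       # 1000
print(fix_cost("system_test", 10))  # 100000
```

Plugging in your organization's own baseline cost and defect counts per phase turns this into a localized estimate.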

If your organization's application security and information security practices are reactive rather than proactive, you can refer to the cost of producing security patches (e.g. hotfixes) to fix vulnerabilities. In the absence of company data you can make assumptions: say the cost of engineering, developing, testing, and deploying a patch for your vulnerable software/web application is $10,000. It is realistic to assume that fixing the defect earlier in the SDLC would have cost 10% of the patching cost, saving your company 90% of the overall patching cost (i.e. $9,000).
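That back-of-the-envelope comparison can be written out directly. The $10,000 patch cost and 10% early-fix fraction are the assumptions stated above, not measured figures.

```python
# Patching-cost comparison using the assumed figures from the text:
# a $10,000 post-release patch vs. an early fix at ~10% of that cost.

def patch_savings(patch_cost: float, early_fix_fraction: float = 0.10):
    """Return (early_fix_cost, savings) of fixing pre-release vs. patching."""
    early_fix_cost = patch_cost * early_fix_fraction
    return early_fix_cost, patch_cost - early_fix_cost

early_cost, saved = patch_savings(10_000)
print(f"Early fix: ${early_cost:,.0f}, savings: ${saved:,.0f}")
# Early fix: $1,000, savings: $9,000
```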

But patching costs alone are not a conservative enough basis for a realistic estimate of total software security failure costs: you also need to include the business impact of exploits, whether of a known vulnerability or an unknown one (e.g. a zero-day exploited without responsible public disclosure), which cause the organization intangible costs. Even in the absence of a vulnerability exploit, it is still important to factor in the potential business impact of exploiting the vulnerability. In the case of intangible costs, for example: what is the intangible cost of a cross-site scripting vulnerability publicly disclosed on your site? How much does the reputation damage of such a public disclosure cost? Any publicly disclosed vulnerability can cause intangible losses to the company's reputation, brand, and franchise, and affect customer confidence in the company's products and services.
Would intangible costs by themselves justify a responsible disclosure process to engage security researchers who have found vulnerabilities in your site? Yes. Would they justify fixing all known vulnerabilities found by a penetration test before going into production? Yes.

But to really factor software failure costs as the business impact of exploiting a vulnerability, it is important to correlate attacks with vulnerabilities and the business impact they cause. Recent data from the Web Hacking Incident Database, which correlates public information on security incidents with web application attack vectors, has SQL injection as the #1 vector (19% of all attacks), including both manual targeted attacks and mass SQL injection bots. From the perspective of attack vs. risk prioritization, SQL injection vulnerabilities are the ones most likely to be exploited to cause harm to your organization and the ones that would produce the highest failure costs (e.g. use for breaking authentication, uploading malware, denial of service, unauthorized access to sensitive data). When mitigated, SQL injection vulnerabilities therefore provide the greatest benefit in terms of reduced business impact. Since the root cause of SQL injection vulnerabilities is coding, such as using concatenated SQL statements instead of stored procedures or prepared statements, fixing SQL injection vulnerabilities in the code alone would make the case for adopting secure code reviews.
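The coding root cause named above can be shown in a few lines. This is a minimal sketch using Python's sqlite3 for illustration; the table, column, and input values are hypothetical.

```python
# Demonstrates the SQL injection root cause (string concatenation) and the
# fix (parameterized/prepared statements). Uses an in-memory SQLite DB;
# the schema and payload are hypothetical illustrations.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "nobody' OR '1'='1"  # classic injection payload

# VULNERABLE: attacker-controlled input concatenated into the SQL text
vulnerable = conn.execute(
    "SELECT role FROM users WHERE name = '" + user_input + "'"
).fetchall()

# SAFE: the placeholder binds the input as data, never as SQL
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)  # [('admin',)] -- injection matched every row
print(safe)        # [] -- payload treated as a literal string, no match
```

The same placeholder pattern (or stored procedures, where supported) applies to any database driver; what matters is that user input is never spliced into the query text.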

Most organizations' directors of technology and security try to sell technologies and new initiatives to upper management with the sales pitch of "money for the bang" (MFTB). The MFTB business case answers the basic question: if I spend this much on a security technology or process, what is the benefit for security? In technical terms this means doing a cost vs. benefit analysis (CBA). CBA can be used in security to correlate the total cost of security with increased information or software security assurance. Dan Geer covers this analysis well as it relates to data security in his book "Economics and Strategies of Data Security". By analogy, in the case of software security, MFTB spending decisions need to take into account all failure costs: the total cost of the failure's business impact as well as the total cost of finding, fixing, testing, and deploying the fix for the security defect. The cost of software security failures can be compared against the "anticipation costs", that is, the costs incurred by proactively spending on software security initiatives. The general rule is that failure costs decrease exponentially as anticipation costs rise. From a risk management perspective, this means overall software security costs decrease to a minimum and then rise again once you spend more on anticipating the failure than the failure itself might cost. You will reach an optimum beyond which more spending on anticipation costs is not worth it. This optimal spending on anticipation costs is about 37% of the failure costs, according to Gordon and Loeb's research (The Economics of Information Security Investment). By these figures it is fair to assume that optimal spending on defensive coding is 37% of what your software failure costs are.
Assume, for example, that software security failures cost $10 million: it would then be optimal to spend as much as $3.7 million acquiring software security tools and technology, developing new software security processes, and running new software security training activities.
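The 37% rule of thumb above comes from the Gordon-Loeb result that optimal security investment should not exceed 1/e (about 36.8%) of the expected loss, which a short calculation makes concrete:

```python
# Gordon-Loeb-style upper bound on worthwhile security spending:
# at most 1/e (~37%) of expected failure costs.
import math

def optimal_security_spend(expected_failure_cost: float) -> float:
    """Upper bound on anticipation spending per the Gordon-Loeb 1/e rule."""
    return expected_failure_cost / math.e  # 1/e ~= 0.368

print(round(optimal_security_spend(10_000_000)))  # 3678794 (~$3.7 million)
```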

If your organization has fraud data that can be correlated to the e-commerce channel, this data can also be used to make the business case: spending as much as 37% of the fraud costs on application and software security initiatives can be justified.

In the case of fraud data related to identity theft occurring via the web channel, you can factor in the overall fraud related to data loss potentially impacting your organization, considering that 14% of all publicly reported data loss incidents occur via the web channel. Using 2003 FTC data, the potential loss per identity theft incident is $655. Assuming you serve a population of 4 million customers, the potential loss would be $2.6 billion; with a probability of identity theft occurrence of 4.6% (also FTC data) the projected loss would be about $120 million, of which 14%, or roughly $17 million, would be the cost of data losses via the web channel. With these assumptions on data loss impact, an information, application, and software security program aimed at protecting customer data accessed via the web channel costing as much as $17 million would be justified for a company with a customer base of 4 million online customers.
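The chain of multiplications above is easy to get wrong by hand, so here it is worked out. All inputs are the assumptions stated in the text (2003 FTC figures and the 14% web-channel share), not fresh data.

```python
# Web-channel identity-theft exposure estimate, using the assumptions
# from the text (2003 FTC figures; 14% of data-loss incidents via web).

customers = 4_000_000
loss_per_incident = 655       # dollars per identity-theft incident (FTC, 2003)
prob_identity_theft = 0.046   # probability of occurrence (FTC)
web_channel_share = 0.14      # share of reported data-loss incidents via web

potential_loss = customers * loss_per_incident          # $2.62 billion
projected_loss = potential_loss * prob_identity_theft   # ~$120.5 million
web_channel_loss = projected_loss * web_channel_share   # ~$16.9 million

print(f"Web-channel exposure: ${web_channel_loss / 1e6:.1f} million")
# Web-channel exposure: $16.9 million
```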

A quantitative risk assessment can be used to determine the extent to which a software security initiative can reduce risk from potential losses. The correlation has to take into account the probability of the event and the loss the event can cause. This is difficult to quantify for software security issues in general, since it assumes a cause-effect relationship between a vulnerability exploit and a financial impact. Nevertheless it can be used for rough estimates. Assume, for example, a web application that delivers banking services, and that an event such as a denial of service impacts online transactions for 3 million customers averaging $20 per transaction: the Single Loss Expectancy (SLE) per DoS event is $60 million. Assume the probability that a new SQL injection vulnerability would cause a denial of service in a given year is 30% (the Annualized Rate of Occurrence, ARO); the Annualized Loss Expectancy (ALE) is then $18 million. If the cost of the new security countermeasure that would stop the incident is less than $18 million, the organization should implement it. If the countermeasure in this case is secure code reviews, its total cost needs to include the cost of tools and technologies/APIs (e.g. source code analysis and penetration testing tools), of the security engineering process (e.g. documentation and metrics), and of software security training and awareness for developers. The tools and technologies should be counted at Total Cost of Ownership, that is, the cost of both acquiring and maintaining the technology.
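The standard ALE formula used above (ALE = SLE × ARO) can be checked with the scenario's illustrative figures:

```python
# Annualized Loss Expectancy for the DoS scenario in the text.
# All figures are the illustrative assumptions stated above.

def ale(single_loss_expectancy: float, annualized_rate: float) -> float:
    """ALE = SLE * ARO."""
    return single_loss_expectancy * annualized_rate

sle = 3_000_000 * 20          # 3M affected customers x $20/transaction = $60M
annual_loss = ale(sle, 0.30)  # ARO of 30%

print(f"ALE: ${annual_loss / 1e6:.0f} million")  # ALE: $18 million
```

Any countermeasure whose annualized total cost of ownership is below the ALE is, by this model, worth implementing.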

Besides cost vs. benefit analysis and quantitative risk assessment, the return on security investment (ROSI) can be used to make the software security business case around the effectiveness of a software security initiative. ROSI answers the question: if I spend $100K on a software security initiative, do I save more money by fixing defects with penetration testing, secure coding, or threat modeling? Again, this is where metrics are essential: making the case with ROSI assumes you already collect SDLC data showing how much it costs to perform software security activities in each phase, the number of issues identified in each phase, and how many are fixed in each phase, so that you can make the business case for one activity vs. another. Otherwise you can reference a public ROSI study such as Kevin Soo Hoo's: for every $100K spent on software security, $21K is saved by doing application threat modeling during design, $15K by doing source code analysis, and $12K when defects are found with penetration tests. Overall, the earlier you invest in security, the greater the return.
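The per-activity comparison reduces to savings divided by spend; here it is sketched using the per-$100K figures cited above from the study.

```python
# ROSI comparison per activity, using the per-$100K savings figures
# cited in the text.

savings_per_100k = {
    "threat modeling (design)": 21_000,
    "source code analysis": 15_000,
    "penetration testing": 12_000,
}

def rosi(savings: float, investment: float = 100_000) -> float:
    """Return on security investment as a fraction of spend."""
    return savings / investment

for activity, saved in savings_per_100k.items():
    print(f"{activity}: ROSI {rosi(saved):.0%}")
```

Once your own SDLC metrics are in place, substituting measured per-phase costs and fix counts for these cited figures turns the same comparison into an organization-specific business case.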
