AppSec Mistakes Companies Make and How to Fix Them

Tuesday, April 24, 2012

Fergal Glynn


Article by Niru Raghavan

Veracode Marketing recently polled a list of InfoSec luminaries, asking them “What is the biggest mistake companies make with Application Security and how can they fix it?”

We’re pleased to present the responses from a wide array of security experts including Bill Brenner of CSO Magazine, Andrew Hay of the 451 Group, Jack Daniel of Tenable Network Security and Veracode’s own Chris Wysopal.

While all our experts have their unique perspectives, some common themes arose including the basic idea of taking application security more seriously and committing to a programmatic approach vs. ad hoc manual testing.

We want to thank all our respondents for participating and we welcome your thoughts too – use our comment area and tell us, “What do you think is the biggest appsec mistake companies are making today?”

Q:  What is the biggest mistake companies make with Application Security and how can they fix it?

Alan Shimel, Co-Founder and Managing Partner, The CISO Group, @ashimmy

While some may point to technical coding and development mistakes in App Security, I am going to keep my answer simple. The single biggest mistake companies make about Application Security is complacency around thinking the world is static.

Apps don’t live in a static world any more than we do. Even if the app code has not changed, it does not mean that the server, client or network has remained static. Patches, upgrades and new equipment are constant in the technology field. Just because you checked and re-checked your code and app when you first installed it, and you haven’t upgraded or changed it since, doesn’t mean it is secure today.

Did you account for things like a change in the firewall rule set? What about a patch to the OS of the server you are using? A hypervisor update, maybe? How about which clients are accessing the site, or what browser and browser version they are using? There are so many variables that are constantly changing.

Too many people think that App Security testing is a one-time thing, or something done only when you update your app code, but there is more to App Security than just the app.

I remember back when performing vulnerability scans on a regular basis was considered too much. Once every year or two at best was the norm. Today, no one would consider an annual vulnerability test good security. The same goes for App Security. Even if you have a WAF or some other technology sitting in front of your app, you need to be testing and probing for vulnerabilities, exploitability and reachability on a regular basis.

An app security program that makes testing a regular event is the best thing companies can do to improve their app security. Don’t be complacent and don’t think if it was secure then, it is secure now!

As co-founder and Managing Partner at The CISO Group, Alan Shimel is responsible for driving the vision and mission of the company. The CISO Group offers security consulting and PCI compliance management for the payment card industry. Prior to The CISO Group, Alan was the Chief Strategy Officer at StillSecure. Shimel was the public persona of StillSecure as it grew from a start-up to helping defend some of the largest and most sensitive networks in the world. Shimel is an often-cited personality in the technology community and a sought-after speaker at industry and government conferences and events. His commentary about the state of security, open source and life is followed closely by many industry insiders via his blog and podcast, “Ashimmy, After All These Years.” Alan is now also a regular contributor to The CISO Group’s security.exe blog and podcast, as well as the Secure Cloud Review blog. Alan has helped build several successful technology companies by combining a strong business background with a deep knowledge of technology. His legal background, long experience in the field, and New York street smarts combine to form a unique personality.

Andrew Hay, Senior Security Analyst, 451 Research, @andrewsmhay

With regard to purchased COTS applications, I think that organizations place too much trust in the delivered product. Most organizations accept the product at face value and believe that the vendor has performed enough due diligence to ensure that the product is secure – I mean, who would release a product riddled with security problems simply to make money?

The answer: pretty much every vendor out there. Software vendors are in the business of making money, and to do that they must take certain risks along the way to ensure that they can get their product to market in a timely manner. We’d all like to think that software vendors want to protect their customer base but, in reality, most simply want to protect their ability to sell future products into that customer base.

Unfortunately, most vendors treat security issues (bugs, vulnerabilities, etc.) as problems to be fixed using out-of-band update mechanisms (such as patches) or in the next major release – new features and functionality permitting, of course. Every organization should test new (and recently updated) applications deployed within their environment for vulnerabilities and potential breach vectors, with the skepticism that the vendor may not have performed enough due diligence to secure its own software.

Andrew Hay is a Senior Security Analyst for 451 Research’s Enterprise Security Practice (ESP). Andrew provides technology vendors, private equity firms, venture capitalists and end users with strategic advisory services – including competitive research, new product and go-to-market positioning, investment due diligence and tactical partnership, and M&A strategy. Through his work at 451 Research, Andrew has been instrumental in securing tens of millions of dollars in equity investment for numerous security product vendors. He is a veteran strategist with more than a decade of experience related to endpoint, network and security management technologies.

Bill Brenner, Senior Editor, CSO Magazine, @BillBrenner70

Companies rush their apps to market with no consideration for the security implications. An app is just another product to be sold, and when there’s money to be made, developers aren’t given the time to make sure the code is ironclad against potential exploits. So the apps go to market with security vulnerabilities the bad guys eventually discover.

The solution is to slow down the production process and allow developers to build security into the app from day one. Do I think there are any moves in the right direction on this front? A little bit. But not nearly enough. The sad part is that most customers would rather wait and have a more secure app than have quick access to one that leaves them vulnerable to thievery.

Bill Brenner reports, writes and edits content focusing on the latest information threats and defenses – from social networking to mobile phone security to glamorous subjects like SAS70 replacement SSAE16. He writes CSO’s daily Salted Hash news analysis blog. Bill has more than 17 years of experience as a reporter and editor, and has focused on information security for the past seven and a half years. Before joining CSO, he was a senior news writer for TechTarget’s Security Media Group. Going further back, he spent four years as an assignment editor at The Eagle-Tribune daily newspaper of Massachusetts’ Merrimack Valley region.

David LeBlanc, Senior Security Technologist, Microsoft

There is actually a set of related problems, depending on the age of the code. If a company has an older code base that has been around for a long time, then it is likely that the code wasn’t built using modern programming practices, or with any concern for security. If you attempt to retrofit security into older code, it is going to be difficult – older code is hard to refactor, and advice along the lines of “just use Standard Template Library containers, and now errors will be safe crashes” is just not practical unless you have time to thoroughly review, refactor and retest the code.

A second problem is thinking that you can consider security later, which happens more often with new code. People make similar mistakes with performance – if your performance or security problems are due to deep design flaws, then fixing them is going to be difficult. I find that I am usually given time to write something once, maybe with a rewrite if I’m fortunate – thinking I’m going to come back and tidy it up later is just not realistic. You may not do all the security work you’d like – shipping is a feature, especially in a start-up – but if you’re at least thinking about security, you’re more likely to avoid major problems.

A third problem is a lack of education – people usually don’t come out of school knowing anything about security, nor do they typically pick it up in many workplaces. If you want the best results, every team needs to include people who spend time understanding how security applies to what they do. Having a core group who live and breathe security is great if you can afford it, and you need experts to help with tough problems, but the best results come when the people closest to the code are thinking about security.

Perhaps the biggest problem is thinking that there is some magic solution. All the systems and processes you can put into place to help ensure security and all the tools you might run are great – they all help. But they’re not sufficient. What really works in the long run is to have people who care about creating a quality product, understand that security is an important part of quality, and are willing to do hard work to achieve that quality. That’s why I named ‘Writing Secure Code’ after ‘Writing Solid Code’ – solid, robust code is most often secure code. Solid, secure code takes hard work – which is the real solution.

David LeBlanc is a senior security technologist in Microsoft’s Information Technology Group. His primary role is defending the Microsoft network from attack. He has worked in the security field throughout his professional life, including working at Internet Security Systems where he was the primary engineer on ISS’s award-winning security products. David serves on a number of external security-related advisory boards.

Eoin Keary, Global Vice Chair, OWASP and Director/CTO, BCC Risk Advisory @EoinKeary

The lack of developer and stakeholder awareness, and relying on a single, time-limited penetration test to ensure an application is secure, is a common and big mistake. Developers should not need to be security experts, but if they were aware of the tools available to them I believe they would write more secure code. Whether it is simply calling an encoding API or performing negative unit testing, the overhead is not large if developers and architects are made aware early in the SDLC.

Software design and architecture security would vastly improve if the threats and potential attacks “out there” – be they client-side security issues or traditional server-side issues – were known. From a secure-code perspective, one fix, properly implemented, may mitigate many classes of vulnerabilities. Strategies such as using a common enterprise-wide component for authentication or data validation pay large dividends. All a developer is required to do is use the component/API when invoking certain functions. In some circumstances they do not even need to know why; all they need to know is to do it.
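The “common component” idea can be made concrete with a small sketch. The module below is purely illustrative – the function names and the validation rule are assumptions, not any particular enterprise library – but it shows both halves of the point: developers call one vetted encode/validate API instead of hand-rolling security logic at each call site, and negative unit tests assert that hostile input is neutralized, not merely that friendly input works.

```python
import html
import re

# Hypothetical enterprise-wide helper module: teams call these functions
# instead of writing their own encoding or validation at each call site.

_USERNAME_RE = re.compile(r"^[A-Za-z0-9_.-]{1,32}$")

def encode_for_html(value: str) -> str:
    """Encode untrusted text before it is written into an HTML page."""
    return html.escape(value, quote=True)

def is_valid_username(value: str) -> bool:
    """Whitelist validation: allow only a known-safe character set."""
    return bool(_USERNAME_RE.match(value))

# Negative unit tests: check that hostile input is neutralized,
# not just that friendly input passes.
def test_script_tag_is_encoded():
    out = encode_for_html('<script>alert(1)</script>')
    assert '<script>' not in out

def test_sqlish_username_is_rejected():
    assert not is_valid_username("admin'; DROP TABLE users;--")
```

With a component like this in place, the developer’s job really is reduced to “call the helper” – the why lives in one reviewed, tested module rather than in every developer’s head.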

Reliance on a single pen test, with a finite test window, is also a big mistake. It puts the security of the application immediately on the “back foot”. It must be remembered that an attacker has as long as they need to breach a system, while a defense relying on a single pen test is a losing game. I’d suggest a continuous monitoring approach, like an “application radar” which constantly scans an application and flags changes as they occur. This can also help with metrics and correlation of vulnerabilities with code changes. Continuous monitoring coupled with a robust SDLC is a more sustainable approach than a single pen test.

Eoin Keary is Global Vice Chair of OWASP; he is also a director and CTO of BCC Risk Advisory, based in Dublin, Ireland.

Guillaume Lovet, Head of Fortiguard Research EMEA, Fortinet, @GuillaumeLovet

Inconsistent patching policy, if they have one at all. I do understand that patching production machines, especially those providing a service to customers, cannot always be done right away, and that the stability of the patched servers must be ensured first in a testing environment. But in that case, you must be 100% sure that the IPS system shielding your servers does block any attack that could exploit the vulnerability that will be fixed by the patch.

Vulnerable applications are perhaps not the only entry point to a company’s information systems, but given the availability of free and public tools implementing exploits for known vulnerabilities, it is likely the easiest, and the one an attacker will probably try first. Thus, by blocking it with an IPS system, not only do you prevent the attack, but you also become informed that you’re subject to penetration attempts, and that attackers may from now on attempt to leverage more sophisticated attack vectors.

Guillaume Lovet is the head of Fortinet’s FortiGuard security research team in EMEA and a regular speaker at international security conferences.

Jack Daniel, Product Manager, Tenable Network Security, @jack_daniel

I am not an “application security guy”, but I have seen (and possibly even made) some fundamental appsec mistakes as a network security practitioner – let’s look at two of them.

The first mistake is more social than technical. When there are questions or concerns about application security issues, many of us in the network or systems admin world tend to avoid having a meaningful conversation about them, dismissing the ability of programmers, brogrammers, or DBAs to understand the fundamentals of security, at least as we see “the fundamentals of security”.

The other mistake is that when network and systems security practitioners discover appsec problems, we sometimes shrug and move on, thinking there is nothing we can do about them. We need to understand the problem – which probably requires a conversation (see above) with those who understand the issue. Then we need to explore and implement mitigations where possible, or at least crank up monitoring on systems potentially exposed by the problem(s).

We all need to get better at discussing the issues we face if we are going to make progress securing our environments; none of us can do it alone.

Jack Daniel, Technical Product Manager for Tenable Network Security, has over 20 years experience in network and system administration and security, and has worked in a variety of practitioner and management positions. Jack’s Uncommon Sense Security was recently named the Most Entertaining Security Blog at the 2012 Security Blogger Awards. An early member of the information security community on Twitter, @jack_daniel is an active and vocal Twitter user. A technology community activist, he supports several information security and technology organizations; Jack co-founded and organizes Security B-Sides events, gatherings of security enthusiasts born from the desire for people to share and learn in an open environment. Jack is a frequent speaker at technology and security events including Shmoocon, DEFCON, SecTor, RSA, and Security BSides. Jack is a CISSP, holds CCSK, and is a Microsoft MVP for Enterprise Security.

Marco Ramilli, Computer Science Researcher, Marco Ramilli’s Blog

I cannot find a general answer to this specific question. Companies are different, and each one views security from a different perspective. So my first step in answering would be to categorize companies and investigate the different points of view. From a security perspective, I see three company types.

1. Companies who provide security. These companies are the most involved in security: antivirus companies, security consulting companies and security-related software companies.

2. Companies who design software. These companies strongly need security, but security is not their main business. For example, Facebook provides an innovative communication channel, and its main business is very far from security-related content. Skype, another good communication product, strongly needs built-in security, but security itself is not Skype’s business. WordPress, again, strongly needs security, but its main business is to provide a quick, easy and friendly way to build your corner of the web.

3. Finally, non-software companies. These companies have a very different business model. They do not provide software or security; they just use software, and often they do not even know what security is. For example, private clinics, hardware providers, shopping centers, car manufacturers and so on.

For each of the three categories I believe we have different problems.

Companies in the first category are, obviously, the most conscious of application security issues; they provide solutions and consultancy specifically to solve them. My personal point of view here regards education. Security companies often tend to sell their products and consultancy without caring much about education. A customer may have the best antivirus engine, but if he does not know how, or why, to use it, the best antivirus engine does not make a secure system per se. I think the main mistake companies in the first category make is the lack of customer education. These companies should educate their customers about security, and about the risks around the corner if security policies are not respected.

Companies in the second category often make the mistake of considering “security” only at the end of the software development process. My experience with this kind of company is fairly wide, and the large majority of the companies I have analyzed introduce security only at the end of the development process, sometimes even after functional tests. Security should be included in the analysis phase, at the very beginning, as a straight and clear software requirement. Again, security should be a requirement, not a property. Respecting security requirements often means making huge and deep changes to the development process. Companies that add these requirements at the end of the process often need to radically change what they have done so far; this is a huge cost, so huge that small companies often decide not to implement security at all. We have examples of that in everyday life.

Finally, companies in the third category are often unaware of application security. Doctors, nurses or hardware engineers don’t care about software security and tend to delegate security to the applications they use. But even if a doctor is using the most innovative and most secure clinical-records management system, if he uses a weak password or, even worse, shares his password with colleagues, the security of the whole system breaks. These companies should understand that security is a whole process, beginning with security consultants, continuing through developers and ending with them: the end users. If even one link in this chain is weak, the overall system security will fail.

Security – application security, computer security, information security, call it what you prefer – is one of the hardest requirements to respect. It is much easier to compromise a system than to protect it. Indeed, the attacker needs only one weak link to compromise the security of a whole system, while the system must assure “security” at all its stages, even where non-security-related users (and companies) are involved.

Marco Ramilli is a computer scientist researcher with an intensive hacking background. He got his PhD in computer security from University of Bologna (Italy). Marco has been working with US Government (National Institute of Standards and Technology, Security Division) and currently works with the University of California, Davis (Security Labs) on new security paradigms, penetration testing methodologies and electronic voting systems’ security.

Mikko Hypponen, Chief Research Officer, F-Secure, @Mikko

The biggest mistake companies make is focusing on Application Security only on traditional computer systems and ignoring it on all other platforms. They should be looking at the big picture and understand that Application Security should apply to any platform that can run general-purpose applications.

Mikko Hypponen is the world champion in Xevious.

Neil Roiter, Director of Research, Corero Network Security, @nroiter

Software Security is not a Series of Tasks — It’s Commitment to a Program

We’ve had more than a decade of coming to grips with the self-evident truth that insecure software is the prime attack vector for cyber criminals, industrial espionage, hacktivists and agents of unfriendly nation states. Yet, far too many enterprises address software security on an ad hoc basis, responding to a particular need at a particular time. The greatest mistake in addressing software security is in not really addressing it.

Companies are prompted to review their security practices at various points. Perhaps they are obligated to meet security requirements laid down by a partner or large customer for a new business initiative. Or, management decides that an important new business application is a good time to start paying attention to security. Maybe a data breach or revealing penetration test prompts a knee-jerk reaction to “fix it.”

In many cases, regulations such as PCI DSS, which mandates code review under Section 6.6, may spur companies to do enough to meet the requirements. This is not a substitute for a comprehensive software security program. We hear it time and again: Security does not flow from compliance, but, for the most part, compliance flows as a consequence of strong security programs.

Unsystematic, reactionary software security efforts often are costly and ineffective. Companies that do not have a well-defined and sustainable software security program often lack sufficient personnel with the requisite security expertise to properly vet applications at any point in the cycle: source code, compiled programs, and installed applications. All too often, there are no well-defined roles and no accountability.

The end-product is software that remains highly insecure and the next software security fire drill will be no better. Those who do not learn from history are certainly doomed to repeat it.

Enterprises have no excuse for failing to recognize insecure software as perhaps the greatest security threat to their business. It’s well established that the application layer is the vector for the overwhelming majority of successful attacks. Witness one headline-grabbing event after another, exploiting both zero-day vulnerabilities and flaws that easily could be fixed.

However, while many enterprises have a more systematic and effective approach to dealing with network-layer threats, they are far slower to apply the same logic to the risk posed by insecure software. Or, they are not prepared to apply the level of effort and resources required to implement a comprehensive sustainable program.

Though it is counterproductive, it is not surprising. Enterprises may easily have hundreds, perhaps thousands of legacy applications that have not been vetted adequately for security. How does one establish priorities for review and testing? Who will perform the testing and who will be held accountable for it and for subsequent remediation? How will the enterprise fund security personnel, either internal or outsourced? And how will the enterprise implement and sustain both a secure software development life cycle going forward and systematic review of existing applications?

It’s all somewhat daunting, but while the natural tendency to shy away from the challenge may be understandable at some level, it’s a risky approach when one considers the potential impact to the business. The latest Ponemon Institute survey states that the average cost of a single data breach is $5.5 million (and that’s down from the previous year). The risk associated with the theft of intellectual property is perhaps even more compelling: Cyber criminals, competitors and foreign entities are after source code, designs, business plans, research secrets, etc. Try to measure the cost in lost business, stock value and brand reputation. Then consider software security as an absolute business imperative, and the failure to implement enterprise-wide governance in this critical area as an unacceptable business risk. At a high level, this means a secure software program that starts with top management support and:

• Defines management and operational roles
• Includes training in application security
• Is built on clearly defined policies – such as tolerance for risk, definitions of vulnerability severity, time frames for remediation, etc. – and accountability for their enforcement
• Establishes reporting and tracking using metrics that are accepted across the enterprise

Neil Roiter is director of research at Corero Network Security and editor of the Security Bistro Blog. He is best known for his decade of work as a technology journalist, focusing on information security, risk and compliance. Before joining Corero, he was features editor and senior technology editor at Information Security magazine, and has written about information security from the days of “Internet hooliganism” and hacking for the sheer perverse joy of it, to today’s world of cyber crime as global business. An expert in network security, he is often quoted in technology publications.

Rosie Sherry, Founder and Community Manager, Software Testing Club, @rosiesherry

(Note: Rosie expanded her answer to include all of testing, not just AppSec)

Assumptions and failing to question

Often I come across situations where companies assume there is only one way of testing. The conversation typically starts with them saying they want such-and-such tested and that they want very specific deliverables – Test Plan Documentation, Test Cases – and of course they want everything tested and all bugs found.

When asked ‘why’ or ‘how’ they want or need all of these specific outputs and outcomes, often they are left speechless.

There can be so much waste in testing. Huge amounts of time are spent planning up front, only to find out that what was being planned for was incorrect. Or time is spent on things that aren’t actually needed – like lengthy documentation that no one ever requests or reads, and written test cases that are irrelevant.

How can it be fixed?

Always question why they are doing something: Is what they are doing adding real value? Is it something that is really needed? Are there better ways to approach the testing?

Never assume: What is really important to achieve within testing? Are standard documented test cases required? Or can coverage be documented in another way? Perhaps through an exploratory testing approach?

Rosie Sherry started the Software Testing Club five years ago as a professional place for software testers to hang out online. What started out as a side project has, over time, turned into a full-blown business. It has grown to produce a publication (The Testing Planet), global community-driven meetups, a conference and some training courses. Prior to the Software Testing Club, Rosie worked as a software tester for various startups and digital agencies.

Simon Knight, Editor – The Testing Planet, Software Testing Club, @sjpknight

(Note: Simon expanded his answer to include all of testing, not just AppSec)

I’m not sure if it’s the biggest, but one mistake many companies make with testing is not investing in the people who perform this vital function. For too long the ISTQB Foundation certificate has been an HR prerequisite for competence in testing, with further and ongoing education for practitioners seemingly deemed redundant.

These are frugal times for many companies, and perhaps where money is tight this attitude can be excused. A great tester on your team will go a lot further toward gaining a competitive advantage in the marketplace than a mediocre one will, though. And there are some other issues to consider too.

To retain a quality tester on staff, he needs to be able to see a future in testing at your company. Without investment in his testing education, your tester won’t see testing as a profession in its own right, viewing it instead as a stepping-stone to some other technical or managerial role. In the worst-case scenario he will simply leave, taking his testing (and in many cases domain) expertise to a company that does value his skills – as I have done on more than one occasion.

Training for testers doesn’t have to be expensive or certified. The marketplace has exploded with a variety of ways for test professionals to develop new skills – technical, analytical and soft:

• Coaching for example can help your tester identify and overcome weaknesses or build on strengths.
• Attending workshops or conferences can help testers network and share ideas or discuss problems.
• If money is tight, consider more radical ideas. Give your tester carte blanche to define their own [free or minimal cost] training and development routine and allocate them time out from their normal activities to implement it.

Go on – nurture your testers. They deserve it!

Simon Knight works as a tester by day and editor of The Testing Planet by night. When he’s not engaged with one of those activities (or spending time with his family, reading, walking, gaming or tinkering with various coding/scripting languages), he can most likely be found hosting the increasingly regular Birmingham Software Tester Meetup. Simon has also been known to speak and write about testing related subjects.

Wim Remes, Manager, Ernst and Young ITRA FSO, @wimremes

Software Security

If there were a silver bullet solution for Software Security issues, we would all be applying it, wouldn’t we? More than a decade old, SQL injection is still biting us in unpleasant locations and it seems that we are twisting and turning in a pool of quicksand rather than catching a glimpse of the shore as we try to cross the ocean. If I could point at three areas where we can do better, it would go like this:

1. Education

Any developer who comes out of a formal education track has a very limited overview of security requirements for enterprise-grade software. While I understand that there is a limited amount of time that can be spent on security-related subjects, I also believe that the only way to make software more secure is to ingrain security in the mindset of the developer. This needs to go much further than bootcamps when they start their first gig or yearly awareness sessions. By then it is too late: security is bolted on to their knowledge rather than an integral part of it. Make SDLC and security concepts mandatory in formal education programs.

2. Threat Modeling = overhead

It makes as much sense to do threat modeling as it does to gather user requirements. The big difference is that we don’t have anyone to listen to for the former task. If we keep dwelling in the fantasy where security is a second-class citizen, we are poised to keep losing. For existing software stacks as well as for new projects, understanding the threats against them is a paramount priority. Only by understanding the threats will we be able to protect the software. Threat modeling is also not a one-time exercise. Just like your user requirements, threats against your applications are subject to change. Review your threat models regularly and apply changes to your application stack as required, handling them at the same level and with the same priority as those user requirements.

3. Know your frameworks

Just as productivity applications are geared toward making our lives easier, development frameworks are ultimately designed to make a developer’s life easier, letting him or her stop worrying about the lower-level details. In itself this is a good thing, but we have to remain aware that the more abstractly we are allowed to look at a problem, the higher the risk that we will overlook the obvious issues we would otherwise see.

Simplicity, while attractive, is oftentimes our biggest enemy in security. Development frameworks are an obvious requirement, but you should choose them carefully, for the right reasons, and be aware of their security features (and the lack thereof) when you choose them. They are an integral part of your development practice and, whether you want it or not, a cornerstone of the security of any application you will develop while using them.

Wim Remes is a Manager in the Ernst and Young ITRA FSO practice in Belgium. Apart from his professional activities he acts as Director to the board of (ISC)2, co-organizes BruCON, podcasts on the Eurotrash Security podcast and has spoken at conferences around the globe. He is passionate about information security and finding solutions for complex problems.

Chris Wysopal, Co-Founder, CTO & Chief Information Security Officer, Veracode, @weldpond

The biggest mistake that organizations make is not getting a handle on their application sprawl. They have applications being built, internally and externally, that are unknown to the people charged with managing organizational risk. How can you manage something you don’t know about? Organizations need to understand their application perimeter: the applications that expose the organization’s data and brand to risk. To do this, every application that belongs to the organization needs to be brought under an application security program, lest unknown risk accumulate unnoticed until a breach occurs.

A simple application security program can keep application sprawl in check. There has to be an awareness among business owners and IT departments that applications should not be deployed on-site, off-site, or by service providers without at least a small amount of application inventorying and risk identification. Application risk identification may lead to further testing or auditing, but in many cases it doesn’t need to. The important part of the process is identifying risk coming from the application layer; without that, the organization is flying blind.

There are many reasons application risk goes unrecognized by the organization. A business owner may outsource development and operations of an application to third parties, never exposing the existence of the application to the IT department. An application may be built by customizing open source and commercial components without the development team being involved. A SaaS provider may be used to process a marketing campaign. In all of these cases there is risk at the application layer that can bypass the organizational controls that work for internally developed or outsourced code.

Putting an application inventory and risk registry in place is a great place to start on the path to a mature application security program. It puts a boundary and a process around application security that encompasses the entire organization. As the program matures, different levels of testing and due diligence can be applied to these applications, commensurate with the identified risk.
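An inventory and risk registry can start out very small. A minimal sketch of one record and a crude triage rule (the field names, hosting labels, and tiering logic are all hypothetical, not a prescribed schema):

```python
from dataclasses import dataclass

# Hypothetical record for one entry in an application inventory /
# risk registry; fields are illustrative, not a standard.
@dataclass
class AppRecord:
    name: str
    owner: str                    # business owner accountable for the app
    hosting: str                  # "on-site", "off-site", or "saas"
    handles_sensitive_data: bool
    internet_facing: bool

    def risk_tier(self) -> str:
        """Crude triage: how much testing/due diligence to apply."""
        if self.handles_sensitive_data and self.internet_facing:
            return "high"    # e.g. full security testing before deploy
        if self.handles_sensitive_data or self.internet_facing:
            return "medium"  # e.g. periodic scanning
        return "low"         # inventory entry only

registry = [
    AppRecord("customer-portal", "sales", "saas", True, True),
    AppRecord("build-dashboard", "engineering", "on-site", False, False),
]
for app in registry:
    print(app.name, app.risk_tier())
```

Even a spreadsheet with these columns beats flying blind; the point is that every application, including the outsourced and SaaS ones, gets a row.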

Chris Wysopal is responsible for the security analysis capabilities of Veracode technology. Mr. Wysopal is recognized as an expert and a well-known speaker in the information security field and was recently named one of InfoWorld’s Top 25 CTOs and one of the 100 most influential people in IT by the editorial staffs of eWeek, CIO Insight and Baseline Magazine. Chris has testified on Capitol Hill on the subjects of government computer security and how vulnerabilities are discovered in software. He has also delivered keynotes at West Point, to the Defense Information Systems Agency (DISA) and before the International Financial Futures and Options Exchange in London. His opinions on Internet security are highly sought after, and most major print and media outlets have featured stories on Mr. Wysopal and his work.

Cross-posted from Veracode
