News
Building Better Applications: Beyond Secure Coding
- By Matthew Schwartz, Enterprise Systems
- March 29, 2006
Beware the software vulnerability. Thanks to numerous security breaches, unending notifications from companies that personal information may have been compromised, regulations (such as HIPAA and Sarbanes-Oxley), and related audits, more organizations are now striving to create vulnerability-free applications, whether they’re for sale to customers or for use internally.
For help, many companies look to the “secure coding” movement, which purports to empower developers to create or rewrite software that is as free of vulnerabilities as possible. It claims that, through better training, developers can build applications that validate all inputs, fail safely, properly encrypt data, require strong passwords, prevent buffer overflows, and so on.
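To make the first item on that list concrete, consider input validation. The fragment below is a minimal Java sketch, not code from any company or product mentioned in this article; the class name and the account-ID pattern are hypothetical. It shows an application whitelisting its input and failing safely when the input doesn’t match.

// Hypothetical sketch: whitelist validation that fails safely.
import java.util.regex.Pattern;

public class AccountLookup {
    // Assumption for this sketch: account IDs are 6 to 12 digits.
    private static final Pattern ACCOUNT_ID = Pattern.compile("^[0-9]{6,12}$");

    public static String validateAccountId(String rawInput) {
        if (rawInput == null || !ACCOUNT_ID.matcher(rawInput).matches()) {
            // Fail safely: reject anything outside the whitelist instead of
            // passing it along to a query, a file path, or a fixed-size buffer.
            throw new IllegalArgumentException("Invalid account ID");
        }
        return rawInput;
    }
}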
The allure is simple: when it comes to software applications, the cost of bad code is well documented. “There is good evidence that suggests that as much as 80 percent of the security issues can be traced to coding problems,” writes security consultant Bar Biszick-Lockwood in a report on proposed revisions to the IEEE P1074 Standard for Developing Software Life Cycle Processes. Ms. Biszick-Lockwood was part of an IEEE group that studied the standard.
Yet how, exactly, does bad code come to exist? “We naturally assume it’s because our developers don’t know how to code for security, so our current focus is on educating them,” she observes. In reality, most developers already know quite a lot about creating secure applications: code development best practices are also security best practices.
After studying the problem of insecure applications, the IEEE team found the cause actually isn’t developers but a lack of project time, money, and executive backing, for which no amount of developer education can compensate. “Time and money is not a secure coding problem, it’s a requirements prioritization problem,” notes Biszick-Lockwood. As such, it’s entirely beyond the control of developers as well as most project teams.
Poor code often reflects business problems rather than developer problems. Take the seemingly unending series of notifications to consumers that their personal information may have been compromised. Many of those breaches don’t trace back to specific software vulnerabilities but to lax business processes or simply nonexistent security safeguards.
“You haven’t found a buffer overflow at the root of the privacy problem,” notes Jack Danahy, founder and chief technology officer of Ounce Labs Inc. in Waltham, Mass. “It’s really been a failure to understand certain basic issues, as much as, or even more so, than it’s been any application-specific problem.”
How to Prioritize Security
If better coding practices alone won’t prevent software vulnerabilities, what will?
First, start at the top. “Educate management on what the threats really are, what they look like, and how they exist,” says Dr. Herbert H. Thompson, an applied mathematician who is the chief security strategist of application security services provider Security Innovation in Wilmington, Mass. In short: “Create a training and awareness program for senior management.”
Ensuring executives understand the need to spend time and money on security might be difficult, however, since they often want to see a return on investment, and effective security may not demonstrate that. “It’s really hard to quantify what hasn’t happened to you,” notes Thompson. “How do you prove that you stopped something from happening when it didn’t happen and you didn’t know it was about to happen? You can’t look at the IPS or ask ‘How many times was I pinged from Romania?’ because those were just random scans.”
One technique for calculating the cost savings from improved security, then, is to identify potential security flaws during the software design phase, along with how project owners plan to address each one. “I then ask myself how much does it cost, on average, to fix a vulnerability or a bug of this type after I’ve already deployed my solution internally,” he says. Many companies already have metrics for the cost to repair, test, and deploy a bug fix. Invariably, problems discovered before code is released require fewer employee-hours to fix than bugs discovered later.
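The arithmetic behind that argument is straightforward. The figures below are purely illustrative, not numbers cited by Thompson or anyone else in this article; the point is simply that the savings equal the number of flaws caught early multiplied by the gap between the pre-release and post-deployment fix costs.

// Illustrative only: hypothetical costs, not figures from this article.
public class FixCostEstimate {
    public static void main(String[] args) {
        int flawsCaughtInDesign = 10;      // flaws identified during design review
        double costToFixPreRelease = 500;  // average cost to fix before release
        double costToFixPostDeploy = 5000; // average cost to repair, test, and redeploy

        double estimatedSavings =
                flawsCaughtInDesign * (costToFixPostDeploy - costToFixPreRelease);
        System.out.printf("Estimated savings: $%,.0f%n", estimatedSavings);
        // 10 x (5,000 - 500) = $45,000
    }
}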
Detailing Secure Projects
Beyond having senior management’s backing to ensure security gets baked in, individual project teams need to adjust their approach and look beyond the specific business requirements they’re trying to meet. “It should be, ‘I’m trying to solve a business problem but I also need to know how that solution is constrained,’” notes Thompson. “So it’s not just positive requirements about what this thing should do, but also the negative requirements of what this thing should not do.” That allows the project team to build assertions that they can test throughout the project lifecycle to ensure the end result is secure.
After specifying such requirements, the project team should also study the myriad ways, from the nebulous to the technically specific, in which an attacker might try to compromise the application. Then it should continually test the software not just with use cases (how it’s meant to be used) but also with abuse cases (how an attacker might misuse it).
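A negative requirement can be written down as an executable check. The JUnit test below is a hypothetical sketch, not from the article: AuthService stands in for whatever authentication component the team is building, and the abuse case is the classic SQL-injection login payload.

// Hypothetical abuse-case test (JUnit 4); AuthService is a stand-in for
// the real component under test.
import static org.junit.Assert.assertFalse;
import org.junit.Test;

public class LoginAbuseCaseTest {

    // Stand-in for the authentication component the team is building.
    interface AuthService {
        boolean authenticate(String username, String password);
    }

    private final AuthService auth = (user, pass) ->
            "admin".equals(user) && "correct-password".equals(pass);

    // Negative requirement: an injection payload must never authenticate,
    // no matter how the component builds its query internally.
    @Test
    public void injectionPayloadMustNotAuthenticate() {
        assertFalse("Injection payload must be rejected",
                auth.authenticate("admin' OR '1'='1", "anything"));
    }
}

Checks like this can run alongside the ordinary use-case tests for the rest of the project lifecycle.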
Some organizations rely on outside companies to help assess their code for security flaws, while others create their own code-review teams internally. Software vendors often dub these code-review teams red teams or tiger teams, while consulting or financial services companies might refer to them as “Security Centers of Excellence,” says Thompson. Regardless, the approach is the same: “a group of folks that are focused on breaking the software and always asking the question, ‘What would the bad guy do?’”
Finally, while developer education is no panacea, having security-savvy developers is important, especially when using any tool to assess code for security vulnerabilities. “To be able to use such a tool effectively, you need to have a developer trained to some degree with software security, because the biggest challenge with such tools are false positives,” notes Thompson.
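Here is what that triage problem can look like in practice. The Java fragment below is hypothetical, not output from any of the scanners named in this article: both methods concatenate strings into a SQL query, so a pattern-matching tool may flag both, yet only one is reachable by attacker-controlled data. Deciding which is which takes a developer with some security training.

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class ReportQueries {
    // Likely a false positive: the concatenated value is a hard-coded
    // constant, so no attacker-controlled data reaches the query.
    private static final String REPORT_TABLE = "monthly_sales";

    public static ResultSet loadReport(Connection conn) throws SQLException {
        Statement stmt = conn.createStatement();
        return stmt.executeQuery("SELECT * FROM " + REPORT_TABLE);
    }

    // A genuine finding: user-supplied input flows straight into the query
    // string, the classic SQL-injection pattern a reviewer must fix.
    public static ResultSet findCustomer(Connection conn, String userInput)
            throws SQLException {
        Statement stmt = conn.createStatement();
        return stmt.executeQuery(
                "SELECT * FROM customers WHERE name = '" + userInput + "'");
    }
}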
Beyond security capabilities built into many development tools, there are also dedicated code-scanning tools. Thompson breaks that market into two categories: software for scanning source code (including C, C++, Java, and C#), from such companies as Clockwork Software Systems, Fortify Software, Ounce Labs, and Secure Software; and Web application scanning tools from such companies as Compuware, Fortify, SPI Dynamics, and Watchfire.
Redefining Secure Coding
Given all the factors necessary to actually create secure applications, should the “secure coding” rubric be discarded? As discussed, creating secure code requires much more than developer education. Even so, “a lot of people think about ‘secure coding’ as being a very programmatic implementation, of looking for flaws at a very technical level,” notes Danahy. “The bigger problems are really, ‘Oops, I forgot to use encryption, or access control, or authentication.’ These are things that are not quality-level, line-by-line, bug problems. These are bigger problems.”
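The distinction matters because a design-level omission can look flawless line by line. The hypothetical fragment below (not drawn from any real product) compiles cleanly and contains no buffer overflows or injection bugs, yet it never asks who is making the request; no amount of line-level scanning flags what was simply left out of the design.

import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a design-level gap: every line "works," but the
// class omits authentication and access control entirely.
public class AccountDirectory {
    private final Map<String, String> accountRecords = new HashMap<>();

    // No check on who is asking: any caller can read any account record.
    // A line-by-line quality review finds no bug here; a requirements-level
    // review asks why access control is missing at all.
    public String getAccountRecord(String accountId) {
        return accountRecords.get(accountId);
    }
}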
Larger problems require larger fixes, and when it comes to security, “it’s something that has to be fundamentally built into the system,” notes Thompson. Until that happens, expect organizations to continue focusing on “secure coding” rather than on the top-to-bottom organizational changes needed to deliver secure applications.