
Write a 1-2 page summary of the following paper. Identify two or three key points made by the authors in your summary. 12-point font, double-spaced, Word document.


Software Security Testing
Building Security In
Editor: Gary McGraw, [email protected]

Done properly, security testing goes deeper than simple black-box probing on the presentation layer (the sort performed by so-called application security tools), and even beyond the functional testing of security apparatus. Testers must use risk-based approaches, grounded in both the system's architectural reality and the attacker's mindset, to gauge software security adequately. By identifying risks in the system and creating tests driven by those risks, a software security tester can properly focus on areas of code in which an attack is likely to succeed. This approach provides a higher level of software security assurance than is possible with classical black-box testing.

What's so different about security?
Software security is about making software behave correctly in the presence of a malicious attack, even though software failures usually happen spontaneously in the real world, that is, without intentional mischief. Not surprisingly, standard software testing literature is concerned only with what happens when software fails, regardless of intent. The difference between software safety and software security is therefore the presence of an intelligent adversary bent on breaking the system.

Security is always relative to the information and services being protected, the skills and resources of adversaries, and the costs of potential assurance remedies; security is an exercise in risk management. Risk analysis, especially at the design level, can help us identify potential security problems and their impact.1 Once identified and ranked, software risks can then help guide software security testing.

A vulnerability is an error that an attacker can exploit.
Many types of vulnerabilities exist, and computer-security researchers have created taxonomies of them.2 Security vulnerabilities in software systems range from local implementation errors (such as use of the gets() function call in C/C++), through interprocedural interface errors (such as a race condition between an access-control check and a file operation), to much higher design-level mistakes (such as error handling and recovery systems that fail in insecure fashions, or object-sharing systems that mistakenly include transitive trust issues). Vulnerabilities typically fall into two categories: bugs at the implementation level and flaws at the design level.3

Attackers generally don't care whether a vulnerability is due to a flaw or a bug, although bugs tend to be easier to exploit. Because attacks are now becoming more sophisticated, the notion of which vulnerabilities actually matter is changing. Although timing attacks, including the well-known race condition, were considered exotic just a few years ago, they're common now. Similarly, two-stage buffer-overflow attacks using trampolines were once the domain of software scientists, but now appear in zero-day exploits.4

Design-level vulnerabilities are the hardest defect category to handle, but they're also the most prevalent and critical. Unfortunately, ascertaining whether a program has design-level vulnerabilities requires great expertise, which makes finding such flaws not only difficult, but also particularly hard to automate. Examples of design-level problems include error handling in object-oriented systems, object sharing and trust issues, unprotected data channels (both internal and external), incorrect or missing access-control mechanisms, lack of auditing/logging or incorrect logging, and ordering and timing errors (especially in multithreaded systems). These sorts of flaws almost always lead to security risk.
Bruce Potter, Booz Allen Hamilton, and Gary McGraw, Cigital
IEEE Security & Privacy, September/October 2004 (published by the IEEE Computer Society)

Security testing has recently moved beyond the realm of network port scanning to include probing software behavior as a critical aspect of system behavior (see the sidebar). Unfortunately, testing software security is a commonly misunderstood task.

Risk management and security testing
Software security practitioners perform many different tasks to manage software security risks, including:
• creating security abuse/misuse cases;
• listing normative security requirements;
• performing architectural risk analysis;
• building risk-based security test plans;
• wielding static analysis tools;
• performing security tests;
• performing penetration testing in the final environment; and
• cleaning up after security breaches.

Three of these (architectural risk analysis, risk-based security test planning, and security testing) are particularly closely linked, because a critical aspect of security testing relies on probing security risks. Last issue's installment1 explained how to approach a software security risk analysis, the end product being a set of security-related risks ranked by business or mission impact. (Figure 1 shows where we are in our series of articles about software security's place in the software development life cycle.)

The pithy aphorism "software security is not security software" provides an important motivator for security testing. Although security features such as cryptography, strong authentication, and access control play critical roles in software security, security itself is an emergent property of an entire system, not just the security mechanisms and features. A buffer overflow is a security problem, regardless of whether it exists in a security feature or in the noncritical GUI.
Thus, security testing must necessarily involve two diverse approaches:
• testing security mechanisms to ensure that their functionality is properly implemented, and
• performing risk-based security testing motivated by understanding and simulating the attacker's approach.

Many developers erroneously believe that security involves only the addition and use of various security features, which leads to the incorrect belief that "adding SSL" is tantamount to securing an application. Software security practitioners bemoan the over-reliance on "magic crypto fairy dust" as a reaction to this problem. Software testers charged with security testing often fall prey to the same thinking.

How to approach security testing
Like any other form of testing, security testing involves determining who should do it and what activities they should undertake.

Who
Because security testing involves two approaches, the question of who should do it has two answers. Standard testing organizations using a traditional approach can perform functional security testing. For example, ensuring that access-control mechanisms work as advertised is a classic functional testing exercise.

On the other hand, traditional QA staff will have more difficulty performing risk-based security testing. The problem is one of expertise. First, security tests (especially those resulting in complete exploit) are difficult to craft because the designer must think like an attacker. Second, security tests don't often cause direct security exploits and thus present an observability problem. A security test could result in an unanticipated outcome that requires the tester to perform further sophisticated analysis. Bottom line: risk-based security testing relies more on expertise and experience than we would like.
How
Books like How to Break Software Security and Exploiting Software help educate testing professionals on how to think like attackers.4,5 Nevertheless, software exploits are surprisingly sophisticated these days, and the level of discourse found in books and articles is only now coming into alignment.

White- and black-box testing and analysis methods both attempt to understand software, but they use different approaches depending on whether the analyst or tester has access to source code. White-box analysis involves analyzing and understanding source code and design. It's typically very effective in finding programming errors (bugs when automatically scanning code and flaws when doing risk analysis); in some cases, this approach amounts to pattern matching and can even be automated with a static analyzer (the subject of a future installment of this department). One drawback to this kind of testing is that it sometimes reports a potential vulnerability where none actually exists (a false positive). Nevertheless, using static analysis methods on source code is a good technique for analyzing certain kinds of software. Similarly, risk analysis is a white-box approach based on a deep understanding of software architecture.

[Figure 1. The software development life cycle. Throughout this series, we'll focus on specific parts of the cycle; here, we're examining risk-based security testing.]

Black-box testing refers to analyzing a running program by probing it with various inputs. This kind of testing requires only a running program and doesn't use source-code analysis of any kind.
In the security paradigm, malicious input can be supplied to the program in an effort to break it: if the program breaks during a particular test, we might have discovered a security problem. Black-box testing is possible even without access to binary code; that is, a program can be tested remotely over a network. If the tester can supply the proper input (and observe the test's effect), then black-box testing is possible. Any testing method can reveal possible software risks and potential exploits.

One problem with almost all kinds of security testing (whether black or white box) is the lack of it: most QA organizations focus on features and spend very little time understanding or probing nonfunctional security risks. Exacerbating the problem, the QA process is often broken in commercial software houses due to time and budget constraints and the belief that QA is not an essential part of software development.

An example: Java Card security testing
Doing effective security testing requires experience and knowledge. Examples and case studies like the one we present here are thus useful tools for understanding the approach.

In an effort to enhance payment cards with new functionality, such as the ability to provide secure cardholder identification or remember personal preferences, many credit-card companies are turning to multi-application smart cards. These cards use resident software applications to process and store thousands of times more information than traditional magnetic-stripe cards.

Security and fraud issues are critical concerns for the financial institutions and merchants spearheading smart-card adoption. By developing and deploying smart-card technology, credit-card companies provide important new tools in the effort to lower fraud and abuse. For instance, smart cards typically use sophisticated crypto systems to authenticate transactions and verify the identities of cardholders and issuing banks.
However, protecting against fraud and maintaining security and privacy are both very complex problems because of the rapidly evolving nature of smart-card technology.

The security community has been involved in security risk analysis and mitigation for Open Platform (now known as Global Platform, or GP) and Java Card since early 1997. Because product security is an essential aspect of credit-card companies' brand protection regimen, these companies spend plenty of time and effort on security testing. One central finding emphasizes the importance of testing particular vendor implementations according to our two testing categories: adherence to functional security design and proper behavior under particular attacks motivated by security risks. The latter category, risk-based security testing (linked directly to risk analysis findings), ensures that cards can perform
Answered Same Day, Sep 16, 2021


Neha answered on Sep 19 2021
134 Votes
Software Security Testing
Software security is about making software behave correctly even in the presence of a malicious attack. The system should also behave properly when there is a software failure in the real world, that is, one without any intentional mischief. The difference between the security and safety of software is the presence of an intelligent adversary bent on breaking the system. Security is relative to the information and services being protected, the skills and resources of the adversaries, and the cost of potential assurance remedies. It is an exercise in risk management.
Risk analysis is especially helpful at the design level: it allows us to find potential security problems and their impact on the software. After finding the risks, we can rank them accordingly, and they can guide security testing of the software.
There are different types of vulnerabilities, and researchers in computer security have created taxonomies of them. A security vulnerability in a software system can range from a local implementation error to a complex attack. Attackers are not concerned with the cause of a vulnerability, whether it is a bug...