Abstract
Applications make up the core of any system--for example, small applications serving critical roles (e.g. the Basic Input/Output System), word processors, firewalls, e-mail servers, and operating systems. As a result, applications must be written both in a secure fashion and with security in mind, or they may become the weakest link, allowing the circumvention of various physical and logical access controls. Currently, many in management are unsure of where to integrate security into their programs, of the impact to cost and schedule, and of how to build security into their programs. Furthermore, many designers and developers are too worried about making a product work under tight deadlines to think about security if they are not specifically directed to do so. Consequently, installation and operation documentation often neglects security and the measures that the administrator, operator, and owner of the software should take to ensure that the application is secure. In this vein, this work helps all members of a project understand where security should be integrated into a general software development lifecycle, illustrates why security should exist at these points, and shows how to keep software safe no matter the stage in the lifecycle. By separating the lifecycle into responsibilities as well as incorporating the stages, all personnel will understand both their own responsibilities and those of all other personnel, as well as when to perform them. This work also helps all personnel understand why each responsibility is important, how to properly execute it, and what the impacts to the project may be if something is not performed or is given an inappropriately low amount of attention.
Introduction
There are currently numerous software development models--for example, iterative development, incremental development, agile, waterfall, spiral, and extreme programming (XP). (Lethbridge and Laganiere 2005)(Seacord 2005) Recently, the United States federal government and several major software development companies have created secure software development models or attempted to introduce security into existing models; however, they often fail to provide proper information for either the management or the designers and developers to actually implement the proposed models. (M. Howard 2005)(Defense Information Systems Agency 2008)(Software Process Subgroup of the Task Force on Security across the Software Development Lifecycle 2004)(Security Across the Software Development Lifecycle Task Force 2004) Moreover, the current works often neglect to give complete instructions to the various personnel to whom the work is addressed or fail to give enough reason for adopting the proposed security measures. In some cases, the works are designed around websites and the underlying scripts or interpreted applications (C# and Java) and do not adequately identify the measures and implications that an application, defined in a more general sense, and its development team would need. (Defense Information Systems Agency 2008)
[Figure: The software development lifecycle--Requirements, Architecture, Design, Implementation, Test, and Deployment phases]
More recently, however, the National Institute of Standards and Technology used a general lifecycle that can be implemented by almost any project for almost any reason. (Owens, Lauderdale, et al. 2008) The lifecycle involves six phases, none of which can be skipped, but which can be restarted any number of times. The lifecycle, shown above, starts at the requirements phase (where requirements are gathered), and then moves to the architecture phase (where the architecture is developed), to the design phase (where the actual design is created), to the implementation phase (where the programming is performed), to the test phase (where the application is heavily tested), and then to the deployment phase (where the application is deployed). If a bug is discovered after the application is deployed, the process repeats from the beginning, which bears resemblance to all of the aforementioned models. The details of the various phases are examined on a role-by-role basis below.
Application program managers (APM) are key personnel who oversee the management of the development of an application. These persons are responsible for setting the budget and schedule, allocating labor, and keeping the development process on track. They give designers and developers high-level blueprints and goals through budgets and scheduling, so they are very influential when it comes to the priority and role of security in the software development lifecycle.
Without the support of upper-level management, in particular program managers, security will likely suffer in the name of greater functionality or performance; however, if upper-level management supports a strong emphasis on security during the development lifecycle, it is likely that security, functionality, and performance will all improve. Most often, management misunderstands security, designers and developers are concerned chiefly with functionality, and none of the parties understands how to securely design software or what software security is and is not. A secure application is one that is properly designed, coded, and safeguarded during all phases of development and rollout, comes with adequate documentation to securely deploy the application, and is properly decommissioned when the time comes for the application to be replaced. While the APM should be involved in all phases of the lifecycle cited in Section 1, they are most influential during the requirements and architecture phases because these phases are less technical in many ways and allow the APM to influence the direction of the program at the policy level. To best create policies, guidelines, and standards, an APM must have a broad grasp of the security concerns facing applications today and of the cost and schedule impacts therein.
The client has the duty to ensure that any operating constraints are acceptable in the client's environment and that all assumptions are valid or viable. For example, the client should not be expected to add thousands of dollars' worth of equipment unless the existing equipment is extremely outdated, and the client should be certain to properly convey what assumptions are acceptable during the architecture phase. Because the application should leverage the architecture, it is best to ensure that the architecture is well understood and well defined, even for general applications such as operating systems.
Application Designers
Application design flaws are considered the source of about half of all software security problems. (Software Process Subgroup of the Task Force on Security across the Software Development Lifecycle 2004) Other problems stem from requirement and implementation flaws, but design flaws still outnumber all other types of security bugs. As a result, it is important that designers take careful heed of the requirements, architecture, and implementation considerations during the actual design phase. At this point, the designer should decide upon the appropriate language to be used and create a set of programming standards and guidelines that can be enforced. These documents should include a list of explicitly denied libraries and calls because of the danger that the calls could create if carelessly programmed or abused. Appendix C Sample Unsafe Function List has numerous examples from various languages that may be used as a starting point by developers. Additionally, the designer must carefully examine the requirements and architecture to ensure that the application can leverage the security afforded to it by the architecture, meet all requirements, and have a design that both lends itself well to the chosen language and is secure in design.
With higher-level and scripting languages, the compilers and interpreters equalize code more--reducing the programmer's ability to make code faster without using tricks, but also reducing the amount of knowledge and skill required to program in the language. A side effect of this is that the higher-level the language, the more likely it is that users will need to run additional software to actually execute the code, which decreases performance and adds applications that can become attack vectors. For example, assembly, C, and C++ compile into machine code and do not require additional software. In order to run a Java application, one must have the Java runtime environment, which is relatively small. In order to run PHP, one must have the much larger PHP interpreter installed and will likely have a web server hosting the PHP pages (although the latter is not a requirement). As we see an increase in what is required to execute the application, we also see an increase in the attack surface. Additionally, the higher-level the language, the less likely it is that the programmer can do things--which are often insecure anyway (Seacord 2005)--to increase the performance of the application. Designers should remember that no single language fits all projects, but that the choice of language influences the coding standards, the practices, the required skill sets and experience of the programmers, and can influence both security and functionality. Moving too low may introduce excessive bugs, while moving too high may actually increase the attack surface, as well as decrease performance and functionality. (Seacord 2005)(Howard and LeBlanc 2003) The decision will affect the complexity of the coding standards, the performance of the application, and the overall security of the hosting system.
3.1.2 Creating And Enforcing Standards And Guidelines
After choosing a language, it is important for designers to create a series of standards and guidelines before anyone begins to write code. It is important that the designers tailor the standards to the language or languages chosen in 3.1 and that they provide both standards and guidelines to the developers prior to entering the implementation phase of the lifecycle. Standards provide the developers enforceable rules by which to program the application, and guidelines provide best practices that should be followed to produce the best code possible. Too often, when standards are developed, security is never taken into consideration. While the standards often dictate things such as commenting appropriately (and defining what that means), file nomenclature, how variables are to be named, and other such details, they often lack the many tenets of building secure software. For example, standards should provide an unsafe function list from which deviations require waivers and are taken very seriously. While it can be argued that most functions are safe and only unsafe if improperly used, many functions in Appendix C Sample Unsafe Function List are there because they are too easy to use improperly, too hard to use properly, often misused, or have been behind many previous software vulnerabilities. The list should be kept as a living document, incorporating functions that commonly produce major vulnerabilities in the current project or in previous projects led by the team. For example, if during the lifecycle it is discovered that the software often suffers from overflows because strcpy is used and the function can be replaced, the team should place it on the unsafe function list and disallow further use of the function without first requiring a waiver and an intense peer review of the code requiring the waiver. Standards should also include the dos and don'ts of security.
For example, only thread-safe and reentrant functions may be used in multithreaded code. While this may seem an obvious step to many, a developer who is not well versed in multithreaded programming or who does not realize that the code is multithreaded can overlook it. A simple method of enforcing this would be to not mix multithreaded code into a class with non-multithreaded code and to have the multithreaded classes be clearly identifiable in both the design documentation and the code. When the code is properly separated as described above, simply searching for non-reentrant or thread-unsafe functions in the appropriate classes allows for quick discovery of some of the most common causes of race conditions.
Additionally, the standards should make it very clear that developers are not to deviate from the selected programming languages and are not to mix languages in a given module or class. For example, C constructs such as strcpy and goto should be banned from all C++ files, assembly from all C files, and Java files should never contain any C or C++. Additionally, each module should contain only a single language. This ban will increase performance, decrease compilation time, decrease code complexity, enforce loose coupling, and in many cases help make code more secure. Some of the insecurities that creep into machine or byte code are the direct result of how compilers handle the mixing of languages. By separating the modules, it is less likely that languages will be accidentally mixed in a given file. Guidelines, while allowing more flexibility than standards, show the developer how to properly code for the project. As such, the guidelines (and standards, for that matter) should only illustrate good, clean, secure coding practices. C++ guidelines should not contain double pointers in C format (e.g. char**) but should instead use arrays; C guidelines should not contain goto; and PHP or other web-language guides should properly validate any user input. By illustrating what to do, the guides and standards serve to educate developers and so should be given the utmost attention by expert designers and developers, with attention paid to both detail and security.
For example, many consumers dismissed Vista as slow, cumbersome, and little better than Windows XP in looks, even though Vista (on a properly powered machine) is none of those things. (Hansell 2008)(Lohr 2008)(Espiner 2007)(Poeter 2008)
Application Developers
While design and requirement flaws are difficult to alleviate, an application that is poorly written or that was not written with security in mind may prove next to impossible to fix. If developers program recklessly or with little skill, the application may be so riddled with flaws that it must be redeveloped from scratch. Some of these flaws may be simple, while others may be complex or contain egregious logic flaws. Additionally, rogue developers present an undeniable and often overlooked threat to the security, stability, and functionality of an application. As such, it is important that the development team be well versed both in secure programming and in keeping the source code safe.
At worst, CLIP would trigger a NULL pointer dereference or similar crash in the addressee, resulting in the loss of a segment of the distributed computing system. CLIP also contained errors where data truncation occurred (e.g. CLPV200001940), cutting off data that was supposed to be transmitted. Because there were no message integrity mechanisms in CLIP, it was difficult to tell whether the data were purposefully left off, whether they were cut off, or even whether they were transmitted (or existed) in the first place. The receiver was led to believe that the message was complete, when in fact it was not. For example, the message "Attack target Tango-Delta-One-Niner and then change your course to 115813. Intercept the tanker there and refuel before going to Rammstein and provide air cover" would be truncated to "Attack target Tango-Delta-One-Niner and then change your course to 115813. Intercept the tanker there and refuel." The receiver would not know to go to Rammstein and provide air cover and would have to ask for the next mission or might make some assumption, such as returning to base. This flaw could--at best--simply increase network traffic if the sender resends that particular question (the increased traffic would be the cost of the original text plus another header), slowing the time in which the question is answered. This makes a difference on a slow network, which is why protocols such as the Hyper Text Transfer Protocol (HTTP) allow the requesting of multiple files in a single query. The flaw could also result in an affirmative response by the receiver, who does not have the complete message. Even more important would be the case where the next message from the sender tells the receiver to "Then return to base." If so, the receiver would attack the target, change course, intercept the tanker and refuel, and then return to base, never traveling to Rammstein to provide air cover as intended. This could result in the loss of life if such a scenario were to materialize.
Another, perhaps more disturbing, integrity issue observed in CLIP occurred when data was created that was not there before. CLPV200001817 serves as an example of this. When CLIP created a certain JREAP-A message, it always appended sixteen extra bits to the end of the message. During testing, the Verification and Validation (V&V) test team noticed that the appended bits were hexadecimal zero. Sixteen bits is half a word (one word equals 32 bits on a 32-bit processor, and there are 8 bits in a byte), which is a large amount of information (two characters or half of a virtual address). To application security professionals, the major concern was where this data came from. Developers, on the other hand, were most concerned with the effect that this extra data had on a client and whether filters would drop the packets as malformed. To both application security professionals and developers, a big concern was whether CLIP could accidentally terminate (likely in a failure), or whether CLIP created the equivalent of an off-by-one buffer overflow on either the middleware or endware. If CLIP overflowed the buffer, the cause may have been a math error, an integer overflow, another overflow that had already occurred, or a malicious backdoor inserted into the source code. While in this particular case it was most likely an accidental mathematical error, an attacker could insert such mistakes in an attempt to slowly leak sensitive data to a monitoring and untrustworthy user--thereby using CLIP as a Trojan. The user may be an application--perhaps a host--designed to gather the extra sixteen bits and slowly put the pieces together, or a malicious operator.
The data may even be indicative of a time bomb put into the code such that when the developer is fired, he or she can send out a single signal; in a distributed system such as the one CLIP creates, this signal would slowly spread and wreak havoc on the network. As such, the data fields should have had integrity checks. In CLIP's case, the problem was twofold: the heavy use of C code in a C++ application led to direct memory manipulation and pointers (which in a complicated application is bound to eventually cause problems), and the use of code that clearly violated the coding standards set forth by the designers. The development team made heavy use of junior programmers, of subcontractors who were not required to use the primary contractor's standards, and of third-party code that was never examined for correctness or tested to ensure that it functioned properly.
4.1.2 Data Manipulation
On July 20, 2008, Amazon's S3 network experienced a problem that eventually required the entire S3 network to be shut down and every system on it to be reset. The S3 network is a giant distributed system that provides data
storage capabilities to numerous other corporations and spans two continents. The system is much like the distributed application that CLIP is a part of, only simplified, with far fewer functions and less critical goals. S3 is well protected, and all data leaving the network has integrity checks, as does all data passed within the system except for one single message: the system state message. That morning, a faulty device flipped a single bit in the device's report of the system state. The corrupted state was transmitted to all of the neighbors in the system, and an endless loop of chatter began between the neighbors and eventually between all nodes in the distributed system trying to figure out the real state of the system. The chatter became so strong that almost no other processing or communication took place; the entire system had to be taken down, the state had to be cleared, and then everything had to be brought back up and synchronized. No networking protocol checksum (e.g. TCP's and UDP's checksums) could have caught this loss of integrity because it happened prior to encapsulation, yet that single flipped bit resulted in the complete loss of service for eight hours. It took about two and a half years for a device to fault such that an incorrect--but valid--system state was transmitted, but all that it took was a single manipulated bit for the state to no longer be correct and for the entire network to have to be shut down. (The Amazon S3 Team 2008) CLIP again serves as an example here, since it suffered similar bugs that resulted in it sending out bad information: CLIP provided no data integrity checks outside of those already performed by the platform--assuming that the platforms have checks on network packets. If CLIP were to modify data because of a programming error or a bug (or because of a malicious attack), the modification may never be caught--certainly not by CLIP.
For example, CLIP is known to have programming and logic errors that could create an intentional or unintentional data modification, such as double deallocation. If a pointer were deallocated twice, the consequences would vary but could include spreading corrupted data across the entire heap and even destroying the system's stack. Furthermore, CLIP had numerous buffer overflow vulnerabilities such that an overflow could modify other data in memory (there are no real protections in the Integrity RTOS or Fedora, under which CLIP was designed to operate, that would prevent an overflow), which may then be transmitted across the network. The modification may be just a single byte but, as the above example from Amazon shows, even a single bit can create havoc--especially in a distributed application. Single-byte mutations, commonly referred to as off-by-one overflows, are common in applications but have proven particularly deadly in networking applications because of the number of packets that they process.1 Applications such as CLIP also rely heavily on inter-process communication (IPC), making it plausible that data be mutated during the IPC handshake--perhaps because of an overflow of the buffer underlying the IPC messaging protocol or some similar problem. This sort of issue may not be detected, especially if perror and errno (or similar functions and variables in other languages) are used, as they may introduce race conditions that result in the error flag being switched to no error or the error descriptor being switched to some other error description. If a problem were to occur accidentally, such as a single flipped bit changing a NUL terminator into something else, the result could be devastating, depending on the language used (e.g. C), if the data is not properly checked for correctness and integrity problems. IPC can also suffer from underflows, deadlock, livelock, starvation, and other issues, which can result in data manipulation or the loss of data.
1 For examples, please see http://arts.kmutt.ac.th/lng104/Web%20LNG104/3rdWorldHacker/howto/DosAttack.htm (Nestea), http://vulnerabilities.aspcode.net/20928/Off+by+one+buffer+overflow+in+the+parse+element.aspx, http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=421283, and http://en.securitylab.ru/notification/285890.php.
The management of the code is fundamental to the code's security, because a breach of the code's storage locations could be devastating to the integrity of the code and the application itself. As such, the code should be carefully monitored for rogue developers, malicious intruders, and accidental or unjustified modifications. The code must also be monitored to ensure that concurrent development is handled in a safe manner whereby integration of the code or branches will not result in insecurities or bugs.
4.2.1 Observing Secure Programming Practices
A good design is key to a well-built, well-thought-out application, but even with a solid design, an application may be coded such that it contains numerous flaws. As such, it is important that all of the developers follow the standards and guidelines put forth by the design team, as well as the design plans themselves. It is also important that the development team take heed of Section 4.1 and ensure that data integrity mechanisms are used and are sufficient to detect (and in some cases recover from) the loss of integrity. Development teams should be sure to follow a structured development model, especially on large projects or those where modifications to the same file are likely to be concurrent. It is imperative that during both the design and development phases, best practices and guidelines from the language creators and maintainers (where available, as with PHP and Java) are used alongside industry and security best practices (see Appendix B Secure Programming Guideline Repositories for some of the more common repositories of such information). The development team must also ensure that the design is sound and will not create unforeseen problems; if such problems are noticed, the design may need to be reviewed and revised. Secure development should include having the source code examined for security flaws.
While security flaws are often thought of as concerns held exclusively by security professionals, most of these flaws hold functionality concerns as well. For example, buffer overflows and integer overflows are major security concerns. If a buffer or integer overflow were to occur, the application could (at best) crash, which is functionally undesirable. While it is true that security is largely concerned with the misuse of the flaw, few people would find it desirable for an application to crash in the middle of something important (as defined on a user-by-user basis), especially if data is lost. Another example is that of command injection. While it is unlikely, depending on the application, that someone accidentally injects a command that is successfully executed, if a malicious user were to leverage the flaw to create a denial of service or perform harmful activities, the functional impacts could be severe. To be both effective and efficient, it is best for development teams to perform peer reviews when changes are made and to leverage that process by having the code also examined for security flaws. If CLIP had been properly examined for functionality problems, the double deallocation would likely have been caught; if CLIP had been examined at all by the development team for security problems, it would likely not have contained more than thirty-eight thousand flaws (Owens, CT&E Report for the CLIP Release 1.2.1.26 2008).
4.2.2 Securing The Repository
On May 17, 2001, an attacker managed to use a compromised Secure Shell (SSH) client on a legitimate user's computer to gain access to the Apache Software Foundation's main servers. After gaining initial access, the attacker used a vulnerability in the OpenSSH server on one of Apache's computers to escalate privileges. After this, the attacker began replacing files with Trojanized files designed to further penetrate the network.
The attacker's activities were detected during a nightly audit of the systems' binary files, and Apache immediately took the compromised server offline. The server turned out to be one of Apache's major servers--handling the public mailing lists, web services, the source code repositories of all of the Apache projects, and the binary distribution of the projects. Apache had to install a fresh operating system, change all of the passwords, remove the backdoors, and then begin the process of using both manual procedures and automatic processes to determine what, if any, source code and binary files related to the projects were changed. (Frields 2008)(Apache Software Foundation 2001) Source code is in many cases more human-readable than the compiled version of the code. Additionally, source code often includes comments that may help in understanding the code, while compiled code does not normally include such information. Because it is easy to modify source code to contain a hidden backdoor, logic
bomb, time bomb, or other malicious code that is not likely to be found in a large project, the source code is often a target of disgruntled employees, employees who did not leave on the best of terms, and otherwise malicious or curious persons who gain or have access to the repository. (Howard and LeBlanc 2003)(Pfleeger and Pfleeger 2007)(Seacord 2005) Since the source code can be modified for malicious purposes with little or no knowledge of the actual application, and because it is constantly modified by authorized users, it must be well protected against unauthorized access or modification. An authorized user should only be allowed to modify specific sections of code when directed to and under no other circumstances. Furthermore, such access should be routinely audited for suspicious activity. As such, it is important that the repository have access controls and auditing such that all changes can be traced to a specific user at a specific date and time. If this cannot be achieved, the source code is not secure and must be assumed to have been tampered with. If malicious code is entered into the repository, source code audits and code reviews should be performed on the entire code base to ensure that the software is safe. Without this assurance, the software can only be assumed to be malware and harmful to the system upon which the compiled code is placed. For this reason, it is important that the developers keep track of all changes, verify that the changes are valid, and continuously audit user access rights. If a breach occurs, its cost is markedly high given the expense of examining all code both manually and through automated processes, as the Apache case study illustrates.
Application Testers
Many of the design and implementation flaws that exist today were either not caught during testing or were ignored. The testers verify that the software does what it is required to do, does what it is supposed to do, and behaves appropriately under all conditions. Tests are normally well-structured, well-documented procedures designed to create reproducibility and allow for quick and easy regression testing. This structured testing process, however, can reduce the amount of data that actually comes out of testing and often results in a simplistic pass/fail mentality. (Dowd, McDonald and Schuh 2006) While it is logical to create structured test procedures for most tests, the testing team must be afforded flexibility and encouraged to follow any leads discovered during testing in order to maximize the testing's effectiveness. Positive testing is one of the most common forms of testing during development--that is to say, many of the tests performed against software verify that the software does what it should do. (Dowd, McDonald and Schuh 2006) Negative testing is far more complex and generally involves a greater number of possible tests. For example, if users are allowed to input any positive number between twenty and fifty, test teams will often verify that the software operates properly when those numbers are provided and may check a handful of other numbers (such as -1, 0, 1, 19, and 51). It would consume far too much time to check the software's behavior when presented with every single value that a user could input. The test team may have performed some basic bounds checking, but it is far less likely that types were checked. For example, suppose that 'a' were input instead of a number. If the software does not properly validate the data, it may crash, it may convert 'a' to some sort of a number, or it may just pass the input straight to a backend. Test teams should always try to provide multiple data types to a given input.
Another common mistake that developers make, and that may or may not be tested by test teams, involves encodings. For example, an application may expect input to be encoded using ASCII, but the user may be inputting data using UTF-8. There have been numerous vulnerabilities in such software as Apache Tomcat, Apache HTTP Server, and Internet Information Services (IIS) that stemmed from allowing multiple encodings without properly verifying user input or the encoding being used. Test teams, as a result, should check the encoding and language requirements and ensure that the application behaves appropriately both when the allowed encodings are presented and when other encodings are presented.
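To illustrate, the sketch below shows a hypothetical validator written only for ASCII, a legitimate UTF-8 input that it mishandles, and an invalid (overlong) UTF-8 sequence of the kind behind some of the historical IIS bypasses; a robust application must reject such bytes rather than guess at their meaning:

```python
def is_printable_ascii(data: bytes) -> bool:
    """Hypothetical validator that assumes input is plain printable ASCII."""
    return all(0x20 <= b < 0x7F for b in data)

ascii_name = "resume.txt".encode("ascii")
utf8_name = "résumé.txt".encode("utf-8")  # same concept, UTF-8 encoded

assert is_printable_ascii(ascii_name)
# The UTF-8 form fails the ASCII check even though it is a legitimate
# filename: a validator written only for ASCII either rejects valid
# international input or, worse, is bypassed and the raw bytes reach a
# backend that interprets them under a different encoding.
assert not is_printable_ascii(utf8_name)

# Test teams should also present byte sequences that are invalid in the
# declared encoding. Here, an overlong encoding of "/" is correctly
# rejected by a strict decoder rather than silently normalized.
try:
    b"\xc0\xaf".decode("utf-8")
    decoded = True
except UnicodeDecodeError:
    decoded = False
assert not decoded
```

The essential test is that validation and consumption happen under the same, strictly enforced encoding; any disagreement between the two is a potential filter bypass.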
Since more and more software is either multithreaded or multiprocess-based, test teams should attempt to trigger race conditions. While there are numerous ways to do this, the most common method is to slow down the application by either slowing down the system clock or by feeding massive amounts of data to the system and lowering the priority of the process (or processes). Many times, however, tests for race conditions can only be generalized and partially documented because of the nature of the flaw and the way that most race conditions operate. These tests will require the test team to be flexible and to adjust the strategy to not only the application, but also the operating conditions and the reactions of the system. There are numerous other flaws that should be tested for, such as memory leaks, double allocation, wild pointers, underflows, logic bombs, input validation, input sanitization, data protection, inadequate access controls, injections (command, SQL, LDAP, etc.), integer overflows, format string vulnerabilities, and buffer overflows. Many times, the most effective means of testing for a wide variety of problems is fuzz testing. There are many different fuzzing applications available that allow the fuzzing of numerous protocols and specifications. Fuzzing allows testers to test a large amount of input--both valid and invalid--in an automated, generally unsophisticated manner, much like brute-forcing a password. If possible, the test team should couple the testing activities with an exhaustive fuzzing of all available input methods.
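A minimal sketch of this kind of unsophisticated fuzzing follows. Here parse_record() is a toy stand-in for the component under test, and a real harness would log each crashing input for later analysis rather than merely count failures:

```python
import random

def parse_record(data: bytes) -> int:
    """Toy parser standing in for the component under test: expects
    b"LEN:payload" where LEN is the payload length in decimal."""
    header, _, payload = data.partition(b":")
    return int(header.decode("ascii")) - len(payload)

def fuzz(iterations=1000, seed=1234):
    """Feed random byte strings to the parser and count the inputs
    that raise an unhandled exception (i.e. potential crashes)."""
    rng = random.Random(seed)  # fixed seed so failures are reproducible
    crashes = 0
    for _ in range(iterations):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(32)))
        try:
            parse_record(blob)
        except Exception:
            crashes += 1
    return crashes

print("crashing inputs found:", fuzz())
```

Even this dumb approach surfaces inputs the developer never anticipated; protocol-aware fuzzers go further by mutating mostly valid messages so that malformed data penetrates deeper into the application before failing.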
If software is properly sponsored, designed, developed, and well tested, it is less likely to contain serious flaws. Once compiled, however, the application still faces multiple threats and must be guarded by both the developers and those who purchase the software or software license. The majority of this burden falls upon the release managers and the application owners--the clients or people who are actually deploying the software. Once the source code is compiled and distributed, or located such that the binaries may be retrieved by others (e.g. on a webserver), the binary files and all other files (e.g. configuration files) are also vulnerable to replacement. For this reason, the files must be subjected to change management and protected, just as the source code was. Modifying binary files to include malcode can be accomplished, but likely requires the attacker to decompile the binaries or use a debugger if the attacker does not want the application to crash following the execution of the malcode. In most instances, the attacker would need to understand assembly to insert the malcode, but this isn't a requirement, particularly if the attacker can gain access to the source code--thus, source code and the binary files should never be colocated--or if the binary file is simply byte-code (e.g. a Java .class file or a Flash SWF file). In the week prior to August 22, 2008, the Fedora Project and Red Hat detected a large compromise of multiple systems on their networks. The impact of the compromise on each network was different, as the systems compromised were different. One of the more important systems compromised on Fedora's network was a system used to sign packages (the signature verifying the integrity of the project's source and binary files). Fedora had to verify the integrity of the binary and source files under its control as well as change the key used to sign packages.
Red Hat was not as lucky because checks on file integrity showed that the attacker managed to modify and then sign a series of packages relating to OpenSSH. In both cases, the integrity of the binaries distributed to the user base was called into question. The organizations (Fedora being owned by Red Hat) were already using signing to verify package integrity, which helped to detect the modifications. Red Hat users who downloaded the malicious updates could not, however, have known that the repository was compromised because the attacker used the Red Hat key to sign the malicious packages (at the time of this writing, Fedora maintains that the attacker did not modify any Fedora update packages). Red Hat has since had to release a patch to replace the malicious binaries with the proper ones on any system that automatically patched to the malicious versions. (Frields 2008)(Red Hat Network 2008)
The above case studies from Red Hat and Fedora illustrate why it is important to safeguard the binary files, especially at their distribution point. The aforementioned Apache case study (Section 4.2.2) also illustrates this point. In all of the cases, the files had to be painstakingly reviewed to ensure that they had not been tampered with. In the case of Red Hat and Fedora, the Byzantine Generals Problem held true because the faulty device was detectable by comparing the hashes and validating the signatures--assuming that one did both using multiple, distinct sources. While doing this made it easier to detect that integrity was lost in the binary distribution files, it still did not replace the need for tedious, manual review of all of the files because a simple requirement of the Byzantine Generals Problem cannot be met by current (or practical) technology: a non-faulty device's signature cannot be forged by a faulty device. As such, a manual review following a breach is required, but automated means can certainly help to detect the initial breach. To best secure the application's files, the release manager should ensure that secure installation and operation documentation are created and made available to the application owner. The documentation should list any ports and protocols used by the application, to aid firewall administrators. The documentation should also provide instructions to secure the application, such as setting proper access controls in the application, installing only what is required by the application owner, setting access controls on the application's files, and anything else that will provide a more secure operating environment. In short, the documentation should give the application owner the ability to reduce the application's attack surface and mitigate many unknown vulnerabilities. The application owner should then be certain to follow the documentation.
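The multiple-source comparison described above can be sketched as follows. The digests would normally be obtained over separate channels (the project website, mirrors, a signed release announcement) so that a single compromised host cannot alter both the file and its published digest; note that, as the Red Hat case shows, a matching digest or signature does not defend against an attacker who controls the signing key itself:

```python
import hashlib

def verify(data: bytes, digests_by_source: dict) -> bool:
    """Accept the file only if every independently obtained digest
    matches the digest of the bytes actually downloaded."""
    actual = hashlib.sha256(data).hexdigest()
    return bool(digests_by_source) and all(
        d == actual for d in digests_by_source.values())

good = b"official release tarball"
tampered = b"official release tarball + backdoor"

# Digests as they would be published by two distinct sources.
published = {
    "project website": hashlib.sha256(good).hexdigest(),
    "mirror announcement": hashlib.sha256(good).hexdigest(),
}

assert verify(good, published)          # untouched file passes
assert not verify(tampered, published)  # modified binary is detected
```

This automated check is how the initial breach gets detected quickly; the manual review of the full file set, as the case studies showed, still follows.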
Conclusion
This paper helps to define a secure development model that can integrate into most existing software development lifecycles. The paper also explains the duties that each person--from management down--should perform to help facilitate the model and create a secure application. Management must take an active role in the development process and ensure that security is given proper funding and attention. Designers must be certain to completely understand the architecture upon which the application is to operate, design coding standards and guidelines that ensure the software is written securely, and create a secure design. The programmers must adhere to the standards and guidelines set forth by the designers and be certain to code in a secure manner. The test team must perform both positive and negative testing, follow a test plan while remaining flexible enough to discover issues that detailed plans could not find, and test all input. The release manager and application owner must make certain that the binaries are protected such that they cannot be replaced or modified by unauthorized persons and that the application is installed and operated in the most secure manner possible. As this paper explains, if any of the above personnel fail in their duties, the software may be operated with egregious vulnerabilities that could be exploited by malicious attackers or accidentally triggered during normal operation.
Appendix A - Bibliography
Apache Software Foundation. "Apache.Org Compromise Report, May 30th, 2001: Unauthorized Access to Apache Software Foundation Server." Apache. May 30, 2001. http://www.apache.org/info/20010519-hack.html (accessed September 1, 2008).
Bacon, Jean, and Tim Harris. Operating Systems: Concurrent and Distributed Software Design. London: Addison-Wesley, 2003.
Defense Information Systems Agency. Application Security and Development Checklist. Version 2, Release 1. 2008.
Defense Information Systems Agency. Application Security and Development Security Technical Implementation Guide. Version 2, Release 1. 2008.
Dowd, Mark, John McDonald, and Justin Schuh. The Art of Software Security Assessment: Identifying and Preventing Software Vulnerabilities. New York: Addison-Wesley Professional, 2006.
Espiner, Tom. "Lawyers: Vista Branding Confused Even Microsoft." November 28, 2007. http://news.zdnet.com/2100-9595_22-177815.html?tag=nl.e550 (accessed November 2, 2008).
Frields, Paul W. "Infrastructure Report, 2008-08-22 UTC 1200." Fedora Project. August 22, 2008. https://www.redhat.com/archives/fedora-announce-list/2008-August/msg00012.html (accessed September 1, 2008).
Hansell, Saul. "Microsoft Tries to Polish Vista." New York Times, July 22, 2008: Bits.
Howard, Michael. "How Do They Do It?: A Look Inside the Security Development Lifecycle at Microsoft." MSDN Magazine, November 2005.
Howard, Michael, and David LeBlanc. Writing Secure Code. 2nd Edition. Redmond, Washington: Microsoft Press, 2003.
Lamport, Leslie, Robert Shostak, and Marshall Pease. "The Byzantine Generals Problem." ACM Transactions on Programming Languages and Systems, July 1982: 382-401.
Lethbridge, Timothy C., and Robert Laganiere. Object-Oriented Software Engineering: Practical Software Development Using UML and Java. 2nd Edition. New York: McGraw Hill, 2005.
Lohr, Steve. "Et Tu, Intel? Chip Giant Won't Embrace Microsoft's Windows Vista." New York Times, June 25, 2008: Bits.
Mao, Wenbo. Modern Cryptography: Theory and Practice. Boston: Prentice Hall, 2003.
Mockapetris, Paul. "Domain Names - Implementation and Specification (RFC 1035)." The Internet Engineering Task Force - Request for Comments. November 1987. http://www.ietf.org/rfc/rfc1035.txt (accessed September 4, 2008).
Owens, Daniel. Certification Test & Evaluation Report for the Common Link Integration Processing (CLIP) Release 1.2.1.26. Certification Test and Evaluation Report, San Diego: Booz Allen Hamilton, 2008.
Owens, Daniel. Common Link Integration Processing (CLIP) Change Management. Whitepaper, San Diego: Booz Allen Hamilton, 2008.
Owens, Daniel. Common Link Integration Processing (CLIP) Integrity. Whitepaper, San Diego: Booz Allen Hamilton, 2008.
Owens, Daniel, Lawrence Lauderdale, Karen Scarfone, and Murugiah Souppaya. NIST 800-118: Guide to Enterprise Password Management (Draft). National Institute of Standards and Technology, 2008.
Pfleeger, Charles P., and Shari L. Pfleeger. Security in Computing. 4th Edition. Westford, Massachusetts: Prentice Hall, 2007.
Poeter, Damon. "Microsoft, Not Intel, Scrapped Vista Capable Hardware Requirements." November 17, 2008. http://www.crn.com/software/212100378?cid=microsoftFeed (accessed November 18, 2008).
Red Hat Network. "Critical: openssh security update - RHSA-2008:0855-6." Red Hat Support. August 22, 2008. http://rhn.redhat.com/errata/RHSA-2008-0855.html (accessed September 1, 2008).
Seacord, Robert C. Secure Coding in C and C++. Boston: Addison-Wesley, 2005.
Security Across the Software Development Lifecycle Task Force. Improving Security Across the Software Development Lifecycle. National Cyber Security Summit, 2004.
Software Process Subgroup of the Task Force on Security across the Software Development Lifecycle. Processes to Produce Secure Software: Towards More Secure Software. Vol. I. National Cyber Security Summit, 2004.
Stallings, William, and Lawrie Brown. Computer Security: Principles and Practice. Upper Saddle River, New Jersey: Prentice Hall, 2008.
The Amazon S3 Team. "Amazon S3 Availability Event: July 20, 2008." July 20, 2008. http://status.aws.amazon.com/s3-20080720.html (accessed August 19, 2008).
Unsafe Function -> Alternative
atoll -> strtoll
fprintf -> vfprintf
gets -> No safer alternative; check for overflows
fgets -> No safer alternative; check for overflows
crypt -> crypt_r
system -> No safer alternative; parameters must be carefully checked
exec* (e.g. execlp) -> No safer alternative; parameters must be carefully checked
strerror -> strerror_r
chmod -> fchmod
usleep -> nanosleep/setitimer
ttyname -> ttyname_r
readdir -> readdir_r
ctime -> ctime_r
gmtime -> gmtime_r
getlogin -> getlogin_r
snprintf -> vsnprintf
popen -> No safer alternative; parameters must be carefully checked
ShellExecute -> No safer alternative; parameters must be carefully checked
ShellExecuteEx -> No safer alternative; parameters must be carefully checked
setuid -> No safer alternative
setgid -> No safer alternative
ASP/.NET
Unsafe Function -> Alternative
stackalloc -> No safer alternative; ensure that the functions using this memory do not overflow
C/C++
Unsafe Function -> Alternative
strcpy/wcscpy/lstrcpy/_tcscpy/_mbscpy -> strlcpy
strcat/wcscat/lstrcat/_tcscat/_mbscat -> strlcat
strncpy/wcsncpy/lstrcpyn/_tcsncpy/_mbsnbcpy -> strlcpy
strncat/wcsncat/_tcsncat/_mbsnbcat -> strlcat
mem*/CopyMemory -> SecureZeroMemory
sprintf/swprintf -> No safer alternative; parameters must be carefully checked
_snprintf/_snwprintf -> If possible, do not use the unsafe code; if required, ensure that memory problems, overflows, and other security issues are protected against
printf/_sprintf/_snprintf/vprintf/vsprintf and wide character variants of the above -> If possible, do not use the unsafe code; if required, ensure that memory problems, overflows, and other security issues are protected against
strlen/_tcslen/_mbslen/wcslen -> StringCchLength
gets -> fgets
scanf/_tscanf/wscanf -> fgets
>> -> Stream.width
MultiByteToWideChar -> No safer alternative; check for overflows
_mbsinc/_mbsdec/_mbsncat/_mbsncpy/_mbsnextc/_mbsnset/_mbsrev/_mbsset/_mbsstr/_mbstok/_mbccpy/_mbslen -> No safer alternative; check for overflows and other vulnerabilities
CreateProcess/CreateProcessAsUser/CreateProcessWithLogon -> No safer alternative; parameters must be carefully checked
WinExec/ShellExecute -> No safer alternative; parameters must be carefully checked
LoadLibrary/LoadLibraryEx/SearchPath -> No safer alternative; parameters must be carefully checked
TTM_GETTEXT -> No safer alternative; check for overflows
_alloca/malloc/calloc/realloc -> Use new
recv -> WSAEventSelect
IsBadXXXPtr -> No safer alternative; check for overflows, race conditions, error handling vulnerabilities, denial of service vulnerabilities
Java
Unsafe Method/Class -> Alternative
Serialization -> Mark non-public variables transient
Reflection -> No safer alternative
Class.forName -> No safer alternative
Class.newInstance -> No safer alternative
Runtime.exec -> No safer alternative
Any method marked deprecated -> See the deprecated list for the target platform:
Java 1.4.2 SE: http://java.sun.com/j2se/1.4.2/docs/api/deprecated-list.html
Java 1.5 SE: http://java.sun.com/j2se/1.5.0/docs/api/index.html?deprecated-list.html
Java 5 EE: http://java.sun.com/javaee/5/docs/api/index.html?deprecated-list.html
Java 6 SE: http://java.sun.com/javase/6/docs/api/index.html?deprecated-list.html
Perl
Unsafe Function -> Alternative
system -> No safer alternative; parameters must be carefully checked
exec -> No safer alternative; parameters must be carefully checked
open -> No safer alternative; parameters must be carefully checked
glob -> No safer alternative; perform the listing and then sort
eval -> No safer alternative
goto -> No safer alternative
dbmclose -> untie
dbmopen -> tie
gethostbyaddr -> No safer alternative; do not trust the result
do -> &SUBROUTINE
/e -> No safer alternative
| (OR) -> No safer alternative
$# -> Deprecated
$* -> /m or /s
$[ -> 0
` (backtick) -> No safer alternative; backticks, to include qx{}, should be avoided
Any functions, methods, or classes marked deprecated -> If possible, do not use the unsafe code; if required, ensure that memory problems, overflows, and other security issues are protected against
PHP
Unsafe Function -> Alternative
system -> No safer alternative; parameters must be carefully checked
exec -> No safer alternative; parameters must be carefully checked
shell_exec -> No safer alternative; parameters must be carefully checked
popen -> No safer alternative; parameters must be carefully checked
passthru -> No safer alternative; parameters must be carefully checked
eval -> No safer alternative
mysql_* -> Use mysqli_* and prepared statements
` (backtick) -> No safer alternative; backticks should be avoided
Any functions, methods, or classes marked deprecated -> If possible, do not use the unsafe code; if required, ensure that memory problems, overflows, and other security issues are protected against
Python
Unsafe Function -> Alternative
os.system -> No safer alternative; parameters must be carefully checked
exec -> No safer alternative; parameters must be carefully checked
os.popen -> No safer alternative; parameters must be carefully checked
execfile -> No safer alternative
eval -> No safer alternative
input -> raw_input
compile -> No safer alternative
marshal module -> No safer alternative
pickle module -> No safer alternative
rexec -> Deprecated
Any functions, methods, or classes marked deprecated -> If possible, do not use the unsafe code; if required, ensure that memory problems, overflows, and other security issues are protected against
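As a short illustration of why eval appears in the table above (and why Python 2's input, which evaluated whatever the user typed, is paired with raw_input), the sketch below shows a string that eval executes as code and the literal-only alternative ast.literal_eval, which rejects it:

```python
import ast

# A string an attacker might supply where the program expects data.
malicious = "__import__('os').getcwd()"

# eval() treats the string as code and executes it; here the payload is
# harmless, but it could just as easily delete files or open a socket.
eval(malicious)

# ast.literal_eval accepts only literals (numbers, strings, tuples,
# lists, dicts, sets, booleans, None) and refuses anything executable.
try:
    ast.literal_eval(malicious)
    rejected = False
except (ValueError, SyntaxError):
    rejected = True
assert rejected

# Legitimate literal data is still parsed normally.
assert ast.literal_eval("[1, 2, 3]") == [1, 2, 3]
```

When the input genuinely must be structured data, parsing it with a literal-only or format-specific parser is the safer pattern; passing user input to eval or exec has no safe form.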
ColdFusion
Unsafe Function -> Alternative
<cflogout> -> <cfset StructClear(Session) />
<cfobject> -> <cfobject secure = yes />
<cfhttp> -> <cfhttp secure = yes />
<cfftp> -> <cfftp secure = yes />
<cfregistry> -> No safer alternative
<cfexecute> -> No safer alternative; parameters must be carefully checked
<cfauthenticate> -> <cflogin>
IsAuthorized -> Deprecated
<cffile> -> Ensure proper server configuration and sandbox
<cfcontent> -> Ensure proper server configuration and sandbox
Any functions, methods, or classes marked deprecated -> If possible, do not use the unsafe code; if required, ensure that memory problems, overflows, and other security issues are protected against