Communications in Computer and Information Science 72
Web Application
Security
Iberic Web Application Security Conference
IBWAS 2009
Madrid, Spain, December 10-11, 2009
Revised Selected Papers
Volume Editors
Carlos Serrão
ISCTE-IUL Lisbon University Institute
OWASP Portugal Ed. ISCTE
Lisboa, Portugal
E-mail: carlos.serrao@iscte.pt
Vicente Aguilera Díaz
Internet Security Auditors
OWASP Spain
Barcelona, Spain
E-mail: vicente.aguilera@owasp.org
Fabio Cerullo
OWASP Ireland
OWASP Global Education Committee
Rathborne Village, Ashtown, Dublin, Ireland
E-mail: fcerullo@owasp.org
ISSN 1865-0929
ISBN-10 3-642-16119-7 Springer Berlin Heidelberg New York
ISBN-13 978-3-642-16119-3 Springer Berlin Heidelberg New York
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is
concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting,
reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication
or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965,
in its current version, and permission for use must always be obtained from Springer. Violations are liable
to prosecution under the German Copyright Law.
springer.com
© Springer-Verlag Berlin Heidelberg 2010
Printed in Germany
Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India
Printed on acid-free paper
Preface
IBWAS 2009, the Iberic Conference on Web Applications Security, was the first
international conference organized jointly by the OWASP Portuguese and Spanish chapters, bringing together the international Web application security academic and industry
communities to present and discuss the major aspects of Web application security.
There is currently a change in the information systems development paradigm. The
emergence of Web 2.0 technologies has led to the extensive deployment and use of Web-based applications and Web services as a way to develop new and flexible information
systems. Such systems are easy to develop, deploy and maintain, and they offer
impressive features for users, which explains their current wide use. The social features
of these technologies create the necessary network effects that lead millions
of users to share their own personal information and content over large Web-based interactive platforms. Corporations, businesses and governments all over the world are also
developing and deploying more and more applications to interact with their businesses, customers, suppliers and citizens, enabling stronger and tighter relations with
all of them. Moreover, legacy non-Web systems are being ported to this new intrinsically connected environment.
IBWAS 2009 brought together application security experts, researchers, educators
and practitioners from industry, academia and international communities such as
OWASP, in order to discuss open problems and new solutions in application security.
In the context of this track, academic researchers were able to combine interesting
results with the experience of practitioners and software engineers.
The conference, held at the Escuela Universitaria de Ingeniería Técnica de Telecomunicación of the Universidad Politécnica de Madrid (EUITT/UPM), was organized
for the very first time and represented a step forward in the OWASP mission and
organization. During the two days of the conference, more than 50 attendees enjoyed
different types of sessions, organized around different topics. Two renowned keynote
speakers, diverse invited speakers and several accepted communications were presented and discussed at the conference. During these two days, the conference agenda
was divided into two major tracks, industry and research sessions, organized
according to the following topics:
On the final day of the conference, a panel discussion was held on a specific
topic: "Web Application Security: What Should Governments Do in 2010?". From this
discussion panel a set of conclusions was reached and some specific recommendations were produced:
1. Challenge governments to work with organizations such as OWASP to increase the transparency of Web application security, particularly with respect
to financial, health and all other systems where data privacy and confidentiality requirements are fundamental.
2. OWASP will seek participation with governments around the globe to develop recommendations for the incorporation of specific application security
requirements and the development of suitable certification frameworks within
the government software acquisition processes.
3. Offer OWASP assistance to clarify and modernize computer security laws,
allowing the government, citizens and organizations to make informed decisions about security.
4. Ask governments to encourage companies to adopt application security standards that, where followed, will help protect us all from security breaches,
which might expose confidential information, enable fraudulent transactions
and incur legal liability.
5. Offer to work with local and national governments to establish application
security dashboards providing visibility into spending and support for application security.
Although organized together by the OWASP Portugal and Spain chapters, IBWAS
2009 was a truly international event and welcomed Web application security experts
from all over the world, supported by the OWASP open and distributed community.
We, as organizers of the IBWAS 2009 conference, would like to thank the different
authors who submitted their quality papers to the conference, and the members of the
Programme Committee for their efforts in reviewing the multiple contributions that we
received. We would also like to thank the amazing keynote and panel speakers for
their collaboration in making IBWAS 2009 a success.
Finally, we would like to thank the EUITT/UPM for hosting the event and for all their
support.
December 2009
Carlos Serrão
Vicente Aguilera Díaz
Fabio Cerullo
Organization
Programme Committee
Chairs
Secretary
Members
Table of Contents
Abstracts
The OWASP Logging Project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Marc Chisinevski
Papers
A Semantic Web Approach to Share Alerts among Security Information
Management Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Jorge E. López de Vergara, Víctor A. Villagrá, Pilar Holgado,
Elena de Frutos, and Iván Sanz
27
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
The presentation explained current shortcomings of Security Information Management systems. A new solution and a working prototype were presented.
In current Security Information Management Systems it is difficult to obtain
relevant views of consolidated data (for instance, alarms concerning different clients
and different data centres over different periods of time), difficult to calculate
essential indicators for management (risk indicators such as the Annual Loss Expectancy for assets and the cost-effectiveness of proposed safeguards), and difficult to
compare with historical data; there are also some severe performance issues.
The proposed solution for these problems is based on the use of a multidimensional database, which presents several advantages, such as presenting risk assessment and safeguard cost-effectiveness scenarios to the CFO/CEO and presenting data
through different useful views (Client, Asset, Data Center, Time, Geography). The
Client view is particularly important for Software-as-a-Service and Cloud providers in
order to assess conformity with Service Level Agreements and legal obligations for
each customer. The Asset view is essential for management, allowing them to assess
the risks for business processes and information.
To achieve this, the raw data acquired by the Security Information Management
system (events on servers) needs to be correlated and consolidated. The following
facts need to be taken into account when assessing the risk: an asset has an intrinsic
value, and an asset's value increases if other assets (information, business processes,
servers) depend on it. With this approach, risk indicators are easy to calculate and analyze, and it is
easier to clearly define aggregation levels, such as raw data (Event, Server) and
consolidated data (Alarm (correlated events), Asset, Client, Data Center, Time, Geography). Reporting queries no longer run on the Security Information Management
system's production database, and it is possible to analyze the data (drill-down, roll-up,
slice) without writing SQL and to integrate data from different sources.
C. Serrão, V. Aguilera, and F. Cerullo (Eds.): IBWAS 2009, CCIS 72, p. 1, 2010.
© Springer-Verlag Berlin Heidelberg 2010
SQL Injection has been around for over ten years, and yet to this day it is still not truly
understood by many security professionals and developers. With the recent mass
attacks against sites across the world, and well-publicised data breaches with SQL
Injection as a component, it has again come under the spotlight; however, many
consider it to be only a data access issue, or parameterized queries to be a panacea.
This talk explores the deeper, darker areas of SQL Injection: hybrid attacks,
SQL Injection worms, and exploiting database functionality. It also explores
what we can expect in the future.
In this talk Dinis Cruz will show the OWASP O2 Platform, an open
source toolkit specifically designed for developers and security consultants to be
able to perform quick, effective and thorough 'source-code-driven' application
security reviews. The OWASP O2 Platform (http://www.owasp.org/index.php/
OWASP_O2_Platform) consumes results from the scanning engines of Ounce
Labs, Microsoft's CAT.NET tool, FindBugs, CodeCrawler and AppScan DE, and
also provides limited support for Fortify and OWASP WebScarab dumps. In the
past, there has been a very healthy skepticism about the ability of source code
analysis engines to find common vulnerabilities in real-world applications.
This presentation will show that, with some creative and powerful tools, it is indeed
possible to use O2 to discover those issues. The presentation will also show O2's
advanced support for Struts and Spring MVC.
The underground cybercrime economy has grown significantly in size and complexity over the past couple of years due to a variety of factors, including the rise of
social media tools, the global economic slowdown, and an increase in the total number of Internet users. For the past three years, PandaLabs has monitored the ever-evolving
cybercrime economy to discover its tactics, tools, participants, motivations and victims, to understand the full extent of criminal activities and ultimately bring an end to
the offenses. In October 2008, PandaLabs published findings from a comprehensive study on the rogueware economy, which concluded that the cybercriminals behind fake antivirus software applications were generating upwards of $15 million per
month. In July 2009, it released a follow-on study showing that monthly earnings
had more than doubled to approximately $34 million through rogueware attacks distributed via Facebook, MySpace, Twitter, Digg and targeted Blackhat SEO. This session will reveal the latest results from PandaLabs' ongoing study of the cybercrime
economy, illustrating the latest malware strategies used by criminals and examining the
changes in their attack strategies over time. The goal of this presentation is to raise
awareness of this growing underground economy.
Microsoft IT's Information Security (InfoSec) group is responsible for information security risk management at Microsoft. We concentrate on protecting the data of
Microsoft's assets, business and enterprise. Our mission is to enable secure and reliable
business for Microsoft and its customers. We are an experienced group of IT professionals, including architects, developers, program managers and managers.
This talk will present different technologies developed by InfoSec to protect Microsoft and released for free, such as CAT.NET, SPIDER, SDR, TAM and SRE, and how
they fit into the SDL (Security Development Lifecycle).
By now everyone knows that security must be built into software; it cannot be bolted
on. For more than a decade, scientists, visionaries, and pundits have put forth a multitude of techniques and methodologies for building secure software, but there has been
little to recommend one approach over another or to define the boundary between
ideas that merely look good on paper and ideas that actually get results. The alchemists and wizards have put on a good show, but it's time to look at the real empirical
evidence.
This talk examines software security assurance as it is practiced today. We will
discuss popular methodologies and then, based on in-depth interviews with leading
enterprises such as Adobe, EMC, Google, Microsoft, QUALCOMM, Wells Fargo,
and the Depository Trust & Clearing Corporation (DTCC), present a set of benchmarks
for developing and growing an enterprise-wide software security initiative, including
but not limited to its integration into the software development lifecycle (SDLC). While
all initiatives are unique, we find that the leaders share a tremendous amount of common ground and wrestle with many of the same problems. Their lessons can be applied to build a new effort from scratch or to expand the reach of existing
security capabilities.
Over the last five years, we in the security field have been witnessing an increase
in the number of attacks on (Web) application users' credentials, and in the refinement
and sophistication these attacks have been gaining. There are currently several methods and mechanisms to increase the strength of the authentication process for Web
applications, improving not only user authentication but also transaction
authentication. As examples, one can think of adding one-time password
tokens, digital certificates, EMV cards, or even SMS one-time codes. However,
none of these methods comes for free, nor do they provide perfect security. One
must also consider usability penalties, mobility constraints, and, of course, the direct costs
of the gadgets. Moreover, there is evidence that not all kinds of attacks can be stopped
even by the most sophisticated of these methods. So, where do we stand? What
should we choose? What kind of gadgets should we use for our business-critical application,
how much will they increase the costs and reduce the risk, and, last but not least, what
kinds of attacks will we be unable to stop anyway? This presentation will focus on ways
to evaluate the pros and cons of adding these improvements, given
the current threats.
The presentation "Cloud Computing: Benefits, Risks and Recommendations for Information Security" will cover some of the most relevant information security implications
of cloud computing from the technical, policy and legal perspectives.
Information security benefits and top risks will be outlined and, most importantly,
concrete recommendations will be given on how to address the risks and maximise the benefits for
users.
The primary aim of the OWASP Top 10 is to educate developers, designers, architects and organizations about the consequences of the most important web application
security weaknesses. The Top 10 provides basic methods to protect against these high
risk problem areas and provides guidance on where to go from here.
The Top 10 project is referenced by many standards, books, tools, and organizations, including MITRE, PCI DSS, DISA, FTC, and many more. The OWASP Top 10
was initially released in 2003, minor updates were made in 2004 and 2007, and this
is the 2010 release. We encourage you to use the Top 10 to get your organization started
with application security.
Developers can learn from the mistakes of other organizations. Executives can start
thinking about how to manage the risk that software applications create in their
enterprise.
This significant update presents a more concise, risk-focused list of the Top 10
Most Critical Web Application Security Risks. The OWASP Top 10 has always been
about risk, but this update makes that much clearer than previous editions, and
provides additional information on how to assess these risks for your applications.
For each Top 10 item, this release discusses the general likelihood and consequence
factors used to categorize the typical severity of the risk, and then presents
guidance on how to verify whether you have problems in this area, how to avoid
them, some example flaws in that area, and pointers to links with more information.
Secure applications do not just happen: they are the result of an organization deciding that it will produce secure applications. OWASP does not wish to force a
particular approach or require an organization to comply with laws that
do not affect it, as every organization is different.
However, for a secure application, the following are required at a minimum:
- Organizational management which champions security
- A written information security policy properly derived from national standards
- A development methodology with adequate security checkpoints and activities
- Secure release and configuration management
Many of the tools, documentation and controls developed by OWASP are influenced by requirements in international standards and control frameworks such as
COBIT and ISO.
Furthermore, OWASP resources can be used by any type of organization ranging
from universities to financial institutions in order to develop, test and deploy secure
web applications. This presentation will introduce you to some of the most successful
projects such as:
- OWASP Enterprise Security API, which can be used to mitigate the most common flaws in web applications;
- OWASP ASVS which is intended as a standard on how to verify the security
of web applications;
- OWASP Top 10 which helps to educate developers, designers, architects and
organizations about the consequences of the most important web application
security weaknesses;
- OWASP Development Guide which shows how to architect and build a secure application;
- OWASP Code Review Guide, which shows how to verify the security of an
application's source code;
- OWASP Testing Guide, which shows how to verify the security of your running
application.
Finally, as OWASP believes education is a key component in building secure applications, some of the initiatives being carried out by the OWASP Global Education
Committee will be highlighted.
How secure must an application be? To take the appropriate measures we have to
identify the risks first and think about the measures later. Threat risk modelling is an
essential process for secure web application development. It allows organizations to
determine the correct controls and to produce effective countermeasures within
budget. This presentation is about how to do threat risk modelling: what is needed
to start, and where to go from there.
1 Introduction
Security is an important issue for Internet Service Providers (ISPs). They have to keep
their systems safe from external attacks to maintain the service levels they provide to
customers. Security threats are identified at routers, firewalls, intrusion detection
systems, etc., generating several alerts in different formats. To deal with all these incidents, ISPs usually have a Security Information Management System (SIMS) [1],
which collects the event data from their network devices to manage and correlate the
information about any incident. A SIMS is useful to detect intrusions at a global level,
centralizing the alarms from several security devices.
A step forward in this type of system would be the distribution of alerts among
SIMS from different ISPs and different vendors for an early response to network incidents. Thus, mechanisms to communicate security notifications and actions have to be
developed. These mechanisms will enable collaboration among SIMS to share information about incoming attacks. For this, it is important to homogenise the information the
SIMS are going to share. A data model has to be defined to address several problems
associated with representing intrusion detection alert data: alert information is inherently heterogeneous (some alerts are defined with very little information while others
provide much more), and intrusion detection environments are different, so
the same attack can be described with different information. Current solutions provide a common
XML format to represent alerts, named IDMEF (Intrusion Detection Message Exchange Format) [2]. Although this format is intended for exchanging messages, it is not a
good solution in a collaborative SIMS scenario, as each SIMS would flood the other
SIMS with such messages. It would be better for a SIMS to ask other SIMS about certain alerts, and later infer its situation based on that information. However,
IDMEF was not defined to query for an alert set.
A way to solve this is to use ontologies [3], which have been defined precisely to
share knowledge. Ontologies have previously been proposed to formally describe and
detect complex network attacks [4, 5, 6]. In this paper we propose to define an ontology based on IDMEF, where alerts are represented as instances of the Alert classes in
that ontology. The use of an ontology language also improves the information definition, as restrictions can be specified beyond data types (for instance, cardinality).
With this ontology, each SIMS can store a knowledge base of alerts and share it using semantic web interfaces. Other SIMS can then ask about alerts by querying such
knowledge bases through these interfaces. As a result, a SIMS is able
to share its knowledge with SIMS from other domains. This knowledge could include
policies, incidents, updates, etc.; in a first phase, sharing has been constrained to alert incidents.
The rest of the paper is structured as follows. The next section presents the architecture
of collaborative SIMS based on knowledge sharing. Then, the IDMEF ontology is explained, showing the process followed in its definition, as well as how to query it.
After this, an implementation of the system that receives IDMEF alerts and stores
them in a knowledge base is described. Results obtained with the different modules are
also provided. Finally, some conclusions and future work lines are given.
[Fig. 1: Collaborative architecture. In each SIMS, an instance generator maps incoming IDMEF alerts to IDMEF ontology instances stored in an alert knowledge base; a query generator in another SIMS (SIMS2) issues SPARQL queries against that knowledge base through a semantic web interface.]
3 IDMEF Ontology
The IDMEF format provides a common language to generate alerts about suspicious
events, which lets several systems collaborate in the detection of attacks, or in the
treatment of the stored alerts. Although IDMEF has some advantages (integration of
several sources, use of a well-supported format), it also has drawbacks (heterogeneous
data sources lead to several alerts for the same attack that do not contain the same
information).
To solve the identified problems, we have defined an alert ontology based on the
IDMEF structure. It is worth remarking that IDMEF has been defined
following a model of classes and properties, which makes the ontology definition easier,
with a more or less direct mapping. The ontology has been defined using OWL [11],
leveraging the advantages of the semantic web (distribution, querying, inferencing,
etc.), and also the results of [12]. Several class restrictions have been defined (cardinality, data types) by analyzing the IDMEF definition contained in [2].
The following conventions have been taken to define the IDMEF ontology:
- Class names start with a capital letter and are the same as the IDMEF class names.
- Property names start with a lower-case letter and have the format domain_propertyName, where domain is the name of the class to which the property
belongs, and propertyName is the name of the property.
The following rules have also been taken:
- Each class in an IDMEF message maps to a class in the IDMEF ontology.
- Each attribute of an IDMEF class is mapped to a data-type property in the corresponding ontology class.
- Classes that are contained in another class are in general mapped to object-type properties. An exception to this are aggregated classes that contain text, which have
been mapped to data-type properties.
- A subclass of an IDMEF class is also represented as a subclass in the ontology,
inheriting all the properties of its parent class.
- When an IDMEF attribute cannot contain several values, it is mapped to a functional property.
- When an IDMEF attribute can only have some specific values, the ontology defines
them as the allowed values.
- Numeric attributes are represented as numeric data-type properties, dates are represented as dateTime data-type properties, and the rest as string data-type properties.
Following the rules above, the ontology has been defined. Fig. 2 shows a representation of the Alert class, its child classes (OverflowAlert, ToolAlert and CorrelationAlert), and other referred classes (Classification, AdditionalData, Target, Source,
Assessment, CreateTime, AnalyzerTime, DetectTime, Analyzer). This figure has been
generated using the Protégé [13] ontology editor. The boxes represent the classes, and
the arcs can be inheritance (in black, labelled 'isa') or aggregation (in blue, labelled
with the property names) relationships. A UML (Unified Modelling Language) representation could also be provided, using the UML profile for OWL [14].
Our definition enables a mapping from IDMEF messages to IDMEF ontology instances. In this way, the information contained in each IDMEF message is translated
to an instance of Alert, with instances of Target, Source, etc., according to the information
contained in each message. The ontology includes other additional classes, so any
IDMEF message can be represented in the ontology.
With respect to a plain XML IDMEF message, the ontology provides several advantages. For instance, the information can be restricted as defined in the IDMEF
definition [2]. Moreover, query languages such as SPARQL can be used to query all
the information contained in the knowledge base, and it is not limited to the scope of a
concrete XML document, which would be the case of IDMEF messages.
To query the knowledge base, SPARQL has been chosen, given that it has recently been recommended by the W3C as the RDF/RDFS and OWL query language [9].
Using this language, a query can be defined as follows:
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX idmef: <http://www.dit.upm.es/IdmefOntology.owl#>
SELECT ?alert ?id ?target_address
WHERE {
?alert rdf:type idmef:Alert ;
idmef:alert_messageid ?id ;
idmef:alert_target ?target .
?target idmef:target_node ?tnode .
?tnode idmef:node_address ?taddress .
?taddress idmef:address_address ?target_address
}
The query starts with PREFIX clauses, which define the namespaces used to
identify the queried classes and properties. After this, the variables alert, id and target_address that meet a set of conditions are requested: the alert variable is of type Alert
and has the properties alert_messageid and alert_target. The alert_target property then refers to an instance with an address value, identified by the variable
target_address.
4 Implementation
The architecture proposed in Section 2 has been implemented. Apart from the components provided by existing semantic web implementations (mainly the Joseki server), we
have implemented the module that stores the IDMEF alerts in the knowledge base
(instance generator), as well as the module that queries alerts in an external knowledge base (query generator). The subsections below present these implementations; some results are provided later in Section 5.
4.1 Instance Generator
A module has been developed to map the IDMEF messages to ontology instances.
This module has been developed in Java, taking advantage of the libraries that this
language provides for parsing XML documents and ontologies. Fig. 3 shows the steps
that have to be performed to generate and save instances in the knowledge base:
[Fig. 3: Open IDMEF message (file) → Parse IDMEF message (XML) → Create IDMEF ontology instances → Save IDMEF ontology instances.]
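The parse step of this pipeline can be sketched with the XML facilities of the Java standard library. The IDMEF fragment below is a simplified, hypothetical example (real messages carry many more classes and attributes), and the class is illustrative rather than the actual module:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

public class IdmefParseSketch {

    // Parse a simplified IDMEF Alert and return its messageid attribute.
    public static String parseMessageId(String xml) {
        Element alert = (Element) parse(xml).getElementsByTagName("Alert").item(0);
        return alert.getAttribute("messageid");
    }

    // Return the text of the first Target address element, or null if absent.
    public static String parseTargetAddress(String xml) {
        NodeList addrs = parse(xml).getElementsByTagName("address");
        return addrs.getLength() > 0 ? addrs.item(0).getTextContent() : null;
    }

    private static Document parse(String xml) {
        try {
            return DocumentBuilderFactory.newInstance().newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        String msg = "<IDMEF-Message><Alert messageid=\"abc123\">"
                + "<Target><Node><Address category=\"ipv4-addr\">"
                + "<address>192.0.2.10</address></Address></Node></Target>"
                + "</Alert></IDMEF-Message>";
        System.out.println(parseMessageId(msg));      // abc123
        System.out.println(parseTargetAddress(msg));  // 192.0.2.10
    }
}
```

The real instance generator would then create the corresponding ontology instances with the Jena libraries instead of returning plain strings.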
- Create models.
- Read and write models.
- Load models in memory.
- Query a model: look for information inside the model.
- Operations on models: union, intersection, difference.
Models can be stored in many ways, including OWL files, as well as representations
of the ontology in a relational database. In this last case, there are several storage possibilities, depending on the library used to represent the ontology in the database. In particular, SDB is a Jena library specifically designed to provide storage in SQL databases,
both proprietary and open source. This storage can be done through the SDB API.
4.2 Query Generator
The knowledge base, where the alerts are stored, can be queried through a semantic
web interface by other SIMS. For this, another module has been developed, which
performs SPARQL queries against a Joseki server over HTTP. This server accesses the
knowledge base and obtains the results of the query. These results are then received by the query module.
To connect the query module to Joseki, it is necessary to use the ARQ library [15],
which is a query engine for Jena. The query module can execute any SPARQL query.
For the most common queries, we have implemented a program which builds the query
depending on a series of parameters. For instance:
- All alerts depending on the time:
  - Alerts in the last week.
  - Alerts in the current day.
  - Alerts in a day.
  - Alerts in an interval of time.
- Alerts queried using other parameters:
  - Source IP address.
  - Target IP address.
  - Source port.
  - Target port.
  - Alert type.
  - Target of the attack.
  - Source of the attack.
  - Tools of the attack.
  - Overflow alert.
  - Analyzer.
  - Assessments of the attacks: impact, actions, etc.
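As an illustration, one of these parameterized queries (alerts for a given target IP address) can be obtained by extending the query of Section 3 with a FILTER clause. The builder below is a hypothetical sketch of this idea, not the actual module:

```java
public class AlertQueryBuilder {

    static final String PREFIXES =
          "PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>\n"
        + "PREFIX idmef: <http://www.dit.upm.es/IdmefOntology.owl#>\n";

    // Build a SPARQL query selecting the alerts whose target node
    // carries the given IP address, using the property names of Section 3.
    public static String alertsByTargetAddress(String ip) {
        return PREFIXES
            + "SELECT ?alert ?id WHERE {\n"
            + "  ?alert rdf:type idmef:Alert ;\n"
            + "         idmef:alert_messageid ?id ;\n"
            + "         idmef:alert_target ?target .\n"
            + "  ?target idmef:target_node ?tnode .\n"
            + "  ?tnode idmef:node_address ?taddress .\n"
            + "  ?taddress idmef:address_address ?target_address .\n"
            + "  FILTER (?target_address = \"" + ip + "\")\n"
            + "}";
    }

    public static void main(String[] args) {
        // The resulting string would be sent to the Joseki server over HTTP
        // (via Jena's ARQ query engine in the real module).
        System.out.println(alertsByTargetAddress("192.0.2.10"));
    }
}
```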
5 Results
The implemented modules, presented above, have been tested to assess their performance. All the results have been obtained on a computer equipped with an Intel Core 2
Duo E8500 processor at 3.16 GHz with 6 MB of L2 cache and 2 GB of RAM. Previous
tests with older computers provided worse results.
Time to insert each type of IDMEF message into the knowledge base:

IDMEF message        JDBC   SDB    SPARQL/Update
Assessment           1235   1040   640
Correlated Alert     1250   1035   640
Disallowed Service   1250   1050   625
Load Module          1220   1050   640
Load Module 2        1250   1035   610
Phf                  1220   1035   625
Ping of Death        1220   1035   640
Policy Violation     1265   1035   610
Scanning             1235   1035   610
Teardrop             1220   1035
These times are measured after the database has been created and the ontology model has been represented in the database. If the database and the model also have to be created, there are two possibilities:

- Use of JDBC (Java Database Connectivity), with a time of around 1.9 s.
- Use of the SDB library, with a time of around 1.125 s, faster than the previous case.
Both the JDBC and SDB libraries facilitate the connection from Java applications to databases containing ontologies, independently of the operating system. These libraries are also compatible with different databases. In addition, SDB is a Jena component designed specifically to support SPARQL queries, and it provides storage in both proprietary and open source SQL databases.
Once the database has been created, there are three alternatives to insert the instances into the ontology database: JDBC, SDB and SPARQL/Update [16]. With respect to the last alternative, SPARQL/Update is an extension to SPARQL that lets a programmer define insert clauses, whereas JDBC and SDB insert data into the ontology by creating ontology data structures in memory that are later stored.

From our experiments, the best measurements are obtained when SPARQL/Update is used to insert the instances: the insertion times are approximately 60% of those obtained with the SDB library, and 50% of those obtained with plain JDBC. The Assessment message is an exception, because it contains characters that cannot be used in a SPARQL/Update sentence; in this case, the SDB library should be used instead.
5.2 Query Generator
Some measurements have also been taken of the time it takes to perform a concrete query from the query module, through the Joseki server, to a test knowledge base with 112 alerts. Simplified versions of the queries used for the experiment are shown below (they also included other variables that could be useful about other alert properties):
Alerts depending on a time interval:
PREFIX rdf:
<http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX idmef: <http://www.dit.upm.es/IdmefOntology.owl#>
SELECT ?alert ?time
WHERE {
?alert rdf:type idmef:Alert .
?alert idmef:alert_createTime ?createTime .
?createTime idmef:createTime_time ?time .
FILTER (?time > time1).
FILTER (?time < time2)
}
where time1 and time2 are properly replaced to query for a concrete period of time.
Alerts depending on the source IP address.
PREFIX rdf:
<http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX idmef: <http://www.dit.upm.es/IdmefOntology.owl#>
SELECT ?alert ?sourceAddress
WHERE {
?alert rdf:type idmef:Alert.
?alert idmef:alert_source ?source.
?source idmef:source_node ?node.
?node idmef:node_address ?address.
?address idmef:address_address ?sourceAddress.
FILTER (?sourceAddress = ipAddr)
}
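Substituting time1, time2 or ipAddr into these templates is plain string templating; a minimal Python sketch follows (the function names are ours, while the prefixes and property names are those of the queries above; in a real module the values should be validated before substitution to avoid query injection):

```python
# Building the two parameterized queries shown above.

PREFIXES = (
    "PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>\n"
    "PREFIX idmef: <http://www.dit.upm.es/IdmefOntology.owl#>\n"
)

def alerts_by_time(time1, time2):
    """Alerts whose creation time lies in the interval (time1, time2)."""
    return PREFIXES + (
        "SELECT ?alert ?time WHERE {\n"
        "  ?alert rdf:type idmef:Alert .\n"
        "  ?alert idmef:alert_createTime ?createTime .\n"
        "  ?createTime idmef:createTime_time ?time .\n"
        f'  FILTER (?time > "{time1}") .\n'
        f'  FILTER (?time < "{time2}")\n'
        "}"
    )

def alerts_by_source_ip(ip):
    """Alerts originating from the given source IP address."""
    return PREFIXES + (
        "SELECT ?alert ?sourceAddress WHERE {\n"
        "  ?alert rdf:type idmef:Alert .\n"
        "  ?alert idmef:alert_source ?source .\n"
        "  ?source idmef:source_node ?node .\n"
        "  ?node idmef:node_address ?address .\n"
        "  ?address idmef:address_address ?sourceAddress .\n"
        f'  FILTER (?sourceAddress = "{ip}")\n'
        "}"
    )
```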
Obtained results    23     9    32
Time (ms)          547   500   641

Obtained results     1
Time (ms)          453

Table 4. Knowledge base query times depending on the target IP of the alerts

Obtained results    11    33    77
Time (ms)          500   625   750

Obtained results     2    13     7
Time (ms)          468   484   468
As shown, the time to retrieve the results depends on the number of alerts that match the query, but not on the query itself. Further tests have to be performed with larger knowledge bases.
6 Conclusions
This work has assessed the applicability of semantic web technologies to security information management systems, providing a way to share information semantically among different security domains. For this, an ontology based on IDMEF has been defined, which can hold all the information of any IDMEF message. To test this ontology, we have also defined and implemented a semantic collaborative SIMS architecture, where each SIMS stores its IDMEF alerts in a knowledge base and can query other SIMSs' knowledge bases through a SPARQL interface.
The storage tests showed the times needed to save alerts, which can be acceptable for a prototype but not for a production system that receives tens of alerts per second. Thus, several approaches have been taken to improve these times. On the one hand, the Jena SDB library has been used to optimize the storage of the ontology in a database. On the other hand, the use of SPARQL/Update has been proposed, to limit the saving time to the information contained in each alert. Another improvement has been parsing alerts continuously, to avoid launching a Java process each time an IDMEF message arrives at the instance generator. In this way, we could halve the storage time compared with the initial approach.
With respect to the query modules, we have performed preliminary tests with good results. We will run further tests, modifying the size of the knowledge base to check how the system performs with larger data sets. It is also important to note that the instances of old alerts are periodically deleted from the knowledge base, which prevents its size from growing ad infinitum.
As further future work, we will study how to perform inference with the information contained in the knowledge bases.
Acknowledgements. This work has been done in the framework of the collaboration with Telefónica I+D in the project SEGUR@ (reference CENIT-2007 2004, https://www.cenitsegura.es), funded by the CDTI, Spanish Ministry of Science and Innovation, under the CENIT program.
References
1. Dubie, D.: Users shoring up net security with SIM. Network World (September 30, 2001)
2. Debar, H., Curry, D., Feinstein, B.: The Intrusion Detection Message Exchange Format
(IDMEF). IETF Request for Comments 4765 (March 2007)
3. Gruber, T.R.: A Translation Approach to Portable Ontology Specifications. Knowledge Acquisition 5(2), 199–220 (1993)
4. Undercoffer, J., Joshi, A., Pinkston, A.: Modeling computer attacks: an ontology for intrusion detection. In: Vigna, G., Krügel, C., Jonsson, E. (eds.) RAID 2003. LNCS, vol. 2820, pp. 113–135. Springer, Heidelberg (2003)
5. Geneiatakis, D., Lambrinoudakis, C.: An ontology description for SIP security flaws. Computer Communications 30(6), 1367–1374 (2007)
6. Dritsas, S., Dritsou, V., Tsoumas, B., Constantopoulos, P., Gritzalis, D.: OntoSPIT: SPIT management through ontologies. Computer Communications 32(1), 203–212 (2009)
7. Joseki - A SPARQL Server for Jena, http://www.joseki.org/
8. Jena - A Semantic Web Framework for Java, http://jena.sourceforge.net/
9. Prud'hommeaux, E., Seaborne, A.: SPARQL Query Language for RDF. W3C Recommendation (January 15, 2008)
10. SDB - A SPARQL Database for Jena, http://jena.sourceforge.net/SDB/
11. McGuinness, D.L., van Harmelen, F.: OWL Web Ontology Language Overview. W3C
Recommendation (February 10, 2004)
12. López de Vergara, J.E., Vázquez, E., Martin, A., Dubus, S., Lepareux, M.N.: Use of ontologies for the definition of alerts and policies in a network security platform. Journal of Networks 4(8), 720–733 (2009)
13. Gennari, J.H., Musen, M.A., Fergerson, R.W., Grosso, W.E., Crubézy, M., Eriksson, H., Noy, N.F., Tu, S.W.: The evolution of Protégé: an environment for knowledge-based systems development. Int. J. Hum.-Comput. Stud. 58(1), 89–123 (2003)
14. Object Management Group: Ontology Definition Metamodel Version 1.0. OMG document
number formal/2009-05-01 (May 2009)
15. ARQ - A SPARQL Processor for Jena, http://jena.sourceforge.net/ARQ/
16. Seaborne, A., Manjunath, G., Bizer, C., Breslin, J., Das, S., Davis, I., Harris, S., Idehen, K., Corby, O., Kjernsmo, K., Nowack, B.: SPARQL/Update: A language for updating RDF graphs. W3C Member Submission (July 15, 2008)
1 Introduction
Nowadays, web applications handle more and more sensitive information. As a consequence, web applications are an attractive target for attackers, who can perform attacks with devastating consequences. Therefore, the proper protection of these systems is very important, and it becomes necessary for site administrators to assess the security of their web applications.
In addition, these days most network-capable devices, including simple consumer electronics such as printers and photo frames, have an embedded web interface for easy configuration [1]. These web interfaces can also suffer a large variety of attacks, so they should also be protected [1].
This paper presents a tool for the assessment of the security of different web authentication schemes.
Usually, some web application areas have restricted access. Authentication makes it possible to verify the identity of the person accessing the web application.
Our tool is able to analyse the security of web applications using two HTTP authentication schemes, namely Basic Authentication and Form-Based Authentication.
Basic Authentication is a challenge-response mechanism that is used by a server to challenge a client and by a client to provide authentication information. In this scheme, the user agent authenticates itself by providing a user-ID and a password
C. Serrão, V. Aguilera, and F. Cerullo (Eds.): IBWAS 2009, CCIS 72, pp. 39–49, 2010.
© Springer-Verlag Berlin Heidelberg 2010
when accessing a protected space. The server will authorize the request only if it can validate the user-ID and password for the protection space corresponding to the URI of the request.
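The credential encoding of the Basic scheme can be reproduced in a few lines; the sketch below uses the classic RFC 2617 example values:

```python
# The Basic scheme's Authorization header: the user-ID and password are
# joined by a colon and Base64-encoded.
import base64

def basic_auth_header(user_id, password):
    token = base64.b64encode(f"{user_id}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

# basic_auth_header("Aladdin", "open sesame")
# -> "Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ=="
```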
Form-Based Authentication is the most widely used authentication scheme. When the client accesses a protected service or resource, the user is required to fill in a form with a username and a password. These credentials are submitted to the web server, where they are validated against the database containing the usernames and passwords of all users registered in the web application. Access is only granted if the credentials are present in the database.
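The request body that such a form submission produces can be sketched as follows; the credential field names and any hidden fields are application-specific, so the names below are placeholders:

```python
# Building the body a form-based login submits to the server.
from urllib.parse import urlencode

def login_body(user_field, pass_field, username, password, extra=None):
    fields = {user_field: username, pass_field: password}
    fields.update(extra or {})       # hidden fields, submit button, etc.
    return urlencode(fields)

# login_body("user", "pwd", "alice", "s3cret", {"submit": "Login"})
# -> "user=alice&pwd=s3cret&submit=Login"
```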
Further information about these HTTP authentication schemes is presented in Sect. 2.
WASAT can be applied against any web application having an authentication mechanism. The tool can mount dictionary and brute-force attacks of varying complexity against the target web site. User and password files can be configured to be used as the search space. Variations on the passwords can be generated using a simple special syntax in the password file, which makes it possible to perform exhaustive searches. Low-signature attacks can also be mounted with this tool, in order to avoid detection. Several strategies can be used to generate low-signature attacks, such as distributing the requests of a user over several time periods.
The number of threads used by the application can be configured by the user in order to improve the speed of the program. A list of proxies can also be specified in order to make the requests anonymous.
The session configuration data can be stored in a file and opened later, making it easier to initialize a new session. Moreover, the process can be paused and continued later. WASAT also has a useful and complete help file for users.
The rest of the paper is organized as follows. Section 2 reviews different authentication schemes. In Sect. 3, several mechanisms that web servers can use to detect brute-force attacks are described. Section 4 refers to related work. In Sect. 5, the features and the behavior of WASAT are explained. Section 6 outlines future work and, finally, Sect. 7 presents the conclusions of this work.
4 Related Work
There are several popular tools similar to our application, such as Crowbar [3], Brutus [4], Caecus [5], THC-Hydra [6] and WebSlayer [7].
All these tools have been tested and several of their features have been considered. The importance of some of these features was explained in Sect. 3. The considered features are the following:
Multi-Threading. It refers to the ability to establish different connections with the server concurrently and speed up the process.
Proxy Connection. Using proxies makes it possible to establish anonymous connections to the server.
Password Generation. Automatic password generation allows the user to build many password combinations without writing a huge wordlist.
Inter-Request Time. It refers to the minimum time interval between attempts with the same username.
Restore Sessions. The use of sessions lets the user restore previously aborted sessions.
Multi-Platform. It means the tool can run on any platform; the application is not platform-dependent.
Proxy Connection and Inter-Request Time make it possible to avoid IP-based and time-based anti-brute-force mechanisms, respectively.
In Table 1, these tools are compared against WASAT, according to the selected
features.
Table 1. Cracking tools comparison

Feature/Tool          Hydra    Caecus   Brutus    Crowbar   WebSlayer   WASAT
Multi-Threading       Yes      Yes      Yes       Yes       Yes         Yes
Proxy Connection      Single   List     Single    No        Single      List
Password Generation   No       No       Limited   No        Generator   Script
Inter-Request Time    No       No       No        No        No          Yes
Restore Sessions      Yes      No       No        No        No          Yes
Multi-Platform        No       No       No        No        No          Yes
An experimental comparison of the time required for brute-force attacks has not been included in this paper, as it depends on the bandwidth and the server load.
5 Application Description
WASAT offers the possibility to specify the configuration of the target web application and the desired authentication method. The program preferences can also be configured by the user. After specifying the configuration, the analysis can be started, paused or stopped. The configuration parameters of every session can be saved in a file, and a configuration file can be loaded as well.
The current version of WASAT can be downloaded from http://www.iec.csic.es/wasat.
A snapshot of the main window of WASAT is presented in Fig. 1.
Basic Authentication. If basic authentication was selected in the Target tab, the error code should be chosen in this tab. There are two possible values for the HTTP Error Code: 200 OK or 302 Object Moved.
Form-Based Authentication. If form-based authentication was chosen in the Target tab, some parameters have to be defined in the Form-Based tab:
Request Settings
These are parameters regarding the request settings:
Method. This is the HTTP method used in the form submission. The default value
of this parameter is GET.
User ID. This parameter refers to the input text element name corresponding to the
username used in the form.
Password ID. This parameter is the name of the input text element corresponding to the password used in the form.
Arguments. This parameter is optional. All other input arguments used by the form should be written here. They are usually the hidden fields in the form. The submit button name and value should be included too. It is important that every argument (except the first one) is preceded by the & sign. Note that this text should be URL-encoded; thus, for example, no blanks or spaces are allowed, and they must be replaced by a + sign.
Referer. This parameter is optional. The Referer header should be written here in
case the login page requires it.
User Agent. This parameter is optional. The user can enter the User-Agent
header if the login page requires it.
Cookie. This parameter is optional. The Cookie header can be established in this
parameter in case the login page needs it.
HTML Response
Some parameters concerning the HTML response should also be filled in. It is necessary to distinguish between the error page after an unsuccessful login attempt (the credentials are wrong and the request failed) and the welcome page after a successful attempt (the credentials are correct and the request succeeded). WASAT provides two stop methods to differentiate the two pages.
The first method uses words that appear only in the error page or only in the welcome page to distinguish them. The second method is based on the length of the pages.
Firstly, the user should choose the method: Search for string or Content-Length
comparison.
The search-for-string method checks for the presence of a word or sentence that appears only in the welcome page, or for the absence of a word or sentence that appears only in the error page. This option needs to retrieve the whole page to search for the given string. In this case, the parameters are the following:
Succeed. Any sentence that appears only in the page reached after a valid username/password pair has been guessed. This parameter is optional, since in many cases it is not known in advance.
Failure. It should contain any sentence that appears in the login error page (and never in the correct page) after an invalid username/password pair has been checked. This parameter is mandatory.
The content-length comparison method checks the length of the error and welcome pages. This method does not require retrieving the whole page, only the headers, and is thus much faster. If this option is chosen, the parameters are the following:
Succeed. This is an optional parameter. It refers to the length in bytes of the welcome page.
Failure. It is mandatory. It is the length in bytes of the error page.
Variation. This parameter is optional. It can be supplied in order to accommodate small variations due to banners or other changing elements in web pages which may affect the total length of the page.
A snapshot of the Configuration window is shown in Fig. 2.
Wordlists. In this tab the wordlist files and the processing instructions are defined.
Wordlist files
The program reads a list of usernames from a file and, for each username, tries to log in using every password defined in the password list file. In order to generate low-signature attacks, the application can also read a list of passwords from a file and, for each password, try to log in using every username defined in the usernames list file.
$Un: it tries all uppercase letters from A to Z, n times. Example: $U4 will try AAAA, AAAB, . . . , ZZZY, ZZZZ.
$Wn: it tries numbers from 0 to 9 and all letters (uppercase and lowercase), n times. Example: $W5 will try 00000, 00001, . . . , AAAAA, AAAAB, . . . , ZZZZZ, aaaaa, aaaab, . . . , zzzzy, zzzzz.
The above keywords can be used in any position, or even alone. The only limitation is that several keywords cannot be used in the same password definition.
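The keyword expansion described above can be sketched as a generator; this is our illustration (only the two keywords shown here are implemented, and one keyword per pattern is assumed, matching the stated limitation):

```python
# Expanding wordlist keywords: "$U<n>" yields every uppercase string of
# length n, "$W<n>" every digit/letter string of length n. The keyword
# may appear anywhere in the pattern, but only one per definition.
import itertools
import re
import string

ALPHABETS = {
    "U": string.ascii_uppercase,
    "W": string.digits + string.ascii_uppercase + string.ascii_lowercase,
}

def expand(pattern):
    m = re.search(r"\$([UW])(\d+)", pattern)
    if m is None:          # no keyword: the pattern is a literal password
        yield pattern
        return
    alphabet, n = ALPHABETS[m.group(1)], int(m.group(2))
    for combo in itertools.product(alphabet, repeat=n):
        yield pattern[:m.start()] + "".join(combo) + pattern[m.end():]

# list(expand("pass$U1")) -> ["passA", "passB", ..., "passZ"]
```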
5.2 Program Preferences
The application makes it possible to configure the program preferences.
General. Two general parameters can be set:
Number of sockets. This is an important parameter, as it specifies the number of sockets running in parallel. Obviously, the more sockets, the more speed. However, the speed gain is not linear with the number of sockets, but logarithmic, which means that beyond a given value there is little or no gain in speed. Recommended values for the socket limit are usually below 50. For most purposes and bandwidths, 4 is a rather fast option. The default value is 1.
Timeout. It determines the time in milliseconds the socket waits for the server reply. The default value is 10 milliseconds.
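The effect of the socket count can be pictured with a small worker-pool sketch (our illustration, not WASAT's actual code; try_login stands in for one network request):

```python
# Running authentication attempts over a configurable number of workers.
from concurrent.futures import ThreadPoolExecutor

def run_attempts(credentials, try_login, workers=4):
    """Return the first (user, password) pair accepted by try_login."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(lambda cred: try_login(*cred), credentials)
        for cred, ok in zip(credentials, results):
            if ok:
                return cred
    return None
```

Because each attempt is dominated by network latency, adding workers helps only until the link is saturated, which is consistent with the diminishing returns described above.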
Other options can be defined:
Use Proxy. It determines whether proxies are used. The list of proxies is defined in the Proxies tab.
Inter-request time. It specifies the minimum time (in milliseconds) between the requests of a specific user. This low-signature strategy makes it possible to distribute the requests of a user over time. The default value is 10 milliseconds.
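The inter-request option amounts to a per-user throttle; a minimal sketch follows (class and method names are ours; the clock and sleep functions are injectable so the logic can be exercised without real waiting):

```python
# Per-user minimum interval between authentication attempts.
import time

class PerUserThrottle:
    def __init__(self, min_interval, clock=time.monotonic, sleep=time.sleep):
        self.min_interval = min_interval
        self.clock = clock
        self.sleep = sleep
        self.last = {}          # username -> time of the previous attempt

    def wait(self, username):
        """Block until min_interval has elapsed since this user's last try."""
        now = self.clock()
        prev = self.last.get(username)
        if prev is not None and now - prev < self.min_interval:
            self.sleep(self.min_interval - (now - prev))
        self.last[username] = self.clock()
```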
Proxies. A list of proxies can be defined when needed. The option Use Proxy in the General tab should be checked to use the list. Specifying a list of proxies makes the requests anonymous. The following information is needed for every defined proxy:
Host. It refers to the proxy server IP address or host name.
Port. It is the proxy server port number.
If authentication is needed to use the proxy, then the option Authentication required should be checked and the following parameters entered:
Username. It is a valid username.
Password. It is a valid password.
Logging. In this tab the settings about the log file can be established.
Log File. The user can check the option Log results to file if the results are to be logged to a file, whose path and name must be specified too. When the option Log activity report is checked, general operations performed by the program, such as opening or closing files and initializing or terminating, will also be logged to the file.
5.3 Commands
Definition File. The New button starts a new analysis session. All the information entered in the configuration frame can be saved in a definition file by clicking Save. Clicking the Open button loads a definition file into the configuration. Opening such a file simplifies the task of initializing the program for a new brute-force session.
Analysis Execution. Clicking the Start button starts the analysis, using the parameters established in the configuration and the preferences. The analysis can be paused and later resumed, or completely stopped.
6 Future Work
These days, many web applications provide CAPTCHAs [8] in order to determine whether the user is a human or a machine. The use of CAPTCHAs has become a very popular mechanism for web applications to prevent brute-force attacks. To our knowledge, none of the existing authentication security tools implements a means to bypass this barrier.
As future work, we are working to include in WASAT an anti-CAPTCHA mechanism based on artificial intelligence techniques. This feature will let the application bypass the CAPTCHA barrier and permit the assessment of a wider range of web applications.
7 Conclusions
An intuitive and complete Web Authentication Security Analysis Tool has been presented in this paper. This application is designed for the security assessment of different web-related authentication schemes, namely Basic Authentication and Form-Based Authentication. The configuration of the analysis process against the target web application and the program preferences can be specified by the user.
The application is platform independent and presents several advantages compared with other popular existing tools, while it has hardly any of their drawbacks. First, WASAT has features that make the authentication assessment easier for the user, such as automatic password generation, wordlist variations, restoring of aborted sessions, and complete, user-friendly help. Second, WASAT has features that avoid time-based and IP-based anti-brute-force mechanisms on the server side, such as the mounting of low-signature attacks and proxy connections. Third, the use of multithreading improves the efficiency drastically, making it possible to perform multiple authentication attempts simultaneously.
Acknowledgements
We would like to thank the Ministerio de Industria, Turismo y Comercio, project SEGUR@ (CENIT2007-2010) and project HESPERIA (CENIT2006-2009), the Ministerio de Ciencia e Innovación, project CUCO (MTM2008-02194), and the Spanish National Research Council (CSIC), programme JAE/I3P.
References
1. Bojinov, H., Bursztein, E., Lovett, E., Boneh, D.: Embedded Management Interfaces:
Emerging Massive Insecurity. In: Black Hat Technical Security Conference, Las Vegas,
NV, USA (2009)
2. Berners-Lee, T., Fielding, R., Frystyk, H.: Hypertext Transfer Protocol - HTTP/1.0 (1996), http://www.ietf.org/rfc/rfc1945.txt
3. Crowbar: Generic Web Brute Force Tool (2006),
http://www.sensepost.com/research/crowbar/
4. Hobbie: Brutus (2001), http://www.hoobie.net/index.html
5. Sentinel: Caecus. OCR Form Bruteforcer (2003),
http://sentinel.securibox.net/Caecus.php
6. Hauser, V.: THC-Hydra (2008), http://freeworld.thc.org/thc-hydra/
7. Edge-Security: WebSlayer (2008),
http://www.edge-security.com/webslayer.php
8. Carnegie Mellon University: CAPTCHA: Telling Humans and Computers Apart Automatically (2009), http://www.captcha.net/
Informatica64, S.L.
Universidad Rey Juan Carlos
{chema,mfernandez,amartin}@informatica64.com, antonio.guzman@urjc.es
Abstract. In 2007, the classification of the ten most critical vulnerabilities for the security of a system placed code injection attacks second, behind XSS attacks. Currently, code injection attacks occupy the first position in this ranking. In fact, the most critical attacks are those that combine XSS techniques to access systems with code injection techniques to access the information. The potential damage associated with this type of threat, the total absence of background, and the fact that the solution to mitigate this vulnerability must be implemented by system administrators and database vendors justify an in-depth analysis to estimate all the possible ways of implementing this attack technique.
Keywords: Code injection attacks, connection strings, web application authentication delegation.
1 Introduction
SQL injection attacks are probably the best-known attacks against a web application through its database architecture. A great deal of research on this kind of vulnerability concludes that it is the development team's task to establish the correct input-filtering levels so that such attacks cannot succeed.
In the case of the attack presented in this article, the responsibility rests not only with the developers, but also with the system administrator and the database vendor. It is an injection attack that affects web applications, but instead of focusing on the application's implementation it targets the connections that are established between the application and the database.
According to OWASP [1], the 2007 classification of the ten most critical vulnerabilities for the security of a system placed code injection attacks second, behind XSS attacks. In 2010, code injection attacks occupy the first position in this ranking. Currently, the most used and most critical attacks are those that combine XSS techniques to access systems with code injection techniques to access the information. This is the case for the so-called connection string parameter pollution (CSPP) attacks. The potential criticality of this type of vulnerability and the total absence of background justify an in-depth analysis to estimate all implementation vectors of this attack technique.
C. Alonso et al.
The paper is organized in three main sections. The first is this short introduction, where the most significant aspects of connection strings and the existing mechanisms for web application authentication are briefly introduced. Section 2 proposes a comprehensive study of this new attack technique, with an extensive collection of test cases. Finally, the article concludes by briefly summarizing the lessons learned from the work.
1.1 Connection Strings
Connection strings [2] are used to connect applications to database engines. The syntax used in these strings depends on the database engine to be connected to and on the provider or driver used by the programmer to establish the connection. One way or another, the programmer must specify the server to connect to, the database name, the credentials to use, and the connection configuration parameters, such as the timeout, alternate databases, communication protocol or encryption options.
The following example shows a common connection string used to connect to a
Microsoft SQL Server database:
Data Source=Server,Port; Network Library=DBMSSOCN;
Initial Catalog=DataBase; User ID=Username;
Password=pwd;
As can be seen, a connection string is a collection of parameters, separated by semicolons (;), containing key-value pairs. The attributes used in the example correspond to the ones used in the .NET Framework Data Provider for SQL Server, which is chosen by programmers when they use the SqlConnection class in their .NET applications. Obviously, it is possible to connect to SQL Server using different providers, such as:
.NET Framework Data Provider for OLE DB (OleDbConnection)
.NET Framework Data Provider for ODBC (OdbcConnection)
SQL Native Client 9.0 OLE DB provider
The most common and recommended way to connect .NET applications to SQL Server is to use the default Framework provider, where the connection string syntax is the same for the different versions of SQL Server (7, 2000, 2005 and 2008). This is the provider chosen in this article to illustrate the examples.
1.2 Web Application Authentication Delegation
There are two ways to define an authentication system for a web application: create a custom credential system, or delegate authentication to the database engine.
In most cases, the application developer chooses to use only one user to connect to the database. This user represents the web application inside the database engine. Using this connection, the web application makes queries against a custom users table where the user credentials are managed.
As only one user accesses all the content of the database, it is impossible to implement a granular permission system over the different objects in the database, or to trace the actions of each user; these tasks are delegated to the web application itself. If an attacker is able to take advantage of any vulnerability of the application to access the database, the database will be completely exposed. This architecture is the one used by CMS systems such as Joomla or Mambo, among others very commonly used on the Internet. The target of any attacker is to extract the rows of the database users table in order to access the users' credentials.
The alternative consists of delegating the authentication process, so that the connection string is used to check the user credentials, leaving all the responsibility to the database engine. This system allows applications to delegate the credential management to the database engine. This alternative must be used in all applications that manage the database engine itself, since it is necessary to connect with users who have special permissions or roles in order to perform administration tasks.
As can be seen, the application makes use of Microsoft SQL Server users to access the database engine. Taking this information into account, an attacker can perform a Connection String Parameter Pollution attack. The idea of this attack is to add to the connection string a parameter that already exists in it. The component used in .NET applications sets each parameter to the last value found in the connection string. This means that if there are two Data Source parameters in a connection string, the one used is the last one. Knowing this behavior, in this environment the following CSPP attacks can be performed.
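This "last value wins" behaviour can be illustrated with a toy parser (our simplification of what the .NET connection-string components do, written here in Python for brevity):

```python
# A duplicate-tolerant parser where the last occurrence of a key wins,
# mimicking the behaviour CSPP exploits.

def parse_connection_string(cs):
    params = {}
    for part in cs.split(";"):
        if "=" in part:
            key, value = part.split("=", 1)
            params[key.strip().lower()] = value.strip()
    return params

# A polluted string: the attacker smuggles a second Data Source through an
# unfiltered input, and it overrides the legitimate one.
polluted = ("Data Source=legit-server; Initial Catalog=DataBase; "
            "User ID=user; Password=pwd; Data Source=attacker-server;")
# parse_connection_string(polluted)["data source"] -> "attacker-server"
```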
2.3.1 CSPP Attack 1: Hash Stealing
An attacker can place a rogue Microsoft SQL Server connected to the Internet with a Microsoft SQL Server credential sniffer listening (in this example, CAIN [6] has been chosen). It is enough for the attacker to perform a CSPP attack in the following way:
User_Value:
As can be seen in Fig. 5, when the port is listening, as in the current example, the error message obtained shows that no Microsoft SQL Server is listening on it, but a TCP connection was established.
In the second case, a TCP connection could not be completed and the error message is different. Using these error messages, a complete TCP port scan can be performed against a server. Of course, this technique can also be used to discover internal servers within the DMZ in which the web application is running.
An attacker can log into the database engine, and hence into the web application, to manage the whole system. As can be seen in Fig. 9, this is due to the fact that all the users and the network services have access to the server.
2.3.3.2 Example 4: myLittleAdmin and myLittleBackup. In the myLittleAdmin and myLittleBackup tools, it is possible to check the connection string used to gain access. Looking at it, the parameter pollution injected in order to obtain access to the system can be clearly seen.
Fig. 10 shows how the Data Source parameter, after the User ID parameter, has been injected with the localhost value. Data Source is also the first parameter of the connection string; in this example the two values are different, but the one taken into account is the last one, that is, the injected one. The same happens with the Integrated Security parameter, which initially appears with the NO value, but the value that counts is the one injected through the password parameter with the value YES. The result is total access to the server with the system account under which the web application runs, as can be seen in Fig. 11.
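A polluted connection string of the shape described above might look like this (an illustrative reconstruction; server names and credentials are hypothetical):

```
Data Source=db.example.com;Integrated Security=no;
User ID=user1;Data Source=localhost;
Password=anything;Integrated Security=yes;
```

Because the parser keeps the last occurrence of each key, the effective values become Data Source=localhost and Integrated Security=yes, regardless of what the application originally specified.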
2.3.3.3 Example 5: ASP.NET Enterprise Manager. The same attack also works on the latest public version of ASP.NET Enterprise Manager, so, as can be seen in the following login form, an attacker can perform the CSPP injection to gain access to the web application. As a result, access can be obtained, as shown in the following screenshot.
3 Conclusions
All these examples show the importance of filtering any user input in web applications. Moreover, these examples are clear proof of the importance of keeping software up to date. Microsoft released the ConnectionStringBuilder class in order to avoid these kinds of attacks, but not all projects were updated to use these new, more secure components.
These techniques also apply to other databases, such as Oracle databases, which allow administrators to set up integrated security for the database. Moreover, in Oracle connection strings it is possible to change the way a user connects by forcing the use of a sysdba session.
MySQL databases do not allow administrators to configure an integrated-security authentication process. However, it is still possible to inject code and manipulate connection strings to try to connect to internal servers that were used by developers and never published on the Internet.
In order to avoid these attacks, the semicolon must be filtered, all parameters must be sanitized, and the firewall should be hardened to filter not only inbound connections but also outbound connections from internal servers sending NTLM authentication over the Internet. Database administrators should also apply a hardening process to the database engine, restricting access permissions to only the necessary users under a least-privilege policy.
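A sanitization step of the kind recommended here can be sketched as follows. This is a minimal illustration; the whitelist pattern is an assumption, and it is stricter than what ADO.NET's ConnectionStringBuilder does (which quotes values rather than rejecting them):

```python
import re

# Conservative whitelist for connection string values (assumed policy):
# no semicolons, no equals signs, no spaces.
ALLOWED = re.compile(r"^[A-Za-z0-9_.@-]+$")

def safe_value(value):
    """Reject values that could smuggle extra key=value pairs."""
    if not ALLOWED.match(value):
        raise ValueError("invalid character in connection string parameter")
    return value

def build_connection_string(server, user, password):
    # Every user-supplied piece is validated before being embedded.
    return "Data Source={0};User ID={1};Password={2}".format(
        safe_value(server), safe_value(user), safe_value(password))
```

Rejecting the semicolon and the equals sign is already enough to defeat the pollution payloads shown in this paper.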
References
1. The Open Web Application Security Project, http://www.owasp.org
2. ConnectionStrings.com, http://www.connectionstrings.com
3. Ryan, W.: Using the Sql Connection String Builder to guard against Connection String Injection Attacks, http://msmvps.com/blogs/williamryan/archive/2006/01/15/81115.aspx
4. Connection String Builder (ADO.NET), http://msdn.microsoft.com/en-us/library/ms254947.aspx
5. Carettoni, L., di Paola, S.: HTTP Parameter Pollution, http://www.owasp.org/images/b/ba/AppsecEU09_CarettoniDiPaola_v0.8.pdf
6. Cain, http://www.oxid.it/cain.html
7. ASP.NET Enterprise Manager at SourceForge, http://sourceforge.net/projects/asp-ent-man/
8. ASP.NET Enterprise Manager at MyOpenSource, http://www.myopensource.org/internet/asp.net+enterprise+manager/download-review
9. phpMyAdmin, http://www.phpmyadmin.net/
10. myLittleAdmin, http://www.mylittleadmin.com
11. myLittleBackup, http://www.mylittlebackup.com
12. myLittleTools, http://www.mylittletools.net
13. Microsoft SQL Server Web Data Administrator, http://www.microsoft.com/downloads/details.aspx?FamilyID=c039a798-c57a-419e-acbc-2a332cb7f959&displaylang=en
14. Microsoft SQL Server Web Data Administrator, Codeplex project, http://www.codeplex.com/SqlWebAdmin
1 Introduction
One of the current computing trends is the distribution of information systems, in particular over the Internet. Critical systems are constantly deployed on the World Wide Web, where crucial and confidential information crosses the information highway or is stored in insecure, remotely located databases.
Most of these critical systems are used on a daily basis, and there is an inherent sense of security about each of these web applications that may not correspond to their real security status and needs. Andrey Petukhov and Dmitry Kozlov [1] reference a survey stating that 60% of vulnerabilities actually affect web applications, emphasizing even more the concerns about the relation between web applications and classified information. The objective of this paper is to focus on the Portuguese web application security panorama, which will be divided into two major areas: government online public services and online banking web applications. Although these two areas differ from each other, they have a common front-end to communicate
C. Serrão, V. Aguilera, and F. Cerullo (Eds.): IBWAS 2009, CCIS 72, pp. 63-73, 2010.
© Springer-Verlag Berlin Heidelberg 2010
with people - web applications - which will allow the entire testing process and subsequent methodologies to be the same, or very similar.
These assessments are mostly motivated by the perception of a lack of investment in security and the everyday growth of new attacks on the web and on critical web applications [2]. The aim of this paper is to check for vulnerabilities/exploits and to produce a report for each of the tested web applications in order to communicate the testing methods, the vulnerabilities found and the corrective measures that need to be taken to mitigate those flaws and ultimately benefit the end-user.
This paper proposes the use of a web application security analysis methodology to determine their security level against some of the most commonly identified security threats, based on best practices in the web application security market. This work will make use of freely available security assessment frameworks and automated tools to conduct these security evaluation tests, taking advantage of the large set of tools and documents produced by the Open Web Application Security Project (OWASP) and other similar initiatives.
The Portuguese government public service web applications were mostly created after the launch of the Simplex [3] program in 2006. Simplex is a strategic priority of the Portuguese Government, launched with the ambitious objectives of decentralizing most of the public services, reducing the gap between citizens and public administration, reinforcing the idea of a great investment in the technology sector and making the public sector more and more efficient. As a result, most public services offered by the government are now supported by information systems available over the World Wide Web. From a citizen's point of view, political interests generate the preconceived idea that these programs often bypass the usual and recommended processes for planning their components and their introduction to the market, choosing deployment speed to the detriment of quality.
The financial sector, in particular banks, has always been the target of an enormous amount of effort by attackers trying to compromise clients' assets as well as banks' credibility, whether from rival entities or individual attackers.
This work is conducted in the context of an MSc project, which will carry out the necessary security assessment and present the results and conclusions at its end. In the context of this paper, it is not yet possible to present any results; the paper therefore focuses mostly on the selection of web applications, the identification of methodologies and the results-processing mechanism.
Web Applications Security Assessment in the Portuguese World Wide Web Panorama
These vulnerabilities are not equally distributed among web applications, so the tests will be conducted in such a way that the most common vulnerabilities are tested more intensely, validating each web application against the most common flaws. Although only these vulnerabilities are identified, this does not preclude testing for new vulnerabilities not described in the list above.
2.3 Selection of the Web Applications to Be Tested
To conduct a serious evaluation, a pre-selection of target entities will be made. This is an important step, since it allows choosing a representative set of entities belonging to each domain. The Portuguese WWW panorama can therefore be assessed in these two domains (public services and bank institutions) without testing every possible entity or web application, especially regarding bank institutions. It must be pointed out that not every entity in each domain has to be present in this list; if the most important and representative ones are present and tested, this will give fairly conclusive information on the overall state of the Portuguese WWW panorama.
To represent the government public services, the chosen web applications are the ones that allow citizens to perform crucial operations on behalf of individual and collective entities. The second area represents Portuguese bank entities and is composed of different banks, private and public, which also allows an interesting comparison between the security implemented in the private and public sectors.
Table 1. Web application set to be tested
The list of public administration service portals can, and most probably will, be extended as the work presented in this paper progresses. Many critical actions can be performed through the web applications described in this section, and more services are available inside these portals, so the main entry point will be one of the portals described here. However, as testing and further investigation proceed, different portals/web applications may appear and, if sufficiently relevant, they will be included in the results of this assessment.
2.4 Web Application Security Assessment Methodology
This section describes the security assessment methodology that will be followed to conduct the tests on the selected web applications, and how the tests are going to be structured. As previously stated, these tests will be performed on web applications for which no documentation - including source code, software and network infrastructure details - is available or accessible. This situation forces the methodology to be based on a black-box approach, which fits one particular testing method for web applications: penetration testing.
The penetration testing process is divided into two main phases:
Passive mode: information gathering, where information about the web applications is gathered, mostly through discovery, reconnaissance and enumeration techniques. This is the first step in penetration testing because, as nothing is known about the web application to be tested, this is the way to learn about its surrounding environment.
Active mode: vulnerability analysis, where more specific tests are performed in order to assess particular vulnerabilities related to the web application's business logic, session management, data validation, among others.
The high-level view of the whole project methodology is described below:
1. Discovery;
2. Documentation and analysis of the discovery results;
3. Creation of attack simulations on the target entity;
4. Analysis of each attack;
5. Documentation of the results of the attacks;
6. Solutions to mitigate the problems (when possible);
7. Presentation of the results to the entity (if required).
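The passive discovery step can start with something as simple as collecting the identification headers a web application exposes. A minimal sketch (the target URL is a placeholder, and the header list is only a common subset):

```python
from urllib.request import urlopen

# Headers that commonly identify server software and frameworks
INTERESTING = ("server", "x-powered-by", "x-aspnet-version")

def interesting_headers(headers):
    """Filter the headers that reveal server software and framework versions."""
    return {k: v for k, v in headers.items() if k.lower() in INTERESTING}

def grab_headers(url):
    """Fetch a page and return its identification headers (passive discovery)."""
    with urlopen(url, timeout=5) as resp:
        return interesting_headers(dict(resp.headers))

# Example (hypothetical target):
# print(grab_headers("https://www.example.com/"))
```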
Buffer Overflows
User Specified Object Allocation
User Input as a Loop Counter
Writing User Provided Data to Disk
Failure to Release Resources
Storing too Much Data in Session
Information Disclosure
Directory Indexing
Information Leakage
Path Traversal
Predictable Resource Location
Logical Attacks
Abuse of Functionality
Denial of Service
Insufficient Anti-automation
Insufficient Process Validation
Although there may be some overlap between the two methodologies, this will, of course, help to cover web application threats more efficiently, since the references for the penetration tests come from these two major organizations, which have focused their efforts on this common purpose.
2.6 Test Results
As a final stage, the results of the tests will be collected, including information on how each vulnerability can be exploited, what the exploitation risks are and what the vulnerability's impact on the web application is; the data for each web application's tests will then be processed and conclusions drawn. Any security issues found will be presented to the system owner, together with an assessment of their impact and a proposal for mitigation or a technical solution.
As suggested by Andres Andreu [9], the final document should present data important for the target entity, making it aware of the relevant issues. In order to better analyze and demonstrate the results of these tests to the stakeholders, if needed, the document will be structured into sections covering these points.
The work presented here is not only bounded by technical constraints (as presented in the previous section); it also has to deal with legal considerations, which can pose a major blocking force to the success of this work. The following section highlights these issues.
3 Legal Constraints
Besides the normal technical details that will need to be handled, one of the major problems/challenges identified within this work is related to legal aspects/constraints. Most of the work described in this paper is bounded by legislation. In particular, penetration testing, when not properly authorized by the tested entity, can have harmful legal consequences.
One stage of this work will be to ask the target entities for permission to perform these tests, which, of course, may or may not be granted. Another issue concerns the results: some entities may accept that these tests are performed, mostly because it is in their own interest, but demand that the results remain protected from external viewers.
From one perspective, it is still somewhat of a question whether this permission has to be asked in the scope of this project. Although these tests can in some cases present a threat to the web application itself, and consequently to the entity holding it, the intention is not to perform any criminal or ill-intentioned act against it, and the tests will rely only on actions that any external user can perform.
Nonetheless, authorizations will be requested and measures will be taken to minimize possible legal and functional problems for the targets when performing these tests. These measures can be summarized as:
Getting the target entity to establish and agree with us, the testers, on clear time frames for the pen-testing exercise;
Getting the target entity to clearly agree that we are not liable for anything going wrong that may have been triggered by our actions;
Finding out whether the target entity has any non-disclosure agreements that have to be signed prior to the pen tests;
Getting the target entity's relevant contacts for any unexpected situation.
As a last resort, if permission is denied, the project scope can be adapted, not invalidating the whole project but changing targets to more receptive ones.
4 Conclusions
The work presented in this paper defines the methodologies, techniques and tools that will be used to conduct the Portuguese web application security assessment. These assessments should be considered of the highest importance by the entities that develop and distribute those web applications, mostly because they serve the purpose of performing highly sensitive operations.
A set of Portuguese public services and financial banking services was chosen and a methodology was drawn up, defining testing phases, processes and tools that can identify the most common vulnerabilities in web applications, bounded by the recommendations and best practices advocated by international organizations such as OWASP.
As an end result, it will be clearly identified, for each web application, whether it has security flaws. Reports will be produced clearly explaining which tests were performed and how, which vulnerabilities were identified, and the solutions or workarounds, if found, for mitigating the problems. Information will also be provided on how severe those flaws are and which implications they have, or could have, for the entity holding the web application.
Although full security assessments should also be based on documentation and code review, which can reveal hidden security issues, these penetration tests should provide a very close view of the web applications' security.
This work can also serve as a guideline for extrapolating penetration tests to other web applications, which can be very important and interesting from a business point of view, especially because these tools, methodologies and frameworks are freely available. Penetration testing can provide a huge service to these two sectors, since the Portuguese Government and the banks obviously rely on their reputation and service availability to maintain a certain amount of trust with clients, which many times justifies investments in the security area.
In particular, these assessments will allow these entities to answer questions they probably ask themselves every day: "What is our level of exposure?", "Can our critical applications be compromised?" and "What risks are we running by operating on the Internet?".
References
1. Petukhov, A., Kozlov, D.: Detecting Security Vulnerabilities in Web Applications Using Dynamic Analysis with Penetration Testing. Computing Systems Lab, Department of Computer Science, Moscow State University (2008)
2. Holz, T., Marechal, S., Raynal, F.: New Threats and Attacks on the World Wide Web. IEEE Computer Society, Los Alamitos (2006)
3. Simplex Program, http://www.simplex.pt
4. Budiarto, R., Ramadass, S., Samsudin, A., Noor, S.: Development of Penetration Testing Model for Increasing Network Security. IEEE Press, Los Alamitos (2004)
5. Arkin, B., Stender, S., McGraw, G.: Software Penetration Testing. IEEE Press, Los Alamitos (2005)
6. van der Stock, A., et al.: OWASP Top 10: The Ten Most Critical Web Application Security Vulnerabilities. OWASP (2007)
7. Agarwwal, A., et al.: OWASP Testing Guide v3.0. OWASP (2008)
8. Auger, R., et al.: Web Application Security Consortium: Threat Classification. WASC (2004)
9. Andreu, A.: Professional Pen Testing for Web Applications. Wiley Publishing, Inc., Indianapolis (2006)
Abstract. The number of Web applications and Web services increases every day due to the ongoing migration to these types of environments. In these scenarios it is very common to find all types of vulnerabilities affecting web applications, and traditional methods of protection at the network and transport level are not enough to mitigate them. What is more, there are also situations where the availability of information systems is vital for proper functioning. To protect our systems from these threats, we need a component acting on layer 7 of the OSI model, one that understands the HTTP protocol, allows us to analyze HTTPS traffic, and is easily scalable. To address these problems, this paper presents the design and implementation of an open source application firewall, ModSecurity, emphasizing the use of the positive security model and deployment in high-availability environments.
Keywords: application firewall, whitelist analysis, high availability, ModSecurity, OpenBSD, CARP, pfsync.
1 Introduction
Due to the large number of threats to web applications, it is essential to protect our information systems. In that context, it is vitally important to follow a design process with security measures that ensure the integrity, confidentiality and availability of these resources.
Generally, most information systems have network-level protections sophisticated enough to block malicious attacks in the first four layers of the TCP/IP model, while the exploitation of vulnerabilities in the application layer keeps increasing and the existing measures, such as firewalls or intrusion detection systems at the network or transport layer, are not sufficient. Security in Web applications and Web services is a big problem due to the lack of measures to protect systems from these threats.
It is important to note that introducing an application firewall into our network topology increases the points of failure and can reduce the SLA, which is so important in Web environments. Therefore, techniques must be implemented to ensure high availability for business continuity.
The solution is to implement an application firewall that is scalable and responsive to the issues we raised. To develop the project we will use open source solutions, because they offer low cost and great flexibility to configure and set the requirements. The open source alternative chosen was ModSecurity [1] because it offers a
C. Serrão, V. Aguilera, and F. Cerullo (Eds.): IBWAS 2009, CCIS 72, pp. 75-82, 2010.
© Springer-Verlag Berlin Heidelberg 2010
few advantages: it includes countless security features, stability, reliability and good documentation, and it is free. We will perform the configurations using this free open source software, released under the GNU GPLv2 license [2], in combination with Apache, ModProfiler [3] and the OpenBSD [4] operating system.
2 ModSecurity
ModSecurity operates as an Apache module, intercepting HTTP traffic and performing a comparison process for each request. If a request is classified as an attack, ModSecurity follows the actions specified in the configuration. The main function of this solution is filtering requests, analyzing the content of HTTP requests in both incoming and outgoing traffic. One advantage of ModSecurity over a NIDS is the ability to filter HTTPS traffic. In a scenario with a network IDS filtering requests, if the traffic is encrypted using SSL/TLS, the IDS cannot parse the requests, so attacks go undetected. In this case the use of SSL/TLS, which in most cases protects us, would be an advantage for attackers, hiding their actions. However, ModSecurity (working embedded in the web server) processes the data once it has been deciphered: first mod_ssl decodes the request and, once it is in plain text, ModSecurity analyzes it correctly.
The request life cycle passes through a series of steps with the goal of optimizing the search for anomalies and blocking the attack as soon as possible. This increases performance because, if we are sure that a request is malicious in phase 1, there is no need to analyze it in the remaining phases.
The process comprises five phases. The first phase is the analysis of the HTTP headers (REQUEST_HEADERS phase); filtering is then done on the body of the HTTP request (REQUEST_BODY phase), the process in which the highest number of attacks is detected. Then the RESPONSE_HEADERS phase and the RESPONSE_BODY phase are performed; both analyze the response to prevent information leaks. Finally, the LOGGING phase is processed; it is responsible for the registration (log) of the complete request and is very useful for future forensic analysis in case of intrusion or other scenarios.
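As a sketch of how these phases appear in a ModSecurity 2.x configuration (the patterns and status codes below are illustrative, not rules from the paper):

```
# Phase 1: evaluated as soon as the request headers are available
SecRule REQUEST_HEADERS:User-Agent "nikto" "phase:1,log,deny,status:403"

# Phase 2: request body, where most attacks are detected
SecRule ARGS "(?i)union\s+select" "phase:2,log,deny,status:403"

# Phases 3 and 4: response headers and body, to prevent information leaks
SecRule RESPONSE_BODY "ODBC Error" "phase:4,log,deny,status:500"

# Phase 5: logging of the complete transaction
SecAction "phase:5,pass,log"
```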
3 Security Models
There are two security models for the classification of requests, and both can coexist: the negative model can be used to generalize and the positive model to particularize.
In the negative model, everything is allowed by default except what is explicitly prohibited, while in the positive model everything that is not expressly permitted is forbidden. The IDS/IPS systems used for Web applications, and specifically ModSecurity, can operate in both modes.
In the negative security model, the system requires a black list of rules with the goal of blocking malicious requests. When a request arrives, a search process starts in the database containing all known attacks; if a match is found, the request is blocked. Some of these systems work in conjunction with scoring rules, giving a score to each request and blocking those that exceed a certain threshold.
In the positive security model, you can create a template of the Web application that specifies in detail the operations allowed in the application. Everything outside this template will be blocked. You must carefully specify the format of all parameters, so that if an attacker makes changes by sending unauthorized values, access is blocked.
The positive model is more appropriate and provides more safety in critical environments than the negative model, since it helps to protect systems from unknown attacks or 0-day exploits. These 0-day exploits are programs or scripts that exploit vulnerabilities for which there is no patch or corrective solution. The big potential of this approach makes a greater effort necessary to create a scenario of this type.
One tool that tries to facilitate the construction of rules with a white-list approach is REMO [5] (Rule Editor for ModSecurity), which offers a graphical interface that makes the process of writing positive-model rules easier, but it does not support automation.
To help configure an application firewall following the guidelines of the positive model, there is a tool called ModProfiler [3], which analyzes the traffic passing through the firewall, observes what is valid and what is not, and can define the types and maximum sizes of the parameters. This system operates under the premise of denying everything that is not known to be valid. By default, web applications normally allow any HTTP method and any number and type of parameters, although in most cases they work with a smaller number of them.
Following this model and establishing the correct configuration gives us several advantages:
Preventing attacks that attempt to exploit HTTP methods other than those permitted, which otherwise could be used by default.
Preventing information leaks from files that are hosted on the server but are not part of the application and were forgotten in the root directory of the web server.
Preventing the use of enabled debug modes, which provide much useful information to a potential attacker, and blocking any operation outside of what is considered valid within the web application.
Using this approach, we can specify, for each web application, the files and interfaces that will be used - each with its number of parameters, type and size limits, and other constraints such as the encoding or the HTTP methods allowed.
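A positive-model fragment along these lines might look as follows (the paths, parameter names and formats are hypothetical, meant only to show the deny-by-default shape):

```
# Deny by default; everything must be explicitly allowed
SecDefaultAction "phase:2,log,deny,status:403"

# Only the HTTP methods actually used by the application
SecRule REQUEST_METHOD "!^(GET|POST)$" "phase:1,log,deny,status:403"

# The login interface accepts exactly these parameter formats
<LocationMatch "^/login\.php$">
    SecRule ARGS:user "!^[a-zA-Z0-9]{1,16}$" "phase:2,log,deny"
    SecRule ARGS:pass "!^.{1,32}$" "phase:2,log,deny"
</LocationMatch>
```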
We will need three network cards to use the pfsync functionality, which keeps the states of all active communications synchronized for high availability, mitigating the loss of connections if the Master server fails and recovering all states on the Slave.
The operation of the CARP protocol is very simple: it acts as a virtual interface with a corresponding virtual IP and MAC address, i.e., the operating system creates this interface for managing data, with its counterparts on the other nodes. With the pfsync functionality we can share the states of pf in real time with all nodes.
To configure CARP, the external virtual interface will use carp0 and the internal virtual interface will use carp1.
root@master:~# cat /etc/hostname.carp0
inet 172.26.0.1 255.255.255.0 172.26.0.255 vhid 1 advskew 0 pass
secretkey
root@master:~# cat /etc/hostname.carp1
inet 10.10.10.1 255.255.255.0 10.10.10.255 vhid 2 advskew 0 pass
secretkey
For the slave computer the configuration will be similar, but we need to change the "advskew" value to 100 as a weight value.
root@slave:~# cat /etc/hostname.carp0
inet 172.26.0.1 255.255.255.0 172.26.0.255 vhid 1 advskew 100
pass secretkey
root@slave:~# cat /etc/hostname.carp1
inet 10.10.10.1 255.255.255.0 10.10.10.255 vhid 2 advskew 100
pass secretkey
We can start the packet filter network firewall from the command line, or modify the file /etc/rc.conf so that it starts automatically every time the system boots.
root@master:~# pfctl -d
root@master:~# pfctl -e
root@master:~# cat /etc/rc.conf | grep pf\=YES
pf=YES
Once the firewall is up, we need to configure the synchronization network interface on each computer; in our case there are two machines, so we will use a crossover cable. Our dedicated synchronization interface will be the physical interface vic2, which is specified in the pfsync interface, pointing to the address of the other computer.
root@master:~# cat /etc/hostname.vic2
inet 192.168.0.1 255.255.255.0 NONE
root@master:~# cat /etc/hostname.pfsync0
up syncdev vic2 syncpeer 192.168.0.2
root@slave:~# cat /etc/hostname.vic2
inet 192.168.0.2 255.255.255.0 NONE
root@slave:~# cat /etc/hostname.pfsync0
up syncdev vic2 syncpeer 192.168.0.1
6 Performance Charts
The following chart shows the time it took to serve 1, 100 and 400 requests on the Web Server without protection, the results of testing through an intermediate computer without applying any filtering, and the requests made with the ModSecurity filters enabled. The tests were performed on a Gigabit LAN and the size of the served web page is 20,000 bytes.
The setup used in the performance testing is detailed below. The Apache Web Server has been configured in reverse proxy mode, using the ProxyPass and ProxyPassReverse directives:
<Location />
  <IfModule security2_module>
    Include /path/www.example.com/modsecurity2.conf
  </IfModule>
  <IfModule mod_proxy.c>
    ProxyRequests Off
    ProxyPass http://www.example.com:80/
    ProxyPassReverse http://www.example.com:80/
  </IfModule>
</Location>
[Chart: serving time versus number of hits (1, 100 and 400) for the Web Server alone, the reverse proxy without filtering, and the reverse proxy with ModSecurity enabled; the vertical axis shows time from 0 to 6.]
7 Conclusions
Security in Web applications and Web services requires more than just a layer-3 firewall. The number of attacks in these environments has increased so dramatically that we need a firewall at layer 7, one that understands the HTTP protocol and is able to protect us against these threats.
The web application firewall described in this article meets the expectations and solves the problems presented; among other features, it is able to analyze SSL/TLS traffic and to operate in both modes: black list and white list.
The design and implementation have been developed in a high-availability environment, where it is very important to have a service that is always available and to avoid denial of service to legitimate users.
Security is very important throughout the software development life cycle, and also in the network filtering systems at each layer.
References
1. ModSecurity Open Source Web Application Firewall,
http://www.modsecurity.org
2. GNU GPLv2 License, http://www.gnu.org/licenses/gpl-2.0.html
3. ModProfiler, http://www.modsecurity.org/projects/modprofiler/
4. OpenBSD Operating System, http://www.openbsd.org
5. OWASP, http://www.owasp.org/
6. CARP and pfsync guide, http://www.kernel-panic.it
Author Index
Almeida, Miguel 15
Alonso, Chema 51
Alvarez, Gonzalo 39
Catteddu, Daniele 17
Cerullo, Fabio E. 19, 21
Chisinevski, Marc 1
Clarke, Justin 3
Corrons, Luis 7
Cruz, Dinis 5
de Frutos, Elena 27
Fernandez, Manuel 51
Fernández-Sanguino, Javier 25
Gracia, Àngel Puigventós 75
Guzmán, Antonio 51
Harper, Dave 11
Holgado, Pilar 27
Knobloch, Martin 27
Martín, Alejandro 51
Perez-Villegas, Alejandro 39
Roses, Simon 23
Sanz, Iván 27
Serrão, Carlos 63
Siles, Raul 13
Teodoro, Nuno 63
Torrano-Gimenez, Carmen 39
Villagrá, Víctor A. 27