
Web Technologies and Cyber Security

Literature Review

Web 2.0 is a loosely defined set of web application features that facilitate participatory information sharing, interoperability, user-centered design and collaboration on the World Wide Web. Web applications are a major target for attackers because most of them contain business logic; if their vulnerabilities are exploited, the enterprise may suffer heavily both financially and in reputation. By exploiting a single vulnerability in a popular site (e.g., a social networking site), an attacker can infect a large number of users by spreading malware. There is therefore a need for security tools, such as the Web Application Firewall, and for techniques that protect web applications while dealing with their ad hoc, dynamic nature.

Effective IT security encompasses three main elements: confidentiality, integrity and availability. Confidentiality requires strict controls to ensure that only authorised persons have access to data on a need-to-know, need-to-do basis. Ensuring the integrity of information prevents it from being modified in unexpected ways. Availability means that data and resources are accessible when and where they are needed. The goal is to protect information in line with its relative value and importance to the business process. Business applications such as ERP, CRM, HRMS and FMS are, in essence, web applications or web services operating in an unsafe cyber world. According to the SANS 2011/2012 information security risk reports, the highest risk comes from critical vulnerabilities in web applications that enable the data behind the website to be compromised. Web applications of all kinds, whether online shops or partner portals, have in recent years increasingly become the target of hacker attacks.
Attackers use methods aimed specifically at exploiting potential weak spots in the web application software itself; this is why they are not detected, or not detected with sufficient accuracy, by traditional IT security systems such as network firewalls or IDS/IPS devices.

The basic principle is that every web application should be developed to be as secure as possible, because the later a vulnerability is detected in the life cycle of a web application, the greater the risk of a successful attack and, often, the greater the amount of work involved in correcting the issue. A web application firewall is an intermediary device, sitting between a web client and a web server, used as an operational security control that monitors HTTP traffic in order to protect web applications from attack. A web application firewall performs deep packet inspection. A network firewall allows any HTTP traffic to pass through without checking, so malicious code within such a packet may harm the web application; inspection of every HTTP packet is therefore needed. A web application firewall checks each HTTP parameter and, if it finds malicious content, can modify or discard the packet. It can be deployed as a network-level device, as a reverse proxy, or embedded in the web server, and it can be signature based or anomaly based. A signature-based web application firewall, e.g. Snort or ModSecurity, includes a signature database of known attacks and raises an alert when traffic matches an attack in its database. Those signatures (also known as rules) typically address widely used systems or applications whose security vulnerabilities are widely advertised. Since signatures are specifically designed to match known attacks, this type of web application firewall normally has a very low rate of false alarms (false positives). However, also due to the specific and static nature of signatures, a signature-based web application firewall is unlikely to detect even slight modifications of a known attack.
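The per-parameter signature check described above can be sketched as follows. The rule set, function name, and patterns here are illustrative stand-ins, not ModSecurity's or Snort's actual rule language:

```python
import re
from urllib.parse import parse_qs

# Hypothetical signature set; production WAFs ship far larger rule bases.
SIGNATURES = [
    (re.compile(r"<script\b", re.I), "XSS: inline script tag"),
    (re.compile(r"\bunion\b.+\bselect\b", re.I), "SQLi: UNION SELECT"),
    (re.compile(r"\.\./"), "Path traversal"),
]

def inspect_query(query_string):
    """Check every HTTP parameter value against each signature.

    Returns a list of (parameter, rule description) hits; an empty list
    means the request passes. A deployed firewall would then modify or
    discard the offending packet.
    """
    hits = []
    for param, values in parse_qs(query_string, keep_blank_values=True).items():
        for value in values:
            for pattern, label in SIGNATURES:
                if pattern.search(value):
                    hits.append((param, label))
    return hits

print(inspect_query("q=hello"))                      # clean request -> []
print(inspect_query("q=<script>alert(1)</script>"))  # -> [('q', 'XSS: inline script tag')]
```

Note that an exact-match rule like the `UNION SELECT` pattern above illustrates the weakness discussed next: a trivially obfuscated variant (e.g. inline comments between keywords) would slip past it.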
This is a serious disadvantage, because zero-day and polymorphic attacks (where a change in the attack payload does not affect the attack's effectiveness) will go unnoticed by the web application firewall until the signature database is updated. Signature-based web application firewalls are ineffective against zero-day attacks because those attacks are unknown and, as such, no signature that exactly matches them can exist in advance. Given that new attacks appear often, if not daily, and that a gap normally exists between the time a new attack is first detected and the time a signature is ready for it (first someone has to write the signature, and then web application firewall administrators have to add it to their signature database), systems remain exposed to the attack for the entire duration of that period, giving attackers a window of opportunity to gain control of the system. Due to the dynamic and impromptu nature of web traffic, this limitation severely impairs the usability of signature-based web application firewalls for the detection of attacks against web servers and applications.

To overcome the inherent inability of signature-based systems to detect previously unknown attacks, researchers have sought other ways to detect intrusions, namely anomaly-based methods. An anomaly-based web application firewall works by building a statistical model of usage patterns describing the normal behaviour of the monitored resource (a set of characteristics observed during normal operation); this process is usually called the training phase. Once the statistical model of normal, expected behaviour is created, the system uses a similarity metric to compare new input requests with the model and generates alerts for those that deviate significantly, considering them anomalous. Unfortunately, most web application firewalls used nowadays are still signature based, and few anomaly-based web application firewalls have been deployed in production environments, mainly because signature-based firewalls are easier to implement, simpler to configure and require less effort to maintain. The high rates of false alarms associated with anomaly-based web application firewalls have always impaired their acceptance and popularity, particularly where the monitored resource is not stationary and changes frequently, as is the case for web applications.
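The train-then-detect cycle can be illustrated with a deliberately small model. This sketch assumes a toy per-parameter profile (value length plus observed character set) rather than the richer statistical models real anomaly-based firewalls build; all names are invented for illustration:

```python
import statistics

class AnomalyModel:
    """Toy anomaly model for a single HTTP parameter (illustrative only).

    Training phase: record the length distribution and character set
    observed in attack-free traffic. Detection phase: flag values whose
    length deviates strongly from the mean, or that use unseen characters.
    """

    def __init__(self):
        self.lengths = []
        self.charset = set()

    def train(self, value):
        self.lengths.append(len(value))
        self.charset.update(value)

    def is_anomalous(self, value, k=3.0):
        mean = statistics.mean(self.lengths)
        std = statistics.pstdev(self.lengths) or 1.0
        too_long = abs(len(value) - mean) > k * std
        unseen_chars = not set(value) <= self.charset
        return too_long or unseen_chars

model = AnomalyModel()
for v in ["alice", "bob42", "carol", "dave_7", "erin"]:  # attack-free training data
    model.train(v)

print(model.is_anomalous("eric"))         # within profile -> False
print(model.is_anomalous("' OR '1'='1"))  # unseen quote chars -> True
```

The training loop makes the O(P) retraining cost discussed below concrete: every one of the P attack-free messages must pass through `train` again whenever the application changes, and if the usage profile drifts without retraining, legitimate new values start tripping the `unseen_chars` test as false positives.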
If the statistical models are too strict and the system is unable to generalize and account for variations, many false alarms are likely to be generated. Conversely, if the web application changes and the anomaly models are not retrained, the number of false positives will increase significantly. A complete retraining is expensive in terms of time: typically it requires O(P) work, where P is the number of HTTP messages required to train a model. Such retraining is not always feasible, since new, attack-free training data is unlikely to be available soon after the web application changes. To collect a sufficient amount of data for retraining, the new version of the application must be executed and real, legitimate clients have to interact with it in a controlled environment. This task requires much time and effort; more importantly, the parts of the application that have changed must be known in advance. If the web application changes and the anomaly models are not retrained immediately, the web application is also left vulnerable to zero-day exploits.

In Web 2.0, web application user interfaces are highly reactive because of the extensive use of JavaScript. JavaScript is a powerful, object-based scripting language whose programs can be embedded directly in HTML web pages. When combined with the Document Object Model (DOM) defined by a web browser, JavaScript allows you to create dynamic HTML content [1]. Scripting also allows an attacker to embed malicious JavaScript within an HTTP request, and the dynamic nature of the JavaScript language and its tight integration with the browser make it difficult to detect and block malicious JavaScript code. The goal of our research is to capture responses containing client-side code (e.g., JavaScript), inspect it, and build a list of probable HTTP requests directed toward the server.
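A first-cut sketch of that capture-and-inspect step might look as follows, under the simplifying assumption that inline `<script>` bodies are scanned for quoted URL literals; analysing real, dynamically generated client-side code is considerably harder, and the names here are invented for illustration:

```python
import re
from html.parser import HTMLParser

class ScriptExtractor(HTMLParser):
    """Collect the bodies of inline <script> elements from an HTML response."""

    def __init__(self):
        super().__init__()
        self.in_script = False
        self.scripts = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.in_script = True

    def handle_endtag(self, tag):
        if tag == "script":
            self.in_script = False

    def handle_data(self, data):
        if self.in_script:
            self.scripts.append(data)

# Quoted string literals that look like server-relative paths,
# optionally with a query string.
URL_IN_JS = re.compile(r"""["'](/[\w/.-]*\?[\w=&%-]*|/[\w/.-]+)["']""")

def probable_requests(html):
    """Return URL-like literals found in inline scripts of a response."""
    parser = ScriptExtractor()
    parser.feed(html)
    urls = []
    for body in parser.scripts:
        urls.extend(URL_IN_JS.findall(body))
    return urls

page = """<html><body>
<script>
fetch('/api/items?id=1');
var x = new XMLHttpRequest(); x.open('GET', '/search');
</script>
</body></html>"""
print(probable_requests(page))  # -> ['/api/items?id=1', '/search']
```

Each extracted URL is a candidate future request toward the server, which an anomaly model could be primed with before the real traffic arrives.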

REFERENCES

[1] D. Flanagan. JavaScript: The Definitive Guide. O'Reilly Media, 1996.

WRITTEN BY Ms. Nidhi Barot, Asst. Professor


SAL INSTITUTE OF TECHNOLOGY AND ENGINEERING RESEARCH

Ahmedabad, Gujarat.
