
Antivirus

"antivirus" is protective software designed to defend your computer against malicious software. Malicious software, or "malware" includes: viruses, Trojans, keyloggers, hijackers, dialers, and other code that vandalizes or steals your computer contents. In order to be an effective defense, your antivirus software needs to run in the background at all times, and should be kept updated so it recognizes new versions of malicious software.
Antivirus refers to the products and technology used to detect malicious code. Antivirus software can typically prevent malware from infecting a system, and it can usually remove malicious code that has already infected one. Most antivirus vendors share information and resources to ensure rapid response to malicious code outbreaks. Legitimate antivirus vendors participate in independent testing that certifies their products to detect and/or disinfect a specific set of malware.

Traditional antivirus software uses signature scanning to check files for malicious code, either in real time, automatically as files are introduced to the system, or on demand at the user's request. Other types of antivirus products rely on integrity checking and/or behavior blocking, which respectively prevent files from being modified or stop certain actions from taking place.

Antivirus (or anti-virus) software is used to prevent, detect, and remove malware, including but not limited to computer viruses, computer worms, Trojan horses, spyware, and adware. This page discusses the software used for the prevention and removal of such threats, rather than computer security in general as implemented by software methods.

A variety of strategies are typically employed. Signature-based detection involves searching for known patterns of data within executable code. However, it is possible for a computer to be infected with new malware for which no signature is yet known. To counter such so-called zero-day threats, heuristics can be used. One type of heuristic approach, generic signatures, can identify new viruses or variants of existing viruses by looking for known malicious code, or slight variations of such code, in files. Some antivirus software can also predict what a file will do by running it in a sandbox and analyzing its behavior for malicious actions.

However useful antivirus software can be, it does have drawbacks. Antivirus software can impair a computer's performance.
Inexperienced users may also have trouble understanding the prompts and decisions that antivirus software presents them with; an incorrect decision may lead to a security breach. If the antivirus software employs heuristic detection, success depends on achieving the right balance between false positives and false negatives, since false positives can be as destructive as false negatives.[1] Finally, antivirus software generally runs at the highly trusted kernel level of the operating system, creating a potential avenue of attack.
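Signature-based detection, as described above, can be sketched in a few lines. The signature table below is invented purely for illustration; real engines use vendor-maintained databases of byte patterns and hashes, refreshed continuously:

```python
# Toy signature-based scanner: looks for known byte patterns in file data.
# Both the patterns and the detection names are hypothetical examples.
SIGNATURES = {
    b"EVIL_PAYLOAD_V1": "Example.Trojan.A",
    b"FAKE_KEYLOGGER": "Example.Keylogger.B",
}

def scan_bytes(data: bytes) -> list[str]:
    """Return the names of every known signature found in the data."""
    return [name for pattern, name in SIGNATURES.items() if pattern in data]

# A real engine would also hook file creation for real-time protection
# and refresh SIGNATURES from the vendor's update feed.
print(scan_bytes(b"header...EVIL_PAYLOAD_V1...rest"))  # ['Example.Trojan.A']
print(scan_bytes(b"perfectly ordinary bytes"))         # []
```

This also illustrates the zero-day problem: a new payload whose bytes appear in no signature table passes the scan untouched, which is why heuristics and sandboxing complement signatures.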

Web server
The primary function of a web server is to deliver web pages, on request, to clients. This means delivery of HTML documents and any additional content that may be included by a document, such as images, style sheets, and scripts. A client, commonly a web browser or web crawler, initiates communication by making a request for a specific resource using HTTP, and the server responds with the content of that resource, or an error message if it is unable to do so. The resource is typically a real file on the server's secondary storage, but this is not necessarily the case and depends on how the web server is implemented.

While the primary function is to serve content, a full implementation of HTTP also includes ways of receiving content from clients. This feature is used for submitting web forms, including uploading of files.

Many generic web servers also support server-side scripting, e.g., Active Server Pages (ASP) and PHP. This means that the behaviour of the web server can be scripted in separate files, while the actual server software remains unchanged. Usually, this function is used to create HTML documents "on the fly", as opposed to returning fixed documents. The two are referred to as dynamic and static content respectively. The former is primarily used for retrieving and/or modifying information in databases. The latter is, however, typically much faster and more easily cached.

Web servers are not always used for serving the World Wide Web. They can also be found embedded in devices such as printers, routers, and webcams, serving only a local network. The web server may then be used as part of a system for monitoring and/or administering the device in question. This usually means that no additional software has to be installed on the client computer, since only a web browser is required (which is now included with most operating systems).
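The request/response cycle described above can be demonstrated end to end with Python's standard library. This sketch serves one static file from a throwaway document root and fetches it back over HTTP; the file name and content are made up for the example:

```python
import functools
import http.server
import os
import tempfile
import threading
import urllib.request

# Create a temporary document root holding one static resource.
root = tempfile.mkdtemp()
with open(os.path.join(root, "index.html"), "w") as f:
    f.write("<h1>hello</h1>")

# Serve that directory on an OS-assigned port, in a background thread.
handler = functools.partial(http.server.SimpleHTTPRequestHandler, directory=root)
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client (urllib here, standing in for a browser) sends a GET request;
# the server answers with an HTTP status line and the file's content.
port = server.server_address[1]
with urllib.request.urlopen(f"http://127.0.0.1:{port}/index.html") as resp:
    status, body = resp.status, resp.read().decode()

server.shutdown()
print(status)  # 200
print(body)    # <h1>hello</h1>
```

Requesting a path that maps to no file would instead yield a 404 error response, matching the "error message if unable to do so" behaviour described above.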

History of web servers

In 1989 Tim Berners-Lee proposed to his employer CERN a new project with the goal of easing the exchange of information between scientists by using a hypertext system. The project resulted in Berners-Lee writing two programs in 1990:

A browser called WorldWideWeb
The world's first web server, later known as CERN httpd, which ran on NeXTSTEP

Between 1991 and 1994, the simplicity and effectiveness of early technologies used to surf and exchange data through the World Wide Web helped to port them to many different operating systems and spread their use among socially diverse groups of people, first in scientific organizations, then in universities and finally in industry. In 1994 Tim Berners-Lee decided to constitute the World Wide Web Consortium (W3C) to regulate the further development of the many technologies involved (HTTP, HTML, etc.) through a standardization process.
Common features

Virtual hosting, to serve many web sites using one IP address
Large file support, to be able to serve files whose size is greater than 2 GB on a 32-bit OS
Bandwidth throttling, to limit the speed of responses in order not to saturate the network and to be able to serve more clients
Server-side scripting, to generate dynamic web pages while still keeping the web server and website implementations separate from each other
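Virtual hosting, for instance, keys entirely off the request's Host header: one server process picks a different document root per site name. A minimal sketch, with illustrative host names and directories:

```python
# Map each hosted site name to its own document root (illustrative paths).
VHOSTS = {
    "www.example.com": "/home/www/example",
    "images.example.com": "/home/www/images",
}

DEFAULT_ROOT = "/home/www/default"  # used when the Host header is unknown

def document_root(host_header: str) -> str:
    """Pick a site's document root from the HTTP Host header."""
    # The header may carry a port suffix ("www.example.com:8080"); strip it,
    # and compare case-insensitively, since host names are case-insensitive.
    host = host_header.split(":", 1)[0].lower()
    return VHOSTS.get(host, DEFAULT_ROOT)

print(document_root("WWW.Example.com:8080"))  # /home/www/example
print(document_root("unknown.example.net"))   # /home/www/default
```

This is why HTTP/1.1 made the Host header mandatory: without it, many sites could not share one IP address.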

Path translation

Web servers are able to map the path component of a Uniform Resource Locator (URL) into:

A local file system resource (for static requests)
An internal or external program name (for dynamic requests)

For a static request the URL path specified by the client is relative to the web server's root directory.

Consider the following URL as it would be requested by a client:


http://www.example.com/path/file.html

The client's user agent will translate it into a connection to www.example.com with the following HTTP/1.1 request:
GET /path/file.html HTTP/1.1
Host: www.example.com

The web server on www.example.com will append the given path to the path of its root directory. On an Apache server, this is commonly /home/www (on Unix machines, usually /var/www). The result is the local file system resource:
/home/www/path/file.html

The web server then reads the file, if it exists, and sends a response to the client's web browser. The response describes the content of the file and contains the file itself, or an error message is returned saying that the file does not exist or is unavailable.
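The translation step above can be sketched as a small function. The document root matches the example in the text; the traversal check is an assumption about how a careful server would behave, not a description of any particular product:

```python
import os
from urllib.parse import unquote, urlsplit

DOCUMENT_ROOT = "/home/www"  # the example root used in the text

def translate_path(url: str, root: str = DOCUMENT_ROOT) -> str:
    """Map a URL's path component onto the server's document root."""
    path = unquote(urlsplit(url).path)
    candidate = os.path.normpath(os.path.join(root, path.lstrip("/")))
    # Reject paths that escape the root (e.g. /../../etc/passwd), the
    # classic directory-traversal attack against naive servers.
    if os.path.commonpath([root, candidate]) != root:
        raise PermissionError(f"path escapes document root: {path}")
    return candidate

print(translate_path("http://www.example.com/path/file.html"))
# /home/www/path/file.html
```

For a dynamic request, the same path component would instead be mapped to a program name (a CGI script, a PHP file, and so on) rather than returned as-is.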
Load limits

A web server (program) has defined load limits: it can handle only a limited number of concurrent client connections (usually between 2 and 80,000, by default between 500 and 1,000) per IP address (and TCP port), and it can serve only a certain maximum number of requests per second, depending on:

Its own settings
The HTTP request type
Content origin (static or dynamic)
Whether the served content is cached
The hardware and software limitations of the OS on which it is running

When a web server is near to or over its limits, it becomes unresponsive.
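One common way a server enforces such a connection limit is a counting semaphore around request handling. This sketch is generic, not taken from any particular server; it turns clients away with a 503 rather than queueing them once the limit is reached, so the server stays responsive for the requests it has accepted:

```python
import threading

class LoadLimiter:
    """Cap the number of requests being served at the same time."""

    def __init__(self, max_concurrent: int = 1000):
        # 1,000 mirrors the typical default range mentioned in the text.
        self._slots = threading.BoundedSemaphore(max_concurrent)

    def handle(self, serve):
        # At the limit? Turn the client away instead of letting a
        # backlog build up until the whole server stops responding.
        if not self._slots.acquire(blocking=False):
            return "503 Service Unavailable"
        try:
            return serve()
        finally:
            self._slots.release()

limiter = LoadLimiter(max_concurrent=1)

def inner():
    return "200 OK"

def outer():
    # Simulates a second request arriving while the only slot is taken.
    return limiter.handle(inner)

print(limiter.handle(inner))  # 200 OK
print(limiter.handle(outer))  # 503 Service Unavailable
```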


Kernel-mode and user-mode web servers

A web server can be implemented either in the OS kernel or in user space (like other regular applications). An in-kernel web server (such as TUX on GNU/Linux or Microsoft IIS on Windows) will usually work faster because, as part of the system, it can directly use all the hardware resources it needs, such as non-paged memory, CPU time slices, network adapters, or buffers.

Web servers that run in user mode have to ask the system for permission to use more memory or more CPU resources. Not only do these requests to the kernel take time, but they are not always satisfied, because the system reserves resources for its own use and has the responsibility of sharing hardware resources with all the other running applications. Also, applications cannot access the system's internal buffers, which causes needless buffer copies that create another handicap for user-mode web servers. As a consequence, the only way for a user-mode web server to match kernel-mode performance is to raise the quality of its code to much higher standards, similar to that of the code used in web servers that run in the kernel. This is a significant issue under Windows, where the user-mode overhead is about six times greater than that under Linux.[2]
Overload causes

At any time web servers can be overloaded because of:


Too much legitimate web traffic: thousands or even millions of clients connecting to the web site in a short interval (e.g., the Slashdot effect);
Distributed denial-of-service attacks;
Computer worms, which sometimes cause abnormal traffic because of millions of infected computers (not coordinated among themselves);
XSS viruses, which can cause high traffic because of millions of infected browsers and/or web servers;
Internet bots, and traffic not filtered/limited on large web sites with very few resources (bandwidth, etc.);
Internet (network) slowdowns, so that client requests are served more slowly and the number of connections increases so much that server limits are reached;
Partial unavailability of web servers (computers). This can happen because of required or urgent maintenance or upgrades, hardware or software failures, back-end (e.g., database) failures, etc.; in these cases the remaining web servers receive too much traffic and become overloaded.

Overload symptoms

The symptoms of an overloaded web server are:

Requests are served with (possibly long) delays (from 1 second to a few hundred seconds).
500, 502, 503, and 504 HTTP errors are returned to clients (sometimes unrelated 404 or even 408 errors may also be returned).
TCP connections are refused or reset (interrupted) before any content is sent to clients.
In very rare cases, only partial content is sent (but this behavior may well be considered a bug, even if it usually stems from unavailable system resources).

Anti-overload techniques

To partially overcome the above load limits and to prevent overload, most popular web sites use common techniques such as:

Managing network traffic, by using:
o Firewalls to block unwanted traffic coming from bad IP sources or having bad patterns;
o HTTP traffic managers to drop, redirect, or rewrite requests having bad HTTP patterns;
o Bandwidth management and traffic shaping, in order to smooth down peaks in network usage;
Deploying web cache techniques;
Using different domain names to serve different (static and dynamic) content by separate web servers, e.g.:
o http://images.example.com
o http://www.example.com
Using different domain names and/or computers to separate big files from small and medium-sized files; the idea is to be able to fully cache small and medium-sized files and to efficiently serve big or huge (over 10-1000 MB) files by using different settings;
Using many web servers (programs) per computer, each one bound to its own network card and IP address;
Using many web servers (computers) grouped together so that they act as, or are seen as, one big web server (see also load balancer);
Adding more hardware resources (i.e., RAM, disks) to each computer;
Tuning OS parameters for hardware capabilities and usage;
Using more efficient computer programs for web servers;
Using other workarounds, especially if dynamic content is involved.
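Bandwidth management and traffic shaping, mentioned in the list above, are often implemented with a token bucket: a sender accumulates "tokens" at a steady rate, spends them to transmit bytes, and sleeps when the bucket runs dry. This is a generic sketch of the idea, not the mechanism of any specific server:

```python
import time

class TokenBucket:
    """Traffic shaper: allow short bursts, then pace output to a steady rate."""

    def __init__(self, rate_bytes_per_sec: float, burst_bytes: float):
        self.rate = rate_bytes_per_sec
        self.capacity = burst_bytes
        self.tokens = burst_bytes        # start with a full burst budget
        self.last = time.monotonic()

    def delay_for(self, nbytes: int) -> float:
        """Seconds a sender should sleep before emitting nbytes."""
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        self.tokens -= nbytes            # may go negative: a debt to pay off
        return max(0.0, -self.tokens / self.rate)

bucket = TokenBucket(rate_bytes_per_sec=1000, burst_bytes=1000)
print(bucket.delay_for(800))        # 0.0 (within the burst allowance)
print(bucket.delay_for(1200) > 0)   # True (must wait roughly a second)
```

A server would call delay_for before writing each chunk of a response, sleeping for the returned interval, so peaks are smoothed into the configured rate.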

Cloud Computing
Cloud computing platforms are growing in popularity, but why? What unique advantages does a cloud computing architecture offer to companies in today's economic climate? And just what is cloud computing, anyway? Let's explore the cloud computing infrastructure and its impact on areas critically important to IT, like security, infrastructure investments, business application development, and more. Most IT departments are forced to spend a significant portion of their time on frustrating implementation, maintenance, and upgrade projects that too often don't add significant value to the company's bottom line. Increasingly, IT teams are turning to cloud computing technology to minimize the time spent on lower-value activities and allow IT to focus on strategic activities with greater impact on the business. The fundamental cloud computing infrastructure has won over the CIOs of some of the world's largest organizations; these once-skeptical executives never looked back after experiencing first-hand the host of benefits delivered by cloud computing technology.

Proven web-services integration. By its very nature, cloud computing technology is much easier and quicker to integrate with your other enterprise applications (both traditional software and cloud computing infrastructure-based), whether third-party or homegrown.
World-class service delivery. Cloud computing infrastructures offer much greater scalability, complete disaster recovery, and impressive uptime numbers.
No hardware or software to install: a 100% cloud computing infrastructure. The beauty of cloud computing technology is its simplicity, and the fact that it requires significantly fewer capital expenditures to get up and running.
Faster and lower-risk deployment. You can get up and running in a fraction of the time with a cloud computing infrastructure. No more waiting months or years and spending millions of dollars before anyone gets to log into your new solution. Your cloud computing technology applications are live in a matter of weeks or months, even with extensive customization or integration.
Support for deep customizations. Some IT professionals mistakenly think that cloud computing technology is difficult or impossible to customize extensively, and therefore not a good choice for complex enterprises. The cloud computing infrastructure not only allows deep customization and application configuration, it preserves all those customizations even during upgrades. Even better, cloud computing technology is ideal for application development to support your organization's evolving needs.
Empowered business users. Cloud computing technology allows on-the-fly, point-and-click customization and report generation for business users, so IT doesn't spend half its time making minor changes and running reports.

Automatic upgrades that don't impact IT resources. Cloud computing infrastructures put an end to a huge IT dilemma: "If we upgrade to the latest-and-greatest version of the application, we'll be forced to spend time and resources (that we don't have) to rebuild our customizations and integrations." Cloud computing technology doesn't force you to decide between upgrading and preserving all your hard work, because those customizations and integrations are automatically preserved during an upgrade.
Pre-built, pre-integrated apps for cloud computing technology. The Force.com AppExchange features hundreds of applications built for the cloud computing infrastructure, pre-integrated with your Salesforce CRM application or your other application development work on Force.com.

A Cloud Computing Infrastructure: What's the Value?


Cloud computing infrastructures and salesforce.com's Force.com platform have won over the CIOs of some of the world's largest organizations. These forward-thinking (yet extremely security-conscious) tech executives fully vetted Force.com and realized the value cloud computing technology offers. Salesforce.com frees companies from traditional software and its hidden costs, high failure rates, unacceptable risks, and protracted implementations, all while providing a comprehensive, flexible platform that meets the needs of businesses of every size, from the world's largest enterprises to small and mid-sized companies everywhere. Salesforce.com minimizes the risk involved in application development and implementation. After all, technology should solve your business problems, not create more headaches. With salesforce.com and the Force.com cloud computing technology, you'll be free to focus on solving strategic problems instead of worrying about infrastructure requirements, maintenance, and upgrades. The cloud computing infrastructure also promises significant savings in administrative costs: more than 50 percent in comparison to client/server software. The areas in which cloud computing saves administrative costs include:

Basic customization. The Force.com cloud computing technology's point-and-click tools empower administrators and business users to perform basic customizations themselves.
Real-time reporting. Easy wizards step users through report and dashboard creation, so IT's queue is free of report requests.
Security and sharing models. The sharing model built into the Force.com cloud computing infrastructure protects sensitive data while making the management of security profiles much less time-consuming.
Multiple languages and currencies. Included support for 13 languages and all currencies makes managing a global application easier.

No wonder so many CIOs are restructuring their companies around a cloud computing infrastructure.

Cloud Computing Technology & Application Development


Cloud computing technology is sparking a huge change in application development circles. Just like the changes that moved publishing technology from paper to bits, making it possible for us to have information about anything in the world right at our fingertips in a flash, the move to a cloud computing infrastructure for application development is making it possible to build robust, enterprise-class applications in a fraction of the time and at a much lower cost. The Force.com platform ushers in a new era of applications in the cloud that bring the power and success of Salesforce CRM to your whole company, not just sales, service, and marketing. New types of application innovation are now possible through a combination of no-programming point-and-click wizards, toolkits for the most popular development languages for creating client-side applications, and Apex Code, salesforce.com's programming language for the Force.com platform. Because the resulting applications run natively on Force.com, developers gain many advantages.

Cloud computing technology boasts all the benefits of multitenancy, including built-in security, reliability, upgradeability, and ease of use. Out-of-the-box features such as analytics, offline access, and mobile deployment speed application development. There's no need to worry about managing and maintaining any server infrastructure, even as applications scale to thousands of users. You can join a community of thousands of developers also focused on business application development for cloud computing infrastructures. The Force.com AppExchange marketplace provides an outlet for all your business application development and access to tens of thousands of salesforce.com customers.

By eliminating the problems of traditional application development, cloud computing technology frees you to focus on developing business applications that deliver true value to your business (or your customers). The Force.com platform lets IT innovate while avoiding the costs and headaches associated with servers, individual software solutions, middleware or point-to-point connections, upgrades, and the staff needed to manage it all.

Hardware Devices
Computer hardware consists of the component devices that are typically installed into, or peripheral to, a computer case to create a personal computer. System software is installed on this hardware, including a firmware interface such as a BIOS and an operating system, which in turn supports the application software that performs the operator's desired functions. Operating systems usually communicate with devices through hardware buses, using software device drivers.

Hardware is the physical parts of a computer; it is a collective term. Hardware includes not only the computer proper but also the cables, connectors, power supply units, and peripheral devices such as the keyboard, mouse, audio speakers, and printers. Devices are any machines or components attached to a computer: examples include disk drives, printers, mice, and modems. These particular devices fall into the category of peripheral devices because they are separate from the main computer. Most devices, whether peripheral or not, require a program called a device driver that acts as a translator, converting general commands from an application into specific commands that the device understands.

In relation to your computer, a hardware device is any physical device that you might attach to the computer. This includes, but is not limited to:

a hard drive
a diskette drive (I refuse to call 3.5" diskettes "floppies")
a USB flash drive
an external hard drive
a CD drive (internal or external)
a DVD drive (internal or external)
a printer
a scanner
a modem (internal or external)
a cable modem
a monitor
a PC-driven wormhole generator

Motherboard

The motherboard is the main component inside the case. It is a large rectangular board with integrated circuitry that connects the other parts of the computer, including the CPU, the RAM, the disk drives (CD, DVD, hard disk, or any others), as well as any peripherals connected via the ports or the expansion slots. Components directly attached to the motherboard include:

The central processing unit (CPU) performs most of the calculations that enable a computer to function, and is sometimes referred to as the "brain" of the computer. It is usually cooled by a heat sink and fan. Newer CPUs include an on-die graphics processing unit (GPU).
The chipset mediates communication between the CPU and the other components of the system, including main memory.

RAM (random-access memory) stores the resident part of the currently running OS (the OS core and so on) and all running processes (application parts using CPU or input/output (I/O) channels, or waiting for CPU or I/O channels).
The BIOS includes boot firmware and power management. The Basic Input/Output System's tasks are handled by operating system drivers. Newer motherboards use the Unified Extensible Firmware Interface (UEFI) instead of a BIOS.
Internal buses connect the CPU to various internal components and to expansion cards for graphics and sound. Current buses include:
o The north bridge memory controller, for RAM and PCI Express
o PCI Express, for expansion cards such as graphics, LAN, and physics processors, and high-end network interfaces
o PCI, for other expansion cards
o SATA, for disk drives
o ATA

External bus controllers support ports for external peripherals. These ports may be controlled directly by the south bridge I/O controller or by expansion cards attached to the motherboard through the PCI bus. Common external buses include:
o USB
o FireWire
o eSATA
o SCSI

Power supply
A power supply unit (PSU) converts alternating current (AC) electric power to low-voltage DC power for the internal components of the computer. Some power supplies have a switch to change between 230 V and 115 V. Other models have automatic sensors that switch input voltage automatically, or are able to accept any voltage between those limits. Power supply units used in computers are nearly always switch mode power supplies (SMPS). The SMPS provides regulated direct current power at the several voltages required by the motherboard and accessories such as disk drives and cooling fans.

Removable media devices


CD (compact disc) - the most common type of removable media, suitable for music and data.
CD-ROM drive - a device used for reading data from a CD.
CD writer - a device used for both reading and writing data to and from a CD.

DVD (digital versatile disc) - a popular type of removable media that has the same dimensions as a CD but stores up to 12 times as much information. It is the most common way of transferring digital video, and is popular for data storage.
DVD-ROM drive - a device used for reading data from a DVD.
DVD writer - a device used for both reading and writing data to and from a DVD.
DVD-RAM drive - a device used for rapid writing and reading of data from a special type of DVD.

Blu-ray Disc - a high-density optical disc format for data and high-definition video; can store 70 times as much information as a CD.
BD-ROM drive - a device used for reading data from a Blu-ray Disc.
BD writer - a device used for both reading and writing data to and from a Blu-ray Disc.

HD DVD - a discontinued competitor to the Blu-ray format.
Floppy disk - an outdated storage device consisting of a thin disk of a flexible magnetic storage medium. Floppies are used today mainly for loading device drivers not included with an operating system release (for example, RAID drivers).
Iomega Zip drive - an outdated medium-capacity removable disk storage system, first introduced by Iomega in 1994.
USB flash drive - a flash memory data storage device integrated with a USB interface; typically small, lightweight, removable, and rewritable. Capacities vary from hundreds of megabytes (in the same range as CDs) to tens of gigabytes (surpassing, at great expense, Blu-ray Discs).
Tape drive - a device that reads and writes data on a magnetic tape, used for long-term storage and backups.

Secondary storage
Hardware that keeps data inside the computer for later use; the data remain persistent even when the computer has no power.

Hard disk - for medium-term storage of data.
Solid-state drive - a device similar to a hard disk, but containing no moving parts; it stores data in a digital format.
RAID array controller - a device to manage several internal or external hard disks, and optionally some peripherals, in order to achieve performance or reliability improvements in what is called a RAID array.

Sound card
Enables the computer to output sound to audio devices, as well as accept input from a microphone. Most modern computers have sound cards built into the motherboard, though it is common for a user to install a separate sound card as an upgrade. Most sound cards, whether built-in or added, have surround sound capabilities.

Input and output peripherals


Input and output devices are typically housed externally to the main computer chassis. The following are either standard or very common to many computer systems.

Input
Text input devices
Keyboard - a device to input text and characters by depressing buttons (referred to as keys).

Pointing devices
Mouse - a pointing device that detects two-dimensional motion relative to its supporting surface.

Optical Mouse - uses light (laser technology) to determine mouse motion.

Trackball - a pointing device consisting of an exposed protruding ball housed in a socket that detects rotation about two axes.

Touchscreen - senses the user pressing directly on the monitor.

Gaming devices
Joystick - a hand-operated pivoted stick whose position is transmitted to the computer.
Game pad - a handheld game controller that relies on the digits (especially the thumbs) to provide input.
Game controller - a specific type of controller specialized for certain gaming purposes.

Image and video input devices
Image scanner - a device that provides input by analyzing images, printed text, handwriting, or an object.

Web cam - a video camera used to provide visual input that can be easily transferred over the internet.

Audio input devices
Microphone - an acoustic sensor that provides input by converting sound into electrical signals.

Output
Printer - a device that produces a permanent human-readable text or graphic document.
Dot matrix printer
Laser printer

Monitor - a device that displays output visually. Display technologies include diode displays such as OLED (organic light-emitting diode).

