
Unit I

Introduction to Computer

Hardware – Input – Output Devices


Input Devices

In computing, an input device is any peripheral (piece of computer hardware equipment) used to
provide data and control signals to an information processing system such as a computer or other
information appliance. Examples of input devices include keyboards, mice, scanners, digital cameras
and joysticks.

Keyboard:
In computing, a keyboard is a typewriter-style device, which uses an arrangement of buttons or keys,
to act as mechanical levers or electronic switches. Following the decline of punch cards and paper tape,
interaction via teleprinter-style keyboards became the main input device for computers.

A keyboard typically has characters engraved or printed on the keys and each press of a key typically
corresponds to a single written symbol. However, to produce some symbols requires pressing and
holding several keys simultaneously or in sequence. While most keyboard keys produce letters,
numbers or signs (characters), other keys or simultaneous key presses can produce actions or execute
computer commands.

Despite the development of alternative input devices, such as the mouse, touch screen, pen devices,
character recognition and voice recognition, the keyboard remains the most commonly used device for
direct (human) input of alphanumeric data into computers.

A keyboard comprises keys of several types. They are as follows:


1. Alphanumeric Keys: Alphabetical, numeric, and punctuation keys are used in the same fashion
as a typewriter keyboard to enter their respective symbol into a word processing program, text
editor, data spreadsheet, or other program. Many of these keys will produce different symbols
when modifier keys or shift keys are pressed. The alphabetic characters become uppercase when
the shift key or Caps Lock key is depressed. The numeric characters become symbols or
punctuation marks when the shift key is depressed. The alphabetical, numeric, and punctuation
keys can also have other functions when they are pressed at the same time as some modifier
keys.
2. Modifier keys: Modifier keys are special keys that modify the normal action of another key,
when the two are pressed in combination. For example, <Alt> + <F4> in Microsoft Windows
will close the program in an active window. In contrast, pressing just <F4> will probably do
nothing, unless assigned a specific function in a particular program. By themselves, modifier
keys usually do nothing.
The most widely used modifier keys include the Control key, Shift key and the Alt key. The
AltGr key is used to access additional symbols for keys that have three symbols printed on them.
On Macintosh and Apple keyboards, the modifier keys are the Option key and the Command
key. On MIT computer keyboards, the Meta key is used as a modifier and for
Windows keyboards, there is a Windows key. Compact keyboard layouts often use a Fn key.
"Dead keys" allow placement of a diacritic mark, such as an accent, on the following letter (e.g.,
the Compose key).
3. Cursor keys: Navigation keys or cursor keys include a variety of keys which move the cursor to
different positions on the screen. Arrow keys are programmed to move the cursor in a specified
direction; page scroll keys, such as the Page Up and Page Down keys, scroll the page up and
down. The Home key is used to return the cursor to the beginning of the line where the cursor is
located; the End key puts the cursor at the end of the line. The Tab key advances the cursor to
the next tab stop. The Insert key is mainly used to switch between overtype mode, in which the
cursor overwrites any text that is present on and after its current location, and insert mode,
where the cursor inserts a character at its current position, forcing all characters past it one
position further. The Delete key discards the character ahead of the cursor's position, moving all
following characters one position "back" towards the freed place. On many notebook computer
keyboards the key labeled Delete (sometimes Delete and Backspace are printed on the same
key) serves the same purpose as a Backspace key. The Backspace key deletes the preceding
character. Lock keys lock part of a keyboard, depending on the settings selected. The lock keys
are scattered around the keyboard. Most styles of keyboards have three LEDs indicating which
locks are enabled, in the upper right corner above the numeric pad. The lock keys include Scroll
lock, Num lock (which allows the use of the numeric keypad), and Caps lock.
4. System Commands: The SysRq and Print screen commands often share the same key. SysRq
was used in earlier computers as a "panic" button to recover from crashes (and it is still used in
this sense to some extent by the Linux kernel; see Magic SysRq key). The Print screen
command once captured the entire screen and sent it to the printer; today it usually
places a screenshot on the clipboard. The Break key/Pause key no longer has a well-defined
purpose. Its origins go back to teleprinter users, who wanted a key that would temporarily
interrupt the communications line. The Break key can be used by software in several different
ways, such as to switch between multiple login sessions, to terminate a program, or to interrupt a
modem connection.

Mouse: In computing, a mouse is a pointing device that detects two-dimensional motion relative to a
surface. This motion is typically translated into the motion of a pointer on a display, which allows for
fine control of a graphical user interface.
Physically, a mouse consists of an object held in one's hand, with one or more buttons. Mice often also
feature other elements, such as touch surfaces and "wheels", which enable additional control and
dimensional input.

Scanner: A scanner is an input device that converts a physical object or document into digital form. In computing, an image
scanner—often abbreviated to just scanner, although the term is ambiguous out of context (barcode
scanner, CAT scanner, etc.)—is a device that optically scans images, printed text, handwriting, or an
object, and converts it to a digital image. Commonly used in offices are variations of the desktop
flatbed scanner where the document is placed on a glass window for scanning. Hand-held scanners,
where the device is moved by hand, have evolved from text scanning "wands" to 3D scanners used for
industrial design, reverse engineering, test and measurement, orthotics, gaming and other applications.
Mechanically driven scanners that move the document are typically used for large-format documents,
where a flatbed design would be impractical.

Light Pen: A light pen is a computer input device in the form of a light-sensitive wand used in
conjunction with a computer's CRT display. It allows the user to point to displayed objects or draw on
the screen in a similar way to a touch screen but with greater positional accuracy. It was long thought
that a light pen could work with any CRT-based display, but not with LCDs (though Toshiba and Hitachi
displayed a similar idea at the "Display 2006" show in Japan) and other display technologies. However,
in 2011 Fairlight Instruments released its Fairlight CMI-30A, which uses a 17" LCD monitor with light
pen control.

A light pen detects a change of brightness of nearby screen pixels when scanned by the cathode ray
tube's electron beam and communicates the timing of this event to the computer. Since a CRT scans the entire
screen one pixel at a time, the computer can keep track of the expected time of scanning various
locations on screen by the beam and infer the pen's position from the latest timestamp.

Touchscreen: A touchscreen is an electronic visual display that the user can control through simple or
multi-touch gestures by touching the screen with a special stylus/pen and/or one or more fingers. Some
touchscreens work with ordinary or specially coated gloves, while others work only with a special stylus/pen.
The user can use the touchscreen to react to what is displayed and to control how it is displayed
(for example by zooming the text size).
The touchscreen enables the user to interact directly with what is displayed, rather than using a mouse,
touchpad, or any other intermediate device (other than a stylus, which is optional for most modern
touchscreens).
Touchscreens are common in devices such as game consoles, all-in-one computers, tablet computers,
and smartphones. They can also be attached to computers or, as terminals, to networks. They also play
a prominent role in the design of digital appliances such as personal digital assistants (PDAs), satellite
navigation devices, mobile phones, video games, and some electronic books.

Joystick: A joystick is an input device consisting of a stick that pivots on a base and reports its angle
or direction to the device it is controlling. A joystick, also known as the control column, is the
principal control device in the cockpit of many civilian and military aircraft, either as a center stick or
side-stick. It often has supplementary switches to control various aspects of the aircraft's flight.

Joysticks are often used to control video games, and usually have one or more push-buttons whose state
can also be read by the computer. A popular variation of the joystick used on modern video game
consoles is the analog stick. Joysticks are also used for controlling machines such as cranes, trucks,
underwater unmanned vehicles, wheelchairs, surveillance cameras, and zero turning radius lawn
mowers. Miniature finger-operated joysticks have been adopted as input devices for smaller electronic
equipment such as mobile phones.

Output Devices: An output device is computer hardware equipment used to communicate the results
of data processing carried out by an information processing system (such as a computer) which
converts the electronically generated information into human-readable form.

VDU / CRT: A monitor or a display is an electronic visual display for computers. The monitor
comprises the display device, circuitry and an enclosure. The display device in modern monitors is
typically a thin film transistor liquid crystal display (TFT-LCD) thin panel, while older monitors used a
cathode ray tube (CRT) about as deep as the screen size. Originally, computer monitors were used for
data processing while television receivers were used for entertainment. From the 1980s onwards,
computers (and their monitors) have been used for both data processing and entertainment, while
televisions have implemented some computer functionality. The common aspect ratio of televisions,
and then computer monitors, has also changed from 4:3 to 16:9 (and 16:10).
Printer: It is a peripheral which makes a persistent human-readable representation of graphics or text
on paper or similar physical media. Individual printers are often designed to support both local and network users
at the same time. Some printers can print documents stored on memory cards or from digital cameras
and scanners.

Consumer and some commercial printers are designed for low-volume, short-turnaround print jobs;
requiring virtually no setup time to achieve a hard copy of a given document. However, printers are
generally slow devices (30 pages per minute is considered fast, and many inexpensive consumer
printers are far slower than that), and the cost per page is actually relatively high. However, this is
offset by the on-demand convenience and project management costs being more controllable compared
to an out-sourced solution. The printing press remains the machine of choice for high-volume,
professional publishing. However, as printers have improved in quality and performance, many jobs
which used to be done on printing presses are now done by print on demand or by users on local
printers; see desktop publishing. Local printers are also increasingly taking over the process of
photofinishing as digital photo printers become commonplace. The world's first computer printer was a
19th-century mechanically driven apparatus invented by Charles Babbage for his difference engine.

Types of Printers: They are as follows –

1. Line printers print an entire line of text at a time. They are as under –

a) Drum printers, where a horizontally mounted rotating drum carries the entire character
set of the printer repeated in each printable character position. The IBM 1132 printer is an
example of a drum printer.

b) Chain or train printers, where the character set is arranged multiple times around a
linked chain or a set of character slugs in a track traveling horizontally past the print line.
The IBM 1403 is perhaps the most popular, and came in both chain and train varieties.
The band printer is a later variant where the characters are embossed on a flexible steel
band. The LP27 from Digital Equipment Corporation is a band printer.

c) Bar printers, where the character set is attached to a solid bar that moves horizontally
along the print line, such as the IBM 1443.

2. Impact Printers rely on a forcible impact to transfer ink to the media. An impact printer uses a
print head that strikes an ink ribbon, pressing the ribbon against the paper.
They are as follows –

a) Dot Matrix Printer, is a type of computer printing which uses a print head that runs back
and forth, or in an up and down motion, on the page and prints by impact, striking an ink-
soaked cloth ribbon against the paper, much like the print mechanism on a typewriter.
However, unlike a typewriter or daisy wheel printer, letters are drawn out of a dot matrix,
and thus, varied fonts and arbitrary graphics can be produced. The common serial dot
matrix printers use a horizontally moving print head. The print head can be thought of as
featuring a single vertical column of seven or more pins approximately the height of a
character box. In reality, the pins are arranged in up to four vertically or/and horizontally
slightly displaced columns in order to increase the dot density and print speed through
interleaving without causing the pins to jam. Thereby, up to 48 pins can be used to form
the characters of a line while the print head moves horizontally.

b) Daisy Wheel Printer, is an impact printing technology invented in 1969 by David S. Lee
at Diablo Data Systems. It uses interchangeable pre-formed type elements, each with
typically 96 glyphs, to generate high-quality output comparable to premium typewriters
such as the IBM Selectric, but two to three times faster. Daisy wheel printing was used in
electronic typewriters, word processors and computers from 1972. The daisy wheel is
considered to be so named because of its resemblance to the daisy flower.

3. Laser printing is an electrostatic digital printing process that rapidly produces high quality text
and graphics by passing a laser beam over a charged drum to define a differentially charged
image. The drum then selectively collects charged toner and transfers the image to paper, which
is then heated to permanently fix the image. As with digital photocopiers and multifunction
printers (MFPs), laser printers employ a xerographic printing process, but differ from analog
photocopiers in that the image is produced by the direct scanning of the medium across the
printer's photoreceptor. Hence, it proves to be a much faster process compared to the latter. A
laser beam, typically from an aluminum gallium arsenide semiconductor laser, projects an image
of the page to be printed onto an electrically charged rotating drum coated with selenium or,
more common in modern printers, organic photoconductors. Photoconductivity allows charge to
leak away from the areas exposed to light. Powdered ink particles are then electrostatically
picked up by the drum's charged areas, which have not been exposed to the laser beam. The
drum then prints the image onto paper by direct contact and heat, which fuses the ink to the
paper.

4. Inkjet printing is a type of computer printing that creates a digital image by propelling droplets
of ink onto paper, plastic, or other substrates. Inkjet printers are the most commonly used type of
printer and range from small inexpensive consumer models to very large professional machines
that can cost tens of thousands of dollars, or more.

Storage Devices –

A data storage device is a device for recording (storing) information (data). Recording can be done
using virtually any form of energy, spanning from manual muscle power in handwriting, to acoustic
vibrations in phonographic recording, to electromagnetic energy modulating magnetic tape and optical
discs.

A storage device may hold information, process information, or both. A device that only holds
information is a recording medium. Devices that process information (data storage equipment) may
either access a separate portable (removable) recording medium or use a permanent component to store and
retrieve information.

Electronic data storage requires electrical power to store and retrieve that data. Most storage devices
that do not require vision and a brain to read data fall into this category. Electromagnetic data may be
stored in either an analog data or digital data format on a variety of media. This type of data is
considered to be electronically encoded data, whether or not it is electronically stored in a
semiconductor device, for it is certain that a semiconductor device was used to record it on its medium.
Most electronically processed data storage media (including some forms of computer data storage) are
considered permanent (non-volatile) storage, that is, the data will remain stored when power is
removed from the device. In contrast, most electronically stored information within semiconductor
(computer chip) microcircuits is volatile memory, for it vanishes if power is removed.

With the exception of barcodes and OCR data, electronic data storage is easier to revise and may be
more cost effective than alternative methods due to smaller physical space requirements and the ease of
replacing (rewriting) data on the same medium. However, the durability of methods such as printed
data is still superior to that of most electronic storage media. The durability limitations may be
overcome with the ease of duplicating (backing-up) electronic data.

Types of storage devices –


Magnetic Storage Devices
a. Magnetic tape data storage is a system for storing digital information on magnetic tape
using digital recording. Modern magnetic tape is most commonly packaged in cartridges
and cassettes. The device that performs writing or reading of data is a tape drive.
Autoloaders and tape libraries automate cartridge handling.
b. Hard Disk Drive (HDD) is a data storage device used for storing and retrieving digital
information using rapidly rotating disks (platters) coated with magnetic material. An
HDD retains its data even when powered off. Data is read in a random-access manner,
meaning individual blocks of data can be stored or retrieved in any order rather than
sequentially. An HDD consists of one or more rigid ("hard") rapidly rotating disks
(platters) with magnetic heads arranged on a moving actuator arm to read and write data
to the surfaces. The primary characteristics of an HDD are its capacity and performance.
Capacity is specified in unit prefixes corresponding to powers of 1000: a 1-terabyte (TB)
drive has a capacity of 1,000 gigabytes (GB; where 1 gigabyte = 1 billion bytes).
Typically, some of an HDD's capacity is unavailable to the user because it is used by the
file system and the computer operating system, and possibly inbuilt redundancy for error
correction and recovery. Performance is specified by the time to move the heads to a file
(average access time) plus the time it takes for the file to move under its head (average
latency, a function of the physical rotational speed in revolutions per minute) and the
speed at which the file is transmitted (data rate); a short illustrative sketch of these
figures follows this list.
c. Floppy Disk, or diskette, is a disk storage medium composed of a disk of thin and
flexible magnetic storage medium, sealed in a rectangular plastic carrier lined with fabric
that removes dust particles. Floppy disks are read and written by a floppy disk drive
(FDD). Floppy disks, initially as 8-inch (200 mm) media and later in 5¼-inch (133 mm)
and 3½-inch (90 mm) sizes, were a ubiquitous form of data storage and exchange from
the mid-1970s well into the 2000s. By 2010, computer motherboards were rarely
manufactured with floppy drive support; 3½-inch floppy disks can be used with an
external USB floppy disk drive, but USB drives for 5¼-inch, 8-inch and non-standard
diskettes are rare or non-existent, and those formats must usually be handled by old
equipment. While floppy disk drives still have some limited uses, especially with legacy
industrial computer equipment, they have been superseded by data storage methods with
much greater capacity, such as USB flash drives, portable external hard disk drives,
optical discs, memory cards and computer networks.
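
A minimal Python sketch of the two hard-disk figures mentioned above – the decimal capacity prefixes and the average rotational latency. The 1 TB size and the 7200 RPM spindle speed are illustrative assumptions, not values taken from the text:

# Illustrative sketch: decimal (vendor) capacity vs. binary (OS-reported) capacity,
# and average rotational latency derived from spindle speed.
# The 1 TB size and 7200 RPM figure below are example values only.

def advertised_to_binary_gib(terabytes_decimal: float) -> float:
    """Convert a capacity sold in decimal terabytes (10**12 bytes) to GiB (2**30 bytes)."""
    return terabytes_decimal * 10**12 / 2**30

def average_rotational_latency_ms(rpm: float) -> float:
    """Average rotational latency is half a revolution: 0.5 * (60 / rpm) seconds."""
    return 0.5 * (60.0 / rpm) * 1000.0

print(f"1 TB drive is about {advertised_to_binary_gib(1):.1f} GiB as reported by most operating systems")
print(f"Average rotational latency at 7200 RPM is about {average_rotational_latency_ms(7200):.2f} ms")

Running the sketch shows why a drive sold as 1 TB appears as roughly 931 GiB to the operating system, and why a 7200 RPM disk has an average rotational latency of about 4.2 ms.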
Optical Storage Devices

a. CD-ROM is a pre-pressed optical compact disc which contains data. The name is an
acronym which stands for "Compact Disc Read-Only Memory". Computers can read
CD-ROMs but cannot write to them, as CD-ROMs are neither writable nor erasable. Until
the mid-2000s, CD-ROMs were popularly used to distribute software for computers and
video game consoles. Some CDs, called enhanced CDs, hold both computer data and
audio with the latter capable of being played on a CD player, while data (such as software
or digital video) is only usable on a computer (such as ISO 9660 format PC CD-ROMs).

b. DVD, also called "digital video disc" or "digital versatile disc", is a digital optical
disc storage format, invented and developed by Philips, Sony, Toshiba, and Panasonic in
1995. DVDs can be played in many types of players, including DVD players. DVDs offer
higher storage capacity than compact discs while having the same dimensions. Pre-
recorded DVDs are mass-produced using molding machines that physically stamp data
onto the DVD. Such discs are known as DVD-ROM, because data can only be read and
not written or erased. Blank recordable DVD discs (DVD-R and DVD+R) can be
recorded once using a DVD recorder and then function as a DVD-ROM. Rewritable
DVDs (DVD-RW, DVD+RW, and DVD-RAM) can be recorded and erased multiple
times.
Memory –
It refers to the physical devices used to store programs (sequences of instructions) or data (e.g. program
state information) on a temporary or permanent basis for use in a computer or other digital electronic
device. The term primary memory is used for information in physical systems that function at
high speed (i.e. RAM), as distinguished from secondary memory, which consists of physical devices for
program and data storage that are slower to access but offer higher memory capacity. Primary memory
stored on secondary memory is called "virtual memory". An archaic synonym for memory is store.

The term "memory", meaning primary memory is often associated with addressable semiconductor
memory, i.e. integrated circuits consisting of silicon-based transistors, used for example as primary
memory but also other purposes in computers and other digital electronic devices. There are two main
types of semiconductor memory: volatile and non-volatile. Examples of non-volatile memory are flash
memory (sometimes used as secondary, sometimes primary computer memory) and
ROM/PROM/EPROM/EEPROM memory (used for firmware such as boot programs). Examples of
volatile memory are primary memory (typically dynamic RAM, DRAM), and fast CPU cache memory
(typically static RAM, SRAM, which is fast but energy-consuming and offers lower memory capacity
per unit area than DRAM).
The classification of memory can be summarised as follows –

Memory
    Primary Memory
        RAM
        ROM (PROM, EPROM, EEPROM)
    Secondary Memory
        Magnetic Memory
        Optical Memory

Primary Memory / Main Memory / Internal Memory –


Primary storage (or main memory or internal memory), often referred to simply as memory, is the only
one directly accessible to the CPU. The CPU continuously reads instructions stored there and executes
them as required. Any data actively operated on is also stored there in a uniform manner.
Historically, early computers used delay lines, Williams tubes, or rotating magnetic drums as primary
storage. By 1954, those unreliable methods were mostly replaced by magnetic core memory. Core
memory remained dominant until the 1970s, when advances in integrated circuit technology allowed
semiconductor memory to become economically competitive. It is of following types –

1. Random Access Memory (RAM) – It is a form of computer data storage. A random-access
memory device allows data items to be read and written in roughly the same amount of time
regardless of the order in which data items are accessed. In contrast, with other direct-access
data storage media such as hard disks, CD-RWs, DVD-RWs and the older drum memory, the
time required to read and write data items varies significantly depending on their physical
locations on the recording medium, due to mechanical limitations such as media rotation speeds
and arm movement delays. It is of two types –
a. SRAM – In SRAM, a bit of data is stored using the state of a flip-flop. This form of
RAM is more expensive to produce, but is generally faster and requires less power than
DRAM and, in modern computers, is often used as cache memory for the CPU.
b. DRAM – DRAM stores a bit of data using a transistor and capacitor pair, which together
comprise a memory cell. The capacitor holds a high or low charge (1 or 0, respectively),
and the transistor acts as a switch that lets the control circuitry on the chip read the
capacitor's state of charge or change it. As this form of memory is less expensive to
produce than static RAM, it is the predominant form of computer memory used in
modern computers.
2. Read Only Memory (ROM) – It is a class of storage medium used in computers and other
electronic devices. Data stored in ROM can only be modified slowly or with difficulty, so it is
mainly used to distribute firmware (software that is very closely tied to specific hardware, and
unlikely to need frequent updates). Strictly, read-only memory refers to memory that is hard-
wired, such as diode matrix and the later mask ROM. Although discrete circuits can be altered
(in principle), ICs cannot and are useless if the data is bad. The fact that such memory can never
be changed is a large drawback; more recently, ROM commonly refers to memory that is read-only
in normal operation but can still be reprogrammed in some way. It is of
following types –
a. Programmable read-only memory (PROM), or one-time programmable ROM
(OTP), can be written to or programmed via a special device called a PROM
programmer. Typically, this device uses high voltages to permanently destroy or create
internal links (fuses or antifuses) within the chip. Consequently, a PROM can only be
programmed once.

b. Erasable programmable read-only memory (EPROM) can be erased by exposure to
strong ultraviolet light (typically for 10 minutes or longer), then rewritten with a process
that again needs higher than usual voltage applied. Repeated exposure to UV light will
eventually wear out an EPROM, but the endurance of most EPROM chips exceeds 1000
cycles of erasing and reprogramming. EPROM chip packages can often be identified by
the prominent quartz "window" which allows UV light to enter. After programming, the
window is typically covered with a label to prevent accidental erasure. Some EPROM
chips are factory-erased before they are packaged, and include no window; these are
effectively PROM.

c. Electrically erasable programmable read-only memory (EEPROM) is based on a
similar semiconductor structure to EPROM, but allows its entire contents (or selected
banks) to be electrically erased, then rewritten electrically, so that they need not be
removed from the computer (or camera, MP3 player, etc.). Writing or flashing an
EEPROM is much slower (milliseconds per bit) than reading from a ROM or writing to a
RAM (nanoseconds in both cases).

Software –

Computer software, or simply software, also known as computer programs, is the non-tangible
component of computers. Computer software contrasts with computer hardware, which is the physical
component of computers. Computer hardware and software require each other and neither can be
realistically used without the other.

Computer software includes all computer programs regardless of their architecture; for example,
executable files, libraries and scripts are computer software. Yet it shares their mutual properties:
software consists of clearly defined instructions that, upon execution, instruct the hardware to perform the
tasks for which it is designed. Software is stored in computer memory and cannot be touched, just as a
3D model shown in an illustration cannot be touched.

At the lowest level, executable code consists of machine language instructions specific to an individual
processor – typically a central processing unit (CPU). A machine language consists of groups of binary
values signifying processor instructions that change the state of the computer from its preceding state.
For example, an instruction may change the value stored in a particular storage location inside the
computer – an effect that is not directly observable to the user. An instruction may also (indirectly)
cause something to appear on a display of the computer system – a state change which should be
visible to the user. The processor carries out the instructions in the order they are provided, unless it is
instructed to "jump" to a different instruction, or interrupted.

Software is usually written in high-level programming languages that are easier and more efficient for
humans to use (closer to natural language) than machine language.[2] High-level languages are
compiled or interpreted into machine language object code. Software may also be written in a low-level
assembly language, essentially, a vaguely mnemonic representation of a machine language using a
natural language alphabet. Assembly language is converted into object code via an assembler.

It is of following types –
1. System Software - A generic term referring to the computer programs used to start and run
computer systems and networks. It includes –
a. Operating systems – An operating system (OS) is a collection of software that
manages computer hardware resources and provides common services for
computer programs. The operating system is an essential component of the system
software in a computer system. Application programs usually require an operating
system to function.
b. Device drivers – A device driver (commonly referred to as simply a driver) is a
computer program that operates or controls a particular type of device that is
attached to a computer. A driver provides a software interface to hardware
devices, enabling operating systems and other computer programs to access
hardware functions without needing to know precise details of the hardware being
used.
c. Middleware – It is computer software that provides services to software
applications beyond those available from the operating system. It can be described
as "software glue". Middleware makes it easier for software developers to implement
communication and input/output, so they can focus on the specific purpose of their application.
d. Utility Software – It is system software designed to help analyze, configure,
optimize or maintain a computer. Utility software usually focuses on how the
computer infrastructure (including the computer hardware, operating system,
application software and data storage) operates. Due to this focus, utilities are
often rather technical and targeted at people with an advanced level of computer
knowledge - in contrast to application software, which allows users to do things
like creating text documents, playing video games, listening to music or viewing
websites.
e. Shells and windowing systems – In computing, a shell is a user interface for
access to an operating system's services. Generally, operating system shells use
either a command-line interface (CLI) or graphical user interface (GUI). Mac OS
and Windows are widely used operating systems with GUIs. Similarly, a
windowing system (or window system) is a type of graphical user interface
(GUI) which implements the WIMP (windows, icons, menus, pointer) paradigm
for a user interface. Each currently running application is assigned a usually
resizable and usually rectangular shaped surface of the display to present its
graphical user interface to the user.
2. Application Software – It is the general designation of computer programs for performing user
tasks. Application software may have a general purpose (word processing, web browsers) or
have a specific purpose (accounting, truck scheduling). Since the development and near-
universal adoption of the web, an important distinction that has emerged has been between web
applications — written with HTML, JavaScript and other web-native technologies and typically
requiring one to be online and running a web browser, and the more traditional native
applications written in whatever languages are available for one's particular type of computer.
There has been contentious debate in the computing community regarding web applications
replacing native applications for many purposes, especially on mobile devices such as smart
phones and tablets. Web apps have indeed greatly increased in popularity for some uses, but the
advantages of applications make them unlikely to disappear soon, if ever. Furthermore, the two
can be complementary, and even integrated. Application software can also be seen as being
either horizontal or vertical. Horizontal applications are more popular and widespread, because
they are general purpose, for example word processors or databases. Vertical applications are
niche products, designed for a particular type of industry or business, or department within an
organization. Integrated suites of software will try to handle every specific aspect possible of,
for example, manufacturing or banking systems, or accounting, or customer service. There are
many types of application software:
• An application suite consists of multiple applications bundled together. They usually have
related functions, features and user interfaces, and may be able to interact with each other, e.g.
open each other's files. Business applications often come in suites, e.g. Microsoft Office,
LibreOffice and iWork, which bundle together a word processor, a spreadsheet, etc.; but suites
exist for other purposes, e.g. graphics or music.
• Enterprise software addresses the needs of an entire organization's processes and data flow,
across most or all departments, often in a large distributed environment. (Examples include
financial systems, customer relationship management (CRM) systems and supply chain
management software). Departmental Software is a sub-type of enterprise software with a focus
on smaller organizations and/or groups within a large organization. (Examples include travel
expense management and IT Helpdesk.)
• Enterprise infrastructure software provides common capabilities needed to support enterprise
software systems. (Examples include databases, email servers, and systems for managing
networks and security.)
• Information worker software lets users create and manage information, often for individual
projects within a department, in contrast to enterprise management. Examples include time
management, resource management, documentation tools, analytical, and collaborative. Word
processors, spreadsheets, email and blog clients, personal information system, and individual
media editors may aid in multiple information worker tasks.
• Content access software is used primarily to access content without editing, but may include
software that allows for content editing. Such software addresses the needs of individuals and
groups to consume digital entertainment and published digital content. (Examples include media
players, web browsers, and help browsers.)
• Educational software is related to content access software, but has the content and/or features
adapted for use by educators or students. For example, it may deliver evaluations (tests), track
progress through material, or include collaborative capabilities.
• Simulation software simulates physical or abstract systems for either research, training or
entertainment purposes.
• Media development software generates print and electronic media for others to consume, most
often in a commercial or educational setting. This includes graphic-art software, desktop
publishing software, multimedia development software, HTML editors, digital-animation
editors, digital audio and video composition, and many others.
• Product engineering software is used in developing hardware and software products. This
includes computer-aided design (CAD), computer-aided engineering (CAE), computer language
editing and compiling tools, integrated development environments, and application programmer
interfaces.

Compilers –
A compiler is a computer program (or set of programs) that transforms source code written in a
programming language (the source language) into another computer language (the target language,
often having a binary form known as object code). The most common reason for wanting to transform
source code is to create an executable program.
The name "compiler" is primarily used for programs that translate source code from a high-level
programming language to a lower level language (e.g., assembly language or machine code). If the
compiled program can run on a computer whose CPU or operating system is different from the one on
which the compiler runs, the compiler is known as a cross-compiler. A program that translates from a
low level language to a higher level one is a decompiler. A program that translates between high-level
languages is usually called a language translator, source to source translator, or language converter. A
language rewriter is usually a program that translates the form of expressions without a change of
language. A compiler is likely to perform many or all of the following operations: lexical analysis,
preprocessing, parsing, semantic analysis (Syntax-directed translation), code generation, and code
optimization.
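
To make the first of those operations concrete, the following is a minimal Python sketch of a lexical analyser (tokenizer) for simple arithmetic expressions; the token names and the tiny expression grammar are illustrative assumptions, not the design of any particular compiler:

import re

# A minimal, illustrative lexer for arithmetic expressions such as "12 + 3 * (4 - 1)".
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),        # integer literals
    ("OP",     r"[+\-*/]"),    # arithmetic operators
    ("LPAREN", r"\("),
    ("RPAREN", r"\)"),
    ("SKIP",   r"\s+"),        # whitespace, discarded
]
MASTER_RE = re.compile("|".join(f"(?P<{name}>{pattern})" for name, pattern in TOKEN_SPEC))

def tokenize(source: str):
    """Yield (kind, text) pairs -- the stream a parser would consume next."""
    for match in MASTER_RE.finditer(source):
        if match.lastgroup != "SKIP":
            yield match.lastgroup, match.group()

print(list(tokenize("12 + 3 * (4 - 1)")))
# [('NUMBER', '12'), ('OP', '+'), ('NUMBER', '3'), ('OP', '*'), ('LPAREN', '('), ...]

The stream of (kind, text) tokens produced here is what the parsing and semantic-analysis stages would consume next.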

Interpreters –
In computer science, an interpreter is a computer program that directly executes, i.e. performs,
instructions written in a programming or scripting language, without previously batch-compiling them
into machine language. An interpreter generally uses one of the following strategies for program
execution:
1. parse the source code and perform its behavior directly
2. translate source code into some efficient intermediate representation and immediately execute
this
3. explicitly execute stored precompiled code made by a compiler which is part of the interpreter
system
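
As a rough illustration of the first strategy (parse the source code and perform its behaviour directly), the following Python sketch interprets single arithmetic expressions; it borrows Python's own ast parser for brevity and is a toy built on stated assumptions, not how production interpreters are organised:

import ast
import operator

# Strategy 1 in miniature: parse the source into a tree, then walk the tree and
# perform each operation directly, without generating machine code.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def evaluate(node):
    if isinstance(node, ast.Expression):
        return evaluate(node.body)
    if isinstance(node, ast.Constant):      # a numeric literal
        return node.value
    if isinstance(node, ast.BinOp):         # e.g. left + right
        return OPS[type(node.op)](evaluate(node.left), evaluate(node.right))
    raise SyntaxError(f"unsupported construct: {ast.dump(node)}")

def interpret(source: str):
    tree = ast.parse(source, mode="eval")   # step 1: parse
    return evaluate(tree)                   # step 2: perform the behaviour directly

print(interpret("2 + 3 * (10 - 4)"))        # 20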

Assemblers –
An assembler is a program which creates object code by translating combinations of mnemonics and
syntax for operations and addressing modes into their numerical equivalents. This representation
typically includes an operation code ("opcode") as well as other control bits. The assembler also
calculates constant expressions and resolves symbolic names for memory locations and other entities.
The use of symbolic references is a key feature of assemblers, saving tedious calculations and manual
address updates after program modifications. Most assemblers also include macro facilities for
performing textual substitution—e.g., to generate common short sequences of instructions as inline,
instead of called subroutines. Some assemblers may also be able to perform some simple types of
instruction set-specific optimizations. One concrete example of this may be the ubiquitous x86
assemblers from various vendors. Most of them are able to perform jump-instruction replacements
(long jumps replaced by short or relative jumps) in any number of passes, on request. Others may even
do simple rearrangement or insertion of instructions, such as some assemblers for RISC architectures
that can help optimize a sensible instruction scheduling to exploit the CPU pipeline as efficiently as
possible. Like early programming languages such as FORTRAN, Algol, COBOL and Lisp, assemblers
have been available since the 1950s and the first generations of text based computer interfaces.
However, assemblers came first as they are far simpler to write than compilers for high-level
languages. This is because each mnemonic along with the addressing modes and operands of an
instruction translates rather directly into the numeric representations of that particular instruction,
without much context or analysis. There have also been several classes of translators and semi
automatic code generators with properties similar to both assembly and high level languages, with
speedcode as perhaps one of the better known examples.
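
The following toy two-pass assembler sketch in Python illustrates the two core jobs described above, resolving symbolic names and translating mnemonics into numeric opcodes; the mnemonics, opcode values and fixed two-byte encoding are invented purely for illustration:

# A toy two-pass assembler for an invented instruction set.
OPCODES = {"LOAD": 0x01, "ADD": 0x02, "STORE": 0x03, "JMP": 0x04, "HALT": 0xFF}

def assemble(lines):
    # Pass 1: record the address of every label so symbolic names can be resolved.
    symbols, address = {}, 0
    for line in lines:
        line = line.split(";")[0].strip()       # drop comments and blank lines
        if not line:
            continue
        if line.endswith(":"):
            symbols[line[:-1]] = address
        else:
            address += 2                        # every instruction is two bytes here
    # Pass 2: emit opcode + operand, substituting label addresses where needed.
    code = bytearray()
    for line in lines:
        line = line.split(";")[0].strip()
        if not line or line.endswith(":"):
            continue
        parts = line.split()
        mnemonic = parts[0]
        operand = parts[1] if len(parts) > 1 else "0"
        value = symbols[operand] if operand in symbols else int(operand, 0)
        code += bytes([OPCODES[mnemonic], value & 0xFF])
    return bytes(code)

program = ["start:", "LOAD 10", "ADD 5", "STORE 0x20", "JMP start", "HALT"]
print(assemble(program).hex())   # 010a020503200400ff00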

Computer Languages –
A programming language is an artificial language designed to communicate instructions to a
machine, particularly a computer. Programming languages can be used to create programs that control
the behavior of a machine and/or to express algorithms. It can be defined as “A computer programming
language is a language used to write computer programs, which involve a computer performing some
kind of computation or algorithm and possibly control external devices such as printers, disk drives,
robots and so on.”
Languages are of following types –
1. High Level Language – In computer science, a high-level programming language is a
programming language with strong abstraction from the details of the computer. In comparison
to low-level programming languages, it may use natural language elements, be easier to use, or
may automate (or even hide entirely) significant areas of computing systems (e.g. memory
management), making the process of developing a program simpler and more understandable
relative to a lower-level language. The amount of abstraction provided defines how "high-level"
a programming language is. Ex: FORTRAN.
2. Low Level Language – In computer science, a low-level programming language is a
programming language that provides little or no abstraction from a computer's instruction set
architecture. Generally this refers to either machine code or assembly language. The word "low"
refers to the small or nonexistent amount of abstraction between the language and machine
language; because of this, low-level languages are sometimes described as being "close to the
hardware". Low-level languages can be converted to machine code without using a compiler or
interpreter, and the resulting code runs directly on the processor. A program written in a low-
level language can be made to run very quickly, and with a very small memory footprint; an
equivalent program in a high-level language will be more heavyweight. Low-level languages are
simple, but are considered difficult to use, due to the numerous technical details which must be
remembered.

By comparison, a high-level programming language isolates the execution semantics of the
computer architecture from the specification of the program, which simplifies development.

Computer Generations

First Generation (1940-1956) Vacuum Tubes


The first computers used vacuum tubes for circuitry and magnetic drums for memory, and were often
enormous, taking up entire rooms. They were very expensive to operate and in addition to using a great
deal of electricity, generated a lot of heat, which was often the cause of malfunctions. First generation
computers relied on machine language, the lowest-level programming language understood by
computers, to perform operations, and they could only solve one problem at a time. Input was based on
punched cards and paper tape, and output was displayed on printouts.

The UNIVAC and ENIAC computers are examples of first-generation computing devices. The
UNIVAC was the first commercially produced computer; the first unit was delivered to the U.S.
Census Bureau in 1951.

Second Generation (1956-1963) Transistors

Transistors replaced vacuum tubes and ushered in the second generation of computers. The transistor
was invented in 1947 but did not see widespread use in computers until the late 1950s. The transistor
was far superior to the vacuum tube, allowing computers to become smaller, faster, cheaper, more
energy-efficient and more reliable than their first-generation predecessors. Though the transistor still
generated a great deal of heat that subjected the computer to damage, it was a vast improvement over
the vacuum tube. Second-generation computers still relied on punched cards for input and printouts for
output. Second-generation computers moved from cryptic binary machine language to symbolic, or
assembly, languages, which allowed programmers to specify instructions in words. High-level
programming languages were also being developed at this time, such as early versions of COBOL and
FORTRAN. These were also the first computers that stored their instructions in their memory, which
moved from a magnetic drum to magnetic core technology. The first computers of this generation were
developed for the atomic energy industry.

Third Generation (1964-1971) Integrated Circuits

The development of the integrated circuit was the hallmark of the third generation of computers.
Transistors were miniaturized and placed on silicon chips, called semiconductors, which drastically
increased the speed and efficiency of computers. Instead of punched cards and printouts, users
interacted with third generation computers through keyboards and monitors and interfaced with an
operating system, which allowed the device to run many different applications at one time with a
central program that monitored the memory. Computers for the first time became accessible to a mass
audience because they were smaller and cheaper than their predecessors.

Fourth Generation (1971-Present) Microprocessors

The microprocessor brought the fourth generation of computers, as thousands of integrated circuits
were built onto a single silicon chip. What in the first generation filled an entire room could now fit in
the palm of the hand. The Intel 4004 chip, developed in 1971, located all the components of the
computer—from the central processing unit and memory to input/output controls—on a single chip. In
1981 IBM introduced its first computer for the home user, and in 1984 Apple introduced the
Macintosh. Microprocessors also moved out of the realm of desktop computers and into many areas of
life as more and more everyday products began to use microprocessors. As these small computers
became more powerful, they could be linked together to form networks, which eventually led to the
development of the Internet. Fourth generation computers also saw the development of GUIs, the
mouse and handheld devices.
Fifth Generation (Present and Beyond) Artificial Intelligence

Fifth generation computing devices, based on artificial intelligence, are still in development, though
there are some applications, such as voice recognition, that are being used today. The use of parallel
processing and superconductors is helping to make artificial intelligence a reality. Quantum
computation and molecular and nanotechnology will radically change the face of computers in years to
come. The goal of fifth-generation computing is to develop devices that respond to natural language
input and are capable of learning and self-organization.

Number System –

A numeral system (or system of numeration) is a writing system for expressing numbers, that is, a
mathematical notation for representing numbers of a given set, using digits or other symbols in a
consistent manner. It can be seen as the context that allows the symbols "11" to be interpreted as the
binary symbol for three, the decimal symbol for eleven, or a symbol for other numbers in different
bases.

1. Binary Number System – In mathematics and digital electronics, a binary number is a
number expressed in the binary numeral system, or base-2 numeral system, which represents
numeric values using two different symbols: typically 0 (zero) and 1 (one). More specifically,
the usual base-2 system is a positional notation with a radix of 2. Because of its straightforward
implementation in digital electronic circuitry using logic gates, the binary system is used
internally by almost all modern computers and computer-based devices such as mobile phones.

Conversion from Decimal to Binary – To convert a base-10 integer to its base-2 (binary)
equivalent, the number is divided by two and the remainder is the least-significant bit. The
(integer) quotient is again divided by two; its remainder is the next least-significant bit. This
process repeats until the quotient becomes zero. e.g., converting (1197)10 to binary –

1197 ÷ 2 = 598, remainder 1
 598 ÷ 2 = 299, remainder 0
 299 ÷ 2 = 149, remainder 1
 149 ÷ 2 =  74, remainder 1
  74 ÷ 2 =  37, remainder 0
  37 ÷ 2 =  18, remainder 1
  18 ÷ 2 =   9, remainder 0
   9 ÷ 2 =   4, remainder 1
   4 ÷ 2 =   2, remainder 0
   2 ÷ 2 =   1, remainder 0
   1 ÷ 2 =   0, remainder 1

Reading the remainders from last to first gives (1197)10 = (10010101101)2. The result can be
checked by the reverse (doubling) method: starting from 0 and computing prior value × 2 + next
bit for each bit from left to right gives 1, 2, 4, 9, 18, 37, 74, 149, 299, 598 and finally 1197.
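
A short Python sketch of the repeated-division method above; the built-in bin() function is used only as a cross-check –

def decimal_to_binary(n: int) -> str:
    """Repeatedly divide by 2; the remainders, read last to first, are the bits."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        n, remainder = divmod(n, 2)
        bits.append(str(remainder))      # least-significant bit first
    return "".join(reversed(bits))

print(decimal_to_binary(1197))   # 10010101101
print(bin(1197))                 # 0b10010101101 (built-in cross-check)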
Conversion from Binary to Decimal – To convert a binary number to decimal, each bit is
multiplied by the power of 2 corresponding to its position (counting from 0 at the rightmost
bit), and the products are added to get the number. e.g., for (10010101101)2 –

1×2^10 + 0×2^9 + 0×2^8 + 1×2^7 + 0×2^6 + 1×2^5 + 0×2^4 + 1×2^3 + 1×2^2 + 0×2^1 + 1×2^0
= 1024 + 0 + 0 + 128 + 0 + 32 + 0 + 8 + 4 + 0 + 1 = (1197)10
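
The positional-weight calculation above can be sketched in Python as follows; int(..., 2) is shown only as a cross-check –

def binary_to_decimal(bits: str) -> int:
    """Sum bit * 2**position, counting positions from the right-hand end."""
    return sum(int(bit) * 2**power for power, bit in enumerate(reversed(bits)))

print(binary_to_decimal("10010101101"))   # 1197
print(int("10010101101", 2))              # 1197 (built-in cross-check)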

Binary to Hexadecimal – To convert a binary number into its hexadecimal equivalent, divide it
into groups of four bits, starting from the right. If the number of bits is not a multiple of four,
simply insert extra 0 bits at the left; this addition of leading zeros is called padding.

(1010010)2 = 0101 0010 grouped with padding = (52)16

(11011101)2 = 1101 1101 grouped = (DD)16

Hexadecimal to Binary – One way is to first convert the hexadecimal number into its decimal
equivalent by multiplying the decimal value of each hexadecimal digit by the corresponding
power of 16 and adding the resulting values:

(C0E7)16 = (12 × 16^3) + (0 × 16^2) + (14 × 16^1) + (7 × 16^0) = (12 × 4096) + (0 × 256) + (14 × 16) +
(7 × 1) = (49383)10

The obtained decimal equivalent (49383)10 can then be converted to binary by repeated division by 2.

It can also be done directly by writing the four-bit binary equivalent of each hexadecimal digit:

(C0E7)16 = 1100 0000 1110 0111
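
Both directions of the grouping method can be sketched in Python; the zfill-based padding mirrors the insertion of leading zeros described above –

def binary_to_hex(bits: str) -> str:
    """Pad on the left to a multiple of four bits, then convert each group of four."""
    padded = bits.zfill((len(bits) + 3) // 4 * 4)
    groups = [padded[i:i + 4] for i in range(0, len(padded), 4)]
    return "".join(format(int(group, 2), "X") for group in groups)

def hex_to_binary(hex_digits: str) -> str:
    """Write out the 4-bit pattern of each hexadecimal digit."""
    return " ".join(format(int(digit, 16), "04b") for digit in hex_digits)

print(binary_to_hex("1010010"))     # 52
print(binary_to_hex("11011101"))    # DD
print(hex_to_binary("C0E7"))        # 1100 0000 1110 0111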

You might also like