
Difference between bit, byte and word?

1) Bit: Short for binary digit, the smallest unit of information on a machine
(computer). A single bit can hold only one of two values: 0 or 1.

2) Byte: A sequence of adjacent bits, usually eight, operated on as a unit by a
computer.

3) Word: The size of a word varies from one computer to another, depending on
the CPU. For computers with a 16-bit CPU, a word is 16 bits (2 bytes). On large
mainframes, a word can be as long as 64 bits (8 bytes). Some computers and
programming languages distinguish between shortwords and longwords. A
shortword is usually 2 bytes long, while a longword is 4 bytes.
-------summary:-------
1 bit = 0 or 1
1 Byte = 8 bits
1 word = 2 Bytes = 2 X (8 bits) = 16 bits
Double Word = 4 Bytes=4 X (8 bits)= 32 bits
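The relationships in the summary can be sketched in a few lines of Python. Note this is an illustrative sketch: the 2-byte word follows the 16-bit convention used above, while real word size varies by CPU.

```python
# Illustrative sketch of the unit relationships in the summary above. The
# 2-byte word reflects the 16-bit convention used here; real word size
# varies from one CPU to another.
BITS_PER_BYTE = 8

def bits(n_bytes):
    """Number of bits in n_bytes bytes."""
    return n_bytes * BITS_PER_BYTE

print(bits(1))   # 8  -> one byte
print(bits(2))   # 16 -> one (16-bit) word
print(bits(4))   # 32 -> one double word
```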

What is a computer bus?



Introduction to the concept of a bus

A bus, in computing, is a set of physical connections (cables, printed circuits, etc.) which
can be shared by multiple hardware components in order to communicate with one
another.

The purpose of buses is to reduce the number of "pathways" needed for communication
between the components, by carrying out all communications over a single data channel.
This is why the metaphor of a "data highway" is sometimes used.
If only two hardware components communicate over the line, it is called a hardware
port (such as a serial port or parallel port).

Characteristics of a bus

A bus is characterised by the amount of information that can be transmitted at once. This
amount, expressed in bits, corresponds to the number of physical lines over which data is
sent simultaneously. A 32-wire ribbon cable can transmit 32 bits in parallel. The term
"width" is used to refer to the number of bits that a bus can transmit at once.

Additionally, the bus speed is also defined by its frequency (expressed in Hertz), the
number of data packets sent or received per second. Each time that data is sent or
received is called a cycle.

This way, it is possible to find the maximum transfer speed of the bus, the amount of
data which it can transport per unit of time, by multiplying its width by its frequency. A
bus with a width of 16 bits and a frequency of 133 MHz, therefore, has a transfer speed
equal to:

16 * (133 * 10^6) = 2128 * 10^6 bit/s,


or 2128 * 10^6 / 8 = 266 * 10^6 bytes/s
or 266 * 10^6 / 1000 = 266 * 10^3 KB/s
or 266 * 10^3 / 1000 = 266 MB/s
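The width-times-frequency calculation can be checked with a short Python sketch:

```python
# Check of the transfer-speed calculation above: width (bits) x frequency (Hz).
width_bits = 16
frequency_hz = 133 * 10**6                         # 133 MHz

bits_per_second = width_bits * frequency_hz
bytes_per_second = bits_per_second // 8
megabytes_per_second = bytes_per_second // 10**6   # decimal MB, as in the text

print(bits_per_second)        # 2128000000
print(bytes_per_second)       # 266000000
print(megabytes_per_second)   # 266
```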

Bus subassembly

In reality, each bus is generally constituted of 50 to 100 distinct physical lines, divided
into three subassemblies:

• The address bus (sometimes called the memory bus) transports memory
addresses which the processor wants to access in order to read or write data. It is a
unidirectional bus.
• The data bus transfers instructions coming from or going to the processor. It is a
bidirectional bus.
• The control bus (or command bus) transports orders and synchronisation signals
coming from the control unit and travelling to all other hardware components. It is
a bidirectional bus, as it also transmits response signals from the hardware.

The primary buses

There are generally two buses within a computer:

• the internal bus (sometimes called the front-side bus, or FSB for short). The
internal bus allows the processor to communicate with the system's central
memory (the RAM).
• the expansion bus (sometimes called the input/output bus) allows various
motherboard components (USB, serial, and parallel ports, cards inserted in PCI
connectors, hard drives, CD-ROM and CD-RW drives, etc.) to communicate with
one another. However, it is mainly used to add new devices using what are called
expansion slots connected to the input/output bus.

The chipset

A chipset is the component which routes data between the computer's buses, so that all
the components which make up the computer can communicate with each other. The
chipset originally was made up of a large number of electronic chips, hence the name. It
generally has two components:

• The NorthBridge (also called the memory controller) is in charge of controlling
transfers between the processor and the RAM, which is why it is located
physically near the processor. It is sometimes called the GMCH, for Graphic
and Memory Controller Hub.
• The SouthBridge (also called the input/output controller or expansion controller)
handles communications between peripheral devices. It is also called the ICH
(I/O Controller Hub). The term bridge is generally used to designate a component
which connects two buses.
It is interesting to note that, in order to communicate, two buses must have the same
width. This explains why RAM modules sometimes have to be installed in pairs (for
example, early Pentium chips, whose processor buses were 64-bit, required two memory
modules each 32 bits wide).

Here is a table which gives the specifications for the most commonly used buses:

Standard                       Bus width (bits)   Bus speed (MHz)   Bandwidth (MB/sec)
ISA 8-bit                      8                  8.3               7.9
ISA 16-bit                     16                 8.3               15.9
EISA                           32                 8.3               31.8
VLB                            32                 33                127.2
PCI 32-bit                     32                 33                127.2
PCI 64-bit 2.1                 64                 66                508.6
AGP                            32                 66                254.3
AGP (x2 Mode)                  32                 66x2              528
AGP (x4 Mode)                  32                 66x4              1056
AGP (x8 Mode)                  32                 66x8              2112
ATA33                          16                 33                33
ATA100                         16                 50                100
ATA133                         16                 66                133
Serial ATA (S-ATA)             1                  -                 180
Serial ATA II (S-ATA2)         2                  -                 380
USB                            1                  -                 1.5
USB 2.0                        1                  -                 60
FireWire                       1                  -                 100
FireWire 2                     1                  -                 200
SCSI-1                         8                  4.77              5
SCSI-2 - Fast                  8                  10                10
SCSI-2 - Wide                  16                 10                20
SCSI-2 - Fast Wide 32 bits     32                 10                40
SCSI-3 - Ultra                 8                  20                20
SCSI-3 - Ultra Wide            16                 20                40
SCSI-3 - Ultra 2               8                  40                40
SCSI-3 - Ultra 2 Wide          16                 40                80
SCSI-3 - Ultra 160 (Ultra 3)   16                 80                160
SCSI-3 - Ultra 320 (Ultra 4)   16                 80 DDR            320
SCSI-3 - Ultra 640 (Ultra 5)   16                 80 QDR            640

What is bus clock?

Like CPUs, expansion buses also have clock speeds. Ideally, the CPU clock speed and the bus clock
speed should be the same so that neither component slows down the other. In practice, the bus clock
speed is often slower than the CPU clock speed, which creates a bottleneck. This is why new local
buses, such as AGP, have been developed.

The bus clock determines the speed at which data is transferred along the main bus in a
computer. (Note: this is not always the same speed as the processor speed.)

fre·quen·cy
[free-kwuh n-see]
–noun, plural -cies.
1. Also, fre·quence. The state or fact of being frequent; frequent
occurrence: We are alarmed by the frequency of fires in the neighborhood.
2. Rate of occurrence: The doctor has increased the frequency of his visits.
3. Physics.
a. The number of periods or regularly occurring events of any given kind
in a unit of time, usually in one second.
b. The number of cycles or completed alternations per unit time of a wave
or oscillation. Symbol: F; Abbreviation: freq.
4. Mathematics. The number of times a value recurs in a unit change of
the independent variable of a given function.
5. Statistics. The number of items occurring in a given category.

How to Calculate Frequency


By an eHow Contributor
updated: March 28, 2011

Calculate Frequency
Frequency is the number of repetitions of a periodic process per second and is measured
in Hertz (Hz). Light and sound are examples of such periodic processes (light is an
electromagnetic wave, while sound is a mechanical wave). Frequency is a way to distinguish
among different types of waves. For instance, the frequency of audible sound is between
20 and 20,000 Hz, while the frequency of visible light is roughly a trillion times higher.
Frequency can be calculated either from the energy or from the wavelength using the
fundamental physical constants.


Instructions

things you'll need:

• Calculator

1. Find the fundamental constants using the links shown below in "Resources."
Speed of light (c) = 299,792,458 m/s. Planck constant (h) = 4.13566733E-15 eV s
(1E-15 denotes "10 to the power -15").

2. Divide energy by the Planck constant to calculate frequency. Frequency (Hz) =
Energy/h = Energy/4.13566733E-15 eV s. Energy needs to be expressed in "electron
volt (eV)" units. For example, to calculate the frequency of ultraviolet (UV) light
with an energy of 4.5 eV: Frequency = 4.5 eV/4.13566733E-15 eV s = 1.09E15
Hz = 1,090 THz. Note the prefix "Tera (T)-" implies a magnitude of 1E12 (see
resources).

3. Divide the speed of light by the wavelength to calculate frequency. Frequency (Hz) =
c/wavelength (m) = 299,792,458 (m/s)/wavelength. For example, to calculate the
frequency of visible red light with a wavelength of 700 nanometers (nm): 700
nm equals 7E-7 m. Frequency = 299,792,458 (m/s)/7E-7 m = 4.28E14 Hz = 428 THz.
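The steps above can be sketched in Python, using the constants given in step 1:

```python
# Sketch of steps 2 and 3 above, using the constants from step 1.
C = 299_792_458                # speed of light, m/s
H_EV = 4.13566733e-15          # Planck constant, eV*s

def freq_from_energy(energy_ev):
    """Frequency (Hz) from photon energy in eV: f = E / h."""
    return energy_ev / H_EV

def freq_from_wavelength(wavelength_m):
    """Frequency (Hz) from wavelength in metres: f = c / wavelength."""
    return C / wavelength_m

print(freq_from_energy(4.5))       # ~1.09e15 Hz (the UV example)
print(freq_from_wavelength(7e-7))  # ~4.28e14 Hz (the red-light example)
```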

How is a microprocessor's speed measured?

Typically, microprocessors (like the ones found in computers) measure their speed in
hertz. A hertz is one cycle per second. So, 10 hertz means 10 cycles per
second.

In computing, a cycle (or, more specifically, a clock cycle) is the basic unit of
measurement that the CPU uses to carry out instructions given to it by software.
Therefore, in a CPU running at 900MHz, 900 million clock cycles will occur per second.

Software sends commands to the processor called instructions. These commands are the
basis for how all programs run on a computer and are handled by the computer in a very
complicated manner.

However, a computer running at 3GHz, for example, is not performing 3 billion
instructions per second. Some instructions take multiple cycles to complete, and some can
even execute alongside other instructions in the same cycle.

To complicate matters further, it is not accurate to say that a higher speed processor is
better than another one at a lower speed. Certain AMD processors, for example, run at
lower speeds than comparable Intel processors of their generation but, because they use
a different architecture, reach the same (and sometimes higher) performance levels
as CPUs with high clock speeds.

Also, processors with some sort of Hyper-Threading technology or, better yet, multiple
cores (like Intel Core 2 Duo processors) may be rated at lower speeds than other CPUs
in their price range but, because more than one (virtual) processor is running in parallel
with the others, more instructions are performed per clock cycle.

There are also a few more factors to consider but this is the gist of it.
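As a rough illustration of why clock speed alone does not determine performance, here is a Python sketch. The instructions-per-cycle (IPC) values are invented purely for illustration; they are not measurements of any real CPU.

```python
# Rough sketch: effective instruction throughput ~= clock rate x average
# instructions per cycle (IPC). The IPC values below are invented for
# illustration only; they are not real CPU data.
def instructions_per_second(clock_hz, ipc):
    return clock_hz * ipc

cpu_a = instructions_per_second(3_000_000_000, 0.8)   # 3 GHz, lower IPC
cpu_b = instructions_per_second(2_400_000_000, 1.2)   # 2.4 GHz, higher IPC

# The lower-clocked CPU finishes more instructions each second.
print(cpu_b > cpu_a)                                  # True
```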

Why is the address bus unidirectional and the data bus bi-directional in the 8085?

The address bus is unidirectional because address information is always given by the
microprocessor to I/O devices. The data bus is bidirectional because it takes data from
other devices and also gives data to other I/O devices.

Increased word size improves the speed of a microprocessor

Word size
The third factor is the size of the words that both the microprocessor and the buses can
accommodate (go back to the previous segment on computer architecture if you need to
refresh your memory about these). You may sometimes hear people refer to a computer
- or particularly a games console - as a 32-bit, 64-bit or 128-bit machine. These terms
refer to the size of word that the microprocessor can manipulate. The larger the word
size, the more information each word can contain. A 32-bit word can contain twice the
data of a 16-bit word. Therefore increasing the word size improves both the complexity
(more data can be manipulated) and the speed (because it takes the same time to
interpret each word).

Processor states

The CPU power states C0-C3 are defined as follows:

• C0 is the operating state.
• C1 (often known as Halt) is a state where the processor is not executing
instructions, but can return to an executing state essentially instantaneously. All
ACPI-conformant processors must support this power state. Some processors,
such as the Pentium 4, also support an Enhanced C1 state (C1E or Enhanced Halt
State) for lower power consumption.[7]
• C2 (often known as Stop-Clock) is a state where the processor maintains all
software-visible state, but may take longer to wake up. This processor state is
optional.
• C3 (often known as Sleep) is a state where the processor does not need to keep its
cache coherent, but maintains other state. Some processors have variations on the
C3 state (Deep Sleep, Deeper Sleep, etc.) that differ in how long it takes to wake
the processor. This processor state is optional.

Performance states

While a device or processor operates (D0 and C0, respectively), it can be in one of
several power-performance states. These states are implementation-dependent, but P0 is
always the highest-performance state, with P1 to Pn being successively lower-
performance states, up to an implementation-specific limit of n no greater than 16.

P-states have become known as SpeedStep in Intel processors, as PowerNow! or
Cool'n'Quiet in AMD processors, and as PowerSaver in VIA processors.

• P0: max power and frequency
• P1: less than P0, voltage/frequency scaled
• Pn: less than P(n-1), voltage/frequency scaled

Baud rate
The baud rate is the number of times per second a serial communication signal
changes states; a state being either a voltage level, a frequency, or a frequency
phase angle.
If the signal changes once for each data bit, then one bps is equal to one baud. For
example, a 300 baud modem changes its states 300 times a second.

Difference between Bit Rate and Baud Rate

The difference between the two is complicated and intertwined. They are
dependent and inter-related. But the simplest explanation is that bit rate is how many
data bits are transmitted per second, while baud rate is the measurement of the number of
times per second a signal in a communications channel changes.

Bit rate measures the number of data bits (that's 0's and 1's) transmitted in one second
in a communication channel. A figure of 2400 bits per second means 2400 zeros or ones
can be transmitted in one second, hence the abbreviation "bps". Individual characters (for
example letters or numbers), which are also referred to as bytes, are composed of several
bits.

A baud rate, by definition, means the number of times a signal in a communications
channel changes state or varies. For example, a 2400 baud rate means that the channel
can change states up to 2400 times per second. The term "change state" means that it can
change from 0 to 1 or from 1 to 0 up to X (in this case, 2400) times per second. It also
refers to the actual state of the connection, such as voltage, frequency or phase level.

The main difference between the two is that one change of state can transmit one bit, or
slightly more or less than one bit, depending on the modulation technique
used. So the bit rate (bps) and baud rate (baud per second) have this connection:

bps = baud per second x the number of bits per baud

The number of bits per baud is determined by the modulation technique. Here are two
examples:

When FSK ("Frequency Shift Keying", a transmission technique) is used, each baud
transmits one bit; only one change in state is required to send a bit. Thus, the modem's
bps rate is equal to the baud rate. When a 2400 baud modem instead uses a modulation
technique called phase modulation that transmits four bits per baud:

2400 baud x 4 bits per baud = 9600 bps

Such modems are capable of 9600 bps operation.
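The bps = baud x bits-per-baud relationship above can be sketched as:

```python
# Sketch of the bps = baud x bits-per-baud relationship described above.
def bits_per_second(baud, bits_per_baud):
    return baud * bits_per_baud

print(bits_per_second(2400, 1))   # 2400 -> FSK: one bit per state change
print(bits_per_second(2400, 4))   # 9600 -> phase modulation, 4 bits per baud
```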

Advantages of 64-bit Hardware


64-bit hardware is no longer the next big thing coming to the computer industry.
Over the past several years, the 64-bit hardware environment has seen significant
penetration in the personal computer and server market, as engineering challenges
have been worked through and concerns eased among consumers and industry about
having to spend significant money to invest in the 64-bit hardware environment.
When the first 64 bit computers were introduced, the common complaint was that
there was little to no performance benefit seen by the end-user while the
processors were used to run 32 bit programs. While this is mostly true, new
computer software programs have been written through updates and new releases
to take advantage of the new hardware and realize a resulting performance
benefit.

The 64 Bit Hardware Environment


There has been a market for 64-bit hardware for a number of years in scientific and
mathematical circles. Over the last few years of the 2000s, however, the majority of Intel-
based computers and servers have been sold with 64-bit processors. A number of
computers deployed in the field still use 32-bit architectures, even as advanced
server software packages become increasingly reliant on the greater processing capacity
found with 64-bit processors. These CPUs can process data words twice as wide as a
32-bit CPU in a single operation, and can make use of significantly more computer RAM
than their 32-bit predecessors, which have a 4 GB limitation. 64-bit CPUs are also being
deployed now with a 64-bit data bus, which allows the full capability of the processor to
be used.
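The 4 GB limitation mentioned above follows directly from address width: an n-bit address can name 2^n distinct bytes. A minimal Python sketch:

```python
# Sketch of the addressing limits discussed above: an n-bit address can name
# 2**n distinct bytes, which is where the 4 GB limit of 32-bit CPUs comes from.
def addressable_bytes(address_bits):
    return 2 ** address_bits

GIB = 1024 ** 3                              # binary gigabyte
print(addressable_bytes(32) // GIB)          # 4 -> the 4 GB limit
print(addressable_bytes(64) // GIB)          # 17179869184 GiB in theory
```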

What Are the Advantages of 64-Bit Hardware?


64-bit hardware has a number of advantages over legacy CPUs. These include the ability
to support up to 1,024 GB of physical and addressable memory, 16 terabytes of virtual
memory, and significantly larger blocks of contiguous memory for server-side
applications to use. 64-bit hardware also offers a faster computer bus architecture and
allows threads or processes to call functions faster by passing up to four arguments at a
time. The processors are also more secure than legacy CPUs, making it harder
for computer hackers to exploit a buffer overflow attack, because the parameters
for procedure calls are passed through registers first. For Intel-based Microsoft servers,
Microsoft Patch Guard technology is used to prevent non-Microsoft applications from making
changes to the Windows kernel, making it more difficult for unauthorized users to gain
access to the kernel. 64-bit hardware is also considered more scalable than 32-bit
hardware due to the open-ended approach to the virtual memory address space and
support for significantly more physical memory on the computer.

Serial and parallel data transfer

Parallel data transfer refers to the type of data transfer in which a group of bits is
transferred simultaneously, while serial data transfer refers to the type of data
transfer in which a group of data bits is transferred one bit at a time. That means
that the amount of data transferred serially per second is less than the data transferred
in parallel per second.

But serial data transfer requires fewer cables, so if the data has to be transmitted over
longer distances, serial data transfer is preferred.

Up to this point all is well and good. The point that I don't understand is that in
computers many of the interfaces that were once parallel are now being made serial.
Serial ATA, HyperTransport and Multiol are all serial interfaces, and all these
interfaces have data transfer rates greater than the previous parallel interfaces. How
is that so? How come serial data transfer is faster than parallel data transfer?

serial vs parallel data transfer

You are right...

The parallel ports are faster, and easier because they don't require multiplexing. The
thing is that since the serial ports require fewer pins, or lines, the hardware can be smaller. So
engineers have dedicated more effort to serial ports and protocols, so they can make
smaller devices. Imagine a palm with a parallel port connector instead of a USB
connector; the connector alone would be too big. So it's not that they are faster, they are
just more developed because of the effort put into them.

serial parallel data transfer

Parallel buses are hard to run at high frequencies for
a number of reasons, the greatest of which are:
1. It is hard to route many signals across a board without introducing timing variation
between them - the more variation, the lower the maximum frequency.
2. Many wires switching simultaneously produce lots of EMI and interfere with each
other. More EMI -> lower maximum frequency.

Although serial interfaces have fewer bits, due to the above two reasons they can be
clocked at MUCH higher rates than parallel interfaces, and the clock speed increase
outweighs the bandwidth decrease due to fewer bits being transferred in parallel.

PCI -> 33MHz
AGP8X -> 266MHz
HyperTransport -> 1GHz DDR (2 giga-transfers), 2-16 serial lanes
PCI Express -> 2.5GHz, 1-16 serial lanes
HyperTransport and PCIe are slated to scale up to >6GHz. The numbers speak for
themselves.
serial vs parallel bus

Parallel is always faster; this is why the CPU, cache and RAM communicate in parallel, and
this is why a 64-bit CPU is faster than a 32- or 8-bit one.
In contrast, a hard disk reads serially. The hard disk has 2 or more heads. Each
head must read the data and then pass it through a special circuit to combine it into
parallel form. Now new ideas said: do not do this, just transmit it serially. It needs less
time to manipulate data serially than to make it parallel compatible. In theory it is
also cheaper, since less electronics are involved; in practice it tends to be more
expensive, since it is a new and faster technology that should sell.
Peripherals outside the PC are preferred to be of serial communication for many
reasons besides speed. The main problem is interference/interaction between
data lines. This is a different story, I think.

why is serial communication faster than parallel

Recently PCI Express (serial) was introduced along with another standard, PCI-X (high-speed
parallel), but you would have heard about PCI Express only. It is compact, uses
the same type of connector with huge bandwidth, and, the main thing, it is serial.
The serial bus standards are now overtaking parallel ones because of their high speed,
their flexibility (you can easily change the internal protocol, which is not useful or
possible in parallel transfer), and their design and production cost.

Asynchronous vs. Synchronous


Most communications circuits perform functions described in the physical and data
link layers of the OSI Model. There are two general strategies for communicating over a
physical circuit: asynchronous and synchronous. Each has its advantages and
disadvantages.

ASYNCHRONOUS

Asynchronous communication utilizes a transmitter, a receiver and a wire without
coordination about the timing of individual bits. There is no coordination between the
two end points on just how long the transmitter leaves the signal at a certain level to
represent a single digital bit. Each device uses a clock to measure out the 'length' of a bit.
The transmitting device simply transmits. The receiving device has to look at the
incoming signal, figure out what it is receiving, and coordinate and retime its clock to
match the incoming signal.

Sending data encoded into your signal requires that the sender and receiver are both using
the same encoding/decoding method, and know where to look in the signal to find data.
Asynchronous systems do not send separate information to indicate the encoding or
clocking information. The receiver must decide the clocking of the signal on its own.
This means that the receiver must decide where to look in the signal stream to find ones
and zeroes, and decide for itself where each individual bit stops and starts. This
information is not in the data in the signal sent from the transmitting unit.

When the receiver of a signal carrying information has to derive how that signal is
organized without consulting the transmitting device, it is called asynchronous
communication. In short, the two ends do not always negotiate or work out the
connection parameters before communicating. Asynchronous communication is more
efficient when there are low loss and low error rates over the transmission medium, because
data is not retransmitted and no time is spent negotiating the connection
parameters at the beginning of transmission. Asynchronous systems just transmit and let
the far-end station figure it out. Asynchronous is sometimes called "best effort"
transmission because one side simply transmits, and the other does its best to receive.

EXAMPLES:
Asynchronous communication is used on RS-232 based serial devices such as an
IBM-compatible computer's COM 1, 2, 3 and 4 ports. Asynchronous Transfer Mode (ATM)
also uses this means of communication. The PS/2 ports on your computer also use serial
communication. This method is also used to communicate with an external modem.
Asynchronous communication is also used for things like your computer's keyboard and
mouse.

Think of asynchronous as a faster means of connecting, but less reliable.
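The "receiver figures it out on its own" idea can be illustrated with a toy sketch of UART-style asynchronous framing. This is a simplified model, not a real serial driver: each byte travels as a start bit, eight data bits and a stop bit, and the receiver recovers the byte from the line levels alone.

```python
# Toy model of asynchronous (UART-style) framing: a start bit (0), 8 data
# bits sent LSB first, and a stop bit (1). Simplified sketch only -- real
# UARTs also handle baud timing, parity, and error conditions.
def frame_byte(value):
    """Return the 10 line levels used to send one byte, LSB first."""
    data_bits = [(value >> i) & 1 for i in range(8)]
    return [0] + data_bits + [1]          # start bit, data bits, stop bit

def unframe(levels):
    """Recover the byte from one 10-bit frame (no error checking)."""
    assert levels[0] == 0 and levels[9] == 1, "bad start/stop bit"
    return sum(bit << i for i, bit in enumerate(levels[1:9]))

frame = frame_byte(ord("A"))
print(frame)                # [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
print(chr(unframe(frame)))  # A
```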

SYNCHRONOUS

Synchronous systems negotiate the communication parameters at the data link layer
before communication begins. Basic synchronous systems will synchronize both clocks
before transmission begins, and reset their numeric counters for errors etc. More
advanced systems may negotiate things like error correction and compression.

It is possible to have both sides try to synchronize the connection at the same time.
Usually, there is a process to decide which end should be in control. Both sides can go
through a lengthy negotiation cycle where they exchange communications parameters
and status information. Once a connection is established, the transmitter sends out a
signal, and the receiver sends back data regarding that transmission, and what it received.
This connection negotiation process takes extra time on low error-rate lines, but is highly
efficient in systems where the transmission medium itself (an electric wire, radio signal
or laser beam) is not particularly reliable.

What is the difference between cache memory and flash memory?
Cache (pronounced cash) memory is extremely fast memory that is built into a
computer's central processing unit (CPU), or located next to it on a separate chip. The
CPU uses cache memory to store instructions that are repeatedly required to run
programs, improving overall system speed. The advantage of cache memory is that the
CPU does not have to use the motherboard's system bus for data transfer. Whenever data
must be passed through the system bus, the data transfer speed slows to the motherboard's
capability. The CPU can process data much faster by avoiding the bottleneck created by
the system bus.
Flash memory (sometimes called "flash RAM") is a type of
nonvolatile memory that can be erased and reprogrammed in units of memory called
blocks. It is a variation of electrically erasable programmable read-only memory
(EEPROM) which, unlike flash memory, is erased and rewritten at the byte level, which
is slower than flash memory updating. Flash memory is often used to hold control code
such as the basic input/output system (BIOS) in a personal computer. When BIOS needs
to be changed (rewritten), the flash memory can be written to in block (rather than byte)
sizes, making it easy to update. On the other hand, flash memory is not useful as random
access memory (RAM) because RAM needs to be addressable at the byte (not the block)
level.

Can you use a transistor as a diode?

If we are talking about using a transistor as a diode, then yes, it can be used as a
diode by short-circuiting the base and the collector of the transistor and then using
its two terminals as the two terminals of a PN junction diode.

Can you use a diode in place of a transistor?

No, we can't build a transistor from a combination of two diodes, because we
would come across a condition called current hogging, meaning a large amount of
current flows through the device. A diode can just rectify a signal but cannot
amplify it.

Transistors, however, can be used as diodes.

Why are transistors not used as diodes in building logic circuits?

What do you mean? A diode is used to control the direction of power flow;
transistors are used to create logic circuits (NOR, NAND, OR, AND, XOR,
XNOR, etc.). Generally it is better to use a diode for jobs diodes are better at
than transistors, because they are physically smaller.

What is the working principle of a pen drive?

A pen drive is a small key-ring-size device that can be used to easily transfer files between
USB-compatible systems. It is accessed through the USB port on the computer. It has the
capacity to hold data ranging from 250 MB to 16 GB or even more, depending on the
memory stick you choose.

A USB flash drive(pen drive) consists of a NAND-type flash memory data storage device
integrated with a USB (universal serial bus) interface.

A flash drive(pen drive) consists of a small printed circuit board protected inside a
plastic, metal, or rubberized case, robust enough for carrying with no additional
protection—in a pocket or on a key chain, for example.
The USB connector is protected by a removable cap or by retracting into the body of the
drive, although it is not liable to be damaged if exposed.

The development of high-speed serial data interfaces such as USB for the first time made
memory systems with serially accessed storage viable, and the simultaneous development
of small, high-speed, low-power microprocessor systems allowed this to be incorporated
into extremely compact systems.
Serial access also greatly reduced the number of electrical connections required for the
memory chips, which has allowed the successful manufacture of multi-gigabyte
capacities.

Computers access modern flash memory systems(pen drives) very much like hard disk
drives, where the controller system has full control over where information is actually
stored.

Configuring the LPT port

The LPT1 (parallel) port on a computer is typically used for a printer. If a parallel device is connected to
the physical computer and you want to make the device accessible to a virtual machine, configure the LPT
port 1 setting on the virtual machine to use the port on the physical computer to which the device is
attached.

The following options are available for the LPT port 1 setting of each virtual machine.

• None

Select this option if you do not want the virtual machine to use the LPT1 port of the physical
computer. This is the default setting.
• LPT1

Select this option to configure the virtual machine to use the specified parallel port of the physical
computer for input and output through the virtual machine.
When you start the virtual machine, it attempts to capture the parallel port of the physical
computer. If the parallel port has already been captured, the virtual machine cannot capture it. If
the virtual machine captures the parallel port, the parallel port is not released to the physical
computer or available to any other virtual machine or to the host operating system until the
virtual machine is shut down.

Common applications for serial ports

The RS-232 standard is used by many specialized and custom-built devices. This list
includes some of the more common devices that are connected to the serial port on a PC.
Some of these, such as modems and serial mice, are falling into disuse, while others are
readily available.

Serial ports are very common on certain types of microcontroller such as the Microchip
PIC, where they can be used to communicate with a PC or other serial devices.

• GPS receivers (typically NMEA 0183 at 4,800 bit/s)


• Bar code scanners and other point of sale devices
• LED and LCD text displays
• Satellite phones, low-speed satellite modems and other satellite based transceiver
devices
• Flat-screen (LCD and Plasma) monitors to control screen functions by external
computer, other AV components or remotes
• Test and measuring equipment such as digital multimeters and weighing systems
• Updating Firmware on various consumer devices.
• Some CNC controllers
• Uninterruptible power supply
• Software debuggers that run on a 2nd computer.

Serial and Parallel data transmission

The need to provide data transfer between a computer and a remote terminal has led to the development of
serial communication.

Serial data transmission implies data transfer bit by bit on a single (serial) communication line.

In the case of serial transmission, data is sent in serial form, i.e. bit by bit on a single line. Also, the cost of
communication hardware is considerably reduced, since only a single wire or channel is required for serial
bit transmission. Serial data transmission is slow compared to parallel transmission.

Parallel data transmission is less common but faster than serial transmission. Most data are
organized into 8-bit bytes; in some computers, data are further organized into groups of bytes called
half-words and full words. Accordingly, data is sometimes transferred a byte or word at a time over
multiple wires, with each wire carrying an individual data bit. Transmitting all bits of a given data
byte or word at the same time is known as parallel data transmission.
Parallel transmission is used primarily for transferring data between devices at the same site. For
example, communication between a computer and a printer is most often parallel, so that an entire byte
can be transferred in one operation.
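The bit-by-bit versus all-at-once distinction can be sketched in a few lines of Python. This is purely illustrative, not real hardware I/O, and the least-significant-bit-first ordering is an assumption borrowed from common UART practice:

```python
def send_serial(byte):
    """Return the bits of `byte` in the order they would be clocked
    out on a single line, least-significant bit first: 8 cycles, 1 wire."""
    return [(byte >> i) & 1 for i in range(8)]

def send_parallel(byte):
    """Return the same eight bits as one tuple, as they would appear
    simultaneously on an 8-wire bus: 1 cycle, 8 wires."""
    return tuple((byte >> i) & 1 for i in range(8))

# The letter 'A' (0x41) takes eight clock cycles serially,
# but only one cycle on an 8-bit parallel bus.
print(send_serial(0x41))    # [1, 0, 0, 0, 0, 0, 1, 0]
print(send_parallel(0x41))  # (1, 0, 0, 0, 0, 0, 1, 0)
```

The same bits travel either way; the trade-off is time (eight cycles) against wiring (eight lines).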

Synchronous Communication

In a synchronous communication scheme, the sending end transmits a special bit pattern, called
SYNC, after a fixed number of data bytes.

Data transmission takes place without any gap between two adjacent characters; data is sent block
by block. A block is a continuous stream of characters or data bit patterns arriving at a fixed speed.
A SYNC bit pattern appears between any two blocks of data, and hence the data transmission is
synchronized.

Synchronous communication is generally used when two computers are communicating with each other at
high speed, or when a buffered terminal is communicating with a computer.
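The block-plus-SYNC scheme can be modeled roughly as follows. This is only a sketch: the single-byte SYNC value (ASCII SYN, 0x16) and the fixed block size are assumptions for illustration, not any specific real protocol.

```python
SYNC = 0x16  # ASCII SYN control character, assumed as the sync marker

def frame_blocks(data, block_size):
    """Split `data` into fixed-size blocks, prefixing each with SYNC."""
    framed = bytearray()
    for i in range(0, len(data), block_size):
        framed.append(SYNC)
        framed.extend(data[i:i + block_size])
    return bytes(framed)

def deframe_blocks(framed, block_size):
    """Recover the data by stripping the SYNC byte before every block.
    If a SYNC byte is not where expected, the receiver has lost sync."""
    data = bytearray()
    i = 0
    while i < len(framed):
        assert framed[i] == SYNC, "receiver lost synchronization"
        data.extend(framed[i + 1:i + 1 + block_size])
        i += 1 + block_size
    return bytes(data)

msg = b"HELLO WORLD"
assert deframe_blocks(frame_blocks(msg, 4), 4) == msg
```

The assertion inside `deframe_blocks` mirrors the failure mode described below: once the receiver misses a SYNC marker, it can no longer tell where characters begin and end.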

Advantages and Disadvantages of Synchronous Communication

The main advantage of synchronous data communication is high speed. Synchronous communication
requires high-speed peripherals/devices and a good-quality, high-bandwidth communication channel.

The main disadvantage is possible inaccuracy: when a receiver goes out of synchronization, it loses
track of where individual characters begin and end. Correcting such errors takes additional time.

Parallel transmission (computer definition)

Transmitting several bits simultaneously using multiple lines (8, 16, 32, etc.). The
pathways between the CPU and memory are parallel; between the CPU and peripheral
devices, they are typically parallel but may also be serial. Contrast with serial
transmission. See system bus and PCI.

128 bits processor


Ok, what if IBM made a 128-bit processor and Apple used it for their Macs, but it still had 32-bit and
64-bit support? Would this cause any problems or would this be a good thing?

This is hypothetical and if it happened in the next two weeks.


It would be completely unnecessary and I don't think anyone would see any gain whatsoever from
it. Even 64 bits is of questionable usefulness for everyday computing just now. Its main advantage
is the ability to address more RAM, and 64-bit can address so much, it's unlikely we'll need more for
a very long time.

Besides that, the chip would have to have compatibility for 32-bit and 64-bit built in. It's not
something inherent in a newer chip; it has to be designed that way.

And of course, Apple's not going to use any IBM chips anymore anyway.
mad jew
Oct 21, 2005, 01:24 AM
Erm, so long as the chip was enabled to work with 32-bit software, it'd be okay. Of course, this is a
pretty far-out hypothetical. It took ages to get to where we are with 64 bit with the transition still
only partially underway. There'd be no real-life benefits of a 128 bit chip at this stage considering
the advantages of 64 bit are still coming to fruition. :)
Chaszmyr
Oct 21, 2005, 01:26 AM
A 128-bit chip would be able to address practically all of the RAM in the known universe, but I don't
see how any of us would benefit from that.
I don't know how the calculations work, but a 32 bit chip can address 4gb of RAM, and the G5 (a 64
bit chip) can address 4tb of RAM, so I am assuming a 128 bit chip may be able to address 4pb of
RAM. (I wouldn't be even a little bit surprised if this is wrong, I'm too lazy to find out for sure, but I
assure you a 128 bit chip can address a huge amount of RAM).
aesth3tic
Oct 21, 2005, 02:51 AM
Ok, what if IBM made a 128-bit processor and Apple used it for their Macs, but it still had 32-bit and
64-bit support? Would this cause any problems or would this be a good thing?

This is hypothetical and if it happened in the next two weeks.

are 128bit processors even in existence? or even being considered?


advocate
Oct 21, 2005, 02:59 AM
I don't know how the calculations work, but a 32 bit chip can address 4gb of RAM, and the G5 (a 64
bit chip) can address 4tb of RAM, so I am assuming a 128 bit chip may be able to address 4pb of
RAM. (I wouldn't be even a little bit surprised if this is wrong, I'm too lazy to find out for sure, but I
assure you a 128 bit chip can address a huge amount of RAM).

Huge is right!

A 1-bit machine can address two bytes: the one called 0 and the one called 1.

A 2-bit machine can address four (2*2) bytes: 00, 01, 10, and 11.

A 3-bit machine can address eight (2*2*2) bytes: 000, 001, 010, 011, 100, 101, 110, 111.

A 4-bit machine can address sixteen = 2^4 = 2*2*2*2 bytes.

Similarly, a 32-bit machine can address 2^32 = 2*2*2*...*2 (32 of them) bytes. That's 4 GB, about
4 billion.

A 64-bit machine can address 2^64 bytes. That's about 18 exabytes. 18 followed by 18 zeros.
18446744073709551616 to be precise. That's huge.

Now, a 128-bit machine -- of course, you can extend the pattern. Is it going to be twice as much as
a 64-bit machine? How about four times? Eight? A billion times? No, actually, it's 2^64 times more
than 2^64.

Here's the number:

340282366920938463463374607431768211456

I'm not even going to try to find SI prefixes for that. It's nuts.
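If you want to check the arithmetic yourself, a few lines of Python will do it (Python integers are arbitrary-precision, so even the 128-bit case comes out exact):

```python
def addressable_bytes(bits):
    """Number of distinct bytes an n-bit address can select: 2**n."""
    return 2 ** bits

print(addressable_bytes(32))   # 4294967296 (~4 GB)
print(addressable_bytes(64))   # 18446744073709551616 (~18 exabytes)
print(addressable_bytes(128))  # 340282366920938463463374607431768211456

# 2**128 is exactly 2**64 times larger than 2**64:
assert addressable_bytes(128) == addressable_bytes(64) ** 2
```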

So, no, there is absolutely no point for consumer machines to have more than 64 bits of address
space at this point in time. We're not quite in the territory of "number of atoms in the universe,"
but we'll definitely be blasting off our home planet before 128-bit machines make any sense in
your personal computer.

Hope that helps clear it up!


486 vs. Pentium Processors
The following table provides information comparing Intel® 486 and Pentium® processors.

                         486 Processors                Pentium Processors
Clock speeds             25-100 MHz                    60-200 MHz
Data flow                32-bit data and addressing    32-bit addressing, 64-bit data path
Pipeline model           Single                        Dual
Internal cache           8K data/instruction           8K data, 8K instruction
Cache type               Write-through                 Write-back
Number of transistors    1.3 million                   3.2 million
Performance (66 MHz)     54 MIPS                       112 MIPS

What is the difference between RTOS and OS?
RTOS stands for real-time operating system, versus the general-computing operating
system (OS).

The key difference between general-computing operating systems and real-time operating
systems is the need for "deterministic" timing behavior in the real-time operating
systems. Formally, "deterministic" timing means that operating system services consume
only known and expected amounts of time. In theory, these service times could be
expressed as mathematical formulas. These formulas must be strictly algebraic and not
include any random timing components. Random elements in service times could cause
random delays in application software and could then make the application randomly
miss real-time deadlines – a scenario clearly unacceptable for a real-time embedded
system. Many non-real-time operating systems provide similar kernel services, but
without such timing guarantees.

General-computing non-real-time operating systems are often quite non-deterministic.


Their services can inject random delays into application software and thus cause slow
responsiveness of an application at unexpected times. If you ask the developer of a non-
real-time operating system for the algebraic formula describing the timing behavior of
one of its services (such as sending a message from task to task), you will invariably not
get an algebraic formula. Instead the developer of the non-real-time operating system
(such as Windows, Unix or Linux) will just give you a puzzled look. Deterministic timing
behavior was simply not a design goal for these general-computing operating systems.

On the other hand, real-time operating systems often go a step beyond basic determinism.
For most kernel services, these operating systems offer constant load-independent timing:
In other words, the algebraic formula is as simple as T(message_send) = constant,
irrespective of the length of the message to be sent, or of other factors such as the
number of tasks, queues and messages being managed by the RTOS.
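The load-independent T(message_send) = constant behavior can be illustrated with a fixed-capacity ring buffer, where sending always does the same small amount of work no matter how full the queue is. This is only a model of the O(1) property, not how any particular RTOS kernel is implemented (real kernels do this with pre-allocated storage and interrupt locking):

```python
class FixedQueue:
    """Fixed-capacity message queue with constant-time send/receive."""

    def __init__(self, capacity):
        self._slots = [None] * capacity  # storage allocated up front
        self._capacity = capacity
        self._head = 0   # next slot to read
        self._tail = 0   # next slot to write
        self._count = 0

    def send(self, message):
        """One store plus a few index updates, regardless of how many
        messages are queued. Returns False (never blocks) when full."""
        if self._count == self._capacity:
            return False
        self._slots[self._tail] = message
        self._tail = (self._tail + 1) % self._capacity
        self._count += 1
        return True

    def receive(self):
        """Constant-time receive; returns None when the queue is empty."""
        if self._count == 0:
            return None
        message = self._slots[self._head]
        self._head = (self._head + 1) % self._capacity
        self._count -= 1
        return message

q = FixedQueue(4)
q.send("start")
q.send("stop")
assert q.receive() == "start"
```

Because the storage is pre-allocated and no operation ever walks the queue, the time taken by `send` does not depend on the message count, which is the load-independence the text describes.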

Many RTOS proponents argue that a real-time operating system must not use virtual
memory, because paging mechanisms prevent a deterministic response. While this argument
is frequently made, it should be noted that the terms "real-time operating system" and
"determinism" cover a very wide range of meanings, and vendors of different operating
systems apply these terms in varied ways. When selecting an operating system for a
specific task, the real-time attribute alone is therefore an insufficient criterion.
Deterministic behavior and deterministic latencies have value only if the response lies
within the boundaries of the physics of the process being controlled. For example,
controlling a combustion engine in a racing car has different real-time requirements
from filling a 1,000,000 litre water tank through a 2" pipe.

Real-time operating systems are often used in embedded solutions, that is, computing
platforms that reside within another device. Examples of embedded systems include
combustion engine controllers and washing machine controllers, among many others. Desktop
PCs and other general-purpose computers are not embedded systems. While real-time
operating systems are typically designed for and used with embedded systems, the two
aspects are essentially distinct and have different requirements. A real-time operating
system for an embedded system addresses both sets of requirements.
