Upon its release Windows XP received generally positive reviews, with critics noting
increased performance (especially in comparison to Windows ME), a more intuitive user
interface, improved hardware support, and its expanded multimedia capabilities.[7] Despite
some initial concerns over the new licensing model and product activation system, Windows
XP eventually proved to be popular and widely used. It is estimated that at least 400 million
copies of Windows XP were sold globally within its first five years of availability,[8][9] and at
least one billion copies were sold by April 2014.[10] Sales of Windows XP licenses to original
equipment manufacturers (OEMs) ceased on June 30, 2008, but continued for netbooks until
October 2010. Windows XP remained popular even after the release of newer versions,
particularly due to the poorly received release of its successor Windows Vista. Vista's 2009
successor, Windows 7, only overtook XP in total market share at the end of 2011.[11]
Extended support for Windows XP ended on April 8, 2014, after which the operating system ceased receiving further support or security updates for most users. As of August 2016, Windows XP's desktop market share made it the third most popular version, behind Windows 7 and Windows 10 (although StatCounter, contrary to NetMarketShare's data, also ranked it behind Windows 8.1 and OS X). XP remains very popular in China, running on one in four desktop computers.
Development
"Neptune" and "Odyssey"
In the late 1990s, initial development of what would become Windows XP was focused on
two separate products: "Odyssey", which was reportedly intended to succeed the future
Windows 2000, and "Neptune", which was intended to succeed the MS-DOS–based
Windows 98 with a Windows NT-based product designed for consumers. Based on the NT
5.0 kernel in Windows 2000, Neptune primarily focused on offering a simplified, task-based
interface based on a concept known internally as "activity centers". A number of activity
centers were planned, serving as hubs for email communications, playing music, managing or
viewing photos, searching the Internet, and viewing recently used content. A single build of
Neptune, 5111 (which still carried the branding of Windows 2000 in places), revealed early
work on the activity center concept, with an updated user account interface and graphical
login screen, common functions (such as recently used programs) being accessible from a
customizable "Starting Places" page (which could be used as either a separate window, or a
full-screen desktop replacement).[12][13][14]
However, the project proved to be too ambitious. Microsoft would ultimately abandon Bill Gates' 1998 promise that Windows 98 would be the final MS-DOS–based version of Windows; at the WinHEC conference on April 7, 1999, Steve Ballmer announced an updated version of Windows 98 known as Windows Millennium. Microsoft also planned to push back Neptune in favor of an interim, consumer-oriented Windows NT OS codenamed "Asteroid". Concepts introduced by Neptune would influence future Windows products: in Windows ME, the activity center concept was used for System Restore and Help and Support Center (both of which combined Win32 code with an interface rendered using Internet Explorer's layout engine); the hub concept would be expanded upon in Windows Phone; and Windows 8 would similarly use a simplified user interface running atop the existing Windows shell.[15][16]
"Whistler"
In January 2000, shortly prior to the official release of Windows 2000, technology writer Paul
Thurrott reported that Microsoft had shelved both Neptune and Odyssey in favor of a new
product codenamed Whistler, after Whistler, British Columbia, as many Microsoft
employees skied at the Whistler-Blackcomb ski resort.[5] The goal of Whistler was to unify
both the consumer- and business-oriented Windows lines under a single Windows NT
platform: Thurrott stated that Neptune had become "a black hole when all the features that
were cut from [Windows ME] were simply re-tagged as Neptune features. And since Neptune
and Odyssey would be based on the same code-base anyway, it made sense to combine them
into a single project".[14] At WinHEC in April 2000, Microsoft officially announced and
presented an early build of Whistler, focusing on a new modularized architecture, built-in CD
burning, fast user switching, and updated versions of the digital media features introduced by
ME. Windows general manager Carl Stork stated that Whistler would be released in both
consumer- and business-oriented versions built atop the same architecture, and that there
were plans to update the Windows interface to make it "warmer and more friendly".[12][14]
In June 2000, Microsoft began the technical beta testing process; Whistler was expected to be
made available in "Personal", "Professional", "Server", "Advanced Server", and "Datacenter"
editions. At PDC on July 13, 2000, Microsoft announced that Whistler would be released
during the second half of 2001, and also released the first preview build, 2250. The build
notably introduced an early version of a new visual styles system along with an interim theme
known as "Professional" (later renamed "Watercolor"), and contained a hidden "Start page"
(a full-screen page similar to Neptune's "Starting Places"), and a hidden, early version of a
two-column Start menu design.[17] Build 2257 featured further refinements to the Watercolor
theme, along with the official introduction of the two-column Start menu, and the addition of
an early version of Windows Firewall.[14]
Beta versions
Microsoft released Whistler Beta 1, build 2296, on October 31, 2000. Build 2410 in January
2001 introduced Internet Explorer 6.0 (previously branded as 5.6) and the Microsoft Product
Activation system. Bill Gates dedicated a portion of his keynote at the Consumer Electronics Show to discussing Whistler, explaining that the OS would bring "[the] dependability of our
highest end corporate desktop, and total dependability, to the home", but also "move it in the
direction of making it very consumer-oriented. Making it very friendly for the home user to
use." Alongside Beta 1, it was also announced that Microsoft would prioritize the release of
the consumer-oriented versions of Whistler over the server-oriented versions in order to
gauge reaction, but that both would be generally available during the second half of 2001 (Whistler Server would ultimately be delayed into 2003).[18] Builds 2416 and 2419 added the Files and Settings Transfer Wizard and began to introduce elements of the operating
system's final appearance (such as its near-final Windows Setup design, and the addition of
new default wallpapers, such as Bliss).[19]
In April 2001, Microsoft controversially announced that XP would not integrate support for
Bluetooth or USB 2.0 on launch, requiring the use of third-party drivers. Critics felt that in
the case of the latter, Microsoft's decision had delivered a potential blow to the adoption of
USB 2.0, as XP was to provide support for the competing, Apple-developed, FireWire
standard instead. A representative stated that the company had "[recognized] the importance
of USB 2.0 as a newly emerging standard and is evaluating the best mechanism for making it
available to Windows XP users after the initial release."[22] The builds prior to and following
Release Candidate 1 (build 2505, released on July 5, 2001), and Release Candidate 2 (build
2526, released on July 27, 2001), focused on fixing bugs, addressing user feedback, and making other final tweaks before the RTM build.[21]
Release
In June 2001, Microsoft indicated that it was planning to, in conjunction with Intel and other
PC makers, spend at least US$1 billion on marketing and promoting Windows XP.[23] The
theme of the campaign, "Yes You Can", was designed to emphasize the platform's overall
capabilities; an initial slogan, "Prepare to Fly", was dropped due to sensitivity issues after the
September 11 attacks.[24] A prominent aspect of Microsoft's campaign was a U.S. television
commercial featuring Madonna's song "Ray of Light"; a Microsoft spokesperson stated that
the song was chosen due to its optimistic tone and how it complemented the overall theme of
the campaign.[25][26]
On August 24, 2001, Windows XP build 2600 was released to manufacturing. During a
ceremonial media event at Microsoft Redmond Campus, copies of the RTM build were given
to representatives of several major PC manufacturers in briefcases, who then flew off on
decorated helicopters. While PC manufacturers would be able to release devices running XP
beginning on September 24, 2001, XP was expected to reach general, retail availability on
October 25, 2001. On the same day, Microsoft also announced the final retail pricing of XP's
two main editions, "Home" and "Professional".[21][27]
New and updated features
User interface
While retaining some similarities to previous versions, Windows XP's interface was
overhauled with a new visual appearance, with an increased use of alpha compositing effects,
drop shadows, and "visual styles", which completely change the appearance of the operating
system. The number of effects enabled is determined by the operating system based on the computer's processing power, and they can be enabled or disabled on a case-by-case basis. XP
also added ClearType, a new subpixel rendering system designed to improve the appearance
of fonts on liquid-crystal displays.[28] A new set of system icons was also introduced.[29][30]
The default wallpaper, Bliss, is a photo of a landscape in the Napa Valley outside Napa,
California, with rolling green hills and a blue sky with stratocumulus and cirrus clouds.[31]
The Start menu received its first major overhaul in XP, switching to a two-column layout
with the ability to list, pin, and display frequently used applications, recently opened
documents, and the traditional cascading "All Programs" menu. The taskbar can now group
windows opened by a single application into one taskbar button, with a popup menu listing
the individual windows. The notification area also hides "inactive" icons by default. The
taskbar can also be "locked" to prevent accidental moving or other changes. A "common
tasks" list was added, and Windows Explorer's sidebar was updated to use a new task-based
design with lists of common actions; the tasks displayed are contextually relevant to the type of content in a folder (e.g. a folder containing music offers options to play all the files in the folder, or to burn them to a CD).
The "task grouping" feature introduced in Windows XP showing both grouped and individual
items
Fast user switching allows additional users to log in to a Windows XP machine without existing users having to close their programs and log out. Although only one user at a time can use the console (i.e. monitor, keyboard, and mouse), previous users can resume their sessions once they regain control of the console.[32]
Infrastructure
Windows XP uses a prefetcher to improve startup and application launch times.[33][34] It also became possible to roll back the installation of an updated device driver, should the new driver not produce desirable results.[35]
Numerous improvements were also made to system administration tools such as Windows
Installer, Windows Script Host, Disk Defragmenter, Windows Task Manager, Group Policy,
CHKDSK, NTBackup, Microsoft Management Console, Shadow Copy, Registry Editor,
Sysprep and WMI.[36]
Networking and internet functionality
Windows XP was originally bundled with Internet Explorer 6, Outlook Express 6, Windows
Messenger, and MSN Explorer. New networking features were also added, including Internet
Connection Firewall, Internet Connection Sharing integration with UPnP, NAT traversal
APIs, Quality of Service features, IPv6 and Teredo tunneling, Background Intelligent
Transfer Service, extended fax features, network bridging, peer to peer networking, support
for most DSL modems, IEEE 802.11 (Wi-Fi) connections with auto configuration and
roaming, TAPI 3.1, and networking over FireWire.[37] Remote Assistance and Remote
Desktop were also added, which allow users to connect to a computer running Windows XP
from across a network or the Internet and access their applications, files, printers, and devices
or request help.[38] Improvements were also made to IntelliMirror features such as Offline
Files, Roaming user profiles and Folder redirection.
Removed features
Main article: List of features removed in Windows XP
Some of the programs and features that were part of previous versions of Windows did not make it into Windows XP. CD Player and DVD Player are replaced by Windows Media Player, while Imaging for Windows is replaced by Windows Picture and Fax Viewer. NetBEUI and NetDDE are deprecated and are not installed by default. The DLC and AppleTalk network protocols are removed. Communication devices that are not Plug and Play–compatible (such as certain modems and network interface cards) are no longer supported.
Service Pack 2 and Service Pack 3 also remove features from Windows XP but to a less
noticeable extent. For instance, Program Manager and support for TCP half-open connections
are removed in Service Pack 2. The Energy Star logo and the address bar on the taskbar are
removed in Service Pack 3.
Editions
Main article: Windows XP editions
Diagram representing the main editions of Windows XP. It is based on the category of the
edition (grey) and codebase (black arrow).
Windows XP was released in two major editions at launch: Home Edition and Professional. Both editions were made available at retail as pre-loaded software on new computers, and in boxed copies. Boxed copies were sold as "Upgrade" or "Full" licenses; the "Upgrade" versions were slightly cheaper, but required an existing version of Windows to install, while the "Full" version could be installed on systems without an operating system or existing version of Windows.[23] The two editions were aimed at different markets: Home Edition is
explicitly intended for consumer use and disables or removes certain advanced and
enterprise-oriented features present on Professional, such as the ability to join a Windows
domain, Internet Information Services, and Multilingual User Interface. Windows 98 or ME
can be upgraded to either version, but Windows NT 4.0 and Windows 2000 can only be
upgraded to Professional.[40] Windows' software license agreement for pre-loaded licenses
allows the software to be "returned" to the OEM for a refund if the user does not wish to use
it.[41] Despite the refusal of some manufacturers to honor the entitlement, it has been enforced
by courts in some countries.[42][43]
Two specialized variants of XP were introduced in 2002 for certain types of hardware,
exclusively through OEM channels as pre-loaded software. Windows XP Media Center
Edition was initially designed for high-end home theater PCs with TV tuners (marketed under
the term "Media Center PC"), offering expanded multimedia functionality, an electronic
program guide, and digital video recorder (DVR) support through the Windows Media Center
application.[44] Microsoft also unveiled Windows XP Tablet PC Edition, which contains
additional pen input features, and is optimized for mobile devices meeting its Tablet PC
specifications.[45] Two different 64-bit editions of XP were made available; the first, Windows
XP 64-Bit Edition, was intended for IA-64 (Itanium) systems; as IA-64 usage declined on
workstations in favor of AMD's x86-64 architecture (which was supported by the later
Windows XP Professional x64 Edition), the Itanium version was discontinued in 2005.[46]
Microsoft also targeted emerging markets with the 2004 introduction of Windows XP Starter
Edition, a special variant of Home Edition intended for low-cost PCs. The OS is primarily
aimed at first-time computer owners, containing heavy localization (including wallpapers and
screen savers incorporating images of local landmarks), and a "My Support" area which
contains video tutorials on basic computing tasks. It also removes certain "complex" features,
and does not allow users to run more than three applications at a time. After a pilot program
in India and Thailand, Starter was released in other emerging markets throughout 2005.[47] In
2006, Microsoft also unveiled the FlexGo initiative, which would also target emerging
markets with subsidized PCs on a pre-paid, subscription basis.[48]
As the result of unfair competition lawsuits in Europe and South Korea, which both alleged
that Microsoft had improperly leveraged its status in the PC market to favor its own bundled
software, Microsoft was ordered to release special versions of XP in these markets that
excluded certain applications. In March 2004, after the European Commission fined
Microsoft €497 million (US$603 million), Microsoft was ordered to release "N" versions of
XP that excluded Windows Media Player, encouraging users to pick and download their own
media player software. As it was sold at the same price as the version with Windows Media
Player included, certain OEMs (such as Dell, who offered it for a short period, along with
Hewlett-Packard, Lenovo and Fujitsu Siemens) chose not to offer it. Consumer interest was
minuscule, with roughly 1,500 units shipped to OEMs, and no reported sales to
consumers.[49][50][51][52] In December 2005, the Korean Fair Trade Commission ordered
Microsoft to make available editions of Windows XP and Windows Server 2003 that do not
contain Windows Media Player or Windows Messenger.[53] The "K" and "KN" editions of
Windows XP were released in August 2006, and are only available in English and Korean,
and also contain links to third-party instant messenger and media player software.[54]
Service packs
Three service packs were released for Windows XP, containing various bug fixes and the
addition of certain features. Each service pack is a superset of all previous service packs and
patches so that only the latest service pack needs to be installed, and also includes new
revisions.[55]
Service Pack 1
Service Pack 1 (SP1) for Windows XP was released on September 9, 2002. It contained over
300 minor, post-RTM bug fixes, along with all security patches released since the original
release of XP. SP1 also added USB 2.0 support, Microsoft Java Virtual Machine, .NET
Framework support, and support for technologies used by the then-upcoming Media Center
and Tablet PC editions of XP. The most significant change in SP1 was the addition of Set
Program Access and Defaults, a settings page which allows programs to be set for certain
types of activities (such as media players or web browsers) and for access to bundled,
Microsoft programs (such as Internet Explorer or Windows Media Player) to be disabled.
This feature was added to comply with the settlement of United States v. Microsoft Corp.,
which required Microsoft to offer the ability for OEMs to bundle third-party competitors to
software it bundles with Windows, and give them the same level of prominence as those
normally bundled with the OS (such as Internet Explorer and Windows Media
Player).[56][57][58]
On February 3, 2003, Microsoft released Service Pack 1a (SP1a). This release removed
Microsoft Java Virtual Machine as a result of a lawsuit with Sun Microsystems.[59]
Service Pack 2
Windows Security Center was added in Service Pack 2.
Service Pack 2 (SP2) was released on August 25, 2004.[60] SP2 added new functionality to Windows XP, such as WPA encryption compatibility and improved Wi-Fi support (with a wizard utility), a pop-up ad blocker for Internet Explorer 6, and partial Bluetooth support. Service Pack 2 also added new security enhancements (codenamed "Springboard"),[61] including a major revision to the included firewall (renamed Windows Firewall, and now enabled by default) and Data Execution Prevention, which gained hardware support via the NX bit and can stop some forms of buffer overflow attacks. Raw socket support was removed (which supposedly limits the damage done by zombie machines), and the Windows Messenger service became disabled by default, as it had been an attack vector for pop-up advertisements displayed as system messages without a web browser or any additional software.
Additionally, security-related improvements were made to e-mail and web browsing. Service
Pack 2 also added Security Center, an interface which provides a general overview of the
system's security status, including the state of the firewall and automatic updates. Third-party
firewall and antivirus software can also be monitored from Security Center.[62]
In August 2006, Microsoft released updated installation media for Windows XP and
Windows Server 2003 SP2 (SP2b), containing a patch that requires ActiveX controls to be
manually activated in accordance with a patent held by Eolas.[63][64] Microsoft has since
licensed the patent, and released a patch reverting the change in April 2008.[65] In September
2007, another minor revision known as SP2c was released for XP Professional, extending the
number of available product keys for the operating system to "support the continued
availability of Windows XP Professional through the scheduled system builder channel end-
of-life (EOL) date of January 31, 2009."[66]
Service Pack 3
Windows XP Service Pack 3 (SP3) was released to manufacturing on April 21, 2008, and to
the public via both the Microsoft Download Center and Windows Update on May 6,
2008.[67][68][69][70]
It began being automatically pushed out to Automatic Updates users on July 10, 2008.[71] A
feature set overview which details new features available separately as stand-alone updates to
Windows XP, as well as backported features from Windows Vista, has been posted by
Microsoft.[72] A total of 1,174 fixes have been included in SP3.[73] Service Pack 3 can be
installed on systems with Internet Explorer versions 6, 7, or 8.[74] Internet Explorer 7 and 8
are not included as part of SP3.[75] Service Pack 3 is not available for the 64-bit version of Windows XP, which is based on the Windows Server 2003 kernel.
New features in Service Pack 3
NX APIs for application developers to enable Data Execution Prevention for their
code, independent of system-wide compatibility enforcement settings[76]
Turns black hole router detection on by default[77]
Support for SHA-2 signatures in X.509 certificates[77]
Network Access Protection client
Group Policy support for IEEE 802.1X authentication for wired network adapters.[78]
Credential Security Support Provider[79]
Descriptive Security options in Group Policy/Local Security Policy user interface
An updated version of the Microsoft Enhanced Cryptographic Provider Module
(RSAENH) that is FIPS 140-2 certified (SHA-256, SHA-384 and SHA-512
algorithms)[77]
Installing without requiring a product key during setup for retail and OEM versions
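The SHA-256, SHA-384, and SHA-512 algorithms mentioned above belong to the SHA-2 family of hash functions. As a quick illustration of what these algorithms produce (a sketch in Python using the standard library, independent of the Windows cryptographic provider itself; the input string is hypothetical):

```python
import hashlib

# Compute SHA-2 family digests of the same input.
data = b"Windows XP Service Pack 3"
sha256 = hashlib.sha256(data).hexdigest()
sha384 = hashlib.sha384(data).hexdigest()
sha512 = hashlib.sha512(data).hexdigest()

# Digest lengths are fixed per algorithm: 256, 384, and 512 bits
# (each hex character encodes 4 bits).
print(len(sha256) * 4, len(sha384) * 4, len(sha512) * 4)  # -> 256 384 512
```

The fixed digest sizes are what the algorithm names refer to; SP3's support means such digests can appear as signature hashes in X.509 certificates.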
Service Pack 3 also incorporated several previously released key updates for Windows XP that were not included in SP2.
Service Pack 3 contains updates to the operating system components of Windows XP Media
Center Edition (MCE) and Windows XP Tablet PC Edition, and security updates for .NET
Framework version 1.0, which is included in these editions. However, it does not include
update rollups for the Windows Media Center application in Windows XP MCE 2005.[84] SP3
also omits security updates for Windows Media Player 10, although the player is included in
Windows XP MCE 2005.[84] The Address Bar DeskBand on the Taskbar is no longer
included due to antitrust violation concerns.[85]
Server (computing)
Client–server systems are today most frequently implemented by (and often identified with)
the request–response model: a client sends a request to the server, which performs some
action and sends a response back to the client, typically with a result or acknowledgement.
Designating a computer as "server-class hardware" implies that it is specialized for running
servers on it. This often implies that it is more powerful and reliable than standard personal
computers, but alternatively, large computing clusters may be composed of many relatively
simple, replaceable server components.
History
The use of the word server in computing comes from queuing theory,[4] where it dates to the
mid 20th century, being notably used in Kendall (1953) (along with "service"), the paper that
introduced Kendall's notation. In earlier papers, such as Erlang (1909), more concrete terms such as "[telephone] operators" are used.
In computing, "server" dates at least to RFC 5 (1969),[5] one of the earliest documents
describing ARPANET (the predecessor of the Internet), and is contrasted with "user",
distinguishing two types of host: "server-host" and "user-host". The use of "serving" also
dates to early documents, such as RFC 4,[6] contrasting "serving-host" with "using-host".
The Jargon File defines "server" in the common sense of a process performing service for
requests, usually remote, with the 1981 (1.1.0) version reading:
SERVER n. A kind of DAEMON which performs a service for the requester, which often
runs on a computer other than the one on which the server runs.
Operation
A network based on the client–server model where multiple individual clients request
services and resources from centralized servers
Strictly speaking, the term server refers to a computer program or process (running program).
Through metonymy, it refers to a device used to (or a device dedicated to) running one or
several server programs. On a network, such a device is called a host. In addition to server,
the words serve and service (as noun and as verb) are frequently used, though servicer and
servant are not.[a] The word service (as a noun) may refer either to the abstract form of functionality, e.g. a Web service, or to a computer program that turns a computer into a server, e.g. a Windows service. Originally used as "servers serve users" (and
"users use servers"), in the sense of "obey", today one often says that "servers serve data", in
the same sense as "give". For instance, web servers "serve [up] web pages to users" or
"service their requests".
The server is part of the client–server model; in this model, a server serves data for clients.
The nature of communication between a client and server is request and response. This is in
contrast with the peer-to-peer model, in which the relationship is one of on-demand reciprocation. In principle, any computerized process that can be used or called by another process (particularly remotely, particularly to share a resource) is a server, and the calling process or processes are clients. Thus any general-purpose computer connected to a network can host
servers. For example, if files on a device are shared by some process, that process is a file
server. Similarly, web server software can run on any capable computer, and so a laptop or a
personal computer can host a web server.
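The request–response exchange described above can be made concrete with a minimal sketch in Python (the message contents are hypothetical, and a real server would loop over connections and handle errors): a tiny TCP server runs in a background thread, and a client sends one request and blocks until the response arrives.

```python
import socket
import threading

def run_server(sock):
    """Accept one connection, read a request, and send a response back."""
    conn, _addr = sock.accept()
    with conn:
        request = conn.recv(1024)           # wait for the client's request
        conn.sendall(b"ack: " + request)    # respond with an acknowledgement

# Bind to an ephemeral port on localhost so the sketch is self-contained.
server_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_sock.bind(("127.0.0.1", 0))
server_sock.listen(1)
port = server_sock.getsockname()[1]

threading.Thread(target=run_server, args=(server_sock,), daemon=True).start()

# The client side: send a request and block until the response arrives.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"GET resource")
    response = client.recv(1024)

print(response.decode())  # -> ack: GET resource
```

Here the "server" is just a process on an ordinary machine, which is the point: any computer that can accept calls from another process can host a server.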
While request–response is the most common client–server design, there are others, such as
the publish–subscribe pattern. In the publish–subscribe pattern, clients register with a pub–
sub server, subscribing to specified types of messages; this initial registration may be done by
request–response. Thereafter, the pub–sub server forwards matching messages to the clients
without any further requests: the server pushes messages to the client, rather than the client
pulling messages from the server as in request–response.[7]
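The push-based delivery described above can be sketched with a minimal in-memory broker (a simplified, hypothetical model rather than any particular pub–sub product): clients register a callback for a topic once, and thereafter the broker pushes every matching message to them without any further requests.

```python
from collections import defaultdict

class PubSubBroker:
    """A minimal in-memory publish-subscribe broker."""
    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        # Initial registration: after this, the client makes no further requests.
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Push the message to every client subscribed to this topic.
        for callback in self._subscribers[topic]:
            callback(message)

# Usage: two clients subscribe to different topics.
received = []
broker = PubSubBroker()
broker.subscribe("news", lambda msg: received.append(("news", msg)))
broker.subscribe("sports", lambda msg: received.append(("sports", msg)))

broker.publish("news", "headline A")   # delivered to the "news" subscriber only
broker.publish("weather", "sunny")     # no subscribers: silently dropped

print(received)  # -> [('news', 'headline A')]
```

In a networked deployment the callbacks would be replaced by connections to remote clients, but the control flow is the same: the server initiates delivery, inverting the request–response pattern.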
When referring to hardware, the word server typically designates computer models
specialized for their role. In general, a server performs its role better than a generic personal
computer.
Purpose
Main category: Servers (computing)
The purpose of a server is to share data as well as to share resources and distribute work. A
server computer can serve its own computer programs as well; depending on the scenario,
this could be part of a quid pro quo transaction, or simply a technical possibility. The
following table shows several scenarios in which a server is used.
Almost the entire structure of the Internet is based upon a client–server model. High-level
root nameservers, DNS, and routers direct the traffic on the internet. There are millions of
servers connected to the Internet, running continuously throughout the world[8] and virtually
every action taken by an ordinary Internet user requires one or more interactions with one or
more servers. There are exceptions that do not use dedicated servers, such as peer-to-peer file sharing and some implementations of telephony (e.g. pre-Microsoft Skype).
Hardware requirement
A rack-mountable server with the top cover removed to reveal internal components
Hardware requirements for servers vary widely, depending on the server's purpose and its
software.
Since servers are usually accessed over a network, many run unattended without a computer monitor, input devices, audio hardware, or USB interfaces. Many servers do not have a graphical user interface (GUI); they are configured and managed remotely, using tools such as MMC, SSH, or a web browser.
Large servers
Large traditional single servers would need to be run for long periods without interruption.
Availability would have to be very high, making hardware reliability and durability
extremely important. Mission-critical enterprise servers would be very fault tolerant and use
specialized hardware with low failure rates in order to maximize uptime. Uninterruptible
power supplies might be incorporated to insure against power failure. Servers typically
include hardware redundancy such as dual power supplies, RAID disk systems, and ECC
memory,[9] along with extensive pre-boot memory testing and verification. Critical
components might be hot swappable, allowing technicians to replace them on the running
server without shutting it down, and to guard against overheating, servers might have more
powerful fans or use water cooling. They can often be configured, powered up and down, or rebooted remotely, using out-of-band management, typically based on IPMI. Server
casings are usually flat and wide, and designed to be rack-mounted.
These types of servers are often housed in dedicated data centers, which normally have very stable power and Internet connectivity, as well as increased security. Noise is also less of a concern, but power consumption and heat output can be a serious issue, so server rooms are equipped with air conditioning devices.
Appliances
A class of small specialist servers called network appliances is generally at the low end of the scale, often being smaller than common desktop computers.
Operating systems
Sun's Cobalt Qube 3, a computer server appliance (2002) running Cobalt Linux (a customized version of Red Hat Linux using the 2.2 Linux kernel), complete with the Apache web server
On the Internet the dominant operating systems among servers are UNIX-like open source
distributions, such as those based on Linux and FreeBSD,[11] with Windows Server also
having a very significant share. Proprietary operating systems such as z/OS and OS X Server
are also deployed, but in much smaller numbers.
Specialist server-oriented operating systems have traditionally offered features suited to
unattended, long-running server workloads rather than interactive desktop use. In practice,
however, many desktop and server operating systems today share similar code bases,
differing mostly in configuration.
Energy consumption
In 2010, data centers (servers, cooling, and other electrical infrastructure) were responsible
for 1.1-1.5% of electrical energy consumption worldwide and 1.7-2.2% in the United
States.[13] One estimate is that information and communications technology as a whole
saves more than five times its own carbon footprint[14] in the rest of the economy by
enabling efficiency gains.
See also
Computing portal
Mobile Server
Notes
1. A CORBA servant is a server-side object to which method calls from remote method
invocation are forwarded, but this is an uncommon usage.
References
1. "Server Data Recovery | SQL, Exchange & RAID Server Data Recovery". Retrieved
2016-09-28.
2. Windows Server Administration Fundamentals. Microsoft Official Academic Course.
Hoboken, NJ: John Wiley & Sons. 2011. pp. 2–3. ISBN 978-0-470-90182-3.
3. Comer, Douglas E.; Stevens, David L. (1993). Vol III: Client-Server Programming and
Applications. Internetworking with TCP/IP. West Lafayette, IN: Prentice Hall. pp. 11d.
ISBN 0-13-474222-2.
4. Henle, Richard A.; Kuvshinoff, Boris W.; Kuvshinoff, C. M. (1992). Desktop computers:
in perspective. Oxford University Press. p. 417. "Server is a fairly recent computer
networking term derived from queuing theory."
5. Rulifson, Jeff (June 1969). DEL. IETF. RFC 5. Retrieved 30 November 2013.
6. Shapiro, Elmer B. (March 1969). Network Timetable. IETF. RFC 4. Retrieved 30
November 2013.
7. Using the HTTP Publish-Subscribe Server, Oracle.
8. "Web Servers". IT Business Edge. Retrieved July 31, 2013.
9. Li; Huang; Shen; Chu (2010). "A Realistic Evaluation of Memory Hardware Errors and
Software System Susceptibility" (PDF). USENIX Annual Technical Conference 2010.
10. "Google uncloaks once-secret server". CNET. CBS Interactive.
11. "Usage statistics and market share of Linux for websites". Retrieved 18 January 2013.
12. "Server Oriented Operating System". Retrieved 2010-05-25.
13. Markoff, John (31 July 2011). "Data Centers Using Less Power Than Forecast, Report
Says". The New York Times. Retrieved 18 January 2013.
14. "SMART 2020: Enabling the low carbon economy in the information age" (PDF).
The Climate Group. 6 October 2008. Retrieved 18 January 2013.
Network topology is the arrangement of the various elements (links, nodes, etc.) of a
computer network.[1][2] Essentially, it is the topological[3] structure of a network and may be
depicted physically or logically. Physical topology is the placement of the various
components of a network, including device location and cable installation, while logical
topology illustrates how data flows within a network, regardless of its physical design.
Distances between nodes, physical interconnections, transmission rates, or signal types may
differ between two networks, yet their topologies may be identical.
An example is a local area network (LAN). Any given node in the LAN has one or more
physical links to other devices in the network; graphically mapping these links results in a
geometric shape that can be used to describe the physical topology of the network.
Conversely, mapping the data flow between the components determines the logical topology
of the network.
Contents
1 Topology
2 Classification
o 2.1 Point-to-point
o 2.2 Bus
2.2.1 Linear bus
2.2.2 Distributed bus
o 2.3 Star
2.3.1 Extended star
2.3.2 Distributed Star
o 2.4 Ring
o 2.5 Mesh
2.5.1 Fully connected network
2.5.2 Partially connected network
o 2.6 Hybrid
o 2.7 Daisy chain
3 Centralization
4 Decentralization
5 See also
6 References
7 External links
Topology
Diagram of different network topologies.
Two basic categories of network topologies exist, physical topologies and logical
topologies.[4]
The cabling layout used to link devices is the physical topology of the network. This refers to
the layout of cabling, the locations of nodes, and the interconnections between the nodes and
the cabling.[1] The physical topology of a network is determined by the capabilities of the
network access devices and media, the level of control or fault tolerance desired, and the cost
associated with cabling or telecommunications circuits.
In contrast, logical topology is the way that the signals act on the network media, or the way
that the data passes through the network from one device to the next without regard to the
physical interconnection of the devices. A network's logical topology is not necessarily the
same as its physical topology. For example, the original twisted pair Ethernet using repeater
hubs was a logical bus topology carried on a physical star topology. Token ring is a logical
ring topology, but is wired as a physical star from the media access unit. Logical topologies
are often closely associated with media access control methods and protocols. Some networks
are able to dynamically change their logical topology through configuration changes to their
routers and switches.
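As a rough illustration of mapping a physical topology, the cabling layout can be modelled as a graph. The following sketch (with invented node names) builds an adjacency list from a link list and infers that the layout is a star:

```python
# Hypothetical link list for a small LAN; node names are invented.
links = [("A", "Hub"), ("B", "Hub"), ("C", "Hub"), ("D", "Hub")]

# Build an adjacency list: each physical link connects both endpoints.
adjacency = {}
for a, b in links:
    adjacency.setdefault(a, set()).add(b)
    adjacency.setdefault(b, set()).add(a)

# A single node adjacent to all others suggests a physical star topology.
hub_candidates = [n for n, peers in adjacency.items()
                  if len(peers) == len(adjacency) - 1]
print(hub_candidates)  # ['Hub']
```

The same adjacency structure could be examined for ring or bus shapes by checking node degrees instead.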
Classification
The study of network topology recognizes eight basic topologies: point-to-point, bus, star,
ring or circular, mesh, tree, hybrid, or daisy chain.[5]
Point-to-point
The simplest topology is a dedicated link between two endpoints. The easiest variation to
understand is a point-to-point communications channel that appears, to the user, to be
permanently associated with the two endpoints. A child's tin-can telephone is one example
of a physical dedicated channel.
Bus
In local area networks using a bus topology, each node is connected to a single cable with
the help of interface connectors. This central cable is the backbone of the network and is
known as the bus (thus the name). A signal from the source travels in both directions to all
machines connected on the bus cable until it finds the intended recipient. If the machine
address does not match the intended address for the data, the machine ignores the data; if
it does match, the data is accepted. Because the bus topology consists of only one wire, it
is rather inexpensive to implement compared to other topologies. However, the low cost of
implementing the technology is offset by the high cost of managing the network.
Additionally, because only one cable is utilized, it can be a single point of failure.
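A minimal sketch of this accept/ignore behaviour, with invented node addresses:

```python
# Nodes attached to the shared bus cable; addresses are illustrative.
nodes = ["00:01", "00:02", "00:03"]

def broadcast_on_bus(frame_dest, payload):
    """The signal reaches every tap on the cable; only a matching node accepts."""
    accepted_by = []
    for node in nodes:          # signal propagates to every attached machine
        if node == frame_dest:  # address match: the data is accepted
            accepted_by.append(node)
        # otherwise the machine simply ignores the data
    return accepted_by

print(broadcast_on_bus("00:02", "hello"))  # ['00:02']
```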
Linear bus
A network topology in which all of the nodes of the network are connected to a common
transmission medium with exactly two endpoints (this is the 'bus', also commonly referred
to as the backbone or trunk). All data transmitted between nodes is carried over this
common medium and can be received by all nodes in the network simultaneously.[1]
Note: when the electrical signal reaches the end of the bus, it is reflected back down
the line, causing unwanted interference. As a solution, the two endpoints of the bus are
normally terminated with a device called a terminator that prevents this reflection.
Distributed bus
A network topology in which all of the nodes of the network are connected to a common
transmission medium with more than two endpoints, created by adding branches to the main
section of the medium. The physical distributed bus topology functions in exactly the same
fashion as the physical linear bus topology (i.e., all nodes share a common transmission
medium).
Star
In local area networks with a star topology, each network host is connected with a
point-to-point connection to a central node, called a hub, router, or switch, so every
computer is indirectly connected to every other node through that central node. In this
arrangement, the switch is the server and the peripherals are the clients. The network does
not necessarily have to resemble a star to be classified as a star network, but all of the
nodes on the network must be connected to one central device. All traffic that traverses
the network passes through the central node, which acts as a signal repeater. The star
topology is considered the easiest topology to design and implement, and an advantage is
the simplicity of adding additional nodes. The primary disadvantage is that the central
node represents a single point of failure.
Extended star
A type of network topology in which a network based upon the physical star topology has
one or more repeaters between the central node and the peripheral or 'spoke' nodes. The
repeaters extend the maximum transmission distance of the point-to-point links between the
central node and the peripheral nodes beyond what is supported by the transmitter power of
the central node, or beyond what is supported by the standard upon which the physical
layer of the physical star network is based.
If the repeaters in a network that is based upon the physical extended star topology are
replaced with hubs or switches, then a hybrid network topology is created that is referred to
as a physical hierarchical star topology, although some texts make no distinction between the
two topologies.
Distributed Star
A type of network topology that is composed of individual networks that are based upon the
physical star topology connected in a linear fashion – i.e., 'daisy-chained' – with no central or
top level connection point (e.g., two or more 'stacked' hubs, along with their associated star
connected nodes or 'spokes').
Ring
A ring topology is a bus topology in a closed loop. Data travels around the ring in one
direction. When one node sends data to another, the data passes through each intermediate
node on the ring until it reaches its destination. The intermediate nodes repeat (retransmit) the
data to keep the signal strong.[4] Every node is a peer; there is no hierarchical relationship of
clients and servers. If one node is unable to retransmit data, it severs communication between
the nodes before and after it in the ring.
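The one-way, hop-by-hop forwarding described above can be sketched as follows; node names are illustrative:

```python
# Nodes listed in ring order; names are invented for illustration.
ring = ["A", "B", "C", "D", "E"]

def hops_to(src, dst):
    """Count retransmissions as data travels one way around the ring."""
    i, hops = ring.index(src), 0
    while ring[i] != dst:
        i = (i + 1) % len(ring)  # the next node repeats (retransmits) the data
        hops += 1
    return hops

print(hops_to("A", "D"))  # 3: the data passes through B and C, then reaches D
print(hops_to("D", "A"))  # 2: the data passes through E, then reaches A
```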
Mesh
Assuming that the value of communicating groups of endpoints, from any two endpoints up
to and including all the endpoints, is approximated by Reed's Law, the value of a fully
meshed network is proportional to the exponent of the number of subscribers.
In a partially connected network, certain nodes are connected to exactly one other node; but
some nodes are connected to two or more other nodes with a point-to-point link. This makes
it possible to make use of some of the redundancy of mesh topology that is physically fully
connected, without the expense and complexity required for a connection between every node
in the network.
Hybrid
Hybrid networks combine two or more topologies in such a way that the resulting network
does not exhibit one of the standard topologies (e.g., bus, star, ring, etc.). For example, a tree
network (or star-bus network) is a hybrid topology in which star networks are interconnected
via bus networks.[6][7] However, a tree network connected to another tree network is still
topologically a tree network, not a distinct network type. A hybrid topology is always
produced when two different basic network topologies are connected.
A star-ring network consists of two or more ring networks connected using a multistation
access unit (MAU) as a centralized hub.
Two other hybrid network types are hybrid mesh and hierarchical star.[6]
Daisy chain
Except for star-based networks, the easiest way to add more computers into a network is by
daisy-chaining, or connecting each computer in series to the next. If a message is intended for
a computer partway down the line, each system bounces it along in sequence until it reaches
the destination. A daisy-chained network can take two basic forms: linear and ring.
A linear topology puts a two-way link between one computer and the next. However,
this was expensive in the early days of computing, since each computer (except for
the ones at each end) required two receivers and two transmitters.
By connecting the computers at each end, a ring topology can be formed. An
advantage of the ring is that the number of transmitters and receivers can be cut in
half, since a message will eventually loop all of the way around. When a node sends a
message, the message is processed by each computer in the ring. If the ring breaks at a
particular link then the transmission can be sent via the reverse path thereby ensuring
that all nodes are always connected in the case of a single failure.
Centralization
The star topology reduces the probability of a network failure by connecting all of the
peripheral nodes (computers, etc.) to a central node. When the physical star topology is
applied to a logical bus network such as Ethernet, this central node (traditionally a hub)
rebroadcasts all transmissions received from any peripheral node to all peripheral nodes on
the network, sometimes including the originating node. All peripheral nodes may thus
communicate with all others by transmitting to, and receiving from, the central node only.
The failure of a transmission line linking any peripheral node to the central node will result in
the isolation of that peripheral node from all others, but the remaining peripheral nodes will
be unaffected. However, the disadvantage is that the failure of the central node will cause the
failure of all of the peripheral nodes.
If the central node is passive, the originating node must be able to tolerate the reception of an
echo of its own transmission, delayed by the two-way round trip transmission time (i.e. to and
from the central node) plus any delay generated in the central node. An active star network
has an active central node that usually has the means to prevent echo-related problems.
A tree topology (a.k.a. hierarchical topology) can be viewed as a collection of star networks
arranged in a hierarchy. This tree has individual peripheral nodes (e.g. leaves) which are
required to transmit to and receive from one other node only and are not required to act as
repeaters or regenerators. Unlike the star network, the functionality of the central node may
be distributed.
As in the conventional star network, individual nodes may thus still be isolated from the
network by a single-point failure of a transmission path to the node. If a link connecting a leaf
fails, that leaf is isolated; if a connection to a non-leaf node fails, an entire section of the
network becomes isolated from the rest.
To alleviate the amount of network traffic that comes from broadcasting all signals to all
nodes, more advanced central nodes were developed that are able to keep track of the
identities of the nodes that are connected to the network. These network switches will "learn"
the layout of the network by "listening" on each port during normal data transmission,
examining the data packets and recording the address/identifier of each connected node and
which port it is connected to in a lookup table held in memory. This lookup table then allows
future transmissions to be forwarded to the intended destination only.
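A minimal sketch of this learning behaviour, with invented addresses and port numbers:

```python
# The in-memory lookup table: source address -> port it was seen on.
table = {}

def handle_frame(src, dst, in_port, all_ports):
    """'Listen' on the incoming port, record the sender, then forward."""
    table[src] = in_port                           # learn the source's port
    if dst in table:
        return [table[dst]]                        # forward to the learned port only
    return [p for p in all_ports if p != in_port]  # unknown destination: flood

ports = [1, 2, 3, 4]
print(handle_frame("aa", "bb", 1, ports))  # [2, 3, 4]  ("bb" unknown, so flood)
print(handle_frame("bb", "aa", 3, ports))  # [1]        ("aa" was learned on port 1)
```

Real switches age out entries and handle table overflow; this sketch omits both.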
Decentralization
In a mesh topology (i.e., a partially connected mesh topology), there are at least two nodes
with two or more paths between them to provide redundant paths to be used in case the link
providing one of the paths fails. This decentralization is often used to compensate for the
single-point-failure disadvantage that is present when using a single device as a central node
(e.g., in star and tree networks). A special kind of mesh, limiting the number of hops between
two nodes, is a hypercube. The number of arbitrary forks in mesh networks makes them more
difficult to design and implement, but their decentralized nature makes them very useful. In
2012 the IEEE published the Shortest Path Bridging protocol to ease configuration tasks and
allows all paths to be active which increases bandwidth and redundancy between all
devices.[8][9][10][11][12]
This is similar in some ways to a grid network, where a linear or ring topology is used to
connect systems in multiple directions. A multidimensional ring has a toroidal topology, for
instance.
A fully connected network, complete topology, or full mesh topology is a network topology in
which there is a direct link between all pairs of nodes. In a fully connected network with n
nodes, there are n(n-1)/2 direct links. Networks designed with this topology are usually very
expensive to set up, but provide a high degree of reliability due to the multiple paths for data
that are provided by the large number of redundant links between nodes. This topology is
mostly seen in military applications.
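The link count follows directly from the n(n-1)/2 formula above, and shows why full-mesh cabling quickly becomes expensive:

```python
def full_mesh_links(n):
    """Direct links needed so every pair of n nodes is connected."""
    return n * (n - 1) // 2

for n in (3, 5, 10):
    print(n, full_mesh_links(n))  # 3 -> 3, 5 -> 10, 10 -> 45
```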
Router (computing)
A typical home or small office router showing the ADSL telephone line and Ethernet network
cable connections
A router[a] is a networking device that forwards data packets between computer networks.
Routers perform the traffic directing functions on the Internet. A data packet is typically
forwarded from one router to another through the networks that constitute the internetwork
until it reaches its destination node.[2]
A router is connected to two or more data lines from different networks.[b] When a data
packet comes in on one of the lines, the router reads the address information in the packet to
determine the ultimate destination. Then, using information in its routing table or routing
policy, it directs the packet to the next network on its journey. This creates an overlay
internetwork.
The most familiar type of router is the home or small office router, which simply passes IP
packets between the home computers and the Internet. An example would be the
owner's cable or DSL router, which connects to the Internet through an Internet service
provider (ISP). More sophisticated routers, such as enterprise routers, connect large business
or ISP networks up to the powerful core routers that forward data at high speed along the
optical fiber lines of the Internet backbone. Though routers are typically dedicated hardware
devices, software-based routers also exist.
Contents
1 Applications
o 1.1 Access
o 1.2 Distribution
o 1.3 Security
o 1.4 Core
o 1.5 Internet connectivity and internal use
2 Historical and technical information
3 Forwarding
4 See also
5 Notes
6 References
7 External links
Applications
When multiple routers are used in interconnected networks, the routers exchange information
about destination addresses using a dynamic routing protocol. Each router builds up a routing
table listing the preferred routes between any two systems on the interconnected networks.[3]
A router has interfaces for different physical types of network connections, such as copper
cables, fibre optic, or wireless transmission. It also contains firmware for different
networking communications protocol standards. Each network interface uses this specialized
computer software to enable data packets to be forwarded from one protocol transmission
system to another.
Routers may also be used to connect two or more logical groups of computer devices known
as subnets, each with a different network prefix. The network prefixes recorded in the routing
table do not necessarily map directly to the physical interface connections.[4]
Control plane: A router maintains a routing table that lists which route should be used
to forward a data packet, and through which physical interface connection. It does this
using internal pre-configured directives, called static routes, or by learning routes
using a dynamic routing protocol. Static and dynamic routes are stored in the Routing
Information Base (RIB). The control-plane logic then strips non essential directives
from the RIB and builds a Forwarding Information Base (FIB) to be used by the
forwarding-plane.
Forwarding plane: The router forwards data packets between incoming and outgoing
interface connections. It routes them to the correct network type using information
that the packet header contains. It uses data recorded in the routing table by the control
plane.
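A rough sketch of this RIB-to-FIB selection, with invented routes, preference values and interfaces (real route selection is considerably more involved):

```python
# Control-plane view: the RIB may hold several candidate routes per prefix.
# (prefix, next_hop, interface, administrative_preference) - all invented.
rib = [
    ("10.0.0.0/8", "192.0.2.1", "eth0", 1),    # static route, preferred
    ("10.0.0.0/8", "192.0.2.9", "eth1", 120),  # learned via a dynamic protocol
    ("0.0.0.0/0",  "192.0.2.1", "eth0", 1),    # default route
]

# Build the FIB: keep only the best (lowest-preference) entry per prefix,
# stripping the non-essential alternatives for the forwarding plane.
fib = {}
for prefix, nh, iface, pref in rib:
    if prefix not in fib or pref < fib[prefix][2]:
        fib[prefix] = (nh, iface, pref)

print(fib["10.0.0.0/8"][:2])  # ('192.0.2.1', 'eth0')
```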
Routers may provide connectivity within enterprises, between enterprises and the Internet, or
between internet service providers' (ISPs) networks. The largest routers (such as the Cisco
CRS-1 or Juniper T1600) interconnect the various ISPs, or may be used in large enterprise
networks.[6] Smaller routers usually provide connectivity for typical home and office
networks. Other networking solutions may be provided by a backbone Wireless Distribution
System (WDS), which avoids the costs of introducing networking cables into buildings.
All sizes of routers may be found inside enterprises.[7] The most powerful routers are usually
found in ISPs, academic and research facilities. Large businesses may also need more
powerful routers to cope with ever increasing demands of intranet data traffic. A three-layer
model is in common use, not all of which need be present in smaller networks.[8]
Access
A screenshot of the LuCI web interface used by OpenWrt. This page configures Dynamic
DNS.
Access routers, including 'small office/home office' (SOHO) models, are located at customer
sites such as branch offices that do not need hierarchical routing of their own. Typically, they
are optimized for low cost. Some SOHO routers are capable of running alternative free
Linux-based firmware such as Tomato, OpenWrt or DD-WRT.[9]
Distribution
Distribution routers aggregate traffic from multiple access routers, either at the same site
or by collecting the data streams from multiple sites into a major enterprise location. Distribution
routers are often responsible for enforcing quality of service across a wide area network
(WAN), so they may have considerable memory installed, multiple WAN interface
connections, and substantial onboard data processing routines. They may also provide
connectivity to groups of file servers or other external networks.
Security
See also: Universal Plug and Play § Problems with UPnP, and Wi-Fi Protected Setup
§ Vulnerabilities
External networks must be carefully considered as part of the overall security strategy. A
router may include a firewall, VPN handling, and other security functions, or these may be
handled by separate devices. Many companies have produced security-oriented routers,
including the Cisco PIX series, Juniper NetScreen and WatchGuard. Routers also commonly perform
network address translation, (which allows multiple devices on a network to share a single
public IP address[10][11][12]) and stateful packet inspection. Some experts argue that open
source routers are more secure and reliable than closed source routers because open source
routers allow mistakes to be quickly found and corrected.[13]
Core
Edge router: Also called a provider edge router, this router is placed at the edge of an
ISP network and uses External BGP (EBGP) to communicate with routers in other ISPs or in
a large enterprise autonomous system.
Subscriber edge router: Also called a customer edge router, this router is located at the
edge of the subscriber's network and likewise uses EBGP to reach its provider's autonomous
system. It is typically used in an (enterprise) organization.
Inter-provider border router: Interconnecting ISPs, is a BGP router that maintains
BGP sessions with other BGP routers in ISP Autonomous Systems.
Core router: A core router resides within an autonomous system as a backbone to
carry traffic between edge routers.[16]
Within an ISP: In the ISP's Autonomous System, a router uses internal BGP to
communicate with other ISP edge routers, other intranet core routers, or the ISP's
intranet provider border routers.
"Internet backbone:" The Internet no longer has a clearly identifiable backbone,
unlike its predecessor networks. See default-free zone (DFZ). The major ISPs' system
routers make up what could be considered to be the current Internet backbone core.[17]
ISPs operate all four types of the BGP routers described here. An ISP "core" router is
used to interconnect its edge and border routers. Core routers may also have
specialized functions in virtual private networks based on a combination of BGP and
Multi-Protocol Label Switching protocols.[18]
Port forwarding: Routers are also used for port forwarding between private
Internet-connected servers.[7]
Voice/Data/Fax/Video Processing Routers: Commonly referred to as access servers or
gateways, these devices are used to route and process voice, data, video and fax traffic
on the Internet. Since 2005, most long-distance phone calls have been processed as IP
traffic (VOIP) through a voice gateway. Use of access server type routers expanded
with the advent of the Internet, first with dial-up access and another resurgence with
voice phone service.
Larger networks commonly use multilayer switches, with layer 3 devices being used
to simply interconnect multiple subnets within the same security zone, and higher
layer switches when filtering, translation, load balancing or other higher level
functions are required, especially between zones.
The very first device that had fundamentally the same functionality as a router does today
was the Interface Message Processor (IMP); IMPs were the devices that made up the
ARPANET, the first TCP/IP network. The idea for a router (called "gateways" at the time)
initially came about through an international group of computer networking researchers
called the International Network Working Group (INWG). Set up in 1972 as an informal
group to consider the technical issues involved in connecting different networks, the
INWG became a subcommittee of the International Federation for Information Processing
later that year.[19]
These devices were different from most previous packet switching schemes in two ways.
First, they connected dissimilar kinds of networks, such as serial lines and local area
networks. Second, they were connectionless devices, which had no role in assuring that
traffic was delivered reliably, leaving that entirely to the hosts.[c]
The idea was explored in more detail, with the intention to produce a prototype system as part
of two contemporaneous programs. One was the initial DARPA-initiated program, which
created the TCP/IP architecture in use today.[20] The other was a program at Xerox PARC to
explore new networking technologies, which produced the PARC Universal Packet system;
due to corporate intellectual property concerns it received little attention outside Xerox for
years.[21] Some time after early 1974, the first Xerox routers became operational. The first
true IP router was developed by Virginia Strazisar at BBN, as part of that DARPA-initiated
effort, during 1975-1976. By the end of 1976, three PDP-11-based routers were in service in
the experimental prototype Internet.[22]
The first multiprotocol routers were independently created by staff researchers at MIT and
Stanford in 1981; the Stanford router was done by William Yeager, and the MIT one by Noel
Chiappa; both were also based on PDP-11s.[23][24][25][26] Virtually all networking now uses
TCP/IP, but multiprotocol routers are still manufactured. They were important in the early
stages of the growth of computer networking, when protocols other than TCP/IP were in use.
Modern Internet routers that handle both IPv4 and IPv6 are multiprotocol, but are simpler
devices than routers processing AppleTalk, DECnet, IP and Xerox protocols.
From the mid-1970s through the 1980s, general-purpose minicomputers served as routers.
Modern high-speed routers are highly specialized computers with extra hardware added to
speed both common routing functions, such as packet forwarding, and specialised functions
such as IPsec encryption. There is substantial use of Linux and Unix software based
machines, running open source routing code, for research and other applications. Cisco's
operating system was independently designed. Major router operating systems, such as those
from Juniper Networks and Extreme Networks, are extensively modified versions of Unix
software.
Forwarding
Further information: Routing and IP forwarding
The main purpose of a router is to connect multiple networks and forward packets destined
either for its own networks or other networks. A router is considered a layer-3 device because
its primary forwarding decision is based on the information in the layer-3 IP packet,
specifically the destination IP address. When a router receives a packet, it searches its routing
table to find the best match between the destination IP address of the packet and one of the
addresses in the routing table. Once a match is found, the packet is encapsulated in the
layer-2 data link frame for the outgoing interface indicated in the table entry. A router typically
does not look into the packet payload,[citation needed] but only at the layer-3 addresses to make a
forwarding decision, plus optionally other information in the header for hints on, for example,
quality of service (QoS). For pure IP forwarding, a router is designed to minimize the state
information associated with individual packets.[27] Once a packet is forwarded, the router
does not retain any historical information about the packet.[d]
The routing table itself can contain information derived from a variety of sources, such as
default or static routes that are configured manually, or dynamic routing protocols where the
router learns routes from other routers. A default route is one that is used to route all traffic
whose destination does not otherwise appear in the routing table; this is common – even
necessary – in small networks, such as a home or small business where the default route
simply sends all non-local traffic to the Internet service provider. The default route can be
manually configured (as a static route), or learned by dynamic routing protocols, or be
obtained by DHCP.[e][28]
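The best-match lookup and default route described above can be sketched with Python's standard ipaddress module; the table entries and next hops are illustrative:

```python
import ipaddress

# Illustrative routing table: prefix -> next hop (or outgoing interface).
# The 0.0.0.0/0 default route catches everything not otherwise listed.
routing_table = {
    "192.168.1.0/24": "eth0",        # directly connected local subnet
    "10.0.0.0/8":     "10.0.0.1",    # manually configured static route
    "0.0.0.0/0":      "203.0.113.1", # default: send all other traffic upstream
}

def lookup(dst):
    """Return the next hop for the longest (most specific) matching prefix."""
    addr = ipaddress.ip_address(dst)
    best = None
    for prefix, next_hop in routing_table.items():
        net = ipaddress.ip_network(prefix)
        if addr in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, next_hop)
    return best[1]  # the default route guarantees at least one match

print(lookup("192.168.1.7"))  # eth0
print(lookup("8.8.8.8"))      # 203.0.113.1 (falls through to the default route)
```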
A router can run more than one routing protocol at a time, particularly if it serves as an
autonomous system border router between parts of a network that run different routing
protocols; if it does so, then redistribution may be used (usually selectively) to share
information between the different protocols running on the same router.[29]
Besides making a decision as to which interface a packet is forwarded to, which is handled
primarily via the routing table, a router also has to manage congestion when packets arrive at
a rate higher than the router can process. Three policies commonly used in the Internet are
tail drop, random early detection (RED), and weighted random early detection (WRED). Tail
drop is the simplest and most easily implemented; the router simply drops new incoming
packets once the length of the queue exceeds the size of the buffers in the router. RED
probabilistically drops datagrams early when the queue exceeds a pre-configured portion of
the buffer, escalating to pure tail drop once a pre-determined maximum is reached. WRED
extends this by applying a weight to the average queue size used in the drop decision, so
that short bursts of traffic do not trigger random drops.
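The three drop policies can be sketched as follows; the thresholds, averaging weight, and maximum drop probability below are illustrative values, not defaults of any particular router:

```python
import random

# Illustrative RED parameters (queue lengths in packets); not vendor defaults.
MIN_TH, MAX_TH, MAX_P = 20, 60, 0.1  # thresholds and maximum drop probability
WEIGHT = 0.2                          # EWMA weight for the average queue size

def red_drop(avg_queue: float) -> bool:
    """Probabilistic early drop based on the average queue size (RED)."""
    if avg_queue < MIN_TH:
        return False                  # queue short: never drop
    if avg_queue >= MAX_TH:
        return True                   # past the maximum: behaves like tail drop
    # Drop probability rises linearly between the two thresholds.
    p = MAX_P * (avg_queue - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < p

def update_average(avg: float, instantaneous: int) -> float:
    """EWMA smoothing of the queue size, so short bursts barely move the
    average; weighting this average is the idea behind WRED."""
    return (1 - WEIGHT) * avg + WEIGHT * instantaneous
```

Tail drop corresponds to the `avg_queue >= MAX_TH` branch alone; RED adds the probabilistic middle region, and WRED tunes the weighting of the average.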
Another function a router performs is to decide which packet should be processed first when
multiple queues exist. This is managed through QoS, which is critical when Voice over IP is
deployed, so as not to introduce excessive latency.
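The queue-selection step can be sketched as a strict-priority scheduler; this is a simplification (real routers combine several queueing disciplines), and the queue numbers and packet labels are invented:

```python
import heapq

class PriorityScheduler:
    """Strict-priority scheduling between queues: lower number = higher
    priority, FIFO within a priority (a simplified QoS illustration)."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # tiebreaker preserves arrival order within a priority

    def enqueue(self, priority: int, packet: str):
        heapq.heappush(self._heap, (priority, self._seq, packet))
        self._seq += 1

    def dequeue(self) -> str:
        return heapq.heappop(self._heap)[2]

s = PriorityScheduler()
s.enqueue(2, "bulk-1")
s.enqueue(0, "voip-1")  # voice traffic gets the highest priority
s.enqueue(2, "bulk-2")
print(s.dequeue())  # voip-1 leaves first, keeping voice latency low
```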
Yet another function a router performs is called policy-based routing where special rules are
constructed to override the rules derived from the routing table when a packet forwarding
decision is made.[30]
Router functions may be performed through the same internal paths that the packets travel
inside the router. Some of the functions may be performed through an application-specific
integrated circuit (ASIC) to avoid overhead caused by multiple CPU cycles, and others may
have to be performed through the CPU as these packets need special attention that cannot be
handled by an ASIC.
See also
Computer networking portal
DECbit
Mobile broadband modem
Modem
Residential gateway
TCAM (ternary content-addressable memory; hardware acceleration of route lookup)
Wireless router
Notes
References
"router". Oxford English Dictionary (3rd ed.). Oxford University Press. September
2005. (Subscription or UK public library membership required.)
"Overview Of Key Routing Protocol Concepts: Architectures, Protocol Types,
Algorithms and Metrics". Tcpipguide.com. Retrieved 15 January 2011.
"Cisco Networking Academy's Introduction to Routing Dynamically". Cisco. Retrieved
August 1, 2015.
F. Baker, Requirements for IPv4 Routers, RFC 1812, June 1995.
H. Khosravi & T. Anderson, Requirements for Separation of IP Control and Forwarding,
RFC 3654, November 2003.
"Setting up NetFlow on Cisco Routers". MY-Technet.com. Date unknown. Retrieved 15
January 2011.
"Windows Home Server: Router Setup". Microsoft TechNet. 14 August 2010. Retrieved 15
January 2011.
Oppenheimer, Priscilla (2004). Top-Down Network Design. Indianapolis: Cisco Press.
ISBN 1-58705-152-4.
"Windows Small Business Server 2008: Router Setup". Microsoft TechNet. November 2010.
Retrieved 15 January 2011.
See "Network Address Translation (NAT) FAQ".
Cf. "RFC 3022 – Traditional IP Network Address Translator (Traditional NAT)".
But see "Security Considerations Of NAT" (PDF). University of Michigan.
Printer (computing)
From Wikipedia, the free encyclopedia
HP LaserJet 5 printer
The Game Boy Pocket Printer, a thermal printer released as a peripheral for the Nintendo Game Boy
This is an example of a wide-carriage dot matrix printer, designed for 14-inch (360 mm) wide paper,
shown with 8.5-by-14-inch (220 mm × 360 mm) legal paper. Wide carriage printers were often used
in business to print accounting records on 11-by-14-inch (280 mm × 360 mm) tractor-feed paper.
They were also called "132-column printers".
The introduction of the low-cost laser printer in 1984 with the first HP LaserJet, and the
addition of PostScript in next year's Apple LaserWriter, set off a revolution in printing known
as desktop publishing. Laser printers using PostScript mixed text and graphics, like dot-
matrix printers, but at quality levels formerly available only from commercial typesetting
systems. By 1990, most simple printing tasks like fliers and brochures were now created on
personal computers and then laser printed; expensive offset printing systems were being
dumped as scrap. The HP Deskjet of 1988 offered the same advantages as a laser printer in
terms of flexibility, but produced somewhat lower quality output (depending on the paper)
from much less expensive mechanisms. Inkjet systems rapidly displaced dot matrix and daisy
wheel printers from the market. By the 2000s high-quality printers of this sort had fallen
under the $100 price point and became commonplace.
The rapid uptake of internet email through the 1990s and into the 2000s has largely displaced
the need for printing as a means of moving documents, and a wide variety of reliable storage
systems means that a "physical backup" is of little benefit today. Even the desire for printed
output for "offline reading" while on mass transit or aircraft has been displaced by e-book
readers and tablet computers. Today, traditional printers are being used more for special
purposes, like printing photographs or artwork, and are no longer a must-have peripheral.
Starting around 2010, 3D printing became an area of intense interest, allowing the creation of
physical objects with the same sort of effort as an early laser printer required to produce a
brochure. These devices are in their earliest stages of development and have not yet become
commonplace.
Contents
1 Types of printers
2 Technology
o 2.1 Modern print technology
2.1.1 Toner-based printers
2.1.2 Liquid inkjet printers
2.1.3 Solid ink printers
2.1.4 Dye-sublimation printers
2.1.5 Thermal printers
o 2.2 Obsolete and special-purpose printing technologies
2.2.1 Impact printers
2.2.1.1 Typewriter-derived printers
2.2.1.2 Teletypewriter-derived printers
2.2.1.3 Daisy wheel printers
2.2.1.4 Dot-matrix printers
2.2.1.5 Line printers
2.2.2 Liquid ink electrostatic printers
2.2.3 Plotters
o 2.3 Other printers
3 Attributes
o 3.1 Printer control languages
o 3.2 Printing speed
o 3.3 Printing mode
o 3.4 Monochrome, colour and photo printers
o 3.5 Page yield
o 3.6 Cost per page
o 3.7 Business model
o 3.8 Printer steganography
o 3.9 Wireless printers
4 See also
5 References
6 External links
Types of printers
Personal printers are primarily designed to support individual users, and may be connected to
only a single computer. These printers are designed for low-volume, short-turnaround print
jobs, requiring minimal setup time to produce a hard copy of a given document. They are
generally slow devices, ranging from 6 to around 25 pages per minute (ppm), and the cost
per page is relatively high, though this is offset by the on-demand convenience. Some
printers can print documents stored on memory cards or from digital cameras and scanners.
Networked or shared printers are "designed for high-volume, high-speed printing." They are
usually shared by many users on a network and can print at speeds of 45 to around 100
ppm.[3] The Xerox 9700 could achieve 120 ppm.
A virtual printer is a piece of computer software whose user interface and API resembles that
of a printer driver, but which is not connected with a physical computer printer.
Technology
The choice of print technology has a great effect on the cost of the printer and cost of
operation, speed, quality and permanence of documents, and noise. Some printer technologies
don't work with certain types of physical media, such as carbon paper or transparencies.
A second aspect of printer technology that is often forgotten is resistance to alteration: liquid
ink, such as from an inkjet head or fabric ribbon, becomes absorbed by the paper fibers, so
documents printed with liquid ink are more difficult to alter than documents printed with
toner or solid inks, which do not penetrate below the paper surface.
Cheques can be printed with liquid ink or on special cheque paper with toner anchorage so
that alterations may be detected.[4] The machine-readable lower portion of a cheque must be
printed using MICR toner or ink. Banks and other clearing houses employ automation
equipment that relies on the magnetic flux from these specially printed characters to function
properly.
Toner-based printers
A laser printer rapidly produces high quality text and graphics. As with digital photocopiers
and multifunction printers (MFPs), laser printers employ a xerographic printing process but
differ from analog photocopiers in that the image is produced by the direct scanning of a laser
beam across the printer's photoreceptor.
Another toner-based printer is the LED printer which uses an array of LEDs instead of a laser
to cause toner adhesion to the print drum.
Liquid inkjet printers
Inkjet printers operate by propelling variably sized droplets of liquid ink onto almost any
sized page. They are the most common type of computer printer used by consumers.
Solid ink printers
Solid ink printers, also known as phase-change printers, are a type of thermal transfer printer.
They use solid sticks of CMYK-coloured ink, similar in consistency to candle wax, which are
melted and fed into a piezo crystal operated print-head. The printhead sprays the ink on a
rotating, oil coated drum. The paper then passes over the print drum, at which time the image
is immediately transferred, or transfixed, to the page. Solid ink printers are most commonly
used as colour office printers, and are excellent at printing on transparencies and other non-
porous media. Solid ink printers can produce excellent results. Acquisition and operating
costs are similar to laser printers. Drawbacks of the technology include high energy
consumption and long warm-up times from a cold state. Also, some users complain that the
resulting prints are difficult to write on, as the wax tends to repel inks from pens, and are
difficult to feed through automatic document feeders, but these traits have been significantly
reduced in later models. In addition, this type of printer is only available from one
manufacturer, Xerox, manufactured as part of their Xerox Phaser office printer line.
Previously, solid ink printers were manufactured by Tektronix, but Tek sold the printing
business to Xerox in 2001.
Dye-sublimation printers
A dye-sublimation printer (or dye-sub printer) is a printer which employs a printing process
that uses heat to transfer dye to a medium such as a plastic card, paper or canvas. The process
is usually to lay one colour at a time using a ribbon that has colour panels. Dye-sub printers
are intended primarily for high-quality colour applications, including colour photography;
and are less well-suited for text. While once the province of high-end print shops, dye-
sublimation printers are now increasingly used as dedicated consumer photo printers.
Thermal printers
Obsolete and special-purpose printing technologies
Epson MX-80, a popular model of dot-matrix printer in use for many years
The following technologies are either obsolete or limited to special applications, though most
were, at one time, in widespread use.
Impact printers
Impact printers rely on a forcible impact to transfer ink to the media. The impact printer
uses a print head that either hits the surface of the ink ribbon, pressing the ink ribbon against
the paper (similar to the action of a typewriter), or, less commonly, hits the back of the paper,
pressing the paper against the ink ribbon (the IBM 1403 for example). All but the dot matrix
printer rely on the use of fully formed characters, letterforms that represent each of the
characters that the printer was capable of printing. In addition, most of these printers were
limited to monochrome, or sometimes two-color, printing in a single typeface at one time,
although bolding and underlining of text could be done by "overstriking", that is, printing two
or more impressions either in the same character position or slightly offset. Impact printer
varieties include typewriter-derived printers, teletypewriter-derived printers, daisy wheel
printers, dot matrix printers and line printers. Dot matrix printers remain in common use in
businesses where multi-part forms are printed. An overview of impact printing[6] contains a
detailed description of many of the technologies used.
Typewriter-derived printers
Teletypewriter-derived printers
Main article: Teleprinter
The common teleprinter could easily be interfaced to the computer and became very popular
except for those computers manufactured by IBM. Some models used a "typebox" that was
positioned, in the X- and Y-axes, by a mechanism and the selected letter form was struck by a
hammer. Others used a type cylinder in much the same way as the Selectric typewriters used
their type ball. In either case, the letter form then struck a ribbon to print the letterform. Most
teleprinters operated at ten characters per second although a few achieved 15 CPS.
Daisy wheel printers
Daisy wheel printers operate in much the same fashion as a typewriter. A hammer strikes a
wheel with petals, the "daisy wheel", each petal containing a letter form at its tip. The letter
form strikes a ribbon of ink, depositing the ink on the page and thus printing a character. By
rotating the daisy wheel, different characters are selected for printing. These printers were
also referred to as letter-quality printers because they could produce text which was as clear
and crisp as a typewriter. The fastest letter-quality printers printed at 30 characters per
second.
Dot-matrix printers
Main article: Dot matrix printer
sample output from 9-pin dot matrix printer (one character expanded to show detail)
The term dot matrix printer is used for impact printers that use a matrix of small pins to
transfer ink to the page. The advantage of dot matrix over other impact printers is that they
can produce graphical images in addition to text; however the text is generally of poorer
quality than impact printers that use letterforms (type).
Dot matrix printers can either be character-based or line-based (that is, a single horizontal
series of pixels across the page), referring to the configuration of the print head.
In the 1970s and 1980s, dot matrix printers were one of the more common types of printers used
for general use, such as for home and small office use. Such printers normally had either 9 or
24 pins on the print head (early 7 pin printers also existed, which did not print descenders).
There was a period during the early home computer era when a range of printers were
manufactured under many brands such as the Commodore VIC-1525 using the Seikosha Uni-
Hammer system. This used a single solenoid with an oblique striker that would be actuated 7
times for each column of 7 vertical pixels while the head was moving at a constant speed.
The angle of the striker would align the dots vertically even though the head had moved one
dot spacing in the time. The vertical dot position was controlled by a synchronised
longitudinally ribbed platen behind the paper that rotated rapidly with a rib moving vertically
seven dot spacings in the time it took to print one pixel column.[7][8] 24-pin print heads were
able to print at a higher quality and started to offer additional type styles and were marketed
as Near Letter Quality by some vendors. Once the price of inkjet printers dropped to the point
where they were competitive with dot matrix printers, dot matrix printers began to fall out of
favour for general use.
Some dot matrix printers, such as the NEC P6300, can be upgraded to print in colour. This is
achieved through the use of a four-colour ribbon mounted on a mechanism (provided in an
upgrade kit that replaces the standard black ribbon mechanism after installation) that raises
and lowers the ribbons as needed. Colour graphics are generally printed in four passes at
standard resolution, thus slowing down printing considerably. As a result, colour graphics can
take up to four times longer to print than standard monochrome graphics, or up to 8-16 times
as long at high resolution mode.
Dot matrix printers are still commonly used in low-cost, low-quality applications such as cash
registers, or in demanding, very high volume applications like invoice printing. Impact
printing, unlike laser printing, allows the pressure of the print head to be applied to a stack of
two or more forms to print multi-part documents such as sales invoices and credit card
receipts using continuous stationery with carbonless copy paper. Dot-matrix printers were
being superseded even as receipt printers after the end of the twentieth century.
Line printers
Main article: Line printer
Line printers, as the name implies, print an entire line of text at a time. Four principal designs
exist.
Print drum from drum printer
Drum printers, where a horizontally mounted rotating drum carries the entire character set
of the printer repeated in each printable character position. The IBM 1132 printer is an
example of a drum printer. Drum printers are also found in adding machines and other
numeric printers (POS); their dimensions are compact, as only a dozen characters need to
be supported.[9]
Chain or train printers, where the character set is arranged multiple times around a linked
chain or a set of character slugs in a track traveling horizontally past the print line. The IBM
1403 is perhaps the most popular, and comes in both chain and train varieties. The band
printer is a later variant where the characters are embossed on a flexible steel band. The
LP27 from Digital Equipment Corporation is a band printer.
Bar printers, where the character set is attached to a solid bar that moves horizontally along
the print line, such as the IBM 1443.[10]
A fourth design, used mainly on very early printers such as the IBM 402, features
independent type bars, one for each printable position. Each bar contains the character set
to be printed. The bars move vertically to position the character to be printed in front of
the print hammer.[11]
In each case, to print a line, precisely timed hammers strike against the back of the paper at
the exact moment that the correct character to be printed is passing in front of the paper. The
paper presses forward against a ribbon which then presses against the character form and the
impression of the character form is printed onto the paper.
Comb printers, also called line matrix printers, represent the fifth major design. These
printers are a hybrid of dot matrix printing and line printing. In these printers, a comb of
hammers prints a portion of a row of pixels at one time, such as every eighth pixel. By
shifting the comb back and forth slightly, the entire pixel row can be printed, continuing the
example, in just eight cycles. The paper then advances and the next pixel row is printed.
Because far less motion is involved than in a conventional dot matrix printer, these printers
are very fast compared to dot matrix printers and are competitive in speed with formed-
character line printers while also being able to print dot matrix graphics. The Printronix
P7000 series of line matrix printers are still manufactured as of 2013.
Line printers are the fastest of all impact printers and are used for bulk printing in large
computer centres. A line printer can print at 1100 lines per minute or faster, frequently
printing pages more rapidly than many current laser printers. On the other hand, the
mechanical components of line printers operate with tight tolerances and require regular
preventive maintenance (PM) to produce top-quality print. They are virtually never used with
personal computers and have now been replaced by high-speed laser printers. The legacy of
line printers lives on in many computer operating systems, which use the abbreviations "lp",
"lpr", or "LPT" to refer to printers.
Liquid ink electrostatic printers
Liquid ink electrostatic printers use a chemically coated paper, which is charged by the print
head according to the image of the document. The paper is passed near a pool of liquid ink
with the opposite charge. The charged areas of the paper attract the ink and thus form the
image. This process was developed from the process of electrostatic copying.[12] Color
reproduction is very accurate, and because there is no heating the scale distortion is less than
±0.1%. (All laser printers have an accuracy of ±1%.)
Worldwide, most survey offices used this printer before color inkjet plotters became popular.
Liquid ink electrostatic printers were mostly available in 36-to-54-inch (910 to 1,370 mm)
widths, some offering 6-colour printing. These were also used to print large billboards. The
technology was first introduced by Versatec, which was later bought by Xerox. 3M also used
to make these printers.[13]
Plotters
Other printers
A number of other sorts of printers are important for historical reasons, or for special purpose
uses:
Attributes
Printer control languages
Most printers other than line printers accept control characters or unique character sequences
to control various printer functions. These may range from shifting from lower to upper case
or from black to red ribbon on typewriter printers to switching fonts and changing character
sizes and colors on raster printers. Early printer controls were not standardized, with each
manufacturer's equipment having its own set. The IBM Personal Printer Data Stream (PPDS)
became a commonly used command set for dot-matrix printers.
Today, most printers accept one or more page description languages (PDLs). Laser printers
with greater processing power frequently offer support for variants of Hewlett-Packard's
Printer Command Language (PCL), PostScript or XML Paper Specification. Most inkjet
devices support manufacturer-proprietary PDLs such as ESC/P. The diversity of mobile
platforms has led to various standardization efforts around device PDLs, such as the Printer
Working Group's (PWG) PWG Raster.
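As a sketch of how such escape-code languages work, a few basic ESC/P sequences (ESC @ to initialize, ESC E and ESC F to switch bold on and off) can be assembled into a raw print job. This is a minimal illustration, not a complete driver, and a real job would include far more setup:

```python
ESC = b"\x1b"  # the escape byte that introduces ESC/P commands

def escp_job(text: str) -> bytes:
    """Build a minimal ESC/P print job: reset, bold heading, body, form feed."""
    job = ESC + b"@"                                 # ESC @ : initialize printer
    job += ESC + b"E" + b"HEADING\r\n" + ESC + b"F"  # ESC E / ESC F : bold on/off
    job += text.encode("ascii") + b"\r\n"            # plain body text
    job += b"\x0c"                                   # form feed: eject the page
    return job

raw = escp_job("Hello, printer")
```

Sending `raw` to a compatible device over a raw port would print the text; PCL and PostScript express the same kind of control at a much higher level.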
Printing speed
The speed of early printers was measured in units of characters per minute (cpm) for
character printers, or lines per minute (lpm) for line printers. Modern printers are measured in
pages per minute (ppm). These measures are used primarily as a marketing tool, and are not
as well standardised as toner yields. Usually pages per minute refers to sparse monochrome
office documents, rather than dense pictures which usually print much more slowly,
especially colour images. PPM figures usually refer to A4 paper in Europe and letter paper
in the United States, resulting in a 5-10% difference.
Printing mode
The data received by a printer may be:
A string of characters
A bitmapped image
A vector image
A computer program written in a page description language, such as PCL or PostScript
Some printers can process all four types of data, while others cannot.
Character printers, such as daisy wheel printers, can handle only plain text data or rather
simple point plots.
Pen plotters typically process vector images. Inkjet based plotters can adequately reproduce
all four.
Modern printing technology, such as laser printers and inkjet printers, can adequately
reproduce all four. This is especially true of printers equipped with support for PCL or
PostScript, which includes the vast majority of printers produced today.
Today it is possible to print everything (even plain text) by sending ready bitmapped images
to the printer. This allows better control over formatting, especially among machines from
different vendors. Many printer drivers do not use the text mode at all, even if the printer is
capable of it.[citation needed]
Monochrome, colour and photo printers
A monochrome printer can only produce an image consisting of one colour, usually black. A
monochrome printer may also be able to produce various tones of that color, such as a grey-
scale. A colour printer can produce images of multiple colours. A photo printer is a colour
printer that can produce images that mimic the colour range (gamut) and resolution of prints
made from photographic film. Many can be used on a standalone basis without a computer,
using a memory card or USB connector.
Page yield
The page yield is the number of pages that can be printed from a toner cartridge or ink
cartridge—before the cartridge needs to be refilled or replaced. The actual number of pages
yielded by a specific cartridge depends on a number of factors.[15]
For a fair comparison, many laser printer manufacturers use the ISO/IEC 19752 process to
measure "the" toner cartridge yield.[16][17][18]
In order to fairly compare operating expenses of printers with a relatively small ink cartridge
to printers with a larger, more expensive toner cartridge that typically holds more toner and
so prints more pages before the cartridge needs to be replaced, many people prefer to estimate
operating expenses in terms of cost per page (CPP).[16][17][19][20][21][22]
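The cost-per-page comparison reduces to a simple division; the cartridge prices and page yields below are invented for illustration, not vendor figures:

```python
def cost_per_page(cartridge_price: float, page_yield: int) -> float:
    """Cost per page (CPP) for the cartridge alone, ignoring paper and power."""
    return cartridge_price / page_yield

# Invented example figures: a small ink cartridge vs a larger toner cartridge.
ink_cpp = cost_per_page(25.0, 250)     # small, cheap cartridge
toner_cpp = cost_per_page(80.0, 2000)  # larger, more expensive cartridge
print(f"ink: {ink_cpp:.2f}/page, toner: {toner_cpp:.2f}/page")
```

Even though the toner cartridge costs more up front, its larger yield gives it the lower cost per page, which is exactly why CPP is the preferred comparison.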
Business model
Often the "razor and blades" business model is applied. That is, a company may sell a printer
at cost, and make profits on the ink cartridge, paper, or some other replacement part. This has
caused legal disputes regarding the right of companies other than the printer manufacturer to
sell compatible ink cartridges. To protect their business model, several manufacturers invest
heavily in developing new cartridge technology and patenting it.
Other manufacturers, in reaction to the challenges from using this business model, choose to
make more money on printers and less on the ink, promoting the latter through their
advertising campaigns. Finally, this generates two clearly different proposals: "cheap
printer – expensive ink" or "expensive printer – cheap ink". Ultimately, the consumer
decision depends on their reference interest rate or their time preference. From an economics
viewpoint, there is a clear trade-off between cost per copy and cost of the printer.[23]
Printer steganography
An illustration showing small yellow tracking dots on white paper, generated by a color laser printer
Wireless printers
More than half of all printers sold at U.S. retail in 2010 were wireless-capable, but nearly
three-quarters of consumers with access to those printers were not taking advantage of the
ability to print from multiple devices, according to a Wireless Printing Study.[26]
See also
3D printing
Cardboard modeling
List of printer companies
Print (command)
Printer driver
Print screen
Print server
Printable version
Label printer
Printer friendly
Printer point
Printer (publishing)
Printmaking
References
Morley, Deborah (April 2007). Understanding Computers: Today & Tomorrow, Comprehensive
2007 Update Edition. Cengage Learning. p. 164. ISBN 9781305172425.
Abagnale, Frank (2007). "Protection Against Cheque Fraud" (PDF). abagnale.com. Retrieved
2007-06-27.
J. L. Zable; H. C. Lee (November 1997). "An overview of impact printing" (PDF). IBM Journal
of Research and Development. pp. 651–668. doi:10.1147/rd.416.0651. ISSN 0018-8646.
(subscription required)
"VIC-1525 Graphics Printer User Manual" (PDF). Commodore Computer. Retrieved 22 February
2015.
Wolff, John. "The Olivetti Logos 240 Electronic Calculator - Technical Description". John Wolff's
Web Museum. Retrieved 22 February 2015.
IBM Corporation (1963). IBM 402, 403 and 419 Accounting Machines Manual of Operation (PDF).
"Measuring Yield: The ISO Standard for Toner Cartridge Yield for Monochrome LaserJet
Printers". Hewlett-Packard.
"ISO Page Yields". quote: "Many original equipment manufacturers of printers and multifunction
products (MFPs), including Lexmark, utilize the international industry standards for page yields
(ISO/IEC 19752, 19798, and 24711)."
"How We Test Printers: Cost Per Page Calculation". 2011. Computer Shopper.
Artz, D (May–Jun 2001). "Digital steganography: hiding data within data". IEEE Xplore. 5 (3): 75,
80. Retrieved April 11, 2013.
"List of Printers Which Do or Do Not Display Tracking Dots". Electronic Frontier Foundation.
Retrieved 11 March 2011.
SGD includes a number of different object types. The set of objects available, and the
attributes for each object, are collectively called the schema. SGD objects are based on the
commonly used LDAP version 3 schema. These objects have been extended, using the
standard method of doing so, to support SGD functionality. For more information on the
LDAP schema, see RFC 2256.
You use objects to represent the different parts of your organization. Together, the objects
form your organizational hierarchy. SGD uses a local repository to store all the objects in the
organizational hierarchy.
In the SGD Administration Console, you use the following tabs to manage the organizational
hierarchy:
User Profiles
Applications
Application Servers
The following sections describe these tabs, the objects that they can contain, and how they
are used. The System Objects organization is also described.
On the command line, you manage the organizational hierarchy with the tarantella
object family of commands. You can also populate the organizational hierarchy using a
batch script.
You can use other objects, such as an Organizational Unit (OU) object, to subdivide your
organization. For example, you might want to use an OU for each department in your
organization. An OU can contain other OUs, to further subdivide your organization.
User Profile objects are used to represent a user (or a group of users if you are using LDAP or
Active Directory authentication).
Organization, OU and User Profile objects have an Assigned Applications tab. You use this
tab to assign applications to users. The applications listed on the Assigned Applications tab
are the applications a user can access through SGD.
The most important influence on the design of the hierarchy is the authentication
mechanisms you use.
For example, if you use UNIX system authentication, you can structure the hierarchy
however you like. However, with LDAP authentication, you might need to mirror part
of your LDAP directory structure.
The settings for User Profile objects and OU objects can be inherited from the object's
parent in the organizational hierarchy. For example, if everyone in a department needs
an application, assign the application to the OU that represents the department. Every
user belonging to that OU gets the applications assigned to the OU.
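The parent-to-child inheritance described above can be sketched with a toy model; this illustrates the rule only, and the class and names here are invented, not the SGD schema or API:

```python
# Toy model of assigned-application inheritance down an organizational
# hierarchy. Object names follow LDAP-style conventions for flavour only.

class OrgObject:
    def __init__(self, name, parent=None, assigned=()):
        self.name = name
        self.parent = parent
        self.assigned = set(assigned)  # applications assigned directly here

    def effective_applications(self):
        """Applications assigned here plus everything inherited from ancestors."""
        apps = set(self.assigned)
        if self.parent is not None:
            apps |= self.parent.effective_applications()
        return apps

org = OrgObject("o=Example", assigned={"Intranet"})
sales = OrgObject("ou=Sales", parent=org, assigned={"CRM"})
user = OrgObject("cn=Ada", parent=sales)

print(sorted(user.effective_applications()))  # ['CRM', 'Intranet']
```

Assigning an application to the OU makes it appear for every user below that OU, which is the behaviour the paragraph above describes.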
User profile objects are used to give users access to particular applications and
customized settings. Depending on the authentication mechanisms you are using, a
default user profile is often used and this might be sufficient for your needs. This is
particularly true if you use an LDAP directory to assign applications to users.
The following table lists the object types that are available on the User Profiles tab and how
they are used.
You can use OU objects to subdivide the applications organization. For example, you might
want to use an OU to contain the applications for a department in your organization.
Use a naming convention for each application or document object type. The name of the
application or document object is displayed to users.
Application, Group, and OU objects have an Assigned User Profiles tab. You use this tab to
assign applications to users. The users listed on the Assigned User Profiles tab are the users
that can access the application through SGD.
Application objects have a Hosting Application Servers tab. You use this tab to assign
application servers to applications. The application servers listed on the Hosting Application
Servers tab are the application servers that can run the application.
The following table lists the object types that are available on the Applications tab and how
they are used.
Object Type: Organizational Unit (Directory)
Description: Use an OU object to divide the applications into different departments, sites, or
teams in your organization. On the command line, you create an OU object with the
tarantella object new_orgunit command. OU objects have an ou= naming attribute.
You can use OU objects to subdivide the application servers organization. For example, you
might want to use an OU to contain the application servers on a particular site.
Application Server objects have a Hosted Applications tab. You use this tab to assign
applications to application servers. The applications listed on the Hosted Applications tab are
the applications that are configured to run on the application server.
The following table lists the object types that are available on the Application Servers tab and
how they are used.
Object Type: Organizational Unit (Directory)
Description: Use an OU object to divide the application servers into different departments,
sites, or teams in your organization. On the command line, you create an OU object with the
tarantella object new_orgunit command. OU objects have an ou= naming attribute.
The System Objects organization contains the Global Administrators role object. This object
determines who is a Secure Global Desktop Administrator, and who can run the SGD
administration tools.
The System Objects organization also contains profile objects. These are default user profile
objects for use with the various authentication mechanisms supported by SGD.
You can edit objects in the System Objects organization, but you cannot add, delete, move, or
rename objects.