ASP:-
Microsoft's Active Server Pages (ASP) technology provides a
framework for building dynamic HTML pages which enable Internet and
Intranet applications to be interactive.
Databases:-
Simply put, a database is a computerized record keeping
system. More completely, it is a system involving data, the hardware that
physically stores that data, the software that utilizes the hardware's file
system in order to 1) store the data and 2) provide a standardized method
for retrieving or changing the data, and finally, the users who turn the data
into information.
Design:-
Web site design is more than just some text and pretty
graphics.
DHTML:-
"Dynamic HTML" is typically used to describe the combination
of HTML, style sheets and scripts that allows documents to be animated.
Dynamic HTML allows a Web page to change after it's loaded into the
browser -- there doesn't have to be any communication with the Web server
for an update. You can think of it as 'animated' HTML. For example, a piece
of text can change from one size or color to another, or a graphic can move
from one location to another, in response to some kind of user action, such
as clicking a button.
Graphics:-
Resources, demos and tutorials on the basics of graphics
design and construction, including integrating images into your Web pages.
HTML:-
Hypertext Markup Language is the fundamental building block of
the Web. We present several articles, tutorials, and references on HTML.
Multimedia:-
Multimedia makes your Web sites come alive. These
tutorials can help make that happen.
Perl:-
An interpreted language used in CGI scripts for handling text files.
PHP:-
An open-source server-parsed embedded scripting language.
Usability:-
Make certain that your Web sites are "user-friendly".
XML:-
Extensible Markup Language (XML) is a human-readable, machine-understandable,
general syntax for describing hierarchical data, applicable to a wide range
of applications (databases, e-commerce, Java, Web development, searching, etc.).
Internet Basics: The Internet
Internet History: The Internet's roots can be traced to the 1950s with the
launch of Sputnik, the ensuing space race, the Cold War and the development
of ARPAnet by the Department of Defense's Advanced Research Projects Agency (ARPA),
but it really took off in the 1980s when the National Science Foundation
used ARPAnet to link its five regional supercomputer centers. From there
evolved a high-speed backbone of Internet access for many other types of
networks, universities, institutions, bulletin board systems and commercial
online services. The end of the decade saw the emergence of the World
Wide Web, which heralded a platform-independent means of communication
enhanced with a pleasant and relatively easy-to-use graphical interface.
To access the Internet, you need the following minimum configuration (as of
spring '97). You can sometimes make do with less, but you'll notice
shortcomings.
To truly understand how much of the Internet operates, including the Web,
it is important to understand the concept of client/server computing. The
client/server model is a form of distributed computing where one program
(the client) communicates with another program (the server) for the purpose
of exchanging information.
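The client/server exchange described above can be sketched with raw sockets in Python. This is an illustrative echo service, not any real Internet protocol; all names here are made up for the example.

```python
import socket
import threading

def run_echo_server(ready):
    """The server: listen, accept one client, echo one message back."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))         # port 0: let the OS pick a free port
    srv.listen(1)
    ready["port"] = srv.getsockname()[1]
    ready["event"].set()               # signal that the server is listening
    conn, _ = srv.accept()
    data = conn.recv(1024)             # the client's request
    conn.sendall(b"echo: " + data)     # the server's response
    conn.close()
    srv.close()

def ask_server(port, message):
    """The client: connect to the server, send a request, read the reply."""
    with socket.create_connection(("127.0.0.1", port)) as s:
        s.sendall(message)
        return s.recv(1024)

ready = {"event": threading.Event()}
t = threading.Thread(target=run_echo_server, args=(ready,))
t.start()
ready["event"].wait()
reply = ask_server(ready["port"], b"hello")
t.join()
print(reply)
```

The same pattern (one program listening, another initiating a connection and exchanging messages) underlies FTP, HTTP, SMTP, and POP3 as described in the sections that follow.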
Technical Detail
As outlined in Figure 1, middleware services are sets of
distributed software that exist between the application and
the operating system and network services on a system node
in the network.
Specifically, FTP is a commonly used protocol for exchanging files over any
network that supports the TCP/IP protocol (such as the Internet or an
intranet). There are two computers involved in an FTP transfer: a server and
a client. The FTP server, running FTP server software, listens on the
network for connection requests from other computers. The client
computer, running FTP client software, initiates a connection to the server.
Once connected, the client can perform a number of file manipulation
operations, such as uploading files to the server, downloading files from the
server, renaming or deleting files on the server, and so on. Any software company or individual
programmer is able to create FTP server or client software because the
protocol is an open standard. Virtually every computer platform supports the
FTP protocol. This allows any computer connected to a TCP/IP based
network to manipulate files on another computer on that network regardless
of which operating systems are involved (if the computers permit FTP
access). There are many existing FTP client and server programs. FTP
servers can be set up on many kinds of hosts, including game servers,
voice servers, Internet hosting machines, and other physical servers.
Overview
FTP runs exclusively over TCP. FTP servers by default listen on port 21 for
incoming connections from FTP clients. A connection to this port from the
FTP Client forms the control stream on which commands are passed to the
FTP server from the FTP client and on occasion from the FTP server to the
FTP client. For the actual file transfer to take place, a different connection
is required which is called the data stream. Depending on the transfer mode,
the process of setting up the data stream is different.
In active mode, the FTP client opens a random port (> 1023), sends the FTP
server the random port number on which it is listening over the control
stream and waits for a connection from the FTP server. When the FTP
server initiates the data connection to the FTP client it binds the source
port to port 20 on the FTP server.
In order to use active mode, the client sends a PORT command, with the IP
and port as argument. The format for the IP and port is "h1,h2,h3,h4,p1,p2".
Each field is a decimal representation of 8 bits of the host IP, followed by
the chosen data port. For example, a client with an IP of 192.168.0.1,
listening on port 1025 for the data connection, will send the command "PORT
192,168,0,1,4,1". The port fields are interpreted as p1*256 + p2, here 4*256 + 1 = 1025.
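The PORT argument encoding above can be expressed as a small helper. This is a sketch for illustration; the function name is invented, not taken from any FTP library.

```python
def encode_port_argument(ip, port):
    """Encode a dotted-quad IP and a data port into the h1,h2,h3,h4,p1,p2
    form used by the FTP PORT command (p1 = high byte, p2 = low byte)."""
    p1, p2 = divmod(port, 256)
    return ",".join(ip.split(".") + [str(p1), str(p2)])

print(encode_port_argument("192.168.0.1", 1025))  # 192,168,0,1,4,1
```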
In passive mode, the FTP Server opens a random port (> 1023), sends the
FTP client the server's IP address to connect to and the port on which it is
listening (a 16 bit value broken into a high and low byte) over the control
stream and waits for a connection from the FTP client. In this case the FTP
client binds the source port of the connection to a random port greater than
1023.
To use passive mode, the client sends the PASV command to which the
server would reply with something similar to "227 Entering Passive Mode
(127,0,0,1,78,52)". The syntax of the IP address and port are the same as
for the argument to the PORT command.
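Decoding a 227 reply is the mirror image of the PORT encoding: the first four fields form the IP address, and the last two are the high and low bytes of the port. A sketch (the function name is illustrative):

```python
import re

def parse_pasv_reply(reply):
    """Extract (ip, port) from a 227 reply such as
    '227 Entering Passive Mode (127,0,0,1,78,52)'."""
    m = re.search(r"\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)", reply)
    if m is None:
        raise ValueError("not a valid 227 reply")
    nums = [int(g) for g in m.groups()]
    # First four fields are the IP; last two are the port's high and low bytes.
    return ".".join(map(str, nums[:4])), nums[4] * 256 + nums[5]

print(parse_pasv_reply("227 Entering Passive Mode (127,0,0,1,78,52)"))
```

Here the data port is 78*256 + 52 = 20020.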
In extended passive mode, the FTP Server operates exactly the same as
passive mode, however it only transmits the port number (not broken into
high and low bytes) and the client is to assume that it connects to the same
IP address that was originally connected to. Extended passive mode was
added by RFC 2428 in September 1998.
While data is being transferred via the data stream, the control stream sits
idle. This can cause problems with large data transfers through firewalls
which time out sessions after lengthy periods of idleness. While the file may
well be successfully transferred, the control session can be disconnected by
the firewall, causing an error to be generated.
Resuming uploads is not as easy. Although the FTP protocol supports the
APPE command to append data to a file on the server, the client does not
know the exact position at which a transfer got interrupted. It has to obtain
the size of the file some other way, for example over a directory listing or
using the SIZE command.
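The bookkeeping for resuming an upload can be sketched as follows, assuming the client has already learned the partial remote size via SIZE or a directory listing (the function name is invented for the example):

```python
def plan_upload_resume(local_size, remote_size):
    """Given the local file length and the partial size the server reports,
    return the local offset to seek to and the number of bytes still to
    append (e.g. with the APPE command)."""
    if remote_size > local_size:
        raise ValueError("remote file is larger than the local source")
    return remote_size, local_size - remote_size

print(plan_upload_resume(1000, 400))  # (400, 600)
```

If 400 of 1000 bytes made it to the server before the interruption, the client seeks to offset 400 locally and appends the remaining 600 bytes.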
In ASCII mode (see below), resuming transfers can be troublesome if client
and server use different end of line characters.
TELNET
TELNET (TELetype NETwork) is a network protocol used on the Internet or
local area network (LAN) connections. It was developed in 1969, beginning
with RFC 15, and standardized as IETF STD 8, one of the first Internet
standards. It has limitations that are considered to be security risks.
The term telnet also refers to software which implements the client part of
the protocol. TELNET clients have been available on most UNIX systems for
many years and are available for virtually all platforms. Most network
equipment and OSs with a TCP/IP stack support some kind of TELNET
service server for their remote configuration (including ones based on
Windows NT). Recently, SSH has begun to dominate remote access for Unix-
based machines.
The protocol has many extensions, some of which have been adopted as
Internet Standards. IETF standards STD 27 through STD 32 define various
extensions, most of which are extremely common. Other extensions are on
the IETF standards track as proposed standards.
Usenet
Usenet is one of the oldest computer network communications systems still
in widespread use. It was established in 1980, following experiments from
the previous year, over a decade before the World Wide Web was
introduced and the general public got access to the Internet. It was
originally conceived as a "poor man's ARPANET," employing UUCP to offer
mail and file transfers, as well as announcements through the newly
developed news software. This system, developed at University of North
Carolina at Chapel Hill and Duke University, was called USENET to
emphasize its creators' hope that the USENIX organization would take an
active role in its operation (Daniel et al, 1980).
The articles that users post to Usenet are organized into topical categories
called newsgroups, which are themselves logically organized into hierarchies
of subjects. For instance, sci.math and sci.physics are within the sci
hierarchy, for science.
When a user posts an article, it is initially only available on that user's news
server. Each news server, however, talks to one or more other servers (its
"newsfeeds") and exchanges articles with them. In this fashion, the article
is copied from server to server and (if all goes well) eventually reaches every
server in the network. The later peer-to-peer networks operate on a similar
principle; but for Usenet it is normally the sender, rather than the receiver,
who initiates transfers. Some have noted that this seems a monstrously
inefficient protocol in the era of abundant high-speed network access.
Usenet was designed for a time when networks were much slower, and not
always available. Many sites on the original Usenet network would connect
only once or twice a day to batch-transfer messages in and out.
In Usenet's early days, many articles included a notice at the end disclosing
whether the author had any financial motive, or axe to grind, in posting
about a product or issue. That was back when the community consisted largely
of computing pioneers.
Today, almost all Usenet traffic is carried over the Internet. The current
format and transmission of Usenet articles is very similar to that of
Internet email messages. However, Usenet articles are posted for general
consumption; any usenet user has access to all newsgroups, unlike email,
which requires a list of known recipients.
Gopher
Gopher is a distributed document search and retrieval network protocol
designed for the Internet. Its goal was similar to that of the World Wide
Web, and it has been almost completely displaced by the Web.
The Gopher protocol offers some features not natively supported by the
Web and imposes a much stronger hierarchy on information stored in it. Its
text menu interface is well-suited to computing environments that rely
heavily on remote computer terminals, common in universities at the time of
its creation. Some consider it to be the superior protocol for storing and
searching large repositories of information.
Origins
The original Gopher system was released in late spring of 1991 by Mark
McCahill, Farhad Anklesaria, Paul Lindner, Dan Torrey, and Bob Alberti of
the University of Minnesota. Its central goals were:
Proxy server
In computer networks, a proxy server is a server (a computer system or an
application program) which services the requests of its clients by making
requests to other servers. A client connects to the proxy server, requesting a
file, connection, web page, or other resource available from a different server.
A proxy server provides the resource by connecting to the specified server,
with some exceptions: a proxy server may alter the client's request or the
server's response, or it may service the request without contacting the
specified server at all.
(A server that passes all requests and replies unmodified is not, strictly
speaking, a proxy server; it is a gateway.)
A proxy server can be placed in the user's local computer, or at specific key
points between the user and the destination servers or the Internet.
Web proxy
Proxies that focus on WWW traffic are called web proxies. Many web proxies
attempt to block offensive web content. Other web proxies reformat web pages
for a specific purpose or audience (e.g., cell phones and PDAs, or persons
with disabilities). Network operators can also deploy proxies to intercept
computer viruses and other hostile content served from remote web pages (for
example, Microsoft Internet Security and Acceleration Server).
Many organizations (including families, schools, corporations, and countries)
use proxy servers to enforce acceptable network use policies (see
content-control software) or to provide security, anti-malware, and/or caching
services. A traditional web proxy is not transparent to the client
application, which must be explicitly configured to use it.
An open proxy is a proxy server which will accept client connections from any
IP address and make connections to any Internet resource. Abuse of open
proxies is currently implicated in a significant portion of e-mail spam
delivery. Spammers frequently install open proxies on unwitting end users'
operating systems by means of computer viruses designed for this purpose.
Internet Relay Chat (IRC) abusers also frequently use open proxies to cloak
their identities.
Because proxies might be used for abuse, system administrators have developed
a number of ways to refuse service to open proxies. IRC networks such as the
Blitzed network automatically test client systems for known types of open
proxy.
Reverse proxy
A reverse proxy is a proxy server that is installed in the neighborhood of one
or more web servers. All traffic coming from the Internet with a destination
of one of the web servers goes through the proxy server. There are several
reasons for installing reverse proxy servers:
Security: the proxy server is an additional layer of defense and therefore
protects the web servers further up the chain.
Encryption / SSL acceleration: when secure web sites are created, the SSL
encryption is often not done by the web server itself, but by a reverse proxy
that is equipped with SSL acceleration hardware. See Secure Sockets Layer.
Load balancing: the reverse proxy can distribute the load to several web
servers, each web server serving its own application area. In such a case, the
reverse proxy may need to rewrite the URLs in each web page (translating
externally known URLs to the internal locations).
HEAD
Asks for the response identical to the one that would correspond to a
GET request, but without the response body. This is useful for
retrieving meta-information written in response headers, without
having to transport the entire content.
GET
Requests a representation of the specified resource. By far the most
common method used on the Web today. Should not be used for
operations that cause side-effects (using it for actions in web
applications is a common misuse). See 'safe methods' below.
POST
Submits data to be processed (e.g. from an HTML form) to the
identified resource. The data is included in the body of the request.
This may result in the creation of a new resource, the update of
existing resources, or both.
PUT
Uploads a representation of the specified resource.
DELETE
Deletes the specified resource.
TRACE
Echoes back the received request, so that a client can see what
intermediate servers are adding or changing in the request.
OPTIONS
Returns the HTTP methods that the server supports. This can be used
to check the functionality of a web server.
CONNECT
For use with a proxy that can change to being an SSL tunnel.
HTTP servers are supposed to implement at least the GET and HEAD
methods and, whenever possible, also the OPTIONS method.
Request message
The request line and headers must all end with CRLF (i.e. a carriage return
followed by a line feed). The empty line must consist of only CRLF and no
other whitespace.
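Assembling a request with the required CRLF line endings can be sketched as follows; the helper name and the particular headers are illustrative, not mandated by the protocol (only the request line and Host header are required in HTTP/1.1).

```python
def build_request(method, path, host):
    """Assemble an HTTP/1.1 request message. Every line, including the
    last header, ends with CRLF; a bare CRLF line terminates the headers."""
    lines = [
        f"{method} {path} HTTP/1.1",
        f"Host: {host}",
        "Connection: close",
    ]
    return "\r\n".join(lines) + "\r\n\r\n"

req = build_request("HEAD", "/index.html", "example.com")
print(repr(req))
```

A GET request would look the same apart from the method name; a POST would additionally carry a body after the blank line.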
Simple Mail Transfer Protocol (SMTP) is the de facto standard for e-mail
transmissions across the Internet. Formally SMTP is defined in RFC 821
(STD 10) as amended by RFC 1123 (STD 3) chapter 5. The protocol used
today is also known as ESMTP and defined in RFC 2821.
Description
SMTP uses TCP port 25. To determine the SMTP server for a given domain
name, the MX (Mail eXchange) DNS record is typically used, falling back to a
simple A record in the case of no MX (not all MTAs (Mail Transfer Agents)
support fallback). Some current mail transfer agents will also use SRV
records, a more general form of MX, though these are not widely adopted.
SMTP is a "push" protocol that does not allow one to "pull" messages from a
remote server on demand. To do this, a mail client must use POP3 or IMAP.
Another SMTP server can trigger a delivery in SMTP using ETRN.
The design of POP3 and its procedures supports end-users with intermittent
connections (such as dial-up connections), allowing these users to retrieve e-
mail when connected and then to view and manipulate the retrieved messages
without needing to stay connected. Although most clients have an option to
leave mail on server, e-mail clients using POP3 generally connect, retrieve all
messages, store them on the user's PC as new messages, delete them from
the server, and then disconnect. In contrast, the newer, more capable
Internet Message Access Protocol (IMAP) supports both connected and
disconnected modes of operation. E-mail clients using IMAP generally leave
messages on the server until the user explicitly deletes them. This and other
facets of IMAP operation allow multiple clients to access the same mailbox.
Most e-mail clients support either POP3 or IMAP to retrieve messages;
however, fewer Internet Service Providers (ISPs) support IMAP. The
fundamental difference between POP3 and IMAP4 is that POP3 offers
access to a mail drop; the mail exists on the server until it is collected by
the client. Even if the client leaves some or all messages on the server, the
client's message store is considered authoritative. In contrast, IMAP4
offers access to the mail store; the client may store local copies of the
messages, but these are considered to be a temporary cache; the server's
store is authoritative.
Clients with a leave mail on server option generally use the POP3 UIDL
(Unique IDentification Listing) command. Most POP3 commands identify
specific messages by their ordinal number on the mail server. This creates a
problem for a client intending to leave messages on the server, since these
message numbers may change from one connection to the server to another.
For example if a mailbox contains five messages at last connect, and a
different client then deletes message #3, the next connecting user will find
the last two messages' numbers decremented by one. UIDL provides a
mechanism to avoid these numbering issues. The server assigns a string of
characters as a permanent and unique ID for the message. When a POP3-
compatible e-mail client connects to the server, it can use the UIDL
command to get the current mapping from these message IDs to the ordinal
message numbers. The client can then use this mapping to determine which
messages it has yet to download, which saves time when downloading. IMAP
has a similar mechanism, using a 32-bit UID (Unique IDentifier) that is
required to be strictly ascending. The advantage of the numeric UID is with
large mailboxes; a client can request just the UIDs greater than its
previously stored "highest UID". In POP, the client must fetch the entire
UIDL map.
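The client-side bookkeeping that UIDL enables can be sketched as follows; the function name and data shapes are invented for the example, not taken from any mail library.

```python
def messages_to_fetch(uidl_map, seen_uids):
    """uidl_map maps ordinal message numbers (which may change between
    connections) to permanent UIDs, as reported by the UIDL command.
    seen_uids holds the UIDs the client has already downloaded.
    Returns the ordinals of messages still to fetch, in order."""
    return sorted(n for n, uid in uidl_map.items() if uid not in seen_uids)

# Ordinals may have shifted since the last connection (another client
# deleted a message), but the UIDs remain stable, so only the genuinely
# new message is fetched.
print(messages_to_fetch({1: "a", 2: "b", 3: "c"}, {"a", "c"}))  # [2]
```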
Like many other older Internet protocols, POP3 originally supported only an
unencrypted login mechanism. Although plain text transmission of passwords
in POP3 still commonly occurs, POP3 currently supports several
authentication methods to provide varying levels of protection against
illegitimate access to a user's e-mail. One such method, APOP, uses the MD5
hash function in an attempt to avoid replay attacks and disclosure of a
shared secret. Clients implementing APOP include Mozilla, Thunderbird,
Opera, Eudora, KMail and Novell Evolution. POP3 clients can also support
SASL authentication methods via the AUTH extension.
POP3 works over a TCP/IP connection using TCP on network port 110. E-mail
clients can encrypt POP3 traffic using TLS or SSL. A TLS or SSL connection
is negotiated using the STLS command. Some clients and servers, like Google
Gmail, instead use the deprecated alternate-port method, which uses TCP
port 995.
General overview
CORBA “wraps” program code into a bundle containing information about the
capabilities of the code and how to invoke it. The wrapped objects can then
be invoked from other programs or CORBA objects across a network.
This diagram illustrates how the generated code is used within the CORBA
infrastructure:
This picture does not show all of the commonly used possibilities. Normally
the server side has a Portable Object Adapter (POA) that redirects calls
either to the local servants or (to balance the load) to other servers.
Also, both server and client parts often have interceptors, which are
described below.
Apart from remote objects, CORBA and RMI-IIOP define the concept of the OBV
(Object by Value). The code inside the methods of these objects is executed
locally by default. If an OBV has been received from the remote side, the
needed code must either be known a priori to both sides or be dynamically
downloaded from the sender. To make this possible, the record defining the
OBV contains a Code Base, a space-separated list of URLs from which this code
can be downloaded. An OBV can also have remote methods.
OBVs may have fields that are transferred when the OBV is transferred. These
fields can themselves be OBVs, forming lists, trees, or arbitrary graphs.
OBVs have a class hierarchy, including multiple inheritance and abstract
classes.
CORBA Component Model (CCM) is an addition to the family of CORBA
definitions. It was introduced with CORBA 3, and it describes a standard
application framework for CORBA components. It can be seen as a
language-independent extension of Enterprise Java Beans (EJB). It provides an
abstraction of entities that can provide and accept services through well-
defined named interfaces called ports.