
ADL-75 Ecommerce V3

Assignment - A
Question 1. Explain why B2B and B2C initiatives require different IT
infrastructures.

B2B typically has fewer users with a larger transaction volume per user, while B2C typically involves a larger number of individual customers with intermittent transactions and lower dollar values per transaction.
A comprehensive infrastructure is needed to exchange B2B transactions with partners, translate business documents between the many B2B e-commerce standards now in use, and provide reporting and visibility into B2B processes and networks. This infrastructure should include: a global B2B infrastructure that spans every major economic region in the world; on-demand B2B data translation and delivery; and B2B business process management and activity monitoring.
The purpose of every e-business is to utilize technology in a way that
enhances communication and the company's profitability.
Business-to-business (B2B) use of technology would enhance efficiency
within the company's supply chain, while business-to-consumer (B2C), also
known as e-commerce, technologies would facilitate a transaction between a
company and its consumers.

Question 2. What are the three stages of the new technology adoption
curve or S-curve? And what stage do many experts believe e-commerce is
entering?

Part I:
The three stages of the S-curve are:
1) Readiness (initial readiness stage),
2) Intensification (intensified acceptance of the new technology), and
3) Impact (the technology becomes mainstream)

S-Curve Framework
The S-Curve emerged as a mathematical model and was afterwards applied to a variety of fields. It describes, for example, the development of an embryo, the diffusion of viruses, the utility gained by people as the number of consumption choices increases, and so on.
In the innovation management field the S-Curve illustrates the introduction,
growth and maturation of innovations as well as the technological cycles that
most industries experience. In the early stages large amounts of money,
effort and other resources are expended on the new technology but small
performance improvements are observed. Then, as the knowledge about the
technology accumulates, progress becomes more rapid. As soon as major
technical obstacles are overcome and the innovation reaches a certain
adoption level an exponential growth will take place. During this phase
relatively small increments of effort and resources will result in large
performance gains. Finally, as the technology starts to approach its physical limit, pushing performance further becomes increasingly difficult.

Consider the supercomputer industry, where the traditional architecture involved single microprocessors. In the early stages of this technology a huge amount of money was spent in research and development, and it required several years to produce the first commercial prototype. Once the
technology reached a certain level of development the know-how and
expertise behind supercomputers started to spread, boosting dramatically
the speed at which those systems evolved. After some time, however,
microprocessors started to yield lower and lower performance gains for a
given time/effort span, suggesting that the technology was close to its
physical limit (based on the ability to squeeze transistors in the silicon
wafer). In order to solve the problem supercomputer producers adopted a
new architecture composed of many microprocessors working in parallel.
This innovation created a new S-curve, shifted to the right of the original one,
with a higher performance limit (based instead on the capacity to co-ordinate
the work of the single processors).

Usually the S-curve is represented as the variation of performance as a function of time/effort. This is probably the most widely used metric because it
is also the easiest to collect data for. This fact does not imply, however, that
performance is more accurate than the other possible metrics, for instance
the number of inventions, the level of the overall research, or the profitability
associated with the innovation.
One must be careful with the fact that different performance parameters tend to be used over different phases of an innovation; as a result, the outcomes may get mixed together, or one parameter may end up influencing the outcome of another. Civil aircraft provide a good example: in the early stages of the industry fuel burn was a negligible parameter, and all the emphasis was on the speed aircraft could achieve and whether they could get off the ground safely. Over time, as aircraft improved, almost every design could reach the minimum take-off speed, which made fuel burn the main parameter for assessing the performance of civil aircraft.

Overall, the S-Curve is a robust yet flexible framework for analyzing the introduction, growth and maturation of innovations and for understanding technological cycles. The model also has plenty of empirical evidence behind it, having been studied extensively in many industries, including semiconductors, telecommunications, hard drives, photocopiers, jet engines and so on.
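For concreteness, the S-curve is often formalized with a logistic function. The short sketch below is only an illustration of that general idea (the parameter values are invented, not taken from any study cited above): performance grows slowly at first, accelerates around a midpoint, and then flattens out as it approaches a physical limit.

```python
import math

def logistic(effort, limit=100.0, midpoint=50.0, steepness=0.1):
    """Illustrative logistic S-curve: performance approaches 'limit' as effort grows.

    'limit' models the physical performance ceiling, 'midpoint' the effort level at
    which improvement is fastest, and 'steepness' how sharp the transition is from
    slow to rapid progress. All parameter values here are made up for illustration.
    """
    return limit / (1.0 + math.exp(-steepness * (effort - midpoint)))

for effort in (0, 25, 50, 75, 100):
    print(f"effort={effort:3d}  performance={logistic(effort):6.2f}")
```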
Currently, many experts believe that e-commerce is entering the Intensification stage.

Question 3. What do you understand by a digital signature? Explain its application and verification diagrammatically.
A digital signature or digital signature scheme is a mathematical scheme for
demonstrating the authenticity of a digital message or document. A valid digital
signature gives a recipient reason to believe that the message was created by a
known sender, and that it was not altered in transit. Digital signatures are
commonly used for software distribution, financial transactions, and in other cases
where it is important to detect forgery or tampering. Digital signatures are often
used to implement electronic signatures, a broader term that refers to any
electronic data that carries the intent of a signature, but not all electronic signatures
use digital signatures.
In some countries, including the United States, India, and members of the European
Union, electronic signatures have legal significance. However, laws concerning
electronic signatures do not always make clear whether they are digital
cryptographic signatures in the sense used here, leaving the legal definition, and so
their importance, somewhat confused. Digital signatures employ a type of
asymmetric cryptography. For messages sent through a non-secure channel, a
properly implemented digital signature gives the receiver reason to believe the
message was sent by the claimed sender. Digital signatures are equivalent to
traditional handwritten signatures in many respects; properly implemented digital
signatures are more difficult to forge than the handwritten type. Digital signature
schemes in the sense used here are cryptographically based, and must be implemented properly to be effective. Digital signatures can also provide non-repudiation, meaning that the signer cannot successfully claim they did not sign a message while also claiming their private key remains secret; further, some non-repudiation schemes offer a time stamp for the digital signature, so that even if the private key is exposed, the signature remains valid. Digitally signed messages may be anything representable as a bit string: examples include electronic mail, contracts, or a message sent via some other cryptographic protocol.

A digital signature
(not to be confused with a digital certificate) is an electronic signature that can be
used to authenticate the identity of the sender of a message or the signer of a
document, and possibly to ensure that the original content of the message or
document that has been sent is unchanged. Digital signatures are easily
transportable, cannot be imitated by someone else, and can be automatically timestamped. The ability to ensure that the original signed message arrived means that
the sender cannot easily repudiate it later. A digital signature can be used with any
kind of message, whether it is encrypted or not, simply so that the receiver can be
sure of the sender's identity and that the message arrived intact. A digital
certificate contains the digital signature of the certificate-issuing authority so that
anyone can verify that the certificate is real.

How It Works

Assume you were going to send the draft of a contract to your lawyer in another
town. You want to give your lawyer the assurance that it was unchanged from what
you sent and that it is really from you.
1. You copy-and-paste the contract (it's a short one!) into an e-mail note.
2. Using special software, you obtain a message hash (mathematical summary) of
the contract.

3. You then use a private key that you have previously obtained from a public-private key authority to encrypt the hash.
4. The encrypted hash becomes your digital signature of the message. (Note that it
will be different each time you send a message.)
At the other end, your lawyer receives the message.
1. To make sure it's intact and from you, your lawyer makes a hash of the received
message.
2. Your lawyer then uses your public key to decrypt the message hash or summary.
3. If the hashes match, the received message is valid. (A minimal sketch of these steps follows.)
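The following is a minimal sketch of the signing and verification steps described above. It assumes the third-party Python "cryptography" package and a freshly generated RSA key pair; in practice the keys would come from a certificate authority, and the message content here is invented for illustration.

```python
# Requires the third-party 'cryptography' package (pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Sender side: generate a key pair (normally obtained from a key/certificate authority).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

contract = b"Draft contract: ABC Ltd agrees to supply 1,000 handsets."  # invented message

# Steps 2-4: hash the message and encrypt the hash with the private key.
signature = private_key.sign(
    contract,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Receiver side: re-hash the received message and check it against the signature
# using the sender's public key.
try:
    public_key.verify(
        signature,
        contract,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    print("Signature valid: message is intact and from the claimed sender.")
except InvalidSignature:
    print("Signature invalid: message altered or sender not authentic.")
```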

Question 4. What are the various types of viruses? What can a virus do to the computer?

A computer virus can get into your computer and try to intercept information that is sent from and received by your computer. It can also try to steal your personal information, such as passwords and PINs. If you suspect you have a virus, avoid using your credit card on the Internet and download an anti-virus program as soon as possible.
A virus can do a multitude of things, all harmful. Many viruses, which disguise themselves as tracking cookies, are meant to allow access to personal information that you give out over the Internet. If you are shopping when this happens, it can be used for identity theft. Viruses can also slow down your computer significantly, erase information, destroy vital data, or even shut down your computer, though such extreme cases are rather rare. Some viruses encrypt themselves in a different form every time they replicate, making them very difficult to find with anti-virus software that relies on a signature string to locate them. A computer virus is a computer program that can copy itself and infect a computer. The term "virus" is also commonly but erroneously used to refer to other types of malware, including but not limited to adware and spyware programs that do not have the reproductive ability. A true virus can spread from one computer to another (in some form of executable code) when its host is taken to the target computer; for instance, because a user sent it over a network or the Internet, or carried it on a removable medium such as a floppy disk, CD, DVD, or USB drive. Viruses can increase their chances of spreading to other computers by infecting files on a network file system or a file system that is accessed by another computer.
As stated above, the term "computer virus" is sometimes used as a catch-all phrase
to include all types of malware, even those that do not have the reproductive ability.
Malware includes computer viruses, computer worms, Trojan horses, most rootkits,
spyware, dishonest adware and other malicious and unwanted software, including
true viruses. Viruses are sometimes confused with worms and Trojan horses, which
are technically different. A worm can exploit security vulnerabilities to spread itself
automatically to other computers through networks, while a Trojan horse is a
program that appears harmless but hides malicious functions. Worms and Trojan
horses, like viruses, may harm a computer system's data or performance. Some
viruses and other malware have symptoms noticeable to the computer user, but
many are surreptitious or simply do nothing to call attention to themselves. Some
viruses do nothing beyond reproducing themselves.
TYPES OF VIRUSES

Introduction : There are thousands of viruses, and new ones are discovered every
day. It is difficult to come up with a generic explanation of how viruses work, since
they all have variations in the way they infect or the way they spread. So instead,
we'll take some broad categories that are commonly used to describe various types
of viruses.
File Viruses (Parasitic Viruses) : File viruses are pieces of code that attach
themselves to executable files, driver files or compressed files, and are activated
when the host program is run. After activation, the virus may spread itself by
attaching itself to other programs in the system, and also carry out the malevolent
activity it was programmed for. Most file viruses spread by loading themselves in
system memory and looking for any other programs located on the drive. If it finds
one, it modifies the program's code so that it contains and activates the virus the
next time it's run. It keeps doing this over and over until it spreads across the
system, and possibly to other systems that the infected program may be shared
with.
Besides spreading themselves, these viruses also carry some type of destructive
constituent that can be activated immediately or by a particular 'trigger'. The
trigger could be a specific date, or the number of times the virus has been
replicated, or anything equally trivial. Some examples of file viruses are Randex,
Meve and MrKlunky.
Boot Sector Viruses : A boot sector virus affects the boot sector of a hard disk,
which is a very crucial part. The boot sector is where all information about the drive
is stored, along with a program that makes it possible for the operating system to
boot up. By inserting its code into the boot sector, a virus guarantees that it loads
into memory during every
boot sequence. A boot virus does not affect files; instead, it affects the disks that
contain them. Perhaps this is the reason for their downfall. During the days when
programs were carried around on floppies, the boot sector viruses used to spread
like wildfire. However, with the CD-ROM revolution, it became impossible to infect
pre-written data on a CD, which eventually stopped such viruses from spreading.
Though boot viruses still exist, they are rare compared to modern malicious software. Another reason why they're not so prevalent is that operating systems today protect the boot sector, which makes it difficult for them to thrive. Examples of boot viruses are Polyboot.B and AntiEXE.
Multipartite Viruses : Multipartite viruses are a combination of boot sector viruses
and file viruses. These viruses come in through infected media and reside in
memory. They then move on to the boot sector of the hard drive. From there, the
virus infects executable files on the hard drive and spreads across the system.
There aren't too many multipartite viruses in existence today, but in their heyday,

they accounted for some major problems due to their capacity to combine different
infection techniques. A significantly famous multipartite virus is Ywinz.
Macro Viruses : Macro viruses infect files that are created using certain
applications or programs that contain macros. These include Microsoft Office
documents such as Word documents, Excel spreadsheets, PowerPoint presentations,
Access databases, and other similar application files such as Corel Draw, AmiPro,
etc. Since macro viruses are written in the language of the application, and not in
that of the operating system, they are known to be platform-independent they can
spread between Windows, Mac, and any other system, so long as they're running
the required application. With theever-increasing capabilities of macro languages in
applications, and the possibility of infections spreading over networks,these viruses
are major threats.The first macro virus was written for Microsoft Word and was
discovered back in August 1995. Today, there are thousands of macro viruses in
existence-some examples are Relax, Melissa.A and Bablas.
Network Viruses : This kind of virus is proficient at quickly spreading across a Local Area Network (LAN) or even over the Internet. Usually, it propagates through shared resources, such as shared drives and folders. Once it infects a new system, it searches the network for other vulnerable systems to target. Once a new vulnerable system is found, the network virus infects it as well, and thus spreads over the network. Some of the most notorious network viruses are Nimda and SQLSlammer.
E-mail Viruses : An e-mail virus could be a form of a macro virus that spreads itself
to all the contacts located in the host's email address book. If any of the e-mail
recipients open the attachment of the infected mail, it spreads to the new host's
address book contacts, and then proceeds to send itself to all those contacts as
well. These days, e-mail viruses can infect hosts even if the infected e-mail is
previewed in a mail client. One of the most common and destructive e-mail viruses
is the ILOVEYOU virus.
A computer virus is a kind of malicious software written intentionally to enter a computer without the user's permission or knowledge, with an ability to replicate itself, thus continuing to spread. Some viruses do little but replicate; others can cause severe harm or adversely affect the programs and performance of the system. A virus should never be assumed harmless and left on a system. The most common types of viruses are those described above.

Question 5. What is the purpose of the domain name system (DNS)?


The Domain Name System (DNS) is a hierarchical naming system built on a
distributed database for computers, services, or any resource connected to the
Internet or a private network. Most importantly, it translates domain names
meaningful to humans into the numerical identifiers associated with networking
equipment for the purpose of locating and addressing these devices worldwide.
An often-used analogy to explain the Domain Name System is that it serves as the
phone book for the Internet by translating human-friendly computer hostnames into
IP addresses. For example, the domain name www.example.com translates to the
addresses 192.0.32.10 (IPv4) and 2620:0:2d0:200::10 (IPv6).
The Domain Name System makes it possible to assign domain names to groups of
Internet resources and users in a meaningful way, independent of each entity's
physical location. Because of this, World Wide Web (WWW) hyperlinks and Internet
contact information can remain consistent and constant even if the current Internet
routing arrangements change or the participant uses a mobile device. Internet
domain names are easier to remember than IP addresses such as
208.77.188.166 (IPv4) or 2001:db8:1f70::999:de8:7648:6e8 (IPv6). Users take
advantage of this when they recite meaningful Uniform Resource Locators (URLs)
and e-mail addresses without having to know how the computer actually locates
them. The Domain Name System distributes the responsibility of assigning domain
names and mapping those names to IP addresses by designating authoritative
name servers for each domain. Authoritative name servers are assigned to be
responsible for their particular domains, and in turn can assign other authoritative
name servers for their sub-domains.
This mechanism has made the DNS distributed and fault tolerant and has helped
avoid the need for a single central register to be continually consulted and
updated. In general, the Domain Name System also stores other types of
information, such as the list of mail servers that accept email for a given Internet
domain. By providing a worldwide, distributed keyword-based redirection service,
the Domain Name System is an essential component of the functionality of the
Internet.
A DNS server is where the computer goes to translate a web address you type in into a series of numbers, so it can connect to that address. For example, you type www.geekstogo.com into Internet Explorer (or any other web browser; it works in exactly the same way). The browser queries a DNS server that you have specified or that it has been given, which converts geekstogo.com into a numeric address, in this case 72.232.135.12, and the browser then connects there. When you configure several DNS servers, they are consulted in order when looking up IP addresses: the computer first asks the primary server (for example, 208.67.222.222) for the proper number, and if that server does not respond (because it is overloaded, offline, or otherwise not working), it asks the next server in the list (for example, 208.67.220.220). Only if every configured server fails does the browser report that the page cannot be found. You can add as many DNS servers as you like; the computer will work its way down the list trying to find a requested site's address before timing out. A common scenario is that a provider's DNS servers become overloaded by its user base, so you can connect to the Internet but cannot browse anywhere. The Domain Name System, or DNS, makes browsing the Web simpler and more intuitive. It allows the tens of millions of computers connected to the Internet to find one another and communicate efficiently. DNS also allows individual nations to identify and optimize their websites for local populations, according to the Internet Corporation for Assigned Names and Numbers.
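As a small illustration of the lookup described above, the sketch below asks the operating system's configured DNS servers to resolve a hostname into its IP addresses using Python's standard socket module (the hostname is just an example).

```python
import socket

hostname = "www.example.com"  # illustrative hostname from the text
# getaddrinfo() consults the system's configured DNS servers, in order,
# and returns every address record (IPv4 and IPv6) found for the name.
for family, _type, _proto, _canonname, sockaddr in socket.getaddrinfo(
        hostname, 80, proto=socket.IPPROTO_TCP):
    label = "IPv6" if family == socket.AF_INET6 else "IPv4"
    print(f"{hostname} -> {sockaddr[0]} ({label})")
```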
Hierarchies : Domain names are grouped into a series of top-level domains or TLDs
such as .com, .net, .org and .gov. In addition, every country has its own TLD: for
example, the TLD for the United States is ".us"; ".fr" represents France, ".in" denotes
India, and so on. The TLD appears at the end of the full domain name.
The second-level domain contains the name of the website. For example, in "ehow.com", the second-level domain name is "ehow". The third-level domain, which appears at the beginning of some domain names, was used in the early days of the World Wide Web to signify that the domain was either a website (represented by "www") or a file transfer site ("ftp").
The third-level domain is now used to signify any sub-domain, which is often just a sub-section of a particular website.
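A minimal sketch of this naming hierarchy: the helper below simply splits a hostname into its sub-domain, second-level and top-level labels, using the "www.ehow.com" example from the text (the function name is ours, not a standard API).

```python
def split_domain(hostname):
    """Split a hostname into (sub-domain labels, second-level domain, top-level domain)."""
    labels = hostname.lower().rstrip(".").split(".")
    return labels[:-2], labels[-2], labels[-1]

sub, second, tld = split_domain("www.ehow.com")
print(sub, second, tld)   # ['www'] ehow com
```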
Convenience : Without DNS, people wishing to access a particular online resource would have to know its IP address or would be required to look it up. An IP address is a cumbersome series of numbers separated by dots. The DNS system lets people use convenient domain names that are easy to remember, and automatically translates them into the corresponding numeric addresses.
Optimized Service : The top-level domain often indicates the nation of origin through a two-character abbreviation. The ability to recognize websites by country allows national registry operators to apply the best mix of linguistic and cultural policies for those domains, thereby optimizing websites for convenient access by users in each nation.

Assignment - B
Question 1. What is one of the benefits of layering to a complex system?
Layering is the construction of multiple applications on top of a common IT
infrastructure. One of the benefits is that layers are functionally independent, which
allows system developers to specialize in their application and make improvements
without affecting the other applications or the underlying infrastructure.
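As a rough illustration of this functional independence (the layer names, headers and field sizes below are toy assumptions, not any real protocol stack), each layer in the sketch only adds or removes its own header, so one layer can be reworked without touching the code of the others.

```python
# Toy three-layer stack (names and headers invented for illustration): each layer
# only knows how to add or remove its own header, so one layer can be replaced
# without touching the others.
def application_layer(message: str) -> bytes:
    return message.encode("utf-8")

def transport_layer(payload: bytes) -> bytes:
    header = len(payload).to_bytes(2, "big")          # toy header: payload length
    return header + payload

def network_layer(segment: bytes, address: str) -> bytes:
    return address.encode("ascii").ljust(16, b"\x00") + segment  # toy header: fixed-width address

# Sending side wraps the message layer by layer.
packet = network_layer(transport_layer(application_layer("ORDER 42")), "10.0.0.5")

# Receiving side unwraps in reverse order, one layer at a time.
address = packet[:16].rstrip(b"\x00").decode("ascii")
segment = packet[16:]
length, payload = int.from_bytes(segment[:2], "big"), segment[2:]
assert length == len(payload)
print(address, payload.decode("utf-8"))               # 10.0.0.5 ORDER 42
```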

Interoperability - Layering promotes greater interoperability between devices from different manufacturers, and even between different generations of the same type of device from the same manufacturer.

Greater Compatibility - One of the greatest benefits of using a hierarchical or layered approach to networking and communications protocols is the greater compatibility between devices, systems and networks that it delivers.
Better Flexibility - Layering, and the greater compatibility it delivers, goes a long way toward improving flexibility, particularly in terms of the options and choices that network engineers and administrators crave.
Flexibility and Peace of Mind - Layering also brings peace of mind: if the worst happens and a key core network device suddenly fails without warning, a replacement or temporary stand-by can be put to work with a high degree of confidence that it will do the job. Even if it cannot do the job at the same speed, it will still do it until a better, more permanent solution can be implemented. This is far more acceptable than a lengthy outage of network services or unavailability of assets; 80% of normal capacity is much more pleasing than 0%.
Increased Life Expectancy - Product working life expectancies increase because backwards compatibility is made considerably easier. Devices from different technology generations can co-exist, so older units do not have to be discarded as soon as newer technologies are adopted.
Scalability - Experience has shown that a layered or hierarchical approach to networking protocol design and implementation scales better than the horizontal approach.
Mobility - Greater mobility is more readily delivered when layered and segmented strategies are adopted in the architectural design.
Value Added Features - It is far easier to incorporate and implement value added
features into products or services when the entire system has been built on the use
of a layered philosophy.
Cost Effective Quality - The layered approach has proven time and time again to be the most economical way of developing and implementing any system, be it small, simple, large or complex. This ease of development and implementation translates into greater efficiency and effectiveness, which in turn translates into greater economic rationalization and cheaper products without compromising quality.
Modularity - Plug-ins and add-ons are common, classical examples of the benefits to be derived from using a hierarchical (layered) approach to design.

Innate Plasticity - Layering allows for innate plasticity to be built into devices at
all levels and stages from the get-go, to implementation, on through optimization
and upgrade cycles throughout a component's entire useful working lifecycle
thereafter.
The Graduated, Blended Approach to Migration - Compatibility enables technologies
to co-exist side-by-side which results in quicker uptake of newer technologies as the
older asset investments can still continue to be productive. Thus migration to newer
technologies and standards can be undertaken in stages or phases over a period of
time. This is what is known as the graduated blended approach; which is the
opposite of the sudden adoption approach.
Standardization and Certification - The layered approach to networking protocol
specifications facilitates a more streamlined and simplified standardization and
certification process; particularly from an "industry" point of view. This is due to the
clearer and more distinct definition and demarcation of what functions occur at
each layer when the layered approach is taken.
Task Segmentation - Breaking a large complex system into smaller more
manageable subcomponents allows for easier development and implementation of
new technologies; as well as facilitating human comprehension of what may be very
diverse and complex systems.
Portability - Layered networking protocols are much easier to port from one
system or architecture to another.
Compartmentalization of Functionality - The compartmentalization or layering
of processes, procedures and communications functions gives developers the
freedom to concentrate on a specific layer or specific functions within that layer's
realm of responsibility without the need for great concern or modification of any
other layer.
Changes within one layer can be considered to be in self-contained isolation;
functionally speaking, from the other layers. Modifications at one layer will not
break or compound the other layers.
Side-Kicks - The development of "helper" protocols, or side-kicks, is much easier when a layered approach to networking protocols is embraced. This is especially true for helper protocols developed more or less as after-thoughts, once the need arises.
Reduced Debugging Time - The time spent debugging can be greatly reduced, because faults can be isolated to a single layer and debugging is therefore easier and faster than with an unlayered design.
Promotion of Multi-Vendor Development - Layering allows for a more precise
identification and delineation of task, process and methodology. This permits a
clearer definition of what needs to be done, where it needs to be done, when it

needs to be done, how it needs to be done and what or who will do it. It is these
factors that promote multi-vendor development through the standardization of
networking components at both the hardware and software levels because of the
clear and precise delineation of responsibilities that layering brings to the
developers' table.
Easier Binding Implementation - The principle of binding is far easier to implement in layered, tiered, and hierarchical systems. Humans also tend to understand this form more easily than a flat model.
Enhanced Troubleshooting and Fault Identification - Troubleshooting and fault
identification are made considerably easier thus resolution times are greatly
reduced. Layering allows for examination in isolation of subcomponents as well as
the whole.
Enhanced Communications Flow and Support - Adopting the layered approach
allows for improved flow and support for communication between diverse systems,
networks, hardware, software, and protocols.
Support for Disparate Hosts - Communications between disparate hosts are supported more or less seamlessly, so Unix, PC, Mac and Linux systems, to name but a few, can freely interchange data.
Reduction of the Domino Effect - Another very important advantage of a layered
protocol system is that it helps to prevent changes in one layer from affecting other
layers. This helps to expedite technology development.
Rapid Application Development (RAD) - Work loads can be evenly distributed
which means that multiple activities can be conducted in parallel thereby reducing
the time taken to develop, debug, optimize and package new technologies ready for
production implementation.
Question 2. What is the difference between a web site and a portal?
Portal vs Site
A portal is generally a vehicle by which to gain access to a multitude of 'services'. A
web site is a destination in itself.
As such, the term website refers to a unique location on the Internet that can be accessed through a URL. By that definition a web portal is in fact also a website.
However there is a distinction between the two terms based on the subject and
content of the website.
A website is also a web portal if it transmits information from several independent sources that can be, but are not necessarily, connected in subject, thus offering a public service function for the visitor that is not restricted to presenting the view(s) of one author.

The portal and the website can be differentiated as follows:


Authentication:
Portal: It provides a log-in facility and gives you information based on who you are.
e.g. mail.yahoo.com, gmail.com, rediffmail.com
Website: No log-in.
e.g. www.yahoo.com
Personalization:
Portal: Limited, focused content. Eliminates the need to visit many different sites.
e.g. You type in your user name and password and see your yahoo mail only.
Website: Extensive, unfocused content written to accommodate anonymous users' needs.
Customization :
Portal: You select and organize the materials you want to access, and the portal is organized around those materials.
Website: Searchable, but not customizable. All content is there for every visitor.
e.g. you can navigate to Yahoo Mail, Yahoo Shopping, GeoCities, or Yahoo Groups. If you wish to use any of these services you must either authenticate yourself and see things personalized to you, or simply visit sections that are open to everyone, like Yahoo News, where if you are not signed in the default identity is guest.
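A toy sketch of the distinction above (all names, credentials and content are invented): the website function returns the same page to every visitor, while the portal function authenticates first and then returns content personalized to the logged-in user.

```python
# Illustrative only: a plain website serves the same page to everyone,
# while a portal first authenticates and then personalizes the response.
PUBLIC_PAGE = "Welcome! Here is today's news, the same for every visitor."
USER_CONTENT = {"alice": "Alice's inbox: 3 new messages", "bob": "Bob's inbox: empty"}
PASSWORDS = {"alice": "secret1", "bob": "secret2"}   # toy credential store

def website(_visitor=None):
    return PUBLIC_PAGE                               # no log-in, no personalization

def portal(username, password):
    if PASSWORDS.get(username) != password:          # authentication step
        return "Access denied: please log in."
    return USER_CONTENT[username]                    # personalized content

print(website())
print(portal("alice", "secret1"))
print(portal("bob", "wrong"))
```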
Question 3. What is the most valuable function of the proxy server?
A proxy server has a large variety of potential purposes, including:
To keep machines behind it anonymous (mainly for security).
To speed up access to resources (using caching); web proxies are commonly used to cache web pages from a web server (see the sketch after this list).
To apply access policy to network services or content, e.g. to block undesired sites.
To log / audit usage, i.e. to provide company employee Internet usage reporting.
To bypass security / parental controls.
To scan transmitted content for malware before delivery.
To scan outbound content, e.g. for data leak protection.
To circumvent regional restrictions.
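As a rough sketch of the caching role mentioned in the list (the class and URL below are illustrative, not a real proxy implementation), the first request for a page is fetched from the origin server and repeat requests are answered from an in-memory cache.

```python
import urllib.request

class CachingProxy:
    """Toy illustration of a proxy's caching role: the first request for a URL is
    fetched from the origin server, repeat requests are served from memory."""

    def __init__(self):
        self._cache = {}

    def get(self, url):
        if url in self._cache:
            return self._cache[url]            # cache hit: no network round trip
        with urllib.request.urlopen(url, timeout=10) as response:
            body = response.read()
        self._cache[url] = body                # store for later requests
        return body

proxy = CachingProxy()
page = proxy.get("http://www.example.com/")        # fetched from the origin server
page_again = proxy.get("http://www.example.com/")  # served from the cache
print(len(page), page is page_again)
```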

Case Study
Information Management in E-Commerce
ABC Ltd is a manufacturer of mobile handsets. It has its manufacturing plant in Bangalore and its offices and retail outlets in different cities in India and abroad. The organization wants to have information systems connecting all the above facilities and also providing access to its suppliers as well as customers.
Questions:
(a) Discuss various issues in developing information systems and fulfilling
information needs at different levels in the organization.
Ans:
Information Systems (IS) is an academic/professional discipline bridging the
business field and the well-defined computer science field that is evolving toward a
new scientific area of study. An information systems discipline therefore is
supported by the theoretical foundations of information and computations such that
learned scholars have unique opportunities to explore the academics of various
business models as well as related algorithmic processes within a computer science
discipline. Typically, information systems or the more common legacy information
systems include people, procedures, data, software, and hardware (by degree) that
are used to gather and analyze digital information. Specifically computer-based
information systems are complementary networks of hardware/software that people
and organizations use to collect, filter, process, create, & distribute data
(computing). Computer Information Systems (CIS) is often a track within the computer science field studying computers and algorithmic processes, including their principles, their software and hardware designs, their applications, and their impact on society.
Yes, there are many issues that would be faced while developing and implementing an information system. Some of the key points are:

Integrating the system throughout the organization while still serving specific needs
Training managers and employees
Managing the costs of information
Managing user demands on the system

Among the most important are low productivity, a large number of failures, and an
inadequate alignment of ISs with business needs. The first problem, low
productivity, has been recognized in the term software crisis, as indicated by the development backlog and maintenance problems. Simply put, demands for building
new or improved ISs have increased faster than our ability to develop them. Some
reasons are: the increasing cost of software development (especially when
compared to the decreasing cost of hardware), the limited supply of personnel and
funding, and only moderate productivity improvements.
Second, IS development (ISD) efforts have resulted in a large number of outright failures. These failures are sometimes due to economic mismatches, such as budget and
schedule overruns, but surprisingly often due to poor product quality and
insufficient user satisfaction. For example, one survey (Gladden 1982) estimates
that 75% of IS developments undertaken are never completed, or the resulting
system is never used. According to the Standish Group (1995) only 16% of all
projects are delivered on time and within their budget. This study, conducted as a
survey among 365 information technology managers, also reveals that 31% of ISD
projects were canceled prior to completion and the majority, 53%, are completed
but over budget and offer less functionality than originally specified. Unfortunately
this area has not been studied in enough detail to find general reasons for failures.
As a result, we must mostly rely on cases and reports on ISD failures.
Third, from the business point of view, there has been growing criticism of the poor
alignment of ISs and business needs. While an increasing part of organizations' resources is spent on recording, searching, refining and analyzing information, the
link between ISs and organizational performance and strategies has been shown to
be dubious. For example, most managers and users are still facing situations where
they cannot get information they need to run their units. Hence, ISD is continually
challenged by the dynamic nature of business together with the ways that business
activities are organized and supported by ISs.
All the above problems are further aggravated by the increasing complexity and
size of software products. Each generation has brought new application areas as
well as extended functionality leading to larger systems, which are harder to design,
construct and maintain. Moreover, because of a large number of new technical
options and innovations available - like client/server architectures, object-oriented
approaches, and electronic commerce - novel technical aspects are transforming
the practice of ISD. All in all, it seems to be commonly recognized that ISD is not satisfying organizations' needs, whether they are technical, economic, or behavioral. Consequently, companies world-wide are facing challenges in developing new strategies for ISD as well as in finding supporting tools and ways of working.

(b) Explain different security threats in the context of e-commerce for the above company.

For ABC Ltd, the vulnerability of a system exists at the entry and exit points within the system, which can be classified as below:

Shopper
Shopper's computer
Network connection between shopper and Web site's server
Web site's server
Software vendor

Points the attacker can target

Attacks
This section describes potential security attack methods that ABC Ltd could face from an attacker or hacker.

Tricking
Some of the easiest and most profitable attacks are based on tricking the shopper,
also known as social engineering techniques. These attacks involve surveillance of
the shopper's behavior, gathering information to use against the shopper. For
example, a mother's maiden name is a common challenge question used by
numerous sites. If one of these sites is tricked into giving away a password once the
challenge question is provided, then not only has this site been compromised, but it
is also likely that the shopper used the same logon ID and password on other sites.
A common scenario is that the attacker calls the shopper, pretending to be a
representative from a site visited, and extracts information. The attacker then calls
a customer service representative at the site, posing as the shopper and providing

personal information. The attacker then asks for the password to be reset to a
specific value.

Another common form of social engineering attack is the phishing scheme. Typo
pirates play on the names of famous sites to collect authentication and registration
information. For example, http://www.ibm.com/shop is registered by the attacker as
www.ibn.com/shop. A shopper mistypes and enters the illegitimate site and provides
confidential information. Alternatively, the attacker sends emails spoofed to look
like they came from legitimate sites. The link inside the email maps to a rogue site
that collects the information.

Snooping
Millions of computers are added to the Internet every month. Most users' knowledge
of security vulnerabilities of their systems is vague at best. Additionally, software
and hardware vendors, in their quest to ensure that their products are easy to
install, will ship products with security features disabled. In most cases, enabling
security features requires a non-technical user to read manuals written for the
technologist. The confused user does not attempt to enable the security features.
This creates a treasure trove for attackers.
A popular technique for gaining entry into the shopper's system is to use a tool,
such as SATAN, to perform port scans on a computer that detect entry points into
the machine. Based on the opened ports found, the attacker can use various
techniques to gain entry into the user's system. Upon entry, they scan your file
system for personal information, such as passwords.
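To make the idea of a port scan concrete, here is a minimal sketch using only Python's standard socket module; it checks a handful of well-known TCP ports on the local machine (the port list is arbitrary, and scanning should only ever be done on systems you are authorized to test).

```python
import socket

def scan_ports(host="127.0.0.1", ports=(22, 80, 443, 3306)):
    """Report which of the given TCP ports accept a connection on 'host'.

    A connect_ex() result of 0 means the port accepted the connection (open).
    Only scan machines you own or are authorized to test.
    """
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(0.5)
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

print(scan_ports())
```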
While software and hardware security solutions available protect the public's
systems, they are not silver bullets. A user that purchases firewall software to
protect his computer may find there are conflicts with other software on his system.
To resolve the conflict, the user disables enough capabilities to render the firewall
software useless.

Sniffing the network
In this scheme, the attacker monitors the data between the shopper's computer and
the server. He collects data about the shopper or steals personal information, such
as credit card numbers.

There are points in the network where this attack is more practical than others. If
the attacker sits in the middle of the network, then within the scope of the Internet,
this attack becomes impractical. A request from the client to the server computer is
broken up into small pieces known as packets as it leaves the client's computer and
is reconstructed at the server. The packets of a request are sent through different routes, so the attacker cannot access all the packets of a request and cannot decipher
what message was sent.
Take the example of a shopper in Toronto purchasing goods from a store in Los
Angeles. Some packets for a request are routed through New York, where others are
routed through Chicago. A more practical location for this attack is near the
shopper's computer or the server. Wireless hubs make attacks on the shopper's
computer network the better choice because most wireless hubs are shipped with
security features disabled. This allows an attacker to easily scan unencrypted traffic
from the user's computer.

Attacker sniffing the network between client and server

Guessing passwords
Another common attack is to guess a user's password. This style of attack is manual
or automated. Manual attacks are laborious, and only successful if the attacker
knows something about the shopper, for example, that the shopper uses their child's name as the password. Automated attacks have a higher likelihood of success, because the probability of guessing a user ID/password combination becomes more significant as the number of tries increases. Tools exist that test every word in the dictionary as a user ID/password combination, or that attack popular user ID/password combinations. The attacker can also automate the attack to go against multiple sites at one time.
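The toy sketch below shows why automated dictionary attacks scale so well: every candidate word can be hashed and compared in a fraction of a second (the stored hash and word list here are invented, and real systems defend against this with salting, slow hashes, rate limiting and lockouts).

```python
import hashlib

# Toy illustration of an automated dictionary attack: each candidate word is
# hashed and compared against a stored password hash.
stored_hash = hashlib.sha256(b"sunshine").hexdigest()   # the "unknown" password (invented)
dictionary = ["password", "123456", "letmein", "sunshine", "dragon"]

for candidate in dictionary:
    if hashlib.sha256(candidate.encode()).hexdigest() == stored_hash:
        print("Password recovered by dictionary attack:", candidate)
        break
else:
    print("No dictionary word matched.")
```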

Using denial of service attacks
The denial of service attack is one of the best examples of impacting site availability. It involves getting the server to perform a large number of mundane tasks, exceeding its capacity to cope with any other task. For example, if everyone in a large meeting asked you your name all at once, and asked again every time you answered, you would have experienced a personal denial of service attack. To ask a computer its name, you use ping, and ping can be used to build
an effective DoS attack. The smart hacker gets the server to use more
computational resources in processing the request than the adversary does in
generating the request.
Distributed DoS is a type of attack used on popular sites, such as Yahoo!. In this type of attack, the hacker infects computers on the Internet via a virus or other means. The infected computers become slaves to the hacker, who commands them at a predetermined time to bombard the target server with useless but resource-intensive requests. This attack not only causes the target site to experience problems, but also affects the entire Internet, as a huge number of packets is routed via many different paths to the target.

Denial of service attacks

Using known server bugs
The attacker analyzes the site to find what types of software are used on the site.
He then proceeds to find what patches were issued for the software. Additionally, he
searches on how to exploit a system without the patch. He proceeds to try each of
the exploits. The sophisticated attacker finds a weakness in a similar type of
software, and tries to use that to exploit the system. This is a simple, but effective
attack. With millions of servers online, what is the probability that a system
administrator forgot to apply a patch?

Using server root exploits
Root exploits refer to techniques that gain super-user access to the server. This is the most coveted type of exploit because the possibilities are limitless. When you attack a shopper or his computer, you can only affect one individual. With a root exploit, you gain control of the merchant's and all the shoppers' information on the
site. There are two main types of root exploits: buffer overflow attacks and
executing scripts against a server.
In a buffer overflow attack, the hacker takes advantage of a specific type of computer program bug that involves the allocation of storage during program execution. The technique involves tricking the server into executing code written by the attacker.
The other technique uses knowledge of scripts that are executed by the server. This
is easily and freely found in the programming guides for the server. The attacker
tries to construct scripts in the URL of his browser to retrieve information from the server. This technique is frequently used when the attacker is trying to retrieve data
from the server's database.

Assignment - C
1. The primary focus of most B2C applications is generating ____.
(a). Revenue
(b). Product
(c). Service
(d). Web Site
2. Which is most significant for web based advertisers?
(a). Impressions
(b). Page Views
(c). Click Throughs
(d). Hits

3. Digital products are particularly appealing for a company's bottom line because of
(a). The freedom from the law of diminishing returns
(b). The integration of the value chain.
(c). The increase in brand recognition.
(d). The changes they bring to the industry.
4. The differences between B2B and B2C exchanges include
(a) Size of customer set
(b) Transaction volume
(c) Form of payment
(d) Level of customization on products/services
(A). a and b
(B). a, b, and c
(C). b and c
(D). All of the above
5. What is the most significant part of e-commerce:
(a). B2B
(b). B2E
(c). B2C
(d). C2C
6. Security-and-risk services include:
(a). Firewalls & policies for remote access
(b). Encryption and use of passwords
(c). Disaster planning and recovery
(d). All of the above
(e). a & b only

7. Business Plans are important when trying to find capital to start up your new
business. Important elements of a business plan include:
(a). Sales And Marketing
(b). Human resources handbook
(c). Business description
(d). a and c
8. Based on the study, in the supply side initiatives, which of the following
clusters was the only one found to be critical enterprise-wide?
(a). IT management
(b). Communications
(c). Data management
(d). IT-architecture-and-standards

9. E-commerce increases competition by erasing geographical boundaries, empowering customers and suppliers, commoditizing new products, etc. How do
companies usually solve this problem?
(a). By competing on price
(b). By selling only through traditional channels.
(c). By lowering costs
(d). By creating attractive websites
10. On which form of e-commerce does Dell Computer Corporation rely in
conducting its business?
(a). B2E
(b). B2C
(c). B2B
(d). None of the above

(e). All of the above


11. What is the 'last mile' in the last mile problem? The link between your...
(a). Computer and telephone
(b). Home and telephone provider's local office
(c). Office and server
(d). Home and internet service provider
12. Which of the following is a function of a proxy server?
(a). Maintaining a log of transactions
(b). Caching pages to reduce page load times
(c). Performing virus checks
(d). Forwarding transactions from a user to the appropriate server
(e). All of the above
13. An example of the supply chain of commerce is :
(a). A company turns blocks of wood into pencils.
(b). A department supplies processed data to another department within a
company.
(c). A consumer purchases canned vegetables at the store.
(d). None of the above
14. Just after your customers have accepted your revolutionary new e-commerce
idea, which of the following is not expected to immediately happen?
(a). Competitor catch-up moves
(b). Commoditization
(c). First-mover expansion
(d). None of the above
15. Which of the following statements about E-Commerce and E-Business is true?
(a). E-Commerce involves buying and selling over the internet while E-Business does not.
(b). E-Commerce is B2C (business to consumer) while E-Business is B2B (business to business).
(c). E-Business is a broader term that encompasses E-Commerce (buying and selling) as well as doing other forms of business over the internet.
(d). None of the above.
16. Where do CGI (Common Gateway Interface) application programs or scripts run?
(a). On the client through a web browser
(b). On the client through temporary stored files
(c). On the web server
(d). Where the user installs them
(e). None of the above
17. In which model the application logic is partitioned among the clients and
multiple specialized servers?
1. Two tier
2. Three tier
3. N tier
Options:
(a). 1
(b). 2
(c). 2 & 3
(d). 3
18. Which of the following are the 3 types of web information system logic?
(a). Presentation, business, information/data
(b). Presentation, information/data, active server pages
(c). Business, information/data, client/server

(d). None
19. Software, music, digitized images, electronic games, and pornography can be revenue sources for which category of B2C e-commerce?
(a). Selling services
(b). Doing customization
(c). Selling digital products
(d). Selling physical products
20. What e-commerce category is the largest in terms of revenue?
(a). Business to Business (B2B)
(b). Intra-Business (B2E)
(c). Business to Consumer (B2C)
(d). Consumer to Consumer (C2C)
21. An application layer protocol, such as FTP or HTTP, is transparent to the
end user.
(a) Always
(b) Never
(c) Sometimes
(d) None Of Above
22. B2B & B2C IT initiatives can use the same E-Commerce platforms
(a) Always
(b) Never
(c) Sometimes
(d) None Of Above
23. B2B involves small, focused customer set with large transaction volume per
customer, periodic consolidated payments and significant customizations of
products and services

(a) Always
(b) Never
(c) Sometimes
(d) None Of Above
24. Two computers can communicate using different communication protocols.
(a) Always
(b) Never
(c) Sometimes
(d) None Of Above
25. Which is/are types of e - commerce?
(a). B2B
(b). B2C
(c). C2C
(d). All the above
26. Which of the following items is used to protect your computer from unwanted
intruders?
(a). A cookie.
(b). A browser.
(c). A firewall.
(d). A server.
27. For selling physical products on the Internet, what is the key to
profitability?
(a). Hook
(b). Cost Control
(c). Brand Recognition
(d). Customization

28. Which of the following B2C companies is the best example of achieving its
financial success through controlling its cost?
(a). Yahoo
(b). Amazon
(c). E-Bay
(d). Google
(e). None of the above
29. AsianAvenue.com, BlackVoices.com, iVillage.com, SeniorNet.org are all
examples of what?
(a). Intermediary Services websites
(b). Physical Communities
(c). B2C websites
(d). Virtual Communities
30. Which of the following is the least attractive product to sell online?
(a). Downloadable music
(b). Software
(c). A PDA
(d). Electronic stock trading
31. In the e-mail address jgreen03@gsm.uci.edu, what is the top-level domain?
(a). gsm
(b). uci
(c). edu
32. What do you think cookies do?
(a). They are a threat to privacy
(b). They help the user not to repeat some input info
(c). They personalize the user's webpage

(d). B and c
(e). All of the above
33. Much of Amazon.com's initial success can be attributed to which of the
following:
(a). Low prices
(b). Brand recognition
(c). Fast web connections
(d). Customer service
34. It is particularly difficult to maintain the competitive advantage based on
________.
(a). Quality
(b). Efficiency
(c). Price
(d). Internal Cost Reduction
(e). Brand
35. What type of application has the potential to change a market or even create
a new market?
(a). Software application
(b). Intelligent application
(c). Killer application
(d). Business application
36. Why did the e-commerce boom, as evidenced by soaring stock prices of Internet businesses such as Pets.com and eToys, go bust in 2000?
(a). Websites started by techies who lack business knowledge
(b). Lack of good business model
(c). Investors' and entrepreneurs' greed and ignorance

(d). All of the above


37. Why can't new connection infrastructure like DSL, Cable Modem, and fiber
optics solve the last mile problem?
(a). Availability
(b). Cost
(c). Distance
(d). All of the Above
38. All of the following are uses of plug-ins except:
I. Air fresheners
II. Speed up data transmission
III. Enhance browser capability
IV. To view different file types
(a) I and II
(b) III and IV
(c) I
(d) I and IV
39. A system with universally accepted standards for storing, retrieving,
formatting, and displaying information in a networked environment best defines:
(a) A web site.
(b) A web location.
(c) The World Wide Web.
(d) An intranet.
40. What's the real potential of e-commerce?
(a) Making a profit
(b) Generating Revenue
(c) Improving efficiency

(d) Buying and selling on the internet and WWW
