
Web 3.0 – The intelligent web


I have a dream for the Web [in which computers] become capable of analyzing
all the data on the Web  – the content, links, and transactions between people
and computers. A ‘Semantic Web’, which should make this possible, has yet to
emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy
and our daily lives will be handled by machines talking to machines. The
‘intelligent agents’ people have touted for ages will finally materialize.
– Tim Berners-Lee, inventor of the World Wide Web
You've decided to go see a movie and grab a bite to eat afterward. Booting up
your PC, you head to Google to search for theater, movie and restaurant
information. In total, you visit half a dozen Web sites before you're ready to
head out the door.
The next generation of the Web - Web 3.0 - will make such tasks faster and
easier. Instead of multiple searches, you might type a complex sentence or two
in your Web 3.0 browser, and then it will do the rest. The Web 3.0 browser will
analyze your request, search the Internet for all possible answers, and then
organize the results for you. That's not all. It will act like a personal assistant. As
you search the Web, the browser learns what you are interested in. The more
you use the Web, the more your browser learns about you and the less specific
you'll need to be with your questions.

Web 1.0 was an isolated information library with a linear, one-way relationship: content was generated by individual content producers and delivered to consumers as hyperlinked documents, and end-users could not add content to the web. As a result, a great deal of available information never made it onto the internet at all, and what did exist sat in isolated information silos. With the growing popularity of the internet and the rising number of end-users, the need for a new, interactive web was felt. Web sites thus made the transition from isolated information silos to sources of content and functionality, and Web 2.0 was born. In brief, the characteristics of Web 2.0 include:
 The ability for visitors to make changes to Web pages: Amazon allows
visitors to post product reviews. Using an online form, a visitor can add
information to Amazon's pages that future visitors will be able to read.
 Using Web pages to link people to other users: Social networking sites like
Facebook and MySpace are popular in part because they make it easy for
users to find each other and keep in touch.
 Fast and efficient ways to share content: YouTube is the perfect example. A
YouTube member can create a video and upload it to the site for others to
watch in less than an hour.
 New ways to get information: Today, Internet surfers can subscribe to a Web page's Really Simple Syndication (RSS) feed and receive notifications of that Web page's updates as long as they maintain an Internet connection (a small sketch of reading such a feed follows this list).
 Expanding access to the Internet beyond the computer: Many people
access the Internet through devices like cell phones or video game
consoles; before long, some experts expect that consumers will access the
Internet through television sets and other devices.
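
As a taste of how such an RSS subscription works under the hood, here is a minimal sketch that reads a feed in Python with the third-party feedparser library; the feed URL is a placeholder:

    # pip install feedparser  (a widely used RSS/Atom parsing library)
    import feedparser

    # Placeholder feed address; any page offering RSS publishes one.
    feed = feedparser.parse("http://example.com/news/rss.xml")

    # Each entry is one update published by the page.
    for entry in feed.entries:
        print(entry.title, "-", entry.link)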

Right now, when you use a Web search engine, it looks for Web pages that
contain the keywords found in your search terms. The search engine can't tell if
the Web page is actually relevant for your search. It can only tell that the keyword
appears on the Web page. This is where Web 3.0 comes into the picture. A Web
3.0 search engine could find not only the keywords in your search, but also
interpret the context of your request. It would return relevant results and suggest
other content related to your search terms. It would treat the entire Internet as a
massive database of information available for any query.

But how can we make the web intelligent? How can the search engine know what you want? Suppose I am a stamp collector: I love to collect stamps and to learn about their significance. Over the years I have collected a lot of stamps, and for each stamp I have made a document, so I now have a lot of documents. But if I want to know about a specific stamp, how will I find it? The answer is the web. This is the web we have today: a huge collection of documents. The words of all those documents are indexed, which enables searching for keywords. Now suppose I search for all red stamps. What do I get?
Red stamps
Stamps from Cambodia (Khmer Rouge)
Stamps from the Red Sea
Stamps from the 140th anniversary of the Red Cross
Stamps with red dragons etc.
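
This is easy to reproduce. Here is a minimal sketch of such a keyword search in Python, with invented document titles: any document containing the query words matches, with no sense of what "red" actually refers to:

    # Naive keyword search: a document matches if every query word
    # occurs somewhere in its text.
    documents = [
        "red stamps",
        "stamps from the Red Sea",
        "stamps from the 140th anniversary of the Red Cross",
        "stamps with red dragons",
    ]

    def keyword_search(query, docs):
        words = query.lower().split()
        return [d for d in docs if all(w in d.lower().split() for w in words)]

    print(keyword_search("red stamps", documents))
    # All four titles match, relevant or not.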

This doesn't seem very intelligent. But how can the computer understand what I want? We will have to describe things in a structured way so that the web can understand them, and describing data in a structured way is best done in a database. For Web 3.0, a stamp would be described with a statement such as:

The picture on the stamp is a PO Box
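
Such a statement is really a subject-predicate-object triple. Here is a minimal sketch of how it could look as machine-readable data, using Python's third-party rdflib library; the namespace and property names are hypothetical:

    # pip install rdflib  (a common Python toolkit for RDF data)
    from rdflib import Graph, Literal, Namespace

    EX = Namespace("http://example.org/stamps/")  # hypothetical vocabulary

    g = Graph()
    # Subject - predicate - object: "the picture on stamp42 is a PO Box"
    g.add((EX.stamp42, EX.picture, Literal("PO Box")))
    g.add((EX.stamp42, EX.colour, Literal("red")))

    print(g.serialize(format="turtle"))  # human-readable triple notation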

When a computer understands what data means, it can do intelligent searching, reasoning and combining. To bring this revolutionary idea of an intelligent web to life, the use of some semantic technologies is imperative. These are RDF, XML, URI, SPARQL, XDI, XRI, SWRL, XFN, OWL, API and OAuth. Sounds complicated! Don't worry; the stamp collector is here to explain it the easy way.

Meaning is about understanding, and to understand we need a language. A language starts with words: things mean something in words. Online, we describe things with XML. But we can't understand words alone; we also need grammar. Online, grammar is RDF (the Resource Description Framework). With RDF Schema we can define concepts and make simple relations between them. But RDF Schema is limited: a language needs more expressiveness and logic to make good reasoning possible. That's why OWL (the Web Ontology Language) was invented. Finally, to reason you need rules. For example, I got this stamp from my uncle; the rule for calling someone my uncle is that one of my parents has a brother. Rules are formulated in SWRL (the Semantic Web Rule Language). So: words in XML, grammar in RDF (Schema) and OWL, and finally rules in SWRL. To define queries over the data we also need a query language, and SPARQL (the SPARQL Protocol and RDF Query Language) serves that purpose.
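
Here is a minimal sketch of the query side of that stack in Python with the third-party rdflib library, reusing the hypothetical stamp vocabulary from above; since rdflib does not execute SWRL itself, the uncle rule is approximated as a SPARQL pattern:

    # pip install rdflib
    from rdflib import Graph, Literal, Namespace

    EX = Namespace("http://example.org/stamps/")  # hypothetical vocabulary
    g = Graph()
    g.add((EX.stamp42, EX.colour, Literal("red")))
    g.add((EX.stamp7, EX.colour, Literal("blue")))

    # SPARQL: find stamps whose colour property is "red" --
    # no Red Sea, no Red Cross, no red dragons.
    q = """
        PREFIX ex: <http://example.org/stamps/>
        SELECT ?stamp WHERE { ?stamp ex:colour "red" . }
    """
    for row in g.query(q):
        print(row.stamp)

    # The uncle rule, SWRL in spirit: one of my parents has a brother.
    g.add((EX.me, EX.hasParent, EX.mum))
    g.add((EX.mum, EX.hasBrother, EX.bob))
    rule = """
        PREFIX ex: <http://example.org/stamps/>
        SELECT ?u WHERE { ex:me ex:hasParent ?p . ?p ex:hasBrother ?u . }
    """
    for row in g.query(rule):
        print("uncle:", row.u)
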
Because the web is decentralized and data lives in many places, language alone is not enough: exchange of data between different machines is the key. To make a connection, a machine needs to identify a resource, and for this we use resource identifiers. The best-known resource identifier is the URI, which comes in two flavours: names (URNs) and locations (URLs).
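
To get a feel for what a URI carries, here is a minimal sketch using Python's standard urllib.parse module; the address itself is made up:

    from urllib.parse import urlparse

    # A URL is a URI that also says *where* the resource lives.
    uri = urlparse("http://example.org/stamps/red#stamp42")
    print(uri.scheme)    # http        -- how to reach it
    print(uri.netloc)    # example.org -- where it lives
    print(uri.path)      # /stamps/red -- which resource
    print(uri.fragment)  # stamp42     -- which part of it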

Because URIs have international limitations and the need for data exchange between machines is growing rapidly, there is a successor: the XRI (Extensible Resource Identifier). There is also a standard for sharing, linking and synchronizing data, called XDI (XRI Data Interchange). With all this, I can use the power of all the different data sources on the web.

But data is protected: we need consent and a key to gain access. The key to certain data is described in an API (an application programming interface), and an open standard for authorizing access to such an API is OAuth.
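
In practice, consented access often looks like the following minimal sketch, using Python's third-party requests library with an OAuth 2.0-style bearer token; the endpoint and token are placeholders, and a real OAuth flow would first exchange the user's consent for such a token:

    # pip install requests
    import requests

    # Placeholder token; in a real OAuth flow the provider issues this
    # access token only after the user has granted consent.
    token = "ACCESS_TOKEN_OBTAINED_VIA_OAUTH"

    response = requests.get(
        "https://api.example.com/v1/stamps",           # hypothetical API
        headers={"Authorization": "Bearer " + token},  # the key to the data
    )
    print(response.status_code)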

Now, let us talk about the present scenario and how Web 3.0 will be implemented.

The internet is very much alive and kicking. The first generation of internet sites primarily gave information, but with the rise of sites like Facebook and Amazon the web has become increasingly interactive. On this Web 2.0 it is mostly the user who produces the content: without contributors there would be no Facebook, and without people posting information on Wikipedia and their clips on YouTube there would be no interaction on these sites. Meanwhile, most people have become familiar with Web 2.0. Blogging, tagging, social networking and social bookmarking have paved the way to the next step in the development of the web: the step to the intelligent and omnipresent Web 3.0.
Web 3.0 is not totally different from what we know now; in many respects it is a continuation of existing techniques. Think of the so-called recommender systems that make a personalized approach by a website possible. Amazon has cleverly used such a system for a long time now, offering its customers products that other people with the same interests bought before them. And on last.fm you can listen online to music that caters to your personal taste, thanks to similar smart systems. These sites are in a continuous learning process, anticipating what their users like or dislike. Important for sites like last.fm and Amazon is that users add extra information to a song or a book.
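
The heart of such a recommender system can be surprisingly small. Here is a minimal sketch, with invented purchase data, of "people who bought this also bought" based on co-occurrence counts:

    from collections import Counter

    # Invented purchase histories: user -> set of items bought.
    purchases = {
        "ann":  {"book_a", "book_b", "book_c"},
        "ben":  {"book_a", "book_b"},
        "cara": {"book_b", "book_d"},
    }

    def also_bought(item, purchases):
        """Rank items most often bought together with `item`."""
        counts = Counter()
        for basket in purchases.values():
            if item in basket:
                counts.update(basket - {item})
        return counts.most_common()

    print(also_bought("book_a", purchases))
    # [('book_b', 2), ('book_c', 1)] -> book_b is the strongest suggestion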

Now, looking at the other side of the coin, Web 3.0 is not a cakewalk. Many questions are still waiting for an answer. The Semantic Web can't work all by itself; if it could, it would be called the "Magic Web". It will need some help to become a reality. For example, it is not very likely that you will be able to sell your car just by putting your RDF file on the Web. You will need society-scale applications that bridge the gap between the consumers and processors of Semantic Web data through semantic web agents or services. We will also require more advanced collaborative applications that make real use of shared data and annotations. Another major problem the experts must address is the threat to users' privacy. If your Web 3.0 browser retrieves information for you based on your likes and dislikes, could other people learn things about you that you'd rather keep private by looking at your results? What if someone performs an Internet search on you? Will your activities on the Internet become public knowledge? The researchers and initiators of Web 3.0 are pondering these challenges and attempting to make Web 3.0 a better, safer and richer experience.

Looking beyond Web 3.0, theories about the web's future range from conservative predictions to guesses that sound more like science fiction. To mention a few:

 According to technology expert and entrepreneur Nova Spivack, the development of the Web moves in 10-year cycles. In the Web's first decade, most of the development focused on the back end, or infrastructure, of the Web: programmers created the protocols and code languages we use to make Web pages. In the second decade, focus shifted to the front end and the era of Web 2.0 began. Now people use Web pages as platforms for other applications; they also create mashups and experiment with ways to make Web experiences more interactive. We're at the end of the Web 2.0 cycle now. The next cycle will be Web 3.0, and the focus will shift back to the back end: programmers will refine the Internet's infrastructure to support the advanced capabilities of Web 3.0 browsers. Once that phase ends, we'll enter the era of Web 4.0, when focus will return to the front end and we'll see thousands of new programs that use Web 3.0 as a foundation.

 The Web will build on developments in distributed computing and lead to true artificial intelligence. In distributed computing, several computers tackle a large processing job, each handling a small part of the overall task (a toy sketch of this division of labour follows this list). Some people believe the Web will be able to think by distributing the workload across thousands of computers and referencing deep ontologies. The Web will become a giant brain capable of analyzing data and extrapolating new ideas based on that information.

 The Web will extend far beyond computers and cell phones. Everything
from watches to television sets to clothing will connect to the Internet.
Users will have a constant connection to the Web, and vice versa. Each
user's software agent will learn more about its respective user by
electronically observing his or her activities.
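
As a toy illustration of that division of labour, here is a minimal single-machine stand-in using Python's standard multiprocessing module; a real distributed system would spread the same split-work-and-merge pattern across thousands of computers:

    from multiprocessing import Pool

    def count_words(chunk):
        # Each worker handles a small part of the overall job:
        # here, a toy word count over its own slice of documents.
        counts = {}
        for doc in chunk:
            for word in doc.split():
                counts[word] = counts.get(word, 0) + 1
        return counts

    def merge(partials):
        # Combine the partial results from all the workers.
        total = {}
        for part in partials:
            for word, n in part.items():
                total[word] = total.get(word, 0) + n
        return total

    if __name__ == "__main__":
        docs = ["red stamp", "stamp with red dragon", "red cross stamp"]
        chunks = [docs[i::2] for i in range(2)]       # split the job in two
        with Pool(2) as pool:
            partials = pool.map(count_words, chunks)  # workers run in parallel
        print(merge(partials))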

It's too early to tell which (if any) of these future versions of the Web will come
true. It may be that the real future of the Web is even more extravagant than
the most extreme predictions. We can only hope that by the time the future of
the Web gets here, we can all agree on what to call it.
