
Finding and Archiving the Internet Footprint

Simson Garfinkel and David Cox


Naval Postgraduate School
Monterey, CA, USA
February 10, 2009

Abstract

With the move to cloud computing, archivists face the increasingly difficult task of finding and preserving the works of an originator so that they may be readily used by future historians. This paper explores the range of information that an originator may have left on computers out there on the Internet, including works that are publicly identified with the originator; information that may have been stored using a pseudonym; anonymous blog postings; and private information stored on web-based services like Yahoo Calendar and Google Docs. Approaches are given for finding the content, including interviews, forensic analysis of the originator's computer equipment, and social network analysis. We conclude with a brief discussion of legal and ethical issues.

Keywords: Forensics, Search, Historical Record, Information Gathering

1 Introduction

With the introduction of home computers and electronic typewriters in the late 1970s, archivists were forced to confront the fact that a person's papers might, in fact, no longer be on paper[29]. The power of word processing made writers among the first to embrace information technology outside of government and the financial sector. And because writers often made small purchases and were not constrained by prior investment, they frequently purchased equipment from small niche manufacturers whose technology did not become dominant.

As a result, preserving and cataloging the earliest electronic records consisted of two intertwined problems: the task of finding and copying the data off magnetic media before the media deteriorates, and the challenge of reading older and sometimes obscure formats that are no longer in widespread use[1].

Archivists are now on the brink of a far more disruptive change than the transition from paper to electronic media: the transition from personal to cloud computing. In the very near future an archivist might enter the office of a deceased writer and find no electronic files of personal significance: the author's appointment calendar might be split between her organization's Microsoft Exchange server and Yahoo Calendar; her unfinished and unpublished documents stored on Google Docs; her diary stored at the online LiveJournal service; correspondence archived on the Facebook walls of her close friends; and her most revealing, insightful and critical comments scattered as anonymous and pseudonymous comments on the blogs of her friends, collaborators, and rivals.

Although there are numerous public and commercial projects underway to find and preserve public web-based content, these projects will not be useful to future historians if there is no way to readily find the information that is of interest. And of course, none of the archiving projects are able to archive content that is private or otherwise restricted, as will increasingly be the case for personal information that is stored in the cloud.

1.1 Outline of this paper

This paper introduces and explores the problem of finding and archiving a person's Internet footprint. In Section 2 we define the term "Internet footprint" and provide numerous examples of the footprint's extent. In Section 3 we present a variety of approaches for finding the footprint. In Section 4 we discuss technical concerns for archiving the footprint.

(Invited paper, presented at the First Digital Lives Research Conference: Personal Digital Archives for the 21st Century, London, England, 9-11 February 2009. Corresponding author: slgarfin@nps.edu.)

1.2 Related Work

Web archiving has received significant exploration in recent years, including the use of proxies to collect data[42], the need for proper record management[41], and the difficulty of reconstructing lost websites from the web infrastructure[36]. Researchers have also characterized the Web's decay[7]. Jatowt et al. have developed techniques for automatically detecting the age of a web page[28].

Juola provides a review of current authorship determination techniques[30].

There are numerous open source and commercially available face recognition products, including FaceIt by Visionics, FaceVACS by Plettac, and ImageWare Software. Zhao et al.[50] and Datta et al.[15] have both published comprehensive surveys of current research and technology.

Viegas et al. examined cooperation and conflict between authors by analyzing Wikipedia logs[48]. Other relevant work on Wikipedia includes analysis of participation[9] and statistical models that can predict future administrators[11].

2 The Internet Footprint

Consider the staggering range of Internet services that a person uses during the course of a year. Some of these are public publication services like BBC or CNN News: services that are little more than traditional television, radio or newspaper repurposed to the Internet, and that most Internet users access anonymously. Other services are public and highly personalized: blogs and home pages, for example. Still other services are private and personal, like an online calendar or diary. These services can be operated by an organization for its employees, such as a company running a Microsoft Exchange server, or they can be operated on a global scale for millions of users, such as Google Calendar[23].

This section considers the wide range of information that an originator may create on other computers on the Internet through their own actions: the originator's Internet Footprint.

2.1 The Public Identified Footprint

A person's public identified footprint is any information that they created which is online, widely available, and specifically linked to the author's real name.

For originators that are authors, their public footprint almost certainly includes articles that have been published under the originator's own name in web-only publications such as Slate Magazine[5] or Salon.com[4]. The public footprint may also include letters to the editor. (John Updike once wrote a letter to the editor of the Boston Globe advocating that the comics page retain Spiderman[47].) Individuals may also publish their own writing on personal web sites (home pages and blogs).

Websites cannot be relied upon to archive their own material, because the websites may not exist in the future. For example, in the late 1990s thousands of articles and columns by leading writers were published at HotWired, a web property operated by Wired News. Wired News was eventually sold to Lycos, then to Conde Nast[38]. Numerous articles were lost during these transfers; those that are still available online are not at their original Internet location (http://www.hotwired.com), but are now housed underneath the http://www.wired.com domain. Many links to, between and even within the articles have been broken as a result.

One way to retrieve no-longer-extant web pages is through the use of the Internet WayBack Machine, operated by the Internet Archive[3]. But here there are several problems:

- The Internet Archive is itself another organization (in this case a for-profit business) which may cease operation at some point in the future.

- The Archive's coverage is necessarily incomplete.

- The Internet Archive may not be accurate. (Fred Cohen has demonstrated that the content of past pages on the Internet WayBack Machine can be manipulated "from the future", a disturbing fact when one considers that reports from the WayBack Machine have been entered into evidence in legal cases without challenge from opposing counsel[13].)

- The WayBack Machine will not archive websites that are blocked with an appropriate robots exclusion file (robots.txt). This was especially a problem for the Journalspace online journal, which was wiped out on January 2, 2009 due to an operator error and the lack of backups[43]. As it turns out, Journalspace had a robots.txt file that prohibited archiving by services such as the Internet Archive and Google.
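Before relying on a third-party archive, an archivist can check whether a site's robots exclusion file would even have permitted crawling. The following is a minimal sketch using Python's standard urllib.robotparser module; the user agent string "ia_archiver" (commonly used to address the Internet Archive's crawler) and the example site are assumptions for illustration only.

import urllib.robotparser

def archive_permitted(site: str, user_agent: str = "ia_archiver") -> bool:
    """Return True if the site's robots.txt allows the given crawler to fetch the front page."""
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(f"{site.rstrip('/')}/robots.txt")
    rp.read()                      # fetches and parses robots.txt
    return rp.can_fetch(user_agent, site)

if __name__ == "__main__":
    # Hypothetical example: a journal site that excluded all crawlers, as
    # Journalspace reportedly did, would return False here.
    print(archive_permitted("http://www.example.org"))

A site that returns False here was never a candidate for rescue from the WayBack Machine, which is one more argument for capturing the material directly.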
Rather than hoping that another organization has managed to sweep up an individual's relevant web pages in a global cataloging of the Internet, it almost certainly makes more sense for archivists to go out and get the material themselves.

The Public Footprint may also contain information at social networking websites such as Facebook, MySpace and LinkedIn. These websites contain not just information that a person posted, but documentation of a person's social network (their friends and associates) as well as documentation of a person's preferences in the form of recommendation messages. Websites such as Flickr and Picasa hold photographs that a person may have uploaded. What a treasure for future historians trying to understand the life of an individual! What a quandary for an archivist, for these websites actively encourage originators to intermix the personal and the professional. Only through consultation with families and other interested parties will archivists be able to determine which personal information should be made immediately available, which information should be kept in closed collections until a suitable amount of time has passed, and what should be destroyed.

Finally, a person's public footprint might contain information that the person thinks is private but which is, in fact, public. It is notoriously difficult to audit security settings because they are complex and not generally apparent within today's user interfaces. As a result, it is common for computer users to make information publicly available when they do not intend to do so. Good and Krekelberg explored the Kazaa user interface and discovered that it was relatively easy for individuals to share their entire hard drive to a file sharing network when they intended to share just a few documents or folders[22]. Sometimes such inadvertent public sharing can have important political, social, or historical dimensions: in June 2008, Judge Alex Kozinski of the 9th US Circuit Court of Appeals was found to have sexually explicit photos and videos on his own personal website[31, 33], which was especially relevant as the Judge was himself overseeing an obscenity trial. (The Judge later defended himself, saying that much of the material attributed to him by the Los Angeles Times had actually been posted by his son[25].)

2.2 The Organizational Footprint

Although not strictly part of the Internet footprint, many organizations operate their own data services on which an originator could easily store information. For example, many businesses and organizations run their own web-based calendar and email services. These services may also cause problems for archivists because they can be hard to find and may not be readily interested in sharing their information, even when the originator or the originator's family strongly favor information sharing.

2.3 The Pseudonymous Footprint

Beyond the information that a person published under their own name, there is potentially a wealth of information that is publicly available but published under a different name or a non-standard email address: an electronic pseudonym.

There are many reasons why an individual might publish information to the public using a pseudonym:

- Information might be published under a different name in an attempt to preserve privacy.

- The individual might have a well-established pen name (for example, Charles Lutwidge Dodgson blogging as Lewis Carroll).

- The individual might be a fiction writer and be publishing the information online using the persona of a fictional character (for example, Dodgson blogging as the Queen of Hearts).

- The information might appear in an online forum where there is a community norm that prohibits publishing information under a real name, or the online forum might assign pseudonyms as a matter of course.

- Another person might already be using the individual's name, forcing the originator to pick a different name.

- The individual might be a government or corporate official and be prohibited from posting under their own name for policy reasons. (For example, Whole Foods President John P. Mackey blogged under the pseudonym "Rahobed", a play on his wife's name Deborah[35].)

Information that an originator publishes on the Internet in a manner that is freely available but is not directly linked to the person's name can be thought of as the individual's Pseudonymous Footprint. It is unlikely that all of an originator's pseudonyms would be known in advance by an archivist: many people don't even remember all of the pseudonyms that they themselves use!

Pseudonyms have many characteristics that are sure to cause problems for future archivists:

- Although each pseudonym is typically used by a single person, this is not necessarily the case.

- Although some pseudonyms are long-lived, others may be created for a single purpose and then quickly discarded.

- Pseudonyms may be linguistically similar to the originator's name, similar to another person's name, or they may be unique.

- There is no central registry of pseudonyms.

- Some pseudonyms may be confined to a single online service, while others may be used across multiple services.

- The same pseudonym on different services may in fact be used by different people. (For example, while the user "rahobed" on the Yahoo Finance bulletin board was used by John P. Mackey, the blog http://rahobed.blogspot.com/ actually belongs to one of the authors of this article.)

- Pseudonyms that appear linked to email addresses (e.g. rahobed@yahoo.com) need not be: some online services allow any text string to be used as a username, and usernames that look like email addresses are not verified.

Automated tools may assist the researcher in attempting to determine whether a pseudonym is or is not the originator[30]. In the case of photos, face recognition/matching software could be used.
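Authorship attribution methods such as those surveyed by Juola[30] compare the writing style of a disputed text with samples of the originator's known writing. The sketch below is a toy illustration of that idea rather than any particular published technique: it compares character trigram frequency profiles with cosine similarity. The sample file names are placeholders, and the similarity score suggests, but never proves, common authorship.

from collections import Counter
from math import sqrt

def trigram_profile(text: str) -> Counter:
    """Character trigram counts; a crude proxy for writing style."""
    text = " ".join(text.lower().split())
    return Counter(text[i:i+3] for i in range(len(text) - 2))

def cosine_similarity(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

if __name__ == "__main__":
    # Hypothetical inputs: known writing by the originator and a pseudonymous posting.
    known = open("originator_sample.txt").read()
    disputed = open("pseudonymous_post.txt").read()
    score = cosine_similarity(trigram_profile(known), trigram_profile(disputed))
    print(f"stylistic similarity: {score:.3f}")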

2.4 The Private Footprint

Increasingly, computer users are storing information on remote servers rather than on their own systems. Such services are sometimes called "grid", "cluster" or "cloud" computing. Although these are online services, they are frequently used for private purposes. Individuals prefer them to using personally owned computer systems because of data durability (users don't need to back up their own data) and cost (most of the web-based services are free). Another advantage is that the systems make it relatively easy to collaborate with a small number of people.

Some examples of these services include:

- Calendar services (e.g. Google Calendar and Yahoo Calendar), which allow a person to keep an online calendar.

- Online word processors and spreadsheets, such as Google Docs and ThinkFree Boundless.

- LiveJournal, a blogging service, which also allows for the creation of a private diary or a password-protected journal that is shared with a small number of people.

- Online banking and bill payment services. Whereas traditionally a person might have kept their own financial records, increasingly individuals are opting to receive "e-statements". Although e-statements could be sent by email, in practice the statements are not sent at all. Instead the bank or financial institution sends a message stating that the statement may be viewed on a website. Most users do not download a copy, but simply refer to the online version when they need to.

Access to online private services is typically protected with a username and a password. Most services allow users to register an email address; if a password is lost, a new password can be generated and sent to that address.

Also part of the private footprint are Internet services that do not appear as content at all, but which can be vital to understanding a person's approach to the online world. Two examples come to mind:

1. Individuals can obtain domain names and populate the Domain Name System (DNS) database with a variety of types of information. Any attempt to capture Internet services which does not capture DNS is necessarily incomplete and may even be erroneous. But capturing only DNS is insufficient: there is necessarily a link between DNS names, IP addresses, and geographical locations. Thus, in order to make sense of DNS information, it may be necessary to perform other operations such as geolocation[24] or cryptographic operations[16]. (A minimal sketch of capturing a domain's DNS records appears after this list.)

2. Much collaborative work that takes place on the Internet today is the collaborative creation of open source computer programs. These systems reside on servers such as SourceForge and Google Code, as well as on privately managed CVS and Subversion servers. This code is generally not archived or indexed by existing search engines or web archiving projects, but may nevertheless have significant historical importance.
2.5 The Anonymous Footprint

Anonymous works are fundamentally different from pseudonymous works. With pseudonymous messages there is at least a name ("Lewis Carroll") that the archivist can use to link a work to the true author. But for works that are truly anonymous, the only information that can link the work with the author is the content of the work itself.

Although the Internet originally had many outlets for anonymous speech, these systems received significant abuse as the Internet's popularity grew in the 1990s[26, 37]. As a result, today's Internet has surprisingly few outlets for speech and messages that are truly anonymous.

3 Finding the Footprint

As the previous section shows, simply mapping out the potential of a person's Internet Footprint is quite difficult. Actually finding it is more difficult still.

We have identified three approaches for finding an Internet Footprint: forensic analysis of an originator's computer system; search; and social network analysis.

3.1 Interviews with the Originator

Ideally, the originator or the originator's family will be able to provide a list of online services, complete with usernames and passwords, to enable the expeditious downloading and archiving of information stored on remote services. Such a list should also come with signed consent giving full authorization for the accounts to be used for downloading the information that they contain (see Section 5.1).

But even if the originator is alive and cooperating, it is unlikely that the originator will be able to provide a complete list of online information: most of us are simply unaware of all the various online services that we use on a daily basis. Finally, there is always the risk that the originator will have died without clearly documenting what online services were used. Even if the originator's family wishes to assist the archivist, they may be unable to do so.

Interviews may also be conducted with the originator's family and friends to see if they know of any online resources used by the originator.

3.2 Forensic Analysis

One of the most direct ways to identify an originator's Internet footprint is to conduct a forensic analysis of the originator's computers and other electronic devices.

Computer systems preserve many traces or remnants that are indicative of Internet activity:

- Web browsers maintain bookmarks and caches of web pages. Web pages may also be recovered from deleted files.

- Email messages are rich with references to online services, in the form of messages containing links, notifications, and password reset instructions.

- Address books may contain URLs and are frequently used to hold user names and passwords as well.

- Desktop calendars may contain URLs and other online information.

- Other references may be found in logfiles and even word processing documents.

Many of these references can be found by making a forensic copy of the originator's computer and all associated media (tapes, CDs/DVDs, external drives, etc.), and then scanning the resulting disk images with a forensic feature extractor[19]. We have developed a primitive extractor called bulk_extractor which can produce a report of all email addresses and URLs found on an originator's hard drive. An example of the report produced by this program is shown in Figure 1.

Unfortunately, while some of an originator's account names, aliases, and pseudonyms may be present on the originator's machine, others may not be. The originator may have explicitly attempted to hide them, or may have accessed them exclusively from another machine, or they may have been used so long ago that references to the accounts have been overwritten.

Input file: /Users/simsong/M57 Jean.vmwarevm/Windows XP Clean-s001.vmdk


Starting page number: 0
Last processed page number: 90
Time: Fri Jan 16 11:59:27 2009
Top 10 email addresses:
=======================
jean@m57.biz: 1011
bob@m57.biz: 136
alex@m57.biz: 92
JEAN@M57.BIZ: 82
alison@m57.biz: 73
carol@m57.biz: 63
alison@M57.BIZ: 60
googlealerts-noreply@google.com: 49
inet@microsoft.com: 46
ca@digsigtrust.com: 40
Top 10 email domains:
=====================
m57.biz: 1487
M57.BIZ: 213
google.com: 84
netscape.com: 75
microsoft.com: 68
mozilla.org: 52
thawte.com: 51
digsigtrust.com: 46
hotmail.com: 35
aol.net: 31
Top 10 URLs:
=====================
http://pics.ebaystatic.com/aw/pics/s.gif: 5056
http://www.microsoft.com/contentredirect.asp.: 1735
https://www.verisign.com/rpa: 673
http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul: 542
http://ocsp.verisign.com0: 526
http://: 430
http://support.microsoft.com: 424
http://pics.ebaystatic.com/aw/pics/paypal/logo_paypalPP_16x16.gif: 333
http://crl.verisign.com/ThawteTimestampingCA.crl0: 263
http://crl.verisign.com/tss-ca.crl0: 262

Figure 1: The first page of output from the bulk_extractor program; the actual output runs to more than 40 pages.
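The report in Figure 1 was produced by bulk_extractor, which scans raw disk images without regard to file system structure. The fragment below is a minimal sketch of the same idea, not the bulk_extractor implementation itself: it streams a disk image in blocks and tallies anything that resembles an email address, using a deliberately simple regular expression. The image path is a placeholder.

import re
from collections import Counter

EMAIL_RE = re.compile(rb"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def extract_emails(image_path: str, block_size: int = 16 * 1024 * 1024) -> Counter:
    """Scan a raw disk image for email-address-like byte strings, ignoring file boundaries.
    Addresses that straddle a block boundary may be missed; a real feature extractor such
    as bulk_extractor handles overlapping blocks, compressed data, and other feature types."""
    hits = Counter()
    with open(image_path, "rb") as img:
        while True:
            block = img.read(block_size)
            if not block:
                break
            hits.update(EMAIL_RE.findall(block))
    return hits

if __name__ == "__main__":
    # Placeholder path; point this at a forensic image acquired with a write blocker.
    for addr, count in extract_emails("originator-disk.raw").most_common(10):
        print(count, addr.decode("ascii", "replace"))

Because the scan ignores file boundaries, it also surfaces strings in deleted files and unallocated space, which is exactly where overlooked account references tend to survive.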

The forensic analysis process should be completed with care not to alter or otherwise disturb the information on the originator's equipment. In general there are three key requirements which must be adhered to when conducting the analysis:

1. The entire storage space of the originator's computer and associated media should be captured, not merely the individual files. If possible, all attempts to copy data from the originator's computer should be done with a hardware write blocker in place between the computer and the storage media. This will ensure that data is not accidentally written to the originator's storage devices during the imaging process. Complete imaging of the originator's computer will establish the provenance of the captured material and address concerns of authenticity; these concerns are similar to those of legal authorities[2]. It may also result in data being preserved that would otherwise be lost: for example, residual data in deleted web browser cache files may contain important clues for uncovering pseudonyms used by the originator.

2. Data, once captured, should be hashed, or cryptographically fingerprinted, with a strong algorithm such as SHA-1 or SHA-256. (MD5 is no longer sufficient, as the algorithm has been compromised[49].) Even better, the image can be digitally signed and/or encrypted using a system such as the Advanced Forensic Format (AFF)[21]. (A minimal fingerprinting sketch appears after this list.)

3. In addition to a sector-by-sector copy of the storage media, it may be desirable to make a file-by-file copy. This will both assure that there are two copies of each file (one in the disk image and one in the copy), and will also decrease demands for the use of forensic tools. Also, in some cases, forensic tools may not be able to extract information from the disk images. (For example, in some cases it is not possible to easily reconstruct a multi-drive RAID or encrypted file system. In these cases it is easiest to use the host operating system to make a file-by-file copy.)
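Requirement 2 above can be satisfied with standard tools. The fragment below is a minimal sketch using Python's hashlib to fingerprint a captured disk image with SHA-256, reading in chunks so that images larger than memory can be handled. The file name is a placeholder, and digital signing or AFF encapsulation would be an additional step not shown here.

import hashlib

def sha256_of_image(path: str, chunk_size: int = 4 * 1024 * 1024) -> str:
    """Compute the SHA-256 fingerprint of a (possibly very large) disk image."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Placeholder file name; record the resulting hash alongside the image.
    print(sha256_of_image("originator-disk.raw"))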

3.3 Search and Social Network Analysis

Another way to locate the originator's Internet footprint is by searching for it. Two kinds of search are possible. First, the archivist could simply search for the originator's name (or aliases) on Internet search systems such as Google and Yahoo. Second, the archivist could go specifically to websites such as Facebook, MySpace and Flickr, and conduct searches there.

Search is complicated by the fact that many people share the same name. Bekkerman and McCallum note that a search for the name "David Mulford" on Google correctly retrieves information about a US Ambassador to India, two business managers, a musician, a student, a scientist, and a few others, all people who share the same name[8]. Which David Mulford is the right David Mulford depends on the context of the search. Sometimes it is difficult to determine if two seemingly different individuals are in fact the same person. Consider again the search for David Mulford:

    It is sometimes quite difficult to determine if a page is about a particular person or not. In the case of Ambassador David Mulford, much of the information that can be found at first may seem to be unrelated: one site states that in the late 1950s David attended Lawrence University and was a member of its athletic team; other sites mention his work at different positions in governmental departments and commercial structures, including Chairman International of Credit Suisse First Boston (CSFB) in London; a few sites (mostly in Spanish) relate his name to a financial scandal in Argentina. It is a difficult challenge to automatically determine whether all of these sites discuss the same person.[8]

The archivist can also try to find an originator's Internet footprint by searching the websites belonging to the originator's known friends and relations and looking for links. In some cases it may be appropriate to directly email individuals in the originator's address book or social network to see if they have information that they wish to share with the archivist.

Once references are found, it might be useful to sort these references into a variety of categories. We suggest three:

Provable References: Known references could be indicated by the presence of a username/password combination which maps directly to a specific website and can be validated by testing to see if the account can still be accessed.

Reliable References: A reliable reference could be indicated by the presence of an alias and URL/cookie combination that does not include a password, preventing the researcher from actually testing the account.

Passing References: A passing reference could be indicated by the presence of a URL or cookie which points to a social networking site or Internet e-mail site. The difference here is that there is only a single indication of a reference to a website which could hold historically interesting material.
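One way to make this triage concrete is to record each discovered reference together with the evidence that supports it, so that later archiving stages can prioritize provable references and queue passing references for manual review. The sketch below is one possible representation under these assumptions; the field names and example values are illustrative, not part of any existing tool.

from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class Strength(Enum):
    PROVABLE = "provable"   # username/password pair that can be validated
    RELIABLE = "reliable"   # alias plus URL or cookie, but no password
    PASSING = "passing"     # a lone URL or cookie pointing at a service

@dataclass
class Reference:
    service_url: str
    strength: Strength
    alias: Optional[str] = None
    evidence: list = field(default_factory=list)   # e.g. where on the disk image it was found

if __name__ == "__main__":
    # Hypothetical findings from a forensic scan:
    refs = [
        Reference("https://calendar.example.com", Strength.PROVABLE, "jdoe",
                  evidence=["saved password in browser profile"]),
        Reference("https://photos.example.com/jdoe", Strength.RELIABLE, "jdoe",
                  evidence=["cookie in browser cache"]),
        Reference("https://forum.example.net", Strength.PASSING,
                  evidence=["URL in deleted cache file"]),
    ]
    for r in sorted(refs, key=lambda r: list(Strength).index(r.strength)):
        print(r.strength.value, r.service_url)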

3.4 Unexpected Complications

3.4.1 Comments, Trackbacks, and Diggs

Now, think back to the BBC and CNN news sites. Although these services seem to be anonymous publication services, increasingly they are places where an originator may leave an Internet footprint. The BBC's website allows users to create a membership, "Sign In", and leave comments on every story. Comments are displayed with the user's member name, which is unique. An originator might use his or her real name as a member name. Alternatively, the originator might use a pseudonym (or multiple pseudonyms) which might or might not be similar to the originator's real name. A future biographer trying to build a picture of the originator might be very interested in the comments that the person thought to leave on the BBC website; putting those comments in context requires not just archiving them, but archiving the original story and the other comments as well.

CNN also allows readers to post a comment (or "Sound Off", to use CNN's term). But CNN also allows users to "share" articles on services such as Mixx, Digg, Facebook, del.icio.us, reddit, StumbleUpon, and MySpace. "Sharing" means that a reference to the article, and the user's comments about the article, are cross-posted to another web-based service.

3.4.2 "Report as Offensive" and Edit Wars

Another complication is that user contributions may be removed by other users. Web sites have given users this power to manage the torrents of spam and inappropriate comments that many high-profile websites receive. For example, the BBC website allows users to "Complain about this comment" (Figure 3), and Craigslist allows comments to be flagged as miscategorized, prohibited, or spam/overpost (Figure 2). Many websites will automatically remove user-generated content that is flagged by more than a certain number of people.

Figure 2: Postings to Craigslist may one day provide fascinating contemporaneous documents of the careers of writers or artists.

Figure 3: The BBC website allows users to complain about comments left by other users.

On Wikipedia it is even easier to change an originator's words: they can simply be edited by other Wikipedia users. This is particularly problematic when people are contributing to articles that are controversial. Imagine a noted author or historian locked in a bitter "edit war" with some other Wikipedia user, with each editing and re-editing the works of the other. Then the noted historian dies. With no one left to defend the historian's intellectual space, the pages get rewritten or even marked for deletion and are eventually removed from the system. From the point of view of Wikipedia policy this is the correct outcome, as a Wikipedia article is supposed to represent a consensus truth that can be verified from external sources and in which the author has no vested interest[20].

3.4.3 Privacy Enhancing Technologies

The originator may have employed various privacy enhancing technologies (PETs) such as encryption or anonymity services during their lifetime. Such services, unfortunately, may also prevent the analysis of their computer systems by archivists after the originator's death. This can be a problem even if the analysis is performed with the full consent of the originator's family.

For example, data may be encrypted, either on the originator's home computer system or on remote servers. In recent years high-quality encryption has been built into consumer operating systems (for example, Apple's FileVault). There are also a small number of Internet service providers that offer to store information in an encrypted form so that not even the provider can access it (for example, HushMail offers encryption of email, while Iron Mountain Digital Services offers encryption of backups).

Encryption may be subverted through the analysis of the originator's own computer systems, as people sometimes store passwords and encryption keys for remote systems on their local computers. Programs such as AccessData's Forensic Toolkit and Password Recovery Toolkit can work together to scan a hard drive for proper names and then use this information to try to forcibly decrypt, or "crack", the encrypted data. The company's Distributed Network Attack can run the attack simultaneously on hundreds of computers to dramatically increase speed.

Crack today or crack tomorrow? Archivists face an interesting dilemma when attempting to decrypt encrypted data. In most cases it becomes easier to forcibly decrypt encrypted data with each passing year, as computers get faster and new cracking techniques are discovered. On the other hand, a lesser-known encryption technique may conceivably become more difficult to decrypt with the passage of time, as the number of people familiar with the specific technique dwindles. It is possible that a weak but obscure algorithm that is crackable today will not be readily crackable in the future without significant re-investment in research, as the specific knowledge of the vulnerability is lost.
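The approach described above, harvesting candidate words from the originator's own drive and trying them against encrypted material, is essentially a targeted dictionary attack. The fragment below is a toy sketch of that idea and has no connection to AccessData's products: it checks candidate passphrases against a stored PBKDF2 hash, which stands in for whatever verification an encrypted container actually uses. The salt, iteration count, candidate-generation rules, and word list are all illustrative assumptions.

import hashlib

# Stand-in for the verification data of an encrypted container (values are illustrative).
SALT = bytes.fromhex("8f2d1c0a9b7e6655")
ITERATIONS = 100_000
TARGET = hashlib.pbkdf2_hmac("sha256", b"Deborah1959", SALT, ITERATIONS)

def try_candidates(words):
    """Try passphrases built from words harvested off the originator's disk image."""
    for word in words:
        for candidate in (word, word.capitalize(), word.capitalize() + "1959"):
            key = hashlib.pbkdf2_hmac("sha256", candidate.encode(), SALT, ITERATIONS)
            if key == TARGET:
                return candidate
    return None

if __name__ == "__main__":
    # Hypothetical proper names recovered by the feature extractor of Section 3.2.
    print(try_candidates(["jean", "deborah", "rahobed", "monterey"]))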
3.4.4 Uncooperative Service Providers

There is an old story of an assistant at MIT who worked for a famous professor in one of the physical science departments. One day the professor died after a long illness. Shortly thereafter, the assistant received a phone call from the Institute Archivist, who wanted to stop by and evaluate the professor's papers. The assistant said that she had been expecting the archivist and had already "cleaned them up" in anticipation of the visit. When the archivist arrived, the extent of the cleaning became evident: the assistant had thrown out the professor's scratch pads, his doodles, a box of business receipts, and so on, and had prepared for the archivist a neat folder showing all of the professor's speeches, published articles, and honors. The archivist was devastated.

Although many archivists know that they may need to act with haste in order to preserve the physical papers of the deceased, this story of the archivist and the assistant is in danger of playing out with great frequency in tomorrow's cloud-based world of electronic records.

For example, photo sharing websites such as AOL Pictures have deleted uploaded pictures that are not viewed after 60 days, or when the owner of the account fails to log in after 90 days. Some services delete photos when monthly fees are no longer paid[10]. Archivists would need to move fast to rescue an originator's photos stored on such a service.

Facebook's policy is to place the profile of members who die into a "Memorial State". In Memorial State, the account is given stronger privacy settings (only friends can see the profile), the person is removed from any groups, and the status is taken away. "This policy is the same across the board. If the family would rather the profile be taken down, we will do so," stated Malorie Lucich, a spokesperson for the company[34].

But Facebook only changes the account to memorial state if someone brings to Facebook's attention that a member has died. Meanwhile, an article in the University of Georgia's newspaper details how parents of deceased students have taken over their Facebook accounts, using the service as a means for memorializing their children and getting to know their children's friends[27].

4 Archiving the Footprint

Information must be archived once it is discovered. Archiving involves two distinct processes: getting the content, and saving the content.

4.1 Getting the content

Once the references have been cataloged, the archivist must then begin the task of extracting content from the Internet and saving it in an archival form. The archivist can manually log into the remote websites to access the information or, more likely, run some kind of modified web crawler (e.g. [39]) to do the work.
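A minimal capture step needs to record not just the page bytes but enough provenance (URL, capture time, fingerprint) to support the authenticity concerns raised in Section 3.2. The sketch below uses only Python's standard library and saves each fetched page next to a small JSON provenance record; the output layout and the example URL are assumptions, and a production crawler (or a WARC-writing tool) would additionally handle redirects, robots.txt, throttling, and authenticated sessions.

import hashlib, json, time, urllib.request
from pathlib import Path

def capture(url: str, outdir: str = "archive") -> Path:
    """Fetch one page and store its raw bytes plus a provenance sidecar."""
    with urllib.request.urlopen(url) as resp:
        body = resp.read()
        content_type = resp.headers.get("Content-Type", "")
    digest = hashlib.sha256(body).hexdigest()
    out = Path(outdir)
    out.mkdir(parents=True, exist_ok=True)
    page_file = out / (digest + ".bin")
    page_file.write_bytes(body)
    (out / (digest + ".json")).write_text(json.dumps({
        "url": url,
        "captured": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "content_type": content_type,
        "sha256": digest,
    }, indent=2))
    return page_file

if __name__ == "__main__":
    # Hypothetical page from the originator's public footprint.
    print(capture("http://www.example.org/"))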
For historical purposes it will almost always be desirable to store the original web page. However, since many web pages are likely to contain extraneous information (e.g. advertisements and navigation elements), it may also be desirable to automatically extract the relevant portions of a page using a "wrapper" or information extractor. Generally, though, these tools are hand written to suit a specific web site and do not scale or transfer well from page to page. Fortunately, tools have been proposed to better address the issues associated with wrapper development, including W4F (World Wide Web Wrapper Factory)[44], RAPIER (Robust Automated Production of Information Extraction)[12] and NoDoSE (Northwestern Document Structure Extractor)[6].

HTML-aware tools, like W4F, typically provide a higher degree of automation; however, they require the consistent use of HTML tags on target pages. Tools based on Natural Language Processing (NLP), such as RAPIER, can be classified as semi-automatic because, though the wrapper is generated automatically, a user needs to provide examples to guide it. It is up to the researcher to choose (or develop) the appropriate tool; a toy illustration appears at the end of this subsection. (A comprehensive list of information extraction approaches can be found in [46].)

"Reliable References" will require a more hands-on approach. This category will require the archivist to manually navigate to the website and identify whether or not it is historically interesting. If it is deemed so, then the tools discussed above may certainly be used to extract appropriate content, ensuring that appropriate steps are taken to maintain an original copy and integrity assurances.

The third category, "Passing References", will require significant time and effort on the part of the historian, and it is anticipated that the level of automation will decrease. Since the historian is provided with little information to go on, exhaustive manual searches of both local and deep/hidden content will be required. For public content, traditional search engines, like Google and Yahoo, and web crawlers, like Webcrawler.com and DataRover, could be utilized. Because local search engines index mostly based on hyperlinks which include location information, they typically exclude "high quality" local content available in the Deep Web[40]. Deep Web crawling may be accomplished through the use of tools such as Deep Web Crawler and LocalDeepBot. Additionally, Hidden Web Agents may be used as well; these agents can search and collect information on pages outside the Publicly Indexable Web (PIW)[32].
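As a toy illustration of the HTML-aware wrapper idea mentioned above (and of why such wrappers are site specific), the sketch below uses Python's standard html.parser to pull out just the text inside <p> elements of a previously saved page, discarding navigation and advertising markup. The assumption that article text lives in <p> tags is exactly the kind of per-site rule a real wrapper must encode; tools such as W4F, RAPIER and NoDoSE automate parts of that work.

from html.parser import HTMLParser

class ParagraphExtractor(HTMLParser):
    """Collect the text content of <p> elements; everything else is ignored."""
    def __init__(self):
        super().__init__()
        self.depth = 0          # nesting level of <p> tags
        self.paragraphs = []
        self._buffer = []

    def handle_starttag(self, tag, attrs):
        if tag == "p":
            self.depth += 1

    def handle_endtag(self, tag):
        if tag == "p" and self.depth > 0:
            self.depth -= 1
            if self.depth == 0:
                text = " ".join("".join(self._buffer).split())
                if text:
                    self.paragraphs.append(text)
                self._buffer = []

    def handle_data(self, data):
        if self.depth > 0:
            self._buffer.append(data)

if __name__ == "__main__":
    # Placeholder file: a page previously captured by the crawler sketched above.
    extractor = ParagraphExtractor()
    extractor.feed(open("captured_page.html", encoding="utf-8", errors="replace").read())
    for para in extractor.paragraphs:
        print(para)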

4.2 Saving the content

While there are many different ways to archive web content, each has significant technical problems.

There are several fundamental problems in making an archival copy of a web page:

- Because web pages can appear differently on different computers, it is not clear what should be archived: a picture of the web page, or the HTML code of the web page?

- Web sites such as Facebook and LiveJournal may show web pages differently depending on who is logged in. Should the web page be archived as it appears to the author, to a person in the author's circle of friends, to an "un-friended" registered user, or as it appears if no one is logged in?

- Web sites may also display pages differently at different times of day, or change their "theme" to take into account current events. If there are significant time-dependent changes, should multiple copies be archived?

Once the archivist decides what should be archived, the next question to answer is how it should be archived.

The naive approach for archiving web content is to print it. Archivists generally frown on this approach, because all it does is exchange one set of problems for another.

Instead of printing to paper, the web page could be printed to a bitmap file (e.g. a TIFF or PNG). Such files produce an exact copy of what was seen on the screen, at least for one specific web browser, but they cannot be readily searched unless they are OCRed. Such scans do, however, meet the legal requirements for admission to the US courts[17].

Another approach is to print the web content to Adobe Acrobat (PDF) format. But PDF is an evolving standard: PDF documents created today may look different in 10 years with a different Acrobat reader. Acrobat has specifically had problems with documents that had embedded bitmap fonts (especially documents created by versions of LaTeX in the 1980s and 1990s) and documents authored in languages other than English which did not have embedded fonts[45].
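If page images are kept, the searchability problem noted above can be mitigated by storing OCR text alongside each bitmap. The sketch below assumes the third-party pillow and pytesseract packages (and an installed Tesseract OCR engine); the file names are placeholders, and OCR quality on screenshots of web pages varies.

from PIL import Image          # third-party package "pillow" (assumed available)
import pytesseract             # third-party wrapper around the Tesseract OCR engine

def index_screenshot(image_path: str) -> str:
    """Extract searchable text from an archived page image and save it as a sidecar file."""
    text = pytesseract.image_to_string(Image.open(image_path))
    sidecar = image_path + ".txt"
    with open(sidecar, "w", encoding="utf-8") as f:
        f.write(text)
    return sidecar

if __name__ == "__main__":
    # Placeholder: a PNG produced by "printing" a captured page to a bitmap.
    print(index_screenshot("captured_page.png"))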

5 Legal Issues

There are primarily two legal issues that could arise during the collection of Internet works proposed in this paper: violations of copyright law, and violations of computer crime statutes such as the US Computer Fraud and Abuse Act or the UK Computer Misuse Act. There are also a number of ethical issues that might arise.

5.1 Copyright and Terms of Use

Copyright law, at least in the United States, is generally quite receptive to archives made for scholarly purposes, especially when the archiving is done for a non-commercial purpose and in such a way that the value of the original copyrighted work is not compromised. In such a case, copies are typically allowed under the Fair Use doctrine (17 U.S.C. 106); similar fair use is allowed under other copyright regimes as well.

Despite Fair Use, many web publishers and online services are generally not receptive to having their content scraped, spidered, or otherwise archived. For example, Facebook's Terms of Use (Figure 4) clearly prohibit archiving an originator's Facebook postings by anyone other than the person herself; whether or not this permission would apply to the person's estate or to an archivist acting on behalf of the person or estate is unclear. However, the policy is very clear that Facebook would not permit an archivist or historian to archive and then display messages that others had posted on the originator's Facebook wall, or messages that the person had received, or how the originator's Facebook presence existed in the context of other Facebook profiles.

    "No Site Content may be modified, copied, distributed, framed, reproduced, republished, downloaded, scraped, displayed, posted, transmitted, or sold in any form or by any means, in whole or in part, without the Company's prior written permission, except that the foregoing does not apply to your own User Content (as defined below) that you legally post on the Site. Provided that you are eligible for use of the Site, you are granted a limited license to access and use the Site and the Site Content and to download or print a copy of any portion of the Site Content to which you have properly gained access solely for your personal, non-commercial use, provided that you keep all copyright or other proprietary notices intact. Except for your own User Content, you may not upload or republish Site Content on any Internet, Intranet or Extranet site or incorporate the information in any other database or compilation, and any other use of the Site Content is strictly prohibited."[18]

Figure 4: This section of Facebook's Terms of Use would seem to prohibit the archiving of a person's Facebook profile for historical purposes.

5.2 Computer Crime

Even if an archivist decides that it is legally permissible to archive the content that an originator may have stored in the Internet "cloud", the way that the archivist goes about performing this function may expose the archivist to criminal charges.

For example, although it may be possible to scan an originator's hard drive for the username and password to an online service, actually using that username and password may put the archivist in violation of computer crime statutes such as the US Computer Fraud and Abuse Act (CFAA) (18 USC 1030). Such violations may be direct, as the CFAA prohibits unauthorized access to computers involved in interstate commerce. But violations may also be indirect, the result of violating a website's Terms of Service under a growing interpretation of the CFAA which holds individuals criminally liable for using a website in a manner other than that which was envisioned by the website's owner[14].

5.3 Ethical Issues

Computer systems have the potential to record more information, retain it for a longer period of time, and make it available to more individuals than is possible with paper works. More than ever, every effort should be made to clearly differentiate between what is public and what is private information. This is especially the case when collecting from online information systems, since there is the chance that the information collected may belong to another person (in the case of a mistaken identity), or may involve other people (in the case of a social network website).

The problem of mistaken identity is especially problematic for online data collection. There is little chance when going through a person's office that the archivist will accidentally pick up and catalog a diary belonging to a person who has the same name but who lives in another country, but this is exactly what can happen when downloading an originator's online diary.

6 Conclusion

It is no longer sufficient to simply analyze local computers and associated media when attempting to catalog a person's life works. Increasingly, communication, personal documents and published works are migrating to the web. Social networking sites contain photos, videos and personal communication. Blog sites contain personal ramblings and commentaries, named and anonymous. E-mail and chat, as well as personal videos, are also migrating to the web. The archivist of the present must be technically savvy and be able to use the myriad of forensic analysis, web searching and cataloging tools in order to be efficient and create a complete set of works.

Many of the approaches discussed in this paper need not be confined to the archivist profession. Individuals can apply these approaches to themselves to determine the extent of their own "digital shadow". These approaches may also be useful in civil litigation for e-discovery, and even in law enforcement.

6.1 Acknowledgements

Our thanks to Jeremy Leighton John at the Digital Lives research project for suggesting that we explore this relevant and interesting topic and for providing valuable feedback on this paper.

References

[1] Memory of the world: Safeguarding the documentary heritage: A guide to standards, recommended practices and reference literature related to the preservation of documents of all kinds. July 15 2003. http://www.unesco.org/webworld/mdm/administ/en/guide/guidetoc.htm.

[2] Adapting existing technologies for digitally archiving personal lives. In iPRES 2008: The Fifth International Conference on Preservation of Digital Objects. The British Library, September 2008. http://www.bl.uk/ipres2008/presentations_day1/09_John.pdf.

[3] Internet archive wayback machine, 2008. http://web.archive.org.

[4] Salon magazine: Breaking news, opinion, politics, entertainment, sports and culture, 2009. http://www.salon.com/.

[5] Slate magazine, 2009. http://www.slate.com.

[6] Brad Adelberg. NoDoSE: a tool for semi-automatically extracting structured and semistructured data from text documents. In SIGMOD '98: Proceedings of the 1998 ACM SIGMOD International Conference on Management of Data, pages 283-294. ACM, New York, NY, USA, 1998. ISBN 0-89791-995-5.

[7] Ziv Bar-Yossef, Andrei Z. Broder, Ravi Kumar, and Andrew Tomkins. Sic transit gloria telae: towards an understanding of the web's decay. In WWW '04: Proceedings of the 13th International Conference on World Wide Web, pages 328-337. ACM, New York, NY, USA, 2004. ISBN 1-58113-844-X.

[8] Ron Bekkerman and Andrew McCallum. Disambiguating web appearances of people in a social network. In WWW '05: Proceedings of the 14th International Conference on World Wide Web, pages 463-470. ACM, New York, NY, USA, 2005. ISBN 1-59593-046-9.

[9] Susan L. Bryant, Andrea Forte, and Amy Bruckman. Becoming wikipedian: transformation of participation in a collaborative online encyclopedia. In GROUP '05: Proceedings of the 2005 International ACM SIGGROUP Conference on Supporting Group Work, pages 1-10. ACM, New York, NY, USA, 2005. ISBN 1-59593-223-2.

[10] William M. Bulkeley. Failure to log on, buy prints can lead to loss of pictures; wife on the verge of tears. The Wall Street Journal, February 1 2006. http://www.phanfare.com/press/wsj_bulkeley.pdf.

[11] Moira Burke and Robert Kraut. Taking up the mop: identifying future wikipedia administrators. In CHI '08: CHI '08 Extended Abstracts on Human Factors in Computing Systems, pages 3441-3446. ACM, New York, NY, USA, 2008. ISBN 978-1-60558-012-X.

[12] Mary Elaine Califf and Raymond J. Mooney. Bottom-up relational learning of pattern matching rules for information extraction. J. Mach. Learn. Res., 4:177-210, 2003. ISSN 1533-7928.

[13] Fred Cohen. Risks of believing what you see on the wayback machine (archive.org). RISKS Digest, 25, January 7 2008. http://seclists.org/risks/2008/q1/0000.html.

[14] Susan Crawford. The computer fraud and abuse act, May 19 2008. http://scrawford.net/blog/the-computer-fraud-and-abuse-act/1172/.

[15] Ritendra Datta, Dhiraj Joshi, Jia Li, and James Z. Wang. Image retrieval: Ideas, influences, and trends of the new age. ACM Comput. Surv., 40(2):1-60, 2008. ISSN 0360-0300.

[16] Mark Delany. Domain-based email authentication using public-keys advertised in the DNS (DomainKeys), August 2004. INTERNET DRAFT.

[17] John P. Elwood. Admissibility in federal court of electronic copies of personnel records, May 30 2008. http://www.usdoj.gov/olc/2008/electronic-personnel-records.pdf.

[18] Facebook. Terms of use, September 28 2008. http://www.facebook.com/terms.php.

[19] Simson Garfinkel. Forensic feature extraction and cross-drive analysis. In Proceedings of the 6th Annual Digital Forensic Research Workshop (DFRWS), Lafayette, Indiana, August 2006. http://www.dfrws.org/2006/proceedings/10-Garfinkel.pdf.

[20] Simson L. Garfinkel. Wikipedia and the meaning of truth. Technology Review, November/December 2008. https://www.technologyreview.com/web/21558/.

[21] Simson L. Garfinkel. Providing cryptographic security and evidentiary chain-of-custody with the advanced forensic format, library, and tools. The International Journal of Digital Crime and Forensics, 1, January-March 2009.

[22] Nathaniel S. Good and Aaron Krekelberg. Usability and privacy: a study of Kazaa P2P file-sharing. In Proceedings of the Conference on Human Factors in Computing Systems, pages 137-144. ACM Press, 2003. ISBN 1-58113-630-7.

[23] Google. Google calendar help, 2009. http://www.google.com/support/calendar/.

[24] Saikat Guha and Paul Francis. In Privacy Enhancing Technologies, pages 153-166. Springer, 2007. http://www.cs.cornell.edu/people/francis/pet07-idtrail-cameraready.pdf.

[25] Karen Gullo. Panel of five to probe judge's sexual web postings. Bloomberg, June 17 2008. http://www.bloomberg.com/apps/news?pid=newsarchive&sid=ahD1O6qXYiGc.

[26] Sabine Helmers. A brief history of anon.penet.fi, the legendary anonymous remailer, September 1997. http://www.december.com/cmc/mag/1997/sep/helmers.html.

[27] Brian Hughes. Facebook for the great beyond, July 11 2007. http://media.www.redandblack.com/media/storage/paper871/news/2007/11/07/Variety/Facebook.For.The.Great.Beyond-3083145.shtml.

[28] Adam Jatowt, Yukiko Kawai, and Katsumi Tanaka. Detecting age of page content. In WIDM '07: Proceedings of the 9th Annual ACM International Workshop on Web Information and Data Management, pages 137-144. ACM, New York, NY, USA, 2007. ISBN 978-1-59593-829-9.

[29] Jeremy Leighton John. Adapting existing technologies for digitally archiving personal lives. In iPres 2008, 2008. http://www.bl.uk/ipres2008/programme.html.

[30] Patrick Juola. Authorship attribution. Found. Trends Inf. Retr., 1(3):233-334, 2006. ISSN 1554-0669.

[31] Judge Alex Kozinski calls for probe into his porn postings. Los Angeles Times, June 13 2008.

[32] Juliano Palmieri Lage, Altigran S. da Silva, Paulo B. Golgher, and Alberto H. F. Laender. Automatic generation of agents for collecting hidden web pages for data extraction. Data Knowl. Eng., 49(2):177-196, 2004. ISSN 0169-023X.

[33] Lawrence Lessig. The Kozinski mess, June 12 2008. http://www.lessig.org/blog/2008/06/the_kozinski_mess.html.

[34] Malorie Lucich. Re: facebook pages of dead people, January 16 2009. Personal communication.

[35] Andrew Martin. Whole foods executive used alias. The New York Times, July 12 2007. http://www.nytimes.com/2007/07/12/business/12foods.html.

[36] Frank McCown, Norou Diawara, and Michael L. Nelson. Factors affecting website reconstruction from the web infrastructure. In JCDL '07: Proceedings of the 7th ACM/IEEE-CS Joint Conference on Digital Libraries, pages 39-48. ACM, New York, NY, USA, 2007. ISBN 978-1-59593-644-8.

[37] Declan McCullagh. Finish line, September 1996. http://w2.eff.org/Misc/Publications/Declan_McCullagh/hw.finnish.line.090696.article.

[38] Elinor Mills. Conde Nast to buy Wired News, July 11 2006. http://news.cnet.com/Conde-Nast-to-buy-Wired-News/2100-1030_3-6093028.html.

[39] Jose E. Moreira, Maged M. Michael, Dilma Da Silva, Doron Shiloach, Parijat Dube, and Li Zhang. Scalability of the Nutch search engine. In ICS '07: Proceedings of the 21st Annual International Conference on Supercomputing, pages 3-12. ACM, New York, NY, USA, 2007. ISBN 978-1-59593-768-1.

[40] Dheerendranath Mundluru and Xiongwu Xia. Experiences in crawling deep web in the context of local search. In GIR '08: Proceedings of the 2nd International Workshop on Geographic Information Retrieval, pages 35-42. ACM, New York, NY, USA, 2008. ISBN 978-1-60558-253-5.

[41] Maureen Pennock and Brian Kelly. Archiving web site resources: a records management view. In WWW '06: Proceedings of the 15th International Conference on World Wide Web, pages 987-988. ACM, New York, NY, USA, 2006. ISBN 1-59593-323-9.

[42] Herman Chung-Hwa Rao, Yih-Farn Chen, and Ming-Feng Chen. A proxy-based personal web archiving service. SIGOPS Oper. Syst. Rev., 35(1):61-72, 2001. ISSN 0163-5980.

[43] Craig Richmond. Why mirroring is not a backup solution. January 2 2009. http://hardware.slashdot.org/article.pl?sid=09%2F01%2F02%2F1546214.

[44] Arnaud Sahuguet and Fabien Azavant. Building light-weight wrappers for legacy web data-sources using W4F. In VLDB '99: Proceedings of the 25th International Conference on Very Large Data Bases, pages 738-741. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1999. ISBN 1-55860-615-7.

[45] Adobe Systems. Adobe acrobat 4.0 for macintosh readme, March 15 1999.

[46] Jordi Turmo, Alicia Ageno, and Neus Català. Adaptive information extraction. ACM Comput. Surv., 38(2):4, 2006. ISSN 0360-0300.

[47] John Updike. Cut the unfunny comics, not spiderman. The Boston Globe, October 27 1994.

[48] Fernanda B. Viegas, Martin Wattenberg, and Kushal Dave. Studying cooperation and conflict between authors with history flow visualizations. In CHI '04: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 575-582. ACM, New York, NY, USA, 2004. ISBN 1-58113-702-8.

[49] Xiaoyun Wang and Hongbo Yu. How to break MD5 and other hash functions. In Ronald Cramer, editor, EUROCRYPT, volume 3494 of Lecture Notes in Computer Science, pages 19-35. Springer, 2005. ISBN 3-540-25910-4.

[50] W. Zhao, R. Chellappa, P. J. Phillips, and A. Rosenfeld. Face recognition: A literature survey. ACM Comput. Surv., 35(4):399-458, 2003. ISSN 0360-0300.