
Journal of Ambient Intelligence and Smart Environments 0 (2010) 1

IOS Press
A semantic web annotation tool for a
web-based audio sequencer
Luca Restagno^a, Vincent Akkermans^b, Giuseppe Rizzo^a and Antonio Servetti^a
^a Dipartimento di Automatica e Informatica (DAUIN), Politecnico di Torino, Corso Duca degli Abruzzi, 24, 10129 Torino, Italy
E-mail: luca.restagno@studenti.polito.it, {giuseppe.rizzo|antonio.servetti}@polito.it
^b Music Technology Group, Universitat Pompeu Fabra, Barcelona, Spain
E-mail: vincent.akkermans@upf.edu
Abstract. In this work we describe an ontology driven resource annotation tool based on semantic web technology and web
standards. It can load any OWL ontology and guide the end user in the annotation process with a simple graphical user interface.
Furthermore, a collaborative web-based audio sequencer was developed as a test case for the annotation tool. With this content
authoring tool, users can remix materials from the Freesound website and share their creations with others. Because of the
integration of these two components, users are able to annotate the sonic materials used, the intrinsic structure of the artistic
work, and the reasoning behind their actions during the creative process. We believe this approach will provide several novel
ways of making not only the final product, but also the creative or production process, first-class citizens of the semantic web.
Keywords: ontology driven annotation tool, semantic audio annotation, web-based audio sequencer, semantic web sequencer,
semantic web
1. Introduction
A key focus of the Semantic Web [2] is the pro-
cess of combining semantic information from several
sources in order to more easily understand and explore
that information programmatically. Currently, a large
number of web communities allow their members to
tag and annotate the content and thereby crowdsource
the content classification.
The ontology driven annotation tool detailed in this
work was developed to allow users to annotate any
kind of resource, but using concepts from a specific domain, as opposed to annotating with textual tags. The
main goal of the tool is its capability to load from
the web any ontology developed with the Web Ontology Language (OWL) [8]. The annotation tool lets the
user select an ontology from the web, choose the resource to annotate, and create annotations linking
concepts from the ontology to the resource. In addi-
tion, it creates an RDF/XML document [1] for each
annotation, which is ready to be stored to a server so
that the annotations are available to everyone.
Furthermore, the annotation tool has been completely developed using standard web languages, so
that it can be easily used inside other web applications. Moreover, it provides a user-friendly web frontend that makes ontology-driven semantic annotation
an easy task. To accomplish this goal, the tool loads
all the classes from the ontology together with their
attributes; this way the classes are easily browsable
through the tool's graphical user interface. For each
attribute of a class, the tool loads the appropriate user
interface widget, which lets the user specify a value.
The annotation tool has been extended to provide a
specic user interface for the annotation of audio con-
tent. For this purpose, it allows the user to load and
annotate sounds from the Freesound website
(http://www.freesound.org).
1876-1364/10/$17.00 © 2010 IOS Press and the authors. All rights reserved
The practice of collaboratively tagging media con-
tent, known as folksonomy, has become widespread
on the Internet. Users annotate and categorize content
by providing tags describing any and all possible as-
pects. One of the challenges in the area of annotation
is generating consistency across annotations in terms
of both the vocabulary used and the way it is used. A
possible solution is to let users annotate only with a
certain set of concepts defined in an ontology. An obvious disadvantage is the incompleteness of this approach. However, with several well designed ontologies, knowledgeable users can be guided to deliver
higher quality annotations. By developing a front-end
application, we built a dynamic environment which
permits reuse of the knowledge contained in ontologies, according to the Linked Data principle [9].
To assess our annotation tool we developed a web-based
audio sequencer with which users can mix sounds
available on the web by simply using their URLs. The
two applications have been integrated, realizing a com-
plete tool for composing and annotating sounds.
Music and sound generally have a rich semantic
structure. They communicate a message, designed by
the composer or sound designer, which ranges in its in-
tention from nonsensical or abstract to symbolic (e.g.
a piece of film music supporting a clear narrative). A
majority of media production nowadays is done with
software tools, which give rise to various new opportu-
nities to monitor the production process. In this work
we focused on the combination of the annotation tool
and the audio sequencer, both described earlier, to investigate the implications of this idea. Take for example video games, which generally have a non-linear
narrative that is often supported by affective music. As
the game world can give rise to a variety of situations,
the music should be able to adapt. If the composer is
able to formally describe pieces of his non-linear work,
and the game designer is as well, the game would be
able to generate new music from the musical material
provided by matching the formalized intentions in both
domains. Another example is music education, where
providing insight and understanding into the music is
the primary concern. A piece of music, whose struc-
ture and different aspects have been formally anno-
tated, could be represented in different ways and so
give students views on the work of art that match their
capabilities. Possibly, in future work, the artist could
develop his own ontology while working. This ontology would describe the themes of the work as seen by
the creator and allow himself and others to reflect on
it and learn from it.
The remainder of this paper is organized as follows:
a review of the current state of the art is presented in
Section 2, key ideas of our approach are introduced in
Section 3, and the annotation tool is described in
Section 4. A contextualization of the tool, by means
of a use case, is described in Section 5, followed by
conclusions and future work in Section 6.
2. Related Work
The Internet has become a large repository of re-
sources of any type. As the amount of digital content
grows it becomes increasingly important to improve
methods for description and retrieval. An often used
approach is annotating information with metadata in
order to more easily retrieve the information later. In
particular, annotation is often required to refine and
improve data descriptions, even when automatic feature
extraction tools are employed.
In this context we present an open source web tool that
can augment the user experience in the process of semantic annotation, because it allows the user to easily
perform an ontology driven annotation during the creative process of audio authoring by means of a web sequencer.
Previous works have already addressed the problem,
but with different approaches.
In the LabelMe Project [12], Russell et al. pro-
duced a web-based image tool useful to identify ob-
jects. Their goal was to provide a dynamic dataset that
would lead to new research in the areas of object recognition and computer graphics. Although this research
was looking into tagging analysis, they did not fo-
cus on annotation itself. Additionally, they used free
form textual tags, so the annotations were affected by
the typical folksonomy problems, like polysemy and
synonymy. In our approach we try to solve these problems using a controlled vocabulary: an ontology which
includes a hierarchy of concepts from a specific knowledge domain.
Indeed, in the M-OntoMat-Annotizer [10], Petridis
et al. have explored the usage of controlled vocabularies to improve semantic annotation of images. They
implemented different ontologies based on the MPEG-7
standard, to let users associate visual descriptors with
contents. Their tool permits the user to select a portion
of an image or a portion of a frame and associate with
it a concept retrieved from the provided ontology. Based on this
work, we take advantage of multiple formalized ontol-
ogy domains to extend the descriptive possibilities of
an annotator.
In [13], Wang et al. approached the annotation problem
by means of a set of ontologies, which are linked using a bridge ontology. This work overcame the problem of ontology reuse and prevented unnecessary ontology extension and integration. Although this idea
is promising, it presents only a general idea of how
to link ontologies, without proposing a clear method.
However, G. Kobilarov et al., following the previ-
ous idea, used a bridge or hub ontology for catego-
rizing multimedia documents located within the BBC
archives [6] and linking them to DBpedia [3]. By
means of this approach, they exploited object persistence in the BBC categorization system (CIS) and
mapped the resources according to DBpedia references: resource disambiguation is achieved and semantic information is augmented. Similarly to this approach, we provide the possibility to take advantage
of multiple ontologies, but we do not try to link concepts between them. Our tool permits the use of concepts from a single online available ontology at a time.
The ontology can however be switched whenever the
annotator feels the need.
In order to facilitate annotation and sharing of an-
notations between many users we implemented a web-
based solution. An attempt to distribute annotations
over the web is represented by the Annotea Project
[5], which aims to provide a system to share annota-
tions on a general-purpose open RDF infrastructure.
It suggests a possible set of technologies to imple-
ment a semantic web infrastructure for creating, edit-
ing, viewing and sharing annotations. We used some
of the framework technologies, like the Annotea An-
notation RDF Schema, and we developed a web-based
tool for the annotation of digital contents.
In addition, we extended the framework to allow
annotating resources by means of ontology concepts, in
order to exploit the knowledge available on the web.
In particular we focus on the annotation of audio re-
sources, providing a web user interface to annotate se-
lected parts of sound recordings. The goal is to provide
the capability to semantically describe a piece of mu-
sic, or a sound sequence in general, exploiting online
taxonomies and ontologies.
3. Rationale
The semantic web annotation tool is a web tool for
annotating any kind of resource. It can load any online
available OWL ontology and guides the user with a
simple user interface in the annotation process.
Fig. 1. The annotation tool acts as a knowledge aggregator on the
Web. It permits linking pieces of knowledge extracted from online
distributed repositories to digital resources on the web, in order to
describe their contents in a semantic way.
When starting the annotation process the user is al-
lowed to select the ontology that deals with the aspect
of the resource he or she wants to make statements
about. The tool then uses the information in the ontol-
ogy to make suggestions to the user.
Communities that use the free text tagging method
are presented with a set of problems, like polysemy,
synonymy, data scarcity, spelling errors and plurals.
Polysemous tags can return undesirable results. For
example, in a music collection, when a user searches
for the tag love, the results could contain both love
songs and songs that were tagged as such because
users liked them very much. Tag synonymy is also an
interesting problem. Even though it enriches the vocabulary, it also introduces inconsistencies among the terms
used in the annotation process. According to [7], bass
drum sounds can be annotated with the kick drum tag,
but these sounds will not be returned when searching
for bass drum. To avoid this problem, sometimes users
tend to add redundant tags to facilitate the retrieval
(e.g. using synth, synthesis, and synthetic for a given
sound).
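To illustrate how a controlled vocabulary sidesteps the synonymy problem described above, the following minimal sketch maps several free-text tags to one canonical concept; the concept URIs are hypothetical and not taken from any of the ontologies discussed in this paper.

```javascript
// Illustrative only: a tiny controlled-vocabulary lookup showing how
// synonymous free-text tags ("kick drum", "bass drum") can resolve to
// a single canonical concept instead of fragmenting the search space.
const conceptIndex = {
  "bass drum": "http://example.org/onto#BassDrum",
  "kick drum": "http://example.org/onto#BassDrum",
  "synth":     "http://example.org/onto#Synthesizer",
  "synthesis": "http://example.org/onto#Synthesizer",
  "synthetic": "http://example.org/onto#Synthesizer"
};

function resolveTag(tag) {
  // Normalize case and whitespace, then look up the canonical concept.
  return conceptIndex[tag.trim().toLowerCase()] || null;
}
```

With such a lookup, a search for either "bass drum" or "kick drum" reaches the same concept, which is exactly what free tagging fails to guarantee.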
Figure 1 shows how the tool is organized. It retrieves
information from knowledge repositories available on
the Web. They can be formalized ontologies, like the
large and popular Music Ontology [11], an attempt to
link all the information about musicians, albums and
tracks together. It can exploit ontologies specifically
developed for an application, like the Sound Producing Events Ontology, based on the work of William W.
Gaver. Moreover, it can use web databases that provide a query service based on semantic web technologies, like the DBpedia project, which gives access to
the large database of Wikipedia via semantic web resources. Pursuing this approach, we wanted to conform to the Linked Data principle of distributable and
connected pieces of information. The user can associate
any concept of the knowledge repository with the resource, in order to extend the semantic description of the
digital content.

Fig. 2. The annotation tool retrieves a set of concepts from distributed knowledge repositories (ontologies/taxonomies). Then it exposes the set
of concepts to the annotator through the graphical interface. Using this front-end, the user can link semantic concepts to an audio resource whose
contents were previously unknown. The result is an annotation: a document that stores the links the human annotator creates.
The tool provides an intuitive user interface that lets
users choose one of the classes in the ontology. When
the user is done annotating the annotations are con-
verted to the RDF syntax. This is then sent to the server
and saved in a triple store, ready to be retrieved and
queried.
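As a sketch of this last step (not the tool's actual code), a confirmed annotation could be serialized into a SPARQL 1.1 INSERT DATA update before being sent to the triple store; the use of the dc:subject predicate here is an assumption for illustration.

```javascript
// Hedged sketch: serialize one resource and its linked concepts into a
// SPARQL 1.1 INSERT DATA update string, ready to POST to a triple store.
// The choice of dc:subject as the linking predicate is illustrative.
function buildInsertUpdate(resourceUri, conceptUris) {
  const triples = conceptUris
    .map(c => `  <${resourceUri}> <http://purl.org/dc/elements/1.1/subject> <${c}> .`)
    .join('\n');
  return `INSERT DATA {\n${triples}\n}`;
}
```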
4. Annotation tool
The annotation tool consists of a client side and a
server side component.
4.1. Client-side component
The client side component is made of boxes, menus
and input fields that let the user navigate the classes provided by the ontology, choose one or more classes, and
specify the values for the attributes of a class, if present.
At the end of this process, the tool generates an RDF
representation of the annotations in order to store and
reuse them later. As the tool is designed for the web, it has
been developed using HTML for the page markup, CSS
for the graphical style, and JavaScript to handle the
program logic and the user interactions. The jQuery
framework (http://jquery.com/) was used to manage the
Document Object Model (DOM), and jQuery UI
(http://jqueryui.com/) provided the GUI components,
like autocomplete and datepickers, and complex behaviour
handlers, like draggable and droppable. The tool has been developed with
attention to modular programming. In order to allow
other developers to reuse the code the tool was divided
into three reusable modules: owl.js, owl-ui.js and owl-ui.audio.js.
owl.js: requests an interpretation of a specified
ontology from the server side component and
converts this to an internal data model;
owl-ui.js: is responsible for the creation of the annotation tool panel, composed of menus and dynamic textboxes. It requires the owl.js library to
populate the user interface widgets with the information retrieved from the ontology;
owl-ui.audio.js: creates an interface to annotate
audio files. It allows the user to listen to the file
and, using the audio waveform image, to select a
subpart of the sound in order to annotate it. It then
allows the user to open the annotation tool panel generated by the owl-ui.js library in order to annotate
the sound with the classes of the ontology.
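The kind of transformation owl.js performs can be sketched as follows; the shape of the SPARQL JSON bindings (?class, ?parent) is an assumption for illustration, not the module's documented API.

```javascript
// Hedged sketch of owl.js's internal data model: turn a flat list of
// SPARQL JSON bindings (class/parent pairs) into a nested class tree.
function buildClassTree(bindings) {
  const nodes = {};
  const hasParent = {};
  // Look up or create the node for a class URI.
  const node = uri => nodes[uri] || (nodes[uri] = { uri, children: [] });
  for (const b of bindings) {
    const child = node(b.class.value);
    if (b.parent) {
      // Attach the class under its declared parent.
      node(b.parent.value).children.push(child);
      hasParent[child.uri] = true;
    }
  }
  // Classes that never appear as someone's child are the roots.
  return Object.values(nodes).filter(n => !hasParent[n.uri]);
}
```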
This way the code is easily reusable in other developers' projects and it is extensible. Furthermore, it would
be relatively easy to develop special user interface
components for annotating video and text documents.

Fig. 3. An example of audio annotation using the annotation tool. On the left, the waveform of a sound retrieved from Freesound, reproducing
a dripping faucet. On the right, the annotation tool front-end, where the user is linking the Dripping category of Liquid sounds to a temporal
interval of the sound. The categories are provided by the Sound Producing Events Ontology, which is loaded from the Web.
4.2. Server-side component
The second part of our tool is represented by the
server-side component. It is a SPARQL Protocol and
RDF Query Language (SPARQL) endpoint which
makes queries over the ontology and retrieves all
classes, properties and attributes. The response is generated in the JavaScript Object Notation (JSON) format (but it is possible to request different output formats, like raw and XML) and is given back to the
client side, which is responsible for the generation of
requests, while the server side provides the web
service. The SPARQL endpoint has been developed
using a Linux machine with the Apache 2 web server
running and the PHP language available. Furthermore,
it has been necessary to install the Redland RDF Libraries, which are the main libraries used to handle the
RDF data stored in the ontology files. These files are
written using the OWL language, which has facilities for
expressing meaning and semantics and is formalized using the RDF data model, so the Redland RDF
Libraries are a key tool for manipulating the semantic
information stored on the web.
4.3. Annotation process details
Figure 2 shows the annotation tool flow chart. When
the tool is initialized, it makes a synchronous call to the
SPARQL Endpoint hosted by a server machine and it
sends three main parameters: the URL of the ontology
to query, the SPARQL query to execute and the format
of the response.
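A minimal sketch of how the client side could assemble these three parameters into a request; the endpoint path and parameter names are assumptions for illustration.

```javascript
// Hedged sketch: build the request sent to the SPARQL endpoint with the
// three parameters named in the text: ontology URL, query, and format.
function buildEndpointRequest(ontologyUrl, format) {
  // The query lists every class declared in the ontology.
  const query = [
    'PREFIX owl: <http://www.w3.org/2002/07/owl#>',
    'SELECT ?class WHERE { ?class a owl:Class }'
  ].join('\n');
  const params = new URLSearchParams({
    ontology: ontologyUrl,     // URL of the ontology to query
    query: query,              // the SPARQL query to execute
    format: format || 'json'   // desired response format
  });
  return '/sparql-endpoint.php?' + params.toString();
}
```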
The tool receives a response, by default a JSON ob-
ject, containing each class and subclass of the ontol-
ogy. Processing this data our tool creates a JavaScript
structure of objects containing the complete ontology
hierarchy of classes. Then the tool makes another call
to the server, in order to obtain the attributes for any
class. The modality is the same. At this point it is possible to create the user interface populated with the
data retrieved from the ontology. The created GUI widgets include a textbox, in which dynamic suggestions
are provided by means of the autocomplete feature.
The suggestions come from a list of class names extracted from the ontology, which the user can also traverse
through a tree menu to choose a concept related to the
resource he is annotating. When the user selects a class
from the menu or from the textbox, if the class has attributes, a new widget is presented where the user
can assign a value to each attribute. This happens if the
attribute has a data type recognized by the tool (integer,
decimal and float numbers, date, time and string).
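The choice of widget from a recognized data type can be sketched as a simple lookup; the widget names are assumptions, while the data type list follows the one given above.

```javascript
// Hedged sketch: map an attribute's declared XSD datatype to the user
// interface widget the tool could create for it.
const XSD = 'http://www.w3.org/2001/XMLSchema#';
const widgetFor = {
  [XSD + 'integer']: 'number-input',
  [XSD + 'decimal']: 'number-input',
  [XSD + 'float']:   'number-input',
  [XSD + 'date']:    'datepicker',
  [XSD + 'time']:    'time-input',
  [XSD + 'string']:  'text-input'
};

function selectWidget(datatypeUri) {
  // Unrecognized datatypes get no widget, mirroring the tool's behaviour.
  return widgetFor[datatypeUri] || null;
}
```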
The selected classes are collected into a stack and
when the annotation process is completed, the user
confirms the annotation. The tool generates the annotation itself in RDF/XML syntax. This piece of information can be sent to a server where it could be stored
into a triple store and retrieved back later.

Fig. 4. The annotation tool offers intuitive widgets to retrieve ontology concepts. The image illustrates a text field with the autocompletion feature.

Many web infrastructures are accustomed to representing information in HTML or, more generally, XML. For this reason, the W3C has recommended the use of the XML
serialization of RDF.
Thanks to namespaces and URIs, which uniquely identify
a resource, the generated RDF/XML annotation holds
the complete semantic description and the information
the user has associated with the resource.
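As a hedged sketch of what such a generated document could look like, the following function builds an RDF/XML annotation loosely modeled on the Annotea Annotation schema mentioned in Section 2; the exact elements and properties the tool emits may differ.

```javascript
// Hedged sketch: serialize one annotation (resource + linked concept)
// into an RDF/XML document using the Annotea annotation namespace.
// The dc:subject property for the concept link is an assumption.
function annotationToRdfXml(resourceUri, conceptUri) {
  return [
    '<?xml version="1.0"?>',
    '<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"',
    '         xmlns:a="http://www.w3.org/2000/10/annotation-ns#"',
    '         xmlns:dc="http://purl.org/dc/elements/1.1/">',
    '  <a:Annotation>',
    `    <a:annotates rdf:resource="${resourceUri}"/>`,
    `    <dc:subject rdf:resource="${conceptUri}"/>`,
    '  </a:Annotation>',
    '</rdf:RDF>'
  ].join('\n');
}
```

Because namespaces and URIs identify the resource and the concept unambiguously, such a document can be stored in a triple store and dereferenced later.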
4.4. An example of audio annotation
In this section we illustrate the widgets developed
to allow users to annotate audio resources.
In this specific case, we retrieved a dripping faucet
sound and its waveform image from the Freesound
repository.
As shown in Figure 3, the user can play back the
entire sound and select a part of it by clicking on the
waveform representation with the mouse. This way the
annotator can link multiple different concepts to the
same sound, or even to particular events that occur in
the sound, in order to describe the resource with high
accuracy.
The categories available are provided by external on-
tologies. In this case we loaded the Sound Producing
Events Ontology, based on the work of W. W. Gaver
[4] on a framework for describing sound in terms of
audible source attributes. We formalized a possible ontology based on the work of Gaver, using the Web Ontology Language (OWL), and released it on the web
in the form of an RDF file. The annotation tool extracts concepts from the online ontology, making them
available to the annotator. The ontology used in this
case can be easily substituted by specifying the URL of
another ontology, so that the concepts from the new
repository become available in the widgets, ready to
be linked to a digital resource.
The annotation tool front-end is composed of a text
field with autocompletion (as shown in Figure 4), so
that while typing the user receives suggestions on the
available concepts. Alternatively, the user can traverse
the concept hierarchy through a tree menu (as shown
in Figure 5), which illustrates the concept relationships and appears when clicking on the root concepts (in
this case Vibrating objects, Liquid sounds and Gasses).

Fig. 5. The annotation tool permits navigating the class hierarchy with a dynamic interactive menu.

On the lower part of the front-end there is a stack of
concepts the user has already linked to the resource, which
provides the possibility to edit the attributes of a concept, to show which part of the sound is annotated, and to
delete a concept link. Clicking on the Confirm button,
the annotation tool generates an RDF/XML document
which stores all the links and the information between
the OWL classes and the resource. This document can
be easily stored and retrieved back later.
We consider this tool a useful instrument to describe
audio contents and to aggregate pieces of information
to enhance the global meaning of resources' contents.
5. Use case: a web-based audio sequencer
In order to test the capabilities of our annotation tool
in terms of usability and reliability, we developed a
web-based audio sequencer where users can work with
sounds, mixing and annotating them in a production
environment.
The tool is available at the test project web address
http://scapis.polito.it:8080/wcs/. This site implements
the annotation tool technology described above and
works as a hub, because it references audio tracks hosted
on the Freesound repositories. Our audio tool will be integrated as soon as possible into the Freesound portal,
to augment the semantic audio meaning. Because the
tool is in a beta stage, we strongly recommend using the
Firefox browser for a good visualization.
We chose to realize a web-based audio sequencer
completely developed with standard web languages.
We used the standard mark-up language designed for
the web, HTML, the graphical customization allowed
by CSS stylesheets and we handled the business logic
and user interactions with JavaScript. We also tried
to exploit the multimedia capabilities of the new version of the HTML standard, but our project required
advanced audio synchronization features that HTML5
Audio does not yet provide. So we had to fall back on
the Adobe Flash technology, which is responsible for
handling the audio section of the sequencer and
communicates bidirectionally with the JavaScript layer.
The application has been designed on three different
layers.
The Audio Engine Layer is responsible for the
playback of the audio. It retrieves audio files from
the web, synchronizes the tracks and handles
the virtual timeline. It also offers advanced features
like mute, solo and per-track volume handling. It
permits looping a selection and reproduces a digital
metronome. It communicates with the Communication Layer;
The Communication Layer controls bidirectionally the Audio Engine Layer and the graphical interface of the application. When the user performs
an action on the visual elements, the action is
caught by the Communication Layer, which transmits the instructions to the Audio Engine Layer.
In the other direction, the Communication Layer receives information from the Audio Engine Layer
and updates the user interface;
The Graphical User Interface allows people to interact with the program in
advanced ways. It provides drag functionality,
buttons, sliders and editable text fields. When the
user interacts with the GUI, the Communication
Layer propagates the action to the Audio Engine.
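The bridging role of the Communication Layer can be sketched as follows; the method and event names are assumptions, and in the real tool the engine side is the Flash component reached through ExternalInterface.

```javascript
// Hedged sketch of the Communication Layer: translate GUI actions into
// Audio Engine commands and engine events into user interface updates.
function createCommunicationLayer(audioEngine, ui) {
  return {
    // GUI -> engine direction.
    onUserAction(action, payload) {
      switch (action) {
        case 'play':       audioEngine.play(); break;
        case 'stop':       audioEngine.stop(); break;
        case 'set-volume': audioEngine.setVolume(payload.track, payload.value); break;
      }
    },
    // Engine -> GUI direction.
    onEngineEvent(event, payload) {
      if (event === 'playhead') ui.updatePlayhead(payload.seconds);
    }
  };
}
```

Keeping both directions behind one object is what lets the GUI and the (Flash) engine evolve independently, as the three-layer design intends.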
Through this application users can mix sounds available over the web simply by using the URLs of the audio resources. It implements the core functionalities of a
sequencer, like audio playback, visual track synchronization and looping, so that users can create their
own audio compositions. We also integrated search
on the Creative Commons licensed sound repository
Freesound, so that users can retrieve useful elements
from a large repository.
The annotation tool has been integrated into this environment. The user can use it to give a semantic description of audio file contents. The tool makes it possible to select a portion of the audio content and annotate that part of it. In this way we improve the granularity of the annotation, enriching the semantic description of an audio resource. The knowledge domain
used as a vocabulary during the annotation process is
not predefined: the user can choose it from a list
of available domains, and new ontologies can be made
available by adding the URL of the ontology file.

Fig. 6. The architecture of the web sequencer is composed of three
layers: the Audio Engine, the Communication Layer and the Graphical Interface.
The technologies used permit easy integration of the
annotation tool on any website. What is needed is to
include the JavaScript libraries in the site code. Furthermore, the possibility to plug in any ontology from
the web makes the tool a potentially useful instrument
for websites that intend to make annotation an
available feature.
6. Conclusions and future work
In this paper we have presented a semantic annota-
tion tool integrated with a web-based audio sequencer.
A novel approach is used to manipulate multiple audio
contents in a reliable and dynamic way through a web
front-end, simply by pointing to their semantic references.
The value of the annotation tool lies in the fact that
it can load any OWL ontology and guide the end
user in the annotation process over the web. Using an
easy-to-use graphical interface, it permits the annotator to make links between formalized concepts and
resources not yet described, in order to increase the
global knowledge of the resources on the web.
Furthermore, we realized a collaborative web-based
audio sequencer to test our tool. With it, users can edit
materials coming from the Freesound website, mixing and
tagging them. The whole project has been guided by the
Linked Data principle, a new paradigm to link distributed data across the web, through the technologies
proposed by the W3C to realize the so-called Semantic
Web, or Web 3.0.
Fig. 7. The web sequencer user interface is similar to a professional audio editing program. It permits synchronizing tracks by dragging them on
the grid, controlling the audio playback of the composition, and zooming the view. It also implements search on the large Freesound database
of sounds. Thanks to the integration of the annotation tool, it is possible to describe each sound event of the composition with accuracy.
Future plans are to extend the tool by developing new
user interfaces to annotate other types of resources.
We focused on the annotation of audio contents, but
it is possible to easily implement visual interfaces
for the annotation of video, images and text documents. The tool has been developed to be modular, so it is not necessary to modify the core libraries. Furthermore, we would like to improve the
implementation of the web front-end, available at
http://scapis.polito.it:8080/wcs/, in order to make it
completely cross-browser and cross-platform, so that
after its deployment to the Freesound user community we
can experiment with better user interaction techniques and
semantic inferences among many users.
Acknowledgment
This project was done at the Music Technology
Group (MTG) of the Universitat Pompeu Fabra in
Barcelona, and supported by the Dipartimento di Au-
tomatica e Informatica (DAUIN) of the Politecnico di
Torino.
References
[1] D. Beckett and B. McBride, RDF/XML Syntax Specification, W3C Recommendation, 2004, available at:
http://www.w3.org/TR/rdf-syntax-grammar/.
[2] T. Berners-Lee, J. Hendler and O. Lassila, The Semantic Web, in
Scientific American, May 2001, pp. 34-43.
[3] C. Bizer, J. Lehmann, G. Kobilarov, S. Auer, C. Becker, R. Cyganiak and S. Hellmann, DBpedia: A Crystallization Point for
the Web of Data, in Journal of Web Semantics: Science, Services
and Agents on the World Wide Web, 2009, Issue 7, pp. 154-165.
[4] W. W. Gaver, What in the world do we hear? An ecological approach to auditory source perception, in Ecological Psychology,
5 (1), 1993, pp. 1-29.
[5] J. Kahan and M. Koivunen, Annotea: an open RDF infrastructure for shared Web annotations, in Proc. of the 10th International Conference on World Wide Web, pp. 623-632, 2001.
[6] G. Kobilarov, T. Scott, Y. Raimond, S. Oliver, C. Sizemore, M.
Smethurst, C. Bizer and R. Lee, Media Meets Semantic Web:
How the BBC Uses DBpedia and Linked Data to Make Connections, in Proc. of the 6th European Semantic Web Conference on
The Semantic Web: Research and Applications, 2009.
[7] E. Martínez, O. Celma, M. Sordo, B. de Jong and X. Serra, Ex-
tending the folksonomies of freesound.org using content-based
audio analysis, In Proc. of the International Conference on Sys-
tem Modelling and Control, July 2009.
[8] B. Motik, P. F. Patel-Schneider and B. Parsia, eds., OWL 2 Web
Ontology Language: Structural Specification and Functional-Style Syntax, W3C Recommendation, 2009, available at:
http://www.w3.org/TR/2009/REC-owl2-syntax-20091027/.
[9] M. Hausenblas, Exploiting Linked Data to Build web Applica-
tions, in IEEE Internet Computing, 2009, pp. 68-73.
[10] K. Petridis, D. Anastasopoulos, C. Saathoff, N. Timmer-
mann, I. Kompatsiaris and S. Staab, M-OntoMat-Annotizer:
Image Annotation. Linking Ontologies and Multimedia Low-
Level Features, In Proc. of the 10th International Conference on
Knowledge-Based and Intelligent Information and Engineering
Systems, Oct. 2006.
[11] Y. Raimond, C. Sutton and Mark Sandler, Interlinking Music-
Related Data on the web, in IEEE Multimedia, 2009, pp. 52-63.
[12] B. Russell, A. Torralba, K. Murphy and W. Freeman, LabelMe:
A Database and Web-Based Tool for Image Annotation, in International Journal of Computer Vision, 2008, pp. 157-173.
[13] P. Wang, B. Xu, J. Lu, D. Kang and Y. Li, A novel approach to
semantic annotation based on multi-ontologies, In Proc. of the
2004 IEEE International Conference on Machine Learning and
Cybernetics, 2004.