
Calculating Web Service Similarity using Ontology Learning with Machine Learning


Rupasingha A. H. M. Rupasingha, Incheon Paik
School of Computer Science and Engineering
University of Aizu
Aizu-Wakamatsu, Fukushima, Japan
hmrupasingha@gmail.com, paikic@u-aizu.ac.jp

Banage T. G. S. Kumara
Faculty of Applied Sciences
Sabaragamuwa University of Sri Lanka
Belihuloya, Sri Lanka
btgsk2000@gmail.com

Abstract: The Web is a popular, easy and common way to propagate information today, and with the growth of the Web, Web service discovery has become a challenging task. Clustering Web services into similar groups by calculating their semantic similarity is one way to overcome this issue. Several methods are used in current similarity calculation processes, such as knowledge-based, information-retrieval-based, text-mining, ontology-based and context-aware methods. In this paper, we present a method for calculating Web service similarity using both ontology learning and machine learning, in which a support vector machine is used for similarity calculation over the generated ontology instead of the edge-count-based method. Experimental results show that our hybrid approach of combining ontology learning and machine learning works efficiently and gives more accurate results than the two previous approaches.
Keywords: Web Service Clustering, Ontology Learning, Machine Learning, Context-Aware Service Similarity, Information Retrieval

I. INTRODUCTION

Since the Web is growing rapidly, the interest of business organizations in Web services, and their adoption of them, is also growing. Hence, the number of Web services published on the Web is increasing day by day. A Web service is a collection of related application functions and loosely coupled software components. It provides a means of discovering, communicating and executing transactions between clients all over the world in real time with minimal human interaction. In recent years, a large number of different Web services have been published on the Internet; therefore, efficiently discovering, selecting and recommending Web services has become a very significant and challenging task. Service selection and composition are major problems in the service computing research field, and they are also related to efficient discovery. Thus, increasing the performance of service discovery is very important. Grouping Web services by their similarity reduces the search space, and Web service clustering is an efficient approach that is currently widely applied in research and industry.
Web services can be clustered using functional properties such as input, output, precondition and effect [1],


nonfunctional properties such as cost, reliability and response time [2], or social properties such as sociability [3]. Most current research mainly focuses on functionality-based clustering. In this research, we cluster Web services based on their functional properties.
Web service similarity computation is a key part of Web service clustering. Several methods have been used to compute Web service similarity in functionality-based clustering approaches, such as information-retrieval (IR) based methods like cosine similarity, string-based methods, corpus-based methods, keyword matching (KM), ontology-based (OB) methods and context-aware similarity (CAS) based methods. Several problems are encountered in these approaches. KM methods are inadequate for declaring semantic concepts and lack up-to-date information. String-based methods are insufficient to express semantic concepts. IR and corpus-based methods are also unsuitable for fine-grained measurement of service semantics. Their similarity measures mainly focus on plain text; for the complex structures and polysemous words of Web services, they offer no fine-grained improvement. They also have further disadvantages, such as loss of machine-interpretable semantics, lack of up-to-date knowledge, and failure to identify synonyms or variations of terms. This may lead to low recall and low precision in Web service clustering.
Our approach mainly builds on two research works. The first is the hybrid term similarity (HTS) method [4], which combines ontology learning with IR-based term similarity: it first uses an ontology learning technique to generate ontologies from the hidden semantic patterns of complex terms and calculates Web service similarity using the generated ontology; if this fails to calculate the similarity, an IR-based method is used. Web services are then clustered according to the calculated similarity values. This approach did not consider the domain context when calculating the term similarity of Web service features; measuring similarity with the domain context taken into account can achieve higher accuracy. Another disadvantage of this approach is that it mainly focuses on the ontology hierarchy structure and documents, ignoring the latest up-to-date knowledge that exists between words.


To address this issue, they proposed the context-aware similarity (CAS) method [5], which extracts the domain context using machine learning techniques to produce context models for terms from the Web. Their proposed approach mainly targets functional clustering using context. They first proposed the context-aware similarity method to calculate the similarity of services (SoS) for five different domains, relying on sets of semantically related terms that are used frequently in a given domain, and extracted snippets from the Web using sources such as Google and Wikipedia to create a context. The SoS was calculated for the five domains and the data were trained using a Support Vector Machine (SVM). Using the extracted context and the trained SVMs, Web services were clustered with the spherical associated keyword space (SASKS) algorithm. However, the CAS method also has issues, with poor consideration of the different types of relationships among Web services, such as subclass-superclass relationships, data property relationships and object property relationships. Calculating Web service similarity with these relationships taken into account helps to increase the accuracy.
Therefore, in this paper we propose a hybrid approach that calculates the semantic similarity of Web services using both the HTS and CAS approaches. It uses up-to-date knowledge from the World Wide Web as a large information source, addresses the semantic issues, and encodes fine-grained information. Considering the behavior of up-to-date Web data helps to improve the accuracy of the similarity calculation procedure, and comparing an ontology hierarchy generated with these relationships in mind helps to achieve better results than the two previous approaches. Differentiating the results according to the domain context, rather than assuming a fixed domain, has a significant impact on the formation of service clusters. The new combined approach can be used in many disciplines because of its high level of accuracy, its ability to deal with high dimensions, and its flexibility in working with different sources of data.
The rest of the paper is structured as follows. Section II explains related work. Section III discusses the hybrid of the ontology learning and context-aware similarity approaches. Section IV discusses the experiment and evaluation, and Section V concludes the paper with future work.
II. RELATED WORK

In the past few years, numerous studies have been carried out in the field of Web services and, in particular, in Web service clustering. It is a hot topic and has recently gained momentum due to the heavy usage of the Web. Most research related to Web service clustering has used information-retrieval-based methods. Besides information-retrieval-based methods, ontology learning methods and machine learning methods have also been employed to calculate Web service similarity. Web service clustering and similarity calculation are the two research topics related to this new approach.
A. Web Service Clustering
Liu et al. [6] used a tree-traversing ant algorithm to cluster services and, for the similarity calculation of Web service features, used a corpus-based method based on the Normalized Google Distance (NGD). They extracted four features for their clustering approach, namely host name, context, content and name, from Web Services Description Language (WSDL) documents. Different services are sometimes published by the same host; because of this, considering the context and the host name does not lead to better service clustering. Elgazzar et al. [7] used the quality-threshold clustering algorithm and extracted messages, complex data types, ports with content, and service names as features. For the similarity calculation they used a KM technique and structure matching, which helps to decide the number of similar matches between Web services. They only considered pairwise matches and did not consider the semantic patterns of complex data types.
Chen et al. [8] proposed a WordNet-VSM model to calculate feature similarity by generating vectors from the service name, operations and messages. They used unsupervised neural networks based on a kernel cosine-similarity measure instead of traditional clustering algorithms.
The work in [9] presented a WordNet-based similarity calculation approach to measure the similarity between service names and text descriptions, along with a similarity calculation method for input and output parameters based on domain ontology. OWL-S files are used to extract the semantic information, and service similarity is calculated from it. K-medoids is used as the clustering algorithm. In [10], association rules are used to identify relationships between clustering parameters in Web Application Description Language documents; the authors proposed a combined method to empower RESTful semantic Web services using ontology learning and the Web Application Description Language. In [11], the researchers proposed a post-filtering method to increase cluster performance; it uses a CAS-based approach and rearranges incorrectly placed Web services. In [12], a machine learning method was proposed to understand the service space through a number of latent functional factors, together with a Bayesian Nonparametric Latent Factor Model (BN-LFF) for clustering and service representation.
Only a limited number of studies address nonfunctional-criteria-based and social-criteria-based clustering. In nonfunctional-based clustering approaches, a Web service attribute, namely Quality of Service (QoS), is used for the clustering process. In [13], the researchers proposed Web service clustering based on QoS properties with a genetic algorithm to improve the efficiency of service discovery. According to [14], the service selection algorithm also plays

a significant role. They proposed a new algorithm, called QSSAC, to solve the service selection problem. It is used for service clustering and can group a large number of Web services into a small number of identified clusters using their different QoS properties. The algorithm can handle near-optimal Web service selection and reduces the execution time. In [15], QASSA is used to resolve QoS-aware services using clustering techniques such as the K-means algorithm; the authors clustered services in a novel way according to QoS values. Chen Wu et al. [16] proposed a new credibility-aware QoS prediction approach with two-phase K-means clustering to improve prediction accuracy by reducing unreliable data.
The works in [17] and [18] proposed a social-criteria-based clustering method and a service composition approach, respectively. As a social property, they considered sociability preference in generating a global social service network, built by connecting isolated service islands to increase the sociability of services on a global scale, which helps to improve discovery and composition. However, this new approach considers only functionality-based clustering.
B. Calculating Web Service Similarity
Cosine similarity [19] is used to calculate the similarity of features; it measures the similarity between two sentences or documents, but it becomes difficult to apply to complex Web service terms. To discover related Web services, [6] clustered services using four types of features based on the tree-traversing ant algorithm, with similarity computed using the Normalized Google Distance (NGD). However, NGD does not consider the context in which the terms occur.
Based on Web service semantics and clustering, the work in [20] presented a Web service discovery approach that computes similarity values of Web services using WordNet and ontologies. In [21], Web-based similarity computation methods such as Web-Jaccard, Web-Dice, Web-Overlap and Web-PMI were used. In [22], the researchers used cosine similarity as an IR-based method for feature similarity calculation. IR techniques like cosine similarity mainly focus on plain text and are very problematic for the more complex terms in Web services.
III. HYBRID OF ONTOLOGY LEARNING AND CONTEXT-AWARE SIMILARITY APPROACHES
A. Summary of Hybrid Term Similarity (HTS) method
The first research work [4] presented an HTS method based on explicit formal specifications of terms and the relations among them. In this ontology learning method, ontologies are generated by extracting features from WSDL documents, such as the service name, domain name, operation name, and input and output messages, which are powerful for describing the characteristics and functionality of Web services. An ontology is then generated for each extracted feature, describing relationships among Web services such as subclass-superclass relationships, data property relationships and object property relationships.
An ontology describes a set of representational primitives and specifications with the interrelationships of the entities that exist in a particular domain. Each relation of the ontology describes interactions between ontology concepts or a concept's properties. Two types of relations are considered, namely the concept hierarchy (Subclass-Superclass) and triples (Subject-Predicate-Object). Consideration of these relations also helps to improve the accuracy of our new approach.
Take S as a set of concepts {C_1, C_2, ..., C_n} in the ontology hierarchy.
Analysis 1 (Subclass-Superclass relationship): A concept can be an individual term (Student) or a complex term (GraduateStudent). Here, Student is modified by Graduate in GraduateStudent. If a concept is a complex term, then the head of the concept (Student) is its rightmost term and the modifier of the concept (Graduate) is the element to its left. Therefore, GraduateStudent is a subclass of Student.
Analysis 2 (Property relationship):
Analysis 2.1 (Data property relationship): Consider the complex term StudentName. If Name is not a concept and Student is a concept, then StudentName is a data property of Student.
Analysis 2.2 (Object property relationship): Consider a concept Computer and a concept ComputerFaculty; the relationship can then be described as ComputerFaculty has Computer and Computer has ComputerFaculty. Now consider two concepts GraduateStudent and GraduateDepartment. If the term Graduate is not a concept, then the relationship can be expressed as GraduateStudent has GraduateDepartment and GraduateDepartment has GraduateStudent.
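As an illustration only, the following plain-Java sketch applies the three analyses above to CamelCase terms extracted from WSDL features; the concept set, the example terms and the textual output are hypothetical, and a real implementation would materialize the resulting relationships in an ontology (for example with the Jena framework used in our experiments).

import java.util.*;

// A minimal sketch of Analyses 1-2.2: decide how a complex CamelCase term
// relates to known concepts. The concept set and terms are illustrative only.
public class OntologyRelationHeuristics {

    // Known concepts extracted from WSDL features (hypothetical example set).
    static final Set<String> CONCEPTS = new HashSet<>(
            Arrays.asList("Student", "GraduateStudent", "Computer", "ComputerFaculty"));

    // Split a CamelCase term such as "GraduateStudent" into ["Graduate", "Student"].
    static List<String> splitCamelCase(String term) {
        return Arrays.asList(term.split("(?<=[a-z])(?=[A-Z])"));
    }

    // Analysis 1: the head (rightmost token) is the superclass of the complex term.
    static String subclassRelation(String complexTerm) {
        List<String> parts = splitCamelCase(complexTerm);
        String head = parts.get(parts.size() - 1);
        if (parts.size() > 1 && CONCEPTS.contains(head)) {
            return complexTerm + " is a subclass of " + head;
        }
        return null;
    }

    // Analysis 2.1: if the head is NOT a concept but the modifier is, the
    // complex term becomes a data property of the modifier concept.
    static String dataPropertyRelation(String complexTerm) {
        List<String> parts = splitCamelCase(complexTerm);
        if (parts.size() != 2) return null;
        String modifier = parts.get(0), head = parts.get(1);
        if (!CONCEPTS.contains(head) && CONCEPTS.contains(modifier)) {
            return complexTerm + " is a data property of " + modifier;
        }
        return null;
    }

    // Analysis 2.2: two concepts sharing a non-concept modifier are linked by
    // mutual "has" object properties.
    static String objectPropertyRelation(String conceptA, String conceptB) {
        String modA = splitCamelCase(conceptA).get(0);
        String modB = splitCamelCase(conceptB).get(0);
        if (modA.equals(modB) && !CONCEPTS.contains(modA)) {
            return conceptA + " has " + conceptB + "; " + conceptB + " has " + conceptA;
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(subclassRelation("GraduateStudent"));      // subclass of Student
        System.out.println(dataPropertyRelation("StudentName"));      // data property of Student
        System.out.println(objectPropertyRelation("GraduateStudent", "GraduateDepartment"));
    }
}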
If the same ontology contains any concepts that relate to the service features, the fraction of semantic matching is calculated by applying different filters. The Exact, Plug-in and Subsumes filters [23] are used, together with the Property-&-Concept, Property-&-Property, Sibling, Logic Fail and Fail filters. Based on their degree of strength, the filters are applied in the order Exact > Property-&-Concept > Property-&-Property > Plug-in > Sibling > Subsumes > Logic Fail > Fail.
If the matching filter is Exact, then the similarity is equal to the highest value, 1. For the other machine filters, namely Property-&-Concept, Property-&-Property, Plug-in, Sibling, Subsumes and Logic Fail, the following equation is used to calculate the similarity:

Sim(C_1, C_2) = α + β · Sim_EC(C_1, C_2)    (1)


Here, α is assigned according to the matched machine filter and β weights the edge-count-based similarity value Sim_EC, with α + β = 1.
The following equation describes the similarity calculation of the edge-count-based method over the generated ontology [5]. Here, C_1 and C_2 are two concepts, L(C_1, C_2) is the shortest distance between the two concepts in the hierarchy, and D is the maximum depth of the ontology used for the similarity calculation:

Sim_EC(C_1, C_2) = (2D - L(C_1, C_2)) / (2D)    (2)


If the similarity calculation using the generated ontology fails, IR-based methods such as thesaurus-based term similarity and search-engine-based (SEB) term similarity are applied. Based on the resulting similarity values, Web services are clustered using a cluster-center method with term frequency-inverse document frequency values of the service names.
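To make the HTS similarity computation concrete, the following sketch evaluates (2) over a small, made-up concept hierarchy and then applies the weighted combination of (1); the hierarchy, the chosen filter and all names are illustrative assumptions rather than the authors' implementation, and the α values reuse the weights later listed in Table I.

import java.util.*;

// A minimal sketch of the edge-count similarity in (2) and the weighted
// combination in (1). The hierarchy and the filter weights are assumptions.
public class HtsSimilaritySketch {

    // Child -> parent links of a small example ontology hierarchy.
    static final Map<String, String> PARENT = new HashMap<>();
    static {
        PARENT.put("GraduateStudent", "Student");
        PARENT.put("UndergraduateStudent", "Student");
        PARENT.put("Student", "Person");
        PARENT.put("Lecturer", "Person");
        PARENT.put("Person", "Thing");
    }

    // Example filter weights alpha (as in Table I); beta = 1 - alpha.
    static final Map<String, Double> ALPHA = Map.of(
            "Property-&-Concept", 0.9, "Property-&-Property", 0.88,
            "Plug-in", 0.83, "Sibling", 0.8, "Subsumes", 0.72, "Logic fail", 0.6);

    static int depth(String c) {                 // number of edges from c up to the root
        int d = 0;
        while (PARENT.containsKey(c)) { c = PARENT.get(c); d++; }
        return d;
    }

    static int shortestPath(String c1, String c2) {   // path length through the common ancestor
        Set<String> ancestors = new HashSet<>();
        for (String c = c1; c != null; c = PARENT.get(c)) ancestors.add(c);
        int up = 0;
        String c = c2;
        while (c != null && !ancestors.contains(c)) { c = PARENT.get(c); up++; }
        int lcaDepth = (c == null) ? 0 : depth(c);
        return (depth(c1) - lcaDepth) + up;
    }

    static int maxDepth() {                      // D: maximum depth of the ontology
        int d = 0;
        for (String c : PARENT.keySet()) d = Math.max(d, depth(c));
        return d;
    }

    // Equation (2): Sim_EC = (2D - L(c1, c2)) / 2D
    static double edgeCountSim(String c1, String c2) {
        double d = maxDepth();
        return (2 * d - shortestPath(c1, c2)) / (2 * d);
    }

    // Equation (1): Sim = alpha + beta * Sim_EC, with alpha + beta = 1.
    static double hybridSim(String filter, String c1, String c2) {
        double alpha = ALPHA.get(filter);
        return alpha + (1 - alpha) * edgeCountSim(c1, c2);
    }

    public static void main(String[] args) {
        System.out.printf("Sim_EC = %.3f%n", edgeCountSim("GraduateStudent", "Lecturer"));
        System.out.printf("Sim    = %.3f%n", hybridSim("Plug-in", "GraduateStudent", "Lecturer"));
    }
}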
B. Summary of Context Aware Similarity (CAS) method
The second research work [5] presents a method considering five domains: vehicle, food, medical, film and book. It also uses the structure of WSDL documents, which describe the characteristics of Web services, to extract features. A CAS module then creates a context from snippets retrieved from Web search engines with the help of a set of frequently used terms. A feature vector is generated from this context for positive and negative term pairs. An SVM technique is then adopted to compute the similarity of features (SoFs) within a particular domain, and the SoS values are calculated as an aggregation of the individual SoF values. The Associated Keyword Space (ASKS) [24] algorithm is used for clustering. Since the ASKS algorithm is able to project filtering results for different domains from a three-dimensional sphere onto a two-dimensional spherical surface, it supports two-dimensional visualization. Visualization is easily comparable, looks better and attracts more attention than a textual format; it is also effective for noisy data and easier for humans to understand.
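As a rough illustration of the CAS feature construction, the sketch below derives a small co-occurrence-based feature vector for a term pair from retrieved snippets and passes it to an SVM model; the SvmModel interface, the three features and the snippet list are hypothetical stand-ins for the authors' actual context model and trained SVMs.

import java.util.*;

// A hedged sketch of CAS-style feature construction: given Web snippets for a
// domain, derive a small feature vector for a term pair and score it with a
// trained SVM. The SvmModel interface and the features are illustrative only.
public class CasSimilaritySketch {

    // Placeholder for a trained per-domain SVM; a real implementation would
    // use an SVM library trained on positive and negative term pairs.
    interface SvmModel {
        double predictSimilarity(double[] features);
    }

    // Count snippets containing a term (very rough contextual evidence).
    static long countContaining(List<String> snippets, String term) {
        return snippets.stream()
                .filter(s -> s.toLowerCase().contains(term.toLowerCase()))
                .count();
    }

    // Build a tiny feature vector: individual occurrence rates and a
    // co-occurrence rate of the two terms within the domain snippets.
    static double[] featureVector(List<String> snippets, String t1, String t2) {
        double n = Math.max(1, snippets.size());
        double f1 = countContaining(snippets, t1) / n;
        double f2 = countContaining(snippets, t2) / n;
        double both = snippets.stream()
                .filter(s -> s.toLowerCase().contains(t1.toLowerCase())
                          && s.toLowerCase().contains(t2.toLowerCase()))
                .count() / n;
        return new double[] { f1, f2, both };
    }

    public static void main(String[] args) {
        // Hypothetical snippets retrieved for the "vehicle" domain context.
        List<String> snippets = Arrays.asList(
                "A car is a road vehicle used to carry passengers",
                "The automobile, or car, transformed personal transport",
                "A bicycle is a human-powered vehicle with two wheels");

        // Dummy model standing in for a per-domain trained SVM.
        SvmModel model = features -> Math.min(1.0, features[2] * 2 + 0.3);

        double[] fv = featureVector(snippets, "car", "automobile");
        System.out.printf("SoF(car, automobile) = %.2f%n", model.predictSimilarity(fv));
    }
}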
C. Proposed clustering approach
Our proposed hybrid HTS and CAS approach uses WSDL files to cluster Web services. Fig. 1 illustrates the approach in terms of five phases. In the first phase, features that describe the characteristics of Web services are extracted. In the second phase, an ontology is generated for each extracted feature. Phase 3 uses a support vector machine to compute the similarity values over the ontologies. If the similarity calculation fails using the generated ontologies, the IR-based term-similarity method is applied, as in the previous HTS method. In phase 4, the final similarity value is computed by integrating the five features. In the final phase, Web services are clustered using an agglomerative clustering algorithm, as in the HTS approach.
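The five phases could be wired together roughly as in the sketch below; every method body here is a deliberately simplified placeholder (the real phases use the generated ontologies, the SVM and the IR fallback), and the integration of the per-feature similarities is assumed here to be a simple average.

import java.util.*;

// A skeletal outline of the five phases of the proposed approach; every type,
// method and value below is a simplified placeholder, not the authors' code.
public class HybridPipelineSketch {

    // Phase 1: extract features (service name, domain, operation, messages) per service.
    static Map<String, List<String>> extractFeatures(String serviceName) {
        List<String> tokens = Arrays.asList(serviceName.split("(?<=[a-z])(?=[A-Z])"));
        return Map.of("serviceName", tokens,
                      "operation", List.of(serviceName + "Operation"));
    }

    // Phases 2-3: in the full approach an ontology is generated per feature and an
    // SVM/ontology-based similarity is computed; stubbed here as simple term overlap.
    static double featureSimilarity(List<String> f1, List<String> f2) {
        Set<String> overlap = new HashSet<>(f1);
        overlap.retainAll(f2);
        return (double) overlap.size() / Math.max(f1.size(), f2.size());
    }

    // Phase 4: integrate the per-feature similarities into one service similarity
    // (assumed here to be a plain average).
    static double integrate(List<Double> perFeatureSims) {
        return perFeatureSims.stream().mapToDouble(Double::doubleValue).average().orElse(0.0);
    }

    // Phase 5 (agglomerative clustering) is sketched separately further below.
    public static void main(String[] args) {
        Map<String, List<String>> a = extractFeatures("CarRentalService");
        Map<String, List<String>> b = extractFeatures("CarBookingService");
        List<Double> sims = new ArrayList<>();
        for (String feature : a.keySet()) {
            sims.add(featureSimilarity(a.get(feature), b.get(feature)));
        }
        System.out.printf("Integrated service similarity = %.2f%n", integrate(sims));
    }
}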
D. Hybridization of the HTS and CAS approaches
For our purpose, calculating the similarity of Web services by considering only the generated ontology is not sufficient. Web services have a direct relationship with the World Wide Web, and Web data changes rapidly, so comparing the behavior of the latest Web data helps to improve the accuracy of the similarity calculation procedure.

Fig. 1. Overview of the combined HTS and CAS approach


In addition, considering the relationships of the ontology hierarchy helps to improve the performance.
This new approach mainly builds on the two related research works. As mentioned above, the first method [4] calculated Web service similarity using the generated ontology, with logic-based filters and an edge-count-based method. The edge-based approach measures the minimal distance between concepts in an ontology hierarchy; the shortest distance between concepts and the maximum depth of the ontology are used to calculate the similarity. If the similarity calculation using the ontology fails, IR-based methods such as thesaurus-based term similarity and SEB term similarity are used, where thesaurus-based term similarity is largely based on similarity measures between documents. Except for SEB term similarity, the disadvantage of these approaches is that they mainly focus on the ontology hierarchy structure and documents, ignoring the latest Web-data relationships between words.
The new approach combines this first approach [4] with the second approach [5]: calculating Web service similarity by applying machine learning with an SVM to the learned ontology gives better results. Features are extracted from WSDL documents, which are written in an XML-based interface definition language, and help to calculate the similarity between services. The extracted features are the service name, domain name, operation name, input messages and output messages, which are used for the Web service similarity calculations. The ontology is then generated using the ontology hierarchy relationships.
Equation (1) above is used to calculate the similarity using the generated ontology together with the CAS method. Here, α is assigned according to the ontology-learning filter and β weights the CAS-based similarity value, with α + β = 1; that is, the edge-count term Sim_EC in (1) is replaced by the CAS-based similarity. The weights are assigned according to Table I.

Fig. 2 shows the process of our new hybridization of HTS and CAS. It illustrates that, after the ontology is created, the similarity calculation step is carried out with the help of the HTS, CAS and IR-based methods, followed by the feature integration and clustering processes.

Fig. 2. Hybridization of the HTS and CAS approaches

Fig. 3 describes the similarity calculation procedure in more detail, using the ontology and the SVM. First, the SVM domain model is generated using a domain context. Then the similarity value for the first and second terms is calculated. After the ontology relationship value is applied to this SVM similarity value, the final similarity value is obtained.

Table I. Assigned values for α and β

Machine filter         α      β
Property-&-Concept     0.9    0.1
Property-&-Property    0.88   0.12
Plug-in                0.83   0.17
Sibling                0.8    0.2
Subsumes               0.72   0.28
Logic fail             0.6    0.4

The CAS-based similarity value that replaces the edge-count term of (2) in the combination (1) is calculated using the CAS method. The same procedure described above for the CAS method is used to calculate the term similarity values: a context is created using frequently used terms from Google and Wikipedia, the feature vector is computed, and the SVM is used to calculate the similarity value.

Fig. 3. Similarity Calculation Procedure
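Tying the pieces together, the hybrid similarity of a term pair could then be combined as below, with the filter weight α taken from Table I and the second term coming from the SVM-based CAS similarity; the concrete numbers are illustrative only.

// A short sketch of the hybrid combination in (1): the filter weight alpha
// comes from Table I and the second term is the SVM-based CAS similarity
// (illustrative values, not measured results).
public class HybridCombinationSketch {
    public static void main(String[] args) {
        double alpha = 0.83;        // e.g., the Plug-in filter weight from Table I
        double beta = 1 - alpha;    // alpha + beta = 1
        double simCas = 0.91;       // SVM-predicted similarity for the term pair (assumed)
        double sim = alpha + beta * simCas;
        System.out.printf("Hybrid similarity = %.3f%n", sim);   // 0.83 + 0.17 * 0.91
    }
}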

According to the resulting similarity values, the clustering procedure continues using an agglomerative clustering algorithm with a cluster-center method based on term frequency-inverse document frequency values of the service names [4]. It is a bottom-up hierarchical clustering method that can easily handle any form of similarity or distance with a low computation cost.
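The sketch below shows a generic bottom-up agglomerative clustering over a precomputed service-similarity matrix, merging the most similar pair of clusters until a target number of clusters remains; average linkage, the stopping rule and the matrix values are assumptions for illustration, not necessarily the settings used in [4].

import java.util.*;

// A generic sketch of bottom-up (agglomerative) clustering over a precomputed
// service-similarity matrix. Average linkage and the stopping rule are assumptions.
public class AgglomerativeClusteringSketch {

    static List<List<Integer>> cluster(double[][] sim, int targetClusters) {
        List<List<Integer>> clusters = new ArrayList<>();
        for (int i = 0; i < sim.length; i++) {
            clusters.add(new ArrayList<>(List.of(i)));      // start: one service per cluster
        }
        while (clusters.size() > targetClusters) {
            int bestA = -1, bestB = -1;
            double bestSim = -1;
            // Find the pair of clusters with the highest average pairwise similarity.
            for (int a = 0; a < clusters.size(); a++) {
                for (int b = a + 1; b < clusters.size(); b++) {
                    double s = averageLinkage(sim, clusters.get(a), clusters.get(b));
                    if (s > bestSim) { bestSim = s; bestA = a; bestB = b; }
                }
            }
            clusters.get(bestA).addAll(clusters.get(bestB));    // merge the closest pair
            clusters.remove(bestB);
        }
        return clusters;
    }

    static double averageLinkage(double[][] sim, List<Integer> a, List<Integer> b) {
        double sum = 0;
        for (int i : a) for (int j : b) sum += sim[i][j];
        return sum / (a.size() * b.size());
    }

    public static void main(String[] args) {
        // Toy similarity matrix for four services (symmetric, illustrative values).
        double[][] sim = {
                {1.00, 0.92, 0.15, 0.20},
                {0.92, 1.00, 0.18, 0.22},
                {0.15, 0.18, 1.00, 0.88},
                {0.20, 0.22, 0.88, 1.00}};
        System.out.println(cluster(sim, 2));    // expected: [[0, 1], [2, 3]]
    }
}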

IV. EXPERIMENT AND EVALUATION


The experiments were run on Microsoft Windows 7, on an Intel Core i7-4790 at 3.60 GHz with 8.00 GB of RAM. Java was used as the programming language, and the Jena framework was used to build the ontologies.


A. Assigning weights
The results are evaluated by assigning different values to α and β in (1), according to Table II.

Table II. Experimental values for α and β

                       Weight 1      Weight 2      Weight 3      Weight 4
Machine filter         α      β      α      β      α      β      α      β
Property-&-Concept     0.95   0.05   0.9    0.1    0.87   0.13   0.8    0.2
Property-&-Property    0.9    0.1    0.88   0.12   0.85   0.15   0.78   0.22
Plug-in                0.85   0.15   0.83   0.17   0.8    0.2    0.74   0.26
Sibling                0.83   0.17   0.8    0.2    0.77   0.23   0.72   0.28
Subsumes               0.75   0.25   0.72   0.28   0.7    0.3    0.65   0.35
Logic fail             0.65   0.35   0.6    0.4    0.58   0.42   0.5    0.5

Finally, the new method selected the Weight 2 values in the above table for α and β, considering the seven filters, because they gave better results than the other values.

B. Cluster Evaluation
The clustering results identified five main domain categories: Vehicle, Medical, Film, Food and Book. Cluster quality was evaluated using purity, entropy, precision, recall and F-measure, which have also been used in previous research to evaluate clustering results. In the evaluation procedure, we evaluated the edge-count-based method, the HTS approach, which uses ontology learning, the CAS approach, which uses machine learning, and our approach, which uses both ontology learning and machine learning.
Purity is used to measure the proportion of correctly classified services per cluster and is defined as:

Purity = (1/n) · Σ_i max_j n_ij

Here, n is the total number of services and n_ij is the number of services in cluster i belonging to domain class j. Fig. 4 shows the purity values of the compared approaches with respect to the number of services.

Fig. 4. Cluster performance with HTS+CAS approach (Purity)

Entropy is used to measure how the semantic classes are distributed within each cluster; lower entropy means better clustering. The entropy of cluster p is defined as:

E_p = - Σ_{q=1..k} (n_pq / n_p) · log(n_pq / n_p)

Here, k is the number of domain classes included in the data set, n_pq is the number of services of the q-th domain class that were assigned to the p-th cluster, and n_p is the number of Web services included in cluster p. The entropy of the entire clustering is defined as:

E = Σ_p (n_p / n) · E_p

Fig. 5 shows the entropy values of the compared approaches with respect to the number of services.

Fig. 5. Cluster performance with HTS+CAS approach (Entropy)

According to the results, purity decreases and entropy increases as the number of services grows, while our approach maintains lower entropy and higher purity values throughout.


Table III. Performance measures of clusters (Precision% / Recall% / F-measure%)

Cluster   Edge-count-based term      HTS approach              CAS approach              HTS+CAS new approach
          similarity using WordNet
Vehicle   56.00 / 80.90 / 66.20      81.60 / 94.80 / 87.70     89.47 / 89.47 / 89.47     100.00 / 93.67 / 96.73
Medical   100.00 / 70.00 / 82.40     100.00 / 83.10 / 90.80    88.10 / 100.00 / 93.67    100.00 / 94.64 / 97.25
Food      55.00 / 60.00 / 57.40      96.00 / 91.30 / 93.60     85.71 / 93.10 / 89.26     90.53 / 98.85 / 94.51
Film      77.70 / 50.00 / 60.80      91.00 / 88.00 / 89.50     87.50 / 84.85 / 86.15     98.59 / 100.00 / 99.29
Book      67.10 / 61.30 / 64.10      86.70 / 100.00 / 92.90    82.14 / 92.00 / 86.79     89.61 / 80.23 / 84.66

Furthermore, the rate of entropy increase is greater in the HTS and edge-count-based methods, and the rate of decrease in purity is smaller in our new approach. According to these results, the combined HTS and CAS approach improves the clustering performance.
As additional evaluation criteria for our combined HTS and CAS approach, precision, recall and F-measure were used for cluster evaluation. Precision measures the relevancy of the results, recall quantifies how many of the truly related results are returned, and the F-measure combines precision and recall. They are defined as:

Precision(p, q) = n_pq / n_q

Recall(p, q) = n_pq / n_p

F(p, q) = 2 · Precision(p, q) · Recall(p, q) / (Precision(p, q) + Recall(p, q))

Here, n_pq is the number of members of class p in cluster q, n_q is the number of members of cluster q, and n_p is the number of members of class p. The last equation gives the F-measure of cluster q with respect to class p.
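For concreteness, the sketch below computes purity, precision, recall and F-measure from a cluster-versus-class contingency table; the counts used in main are small made-up numbers, not the experimental data.

// A small sketch computing purity, precision, recall and F-measure from a
// contingency table counts[cluster][class]; the sample counts are made up.
public class ClusterMetricsSketch {

    static double purity(int[][] counts, int n) {
        int sumMax = 0;
        for (int[] cluster : counts) {
            int max = 0;
            for (int c : cluster) max = Math.max(max, c);
            sumMax += max;                       // best-represented class per cluster
        }
        return (double) sumMax / n;
    }

    static double precision(int nPQ, int nQ) { return (double) nPQ / nQ; }   // n_pq / n_q
    static double recall(int nPQ, int nP)    { return (double) nPQ / nP; }   // n_pq / n_p
    static double fMeasure(double p, double r) { return 2 * p * r / (p + r); }

    public static void main(String[] args) {
        // rows: clusters, columns: domain classes (illustrative counts only)
        int[][] counts = { {18, 2}, {3, 17} };
        int n = 40;

        System.out.printf("Purity = %.3f%n", purity(counts, n));             // (18+17)/40

        int nPQ = 18;                            // members of class 0 found in cluster 0
        int nQ  = counts[0][0] + counts[0][1];   // size of cluster 0
        int nP  = counts[0][0] + counts[1][0];   // size of class 0
        double p = precision(nPQ, nQ), r = recall(nPQ, nP);
        System.out.printf("Precision = %.3f, Recall = %.3f, F = %.3f%n", p, r, fMeasure(p, r));
    }
}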
In the evaluation procedure, the new approach was compared with the edge-count-based method, the HTS approach and the CAS approach. Table III presents the precision, recall and F-measure values for these approaches.
According to the experimental results shown in Table III, there are no false positives for the Vehicle and Medical clusters in our approach, so their precision values are 100%. The Vehicle cluster obtained precision values higher by 44% than the edge-count-based method and by 19% and 11% than HTS and CAS, respectively. The Medical cluster obtained a precision value 12% higher than the CAS method. For all other clusters as well, our approach obtained better precision values. The Book cluster obtained the lowest precision and recall values, while all the other clusters obtained precision and recall values greater than 90%. In terms of recall, the Film cluster obtained 100% with our approach, higher by 50% than the edge-count-based method and by 12% and 16% than HTS and CAS, respectively.
When comparing the results, it can be observed that some extracted features of WSDL documents failed to identify their ontology; in particular, some services from the Book cluster tend to be placed in the wrong cluster. As with the precision and recall values, our approach obtained higher F-measure values; the F-measure increased for all clusters except the Book cluster by placing services correctly.
Conducting a comparative study with existing approaches increases the validity of the proposed approach, and these evaluation results verify that the proposed approach is more efficient and accurate than the two previous approaches.
V. CONCLUSION
The new approach proposes a method for Web service clustering that combines the HTS approach, which uses ontology learning, with the CAS approach, which uses machine learning. This method helps to overcome the issues of the two previous approaches by considering both day-to-day Web updates with the domain context and the relationships among Web services in the generated ontology. The implementation of the new approach takes more running time, because it searches Web data over the Internet as required by the CAS method. The new approach uses SVM-based machine learning to capture the similarity values of the generated ontology considering the domain context, and using the SVM to calculate service similarity shows better performance compared with the previous edge-count-based, HTS and CAS approaches. The experimental results verify that the newly proposed approach is more accurate and efficient than the two previous approaches, improving the average precision value by 24.59%, 4.69% and 9.16%, the average recall value by 29.04%, 2.04% and


1.59%, and the average F-measure value by 28.31%, 3.59% and 5.42% compared with the edge-count-based method, HTS and CAS, respectively.
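As a worked check, the reported 24.59% average-precision improvement over the edge-count-based method follows directly from the precision columns of Table III:

P_avg(HTS+CAS) = (100.00 + 100.00 + 90.53 + 98.59 + 89.61) / 5 = 95.75
P_avg(edge-count) = (56.00 + 100.00 + 55.00 + 77.70 + 67.10) / 5 = 71.16
P_avg(HTS+CAS) - P_avg(edge-count) = 95.75 - 71.16 = 24.59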

In the future, we would like to improve on these results to obtain better clustering of Web services, together with improved service discovery and recommendation.

REFERENCES

[1] Dasgupta, S., Bhat, S., & Lee, Y. (2011, July 4-9). Taxonomic clustering and query matching for efficient service discovery. In Proceedings of the 9th IEEE International Conference on Web Services, Washington, DC (pp. 363-370). doi:10.1109/ICWS.2011.112
[2] Xia, Y., Chen, P., Bao, L., Wang, M., & Yang, J. (2011, July 4-9). A QoS-aware web service selection algorithm based on clustering. In Proceedings of the 9th IEEE International Conference on Web Services, Washington, DC (pp. 428-435). doi:10.1109/ICWS.2011.36
[3] Wuhui Chen, Incheon Paik and Patrick C. K. Hung, "Constructing a Global Social Service Network for Better Quality of Web Service Discovery," IEEE Transactions on Services Computing, in press (accepted February 2013).
[4] B. T. G. S. Kumara, I. Paik, W. Chen and K. Ryu, "Web Service Clustering using a Hybrid Term-Similarity Measure with Ontology Learning," International Journal of Web Services Research (JWSR), Vol. 11, No. 2, pp. 24-45, 2014.
[5] B. T. G. S. Kumara, Incheon Paik, Hiroki Ohashi, Yuichi Yaguchi and Wuhui Chen, "Context Aware Filtering and Visualization of Web Service Clusters," International Journal of Web Services Research (JWSR).
[6] Liu, W., & Wong, W. (2009). Web service clustering using text mining techniques. International Journal of Agent-Oriented Software Engineering, 3(1), 6-26. doi:10.1504/IJAOSE.2009.022944
[7] Elgazzar, K., Hassan, A. E., & Martin, P. (2010, July 5-10). Clustering WSDL documents to bootstrap the discovery of web services. In Proceedings of the 8th IEEE International Conference on Web Services (pp. 147-154). doi:10.1109/ICWS.2010.31
[8] Chen, L., Yang, G., Zhang, Y., & Chen, Z. (2010, December 4-6). Web services clustering using SOM based on kernel cosine similarity measure. In Proceedings of the 2nd International Conference on Information Science and Engineering, Hangzhou, China (pp. 846-850). doi:10.1109/ICISE.2010.5689254
[9] Wen, T., Sheng, G., Li, Y., & Guo, Q. (2011, August 20-22). Research on web service discovery with semantics and clustering. In Proceedings of the 6th IEEE Joint International Information Technology and Artificial Intelligence Conference, Chongqing, China (pp. 62-67). doi:10.1109/ITAIC.2011.6030151
[10] Lee, Y., & Kim, C. (2011, July 4-9). A learning ontology method for RESTful semantic web services. In Proceedings of the 9th IEEE International Conference on Web Services, Washington, DC (pp. 251-258). doi:10.1109/ICWS.2011.59
[11] Banage T. G. S. Kumara, Incheon Paik, Koswatte R. C. Koswatte and Wuhui Chen, "Improving Web service clustering through post filtering to bootstrap the service discovery," International Journal of Services Computing (ISSN 2330-4472), Vol. 2, No. 3, July-Sept. 2014, pp. 1-13.
[12] Qi Yu, Hongbing Wang and Liang Chen, "Learning Sparse Functional Factors for Large-scale Service Clustering," Proceedings of the International Conference on Web Services (ICWS 2015), New York, June 27-July 2, 2015.
[13] J. Zhou and S. Li, "Semantic Web service discovery approach using service clustering," in Proc. International Conference on Information Engineering and Computer Science, pp. 1-5, 2009.
[14] Xia, Y., Chen, P., Bao, L., Wang, M., & Yang, J. (2011, July 4-9). A QoS-aware web service selection algorithm based on clustering. In Proceedings of the 9th IEEE International Conference on Web Services, Washington, DC (pp. 428-435). doi:10.1109/ICWS.2011.36
[15] Nebil Ben Mabrouk, Nikolaos Georgantas and Valerie Issarny, "Set-based Bi-level Optimisation for QoS-aware Service Composition in Ubiquitous Environments," Proceedings of the International Conference on Web Services (ICWS 2015), New York, June 27-July 2, 2015.
[16] Chen Wu, Weiwei Qiu, Zibin Zheng, Xinyu Wang and Xiaohu Yang, "QoS Prediction of Web Services based on Two-Phase K-Means Clustering," Proceedings of the International Conference on Web Services (ICWS 2015), New York, June 27-July 2, 2015.
[17] W. Chen, I. Paik and P. C. K. Hung, "Constructing a Global Social Service Network for Better Quality of Web Service Discovery," IEEE Transactions on Services Computing, Vol. 8, No. 2, March-April 2015, pp. 284-298.
[18] Chen, Wuhui, and Incheon Paik, "Toward Better Quality of Service Composition Based on a Global Social Service Network," IEEE Transactions on Parallel and Distributed Systems, 26.5 (2015): 1466-1476.
[19] Christian Platzer, Florian Rosenberg and Schahram Dustdar, "Web service clustering using multidimensional angles as proximity measures," ACM Transactions on Internet Technology (TOIT), vol. 9, no. 3, pp. 1-26, July 2009.
[20] Tao Wen, Guojun Sheng, Yingqiu Li and Quan Guo, "Research on Web service discovery with semantics and clustering," in Proc. 6th IEEE Joint International Information Technology and Artificial Intelligence Conference, pp. 62-67, 20-22 August 2011.
[21] I. Paik and E. Fujikawa, "Web Service Matchmaking Using Web Search Engine and Machine Learning," International Journal of Web Engineering, SAP, 1(1), pp. 1-5, 2012.
[22] Chen, Lei; Geng Yang; Zhang, Yingzhou; Zhengyu Chen, "Web services clustering using SOM based on kernel cosine similarity measure," Information Science and Engineering (ICISE), 2010 2nd International Conference on, pp. 846-850, 4-6 Dec. 2010.
[23] Klusch, M., Fries, B., & Sycara, K. (2006, May 8-12). Automated semantic web service discovery with OWLS-MX. In Proceedings of the 5th International Conference on Autonomous Agents and Multi-Agent Systems, Hakodate, Japan (pp. 915-922). doi:10.1145/1160633.1160796
[24] Takeshi Sasaki, Yuichi Yaguchi, Yutaka Watanobe and Ryuichi Oka, "Extracting a Spatial Ontology from a Large Flickr Tag Dataset," in Awareness Science and Technology (iCAST), 2012 4th International Conference on, IEEE, 2012.