
Cluster Computing

https://doi.org/10.1007/s10586-018-2072-8

An enhancement to SePeCloud with improved security and efficient data management

S. Savitha¹ · P. Thangam¹ · L. Latha²

Received: 18 January 2018 / Revised: 26 January 2018 / Accepted: 7 February 2018
© Springer Science+Business Media, LLC, part of Springer Nature 2018

S. Savitha: ssavithaclick@gmail.com · P. Thangam: saithangam@gmail.com · L. Latha: latha.l.cse@kct.ac.in
¹ Department of Computer Science and Engineering, Coimbatore Institute of Engineering and Technology, Coimbatore, India
² Kumaraguru College of Technology, Coimbatore, Tamilnadu, India

Abstract
Outsourcing data to third-party controlled cloud computing services raises various security issues, so strong security schemes are required to protect data on the cloud platform. The Division and Replication of Data in the Cloud for Optimal Performance and Security (DROPS) method addressed these issues by improving the security and performance of a cloud environment. An enhanced version of DROPS named SePeCloud was proposed that improved security and performance hand-in-hand with fog based deduplication and privacy preserving online updating. This paper improves security and data storage with a self-destruction mechanism that handles applications independently, built on a reinforcement learning strategy. A data chunk fingerprint index and a sketch index are included to support independent and parallel data destruction among multiple applications. Additionally, SePeCloud is further extended to prevent impersonation attacks from illegitimate users by introducing a modified Shamir secret sharing scheme that handles user revocation policies with limited storage space. The experimental results prove that the final version of SePeCloud performs better than the previous versions in terms of replication cost savings and computation time, improving both the security and the performance of the cloud system.

Keywords Self-destruction · Revocation handling · Reinforcement learning · Modified short secret shares · Security · Performance

1 Introduction

Cloud computing [1] offers a new way of utilizing information technology services by rearranging various resources (e.g., storage, computing) and offering services to the users based on their demands. Some of these services include Infrastructure-as-a-Service (IaaS), Security-as-a-Service (SecaaS) and Data Storage-as-a-Service (DaaS). Cloud computing connects a huge pool of resources with various attractive properties, such as scalability, elasticity, fault-tolerance and pay-per-use, projecting itself as a promising service platform.

The popularity of these cloud services has led to a sharp increase in data volume and digital information collection. Hence, high data security [2] and performance, together referred to as Security-Performance (SePe) [3], are the two mandatory features of a cloud environment. Division and Replication of Data in the Cloud for Optimal Performance and Security (DROPS) [4] improved the security and performance of the cloud. The enhanced version of DROPS, named SePeCloud [5], performed efficiently by employing deduplication in SePeCloudv1.1 and fog computing based deduplication in SePeCloudv1.2 to handle data uploaded by the cloud users. Alongside the fog based deduplication, an online updating scheme was included in SePeCloudv2.1 to handle dynamic data operations. This online updating scheme in SePeCloudv2.1 was further extended to support privacy preservation for updating the data in the subsequent version, SePeCloudv2.2.


This paper incorporates the Self-Destruction (SeDas) scheme [6] into SePeCloudv2.2 to improve the privacy of the user's data and to manage the storage space by self-destructing the uploaded files at a user-defined time. This version of SePeCloud is introduced as SePeCloudv3.1. However, the cloud hosts many different application services, so to address the issues of all these application services the self-destruction strategy is designed to function per application. Hence, an application aware self-destruction mechanism is proposed in SePeCloudv3.2. This promotes faster backup operation and the removal of unnecessary files for sensitive applications, based on reinforcement learning over data usability.

An efficient integrity check that audits data updating with high error detection probability is inherited from DROPS into SePeCloudv3.2. However, user revocation handling policies are not included in the previous versions of SePeCloud. To maintain the users within the data usage bounds, SePeCloudv4.1 is introduced by adding efficient user revocation handling [7] to SePeCloudv3.2. The Shamir secret sharing scheme [8] is used for user revocation handling. But the Shamir secret sharing scheme is mostly based on numerical assumptions which are usually time-consuming and require much throughput. In secret sharing schemes [9], dishonest users can naturally cheat during the execution of any protocol. To mitigate this problem, a modified short secret sharing is proposed for user revocation handling in SePeCloudv4.2. Jordan matrix factorization enhanced with the Lagrange formula is used in the modified short secret sharing scheme. These matrix operations are linear and therefore make the computation faster. The use of the Lagrange formula to share the secret sequence optimizes the length of the secret shares.

The remainder of the paper is organized as follows. Section 2 analyzes the methods proposed to improve the security and performance of the cloud. In Sect. 3, the techniques used for the proposed SePeCloud are explained briefly. Section 4 presents the experimental results and the performance evaluation of the proposed techniques, and Sect. 5 concludes the paper with directions for future work.

2 Related work

Various security issues of cloud computing [2] were analysed to identify all kinds of vulnerabilities. The study found that data storage in the cloud, virtualization and network maintenance face the major security issues in the cloud environment. Each virtualization strategy required different security schemes to handle the data. The relationship between threats and the vulnerabilities that lead to the execution of those threats was studied briefly. The available security mechanisms were not efficient enough to tackle newly established threats and vulnerabilities.

An efficient data replication approach was proposed [10] to minimize network delays, energy consumption and bandwidth usage for distributed data centres. Efficient utilization of energy and reduced bandwidth consumption were the major focus of this approach. Though the replication of data in this approach reduced the communication delays, the level of data replication resulted in energy efficiency tradeoffs in the cloud system.

The security and availability of cloud data were improved [11] by a trusted cryptographic protocol that extended traditional trust based security protection to the public cloud. This was found to improve the integrity, freshness and high availability of the uploaded data. To avoid vulnerable operations, tenant visibility was monitored on a regular basis by the auditing scheme. The Source deduplication Framework (SAFE) [12] removed duplicate data from backup operations, which reduced the time consumed by backed up data as well as the storage space and the time required for data restoration. Global file level and local chunk level deduplication were proposed to reduce backup time by balancing efficiency and overhead. The semantics of files were also considered during the deduplication process to narrow the search space. Further optimization of the back-up and restore operations was still required to improve performance.

A multitenant access control system [13] provided access control based on virtualization policies. However, these systems were not suitable for I/O-intensive applications with large-scale users. File Assured Deletion (FADE) [14] presented a secure overlay cloud storage system which assured appropriate file deletion to protect the deleted data on the cloud. FADE was built with standard cryptographic techniques with assured privacy and integrity. It assured that deleted files remained unrecoverable to any user upon revocation of the file access policies.

A secure cloud storage system [15] supported privacy preserving public auditing. The third party auditor (TPA) verified the data sent simultaneously by multiple cloud users. TPA auditing did not introduce new vulnerabilities into the users' data and also removed the additional online burden faced by the cloud users. However, multiple auditing tasks could not be handled in a batch manner in the system to obtain better efficiency.

The convergent encryption technique [16, 17] deduplicated encrypted data to protect sensitive information. This was the first attempt to deduplicate authorized data; the deduplication was performed based on the privileges allocated to the users. This scheme was found to incur minimal overhead when compared to the normal deduplication scheme.

3 Proposed methodology

This paper enhances SePeCloudv2.2 [5] by including self-destruction and application aware self-destruction to improve the performance of cloud based systems. To further improve cloud security, user revocation policies based on Shamir secret sharing and modified short secret sharing are integrated into the system.

3.1 Enhancing SePeCloud with self destruction (SeDas)

In SePeCloudv3.1, SeDas is implemented as an extension to SePeCloudv2.2 that automatically removes unused files and decryption keys at the user-specified time. This protects the data privacy while improving the storage performance of the system.

In the SeDas system, when a data owner uploads a file, the file name, the key and the TTL (time-to-live) value are passed as arguments to the cloud. A user defined encryption algorithm encrypts the data, and key shares are generated for the data users. Once the file is uploaded to the cloud, an Active Storage Object (ASO) is created in which the files are stored as objects. Until the self-destruct operation is triggered, the cloud users can access the shared keys and are allowed to decrypt the files. Once the operation is activated, the users can no longer access the keys.

SeDas performs secure deletion of the user's specified sensitive files. The implementation of this mechanism is as follows.
• The deleted sensitive files are stored in a separate directory.
• The list of sensitive files, the Logical Block Addresses (LBA) of the files and the file allocation tables are maintained.
• The LBA list of the sensitive files is updated to the cloud provider.
• The cloud provider writes newly updated data onto the old data pages where the sensitive data is already stored, so the LBAs of the deleted sensitive data are overwritten by the newly uploaded files.
• For ordinary files, the standard updating is performed.
• The self-destruction API is called and deletes the files.

The above procedure ensures that the deleted sensitive files cannot be recovered.
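To make the SeDas behaviour above concrete, the following Java sketch shows a minimal Active Storage Object that hands out its key share only while the user-defined TTL is valid and otherwise wipes the key material and the stored file. It is an illustration only, under assumed names (ActiveStorageObject, readKeyShare, destroy); it is not the SePeCloud implementation.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.Instant;

/**
 * Illustrative sketch of a SeDas-style active storage object: the stored
 * file and its decryption key become unusable once the user-defined
 * time-to-live (TTL) has elapsed. All names are hypothetical.
 */
public class ActiveStorageObject {

    private final Path encryptedFile;   // object stored in the cloud
    private byte[] keyShare;            // share of the decryption key
    private final Instant expiresAt;    // upload time + user-defined TTL

    public ActiveStorageObject(Path encryptedFile, byte[] keyShare, long ttlSeconds) {
        this.encryptedFile = encryptedFile;
        this.keyShare = keyShare;
        this.expiresAt = Instant.now().plusSeconds(ttlSeconds);
    }

    /** Until the TTL expires the shared key can still be handed out. */
    public synchronized byte[] readKeyShare() {
        if (Instant.now().isAfter(expiresAt)) {
            destroy();                               // trigger self-destruction
            throw new IllegalStateException("Object has self-destructed");
        }
        return keyShare.clone();
    }

    /** Secure deletion: wipe the key share and remove the stored object. */
    private void destroy() {
        if (keyShare != null) {
            java.util.Arrays.fill(keyShare, (byte) 0); // overwrite key material
            keyShare = null;
        }
        try {
            Files.deleteIfExists(encryptedFile);       // drop the stored object
        } catch (Exception e) {
            // in a real system the LBAs would also be handed to the provider
            // so the old pages can be overwritten, as described above
        }
    }
}
```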
3.2 Application aware self-destruction (SePeCloudv3.2)

In SePeCloudv3.2, an application-aware self destruction is proposed for handling independent applications. The application specific self-destruction strategy takes into account the application type, the response time, the number of users a file is shared with and the TTL value of the file. The application aware self-destruction strategy eliminates files that remain unused over a long period of time in each application independently. A data chunk fingerprint index and a sketch index are maintained separately for each application. These indexes make it possible to destruct files of multiple applications independently. In this scheme, reinforcement learning is utilized to learn the above mentioned parameters of each application so that the self-destruction is carried out properly. Reinforcement learning functions by learning information and reacting to the environment with a better solution.

3.2.1 Application aware data chunk fingerprint index and sketch index structure

In this scheme, independent and parallel data destruction is built to support multiple applications. An application index is maintained to map the file types of different applications to an independent fingerprint (FP) index or sketch index. The FP/sketch indices are mapped to a hash-table based index called a container that stores the chunks. Chunks of the same file type are indexed together and an ID is maintained in the container. The time validity of each file chunk is also maintained in the container, so that once the system time reaches the time-to-live of a file, the self-destruction operation is triggered. The files stored in the container are then moved to the deleted file list. The application aware SeDas yielded better data reduction and throughput than the basic SeDas.
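The per-application fingerprint/sketch index described above can be pictured with the rough Java sketch below: one container per application maps chunk fingerprints to entries carrying a time-to-live, and expired chunks are moved to the deleted file list independently for each application. All class and method names are hypothetical assumptions; the real index layout in SePeCloud may differ.

```java
import java.time.Instant;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

/** Hypothetical per-application chunk index with TTL-driven destruction. */
public class ApplicationChunkIndex {

    private static class ChunkEntry {
        final String chunkId;
        final Instant expiresAt;     // time-to-live of the file chunk
        ChunkEntry(String chunkId, Instant expiresAt) {
            this.chunkId = chunkId;
            this.expiresAt = expiresAt;
        }
    }

    // application type -> container (fingerprint -> chunk entry)
    private final Map<String, Map<String, ChunkEntry>> containers = new HashMap<>();
    private final List<String> deletedChunks = new ArrayList<>();

    public void addChunk(String application, String fingerprint,
                         String chunkId, long ttlSeconds) {
        containers.computeIfAbsent(application, a -> new HashMap<>())
                  .put(fingerprint,
                       new ChunkEntry(chunkId, Instant.now().plusSeconds(ttlSeconds)));
    }

    /** Destroys expired chunks of one application only, so different
     *  applications can be cleaned up independently and in parallel. */
    public void selfDestructExpired(String application) {
        Map<String, ChunkEntry> container = containers.get(application);
        if (container == null) return;
        Iterator<Map.Entry<String, ChunkEntry>> it = container.entrySet().iterator();
        while (it.hasNext()) {
            ChunkEntry entry = it.next().getValue();
            if (Instant.now().isAfter(entry.expiresAt)) {
                deletedChunks.add(entry.chunkId);  // moved to the deleted file list
                it.remove();
            }
        }
    }
}
```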
3.2.2 Reinforcement learning algorithm in application aware self destruction

An agent based Reinforcement Learning (RL) algorithm is employed to improve the application aware SeDas by updating the TTL values of the files based on their usability to the legitimate users. A software agent (a piece of software code) is deployed in each ASO for better usability of the code. The usability of the files in the cloud is learned by the RL algorithm, which then distributes this information to each related ASO through agent communication. This takes place frequently among all the ASOs. RL improves the cloud performance by deciding the self-destruction time within the system itself. Thus, unused files are removed independently based on their accessibility, with higher space efficiency and less storage overhead than the basic SeDas.

The parameters included in the RL model are the application type, the accessibility rate, the TTL value and the LBAs of each file. These parameters are updated on each file access. RL learns the overall usage of these files in the cloud through the agents. Based on the learned model of the RL agent, the self-destruction operation is invoked. The agent simply communicates among the ASOs and updates the file usage information. Once the learning period is over, the agent finally decides the right time to perform the self-destruction operation to improve the backup efficiency. The parameters of these agents can change based on the updates from the other agents.
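The sketch below illustrates, under assumed names and an assumed update rule, how an agent attached to an ASO could track the RL parameters listed above (application type, accessibility rate, TTL, LBA) and nudge the TTL after each access. It is not the exact learning model of SePeCloud, only a minimal illustration.

```java
/**
 * Illustrative agent state for the RL-based TTL adjustment: each Active
 * Storage Object keeps the parameters listed above and adjusts the TTL
 * after every access. The simple update rule is an assumption.
 */
public class TtlLearningAgent {

    private final String applicationType;
    private final long logicalBlockAddress;
    private double accessRate;      // learned accessibility of the file
    private long ttlSeconds;        // current time-to-live decided by the agent

    private static final double LEARNING_RATE = 0.1;

    public TtlLearningAgent(String applicationType, long lba, long initialTtlSeconds) {
        this.applicationType = applicationType;
        this.logicalBlockAddress = lba;
        this.ttlSeconds = initialTtlSeconds;
    }

    /** Called on every file access decision; used files earn a longer life. */
    public void observeAccess(boolean accessed) {
        double reward = accessed ? 1.0 : 0.0;
        accessRate += LEARNING_RATE * (reward - accessRate);   // exponential average
        if (accessRate > 0.5) {
            ttlSeconds += ttlSeconds / 10;    // frequently used: extend TTL
        } else {
            ttlSeconds -= ttlSeconds / 10;    // rarely used: self-destruct sooner
        }
    }

    /** State that would be exchanged with the other ASOs' agents. */
    public String summary() {
        return applicationType + "@" + logicalBlockAddress
                + " accessRate=" + accessRate + " ttl=" + ttlSeconds + "s";
    }

    public long currentTtlSeconds() { return ttlSeconds; }
}
```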
3.3 SePeCloudv3.2 with Shamir secret sharing scheme based user revocation handling (SePeCloudv4.1)

In SePeCloudv4.1, a Shamir secret sharing scheme for user revocation handling is introduced to prevent impersonation attacks from illegitimate users.

3.3.1 Shamir secret sharing scheme based user revocation handling

SePeCloudv3.2 employs user revocation handling using the Shamir secret sharing scheme [8]. As per the revocation policy, a user cannot access their files once denied from the system. In order to access the files again, the user has to re-login to the system, where the user obtains new secret keys to decrypt and access the files. Such a secure revocation handling mechanism ensures that revoked users can no longer access their files from the cloud. To handle these users, the system changes the keys of all users every time a user is revoked or re-logged in. The authentication tag of each user is also updated on each revocation. In order to reduce the computation cost of the revocation handling process, the key updating is narrowed down to a subgroup. This scheme makes sure that all the authentication tags generated by the revoked users are updated so that the revoked users' secret keys are removed from the tags. The communication and computational intensity of the system depends entirely on the number of tags modified by the revoked users.

Internal errors or outside attacks on the cloud system lead to the creation of authentication tags by revoked users. A compromise attack is another cloud attack that favours the revoked users. To solve these issues and increase the reliability of the cloud system, the (U, N) Shamir secret sharing technique is utilized. This secret sharing technique, together with the revocation algorithm, distributes v (keys and authentication parameters) as shares over N cloud nodes, where any U out of the N nodes holding shares of v are chosen to compute their own pieces of the updated authentication tag. Once the U cloud nodes have updated their own authentication tag pieces, the aggregated tag is generated as the final updated tag. Only the final aggregated tag is used for authentication in a subgroup until the revocation of the next user.

Algorithm 1: Shamir secret sharing scheme based user revocation handling

Step 1: The master user u_0 runs the Shamir secret sharing scheme and generates N points (j, f(j)) of a degree U-1 polynomial

f(x) = v + a_1 x + a_2 x^2 + \cdots + a_{U-1} x^{U-1}    (1)

The N points are sent to the N nodes of the cloud server.

Step 2: To update an authentication tag \sigma, any U cloud nodes holding a point (j, f(j)) on f(x) compute the Lagrange basis polynomial

L_j(x) = \prod_{0 \le m \le U,\; m \ne j} \frac{x - x_m}{x_j - x_m}    (2)

Step 3: Each cloud node updates its piece of the tag as

\sigma'_j = \sigma^{f(j) L_j(0)}    (3)

Step 4: The aggregate of the U updated tag pieces is

\sigma' = \prod_{1 \le j \le U} \sigma'_j = \sigma^{\sum_{1 \le j \le U} f(j) L_j(0)} = \sigma^{v}    (4)
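Steps 1, 2 and 4 of Algorithm 1 rest on standard Shamir sharing and Lagrange interpolation at zero. The minimal Java sketch below illustrates them over a toy prime field with (U, N) = (3, 5); the modulus, parameters and class names are illustrative assumptions unrelated to the key sizes actually used in SePeCloud. In Algorithm 1 the same Lagrange weights appear in the exponent, so the product of the tag pieces \sigma^{f(j) L_j(0)} aggregates to \sigma^{v}.

```java
import java.math.BigInteger;
import java.security.SecureRandom;

/** Minimal (U, N) Shamir sharing sketch over a prime field (illustrative only). */
public class ShamirSketch {

    static final BigInteger P = BigInteger.valueOf(2147483647L); // toy prime modulus
    static final SecureRandom RNG = new SecureRandom();

    /** Step 1: build f(x) = v + a1*x + ... + a_{U-1}*x^{U-1}, return N points f(1..N). */
    static BigInteger[] share(BigInteger v, int u, int n) {
        BigInteger[] coeff = new BigInteger[u];
        coeff[0] = v;
        for (int i = 1; i < u; i++) coeff[i] = new BigInteger(P.bitLength() - 1, RNG);
        BigInteger[] shares = new BigInteger[n + 1];      // shares[j] = f(j), j = 1..n
        for (int j = 1; j <= n; j++) {
            BigInteger x = BigInteger.valueOf(j), fx = BigInteger.ZERO;
            for (int i = u - 1; i >= 0; i--) fx = fx.multiply(x).add(coeff[i]).mod(P);
            shares[j] = fx;
        }
        return shares;
    }

    /** Step 2: Lagrange basis coefficient L_j(0) for the participating node ids. */
    static BigInteger lagrangeAtZero(int j, int[] ids) {
        BigInteger num = BigInteger.ONE, den = BigInteger.ONE;
        for (int m : ids) {
            if (m == j) continue;
            num = num.multiply(BigInteger.valueOf(-m)).mod(P);
            den = den.multiply(BigInteger.valueOf(j - m)).mod(P);
        }
        return num.multiply(den.modInverse(P)).mod(P);
    }

    public static void main(String[] args) {
        BigInteger v = BigInteger.valueOf(123456789L);    // secret (e.g. key material)
        BigInteger[] shares = share(v, 3, 5);             // (U, N) = (3, 5)
        int[] ids = {1, 3, 5};                            // any U nodes cooperate
        BigInteger sum = BigInteger.ZERO;                 // sum f(j)*L_j(0) recovers v
        for (int j : ids) sum = sum.add(shares[j].multiply(lagrangeAtZero(j, ids))).mod(P);
        System.out.println("recovered = " + sum + " (expected " + v + ")");
    }
}
```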
3.4 SePeCloudv4.1 with short secret sharing scheme based user revocation handling (SePeCloudv4.2)

In spite of all its functionality, the Shamir secret sharing scheme lags behind because it requires more storage space for the keys, and its random operations are time consuming. These random operations include splitting the keys, merging them when required and updating them on a regular basis. SePeCloudv4.1 with modified short secret sharing is proposed as SePeCloudv4.2 to improve the storage space required for the keys. The additional piece of information included in the short secret scheme strengthens the security by preventing impersonation. Modified short secret sharing improves the security by avoiding the possible attacks while improving the storage space. It works by combining Jordan matrix theory with the Lagrange interpolation formula, yielding a threshold secret sharing algorithm with short shares and improved efficiency. The Jordan matrix with Lagrange interpolation optimizes the storage used for placing and processing the data efficiently, especially in cloud based systems. It also keeps the share length related only to the (r, g) threshold, where r is the threshold and g represents the number of legitimate users, and independent of the length of the initial secret.

The short secret sharing contains two phases, namely the secret sharing process and the secret resuming process.

Algorithm 2: Modified short secret sharing scheme based user revocation handling

(i) Secret sharing process

Step 1: Divide the secret data D into r^2 pieces of equal length. D is denoted as D = d_1 \| d_2 \| \ldots \| d_{r^2}, where \| refers to the concatenation of the bit clusters. If the last piece is shorter than the other pieces, the user uses the padding technique to ensure that the length of each piece is equal. r is the threshold number of participants required to resume the secret D.

Step 2: D, expressed as (d_1, d_2, \ldots, d_{r^2}), is arranged into the matrix

A = \begin{bmatrix} d_1 & d_2 & \cdots & d_r \\ d_{r+1} & d_{r+2} & \cdots & d_{2r} \\ \vdots & \vdots & \ddots & \vdots \\ \cdots & \cdots & \cdots & d_{r^2} \end{bmatrix}

Step 3: Fix up the Jordan standard form matrix J of A and the related switch matrix P accordingly:

J = \begin{bmatrix} J_1 & 0 & \cdots & 0 \\ 0 & J_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & J_s \end{bmatrix}, \qquad J_1 = \begin{bmatrix} \lambda_1 & 1 & \cdots & 0 \\ 0 & \lambda_1 & \ddots & 0 \\ \vdots & & \ddots & 1 \\ 0 & 0 & \cdots & \lambda_1 \end{bmatrix}_{r_1 \times r_1}    (5)

Here, P = (p_1, p_2, \ldots, p_r) is the column vector group of dimension r, the length of each weight is (|D| + a)/r^2 bits, |D| is the length of the secret data D and a is the length of the back-up data, where i = 1, 2, \ldots, s.

Step 4: After that, permute the sequence p_1, p_2, \ldots, p_r to p_{i_1}, p_{i_2}, \ldots, p_{i_r} (here 1 \le i_j \le r, j = 1, 2, \ldots, r).

Step 5: Now take the sequence i_1, i_2, \ldots, i_r as the secret and use r polynomials of degree r-1 to share the secret among the g participants by means of the Lagrange interpolation formula.

(ii) Secret resuming process

Step 1: When r participants need to jointly resume the secret, each joined participant broadcasts its own share to every other participant.

Step 2: Each joined participant can now resume the secret i_1, i_2, \ldots, i_r using the Lagrange interpolation formula.

Step 3: Each joined participant reads the call board to get the value of b and p_{i_1}, p_{i_2}, \ldots, p_{i_r} in order to fix the Jordan matrix J based on the value of b.

Step 4: Every participant now permutes the vector group p_{i_1}, p_{i_2}, \ldots, p_{i_r} again based on the secret i_1, i_2, \ldots, i_r to get the matrix P = (p_1, p_2, \ldots, p_r).

Step 5: Every participant computes A = P J P^{-1} and obtains the initial secret data from the matrix A.

Step 6: Now denote the information of J as

b = \begin{bmatrix} \lambda_1 & r_1 \\ \lambda_2 & r_2 \\ \vdots & \vdots \\ \lambda_s & r_s \end{bmatrix}    (6)

Here, i = 1, 2, \ldots, s and the length of each r_i is \lfloor \log_2 r \rfloor + 1, so the total length of b is

\frac{|D| + a}{r} + r(\lfloor \log_2 r \rfloor + 1)    (7)

Step 7: For i = 1, 2, \ldots, r, the user writes a_i, b and the value of p_{i_1}, p_{i_2}, \ldots, p_{i_r} on the call board as the sequence.

The modified short secret sharing thus achieves higher efficiency in both space and communication.
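As a very reduced illustration of the sharing phase of Algorithm 2, the sketch below only covers steps 1, 2 and 4: cutting the secret into r^2 equal pieces with padding, arranging them into an r x r matrix and picking the secret column permutation i_1, ..., i_r that would then be shared with Lagrange polynomials. The Jordan decomposition and the call-board bookkeeping are omitted, and all names are hypothetical.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

/** Simplified sketch of steps 1, 2 and 4 of the sharing phase (illustrative only). */
public class ShortSecretSharingSketch {

    /** Step 1: divide D into r*r equal pieces, padding the last one if needed. */
    static byte[][] splitIntoPieces(byte[] secret, int r) {
        int pieces = r * r;
        int pieceLen = (secret.length + pieces - 1) / pieces;      // ceiling division
        byte[] padded = Arrays.copyOf(secret, pieceLen * pieces);  // zero padding
        byte[][] d = new byte[pieces][];
        for (int i = 0; i < pieces; i++) {
            d[i] = Arrays.copyOfRange(padded, i * pieceLen, (i + 1) * pieceLen);
        }
        return d;
    }

    /** Step 2: arrange the pieces row by row into an r x r matrix A. */
    static byte[][][] toMatrix(byte[][] d, int r) {
        byte[][][] a = new byte[r][r][];
        for (int row = 0; row < r; row++) {
            for (int col = 0; col < r; col++) {
                a[row][col] = d[row * r + col];
            }
        }
        return a;
    }

    /** Step 4: choose the secret permutation i1, ..., ir of the column vectors.
     *  In Algorithm 2 this index sequence is what gets shared among the g
     *  participants with r Lagrange polynomials of degree r - 1. */
    static List<Integer> chooseColumnPermutation(int r) {
        List<Integer> order = new ArrayList<>();
        for (int i = 1; i <= r; i++) order.add(i);
        Collections.shuffle(order);
        return order;
    }

    public static void main(String[] args) {
        byte[] secret = "example secret data D".getBytes();
        int r = 3;                                   // threshold
        byte[][] pieces = splitIntoPieces(secret, r);
        byte[][][] a = toMatrix(pieces, r);
        System.out.println("pieces: " + pieces.length + ", matrix: " + a.length + "x" + a[0].length);
        System.out.println("column permutation: " + chooseColumnPermutation(r));
    }
}
```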
4 Results and discussion

All versions of SePeCloud are implemented in the Jelastic cloud environment, a platform and infrastructure service provider that offers a combined PaaS-IaaS environment for Java, Ruby and PHP with balanced performance, scalability and security functionality. SePeCloud is developed with the Java Spring framework. The improvement of each version of SePeCloud is measured in terms of replication cost saving and computation time. The performance of the SePeCloud versions is compared against each other and against the DROPS methodology. The outcomes of the experimental results prove that SePeCloud provides improved security and performance for the data on the cloud system. The performance of DROPS and SePeCloud is analyzed by increasing the number of nodes in the CSP, changing the node storage capacity and increasing the file fragment size.
4.1 Replication cost

All versions of SePeCloud are implemented so as to minimize the overall replication time (RT), or replication cost (RC). The replication cost determines a system's performance by making the resources available appropriately. The RT consists of the time for read and write requests respectively. The impact on RC is evaluated with respect to the number of nodes, the number of fragments and the storage capacity of the nodes.

The replication cost of the SePeCloud versions is first analyzed by increasing the number of nodes. Figure 1 shows the replication cost saving in percentage against the number of nodes. The analysis of the graph concludes that as the number of nodes increases, the replication cost savings increase for DROPS as well as for every SePeCloud version. From Table 1, it is evident that the replication cost savings of SePeCloudv4.2 are higher than those of all other versions and DROPS. The RC saving of SePeCloudv3.1 is found to be increased by 17% over SePeCloudv2.2, that of SePeCloudv3.2 by 11% over SePeCloudv3.1, that of SePeCloudv4.1 by 11% over SePeCloudv3.2 and that of SePeCloudv4.2 by 13% over SePeCloudv4.1.

Figure 2 shows the impact of increasing the fragment size on RC savings. From the figure it is evident that the replication cost savings decrease with an increasing number of fragments. From Table 2, it is clear that SePeCloudv4.2 has the better RC saving for any number of fragments. The RC saving of SePeCloudv3.1 is found to be increased by 16% over SePeCloudv2.2, that of SePeCloudv3.2 by 14% over SePeCloudv3.1, that of SePeCloudv4.1 by 10% over SePeCloudv3.2 and that of SePeCloudv4.2 by 15% over SePeCloudv4.1. The replication cost savings of the SePeCloud versions increase gradually, which also proves that the enhancement of each SePeCloud scheme is efficient, scalable and extensible. Thus, each development of SePeCloud shows a stable improvement in performance and security together.
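The figures and tables that follow report replication cost savings as percentages relative to a baseline. The paper does not spell out the exact formula, so the small helper below only illustrates one common way such a saving could be computed from measured replication costs; it is an assumption, not necessarily the metric used in the experiments.

```java
/** Illustrative helper: percentage saving of a scheme's replication cost
 *  over a baseline (assumed formula, not necessarily the one used here). */
public final class ReplicationCostSaving {

    private ReplicationCostSaving() { }

    /**
     * @param baselineCost replication cost (e.g. total read/write time) without the scheme
     * @param schemeCost   replication cost with the scheme under test
     * @return saving in percent; positive means the scheme is cheaper
     */
    public static double savingPercent(double baselineCost, double schemeCost) {
        if (baselineCost <= 0) {
            throw new IllegalArgumentException("baseline cost must be positive");
        }
        return 100.0 * (baselineCost - schemeCost) / baselineCost;
    }

    public static void main(String[] args) {
        // Hypothetical numbers only: a scheme that replicates in 42 s
        // against a 60 s baseline saves 30% of the replication cost.
        System.out.printf("RC saving = %.1f%%%n", savingPercent(60.0, 42.0));
    }
}
```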
Fig. 1 Replication cost savings versus number of nodes (in graph 100 U = 10%)

Fig. 2 Replication cost savings versus number of fragments (where 100 U = 10%)

Table 1 Replication cost savings (%) versus number of nodes

No. of nodes | DROPS | SePeCloud1.1 | SePeCloud1.2 | SePeCloud2.1 | SePeCloud2.2 | SePeCloud3.1 | SePeCloud3.2 | SePeCloud4.1 | SePeCloud4.2
20 | 50 | 100 | 200 | 300 | 400 | 450 | 530 | 660 | 740
30 | 100 | 180 | 250 | 350 | 450 | 510 | 670 | 730 | 820
40 | 200 | 330 | 380 | 500 | 600 | 650 | 740 | 810 | 880
50 | 250 | 360 | 440 | 580 | 650 | 700 | 760 | 850 | 950
60 | 400 | 500 | 600 | 700 | 800 | 840 | 890 | 910 | 980

Table 2 Replication cost savings (%) versus number of fragments

No. of fragments | DROPS | SePeCloud1.1 | SePeCloud1.2 | SePeCloud2.1 | SePeCloud2.2 | SePeCloud3.1 | SePeCloud3.2 | SePeCloud4.1 | SePeCloud4.2
20 | 330 | 450 | 570 | 650 | 750 | 780 | 850 | 890 | 970
40 | 260 | 380 | 432 | 528 | 625 | 680 | 720 | 770 | 850
60 | 210 | 300 | 360 | 429 | 527 | 630 | 730 | 800 | 860
80 | 150 | 200 | 260 | 380 | 423 | 520 | 610 | 650 | 750
100 | 80 | 150 | 200 | 300 | 350 | 400 | 460 | 510 | 620

Fig. 3 Replication cost savings versus node storage capacity (where 100 U = 10%)

Fig. 4 Computation time versus data size (where 100 U = 10 s)

Figure 3 shows the impact on RC savings of changing the storage capacity of the nodes. The storage capacity of a node affects the replicas it can hold because of the storage capacity constraint. This leads to the removal of otherwise excellent nodes from the replication process when they would violate the storage capacity constraint, which in turn decreases the usable node capacity and eventually degrades the performance of the system. From Table 3, it is clear that the replication cost savings of SePeCloudv4.2 are higher than those of all the other versions and DROPS. The RC saving of SePeCloudv3.1 is found to be increased by 13% over SePeCloudv2.2, that of SePeCloudv3.2 by 13% over SePeCloudv3.1, that of SePeCloudv4.1 by 11% over SePeCloudv3.2 and that of SePeCloudv4.2 by 9% over SePeCloudv4.1. If the storage nodes have enough capacity to store the allocated file fragments, then a further increase in the storage capacity of a node does not cause the fragments to be stored again.

4.2 Computation time

The computation time is the amount of time taken by the central processing unit (CPU) to process the instructions needed to access the whole file content. The computation time determines how fast the data is processed, which is purely determined by the algorithms used in the system.

Figure 4 shows the comparison of DROPS and SePeCloud in terms of computation time. SePeCloudv4.2 improves the performance by taking less computation time for data sizes from bytes up to gigabytes while balancing its security features.
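The computation time reported in Fig. 4 and Table 4 is, in essence, the time needed to work through the whole file content. A minimal way to obtain such a measurement in a Java environment is sketched below; the file name and the simple checksum loop are illustrative assumptions, not the actual SePeCloud benchmark code.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

/** Illustrative timing harness for the computation-time measurements. */
public class ComputationTimer {

    /** Reads the whole file and returns the elapsed processing time in seconds. */
    public static double timeFileAccess(Path file) throws IOException {
        long start = System.nanoTime();
        byte[] content = Files.readAllBytes(file);   // access the whole file content
        long checksum = 0;
        for (byte b : content) {                     // stand-in for real processing
            checksum += b;
        }
        long elapsed = System.nanoTime() - start;
        System.out.println("bytes=" + content.length + " checksum=" + checksum);
        return elapsed / 1_000_000_000.0;
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical input file; in the experiments the data size is varied
        // from bytes up to gigabytes as shown in Table 4.
        Path sample = Path.of("sample-upload.bin");
        System.out.println("computation time (s): " + timeFileAccess(sample));
    }
}
```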

Table 3 Replication cost savings (%) versus node storage capacity

Node storage capacity | DROPS | SePeCloud1.1 | SePeCloud1.2 | SePeCloud2.1 | SePeCloud2.2 | SePeCloud3.1 | SePeCloud3.2 | SePeCloud4.1 | SePeCloud4.2
10 | 50 | 150 | 200 | 250 | 300 | 430 | 540 | 630 | 700
20 | 100 | 200 | 270 | 360 | 430 | 480 | 620 | 700 | 750
30 | 150 | 280 | 360 | 440 | 540 | 600 | 700 | 750 | 850
40 | 200 | 300 | 400 | 530 | 600 | 660 | 750 | 850 | 900
50 | 300 | 400 | 500 | 600 | 720 | 770 | 810 | 900 | 960

Table 4 Computation time versus data size

Data size (bytes) | DROPS | SePeCloud1.1 | SePeCloud1.2 | SePeCloud2.1 | SePeCloud2.2 | SePeCloud3.1 | SePeCloud3.2 | SePeCloud4.1 | SePeCloud4.2
10^1 | 880 | 780 | 680 | 600 | 500 | 350 | 280 | 150 | 100
10^2 | 930 | 850 | 760 | 690 | 520 | 380 | 330 | 220 | 153
10^3 | 920 | 880 | 750 | 650 | 530 | 400 | 340 | 195 | 130
10^4 | 980 | 860 | 790 | 700 | 580 | 430 | 380 | 240 | 175
10^5 | 960 | 910 | 780 | 680 | 600 | 500 | 390 | 260 | 150
10^6 | 1100 | 1000 | 900 | 850 | 740 | 650 | 500 | 350 | 200

From Table 4, it is clear that SePeCloudv3.1 reduces the computation time by up to 25% compared with SePeCloudv2.2, SePeCloudv3.2 by 18% compared with SePeCloudv3.1, SePeCloudv4.1 by 37% compared with SePeCloudv3.2 and SePeCloudv4.2 by 35% compared with SePeCloudv4.1. Thus each version of SePeCloud balances security and performance by considering the response time of each file request, thereby reducing the computation on each server node.

4.3 Number of attempts to reveal secret key

Figure 5 shows the comparison of the number of attempts needed to reveal the secret key between SePeCloudv4.1 and SePeCloudv4.2. SePeCloudv4.1 utilized the Shamir secret sharing scheme to store the keys, and the number of attempts required to reveal those secret keys is measured, whereas SePeCloudv4.2 utilized the modified short secret sharing to improve the storage space for the keys and to include an additional piece of information that strengthens the security by preventing impersonation attacks.

Fig. 5 Number of attempts to reveal secret key

5 Conclusion

Cloud computing and its services have become an integral part of today's computing environment. Security and performance have always been inevitable issues to be addressed ever since the cloud started playing its part on a large scale. SePeCloudv4.2 efficiently addresses these issues by overcoming the shortcomings of DROPS. Enhancing DROPS with fog based deduplication, privacy preserving online updating, application aware self-destruction and efficient user revocation handling has contributed greater efficiency in terms of both security and performance to the system. SePeCloud reduces the auditing cost of user revocation handling as the group size and the data size grow. Eventually, SePeCloudv4.2 is found to provide improved performance in terms of replication cost and computation time along with high security for the uploaded data. Future work is directed towards developing parallelized SePeCloud systems, especially for handling big data networks.

References

1. Yan, Z., Ding, W., Yu, X., Zhu, H., Deng, R.H.: Deduplication on encrypted big data in cloud. IEEE Trans. Big Data 2(2), 138–150 (2016)
2. Hashizume, K., Rosado, D.G., Fernández-Medina, E., Fernandez, E.B.: An analysis of security issues for cloud computing. J. Internet Serv. Appl. 4(1), 1–13 (2013)
3. Savitha, S., Thangam, P.: Towards SePe (Security-Performance) in cloud computing—survey and recommendations. Int. J. Sci. Adv. Technol. 7(1), 17–27 (2017)
4. Ali, M., et al.: DROPS: division and replication of data in the cloud for optimal performance and security. IEEE Trans. Cloud Comput. (2015). https://doi.org/10.1109/TCC.2015.2400460
5. Savitha, S., Thangam, P.: SePeCloud—a fog computing based deduplication and privacy preserving online updating for improving DROPS in cloud. Wulfenia J. 24(9), 69–87 (2017)
6. Zeng, L., Chen, S., Wei, Q., Feng, D.: SeDas: a self-destructing data system based on active storage framework. IEEE Trans. Magn. 49(6), 2548–2554 (2013)
7. Yuan, J., Yu, S.: Public integrity auditing for dynamic data sharing with multiuser modification. IEEE Trans. Inf. Forensics Secur. 10(8), 1717–1726 (2015)
8. Shamir, A.: How to share a secret. Commun. ACM 22(11), 612–613 (1979)
9. Liu, Y.X., Harn, L., Yang, C.N., Zhang, Y.Q.: Efficient (n, t, n) secret sharing schemes. J. Syst. Softw. 85(6), 1325–1332 (2012)
10. Boru, D., Kliazovich, D., Granelli, F., Bouvry, P., Zomaya, A.Y.: Energy-efficient data replication in cloud computing datacenters. Clust. Comput. 18(1), 385–402 (2015)
11. Bilal, K., et al.: On the characterization of the structural robustness of data center networks. IEEE Trans. Cloud Comput. 1(1), 1–1 (2013)
12. Tan, Y., Jiang, H., Sha, E.H.M., Yan, Z., Feng, D.: SAFE: a source deduplication framework for efficient cloud backup services. J. Signal Process. Syst. 72(3), 209–228 (2013)
13. Kappes, G., Hatzieleftheriou, A., Anastasiadis, S.V.: Dike: virtualization-aware access control for multitenant file systems. Technical Report No. DCS2013-1 (2013)
14. Tang, Y., Lee, P.P., Lui, J.C., Perlman, R.: FADE: secure overlay cloud storage with file assured deletion. Secur. Priv. Commun. Netw. 50, 380–397 (2010)
15. Wang, C., Chow, S.S., Wang, Q., Ren, K., Lou, W.: Privacy-preserving public auditing for secure cloud storage. IEEE Trans. Comput. 62(2), 362–375 (2013)
16. Suresh, A., Varatharajan, R.: Competent resource provisioning and distribution techniques for cloud environment. Clust. Comput. (2017). https://doi.org/10.1007/s1058-017-1293-6
17. Li, J., Li, Y.K., Chen, X., Lee, P.P., Lou, W.: A hybrid cloud approach for secure authorized deduplication. IEEE Trans. Parallel Distrib. Syst. 26(5), 1206–1216 (2015)
S. Savitha is currently pursuing her Ph.D. degree in Information and Communication Engineering at Anna University, Chennai, Tamil Nadu, India. She received her B.E. and M.E. degrees in Computer Science and Engineering from Adhiyamaan College of Engineering, affiliated to Anna University, Chennai, Tamil Nadu, India, in 2013 and 2015, respectively. Her research work involves Cloud Computing, Big Data, Network Security, High Performance Computing, Information Retrieval and Mobile Computing. She is a member of the International Association of Engineers.

P. Thangam graduated with a B.E. degree in Computer Hardware and Software Engineering from Avinashilingam University, Coimbatore in 2001. She received her M.E. degree in Computer Science and Engineering from Anna University, Chennai in 2007 and completed her doctorate in Information and Communication Engineering at Anna University, Chennai in 2013. She has a total teaching experience of 12 years in various reputed engineering colleges in Tamil Nadu. She is currently serving as Associate Professor in the Department of Computer Science and Engineering of Coimbatore Institute of Engineering and Technology. She has more than 25 publications in various journals and conferences at national and international levels. Her research interests are Medical Image Analysis, Image Processing, Databases, Cryptography, Embedded Systems and Ubiquitous Computing. She has guided more than 25 UG projects and 15 PG projects, and 5 research scholars are currently pursuing their Ph.D. under her supervision at Anna University, Chennai. She is a life-time member of the International Association of Engineers, the Indian Society for Technical Education and the International Association of Computer Science and Information Technology. She serves as an editorial board member of international journals such as the International Journal of Engineering Research and Science and the Asian Engineering Review, and as a reviewer for many international journals and conferences.

L. Latha holds an undergraduate qualification in Electronics & Communication Engineering and a post-graduate qualification in Applied Electronics, both received from Bharathiar University. She completed her doctorate in the field of biometric authentication at Anna University, Chennai and has 22 years of teaching experience following a year's stint in research at the Fluid Control Research Institute, Palghat. She is currently working as an Associate Professor in the Department of Computer Science & Engineering at Kumaraguru College of Technology, Coimbatore. She won the Best Ph.D. Thesis award from the Computer Society of India, taking second place at the national level. Her research interests include multimodal biometrics, network security, pattern recognition and digital image processing. She has published 20 papers in international journals, presented 25 papers at various international and national conferences and has won the Best Paper award three times. She has also completed a funded research project in the area of biometric access control.
