
Multiple Attribute Authorities for Public Cloud Storage using Robust and

Auditable Access Control

1. INTRODUCTION

1.1 INTRODUCTION TO CLOUD COMPUTING AND ITS IMPLICATIONS

Cloud storage is a promising and important service paradigm in cloud computing. Benefits of
using cloud storage include greater accessibility, higher reliability, rapid deployment and
stronger protection, to name just a few. Despite the mentioned benefits, this paradigm also
brings forth new challenges on data access control, which is a critical issue to ensure data
security. Since cloud storage is operated by cloud service providers, who are usually outside
the trusted domain of data owners, the traditional access control methods in the Client/Server
model are not suitable in cloud storage environment. The data access control in cloud storage
environment has thus become a challenging issue. To address the issue of data access control
in cloud storage, there have been quite a few schemes proposed, among which Ciphertext-
Policy Attribute-Based Encryption (CP-ABE) is regarded as one of the most promising
techniques. A salient feature of CP-ABE is that it grants data owners direct control over their data based on access policies, providing flexible, fine-grained and secure access control for cloud storage systems.

In CP-ABE schemes, the access control is achieved by using cryptography, where an owner’s
data is encrypted with an access structure over attributes, and a user’s secret key is labelled
with his/her own attributes. Only if the attributes associated with the user’s secret key satisfy
the access structure can the user decrypt the corresponding ciphertext to obtain the plaintext.
So far, the CP-ABE based access control schemes for cloud storage have been developed into
two complementary categories, namely, single-authority scenario, and multi-authority
scenario.
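To make the decryption condition concrete, the sketch below models an access structure as a tree of threshold gates and checks whether an attribute set satisfies it. Real CP-ABE enforces this condition cryptographically inside the ciphertext; this is only the boolean logic, with an illustrative policy.

```python
# Simplified model of the CP-ABE decryption condition: decryption succeeds
# only when the user's attribute set satisfies the access structure.
# A policy node is either an attribute string (leaf) or a tuple
# (k, children): at least k of the children must be satisfied.

def satisfies(policy, attributes):
    """Return True if `attributes` satisfies the policy tree."""
    if isinstance(policy, str):                      # leaf: a single attribute
        return policy in attributes
    k, children = policy
    matched = sum(satisfies(c, attributes) for c in children)
    return matched >= k

# ("doctor" AND "cardiology") OR "admin", written as threshold gates
policy = (1, [(2, ["doctor", "cardiology"]), "admin"])

print(satisfies(policy, {"doctor", "cardiology"}))   # True
print(satisfies(policy, {"doctor", "nurse"}))        # False
print(satisfies(policy, {"admin"}))                  # True
```

A threshold gate with k = 1 acts as OR and k = len(children) acts as AND, which is why this tree form can express the usual boolean access policies.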

Although existing CP-ABE access control schemes have a lot of attractive features, they are
neither robust nor efficient in key generation. Since there is only one authority in charge of all attributes in single-authority schemes, if this authority goes offline or crashes, all secret key requests become unserviceable during that period. A similar problem exists in multi-authority schemes, since each of the multiple authorities manages a disjoint attribute set.

Department of ISE, DSCE Page 1



In single-authority schemes, the only authority must verify the legitimacy of users’ attributes
before generating secret keys for them. As the access control system is associated with data
security, and the only credential a user possesses is his/her secret key associated with his/her
attributes, the process of key issuing must be cautious. However, in the real world, the
attributes are diverse. For example, to verify whether a user can drive may need an authority to
give him/her a test to prove that he/she can drive. Thus he/she can get an attribute key
associated with driving ability. To deal with the verification of various attributes, the user may
be required to be present to confirm them. The inefficiency of the authority's service results in a single-point performance bottleneck, which causes system congestion: users often cannot obtain their secret keys quickly and must wait in the system queue.

This significantly reduces user satisfaction when users expect real-time services. On the other hand, if there is only one authority issuing secret keys for some attributes, and verification requires the user's presence, another kind of long service delay arises, since the authority may be far from the user's home or workplace. As a result, the single-point performance bottleneck affects the efficiency of the secret key generation service and greatly degrades the utility of existing schemes for access control in large cloud storage systems. Furthermore, the same problem exists in multi-authority schemes, since multiple authorities separately maintain disjoint attribute subsets and issue secret keys associated with users' attributes within their own administrative domains.

A straightforward idea to remove the single-point bottleneck is to allow multiple authorities to jointly manage the universal attribute set, so that each of them can distribute secret keys to users independently. By adopting multiple authorities to share the load, the influence of the single-point bottleneck can be reduced to a certain extent. However, this solution brings new security threats. Since there are multiple functionally identical authorities performing the same procedure, it is hard to find the responsible authority if mistakes are made or malicious behaviour occurs during secret key generation and distribution. For example, an authority may falsely distribute secret keys beyond a user's legitimate attribute set. This security weakness makes the straightforward idea unable to meet the security requirements of access control for public cloud storage.

2. LITERATURE SURVEY

2.1 EARLIER METHODS THAT WERE USED

Survey of Multi-Keyword Ranked Search over Encrypted Cloud Data with Multiple Data Owners (Z. Fu, K. Ren, J. Shu, X. Sun, and F. Huang)

In recent years, many researchers have proposed a large number of efficient search schemes over encrypted cloud data. The general process of a search scheme is divided into five steps: extracting document features, constructing a searchable index, generating a search trapdoor, searching the index based on the trapdoor, and returning the search results. These search schemes provide different query capabilities, including single-keyword search, multi-keyword search, fuzzy keyword search, similarity search, and so on.
Meeyoung Cha et al. (2010) studied influence factors and established that a larger number of followers does not necessarily mean more influence on Twitter.

However, Antoine Boutet et al. made predictions based on party characteristics, using user behaviour analysis and an influence factor, with the 2012 UK general elections as a case study. This model takes followers as the influence factor. Johan Bollen used POMS scores to establish sentiment values classified into mood categories. This method does not use any machine-learning algorithm to train on the data as positive or negative; instead, the system measures sentiment using a syntactic, term-based approach, in order to detect as much mood signal as possible from very brief Twitter messages.
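A term-based mood scorer of the kind described above can be sketched in a few lines; the tiny lexicon and its weights here are purely illustrative assumptions, not any published mood vocabulary.

```python
# Minimal lexicon-driven sentiment scorer in the spirit of the syntactic,
# term-based approach described above (no machine learning involved).
# The lexicon and its weights are illustrative only.

MOOD_LEXICON = {
    "happy": 1.0, "great": 1.0, "win": 0.5,
    "sad": -1.0, "angry": -1.0, "lose": -0.5,
}

def mood_score(tweet):
    """Sum lexicon weights over the tweet's lowercase tokens."""
    tokens = tweet.lower().split()
    return sum(MOOD_LEXICON.get(tok, 0.0) for tok in tokens)

print(mood_score("great win today"))   # 1.5
print(mood_score("sad to lose"))       # -1.5
```

Because the score is a plain sum over tokens, no training data is needed, which matches the trade-off described above: simplicity at the cost of missing negation and context.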
Marko Skoric et al. proposed using word and term frequencies for tweets corresponding to key terms related to the subject, such as democratic leader names and democratic organization names, which played an important role in predicting the 2011 Singapore elections. This system depends heavily on the relationships among the users who publish tweets containing the keywords related to the case study. Romero et al. established that people use Twitter hashtags to get noticed in trends, and that hashtag use remains consistent over time for politically debated topics. Support Vector Machine (SVM), Naive Bayes (NB) and Maximum Entropy (MaxEnt) classifiers are well discussed in the literature, such as by Pang and Lee, whereas Artificial Neural Networks (ANN) have been discussed only a limited number of times.

Rodrigo Moraes et al. [11] discussed the comparative features of ANN and SVM in detail for document-level sentiment classification. They used TF-IDF (Term Frequency - Inverse Document Frequency) to extract feature values for unbalanced datasets. Long-Sheng Chen et al. [12] implemented a feed-forward BPN network and used sentiment orientation to compute the values at each neuron. The model depends upon the sentiment orientation of terms used in the documents. Cozma and Chen [9] studied the US 2010 midterm elections. They found that incumbent politicians and challengers used Twitter in different ways. Incumbents concentrated more on recent events, whereas challengers preferred the strategy of attacking the incumbents. Following the same elections, Pew Research Centre researchers [10] found that tweets from election participants urged users toward active participation and to publish their votes on Twitter. Several such studies show that Twitter plays a vital role in the political communication environment in many countries. It offers an extremely rich source of information for those interested in studying public opinion and political behaviour. Min Song and Min Chol Kim attempted to mine Twitter data in real time for the 2012 Korean elections.
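TF-IDF, which Moraes et al. used above for feature extraction, weights a term by its frequency in a document, discounted by how many documents contain it. The sketch below uses a common smoothed formulation; library implementations (e.g. scikit-learn's TfidfVectorizer) differ in normalization details.

```python
# TF-IDF: term frequency in the document times inverse document frequency
# across the corpus. A smoothing "+1" in the denominator avoids division
# by zero for unseen terms, and "+1" on the idf keeps weights positive.
import math

def tf_idf(term, doc, corpus):
    tf = doc.count(term) / len(doc)                  # fraction of tokens
    df = sum(1 for d in corpus if term in d)         # documents containing term
    idf = math.log(len(corpus) / (1 + df)) + 1
    return tf * idf

docs = [["vote", "party", "vote"], ["party", "leader"], ["weather", "rain"]]
print(round(tf_idf("vote", docs[0], docs), 3))       # 0.937
```

A term that appears in every document gets a low idf, so frequent but uninformative words contribute little to the feature vector.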
3. SYSTEM ARCHITECTURE/OVERVIEW

3.1 PROCEDURE-BASED ARCHITECTURE

3.2 SYSTEM ANALYSIS
4. METHODOLOGY
 Requirements Gathering and Analysis
This is the first and foremost stage of any project. As ours is an academic project, for requirements gathering we surveyed IEEE journals, collected many related IEEE papers, and finally selected a paper based on its setting and relevance. For the analysis stage we took references from the selected paper, carried out a literature survey of related papers, and gathered all the requirements of the project.
 System Design
System design is divided into three parts: GUI design, UML design and database design. UML design helps develop the project in a clear way: the use case diagram shows the different actors and their use cases, the sequence diagram shows the flow of the project, and the class diagram gives information about the different classes in the project and the methods to be used in them. The third and most important part of system design is database design, where we design the database based on the number of modules in our project.
 Implementation
Implementation is the phase where we produce the practical output of the work done in the design stage. Most of the coding of the business logic layer comes into action in this stage; it is the main and crucial part of the project.
 Testing
Testing is done by the developer at every stage of the project, and bug fixing and module-level testing are also done by the developer. Here we resolve all runtime errors. Once the project is fully ready, we proceed to deployment on client systems; as this is an academic project, we deployed it in our college lab only, with all the needed software on Windows OS.
 Maintenance
The maintenance of our project is a one-time process only.
Functional Requirements
• User Sign-up: all application users have to fill in the mandatory fields to get an account in our application and access it.
• User Login: to access the application, we verify the user's login name and password.
• Owner Login: the owner is the super user of the application and can log in with his/her user name and password.
• Owner Menu: the owner can encrypt file data and upload it to the database.
• AA Request: an attribute authority can accept users' requests to access the files.
• File View: in the cloud server we can view files and their data.
Non-Functional Requirements
Security: the application should be highly secured and available, so that the administrator can prevent misuse of the application.
Usability: the presentation of this application is easy to use, so it is simple for the client to understand and respond to.
Reliability: with the functionalities available in the application, this infrastructure has a high probability of delivering the required queries.
Response time: the time taken by the application to complete a task given by the client is very short.
Extensibility: our application can be extended to incorporate changes made by current applications to enhance the performance of the product. This applies to future work that will be done on the application.
Robustness: the project is fault tolerant with respect to illegal user/receiver inputs. Error checking has been built into the platform to prevent failures.
5. IMPLEMENTATION
Modules
Central Authority (CA):
The central authority (CA) is the administrator of the entire system. It is responsible for the
system construction by setting up the system parameters and generating public key for each
attribute of the universal attribute set. In the system initialization phase, it assigns each user a
unique Uid and each attribute authority a unique Aid. For a key request from a user, CA is
responsible for generating secret keys for the user on the basis of the received intermediate
key associated with the user’s legitimate attributes verified by an AA. As an administrator of
the entire system, CA has the capacity to trace which AA has incorrectly or maliciously
verified a user and has granted illegitimate attribute sets.
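One way to picture the CA's tracing capability is an audit log of key requests: each record ties a user id to the AA that verified it and the attribute set that AA approved. The record fields and method names below are illustrative assumptions, not the scheme's actual protocol.

```python
# Sketch of the traceability idea: for every secret-key request the CA logs
# which AA performed the legitimacy verification and which attribute set it
# approved. If an approved set exceeds the user's legitimate attributes,
# the log identifies the responsible AA.
from dataclasses import dataclass

@dataclass
class KeyIssueRecord:
    uid: str                 # user id (Uid) assigned by CA
    aid: str                 # id (Aid) of the AA that verified the user
    attributes: frozenset    # attribute set the AA approved

class CentralAuthority:
    def __init__(self):
        self.audit_log = []

    def issue_key(self, uid, aid, attributes):
        self.audit_log.append(KeyIssueRecord(uid, aid, frozenset(attributes)))

    def trace(self, uid, legitimate):
        """Return AAs that approved attributes outside the user's
        legitimate set, i.e. candidates for misbehaviour."""
        return [r.aid for r in self.audit_log
                if r.uid == uid and not r.attributes <= frozenset(legitimate)]

ca = CentralAuthority()
ca.issue_key("u1", "AA3", {"doctor", "admin"})   # AA3 approved too much
ca.issue_key("u2", "AA1", {"nurse"})
print(ca.trace("u1", {"doctor"}))                # ['AA3']
print(ca.trace("u2", {"nurse"}))                 # []
```

In the real scheme this comparison would be backed by cryptographic evidence in the intermediate keys, so a flagged AA cannot deny its misbehaviour; the log shown here only captures the bookkeeping side of that idea.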
Attribute Authorities (AAs):
The attribute authorities (AAs) are responsible for performing user legitimacy verification
and generating intermediate keys for legitimacy verified users. Unlike most of the existing
multi-authority schemes where each AA manages a disjoint attribute set respectively, our
proposed scheme involves multiple authorities to share the responsibility of user legitimacy
verification and each AA can perform this process for any user independently. When an AA is
selected, it will verify the users’ legitimate attributes by manual labor or authentication
protocols, and generate an intermediate key associated with the attributes that it has
legitimacy-verified. The intermediate key is a new concept that assists the CA in generating secret keys.
Data Owner:
The data owner (Owner) defines the access policy about who can get access to each file and
encrypts the file under the defined policy. First of all, each owner encrypts his/her data with a
symmetric encryption algorithm. Then, the owner formulates access policy over an attribute
set and encrypts the symmetric key under the policy according to public keys obtained from
CA. After that, the owner sends the whole encrypted data and the encrypted symmetric key
(denoted as ciphertext CT) to the cloud server to be stored in the cloud.
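The owner's two-step encryption described above can be sketched structurally. The SHA-256 counter-mode keystream below is a toy stand-in for a real symmetric cipher such as AES-GCM, and `abe_encrypt` is only a placeholder for the actual CP-ABE step, which would require a pairing-based cryptography library; all names and data shapes here are assumptions.

```python
# Structural sketch of the owner's workflow: encrypt the file with a fresh
# symmetric key, then protect that key under the access policy. NOT real
# cryptography: the keystream is a toy stand-in and abe_encrypt is a stub.
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """XOR data with a SHA-256 counter-mode keystream (toy cipher)."""
    out = bytearray()
    for block in range((len(data) + 31) // 32):
        pad = hashlib.sha256(key + block.to_bytes(8, "big")).digest()
        chunk = data[block * 32:(block + 1) * 32]
        out.extend(b ^ p for b, p in zip(chunk, pad))
    return bytes(out)

def abe_encrypt(sym_key: bytes, policy: str) -> dict:
    # Placeholder: a real CP-ABE scheme encrypts sym_key so that only
    # attribute sets satisfying `policy` can recover it.
    return {"policy": policy, "wrapped_key": sym_key}

def owner_encrypt(plaintext: bytes, policy: str) -> dict:
    sym_key = secrets.token_bytes(32)          # fresh symmetric key per file
    return {                                   # the ciphertext CT sent to cloud
        "data": keystream_xor(sym_key, plaintext),
        "key_ct": abe_encrypt(sym_key, policy),
    }

ct = owner_encrypt(b"patient record", '"doctor" AND "cardiology"')
sym = ct["key_ct"]["wrapped_key"]              # a legitimate user recovers this
print(keystream_xor(sym, ct["data"]))          # b'patient record'
```

The design point this illustrates is hybrid encryption: the bulk data is encrypted once with a cheap symmetric cipher, and only the short symmetric key is protected by the expensive attribute-based step.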
User:
The data consumer (User) is assigned a global user identity Uid by CA. The user possesses a
set of attributes and is equipped with a secret key associated with his/her attribute set. The
user can freely get any interested encrypted data from the cloud server. However, the user can
decrypt the encrypted data if and only if his/her attribute set satisfies the access policy
embedded in the encrypted data.
Cloud Server:
The cloud server provides a public platform for owners to store and share their encrypted
data. The cloud server doesn’t conduct data access control for owners. The encrypted data
stored in the cloud server can be downloaded freely by any user.
6. RESULTS AND PERFORMANCE EVALUATION
The figure shows the average waiting time versus the arrival rate and the number of AAs when μ1 = 20/min, μ2 = 200/min, and K = 30. From the figure, we can see that the average waiting time increases rapidly with the arrival rate when arrival rates are low, but later becomes steady because newly arriving users are rejected by the system due to the limited length of the waiting queue. We can also see that when the average failure rate of the single-authority scheme is kept below 5%, it can only support an arrival rate of less than 20/min. By increasing the number of AAs, the system can greatly increase its service capacity, supporting a higher arrival rate at the same failure rate. If we employ 7 AAs, the system can support an arrival rate of up to 150/min with a failure rate below 5%. It is easy to infer that we can dimension our system based on the observed key request rate, and then use an appropriate number of AAs to provide high-quality service.
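The trade-off above can be reproduced with a small queuing model. The sketch below computes the mean queuing delay for an M/M/c/K system, treating the c AAs as parallel servers with verification rate μ (here μ1 = 20/min) and a system capacity of K; the exact model used in the evaluation may differ, so the numbers are illustrative only.

```python
# M/M/c/K queue: key requests arrive at rate lam, are served by c AAs each
# with rate mu, and at most K requests may be in the system (extra arrivals
# are rejected). Mean waiting time follows from Little's law applied to
# the steady-state queue length and the effective (accepted) arrival rate.
from math import factorial

def mmck_wait(lam, mu, c, K):
    """Mean waiting time in queue for an M/M/c/K system."""
    a = lam / mu                                 # offered load in Erlangs
    rho = a / c                                  # per-server utilization
    # unnormalized steady-state probabilities p[n], n = 0..K
    p = [a**n / factorial(n) if n <= c
         else a**c / factorial(c) * rho**(n - c)
         for n in range(K + 1)]
    total = sum(p)
    p = [x / total for x in p]
    p_block = p[K]                               # probability of rejection
    lam_eff = lam * (1 - p_block)                # accepted arrival rate
    Lq = sum((n - c) * p[n] for n in range(c + 1, K + 1))
    return Lq / lam_eff                          # Little's law: Wq = Lq / lam_eff

# e.g. 7 AAs, verification rate 20/min each, capacity 30, arrivals 100/min
print(round(mmck_wait(100, 20, 7, 30), 4), "min")
```

As the text observes, once arrivals approach the system's capacity the blocking probability p[K] grows, so the accepted rate and the waiting time both level off rather than diverging.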

7. CHALLENGES IN CLOUD COMPUTING

The following are the concerns and challenges we hear most often.

 Making the correct choice: SaaS, IaaS, PaaS and a plethora of options and
variations within. It seems that most anything today is a-a-S (available as a service).
Sorting through it all is daunting. It’s easier to narrow down when you know the
business requirements you need to meet. Every cloud strategy begins with the business
strategy and a determination of the risk/reward of various choices. Business case first,
cloud implementation last.

 Lack of Executive Support: This is a difficult challenge with many fathers. In


one way or another, a lack of support generally comes down to fear, uncertainty and
doubt. Winning favour begins with speaking the language of business, understanding
business issues and goals and, as above, building a sound business case for your
proposals. Align proposals with major corporate campaigns.

 Loss of Control: It’s tough letting go. This is not always a matter of security
either. Entrusting a third party to be a responsible, honest and reliable business partner
(and one for which you are being held accountable) is a cause of frequent agita for
some. Think of the other revenue-generating projects and business contributions IT can
accomplish when a chunk of operational responsibility is lifted from your shoulders.

 Vendor Lock-in: Even if you’re already using the cloud you still would like to
have control over your data and be able to switch service providers freely. Ensuring
data portability is essential, as is understanding the data ownership and retrieval
policies of the provider.
 Security and Compliance: The 800-pound gorilla. At the end of the day, there
may be some data or applications that your organization will never feel comfortable
letting out of sight. However, this is also an area of intense focus by some service
providers because the demand is so great, and it’s a major point of competitive
differentiation. Security and compliance is not a cloud computing issue per se; it’s
more a cloud service provider issue.

Some will excel at providing it, some will dabble in it, and others will not have it in their
business model.

 Availability and Reliability: The 799-pound gorilla. As with security,


availability and reliability are a service provider issue. There is no question that
delivering on a stringent SLA requires a commitment to best practices, a thoroughly
redundant architecture, 24/7/365 staffing by trained and experienced technicians, and
top-flight hardware, software and network products. For example, Peak 10 guarantees
99.9 percent or greater uptime with 100 percent uptime for critical infrastructure,
spelled out clearly in our service level agreements.

 Lack of Skills, Knowledge and Expertise: It’s different in the cloud, and
many IT organizations may not have the necessary tools or resources to implement,
monitor and manage cloud solutions. It’s not what they are geared to do. Educating
staff about new processes and tool sets, or hiring staff with new skills, may be
necessary … increasingly so as more of your operations and applications move to the
cloud over time. Selecting the right service provider will definitely help ease the
transition and fill gaps.

 Performance and Bandwidth Cost: Businesses can save money on system


acquisitions, management and maintenance, but they may have to spend more for the
bandwidth. For smaller applications this is not usually an issue, but cost can be high
for the data-intensive applications. Delivering and receiving intensive and complex
data over the network requires sufficient bandwidth to stave off latency and application
time outs.

 Vendor Transparency: “Trust me” is not what you want to hear from your
service provider. If that’s what you get, respond with “show me.” Short of divulging
trade secrets or competitively sensitive operational information, a service provider
should be open about its processes and methods for delivering on its SLAs. This is
especially true when it comes to security and compliance.

 Integration with Existing Infrastructure: This is a difficult yet essential piece


of maximizing the value of cloud services. Frankly, it must be addressed. For many IT
departments this challenge already exists within their organizations in the form of
shadow IT and BYOD.

8. APPLICATIONS
Cloud computing has been credited with increasing competitiveness through cost reduction,
greater flexibility, elasticity and optimal resource utilization. Here are a few situations where
cloud computing is used to enhance the ability to achieve business goals.

1. Infrastructure as a service (IaaS) and platform as a service (PaaS)

When it comes to IaaS, using an existing infrastructure on a pay-per-use scheme seems to be


an obvious choice for companies saving on the cost of investing to acquire, manage and
maintain an IT infrastructure. There are also instances where organizations turn to PaaS for the
same reasons while also seeking to increase the speed of development on a ready-to-use
platform to deploy applications.

2. Private cloud and hybrid cloud

Among the many incentives for using cloud, there are two situations where organizations are
looking into ways to assess some of the applications they intend to deploy into their
environment through the use of a cloud (specifically a public cloud). While in the case of test
and development it may be limited in time, adopting a hybrid cloud approach allows for
testing application workloads, therefore providing the comfort of an environment without the
initial investment that might have been rendered useless should the workload testing fail.
Another use of hybrid cloud is also the ability to expand during periods of limited peak usage,
which is often preferable to hosting a large infrastructure that might seldom be of use. An
organization would seek to have the additional capacity and availability of an environment
when needed on a pay-as-you-go basis.
3. Test and development

Probably the best scenario for the use of a cloud is a test and development environment. This
entails securing a budget, setting up your environment through physical assets, significant
manpower and time. Then comes the installation and configuration of your platform. All this
can often extend the time it takes for a project to be completed and stretch your milestones.
With cloud computing, there are now readily available environments tailored for your needs at
your fingertips. This often combines, but is not limited to, automated provisioning of physical
and virtualized resources.

4. Big data analytics

One of the aspects offered by leveraging cloud computing is the ability to tap into vast
quantities of both structured and unstructured data to harness the benefit of extracting business
value.

Retailers and suppliers are now extracting information derived from consumers’ buying
patterns to target their advertising and marketing campaigns to a particular segment of the
population. Social networking platforms are now providing the basis for analytics on
behavioural patterns that organizations are using to derive meaningful information.

5. File storage

Cloud can offer you the possibility of storing your files and accessing, storing and retrieving
them from any web-enabled interface. The web services interfaces are usually simple. At any
time and place you have high availability, speed, scalability and security for your environment.
In this scenario, organizations are only paying for the amount of storage they are actually
consuming and do so without the worries of overseeing the daily maintenance of the storage
infrastructure.

There is also the possibility to store the data either on or off premises depending on the
regulatory compliance requirements. Data is stored in virtualized pools of storage hosted by a
third party based on the customer specification requirements.

6. Disaster recovery
This is yet another benefit of using cloud, based on the cost effectiveness of a disaster recovery (DR) solution that provides faster recovery from a mesh of different physical locations, at a much lower cost than a traditional DR site with its fixed assets and rigid procedures.

7. Backup

Backing up data has always been a complex and time-consuming operation. This included
maintaining a set of tapes or drives, manually collecting them and dispatching them to a
backup facility with all the inherent problems that might happen in between the originating
and the backup site. This way of ensuring a backup is performed is not immune to problems
such as running out of backup media, and there is also time to load the backup devices for a
restore operation, which takes time and is prone to malfunctions and human errors.

Cloud-based backup, while not being the panacea, is certainly a far cry from what it used to
be. You can now automatically dispatch data to any location across the wire with the assurance
that neither security, availability nor capacity are issues.
While the above list of cloud computing uses is not exhaustive, it certainly gives an incentive to use the cloud, compared with more traditional alternatives, to increase IT infrastructure flexibility, as well as to leverage big data analytics and mobile computing.
9. CONCLUSION

By effectively reformulating the CP-ABE cryptographic technique into our novel framework, our proposed scheme provides fine-grained, robust and efficient access control with one CA and multiple AAs for public cloud storage. Our scheme employs multiple AAs to share the load of the time-consuming legitimacy verification and to stand by for serving newly arriving user requests, and includes an auditing method to trace an attribute authority's potential misbehaviour.

We conducted detailed security and performance analysis to verify that our scheme is secure and efficient. The security analysis shows that our scheme can effectively resist individual and colluding malicious users, as well as honest-but-curious cloud servers. Besides, with the proposed auditing and tracing scheme, no AA can deny its misbehaved key distribution. Further performance analysis based on queuing theory showed the superiority of our scheme over traditional CP-ABE based access control schemes for public cloud storage.
10. REFERENCES
[1] K. Xue, Y. Xue, J. Hong, W. Li, H. Yue, D. S. L. Wei, and P. Hong, "Robust and auditable access control with multiple attribute authorities for public cloud storage" (base paper).

[2] Z. Fu, K. Ren, J. Shu, X. Sun, and F. Huang, “Enabling personalized search over encrypted
outsourced data with efficiency improvement,” IEEE Transactions on Parallel & Distributed
Systems, vol. 27, no. 9, pp. 2546–2559, 2016.

[3] Z. Fu, X. Sun, S. Ji, and G. Xie, "Towards efficient content-aware search over encrypted outsourced data in cloud," in Proceedings of the 2016 IEEE Conference on Computer Communications (INFOCOM 2016). IEEE, 2016, pp. 1–9.

[4] K. Xue and P. Hong, “A dynamic secure group sharing framework in public cloud
computing,” IEEE Transactions on Cloud Computing, vol. 2, no. 4, pp. 459–470, 2014.

[5] Y. Wu, Z. Wei, and H. Deng, "Attribute-based access to scalable media in cloud-assisted content sharing," IEEE Transactions on Multimedia, vol. 15, no. 4, pp. 778–788, 2013.
