
Challenges Organizations Face While Building a Database on the Cloud and Future Expectations in Cloud Computing

Abstract

The use of cloud computing services is on the rise now that almost everyone has something to store virtually for future access or review. Today, the cloud environment allows users to store valuable information and access it from their smartphones. In spite of this success, data security has always remained a significant issue. One of the primary reasons why the cloud is prone to security threats is that data is scattered across different locations all over the world. Users therefore need assurances of data security before they will place their trust in the cloud environment; a trustworthy cloud is fundamental to winning user confidence. According to recent reports, the global cloud computing market is growing rapidly and is expected to reach $270 billion by 2020.

Numerous data protection measures have been proposed in studies of cloud computing, and new approaches to strengthen data protection techniques continue to emerge. The purpose of this study is to analyze the challenges organizations face while building a database on the cloud and future expectations in cloud computing. It discusses the proposed solutions and their application to the cloud computing environment, as well as what the future holds for organizations using these services.


Introduction

Cloud computing is a rapidly growing, next-generation computation platform. It refers to both applications and resources delivered on demand over the internet as services; the hardware and software resources that deliver these services to meet clients' requirements are collectively referred to as the cloud. The National Institute of Standards and Technology (NIST) defines cloud computing as convenient, on-demand network access to a shared pool of configurable computing resources (Bi et al., 2010).

Recent reports show that about 95% of North America-based organizations have migrated critical applications to the cloud in the past year, and building databases in the cloud environment ranks high among these workloads. This trend shows that most organizations have recognized the benefits of cloud computing, including cost efficiency, availability, and scalability. However, the resulting hybrid IT environment brings challenges of its own. The most common problem associated with building databases in the cloud is increased infrastructure complexity, followed closely by the lack of control over cloud-based applications and infrastructure.

As the web applications hosted on cloud platforms grow more diverse, DBMS designers are making long strides toward a successful cloud platform by mitigating challenges such as managing databases in the cloud environment. Here we discuss the database management challenges organizations face in the cloud along several parameters: efficient multi-tenancy, data privacy and security, scalability and elasticity, database consistency in the cloud, and database system control. We also present the proposed solutions for each and conclude by suggesting future expectations for database management in the cloud environment.

Database storage in the cloud

In the cloud, a database is stored on multiple dynamic servers at data centers rather than on specific servers, as with traditional data storage. When one sends a database to the cloud, the data is stored on one or more servers at data centers. Currently, the cloud environment offers little support for virtualization-enhanced database designs, though this is expected to change. Today, cloud service components such as DBMSs and RDBMSs are considered cloud-unfriendly because they cannot be scaled with ease. Many data management technologies, such as Google's App Engine, use key-value stores, which are insufficient for web applications such as social networks, collaborative editing, and online gaming, because these applications need consistent access to groups of keys.

Although key-value stores are a preferable choice for scalability, they do not support rich functionality. They also have scalability limits, and once a user reaches those limits, the only options are data migration or load balancing. The proposed solution to this challenge is the DBaaS model. The DBaaS model proposed by Curino et al. (2011) incorporates three components: workload analysis, partitioning, and allocation and live migration. It overcomes challenges such as elastic scalability, data privacy, and efficient multi-tenancy.
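
To make the multi-key access problem concrete, the following is a minimal, single-process Python sketch of the key-group idea behind G-Store (Das et al., 2010): a declared group of keys is guarded by one lock so that a multi-key update appears atomic. All names are illustrative, and a real system would operate across distributed nodes and handle overlapping groups and failures.

```python
import threading
from contextlib import contextmanager

store = {"player:1": 100, "player:2": 0}   # a plain key-value store
group_locks = {}                           # one lock per declared key group
registry = threading.Lock()

@contextmanager
def key_group(*keys):
    """Guard a group of keys with a single lock so a multi-key update is
    atomic with respect to other operations on the same group.
    Single-process sketch; assumes groups do not partially overlap."""
    group = frozenset(keys)
    with registry:
        lock = group_locks.setdefault(group, threading.Lock())
    with lock:
        yield

# An online-gaming-style transfer that must see both keys consistently.
with key_group("player:1", "player:2"):
    store["player:1"] -= 10
    store["player:2"] += 10
```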

Cloud database elasticity and scalability

Applications in the cloud environment need to support an unlimited number of users, which can only be achieved by making the system scalable. Most database systems emphasize scalability, availability, and fault tolerance but miss the mark on consistency and ease of development. There are, however, proposed solutions to this challenge: Data Fusion and Data Fission.

Data Fusion ensures scalability while maintaining multi-key atomicity, which guarantees atomic multi-key access. The second approach is Data Fission, where the database is partitioned rather than stored as monolithic tables; Data Fission minimizes distributed transactions. Another proposed solution to the scalability challenge is adding or removing nodes as load fluctuates. Scholars and IT experts suggest that database elasticity can be achieved in two ways: database migration and virtual machine cloning.
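
A minimal sketch of the partitioning idea behind Data Fission, assuming simple stable hash-based routing (the partition count and names are hypothetical; production systems would also co-locate related rows so transactions stay on one partition):

```python
import hashlib

NUM_PARTITIONS = 8   # hypothetical number of partition servers

def partition_for(key: str) -> int:
    # A stable hash guarantees the same key always routes to the same
    # partition, so single-key operations never span servers.
    digest = hashlib.sha1(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % NUM_PARTITIONS

partitions = [dict() for _ in range(NUM_PARTITIONS)]   # one store per server

def put(key: str, value) -> None:
    partitions[partition_for(key)][key] = value

def get(key: str):
    return partitions[partition_for(key)].get(key)

put("user:42", {"name": "Ada"})
assert get("user:42") == {"name": "Ada"}
```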

One proposed technique for live database migration across cloud architectures within an organization is Zephyr. Zephyr minimizes service interruption for the data being migrated by incorporating a synchronized dual-mode service in its operation. Another proposed method is Albatross, an efficient, low-cost technique for live migration of a tenant database in a multi-tenant database management system. Migration is the critical component of elasticity and database load balancing; as such, data movement should be treated as a first-class notion in any multi-tenant database management system. Multi-tenant database management systems use a variety of multi-tenancy models for resource sharing, including machine sharing, process sharing, and table sharing. The Albatross model is based on the shared-process model, which provides the right balance of effective resource sharing, improved performance, diversity, and scaling. It also minimizes the impact on tenant SLAs by performing its work in three phases: initialization, iterative copying, and atomic handover (Das et al., 2010).
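
The three-phase structure can be sketched as follows. This is a toy, in-memory illustration of the iterative-copy pattern (initialization, iterative copying, atomic handover), not the actual Albatross or Zephyr implementation; the Node class and the threshold are invented for the example.

```python
class Node:
    """Minimal stand-in for a tenant's database node (illustration only)."""
    def __init__(self, data=None):
        self.data = dict(data or {})
        self.dirty = set()          # keys written since the last copy round
        self.serving = False

    def write(self, key, value):
        self.data[key] = value
        self.dirty.add(key)

    def take_delta(self):
        delta = {k: self.data[k] for k in self.dirty}
        self.dirty.clear()
        return delta

def live_migrate(src: Node, dst: Node, threshold: int = 2):
    dst.data = dict(src.data)            # phase 1: initial snapshot copy
    while True:                          # phase 2: ship deltas while src serves
        delta = src.take_delta()
        dst.data.update(delta)
        if len(delta) <= threshold:      # change rate low enough to hand over
            break
    src.serving = False                  # phase 3: atomic handover
    dst.data.update(src.take_delta())    # final delta shipped with src paused
    dst.serving = True

src, dst = Node({"a": 1, "b": 2}), Node()
src.serving = True
src.write("c", 3)                        # a write arriving before handover
live_migrate(src, dst)
assert dst.data == {"a": 1, "b": 2, "c": 3} and dst.serving
```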

The other way to achieve elasticity in a cloud database is virtual machine cloning. This approach assumes that the cloud environment uses a virtualized architecture and that each database replica runs in a separate virtual machine. One such shared-nothing system, Dolly, clones the entire virtual machine of an existing replica. Dolly uses the replication technique to estimate the latency of spawning a new replica based on the snapshot size and the virtual machine's database resynchronization latency (Cecchet et al., 2011).

Additionally, Curino et al. (2011) suggest Relational Cloud, a DBaaS (Database-as-a-Service) model, to adapt to the peculiarities of the cloud computing environment. It achieves elasticity by using a graph-based partitioning method to spread large amounts of data across many machines.

Further, scholars propose that elasticity and scalability can be achieved if the system has a correct and consistent view of the mappings of partitions to nodes and of partitions to their replicas. If a replica master exists, the system must know its location at all times. They suggest Bigtable's design as a strong model for scalability and elasticity in the cloud because it segregates different parts of the system and provides abstractions that simplify the whole design. Bigtable itself performs no data replication, so the notion of a replica master is absent.

Also, Bigtable guarantees a fault-tolerant store through a separate component called Chubby, which provides log-based replication. Consistency among the replicas is assured through the Paxos protocol. According to Vaquero et al. (2011), Paxos assures safety in the presence of various failures and guarantees that the replicas remain consistent even when some of them fail. Although the highest levels of consistency come at a cost, Chubby is considerably small and does not hurt system performance; it achieves better scalability by limiting the number of nodes requiring synchronization.

The challenge of autonomic data management

Autonomic data management is another essential requirement in database management and is closely related to scalability and elasticity. Traditionally, trained DBAs looked after the database system and took the necessary actions to improve its performance. Database autonomy is an important factor in monitoring the behavior and performance of the whole system, as well as in ensuring elastic scaling and load balancing. According to Agrawal et al. (2010), more research is required to develop an autonomous, intelligent system that can support multiple users with no prior database expertise.

The Albatross technique is among the best-suited methods for live data migration in shared storage and helps achieve elasticity. Albatross can be extended with intelligent controls that model the cost of data migration and predict the behavior of the entire system. In this way, autonomic data management leads to efficient database management.

Consistency and availability

Cloud vendors are required to make data available within a reasonable time. Availability in a cloud database is typically ensured by keeping a number of replicas of the data at multiple locations. Although cloud service vendors promise data availability, consistency between the replicas remains a challenge. To overcome inconsistency, scholars have proposed a tree-based approach that maximizes consistency while improving performance: the primary server and all replica servers are connected in a way that reduces the probability of transaction failure.
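
A toy sketch of the tree-based idea (topology and class names invented for illustration): the primary applies an update and fans it out to its children, which relay it onward, so no single server has to contact every replica directly.

```python
class Replica:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)
        self.store = {}

    def apply(self, key, value):
        # Apply locally, then relay down the tree; each node contacts
        # only its own children, which bounds per-node fan-out.
        self.store[key] = value
        for child in self.children:
            child.apply(key, value)

leaves = [Replica(f"r{i}") for i in range(4)]
primary = Replica("primary", [Replica("mid0", leaves[:2]),
                              Replica("mid1", leaves[2:])])
primary.apply("x", 42)
assert all(leaf.store["x"] == 42 for leaf in leaves)
```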

On the other hand, it is almost impossible to maintain consistency when data is replicated over a large geographical area, because the consistency part of ACID is typically compromised to obtain reasonable system availability. Scholars argue that current solutions to this problem are insufficient, as they sacrifice consistency and ease of programming for scalability and elasticity.


Additionally, modern systems need per-object replication to ensure high availability and to improve performance by distributing workloads among the replicas. As different systems continue to use different mechanisms to synchronize the replicas, eventual consistency and timeline consistency may remain the biggest challenges (Curino et al., 2011).

Storage performance

One of the most crucial factors cloud users consider when storing their data and applications in the cloud is how well they will perform. Most DBaaS users find storage straightforward because the cloud service provider handles most of the underlying infrastructure responsibilities, but one thing remains unresolved: the storage performance one may need. In the public cloud, the performance of the storage layer depends on the capacity provisioned. As such, the user must either over-provision capacity to obtain the required database performance or opt for more expensive provisioned-performance storage options. If workloads have large performance spikes, the required DBaaS storage can increase costs significantly.
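
A small worked example makes the capacity/performance coupling concrete. The figures below are hypothetical, not any vendor's actual pricing; they simply illustrate baseline storage whose IOPS scale with provisioned capacity.

```python
# Hypothetical figures for illustration only (not a vendor quote).
IOPS_PER_GB = 3          # baseline storage: performance tied to capacity
PRICE_PER_GB = 0.10      # $ per GB-month

data_size_gb = 500
required_iops = 9_000

# To reach 9,000 IOPS, capacity must be provisioned for performance,
# not for the amount of data actually stored.
provisioned_gb = max(data_size_gb, required_iops / IOPS_PER_GB)
print(provisioned_gb)                  # 3000.0 GB provisioned for 500 GB of data
print(provisioned_gb * PRICE_PER_GB)   # $300.0/month vs $50.0 for stored size alone
```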

To gain more control and spend significantly less, Perry (2018) suggests using NetApp Cloud Volumes for database workloads, which targets the high performance expected of demanding database deployments.

Cloud privacy, security, and trust

As cloud adoption continues to rise among organizations, data security and privacy remain prime concerns. To address them, Kaur and Bhardwaj (2012) proposed a hybrid technique that combines several algorithms, RSA, a random number generator, and Triple DES, to secure communication through digital-signature-based authentication. Digital-signature-based authentication establishes a secure connection, while Triple DES is used to encrypt blocks of data.
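
A minimal sketch of such a hybrid scheme, assuming the PyCryptodome package (Kaur and Bhardwaj do not specify an implementation): a randomly generated Triple DES session key encrypts the data, RSA wraps the session key for the receiver, and an RSA digital signature authenticates the ciphertext.

```python
from Crypto.PublicKey import RSA
from Crypto.Cipher import PKCS1_OAEP, DES3
from Crypto.Signature import pkcs1_15
from Crypto.Hash import SHA256
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad, unpad

sender = RSA.generate(2048)      # sender's signing key pair
receiver = RSA.generate(2048)    # receiver's encryption key pair

# Random session key (the random-number-generator component).
session_key = DES3.adjust_key_parity(get_random_bytes(24))
iv = get_random_bytes(8)

# Triple DES encrypts the bulk data in CBC mode.
plaintext = b"tenant record: balance=100"
ct = DES3.new(session_key, DES3.MODE_CBC, iv).encrypt(pad(plaintext, DES3.block_size))

# RSA wraps the session key; only the receiver can unwrap it.
wrapped_key = PKCS1_OAEP.new(receiver.publickey()).encrypt(session_key)

# Digital-signature-based authentication over the ciphertext.
signature = pkcs1_15.new(sender).sign(SHA256.new(iv + ct))

# Receiver side: verify the signature, unwrap the key, decrypt the data.
pkcs1_15.new(sender.publickey()).verify(SHA256.new(iv + ct), signature)
key = PKCS1_OAEP.new(receiver).decrypt(wrapped_key)
assert unpad(DES3.new(key, DES3.MODE_CBC, iv).decrypt(ct), DES3.block_size) == plaintext
```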

Scholars have also proposed an in-memory database encryption technique to secure sensitive data against untrusted cloud users. In this scheme, the client needs a key from a synchronizer to decrypt the encrypted shared data it receives from the owner; the synchronizer's sole purpose is to store the keys and the correlated shared information separately. One setback of this technique is the delay introduced by the additional communication with the central synchronizer. The proposed remedy is to adopt group encryption and minimize communication between the synchronizer and the relevant nodes.

Huang and Tso (2012) propose an asymmetric encryption mechanism to secure data in the cloud environment and ensure data privacy. Their technique builds on commutative encryption and ElGamal encryption. Commutative encryption can be applied to data more than once, and the order in which the keys are used for encryption and decryption does not matter, which is beneficial in cloud applications where privacy is paramount.
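
Huang and Tso's exact construction is not reproduced here, but the commutative property itself can be demonstrated with an SRA-style exponentiation cipher over a prime field (a toy sketch with parameters far too small for real use): encrypting under two keys yields the same result in either order, and decryption order is likewise irrelevant.

```python
import secrets
from math import gcd

p = 2**127 - 1                      # public prime modulus (toy size)

def keygen():
    # Pick e coprime to p-1 so a decryption exponent d = e^-1 mod (p-1) exists.
    while True:
        e = secrets.randbelow(p - 3) + 2
        if gcd(e, p - 1) == 1:
            return e, pow(e, -1, p - 1)

m = 123456789
eA, dA = keygen()                   # party A's key pair
eB, dB = keygen()                   # party B's key pair

# Encryption commutes: A-then-B equals B-then-A.
assert pow(pow(m, eA, p), eB, p) == pow(pow(m, eB, p), eA, p)

# Decryption also works regardless of order.
c = pow(pow(m, eA, p), eB, p)
assert pow(pow(c, dB, p), dA, p) == m
```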

AlZain et al. (2011) argue that most security and data privacy issues in cloud computing can only be addressed by preventing intrusion, maintaining high data integrity, and ensuring the availability of cloud services. They therefore propose storing data in multiple clouds. To secure data across the various clouds, Shamir's secret-sharing algorithm can be of great use: it guards against external and internal unauthorized access by dividing the data into chunks and generating a polynomial function for each chunk. The processed shares are then stored with different CSPs.
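
A compact, pure-Python sketch of Shamir's secret sharing over a prime field (toy parameters): the secret is split into n shares, any k of which reconstruct it, so the shares stored at different CSPs reveal nothing individually.

```python
import secrets

PRIME = 2**127 - 1        # field modulus; must exceed the secret

def make_shares(secret: int, k: int, n: int, p: int = PRIME):
    """Split `secret` into n shares; any k of them recover it."""
    coeffs = [secret] + [secrets.randbelow(p) for _ in range(k - 1)]
    def poly(x):                       # Horner evaluation of the polynomial
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % p
        return acc
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(shares, p: int = PRIME) -> int:
    """Lagrange interpolation at x = 0 yields the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % p
                den = den * (xi - xj) % p
        secret = (secret + yi * num * pow(den, -1, p)) % p
    return secret

shares = make_shares(secret=41, k=3, n=5)   # e.g. one share per CSP
assert recover(shares[:3]) == 41            # any 3 shares suffice
```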

Privacy is a significant barrier to deploying databases in the cloud because it reduces the trust users place in the system. If cloud clients could encrypt all the data stored in the DBaaS, data privacy risks would largely disappear. Here we examine a set of techniques designed to provide privacy, for example, by keeping third parties from accessing users' data. The first is CryptDB, which employs different encryption levels for different data, depending on what the client wishes to run in the cloud environment. In CryptDB, all queries are evaluated on the encrypted data in the cloud, not on the client, and the results are sent back to the client for decryption.

Privacy, efficiency, and scalability challenges in the cloud environment can also be tackled through workload-awareness. The principle behind this approach is to monitor actual query patterns and data accesses and then employ mechanisms that use these observations for optimization and security.

According to Delettre et al. (2011), data security and privacy can be achieved through a data concealment approach, which merges real data with fake data to falsify the actual data volume. Authorized parties can easily differentiate and separate the factual data from the counterfeit data. Concealment increases the overall volume of stored data but provides enhanced security for private data; its main objective is to protect real data from malicious users and attackers. Watermarking serves as the key for the actual data, and only authorized users hold the watermarking key. Moreover, Manivannan and Sujarani (2010) have proposed a lightweight database encryption mechanism known as the Transposition, Substitution, Folding, and Shifting (TSFS) algorithm.

To address the challenges of data confidentiality, authentication, and access control, increasing cloud reliability and trustworthiness is fundamental. The Diffie-Hellman approach has been proposed to ensure the integrity and safety of cloud services. Diffie-Hellman is a cryptographic key-exchange algorithm aimed at securing communication. The proposed system consists of three modules: administration, authentication, and encryption. The administration module lets cloud service providers register and administer users, the authentication module authenticates users, and the encryption module encrypts data.
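
The key-agreement step of Diffie-Hellman fits in a few lines (textbook toy parameters; real deployments use standardized 2048-bit groups such as those in RFC 3526): both sides derive the same shared secret without ever transmitting it.

```python
import secrets

p, g = 23, 5                         # toy public parameters (textbook values)

a = secrets.randbelow(p - 2) + 1     # Alice's private key
b = secrets.randbelow(p - 2) + 1     # Bob's private key

A = pow(g, a, p)                     # public values exchanged over the wire
B = pow(g, b, p)

# Each side combines its own private key with the other's public value.
assert pow(B, a, p) == pow(A, b, p)  # identical shared secret: g^(ab) mod p
```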

Trust is a complex relationship between the client and the cloud service provider, and it should be planned for before adopting any cloud service. The relationship should be reliable and measurable so that users can make trustworthy decisions. Trust must be evaluated with regard to the relationship between the two parties and the degree of trust, and it should be monitored to ensure both parties conduct their business in harmony (Sun et al., 2011).

Data storage on shared compute infrastructure

Most public cloud environments require data to be stored on shared infrastructure, which can leave API scripts vulnerable to cyber-attacks. Although multi-tenancy is designed to optimize workloads and reduce costs by sharing resources across multiple cloud environments, it can enable side-channel attacks and creates an IT security threat in which an attacker accesses critical information through a shared tenant's node. To mitigate this, data encryption and continuous monitoring are necessary within an organization. As data is transmitted outside secure premises, it must be protected using techniques such as cryptography and tokenization.
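
As a concrete illustration of tokenization, here is a toy in-memory sketch (real deployments keep the vault in a hardened, access-controlled service): sensitive values are replaced by random tokens before data leaves secure premises.

```python
import secrets

class TokenVault:
    """Toy tokenization: replace a sensitive value with a random token and
    keep the mapping in a vault that never leaves secure premises."""
    def __init__(self):
        self._forward, self._reverse = {}, {}

    def tokenize(self, value: str) -> str:
        if value in self._reverse:          # reuse the token for known values
            return self._reverse[value]
        token = secrets.token_hex(16)       # token carries no information
        self._forward[token] = value
        self._reverse[value] = token
        return token

    def detokenize(self, token: str) -> str:
        return self._forward[token]

vault = TokenVault()
t = vault.tokenize("4111-1111-1111-1111")   # share/store only the token
assert vault.detokenize(t) == "4111-1111-1111-1111"
```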

Further, data should be secured through digital certificates and multi-factor authentication. Beyond encryption, monitoring is critical: monitoring tools reinforce traditional anti-virus and anti-spam tools with intrusion detection, network traceability, and denial-of-service (DoS) attack monitoring. Cloud clients should therefore use up-to-date security innovations from leading vendors.

Additionally, organizations are advised to raise awareness of the risks associated with cloud solutions by training their IT departments. Organizations should cultivate a culture of constant vigilance around cloud computing services, as this is the most cost-effective way of securing data stores (Deloitte, 2019).

Moving to a public cloud infrastructure

For many organizations, giving up some level of compliance control to the public cloud vendor is a big challenge. Almost all CIOs and CEOs want to know how they can benefit from cloud computing while preserving their regulatory and compliance posture. Most cloud service providers do not disclose their standard terms and conditions, and as a result they fail to give clients customized services for unique requirements.

Another challenge is that the public cloud environment is changing fast, resulting in a relatively new market offering that is still maturing in terms of operating models and industry standards. Cloud regulators are also evolving, and many cannot yet provide clear guidance on cloud computing. As a result, alliances, government agencies, and other industry groups are at the forefront of developing their own standards. Organizations are therefore advised to keep monitoring compliance and regulatory changes to ensure quality cloud services.

To overcome these challenges, the organization's risk management team should develop and implement cloud-specific procurement guidelines with favorable terms and conditions, and put in place the mechanisms needed for a smooth, well-engineered data migration to the cloud environment. Besides establishing fundamental guidelines and regulatory requirements for the cloud, the organization should define baseline compliance requirements covering access management, user identity, incident response, data protection, and residency requirements, among others.

Although organizations are still not fully content with current improvements in the cloud environment, most are taking advantage of the cloud's agility, scalability, and performance. This has resulted in complex hybrid IT environments, with many organizations requiring new tools and new skills to secure and manage them.

Further, integrating new cloud services with on-premise IT infrastructure is itself a challenge, given the complexity of customization and configuration and the associated costs. Organizations considering a hybrid model must understand how cloud applications and services operate and how well they will integrate with the existing on-premise IT stack and applications (Deloitte, 2019).

Legacy IT architecture

Many people and organizations are reluctant to store critical information in the cloud because of security concerns, upgrade costs, risk, and regulatory compliance. Integration problems can also arise when cloud applications and the on-premise IT architecture fail to integrate as expected.

Also, in the absence of guidance on cloud consumption, each cloud client may choose an application that seems fit for the purpose but does not optimize the cloud services to the client's satisfaction.

Establishing governance and controls is the ultimate solution to these problems. Effective governance and controls for cloud adoption cover the client's processes, applications, infrastructure, data, and management controls. Structured governance, in turn, ensures constant performance monitoring, improves service effectiveness, and aligns investment with the set objectives.

Further, an organization should adopt guiding principles for how cloud solutions are established and should evaluate the software in use against operational performance metrics, not only its features and functionality.

Finally, to improve integration flexibility, the organization should focus on modernization opportunities and prepare a business case to justify them. An organization moving toward consuming cloud services should first upgrade its existing IT architecture to minimize cloud integration challenges and accelerate the attainment of cloud benefits (Deloitte, 2019).

Cloud and data accessibility

Even when the quality of cloud services is not otherwise compromised, a service is ineffective if users cannot access their data when needed. Downtime and accessibility problems are common in the cloud because the data stored there can only be reached over an internet connection rather than via a local link. When the network or internet connection is down, cloud services are down as well, and data stored in the cloud cannot be accessed.

Besides internet and network connection failures, the performance of the cloud infrastructure can be affected by the workload, the cloud environment, and the number of active users. To ensure resilience against such problems, clients should sign up for cloud services with a reputable provider that has robust resilience measures in place to protect data in the cloud.

Today, many cloud clients, especially organizations, complain that cloud providers fail to put appropriate customer support systems in place. CIOs, for instance, often raise concerns about data ownership and the loss of control over data when moving to the cloud. A key part of the solution is retaining a say in how and where the data is stored.

Ensuring seamless integration of applications is another challenge. To solve this problem, the client should ensure the cloud service provider grants the required level of control over the data and the server, along with the freedom to extract the data and move it elsewhere if need be (Casey, 2015).

Data sovereignty and jurisdiction

Cloud data is subject to the laws of the country in which it is stored, and data processors and data controllers are subject to the regulations of the country in which the data is received. This raises the new challenge of data sovereignty. Applicable laws from other jurisdictions may also reach cloud data, depending on scenarios that are still evolving through case law. Widespread internet use is predicted to drive legislative changes in the coming years, and numerous jurisdictions may apply to data stored in the cloud, as is the case today in the UK: a non-UK resident must acquire authorization to access data stored in a cloud located in the UK. Cloud clients should therefore keep watching the legal issues surrounding data sovereignty (KPMG, 2016).

Data Protection Acts and principles

As data protection in the cloud environment continues to be a central topic, the seventh data protection principle obligates organizations to have adequate technical and organizational measures in place to protect personal data from attackers, unauthorized disclosure, damage, or loss. Organizations remain accountable for what the cloud service provider does with personal data, including data loss or destruction while the data is being processed by the provider.

Organizations are therefore required to strengthen their cybersecurity by protecting their systems, networks, and data in cyberspace. Data protection at this level has challenged organizations across the world, as most are unable to put the necessary measures in place to mitigate cybercrime. However, scholars have suggested simpler and cheaper cybercrime regulations that would tackle the problem and keep personal data highly secure against unauthorized parties. These rules include implementing adequate measures to protect personal data against accidental loss, damage, or destruction, and requiring that the data protection contract between the cloud service provider and the organization set out the responsibilities and liabilities of the data processor and the data controller (KPMG, 2016).

Cloud providers store and move data among multiple data centers located in several jurisdictions, which in many cases may be outside the country's borders. The eighth principle, for instance, restricts the transfer of personal data outside Europe unless an adequate level of protection is in place for the processing and storage of that data (KPMG, 2016).

Cloud service clients should therefore focus on securing a good contract that clearly and unambiguously states the rights and responsibilities of both the client and the cloud service provider. Clients should thoroughly understand their own responsibilities, those of the provider, and how risk is shared, in order to make an informed decision (KPMG, 2016).

Cross-instance effects

Another challenge is cross-instance effects, which result from changes to the cloud infrastructure as a whole or within other users' instances. Such changes can adversely affect a client's service with respect to confidentiality, integrity, and availability. Security risk factors such as virtualized environments, hardware, and host systems, which sit at a higher architectural level, must be considered because they are not under the user organization's control. Changes related to scalability, granular and flexible service fees, and fast-paced service provider changes may significantly affect the security posture of the whole system or service. Organizations therefore need security processes and controls that adequately cope with the various changes in the cloud environment (Hofmann et al., 2014).


Regulation and compliance restrictions

The high speed and performance of the cloud are compelling, but regulation and compliance restrictions hinder many organizations from moving data to the cloud. This set of circumstances is making the hybrid cloud the most favorable model. For instance, a school may adopt the hybrid cloud because it allows student information to be shared between the institution and the federal government while still complying with HIPAA. A hybrid setup makes certainty around security even more imperative, since it pairs public cloud services with an on-premise solution, typically an investment in a traditional private cloud environment or a pre-existing data center (Hughes, 2019).

Cloud data storage technologies and management

Data distribution tasks in the cloud environment are usually performed by software frameworks such as MapReduce, through implementations such as Hadoop and Dryad. These frameworks typically operate on internet-scale file systems such as HDFS and GFS, which differ from traditional distributed file systems in their storage structure, application programming interface, and access patterns. Because they do not implement the standard POSIX interface, they introduce compatibility issues with legacy applications and file systems. Scholars propose methods for supporting the MapReduce framework on cluster file systems such as IBM's GPFS, and Patil et al. (2009) have proposed new API primitives for scalability and concurrent data access.
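
To illustrate the programming model these frameworks expose, here is a toy single-process word count in the MapReduce style; Hadoop and Dryad distribute the same map, shuffle, and reduce phases across a cluster.

```python
from collections import defaultdict
from itertools import chain

def map_phase(doc):
    # map: emit a (word, 1) pair for every word, as a Hadoop mapper would
    return [(word, 1) for word in doc.split()]

def reduce_phase(pairs):
    # shuffle + reduce: group values by key, then sum each group
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return {key: sum(values) for key, values in groups.items()}

docs = ["cloud data store", "cloud database"]
counts = reduce_phase(chain.from_iterable(map_phase(d) for d in docs))
print(counts)   # {'cloud': 2, 'data': 1, 'store': 1, 'database': 1}
```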

The future of cloud computing

Despite the many challenges facing organizations that use cloud computing services, the future looks bright. According to Michael Corrado, a worldwide marketing manager with Hewlett Packard Enterprise, the future of cloud computing will feature a combination of cloud-based software products and on-premise IT infrastructure. This hybrid IT solution will balance the flexibility and scalability of the cloud with the security and control of private data centers.

The cloud will also implement additional security measures in a world where private data is increasingly vulnerable. Major technology vendors will adjust their business models to allow flexible, consumption-based payment for on-premise IT infrastructure, bringing cloud and on-premise IT into balance.

Additionally, a hybrid IT future will compel software companies to provide their services and solutions as cloud services, letting customers spread their needs across multiple platforms based on their preferred software vendors.

Some authors argue that a decade from now, every organization will be conducting its business operations in the cloud, paving the way for greater flexibility, productivity, and more efficient operations and service delivery. In the coming decade, many of today's cloud challenges will be resolved, with robust solutions put in place by cloud service providers. Many cloud clients will then be assured of the security, privacy, and efficiency of their crucial data, and as privacy measures continue to mature, more clients, especially organizations, will build trust with cloud service providers and entrust them with vital information.

According to Matt Riley, the co-founder of Swiftype, the cloud challenges many organizations witness today will be a thing of the past within a few years. Cloud service providers are on the right track toward solving these challenges so that the cloud assures flexibility, efficiency, and privacy of information at all times.

According to Sbarski (2019), the future of cloud computing promises observability and new security solutions that will help organizations evolve their systems and architectures to take advantage of the cloud's flexibility. However, as serverless adoption continues to grow, the serverless community must do more to create awareness of what the technology involves.

In conclusion, cloud computing has emerged as a compelling platform for managing and delivering services over the internet. Over the years, it has rapidly changed the landscape of IT and is turning the promise of utility computing into reality. However, current cloud computing services have not yet evolved to their full potential: many challenges continue to hinder smooth service delivery, including security management, elasticity, and automatic resource management and provisioning, among others. There remains tremendous opportunity for researchers to contribute to this field and make a significant impact on the industry.

This paper has compiled a comprehensive study of the security issues facing cloud computing databases and their solutions. It began by introducing cloud databases, followed by the factors that contribute to the challenges of cloud computing. It then discussed a wide range of security threats to cloud computing architectures and showed that attackers can exploit a broad range of cloud database resources in various ways. It concluded by recommending newly proposed security mechanisms that ensure the security of cloud computation and its applications within an organization.

Lastly, we have tackled the most prevalent challenges relating to cloud computing and their solutions. We hope this paper has provided a better understanding of the cloud challenges witnessed today and paved the way for further research on ways to mitigate them.

References:

Bi, J., Zhu, Z., Tian, R., & Wang, Q. (2010). Dynamic provisioning modeling for virtualized multitier applications in cloud data-center. IEEE 3rd International Conference on Cloud Computing.

Curino, C., Jones, E. P. C., Popa, R. A., et al. (2011). Relational Cloud: A database-as-a-service for the cloud. CIDR.

Das, S., Agrawal, D., & El Abbadi, A. (2010). G-Store: A scalable data store for transactional multi key access in the cloud. ACM SoCC, 163–174.

Cecchet, E., Singh, R., Sharma, U., & Shenoy, P. (2011). Dolly: Virtualization-driven database provisioning for the cloud. VEE '11, ACM.

Vaquero, L. M., Rodero-Merino, L., & Buyya, R. (2011). Dynamically scaling applications in the cloud. ACM SIGCOMM Computer Communication Review, 41(1).

Das, S., Agarwal, S., Agrawal, D., & El Abbadi, A. (2010). ElasTraS: An elastic, scalable, and self managing transactional database for the cloud. UCSB Computer Science Technical Report.

Kaur, A., & Bhardwaj, M. (2012). Hybrid encryption for cloud database security. International Journal of Engineering Science & Technology (IJESAT), 2, 737–741.

Huang, & Tso, R. (2012). A commutative encryption scheme based on ElGamal encryption. Database Encryption, 4, 156–159.

AlZain, A., Soh, B., & Pardede, E. (2011). MCDB: Using multi-clouds to ensure security in cloud computing. Cloud Security, 784–791.

Delettre, Boudaoud, K., & Riveill, M. (2011). Cloud computing, security and data concealment. IEEE Symposium on Computers and Communications (ISCC), 424–431.

Manivannan, D., & Sujarani, R. (2010). Lightweight and secure database encryption using the TSFS algorithm. International Conference on Computing, Communication and Networking Technologies (ICCCNT), 1–7.

Sun, Chang, G., Sun, L., & Wang, X. (2011). Surveying and analyzing security, privacy and trust issues in cloud computing environments. Procedia Engineering, 15, 2852–2856.

Patil, S., et al. (2009). In search of an API for scalable file systems: Under the table or above it? HotCloud.

KPMG (2016). Moving to the cloud: Key risk considerations. Retrieved from https://home.kpmg/content/dam/kpmg/pdf/2016/04/moving-to-the-cloud-key-risk-considerations.pdf

Deloitte (2019). Retrieved from https://www2.deloitte.com/content/dam/Deloitte/ca/Documents/consulting/ca_cloud_pov_EN_doc.PDF

Agrawal, D., El Abbadi, A., & Antony, S. (2010). Data management challenges in cloud computing infrastructures. University of California, Santa Barbara. Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.209.125&rep=rep1&type=pdf

Casey, J. (2015). Top 5 cloud computing challenges. Trilogy Technologies. Retrieved from https://trilogytechnologies.com/top-five-challenges-of-cloud-computing/

Hofmann, M., Loos, P., Fettke, P., & Schaefer, T. (2014). Selecting the right cloud operating model: Privacy and data security in the cloud. ISACA Journal. Retrieved from https://www.isaca.org/Journal/archives/2014/Volume-3/Documents/Selecting-the-Right-Cloud-Operating-Model_joa_Eng_0514.pdf

Hughes, J. (2019). The challenges of database management in hybrid cloud. Retrieved from http://www.manageforce.com/blog/the-challenges-of-database-management-in-hybrid-cloud

Perry, Y. (2018). Cloud based database intro: Challenges and advantages. Retrieved from https://cloud.netapp.com/blog/cloud-based-database-challenges-and-advantages

Sbarski, P., Hall, S., & Raihan, I. (2019). On the future of cloud computing. The New Stack. Retrieved from https://thenewstack.io/on-the-future-of-cloud-computing/

Shelar, M., Sane, S., & Kharat, V. (2016). Database management challenges in cloud environment. IJMTR.

The future of cloud computing will blow your mind – Exclusive interviews. (2019). Retrieved from https://www.futureofeverything.io/future-of-cloud-computing/
