CHAPTER 1 INTRODUCTION
1.1 Introduction
A database management system at an organizational level is meant for planning, controlling, managing, securing, and delivering data wherever and whenever necessary. Businesses rely heavily on databases for storing their sensitive operational data and for their core functions. Organizations cannot afford failure or malfunction in their databases, as it can have a severe impact on the business. If these databases are not properly secured or monitored, they may be subject to data breaches and other vulnerabilities with catastrophic consequences.

Data is one of the most important assets in the contemporary world. It plays a vital role in the functioning and operations of almost all businesses and organizations. To accommodate exponentially growing data, enterprises long ago began using database management systems for better management and easier data storage. A database management system is a software system for storing and managing a large volume of data, allowing multiple users to access and manipulate that data in a consistent and secure manner.

Cloud computing is the latest evolution of Internet-based computing. The Internet provided a common infrastructure for applications; static web pages soon added interactivity, followed by hosted applications such as Hotmail. As these web applications added more user configuration, they came to be known as Software-as-a-Service (SaaS). In this chapter, cloud DBMS based data transactions, cloud framework based fault identification, data modeling and its reliability features, and the applications of DBMS are reviewed.
1.2 Cloud Computing
Cloud computing is not a craze; it is driven by some tangible and very powerful benefits. There are real advantages to this model whether the cloud is provided as an internal corporate resource, as a service hosted by a third party, or as a hybrid of these two models. Figure 1.1 represents a cloud computing based Internet connection.
Figure 1.1 Cloud computing based Internet connection
Cloud computing shares the basic theme of previous paradigms for provisioning computing infrastructure. However, cloud computing differs in that it shifts the location of the infrastructure to the network in order to provide the basic components.
These basic components, such as CPUs, storage, and network bandwidth, are provided as a service by specialized service providers at a low unit cost. Since the available resources are virtually limitless and failed components are replaced without any service interruption or data loss, users of these services need not worry about scalability and backups. These advantages derive from specialization and economies of scale:
1.2.1 Specialization
A great deal of specialized knowledge is required to set up and operate systems at scale, addressing security, scalability, platform maintenance (patches, updates), and more. In a traditional model, each development effort has to cultivate this expertise on staff. Cloud computing enables these capabilities to be staffed by experts who are shared across many customers. Instead of hiring one person who does a decent job across all of these areas, cloud computing providers can hire individuals with deep expertise in each area and then divide and distribute this expense across a large number of customers. This degree of specialization enables a variety of the benefits that are driving cloud computing.
1.2.2 Economies of Scale
This is also a powerful driver for cloud computing. An ideal platform is very expensive to build. The networking equipment, servers, redundant high-speed connectivity, power, data storage/backup, and so on can require a huge start-up cost for a single product or project. Moreover, most development efforts fail, so the economics make no sense for investment at this level in each project. Cloud computing enjoys economies of scale because the same investment can be divided and distributed over a large number of projects. If one project fails, it can be replaced by a number of new projects that continue to share the initial investment.
1.2.3 Key Benefits of Cloud Computing
1.2.3.1. Lower costs: All resources, including expensive networking equipment, servers, IT personnel, etc. are shared, resulting in reduced costs, especially for small to mid-sized applications and prototypes.
1.2.3.2. Agility: Provisioning-on-demand enables faster set-up and tear-down of resources on an as-needed basis. Funding a project initiates the service; if the project is killed, the cloud contract is simply terminated.
1.2.3.3. Shifting CapEx to OpEx: Cloud computing enables companies to shift money from capital expenses (CapEx) to operating expenses (OpEx), enabling the customer to focus on adding value in their areas of core competence, in business and process insight, instead of building and maintaining IT infrastructure. In short, cloud computing enables companies to focus money and resources on innovating.
1.2.3.4. Dynamic scalability: Most applications experience spikes in traffic. Instead of over-buying equipment to accommodate these spikes, many cloud services can smoothly and efficiently scale to handle them with a more cost-effective pay-as-you-go model.
1.2.3.5. Large scale prototyping/load testing: Cloud computing makes large scale prototyping and load testing much easier. A thousand servers can easily be spawned in the cloud to load test an application and then be released as soon as the test is done. Try doing that with owned or corporate servers.
1.2.3.6. Simplified maintenance: Upgrades and patches are rapidly deployed across the shared infrastructure, as are backups.
1.2.3.7. Diverse platform support: Many cloud computing services offer built-in support for a rich collection of client platforms, including browsers, mobile devices, and more. This diverse platform support enables applications to reach a broader base of users right from the outset.
1.2.3.8. Faster development: Cloud computing platforms provide many of the core services that would normally be built in house under a traditional development model. These services, together with other tools and templates, can significantly accelerate the development cycle.
1.2.3.9. Faster management approval: This is closely related to cost savings. Because cloud computing has very low upfront costs, the management approval process is greatly accelerated, leading to faster innovation. In fact, costs are so low that individuals can easily fund the expense personally to demonstrate the benefits of their solution, while avoiding organizational sloth.
1.3 Cloud Service Models
Cloud platforms deliver many types of services, including Software as a Service (SaaS), Platform as a Service (PaaS), Hardware as a Service (HaaS), and Database as a Service (DaaS).
1.3.1 Software as a Service (SaaS)
If an application is hosted as a service and customers access it through the Internet, the model is called Software as a Service. Customers do not need to maintain or support the software if it is hosted off site. On the other hand, if the hosting service decides to change the software, that is out of the customer's hands. SaaS works best when the software can be used out of the box as is, without needing a lot of change or integration with other systems. All the patching and upgrading, as well as keeping the infrastructure running, are done by the provider. Figure 1.2 represents Software as a Service.
Figure 1.2 Software as a Service
SaaS provides an application or piece of software from the service provider. Cost can be something of a double-edged sword. On the one hand, costs for accessing the software recur: instead of paying once for the software, the customer is billed more the more the software is used. On the other hand, in some cases it is enough to pay a modest amount up front and thereafter be billed only on the basis of actual use of the application. For vendors, SaaS offers stronger protection of their intellectual property and creates a continuous flow of income. Numerous types of software lend themselves to the SaaS model. Typically, software that performs a simple task without much need to interact with other systems is an ideal candidate for SaaS. Those who do not want to perform software development but need high-powered applications can also benefit from SaaS. Some such applications are listed below. The services an organization runs will depend on the cloud provider and on the organization's requirements, and these services will likely affect how the organization lays out its cloud infrastructure.
- Accounting
- Customer relationship management (CRM)
- IT service management
- Video conferencing
- Web content management
- Web analytics
One of the biggest benefits of SaaS is that it costs less money than buying the application outright. Other benefits include a smaller staff, better marketing, security, familiarity with the World Wide Web, more bandwidth, customization, and web reliability.
1.3.2 Platform as a Service (PaaS)
Following SaaS, another application delivery model is Platform as a Service (PaaS). PaaS can supply all the resources required to build applications and services entirely from the Internet, without having to download or install software. Application design, development, testing, deployment, and hosting are the services included in PaaS. In addition, PaaS services include web service integration, team collaboration, database integration, reliability, storage, security, versioning, and state management.
The main deficiency of PaaS is the lack of interoperability and portability among providers. If an application is created with one cloud provider and the customer decides to move to another provider, it will not be possible to do so without paying a high price. Finally, if the provider goes out of business, all applications built and databases created on the platform will be lost. Figure 1.3 shows Platform as a Service.
Figure 1.3 Platform as a Service
PaaS allows clients to access a computing platform over a cloud computing solution. PaaS options: PaaS offerings fall into one of the following three types of systems.
1.3.2.1 Add-on development facilities: These facilities allow existing SaaS applications to be customized. Often, PaaS developers and users are required to purchase subscriptions to the add-on SaaS application.
1.3.2.2 Stand-alone environments: These environments do not include technical, licensing, or financial dependencies on specific SaaS applications and are used for general development.
1.3.2.3 Application delivery-only environments: These environments support hosting-level services such as on-demand scalability and security. They do not include development, debugging, and test capabilities.
1.3.3 Hardware as a Service (HaaS)
The next form of service available in cloud computing is Hardware as a Service (HaaS). Unlike SaaS and PaaS, HaaS does not provide applications to customers; it offers only the hardware, so that an organization can put whatever it wants onto it. Figure 1.4 represents Hardware as a Service.
Figure 1.4 Hardware as a Service
HaaS allows customers to rent hardware resources from a service provider. The infrastructure can be dynamically scaled up or down based on application resource needs, and multiple tenants can use the equipment at the same time. HaaS typically allows the following resources to be rented:
- Memory
- Storage space
- Server space
- Network equipment
- CPU cycles
Since customers are charged on the basis of how many resources they consume, resources are typically billed on a utility computing basis. HaaS involves the following elements.
1.3.3.1 Network:
The network includes hardware for load balancing, firewalls, routers, and so on.
1.3.3.2 Platform virtualization environment:
A platform virtualization environment allows clients to run the virtual machines they require.
1.3.3.3 Utility computing billing:
This is a system specifically set up to bill customers based on how many system resources they use.
1.3.3.4 Internet connectivity:
Internet connectivity allows clients to access the hardware from their own organization.
1.3.3.5 Service level agreements:
A service level agreement is an agreement between the provider and the client guaranteeing a certain level of performance from the system.
1.3.3.6 Computer Hardware:
The components whose resources will be rented out constitute the computer hardware. Service providers typically set this up as a grid for easier scalability.
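The utility-billing model described in 1.3.3.3 can be sketched as charging for each resource in proportion to consumption. The resource names and rates below are hypothetical, chosen only to illustrate the calculation:

```python
# Sketch of utility-computing billing: the customer is charged per unit
# of each resource consumed. Rates and resource names are invented.
RATES = {
    "cpu_hours": 0.05,      # cost per CPU hour
    "storage_gb": 0.02,     # cost per GB of storage
    "bandwidth_gb": 0.01,   # cost per GB transferred
}

def compute_bill(usage):
    """Return the total charge for a dict of {resource: units consumed}."""
    return sum(RATES[resource] * units for resource, units in usage.items())

monthly_usage = {"cpu_hours": 720, "storage_gb": 100, "bandwidth_gb": 50}
print(compute_bill(monthly_usage))   # 720*0.05 + 100*0.02 + 50*0.01, i.e. about 38.5
```

This is the essential difference from CapEx-style purchasing: the bill tracks consumption rather than provisioned capacity.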
1.3.4 Data as a Service (DaaS)
Another form of service offering available in the world of cloud computing is Database as a Service (DaaS). The idea behind DaaS is to avoid the complexity and cost of running one's own database. DaaS offers the following benefits.
1.3.4.1 Power
Even though the database is not installed locally, that does not mean it is not effective. Depending on the vendor, one can get custom data validation to ensure accurate information, and the database is easy to create and manage.
1.3.4.2 Ease of Use
In this, there are no redundant systems to worry about and no servers to provision. There is no need to worry about purchasing, installing and maintaining hardware for the database.
1.3.4.3 Management
Since large databases benefit from constant pruning and optimization, typically expensive resources are dedicated to this task. With some DaaS offerings, this management can be provided as part of the service at a much lower expense.
1.3.4.4 Integration
The database can be integrated with other services a person maintains to provide more value and power. For example, one can tie it to email, calendars, and contacts to make the work more powerful.
1.4 Challenges in the Database Management Systems
A cloud DBMS framework faces the following challenges.
1.4.1 Deployment failures
Deployment failures are probably the most common and foremost cause of database failures. Although most enterprises run preliminary tests before deploying their databases, they often fall short of testing them properly.
Most pre-deployment tests for databases just check functionality: whether the database is doing what it is supposed to do. They often fail to ensure that the database isn't doing anything it's not supposed to do. Getting the work done is not enough; getting it done the right way, without any adverse impact on the business, has to be ensured during database deployments.
Traditionally, a database deployment isn't an easy task. It involves a number of challenges that make the task more difficult, and most development teams in enterprises these days aren't giving sufficient thought to planning a database deployment.
In a database deployment, there are two major tasks: deploying a new database and upgrading an existing database. Irrespective of which deployment you're doing, proceed with care and precision. Here are some of the essentials for a successful database deployment:
1.4.1.1 Adopting source control or cluster services like VCS: Make sure that your database and all its schemas are in source control, as this makes the deployment easier, more reliable, secure, and auditable. Source control also provides many essential features that help manage the complexity of databases as they grow in size and volume.
1.4.1.2 Always have a snapshot of data: Be it a new deployment or upgrading the existing databases, always make sure to have a snapshot of the data backup before you make any changes. This way, your data will be safe no matter how the deployment goes.
1.4.1.3 Deploy the script to a nonproduction environment first: To make sure that the database has been deployed the right way, always test in a trial/test environment before implementing it in the production environment.
1.4.1.4 Make use of an automated scripting approach wherever possible: An appropriate automated scripting approach simplifies database development and maintenance. It also makes it easy to track the changes made in a database, roll back whenever required, or even reorder previous changes.
1.4.1.5 Proper data classification: Before you start deploying a database and filling it with data, it is essential to first properly classify the data based on its severity and sensitivity. This will not only make data retrieval and indexing easy but will also help you to monitor sensitive data in a better way.
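The automated scripting approach from 1.4.1.4 can be sketched as a small migration runner that records which schema changes have been applied, making deployments repeatable and auditable. The migration names, table, and SQL below are invented for illustration; the example uses Python's built-in sqlite3 module as a stand-in database:

```python
import sqlite3

# Hypothetical versioned migrations: each entry pairs a name with an
# "apply" script, and the name is recorded in a version table so that
# changes are tracked and never applied twice.
MIGRATIONS = [
    ("001_create_customers",
     "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)"),
    ("002_add_email",
     "ALTER TABLE customers ADD COLUMN email TEXT"),
]

def migrate(conn):
    """Apply, in order, every migration not yet recorded as applied."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (name TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT name FROM schema_version")}
    for name, apply_sql in MIGRATIONS:
        if name not in applied:
            conn.execute(apply_sql)
            conn.execute("INSERT INTO schema_version VALUES (?)", (name,))
    conn.commit()

conn = sqlite3.connect(":memory:")
migrate(conn)   # applies both migrations
migrate(conn)   # re-running is a no-op: changes are tracked in schema_version
print(sorted(r[0] for r in conn.execute("SELECT name FROM schema_version")))
# ['001_create_customers', '002_add_email']
```

Running the same script against a trial environment first (1.4.1.3) then production gives identical, ordered schema changes in both.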
1.4.2. Privilege-based issues
After the database has been successfully deployed, it is very important to give only the required privileges or access to the users or applications of the database. Most database disasters are caused by privilege-based issues such as excessive privilege abuse, legitimate privilege abuse, privilege elevation, or unused privileges. Providing unnecessary access privileges to users or applications increases the attack surface of the databases.
Dealing with privilege or access-based issues at an organizational level is not an easy task. It requires administrators not only to eliminate unnecessary excessive rights to the database but also to constantly monitor the resources for any privilege-based abuse. Undertaking this task manually is possible for small businesses with a limited number of resources, but doing the same for a large organization manually is nearly impossible. Therefore, most organizations these days deploy ready-made software solutions to monitor the privileges of their employees and resources.
To prevent intruders or hackers from exploiting database vulnerabilities via access privileges, implement the right combination of intrusion-prevention systems (IPS) and a query-level access control system such as SecureSphere. Finally, to prevent legitimate privilege abuse, it is absolutely essential for enterprises to understand the context of database access. Implementing proper policies to monitor client applications, date and time of access, and location can help identify who is using legitimate database access privileges in a suspicious manner.
1.4.3 SQL injections
SQL injection is a common web and data-hacking technique, which involves placing malicious code in the database through vulnerable SQL data input channels. It is also one of the most popular attack vectors against databases and can even completely destroy a database.
In a typical SQL injection, a hacker or an intruder injects an unauthorized piece of SQL code or stored procedures into the database, typically through the web application's input parameters. These injected statements are specifically meant to execute at the database level for malicious purposes such as breaching data security or destroying data. Successful execution of an SQL injection can also give attackers unrestricted access to the entire database.
To prevent SQL injection, a balanced combination of intrusion-prevention systems, query-level access control, and event correlation can be used. Some additional techniques to prevent this attack are as follows. The principle of least privilege should be implemented at the database level; this way, even if an intruder injects a harmful piece of code into the database, it does not get executed, due to lack of privileges. Moreover, cleaning and validating user inputs before sending them to the database can also help prevent SQL injection attacks. Another effective way of curbing SQL injection is to implement parameterized queries: a parameterized query uses a parameter instead of directly injecting the values. Customizing error messages or using stored procedures can add an extra layer of data abstraction for security and can aid in preventing SQL injection.
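The parameterized-query defence described above can be illustrated with Python's built-in sqlite3 module; the table and the malicious input string are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

malicious = "' OR '1'='1"   # classic injection payload

# UNSAFE: string concatenation lets the input rewrite the query,
# so the WHERE clause becomes always-true and every row leaks.
unsafe = conn.execute(
    "SELECT * FROM users WHERE name = '" + malicious + "'").fetchall()

# SAFE: a parameterized query treats the input as a plain value,
# so the injection attempt matches no row at all.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (malicious,)).fetchall()

print(len(unsafe))  # 1: the row leaked by the injection
print(len(safe))    # 0: the payload is just a harmless string
```

The same principle applies to any SQL database driver: values are bound as parameters, never spliced into the query text.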
1.4.4. Broken databases
Every day, thousands of issues are reported around the globe regarding database attacks or exploitation. Each time, hackers find new vulnerabilities that allow attackers to crash, gain control over, or exploit vulnerable databases.
Thankfully, most database vendors are keeping pace and are providing attack-resistant database services. Vulnerabilities like SQL command injections or database buffer overflows have gone down to a great extent as the vendors are being proactive or responsive in providing patches to vulnerabilities.
These database vulnerabilities can arise for multiple reasons. Lack of proper data management, data leakage, unmanaged or uncategorized organizational data, and lack of constant monitoring are some of the most common causes. Database vendors need to identify these issues and roll out regular updates or patches to fix them. Therefore, to prevent hackers or intruders from gaining access to your sensitive organizational data, keep your databases regularly patched and maintain constant monitoring of their functioning and access privileges.
1.4.5. Denial of services
Denial of service is a form of cyberattack wherein a hacker or an intruder prevents legitimate users from accessing certain resources or a service by temporarily or indefinitely disrupting the service host. A database is one of the common victims of a denial of service attack. In this attack, a hacker overloads a company's database with many requests/queries at a time. This consumes almost all system resources, such as the server's and database's memory or CPU processing power, crashing a business website or service.
A well-known example is the denial-of-service attack in which attackers used Internet of Things devices to take down host services. To prevent such attacks, try allocating more system resources to accommodate more simultaneous user requests. There are also many third-party software solutions that identify, detect, and separate legitimate users from malicious traffic. Finally, implementing DDoS attack response systems can reduce the adverse effects of these attacks.
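One common building block for separating legitimate users from a flood of requests is a per-client rate limiter. The sketch below is a simple token bucket; the rate and burst limits are arbitrary and the real defence would sit in front of the database or web tier:

```python
import time

class TokenBucket:
    """Allow at most `rate` requests per second, with bursts up to `capacity`."""
    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        # Refill tokens in proportion to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True      # request passes through to the database
        return False         # request rejected: possible flood

bucket = TokenBucket(rate=5, capacity=10)     # hypothetical limits
results = [bucket.allow() for _ in range(20)] # a burst of 20 rapid requests
print(results.count(True))   # roughly the first 10 burst requests succeed
```

A production deployment would keep one bucket per client identity (IP address, API key) so that a single flooding client cannot exhaust the database's capacity.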
Databases are the storehouses of all your data. Be it personal data, organizational data, or even the most sensitive business data, it is stored in databases. Therefore securing these databases should be a high priority in every industry. Ensure your databases are properly configured and deployed, and leave no stone unturned in terms of security, so that your databases remain safe, secure, and resistant to cyber attacks.
1.4.6. Integration
Data integration is the combination of technical and business processes used to combine data from disparate sources into meaningful and valuable information. A complete data integration solution delivers trusted data from various sources. Most issues related to data integration can be recognized during the requirement analysis stage. Some of the complications are varied information, lack of manpower, shortage of storage capacity, poor data quality, and the perception of data integration as an overwhelming effort. Data integration also raises database challenges such as internet speed, multi-tenancy, privacy, and security. The types of data integration are Enterprise Application Integration (EAI), Data Integration (DI), and Electronic Document Integration/Interchange (EDI). Data integration enables efficient database transactions through the following factors:
- Speed
- Cyber security
- Scalability
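The core step of data integration, mapping records from disparate sources onto one common schema, can be sketched as follows. The two sources, their field names, and the sample records are all hypothetical:

```python
# Two hypothetical sources describe the same customers with different
# field names; integration maps both onto one common schema.
crm_records = [
    {"cust_id": 1, "full_name": "Alice", "email": "a@example.com"},
    {"cust_id": 2, "full_name": "Bob", "email": "b@example.com"},
]
billing_records = [
    {"customer": 1, "balance_due": 120.0},
]

def integrate(crm, billing):
    """Combine CRM and billing views of a customer into one record."""
    merged = {}
    for r in crm:
        merged[r["cust_id"]] = {"id": r["cust_id"], "name": r["full_name"],
                                "email": r["email"], "balance": 0.0}
    for r in billing:
        if r["customer"] in merged:   # join on the shared customer key
            merged[r["customer"]]["balance"] = r["balance_due"]
    return list(merged.values())

for row in integrate(crm_records, billing_records):
    print(row)
```

Even this toy example shows where the listed complications arise: the sources disagree on field names, one source may lack records the other has, and the join key must be trusted across systems.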
1.5 Characteristics of a cloud DBMS framework
A cloud computing framework is almost always built with heterogeneous components, which means some servers are computationally slow while others are fast but highly loaded with jobs. Some servers take a long time to make a disk update, others take a long time to relay an update message, and some are simply not reliable enough. Some servers are connected by a slow network, others reside in highly loaded networks, and some reside in less reliable networks. The parameters listed below affect the execution time of transactions, both reads and writes:
1.5.1 Time for an update: The originating server for the request has to wait until all necessary servers reply. If a replica server spends a large amount of time on a requested disk update, then the effective response time for an update operation increases. This may cause overall performance degradation, performance bottlenecks, and an exponential increase in response time.
1.5.2 Workload of the server: Response time for a job is proportional to the workload of the server. If a replica server is overloaded, then it will introduce a very large delay in response time.
1.5.3 Reliability of the server and network: A cloud computing infrastructure is typically built with cheap servers, so server failure is a very common scenario. If a replica server fails, another server can replace it within a reasonable amount of time; however, this temporarily increases the response time. Server failure also matters for this system in terms of its impact on consistency. If an update request is lost due to network unreliability, another update request will be sent after a request timeout (a predefined amount of time). As a result, a significant delay will be introduced into the response time.
1.5.4 Time taken to relay a message: The time required to propagate an update request to all servers depends on the time taken to relay a message.
1.5.5 Network bandwidth and congestion: If the network lacks the required bandwidth or carries a huge amount of traffic, transmission delay increases.
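The timeout-and-retry behaviour described in 1.5.1-1.5.5 can be sketched as a sender that resends an update after each simulated request timeout. The lossy send function, failure probability, and retry limit are all invented for the simulation:

```python
import random

def send_update(payload, fail_probability=0.5, rng=random.Random(42)):
    """Simulated network send: returns True if the replica acknowledged,
    False if the request was lost on the (unreliable) network."""
    return rng.random() > fail_probability

def send_with_retry(payload, max_retries=10):
    """Resend after each simulated timeout; the attempt count stands in
    for the extra response-time delay the retries introduce."""
    for attempt in range(1, max_retries + 1):
        if send_update(payload):
            return attempt
    raise RuntimeError("replica unreachable after all retries")

attempts = send_with_retry({"key": "x", "value": 1})
print(attempts)   # number of attempts needed before an acknowledgment
```

Each retry adds one timeout period of delay, which is exactly the response-time penalty described above for lost update requests.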
Inconsistency of data transactions is a drawback that occurs in a distributed database. In a cloud database system, it takes time to complete a write operation on all relevant nodes. If the system waits for all nodes to confirm completion of a write operation, then the processing of the next write operation is delayed. The time frame between the system's confirmation of completion of a write operation and the actual completion of all the writes is called the inconsistency window; it indicates the amount of time the system is inconsistent.
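The inconsistency window can be illustrated with a tiny simulation of asynchronous replication; the per-replica delays below are invented numbers:

```python
# The primary confirms a write at t = 0; each replica applies it after its
# own delay. The inconsistency window is the gap between confirmation and
# the slowest replica catching up. Delays are hypothetical.
replica_apply_times = [0.2, 0.5, 1.3]   # seconds after confirmation

def inconsistency_window(apply_times, confirm_time=0.0):
    return max(apply_times) - confirm_time

def read_is_possibly_stale(read_time, apply_times):
    """A read may see old data while any replica has not yet applied the write."""
    return read_time < max(apply_times)

window = inconsistency_window(replica_apply_times)
print(window)                                              # 1.3
print(read_is_possibly_stale(0.4, replica_apply_times))    # True: inside the window
print(read_is_possibly_stale(2.0, replica_apply_times))    # False: all replicas caught up
```

A system that instead waits for every replica before confirming would shrink the window to zero, at the cost of delaying every subsequent write, which is the trade-off the paragraph above describes.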
1.6 Transaction Management System (TMS)
A transaction is a unit of work that is performed against a database. Transactions are units or sequences of work accomplished in a logical order, whether manually by a user or automatically by some sort of database program. A transaction is the propagation of one or more changes to the database. It is important to control transactions to handle database errors and to ensure data integrity. A classical example is transferring money from one bank account to another: the system first withdraws the amount from the source account and then deposits it into the destination account. This operation has to complete in full. If it stops halfway, money will be lost, and the bank may in turn lose the customer over such inconsistent transactions.
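The bank-transfer example maps directly onto a database transaction: the withdrawal and the deposit are grouped so that either both happen or neither does. A minimal sketch with Python's built-in sqlite3 module, using invented account names and amounts:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance REAL)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("source", 100.0), ("destination", 0.0)])
conn.commit()

def transfer(conn, src, dst, amount):
    """Withdraw and deposit as one atomic unit: commit both or roll back both."""
    try:
        cur = conn.cursor()
        cur.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                    (amount, src))
        balance = cur.execute("SELECT balance FROM accounts WHERE name = ?",
                              (src,)).fetchone()[0]
        if balance < 0:
            raise ValueError("insufficient funds")
        cur.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                    (amount, dst))
        conn.commit()
    except Exception:
        conn.rollback()   # undo the half-finished transfer: no money is lost
        raise

transfer(conn, "source", "destination", 40.0)
print(dict(conn.execute("SELECT name, balance FROM accounts")))
# {'source': 60.0, 'destination': 40.0}
```

If the deposit step fails for any reason, the rollback also undoes the withdrawal, so the operation can never stop halfway.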
Database systems implement distributed transactions as transactions against multiple applications or hosts. A distributed transaction enforces the ACID properties over multiple systems or data stores and may include systems such as databases, messaging systems, file systems, and other applications. In a distributed transaction, a coordinating service ensures that all parts of the transaction are applied to all relevant systems. On the failure of any part of the transaction, the entire transaction is rolled back across all affected systems. Hence, a transaction is a collection of operations. The ability of a DBMS to manage the various transactions that occur within the system is called transaction management. The DBMS should ensure that transactions are executed properly: either the whole transaction executes or none of it does. A transaction is a unit of database activity that accesses possibly several data items. It is usually the result of a program written in a high-level programming language or data manipulation language. To ensure data integrity, the DBMS should enforce the transaction properties popularly known as the ACID properties.
1.7 Components of TMS
The transaction manager coordinates transactions on behalf of the application programs. It communicates with the scheduler, which is responsible for implementing a particular strategy for concurrency control. If the concurrency control protocol is lock-based, the scheduler is referred to as the 'lock manager'. The objective of the scheduler is to maximize concurrency without allowing the concurrently executing transactions to interfere with one another, thereby maintaining the integrity and consistency of the database. Figure 1.5 shows the Transaction Processing System.
Figure 1.5 Transaction Processing System
The transaction process is also said to have states. They are:
Active – the initial state; the transaction stays in this state while it is executing.
Partially committed – a transaction that has just executed its final statement.
Failed – a transaction that cannot continue its normal execution.
Aborted – a transaction that has been rolled back, with the database restored to the consistent state prior to the start of the transaction.
Committed – a transaction that has completed and executed successfully, transforming the database to a new consistent state. Figure 1.6 represents the Consistent State of a Transaction.
Figure 1.6 Consistent State of a Transaction
A transaction that is either committed or aborted is said to be terminated. A transaction enters the failed state once the system determines that it can no longer proceed, due to hardware failures or logical errors. Such a transaction must be rolled back; when the rollback is complete, the transaction enters the aborted state. When a transaction aborts, the system has two options: restart the transaction or kill it.
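The states above form a small state machine. A sketch of the legal transitions, with an enforcement check so that illegal moves (for example, a committed transaction going back to active) are rejected; the state names follow the list above:

```python
# Transaction states and the legal transitions between them.
TRANSITIONS = {
    "active": {"partially_committed", "failed"},
    "partially_committed": {"committed", "failed"},
    "failed": {"aborted"},
    "committed": set(),   # terminated
    "aborted": set(),     # terminated: the system may restart or kill it
}

class Transaction:
    def __init__(self):
        self.state = "active"   # every transaction starts in the active state

    def move_to(self, new_state):
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

t = Transaction()
t.move_to("partially_committed")   # final statement executed
t.move_to("committed")             # all effects made durable
print(t.state)                     # committed
```

A failed transaction follows the other path: active to failed to aborted, after which the system either restarts a fresh transaction or kills it.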
1.8 Consistency of data transactions
A collection of operations which together comprise a consistent change may be grouped to form a transaction. The consistency property of transactions requires that they preserve the consistency constraints of the system; i.e. they change a consistent state into a new consistent state. The atomicity property of transactions requires that either a transaction completes all its operations successfully and commits, or none of its effects survive and it aborts. The durability property of transactions requires that once a transaction has committed it cannot be undone. It is important to distinguish between three different forms of consistency:
1.8.1. Mutual consistency
Mutual consistency is relevant to systems that store multiple copies (replicas) of data, and refers to the consistency that exists between those replicas. If all replicas are identical, they are mutually consistent.
1.8.2. Internal consistency
Internal consistency refers to single objects in the system. Internal consistency is maintained for an object if updates to that object are atomic; partial updates can leave the object in an inconsistent state.
1.8.3. Application-dependent consistency
Application-dependent consistency concerns relationships between the values of multiple objects and is determined by the application. It relies on both mutual and internal consistency, and makes further assertions about the relationships between object values. Atomic transaction mechanisms should maintain the application-dependent consistency constraints of the system. This would be easy to achieve if the consistency constraints could be kept centrally (for example, in a database schema): the transaction mechanism could then check proposed new system states against these constraints and disallow any that did not conform.
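The notion of mutual consistency between replicas can be sketched as a simple identity check. The replica names and values below are invented for illustration; real systems compare versions or checksums rather than full snapshots.

```python
# Hypothetical replica snapshots of the same object, keyed by replica id.
replicas = {
    "node-1": {"order_42": "shipped"},
    "node-2": {"order_42": "shipped"},
    "node-3": {"order_42": "pending"},   # stale replica
}

def mutually_consistent(replicas):
    """Replicas are mutually consistent only if all copies are identical."""
    snapshots = list(replicas.values())
    return all(s == snapshots[0] for s in snapshots)

print(mutually_consistent(replicas))  # False: node-3 holds a stale value
```

Once node-3 is brought up to date, the same check returns True, restoring mutual consistency.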
Both public and private cloud platforms aim to deliver the benefits of cloud computing to their customers. Whether the cloud is private or public, the database is a critical part of the platform, so it is imperative that the cloud database be compatible with cloud computing. To understand cloud computing requirements, one must first understand the benefits that drive them. Business applications demand that the cloud database be ACID (Atomicity, Consistency, Isolation, and Durability) compliant, but typical cloud database architectures provide availability and scalability with only a weak form of consistency. The following two example scenarios explain the major problems companies face due to inconsistency.
1.9 Data lifecycle management systems (DLMS)
Traditional data lifecycle management systems (DLMSs) focus more on how data is managed than on how it is processed. In particular, the main services they offer are data storage planning (and provisioning) and data placement (and execution support) via efficient data management policies. Data processing, on the other hand, is covered by other tools or systems, as it is regarded as application-specific. Traditionally in the cloud, big data processing is offered as a separate service, while resource management is usually handled by other tools. The system described here addresses the complete data management lifecycle, as described in the previous sub-section, and comprises the following primary components:
- Metadata management maintains information concerning both the static and dynamic characteristics of data; it is the cornerstone for enabling efficient data management.
- Data placement encapsulates the main methods for efficient data placement and data replication while satisfying user requirements.
- Data storage is responsible for proper (transactional) storage and efficient data retrieval support.
- Data ingestion enables importing data into, and exporting data from, the system.
- Resource management is responsible for the proper and efficient management of computational resources.
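The metadata-management component listed above can be hinted at with a small sketch. The class names, fields, and the "most accessed dataset" replication heuristic below are purely illustrative assumptions, not part of any real DLMS.

```python
from dataclasses import dataclass

@dataclass
class DatasetMetadata:
    """Static and dynamic characteristics of a dataset (illustrative only)."""
    name: str
    size_bytes: int          # static characteristic
    access_count: int = 0    # dynamic characteristic, updated on each read

class MetadataCatalog:
    """Minimal metadata-management component for a DLMS sketch."""
    def __init__(self):
        self._entries = {}

    def register(self, meta):
        self._entries[meta.name] = meta

    def record_access(self, name):
        self._entries[name].access_count += 1

    def hottest(self):
        """Most frequently accessed dataset: a candidate for replication."""
        return max(self._entries.values(), key=lambda m: m.access_count)

catalog = MetadataCatalog()
catalog.register(DatasetMetadata("sales", 10_000))
catalog.register(DatasetMetadata("logs", 50_000))
for _ in range(3):
    catalog.record_access("logs")
print(catalog.hottest().name)  # logs
```

Tracking dynamic characteristics such as access frequency is what lets the data-placement component decide where to replicate or move data.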
1.10 Cloud DBMS-Based Data Storage
For some computer users, finding enough space to store all the data they have acquired is a real challenge. Some invest in larger hard disks; others turn to external storage devices such as USB sticks and compact discs. A desperate user may even delete old documents and folders to create space for new information. Others, however, rely on a newer model: cloud storage. Although the name suggests a connection with the weather, cloud storage simply means that data is stored on a remote storage system that is managed by an external provider.
Figure 1.7 Cloud DBMS data storage
Figure 1.7 illustrates cloud DBMS data storage. With cloud storage, data does not reside on the user's hard drive or other local storage device; instead, it is kept on a remote system. Cloud storage has several advantages over traditional local storage. For example, data stored in a cloud storage system can be accessed from anywhere with an Internet connection, and no physical storage device needs to be carried in order to use the information on another computer. With an appropriate storage system, other people can also be granted access to the data, turning a personal effort into a collaboration. Cloud storage is therefore advantageous and more adaptable.
A typical cloud storage system architecture incorporates a master control server and several storage servers. There are many diverse cloud storage systems: some have a particular focus, such as storing web email messages or digital pictures, while others store all types of digital data. Some cloud storage systems are small operations, while others are so large that their physical equipment fills entire warehouses. At its most basic level, a cloud storage system requires only one data server connected to the Internet. To save information, the client sends a copy of the file to the server over the Internet; when the client wants to retrieve the information, it accesses the server through a web interface. The server then either sends the file back to the client or allows the client to access and modify the file on the server itself. In practice, cloud storage systems typically rely on many data servers.
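The basic save-and-retrieve protocol described above can be sketched in a few lines. The StorageServer class and its put/get methods are invented for illustration and stand in for a real networked service.

```python
# Minimal in-process sketch of the put/get protocol of a storage server.
class StorageServer:
    def __init__(self):
        self._blobs = {}

    def put(self, key, data):
        """Client sends a copy of a file; the server records it."""
        self._blobs[key] = data

    def get(self, key):
        """Client later retrieves the stored copy through an interface."""
        return self._blobs[key]

server = StorageServer()
server.put("report.txt", b"quarterly figures")
print(server.get("report.txt"))  # b'quarterly figures'
```

A real system would replicate each blob across several storage servers under a master control server, so that the data survives individual server failures.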
1.11 Request-Based Load Balancing
A typical feature of cloud storage is that massive numbers of requests can be balanced within the system. Requests from diverse addresses are forwarded to the appropriate carrier devices evenly and optimally, according to the settings of the load-balancing device. Load balancing in the cloud includes load-balancing hardware at the edge of the network, which achieves dynamic balancing based on DNS resolution, as well as the cloud's internal load-balancing mechanism. The load-balancing mechanism inside cloud storage can essentially be understood as load balancing between node resources. When a request arrives, the cloud storage system must decide which node will respond to it; in addition, the request must be routed to the node where the corresponding data resides. If a large number of requests have already been pushed onto that node, whether the system has a better mechanism for handling the new request becomes a question. This kind of request-based load balancing over node resources is essential to support a complete cloud storage system. In general, the system performs overall coordination through continuous measurement and judgment of resource availability. Node-level balancing also introduces another important idea: it enables dynamic expansion of the system. Because different nodes correspond to different requests, each node only needs to execute the requests assigned to it, so performance remains stable. When a large number of requests arrive at the same time, this mechanism allows the different nodes of the cloud storage system to balance the processing of requests for the corresponding data transactions.
For example, suppose the system has eight nodes and receives 80,000 requests; the load balancer will distribute the 80,000 requests evenly across the nodes. If each node can handle at most 10,000 requests, then no matter how plentiful the system resources are, exceeding that limit would degrade performance and the user experience. When the load grows to 800,000 requests, the system can be expanded to 80 nodes, because the internal load-balancing mechanism requires no special treatment. The 800,000 requests are then still distributed evenly across the 80 nodes, so each node continues to handle 10,000 requests and performance does not decay. Through this load-balancing mechanism and the ability to add nodes, large-scale deployment of cloud storage can be realized.
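The arithmetic above can be sketched with a simple round-robin distributor. The node names and the round-robin policy are illustrative assumptions; real balancers also account for data locality and current node load.

```python
from collections import Counter

def distribute(num_requests, nodes):
    """Round-robin request distribution across storage nodes."""
    load = Counter()
    for i in range(num_requests):
        load[nodes[i % len(nodes)]] += 1
    return load

# 80,000 requests over 8 nodes -> 10,000 requests per node.
small = distribute(80_000, ["node-%d" % i for i in range(8)])
print(set(small.values()))  # {10000}

# Tenfold growth: add nodes instead of overloading the existing ones.
large = distribute(800_000, ["node-%d" % i for i in range(80)])
print(set(large.values()))  # {10000}
```

Scaling the node count with the request volume keeps the per-node load constant, which is exactly why per-node performance does not decay as the system grows.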
1.12 Typical Cloud Storage Applications
Cloud storage applications are storage services that cloud storage providers deliver to paying customers over the public Internet. According to the customer base and the way storage services are provided, they can be divided into public applications (for individuals and home users) and enterprise applications (for business users). Typical public cloud storage applications are mostly offered to users as SaaS-layer services. Such applications include synchronous backup, file storage, offline download, online editing, and document sharing.
Typical enterprise cloud storage applications generally serve the client at the IaaS layer, providing basic storage facilities and backup services that developers (mainly enterprises) can rent to support their own operations or meet their business needs. The tenant builds the business layer on top and ultimately delivers a cloud storage service to business clients. As cloud storage technology gradually matured, more companies recognized its prospects and started to launch cloud storage applications. As a typical representative of the SaaS-layer application, Dropbox appeared before the public early on; it provides synchronous backup capability and has changed users' office habits, freeing them from the inconvenience of carrying a computer when working outside the workplace. Its cloud file-sharing model has also brought the service closer to many Internet users.
These advantages may be the basis for the success of telecom operators in cloud storage applications. Typical public and enterprise cloud storage applications can be compared in terms of capability, service model, and operating characteristics, which then inform the business model of telecom operators' cloud storage offerings. The following applications also make efficient use of the cloud DBMS:
- Dropbox
- Google Drive
- iCloud
- Enterprise applications
- Telecom operators
1.13 SUMMARY
This chapter discussed fault identification for cloud DBMS-based database transactions. It reviewed database storage, cloud service models, and the cloud framework through which client-server networks connect to perform efficient data transactions and tolerate faults. The advantages and the respective applications were also discussed.