EFFECTIVE HACommit ARCHITECTURE FOR CLOUD DATA TRANSACTIONS BASED ON MEMORY CAPACITY AND ACTIVE CONNECTIONS TO HANDLE READ-WRITE TRANSACTIONS

 

5.1 OBJECTIVE

Cloud computing enables convenient, on-demand network access to shared configurable computing resources such as networks, servers, storage, applications, and services that can be rapidly provisioned and released with minimal management effort or service provider interaction. There are four basic models of cloud service: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Software as a Service (SaaS), and Database as a Service (DaaS). Just as the public Internet spawned private corporate intranets, cloud computing is now generating private cloud platforms, and both public and private cloud platforms aim to deliver the benefits of cloud computing to their customers. Whether a private or a public cloud provides it, the database is a critical part of that platform, so the cloud database must be compatible with cloud computing. One of the core design principles is dynamic scalability, the ability to provision and decommission resources among clients and servers on demand; unfortunately, the majority of today's database servers are incapable of satisfying this requirement. Business applications also demand that the cloud database be ACID compliant, providing Atomicity, Consistency, Isolation, and Durability.

The third stage of this work proposes the HACommit (Highly Available Commit) architecture, which evolves the cloud DBMS environment for effective read and write data transactions: a highly available data store that depends on the consensus value computed over all active connections of the commit nodes. Following a vote-before-decide approach, the transaction participant commit nodes vote for a commit before the client decides whether to commit or abort the transaction. Replication of data enables a consistent flow between the client's read-write logging and the server's read-write logging within the distributed commit process, with fault tolerance provided by transaction timeouts. Committed transaction data is readily accessible to other distributed transactions, and the client initiates the transaction commit within one communication roundtrip, i.e., one phase of the data transaction.

5.2 INTRODUCTION

In recent years, cloud infrastructure has made web-based applications highly available. Database transactions provide optimal data storage while allowing different users to access the data. Cloud-based database transactions are evaluated against standard database benchmarks. Transaction processing performance depends on design considerations that guarantee durability efficiently on both the front-end and back-end sides of the database. In scale-out storage systems, the design of the communication subsystem plays a vital role in determining overall performance. A load balancing deployment algorithm spreads the workload, maximizes the overlap of activity, and provides multiple connections between storage servers to maximize transaction throughput between clients and servers. Larger memories and multi-core processors allow memory-resident systems to improve the throughput and query latency of data transactions; their performance is no longer limited by the efficiency and scalability of indexing, logging, transaction compilation, and concurrency control.

Database management systems (DBMSs) are offered as a service to users through the cloud environment. High-speed networks enable server-less alternatives that absorb workload fluctuations away from physical storage, yielding low latency, high availability, and reliability for users through database instances while optimizing the total cost of ownership. A robust design avoids the single point of failure of a database instance whose storage manages both the transaction log and the data pages. Design considerations on the front-end side and efficient data durability on the back-end side determine transaction processing performance between clients and servers under standard benchmarks. Cloud computing addresses the limits of storage capacity and processing that would otherwise be distributed among the server and its different users. Data processing, data storage, and data visualization gain effectively limitless storage and processing capability through a cloud DBMS. Software as a Service (SaaS) deployments enable better data transactions on cloud infrastructure. Load refers to the CPU load, network load, and memory usage of every server, and it can be managed using different load balancing techniques; the objective of load balancing algorithms is to ensure that every server in the cloud is kept busy processing some operation. Scalability provides the cloud service provider and end users with computational resources, without which users cannot exploit logless transactions. Communication channels and the various computational resources originate from multitenant systems under the assignment of a trustworthy cloud service provider. The provider allows the maximum number of data transactions, irrespective of timeout and error-tolerance behaviours, to maximize resource utilization across frameworks, yielding significant performance improvements in latency and throughput for different clients and servers.

When more than one transaction reads or writes the same data item, the transactions are called concurrent transactions. Uncontrolled concurrent transactions can lead to an inconsistent database, so a cloud DBMS must enable the simultaneous execution of transactions in a way that preserves correct read-write behaviour. The contributing factors are as follows:

  • Data replication
  • Consistency
  • Concurrency of data items
  • Load balancing
  • Locking mechanism
  • Read-write data transactions

The state-of-the-art practice for atomic commit in highly available data stores is to have the transaction client decide before the transaction participants vote, denoted here as the vote-after-decide approach. After deciding to commit a transaction, the client initiates a distributed commit process, which typically incurs two phases of processing. In the first phase, participants vote on the decision and force write-ahead logs, while the coordinator records the votes through write-ahead logging. In the second phase, the coordinator notifies participants of the commit outcome and the participants apply the transaction's changes. The transaction client can be notified of the commit outcome at the end of the first phase, but the commit is not completed, and the transaction result is not visible to other transactions, until the end of the second phase.

The two processing phases involve at least two communication roundtrips, as well as the write-ahead logging steps of the coordinator and the participants. Both the logging steps and the communication roundtrips are costly procedures in distributed processing. A different approach to distributed commit has participants vote for a commit before the client decides to commit or abort the transaction, denoted here as the vote-before-decide approach. Because the participants vote first, the voting step can overlap with the processing of the last transaction operation, saving one communication roundtrip; and the votes can be replicated in the course of that processing, instead of in a separate processing step. This makes the removal of one processing phase possible. On receiving the client's commit decision, the participants can directly commit the transaction locally; thus, the transaction data can be made visible to other transactions within one communication roundtrip time, i.e., one phase. These steps yield a logless one-phase commit solution, similar to the classic 1PC protocols but without logging or other impractical assumptions. In highly available data stores, the logless solution can enable recovery using replicas instead of logs. In recent years, it has been realized that write-ahead logging is not the only way to guarantee atomicity and durability; replication is also a viable alternative. As network access becomes faster than disk access, using replication to guarantee the correctness of atomic commit becomes far more efficient than logging to disk. Moreover, replication is already widely exploited to guarantee high availability for large-scale data stores, which preserves the correctness of atomic commit.
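To make the contrast concrete, the following is a minimal, single-process sketch of the vote-before-decide flow in Python. All class and method names here are illustrative assumptions, not HACommit's actual interface; replication, locking, and failure handling are omitted.

    # Illustrative sketch: the vote piggybacks on the last operation, and the
    # client's decision is the single commit-time roundtrip (one phase, no logs).

    class Participant:
        def __init__(self, name):
            self.name = name
            self.data = {}        # committed data items
            self.pending = {}     # buffered writes per transaction ID

        def execute_and_vote(self, txn_id, writes):
            # Last-operation processing: buffer the writes (a real store would
            # also replicate them to this participant's replicas) and vote YES.
            self.pending[txn_id] = writes
            return "YES"

        def decide(self, txn_id, decision):
            # One-phase commit: on the client's decision, commit or drop locally.
            writes = self.pending.pop(txn_id, {})
            if decision == "COMMIT":
                self.data.update(writes)
            return "ACK"

    def run_transaction(participants, txn_id, writes):
        # Voting overlaps with the last operation, so no separate prepare phase.
        votes = [p.execute_and_vote(txn_id, writes) for p in participants]
        decision = "COMMIT" if all(v == "YES" for v in votes) else "ABORT"
        acks = [p.decide(txn_id, decision) for p in participants]  # one roundtrip
        return decision, acks

    parts = [Participant(n) for n in "ABCD"]
    print(run_transaction(parts, "t1", {"x": 1}))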

5.2.1 Database Transactions

A transaction is a small unit of a program and may contain several low-level tasks. A transaction in a database system must preserve accuracy, completeness, and data integrity, with all changes to data performed as if in a single operation. ACID is the standard set of properties that guarantee database transactions are processed reliably; it is primarily concerned with how a database recovers from any failure that might occur while processing a transaction. The ACID properties are among the most important concepts of database theory. In the context of transaction processing, the acronym ACID refers to the four fundamental properties of a transaction:

  1. Atomicity
  2. Consistency
  3. Isolation
  4. Durability

The best way to convey the ACID properties with a real-time application is transferring money from one account to another. There may be two databases at different banks, so the transaction performs a withdrawal from one account while depositing into another. If one of the steps fails, both databases must be returned to the state from which the transaction started. The balance should be consistent on all nodes, and if someone else performs a transaction on the same data at the same time, it must wait, keeping the transactions isolated. Once the transaction completes, it should be durable, with its state maintained in the data. Under ACID, other transactions must wait for the current transaction to complete, and this waiting causes a performance issue. In that case, one way to maintain consistency while improving performance is vertical scaling: more robust systems process each transaction in less time and become available sooner for other requests. Figure 5.1 represents the log entries of the first commit phase.

Figure 5.1 Log entries of the first commit phase
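As a concrete illustration of these properties, the sketch below uses Python's sqlite3 module; the table layout and the amounts are invented for the example. The with-block makes both updates succeed or fail together, which is exactly the atomicity the transfer needs.

    # Minimal sketch of an atomic transfer (illustrative schema and data).
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (id TEXT PRIMARY KEY, balance INTEGER)")
    conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                     [("alice", 100), ("bob", 50)])
    conn.commit()

    def transfer(conn, src, dst, amount):
        try:
            with conn:  # one transaction: both updates commit, or neither does
                conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                             (amount, src))
                cur = conn.execute("SELECT balance FROM accounts WHERE id = ?", (src,))
                if cur.fetchone()[0] < 0:
                    raise ValueError("insufficient funds")  # forces a rollback
                conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                             (amount, dst))
        except ValueError:
            pass  # both accounts remain in their initial state (atomicity)

    transfer(conn, "alice", "bob", 30)
    print(dict(conn.execute("SELECT id, balance FROM accounts")))  # alice 70, bob 80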

5.2.2 Overview of HACommit architecture

HACommit is designed for highly available data stores, which guarantee high availability of data. Highly available data stores generally partition data and distribute the partitions to networked servers to achieve high scalability; for high availability, each partition is replicated across a set of servers. Apart from data replication and distribution, HACommit does not rely on other properties of the underlying data store. Clients are front-end application servers or any proxy service acting for applications, and they can communicate with the servers of the highly available data store. A transaction is initiated by a client. A transaction participant is a server holding any partition operated on by the transaction, while servers holding replicas of a partition are called participant replicas.

The implementation of HACommit involves both the client and the server sides. On the client side, it provides an atomic commit interface via a client-side library for transaction processing. On the server side, it specifies the processing of the last operation and the normal commit process, as well as the recovery process on client or participant failures. A HACommit application begins a transaction, starting from the transaction execution phase; it can then execute reads and writes in the transaction. Except for the last operation, all transaction operations can be processed following either inconsistent-replication or consistent-replication solutions. That is, except for the last operation, HACommit makes no assumptions about the other operations of a transaction.
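A minimal sketch of what such a client-side library surface could look like follows. The method names and the store handle are assumptions for illustration; the source only states that HACommit exposes an atomic commit interface and treats the last operation specially.

    import uuid

    class TransactionClient:
        # Illustrative interface sketch; 'store' is an assumed handle exposing
        # owner/read/write/last_operation/decide on the highly available store.
        def __init__(self, store):
            self.store = store
            self.txn_id = None
            self.participants = set()

        def begin(self):
            self.txn_id = str(uuid.uuid4())   # unique transaction ID
            self.participants.clear()

        def read(self, key):
            self.participants.add(self.store.owner(key))
            return self.store.read(self.txn_id, key)

        def write(self, key, value):
            self.participants.add(self.store.owner(key))
            self.store.write(self.txn_id, key, value)

        def commit(self):
            # The last operation carries the participants' votes
            # (vote-before-decide); the decision then goes out in one phase.
            votes = self.store.last_operation(self.txn_id, self.participants)
            decision = "COMMIT" if all(votes) else "ABORT"
            self.store.decide(self.txn_id, self.participants, decision)
            return decision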

5.2.3 Transaction context

The transaction context must include the transaction ID and the partition IDs. The transaction ID uniquely identifies the transaction and distinguishes the consensus instances used for the commit process. The partition IDs are necessary to compute the set of participant IDs, which constitutes the configuration information of the consensus instance for the commit; this configuration information must be known to all consensus acceptors, i.e., the transaction participants. In case inconsistent replication is used in operation processing, the transaction context must also include the relevant writes, that is, writes operating on data held by a participant and its replicas. The relevant writes are necessary in case of participant failures: with inconsistent replication, participant replicas might not have processed the same writes for a transaction as the participant. Consider the case where a set of relevant writes is known to the participant but not to its replicas. The client might fail after sending the commit decision to participants; in the meantime, a participant fails and one of its replicas acts as the new participant. The recovery proposers then propose the same commit decision. In such a case, the new participant would not know what writes to apply when committing the transaction. To reduce the data kept in the transaction context, the relevant writes can be recorded as commands.
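A minimal sketch of the transaction context as a record type is given below, assuming Python dataclasses; the field names are illustrative, and only the contents listed above are modeled.

    # Illustrative transaction-context record (names are assumptions).
    from dataclasses import dataclass, field

    @dataclass
    class TransactionContext:
        txn_id: str                      # uniquely identifies the transaction
        partition_ids: list[str]         # used to derive the participant set
        relevant_writes: dict[str, bytes] = field(default_factory=dict)
        # relevant_writes is only needed under inconsistent replication, so a
        # recovered replica knows which writes to apply; recording them as
        # commands (serialized operations) keeps the context small.

        def participant_ids(self, placement):
            # placement maps a partition ID to the server holding it
            # (an assumed helper, not defined in the source).
            return [placement[p] for p in self.partition_ids]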

5.2.4 Evaluation of HACommit architecture

The evaluation explores three aspects of HACommit:

5.2.4.1 Commit performance

HACommit has smaller commit latency than the other protocols, and this advantage increases as the number of participants per transaction increases.

5.2.4.2 Fault tolerance

HACommit can tolerate client failures, as well as server failures.

5.2.4.3 Transaction processing performance

HACommit has higher throughputs and lower average latencies than other algorithms.

5.2.5 Commit Process

In HACommit, the client commits or aborts a transaction by initiating a consensus instance. We first describe the commit process without any failures, given the background knowledge of consensus; we then present the consensus-based one-phase commit process of HACommit. Compared to a direct exploitation of consensus, HACommit makes a few adaptations, including how participants acknowledge. The atomic commit process starts after the client makes its decision to commit or abort the transaction, though the client can only commit the transaction if all participants have voted YES. The client then proposes its decision on the transaction outcome to the participants and their replicas. The client can safely end the transaction once it has received acknowledgements from a quorum of replicas of every participant, despite failures of the client or of participant replicas. Participant replicas directly commit the transaction on receiving the client's commit decision.
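The following is a minimal, single-process sketch of the decision broadcast and the quorum rule for ending a transaction; all names are illustrative, and messaging is simulated by direct calls.

    # Illustrative sketch of the client's end of the one-phase commit.

    class Replica:
        def __init__(self):
            self.decisions = {}

        def deliver(self, txn_id, decision):
            # A replica commits/aborts locally on receiving the decision, then acks.
            self.decisions[txn_id] = decision
            return "ACK"

    def commit_phase(txn_id, decision, participants):
        # participants: dict mapping participant name -> list of Replica objects
        for name, replicas in participants.items():
            acks = sum(1 for r in replicas if r.deliver(txn_id, decision) == "ACK")
            quorum = len(replicas) // 2 + 1
            assert acks >= quorum, f"participant {name} lacks a quorum of replicas"
        # Once every participant has a quorum of acknowledgements, the client can
        # safely end the transaction; laggards finish via the recovery process.

    parts = {"A": [Replica() for _ in range(3)], "B": [Replica() for _ in range(3)]}
    commit_phase("t1", "COMMIT", parts)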

5.2.6 Participant acknowledgements

For any participant, once the acknowledgements from a quorum of its replicas are received, the client can safely end the transaction. The commit process is not finished until all participants have acknowledged the client, but any participant failing to acknowledge can go through the failure recovery process to complete the commit. In HACommit, all participants must finally recognize the acceptance of the client's proposal so that the transaction is committed at all data operated on by the transaction.

The requirement for participants' acknowledgements differs from that for quorum acceptance in the original consensus algorithm. In consensus, agreement is reached once a proposal is accepted by more than a quorum of acceptors, and the original consensus algorithm can tolerate failures of both acceptors and proposers. HACommit uses the client as the initial proposer, and the participants as acceptors and would-be proposers, when exploiting consensus for the commit process. In its consensus exploitation, HACommit tolerates only the failures of the initial proposer and the would-be proposers. However, the failure of a participant commit node can be tolerated by its participant replicas, which can themselves exploit consensus algorithms. Each consensus instance corresponds to the commit of one transaction, but one participant can engage in multiple consensus instances for commit, as a participant can be involved in multiple concurrent transactions. To distinguish different transactions, a transaction ID is included in the phase-2 message, as well as in all messages sent between clients and participants.

 

 

5.2.7 Participant replica failures

HACommit can tolerate not only client failures but also participant replica failures. It can guarantee continuous data availability if more than a quorum of replicas is accessible for each participant in a transaction. In case quorum replica availability cannot be guaranteed, HACommit can be blocked, but the correctness of atomic commit is guaranteed regardless. The high availability of data enables a recovery process based on replicas instead of logging; however, logging and other mechanisms like checkpointing and asynchronous logging can speed up the recovery process. Failed participant replicas can recover by copying data from the correct replicas of the same participant, or the recovery techniques used in consensus and replication services can be employed for the replica recovery of participants. Although one replica is selected as the leader (i.e., the participant commit node), the leader replica can easily be replaced by another replica of the same participant.

If a participant fails before sending its vote to its replicas, the new leader makes an original decision for the ballot. Otherwise, since the vote of a participant is replicated before being sent to the coordinator, the vote is kept consistent across changes of leader. Besides, the client sends the transaction outcome to all participants and their replicas in the commit process. Thus, failed participant replicas can be recovered correctly as long as the number of failed replicas for a participant is tolerable by the consensus algorithm in use. Generally, there are fewer failed replicas per participant than is tolerable by the highly available data store, as the number of replicas can be increased to tolerate more failures; hence, transactions can normally be committed in HACommit. In case there are not enough active replicas, the participant lacking replicas will not respond to the client, so as to guarantee replica consistency and correctness, and the commit process pauses until all participants are equipped with enough active replicas. Though the liveness of the architecture relies on the number of active replicas, HACommit guarantees the correctness of commit and the consistency of data in all situations.

5.2.8 The recovery process

A recovery proposer starts the recovery process by starting a new round of the consensus instance from the first phase. In the first phase, the recovery proposer updates its ballot number bid to be larger than any it has seen, and sends a phase-1 message with this ballot number to all participants. On receiving the phase-1 message with bid, if a participant has never received any phase-1 message with a ballot number higher than bid, it responds to the proposer. The response includes the accepted transaction decision, and the ballot number under which the acceptance was made, if the participant has ever accepted any transaction decision.

If the proposer has received responses to its phase-1 message from all participants, it sends a phase-2 message to all participants. The phase-2 message carries the same ballot number as the proposer's last phase-1 message. The transaction outcome with the highest ballot number among the responses is proposed as the final transaction outcome; or, if no accepted transaction outcome is included in the responses to the phase-1 message, the proposer must propose abort to comply with the definition of the transaction.
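A minimal sketch of this ballot-based recovery round follows, in the style of classic consensus; it is simplified to a single in-process instance, and all names are illustrative.

    # Illustrative ballot-based recovery (phase-1 promises, phase-2 acceptance).

    class ParticipantAcceptor:
        def __init__(self):
            self.promised = 0          # highest ballot promised
            self.accepted = None       # (ballot, decision) accepted so far, if any

        def phase1(self, ballot):
            if ballot > self.promised:
                self.promised = ballot
                return self.accepted   # report any previously accepted decision
            return "REJECT"

        def phase2(self, ballot, decision):
            if ballot >= self.promised:
                self.promised = ballot
                self.accepted = (ballot, decision)
                return "ACK"
            return "REJECT"

    def recover(acceptors, ballot):
        # Phase 1: HACommit's consensus collects responses from all participants;
        # availability of each participant is itself covered by replica quorums.
        replies = [a.phase1(ballot) for a in acceptors]
        if any(r == "REJECT" for r in replies):
            return None                      # retry later with a larger ballot
        accepted = [r for r in replies if r is not None]
        # Adopt the decision with the highest ballot, else propose ABORT.
        decision = max(accepted)[1] if accepted else "ABORT"
        # Phase 2: ask every participant to accept the (possibly adopted) decision.
        if all(a.phase2(ballot, decision) == "ACK" for a in acceptors):
            return decision
        return None

    accs = [ParticipantAcceptor() for _ in range(3)]
    print(recover(accs, ballot=1))   # ABORT: no decision had been accepted yet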

5.2.9 Failure recovery

In the design of HACommit, we assume the crash-stop failure behaviour of servers, as many real-world implementations of consensus do. That is, if a client or a participant replica fails, it simply stops responding; servers will not send random or malicious messages on failure. The one-phase commit protocol HACommit requires more rounds of communication only on failures. The recovery mechanisms handle client failures and participant replica failures respectively.

5.2.10 Liveness

Similar to consensus protocols, HACommit assumes, for the guarantee of liveness, that some proposer will finally succeed in finishing one round of the algorithm. In HACommit, if all participants consider the current proposer failed and start a new round of consensus simultaneously, a racing condition among recovery proposers can form in the first phase; no proposer might be able to succeed in finishing the second phase, leaving the liveness of commit unguaranteed. Though it rarely happens, the racing condition among would-be proposers must be avoided for the sake of liveness. In actual implementations, random back-off of candidates, i.e., having each candidate wait a random time before initiating another round, is enough to resolve the racing situation; alternatively, leader election or failure detection services outside the algorithm implementation can be used.
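A minimal sketch of randomized back-off between recovery rounds follows; the time bounds and retry limit are illustrative, not values given in the source. Here, try_recovery stands for one complete phase-1/phase-2 attempt, such as the recovery sketch in Section 5.2.8.

    # Illustrative randomized back-off so competing recovery proposers
    # fall out of lockstep instead of racing forever.
    import random
    import time

    def recover_with_backoff(try_recovery, max_rounds=10):
        ballot = 1
        for _ in range(max_rounds):
            outcome = try_recovery(ballot)   # one full phase-1 + phase-2 attempt
            if outcome is not None:
                return outcome               # consensus reached on the outcome
            # Another proposer interfered: wait a random time, then retry
            # with a strictly larger ballot number.
            time.sleep(random.uniform(0.01, 0.1))
            ballot += random.randint(1, 5)
        raise TimeoutError("recovery did not converge")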

5.2.11 Transaction throughput and latency factors

We measure transaction throughput and latency when using different commit protocols. In the experiments, we retry the same transaction on failure of lock acquisition until the transaction is committed successfully, with each retry made after a random amount of time. The throughputs and latency levels of the proposed database transactions under HACommit are represented in the results and evaluation section. The HACommit implementation has larger transaction throughputs than the other implementations in all workloads, and the retry policy of the experiments brings the throughputs under the HACommit architecture close together. HACommit also has lower average transaction latencies than the other methods in all workloads, and its advantage grows as the number of operations in a transaction increases in the workloads. Under the same concurrency control and isolation level, the factor that gives HACommit its advantage over the existing methods is that no costly logging is involved during the commit, and therefore no persistence of data is needed.

 

 

 

5.3 PROBLEM STATEMENT

The cloud database transaction is an essential factor for data storage and its related functions. The distribution of database transactions leads to challenges such as data security, performance, data privacy, resource utilization, and high availability, alongside the demand for high-quality data storage in a cloud DBMS. Inefficient database transactions are reflected in the read and write functions across the cloud infrastructure, and the response time of the system also affects the read and write capacity. Under replication of data, database transactions can suffer inconsistency problems, and under heavy read and write traffic the response time creates conflicts between data storage and database transactions. With the corporate adoption of cloud computing, the significant increase of cloud options has established database services in the form of cloud databases or Database-as-a-Service (DaaS). Early applications put a priority on read access because the ratio of reads to writes was very high, and delivering high-performance read access was the primary purchase criterion; however, this is changing. Adding another database server is as simple as splitting the data across one more server, yet many user requests involve related information.

5.3.1 DFCL (Deadlock Free Cell Lock)

The existing method, the DFCL (Deadlock Free Cell Lock) algorithm, suffers from the overhead of concurrent data transactions. DFCL depends on a locking system applied at a low degree of locking granularity: many transactions can access the same data concurrently with only a small percentage of conflicts. The algorithm proceeds with operations on the required cells depending on the lock mode, and it provides either read or write transactions simultaneously within the cells. It also eliminates deadlock while locking the cells by forcing a waiting transaction to pass into the rollback or commit phase. During concurrent transactions, transactions may still be affected by conflict and deadlock conditions on data storage; deadlock arises from the infinite waiting time of transactions for a data lock. During data transactions, DFCL incurs high overhead for concurrent data transactions as the number of waits increases.

To overcome these issues, the HACommit architecture is utilized for active transactions, meeting the requirement of less time per database transaction in business applications. Hence the HACommit architecture improves the performance of cloud database transactions with concurrency transparency and enhanced read-write data transactions.

5.4 PROPOSED METHODOLOGY

A cloud DBMS is a distributed database that delivers computing as a service instead of a product. It shares resources, information, and software between multiple devices over the Internet, and it is growing significantly. As a result, database management tasks can be outsourced to third parties, placed into the cloud at much lower cost. The structure of the cloud computing database and its functioning in collaboration with other nodes is observed under Database-as-a-Service, and many e-commerce companies already benefit from it. For effective read and write transactions, the HACommit architecture enables consistent, replicated data transactions among clients and servers, and it provides a non-blocking mechanism under client and server failures across the transaction nodes.

The HACommit (Highly Available Commit) architecture is designed for highly available data transactions, which are established through one-phase read and write data transactions. Data is partitioned and distributed through the network of different servers to attain improved scalability and server reliability, and the high availability of data is achieved by replication over the various servers. Apart from data replication and distribution, HACommit makes no assumptions about the underlying data transactions. Front-end servers act as the clients, and a client starts a data transaction. The servers hold the transaction's data through participant commit nodes such as A, B, C, and D, while other servers hold their replicas; HACommit provides the client and server sides with a transaction-processing client-side library. The HACommit data transaction enables improved concurrency of data transactions without waiting-time delays.

The read (Read Log Equalizer) and write (Write Log Equalizer) transactions are done over the process in which HACommit establishes the job assignment for each participant commit node, the lock-based ACK, and the data transactions through the client-server interfaces. The commit interface enables client-based processing, with the processing of the last operation featured as the normal commit process. Client or participant failures trigger the recovery process.

A data transaction starts with an execution phase in HACommit. The transaction performs its reads and writes, possibly with inconsistent replication, for all but the last operation. For the last operation, the client dispatches it to all the participant nodes A, B, C, D. Each node then votes YES or NO on the commit of the data transaction, depending on the consistency of the data transactions, data integrity, and concurrency control under the lock-based mechanism. The participant node replicas record the transaction context information before responding to the client. The client receives a commit-style ACK from all the participant nodes together with the result of the last processing operation; the commit process then starts.

Now the client makes the decision whether to commit or abort when concurrent data transactions occur. The client commits the transaction only if all participant commit nodes, one by one, vote YES. The client then sends its decision to the participant nodes and their replicated commit nodes, and it ends the data transaction once it has received the acknowledgements from the replica commit nodes. During the read, the replicated data item is read while the other replicated commit nodes are locked by the locking mechanism. Failures caused by errors in the replication of the participant commit nodes, which trouble the existing methods, are recovered and resolved even as the number of commit operations within the commit nodes increases. Figure 5.2 represents the HACommit-based one-phase transactions.

Figure 5.2 HACommit-based one-phase transactions

HACommit provides one-phase, logless data transactions without deadlock or waiting time at the participant nodes. The execution of data transactions uses a level of locking among the nodes during a read-write data transaction. In the normal case, the read-write transaction completes with an ACK sent and then received. The proposed HACommit follows its specific feature, the vote-before-decide approach. When the transaction starts, the participant nodes operate through the commit phase, and when the transaction commits, the process never faces wait and delay issues. The number of transactions is increased without enabling inconsistent data transactions. When the read transactions complete under the locking approach, the nodes vote YES to commit; the write data transactions are then done, and the lock is released without any difficulty for the data transactions. The locking approach introduces no complexity-related issues; the operation improves concurrency and eliminates the wait time of data transactions.

HACommit establishes a consensus-algorithm-based process to handle the actual, rare client-server failures during data transactions, providing the means to replace or eliminate the failed client or server. When a client-server failure occurs without such handling, the whole transaction process fails and the execution time grows.

5.4.1 Consensus Algorithm

The consensus algorithm is the process of getting one database transaction accepted among all participant nodes despite client and server failures. The acceptance of a database transaction depends on the consensus value, i.e., the minimum number of active connections relative to the capacity of the server. This algorithm provides the vote-before-decide commit transaction, which incurs less overhead than the traditional commit. The consensus algorithm is used so that data transactions proceed without delay and without wait time for concurrent data transactions, with the client making the decision among the nodes. The consensus value depends on the memory capacity of the participant commit nodes and the load applied across the commit nodes, and it regulates the consensus among the participant nodes. A problem arises when difficulties occur in the participant nodes or in their communication between client and server: the overall transaction experiences processing delay.

Consensus value: C = active connections of the workloads − memory capacity

The consensus algorithm achieves high durability when a database transaction is replicated. While the last operation is performed, the client sends a message to the participant commit nodes that hold the data; to the other commit nodes that have no processing for the transaction, it sends an empty operation. The last operation acts as a read, a write, or an empty operation, and the ACID properties are checked for violations at the participant commit nodes before commit. After the last operation, each participant votes YES, or votes NO to force the data transaction to restart.
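A minimal sketch of the consensus-value computation and the YES/NO vote on the last operation follows. The overload threshold (C greater than zero) and the field names are assumptions for illustration; the source only gives the formula above and the ACID check.

    # Illustrative consensus-value computation and last-operation vote.

    def consensus_value(active_connections, memory_capacity):
        # C = active connections of the workloads - memory capacity (Section 5.4.1)
        return active_connections - memory_capacity

    def vote_on_last_operation(node, txn_writes):
        # A node votes YES only if it is not overloaded (C > 0 taken as an
        # assumed overload threshold) and the transaction passes its local
        # ACID/integrity checks.
        overloaded = consensus_value(node["active_connections"],
                                     node["memory_capacity"]) > 0
        acid_ok = node["integrity_check"](txn_writes)
        return "YES" if (not overloaded and acid_ok) else "NO"

    node_a = {"active_connections": 12, "memory_capacity": 64,
              "integrity_check": lambda w: all(v is not None for v in w.values())}
    print(vote_on_last_operation(node_a, {"x": 1}))   # YES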

Figure 5.3 Consensus algorithm based client coordinator

Figure 5.3 shows the Consensus algorithm based client coordinator. Figure 5.4 illustrates the Read and Write Log Equalizer.

 

 

Figure 5.4 Read and Write Log Equalizer

5.4.2 Read and Write Log Equalizer (RLE and WLE)

The read data transaction is done in the presence of a load-balancer-based concurrency control policy. The read transaction is performed in commit mode: HACommit first decides whether to vote. If HACommit votes YES, the other commit nodes are locked, taking into account the other participant nodes and their active connections. The read transactions enjoy highly available data stores among the different commit nodes; once the read transaction is done, the other commit nodes are released. The client coordinator receives the read data transactions without wait time or delay, and with consistent data. The capacity of the active connections establishes the load balancing within the client-server database implementation. Transaction processing resolves inconsistency and controls concurrency using lock managers, while a dynamic load balancer distributes the load among servers; it must therefore satisfy the user with minimum response time and correct data.

Hence the load balancing approach is used for the proper utilization of resources, improving the performance of the overall system and maximizing throughput. The read and write load is distributed among servers based on their capacity and minimum active connections. As client requests are distributed according to capacity, servers can provide a quick response, and the server with the minimum active connections is considered for service; this provides better service to clients with minimum response time and maximum throughput.
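A minimal sketch of capacity-aware, least-active-connections server selection for the read/write log equalizer follows; the server structure and numbers are illustrative.

    # Illustrative least-connections-per-capacity selection.
    from dataclasses import dataclass

    @dataclass
    class Server:
        name: str
        capacity: int            # relative capacity of the server
        active: int = 0          # currently active connections

    def pick_server(servers):
        # Prefer the server with the fewest active connections per unit of
        # capacity, so larger servers absorb proportionally more requests.
        return min(servers, key=lambda s: s.active / s.capacity)

    servers = [Server("A", 4), Server("B", 2), Server("C", 1)]
    for _ in range(7):           # dispatch seven read/write requests
        s = pick_server(servers)
        s.active += 1
    print([(s.name, s.active) for s in servers])   # a 4:2:1 split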

The write transaction is done through consistent replication of the database transaction, and it also carries the transaction context. Each applicable write data transaction is provided to the replica participant commit nodes, since consistent write data transactions are required to survive participant commit node failures. The client sends the write requests to the participant commit nodes, choosing the write transactions along with their careful execution, and the transaction context is distributed among the commit nodes. All write data transactions pass through stable storage before the changes are written to the database. Several works address data replication and high scalability of data transactions in the cloud environment without waiting on write transactions. All modifications are written to a log before the data transactions are done, which adds processing time: the time taken for write transactions is considerably higher than for reads, because the write transaction must withstand participant commit node failures, and the transaction also depends on client and server failures. The commit node replicas do not introduce inconsistent data transactions, so the participant commit nodes never process conflicting versions of the same writes. The replication of the data transaction provides the relevant write transaction to the server for active write database transactions. The consensus algorithm is used to detect client and server failures, after which the data transaction is written and the lock is released; accessing the same data at different times increases the write time compared to the read transaction time. The number of servers and the increased number of job assignments determine the response time as well as the latency during write database transactions.

5.4.3 Algorithm 1: A read and write data transaction

    // the transaction among the A, B, C, D commit nodes
    start the transaction;
    for the first n − 1 operations do
        send the operation to the participant commit nodes and collect the results during processing;
    end

    // a read-write cycle of the data transaction
    send the ACK to the participant commit nodes;
    assign the job and balance the load across all the commit nodes;
    receive the ACK;
    lock the other commit nodes B, C, D;
    perform the read transaction;
    if the read transaction succeeds then
        compute the consensus value = active connections − memory capacity;
        release the locks;
        perform the write transaction;
    end

    // process multiple read-write transactions
    perform multiple read-write transactions;

    // process the last operation
    send the n-th operation to the participant commit nodes;
    send the relevant write transactions and collect the votes from all participant nodes;
    wait for the results from all participant commit nodes;
    end the transaction safely;

    // the commit process starts
    for all participant commit nodes do
        set result = commit the read and write transaction;
        on client failure or server failure, send a failure message;
        time out the transaction;
    end
    wait for the messages from all participant commit nodes;
    if finally no message is received then
        end the transaction;
    end

 

5.4.4 Fault Tolerance

Fault tolerance is the active process of detecting client or server failures during data transactions. If a node fails, its connection times out, and the fault tolerance mechanism identifies when and where the failure occurred. Failures are uncommon during a healthy replication process; a failed node is detected and quickly replaced with a node in the normal client-server state. HACommit enables a non-blocking process under client-server failures with the participant commit nodes of the data transactions, as long as fewer than a quorum of participants fail.

In the fault-tolerance tests, we examine how HACommit behaves under client failures and server failures. The evaluation result demonstrates that no transaction is blocked under server failures or client failure, as long as a quorum of participant replicas is accessible. We use five replicas and initiate one client in the fault tolerance tests. To create failure scenarios, we actively kill a process in the experiments; the network module of our implementation instantly returns an error in such a case, and our application processes the error as if a connection timeout on node failure had happened. With pessimistic concurrency control, reads in the HACommit implementation take up a significant portion of the time, and the failure of one replica means that the system can process fewer reads; this leads to lower throughputs and higher average latencies for read transactions, as well as for all transactions. Failures of more than one replica violate the quorum availability assumption of HACommit: the throughput drops to zero immediately, because no operation or commit process can succeed at all. The HACommit implementation uses timeouts to detect failures and to retry read or write transactions. As long as a quorum of replicas is available for every data item, HACommit can process transactions as usual. We also examine how HACommit behaves under transaction client failures: we have all servers log the events of timing out, repairing a transaction, and getting a transaction repaired, and we deliberately kill the client in an experiment. Each server program periodically checks its local transaction contexts to see if any last-contact time exceeds a timeout period.

5.4.5 The client failure

In HACommit, all participants are candidates for recovering from a failure. We call the recovering nodes recovery proposers, which act as the would-be proposers of the commit process. The recovery proposers activate on client failure. In an asynchronous system, there is no way to be sure whether a client has failed. In practical implementations, a participant can keep a timer on the duration since it last received a message from the current proposer; if the period exceeds a threshold, the participant considers the present proposer failed and appoints itself as the recovery proposer. A recovery proposer must run the complete consensus algorithm to reach consensus safely among the participants. Multiple rounds, phases, and communication roundtrips can be involved on client failures, as any would-be proposer can start a new round on any (apparent) failure.
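A participant cannot know that the proposer has failed; it can only suspect failure after a period of silence. A minimal sketch of such a timer follows, with an assumed timeout value.

    # Illustrative failure-suspicion timer on the current proposer.
    import time

    class ProposerWatch:
        def __init__(self, timeout_s=2.0):      # threshold is an assumption
            self.timeout_s = timeout_s
            self.last_heard = time.monotonic()

        def on_message(self):
            # Any message from the proposer resets the timer.
            self.last_heard = time.monotonic()

        def proposer_failed(self):
            # In an asynchronous system this is only a suspicion, not certainty:
            # a slow proposer is indistinguishable from a failed one.
            return time.monotonic() - self.last_heard > self.timeout_s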

Although complicated situations can happen, the participants of a transaction will eventually reach the same outcome, if they ever reach a consensus and the transaction ends. For example, since delayed messages cannot be distinguished from failures in an asynchronous system, the current proposer might not actually have failed; instead, its last message has not reached a participant, which then considers the proposer failed. Or, multiple participants view the current proposer as failed and start a new round of consensus simultaneously. None of these situations impairs the safety of the consensus algorithm.

5.5 RESULTS AND DISCUSSION

The HACommit-architecture-based read-write transactions are simulated using the CloudSim tool. Workloads are generated for the replication of consistent data items to provide the read and write transactions. The parameters classify the write database transactions with the corresponding data centres and their response time calculations. The Read Log Equalizer and Write Log Equalizer establish the number of replicated nodes, read transactions, write transactions, and response times of the database transactions. The load variation is balanced across the write database transactions to improve the speed of the write function, and the ratio of load variation is estimated through the corresponding number of replicated nodes. The performance results are compared with the existing methodologies: the existing DFCL algorithm is compared with the proposed HACommit architecture for highly effective read and write data transactions.

Figure 5.5 Number of users vs. throughput

Figure 5.5 shows the number of users vs. throughput. The load factor depends on the speed of the transactions during the resource allocation of all the other transaction nodes, whose resource consumption admits requests at the instant of loading. Different load conditions arise when the same service is requested, and the speed-up time may increase based on the difference between the current load and the load factors of the transaction nodes. The load enumerates the current CPU usage, memory usage, and network usage of the transmission nodes. The experimental results, taken over the number of replicated nodes, show that when the resource status changes, the speed-up time increases in linear order; for read transactions, they show the variation in read transactions as replicated nodes are omitted.

Figure 5.6 Number of users vs. number of concurrent transactions

Figure 5.6 represents the number of users vs. the number of concurrent transactions.

Figure 5.7 Number of replicated commit nodes vs. number of successful read transactions

Figure 5.7 illustrates the number of replicated commit nodes vs. the number of successful read transactions. The HACommit-based read and write transaction performs the read phase without holding any locks, while RLE and WLE maintain consistency in the validation phase and provide the write phase with the updating of the database transactions.

 

Figure 5.8 Number of replicated commit nodes vs. number of successful write transactions

Figure 5.8 represents the number of replicated commit nodes vs. the number of successful write transactions. HACommit improves the response time and the waiting times across concurrent transactions along with the number of successful read and write database transactions: as the response time increases, the waiting time for concurrent transactions reduces. The execution of the concurrent transactions ensures the serializability of the transactions.

 

Figure 5.9 Number of replicated commit nodes vs. response time (read)

Figure 5.9 represents the number of replicated commit nodes vs. response time (read).

 

 

Figure 5.10 Number of replicated commit nodes vs. response time (write)

Figure 5.10 represents the number of replicated commit nodes vs. response time (write). Figure 5.11 illustrates the operations per transaction vs. average latency.

 

 

Figure 5.11 Operations per transaction vs. average latency

 

5.6 SUMMARY

Cloud DBMS based data transactions are made effective through the balanced distribution of workloads among the replicated commit nodes in the presence of HACommit. The HACommit-architecture-based database transactions provide a logless one-phase commit process that is highly available, reliable, and scalable for business applications. The third stage of this work proposed HACommit, a logless one-phase commit protocol for highly available data stores. In contrast to the classic vote-after-decide approach to distributed commit, HACommit adopts the vote-before-decide approach, which turns the atomic commit problem into a consensus problem. HACommit keeps the vote-before-decide approach by distributing the processing of the last transaction operation: the voting process is overlapped with the last operation processing, so the transaction commits in a single-phase commit process. The aim of the read and write log equalizers is to satisfy the user while distributing load among the several commit nodes or servers. The consensus configuration information is present in the transaction context structure, which is used to identify client failures. HACommit processes active read and write transactions: for read transactions the time taken is reduced, while for write transactions the time taken is higher compared to the read transactions, owing to the job assignment selected for each replicated commit node of the transactions.

 

 
