Operating system
Table of contents
Introduction
Analysis of failures under:
1. Process state
2. Process scheduling
3. Synchronization
Survey and test experiments
Conclusion
A comparison study with related works
Summary
References
Introduction
This study is an in-depth analysis of computer systems to evaluate technical problems associated with the operating system. The research summarizes articles and publications that have evaluated the challenges of process management systems and gives the necessary recommendations. The motivation behind this research is that computer users have recently experienced frequent failures of the operating system. Malfunctioning and breakdown of the system are of significant concern, and this article seeks to offer solutions to this problem. My interest is also to build a system that can help correct some of the most common technical issues.
My topic of interest is process management, where we study how the operating system organizes and controls data. A process is the execution of a program, performing specific actions as articulated in the plan. The operating system executes this process: it creates, schedules, and terminates operations. All process operations can be regulated and controlled through the process control block (PCB). The control block contains all the critical information that runs the entire system, so any failure in the system is handled through this process control block. It therefore means that, with a proper understanding of this control block, any technical problem arising from the operating system can be well managed or solved, as the sketch below illustrates.
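To make the idea concrete, the following is a minimal sketch of what a process control block might hold. The class name and fields are illustrative assumptions for this paper, not the layout of any particular operating system's PCB.

import java.util.ArrayList;
import java.util.List;

public class ProcessControlBlock {
    // Illustrative fields; a real kernel PCB holds many more.
    private final int pid;           // unique process identifier
    private String state;            // e.g. "NEW", "READY", "RUNNING"
    private long programCounter;     // address of the next instruction
    private long[] cpuRegisters;     // register contents saved on a context switch
    private int priority;            // scheduling priority
    private List<Integer> openFiles; // descriptors of files the process holds

    public ProcessControlBlock(int pid) {
        this.pid = pid;
        this.state = "NEW";
        this.cpuRegisters = new long[16];
        this.openFiles = new ArrayList<>();
    }

    public int getPid() { return pid; }
    public String getState() { return state; }
    public void setState(String state) { this.state = state; }
}

Because every failure discussed below is ultimately diagnosed through this block, keeping its fields consistent is what managing a process amounts to in code.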
Process management involves coordinating all activities undertaken by a process. To begin with, the different types of technical problems related to process management include failures in:
- Process State
- Process Scheduling
- Process Synchronization
- Delta computing
- Uploading
- Retrieving
Failure in any of these processes means there will be an overall technical failure in the system. The operating system manages processes by carrying out tasks such as resource allocation and process scheduling. The CPU of the computer holds these commands when in operation. Within the computer system, the operating system has to synchronize all the different information.
Analysis and findings
In the first survey, I tested and examined each of the technical mechanisms related to all concepts of process management.
- Failure in the process state
Generally, a process has a set of commands or instructions that need execution. A process state refers to the state associated with a particular process at a specific time. A process has interlinked data that must be processed into final information; a major problem sets in when the wrong data are interlinked in this process. Research done by a group of computer scientists shows that the process components form a significant factor in technical failures. These process components are the resources needed to execute a process, such as raw data, encryption, and login information, among others. If a mistake happens while coordinating these resources, or during input, the system will end up malfunctioning or not working as expected.
A process state has a particular time within which it has to respond to a given command or action, and hence timing is critical in analyzing common errors. A difference in the response time of one process can delay the reaction of another, affecting the system as a whole.
A process can be in one of several states: new, ready, running, waiting (blocked), suspended, or terminated. If a process is blocked or terminated, some operations will not be performed.
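As a rough illustration of how these states relate, the following enum sketches the usual transitions between them. The state names and the transition rules are a textbook simplification, not the state machine of any specific operating system.

public enum ProcessState {
    NEW, READY, RUNNING, WAITING, SUSPENDED, TERMINATED;

    // Returns true if a process may legally move from this state to the next.
    public boolean canTransitionTo(ProcessState next) {
        switch (this) {
            case NEW:       return next == READY;
            case READY:     return next == RUNNING || next == SUSPENDED;
            case RUNNING:   return next == READY || next == WAITING || next == TERMINATED;
            case WAITING:   return next == READY || next == SUSPENDED;
            case SUSPENDED: return next == READY || next == WAITING;
            default:        return false; // TERMINATED is final
        }
    }
}

A blocked (WAITING) process, for example, cannot move straight to RUNNING; it must first become READY, which is exactly why a blocked or terminated process leaves some operations unperformed.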
- Failure in process scheduling.
Analysis of the problem
Process scheduling is the mechanism by which the central processing unit allows one process to run while keeping the others on hold in a different location. This happens due to the unavailability of critical inputs such as I/O, among other factors. The critical purpose of scheduling is to make operations within the central processing unit efficient and fast. When the central processing unit is unable to hold activities waiting for resources, the result is data traffic and hence slow operations. Investigations show that in most instances when our computers operate at a slow speed, the failure can be traced to the CPU.
This means that malfunctioning of the CPU leads to failure in several operations within the operating system. Many researchers have come up with methods to explain how to solve the challenges of scheduling policies. The common thread in all works related to scheduling research is that they use the average waiting time calculated on given standard parameters. The turnaround time is directly proportional to both the waiting time and the execution time: turnaround time = waiting time + execution (burst) time. A high turnaround time means that the waiting time will also be high, and hence the overall average waiting time will increase. Through this relationship, critical scheduling failures have been outlined: if there is a timing challenge, then execution will take quite a long time.
This challenge can be solved through timely loading of operations and creating a bigger capacity for the CPU to perform many tasks, as the worked example below illustrates.
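As a worked example of the average waiting time the scheduling literature relies on, the following computes waiting and turnaround times under first-come, first-served (FCFS) scheduling. The burst times are made-up sample values, not measurements from this study.

public class FcfsExample {
    public static void main(String[] args) {
        int[] burst = {24, 3, 3};            // sample CPU burst (execution) times in ms
        int[] waiting = new int[burst.length];
        int[] turnaround = new int[burst.length];

        // Under FCFS, each process waits for every process ahead of it.
        for (int i = 1; i < burst.length; i++) {
            waiting[i] = waiting[i - 1] + burst[i - 1];
        }

        double totalWait = 0, totalTurnaround = 0;
        for (int i = 0; i < burst.length; i++) {
            turnaround[i] = waiting[i] + burst[i]; // turnaround = waiting + burst
            totalWait += waiting[i];
            totalTurnaround += turnaround[i];
        }
        System.out.println("Average waiting time: " + totalWait / burst.length);          // 17.0
        System.out.println("Average turnaround time: " + totalTurnaround / burst.length); // 27.0
    }
}

Reordering the same jobs shortest-first would drop the average waiting time from 17 ms to 3 ms, which is the sense in which timing failures inflate turnaround.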
Findings
Challenges associated with scheduling include:
- Type of Job
Scheduling begins when a command or process is allocated to the CPU. Ordinarily, the long-term scheduler takes control of the ready queue, the short-term scheduler selects the next job from the available queue, and in the end the job is passed to the CPU. The common challenge is that before admitting any job, the scheduler does not look keenly at what kind of job it is. There are two main types of jobs: computation-intensive and communication-intensive. If the job is communication-intensive, it needs higher bandwidth for moving data; if it is computation-intensive, it requires a CPU with high speed. Computational jobs are jobs that need more processing time than communication time. On the other hand, communicative tasks are jobs that require more communication time than executing or processing time. For computation-intensive jobs, the process mainly considers execution time, and such jobs also consume more power.
- Reliability
Security and resource allocation are of primary concern in the operating system. A suitable safety mechanism is critical for grid resources to operate within parameters of excellent safety. This is because if a resource is affected by viruses or other malicious code, it will probably be blocked from executing the application safely, which also downgrades the overall performance of the operating system. In line with these safety measures, a key factor to consider is reliability. The reliability factor (RF) value of each resource is identified by calculating its self-protection capability and the reputation weightage gathered from the user community based on past performance.
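The study does not give the exact RF formula, so the following is a minimal sketch assuming a simple weighted sum of the two components named above; the weights and the [0, 1] score scale are illustrative assumptions.

public class ReliabilityFactor {
    // RF as a weighted combination of self-protection capability and
    // user-community reputation, both assumed to be scores in [0, 1].
    public static double compute(double selfProtection, double reputation) {
        double wProtection = 0.6; // illustrative weight
        double wReputation = 0.4; // illustrative weight
        return wProtection * selfProtection + wReputation * reputation;
    }

    public static void main(String[] args) {
        // A resource with strong self-protection but a middling reputation.
        System.out.println(compute(0.9, 0.75)); // prints 0.84
    }
}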
- Location Dependency
Communication jobs should usually be submitted at a location near the operating system base; in that case there is no network congestion problem, and packet delay drops to a considerable rate, resulting in low traffic. Some operating systems, however, have job inputs placed at far positions, which affects frequency and coordination. If communication jobs are relayed from a location remote from the system, they will cause a network congestion problem: the network problem grows and failure rates increase.
CONCLUSION AND RECOMMENDATIONS ON SCHEDULING FAILURE
In this part of the analysis, we have identified the scheduling algorithms and the challenges encountered during the scheduling of processes. Scheduling is a significant area of concern in the operating system. This study has determined that during scheduling, the operating system does not always consider the types of jobs to execute; it only applies the algorithm based on execution time, number of processes, and other factors. As a first recommendation to salvage the challenge, we can minimize the average waiting time for the processor. Secondly, we can identify and distinguish the type of job that needs to be performed, either computational or communicational. Location dependency is also one of the key factors in determining the efficiency of scheduling algorithms. The active and well-organized exploitation of the system requires proper scheduling of jobs.
- Synchronization failure
Another technical issue with process management relates to the synchronization process. Synchronizing data between the server and the device in use is a very complicated process and involves a high level of operation management. During synchronization, several process activities must take place regarding the flow of data. The operating system can fail due to errors in the mechanism by which data is input in order. Synchronization failure can also result from poor manual operation of the device. This survey aims at identifying the causes of this failure and the necessary recommendations for such errors.
Survey test and analysis
When synchronization takes place, the interviewer must get a response message and see statistics in the form of a summary. These statistics give detailed information on all actions undertaken during synchronization. This part focuses on studying aspects of the network concerning the saving and loss of data. A study was conducted on how synchronization changes when data needs to be kept in offline mode.
A group was tasked with investigating working offline and how it affects synchronization. Working offline meant using our local database for all direct changes made by the interviewer; later, the team synchronized all changes with the server. The basic idea behind the synchronization is to safely keep changes made offline in the form of a series of deltas that can later be uploaded to the server. This study examines how one can construct and upload all deltas.
Delta Structure
For instance, each new change to a particular model is stored as an entry. These appear in wording like "create board" or "delete card." When the modification is created, the present state of the change is saved. Each entry has particular data associated with it; for instance, the information will be displayed as pending, uploading, failed, or canceled, and it will show how many times we've attempted to sync the change.
For each model with a change, multiple fields may have been modified. These changes are referred to as deltas, and each delta represents a specific field changing in a certain way. Deltas are rather simple to evaluate, since they just point to a field and store what value was established before and after the change. The following is a sample of the synchronization process and of calculating deltas.
Calculating Deltas
The first step involved doing an annotation-based calculation. The delta fields were annotated as shown:
public class Card {
    @DeltaField(ModelField.NAME)
    private String name;

    @DeltaField(ModelField.DESC)
    private String description;
}
The process then follows that you feed the calculator two versions of the model and compute the difference between them. A standard error occurred while computing this case, caused by some ill-thought-out data structures. Despite the challenge, the researchers could still calculate deltas by hand, though most delta calculation is normally automatic.
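To illustrate how such an annotation-based calculator might work, here is a minimal sketch that reflects over fields marked with @DeltaField and records the before and after values. The DeltaField annotation, the ModelField enum, and the Delta record are reconstructed assumptions based on the fragment above, not the actual library used in the study.

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;

enum ModelField { NAME, DESC }

@Retention(RetentionPolicy.RUNTIME) // keep the annotation visible to reflection
@Target(ElementType.FIELD)
@interface DeltaField {
    ModelField value();
}

// One delta: a single field changing from one value to another.
record Delta(ModelField field, Object before, Object after) {}

public class DeltaCalculator {
    // Compares two versions of the same model and returns one delta per
    // annotated field whose value actually changed.
    public static List<Delta> calculate(Object before, Object after) throws IllegalAccessException {
        List<Delta> deltas = new ArrayList<>();
        for (Field f : before.getClass().getDeclaredFields()) {
            DeltaField tag = f.getAnnotation(DeltaField.class);
            if (tag == null) continue; // skip fields that are not tracked
            f.setAccessible(true);
            Object oldValue = f.get(before);
            Object newValue = f.get(after);
            if (!Objects.equals(oldValue, newValue)) {
                deltas.add(new Delta(tag.value(), oldValue, newValue));
            }
        }
        return deltas;
    }
}

Feeding calculate() the saved copy of a Card and its edited version yields one Delta per modified field, which matches the before-and-after behavior described above.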
Results and findings
It turned out that calculating deltas was not just useful for synchronization but also helpful for faster computation. Given that calculation or computation can fail, it follows that a manual backup should be put in place. Anytime we needed to know how two objects differ, we could use the delta calculator. For instance, we keep track of the number of conflicts that occur when synchronizing data, and the delta calculator was useful for computing this information. This is considered significant in computing.
Failure in Uploading Deltas
When we switch our network on, we are in a position to upload the deltas we created. A sound operating system will be able to upload these deltas without difficulty.
This means there can be data hierarchy problems when uploading deltas. To avoid a system shutdown, this process should be done carefully. This research, however, identified technical issues that pertain to the process of uploading deltas, which is part of process management.
Working with an example: a user might create a card and then edit the description of that card. If we tried to upload the "edit description" change first, the server would be confused because, as far as it knows, the card does not yet exist. This is why deltas must be uploaded in a fixed order.
In this article, the process was to upload the different deltas after converting them to HTTP request form.
This test was affected by a lack of capacity to hold the data before synchronizing. The upload was also constrained by the execution model, which allows only one delta to be uploaded at a time. This technical limitation pushes one to use custom code to translate each delta into an HTTP request.
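A minimal sketch of such custom code, reusing the Delta record from the earlier sketch and Java's standard HttpClient; the endpoint URL and the JSON shape are hypothetical assumptions, and the loop deliberately sends one delta at a time, in creation order.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;

public class DeltaUploader {
    private final HttpClient client = HttpClient.newHttpClient();

    // Uploads deltas strictly in creation order, one at a time, so that a
    // "create card" change always reaches the server before any edit to it.
    public void uploadAll(List<Delta> deltas) throws Exception {
        for (Delta delta : deltas) {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://example.com/api/deltas")) // hypothetical endpoint
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(toJson(delta)))
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            if (response.statusCode() >= 400) {
                throw new IllegalStateException("Upload failed: " + response.statusCode());
            }
        }
    }

    // Illustrative hand-rolled JSON; a real client would use a JSON library.
    private String toJson(Delta delta) {
        return "{\"field\":\"" + delta.field() + "\",\"after\":\"" + delta.after() + "\"}";
    }
}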
Based on this same test, other technical problems were also identified and a critical response established. It is quite worrying, given these common failures, that the entire operating system could fall apart. Consider issues of data queuing: if the queue gets permanently blocked, the final result will be quite frustrating to the user.
Solution
All these network challenges fall into two main categories: temporary or permanent. The basis of our categorization is a list of HTTP status codes that can be termed "temporary"; all other operational challenges are considered permanent errors. Permanent errors are easy to solve: the best approach is to drop the particular delta and continue with other processes. This act may result in cascading errors later, but it can be controlled through more complex procedures.
Temporary errors are those caused by the operator or by network challenges. They can be handled by retrying the upload of the delta when it fails. Obviously, you will have to wait a bit first, both to let the temporary problem clear and to avoid banging the server with many requests. To that end, you can use exponential backoff in the synchronization service when this kind of error shows up, as sketched below. The overload challenge can also be mitigated by capping the number of times a single delta is retried.
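A minimal sketch of that retry policy, with doubling delays and a fixed retry cap; the specific delay and cap values are illustrative choices, not figures from the study.

public class RetryPolicy {
    private static final int MAX_ATTEMPTS = 5;     // cap on retries per delta
    private static final long BASE_DELAY_MS = 500; // first wait before retrying

    // Runs the upload, doubling the wait after each temporary failure,
    // and gives up once the attempt cap is reached.
    public static void uploadWithBackoff(Runnable upload) throws InterruptedException {
        long delay = BASE_DELAY_MS;
        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            try {
                upload.run();
                return; // success: stop retrying
            } catch (RuntimeException temporaryError) {
                if (attempt == MAX_ATTEMPTS) throw temporaryError; // cap reached
                Thread.sleep(delay);
                delay *= 2; // exponential backoff: 500, 1000, 2000, 4000 ms
            }
        }
    }
}

Capping MAX_ATTEMPTS is what prevents a single stubborn delta from hammering the server indefinitely.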
Conflict Resolution
This is a common challenge that occurs when people are working in the same environment and using the same server. It can occur when two or more users try to edit the same field of a system. This can delete crucial information and cause a go-slow on the network. Conflicts cannot be easily identified, since the two users may not be aware of each other, and the changes can even cause a total shutdown of the system. This challenge has a higher magnitude than other errors. In the actual execution of this conflict resolution, the last user wins over the first one (last write wins). A simple mitigation of this error is for the chief editor or system controller to be the last one to key in all the necessary corrections.
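A minimal sketch of that last-write-wins rule, assuming each edit carries a timestamp; the Edit type is an illustrative stand-in, not the system's actual data model.

import java.time.Instant;

public class LastWriteWins {
    // One user's edit to a shared field, stamped with when it was made.
    record Edit(String userId, String newValue, Instant timestamp) {}

    // Keeps whichever edit happened later; the earlier edit is silently lost,
    // which is exactly why crucial information can disappear in a conflict.
    public static Edit resolve(Edit a, Edit b) {
        return a.timestamp().isAfter(b.timestamp()) ? a : b;
    }
}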
The process of synchronization may not always work as expected, especially when handling operations with long-form fields. These general failures call for continuous monitoring through analytics that can detect the frequency of conflict occurrences. A safer solution involves keeping a history of all operations so that, in case of data loss, recovery is possible.
Reverting Data
It is also a big challenge that an upload may sometimes fail. A change may not even apply correctly, for the many reasons and factors analyzed before.
With such failures of reverting data, one option is simply to let future GETs from the central system overwrite all local changes in the database. This way, the operating system remains the source of truth about what changes occurred and how; in other words, it serves as a backup for the information. In extreme cases, however, if the process of reverting changes is not well synchronized, some legitimate changes may be lost.
This can be approached in two ways. The first is to upload all local changes before downloading any other information. The second is to replay all changes made while the system was offline on top of the data returned by the GET requests.
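A minimal sketch of the first approach, reusing the DeltaUploader and Delta sketches from earlier; fetchServerState() is a hypothetical stand-in for the GET requests, and the only point being made is the ordering.

import java.util.List;

public class SyncRoutine {
    // First approach: push pending local deltas before pulling server state,
    // so offline work cannot be overwritten by the download.
    public static void synchronize(List<Delta> pendingDeltas) throws Exception {
        new DeltaUploader().uploadAll(pendingDeltas); // step 1: upload local changes
        fetchServerState();                           // step 2: only then refresh locally
    }

    // Illustrative stand-in for the GETs that refresh the local database.
    private static void fetchServerState() {
        // ... issue GET requests and overwrite local records with server state ...
    }
}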
COMPARISON WITH OTHER RELATED WORKS
Generally, looking into this research work, many technical issues arise from the general operation of the operating system. It is evident that a better operating system entails a better management system. A similar study was carried out by MM University in India; it identified several critical issues in computing.
Analysis and review of the study
Of late, there has been increased demand to reduce the power consumed by computing processors and to empower the operating system through advancing technology. This entails reducing computing processors' power requirements while improving system throughput, demands that have led to the evolution of robust computing environments to meet the needs of today's technology. Most traditional computing processors were similar, i.e., with the same characteristics. These homogeneous processing systems were overwhelmed by the increasing problems and work to be performed by computers. This challenge led to the evolution of heterogeneous processing systems, which have diverse characteristics for the same processor, unlike the homogeneous ones. The aim of the research was to bring out the differences between parallel computing, which includes process management and cloud processing environments, and serial computing, which includes homogeneous and heterogeneous operating systems.
In particular, I analyzed the part of grid computing that relates to the operating system and the challenges arising from that system. Grid technology is a simpler way of executing jobs across a broad processor system.
This system allows the sharing of resources over different geographical locations. Such concepts create a virtual environment for the organization and management of data. The data contained in each grid is meant to perform a specific task. The system divides a big program into subprograms, assigns each subprogram a particular processor, and the assigned processor gives the final feedback after the process executes (see the sketch at the end of this section). The challenge identified here is that grid computing allows broad virtual interactions, which might lead to the corruption of information. If one processor fails, it might affect the others and hence slow operations.
This research established similar cases with process management, and the recommended approach was to upgrade the processors. This means that data can be handled more privately without affecting other content.
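As a rough illustration of the divide-and-assign idea described above, the following splits one large job into subprograms and hands each to a worker; it is a thread-pool analogy for grid scheduling on a single machine, not actual grid middleware.

import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class GridAnalogy {
    public static void main(String[] args) throws Exception {
        // The "big program": summing 1..1000, split into four subprograms.
        ExecutorService processors = Executors.newFixedThreadPool(4);
        List<Callable<Long>> subprograms = List.of(
                () -> sum(1, 250),     // assigned to processor 1
                () -> sum(251, 500),   // assigned to processor 2
                () -> sum(501, 750),   // assigned to processor 3
                () -> sum(751, 1000)); // assigned to processor 4

        long total = 0;
        for (Future<Long> feedback : processors.invokeAll(subprograms)) {
            total += feedback.get(); // gather each processor's final feedback
        }
        System.out.println(total); // 500500
        processors.shutdown();
    }

    private static long sum(int from, int to) {
        long s = 0;
        for (int i = from; i <= to; i++) s += i;
        return s;
    }
}

If one worker threw an exception here, feedback.get() would fail and stall the total, mirroring the way one failed grid processor slows the whole job.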
Summary and conclusion
Generally, this paper has analyzed the different technical challenges associated with process management and ways to solve them. The difficulties identified include failures in synchronization, process state, and scheduling. The study has identified the causes of such errors, which can be either a general failure of the operating system or a mistake by the person operating the device. The research was intense, since it involved sample work from other researchers in the same field and feedback from operating system technical institutions. The data analyzed here include all the necessary information on the survey process, findings, and recommendations on how to solve the problem. I recommend that in the future this work be broadened to all fields of the operating system, since most technical errors correlate; this means that some technical errors might affect several components of the operating system.
References
Chou, A., Yang, J., Chelf, B., Hallem, S., & Engler, D. (2001, October). An empirical study of operating systems errors. In Proceedings of the eighteenth ACM symposium on Operating systems principles (pp. 73-88).
Marisetty, S., Ayyar, M., Quach, N. T., & Lint, B. J. (2011). U.S. Patent No. 7,904,751. Washington, DC: U.S. Patent and Trademark Office.
Mei, Y., Hua, Y., Swami, A., & Daneshrad, B. (2005, March). Combating synchronization errors in cooperative relays. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '05) (Vol. 3, pp. iii-369). IEEE.
Suri, P. K., & Mittal, S. (2012). A comparative study of various computing processing environments: A review. International Journal of Computer Science and Information Technologies, 3(5), 5215-5218.
Virvou, M. (1999). Automatic reasoning and help about human errors in using an operating system. Interacting with Computers, 11(5), 545-573.