Accidents of Self-Driving Vehicles
Introduction
Since the automobile first appeared, the quality of human life has improved significantly. People use it to commute, transport goods, travel, and more. Over the years, many new functions have been added to the automobile. It is no longer merely a vehicle: it can answer the phone, play the radio, and provide games for rear passengers. In recent years, the invention of electric cars has eased the problems of expensive fuel and greenhouse gas emissions. As technology improves, everything becomes possible, and humankind’s ambitions are growing rapidly. Many technology companies are focusing on the development of “self-driving” cars, a concept that long existed only in science fiction films. A “self-driving” car is an autonomous car: it can drive itself intelligently without human operation. It works through a combination of internal algorithms and external devices such as sensors and cameras. The ideal “self-driving” car would bring convenience to people’s lives: it would reduce the number of accidents, give elderly and disabled people the opportunity to drive, and make transportation efficient and harmonious. However, self-driving technology is still at a nascent stage, requiring further improvement and testing. Therefore, who should be liable for an accident involving a self-driving vehicle is a controversial topic.
The Society of Automotive Engineers (SAE) has defined six levels of driving automation, from Level 0, with full human control, to Level 5, with full automation. Most self-driving cars currently sold on the market offer Level 2 or Level 3 automation, also called advanced driver-assistance systems (ADAS). These include adaptive cruise control, parking sensors, emergency braking, and self-parking. Level 2 and Level 3 cars may be more dangerous than Level 0, 1, or 5 cars, because Levels 2 and 3 make the human and the technology equal partners: without either, the system does not work. In recent years, ADAS has caused many injuries and deaths for various reasons. Many people have begun to oppose the development of self-driving cars, which has slowed some companies’ progress and testing. Even so, people should believe in the technology: once Level 5 cars mature, all the hardship and effort will have been worthwhile. And in many situations, vehicle manufacturers should be liable for accidents.
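As a rough illustration, the six SAE levels can be summarized in code (a minimal sketch; the one-line descriptions paraphrase the SAE J3016 taxonomy rather than quoting its official definitions):

```python
# Illustrative summary of the SAE driving-automation levels.
# The short descriptions paraphrase the SAE J3016 taxonomy.

from enum import IntEnum

class SAELevel(IntEnum):
    NO_AUTOMATION = 0           # human performs all driving tasks
    DRIVER_ASSISTANCE = 1       # system helps with steering OR speed
    PARTIAL_AUTOMATION = 2      # system steers AND controls speed; human monitors
    CONDITIONAL_AUTOMATION = 3  # system drives; human must take over on request
    HIGH_AUTOMATION = 4         # no human needed within a limited domain
    FULL_AUTOMATION = 5         # no human needed under any conditions

# Most cars sold today sit at Level 2 or 3, i.e. ADAS territory.
print(int(SAELevel.PARTIAL_AUTOMATION))  # 2
```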
Generally, there has been a recognition that self-driving cars implicate the manufacturer of the car more than the operator of the vehicle. That has different implications for a company like Tesla, which manufactures and sells self-driving cars, than for a company like Google, which operates them. According to the legal scholar Walker-Smith, autonomous vehicles represent a shift from vehicular negligence to product liability. The latter doctrine covers claims against manufacturers who sell defective and dangerous products to operators. On today’s roads, product liability claims arise in cases like the failure of Firestone tires in the late 1990s and the violent rupture of airbags in the 2000s. Just as in those traditional cases, where a manufactured product did not perform as promised, a recent accident caused by a self-driving car can be attributed to product liability on the manufacturer’s side.
Many people misunderstand the term “self-driving,” treating all current autonomous cars as Level 5 automation. Tesla, a vehicle company focused on electric cars, offers a driver-assistance service called “Autopilot.” That term traditionally describes a system that pilots an airplane or spacecraft automatically, without human action, so it is a poor choice of word for Tesla’s ADAS: people expect too much from it. Matthew Avery of Thatcham Research, which tests vehicles, said, “Calling this kind of technology Autopilot that’s very misleading for consumers. They might think, ‘I just need to push this button, and I can let the car drive.’” (Leggett, 2018)
Similarly, Mercedes-Benz withdrew an advertisement for its new E-Class car because the video showed more self-driving capability than the car actually had (Reilly, 2016). Mistaking Level 2 driver assistance for Level 5 fully autonomous driving will lead to more accidents.
Sellers might warn buyers at the time of purchase that the car is not a real self-driving car. But suppose that one day the buyer’s sibling borrows the vehicle in an emergency without receiving that warning; knowing little about the car, the borrower lets his or her attention drift from driving. A tragedy is likely in this situation. When such an accident happens, the surface reason may be that the driver disobeyed the instructions; the real reason, however, is that the driver did not receive enough information about the self-driving car. As famous vehicle manufacturers, Tesla and Mercedes-Benz should be responsible for emphasizing the actual state of current self-driving capability rather than overstating what their limited technology can do.
Level 2 and Level 3 driving automation still needs human driving in certain special situations. The system cannot predict when a “special situation” will happen, so it requires drivers to concentrate on driving at all times; otherwise it may cause an accident. In this respect, this kind of self-driving is not obviously different from traditional driving. Drivers need to pay attention to the road as before, but the computer controls the steering wheel. Under this system, drivers cannot drive the way they want; instead, they have to follow the system and deal with the problems that stump the computer. Sitting in the car and waiting to respond is dull and tiring. “It’s impossible for even a highly motivated human to maintain effective visual attention toward a source of information, on which very little happens, for more than about half an hour,” wrote Lisanne Bainbridge, a psychologist at University College London, in her essay “Ironies of Automation” (Fry, 2018). Nobody can stay focused in a self-driving car for an entire trip.
People prefer to read books, fix their makeup, or take a nap. Usually, when drivers believe the road ahead will be safe, they take their hands off the steering wheel and start doing personal things. These lapses have resulted in many accidents. “In the Florida 2016 crash, the driver of the Tesla had his hands on the steering wheel for only 25 seconds of the 37 minutes in which he operated the vehicle in automated control mode. In California in 2018, the driver’s hands were not detected on the steering wheel in the six seconds preceding the crash.” (Oliver, Potočnik, & Calvard, 2018) If a pedestrian suddenly appears in front of the car, the driver may not have enough time to react and press the brake. One study found that it took about 40 seconds for people to disengage from what they were doing and regain control of the car after the alarm (Fry, 2018). This design works against human physiology; most people cannot adapt to it. Vehicle manufacturers should recognize that this poor design has caused many accidents. If people had not relied on driver assistance, they might not have died.
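To see why a 40-second takeover time is so dangerous, consider a quick back-of-the-envelope calculation (a minimal sketch; the 40-second figure comes from the study cited above, while the highway speed is an assumed example value):

```python
# Distance covered during a 40 s takeover at an assumed highway speed.
# The 40 s figure is from the study cited above (Fry, 2018); the
# 65 mph speed is an illustrative assumption.

MPH_TO_MS = 0.44704               # metres per second per mile per hour

takeover_time_s = 40              # time needed to regain control
speed_mph = 65                    # assumed highway speed
speed_ms = speed_mph * MPH_TO_MS  # ~29.1 m/s

distance_m = speed_ms * takeover_time_s
print(f"At {speed_mph} mph, the car covers {distance_m:.0f} m "
      f"before the driver regains control.")  # about 1,162 m
```

In other words, a car at highway speed travels well over a kilometre before a distracted driver is fully back in control, far more than the distance to a pedestrian who suddenly appears.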
Ethical problem
There are also ethical issues related to self-driving cars. When programmers write the algorithms for driving features, they must consider ethical questions. For example, if the car turns left, a child will be struck; if it turns right, an elderly person will be struck; if it goes straight, the driver will die. What should the computer do? Who should be given priority? It seems that the best method is to transfer control to the human, but it often takes a long time for humans to switch from other activities back to driving. Therefore, scientists need to create a programmed and reasonable system for critical situations (Nyholm & Smids, 2016). In a cross-cultural comparison of Korea and Canada, Jimin Rhim, Gi-bbeum Lee, and Ji-Hyun Lee found four stages in the ethical decision-making process: moral recognition, moral judgment, moral intention, and moral behavior. Moreover, their study “highlights the need for designing AV moral behaviors that correspond to cultural and crash contexts” (Rhim, Lee, & Lee, 2020). Programmers should take cultural differences into account when designing new algorithms. Vehicle manufacturers bear full responsibility for these ethical decisions; if a moral choice is answered badly, the self-driving car will become the center of public controversy.
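To make the programmers’ task concrete, the sketch below shows one deliberately simplified, entirely hypothetical way such a rule for critical situations might be encoded. The scenario, the harm scores, and every name in the code are illustrative assumptions, not any manufacturer’s actual decision logic:

```python
# Hypothetical, simplified policy for a critical situation.
# Harm scores and option names are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class Option:
    action: str        # e.g. "turn_left", "turn_right", "go_straight"
    harm_score: float  # estimated severity of harm (0 = none, 1 = fatal)

def choose_action(options, handover_possible):
    # Prefer handing control back to the human, but only when there is
    # actually time to do so (takeover can take tens of seconds).
    if handover_possible:
        return "hand_over_to_driver"
    # Otherwise pick the option the policy scores as least harmful.
    return min(options, key=lambda o: o.harm_score).action

# The dilemma described above, with made-up harm scores:
options = [Option("turn_left", 0.9),    # child would be struck
           Option("turn_right", 0.8),   # elderly person would be struck
           Option("go_straight", 0.7)]  # driver would die
print(choose_action(options, handover_possible=False))  # "go_straight"
```

The point of the sketch is not the particular numbers but that someone must choose them in advance, which is exactly where the cultural and moral questions above enter.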
The ethical problem deepens when one attends to the conflicts of interest that surface in mundane situations such as crosswalks, turns, and intersections. One such question is who should be held accountable for accidents caused by self-driving cars, and why. Moral preferences on these fundamental questions vary across the world, and recent findings reveal several competing arguments. Depending on the country, the reasoning about who should be held accountable, or liable in a lawsuit, when a self-driving car causes an accident remains under stiff contention: the majority claim the car manufacturer is responsible, while a few argue for the operator’s fault or blame a pedestrian who enters the road illegally.
My argument and the counter-arguments
Some researchers say it is the human’s fault, and vehicle manufacturers are not always wrong. My case is that operators are more liable for such accidents than car manufacturers. I believe that automobile makers do put in place measures aimed at minimizing the accidents associated with these systems. If operators strictly adhered to all the standards, accidents caused by self-driving cars would be largely eliminated. Sometimes drivers should be liable for their own errors: the driver-assistance system needs to be updated on time, like a phone application, and people who forget or refuse to update the software may bring trouble upon their cars.
What’s more, if the driver ignores the system’s warnings, he should be liable for himself. “Brown was audibly warned six times to keep his hands on the steering wheel. He was also warned visually, seven times, on his Tesla’s dashboard.” (Fung, 2017) Brown ultimately died in the crash. The system had given him many chances to remain safe, but he chose to give them up.
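The escalation described in that case can be pictured as a simple monitoring loop (a hypothetical sketch; the thresholds, function names, and sensor interface are assumptions for illustration, not Tesla’s actual implementation):

```python
# Hypothetical driver-monitoring loop with escalating warnings.
# All thresholds and callback names are illustrative assumptions,
# not any manufacturer's real implementation.

import time

VISUAL_AFTER_S = 10     # assumed: dashboard warning after 10 s hands-off
AUDIBLE_AFTER_S = 25    # assumed: audible chime after 25 s hands-off
SAFE_STOP_AFTER_S = 60  # assumed: begin slowing the car after 60 s

def monitor(hands_on_wheel, visual_warning, audible_warning, begin_safe_stop):
    """Escalate warnings the longer the driver's hands stay off the wheel."""
    hands_off_since = None
    while True:
        if hands_on_wheel():
            hands_off_since = None           # driver re-engaged: reset
        else:
            now = time.monotonic()
            hands_off_since = hands_off_since or now
            elapsed = now - hands_off_since
            if elapsed >= SAFE_STOP_AFTER_S:
                begin_safe_stop()            # last resort: slow the car
                return
            elif elapsed >= AUDIBLE_AFTER_S:
                audible_warning()            # like the six chimes Brown heard
            elif elapsed >= VISUAL_AFTER_S:
                visual_warning()             # like the seven dashboard alerts
        time.sleep(0.1)
```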
To encourage the development of self-driving cars, the U.S. House of Representatives passed the SELF DRIVE Act in September 2017. It seeks to ensure the safety of automated vehicles, establish cybersecurity and privacy plans, and grant certain safety exemptions (De Bruyne & Werbrouck, 2018). With the government’s help, the testing process can be favorable and prosperous. As such, self-driving innovation should shift its focus to the safety of both the operator and the manufacturer, to end the threat of the endless lawsuits that accidents have raised against both sides. However, the Act should be revised as technology changes. NHTSA states that the evolution is rapid and the uncertainties are numerous; policy should be made together with insurers, manufacturers, consumer groups, and others (Anderson, Nidhi, & Stanley, 2014). As long as policy keeps pace with the technology’s development, the inventions will have an excellent environment in which to mature. Various problems will emerge gradually over time, and amendments to the policy can help close the loopholes.
Conclusion
I believe that my argument, that the operator of a self-driving car is more liable for an accident caused by the vehicle than the manufacturer, is the more convincing one, because the operator is usually more directly linked to the accident than the manufacturer is. Besides, in most cases it is difficult to determine the manufacturer’s fault or liability after an accident, which leaves the operator even more liable. The counter-argument, however, is limited by this problem of determination. People need to trust the self-driving technique and understand the technology of the future. Once fully autonomous vehicles are created, the world will become completely different: the number of accidents will fall while the mobility of disabled people rises; people’s lives will become efficient, traffic will disappear, and with no more drivers, everyone will be able to do what they want in the car. However, I do acknowledge the argument, held by many, that self-driving car accidents are mostly attributable to manufacturers’ faults, although it is less convincing on the facts.
References
Adnan, N., Nordin, S. M., Bahruddin, M. A. bin, & Ali, M. (2018). How can trust drive forward the user acceptance of the technology? In-vehicle technology for an autonomous vehicle. Transportation Research Part A: Policy and Practice, 118, 819–836. https://doi.org/10.1016/j.tra.2018.10.019
Anderson, J. M., Nidhi, K., & Stanley, K. D. (2014). Autonomous vehicle technology: A guide for policymakers. Retrieved from https://ebookcentral.proquest.com
De Bruyne, J., & Werbrouck, J. (2018). Merging self-driving cars with the law. Computer Law & Security Review, 34(5), 1150–1153. https://doi.org/10.1016/j.clsr.2018.02.008
Fry, H. (2018, October 24). The Road to Self-driving Cars Is Full of Speed Bumps. Retrieved February 9, 2020, from https://www.discovermagazine.com/technology/the-road-to-self-driving-cars-is-full-of-speed-bumps
Fung, B. (2017, June 20). The driver who died in a Tesla crash using Autopilot ignored at least seven safety warnings. Retrieved February 9, 2020, from https://www.washingtonpost.com/news/the-switch/wp/2017/06/20/the-driver-who-died-in-a-tesla-crash-using-autopilot-ignored-7-safety-warnings/
Leggett, T. (2018, May 22). Who is to blame for ‘self-driving car’ deaths? Retrieved February 8, 2020, from https://www.bbc.com/news/business-44159581
Nyholm, S., & Smids, J. (2016). The Ethics of Accident-Algorithms for Self-Driving Cars: an Applied Trolley Problem? Ethical Theory and Moral Practice, 19(5), 1275–1289. https://doi.org/10.1007/s10677-016-9745-2
Oliver, N., Potočnik, K. P. T., & Calvard, T. (2018, August 14). To Make Self-Driving Cars Safe, We Also Need Better Roads and Infrastructure. Retrieved February 9, 2020, from https://hbr.org/2018/08/to-make-self-driving-cars-safe-we-also-need-better-roads-and-infrastructure
Reilly, K. (2016, July 30). Mercedes Pulled a Self-Driving Car Ad Because It’s Not a Self-Driving Car. Retrieved February 8, 2020, from https://time.com/4431956/mercedes-/
Rhim, J., Lee, G.-B., & Lee, J.-H. (2020). Human moral reasoning types in autonomous vehicle moral dilemma: A cross-cultural comparison of Korea and Canada. Computers in Human Behavior, 102, 39–56. https://doi.org/10.1016/j.chb.2019.08.010