
The Negative Effects of Artificial Intelligence


AI (Artificial Intelligence) is changing nearly every aspect of human life, including the economy, employment, privacy, warfare, healthcare, ethics, and communication. Nonetheless, the world has yet to see where this advancement leads in the long run: whether it is directing humankind towards a better world or towards tragedy. All technologies have advantages and drawbacks, and for a technology to survive in the market, the benefits must outweigh the disadvantages. For AI, however, the world is not yet convinced that the positive impacts will keep overshadowing the adverse effects in the long run; if they do not, the world faces a severe crisis. Looking at the current situation, on one side, society appears to embrace the transformations brought about by the technology, whether smart healthcare or autonomous automobiles. On the other, people frequently object to governments over taxes, redundancy, and privacy, among other concerns. As the advancement of AI speeds up, more autonomous systems are being designed and are replacing human labor. This essay covers the major spheres in which human life is extensively affected by Artificial Intelligence in negative ways.

AI is doing considerable good and will continue to offer many advantages to the contemporary world; however, along with the good there will undoubtedly be adverse effects. Drones are a prime example of Artificial Intelligence affecting people's privacy in private spaces, and the issue grows more critical as these autonomous technologies become more sophisticated. In her article "Personal Drones, AI and Our Privacy," Kristen Thomasen notes that in various instances, encounters with unidentified drones have provoked visceral and at times fierce reactions from individuals who feel observed (Thomasen 1). The author further notes that the use of drones by government organizations and leading firms such as Amazon raises a quantitatively higher risk to the privacy of citizens.


In many cases, intrusions by friends and foes, lovers and neighbors, greatly facilitated by robotic, AI-enabled technologies, may feel qualitatively worse. Thomasen further remarks that it is a mistake to neglect the social effects of personal drone technology in favor of concentrating solely on physical safety (Thomasen 1).

The article also offers solutions to the problem. The law can direct the course of innovation by nurturing public approval of modern technology while dealing with some of its adverse social impacts. Thomasen presents two specific policy approaches to the privacy concerns raised by drones. First, drone regulation can help address accountability and transparency; the anticipated new drone regulations integrate several mechanisms for achieving this, including licensing requirements and the registration of drones. Nonetheless, Thomasen argues that it is also time for drone regulation to move beyond its sole focus on defending physical safety and address the security people expect from having a sense of control over their personal information and personal spaces. It is imperative that people discuss the social effects of AI technology, precisely because more sophisticated forms of AI-enabled robots are on the horizon.

In his presentation "What Happens When Our Computers Get Smarter Than We Are?", Nick Bostrom discusses various adverse effects of AI. Bostrom states that several of his colleagues think the world is on the verge of something that might bring about profound change: machine superintelligence. He describes early AI as putting commands in a box: human programmers would painstakingly handcraft items of knowledge. Such systems are useful for some purposes, but they are so brittle that they cannot be scaled.

A significant challenge of AI is the problem of value loading. Bostrom claims that it is hard to create an Artificial Intelligence that uses its intelligence to learn what humans value (1). Likewise, it is difficult to design an AI whose motivation system is built so that it is motivated to pursue human values or to perform actions it predicts humans would approve of. The presenter acknowledges that several open questions still need to be solved, for example, how to handle logical uncertainty. According to Bostrom, designing a superintelligent Artificial Intelligence is already a hard challenge (1), and building a superintelligent AI that is also safe involves additional challenges on top of that. The danger is that someone works out how to crack the first problem without also solving the additional challenge of ensuring perfect safety. Bostrom believes the answer is to figure out how to create superintelligent AI such that even if it escapes, it is still safe because it is fundamentally on the human side, sharing human values. However, the speaker sees no way around this difficult problem.

Thus, according to Bostrom, humans ought to develop a solution to this control problem in advance, so that it is available when it is needed. It may be that the whole control problem cannot be solved now, since some elements can only be addressed once the details of the architecture are known. Nonetheless, the more of the control problem that is solved in advance, the better the odds that the transition to the age of Artificial Intelligence will go smoothly.

In the article "Trash Talk Hurts, Even When It Comes from a Robot," the author notes that participants playing against Pepper, a commercially available humanoid robot, performed worse when the robot discouraged them and better when it encouraged them. The article further reveals that the forty research participants were relatively sophisticated and fully aware that a machine was the source of their discomfort. One participant noted that he was not amused by what the robot was saying, but he could not blame it because it was programmed to talk that way. The author further states that in circumstances such as online shopping, humans and robots might not share the same goals, and argues that a machine's capacity to provoke responses could have implications for automated learning and mental health treatment.

Artificial Intelligence also harms preschoolers. LaFrance claims that YouTube's recommendation algorithm makes preschool children fixated on the platform, predisposing kids to become obsessed with relatively narrow interests such as automobiles, ice cream, elephants, and the moon, among other things (LaFrance 1).

In conclusion, in the pursuit of sophistication, humans have continuously developed and refined a range of technologies. The advancement of Artificial Intelligence and algorithmic systems in government and society presents new threats. The broad applicability of AI systems means that a wide swath of spheres will be affected and is potentially exposed to new and unexpected failure modes. The affected areas include education, health, privacy, and security.

 

Works Cited

Bostrom, Nick. "What Happens When Our Computers Get Smarter Than We Are?" TED, 2015, https://www.ted.com/talks/nick_bostrom_what_happens_when_our_computers_get_smarter_than_we_are/transcript?referrer=playlist-talks_on_artificial_intelligen.

Carnegie Mellon University. "Trash Talk Hurts, Even When It Comes from a Robot: Discouraging Words from Machines Impair Human Gameplay." ScienceDaily, 19 Nov. 2019, www.sciencedaily.com/releases/2019/11/191119075309.htm.

LaFrance, Adrienne. "The Algorithm That Makes Preschoolers Obsessed with YouTube Kids." The Atlantic, 2017, https://www.theatlantic.com/technology/archive/2017/07/what-youtube-reveals-about-the-toddler-mind/534765/.

Thomasen, Kristen. "Personal Drones, AI and Our Privacy." Policy Options, 2018, https://policyoptions.irpp.org/magazines/february-2018/personal-drones-ai-and-our-privacy/.

 
