Should Parents Monitor Their Children’s Social Media?
TOPIC 2
Regarding the censoring of certain content on social media, I believe responsibility falls on different people depending on the situation. For example, if a parent has allowed their young teen (aged 13–17) to use social media, then it is mainly the parent’s responsibility to make sure the teen does not access adult content. Many studies and articles discuss the effects on young people of viewing adult content, such as damaged relationships and social anxiety later in life. One article reported that “graphic pornography is becoming a normal part of life for children as young as 11” (Goldhill, 2014). This has changed drastically compared with earlier years, as older generations did not have such easy access to this content because social media was not as widespread and available. Some social media sites, such as Twitter and Facebook, offer an option that allows users to mute words, so posts containing them are not shown even when searched for. This is just one of many ways an adult could prevent their young teens from accessing certain content. Because these kinds of precautions exist, I believe it is an adult’s job to monitor what their children view on social media if they do not want their child to become part of that statistic. An article on this issue states: “In the virtual world, though, ignorance can be the worst crime. Educate kids and teens to use common sense while on social media. And the monitoring method is up to you” (Should Parents Monitor Their Children’s Social Media? – TeenSafe, 2017), a statement I agree with.
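The word-muting feature described above can be illustrated with a small, purely conceptual sketch. The function name and data shapes here are my own assumptions for illustration, not any platform’s real API:

```python
# Conceptual sketch of a "muted words" feed filter (illustrative only).

def filter_feed(posts, muted_words):
    """Return only the posts that contain none of the muted words."""
    muted = {w.lower() for w in muted_words}
    visible = []
    for post in posts:
        words = post.lower().split()
        if not any(m in words for m in muted):
            visible.append(post)
    return visible

feed = ["Check out this new game", "Explicit adult clip here"]
print(filter_feed(feed, ["adult", "explicit"]))  # only the first post remains
```

In a real platform the check would run server-side against every post in a timeline and handle punctuation and word variants, but the principle is the same: the muted list is applied before content ever reaches the user.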
Another thing to note is that mental health problems are increasingly common among young people who use social media. It has been found that 20% of adolescents may experience a mental health problem in any given year, with 50% of mental health problems established by age 14 and 75% by age 24 (Mental health statistics: children and young people, 2016). Those figures point to what I believe is a growing problem. In this situation, both the developers of social media platforms and the influencers on them could try to help. One way, if possible, would be for developers to implement algorithms that promote mental health resources according to the age of the user who has signed up; this would target the specific age groups in which mental health problems are rising, based on the article cited above. I believe that seeing these promotions could help a large proportion of young people, simply by making the option of help visible during daily use. In addition, if a user posts multiple tweets about, for example, death or anxiety, the system could recognise this and reply with contact details for mental health clinics or helplines, although it is then up to the user to actually seek help. I believe these two measures alone could help in a variety of ways and would be relatively easy to implement.
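The keyword-response idea above can be sketched as a toy example. The terms, the threshold, and the support message are all my own assumptions, not any platform’s actual system:

```python
# Toy sketch of flagging repeated concerning posts (all values are assumed).

CONCERNING_TERMS = {"death", "anxiety", "hopeless"}
THRESHOLD = 3  # how many matching recent posts trigger a response

def check_recent_posts(posts):
    """Return a support message if enough recent posts mention concerning terms."""
    hits = sum(
        1 for post in posts
        if any(term in post.lower() for term in CONCERNING_TERMS)
    )
    if hits >= THRESHOLD:
        return "It sounds like things are hard right now. Help is available at <helpline>."
    return None

posts = ["so much anxiety today", "thinking about death a lot", "feeling hopeless"]
print(check_recent_posts(posts))  # threshold reached, so a message is returned
```

A production system would need far more care (context, sarcasm, false positives, privacy), which is why the essay argues it should be paired with human moderation rather than replace it.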
With image-focused social media apps such as Instagram, I believe responsibility falls on both the parents and, sometimes, the influencers on those apps. A UK survey reported by the NHS found that “Instagram is rated as the worst social media platform when it comes to its impact on young people’s mental health” (NHS Choices, 2017). This is largely because children and teens are highly impressionable at these ages: once they believe a particular image represents the social standard, they will often try to look like it, and failing to do so can breed depression or other mental health issues. Here it is partly up to parents to make sure their child understands that many images on apps such as Instagram are edited, making them unnatural and not achievable in real life, and to ensure their child’s mental health is not being harmed by what they see. It is also the responsibility of influencers on these apps to let their younger audiences know that their images are often edited, and some should also promote mental health wellbeing. This would show younger followers who suffer from mental health issues not only that help exists, but also that not everything they see on social media is attainable. I believe that something as simple as that could have a huge influence on the younger people who use social media.
Overall, responsibility for these issues varies with the situation. Some social media apps do already have measures in place to prevent some of these problems, but ultimately it has to be judged case by case.
References
- Goldhill, O. (2014) Why teenagers’ obsession with porn is creating a generation of 20-year-old virgins, Telegraph.co.uk. Available at: https://www.telegraph.co.uk/women/sex/11045859/Why-teenagers-obsession-with-porn-is-creating-a-generation-of-20-year-old-virgins.html (Accessed: January 28, 2019).
- Mental health statistics: children and young people (2016) Mental Health Foundation. Available at: https://www.mentalhealth.org.uk/statistics/mental-health-statistics-children-and-young-people (Accessed: January 28, 2019).
- NHS Choices (2017) Instagram “ranked worst for mental health” in teen survey – NHS, Department of Health. Available at: https://www.nhs.uk/news/food-and-diet/instagram-ranked-worst-for-mental-health-in-teen-survey/ (Accessed: January 28, 2019).
- Should Parents Monitor Their Children’s Social Media? – TeenSafe (2017) TeenSafe. Available at: https://www.teensafe.com/blog/parents-monitor-childrens-social-media/ (Accessed: February 12, 2019).
TOPIC 2
Self-harm among teenagers is a growing problem in today’s youth. An NHS report found that there has been a “68% rise in rates of self-harm among girls aged 13-16 since 2011” (NHS Choices, 2017). This statistic is not only worrying but also linked to social media, because social media carries a multitude of posts in which people share images of their own scars and other self-harm content. So the main question is: who should be governing these posts?
It could be said that responsibility lies with the casual users of these social media apps. On apps such as Twitter and Instagram, users can post whatever they wish, which creates plenty of opportunity to post self-harm images, and many other people will then see those posts. At that moment a viewer can choose to do something about it or scroll past. I believe most users choose not to get involved and simply scroll past as if they had not seen it. Still, there are hundreds of thousands of people who do offer support, comment with links to helplines, or report these posts; all of these acts might be seen by the poster, who could then go and seek help. The counter-argument, which is also valid and worth considering, is that it is not the average user’s place to act on what they see, as intervening could have a negative effect on the situation. One last thing to note is that some users have created pages dedicated to stopping self-harm in general; these pages search for such posts and try to offer help as soon as possible (Stop Self Harm (@StopSelfHarm) | Twitter, 2010).
References
- NHS Choices (2017) Worrying rise in reports of self-harm among teenage girls in UK – NHS, Department of Health. Available at: https://www.nhs.uk/news/mental-health/worrying-rise-reports-self-harm-among-teenage-girls-uk/ (Accessed: February 22, 2019).
- Stop Self Harm (@StopSelfHarm) | Twitter (2010) Twitter.com. Available at: https://twitter.com/StopSelfHarm?lang=en (Accessed: February 22, 2019).
It is also, in large part, the responsibility of the social media apps’ developers and owners to govern what is posted, since they need to monitor all the harmful content appearing on their platforms. I believe that unless self-harm images appear in educational posts on respected informative pages (such as the NHS or a helpline), they should be moderated by the developers. A recent article discussed how Instagram is now trying to ban graphic self-harm images for a multitude of reasons; its central quote came from a father who said of his daughter, who had recently taken her own life, “Instagram helped her take her life” (Instagram to ban graphic self-harm images: Here’s how such posts affect teens, 2019). Instagram already has several measures in place that can help limit self-harm posts, such as its help/action page, the ability to report self-harm images, and the use of tags to sift through these posts (Self-Injury | Instagram Help Centre, 2018). On both Instagram and Twitter, users can hashtag their posts, which groups them and makes them searchable. This allows users to post under hashtags such as “#SelfHarm” (where the majority of these posts are found) for many people to see. Some people believe these tags should not be allowed, but I think removing them would create a new problem: the pages that exist to help these people use the same hashtag, so removing it would destroy their wide reach. Many people also use the hashtag sarcastically, in a humorous context, when they injure themselves trivially, and they would no longer be able to do this if it were removed. I believe the developers of these apps should set aside moderation teams that go through these posts, both removing the harmful ones and sending the users links and numbers that provide help.
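The moderation approach argued for above, keeping the hashtag but routing tagged posts to human reviewers while leaving verified support accounts alone, could be sketched roughly as follows. The allowlist, data shapes, and rule are my own illustrative assumptions, not how Instagram or Twitter actually work:

```python
# Rough sketch of hashtag-based triage for human review (illustrative only).

SUPPORT_ACCOUNTS = {"StopSelfHarm", "NHSuk"}  # assumed allowlist of help pages

def triage(posts):
    """Split tagged posts into a human review queue and posts left up."""
    review_queue, left_up = [], []
    for post in posts:
        tagged = "#selfharm" in post["text"].lower()
        if tagged and post["author"] not in SUPPORT_ACCOUNTS:
            review_queue.append(post)  # a moderator decides: remove, or send help links
        else:
            left_up.append(post)
    return review_queue, left_up

posts = [
    {"author": "StopSelfHarm", "text": "Struggling? #SelfHarm support here"},
    {"author": "anon123", "text": "my scars #selfharm"},
]
queue, kept = triage(posts)
print(len(queue), len(kept))  # one post queued for review, one left up
```

The point of the allowlist is exactly the essay’s argument: a blanket ban on the tag would silence the support pages along with the harmful posts, whereas triage preserves their reach.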
If these apps do not take accountability for these posts, I believe the issue will grow into a bigger problem for them and, in the worst case, could result in multiple court cases over this content.
References
- Instagram to ban graphic self-harm images: Here’s how such posts affect teens (2019) The News Minute. Available at: https://www.thenewsminute.com/article/instagram-ban-graphic-self-harm-images-here-s-how-such-posts-affect-teens-96946 (Accessed: February 22, 2019).
- Self-Injury | Instagram Help Centre (2018) Instagram.com. Available at: https://help.instagram.com/553490068054878 (Accessed: February 22, 2019).
TOPIC 3
On May 25th, 2018, new data protection legislation came into force in Europe: the GDPR, considered “the world’s strongest data protection rules” (Burgess, 2017). The law was needed because the previous data protection rules were designed in the 1990s, and as technology developed over time those rules were unable to keep up. The GDPR “alters how businesses and public sector organisations can handle the information of their customers” and “boosts the rights of individuals and gives them more control over their information” (Burgess, 2017). It makes companies more responsible and accountable for handling people’s personal information, which means implementing data protection policies, conducting data protection impact assessments, and keeping relevant documentation on how data is processed. For example, “For companies that have more than 250 employees, there’s a need to have documentation of why people’s information is being collected and processed, descriptions of the information that’s held, how long it’s being kept for and descriptions of technical security measures in place” (Burgess, 2017). In 2017 alone, “around 46% of businesses have now suffered a digital attack. So, with 5.5 million companies in the UK, that suggests around 2.5 million may have been hit” (The UK’s biggest cybersecurity and data breaches in 2017 (so far) | PolicyBee, 2017). Seeing how much of an issue this was, I believe these new laws were needed and had to be introduced quickly.
One example of how GDPR would have had a greater impact on a breach is the March 2017 case in which “Mobile phone company Three suffered a major breach when an employee’s password was stolen, and 200,000 customers’ data was compromised”. With GDPR in place, Three would have been held fully accountable, and because this would count as a more severe breach, “for more severe breaches, the maximum fine is €20 million or four per cent of a company’s annual revenue, whichever is greater” (dcomisso, 2019). A fine of that size would have pushed Three to ensure this never happened again.
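The fine rule quoted above, the greater of €20 million or 4% of annual revenue, can be shown with a small calculation of how the cap scales with company size. The revenue figures are arbitrary examples, not Three’s actual numbers:

```python
# The GDPR maximum-fine rule for severe breaches, per the quote above:
# the greater of EUR 20 million or 4% of annual revenue.

def max_gdpr_fine(annual_revenue_eur):
    """Maximum fine for a severe breach: max(EUR 20m, 4% of revenue)."""
    return max(20_000_000, 0.04 * annual_revenue_eur)

print(max_gdpr_fine(100_000_000))    # smaller firm: the EUR 20m floor applies
print(max_gdpr_fine(2_000_000_000))  # larger firm: 4% of revenue exceeds the floor
```

This is what makes the penalty bite at every scale: small companies face a fixed floor of €20 million, while for large companies the fine grows in proportion to revenue.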
References
- Burgess, M. (2017) What is GDPR? The summary guide to GDPR compliance in the UK, Wired.co.uk. WIRED UK. Available at: https://www.wired.co.uk/article/what-is-gdpr-uk-eu-legislation-compliance-summary-fines-2018 (Accessed: February 23, 2019).
- dcomisso (2019) GDPR penalties and enforcement, nibusinessinfo.co.uk. Available at: https://www.nibusinessinfo.co.uk/content/gdpr-penalties-and-enforcement (Accessed: February 23, 2019).
- The UK’s biggest cybersecurity and data breaches in 2017 (so far) | PolicyBee (2017) Policybee.co.uk. Available at: https://www.policybee.co.uk/blog/uk-biggest-cybersecurity-and-data-breaches-in-2017 (Accessed: February 23, 2019).
TOPIC 3
Regarding the issue of securing data, I believe there are many cases in which data on the internet is loosely protected and inevitably breached. I believe this because reports and articles describing data breaches and leaks are published very frequently. With millions of users entering their data on the internet daily, it is inevitably difficult to fully secure all this information against hackers, simply because of the sheer volume that must be protected.
On 22 February 2019, the BBC published an article on this issue. It described how 14,000 people in Singapore had their HIV-positive status both breached and made public (Leyl, 2019). The article outlined that “The government has blamed the leak on the American partner of a local doctor, who had access to the records kept on all known HIV-positive people in Singapore” (Leyl, 2019). Worryingly, this person has not yet been definitively confirmed as the source, meaning the direct cause of the breach has not been established (as with many other breaches). The article also states that “Authorities say the leak has been contained, but this is little relief to a vulnerable community in a society that continues to stigmatise the condition” (Leyl, 2019). This highlights one of the overlooked and underestimated side effects of such leaks, a problem common to many breaches because the information exposed is usually personal or confidential.
When private or confidential information like this is breached or leaked, I believe the biggest issue is not only the weak security around something as personal as health records, but also that the people whose information is made public are rarely offered any consolation or support afterwards, whatever the severity of the situation. I believe this is because organisations feel there is little they can do once a large mass of people’s information has leaked, beyond sealing it away as fast as possible.
In this instance, though, I believe the breach could have been avoided. It was the American partner of a local doctor who had access to the records of people in Singapore, which suggests that Singapore’s health system needs a more restricted system governing who can access what data, especially when information this confidential can be accessed so easily by someone who is not even a local practitioner. Such restrictions could have eliminated the possibility entirely, as the partner would not have had easy access to the information. The only positive I see in this instance is that the leak was contained effectively and efficiently.
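The stricter access control argued for above could look something like the toy check below: every record access is tested against the requester’s role and country of registration. The roles and the rule itself are illustrative assumptions on my part, not a description of Singapore’s actual system:

```python
# Toy role- and country-based access check for health records (illustrative only).

ALLOWED_ROLES = {"treating_doctor", "health_ministry_auditor"}  # assumed roles

def can_access_record(requester, record):
    """Allow access only to approved roles registered in the record's country."""
    return (
        requester["role"] in ALLOWED_ROLES
        and requester["registered_in"] == record["country"]
    )

doctor = {"role": "treating_doctor", "registered_in": "SG"}
partner = {"role": "partner_of_doctor", "registered_in": "US"}
record = {"country": "SG", "type": "HIV_status"}
print(can_access_record(doctor, record), can_access_record(partner, record))  # True False
```

Under a rule like this, the doctor’s foreign partner would fail both tests (wrong role, wrong country), which is exactly the gap the essay points to.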
This is just one prime example of how insecure data on the internet can be and how a breach can affect thousands of people, and it is why I do not believe data on the internet is being secured well enough.
References
- Leyl, S. (2019) “Singapore HIV data leak shakes a vulnerable community,” BBC News, 22 February. Available at: https://www.bbc.co.uk/news/world-asia-47288219?intlink_from_url=https://www.bbc.co.uk/news/topics/c0ele42740rt/data-breaches&link_location=live-reporting-story (Accessed: February 23, 2019).