Social media video platforms should step up safeguards for their teenage users, enforce age limits and increase monitoring of harmful content to make them kinder, safer places for young people, students say.
Ellie Hattam, one of the winners of a joint competition run by FT Schools and Ofcom, the UK media, telecoms and internet regulator, called for bans on fake accounts, hidden identities and harmful hashtags; tougher age verification; and “child friendly” versions for easier monitoring of content.
She stressed that social media had benefits in connecting with family and friends, which were essential during the pandemic, but argued that greater controls were required to limit abuse, most notably of teenagers.
“This is an age where you barely even know yourself, and yet you are swarmed with constant videos promoting unhealthy lifestyles, unrealistic body images, promotion of self-harm and an array of cruel and racist comments,” she warned.
She said tougher measures were less extreme than “allowing easily influenced children to be exposed to the horrors that lie within the internet, targeted by never-ending harmful or derogatory information with just the single tap of a button”.
Billy Tarr, the second winner, argued that the current system of reporting harmful content was failing, and suggested the introduction of small “nudge” incentives to reward users who notify system administrators of offensive material.
He called for a points-based system so “users who actively participate in the protection of the online community are prioritised in search algorithms and have their posted content promoted above others with lower community protection scores”.
He stressed that such a system should itself be verified, adaptive and backed by extensive advertising campaigns to promote more effective moderation.
In the winning video entry, Dom Neesam put the responsibility on viewers, saying they should learn how algorithms keep them watching; be mindful of their mood before writing; “report not respond” to offensive comments; and choose their words wisely because “what goes on the internet stays on the internet” and can be seen by employers and schools.
He stressed that social media was valuable to stay in touch with friends and family, and allowed users to read the news, be entertained and share interests and opinions. But for those who used platforms just to stave off boredom, he suggested “just delete”.
Runners-up in the competition said the platforms should be required to include prominent messages promoting kindness and tolerance, and to share resources and icons so users could seek help and support.
Some suggested automatically generated warnings requiring users to confirm before posting potentially offensive content or too much personal information; a “report abuse” button alongside every comment; and a “protection mode” to remove videos with inappropriate words or tags.
One student, warning about body dysmorphia, called for regulation to stop filters changing body shapes in images, or at least to make clear when a filter had been used, and to remove “like” counters or hide them by default.
Other ideas included changes to the school curriculum to teach more about bullying and privacy; prompting users with pre-written messages in kinder language, such as “well done” or “that was great”; and incentives to post positive messages.
One argued that platforms could automatically cut off access after a set period and encourage users to do something else, while another suggested banning behaviour-based recommendation algorithms entirely to reduce polarisation and dissent. Another said users reported for abuse should be forced to watch videos on the consequences of cyberbullying before they could continue on the platform.
The full winning blogs and video are on the Ofcom competition site. The winners each received a £100 voucher and will spend a day at Ofcom to find out more about its work.