Social media sites face tougher age checks under new Ofcom rules
Ofcom has warned social media sites they could be banned entirely for under-18s if they fail to comply with new online safety rules.
The media regulator has published a draft children’s safety code of practice, which requires social media firms to put more robust age-checking measures in place. Ofcom boss Dame Melanie Dawes warned that any company that broke the rules could have its minimum user age raised to 18.
First, online services must establish whether children are likely to access their site – or part of it. Second, if children are likely to access it, the company must carry out a further assessment to identify the risks its service poses to children, including the risks that come from the design of its services, functionalities and algorithms. It then needs to introduce various safety measures to mitigate these risks.
Ofcom chief executive Dame Melanie Dawes says: “We want children to enjoy life online. But, for too long, their experiences have been blighted by seriously harmful content which they can’t avoid or control. Many parents share feelings of frustration and worry about how to keep their children safe. That must change.
“In line with new online safety laws, our proposed codes firmly place the responsibility for keeping children safer on tech firms. They will need to tame aggressive algorithms that push harmful content to children in their personalised feeds and introduce age checks so children get an experience that’s right for their age.”
But parents of children who died after exposure to harmful online content have described Ofcom’s new rules as “insufficient”. The child online safety campaigner Ian Russell, the father of 14-year-old Molly Russell, who took her own life in November 2017 after viewing harmful material on social media, said more still needed to be done to protect young people from online harm.
In his role as chair of online safety charity the Molly Rose Foundation, Russell says:
“Ofcom’s task was to seize the moment and propose bold and decisive measures that can protect children from widespread but inherently preventable harm.
“The regulator has proposed some important and welcome measures, but its overall set of proposals need to be more ambitious to prevent children encountering harmful content that cost Molly’s life.”
Ofcom says its consultation proposes more than 40 safety measures that services would need to take – all aimed at making sure children enjoy safer screen time when they are online. These include:
- Robust age checks – Ofcom’s draft code expects services to know which of their users are children in order to protect them from harmful content. In practice, this means that all services which don’t ban harmful content should introduce highly effective age checks to prevent children from accessing the entire site or app, or to age-restrict parts of it for adults-only access.
- Safer algorithms – under Ofcom’s proposals, any service that recommends personalised content to users and poses a high risk of harmful content must design its algorithms to filter out the most harmful content from children’s feeds and downrank other harmful content (a simplified illustration appears after this list). Children must also be able to provide negative feedback so the algorithm can learn what content they don’t want to see.
- Effective moderation – all services, such as social media apps and search services, must have content moderation systems and processes to take quick action on harmful content. Large search services should use a ‘safe search’ setting for children, which can’t be turned off and must filter out the most harmful content. Other broader measures require services to set clear policies on what kind of content is allowed and how content is prioritised for review, and to ensure content moderation teams are well-resourced and trained.
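To make the algorithm proposal concrete, the sketch below shows, in very simplified form, what filtering and downranking harmful content in a child’s personalised feed could look like. It is an illustration only: the class names, thresholds and scoring logic are assumptions invented for this example, not anything specified in Ofcom’s draft code or used by any real platform.

```python
# Hypothetical sketch only: names and thresholds are invented for illustration
# and are not taken from Ofcom's draft code or any real recommender system.
from dataclasses import dataclass, field


@dataclass
class ContentItem:
    item_id: str
    relevance: float    # base recommendation score from the platform
    harm_score: float   # 0.0 (benign) to 1.0 (most harmful), from moderation


@dataclass
class ChildFeedRanker:
    block_threshold: float = 0.8      # "most harmful" content is removed entirely
    downrank_threshold: float = 0.4   # other harmful content is pushed down the feed
    downrank_penalty: float = 0.5
    disliked_ids: set = field(default_factory=set)  # negative feedback from the child

    def record_negative_feedback(self, item_id: str) -> None:
        """Let the child tell the system what they don't want to see."""
        self.disliked_ids.add(item_id)

    def rank(self, candidates: list[ContentItem]) -> list[ContentItem]:
        scored = []
        for item in candidates:
            if item.harm_score >= self.block_threshold:
                continue  # filter the most harmful content out of the child's feed
            score = item.relevance
            if item.harm_score >= self.downrank_threshold:
                score *= self.downrank_penalty  # downrank other harmful content
            if item.item_id in self.disliked_ids:
                score *= 0.1  # respect explicit "don't show me this" signals
            scored.append((score, item))
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [item for _, item in scored]


if __name__ == "__main__":
    ranker = ChildFeedRanker()
    ranker.record_negative_feedback("clip-2")
    demo = [
        ContentItem("clip-1", relevance=0.9, harm_score=0.9),  # blocked outright
        ContentItem("clip-2", relevance=0.8, harm_score=0.1),  # disliked, pushed down
        ContentItem("clip-3", relevance=0.6, harm_score=0.5),  # downranked
        ContentItem("clip-4", relevance=0.5, harm_score=0.0),  # shown normally
    ]
    print([item.item_id for item in ranker.rank(demo)])  # ['clip-4', 'clip-3', 'clip-2']
```

In practice, platforms would be judged on whether measures like these are effective at scale; the point of the sketch is simply that filtering, downranking and negative-feedback signals are distinct mechanisms, each of which appears in Ofcom’s proposals.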