Academics and OSA stakeholders say Ofcom needs to take a closer look at how the controversial legislation is enforced
Industry experts expressed both concern and sympathy for Ofcom, the Brit regulator overseeing the Online Safety Act, as questions mount over the legislation's effectiveness.
The UK's Communications and Digital Committee heard from academic and industry leaders this week about how the Online Safety Act (OSA) is being enforced; many of them criticized the regulator's recent comments.
Particular attention was given to Ofcom's earlier claim that, had it been in force at the time, the OSA could have prevented situations like the Southport riots that broke out in 2024.
The riots broke out in the aftermath of the Southport knife attack, in which three girls aged between six and nine were stabbed to death at a summer holiday dance class. The idea is that such regulations might have stopped the misinformation over the attacker's background, identity, and motive that stoked the violence and led rioters to attack local businesses and mosques.
The attacker was a British teenager, born in Cardiff to Rwandan parents who had moved to the UK before his birth, with a Christian background. Because the killer was underage at the time, police did not release information about him, and false claims spread by far-right-leaning individuals blamed immigrants and Muslims, sending mobs across the country to attack people and businesses they believed were linked to those groups. Rioters dismantled homeowners' brick walls for projectiles, set fire to cars at random, looted shops, smashed windows, and injured 50 police officers and three police dogs with thrown bricks.
Bernard Keenan, lecturer in law at University College London, said Ofcom's claims raise "dangerous expectations" about the OSA's capability.
He told the committee: "We're playing with questions of counterfactuals and causality, and I don't think anybody can really determine what difference it would have made. But, from the perspective of civil liberties and human rights, I am pretty concerned by this thinking and this statement from Ofcom.
"I don't think Ofcom is doing itself any favors by making this kind of claim."
He added: "Riots happened before social media, and there are structural and political reasons why they do erupt. The triggers are obviously changing, but to say that Ofcom could have been in a position to more directly influence events, I think raises dangerous expectations of what this act could do, how we might measure its success, and how we might circle back to try to [clamp down further].
"It implies a form of political causality that they're in no position to assert."
Beatriz Kira, assistant professor in law at the University of Sussex, poured more cold water on Ofcom's suggestions, noting that a false communications offence is only committed under the OSA if the sender knowingly transmits false information with the intention of causing serious harm.
That is outlined in section 179 of the OSA, and by the letter of the law, the misinformation spread shortly before the riots would not have been in scope of the regulations anyway.
"A lot of the people who shared the content were worried because precisely they thought it was real," she told the committee.
"They thought they act[ed] on the understanding that the information they were sharing and resharing was real and not false. So, they're not committing a false communications offence in that sense."
Most of the information shared on social media platforms such as X at the time would not have been illegal even if the OSA had been in force.
The real issue stemmed from the amplification of the content that led to the riots, which may not in itself violate the new legislation.
The discussion echoed a session the Science and Technology Committee chaired in July, which poked holes in the OSA and concluded it was unfit to counter misinformation.
Kira also responded to the committee's questions about Ofcom's proposals to change in-scope communications platforms' recommender systems, more commonly referred to as their algorithms, and whether they could limit the spread of illegal content online.
She said that Ofcom's proposals to reform recommender systems are, in fact, proposals to introduce new content moderation tools. In effect, the regulator was conflating the two.
Ofcom wants to prevent potentially harmful content from being pushed into users' feeds. It has not explained exactly how it will enforce this but, in Kira's view, the proposal implies a two-step process: first identifying potentially harmful content, then keeping it out of recommendations.
Kira said this is unrealistic given the way many platforms operate, and conflates the function of amplification with the content moderation tools that platforms also have at their disposal.
Recommender systems are specifically designed to serve users content that engages them, whether or not they like or agree with it.
"What Ofcom is actually describing [in] the proposal is something else," said Kira. "Ofcom is focusing on the description of something that is demotion as a content moderation tool."
Social media platforms already demote certain types of content, reducing the visibility of these posts with a decent degree of success.
"They have terms of services that tell users what type of content they reduce the volume of; they don't recommend, they demote.
"And this is really helpful too, but I don't think that Ofcom is quite engaging with the problem of recommender systems [which] here is very much focused on demotion."
Ofcom is the regulator tasked with enforcing the OSA, and while it advised on the drafting, lawmakers ultimately decided what the final product looked like.
"I see Ofcom is really in a really difficult position here in terms of regulating recommended systems, and this is because a lot of the concerns and the worries and the criticism towards the systems doesn't really focus on the legality or legality of content," said Kira.
The scope of the OSA also narrowed as it passed through the Houses of Parliament. A provision covering "awful but lawful" content, as the assistant professor put it - content that most people would not want to see but that is not illegal - was removed before the bill became law.
"So Ofcom is left with an Act now that only allows it to act within the very limited scope of harmful to children or illegal content, and that means that a lot of what could be done in terms of the systems approach to recommended systems, they're not able to do because they don't have that kind of leverage," Kira added.
For all of Ofcom's apparent faults in the academics' view, Keenan also defended the regulator against criticism that it is implementing the regulations too slowly.
One widely discussed workaround to the OSA's safeguards for children is VPN and proxy use, which has surged across the UK since the law came into force.
The children's commissioner, Rachel de Souza, has called for VPNs to be banned for children in an update to the legislation. She believes VPN providers should implement "highly effective age assurance" measures so that under-18s cannot simply sidestep the age filters on adult sites, for example.
While a VPN ban is unlikely to make it into legislation, the technology gives the very people the OSA intends to protect an easy way to bypass it entirely.
Dan Sexton, CTO at the Internet Watch Foundation, which provides databases that online platforms use to hash-match potentially illegal content against a known list of harmful or abusive media, said VPNs are not the issue.
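The hash-matching the IWF enables is conceptually simple, as the sketch below shows. The hash list here is made up, and real deployments typically use perceptual hashes (such as PhotoDNA) that survive resizing and re-encoding, whereas the plain SHA-256 in this illustration only catches byte-identical files.

```python
# Illustrative sketch of hash-list matching; the list contents are made up.
import hashlib

# In practice this set would be supplied by an organization like the IWF and
# contain hashes of known abusive media. Here it holds sha256(b"test") purely
# for demonstration.
KNOWN_BAD_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def should_block(upload: bytes) -> bool:
    """Return True if an uploaded file's hash appears on the known-bad list."""
    return hashlib.sha256(upload).hexdigest() in KNOWN_BAD_HASHES

print(should_block(b"test"))           # True: its hash is on the demo list
print(should_block(b"holiday photo"))  # False: not on the list
```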
Asked if there's anything the government can do to stop children using VPNs, Sexton suggested the committee was looking at the problem in the wrong way.
He said: "If children are using VPNs to bypass, then that's bad because that hasn't worked and they're accessing pornography. But if it stops half the children, that's half the children that were viewing pornography that are now not, and that's good.
"So, VPNs are not the reason to throw out age verification."
He went on to say: "In this case, I would say, if anything, it's the failure of other countries to protect children.
"It's not OK in France or Hungary or America for those children to be able to view pornography anymore... the damage to them is no different to the damage to the UK. So, actually, the issue there is actually not because we've done it first, but if other countries also prevented pornography from being viewed by children, then the VPN wouldn't help."
Sexton said that while it would be possible to enforce de Souza's recommendation that VPN providers implement the same age verification as in-scope platforms, users will always find a way around such guardrails.
"None of these [suggestions] are perfect," however, it could mean that a tech-savvy 17-year-old could find a way around this, while a nine-year-old remains protected.
Going further, he suggested, the government could look at which VPNs children are using and where they're finding that access - via app stores, browser plug-ins, and so on - and find ways to introduce friction at each stage.
"Just like age verification makes it harder for children, you could also make it harder for children to access a VPN.
"So if you did that on an app store, if that tackles 70 percent of the cases, you're already making it smaller and smaller and smaller. Just like children might have used fake IDs for 17-year-olds to enter bars and things, they're not the ones you're trying to protect, actually, it's really the child on the bus to school at age 11, which is being protected."
The Register contacted Ofcom for a response and will update the story if it provides one.
The committee will hear from civil society organizations on the same issues at a follow-up meeting next week. ®