MPs heard different views from the online harms regulator and the UK government about whether and how the Online Safety Act obliges platforms to deal with disinformation
By Sebastian Klovig Skelton, Data & ethics editor
Published: 02 May 2025 12:09
The UK government and online harms regulator Ofcom disagree about whether misinformation is covered by the UK's Online Safety Act (OSA).
On 29 April 2025, the Commons Science, Innovation and Technology Committee (SITC) questioned the UK's online harms and data regulators about whether the OSA is fit for purpose, as part of its inquiry into online misinformation and harmful algorithms.
As with previous sessions, much of the discussion focused on the spread of disinformation during the Southport riots in 2024. During the session, the SITC also grilled government minister Baroness Jones about the implementation of the OSA, which went into effect on 17 March 2025. However, the regulators and the government took different views about the applicability of the legislation to online misinformation and disinformation.
Mark Bunting, the director of online safety strategy delivery at Ofcom, for example, said that while the OSA contains provisions to set up an advisory committee on disinformation to inform the regulator's ongoing work, the act itself contains no provisions directly requiring platforms to deal with disinformation.
During the previous SITC session, in which the committee grilled X (formerly Twitter), TikTok and Meta, each of the firms contended that they already have processes and systems in place to deal with disinformation crises, and that the OSA would therefore not have made a notable difference.
Bunting added that while the OSA does not cover misinformation directly, it did "introduce the new offence of false communications with an intent to cause harm, and where companies have reasonable grounds to infer that there is intent to cause harm".
Committee chair Chi Onwurah, however, said it would be difficult to prove this intent, and highlighted that there are no duties on Ofcom to take action over misinformation, even if there are codes about misinformation risks.
Jones, however, contended that misinformation and disinformation are both covered by the OSA, and that it would have made a "material difference" if its provisions around illegal harms were in force at the time of the Southport riots.
"Our interpretation of the act is misinformation and disinformation are covered under the illegal harms code and the children's code," she told MPs.
Talitha Rowland, the Department for Science, Innovation and Technology's (DSIT) director for security and online harm, added that it can be challenging to determine the threshold for illegal misinformation, because it can be so broadly defined: "It can sometimes be illegal, it can be foreign interference, it can be content that incites hate or violence that's clearly illegal. It can also be below the illegal threshold, but nevertheless be harmful to children; that is captured."
In the wake of the riots, Ofcom did warn that social media firms would be obliged by the OSA to deal with disinformation and content that is hateful or provokes violence, noting that it "will put new duties on tech firms to protect their users from illegal content, which under the act can include content involving hatred, disorder, provoking violence or certain instances of disinformation".
Bunting concluded that platforms themselves want clarity over how to deal with disinformation within their services, and that Ofcom will continue to monitor case law developments around how the OSA can be interpreted in the context of misinformation, and update future guidance accordingly.
Updating the SITC on the progress made since the act went into force on 17 March, Bunting said that Ofcom has received around 60 risk assessments from platforms covering the harms that could occur on their services. Platforms are required to complete these assessments to demonstrate to Ofcom how they are tackling illegal harms and proactively working to find and remove such content.
Completing a risk assessment is the first step to compliance with Ofcom's Illegal Harms Codes and guidance, which were initially published on 16 December 2024.
The codes outline various safety measures providers must put in place, including nominating a senior executive to be accountable for OSA compliance; properly funding and staffing content moderation teams; improving algorithmic testing to limit the spread of illegal content; and removing accounts that are either run by, or on behalf of, terrorist organisations.
Companies at risk of hosting such content must also proactively detect child sexual exploitation and abuse (CSEA) material using advanced tools, such as automated hash-matching.
Ofcom previously said it will be holding a further consultation in spring 2025 to expand the codes, which will include looking at proposals on banning accounts that share child sexual abuse material, crisis response protocols for emergency events such as the August 2024 riots in England, and the use of "hash matching" to prevent the sharing of non-consensual intimate imagery and terrorist content.
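Hash matching of this kind works by computing a compact fingerprint of each uploaded file and checking it against a database of fingerprints of known illegal material. The short Python sketch below illustrates the principle only: it uses exact SHA-256 digests and a placeholder hash list, both assumptions made for illustration, whereas the tools Ofcom refers to typically rely on perceptual hashing, which can also match images that have been resized or re-encoded.

import hashlib

# Illustrative hash list of known material. In real deployments, hash sets are
# supplied by specialist bodies; this value is a placeholder for illustration.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def file_sha256(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_known_content(path: str) -> bool:
    """Return True if the file's fingerprint appears in the known-content hash list."""
    return file_sha256(path) in KNOWN_HASHES

Exact cryptographic hashes of this sort only catch byte-identical copies, which is why production systems generally favour perceptual hashes and pair any match with human review before enforcement action is taken.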