Ofcom calls on tech firms to start preparing for regulation now

The UK is preparing to become one of the first countries in the world to introduce comprehensive new laws aimed at making online users safer, while preserving freedom of expression. The Online Safety Bill will introduce rules for sites and apps such as social media, search engines and messaging platforms – as well as other services that people use to share content online.

Ofcom expects the Online Safety Bill to pass by early 2023 at the latest, with our powers coming into force two months later.[1]

Immediate action once powers kick in

Within the first 100 days of our powers taking effect, Ofcom will focus on getting the ‘first phase’ of the new regulation up and running – protecting users from illegal content harms, including child sexual exploitation and abuse, and terrorist content.[2] We will publish:

  • a draft Code of Practice on illegal content harms, explaining how services can comply with their duties to tackle them; and
  • draft guidance on how we expect services to assess the risk of individuals coming across illegal content on their services and associated harms.

To help companies identify and understand the risks their users may face, we will also publish a sector-wide risk assessment. This will include risk profiles for different kinds of services that fall in scope of the regime. We will also consult on our draft enforcement guidelines, transparency reporting and record-keeping guidance.

We will consult publicly on all these documents and expect to finalise them by spring 2024. Within three months of that, companies must have completed their risk assessments relating to illegal content, and they must be ready to comply with their duties in this area from mid-2024, once the Code of Practice has been laid in Parliament.

We are ready and able to evolve our timelines and plans, should the timing or substance of the Bill change.

Early engagement with high-risk services

As well as expecting tech firms to engage as we consult, we will also identify high-risk services for closer supervision.[3] The companies that run these sites or apps must be ready – as soon as our first set of powers comes into force in early 2023 – to explain their existing safety systems to us and, importantly, how they plan to develop them.

Ofcom will expect companies to be open with us about the risks they face and the steps they are taking to address them. We will want to know how they have evaluated those measures, and what more they might consider doing to keep users safe. We will also seek to understand users’ attitudes to those services, and consider evidence from civil-society organisations, researchers and expert bodies.

Where we consider that a platform is not taking appropriate steps to protect users from significant harm, we will be able to use a range of investigation and enforcement powers.

Action following secondary legislation

Some elements of the online safety regime depend on secondary legislation – for example, the definition of priority content that is harmful to children, and priority content that is legal but harmful to adults.[4] So duties in these areas will come into effect later and timings will be subject to change, depending on when secondary legislation passes.

We will move quickly to publish draft Codes of Practice and guidance on these areas shortly after secondary legislation passes. Once again, we will consult publicly on these before finalising them.[5]

Mark Bunting, Online Safety Policy Director at Ofcom, said: “We’ll move quickly once the Bill passes to put these ground-breaking laws into practice. Tech firms must be ready to meet our deadlines and comply with their new duties. That work should start now, and companies needn’t wait for the new laws to make their sites and apps safer for users.”

Maintaining momentum this year

Ofcom’s preparations to take on its new role are continuing apace. Today we are calling for evidence on the ‘first phase’ areas identified for consultation: the risk of harm from illegal content; the tools available to services to manage this risk; child access assessments; and transparency requirements. We would like to hear from companies that are likely to fall within the scope of the regime, as well as other groups and organisations with expertise in this area.

In the immediate months ahead, we will build on work already underway by:

  • ramping up our engagement with tech firms, large and small;
  • publishing our first report on how video-sharing platforms such as TikTok, Snapchat, Twitch and OnlyFans are working to tackle harm;
  • undertaking and publishing research on the drivers and prevalence of some of the most serious online harms in scope of the Bill, as well as technical research on how these might be mitigated;
  • further developing our skills and operational capabilities, building on the expertise we have already brought in from the technology industry, academia and the third sector; and
  • continuing to work with other regulators through the Digital Regulation Cooperation Forum to ensure a joined-up approach between online safety and other regimes.

What the new laws will mean

This is novel regulation and so it is also important to understand what the Online Safety Bill does – and does not – require.

The focus of the Bill is not on Ofcom moderating individual pieces of content, but on the tech companies assessing risks of harm to their users and putting in place systems and processes to keep them safer online.

As well as setting Codes of Practice and giving guidance on compliance, Ofcom will have powers to demand information from tech companies on how they deal with harms and to take enforcement action when they fail to comply with their duties. The Bill will also ensure that tech companies are more transparent and can be held to account for their actions.

It is also important to recognise that:

1. Ofcom will not censor online content. The Bill does not give Ofcom powers to moderate or respond to individuals’ complaints about individual pieces of content. The Government recognises – and we agree – that the sheer volume of online content would make that impractical. Rather than focusing on the symptoms of online harm, we will tackle the causes by ensuring companies design their services with safety in mind from the start.
2. Tech firms must minimise harm, within reason. We will examine whether companies are doing enough to protect their users from illegal content and content that is harmful to children, while recognising that no service in which users freely communicate and share content can be entirely risk-free. Under the draft laws, the duties placed on in-scope online services are limited by what is proportionate and technically feasible.
3. Services can host content that is legal but harmful to adults, but must have clear service terms. Under the Bill, services with the highest reach – known as ‘Category 1 services’ – must assess risks associated with certain types of legal content that may be harmful to adults. They must have clear terms of service or community guidelines explaining how they handle it, and apply these consistently. They must also provide tools that empower users to reduce their likelihood of encountering this content. But they will not be required to block or remove legal content unless they choose to.
