UK Regulator Warns Tech Giants of Heavy Fines Ahead of New Online Safety Law

LONDON — The United Kingdom’s communications regulator, Ofcom, has issued a stark warning to major technology companies: they could face significant fines, or even see their services suspended, if they fail to comply with new online safety regulations. These sweeping rules, which form part of the Online Safety Act, are set to take effect in December, marking a pivotal moment in the UK’s efforts to regulate harmful content on the internet.

The Online Safety Act is designed to hold tech companies accountable for illegal content on their platforms, ranging from child exploitation to online disinformation. Companies that fail to comply with the new rules could be fined up to 10% of their global annual revenue or £18 million, whichever is greater, a penalty that could reach billions of pounds for the largest firms.

In a statement issued on Thursday, Ofcom’s CEO Melanie Dawes emphasized the regulator’s commitment to enforcing these new laws, saying that companies can no longer afford to delay action. “The time for talk is over. From December, tech firms will be legally required to start taking action, meaning 2025 will be a pivotal year in creating a safer life online,” Dawes said.

Tech Giants Under Pressure

The Online Safety Act places significant new responsibilities on tech firms, requiring them to take stronger measures to protect users—particularly children—from exposure to illegal and harmful content. Platforms that host user-generated content, such as social media networks and video-sharing sites, are expected to implement age verification tools, enhance moderation practices, and conduct risk assessments on the harmful content their users may encounter.

Over the past six months, Ofcom has been engaging with major tech platforms to help prepare them for the upcoming regulations. According to the regulator, some companies have already introduced new safety features. For example, adult content platform OnlyFans has implemented age verification to ensure minors cannot access explicit material. Free speech-focused video-sharing platform BitChute has reportedly improved its content moderation practices, while live-streaming site Twitch has introduced new measures to protect children from harmful videos.

Ofcom also highlighted changes made by social media giants Meta, which owns Facebook and Instagram, and Snapchat to prevent online grooming of children. However, Dawes made it clear that while these steps are a positive start, they fall short of the full compliance that will be required when the new rules take effect.

Fines, Suspensions, and Criminal Liability

Once the Online Safety Act comes into force, Ofcom will have the power to levy significant fines against tech companies that fail to meet their new legal obligations. Persistent offenders could face penalties as high as 10% of annual global revenue, a hefty sum for the world’s largest tech companies.

In severe cases of non-compliance, Ofcom could also move to have a platform blocked within the UK entirely, potentially cutting off millions of users from services like Facebook, YouTube, or X. The watchdog can also seek to restrict a company’s access to payment providers or advertisers, hitting its bottom line even harder.

Additionally, individual senior executives could face criminal charges and jail time if their companies are found to be repeatedly violating the rules. This aspect of the legislation is intended to hold tech leaders personally accountable for their platforms’ role in hosting harmful or illegal content.

Timeline for Implementation

The December deadline is a crucial turning point, but the rollout of the Online Safety Act’s provisions will continue into 2025. Ofcom is expected to publish the first edition of its illegal harms codes of practice and guidance before the end of this year. Tech companies will then have three months to complete a full risk assessment and demonstrate that they are taking appropriate steps to reduce the spread of illegal content.

In January, Ofcom plans to release further guidance focused on protecting children online, including finalized rules around age verification for pornographic content. By spring 2025, the regulator will consult on additional measures, revising its codes and guidance to keep pace with new and evolving online risks.

As the rollout progresses, Ofcom will continue to monitor compliance and may take enforcement action against companies that fail to meet their obligations.

Government Support for Stricter Regulations

The UK government has been vocal in its support for the Online Safety Act, positioning it as a critical tool for addressing the increasing dangers of the digital age. The legislation was introduced in response to growing concerns over the proliferation of harmful content online, particularly regarding the protection of children and vulnerable groups.

UK Technology Secretary Peter Kyle recently wrote to Ofcom, urging the regulator to take a hard look at how illegal content, particularly disinformation, spread during recent anti-immigration riots across the country. In his letter, which was shared on the social media platform X (formerly Twitter), Kyle asked Ofcom to consider introducing additional targeted measures to combat disinformation in its next round of illegal harms codes.

“I would appreciate an update from you on the assessment Ofcom has made about how illegal content, particularly disinformation, spread during the period of disorder,” Kyle wrote. “And if there are targeted measures which Ofcom is considering for the next iteration of the illegal harms code of practice in response.”

Tech Industry Response

While some tech companies have been proactive in preparing for the new regulations, others have voiced concerns about the Online Safety Act’s potential impact on their operations. Critics argue that the law could lead to over-censorship, with platforms removing lawful content pre-emptively to avoid fines and thereby limiting free expression.

Industry groups have also raised questions about the feasibility of implementing robust age verification systems across a range of services, particularly for adult content and social media platforms. However, Ofcom has maintained that these measures are necessary to protect users, especially children, from harmful content.

As the December deadline looms, many tech giants are ramping up their compliance efforts. Whether they meet the tough new standards set by the UK regulator remains to be seen, but with the threat of substantial fines, service suspensions, and even jail time for executives, the stakes are high.

Looking Ahead

The Online Safety Act represents one of the most significant pieces of internet regulation to date, and its impact will be closely watched by governments and regulators around the world. With enforcement set to begin in just a few months, the next year will be crucial in determining whether the UK’s approach to regulating online safety can successfully balance protecting users with maintaining a free and open internet.

As tech companies prepare to navigate the challenges of this new regulatory landscape, the message from Ofcom is clear: the era of unchecked harmful content online is coming to an end.