The UK Online Safety Act and AI
The UK Online Safety Act 2023, which received Royal Assent on October 26, 2023, is the UK's comprehensive framework for regulating online platforms. While not an AI-specific law, the Act has significant implications for AI systems, particularly in three areas: AI-generated harmful content (including deepfakes), AI-powered content moderation, and AI-driven recommender systems.
The Act creates duties of care for online service providers, enforced by Ofcom as the designated regulator. Platforms that use AI to generate, moderate, or recommend content must ensure their AI systems comply with the Act's safety duties, including preventing the dissemination of illegal content and protecting children from harmful material.
For enterprises deploying AI in online services, the Act creates new obligations around AI content governance. Areebi supports compliance through content guardrails, policy enforcement, and audit trails that document AI content decisions.
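As an illustration of what such an audit trail might capture, here is a minimal sketch in Python. The class and field names (ContentDecision, AuditTrail, and so on) are hypothetical and do not reflect Areebi's actual API; hash chaining is shown as one common way to make a decision log tamper-evident.

```python
# Hypothetical sketch of an append-only audit trail for AI content decisions.
# All names are illustrative, not Areebi's real API.
import json
import hashlib
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ContentDecision:
    content_id: str
    action: str          # e.g. "allowed", "blocked", "escalated"
    policy: str          # which policy drove the decision
    model_version: str   # AI model that generated or moderated the content
    rationale: str
    timestamp: str

class AuditTrail:
    """Append-only JSON-lines log with hash chaining for tamper evidence."""
    def __init__(self, path: str):
        self.path = path
        self.prev_hash = "0" * 64

    def record(self, decision: ContentDecision) -> str:
        entry = asdict(decision)
        entry["prev_hash"] = self.prev_hash  # chain each entry to the last
        line = json.dumps(entry, sort_keys=True)
        self.prev_hash = hashlib.sha256(line.encode()).hexdigest()
        with open(self.path, "a") as f:
            f.write(line + "\n")
        return self.prev_hash

trail = AuditTrail("decisions.log")
trail.record(ContentDecision(
    content_id="c-123",
    action="blocked",
    policy="osa-illegal-content",
    model_version="moderator-v2",
    rationale="classified as priority illegal content",
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```

Recording which policy drove each decision is the linkage that later reporting (and any regulator query) tends to hinge on.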
Duties Relating to AI-Generated Content
The Online Safety Act creates several obligations relevant to AI-generated content:
Illegal Content Duty
Platforms must take proactive steps to prevent users from encountering illegal AI-generated content (a minimal guardrail sketch follows the list), including:
- AI-generated child sexual abuse material (CSAM) - already illegal under existing UK law (which covers "pseudo-photographs" and prohibited images) and treated as priority illegal content under the Act
- AI-generated intimate images shared without consent (deepfake pornography) - sharing such images was made a criminal offence by the Act's amendments to the Sexual Offences Act 2003
- AI-generated content that constitutes fraud, terrorism-related material, or other offences
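To make the duty concrete, the sketch below shows one way a guardrail might map classifier scores for priority-offence categories to a block/allow decision. The category names, threshold, and classifier interface are all assumptions for illustration; a real deployment would align them with the Act's priority-offences schedules.

```python
# Illustrative guardrail check over (assumed) classifier category scores.
# Category names and threshold are placeholders, not the Act's schedules.
PRIORITY_ILLEGAL = {"csam", "intimate_image_abuse", "terrorism", "fraud"}
BLOCK_THRESHOLD = 0.5  # deliberately conservative for illegal content

def guardrail_decision(category_scores: dict[str, float]) -> str:
    flagged = {c for c, s in category_scores.items()
               if c in PRIORITY_ILLEGAL and s >= BLOCK_THRESHOLD}
    if flagged:
        return "block:" + ",".join(sorted(flagged))
    return "allow"

print(guardrail_decision({"fraud": 0.82, "harassment": 0.3}))  # block:fraud
```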
Safety Duties for User-Generated Content
Platforms must implement systems and processes to minimize the presence of AI-generated content that is harmful but not necessarily illegal. The strongest such duties protect children (for example, from bullying or abusive material); for adults, the originally proposed "legal but harmful" duties were narrowed during the Act's passage to user-empowerment duties on the largest services. Meeting these duties can involve AI-powered content moderation systems that detect AI-generated material.
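A hedged sketch of such a triage step is below: it combines a hypothetical AI-generation detector score with a hypothetical harm-classifier score and routes content to removal, human review, or publication. Both detectors and all thresholds are assumptions; production systems would use trained, calibrated models.

```python
# Illustrative triage: combine an (assumed) AI-generation detector score
# with an (assumed) harm classifier score to route content. Thresholds are
# placeholders, not calibrated values.
def triage(ai_generated_score: float, harm_score: float) -> str:
    if harm_score >= 0.9:
        return "remove"  # clearly harmful, regardless of origin
    if harm_score >= 0.5:
        # likely-synthetic harmful content may warrant faster human review
        return "priority_review" if ai_generated_score >= 0.7 else "human_review"
    return "publish"

print(triage(ai_generated_score=0.85, harm_score=0.6))  # priority_review
```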
Transparency Reporting
Categorized services (Categories 1, 2A, and 2B) must produce transparency reports in response to Ofcom notices, expected annually, detailing their use of AI in content moderation, the effectiveness of AI-powered safety systems, and how AI recommender systems operate.
Areebi's guardrails can enforce content policies that prevent AI systems from generating content that would violate Online Safety Act duties. Audit trails provide the documentation needed for Ofcom transparency reporting.
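Continuing the earlier audit-trail sketch, the snippet below shows how such a log might be aggregated into the kinds of counts a transparency report cites (decisions by action and by policy). The JSON-lines format is the one assumed in the earlier example, not a real Areebi format.

```python
# Sketch: summarize the audit trail from the earlier example into
# transparency-report-style counts. Assumes the JSON-lines format above.
import json
from collections import Counter

def summarize(path: str) -> dict:
    actions, policies = Counter(), Counter()
    with open(path) as f:
        for line in f:
            entry = json.loads(line)
            actions[entry["action"]] += 1
            policies[entry["policy"]] += 1
    return {"actions": dict(actions), "policies": dict(policies)}

print(summarize("decisions.log"))
```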
Deepfakes and Synthetic Media
The Online Safety Act specifically addresses deepfakes and AI-generated synthetic media:
- Intimate image abuse: Sharing AI-generated intimate images without consent is a criminal offence under the Act (via amendments to the Sexual Offences Act 2003), with aggravated forms carrying penalties of up to two years' imprisonment; a separate offence covering the creation of sexually explicit deepfakes was introduced in later legislation
- Platform duties: Platforms must prevent the sharing of non-consensual AI-generated intimate images and have effective reporting and takedown mechanisms
- Content provenance: While the Act does not mandate specific watermarking requirements (unlike California's SB 942), Ofcom's codes of practice encourage platforms to implement content provenance measures for AI-generated media; a simplified provenance sketch follows the list
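For illustration, the sketch below shows a simplified provenance manifest: a hash of the asset plus generator metadata that lets a platform detect post-generation modification. This is a stand-in inspired by C2PA-style manifests, not the actual C2PA format (which embeds signed manifests in the asset itself).

```python
# Simplified provenance stub. Field names are illustrative only; real
# provenance standards such as C2PA use signed, embedded manifests.
import hashlib
import json
from datetime import datetime, timezone

def provenance_manifest(media_bytes: bytes, generator: str) -> dict:
    return {
        "asset_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "generator": generator,  # e.g. model name/version
        "ai_generated": True,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }

def matches(media_bytes: bytes, manifest: dict) -> bool:
    # Detects post-generation modification of the asset.
    return hashlib.sha256(media_bytes).hexdigest() == manifest["asset_sha256"]

img = b"\x89PNG...synthetic image bytes"
m = provenance_manifest(img, generator="image-model-v1")
print(json.dumps(m, indent=2))
print(matches(img, m))  # True while the asset is unmodified
```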
Organizations using AI to generate or process visual media should implement controls to prevent the creation of harmful synthetic content. Areebi's policy engine can enforce content generation boundaries that prevent misuse. Visit our Trust Center for details on content safety controls.
Ofcom Codes of Practice and AI
Ofcom is developing codes of practice that set out how platforms can comply with their Online Safety Act duties. Several codes have direct AI implications:
- Illegal content codes: Requirements for AI-powered detection systems to identify illegal content, including AI-generated CSAM and non-consensual intimate images
- Children's safety codes: Requirements for age assurance measures (which may use AI), content filtering for children, and restrictions on algorithmic amplification of harmful content to minors
- Transparency codes: Requirements for platforms to disclose how AI is used in content moderation, recommendation algorithms, and safety systems
- User empowerment: Requirements for user tools to control AI-driven content recommendations and filter AI-generated content (a minimal settings sketch follows this list)
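The sketch below illustrates the user-empowerment idea: per-user settings that suppress AI-generated or sensitive items from a recommendation feed before it is served. The item fields and settings names are assumptions, not an Ofcom or Areebi schema.

```python
# Illustrative user-empowerment filter over a recommendation feed.
# Item fields and settings are placeholders for this sketch.
from dataclasses import dataclass

@dataclass
class UserSettings:
    hide_ai_generated: bool = False
    hide_sensitive: bool = True

def apply_user_filters(feed: list[dict], s: UserSettings) -> list[dict]:
    out = []
    for item in feed:
        if s.hide_ai_generated and item.get("ai_generated"):
            continue  # user opted out of AI-generated content
        if s.hide_sensitive and item.get("sensitive"):
            continue
        out.append(item)
    return out

feed = [
    {"id": 1, "ai_generated": True, "sensitive": False},
    {"id": 2, "ai_generated": False, "sensitive": False},
]
print(apply_user_filters(feed, UserSettings(hide_ai_generated=True)))  # [{'id': 2, ...}]
```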
Platforms must comply with Ofcom's codes or demonstrate equivalent alternative measures. Areebi's compliance dashboards help organizations monitor adherence to Ofcom requirements and prepare for regulatory examinations.
Compliance Strategy for AI in Online Services
Organizations using AI in online services subject to the UK Online Safety Act should:
- Assess platform category: Determine whether your service is Category 1 (the largest user-to-user services), Category 2A (the largest search services), or Category 2B (other user-to-user services meeting Ofcom's thresholds), as obligations vary by category
- Implement content safety controls: Deploy guardrails that prevent AI systems from generating or distributing illegal or harmful content
- Deploy AI content detection: Implement AI-powered systems to detect AI-generated harmful content, including deepfakes and synthetic media
- Establish reporting mechanisms: Create user reporting processes for AI-generated harmful content with clear escalation procedures (see the escalation sketch after this list)
- Prepare transparency reports: Use compliance dashboards and audit trails to compile data for annual Ofcom transparency reporting
- Coordinate with UK AI governance: Ensure Online Safety Act compliance is integrated with broader UK AI governance requirements
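As a final illustration, here is a minimal routing table for the reporting-and-escalation step above. The categories, queues, and SLA hours are placeholders; actual deadlines would be set by Ofcom's codes of practice and internal policy.

```python
# Sketch of report routing for user complaints about AI-generated content.
# Queues and SLA hours are placeholders, not regulatory deadlines.
ESCALATION = {
    "csam": ("law_enforcement_referral", 1),       # (queue, SLA hours)
    "intimate_image_abuse": ("priority_takedown", 4),
    "fraud": ("trust_and_safety", 24),
    "other": ("standard_review", 72),
}

def route_report(category: str) -> dict:
    queue, sla_hours = ESCALATION.get(category, ESCALATION["other"])
    return {"queue": queue, "sla_hours": sla_hours}

print(route_report("intimate_image_abuse"))  # {'queue': 'priority_takedown', 'sla_hours': 4}
```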
Request a demo to see how Areebi supports Online Safety Act compliance for AI in online services. Explore our pricing plans for enterprise AI governance.