UK Slams AI Chatbots with Safety Act Rules

Prime Minister Keir Starmer announced that amendments to the Crime and Policing Bill would bring all AI chatbot providers within the scope of the Online Safety Act, requiring them to protect users from illegal content such as child exploitation material and content encouraging self-harm. Non-compliant companies could face fines of up to 10% of global revenue or have their UK services blocked by Ofcom. Standalone chatbots that do not let users share content may still escape full regulation, but chatbots built into apps or platforms now fall squarely within scope.

Passed in 2023, the Act initially focused on platforms and did not keep pace with the rapid growth of AI. Ofcom's 2024-2025 guidance clarified that chatbot output counts as "user-generated content" within social or search services. The Grok incident intensified pressure to close loopholes, in line with efforts elsewhere, such as Australia's age limits and Spain's restrictions. Other measures under consideration include limits on infinite scrolling, the sharing of explicit images, and children's use of VPNs and chatbots.

Providers must now carry out risk assessments, put safeguards in place, and track Ofcom's ongoing guidance. Enforcement against non-compliant platforms has already begun. The move positions the UK as a leader in AI safety, with potential influence on EU and US policy as chatbots become more widespread in finance, retail, and social media.