Online Safety Act — Explainer
This guide explains, in plain English, what the UK’s Online Safety Act is, why it exists, and what it means for you. It’s written for curious readers, not lawyers or policy insiders. If you’ve heard about “age checks” or seen new prompts on apps and websites and want a clear, calm walkthrough, you’re in the right place.
- The OSA sets rules for online services to reduce access to clearly illegal content and to keep children safer.
- It focuses on how platforms are designed and run (systems and safeguards), not on punishing ordinary users for everyday posts.
- Ofcom, the UK communications regulator, oversees the rules and can require fixes when platforms fall short.
Why it exists
- People worry about harms online, especially to children. Past digital-first laws often pushed the burden onto users (think: endless cookie banners) instead of fixing systems.
- The aim here is more responsibility on platforms: better protection for kids, safer defaults for teens, better reporting and appeals, and adult tools you can choose to use.
What you might notice
- Some services now ask for your age before showing adult content.
- You should see clearer reporting routes and safer settings for younger users.
- Good services give you a choice of privacy‑respecting ways to prove age (not just “upload your ID”).
- Not everyone is thrilled: verification raises privacy concerns and over‑reach risks, especially around access to legal ‘adult’ content.
How to use this guide
- Each section is a set of short, expandable questions. Start broad, then go deeper if you want methods, edge cases, and sources.
- We call out clumsy designs (that create friction without real safety) and highlight better, privacy‑preserving options.
What’s inside
- Basics & origin
- What changes you actually see
- How age assurance/verification works
- Over/under‑compliance & case studies
- Your rights & routes
- Context: cookie law → GDPR → OSA
- Kids, speech & the messy middle
- Politics & perception
- Enforcement reality & what to watch next
- Deeper dives & missed angles
- What to take with you
Tip: You can expand everything at once, or search within the page. When you’re done, you should have a practical sense of what’s changing, why, and what your choices are.
Basics & origin
The Online Safety Act is the UK trying to move safety work off your shoulders and onto the systems that shape what you see. Instead of more pop‑ups and finger‑wagging, it asks the big services to do the boring but important plumbing: know their risks, design safer defaults, and fix things when they break.
It wasn’t born overnight or by one party. The idea rolled through years of debate, election cycles and rewrites, and ended up as a cross‑party push with Ofcom holding the clipboard. Think of this section as the quick map: what the law actually is, who it touches, and where its edges are.
What is the Online Safety Act (OSA)?
A UK law that puts duties on online services (not on users) to reduce illegal harms and protect children. Separate parts of the law create or update offences for individuals (e.g., cyberflashing), but the core of the OSA is about what services must do. In practice, the OSA sets out who is in scope, requires risk assessments and safety systems, and empowers Ofcom to enforce compliance. See the primary Act for the full structure and definitions: Online Safety Act 2023 on legislation.gov.uk. (legislation.gov.uk)
The rules can also apply to services outside the UK if their service has a “UK link.” Ofcom’s scope overview explains the UK link tests and exemptions with clear diagrams. (Ofcom—Regulated services: overview of scope (PDF))
Who created it and who pushed it over the line?
The policy started as the UK’s Online Harms programme (2019), the Bill was introduced to Parliament in March 2022, the Act received Royal Assent on 26 October 2023, and the current government accelerated delivery from mid‑2024.
- Origins (2019): The UK Government published the Online Harms White Paper in April 2019 under then‑Prime Minister Theresa May, proposing a duty of care and a regulator to oversee online safety. (gov.uk—White Paper)
- Government response (Dec 2020): The full response confirmed a statutory duty of care and Ofcom as regulator. (gov.uk—Full government response)
- Draft Bill (May 2021): A draft Online Safety Bill was published for pre‑legislative scrutiny by a Joint Committee. (gov.uk—Draft Online Safety Bill)
- Introduction to Parliament (2022): Culture Secretary Nadine Dorries introduced the Online Safety Bill to the Commons in March 2022. (gov.uk—Press/Parliamentary materials)
- Royal Assent (2023): The Bill became the Online Safety Act 2023 on 26 October 2023. (legislation.gov.uk)
- Acceleration (2024→): From July 2024, the government prioritised fast implementation via Ofcom’s phased codes and guidance rather than reopening the statute. Ofcom is the regulator responsible for writing codes, auditing, and enforcement. (Ofcom—Roadmap to regulation)
Why did Labour keep/accelerate it after 2024?
It was already law, public support for child protection is high, Ofcom’s roadmap was running, and ministers can shape delivery through strategic priorities and secondary measures rather than re‑legislate. The government’s own explainer emphasises platform duties and phased commencement, which made delivery the pragmatic path. In May 2025 ministers also issued strategic priorities to guide Ofcom’s approach. See: the official Online Safety Act explainer (gov.uk) and Ofcom’s online safety hub/roadmap (Ofcom).
What services are in scope?
User‑to‑user services and search services with a UK link. Duties scale with size, functionality and risk.
- User‑to‑user services: services where content a user generates, uploads or shares can be encountered by other users. This can include public posts and features like group chats/DMs where content is shared beyond a single user. See the definition in Part 2, s.3. (legislation—s.3)
- Search services: services that search multiple websites/databases and present results. Part 2, s.4. (legislation—s.4)
- UK link: a service is in scope if any of the following apply: it has a significant number of UK users; it targets the UK; or it is accessible in the UK and there is a material risk of significant harm to UK users. Part 2, s.5; see Ofcom’s plain‑English overview for examples. (legislation—s.5, Ofcom—Regulated services overview (PDF))
Note: one‑to‑one live voice calls are exempt, but video calls and messaging content are not. Group chats and DMs can be in scope where content one user sends can be encountered by another. (See Ofcom’s overview linked above.)
For a plain‑English overview, see the government explainer (which also notes private messaging can be in scope where content is shared): (gov.uk—OSA explainer).
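To make the three “UK link” conditions above concrete, here is a minimal sketch in Python (illustrative only; the statutory tests in s.5 are more nuanced, and the names below are our own, not terms from the Act):

```python
from dataclasses import dataclass

@dataclass
class Service:
    uk_users_significant: bool   # has a significant number of UK users
    targets_uk: bool             # the UK is a target market for the service
    accessible_in_uk: bool       # capable of being used in the UK
    material_risk_of_harm: bool  # material risk of significant harm to UK users

def has_uk_link(s: Service) -> bool:
    """Rough paraphrase of the s.5 'UK link' conditions: any one branch is enough."""
    return (
        s.uk_users_significant
        or s.targets_uk
        or (s.accessible_in_uk and s.material_risk_of_harm)
    )

# A non-UK forum reachable from the UK, not targeting it, and posing no material
# risk of significant harm to UK users would fall outside scope on this rough test.
print(has_uk_link(Service(False, False, True, False)))  # False
```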
What is out of scope?
Certain service types and limited‑functionality features are exempt. Examples:
- Standard email and SMS/MMS.
- One‑to‑one live aural (voice) calls (including VoIP voice calls).
- Internal business/enterprise tools (closed workforce services).
- Limited‑functionality exemptions (e.g., services without user‑to‑user features).
- Recognised news publisher content itself is outside scope; user comments around it can be in scope.
Details live in Schedule 1 and Schedule 2 of the Act; Ofcom’s scope overview summarises what’s in/out and explains transitional rules. (legislation—Schedule 1, legislation—Schedule 2, Ofcom—Regulated services overview (PDF))
What the OSA doesn’t do
It doesn’t impose a general “ID for everyone.” The law is risk‑based and method‑agnostic; self‑declaration doesn’t count, but several privacy‑preserving methods do. (Ofcom: Protecting children online - codes)
It doesn’t ban end‑to‑end encryption (E2EE). Ofcom could, in theory, issue Technology Notices requiring accredited detection technology only where technically feasible; both government and Ofcom have framed this as a high bar and consulted on minimum accuracy standards. (gov.uk—OSA explainer, Ofcom—Illegal harms statement hub)
It doesn’t create blanket bans on legal speech for adults. Instead, services must apply their own terms consistently and offer tools/appeals. (gov.uk—OSA explainer)
What the OSA does require from services
Assess risks (illegal content; and for kids, content likely to harm) and keep assessments up to date. (Part 3 duties; Ofcom guidance and templates in its online safety hub.) (Ofcom—hub)
Design and operate systems that reduce those risks (moderation, reporting, user controls, safer defaults for children). (Ofcom—codes)
Use “highly effective” age assurance where needed (several methods are acceptable; self‑declaration is not). (Ofcom—codes)
Be transparent (clear terms, complaints/appeals) and cooperate with Ofcom audits and information requests. (Part 7 powers; see Ofcom roadmap.) (Ofcom—roadmap)
Face penalties if they won’t fix issues: Ofcom can fine up to £18m or 10% of qualifying worldwide revenue (whichever is greater) and, in severe cases, seek court‑ordered business disruption measures (e.g., payment/ads withdrawal or ISP access restrictions). (Ofcom—Enforcement guidance (PDF), gov.uk—OSA explainer)
Key dates (high level)
17 March 2025: Illegal‑content duties enforceable; risk assessments due. (gov.uk—OSA explainer, Ofcom—roadmap)
25 July 2025: Child‑safety codes and age‑assurance expectations in force for services likely to be accessed by children (after a 3‑month risk‑assessment window to 24 July 2025). (Ofcom—Protecting children online)
Through 2026: Further codes (e.g., for categorised services) and transparency reporting phases roll out. (Ofcom—roadmap)
Categorised services (extra duties)
Some services face additional transparency and user‑empowerment duties based on thresholds set by secondary legislation in 2025. Broadly:
- Category 1: very large user‑to‑user services (size + functionality thresholds).
- Category 2A: large search services.
- Category 2B: sizable user‑to‑user services below Cat 1 but with risky features (e.g., DMs).
Thresholds were set by the Online Safety Act 2023 (Category 1, Category 2A and Category 2B Threshold Conditions) Regulations 2025. (UK SI 2025/226—Threshold Conditions (PDF))
Online Safety Act at a glance
Sets system-level duties for services with a UK link; it is not a tool for policing individual posts.
Core focus: illegal content and children’s safety; adults get more control tools, not new bans on legal speech.
Enforced by Ofcom, with fines up to £18m or 10% of qualifying worldwide revenue and, in serious cases, court-ordered business disruption measures.
See Ofcom’s enforcement approach and the government explainer: Ofcom—Enforcement guidance (PDF) or the gov.uk OSA explainer.
What changes you actually see
This is the part you can feel. Age checks at the door. Cleaner reporting routes. Safer defaults for teens without turning the internet into school detention. Some choices are clumsy, some are thoughtful—this chapter is a tour of what’s landing on your screen and why.
Why am I hitting age walls now?
You’re seeing more checks because children’s protections and enforcement timelines came into force through 2025, so services had to show “highly effective” age assurance around adult or harmful categories. Under the OSA, services used by or likely to be accessed by children must assess risks and put systems in place that are proportionate and effective. Ofcom’s phased roadmap pushed illegal‑harms duties first, then children’s codes in mid‑2025, which is why many platforms shipped gated access at the same time. See the government explainer and Ofcom’s roadmap for dates and scope: gov.uk OSA explainer, Ofcom roadmap.
Key dates to anchor what you’re seeing: 17 March 2025 (illegal‑content duties enforceable) and 25 July 2025 (children’s codes and age‑assurance expectations in force). See Ofcom’s children’s codes hub for what services had to implement: Ofcom—Protecting children online.
Do I need to upload my passport everywhere?
No—there isn’t a single mandated method. Good implementations start with low‑friction, privacy‑preserving checks, then offer stronger fallbacks only if needed. Ofcom lists several methods that can be “highly effective,” including facial age estimation (delete images immediately), open banking (bank confirms you’re over 18), mobile‑network checks, digital IDs/PASS, and credit‑card checks that bind to the user. Self‑declaration isn’t acceptable. See Ofcom’s children’s codes and guidance: Ofcom—Protecting children online.
Why does Reddit ask for a selfie but Steam wants a card?
Different choices and trade‑offs. Reddit uses a verifier offering selfie age estimation (fast; non‑identifying) with ID as a fallback. Steam marks you “verified” if a valid credit card is on file. Reddit’s approach can be quick and privacy‑respecting when deletion is enforced by the verifier (see TechCrunch on the UK rollout: Reddit rolls out age verification). Steam’s UK policy is simple but weaker against misuse (e.g., parent cards) and exclusionary for people without credit cards (Steam Help: “Your UK Steam account is considered age verified…”). Media coverage and developer trade press noted the drawbacks (e.g., You now need a credit card to access mature content on Steam in the UK). Ofcom’s criteria treat card checks as acceptable only when they meaningfully bind to the user (Ofcom—children’s codes).
Can I avoid checks with a VPN? Is that safer?
You can route around some gates, but a shady VPN may see far more of your traffic than a certified verifier sees of your face/ID (which should be deleted on the spot). Think of it as a trade‑off: a one‑time, auditable “age OK” token versus ongoing exposure of all your browsing to an unknown network. If you use a VPN, pick a reputable provider and understand the risks; the law expects services to consider circumvention but doesn’t mandate universal ID or breaking encryption. See gov.uk OSA explainer.
Reports in mid‑ to late‑2025 show some users discussing workarounds, but the safer path is usually to choose a privacy‑preserving on‑platform method and ensure deletion is real (see Ofcom’s children’s codes, linked above).
What if I don’t have a smartphone or I fail a face check?
You should be able to pick another route and try again without being locked out unfairly. Reasonable alternatives include: ID+liveness via webcam, bank‑sourced age attribute (open banking), a mobile‑network check, PASS (digital proof of age), or email‑based estimation where appropriate. If a face estimation fails due to lighting or camera quality, a good flow offers a quick retry or switches to a stronger fallback (see Ofcom—children’s codes).
What about shared or parent devices?
On shared devices, a “credit card on file” gate may reflect the parent’s card rather than the person actually using the account. That’s why simple card‑only gating is weak: it doesn’t bind the user. Better flows bind the check to the account holder via estimation, ID+liveness, bank proof, or MNO checks, and remember the result as an “age OK” token (Ofcom’s “highly effective” criteria emphasise robustness and reliability). Steam’s help page confirms the card‑on‑file design: Steam Help.
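As a rough illustration of what “binding the check to the account holder” and remembering it as an “age OK” token could look like, here is a hypothetical sketch (the field names and the 90‑day lifetime are our own assumptions, not any platform’s or Ofcom’s scheme):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AgeToken:
    account_id: str        # bound to the account holder, not to a card or device
    method: str            # e.g. "facial_estimation", "open_banking", "id_liveness"
    over_18: bool
    issued_at: datetime
    expires_at: datetime   # short-lived, so a stale result gets re-checked

def issue_age_token(account_id: str, method: str, over_18: bool) -> AgeToken:
    now = datetime.now(timezone.utc)
    return AgeToken(account_id, method, over_18, now, now + timedelta(days=90))

def is_age_verified(token: AgeToken, account_id: str) -> bool:
    # A card-on-file gate skips the account check entirely; binding to the
    # account is what makes the result meaningful on a shared device.
    return (
        token.over_18
        and token.account_id == account_id
        and token.expires_at > datetime.now(timezone.utc)
    )
```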
Isn’t this exclusionary for people without credit cards or good cameras?
It shouldn’t be. “Highly effective” is a standard for outcomes, not a single tool. Good services offer more than one route so you can choose a non‑card, non‑face option. If you don’t have a credit card, use open banking, MNO, PASS, or ID+liveness. If your camera is poor, switch to a stronger fallback. If a platform offers only one route, that’s a design choice you can challenge (see Ofcom—children’s codes).
What else will I notice beyond age checks?
You should see clearer reporting routes, safer defaults for young users, more consistent takedowns of clearly illegal content, and optional controls for adults. Specifically: in‑app reporting and complaints, teen‑safe defaults (limited recommendations; tighter contact eligibility), faster removal of illegal content (e.g., child sexual abuse material, terrorism, fraud, revenge porn), and optional filters adults can turn on to reduce exposure to abuse or other legal‑but‑unwanted content. These reflect services’ duties to assess risk, design safer systems, and be transparent (see Ofcom—online safety hub and gov.uk OSA explainer).
How age assurance/verification works
Age checks don’t have to mean handing over your passport everywhere. There’s a toolkit: quick estimation, stronger fallbacks, and options that don’t turn your life into a paperwork quest. The good versions feel light and respectful; the bad ones feel like a wall. Here’s the landscape so you can tell the difference.
What counts as “highly effective” (HEAA)?
“Highly effective” means the method reliably keeps under‑18s out of adult content while being robust, fair, and proportionate. Ofcom lists a basket of acceptable approaches rather than a single tool: [children’s codes](https://www.ofcom.org.uk/online-safety/illegal-and-harmful-content/statement-protecting-children-from-harms-online).
- Photo‑ID + liveness: scan a government ID and prove you’re a real person, not a recording.
- Facial age estimation: a selfie is analysed to estimate age; it does not identify you.
- Mobile‑network (MNO) checks: your carrier confirms an adult account.
- Credit‑card checks: can help, but must bind to the user, not just “a card exists.”
- Digital ID wallets / PASS: reusable age credentials.
- Open banking: your bank confirms you’re over 18 without sharing full identity.
- Email‑based age estimation: signals from long‑lived addresses and other data; low friction but needs coverage.
Self‑declaration (“I’m over 18”) does not meet the standard. See also the gov.uk OSA explainer for a plain‑English overview.
Is facial estimation “biometric ID”?
No. Estimation infers an age band from an image and should delete the image immediately; it does not match you to a known identity. Where confidence is low (e.g., near 18; poor lighting; atypical features), services should offer stronger fallbacks such as ID+liveness or bank‑sourced age. That’s how platforms meet Ofcom’s robustness and fairness aims without hoarding images (see Ofcom’s children’s codes).
Accuracy varies by age band and conditions. Independent assessments and regulator materials indicate high accuracy for clear under‑/over‑18 distinctions, with lower confidence near thresholds—hence the need for buffers and fallbacks. (See Ofcom’s design expectations in the codes above.)
How good is it, and for whom?
Generally good enough for an 18+ gate when used with sensible buffers and fallbacks; performance varies by age band and conditions. Good practice (reflected in Ofcom’s intent) is to publish error behaviour around threshold ages, be conservative near 18, test for and remediate bias, and always provide a non‑face alternative (ID, PASS, open banking) so people aren’t excluded if estimation struggles. See Ofcom’s design expectations in the children’s codes.
Which methods are strong vs weak?
Different platforms choose different technologies, usually on technical and cost grounds; done well, the change is low‑impact for users. Strong methods generally bind to the user and resist easy workarounds; weak ones don’t.
- ID + liveness: strongest binding; higher friction; needs solid anti‑fraud.
- Open banking: strong and privacy‑preserving; binds to the person controlling the bank app.
- Facial estimation: fast, privacy‑preserving if images are deleted; probabilistic, so you need fallbacks.
- Email estimation: very low friction; depends on email history/coverage; needs fallbacks.
- MNO checks: quick adult flag from a carrier; coverage varies; typically one layer in a stack.
- Credit‑card checks: acceptable only when they meaningfully bind to the user; “card on file” alone is weak against parent‑card misuse. See Ofcom’s caveats in the children’s codes.
How should a decent verification/response process work?
Start light, escalate only when needed, and delete images. A typical layered flow: begin with facial estimation or email‑based estimation; if confidence is low, offer ID+liveness, open banking, MNO or PASS. On success, issue a short‑lived “age OK” token and delete any images immediately. This meets Ofcom’s robustness/reliability/fairness objectives while keeping friction low (see children’s codes and the gov.uk explainer).
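Here is a minimal sketch of that layered flow, with placeholder stand‑ins for the certified provider calls (the function names, the 0.9 confidence cut‑off and the 21+ buffer are illustrative assumptions, not values from the codes):

```python
from typing import Callable, Optional

def estimate_age_from_selfie(image: bytes) -> tuple[int, float]:
    """Placeholder for a call to a certified estimation provider.
    Returns (estimated_age, confidence)."""
    return 25, 0.95

# Placeholder fallback checks; in practice these redirect to a bank,
# ID-verification or carrier flow run by a certified provider.
FALLBACKS: dict[str, Callable[[str], bool]] = {
    "open_banking": lambda user_id: True,      # bank attests "over 18"
    "id_plus_liveness": lambda user_id: True,  # document scan + liveness check
    "mno_check": lambda user_id: False,        # carrier adult flag, if available
}

def check_age(user_id: str, selfie: Optional[bytes], chosen_fallback: Optional[str]) -> bool:
    """Layered age check: start light, escalate only when confidence is low."""
    if selfie is not None:
        age, confidence = estimate_age_from_selfie(selfie)
        selfie = None  # image handled transiently; nothing retained
        if confidence >= 0.9 and age >= 21:  # buffer above 18 to stay conservative
            return True  # a short-lived "age OK" token would be issued here
    if chosen_fallback in FALLBACKS:
        return FALLBACKS[chosen_fallback](user_id)
    return False  # no valid proof: keep content gated, offer retry and appeal
```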
Mini‑FAQ
- Do they keep my face? No—certified flows should explain that images are used transiently to compute an age and then deleted (check the provider’s notice).
- What if I’m mis‑aged? You should be able to retry (better lighting/camera) or switch to a fallback (ID, PASS, bank) without being locked out unfairly (Ofcom’s children’s codes).
- Can I avoid biometrics entirely? Yes—choose a non‑face method (open banking, ID+live, PASS, MNO). Services should present options (see Ofcom’s accepted methods in the children’s codes).
- Is this fair for everyone? Ofcom expects services to test for bias and either remediate or provide alternatives where performance differs across demographics. That’s part of the “fairness” lens in the codes linked above.
Who certifies these methods/providers?
Independent certification helps validate deletion, security and accuracy claims.
- ACCS: The UK’s Age Check Certification Scheme audits age‑assurance providers and solutions against recognised standards (including privacy, security and performance). Look for an ACCS certificate for assurance over claims like “images are deleted immediately.” (ACCS—Age Check Certification Scheme)
- DIATF: The UK Digital Identity and Attributes Trust Framework recognises certified identity/attribute providers and includes requirements relevant to age attributes. Providers operating under DIATF publish which schemes and roles they’re certified for. (GOV.UK—Digital identity and attributes trust framework)
- PASS: The Proof of Age Standards Scheme certifies physical and digital proofs of age accepted across the UK (e.g., Yoti PASS card; Post Office PASS). PASS‑certified credentials can be used as a non‑biometric route. (PASS—Proof of Age Standards Scheme)
Example: providers such as Yoti detail independent audits and certifications for their facial age estimation and ID verification flows, including deletion and accuracy statements; services should link to those attestations in‑product. (See your selected provider’s certification pages and audit summaries.)
Over/under‑compliance & case studies
Real safety lives in the details. Some platforms take the hint—offer choices, delete data, keep the door open but sensible. Others reach for the bluntest tool and call it a day. These snapshots show what “good” feels like, what “lazy” looks like, and why users react the way they do.
What does “good” look like in practice?
Multiple routes, privacy‑by‑default, clear appeals, and minimal exclusion. A strong flow offers a low‑friction method first (e.g., facial estimation) and fallbacks (ID+liveness, open banking, MNO, PASS). Images are deleted immediately after estimation, and only an “age OK” token is stored. The service explains methods and appeals in plain English. This aligns with Ofcom’s “highly effective” criteria around robustness, reliability and fairness (see Ofcom’s children’s codes).
What does “bad/lazy” look like?
One route for everyone, weak binding, data kept longer than needed, and no memory of a valid check. A common anti‑pattern is “credit‑card only” gating. It’s easy to ship but weak against parent‑card misuse and exclusionary for users without credit cards. Ofcom treats card checks as acceptable only when they bind to the user, not just “a card exists” (children’s codes). Another anti‑pattern is claiming “no third party” while avoiding equivalent in‑house methods (e.g., ephemeral IDV) that achieve the same privacy outcome.
Coverage in 2025 highlighted these trade‑offs. For example, “You now need a credit card to access mature content on Steam in the UK” summarised friction and exclusion risks, while Ofcom’s bulletins emphasised offering alternatives and deleting data promptly (see Ofcom’s online safety hub and industry bulletins).
Platform examples to learn from (not endorsements)
- Reddit (UK): verifier‑backed selfie age estimation with ID fallback; deletion claims; quick unlock of gated content. See coverage of the rollout: TechCrunch.
- Steam (UK): “valid credit card on file” = age‑verified; simple but weakly bound and exclusionary. Help page spells it out: Steam Help. Community threads and press reflect the practical pain points (e.g., debit‑card users; shared devices).
- Bluesky (UK): age‑assurance via an integrator (KWS/Epic) mixing face/ID/payment‑card options; used as a reference by some midsize services (see sourced notes in our research).
These patterns map back to Ofcom’s method menu and fairness expectations. Services that bind to the user, delete data, and offer choices tend to generate less backlash.
Edge case: shared devices & family accounts
“Card on file” doesn’t prove the person using the device is an adult. On a shared PC or console, binding to the instrument (card on account) can misrepresent the user. Better designs bind the check to the account holder using estimation or IDV, then remember the result (token). Where family accounts are involved, services should default to teen‑safe settings until an account holder verifies. This aligns with Ofcom’s focus on outcomes rather than a single tool (children’s codes).
Edge case: users without credit cards, strong cameras, or bank apps
“Highly effective” is about outcomes, so offer more than one route. Good implementations provide non‑card, non‑face options: open banking (bank confirms “over 18”), MNO checks (carrier adult flag), PASS (digital proof of age), or ID+liveness via webcam. If estimation fails due to camera/lighting, a quick retry or fallback should be available. See Ofcom’s accepted methods and fairness expectations (children’s codes).
Follow‑question: how should services handle false negatives/positives?
Offer retries, switch methods, and provide a fast appeal. A mis‑ageing event shouldn’t strand an adult user. Provide a rapid retry path (e.g., better lighting) or a switch to IDV/open banking/MNO/PASS. Keep an appeal route with human review for edge cases. These steps satisfy Ofcom’s reliability/fairness lens without diluting the protection goal (children’s codes).
Follow‑question: what evidence should services publish?
Method choices, deletion/retention, fallback ladder, and appeal stats. Plain‑English notices should explain which methods are offered, whether images are deleted, what fallbacks exist, and how appeals work. Transparency reports should include high‑level metrics on verification success/failure and complaint outcomes. The OSA’s transparency expectations and Ofcom’s audit powers support this (see the government’s OSA explainer and Ofcom’s roadmap to regulation).
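As a rough illustration of what such high‑level metrics could look like when published, here is a hypothetical sketch (the field names and figures are invented, not an Ofcom‑mandated format):

```python
from dataclasses import dataclass

@dataclass
class AgeAssuranceTransparency:
    """Illustrative shape for published metrics; field names are invented,
    not an Ofcom-mandated schema."""
    period: str
    methods_offered: list[str]
    checks_attempted: int
    checks_passed: int
    fallback_rate: float                 # share of users needing a second method
    median_image_retention_seconds: int  # ~0 if deletion really is immediate
    appeals_received: int
    appeals_upheld: int

# Example with made-up numbers, purely to show the shape of a report.
report = AgeAssuranceTransparency(
    period="2025-Q3",
    methods_offered=["facial_estimation", "open_banking", "id_plus_liveness", "pass"],
    checks_attempted=120_000,
    checks_passed=112_400,
    fallback_rate=0.08,
    median_image_retention_seconds=0,
    appeals_received=340,
    appeals_upheld=190,
)
```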
Your rights & routes
You shouldn’t have to surrender your privacy to be an adult online. Good systems prove your age, then let the data go. And if a platform makes bad choices, you have ways to push back. This chapter is your pocket guide to deletion, alternatives, and where to complain when something’s off.
What happens to my photo/ID? Can I make them delete it?
Certified providers should delete images immediately after the check and keep only what’s needed (e.g., an “18+ OK” token tied to your account). Under UK data protection law (UK GDPR), you have rights to access, correction, and erasure where applicable. In practice: good providers state deletion clearly and explain retention for audit tokens. If a service claims deletion, they should be able to show it. You can exercise your rights with the provider (data access/erasure) and, if needed, complain to the ICO. See the UK GDPR overview (ICO) and Ofcom’s online safety hub for how these regimes interact. ICO materials updated in 2025 emphasise immediate deletion for age‑assurance images and minimal data handling.
Tip: look for independent certification and audits. For example, the Age Check Certification Scheme (ACCS) audits age‑assurance providers (ACCS), the Digital Identity and Attributes Trust Framework (DIATF) recognises certified identity/attribute providers (GOV.UK—DIATF), and the PASS scheme certifies proofs of age (PASS). Providers such as Yoti publish certifications and audit summaries covering deletion and accuracy.
How do I complain about a bad implementation?
Start with the platform’s complaints route (required by the OSA). If ignored or the design is unfair (e.g., “credit‑card only”), escalate to the regulator. Routes: (1) platform complaint/appeal, (2) Ofcom complaint about online safety issues (especially systemic ones), (3) ICO complaint if the issue is data protection (e.g., retention). We recommend Ofcom adopt a simpler user‑first intake that aggregates patterns so people don’t have to write long dossiers—but for now, use the current Ofcom complaints page and keep your evidence. For transparency duties that apply to larger/categorised services, Ofcom has final guidance on what reports must include (see Transparency reporting—final guidance (PDF)).
What if I don’t want to share ID at all?
Ask for a non‑ID route: facial age estimation (with deletion), open banking (bank confirms “over 18”), mobile‑network checks, or a PASS digital proof of age. Ofcom’s codes list multiple “highly effective” methods so you shouldn’t be forced into one route. If a service offers only one choice, challenge it using the platform complaint route and cite Ofcom’s children’s codes.
How do appeals and retries work if I’m mis‑aged?
You should be able to retry quickly (better lighting/camera) or switch to a stronger fallback (ID+liveness, open banking, MNO, PASS) without being locked out. A fair system explains the fallback ladder and offers a fast appeal with human review for edge cases (e.g., atypical faces, disability impacts). These expectations align with Ofcom’s fairness and reliability aims in the children’s codes.
Do small/federated communities have to do all this?
Duties scale. Many small servers won’t hit heavy thresholds, but federated admins are still “providers” and must do proportionate basics. Minimums worth doing: publish a simple safety policy (what’s allowed; reporting route), run a short risk assessment (what harms could appear; how you respond), and pick low‑friction age‑assurance options if you ever need them (email estimation, PASS). Keep short logs of decisions. Ofcom’s roadmap and hub explain proportionality.
Context: cookie law → GDPR → OSA
We’ve tried “make the user click more buttons.” It didn’t fix much. GDPR moved some of the work backstage, where it belongs. The Online Safety Act keeps pushing in that direction: less banner theatre, more responsibility on the people who design the systems we use every day.
Why did cookie law feel rubbish?
It pushed the burden onto users: endless banners and dark patterns to click through, while many sites kept tracking. It delivered lots of friction and little systemic change. Context: the UK’s “cookie law” obligations came via PECR (Privacy and Electronic Communications Regulations) implementing the EU ePrivacy Directive (early 2010s) alongside consent guidance; over time many sites implemented maximally annoying prompts rather than reducing tracking.
What did GDPR improve?
It forced more backend changes (lawfulness, purpose limits, data‑minimisation), gave regulators sharper teeth (fines), and created user rights (access/erasure). Still imperfect, but harder to “banner‑wash” away. Timing: the EU’s GDPR applied from May 25, 2018, and the UK retained it as “UK GDPR” after Brexit. GDPR pushed controllers to redesign data flows, document decisions (DPIAs), and justify retention—shifting effort from user clicks to service systems.
Where should OSA land?
Squarely on business systems. The goal is not to make you click more pop‑ups, but to make services run safer designs: better defaults for kids, real reporting and appeals, and age checks that don’t hoard data. That’s why Ofcom’s codes are method‑agnostic but outcome‑focused (robustness, reliability, fairness), and why the law empowers auditing rather than “make every adult show ID” edicts (see gov.uk OSA explainer and Ofcom’s children’s codes).
How does this compare internationally?
The EU’s Digital Services Act (DSA) pushes platforms to assess and mitigate risks, adds data access for vetted researchers, and strengthens transparency (fully enforced across 2024). The US debate around the Kids Online Safety Act (KOSA) emphasises design duties and age‑assurance but faces constitutional challenges and a patchwork of state approaches. The direction is similar: less banner friction, more systemic obligations. The UK’s OSA is part of that shift—child safety and illegal harms up front, transparency and audits on the back end. See comparisons that outline differences and overlaps: Slaughter and May—EU DSA vs UK OSA, Bristows—OSA vs DSA, and Ofcom’s roadmap to regulation.
Offline analogies?
Seatbelts, food hygiene, workplace safety: rules target the people who design and operate the systems, not the end user who has the least control.That framing helps: you shouldn’t have to do all the work to be safe online. The obligations sit primarily with the services.
Practical takeaway
If a platform makes you jump through hoops, that’s usually an implementation choice. Ask for privacy‑preserving options (deletion, non‑ID routes). The law doesn’t require making your life harder; it requires the service to take responsibility. See [gov.uk OSA explainer](https://www.gov.uk/government/publications/online-safety-act-explainer/online-safety-act-explainer).
Kids, speech & the messy middle
Keeping people safe and keeping the internet open can pull in different directions. The trick is balance: give teens room to grow without dumping them in the deep end, and let adults speak freely without making targets unsafe. This chapter sits in that tension and looks for practical ways through.
“Kids safe and adults free to speak” — is that enough?
Close, but we need nuance. Total shielding can backfire: teens go from “kid mode” to the deep end overnight. Gradual, supported exposure (with controls and context) builds resilience. The aim is harm‑reduction, not bubble‑wrap. That’s why children’s duties focus on safer defaults, reduced unsolicited contact, and friction around adult/harmful content. The intent is a graded experience rather than an on/off switch (see Ofcom—children’s codes).
Platform X blocked my DMs — is that required?
Not necessarily. The Act doesn’t force blanket DM blocks for anyone. Duties are risk‑based and should be proportionate.
- What’s covered: user‑to‑user services are in scope, and private messaging can be in scope when content one person sends can be encountered by another. But the law targets system risks, not auto‑gating every conversation by default.
- Over‑compliance: one‑size‑fits‑all DM blocks or “prove you’re 18 to message anyone” can be a blunt shortcut. If both accounts are clearly adult, long‑standing, and show low‑risk patterns, heavy gates usually aren’t needed.
- Proportionate options (legal‑team friendly): limit unsolicited contact, add friction for unknown/teen accounts, rate‑limit spammy behaviour, provide strong blocking/reporting, and keep safer defaults for young users. Let normal DMs proceed where risk is low (e.g., verified‑adult pairs; established mutuals).
- Good faith: the law is specific about what qualifies. A legal team that engages with the guidance can find workable routes that protect teens without breaking everyday use.
If your DMs were blocked across the board, that may be a design choice rather than a legal requirement. Use the platform complaints route and point to proportionate alternatives.
Adult legal content: accounts vs anonymity, and “pay for your porn”?
Gates around porn and other 18+ but legal material can nudge people toward signing in or creating accounts instead of browsing anonymously. That increases the amount of data tied to a person (email, device signals, payment history) and, on shady sites, can raise the risk of misuse or leaks. There is a flipside. Buying from reputable providers changes incentives: workers are paid, terms are clearer, and businesses have more to lose if they mishandle data. If you choose to access adult content:
- Prefer well‑known providers with clear privacy policies and support for deletion/portability.
- Use the most privacy‑preserving age‑assurance route offered (e.g., estimation with deletion rather than ID requirements).
- Keep accounts minimal, throwaway (separate email; no unnecessary personal details).
- Avoid “free” sites with a history of scraping or poor moderation: the data and safety risks are higher.
This can feel like moralising, and it walks a fuzzy line between access and harm‑reduction. The law aims to gate 18+ areas; your own choices can reduce exposure to sketchy actors while supporting creators and providers who do act responsibly.
Does the OSA censor legal speech?
The adult “legal but harmful” takedown duty was dropped during the Bill’s passage.
Adults get tools to avoid content, not new Act‑level bans on legal speech. Real chilling risks mostly come from platform choices (over‑zealous filters, vague rules). The OSA requires clearer terms, appeals, and transparency to keep that in check (see the government’s OSA explainer).
In short: platforms remain responsible for enforcing their own terms consistently. Users should get appeals and explanations when content is actioned. Regulators look at the quality of those systems rather than dictating adult speech bans.
What about harassment, hate and pile‑ons?
Freedom without safety isn’t meaningful for many people. The OSA pushes platforms to enforce their own rules consistently and provide better reporting and user controls (filters, blocks), so adult speech can thrive without making targets unsafe. That includes tools for blocking unsolicited contact and limiting recommendations around sensitive content (children’s codes).
What about encrypted chats (E2EE)?
E2EE isn’t banned. A power exists to require “accredited technology” to detect illegal material, but only if technically feasible and with safeguards.
Ministers have said they won’t use that power until a workable, privacy‑preserving approach exists. For now, the practical focus is on platform systems (risk assessments, safer defaults, reporting) rather than breaking encryption.
As best we can tell, a hypothetical ‘secure but scannable’ E2EE workaround doesn’t look like something that could sensibly exist. Nobody credible is proposing a ‘skeleton key’ or backdoor solution (for many very good reasons), and even if such technology somehow became available, it seems unlikely it would be deployed.
What this means in practice:
- Your encrypted apps (e.g., Signal, WhatsApp) continue to offer end‑to‑end encryption.
- There is an unresolved debate about whether scanning inside encrypted apps can ever be done safely; several providers say they would not weaken E2EE to do so.
- If technology emerges that meets strict tests, the regulator could consider it; until then, attention is on non‑encryption measures.
Note: this area is contested and evolving. The law sets the option; the current stance is “not until feasible,” and many companies strongly oppose any weakening of E2EE.
Do creators face new “content rules” from the Act?
No—creators don’t get a new legal rulebook from the OSA. The effect is indirect: platforms must apply their own terms more consistently, improve reporting/appeals, and be clearer about decisions.That can still affect borderline content (e.g., edgy satire, shock thumbnails) if a platform’s policy is strict, but the lever is the platform’s policy and systems, not a new Act‑level ban on adult legal speech. Appeals and transparency expectations should improve the conversation around mistakes (see Ofcom—hub).
How effective are these measures, really?
It depends on the design. The law sets duties; outcomes hinge on how services implement them.
- Age assurance: evidence suggests facial age estimation can reliably separate under‑18s from adults when used with sensible buffers and rapid deletion; near age thresholds it needs fallbacks (ID+liveness, bank, PASS) or a lighter‑touch route. Certification improves trust in these claims, and widely used providers (such as Epic’s KWS) should work with the government to openly improve on their assurances.
Ofcom frames “highly effective” as an outcome standard (robust, reliable, fair) rather than a single tool, which is why layered flows perform best. (See Ofcom children’s codes and guidance.)
- System duties: clearer reporting/appeals and safer teen defaults reduce friction for targets and raise the bar for repeat harms; effectiveness varies by platform maturity and follow‑through (audits, transparency).
What we don’t yet have is a single “X% safer” number across all harms. Regulators and providers will publish data over time (removal speeds, complaint outcomes, recidivism). Treat early numbers as directional rather than definitive.
Sources: Ofcom online safety hub and children’s codes; provider certification notes (ACCS/DIATF/PASS) on deletion and accuracy.
What is it protecting children from, specifically?
Two main buckets:
- Clearly illegal harms: e.g., child sexual abuse material, terrorism content, fraud. Platforms must assess the risks of this content and design systems to detect and remove it more effectively and promptly (codes and guidance specify measures and processes).
- Likely‑to‑harm content for children: e.g., pornographic content; encouragement of self‑harm or eating disorders; abusive contact. Duties include safer defaults, gated access, and tools that reduce exposure and unwanted contact.
Effectiveness depends on proportionate implementation: age gates that actually keep most under‑18s out of adult areas; recommendations that don’t push younger users toward harmful spirals; and appeals that quickly fix mistakes. Ofcom’s outcome tests (robustness, reliability, fairness) are the yardstick for judging if a chosen method works in practice.
Sources: Ofcom illegal‑harms statements and children’s codes; overview of illegal harms; regulator roadmaps cited in this report.
Politics & perception
Policy meets people through headlines and hot takes. Age checks are the visible tip, so they soak up the attention; the quieter plumbing barely gets a mention. This section helps separate outrage at clumsy rollouts from bigger questions about how we want platforms—and rules—to work.
Why did age verification become the story?
It’s visible and deadline‑driven: ordinary adults meet it at point‑of‑use. The harder work (risk assessments, safer defaults, audits) happens out of sight. Media and creators naturally amplify what people can feel right now (see Ofcom’s phased [roadmap](https://www.ofcom.org.uk/online-safety/illegal-and-harmful-content/roadmap-to-regulation)). Coverage through mid‑2025 and beyond focused on Reddit’s selfie‑based rollout and Steam’s credit‑card approach, which sparked strong reactions across gaming and creator spaces. See, for instance, the BBC on Reddit’s UK rollout (BBC News) and developer press on Steam’s card‑on‑file design (Game Developer).
- What people see: age checks at login or when opening adult areas.
- What they don’t see: risk assessments, safer defaults for teens, audits and reporting.
- Why it spreads: media and creators focus on visible changes that affect everyday use.
- Examples shaping the narrative: Reddit’s selfie+ID fallback; Steam’s card‑on‑file design.
Manufactured disconsent: why gamer spaces ignite
Frustration with clumsy implementations is real. Some actors redirect that into rejecting governance itself (“all rules are tyranny”). Gaming communities have long had the tools for fast mobilisation (raids, brigades, review‑bombs). That makes them fertile ground for turning “bad design choice” into “law is illegitimate.” Our sourced notes trace this pattern through earlier cycles (e.g., Gamergate coordination) and show how platform choices can fuel it. Polling in 2025 suggests broad support for protecting kids alongside scepticism about effectiveness. For example, a July 2025 YouGov poll reported high support for age verification but doubts about outcomes, and an August 2025 Ipsos study found many Britons back checks while questioning how well they work. Linking to such polls helps explain why public sentiment can be pro‑safety yet critical of clunky designs. (YouGov poll results, Ipsos—Adults and age checks).
- Real friction: blunt designs (e.g., card‑only gates) annoy people and exclude some users.
- Amplification: organised groups can turn design anger into “the law is the problem.”
- Fast spread: gaming spaces are built for mobilisation (raids, brigades, review‑bombs).
- Public mood: support for protecting kids sits alongside doubts about real‑world results (YouGov; Ipsos).
What should platforms do to avoid backlash?
Design for privacy first (deletion, non‑ID options), give a choice of methods, explain plainly, and provide fast appeals. Don’t shift the burden onto users. People will still grumble, but the temperature drops when the friction feels respectful and optional (mirroring Ofcom’s outcomes‑not‑one‑tool approach in the [children’s codes](https://www.ofcom.org.uk/online-safety/illegal-and-harmful-content/statement-protecting-children-from-harms-online)).
- Lead with privacy: delete images; avoid ID when not necessary.
- Offer routes: face, ID+live, bank, PASS—let users pick.
- Explain in‑product: simple language; no mystery steps.
- Fix mistakes quickly: retries and appeals that actually work.
Follow‑up: why do some companies over‑comply?
Incentives: legal uncertainty, fear of large fines, and tight timelines push legal teams toward blunt, audit‑friendly choices. A single gate (e.g., “card on file”) is easier to evidence than nuanced, targeted design. But it’s also more exclusionary and angers users—ironically fuelling the backlash. Ofcom’s approach leaves room for layered, privacy‑preserving methods; companies should use that room.
- Uncertainty + big penalties → “easy to prove” designs.
- Single, blunt gates are simple to audit but frustrate users and exclude some people.
- Better path: layered, privacy‑preserving methods that still pass audits.
- Regulator stance: Ofcom’s framework allows layered methods—use that space.
Enforcement reality & what to watch next
Rules only matter if they change what services do. Enforcement is the unglamorous engine: audits, questions, fixes—and fines when needed. It won’t make the internet perfect, but it can push platforms toward safer, clearer designs. Here’s what action looks like in practice and how to track progress.
What will Ofcom actually do in 2025–26?
- Require risk assessments (illegal content; children’s risks) and review them.
- Audit systems and request data where needed.
- Issue guidance/codes and check designs match the risks claimed.
- Use penalties when providers won’t fix issues (fines up to £18m or 10% of qualifying worldwide revenue, whichever is greater; in rare cases, court‑ordered blocking).
See Ofcom’s phased roadmap and the government’s OSA explainer.
What are the penalty caps and disruption tools, exactly?
Ofcom can fine up to £18m or 10% of qualifying worldwide revenue (whichever is greater) and, in severe cases, seek court‑ordered business disruption measures (e.g., payment/ads withdrawal or ISP access restrictions). See Ofcom’s enforcement guidance and the government explainer: [Ofcom—Enforcement guidance (PDF)](https://www.ofcom.org.uk/siteassets/resources/documents/online-safety/information-for-industry/illegal-harms/online-safety-enforcement-guidance.pdf?v=391925), [gov.uk OSA explainer](https://www.gov.uk/government/publications/online-safety-act-explainer/online-safety-act-explainer).
How does a case usually progress?
Informal engagement → information notices/audits → improvement steps → penalties if refusal or repeated failure. Most cases end with fixes, not fines, but credible penalties focus minds.
Early wins vs misfires to track
- Wins: faster removal of clearly illegal material; safer teen defaults; multiple age‑assurance routes shipped by large platforms.
- Misfires: one‑route gates (e.g., credit‑card only), poor appeals, slow deletion practices, or policies that shift the burden onto users.
Ofcom provides periodic bulletins and statements tracking progress—use these as primary sources when citing trends and outcomes (see Ofcom’s online safety industry bulletins).
What might change under this government?
Strategic priorities (e.g., safety‑by‑design, violence against women and girls) will shape Ofcom’s focus. Expect more consultations and iterative codes—so feedback from users and smaller providers matters.
Follow‑up: how can users/SMEs prepare?
- Users: keep screenshots and links when a design is unfair; use platform appeals first; escalate to Ofcom for systemic issues; use ICO if the issue is data handling/retention.
- SMEs: document a short risk assessment; pick proportionate age‑assurance options (offer a non‑ID route); publish a plain‑English safety page and appeals; keep deletion/tokenisation minimal. See Ofcom’s [online safety hub](https://www.ofcom.org.uk/online-safety).
Deeper dives & missed angles
We’ve covered the core of the Online Safety Act, but there’s always more: edge cases, intersections with other fields, and forward-looking questions. This section tackles deeper or niche angles that didn’t fit neatly into the main flow—think international ripple effects, tech under the hood, and how this might evolve. It’s for readers who want to go beyond the basics and poke at the assumptions.
How does the OSA interact with UK data protection laws?
The OSA doesn’t override UK GDPR but complements it: platforms must still justify data handling for age assurance (lawful basis, minimisation) and respond to rights requests (erasure, access). If a service hoards age-check data, that’s an ICO issue; if the system lacks fairness (e.g., biased estimation), it could breach both regimes. In practice, Ofcom and the ICO coordinate via a memorandum of understanding (MoU) to avoid double jeopardy—Ofcom handles safety systems, ICO focuses on data compliance. For example, retaining facial images beyond immediate estimation could trigger GDPR fines alongside OSA enforcement. See the ICO-Ofcom MoU (PDF) and UK GDPR guidance on processing special category data like biometrics.
What about decentralized or open-source platforms?
Decentralized services (e.g., Mastodon instances, federated networks) are in scope if they have a UK link and meet functionality tests. Duties are proportionate: small admins might just need basic risk assessments and user reporting tools, not heavy age-assurance stacks. Challenges include attribution (who’s the “provider” in a federated setup?) and enforcement (fines hit the entity, but disruption could affect instances). Ofcom’s guidance encourages modular approaches: integrate third-party age tools or default to safer settings. See Ofcom’s scope overview for “multi-provider” services and the Regulated services: overview of scope (PDF).
How might AI change moderation and age assurance?
AI is already in play for content detection (e.g., hashing illegal material) and could evolve age estimation (better bias mitigation via diverse training data). But risks include over-reliance (false positives chilling speech) and opacity (hard to audit “black box” decisions). Ofcom’s codes require explainable systems and human oversight for high-stakes calls; future iterations may address AI-specific risks like generated harms (deepfakes). Providers using AI for estimation must certify accuracy and fairness—look for updates in Ofcom’s 2026 codes and check the current online safety hub for emerging guidance on generative AI.
Could the OSA lead to mission creep or power expansion?
The Act has safeguards (parliamentary oversight, phased codes), but critics worry about broadening “harm” definitions via secondary legislation or strategic priorities. For example, ministers can add priorities (e.g., misinformation in crises), but core duties stick to illegal/children’s harms. Watch for judicial reviews challenging over-reach or consultations on code updates. International parallels (e.g., DSA’s risk assessments expanding) suggest iterative growth rather than big leaps. See the government’s strategic priorities statement (May 2025) and Ofcom’s response.
What are the economic impacts on platforms and creators?
Compliance costs vary: large platforms face audits and system redesigns (potentially millions); smaller ones get proportionality (e.g., off-the-shelf tools). Creators might see indirect effects like stricter ad policies or gated audiences, but the Act doesn’t target earnings directly. Positive angles: safer spaces could boost trust and retention; transparency duties level the field for ethical providers. Economic analyses (e.g., impact assessments) estimate net benefits from reduced harms outweigh costs—see the OSA regulatory impact assessment (2023, updated 2025). Track creator feedback via unions like the Creators’ Rights Alliance.
How does this affect non-UK users or global platforms?
Global platforms must ring-fence UK-linked users (e.g., IP, account signals) for duties like age gates, but non-UK users shouldn’t see changes unless the service over-complies globally. Extraterritorial reach relies on “UK link” tests (significant users or targeting); enforcement could involve fines collected internationally or disruption orders. Ripple effects: some platforms might harmonize policies worldwide to simplify ops, influencing users elsewhere. See Ofcom’s international enforcement approach and comparisons with DSA’s very large platforms regime.
Accessibility: how does this work for disabled users?
Ofcom expects “fair” systems that don’t exclude based on disability—e.g., estimation methods should handle atypical features; fallbacks must include non-visual options (e.g., voice liveness, PASS via screen readers). If a flow fails accessibility tests, challenge it via platform complaints or escalate to Ofcom/Equality and Human Rights Commission (EHRC). Guidance emphasizes inclusive design; see Ofcom’s fairness expectations in children’s codes and EHRC’s online accessibility resources.
How will we know if the OSA is working?
Metrics include removal speeds for illegal content, verification success rates, complaint outcomes, and harm-reduction surveys. Ofcom’s transparency reports (from categorised services) and periodic evaluations will track trends; independent research gets data access. Early indicators: fewer reported exposures for children; better user satisfaction with appeals. Full evaluations are slated for 2027+—see Ofcom’s evaluation framework consultation and government post-implementation reviews.
What to take with you
What you’ll notice most are the visible bits: age checks before adult content, clearer reporting and appeals, and safer settings for younger users. Good designs feel light—choices of method, deletion of sensitive data, clear explanations. Blunt designs feel like walls—single routes, vague notices, lazy implementations and friction for everyone.
Age assurance is a toolkit, not a single tool. “Highly effective” is an outcome standard: strong enough to keep most under‑18s out of adult-only spaces, fair, and reliable. In practice, that means layered flows: quick estimation with deletion, with stronger fallbacks (ID + liveness, bank, PASS) only when needed.
Some protections are easy to circumvent (for example, by using a VPN to appear to be in a country without similar laws), but that can expose you to more risk than the known, audited verification routes, and a law being easy to circumvent isn’t a reason not to have it. The point is to make platforms offer the tools, and to push other places to act responsibly in future.
You have agency. You can ask for alternatives, expect immediate deletion of images used for estimation, benefit from good‑faith signals on your account (such as an email‑verified account held for many years), and appeal. If a platform makes poor choices—credit‑card‑only gates, no real appeals—complain.
Keeping kids safer and keeping adults free to speak sometimes pull in different directions. The Act tries to land in the middle: safer defaults and better tools without new bans on legal adult speech. That balance relies on platform choices—transparent policies, consistent enforcement, and working systems. It didn’t quite anticipate the extent to which some bad actors would try to sway policy decisions, but that’s for a different report.
Politics will keep focusing on what’s most visible (like age checks). That can hide the quieter plumbing: risk assessments, audits, and better system design. Some companies will over‑comply to be “safe on paper”, but even the more resistant platforms have already added age gating, determined through various signals. The better path is proportionate fixes that protect children without breaking everyday use.
Enforcement is the unglamorous engine. Ofcom will audit, consult, and penalise when fixes don’t happen. Expect progress to show up in boring places—reporting flows, defaults, documentation—long before you see headlines.
Practical takeaways:
- Prefer reputable providers and privacy‑preserving options; keep accounts minimal when you can.
- Expect choices of method and clear explanations; push back on single‑route gates.
- If something goes wrong, retry, switch methods, or appeal—and if you feel the approach taken was exploitative or a dark pattern, or even just lazy or inconsiderate of your time, consider reporting to the Ofcom enforcement hub.