Social Media Regulation & Section 230 Reform
Social media platforms have become the primary information ecosystem for hundreds of millions of Americans, and Section 230's blanket liability shield must be reformed to hold platforms accountable for algorithmic amplification of harmful content — especially content targeting children.
Last updated: March 12, 2026
Domain
Technology & Civil Liberties → Platform Governance → Content Moderation & Platform Accountability
Position
Section 230’s blanket immunity made sense when platforms were passive bulletin boards — it doesn’t make sense when algorithms actively amplify harmful content for profit. Platforms should retain protection for good-faith content moderation, but lose immunity when their algorithms knowingly promote content that harms children, spreads dangerous misinformation, or facilitates violence.
The Kids Online Safety Act passed the Senate 91–3 with overwhelming bipartisan support but was never brought to a House floor vote. Meanwhile, teen mental health has deteriorated dramatically alongside social media adoption, platforms’ own internal research has documented harm they chose not to address, and both parties agree Section 230 needs reform — they just disagree on how. Children are paying the price for congressional inaction.
Key Terms
- Section 230: A provision of the Communications Decency Act (1996) that provides two key protections: (1) platforms aren’t liable for content posted by users (“No provider… shall be treated as the publisher”), and (2) platforms can moderate content in good faith without losing that protection. The law was written when the internet was a collection of message boards; it now shields trillion-dollar companies whose algorithms actively shape what billions of people see.
- Algorithmic Amplification: The process by which platform algorithms select, prioritize, and promote content to maximize engagement — effectively deciding what users see. Algorithms don’t just host content; they curate, recommend, and amplify it. Content that provokes outrage, fear, and anger gets more engagement, so algorithms systematically boost the most extreme and harmful material (see the illustrative sketch after this list).
- KOSA (Kids Online Safety Act): Bipartisan legislation requiring platforms to “prevent and mitigate” harms to minors — including mental health disorders, addiction, physical violence, and sexual exploitation. KOSA passed the Senate 91–3 in 2024 and advanced through the House Energy and Commerce Committee, but it was never brought to a floor vote despite having 75+ co-sponsors upon reintroduction.
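To make “amplification” concrete, here is a minimal, illustrative sketch of an engagement-ranked feed. Everything in it is invented for illustration (the Post fields, the hand-tuned weights, the predict_engagement scorer); real recommenders use learned models over thousands of signals, but the structural point is the same: the platform, not the user, decides the ordering.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    outrage_score: float  # hypothetical classifier output, 0.0-1.0
    relevance: float      # hypothetical personalization score, 0.0-1.0

def predict_engagement(post: Post) -> float:
    """Toy engagement predictor: provocative content scores higher.

    The weights are made up; they just encode the dynamic described
    above, that outrage-provoking material earns more engagement.
    """
    return 0.6 * post.outrage_score + 0.4 * post.relevance

def rank_feed(posts: list[Post], k: int = 10) -> list[Post]:
    # A chronological bulletin board would return posts in arrival order;
    # an engagement-optimized feed re-orders them by predicted engagement.
    return sorted(posts, key=predict_engagement, reverse=True)[:k]
```

Every line of rank_feed beyond returning posts in arrival order is an editorial choice the platform makes, which is the distinction the reform debate turns on.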
Scope
- Focus: Reforming Section 230 to hold platforms accountable for algorithmic amplification, protecting children online, and establishing a regulatory framework for platform accountability
- Timeframe: Section 230’s passage (1996) through current reform efforts (2025–2026)
- What this is NOT about: Government censorship of political speech, banning social media outright, or repealing Section 230 entirely (which would likely increase censorship, not reduce it)
The Case
1. Section 230 Was Written for a Different Internet — Algorithms Changed Everything
The Point: Section 230 made sense when platforms passively hosted user content. It doesn’t make sense when algorithms actively select and amplify the most harmful content because it drives engagement and advertising revenue.
The Evidence:
- Section 230 was enacted in 1996, when the internet consisted of message boards, email lists, and early websites. Today’s platforms use sophisticated recommendation algorithms that determine what billions of users see — not just hosting content, but actively curating, promoting, and amplifying it based on what maximizes engagement time.
- Internal documents from Meta (leaked by whistleblower Frances Haugen in 2021) revealed that Facebook’s own research found Instagram was toxic for teenage girls — worsening body image for 1 in 3 — and that the company chose not to act on its own findings because the engagement was profitable.
- Bipartisan support for reform is unprecedented: the Sunset Section 230 Act, introduced in 2025 by Senators Durbin and Graham, drew sponsors from across the ideological spectrum. Both parties agree reform is needed — Democrats focus on misinformation and child safety; Republicans focus on perceived political censorship.
The Logic: There’s a fundamental difference between a bulletin board and an algorithm. A bulletin board displays what people post; an algorithm decides what people see. When a platform’s algorithm recommends increasingly extreme eating disorder content to a teenager, that’s not “hosting” — it’s promoting. Section 230 was never intended to immunize the active, algorithmic promotion of content that platforms know is harmful. The law should distinguish between passive hosting (which deserves protection) and algorithmic amplification (which should carry accountability).
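A toy simulation makes that ratchet visible. This is not any platform’s algorithm: the engagement function below is an assumption (more intense content earns more engagement, consistent with the dynamics described above), and “intensity” is a stand-in for how extreme the recommended content is.

```python
import random

def simulate_feedback_loop(start_intensity: float, steps: int, seed: int = 0) -> list[float]:
    """Toy engagement-driven drift, for illustration only.

    Each step, the recommender tries a slightly more and a slightly less
    intense candidate and keeps whichever earns more engagement. If
    engagement rises with intensity (the assumption encoded below),
    recommendations ratchet toward the extreme.
    """
    rng = random.Random(seed)

    def engagement(intensity: float) -> float:
        # Assumed engagement curve: intensity plus noise. The entire
        # drift effect follows from this assumption.
        return intensity + rng.gauss(0, 0.05)

    intensity = start_intensity
    path = [intensity]
    for _ in range(steps):
        candidates = [min(1.0, intensity + 0.05), max(0.0, intensity - 0.05)]
        intensity = max(candidates, key=engagement)
        path.append(intensity)
    return path

# Starting from mild content, recommended intensity trends steadily upward.
print(simulate_feedback_loop(start_intensity=0.2, steps=20))
```

Nothing here requires malice: a system that greedily optimizes engagement, paired with an audience that engages more with intense material, drifts toward the extreme on its own.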
Why It Matters: Platform algorithms shape the information environment for hundreds of millions of Americans. When those algorithms systematically boost outrage, misinformation, and content harmful to children — because it’s profitable — and face zero accountability for the consequences, the result is a degraded public discourse, a mental health crisis among young people, and an epistemic environment where reality itself is contested.
2. Children Are Being Harmed — and Platforms Know It
The Point: The evidence that social media is harming children’s mental health is overwhelming, platforms’ own internal research confirms it, and Congress has bipartisan supermajority support for protections but still can’t get legislation to the president’s desk.
The Evidence:
- KOSA passed the Senate 91–3 — one of the most bipartisan votes in modern Senate history — and advanced through the House Energy and Commerce Committee. Despite 75+ co-sponsors upon reintroduction, it has not received a House floor vote. Speaker Johnson never scheduled the original bill; the reintroduction faces similar obstacles (Congressional tracking, 2025).
- KOSA would require platforms to “prevent and mitigate” harms to minors including mental health disorders, addiction, physical violence, and sexual exploitation — and would give parents new tools and children the option to disable addictive design features like infinite scroll and autoplay.
- The U.S. Surgeon General issued an advisory in 2023 calling social media a “profound risk” to children’s mental health, noting that adolescents who spend more than three hours per day on social media face double the risk of anxiety and depression symptoms. Nearly half of adolescents report social media makes them feel worse about their bodies.
The Logic: We don’t allow companies to market tobacco to children, put children in hazardous workplaces, or sell alcohol to minors — because society recognizes that children require special protection from commercial interests. Social media platforms are deliberately designed with addictive features (infinite scroll, variable-reward notifications, social comparison metrics) that exploit developing brains. The platforms know this because their own research shows it. KOSA isn’t censorship — it’s the same principle we apply to every other industry that affects children: you have to take reasonable steps to prevent harm.
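For readers unfamiliar with the term, “variable reward” is the slot-machine pattern: unpredictable payoffs at the same average rate. The sketch below is hypothetical (the function names and the probability are invented, not platform code), but it shows the contrast that behavioral psychology associates with the most compulsive checking.

```python
import random

def notify_fixed(actions: int, every: int = 5) -> list[bool]:
    """Predictable schedule: one reward after every `every` actions."""
    return [(i + 1) % every == 0 for i in range(actions)]

def notify_variable(actions: int, p: float = 0.2, seed: int = 1) -> list[bool]:
    """Variable-ratio schedule: each action rewards with probability p.

    The long-run reward rate matches the fixed schedule above, but the
    unpredictability is what drives compulsive checking, which is the
    design pattern the text describes. Illustrative only.
    """
    rng = random.Random(seed)
    return [rng.random() < p for _ in range(actions)]
```

Both schedules pay out once per five actions on average; only the unpredictable one keeps users pulling the lever.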
Why It Matters: A generation of children is being raised in an information environment that platforms’ own researchers call harmful. The 91–3 Senate vote proves this isn’t partisan. The obstacle isn’t public support or congressional consensus — it’s procedural dysfunction and tech industry lobbying. Every year of delay is another year of children experiencing documented harm.
3. Reform Can Be Done Without Becoming Government Censorship
The Point: The right reform targets algorithmic amplification and platform design — not individual speech. It preserves Section 230’s core protection for good-faith moderation while adding accountability for platforms’ own algorithmic choices.
The Evidence:
- Multiple reform proposals thread the needle between accountability and free speech: KOSA targets platform design (addictive features, algorithmic recommendations to children) rather than specific content. The EARN IT Act focuses on child sexual exploitation. The Sunset Section 230 Act sets an expiration date on the current immunity, forcing Congress and the platforms to negotiate a replacement framework.
- The ACLU has raised concerns that KOSA could lead to over-censorship of LGBTQ+ content or other protected speech — a legitimate concern that has led to revisions narrowing the bill’s scope and clarifying that it targets platform design features, not specific categories of content.
- Freedom House has noted that poorly designed Section 230 reform could backfire by causing platforms to over-censor to avoid liability. The key distinction is between targeting the algorithm (which the platform controls) and targeting the speech (which users create).
The Logic: The false binary is “either platforms are immune or we get government censorship.” The reality is more nuanced. Platforms make algorithmic choices that amplify some content and suppress other content — those choices can be regulated without the government dictating what individuals can say. Requiring platforms to stop algorithmically recommending eating disorder content to teenagers isn’t censorship — the content would still exist; it just wouldn’t be actively pushed into children’s feeds. The reform targets the recommendation engine, not the speech.
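As a concrete sketch of “regulate the recommendation engine, not the speech”: everything below is hypothetical (the Item type, the topic labels, the filtering rule), but it shows the key property. The item stays hosted and reachable; it simply drops out of the ranked feed pushed at minors.

```python
from dataclasses import dataclass

# Hypothetical topic labels a safety classifier might assign.
RESTRICTED_FOR_MINORS = {"eating_disorder_promotion", "self_harm"}

@dataclass
class Item:
    text: str
    topics: set[str]

def recommend(candidates: list[Item], user_is_minor: bool) -> list[Item]:
    """Design-level rule: hosting is untouched (every item remains
    visible via direct link or search); the recommender just stops
    actively pushing restricted topics into minors' feeds."""
    if user_is_minor:
        candidates = [c for c in candidates if not (c.topics & RESTRICTED_FOR_MINORS)]
    return candidates  # ranking would run unchanged on the filtered pool

items = [Item("salad recipe", {"cooking"}),
         Item("extreme fasting tips", {"eating_disorder_promotion"})]
print([i.text for i in recommend(items, user_is_minor=True)])   # ['salad recipe']
print([i.text for i in recommend(items, user_is_minor=False)])  # both items
```

The regulated object is the filtering and ranking step the platform controls, not the text the user posted.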
Why It Matters: Getting the design right matters enormously. Bad reform could legitimize government censorship, suppress marginalized voices, or entrench dominant platforms (which can afford compliance) at the expense of smaller competitors. Good reform — targeted at algorithmic amplification, platform design features, and duty-of-care for children — can hold platforms accountable without becoming a speech police.
Counterpoints & Rebuttals
Counterpoint 1: “Reforming Section 230 would destroy the internet — small websites and startups can’t afford compliance.”
Objection: Section 230 doesn’t just protect Facebook and Google — it protects every website with user comments, every small forum, every startup. Making platforms liable for user content would crush small competitors and entrench Big Tech, which can afford legal teams.
Response: Well-designed reform targets algorithmic amplification, not passive hosting — which means small websites that don’t use recommendation algorithms to boost content wouldn’t be affected. KOSA exempts whole categories of services and reserves its heaviest transparency obligations for platforms with large user bases. The Sunset Section 230 Act is explicitly framed as a way to force the largest platforms to the negotiating table. The concern about crushing startups is valid for poorly designed reform (which is why design matters), but it’s not an argument against reform — it’s an argument for smart reform that differentiates between a trillion-dollar company’s recommendation engine and a community forum’s comment section.
Follow-up: “But any liability creates a chilling effect — platforms will over-moderate to avoid risk.”
Second Response: They’re already over-moderating in some areas and under-moderating in others — based on whatever maximizes engagement, not what protects users. The current system doesn’t produce good moderation; it produces profitable moderation. Liability for algorithmic amplification would actually improve moderation by giving platforms a reason to care about the consequences of their algorithmic choices, rather than just the engagement metrics.
Counterpoint 2: “This is just Democrats wanting to censor conservative speech.”
Objection: Democrats push Section 230 reform because they want platforms to suppress conservative viewpoints, misinformation claims are subjective, and government involvement in content moderation would inevitably lead to political censorship.
Response: The Sunset Section 230 Act was co-sponsored by Lindsey Graham, a conservative Republican. KOSA passed the Senate 91–3, including virtually every Republican. Both parties want reform — they just have different concerns. Republican Senator Ted Cruz chairs the committee handling KOSA. The bipartisan alignment on child safety and platform accountability is one of the few genuine areas of congressional consensus. And the best reform proposals — targeting algorithmic amplification rather than specific content — don’t give the government power to decide what speech is acceptable.
Follow-up: “But who decides what counts as ‘harmful content’? That’s inherently political.”
Second Response: That’s exactly why the best proposals target platform design features (algorithmic amplification, addictive design, data collection from minors) rather than content categories. KOSA’s most recent version focuses on a duty of care — requiring platforms to take reasonable steps to prevent documented harms — rather than defining prohibited speech. The distinction matters: requiring a platform to stop algorithmically pushing self-harm content to depressed teenagers is different from telling users what they can post.
Counterpoint 3: “Parents, not government, should be responsible for what children see online.”
Objection: It’s parents’ job to monitor their children’s internet use. Government regulation is a poor substitute for good parenting. Parents can use existing controls, limit screen time, and teach digital literacy without a federal law.
Response: Parental controls are outmatched by billion-dollar companies employing thousands of engineers specifically to make their products addictive. Platforms are designed to circumvent parental oversight — algorithms learn children’s vulnerabilities and exploit them. And parental control tools are inadequate: they can’t see what the algorithm recommends within an app, they can’t prevent addictive design features, and they can’t stop platforms from collecting children’s data. KOSA would give parents better tools — including the ability to opt children out of addictive features — while requiring platforms to meet minimum standards.
Follow-up: “But we managed without internet regulation when I was a kid.”
Second Response: When you were a kid, there wasn’t a multi-billion-dollar industry employing behavioral psychologists to design maximally addictive experiences targeting children’s developing brains. The comparison to television or even the early internet doesn’t hold because the technology is fundamentally different — algorithmically personalized, infinitely scrolling, deliberately engineered for compulsive use, and accessible 24/7 in every child’s pocket. This isn’t the same situation, and it doesn’t warrant the same hands-off response.
Common Misconceptions
Misconception 1: “Section 230 is what allows free speech online.”
Reality: The First Amendment protects free speech. Section 230 protects platforms from liability for hosting user content and from liability for moderating it. Reforming Section 230 doesn’t restrict what individuals can say — it changes what platforms are accountable for amplifying. The distinction between an individual’s speech and a corporation’s algorithmic promotion of that speech is crucial.
Misconception 2: “Repealing Section 230 would hold platforms accountable.”
Reality: Full repeal would likely cause platforms to massively over-censor — removing anything that might create liability, including political speech, journalism, and marginalized voices. It would also entrench dominant platforms that can afford legal departments while crushing smaller competitors. Targeted reform — focused on algorithmic amplification, child safety, and design accountability — is far more effective than the blunt instrument of full repeal.
Misconception 3: “The teen mental health crisis can’t be blamed on social media — correlation isn’t causation.”
Reality: The evidence has moved well beyond correlation. Platforms’ own internal research documented the harm directly (Meta’s “we make body image issues worse for one in three teen girls”). Experimental studies, in which reducing social media use improved well-being, supply causal evidence. And the timing — a sharp decline in teen mental health coinciding with smartphone and social media adoption around 2012 — combined with the documented mechanisms (social comparison, algorithmic amplification of extreme content, addictive design) creates a strong evidence base. Perfect proof isn’t the standard; the Surgeon General’s advisory reflects the weight of evidence.
Rhetorical Tips
Do Say
“This isn’t about censoring anyone’s speech — it’s about whether algorithms should be allowed to push eating disorder content to depressed teenagers for profit.” Make it about children and algorithms, not about “Big Tech” in the abstract. The 91–3 KOSA vote is a powerful proof point of bipartisan consensus.
Don’t Say
Don’t say “repeal Section 230” — it sounds extreme and would backfire. Don’t make it sound like you want the government to decide what’s true. Don’t dismiss all social media as harmful — acknowledge the benefits (connection, community, information) and focus on the design features and algorithmic choices that cause harm.
When the Conversation Goes Off the Rails
Come back to this: “The Kids Online Safety Act passed the Senate 91–3. The Surgeon General calls social media a ‘profound risk’ to children. Meta’s own research showed Instagram harms teen girls. The science is clear, the bipartisan support exists, and the bill still can’t get to the president’s desk. That’s the problem.”
Know Your Audience
For conservatives, emphasize parental rights (KOSA gives parents new tools), children’s safety, and the bipartisan nature of the issue (Graham, Cruz support). For moderates, lead with the 91–3 vote, the Surgeon General’s advisory, and the distinction between algorithmic accountability and speech censorship. For progressives, emphasize corporate accountability, the mental health crisis, and the need to protect LGBTQ+ and marginalized youth while holding platforms accountable.
Key Quotes & Soundbites
“KOSA passed the Senate 91 to 3 — one of the most bipartisan votes in years. It still hasn’t become law. When 91 senators agree and nothing happens, the system is broken.”
“Section 230 was written in 1996, when the internet was message boards. It now protects trillion-dollar companies whose algorithms decide what billions of people see. The law hasn’t kept up.”
“No one is saying ban social media. We’re saying that when an algorithm pushes eating disorder content to a depressed teenager because it increases engagement, the platform that built that algorithm should be accountable.”
Related Topics
- Data Privacy & Surveillance — Social media platforms are among the largest personal data collectors (see technology-civil-liberties/data_privacy_surveillance)
- Facial Recognition & Policing — Social media images feed facial recognition databases (see technology-civil-liberties/facial_recognition_policing)
- Citizens United & Campaign Finance — Social media enables micro-targeted political manipulation (see governance/citizens_united_campaign_finance)
- Gun Violence as Public Health Crisis — Social media algorithms amplify extremism linked to mass violence (see healthcare/gun_violence_public_health)
Sources & Further Reading
- Section 230 Debate and Child Online Safety — Harvard JOLT
- Blackburn, Blumenthal, Thune, Schumer Introduce KOSA — Sen. Blackburn, 2025
- How KOSA Has Evolved — TechPolicy.Press
- Section 230 Reform: What Websites Need to Know — Crowell & Moring, 2025
- Social Media Regulation and the Perils of Section 230 Reform — Freedom House
- Durbin, Graham Introduce Bill to Sunset Section 230 — Sen. Durbin
- ACLU on KOSA and Online Speech — ACLU, 2025