Intermediary Liability
The legal question of when a platform, application, or service becomes responsible for content created, shared, or transmitted by its users — rather than only the users themselves being responsible.
What is it?
Intermediary liability deals with a fundamental question for anyone building a platform: If a user does something harmful through your system, are you liable?
The answer depends on what your platform does with user content. There’s a spectrum from passive conduit (like a postal service delivering a sealed letter) to active publisher (like a newspaper choosing what to print). Where your application falls on this spectrum determines your legal exposure.1
This question has become central to modern technology law. The internet was built on the assumption that platforms are neutral intermediaries — Section 230 of the US Communications Decency Act famously established that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” But this US-centric model is being challenged worldwide. The EU’s Digital Services Act (DSA), fully applicable since 2024, creates a tiered system of responsibilities. Switzerland has no Section 230 equivalent at all — liability follows from general criminal and civil law principles.2
For developers, this means that architectural decisions — whether your app sends messages on behalf of users, whether it moderates content, whether it amplifies some content over others — directly determine legal risk.
In plain terms
Intermediary liability is like the difference between being a postman and a newspaper editor. A postman delivers letters without reading them — if a letter contains threats, the postman isn’t liable. A newspaper editor chooses what to publish — if an article is defamatory, the newspaper is liable. Your platform sits somewhere between these two extremes, and your design choices determine where.
At a glance
The liability spectrum (click to expand)
```mermaid
graph LR
  A[Mere Conduit] --> B[Caching]
  B --> C[Hosting]
  C --> D[Active Curation]
  D --> E[Publisher]
  A -.- F["Lowest liability"]
  E -.- G["Highest liability"]
  style C fill:#4a9ede,color:#fff
```

Key: Most applications fall in the “Hosting” zone — you store and display user content but don’t create it. Liability increases as you move toward active curation (recommendation algorithms, editorial selection) or publishing (generating content yourself).
How does it work?
The three tiers of intermediary service (EU DSA model)
The DSA establishes three categories of intermediary service, each with increasing obligations:3
1. Mere conduit
The service transmits information without selecting or modifying it (like an ISP or VPN).
- No liability for the transmitted content
- Must not initiate the transmission, select the receiver, or modify the content
- No obligation to monitor
2. Caching
The service temporarily stores information to make transmission more efficient (like a CDN).
- No liability if it doesn’t modify the content and complies with removal requests
- Must act quickly when notified of unlawful content at the origin
3. Hosting
The service stores information provided by a user at their request (like a social platform, forum, or file host).
- Conditional immunity — not liable for stored content IF:
  - It does not have actual knowledge of illegal content, AND
  - Upon obtaining knowledge, it acts expeditiously to remove or disable access
- This is where most applications fall
Think of it like...
A landlord is not responsible for what tenants do inside their apartments — unless the landlord knows about illegal activity and does nothing. The “knowledge + inaction = liability” formula applies to platform hosting in the same way.
The curation problem
Here’s where it gets complicated for modern applications. If your platform recommends, ranks, amplifies, or algorithmically curates user content, you may be moving from “hosting” toward “publishing.”4
| Platform action | Liability implication |
|---|---|
| Store and display user content, chronologically | Hosting — conditional immunity |
| Rank content by engagement metrics | Curation — increased scrutiny |
| Recommend content based on user profiles | Active curation — potential publisher status |
| Generate content using AI | Publisher — full liability |
| Send content to third parties on behalf of users | Transmission with amplification — high risk |
Developer rule of thumb
The more your platform shapes, selects, or amplifies content rather than passively displaying it, the closer you move to publisher liability. Every algorithmic decision is a potential editorial decision in the eyes of the law.
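In code, the difference between the first two rows of the table above can be a single sort key. A minimal sketch (the `Post` structure and weighting are hypothetical, not from any real platform) showing how an engagement-ranking choice is, in effect, an editorial choice:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    timestamp: float  # seconds since epoch
    likes: int
    shares: int

def chronological_feed(posts: list[Post]) -> list[Post]:
    """Passive display: newest first, no editorial selection.
    This keeps the platform closest to the 'hosting' zone."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def engagement_feed(posts: list[Post]) -> list[Post]:
    """Active curation: the platform decides what to amplify.
    Each weight below is a design decision — and potentially
    an editorial decision in the eyes of the law."""
    return sorted(posts, key=lambda p: p.likes + 2 * p.shares, reverse=True)

posts = [
    Post("a", "old but viral", timestamp=100.0, likes=50, shares=20),
    Post("b", "new and quiet", timestamp=200.0, likes=1, shares=0),
]
assert chronological_feed(posts)[0].author == "b"  # newest wins
assert engagement_feed(posts)[0].author == "a"     # amplified wins
```

The two functions store and display exactly the same user content; only the second one makes the platform an active participant in deciding what its users see.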
Switzerland: no safe harbour
Switzerland has no Section 230 and no DSA. Intermediary liability is determined by general civil law (Code of Obligations, Art. 41ff) and criminal law (StGB Arts. 173-174 for defamation, Art. 180 for threats).2
Key implications:
- No blanket immunity for platforms
- Knowledge + inaction can create civil and criminal liability
- Facilitating harmful communications (providing the tool and the audience) may be sufficient for accessory liability
- The 2025 draft platform law aims to create DSA-like obligations for Swiss platforms
For example: a civic messaging feature
You’re building a feature that lets citizens send messages to elected officials:
High-risk design (intermediary):
- User writes message in your app
- Your platform sends it via your email server to the official
- You are the sender of record — you transmitted the content
- If the message is threatening, you facilitated the threat
Low-risk design (tool provider):
- User writes message in your app
- Your platform generates a downloadable document
- User sends it themselves from their own email
- You provided a template — you didn’t transmit anything
The architectural difference is enormous: one design makes you an intermediary, the other makes you a tool provider. Same feature, radically different liability.
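The two designs can differ by a single function call. A schematic sketch (all names and the letter format are hypothetical) of where the liability-relevant line sits:

```python
from email.message import EmailMessage

def build_letter(message: str, official_name: str) -> bytes:
    """Tool-provider design: generate a downloadable document.
    The user sends it themselves — the platform transmits nothing."""
    letter = f"Dear {official_name},\n\n{message}\n"
    return letter.encode("utf-8")

def build_outbound_email(message: str, official_email: str,
                         platform_sender: str) -> EmailMessage:
    """Intermediary design: handing this message to your own SMTP
    server makes the platform the sender of record."""
    msg = EmailMessage()
    msg["From"] = platform_sender   # content leaves under YOUR identity
    msg["To"] = official_email
    msg.set_content(message)
    return msg

# Low-risk path: the platform's involvement ends at the download.
doc = build_letter("Please fix the road.", "Councillor Example")
assert b"Please fix the road." in doc
```

Both functions implement the same user-facing feature; only the second one puts the platform in the transmission path.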
The notice-and-action framework
Under the DSA (and increasingly expected elsewhere), platforms must implement a process for handling reports of illegal content:3
- Notice — receive reports from users or authorities
- Assessment — evaluate whether the content is illegal
- Action — remove, disable, or restrict access to illegal content
- Transparency — inform the content creator and publish transparency reports
- Appeal — provide a mechanism for content creators to contest removal
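The five steps above map naturally onto a small state machine. A minimal sketch (the states, fields, and function names are illustrative assumptions, not DSA-mandated terminology):

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class NoticeState(Enum):
    RECEIVED = auto()   # 1. Notice received from a user or authority
    ASSESSED = auto()   # 2. Assessment of legality completed
    ACTIONED = auto()   # 3. Action taken (remove/disable/restrict) or dismissed
    NOTIFIED = auto()   # 4. Transparency: content creator informed
    APPEALED = auto()   # 5. Appeal filed by the content creator

@dataclass
class Notice:
    content_id: str
    reporter: str
    reason: str
    state: NoticeState = NoticeState.RECEIVED
    illegal: Optional[bool] = None
    action_taken: Optional[str] = None

def assess(notice: Notice, is_illegal: bool) -> None:
    notice.illegal = is_illegal
    notice.state = NoticeState.ASSESSED

def act(notice: Notice) -> None:
    # Acting expeditiously once you have actual knowledge is what
    # preserves conditional hosting immunity.
    notice.action_taken = "removed" if notice.illegal else "no action"
    notice.state = NoticeState.ACTIONED

n = Notice(content_id="post-123", reporter="user-9", reason="defamation")
assess(n, is_illegal=True)
act(n)
assert n.action_taken == "removed"
```

The point of modelling it explicitly is auditability: every notice has a recorded state, which is exactly what transparency reports and appeal mechanisms need to draw on.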
Concept to explore
When AI generates the content on your platform, intermediary liability intersects with AI content liability — you may be both the platform and the “author.” See ai-content-liability.
Why do we use it?
Key reasons
1. It determines your legal exposure. Understanding where your platform sits on the liability spectrum is essential for risk management. The wrong architectural choice can make your entire platform a liability risk.
2. It shapes product design. “Build the feature, handle liability later” doesn’t work. The decision to send messages on behalf of users versus letting users send themselves is a product decision with legal consequences.
3. It’s evolving rapidly. The DSA, the 2025 Swiss draft platform law, and ongoing US Section 230 reform mean the rules are actively changing. Building with liability awareness means building adaptably.
When do we use it?
- When your platform stores, displays, or transmits user-generated content
- When users can send messages, comments, or reviews through your system
- When your platform uses algorithms to rank, recommend, or surface content
- When you provide tools that users can employ to communicate with others
- When AI generates content that is displayed to or sent to third parties
- When designing content moderation policies and processes
Rule of thumb
If content flows through your platform from one person to another, you’re an intermediary. The question is then: are you a passive pipe or an active participant? Design accordingly.
How can I think about it?
The venue owner analogy
You own a venue where people give speeches. As a venue owner:
- If someone gives a hate speech, you’re not liable if you didn’t know and had no reason to know
- If you were told in advance and did nothing, you may be liable for providing the platform
- If you curated the speakers, set the agenda, and promoted the event, you’re a co-publisher — fully liable
Your application is the venue. Your users are the speakers. Your algorithms are your event programming. The more you curate, the more you’re responsible.
The courier vs ghostwriter analogy
A courier delivers a sealed package — they don’t know what’s inside, and they’re not liable for the contents. A ghostwriter creates the content on behalf of someone else — they share responsibility for what’s written.
Most platforms are somewhere in between:
- Courier = mere conduit (ISP, email relay)
- Courier who opens and sorts packages = hosting with algorithmic curation
- Ghostwriter = AI-generated content sent under the user’s name
Your design choices determine which role your platform plays.
Concepts to explore next
| Concept | What it covers | Status |
|---|---|---|
| ai-content-liability | What happens when the platform itself generates the content | complete |
| algorithmic-transparency | How content curation algorithms affect liability | complete |
| personal-data-protection | Moderation processes involve processing personal data | complete |
Some cards don't exist yet
A broken link is a placeholder for future learning, not an error.
Check your understanding
Test yourself (click to expand)
- Explain — Why does algorithmic content curation increase a platform’s liability compared to chronological display?
- Name — What are the three tiers of intermediary service under the EU Digital Services Act?
- Distinguish — What is the key difference between the US approach (Section 230, broad immunity) and the Swiss approach (no safe harbour, general law)?
- Interpret — Your app lets users write letters to officials and sends them via your email server. A user sends a threatening letter. What is your liability exposure, and how would you redesign the feature to reduce it?
- Connect — How does the concept of intermediary liability relate to Privacy by Design — specifically, the principle of “proactive not reactive”?
Where this concept fits
Position in the knowledge graph
```mermaid
graph TD
  A[Data Governance] --> B[Intermediary Liability]
  A --> C[AI Content Liability]
  A --> D[Algorithmic Transparency]
  B --> E[Content Moderation]
  B --> F[Notice and Action]
  B --> G[Safe Harbour Provisions]
  style B fill:#4a9ede,color:#fff
```

Related concepts:
- ai-content-liability — when the platform generates the content, liability shifts dramatically
- algorithmic-transparency — curation algorithms move platforms from hosting to publishing
- client-server-model — the technical architecture that enables intermediary relationships
Sources
Further reading
Resources
- Intermediary Liability: Rules, Risks and Reforms in the Digital Age — Comprehensive overview of intermediary liability across jurisdictions
- Towards a New Paradigm of Platforms’ Liability — Academic analysis of how platform liability is evolving
- Between Search and Platform: ChatGPT Under the DSA — How AI platforms like ChatGPT are classified under the DSA
- Platform Liability 2026: Section 230, the DSA, and Beyond — Cross-jurisdictional comparison of platform liability frameworks
Footnotes
1. Wray Castle. (2026). *Intermediary Liability: Rules, Risks and Reforms in the Digital Age*. Wray Castle.
2. Swiss Criminal Code (StGB) Arts. 173, 174, 180; Code of Obligations (OR) Art. 41ff, as referenced in the legal compliance analysis for pol.yiuno.org (2026).
3. Kinstellar. (2026). *The Digital Services Act: An Overview of the New Online Intermediary Liability Rules*. Kinstellar.
4. Scelta, D. (2026). *Towards a New Paradigm of Platforms’ Liability*. MediaLaws.
