Who Rules the Internet? US and the Rise of Digital Platforms

May 4, 2026

The first “Constitution” of the Internet is a minimalist document that boils down to two basic rules, embedded in Section 230 of the United States’ Communications Act, added by amendment in the distant year of 1996. The first rule guarantees digital intermediaries immunity for content that third parties share through their platforms. The second enshrines these corporations’ freedom to remove from their own forums any content they deem objectionable.

“The major digital platforms have operated as materially public forums, yet subject to private control and regulation”

This basic rule of the digital architecture was adopted for an embryonic Internet in which none of the major platforms that now dominate the market existed. It was also a context with a utopian horizon, in which it was thought that in this technological and communicative sphere the consolidation of market-dominant positions was unlikely, and that the very openness inherent in the technology would foster ideological tolerance and the relativization of one’s own ideas. Reality contradicted this utopian horizon quite early, though the two regulatory pillars remain in effect in the United States today. The governance of the Internet in that country was born of a forecasting error, a kind of original legal sin.

Liberal Democrats against Regulatory Republicans

Within the original framework described, the major digital platforms have operated as materially public forums, yet subject to private control and regulation. Corporations such as Facebook, Instagram, and Twitter have enjoyed the power to prohibit speech that is protected by the First Amendment of the Constitution, while at the same time bearing no responsibility for illicit content that others transmit through their channels.

This model of deregulation and immunity enjoyed the support of the Democratic Party and was challenged by the Republican Party. The paradox of Republicans urging regulation against the Democrats’ more liberal stance was fundamentally due to the ideological alignment between the Democratic Party and the principal backers of these companies. That alignment became most evident when then-President Trump’s accounts were suspended as a consequence of his activity on social networks during the assault on the Capitol. Trump’s expulsion from the platforms was not ordered by any judicial or administrative authority; it was a decision taken by the platforms themselves, which enjoyed complete legal autonomy to decide otherwise: that is, to keep Trump on their forums and not act against, or even to amplify, the content that lent credence to the claim of electoral fraud.

During Biden’s presidency, the federal executive branch persuaded the major digital platforms to act against certain content: most notably health misinformation during COVID, so-called hate speech, and content that denied the legitimacy of the electoral process that brought Biden to the White House. The Supreme Court found that this presidential persuasion could not be considered coercive. At the same time, during these years the so-called “Brussels effect,” that is, the importation of regulatory standards developed in Europe for platforms, influenced the moderation rules of the major social networks, producing a degree of harmonization in the regulation of digital public opinion between the United States and the European Union.

Part of the Republican Party considered this situation an effective regime of digital censorship, and two states under its governance, Texas and Florida, enacted laws to impose neutrality on platform speech regulation, thereby challenging the foundational principle that these companies are free to set whatever regulatory standards they deem appropriate within their own forums. Both laws were ultimately challenged before the United States Supreme Court.

Platforms and their expressive freedoms

In 2024, a Supreme Court ruling, Moody v. NetChoice, elevated to constitutional status one of the two principles on which the Internet is built. For the Court’s majority, social networks, when they introduce rules or make decisions to regulate third-party speech, are exercising their own freedom of expression. The Court extended to social networks its jurisprudence on the editorial freedom of the press, under which it has held that a law guaranteeing a right of reply is unconstitutional in the United States, on the grounds that no one can tell the media what they must publish.

“Self-regulation of platforms has given way to a bet on the ‘restoration of the freedom of expression,’ that is, an Internet without filters”

Thus, the ruling reinforces the position long held by the Democratic Party and by a large portion of liberal academia: the networks are sovereign in their own forums. Yet the context in which the ruling was issued was no longer one of tacit alliance between the major digital corporations and the Democrats. By 2024, not only had Elon Musk’s purchase of X taken place, but the rest of the digital corporations had aligned themselves, from the campaign onward, with the Trump presidency. Self-regulation of platforms has given way to a bet on the “restoration of freedom of expression,” that is, an Internet without filters.

It is highly likely that, in this new reality, there will be state initiatives to impose regulatory standards on digital corporations by law, in order to combat pathologies such as misinformation or the propagation of discourse against certain minorities. However, the Supreme Court’s adoption of the thesis maintained for years by the Democratic Party, according to which the First Amendment protects platforms’ freedom to regulate public discourse, leaves very little room for any such initiative to survive a constitutional challenge. In American law, platforms are sovereign in their own forums.

The resurgence of accountability

If digital companies have enjoyed a regime of immunity from liability for the content that third parties share or store through them, it is because they were considered not editors, unlike the media, but mere intermediaries. Moody v. NetChoice, however, recognizes them as editorially free, which inevitably calls that regime of immunity into question.

To date no Supreme Court decision has done so, but there are already rulings that may point to a paradigm shift. Two very recent ones stand out. In the first, State of New Mexico v. Meta Platforms, Inc., a jury in a civil case brought by the state itself found that Meta negligently designed its platforms with regard to minors’ safety, exposing them to risks to their sexual integrity. The trial court accordingly ordered the company to pay more than $375 million in damages. A few days later, another jury, this time in California, similarly found Meta and Google responsible for psychological harm caused to a user during childhood, since features such as infinite scrolling, algorithmic recommendations, and autoplay were designed to generate addiction in young users, prompting them to engage compulsively.

“These are not lawsuits where the platform’s freedom of expression is at stake, but rather the duty of diligence the platform must meet to avoid risks to users”

What is significant is that, in both cases, the root of the liability claim was not third-party content but the product’s own design, which means the Section 230 immunity clause would not apply. Put differently, the courts understood that these are not cases involving the platform’s freedom of expression, but rather the duty of care the platform must meet to prevent risks to users. Thus, in state courts, one of the objections Justice Alito raised in his separate opinion in Moody v. NetChoice, namely that automated algorithmic procedures cannot be sheltered by freedom of expression, is beginning to gain traction. Thousands of lawsuits framed in these terms are pending across various state jurisdictions. If these precedents consolidate, we could be witnessing a radical inflection point in the immunity regime that has governed Internet companies for thirty years.

Platforms and digital geopolitics

In the United States, a pathway seems to be opening to regulate platforms and hold them responsible for the risks their technological design may pose to users, especially minors. Nevertheless, nothing challenges their unlimited autonomy to regulate freedom of expression on their own forums. They are the governors of digital public opinion.

“Under this premise, governing Facebook is more like governing a state than a company”

Mark Zuckerberg rightly observed that, under this premise, governing Facebook is more like governing a state than a company. In short, these are companies with the power to set their own standard for what can be said in a materially public sphere. They are also transnational monopolies, whose natural inertia is to extend that standard wherever they operate. It seems clear, however, that the legal regime these companies enjoy as governors of discourse in the United States is not acceptable in other jurisdictions. Certainly not in the EU, as the Digital Services Act (DSA) attests. These corporations’ interest in owning their own forum when operating in other countries as well, and the U.S. government’s cooperation in making it so, transforms the legal issues raised by the large Internet corporations into problems akin to those arising in relations between legal systems. In other words, asserting European law here is a matter of sovereignty.

Natalie Foster

I’m a political writer focused on making complex issues clear, accessible, and worth engaging with. From local dynamics to national debates, I aim to connect facts with context so readers can form their own informed views. I believe strong journalism should challenge, question, and open space for thoughtful discussion rather than amplify noise.