Next month, the US Supreme Court will hear arguments in a case that could make it much harder for millions of online companies like Meta to provide the type of services that people enjoy using every day — to facilitate deep connections with friends and families, to discover new places, interests and communities, and to help millions of small businesses grow and thrive. The case, Gonzalez v. Google, asks whether Section 230 protects the ability of an online service to sort, organize and recommend the growing number of posts, videos, photos, customer reviews and other content created and shared online by hundreds of millions of people every day.
Over the past quarter-century, Section 230 has enabled the internet to revolutionize the way we live. For example, it has helped companies like Spotify to introduce people to new music and to connect up-and-coming artists with new audiences; companies like Etsy to connect small businesses with new customers; and fundraising platforms like Kickstarter and GoFundMe to empower millions to contribute to causes they care about.
One way Meta helps people build community is by using algorithms to recommend connections and content you might be interested in — for example, new Facebook groups you might want to join, pages you might like, or events you might want to attend — and by ranking content so that you are more likely to see the posts you care most about. This technology also helps protect our community by filtering, blocking, and reducing the spread of content that violates our policies or is otherwise problematic.
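To make the idea of "ranking content so you see what you care about" concrete, here is a minimal toy sketch of personalized ranking. It is purely illustrative and is not Meta's actual algorithm: the field names, the interest-affinity model, and the scoring weights are all invented for this example.

```python
# Toy personalized ranking sketch -- NOT Meta's real system.
# Post fields ('topic', 'engagement') and the 0.7/0.3 weights are invented.

def rank_posts(posts, user_interests):
    """Order posts so those matching a user's interests come first.

    posts: list of dicts with 'id', 'topic', and 'engagement' keys.
    user_interests: dict mapping topic -> affinity weight in [0, 1].
    """
    def score(post):
        affinity = user_interests.get(post["topic"], 0.0)
        # Blend how much this user likes the topic with overall engagement.
        return 0.7 * affinity + 0.3 * post["engagement"]

    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": 1, "topic": "music", "engagement": 0.2},
    {"id": 2, "topic": "cooking", "engagement": 0.9},
    {"id": 3, "topic": "music", "engagement": 0.8},
]
ranked = rank_posts(posts, {"music": 1.0})
# A music fan sees the music posts first, ordered by engagement.
```

Even this trivial version makes the legal point above tangible: ranking is just an organizational choice about display order, personalized per user.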
The sheer volume of user-generated content on the internet means that online companies have to make decisions about how to organize, prioritize, and deprioritize this content in ways that are useful to people and advertisers, while enforcing policies against terrorism and other harmful content. Meta has invested billions of dollars to develop sophisticated safety and security systems that work to identify, block, and remove terrorist content quickly — typically before it is ever seen by any users. Section 230 was enacted to allow companies to do exactly this. Exposing companies to liability for decisions to organize and filter content from among the vast array of content posted online would incentivize them to simply remove more content in ways Congress never intended.
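As a rough illustration of the filtering decisions described above, here is a toy keyword-based moderation check. Real safety systems are vastly more sophisticated (machine-learned classifiers, media matching, human review); this sketch, with an invented blocklist, only shows the basic shape of an automated allow/remove decision made before content is shown to users.

```python
# Toy content filter sketch -- far simpler than real moderation systems.
# The blocklist terms are invented placeholders.

BLOCKED_TERMS = {"blocked-term-a", "blocked-term-b"}

def moderate(post_text):
    """Return 'remove' if the post contains a blocked term, else 'allow'."""
    words = set(post_text.lower().split())
    return "remove" if words & BLOCKED_TERMS else "allow"
```

The design choice at issue in the case is exactly this kind of decision: whether screening and removing some content, while organizing the rest, costs a service its Section 230 protection.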
Here are the key arguments we make in the amicus brief filed today with the Court:
- “If §230 is truly to be converted into a regime at such profound odds with Congress’ express findings and purposes, that decision should come from Congress, not this Court.” [Pg 6]
- “If online services risk liability for disseminating content but not for removing it, the only rational reaction is to err on the side of removal.” [Pg 28]
- “Whatever else one may say about the scope of §230, there can be no serious dispute that its core protection for online services is against liability for the third-party content that they host, even though they exclude some material and organize the material they host to make it useful for users. Indeed, the statute was enacted to counter cases that held online services liable for third-party content they hosted, not for content they removed. Any reading of §230 that would exclude from its protections virtually everything that it was enacted to protect just because material is presented in a format with utility for the user is a complete non-starter.” [Pg 4]
- “So-called ‘targeted recommendations’ reflect nothing more than how online services organize and display content. They differ from other more static organizational choices only in that they harness the power of the internet to personalize content on a user-by-user basis rather than through a one-size-fits-all approach. There is no coherent basis for depriving an online service of §230’s protection for those core publisher functions just because the technological advances Congress wanted to protect enable online services to personalize content so users might see what they actually want.” [Pg 5]