Is Section 230 operating as intended?

On Behalf of | Nov 12, 2021 | Online Reputation Management |

Section 230 protects social media platforms from liability for content their users post, and it also allows them to moderate potentially harmful content as they see fit. These dual protections have enabled the innovation and growth of social media companies; however, the law is frequently criticized. Politicians disagree on whether it invites too much content moderation or not enough, and ideas for possible reform continue to be debated at length.

Before considering reform options, we need to look at the history of Section 230. Prior to the Telecommunications Act of 1996, two prominent cases demonstrated the need for legislation. The first, Stratton Oakmont v. Prodigy (1995), involved a dial-up bulletin board company called Prodigy that marketed itself as a family-friendly service. It moderated its boards and took down anything profane or abusive. When an allegedly libelous statement was posted on one of the bulletin boards, the court determined that because Prodigy was moderating content, it acted as a publisher and could be held culpable for all content on the site, even content it had never reviewed.

In the other case, Cubby v. CompuServe (1991), the defendant was an online service that hosted forums. One of the publications available to subscribers was a daily newsletter called Rumorville, published by Don Fitzpatrick. The plaintiff, Robert Blanchard, owner of Cubby, Inc., sued Fitzpatrick over defamatory remarks Rumorville made about Blanchard's competing publication, Skuttlebut. He also sued CompuServe for hosting the content. But because CompuServe had no prior knowledge of Rumorville's contents, did not control its publication, and did not review the newsletter before it was posted, the U.S. District Court ruled that CompuServe had acted as a distributor, not a publisher, and was therefore not liable.

These were among the first online defamation cases, and they led to the creation of the Communications Decency Act (CDA). Luckily, the case against Prodigy was ultimately dropped, but the precedent it represented was alarming: the platform had tried to moderate content to the best of its ability and had missed something, and being held legally responsible for that didn't seem fair. Seeing what happened with CompuServe, which did not monitor any content, Congress worried that companies would stop moderating altogether to avoid legal liability. Those concerns led to Section 230 as it exists today, including a provision stating that a platform's attempt to moderate can't be used to hold it legally accountable for content posted on the site. In essence, moderation does not create liability.

Under the law, social media sites cannot be treated as publishers of third-party content even if they do no moderating at all. That means that even if companies know there is libel or other illegal content on their platforms and do nothing about it, they remain immune from culpability, which undermines the law's objective and removes the incentive to moderate. Since moderation takes time and costs money, there may not be much moderation happening at all on many sites. As interpreted, the law creates the opposite of its intended effect, eliminating any requirement that businesses prevent illegal activity on their platforms.

This is where the idea of reform comes in, because the law's ambiguity calls for clarity. Some argue that Section 230 is not operating in a positive way, since social media platforms are neither regulated nor subject to court claims for negligence. The argument for change stems from safety concerns about online businesses, as voluntary moderation efforts aren't always enough. It is a delicate balance, however, and any changes could fundamentally alter the internet as it exists today.