Combat odious online content by supporting startups

TLDR: As congressional leaders continue to explore ways of addressing online disinformation ahead of the 2020 elections, it’s important that policymakers understand the difficulties digital platforms already face before impulsively moving forward with flawed legislation that would roll back critical liability protections.

What’s happening this week:  The House Science, Space, and Technology Committee is scheduled to hold a hearing at 2 pm tomorrow to discuss online imposters and disinformation. With the 2020 elections quickly approaching, lawmakers are concerned about the ways in which social media platforms can be used by nefarious actors — both foreign and domestic — to potentially sway voters and alter opinions through the use of fake accounts and disinformation. 

With the rise of accounts peddling fake news and misinformation and the growth of deepfakes — videos digitally manipulated to make people appear to say or do things they never did — platforms are facing increasing demands from lawmakers to safeguard the integrity of the electoral process. This comes as the Trump administration and some lawmakers look at rolling back the liability protections provided by Section 230 of the Communications Decency Act, which has allowed startups and online platforms to grow and thrive. 

Why it matters to startups: Content moderation, particularly for startups, is already a difficult and time-consuming undertaking. At the same time, congressional lawmakers have become increasingly hostile towards online platforms over their content moderation practices — often pushing contradictory and counterproductive demands. And all of this despite the fact that odious and problematic content, such as hate speech and deepfakes, is constitutionally protected. Startups know firsthand that weakening Section 230 protections is not the right approach to limiting this type of content.

Concerns about the spread of harmful online content have taken priority in political debates over the past several months. Ever since the 2016 presidential election highlighted how nefarious actors can use social media sites to promote disinformation, platforms have become more transparent about their efforts to remove connected networks of foreign-controlled accounts attempting to influence public opinion. Internet platforms are already addressing the difficult task of content moderation, working to better protect their users and the public at large while also trying to meet the demands of well-intentioned but often uninformed policymakers. 

Lawmakers have also increased calls for platforms to better combat extremist, violent, and hateful content on their sites — something that platforms are taking to heart. Last week, tech representatives told a Senate panel how they have improved their coordination efforts, as well as their use of more advanced AI systems, to identify and take down troubling content. 

But even as issues such as disinformation, hateful online content, and political censorship have dominated policy debates, the legislative solutions introduced thus far fall short of what platforms are already doing to police their own sites. In fact, lawmakers can’t even agree on whether platforms are over-moderating user-generated content or not doing enough to remove harmful posts. 

Some lawmakers, including House Intelligence Committee Chairman Adam Schiff (D-Calif.), previously expressed interest in rolling back liability protections that platforms receive under Section 230 of the Communications Decency Act in order to push websites to remove deepfake videos.

Such a move to combat deepfakes would instead kneecap the very online platforms that are already working to rid their sites of harmful content. Startups in particular often lack the means and resources to police their sites for all types of offending content — whether it’s hateful content, disinformation, harmful online content, or deepfake videos. As Evan Engstrom, Engine’s executive director, said in a Morning Consult op-ed earlier this year: “If it’s hard for users to distinguish between real and doctored videos, why would it be any easier for websites — particularly small startups — to know what to delete?”

The Trump administration and some conservative lawmakers, meanwhile, are pushing back against what they view as politically motivated censorship. It was reported last month that the White House is working on a draft executive order that would empower the FCC and the FTC to take action against platforms that moderate user-generated content on their sites. Sen. Josh Hawley (R-Mo.) even proposed legislation earlier this year that would hold platforms liable for illegal content shared on their sites unless they prove to the FTC that they are “politically neutral.”

Meanwhile, online platforms face the difficult task of determining, often on an extremely tight timeline, what content is harmful, false, misleading, or doctored. Removing posts or articles that promote one ideology’s views over another could be viewed through the lens of political censorship rather than as an effort to limit the spread of harmful disinformation. But not doing enough to combat hate speech, disinformation, or networks of bots peddling fake news could also be seen as allowing political and societal discourse to be corrupted by harmful outside forces.

If congressional lawmakers are truly concerned about combating deepfakes, online imposters, and the spread of disinformation, they should focus on protecting and securing Section 230. The law is structured the way it is because it represents the most meaningful way to curb legal — though harmful and problematic — forms of speech: platforms have the freedom to moderate this type of content without the threat of legal liability, whereas forcing them to remove it would violate free speech protections. By removing the legal barriers that would otherwise deter moderation, Section 230 maximizes the opportunity to take down problematic but legal content.

On the Horizon.

  • Engine and the Charles Koch Institute will be holding the second panel in our three-part series on the nuts and bolts of encryption next Friday, Oct. 4, at noon. We’ll be looking at the evolving global policy landscape around encryption. Learn more and RSVP here.