The CEOs of Facebook, Google and Twitter Share Their Thoughts on Section 230 Ahead of New Hearing

After all the discussion around Section 230 over the past year, with former US President Donald Trump calling for reform, and various others raising significant concerns about the power that digital platforms now hold over information flow, regulators and officials are now looking for a way forward, in order to ensure that digital platforms are held to account for the role they play in the broader information ecosystem.

This week, Facebook CEO Mark Zuckerberg, Google CEO Sundar Pichai and Twitter chief Jack Dorsey will front a House Energy and Commerce Committee hearing over proposed changes to Section 230, with a view to improving the ways in which online platforms address key concerns relating to free speech, moderation and the spread of harmful information on their platforms.

All of the tech giants have opposed reforms to Section 230, arguing that any significant change to the law would effectively cripple the free web, and force them to significantly limit speech in order to avoid potential legal challenges. But as highlighted in various discussions, there are ongoing concerns that the platforms are not doing enough to stop the spread of misinformation and hate speech in particular, which could be fueling broader societal division – while others have argued the opposite: that platforms are working to support their own agendas in their moves to censor and restrict speech.

Ahead of the hearing, the House Energy and Commerce Committee has today released written statements from the three CEOs, in which they outline their perspective on the proposed reforms.

In his response, Facebook CEO Mark Zuckerberg has urged caution in any changes, noting that it’s almost impossible for a company like Facebook to police all speech, given the scale of its operation.

“Platforms should not be held liable if a particular piece of content evades its detection – that would be impractical for platforms with billions of posts per day – but they should be required to have adequate systems in place to address unlawful content.”

Instead, Zuckerberg has argued that providers need to have adequate systems in place to deal with such content as best they can, and that this should be mandated by law:

“Instead of being granted immunity, platforms should be required to demonstrate that they have systems in place for identifying unlawful content and removing it.”

That would largely align with Facebook’s defense in various cases on this front, in which it has outlined the processes it has in place to address harmful content, establishing industry best practice in terms of alerting processes and timeliness of response.

Really, that’s likely the best outcome that can be expected – the only other option, as noted, is that platforms entirely restrict what people can say in order to avoid legal liability.

Essentially, given the real-time nature of the process, there’s no way for platforms to guarantee that they’ll catch every instance of potentially harmful speech, especially once those platforms reach a certain scale. But by detailing their detection processes – which, for the big players, are powered by AI, and always improving – and establishing clear guidelines around human assessment and response, regulators could take a more effective approach than simply penalizing failures.

Zuckerberg also includes this interesting note:

“Facebook is successful because people around the world have a deep desire to connect and share, not to stand apart and fight. This reaffirms our belief that connectivity and togetherness are ultimately more powerful ideals than division and discord – and that technology can be part of the solution to the deep-seated challenges in our society. We will continue working to ensure our products and policies support this ambition.”  

Interesting in that, these days, it increasingly feels like Facebook users would actually prefer to ‘stand apart and fight’. But that’s a whole other debate.

Google’s Sundar Pichai, meanwhile, in his written testimony, reiterates the dangers of Section 230 reform:

“Section 230 is foundational to the open web: it allows platforms and websites, big and small, across the entire internet, to responsibly manage content to keep users safe and promote access to information and free expression. Without Section 230, platforms would either over-filter content or not be able to filter content at all.”

Pichai essentially proposes the same solution as Zuckerberg: establishing more transparent moderation processes, in order to ensure all platforms are working towards the same outcome.

“Solutions might include developing content policies that are clear and accessible, notifying people when their content is removed and giving them ways to appeal content decisions, and sharing how systems designed for addressing harmful content are working over time.”

Pichai doesn’t go as far as Zuckerberg in proposing a third-party regulatory framework, but the focus on transparency is similar in its aims.

Twitter’s Jack Dorsey actually takes a more progressive view in his statement, referring to the company’s new Birdwatch and Bluesky projects as potential ways forward in addressing moderation and content concerns.

“We believe that people should have transparency or meaningful control over the algorithms that affect them. We recognize that we can do more to provide algorithmic transparency, fair machine learning, and controls that empower people. The machine learning teams at Twitter are studying techniques and developing a roadmap to ensure our present and future algorithmic models uphold a high standard when it comes to transparency and fairness.”

In this sense, Dorsey is looking to focus on the content recommendation systems themselves as a means to help users improve their experience. That could be a better solution – but then again, do users really want more control, or would they prefer the systems to simply learn from their behavior and serve them relevant content based on usage?

There are still many questions to come, and we’ll likely get some further insight as to the Committee’s thinking around Section 230 reform in this week’s hearing. 

But the answers are not easy. Online platforms have become critical information sources, especially over the last year, which has increased their capacity to inform and influence large sections of society. Ensuring they’re used for good is an important aim, but setting parameters for such can be risky, and even dangerous, in many ways.
