Weston Buhr, a field office organizer, works on his laptop with a volunteer at a Bernie Sanders field office on February 1, 2020 in Waterloo, Iowa.
Mark Makela | Getty Images
The big tech companies are facing a high-stakes test of their ability to protect their platforms from interference and root out bad actors as the U.S. presidential election formally kicks off with the Iowa caucuses Monday evening.
Facebook, Google and Twitter said they are coordinating closely with the Democratic National Committee, which is dispatching cybersecurity staff to the caucuses for the first time to provide rapid response on the ground to any threats. The Iowa Democratic Party also has created a system for voters to report fake information as they encounter it.
“The most important thing is making sure that we have truth and accuracy coming out of such an important milestone in our nomination process,” Nell Thomas, chief technology officer at the DNC, said in a statement to CNBC.
Tech companies have recently ramped up their efforts to combat misinformation, especially for content related to the election. Twitter introduced a tool last week that allows users to more easily report false or misleading information about voting. Facebook’s election-season war rooms are up and running. And YouTube said Monday that changes to its video recommendation system have led to a 70% decline in average watch time of misinformation and “borderline” content.
“YouTube remains committed to maintaining the balance of openness and responsibility, before, during and after the 2020 U.S. election,” Leslie Miller, YouTube’s vice president of government affairs and public policy, wrote in a blog post on Monday.
But the platforms are navigating a volatile political climate that has not only pitted Republicans against Democrats, but also turned both parties against the tech industry itself. Republicans have accused the platforms of bias against conservatives. Democrats are skeptical of their sheer size and market power. Both sides have lambasted the companies for allowing political ads to include false information, prompting Twitter to ban such ads entirely.
Most recently, Democratic presidential candidate Sen. Elizabeth Warren proposed establishing criminal penalties for knowingly sharing false information about when and how to vote. She also called on the CEOs of Facebook, Twitter and YouTube to clearly label content from state-controlled organizations and let users know when they’ve been affected by disinformation campaigns.
“The safety of our democracy is more important than shareholder dividends and CEO salaries, and we need tech companies to behave accordingly,” Warren wrote on Medium.
The major platforms have previously committed to removing false or misleading content about voting, even when it occurs in political ads. Facebook said it removed 45,000 misleading posts aimed at voter suppression during the 2018 midterm elections, almost all before they were reported by users or outside organizations.
“The conversations that we hear from platforms is that they are trying hard to combat misinformation, but the speed at which the internet works makes it challenging,” said Chris Lewis, chief executive of the think tank Public Knowledge. “I think there are a lot more lessons to be learned every time there is an election, so we can refine how we fight these things as a society.”
In an op-ed published in The Des Moines Register last month, Facebook’s head of cybersecurity, Nathaniel Gleicher, acknowledged that the company was “caught off guard” by the widespread Russian interference in the 2016 election. He described it as a “wake up call” and tried to assure Iowans that Facebook is better prepared for this election cycle.
“We remain committed to fighting election interference, increasing transparency, and giving more people more information about what they see online,” Gleicher wrote. “Those trying to attack our democracy won’t let up, and neither will we.”
Correction: This story was updated to reflect the correct name of the Democratic National Committee.