Lawmakers have become increasingly concerned about the potential havoc that manipulative media like “deepfakes” could wreak on American society. But the steps they’re willing to take to address the issue remain unclear.
With the presidential election coming up later this year, researchers have raised alarms about the urgency of addressing deepfakes and other forms of digital deception. On Wednesday, the House Subcommittee on Consumer Protection and Commerce invited digital experts and a representative from Facebook to a hearing on “manipulation and deception in the digital age” to figure out what strategies tech companies are already employing to combat deepfakes and what more should be done on the federal level.
Experts warned of the societal and national security implications of manipulated digital media, like the potential to fake a remark by a politician or to drive opposing groups to real-world events that could put them in direct conflict. The experts and lawmakers disagreed on the level of involvement needed from Congress to ensure tech companies responsibly patrol deceptive content on their platforms. In her opening remarks, Chairwoman Jan Schakowsky, D-Ill., lamented Congress’ “laissez-faire” approach over the past decade toward digital platform moderation.
“The result is Big Tech failed to respond to the great threats posed by deepfakes … as evidenced by Facebook scrambling to announce a new policy that strikes me as wholly inadequate,” Schakowsky said, referring to the new policy Facebook released a day before the hearing banning highly manipulated videos created by artificial intelligence or machine learning.
Schakowsky noted the new policy would not cover the doctored video of House Speaker Nancy Pelosi that circulated on Facebook and other platforms. The video was simply slowed down to make Pelosi’s speech appear slurred, and Facebook declined to remove it even after it had been viewed millions of times.
At the hearing Wednesday, Facebook’s vice president of global policy management, Monika Bickert, confirmed the altered Pelosi video would not be subject to the new deepfake policy, but would still be subject to existing misinformation policies. Facebook previously said it limited the distribution of the Pelosi video in the News Feed and added context after a fact-checking partner rated the video as false.
Several Republicans, like ranking member Cathy McMorris Rodgers of Washington, urged caution against heavy-handed legislation, warning of potential repercussions for consumers and the threat of China’s rising sophistication in developing AI.
“As we discuss ways to combat manipulation online, we must ensure America will remain the global leader in AI development,” McMorris Rodgers said. “There’s no better place in the world to raise people’s standard of living and make sure this technology is used responsibly.”
Justin (Gus) Hurwitz, an associate law professor at the University of Nebraska, took a more conservative approach to regulation in his testimony Wednesday. With regard to dark patterns — design choices that can influence a user’s behavior, like making one button large and colorful and another small and dull — Hurwitz said in his written testimony, “we need to be careful in how and why we regulate these practices, including understanding when and whether we should at all. In some cases, regulatory efforts may be better focused on other areas; in some cases, it may make more sense to allow the underlying technology and markets to continue to improve before stepping in with regulatory intervention; and in other cases still beneficial regulatory intervention may simply not be possible.”
If regulation is found to be necessary, Hurwitz said in the written testimony, it should target “specific design practices” or empower an agency like the Federal Trade Commission to identify practices that it believes violate the FTC Act. Rather than jumping to legislation right away, Hurwitz suggested allowing the FTC to use its rulemaking authority to regulate dark patterns, and have the agency tell Congress if further intervention is needed.
The two other experts on the panel, however, urged Congress to be more proactive in reining in digital manipulation.
Joan Donovan, research director of the technology and social change project at the Shorenstein Center at the Harvard Kennedy School, said it’s important to ensure the FTC has access to all the information it needs to effectively investigate and audit tech companies and that the FTC should be able to “assess substantial injuries” in investigations.
But the FTC alone may not have the capacity to deal with the “exponential” scale and issues of the tech industry, said Tristan Harris, executive director of the Center for Humane Technology and former Google design ethicist.
“This is why I’m thinking about how can we have a digital update for each of our different agencies who already have jurisdiction, over whether it’s public health or children or scams or deception, and just have them ask the questions that then are forced upon the technology companies to use their resources to calculate, report back, set the objectives for what they’re going to do in the next quarter,” Harris said. He also warned that centralizing this power in a new federal agency would take too long as these issues accelerate.
Harris also suggested creating an awareness campaign to “inoculate the public” against deception and misinformation, noting that the government released a propaganda film in the 1940s warning against fascism, though research has since questioned the effectiveness of that particular film. Harris even suggested that tech companies help distribute such a campaign.
Hurwitz said the campaign “runs the risk of being called a dark pattern if the platforms are starting to label certain content in certain ways.”
As Schakowsky noted in her opening remarks, the question of Section 230, the law that shields tech platforms from legal liability for their users’ content, was an undercurrent of the hearing. Bickert, the Facebook representative, emphasized that the law also allows the company to remove harmful content as it sees fit.
Rep. Greg Walden, R-Ore., who also advised Congress to revisit Section 230, said, “this hearing should serve as a reminder to all online platforms that we are watching them closely.”