Section 230 Daily Updates

Many foreign agents are targeting the election; they will use deep fakes and disinformation

AP, January 5, 2024, Politico, The Unpredictable But Entirely Possible Events That Could Throw 2024 Into Turmoil, https://www.politico.com/news/magazine/2024/01/05/unpredictable-events-2024-election-turmoil-experts-00133873

Biden supporters are worried about the end of democracy. Trump supporters are worried Democrats want to throw the former president in jail. And there are plenty of adversaries (I’m looking at you, Russia, Iran and North Korea) that would love nothing more than to see more chaos from the Americans. That means that efforts to subvert the election could be successful and could come from a variety of actors — from cyberattacks, deep fakes and disinformation, physical attacks on the election process and oversight, and/or mass unrest, violent intervention and even terrorism to disrupt voting on Nov. 5. There’s no more geopolitically significant target than the upcoming U.S. elections, which are vulnerable due to limited experience and resources focused on election security. I wasn’t worried about a coup back on Jan. 6, and I don’t see any way to overturn this coming year’s election either. But disrupting the 2024 U.S. election strikes me as plausible and deeply concerning.

80 democratic elections are threatened by misinformation on social media platforms, and the platforms are under no obligation to combat it

Duffy & Harbath, January 4, 2024, KAT DUFFY is Senior Fellow for Digital and Cyberspace Policy at the Council on Foreign Relations; KATIE HARBATH is the Global Affairs Officer at Duco Experts; Foreign Affairs, Defending the Year of Democracy; https://www.foreignaffairs.com/united-states/defending-year-democracyWhat It Will Take to Protect 2024’s 80-Plus Elections From Hostile Actors

This year, over 80 national elections are scheduled to take place, directly affecting an estimated 4.2 billion people—52 percent of the globe’s population—in the largest election cycle the world will see until 2048. In addition to the U.S. presidential election, voters will go to the polls in the European Union, India, Indonesia, Mexico, South Africa, Ukraine, the United Kingdom, and dozens of other countries. Collectively, the stakes are high. The candidates that win will have a chance to shape not only domestic policy but also global issues including artificial intelligence, cybersecurity, and Internet governance. This year’s elections are important for reasons that go beyond their scale. They will be subject to a perfect storm of heightened threats and weakened defenses. Commercial decisions made by technology companies, the reach of global digital platforms, the complexity of the environments in which these platforms operate, the rise of generative AI tools, the growth of foreign influence operations, and the emergence of partisan domestic investigations in the United States have converged to supercharge threats to elections worldwide. Each election will, of course, be affected by local issues, the cultural context, and the main parties’ policies. But each will also be challenged by global threats to electoral integrity and, by extension, democracy. Governments, companies, and civil society groups must invest to mitigate the risks to democracy and track the emergence of new and dangerous electoral threats. If they get to work now, then 2024 may be remembered as the year when democracy rallied. Elections take place within local contexts, in local languages, and in accordance with local norms. But the information underpinning them increasingly comes from global digital platforms such as Facebook, Google, Instagram, Telegram, TikTok, WhatsApp, and YouTube. Voters rely on these commercial platforms to communicate and receive information about electoral processes, issues, and candidates. As a result, the platforms exert a powerful sway over elections. In a recent survey by Ipsos, 87 percent of respondents across 16 countries with elections in 2024 expressed concern that disinformation and fake news could affect the results, with social media cited as the leading source of disinformation, followed by messaging apps. Although voters use these social media platforms, they are generally unable to influence the platforms’ decisions or priorities. Platforms are not obliged to fight information manipulation, protect information integrity, or monitor electoral environments equitably across the communities in which they operate. Nor are they focused on doing so. Instead, the largest U.S. technology companies are increasingly distracted. Facing declining profits, higher compliance costs, pressure to invest in AI, and increased scrutiny from governments around the world, leading companies such as Google and Meta have shifted resources away from their trust and safety teams, which mitigate electoral threats. X (formerly known as Twitter) has gone even further, implementing massive cuts and introducing erratic policy changes that have increased the amount of hate speech and disinformation on the platform. Some platforms, however, have begun to prepare for this year’s elections. Meta, for example, has announced that it will apply certain safeguards, as will Google, both globally and in the United States. 
Both companies are also seeking to maximize the use of generative AI-based tools for content moderation, which may offer improvements to the speed and scale of information monitoring. Newer platforms—such as Discord, TikTok, Twitch, and others—are beginning to formulate election-related policies and mitigation strategies, but they lack experience of operating during elections. Telegram, which is an established global platform, takes a lax approach to combating disinformation and extremism, while U.S.-centric platforms including Gab, Rumble, and Truth Social have adopted a hands-off strategy that promulgates extremism, bigotry, and conspiracy theories. Some even welcome Russian propagandists banned from other platforms. WhatsApp and other popular encrypted messaging platforms present their own unique challenges when it comes to reducing misuse because of the encrypted nature of the content being shared.

Tech platforms have neither the resources nor the resolve to properly monitor and address problematic content. Every digital platform has a different process for reporting disinformation, hate speech, or harassment—as well as a varying capacity to respond to those threats. Companies will invariably be confronted with difficult tradeoffs, especially when their employees’ personal safety is at stake. At the same time, revenue constraints, technological limitations, and political prioritization will result in a vast gap between resources aimed at supporting U.S. electoral integrity and those focused on other countries’ elections. The result will be that most nations will be neglected.
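To make concrete what the AI-assisted moderation described above might look like in miniature, here is a minimal sketch of a pre-publication filter that scores each post with a publicly available toxicity classifier and holds high-scoring posts for human review. This is an illustrative toy, not any platform's actual pipeline; the model choice and the 0.9 threshold are assumptions made for the example.

    # Minimal sketch of AI-assisted pre-publication moderation.
    # Not any platform's real pipeline; the model choice and threshold are
    # illustrative assumptions. Requires: pip install transformers torch
    from transformers import pipeline

    # unitary/toxic-bert is one publicly available toxicity classifier.
    classifier = pipeline("text-classification", model="unitary/toxic-bert")

    def screen_post(text: str, threshold: float = 0.9) -> str:
        """Score a post before publication; hold likely-toxic posts for review."""
        result = classifier(text[:512])[0]  # rough truncation for the model
        if result["label"] == "toxic" and result["score"] >= threshold:
            return "held_for_review"
        return "published"

    print(screen_post("You are all wonderful people."))  # -> "published"

A preemptive filter of this shape is what allows content to be removed "before anybody saw it," as the LeCun card later in this file describes.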

Easy to create misinformation with generative AI; information overload collapses trust and democracy

Duffy & Harbath, January 4, 2024, KAT DUFFY is Senior Fellow for Digital and Cyberspace Policy at the Council on Foreign Relations; KATIE HARBATH is the Global Affairs Officer at Duco Experts; Foreign Affairs, Defending the Year of Democracy; https://www.foreignaffairs.com/united-states/defending-year-democracyWhat It Will Take to Protect 2024’s 80-Plus Elections From Hostile Actors

Another threat to electoral integrity is the continued proliferation of powerful, publicly available generative AI tools. The level of expertise required to create and disseminate fake text, imagery, audio clips, and video recordings across multiple languages will continue to plummet, without any commensurate increase in the public’s ability to identify, investigate, or debunk this media. Indeed, events in 2023 demonstrated how easy it will be to generate confusion. In Slovakia, for example, 48 hours before the parliamentary elections, an AI-manipulated audio recording circulated in which the leader of the liberal Progressive Slovakia party discussed how to rig the election. The audio clip was released during a news blackout on political coverage, limiting the media’s capacity to cover the story or debunk it. A similar campaign was seen in Bangladesh, which is holding a hotly contested election on January 7. There, AI-generated news clips have been transmitted that falsely accuse U.S. officials of interfering with the election. The tools powering this misinformation cost as little as $24 a month, and fakes have also emerged in Poland, Sudan, and the United Kingdom, demonstrating the growing nature of the threat.

The greatest danger of generative AI tools on online platforms is not their capacity to generate absolute belief in fake information, however. It is that they have the capacity to generate overall distrust. If everything can be faked, nothing may be true. Media coverage to date has focused on the use of AI to target political parties or officials, but it is likelier that the most significant target this year will be trust in the electoral process itself. For those seeking to sow skepticism and confusion, there are abundant opportunities.

Hostile actors are positioned to take advantage of these opportunities. In August 2023, Meta announced its “biggest single takedown” of a Chinese influence campaign that targeted countries including Australia, Japan, the United Kingdom, and the United States. The campaign, which used thousands of accounts across numerous platforms, sought to bolster China and discredit the country’s critics. Beijing has also been active elsewhere, particularly in its attempts to affect the outcome of the Taiwanese election in January. Officials in Taipei have warned that China has “very diverse” ways of interfering in the election, including increasing military pressure and spreading fake news. Indeed, Beijing sought to do the same in 2019, and all countries can expect to be subject to some foreign interference efforts. As a recent Microsoft report noted, “Cyber operations are expanding globally, with increased activity in Latin America, sub-Saharan Africa, and the Middle East. . . . Nation-state actors are more frequently employing [information operations] alongside cyber operations to spread favored propaganda narratives. These aim to manipulate national and global opinion to undermine democratic institutions within perceived adversary nations—most dangerously in the contexts of armed conflicts and national elections.”

Since 2016, attempts to influence elections via online platforms have been met by a coalition of companies, external researchers, and governments working to analyze and understand electoral information dynamics. But those efforts are now under attack in the United States. Congressional investigations into these partnerships have been led by politically motivated lawmakers who are convinced that these collaborations are aimed at censoring conservatives’ speech. The pending Supreme Court decision in Murthy v. Missouri, which is due by summer, could result in the U.S. government being the only government in the world that is not at liberty to contact American social media platforms regarding electoral threats at home or abroad.
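For a sense of what screening for AI-manipulated audio of the Slovakia variety might involve, here is a toy sketch that runs a clip through an audio classifier and flags likely synthetic speech before it is amplified. The model name is a hypothetical placeholder, the "synthetic" label is an assumption, and real deepfake detection is far less reliable than this sketch implies.

    # Toy sketch of synthetic-speech screening -- not a production detector.
    # "some-org/synthetic-speech-detector" is a hypothetical placeholder model,
    # and the "synthetic" label is an assumption made for illustration.
    from transformers import pipeline

    detector = pipeline("audio-classification",
                        model="some-org/synthetic-speech-detector")

    def flag_clip(path: str, threshold: float = 0.8) -> bool:
        """Return True if a clip scores as likely AI-generated speech."""
        scores = detector(path)  # list of {"label": ..., "score": ...}
        synthetic = next((s for s in scores if s["label"] == "synthetic"), None)
        return synthetic is not None and synthetic["score"] >= threshold

    if flag_clip("leaked_campaign_audio.wav"):
        print("Hold for verification before reporting or sharing.")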

Courts have protected a broad interpretation of Section 230 now (court clog uniqueness)

Andreas Rivera Young, December 25, 2023, Brown Political Review, Sensible or Censorship?, https://brownpoliticalreview.org/2023/12/sensible-or-censorship/

While the Court has ruled on a couple of Section 230 cases, such as Gonzalez v. Google, the Biden case highlights something distinct: First Amendment issues with government coercion of social media companies. Section 230 cases focus on the Communications Decency Act, which shields social media companies from legal liability regarding most content posted on their sites. Gonzalez v. Google saw the Court punt on the question of corporate liability for algorithmic learning in recommending content, refusing to limit the scope of Section 230.

Meta has used AI to combat hate speech & misinformation

Yann LeCun, Chief AI Scientist @ Meta, AI “Godfather,” December 22, 2023, Wired, How Not to Be Stupid About AI, With Yann LeCun, https://www.wired.com/story/artificial-intelligence-meta-yann-lecun-interview/

Why are so many prominent people in tech sounding the alarm on AI?

Some people are seeking attention, other people are naive about what’s really going on today. They don’t realize that AI actually mitigates dangers like hate speech, misinformation, propagandist attempts to corrupt the electoral system. At Meta we’ve had enormous progress using AI for things like that. Five years ago, of all the hate speech that Facebook removed from the platform, about 20 to 25 percent was taken down preemptively by AI systems before anybody saw it. Last year, it was 95 percent.
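LeCun's figures track what Meta's transparency reports call a "proactive rate": the share of removed content that automated systems caught before any user flagged it. A minimal sketch of that arithmetic, using illustrative counts rather than Meta's actual figures:

    # Proactive rate = AI-preempted removals / total removals.
    # The counts below are illustrative only, not Meta's reported figures.
    def proactive_rate(ai_removals: int, total_removals: int) -> float:
        return ai_removals / total_removals

    print(f"Five years ago: {proactive_rate(22_500, 100_000):.0%}")  # ~23%
    print(f"Last year:      {proactive_rate(95_000, 100_000):.0%}")  # 95%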

You can keep Section 230 and say it doesn’t apply to generative AI

Ty Albright, December 13, 2023, News Talk KZRG, Hawley’s Bipartisan AI Bill to Empower Parents to Hold Big Tech Accountable Blocked on Senate Floor, https://newstalkkzrg.com/2023/12/13/hawleys-bipartisan-ai-bill-to-empower-parents-to-hold-big-tech-accountable-blocked-on-senate-floor/

Today U.S. Senator Josh Hawley (R-Mo.) delivered remarks on the Senate Floor and called for unanimous consent to pass his bill, the No Section 230 Immunity for AI Act. This legislation would clarify that Section 230 immunity does not apply to claims related to generative AI, ensuring consumers have the tools they need to protect themselves from harmful content produced by the latest advancements in AI technology. “We have seen what [Big Tech companies] do with their subsidy from government when it comes to social media […] [Big Tech companies] censor the living daylights out of anybody they don’t like […] This government protects [Big Tech],” said Senator Hawley. He continued, “[This bill] just says that these huge companies can be liable like any other company—no special protections from government […] It just breaks up the Big Government, Big Tech cartel. That’s all it does, and it says parents can go into court, same terms as anybody else, and make their case.” Senators Hawley and Richard Blumenthal (D-Conn.) – the Ranking Member and the Chair of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, respectively – introduced the No Section 230 Immunity for AI Act in June to put power in the hands of consumers and give Americans impacted by nascent AI technology their day in court to hold Big Tech companies accountable.