Gabby Miller is a staff writer at Tech Policy Press.
“We are concerned about the fact that in 2024, platforms will have fewer resources than they did in 2022 or 2018, and what we see is platforms asleep at the wheel again,” said Yoel Roth, the former head of Twitter’s trust and safety team, who spoke earlier this week on a panel hosted by the UCLA School of Law. The event focused on how platforms should address election speech and disinformation in the lead-up to an election year of historic scale globally: at least 65 national elections will be held in more than 50 countries in 2024, including the United States, Ukraine, Slovakia, Taiwan, and the European Union.
These elections are being planned amid heightened political and technological uncertainty and could cause “a lot of disruption,” said Katie Harbath on the same UCLA panel. Harbath, who previously served as public policy director for global elections at Meta, noted that major platforms including TikTok, Discord, and Twitch are building new tools to combat election disinformation even as other major platforms adjust their policies. All this despite the many unknowns that remain about the potential for artificial intelligence to further complicate platform governance.
Another variable is the extent to which generative AI disrupts political discourse. The rapidly evolving technology has made it easier than ever for bad actors and political opponents to create low-cost, increasingly convincing “deepfake” images and videos. The Federal Election Commission and Congress have both expressed a desire to crack down on the use of deepfakes in political ads, but the chances of federal legislation passing before the 2024 presidential election, while not zero, are considered low. (States such as California and Texas have passed bans on the use of deepfakes in state-level elections, but critics say they are difficult to enforce and raise First Amendment concerns.)
And not all governments can or should be trusted to regulate these technologies, especially repressive regimes that engage in “heavy censorship in the name of countering disinformation.” That warning accompanies a new framework announced Wednesday called “Democracy by Design,” which calls for a “content-agnostic” approach that prioritizes product design and policy over a game of content moderation, with the aim of protecting freedom of expression and the integrity of elections. The framework takes a three-pronged approach in its appeal to Big Tech platforms, with recommendations to strengthen resilience, counter election manipulation, and leave a “paper trail” that promotes transparency.
So far, a coalition of 10 civil society organizations has signed on to the framework, including Accountable Tech, the Center for American Progress (CAP), and the Electronic Privacy Information Center (EPIC). To directly combat election manipulation, the coalition proposes banning uses of generative or manipulated AI media that depict election fraud or misrepresent public figures, as well as ads micro-targeted at voters using their personal data, and it calls for strong disclosure standards for political advertising featuring AI-generated content. Algorithmic systems that determine users’ feeds could also be made “opt-in” during election periods, the coalition suggests.
Civil society organizations are also highlighting the role Big Tech plays in strengthening or weakening democracy. Vulnerabilities in social media often stem from platform architecture, and the very designs that maximize engagement and promote frictionless user experiences can “distort discourse and undermine democracy,” the coalition says. Its proposal suggests that soft interventions could help mitigate these threats, such as introducing “viral circuit breakers,” limiting “rampant re-sharing” during elections, and creating clear, well-defined strike regimes. Most platforms already use strike systems, but the coalition wants tech companies to open these systems up for oversight.
However, it’s not just platform design and content moderation policies that will influence the next election. How platforms respond will also play out against the backdrop of a campaign, waged primarily by the political right, to pressure technology companies over the alleged suppression of online speech.
Last week, the Center for Democracy & Technology (CDT) released a report detailing how economic, technological, and political trends are challenging efforts to counter election disinformation in the United States. The report, titled “Seismic Shifts,” is based on interviews with more than 30 tech company employees, independent researchers, and advocates, and examines the growing challenges they face in their daily work. It also recommends steps that leaders of counter-disinformation initiatives can take to weather the storm, including a shift to year-round harm-reduction strategies such as “prebunking” and a focus on mitigating the effects of disinformation superspreaders rather than individual pieces of content.
What happened to Yoel Roth after he left Twitter is one of the most high-profile examples of this kind of systematic harassment. In a recent New York Times op-ed, Roth characterized the barrage of attacks from X owner Elon Musk, former President Donald Trump, Fox News, and others not as a personal vendetta or “cancel culture,” but as a coordinated strategy to discourage platforms from making controversial moderation decisions for fear of partisan attacks on companies and their employees. (Since the online assault began, including Musk baselessly suggesting that Roth condones pedophilia, Roth has been forced to move multiple times due to physical threats against him and his family, and has even hired armed guards to protect his home.)
But more common are attacks on rank-and-file researchers who have drawn the ire of billionaire platform owners and elected officials. This summer, Musk’s X filed a lawsuit against the Center for Countering Digital Hate (CCDH), accusing the nonprofit of misusing X data in research showing how hateful content had increased under Musk’s ownership. Shortly after, Musk threatened legal action against the Anti-Defamation League over its campaign pressuring advertisers to leave the platform, which cited rampant hate speech and antisemitism under his leadership. Then there was the “Twitter Files,” Musk’s ham-fisted attempt to expose Twitter’s “liberal bias”: Musk handed documents to hand-picked writers, who produced reports with few meaningful new facts and a few factual errors. And that’s just a small part of what’s going on at X.
Conservative Freedom Caucus Rep. Jim Jordan (R-OH) and his ilk, who have been characterized as Musk’s attack dogs, are escalating their campaign against disinformation researchers. Last week, the Washington Post reported that academics, universities, and government agencies are buckling under the weight of systematic legal and political threats from Republican lawmakers and state governments, overhauling and in some cases dismantling research programs aimed at countering online misinformation.
The courts could also permanently restrict certain government communications with platforms and researchers, according to CDT’s “Seismic Shifts” report. Take Missouri v. Biden, currently making its way through the courts: the lawsuit accuses the White House and federal agencies of colluding with tech companies to remove objectionable content and suppress free speech on their platforms during the pandemic, and it could ultimately result in court-imposed limits on communications between the Biden administration and social media platforms about content.
However, there are signs that researchers intend to continue studying these issues. Kate Starbird, co-founder of the University of Washington’s Center for an Informed Public, recently wrote that despite legal challenges, some of the center’s previous election projects remain the focus of online conspiracy theories, lawsuits, and Congressional investigations that have “grossly misrepresented” its activities, yet dozens of researchers are still working to identify the harms of misinformation and election manipulation.
And despite mass layoffs across the tech industry and the gutting of many platforms’ trust and safety teams, including X’s election integrity team, which Musk had previously promised to expand but which reportedly saw its numbers cut in half on Wednesday, Harbath finds some comfort in the employees who remain. “I don’t think we should forget that there are people out there who are trying to do this work with the resources they have, and we shouldn’t think of this as an all-or-nothing problem,” Harbath said at the UCLA School of Law event. “As you go about all of this, be responsible and don’t panic.”