Most US Adults Concerned About AI’s Impact on Election Misinformation


Summary: A recent poll reveals that a majority of US adults believe the use of artificial intelligence (AI) tools in the 2024 presidential election will contribute to the spread of false and misleading information. Concerns center on AI’s ability to micro-target political audiences, generate persuasive messages, and produce realistic fake images and videos. The use of AI by candidates is widely viewed as harmful: clear majorities say it would be bad for candidates to create false or misleading media, edit photos or videos for ads, tailor political ads to individual voters, or answer voters’ questions via AI chatbots. Republicans and Democrats alike express this pessimism. The debate comes as the Federal Election Commission considers regulating AI-generated deepfakes in political ads ahead of the 2024 election.

Concerns Over AI’s Impact on Election Misinformation

A recent poll conducted by The Associated Press-NORC Center for Public Affairs Research and the University of Chicago Harris School of Public Policy reveals that a majority of US adults are concerned about the role of artificial intelligence (AI) tools in spreading false and misleading information during the 2024 presidential election. The poll found that 58% of adults believe AI tools will increase the spread of misinformation in next year’s election. These tools can micro-target political audiences, mass-produce persuasive messages, and generate realistic fake images and videos in a matter of seconds.

Skepticism Towards Candidates Using AI

The poll also shows that a significant majority of American adults, regardless of political affiliation, believe candidates should not be using AI in certain ways. Clear majorities said it would be detrimental for presidential candidates to create false or misleading media for political ads, edit or touch up photos and videos for political ads, tailor political ads to individual voters, or answer voters’ questions via AI chatbots. Majorities of both Republicans and Democrats expressed concern over each of these uses.

Examples of AI Use in the 2024 Presidential Election

The concerns over the use of AI in elections are not unfounded. During the 2024 Republican presidential primary, AI was already deployed in campaign advertisements. The Republican National Committee released an entirely AI-generated ad depicting a dystopian future if President Joe Biden were reelected, using realistic-looking but fake images to stoke alarm. Similarly, the campaign of Florida Gov. Ron DeSantis, a Republican presidential candidate, shared AI-generated images, including one that appeared to show former President Donald Trump hugging Dr. Anthony Fauci. These examples highlight the potential for AI to be used in misleading ways and further fuel concerns about its impact on election misinformation.

Bipartisan Agreement on Regulations and Labeling

Despite some differences in opinion, there is bipartisan agreement on the need for regulations and labeling when it comes to AI-generated content. The poll found that a majority of American adults are open to regulations that would ban or label AI-generated content. For example, about two-thirds favor the government banning AI-generated content that contains false or misleading images from political ads. A similar number want technology companies to label all AI-generated content made on their platforms. These findings indicate a shared concern among Americans about the potential risks associated with AI-generated content and a desire for transparency and accountability.

Responsibility for Addressing AI-Generated Misinformation

The poll also explored who should bear responsibility for preventing AI-generated false or misleading information during the 2024 presidential election. Nearly two-thirds of Americans say the technology companies that create AI tools bear a significant responsibility, including for banning or labeling AI-generated content. Additionally, about half of respondents believe the news media, social media companies, and the federal government also have a significant role to play. While Republicans and Democrats differ on some specifics, there is overall agreement that responsibility for combating AI-generated misinformation is shared.

Public Perception of AI Chatbots and Information Sources

The poll revealed a high level of skepticism among Americans toward AI chatbots and the information they provide. Just 5% of respondents said they are extremely or very confident that information from AI chatbots is factual, while 61% said they are not very or not at all confident in its reliability. When seeking information about the presidential election, most adults rely on traditional sources such as the news media, friends and family, and social media rather than AI chatbots. This suggests that while AI has its uses, it is not currently seen as a trusted source of information.

The Future of AI in Politics

The concerns raised by the poll reflect the growing debate over the role of AI in politics and its impact on election misinformation. While Americans are skeptical of certain political uses, many recognize AI’s potential benefits elsewhere, viewing tasks like historical research and creative brainstorming positively. In the political realm, however, the majority remain wary of AI being used in misleading and manipulative ways. As the 2024 presidential election approaches, the regulation and responsible use of AI in politics will continue to be important topics of discussion.

Tags: AI, election misinformation, 2024 presidential election, poll, regulations, labeling, candidates, deepfakes, technology, misleading information