City AI Connection: AI and Misinformation

Aug 2, 2024 | Blog

City AI Connection is a series of posts on emerging ideas in artificial intelligence, with a focus on government applications. These articles are shared first with the City AI Connect community and subsequently with a broader audience via the website of the Bloomberg Center for Government Excellence at Johns Hopkins University, govex.jhu.edu. City AI Connect is a global learning community and digital platform for cities to pilot and advance the use of generative artificial intelligence to improve public services. The community is open exclusively to local government employees. To join, visit cityaiconnect.jhu.edu.

By Melanie Veneron

Increasingly, Artificial Intelligence (AI) plays a major role in both the creation and dissemination of misinformation. It is imperative that city governments invest in strategies to mitigate misuse by bad actors and prevent harm in communities.

Understanding the Role of AI in the Spread of Misinformation

Earlier this year, fraudsters impersonated a company's CFO and other staff using deepfakes—incredibly realistic digital fakes of humans—to steal $25 million. As AI technology evolves, stories like this are becoming more common. Deepfakes and other manipulated content are so effective that it is hard to distinguish between real and fake media. The technology is also becoming more accessible and affordable, enabling even unskilled people to produce convincing fake audio and video. Dazhon Darien, a former athletic director at Pikesville High School in Baltimore County, allegedly used easily accessible tools to create a fake voice recording of the school's principal making bigoted comments.

Newer AI algorithms are not only more accessible; they can also target specific user groups. Micro-targeted misinformation personalizes content to align with users' preexisting beliefs, making it more persuasive. Generative language models automate the production of compelling, misleading content at scale, flooding media channels and making "influence operations"—deceptive efforts to sway the opinions of a target audience—more sophisticated and harder to detect as they drown out factual information. These dynamics were perhaps most obvious during the spread of misinformation around COVID-19, when echo chambers reinforced beliefs that the COVID vaccine would harm people by altering their DNA, weakening their immune systems, or implanting a microchip to control them.

Beyond the direct harms of creating and spreading misinformation, the use of AI to spread deceptive stories undermines public trust, allowing authentic content to be dismissed as fake. This is known as the liar's dividend. Some notable examples: former Spanish Foreign Minister Alfonso Dastis claimed that real images of police violence in Catalonia were fake, and Warren, Michigan, Mayor Jim Fouts called audio tapes of him making derogatory comments toward women and Black people "phony, engineered tapes" despite expert confirmation of their authenticity.

Strategies for Mitigation

City governments can play a distinct and important role in preventing and disrupting the spread of misinformation in their communities through education, enhanced verification processes, stronger detection capabilities, and employing AI experts.

Public Education and Awareness: City governments are well positioned to educate the public about the existence and risks of AI-generated misinformation. Initiatives such as workshops, online resources, and collaborations with local media spread awareness, and programs that encourage critical thinking and media literacy help the public better discern authentic content from fake. Because misinformation predictably spreads during a crisis, city governments can partner with local media to "pre-bunk" information and inoculate residents against future conspiracy theories.

Policy and Regulation: City governments should develop and enforce internal policies that require transparency in AI usage in government processes, ensuring constituents are aware when AI-generated content is used. They should also implement strict guidelines and best practices for AI usage in government communications to maintain accuracy and accountability.

Enhanced Verification and Detection Processes: City governments should invest in multi-factor authentication, robust verification protocols for financial transactions, and advanced detection tools and technologies to help identify AI-generated content and deepfakes. City governments would benefit from collaborating with technology companies and research institutions to stay updated on the latest detection methods.
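The verification protocols mentioned above can be as simple as requiring that high-risk requests carry proof from a second, independent channel. The sketch below (a minimal Python illustration; the function names and placeholder secret are hypothetical, not from this post) shows the idea: a payment request is honored only if it carries a valid HMAC signature over the transaction details, so a convincing deepfaked voice or video call alone can never authorize a transfer.

```python
import hmac
import hashlib

# Illustrative placeholder only: a real deployment would fetch this from
# a key-management service, never hard-code it.
SHARED_SECRET = b"replace-with-secret-from-key-management"

def sign_request(amount_cents: int, recipient: str) -> str:
    """Sign the canonical transaction details with the shared secret."""
    message = f"{amount_cents}:{recipient}".encode()
    return hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()

def verify_request(amount_cents: int, recipient: str, signature: str) -> bool:
    """Constant-time check that the signature matches the details."""
    expected = sign_request(amount_cents, recipient)
    return hmac.compare_digest(expected, signature)
```

A request whose details are altered in transit (for example, a changed recipient) fails verification, regardless of how persuasive the accompanying audio or video appears.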

Invest in AI Experts: Employing AI experts who are committed to the residents they serve is crucial to building agile and sustainable strategies. As with other technology-dependent processes, incorporating AI experts into policy and procurement work can not only save time and money but also improve implementation effectiveness and outcomes.

By understanding the ever-evolving capabilities of AI in creating and spreading misinformation, and by implementing effective, proactive measures, city governments can better protect their communities from the risks associated with this technology.


Melanie Veneron is a data visualization designer at GovEx.