The Risks of Generative AI: Familiar Challenges and Emerging Threats

May 29, 2024 | Blog

City AI Connection is a series of posts on emerging ideas in artificial intelligence, with a focus on government applications. These articles are initially shared exclusively with the City AI Connect community and subsequently with a broader audience via the Bloomberg Center for Government Excellence at Johns Hopkins University website, govex.jhu.edu. City AI Connect is a global learning community and digital platform for cities to trial and advance the use of generative artificial intelligence to improve public services. The community is open exclusively to local government employees. To join, visit cityaiconnect.jhu.edu.

By Andrew Nicklin

As advisors to government leaders on treating data as a strategic asset, GovEx staff have had a front-row seat to the rapid rise of generative AI technologies like ChatGPT and Runway. These tools have captured the public imagination, promising to revolutionize everything from content creation to data analysis to problem-solving. But as with any transformative technology, there are significant societal risks that local government leaders and their constituents must grapple with.

In many ways, the challenges posed by generative AI echo those we’ve seen with previous waves of technological change. Just as the internet and social media platforms disrupted traditional media and information flows, generative AI has the potential to upend entire industries and professions. We’ve seen similar dynamics play out over the last two decades as, for example, open data, business intelligence platforms, and blockchain each became the center of attention. As with these previous innovations, we can expect to see a large number of new companies competing for both our attention and our funding, followed by a consolidation as the AI market matures.

A key difference between AI and prior technological advances, however, is the pace and scale of change. Generative AI models are advancing at a breathtaking rate, with capabilities that can seem almost magical. And their potential applications are vast, spanning everything from education and healthcare to public safety and civic engagement.

With great power comes great responsibility.

Recent research suggests that trust in and reliance on automated decision systems increase the longer they are used, even when those systems are biased. And even as we encourage the use of feedback collection tools to help maintain fairness, researchers caution that such tools can amplify systemic biases too, leading to unwarranted social media bans or even denied insurance claims. For AI implementations that can have significant, lasting impacts on constituents, then, we need to think about independent evaluations conducted by academic or other third parties, along with formalized appeal processes.
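
To make the feedback-loop concern concrete, here is a toy simulation in Python. Every detail of it (the group names, the violation rate, the scoring model, and the rule that each flag makes the system slightly more suspicious of that group) is an illustrative assumption rather than a result from the research above:

```python
import random

random.seed(1)

TRUE_RATE = 0.10                     # both groups actually violate at the same rate
thresholds = {"A": 0.58, "B": 0.60}  # but the model starts slightly stricter on group A
LEARN_RATE = 0.0001                  # how far each flag nudges the model's suspicion

for rnd in range(1, 11):
    flags = {"A": 0, "B": 0}
    for group in ("A", "B"):
        for _ in range(1000):  # 1,000 posts per group per round
            violating = random.random() < TRUE_RATE
            # imperfect suspicion score: violations tend to score higher, with noise
            score = random.gauss(0.7 if violating else 0.4, 0.15)
            if score > thresholds[group]:
                flags[group] += 1
                # the feedback loop: each flag feeds the next round of training,
                # making the model slightly more suspicious of this group
                thresholds[group] -= LEARN_RATE
    print(f"round {rnd:2d}: flags A={flags['A']:4d}  B={flags['B']:4d}")
```

Both groups misbehave identically, yet the gap in flags widens round after round, and nothing in the loop ever corrects it. Catching that kind of drift is exactly what independent, third-party evaluations and formal appeal processes are for.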

Decisions, however, aren’t the only consideration. What are the consequences when a system also uses AI to present the options from which a decision is made? In our private lives this happens all the time: Amazon’s product search results, Netflix’s “up next” recommendations, Google Maps’ travel options, or the next video you see on TikTok. But what about when defense attorneys need to sift through huge volumes of bodycam footage, or police departments want to automatically identify problematic interactions? At scale, those results can significantly influence criminal justice outcomes.

We clearly need reliable tools to distill vast quantities of data and help us prioritize the more important stuff, but we simultaneously place our trust in their ability to provide an accurate and complete set of options for further investigation. (Answer honestly: when was the last time you clicked through more than two pages of Google Search results?)
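
That worry can be sketched in a few lines of code. Purely as an illustration (the clip counts, the scoring model, and the top-20 cutoff are all assumptions, not a description of any real system), suppose a reviewer triaging 10,000 bodycam clips only ever sees the 20 an AI ranker scores highest:

```python
import random

random.seed(7)

N_CLIPS, N_RELEVANT, TOP_K = 10_000, 50, 20

# label the first 50 clips as the ones that actually matter
clips = [{"id": i, "relevant": i < N_RELEVANT} for i in range(N_CLIPS)]
for clip in clips:
    # imperfect ranker: relevant clips usually, but not always, score higher
    clip["score"] = random.gauss(0.75 if clip["relevant"] else 0.50, 0.12)

ranked = sorted(clips, key=lambda c: c["score"], reverse=True)
surfaced = ranked[:TOP_K]  # all a human reviewer ever sees
found = sum(c["relevant"] for c in surfaced)

print(f"relevant clips in the top {TOP_K}: {found} of {N_RELEVANT}")
```

Even with a reasonably accurate ranker, most of the relevant clips never reach a human: the cutoff alone caps recall at 40 percent (20 slots for 50 relevant clips), and the model’s scoring errors push it lower still. Whatever the ranker leaves off the first page might as well not exist.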

This raises a host of novel challenges for municipal governments. For one, the opacity of these AI systems makes it difficult to audit their outputs or understand how they arrive at a particular set of options or a decision. And despite the trend in city generative AI policies of keeping a human responsible for final decisions, city governments are rapidly deploying constituent-facing chatbots, which carry risks of their own. There are also concerns about the environmental impact of the massive computational power required to train and run these models. And as AI becomes more sophisticated, the potential for malicious use only grows, from financial fraud to political disinformation and manipulation.

Addressing these unique risks will require a multifaceted approach, drawing on expertise from technologists, ethicists, policymakers, and the public. We must also invest in building public understanding and digital literacy, so that constituents can critically evaluate automated decision systems, appeal their decisions, and push for equitable outcomes.

Ultimately, the societal impact of generative AI will depend on how we choose to harness and regulate these powerful technologies. By learning from past technological disruptions and proactively addressing emerging threats, we can steer a course that maximizes the benefits of AI while mitigating the risks. It’s a complex challenge, but one that is essential for ensuring a thriving, equitable, and resilient future.

Andrew Nicklin is a Senior Research Data Manager for GovEx.