
Applying Ethical Principles to Technologies... Finally!

When I was hired into my first government job, I had to sign a lot of paperwork, including forms acknowledging that I had read the city’s conflict of interest laws and related ethics rules. I didn’t understand their importance at first, but as my career in civil service continued, I witnessed how a few individuals undermined the public trust, from minor infractions to outright corruption. As I earned promotions to more senior positions with more authority, I didn’t just have to sign forms acknowledging that I had read ethical statements; I also had to undergo background investigations, writing more than one hundred pages of answers to questions that included the routine disclosure of my personal finances. The efforts to evaluate my trustworthiness increased as my access to taxpayer funds and decision-making authority increased (along with, of course, the possibility of abusing both). For governments to function well, it’s important that “we the people” trust that this power is not, and will not be, abused. These tools protected me, the government, and most importantly, the public I served.

For many decades, governments have used technology to track things more accurately, make operations more efficient, and make factually informed decisions. Today we are entering a new era in which government leaders are increasingly shifting decision-making authority away from humans and toward technology, especially as Artificial Intelligence (AI) technologies are on the rise. However, we do not investigate these technological tools as carefully as we investigate the people in whom we vest authority. This is partly because we don’t have a clear set of ethics that we would like our technology to follow (for example, “treat everyone equally” vs. “treat everyone equitably”). But it is also partly because government leaders haven’t had access to tools that consistently and systematically investigate AI to understand where it might succeed or fail. From social media to social services, we hear stories on a daily basis of how algorithms are ruining people’s lives and further fueling mistrust – often among those who are already marginalized.

This is why the work that Joy Bonaguro, Miriam McKinney, Dave Anderson, Jane Wiseman, and I have accomplished since February is so essential. On Sunday, September 16, 2018, at the Data for Good Exchange (D4GX) in New York City, we publicly released our ethics and algorithms toolkit for governments (and others too!). The toolkit is designed to help government managers characterize the risks associated with the transfer of power to automated tools and, hopefully, to manage those risks appropriately. It is a first step in what we expect will be a long road toward applying the same ethical principles to our technologies that we apply to human beings.

At D4GX, we conducted a workshop where we invited participants to apply the toolkit to a problem they were working on or to a sample scenario we supplied. After just one hour in a room together, we gathered a lot of great feedback from the participants and are very grateful for the dialogue. We have already assembled the resulting recommendations and will apply them to the toolkit in the near future. Next month we will conduct a similar workshop at the 2018 MetroLab Summit with government leaders and their academic partners. We look forward to even more feedback, but more importantly we hope that those attending the workshop will continue to apply the toolkit to their projects beyond the time they spend with us.

After all, maintaining standards of ethics doesn’t just require that people fill out a form and sign it; it requires that investigations happen when there is suspicion of harm – whether intentional or not. It requires recognizing when the public trust is being undermined and figuring out how to restore that trust. This toolkit, therefore, is not simply a checklist to be completed at the start of an AI-enabled project. Rather, it is a “living” guide to be used continuously for conversations and management practices while the technology is in public service.

We also believe that the toolkit, with some adjustments, has applications well beyond the public sector. For example, medical practitioners could apply a version of the toolkit when using solutions like IBM Watson Health to help diagnose and treat patients. Venture capital firms could use a different version to ensure that their automated investment decision-making is in line not just with their organizational mission and values, but with those of the communities they will affect. Large technology companies might even use yet another version to think through how the products they build might unintentionally reinforce filter bubbles or deepen marginalization.

If you are interested in working with the toolkit and need assistance, we would love to help! Please contact us at govex@jhu.edu, through our contact page, or on Twitter. You can also reach us through the toolkit website, https://ethicstoolkit.ai/
