Making the AI Discussion More Human

The term "artificial intelligence" can conjure up any number of thoughts. Some may think of a home device providing instant access to weather and news. Others may find their minds going to technologies that aid in policing, employment, and other important aspects of our lives. These varied associations can create confusion about the role AI plays in our lives and about how regulations to govern it should be developed.

Earlier this year, Eileen Donahoe and Megan MacDuffee Metzger published the essay "Artificial Intelligence and Human Rights" in the Journal of Democracy. Donahoe, former U.S. ambassador to the UN Human Rights Council in Geneva, serves as executive director of the Global Digital Policy Incubator and adjunct professor at Stanford University's Center on Democracy, Development, and the Rule of Law. Metzger is a research scholar and associate director for research at the Global Digital Policy Incubator.

The essay lays out strategies "to figure out how to protect and realize human rights in our new AI-driven world." Donahoe and Metzger joined us for a Q&A about the essay.

How did the development of this essay come about?

We are at an absolutely critical moment when it comes to both the development of artificial intelligence (“AI”) systems and the development of regulations to govern AI. On the one hand, AI is already widely deployed in increasingly important areas of our lives, ranging from policing and criminal justice, to decisions around insurance and credit, to employment decisions, and even to various realms of security. These are realms that have very real effects on people’s lives.

On the other hand, very little has been done to regulate the development or application of these systems or to mitigate potential harms. Many of the systems already in use have been found to be biased. Others don’t improve on the performance of humans, but are adopted on the presumption that they are “data-based” and therefore more accurate than human decision-making. It is critical that we focus on mitigating the negative impacts of AI, especially when AI is used in governance decisions that affect citizens’ rights.

We wrote this essay in an attempt to both outline the problem and suggest part of the solution. We argue that the best way to approach regulation and governance of AI is to apply the international human rights framework. This framework keeps the human person at the center of the conversation and speaks to many of the most pressing concerns raised by deployment of AI. It also provides structure for understanding the roles and responsibilities of both governments and the private sector. Perhaps most importantly, the existing human rights framework enjoys widespread global legitimacy, which is essential in our globalized, digitized world. 

Do you think most people truly understand the reach and risks of AI?

Definitely not. Generally speaking, people are not aware of the extent to which AI and machine processing are already embedded in many important dimensions of their lives. For example, the general public is unaware that AI is already used in the criminal justice system to help determine eligibility for pre-trial release or parole. Many users of social media platforms do not realize that “AI” algorithms determine what information they see in their newsfeeds, or the order in which search results are displayed.

To be honest, there isn’t even widespread consensus on the meaning of the term AI, let alone an understanding of the reach and risks of the technology. People use the term “AI” in many different ways. Some hear “artificial intelligence” and imagine an army of killer robots from their favorite sci-fi movie. Others think of a future where robots replace people working in fast food restaurants. Most couldn’t tell you the many ways that algorithmic decision-making already impacts daily life.

In our essay, we use the term “AI” broadly to refer to any machine decision-making process that would otherwise be made by humans. Our goal was to speak to the whole spectrum of algorithmic decision-making currently deployed throughout digital societies. We use this broad definition because we felt it was the best way to encompass the totality of the conversation.

Simply put, people do not understand either the full reach of AI or the risks it potentially poses. Those whose vision of AI is futuristic and dark may not recognize the extraordinarily positive potential of AI if it is carefully regulated and deployed. Others are perhaps too trusting and optimistic, and not as concerned as they should be about the very real risks these new technologies already expose us to. Our approach is to caution against both over-exuberant optimism and overzealous restriction of the technology.

What are the best steps to apply existing human rights norms to protect people against the risks of AI?

Many of the principles for mitigating the risks of AI already exist in the international human rights framework. Accordingly, one of the most important first steps is to clearly articulate how we can apply these principles in the context of digitized, AI-driven societies. Some of this work has already been started by people like the UN Special Rapporteur on Freedom of Expression, David Kaye, and by research and civil society groups like Data & Society and Access Now. Their efforts reflect the need to operationalize what it means to respect privacy, protect against bias, and ensure freedom of information in the context of algorithmic decision-making and AI.

From a practical standpoint, private sector companies can start by conducting human rights impact assessments at all stages in the development process for their AI applications, products and services. It isn’t enough to wait until a product is ready for market. Instead, assessments should be done throughout the development process to ensure that the risks of new technologies are identified and addressed. Part of the reason for this approach is to ensure that companies spot problems before they reach the point of no return, where enormous capital has already been invested. Post-hoc impact assessments may lead companies to ignore or gloss over issues. If human rights impacts are considered throughout the product development process, those considerations can be integrated and the product is more likely to meet human rights standards.

Once the product is ready for market, it is also critical to consider the context and uses for which the product is sold. For example, Microsoft has been candid about its willingness to sell its facial recognition technology for some uses and not for others. Technology that may be appropriate and useful in one context may carry a very high risk of violating rights in another, and this will vary based on the specific technology. Companies should be conducting these kinds of assessments throughout both their product and sales processes.

Private sector human rights impact assessments are necessary, but they are not sufficient. As governments begin to craft regulation around AI, they must lean on the international human rights framework as a guide. Governments must also reflect on their own use of AI in government decision-making and assess those applications from a human rights vantage point. Countries should coordinate on these standards, but human rights must form the foundation as we move forward with new regulations around AI.

How difficult will it be to bring so many different groups to the same understanding of how to protect human rights as AI continues to expand?

Admittedly, the geopolitical context surrounding AI regulation is very challenging. A big motivation for writing this article was to address skepticism about the feasibility of adhering to human rights principles in the digital realm. It is never easy to bring different stakeholders with divergent incentives and goals to a consensus, but the international community needs a common set of norms and rules for artificial intelligence. Most stakeholders are seeking to create a global market for AI technology that promotes innovation and minimizes risk to human beings. Relying upon the universal human rights framework is the best way to ensure that we capitalize on the benefits of AI while protecting against the risks. Convincing stakeholders to coalesce around existing international human rights norms, which already enjoy wide global legitimacy, is a more realistic approach than devising a new set of guidelines from scratch.

We recognize it won’t be easy, but we believe that convincing skeptical countries and stakeholders of the feasibility of using the human rights framework for governance of AI is essential for the future of AI. 
