Algorithmic Accountability for the Public Sector: Learning From the First Wave of Policy Implementation

This piece was originally posted on Medium by the AI Now Institute, in collaboration with the Ada Lovelace Institute and the Open Government Partnership (OGP). The organizations are partnering to launch the first global study evaluating this initial wave of algorithmic accountability policy.

Governments are increasingly turning to algorithms to automate decision-making for public services. Algorithms might, for example, be used to predict future criminals, make decisions about welfare entitlements, detect unemployment fraud, decide where to send police, or assist urban planning*. Yet growing evidence suggests that these systems can cause harm and frequently lack transparency in their implementation, including opacity around the decisions about whether and why to use them. Many algorithmic systems lack political accountability as they replicate, amplify, and naturalize discrimination against people who’ve borne the brunt of historical oppression and discrimination. They also facilitate new forms of privacy intrusion, producing determinations about people that have lasting echoes throughout their lives. These determinations can be hard to contest, and are often illegible to those whose lives they shape. This has spurred heated opposition across continents from researchers, civil society groups, organized tech workers, and communities directly impacted by these systems.

In recognition of this crisis, policymakers have turned to regulatory and policy tools, hoping to ensure ‘algorithmic accountability’ across countries and contexts. With many challenges and open questions arising from their early stages of implementation, the AI Now Institute, the Ada Lovelace Institute, and the Open Government Partnership (OGP) are partnering to launch the first global study evaluating this initial wave of algorithmic accountability policy.

While there have been some efforts to evaluate algorithmic accountability within particular institutions or contexts (e.g., the Shadow Report to the NYC Automated Decision Systems Task Force and OGP’s informal ‘Open Algorithms’ network), there have been few systematic and cross-jurisdictional studies of the implementation of these policies. This project aims to understand the challenges and successes of algorithmic accountability policies by focusing on the experiences of the actors and institutions directly responsible for their implementation on the ground.

Combining our respective organizations’ work in the field, this project will provide practical guidance to policymakers and frontline bureaucrats at the helm of the latest wave of algorithmic accountability policy.

Through this project, we aim to:

  1. Review the existing policies for algorithmic accountability in the public sector to understand their challenges, successes, and how they were implemented. These include Algorithmic Impact Assessments, Algorithmic Audits, Algorithm/AI registers, and other measures intended to increase transparency, explainability, and public oversight.
  2. Provide practical guidance for policymakers and bureaucrats to design and implement effective policies for algorithmic accountability.
  3. Identify critical questions and directions for future research on algorithmic accountability that can inform and address the challenges emerging from contexts where policies are already being trialed.

As a final output, in late Summer 2021, we will release a comprehensive report that reviews existing algorithmic accountability policy frameworks and provides practical guidance to the policymakers, bureaucrats, and agencies responsible for implementing them. Based on these insights, we will also release a separate piece of research that outlines future directions for the lively research community in this field.

Have questions or are you working on algorithmic accountability design, legislation, or research? We’d love to hear from you: Please get in touch with Divij Joshi, Lead Researcher for this project, at divij[dot]joshi[at]gmail[dot]com.

Team:

Project Leads: Jenny Brennan is a Senior Researcher at the Ada Lovelace Institute; Tonu Basu is the Deputy Director of Thematic Policy Areas at the Open Government Partnership; and Amba Kak is the Director of Global Policy & Programs at the AI Now Institute at New York University.

Lead Researcher: Divij Joshi is a lawyer and researcher interested in the social, political and regulatory implications of emerging technologies and their intersections with human values.

About the partners:

For the AI Now Institute, law and policy mechanisms are a key pathway toward ensuring that algorithmic systems are accountable to the communities and contexts they are meant to serve. This research builds upon a wider body of work including our framework for Algorithmic Impact Assessments (AIA) and the Algorithmic Accountability Toolkit. In the spirit of proactive engagement with the policy process, alongside a broad civil society coalition, we also published the Shadow Report to the New York City Automated Decision Systems (ADS) Task Force to detail accountability mechanisms for various sectors of the city government.

For the Ada Lovelace Institute, this research forms part of their wider work on algorithm accountability. It builds on existing work on tools for assessing algorithmic systems, mechanisms for meaningful transparency on use of algorithms in the public sector, and active research with UK local authorities and government bodies using machine learning.

For the Open Government Partnership, a partnership of 78 countries and 76 local jurisdictions, advancing transparency and accountability in digital policy tools is a critical part of a country’s open government agenda. OGP members work with civil society and other key actors in their countries to co-create and implement OGP action plans with concrete policy commitments, which are then independently monitored for ambition and completion through the OGP’s Independent Reporting Mechanism. While several OGP countries are implementing their digital transformation agenda through their engagement in OGP, a growing number of OGP members are also using their OGP action plans to implement policies that govern public sector use of digital technologies. Among these, accountability of automated decision-making systems and algorithms has seen increasing interest. OGP convenes an informal network of implementing governments, mobilizing a cross-country coalition of those working on algorithmic accountability. Given the rapid evolution of the issue, OGP members would benefit from a more comprehensive effort that documents what works (and doesn’t) on the issue, across different country contexts.

*We use ‘algorithms’ to describe a set of correlated technologies employed to computationally generate knowledge or decisions, operating on particular datasets and bounded by specific logics and procedures. (cf. Tarleton Gillespie, “Algorithm.” In Digital Keywords: A Vocabulary of Information Society and Culture, Ben Peters ed.)

Comments (1)

Paul Clermont

In my thinking & research for writing I’ve done for a professional journal, I’ve homed in on a couple of key principles to guide the design and use of algorithms. (I recognize they may have long been obvious to you.)
1) Algorithms are relatively blunt instruments that depend on attempts to quantify the unquantifiable, e.g., where two people observing the same thing could assign different values on a 1-to-5 scale. When their results are not clear-cut, it is unethical to use them to make close-call decisions about things that affect the long-term directions of people’s lives. In the public sector, this particularly applies to criminal justice decisions like bail, sentencing, and parole. It can also apply to decisions about eligibility for entitlements or temporary relief after natural disasters and during epidemics.
2) Algorithms that base “goodness” or potential-success predictions for new people on the characteristics of an existing chosen group will bake in whatever biases led to that group having been chosen. This is true even when obviously unacceptable factors like race are excluded.
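To make the second point concrete, here is a minimal, hypothetical Python sketch using entirely synthetic data (nothing here is drawn from the article or any real system): a model trained on historically biased “chosen” decisions keeps penalizing the disadvantaged group even after the protected attribute is dropped, because a correlated proxy feature carries the same signal.

```python
# Illustrative sketch only: synthetic data, hypothetical feature names.
# Shows how a model trained on biased historical selections reproduces
# that bias through a proxy, even with the protected attribute excluded.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

group = rng.integers(0, 2, n)                 # protected attribute (0 or 1)
skill = rng.normal(0, 1, n)                   # genuinely relevant feature
neighborhood = group + rng.normal(0, 0.5, n)  # proxy correlated with group

# Historical "chosen" decisions: skill mattered, but group 1 was penalized.
chosen = (skill - 1.0 * group + rng.normal(0, 0.5, n)) > 0

# Train on the past decisions, deliberately excluding the protected attribute.
X = np.column_stack([skill, neighborhood])
model = LogisticRegression().fit(X, chosen)

scores = model.predict_proba(X)[:, 1]
print("mean predicted score, group 0:", scores[group == 0].mean().round(3))
print("mean predicted score, group 1:", scores[group == 1].mean().round(3))
# Group 1 still receives systematically lower scores: the historical bias
# has been baked in via the proxy feature.
```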

