
Algorithms and Human Rights: Understanding Their Impacts

Paula Perez and Paul Braithwaite

The increasing use of algorithms in decision-making processes can pose risks and have harmful impacts on privacy and the exercise of human rights. Human rights algorithmic impact assessments have emerged as an accountability tool to identify potential harms, mitigate unintended impacts, and inform policy decisions on the use of algorithms across key policy areas, including health and education.

Government agencies from Canada, Finland, the Netherlands, Scotland, and the United Kingdom implementing OGP commitments on open algorithms met during Open Gov Week to discuss how human rights principles and frameworks can be built into accountability tools for the public sector use of algorithms. Here are the top three highlights from the discussion:

Algorithmic Impact Assessments and Human Rights

A number of countries have developed algorithmic impact assessments in recent years to address concerns about the possible unintended consequences of introducing algorithms into decision-making. Some are framed explicitly in terms of human rights, providing a substantive framework for assessing the effects of algorithms on individuals, set against existing legal and constitutional protections. For example, the Netherlands’ Fundamental Rights and Algorithms Impact Assessment (FRAIA) is a tool for public institutions considering developing or purchasing an algorithmic system. It requires them to walk through the Why (purpose), the What (data and data processing), and the How (implementation and use) of the algorithm. Importantly, it ensures that decision-makers assess the likely impact of the use of algorithms on specific human rights. It uses a dialogue-oriented and qualitative approach, requiring consultation with stakeholders and consideration of different trade-offs to reach a final decision on whether and how to proceed.

Another example is Canada’s Algorithmic Impact Assessment (AIA) Tool. The AIA supports the Treasury Board Directive on Automated Decision-Making by determining the impact level of an automation project, which ranges from I (little to no impact) to IV (very high impact). It identifies and assesses potential impacts on individuals and communities in a range of areas, including rights and freedoms, health and wellbeing, economic interests, and ecosystem sustainability. While the current version of the AIA does not refer to specific human rights instruments, the intent is to account for potential impacts on rights enshrined in domestic and international human rights law.

Other countries are developing a sectoral approach, such as the United Kingdom’s impact assessment framework for the National Health Service’s National Medical Imaging Platform.

Engaging the Public in Algorithmic Impact Assessments 

There is a need to strengthen both public knowledge of algorithms and public participation in deliberation and decision-making related to their use. Amidst several public controversies (for example in the Netherlands and the U.K.), skepticism is widespread. Meaningful engagement is a prerequisite for building public trust in the positive potential of algorithmic decision-making while mitigating potential harms. 

The European Center for Not-for-profit Law (ECNL) presented a review of stakeholder engagement in Human Rights Impact Assessments and identified opportunities for cross-country learning. The undoubted technical nature of algorithms has often been perceived as a major barrier to public engagement. But the review demonstrated that citizens do not need to know the technical details of an algorithm to understand and deliberate its ultimate impacts on everyday life, or how it can disrupt (for good or ill) traditional decision-making processes. 

For instance, the U.K. led a deliberative public engagement exercise which resulted in the development of the Algorithmic Transparency Standard for use across the public sector. Other countries have integrated these questions into forums looking at wider issues related to digital technology, including Scotland’s Digital Ethics People’s Panel and Finland’s ‘Digitalization for EveryDay Life’ Advisory Board.

Training Government Officials on Use of Algorithmic Impact Assessments 

Civil service training in participatory processes and citizen engagement is already happening in many countries. Some are starting to focus more on algorithms, but this is still an emerging area. For example, the Netherlands has developed guidelines and training for civil servants. In Finland, the focus is on creating understanding of algorithms across government agencies, given that roles are spread among different organizations. In Canada, the federal government has developed training modules for public servants that explore the risks, challenges, and ethics of AI technologies. Policymakers overseeing the directive and AIA tool also regularly support federal institutions with completing and publishing Algorithmic Impact Assessments on the Open Government Portal.

As the use of algorithms and artificial intelligence increases across governments in decision-making processes, human rights impact assessments and the accompanying public engagement are vital to ensure this technology does not reinforce or exacerbate existing inequalities in our societies. We hope that the above examples and lessons can support the efforts of governments and civil society to advance algorithmic accountability, including as part of their OGP action plans.  

Comments (2)

ABDELKADER ESSOUSSI

The evolution of algorithms and artificial intelligence in the service of citizen participation, especially on human rights issues: the existence of platforms, oral and written expression, and the obligations and ethics of digitalization (plagiarism and collective intelligence).

Bienvenido Pascual Encuentra (Secretary of ALGOVERIT)

I am the secretary of the ALGOVERIT Association, which aims to inform citizens about the good and bad uses of AI and algorithms, and to advocate for society's participation in establishing a regulatory framework that respects collective and personal freedom and minimizes the potential for negative discrimination. Perhaps we could establish fruitful contacts in the near future.
Sincerely,
Bienvenido Pascual Encuentra
Secretary of ALGOVERIT

Related Content

Open Algorithms Network

Learn about OGP's informal network of implementing governments, which mobilizes a cross-country coalition of those working on algorithmic accountability.

Algorithmic Accountability for the Public Sector

The Ada Lovelace Institute, AI Now Institute, and OGP have partnered to launch the first global study to analyze the initial wave of algorithmic accountability policy for the public sector. 

Making Algorithms Accountable to Citizens

At RightsCon 2021, government officials and civil society organizations, including members of the Open Algorithms Network, discussed their experience implementing algorithm transparency ...

Open Government Partnership