
Open Algorithms: Experiences from France, the Netherlands and New Zealand

Helen Turek

This piece is part of OGP’s “Open Algorithms Blog Series.” Read other posts in the series here.

Algorithms – analytical systems that process data and supplement or replace decision-making previously undertaken by people – have become an essential way for governments to improve delivery of public services and implementation of policy. More and more, governments are using algorithms to make decisions that have a concrete impact on people’s lives, from allocating heart transplants to filling vacancies at daycare centres.

It’s crucial that citizens can access information about how these decisions are made and what data is used to build these algorithms, and that they have the know-how and opportunity to challenge automated processes. Cases such as a pilot immigration program used in New Zealand and the Dutch System Risk Indication (SyRI) benefits fraud detection system show how important it is to embed transparency, accountability, digital rights and user engagement into government automated decision-making processes.

Last month, the Open Government Partnership (OGP) hosted an online exchange between officials from three implementing agencies in the Partnership – Etalab from the Government of France, the Association of Municipalities of the Netherlands, and the Government Chief Data Steward from the New Zealand Government – to share experiences and dig into some of the challenges of opening up algorithms to public scrutiny. Here are some highlights from the discussion:


Communicating complexity

We know from the open data movement that transparency alone is insufficient – a data dump on a portal is not meaningful without sufficient awareness, education and participation. The same principle applies to algorithms.

We need to enable citizens to exercise their rights and hold their governments accountable in relation to the use of algorithms. But there is not yet a broad understanding of what algorithms are, or how they might be used in public decision-making. Additionally, the average person is unlikely to have the technical knowledge or time to look into source code or interpret a dataset. With this in mind, it may be more helpful to focus public engagement and discussion on specific case studies and on issues around data sharing and consent.

As part of the Netherlands’ OGP commitment on open algorithms, reformers brought together 150 people from all walks of life, including civil society, artists and scientists, to raise awareness and talk about the government’s use of algorithms. Additionally, an interview with the policy-makers has been published to help with public understanding. Such government efforts can be supported by increased activism on the part of civil society to help demonstrate the value of opening up algorithms.

It’s also important to seize the opportunity of heightened public interest, which can emerge in the wake of scandals, or as we currently see in the midst of the pandemic. COVID-19 presents a clear opportunity to educate citizens on their digital rights, given the current focus on privacy and data use in relation to track-and-trace apps.


Building government capability

It’s not just citizens who need more information and knowledge about algorithms. Inside government, there is also a need to train and equip civil servants with the right expertise, as well as increase awareness of the benefits and importance of additional complex procedures, such as impact assessments. That’s why all three countries have developed guidance to help governments and civil servants navigate the responsible use of algorithms (France, Netherlands and New Zealand). Etalab in France has also been working to support government agencies in the implementation of the legal framework that supports accountability and transparency of public sector algorithms.


Surfacing and addressing bias

All data, and therefore all algorithms, contain bias. The key is to understand where bias could lie, and how to handle the data and its processing fairly, and in line with community expectations. Data ethics training for civil servants is crucial and could be accompanied by expert advisory groups, such as the one set up in New Zealand, in order to help manage risks and define responsibilities. 


Developing international standards

As government use of algorithms increases, there is growing discussion about international standards, and how to reconcile different approaches and find commonalities. Standards can be more effective if accompanied by a professional accountability body and grievance mechanisms, underscored by an appropriate legal framework. If there is wider interest, OGP could work with partners to start looking into the existing standards (e.g. OECD Principles on AI), and other resources, inventories (e.g. Algorithm Watch’s AI Ethics Guidelines Global Inventory), guidelines and concrete use cases, to help identify what has or hasn’t been working.


Next steps

Etalab, the Association of Municipalities of the Netherlands, and New Zealand’s Government Chief Data Steward are considering ways to continue their productive discussions. OGP also wants to engage a broader group of governments, civil society and international partners from other regions to drive this discussion forward, and explore other questions such as transparency and accountability of government procurement of algorithms and AI systems, how to approach impact assessments, and how this topic relates to COVID-19. Watch this space!



France

What They’re Doing
Etalab has been focusing on how to help agencies fulfill their legal obligations in terms of explainability and transparency. In this context, Etalab has produced two guidance documents for agencies: one on opening public source codes and one on the legal framework of accountability and transparency of public sector algorithms. Their work is also grounded in accompanying agencies on specific case studies. 
In June 2019, Etalab wrote a paper outlining their approach.
Legislative framework?
Yes – the 2016 ‘Law for a Digital Republic’ introduced new provisions concerning public algorithms. These provisions aim to introduce greater transparency and accountability of the administration in the use of these systems, in particular when they are used to make decisions.
OGP Commitment
‘Improving transparency of public algorithms and source codes’, Commitment #6, AP 2018-2020


Netherlands

What They’re Doing
The Ministry of Justice and Safety and the Ministry of the Interior, among others, published guidelines on the use of algorithms by the government, with public values as an important focus. In 2019, research was conducted on the oversight of the government’s use of algorithms. The Dutch National Chamber of Audit has been researching the use of algorithms in the public sector since February 2020. The Dutch government is considering a tri-level approach, looking at utilisation, implementation and adoption of open algorithms.
Legislative framework?
OGP Commitment
‘Open Algorithms’, Commitment #6, AP 2018-2020


New Zealand

What They’re Doing
In 2018, the Government Chief Data Steward and the Privacy Commissioner published the Principles for the Safe and Effective use of Data and Analytics to guide government agency practices. In October 2018, the government published a stocktake of operational algorithms used by government agencies. In June 2019, the Government Chief Data Steward convened an independent Data Ethics Advisory Group and, in October 2019, announced public consultation on a proposed algorithm charter for government agencies.
Legislative framework?
OGP Commitment
‘Review of government use of algorithms’, Commitment #8, AP 2018-2020

Download the examples as a PDF here.

Featured Photo Credit: Unsplash

Comments (2)

Alex Cooper

Can policy be decided by an algorithm?
What moral compass is built into the algorithms?
Opening up the algorithms to (expert) public scrutiny does not amount to accountability.
Delegating decisions to an automated system means less accountability not more. Officials can say “it wasn’t me, the computer decided”
And it also means LESS CITIZEN PARTICIPATION in decision making, not more. Why consult people when all you have to do is collect data and feed it into the machine?
However, all we may be talking about here is pattern detection in benefit fraud detection? The examples are sketchy – what does a pilot immigration program mean? A system to count the number of points a person has to decide whether to allow citizenship? Isn’t that called a calculator? Or will the system be allowed to take away points from certain individuals it doesn’t like and award bonus points to others – based on what?
The details of these systems don’t matter if the principles behind them are flawed.

Diego Cota

Alex, perhaps a hybrid model would work – one where an algorithm impartially selects the people who get a say in the decision being made. It could start with hyper-local decisions and gradually trickle up to macro decisions. For example, if a state government decides to include in its plan the construction of an avenue to relieve traffic in a particular part of the city, the algorithm could select the traffic engineers, civil engineers, citizens affected by land expropriation, owners of houses near the avenue, owners of businesses whose sales might suffer during construction, etc., and moderate the alternatives presented among the participants. That way, decision-making power is focused on those directly affected. When consensus is reached, the project “graduates” and competes for resource allocation against other projects on an equal footing. Does that make sense?

