
Three Recommendations for More Inclusive and Equitable AI in the Public Sector


Allison Merchant

Around the world, public sector entities are increasingly exploring the use of algorithms with the aim of improving data-informed policies and decision-making. There is great potential for these systems to help correct for gender biases or target policy interventions to close equity gaps around financial access, hiring and recruitment, health or education policy, or open procurement design and decision-making.

But the same data behind these systems can also be used, actively or inadvertently, to design or train algorithms that discriminate against women, girls, and non-binary individuals who would otherwise qualify for a service, benefit, or loan. Governments and civil society partners are looking to better understand and address the gender-differentiated impacts of algorithms in open government and reduce human biases around gender, race and ethnicity, class, age, and other demographics.

At the most recent Open Algorithms Network meeting, members came together to discuss this intersection of inclusion, open government, and artificial intelligence (AI). The meeting was co-chaired by the Government of Canada, with participation from the Governments of Estonia, Norway, the United Kingdom, and Scotland, as well as civil society respondents. Many around the virtual table have started considering issues of equality, bias, and discrimination in their algorithmic commitments in OGP action plans and across government AI strategies. Many of these commitments are grounded in the idea that opening up the data and design of algorithms is an avenue to reduce bias and discrimination, and that the process of collecting data and designing systems is as important as the outcome. As Scotland’s action plan notes, “the way public services make decisions using data is as important as the data they publish. This includes the use of trustworthy, ethical and inclusive Artificial Intelligence, as outlined in Scotland’s AI Strategy.” Finland, for example, will prepare a general set of ethical guidelines for the government to ensure that AI systems do not embed directly or indirectly discriminatory models.

Algorithmic transparency is an emerging commitment area for OGP members, with most commitments coming from Global North nations. Few explicitly address gendered experiences, referring instead to discrimination or bias in broad terms. As AI technology becomes more widely available and embedded in government policy and procedures, open government actors need to be aware of potential discriminatory effects from the start and build in open, transparent, and equitable approaches to help mitigate them.

Members of the Open Algorithms Network recommend that open government actors:

1. Open automation strategies and projects to the public to improve algorithmic transparency and accountability for more communities

Where possible, members may seek opportunities to establish measures through policy, law, regulation, or other reforms to advance algorithmic transparency. This may take the form of making algorithmic policies or strategies open and accessible to the public, publishing information about upcoming projects for review and comment, or proactively engaging impacted stakeholders around the design and use of algorithms.

In Canada, the Directive on Automated Decision-Making includes several transparency requirements: completing and publishing an Algorithmic Impact Assessment (AIA); providing notices and explanations to clients or the public before and after decisions; ensuring access to the components of a system (e.g., for audits or investigations); releasing source code where appropriate; and documenting automated decisions. While the directive requires departments to test the data used by automated systems for unintended bias, publishing the results of bias tests may be challenging due to risks to privacy, security, or intellectual property. Protecting the privacy of a small demographic group, for example, may come into conflict with efforts to openly test algorithms for bias, a process that could require sensitive personal information about the people affected by automation. While there are strategies to anonymize such data, this tension can pose a challenge to developing a shared understanding of equitable and inclusive algorithmic transparency in the public sector.
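To make the idea of bias testing more concrete, here is a minimal illustrative sketch in Python of one widely used check: comparing selection rates across demographic groups and computing a disparate impact ratio. It is not drawn from Canada’s Directive or any government’s actual tooling, and all data, group labels, and thresholds are hypothetical assumptions.

```python
from collections import defaultdict

# Hypothetical decision records: (group, approved) pairs standing in for
# the logged outputs of an automated decision system. In practice these
# would come from audit logs, with privacy safeguards applied first.
decisions = [
    ("women", True), ("women", False), ("women", False), ("women", True),
    ("men", True), ("men", True), ("men", False), ("men", True),
]

# Count decisions and approvals per group.
totals = defaultdict(int)
approvals = defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved  # True counts as 1

# Selection rate: share of each group receiving a favourable decision.
rates = {group: approvals[group] / totals[group] for group in totals}

# Disparate impact ratio: lowest selection rate over highest.
# A common (and contested) rule of thumb flags ratios below 0.8 for review.
ratio = min(rates.values()) / max(rates.values())

for group, rate in rates.items():
    print(f"{group}: approval rate {rate:.2f}")
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Flag for review: outcomes differ notably across groups.")
```

A check like this is deliberately simple; real audits would also consider sample sizes, intersecting identity factors, and whether a gap reflects the data or the model.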

Another opportunity is to connect national algorithmic strategies or policies to OGP commitments to improve public engagement and consultation, as we’ve seen from members like Scotland and Canada. The Scottish Government has continued to open up algorithmic programs through public challenges, such as a current challenge on AI, disability, and inclusive access to public services.

2. Use assessments early in the design process to understand potential gender or inclusion impacts and test for unintended bias

Assessments and guidelines can help public sector actors identify and mitigate risks of discrimination and other forms of inequality during the design stage of a project. These tools include gender assessments like Canada’s GBA Plus, political economy analysis, and human rights impact assessments, to name a few. Members agreed that these are best deployed early in the process and used iteratively throughout the life of a policy or program.

The Government of Canada is looking to strengthen the Directive on Automated Decision-Making’s safeguards against discriminatory outcomes for historically marginalized individuals and groups. The ongoing third review of the directive proposes a new requirement mandating the completion of a Gender-based Analysis Plus during the development or modification of an automated system. This would foster an intersectional approach to the design of automation projects, allowing departments to consider multiple identity factors, such as sex, gender, geography, and language, when assessing the potential impacts of a system on individuals. The government is also working to expand the AIA to evaluate the potential impacts of automation projects on people with disabilities.

Similarly, in the UK, policies should undergo an Equality Impact Assessment, which can help identify opportunities to better promote equality or spot potential discrimination or other negative effects of a policy or service. The assessment covers protected characteristics including age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex, and sexual orientation.

In Finland, although existing law protects equality and non-discrimination, the government’s first AI Program assessment called for the adoption of ethical guidelines for developing algorithms and architectures to avoid biases and adverse effects on human dignity and equality. Finland’s commitment proposes new methods of data collection and use, more high-quality open government data, revised regulations, funding for long-term interdisciplinary research on the risks of AI, better AI awareness among government employees, and public discussion around AI.

Though ethical guidelines and assessments can provide critical insight into potential inclusion opportunities or blind spots in AI, they aren’t an end in themselves. AI systems, and the data that underpin them, also need to be monitored regularly to ensure quality and to catch results that are inadvertently biased against groups such as women and gender-diverse individuals.
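As one way to picture what regular monitoring could look like in practice, the sketch below scores periodic batches of decision outcomes and flags any window where outcomes diverge across groups. Everything here, the monthly windows, the 0.8 threshold, and the data, is an illustrative assumption rather than any government’s actual practice.

```python
def selection_rates(batch):
    """Approval rate per group for one monitoring window."""
    totals, approvals = {}, {}
    for group, approved in batch:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + approved
    return {group: approvals[group] / totals[group] for group in totals}

def check_batch(batch, threshold=0.8):
    """Return the window's disparate impact ratio and whether it needs review."""
    rates = selection_rates(batch)
    ratio = min(rates.values()) / max(rates.values())
    return ratio, ratio < threshold

# Hypothetical monthly windows of (group, approved) outcomes.
windows = {
    "2023-01": [("women", True), ("women", True), ("men", True), ("men", True)],
    "2023-02": [("women", False), ("women", True), ("men", True), ("men", True)],
}

for month, batch in windows.items():
    ratio, flagged = check_batch(batch)
    status = "REVIEW" if flagged else "ok"
    print(f"{month}: disparate impact ratio {ratio:.2f} [{status}]")
```

The point of windowing the check is that a system can pass a one-off pre-deployment test and still drift toward biased outcomes as the population or the data change over time.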

3. Use open data to achieve representative and accountable design while considering privacy and data management 

Better open and gender-informed data can improve decision-making, transparency, and accountability around policymaking, budgets, and public services. This includes representative data that can feed algorithms and improve the outcomes of automated decisions, along with better citizen-generated data. For example, Finland is enhancing public access to information by improving the quality and usability of open data.

When and how to use, analyze, and store sensitive data, including sex, gender, and sexual orientation, are ongoing privacy considerations. One potential solution proposed by the UK is the use of data intermediaries: trusted partners who can facilitate data access and sharing, helping governments manage some of this risk and mitigate potential bias. They can also give individuals and entities greater insight into what data is collected and when and how it is used.

Looking Ahead on Inclusive AI 

OGP will continue to explore this intersection of gender equality, equitable algorithmic design and use, and digital governance with the Open Algorithms Network and open government partners. Together, we aim to identify interventions that can be adapted and shared across the Partnership and beyond, inform the development of gender-informed commitments on algorithmic policy, and learn how open government processes can better support co-designing the use and regulation of algorithms that lead to more equitable policies and practice.

Comments (1)

Prof Thuli Madonsela

This is an encouraging development. At the Centre for Social Justice at Stellenbosch University (SU), we have been advocating for the use of sufficiently disaggregated data for prospective social justice or equality impact assessment of planned laws, policies, and social schemes at the point of design, in addition to the normal impact assessments done during and after implementation. The disparate Covid-19 containment measures, such as lockdowns and compensatory socio-economic support packages, have been a gift in highlighting the importance of foresight impact assessments. This purpose, and all process- and AI-leveraging measures, require adequately disaggregated data and equity-attuned machine learning. We would love to collaborate.


