Emma MacDonald and Katrine Evans
Developing a shared understanding of responsible AI
As developments in artificial intelligence gather pace, New Zealand’s public service has been laying the groundwork to ensure emerging AI and algorithms can be used responsibly, and in secure, ethical and privacy-protecting ways, to help deliver government services.
Governments typically do not have a reputation for moving quickly, so how is the public sector to deal with the accelerating interest in and potential use of artificial intelligence? The starting point, at least, is simple: we need to establish appropriate leadership and create principled supporting structures that enable government to think, test, learn and adopt AI safely. We are already doing this, but more work is needed.
Katrine is the Government Chief Privacy Officer at the Department of Internal Affairs.
Emma is the Director of the Interim Centre for Data Ethics and Innovation at Stats NZ.
We know that use of AI could boost the efficiency of how services are delivered and help people more easily access government services. To succeed, though, we need to work together across government agencies and our partner organisations to ensure any application of AI maintains the trusted relationship that people, communities and businesses need to have with government.
While agencies themselves are responsible for managing their individual relationships with their clients, they also need help from cross-government leaders to avoid unnecessary duplication of effort across the public sector. This work is well under way. Earlier this year, the Public Service Commissioner, Peter Hughes, asked Paul James, the Government Chief Digital Officer (“GCDO”), to take the lead across government on AI, on the understanding that doing this well requires government to work as a whole, with unified system leadership that includes the Government Chief Data Steward, the Government Chief Information Security Officer, and system and policy leaders such as those at the Ministry of Business, Innovation and Employment.
A cross-government effort on AI
The cross-government group’s first step was to publish interim guidelines on the use of generative AI in government agencies. The guidance looks to be hitting the mark – we had over 5,000 views in the first six weeks after publication, and it remains one of our most accessed resources on digital.govt.nz. The GCDO and its partners are now developing a broader work plan to address the wider issues, both for the public service and for the economy.
Of course, much of this is not particularly new. While we are in the early stages of applying new forms of AI in the public sector, many people across government have been thinking deeply about AI for a long time. For instance, the University of Otago’s 2019 report, Government Use of Artificial Intelligence in New Zealand, shows how predictive AI models were already being used in New Zealand government departments at that time.
There are also existing guidance documents and frameworks that are relevant to AI, and many agencies have already incorporated these into their design and risk management processes. The most obvious example is the Algorithm Charter for Aotearoa New Zealand, which has now been signed by around 30 government agencies. It offers clear guidelines for using algorithms in cases where doing so could significantly affect people’s wellbeing, or where there is a high likelihood of unintended adverse impacts. Other excellent resources, available for everyone to use, include the Principles for the Safe and Effective Use of Data and Analytics, the Data Protection and Use Policy, the Ngā Tikanga Paihere framework, and the AI Forum’s Trustworthy AI in Aotearoa – AI Principles.
However, recent rapid advances in AI capability and availability mean that there is now an urgent need for some more specific tools, case studies and advice on how to proceed. For instance, the Government Chief Data Steward will soon be releasing a set of algorithm impact assessment tools. These will offer practical ways for government agencies to consider the questions that are most relevant to AI implementation, so they can be confident that what they do will be safe and trusted.
Some of the core issues that we need to consider
Having a solid foundation of privacy, security, ethics and information management is obviously critical for our ability to adopt AI safely. Some of the issues that we need to think about are as follows.
The impact of using AI as an aggregator - One of the things that AI tools can be really good at is pulling together information from a whole range of different sources and repackaging it in ways that can be insightful. Generative AI is particularly well suited to these types of uses.
But there’s potential for people to experience the aggregation of information from various sources as a real and significant intrusion into their personal lives. The fact that the information may have come from public sources does not lessen that impact: it can even heighten it. Put simply, it can feel seriously creepy. It can even lead to concerns about state surveillance. Agencies need to ask whether a particular use of AI could affect people in this way and, if the answer is yes, moderate the system design to avoid the problem and act to reassure people.
Accuracy - Taking information out of the context for which it was originally created can also lead to real problems with accuracy. The information may have been accurate enough in its original setting, but could be misleading when used for other purposes.
Creating appropriate expectations for accuracy, and ensuring that AI use meets those expectations, are critical issues that agencies have to consider. What does accuracy actually mean in this context? What kind of inaccuracy rate is acceptable and why? What systems and processes are in place to check whether information produced by an AI process is fit for purpose and how can people challenge the results if they need to?
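To make the accuracy question concrete, here is a minimal, hypothetical sketch (not any agency’s actual process, and the threshold value is invented) of one way a team might test an AI system against a pre-agreed accuracy expectation before relying on its output: compare the system’s answers with a human-verified evaluation set, and accept the system only if its error rate stays under the agreed threshold.

```python
# Hypothetical illustration only: checking an AI system's error rate
# against a pre-agreed threshold on a human-verified evaluation set.

def error_rate(ai_outputs: list[str], verified_answers: list[str]) -> float:
    """Fraction of evaluation cases where the AI output disagrees
    with the human-verified answer."""
    assert len(ai_outputs) == len(verified_answers)
    errors = sum(a != v for a, v in zip(ai_outputs, verified_answers))
    return errors / len(verified_answers)

# The acceptable rate is a policy decision made in advance, not a
# number chosen after seeing the results. The 2% here is invented.
ACCEPTABLE_ERROR_RATE = 0.02

ai_outputs = ["approve", "decline", "approve", "approve"]
verified_answers = ["approve", "approve", "approve", "approve"]

rate = error_rate(ai_outputs, verified_answers)
if rate > ACCEPTABLE_ERROR_RATE:
    print(f"Error rate {rate:.1%} exceeds the agreed threshold: do not rely on this system yet.")
else:
    print(f"Error rate {rate:.1%} is within the agreed threshold.")
```

The important design point is less the code than the ordering: the acceptable error rate is agreed and justified before the system is evaluated, not after.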
Transparency - This is not simply a question of explaining the mechanism of what's going on in the ‘black box’. Sometimes, there will be limits on what you can explain in a way that makes sense. But we should do our best. Apart from anything else, in government we need to be able to explain how decisions are made, and justify them if required. So, and most importantly, government needs to be crystal clear about when it is using AI and what it is using it for. And we need to make sure there are really good protections in the background so we can assure people that what we are doing is legitimate and beneficial.
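One practical building block for that kind of assurance is simply keeping a record, for every AI-assisted decision, of which tool was used, for what purpose, and who was accountable. The sketch below is a hypothetical structure (the field names and example values are invented, not an existing government standard) showing the minimum such a record might capture so a decision can be explained, and challenged, later.

```python
# Hypothetical illustration only: a minimal record kept for each
# AI-assisted decision so it can be explained and challenged later.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    case_id: str
    tool_used: str             # which AI system contributed
    purpose: str               # what the tool was used for
    accountable_officer: str   # the human responsible for the outcome
    ai_output_summary: str     # what the tool actually produced
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

record = AIDecisionRecord(
    case_id="B-2047",
    tool_used="document-summariser-v1",  # invented tool name
    purpose="Summarise supporting documents for a grant application",
    accountable_officer="case.officer@example.govt.nz",
    ai_output_summary="Three-paragraph summary of applicant history",
)
print(record.case_id, record.tool_used, record.recorded_at.date())
```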
Bias - Finally, bias is often talked about in the context of AI, and is a very serious issue.
Like accuracy, the issue of bias has many layers. Most obviously, humans have biases, which may be reflected in the design of AI systems. Do we know what datasets the AI systems have been trained on and whether those datasets properly reflect the population or the values of the society in which they are being used? Are they suited for the purpose to which they are going to be put, or will that use create or perpetuate discrimination or other harms?
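As one illustration of the training-data question, the sketch below (hypothetical, with invented figures rather than real census data) compares the make-up of a training dataset against population benchmarks and flags groups that are markedly under-represented.

```python
# Hypothetical illustration only: flagging groups that are
# under-represented in training data relative to the population.
# All figures below are invented for the example.

from collections import Counter

population_share = {"group_a": 0.70, "group_b": 0.17, "group_c": 0.13}

training_records = ["group_a"] * 850 + ["group_b"] * 120 + ["group_c"] * 30

counts = Counter(training_records)
total = len(training_records)

for group, expected in population_share.items():
    observed = counts[group] / total
    # Flag any group at less than half its population share.
    # The cut-off itself is a judgement call, not a standard.
    if observed < 0.5 * expected:
        print(f"{group}: {observed:.1%} of training data vs "
              f"{expected:.1%} of the population - under-represented")
```

A check like this is only a starting point: representation in the data does not by itself guarantee that outcomes will be fair for every group.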
Also, humans are prone to automation bias – in our fast-paced world, with our quest for greater efficiencies, we tend to trust what computers tell us. That means that anyone using these systems, or the output of AI tools, needs to be well trained to understand both the advantages and the limitations of what they are seeing. Trust in government can be quickly eroded by biased or unfair decision-making. Sound design and human oversight of AI systems are therefore crucial.
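The sketch below illustrates one common form of that oversight (a hypothetical pattern with invented thresholds, not a prescribed design): the AI output is treated only as a recommendation, and anything low-confidence or high-impact is routed to a trained human reviewer rather than actioned automatically.

```python
# Hypothetical illustration only: routing AI recommendations to a
# human reviewer instead of acting on them automatically.

from dataclasses import dataclass

@dataclass
class Recommendation:
    case_id: str
    suggested_outcome: str
    confidence: float   # the model's own confidence, 0.0 to 1.0
    high_impact: bool   # e.g. could significantly affect wellbeing

def route(rec: Recommendation) -> str:
    """Decide who acts on a recommendation. The 0.9 threshold is
    illustrative; in practice it would be set per service."""
    if rec.high_impact or rec.confidence < 0.9:
        return "human_review"    # a trained person makes the call
    return "auto_with_audit"     # low stakes, but still logged

print(route(Recommendation("A-100", "approve", 0.97, high_impact=False)))  # auto_with_audit
print(route(Recommendation("A-101", "decline", 0.97, high_impact=True)))   # human_review
```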
From a privacy perspective, further guidance is available from the Office of the Privacy Commissioner: AI tools and the Privacy Act – Commissioner issues new guidance.
An interim Centre for Data Ethics and Innovation that works in partnership
As part of working across government to address some of these concerns, at this year’s Aotearoa AI Summit, Stats NZ introduced its interim Centre for Data Ethics and Innovation. This small group, operating within Stats NZ, will collaborate across government to help foster the trusted and ethical use of data by government. Trust is widely considered to be essential to enabling data-driven innovation and is a key pillar (Mahi Tika) of the Government’s Digital Strategy for Aotearoa.
Stats NZ is in the process of co-designing how the longer-term centre will operate to address data ethics and innovation issues within the unique context of Aotearoa. Once fully established, the Centre will also support AI policy implementation and public engagement. Trust is key.
The Centre isn’t just about AI, which is very much in the spotlight currently due to the rise of ChatGPT and other generative AI services. Next year, the technology of the moment could be quantum computing or IoT networks. We need to be flexible, embrace new technologies, and examine them through the lens of data ethics and innovation.
Nor is the Centre just about government agencies collaborating. Part of its job is to connect with academia, the private sector and groups such as the AI Forum, to share ideas and develop a common understanding of what best practice looks like.
It’s early days as we explore the opportunities and risks posed by the use of data-driven technology across the public sector, but there is real and exciting potential to harness AI for public good – provided we do it right.