Frith Tweedie
Doing AI right: Maximise the benefits, minimise the risks
There’s an analogy from the realm of automobile safety that applies just as well to artificial intelligence: the brakes on a car enable you to go faster.
After all, you only drive at high speed because you know you have the ability to slow down safely. Effective vehicle design, rigorous safety standards and regular maintenance are crucial to earning and keeping our trust in cars.
It’s exactly the same in the fast-moving world of artificial intelligence. AI governance or “Responsible AI” is ultimately about maximising the benefits of AI while minimising the risks. And it’s not just a defensive manoeuvre – adopting a responsible approach to the development and use of AI helps improve the overall performance of AI models.
Frith Tweedie is a member of the Executive Council of the AI Forum and the Government’s Data Ethics Advisory Group as well as the global advisory board for the AI Governance Center. She is a former lawyer with 20+ years’ experience who now focuses on helping clients develop robust privacy and responsible AI practices. She has a particular interest in the intersection between digital technologies, privacy and Responsible AI.
This is also important for maintaining the trust of your employees, your customers, and society in general. And that trust is sorely needed when it comes to AI. Recent Ipsos research on global attitudes to AI found that Kiwis are more sceptical of AI than the rest of the world and are less likely to trust companies that use AI compared to those that don’t.
This lack of trust likely arises from well-documented instances of AI systems causing privacy violations, producing biased and inaccurate outputs, infringing IP rights and operating as opaque black boxes.
Incoming AI regulation
Many New Zealand businesses are globally focused, selling into markets all over the world. And those operating in the global digital economy face an increasingly complex regulatory landscape.
The EU is looking to finalise the “AI Act” by the end of the year, a groundbreaking piece of legislation that will apply across the 27 EU member states and to those providing AI systems to customers in the EU. Like the General Data Protection Regulation, which reshaped the global privacy landscape when it took effect in 2018, the AI Act will have impacts felt around the world. Its focus on ethical and trustworthy AI is likely to set a precedent for global AI governance.
Elsewhere, China has already enacted generative AI focused legislation and the US, Canada and Australia are looking to regulate the potential harms of AI as well. In New Zealand, the Privacy Commissioner is currently exploring whether to establish a biometrics Code of Practice under the Privacy Act.
So it’s a question of when, not if, AI-targeted legislation will affect New Zealand businesses, particularly those with a global footprint. There’s no doubt that businesses with a Responsible AI framework in place will be much better placed to meet these changing market conditions when taking homegrown AI products and services to the world.
AI in the public sector
The public sector is also putting the groundwork in place to ensure the responsible adoption of AI in the delivery of public services.
We certainly want to avoid some of the horrifying stories from overseas. The “Robodebt” scandal in Australia centred on an automated system that compared the income declared by social welfare beneficiaries with tax office data to flag apparent discrepancies. While the aim was to create a fairer system by checking whether recipients had received more in social welfare payments than they were entitled to, the system was inherently flawed in its design: welfare recipients were wrongly judged to have been overpaid and treated as welfare cheats.
In 2021, a Federal Court judge approved a A$1.8 billion settlement relating to nearly half a million false accusations of welfare fraud. The Robodebt system was found to be biased, making unfair assumptions about people who had limited ability to contest its findings.
The human impacts of Robodebt were significant - many victims experienced mental health impacts and there were several suicides. A recent Royal Commission of Inquiry found that it was:
“a crude and cruel mechanism, neither fair nor legal, and it made many people feel like criminals. In essence, people were traumatised on the off chance they might owe money. It was a costly failure of public administration, in both human and economic terms.”
The scandal severely damaged the reputation of Centrelink and significantly eroded public trust in the government’s ability to manage social services.
In a similar vein, an algorithmic risk scoring system used by the Dutch tax authority to predict child welfare benefit fraud was found by Dutch courts to have breached privacy and human rights laws. The system wrongly accused more than 20,000 families of fraud, driving many to financial ruin and leading to several suicides and over a thousand children being taken into foster care. The Netherlands Government was forced to resign over the scandal in 2021.
Algorithmic impact assessments in our public sector
In New Zealand, the Algorithm Charter for Aotearoa New Zealand was introduced in 2020 to demonstrate “a commitment to ensuring New Zealanders have confidence in how government agencies use algorithms”.
The majority of government departments and agencies are signatories to the Charter, meaning they have committed to a set of five ethical principles surrounding their use of algorithms.
After an independent report found many agencies were struggling to implement the commitments in practice, Stats NZ – which leads the Algorithm Charter work – initiated a piece of work to help government departments operationalise the Charter. This includes an “Algorithmic Impact Assessment” process and a set of documents that I have developed for Stats NZ, which has been shared with the Charter signatories.
An Algorithmic Impact Assessment or “AIA” is designed to facilitate informed decision-making about the benefits and risks of government use of algorithms. The ultimate aim of both the Charter and AIA process is to support safe and value-creating innovation by agencies.
Conducting an AIA enables agencies to identify, assess and document any potential risks and harms of algorithms so they are in a better position to address them. The process involves asking fundamental questions about the nature of a proposed algorithm and the potential harms it might cause, the intended use, the people who will be using it, and most importantly, the people who will be impacted by it.
The first step is a threshold assessment to weed out low risk algorithms, enabling agencies to focus on algorithms presenting a higher risk of harm. The AIA documentation adopts a best practice approach to satisfying the Charter commitments, recognising that each agency will need to tailor the process and the ultimate risk assessments in a way that is appropriate for its own context, risk profile and role in society.
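To make the threshold step concrete, it can be pictured as a simple screening checklist that counts risk indicators. The questions, weights and cut-off below are purely illustrative assumptions for the sake of the sketch, not the actual Stats NZ AIA criteria:

```python
# Illustrative sketch only: the questions and the cut-off are hypothetical
# examples, not the actual criteria in the Stats NZ AIA process.

THRESHOLD_QUESTIONS = {
    "affects_individuals": "Does the algorithm inform decisions about individual people?",
    "uses_personal_data": "Does it process personal or sensitive data?",
    "automated_decision": "Does it operate without routine human review?",
    "hard_to_contest": "Would affected people find its outcomes hard to contest?",
}

# Hypothetical rule: two or more "yes" answers trigger a full assessment.
HIGH_RISK_THRESHOLD = 2

def needs_full_aia(answers: dict) -> bool:
    """Return True if enough risk indicators are present to warrant a full
    Algorithmic Impact Assessment rather than the lightweight screen."""
    score = sum(1 for q in THRESHOLD_QUESTIONS if answers.get(q, False))
    return score >= HIGH_RISK_THRESHOLD

# A benefit-fraud detection algorithm like Robodebt would clear the threshold:
example = {
    "affects_individuals": True,
    "uses_personal_data": True,
    "automated_decision": True,
    "hard_to_contest": True,
}
print(needs_full_aia(example))  # True: proceed to a full AIA
```

The point of such a screen is triage: a low-stakes algorithm (say, one that schedules office cleaning) answers “no” to most questions and exits early, so agencies can spend their assessment effort on the systems that could actually harm people.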
Other resources
The Interim Centre for Data Ethics and Innovation is a helpful resource that will support government agencies to maximise the opportunities and benefits from new and emerging uses of data, while responsibly managing potential risks and harms. Its role is to raise awareness and help shape a common understanding of data ethics in Aotearoa New Zealand, while building a case for a wider mandate and a scaled-up work programme over time. It will work across a wide network of people and ideas, drawing on the knowledge and expertise within that network, including the Data Ethics Advisory Group.
There’s no doubt AI has huge potential to transform how we work and deliver services across the economy. With the right approach to AI governance, we can safely adopt and deploy AI, while minimising potential risks.