Carole Barnay
Responsible AI - an ad hoc approach isn’t enough
The organisations that practise responsible AI recognise that they have a special relationship with their customers built on trust.
This was the case at cloud accounting software leader Xero, where we created a set of principles - Xero's Responsible Data Use Commitments - and implemented them across all aspects of the business: product, technology, data and AI, sales and marketing, as well as the wider Xero ecosystem.
The same ethos drives our work at Te Whatu Ora’s Data & Digital organisation.
I design guardrails for responsible use of data, and provide useful toolkits and advice so that people designing new solutions, such as AI-powered services and algorithms, can ensure they are using data responsibly. New Zealanders need to trust that their sensitive health data is being appropriately used and protected.
Carole Barnay is an AI Forum member and Principal Advisor, Information Privacy & Governance, at Te Whatu Ora - Health New Zealand.
Left to their own devices
Both organisations have thought carefully about their approach to AI. But a surprisingly large number of organisations still take an ad hoc approach to the development of artificial intelligence. Their work in this area isn't guided by an overarching set of principles or guidelines, such as the AI Forum's Trustworthy AI in Aotearoa principles.
Left to their own devices, product developers will forge ahead with new innovations, solving problems and attempting to add value for the organisation and its customers. Often the approach taken comes down to the ethical stance of each individual practitioner, who may have a lot of experience working with personal data, or very little.
The problem is that we don't all share the same ethics or values. That's why it is so important that organisations define their own ethical principles for AI, ideally incorporating industry best practice when doing so.
The cost of an ad hoc approach
I'm aware of an IT security team implementing a blanket, organisation-wide ban on the use of ChatGPT. This feels like a knee-jerk approach, taken without fully assessing the risks and benefits associated with using the technology.
The ban meant the organisation wasn't able to start realising the benefits of generative AI. It also meant that employees could be tempted to get around the ban, unofficially accessing ChatGPT without understanding the risks or the steps required to mitigate them.
What does good practice around AI look like from an organisational perspective? For me, I look for three things:
- Accountability, Governance & Risk
- Responsible data and AI by design
- Transparency and a consumer focus
Accountability - Senior leaders and executives should have a mature understanding of how data and AI systems are used in their organisations, and be able to make informed decisions based on the potential risks and benefits to consumers.
Responsible data and AI by design - The organisation's culture needs to ensure that everyone, from product managers to data scientists and machine learning engineers, knows how and why to do the right thing. That starts with embedding responsible AI principles, then ensuring they are followed by creating a 'task force' of cross-functional team members, bringing in expertise on privacy, legal, consumer, security, ethics, product design and so on.
Transparency and a consumer focus - Customers, or end users, need to be consulted in the design of AI systems that will use their data, and they need to understand the technologies that may contribute to decisions affecting their lives. In New Zealand, for example, we have the annual Plain Language Awards. I challenge our organisations to submit entries to showcase how straightforward and transparent they can be when explaining complex AI or automated decisioning concepts in simple terms. Transparency and clarity are incredibly important when it comes to AI and to explaining how it informs the decisions being made.
Robust Toolkit
The AI Forum’s Robust Toolkit is designed for organisations that prioritise ethical considerations and responsible practices when implementing AI tools.
This toolkit outlines key concepts, principles, and guidelines for using AI in a way that respects human values, mitigates potential risks, and fosters trust among stakeholders. It can be used by medium to large organisations that have integrated, or are planning to integrate, AI into their processes.
It has been curated from local and international Responsible AI and data governance tools that we have personally found useful in our roles implementing responsible data and AI governance in the New Zealand setting.
Peter Griffin
Science and Technology Journalist