Kin Lung Chan
Responsible AI: The gap between principles and implementation
When business leaders are surveyed about artificial intelligence, they typically agree wholeheartedly that principles for responsible AI are important.
Far fewer can confirm that they are actually implementing AI in their organisations in line with those principles.
There is no shortage of valuable and well-thought-out principles and guidelines, from the OECD and UNESCO through to vendors like Microsoft and Google and organisations like the AI Forum, telling you what constitutes responsible and ethical AI.
But there are far fewer resources telling you how to implement AI responsibly and ethically in your organisation. That’s because implementation has to be customised to each organisation; there is no one-size-fits-all approach to practising responsible AI.
Dr Kin Lung Chan is AI Lead and Data Science Team Leader at Callaghan Innovation.
AI principles aren’t the end of the story; they are just the beginning. Some of these principles may even appear to conflict with one another. They may tell you that ensuring accuracy in AI systems is paramount, but so too is preserving the privacy of the user data those systems rely on.
If you are trying to use AI in a cancer screening tool, the reality is that you may need to make some trade-offs between privacy and accuracy.
Let’s not forget that businesses investing in AI are doing so to meet their commercial objectives. Efforts to implement AI responsibly will fail unless this is taken into account. Either the product won’t meet the needs of the business because the restrictions on its development and use are too onerous, or it will fulfil expectations but overreach in its use of data in the process.
When it comes to using AI, there are differences between an early-stage company racing to build a customer base and reach positive cash flow, and an established company that is well-resourced, has loyal customers and has a reputation to protect.
The point is that every business faces its own unique set of circumstances. All of them can learn from AI principles, but working out how to implement those principles is a complex process, and one that most organisations considering AI are ill-equipped to grapple with.
The other issue is that trust in AI, and in the organisations that use it, is not universal. How much people trust AI varies with their background, such as their level of technical literacy, and their past interactions with the technology.
Perceptions of trustworthiness will also change over time, not only because of technological advances but also because of shifts in the social environment. So organisations implementing AI need to understand, and keep monitoring, how trustworthy AI is perceived to be by the stakeholders who will be affected by its use. That too will shape how AI is implemented in accordance with AI principles and guidelines.
Implementing responsible AI is challenging, especially in NZ, where there is a shortage of people with the skills to translate AI principles into action in organisations. But you don’t have to face this challenge alone. The AI Forum’s new set of toolkits and its member network are good resources to support responsible AI implementation.
Seek some external advice, and talk to companies and business leaders that have already embarked on their responsible AI journey.
Only by working together and learning from each other’s practical experience in working with AI can we ensure that the responsible AI principles and guidelines will be worth more than the paper they are written on.
Peter Griffin
Science and Technology Journalist