Ming Cheuk
Generative AI: Understand its limitations, be transparent
Auckland-based consultancy ElementX is applying the AI Forum’s AI Principles to building intelligent chatbots and digital humans for its customers
The pivotal moment for ElementX (formerly Spark 64) in its use of artificial intelligence came in 2017, seemingly an age ago given the recent rapid advances in AI.
“That was when we created our first chatbot to sell insurance policies in New Zealand on Facebook Messenger,” recalls Ming Cheuk.
Ming Cheuk is Co-Founder & CTO at ElementX.
Facebook had just opened up its popular messaging platform to allow third-party chatbots to answer customer queries from within the Messenger app. It signalled a shift in focus for the five-year-old company from app development and data visualisation to deepening its capabilities in AI-related areas such as machine learning, computer vision, and natural language processing.
Since then, ElementX has built Aimee, a digital assistant that answers questions online for Southern Cross customers, and created an AI-powered customised gift recommendation engine for The Warehouse.
Cheuk and his team are extensively using the generative pre-trained transformer (GPT) technology that underpins OpenAI’s ChatGPT in their internal software development processes, and in products for their clients.
“AI is being rolled into the software development process more and more,” Cheuk says.
“We’re finding a lot of efficiencies ourselves in software development using tools like GitHub Copilot.”
For customers, the ElementX team spends a lot of time fine-tuning large language models to get the best results for the intended use.
“We do a lot of prompt engineering to make them purpose specific. A key part of it is making sure we have adequate guardrails on the output and apply additional manual checks on the final results,” says Cheuk.
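As a rough illustration of that workflow, the sketch below pairs a purpose-specific system prompt with a simple guardrail check on the model’s output before anything reaches a user. It is a minimal sketch assuming the OpenAI Python client; the model name, prompt wording, and blocked-term list are illustrative assumptions, not ElementX’s actual implementation.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Prompt engineering: constrain the model to a specific purpose.
SYSTEM_PROMPT = (
    "You are a customer-service assistant for an insurance provider. "
    "Answer only questions about the provider's policies. If a question "
    "is out of scope, say so and suggest contacting a human agent."
)

# Illustrative output guardrail: topics the bot must never advise on.
BLOCKED_TERMS = ["medical diagnosis", "legal advice"]

def answer(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
        temperature=0.2,  # keep output conservative and repeatable
    )
    reply = response.choices[0].message.content

    # Guardrail check on the output, before anything reaches the user.
    if any(term in reply.lower() for term in BLOCKED_TERMS):
        return "I can't help with that. Please contact one of our team."
    return reply
```

In practice the guardrail layer would be more sophisticated than a keyword list, but the structure is the same: constrain the input, then check the output.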
Additional risks with generative AI
While many best-practice approaches to using generative AI overlap with more conventional use of machine learning, Cheuk says the new generation of AI tools currently garnering attention raises additional ethical questions.
“Generative AI introduces quite a few new things that don’t really surface as a risk in traditional IT systems,” he explains.
“To some extent it really is a black box because even the researchers are still discovering emerging capabilities they didn’t anticipate,” he says.
A key concern is data privacy, particularly for web and cloud-based third-party services like ChatGPT.
“What happens to the data that you enter into this tool? It’s actually a question that should be asked about any tool, because a lot of data goes straight to the cloud, but it’s more pronounced with generative AI,” says Cheuk.
Electronics giant Samsung was embarrassed to find that staff cutting and pasting company information into the chatbot were inadvertently divulging sensitive information that could then surface in responses to other ChatGPT users. Such concerns have led to widely varying policies being hastily applied to the use of such tools.
AI Principles guide design and development
“One of the first things that we advise our clients to be proactive about is setting some organisation-wide policies about how people can use these tools,” Cheuk says.
“Be proactive and understand where the data resides and how it is used. It needs to fit with your policies and align with the agreements you have with your customers.”
Even before embarking on an AI project, ElementX draws on the AI Forum’s AI Principles to decide whether the intended use cases are appropriate.
“It’s about alignment with principles: human-centred values, fairness, privacy, those types of things,” Cheuk says.
ElementX also drew on the AI Forum’s responsible AI governance toolkit, which has helped formalise and improve some of the processes the company already had in place.
Any project starts with an impact assessment that gives the ElementX team a sense of the potential impacts on its client and their customers should something go wrong.
A process is then designed to ensure AI-based products are tested against dedicated test data and subjected to benchmarking and quality assessment before they get anywhere near deployment to customers.
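A pre-deployment gate of that kind might look something like the following sketch, which scores the assistant against a held-out set of benchmark questions and refuses to ship below a quality bar. This is an illustrative sketch rather than ElementX’s actual process: answer is the function from the earlier example, and the test-set file, phrase-matching scoring, and 95% threshold are all assumptions.

```python
import json

PASS_THRESHOLD = 0.95  # illustrative quality bar for release

def evaluate(answer_fn, test_set_path: str) -> float:
    """Return the fraction of benchmark questions answered acceptably."""
    with open(test_set_path) as f:
        # Assumed format: [{"question": "...", "expected_phrases": ["..."]}]
        cases = json.load(f)

    passed = 0
    for case in cases:
        # Ask the assistant once per case, then check the reply contains
        # every phrase the test case expects.
        reply = answer_fn(case["question"]).lower()
        if all(p.lower() in reply for p in case["expected_phrases"]):
            passed += 1
    return passed / len(cases)

score = evaluate(answer, "benchmark_questions.json")  # hypothetical test set
print(f"Benchmark pass rate: {score:.0%}")
if score < PASS_THRESHOLD:
    raise SystemExit("Quality bar not met; do not deploy.")
```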
“If you implement a black box, turn it on and let your customers use it on day 1, that’s pretty dangerous,” says Cheuk.
But that’s a very real scenario in the era of ChatGPT.
“In the old days of machine learning, it was hard and expensive; you needed a specialised team,” he points out.
“Now, anyone can create an application overnight using the ChatGPT API, so the risk is more people might be jumping to releasing the product too soon, because it's deceptively easy.”
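To see how low that barrier has become, the sketch below is roughly all the code a bare, guardrail-free chatbot needs, again assuming the OpenAI Python client with an illustrative model name. It is exactly this kind of overnight build, shipped straight to customers without benchmarking, that Cheuk cautions against.

```python
from openai import OpenAI

client = OpenAI()
history = []  # running conversation: no system prompt, no guardrails

while True:
    history.append({"role": "user", "content": input("You: ")})
    reply = client.chat.completions.create(
        model="gpt-4o-mini", messages=history
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    print("Bot:", reply)
```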
As the technology develops, and continues to produce results of varying reliability, disclosure and transparency will be crucial to engendering trust.
Understand the limitations, and be transparent
“If you are using generative AI in any application, just disclose it,” says Cheuk.
Peter Griffin
Science and Technology Journalist