Mia Simmonds Keate
Responsible AI involves staying on the right side of the law
Artificial intelligence has been put to good use by New Zealand organisations for decades in everything from customer service platforms to scientific research.
But recent advances in the field of AI, particularly the availability of sophisticated large language models which underpin services like ChatGPT, pose new challenges from a legal perspective.
AI thrives on data, and ready access to chatbots powered by LLMs makes it incredibly attractive for businesses to feed data into them, drawing on AI’s power to generate insights and summarise information quickly and efficiently.
Mia Simmonds Keate is a Solicitor practising in Technology, IP and Media law
What goes in
But the use of generative AI systems raises issues around the handling of confidential and personal information.
Many organisations handle confidential information that is appropriately drawn on within the digital confines of the organisation. This information may be deemed confidential under an employment agreement or another type of contract. Employees may inadvertently breach the confidentiality obligations of their own organisation, or of another organisation, by entering confidential information as prompts into generative AI systems.
In addition, organisations that collect and process personal information in New Zealand have statutory obligations under the Privacy Act 2020. If any of that personal information were unlawfully uploaded to a tool like ChatGPT, the organisation would remain fully responsible for the unauthorised use and disclosure.
Generative AI tools may also lift people’s personal information from the internet and use it in ways the person has not authorised. Under the Privacy Act, agencies need to make people aware of how their information will be used.
Organisations face a steep learning curve in working out how to remain compliant while taking advantage of this new generation of AI-as-a-service offerings. ChatGPT’s creator OpenAI has been proactive in educating its users about the risk of data entered via prompts being added to its model and potentially being disclosed in answers to other users. The Office of the Privacy Commissioner has also published useful guidance on generative AI and the application of privacy law.
What comes out
If personal or confidential data has been unlawfully entered into a generative AI system, the consequences are typically felt when outputs are produced, whether by the organisation itself or by other parties, through the unauthorised use or disclosure of that data.
That’s why organisations need robust processes in place to assess not only the quality and accuracy of AI outputs, but also their right to use the generated content, which may then feature in reports, publications, products and services.
Generative AI and copyright
Copyright is a legal right (a property right) given to “creators” such as authors and artists under the Copyright Act 1994. For there to be a copyright issue in relation to AI, some of the data collected and fed into AI systems must constitute copyright “works”.
When using generative AI, there is a possibility that the output could reproduce parts of copyright-protected text, images, video and audio that were included in the data during the AI training phase. “Data” could include music in the form of sound recordings, for example. A generative AI system may infringe copyright in original works by copying bits and pieces of existing works, or by attempting to replicate an artist’s style.
Unless the inclusion of copyright works was authorised by the copyright owner, outputs may infringe copyright.
Case law in this area is evolving quickly as generative AI tests the bounds of copyright protections. Earlier this year, Getty Images issued proceedings against Stability AI in the US, alleging that the AI image generator had produced images incorporating aspects of images from Getty’s collection. That case continues, and many more have been brought by artists and publishers against AI platform operators alleging copyright infringement.
The bottom line: think very carefully about the data you are inputting into AI systems. Is it protected by copyright? If so, you may need to seek approval from the copyright holder.
Protecting yourself and your organisation
Businesses now face a plethora of choices when it comes to AI systems. It’s more important than ever that organisations carefully think through their approach to using AI. In particular, consider how to use generative AI models in a lawful way that does not breach the Privacy Act 2020, confidentiality obligations, or the Copyright Act 1994.
It starts with good governance. What does your AI governance strategy look like? This should have input and oversight from senior leadership.
It involves conducting appropriate impact assessments before using or developing AI systems; a selection of templates and tools is available via the AI Forum. Operational protocols governing who can access data, and when, should also be put in place.
You should also be transparent about your use of generative AI, both internally and externally, to maintain client trust.
Generative AI is an incredibly powerful tool that is already being used in innovative ways by New Zealand organisations. With the right approach, you too can benefit from this technology while staying on the right side of the law and respecting the rights of your employees, partners and clients.