Actively managing AI solutions from deployment to decommissioning
How does an organisation track and mitigate risk over time? This step closely aligns with the ‘AI Impact Assessment’ and ‘Develop’ components of the Lite and Robust toolkits, and the resources listed in those sections are also relevant to more complex organisations.
In some ways, this is the most crucial step of the AI governance process: it is where all of the definition, design and deployment work from the previous steps comes to life. Here, organisations put their principles and values into practice, working to ensure that their AI-enabled vision progresses in a responsible and ethical way that aligns with their goals and risk appetite.
To make this management step a reality, organisations should use their existing governance processes to manage risk over time. Data and information governance processes (like this one from the NZ government) are especially relevant to AI systems and solutions, as many of the AI risks organisations face are similar to those in the data space (e.g. privacy and decision-making bias). Complex organisations, such as those looking to use the ‘Comprehensive’ toolkit, are likely to have significant governance processes already in place. Building on those processes, and on the stakeholders already involved in these types of decisions, is likely to be the best way to embed this work in how these organisations operate in the future.