Procurement Applications
Plan
Source
Manage & Maintain
Decommission
Top 5 questions to ask
Plan
Business Opportunity or Problem: Is the business clear on the scope and outcome of what it is seeking to achieve, both short and long term?
Cost/Benefit: Do the benefits outweigh the costs and associated risks?
Alignment with the organisation’s business strategy: Are there suitable market options available that align with your strategies, e.g., avoiding vendor and single-model lock-in while maintaining flexibility and continuous discovery of new models?
Complexity of AI Model Needed: Is the business confident that a COTS solution will satisfy most requirements and can meet any required international standards and regulations, i.e., that a bespoke solution is not required?
Context: Does the company have the right resources and risk appetite to operate an AI system in this sector or industry, i.e., one subject to many regulations and laws, e.g., the additional data privacy and security requirements in health and finance?
Source
Evaluation Criteria: Does the RFx have the right evaluation criteria, with the right weighting for the business, and do the criteria adequately address local laws (e.g., NZ Privacy Act compliance)?
Adaptability/Flexibility: Does the market option provide the required flexibility? E.g., can AI components be switched on, off or bypassed if required, computing power dialled up and down, and upgrades deferred? Does the supplier lock you into their model, or can your ICT environment include a diversity of suppliers?
Solution Gaps: Are there gaps and/or limitations, and how will these be addressed?
End-to-End Accountability: Is the company clear on its own and third-party accountability when deploying this AI solution, i.e., is each identified risk accepted, shared, reduced or eliminated?
Appropriate Solution Support: Between the vendor and the company, are the support model and required resources clear?
Manage & Maintain
AI Readiness Assessment: Is the business ready to deploy the AI system or feature in alignment with the contract? E.g., has all required preparation, notification and testing been done and signed off to operate the system responsibly?
Communication and Training: Are all the relevant stakeholders aware of, and trained in, their new responsibilities when using the new AI system or feature, i.e., the new AI roles and responsibilities?
Resources to Support and Manage: Does the business have ongoing access to the right skill sets to validate outputs and facilitate responsible management, both internally and externally with the vendor?
Continuous Monitoring: Does the company have appropriate monitoring and reporting in place to ensure responsible AI (RAI) compliance, contract compliance and SLA reporting? What benchmarks are in place to check the performance of the model?
Required AI Governance: Is the correct company governance in place to meet contractual and legal requirements and ensure alignment with company values? E.g., are all required independent assessments, such as a bias audit, purchased and/or planned?
Decommission
Reason for Decommissioning: Is the AI product or component still meeting requirements, or is there an unacceptable number of contract-breach incidents suggesting the AI feature should be switched off, or the solution decommissioned, at contract renewal time?
Contractual and Legal Context: What legal or contractual obligations does the company have when decommissioning? Is the business fully aware of the impact, e.g., its rights to remove its data or take data to a replacement system?
Communication: Are all the relevant stakeholders, e.g., system owners, users, staff, managers and vendors, aware of and approving of the potential decommissioning?
Decommissioning Plan: Does the business have a plan for how to decommission the AI system, and have all stakeholders had input?
Financial: Is the business clear on how to treat the AI decommissioning financially? E.g., can the company take financial advantage of any system improvements provided for in the original contract?
RAI considerations & Mitigation of inherent risk
Key principles (Plan)
Justice and Fairness - ensuring fairness, inclusivity, and the identification and mitigation of bias during decision-making processes.
Wellbeing - promoting, as much as practically possible, the wellbeing of New Zealand’s people and environment.
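The bias identification this principle calls for can be made concrete with a simple parity check over a system's decisions. The sketch below is a minimal illustration, not a full bias audit: the group labels and decision counts are invented, and demographic parity is only one of several fairness metrics an assessor might apply.

```python
from collections import defaultdict

def demographic_parity_difference(outcomes):
    """Largest gap in favourable-outcome rates across groups.

    `outcomes` is a list of (group, favourable) pairs. A gap near 0
    suggests parity on this one (narrow) metric; a large gap warrants
    investigation, not an automatic conclusion of unfairness.
    """
    totals, favourable = defaultdict(int), defaultdict(int)
    for group, ok in outcomes:
        totals[group] += 1
        favourable[group] += int(ok)
    rates = {g: favourable[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical screening decisions from a candidate AI system.
decisions = ([("A", True)] * 80 + [("A", False)] * 20
             + [("B", True)] * 50 + [("B", False)] * 50)
gap, rates = demographic_parity_difference(decisions)
print(gap, rates)  # group A is favoured by a gap of roughly 0.3
```

A check like this belongs in the Plan phase as part of the harms assessment, and again later as an acceptance test against vendor-supplied or piloted outputs.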
Key principles (Source)
Reliability, Security, and Privacy - Can the vendor prove the AI component will not exhibit undesirable behaviour (e.g., illegal activity) in foreseeable use or misuse scenarios, and that it is resilient enough to recover from unexpected events and adapt?
Transparency - The operation and impacts should be traceable, auditable and appropriately explainable.
Key principles (Manage & Maintain)
Human Oversight and Accountability - Companies that deploy AI systems from the market need to be aware that saying ‘the AI did it’ is not an acceptable legal defence. Deployers will be held responsible and accountable for their AI decision-making and AI content creation.
Key principles (Decommission)
Reliability, Security, and Privacy - ensuring data is securely deleted or anonymised to prevent future unauthorised access.
Transparency - The decommissioning operation and impacts should be traceable, auditable and appropriately explainable.
Mitigation of inherent risk (Plan)
Complexity: High, as a thorough understanding of business needs, data types, regulatory compliance and ethical considerations is required.
Impact: High, as decisions made here set the foundation for the entire procurement process, affecting all subsequent phases.
Mitigation of inherent risk (Source)
Complexity: Context Dependent, as it involves evaluating multiple vendors, assessing data privacy and security, and evaluating mechanisms and approaches for integrating with existing systems.
Impact: Context Dependent, as it influences the quality and reliability of the procured application, where the impacts will vary according to operational considerations.
Mitigation of inherent risk (Manage & Maintain)
Complexity: Context Dependent, as it requires ongoing monitoring of performance, quality, security, compliance and user experience, all of which will vary based on the features and integration points of the application.
Impact: Context Dependent, depending on the operational implications of the application.
Mitigation of inherent risk (Decommission)
Complexity: Context Dependent, as it involves ensuring business capabilities are still being met, addressing changes to integrations, and managing change - all of which depend on the type and scale of the application.
Impact: High, as it directly impacts end users and the technology application landscape, and also has potential commercial implications.
Key mitigants (Plan)
Responsible AI Principles: Adopting principles for responsible or trustworthy AI can guide an organisation and provide a framework for evaluating procurement options. Companies need to understand the potential impact of deploying AI systems on individuals, groups and society.
Harms Assessment: Organisations need to understand where their planned AI purchase fits on the continuum of AI harms risk, especially if the solution is supporting or making decisions.
Plan to Manage Risk: Deployers need to be aware of the potential harms from these systems, assess their risk appetite and adopt appropriate mitigation controls.
Key mitigants (Source)
Evaluation Criteria: In addition to the usual IT evaluation criteria, the principles of responsible AI should be used as evaluation criteria in the sourcing exercise, weighted by their priority and relevance to the business. Different characteristics will have different priorities, and trade-offs may be required, e.g., can the vendor prove the AI component functions appropriately and does not cause undue harm or pose unreasonable safety or security risks to individuals, groups and society?
Contractual Clauses: AI-specific clauses need to exist in IT contracts to deal with AI-specific issues, such as clauses relating to training data or intended use. Unique, vendor-specific clauses ensure clear liability for specified AI model events (e.g., breach terms).
Key mitigants (Manage & Maintain)
Continuous Monitoring and Control: AI applications need to be monitored because they operate in an evolving environment, with different considerations for different use cases, model drift, and any emergent bias. AI systems regularly incorporate new features as they evolve, which also contributes to the need for ongoing monitoring and control mitigations. Companies need to be clear on the human oversight required for their AI application solution (e.g., human in the loop).
Right of Redress: Users need to have the right to opt out or complain, with a right of redress, especially when systems are supporting or making decisions.
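One way to make continuous monitoring concrete is a periodic drift check on the model's output scores against an agreed benchmark. The sketch below computes a population stability index (PSI) between a baseline sample (e.g., scores captured at go-live) and current traffic; the thresholds in the comments are a common rule of thumb rather than a standard, and the data is simulated for illustration.

```python
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """Population stability index (PSI) between two score samples.

    Common rule of thumb (convention, not a standard): < 0.1 stable,
    0.1-0.25 moderate drift, > 0.25 significant drift worth escalating.
    """
    # Bin edges come from the baseline distribution so both samples
    # are compared on the same scale.
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    current = np.clip(current, edges[0], edges[-1])  # keep scores in range

    base_frac = np.histogram(baseline, edges)[0] / len(baseline)
    curr_frac = np.histogram(current, edges)[0] / len(current)

    # Floor empty bins so the log term stays finite.
    base_frac = np.clip(base_frac, 1e-6, None)
    curr_frac = np.clip(curr_frac, 1e-6, None)
    return float(np.sum((curr_frac - base_frac) * np.log(curr_frac / base_frac)))

# Simulated monitoring data: scores at go-live vs. after a vendor upgrade.
rng = np.random.default_rng(0)
baseline = rng.normal(0.5, 0.1, 5000)
drifted = rng.normal(0.6, 0.1, 5000)
print(population_stability_index(baseline, baseline[:2500]))  # small: stable
print(population_stability_index(baseline, drifted))          # large: drifted
```

Scheduled as part of SLA reporting, a breach of the agreed threshold becomes a trigger for the human oversight and vendor escalation described above.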
Key mitigants (Decommission)
Contract Review: The original contract needs to be reviewed to ensure the organisation understands the impacts of decommissioning the AI system (e.g., reuse of generated content, ongoing ownership of improvements if company data was used to improve the vendor’s market solution, and respecting any intellectual property rights).
Decommissioning Plan: An application with embedded AI is often deeply integrated with other IT infrastructure, and any disconnection requires careful planning. The plan needs to include all relevant obligations (e.g., communicating with impacted stakeholders, complying with contract clauses and avoiding disrupted processes).