Ethically Driven Design
Identifying potential risks
The Design phase of implementing responsible AI is where the rubber really meets the road for your teams. It can be a time of confusion: "You say we need to meet these principles, but what does this really mean for my job?" "How do I think this way?" "How do I apply it to the way I make decisions?" In this toolkit we suggest Responsible AI design tools that turn these complex ideas into concrete trade-off discussions your employees can have within their teams, with stakeholders, and with potential customers of the solution or product.
Conduct ethical impact assessments
Ethical impact assessments help you identify potential biases, discrimination, and unintended consequences in the design and implementation of AI systems.
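One way a team can make the "identify potential biases" step of an impact assessment concrete is to disaggregate model results by a protected characteristic and look at the gaps. The snippet below is a minimal sketch, assuming the open-source Fairlearn library is installed; the outcomes and group labels are hypothetical placeholders, not real assessment data.

```python
import pandas as pd
from fairlearn.metrics import MetricFrame, selection_rate
from sklearn.metrics import accuracy_score

# Hypothetical outcomes from a screening model, broken down by a
# protected characteristic; a real assessment would use held-out data.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
group = ["A", "A", "A", "A", "B", "B", "B", "B"]

mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=pd.Series(group, name="group"),
)

print(mf.by_group)      # metric values per group
print(mf.difference())  # largest gap between groups, per metric
```

Large gaps in selection rate or accuracy between groups do not settle the ethical question on their own, but they give the team a concrete number to bring into the trade-off discussion.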
Microsoft Responsible AI Toolbox
Microsoft Responsible AI Toolbox - A collection of tools and best practices for building responsible AI solutions. It can be used to inform the development of a responsible AI standard as part of a business's design strategy.
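For teams who want to see what using the Toolbox looks like in practice, the sketch below shows one way a trained model might be loaded into its interactive dashboard. It is a minimal illustration only: package and method names follow the open-source microsoft/responsible-ai-toolbox repository and may differ between versions, and the dataset, model, and column names are hypothetical.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from responsibleai import RAIInsights
from raiwidgets import ResponsibleAIDashboard

# Tiny illustrative dataset; a real assessment would use the team's own
# training and test splits, with the target column included in each frame.
train_df = pd.DataFrame({
    "income": [30, 45, 60, 25, 80, 52, 38, 70],
    "tenure": [1, 4, 7, 2, 10, 5, 3, 8],
    "approved": [0, 1, 1, 0, 1, 1, 0, 1],
})
test_df = pd.DataFrame({
    "income": [33, 65, 28, 75],
    "tenure": [2, 6, 1, 9],
    "approved": [0, 1, 0, 1],
})

model = LogisticRegression().fit(
    train_df.drop(columns="approved"), train_df["approved"]
)

rai_insights = RAIInsights(
    model, train_df, test_df,
    target_column="approved", task_type="classification",
)
rai_insights.explainer.add()       # feature-importance explanations
rai_insights.error_analysis.add()  # where the model fails most often
rai_insights.compute()

# Opens an interactive dashboard the team can walk through in a design review.
ResponsibleAIDashboard(rai_insights)
```

The value of the dashboard in the Design phase is less about the numbers themselves and more about giving designers, engineers, and stakeholders a shared artefact to discuss in review sessions.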
AI Impact Assessment
Microsoft Responsible AI Impact Assessment - A pre-built template for organisations to fill out when considering the risks of an AI implementation.
UK GDPR AI Compliance
The UK Information Commissioner's Office (ICO) has created in-depth guidance explaining how the UK GDPR applies to AI, emphasising the importance of accountability, transparency, and lawful processing of personal data. It covers topics such as data protection principles, data minimisation, profiling, automated decision-making, and data subject rights.
Stakeholder Involvement
Involve stakeholders, including employees, customers, and external experts, in the ethical decision-making process. Gather diverse perspectives to ensure a comprehensive understanding of ethical challenges and potential solutions.