Ethical AI: Who's in charge?

Whose job is it to make sure AI is ethical?

Responsibility for ethical AI must extend beyond the executive level and be fully embraced by software designers and developers as a collective effort - one that involves people from many disciplines, including:

  • key stakeholders
  • ethicists
  • legal experts
  • UX researchers
  • UX/UI designers
  • content authors
  • data stewards
  • developers
  • data scientists
  • end users

By fostering a collaborative environment where different perspectives and expertise come together, we can co-design AI systems that not only function effectively but also align with ethical principles and values. This approach is essential as it places the responsibility on every person in the pipeline.

According to McKinsey, "In a recent flash survey of more than 100 organizations with more than $50 million in annual revenue, McKinsey finds that 63 percent of respondents characterize the implementation of gen AI as a 'high' or 'very high' priority. Yet 91 percent of these respondents don't feel 'very prepared' to do so in a responsible manner." (Source: McKinsey)

One challenge is the bias that can be present in teams, data, algorithms, and even in user perceptions. Bias in AI can lead to discrimination and unfair outcomes, with serious consequences in domains such as hiring, lending, and criminal justice. Addressing this challenge means training AI systems on diverse and representative datasets, regularly monitoring and auditing algorithms for bias, and training the teams who build the software on data equity, bias, and inclusive design.
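One way to make that monitoring concrete is a recurring audit of model outcomes across groups. Below is a minimal sketch in Python: it assumes predictions live in a pandas DataFrame, and the column names, example data, and threshold are all illustrative. It checks demographic parity, which is just one of several possible fairness metrics a team might choose.

```python
# Minimal sketch of a recurring bias audit (demographic parity check).
# Column names, data, and the 0.2 threshold are illustrative, not a standard.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, prediction_col: str) -> pd.Series:
    """Positive-prediction rate per group."""
    return df.groupby(group_col)[prediction_col].mean()

def parity_gap(rates: pd.Series) -> float:
    """Largest difference in selection rate between any two groups."""
    return float(rates.max() - rates.min())

# Hypothetical hiring-screen predictions with a sensitive attribute
audit_df = pd.DataFrame({
    "group": ["A", "A", "B", "B", "B", "C"],
    "screened_in": [1, 0, 1, 1, 0, 0],
})

rates = selection_rates(audit_df, "group", "screened_in")
gap = parity_gap(rates)
if gap > 0.2:  # threshold chosen by the team and revisited over time
    print(f"Parity gap {gap:.2f} exceeds threshold; flag for review:\n{rates}")
```

A check like this only surfaces a symptom; the team still has to investigate whether the gap reflects the data, the model, or the problem framing.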

Waterfall, anyone?

Another challenge is the lack of interpretability and explainability in AI systems. Many AI models, such as deep learning neural networks and LLMs, are black boxes, making it difficult to understand how they arrive at their decisions. This lack of transparency raises ethical concerns, especially in critical applications like healthcare and autonomous vehicles. Researchers and practitioners need to develop techniques that make AI systems more interpretable and provide explanations for their decisions.
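As a small illustration of what such techniques look like in practice, the sketch below uses scikit-learn's permutation importance on a placeholder tabular model. This is not how one would open up a deep network or an LLM, but it shows the kind of post-hoc explanation a team can attach to model behavior.

```python
# Minimal sketch of one post-hoc interpretability technique: permutation
# importance. The dataset and model are placeholders; the same pattern
# applies to any fitted estimator with tabular inputs.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure how much the score drops; a large drop
# means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: t[1], reverse=True)
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```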

In the early days of software design and development, everything was documented. With the rise of agile, teams move fast and break things rather than spend time on the old waterfall methods, which emphasized documentation and process over speed. Under this low- or no-doc approach, teams stopped capturing the logic and thinking behind a decision, or conducting an impact analysis to build a shared understanding of its consequences. The result is systems without any historical or continuously improved documentation.

To address this challenge within agile teams, it is essential to balance the need for speed with the necessity of understanding and documenting AI behavior. Agile teams can adopt a "living documentation" approach, where documentation is continuously updated as part of the development process rather than being a one-time activity at the end of a project. This can be facilitated by integrating tools that automatically generate documentation based on the code and decisions made. Additionally, embedding practices such as regular "reflection sessions" where team members discuss the reasoning behind decisions and the implications of AI behavior can promote a culture of transparency. These practices not only help in making the AI systems more interpretable but also ensure that the knowledge is shared and accessible to all team members, thereby enhancing collective understanding and accountability.
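Here is what a lightweight, living decision record might look like in code; the fields, file path, and example entry are illustrative, not a prescribed standard.

```python
# Minimal sketch of "living documentation": a lightweight decision record
# appended to a shared log as part of the normal development workflow.
import json
from dataclasses import dataclass, asdict, field
from datetime import date
from pathlib import Path

@dataclass
class DecisionRecord:
    title: str
    context: str    # why the decision was needed
    decision: str   # what the team chose
    impact: str     # expected impact, including ethical considerations
    owners: list[str] = field(default_factory=list)
    decided_on: str = field(default_factory=lambda: date.today().isoformat())

def append_record(record: DecisionRecord, log_path: str = "decision_log.jsonl") -> None:
    """Append one record per line so the log stays diff-friendly and auditable."""
    with Path(log_path).open("a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Hypothetical entry recorded during a sprint
append_record(DecisionRecord(
    title="Use aggregated age bands instead of raw birth dates",
    context="Raw birth dates add re-identification risk with little model benefit.",
    decision="Bucket ages into five-year bands at ingestion.",
    impact="Reduces privacy risk; slight loss of granularity for the model.",
    owners=["data steward", "ML engineer"],
))
```

Because the log is plain text kept in version control, it evolves with the codebase and can be reviewed in the same pull requests that change the system's behavior.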

[Image: SDLC waterfall diagram]

Best Practices

Leading ethical AI projects requires following best practices to ensure the development and deployment of AI systems that are both technically robust and ethically sound.

One best practice is to establish clear goals and guidelines for the project. This includes defining the ethical principles and values that the AI system should adhere to, as well as setting specific objectives - or a UX vision - for the project.

Another best practice is to prioritize transparency and accountability. This involves documenting the decision-making process and ensuring that it is transparent and understandable to stakeholders. Teams don't have to follow SDLC Waterfall to be effective, but they do need to establish mechanisms for auditing and monitoring the AI system to detect and address any ethical issues that may arise post-launch and during subsequent releases.

Additionally, it is important to involve diverse perspectives in the development and deployment of AI. This includes engaging with stakeholders from different backgrounds and ensuring that their voices are heard and considered. By incorporating diverse perspectives, organizations can avoid biases and ensure that their AI system is fair and inclusive.

Regular training and education on ethical AI are also crucial for leading such projects. This helps in raising awareness about ethical considerations and ensures that team members are equipped with the necessary knowledge and skills to make ethical decisions throughout the AI development process.

Lastly, it is important to continuously learn and adapt. Ethical AI is an evolving field, and new challenges and opportunities may arise over time. Leaders of ethical AI projects should stay updated with the latest research and developments in the field and be willing to adapt their approaches and practices accordingly.

How to Get Started

Getting started with developing ethical AI requires a systematic approach. Here are some steps to consider:

1. Identify the ethical considerations: Start by identifying the potential ethical issues that may arise from the use of AI in your specific domain. This may include issues related to bias, privacy, fairness, transparency, and accountability.

2. Establish an ethical framework: Develop an ethical framework that outlines the principles and values that the AI system should adhere to. This framework will serve as a guide for the development and deployment of the AI system.

3. Involve diverse stakeholders: Engage with stakeholders from different backgrounds and perspectives to ensure that their voices are heard and considered. This includes end-users, domain experts, ethicists, and legal experts.

4. Design for interpretability and explainability: Incorporate techniques that make the AI system interpretable and provide explanations for its decisions. This will help in addressing concerns related to transparency and accountability.

5. Regularly monitor and audit the AI system: Implement mechanisms to continuously monitor and audit the AI system for bias, fairness, and other ethical considerations. This will help in identifying and addressing any ethical issues that may arise during the deployment of the AI system (a sketch of how steps 2 and 5 can come together follows this list).
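To make steps 2 and 5 concrete, here is a minimal sketch of an ethical framework expressed as a release-gate checklist. Every check, metric, and threshold below is a hypothetical placeholder for whatever a team's own framework and audit tooling actually produce.

```python
# Minimal sketch of an ethical-framework checklist run as a release gate.
# Checks, metrics, and thresholds are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class ReleaseCheck:
    name: str
    passed: bool
    detail: str = ""

def run_release_gate(checks: list[ReleaseCheck]) -> bool:
    """Print any failed checks and return True only if everything passed."""
    failures = [c for c in checks if not c.passed]
    for c in failures:
        print(f"FAILED: {c.name} ({c.detail})")
    return not failures

# Hypothetical results, as they might be pulled from a team's audit tooling
checks = [
    ReleaseCheck("Bias audit run on latest model", passed=0.12 <= 0.20,
                 detail="parity gap 0.12 vs. threshold 0.20"),
    ReleaseCheck("Decision log updated this release", passed=True),
    ReleaseCheck("Explanations available for flagged decisions", passed=False,
                 detail="coverage 0.80 vs. target 0.95"),
]

if not run_release_gate(checks):
    print("Release blocked pending review with stakeholders.")
```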

By following these steps and adopting a human-centered approach, organizations can take the first steps toward developing and implementing ethical AI systems.

Karen Passmore

Karen Passmore is the CEO of Predictive UX, an agency focused on product strategies and user experience design for AI and data-rich applications. Karen talks about UX, AI, Inclusive Design, Content and Data Strategies, Search, Knowledge Graphs, and Enterprise Software. Her career is marked by product leadership at Fortune 500 companies, startups, and government agencies.
