
The 5 Elements Your AI Strategy’s Code of Ethics Needs

Learn 5 ways to incorporate AI ethics into your AI strategy to deliver business advantage and protect employees and clients.

As artificial intelligence’s (AI’s) potential grows, so does the need for a cohesive AI strategy that uses AI to prioritize and execute the enterprise’s goals. Aside from articulating business goals and mapping out the ways organizations can use AI to achieve them, there is another extremely important element that every AI strategy needs: a code of ethics.

The rise of AI brings with it the question of who is responsible for making sure these powerful technologies are used responsibly and safely. Ethics matters here because machine intelligence now sits between us and the organizations we deal with. AI algorithms aren’t neutral: they are built by humans, which leaves them exposed to bias in how they are programmed and used. Instances of bias have already been found in image searches, hiring software, financial searches, and other human-programmed AI applications.

Establishing ethical standards doesn’t necessarily change human behavior, but it does create a baseline of ideals and behaviors for how an enterprise will approach data collection and use through AI algorithms and other applications. It not only mitigates risk within the enterprise but also allows the organization to communicate its standards and its mission of customer care to the market.

As artificial intelligence and machine learning technologies become part of our everyday lives, and as data and big data insights become accessible to everyone, chief data officers (CDOs) and data teams are taking on an important moral role as the conscience of the corporation.

To establish a better set of behaviors, here are a few principles that technology providers can adhere to:

1. Utility:

Ensure that your algorithms are clear, useful, and satisfying (even delightful) for the user. Establish holistic metrics: don’t measure success on revenue alone, but think about social outcomes as well, and try hard to measure the leading indicators of those outcomes.
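
As a minimal sketch of what such a holistic scorecard might look like, the following tracks revenue alongside leading indicators of social outcomes. The field names and thresholds here are illustrative assumptions, not prescribed metrics:

```python
# Minimal sketch of a holistic scorecard: revenue (a lagging business metric)
# tracked alongside leading indicators of social outcomes.
# All field names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AlgorithmScorecard:
    revenue_usd: float        # lagging business outcome
    complaint_rate: float     # leading indicator: user-reported problems
    appeal_rate: float        # leading indicator: contested decisions

    def needs_review(self) -> bool:
        # Flag the algorithm even when revenue looks healthy.
        return self.complaint_rate > 0.02 or self.appeal_rate > 0.05

scorecard = AlgorithmScorecard(revenue_usd=1_250_000,
                               complaint_rate=0.03,
                               appeal_rate=0.01)
print("Review needed:", scorecard.needs_review())  # True: complaints exceed tolerance
```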

2. Empathy and Respect:

Validate that your algorithms understand and respect people’s explicit and implicit needs. Build diversity into your data teams: work toward representation of the population where the algorithms will be deployed, and avoid the blinders of homogeneous teams. Diverse teams will work to have diverse training data, more thoughtful feature sets, and less bias in the data.
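
One concrete way a team can act on this is to compare group representation in the training data against the deployment population. The sketch below does exactly that; the group labels and the tolerance are illustrative assumptions:

```python
# Minimal sketch: compare training-data representation against the population
# where the model will be deployed. Groups and tolerance are illustrative.
from collections import Counter

def representation_gaps(training_groups, population_shares, tolerance=0.05):
    """Return groups whose share of the training data strays from the
    deployment population by more than `tolerance` (absolute share)."""
    counts = Counter(training_groups)
    total = len(training_groups)
    gaps = {}
    for group, expected in population_shares.items():
        actual = counts.get(group, 0) / total
        if abs(actual - expected) > tolerance:
            gaps[group] = (actual, expected)
    return gaps

# Toy example: group B is underrepresented relative to the deployment region.
training = ["A"] * 80 + ["B"] * 20
population = {"A": 0.6, "B": 0.4}
print(representation_gaps(training, population))  # {'A': (0.8, 0.6), 'B': (0.2, 0.4)}
```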

3. Trust:

Strive for your algorithms to be transparent, secure, and consistent in behavior. While explainability for deep learning algorithms is hard, explore alternate algorithms where feasible, or expose the underlying logic and rules to the greatest extent possible. Have centralized data science teams or a center of excellence to avoid or mitigate line-of-business bias: a data science team reporting to sales, for example, will lean toward sales objectives, and centralizing the team avoids that bias. In some situations, you may instead have a hub-and-spoke structure, where line-of-business (LOB) data science teams align with the center of excellence to mitigate those biases.
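
As one sketch of what “exposing logic/rules” can look like in practice, a shallow decision tree can serve as an interpretable alternative whose learned rules are printed for review. This assumes scikit-learn is available; the toy loan-approval data and feature names are illustrative:

```python
# Minimal sketch: a shallow decision tree as an interpretable alternative,
# with its learned rules exposed as text. Assumes scikit-learn is installed;
# the toy loan-approval data and feature names are illustrative.
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy training data: [income_in_thousands, years_of_credit_history]
X = [[30, 1], [45, 3], [60, 5], [80, 10], [25, 0], [90, 12]]
y = [0, 0, 1, 1, 0, 1]  # 0 = deny, 1 = approve

# max_depth=2 keeps the decision logic small enough to read and audit.
model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text renders the learned rules so reviewers can inspect them.
print(export_text(model, feature_names=["income_k", "credit_years"]))
```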

4. Fairness and Safety:

Ensure that the algorithms are free of bias that could cause harm, digital or physical, to people and/or the organization. Leverage fairness tools to check for unwanted bias in datasets and machine learning models, and use state-of-the-art algorithms to mitigate such bias. Data teams should be chartered as the “conscience” officers who monitor the use of AI and ensure that the data – and the people it represents – are protected.
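
To illustrate the kind of check such fairness tools perform, here is a plain-Python sketch of one common metric, the demographic parity difference. The data, the two-group assumption, and the 0.1 tolerance are all illustrative:

```python
# Minimal sketch of a bias check: demographic parity difference, i.e., the
# gap in favorable-outcome rates between two groups. All names, data, and
# the 0.1 threshold are illustrative assumptions, not a standard.

def demographic_parity_difference(predictions, groups):
    """Difference in positive-outcome rates between groups 0 and 1."""
    rate = {}
    for g in (0, 1):
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rate[g] = sum(outcomes) / len(outcomes)
    return abs(rate[0] - rate[1])

# Toy model outputs (1 = favorable decision) and group membership per person.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]

gap = demographic_parity_difference(preds, groups)
print(f"Demographic parity gap: {gap:.2f}")  # 0.50
if gap > 0.1:  # illustrative tolerance; set per your governance policy
    print("Flag for review: favorable outcomes are uneven across groups.")
```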

5. Accountability:

Establish clear escalation and governance processes, and offer recourse when customers are unsatisfied. This accountability has to come from the top of the organization: executives should remain focused on refining design practices, creating AI that is human focused, and auditing for bias. Have ongoing data governance practices performed jointly by IT and by the business and compliance functions.

Societal trust is critical for widespread AI adoption. AI may never be completely devoid of bias, just as we humans aren’t. However, diverse data teams, with the right partnerships, can help reframe AI-driven consequences in terms of human rights and make AI more acceptable. By creating a code of ethics in your AI strategy, your enterprise demonstrates a commitment to responsible technology use – which can translate to better business results.

Learn more about the ways AI maturity and strategies can affect your organization for the better; watch IDC’s on-demand webcast, “How Your AI Maturity Impacts Building a Winning Corporate Strategy.”

Ritu is responsible for leading the development of IDC’s thought leadership for AI research and for managing the Worldwide AI and Automation Software research team. Her research focuses on the state of enterprise AI efforts and global market trends in the rapidly evolving AI and machine learning (ML) ecosystem, including generative AI innovations. Ritu also leads research that addresses the needs of AI technology vendors and provides actionable guidance on how to crisply articulate their value proposition, differentiate, and thrive in the digital era.