The ethics of artificial intelligence in the enterprise

By AJung Moon. Posted 09 February 2018
The increasing use of artificial intelligence (AI) in enterprise applications reflects its potential to provide competitive advantages by parsing data for useful insights into customers and competitors. But there’s a parallel trend afoot: assessing the ethical implications of AI and the algorithms that drive it in an enterprise setting.

Just last year, the Massachusetts Institute of Technology (MIT) and Harvard University joined forces on a $27 million project to explore AI ethics. More recently, DeepMind Technologies Ltd., a private British firm acquired by Google in 2014, launched a new research team dedicated to studying AI ethics. These efforts join other recent initiatives on AI ethics, including the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, the AI Now Institute at New York University, and the Leverhulme Centre for the Future of Intelligence at Cambridge University.

Why so much interest in AI ethics, and what does it mean for enterprise organizations?

Defining AI

Though some experts insist on reserving the term AI for software artifacts with intelligence equal or superior to that of humans – also known as “strong AI” – these new initiatives use the term more broadly, to include machine learning and data-driven algorithms that supplement or replace traditionally human decisions.

If we accept the latter, broader definition, we must recognize that AI has been a feature of the computer age for many years. Today, in the era of Big Data, the Internet and social media, many of the advantages of using AI are already well-known and embraced: AI can offer an enterprise a competitive advantage, enable efficiencies and provide insights into customers and their behavior.

Running an algorithm to detect nontrivial patterns and value in data is almost universally seen as a cost-effective path to value creation, especially in a market-oriented, competitive environment.

But the rise of AI ethics initiatives is an acknowledgement that these seemingly advantageous uses of AI can backfire. Recent examples of catastrophic brand damage and public backlash have revealed the ethical, societal and enterprise risks that may accompany the deployment of AI into our enterprises.

Enterprise organizations should address how they use AI because that use comes with business risks as well.

Human bias and discrimination

The algorithms that drive AI and, more importantly, the datasets used to train them often derive from human sources. Thus, these algorithms unavoidably reflect human biases.

In one example, evidence shows that certain automated soap dispensers fail to function when a dark-skinned user places a hand under the sensor. The developers ostensibly tested these systems using their own hands but failed to test them on users with different skin tones. This example illustrates how human-produced algorithms can behave according to the perspectives and biases of their creators.
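
To make the mechanism concrete, here is a minimal, hypothetical sketch in Python (using synthetic data and the scikit-learn library; none of it comes from the actual dispenser case) of how a model trained on an unrepresentative sample systematizes that sampling gap:

```python
# Hypothetical illustration: an unrepresentative training set produces a
# model that works well for the majority group and poorly for the minority.
# "Group" stands in for any attribute (such as skin tone in the
# soap-dispenser example) that the developers failed to sample broadly.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_samples(n, shift):
    # Two synthetic sensor readings; `shift` moves the group's signal range,
    # so the true decision boundary differs from group to group.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X.sum(axis=1) > 2 * shift).astype(int)  # ground truth per group
    return X, y

# Training data: 95% group A, 5% group B. The sampling gap, not any
# malicious intent, is the source of the bias.
Xa, ya = make_samples(950, shift=0.0)  # well-represented group
Xb, yb = make_samples(50, shift=2.0)   # under-represented group
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# Evaluated on balanced held-out data, accuracy diverges sharply by group.
for name, shift in [("group A", 0.0), ("group B", 2.0)]:
    Xt, yt = make_samples(1000, shift)
    print(name, "accuracy:", round(model.score(Xt, yt), 2))
```

Nothing in the model is malicious; the disparity comes entirely from who was, and was not, represented in the training data.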

These biases are often unintentional; however, the potential backlash against a company that makes such a mistake exists whether the consequence was intended or not. The larger point is that, intentional or otherwise, the human biases potentially inherent in human-created algorithms have largely escaped scrutiny. And that lack of scrutiny translates to risk for enterprises that use AI.

If your enterprise fails to examine the potential biases of employees who create algorithms that drive AI or machine-learning systems, you risk systematizing that bias to all stakeholders in your enterprise. And that puts your organization at risk of brand damage, lawsuits and public backlash, and potentially feeds a culture of distrust between your enterprise and its employees and customers.

AI’s widespread application, and risk

The biased soap dispenser is but one example of a potentially pervasive challenge. AI algorithms can be applied to hiring practices and sentencing guidelines, as well as safety and security operations. They are intrinsic to how social media works – or doesn’t work.

Simply put, AI is being applied to myriad everyday and specialized endeavors; it is becoming ubiquitous – and so are its potential risks to your enterprise.

The challenge is to understand how algorithms are designed and vetted to guard against the perspectives and biases – conscious or not – of their creators. And that raises challenging questions.

How many CEOs really understand how AI and its algorithms are sourced and applied to their enterprise? (Many companies work with third-party providers that deliver AI solutions for their applications.)

Corporate due diligence can be a legal requirement – does it extend to reviewing how AI applications are built and used by an enterprise? And is meeting the legal definition of due diligence enough for an enterprise using AI, or do ethics and the traditional notion of corporate responsibility also apply here?

Do enterprises owe themselves and society more than mere legal compliance? Can your enterprise say with confidence that how it applies AI is fair, transparent and accountable to humans?

Answering these questions requires an enterprise to review and articulate its own corporate stance on ethics and to apply a methodical approach to risk assessment.
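
As one illustration of what a methodical risk assessment might include, the hypothetical Python sketch below audits a model’s logged decisions for disparate impact across groups. The group labels, decision log and 0.8 threshold (borrowed from the well-known “four-fifths rule” in US employment guidance) are illustrative assumptions, not a complete audit procedure:

```python
# Hypothetical sketch of one audit step: comparing a model's selection
# rates across groups and flagging disparate impact for human review.
from collections import Counter

def selection_rates(decisions):
    # decisions: iterable of (group, approved) pairs logged from the model.
    totals, approved = Counter(), Counter()
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def impact_ratios(decisions, reference):
    # Ratio of each group's selection rate to a reference group's rate.
    rates = selection_rates(decisions)
    return {g: rate / rates[reference] for g, rate in rates.items()}

# Invented decision log: 60/100 of group A approved vs. 30/100 of group B.
log = [("A", 1)] * 60 + [("A", 0)] * 40 + [("B", 1)] * 30 + [("B", 0)] * 70
for group, ratio in impact_ratios(log, reference="A").items():
    flag = "REVIEW" if ratio < 0.8 else "ok"  # four-fifths rule threshold
    print(f"group {group}: impact ratio {ratio:.2f} [{flag}]")
```

A check like this does not by itself make a system fair, but it turns an abstract commitment to fairness into a number an enterprise can track and act on.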

Contributing trends

Two trends may exacerbate the urgency and criticality of risk assessment for AI uses and users. First, consumers, citizens and policy makers are increasingly aware of – and concerned about – AI’s growing ubiquity and its potential for misuse or unintended consequences. Second, and as a corollary, transparency, fairness and accountability are getting more attention as competitive differentiators.

Call to action

The first step in addressing these issues is awareness. How does your enterprise employ AI and who is potentially affected? Is outside expertise needed to make these evaluations?

At the same time, it’s important to articulate your enterprise’s core values. Do those values align with your use of AI? If not, how can that alignment be achieved?

Resources are available to assist in this developing area of concern. For instance, I am an executive member of the aforementioned IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, which is working on best practices, resources to raise awareness and guide decision making, and standards for various AI-related applications. (As you may know, IEEE stands for the Institute of Electrical and Electronics Engineers and is the largest technical professional organization dedicated to advancing technology for the benefit of humanity.)

One key resource is the Initiative’s Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems, an iterative document now in its second edition that encourages technologists to prioritize ethical considerations in the creation of autonomous and intelligent technologies.

The Initiative works closely with the IEEE Standards Association, which recently embarked on developing standards for the governance of child and student data, for transparent employer practices and for human-in-the-loop AI agency – standards that aim to ensure that human values guide the development of algorithms that affect us personally.

Ultimately, we want to be able to identify our key values, embed them in the design of AI algorithms, understand associated risks and continue to validate the effectiveness of individual, enterprise and societal AI practices.

This is, to be sure, an emerging topic, and the concerns and goals expressed here remain areas of active research. To be a socially responsible enterprise in the era of AI, however, requires that business leaders become aware of the issues and begin the process of identifying corporate values and embedding them in the ethical design of AI applications.

AJung Moon is an Executive Member, IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems; and Director, Open Robotics Institute