As AI matures and its use becomes widespread, the technology learns what we teach it and does what we instruct it to do, so ethics is a vital consideration in both its creation and its consumption.

As an infant, Superman was sent in a rocket from Krypton and crash-landed on Earth. He was found and adopted by the kind-hearted Kent family, who taught him strong values. As every parent knows, a large part of the person a child grows up to become comes from the values their parents model.

But this was no ordinary child; he had superpowers. The values the Kent family instilled in the boy as he grew allowed him to become a superhero rather than a supervillain.

We’ve now seen the birth of artificial intelligence, another powerful force. It’s in its formative years, and as we bring it into our industries, our workplaces and our homes, we are shaping what it will become. What values are we going to embed in it?

People are trying to work through the implications of how AI is developed in its infancy, of how and what it is fed so that it grows into a responsible adult. I think there are two important lenses we must look through: the first is creation and the second is consumption.

Creating responsible AI

The discussion around creation centres on the inherent biases we might be hard-wiring into AI as we build it. AI algorithms feed on data through a form of supervised learning, and our Chief Scientist Richard Socher recently said of training AI, “If you put rubbish in, you get rubbish back out”.

Fundamentally, AI is a sophisticated pattern-recognition framework built on the data it is fed. But what if it is not fed the right mix of data?

I remember a wonderful TED talk by an MIT researcher who is African American. She was working on facial recognition technology which, despite very easily recognising the faces of her Caucasian colleagues, couldn’t recognise her face as a face. When she began to unpick this, she found the training data contained only Caucasian faces. As a result, it errored out when it saw anything that did not match that set. This is a good example of AI having the potential to amplify our own biases.
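To make that concrete, here is a minimal sketch in Python, using scikit-learn on synthetic data rather than any real facial recognition system, of what happens when one group dominates the training set: the model scores well on the majority group and little better than a coin flip on the under-represented one.

    # A toy illustration of training-data bias, not a real face recognition model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)

    def make_group(centre, n):
        """Generate n synthetic samples for one group, clustered around its centre."""
        X = rng.normal(loc=centre, scale=1.0, size=(n, 2))
        y = (X[:, 0] + X[:, 1] > sum(centre)).astype(int)  # toy labelling rule
        return X, y

    # Group A dominates the training set; group B is barely represented.
    X_a, y_a = make_group(centre=(0.0, 0.0), n=950)
    X_b, y_b = make_group(centre=(4.0, 4.0), n=50)
    model = LogisticRegression().fit(np.vstack([X_a, X_b]), np.concatenate([y_a, y_b]))

    # Test on fresh, equally sized samples from each group.
    for name, centre in [("A", (0.0, 0.0)), ("B", (4.0, 4.0))]:
        X_test, y_test = make_group(centre, 1000)
        print(f"Group {name} accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")

The fix here is not cleverer mathematics; it is a training set that represents everyone the system will be asked to judge.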

Skewed data can cause errors, and in more serious instances it can cause deaths. A report in The Economist about the death of a pedestrian hit by a self-driving Uber vehicle said the car spotted the pedestrian but, because she was pushing a bicycle, didn’t recognise her as a pedestrian. Many other factors were involved in this terrible accident, but once again it demonstrates the importance of fully representative data in the set the AI learns from.

Guidelines for responsible AI consumption

Car manufacturers have certain responsibilities. Engines must not overheat and explode. Tyres must not blow out and cause an accident. The vehicle must be safe for passengers.

At the same time, another set of actors, in this case a state or national government, develops a set of rules and regulations around the use of cars. You must reach a minimum age and earn a licence before you can drive. Your car has to be inspected regularly and must be roadworthy. You have to drive on a particular side of the road and below a certain speed.

Similarly, just as the people building AI capabilities have a role in ensuring we’re not building in intrinsic bias and other factors that could lead to bad outcomes, regulators have a role in creating frameworks to govern its use. It will also become important for individuals within organisations to agree on how AI technologies are deployed and what decisions we’re going to allow them to make.

In America, for example, AI is being used in the criminal justice system, and already some advocacy groups are demanding a rethink. The historical data being fed in about offenders fits a particular profile, so when two suspects look very different from each other, the AI may return a result biased by that history. If that result is applied blindly, without rigorous analysis, it can have very damaging, unintended long-term consequences.

This challenge will likely lead to entirely new roles emerging within organisations. For example, we’ve created the role of Chief Ethics Officer at Salesforce, a role that other organisations, particularly creators of these technologies, will probably also start to hire for. Cyber security has become a frontline issue, and over the last decade we’ve seen the rise of the Chief Security Officer. Similarly, I think we’ll see the rise of a Chief Algorithm Officer, responsible for the deployment, governance and consumption of the algorithms being built.

AI, for good

Why is all of this important? Let’s go back to the Superman story. Imagine if the young Kryptonian refugee had been found not by the Kent family but by Lex Luthor. Imagine the force for evil he would have become.

“To state the obvious, AI in particular is reshaping geopolitics, our economy, our daily lives and the very definition of work,” L. Rafael Reif, President of my alma mater MIT, said when announcing the new Stephen A. Schwarzman College of Computing. “It is rapidly enabling new research in every discipline and new solutions to daunting problems. At the same time, it is creating ethical strains and human consequences our society is not yet equipped to control or withstand.”

The Superman tale is fiction, but the AI story is very real. AI offers superpowers to the world of business. We need to ensure those powers are used for good.

We’ve recently partnered with the Economist Intelligence Unit to find out more about how AI and other Fourth Industrial Revolution technologies are impacting our world. Download the report here: Navigating The Fourth Industrial Revolution: Is All Change Good?