Rana El Kaliouby is a computer scientist, entrepreneur, researcher, and World Economic Forum Young Global Leader. She is the CEO of Affectiva, a company that applies computer vision, machine learning, and data science to understand people’s feelings and behaviors, a field known as “affective computing.”

We spent time with El Kaliouby discussing the future of artificial intelligence (AI), and the challenges and opportunities of leadership in the Fourth Industrial Revolution.

Q. As the CEO of Affectiva, you have experience in leading a business through the turbulence of the Fourth Industrial Revolution. And as an AI trailblazer, you have some unique insights into one of the key drivers of that turbulence — the changing relationship between humans and machines. Is that impacting the pace of change in business?

Definitely. Particularly in the last three years or so, there is a sense of urgency that didn’t exist before.

For instance, at Affectiva, we’re getting pulled into the automotive space. When we were initially approached about a year and a half ago, we assumed that given the sheer scale of their operations, any changes they’d make would take place over a longer time horizon than we were used to operating on as a smaller company.

That was emphatically not the case. Within a span of six months, we were talking to every single car manufacturer and every tier-one supplier. We were talking to disruptors in both the semi-autonomous and the autonomous vehicle space. Everyone is running at full speed right now.

It’s blown my mind how fast the iterations are and how fast this industry is being disrupted. Incumbents seem to sense that if they don't jump on it right now, they're going to be left behind.

It's good for us because it drives demand and accelerates our growth as a company. Yet it means we need to be ready to quickly adapt as well.

As we entered the automotive space, for instance, we needed to collect a huge amount of data for that industry. So we installed cameras in cars in Boston and Tokyo, and very quickly amassed a dataset that allows us to enter the space. That’s one of the key benefits of running a business today: It’s relatively easy to collect the data one needs to make better decisions and build better businesses. I enjoy that, but data collection only goes so far. You also need to have the right culture and team in place, and that team must be willing to think on their feet.

As a Young Global Leader at the World Economic Forum, you’ve been part of the discussions on the impact of the Fourth Industrial Revolution. As we at Salesforce discuss the topic with CEOs, it has become very clear that there is a paradigm shift in the role that companies play in society. Companies are increasingly expected to be responsive to the needs of not only their shareholders but all of their stakeholders. As a CEO, what do you see as the role of the business leader in making that paradigm shift?

Back in September, we hosted the first-ever Emotion AI Summit. We wanted to bring together startups, researchers, clients, and ethicists in AI to talk through the future of the space.

My deep belief is that we’re shaping the future of this industry, and hence the future of human-computer interfaces. That means the technologies we’re building here are going to have a considerable impact on the future of humanity.

And unless we come together as a community of thought leaders and discuss the ethical issues, and the applications of AI that have profound moral implications, then we’re abdicating our responsibility. That dialog is important, because the decisions we make now are likely to have an outsize effect in the years ahead.

When we decided to put together the summit, a few members of the Affectiva board were a little concerned — asking whether our time would be better spent focusing on building partnerships and driving sales.

But I was adamant that my role as a business leader nowadays is not only to drive revenue and growth for the company, but to advance the field at large by bringing people together and facilitating these challenging conversations about AI and its broader implications. So we went ahead with the summit.


We had 330 people attend and 30 amazing speakers lined up, and it wasn’t about Affectiva at all, save a brief product announcement at the outset. And I made that decision because I do believe that I have a responsibility as a business leader and the CEO of an AI company.

It would be unethical, really, to ignore that responsibility.

As you’ve alluded to, AI has the potential to impact many aspects of everyone’s daily life. As the scale of that impact becomes clear, the risks of bias in the system are drawn into sharp focus. Inadvertently biased datasets lead to biased AI — and in a world where computer science graduates are primarily white or Asian males, there is a real risk of institutionalizing bias as we institutionalize AI.

How can we guard against that, and ensure that instead AI is something that can maximize humans' potential and give everyone more opportunities, not fewer?

Traditionally, the people designing AI systems have little training in ethics or bias. When I went through my Ph.D. program at Cambridge University, I took zero ethics classes. Zero.

And that’s likely true for many computer science, engineering, robotics, and AI programs. Ethics is simply not a core part of the curriculum — and it needs to be.

When I think about AI and ethics, I often find myself drawing parallels to the green movement.

When Walmart, many years ago, embraced “going green” as an initiative, they did so not only because it was the right thing to do, but because it was the smart thing to do. It made business sense.

They saved money by optimizing their processes, and they introduced new green product lines that went on to be successful.

The same can be true for the AI space. Companies that embrace and prioritize ethics are doing the right thing for society, sure. But they are also far more likely to build out AI systems that are more accurate, more robust, and more scalable. That is why it’s so important that companies embrace integrity — in their workforce, in their datasets, and in their applications of AI.


That's a fascinating point. Can you go into a little more depth on how focusing on bias and equality in AI makes clear business sense?

Well, bias directly affects accuracy, right? If an algorithm is biased, it won’t be accurate.

At Affectiva, we are training algorithms to recognize human emotion.

If we trained those algorithms solely on data from white, middle-aged men, and then attempted to run them in China, they wouldn’t work. At all.

If you train the algorithm on mostly fair-skinned individuals, and then test it in Africa, it will struggle to identify the same emotional signs.

Biased datasets build inaccuracy into AI from the ground up.

We therefore spend a lot of time thinking about our sampling strategy, and ensuring our training data — which includes over 6 million faces — is as unbiased as possible. We ensure that it equally represents males and females, different ethnicities, and different geographies (87 countries, at the last count).

That ensures that the model that comes out the other end of this deep learning network is more accurate.
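To make that concrete, here is a minimal sketch of one way to balance a face-image training set across demographic strata, downsampling each group to the size of the smallest so that no single group dominates. The column names and the pandas-based approach are illustrative assumptions, not a description of Affectiva’s actual pipeline.

```python
# Hypothetical illustration of demographic balancing for a training set.
# Column names ("gender", "ethnicity", "country") are assumptions, not
# Affectiva's actual schema.
import pandas as pd

def balance_by_demographics(df, columns=("gender", "ethnicity", "country"), seed=42):
    """Downsample every demographic stratum to the size of the smallest one."""
    groups = df.groupby(list(columns))
    min_size = groups.size().min()
    balanced = groups.apply(lambda g: g.sample(n=min_size, random_state=seed))
    return balanced.reset_index(drop=True)

# Usage with a hypothetical metadata table (one row per labeled face image):
# faces = pd.read_csv("face_metadata.csv")
# training_set = balance_by_demographics(faces)
```

Downsampling is only one option; a team might instead collect more data for under-represented groups or reweight samples during training, but the goal is the same: no stratum should dominate what the model learns.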

At the companywide scale, that makes a lot of sense. But if you focus on the individual, the impact of unbiased AI is perhaps even more striking.

We did a hackathon recently, where I met Amy, who is transgender.

She told me that her parents were struggling to accept her as female, and she was finding that very hard. And as we were talking, she realized we have a gender classifier in our facial recognition algorithm. She asked to try it.

I remember saying that of course she could, but warning her that “it's never seen a transgender person in its life.” We had no transgender people in our training data, so I had no idea what it was going to say.

She tried it anyway, and it recognized her as female. She immediately called up her mom and said, “Look, mom — even the computer says I’m female!”

Now, I know it could have gone either way, based on our training data. But Amy had assigned a level of objectivity to this machine, and she thus took its assessment more seriously than she would have done otherwise. And as AI becomes ever more entwined into everyone’s daily life, it is crucial that we as Trailblazers take our responsibility seriously — to build out AI that empowers people and that fosters equality, not inequality.

This article is part of a Trailblazer spotlight series on the Fourth Industrial Revolution — a concept introduced by Klaus Schwab at the World Economic Forum to describe the fundamental shift in the business and social landscape as a result of a fusion of technologies that is blurring the lines between the physical, digital, and biological spheres. For this series, we’ve interviewed a number of chief executives and thought leaders about the impact of the Fourth Industrial Revolution on the business world. Check out the other posts in the series: