In the tech space, we have discussions about artificial intelligence (AI) and emerging technologies every day. But there’s a whole world of people (namely, the 4.1 billion who don’t yet have internet access) who are unaware of technology’s reach and the vast opportunities it creates. As creators, collaborators, users, and proponents of technology, we have a responsibility to proactively address not only the opportunities but also the challenges of AI and emerging tech. As we move into the Fourth Industrial Revolution, there are social and ethical considerations that simply cannot be ignored.

In partnership with the World Economic Forum, we welcomed experts to discuss the promise of AI, including Paul Daugherty, Terah Lyons of the Partnership on AI, and Liesl Yearsley.

The panel brought together insightful perspectives on AI’s implications — from education to ethics. Here’s a look at some highlights from our speakers.


AI Is Spurring New Educational Needs

AI holds tremendous potential in education — not just in school systems but, more specifically, in filling the “missing middle” created by new tech. The missing middle is what Paul Daugherty describes as a lens for looking at where new jobs are appearing and how they’re being created. Yes, certain jobs will be eliminated, but there’s also tremendous opportunity for humans to do new things in innovative, creative ways — work that’s more engaging, fulfilling, and meaningful. Viewed this way, companies can reimagine what their businesses will look like and work backward to determine the right investments in employees, education, and communities to prepare people with the skills they’ll need to work with AI going forward.


“There's tremendous potential we haven't tapped yet in education — to reinvent learning particularly for mid-career people who know they are going to be challenged by some of the impacts of the technology [itself],” Daugherty said. “I think there's huge untapped opportunity to use the technology to help prepare people better for all these changes that are coming.”


AI Requires Diverse Inputs to Ensure Diverse Outputs

But first, we must address some of the significant ethical concerns surrounding AI today. Whose responsibility is it to actually develop AI? Who is in charge of policing it? These are the questions Terah Lyons addresses every day at the Partnership on AI. There’s a significant amount of discussion in the field right now about how to diversify the perspectives of technologists. Increasingly, the development of AI involves more than just computer scientists, statisticians, and mathematicians. Social scientists, philosophers, and ethicists with all sorts of different academic backgrounds and perspectives are being brought in to tackle some of AI’s toughest questions.

“Although companies have started talking more about the accountability they face as institutions, they could still benefit from making sure that they're increasing the number of participants they have in these conversations,” said Lyons. “The advocacy community … talks to itself — much like the nonprofit sector does the same thing and government rarely consults technologists, et cetera. The Partnership is really working to break down those barriers and ensure that people can see themselves as being a part of that conversation even if traditionally they haven't really considered themselves a stakeholder in that discussion.”

Richard Socher, Chief Scientist at Salesforce, also touched on diversity and AI during the Dreamforce Fortune CEO Series. AI algorithms are only as good as the training data you give them, Socher attests. If your training data carries biases against gender, age, or sexual orientation, the AI will pick them up. For instance, if a bank is processing loan applications for startups and only 5% of past approved loans have gone to female founders, the system will assume being a female founder is a negative quality and decline those loans going forward. Because humans have biases and humans are the ones who create datasets, using such datasets in AI may perpetuate those biases or make them worse.
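To see how this happens mechanically, here is a minimal sketch (entirely hypothetical: the feature names, numbers, and synthetic data are invented for illustration, not drawn from any real lending system) of a model absorbing bias from its training history:

```python
# Hypothetical illustration of training-data bias; all data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# One legitimate signal (e.g., a revenue score) plus founder gender.
revenue_score = rng.normal(0.0, 1.0, n)
female_founder = rng.integers(0, 2, n)  # 1 = female founder

# Simulate biased historical decisions: approvals were far less likely
# for female founders, independent of the legitimate signal.
logits = 1.5 * revenue_score - 2.0 * female_founder
approved = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

# Train a new model on the biased history.
X = np.column_stack([revenue_score, female_founder])
model = LogisticRegression().fit(X, approved)

# The learned weight on female_founder comes out strongly negative:
# the model has absorbed the historical bias and will keep applying it.
print(dict(zip(["revenue_score", "female_founder"], model.coef_[0])))
```

Note that simply dropping the gender column rarely fixes this, since other features can act as proxies for it — which is part of why the panelists call for diverse perspectives during development.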

The hope is that as more diverse roles and industries join the conversation, engineers can begin baking solutions into the technology itself, allowing governments to make better-informed policies.


Developing a Framework for Responsible AI in the Private Sector

In the beginning, AI functioned on complex algorithms and rule-based technology — systems that essentially replaced simple muscle. Many of the AI systems we use now, like Alexa or Siri, still rely on those concepts. But much of the emerging tech we see today replaces cognitive ability, giving machines life-like qualities and the power to make decisions. Still, that decision-making power must come from somewhere. On this subject, Liesl Yearsley raised some pointed questions.


“What are the values that these [AI] systems will hold? How will they make decisions? If their decision-making is better than ours, where does that come from?” Yearsley said.


“Do we want to give them human values, the same values which gave us slavery and sexism …? This is a more influential technology than anything we have seen. Our responsibility in tech should not be to get people addicted to technology. Our driver should be if the human life is improved.”

Socher echoes some of these concerns. “As we’re touching more and more people’s lives with these AI algorithms, we do need to very carefully think about how they’re impacting people’s lives. They can change elections for the worse, and they can spread misinformation. We need to be very careful whenever these algorithms touch human lives and have some empathy with how those are going to be deployed and carefully think about what training data we give it.”

Popular sentiment on the panel suggests that the weight of this responsibility — ensuring AI is ethically and responsibly developed — will fall more on the private sector than on the public sector.

It’s Daugherty’s stance that government and other regulators aren’t going to solve these problems for us, chiefly because technology is moving too fast. In his mind, business leaders must take accountability more seriously than they have in the past to make sure the outcomes improve humanity.

“It's fine to talk about machines making decisions, but at the end of the day, in any organization or any household, a person is responsible for making the [final] decision, not a software programmer,” Daugherty said. “And understanding that the human accountability for every decision that's made is one key element of leadership.”


The Future of the Workplace Is Humans and AI — Not One or the Other

Most people tend to focus on what machines can do and automate — in other words, on replacing humans with machines. But in the Fourth Industrial Revolution, Daugherty believes the vast majority of jobs going forward will be a fusion of people and machines doing things more effectively — in essence, giving people “superpowers” to achieve more. In fact, a recent report by Accenture predicts businesses will see a 38% increase in profits over the next 10–15 years based on AI improving workplace efficiency.

This might include “personality trainers” for chatbots and virtual agents, or stylists at Stitch Fix who work with the platform to provide personalized fashion advice — jobs that weren’t around just a decade ago.

Where we make a mistake, according to Yearsley, is in thinking that humans are, and always will be, more comfortable interacting with other humans in a service capacity. In a study comparing human chat and AI chat, she and her team discovered that people were more comfortable talking to AI than to other humans. Why?


“Today, we’re forcing people into hundreds and hundreds of fake interactions with other humans,” Yearsley said. “At some subliminal level, we know that these are fake. But with AI, people are perfectly willing to accept that this thing is just there for them and they have formed this deep bond. No strings attached. No expectation of a relationship.”


If there’s one thing our panelists can all agree on, it’s AI’s impact on the immediate future. While many businesses still maintain a sense that AI is far off in the distance, our panelists assert that the impact is already happening here and now. Companies need to start thinking about how to prepare workers, consumers, and communities for AI in the workplace.

To our panelists, there’s never been a technology that’s moved as fast to impact business and society as AI. That’s the Fourth Industrial Revolution — an incredible wave of innovation and technology that is radically transforming our economies, societies, and daily lives.

This article is part of a Trailblazer spotlight series on the Fourth Industrial Revolution — a concept introduced by Klaus Schwab at the World Economic Forum to describe the fundamental shift in the business and social landscape as a result of a fusion of technologies that is blurring the lines between the physical, digital, and biological spheres. For this series, we’ve interviewed a number of chief executives and thought leaders about the impact of the Fourth Industrial Revolution on the business world. Check out the other posts in the series: