On March 24th, Treasury Secretary Steven Mnuchin said he wasn’t worried about the business and social impact of AI for at least fifty to a hundred years.
Regrettably, that’s a pretty common view in boardrooms today. At a recent Salesforce panel on the impact of AI on business, Zvika Krieger, Head of Technology Policy and Partnerships at the World Economic Forum, pointed out “That's a sentiment that actually I heard quite frequently from engaging with leaders from around the world...They see [AI and other new technologies] as trends that are so far ahead in the future that they can't compete with the pressing policy issues that they have sitting in front of them on their desk every day.”
Krieger is one of a large group that sees things differently. He points out that AI technology is already fundamentally changing the way we work, relate to others, engage with our environment and think about what it means to be human.
The increasing digitization of our lives means there is far more data than ever before - a crucial ingredient for AI. At the same time, Moore’s Law - Gordon Moore’s observation that the processing power of computers roughly doubles every two years - means that today’s iPhone is millions of times faster than the computers that put man on the moon.
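A rough back-of-the-envelope sketch shows how that doubling compounds into a "millions of times faster" figure. (The start year, end year, and strict two-year doubling below are simplifying assumptions for illustration, not figures from the panel.)

```python
# Back-of-envelope: compounding Moore's Law doublings.
# Assumes one doubling every two years from 1969 (Apollo 11) to 2019.
start_year = 1969
end_year = 2019

doublings = (end_year - start_year) // 2   # 25 doublings
speedup = 2 ** doublings                   # ~33.5 million

print(f"{doublings} doublings -> ~{speedup:,}x faster")
```

Even under these crude assumptions, 25 doublings alone yield a factor of roughly 33 million, which is where "millions of times faster" claims come from.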
That means the processing power needed to build and run the complex algorithms at the root of Artificial Intelligence is readily available, and that those algorithms have the raw materials they need to get more and more effective.
The combination of data and processing power has enabled the natural language processing that powers Alexa, Amazon’s popular voice assistant, and the computer vision behind Tesla’s Autopilot.
But as Michael Chui, partner at the McKinsey Global Institute and a leading AI thinker, pointed out on the panel, it’s perhaps easy for us to miss the deep impact of AI on our lives. Artificial Intelligence done well tends to fit seamlessly into our world and thus be pretty easy to miss.
“We call it AI until it works, and then we call it a dishwasher”
AI is already, as Krieger puts it, “everywhere”. Yet we’re only at the very outset of the journey to a fully AI-driven future. John Fremont, GTM Lead for AI at Accenture and another panelist, suggested that the “really dramatic shift” is still five to ten years away.
It’s a shift that has the potential to revolutionize the way we live and work - driving new advances in healthcare, improving how we live our daily lives, and helping lift countries and populations in the developing world out of poverty.
At the same time, there are increasing levels of concern about AI’s potential to destroy jobs, and a worry that new technology may indeed lead to a better future for the world - but one experienced only by the 0.1% who control it.
Yet even with divergent perspectives on the future, our panelists were united around the belief that the private sector’s role will be critical. As Marc Benioff, CEO of Salesforce and WEF patron wrote in Fortune,
“As business leaders, we have an obligation to ensure that the changes wrought by technology transcend our companies and benefit all of humanity.”
Panel participants felt that a core part of that obligation is to think seriously about education. As Sarah Aerni, Senior Data Science Manager at Salesforce pointed out, “It’s so important to educate the existing workforce - to empower them on how AI can serve them in their roles”.
John Fremont built on Aerni’s point, highlighting the importance of reshaping the education we provide to tomorrow’s workforce. “How can we educate them in a better way? The kids of today will come into a very different workplace than the one they’re being trained for right now.”
Michael Chui agreed. He told the audience that in 1900, about 40% of the US workforce was involved in agriculture. Yet less than 70 years later, industrialization meant that less than 2% were employed in that industry. That shift didn’t lead to mass unemployment because the leaders of society made a series of conscious decisions and took action to counteract it. The High School Movement was one such action - a movement to educate people so they could move from jobs in agriculture to jobs in the factory.
“This [avoidance of unemployment] didn’t just ‘not happen’. There were a set of conscious decisions. And not just by public policy institutions and governments - business leaders who ran those new factories were also involved in educating the people who then worked in those factories. There is a role for private sector to play.”
So how should we work to prepare the workforce of tomorrow for a world in which Artificial Intelligence touches every part of their lives? As we undergo a transition as impactful as the move from the farm to the factory, how should education change?
For Chui, changes in curricula are key. Schools aren’t focused on building the skills core to AI yet. Children are taught calculus, not statistics.
“When is the last time anyone had to take an integral? Anyone? But when is the last time you had to make a decision based on dirty incomplete data? Breakfast, maybe, right?”
For a future dominated by AI, new and different skills must be developed by the next generation of workers. Bringing public and private sector expertise together to drive a new ‘High School Movement’ and modernize curricula around the country is a good start.
There’s also an imperative to improve diversity in AI-related studies.
Many assume that when decisions are handed over to a computer, we’ll remove human bias from the equation. Regrettably, that’s not the case. AI is simply a collection of algorithms and decision-making protocols designed by (human) computer scientists. Those algorithms are only as unbiased as the computer scientists who made them.
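To make the point concrete, a toy sketch (all data below is invented for illustration) shows how a model trained on the record of a historically biased process simply reproduces that bias:

```python
from collections import defaultdict

# Invented toy data: (group, past_hiring_decision) pairs from a
# hypothetical, historically biased process. Group "a" was hired
# far more often than group "b" with no difference in merit.
history = ([("a", 1)] * 80 + [("a", 0)] * 20 +
           [("b", 1)] * 20 + [("b", 0)] * 80)

# "Training": learn each group's historical hire rate.
counts = defaultdict(lambda: [0, 0])   # group -> [hires, total]
for group, hired in history:
    counts[group][0] += hired
    counts[group][1] += 1

def predict(group):
    # Recommend "hire" when the historical rate exceeds 50%.
    hires, total = counts[group]
    return 1 if hires / total > 0.5 else 0

# Two otherwise identical candidates get opposite recommendations,
# purely because the training data encoded past bias.
print(predict("a"), predict("b"))  # -> 1 0
```

Nothing in the code is malicious; the bias lives entirely in the data and in the choice to train on it uncritically, which is exactly why who builds and audits these systems matters.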
There are plenty of examples of this phenomenon already out there. In 2015, Google’s photo app mistakenly labeled a black couple as gorillas. A May 2016 ProPublica report found that software used to forecast recidivism in criminals was twice as likely to flag black defendants as being at higher risk of committing future crimes. Carnegie Mellon researchers found that women were less likely than men to be shown Google advertisements for higher-paying jobs.
In a world where AI is being used to help authorities and corporations make an increasing number of decisions, it’s critical that we don’t inadvertently ingrain bias into the software, reinforcing stereotypes and exacerbating inequality.
That’s why it’s so important to ensure diversity in the cohort of computer scientists designing AI. Yet the number of women working in computer science has dropped from about 35% in 1990 to 24% today (according to Girls Who Code). An enormous 84% of working professionals in science and engineering jobs in the U.S. are white or Asian males.
As Chui told the audience, “We can't lose 50% or 45% [of possible computer scientists]. I mean we can't have the percentage of female graduates in computer science continue to decrease. That's insane”.
Sarah Aerni agreed that it’s critical “to shift education to include other perspectives, and drive diversity of thought.” It is already well-established that diversity of thought, of background, of gender, of race and of sexual preference leads to better decision making.
It’s no different in computer science. Bringing diversity to AI means better, more considered responses to a set of questions fundamental to all our futures, and a stronger version of a technology that will have such an impact on our lives moving forward.
According to our panelists, a critical component of bringing that diversity to bear is thinking hard about the future of education.