As organizations start to implement artificial intelligence (AI) and realize its true power, many sectors of society are asking profound social and ethical questions about how it can and should be used responsibly.
Sharon Bowen, co-founder of Seneca Women and a former Commissioner of the U.S. Commodity Futures Trading Commission, put it plainly during a recent panel discussion at Dreamforce ‘18. She said that data and the technology it drives, including AI, have “enabled organizations to become more powerful and it heavily influences decision-making — but while it can help us protect groups and communities that are at risk, it can also be used to harm them.”
One way to help avoid negative effects from AI is to treat concerns about ethical and humane use as human problems rather than faults of the technology. This echoes the sentiment of Salesforce’s Ethical AI Practice Architect Kathy Baxter that technology “should be in service of the human, not the other way around."
Taking the human-problem approach may help to tackle one of the biggest AI concerns: managing bias. Humans carry opinions and agendas, and it is unrealistic to expect AI to magically erase the biases that exist in datasets. Humans need to identify and manage bias, including training AI systems to detect it, to prevent AI models from reinforcing stereotypes and letting bias influence recommendations and predictions.
Salesforce’s Chief Scientist Richard Socher gave a good example: “A bank wants to predict whether it should give somebody a loan. If, in the past, the bank hasn’t given as many loans to women or people from certain minorities, let’s say, and it has those kinds of features in its dataset, then the algorithm might say, ‘No, it’s not a good idea.’ So, AI is picking up the bias and either amplifying it or at least keeping it going.”
Baxter noted that the representativeness of the information on the internet is also a major concern. She referred to a news story about a teenager doing a school report and searching for an image of three black teenagers. “He got tons and tons of mugshots,” she said. “Then, he did a search for three white teenagers and got lots and lots of very highly polished photos of teens wearing sporting gear — lovely, positive pictures.”
Her concern is that simply using information from the internet to train AI programs will “magnify the stereotypes, the biases, and the false information that already exists.” It could, she said, “perpetuate biased and possibly incorrect information, and that’s not what we want.”
Socher said it’s important to understand that AI algorithms are only as good as their training data, which can easily be influenced by human biases. Training data is what teaches a machine learning algorithm: a model learns to spot a car in an image, for example, by looking at thousands of labeled images of cars.
“A biased algorithm isn’t there because some evil programmer said, ‘Oh, I don’t like women so I’m going to program something to prevent this algorithm from giving loans to women,’” he said. “Instead, the programmer will have an abstract algorithm and then get a training dataset based on past behavior, and that dataset will include certain biases.”
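To make that concrete, here is a minimal, hypothetical sketch using synthetic data and scikit-learn (the column names, numbers, and thresholds are invented for illustration): a model trained on historically skewed loan decisions simply reproduces the skew.

```python
# Hypothetical illustration: a classifier trained on historically skewed
# loan decisions learns to reproduce that skew. All data is synthetic.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

income = rng.normal(50, 15, n)            # the only legitimate signal
gender = rng.integers(0, 2, n)            # 0 = group A, 1 = group B (illustrative)

# Historical approvals depended on income AND, unfairly, on group membership:
# group B applicants were sometimes rejected even when income was sufficient.
approved = ((income > 45) & ~((gender == 1) & (rng.random(n) < 0.4))).astype(int)

history = pd.DataFrame({"income": income, "gender": gender, "approved": approved})

# Train on the biased history, with the gender column included as a feature.
model = LogisticRegression().fit(history[["income", "gender"]], history["approved"])

# Two applicants identical in every respect except the gender field.
applicants = pd.DataFrame({"income": [50.0, 50.0], "gender": [0, 1]})
print(model.predict_proba(applicants)[:, 1])
# The group B applicant gets a noticeably lower approval score: the model has
# picked up the historical bias and carried it forward.
```

Nothing in the algorithm itself is malicious; the skew lives entirely in the labels it was trained on.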
Treating the problem as a human one rather than a technological one can help to reduce bias.
“We don’t live in a fair or unbiased world,” said Socher, “so we need to manually take diversity into account as we build these datasets, and be careful not to use the technology to replace human understanding.”
AI users can also help reduce bias by identifying certain protected classes, such as race and gender, and excluding them where appropriate. For example, an organization building a system to make bank loan predictions ideally shouldn’t include a gender column, which could let the model repeat historically biased decisions.
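As a rough sketch of what that can look like in practice (again with invented column names and toy data), the protected attribute can be dropped from the feature set before training:

```python
# Hypothetical example: exclude protected columns so the model cannot
# condition on them directly. Column names and values are invented.
import pandas as pd
from sklearn.linear_model import LogisticRegression

data = pd.DataFrame({
    "income":   [30, 55, 62, 48, 70, 41],
    "tenure":   [ 2,  8,  5,  3, 10,  1],
    "gender":   [ 0,  1,  0,  1,  0,  1],   # protected attribute
    "approved": [ 0,  1,  1,  0,  1,  1],
})

PROTECTED = ["gender"]                      # illustrative list of protected classes
features = [c for c in data.columns if c not in PROTECTED + ["approved"]]

model = LogisticRegression().fit(data[features], data["approved"])
print(features)  # ['income', 'tenure'] -- the model never sees gender
```

Dropping the column is only a starting point: other features can still act as proxies for a protected attribute, which is why the datasets themselves also need scrutiny.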
With such guidance, organizations can improve their training data or work explicitly to mitigate negative effects. This also helps make AI systems more transparent and less inscrutable, so people can understand what’s happening: why they were rejected for a loan, for example.
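Transparency can take many forms; one minimal, hypothetical sketch is to inspect a simple model’s coefficients to see which inputs pushed a particular application toward rejection (dedicated explanation tools go further, but the idea is the same):

```python
# Hypothetical sketch: explain a single rejection by showing how far each
# feature sits from the training average, weighted by the model's coefficients.
import pandas as pd
from sklearn.linear_model import LogisticRegression

data = pd.DataFrame({
    "income":     [30, 55, 62, 48, 70, 41, 35, 58],
    "debt_ratio": [0.6, 0.2, 0.3, 0.5, 0.1, 0.7, 0.8, 0.25],
    "approved":   [0, 1, 1, 0, 1, 0, 0, 1],
})
X, y = data[["income", "debt_ratio"]], data["approved"]
model = LogisticRegression().fit(X, y)

applicant = pd.DataFrame({"income": [32], "debt_ratio": [0.75]})
print("approval probability:", model.predict_proba(applicant)[0, 1])

# Contribution of each feature relative to the average applicant.
contributions = model.coef_[0] * (applicant.iloc[0] - X.mean())
for name, value in contributions.items():
    print(f"{name}: {value:+.3f}")
# Below-average income and above-average debt both push this score down,
# which is the kind of plain-language answer a rejected applicant can be given.
```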
But, as Baxter said, this process — and others involving the responsible use of AI — should not just be seen as yet another task. Rather, it’s a way to fundamentally change people’s thinking and behavior. “Ethics is a mindset, not a checklist, and it needs to be instilled in people early on.”
Efforts to tackle bias are moving in tandem with a broader global debate about AI ethics, led by groups such as the Partnership on AI. The consortium brings together technology and education leaders from varied fields, including Salesforce, who have joined forces to focus on the responsible use and development of AI technologies. They are doing this by advancing the understanding of AI, establishing best practices, and harnessing AI to contribute to solutions for some of humanity’s most challenging problems.
Socher believes this wider debate may help us “focus more on what makes us uniquely human — that is, having empathy with each other.”
“You can have an algorithm automate a lot of things, but if you want another person to be empathetic to your situation, you cannot program that,” he said.
On this point, the technologists seem to be in harmony with at least some of their religious counterparts. Father Eric Salobir, a Roman Catholic priest and founder of the OPTIC network, which promotes research and innovation in digital humanities, told the Dreamforce ‘18 audience: “One thing that makes us human is facing our responsibilities and making our own decisions. So, we need to be sure that the way we use technology, especially AI, which is super powerful, is in a way that will really empower us instead of dehumanizing us.”
To hear more, watch the full “Ethical Responsibility in the Fourth Revolution” session from Dreamforce ‘18 on Salesforce LIVE.