We are facing an important confluence of challenges and opportunities right now. In the last several months, people’s behaviors and needs have radically changed, and some artificial intelligence (AI) systems are struggling to adapt. The Black Lives Matter movement heightened the public’s awareness of historical and inherent bias in AI predictive policing and facial recognition technologies. While this understanding is in no way new to the research and ethics communities, the number of news stories in the popular press has skyrocketed, calling broader attention to the conversation. 

Getting AI right is more difficult than ever, but it is also more important than ever. Little will change until companies commit to creating and implementing AI responsibly and ethically.

 

Bias is error

For the success of businesses and the good of society, we need AI to be accurate, and that means eliminating as much bias as possible.

Let’s take the example of a financial institution attempting to measure a customer’s “ability to repay” before approving a loan. If this institution’s AI system predicts someone’s ability to repay based on sensitive or protected variables like race or gender (which is illegal, by the way) or on proxy variables (like zip code, which may correlate with race), it could approve loans for people who can’t repay and deny loans for people who can. If the system weighs race more heavily than income, it may exclude a high-income Black family from receiving a loan while offering one to a low-income white family. Because the AI is making decisions based on the wrong factors, it perpetuates bias in the model, putting the financial institution at risk of losing both money and customers, and potentially forcing a customer segment to depend on predatory lenders.

Including race, gender, and proxy variables in the model, while choosing not to base decisions on those variables, would dramatically improve accuracy and grow the institution’s customer base (a minimal sketch of this pattern follows). For instance, if the institution sees some communities being denied loans, it might consider offering an alternative product, like microloans, to better meet their needs. This approach creates a virtuous cycle that helps customers improve their financial footing and eventually become eligible for the bank’s traditional loan products.
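Here is one way that “collect but don’t decide on” pattern might look in code. This is a minimal Python sketch, not a production lending system; the dataset, column names, and model choice are all hypothetical.

```python
# Minimal sketch: sensitive attributes are collected so outcomes can be
# audited, but they are never passed to the model as inputs.
# The dataset and all column names are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

applications = pd.read_csv("applications.csv")  # hypothetical historical data

SENSITIVE = ["race", "gender"]                  # kept only for auditing
FEATURES = ["income", "debt_to_income", "credit_history_months"]

train, test = train_test_split(applications, test_size=0.2, random_state=0)

model = GradientBoostingClassifier()
model.fit(train[FEATURES], train["repaid"])     # sensitive columns excluded

# Audit: compare approval rates across groups the model never saw.
audited = test.assign(approved=model.predict(test[FEATURES]))
print(audited.groupby("race")["approved"].mean())
```

The key design choice is that the sensitive columns exist in the dataset solely to make the audit at the end possible. Dropping them entirely would make disparities invisible rather than absent.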

Organizations have a responsibility to ensure fair and accurate AI solutions — it’s an ongoing effort that requires awareness and commitment. While there’s no “one-size-fits-all” fix, here are four strategies to get you started:

 

1. Identify underlying bias in your systems and processes

Recent conversations around AI and bias are challenging the notion of “unbiased data.” Since all data carries bias, you need to take a step back and assess the systems and processes that have historically perpetuated bias.

Examine the decisions your systems are making based on sensitive variables (a simple audit along these lines is sketched after the questions below):

  • Are certain groups, such as those defined by race or gender, being inordinately impacted?

  • Are there variances by region or even by decision-maker?

  • Are those differences representative of the broader population in those regions, or does it seem some groups are impacted disproportionately? 
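One lightweight way to start answering these questions is to audit historical decisions for disparate impact, for example with the “four-fifths” screen used in U.S. employment contexts. The sketch below is illustrative only; the decision log and its columns are hypothetical, and the 80% threshold is a screening heuristic, not a legal test.

```python
# Hypothetical audit: compute approval rates by group within each region
# and flag any group whose rate falls below 80% of the most-favored
# group's rate (the "four-fifths" screening rule).
import pandas as pd

decisions = pd.read_csv("historical_decisions.csv")  # hypothetical decision log

for region, subset in decisions.groupby("region"):
    rates = subset.groupby("race")["approved"].mean()
    ratios = rates / rates.max()
    flagged = ratios[ratios < 0.8]
    if not flagged.empty:
        print(f"{region}: possible disparate impact for {list(flagged.index)}")
```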

Once you find bias, you need to eliminate it from the process before using that data to train an AI system. How? Focus on three core areas: employee education, product development, and customer empowerment.

At Salesforce, we introduce all employees to our Ethical and Humane Use Principles at our new hire boot camp. From day one, new employees learn they are as responsible for ethical decision-making as they are for good security practices. We also work directly with product teams to develop features that empower our customers to use AI responsibly, and we review new features for ethical risks.

 

2. Interrogate the assumptions you make about your data

To uncover and remediate bias and achieve high-quality, representative data, you need a deep understanding of everyone who will be impacted by the technology — considering not just the features the buyer may need but also the downstream effects on the people the system makes decisions about.

Get curious about your data and ask questions like the following (a basic representation check is sketched after the list):

  • What assumptions are we making about the people being impacted and their values?

  • Who are we collecting data from? Who is not represented, and why?

  • Who is at the greatest risk of harm? What are the effects of that harm, and how do we mitigate it?

  • Are we collecting enough data to make accurate decisions about everyone impacted by the AI?
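Questions about who is and isn’t represented can be made measurable. The sketch below compares each group’s share of a training set against an external benchmark such as census data; the dataset, column name, benchmark figures, and flagging threshold are all hypothetical.

```python
# Hypothetical representation check: compare each group's share of the
# training data with an assumed population benchmark and flag large gaps.
import pandas as pd

data = pd.read_csv("training_data.csv")  # hypothetical training set
benchmark = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}  # assumed shares

shares = data["demographic_group"].value_counts(normalize=True)
for group, expected in benchmark.items():
    observed = shares.get(group, 0.0)
    if observed < 0.5 * expected:  # illustrative threshold, not a standard
        print(f"{group}: {observed:.1%} of the data vs. {expected:.1%} of the population")
```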

To determine whether you are making decisions based on unfair criteria like race, gender, geography, or income, you need fairness through awareness. That means collecting sensitive variables so you can see correlations in the data, but not making decisions based on those sensitive variables or their proxies.

As the loan example showed, proxy variables will influence your model, so you must address those too. At Salesforce, for example, we learned early on that the name “John” was the number one predictor of a good lead in our Sales Lead Scoring model. “First Name” is a proxy for gender and can be a proxy for race and country of origin, so the team removed “name” as one of the variables in the model. One way to surface such proxies is sketched below.
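One hypothetical way to hunt for proxies is to measure how much information each candidate feature carries about a sensitive attribute, for instance with normalized mutual information. The dataset and column names below are invented, and the approach assumes categorical features; a high score suggests a feature may be acting as a proxy and deserves scrutiny.

```python
# Hypothetical proxy screen: score each categorical feature by how much
# information it shares with a sensitive attribute. High scores suggest
# the feature may act as a proxy (as "first_name" did for gender).
import pandas as pd
from sklearn.metrics import normalized_mutual_info_score

leads = pd.read_csv("leads.csv")  # hypothetical lead-scoring dataset
CANDIDATES = ["first_name", "industry", "job_title", "zip_code"]

for col in CANDIDATES:
    score = normalized_mutual_info_score(leads[col], leads["gender"])
    print(f"{col}: normalized mutual information with gender = {score:.2f}")
```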

 

3. Engage with people who may be affected by the technology

Because none of us can fully understand each other’s reality, AI creators need to solicit direct feedback from their customers to understand how the technology they create will impact any given group of people.

At Salesforce, for example, our Research and Insights teams run User Summits that bring employees and consumers together to deeply understand the context, needs, and concerns that both groups have regarding things like data collection and personalization. This helps us develop features and guidelines for our products that all stakeholders can trust.

 

4. Don’t revert to the “move fast and break things” mindset

The concept of fairness is complex and requires a great deal of work. Especially when AI impacts human rights — justice, privacy, health, access to food, or housing — you have a heightened responsibility to make it accurate and fair. 

While the industry has collectively started to step back from the “move fast and break things” mentality in recent years, COVID-19 has the potential to effect a shift backward. Organizations are trying to move quickly to adapt, but moments of crisis don’t justify moving fast at all costs.

The power, and the risk, of AI is that it can make many predictions about people and behavior at scale, and very quickly. Society and the tech industry have learned the hard way that failing to consider potential negative impacts can cause a great deal of harm, and ultimately slow you down as you repair it. Building AI responsibly from the beginning allows you to focus on improving the world rather than constantly fixing what you've broken.

Now more than ever, allow time for mindful and productive friction to ensure accuracy and fairness. This will result in a more complete, inclusive, and safe product offering. And you’ll avoid creating ethical debt that you will have to revisit at a greater cost and brand risk down the road.

 

Next steps: stay actively engaged to ensure AI is a power for good

AI can perpetuate and magnify bias that has existed in our societies for centuries, but it also has the potential to empower communities and strengthen democracy. If we want to ensure AI is a power for good in the world, it is everyone’s responsibility to learn how AI works and how it is used, and to engage with employers, companies, schools, and governments to demand AI that is fair and representative of all, not just a few.

Organizations that are values-driven and build a culture of ethics will be better positioned to develop and use AI safely, accurately, and ethically.

Learn more about how Salesforce infuses ethics into its AI, and check out the “Responsible Creation of AI” training on our free online learning platform, Trailhead.