Timnit Gebru is a research scientist on the Ethical AI team at Google AI and has also studied the ethics of algorithms at Microsoft and elsewhere. After noticing that she was one of only a handful of Black people at a major artificial intelligence (AI) conference, she co-founded the Black in AI community in 2017. In her role at Google, she examines the ethical development and use of AI and the impact of unconscious bias on its development.

In a recent interview with Salesforce, she shared some insights into how companies can counter unconscious bias when developing AI algorithms and AI-driven data models.


1. Keep in mind that machines aren’t immune to human bias


Businesses leverage AI technologies for all kinds of applications. However, machines can still be limited by human misconceptions. AI isn’t yet able to reason like a human would — it can only identify patterns in different datasets.

That’s where unconscious bias tends to seep in. “The problem is that the process of ‘training’ data isn’t neutral,” Timnit told us. “It can easily reflect the biases of the people who put it together. That means it can encode trends and patterns that reflect and perpetuate prejudice and harmful stereotypes.”
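As a toy illustration of this point (not taken from the interview), consider what happens when a model optimizes for overall accuracy on a dataset where one group is heavily underrepresented. The sketch below uses a deliberately naive "predict the most common label" model and made-up group labels "A" and "B" to show how aggregate metrics can hide a complete failure on the minority group:

```python
from collections import Counter

# Hypothetical toy training set: group "B" makes up only 2% of the labels.
train_labels = ["A"] * 490 + ["B"] * 10

# A naive model that always predicts the most common label it was trained on.
majority = Counter(train_labels).most_common(1)[0][0]

# Overall accuracy looks excellent (98%)...
overall_acc = sum(1 for y in train_labels if y == majority) / len(train_labels)

# ...but accuracy on the underrepresented group "B" is zero.
group_b_total = train_labels.count("B")
group_b_acc = sum(1 for y in train_labels if y == "B" and majority == "B") / group_b_total

print(majority)     # A
print(overall_acc)  # 0.98
print(group_b_acc)  # 0.0
```

Real AI systems are far more sophisticated than this, but the underlying dynamic is the same: a model rewarded only for aggregate performance will happily absorb whatever imbalances its training data contains.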


2. Build diversity into your datasets


Unconscious bias can plague even the most well-intentioned AI. Timnit noted some recent examples, including voice recognition software that struggled to understand women, a crime prediction algorithm that targeted Black neighborhoods, and an online ad platform that was more likely to show highly paid executive jobs to men. These are all examples of what happens when AI is applied to the incredibly diverse world we live in without an understanding of diversity built into the models themselves. As the use of AI continues to increase, so too does the need for programming parameters that take diversity into account.


3. Create an inclusive tech team to create inclusive AI


Businesses have pushed for diversity in hiring and increased inclusivity in the office for some time now. But diversity is much more than just a moral initiative. When the teams creating AI models, programming computers, and developing datasets are diverse, the algorithms they produce are more likely to be fit for a more diverse world.

Timnit said: “It’s crucial to make diversity a priority whenever and wherever you’re embedding AI. A lack of diversity in AI affects what kinds of research we think are important, and the direction we think AI should go. When problems don’t affect us, we don’t think they’re important…but it’s when we work for inclusion that the exponential benefits of AI can positively affect all of us.”


For more, read the complete interview with Timnit Gebru on the Salesforce Newsroom.