Left unchecked, modern computing technology could hijack not only our minds but society itself.

That’s the view of Tristan Harris, a former design ethicist at Google. Harris now runs the Center for Humane Technology, which aims to advance solutions for realigning technology with humanity’s best interests. Speaking at Salesforce’s Dreamforce ‘18, he argued that the business model of many technology companies is built on grabbing people’s attention, at a scale never before seen in human history. In his view, this means algorithms increasingly determine what billions of people think and do each day.

“Technology is not neutral, and it becomes a race to the bottom of the brain stem to see which company will go lower to get people’s attention,” Harris said.


The race for users’ attention isn’t going away

Harris’ view of technology as a persuasive force, capable of exploiting our minds’ weaknesses, stems from his childhood fascination with performing magic tricks. He realized that magicians look for blind spots and vulnerabilities in people’s perception. That’s how they influence what an audience does without the audience realizing it.

“Magic teaches you to see human minds in a different way,” he said. “Instead of seeing people’s choices as authoritative, you start to see that their attention can be manipulated, their choices can be manipulated, their notion of cause and effect can be manipulated.”

Harris explained how many technology product designers rely on similar persuasive techniques in the race to grab people’s attention and keep users on their platforms. In his view, companies like Facebook, Twitter, Instagram, and YouTube have produced extraordinary products that benefit the world. But under pressure to outperform their competitors, he feels such companies now ceaselessly push AI-driven news feeds, content, and notifications our way, exploiting our brains’ instinctive drive to seek novelty and reward.

Harris’ views chime with those of a growing number of leading thinkers in the technology sector, including Salesforce Co-CEO and Chairman Marc Benioff. In Benioff’s view, social media needs to be regulated like any other industry because of the addictive power of the technology that propels it.

The downward spiral of recommendation algorithms

Harris pointed to Snapchat’s “streaks” feature, which encourages people to become daily users of the platform. In his view, the race to keep children’s attention through such mechanisms can train them to replace their self-worth with “likes,” redefining how they measure friendship and creating the constant illusion of missing out.

He also discussed how YouTube drives considerable traffic via its auto-play feature, which automatically begins a new video after the current one ends. Auto-play works by removing “stopping cues,” the natural pauses that would otherwise prompt a user to decide what to watch next. Everyone is receptive to the persuasive effects of such techniques, but in Harris’ view some demographics, like teenagers, are more vulnerable to them than others.
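
To make the idea of a “stopping cue” concrete, here is a purely illustrative toy simulation (the probabilities and names are invented assumptions, not measurements of any real product). With a stopping cue, continuing requires an active choice; with auto-play, stopping does, and default inertia carries the session on.

```python
import random

def session_length(autoplay: bool, p_active_choice: float = 0.3, max_videos: int = 50) -> int:
    """Videos watched in one session under a toy model of default bias.

    p_active_choice: the chance the user takes a deliberate action at each
    video boundary. With a stopping cue, *continuing* requires that action;
    with auto-play, *stopping* does.
    """
    watched = 1  # the video the user deliberately chose to start
    while watched < max_videos:
        acted = random.random() < p_active_choice
        if autoplay and acted:          # user must actively stop
            break
        if not autoplay and not acted:  # user must actively continue
            break
        watched += 1
    return watched

random.seed(0)
for mode in (False, True):
    avg = sum(session_length(autoplay=mode) for _ in range(10_000)) / 10_000
    print(f"autoplay={mode}: ~{avg:.1f} videos per session")
```

Under these toy numbers, auto-play sessions run several times longer, even though the user’s underlying preferences never changed; only the default did.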

“One of our Center for Humane Technology members is a former YouTube engineer who helped build the company’s recommendation system. He found that the recommendation algorithms intrinsically skew toward something more radicalizing. So if you start a teen girl at a dieting video on YouTube, the algorithms may auto-play videos about anorexia — not because anyone at YouTube wanted that to happen, but because that’s better at keeping people’s attention.”
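
The dynamic the engineer describes can be sketched in a few lines. The candidate videos and watch-time scores below are invented for illustration, and this is not YouTube’s actual system; the point is that a ranker maximizing only predicted watch time can drift toward more extreme content without anyone intending it.

```python
# Invented candidates: each has a predicted watch time (the quantity the
# ranker optimizes) and an "intensity" label so the drift is visible.
CANDIDATES = {
    "dieting tips":      {"watch_minutes": 4.0, "intensity": 1},
    "extreme diet plan": {"watch_minutes": 6.5, "intensity": 2},
    "anorexia video":    {"watch_minutes": 9.0, "intensity": 3},
}

def recommend_next(history: list[str]) -> str:
    """Pick the unseen candidate that maximizes predicted watch time.

    Nothing here says "escalate" -- the escalation emerges because the
    more extreme item happens to score higher on the engagement metric.
    """
    unseen = {k: v for k, v in CANDIDATES.items() if k not in history}
    return max(unseen, key=lambda k: unseen[k]["watch_minutes"])

history = ["dieting tips"]  # the user starts at an innocuous video
while len(history) < len(CANDIDATES):
    history.append(recommend_next(history))
print(" -> ".join(history))  # dieting tips -> extreme diet plan -> anorexia video
```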

Harris believes that it’s vital to recognize how powerful product designers are when they exploit these kinds of vulnerabilities. In his view, by allowing algorithms to control a great deal of what we see and do online, such designers have allowed technology to become a kind of “digital Frankenstein,” steering billions of people’s attitudes, beliefs, and behaviors.

Harris also said he believes technology platforms make it easier than ever for bad actors to wreak havoc: pushing “fake news” to specific target audiences, creating fake accounts that impersonate real people, or finding people already prone to conspiracy theories or racism and automatically reaching similar users with “lookalike” targeting.
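
At its core, “lookalike” targeting is a similarity search. The minimal sketch below uses invented users and interest vectors (real ad platforms use far richer models): it expands a seed audience by ranking everyone else by cosine similarity to the seeds.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length interest vectors."""
    return sum(x * y for x, y in zip(a, b)) / (math.hypot(*a) * math.hypot(*b))

# Invented interest vectors; the three features are placeholders.
USERS = {
    "seed_1": [0.90, 0.10, 0.80],
    "seed_2": [0.80, 0.20, 0.90],
    "user_a": [0.85, 0.15, 0.80],  # close to the seeds
    "user_b": [0.10, 0.90, 0.20],  # far from the seeds
    "user_c": [0.70, 0.30, 0.90],
}
SEEDS = {"seed_1", "seed_2"}

def lookalikes(k: int = 2) -> list[str]:
    """Rank non-seed users by their best similarity to any seed user."""
    scores = {
        name: max(cosine(vec, USERS[s]) for s in SEEDS)
        for name, vec in USERS.items()
        if name not in SEEDS
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(lookalikes())  # the audience a bad actor's message would be pushed to
```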

Designing for a more humane approach

Referencing a recent survey in which 72% of teens said they believe tech companies manipulate users into spending more time on their devices, Harris said he believed most people aren’t happy with the current system.

“It’s very simple. Once you’ve shown people how technology is driving the world, no one wants that. No one is excited by it.” He added, “Technology is its own force pushing culture in a certain direction. But we can predict what that direction is and steer it differently.”

Harris believes technology companies can play a big part in steering people differently by designing products that restore choice over how we relate to technology and how we spend our time with it. To illustrate, he described how designers could gauge a product’s success with metrics that measure a positive contribution to our wellbeing, rather than more typical metrics such as the number of “likes,” “matches,” or “shares” received, or the number of connections made.

“FaceTime is a great example of a humane technology because it allows us to draw on our capacity for empathy through our voices and through eye contact,” Harris said. Speculating on other potential humane use cases, he continued, “Or say you were watching a ukulele video and YouTube asked you: ‘What are you really after?’ And if you said, ‘I want to learn this instrument,’ the system would be like, ‘Great. Let me think about it. What would be the best way to help you?’ And even say, ‘Here’s two of your friends who play the ukulele — they might be able to help you out.’”

Using technology to bring out the best in humanity

Harris suggested the change process should start with designers taking a step back and considering the goal of the product they’re creating.

“Humane technology starts with an honest appraisal of human nature. We need to do the uncomfortable thing of looking more closely at ourselves and seeing that we’re vulnerable to social validation, to magicians’ tricks, and to algorithms that split-test 66 algorithmic variations of toxicity or hate speech to figure out what will animate our instincts.”
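
The split-testing loop Harris alludes to is mechanically simple. Here is a hedged sketch, with a stand-in engagement score in place of live traffic measurements: generate the variants, score each, and keep whichever one best “animates our instincts.”

```python
import random

def measured_engagement(variant: str) -> float:
    """Stand-in for a live metric such as click-through rate; a real system
    would measure this by showing each variant to real users."""
    return random.Random(variant).random()  # deterministic fake score per variant

def split_test(base_message: str, n_variants: int = 66) -> str:
    """Generate n phrasings of a message and return the highest-scoring one."""
    variants = [f"{base_message} (variant {i})" for i in range(n_variants)]
    return max(variants, key=measured_engagement)

print(split_test("example divisive headline"))
```

The optimization is indifferent to what the winning variant says; it only knows which one provoked the strongest response, which is precisely Harris’ concern.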

He also gave examples of companies that he believes are already taking small steps to change. “Apple just announced screen time-limiting features with the release of its iOS 12 update. Google also recently launched a digital wellbeing initiative to help people limit their screen time. These are the tiniest baby steps in the right direction.”

Harris said that while bringing about a widespread shift to humane technology will not be easy, doing so remains everyone’s moral responsibility. “It will take parents to message their friends who are inside tech companies; it will take kids talking to their parents about these issues; it will take policymakers; it will take the media. But everyone can be a part of this transition, and we need your help so that we can stop exploiting human nature and put humanity’s best interests first.”


Watch the full interview with Tristan Harris at Dreamforce ‘18.


Learn more about the influence of technologies in the Fourth Industrial Revolution.