Why Google defined a new discipline to help humans make decisions

Machine-learning systems are only as smart as their training data. So Google formalized the marshaling of hard and soft sciences that go into its decisions.

Cassie Kozyrkov is Google’s first-ever chief decision officer. She has already trained 17,000 Googlers to make better decisions by augmenting data science with psychology, neuroscience, economics, and managerial science. Now Google wants to share this new discipline–which it calls Decision Intelligence Engineering–with the world.

At Google, the need for someone like Kozyrkov stemmed from the company’s adoption of machine-learning technology across an array of products and services to make decisions reliably and at massive scale. A machine-learning model that decides whether a photo of an animal shows a cat can trigger actions accordingly: if it’s a cat, do A; if it’s not a cat, do B. And it can do so over and over without ongoing human involvement.
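The pattern behind that kind of automation is simple to sketch. The stub classifier and action names below are hypothetical illustrations, not Google’s actual system; the point is only that one explicit rule can run unattended over any number of inputs:

```python
# A minimal sketch of how one classifier decision can trigger actions at scale.
# The classifier here is a stub standing in for a trained machine-learning
# model; only the decision-to-action wiring is the point.

def looks_like_cat(photo: dict) -> bool:
    # Stand-in for a real model's predict() call; assume it yields a probability.
    return photo.get("cat_probability", 0.0) >= 0.5

def handle_photo(photo: dict) -> str:
    # If it's a cat, do A; if it's not, do B: over and over, unattended.
    return "action A" if looks_like_cat(photo) else "action B"

photos = [{"cat_probability": 0.92}, {"cat_probability": 0.08}]
print([handle_photo(p) for p in photos])  # ['action A', 'action B']
```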

The problem is that an algorithm that learns from examples–in this case, photos of animals labeled as cats or not cats–is only as good as the examples it’s trained on. If the person labeling those examples sometimes tags rabbits as cats, the algorithm will make bad decisions as efficiently as it does good ones. And the more sophisticated machine-learning applications get, the more opportunities there are for humans to introduce subtle problems into the final results.

Google needed a decision-making framework which enabled individual humans, groups of humans, and machines to make wise decisions. Such a process didn’t yet exist. So the company decided to build it.

Decision intelligence engineering

The well-established academic field of decision science covers the psychology, neuroscience, and economics of how human beings make decisions–but it doesn’t encompass the engineering perspective and the scale of automated decision-making. Likewise, data science doesn’t cover how humans think through a decision.

“A lot of the training that data scientists have assumes that the decision maker knows exactly what they need and the question and problem are framed perfectly,” says Kozyrkov. “The data scientist goes off and collects the data in service of that question, and answers it, or builds the machine learning system to implement it.”

That ideal scenario is all too rare in the real world. While working in Google’s data-science consulting arm, Kozyrkov often saw executives make decisions that were steered by unconscious bias rather than by the data itself.

Kozyrkov has a PhD in psychology and neuroscience as well as a master’s in statistics. Instead of just training decision-makers in data science, she set out to draw on the behavioral sciences to help them to make truly data-driven decisions. This means framing a decision effectively–often before looking at any data at all.

How to decide

The first step in Google’s framework asks decision-makers to determine how they will make the decision with no additional information. What would the default choice be? Let’s say you have to decide whether to stay at a hotel. You have photos of the hotel but no guest reviews. Based on the photos alone, would you stay there?

“We pretend that we don’t have a preference, but we’re really lying to ourselves,” says Kozyrkov. “We do have some kind of innate feeling about what seems to be the safer, better option under ignorance.”

The second step is to define how you would make the decision if you had access to any information you wanted. What would it take to convince you to stay at the hotel? Would you want to read every review or just see the average review score? If you use average review score only, does that number need to be 4.2 or 4.5 or something else to convince you to stay at the hotel?

This soul-searching exercise determines the metrics you need to make the decision and the cut-off point for each metric. In the hotel example, this could end up being an average review score of more than 4.2, or a more complex, compound bottom line that weights the average rating against one-star reviews and the possibility of bedbugs.
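To make that concrete, here is one way such a decision rule could be written down. The 4.2 cut-off comes from the example above, while the one-star weighting and the bedbug veto are illustrative assumptions rather than anything prescribed by Google’s framework:

```python
# A sketch of turning the hotel decision into explicit metrics and cut-offs.
# The 4.2 threshold comes from the example in the text; the one-star penalty
# and the bedbug veto are illustrative choices made up for this sketch.

def stay_decision(avg_rating: float,
                  one_star_share: float,   # fraction of reviews giving 1 star
                  bedbug_mentions: int) -> bool:
    if bedbug_mentions > 0:
        return False                             # hard veto: any bedbug report
    adjusted = avg_rating - 2.0 * one_star_share # penalize many 1-star reviews
    return adjusted > 4.2                        # the cut-off chosen in advance

print(stay_decision(avg_rating=4.6, one_star_share=0.05, bedbug_mentions=0))  # True
print(stay_decision(avg_rating=4.6, one_star_share=0.30, bedbug_mentions=0))  # False
```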

“That step is actually skipped a lot in industry, where people will use fuzzy concepts, they’ll never make them concrete, and they won’t own up to the fact that they’re using them,” says Kozyrkov. “They’ll think that putting a bunch of mathematics near it fixes it.”

In the final step of this process, you look at whether you can actually get access to all the data you ideally want to make the decision. If you’ve decided that you’ll stay at any hotel with an average star rating above 4.2, and you have access to hotel ratings, then you’re good to go. But if you’re factoring in bedbugs and don’t have the full reviews that might mention them, then you have to make your decision under uncertainty. That introduces the potential for mistakes, and in automated decision-making, the same mistake can be repeated many times over.

As the decision maker, you must therefore consider which mistakes you can live with. Is it worse to end up in a hotel with bedbugs, or to miss out on a hotel that would have best met all your criteria? How likely is one mistake versus the other? In the case of automated decision-making, only when you have figured out which risks you are willing to accept can a data scientist gather relevant data and apply statistical analysis to help you make a decision.
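One way to make that trade-off explicit is a rough expected-cost comparison. The probabilities and costs below are invented for illustration; the framework only asks that you name the errors you can tolerate and estimate how likely each one is:

```python
# A sketch of weighing the two possible mistakes before gathering data.
# The probabilities and costs are made-up numbers for illustration only.

def expected_cost(p_mistake: float, cost_of_mistake: float) -> float:
    return p_mistake * cost_of_mistake

# Mistake 1: book a hotel that turns out to have bedbugs.
cost_book_bad = expected_cost(p_mistake=0.02, cost_of_mistake=500.0)

# Mistake 2: pass on a hotel that would have met every criterion.
cost_skip_good = expected_cost(p_mistake=0.20, cost_of_mistake=40.0)

# Only once these trade-offs are explicit does it make sense to collect
# data and run the statistics that support the decision.
print(f"expected cost of booking badly: {cost_book_bad:.2f}")   # 10.00
print(f"expected cost of skipping well: {cost_skip_good:.2f}")  # 8.00
```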

Beyond data science

In fact, Kozyrkov says, social scientists are often better equipped than data scientists to translate the intuitions and intentions of a decision-maker into concrete metrics. Ideally, data scientists and social scientists work together to define metrics, collect appropriate data, and apply them to automated decision-making.

“I think that we don’t realize how valuable social scientists are,” she says. “A data scientist might think that they’re qualified to create a survey (e.g. to measure user satisfaction), and then analyze data from it, but what happens if your users simply ignore your survey? We call that non-response bias. What if there’s some incentive for your users to lie to you? That’s not something that a data scientist is trained in handling, whereas if you worked in a social psychology lab, you deal with that day in and day out.”

Making decisions the Google way isn’t necessarily easy. Applying Decision Intelligence Engineering to an important, complex decision can take weeks, or, if multiple stakeholders are involved, even months. But with any luck, the result should be better, wiser decisions, especially at scale.

“We also figure out the right approach based on the importance of the decision,” says Kozyrkov. “A lot of the training prioritizes the most important decisions, but there are also approaches to taking decisions based on … no information at all, if the decision is not that important.”

Kozyrkov argues that Decision Intelligence Engineering is not just for experts like data scientists and social scientists. Everyone in a company can be involved in decision making using data. The trick is figuring out how each person can best contribute. Google’s training is therefore also tailored to different roles in the company.

“Decision-making is something that our species does,” says Kozyrkov. “Everyone knows something about it, and I would say that everyone is an expert in at least some piece of the process.”

Source: www.fastcompany.com