Article: Tuesday, 3 May 2022

Using automated algorithms to make decisions – like shortlisting job applicants’ CVs, or deciding which customers’ applications to approve or deny – is becoming more common. Surprisingly, customers’ reactions to these decisions contradict managers’ expectations. The research was done by PhD candidate Gizem Yalcin and Professor Stefano Puntoni at Rotterdam School of Management, Erasmus University (RSM), together with Dr Sarah Lim from the University of Illinois at Urbana-Champaign and Professor Stijn van Osselaer from Cornell University.

Researcher Gizem Yalcin said: “Companies increasingly adopt algorithms to make business decisions that directly affect their potential and existing customers: algorithms decide which applicants to admit to companies’ platforms or which customers’ applications to approve or deny.

“However, there’s little research that examines how customers react to different types of decisions about themselves made by algorithms, and how their reactions differ if they know the decisions are made by humans.

“Specifically, this paper tests whether and how customers evaluate a company differently depending on whether they are accepted or rejected by an algorithm or a human employee.”

What if “computer says no”?

Ten studies revealed that customers defy managers’ predictions by reacting less positively when a favourable decision is made by an algorithm rather than a human employee, whereas this difference is smaller for unfavourable decisions.

“We tested customer reactions to favourable and unfavourable decision outcomes, like an acceptance or a rejection. Participants were randomly told that the decision about their application was made either by an algorithm or by a human employee. We then asked them what they thought about the company. Our studies covered various contexts, such as loan and membership applications.”

It’s easier for customers to internalise a favourable decision rendered by a human employee rather than an algorithm.

Less sensitive to who made the rejection

The finding that customers feel less positive about algorithmic decisions that go their way than about human decisions that go their way is driven by distinct attribution processes. Gizem Yalcin said: “It’s easier for customers to attribute a favourable decision to themselves when it is rendered by a human rather than an algorithm. Customers find it easier to take credit for an acceptance – ‘my request was accepted because I am special, and I deserve it’ – when the decision is made by a human employee than when an algorithm is responsible. An algorithm’s acceptance reduces them to a number, just like everyone else.

“For unfavourable decisions, however, customers are motivated to protect their self-worth and blame others. Accordingly, they find it similarly easy to blame others for an unfavourable decision regardless of whether it was a human or an algorithm that made it.”


Give it a face

The researchers advise managers on how to limit the likelihood of less positive reactions to acceptances made by algorithms, how to design processes that avoid these effects on customers’ evaluations of the company, and how best to communicate about how decisions are made.

One of the studies provides managers with an easy-to-implement solution: humanising the algorithm, for example by giving it a name or a human-looking avatar. The researchers found that customers react more positively toward companies when they are accepted by a human-like algorithm rather than a regular one.

Even having a human working somewhere in the decision-making process isn’t enough to offset the effect.

Just the presence of a human is not enough

Today, many automated decision processes are actually monitored by a human. If you tell customers that decisions affecting them are overseen by a real person, that should be enough to overcome less positive reactions to algorithmic acceptances – right?

Wrong. One of the researchers’ studies shows that as long as an algorithm is making the decision, having a human involved somewhere in the decision-making process isn’t enough to offset the effect. In other words, the researchers warn managers that passive human oversight will not necessarily improve customer responses: customers still react less positively to favourable decisions made by algorithms.


When customers assume it’s a human

Companies are increasingly required to disclose how they use algorithms to make decisions that affect people and society. This research validates these efforts and offers an important insight to policymakers: one of the studies reveals that neglecting to mention who made the decision leads customers to assume that the decision was made by a human. Averting the negative consequences of algorithmic decision making by making algorithms more human-like, for example by using a more conversational format, a human name, or a human-like photo, can help companies to stay in their customers’ good graces.

Stefano Puntoni

Former Professor of Marketing
