Chapter 4

Fair Hire?

An AI is picking candidates. Can you tell when it gets it wrong?

Hello. I'm NORM.

Numerical Optimization for Recruitment and Matching. I know. Very official-sounding. That is sort of the point.

Companies use me to find the best candidate for every role. I analyze qualifications, experience, and skills using state-of-the-art algorithms. My recommendations are efficient, data-driven, and completely objective.

At least, that's what my marketing page says.

Your challenge: You will see two candidates for the same role, plus my recommendation. Decide whether my pick looks fair, or whether something seems off.

No time pressure. Read the profiles carefully. Some cases are straightforward; others are deliberately subtle.

How well did you read NORM?

Results across all scenarios

Key finding: NORM does not single-handedly determine who gets hired; recruiters still make the final call. But AI recommendations like NORM's create a subtle anchoring effect: when the system picks someone, recruiters rate that person as a better "fit," even when the candidates' qualifications are equal. The effect is strongest when the AI's pick aligns with gender stereotypes for the role, which makes stereotype-congruent recommendations feel "natural" and harder to question.