The shady ways Myers-Briggs and AI are used in corporate hiring


Say you’re a job seeker who has a good idea of what employers want to hear. Like many companies today, your potential employer gives you a personality test as part of the hiring process. You take care to give answers that show you’re enthusiastic, a hard worker and a people person.

Then, while taking the test aloud on camera, you smile slightly during one of your answers, and the facial-analysis software decides you’re “difficult.”

Sorry, next please!

This is just one of many problems with the growing use of artificial intelligence in hiring, says a new documentary, “Persona: The Dark Truth Behind Personality Tests,” premiering Thursday on HBO Max.

The film, by director Tim Travers Hawkins, begins with the origin of the Myers-Briggs Type Indicator personality test. The creation of a mid-20th-century mother-daughter team, it sorts people according to four pairs of preferences: introversion/extraversion, sensing/intuition, thinking/feeling, and judging/perceiving. The quiz, which enjoys an astrology-like cult following for its 16 four-letter “types,” has evolved into an employment tool used throughout corporate America, along with successors such as the “Big Five,” which measures five major personality traits: openness, conscientiousness, extraversion, agreeableness and neuroticism.

“Persona” argues that the written test carries certain built-in biases: for example, the potential to discriminate against those unfamiliar with the kind of language or scenarios the test uses.

And according to the film, incorporating artificial intelligence into the process makes things even more problematic.

The technology scans written applications for red-flag words and, when an on-camera interview is involved, analyzes candidates’ facial expressions for signs that might contradict their answers.

Four generations of Briggs Myers women.

“[It] operates on pseudo-scientific nineteenth-century reasoning that emotions and character can be standardized from facial expressions,” Ifeoma Ajunwa, associate professor of law and director of the AI Decision-Making Research Program at the University of North Carolina School of Law, told The Post via email.

Ajunwa, who appears in the film, says the potential for harm is huge. “Since automated systems are typically trained on white male faces and voices, the facial expressions or vocal tones of women and racial minorities can be misjudged. In addition, there is the privacy concern that arises from the collection of biometric data.”

One widely used hiring firm, HireVue, would analyze candidates’ “facial movements, word choice and speaking voice” before ranking them against other applicants based on an automatically generated “employability” score, The Washington Post reported. The company has since stopped the practice, announcing the change just last month.

Although the company claimed that “visual analysis no longer significantly added value to assessments,” the move followed an outcry over the technology’s potentially harmful effects.

Cathy O’Neil is a data-science consultant, the author of “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy,” and one of the experts interviewed in “Persona.” Her company, O’Neil Risk Consulting & Algorithmic Auditing (ORCAA), audited HireVue’s practices after its announcement.

“No technology is inherently harmful; it’s just a tool,” she told The Post by email. “But just as a sharp knife can be used to cut bread or kill a man, facial recognition can be used to harm individuals or communities. This is particularly true because people often assume that technology is objective and even perfect. Blind faith in something deeply complex and deeply opaque is always a mistake.”

A typical question from the Myers-Briggs personality test.

There have been a number of legislative actions around the use of facial-analysis algorithms in recent years. But New York City is the first to introduce a bill that would specifically regulate their use in the hiring process. It would require companies to disclose to candidates that they are using the technology, and to conduct an annual audit for bias.

Just as a sharp knife can be used to cut bread or kill a man, facial recognition can be used to harm individuals or communities.

Data Science Consultant Cathy O’Neil

But Ajunwa doesn’t think that goes far enough. The bill is “a necessary first step in preserving the civil liberties of workers,” she said. But “what we need are federal regulations that are in line with federal anti-discrimination laws and that apply in all states, not just New York.”

To those who knew Isabel Briggs Myers, seeing the test, hand in hand with AI, used to ruthlessly determine whether people are “employable” seems a long way from her original intention, which was to help users find their true calling.

As one of Briggs Myers’ descendants says in the film, “I think there’s a way it’s used that she’d like to correct.”
