Ableist robots take away our jobs
How the tools we build to make our lives easier discriminate against people with disabilities
The 2018 sci-fi action film “Upgrade” takes place in an AI-dominated future. An experimental implant helps the protagonist regain control of his body after a violent attack leaves him paralysed, and sets him on a path of revenge.
In this version of the future, the main character has a strong disdain for technology, but ends up in an electric wheelchair, surrounded by robots that feed him, wash him, and look after his home. A mechanic who can no longer lift a screwdriver, he is not tossed aside because of his disability: he gets a chance to find a new purpose in life. He could write a book, design houses, or become a business coach.
He just happens to choose revenge.
The future we are building today may not end up looking like the one in “Upgrade”, but we seem to be moving in that direction. Musk’s Neuralink is experimenting with brain implants, AI offers support for people with learning disabilities: we are delegating more and more to machines, starting with mundane tasks but progressing rapidly towards AI gaining greater control over our lives.
And that is a huge problem, because robots love to discriminate.
A broken needle in a burning haystack
In 2018, Amazon quietly shut down its hiring automation tool, designed to pick the best candidates out of the bunch. A source told Reuters:
They literally wanted it to be an engine where I'm going to give you 100 resumes, it will spit out the top five, and we'll hire those.
The problems, however, arose fairly soon. With the top resumes collected over the course of 10 years as a training set, the machine was designed to identify patterns and prioritise candidates who fit the mould and resembled an average hire.
In a male-dominated tech industry, is it that surprising that it would downgrade any resume with the word “women’s” in it?
Amazon edited the programs to make them neutral to these particular terms. But that was no guarantee that the machines would not devise other ways of sorting candidates that could prove discriminatory.
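To make that last point concrete, here is a minimal toy sketch (invented resumes and outcomes, not Amazon’s actual system): even with the telltale word scrubbed, a classifier trained on biased historical hiring decisions learns to punish whatever proxy terms correlate with it.

```python
# Toy illustration of proxy learning (all data invented).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical historical resumes and whether the person was hired.
# The word "women's" has been scrubbed, but "softball" remains as a proxy.
resumes = [
    "captain of the chess team, built compilers",
    "softball league organiser, built compilers",
    "chess club member, kernel contributor",
    "softball team captain, kernel contributor",
]
hired = [1, 0, 1, 0]  # biased past outcomes, reflecting who actually got hired

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The model never saw a gendered word, yet it has learned to downgrade
# the proxy terms that correlate with one in the historical data.
for word, weight in zip(vectorizer.get_feature_names_out(), model.coef_[0]):
    print(f"{word:12s} {weight:+.2f}")
```

Run it and “softball” picks up a negative weight purely because of who was hired in the past, which is exactly the failure mode Amazon could not rule out.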
A few years later, AI is generating buzz in the hiring world again. Hallucinating robots are now at the forefront, filtering resumes and choosing top candidates, and we seem to have blissfully forgotten that
algorithmic bias results in discriminatory hiring practices based on gender, race, color, and personality traits.
And for people with disabilities, things are even worse.
Blink twice if you want this job
HireVue was founded in 2004 as a pioneer in the then-emerging field of long-distance hiring. It has come a long way since: from shipping webcams to candidates to completely replacing humans with robots in video interviews. Its proprietary system not only records the applicant’s responses, but analyses their voice and facial expressions, promising to identify top candidates.
Unsurprisingly, it was not met with enthusiasm. Meredith Whittaker, a co-founder of the AI Now Institute, summarised it:
It’s a profoundly disturbing development that we have proprietary technology that claims to differentiate between a productive worker and a worker who isn’t fit, based on their facial movements, their tone of voice, their mannerisms. It’s pseudoscience. It’s a license to discriminate.
Unfortunately, the practice of using AI to screen candidates is becoming more widespread, and robots continue to overlook women, people of colour, middle-aged candidates, and, of course, people with disabilities.
AI will happily toss out an applicant who took extended sick leave due to a chronic illness, and screen out people living with depression and mental disabilities. No amount of guardrails can guarantee unbiased selection: the problem lies in the training data.
Data points that deviate from the norm, such as those originating from disabled people, are eliminated from data sets.
The lack of high-quality disability data in decision-making algorithms is alarming. Today’s hiring practices are far from perfect, but at least meat-based recruiters can easily be taught that eye twitching does not affect a candidate’s ability to write code.
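The elimination that the quote above describes is often nothing more sinister than routine data cleaning. A minimal sketch, with invented numbers: a standard outlier filter silently removes the candidates whose data deviates from the able-bodied norm before any model ever sees them.

```python
# Toy illustration (invented numbers): a routine "drop outliers beyond
# 2 standard deviations" step erases the people who deviate from the norm.
import numpy as np

# Hypothetical feature: seconds taken to complete a timed assessment.
# The last two candidates use a screen reader and legitimately take longer.
completion_times = np.array([41, 38, 44, 40, 39, 42, 37, 43, 41, 40, 118, 131])

z_scores = np.abs(completion_times - completion_times.mean()) / completion_times.std()

# A common data-cleaning move: keep only the points within 2 sigma.
cleaned = completion_times[z_scores < 2.0]
print(cleaned)  # [41 38 44 40 39 42 37 43 41 40] -- the screen-reader
                # users are gone before training even starts
```

Nobody in this pipeline decided to exclude disabled candidates; the filter did it by definition, because their data points are, statistically, the outliers.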
Bad news for everyone
The difference between a legitimate requirement and a discriminatory practice is not as vague as it may appear. It normally comes down to whether the nature of the job demands certain mental or physical capabilities, although in some situations these roles can open up to more candidates thanks to assistive technology, medical devices, and reasonable adjustments.
For instance, requiring a pilot to have “no medical conditions that cause vertigo, loss of equilibrium, or speech problems” is reasonable: human lives literally depend on the pilot’s ability to operate in extreme conditions. At the same time, the equally reasonable 20/20 vision requirement does not exclude pilots who wear glasses or corrective lenses.
Requiring the same of, say, a software engineer is ridiculous. Case in point: Ed Summers, who started his career in development in 1994 and now works as Head of Accessibility at GitHub. Ed went blind 20 years ago, and has built a successful career and touched the lives of millions through his work and advocacy.
And yet, AI makes up reasons to exclude candidates who mention disabilities or even use disability-related keywords, such as a seat on a disability, equity and inclusion panel. This leaves companies like Ourability to step in and develop algorithms for job-seekers with disabilities, and researchers to call for the inclusion of disability data in training sets.
In the meantime, 99% of Fortune 500 companies use AI in talent management and hiring, and it is unknown how concerned they are, if at all, with the biases embedded in those algorithms. Machines test candidates’ ability to play games, analyse their eye movements, and measure their leadership skills by the tone of their voice, kicking out those who don’t look straight into the camera or who speak softly.
“Upgrade” is not a film about living with disabilities in the age of computer intelligence, although the protagonist being paralysed plays a critical role in the story. It explores, among other themes, the dangers of placing computers in control without fully understanding how they work.
Today’s AI is essentially a black box, yet we choose to trust it with important decisions anyway. Its potential to transform our society is immense, and without guardrails, inclusive datasets, and human oversight, its impact could be catastrophic.