This is the 2nd article in a 2-part series on the use of AI in hiring. The first part is available here.
In the previous post in this series, I discussed what the public thinks about whether AI can reduce discrimination in hiring. I briefly went over why we should expect it to have no effect (or worse), and then pointed to a survey I conducted that showed how the public seems to disagree with this.
For this post, I want to address whether there is a way to design a system that is not discriminatory, and how you might do that. But first, I need to back up a little bit and ask a simple question:
What is Discrimination?
Discrimination refers to the treatment or consideration of, or making a distinction in favor of or against, a person or thing based on the group, class, or category to which that person or thing belongs rather than on individual merit.
https://definitions.uslegal.com/d/discrimination/
There is a pernicious belief that it’s not real discrimination unless someone says a bigoted slur, or something like that. Most discrimination isn’t that obvious; more often, people and institutions get away with discrimination in all walks of life by enforcing ostensibly race-neutral rules.
For example: is it discriminatory against non-Asian populations that Harvard College’s admitted Class of 2023 was 25.3% Asian-American, even though Asian-Americans make up about 6% of the U.S. youth population? A recent court case litigated the exact opposite question, with plaintiffs alleging that Harvard was discriminating against Asian-Americans despite their massive over-representation relative to the general under-18 U.S. populace. How can discrimination against Asian-Americans even be a hypothetical possibility given those numbers? The answer, of course, is that Harvard is not pulling candidates from the general populace: it is looking at what Harvard considers to be the high-achieving subset of that population. Asian-Americans are disproportionately likely to be part of this group (small average differences can create large swings in outliers, as the simulation below illustrates). And since high academic achievement is an individual merit, everyone is fine with that.
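To see how small average differences create large swings in the tails, here is a minimal simulation. The group sizes, means, and cutoff are illustrative numbers I made up, not estimates of any real population:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical groups whose "achievement" differs by a modest
# 0.3 standard deviations on average (an illustrative number).
group_a = rng.normal(loc=0.0, scale=1.0, size=1_000_000)
group_b = rng.normal(loc=0.3, scale=1.0, size=1_000_000)

cutoff = 2.5  # a stringent "high-achiever" threshold (~top 0.6% of group A)

share_a = (group_a > cutoff).mean()
share_b = (group_b > cutoff).mean()
print(f"Group A above cutoff: {share_a:.3%}")
print(f"Group B above cutoff: {share_b:.3%}")
print(f"Over-representation of B among high-achievers: {share_b / share_a:.1f}x")
```

A gap of only 0.3 standard deviations in the averages produces roughly a twofold over-representation above this cutoff, and the ratio grows as the cutoff gets more selective.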
The plaintiffs argued that Asian-Americans were underrepresented at Harvard relative to their share of high-achievers. The defendants retorted that there is more to being a high-achiever than test scores and GPA; to quote the expert testimony of economist David Card for the defense, “Harvard’s admissions process values many dimensions of excellence, not just prior academic achievement.”
In the case of employment discrimination, the analogous subset of the population is anyone who is qualified for the job. The legal approach to identifying employment discrimination makes a lot of sense, and is worth covering. A prima facie case of employment discrimination exists in the event that:
- The plaintiff belongs to a protected class;
- The plaintiff was qualified for the job;
- The plaintiff was subjected to an adverse employment action;
- The employer gave better treatment to a similarly-situated person outside the plaintiff’s protected class.
If the plaintiff can prove these things, then the employer/defendant can retort with a race-neutral reason. But what if the ostensibly race-neutral policy is actually designed to filter out candidates in a protected class without nominally using race? For example, an employer in Chicago with a policy of not hiring people who live in Chicago’s South Side would be pretty racist without ever mentioning race. This is why “disparate impact” can also be a basis for bringing a lawsuit, even though the SCOTUS (egregiously, in my view) limited the scope of disparate impact claims in 2015.
Going back to the Harvard case: there are some Harvard policies that disparately impact racial groups and that everyone seems fine with, such as Harvard’s strong preference for admitting only students with relatively high standardized test scores. Harvard doesn’t need to admit only people with high test scores, after all. It could throw a bone to some people who are not high-achieving by traditional standards, or simply fill the entire Class of 2024 by lottery. But because a standardized test score is considered an “individual merit,” relying on it is not discrimination by the definition quoted above. Nobody thinks it’s unreasonable that Harvard wants high-achievers, just as nobody thinks it’s unreasonable that employers want qualified employees. The question in the big Harvard lawsuit more or less came down to whether the “personal rating” was a form of racial discrimination or a valid individual merit; the court agreed with the defendants that it was the latter.
One Neat Trick to Eliminate Hiring Discrimination (Data Scientists Hate Him!)
All the talk in my previous article about whether AI can eliminate discrimination misses the point. There is already a way to mostly eliminate discrimination, and the method works whether the backbone of your process is an AI approach or a more traditional human discretion approach. If your company isn’t hiring a lot of women but you’d like to do so, then hire more women. If your company isn’t hiring a lot of black people but you’d like to do so, then hire more black people. That’s it, that’s the trick.
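To make the trick concrete, here is a minimal sketch in Python. Everything in it (the Candidate class, hire_with_quota, and its parameters) is a hypothetical illustration of the idea, not a production system or legal advice:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    score: float  # output of whatever scoring process you already use
    group: str    # the attribute you have chosen to balance on

def hire_with_quota(candidates, n_hires, group, min_share):
    """Hire the top-scoring candidates, while guaranteeing `group` at
    least `min_share` of the hires. The underlying scoring is untouched;
    only the final cut changes."""
    reserved = round(n_hires * min_share)
    ranked = sorted(candidates, key=lambda c: c.score, reverse=True)
    # Fill the reserved slots from the group, best scores first...
    hires = [c for c in ranked if c.group == group][:reserved]
    taken = {id(c) for c in hires}
    # ...then fill the remaining slots from everyone else, best scores first.
    hires += [c for c in ranked if id(c) not in taken][:n_hires - len(hires)]
    return hires
```

If the group has fewer applicants than the reserved slots, the function degrades gracefully and simply hires everyone in the group who applied.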
Even after implementing this adjustment to your candidate pool, there is a very real concern that, because your process remains fundamentally discriminatory, your entire pool of black employees will (for example) have white-sounding names like “John Williams,” while people with black-sounding names are rarely hired. In fact, this is very likely to be the case! But the One Neat Trick never purports to fix that problem, unless you also make “black-sounding names” one of the attributes you explicitly balance on. The trick balances out exactly the things you tell it to balance out, and nothing else. And in that sense, the One Neat Trick works.
This trick is so trivial that it is downright offensive to many people who assume solutions to complicated problems must themselves be complicated. Surely there are some reasons to object to it, or surely there must be better solutions, right?
Some people may object to the One Neat Trick as a form of affirmative action gone mad, one that gives an unfair advantage to minority groups. But why must the adjustment be an unfair advantage for minority groups? It can just as easily be said that the pre-adjustment scoring rubric gives an unfair advantage to the majority groups. Much like actual affirmative action at colleges, it’s not a tacit admission that women or nonwhite candidates are worse: it’s an admission that the pre-adjustment scoring rubric isn’t perfect.
Some people may object that the One Neat Trick is an “equality of outcome” approach as opposed to an “equality of opportunity” approach. But how exactly do you measure and quantify “equality of opportunity?” The answer is that you use outcomes to quantify opportunity: we can tell whether women had an equal shot at getting the job (opportunity) by whether women were hired in proportion to men (outcome). There is no a priori reason to believe that a process with equal opportunity would lead to unequal outcomes over a sufficiently large sample size. This is all to say that the “equality of opportunity, not equality of outcome” mantra is, from a statistician’s or social scientist’s perspective, meaningless. Group average outcomes directly measure opportunity, and making outcomes equal across groups is how you make opportunities equal across groups.
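If you accept that outcomes are how you quantify opportunity, then checking for unequal opportunity becomes a textbook statistical exercise. Here is a minimal sketch using a chi-squared test; the hiring counts are entirely hypothetical:

```python
from scipy.stats import chi2_contingency

# Hypothetical contingency table of hiring outcomes:
#               women   men
# hired            30    70
# not hired       470   430
table = [[30, 70],
         [470, 430]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"p-value: {p:.2g}")
# A small p-value means the outcomes differ by group more than chance
# would explain; on the view above, that is unequal opportunity.
```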
How We Conceptualize Remedies to Discrimination Makes No Sense
SCOTUS precedent does not share my perspective on discrimination and quotas (here specifically in the context of college admissions), but the SCOTUS is wrong here. The precedent assumes that there is a meaningful difference between a “plus factor” and a “racial quota,” and that the former is permissible while the latter is not. But there isn’t a real difference. For every “racial plus factor,” there is a percentage of additional applicants who make the cut. For every percentage of additional applicants you might want under a “quota,” there is an implied plus factor that can be backed out of it. For those familiar with climate economics, this is like the relationship between cap-and-trade and carbon taxes: a price implies some quantity, and a quantity implies some price, and you can pick which one to set (the other number will follow).
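Here is a minimal simulation of that duality. All of the numbers (pool sizes, score distributions, the cutoff, the target share) are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical applicant scores for a minority group and everyone else.
minority = rng.normal(-0.3, 1.0, 5_000)
majority = rng.normal(0.0, 1.0, 45_000)
cutoff = 1.5  # admit everyone scoring above this

def minority_share(plus):
    """Minority share of admits, given a 'plus factor' added to
    minority scores before the cutoff is applied."""
    m = (minority + plus > cutoff).sum()
    return m / (m + (majority > cutoff).sum())

# Direction 1: a plus factor implies a quota (an admitted share).
print(f"Share at +0.0: {minority_share(0.0):.1%}")
print(f"Share at +0.5: {minority_share(0.5):.1%}")

# Direction 2: a target share implies a plus factor; back it out by bisection.
target = 0.15
lo, hi = 0.0, 3.0
for _ in range(50):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if minority_share(mid) < target else (lo, mid)
print(f"Implied plus factor for a {target:.0%} share: ~{hi:.2f}")
```

Pick the plus factor and a share follows; pick the share and a plus factor can be backed out. The two knobs control the same machine.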
I’m not alone in saying that quotas and plus factors are functionally similar. Interestingly, it was easier to find conservative writers who have gone on the record in agreement with this point than progressive ones. For example, see this WaPo column from conservative columnist Charles Lane. For an older example, see the chapter titled “The Higher Learning” in Abigail and Stephan Thernstrom’s 1997 book, “America in Black and White.” (Some Gen Xers may remember Abigail Thernstrom from Bill Clinton’s 1997 town hall on racial issues.) These writers note that the same arguments were raised when Regents v. Bakke was decided, and that Justice Thurgood Marshall agreed with them. In their view, much of the existing legal order requires legal teams defending racial affirmative action programs to draw distinctions without actual differences, as when Harvard must defend its affirmative action program as “merely using race as a ‘plus factor.'” Again, I agree with the conservative writers on this.
Where I disagree with these conservative writers is in their implied premise that quotas are a bad or inappropriate remedy to racial discrimination that still happens today. Notably, they don’t put any emphasis on this premise in their writing, and rhetorically they don’t have to: the idea that quotas are bad is taken as a given in the collective conscience, and their goal is simply to convince people that plus factors are just quotas wearing a fake mustache. Normal people believe that a process should be designed to be race-neutral and fair, that it’s not so bad to give help to people who are disadvantaged, but that we shouldn’t set quotas. But if “help to people who are disadvantaged” is a quota, then, because quotas are bad, help to the disadvantaged must be bad too.
This is not to say that conservative writers have a level-headed view of the nature of discrimination. The common conservative insistence that there is a meaningful difference between outcomes and opportunities rests on much the same fallacy as the quota/plus-factor distinction: quotas are to outcomes as plus factors are to opportunities.
All of this talk about the supposedly good and supposedly bad ways to fight discrimination leads back to the One Neat Trick discussed above. Note that I never needed to invoke AI: quotas predate AI, and the existence of AI hasn’t changed the fact that hiring or admitting more people from disadvantaged groups fixes discrimination in a hiring or admissions process. AI does not create new inroads to solve any problem of discrimination that exists today, despite what some headlines suggest.

If the AI process is truly discriminatory, we’ll be able to measure that in terms of outcomes (a simple check is sketched below), but the same is true of ordinary hiring practices. And if we want to fix the discrimination, we can add quotas to the process, whether that process relies on AI or on human discretion. (Ahem, sorry. My lawyers tell me that you cannot add quotas, but you are allowed to use “plus factors” instead.) Put another way, in the words of Dr. Ifeoma Ajunwa: discrimination in hiring is a legal problem and “not a technical problem.”
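As a concrete example of measuring discrimination in outcomes, here is a minimal sketch of the EEOC’s “four-fifths” rule of thumb, under which a group’s selection rate below 80% of the highest group’s rate is treated as evidence of adverse impact. The applicant counts are hypothetical:

```python
def adverse_impact(outcomes, threshold=0.8):
    """outcomes: dict mapping group -> (hired, applied).
    Returns the groups whose selection rate falls below `threshold`
    times the highest group's rate, with their impact ratios."""
    rates = {g: hired / applied for g, (hired, applied) in outcomes.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items() if rate / best < threshold}

# Hypothetical numbers: group_b's rate is 44% of group_a's, so it is flagged.
print(adverse_impact({"group_a": (50, 200), "group_b": (20, 180)}))
```

The check is the same whether the selections came from a resume-screening model or a human reading resumes; the outcome data doesn’t care.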
Those four words, “not a technical problem,” describe a striking amount of what the tech sector spends its time on. Are we headed toward climate catastrophe because we haven’t built enough electric cars, or because we’ve designed every aspect of our society around unlimited fossil fuel consumption? Do we need a new alternative to banks in the form of cryptocurrency, or should we build out regulatory institutions that we can trust to keep our money safe? Can AI help create a fairer and more just society, or do we already have the capabilities to do that, and simply lack the willpower?
Major kudos to Matt Darling (@besttrousers) and Daniel Ruffolo (@SFF_Writer_Dan) for their constructive feedback on this series. The views expressed in this article are mine alone and do not necessarily represent the views of anyone who assisted me.