Fake AI 


Edited by Frederike Kaltheuner


Meatspace Press (2021)

Book release: 14/12/2021









Chapter 2


Cheap AI


By Abeba Birhane


Cheap talk (n): talk that results in harm to the marginalised but costs nothing for the speaker.1


Not a week goes by without the publication of an academic paper or AI tool claiming to predict gender, criminality, emotion, personality, political orientation or another social attribute using machine learning. Critics often label such work as pseudoscience, digital phrenology, physiognomy, AI snake oil, junk science, and bogus AI. These labels are fitting and valid. However, identifying this work as Cheap captures the fact that those producing it (usually a homogeneous group from privileged backgrounds) suffer little to no cost, while the people who serve as the testing grounds, frequently those at the margins of society, pay the heaviest price.

Cheapness emerges when a system makes it easy to talk at little or no cost to the speaker, while at the same time causing tangible harm to the most vulnerable, disenfranchised, and underserved. Within traditional sciences, cheapness manifests when racist, sexist, ableist, misogynist, transphobic and generally bigoted assumptions are re-packaged as scientific hypotheses, with the implication that the only viable way to reject them is to test them. Much of the work from the intimately related fields of “race science” and IQ research constitutes Cheap science. Increasingly, parts of AI research and its applications are following suit.

Cheap AI, a subset of Cheap science, is produced when AI is inappropriately seen as a solution for challenges that it is not able to solve. It is rooted in the faulty assumption that qualities such as trustworthiness, emotional state, and sexual preference are static characteristics with physical expression that can be read (for example) from our faces and bodies. Software that claims to detect dishonesty, most facial recognition systems deployed in the public sphere, emotion and gait recognition systems, AI that categorises faces as “less or more trustworthy”—all of these constitute Cheap AI. Judgements made by these systems are inherently value-laden, wholly misguided, and fundamentally rooted in pseudoscience.

At the root of Cheap AI are prejudiced assumptions masquerading as objective enquiry. For instance, the very conception of racial categories derives from German doctor Johann Friedrich Blumenbach’s declaration in 1795 that there are “five human varieties: Caucasians, Mongolians, Ethiopians, Americans, and Malays”.2 This arbitrary classification placed white people of European descent at the top of the hierarchy and cleared the way for colonisation. Subsequently, apparently scientific racial classifications have served as justifications for inhumane actions, including naturalised slavery, forced sterilisation, and the genocidal Nazi attempt to exterminate the Jewish people, people with disabilities, and LGBTQ+ people, among other “deviant” classes. Today, racial classifications continue to justify discriminatory practices—including immigration policy—as a way to filter out “inferior” races.

Like other pseudosciences, Cheap science is built around oversimplifications and misinterpretations. The underlying objective of “race science”, for example, is to get at biological or cognitive differences in supposed capabilities between different races, ethnicities, and genders, on the presumption that there exists a hierarchy of inherent differences to be found between groups. IQ research, for instance, has asserted the existence of race-based differences by reducing intelligence to a single number and by framing it as dependent upon race. Similarly, Cheap AI reduces complex and contingent human behaviour to a face, gait, or body language. Tools based on these presumptions are then produced en masse to classify, sort, and “predict” human behaviour and action.

Cheap AI presents itself as something that ought to be tested, validated, or refuted in the marketplace of ideas. This grants bigoted claims the status of a scientific hypothesis, and frames the proponents and critics of Cheap AI as two sides of equal merit, with equally valid intent and equal power.

This equivalence is false. While those creating or propagating Cheap AI may face criticism or reputational harm (if they face these things at all), marginalised people risk discrimination, inhumane treatment, or even death as a result.

Time and time again, attempts to find meaningful biological differences between racial groups have been proven futile, laden with error (on average, there is more variation within groups than between them), and rooted in racist motivations. Yet the same speculations persist today, just differently framed. In a shift precipitated by the catastrophic effects of Nazi “race science”, race and IQ research has abandoned the outright racist, colonialist, and white supremacist framings of the past, and now masquerades in cunning language such as “populations”, “human variation”, and “human biodiversity” research.

The mass application of Cheap AI has had a gradual but calamitous effect, especially on individuals and communities that are underserved, marginalised, and disproportionately targeted by these systems. Despite decades of work warning against the dangers of a reductionist approach, so-called “emotion detection systems” continue to spread. Though criminality is a complex social phenomenon, claims are still made that AI systems can detect it based on images of faces. Although lies and deception are complex behaviours that defy quantification and measurement, assertions are still made that they can be identified from analysis of video feeds of gaits and gestures. Alarmingly, this and similar work is fast becoming mainstream, increasingly appearing in prestigious academic venues and publishers such as NeurIPS and Springer.

The forms Cheap AI takes, the types of claims made of it, and the areas to which it is applied are varied and fast expanding. Yet a single theme persists: the least privileged, the most disenfranchised, and the most marginalised individuals and communities pay the highest price. Black men are wrongly detained due to failures in facial recognition systems; Black people in ill health are systematically excluded from medical treatment; the elderly and women are not shown job ads. These cases represent only the tip of the iceberg. So far there are three known cases of Black men wrongly detained due to facial recognition, but there are likely many more. The victims of algorithmic injustice that we know about are often disenfranchised, and many more are likely fighting injustice in the dark, or falling victim without redress, unaware that Cheap AI is responsible.

Like segregation, much of Cheap AI is built on a logic of punishment. These systems embed and perpetuate stereotypes. From “deception detection” to “emotion recognition” systems, Cheap AI serves as a tool that “catches” and punishes those deemed to be outliers, problematic, or otherwise unconventional.

The seemingly logical and open-minded course of action—to withhold judgement on these systems on the premise that their merits lie in how “well” or “accurately” they work—lends them a false sense of reasonableness, giving the impression that the “self-correcting” nature of science will eliminate bad tools. It also creates the illusion of there being “two sides”. In reality, criticisms, objections, and calls for accountability drown in the sea of Cheap AI that is flooding day-to-day life. Cheap AI is produced at an unprecedented rate, and huge amounts of money go into producing it. By contrast, those working to reveal it as scientifically unfounded and ethically dangerous are scholars and activists in precarious positions with little to no support, who are likely to suffer negative consequences for doing so.

By suspending judgement until harm is proven, an ecosystem has been created in which anyone can claim to have created obviously absurd and impossible tools (some of which are nonetheless taken up and applied) without facing any consequences for engaging in Cheap AI. Such creators and deployers may risk their reputations when their tech is proven to be “inaccurate”. However, for those who bear the brunt of being measured by this tech, it can be a matter of life and death, resulting in years lost trying to prove innocence, and other grave forms of suffering.

There is no quick solution to ending Cheap AI. Many factors contribute to the ecology in which it thrives. These include blind faith in AI, the illusion of objectivity that comes with the field’s association with mathematics, Cheap AI creators’ and deployers’ limited knowledge of history and other relevant fields, a lack of diversity and inclusion, the privilege hazard (a field run by a group of mostly white, privileged men who are unaffected by Cheap AI’s harms), the tendency to ignore and dismiss critical voices, and a lack of accountability. Cheap AI is a problem of this whole ecosystem: all of these factors and more need to be recognised and challenged so that Cheap AI is seen for what it is, and those producing it are held accountable.

It took Nazi-era atrocities, forced sterilisations, and other inhumane acts for phrenology, eugenics, and other pseudosciences to be relegated from science’s mainstream to its fringe. It should not take mass injustice for Cheap AI to be recognised as similarly harmful. In addition to strict legal regulation and the enforcement of academic standards, we ourselves also bear a responsibility to call out and denounce Cheap AI, and those who produce it.

Abeba Birhane is a cognitive science PhD candidate at the Complex Software Lab, University College Dublin, Ireland, and Lero, the Science Foundation Ireland Research Centre for Software.

Notes

1. This definition was partly inspired by Rabinowitz, A. (2021, January 8). Cheap talk skepticism: Why we need to push back against those who are ‘just asking questions’. The Skeptic. https://www.skeptic.org.uk/2021/01/cheap-talk-skepticism-why-we-need-to-push-back-against-those-who-are-just-asking-questions

2. Saini, A. (2019). Superior: The return of race science. Boston, MA: Beacon Press.






