Fake AI 


Edited by Frederike Kaltheuner


Meatspace Press (2021)

Book release: 14/12/2021









Introduction


This book is an intervention


Frederike Kaltheuner


Not a week goes by without some research paper, feature article or product marketing making exaggerated or even entirely unfounded claims about the capabilities of Artificial Intelligence (AI). From academic papers that claim AI can predict criminality, personality or sexual orientation, to the companies that sell these supposed capabilities to law enforcement, border control or human resources departments around the world, fake and deeply flawed AI is rampant.

The current wave of public interest in AI was spurred by the genuinely remarkable progress made with some AI techniques in the past decade. For narrowly defined tasks, such as recognising objects, AI can now perform at the same level as, or even better than, humans. However, that progress, as Arvind Narayanan has argued, does not automatically translate into solving other tasks. In fact, when it comes to predicting any social outcome, using AI is fundamentally dubious. [1]

The ease and frequency with which AI’s real and imagined gains are conflated result in real, tangible harms.


For those subject to automated systems, it can mean the difference between getting a job and not getting a job, between being allowed to cross a border and being denied entry. Worse, the ways in which these systems are so often built in practice mean that the burden of proof frequently falls on those affected to demonstrate that they are in fact who they say they are. On a societal level, widespread belief in fake AI means that we risk redirecting resources to the wrong places. As Aidan Peppin argues in this book, it could also mean that public resistance to the technology will end up stifling advances in areas where genuine progress is being made.

What makes the phenomenon of fake AI especially curious is the fact that, in many ways, 2020-21 has been a time of great AI disillusionment. The Economist dedicated its entire summer Technology Quarterly to the issue, concluding that “An understanding of AI’s limitations is starting to sink in.” [2] For a technology that has been touted as the solution to virtually every challenge imaginable, from curing cancer and fighting poverty to predicting criminality, reversing climate change, and even ending death, AI has played a remarkably minor role [3] in the global response to a very real challenge facing the world today: the Covid-19 pandemic. [4] As we find ourselves on the downward slope of the AI hype cycle, this is a unique moment to take stock, to look back, and to examine the underlying causes, dynamics, and logics behind the rise and fall of fake AI.

Bringing together different perspectives and voices from across disciplines and countries, this book interrogates the rise and fall of AI hype, pseudoscience, and snake oil. It does this by drawing connections between specific injustices inflicted by inappropriate AI, unpacking lazy and harmful assumptions made by developers when designing AI tools and systems, and examining the existential underpinnings of the technology itself to ask: why are there so many useless, and even dangerously flawed, AI systems?

Any serious writing about AI has to wrestle with the fact that AI itself has become an elusive term. As every computer scientist will be quick to point out, AI is an umbrella term for a set of related technologies. Yet while these same computer scientists offer precise definitions and remind us that much of what we call AI today is in fact machine learning, in the public imagination the term has taken on a meaning of its own. Here, AI is a catch-all phrase for a wide-ranging set of technologies, most of which apply statistical modelling to find patterns in large data sets and make predictions based on those patterns, as Fieke Jansen and Corinne Cath argue in their piece about the false hope placed in AI registers.

Just as AI has become an imprecise word, hype, pseudoscience, and snake oil are frequently used interchangeably to call out AI research or AI tools that claim to do something they either cannot or should not do. If we look more closely, however, these terms are distinct. Each highlights a different aspect of the phenomenon that this book interrogates.

As Abeba Birhane powerfully argues in her essay, Cheap AI, the return of pseudoscience, such as race science, is neither new nor unique to AI research. What is unique is that dusty and long-discredited ideas have found new legitimacy through AI. Dangerously, they’ve acquired a veneer of innovation, a sheen of progress, even.

By contrast, in a wide-ranging interview that considers how much, and how little, has changed since his original talk three years ago, Arvind Narayanan homes in on “AI snake oil”, explaining how it is distinct from pseudoscience. Vendors of AI snake oil use deceptive marketing, fraud, and even scams to sell their products as solutions to problems for which AI techniques are ill-suited or completely useless.

The environment in which snake oil and pseudoscience thrive is characterised by genuine excitement, unchallenged hype, bombastic headlines, and billions of dollars of investment, all coupled with a naïve belief that technology will save us. Journalist James Vincent writes about his first encounter with a PR pitch for an AI toothbrush and reflects on the challenges of covering hyped technology without further feeding unrealistic expectations. Drawing on his experience as a content moderator for Google in the mid-2010s, Andrew Strait makes a plea against placing too much hope in automated content moderation.

Each piece in this book provides a different perspective and proposes different answers to problems that circle around a shared question: what is driving exaggerated, flawed or entirely unfounded hopes and expectations about AI? Against broad-brush claims, they call for precise thinking and scrupulous expression.

For Deborah Raji, the lack of care with which engineers so often design algorithmic systems today belongs to a long history of engineering irresponsibility in constructing material artefacts like bridges and cars. Razvan Amironesei, Emily Denton, Alex Hanna, Andrew Smart and Hilary Nicole describe how benchmark datasets contribute to the belief that algorithmic systems are objective or scientific in nature. The artist Adam Harvey picks apart what exactly defines a “face” for AI.

A recurring theme throughout this book is that harms and risks are unevenly distributed.


Tulsi Parida and Aparna Ashok consider the effects of inappropriately applied AI through the lens of the Indian concept of jugaad. Favour Borokini and Ridwan Oloyede warn of the dangers that come with AI hype in Nigeria’s fintech sector.

Amid this fevered atmosphere of hype, this book makes the case for nuance. It invites readers to carefully separate the real progress that AI research has made in the past few years from fundamentally dubious or dangerously exaggerated claims about AI’s capabilities.

We are not heading towards Artificial General Intelligence (AGI). We are not locked in an AI race that can only be won by those countries with the least regulation and the most investment.


Instead, the real advances in AI pose both old and new challenges that can only be tamed if we see AI for what it is: a powerful technology that is, at present, produced by only a handful of companies, with workforces that are not representative of those who are disproportionately affected by its risks and harms.

Notes

1. Narayanan, A. (2019) How to recognize AI snake oil. Princeton University, Department of Computer Science. https://www.cs.princeton.edu/~arvindn/talks/MIT-STS-AI-snakeoil.pdf

2. Cross, T. (2020, 13 June) An understanding of AI’s limitations is starting to sink in. The Economist. https://www.economist.com/technology-quarterly/2020/06/11/an-understanding-of-ais-limitations-is-starting-to-sink-in

3. Mateos-Garcia, J., Klinger, J., Stathoulopoulos, K. (2020) Artificial Intelligence and the Fight Against COVID-19. Nesta. https://www.nesta.org.uk/report/artificial-intelligence-and-fight-against-covid-19/

4. Peach, K. (2020) How the pandemic has exposed AI’s limitations. Nesta. https://www.nesta.org.uk/blog/how-the-pandemic-has-exposed-ais-limitations/





