Chapter 16
The power of resistance: from plutonium rods to silicon chips
By Aidan Peppin
Around the same time that Turing was devising his famous test, many countries were pouring concrete into the foundations of the world’s first nuclear power stations. Nuclear power then was a technology that, like AI today, generated as much hope and hype as it did anxiety.
Today, the abundant hype around AI is being met with growing resistance towards harmful algorithms and irresponsible data practices. Researchers are building mountains of evidence on the harms that flawed AI tools may cause if left unchecked. Civil rights campaigners are winning cases against AI-driven systems. Teenagers scorned by biased grading software are carrying placards and shouting “F*** the Algorithm!”. Public trust in AI is on rocky ground.
This puts AI’s current success at a potential tipping point. Fuelled by misleading AI “snake oil”, today’s unrealised hype and very real harms are creating public resistance that could push AI into another stagnant winter, one that stifles its benefits along with its harms, much like the winters of the 1970s and 80s, when AI innovation was curbed and political will failed. But resistance also highlights what responsible, publicly acceptable technology practice could look like, and could help prioritise people and society over profit and power. To understand how, nuclear power’s history of hype and resistance offers a useful guide. After all, the term “AI winter” was in part inspired by the idea of nuclear winter.1
Despite the lethal dangers of nuclear technology being made horrifyingly evident at Hiroshima and Nagasaki in the closing days of the Second World War, the following years saw nuclear’s military applications moulded to civil purposes. In the mid-1950s, its potential for good was manifested in the world’s first commercial nuclear power station at Calder Hall in Cumbria, UK, on the site now known as Sellafield.
Since then, trends in public acceptance of nuclear power have been well studied, and thousands of people across the world have been surveyed. These studies have shown how, in nuclear power’s early years, the fearsome image of mushroom clouds was perceived as the distant past, while civil nuclear power was promised by governments and industry as not only the future of electricity generation, but the herald of a brave new technological society built on unlimited energy.2 However, many hyped-up promises never materialised (such as Ford’s proposed nuclear-powered car, the Nucleon) and the fear of nuclear war lingered.
Then, in 1986, the potent, dangerous reality of nuclear power was proven anew when reactor No. 4 at the Chernobyl nuclear power station failed catastrophically. The combined technical, chemical, institutional, and administrative breakdown killed dozens outright, is projected by official estimates to cause thousands of further deaths, and spewed radiation across thousands of square miles (reaching as far as Sellafield in Cumbria, in a bitter irony).
In the wake of the Chernobyl disaster, support for—and investment in—civil nuclear power plummeted. Fears of nuclear’s destructive force were absorbed into the growing environmental movement, and the early enthusiasm for nuclear power swiftly gave way to a “dark age” for the technology, in which resistance grew and innovation stumbled. Nuclear power saw little new activity until the mid-2000s, when some nations, including the UK, France, US and China, began (and continue) to reinvest in nuclear power, in part thanks to public recognition that it generates low-carbon electricity.
Though the public is often dismissed as having little influence on the material shape of technology, the history of nuclear power shows that public opinion can have a profound impact on policy and practice—that is, how governments invest in and regulate technologies, how innovators develop them, and how people use them. This is seen with many other technologies too, from genetic engineering to cars, and now in AI.
The recent kindling of public resistance against AI must be recognised as the vital signal that it is. The previous AI winters of the late 20th century followed seasons of over-promising: computing hardware lagged behind AI theory, the pace of innovation slowed, and excitement waned as hyped-up claims about what AI could do went unfulfilled. Today, however, superfast processors, networked cloud servers, and trillions of bytes of data mean many of those claims appear, at last, to be realised. AI systems are helping detect and diagnose macular degeneration, powering GPS systems, and recommending the next best thing since Tiger King.
But success is a double-edged sword. As AI’s abilities are put to use in everyday applications, so too are its dangers. The dubious data practices of firms like Cambridge Analytica have disrupted politics, algorithms like COMPAS have exacerbated pre-existing bias and injustice, and facial recognition systems deployed by law enforcement are infringing on civil and data rights. As people realise they are largely powerless to challenge and change these systems by themselves, public concern and resistance have been spreading to all corners of the AI landscape, led by civil rights campaigners, researchers, activists, and protesters. If ignored, this resistance won’t just weed out harmful AI. It may stifle the social and political will that supports beneficial AI too.
Social scientists remind us that correlation is not causation. Simply because public support for, and resistance to, nuclear power have neatly tracked levels of government investment and technological innovation does not mean one creates the other. But an established body of research shows that they are connected, and that public resistance is an important signal.
After Chernobyl, public resistance to nuclear power made clear that its risks and dangers were no longer considered acceptable. This had a chilling effect on political will and investment. It wasn’t until industry and governments committed to more responsible, transparent, and accountable practices that investment and innovation in nuclear power began to reignite. Even today, nuclear power is not a widely accepted technology; public support is still rocked by incidents like the 2011 Fukushima disaster, and healthy public scepticism plays an active part in balancing hype against responsible practice.
Silicon chips and plutonium fuel rods have little in common technically, and the threat of nuclear meltdown differs from the threats posed by biased algorithms. But we cannot let the hope that AI may never suffer a disaster on the scale of Chernobyl lure us into complacency. The many small, individual disasters already occurring around us every day will continue to add up. As irresponsible AI practices lead to social, political, cultural, and economic harms, public acceptance will falter and resistance will grow.
However, resistance is not a force to fear: it is a powerful signal. It may threaten the current hype-fuelled AI summer, but it need not stifle responsible innovation. Harnessed well, public resistance can help shine a light on what must be improved, weed out AI “snake oil”, define what is socially acceptable, and help a more responsible AI industry flourish. But if it is ignored, the current public mood around algorithms and big data could forecast more than just the winds of change. It could be the first cold breeze of another AI winter.
Aidan Peppin is a Senior Researcher at the Ada Lovelace Institute. He researches the relationship between society and technology, and brings public voices to ethical issues of data and AI.
Notes
1. Crevier, D. (1993) AI: The Tumultuous Search for Artificial Intelligence. New York, NY: Basic Books. p. 203
2. Gamson, W.A. & Modigliani, A. (1989) Media Discourse and Public Opinion on Nuclear Power: A Constructionist Approach. American Journal of Sociology, 95 (1), 1–37.