Fake AI 


Edited by Frederike Kaltheuner


Meatspace Press (2021)

Book release: 14/12/2021









Chapter 10


Talking heads


By Crofton Black

“In these fables of failed predictive systems we discern a prediction of our present.”


In the 13th century, it was said, the scientist and philosopher Albertus Magnus built a talking head. This contraption, skilfully manufactured with hidden wheels and other machinery, spoke so clearly that it frightened his pupil, who smashed it.1

Like us, the people of the Middle Ages told themselves myths about technology. One particular mythic strand involved talking heads. Like today’s systems, these heads received inputs from their builders and offered outputs that told of things to come. However, the heads failed as predictive systems. The tales warned of automated data operations gone wrong.

Such fables probably resonated with their audiences for several reasons. One is that although there were many known examples of automata that were capable of moving or performing other functions, the possession of a voice—the sign of a reasoning soul—crossed a significant threshold. Another is that these systems occupied an ambiguous, or sometimes overtly transgressive, place in the taxonomy of technology.

In the Middle Ages, technology—the “science of craft”—embraced three categories, which we might for convenience call exoteric, esoteric, and demonic.

Exoteric technologies, such as mills, bows, cranks, and wheelbarrows, operated in accordance with physical laws which were well understood and clearly visible to all. They were uncontroversial. Esoteric technologies, on the other hand, occupied the space of so-called “natural” magic. This involved secret or hidden mechanisms, which were not commonly known or understood. Those who exploited these mechanisms argued that they were nonetheless natural, because they operated through certain intrinsic features of the universe.

The workers of natural magic construed the universe in terms of analogies. They perceived relationships between heavenly and earthly bodies, between stars and comets, animals and plants, and they believed in a cause and effect straddling these analogies. Not unlike, perhaps, the data analysts of today, they built their universe through proxies.

The danger in the practice of natural magic, as witnessed in writings and controversies across the centuries of the early modern period, was that the line dividing it from the third category of technology, demonic magic, was highly contested. Demonic magic comprised any activities for which no natural explanation could be found, and which therefore had to be the work of demons, persuaded by scholars via ritual to do their bidding. Martin Delrio, for example, recounting the fate of Albert’s head in his Investigations into Magic (1599), concluded that the head could not be the product of human ingenuity. This was impossible, he said, for such human-manufactured idols “have a mouth but speak not”.2

Therefore it was a demon that spoke.

In the absence of plausible explanation, there was a black box, and in the box was thought to be a demon. The story of Albert’s head is about a scientific—literally, knowledge-making—endeavour which ended in the construction of an object of horror.

Before Albert there was Gerbert of Aurillac. Gerbert, the historian William of Malmesbury recounts, constructed his head at the apex of a career investigating computational and predictive systems.3 He had mastered judicial astrology, the analysis of the effects of the movements of heavenly bodies on human lives. He had “learned what the singing and the flight of birds portended”. And as a mathematician he had established new ways of manipulating the abacus, developing for it “rules which are scarcely understood even by laborious computers”.

The head was capable of responding in binary to a question, answering “yes” or “no”. But its responses, which lacked context and strictly adhered to narrowly defined inputs, were misleading. Gerbert, who lived in Rome, had asked the head whether he would die before visiting Jerusalem. The head replied that he would not. But it failed to specify that there was a church in Rome of that name, where Gerbert did indeed go. Soon after, he died.

Likewise, it was said that in Oxford in the 13th century, the philosopher Roger Bacon built a bronze talking head as a component in a defence system which would render England invulnerable.4 Yet after building the head, he fell asleep, leaving a servant to monitor it, whereupon the head uttered three obscure phrases and exploded. The servant, who was blamed for the debacle, argued that a parrot with the same duration of training would have spoken more sense. Bacon himself was convinced that the head had something profound to communicate. What this was, no one would ever know, because only the machine’s builder was able to interpret it. To those charged with its management, its findings were opaque.

A story told of Bacon’s teacher, Robert Grosseteste, runs parallel. He, too, built a head, and worked on it for seven years. But his concentration flagged for 30 seconds as he sought to finish the task. A lapse in the internal mechanism or the underlying data? We are not told. As a result, he “loste all that he hadde do”.5

These fables, centuries old, offer us patterns which we may find familiar: allegories of modern encounters with predictive systems. Gerbert failed to appreciate how his system’s outputs were conditioned by unperceived ambiguities in its inputs. A flaw in the underlying dataset produced misleading results. Bacon left his system running under the management of someone not trained to understand its outputs. Consequently, nothing useful was derived from them. Grosseteste’s machine failed owing to a very small programming discrepancy, a mere half-minute’s worth in seven years. The effects of this miscalculation, at scale, rendered the machine useless. Whether or not Albert’s black box was effective, the tale does not tell us, but its inexplicability proved a fatal barrier to its acceptance.

In these myths, the creators of these heads dreamt of the future. In these fables of failed predictive systems we discern a prediction of our present.

Crofton Black is a writer and investigator. He leads the Decision Machines project at The Bureau of Investigative Journalism. He has a PhD in the history of philosophy from the Warburg Institute, London.


Notes

1. De’ Corsini, M. (1845) Rosaio della vita: trattato morale. Florence: Società Poligrafica Italiana.

2. Delrio, M. (1657) Disquisitionum magicarum libri sex. Cologne: Sumptibus Petri Henningii.

3. Giles, J. A. (ed.) (1847) William of Malmesbury’s Chronicle of the Kings of England. London: Henry G. Bohn.

4. Anon. (1627) The famous historie of Fryer Bacon. London: A.E. for Francis Grove.

5. Macaulay, G. C. (ed.) (1899) The Complete Works of John Gower. Oxford: Clarendon Press.










