Chapter 8
Learn to take on the ceiling
By Alexander Reben
“So next time you encounter or read about an AI, ask yourself the following questions: Am I projecting human capabilities onto it which may be false? Are sci-fi ideas and popular media clouding my perceptions? Are the outputs cherry picked?”
Whether it’s forming a scientific hypothesis or seeing deities burnt into toast, humans are hard-wired to find patterns in the world. We are adept at making sense of the chaotic or unknown from information we already know.
This can elevate understanding as much as it can mislead. One consequence is our tendency to project the human onto the artificial, a phenomenon called anthropomorphism. Similarly, ascribing animal-like qualities to non-animals is zoomorphism. This can result in curious behaviour. People give names to their robot vacuums, which they hate to see struggling in corners. Apple users thank Siri when she answers a question. A car owner feels personally attacked when their car breaks down.
Advanced technologies are particularly ripe for this sort of misperception, as operating mechanisms are often opaque, capabilities are increasingly human or animal-like, and the results can seem magical to those who do not have insight into the design. Indeed, as soon as room-sized computers were first developed, they were commonly referred to as “electronic brains”. Obviously, they were nothing like brains, but the phrase sounded super futuristic, gave the public a grasp of their functions, and grabbed more attention than something bland, like “Electronic Numerical Integrator and Computer”.
While metaphorical descriptions which equate machine capabilities with human traits may initially appear harmless, they can lend more agency to the technology than is warranted. If you read the headline “Scientists Invent Giant Brain” you would imagine much about that device based on your understanding of a brain and its functions. When we ascribe human traits to technology, those traits often come with assumptions of abilities which may not be true. For example, even if an AI system produces human-like text, it cannot be assumed that it also has other capabilities associated with language, such as comprehension, perception or thought. Nevertheless, most of us do this unconsciously and automatically. If we are aware of our tendency to anthropomorphise, it’s easier to notice when this sort of bias kicks in and to recognise advanced technology for what it is.
Equally, concepts drawn from sci-fi are often used as a shorthand to describe (and sometimes overrepresent) the capabilities of technology. Journalists probably do this because it creates a quick and familiar image in the reader’s mind. (More likely, it is to chase clicks.) One example is when self-balancing skateboards were called “hoverboards” in reference to the movie Back to the Future. Since the devices worked using wheels which were firmly planted on the ground, no hovering was involved at all. Still, “hoverboard” was a better buzzword than, say, “self-balancing transport”. The implication was that sci-fi technology was here now, even though it was clearly not.
Something similar has happened with artificial intelligence. There are two important categories: artificial intelligence (AI), a machine-learning system that carries out a specific task, and artificial general intelligence (AGI), a system capable of doing any non-physical task as well as, or better than, a human. AI in a car can determine whether or not there is a stop sign in an image captured by a camera. An AGI could do the same while also being the CEO of the company that builds the cars and writing flawless poetry.
Unfortunately, in media reporting, AI is sometimes treated like AGI. Take the example of an AI chatbot developed by Facebook which sometimes produced incomprehensible English output when two such bots “spoke” to each other. This was an artefact of the system attempting to find a more efficient way to communicate, a sort of shorthand. Here’s an example of the two chatbots negotiating how to split up balls between them:
Bob: i can i i everything else
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i have everything else
Alice: balls have a ball to me to me to me to me to me to me to me to me
This kind of semantic soup became such an obstacle to the intended function of the system that the researchers stopped that line of research.
A human analogue to this AI chat might be drawn with military acronyms or scientific terms, which make sense to those within a particular communication system, but not to those outside it. However, alarmist headlines evoking images of a movie hero “killing” an AI poised to destroy humanity were used to report the incident: “Facebook engineers panic, pull plug on AI after bots develop their own language”. This kind of sensationalism leads to misunderstanding and fear, when the reality of AI is often far more mundane.
Another phenomenon which contributes to the misrepresentation of AI’s capabilities is “cherry picking”, whereby only the best system outputs are shared. The rest are hidden or omitted, and no mention is made of the winnowing process. Sometimes this is done openly, such as when artists select AI outputs for artworks. Other times it is a tactic used to deliberately overrepresent capabilities: to make systems appear better than they are, to gather more attention and support, or to make the system seem more interesting to potential investors.
Within an artistic context, where an artist works with the system to make a creative output, “cherry picking” is more akin to curation. For example, in one of my own artworks, I trained an AI on fortune cookie fortunes so it could generate new ones. I then chose outputs I liked and had them printed and enfolded in actual fortune cookies.
“Your dreams are worth your best pants when you wish you’d given love a chance”
“I am a bad situation”
“Your pain is the essence of everything and the value of nothing”
“Today, your mouth shut”
Fairly unusual fortunes which make sense, if you don’t think about them too hard. Many others which I left out didn’t catch my eye initially. Yet they kind of have meaning, if you bend your mind a bit:
“There is a time when you least expect it”
“Learn to take on the ceiling”
“Emulate what you need to relax today”
“You are what you want”
Then there are the really nonsensical ones:
“Nappies and politicians need to listen to”
“You are a practical person with whom you exchanged suggestive glances?”
“You will be out of an ox”
“The greatest war sometimes isn’t on the ground even though friends flatter you”
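As a rough illustration of what such a generate-then-curate process can look like, here is a minimal sketch in Python. It uses a simple word-level Markov chain rather than the neural network described above, and the handful of seed fortunes are invented placeholders; the point is only the shape of the workflow: generate many candidates, then have a human pick the keepers.

# Toy "fortune generator": build a word-level Markov chain from a small
# corpus of fortunes, sample many candidates, then curate by hand.
# The seed fortunes below are invented placeholders, not the real corpus.
import random
from collections import defaultdict

def build_chain(fortunes):
    # Map each word to the list of words that follow it anywhere in the corpus.
    chain = defaultdict(list)
    for line in fortunes:
        words = line.split()
        for current_word, next_word in zip(words, words[1:]):
            chain[current_word].append(next_word)
    return chain

def generate(chain, max_words=12):
    # Random-walk the chain to produce one fortune-like string.
    word = random.choice(list(chain.keys()))
    output = [word]
    for _ in range(max_words - 1):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

seed_fortunes = [
    "You will find happiness where you least expect it",
    "Your hard work will soon pay off",
    "A pleasant surprise is waiting for you",
]
chain = build_chain(seed_fortunes)
candidates = [generate(chain) for _ in range(20)]
for fortune in candidates:
    print(fortune)  # the human "cherry picks" which of these to keep

The interesting decisions are not in the code but in the curation: which of the twenty candidates get printed and folded into cookies, and which quietly disappear.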
Cherry picking without disclosing that you are doing so leads to a misrepresentation of current AI systems. They are already impressive and improving all the time, but they are not quite what we see in sci-fi.
So next time you encounter or read about an AI, ask yourself the following questions: Am I projecting human capabilities onto it which may be false? Are sci-fi ideas and popular media clouding my perceptions? Are the outputs cherry picked? We need to keep in mind that if current AI is to AGI as “self-balancing transports” are to hoverboards, we are still a long way from getting off the ground.
Alexander Reben is an MIT-trained artist and technologist who explores the inherently human nature of the artificial.
Next: Chapter 9
Uses (and abuses) of hype
by Gemma Milne