This semester I am lucky enough to be in an amazing new class, Automatic Everything, a transdisciplinary design lab at Parsons School of Design that focuses on the intersection of speculative design and artificial intelligence. For our first week, we were asked by our professors Anthony Dunne, Fiona Raby, and visiting lecturer Joseph Lemelin to do some broad research across films, articles, academic papers, podcasts, etc. to begin to identify what people's dreams are concerning Artificial Intelligence. The goal of this exercise is to start to pull out the fears, desires, opportunities, and consequences that people have ascribed to artificial intelligence, both now and, as I've learned from my research, seemingly for a very, very long time.
This blog post is some rough-and-ready notes, conjectures, ideas, and interesting sources that I pulled out during the research as I try to understand which dreams I am most interested in exploring. At the end of this post, I will try to identify some core questions about AI that particularly interest me, as well as general themes and theories stemming from those questions that I think have the potential to be explored further through making projects.
Notes, Reflections, Ruminations on Humans' Dreams of Artificial Intelligence
In Our Time Podcast (2005) – Artificial Intelligence
I started off my research with a podcast from 2005 that featured three scientists and historians discussing some of the fundamental questions of AI and the history of how humans have thought about the intelligence of both machines and people. This podcast was structured around three big questions:
- Who were the early pioneers of AI and what drove them to imitate the operations of the human mind?
- Is intelligence the defining characteristic of humanity?
- How has the quest for AI been driven by warfare and conflict in the 20th century?
Starting with the automatons of the 18th century, it should be noted that people were super freaked out by these creations. At the time, intelligence was not considered what it meant to "be human"; rather, doing human-like things was the hallmark. By creating machines that could replicate human-like tasks such as playing music, eating, and defecating, people were faced with a real philosophical challenge about what it meant to be human, a challenge we might not feel the same way about today.
These ideas were challenged again through the combination of the Enlightenment, evolutionary theory, and mechanisation, which further refined what it meant to be "intelligent". In the Enlightenment period, you find people trying to develop an idea of what the human mind is, much of it because of these automata and their life-like behaviour. The development of evolutionary ideas raised questions about the specialness or non-specialness of the human condition, a question echoed in the mechanical revolution also taking place. Evolution challenged the idea that humans are special (created by God), suggesting instead that they differ from animals and plants only in the complexity of their brains, which gives rise to a higher intelligence.
The speakers on the podcast note that Charles Babbage, a noted British mathematician, was a bridge between these different influences, since he was inspired by the idea of automata to consider whether machines could equally be programmed to solve mathematical problems. He developed the idea of the Difference Engine, a machine which could compute mathematical tables that were otherwise inefficiently and slowly calculated by people, people who were literally called computers (which is where we get the term). He was inspired by the ideas of evolution to try to solve this math problem the same way: by breaking the complexity down into several simple processes and then building it back up again; in effect, reproducing evolution on machines to solve complex mathematical equations.
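As a small aside, here is a sketch of the method of finite differences, the trick the Difference Engine mechanised; the code and the example polynomial are my own illustration, not Babbage's design. Once a polynomial's starting differences are set up, every further table entry comes from simple repeated addition, with no multiplication at all.

```javascript
// Tabulate a polynomial from its initial finite differences by repeated
// addition, the operation Babbage's Difference Engine performed mechanically.
function tabulate(initialDifferences, steps) {
  const diffs = initialDifferences.slice(); // [f(0), Δf(0), Δ²f(0), ...]
  const table = [];
  for (let i = 0; i < steps; i++) {
    table.push(diffs[0]);
    // each difference absorbs the one below it: pure addition
    for (let j = 0; j < diffs.length - 1; j++) {
      diffs[j] += diffs[j + 1];
    }
  }
  return table;
}

// f(x) = x² + x + 1 has starting differences [1, 2, 2]:
console.log(tabulate([1, 2, 2], 5)); // [1, 3, 7, 13, 21]
```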
Interestingly (and I don’t remember anything about this from history class…), his friend and collaborator Ada Lovelace (weirdly the daughter of Lord Byron!?!?!) was a mathematical genius, and in translating a French article about Babbage’s proposed engine, she ended up adding notes that amounted to a mathematical argument for the universality of an engine of this sort, the Analytical Engine. This Analytical Engine would do mathematics in general, instead of just producing tables. Lovelace’s idea, although never realised, was that you could solve any type of mathematical equation with this type of universal computer. Babbage even went so far as to rip off the punched cards of the Jacquard loom as a way to program this theoretical machine. Sound familiar?
But one very interesting challenge this Analytical Engine created, theoretical or not, was to the classic Greek idea that mathematics is one of the highest functions of the human mind, part of what makes us human as opposed to animals. If these engines could do mathematics, what did that mean about human intelligence and what differentiated us from animals?
The speakers in the podcast then fast-forward to the 1940s and look at John von Neumann, a Hungarian-born US scientist who did research on ballistics tables, similar to the tables mentioned above. He was asked to look at a computer being developed and worked with the team to solve one of its biggest design challenges: the machine had to be programmed from the outside, by setting individual switches on the exterior, while its memory was kept inside. Von Neumann helped come up with the idea of moving the program into the computer's internal memory alongside the data, blurring the distinction between what was memory and what was program, and very quickly leading to innovations in computing power and flexibility. The Analytical Engine was getting closer and closer.
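Here is a toy sketch of that stored-program idea, with a made-up two-word instruction format (an illustration of the principle, not Von Neumann's actual design): the program and the data sit in the same memory array, so instructions are just values that could themselves be read or rewritten like any other data.

```javascript
// A toy stored-program machine: code and data share one memory array.
const memory = [
  "LOAD", 8, // put memory[8] into the accumulator
  "ADD",  9, // add memory[9] to the accumulator
  "HALT", 0,
  0, 0,      // unused cells
  2, 3       // data, living in the same memory as the program
];

let acc = 0; // accumulator
let pc = 0;  // program counter: just another index into memory
while (memory[pc] !== "HALT") {
  const op = memory[pc], arg = memory[pc + 1];
  if (op === "LOAD") acc = memory[arg];
  if (op === "ADD")  acc += memory[arg];
  pc += 2;
}
console.log(acc); // 5
```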
It is important to understand the evolution of artificial intelligence in relation to the evolution of computing. In principle, AI could be demonstrated by any machine. It is the universality of computers, though, which gives them an edge in producing AI. Since a universal machine, or a computer, can do a wide range of calculations and operations, it can theoretically do anything a special-purpose machine could do, or imitate its functions. Therefore, if AI were ever truly to be realised, it must be through the use of a computer. At the time, however, memory was a huge impediment, and storage limits made it very difficult to mimic these specialised functions, even though the theory was there.
Another figure who is important in this intertwined evolution of computing and notions of artificial intelligence is Claude Shannon, an engineer best known for having formalised "Information Theory", the statistical theory of communications on which all modern computing systems are now based. He realised that computers don't need to be used only for doing mathematical equations outright: the numbers can be used as symbols, or as abstractions of higher-level functions, basically as 0s and 1s, ons and offs, describing the state of things in the system. Using this idea, he described an algorithm that looked at the states of a chess game and how they could be numerically programmed into a computer using these 1s and 0s. The system was purely numerical on the inside, but from the outside it looked like the manipulation of ideas. The approach he laid out for chess play is still in use in AI today and notably underpinned Deep Blue's victory over then World Chess Champion Garry Kasparov in 1997.
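A minimal sketch of that kind of state-based game playing, in the spirit of Shannon's proposal rather than his actual text: a position is just data, an evaluation function reduces it to a number, and the machine searches for the move whose worst-case score is best. Here evaluate, legalMoves, and applyMove are hypothetical stand-ins for a real game's rules.

```javascript
// Minimax over numerically encoded game states (a sketch, not Shannon's code).
// Assumes evaluate(state) -> number, legalMoves(state) -> array of moves, and
// applyMove(state, move) -> new state are supplied by the game's rules, and
// that every searched state has at least one legal move.
function minimax(state, depth, maximising) {
  if (depth === 0) return evaluate(state); // purely numerical judgement
  const scores = legalMoves(state).map(move =>
    minimax(applyMove(state, move), depth - 1, !maximising)
  );
  return maximising ? Math.max(...scores) : Math.min(...scores);
}
```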
This abstraction of symbols and strategic moves into mathematical representations was another huge challenge to the idea of human intelligence. It was realised that ideas could be expressed through math, and that computers could therefore manipulate them, if in a somewhat simplistic way. In many ways, this abstraction was one of the most significant developments on the way to computing and AI as we know them today.
Of course, you can’t talk about Artificial Intelligence without discussing Alan Turing, the brilliant British mathematician known for breaking the Enigma code during WWII. Through his explorations of mathematics, Turing ended up asking, and partially answering, many questions about human versus artificial intelligence. In his 1936 paper “On Computable Numbers, with an Application to the Entscheidungsproblem”, Turing was trying to tackle an arcane math problem that had been bothering mathematicians for some time. He answered the problem, but it was the way in which he did so that was important for AI. Instead of attacking the problem directly, he used a metaphor of human intelligence for his Universal Turing Machine, a machine that can imitate any special-purpose calculating machine (which sounds awfully similar to Lovelace’s theoretical Analytical Engine…). The metaphor he used for understanding how this machine would work is that of a number of individual human clerks overseen by a generalist who possesses all the instructions for the things they need to do. Through this kind of intelligent subdivision of labour, any information processing could be broken into simple, mechanical steps. This method, according to Turing, could carry out any calculation you would want to do. By solving this math problem, Turing ended up giving a description of a new type of machine, based on intelligent human action, that is at the heart of today’s modern computer.
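As a rough illustration of what such a machine boils down to, here is a toy step loop, assuming a transition table of my own invented shape, rules[state][symbol] = { write, move, next }; it is a sketch of the idea, not Turing's 1936 formalism verbatim.

```javascript
// A toy Turing-machine interpreter: read a symbol, look up a rule,
// write, move the head, change state; repeat until the "halt" state.
function run(rules, tape, state = "start", head = 0) {
  while (state !== "halt") {
    const symbol = tape[head] ?? "B"; // "B" marks a blank cell
    const { write, move, next } = rules[state][symbol];
    tape[head] = write;
    head += move; // +1 = right, -1 = left
    state = next;
  }
  return tape;
}

// Example rule table: flip every bit until the first blank is reached.
const flip = {
  start: {
    0: { write: 1,   move: 1, next: "start" },
    1: { write: 0,   move: 1, next: "start" },
    B: { write: "B", move: 0, next: "halt" },
  },
};
console.log(run(flip, [1, 0, 1, 1])); // [0, 1, 0, 0, "B"]
```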
Again, this calls into question what exactly human intelligence is. In a later paper from 1950, “Computing Machinery and Intelligence”, Turing tries to understand whether a human brain could be simulated by mathematics using this type of theoretical Universal Turing Machine. He again tries to answer the question not directly but more obliquely, using the metaphor of the imitation game, a game where an interrogator asks questions, one at a time, of a man and a woman they cannot see. The interrogator must decide, based on the answers to these questions, which person is the man and which is the woman. Turing said: well, what if you put a machine in there and had people try to figure it out? If you put his Turing Machine, his computer, behind one of the curtains, and the interrogator guessed incorrectly more than 50% of the time (that is, they guessed wrong over half the time), you had a machine that could therefore be called intelligent. Of course, no such machine, with natural language processing and enough computing power and storage to perform these operations, was yet possible in the 1950s when this paper was written, but Turing put forward the idea that if you can’t tell it’s a machine rather than a human, then the machine must be intelligent.
For this reason, it is important to ask whether the human mind can simply be described in terms of mathematics. The podcast presenters mentioned that Galileo famously said “the universe is written in the language of mathematics”, but can the mind just be defined in terms of applying mathematical formulae to the symbols we take in? This has strong implications for how we understand both human and artificial intelligence, and for what the distinction between the two is.
One more recent development that calls this further into question is the idea of neural networks. Again, the technology uses a human system, the brain, as a model for creating systems of interactions to program machines. Neural nets mimic the idea that the brain is made up of huge numbers of tiny nodes that can’t do much on their own, but can recognise the patterns of what other cells are doing and change their behaviour accordingly. This is fundamentally how the brain is thought to learn. While we can now build neural networks that imitate and replicate mechanisms of the human brain, we have to distinguish between the mechanisms of the brain and the actual mind, where the mind is defined as the total capacity of a human to think. In the history of AI, much of the confusion and controversy stems from this distinction. While you can create machines that include mechanisms that are automatic, roughly having a mind of their own, when you ask such a machine if it is human, it invariably must answer that it is not: it has its own mechanisms for processing information, and it will distinguish itself from a human.
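To ground that, here is a single artificial "neuron" as a minimal sketch (the weights and threshold are made-up numbers, not from any real network): on its own it only sums its inputs and fires past a threshold, and it is only in networks of such units, with weights adjusted by learning, that pattern recognition emerges.

```javascript
// One artificial neuron: a weighted sum of inputs compared to a threshold.
function neuron(inputs, weights, threshold) {
  const sum = inputs.reduce((acc, x, i) => acc + x * weights[i], 0);
  return sum >= threshold ? 1 : 0; // fire, or stay silent
}

// With these (arbitrary) weights the unit fires only when both inputs are on:
console.log(neuron([1, 1], [0.6, 0.6], 1)); // 1
console.log(neuron([1, 0], [0.6, 0.6], 1)); // 0
```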
Twentieth-century thinking, and thinking since, very much came to centre on new questions about and understandings of intelligence given these advancements. People started to think that the existing methods for gauging intelligence, and for looking for it in machines, were too pure: they dealt in symbols and math, as opposed to things like language, context, nuance, bodies, and all the rest. Math, in other words, might not be the highest order or definition of intelligence humans have, as it was once venerated to be. Instead, people started to believe that intelligence is a more complete concept that also includes language, context, nuance, embodiment, and everything else that defines what humans can do and what they are. More importantly, how do you define the intelligence you want machines to display in this context? And if you are building machines on good models of how the human brain works, how do you understand how the brain generates consciousness and mind?
In addition, 20th-century thought on intelligence started to shift towards the ability to “feel something” and to know that you are feeling it, although Turing pointed out that this argument was solipsistic, since you can’t know that another person truly feels without being that person; it’s all subjective.
In light of these changing ideas, the podcast also looks into why intelligence became the hallmark of what it meant to be human in the 20th century (versus doing things a human does and acting like a human, the notion challenged by the automata of the 18th century). Why did the idea of artificial intelligence become such a troubling one? The speakers conjecture that it is a result of the 20th century being a century of conflict like no other before it. The military drive to create faster and faster computation machines was met by automating control and calculation inside those machines rather than keeping them manual, and this push for automation changed the way people felt about intelligence itself. Automated computation came to be thought of as “the machine”, inferior to the range of things humans could do.
As a result, the Western idea of machines and computers as pure tools for expressing human ingenuity was born. Interestingly, I think an article I read on how the Japanese AI market has grown very differently from the Western model in the 21st century, favouring human-like machines and emotional AI that can enrich people’s lives over practical AI that coordinates systems and logistics, echoes this sentiment perfectly and shows that it might be more of a Western phenomenon.
On a side-note, the podcast also briefly touched on the issue of thinking of the relationship between humans and machines as “man-machine” versus “woman-machine” or “human-machine”, and one speaker points out a relationship to the idea of “reason” (the mind) being masculine and the body or “matter” (potentiality???) being feminine in nature. While this is a much bigger story, the speaker questions whether this might have something to do with mathematics once being seen as the epitome of intelligence and reason.
Today, however, we are beginning to recognise that the word intelligence has several different and nuanced meanings: social intelligence, emotional intelligence, intellect, etc. Depending on one’s field (psychology, computer science, etc.), very different understandings of intelligence have emerged, and perhaps intelligence is the wrong term for AI; maybe smart computing or some other term is better, since it would preserve the distinction between the different types of human intelligence that exist.
One important idea is that of emergence, and the “emergent intelligence” and “emergent behaviour” that started to come out of AI research in the 1980s. The study of artificial life, or a-life, focused on robotics and on generating intelligence within computers, but not individual intelligence; rather, it meant creating populations of machines and making the system as a whole intelligent. This idea comes from evolution yet again. “Emergent behaviour” is the idea that a group of animals, like birds, can act together (intelligently) as a system or group, but not as individuals (take flocking, for example; there is a small sketch of this below). This research gave rise to the idea that intelligence is the “emergent behaviour” of the mind; that is, intelligence requires all of the different little pieces of mind and human experience in order to grow. For this reason, people started to point out that computers are not embodied and, as a result, can’t sense things. Just as you can’t explain to someone how to ride a bike, since they must feel it with their body and learn, could you say that computers can’t be intelligent without embodiment? People started to believe that a lot of human intelligence is emergent from the whole human system of mind and body, dependent on the ability to sense. In addition, people started to believe that you need culture to be intelligent, a knowledge of human behaviour and society. Without this cultural situation, you couldn’t have a truly intelligent being.
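Here is the promised sketch of one flocking rule (cohesion), with arbitrary numbers of my own choosing: each "bird" follows only a tiny local rule, nudging its velocity toward the average position of the group, yet the flock as a whole appears to move with a shared intent that no individual possesses.

```javascript
// One step of a cohesion-only flocking simulation (a sketch; real boids
// models also include separation and alignment rules).
function cohesionStep(birds, pull = 0.01) {
  // centre of the flock, recomputed each step from the birds' positions
  const cx = birds.reduce((s, b) => s + b.x, 0) / birds.length;
  const cy = birds.reduce((s, b) => s + b.y, 0) / birds.length;
  return birds.map(b => ({
    x: b.x + b.vx,
    y: b.y + b.vy,
    // each bird drifts slightly toward the group's centre
    vx: b.vx + (cx - b.x) * pull,
    vy: b.vy + (cy - b.y) * pull,
  }));
}

let flock = [{ x: 0, y: 0, vx: 1, vy: 0 }, { x: 10, y: 5, vx: 0, vy: 1 }];
flock = cohesionStep(flock); // run repeatedly and the birds converge and travel together
```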
One final point the podcast made, which I found very interesting, is the idea that AI isn’t about trying to develop human minds in human bodies; it is about trying to develop machine minds in machine bodies in order to figure out what human minds and bodies are. I like this idea of using AI to more fully understand what makes humans different. For this reason, the idea of “artificial subjectivity”, the ability of an AI to “wake up” and suddenly be subjective, is very interesting to me. For when we finally figure that out, and some point to the idea that irrationality and adaptability might be the key, we will finally understand, perhaps, what that spark of being human means.
To finish this synopsis, I think the opening provocation of the podcast is a perfect place to end. In 1949, professor Geoffrey Jefferson stated that a mechanical mind could never rival human intelligence because it is not conscious of what it does (artificial subjectivity). He said:
“Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain, that is, not only write it, but know that it had written it.”
After this deep historical dive, I perused several other, shorter and more recent pieces on Artificial Intelligence, including one more podcast, by The Infinite Monkey Cage. One thing that stood out to me from this work was a mention of the German philosopher Thomas Metzinger, who writes on AI. Based on the ideas discussed above regarding subjectivity as the boundary between human and artificial intelligence, he poses an even more interesting question: what would an AI feel when it woke up into this subjectivity? As Aristotle argued in his treatment of human intelligence, in order to be an animal you must have sense-perception; if you have sense-perception then you can feel both pain and pleasure; and if you can feel both pain and pleasure, then by definition you can want for things. Metzinger argues that if you were to develop AI to the point of this artificial subjectivity, there is a very strong possibility that the machine you managed to switch on wouldn’t be very happy. This idea of artificial suffering is particularly disturbing according to Metzinger, for the ability to feel pain and suffering is arguably the most fundamental of all emotional states. It is based on self-preservation, a basic feature of being human, much simpler than higher-level emotional drives.
This question has been articulated in a lot of science fiction I have read and watched in my life, especially the book Do Androids Dream of Electric Sheep? by Philip K. Dick and the later movie Blade Runner by Ridley Scott, as well as I, Robot by Isaac Asimov. These texts often focus on the idea of self-preservation as a sign of “humanity” in AI, or at least of the artificial subjectivity that Metzinger mentions. The AI in these pieces clearly suffer, want, and dream, and it is this that makes them unique compared to other types of AI, or older models. It is also what raises huge questions for the humans in these stories as to whether these AI should be abhorred, pitied, or loved.
Wrapping Up – Some Questions I Have & Connections I’ve Made
- If self-preservation, articulated through the ability to feel pain and suffering, is a distinctly human aspect, then is it cruel to “wake up” an AI knowing that true subjectivity must cause it to suffer, to know that it is cognizant?
- When they are awoken and can feel pain and suffering, can we continue to think of them just as machines, as sub-human, without rights and ethics? Or do we have to consider them as at least animals, since we know animals feel pain and suffer, which makes us feel bad for hurting them?
- Is it morally wrong to “hurt” a machine that can feel pain?
- How can we even truly say that it is “feeling” the pain in the first place?
- One speaker on this podcast makes an analogy to fish when it comes to feeling pain. We know they have brains and pain receptors, but we generally don’t feel bad about hurting fish, because they are too alien, way on the other side of the uncanny valley for us to recognise their ability to suffer and actually feel the pain.
- If we can ignore the suffering of something as different from us as a fish, which we know can indeed sense pain, how will we treat computers and networks of sensors that take in information that could be computed, or understood, as pain? (e.g. setting up some rule like the one below)
```javascript
// a hypothetical rule: sensor readings of 10 or more count as "pain"
var inPain = (inputPainSensor >= 10);
```
- With just that short amount of code, can we be said to be programming the feeling of pain into a machine, by setting boundaries and variables? Is that how we feel “pain” too, at some weird micro level in our bodies, abstracted as mathematical limits on physical input?
To me, perhaps the most interesting question arising out of this history of examining human and artificial intelligence in relation to one another, and in relation to the advancement of computing, is:
What is the distinction between the mechanism of feeling and sensing, and the mind, the actual experience of pain or pleasure that results from those senses?
For if to be human is to suffer, why would we ever want to make our machines feel the same way? And if we knew that our machines could suffer, would we change the way we behave towards them? Or would we just unleash our aggressions, frustrations, and will upon them as the tool we have traditionally considered them to be? (just bang the damn thing and it will work… well what if the damn thing said ooooowwwwwww???) Will we treat intelligent machines as just one more channel, one more outlet for our own individual creativity to express itself (impose itself?) through their mechanical and now digital “minds”?