Artificial Intelligence and What It Means To Be Human

You’ve probably seen the videos. A robot walking or crawling around, sensing its environment and figuring out how to do human-like tasks, such as opening doors. They’ve even learned how to do back flips now. The videos go viral on social media every few months, along with terrified reactions about how “the robot overlords are coming,” “we might as well submit now,” and so on.

Back-flipping bots aside, it makes sense to learn more about robots, automation, and artificial intelligence (AI). The website “Wait But Why” has an in-depth article on this topic [1], which argues that AI is catching up to humans and will eventually overtake us. Others have commented on the concept of the “singularity,” the point at which AI becomes able to improve itself, leading to an explosive increase in intelligence [2].

This has naturally created concern. Elon Musk says AI represents a fundamental risk to human civilization [3], and Stephen Hawking said it could spell the end of the human race [4]. Bill Gates agrees with Musk, and is baffled why many others aren’t also worried [5]. So what’s actually going on? Is AI really intelligent? Will it take over the world and doom humanity?

There are a lot of unstated assumptions at work in discussions about these topics, so it’s important to take a step back and analyze things one at a time. This post will introduce what AI is and describe how it works. Then it will explain how it’s possible to create an AI-driven machine that can behave like a human, and examine the philosophical conundrum that comes out of this.

How AI Works

As the word is currently used, “AI” is more or less synonymous with a field of computer science known as machine learning. A machine learning algorithm is one that improves at some task based on feedback from data (i.e. the “correct” answers are recorded in the data set). Eventually, it becomes adept at the task and can be deployed in the real world, without the feedback that enabled it to improve.

Take, for example, an algorithm that tells you whether or not an image contains a cat. First it treats the image as a matrix (an array of numbers, pixel intensities). It then passes the image through a series of fancy mathematical operations, until it is assigned a single number between 0 and 1. If the number is close to 1, it’s a cat, and if it’s close to 0, it’s not a cat.
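
To make this concrete, here is a minimal sketch in Python (using NumPy; the tiny image and the randomly chosen weights are illustrative assumptions, not any real cat detector) of how an image can be reduced to a single score between 0 and 1:

```python
import numpy as np

def cat_score(image, weights, bias):
    """Collapse an image (a matrix of pixel intensities) into one number in (0, 1)."""
    x = image.flatten()                  # treat the H x W matrix as one long vector
    z = np.dot(weights, x) + bias        # a weighted sum: the "fancy mathematical operations"
    return 1.0 / (1.0 + np.exp(-z))      # a sigmoid squashes the result into (0, 1)

# A tiny 4x4 grayscale "image" and randomly chosen parameters, for illustration only.
rng = np.random.default_rng(0)
image = rng.random((4, 4))
weights = rng.normal(size=16)
bias = 0.0

print(cat_score(image, weights, bias))   # close to 1 means "cat", close to 0 means "not a cat"
```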

The mathematical operations involve numbers called “parameters,” and the values of these parameters determine how well the algorithm performs. Setting them correctly is the key to getting the algorithm to work, and this is done through a process of trial and error referred to as “training” in the machine learning context. The parameters are initially set at random; then, for each training image, a correction is made based on what the correct output should have been.
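
A rough sketch of that trial-and-error loop, again in illustrative Python: the parameters start out random, and after each labeled image they are nudged in the direction that would have reduced the error. This is essentially a simple gradient-descent step on a logistic model; real systems use far more elaborate versions of the same idea.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(images, labels, learning_rate=0.1, epochs=100):
    """images: flattened pixel vectors; labels: 1 for cat, 0 for not-cat."""
    rng = np.random.default_rng(0)
    weights = rng.normal(scale=0.01, size=images[0].size)  # parameters start out random
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(images, labels):
            prediction = sigmoid(np.dot(weights, x) + bias)
            error = prediction - y                 # how far off the output was
            weights -= learning_rate * error * x   # nudge the parameters toward
            bias -= learning_rate * error          # the correct answer
    return weights, bias
```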

An analogous process can be applied to text or any other type of data, not just images. What’s important to note is that training is an extremely computationally heavy task, so the limit of what machine learning can currently achieve always hovers at the edge of available computing power.

It’s All Optimization

In essence, machine learning is about solving optimization problems. There is a number we’re trying to optimize – in the example above, the difference between the calculated output and the expected output, which we can call the “error.” We make whatever mathematical adjustments are necessary to drive that error as close to zero as possible over the data set.
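
In code terms (a sketch, reusing the style of the snippets above), the quantity being minimized is just the average error over every example in the data set:

```python
import numpy as np

def mean_error(predict, images, labels):
    """Average squared gap between what the model outputs and what it should output."""
    return np.mean([(predict(x) - y) ** 2 for x, y in zip(images, labels)])

# Training succeeds when this number is driven as close to zero as possible
# over the whole data set (and, ideally, over data the model has never seen).
```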

With enough computing power, this setup can be creatively exploited. One classic example is to input an image of a hand-drawn digit (0–9) and have ten outputs, one for each digit. If the input is, say, a 3, then the outputs should be 0-0-0-1-0-0-0-0-0-0. We can take things even further with more computing power: the input can be a video feed from a camera mounted on a car, and the output can be a control signal for the accelerator or brake. During training, the algorithm is optimized toward whatever the “correct” action was in each situation, taken from a human driver’s data.
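
Here is a sketch of how the digit setup encodes its targets and reads off a prediction (again illustrative Python; the output values are made up):

```python
import numpy as np

def one_hot(digit, num_classes=10):
    """Encode a digit 0-9 as the target vector that the ten outputs should produce."""
    target = np.zeros(num_classes)
    target[digit] = 1.0
    return target

print(one_hot(3))                 # [0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]

# At prediction time, the outputs are read the other way around: the position
# of the largest of the ten outputs is taken as the predicted digit.
outputs = np.array([0.01, 0.02, 0.05, 0.85, 0.01, 0.02, 0.01, 0.01, 0.01, 0.01])
print(int(np.argmax(outputs)))    # 3
```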

Now keep taking things up a notch. We could train a giant machine learning algorithm that’s optimized to behave like a human. The input can be sensory data – audio, video, and so on – and the output can be speech, control of limbs, etc. We’re far from being able to create this just yet, but as mentioned above, the more computational power is thrown at the problem, the better the results will get.

In other words, there’s a direct line between what AI can currently do (such as recognize cats in images) and what it could potentially do – imitate the human mind’s behavior. The algorithms have already been invented, and we know how such a system would be set up and trained. It’s just a question of having enough computing power, which will come eventually.

A Thought Experiment

Let’s say it’s the year 2028 and this has been achieved. Some people then claim that since our minds are just made of matter anyway, and a robot is also made out of matter, it’s effectively a person. It has rights, responsibilities, and a legal status. Imagine the activist movement advocating for greater machine rights (#MLM – machine lives matter?). Some humans even start to have romantic relationships with the machines, or assign them as babysitters for their kids, or ask them to carry out illegal tasks. A fierce debate about their personhood ensues.

How would we respond to this? Some might claim that we can dismiss the machine’s claim to personhood on the grounds that it has no subjective experience. But this is not empirically provable; in fact, no one can even prove that another human being has a subjective experience. Noam Chomsky talks about this in a video interview.

Chomsky makes a few key points in that interview. First, he discusses the point mentioned previously regarding the possibility of emulating or reproducing the activity of the human mind. He also touches on AI and consciousness, pointing out that asking whether AI is conscious is essentially meaningless, since we can’t prove the existence of consciousness even in other humans. Thus, appeals to consciousness and subjective experience can’t really help us.

“There’s Just Atoms in Here”: Failing to Appreciate Intelligence as God’s Gift

The actual answer to this question is straightforward. Allah has created the human being in a physical form, but He has also endowed us with a ruh (soul) and an aql (intellect). What makes us human is not our outward behavior, but rather this God-given inner essence. Building a robot that can imitate human behavior is an interesting exercise, but ultimately such a robot will always be a tool, not actually human. The imitation it performs is superficial, as if you were given a question in a foreign language, looked up the correct answer in a book, and copied it down without actually understanding anything. For more on this concept of blind imitation, see the Chinese Room argument [6].

Many modern philosophers view AI as eventually becoming human or superhuman because they have subscribed to the ideology of materialism. According to this ideology, human beings are entirely self-sufficient and originate from within this physical universe itself, and thus by extension are replicable in silicon. Sam Harris summarizes this view as follows [7]:

We know that mere matter can give rise to what is called “general intelligence,” an ability to think flexibly across multiple domains, because our brains have managed it. Right? I mean, there’s just atoms in here, and as long as we continue to build systems of atoms that display more and more intelligent behavior, we will eventually, unless we are interrupted, we will eventually build general intelligence into our machines.

It may be the case that our minds are made out of “mere matter.” But what this conception of the mind leaves out is the will and creative power of Allah. This lump of mere matter is able to think, observe, contemplate, converse, discover, love, rejoice, and mourn because Allah SWT gave it the ability to do so. Allah says, “It is He who has produced you and made for you hearing and vision and hearts; little are you grateful” (67:23).

We can train a machine to play board games, to have a conversation, to write a book, or to do any other task. Eventually, it will get pretty good at these things. But there will always be something missing. What humans have, and what machines lack, is a soul and an intellect created by God, which set us apart from mere matter.

So why have materialistic ideas become popular? There’s an appeal to such beliefs for modern hedonistic humans, because if we don’t “owe” anyone our existence, we can do whatever we want. However, nothing could be further from the truth. It is Allah who has created us and given us our intelligence, and this profound gift must be acknowledged and appreciated.

It’s beyond the scope of this article to “prove” that the theistic or Islamic conception of the mind is the correct one. Readers interested in this topic can check out the recent work of Hamza Tzortzis [8]. Regardless of where one stands on the theistic view, the point remains that many people expect AI to eventually attain general intelligence because they have accepted materialistic beliefs about the mind and how it works.

Real Concerns About AI

Even if AI won’t become superhuman, there’s still plenty to be concerned about. For example, Mark Zuckerberg has said AI is a technology like any other that needs to be managed well, not something that’s going to destroy humanity by acquiring general intelligence and turning on us [9]. If we make the wrong choices now, the technology will harm humanity in the long run. Here are some areas to think about:

  • Task automation, such as driving and grocery checkout. Computers can already do these tasks better than humans; it’s just a matter of building the infrastructure around them (think of how gas pumps or toll booths would work with no one in the car). This could lead to the loss of millions of jobs, almost all from the “unskilled” sector of the labor market. It also replaces human interaction, which can be harmful in an already atomized society.
  • Engineering talent being wasted on stupid stuff, like ad click optimization. As one well-known data scientist once put it: “The best minds of my generation are thinking about how to make people click ads. That sucks” [10]. Sure, our machine learning algorithms can figure out how to make people use an app more, buy a product, or click a link. But is that really making the world a better place?
  • Fake photos and videos. Generative models are an area of very active research, with rapid progress. With enough training, they can now produce small but photorealistic fake images. There are also techniques like “face swap,” which are already stunningly realistic for photos, and progress is being made on video as well.

What the above examples have in common is that they’re all cases of humans knowingly developing technology that can be used to cause harm. AI might end up being a destructive force in human civilization, not because it will attain general intelligence, but because of the choices people make about where and how to apply it. An optimistic way of viewing this is that our fate is in our hands. Maybe there’s hope after all.

Works cited:

  1. Urban, Tim. “The AI Revolution: The Road to Superintelligence.” Wait But Why, 22 Jan. 2015, https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html.
  2. “Technological Singularity.” Wikipedia, Wikimedia Foundation, 9 Sept. 2018, https://en.wikipedia.org/wiki/Technological_singularity.
  3. Sulleyman, Aatif. “AI Is Going to Wipe out Human Civilisation If We Don’t Stop It, Says Elon Musk.” The Independent, Independent Digital News and Media, 17 July 2017, http://www.independent.co.uk/life-style/gadgets-and-tech/news/elon-musk-ai-human-civilisation-existential-risk-artificial-intelligence-creator-slow-down-tesla-a7845491.html.
  4. Cellan-Jones, Rory. “Stephen Hawking Warns Artificial Intelligence Could End Mankind.” BBC News, BBC, 2 Dec. 2014, http://www.bbc.com/news/technology-30290540.
  5. Holley, Peter. “Bill Gates on Dangers of Artificial Intelligence: ‘I Don’t Understand Why Some People Are Not Concerned’.” The Washington Post, WP Company, 29 Jan. 2015, http://www.washingtonpost.com/news/the-switch/wp/2015/01/28/bill-gates-on-dangers-of-artificial-intelligence-dont-understand-why-some-people-are-not-concerned/?utm_term=.29422738f99c.
  6. Cole, David. “The Chinese Room Argument.” Stanford Encyclopedia of Philosophy, Stanford University, 19 Mar. 2004, https://plato.stanford.edu/entries/chinese-room/.
  7. Harris, Sam. “Can We Build AI without Losing Control over It? .” YouTube, TED, 19 Oct. 2016, http://www.youtube.com/watch?v=8nt3edWLgIg.
  8. Tzortzis, Hamza Andreas. “Chapter 7 – Denying God, Denying You: The Argument From Consciousness.” The Divine Reality: God, Islam and the Mirage of Atheism: Revised Edition, FB Publishing, 2018.
  9. Clifford, Catherine. “Facebook CEO Mark Zuckerberg: Elon Musk’s Doomsday AI Predictions Are ‘Pretty Irresponsible’.” CNBC, CNBC, 24 July 2017, http://www.cnbc.com/2017/07/24/mark-zuckerberg-elon-musks-doomsday-ai-predictions-are-irresponsible.html.
  10. Madrigal, Alexis C. “The Rise of the Wants, Silicon Valley’s Answer to Wall Street’s Math Nerds.” The Atlantic, Atlantic Media Company, 15 Apr. 2011, http://www.theatlantic.com/technology/archive/2011/04/the-rise-of-the-wants-silicon-valleys-answer-to-wall-streets-math-nerds/237411/.

About the author: Yousuf is a developer with an education in computer science. His interests include science, technology, religion, and politics. You can follow him on Twitter here.

Disclaimer: Material published by Traversing Tradition is meant to foster scholarly inquiry and rich discussion. The views, opinions, beliefs, or strategies represented in published articles and subsequent comments do not necessarily represent the views of Traversing Tradition or any employee thereof.
