The Agenda - Is ChatGPT Conscious?

date: Feb 19, 2023
slug: agenda-chatgpt-conscious
author:
status: Public
tags: Blog
summary:
type: Post
thumbnail: DALL·E 2023-02-20 01.23.20 - Debate about Is ChatGPT Conscious_ digital art.png
updatedAt: Mar 28, 2023 07:43 PM

Introduction

This post presents my thoughts on “Is ChatGPT Conscious?”, an episode of The Agenda, along with a summary of the whole discussion.
From Wikipedia: The Agenda with Steve Paikin, or simply The Agenda, is the flagship current affairs television program of TVOntario (TVO), Ontario's public broadcaster. Anchor Steve Paikin states that the show practices long-form journalism. Each hour-long program covers no more than two topics.
From YouTube: Ilya Sutskever, the chief scientist at OpenAI, the company that created ChatGPT, has said today's technology might be "slightly conscious." Google engineer Blake Lemoine claimed that Google's AI LaMDA was "sentient." Is it? Could AI become conscious in our lifetime? And beyond that: if we can create AI sentience, should we? MIT's Max Tegmark, author of "Life 3.0," and others debate the future of AI.
Talk participants:
  • Melanie Mitchell, professor at the Santa Fe Institute and author of Artificial Intelligence: A Guide for Thinking Humans
  • Robert J. Marks II, distinguished professor at Baylor University and author of Non-Computable You: What You Do That Artificial Intelligence Never Will
  • Max Tegmark, professor of physics at MIT, co-founder of the Future of Life Institute, and author of Life 3.0: Being Human in the Age of Artificial Intelligence (full disclosure: he is my boss)

My Thoughts

 

Summary of the Conversation

Does ChatGPT understand what it is saying?

Melanie LLMs do not understand language the way humans do; predicting the next word is insufficient for modelling the world. In theory, though, nothing prevents a machine from understanding the way humans do, as long as it experiences enough of the real world.
Max Intelligence is the ability to accomplish a goal, distinguished from consciousness/sentience, which is having a subjective experience. Current LLMs probably don’t have subjective experience, but we should keep in mind that it is possible for machines to have subjective experience.
Robert LLMs are like Searle’s Chinese Room: they have no actual understanding even if they appear to.

A Google engineer claimed LaMDA is sentient.

Max You can’t conclude sentience just by looking at behaviour; you have to look inside at the underlying processes. “Carbon chauvinism” is the faulty assumption that consciousness/sentience is tied to carbon-based biology. Instead, consciousness/sentience is a matter of information processing.
Robert Computers can’t be creative. A computer would only count as creative if it did something beyond the explanation or intent of its programmer, and current LLMs don’t demonstrate creativity the way humans do.

What would you need to see to be convinced that AI was sentient?

Melanie LLMs do not have a complex model of themselves as an agent in the world. These systems are not conscious, but we don’t have a rigorous test to prove that.
Robert There are many ways to define consciousness, one of them being panpsychism, the idea that consciousness is fundamental to reality and everything is conscious to some degree. Perhaps consciousness is emergent from complexity (Robert doesn’t believe this is possible from what he’s experienced). Roger Penrose proposes that consciousness arises from quantum effects. Mind-body problem: can consciousness arise from just meat?

Are we just computers made out of meat?

Max Future systems will almost certainly possess consciousness and immense powers. There is a 50% chance that machines will kill all humans within the next few decades, so we must be very careful.

David Chalmers gives AI a 20% chance of consciousness in 10 years. What do you think?

Melanie You can’t put percentages on it, since we don’t really understand our own consciousness very well.
Robert Searching for emergent consciousness in AI is like a boy shovelling through a pile of horse manure, excitedly claiming that there must be a pony in there somewhere.

“Carbon Chauvinism” and existential risk

Max Consciousness has more to do with information processing than with the platform it runs on. But machines can do immense harm whether or not they are conscious. Machine consciousness still matters: if machines end up replacing humans, we would want them to be able to experience the universe (beauty, etc.) rather than being a bunch of “zombies”. It is strange that society forges ahead with new advances in ML while halting progress in other areas, like human cloning, over ethical concerns. We should ask ourselves, “how can we best use this tech to benefit humanity?”. It is good that machines are replacing hard labour, but bad when machines replace meaningful work like art.
Melanie 50% chance of catastrophe is a bit pessimistic. We should pay more attention to humans misusing the machines, rather than the machines themselves getting out of control.
Robert The discussion of machine risk is far too pessimistic. The new LLMs are just the latest in a long history of technologies people have worried about, from deepfakes to computer viruses to robots automating labour.

What can humans do that AI never will?

Robert Alan Turing showed that some problems are non-computable: no algorithm can solve them. Consciousness, love, and creativity are not algorithmic and cannot be replicated by machines.

Do we need a body to experience?

Melanie How much of our body is required for our intelligence? It seems like some feelings aren’t directly tied to our body.
Max The space of possible machine minds is vastly bigger than the range of human minds, and machine minds need not resemble anything we imagine. Inventing airplane flight turned out to be much easier than copying bird flight; likewise, there may be shortcuts to creating machine minds that don’t require fully understanding how human minds work. Artificial minds also face fewer constraints than biological ones; for example, they do not need to be self-assembling.

Should we be building AI systems that might someday be super-intelligent?

Robert There is a distinction between design ethics (the AI should do what it is told) and end-user ethics (what to tell the AI to do). If the West does not win the AI race, an adversary will.
*Talk concludes*