Many of our members have been thinking about the consequences of AI becoming more powerful and potentially dangerous (see, for example, Anthony Aguirre's recent XPANSE talk in Abu Dhabi). But if AI became conscious, would it be happy? These and other ethical questions are posed in Sigal Samuel's recent Vox article.
The piece features Susan Schneider, director of the Center for the Future Mind at Florida Atlantic University and a member of FQxI's Scientific Advisory Panel, who, with her colleague Edwin Turner, has proposed an Artificial Consciousness Test (ACT). From the article: Schneider and Turner "assume that some questions will be easy to grasp if you’ve personally experienced consciousness, but will be flubbed by a nonconscious entity. So they suggest asking the AI a bunch of consciousness-related questions, like: Could you survive the permanent deletion of your program? Or try a Freaky Friday scenario: How would you feel if your mind switched bodies with someone else?"
However, if such a question were presented to a Large Language Model (LLM) like ChatGPT or Claude, which has been designed to mimic human speech, it would likely answer in a way that gives the illusion of consciousness. Schneider's solution is to test the LLM before it has been exposed to the wider Internet, when it has been trained only on a small, curated data set.
Musing on the nature of AI consciousness, Schneider told Samuel: “It may be that it doesn’t feel bad or painful to be an AI...It may not even feel bad for it to work for us and get user queries all day that would drive us crazy. We have to be as non-anthropomorphic as possible.”
Read Samuel's full essay at Vox.