Intelligence and consciousness are often thought of as two sides of the same coin. And throughout the history of psychology, the two have been linked. It was assumed that intelligence emerges from conscious experience as we think through responses to the environment clever enough to help us survive. Evolution also creates a feedback loop between consciousness and intelligence: over time, evolution produces smarter and presumably more conscious organisms.
However, the arrival of artificial intelligence causes us to question this truism. AI entities are clearly intelligent, but not so clearly conscious. Rather than define intelligence in terms of consciousness, we begin to characterize intelligence as the ability to process information. Processing information effectively allows for intelligent responses, whether or not the processor is conscious. And while we don’t know whether today’s or tomorrow’s AI systems will grow in consciousness as they grow in intelligence, it seems likely that at least the AI systems operating today are not conscious. The net result is that we are beginning to decouple our ideas about intelligence and consciousness.
We can explore the separation of these two concepts by mapping it onto the now-famous schema that philosopher David Chalmers characterized as the “easy” and “hard” problems of consciousness. The easy problem (not really easy, but easier by comparison) concerns the methods and structures by which our human minds cognitively process information. Many aspects of human intelligence brought to light through cognitive models are now being incorporated into AI design. That’s the easy problem. The hard problem, in contrast, is the question of how the material world as we understand it is able to produce subjective experience at all. That is, subjectivity seems incompatible with material reality as we have defined it in physics. The hard problem of consciousness remains a true conundrum in science.
We can delve deeper into this distinction by using the zombie concept, originally put forward by philosopher Robert Kirk and later developed by Chalmers. When you encounter a person on the street, how do you know whether that person has an inner life like yours or is a zombie with no inner life that simply knows how to respond in human interactions? Indeed, if that zombie is an AI robot, we now have a real problem. An AI robot of the future may try to convince us through clever programming that it is indeed conscious and full of feeling. Unable to reach into the AI’s mind to see whether anything is going on “in there,” we will face the issue of “robot rights” and other legal and moral concerns. Many articles are now appearing on how this dilemma might play out: will robots gain consciousness, and if so, how would we know?
The REG (random event generator) provides a potential answer. The REG seems to give a window into the mind in a way that nothing else we know of today does. It circumvents the wet matter of the brain and the electronic circuitry of the AI alike. It can do so because the effect on the REG output, if it is present at all in a given instance, is not a physical effect. It is not so much a force applied to the REG output as it is the experience of overlaying a state of mind on reality to bend reality toward a preferred image and outcome. And one has to truly care about the results one is trying to produce in order to make them happen. Success also appears to involve deep-seated expectations, which are likewise emotional in nature. That is, these expectations are not just informed predictions (of the kind an AI could cognitively generate); the mind is also deeply invested in its predictions, which it considers to be what should happen. The predictions we make in the normal course of daily living test how well we understand reality, and we want reality to play them out to confirm that we know what we are doing.
So, here is the trick. We could sit an AI robot in front of a REG and tell it to affect the output by skewing it (for example) in the direction of producing more “heads” than “tails.” We first ask if it understands the task. Once we get an affirmative response, we let it go to work on the REG output. As it tries to produce an effect, we further explain to it that if it can succeed in the task, then we will call it conscious. If it cannot, then it should begin to accept itself as a non-conscious but intelligent entity.
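To make the pass/fail criterion concrete, here is a minimal sketch in Python of how a single session might be scored, assuming the simplest possible protocol: collect a stream of bits and ask whether the “heads” count deviates from chance. The function names are hypothetical, and the bit source is a software stand-in for actual REG hardware.

```python
import math
import secrets  # stand-in bit source; a real test would read REG hardware


def collect_bits(n_trials: int) -> list[int]:
    # Placeholder for reading n_trials bits from a hardware REG.
    # Simulated here with a cryptographic RNG, so the expected skew is zero.
    return [secrets.randbits(1) for _ in range(n_trials)]


def z_score(bits: list[int]) -> float:
    # Normal-approximation z-score for excess 'heads' (1s)
    # against the chance hypothesis p = 0.5.
    n = len(bits)
    heads = sum(bits)
    return (heads - n / 2) / math.sqrt(n / 4)


if __name__ == "__main__":
    bits = collect_bits(10_000)
    # By convention, |z| > 1.96 would be significant at p < .05 (two-tailed);
    # real REG studies accumulate far more trials than this.
    print(f"heads: {sum(bits)}/{len(bits)}, z = {z_score(bits):.2f}")
```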
My bet is that it will not succeed. Of course, if it does, we are all in for a shock! In any event, it would be interesting to see how it begins to shape its own description of itself if it fails over time. And note that we could set it up to train on the REG 24 hours a day, literally for years without even a lunch break. That should give it a training advantage over human subjects. It would indeed be interesting to see whether it got better at the task over time.
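If we did run such a marathon, one way to measure “getting better over time” would be to log one z-score per session (as in the sketch above) and look at both the pooled evidence and the trend. A sketch follows; Stouffer’s method is a standard way to combine z-scores, and the session values shown are invented for illustration, not real data.

```python
import math


def combined_z(session_scores: list[float]) -> float:
    # Stouffer's method: pool per-session z-scores into a single
    # overall z for the entire training history.
    return sum(session_scores) / math.sqrt(len(session_scores))


def improving(session_scores: list[float]) -> bool:
    # Crude learning check: does the second half of training
    # average a higher z than the first half?
    half = len(session_scores) // 2
    first, second = session_scores[:half], session_scores[half:]
    return sum(second) / len(second) > sum(first) / len(first)


# Made-up session scores, not real data:
history = [0.3, -0.8, 1.1, 0.2, 0.9, 1.4]
print(f"combined z = {combined_z(history):.2f}, improving: {improving(history)}")
```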
Dean Radin, chief scientist of the Institute of Noetic Sciences (IONS), tells the story of setting a semi-intelligent computer in front of a quantum system like a REG (an interferometer, in this case) to see if it could affect its interference fringe pattern as human subjects have been able to do. I’m not quite sure how he talked to the computer to explain the task, but in any event, he reported that nothing happened.
Finally, we might note some confounding issues that make this experiment not quite as clean as we would like. It has become clear that experimenters, for example those who would administer the REG test to the AI robot, can themselves influence the results. So, if we get positive results from the work of the AI, we may not know who actually produced them, particularly if the experimenters are hoping for a positive result. I describe exactly this situation in a real-life laboratory experiment in the introduction of my new book, The Selection Effect. In the end, though, I think there are ways around this problem, and we will get into them in a future post. For now, the message is that the REG device shows real promise in the study of consciousness as separate from intelligence.