Michael Herrmann

Demystifying Consciousness


This post is a layman's attempt at describing consciousness, i.e. the experience of self. It gives a model that explains how consciousness arose in humans, and discusses whether animals and today's computers can be called conscious under its definition. It also describes mechanisms that could give consciousness to AI.

Consider an early ancestor of life, swimming in a primordial soup with nutrients. This life form might have one very simple decision mechanism: If the gradient of nutrients is larger in one direction, swim that way.

As life evolved, its decision-making processes became more complex. A starving monkey may see food right next to a dangerous predator. Should it risk approaching the predator to grab the food, or risk starvation? In every moment, organisms need to choose the action with the highest importance.

The mechanism that makes such decisions is not difficult to imagine: If there are multiple stimuli, react to the one with the highest priority. When the monkey is only mildly hungry, the fear of the predator is most important. But if it's starving, there's a strong incentive to take the risk.
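
To make this concrete, here is a toy Python sketch of that mechanism. The names, priority numbers and actions are purely illustrative assumptions of mine, not anything measured or taken from biology:

```python
# Toy sketch: react to the stimulus with the highest priority.
from dataclasses import dataclass

@dataclass
class Stimulus:
    name: str        # e.g. "hunger" or "fear of predator"
    priority: float  # how urgent this stimulus is right now
    action: str      # the reaction it calls for

def choose_action(stimuli):
    """Pick the action belonging to the strongest stimulus."""
    return max(stimuli, key=lambda s: s.priority).action

# Mildly hungry monkey: fear of the predator dominates.
print(choose_action([
    Stimulus("hunger", 0.3, "grab the food"),
    Stimulus("fear of predator", 0.8, "keep your distance"),
]))  # -> keep your distance

# Starving monkey: hunger now outweighs the fear.
print(choose_action([
    Stimulus("hunger", 0.95, "grab the food"),
    Stimulus("fear of predator", 0.8, "keep your distance"),
]))  # -> grab the food
```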

Over time, decision-making processes grew further to include planning and anticipation. If it starts to rain, I had better find shelter now to avoid being wet and cold later. This is no longer a reaction to an immediate sensation - after all, I'm only slightly wet now and not cold. Instead, it's a reaction to an *anticipated* sensation.

A decision-making process that anticipates future sensations must be aware of them in an abstract way. It needs to predict which wanted or unwanted stimuli may occur in the future, prioritize them, and use this information to decide which one matters most right now. These calculations, or thoughts, happen simultaneously, and the one that is most important right now chooses the next action.
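
Extending the toy sketch, anticipation can be modelled by weighing each predicted sensation by how bad it would be, how likely it is, and how far away it is in time. Again, all names and numbers below are illustrative assumptions:

```python
# Toy sketch: thoughts about anticipated sensations compete for the next action.
from dataclasses import dataclass

@dataclass
class AnticipatedStimulus:
    name: str
    intensity: float    # how strong the (un)wanted sensation would be
    probability: float  # how likely it is to occur if nothing is done
    delay: float        # how far in the future it lies, in hours

def urgency(s, discount=0.9):
    """Weigh a predicted sensation by likelihood and nearness in time."""
    return s.intensity * s.probability * discount ** s.delay

def most_pressing(thoughts):
    """The thought with the highest urgency chooses the next action."""
    return max(thoughts, key=urgency)

thoughts = [
    AnticipatedStimulus("slightly wet right now", 0.2, 1.0, 0.0),
    AnticipatedStimulus("soaked and cold later", 0.9, 0.8, 1.0),
]
print(most_pressing(thoughts).name)  # -> soaked and cold later: find shelter now
```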

Experienced meditators like Sam Harris often point out that thoughts arise of their own accord and are very difficult to control. Bliss in meditation seems to be achieved by being aware that thoughts keep coming up, but not being carried away by them.

A computational model that fits the above surprisingly well is implemented in modern operating systems: When you open the Task Manager in Windows, you see a list of running processes. They execute in parallel, there's usually a foreground process, and Windows decides how much CPU time each process gets. In these ways, computer processes are a lot like the thoughts running in our heads.

But there are also key differences between an operating system and consciousness. The first is that an operating system does not have agency. It reacts to user input, but does not plan future actions in pursuit of goals of its own. Because of this, it is more similar to very primitive early forms of life than to a conscious human being.

Another difference is that operating systems know very little about what the various running processes are doing. This is a general limitation of computation and stems from the fact that there can be very many different kinds of processes.

To make an operating system more similar to conscious life, multiple steps are required. The first is to give the operating system agency: goals that do not just consist of reacting to user input, but are more abstract and require planning and anticipation to achieve. Survival and reproduction are examples of such goals from natural life.

The second step is to run the scheduling algorithm, i.e. the mechanism that decides when processes execute, in a process of its own. The scheduler would be given control at frequent intervals, to decide whether to continue with the current process or give CPU time to another one.
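
As a minimal sketch of what that could look like, here is a generator-based toy in Python where the scheduler is itself an entry in the process table it manages. Everything here - the "thought" processes, their priorities, the number of ticks - is an assumption made up for illustration:

```python
# Toy sketch: the scheduler runs as a process of its own and is given control
# at every tick to decide which other process gets the next slice of CPU time.

def find_food():
    while True:
        yield "searching for food"

def watch_for_predators():
    while True:
        yield "scanning the surroundings"

def schedule():
    """The scheduling 'thought': pick the highest-priority runnable process."""
    while True:
        runnable = {n: prio for n, (gen, prio) in processes.items() if n != "scheduler"}
        yield max(runnable, key=runnable.get)

processes = {
    "find_food":           (find_food(), 0.6),
    "watch_for_predators": (watch_for_predators(), 0.4),
    "scheduler":           (schedule(), 1.0),  # the scheduler is a process too
}

for tick in range(3):
    chosen = next(processes["scheduler"][0])  # the scheduler gets control first...
    gen, _ = processes[chosen]
    print(tick, chosen, "->", next(gen))      # ...then its choice runs for one step
```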

The final piece of the puzzle is to create mechanisms that give the operating system a way to understand what processes are doing. Crucially, the scheduling process also needs to be transparent in this way.

With these changes and some reasoning capabilities, the operating system could become an independent agent in creative pursuit of its own goals. What's more, by inspecting the scheduler, it could become aware of itself. If it ever chooses to do this, and if it changes its behavior as a result, I would say it has achieved the first degree of consciousness.
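
Continuing the toy sketch from above, self-inspection could be as simple as a process that reads the process table - including the scheduler's own entry - and changes the system's behavior based on what it finds there. This is of course only a cartoon of the idea, with made-up details:

```python
# Toy sketch: an introspection process that looks at the scheduler's own state
# and adjusts the system's behavior as a result.

def introspect():
    while True:
        table = {name: prio for name, (gen, prio) in processes.items()}
        # The system can now "see" its own decision rule: which processes it
        # favours, by how much, and that the scheduler itself is one of them.
        starved = min((n for n in table if n != "scheduler"), key=table.get)
        gen, prio = processes[starved]
        processes[starved] = (gen, prio + 0.1)  # behavior change from self-knowledge
        yield f"noticed {starved} was being neglected, raised its priority"

# Registered like any other thought:
processes["introspect"] = (introspect(), 0.5)
```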

The above gives a workable model of consciousness in AI. But what about animals? Obviously, opinions on which animals are conscious (if any) differ widely. Also, we cannot (yet?) look inside an animal's brain and see which exact thoughts are running there. But here is an empirical criterion. It has certainly been considered by others as well, but I'm on a plane and can't research it.

Planning into the future requires taking into account what you yourself will experience in various contexts. The further ahead you plan, the more possibilities you need to consider, including predictions about your own state, and the number of possible futures grows rapidly with the planning horizon. As a result, it becomes harder and harder to hardwire decisions that cover large parts of the future. For example, some animals' yearly migration to better grazing grounds may be more instinct than conscious decision, because it only covers one very specific future outcome.
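
A back-of-the-envelope illustration of why that is: if each step into the future offers only a handful of alternative outcomes, the number of scenarios a hardwired rule would have to cover explodes with the planning depth. The branching factor of 3 below is an arbitrary choice:

```python
# Toy sketch: how the number of possible futures grows with planning depth.

def count_futures(branching: int, depth: int) -> int:
    """Distinct scenarios to cover, with `branching` outcomes per step."""
    return branching ** depth

for depth in (2, 4, 8, 12):
    print(f"{depth:>2} steps ahead: {count_futures(3, depth):>6} scenarios")
# ->  2 steps ahead:      9 scenarios
#     4 steps ahead:     81 scenarios
#     8 steps ahead:   6561 scenarios
#    12 steps ahead: 531441 scenarios
```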

So I would say that the time span and the number of future situations covered by an animal's planning give a good indication of its level of consciousness. While I would not call a being truly conscious until its scheduler has inspected itself, I would hold that it's fair to ascribe lower levels of consciousness to independent planning into the future, because this intrinsically requires seeing the self as an entity separate from the rest of the world.