Artificial Consciousness

It is probably not possible to say who used the term "Artificial Consciousness" first, but one of the first to present it as a theory was Igor Aleksander of Imperial College, London, who wrote in 1995 [1]: "Here the theory is developed by defining that which would have to be synthesized were consciousness to be found in an engineered artefact. This is given the name "artificial consciousness" to indicate that the theory is objective and while it applies to manufactured devices it also stimulates a discussion of the relevance of such a theory to the consciousness of living organisms."

Later, Igor Aleksander and Owen Holland of the University of Essex named the field "Machine Consciousness" [2], but Machine Consciousness is considered a form of Artificial Consciousness, as are "Digital Sentience" and similar notions. "Artificial Consciousness" is nevertheless the better general term, because it also includes systems which may contain biological components, such as neurons, and which therefore strictly speaking cannot be called "machines".

Artificial Consciousness is not very well determined, as different people mean different things by it. Owen Holland describes this variety in his Machine Consciousness FAQ, and much the same considerations recur in any wider discussion.

First there are philosophical considerations, which in different ways doubt the possibility of modelling consciousness as a whole: for example, that consciousness is nothing more than an illusion, or the question of whether consciousness is an epiphenomenon. According to naïve realism and direct realism, we perceive things in the world directly. According to indirect realism and dualism, our brains contain data about the world that is obtained by processing, and what we perceive is some sort of mental model that appears to overlay physical things as a result of projective geometry (such as the point observation in René Descartes' dualism). The theory of direct perception is problematic because it would seem to require some new physical theory that allows conscious experience to supervene directly on the world outside the brain. If we perceive things indirectly, then some new physical phenomenon, other than the endless further flow of data, would be needed to explain how the model becomes experience. If we perceive things directly, self-awareness is difficult to explain. Direct perception also implies that we cannot 'really' be aware of dreams, imagination, mental images or any inner life, because these would involve recursion. Such recursion is arguably a problem similar to the self-reference problem in Gödel's theorem, which is argued to make a system "inconsistent", though there is a great deal of self-reference in every computer and in almost every piece of software we know. Thomas Nagel, in his "What is it like to be a bat?" [4], argues that we cannot model subjective experience, which is a part of consciousness.

There are also many different views on what test should be required of an Artificial Consciousness; many suggest a Turing Test. And finally, some suggest that a conscious machine needs a body, and therefore must be a robot of some kind.

An axiomatic approach has also been proposed by Igor Aleksander and others. This approach draws on the work of modern psychologists and neuroscientists such as Bernard Baars, who in his "A Cognitive Theory of Consciousness" [3] suggested a variety of functions in which consciousness plays a role: prioritization of alternatives, problem solving, decision making, recruiting of brain processes, action control, error detection, planning, learning, adaptation, context creation, and access to information. Strictly speaking it is not proper to call these functions, as we do not know whether they appear as separate functions or as properties of many parts of the mind; it is therefore more proper to call them aspects of consciousness, and for our purpose aspects of Artificial Consciousness, since what matters is which aspects, and in what way, are important in implementing Artificial Consciousness. Igor Aleksander and Owen Holland in their axioms [5] stated a sense of place, imagination, directed attention, planning, and decision/emotion as aspects of Artificial Consciousness. Igor Aleksander also stated prediction as an aspect of consciousness [1].

Artificial Consciousness is not strictly defined in the scientific literature, because of the many differing opinions. Still, we may apply some considerations that are logical and self-evident. First, as certain objective aspects of consciousness are known to science, a system which cannot implement any of them cannot be considered an Artificial Consciousness. Therefore, Artificial Consciousness is an effort to implement the known objective aspects of consciousness. This is a practical effort with the aim of implementing certain objective aspects of consciousness artificially; the aim is not to model consciousness completely, so the philosophical questions of whether consciousness as a whole can be modelled belong to the Philosophy of Mind, not to Artificial Consciousness. For the same reason, efforts to copy consciousness completely, such as Strong AI, are not part of Artificial Consciousness; nor is Artificial Consciousness the same as Artificial Intelligence, as the aspects of consciousness and of intelligence are not necessarily the same. Embodiment may in some way be considered one aspect of consciousness, but it is far from certain that it is the most important or the determining aspect.

What is also not defined is whether Artificial Consciousness should implement only certain aspects of consciousness, or most of them. But it is logical that building a very trivial system, which implements only one aspect of consciousness to a small extent, likely has no scientific or intellectual value. A system of value should be as close to consciousness as possible, and therefore theoretically able to implement most of the known objective aspects of consciousness. This includes learning, which means being able to work in an unexpected environment; the problem of working in an unexpected environment is called generality in Artificial Intelligence [6]. An almost undeniable aspect of consciousness is awareness, which also means awareness of processes; and awareness of processes means an ability to model how external processes will develop, which is prediction (a minimal sketch of this follows below). It might also be possible to show that all other objective aspects of consciousness can be explained through such awareness.
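To make the prediction aspect concrete, here is a minimal sketch in Python (the class and method names are ours, purely illustrative) of a predictor that learns first-order transition frequencies from an arbitrary symbol stream. Nothing about the environment is hard-coded, so in a modest sense it illustrates predicting in an environment that was not considered while programming the system.

    from collections import Counter, defaultdict

    class StreamPredictor:
        """Learns which symbol tends to follow which in an arbitrary
        stream, and predicts the most likely successor of the last
        symbol seen. No model of the environment is built in."""

        def __init__(self):
            self.counts = defaultdict(Counter)  # symbol -> counts of its successors
            self.prev = None                    # last symbol observed

        def observe(self, symbol):
            """Feed the next symbol from the environment into the model."""
            if self.prev is not None:
                self.counts[self.prev][symbol] += 1
            self.prev = symbol

        def predict(self):
            """Return the most frequent successor of the last symbol, or None."""
            if self.prev is None or not self.counts[self.prev]:
                return None  # nothing learned yet
            return self.counts[self.prev].most_common(1)[0][0]

    # Usage: the predictor picks up a repeating pattern it was never told about.
    p = StreamPredictor()
    for s in "abcabcabc":
        p.observe(s)
    print(p.predict())  # -> 'a', since 'c' has so far always been followed by 'a'

Even such a trivial learner shows the shape of the requirement: the model of how the external process develops is acquired from observation, not written into the program.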

Concerning the tests: the Turing Test is obviously subjective, and it is a question whether such a test is a sufficient criterion in science. It is also not certain that such an intricate test is necessary. First of all, when an Artificial Consciousness implements certain objective aspects of consciousness, then these aspects themselves can be objectively tested. Such a test need not even be very complicated: for example, when an Artificial Consciousness should be able to work in an unexpected environment, any test which provides an environment that was not in any way considered while programming the system may be sufficient, if it shows that the system can implement an aspect of consciousness, for example prediction, in that environment. For a full Artificial Consciousness system, a simple test such as playing the game of Nim may well be enough. The game of Nim has been widely used in testing Artificial Intelligence systems, because a good theory of that simple game was created by mathematicians such as C. L. Bouton of Harvard University, which gives an objective standard to test against (see the sketch below).
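Bouton's theory is what makes Nim so convenient as a benchmark: a position with heaps h1, h2, ..., hn is lost for the player to move (under normal play) exactly when the bitwise XOR of the heap sizes, the so-called nim-sum h1 XOR h2 XOR ... XOR hn, is zero, so a tester can always check a system's moves against the known optimum. A minimal reference implementation in Python (the function names are ours, not from any cited source):

    from functools import reduce
    from operator import xor

    def nim_sum(heaps):
        """Bouton's nim-sum: XOR of all heap sizes. Zero means the
        position is lost for the player about to move (normal play)."""
        return reduce(xor, heaps, 0)

    def best_move(heaps):
        """Return an optimal move as (heap index, new heap size), or an
        arbitrary legal move if the position is already lost."""
        s = nim_sum(heaps)
        if s != 0:
            for i, h in enumerate(heaps):
                target = h ^ s          # heap size that zeroes the nim-sum
                if target < h:          # the move must actually remove objects
                    return i, target
        # Lost position: any legal move, e.g. take one object from a nonempty heap.
        i = next(i for i, h in enumerate(heaps) if h > 0)
        return i, heaps[i] - 1

    # Example: from heaps (3, 4, 5) the optimal move reduces heap 0 to size 1,
    # because 1 XOR 4 XOR 5 == 0.
    print(best_move([3, 4, 5]))  # -> (0, 1)

A candidate system is then easy to score: over many positions, its moves can be compared against the nim-sum optimum that such a reference computes.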

Unlike Artificial Intelligence, Artificial Consciousness has not often been considered something that should yield great economic results. Artificial Consciousness research has mainly been a study of implementing certain aspects of consciousness, often with systems which are otherwise considered Artificial Intelligence, such as neural networks or genetic algorithms, but certain properties of which were important for studying the implementation of certain aspects of consciousness. And even an Artificial Consciousness system which implements most of the aspects of consciousness is not necessarily something very smart and powerful: though it may have such theoretical potential, it may for physical reasons be quite slow and show no advanced behaviour, while still being very important for studying the aspects of mind. So Artificial Consciousness has sometimes been considered more closely related to psychology, especially newer psychology, than to computer science; it may even seem somewhat more similar to mathematics than to computer programming. It seems obvious that its main aim is to understand.

So it seems logical to consider that Artificial Consciousness is an effort to implement the known objective aspects of consciousness. This is a wide definition, which allows many different approaches, but it still gives us an idea of what Artificial Consciousness is.

References

1. Aleksander, Igor, Artificial Neuroconsciousness: An Update (IWANN, 1995).

2. Holland, Owen, Machine Consciousness (Journal of Consciousness Studies, 2003).

3. Baars, Bernard, A Cognitive Theory of Consciousness (Cambridge University Press, 1988-1998).

4. Nagel, Thomas, What is it like to be a bat? (Philosophical Review LXXXIII, 4, October 1974).

5. Aleksander, Igor, Holland, Owen, Will fact match fiction as scientists start work on thinking robot? (The Guardian, 2003).

6. McCarthy, John, Generality in Artificial Intelligence (Stanford University, 1971-1987).
