Mathesis Universalis     No.8 - Autumn 1998


C.1. Is Consciousness a Computational Property?

Gilbert Caplain, ENPC-Cermics, 6 et 8 avenue Blaise Pascal, Cité Descartes - Champs-sur-Marne, F-77455 Marne-la-Vallée Cedex 2, France,
Gilbert.Caplain@cermics.enpc.fr

Keywords: consciousness, knowledge, belief, artificial intelligence

Abstract: A proof is outlined that consciousness cannot be adequately described as a computational structure and/or process. The proof makes use of a well-known but paradoxical ability of consciousness to reach ascertained knowledge (as opposed to mere belief) in some cases. Although this result rules out "naive reductionism", it does not fully settle the reductionism vs. dualism debate in favor of the latter; it merely leads to a kind of weak dualism.


C.2. Computation and the Science of Mind

Paul Schweizer, Centre for Cognitive Science, University of Edinburgh, Scotland,
paul@cogsci.ed.ac.uk

Keywords: computational paradigm, mental content, consciousness

Abstract: The main thesis of the paper is that the computational paradigm can explain neither consciousness nor representational content, and hence cannot explain the mind as it is standardly conceived. Computational procedures are not constitutive of mind, and thus cannot play the foundational role often ascribed to them in AI and cognitive science. However, a computational description of the brain may still provide a scientifically fruitful level of analysis, one that links consciousness and representational content with physical processes.


C.3. Mind versus Gödel

Damjan Bojadziev, Institute "Jožef Stefan", Jamova 39, 61111 Ljubljana, Slovenia,
damjan.bojadziev@ijs.si - http://nl.ijs.si/~damjan/me.html

Keywords: Gödel's theorems, self-reference, artificial intelligence, reflexive sequences of theories

Abstract: Formal self-reference in Gödel's theorems has various features in common with self-reference in minds and computers. These theorems do not imply that there can be no formal, computational models of the mind; on the contrary, they suggest the existence of such models within a conception of the mind as something that has its own limitations, similar to those of formal systems. If reflexive theories do not themselves suffice as models of mind-like reflection, reflexive sequences of reflexive theories could be used.
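
As background for the abstract's two central notions (standard textbook constructions, not results specific to the paper): the self-reference in Gödel's first theorem is carried by a sentence G that asserts its own unprovability, and a reflexive sequence of theories is built by repeatedly adjoining consistency statements:

    T \vdash G \leftrightarrow \neg\,\mathrm{Prov}_T(\ulcorner G \urcorner)

    T_0 = T, \qquad T_{n+1} = T_n + \mathrm{Con}(T_n)

Each stage T_{n+1} proves something about T_n (namely its consistency) that T_n itself cannot, which is the kind of stepwise self-reflection the abstract proposes as a model.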


C.4. Computation and Understanding

Mario Radovan, University of Rijeka, Faculty of Pedagogy, Omladinska 14, 51000 Rijeka, Croatia,
mradovan@mapef.pefri.hr

Keywords: mind, consciousness, computation, language of thought, connectionism, metaphor, intelligence, understanding, background, commitment

Abstract: The paper discusses the symbolic (classical) and the connectionist approaches to the development of intelligent systems. It is argued that the two approaches are primarily two different ways of describing the same system: they have the same expressive power and face the same essential limitation. Some basic problems are discussed concerning the theoretical possibility of constructing a machine that could replicate (and surpass) human cognitive abilities. It is claimed that the most important of these problems concerns the current scientific taxonomy, which offers no suitable way to deal with the phenomenon of the subjective.


C.5. What Internal Languages Can't Do

Peter Hipwell, Centre for Cognitive Science, University of Edinburgh,
petehip@cogsci.ed.ac.uk

Keywords: language, analogy, emergence

Abstract: The ability of artificial internal languages to mirror the world is compared with the power of natural language systems. It is concluded that internal languages are just as arbitrary and therefore have no representational advantage. Alternative forms of representation, including particle interaction in cellular automata, are considered.
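
To make the last alternative concrete (an illustrative sketch only, not code from the paper): in one-dimensional cellular automata such as Wolfram's rule 110, localized patterns propagate and collide like particles, and these interactions can themselves carry representational work. A minimal Python simulation:

import random

RULE = 110  # Wolfram rule number; bit i of RULE is the successor state of 3-cell neighborhood i

def step(cells):
    """Advance one generation with periodic (wrap-around) boundaries."""
    n = len(cells)
    return [
        (RULE >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

if __name__ == "__main__":
    width, generations = 64, 32
    random.seed(110)
    # Random start: glider-like particles emerge where differing regions meet and collide.
    cells = [random.randint(0, 1) for _ in range(width)]
    for _ in range(generations):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)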


C.6. The Chinese Room Argument: Consciousness and Understanding

Simone Gozzano, via della Balduina 73, 00136 Rome, Italy,
s.gozzan@phil.uniroma3.it

Keywords: Searle's Chinese room

Abstract: The aim of this paper is to submit that the "Chinese room" argument rests on the assumption that understanding a sentence necessarily implies being conscious of its content. This assumption can be challenged by showing that two notions of consciousness come into play, one found in AI, the other in Searle's argument, and that the former is an essential condition for the notion used by Searle. If Searle discards the first, he not only has trouble explaining how we can learn a language, but also puts the validity of his own argument in jeopardy.