Mathesis Universalis     No.8 - Autumn 1998


B.1. Computation and Embodied Agency

Philip E. Agre, Department of Communication, University of California, San Diego, La Jolla, California 92093-0503,
pagre@ucsd.edu

Keywords: artificial intelligence, planning, structural coupling, critical cognitive science, history of ideas, interaction, environment

Abstract: An emerging movement in artificial intelligence research has explored computational theories of agents' interactions with their environments. This research has made clear that many historically important ideas about computation are not well-suited to the design of agents with bodies, or to the analysis of these agents' embodied activities. This paper will review some of the difficulties and describe some of the concepts that are guiding the new research, as well as the increasing dialog between AI research and research in fields as disparate as phenomenology and physics.


B.2. Why Philosophy? On the Importance of Knowledge Representation and its Relation to Modelling Cognition

Markus F. Peschl, Dept. for Philosophy of Science, University of Vienna, Sensengasse 8/10, A-1090 Wien, Austria, Europe,
a6111daa@vm.univie.ac.at

Keywords: cognition, epistemology, HCI, knowledge representation

Abstract: What is the role of (knowledge) representation and epistemological issues in the fields of cognitive modelling and the development of human-computer interfaces/interaction (HCI)? This paper argues that the question of knowledge representation is the common link and the foundation underlying these two domains. The main points of this paper can be summarized as follows:

(i) Humans and computers have to be considered as two representational systems which interact with each other via the externalization of representations. (ii) There are different levels and forms of representation involved in the process of HCI as well as in the processing mechanisms of the respective system. (iii) As an implication, the problem arises of a mismatch between these representational forms; in some cases this mismatch leads to failures in the effectiveness of HCIs.

The main argument is that representations (e.g., symbols) typically ascribed to humans are built/projected into computers - the problem is, however, that these representations are merely external manifestations of internal neural representations whose nature is still under investigation and whose structure seems to be different from the traditional (i.e., referential) understanding of representation. This seems to be a serious methodological problem.

This paper suggests a way out of this problem: first, it is important to understand the dynamics of internal neural representations more deeply and to take this knowledge seriously in the development of HCIs. Second, the task of HCI design should be to trigger appropriate representations, processes, and/or state transitions in the participating systems. This enables an effective and closed feedback loop between these systems. The goal of this paper is not to give detailed instructions on how to build a better cognitive model and/or HCI, but to investigate the epistemological and representational issues arising in these domains. Furthermore, some suggestions are made on how to avoid methodological and epistemological "traps" in these fields.


B.3. Intelligent Objects: An Integration of Knowledge, Inference and Objects

Xindong Wu, Sita Ramakrishnan, Heinz Schmidt,
Department of Software Development, Monash University, 900 Dandenong Road, Melbourne, VIC 3145, Australia,
{xindong,sitar,hws,dai}@insect.sd.monash.edu.au

Keywords: AI programming rules, objects, intelligent objects, knowledge objects

Abstract: True improvements in large computer systems always come through their engineering devices. In AI, one of the fundamental differences from conventional computer science (such as software engineering and database technology) is its own established programming methodology. Rule-based programming has been dominant in AI research and applications. However, existing rule-based programming systems and tools suffer from a number of inherent engineering problems. Most notably, they are inefficient at structural representation, and rules in general lack the software engineering devices needed to make them a viable choice for large programs. Many researchers have therefore begun to integrate the rule-based paradigm with object-oriented programming, whose engineering strength lies in these areas. This paper establishes the concepts of knowledge objects and intelligent objects based on the integration of rules and objects, and outlines an extended object model and an on-going project of the authors' design along this direction.
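The integration the abstract describes can be illustrated with a minimal sketch. This is a hypothetical construction, not the authors' extended object model: an "intelligent object" couples ordinary object state and methods with a local rule base evaluated against that state.

```python
# Hypothetical sketch of an "intelligent object": an object that carries
# its own (condition, action) rules and can run inference over its state.

class IntelligentObject:
    def __init__(self):
        self.state = {}
        self.rules = []          # list of (condition, action) pairs

    def add_rule(self, condition, action):
        self.rules.append((condition, action))

    def infer(self):
        """Fire every rule whose condition holds for the current state."""
        fired = 0
        for condition, action in self.rules:
            if condition(self.state):
                action(self.state)
                fired += 1
        return fired

class Thermostat(IntelligentObject):
    """A concrete subclass: rules are inherited machinery, state is local."""
    def __init__(self, setpoint):
        super().__init__()
        self.state.update(temp=20.0, heater=False)
        self.add_rule(lambda s: s["temp"] < setpoint,
                      lambda s: s.update(heater=True))
        self.add_rule(lambda s: s["temp"] >= setpoint,
                      lambda s: s.update(heater=False))

t = Thermostat(setpoint=21.0)
t.state["temp"] = 18.5
t.infer()
print(t.state["heater"])  # True: the "too cold" rule fired
```

The point of the combination is visible even at this scale: the object model supplies encapsulation and inheritance (engineering structure), while the rule base supplies declarative behaviour that would otherwise be scattered through method bodies.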


B.4. Emotion-Based Learning: Building Sentient Survivable Systems

Steven Walczak, School of Computer and Information Sciences, University of South Alabama, Mobile, Alabama 36688-0002 USA

Keywords: emotion, machine learning, adaptation, problem solving

Abstract: Artificial intelligence has succeeded in emulating the expertise of humans in narrowly defined domains and in simulating the training of neural systems. Although "intelligent" by a more limited definition of Turing's test, these systems are not capable of surviving in complex dynamic environments. Animals and humans alike learn to survive through their perception of pain and pleasure. Intelligent systems can model the affective processes of humans to learn to adapt automatically to their environment, allowing them to perform and survive in unknown and potentially hostile environments. A model of affective learning and reasoning has been implemented in the program FEEL. Two simulations using FEEL's affect model are performed to demonstrate the benefits of affect-based reasoning.
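The abstract does not give FEEL's algorithm, so the following is only an illustrative sketch of one simple reading of affect-based adaptation: actions that produce "pleasure" (positive affect) become preferred; actions that produce "pain" (negative affect) are avoided.

```python
# Illustrative sketch only -- not FEEL's actual implementation.
# An agent adjusts its action preferences from pain/pleasure feedback.

import random

class AffectiveAgent:
    def __init__(self, actions, learning_rate=0.2):
        self.preference = {a: 0.0 for a in actions}
        self.lr = learning_rate

    def choose(self):
        # Prefer the action with the highest learned affect value.
        return max(self.preference, key=self.preference.get)

    def feel(self, action, affect):
        """affect > 0 is pleasure, affect < 0 is pain."""
        self.preference[action] += self.lr * (affect - self.preference[action])

random.seed(0)  # for reproducibility of the run below
agent = AffectiveAgent(["approach", "flee"])
# A hostile environment: approaching the hazard hurts, fleeing is safe.
for _ in range(20):
    a = agent.choose() if random.random() > 0.3 else random.choice(["approach", "flee"])
    agent.feel(a, -1.0 if a == "approach" else +0.5)
print(agent.choose())  # the agent has learned to flee
```

Even this toy version shows the claimed benefit: the agent needs no prior model of the hazard; pain alone reshapes its behaviour toward survival.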


B.5. The Theoretical Foundations for Engineering a Conscious Quantum Computer

Richard L. Amoroso, The Noetic Institute, 120 Village Sq. #49, Orinda, Ca, 94563-2502 USA, Phone: 510 893 0467,
ramoroso@hooked.net

Keywords: AI, conscious computer, hard problem, molecular electronics, noumenon of consciousness, quantum cerebroscopics, quantum computer, teleology

Abstract: Attempts to mimic human intelligence through methods of classical computing have failed because implementing basic elements of rationality has proven obstinate to the design criterion of machine intelligence. A radical definition of Consciousness is proposed, describing awareness as the dynamic representation of a noumenon comprised of three basic states, and not itself fundamental as generally defined in the current reductionist view of the standard model, which has created an intractable hard problem of consciousness as defined by Chalmers. By clarifying the definition of matter, a broader ontological quantum theory removes immateriality from the Cartesian split, bringing mind into the physical realm for pragmatic investigation. Evidence suggests that the brain is a naturally occurring quantum computer; but the brain, not being paramount to awareness, does not itself evanesce consciousness without the interaction of a nonlocal conscious process, because Mind is not a computer and cannot be reduced to brain states alone. The proposed cosmology of consciousness is indicative of a teleological principle as an inherent part of a conscious universe. By applying the parameters of quantum brain dynamics to the stack of a specialized hybrid electronic optical quantum computer with a heterosoric molecular crystal core, consciousness evanesces through entrainment of the nonlocal conscious processes. This `extracellular containment of natural intelligence' probably represents the only viable direction for AI to simulate `conscious computing' because true consciousness = life.


B.6. Mind: Neural Computing Plus Quantum Consciousness

Mitja Perus, National Institute of Chemistry, Lab. for Molecular Modelling and NMR, Hajdrihova 19 (POB 3430), SLO-1001 Ljubljana, Slovenia,
mitja.perus@uni-lj.si

Keywords: quantum, neural, network, attractor, consciousness, mind, coherence, Green function, density matrix, wave-function collapse, coupled oscillators

Abstract: Characteristics of mind are compared to the capabilities of computers. It is argued that contemporary computers cannot realize conscious information processing. It is then discussed what characteristics future computers would have to possess in order to be treated as mind-like. As in human consciousness, they would have to combine neural computing with quantum coherence. In our brain, fractal-like multi-level (neural, subcellular, quantum) synergetic processing realizes a dynamic virtual system of patterns having the role of attractors. The neuro-quantum interplay of attractors and coherent wholes is, it seems, a necessary condition for consciousness. Mathematical analogies in the models of associative neural networks and quantum networks are then presented. Because the presented set of coupled neural equations realizes efficient content-addressable memory, we can (using our neuro-quantum comparison) argue that a mathematically equivalent set of coupled quantum equations may also realize similar collective information processing, including some additional quantum improvements necessary for conscious mind.
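The "coupled neural equations realizing content-addressable memory" alluded to above are, in their standard form, a Hopfield-style attractor network. The sketch below is a textbook instance of that idea, not code from the paper: a stored pattern acts as an attractor, so a corrupted cue relaxes back to it.

```python
# Minimal Hopfield-style attractor network (standard textbook form,
# illustrating content-addressable memory; not taken from the paper).

def train(patterns):
    """Hebbian learning: W[i][j] accumulates pairwise co-activations."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / n
    return W

def recall(W, cue, steps=5):
    """Iterate the coupled update equations until the state settles."""
    s = list(cue)
    n = len(s)
    for _ in range(steps):
        for i in range(n):
            h = sum(W[i][j] * s[j] for j in range(n))
            s[i] = 1 if h >= 0 else -1
    return s

stored = [1, 1, 1, 1, -1, -1, -1, -1]
W = train([stored])
noisy = [1, -1, 1, 1, -1, -1, -1, -1]   # one bit flipped
print(recall(W, noisy) == stored)        # True: the cue falls into the attractor
```

The recall from a partial cue is what "content-addressable" means here: retrieval is by similarity of content, not by address, which is the property the abstract carries over to the hypothesized quantum analogue.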


B.7. Computation without Representation: Nonsymbolic-Analog Processing

Robert S. Stufflebeam, Washington University, Philosophy-Neuroscience-Psychology Program, Campus Box 1073, One Brookings Dr., St. Louis, MO, 63108, USA,
rob@twinearth.wustl.edu

Keywords: representation, computation, discovery, explanation, PDP

Abstract: Is it appropriate to posit internal representations to explain how intelligent systems work? Insofar as intelligent systems are supposed to be computational systems, and computation is supposed to require a "medium" of internal representations, the answer should be yes. The paper opposes that view. Its purpose is to defend nonsymbolic-analog processing as part of a computational yet minimally representational framework for explaining how biological intelligent systems work. It is argued that nonsymbolic-analog processing is a type of computation. The task then is to determine whether parallel distributed processing is mediated by internal representations. Of particular interest is whether so-called "distributed representations" warrant being called representations.