Mathesis Universalis     No.8 - Autumn 1998
When using any part of this text (by Witold Marciszewski), please refer to the URL listed at the bottom.



Were Dreyfus and Winograd Right?
A Comment on the Volume's Intention
as Expressed in the Subtitle


The subtitle, or something like a motto, of the volume reads: Were Dreyfus and Winograd right? With it the Editors refer to a debate concerning the strong AI approach. The volume, which appears two decades later, is meant to examine, as the Editors put it, "basic positions in arguments, analyse current status, and try to predict future AI orientations".

The Editors may be right in assuming that the content of that debate is well known to most of the addressees of the volume, namely those who witnessed the development of AI from its very start. However, there must be a minority who, either because of their younger age or geographical distance, did not participate in those events; the present comment is addressed to that audience.

The famous papers of the authors mentioned in the subtitle have been reprinted in a special issue of Informatica. An International Journal of Computing and Informatics, published by The Slovene Society Informatika in Ljubljana. The issue appeared as No. 4, vol. 19, in November 1995, bearing a description of its subject identical with the title of the volume under review. It opens with the contributions of the authors whose names appear in that description. The Informatica Editors introduce both contributions with the following questions, which express the contention of that special issue.

Were not H.L. Dreyfus, S.E. Dreyfus and T. Winograd right about this issue years ago? Were the attacks on them by the strong-AI community and large parts of the formal-sciences community unjustified? We believe the answer is yes.

Let me start each report with the abstract as given by the author of the contribution reported.


Terry Winograd
Stanford University, Computer Science Dept.
Thinking Machines: Can There Be? Are We?

Keywords: thinking machines, broader understanding

Abstract: Artificial intelligence researchers predict that "thinking machines" will take over our mental work, just as their mechanical predecessors were intended to eliminate physical drudgery. Critics have argued with equal fervor that "thinking machine" is a contradiction in terms. Computers, with their foundations of cold logic, can never be creative or insightful or possess real judgment. Although my own understanding developed through active participation in artificial intelligence research, I have now come to recognize a larger grain of truth in the criticisms than in the enthusiastic predictions. The source of the difficulties will not be found in the details of silicon micro-circuits or of Boolean logic, but in a basic philosophy of patchwork rationalism that has guided the research. In this paper I review the guiding principles of artificial intelligence and argue that as now conceived it is limited to a very particular kind of intelligence: one that can usefully be likened to bureaucracy. In conclusion I will briefly introduce an orientation I call hermeneutic constructivism and illustrate how it can lead to an alternative path of design.

Reviewer's Comment.

The above abstract renders the main ideas of the essay faithfully, provided that two definitions are added: that of the view criticized, called patchwork rationalism, and that of the view proposed, called hermeneutic constructivism.

As to the former, no explicit definition is given, but the intended meaning can be gathered from the relevant contexts. With philosophical competence, Winograd traces links between the strong AI ideology and the ideas of 17th-century rationalism. That rationalism has been modified by AI theorists into a "patchwork" version. It claims that rational behaviour attaches not to a self which would integrate mental processes, but to a multitude of agents ("homunculi" etc.) whose operations somehow spontaneously come together to form a consistent action.

As to Winograd's positive suggestion, it amounts to claiming that one should do justice to all the differences between minds and computers, which he convincingly hints at. This attitude is certainly hermeneutic, being supported by references to such champions of hermeneutic phenomenology as Gadamer. Wittgenstein and his Oxford successors are also called to witness the actual complexity of linguistic and mental processes. Its being constructive, in the sense of offering a new project for AI research, seems less conspicuous; this may be taken as a sign that the approaches to AI rejected by the author, though naive or unrealistic in their extreme optimism, have no clear and realistic alternative so far.


Hubert L. Dreyfus and Stuart E. Dreyfus
University of California, Berkeley
Making a Mind vs. Modeling the Brain: AI Back at a Branchpoint

Keywords: mind, brain, AI directions

Abstract: Nothing seems more possible to me than that people some day will come to the definite opinion that there is no copy in the nervous system which corresponds to a particular thought, or a particular idea, or memory. Information is not stored anywhere in particular. Rather it is stored everywhere. Information is better thought of as "evoked" than "found".

Reviewer's Comment.

The first sentence above, quoted after Wittgenstein, states the obvious point that the nervous system is not like an archive of photos in which each thing perceived or remembered is mirrored in a separate picture. How absurd such a supposition would be can be seen in the case of concepts which appear in sets of postulates (axioms), e.g. the notion of natural number as defined by the Peano axioms; even if each axiom functioned like a separate copy (of what?), the concept resulting from all of them would not be a new copy to be added to those identical with the axioms (the relation between connectionism and axiomatic systems is extensively discussed in my book Logic from a Rhetorical Point of View, de Gruyter 1994).
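For the reader to whom the example is unfamiliar, the Peano axioms referred to above may be recalled in one standard formulation (the particular axiomatization is my choice for illustration; the essay does not specify one), with 0 and the successor function s as primitives:

```latex
\begin{align*}
&\text{(P1)}\quad 0 \in \mathbb{N} \\
&\text{(P2)}\quad \forall n\, \bigl(n \in \mathbb{N} \rightarrow s(n) \in \mathbb{N}\bigr) \\
&\text{(P3)}\quad \forall n\, \bigl(s(n) \neq 0\bigr) \\
&\text{(P4)}\quad \forall m\, \forall n\, \bigl(s(m) = s(n) \rightarrow m = n\bigr) \\
&\text{(P5)}\quad \Bigl(\varphi(0) \wedge \forall n\,\bigl(\varphi(n) \rightarrow \varphi(s(n))\bigr)\Bigr) \rightarrow \forall n\,\varphi(n)
\end{align*}
```

The point of the example is visible here: the concept of natural number is fixed only by the five postulates jointly, so no single axiom, nor any further item added to them, is a "copy" of that concept.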

As for the rest of the essay, its main contention is hinted at in the very title. Instead of dwelling on it, let me focus on just one passage which is crucial for this review: though distributed over several separate items, it converges towards one point which I shall call "pannumeralism". The passage in question, quoted after A. Newell and endorsed by the Authors, runs as follows.

The digital-computer field defined computers as machines that manipulate numbers. The great thing was, adherents said, that everything could be encoded into numbers, even instructions. In contrast, the scientists in AI saw computers as machines that manipulate symbols. The great thing was, they said, that everything could be encoded into symbols, even numbers.

The alleged contrast between the two approaches is illusory. The only symbols which come into play are those of numbers; the whole numbers in the "inner life" of the computer are not abstract objects but numerical symbols; hence the class of symbols and the class of numerals are identical from either point of view, that of AI and that of computer science. The greatest thing, let me add, is just that identity of symbols and numerals (as in the Gödelian arithmetization of language); owing to it we have computers with their most advanced use, which amounts to AI.
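The Gödelian arithmetization invoked here can be illustrated by a minimal sketch in Python. The alphabet, its code numbers, and the function names below are illustrative choices of mine, not taken from the text; the prime-power scheme, however, is the classical one: every string of symbols is mapped reversibly to a single natural number, so that manipulating symbols and manipulating numbers become one and the same activity.

```python
# A sketch of Goedel-style arithmetization: each symbol receives a code
# number, and a string of symbols is encoded as a product of consecutive
# prime powers, p1^c1 * p2^c2 * ..., which can be uniquely decoded again.

ALPHABET = {'0': 1, 's': 2, '=': 3, '+': 4, '(': 5, ')': 6}  # illustrative codes


def primes(n):
    """Return the first n primes by trial division."""
    found, candidate = [], 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found


def godel_number(formula):
    """Encode a string of symbols as a single natural number."""
    g = 1
    for p, symbol in zip(primes(len(formula)), formula):
        g *= p ** ALPHabet_code(symbol)
    return g


def ALPHabet_code(symbol):
    return ALPHABET[symbol]


def decode(g):
    """Recover the symbol string from its Goedel number."""
    inverse = {code: sym for sym, code in ALPHABET.items()}
    out, candidate = [], 2
    while g > 1:
        if g % candidate == 0:
            exponent = 0
            while g % candidate == 0:
                g //= candidate
                exponent += 1
            out.append(inverse[exponent])
        candidate += 1
    return ''.join(out)


print(godel_number("s0"))            # 2^2 * 3^1 → 12
print(decode(godel_number("s(0)=s(0)")))
```

Decoding by factorization shows why the class of numerals and the class of symbols coincide in the computer: nothing but the number is stored, yet the formula is fully recoverable from it.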

The above comment is not merely a terminological observation. Once the identity of the realm of symbols and the realm of numerals is clearly stated, we obtain a new perspective on the whole AI debate, a perspective to be added to the one dominating in the volume under review; this is discussed in another context within this review.