Witold Marciszewski
1. Leibniz vs Descartes in views on knowledge
Gottfried Wilhelm Leibniz (1646-1716), who anticipated so many areas of modern knowledge, is also a forerunner of what we nowadays call knowledge engineering -- through his creation of library systems and his projects of collective research, the latter fruitfully materialized in the establishing of learned societies and ministries of science according to some of his projects.
These creations are so commonly known that there is no need to dwell upon them, except, possibly, for Leibniz's role in establishing state offices, like ministries, to manage science in the country in question. In fact, his project of the Prussian Academy in Berlin made it more similar to a state office than to a learned society in the style of the Royal Society of London. Even greater was this similarity in the case of the Russian Academy, which complied with the autocratic mode of ruling of Peter the Great.
We need this term to cover the issues of both correctly producing and efficiently organizing and managing human knowledge. The former is traditionally handled by epistemology and logic, including the methodology of sciences, the latter by a cluster of new specialized disciplines. However, these two fields are not unrelated to each other. To suggest the argument with an example, let it be recalled that the procedures of formalizing proofs, though belonging to the sphere of mathematical production (and so handled by mathematical logic), turned out to be an indispensable tool for databases, in particular mathematical ones, which belong to the domain of knowledge organization and management.
Let us start from a comparison of Leibniz's views with those of Rene Descartes (1596-1650). Why should we start in this way? The answer is as follows. There are two views of Leibniz, to be discussed in this essay, which oppose each other, namely his (i) explicit belief in the possibility of automating the processes of producing knowledge, and his (ii) implicit questioning of the same possibility because of the role he attributed to perception as characteristic of organic life.
The former makes him closer to Alan Turing as a pioneer of hard AI, the latter to John von Neumann as one who acknowledged the peculiarities of organisms, seeing their enormous advantages over electronic devices. These views, though opposing each other, have a point in common, to wit, each takes a stand against Descartes' conceptions of mind and knowledge; thus Descartes provides us with a remarkable contrastive background against which to perceive both of Leibniz's approaches more clearly.
The fact that Descartes' position is so strikingly one-sided, disregarding the complexities of mind-matter relations, makes it even more useful for the present purposes. The narrower a view is, the greater its cognitive value may prove, provided its limitations are the costs of a fitting idealization, even a counterfactual one. In our century a nice example of such a strategy is found in the programme for science furthered by logical empiricism. An even more impressive paradigm (firmly opposing both empiricism and logicism), which has preserved its vitality for four centuries, is that produced by Descartes. Any reflection on knowledge, mind and logic has to involve references to his illuminatingly clear (even if false) ideas.
To express Descartes' position in the most concise way, let us put it as follows: the mind does not belong to the same world to which matter does (cp Ryle [1949]). Thus Descartes created the paradigm of a physics-independent theory of mind; the article "the" is to hint at the enormous impact of that theory, to the extent of its becoming a commonsense approach (the extremes of behaviourism may be partly explained as a revolutionary reaction to that paradigm). Leibniz's point results from denying that denial, so that it reads `the mind does belong' (etc). Thus Leibniz paves the way to what nowadays starts to be called the `physics of mind' (cp Penrose [1988]).
Now, there are two theoretically possible concretisations of this general point: either (i) one reduces thought to matter (as, eg, in hard AI) or (ii) one acknowledges their distinctness and interaction (as, eg, Popper and Eccles [1977]; as to hard and weak AI, cp Gams [1995]). Leibniz never endorsed (i) in his philosophy, but in his practical knowledge-engineering projects he came close to it, while in the mainstream of his philosophy it was point (ii) which he firmly held. In this sense we can speak of two legacies of Leibniz. His approaching point (i) was connected with the idea of ars combinatoria as a universal method of problem solving which was combinatory and finitist, hence feasible for mechanical devices.
In the sequel, the fairly `materialistic' point (as that of strong AI) is designated by the term `anti-physicalism', while the opposite one by `physicalism'; this terminology may seem rather odd, but it is justified as follows. The view that physical devices (mechanical, electronic, etc) in principle (technical complications notwithstanding) can do the same job as organisms and minds do involves the irrelevance of the physical kind of hardware: it is not matter but software that matters. Thus the stress put on the import of what constitutes the physical component deserves to be denoted as `physicalism', while its opposite as `antiphysicalism'.
Before we enter the discussion of both approaches, it is in order to hint at their implications for knowledge engineering. Knowledge is produced by intelligence, hence a step is needed towards a theory of intelligence as a data-processing faculty. Let us assume that in producing knowledge there are involved three kinds of data processing, and three skills, respectively, to wit reasoning (including computing), abstraction, and ordering. Reasoning with computing is the unique member of this triad which so far, to some extent, has been successfully mechanized, that is, made feasible for machines, esp. electronic ones; hence it plays a special role in the present discussion (cp Marciszewski and Murawski [1995]).
As to reasoning, therefore, we can already see its role in knowledge organization and engineering, for instance that of the inferential mechanism in expert systems and databases. As for the other skills, the question is not settled yet in an empirical way, hence support should be expected from philosophy. Abstraction is a subject-matter of AI research, but only at its most primitive stage, namely that of pattern recognition (from that to, eg, abstracting transfinite cardinals is a rather long way). Should we envisage its full mastering by electronic devices in the future, as antiphysicalism claims? If so, then it is worthwhile to devote time and money to such promising research. If not, then it is wiser to spare resources for more feasible and profitable AI projects.
The same dilemma appears with regard to the skill of ordering, which includes the creation of structures, such as mathematical, syntactic, musical, technical, political ones, etc -- if we endorse such an interpretation of Georg Cantor's well-ordering theorem.
The ordering issue turns out to be even more involved than that of abstraction, since any non-trivial ordering presupposes acts of assessing certain values. In the face of the enormous multitude of elements from which the relevant ones are to be selected for the structure in question, one has to be able to judge which ones are duly important, relevant, etc. This, in turn, presupposes a system of values or goals. Such a system is inborn in any living entity, as these have such goals (imparting values to means) as self-preservation and reproduction, to mention the most primitive ones. It is hard to imagine how an electronic device could share such attitudes; however, if there are people who can imagine it, let them do their best (anyway, were the present author a VIP in a ministry of science, he would never grant financial support to research based on such antiphysicalist philosophy).
These are only a few examples of the connexions between the theme of this essay and the issues of producing and organizing knowledge. However, let them suffice to encourage those interested in these issues, as well as those who like inquiring into the philosophical and logical presumptions of AI research inspired by Leibniz's ideas.
2. On physicalism and antiphysicalism in logic
Physicalism holds human thoughts and acts to be determined by physical laws (Webster [1971]). Logical Physicalism, LP for short, holds reasoning processes to be determined by laws deriving from physical properties of the brain, hence from some hardware properties.
In the heroic times of logical empiricism people used to employ the term `physicalism' in a different sense; that story, though, seems to be half-forgotten, so one can give this word a new meaning, as suggested in Marciszewski and Murawski [1995]. An alternative suggestion is due to Schnelle [1988], who uses the phrase `naturalization of logic'. However, it seems desirable to have a term related to the phrase `physics of thought' (see below). Moreover, the use of the adjective `natural' in contexts like `natural logic' has already been established for what Gentzen called das natuerliche Schliessen.
It was the famous physicist Roger Penrose [1988] who was bold enough to call for inquiries into the mathematics and physics of thought. His ideas can be fruitfully combined with those of John von Neumann [1958], which prove crucial for the story in question.
However, when associating physics with logic and a theory of mind, one has to take into account the strong hold which the Cartesian paradigm concerning mind-matter relations has gained over philosophers. From the perspective of that paradigm, any phrase like ``the physics of thought'' is even worse than a philosophical heresy; it is felt as a category-mistake, like saying that numbers happen to be warm, or that some thoughts are yellow. The term category-mistake is due to Ryle [1949]. In the same book the Cartesian doctrine is rendered as follows: ``Human bodies [...] are subject to the mechanical laws which govern all other bodies in space. [...] Minds are not in space, nor are their operations subject to mechanical laws.'' (p.11). When the mechanical laws (like those stated by Newton) are identified with the totality of physical laws, the mind-body problem is doomed to be ``solved'' either in the Cartesian way or in the behaviouristic way (endorsed by Ryle). However, modern physics offers a more sophisticated approach, and that seems to accord with Leibniz's insights.
Had Leibniz had more influence on modern minds than Descartes seems to have even in our times, the idea of the physics of thought would be less shocking. For Leibniz this idea would be rooted in the notion of the pre-established harmony between the perceptions of the monads and the motions of the bodies. As he puts it, there is ``the concord and the physical union of the soul and the body, which exists without the one being able to change the laws of the other''.
See Principes de la nature et de la grace fond\'es en raison, item 3. This statement is taken from the English translation, Leibniz [1973]. To render all the nuances of this important text, it is worth while to quote it in the French original and in a suggestive German translation: <<harmonie parfaite entre les perceptions de la Monade et les mouvements des corps, pr\'e\'etablie d'abord entre le syst\`eme des causes efficientes et celuy des causes finales, et c'est en cela que consiste l'accord et l'union physique de l'\^ame et du corps, sans que l'un puisse changer les loix de l'autre.>> Here is the German text: ,,Daher besteht eine vollkommene Harmonie zwischen den Perzeptionen der Monade und den Bewegungen der Koerper, die von Anbeginn an zwischen dem System der Wirkursachen und dem der Zweckursachen praestabiliert ist; und eben darin besteht die Uebereinstimmung und die natuerliche Vereinigung von Seele und Koerper, ohne dass eines die Gesetze des anderen zu aendern vermoechte.'' See Leibniz [1982], p.6 f.
If so, then the laws of thought must exactly mirror the physical laws of functioning of the entity in question, and vice versa. Hence, since electronic automata are subject to different physical laws than organic automata, ie monads, the laws governing their intellectual processes must be different as well. This is a physicalistic thesis on the relevance of hardware to intellectual performances, inherent in the mature writings of Leibniz. On the other hand, the younger Leibniz's belief in the possibility of constructing an artificial reasoning automaton to entirely replace human reasoners implies the irrelevance of hardware in this important respect.
These opposing views may complement each other in an attempt to express a live fundamental insight surpassing either formulation. It seems a great task for Leibniz scholarship to inquire into the relations holding between those poles of Leibniz's thought. The present paper does not aim at such a remote target. Instead, it tries to clarify the tenets of logical physicalism and logical anti-physicalism and, furthermore, to present the reasons for either point as seen by Leibniz. Thus it should be treated as a preliminary study to pave the way for the more ambitious task of interpreting the alleged discrepancy.
In a natural way, the main body of this study should consist of four parts, two of them providing paradigmatic statements of antiphysicalism and physicalism, the former represented by Alan Turing, the latter by John von Neumann. Then there follow two items regarding Leibniz: one concerned with his supposed anti- and the other with his pro-physicalist attitude.
3. Turing's claim as to the insignificance of hardware
(i) Is a human brain a universal Turing machine?
(ii) Is the material that constitutes a thinking device, esp. a brain,
of any consequence?
Turing's answer to the questions stated above, as found in his [1950] article, is as follows. A human brain is really a kind of computer. It differs from the so-called universal Turing machine in that it may involve a random element, ie have instructions like this: ``throw the die (the throwing may have a counterpart in an electronic process) and put the resulting number into store n (say, 1000)''. Moreover, unlike the universal machine, it has only a finite store (memory).
To explain that the hardware to be used is irrelevant, Turing takes advantage of the fact that Charles Babbage's Analytical Engine was a real prototype of modern electronic computers although it was a mechanical device, using wheels and cards (Boden [1990, 46]; Babbage's ideas, going back to 1834, are discussed by Gandy [1988]).
Here is Turing's [1950] comment. "Since Babbage's machine was not electrical, and since all digital computers are in a sense equivalent, we see that this use of electricity cannot be of theoretical importance. [...] In the nervous system chemical phenomena are at least as important as electrical. In certain computers the storage system is mainly acoustic. The feature of using electricity is thus seen to be only a very superficial similarity. If we wish to find such similarities we should look rather for mathematical analogies of function."
That all digital computers are equivalent follows from the fact that they can mimic any discrete-state machine, ie, all of them are universal. A discrete-state machine is one that in a deterministic way passes step by step from a definite state to another state, each step being determined by an appropriate rule. In other words, each state is a function of the previous state and an input signal. Imagine, eg, a wheel which clicks round through 120$^\circ$ once a second, but may be stopped by a lever operated from outside; a lamp is to light in one of the positions of the wheel. Let the machine states, ie, the three possible positions of the wheel, be referred to as s1, s2, s3, and the input signals as i0 and i1; t (for `transition') is to denote the two-place function which assigns a next state to each pair sk, ik.
t(s1, i0) = s2     t(s2, i0) = s3     t(s3, i0) = s1
t(s1, i1) = s1     t(s2, i1) = s2     t(s3, i1) = s3
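To see how such a transition function determines the machine's behaviour, here is a minimal sketch in Python -- a present-day illustration, not anything Turing himself wrote; the position in which the lamp lights is an arbitrary assumption made here, since Turing's example leaves it open.

# A minimal sketch of the wheel example as a discrete-state machine.
# States s1, s2, s3 are the three positions of the wheel; input i0 means
# the lever is released (the wheel clicks on), i1 means the lever stops it.

TRANSITION = {
    ("s1", "i0"): "s2", ("s2", "i0"): "s3", ("s3", "i0"): "s1",
    ("s1", "i1"): "s1", ("s2", "i1"): "s2", ("s3", "i1"): "s3",
}

def lamp_on(state):
    # Turing leaves open in which position the lamp lights; s3 is an
    # arbitrary choice made here for illustration.
    return state == "s3"

def run(state, signals):
    # Each new state is a function of the previous state and the current
    # input signal -- the defining property of a discrete-state machine.
    history = [state]
    for signal in signals:
        state = TRANSITION[(state, signal)]
        history.append(state)
    return history

print(run("s1", ["i0", "i0", "i0", "i1"]))   # -> ['s1', 's2', 's3', 's1', 's1']
print(lamp_on("s3"))                          # -> True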
Turing's [1936-7] result is to the effect that any procedure which can be computed at all, ie, any procedure for which there is an algorithm, can be computed by his machine, called, therefore, universal. As Turing [1950] argued, the physical stuff from which such a machine is made, ie, its hardware component, is irrelevant to its performances in any respect, also with regard to methods of reasoning. In this sense, his claim opposes logical physicalism.
4. Von Neumann's claim as to the significance of hardware
For the sake of convenience, let us repeat the questions posed in the preceding section.
(i) Is a human brain a universal Turing machine?
(ii) Is the material that constitutes a thinking device, esp. a brain,
of any consequence?
While Turing [1950] answers YES to (i) (with the proviso that a brain may involve a random element) and NO to (ii), von Neumann [1958] answers YES to (ii), which implies NO to (i) (cp. Schnelle [1988], Penrose [1988]). Von Neumann concludes his essay as follows: "Thus logic and mathematics in the central nervous system, when viewed as languages, must structurally be essentially different from those languages to which our common experience refers" (ie those commonly used by logicians and mathematicians). This puts limitations on the project of creating Artificial Intelligence, unless a human creator proves able to imitate the emergence of the human brain and the conscious mind from the process of evolution (definitions of AI are found in Boden (ed.) [1990], Schnelle [1988], Sterelny [1991], Gams [1995]).
Von Neumann's point does not imply any postulate of a symbolic reconstruction of those neural systems that would yield an alternative logic or mathematics (a different set of theorems, or different meanings of operators). What is at variance is the information-processing technology used to produce concepts and theorems, when compared with that of formalized systems, Turing machines and digital computers. Technology involves hardware, ie a physical component, as well as software; hence von Neumann's point can be called physicalist. If it is right, then AI requires human-like hardware, contrary to the claim involved in Turing's project.
According to von Neumann, the hardware difference between a neural device and a digital computer consists in the former's (i) being partly analog (eg, chemical) and only partly digital; (ii) using a recording system that is not digital but statistical, which means that the sense of a signal depends on its intensity, rendered as the frequency of oscillations.
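The contrast may be illustrated by a toy sketch (the function names, numbers and threshold below are illustrative assumptions, not taken from von Neumann [1958]): in a digital code the exact sequence of discrete marks carries the message, while in a statistical code only the frequency of otherwise interchangeable impulses does.

# A toy contrast between digital and statistical (rate) coding.

def digital_decode(bits):
    # In a digital code the exact sequence of discrete symbols carries
    # the message; a single flipped bit changes its sense.
    return int("".join(str(b) for b in bits), 2)

def rate_decode(spike_times, window=1.0, threshold=30.0):
    # In a statistical code only the frequency of impulses within a time
    # window matters; the individual impulses are interchangeable.
    frequency = len(spike_times) / window      # impulses per second
    return "strong signal" if frequency >= threshold else "weak signal"

print(digital_decode([1, 0, 1, 1]))                  # -> 11
print(rate_decode([i * 0.025 for i in range(40)]))   # 40 Hz -> strong signal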
Here is an example which combines some recent neurological findings (Fischbach [1992], Crick and Koch [1992]) with a logician's reflexion. In visual awareness a significant role is played by 40-cycle-per-second oscillations in firing rate which synchronize the firing of neurons responding to different parts of a perceptual scene, and so the whole object, eg, a face, emerges. There are specialised cells responsible for reassembling the picture of a face from scattered components (parallel processing). Such integration is accompanied by abstraction, as the resulting picture corresponds to faces with similar features rather than to one face alone.
To find a logical point, let us fancy the way the human mind must have travelled from perceiving, say (instead of faces), the sun, the moon and round tree trunks, to the abstract concept of a circle (which, in turn, may have suggested the technological idea of a wheel). The process starts from unverbalized, even unapperceived (in Leibniz's sense) percepts, which are unconscious counterparts of statements like "the sun is round". In the long course of information processing, such true statements result in Euclid's true theorems on the circle; hence it is a truth-preserving process, characteristic of reasoning.
Thus, perception should be defined more broadly, so as to include intellectual percepts of mathematical and other abstract objects. This can be seen, eg, in Euclid's proofs, where the perception of an object, both concrete and typical, leads to general propositions (the famous Locke-Kant problem; cp. Beth [1970], Beth and Piaget [1966], reported by Marciszewski [1994]). The logical step in question is due to applying quantifiers, a fact that shows a possible mutual dependence of perception and reasoning. Since perceiving is due to the statistical (not digital) nature of brain signals, that dependence confirms von Neumann's contention that such a logical process requires a piece of hardware (hence a physical entity) different from that found in a digital computer.
5. Why Leibniz would NOT have accepted logical physicalism
Leibniz held it possible to build a logical machine matching humans in the ability of reasoning and surpassing them in infallibility: ut errare ne possimus quidem si velimus, et ut Veritas quasi picta, velut Machinae ope in charta expressa deprehendatur (letter to Oldenburg, Oct. 28, 1675, quoted by Couturat [1901]).
In his philosophy there were premises for judging that programme impossible, but the ``Zeitgeist'' led him to the opposite conclusion. It was the time of extreme optimism regarding the potentialities of the human mind (eg, Descartes was ready to prove all philosophical truths in a single conversation). All that was needed was to find proper ways of improving the actual human mind; in some programmes, such as that of Leibniz, those ways involved an ideal language combined with a universal calculus. Once one had such a system, one could feed it to a machine as well.
Though nobody had heard of a Turing machine, the logical idea of formalized reasoning, as algorithmic as computation (as claimed by Hobbes), was in vogue owing to the schoolmen, followed by Leibniz. Formalized reasoning requires just a sheet of paper (Turing's tape), a pencil (`calamus'), and an eraser. The steps can be so arranged that a single word is either written or erased in each step. Nihil enim aliud est calculus, quam operatio per characteres, quae in omni ratiocinatione locum habet (letter to Tschirnhaus, May 1678, see Couturat [1901]).
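A present-day reader may picture this `paper machine' with a small sketch, purely illustrative: the toy rule below is ordinary detachment (modus ponens), not Leibniz's own calculus, and each step writes or erases exactly one word on the sheet.

# A minimal sketch of a "paper machine": reasoning as an operation on
# written marks, one word written or erased per step.

sheet = ["p", "p->q"]            # marks already written on the paper

def step_write(sheet, word):
    # One step of the pencil: write exactly one new word.
    return sheet + [word]

def step_erase(sheet, word):
    # One step of the eraser: remove exactly one word.
    remaining = sheet.copy()
    remaining.remove(word)
    return remaining

# Detachment carried out as a purely symbolic operation on the sheet:
if "p" in sheet and "p->q" in sheet:
    sheet = step_write(sheet, "q")   # modus ponens leaves "q" on the paper

# An auxiliary mark may be written and later erased without affecting it:
sheet = step_write(sheet, "aux")
sheet = step_erase(sheet, "aux")
print(sheet)                          # -> ['p', 'p->q', 'q']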
The technological assumption required to justify Leibniz's project of a fully successful reasoning machine runs as follows: whatever can be thought by the mind can also be recorded both on a sheet of paper and in an aptly devised mechanism, as the cogs of the arithmetical machine were apt to record data and operations (cp. Breger [1988]). When discussing such a programme, one should keep in mind that still at the beginning of the 20th century (eg, in Hilbert's 1900 programme) nobody was able to guess the results concerning our cognitive limitations, such as Heisenberg's principle and the undecidability and incompleteness theorems (initiated by Goedel [1930], [1931]; cp. Church [1936], Davis [1988], Gandy [1988]).
Those theorems speak against the possibility of an algorithmic solution of some mathematical problems. Another argument came from the research on the nervous system, guided by comparisons with digital computers. It proved that an enormous number of operations must be performed at the unconscious level, while their success depends on properties of the organic hardware involved. Thus they are capable neither of being verbally recorded, to be later translated into a piece of software, nor of being performed by a digital machine.
The last-mentioned fact and Goedel's limitative results may shed light on each other; their optimistic component is to the effect that the human mind can do more than any man-made machine, while the pessimistic one -- that artificial machines, because of their less advanced hardware, in some cases fail to strengthen human abilities. All that might have been anticipated by Leibniz, had he been more sensitive to the consequences of his own metaphysics, and less eager to follow the slogans of his time.
6. Why Leibniz would have accepted logical physicalism
That Leibniz would not have accepted logical physicalism is easier to defend than the answer in the affirmative. Premisses for the former were stated by Leibniz explicitly, while those for the affirmative statement can only be guessed at, being implicit in his concepts of perception and of organic machines (cp. Breger [1989], Schnelle [1991]). For the same reason, though, the affirmative answer is more deeply rooted in Leibniz's thought.
Leibniz failed to see the connexions between perception and reasoning -- those exemplified above. Had he noticed them, he would have acknowledged the essential difference in the "technology" of reasoning between natural and artificial machines. As to perception, he voiced its non-mechanical nature in the following way: perception and that which depends on it cannot be explained mechanically, that is to say by figures and motions. (Monadology, item 17).
Did Leibniz admit processes of reasoning, unlike those of perception, to be of a mechanical nature? This is likely if we consider his fascination with Hobbes' idea that reasoning is like computing. In the latter there exists no direct link with perception. In reasoning such a link does exist, but that vital fact was not likely to be discovered until modern quantification logic, esp. in the computerized form of inferential logic mainly due to Gentzen [1934-5], came into existence.
For it is the rules of manipulating quantifiers (and similar operators, such as that of description) that make us aware of the intricate relations between the concrete (as given in perception) and the general. The data-processing done by neural "face cells" (as Fischbach [1992] calls them), which results in perceiving many faces of the same class, amounts to a generalization, rendered by the rule of introducing the general quantifier.
The rule of introducing the existential quantifier defines another kind of reasoning in which a perception yields the premiss in question. Usually, such a premiss remains unverbalized, hence not manageable by a digital computer (unless one becomes able to feed it with non-verbal representations of the objects perceived, and establish logical rules to process such representations).
The rule of concretization (ie eliminating the general quantifier) is of special consequence for the present discussion, as it can exemplify von Neumann's claim regarding the difference between textbook logic and the logic of our brain. The example is found in Marciszewski [1994, 145-9], where the reasoning of an ape is reconstructed in terms of a computerized system of quantification logic termed Mizar MSE. The system accepts an orthodox ``textbook formalization'' as well as another one, closer to actual reasonings, in which the elimination of the general quantifier is conflated with modus ponens; thus, so to speak, a macro-rule replaces a set of single rules, as the schematic example below illustrates.
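The difference can be rendered schematically as follows (the formulas P and Q are chosen only for illustration and are not taken from the ape example in Marciszewski [1994]).

Textbook formalization (two single rules):
  1. ∀x (P(x) → Q(x))     premiss
  2. P(a)                 premiss (eg, supplied by perception)
  3. P(a) → Q(a)          from 1, by elimination of the general quantifier
  4. Q(a)                 from 3 and 2, by modus ponens

Macro-rule formalization (one step):
  1. ∀x (P(x) → Q(x))     premiss
  2. P(a)                 premiss
  3. Q(a)                 from 1 and 2, by the macro-rule conflating quantifier elimination with modus ponens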
Obviously, the systems compared are identical as to the set of theorems and the meanings of logical constants (hence no alternative logic is at stake), but they differ technologically, ie as to the mechanism of producing conclusions, which depends on the hardware involved. The connexion between the quantifiers and perception (requiring organic hardware), as well as the macro-rule technology, may form a basis for "the logical language truly used by the central nervous system" (von Neumann [1958, 82]). The example of Mizar MSE suggests a way of imitating organic reasoning with the macro-rule strategy, but the entanglement of reasoning with perception, characteristic of organic reasoners, can hardly be imitated by computers.
Had Leibniz had our present logical knowledge with its limitative theorems, accompanied by suitable biological premises, he would not have expected the full-scale mechanization of reasoning. Instead, he would have welcomed such limitations as supporting his belief in the range of physical differences between natural and artificial hardware -- the belief that each organic body is a kind of divine machine, or natural automaton, which infinitely surpasses all artificial automata (Monadology, item 64).
L i t e r a t u r e
Albertazzi, L. and R. Poli (eds.) [1991] Topics in Philosophy and Artificial Intelligence. Mitteleuropaeisches Kulturinstitut, Bozen.
Beth, E.W. [1970] Aspects of Modern Logic. Reidel, Dordrecht.
Beth, E.W. and J. Piaget [1966] Mathematical Epistemology and Psychology. Reidel, Dordrecht.
Boden, M.A. (ed.) [1990] The Philosophy of Artificial Intelligence. Oxford University Press, Oxford.
Breger, H. [1988] `Das Postulat der Explizierbarkeit in der Debatte um die k\"unstliche Intelligenz' in: Leibniz. Tradition und Aktualitaet. V. Internationaler Leibniz-Kongress. Vortraege. Hannover 14.-19. November 1988. Leibniz-Gesellschaft, Hannover.
Breger, H. [1989] `Maschine und Seele als Paradigmen der Naturphilosophie bei Leibniz' in: von Weizsaecker, C.F. and E. Rudolph (eds.) [1989].
Church, A. [1936] `A note on the Entscheidungsproblem'. J. Symb. Log., vol.1, 40-41; a correction, ibid., 101-102.
Couturat, L. [1901] La Logique de Leibniz d'apr\`es des documents in\'edits. Alcan, Paris.
Crick, F. and Ch. Koch [1992] `The Problem of Consciousness. It can be now approached by scientific investigation of the visual system. [...]'. Scientific American, Sept. 1992, vol.267, no.3, pp.111-117.
Davis, M. [1988] `Mathematical logic and the origin of modern computers' in: Herken (ed.) [1988], pp.149-174.
Fischbach, G.D. [1992] `Mind and Brain. The biological foundations of consciousness, memory and other attributes of mind have begun to emerge; an overview of this most profound of all research efforts'. Scientific American, Sept. 1992, vol.267, no.3, pp.24-33.
Gams, Matja\uz [1995] `Strong vs. Weak AI'. Informatica: An International Journal of Computing and Informatics, vol.19, no.4, November, pp.479-493.
Gandy, R. [1988] `The confluence of ideas in 1936' in: Herken (ed.) [1988], pp.55-111.
Gentzen, G. [1934-5] `Untersuchungen ueber das logische Schliessen I -- II'. Math. Z., vol.39, 176-210, 405-431.
Goedel, K. [1930] `Die Vollstaendigkeit der Axiome des logischen Funktionenkalk\"uls'. Monatshefte fuer Mathematik und Physik, 37, 349-360.
Goedel, K. [1931] `Ueber formal unentscheidbare Saetze der Principia Mathematica und verwandter Systeme, I'. Monatshefte fuer Mathematik und Physik, 38, 173-198.
Herken, R. (ed.) [1988] The Universal Turing Machine. A Half-Century Survey. Oxford University Press, Oxford.
Leibniz, G.W. [1973] Philosophical Writings, ed. by G.H.R. Parkinson. Everyman's Library, London, etc. Contains the Monadology.
Leibniz, G.W. [1982] Vernunftprinzipien der Natur und der Gnade. Monadologie. Franzoesisch-Deutsch. Meiner, Hamburg.
Marciszewski, W. [1994] Logic from a Rhetorical Point of View. Walter de Gruyter, Berlin - New York. Series: Grundlagen der Kommunikation und Kognition, ed. by R. Posner and G. Meggle.
Marciszewski, W. [1994a] `A Jaskowski-style system of computer-assisted reasoning' in: Wolenski (ed.) [1994].
Marciszewski, W. and R. Murawski [1995] Mechanization of Reasoning in a Historical Perspective. Rodopi, Amsterdam - Atlanta. Series: Poznan Studies in the Philosophy of the Sciences and the Humanities, ed. by L. Nowak.
Penrose, R. [1988] `On the physics and mathematics of thought' in: Herken (ed.) [1988], pp.491-522.
Penrose, R. [1989] The Emperor's New Mind. Concerning Computers, Minds, and The Laws of Physics. Oxford University Press, Oxford - New York - Melbourne.
Popper, Karl R. and John Eccles [1977] The Self and Its Brain. Springer-Verlag, Berlin. Routledge and Kegan edition 1983, reprinted 1990.
Ryle, Gilbert [1949] The Concept of Mind. Barnes and Noble, New York.
Schnelle, H. [1988] `Turing naturalized: von Neumann's unfinished project' in: Herken (ed.) [1988], pp.539-559.
Schnelle, H. [1991] `From Leibniz to artificial intelligence' in: Albertazzi and Poli (eds.) [1991], pp.61-75.
Sterelny, K. [1991] The Representational Theory of Mind. Basil Blackwell, Cambridge (Mass.).
Turing, A.M. [1936-7] `On computable numbers, with an application to the Entscheidungsproblem'. P. Lond. Math. Soc. (2), vol.42 (1936-37), 230-265; a correction, ibid., vol.43 (1937), 544-546.
Turing, A.M. [1950] `Computing machinery and intelligence'. Mind, vol.59, no.236 (Oct.), 433-460. Also in Boden (ed.) [1990].
von Neumann, J. [1951] (lecture held in 1948) `The general and logical theory of automata' in: Collected Works, vol.5, ed. A.H. Taub, Pergamon Press, New York 1963.
von Neumann, J. [1958] The Computer and the Brain. Yale Univ. Press, New Haven.
von Weizsaecker, C.F. [1981] Ein Blick auf Platon. Ideenlehre, Logik und Physik. Reclam, Stuttgart.
von Weizsaecker, C.F. and E. Rudolph (eds.) [1989] Zeit und Logik bei Leibniz. Studien zu Problemen der Naturphilosophie, Mathematik, Logik und Metaphysik. Klett-Cotta, Stuttgart.
Webster [1971] Webster's Third New International Dictionary of the English Language. Unabridged. Merriam Co., Springfield, Mass.
Wolenski, J. (ed.) [1994] Philosophical Logic in Poland. Kluwer, Dordrecht.