Editorial, part 2

LOGIC AND INTELLIGENCE

• There are three traits of an intelligent person, to wit those of being:
SYSTEMATIC - CRITICAL - INVENTIVE.
It is for the sake of the first that astronomy is found in the curriculum designed by Plato for a future elite: the contemplation of the perfect heavenly order was to grant young people a similar order in their heads. The ancients used to associate order with wisdom, as witnessed by the maxim sapientis est ordinare (it is the business of the wise to put things in order). Nowadays similar hopes are addressed to logic.

Logic, indeed, provides us with the highest ideal of systematization in the form of both deductive systems (ordered sets of statements) and semantic models (ordered sets of things). Moreover, logic shares with set theory the well-ordering principle, to the effect that every set can be well-ordered; its validity in mathematics is acknowledged, while its validity in empirical domains can be fruitfully conjectured.

It is also to logic that we owe the most precise standards of criticism, concerning arguments, definitions, classifications, methods of testing theories, etc. There are many intelligent actions which prove so involved that these logical standards do not match their complexity; nevertheless, logic forms a set of standards to be approximated as closely as possible.

As for inventiveness, contemporary logic, unlike the so-called logic of discovery as projected in various forms by Bacon, Descartes, Leibniz, etc., does not pretend to guide the process of discovering truths. On the contrary, among its greatest achievements (due to the discussion on the decidability of theories) there is the concept of algorithmic procedures. Since these are the opposite of creative behaviour, logic contributes to the notion of inventiveness by hinting at a necessary negative condition, namely, that for a behaviour to be inventive it must be non-algorithmic. However, invention, like the other traits of intelligence, should be assisted by algorithmic devices to fulfil its potential.

When an algorithm - an abstract mathematical object - is expressed in a machine code, in which instructions for the machine are recorded, it becomes a program controlling the work of the machine.
These concepts, fruitfully generalized, explain the behaviour of organisms as well: one speaks of genetic programs, of instinctive behaviour as being somehow programmed, etc. Presumably the notion of machine code can yield a model for understanding the functioning of the central nervous system. Such a machine code in organisms deserves to be called an internal language, meant as one in which Nature records its algorithms to control animal behaviour. [*1]
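The distinction between an algorithm (an abstract object) and a program (the same algorithm recorded in a code a machine can execute) can be made concrete with a toy sketch; the instruction set and names below are illustrative assumptions, not any real machine code:

```python
# Toy illustration: the abstract algorithm "add two numbers" becomes a
# program once recorded in an instruction code that a machine - here a
# tiny interpreter - can execute.

def run(program, memory):
    """Execute a list of (opcode, *args) instructions on a memory dict."""
    for instruction in program:
        op, *args = instruction
        if op == "LOAD":    # LOAD cell value: put a constant into a cell
            memory[args[0]] = args[1]
        elif op == "ADD":   # ADD a b dest: dest := a + b
            memory[args[2]] = memory[args[0]] + memory[args[1]]
    return memory

# The same abstract algorithm, now recorded as a concrete program:
program = [
    ("LOAD", "x", 2),
    ("LOAD", "y", 3),
    ("ADD", "x", "y", "z"),
]
print(run(program, {})["z"])  # prints 5
```

The point of the sketch is only that the program is a notation for the algorithm in the machine's own code, which is the sense in which one may speak of Nature recording its algorithms in an internal language.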

If there is - as claimed by some authors - a Darwinian contest between ideas in an individual mind, a contest to develop this mind's intelligence, then algorithmic equipment acts like the environmental conditions to be met by the ideas in their quest for survival and development. To explore that natural logical environment, though, we should go far beyond current logic.

• How-Reasoning vs That-Reasoning
• Model-Based vs Text-Based Reasoning
There are reasonings which most logicians did not dream of. They used to tell us that logic copes with reasonings as truth-preserving transformations of sentences. There are, though, innumerable cases of inferences which (i) are not truth-preserving and (ii) do not depend on any texts.

Kind (i) is nicely exemplified by the so-called "problems" in Euclid, which consist in making something out of another thing, say, describing an equilateral triangle on a given finite straight line (Book I, Problem 1). The property of truth-preservation cannot attach to the reasoning which solves such a problem, because the solution does not consist in asserting a proposition; instead, it amounts to producing a construction. In other words, the result does not tell that there is so-and-so, but how to construct a thing. Let such processes be termed that-reasonings and how-reasonings, respectively. Only the former were lucky enough to have become the subject-matter of official logic.

RESEARCH TASK 1. In what way are how-reasonings and that-reasonings related to each other? Sometimes, as seems to be the case in Euclid, it is possible to reduce the former to the latter, since constructions can be interpreted as yielding existential theorems (the success of a construction proves the existence of the object constructed). This supposition should be checked, and then one should find to what extent it can be generalized. Since methods of mechanizing that-reasonings have become a matter of routine, this would indirectly pave the way to mechanizing how-reasonings as well.
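Euclid's construction from Book I, Problem 1 can itself be read computationally: the procedure below is a how-reasoning (it produces the apex of the triangle), while the checks at the end express the corresponding that-reading (the constructed object exists and has the required property). The coordinates and function name are illustrative assumptions, not part of Euclid's text:

```python
import math

def equilateral_apex(ax, ay, bx, by):
    """Construct the apex C of an equilateral triangle on segment AB:
    C is the intersection of the two circles of radius |AB| centred
    at A and at B (Euclid, Book I, Problem 1)."""
    mx, my = (ax + bx) / 2, (ay + by) / 2   # midpoint of AB
    dx, dy = bx - ax, by - ay               # direction of AB
    h = math.sqrt(3) / 2                    # height of the apex over AB
    return mx - h * dy, my + h * dx         # perpendicular offset from midpoint

# The how-reasoning: produce the object.
cx, cy = equilateral_apex(0.0, 0.0, 1.0, 0.0)

# The that-reading: the success of the construction proves existence.
side = math.dist((0, 0), (1, 0))
assert math.isclose(math.dist((0, 0), (cx, cy)), side)
assert math.isclose(math.dist((1, 0), (cx, cy)), side)
```

The two assertions are exactly the existential theorem extracted from the construction, which is the reduction of a how-reasoning to a that-reasoning conjectured in the task above.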
Kind (ii) is most convincingly exemplified by reasonings carried out by animals, which lack any possibility of producing texts while being capable of processing pictures. There is a widely known case of unverbalized problem-solving, namely that of Koehler's chimpanzee Sultan, who fitted one bamboo stick into another after many attempts to solve the problem of grasping fruit that was out of his reach. A human would react in a similar way, as it is the only correct solution, and he too would not need any verbalized inference. The whole inference can be done silently in one's imagination; it consists in processing mental images, or (more generally) models of some things, viz., the sticks and the fruit. Before the agent fits one bamboo stick into another, he tries this strategy in a wordless Gedankenexperiment, obtaining the hypothesis that with an extended stick one would overcome the distance to the fruit. This encourages him to externalize the mentally modelled action in the form of overt behaviour. Let such processes be called model-based reasonings as opposed to text-based reasonings; only the latter have become the subject-matter of logical research. [*2]
RESEARCH TASK 2. As to kind (ii), i.e. the model-based reasonings, there is no doubt that many of them are not reducible to text-based reasonings. Thus AI faces the task of creating a method of recording models in an internal language and providing rules for processing them, analogous to logical inference rules.
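A minimal sketch of what such model-processing might look like, with Sultan's problem as the model; the numbers and action names are illustrative assumptions. The "model" is a few recorded magnitudes, and the rules operate on the model rather than on sentences - first try each stick, then mentally fit two together and test whether the goal is reached:

```python
# Toy model-based reasoning: a wordless search over a model of sticks
# and a distance, returning a plan of actions instead of a proposition.
from itertools import combinations

def solve(sticks, distance):
    """Return a plan (a list of actions) that reaches the fruit, or None."""
    for stick in sticks:                  # try each stick on its own
        if stick >= distance:
            return [("reach", stick)]
    for a, b in combinations(sticks, 2):  # Gedankenexperiment: fit two sticks
        if a + b >= distance:
            return [("fit", a, b), ("reach", a + b)]
    return None                           # the model yields no solution

plan = solve(sticks=[40, 35], distance=60)
print(plan)  # [('fit', 40, 35), ('reach', 75)]
```

The output is a plan to be externalized as overt behaviour, not an asserted sentence - which is the sense in which such an inference is model-based rather than text-based.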
Still another step beyond the frontiers of official logic is needed in discussing how-reasonings and model-based reasonings from the viewpoint of a theory of intelligence. It should consist in creating a theory of conceptual systems, or (more imaginatively) conceptual nets. The existing logic teaches us how to transform premises into conclusions, but does not teach how to find premises which we do not yet have. And this is the main problem in both "unorthodox" kinds of reasoning. When constructing an object, one looks for suitable material; and when dealing with models instead of texts, one has no ready verbalized premises; it is a conceptual net which should assist an efficient search for them.

There is strong evidence that the level of intelligence depends on one's set of concepts. The more such a set is like a system, that is, the more systematically it is arranged, the more it helps in finding the pieces of information needed for reasonings in problem-solving. Clear examples can be found in sensory perception. If Sherlock Holmes perceives a lot of details relevant to the case under study while Dr. Watson does not, and so Holmes proves more intelligent, this is because he has a richer conceptual apparatus, necessary to pose those questions which, in turn, guide his perception.

Obviously, there is a feedback between one's ability to invent and systematize new concepts oneself and the conceptual system acquired through learning: a more learned person is more inventive, and a more inventive one is more capable of learning new things. A similar feedback holds for the two remaining components of intelligence.

It is not only the number of elements that counts but, even more, their interrelations, which constitute a conceptual system. This is so because of the enormous definitional role of such relations - as shown in the methodology of deductive systems when it deals with implicit (i.e. postulational, axiomatic) definitions. Let us imagine, first, two separate conceptual systems A and B, and then a greater system

{A, B, [A*B]}
where "[A*B]" stands for the set of relations of elements of A to elements of B, and vice versa.

Let A be a conceptual system concerning economics, and B one concerning politics; let the concept "market economy" be in A, and "rule of law" in B. In the concept:

"free Competition within the rule of Law", for short "C*L",
"C" belongs to A while "L" belongs to B. This combination modifies the content of each of them; e.g., "free competition" in a lawless society would mean something very different (rather the law of the jungle). One who has the concept "C*L" at his disposal is more capable than others of finding premises for reasoning about public affairs (e.g. in a debate on antitrust regulations).
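The net {A, B, [A*B]} admits a direct data-structure reading; the sketch below is a minimal illustration under the assumption that subsystems are sets of concept names and [A*B] is a set of cross-relations (the helper name is hypothetical):

```python
# Two separate conceptual subsystems as sets of concept names.
A = {"market economy", "free competition"}   # economics
B = {"rule of law"}                          # politics

# The set [A*B]: relations of elements of A to elements of B,
# each giving rise to a combined concept such as "C*L".
cross_relations = {
    ("free competition", "rule of law"): "C*L",
}

def combined_concepts(sub_a, sub_b, relations):
    """List the combined concepts whose components lie in both subsystems."""
    return [name for (a, b), name in relations.items()
            if a in sub_a and b in sub_b]

print(combined_concepts(A, B, cross_relations))  # ['C*L']
```

The richer the relation set, the more combined concepts become available as starting points in the search for premises - which is the sense in which the greater system {A, B, [A*B]} outperforms A and B taken separately.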

Another example: one who is versed in computers, logic and neurology (as was, e.g., John von Neumann) is better prepared to reason about each of these subjects than someone whose conceptual system is limited to just one of them. His advantage consists in being able to grasp more facts relevant to the problem in question, that is, in a greater ability to find premises.

One more example. An author proves intelligent qua author if he skillfully handles text partition (paragraphs, sections, etc.), indenting, the formation of titles, the hierarchizing of concepts (bold, italics), etc. Such intelligence may progress owing to sophisticated software like TeX. Then new typographical means (hardly available with a typewriter), such as varieties of framing, itemization, etc., extend the conceptual system concerning the logical structures of texts, and thereby the structures of thought; this, in turn, makes one more versed in recognizing and creating such structures.

RESEARCH TASK 3. Suppose a device is to be constructed, e.g. an expert system, to support human intelligence in reasoning and deciding about a definite subject-matter. The tasks to be performed to grant the system high efficiency will include a systematization of concepts which should approximate axiomatic systems (as much as possible and necessary).
Leibniz's old dream of a universal characteristic, that is, a system of concepts so arranged that mere combinations of signs guide problem-solving, may to some extent revive in a system forming the common core of all conceptual systems relevant to our civilization. It should involve fundamental notions of logic and set theory, arithmetic, physics, computer science, cognitive science, the theory of human action, etc., properly systematized. Moreover, references from this core to more specialized areas, e.g. to the QED Project for mathematics, should be provided.

This may seem a crazy idea if taken without due provisos. However, if considered in the long run, in view of the enormous new possibilities of intellectual collaboration to be expected from the Internet, and taking into account the enormous potential of the method of hypertext links, this fantastic project becomes more likely to approach reality. When located at carefully chosen Internet hosts, the future Encyclopedia of the Conceptual System of Our Civilization should function like the capital of the United States of Reason.

[*1] There are three adjectives which are candidates to qualify the language (or code) in question: of thought, as suggested, e.g., by J. A. Fodor in his The Language of Thought (Crowell, New York 1975), and employed by S. C. Shapiro in "Belief spaces as sets of propositions"; besides, there occur the terms neural and internal. [-> back to main text]

[*2] The terminology to distinguish these opposite kinds of reasonings is far from established. The suggested pair of terms is found in Mechanization of Reasoning by W. Marciszewski and M. Murawski, 1995. The same opposition is rendered by the pair "objectual inference - symbolic inference" in the chapter "Reasoning, Logic, and Intelligence" of W. Marciszewski's Logic from a Rhetorical Point of View, 1994. The first pair is preferred here, as the concept of a (mental) model is very useful in the research in question. [-> back to main text]

File put on WWW server 17-02-96.