Piotr Giza (UMCS): Creativity in Computer Science.

The aim of this paper is to explore creative thinking in computer science and to compare it with that in the natural sciences, mathematics, and engineering. It is also meant as a polemic with some theses of the pioneering work of the same title by Daniel Saunders and Paul Thagard (2005), because I point to important motivations in computer science that the authors do not mention and give examples of origins of problems whose existence they explicitly deny. No doubt, creative activity in computer science differs from that in other sciences. Some authors even express a general worry that computer science runs counter to creative thinking: that, as a consequence of the computer revolution, humans will become so lazy that they may lose their power of creative thinking (Gardner, 1978).
Creativity in computer science differs in the nature and origins of its problems, motivations, and methods from that of the natural sciences and engineering. Computer science is a very specific field, for it connects an abstract, theoretical discipline, mathematics, on the one hand, with engineering, often concerned with the very practical tasks of building computers, on the other. It is like engineering in that it is concerned with solving practical problems or implementing solutions, often for strong financial reasons, e.g. increasing a company’s income. It is like mathematics in that it deals with abstract symbols, logical relations, algorithms, computability problems, etc.
Saunders and Thagard analyze rich experimental material from historical and contemporary work in computer science and argue that, contrary to the natural sciences, computer science “[…] is not concerned with empirical questions involving naturally observed phenomena, nor with theoretical why-questions aimed at providing explanations of such phenomena”. Now, I argue that there is a field of research in artificial intelligence (which is, in turn, a branch of computer science), called Machine Discovery, where the explanation of natural phenomena and the finding of experimental laws and explanatory models is the primary goal. This goal is achieved by constructing computer systems whose job is to simulate various processes involved in scientific discovery done by human researchers and to help them in making new discoveries.
On the other hand, the motivations that give rise to ingenious projects in computer science can be quite unusual and include curiosity, fun, or the attempt to escape the boring, stable life of a successful programmer in a big corporation and become famous. A good example is the phenomenon of Open Source software, especially the development of the Linux operating system and its applications at a time when, from the economic point of view, Microsoft absolutely dominated the PC software market.

Keywords: creativity, computer science, technology, natural sciences


Hajo Greif (PW): Justifying Black Box Models in Artificial Intelligence.

The renaissance of Artificial Intelligence (AI) is marked by two seemingly countervailing developments: more powerful and sophisticated computational resources on the one hand, and increasing modesty concerning the endeavor of modeling human intelligence on the other. The first development has epistemologically relevant implications with respect to the ‘epistemic opacity’ incurred by computational complexity (a.k.a. the ‘Black Box Problem’). The second development manifests itself in distinct strategies of coping with opacity: either a return to the original aim of AI of making computers solve problems that would require intelligence from humans, without seeking to provide insights into human intelligence, or a deliberate self-restriction to modeling prima facie simple and epistemically tractable, but embodied and environmentally situated, activities.

The first, expository aim of this paper is to demonstrate that these alternative routes away from the claims to cognitive simulation that dominated classical AI (or ‘GOFAI’) amount to two distinct strategies of coping with the opacity brought about by computational complexity. These strategies are exemplified by Behaviour-based AI and Evolutionary Robotics (BBAI) and by Deep Neural Networks (DNNs). In BBAI (Brooks 1991), complexity is reduced in order to gain transparency and biological plausibility – which is bought at the cost of the ‘scaling problem’: will the model also be able to explain complex cognitive abilities, and still remain tractable? In DNNs (LeCun et al. 2015), epistemic opacity is accepted in order to make the most of the computational complexity that can now be technologically mastered – which is bought at the cost of limitations on what the observer will be able to learn from and about a model that successfully produces a solution to a given problem.

The second, critical aim of this paper is a defence of epistemic opacity in modelling. Pragmatist views of modeling and simulation that highlight the methodologically ‘motley’ and epistemically ‘opaque’ character of computer simulations drop the requirement that models be ‘analytically tractable’ in terms of the ‘ability to decompose the process between model inputs and outputs into modular steps’ (Humphreys 2004). The sanctioning of models is ‘holistic’ instead, in that it is based on the ‘simultaneous confluence’ of theory, available mathematics, previous results and background knowledge (Winsberg 2010). Analytical intractability need not conflict with, and may even subserve, the representational properties of the model as a whole. Arguably, the requirement of epistemic transparency, understood as the analytical tractability of models, has not been a constituent of scientific practice throughout most of the history of the empirical sciences. It may actually owe its existence to the invention of computational methods of modeling that rely on breaking down complex mathematical structures into elementary, and essentially tractable, computational steps. This latter development has been particularly pertinent to the methods of modeling in AI.

Keywords: models in Artificial Intelligence, black box problem, behaviour-based Artificial Intelligence, deep neural networks

Brooks, R. Intelligence Without Representation. Artificial Intelligence 47 (1991), 139–159.
Humphreys, P. Extending Ourselves: Computational Science, Empiricism, and Scientific Method. Oxford University Press, Oxford, 2004.
LeCun, Y., Bengio, Y., and Hinton, G. Deep Learning. Nature 521 (2015), 436–444.
Winsberg, E. B. Science in the Age of Computer Simulation. University of Chicago Press, Chicago, 2010.


Aleksandra Kołtun (UMCS): Writing as distributed socio-material practice – a case study.

The aim of the presentation is to provide an empirically based case study of writing understood as a hybrid task distributed across a complex socio-material setting (see O’Hara et al. 2002). In the case presented, writing takes place in a dynamic environment in which people have to find, share and integrate information from several documents in order to make decisions and generate text.
The case under scrutiny concerns the process of preparing formal regulations for participatory budgeting in selected Polish municipalities. I will focus on the course of workshops for civil servants and citizens in which the contents of the regulations are discussed and established. Firstly, I will provide background information on how the workshop setting is organized in order to ensure open but goal-oriented communication between the participants. The context of work is structured by: 1) entry points that make some information more available than other information and help to keep track of the workflow, 2) activity landscapes for tasks such as information search and debate, 3) mechanisms that allow coordination between people and documents (see Kirsh 2001). Secondly, I will present a detailed account of the dynamics of writing activities across various material media: from flipcharts containing handwritten notes, through various versions of the regulations that are reviewed and modified, to a black-boxed, neat, and officially approved draft of the final document.
The case presented can also be treated as a supplement to the field of distributed cognition. So far, most research conducted in this framework has dealt with environments that strongly involve human-computer interaction. The task described here is performed in a setting that is rich in artifacts and external representations but involves almost no technological devices.

Keywords: writing, distributed cognition, information search and sharing, office work

Hollan, J. D., Hutchins, E., & Kirsh, D. (2000). Distributed cognition: Towards a new foundation for human-computer interaction research. ACM Transactions on Computer-Human Interaction, 7(2), 174–196.
Kirsh, D. (2001). The context of work. Human–Computer Interaction, 16(2–4), 305–322.
Kirsh, D. (2010). Thinking with external representations. AI & Society, 25(4), 441-454.
O’Hara, K., Taylor, A., Newman, W., & Sellen, A. J. (2002). Understanding the materiality of writing from multiple sources. International Journal of Human-Computer Studies, 56(3), 269-305.
Rogers, Y., & Ellis, J. (1994). Distributed cognition: an alternative framework for analysing and explaining collaborative working. Journal of Information Technology, 9(2), 119-128.


Paweł Polak and Roman Krzanowski (UPJPII): Information – Abstract or Concrete?

“The challenge to science is to figure out how to couple abstract information to the concrete world of physical objects.”
Information is thought of as either abstract or concrete. The dilemma about the nature of information arises because information may be conceptualized as knowledge, in which case it is abstract, not concrete, or as a form of physical entities, in which case it is concrete, not abstract. Paul Davies (and a few other writers) asks whether we have two kinds of information or one. Davies claims that resolving this incoherence is critical to the understanding of information. The paper discusses the nature and the source of this dichotomy and inquires whether the dichotomy really exists or is merely an effect of the conceptual framework that we use. It then evaluates the arguments used by Davies and probes the metaphysical sources of this controversy. Finally, the paper proposes a way to resolve the dilemma. The proposed resolution requires accepting that information, in any form, is an integral part of the physical world. It thus essentially mandates the physical nature of information, but not physical reductionism. The paper also discusses how the proposed solution affects the definitions of information that we generally accept, how it challenges the concepts of data, information, and knowledge, and how it strains the notions of minimally cognitive systems, the mind, and computing.

Keywords: information dichotomy, abstract, concrete


Paula Quinon (PW): Deviant encodings and what “computing” means.

My main objective is to design a common background for various philosophical discussions about adequate conceptual analysis of “computation”.

The core of the problem discussed in this paper is the following: the Church-Turing Thesis states that Turing Machines formally explicate the intuitive concept of computability. The description of Turing Machines requires a description of the notation used for the input and for the output. The notation used by Turing in the original account, as well as the notations used in contemporary handbooks of computability, all belong to the best-known, most widespread notations, such as the standard Arabic notation for natural numbers, the binary encoding of natural numbers, or stroke notation. The choice is arbitrary and left unjustified. In fact, providing such a justification, and providing a general definition of the notations that are acceptable for the process of computation, causes problems. This is because a comprehensive definition states that such a notation or encoding has to be computable. Yet using the concept of computability in the definition of a notation, which will in turn be used in the definition of the concept of computability, yields an obvious vicious circle.
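For illustration (this sketch is the editor's addition, not part of the abstract): the standard notations named above, such as stroke (unary) and binary notation, are themselves effectively computable encodings — converting between a number and its notation is a mechanical procedure — and this is exactly the property whose general definition threatens the circularity described. A minimal Python sketch:

```python
def to_stroke(n: int) -> str:
    """Stroke (unary) notation: the number n is written as n strokes."""
    return "|" * n

def from_stroke(s: str) -> int:
    """Decoding stroke notation: count the strokes."""
    return len(s)

def to_binary(n: int) -> str:
    """Binary notation for a natural number, without the '0b' prefix."""
    return bin(n)[2:]

def from_binary(s: str) -> int:
    """Decoding binary notation back to a natural number."""
    return int(s, 2)

# Both encodings round-trip effectively; a "deviant" encoding, by
# contrast, could hide non-computable information in the notation
# itself, making even trivial functions appear computable.
assert from_stroke(to_stroke(5)) == 5
assert from_binary(to_binary(5)) == 5
```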

This argument appears in discussions about what constitutes an adequate or correct conceptual analysis of the concept of computability. Its exact form depends on the underlying picture of mathematics that an author is working with. After presenting several contexts in which deviant encodings are problematized explicitly, I focus on philosophical examples where the phenomenon appears implicitly, in some “disguised” version, for instance in the analysis of the concept of natural number. In parallel, I develop the idea that Carnapian explications provide a much more adequate framework for understanding the concept of computation than classical philosophical analysis. Intensional differences between formal models of computation can (and hence should) be directly correlated with different possible clarifications (in Carnapian terms) of the intuitive concept, and hence traced back to different intuitions guiding the formalization process.

Keywords: the Church-Turing thesis, deviant encodings, fixed points of conceptual analysis

Benacerraf, P. (1996). Recantation, or: Any Old ω-Sequence Would Do After All. Philosophia Mathematica 4, 184–189.
Copeland, J., Proudfoot, D. (2010). Deviant encodings and Turing’s analysis of computability. Studies in History and Philosophy of Science 41, 247–252.
Quinon, P., Zdanowski, K. (2007). Intended Model of Arithmetic. Argument from Tennenbaum’s Theorem. In: Cooper, S., Loewe, B., Sorbi, A. (eds.), Computation and Logic in the Real World, CiE Proceedings.
Rescorla, M. (2007). Church’s Thesis and the Conceptual Analysis of Computability. Notre Dame Journal of Formal Logic 48, 253–280.
Shapiro, S. (1982). Acceptable Notation. Notre Dame Journal of Formal Logic 23(1), 14–20.
