The renaissance of Artificial Intelligence (AI) is marked by two seemingly countervailing developments: more powerful and sophisticated computational resources on the one hand, and increasing modesty concerning the endeavor of modeling human intelligence on the other. The first development has epistemologically relevant implications with respect to the ‘epistemic opacity’ incurred by computational complexity (a.k.a. the ‘Black Box Problem’). The second development manifests itself in two distinct strategies of coping with opacity: either a return to the original aim of AI of making computers solve problems that would require intelligence from humans, without seeking to provide insights into human intelligence, or a deliberate self-restriction to modeling prima facie simple and epistemically tractable, but embodied and environmentally situated, activities.
The first, expository aim of this paper is to demonstrate that these alternative routes away from the claims to cognitive simulation that dominated classical AI (or ‘GOFAI’) amount to two distinct strategies of coping with the opacity brought about by computational complexity. These strategies are exemplified by Behaviour-based AI and Evolutionary Robotics (BBAI) on the one hand and Deep Neural Networks (DNNs) on the other. In BBAI (Brooks 1991), complexity is reduced in order to gain transparency and biological plausibility – which is bought at the cost of the ‘scaling problem’: will the model also be able to explain complex cognitive abilities, and still remain tractable? In DNNs (LeCun et al. 2015), epistemic opacity is accepted in order to make the most of the computational complexity that can now be technologically mastered – which is bought at the cost of limitations on what the observer will be able to learn from and about a model that successfully produces a solution to a given problem.
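To make the trade-off concrete, the following is a minimal illustrative sketch, not drawn from the paper or the cited literature: a small feed-forward network with an assumed four-unit hidden layer, trained by standard backpropagation in NumPy on the XOR problem. Its input-output behaviour can be verified exhaustively, yet the learned weight matrices offer the observer no decomposition of how the problem is solved.

import numpy as np

rng = np.random.default_rng(0)

# XOR truth table: the behaviour the model is required to reproduce.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of four units; weights are initialised at random.
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

for _ in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backpropagation of the squared-error loss (full batch, step size 1).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out
    b2 -= d_out.sum(axis=0, keepdims=True)
    W1 -= X.T @ d_h
    b1 -= d_h.sum(axis=0, keepdims=True)

print("predictions:", out.round(2).ravel())      # typically close to [0, 1, 1, 0]
print("hidden-layer weights:", W1.round(2))      # correct behaviour, but the numbers
                                                 # give no step-by-step account of it

Nothing here depends on any particular deep learning library; the point is only that behavioural success and analytical tractability can come apart even at this toy scale, and the gap widens with the size of the network.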
The second, critical aim of this paper is a defence of epistemic opacity in modeling. Pragmatist views of modeling and simulation that highlight the methodologically ‘motley’ and epistemically ‘opaque’ character of computer simulations drop the requirement that models be ‘analytically tractable’ in terms of the ‘ability to decompose the process between model inputs and outputs into modular steps’ (Humphreys 2004). The sanctioning of models is ‘holistic’ instead, in that it is based on the ‘simultaneous confluence’ of theory, available mathematics, previous results and background knowledge (Winsberg 2010). Analytical intractability need not conflict with, and may even subserve, the representational properties of the model as a whole. Arguably, the requirement of epistemic transparency, understood as analytical tractability of models, has not been a constituent of scientific practice throughout most of the history of the empirical sciences. It may in fact owe its existence to the invention of computational methods of modeling that rely on breaking down complex mathematical structures into elementary, and essentially tractable, computational steps. This latter development has been particularly pertinent to the methods of modeling in AI.
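The sense of ‘elementary, essentially tractable computational steps’ can be illustrated with a standard numerical method; the following is again a generic sketch rather than an example taken from the literature cited above. Forward-Euler integration replaces an analytically characterised process, here exponential decay, with a chain of trivially checkable update steps.

import numpy as np

# Forward-Euler integration of dx/dt = -x. Each individual update
# x_{n+1} = x_n + dt * f(x_n) is elementary and fully inspectable;
# the overall trajectory approximates the exact solution exp(-t).
def euler(f, x0, dt, n_steps):
    xs = [x0]
    for _ in range(n_steps):
        xs.append(xs[-1] + dt * f(xs[-1]))
    return np.array(xs)

xs = euler(lambda x: -x, x0=1.0, dt=0.01, n_steps=100)
print(xs[-1], np.exp(-1.0))  # approx. 0.366 vs. 0.368 at t = 1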
Keywords: models in Artificial Intelligence, black box problem, behaviour-based Artificial Intelligence, deep neural networks
Brooks, R. Intelligence Without Representation. Artificial Intelligence 47 (1991), 139–159.
Humphreys, P. Extending Ourselves: Computational Science, Empiricism, and Scientific Method. Oxford University Press, Oxford, 2004.
LeCun, Y., Bengio, Y., and Hinton, G. Deep Learning. Nature 521 (2015), 436–444.
Winsberg, E. B. Science in the Age of Computer Simulation. University of Chicago Press, Chicago, 2010.
Email: h.greif@ans.pw.edu.pl