Citation: Andrew, A.M. (2000), "Understanding Intelligence", Kybernetes, Vol. 29 No. 9/10, pp. 1333-1340. https://doi.org/10.1108/k.2000.29.9_10.1333.4
Publisher: Emerald Group Publishing Limited
Interpreting the title of this book invokes a curious form of recursion, in that the attempt to attach a precise meaning to “understanding” inevitably involves assumptions about the meaning of “intelligence”. The treatment in the book starts with what might be termed a common‐sense view of intelligence and goes on to assert that the way to gain understanding is to synthesise agents capable of behaviour that would be accepted as intelligent. The book is about AI, but no longer with emphasis on the esoteric kind of intelligence that can prove mathematical theorems and play board games, but on robots that are “situated agents” and can interact with everyday three‐dimensional environments, perhaps operating within societies of similar agents.
The book is, in fact, an enthusiastic introduction to, and comprehensive review of, the new approach to AI that has also been treated in a number of other works, including Cambrian Intelligence by Rodney Brooks, also from MIT Press and recently reviewed in these pages. The new book is based on a taught course that has been warmly received by students, though with the curious side‐effect of encouraging a viewpoint that made it difficult for them to accept the arguments of certain other courses they were taking, particularly on cognitive psychology, which presumably embodied insupportable principles taken from traditional AI.
The book can be thoroughly recommended as a persuasive, comprehensive and lucid treatment of the new approach. It is argued that the way forward is by experimentation with robots capable of interacting with the untidy everyday world, and the authors note with satisfaction that there are now ways of producing these economically. An alternative allowing even more economical experimentation, but with inherent drawbacks that are discussed, is computer simulation, and the associated Web site <http://www.ifi.unizh.ch/~pfeifer/mitbook> gives information on simulation packages that can be used.
Whether the new approach to AI will fulfil these expectations is another issue. There is little doubt that limitations of traditional AI have become apparent, and the emphasis in the last decade or two on expert systems can be seen as a tacit admission of this, since such systems model human performance without deep analysis, essentially in the way that a curve‐fitting procedure may model an observed physical phenomenon accurately but only as a “black box”.
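To make the “black box” analogy concrete, the following sketch (purely illustrative; the data, polynomial degree and library calls are my own choices and do not come from the book) fits a curve that reproduces observations faithfully while saying nothing about the mechanism behind them:

```python
# Illustrative sketch of the "black box" analogy: a polynomial fit reproduces
# observed data accurately without any model of the underlying mechanism,
# much as an expert system reproduces expert judgements without deep analysis.
# (Synthetic data and an arbitrary degree, chosen only for illustration.)

import numpy as np

# Observations of some physical phenomenon (here: noisy synthetic samples).
x = np.linspace(0.0, 1.0, 20)
y = np.sin(2 * np.pi * x) + 0.05 * np.random.default_rng(0).normal(size=x.size)

# A degree-7 polynomial fits the observations closely...
coeffs = np.polyfit(x, y, deg=7)
y_hat = np.polyval(coeffs, x)
print("max fit error:", float(np.max(np.abs(y - y_hat))))

# ...but the coefficients explain nothing about *why* the phenomenon behaves
# as it does, and extrapolation outside the observed range is unreliable.
print("extrapolated value at x=1.5:", float(np.polyval(coeffs, 1.5)))
```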
One objection that has been made to the classical AI approach is that it ignores continuity (see MacKay, 1959; Andrew, 1982, 1990; Churchland, 1986). Intelligence has evolved in a continuous 3D environment in which possibilities of interpolation and extrapolation are inherent and are presumably reflected in the behaviours that have emerged. The new approach acknowledges the importance of continuity, as must any approach involving robotics. On the other hand, the full exploitation of continuity has important aspects that have not yet been touched on, particularly the subtle ways in which it enters into the discrete‐concept‐based reasoning that is the concern of mainstream AI. One aspect of this is recognised in Marvin Minsky’s principle of “heuristic connection” (Minsky, 1959, 1963).
Pfeifer and Scheier refer to the success of Deep Blue against Garry Kasparov as an example of powerful performance achieved by means that are not readily accepted as “intelligent”, since they depend on deep search made possible by enormous computing speed (though, since exhaustive search of all game continuations is still not feasible, the performance of Deep Blue must also have owed something to well‐chosen heuristics). On the other hand, Kasparov was able, using his brain, presumably a vastly slower processor, to play chess that was only just beaten by the might of Deep Blue. It can be argued that this ability depends on derived continuous measures, like the “centre control” and “mobility” terms used by Samuel (1963) in his famous checker‐playing program, combined in some little‐understood way with the discrete environment of the chess game.
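As an illustration of what such derived continuous measures might look like, the sketch below shows a linear evaluation function that assigns a smoothly varying score to discrete game positions. It is a minimal sketch under my own assumptions: the feature names and weights are hypothetical and are not taken from Samuel’s program or from the book.

```python
# Illustrative sketch (not Samuel's actual program): a linear evaluation
# function combining continuous positional features, in the spirit of
# Samuel (1963), where terms such as centre control and mobility contribute
# smoothly varying scores within an otherwise discrete game-tree search.

from dataclasses import dataclass

@dataclass
class Features:
    piece_advantage: float   # material balance, in piece units
    centre_control: float    # e.g. fraction of central squares influenced
    mobility: float          # e.g. number of legal moves, normalised

# Hypothetical weights; Samuel adjusted such coefficients by self-play learning.
WEIGHTS = {"piece_advantage": 1.0, "centre_control": 0.35, "mobility": 0.20}

def evaluate(f: Features) -> float:
    """Weighted sum of continuous features: a smooth score over discrete positions."""
    return (WEIGHTS["piece_advantage"] * f.piece_advantage
            + WEIGHTS["centre_control"] * f.centre_control
            + WEIGHTS["mobility"] * f.mobility)

if __name__ == "__main__":
    quiet = Features(piece_advantage=0.0, centre_control=0.4, mobility=0.6)
    sharp = Features(piece_advantage=-1.0, centre_control=0.9, mobility=0.8)
    # The continuous score lets positions of equal material be ranked, and
    # intermediate feature values interpolate between previously seen cases.
    print(evaluate(quiet), evaluate(sharp))
```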
The superiority of the new approach to AI in allowing effective mobile robots has been demonstrated, but it is easy to feel that it will meet many of the same difficulties as did the traditional one when the attempt is made to extend its range of application. It will, however, have the advantage of freedom from the prejudice against continuity that has hampered traditional AI, and according to the viewpoint just outlined this could eventually allow its application to top‐level chess without the emphasis on computing power. However, the suggestion of a breakthrough in AI generally is hardly warranted by developments so far. The book is written with the enthusiasm of the converted, and in the Preface the first‐named author recounts the time and place of his conversion, namely a sabbatical leave during 1990‐91 in Luc Steels’ laboratory in Brussels.
It is encouraging to observe that an approach starting from simple robots in a “real” environment has considerable correspondence to the course of natural evolution and can therefore be expected to find solutions that are similar to those found in living systems. This is not quite the conclusive argument it seems, though, since of course AI researchers cannot wait for something like natural evolution to run its course, but must speed things up using their human insights, which amount to “hunches” similar to those underlying traditional AI.
Nevertheless, this is an extremely valuable work, being in fact the first comprehensive textbook of the new approach to AI, admirably presented. My only reservation is some doubt whether the new approach is quite the breakthrough that is implied.
References
Andrew, A.M. (1982), “Logic and continuity – a systems dichotomy”, in Trappl, R. (Ed.), Cybernetics and Systems Research, North‐Holland, Amsterdam, pp. 19‐22.
Andrew, A.M. (1990), Continuous Heuristics: The Prelinguistic Basis of Intelligence, Ellis Horwood, Chichester.
Churchland, P.S. (1986), Neurophilosophy – Toward a Unified Science of the Mind/Brain, MIT Press, Cambridge, MA.
MacKay, D.M. (1959), “On the combination of digital and analogue techniques in the design of analytical engines”, Mechanisation of Thought Processes, HMSO, London, pp. 55‐65.
Minsky, M.L. (1959), “Contribution to discussion”, Mechanisation of Thought Processes, HMSO, London, p. 71.
Minsky, M.L. (1963), “Steps toward artificial intelligence”, in Feigenbaum, E.A. and Feldman, J. (Eds), Computers and Thought, McGraw‐Hill, New York, NY, pp. 406‐450.
Samuel, A.L. (1963), “Some studies in machine learning using the game of checkers”, in Feigenbaum, E.A. and Feldman, J. (Eds), Computers and Thought, McGraw‐Hill, New York, NY, pp. 71‐105.