Logic for Learning: Learning Comprehensible Theories from Structured Data


ISSN: 0368-492X

Article publication date: 1 September 2004




Andrew, A.M. (2004), "Logic for Learning: Learning Comprehensible Theories from Structured Data", Kybernetes, Vol. 33 No. 8, pp. 1333-1335. https://doi.org/10.1108/03684920410545324




Copyright © 2004, Emerald Group Publishing Limited

As the author says in his Preface, this book is concerned with the rich and fruitful interplay between the fields of computational logic and machine learning, and the intended audience is senior undergraduates, graduate students, and researchers in either of these fields. He goes on to say that the treatment is meant to be self‐contained, and therefore accessible to specialists in computational logic without previous knowledge of machine learning, and similarly to specialists in machine learning with no previous knowledge of computational logic.

The description is slightly misleading since only one kind of machine learning is treated in any detail, namely that based on logic. Some other kinds are mentioned in the first chapter, including reinforcement learning, neural nets, and genetic algorithms, but apart from a brief reference at one point to use of a genetic algorithm in conjunction with a logic approach, none of them is mentioned again. The approach of the book is essentially that treated in the AI literature as Inductive Logic Programming.

I think it is also a fair comment, though I may be excusing my own ineptitude, to say that the author has underestimated the difficulty of his specialty and that the book is not readily accessible to someone meeting computational logic for the first time. It introduces a higher‐order version which allows functions to have other functions as arguments. The coverage of the logic theory is not claimed to be exhaustive, but to focus on what is applicable to machine learning, and within this a “shortest path” is offered for readers who want to focus even more strongly on applications. The presentation is dauntingly formal, with definitions and propositions to be digested long before their relevance becomes apparent. Though again the observation must be relative to my own comprehension, I think more could have been done to smooth the path of the reader with interspersed plain‐text guidance, in addition to the useful and chatty sections that are provided at intervals.

However, for the reader who perseveres or who has previous acquaintance with advanced formal logic, there is clearly a great deal of valuable material here. As the second part of the title indicates, the aim is to derive theories from structured data, with particular though not exclusive interest in constructs that are comprehensible to a user. The data constitute a training set that may contain hundreds or thousands of examples, or even, when the term “data mining” is used, some millions. The interest is in “supervised learning”, where each example is accompanied by an indication of a value. For a classification task the value would normally be Boolean, and for tasks having the nature of regression it would be numerical. Methods are developed that depend on metrics between the individuals in a training set and on the related idea of kernels, and the procedure of predicate rewriting figures prominently.
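The distinction between the two kinds of supervised task can be sketched in a few lines of Python. The dictionary representation of an individual here is purely illustrative and is not Alkemy's actual input format, which is built on a richer system of data types:

```python
# Illustrative sketch: supervised learning pairs each individual
# (here a simple dictionary) with a supervision value.

# Classification: the value is Boolean.
classification_set = [
    ({"atoms": 24, "max_charge": 0.31}, True),
    ({"atoms": 18, "max_charge": -0.12}, False),
]

# Regression: the value is numerical.
regression_set = [
    ({"atoms": 24, "max_charge": 0.31}, 4.7),
    ({"atoms": 18, "max_charge": -0.12}, 1.2),
]

def values(training_set):
    """Return the supervision values attached to the examples."""
    return [value for _, value in training_set]

print(values(classification_set))  # [True, False]
print(values(regression_set))      # [4.7, 1.2]
```

The learner's job, on either kind of training set, is to induce a theory that maps an individual to its value.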

The methods developed are the basis of a learning system called Alkemy that can be downloaded free, as a C++ program, from a Web site associated with the book, and a graphical user interface is also available for free download. The author recommends that readers obtain the Alkemy system and use it to get first‐hand experience.

Finally, there are examples of the use of Alkemy, and although the author says these have been kept simple for purposes of demonstration, three of them are certainly very impressive. The treatment places great emphasis throughout on data types, and the specifications in training sets acceptable to Alkemy can contain data in many forms, including details of links in structural chemical formulas, and numerical magnitudes as “real” numbers. There is an important field of application to biology and medicine, where the training set can be a list of drugs or other substances of known composition, with an indication for each of whether or not it produces some specific effect. In one example the effect is mutagenesis and the system produces the comprehensible answer: “A molecule is mutagenic if and only if it does not have an atom of type 1 and charge greater than or equal to 0.290, nor an atom of type 50.”
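Part of what makes such an answer comprehensible is that it translates directly into an executable rule. As a hedged sketch, assuming a molecule is represented simply as a list of (atom type, charge) pairs (an illustrative encoding, not Alkemy's actual representation), the quoted theory becomes:

```python
# The quoted rule as a predicate: a molecule is mutagenic iff it has
# no atom of type 1 with charge >= 0.290 and no atom of type 50.
# The (atom_type, charge) pair encoding is an illustrative assumption.

def is_mutagenic(atoms):
    """atoms: list of (atom_type, charge) pairs describing one molecule."""
    has_type1_high_charge = any(t == 1 and q >= 0.290 for t, q in atoms)
    has_type50 = any(t == 50 for t, q in atoms)
    return not has_type1_high_charge and not has_type50

# A molecule containing a type-50 atom is classified non-mutagenic:
print(is_mutagenic([(1, 0.1), (50, 0.0)]))  # False
# One with neither offending atom is classified mutagenic:
print(is_mutagenic([(1, 0.1), (2, 0.3)]))   # True
```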

In two other examples referring to molecules the results are a good deal more complex and it is difficult to imagine that they would ever have been derived by purely manual inspection. One example refers to the conditions under which a substance has the smell of musk, and for this physical distances within the molecule have to be taken into consideration.

The book undoubtedly introduces and develops a very powerful technique.

The emphasis on structured data suggests there ought to be a connection with the reconstructability analysis developed by George Klir and colleagues (Klir and Elias, 2002), though no connection is suggested in the book. Applications of reconstructability analysis tend to be statistical in character, whereas the logic approach assumes everything to be “crisp”, but there might be some possibility of usefully combining the two.


Klir, G.J. and Elias, D. (2002), Architecture of Systems Problem Solving, 2nd ed., Kluwer/Plenum, New York, NY.
