Improving scalability of inductive logic programming via pruning and best-effort optimisation
Kazmi, Mishal and Schüller, Peter and Saygın, Yücel (2017) Improving scalability of inductive logic programming via pruning and best-effort optimisation. Expert Systems with Applications, 87, pp. 291-303. ISSN 0957-4174 (Print) 1873-6793 (Online)
Official URL: http://dx.doi.org/10.1016/j.eswa.2017.06.013
Inductive Logic Programming (ILP) combines rule-based and statistical artificial intelligence methods by learning a hypothesis, comprising a set of rules, from background knowledge and constraints on the search space. We focus on extending the XHAIL algorithm for ILP, which is based on Answer Set Programming, and we evaluate our extensions on the Natural Language Processing task of sentence chunking. For processing natural language, ILP can accommodate the constant change in how we use language on a daily basis. At the same time, ILP does not require huge amounts of training examples, unlike other statistical methods, and it produces interpretable results, i.e., a set of rules that can be analysed and tweaked if necessary. As contributions we extend XHAIL with (i) a pruning mechanism within the hypothesis generalisation algorithm, which enables learning from larger datasets, (ii) better usage of modern solver technology via recently developed optimisation methods, and (iii) a time budget that permits the use of suboptimal results. We evaluate these improvements on the task of sentence chunking using three datasets from a recent SemEval competition. Results show that our improvements allow learning from bigger datasets with results of similar quality to state-of-the-art systems on the same task. Moreover, we compare the hypotheses obtained on these datasets to gain insights into the structure of each dataset.
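For readers unfamiliar with the evaluation task, sentence chunking means segmenting a sentence into non-overlapping phrases (e.g. noun, verb, and prepositional phrases). A minimal sketch of how chunks are typically recovered from per-token IOB-style labels follows; the sentence, tags, and the `chunks_from_iob` helper are illustrative inventions, not data or code from the paper's SemEval datasets or from XHAIL.

```python
# Illustrative sketch: group tokens into chunks from IOB tags,
# where "B-" begins a chunk, "I-" continues it, and "O" is outside any chunk.
def chunks_from_iob(tokens, iob_tags):
    chunks, current = [], []
    for token, tag in zip(tokens, iob_tags):
        if tag.startswith("B-"):
            # A new chunk starts; close any chunk still open.
            if current:
                chunks.append(current)
            current = [token]
        elif tag.startswith("I-") and current:
            # Continue the chunk in progress.
            current.append(token)
        else:
            # "O" tag (or stray continuation): close any open chunk.
            if current:
                chunks.append(current)
                current = []
    if current:
        chunks.append(current)
    return [" ".join(c) for c in chunks]

tokens = ["The", "cat", "sat", "on", "the", "mat"]
tags = ["B-NP", "I-NP", "B-VP", "B-PP", "B-NP", "I-NP"]
print(chunks_from_iob(tokens, tags))
# → ['The cat', 'sat', 'on', 'the mat']
```

In the paper's setting, the learned hypothesis (a set of rules) predicts where such chunk boundaries fall, rather than a statistical tagger.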