Welcome to alphaILP’s documentation!

alphaILP is a neuro-symbolic framework that learns generalized rules from complex visual scenes. It represents scenes as logic programs—intuitively, logical atoms correspond to objects, attributes, and relations, and clauses encode high-level scene information. alphaILP provides an end-to-end reasoning architecture that operates directly on visual inputs, performing differentiable inductive logic programming on complex visual scenes, i.e., the logical rules are learned by gradient descent. [GitHub]
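To give a flavor of what "rules learned by gradient descent" means, here is a minimal toy sketch (not the alphaILP API; all names and values are illustrative assumptions): candidate clauses are assigned learnable weights, a softmax turns the weights into a probability mix over clauses, and the mixed fuzzy evaluation of the target atom is trained against the example's label.

```python
import math

# Toy differentiable rule selection: two candidate clauses compete to
# explain one positive example. Each clause c has a weight w_c; softmax
# turns the weights into probabilities, and the predicted truth value of
# the target atom is the probability-weighted mix of each clause's fuzzy
# evaluation on the scene. Weights are updated by gradient descent.

def softmax(ws):
    exps = [math.exp(w) for w in ws]
    s = sum(exps)
    return [e / s for e in exps]

# Assumed fuzzy evaluations of each candidate clause on one scene:
# clause 0 fires strongly on the positive example, clause 1 does not.
clause_vals = [0.9, 0.1]
weights = [0.0, 0.0]  # initial clause weights
lr = 1.0              # learning rate

for _ in range(100):
    probs = softmax(weights)
    pred = sum(p * v for p, v in zip(probs, clause_vals))
    # Loss = -log(pred) (cross-entropy against label 1).
    # d(loss)/d(w_c) = (-1/pred) * probs[c] * (clause_vals[c] - pred),
    # using the softmax Jacobian.
    dpred = -1.0 / pred
    for c in range(len(weights)):
        grad = dpred * probs[c] * (clause_vals[c] - pred)
        weights[c] -= lr * grad

probs = softmax(weights)
print(probs)  # probability mass shifts toward the clause that fires
```

After training, almost all probability mass sits on clause 0, i.e., the rule that actually explains the positive example has been selected by gradient descent rather than by discrete search.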

Introduction by Examples

We introduce alphaILP through specific examples of its use cases.

Acknowledgements

This project has been supported by SPAICER (01MK20015E), TAILOR (952215), and AICO.


Contents:

Indices and tables