Learning Error Functions with Interpretable Compositional Networks

Florian Richoux


Date
April 14, 2026

In Constraint Programming, constraints are usually represented as predicates allowing or forbidding combinations of values. However, some algorithms can exploit a finer representation: error functions. By associating with each constraint type a function that evaluates the quality of an assignment, error functions extend the expressiveness of the regular Constraint Satisfaction Problem/Constrained Optimization Problem formalisms. Their use comes at a price, though: it makes problem modeling significantly harder, since users must provide a set of error functions that are not always easy to define. In this talk, we propose a method to learn an error function corresponding to a constraint, given its predicate version. Our method learns error functions in a supervised fashion, trying to reproduce either the Hamming or the Manhattan distance, using a graph model we named Interpretable Compositional Networks. This model allows us to obtain interpretable results. We run experiments on 7 global constraints to show the method's versatility. These experiments show that our method can learn functions that scale to high dimensions, and can learn functions from a very sparse training set.
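To illustrate the gap between a predicate and an error function, here is a minimal sketch (not the talk's code, just an assumed example) for the AllDifferent global constraint. The predicate only says whether an assignment is valid; the error function estimates how far the assignment is from satisfying the constraint, here as a Hamming-style count of variables that would need to change.

```python
from collections import Counter

def all_different_predicate(assignment):
    """Predicate view: True iff all variables take pairwise distinct values."""
    return len(set(assignment)) == len(assignment)

def all_different_error(assignment):
    """Error-function view: minimum number of variables to reassign so that
    all values become distinct (0 exactly when the predicate holds)."""
    counts = Counter(assignment)
    # Each value appearing k times forces k - 1 variables to change.
    return sum(k - 1 for k in counts.values())

# The error function can rank assignments that a predicate would all
# reject equally, which is what guides error-function-based solvers:
print(all_different_predicate([1, 2, 2, 3]))  # False
print(all_different_error([1, 2, 2, 3]))      # 1 (change one of the 2s)
print(all_different_error([1, 1, 1, 1]))      # 3
```

Writing such a function by hand for every constraint is exactly the modeling burden the talk addresses: the proposed method learns it automatically from the predicate version.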