ECAI workshop on algorithmic issues for inference in graphical models - AIGM'12
28th of August 2012, Montpellier, France. Univ. Montpellier, IAE, Room TD07.
Most real-world (e.g. biological) complex systems are formed of, or modelled by, elementary objects that interact locally with each other. Local properties can often be measured, assessed, or partially observed; in contrast, the global properties that stem from these local interactions are difficult to comprehend. It is now acknowledged that mathematical modelling is an adequate framework for understanding, controlling, or predicting the behaviour of complex systems such as gene regulatory networks or contact networks in epidemiology. More precisely, graphical models (GM), in which variables are bound to their interactors by deterministic or stochastic relationships, allow researchers to model possibly high-dimensional heterogeneous data and to capture uncertainty. Analysis, optimal control, inference, and prediction about complex systems all benefit from the formalisation proposed by GM. To achieve such tasks, a key requirement is the ability to answer general queries: What is the probability of observing a given event in a given situation? Which model best represents my data? What is the most acceptable solution to a query of interest that satisfies a list of given constraints? Often, an exact resolution cannot be achieved, either because of computational limits or because of the intractability of the problem.
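The first kind of query above (the probability of an event given partial observations) can be answered, for small models, by direct enumeration. A minimal sketch in Python on the classic rain/sprinkler toy Bayesian network; the network and its numbers are standard textbook illustrations, not material from the workshop:

```python
from itertools import product

# Toy Bayesian network: Rain -> Sprinkler, (Rain, Sprinkler) -> GrassWet.
# CPT values are the classic textbook ones, purely illustrative.
P_rain = {1: 0.2, 0: 0.8}
P_sprinkler = {(1, 1): 0.01, (0, 1): 0.99,   # P(S | R=1), keyed (S, R)
               (1, 0): 0.4,  (0, 0): 0.6}    # P(S | R=0)
P_grass = {(1, 1): 0.99, (1, 0): 0.9,        # P(G=1 | S, R), keyed (S, R)
           (0, 1): 0.8,  (0, 0): 0.0}

def joint(r, s, g):
    """Joint probability P(R=r, S=s, G=g) from the chain-rule factorisation."""
    pg = P_grass[(s, r)]
    return P_rain[r] * P_sprinkler[(s, r)] * (pg if g == 1 else 1.0 - pg)

def query(target_rain, evidence_grass=1):
    """P(Rain = target_rain | GrassWet = evidence_grass) by brute-force enumeration."""
    num = sum(joint(target_rain, s, evidence_grass) for s in (0, 1))
    den = sum(joint(r, s, evidence_grass) for r, s in product((0, 1), repeat=2))
    return num / den

print(round(query(1), 4))  # P(Rain=1 | GrassWet=1) ~ 0.3577
```

Enumeration is exponential in the number of unobserved variables, which is exactly why the approximate inference methods discussed at the workshop are needed for larger models.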
The aim of this workshop is to bridge the gap between the Statistics and Artificial Intelligence communities, where approximate inference methods for GM are developed. We are primarily interested in algorithmic aspects of probabilistic (e.g. Markov random fields, Bayesian networks, influence diagrams), deterministic (e.g. Constraint Satisfaction Problems, SAT, weighted variants, Generalized Additive Independence models) or hybrid (e.g. Markov logic networks) models. We expect both (i) reviews that analyze similarities and differences between approaches developed by computer scientists and statisticians in these areas, and (ii) original papers that propose new algorithms and show their performance on data sets as compared to state-of-the-art methods.
The workshop will take place at the "Institut d'Administration des Entreprises" (IAE) of the University of Montpellier, room TD07.
- 9h15 Workshop introduction
- 9h30 – 10h30 Invited talk: J. Larrosa and E. Rollon. Algorithms for Graphical Models: Mapping the global picture. Slides by J. Larrosa and E. Rollon.
- 10h30 – 11h00 break
- 11h00 – 11h30 D. Allouche, C. Bessiere, P. Boizumault, S. de Givry, P. Gutierrez, S. Loudni, JP. Metivier, T. Schiex. Filtering Decomposable Global Cost Functions. Associated Slides
- 11h30 - 12h00 J. Larrosa and E. Rollon. Improved Bounded Max-Sum for Distributed Constraint Optimization. Associated Slides.
- 12h00 – 12h30 W. Probert, E. McDonald-Madden, N. Peyrard, R. Sabbadin. Computational issues surrounding the dynamic optimisation of management of an ecological food web. Associated Slides.
- 12h30 – 14h00 lunch break
- 14h00 – 15h00 Invited talk: A. Globerson. Linear Programming Relaxations for Graphical Models - Learning and Inference. Associated Slides.
- 15h00 – 15h30 J. Cussens. An upper bound for BDeu local scores. Associated Slides.
- 15h30 - 15h45 break
- 15h45 – 16h15 J. Vandel, B. Mangin, S. de Givry. New local move operators for learning the structure of Bayesian networks. Associated Slides.
- 16h15 – 16h45 P. Zhang, F. Krzakala, J. Reichardt and L. Zdeborova. Comparative Study for Inference of Hidden Classes in Stochastic Block Models. Associated Slides.
The workshop is intended for researchers from the fields of machine learning, constraint satisfaction and optimisation, decision theory, probability, and statistics.
Call for papers
Topics include, but are not limited to:
- answering queries in GMs (MAP/MPM/MPE, satisfaction, optimisation, ...),
- evaluation of the normalisation constant of a Markov random field,
- solution counting in deterministic GM or enumeration of k-best solutions,
- decision variable optimisation (optimisation within deterministic or mixed deterministic/stochastic GM),
- variational methods,
- Monte-Carlo methods,
- bounds for approximate inference,
- stochastic satisfiability (SAT) and stochastic constraint programming (CP),
- bridges between probabilistic and logic formalisms.
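Several of these topics (normalisation constants, counting, query answering) share the same computational core: summing a factorised function over exponentially many assignments. A minimal illustration, with made-up potentials, of why this is hard in general and easy on tree-structured models, comparing brute-force enumeration to variable elimination on a small binary chain:

```python
import math
from itertools import product

# Small pairwise Markov random field on a chain x1 - x2 - x3 - x4, binary
# states, with illustrative (made-up) unary and pairwise log-potentials.
n = 4
theta = [0.5, -0.2, 0.1, 0.3]   # unary log-potential earned when x_i = 1
coupling = 0.8                  # pairwise log-potential rewarding equal neighbours

def log_score(x):
    s = sum(theta[i] * x[i] for i in range(n))
    s += sum(coupling * (x[i] == x[i + 1]) for i in range(n - 1))
    return s

# Brute force: Z sums over all 2^n assignments (exponential in n).
Z_brute = sum(math.exp(log_score(x)) for x in product((0, 1), repeat=n))

# Variable elimination: fold in one variable at a time (linear in n on a chain).
msg = {x: math.exp(theta[0] * x) for x in (0, 1)}
for i in range(1, n):
    msg = {x: math.exp(theta[i] * x)
              * sum(msg[y] * math.exp(coupling * (y == x)) for y in (0, 1))
           for x in (0, 1)}
Z_elim = sum(msg.values())

print(Z_brute, Z_elim)  # the two computations of Z agree
```

On graphs with cycles, elimination cost grows exponentially with treewidth, which is what motivates the variational and Monte-Carlo approximations listed above.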
We will consider papers from 2 to 8 pages, using the ECAI formatting style.
Contributions (PDF files) should be submitted by email to the organisation committee no later than the submission deadline below.
- Submission deadline: 31st of May, extended to the 11th of June
- Notification to authors: 29th of June
- Submission of final version: 13th of July
- Amir Globerson: Linear Programming Relaxations for Graphical Models - Learning and Inference
Abstract: Many probabilistic modelling tasks rely on solving challenging inference problems. These combinatorial problems arise, e.g., in predicting likely values for variables, as in selecting and orienting residues in protein design, in parsing for natural language processing, or when learning the model structure itself. In many cases, the inference problems involve densely connected variables (or higher-order dependencies) and are provably hard. However, recent research has shown that some of these difficulties can be overcome by a careful choice of approximation schemes and learning algorithms. These have yielded very encouraging results in a wide array of fields, from machine vision and natural language processing to computational biology and signal processing. The tutorial will focus on linear programming (LP) relaxations, which have been particularly successful in solving inference problems. Intuitively, LP relaxations decompose a complex problem into a set of simpler subproblems that are subsequently encouraged to agree. If the subproblems agree about a solution, then that solution is the optimal one; otherwise the relaxed solution is fractional. Geometrically, the relaxation maintains an outer-bound approximation to a polytope whose vertices correspond to valid solutions. I will introduce and explain key ideas behind recent approaches, discuss when they can and cannot be applied, how they can be integrated into supervised learning schemes, and what efficient message-passing algorithms exist for solving them. Examples from several applications will be provided, including computational biology, natural language processing, and structure learning.
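The "subproblems encouraged to agree" intuition can be made concrete with dual decomposition, one standard way of solving such LP relaxations. The sketch below is ours, not the speaker's code, and all potentials are made up: a tiny MRF on a 3-cycle is split into edge subproblems, and a subgradient method adjusts Lagrange multipliers so that the edges agree on the shared variables. By weak duality, the sum of the subproblem optima always upper-bounds the true MAP value:

```python
from itertools import product

# Tiny pairwise MRF on a 3-cycle with binary states; all numbers illustrative.
nodes = [0, 1, 2]
edges = [(0, 1), (1, 2), (2, 0)]
unary = {0: [0.0, 0.6], 1: [0.0, -0.4], 2: [0.0, 0.3]}
pair = {e: {(a, b): (0.5 if a == b else -0.5) for a in (0, 1) for b in (0, 1)}
        for e in edges}
deg = {i: sum(i in e for e in edges) for i in nodes}

# Lagrange multipliers lam[e][i][x]; their sum over the edges at each node
# stays zero by construction, which is what guarantees a valid dual bound.
lam = {e: {i: [0.0, 0.0] for i in e} for e in edges}

def solve_slave(e):
    """Maximise one edge subproblem: pairwise term plus reweighted unaries."""
    i, j = e
    best, arg = None, None
    for a, b in product((0, 1), repeat=2):
        v = (pair[e][(a, b)]
             + unary[i][a] / deg[i] + lam[e][i][a]
             + unary[j][b] / deg[j] + lam[e][j][b])
        if best is None or v > best:
            best, arg = v, (a, b)
    return best, arg

def dual_bound_and_decode(iters=200, step=0.5):
    for t in range(iters):
        args = {e: solve_slave(e)[1] for e in edges}
        alpha = step / (1 + t)  # decreasing subgradient step size
        for i in nodes:
            inc = [e for e in edges if i in e]
            for x in (0, 1):
                # Push each edge's copy of x_i towards the average copy.
                ind = {e: float(args[e][e.index(i)] == x) for e in inc}
                mean = sum(ind.values()) / len(inc)
                for e in inc:
                    lam[e][i][x] -= alpha * (ind[e] - mean)
    bound = sum(solve_slave(e)[0] for e in edges)
    # Decode by majority vote over edge copies (a common heuristic).
    decoded = [1 if 2 * sum(solve_slave(e)[1][e.index(i)]
                            for e in edges if i in e) > deg[i] else 0
               for i in nodes]
    return bound, decoded

def true_map():
    """Exact MAP value by brute force, for comparison."""
    return max(sum(unary[i][x[i]] for i in nodes)
               + sum(pair[e][(x[e[0]], x[e[1]])] for e in edges)
               for x in product((0, 1), repeat=3))

bound, decoded = dual_bound_and_decode()
print(bound >= true_map() - 1e-9)  # weak duality: always True
```

The pairwise potentials here are attractive, a case where the LP relaxation is known to be tight, so with a decreasing step size the bound approaches the exact MAP value.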
- J. Larrosa and E. Rollon: Algorithms for Graphical Models: Mapping the global picture
Abstract: Several fields of research can be cast into the general framework of Graphical Models. Independent research in the different fields has often developed similar ideas. However, this similarity is often hard to recognise because of each field's semantics, notation, etc. The purpose of this tutorial is to describe the main common algorithmic techniques in the most general terms. We expect that it will help researchers from different fields to realise where "their" techniques fit and how to benefit from similar ideas developed elsewhere. We will do our best to describe general algorithms using standard textbook language (e.g. dynamic programming, memoization, greedy algorithms, relaxation) while avoiding domain-specific vocabulary (e.g. inference, consistency enforcement, case-based reasoning). More specific algorithms, mainly from weighted CSPs and Max-SAT, will subsequently be described.
- Nathalie Peyrard (BIA, INRA Toulouse, France)
- Stéphane Robin (AgroParisTech, Paris, France)
- Thomas Schiex (BIA, INRA Toulouse, France)
For paper submission or enquiries about the meeting, please contact the organisation committee at AIGM12firstname.lastname@example.org