Learning to Solve Constraint Satisfaction Problems with Recurrent Transformers
Published
International Conference on Learning Representations (ICLR)
Abstract
Constraint satisfaction problems (CSPs) are about finding values of variables that
satisfy the given constraints. We show that a Transformer extended with recurrence
is a viable approach to learning to solve CSPs in an end-to-end manner, with
clear advantages over state-of-the-art methods such as Graph Neural Networks,
SATNet, and some neuro-symbolic models. With the Transformer's ability to
handle visual input, the proposed Recurrent Transformer can be straightforwardly
applied to visual constraint reasoning problems while successfully addressing the
symbol grounding problem. We also show how to leverage deductive knowledge
of discrete constraints in the Transformer's inductive learning to achieve sample-efficient
learning and semi-supervised learning for CSPs.
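To illustrate the core idea of "a Transformer extended with recurrence", the sketch below applies one weight-tied attention block repeatedly, so depth comes from iteration rather than from stacking distinct layers. This is a minimal pure-Python illustration under simplifying assumptions (a single attention head, no feed-forward sublayer or normalization, toy weight matrices); it is not the paper's actual architecture.

```python
import math

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over token rows of X."""
    Q, K, V = matmul(X, Wq), matmul(X, Wk), matmul(X, Wv)
    d = len(Wq[0])
    out = []
    for qi in Q:
        scores = softmax([sum(q * k for q, k in zip(qi, kj)) / math.sqrt(d) for kj in K])
        out.append([sum(s * vj[c] for s, vj in zip(scores, V)) for c in range(len(V[0]))])
    return out

def recurrent_transformer(X, Wq, Wk, Wv, steps):
    """Apply the SAME attention block `steps` times (weight tying = recurrence),
    with a residual connection after each step."""
    for _ in range(steps):
        A = self_attention(X, Wq, Wk, Wv)
        X = [[x + a for x, a in zip(xr, ar)] for xr, ar in zip(X, A)]
    return X
```

Because the weights are shared across iterations, the number of reasoning steps can be varied at inference time without changing the parameter count, which is the property that makes recurrence attractive for iterative constraint reasoning.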