Injecting Logical Constraints into Neural Networks via Straight-Through Estimators
International Conference on Machine Learning (ICML)
Injecting discrete logical reasoning into neural network learning is one of the challenges in neuro-symbolic AI. We find that a straight-through estimator, a method introduced to train binary neural networks, can be applied effectively to incorporate logical constraints into neural network learning. More specifically, we design a systematic way to represent discrete logical constraints as a loss function. Minimizing this loss by gradient descent via a straight-through estimator updates the neural network's weights in a direction that makes its binarized outputs satisfy the logical constraints. The experimental results show that, by leveraging GPUs and batch training, this method scales significantly better than state-of-the-art neuro-symbolic approaches that rely on heavy symbolic computation as a black box for computing gradients.
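To make the idea concrete, the following is a minimal PyTorch sketch (not the paper's actual implementation) of the mechanism the abstract describes: outputs are binarized in the forward pass, a loss penalizes violations of a logical constraint on the binarized values, and the straight-through estimator passes gradients through the non-differentiable binarization. The constraint used here, a disjunction "at least one variable is true", and all names are illustrative assumptions.

```python
import torch


class BinarizeSTE(torch.autograd.Function):
    """Threshold to {0, 1} in the forward pass; pass the gradient
    straight through (identity) in the backward pass."""

    @staticmethod
    def forward(ctx, x):
        return (x > 0.5).float()

    @staticmethod
    def backward(ctx, grad_out):
        # Straight-through estimator: ignore the zero gradient of the
        # step function and propagate the incoming gradient unchanged.
        return grad_out


def constraint_loss(probs):
    """Loss that is zero iff the binarized outputs satisfy the
    (illustrative) constraint: at least one variable per row is true.
    prod_i (1 - b_i) = 1 exactly when every b_i = 0."""
    b = BinarizeSTE.apply(probs)
    return (1.0 - b).prod(dim=-1).mean()


# Toy "network output": raw logits for 4 examples with 3 binary variables.
torch.manual_seed(0)
logits = torch.randn(4, 3, requires_grad=True)
opt = torch.optim.SGD([logits], lr=2.0)

for _ in range(500):
    opt.zero_grad()
    loss = constraint_loss(torch.sigmoid(logits))
    loss.backward()  # gradients reach the logits via the STE
    opt.step()
```

After training, the binarized outputs satisfy the constraint in every row: whenever a row has no variable above the 0.5 threshold, the STE gradient pushes all of its probabilities upward until one crosses it, at which point the loss for that row is exactly zero.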