
PuzzleBench: Can LLMs Solve Challenging First-Order Combinatorial Reasoning Problems?

Authors:
Chinmay Mittal, Krishna Kartik, Mausam, Parag Singla
Keywords:
Computer Science, Artificial Intelligence (cs.AI), Computation and Language (cs.CL), Machine Learning (cs.LG)
Journal:
--
Date:
2024-02-04
Abstract
Recent works have explored the use of LLMs for reasoning tasks, focusing on relatively simple problems such as logical question answering. In our work, we wish to tackle more complicated problems, significantly expanding the capabilities of these models. In particular, we explore whether LLMs can solve challenging first-order combinatorial reasoning problems, an example being the popular puzzle Sudoku. These problems have an underlying first-order structure, described by a general description in natural language, and can be instantiated to instances of varying sizes. Moreover, these problems are computationally intensive, requiring several reasoning steps to reach the solution. We present PuzzleBench, a dataset of 31 such challenging puzzles. We observe that LLMs, even when aided by symbolic solvers, perform rather poorly on our benchmark. In response, we propose a new approach, Puzzle-LM, which combines LLMs with both symbolic solvers and program interpreters, enabling them to reason about such challenging problems. We also show how feedback from smaller solved instances can help improve this reasoning ability.
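To make the notion of a first-order combinatorial reasoning problem concrete, the sketch below shows how the abstract's running example, Sudoku, can be encoded for an off-the-shelf symbolic solver. This is only an illustrative assumption, not the paper's Puzzle-LM pipeline: it uses the Z3 SMT solver (a choice made here for illustration) to solve one 9x9 instance, whereas the first-order description in the paper covers instances of varying sizes.

```python
# Illustrative sketch only (not taken from the paper): encoding a single
# 9x9 Sudoku instance as constraints for the Z3 SMT solver.
from z3 import Solver, Int, Distinct, And, sat

def solve_sudoku(grid):
    """grid: 9x9 list of lists with 0 for empty cells; returns a solved grid or None."""
    cells = [[Int(f"c_{r}_{c}") for c in range(9)] for r in range(9)]
    s = Solver()
    # Each cell takes a value in 1..9.
    s.add([And(1 <= cells[r][c], cells[r][c] <= 9)
           for r in range(9) for c in range(9)])
    # Every row, column, and 3x3 box contains distinct values.
    s.add([Distinct(cells[r]) for r in range(9)])
    s.add([Distinct([cells[r][c] for r in range(9)]) for c in range(9)])
    s.add([Distinct([cells[3 * br + i][3 * bc + j]
                     for i in range(3) for j in range(3)])
           for br in range(3) for bc in range(3)])
    # Pin the given clues to their values.
    s.add([cells[r][c] == grid[r][c]
           for r in range(9) for c in range(9) if grid[r][c] != 0])
    if s.check() == sat:
        m = s.model()
        return [[m.evaluate(cells[r][c]).as_long() for c in range(9)]
                for r in range(9)]
    return None
```

In an LLM-plus-solver setup of the kind the abstract describes, the model's role would be to produce such a constraint formulation (or a program that generates it) from the natural-language puzzle description, while the solver performs the combinatorial search.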
PDF: PuzzleBench: Can LLMs Solve Challenging First-Order Combinatorial Reasoning Problems?.pdf