Abstract
Recent studies have suggested that Large Language Models (LLMs) could provide
interesting ideas contributing to mathematical discovery. This claim was
motivated by reports that LLM-based genetic algorithms produced heuristics
offering new insights into the online bin packing problem under uniform and
Weibull distributions. In this work, we reassess this claim through a detailed
analysis of the heuristics produced by LLMs, examining both their behavior and
interpretability. Despite being human-readable, these heuristics remain largely
opaque even to domain experts. Building on this analysis, we propose a new
class of algorithms tailored to these specific bin packing instances. The
derived algorithms are significantly simpler, more efficient, more
interpretable, and more generalizable, suggesting that the considered instances
are themselves relatively simple. We then discuss the limitations of the claim
regarding LLMs' contribution to this problem, which appears to rest on the
mistaken assumption that the instances had previously been studied. Our
findings instead emphasize the need for rigorous validation and
contextualization when assessing the scientific value of LLM-generated outputs.
Authors (2)
Julien Herrmann
Guillaume Pallez
Submitted
October 31, 2025
Key Contributions
This paper reassesses the claim that LLM-based genetic algorithms offer new insights into the online bin packing problem. It provides a detailed analysis of LLM-generated heuristics, finding them largely opaque despite being human-readable. The authors propose simpler, more efficient, and more interpretable algorithms tailored to these specific bin packing instances, suggesting that the instances themselves are relatively simple.
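For context on the problem setting, the sketch below shows a standard best-fit baseline for online bin packing, evaluated on item sizes drawn from uniform and Weibull distributions. This is an illustrative baseline only, not the LLM-generated heuristics or the algorithms proposed by the authors; the bin capacity and distribution parameters are placeholder assumptions.

```python
import random

def best_fit(items, capacity=1.0):
    """Place each item, in arrival order, into the open bin whose remaining
    space is smallest yet still sufficient; open a new bin otherwise."""
    bins = []  # remaining capacity of each open bin
    for size in items:
        # indices of bins that can still accommodate the item
        feasible = [i for i, r in enumerate(bins) if r >= size]
        if feasible:
            # choose the tightest fit among the feasible bins
            j = min(feasible, key=lambda i: bins[i] - size)
            bins[j] -= size
        else:
            bins.append(capacity - size)
    return len(bins)

if __name__ == "__main__":
    random.seed(0)
    # Item sizes from a uniform distribution (illustrative parameters).
    uniform_items = [random.uniform(0.0, 1.0) for _ in range(1000)]
    # Item sizes from a Weibull distribution, truncated to the bin capacity
    # (scale and shape values are assumptions, not the paper's setup).
    weibull_items = [min(random.weibullvariate(0.45, 3.0), 1.0) for _ in range(1000)]
    print("bins used (uniform):", best_fit(uniform_items))
    print("bins used (Weibull):", best_fit(weibull_items))
```

In the online setting, each item must be placed when it arrives, without knowledge of future items; heuristics such as best fit serve as the reference points against which learned or LLM-generated placement rules are typically compared.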
Business Value
Provides a more rigorous understanding of LLM capabilities in optimization, potentially leading to more reliable and interpretable AI-driven solutions for logistics and resource allocation problems.