
metaTextGrad: Automatically optimizing language model optimizers

📄 Abstract

Large language models (LLMs) are increasingly used in learning algorithms, evaluations, and optimization tasks. Recent studies have shown that using LLM-based optimizers to automatically optimize model prompts, demonstrations, predictions, or other components can significantly enhance the performance of AI systems, as demonstrated by frameworks such as DSPy and TextGrad. However, these LLM-based optimizers are themselves hand-designed by humans; the optimizers are not themselves optimized. Moreover, they are general-purpose by design, intended for a broad audience rather than tailored to specific tasks. To address these challenges, we propose metaTextGrad, a meta-optimizer that further enhances existing optimizers and aligns them to a given task. Our approach consists of two key components: a meta prompt optimizer and a meta structure optimizer. Their combination significantly improves performance across multiple benchmarks, achieving an average absolute performance improvement of up to 6% over the best baseline.
Authors (4): Guowei Xu, Mert Yuksekgonul, Carlos Guestrin, James Zou
Submitted: May 24, 2025
arXiv Category: cs.CL
Venue: NeurIPS 2025

Key Contributions

Proposes metaTextGrad, a meta-optimizer designed to automatically enhance existing LLM-based optimizers and align them to specific tasks. It introduces meta prompt and meta structure optimizers to tailor general-purpose optimizers for better performance on given tasks.
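The meta prompt optimizer idea — using an LLM to iteratively rewrite an optimizer's own prompt and keeping rewrites that score better on the task — can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: the `llm` and `evaluate` functions are offline stand-ins, and all names are assumptions.

```python
# Minimal sketch of a meta prompt optimizer in the spirit of metaTextGrad.
# All names (llm, evaluate, meta_optimize) are illustrative assumptions,
# not the paper's actual API.

def llm(prompt: str) -> str:
    # Stand-in for a real LLM call that proposes a rewritten optimizer
    # prompt; here it just appends a task-specific hint so the example
    # runs offline.
    return prompt + "\nFocus on multi-step reasoning errors."

def evaluate(optimizer_prompt: str, task_examples) -> float:
    # Toy proxy metric: reward prompts that mention the task's failure
    # mode. A real system would run the optimizer on held-out examples.
    return float("reasoning" in optimizer_prompt)

def meta_optimize(optimizer_prompt: str, task_examples, steps: int = 3) -> str:
    """Iteratively ask an LLM to rewrite the optimizer's own prompt,
    keeping a candidate only if it scores better on the task."""
    best = optimizer_prompt
    best_score = evaluate(best, task_examples)
    for _ in range(steps):
        candidate = llm(best)  # meta-step: rewrite the optimizer prompt itself
        score = evaluate(candidate, task_examples)
        if score > best_score:
            best, best_score = candidate, score
    return best

improved = meta_optimize("Critique the model's answer and suggest a fix.", [])
```

The meta structure optimizer described in the paper would additionally search over how optimizer components are composed, not just their prompt text; that search is not shown here.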

Business Value

Accelerates the development and improves the performance of LLM-powered applications by automating and optimizing the prompt engineering and optimizer design process, reducing manual effort and improving results.