
PARL: Prompt-based Agents for Reinforcement Learning

📄 Abstract

Large language models (LLMs) have demonstrated high performance on tasks expressed in natural language, particularly in zero- or few-shot settings. These are typically framed as supervised (e.g., classification) or unsupervised (e.g., clustering) problems. However, limited work evaluates LLMs as agents in reinforcement learning (RL) tasks (e.g., playing games), where learning occurs through interaction with an environment and a reward system. While prior work has focused on tasks that rely on a language representation, we study structured, non-linguistic reasoning, such as interpreting positions in a grid world. We therefore introduce PARL (Prompt-based Agent for Reinforcement Learning), a method that uses LLMs as RL agents through prompting, without any fine-tuning. PARL encodes actions, states, and rewards in the prompt, enabling the model to learn through trial-and-error interaction. We evaluate PARL on three standard RL tasks that do not entirely rely on natural language. We show that it can match or outperform traditional RL agents in simple environments by leveraging pretrained knowledge. However, we identify performance limitations in tasks that require complex mathematical operations or decoding states and actions.
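
The abstract states that PARL encodes actions, states, and rewards directly in the prompt. A minimal sketch of what such an encoding could look like for a grid-world task follows; the prompt wording, the tuple-based state format, and the action names are illustrative assumptions, not the paper's exact prompt design.

```python
# Illustrative sketch: encoding an RL interaction history as a text prompt.
# The field names, wording, and action set are assumptions for a grid world,
# not the exact encoding used in the PARL paper.

def encode_prompt(history, current_state, actions):
    """Build a prompt from past (state, action, reward) triples plus the current state."""
    lines = [
        "You are an agent in a 5x5 grid world. Reach the goal cell.",
        f"Available actions: {', '.join(actions)}.",
        "Interaction history:",
    ]
    for step, (state, action, reward) in enumerate(history):
        lines.append(f"  step {step}: state={state}, action={action}, reward={reward}")
    lines.append(f"Current state: {current_state}")
    lines.append("Reply with exactly one action name.")
    return "\n".join(lines)


# Example usage
actions = ["up", "down", "left", "right"]
history = [((0, 0), "right", 0.0), ((0, 1), "down", -1.0)]
print(encode_prompt(history, (0, 1), actions))
```
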
Authors: Yarik Menchaca Resendiz, Roman Klinger
Submitted: October 24, 2025
arXiv Category: cs.CL

Key Contributions

Introduces PARL (Prompt-based Agent for Reinforcement Learning), a method that enables LLMs to act as RL agents through prompting without fine-tuning. PARL encodes states, actions, and rewards into prompts, allowing LLMs to learn via trial-and-error interaction on structured, non-linguistic tasks.
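
The contribution summary above describes a trial-and-error loop in which each observed reward is fed back into the next prompt. The sketch below shows one way such a loop could be wired up; the `llm` callable, the Gymnasium-style `env` interface, and the fallback action choice are all assumptions for illustration, and `encode_prompt` refers to the encoding sketch shown after the abstract.

```python
# Illustrative sketch of a prompt-based agent loop in the spirit of PARL.
# `llm` is any callable mapping a prompt string to a reply string (e.g. a
# chat-completion wrapper); `env` is assumed to follow the Gymnasium
# reset()/step() convention with a discrete action space. Neither reflects
# the authors' exact implementation.

def run_episode(env, llm, actions, max_steps=50):
    """Roll out one episode, feeding the growing interaction history into each prompt."""
    history = []                                          # (state, action, reward) triples so far
    state, _ = env.reset()
    for _ in range(max_steps):
        prompt = encode_prompt(history, state, actions)   # helper from the sketch above
        reply = llm(prompt).strip().lower()
        action = reply if reply in actions else actions[0]  # fall back if the reply is unparseable
        next_state, reward, terminated, truncated, _ = env.step(actions.index(action))
        history.append((state, action, reward))           # the reward signal enters the next prompt
        state = next_state
        if terminated or truncated:
            break
    return history
```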

Business Value

Enables the development of more versatile AI agents that can learn complex tasks through interaction, potentially reducing the need for extensive task-specific training data and fine-tuning.