
Finetuning LLMs for EvaCun 2025 token prediction shared task

📄 Abstract

In this paper, we present our submission for the token prediction task of EvaCun 2025. Our systems are based on LLMs (Command-R, Mistral, and Aya Expanse) fine-tuned on the task data provided by the organizers. As we only possess a very superficial knowledge of the subject field and the languages of the task, we simply used the training data without any task-specific adjustments, preprocessing, or filtering. We compare three different approaches (based on three different prompts) to obtaining the predictions, and we evaluate them on a held-out part of the data.
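
The abstract does not say which training framework the authors used; a fine-tuning setup of this kind is commonly built on the Hugging Face transformers stack with LoRA adapters. The sketch below is a minimal illustration under that assumption only; the model name, hyperparameters, and toy dataset are placeholders, not details from the submission.

```python
# Minimal sketch: LoRA fine-tuning of a causal LM on task text.
# Assumptions: HF transformers/peft/datasets stack; all hyperparameters
# and the toy training lines are illustrative, not taken from the paper.
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

MODEL_NAME = "mistralai/Mistral-7B-v0.1"  # one of the three model families

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token  # Mistral defines no pad token
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# Attach small trainable LoRA adapters instead of updating all weights.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM"))

# Stand-in for the organizers' training data (format not described here).
train = Dataset.from_dict({"text": ["<toy line 1>", "<toy line 2>"]})
train = train.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

Trainer(
    model=model,
    args=TrainingArguments(output_dir="ckpt", num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=train,
    # mlm=False -> standard next-token (causal LM) objective.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```

Using the training data as plain text, as here, matches the paper's stated choice to apply no task-specific preprocessing or filtering.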
Authors (2)
Josef Jon
OndΕ™ej Bojar
Submitted
October 17, 2025
arXiv Category
cs.CL
arXiv PDF

Key Contributions

This paper presents a submission to the EvaCun 2025 token prediction shared task, detailing the fine-tuning of three LLMs (Command-R, Mistral, and Aya Expanse) on the organizer-provided data. It compares three prompting approaches for obtaining predictions and evaluates them on a held-out portion of the data.
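
The three prompts themselves are not reproduced on this page; a generic harness for comparing prompt templates on a held-out split might look like the following. The templates, data format, and `predict` stand-in are hypothetical illustrations, not the authors' prompts.

```python
# Hypothetical harness: compare prompt templates by exact-match accuracy
# on a held-out split. Templates and data format are illustrative only.
PROMPTS = {
    "plain":    "Fill in the missing token: {context}",
    "instruct": "Predict the token hidden by [MASK] and answer with "
                "that token only: {context}",
    "few_shot": "Example: the cat sat on the [MASK] -> mat\n"
                "Now: {context} ->",
}

# Toy held-out examples; the real task data is not reproduced here.
HELD_OUT = [
    {"context": "the king of [MASK] ruled", "target": "Babylon"},
]

def predict(prompt: str) -> str:
    """Placeholder for a generation call to the fine-tuned model."""
    return "Babylon"  # stub so the harness runs end to end

def accuracy(template: str) -> float:
    hits = sum(
        predict(template.format(context=ex["context"])).strip() == ex["target"]
        for ex in HELD_OUT)
    return hits / len(HELD_OUT)

for name, template in PROMPTS.items():
    print(f"{name}: {accuracy(template):.2%}")
```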

Business Value

Demonstrates practical application of LLMs in competitive settings and provides insights into effective fine-tuning and prompting strategies for specific NLP tasks.