This paper presents a submission for the EvaCun 2025 token prediction task, detailing the fine-tuning of three LLMs (Command-R, Mistral, Aya Expanse) on the provided data. It compares three different prompting approaches for prediction and evaluates their performance.
The work demonstrates a practical application of LLMs in a competitive setting and offers insights into effective fine-tuning and prompting strategies for this NLP task.