Challenges the assumption that multilinguality is necessary for effective zero-shot transfer in sense-aware tasks. Through a large-scale analysis spanning 28 languages, it identifies pretraining/fine-tuning data and evaluation artifacts as more significant factors, offering a refined understanding of cross-lingual transfer, especially for low-resource languages.
Informs the development of multilingual NLP models by highlighting factors more impactful than language count alone, potentially reducing training costs and improving performance for low-resource languages.