Abstract
In the age of open and free information, a concerning trend of reliance on AI
is emerging. However, existing AI tools struggle to evaluate the credibility of
information and to justify their assessments. Hence, there is a growing need
for systems that can help users evaluate the trustworthiness of online
information. Although major search engines incorporate AI features, they often
lack clear reliability indicators. We present TrueGL, a model that makes
trustworthy search results more accessible. The model is a fine-tuned version
of IBM's Granite-1B, trained on a custom dataset and integrated into a search
engine with a reliability scoring system. We evaluate the system using prompt
engineering: each statement is assigned a continuous reliability score from
0.1 to 1, and the model is instructed to return a textual explanation alongside
the score. Each model's predicted scores are measured against ground-truth scores using
standard evaluation metrics. TrueGL consistently outperforms other small-scale
LLMs and rule-based approaches across all experiments on key evaluation
metrics, including MAE, RMSE, and R². The model's high accuracy, broad content
coverage, and ease of use make trustworthy information more accessible and help
reduce the spread of false or misleading content online. Our code is publicly
available at https://github.com/AlgazinovAleksandr/TrueGL, and our model is
publicly released at https://huggingface.co/JoydeepC/trueGL.
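
For illustration, here is a minimal sketch of how the released checkpoint might be queried for a reliability score and how the reported metrics can be computed. The prompt wording, the score-parsing logic, and the two evaluation examples are assumptions for demonstration, not the paper's exact protocol; only the model ID comes from the abstract.

```python
# Sketch: query the released TrueGL checkpoint for a reliability score
# and evaluate predictions with MAE, RMSE, and R². The prompt format and
# output parsing below are assumed, not taken from the paper.
import re

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

MODEL_ID = "JoydeepC/trueGL"  # released checkpoint (see abstract)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

def score_statement(statement: str) -> float:
    """Ask the model for a score in [0.1, 1] plus an explanation, then
    parse the first float from the generated text (assumed format)."""
    prompt = (
        "Rate the reliability of the following statement on a scale "
        "from 0.1 to 1 and explain your rating.\n"
        f"Statement: {statement}\nScore:"
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        output_ids = model.generate(**inputs, max_new_tokens=128)
    text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
    match = re.search(r"\d*\.?\d+", text[len(prompt):])
    return float(match.group()) if match else 0.1  # fall back to the floor

# Hypothetical evaluation pairs: (statement, ground-truth score).
eval_set = [
    ("Water boils at 100 degrees Celsius at sea level.", 1.0),
    ("The Great Wall of China is visible from the Moon.", 0.1),
]
y_true = [score for _, score in eval_set]
y_pred = [score_statement(stmt) for stmt, _ in eval_set]

print("MAE: ", mean_absolute_error(y_true, y_pred))
print("RMSE:", mean_squared_error(y_true, y_pred) ** 0.5)
print("R2:  ", r2_score(y_true, y_pred))
```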