Unsupervised Word-level Quality Estimation for Machine Translation Through the Lens of Annotators (Dis)agreement
Abstract
Unsupervised word-level quality estimation techniques based on model interpretability and uncertainty quantification are evaluated against supervised metrics for identifying translation errors, with a focus on the impact of human label variation.
Word-level quality estimation (WQE) aims to automatically identify fine-grained error spans in machine-translated outputs and has found many uses, including assisting translators during post-editing. Modern WQE techniques are often expensive, involving prompting of large language models or ad-hoc training on large amounts of human-labeled data. In this work, we investigate efficient alternatives exploiting recent advances in language model interpretability and uncertainty quantification to identify translation errors from the inner workings of translation models. In our evaluation spanning 14 metrics across 12 translation directions, we quantify the impact of human label variation on metric performance by using multiple sets of human labels. Our results highlight the untapped potential of unsupervised metrics, the shortcomings of supervised methods when faced with label uncertainty, and the brittleness of single-annotator evaluation practices.
Community
We exploit recent advances in language model interpretability and uncertainty quantification to identify translation errors from the inner workings of translation models.
Code: https://github.com/gsarti/labl/tree/main/examples/unsup_wqe
Precomputed metrics: https://huggingface.co/datasets/gsarti/unsup_wqe_metrics
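Below is a minimal sketch of the kind of unsupervised signal this work builds on: token-level surprisal and predictive entropy computed from the translation model's own output distribution. It is not the paper's exact metric set; the model name, example sentences, and flagging threshold are illustrative assumptions.

```python
# Minimal sketch: unsupervised word-level QE signals from an MT model's own
# predictive distribution (token surprisal and entropy). The model, sentences,
# and threshold below are illustrative assumptions, not the paper's setup.
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "Helsinki-NLP/opus-mt-en-de"  # any Hugging Face seq2seq MT model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name).eval()

src = "The committee approved the proposal yesterday."
mt = "Der Ausschuss hat den Vorschlag gestern abgelehnt."  # hypothetical MT output containing an error

enc = tokenizer(src, return_tensors="pt")
labels = tokenizer(text_target=mt, return_tensors="pt")["input_ids"]

with torch.no_grad():
    # Teacher-force the MT output and read off the model's distribution at each step
    logits = model(**enc, labels=labels).logits  # (1, tgt_len, vocab_size)

log_probs = torch.log_softmax(logits, dim=-1)
target_ids = labels[0]
positions = torch.arange(target_ids.size(0))

# Surprisal of each produced target token under the model
surprisal = -log_probs[0, positions, target_ids]
# Predictive entropy of the full output distribution at each step
entropy = -(log_probs[0].exp() * log_probs[0]).sum(dim=-1)

for tok, s, h in zip(tokenizer.convert_ids_to_tokens(target_ids.tolist()), surprisal, entropy):
    flag = "  <-- possible error span" if s.item() > 4.0 else ""  # arbitrary threshold, for illustration only
    print(f"{tok:>15}  surprisal={s.item():5.2f}  entropy={h.item():5.2f}{flag}")
```

Tokens where surprisal and entropy spike are candidate error spans; the paper evaluates such unsupervised signals against supervised and LLM-based WQE approaches under multiple sets of human labels.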
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Large Language Models as Span Annotators (2025)
- Same evaluation, more tokens: On the effect of input length for machine translation evaluation using Large Language Models (2025)
- AskQE: Question Answering as Automatic Evaluation for Machine Translation (2025)
- LLMs Are Not Scorers: Rethinking MT Evaluation with Generation-Based Methods (2025)
- Do LLMs Understand Your Translations? Evaluating Paragraph-level MT with Question Answering (2025)
- Calibrating Translation Decoding with Quality Estimation on LLMs (2025)
- Steering Large Language Models for Machine Translation Personalization (2025)