Preprints
Preprints in reversed chronological order (generated by jekyll-scholar).
2025
- E-Scores for (In)Correctness Assessment of Generative Model Outputs
  Guneet S. Dhillon, Javier González, Teodora Pandeva, and Alicia Curth
  2025
While generative models, especially large language models (LLMs), are ubiquitous in today’s world, principled mechanisms to assess their (in)correctness are limited. Using the conformal prediction framework, previous works construct sets of LLM responses where the probability of including an incorrect response, or error, is capped at a desired user-defined tolerance level. However, since these methods are based on p-values, they are susceptible to p-hacking, i.e., choosing the tolerance level post-hoc can invalidate the guarantees. We therefore leverage e-values to complement generative model outputs with e-scores as a measure of incorrectness. In addition to achieving the same statistical guarantees as before, e-scores give users the flexibility to adaptively choose tolerance levels after observing the e-scores themselves, by upper bounding a post-hoc notion of error called size distortion. We experimentally demonstrate their efficacy in assessing LLM outputs for different correctness types: mathematical factuality and property constraint satisfaction.
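To make the thresholding idea in the abstract concrete, the following is a minimal illustrative sketch (not the paper's implementation): given e-scores for candidate responses, Markov's inequality says a valid e-value exceeds 1/α with probability at most α, so retaining responses whose e-score falls below 1/α yields a set that excludes a correct response with probability at most α. The function name and example scores are hypothetical.

```python
def escore_prediction_set(e_scores, alpha):
    """Keep indices of responses whose e-score is below 1/alpha.

    Sketch only: by Markov's inequality, a valid e-value exceeds
    1/alpha with probability at most alpha, so this set misses a
    correct response with probability at most alpha.
    """
    threshold = 1.0 / alpha
    return [i for i, e in enumerate(e_scores) if e < threshold]

# Hypothetical e-scores for four candidate responses; larger means
# stronger evidence of incorrectness.
scores = [0.2, 5.0, 25.0, 0.8]
print(escore_prediction_set(scores, alpha=0.1))  # keeps e-scores below 10
```

A key point from the abstract is that the same e-scores can be re-thresholded at a tolerance chosen after inspection, which is the post-hoc flexibility that p-value-based conformal sets lack.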
@misc{dhillon2025escores,
  title = {E-Scores for (In)Correctness Assessment of Generative Model Outputs},
  author = {Dhillon, Guneet S. and Gonz\'{a}lez, Javier and Pandeva, Teodora and Curth, Alicia},
  year = {2025},
  eprint = {2510.25770},
  archiveprefix = {arXiv},
  primaryclass = {stat.ML},
  url = {https://arxiv.org/abs/2510.25770},
}