
Batch Universal Prediction

Authors:
Marco Bondaschi, Michael Gastpar
Keywords:
Computer Science, Information Theory, Information Theory (cs.IT), Machine Learning (cs.LG), Machine Learning (stat.ML)
Journal:
--
Date:
2024-02-06
Abstract
Large language models (LLMs) have recently gained much popularity due to their surprising ability to generate human-like English sentences. LLMs are essentially predictors, estimating the probability of a sequence of words given the past. Therefore, it is natural to evaluate their performance from a universal prediction perspective. To do so fairly, we introduce the notion of batch regret as a modification of the classical average regret, and we study its asymptotic value for add-constant predictors, in the case of memoryless sources and first-order Markov sources.
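As a rough illustration of the objects mentioned in the abstract, the sketch below implements a generic add-constant (add-β) predictor for a memoryless source and computes its classical per-sequence regret against the maximum-likelihood i.i.d. fit. The batch regret introduced in the paper is not reproduced here; the function names, the β values, and the example sequence are illustrative assumptions.

```python
import math
from collections import Counter


def add_constant_log_prob(seq, alphabet, beta=1.0):
    """Log-probability assigned to `seq` by an add-beta predictor:
    at step t, P(next = a | past) = (count(a) + beta) / (t + beta * |alphabet|)."""
    counts = Counter()
    log_prob = 0.0
    for t, symbol in enumerate(seq):
        p = (counts[symbol] + beta) / (t + beta * len(alphabet))
        log_prob += math.log(p)
        counts[symbol] += 1
    return log_prob


def ml_log_prob(seq):
    """Log-probability of `seq` under the best memoryless (i.i.d.) model,
    i.e. the empirical distribution of the sequence itself."""
    counts = Counter(seq)
    n = len(seq)
    return sum(c * math.log(c / n) for c in counts.values())


def regret(seq, alphabet, beta=1.0):
    """Per-sequence regret (in nats) of the add-beta predictor relative to
    the maximum-likelihood memoryless distribution."""
    return ml_log_prob(seq) - add_constant_log_prob(seq, alphabet, beta)


if __name__ == "__main__":
    alphabet = [0, 1]
    seq = [0, 1, 1, 0, 1, 1, 1, 0, 1, 1] * 10  # illustrative length-100 binary sequence
    print("add-1 (Laplace) regret, nats:", regret(seq, alphabet, beta=1.0))
    print("add-1/2 (KT) regret, nats:  ", regret(seq, alphabet, beta=0.5))
```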
PDF: Batch Universal Prediction.pdf