
A Cross Entropy Interpretation of Rényi Entropy for $\alpha$-leakage

Authors:
Ni Ding, Mohammad Amin Zarrabian, Parastoo Sadeghi
Keywords:
Computer Science, Information Theory (cs.IT)
journal:
--
date:
2024-01-26
Abstract
This paper proposes an $\alpha$-leakage measure for $\alpha\in[0,\infty)$ via a cross entropy interpretation of Rényi entropy. While Rényi entropy was originally defined as an $f$-mean for $f(t) = \exp((1-\alpha)t)$, we reveal that it is also an $\tilde{f}$-mean cross entropy measure for $\tilde{f}(t) = \exp(\frac{1-\alpha}{\alpha}t)$. Minimizing this Rényi cross entropy recovers Rényi entropy, and the prior and posterior uncertainty measures are defined accordingly, corresponding to the adversary's knowledge gain about the sensitive attribute before and after the data release, respectively. The $\alpha$-leakage is proposed as the difference between the $\tilde{f}$-mean prior and posterior uncertainty measures, which is exactly the Arimoto mutual information. This not only extends the existing $\alpha$-leakage from $\alpha \in [1,\infty)$ to the full Rényi order range $\alpha \in [0,\infty)$ in a well-founded way, with $\alpha=0$ referring to nonstochastic leakage, but also reveals that the existing maximal leakage is an $\tilde{f}$-mean of an elementary $\alpha$-leakage for all $\alpha \in [0,\infty)$, which generalizes the existing pointwise maximal leakage.
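The claim that minimizing the $\tilde{f}$-mean cross entropy recovers Rényi entropy can be checked numerically. The sketch below is one plausible reading of the abstract, not the paper's implementation: with $\tilde{f}(t) = \exp(\frac{1-\alpha}{\alpha}t)$, the $\tilde{f}$-mean cross entropy of a guess $q$ against the true distribution $p$ is $\frac{\alpha}{1-\alpha}\log\sum_x p(x)\,q(x)^{(\alpha-1)/\alpha}$, which is minimized by the $\alpha$-tilted (escort) distribution $q^*(x) \propto p(x)^\alpha$, with minimum value $H_\alpha(p)$. All function names here are hypothetical.

```python
import numpy as np

def renyi_entropy(p, a):
    """Renyi entropy H_a(p) = log(sum_x p(x)^a) / (1 - a), natural log."""
    p = np.asarray(p, dtype=float)
    if np.isclose(a, 1.0):  # Shannon limit as a -> 1
        return -np.sum(p * np.log(p))
    return np.log(np.sum(p ** a)) / (1.0 - a)

def renyi_cross_entropy(p, q, a):
    """f~-mean cross entropy for f~(t) = exp(((1-a)/a) t), i.e.
    (a / (1-a)) * log(sum_x p(x) * q(x)^((a-1)/a)).
    This formula is inferred from the abstract's f~-mean statement."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return (a / (1.0 - a)) * np.log(np.sum(p * q ** ((a - 1.0) / a)))

def escort(p, a):
    """a-tilted (escort) distribution: p(x)^a / sum_x p(x)^a."""
    p = np.asarray(p, dtype=float)
    return p ** a / np.sum(p ** a)

p = np.array([0.7, 0.3])
for a in (0.5, 2.0):
    # At the escort distribution, the cross entropy equals H_a(p) ...
    assert np.isclose(renyi_cross_entropy(p, escort(p, a), a),
                      renyi_entropy(p, a))
    # ... and any other guess q (here q = p itself) does no better.
    assert renyi_cross_entropy(p, p, a) >= renyi_entropy(p, a)
```

For instance, with $p = (0.7, 0.3)$ and $\alpha = 2$, the minimum is $-\log(0.7^2 + 0.3^2) \approx 0.545$ nats, achieved at the escort distribution, while guessing $q = p$ gives a strictly larger cross entropy.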