Nearest-Neighbours Estimators for Conditional Mutual Information

Authors:

Jake Witter, Conor Houghton

Keywords:

Computer Science, Information Theory (cs.IT)

Journal:

--

date:

2024-03-01

Abstract

The conditional mutual information quantifies the conditional dependence of two random variables. It has numerous applications; it forms, for example, part of the definition of transfer entropy, a common measure of the causal relationship between time series. It does, however, require a lot of data to estimate accurately and suffers from the curse of dimensionality, limiting its application in machine learning and data science. The Kozachenko-Leonenko approach can address this problem: it makes it possible to define a nearest-neighbour estimator that depends only on the distances between data points, not on the dimension of the data. Furthermore, the bias of this estimator can be calculated analytically. Here the estimator is described and tested on simulated data.
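
As a concrete illustration of the nearest-neighbour idea in the abstract, the sketch below implements the standard Frenzel-Pompe (KSG-style) kNN estimator of I(X; Y | Z), not the specific estimator developed in this paper: for each sample, the max-norm distance to its k-th nearest neighbour in the joint (x, y, z) space sets a radius, and neighbour counts within that radius in the (x, z), (y, z), and z subspaces enter a digamma formula. Because only pairwise distances and counts appear, nothing in the formula grows with the dimension of the data. The function name cmi_knn and all parameter choices are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma

def cmi_knn(x, y, z, k=4):
    """Frenzel-Pompe kNN estimate of I(X; Y | Z) in nats.

    x, y, z : arrays of shape (n, d_x), (n, d_y), (n, d_z).
    k       : number of nearest neighbours in the joint space.
    """
    n = len(x)
    xyz = np.hstack([x, y, z])
    # Max-norm distance from each point to its k-th nearest neighbour in
    # the joint space (k + 1 because each point is its own 0-th neighbour).
    d = cKDTree(xyz).query(xyz, k=k + 1, p=np.inf)[0][:, -1]

    def n_within(w):
        """Count neighbours strictly inside radius d[i] in subspace w,
        excluding the point itself."""
        tree = cKDTree(w)
        return np.array([
            len(tree.query_ball_point(w[i], np.nextafter(d[i], 0), p=np.inf)) - 1
            for i in range(n)
        ])

    n_xz = n_within(np.hstack([x, z]))
    n_yz = n_within(np.hstack([y, z]))
    n_z = n_within(z)
    return digamma(k) - np.mean(
        digamma(n_xz + 1) + digamma(n_yz + 1) - digamma(n_z + 1)
    )

# Sanity check on simulated data: X and Y are conditionally
# independent given Z, so the estimate should be close to zero.
rng = np.random.default_rng(0)
zs = rng.normal(size=(2000, 1))
xs = zs + rng.normal(size=(2000, 1))
ys = zs + rng.normal(size=(2000, 1))
print(cmi_knn(xs, ys, zs))
```

The max-norm is the conventional choice here because it makes the radius in the joint space directly comparable to distances in each marginal subspace; the strict inequality (via np.nextafter) excludes the k-th neighbour itself from the counts, matching the usual form of the estimator.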