In information theory, one major goal is to find useful functions that summarize the amount of information contained in the interaction of several random variables. Specifically, one can ask how the classical Shannon entropy, mutual information, and higher interaction information relate to each other. This is answered by Hu's theorem, which is widely known in the form of information diagrams: it relates shapes in a Venn diagram to information functions, thus establishing a bridge from set theory to information theory. In this work, we view random variables together with the joint operation as a monoid that acts by conditioning on information functions, and entropy as a function satisfying the chain rule of information. This abstract viewpoint allows us to prove a generalization of Hu's theorem, which applies to Shannon and Tsallis entropy, (Tsallis) Kullback-Leibler divergence, cross-entropy, Kolmogorov complexity, submodular information functions, and the generalization error in machine learning. For Chaitin's version of Kolmogorov complexity, our result implies that the interaction complexities of all degrees are, in expectation, close to Shannon interaction information. For well-behaved probability distributions on increasing sequence lengths, the per-bit expected interaction complexity and interaction information therefore asymptotically coincide, establishing a strong bridge between algorithmic and classical information theory.
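Concretely, Hu's theorem identifies regions of a Venn diagram with alternating (inclusion-exclusion) sums of joint entropies. The following minimal sketch illustrates this for the classical XOR example; the distribution, helper names, and the sign convention for the triple interaction term are illustrative choices, not taken from the paper:

```python
from itertools import product
from math import log2

def H(p):
    """Shannon entropy (in bits) of a dict mapping outcomes to probabilities."""
    return -sum(q * log2(q) for q in p.values() if q > 0)

def marginal(joint, idxs):
    """Marginal distribution of the given coordinate indices."""
    m = {}
    for outcome, q in joint.items():
        key = tuple(outcome[i] for i in idxs)
        m[key] = m.get(key, 0.0) + q
    return m

# X, Y independent fair bits; Z = X XOR Y.
joint = {(x, y, x ^ y): 0.25 for x, y in product((0, 1), repeat=2)}

# Inclusion-exclusion over the Venn diagram gives the triple interaction term:
# I(X;Y;Z) = H(X)+H(Y)+H(Z) - H(X,Y) - H(X,Z) - H(Y,Z) + H(X,Y,Z)
I3 = (H(marginal(joint, (0,))) + H(marginal(joint, (1,))) + H(marginal(joint, (2,)))
      - H(marginal(joint, (0, 1))) - H(marginal(joint, (0, 2))) - H(marginal(joint, (1, 2)))
      + H(joint))
print(I3)  # -1.0: the XOR triple has negative interaction information
```

The negative value illustrates why interaction information, unlike entropy and mutual information, cannot be read as the "size" of a genuine set.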
This paper makes mathematically precise the idea that conditional probabilities are analogous to path liftings in geometry. The idea of lifting is modelled in terms of the category-theoretic concept of a lens, which can be interpreted as a consistent choice of arrow liftings. The category we study is that of probability measures over a given standard Borel space, with morphisms given by couplings, or transport plans. The geometric picture becomes even more apparent once we equip the arrows of the category with weights, which one can interpret as "lengths" or "costs", forming a so-called weighted category, a structure that unifies several concepts of category theory and metric geometry. Indeed, we show that the weighted version of a lens is tightly connected to the notion of a submetry in geometry. Every weighted category gives rise to a pseudo-quasimetric space via optimization over the arrows. In particular, Wasserstein spaces can be obtained from the weighted categories of probability measures and their couplings, with the weight of a coupling given by its cost. In this case, conditionals allow one to form weighted lenses, which one can interpret as "lifting transport plans while preserving their cost".
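The optimization over arrows can be made concrete in the simplest discrete case: the Wasserstein-1 distance between two measures is the infimum of the transport cost over all couplings with the prescribed marginals. The sketch below brute-forces this optimization for two two-point measures on the real line; the specific measures and the parametrization of the feasible couplings are illustrative choices, not from the paper:

```python
# mu puts mass 1/2 on the points 0 and 1; nu puts mass 1/4 on 0 and 3/4 on 2.
mu = [(0.0, 0.5), (1.0, 0.5)]
nu = [(0.0, 0.25), (2.0, 0.75)]

# A coupling (transport plan) is a matrix pi[i][j] whose row sums are mu's
# weights and whose column sums are nu's weights.  For these 2x2 marginals,
# the feasible couplings form a segment parametrized by t = pi[0][0] in [0, 0.25].
def transport_cost(t):
    pi = [[t, 0.5 - t], [0.25 - t, 0.25 + t]]
    return sum(pi[i][j] * abs(mu[i][0] - nu[j][0])
               for i in range(2) for j in range(2))

# Optimize over the arrows: minimize the cost over a fine grid of couplings.
w1 = min(transport_cost(k / 4000.0) for k in range(1001))
print(round(w1, 6))  # 1.0: the Wasserstein-1 distance between mu and nu
```

Here each coupling is an arrow of the weighted category, its cost is the weight, and the minimum over arrows recovers the distance in the resulting (pseudo-quasi)metric space.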