A Short Introduction to Kolmogorov Complexity
2010
Abstract
This is a short introduction to Kolmogorov complexity and information theory. The interested reader is referred to the literature, especially the textbooks [CT91] and [LV97], which cover the fields of information theory and Kolmogorov complexity in depth and with all the necessary rigor. They are easy to read and require only a minimum of prior knowledge.
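For orientation, the field's central definition can be stated in one line (standard notation, not drawn from the abstract above): the Kolmogorov complexity of a string $s$ relative to a universal machine $U$ is the length of a shortest program that makes $U$ print $s$,

$$K_U(s) = \min\{\,|p| : U(p) = s\,\}.$$

By the invariance theorem, replacing $U$ with another universal machine changes $K_U(s)$ by at most an additive constant independent of $s$, which is why the subscript is usually dropped.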
Related papers
International Journal of Engineering Sciences & Research Technology, 2012
This paper describes various application issues of Kolmogorov complexity. Information assurance, network management, and active networks are areas where Kolmogorov complexity is applied. Our main focus is to show its importance in various domains, including computer virus detection.
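Since $K$ itself is uncomputable, applications of this kind typically substitute a real compressor for it. Below is a minimal sketch of one standard such proxy, the normalized compression distance (NCD) of Cilibrasi and Vitányi; the choice of zlib and the toy inputs are illustrative assumptions, not this paper's specific method:

```python
import zlib

def c(data: bytes) -> int:
    """Compressed length in bytes: a computable upper-bound proxy for K(data)."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: near 0 for similar inputs, near 1 for unrelated ones."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

# Toy demonstration (inputs are illustrative): a mutated copy vs. unrelated bytes.
a = b"ABCD" * 200
b = b"ABCD" * 190 + b"WXYZ" * 10
r = bytes(range(256)) * 4
print(ncd(a, b))  # small: the two strings share most of their structure
print(ncd(a, r))  # near 1: almost no shared structure
```

In a virus-detection setting, the same idea can be used to cluster files by shared structure: a file whose concatenation with a known malware sample compresses well is likely related to it.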
This paper presents a proposal for the application of Kolmogorov complexity to the characterization of systems and processes, and to the evaluation of computational models. The methodology developed represents a theoretical tool for solving problems from systems science. Two applications of the methodology, both developed by the authors, are presented to illustrate the proposal: the first is related to the software development process, the second to computer animation models. Finally, a third application of the method is briefly introduced, aimed at characterizing dynamical systems with chaotic behavior, which demonstrates the potential of the methodology.
2019
Romashchenko and Zimand~\cite{rom-zim:c:mutualinfo} have shown that if we partition the set of pairs $(x,y)$ of $n$-bit strings into combinatorial rectangles, then $I(x:y) \geq I(x:y \mid t(x,y)) - O(\log n)$, where $I$ denotes mutual information in the Kolmogorov complexity sense, and $t(x,y)$ is the rectangle containing $(x,y)$. We observe that this inequality can be extended to coverings with rectangles which may overlap. The new inequality essentially states that in case of a covering with combinatorial rectangles, $I(x:y) \geq I(x:y \mid t(x,y)) - \log \rho - O(\log n)$, where $t(x,y)$ is any rectangle containing $(x,y)$ and $\rho$ is the thickness of the covering, which is the maximum number of rectangles that overlap. We discuss applications to communication complexity of protocols that are nondeterministic, or randomized, or Arthur-Merlin, and also to the information complexity of interactive protocols.
Lecture Notes in Computer Science, 2009
The question of how and why mathematical probability theory can be applied to the "real world" has been debated for centuries. We try to survey the role of algorithmic information theory (Kolmogorov complexity) in this debate.
arXiv preprint cs/0410002
We compare the elementary theories of Shannon information and Kolmogorov complexity, the extent to which they have a common purpose, and where they are fundamentally different. We discuss and relate the basic notions of both theories: Shannon entropy versus Kolmogorov complexity, the relation of both to universal coding, Shannon mutual information versus Kolmogorov ('algorithmic') mutual information, probabilistic sufficient statistic versus algorithmic sufficient statistic (related to lossy compression in the Shannon theory versus meaningful information in the Kolmogorov theory), and rate distortion theory versus Kolmogorov's structure function. Part of the material has appeared in print before, scattered through various publications, but this is the first comprehensive systematic comparison. The last mentioned relations are new.
Artificial life, 2015
In the past decades many definitions of complexity have been proposed. Most of these definitions are based either on Shannon's information theory or on Kolmogorov complexity; these two are often compared, but very few studies integrate the two ideas. In this article we introduce a new measure of complexity that builds on both of these theories. As a demonstration of the concept, the technique is applied to elementary cellular automata and simulations of the self-organization of porphyrin molecules.
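As a concrete rendering of that idea, the sketch below computes both an empirical Shannon entropy and a compression-based stand-in for Kolmogorov complexity on rows of elementary cellular automata; the rules chosen and the two simple measures are our own illustration, not the article's integrated complexity measure:

```python
import math
import zlib

def eca_step(row, rule):
    """One step of an elementary cellular automaton with periodic boundary."""
    n = len(row)
    return [(rule >> (row[(i - 1) % n] * 4 + row[i] * 2 + row[(i + 1) % n])) & 1
            for i in range(n)]

def shannon_entropy(bits):
    """Empirical entropy (bits/symbol) of the 0/1 frequencies in a row."""
    p = sum(bits) / len(bits)
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def k_proxy(bits):
    """Compressed size in bytes: a computable upper bound standing in for K."""
    return len(zlib.compress(bytes(bits), 9))

row = [0] * 128
row[64] = 1
for rule in (30, 110, 250):  # chaotic, complex, and simple periodic behavior
    r = row[:]
    for _ in range(200):
        r = eca_step(r, rule)
    print(rule, round(shannon_entropy(r), 3), k_proxy(r))
```

Rule 30 typically scores high on both measures, rule 250 low on both, while rule 110 illustrates why the two measures are worth combining: structured rows can be entropic yet compressible.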
2020
There is a parallelism between Shannon information theory and algorithmic information theory. In particular, the same linear inequalities are true for Shannon entropies of tuples of random variables and Kolmogorov complexities of tuples of strings (Hammer et al., 1997), as well as for sizes of subgroups and projections of sets (Chan, Yeung, Romashchenko, Shen, Vereshchagin, 1998--2002). This parallelism started with the Kolmogorov-Levin formula (1968) for the complexity of pairs of strings with logarithmic precision. Longpre (1986) proved a version of this formula for space-bounded complexities. In this paper we prove an improved version of Longpre's result with a tighter space bound, using Sipser's trick (1980). Then, using this space bound, we show that every linear inequality that is true for complexities or entropies, is also true for space-bounded Kolmogorov complexities with a polynomial space overhead.
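For reference, the Kolmogorov-Levin formula mentioned above is the chain rule for the complexity of pairs of $n$-bit strings, valid with logarithmic precision:

$$C(x, y) = C(x) + C(y \mid x) + O(\log n).$$

The space-bounded versions discussed in the paper establish the same equality when the complexities are restricted to bounded workspace, at the cost of a space overhead.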
We introduce algorithmic information theory, also known as the theory of Kolmogorov complexity. We explain the main concepts of this quantitative approach to defining `information'. We discuss the extent to which Kolmogorov's and Shannon's information theory have a common purpose, and where they are fundamentally different. We indicate how recent developments within the theory allow one to formally distinguish between `structural' (meaningful) and `random' information as measured by the Kolmogorov structure function, which leads to a mathematical formalization of Occam's razor in inductive inference. We end by discussing some of the philosophical implications of the theory.
Kolmogorov complexity and Shannon entropy are conceptually different measures. However, for any recursive probability distribution, the expected value of Kolmogorov complexity equals its Shannon entropy, up to a constant. We study whether a similar relationship holds for Rényi and Tsallis entropies of order α, showing that it holds only for α = 1. Regarding a time-bounded analogue of this relationship, we show that a similar result holds for some distributions. We prove that, for the universal time-bounded distribution $m^t(x)$, the Tsallis and Rényi entropies converge if and only if α is greater than 1. We also establish the uniform continuity of these entropies.
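The classical relationship alluded to in the first sentence (see, e.g., [LV97]) can be written as

$$0 \le \sum_x P(x)\,K(x) - H(P) \le K(P) + O(1),$$

for any recursive probability distribution $P$: the expected Kolmogorov complexity exceeds the Shannon entropy $H(P)$ by at most a constant depending only on the complexity of $P$ itself.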
2008
Although information content is invariant up to an additive constant, the range of possible additive constants applicable to programming languages is so large that, in practice, it plays a major role in the actual evaluation of K(s), the Kolmogorov-Chaitin complexity of a string s. Some attempts have been made to arrive at a framework stable enough for a concrete definition of K, independent of any constant under a programming language, by appealing to the naturalness of the language in question. The aim of this paper is to present an approach to overcome the problem by looking at a set of models of computation converging in output probability distribution, such that naturalness can be inferred, thereby providing a framework for a stable definition of K under the set of convergent models of computation.
