Abstract
If a computer is given access to an oracle—the characteristic function of a set whose membership relation may or may not be algorithmically calculable—this may dramatically affect its ability to compress information and to determine structure in strings, which might otherwise appear random. This leads to the basic question, ‘given an oracle A, how many oracles can compress information at most as well as A?’ This question can be formalized using Kolmogorov complexity. We say that B≤_{LK}A if there exists a constant c such that K^{A}(σ)<K^{B}(σ)+c for all strings σ, where K^{X} denotes the prefix-free Kolmogorov complexity relative to oracle X. The formal counterpart to the previous question now is, ‘what is the cardinality of the set of ≤_{LK}-predecessors of A?’ We completely determine the number of oracles that compress at most as well as any given oracle A, by answering a question of Miller (Notre Dame J. Formal Logic, 2010, 50, 381–391), which also appears in Nies (Computability and randomness. Oxford, UK: Oxford University Press, 2009. Problem 8.1.13); the class of ≤_{LK}-predecessors of a set A is countable if and only if Chaitin's halting probability Ω is Martin-Löf random relative to A.
1. Introduction
Kolmogorov complexity is a fundamental notion, which has found applications in topics as diverse as combinatorics, language recognition, information distance, thermodynamics and chaos theory. The basic idea behind this approach to quantifying the degree of randomness of a finite binary string is that a string is simple or non-random if it has a short description relative to its length. Kolmogorov (1965) formalized this idea using the theory of computation. In this context, Turing machines play the role of our idealized computing devices, and we assume that there are Turing machines capable of simulating any calculational process that proceeds in a precisely defined and algorithmic manner. Programs can be identified with binary strings. A string τ is said to be a description of a string σ with respect to a Turing machine M if this machine halts when given program τ and outputs σ. Then the complexity of σ with respect to M (denoted by K_{M}(σ)) is the length of its shortest description with respect to M.
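The definition K_{M}(σ) = min{|τ| : M(τ)=σ} can be illustrated with a toy machine given by a finite lookup table. The table below is invented purely for illustration; an actual machine is a Turing machine, and the complexity of interest is measured against a universal one.

```python
# An invented finite lookup table standing in for a Turing machine M:
# program (binary string) -> output string.
M = {
    "0": "1111",     # a 1-bit description of a 4-bit string
    "10": "0101",
    "110": "1111",   # a longer description of the same string
}

def K(machine, sigma):
    """K_M(sigma): length of a shortest description of sigma, None if none exists."""
    lengths = [len(tau) for tau, out in machine.items() if out == sigma]
    return min(lengths) if lengths else None

print(K(M, "1111"))  # 1, via the program "0"
print(K(M, "0101"))  # 2
```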
When we come to consider randomness for infinite strings, it becomes important to consider machines whose domain satisfies a certain condition; the machine M is called prefix-free if it has prefix-free domain (which means that no program for which the machine halts and gives output is an initial segment of another). It can be shown that there exists an optimal prefix-free machine U, i.e. a machine that gives optimal complexity for all strings, up to a certain constant number of bits. This means that for each prefix-free machine M there exists a constant c such that K_{U}(σ)<K_{M}(σ)+c for all finite strings σ. Hence, the choice of the underlying optimal machine does not change the complexity distribution significantly and the theory of prefix-free complexity can be developed without loss of generality, based on a fixed underlying optimal prefix-free machine U.
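The prefix-free condition on a (finite) domain is easy to check mechanically; a minimal sketch follows. Sorting makes any prefix adjacent, up to other extensions of that same prefix, so checking neighbouring pairs suffices.

```python
def is_prefix_free(domain):
    """True iff no string in domain is a proper prefix of another.  After
    sorting, any string lying between a prefix and one of its extensions must
    itself extend that prefix, so checking adjacent pairs is enough."""
    d = sorted(set(domain))
    return all(not d[i + 1].startswith(d[i]) for i in range(len(d) - 1))

print(is_prefix_free(["00", "01", "1"]))  # True: a maximal prefix-free code
print(is_prefix_free(["0", "01", "11"]))  # False: "0" is a prefix of "01"
```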
In order to define randomness for infinite sequences, we consider the complexity of all finite initial segments. A finite string σ is said to be c-incompressible if K(σ)≥|σ|−c, where K=K_{U}. Chaitin and Levin defined an infinite binary sequence X to be random if there exists some constant c such that all of its initial segments are c-incompressible.^{1} This definition of randomness for infinite sequences is then independent of the choice of underlying optimal prefix-free machine, and coincides with other definitions of randomness given in terms of computable betting strategies and also the definition given by Martin-Löf (1966) (a result of Schnorr, see Chaitin (1975)). Strings that are random in this sense are called Martin-Löf random. The coincidence of the randomness notions resulting from these different approaches may be seen as evidence of a robust and natural theory.
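That most strings are incompressible follows from simple counting, whichever machine is used: there are only 2^{ℓ−c}−1 programs of length less than ℓ−c, so at most that many of the 2^{ℓ} strings of length ℓ can fail to be c-incompressible. A quick sketch of this bound (the function name is ours):

```python
def max_compressible_fraction(l, c):
    """Upper bound on the fraction of strings of length l having some
    description of length < l - c: there are only 2^(l-c) - 1 such programs."""
    return (2 ** (l - c) - 1) / 2 ** l

# fewer than one string in 2^c can be c-compressible: the value is just below 2**-10
print(max_compressible_fraction(20, 10))
```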
If we allow the underlying optimal prefix-free machine access to an oracle A, the resulting complexity K^{A} will often be reduced. Thus, the use of external information often allows for better compression of strings and can be used in order to determine the structure in sequences that would otherwise appear random. The following basic question then arises:
Informal question. Given an oracle A, how many oracles can compress strings at most as well as A?
Formally, an oracle X compresses strings at most as well as A if there exists some constant c such that K^{A}(σ)≤K^{X}(σ)+c for all strings σ. This relation was introduced by Nies (2005) and is denoted X≤_{LK}A. A variant of the LR-reducibility called the vL-reducibility was also introduced and studied by Miller & Yu (2008). The formal counterpart to the informal question above now becomes:

Question 1.1. Given an oracle A, what is the cardinality of the class {X | ∃c ∀σ (K^{A}(σ)≤K^{X}(σ)+c)} of ≤_{LK}-predecessors of A?
It is also natural to ask whether the ability of an oracle to compress random strings is essentially the same as its ability to compress strings in general. In the same paper in which he introduced the ≤_{LK} relation, Nies also discussed a simple variation: X≤_{LR}A if all sets which are Martin-Löf random relative to A are also Martin-Löf random relative to X. In Kjos-Hanssen et al. (in press) it was shown that ≤_{LR} is identical to ≤_{LK}, and so our solution to question 1.1 also gives the solution to the corresponding question for the ≤_{LR} relation.
A short history of the literature surrounding question 1.1 can be found in Barmpalias (2010). The special case when A=∅ was question 4.4 in Ambos-Spies & Kučera (2000) (stated in terms of ≤_{LR}). The main motivation for asking this question at the time was the then recent discovery that there are non-computable oracles X such that K(σ)≤K^{X}(σ)+c for a constant c and all strings σ. Such sets X (identifying sets, their characteristic functions and infinite binary strings) are of no use in the task of compressing information and are known as low for K (see Nies 2009, §5.1). Nies (2005) answered this question by showing that if A is computable then the class of question (1.1) is a subset of the class of Δ^{0}_{2} sets, and hence is countable (a set is Δ^{0}_{2} iff it is computable in Turing's halting problem—for an introduction to the effective hierarchies, we refer the reader to the excellent book of Hinman (1978)). On the other hand, in Barmpalias et al. (2008) it was shown that this class is uncountable if A is the halting problem. Miller (2010) used a variation of the notion of low for K sets to exhibit a large class of oracles A for which the class of question (1.1) is countable. He called an oracle A weakly low for K if K(σ)≤K^{A}(σ)+c for a constant c and infinitely many strings σ, and then showed that if A is weakly low for K then the class of question (1.1) is countable. He also gave the following characterization of the class of weakly low for K sets in terms of the halting probability Ω: a set A is weakly low for K if and only if Ω is Martin-Löf random relative to A. The sets A with the latter property are known as low for Ω. These results prompted the following question.

Question 1.2. If a set A is not low for Ω, is the class of ≤_{LK}-predecessors of A necessarily uncountable?

In this paper, we give an affirmative answer, thus providing a characterization of the sets with uncountably many ≤_{LK}-predecessors—a set has uncountably many (and so continuum many) predecessors iff it is not low for Ω.
Theorem 1.1
Let A∈2^{ω}. Then the following are equivalent.
(i) For each constant c there are at most finitely many n such that K(n)≤K^{A}(n)+c.
(ii) There are uncountably many X∈2^{ω} for which there exists c∈ω such that K^{A}(n)≤K^{X}(n)+c for all n∈ω.
Hence a set is weakly low for K iff the set of its ≤_{LK}-predecessors is countable.
Notice that the first clause of theorem 1.1 (under a standard identification of strings with numbers) means that A is not weakly low for K (or equivalently, A is not low for Ω). This theorem unifies a number of older results. For example, in Barmpalias (2010), the first author showed the following result (1.3). A class is perfect if it does not contain any isolated points according to the Cantor topology. Since every weakly low for K Δ^{0}_{2} set is already low for K (see Nies 2009, exercise 8.1.11), the first part of result (1.3) can be seen as a special case of theorem 1.1. We note, however, that the latter (or its proof) does not imply the second clause of result (1.3). In retrospect, result (1.3) from Barmpalias (2010) can be seen as an ‘effectivization’ of theorem 1.1, in the same way that the construction of a maximal set is an ‘effectivization’ of the construction of a cohesive set.
The advantage of the effective nature of result (1.3) (the fact that we obtain an effectively closed uncountable set) is that it lends itself to the application of basis theorems for Π^{0}_{1} classes. For example, the low for Ω basis theorem (from Reimann & Slaman (submitted) and independently Downey et al. (2005)) says that every non-empty Π^{0}_{1} class contains a low for Ω path. As was demonstrated in Baartse & Barmpalias (submitted), the proof of result (1.3) can be augmented so as to establish the following result (1.4).
Another result from Baartse & Barmpalias (submitted) is the following generalization of the low for Ω basis theorem, result (1.5). This implies that every non-empty Π^{0}_{1} class without any low for K members contains uncountably many low for Ω paths. Indeed, in that case, the class that is given by result (1.5) cannot have isolated paths, since these would be Δ^{0}_{2} and so low for K (given that they are also low for Ω). We can now use these observations to deduce the following fact about the LK-degrees, the degree structure that is induced by the pre-order ≤_{LK}. Notice that A≡_{LK}B (denoting A≤_{LK}B and B≤_{LK}A) informally means that A and B have the same strength with respect to compressing strings. An LK-degree is Δ^{0}_{2} if it contains a Δ^{0}_{2} set.
Corollary 1.2
Every non-zero Δ^{0}_{2} LK-degree bounds uncountably many LK-degrees with countably many predecessors.
The proof is a straightforward combination of results (1.4) and (1.5) and the result from Miller (2010) that if A is low for Ω then the class {X|∃c ∀σ K^{A}(σ)≤K^{X}(σ)+c} is countable.
Given that there have been a number of previous attempts by different people to answer question 1.2, it seems appropriate that we outline the ideas behind the proof of theorem 1.1, especially the new ingredient that made it possible. We do this in §2, as well as providing some preliminaries and notation for the proof that is given in §3.
2. About the proof of theorem 1.1
(a) Preliminaries
Let U be the optimal prefix-free oracle machine that underlies the prefix-free complexity K^{X} relative to oracles X; hence K^{X}(σ)=min{|ρ| : U^{X}(ρ)=σ} for all sets X. This machine is optimal in the sense that given any other prefix-free oracle machine M, there is a constant c such that K^{X}_{U}(σ)<K^{X}_{M}(σ)+c for all strings σ and oracles X. We let ⊆, ⊂ denote the prefix or the subset relation (with equality or not) depending on the context. Also, 2^{ω} denotes the set of all infinite binary sequences (which we identify with subsets of ω) and 2^{<ω} denotes the set of all finite binary sequences. The oracle machine U can be seen as a computably enumerable (c.e.) set of triples 〈τ,ρ,σ〉, which indicate that U with τ written on the oracle tape, and with input program ρ, halts and produces σ. We also write U^{τ}(ρ)=σ in order to denote that this relation holds. The set U induces a partial map from 2^{ω}×2^{<ω} to 2^{<ω}. If X is a set, we let U^{X}={〈ρ,σ〉|∃τ⊂X, 〈τ,ρ,σ〉∈U} and for a string η we let U^{η}={〈ρ,σ〉|∃τ⊆η, 〈τ,ρ,σ〉∈U}. The fact that U^{X} is prefix-free for all X can be expressed by the following condition:

if 〈τ_{0},ρ_{0},σ_{0}〉 and 〈τ_{1},ρ_{1},σ_{1}〉 are in U, τ_{0}⊆τ_{1} and ρ_{0}≠ρ_{1}, then ρ_{0}|ρ_{1},  (2.1)

where ρ_{0}|ρ_{1} denotes the incomparability of the two strings with respect to the prefix relation.
It is easy to see that, given U as a c.e. set of triples, we can effectively produce a U′ and a computable enumeration of U′, which satisfies the property that any triple 〈τ,ρ,σ〉 appearing in U′ at stage s has |τ|=s, and which induces the same partial map from 2^{ω}×2^{<ω} to 2^{<ω}. Therefore, without loss of generality, we can assume in what follows that U is computably enumerated so as to satisfy this latter condition. In this way, we ensure that for each string η the set U^{η} is finite, and that the map η↦U^{η} is computable.
The weight of a prefix-free set S of strings, denoted wgt(S), is defined to be the sum ∑_{ρ∈S}2^{−|ρ|}. The weight of a prefix-free machine M^{X} is defined to be the weight of its domain and is denoted wgt(M^{X}). Without loss of generality, we assume that, for all sets X, wgt(U^{X})<2^{−4} and that all strings in the domain of U^{X} begin with 1.
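For a concrete reading of these weights, here is a minimal sketch computing wgt(S) exactly; by the Kraft inequality, a prefix-free set always has weight at most 1.

```python
from fractions import Fraction

def weight(S):
    """wgt(S): the sum over rho in S of 2^(-|rho|), computed exactly."""
    return sum(Fraction(1, 2 ** len(rho)) for rho in set(S))

print(weight(["00", "01", "1"]))  # 1: a maximal prefix-free set
print(weight(["1", "01"]))        # 3/4
```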
Prefix-free machines are most often built in terms of request sets. A request set L is a set of tuples 〈ρ,ℓ〉 where ρ is a string and ℓ is a positive integer. A ‘request’ 〈ρ,ℓ〉 represents the intention of describing ρ with a string of length ℓ. We define the weight of the request 〈ρ,ℓ〉 to be 2^{−ℓ}. We say that L is a bounded request set if the sum of the weights of the requests in L is less than 1. This sum is the weight of the request set L and is denoted by wgt(L). The Kraft-Chaitin theorem (see Downey & Hirschfeldt 2010, §2.6) says that for every bounded request set L which is c.e., there exists a prefix-free machine M such that for each 〈ρ,ℓ〉∈L there exists a string τ of length ℓ such that M(τ)=ρ. The same holds when L is c.e. relative to an oracle X, giving a machine M^{X}. In §3, we freely use this method of construction without explicit reference to the Kraft-Chaitin theorem.
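The content of the Kraft-Chaitin theorem can be sketched as an online codeword allocator. The version below follows a standard proof idea (it is not the construction used in this paper, and the names are ours): maintain a prefix-free pool of ‘free’ strings covering the unused part of the space, and serve each request 〈ρ,ℓ〉 with a codeword of length exactly ℓ.

```python
def kraft_chaitin(requests):
    """Serve requests (rho, l) with codewords of length exactly l so that the
    assigned codewords form a prefix-free set."""
    free = [""]        # initially the whole space is unused
    machine = {}       # codeword -> described string
    for rho, l in requests:
        usable = [t for t in free if len(t) <= l]
        if not usable:
            # every free string is longer than l, so less than 2^-l of the
            # space remains: the request set was not bounded
            raise ValueError("request set is not bounded")
        tau = max(usable, key=len)   # longest usable: least fragmentation
        free.remove(tau)
        code = tau + "0" * (l - len(tau))
        # return the unused halves along the padding path to the pool
        for i in range(len(tau), l):
            free.append(code[:i] + "1")
        machine[code] = rho
    return machine

m = kraft_chaitin([("alpha", 2), ("beta", 1), ("gamma", 3)])
print(m)  # {'00': 'alpha', '1': 'beta', '010': 'gamma'}
```

Picking the longest usable free string keeps the lengths in the free pool pairwise distinct, which is what makes the failure case coincide exactly with violation of the Kraft inequality.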
A real number 0≤r<1 is called c.e. if it is the limit of a non-decreasing computable sequence of rational numbers. For each set X, we define Ω^{X}:=wgt(U^{X}). Notice that this definition is compatible with the definition of the halting probability Ω that was discussed above, since Ω=Ω^{∅}. Similarly, we let Ω^{η}:=wgt(U^{η}) for any string η. The map X↦Ω^{X} is called the Ω operator and plays a crucial role in §3. By the conventions we adopted earlier concerning U and its computable enumeration, the map η↦wgt(U^{η}) is computable. Our assumptions about U^{X} also mean that Ω^{X}<2^{−4} for all oracles X. Real numbers α in the unit interval (like various images of the Ω-operator) will occasionally be identified with their binary expansions. For example, α↾t denotes the finite sequence consisting of the first t bits of the binary expansion of the real number α (where a real number has two expansions we take the finite one).
Finally, a tree T is a partial map σ↦T_{σ} from strings to strings which preserves compatibility and incompatibility relations between strings, and which has downward closed domain. For any σ, the image T_{σ} is called a node of the tree. The level of a node T_{σ} is |σ|. An infinite binary sequence is a path through a tree if infinitely many of its initial segments are nodes of the tree. The set of infinite paths through a tree T is denoted by [T]. A tree that is a total function may also be called perfect.
(b) Informal ideas behind the proof
In Miller (2010), it was shown that a set is weakly low for K if and only if it is low for Ω, and that weakly low for K sets have only countably many ≤_{LK}-predecessors. Therefore, it suffices to show that if a set A is not low for Ω then it has uncountably many ≤_{LK}-predecessors. The first construction of an uncountable lower ≤_{LK}-cone was presented in Barmpalias et al. (2008) (in terms of ≤_{LR}). The proof of result (1.3) in Barmpalias (2010) used new ideas in order to implement such a construction below any set that is not low for K. This proof relied entirely, however, on computable approximations. The argument that pointed to the possibility of implementing (a version of) the construction from Barmpalias et al. (2008) below an arbitrary set, which is not low for Ω, was the proof in Miller (2010) that the class of low for Ω sets coincides with the class of weakly low for K sets. In this argument, Miller showed how to use short descriptions of Ω in order to improve the overall compression of programs by any given constant. This is why question (1.2) was asked, sometimes in the form of a conjecture.
Given an oracle A, which is not low for Ω, the basic plan is to use an A-computable construction to build a prefix-free machine M^{A} and an approximation to a perfect tree T such that K_{M^{A}}(σ)≤K^{X}(σ)+c for some constant c, all strings σ and all X∈[T]. Since any perfect tree has continuum many paths, this certainly suffices to give the result. The basic obstacle is also clear—the machine M^{A} has to simulate all machines U^{X} for X∈[T], but we must keep the weight of the domain of M^{A} bounded. In order to achieve this, we wish to ensure that where M^{A} has to simulate the descriptions given by a large number of strings in T (corresponding to a high level in T), the weight of these descriptions is relatively small. Why should we be able to achieve such a goal? For each m and each string ρ, there exists σ⊇ρ such that wgt(U^{τ})−wgt(U^{σ})<2^{−m} for all τ⊇σ (see Barmpalias et al. 2008, §4). If we were armed with an oracle for ∅′ then we could simply find the string σ when required and the construction of T would be relatively simple. Now we do not have an oracle for ∅′ but we still wish to use the fact that A is not low for Ω in order to try to identify strings ρ such that

wgt(U^{τ})−wgt(U^{ρ})<2^{−m} for all τ⊇ρ.  (2.2)
How can we make use of the fact that A is not low for Ω? If A can compress initial segments of Ω, then in fact it can do the same for the initial segments of any c.e. real. Indeed, by Solovay (1975) (see Downey & Hirschfeldt 2010, §§8.1, 8.2) if B is a c.e. real then K^{A}(B↾n)≤K^{A}(Ω↾n)+c for some constant c and all numbers n. So if we approximate X (which is a potential path through T) in such a way that Ω^{X} is a c.e. real, then A will be able to compress initial segments of Ω^{X}. We shall see in a moment how this is useful to us in constructing T. Here, it is important to note that the apparent obstacle to such an approach is that we cannot allow the approximation to X to make any use of the oracle A. If it were to make use of this oracle then we would have no guarantee that Ω^{X} is a c.e. real, and we would need A to be able to provide short descriptions of Ω^{A} rather than Ω, which clearly is not possible. Thus the key new idea in the argument of §3 is to incorporate into the A-computable construction auxiliary procedures which proceed in a computable fashion and do not make any use of the oracle A.
In order to help us define the paths through T, we wish to computably approximate sets X with the property that

Ω^{X} is a c.e. real.  (2.3)
First of all, let us consider a simplified way of approximating sets X of this kind. Then, we shall have to modify this method slightly in order to ensure that our approximation satisfies some further conditions. The following definition implicitly uses the fact that, for any X, Ω^{X} is not rational.
Definition 2.1
Given a finite or infinite sequence X, the Ω-sequence of X is the sequence (n_{i}), where n_{i} is the least number such that the first i bits (in the binary expansion) of Ω^{X} are the same as the first i bits of Ω^{X↾n_{i}}.
By definition 2.1, if (n_{i}) is the Ω-sequence of X then Ω^{X}−Ω^{X↾n_{i}}<2^{−i} for all i. This basic fact will be used many times without explicit mention in what follows.
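Definition 2.1 can be made concrete with a small sketch. Here `omega_of` is a stand-in for the computable map η↦Ω^{η}; the toy function at the end is invented and merely monotone, not an actual Ω operator.

```python
from fractions import Fraction

def bits(q, i):
    """First i bits of the binary expansion of a rational 0 <= q < 1."""
    out = []
    for _ in range(i):
        q *= 2
        b = int(q)
        out.append(b)
        q -= b
    return out

def omega_sequence(omega_of, X, imax):
    """n_i = least n such that the first i bits of omega_of(X[:n]) agree with
    the first i bits of omega_of(X), following definition 2.1."""
    target = omega_of(X)
    seq = []
    for i in range(1, imax + 1):
        n = next(n for n in range(len(X) + 1)
                 if bits(omega_of(X[:n]), i) == bits(target, i))
        seq.append(n)
    return seq

# An invented monotone stand-in for the map eta -> Omega^eta:
toy = lambda eta: Fraction(len(eta), 2 * (len(eta) + 1))
print(omega_sequence(toy, "010101", 3))  # [0, 1, 3]
```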
Now suppose that we wish to approximate X extending τ. At stages s≤|τ| let X_{s}=τ↾s. At stage s+1>|τ|, suppose that we have already defined X_{s}, which is a string of length s, and let (n_{i}[s]) be the Ω-sequence of X_{s}. Check to see if there is some i<s and a string τ′ of length ≤s+1 extending τ such that Ω^{τ′}≥Ω^{X_{s}↾n_{i}[s]}+2^{−i}. If so, then pick the least such i, and for this choice the least such τ′. Define X_{s+1} to be the concatenation of τ′ with s+1−|τ′| zeros. Otherwise, let X_{s+1}=X_{s} * 0.
Since we shall subsequently modify the details of this approximation, we will not yet verify the details precisely. It should be clear, however, that the approximation to X given by this construction converges, that Ω^{X} is a c.e. real, and that if we let (n_{i}) be the Ω-sequence of X then each n_{i}[s] converges to a value n_{i} as s→∞. Moreover, Ω^{ρ}−Ω^{X↾n_{i}}<2^{−i} for all i and all ρ with X↾n_{i}⊆ρ⊂X.
So now suppose that at some point in the process of approximating T, we have defined T_{σ′} for all σ′⊂σ. Imagine that we wish to define T_{σ} to be some initial segment of a set X, which is approximated according to a construction like the one above. We have to decide how long this initial segment should be, i.e. where we should aim to start putting further branching in T. Since A is not low for Ω, for any b we can find ρ and t such that U^{A}(ρ)=Ω^{X}↾t and |ρ|≤t−b. So, as we computably approximate X, we also use the oracle for A to try to search for a string ρ of this kind. When we find ρ that compresses the initial segment of Ω^{X_{s}} of length t, we can temporarily define T_{σ} to be X_{s}↾n_{t}[s]. If U^{A}(ρ) is not an initial segment of Ω^{X} then we will eventually realize this, because Ω^{X} is a c.e. real. In that case, we can change our mind about T_{σ} and then there is no harm done—ρ simply corresponded to an incorrect guess as regards Ω^{X}. If, on the other hand, U^{A}(ρ) is an initial segment of Ω^{X}, then X_{s}↾n_{t}[s] is an initial segment of X, n_{t}[s] has reached its limit n_{t} and Ω^{ρ′}−Ω^{X↾n_{t}}<2^{−t} for all ρ′ with X↾n_{t}⊆ρ′⊂X. Roughly speaking then, since |ρ|≤t−b, there is sufficient room above ρ to simulate the machines corresponding to up to b-many paths extending X↾n_{t}. So it is reasonable to put a branching into T here.
While this provides the basic idea, what we have said so far is not quite correct. In the situation above, when U^{A}(ρ) is an initial segment of Ω^{X}, it will be the case that X_{s}↾n_{t}[s] is an initial segment of X, n_{t}[s] has reached its limit n_{t} and that Ω^{τ}−Ω^{X↾n_{t}}<2^{−t} for all τ with X↾n_{t}⊆τ⊂X, while the measure of the set of strings extending ρ is at least 2^{b−t}. The slight complication is that just as we had to approximate T_{σ}, all the values T_{σ′} for σ′⊃σ will also have to be approximated. As we approximate T we do not know which nodes we shall subsequently have to change our mind about, and thus, in practice, we have to simulate the machines corresponding to all strings that appear to be in T at any stage (not just those corresponding to the nodes in the final version of T). We, therefore, need to approximate X in such a way that we successfully coordinate our need to satisfy condition (2.3), while at the same time limiting the cost incurred by our changing approximation to X and the corresponding T_{σ}. We now formally describe the computable subroutine that defines the appropriate approximation.
(c) The computable subroutine of the construction
Given inputs σ∈2^{<ω} and e∈ω, the following lemma provides an algorithm that produces a computable approximation (X_{s}) converging to an infinite extension X of σ. For each s, X_{s} is a string of length s. When X_{s+1} extends X_{s}, we define c_{K}(s+1)=0, and otherwise we define c_{K}(s+1)=2^{−n}, where n is the least number such that X_{s}(n)↓≠X_{s+1}(n)↓. We define the ‘low for K’ cost that is associated with (X_{s}) to be

∑_{s>0} c_{K}(s),  (2.4)

and we ensure that this cost is at most 2^{−e}. According to the standard ‘cost function method’ of constructing low for K sets (e.g. see Nies 2009, proposition 5.3.34), the set X is low for K. In lemma 2.2, we establish some additional details concerning (X_{s}), which play a crucial role in the proof of theorem 1.1.
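Under the reading c_{K}(s+1)=2^{−n} for the least changed position n (an assumption recovered from the surrounding text), the total cost of a finite sequence of approximations can be computed directly:

```python
from fractions import Fraction

def total_cost(approximations):
    """Sum of c_K(s+1) over the sequence X_0, X_1, ... (|X_s| = s), where
    c_K(s+1) = 2^(-n) for the least position n at which X_{s+1} differs from
    X_s, and 0 if X_{s+1} merely extends X_s."""
    cost = Fraction(0)
    for s in range(len(approximations) - 1):
        a, b = approximations[s], approximations[s + 1]
        changed = [n for n in range(len(a)) if a[n] != b[n]]
        if changed:
            cost += Fraction(1, 2 ** changed[0])
    return cost

# one genuine change, at position 0 of X_2, costing 2^0 = 1:
print(total_cost(["", "0", "00", "110"]))  # 1
```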
Lemma 2.2 (Computable subroutine)
Let a string σ and a number e>0 be given. Then we can find a computable sequence (X_{s}) of strings with the following properties, where (n_{t}[s]) denotes the Ω-sequence of X_{s}.
(1) |X_{s}|=s for each s∈ω and (X_{s}) tends to an infinite extension X of σ with Ω-sequence denoted by (n_{i}).
(2) The low for K cost of (X_{s}), as this is defined in (2.4), is <2^{−e}.
(3) (Ω^{X_{s}}) is a non-decreasing sequence tending to Ω^{X}.
(4) For each i, the sequence n_{i}[s] tends to n_{i} as s→∞.
(5) Ω^{ρ}−Ω^{X↾n_{t}}<2^{e−t} for all t and all ρ with X↾n_{t}⊆ρ⊂X.
(6) For each i, if n_{i}[s]≠n_{i}[s+1] or the first i digits of Ω^{X_{s}} and Ω^{X_{s+1}} differ, then the first i digits (in the binary expansion) of Ω^{X_{s}} lie lexicographically to the left of the first i digits of Ω^{X_{s+1}}.
In fact, there is a computable function h such that properties (1)–(6) hold for X_{s}:=h(σ,e)[s].
Proof.
Given a string σ and a number e we show how to define the computable function h(σ,e)[s]:=X_{s} for all s∈ω. The basic idea is as follows. At stage s+1, we make sure that if k is the least number such that X_{s}(k)↓≠X_{s+1}(k)↓ (should there exist any such) and if t is the greatest number such that n_{t}[s]≤k, then Ω^{X_{s+1}}≥Ω^{X_{s}↾n_{t}[s]}+2^{e−t}. Hence, we only allow this change to our approximation of X at stage s+1 if it adds at least 2^{e−t}−2^{−t} to the current approximation to Ω^{X}. Thus, every time we pay cost c_{K}(s+1)=ϵ, the monotone approximation to Ω^{X} increases by at least (2^{e}−1)⋅ϵ. Since Ω^{X}<1/2^{4} and e>0, this guarantees that the overall cost corresponding to (X_{s}) is less than 2^{−e}.
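The cost bound in the last sentence amounts to a telescoping estimate; a sketch, for e≥1 and using Ω^{X}<2^{−4}:

```latex
\sum_{s>0} c_K(s)
  \;\le\; \frac{1}{2^{e}-1}\sum_{s>0}\bigl(\Omega^{X_{s}}-\Omega^{X_{s-1}}\bigr)
  \;\le\; \frac{\Omega^{X}}{2^{e}-1}
  \;<\; \frac{2^{-4}}{2^{e}-1}
  \;\le\; 2^{-4}\cdot 2^{1-e} \;=\; 2^{-3-e} \;\le\; 2^{-e}.
```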
The precise instructions are as follows. At stages s≤|σ|, let X_{s}=σ↾s. At stage s+1>|σ|, let (n_{i}[s]) be the Ω-sequence of X_{s}. Check to see whether there exists some i<s and a string τ of length ≤s+1 with σ⊂τ such that Ω^{τ}≥Ω^{X_{s}↾n_{i}[s]}+2^{e−i}. If so, pick the least such i, and for this choice the least such τ. Define X_{s+1} to be the concatenation of τ with s+1−|τ| zeros. If not, then let X_{s+1}=X_{s} * 0.
Clearly, the function h(σ,e)[s] is computable. First we show the second property, i.e. that the low for K cost of (X_{s}) (as this is defined in condition (2.4)) is less than 2^{−e}. Suppose that X_{s}(k)≠X_{s+1}(k) and c_{K}(s+1)=2^{−k} for some k∈ω. Then at stage s+1, the construction acted on the basis of some i,τ (see the description of stage s+1 of the construction) such that k>n_{i}[s]. Hence Ω^{X_{s+1}}≥Ω^{X_{s}↾n_{i}[s]}+2^{e−i}, which means that the Ω-image of X_{s+1} is larger than that of X_{s} by at least 2^{−i}(2^{e}−1). On the other hand, the cost c_{K}(s+1) that occurred at stage s+1 is <2^{−i}. Hence every time some cost ϵ is registered, an increase of at least ϵ(2^{e}−1) occurs in the sequence (Ω^{X_{s}}). According to condition (2.4), it follows that the low for K cost of (X_{s}) is less than 2^{−e}.
For property (3), notice that (since e>0) the inequality Ω^{τ}≥Ω^{X_{s}↾n_{i}[s]}+2^{e−i} implies that Ω^{τ}>Ω^{X_{s}}. Hence by the construction, Ω^{X_{s}}≤Ω^{X_{s+1}} for all s. Property (6) follows from property (3). Hence for each i, the sequence of the first i digits of Ω^{X_{s}} can only change finitely many times and n_{i}[s] reaches a limit as s→∞. Since the image of Ω under any oracle is an irrational number, it follows that the final positions of the n_{i}[s] are unbounded, and X_{s} reaches a limit X. Moreover, since (n_{i}[s]) is the Ω-sequence of X_{s}, the limit of n_{i}[s] as s→∞ is n_{i}. This argument demonstrates properties (1) and (4).
For property (5), let s be such that n_{t}[r]=n_{t} and X_{r}↾n_{t}=X↾n_{t} for all r≥s. Then Ω^{ρ}−Ω^{X↾n_{t}}<2^{e−t} for all ρ with X↾n_{t}⊆ρ⊂X, since otherwise n_{t}[r+1]≠n_{t}[r] for the first stage r+1>s at which we find ρ violating this condition. ■
With the function h now defined according to the lemma, there are just two more considerations before we can specify the entire construction precisely. Firstly, when we defined T_{σ} in the discussion above, there was a string ρ associated with T_{σ}, which compressed an initial segment of Ω^{T_{σ}}, and the measure of the set of strings extending ρ was seen to give a certain amount of room for simulating machines with oracle input extending T_{σ}. There is nothing to stop there being multiple strings σ, however, for which ρ is the string associated with T_{σ} in this way. This is easily dealt with using some simple accounting. Corresponding to each σ, we shall have values a_{σ} and b_{σ}, and ρ is chosen so as to compress by a margin which depends upon these values in such a way that the relevant sums work out as they should.
It is also worth clarifying the architecture of the construction, which allows us to run computable subroutines, while the procedure as a whole makes use of the oracle A in order to build our approximation to the required perfect tree. Certainly one may have a computable routine running in parallel with an oracle computation. If the oracle computation makes use of values outputted by the computable routine, this in no way affects the computability of the routine. Further, if the oracle computation in fact decides which computable routines should be run in parallel, and which values produced by those computable routines it wishes to use in its oracle computation, this in no way affects the computability of the routines that it chooses to run. Thus, we define a construction that uses an oracle A, and which refers to values produced by computable subroutines, which themselves do not make any use of the oracle.
3. Proof of theorem 1.1
(a) The machine M
In §3c, the machine M^{A} is defined in terms of a uniformly A-c.e. family of bounded request sets L_{ρ}, indexed by strings ρ in the domain of U^{A} (and the extra 1-bit string 0, see below). This family gives a uniformly A-computable sequence of machines M^{A}_{ρ} such that for each request 〈τ,ℓ〉 in L_{ρ} there exists a string η of length ℓ such that M^{A}_{ρ}(η)=τ. The main machine M^{A} is defined as follows:

M^{A}(ρ * η)=M^{A}_{ρ}(η) for each ρ in the domain of U^{A} (and for ρ=0).  (3.1)

Since each machine M^{A}_{ρ} is prefix-free (and all strings in the domain of U^{A} are incomparable with the 1-bit string 0, according to the conventions of §2a) it follows from definition (3.1) that M^{A} is prefix-free.
(b) Parameters of the construction
Let b_{σ}[0] be a computable sequence of numbers satisfying condition (3.2), where σ ranges over all strings and ∅ denotes the empty sequence. The parameter b_{σ} will be used in the construction to help make sure that there is room for the descriptions that are allocated to T_{σ}, and will be updated upon an ‘injury’ of this node. A second parameter a_{σ} will indicate our belief as regards an upper bound to wgt(U^{ρ}), where ρ runs over all extensions of T_{σ}.
We order the strings first by length and then lexicographically. Define T_{∅}[s]=∅ for all stages s, where ∅ denotes the empty string. This means that in the approximation T[s] to T, the root of the tree will always be the empty string.
Let σ be a non-empty string and let j be its last digit, so that σ=η*j for some string η. Also let (n_{i}[s]) denote the Ω-sequence of h(T_{η}[s]*j,a_{η}[s])[s]. Following a standard convention, the latter expression is simplified to h(T_{η} * j,a_{η})[s]. We say that T_{σ} requires attention at stage s+1 if |σ|>0 and one of the following conditions holds.
(a) T_{σ}[s] is undefined and there exists a string ρ of length <s such that U^{A}(ρ) is defined, U^{A}(ρ) is a prefix of the binary expansion of Ω^{τ} where τ=h(T_{η} * j,a_{η})[s], t=|U^{A}(ρ)|, n_{t}[s]>|T_{η} * j| and |U^{A}(ρ)|−|ρ|>a_{η}[s]+b_{σ}[s].
(b) T_{σ}[s] is defined but the first t digits of the binary expansion of Ω^{h(T_{η} * j,a_{η})[s]} differ from U^{A}(ρ), where t=|U^{A}(ρ)| and ρ is the string associated with T_{σ}.
So a node requires attention either when it is undefined and is ready to be defined (corresponding to case (a)), or else is defined and should be made undefined (corresponding to case (b)).
As discussed previously, each T_{σ} will be associated with a string in the domain of U^{A}. We trivially let T_{∅} be permanently associated with the 1-bit string 0. In order to cover this trivial case (and since the string 0 is incomparable with all strings in the domain of U^{A}, see §2a), we use the special set L_{0} for the enumeration of requests corresponding to definitions of T_{0} and T_{1}, and we let a_{∅}[0]=4. Thus, every definition of some T_{σ} entails an enumeration of requests into L_{ρ}, where ρ is the string associated with T_{η}, and η is the immediate predecessor of σ.
(c) Construction
At stage s+1 let σ be the least string such that T_{σ} requires attention (or if there exists no such then proceed to the next stage). Let j be the last digit of σ so that σ=η*j for some string η. If T_{σ}[s] is defined, let b_{τ}[s+1]=b_{τ}[s]+1 for all τ⊇σ. Also let T_{τ}[s+1], a_{τ} be undefined for all τ⊇σ and disassociate the strings in the domain of U^{A} that were associated with them. Declare that the nodes T_{τ} for τ⊇σ are injured.
Otherwise let (n_{i}[s]) be the Ω-sequence of h(T_{η} * j,a_{η})[s]. Also let ρ be the least string such that, if t denotes |U^{A}(ρ)| and τ denotes h(T_{η} * j,a_{η})[s],
— U^{A}(ρ) is a prefix of the binary expansion of Ω^{τ};
— n_{t}[s]>|T_{η} * j| and |U^{A}(ρ)|−|ρ|>a_{η}[s]+b_{σ}[s].
Put T_{σ}[s+1] equal to τ↾n_{t}[s], let a_{σ}[s+1]=|U^{A}(ρ)|−a_{η}[s] and say that T_{σ}[s+1] is associated with ρ. Also if μ is the string that is associated with T_{η}[s] then for each 〈ν,ξ,υ〉∈U[s] such that T_{η}[s]⊂ν⊆T_{σ}[s+1] enumerate 〈υ,|ξ|−|μ|〉 into L_{μ}.
(d) Verification
The following lemmas establish the required properties of the construction of §3c.
Lemma 3.1
All nodes T_{σ}[s] and parameters a_{σ}[s], b_{σ}[s] reach limit values as s→∞. The function T that maps σ to lim_{s}T_{σ}[s] is a perfect tree.
Proof.
We argue the first sentence in the statement of the lemma by induction on the strings σ. The values T_{∅}[s], a_{∅}[s] and b_{∅}[s] are the same for all stages s. This concludes the base case of the induction.
Suppose that |σ|>0 and s_{0} is a stage such that for all s≥s_{0} and all strings τ less than σ the parameters T_{τ}[s], b_{τ}[s], a_{τ}[s] always equal their final values T_{τ}, b_{τ} and a_{τ}, respectively. In particular, if η is the immediate predecessor of σ (say σ=η*j), the parameters T_{η}[s], b_{η}[s], a_{η}[s] take their final values T_{η}, b_{η}, a_{η} at all stages s≥s_{0}. This immediately means that the same holds for b_{σ}, since this value can only change when T_{σ} is injured (which happens only when T_{η} is redefined). By lemma 2.2, the sequence h(T_{η} * j,a_{η})[s]:=X_{s} converges to an infinite sequence X such that Ω^{X} is a c.e. real. By the same lemma, each term n_{i}[s] of the Ω-sequence of X_{s} reaches a limit n_{i} as s→∞, and since Ω^{X} is irrational these limits are finite for every i. Since A is not low for Ω, there exist infinitely many strings ρ such that U^{A}(ρ) is a prefix of the binary expansion of Ω^{X} and |U^{A}(ρ)|−|ρ|>a_{η}+b_{σ}. Choose the least string υ such that U^{A}(υ) is a prefix of the binary expansion of Ω^{X}, where for t=|U^{A}(υ)| we have n_{t}>|T_{η} * j| and |υ|<t−a_{η}−b_{σ}. Let s_{1}>s_{0} be sufficiently large that, for all r≥s_{1}, n_{t}[r]=n_{t}, X_{r} ↾ n_{t}=X ↾ n_{t} and U^{A}(υ)[r]=U^{A}(υ). Let s_{2}>s_{1} be a stage such that for any string κ less than υ and all stages s≥s_{2} one of the following holds:
— U^{A}(κ)[s] is not a prefix of the binary expansion of Ω^{Xs};
— |κ|≮|U^{A}(κ)[s]|−a_{η}−b_{σ};
— If t′=|U^{A}(κ)[s]| then n_{t′}≤|T_{η} * j|.
Such a stage exists by the minimality of υ and the fact that Ω^{Xs} is a computable non-decreasing sequence of rational numbers that tends to Ω^{X}.
Now if T_{σ} ever became undefined after stage s_{2}, it would require attention (by the choice of υ and s_{2}). No T_{ρ} with ρ less than σ would require attention at such a stage, since these nodes have reached their limit values. Therefore, T_{σ} would receive attention and would be defined based on υ (i.e. defined with associated string υ) at some stage s_{3}>s_{2}. The parameter T_{σ}[s_{3}] would be given the value X_{s_{3}} ↾ n_{t} (for t=|U^{A}(υ)|), a_{σ} would be given the value |U^{A}(υ)|−a_{η} and these values would never subsequently be redefined. Thus T_{σ}, a_{σ} and b_{σ} reach limit values, as required.
Let T_{σ}:=lim_{s}T_{σ}[s] for each string σ. Notice that for each η and stage s the nodes T_{η*0}[s] and T_{η*1}[s] are extensions of T_{η}[s] and are incomparable with each other (unless one of them is undefined). Hence their final values will have the same properties and the function σ↦T_{σ} is a perfect tree. ■
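The two properties invoked at the end of the proof (each child value extends the parent value, and sibling values are incomparable) are exactly what makes σ↦T_{σ} a perfect tree. A small sanity check of these properties on a hypothetical finite fragment of such a map, in Python (the map `Tmap` is made up for illustration):

```python
# Check the perfect-tree properties on a hypothetical finite fragment of
# a map sigma -> T_sigma: children extend the parent value, and sibling
# values are pairwise incomparable (neither is a prefix of the other).

Tmap = {'': '', '0': '00', '1': '01',
        '00': '000', '01': '0010', '10': '0100', '11': '011'}

def is_perfect_fragment(Tmap):
    for eta in Tmap:
        c0, c1 = eta + '0', eta + '1'
        if c0 in Tmap and c1 in Tmap:
            u, v = Tmap[c0], Tmap[c1]
            # children must extend the parent value
            if not (u.startswith(Tmap[eta]) and v.startswith(Tmap[eta])):
                return False
            # siblings must be incomparable
            if u.startswith(v) or v.startswith(u):
                return False
    return True

assert is_perfect_fragment(Tmap)
```

Replacing, say, Tmap['1'] by '001' makes the siblings at the root comparable, and the check fails.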
Lemma 3.2
Let σ=η*j for some η≠∅ and suppose that at stage s+1, we newly define T_{σ}. Then Ω^{Tσ}−Ω^{Tη}<2^{−aη}.
Proof.
Let η=ζ*z, put X_{s}:=h(T_{ζ} * z,a_{ζ})[s] and let n_{i}[s] be the Ω-sequence of X_{s}. At stage s+1, let μ be the string associated with T_{η}, and let t=|U^{A}(μ)|. Then, at stage s+1, since T_{η} does not require attention, we must have that U^{A}(μ) is a prefix of the binary expansion of Ω^{T_{η}}. Now if we define T_{σ}=τ and Ω^{τ}−Ω^{T_{η}}≥2^{−a_{η}}, then, since a_{η}≤t, the first t digits of Ω^{τ} would differ from those of Ω^{T_{η}}. According to the definition of h this would cause us to define X_{s+1} so that τ⊆X_{s+1} and n_{t}[s+1]≠n_{t}[s], a contradiction. ■
Lemma 3.3
For each string μ the weight of L_{μ} is <1. Hence M is a prefix-free machine.
Proof.
Fix a string μ. Enumerations into L_{μ} occur when a node T_{σ} with σ=η*j becomes newly defined and T_{η} has the string μ currently associated with it. Hence every enumeration into L_{μ} during the construction can be allocated to the unique node T_{η} whose immediate successor became defined at the stage where the enumeration occurred. Let L_{μ}(T_{η}) be the set of tuples 〈υ,ℓ〉 in L_{μ} that are assigned to T_{η} in this way. Then L_{μ}=⋃_{η}L_{μ}(T_{η}). By result (3.2) it suffices to show that

wgt(L_{μ}(T_{η}))<2^{−b_{η}[0]+3}.  (3.3)
In the special case where μ is the 1-bit string 0, this weight is 0 unless η is the empty string. In the latter case we have b_{η}[0]=4 by result (3.2), so it suffices to show that wgt(L_{0}(T_{η}))<2^{−1}. The requests that are enumerated into L_{0}(T_{η}) come from various definitions of the nodes T_{0},T_{1} during the construction. Let X_{s}:=h(T_{η} * 0,a_{η})[s] and let X be the limit of this sequence. Notice that T_{η} is the empty sequence and a_{η}[s]=4 for all stages s, so that X_{s}=h(0,4)[s]. Clearly, the weight of the requests that are enumerated into L_{0}(T_{η}) and which are caused by the various definitions of T_{0} is bounded by 2⋅(Ω^{X}+c), where c is the ‘low for K cost’ of the approximation (X_{s})↦X (the factor 2 here comes from the fact that in the last line of the construction we enumerate the request 〈υ,|ξ|−|μ|〉 rather than the request 〈υ,|ξ|〉 into L_{μ}). By lemma 2.2, we have c<2^{−4} and by the conventions established in §3a, we have Ω^{X}<2^{−4}. Hence, the weight of the requests that are enumerated into L_{0}(T_{η}) as we define T_{0} is bounded by 2^{−2}. Similarly the weight that is caused by the various definitions of T_{1} is bounded by 2^{−2}. Hence wgt(L_{0}(T_{η})) is bounded by 2^{−1}. This concludes the proof of result (3.3) in the special case when μ is the 1-bit string 0.
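The dyadic arithmetic in this special case is easy to verify mechanically. The following snippet (variable names are ours) checks the chain of estimates at the stated bounds c<2^{−4} and Ω^{X}<2^{−4}:

```python
# Special-case weight bound for L_0(T_eta), eta the empty string.
# Assumed bounds from the text: the 'low for K cost' c and Omega^X are
# each below 2**-4; we evaluate the chain of estimates at these values.
c_bound = 2 ** -4        # bound on the low-for-K cost of (X_s) -> X
omega_bound = 2 ** -4    # bound on Omega^X (conventions of section 3a)

# Requests caused by definitions of T_0: weight <= 2 * (Omega^X + c).
per_child = 2 * (omega_bound + c_bound)
assert per_child == 2 ** -2

# The same bound holds for T_1, so the total stays within 2**-1.
assert per_child + per_child == 2 ** -1
```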
Now suppose that μ is in the domain of U^{A} and let η be any string. Notice that if η is the empty string then the weight in result (3.3) is 0. So without loss of generality we may assume that η is non-empty, say η=ζ*z for a string ζ and some z∈{0,1}. Notice that each time T_{η} is declared injured, the current value of b_{η} increases by 1. Consider a partition of all stages into maximal intervals where T_{η} remains uninjured (in other words, T_{ζ} remains constantly defined). Let L_{μ}^{I}(T_{η}) denote the requests that are allocated to T_{η} and are enumerated into L_{μ} during one of these intervals I. To establish result (3.3) it suffices to show that

wgt(L_{μ}^{I}(T_{η}))<2^{−b_{η}^{I}+2},  (3.4)

where b_{η}^{I} denotes the value of b_{η} during the interval I; since b_{η} increases by 1 from each interval to the next, summing these bounds over the successive intervals gives a geometric series with total less than 2^{−b_{η}[0]+3}, as result (3.3) requires. Notice that during the stages in I the node T_{η} may be redefined many times, but each time it is redefined during this interval, it will be associated with a different string than at all previous stages during the interval (in fact, lexicographically to the right of all previously associated strings, within the interval of stages). This follows from the current definition of T_{η} in the construction via h, and property (6) of h in lemma 2.2. Thus we may consider a maximal subinterval J, during which T_{η} is always associated with μ, and during which the values a_{η} and T_{η} are fixed (in the following discussion, we let a_{η}, a_{ζ} and T_{η} refer to their fixed values during this interval J).
During the interval J, some enumerations allocated to T_{η} are caused by definitions of T_{η*0} and others are caused by definitions of T_{η*1}. Fix j=0,1. Let X_{s}:=h(T_{η} * j,a_{η})[s] and let X be the limit of (X_{s}). By combining lemma 3.2 with the fact that the low for K cost corresponding to the approximation (X_{s}) is at most 2^{−a_{η}}, we get that the weight that is enumerated into L_{μ}^{I}(T_{η}) as we give the various definitions of T_{η*j} during the stages in J is bounded by 2^{|μ|}⋅2⋅2^{−a_{η}} (the factor 2^{|μ|} comes from the fact that in the last line of the construction we enumerate the request 〈υ,|ξ|−|μ|〉 rather than the request 〈υ,|ξ|〉 into L_{μ}). Now μ was chosen so that |U^{A}(μ)|−|μ|>a_{ζ}+b_{η}. Therefore, since a_{η}=|U^{A}(μ)|−a_{ζ}, we have |μ|<a_{η}−b_{η}, so 2^{|μ|}⋅2⋅2^{−a_{η}}<2⋅2^{−b_{η}}. Since the same argument holds for T_{η*(1−j)}, the overall weight that is enumerated into L_{μ}^{I}(T_{η}) during the stages in J is less than 2^{−b_{η}+2}, as required. ■
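The weight accounting above is again elementary dyadic arithmetic. The following snippet checks the per-interval bound of result (3.4) and its geometric summation, for hypothetical values of a_{η} and b_{η} subject only to the constraint |μ|<a_{η}−b_{η} used in the proof:

```python
# Numerical check of the weight accounting: the per-interval bound of
# result (3.4) and its geometric summation towards result (3.3).
# a_eta and b_eta are hypothetical values; exact rationals avoid any
# floating-point rounding in the geometric sum.
from fractions import Fraction

a_eta, b_eta = 12, 5
mu_len = a_eta - b_eta - 1        # any |mu| < a_eta - b_eta will do

# per child j: weight <= 2**|mu| * 2 * 2**-a_eta < 2 * 2**-b_eta
per_j = Fraction(2 ** mu_len * 2, 2 ** a_eta)
assert per_j < 2 * Fraction(1, 2 ** b_eta)

# both children together stay below 2**(-b_eta + 2), as in result (3.4)
assert 2 * per_j < Fraction(4, 2 ** b_eta)

# b_eta increases by 1 with each injury, so the per-interval bounds sum
# to a geometric series below 2**(-b0 + 3), as needed for result (3.3)
b0 = 4
total = sum(Fraction(4, 2 ** (b0 + k)) for k in range(200))
assert total < Fraction(8, 2 ** b0)
```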
Let T denote the tree σ↦T_{σ}.
Lemma 3.4
Given any path X on the tree T we have K_{M}(σ)≤K^{X}(σ) for all strings σ.
Proof.
Suppose that K^{X}(σ)=n and U^{X}(ν)=σ for some string ν of length n. Then there is τ⊂X such that 〈τ,ν,σ〉∈U. Let η be a string such that T_{η}⊂τ⊆T_{η*j} for some j=0,1. When T_{η*j} was permanently defined at a stage s, the construction enumerated the request 〈σ,|ν|−|μ|〉 into L_{μ}, where μ is the string that is cofinally associated with T_{η}. This request provides M with a description of σ of length |μ|+(|ν|−|μ|)=|ν|. Hence K_{M}(σ)≤K^{X}(σ). ■
Acknowledgements
This work was supported by an LMS collaborative small grant number 4915. The second author was also supported by a Royal Society University Research Fellowship.
Footnotes
1 It is at this point that it becomes important that we restrict to the case of prefix-free machines—if we did not then it can be shown that, according to this definition, there would be no random sequences.
- Received January 12, 2011.
- Accepted April 13, 2011.
- This journal is © 2011 The Royal Society
This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.