Near-optimal conversion of hardness into pseudo-randomness
1999, 40th Annual Symposium on Foundations of Computer Science (Cat. No.99CB37039)
https://doi.org/10.1109/SFFCS.1999.814590
16 pages
Abstract
Various efforts ([?, ?, ?]) have been made in recent years to derandomize probabilistic algorithms using the complexity-theoretic assumption that there exists a problem in E = DTIME(2^{O(n)}) that requires circuits of size s(n) (for some function s). These results are based on the NW-generator [?]. For the strong lower bound s(n) = 2^{Ω(n)}, [?], and later [?], obtain the optimal derandomization P = BPP. However, for weaker lower bound functions s(n), these constructions fall far short of the natural conjecture for optimal derandomization, namely that BPTIME(t) ⊆ DTIME(2^{O(s^{-1}(t))}). The gap in these constructions is due to an inherent limitation on the efficiency of NW-style pseudo-random generators. In this paper we obtain derandomization in almost optimal time from any lower bound s(n). We do this by using the NW-generator in a new, more sophisticated way. We view any failure of the generator as a reduction from the given "hard" function to its restrictions on smaller input sizes. Thus, either the original construction works (almost) optimally, or one of the restricted functions is (almost) as hard as the original. Any such restriction can then be plugged into the NW-generator recursively. This process generates many "candidate" generators: all are (almost) optimal, and at least one is guaranteed to be "good". Then, to approximate the acceptance probability of the given circuit (which is the key to derandomization), we use ideas from [?]: we run a tournament between the "candidate" generators, which yields an accurate estimate. Following Trevisan, we explore information-theoretic analogs of our new construction. Trevisan [?] (and then [?]) used the NW-generator to construct efficient extractors. However, the inherent limitation of the NW-generator mentioned above makes the extra randomness required by that extractor suboptimal (for certain parameters). Applying our construction, we show how to use a weak random source with an optimal amount of extra randomness for the (simpler than extraction) task of estimating the probability of any event (which is given by an oracle).
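The key primitive mentioned above, estimating the acceptance probability of a circuit by enumerating the seeds of a pseudo-random generator, can be sketched as follows. This is only a minimal illustration of that primitive: the toy `generator` and `circuit` below are placeholders, and the paper's actual contribution (the recursive family of candidate generators and the tournament that picks a reliable estimate among them) is not reproduced here.

```python
from itertools import product

def estimate_acceptance(circuit, generator, seed_len):
    """Estimate Pr_r[circuit(r) = 1] by averaging the circuit over all
    outputs of a candidate pseudo-random generator.

    If `generator` really fools `circuit`, this deterministic average is
    close to the true acceptance probability; the paper's construction
    produces many candidate generators and runs a tournament to obtain a
    reliable estimate even when it is unknown which candidate is good.
    """
    seeds = list(product([0, 1], repeat=seed_len))          # 2^seed_len seeds
    accepted = sum(circuit(generator(s)) for s in seeds)    # brute-force enumeration
    return accepted / len(seeds)

# Toy stand-ins (not the paper's generator): a "generator" that pads its
# seed, and a circuit that checks the first output bit.
toy_generator = lambda seed: seed + (0,) * 4
toy_circuit = lambda bits: int(bits[0] == 1)
print(estimate_acceptance(toy_circuit, toy_generator, seed_len=3))  # 0.5
```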
Related papers
2015
We tighten the connections between circuit lower bounds and derandomization for each of the following three types of derandomization:
- general derandomization of promiseBPP (connected to Boolean circuits),
- derandomization of Polynomial Identity Testing (PIT) over fixed finite fields (connected to arithmetic circuit lower bounds over the same field), and
- derandomization of PIT over the integers (connected to arithmetic circuit lower bounds over the integers).
We show how to make these connections uniform equivalences, although at the expense of using somewhat less common versions of complexity classes and for a less studied notion of inclusion. Our main results are as follows:
1. We give the first proof that a non-trivial (nondeterministic subexponential-time) algorithm for PIT over a fixed finite field yields arithmetic circuit lower bounds.
2. We get a similar result for the case of PIT over the integers, strengthening a result of Jansen and Santhanam [JS12] (by removing the n...
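For context, the randomized algorithm that PIT derandomization seeks to replace is the classical Schwartz-Zippel identity test. The sketch below is only an illustration of that randomized test, assuming the polynomial is given as a black-box evaluation oracle over a prime field; the `zero_poly` example is purely illustrative.

```python
import random

def randomized_pit(poly, num_vars, degree, prime, trials=20):
    """Schwartz-Zippel style identity test (the randomized algorithm that
    PIT derandomization aims to replace with a deterministic one).

    `poly` is a black box evaluating an n-variate polynomial of total
    degree <= `degree` over Z_p.  A nonzero polynomial vanishes at a
    uniformly random point with probability at most degree/prime, so after
    `trials` independent points the error is at most (degree/prime)^trials.
    Returns True if the polynomial appears to be identically zero.
    """
    for _ in range(trials):
        point = [random.randrange(prime) for _ in range(num_vars)]
        if poly(point) % prime != 0:
            return False          # nonzero evaluation: definitely not the zero polynomial
    return True                   # probably identically zero

# Example: (x0 + x1)^2 - x0^2 - 2*x0*x1 - x1^2 is identically zero.
zero_poly = lambda v: (v[0] + v[1]) ** 2 - v[0] ** 2 - 2 * v[0] * v[1] - v[1] ** 2
print(randomized_pit(zero_poly, num_vars=2, degree=2, prime=101))  # True
```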
The starting point of this work is the basic question of whether there exists a formal and meaningful way to limit the computational power that a time-bounded randomized Turing Machine can employ on its randomness. We attack this question using a fascinating connection between space- and time-bounded machines given by Cook [4]: a Turing Machine S running in space s with access to an unbounded stack is equivalent to a Turing Machine T running in time 2^{O(s)}. We extend S with access to a read-only tape containing 2^{O(s)} uniform random bits, and a usual error regime: one-sided or two-sided, and bounded or unbounded. We study the effect of placing a bound p on the number of passes S is allowed on its random tape. It follows from Cook's results that:
• If p = 1 (one-way access) and the error is one-sided unbounded, S is equivalent to deterministic T.
• If p = ∞ (unrestricted access), S is equivalent to randomized T (with the same error).
As our first two contributions, we completely resolve the case of unbounded error. We show that we cannot meaningfully interpolate between deterministic and randomized T by increasing p:
• If p = 1 and the error is two-sided unbounded, S is still equivalent to deterministic T.
• If p = 2 and the error is unbounded, S is already equivalent to randomized T (with the same error).
In the bounded-error case, we consider a logarithmic-space Stack Machine S that is allowed p passes over its randomness. Of particular interest is the case p = 2^{(log n)^i}, where n is the input length and i is a positive integer. Intuitively, we show that S performs polynomial-time computation on its input and parallel (preprocessing plus NC^i) computation on its randomness. Formally, we introduce Randomness Compilers. In this model, a polynomial-time Turing Machine gets an input x and outputs a (polynomial-size, bounded fan-in) circuit C_x that takes random inputs. Acceptance of x is determined by the acceptance probability of C_x. We say that the randomness compiler has depth d if C_x has depth d(|x|). As our third contribution, we show that:
• S simulates, and is in turn simulated by, a randomness compiler with depth O((log n)^i) and O((log n)^{i+1}), respectively.
Randomness Compilers are a formal refinement of polynomial-time randomized Turing Machines that might elicit independent interest.
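A minimal sketch of the Randomness Compiler acceptance rule described above. The toy compiler, the circuit representation as a Python function, and the brute-force enumeration of random strings are illustrative stand-ins; the model itself allows any polynomial-time compiler and measures the depth of C_x.

```python
from itertools import product

def accepts(compile_to_circuit, x, threshold=0.5):
    """Randomness-compiler acceptance, as described above: a polynomial-time
    procedure maps the input x to a circuit C_x over random bits, and x is
    accepted according to the acceptance probability of C_x.

    Here C_x is represented as (function, number_of_random_bits); the exact
    probability is computed by enumerating all random strings, which is only
    feasible in this toy illustration.
    """
    c_x, num_random_bits = compile_to_circuit(x)
    points = list(product([0, 1], repeat=num_random_bits))
    p_accept = sum(c_x(r) for r in points) / len(points)
    return p_accept >= threshold

# Toy compiler (purely illustrative): C_x accepts random strings whose
# parity matches the parity of the number of 1s in x.
def toy_compiler(x):
    target = sum(x) % 2
    return (lambda r: int(sum(r) % 2 == target), 3)

print(accepts(toy_compiler, x=[1, 0, 1]))  # parity 0 -> accepts half of the strings
```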
Electron. Colloquium Comput. Complex., 2016
Impagliazzo and Wigderson [25] showed that if E = DTIME(2^{O(n)}) requires size 2^{Ω(n)} circuits, then every time-T constant-error randomized algorithm can be simulated deterministically in time poly(T). However, such polynomial slowdown is a deal breaker when T = 2^{α·n}, for a constant α > 0, as is the case for some randomized algorithms for NP-complete problems. Paturi and Pudlak [30] observed that many such algorithms are obtained from randomized time-T algorithms, for T ≤ 2^{o(n)}, with large one-sided error 1 − ε, for ε = 2^{−α·n}, that are repeated 1/ε times to yield a constant-error randomized algorithm running in time T/ε = 2^{(α+o(1))·n}. We show that if E requires size 2^{Ω(n)} nondeterministic circuits, then there is a poly(n)-time ε-HSG (Hitting-Set Generator) H: {0,1}^{O(log n)+log(1/ε)} → {0,1}^n, implying that time-T randomized algorithms with one-sided error 1 − ε can be simulated in deterministic time poly(T)/ε. In particular, under this hardness assumption, the fastest known constan...
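The way an ε-HSG yields such a deterministic simulation can be sketched as follows, assuming a candidate generator `hsg` and a one-sided-error `algorithm` are supplied (both stand-ins here, not the construction from the paper): enumerate all seeds and accept iff some generated string makes the algorithm accept.

```python
from itertools import product

def derandomize_one_sided(algorithm, x, hsg, seed_len):
    """Deterministic simulation of a one-sided-error algorithm using a
    hitting-set generator, in the spirit of the result above.

    Assumptions (stand-ins, not the paper's construction): `algorithm(x, r)`
    never accepts when x is a NO instance, and accepts at least an eps
    fraction of random strings r when x is a YES instance; `hsg(seed)` maps
    a short seed to a full-length string and hits every such dense accepting
    set.  Then x is a YES instance iff some seed makes the algorithm accept,
    at the cost of 2^seed_len deterministic runs.
    """
    for seed in product([0, 1], repeat=seed_len):
        if algorithm(x, hsg(seed)):
            return True           # accepting r found: certainly a YES instance
    return False                  # no seed works: declare NO

# Toy stand-ins: the "HSG" pads its seed; the algorithm accepts YES
# instances whenever the first random bit is 1.
toy_hsg = lambda seed: seed + (0,) * 5
toy_alg = lambda x, r: x == "yes" and r[0] == 1
print(derandomize_one_sided(toy_alg, "yes", toy_hsg, seed_len=2))  # True
print(derandomize_one_sided(toy_alg, "no", toy_hsg, seed_len=2))   # False
```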
We define a hierarchy of complexity classes that lie between P and RP, yielding a new way of quantifying partial progress towards the derandomization of RP. A standard approach in derandomization is to reduce the number of random bits an algorithm uses. We instead focus on a model of computation that allows us to quantify the extent to which random bits are being used. More specifically, we consider Stack Machines (SMs), which are log-space Turing Machines that have access to an unbounded stack, an input tape of length N, and a random tape of length N^{O(1)}. We parameterize these machines by allowing at most r(N) − 1 reversals on the random tape, thus obtaining the r(N)-th level of our hierarchy, denoted by RPdL[r]. It follows by a result of Cook [Coo71] that RPdL[1] = P, and of Ruzzo [Ruz81] that RPdL[exp(N)] = RP. Our main results are the following.
• For every i ≥ 1, derandomizing RPdL[2^{O(log^i N)}] implies the derandomization of RNC^i. Thus, progress towards the P vs RP question along our hierarchy also implies progress towards derandomizing RNC. Perhaps more surprisingly, we also prove a partial converse: pseudorandom generators (PRGs) for RNC^{i+1} are sufficient to derandomize RPdL[2^{O(log^i N)}]; i.e. by derandomizing (using PRGs) a class believed to be strictly inside P, we derandomize a class containing P. More generally, we introduce Randomness Compilers, a model equivalent to Stack Machines. In this model a polynomial-time algorithm gets an input x and outputs a circuit C_x, which takes random inputs. Acceptance of x is determined by the acceptance probability of C_x. When C_x is of polynomial size and depth O(log^i N) the corresponding class is denoted by P+RNC^i, and we show that RPdL[2^{O(log^i N)}] ⊆ P+RNC^i ⊆ RPdL[2^{O(log^{i+1} N)}].
• We show an unconditional N^{Ω(1)} lower bound on the number of reversals required by an SM for Polynomial Evaluation. This in particular implies that known Schwartz-Zippel-like algorithms for Polynomial Identity Testing cannot be implemented in the lowest levels of our hierarchy.
• We show that in the 1st level of our hierarchy, machines with one-sided error are as powerful as machines with two-sided and unbounded error.
Journal of Cryptology, 2013
We study the complexity of black-box constructions of pseudorandom functions (PRF) from one-way functions (OWF) that are secure against non-uniform adversaries. We show that if OWF do not exist, then given as an oracle any (inefficient) hard-to-invert function, one can compute a PRF in polynomial time with only k(n) oracle queries, for any k(n) = ω(1) (e.g. k(n) = log* n). Combining this with the fact that OWF imply PRF, we show that unconditionally there exists a (pathological) construction of PRF from OWF making at most k(n) queries. This result shows a limitation of a certain class of techniques for proving efficiency lower bounds on the construction of PRF from OWF. Our result builds on the work of Reingold, Trevisan, and Vadhan (TCC '04), who show that when OWF do not exist there is a pseudorandom generator (PRG) construction that makes only one oracle query to the hard-to-invert function. Our proof combines theirs with the Nisan-Wigderson generator (JCSS '94), and with a recent technique by Berman and Haitner (TCC '12). Working in the same context (i.e. when OWF do not exist), we also construct a poly-time PRG with arbitrary polynomial stretch that makes non-adaptive queries to an (inefficient) one-bit-stretch oracle PRG. This contrasts with the well-known adaptive stretch-increasing construction due to Goldreich and Micali. Both of the above constructions simply apply an affine function (parity or its complement) to the query answers. We complement this by showing that if the post-processing is restricted to only taking projections, then non-adaptive constructions of PRF, or even linear-stretch PRG, can be ruled out.
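For reference, the adaptive stretch-increasing construction of Goldreich and Micali that the abstract contrasts with can be sketched as follows; the hash-based one-bit-stretch "PRG" is only a placeholder for illustration, not a proven pseudorandom generator.

```python
import hashlib

def one_bit_stretch_prg(state):
    """Stand-in for a one-bit-stretch PRG g: {0,1}^n -> {0,1}^{n+1}.
    (A hash-based placeholder for illustration only; not a proven PRG.)"""
    digest = hashlib.sha256(bytes(state)).digest()
    out_bit = digest[0] & 1
    new_state = [b & 1 for b in digest[: len(state)]]
    return new_state, out_bit

def stretch(seed, m):
    """The classical adaptive stretch-increasing construction (Goldreich-
    Micali style, as referenced above): feed the PRG's n-bit part back in as
    the next state and output the extra bit, repeating m times to turn a
    one-bit-stretch PRG into an m-bit-stretch one.  Each query depends on
    the previous answer, which is exactly the adaptivity the paper contrasts
    with its non-adaptive construction."""
    state, output = list(seed), []
    for _ in range(m):
        state, bit = one_bit_stretch_prg(state)
        output.append(bit)
    return output

print(stretch([1, 0, 1, 1, 0, 0, 1, 0], m=16))
```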
Proceedings of the forty-fifth annual ACM symposium on Theory of Computing, 2013
We study connections between Natural Proofs, derandomization, and the problem of proving "weak" circuit lower bounds such as NEXP ⊄ TC^0, which are still wide open. Natural Proofs have three properties: they are constructive (an efficient algorithm A is embedded in them), have largeness (A accepts a large fraction of strings), and are useful (A rejects all strings which are truth tables of small circuits). Strong circuit lower bounds that are "naturalizing" would contradict present cryptographic understanding, yet the vast majority of known circuit lower bound proofs are naturalizing. So it is imperative to understand how to pursue unNatural Proofs. Some heuristic arguments say constructivity should be circumventable: largeness is inherent in many proof techniques, and it is probably our presently weak techniques that yield constructivity. We prove:
• Constructivity is unavoidable, even for NEXP lower bounds. Informally, we prove that for all "typical" non-uniform circuit classes C, NEXP ⊄ C if and only if there is a polynomial-time algorithm distinguishing some function from all functions computable by C-circuits. Hence NEXP ⊄ C is equivalent to exhibiting a constructive property useful against C.
• There are no P-natural properties useful against C if and only if randomized exponential time can be "derandomized" using truth tables of circuits from C as random seeds. Therefore the task of proving there are no P-natural properties is inherently a derandomization problem, weaker than but implied by the existence of strong pseudorandom functions.
These characterizations are applied to yield several new results, including improved ACC^0 lower bounds and new unconditional derandomizations. In general, we develop and apply several new connections between the existence of certain algorithms for analyzing truth tables, and the non-existence of small circuits for problems in large classes such as NEXP.
Combinatorica, 2006
The Nisan-Wigderson pseudo-random generator [NW94] was constructed to derandomize probabilistic algorithms under the assumption that there exist explicit functions which are hard for small circuits. We give the first explicit construction of a pseudo-random generator with asymptotically optimal seed length even when given a function which is hard for relatively small circuits. Generators with optimal seed length were previously known only assuming hardness for exponential size circuits [IW97, STV01]. We also give the first explicit construction of an extractor which uses asymptotically optimal seed length for random sources of arbitrary min-entropy. Our construction is the first to use the optimal seed length for sub-polynomial entropy levels. It builds on the fundamental connection between extractors and pseudo-random generators discovered by Trevisan [Tre01], combined with the construction above. The key is a new analysis of the NW-generator [NW94]. We show that it fails to be pseudo-random only if a much harder function can be efficiently constructed from the given hard function. By repeatedly using this idea we get a new recursive generator, which may be viewed as a reduction from the general case of arbitrary hardness to the solved case of exponential hardness. * This paper is based on two conference papers [ISW99, ISW00] by the same authors.
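For orientation, the basic NW-generator whose analysis is revisited here can be sketched as follows; the combinatorial design and the hard function are taken as inputs supplied by the caller, and the paper's recursive candidate-generator construction is not shown.

```python
def nw_generator(hard_f, design, seed):
    """The basic Nisan-Wigderson generator referred to above.

    `design` is a list of index sets S_1, ..., S_m into the seed, all of the
    same size and with small pairwise intersections; output bit i is the
    hard function evaluated on the seed restricted to S_i.  (The design, the
    hard function and the parameters are assumptions of this sketch.)
    """
    return [hard_f([seed[j] for j in sorted(s)]) for s in design]

# Toy instantiation (illustrative only): a 6-bit seed, three 3-element sets
# with pairwise intersections of size 1, and parity as the "hard" function.
toy_design = [{0, 1, 2}, {2, 3, 4}, {4, 5, 0}]
parity = lambda bits: sum(bits) % 2
print(nw_generator(parity, toy_design, seed=[1, 0, 1, 1, 0, 1]))
```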
Studies in Complexity and Cryptography. Miscellanea on the Interplay between Randomness and Computation, 2011
A hitting-set generator is a deterministic algorithm that generates a set of strings such that this set intersects every dense set that is recognizable by a small circuit. A polynomial time hitting-set generator readily implies RP = P, but it is not apparent what this implies for BPP. Nevertheless, Andreev et al. (ICALP'96, and JACM 1998) showed that a polynomial-time hitting-set generator implies the seemingly stronger conclusion BPP = P. We simplify and improve their (and later) constructions.
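A small sketch of the defining property quoted above, assuming the sets involved are small enough to check by brute force; `predicate` stands in for "a dense set recognizable by a small circuit", and the closing comment records the immediate RP = P consequence.

```python
from itertools import product

def hits(candidate_set, predicate, n, density=0.5):
    """Check the defining property of a hitting set against one predicate:
    if the predicate accepts at least a `density` fraction of all n-bit
    strings, the set must contain at least one accepted string.
    Brute force over {0,1}^n, so only a toy check."""
    points = list(product((0, 1), repeat=n))
    accept_rate = sum(predicate(x) for x in points) / len(points)
    if accept_rate < density:
        return True                     # the predicate is not dense: nothing to hit
    return any(predicate(x) for x in candidate_set)

# Example: a dense predicate (first bit is 1) and a tiny candidate set.
dense = lambda x: int(x[0] == 1)
print(hits([(0, 0, 0), (1, 1, 0)], dense, n=3))   # True: (1, 1, 0) is accepted
# RP = P then follows by running the RP algorithm on every string of a
# polynomial-size hitting set and accepting iff one of the runs accepts.
```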
Journal of Cryptology, 2008
We give a careful, fixed-size parameter analysis of a standard way (Blum and Micali in SIAM J. Comput. 13(4):850-864, 1984; Goldreich and Levin in Proceedings of 21st ACM Symposium on Theory of Computing, pp. 25-32, 1989) to form a pseudo-random generator from a one-way function, and then pseudo-random functions from said generator (Goldreich et al. in J. Assoc. Comput. Mach. 33(4):792-807, 1986). While the analysis is done in the model of exact security, we improve known bounds also asymptotically when many bits are output each round, and we find all auxiliary parameters efficiently, giving a uniform result. These optimizations make the analysis effective even for security parameters/key-sizes supported by typical block ciphers and hash functions. This enables us to construct very practical pseudo-random generators with strong properties based on plausible assumptions.
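The second step mentioned above, building pseudo-random functions from a generator (Goldreich, Goldwasser and Micali), is the tree construction sketched below; the hash-based length-doubling "PRG" is a placeholder for illustration, not a proven generator.

```python
import hashlib

def length_doubling_prg(key):
    """Stand-in for a length-doubling PRG G: {0,1}^n -> {0,1}^{2n}
    (hash-based placeholder for illustration; not a proven PRG)."""
    left = hashlib.sha256(b"L" + key).digest()[: len(key)]
    right = hashlib.sha256(b"R" + key).digest()[: len(key)]
    return left, right

def ggm_prf(key, x_bits):
    """The tree-based PRF-from-PRG construction of Goldreich, Goldwasser and
    Micali referenced above: walk down a binary tree, taking the left or
    right half of the PRG output according to each input bit; the value at
    the leaf reached is the PRF output F_key(x)."""
    state = key
    for bit in x_bits:
        left, right = length_doubling_prg(state)
        state = right if bit else left
    return state

key = bytes(16)                       # all-zero 128-bit key, for illustration only
print(ggm_prf(key, [1, 0, 1, 1]).hex())
```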
Randomization, Approximation, and …, 2004
A hitting-set generator is a deterministic algorithm that generates a set of strings such that this set intersects every dense set that is recognizable by a small circuit. A polynomial time hitting-set generator readily implies RP = P, but it is not apparent what this implies for BPP. Nevertheless, Andreev et al. (ICALP'96, and JACM 1998) showed that a polynomial-time hitting-set generator implies the seemingly stronger conclusion BPP = P. We simplify and improve their (and later) constructions.
