Some Notes on Randomness Extraction

Christopher Smith

Last Updated: October 19, 2025

1 Defining Extractors

Informally, a (seeded) randomness extractor is an algorithm that takes as input a truly random seed and some imperfect source of randomness, and outputs bits that are statistically close to uniform. Before formally presenting randomness extractors in Definition 3, we present the prerequisite notions of statistical distance and min-entropy.

Definition 1 (Statistical Distance). Let \(X,Y\) be two random variables with common support \(U\). Then the statistical distance between \(X\) and \(Y\) is \[\begin{equation} \mathbf{SD}(X,Y) = \frac{1}{2}\sum_{u \in U} \left | \Pr[X = u] - \Pr[Y=u] \right | \end{equation}\]

Definition 2 (Min-Entropy). The min-entropy of a random variable \(W\) is defined as \[\begin{equation} \mathbf{H}_{\infty}(W) = - \log \left ( \max_{w} \Pr[W=w] \right ) \end{equation}\]
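To make these two definitions concrete, here is a small Python sketch (the dictionary representation of distributions and the function names are our own, not part of any formal development):

```python
from math import log2

def statistical_distance(px, py):
    """SD(X, Y) for distributions given as {outcome: probability} dicts
    over a common support."""
    support = set(px) | set(py)
    return 0.5 * sum(abs(px.get(u, 0.0) - py.get(u, 0.0)) for u in support)

def min_entropy(pw):
    """Min-entropy -log2(max_w Pr[W = w]) of a distribution dict."""
    return -log2(max(pw.values()))
```

For example, the uniform distribution on two bits has min-entropy \(2\), and its statistical distance to a point mass is \(3/4\).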

We may now formally define strong randomness extractors. Strong extractors are typically desirable over weak extractors because as long as an initial seed is chosen uniformly at random, it can be publicly exposed and used for all future invocations of the extractor.

Definition 3 (Strong Extractor). Let \(\mathsf{Ext}\colon \left \{ 0,1 \right \}^n \times \left \{ 0,1 \right \}^{d} \to \left \{ 0,1 \right \}^{\ell}\) be a polynomial time computable function. We say that \(\mathsf{Ext}\) is an efficient \((n,m,\ell, \epsilon)\)-strong extractor if, for any random variable \(W\) on \(\left \{ 0,1 \right \}^n\) satisfying \(\mathbf{H}_{\infty}(W) \geq m\), we have \[\begin{equation} \mathbf{SD}((\mathsf{Ext}(W;U_d), U_d) ~,~ (U_{\ell}, U_d)) \le \epsilon \end{equation}\] where \(U_d\) is uniform on \(\left \{ 0,1 \right \}^d\).

Strong extractors as given in Definition 3 are also sometimes referred to as worst-case strong extractors to distinguish them from the average-case strong extractors of Definition 5. The motivation for average-case extractors (as given in [3]) is that an adversary hoping to learn information about a random variable \(W\) may obtain some side information \(z\) (sampled from a random variable \(Z\)) such that \(\mathbf{H}_{\infty}(W|Z=z)\) is unacceptably low. If we assume, however, that the adversary has no control over its side-information \(Z\)—as is often the case—then we can consider extractors based on a weaker notion of min-entropy that averages over all possible \(z \gets Z\).

Definition 4 (Average Conditional Min-Entropy [3]). Let \((W,Z)\) be a pair of random variables. The average conditional min-entropy of \(W\) given \(Z\) is: \[\begin{equation} \widetilde{\mathbf{H}}_{\infty}(W|Z) = - \log \left ( \underset{z \gets Z}{\mathbb{E}} \left [\max_{w} \Pr[W=w ~ \middle | ~Z=z] \right ] \right ) = - \log \left ( \underset{z \gets Z}{\mathbb{E}} \left [2^{- \mathbf{H}_{\infty}(W | Z=z)} \right ] \right ) \end{equation}\]
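As a quick sanity check on Definition 4, the following Python sketch (representation and names ours) computes \(\widetilde{\mathbf{H}}_{\infty}(W|Z)\) from a joint distribution, using the identity \(\mathbb{E}_{z \gets Z}\left[\max_w \Pr[W=w \mid Z=z]\right] = \sum_z \max_w \Pr[W=w, Z=z]\):

```python
from math import log2

def avg_cond_min_entropy(joint):
    """Average conditional min-entropy of W given Z, where `joint` maps
    (w, z) pairs to Pr[W = w, Z = z]."""
    best = {}  # z -> max_w Pr[W = w, Z = z]
    for (w, z), p in joint.items():
        best[z] = max(best.get(z, 0.0), p)
    return -log2(sum(best.values()))
```

For instance, if \(W\) is uniform on four values and \(Z\) is constant (no leakage), the result is \(2\); if \(Z = W\) (full leakage), the result is \(0\).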

Definition 5 (Average-Case Strong Extractor [3]). Let \(\mathsf{Ext}\colon \left \{ 0,1 \right \}^n \times \left \{ 0,1 \right \}^d \to \left \{ 0,1 \right \}^{\ell}\) be a polynomial time computable function. We say that \(\mathsf{Ext}\) is an efficient average-case \((n,m,\ell,\epsilon)\)-strong extractor if, for any pair of random variables \((W,Z)\) such that \(W\) is a random variable over \(\left \{ 0,1 \right \}^n\) satisfying \(\widetilde{\mathbf{H}}_{\infty}(W|Z) \geq m\), we have \[\begin{equation} \mathbf{SD}((\mathsf{Ext}(W;U_d), U_d, Z) ~,~ (U_{\ell}, U_d, Z)) \le \epsilon \end{equation}\]

2 Basic Results

Perhaps the most important—or at least the most well-known—result about extractors is that they can be constructed from universal hash functions. This famous result is known as the Leftover Hash Lemma. The term was coined by [7], though the lemma originally appeared in [6]. The version provided here is taken from [3].

Definition 6 (Universal Hash Function). A keyed hash function, or, equivalently, a family of hash functions \(\left \{ H_s : \left \{ 0,1 \right \}^n \to \left \{ 0,1 \right \}^{\ell} \right \}_{s \in S}\) is called universal if, for every \(x,y \in \left \{ 0,1 \right \}^n\) with \(x \ne y\), \[\begin{equation} \underset{s \gets S}{\Pr} \left [ H_s(x) = H_s(y) \right ] \leq 2^{-\ell} \end{equation}\]

Lemma 7 (Leftover Hash Lemma [3]). Assume a family of functions
\(\left \{ H_s : \left \{ 0,1 \right \}^n \to \left \{ 0,1 \right \}^{\ell} \right \}\) is universal. Then, for any random variable \(W\), \[\begin{equation} \mathbf{SD}((H_S(W), S) ~,~ (U_{\ell}, S)) \le \frac{1}{2} \sqrt{2^{-\mathbf{H}_{\infty}(W)} 2^{\ell}} \end{equation}\] In particular, universal hash functions are \((n,m,\ell,\epsilon)\)-strong extractors whenever
\(\ell \le m - 2\log(1/\epsilon) + 2\).
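For concrete parameters the constraint of Lemma 7 is easy to evaluate. A one-line sketch (function name ours) returning the largest permissible output length:

```python
from math import floor, log2

def lhl_output_length(m, eps):
    """Largest ell with ell <= m - 2*log2(1/eps) + 2, per the
    Leftover Hash Lemma."""
    return floor(m - 2 * log2(1 / eps) + 2)
```

For instance, a source with \(m = 128\) bits of min-entropy and target error \(\epsilon = 2^{-40}\) supports extracting \(\ell = 50\) bits.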

Lemma 8 (Relationship Between Worst-Case and Average-Case Strong Extractors [3]). For any \(\delta > 0\), if \(\mathsf{Ext}\) is a (worst-case) \((n, m - \log(1/\delta), \ell, \epsilon)\)-strong extractor, then \(\mathsf{Ext}\) is also an average-case \((n,m,\ell,\epsilon + \delta)\)-strong extractor.

The upshot of Lemma 8 is that we can instantiate any average-case \((n,m,\ell,\epsilon)\)-strong extractor with a \((n, m - \log(1/\delta), \ell, \epsilon - \delta)\)-strong extractor, for any \(0 < \delta < \epsilon\). This is useful because we have many constructions of strong extractors. In particular, the Leftover Hash Lemma (Lemma 7) states that universal hash functions are strong extractors. As it turns out, however, we need not appeal to Lemma 8 if we want average-case extractors from universal hash functions, since the following generalized leftover hash lemma gives us this directly.

Lemma 9 (Generalized Leftover Hash Lemma [3]). Assume a family of functions
\(\left \{ H_s : \left \{ 0,1 \right \}^n \to \left \{ 0,1 \right \}^{\ell} \right \}\) is universal. Then for any pair of random variables \(W, Z\), \[\begin{equation} \mathbf{SD}((H_S(W), S, Z) ~,~ (U_{\ell}, S, Z)) \le \frac{1}{2} \sqrt{2^{-\widetilde{\mathbf{H}}_{\infty}(W|Z)} 2^{\ell}} \end{equation}\] In particular, universal hash functions are average-case \((n,m,\ell,\epsilon)\)-strong extractors whenever
\(\ell \le m - 2\log(1/\epsilon) + 2\).

3 Two Matrix Constructions of Universal Hash Functions

Claim 10 (Matrix hashing is universal). Interpret the set \(\mathcal{M}= \left \{ 0,1 \right \}^{\ell \times n}\) as the set of all \(\ell\) by \(n\) matrices over \(\mathbb{F}_2\). Define a hash family \(\left \{ H_M : \left \{ 0,1 \right \}^n \to \left \{ 0,1 \right \}^{\ell} \right \}_{M \in \mathcal{M}}\) by \(H_M(x) = Mx\), where \(x\) is interpreted as a length \(n\) vector over \(\mathbb{F}_2\). This hash family is universal.

Proof. Let \(x \ne y \in \left \{ 0,1 \right \}^n\). We wish to show that \(\Pr_{M}[Mx = My] \le 2^{-\ell}\). See that the event \(Mx = My\) implies \(M(x-y) = 0\). Because \((x-y) \ne 0\), there exists an index \(j^*\) such that \((x-y)_{j^*} = 1\). Let \(M_i = (m_{i,j})_{j \in [n]}\) denote the \(i\)-th row of \(M\). We have \[\begin{align*} &\underset{M \gets \mathcal{M}}{\Pr} \left [ M(x-y) = 0 \right ] = \underset{M \gets \mathcal{M}}{\Pr} \left [ M_i(x-y) = 0 ~\forall i \in [\ell]~ \right ] \\ &= \prod_{i \in [\ell]} \underset{M_i \gets \left \{ 0,1 \right \}^{n}}{\Pr} \left [ M_i(x-y) = 0 \right ] \quad \text{ // $M_i$ chosen independently}\\ &= \prod_{i \in [\ell]} \underset{M_i \gets \left \{ 0,1 \right \}^{n}}{\Pr} \left [ m_{i,j^*} = \sum_{j \ne j^*} m_{i,j}(x-y)_j \right ] \quad \text{// $(x-y)_{j^*} = 1$}\\ &= \prod_{i \in [\ell]} \underset{m_{i,j^*} \gets \left \{ 0,1 \right \}}{\Pr} \left [ m_{i,j^*} = \text{arbitrary bit} \right ] \quad \text{ // $m_{i,j^*}$ uniform and independent of all $m_{i,j \ne j^*}$}\\ &= 2^{-\ell} \end{align*}\] ◻
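For tiny parameters we can verify Claim 10 exhaustively. The following Python sketch (names ours) enumerates all \(2^{\ell n}\) matrices and computes the exact collision probability, which the proof shows equals \(2^{-\ell}\):

```python
from itertools import product

def mat_vec_mod2(M, x):
    """Multiply an l-by-n 0/1 matrix by a 0/1 vector over F_2."""
    return tuple(sum(mi * xi for mi, xi in zip(row, x)) % 2 for row in M)

def collision_prob(n, l, x, y):
    """Exact Pr_M[Mx = My] over all 2^(l*n) matrices M (tiny n, l only)."""
    hits = total = 0
    for bits in product((0, 1), repeat=l * n):
        M = [bits[i * n:(i + 1) * n] for i in range(l)]
        total += 1
        hits += mat_vec_mod2(M, x) == mat_vec_mod2(M, y)
    return hits / total
```

With \(n = 3\), \(\ell = 2\) and any fixed \(x \ne y\), the collision probability comes out to exactly \(2^{-2}\).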

Notice that the key/seed for matrix hashing is the description of the matrix \(M\), which requires \(n \cdot \ell\) bits. As observed in [5], we can reduce this requirement to \(n + \ell - 1\) bits by using Toeplitz matrices. A matrix \(M = (m_{i,j}) \in \mathbb{F}_2^{\ell \times n}\) is Toeplitz if it is constant on its diagonals: for all valid \(i,j\), \(m_{i,j} = m_{i+1, j+1} = \alpha_{i-j}\), where \(\alpha_{1-n}, \alpha_{2-n}, \dots, \alpha_0, \alpha_1, \dots, \alpha_{\ell-1}\) are the \(n+\ell-1\) constants.

Claim 11 (Toeplitz matrix hashing is universal). Consider the same matrix hash family as in Claim 10, except we now require \(\mathcal{M}\) to be the set of all Toeplitz matrices over \(\mathbb{F}_2\). This hash family is universal.

Proof. Let \(x \ne y \in \left \{ 0,1 \right \}^n\). We wish to show that \(\Pr_{M}[Mx = My] \le 2^{-\ell}\). See that the event \(Mx = My\) implies \(M(x-y) = 0\). Because \((x-y) \ne 0\), there exists an index \(j^*\) such that \((x-y)_{j^*} = 1\). Let \(M_i = (m_{i,j})_{j \in [n]}\) denote the \(i\)-th row of \(M\). We have \[\begin{align*} &\underset{M \gets \mathcal{M}}{\Pr} \left [ M(x-y) = 0 \right ] = \underset{M \gets \mathcal{M}}{\Pr} \left [ M_i(x-y) = 0 ~\forall i \in [\ell]~ \right ] \\ &= \underset{M \gets \mathcal{M}}{\Pr} \left [ m_{i,j^*} = \sum_{j \ne j^*} m_{i,j}(x-y)_j ~\forall i \in [\ell]~ \right ] \quad \text{// $(x-y)_{j^*} = 1$}\\ &= \underset{(m_{1,j^*},...,m_{\ell,j^*})}{\Pr} \left [ m_{i,j^*} = \text{arbitrary bit} ~\forall i \in [\ell]~ \right ] \quad \text{// $m_{i,j^*}$ uniform and independent of all $m_{i,j \ne j^*}$}\\ &= \prod_{i \in [\ell]} \underset{m_{i,j^*} \gets \left \{ 0,1 \right \}}{\Pr} \left [ m_{i,j^*} = \text{arbitrary bit} \right ] \quad \text{// $m_{i,j^*}$ independent of $m_{i' \ne i, j^*}$}\\ &= 2^{-\ell} \end{align*}\] ◻

Remark 12. Notice the proof of Claim 11 is almost the same as that of Claim 10, except we just had to be more careful about how we invoke independence. In Claim 10 we first appealed to independence between rows of a random matrix, and then to independence of elements within a row. In Claim 11 we cannot appeal to independence between rows since the rows of a random Toeplitz matrix are not independent. Instead, we first appealed to independence of elements within a row of a random Toeplitz matrix, and then we appealed to independence of elements within a column of a random Toeplitz matrix.
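The Toeplitz case can be checked the same way: a seed of \(n + \ell - 1\) bits determines the matrix, and exhaustive enumeration over all seeds recovers the exact collision probability \(2^{-\ell}\). A Python sketch (names and index conventions ours):

```python
from itertools import product

def toeplitz_matrix(alpha, n, l):
    """Build the l-by-n Toeplitz matrix with m[i][j] = alpha[i - j],
    where alpha holds the n + l - 1 diagonal constants indexed from
    1 - n to l - 1 (stored shifted by n - 1)."""
    return [[alpha[(i - j) + (n - 1)] for j in range(n)] for i in range(l)]

def toeplitz_collision_prob(n, l, x, y):
    """Exact Pr over all 2^(n+l-1) Toeplitz seeds that Mx = My."""
    def mul(M, v):
        return tuple(sum(a * b for a, b in zip(row, v)) % 2 for row in M)
    hits = total = 0
    for alpha in product((0, 1), repeat=n + l - 1):
        M = toeplitz_matrix(alpha, n, l)
        total += 1
        hits += mul(M, x) == mul(M, y)
    return hits / total
```

With \(n = 4\), \(\ell = 3\), the \(2^6 = 64\) seeds yield exactly \(8\) collisions for any fixed \(x \ne y\), i.e., probability \(2^{-3}\).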

4 Linear Extractors are Invertible

It is sometimes desirable in certain applications—such as wiretap protocols [2] and leakage-resilient secret sharing [1]—to have extractors that are invertible. The work of [1] tells us that any linear extractor is invertible. An extractor \(\mathsf{Ext}: \left \{ 0,1 \right \}^n \times \left \{ 0,1 \right \}^d \to \left \{ 0,1 \right \}^{\ell}\) is linear if for any seed \(s \in \left \{ 0,1 \right \}^d\), \(\mathsf{Ext}(\cdot, s)\) is a linear function. That is, \(\mathsf{Ext}(\cdot, s)\) is a linear map between \(\left \{ 0,1 \right \}^n\) and \(\left \{ 0,1 \right \}^{\ell}\) viewed as vector spaces. The following lemma due to [1] gives us a generic polynomial time procedure for finding preimages of any linear extractor. The lemma additionally requires that the linear extractor be “efficient”, but we have already assumed this as Definitions 3 and 5 require extractors to be computable in polynomial time.

Lemma 13 (Linear Extractors are Invertible (Lemma 2,[1])). For every efficient linear extractor \(\mathsf{Ext}\), there exists an efficient randomized function \(\mathsf{InvExt}: \left \{ 0,1 \right \}^{\ell} \times \left \{ 0,1 \right \}^d \to \left \{ 0,1 \right \}^n \cup \left \{ \bot \right \}\) such that

  1. The tuples \((U_n, U_d, \mathsf{Ext}(U_n ; U_d))\) and \((\mathsf{InvExt}(\mathsf{Ext}(U_n ; U_d), U_d), U_d, \mathsf{Ext}(U_n ; U_d))\) are identically distributed.

  2. For each \((y,s) \in \left \{ 0,1 \right \}^{\ell} \times \left \{ 0,1 \right \}^d\),

    1. \(\Pr[\mathsf{InvExt}(y,s) = \bot] = 1\) iff \(\nexists w \in \left \{ 0,1 \right \}^n\) such that \(\mathsf{Ext}(w;s) = y\).

    2. \(\Pr[\mathsf{Ext}(\mathsf{InvExt}(y,s); s) = y] = 1\) iff \(\exists w \in \left \{ 0,1 \right \}^n\) such that \(\mathsf{Ext}(w;s) = y\).

Proof. We only restate the construction of \(\mathsf{InvExt}\). For the rest of the proof, consult Lemma 2 of [1]. Recall that \(\mathsf{Ext}(\cdot, s)\) is a linear map between vector spaces \(\left \{ 0,1 \right \}^{n}\) and \(\left \{ 0,1 \right \}^{\ell}\). Let \(\mathcal{I}_s\) and \(\mathcal{K}_s\) be the image and kernel of \(\mathsf{Ext}(\cdot, s)\). We now define \(\mathsf{InvExt}\) as follows:
\(\mathsf{InvExt}(y \in \left \{ 0,1 \right \}^{\ell}, s \in \left \{ 0,1 \right \}^d) \to \left \{ 0,1 \right \}^n \cup \left \{ \bot \right \}\): if \(y \notin \mathcal{I}_s\), output \(\bot\). Otherwise, fix any \(w_0\) such that \(\mathsf{Ext}(w_0; s) = y\), sample \(k \gets \mathcal{K}_s\) uniformly at random, and output \(w_0 + k\), a uniform element of the preimage coset \(w_0 + \mathcal{K}_s\).

\(\mathsf{InvExt}\) is efficient because the bases for the linear subspaces \(\mathcal{K}_s, \mathcal{I}_s\), and the preimage space on \(y\) can all be determined in polynomial time. ◻

4.0.0.1 A more concrete description of \(\mathsf{InvExt}\).

Because \(\mathsf{Ext}(\cdot, s)\) is a linear map, we may assume that the seed \(s\) describes a matrix \(M\) such that \(\mathsf{Ext}(w,s) = Mw\). For simplicity we additionally assume \(M \in \mathbb{F}_2^{\ell \times n}\). Below we provide an equivalent but more concrete description of \(\mathsf{InvExt}\) more amenable to implementation.
\(\mathsf{InvExt}(y \in \left \{ 0,1 \right \}^{\ell}, s \in \left \{ 0,1 \right \}^d) \to \left \{ 0,1 \right \}^n \cup \left \{ \bot \right \}\): using Gaussian elimination over \(\mathbb{F}_2\), attempt to solve the linear system \(Mw = y\). If the system is inconsistent, output \(\bot\). Otherwise, obtain a particular solution \(w_0\) and a basis \(k_1, \dots, k_r\) of the kernel of \(M\), sample bits \(b_1, \dots, b_r \gets \left \{ 0,1 \right \}\) uniformly, and output \(w_0 + \sum_i b_i k_i\).
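A Python sketch of this matrix-based procedure (vectors and matrices as 0/1 lists; names ours, and this is only one way to realize the steps above):

```python
import random

def solve_gf2(M, y):
    """Gaussian elimination over F_2 for Mw = y.  Returns (particular
    solution, kernel basis), or (None, []) if the system is inconsistent."""
    l, n = len(M), len(M[0])
    A = [row[:] + [yi] for row, yi in zip(M, y)]  # augmented matrix
    pivots = []                                   # (row, col) of each pivot
    r = 0
    for c in range(n):
        pr = next((i for i in range(r, l) if A[i][c]), None)
        if pr is None:
            continue
        A[r], A[pr] = A[pr], A[r]
        for i in range(l):
            if i != r and A[i][c]:
                A[i] = [a ^ b for a, b in zip(A[i], A[r])]
        pivots.append((r, c))
        r += 1
    # inconsistent iff some all-zero row has augmented bit 1
    if any(A[i][n] and not any(A[i][:n]) for i in range(l)):
        return None, []
    w0 = [0] * n
    for pr, pc in pivots:
        w0[pc] = A[pr][n]
    pivot_cols = {pc for _, pc in pivots}
    basis = []
    for f in (c for c in range(n) if c not in pivot_cols):
        v = [0] * n
        v[f] = 1
        for pr, pc in pivots:
            v[pc] = A[pr][f]  # w[pc] = sum over free vars in RREF row pr
        basis.append(v)
    return w0, basis

def inv_ext(M, y):
    """InvExt for Ext(w; s) = Mw: uniform preimage of y, or None (bot)."""
    w0, basis = solve_gf2(M, y)
    if w0 is None:
        return None
    w = w0[:]
    for v in basis:
        if random.getrandbits(1):
            w = [a ^ b for a, b in zip(w, v)]
    return w
```

Sampling a random subset of the kernel basis and adding it to the particular solution yields a uniformly random element of the preimage coset, matching the description above.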

5 Towards Optimal Extraction: Lower Bounds and Nonconstructive Existence

In Section 3 we saw two simple constructions of strong (linear) extractors: a random matrix, and a random Toeplitz matrix. It was pointed out that the Toeplitz matrix construction boasts a smaller seed length than the fully random matrix construction.

Informally, short seed length is a desirable property because it represents a “thriftier” extractor that, all else fixed, extracts the same amount of randomness for the price of fewer truly random bits. Thus, given an \((n,m,\ell,\epsilon)\)-strong extractor, it is natural to ask for lower bounds on the seed length \(d\) in terms of \(n,m,\ell,\epsilon\). Of course, we also want to make sure our extractors still extract as much min-entropy from the source as possible. So, in addition to short seeds, we are interested in extractors where \(\ell\) is close to \(m\).\(^{1}\) In short, to prove a “lower bound” on the “quality” of any extractor, one can lower bound \(d\) and upper bound \(\ell\).

Chris: TODO: lower bound on \(d\) and upper bound on \(\ell\) from NZ93 randomness linear in space.

Chris: TODO: nonconstructive existence result, Trevisan pseudorandomness chapter 6 (extractors). These aren’t efficient, are they? Chris: rrv99 cites “Tight bounds for depth-two superconcentrators” for... both TODOs?

6 The Trevisan Extractor

The Trevisan extractor is an efficient, near-optimal construction in terms of its seed length and entropy loss. The original construction was given by Luca Trevisan [9], and the extractor quality was subsequently improved to near-optimal factors by Raz, Reingold, and Vadhan [8]. We focus on the improved version [8] given by the following result, and adapt notation to match our extractor notation.

Theorem 14 (Theorem 4 [8]). For every \(n,m \in \mathbb{N}\) and \(\epsilon > 0\) such that \(m \leq n\), there exist (efficient) \((n, m , m - \Delta, \epsilon)\)-strong extractors, where the entropy loss \(\Delta = 2\log(1/ \epsilon) + O(1)\) is optimal up to additive constants, and

  1. \(d = O(\log^2(n / \epsilon) \cdot \log m)\), or

  2. \(d = O(\log^2(n) \cdot \log(1 / \epsilon) \cdot \log m)\)

We are interested in the second extractor of Theorem 14, because the seed length does not have a quadratic dependence on \(\log(1 / \epsilon)\). Towards the goal of obtaining a concrete description of this extractor amenable to implementation, we unpack the relevant portion of the proof of Theorem 14, which says that this second extractor is obtained by applying Lemma 15 to the second (strong) extractor of Theorem 16.

Lemma 15 (Lemma 28 [8]). Let \(\mathsf{Ext}_1: \left \{ 0,1 \right \}^n \times \left \{ 0,1 \right \}^{d_1} \to \left \{ 0,1 \right \}^{m - \Delta_1}\) be any \((n,m, m - \Delta_1, \epsilon / 4)\)-strong extractor with entropy loss \(\Delta_1\). Then there exists a \((n,m, m - \Delta, \epsilon)\)-strong extractor \(\mathsf{Ext}: \left \{ 0,1 \right \}^n \times \left \{ 0,1 \right \}^{d_1 + d_2} \to \left \{ 0,1 \right \}^{m - \Delta}\) such that

  1. \(\mathsf{Ext}\) has entropy loss \(\Delta = 2 \lceil \log(1 / \epsilon) \rceil + 5\) Chris: notice this is optimal up to additive constants

  2. \(d_2 = O(\Delta_1 + \log n)\)

  3. \(\mathsf{Ext}\) is poly-time computable with one oracle query to \(\mathsf{Ext}_1\).

Theorem 16 (Theorem 2 [8]). For every \(n,m,\ell \in \mathbb{N}\) and \(\epsilon > 0\) such that \(\ell \leq m \leq n\), there exist (efficient) \((n,m,\ell,\epsilon)\)-extractors with

  1. \(d = O(\frac{\log^2 n \cdot \log(1 / \epsilon)}{\log(m / \ell)})\), or

  2. \(d = O(\log^2 n \cdot \log(1 / \gamma) \cdot \log(1 / \epsilon))\), where \(1 + \gamma = m / (\ell - 1)\), and \(\gamma < 1/2\).

Our task is now split into two parts: (a) understand the transformation of Lemma 15, and (b) understand the second extractor of Theorem 16.

6.1 Transformation of Lemma 15

Chris: \(\mathsf{Ext}(x, (y_1, y_2)) = \mathsf{Ext}_1(x, y_1) \circ \mathsf{Ext}_2(x, y_2)\)

6.2 Second Extractor of Theorem 16

Before presenting the construction of the extractor, we introduce the following prerequisite notions: weak designs, multilinear error-correcting codes, the Nisan-Wigderson PRG, and pairwise-independent hash functions.

6.2.1 Weak Designs

In combinatorics, a design is a collection of sets with small pairwise intersections. We focus specifically on so-called weak designs, adapting notation to clarify the relationship between design parameters and extractor parameters.

Definition 17 (Weak \((\ell, t, p, d)\)-Design [4, 8]). A family of sets \(S_1,\dots, S_{\ell} \subseteq [d]\) is a weak \((\ell,t,p,d)\)-design if

  1. For all \(i\), \(|S_i| = t\)

  2. For all \(i\), \(\sum_{j < i} 2^{|S_i \cap S_j|} \leq p \cdot (\ell - 1)\)

Constructing explicit weak designs is a non-trivial task. Indeed, the original weak designs used by Raz et al. [8] were obtained by derandomizing a nonconstructive existence proof via the method of conditional expectations [10]. While this construction is efficient in the sense that it runs in polynomial time and space, we will focus on the improved weak design constructions given by the subsequent work of Hartman and Raz [4]. These constructions satisfy the following notion of “explicitness”.

Definition 18 (Explicit Weak Design [4]). A weak \((\ell, t, p, d)\)-design \(S = S_1, \dots, S_{\ell} \subset [d]\) is explicit if there exists an algorithm that, given an index \(1 \le i \le \ell\), outputs the subset \(S_i\) in time \(T = \mathrm{poly}(\log \ell, t)\) and space \(S = O(\log \ell + \log t)\).

Chris: TODO: I think we also need Theorem 4 of Hartman and Raz [4], which guarantees explicit weak designs for general choices of parameters. The proof of that theorem applies a certain transformation to the explicit weak designs guaranteed by Theorem 20, which fixes the parameters \(d = t^2\) and \(p = e^2\). The transformation can be found in Lemma 5.1 of Hartman and Raz [4]. We focus instead on the construction underlying Theorem 20.

Theorem 19 (Theorem 3 [4]).  

  1. For every \(\ell, t \in \mathbb{N}\), such that \(\ell = \Omega(t^{\log t})\), and constant \(p > 1\) there exists an explicit weak \((\ell, t, p, d)\)-design, where \(d = O(t^2)\).

  2. For every \(\ell, t \in \mathbb{N}\) and \(p > 1\) such that \(2^{t} < \alpha \ell\) for some constant \(\alpha \ge 1\), there exists an explicit weak \((\ell, t, p, d)\)-design, where \(d = O(t^2 / \ln p)\).

Theorem 20 (Theorem 1 [4]). Let \(\ell, t \in \mathbb{N}\) and assume for simplicity that \(t\) is prime (if \(t\) is not prime, pick the smallest prime greater than \(t\)), and \(\ell\) is a power of \(t\). Then there exists an explicit weak \((\ell, t, p, d)\)-design, with \(d = t^2\) and \(p = e^2\) (where \(e\) denotes the base of the natural logarithm).

6.2.1.1 Construction.

Let \(\mathbb{F}_t\) be the finite field of size \(t\). Since \(d = t^2\), we can identify each number in \([d]\) with an ordered pair of elements in \(\mathbb{F}_t\) (e.g., \(\mathbb{F}_t \times \mathbb{F}_t \to [d]\) by \((a,b) \mapsto ta + b\), interpreting \(a\) and \(b\) as integers). Thus, constructing the design is equivalent to constructing subsets of \(\mathbb{F}_t^2\). The weak design \(S\) is defined as follows: \[\begin{align*} &S \coloneqq \left \{ S_p ~ \middle | ~p \in \mathbb{F}_t^{\le (\log \ell / \log t) - 1}[x] \right \} \\ &S_p \coloneqq \left \{ (a, p(a)) ~ \middle | ~a \in \mathbb{F}_t \right \} \end{align*}\] Given a subset index \(1 \le i \le \ell\), we can output the set \(S_i\) in time \(T = \mathrm{poly}(\log \ell, t)\) and space \(S = O(\log \ell)\): view \(i\) as its \(\log \ell\)-bit representation, and divide this bitstring into \(\log \ell / \log t\) parts of \(\log t\) bits each. The \(k\)-th part represents the \(k\)-th coefficient of the polynomial \(p\) corresponding to index \(i\). Then, for every \(a \in \mathbb{F}_t\), map \((a, p(a))\) to \([d]\). The collection of these mappings is \(S_i\). For the argument that the above construction is a weak design, see the original proof [4].
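For small parameters the construction fits in a few lines of Python (names ours; we use the 0-indexed embedding \((a,b) \mapsto ta + b\)):

```python
from itertools import product
from math import e, log

def hartman_raz_design(t, ell):
    """Sketch of the weak design behind Theorem 1 of Hartman-Raz:
    t prime, ell a power of t.  Each set is the graph of a polynomial
    over F_t of degree < log(ell)/log(t), embedded into [d] = [t^2]."""
    k = round(log(ell) / log(t))  # number of polynomial coefficients
    design = []
    for coeffs in product(range(t), repeat=k):
        S = {t * a + (sum(c * pow(a, j, t) for j, c in enumerate(coeffs)) % t)
             for a in range(t)}
        design.append(S)
    return design
```

For \(t = 3\), \(\ell = 9\) this produces \(9\) sets of size \(3\) in \([9]\); distinct polynomials of degree at most \(1\) agree in at most one point, so pairwise intersections have size at most \(1\) and the weak design condition holds with room to spare.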

[1]
Chandran, N. et al. 2022. Short leakage resilient and non-malleable secret sharing schemes. Annual international cryptology conference (2022), 178–207.
[2]
Cheraghchi, M. et al. 2011. Invertible extractors and wiretap protocols. IEEE Transactions on Information Theory. 58, 2 (2011), 1254–1274.
[3]
Dodis, Y. et al. 2008. Fuzzy extractors: How to generate strong keys from biometrics and other noisy data. SIAM journal on computing. 38, 1 (2008), 97–139.
[4]
Hartman, T. and Raz, R. 2003. On the distribution of the number of roots of polynomials and explicit weak designs. Random Struct. Algorithms. 23, 3 (2003), 235–263. https://doi.org/10.1002/RSA.10095.
[5]
Håstad, J. et al. 1999. A pseudorandom generator from any one-way function. SIAM Journal on Computing. 28, 4 (1999), 1364–1396.
[6]
Impagliazzo, R. et al. 1989. Pseudo-random generation from one-way functions. Proceedings of the twenty-first annual ACM symposium on theory of computing (New York, NY, USA, 1989), 12–24.
[7]
Impagliazzo, R. and Zuckerman, D. 1989. How to recycle random bits. 30th annual symposium on foundations of computer science (1989), 248–253.
[8]
Raz, R. et al. 1999. Extracting all the randomness and reducing the error in trevisan’s extractors. Electron. Colloquium Comput. Complex. TR99-046, (1999).
[9]
Trevisan, L. 1999. Construction of extractors using pseudo-random generators (extended abstract). Proceedings of the thirty-first annual ACM symposium on theory of computing, may 1-4, 1999, atlanta, georgia, USA (1999), 141–148.
[10]
Wikipedia contributors 2024. Method of conditional probabilities — Wikipedia, the free encyclopedia. https://en.wikipedia.org/w/index.php?title=Method_of_conditional_probabilities&oldid=1214641630.

  1. Note \(m - \ell > 0\) since we consider strong extractors where the seed can be public. The quantity \(\Delta \coloneqq m - \ell\) is called the entropy loss.↩︎