Invoking Semi-Honest 2PC Simulators

Christopher Smith

Last Updated: 2025-11-20

1 Preliminaries

1.0.0.1 Non-Uniform Computation.

A non-uniform Turing machine is a Turing machine equipped with an extra “advice” tape. At the start of the machine’s execution, the tape may be loaded with arbitrary information that can depend on the length of the input on the machine’s input tape (but not the content of the input). Typically, one considers polynomial time machines where the length of the advice is polynomial in the input length. In cryptography, the input length is typically the security parameter \(\lambda\), which is written in unary on every machine’s input tape as \(1^{\lambda}\). In what follows, it may often look, notationally, as if the non-uniform reductions to which we give advice receive more input than just \(1^{\lambda}\), and so one might complain that the advice would need to depend on the length of the “entire input”, not just a \(\lambda\)-length prefix of the input. However, this is not the case (at least in this document), as any “extra” input beyond \(1^{\lambda}\) for reductions receiving advice semantically comes from other machines as part of an interaction or oracle query. More precisely, this means that the extra input is written on additional communication/oracle tapes distinct from the machine’s “one true input tape”1, which is just initialized with \(1^{\lambda}\). These formal technicalities are not particularly important for the purposes of this document, at least beyond convincing the reader that non-uniform advice in our reductions only depends on \(\lambda\). For a more formal reference on interactive and oracle access Turing machines, and, more generally, computational models in cryptography, see [2].

Definition 1 (Negligible Function). We say a function \(\mu\) is negligible if for every positive polynomial \(p\) there exists \(N \in \mathbb{N}\) such that for all \(\lambda> N\) it holds that \(\mu(\lambda) < 1 / p(\lambda)\).
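Negligibility is an asymptotic property, so no finite computation can verify it; still, a toy numerical sketch can exhibit the crossover point \(N\) from Definition 1 for one fixed polynomial at a time. Everything below (the `crossover` helper and the particular choices of \(\mu\) and \(p\)) is made up purely for illustration:

```python
# Toy illustration of Definition 1. For a fixed polynomial p, find the
# threshold N past which mu(lam) < 1/p(lam) holds (within a search range).
# This cannot prove negligibility -- that requires the bound for EVERY
# polynomial -- but it makes the quantifier structure concrete.

def crossover(mu, p, search_limit=10_000):
    """Return the smallest N such that mu(lam) < 1/p(lam) for all
    lam in (N, search_limit]; None if violations persist to the limit."""
    last_violation = 0
    for lam in range(1, search_limit + 1):
        if mu(lam) >= 1 / p(lam):
            last_violation = lam
    return last_violation if last_violation < search_limit else None

mu = lambda lam: 2.0 ** -lam          # negligible: beats every polynomial
p = lambda lam: lam ** 10             # one fixed positive polynomial
print(crossover(mu, p))               # finite crossover for this p

not_negl = lambda lam: 1 / lam ** 2   # inverse-polynomial: NOT negligible
print(crossover(not_negl, p))         # never beats 1/lam^10 -> None
```

Note that `not_negl` fails against this single polynomial already, while `mu` would pass against any polynomial one substitutes for `p`.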

Definition 2 (Computational Indistinguishability [3]). Two probability ensembles (i.e., infinite sequences of random variables) \(X = \left \{ X(a,\lambda) \right \}_{a \in \left \{ 0,1 \right \}^*, \lambda\in \mathbb{N}}\) and \(Y = \left \{ Y(a,\lambda) \right \}_{a \in \left \{ 0,1 \right \}^*, \lambda\in \mathbb{N}}\) are computationally indistinguishable, denoted \(X \stackrel{\text{c}}{\approx} Y\), if for every non-uniform PPT algorithm \(D\) there exists a negligible function \(\mu\) such that for every \(a \in \left \{ 0,1 \right \}^*\) and every \(\lambda\in \mathbb{N}\), \[\begin{equation*} \left \lvert \underset{}{\Pr} \left [ D(1^{\lambda}, X(a,\lambda), a) = 1 \right ] - \underset{}{\Pr} \left [ D(1^{\lambda}, Y(a, \lambda), a) = 1 \right ] \right \rvert \leq \mu(\lambda) \end{equation*}\]
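As a toy companion to Definition 2: for distributions over a small finite support, the advantage of the best possible (computationally unbounded) distinguisher equals the statistical distance, which upper-bounds what any PPT \(D\) can achieve. The function and example distributions below are illustrative only:

```python
# Toy illustration of the advantage term in Definition 2. Over a finite
# support, max_D |Pr[D(X)=1] - Pr[D(Y)=1]| is attained by the (possibly
# inefficient) distinguisher that outputs 1 exactly on outcomes where X is
# more likely than Y, and equals the statistical distance below.

def statistical_distance(px, py):
    """px, py: dicts mapping outcomes to probabilities."""
    support = set(px) | set(py)
    return 0.5 * sum(abs(px.get(s, 0.0) - py.get(s, 0.0)) for s in support)

# One-bit example: X uniform, Y slightly biased toward 1.
X = {0: 0.5, 1: 0.5}
Y = {0: 0.4, 1: 0.6}
print(statistical_distance(X, Y))  # approximately 0.1
```

Computational indistinguishability only quantifies over efficient \(D\), so ensembles can be computationally indistinguishable while having large statistical distance (e.g., pseudorandom vs. truly random strings).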

Definition 3 (Semi-Honest 2PC [3]). We say that the two-party protocol \(\Pi = (A,B)\) securely evaluates a PPT functionality \(f : \left \{ 0,1 \right \}^* \times \left \{ 0,1 \right \}^* \to \left \{ 0,1 \right \}^* \times \left \{ 0,1 \right \}^*\) in the presence of semi-honest adversaries, if there exists a PPT simulator \(\mathsf{Sim}= (\mathsf{Sim}_A, \mathsf{Sim}_B)\) such that for all \(x_A,x_B\), and for all \(\lambda\in \mathbb{N}\), it holds that: \[\begin{align*} \label{eq:sfe} \left \{ (\mathsf{Sim}_A(1^{\lambda}, x_A, f_A(x_A, x_B)), f(x_A, x_B)) \right \}_{x_A, x_B, \lambda} \stackrel{\text{c}}{\approx} \left \{ (\mathsf{view}_A^{\Pi}(x_A, x_B, \lambda), \mathsf{out}^{\Pi}(x_A, x_B, \lambda)) \right \}_{x_A, x_B, \lambda} \\ % \left \{ (\mathsf{Sim}_B(1^{\lambda}, x_B,f_B(x_A,x_B)), f(x_A, x_B)) \right \}_{x_A, x_B, \lambda} \stackrel{\text{c}}{\approx} \left \{ (\mathsf{view}_B^{\Pi}(x_A, x_B, \lambda), \mathsf{out}^{\Pi}(x_A, x_B, \lambda)) \right \}_{x_A, x_B, \lambda} \end{align*}\] where \(\mathsf{view}_P^{\Pi}(x_A, x_B, \lambda)\) denotes party \(P\)’s view in an execution of \(\Pi\) on inputs \(x_A, x_B\) with security parameter \(\lambda\) (i.e., its input, its random coins, and the messages it received), \(\mathsf{out}^{\Pi}(x_A, x_B, \lambda)\) denotes the joint output of both parties in that execution, and \(f_P(x_A, x_B)\) denotes party \(P\)’s component of \(f(x_A, x_B)\).

2 Easy: Simulated Preamble, Uniform Reduction

Let \(f_{pre}\) be any two-party PPT functionality securely computed in the semi-honest setting by some 2PC protocol \(\Pi_{pre}\). Define a new two-party protocol \(\Pi = (A,B)\) where \(A\) and \(B\) first execute \(\Pi_{pre}\) and then exchange some polynomial number of (basically) arbitrary additional messages. In more detail, define the view of party \(B\) in a real execution of \(\Pi\) as follows:
\(\mathsf{view}^{\Pi}_B(x_A, x_B, \lambda)\):

  1. Sample sufficient poly-sized randomness \(r_A = r^{pre}_A \| r^{post}_A\) and \(r_B = r^{pre}_B \| r^{post}_B\) for both parties.

  2. Emulate an execution of \(\Pi_{pre}\) where \(A\) enters the protocol with input \((1^{\lambda}, x_A)\) and random coins \(r^{pre}_A\), and \(B\) does the same with \((1^{\lambda}, x_B), r^{pre}_B\). This generates a preamble transcript \((\tau^{pre}_A, \tau^{pre}_B)\), where \(\tau^{pre}_A\) are the messages received by \(A\), and \(\tau^{pre}_B\) is defined analogously. It also generates preamble outputs \((y^{pre}_A, y^{pre}_B)\).

  3. Emulate some arbitrary interactive protocol between \(A\) and \(B\). If this interaction is randomized, then \(A\) uses \(r^{post}_A\) as random coins and \(B\) uses \(r^{post}_B\) as random coins. Let \((\tau^{post}_A, \tau^{post}_B)\) denote the transcript of this interaction.

  4. Output party \(B\)’s view consisting of: \((x_B, r_B, \tau^{pre}_B, \tau^{post}_B)\).
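The four steps above can be made concrete with a toy sketch. Assume, purely for illustration, a strawman \(\Pi_{pre}\) for the XOR functionality \(f_{pre}(x_A, x_B) = (x_A \oplus x_B, x_A \oplus x_B)\) in which the parties simply exchange their one-bit inputs (cleartext exchange is simulatable for XOR, since each party can recover the other’s input from its own input and the output), and a placeholder postamble in which \(A\) sends one fresh random bit:

```python
import secrets

# Minimal executable sketch of steps 1-4 for a strawman Pi_pre computing
# f_pre(xA, xB) = (xA ^ xB, xA ^ xB) by exchanging one-bit inputs in the
# clear, followed by an arbitrary placeholder postamble. This is NOT a
# general 2PC protocol -- just one concrete instance of the template above.

def view_B(xA: int, xB: int):
    # Step 1: sample randomness for both parties, split into pre/post parts.
    # (This toy Pi_pre is deterministic, so the pre-coins go unused.)
    rA_pre, rA_post = secrets.randbelow(2), secrets.randbelow(2)
    rB_pre, rB_post = secrets.randbelow(2), secrets.randbelow(2)
    # Step 2: emulate Pi_pre. B receives A's single message xA and both
    # parties output y_pre = xA ^ xB.
    tau_pre_B = (xA,)
    y_pre_B = xA ^ xB
    # Step 3: emulate the arbitrary additional interaction (A sends a
    # fresh random bit, derived from its post-preamble coins).
    tau_post_B = (rA_post,)
    # Step 4: assemble B's view.
    rB = (rB_pre, rB_post)
    return (xB, rB, tau_pre_B, tau_post_B)

example = view_B(1, 0)
```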

 
Of course, since the additional messages are arbitrary, we cannot say whether \(\Pi\) as a whole is secure. All we wish to show in this example is that \(\mathsf{view}^{\Pi}_B(x_A, x_B, \lambda)\) is indistinguishable from the following hybrid view where we invoke the simulator \(\mathsf{Sim}^{pre}_B\) guaranteed by the 2PC security of \(\Pi_{pre}\) for party \(B\):
\(H_1(x_A, x_B, \lambda)\):

  1. Letting \((y^{pre}_A, y^{pre}_B) = f_{pre}(x_A, x_B)\), compute \((x_B, r^{pre}_B, \widetilde{\tau}^{pre}_B) \gets \mathsf{Sim}^{pre}_B(1^{\lambda}, x_B, y^{pre}_B)\).

  2. Generate \(\tau^{post}_A, \tau^{post}_B\) with emulation of the same interactive protocol used in the real execution, sampling random coins \(r^{post}_A, r^{post}_B\) on the fly as necessary.

  3. Output party \(B\)’s hybrid view consisting of: \((x_B, (r^{pre}_B \| r^{post}_B), \widetilde{\tau}^{pre}_B, \tau^{post}_B)\).

Claim 4. \(\mathsf{view}^{\Pi}_B(x_A, x_B, \lambda) \stackrel{\text{c}}{\approx} H_1(x_A, x_B, \lambda)\).

Proof of 4. Let \(D_H\) be any poly-time distinguisher between the hybrids. We construct an adversary \(D_{2pc}\) with oracle access to \(D_H\) against the semi-honest security of \(\Pi_{pre}\) as follows. \(D_{2pc}\) parses its input as \((1^{\lambda}, (x_B, r^{pre}_B, \hat{\tau}^{pre}_B), (y^{pre}_A, y^{pre}_B), x_A, x_B)\), and uses this information to compute \(\tau^{post}_A, \tau^{post}_B\) as in the real execution, sampling \(r^{post}_A, r^{post}_B\) on the fly as necessary, and setting \(r_B = r^{pre}_B \| r^{post}_B\). It then queries \(b \gets D_H(1^{\lambda}, (x_B, r_B, \hat{\tau}^{pre}_B, \tau^{post}_B), x_A,x_B)\), and outputs \(b\).

Suppressing ensemble indices \((x_A,x_B,\lambda)\) for clarity and without confusion, see that if \(D_{2pc}\)’s input is distributed according to \((\mathsf{view}^{\Pi_{pre}}_B, \mathsf{out}^{\Pi_{pre}})\), then \(D_H\)’s input is distributed according to \(\mathsf{view}^{\Pi}_B\). Conversely, if \(D_{2pc}\)’s input is distributed according to \((\mathsf{Sim}^{pre}_B, f_{pre})\), then \(D_H\)’s input is distributed according to \(H_1\). Thus, if any \(D_H\) distinguishes between \(\mathsf{view}^{\Pi}_B\) and \(H_1\) with non-negligible advantage, then \(D_{2pc}\) breaks the semi-honest security of \(\Pi_{pre}\). The claim follows. ◻
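To see the simulator invocation in step 1 of \(H_1\) concretely, here is a toy sketch under a strawman assumption: \(\Pi_{pre}\) computes the XOR functionality \(f_{pre}(x_A, x_B) = (x_A \oplus x_B, x_A \oplus x_B)\) by having the parties exchange their one-bit inputs, so \(\mathsf{Sim}^{pre}_B\) can reconstruct \(A\)’s single message from \(x_B\) and \(y^{pre}_B\) alone. None of this is the abstract protocol of the claim; it is one concrete instance, and one where the simulation happens to be perfect:

```python
import secrets

# Toy sketch of hybrid H1 for a strawman Pi_pre: the XOR functionality,
# realized by exchanging one-bit inputs in the clear. Sim_pre_B simulates
# B's preamble view from (xB, y_pre_B) only, without seeing xA directly.

def f_pre(xA, xB):
    y = xA ^ xB
    return y, y

def sim_pre_B(xB, y_pre_B):
    # H1 step 1: simulate B's preamble view from B's input and output only.
    rB_pre = secrets.randbelow(2)      # B's (unused) preamble coins
    tau_pre_B = (y_pre_B ^ xB,)        # reconstruct A's cleartext message xA
    return xB, rB_pre, tau_pre_B

def H1(xA, xB):
    _, y_pre_B = f_pre(xA, xB)
    xB_out, rB_pre, tau_pre_B = sim_pre_B(xB, y_pre_B)
    # H1 step 2: emulate the arbitrary postamble as in the real execution,
    # sampling post-preamble coins on the fly.
    rA_post, rB_post = secrets.randbelow(2), secrets.randbelow(2)
    tau_post_B = (rA_post,)
    # H1 step 3: B's hybrid view.
    return xB_out, (rB_pre, rB_post), tau_pre_B, tau_post_B

# For this toy Pi_pre the simulation is perfect: the simulated preamble
# transcript always equals the real one, (xA,).
xA, xB = 1, 0
assert H1(xA, xB)[2] == (xA,)
```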

3 Harder: Simulated Postamble, Non-Uniform Reduction

Now let \(f_{post}\) be any two-party PPT functionality securely computed in the semi-honest setting by some 2PC protocol \(\Pi_{post}\). Define a new protocol \(\Pi = (A,B)\) where \(A\) and \(B\) first exchange some arbitrary messages in a preamble, and then run \(\Pi_{post}\) using information derived from the preamble. More formally, define the view of party \(B\) in a real execution of \(\Pi\) as follows:
\(\mathsf{view}^{\Pi}_B(x_A, x_B, \lambda)\):

  1. Sample sufficient poly-sized randomness \(r_A = r^{pre}_A \| r^{post}_A\) and \(r_B = r^{pre}_B \| r^{post}_B\) for both parties.

  2. Let \(z(\cdot, \cdot)\) be some arbitrary PPT computable function. Emulate an interactive protocol between \(A\) and \(B\) for computing \((z_A, z_B) = z(x_A,x_B)\). If the interaction is randomized, \(A\) uses \(r^{pre}_A\) as random coins and \(B\) uses \(r^{pre}_B\) as random coins. Let \(\tau_{pre} = (\tau^{pre}_A, \tau^{pre}_B)\) denote the transcript of this preamble.

  3. Emulate an execution of \(\Pi_{post}\), where \(A\) enters the protocol with input \((1^{\lambda}, z_A)\) and random coins \(r^{post}_A\), and \(B\) enters the protocol with input \((1^{\lambda}, z_B)\) and random coins \(r^{post}_B\). Let \(y = (y_A, y_B)\) and \(\tau_{post} = (\tau^{post}_A, \tau^{post}_B)\) denote the output and transcript of this execution, respectively.

  4. Output party \(B\)’s view consisting of: \((x_B, r_B, \tau^{pre}_B, \tau^{post}_B)\).

 
We wish to show that this view is indistinguishable from the following hybrid view where we invoke the simulator \(\mathsf{Sim}^{post}_B\) guaranteed by the 2PC security of \(\Pi_{post}\) for party \(B\):
\(H_1(x_A, x_B, \lambda)\):

  1. Emulate the same interactive protocol between \(A\) and \(B\) for computing \((z_A, z_B) = z(x_A, x_B)\) as in the real execution, sampling random coins \(r^{pre}_A, r^{pre}_B\) as necessary. Let \(\tau_{pre} = (\tau^{pre}_A, \tau^{pre}_B)\) denote the transcript of this preamble.

  2. Letting \((y_A, y_B) = f_{post}(z_A, z_B)\), compute \((z_B, r^{post}_B, \widetilde{\tau}^{post}_B) \gets \mathsf{Sim}^{post}_B(1^{\lambda}, z_B, y_B)\).

  3. Output party \(B\)’s hybrid view consisting of \(x_B, (r^{pre}_B \| r^{post}_B), \tau^{pre}_B, \widetilde{\tau}^{post}_B\).

Claim 5. \(\mathsf{view}^{\Pi}_B(x_A, x_B, \lambda) \stackrel{\text{c}}{\approx} H_1(x_A, x_B, \lambda)\).

We present two proofs, both of which construct essentially the same non-uniform reduction. The first proof carefully unpacks the formal definitions of non-uniformity and computational indistinguishability (right down to quantifiers), but results in an annoyingly verbose argument. The second proof is less formal but significantly shorter and easier to read; much of the awkwardness of the first proof is avoided by re-interpreting the “rules of the game” for adversaries in a reduction proof of the kind we are dealing with here. Specifically, in the second proof we allow adversaries to submit ensemble indices to their outside challengers.

Verbose Proof of 5. Suppose the two distributions are not computationally indistinguishable. Then (unpacking definitions and quantifiers, including that of negligible function), there exists a non-uniform poly-time distinguisher \(D_H\), and a polynomial \(p(\cdot)\), such that for every \(N \in \mathbb{N}\) there exists a “bad” \(\lambda^* > N\) and some “bad” \(x^*_A,x^*_B\) such that \[\begin{equation*} \left \lvert \underset{}{\Pr} \left [ D_H(1^{\lambda^*}, \mathsf{view}^{\Pi}_B(x^*_A, x^*_B, \lambda^*), x^*_A, x^*_B) = 1 \right ] - \underset{}{\Pr} \left [ D_H(1^{\lambda^*}, H_1(x^*_A, x^*_B, \lambda^*), x^*_A, x^*_B) = 1 \right ] \right \rvert \geq 1 / p(\lambda^*) \end{equation*}\]

We use this \(D_H\) to construct the following non-uniform adversary \(D_{2pc}\) against the semi-honest security of \(\Pi_{post}\) as follows.

\(D_{2pc}\) parses its input as \((1^{\lambda}, ((z_B, r^{post}_B, \hat{\tau}^{post}_B), (y_A, y_B)), z_A, z_B)\). If \(\lambda\) is one of those bad \(\lambda^*\) for which there exist some bad \((x^*_A, x^*_B)\) on which \(D_H\) distinguishes with non-negligible advantage, then \(D_{2pc}\) receives this bad pair as non-uniform advice, along with some \((\tau^{pre*}_B, r^{pre}_B, z^*_A, z^*_B)\) distributed according to \(z(x^*_A, x^*_B; r^{pre}_A \| r^{pre}_B)\) (over random choice of \(r^{pre}_A\))2. Otherwise, it receives an empty string as advice. If \(D_{2pc}\) has nothing written on its advice tape, or if the \((z_A,z_B)\) parsed from its input does not equal the \((z^*_A, z^*_B)\) written on its advice tape, then \(D_{2pc}\) outputs a random bit3. Else, it must be that \((z_A, z_B) = (z^*_A, z^*_B)\). In this case, \(D_{2pc}\) queries and outputs \(b \gets D_H(1^{\lambda^*}, (x^*_B, (r^{pre}_B \| r^{post}_B), \tau^{pre*}_B, \hat{\tau}^{post}_B), x^*_A, x^*_B)\).

Let \((x^*_A, x^*_B, \tau^{pre*}_B, r^{pre}_B, z^*_A, z^*_B)\) be the advice string for \(D_{2pc}\) on input a bad \(\lambda^*\). See that if \(D_{2pc}\)’s input is distributed according to \((\mathsf{view}^{\Pi_{post}}_B, \mathsf{out}^{\Pi_{post}})(z^*_A, z^*_B, \lambda^*)\), then \(D_H\)’s input is distributed according to \(\mathsf{view}^{\Pi}_B(x^*_A, x^*_B, \lambda^*)\). Else, if \(D_{2pc}\)’s input is distributed according to \((\mathsf{Sim}^{post}_B(1^{\lambda^*}, z^*_B, y_B), (y_A,y_B))\) for \((y_A,y_B) \gets f_{post}(z^*_A, z^*_B)\), then \(D_H\)’s input is distributed according to \(H_1(x^*_A, x^*_B, \lambda^*)\). The claim follows, as we have just shown that for every \(N \in \mathbb{N}\) there exists a \(\lambda^* > N\) and some \(z^*_A, z^*_B\) such that the distinguishing advantage of \(D_{2pc}\) between \((\mathsf{view}^{\Pi_{post}}_B, \mathsf{out}^{\Pi_{post}})(z^*_A, z^*_B, \lambda^*)\) and \((\mathsf{Sim}^{post}_B(1^{\lambda^*}, z^*_B, y_B), (y_A,y_B))\) for \((y_A,y_B) \gets f_{post}(z^*_A, z^*_B)\) is at least \(1/p(\lambda^*)\), and hence non-negligible. ◻
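The control flow of \(D_{2pc}\) (the advice check, the “give up” branch, and the reassembly of the hybrid view) can be sketched as follows. The concrete challenge format, advice tuple, and stub \(D_H\) are all placeholders introduced for illustration, not part of any real interface:

```python
import secrets

# Control-flow sketch of the non-uniform reduction D_2pc from the verbose
# proof. The essential boilerplate: output a random bit unless the received
# challenge matches the bad index written on the advice tape; otherwise
# reassemble B's hybrid view and forward it to the hybrid distinguisher D_H.

def D_2pc(lam, challenge, advice, D_H):
    (zB, rB_post, tau_post_hat), (yA, yB), zA_in, zB_in = challenge
    if advice is None:                       # lam is not a bad lambda*
        return secrets.randbelow(2)          # give up: random bit
    xA_s, xB_s, tau_pre_s_B, rB_pre, zA_s, zB_s = advice
    if (zA_in, zB_in) != (zA_s, zB_s):       # challenge disagrees w/ advice
        return secrets.randbelow(2)          # give up: random bit
    rB = (rB_pre, rB_post)                   # reassemble B's hybrid view
    return D_H(lam, (xB_s, rB, tau_pre_s_B, tau_post_hat), xA_s, xB_s)

# Usage with a stub distinguisher that always outputs 1: when the challenge
# matches the advised (zA*, zB*), D_2pc forwards D_H's answer verbatim.
stub_D_H = lambda lam, view, xA, xB: 1
advice = (0, 1, ("m1",), 0, "zA*", "zB*")
challenge = (("zB*", 1, ("m2",)), (0, 0), "zA*", "zB*")
assert D_2pc(8, challenge, advice, stub_D_H) == 1
```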

Observe that much of the verbosity of the above proof concerns itself with boilerplate for getting the reduction to only play against challenges of its choosing: it basically “gives up” on any challenge that does not agree with its non-uniform advice. Thus, in this specific situation where adversaries only need to show indistinguishability for an infinite subsequence of “bad” ensemble indices that they receive non-uniform advice for, we can take the liberty of re-interpreting the rules of the reduction proof such that adversaries can submit desired ensemble indices to their outside challengers. This significantly cleans up the proof while preserving the main idea of the reduction, and not sacrificing too much rigor.

Shorter Proof. Let \(D_H\) be a (non-uniform) poly-time distinguisher between the hybrids. We use this \(D_H\) to construct a (non-uniform) adversary \(D_{2pc}\) against the semi-honest security of \(\Pi_{post}\) as follows. \(D_{2pc}(1^{\lambda})\) queries \((x_A, x_B) \gets D_H(1^{\lambda})\) and computes \(\tau^{pre}_B, r^{pre}_B, z_A, z_B\) distributed according to \(z(x_A, x_B ; r^{pre}_A \| r^{pre}_B)\) (over random choice of \(r^{pre}_A\)). It then submits \(z_A, z_B\) to the challenger, and parses the received challenge as \(((z_B, r^{post}_B, \hat{\tau}^{post}_B), (y_A, y_B))\). Letting \(r_B = r^{pre}_B \| r^{post}_B\), \(D_{2pc}\) queries and outputs \(b \gets D_H(1^{\lambda}, (x_B, r_B, \tau^{pre}_B, \hat{\tau}^{post}_B), x_A, x_B)\).

See that if \(\hat{\tau}^{post}_B\) belongs to a real view of \(\Pi_{post}\), then \(D_H\)’s input is distributed according to \(\mathsf{view}^{\Pi}_B\). If \(\hat{\tau}^{post}_B\) belongs to a simulated view, then \(D_H\)’s input is distributed according to \(H_1\). Thus, if \(D_H\) distinguishes with non-negligible advantage, so does \(D_{2pc}\). ◻

[1]
Gentry, C. and Wichs, D. 2011. Separating succinct non-interactive arguments from all falsifiable assumptions. Proceedings of the forty-third annual ACM symposium on theory of computing (2011), 99–108.
[2]
Goldreich, O. 2001. Foundations of cryptography: Volume 1, basic tools. Cambridge University Press.
[3]
Lindell, Y. 2017. How to simulate it - A tutorial on the simulation proof technique. Tutorials on the foundations of cryptography. Y. Lindell, ed. Springer International Publishing. 277–346.

  1. To really get in the weeds, interactive Turing machines share a common input tape, so all parties in an interactive computation automatically share a common security parameter. The situation is different with an oracle access machine, which may query its oracle on any input of its choosing, and this input may include an arbitrary security parameter. This subtle issue is sometimes important in security proofs (e.g., [1]).↩︎

  2. Giving the reduction \((\tau^{pre*}_B, r^{pre}_B, z^*_A, z^*_B)\) as advice is only a convenience, as it could sample these on its own if necessary from \((x^*_A, x^*_B)\).↩︎

  3. Or it outputs a garbage bit, or it aborts. Intuitively, this is because our proof only needs \(D_{2pc}\) to “win” against its challenger for some infinite subsequence of bad ensemble indices \((z^*_A, z^*_B, \lambda^*)_{N \in \mathbb{N}}\) that it receives advice for. That is, we only care about the behavior of \(D_{2pc}\) on the bad challenges it is rigged to win against.↩︎