D Error propagation and approximation

This part is based on several sources, such as (Hogan 2006) and (Ku et al. 1966). In the following, let \(A\) be the quantity that we want to estimate, and \(\overline{A}\) our estimate. We start with the definitions of absolute and relative error.

Definition D.1 (Absolute error) \[|A - \overline{A} | = \epsilon_{Abs}\]

Definition D.2 (Relative error) \[\frac{| A - \overline{A}| }{|A|} = \epsilon_R\] or equivalently, for \(A > 0\), \[ A(1-\epsilon_{R}) \leq \overline{A} \leq A(1+\epsilon_{R})\]

Thus, since \(\epsilon_{Abs} = |A|\,\epsilon_{R}\), observe that:

  • If (and only if) \(|A| \leq 1\), then \(\epsilon_{Abs} \leq \epsilon_{R}\).
  • If (and only if) \(|A| \geq 1\), then \(\epsilon_{Abs} \geq \epsilon_{R}\).

We will study the relation between the two errors, often leveraging the identity \(\epsilon_{Abs} = \epsilon_R |A|\). Oftentimes, we would like to move from a relative to an absolute precision, or vice versa.

D.0.0.0.1 From absolute to relative precision

Suppose that we have an algorithm that, in time \(O(f(\frac{1}{\epsilon_{Abs}}))\), gives us \(|A-\overline{A} | \leq \epsilon_{Abs}\) for \(\epsilon_{Abs} \in (0, 1]\), and we want a relative error \(\epsilon_R > 0\):

  • If \(|A| < 1\), then we want to obtain an absolute error \(\epsilon_{Abs}\) such that \(\epsilon_{Abs} = \epsilon_R A\). For this, we need a lower bound \(\lambda^{-1}\) on \(A\), i.e. \(A \geq \lambda^{-1}\). If we have it, we can just set \(\epsilon_{Abs} = \epsilon_R \lambda^{-1}\) and run our algorithm in time \(O(f(\frac{\lambda}{\epsilon_{R}}))\).

  • If \(|A| > 1\) and we want a relative error \(\epsilon_R\), then by running the algorithm with \(\epsilon_{Abs}=\epsilon_R\) we already have a relative error bound, as \(\frac{|A- \overline{A}|}{|A|} \leq |A- \overline{A}| \leq \epsilon_{Abs}\). Note that \(\epsilon_{Abs}\) is meant to stay in \((0,1]\), as it would not make sense to have a runtime of \(O(\frac{1}{\epsilon_{Abs}})\) for \(\epsilon_{Abs} > 1\).
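The first case can be sketched in a few lines, where `estimate_abs` is a hypothetical black box standing in for the absolute-precision algorithm (the names here are illustrative, not from any library):

```python
def to_relative(estimate_abs, eps_r, lam):
    """Convert an absolute-precision estimator into a relative-precision one.

    estimate_abs(eps_abs): black box returning A_bar with |A - A_bar| <= eps_abs.
    eps_r: target relative error.
    lam: we assume the promise A >= 1/lam, i.e. 1/lam is a known lower bound on A.
    """
    # With eps_abs = eps_r / lam we get |A - A_bar| <= eps_r / lam <= eps_r * A.
    eps_abs = eps_r / lam
    return estimate_abs(eps_abs)

# Toy check: true value A = 0.2, with the promise A >= 1/lam = 0.1.
A = 0.2
noisy = lambda eps_abs: A + 0.9 * eps_abs  # adversarial, but within the guarantee
A_bar = to_relative(noisy, eps_r=0.1, lam=10)
```

The cost of not knowing \(A\) exactly is the factor \(\lambda\) in the runtime: the looser the lower bound, the smaller the absolute error we must request.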

Exercise D.1 Are there cases of algorithms with \(\epsilon_{Abs} > 1\)? Does it make sense? Can you give examples?

D.0.0.0.2 From relative to absolute precision

If we have an algorithm that in time \(O(f(\frac{1}{\epsilon_{R}}))\) gives us \(|A-\overline{A} | \leq \epsilon_{R} A\) and we want an absolute error \(\epsilon_{Abs}\):

  • If \(A \leq 1\), we can just call the algorithm with error \(\epsilon_R = \epsilon_{Abs}\), and thus obtain \[|A- \overline{A} | \leq \epsilon_R A \Rightarrow |A- \overline{A} | \leq \epsilon_{R} = \epsilon_{Abs},\] since \(A \leq 1\) implies \(\epsilon_R A \leq \epsilon_R\). Note that the runtime of the algorithm might (should!) depend on the quantity \(A\) that we want to estimate, so we could improve upon this by trying not to pay a price that depends on \(A\) in the runtime.
  • If \(A > 1\), we want \(|A- \overline{A} | \leq \epsilon_{Abs}\), which we would obtain by setting \(\epsilon_R = \frac{\epsilon_{Abs}}{A}\); since \(A\) is unknown, we proceed in two stages. We first run the algorithm with a constant relative error, e.g. \(\epsilon_R = 1/4\), obtaining a rough estimate \(\overline{A}\) with \(\frac{3}{4}A \leq \overline{A} \leq \frac{5}{4}A\). We then run it again with the improved \(\epsilon_R = \frac{\epsilon_{Abs}}{2\overline{A}} \leq \frac{\epsilon_{Abs}}{A}\), so that \(|A- \overline{A}| \leq \epsilon_R A \leq \epsilon_{Abs}\), for a total runtime of \(O(f(\frac{A}{\epsilon_{Abs}}))\).
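The two-stage trick above can be sketched as follows, with a hypothetical black box `estimate_rel` standing in for the relative-precision algorithm (illustrative names, not a specific library):

```python
def to_absolute(estimate_rel, eps_abs):
    """Two-stage conversion of a relative-precision estimator (for A > 1)
    into an absolute-precision one.

    estimate_rel(eps_r): black box returning A_bar with |A - A_bar| <= eps_r * A.
    """
    rough = estimate_rel(0.25)     # rough estimate: 3A/4 <= rough <= 5A/4
    eps_r = eps_abs / (2 * rough)  # 2*rough >= 3A/2 > A, so eps_r * A <= eps_abs
    return estimate_rel(eps_r)

# Toy check with A = 5 and a target absolute error of 0.01.
A = 5.0
noisy = lambda eps_r: A * (1 + 0.9 * eps_r)  # within the relative-error guarantee
A_bar = to_absolute(noisy, eps_abs=0.01)
```

The first, constant-precision run is cheap; the cost is dominated by the second run, whose precision depends on \(A\) only through the rough estimate.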

Example D.1 Amplitude estimation outputs a scalar \(0 \leq \widetilde{p}\leq 1\) approximating some probability \(p\), such that \(|p-\widetilde{p}| \leq \epsilon p\), in time \(O \left(\frac{1}{\epsilon p}\right)\). Since \(p \leq 1\), this directly gives an absolute error of \(\epsilon\) in the estimate (which we will rarely use, as often we would like to multiply this estimate, so that the error scales proportionately).


D.0.1 Propagation of error in functions of one variable

For more details, see (Hogan 2006) and (Ku et al. 1966).
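As a minimal reminder of the standard first-order rule (see (Ku et al. 1966)): if \(\overline{x}\) estimates \(x\) with \(|x - \overline{x}| \leq \epsilon\) and \(f\) is differentiable, then to first order \(|f(x) - f(\overline{x})| \approx |f'(x)|\,\epsilon\). A numerical sketch:

```python
def propagated_error(f_prime, x, eps):
    """First-order estimate of the error of f(x_bar) when |x - x_bar| <= eps."""
    return abs(f_prime(x)) * eps

# Example: f(x) = x**2 at x = 3, with eps = 1e-4 on the input.
x, eps = 3.0, 1e-4
first_order = propagated_error(lambda t: 2 * t, x, eps)  # |f'(3)| * eps = 6e-4
actual = abs((x + eps) ** 2 - x ** 2)                    # 6e-4 + eps**2
```

The gap between `actual` and `first_order` is the second-order term \(\epsilon^2\), negligible for small \(\epsilon\).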


D.1 Useful quantum subroutines and folklore results

We will often make use of a tool developed in (Wiebe, Kapoor, and Svore 2018). It is a standard technique in classical computer science to boost the success probability of a randomized algorithm by repeating it and computing some statistic of the results. In the quantum case, at a high level, we take multiple copies of the output of the amplitude estimation procedure, compute the median, and reverse the circuit in order to get rid of the garbage.

Lemma D.1 (Median evaluation (Wiebe, Kapoor, and Svore 2018)) Let \(\mathcal{U}\) be a unitary operation that maps \[\mathcal{U}:|0^{\otimes n}\rangle\mapsto \sqrt{a}|x,1\rangle+\sqrt{1-a} |G,0\rangle\] for some \(1/2 < a \le 1\) in time \(T\). Then there exists a quantum algorithm that, for any \(\Delta>0\) and for any \(1/2<a_0 \le a\), produces a state \(|\Psi\rangle\) such that \(\||\Psi\rangle-|0^{\otimes nL}\rangle|x\rangle\|\le \sqrt{2\Delta}\) for some integer \(L\), in time \[ 2T\left\lceil\frac{\ln(1/\Delta)}{2\left(|a_0|-\frac{1}{2} \right)^2}\right\rceil. \]
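A classical analogue conveys the intuition behind this lemma (this is a sketch of why the median helps, not the quantum circuit): if a single run returns the correct answer with probability \(a > 1/2\), the median of \(L\) independent runs fails with probability at most \(e^{-2L(a-1/2)^2}\) by a Chernoff bound.

```python
import random

def median_boost(run_once, L, seed=0):
    """Repeat a randomized estimator L times and return the median outcome."""
    rng = random.Random(seed)
    outcomes = sorted(run_once(rng) for _ in range(L))
    return outcomes[L // 2]

# Toy estimator: correct answer 42 with probability a = 0.9, garbage otherwise.
def run_once(rng):
    return 42 if rng.random() < 0.9 else rng.randrange(1000)

boosted = median_boost(run_once, L=101)
```

With \(a = 0.9\) and \(L = 101\), the failure probability of the median is about \(e^{-32}\), even though each run fails one time in ten.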

We will report here some simple statements from literature which now are folklore.

Lemma D.2 ([@kerenidis2019qmeans]) Let \(\epsilon_b\) be the error we commit in estimating \(|c\rangle\), i.e. \(\left\lVert |c\rangle - |\overline{c}\rangle\right\rVert \leq \epsilon_b\), and \(\epsilon_a\) the error we commit in estimating the norm, i.e. \(|\left\lVert c\right\rVert - \overline{\left\lVert c\right\rVert}| \leq \epsilon_a \left\lVert c\right\rVert\). Then \(\left\lVert\overline{c} - c\right\rVert \leq \sqrt{\eta} (\epsilon_a + \epsilon_b)\).

Lemma D.3 ([@kerenidis2017quantumsquares]) Let \(\theta\) be the angle between vectors \(x,y\), and assume that \(\theta < \pi/2\). Then \(\left\lVert x-y\right\rVert \leq \epsilon\) implies \(\left\lVert|x\rangle - |y\rangle\right\rVert \leq \frac{\sqrt{2}\epsilon}{\left\lVert x\right\rVert}\), where \(|x\rangle\) and \(|y\rangle\) are the unit vectors (in \(\ell_2\) norm) corresponding to \(x\) and \(y\).
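A quick numerical sanity check of this bound (a sketch on a concrete pair of vectors, not a proof):

```python
import math

def unit(v):
    """Normalize a vector to unit l2 norm."""
    n = math.sqrt(sum(t * t for t in v))
    return [t / n for t in v]

def dist(u, v):
    """Euclidean distance between two vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

x = [3.0, 4.0]   # ||x|| = 5
y = [3.1, 4.2]   # close to x, so the angle is well below pi/2
eps = dist(x, y)
lhs = dist(unit(x), unit(y))            # || |x> - |y> ||
rhs = math.sqrt(2) * eps / 5.0          # sqrt(2) * eps / ||x||
```

Note that the bound degrades for small \(\left\lVert x\right\rVert\): the same additive error on short vectors moves the corresponding unit vectors much more.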

G References

Hogan, Robin. 2006. “How to Combine Errors.”
Ku, Harry H., et al. 1966. “Notes on the Use of Propagation of Error Formulas.” Journal of Research of the National Bureau of Standards 70 (4): 263–73.
Wiebe, Nathan, Ashish Kapoor, and Krysta M Svore. 2018. “Quantum Nearest-Neighbor Algorithms for Machine Learning.” Quantum Information and Computation 15.