E Error propagation and approximation

This part is based on several different sources, such as (Hogan 2006) and (Ku and others 1966).

Definition E.1 (Absolute error) \[|A - \overline{A} | = \epsilon_{Abs}\]
Definition E.2 (Relative error) \[\frac{| A - \overline{A}| }{|A|} = \epsilon_R, \text{ or equivalently (for } A > 0\text{)}\] \[ A(1-\epsilon_{R}) \leq \overline{A} \leq A(1+\epsilon_{R})\]

Thus observe that:

  • If (and only if) \(|A| < 1\), then, \(\epsilon_{Abs} \leq \epsilon_{R}\)
  • If (and only if) \(|A| > 1\), then, \(\epsilon_{Abs} \geq \epsilon_{R}\)
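The definitions above translate directly into code. The following Python sketch (with arbitrarily chosen values) checks the relation \(\epsilon_{Abs} = \epsilon_R |A|\) and the resulting ordering of the two errors:

```python
def errors(A, A_bar):
    """Return (absolute error, relative error) of the estimate A_bar of A."""
    eps_abs = abs(A - A_bar)
    eps_r = eps_abs / abs(A)
    return eps_abs, eps_r

# |A| < 1: the relative error dominates the absolute one.
ea, er = errors(0.5, 0.49)
assert ea <= er

# |A| > 1: the absolute error dominates the relative one.
ea, er = errors(20.0, 19.5)
assert ea >= er
```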

We will study the relation between the two errors, often leveraging the idea of seeing what happens when we set \(\epsilon_{Abs} = \epsilon_R A\).

Let’s imagine an algorithm \(\mathcal{A}\) that estimates a quantity \(A\) in time \(O(\text{poly}(\epsilon^{-1}))\). We would like to move from relative to absolute precision, or vice versa.

E.0.0.0.1 From absolute to relative precision

Suppose that we have an algorithm that in time \(O(f(\frac{1}{\epsilon_{Abs}}))\) gives us \(|A-\overline{A} | \leq \epsilon_{Abs}\) for \(\epsilon_{Abs} \in (0, 1]\) and we want a relative error \(\epsilon_R > 0\):

  • If \(|A| < 1\), then we need to set \(\epsilon_{Abs} = \epsilon_R A\). For this, we need a lower bound \(\lambda^{-1}\) on \(|A|\). If we have it, we can just set \(\epsilon_{Abs} = \epsilon_R \lambda^{-1}\) and run our algorithm in time \(O(f(\frac{\lambda}{\epsilon_{R}}))\)

  • If \(|A| > 1\) and we want a relative error \(\epsilon_R\), then by running the algorithm with \(\epsilon_{Abs}=\epsilon_R\) we already have a relative error bound, as \(\frac{|A- \overline{A}|}{A} \leq |A- \overline{A}| \leq \epsilon_{Abs}\). Note that \(\epsilon_{Abs}\) is meant to stay in \((0,1]\), as it wouldn’t make sense to have a runtime of \(O(\frac{1}{\epsilon_{Abs}})\) for \(\epsilon_{Abs} > 1\).
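The first case can be sketched as follows, assuming a hypothetical black-box estimator `estimate_abs` with tunable absolute error; the rescaling \(\epsilon_{Abs} = \epsilon_R \lambda^{-1}\) is wrapped in a second function:

```python
import random

def estimate_abs(A, eps_abs):
    """Hypothetical black box: returns A up to absolute error eps_abs."""
    return A + random.uniform(-eps_abs, eps_abs)

def estimate_rel(A, eps_r, lam_inv):
    """Relative-error estimate of A, assuming the lower bound lam_inv <= A < 1.
    Rescaling eps_abs = eps_r * lam_inv guarantees |A - A_bar| <= eps_r * A."""
    return estimate_abs(A, eps_r * lam_inv)

A = 0.3
A_bar = estimate_rel(A, eps_r=0.1, lam_inv=0.25)  # lower bound: 0.25 <= A
assert abs(A - A_bar) <= 0.1 * A
```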

E.0.0.0.2 From relative to absolute precision

If we have an algorithm that in time \(O(f(\frac{1}{\epsilon_{R}}))\) gives us \(|A-\overline{A} | \leq A\epsilon_{R}\) and we want an absolute error \(\epsilon_{Abs}\), it suffices to have an upper bound \(\lambda \geq A\): setting \(\epsilon_R = \epsilon_{Abs}/\lambda\) gives \(|A-\overline{A}| \leq A\epsilon_R \leq \epsilon_{Abs}\), with a runtime of \(O(f(\frac{\lambda}{\epsilon_{Abs}}))\).
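A minimal sketch of this converse direction, assuming a hypothetical black-box estimator `estimate_rel` with tunable relative error and an upper bound \(\lambda \geq A\):

```python
import random

def estimate_rel(A, eps_r):
    """Hypothetical black box: returns A up to relative error eps_r."""
    return A * (1 + random.uniform(-eps_r, eps_r))

def estimate_abs(A, eps_abs, lam):
    """Absolute-error estimate of A, assuming the upper bound A <= lam.
    Setting eps_r = eps_abs / lam gives |A - A_bar| <= eps_r * A <= eps_abs."""
    return estimate_rel(A, eps_abs / lam)

A = 7.0
A_bar = estimate_abs(A, eps_abs=0.05, lam=10.0)
assert abs(A - A_bar) <= 0.05
```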

E.0.1 Propagation of error in functions of one variable

We now give a formal statement and proof of how relative errors propagate to the ratio of two estimated quantities.

Theorem E.1 ((Gribling 2019) (Van Apeldoorn et al. 2020)) Let \(0 \leq \theta \leq 1\) and let \(\alpha, \beta, \tilde{\alpha}, \tilde{\beta}\) be positive real numbers, such that \(|\alpha - \tilde{\alpha}| \leq \alpha\theta/3\), and \(|\beta - \tilde{\beta}| \leq \beta\theta/3\). Then: \[\left|\frac{\alpha}{\beta} - \frac{\tilde{\alpha}}{\tilde{\beta}} \right| \leq \theta \frac{\alpha}{\beta}\]

Proof. \[\left|\frac{\alpha}{\beta} - \frac{\tilde{\alpha}}{\tilde{\beta}} \right| = \left| \frac{\alpha\tilde{\beta}- \tilde{\alpha}\beta}{\beta\tilde{\beta}} \right| = \left| \frac{\alpha\tilde{\beta}- \tilde{\alpha}\beta+\alpha\beta - \alpha\beta}{\beta\tilde{\beta}} \right| \leq\] \[\left|\frac{\alpha\beta-\tilde{\alpha}\beta}{\beta\tilde{\beta}} \right| + \alpha\left| \frac{\tilde{\beta}-\beta}{\beta\tilde{\beta}} \right| = \left|\frac{\alpha -\tilde \alpha}{\tilde \beta} \right| + \frac{\alpha}{\tilde\beta}\frac{|\tilde{\beta}-\beta|}{\beta} \leq \left|\frac{\alpha -\tilde \alpha}{\tilde \beta} \right| + \frac{\alpha}{\tilde\beta}\frac{\theta}{3}\]

Using the hypothesis \(|\alpha-\tilde{\alpha}|\leq \alpha \theta/3\), \[\leq \frac{1}{|\tilde{\beta}|}\frac{\alpha\theta}{3} + \frac{\alpha}{\tilde\beta}\frac{\theta}{3} = \frac{2}{3}\frac{\alpha\theta}{\tilde{\beta}}\] Now we can use the fact that \(\tilde \beta \geq \frac{2}{3}\beta\). This comes from the relative-error hypothesis on \(\beta\): \(\beta(1-\frac{1}{3}\theta) \leq \tilde{\beta} \leq \beta(1+\frac{1}{3}\theta)\), so in the worst case, for \(\theta=1\), we have \(\tilde \beta \geq \frac{2}{3}\beta\). Putting this lower bound in the denominator, we get: \[\frac{2}{3}\frac{\alpha\theta}{\tilde{\beta}} \leq \frac{2}{3}\cdot\frac{3}{2}\cdot\frac{\alpha\theta}{\beta} = \theta\frac{\alpha}{\beta}.\]
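A brute-force numerical check (illustrative only, with arbitrarily chosen values) of the bound in Theorem E.1, trying the extreme admissible perturbations of \(\alpha\) and \(\beta\):

```python
import itertools

theta = 0.9
alpha, beta = 3.0, 5.0
# Try the four extreme admissible perturbations of alpha and beta.
for s_a, s_b in itertools.product((-1.0, 1.0), repeat=2):
    alpha_t = alpha * (1 + s_a * theta / 3)
    beta_t = beta * (1 + s_b * theta / 3)
    # The ratio of the perturbed values stays within relative error theta.
    assert abs(alpha / beta - alpha_t / beta_t) <= theta * alpha / beta
```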

We can have a more general version of the bound.
Lemma E.1 ((Hamoudi et al. 2020)) Let \(\tilde a\) be an estimate of \(a>0\) such that \(\vert \tilde a- a \vert \leq \epsilon_a a\), with \(\epsilon_a \in (0,1)\). Similarly, let \(\tilde b\) be an estimate of \(b>0\) and \(\epsilon_b \in (0,1)\) such that \(\vert \tilde b - b \vert \leq \epsilon_b b\). Then the ratio \(a/b\) is estimated to relative error \(\left \vert \frac{\tilde a}{\tilde b} - \frac{a}{b} \right\vert \leq \left ( \frac{\epsilon_a + \epsilon_b}{1-\epsilon_b} \right) \frac{a}{b}\).
The proof follows directly from their work.

Proof. Note that \(b - \tilde{b} \leq \vert \tilde{b} - b\vert \leq \epsilon_b b\), so \(\tilde{b} \geq b(1-\epsilon_b)\), and hence \(\frac{1}{ \tilde b} \leq \frac{1}{ b (1-\epsilon_b)}\).

Now we can combine the previous observation: \[\begin{align} \left| \frac{\tilde a}{\tilde b} - \frac{a}{b} \right| = & \left \vert \frac{\tilde a b - a \tilde b}{\tilde b b} \right\vert = \left \vert \frac{\tilde a b - ab + ab - a \tilde b}{\tilde b b} \right\vert = \left \vert \frac{\tilde a - a}{\tilde b} + \frac{a}{\tilde b} \frac{b - \tilde b}{ b} \right\vert \\ \leq & \left \vert \frac{\tilde a - a}{\tilde b} \right\vert+ \frac{a}{\tilde b} \left \vert \frac{b - \tilde b}{ b} \right\vert \leq \frac{\epsilon_a a + \epsilon_b a }{\tilde b} \leq \frac{a}{b}\frac{\epsilon_a +\epsilon_b}{ (1-\epsilon_b)}. \end{align}\]
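The same kind of corner check for Lemma E.1, with asymmetric (arbitrarily chosen) relative errors; the bound is attained at \(\tilde a = a(1+\epsilon_a)\), \(\tilde b = b(1-\epsilon_b)\), so a small floating-point slack is added:

```python
import itertools

a, b = 2.0, 4.0
eps_a, eps_b = 0.2, 0.3
bound = (eps_a + eps_b) / (1 - eps_b) * (a / b)
# Try the four extreme admissible perturbations of a and b.
for s_a, s_b in itertools.product((-1.0, 1.0), repeat=2):
    a_t = a * (1 + s_a * eps_a)
    b_t = b * (1 + s_b * eps_b)
    # Small slack since the bound is exactly tight at one corner.
    assert abs(a_t / b_t - a / b) <= bound + 1e-12
```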

E.1 Useful quantum subroutines and folklore results

We will often make use of a tool developed in (Wiebe, Kapoor, and Svore 2018). It is a standard technique in classical computer science to boost the success probability of a randomized algorithm by repeating it and computing some statistics on the results. In the quantum case, at a high level, we take multiple copies of the output of the amplitude estimation procedure, compute the median, and reverse the circuit in order to get rid of the garbage.

Lemma E.2 (Median evaluation (Wiebe, Kapoor, and Svore 2018)) Let \(\mathcal{U}\) be a unitary operation that maps \[\mathcal{U}:|0^{\otimes n}\rangle\mapsto \sqrt{a}|x,1\rangle+\sqrt{1-a} |G,0\rangle\] for some \(1/2 < a \le 1\) in time \(T\). Then there exists a quantum algorithm that, for any \(\Delta>0\) and for any \(1/2<a_0 \le a\), produces a state \(|\Psi\rangle\) such that \(\||\Psi\rangle-|0^{\otimes nL}\rangle|x\rangle\|\le \sqrt{2\Delta}\) for some integer \(L\), in time \[ 2T\left\lceil\frac{\ln(1/\Delta)}{2\left(|a_0|-\frac{1}{2} \right)^2}\right\rceil. \]
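The quantum lemma has a classical analogue that conveys the intuition: if an estimator is within tolerance with probability \(a > 1/2\), the median of \(L = \lceil \ln(1/\Delta)/(2(a_0-\frac{1}{2})^2)\rceil\) independent repetitions (for any \(a_0 \leq a\)) fails only with probability roughly \(\Delta\), by a Chernoff bound. A minimal sketch, with a hypothetical noisy estimator:

```python
import math
import random
import statistics

random.seed(0)

def noisy_estimate(true_value):
    """Hypothetical estimator: within 0.01 of true_value with probability 0.9."""
    if random.random() < 0.9:
        return true_value + random.uniform(-0.01, 0.01)
    return true_value + random.uniform(-1.0, 1.0)  # failure: arbitrary garbage

def median_boost(true_value, delta, a0=0.75):
    """Median of L repetitions, with L mirroring the lemma's
    ceil(ln(1/Delta) / (2 (a0 - 1/2)^2))."""
    L = math.ceil(math.log(1 / delta) / (2 * (a0 - 0.5) ** 2))
    return statistics.median(noisy_estimate(true_value) for _ in range(L))

estimate = median_boost(3.14, delta=1e-6)
assert abs(estimate - 3.14) <= 0.01
```

If strictly more than half of the repetitions land within tolerance, the median is guaranteed to land within tolerance as well, which is why the failure probability decays exponentially in \(L\).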

We report here some simple statements from the literature which are now folklore.

Lemma E.3 ((Kerenidis et al. 2019)) Let \(\epsilon_b\) be the error we commit in estimating \(|c\rangle\), i.e. \(\left\lVert |c\rangle - |\overline{c}\rangle\right\rVert < \epsilon_b\), and \(\epsilon_a\) the error we commit in estimating the norm: \(|\left\lVert c\right\rVert - \overline{\left\lVert c\right\rVert}| \leq \epsilon_a \left\lVert c\right\rVert\). Then \(\left\lVert\overline{c} - c\right\rVert \leq \sqrt{\eta} (\epsilon_a + \epsilon_b)\).
Lemma E.4 ((Kerenidis and Prakash 2020)) Let \(|x\rangle\) and \(|y\rangle\) be the unit vectors (in \(\ell_2\) norm) corresponding to \(x\) and \(y\), let \(\theta\) be the angle between \(x\) and \(y\), and assume that \(\theta < \pi/2\). Then, \(\left\lVert x-y\right\rVert \leq \epsilon\) implies \(\left\lVert|x\rangle - |y\rangle\right\rVert \leq \frac{\sqrt{2}\epsilon}{\left\lVert x\right\rVert}\).
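A quick numerical sanity check (not a proof) of Lemma E.4 on arbitrarily chosen two-dimensional vectors:

```python
import math

def norm(v):
    """Euclidean norm of a vector given as a list of floats."""
    return math.sqrt(sum(t * t for t in v))

x = [3.0, 4.0]
y = [3.1, 3.9]
dist = norm([p - q for p, q in zip(x, y)])        # ||x - y||
ux = [t / norm(x) for t in x]                     # the unit vector |x>
uy = [t / norm(y) for t in y]                     # the unit vector |y>
unit_dist = norm([p - q for p, q in zip(ux, uy)])
assert unit_dist <= math.sqrt(2) * dist / norm(x)
```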