23.3 Random walk mathematics

Call \(x^{i}\) the position \(x\) at step \(i\) in a random walk. While we have set this up to be a unit walk, more generally \(x^{i}=x^{i-1}+r \; \Delta x\), with \(\Delta x\) being the jump size (in this case \(\Delta x=1\)) and \(r\) being a random variable:

\[\begin{equation} r=\begin{cases} -1 & p(-1)=0.5 \\ 1 & p(1)=0.5 \end{cases} \tag{23.1} \end{equation}\]

Note that in Equation (23.1) \(r\) is built from a [Bernoulli random variable](https://en.wikipedia.org/wiki/Bernoulli_distribution) - a variable that is either 0 or 1 with equal probability; here the two outcomes are relabeled as \(-1\) and \(1\). Equation (23.1) is sometimes referred to as the evolution equation. If we have multiple simulations then \(x_{j}^{i}\) is the position at step \(i\) for simulation \(j\).
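
To make the evolution equation concrete, here is a minimal sketch in Python with numpy (the function name `simulate_walk` and its parameters are our own, chosen for illustration) that iterates \(x^{i}=x^{i-1}+r \; \Delta x\):

```python
import numpy as np

def simulate_walk(n_steps, dx=1.0, rng=None):
    """One random walk: x^i = x^(i-1) + r * dx, with r = -1 or 1."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.zeros(n_steps + 1)           # x^0 = 0
    for i in range(1, n_steps + 1):
        r = rng.choice([-1, 1])         # each outcome has probability 0.5
        x[i] = x[i - 1] + r * dx        # the evolution equation
    return x

print(simulate_walk(10))
```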

Let’s introduce some terminology to help us out here. \(\displaystyle \big \langle x^{i} \big \rangle = \frac{1}{n} \sum_{j=1}^{n} x_{j}^{i}\) is the expected value of our position at step \(i\), averaged over all \(n\) of the simulations at that timestep. Notice the connection here to the ensemble average at a given point.
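
As a sketch of how this ensemble average could be computed (again in Python with numpy; `n_sims`, `n_steps`, and the array names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(seed=1234)
n_sims, n_steps, dx = 1000, 100, 1.0

# Row j holds the jumps r * dx for simulation j.
jumps = rng.choice([-1.0, 1.0], size=(n_sims, n_steps)) * dx
# x[j, i] is the position x_j^i; prepend x^0 = 0 and cumulatively sum.
x = np.hstack([np.zeros((n_sims, 1)), np.cumsum(jumps, axis=1)])

# <x^i> = (1/n) sum_j x_j^i : one ensemble average per step i.
ensemble_avg = x.mean(axis=0)
print(ensemble_avg[:5])   # hovers near 0, as derived below
```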

You will learn in a probability theory class that the expected value is a linear operator. Because of this, we can compute the expected value of our evolution equation:

\[\begin{equation} \big \langle x^{i} \big \rangle = \big \langle x^{i-1}+r \;\Delta x \big \rangle = \big \langle x^{i-1} \big \rangle + \big \langle r \; \Delta x \big \rangle = \big \langle x^{i-1} \big \rangle + \big \langle r \big \rangle \; \Delta x \end{equation}\]

Let’s focus in particular on the second term of this expression: \(\big \langle r \big \rangle\). Because \(r\) is a discrete random variable we can compute its expected value as well. To calculate \(\langle r \rangle\) we add up the possible outcomes for \(r\) (either 1 or -1), each multiplied by its associated probability (in this case 0.5 for \(r=1\) and 0.5 for \(r=-1\) - note how these probabilities sum to 1):¹

\[\begin{equation} \big \langle r \big \rangle = 1 \cdot 0.5 - 1 \cdot 0.5 = 0 \end{equation}\]
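
As a quick numerical sanity check (a sketch, assuming numpy is available), the sample mean of many draws of \(r\) should land near this expected value of zero:

```python
import numpy as np

rng = np.random.default_rng(seed=42)
r = rng.choice([-1, 1], size=100_000)  # r = -1 or 1, each with probability 0.5
print(r.mean())                        # close to <r> = 0
```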

So, the expected value of the evolution equation is \(\big \langle x^{i} \big \rangle = \big \langle x^{i-1} \big \rangle\). This may not seem helpful, but let’s trace this value from the beginning (that is, when \(i=0\) and we begin at \(x=0\), so \(\big \langle x^{0} \big \rangle = 0\)). Now let’s evolve to the next timestep: by what we found, \(\big \langle x^{1} \big \rangle = \big \langle x^{0} \big \rangle = 0\). What do you think happens at \(\big \langle x^{2} \big \rangle\)? If you guessed 0, you are correct, because \(\big \langle x^{2} \big \rangle = \big \langle x^{1} \big \rangle = \big \langle x^{0} \big \rangle = 0\). In fact, this pattern continues so that \(\big \langle x^{i} \big \rangle = \big \langle x^{i-1} \big \rangle = 0\) for every step \(i\).

What does this tell us? Amid all of this notation, \(\big \langle x^{i} \big \rangle\) is the expected value at each timestep, so we showed that the expected value at any time is zero. In other words, on average the random walk goes nowhere!

23.3.1 Random walk variance

Now that we have characterized the expected value, or the average displacement, let’s compute the variance of this random walk. Since the expected value is zero, the variance equals the mean square displacement \(\langle (x^{i})^{2} \rangle\), so that is what we will calculate. First we compute the square displacement using the evolution equation and multiplying out:

\[\begin{equation} (x^{i})^{2} = (x^{i-1} + r \; \Delta x)^2 = (x^{i-1})^{2} + 2 x^{i-1} \; r \; \Delta x + r^{2} \; \Delta x^{2} . \end{equation}\]

Next we compute the expectation \(\langle (x^{i})^{2} \rangle\) of our multiplied expression term by term:

  • \(\langle (x^{i-1})^{2} \rangle\): There is not much we can do with this term.

  • \(\langle 2 x^{i-1} \; r \; \Delta x \rangle\): This term is zero because \(r\) is independent of the position \(x^{i-1}\) and \(\langle r \rangle = 0\): \[\begin{equation} \langle 2 x^{i-1} \; r \; \Delta x \rangle = 2 \; \Delta x \; \big \langle x^{i-1} \big \rangle \; \big \langle r \big \rangle = 0 \end{equation}\]

  • \(\langle r^{2} \; (\Delta x)^{2} \rangle\): We know that \(r\) can be either \(-1\) or \(1\), but we then square the value of \(r\) in Equation (23.1): \[\begin{equation} r^{2}=\begin{cases} -1 \rightarrow (-1)^{2}=1 & p(-1)=0.5 \\ 1 \rightarrow (1)^{2}=1 & p(1)=0.5 \end{cases} \tag{23.2} \end{equation}\] So when we compute the expected value of this term we obtain: \[\begin{equation} \langle r^{2} \; (\Delta x)^{2} \rangle = 1 \; (\Delta x)^{2} \cdot 0.5 + 1 \; (\Delta x)^{2} \cdot 0.5 = (\Delta x)^{2} \end{equation}\]

So the end result for the variance is:

\[\begin{equation} \big \langle (x^{i})^{2} \big \rangle = \big \langle (x^{i-1})^{2} \big \rangle + (\Delta x)^{2} \end{equation}\]

In order to understand this let’s write out the first few terms of this recursive relationship:

\[\begin{equation} \begin{split} \big \langle (x^{0})^{2} \big \rangle &= 0 \\ \big \langle (x^{1})^{2} \big \rangle &= \big \langle (x^{0})^{2} \big \rangle + (\Delta x)^{2} = (\Delta x)^{2} \\ \big \langle (x^{2})^{2} \big \rangle &= \big \langle (x^{1})^{2} \big \rangle + (\Delta x)^{2} = 2 (\Delta x)^{2} \\ \big \langle (x^{3})^{2} \big \rangle &= \big \langle (x^{2})^{2} \big \rangle + (\Delta x)^{2} = 3 (\Delta x)^{2} \end{split} \end{equation}\]

There is a pattern here: in fact \(\langle (x^{i})^{2} \rangle = i ( \Delta x)^{2}\). So the variance, or the mean square displacement, grows in proportion to the number of steps taken. Another way to state this is that the standard deviation (the square root of the variance) is \(\sqrt{n} \; \Delta x\) after \(n\) steps, so a typical walker has wandered a distance of about \(\pm \sqrt{n} \; \Delta x\) from the start. This matches up with our graphs from earlier since \(\Delta x =1\)! Informally this tells us that on average you go nowhere, but eventually you travel everywhere - how cool!
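
We can also check \(\langle (x^{i})^{2} \rangle = i (\Delta x)^{2}\) by simulation. The following sketch (Python with numpy; the variable names are our own) compares the simulated mean square displacement against the predicted line \(i \, (\Delta x)^{2}\):

```python
import numpy as np

rng = np.random.default_rng(seed=7)
n_sims, n_steps, dx = 5000, 200, 1.0

jumps = rng.choice([-1.0, 1.0], size=(n_sims, n_steps)) * dx
x = np.hstack([np.zeros((n_sims, 1)), np.cumsum(jumps, axis=1)])

msd = (x ** 2).mean(axis=0)                    # simulated <(x^i)^2>
predicted = np.arange(n_steps + 1) * dx ** 2   # i * (dx)^2

for i in (10, 50, 100, 200):
    print(f"step {i}: simulated {msd[i]:.1f} vs predicted {predicted[i]:.0f}")
```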


  1. Generally speaking for a discrete random variable \(x\), \(\displaystyle \langle x \rangle = \sum p_{i} \cdot x_{i}\), with \(\displaystyle \sum p_{i}=1\). ↩︎