This is the third part in a short series of blog posts about quantum Monte Carlo (QMC). The series is derived from an introductory lecture I gave on the subject at the University of Guelph.
Part 1: Calculating Pi with Monte Carlo
Part 2: Galton’s peg board and the central limit theorem
Introduction to QMC Part 3: Markov Chains and the Metropolis algorithm

So far in this series we have seen various examples of random sampling. Here we’ll look at a simple Python script that uses Markov chains and the Metropolis algorithm to randomly sample complicated two-dimensional probability distributions.
Markov Chains

If you come from a math, statistics, or physics background you may have learned that a Markov chain is a set of states that are sampled from a probability distribution.
More recently, they have been used to string together words and make pseudo-random sentences [1]. In this case the state is defined by, e.g., the current and previous words in the sentence, and the next word is generated based on this “state”. We won’t be looking at this sort of thing today, but instead going back to where it all began.

In the early 1900s a Russian mathematician named Andrey Markov published a series of papers describing, in part, his method for randomly sampling probability distributions using a dependent data set. It was not always clear how this could be done, and some believed that the law of large numbers (and hence the central limit theorem) would only apply to an independent data set. Among these disbelievers was another Russian professor named Pavel Nekrasov, and there’s an interesting story about a “rivalry” between Markov and Nekrasov. To quote Eugene Seneta (1996):
“Nekrasov’s (1902) attempt to use mathematics and statistics in support of ‘free will’ … led Markov to construct a scheme of dependent random variables in his 1906 paper”

Markov laid out the rules for properly creating a chain, which is a series of states where each is connected, in sequence, according to a specific set of rules [2]. The transition between states must be ergodic, therefore:

- Any state can be achieved within a finite number of steps; this ensures that the entire configuration space is traversable
- There is a chance of staying in the same place when the system steps forward
- The average number of steps required to return to the current state is finite
These rules may not apply, as such, for the modern Markov-chain pseudo-random text generators discussed above. However, for other applications (such as QMC) they are very important.
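To see these rules in action, here is a quick toy sketch (a made-up three-state example, not part of the QMC machinery): the transition matrix below satisfies all three conditions, so sampling the chain long enough makes the visit frequencies converge to its stationary distribution. This is the law of large numbers at work on a dependent data set.

import numpy as np

# Transition matrix for a 3-state chain: every state can reach every
# other, each row has a nonzero diagonal (a chance of staying put),
# and the expected return time to each state is finite, i.e. ergodic.
T = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

rng = np.random.default_rng(0)
state, counts = 0, np.zeros(3)
for _ in range(100_000):
    # step forward: sample the next state from the current row of T
    state = rng.choice(3, p=T[state])
    counts[state] += 1

print(counts / counts.sum())  # visit frequencies -> stationary distribution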
The stage was set, but Markov would never live to see his ideas applied to QMC. This was done in the 1950s and paralleled the creation of the world’s first electronic computers. And speaking of that …
Metropolis algorithm

QMC requires a set of configurations distributed according to the probability distribution (i.e., sampled from the square of the wave function). A configuration is a vector that contains the positions (e.g., the $x$, $y$ coordinates) of each particle. Recall, for comparison, how we used the Galton board to sample the binomial distribution. For QMC the probability distributions $|\Psi(R)|^2$, where $R$ is a “many-body” configuration vector, are much more complicated, and samples can be produced using the Metropolis algorithm. This algorithm obeys the rules for creating a Markov chain and adds some (crucial) details. Namely, when transitioning from the current state to the next in the chain of configurations, we accept the move with probability:

$$A(R \to R') = \min\left(1,\ \frac{T(R' \to R)\,|\Psi(R')|^2}{T(R \to R')\,|\Psi(R)|^2}\right),$$

where $R$ is the configuration of the current state and $R'$ is the configuration of the next proposed state. The move itself involves shifting the location of some or (in practice) all of the particles, and this is done randomly according to a transition rule $T(R \to R')$. In my experience it’s usually the case that $T(R \to R')$ is equal to the opposite transition $T(R' \to R)$, and therefore we can simplify the acceptance probability to:

$$A(R \to R') = \min\left(1,\ \frac{|\Psi(R')|^2}{|\Psi(R)|^2}\right).$$
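Why does this particular rule produce samples of $|\Psi(R)|^2$? The $\min(1, \cdot)$ form enforces detailed balance between every pair of configurations,

$$P(R)\,T(R \to R')\,A(R \to R') = P(R')\,T(R' \to R)\,A(R' \to R), \qquad P(R) = |\Psi(R)|^2,$$

since substituting the acceptance probability reduces both sides to $\min\big(P(R)\,T(R \to R'),\ P(R')\,T(R' \to R)\big)$. Detailed balance, together with the ergodicity rules above, guarantees that $P(R)$ is the stationary distribution of the chain.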
In English, this means we take $A$ as the ratio of the square of the wave function evaluated at the proposed and current configurations, or we take $A$ as 1 if this ratio is larger than 1. If $A = 1$ then we accept the move, and if $A < 1$ we do the following:

1. Produce a random number $u$ from 0 to 1
2. Calculate $A$
3. Accept the move if $u < A$, otherwise reject

The average acceptance ratio is an important quantity for controlling and understanding simulations, and it will depend on the “maximum move size” $dr$ (the maximum distance each particle can be shifted in each coordinate for each move; the actual distance shifted will also depend on a random number). Usually a desirable acceptance ratio is 50%.

Metropolis Monte Carlo sampling with Python
Let’s look at a simple script for sampling two-dimensional probability distributions. If you’re familiar with Python then reading over the code should be a great way of solidifying your understanding of the Metropolis algorithm as discussed above.
import numpy as np

def Metropolis_algorithm(N, m, dr):
    '''
    A Markov chain is constructed, using the Metropolis algorithm,
    that is comprised of samples of our probability density: psi(x,y).

    N  - number of random moves to try
    m  - will return a sample when i%m == 0 in the loop over N
    dr - maximum move size (if uniform), controls the acceptance ratio
    '''
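    # (one possible body consistent with the docstring; psi(x, y) is a
    #  stand-in density defined below, and any two-dimensional
    #  distribution could be substituted for it)
    samples = []
    x, y = 0.0, 0.0                          # arbitrary starting configuration
    for i in range(N):
        # propose a move: shift each coordinate by a uniform random
        # amount in [-dr, +dr] (a symmetric transition rule)
        xp = x + dr * np.random.uniform(-1.0, 1.0)
        yp = y + dr * np.random.uniform(-1.0, 1.0)
        # acceptance probability A = min(1, psi(proposed) / psi(current))
        A = min(1.0, psi(xp, yp) / psi(x, y))
        if np.random.uniform(0.0, 1.0) < A:
            x, y = xp, yp                    # accept the move
        if i % m == 0:
            samples.append((x, y))           # record every m-th configuration
    return np.array(samples)

def psi(x, y):
    # stand-in two-dimensional probability density (an unnormalized
    # Gaussian), in place of the complicated distributions above
    return np.exp(-(x**2 + y**2) / 2.0)

# example usage: try 100,000 moves, keeping every 10th configuration
samples = Metropolis_algorithm(N=100_000, m=10, dr=1.0)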