9.1.1
Figure 9.1: Metropolis sampling at temperature T.
The Metropolis algorithm [Metropolis et al. 1953], shown in Fig. 9.1, generates a sequence of
configurations, which forms a Markov chain, using a Monte Carlo
procedure. It was originally proposed to
simulate the behavior of a system of particles in thermal equilibrium at
temperature T. At each step, the next configuration f' is randomly
chosen from N(f), the vicinity of f, by applying a perturbation to
f, for instance by changing one of the
f_i's into a new label
f_i'. Random[0,1) is a random number drawn from a uniform distribution on
[0,1). If it is smaller than P = e^{-dE/T}, where dE = E(f') - E(f) is the
change in energy, the configuration change is
accepted. So, a new configuration
for which
dE <= 0 is
accepted for sure, whereas one for which
dE > 0 is accepted with
probability
e^{-dE/T}. This is the Metropolis criterion.
The convergence condition ``equilibrium is reached'' may in practice be
judged by the criterion that ``a prescribed number of iterations has been performed''.
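The Metropolis procedure described above can be sketched in Python. The function name, the energy interface, and the example parameters below are illustrative assumptions, not code from the text:

```python
import math
import random

def metropolis_sample(f, energy, labels, T, n_iter=10000, rng=random):
    """A sketch of Metropolis sampling at temperature T.

    f      -- list of current labels f_i, one per site (modified in place)
    energy -- function giving the energy E(f) of a labeling (assumed interface)
    labels -- the set of admissible labels
    """
    E = energy(f)
    for _ in range(n_iter):
        # Perturb f: replace the label at one randomly chosen site
        # with a different label, giving a candidate configuration f'.
        i = rng.randrange(len(f))
        old = f[i]
        f[i] = rng.choice([l for l in labels if l != old])
        dE = energy(f) - E
        # Metropolis criterion: accept if dE <= 0,
        # otherwise accept with probability e^{-dE/T}.
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            E = E + dE     # accept the new configuration
        else:
            f[i] = old     # reject it and keep the old one
    return f
```

At low T the criterion accepts almost only energy-decreasing moves, while at high T most perturbations are accepted, which is what makes the sampler usable inside an annealing schedule.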
The Gibbs sampler [Geman and Geman 1984] generates
the next configuration based on a conditional probability rather than
an energy change. A candidate f_i' for the next f_i is randomly drawn
from the conditional distribution
P(f_i | f_{S-{i}}), where f_{S-{i}} denotes the labels at all sites other than i. It
is shown by [Geman and Geman 1984] that, at a fixed T, both sampling
algorithms have the Gibbs distribution as their equilibrium distribution. In vision
research, [Cross and Jain 1983], on MRF texture models, was the first work to use
the Metropolis algorithm for sampling MRFs, and [Geman and Geman 1984] was
the earliest to use annealing procedures for finding the global
estimates of MRFs.
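The Gibbs sampler described above can be sketched as follows; the `local_energy` interface and the parameter choices are illustrative assumptions, standing in for the conditional distribution P(f_i | f_{S-{i}}):

```python
import math
import random

def gibbs_sample(f, local_energy, labels, T, n_sweeps=100, rng=random):
    """A sketch of the Gibbs sampler at temperature T.

    local_energy(f, i, l) -- the energy terms involving site i when its
                             label is set to l (assumed interface)
    """
    for _ in range(n_sweeps):
        for i in range(len(f)):
            # Conditional distribution of f_i given all other labels:
            # P(f_i = l | f_{S-{i}}) is proportional to exp(-E_i(l) / T).
            e = [local_energy(f, i, l) for l in labels]
            m = min(e)
            # Subtract the minimum before exponentiating so at least one
            # weight is 1.0 (avoids underflow at low T); the normalization
            # is unaffected.
            w = [math.exp(-(x - m) / T) for x in e]
            # Draw the new label for site i from this conditional.
            f[i] = rng.choices(labels, weights=w, k=1)[0]
    return f
```

Unlike the Metropolis sampler, every step here draws directly from a local conditional distribution, so there is no separate accept/reject decision.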
A rapidly developing area is that of Markov chain Monte Carlo (MCMC) methods of simulation. The reader may refer to [Gelfand and Smith 1990,Phillips and Smith 1994] as well as [Smith and Roberts 1993,Besag and Green 1993,Gilks et al. 1993] for reviews and discussions.