The MAP solution is defined as $f^* = \arg\min_{f} E(f)$. The simplest way
to find $f^*$ is to perform gradient descent. Start with an initial
configuration $f^{(0)}$; iterate with
\[
f^{(t+1)} = f^{(t)} - \mu \, \nabla E\!\left(f^{(t)}\right)
\]
where $\mu$ is a step size and $\nabla E(f)$
is the gradient of the
energy function, until the algorithm converges to a point $f$ for which
$\nabla E(f) = 0$. Consider
\[
E(f) = \sum_{i=1}^{m} (f_i - d_i)^2 + \lambda \sum_{i=2}^{m} g(f_i - f_{i-1}) .
\]
The gradient is
composed of the following components
\[
\frac{\partial E(f)}{\partial f_i}
= 2\,(f_i - d_i) + \lambda \left[ g'(f_i - f_{i-1}) - g'(f_{i+1} - f_i) \right] ,
\]
where $g(\eta) = \eta^2$ as assumed in (2.14), and for $i = 1$ and $i = m$ the
terms involving the nonexistent neighbors $f_0$ and $f_{m+1}$ are dropped.
Fig. 2.2 shows restored curves at three stages of an
iterative minimization process. In the 2D case, the gradient component is
\[
\frac{\partial E(f)}{\partial f_{i,j}}
= 2\,(f_{i,j} - d_{i,j})
+ \lambda \left[ g'(f_{i,j} - f_{i-1,j}) - g'(f_{i+1,j} - f_{i,j})
+ g'(f_{i,j} - f_{i,j-1}) - g'(f_{i,j+1} - f_{i,j}) \right] .
\]
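As an illustration of this iterative scheme, the following is a minimal sketch in Python/NumPy for the 1D case, assuming the quadratic potential $g(\eta) = \eta^2$. The function names (grad_E, restore), the values of $\lambda$ and $\mu$, and the noisy step signal used for $d$ are illustrative choices, not taken from the text.

```python
import numpy as np

# Gradient of E(f) = sum_i (f_i - d_i)^2 + lam * sum_i (f_i - f_{i-1})^2,
# i.e. the quadratic potential g(eta) = eta^2 with g'(eta) = 2*eta.
def grad_E(f, d, lam):
    diff = np.diff(f)               # f_i - f_{i-1} for i = 2, ..., m
    g = 2.0 * (f - d)               # data term: 2 (f_i - d_i)
    g[1:] += 2.0 * lam * diff       # + lam * g'(f_i - f_{i-1})
    g[:-1] -= 2.0 * lam * diff      # - lam * g'(f_{i+1} - f_i)
    return g

def restore(d, lam=2.0, mu=0.05, iters=5000, tol=1e-8):
    """Gradient descent f <- f - mu * grad E(f), started from f = d."""
    f = d.copy()
    for _ in range(iters):
        g = grad_E(f, d, lam)
        f = f - mu * g
        if np.max(np.abs(g)) < tol:  # stop once the gradient (nearly) vanishes
            break
    return f

# Example: restore a noisy step signal.
rng = np.random.default_rng(0)
d = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * rng.standard_normal(100)
f_star = restore(d)
```

Starting the iteration from $f^{(0)} = d$ mirrors the initialization shown in Fig. 2.2.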
Alternatively, one may directly solve the system of equations
\[
\frac{\partial E(f)}{\partial f_i} = 0 ,
\]
which is composed of $m$ simultaneous equations
($i = 1, \ldots, m$).
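With the quadratic potential, these equations are linear in $f$ and can be written as $(I + \lambda L) f = d$, where $L$ is the second-difference matrix of the chain. The sketch below solves this system directly; the helper name restore_direct and the value of $\lambda$ are illustrative.

```python
import numpy as np

# With g(eta) = eta^2, setting dE/df_i = 0 for all i gives the linear system
# (I + lam * L) f = d, where L is the second-difference matrix of the chain.
def restore_direct(d, lam=2.0):
    m = d.size
    L = np.zeros((m, m))
    for i in range(m):
        if i > 0:                 # coupling to the left neighbor f_{i-1}
            L[i, i] += 1.0
            L[i, i - 1] -= 1.0
        if i < m - 1:             # coupling to the right neighbor f_{i+1}
            L[i, i] += 1.0
            L[i, i + 1] -= 1.0
    return np.linalg.solve(np.eye(m) + lam * L, d)
```

For this convex quadratic energy the direct solution agrees, up to the stopping tolerance, with the result of the gradient descent sketch above.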
Figure 2.2: Minimizing $E(f)$. From left to right: the initial $f$ set to
the observed data $d$, $f$ after 5 iterations, and the final result $f^*$.
When $E(f)$ is convex w.r.t. $f$, there is a unique global energy
minimum, and convergence to this global minimum is guaranteed by the
gradient descent algorithm. When it is not, as is the case with the truncated
quadratic potential function (2.17), there are
local minima in the energy landscape and the gradient descent algorithm
only gives a local minimum. Some global optimization techniques may be
used to overcome this difficulty, such as simulated annealing
[Geman and Geman 1984] and graduated non-convexity [Blake and Zisserman 1987]
(see Chapter 9).
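To make the difficulty concrete, the sketch below reuses the structure of the earlier gradient descent but swaps in a truncated quadratic potential. The specific form $g(\eta) = \min(\eta^2, \alpha)$ and the values of $\alpha$, $\lambda$, and $\mu$ are assumptions for illustration and need not match (2.17) exactly; the point is only that, with a non-convex energy, descents started from different configurations may settle in different local minima.

```python
import numpy as np

# Gradient descent with an assumed truncated quadratic potential:
# g(eta) = min(eta^2, alpha), so g'(eta) = 2*eta if eta^2 < alpha, else 0.
def grad_E_trunc(f, d, lam, alpha):
    diff = np.diff(f)                                   # f_i - f_{i-1}
    gp = np.where(diff ** 2 < alpha, 2.0 * diff, 0.0)   # g'(f_i - f_{i-1})
    g = 2.0 * (f - d)
    g[1:] += lam * gp
    g[:-1] -= lam * gp
    return g

def energy_trunc(f, d, lam, alpha):
    return np.sum((f - d) ** 2) + lam * np.sum(np.minimum(np.diff(f) ** 2, alpha))

def descend(f0, d, lam=2.0, alpha=0.04, mu=0.05, iters=5000):
    f = f0.copy()
    for _ in range(iters):
        f -= mu * grad_E_trunc(f, d, lam, alpha)
    return f

rng = np.random.default_rng(1)
d = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * rng.standard_normal(100)
f_a = descend(d, d)                  # started from the observed data
f_b = descend(np.zeros_like(d), d)   # started from all zeros
# Each run yields only a local minimum; the two energies need not coincide.
print(energy_trunc(f_a, d, 2.0, 0.04), energy_trunc(f_b, d, 2.0, 0.04))
```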