
Chapter 8

Minimization -- Local Methods

 

After the energy function $E(f)$, including both the functional form and the involved parameters, is given, and thus the optimal solution is entirely defined, the remaining problem is to find that solution. It is most desirable to express the solution in closed form, but generally this is very difficult in vision problems because of the complexity caused by interactions between labels. Therefore, optimal solutions are usually computed using iterative search techniques. This chapter describes techniques for finding local minima and discusses related issues.

A minimization problem can be categorized as continuous or combinatorial according to whether the label set is continuous or discrete; it can be further categorized as constrained or unconstrained according to whether $f$ is confined to a subspace of the entire search space. A vision problem may also be modeled as a combination of several minimization processes; for example, continuous restoration with a line process is a combination of continuous and combinatorial minimization processes (cf. Chapters 2 and 3). As far as computational algorithms are concerned, the distinction between local and global methods is important when there are multiple local minima in the solution space.

Because classical unconstrained, continuous minimization methods are the most mature, it is sometimes desirable to convert a combinatorial or constrained problem into a form suitable for such classical methods. A combinatorial problem may be converted into a continuous one, for example, using the notion of relaxation labeling [Rosenfeld et al. 1976] or the mean field approximation [Peterson and Soderberg 1989]. A constrained minimization may be converted into an unconstrained one, for example, using classical methods such as the penalty method or the Lagrange multiplier method [Fletcher 1987]. This is not only for mathematical convenience but also brings about possibilities for analog implementations.
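As an illustration of the second conversion, here is a minimal penalty-method sketch: the constrained problem $\min E(f)$ subject to $g(f) = 0$ is replaced by the unconstrained problem $\min E(f) + \lambda\, g(f)^2$, with the weight $\lambda$ increased over successive solves. The quadratic objective, the equality constraint, the schedule for $\lambda$, and the gradient-descent inner loop are all illustrative assumptions, not specifics from the text.

    import numpy as np

    def E(f):                      # illustrative objective, not from the text
        return (f[0] - 2.0)**2 + (f[1] - 1.0)**2

    def g(f):                      # illustrative equality constraint: f0 + f1 = 2
        return f[0] + f[1] - 2.0

    def grad_penalized(f, lam):    # gradient of E(f) + lam * g(f)**2
        return np.array([2.0 * (f[0] - 2.0) + 2.0 * lam * g(f),
                         2.0 * (f[1] - 1.0) + 2.0 * lam * g(f)])

    f = np.zeros(2)
    for lam in (1.0, 10.0, 100.0):         # stiffen the penalty gradually
        step = 0.25 / (1.0 + 2.0 * lam)    # step size kept small for stability
        for _ in range(5000):              # crude gradient-descent inner loop
            f = f - step * grad_penalized(f, lam)
    print(f, g(f))  # f approaches the constrained optimum (1.5, 0.5)

As $\lambda \to \infty$, the minimizer of the penalized energy approaches the constrained optimum; in practice $\lambda$ is increased gradually, warm-starting each solve from the previous one.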

  
Figure 8.1: Local and global extrema.

Fig. 8.1 illustrates extremal points of a continuous function in $\mathbb{R}$. A, C and F are local maxima, but only F is the global maximum. Points B and D are local minima, of which B is the global one. B, C, D and E are stationary points, where the gradient vanishes, but E is an inflection point (or a saddle point in higher-dimensional cases) rather than an extremum. A gradient-descent search algorithm will stop at B or D. In theory it can also stop at a saddle point or even a local maximum, because the gradient there is exactly zero; but the chance of this is small in numerical computation, owing to quantization, and the algorithm is likely to pass by such points.

The localness and globalness of a minimum $f^*$ are defined with respect to a neighborhood system $\mathcal{N} = \{ N(f) \mid f \in \mathbb{F} \}$, where $N(f)$ is the set of neighboring configurations of $f$ in the solution space $\mathbb{F}$. For a continuous problem, where $\mathbb{F} \subseteq \mathbb{R}^m$, the neighborhood of $f$ can be suitably defined as

\[ N(f) = \{ f' \in \mathbb{F} \mid \| f' - f \| \le \varepsilon,\ f' \ne f \} \]

where $\|\cdot\|$ is a norm and $\varepsilon > 0$ is a real number. For a discrete problem, where $\mathbb{L}$ is a discrete label set and $\mathbb{F} = \mathbb{L}^m$, we may define the following ``$n$-neighborhood'' of $f$ as

\[ N_n(f) = \{ f' \in \mathbb{F} \mid f'_i \ne f_i \text{ for at most } n \text{ indices } i,\ f' \ne f \} \]
For example, assuming $f = (1,1,1)$ over the label set $\mathbb{L} = \{1,2,3\}$, then $f' = (1,2,3)$, which differs from $f$ at two sites, is a 2-neighbor of $f$; and so is $f'' = (1,1,2)$, which differs at only one site, because the 2-neighborhood includes the 1-neighborhood (due to the phrase ``at most'' in the definition).
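The $n$-neighborhood of a discrete configuration can be enumerated directly, which makes the ``at most'' clause concrete. Below is a minimal sketch; the function name, the small label set, and the sizes are illustrative assumptions.

    from itertools import combinations, product

    def n_neighborhood(f, L, n):
        """All f' != f differing from f at at most n sites."""
        m = len(f)
        neighbors = set()
        for k in range(1, n + 1):
            for sites in combinations(range(m), k):    # which sites change
                choices = [[l for l in L if l != f[i]] for i in sites]
                for labels in product(*choices):       # new labels at those sites
                    fp = list(f)
                    for i, l in zip(sites, labels):
                        fp[i] = l
                    neighbors.add(tuple(fp))
        return neighbors

    f = (1, 1, 1)
    N2 = n_neighborhood(f, L=[1, 2, 3], n=2)
    assert (1, 2, 3) in N2   # differs at two sites: a 2-neighbor
    assert (1, 1, 2) in N2   # differs at one site: also in the 2-neighborhood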

A point $f^*$ is a local minimum of $E$ with respect to $\mathcal{N}$ if

\[ E(f^*) \le E(f) \qquad \forall f \in N(f^*) \]

It is a global minimum if the neighborhood is defined to be the whole of the rest of the solution space:

\[ N(f^*) = \mathbb{F} \setminus \{ f^* \} \]
When $E(f^*)$ is strictly the lowest, i.e., $E(f^*) < E(f)$ for all $f \ne f^*$, $f^*$ is the unique global minimum. Multiple global minima occur if there exists more than one point at which $E$ takes the globally minimal value $\min_{f \in \mathbb{F}} E(f)$.
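These definitions can be checked mechanically on a small discrete space. The sketch below does so by exhaustion; the toy energy (which counts disagreements between adjacent sites), the binary label set, and the single-site neighborhood are all illustrative assumptions.

    from itertools import product

    L, m = [0, 1], 3                       # toy label set and number of sites

    def E(f):                              # toy energy: adjacent disagreements
        return sum(f[i] != f[i + 1] for i in range(len(f) - 1))

    def one_neighbors(f):                  # configurations differing at one site
        for i in range(len(f)):
            for l in L:
                if l != f[i]:
                    yield f[:i] + (l,) + f[i + 1:]

    def is_local_min(fs):
        return all(E(fs) <= E(fp) for fp in one_neighbors(fs))

    def is_global_min(fs):
        return all(E(fs) <= E(fp) for fp in product(L, repeat=m))

    print(is_local_min((0, 0, 0)), is_global_min((0, 0, 0)))  # True True
    print(is_global_min((1, 1, 1)))  # True: multiple global minima exist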

  
Figure 8.2: Deterministic local search algorithm.

Local search is the basis for many minimization algorithms. Fig. 8.2 describes the idea of deterministic local search. The idea is very simple and natural: at a point $f$, we look for where to go within the neighborhood $N(f)$; if some $f'$ in the neighborhood leads to an improvement, $E(f') < E(f)$, we replace $f$ by $f'$; this process continues until no further improvement can be made. When $E$ is lower bounded, the algorithm converges to a local minimum.
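A minimal sketch of this greedy descent is given below, for the discrete case with a caller-supplied neighborhood generator; the toy energy and the single-site flip neighborhood in the usage example are illustrative assumptions.

    def local_search(f, E, neighbors):
        # Move to an improving neighbor until none exists (cf. Fig. 8.2).
        improved = True
        while improved:
            improved = False
            for fp in neighbors(f):
                if E(fp) < E(f):          # improvement found: move there
                    f, improved = fp, True
                    break                 # rescan from the new point
        return f                          # no improving neighbor: local minimum

    # Illustrative usage on binary strings with single-site flips.
    def E(f):
        return sum(f) * (4 - sum(f))      # minimized at all-0 or all-1

    def flips(f):
        return (f[:i] + (1 - f[i],) + f[i + 1:] for i in range(len(f)))

    print(local_search((0, 1, 0, 1), E, flips))  # converges to (1, 1, 1, 1)

Because each accepted move strictly decreases $E$ and $E$ is lower bounded, the loop must terminate, in agreement with the convergence statement above.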

There are different ways of generating (or searching for) a candidate $f'$. In the continuous case, $f'$ may be chosen based on the gradient: $f' = f - \mu \nabla E(f)$, where $\mu > 0$ is a step size. This leads to a point of zero gradient, which is a necessary condition for a minimum. In the discrete case, some enumerative method may be used to generate an $f'$ for which $E(f') < E(f)$.
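In code, the gradient-based candidate generation is just the familiar descent iteration. The quadratic energy, its hand-coded gradient, and the fixed step size below are illustrative assumptions.

    import numpy as np

    c = np.array([2.0, -1.0])      # illustrative: E(f) = ||f - c||^2

    def grad_E(f):
        return 2.0 * (f - c)

    f, mu = np.zeros(2), 0.1       # fixed step size, small enough for stability
    for _ in range(200):
        f = f - mu * grad_E(f)     # candidate f' = f - mu * grad E(f)
    print(f)                       # approaches c, where the gradient vanishes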

The local search discussed so far is deterministic. In stochastic local search, $f'$ is generated at random. It is not necessary that $E(f')$ be lower than $E(f)$: whether to accept an $f'$ for which $E(f')$ is higher than $E(f)$ is decided according to some probabilistic rule.
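One common choice of such a rule, used here purely as an illustration, is the Metropolis criterion: an uphill candidate is accepted with probability $e^{-(E(f') - E(f))/T}$ for a temperature parameter $T > 0$. The single-site random proposal and the toy energy in the usage line are likewise assumptions.

    import math, random

    def stochastic_local_search(f, E, L, T=1.0, iters=5000):
        f, best = list(f), list(f)
        for _ in range(iters):
            fp = list(f)
            i = random.randrange(len(f))     # perturb one site at random
            fp[i] = random.choice(L)
            dE = E(fp) - E(f)
            if dE <= 0 or random.random() < math.exp(-dE / T):
                f = fp                       # downhill always; uphill with
            if E(f) < E(best):               # probability exp(-dE / T)
                best = list(f)
        return best

    print(stochastic_local_search([0, 1, 0, 1],
                                  E=lambda f: sum(f) * (4 - sum(f)),
                                  L=[0, 1], T=0.5))

Accepting occasional uphill moves is what allows a stochastic search to escape the local minima at which the deterministic algorithm of Fig. 8.2 would stop.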




