
4.1.5 Convex DA and M-Estimation Models

 

The convexity of the potential function leads to a convex energy (see also [Li et al. 1995]). Convex models have several advantages. Convexity guarantees stability of the solution with respect to the input [Bouman and Sauer 1993] and makes the solution less sensitive to changes in the parameters. Moreover, parameter graduation or annealing is not necessary for convex minimization, which reduces the computational complexity. Besides Shulman and Herve's convex (though not strictly convex) APF (3.34), other convex models also exist.
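To illustrate the practical consequence that no annealing or parameter graduation is needed, the following Python sketch minimizes a small one-dimensional restoration energy with a convex (Huber) potential by plain gradient descent. The signal, the choice of the Huber potential, the weight lam, and the step size are illustrative assumptions only, not quantities from this thesis.

import numpy as np

# Illustrative sketch: a 1-D restoration energy
#   E(x) = sum_i (x_i - d_i)^2 + lam * sum_i g(x_{i+1} - x_i)
# minimized by plain gradient descent.  With a convex potential such as
# the Huber function, every starting point converges to the same
# minimum, so no parameter graduation / annealing is needed.

def huber(t, T=1.0):
    """Convex Huber potential and its derivative."""
    a = np.abs(t)
    g = np.where(a <= T, t**2, 2*T*a - T**2)
    dg = np.where(a <= T, 2*t, 2*T*np.sign(t))
    return g, dg

def energy_and_grad(x, d, lam=2.0):
    diff = np.diff(x)                      # x_{i+1} - x_i
    g, dg = huber(diff)
    E = np.sum((x - d)**2) + lam * np.sum(g)
    grad = 2*(x - d)
    grad[:-1] -= lam * dg                  # d/dx_i     of g(x_{i+1}-x_i)
    grad[1:]  += lam * dg                  # d/dx_{i+1} of the same term
    return E, grad

rng = np.random.default_rng(0)
d = np.concatenate([np.zeros(20), 4*np.ones(20)]) + 0.3*rng.standard_normal(40)

minima = []
for trial in range(3):                     # three very different initializations
    x = 10 * rng.standard_normal(d.size)
    for _ in range(5000):                  # fixed-step gradient descent
        _, grad = energy_and_grad(x, d)
        x -= 0.05 * grad
    minima.append(x)

# All runs land on (numerically) the same solution: no annealing required.
print(max(np.max(np.abs(a - b)) for a in minima for b in minima))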

Hebert and Leahy (1989) examine the following three potential functions

  1. $g_1(\eta) = \eta^2$,
  2. $g_2(\eta) = \dfrac{\eta^2}{1+\eta^2}$,
  3. $g_3(\eta) = \ln(1+\eta^2)$,

in Bayesian reconstruction from emission tomography data and find that the quality of the reconstruction benefits from the third function, which is a compromise between the first and the second priors. Obviously, the first is exactly the quadratic potential function. The second potential function has been used by Geman and McClure in [Geman and McClure 1985]. In fact, both the second and the third potential functions have the property $\lim_{|\eta| \to \infty} g'(\eta) = 0$ and therefore are non-convex.
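To see why, differentiate the second and third functions as written above twice:

$$g_2''(\eta) = \frac{2(1-3\eta^2)}{(1+\eta^2)^3} < 0 \quad\text{for } |\eta| > 1/\sqrt{3}, \qquad g_3''(\eta) = \frac{2(1-\eta^2)}{(1+\eta^2)^2} < 0 \quad\text{for } |\eta| > 1,$$

so neither is convex, while $g_2'(\eta) = 2\eta/(1+\eta^2)^2$ and $g_3'(\eta) = 2\eta/(1+\eta^2)$ both tend to zero as $|\eta| \to \infty$.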

Green (1990) suggests the use of

$$g(\eta) = \ln\left[\cosh(\eta)\right]$$

as the potential function for the same problem. It is approximately quadratic for small $\eta$ and linear for large $|\eta|$, similar to the Huber function. It is convex and satisfies all the properties of the convex DA model.
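The two regimes can be read off directly from the expansions of $\ln\cosh$ (a short worked check of the statement above):

$$\ln\cosh\eta = \frac{\eta^2}{2} - \frac{\eta^4}{12} + O(\eta^6) \quad\text{for small } \eta, \qquad \ln\cosh\eta = |\eta| - \ln 2 + O(e^{-2|\eta|}) \quad\text{for large } |\eta|,$$

and $g''(\eta) = \operatorname{sech}^2\eta > 0$ everywhere, so the function is strictly convex with a bounded derivative $g'(\eta) = \tanh\eta \in (-1,1)$.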

Lange (1990) proposes seven properties for candidate potential functions, two of which guarantee robust penalties (bounded smoothing) and convexity. However, they require the functions to be twice differentiable and strictly convex; these requirements have been relaxed in our model. In that model, a positive, integrable function is specified on $[0,+\infty)$, and the corresponding potential function is recovered from it by integration.

In the DA model, an APF is simply recovered by integrating the AIF once (cf. (3.28)).
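As a concrete illustration (assuming, for the sake of the example, that (3.28) takes the form $g_\gamma(\eta) = \int_0^{\eta} 2t\,h_\gamma(t)\,dt$; the exact expression is given in Chapter 3), an AIF of the form $h_\gamma(\eta) = 1/(1+\eta^2/\gamma)$ yields

$$g_\gamma(\eta) = \int_0^{\eta} \frac{2t}{1 + t^2/\gamma}\,dt = \gamma \ln\left(1 + \frac{\eta^2}{\gamma}\right),$$

so the APF indeed follows from the AIF by a single integration.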

Bouman (1993) emphasizes the importance of convexity in the Gibbs distribution and constructs a scale-invariant generalized Gaussian MRF model by using the potential function

$$g(\eta) = |\eta|^p$$

where $1 \le p \le 2$. When $p=2$ it becomes the quadratic (standard) regularizer, which estimates smooth parameter fields. For $p=1$ the corresponding estimator is the sample median, which allows discontinuities. This class of potential functions is strictly convex but unbounded, though the smoothing over discontinuities is fairly limited when $p$ is small enough. The model controls the degree to which discontinuities are allowed in the solution through the choice of $p$. However, as pointed out by [Stevenson et al. 1994], this choice for MAP estimation does not allow consistent adjustment of the degree to which discontinuities are allowed, and when the data are sparse the model appears to be very sensitive to the selection of $p$.
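The mean/median behaviour of the two extreme exponents can be checked with a minimal numerical sketch; the sample values and the brute-force grid search below are illustrative assumptions only.

import numpy as np

# Illustrative check: the minimizer of  E(x) = sum_i |x - d_i|**p
# is the sample mean for p = 2 and the sample median for p = 1,
# matching the behaviour attributed to the |eta|**p potential above.

d = np.array([0.9, 1.1, 1.0, 1.2, 5.0])        # synthetic sample with one outlier
xs = np.linspace(d.min(), d.max(), 200001)     # dense grid of candidate estimates

for p in (2.0, 1.0):
    E = np.abs(xs[:, None] - d[None, :]) ** p  # energy of every candidate
    x_hat = xs[E.sum(axis=1).argmin()]
    print(f"p = {p}: minimizer ~ {x_hat:.3f}")

print("mean   =", d.mean())                    # ~ 1.84, pulled by the outlier
print("median =", np.median(d))                # 1.1, robust to the outlier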

Stevenson et al. (1994) present a systematic study of both convex and nonconvex regularization and summarize four desirable properties for well-behaved Gibbs models: convexity, symmetry, restricted smoothing, and adjustability of the degree to which discontinuities are allowed in the solution. Restricted smoothing here only requires that $g(\eta)$ grow more slowly than the quadratic function for large $|\eta|$, which is more relaxed than the requirement of bounded smoothing in the DA model. In that work, a class of convex potential functions is defined as

$$g(\eta) = \begin{cases} |\eta|^p & |\eta| \le T \\[4pt] \dfrac{p}{q}\, T^{\,p-q}\, |\eta|^q - \left(\dfrac{p}{q} - 1\right) T^{\,p} & |\eta| > T \end{cases}$$

where $T > 0$ is a threshold and generally $1 \le q \le p \le 2$. When $q = p$ it reduces to Bouman's model [Bouman and Sauer 1993], and when $p = 2$ and $q = 1$ it is the Huber function. There are variations on the choice of $p$ and $q$, and the suitable choice depends on the specific application and the a priori information that is known. In general, when $p$ is chosen near $2$ and $q$ around $1$, an appropriate threshold $T$ leads to a quite satisfactory solution.
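These special cases can be checked numerically; the sketch below assumes the piecewise form written above and verifies that $q = p$ reproduces Bouman's $|\eta|^p$ potential and that $p = 2$, $q = 1$ reproduces the Huber function.

import numpy as np

# Minimal sketch assuming the piecewise potential written above:
#   g(eta) = |eta|**p                                   for |eta| <= T
#          = (p/q)*T**(p-q)*|eta|**q - (p/q - 1)*T**p   for |eta| >  T

def g(eta, p, q, T):
    a = np.abs(eta)
    inner = a ** p
    outer = (p / q) * T ** (p - q) * a ** q - (p / q - 1.0) * T ** p
    return np.where(a <= T, inner, outer)

def huber(eta, T):
    a = np.abs(eta)
    return np.where(a <= T, eta ** 2, 2.0 * T * a - T ** 2)

eta = np.linspace(-5.0, 5.0, 1001)
T = 1.0

# q = p: the two branches coincide, i.e. Bouman's |eta|**p model.
print(np.allclose(g(eta, 1.5, 1.5, T), np.abs(eta) ** 1.5))   # True

# p = 2, q = 1: the classical Huber function.
print(np.allclose(g(eta, 2.0, 1.0, T), huber(eta, T)))        # True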


