
Chapter 6

MRF Parameter Estimation


A probability distribution has two essential elements: the form of the function and the parameters involved. For example, the joint distribution of an MRF is characterized by a Gibbs function with a set of clique potential parameters, and the noise by a zero-mean Gaussian distribution parameterized by a variance. A probability model is incomplete if the involved parameters are not all specified, even when the functional form of the distribution is known. While formulating the forms of objective functions, such as the posterior distribution, has long been a subject of research in vision, estimating the involved parameters has a much shorter history. Generally, estimation is performed by optimizing a statistical criterion, using existing techniques such as maximum likelihood, coding, pseudo-likelihood, expectation-maximization, and the Bayes method.
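In symbols, the maximum likelihood (ML) criterion mentioned above can be stated as follows; this is standard notation rather than a formula taken verbatim from this chapter, with $\theta$ denoting the parameters and $d$ the observed data, as introduced below:

    \theta^{*} = \arg\max_{\theta} P(d \mid \theta) = \arg\max_{\theta} \ln P(d \mid \theta)

The other criteria listed above replace or approximate $P(d \mid \theta)$; the pseudo-likelihood, for example, replaces the joint likelihood by a product of local conditional likelihoods.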

The problem of parameter estimation can have several levels of complexity. The simplest is to estimate the parameters, denoted by $\theta$, of a single MRF, $F$, from the data $d$, which is due to a clean realization, $f$, of that MRF. Additional treatment is needed if the data is noisy. When the noise parameters are unknown, they have to be estimated, too, along with the MRF parameters. The complexity increases when the given data is due to realizations of more than one MRF, e.g. when multiple textures are present in the image data and the data is unsegmented. Since the parameters of an MRF have to be estimated from the data, partitioning the data into distinct MRFs becomes part of the problem. The problem is more complicated still when the number of underlying MRFs is unknown and has to be determined. Furthermore, the order of the neighborhood system and the largest size of the cliques for a Gibbs distribution can also be among the parameters to be estimated.

The chief difficulty in ML estimation for MRFs is the following: the partition function $Z$ in the Gibbs distribution $P(f \mid \theta) = Z(\theta)^{-1} e^{-U(f \mid \theta)}$ is itself a function of $\theta$ and has to be taken into consideration. Since $Z(\theta) = \sum_{f} e^{-U(f \mid \theta)}$ is calculated by summing over all possible configurations -- for an image of $m$ sites, each taking one of $M$ labels, the sum has $M^m$ terms -- maximizing $P(f \mid \theta)$ with respect to $\theta$ becomes intractable, in general, even for small problems.
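To see how the pseudo-likelihood mentioned earlier sidesteps this difficulty, below is a minimal sketch, not taken from the book, for a homogeneous pairwise MRF of Ising form on a 4-neighborhood; the energy convention, the parameter name beta, and the labels {-1, +1} are assumptions made for illustration. Each local conditional probability normalizes over the two labels of a single site, so the global partition function $Z$ never has to be computed:

import numpy as np

def neg_log_pseudo_likelihood(f, beta):
    """Negative log pseudo-likelihood of a binary (+/-1) image `f`
    under a homogeneous pairwise (Ising-form) MRF with parameter `beta`.

    PL(beta) = prod_i P(f_i | f_{N_i}); each conditional normalizes
    over the two labels of one site only, so the global partition
    function Z never appears.
    """
    f = np.asarray(f, dtype=float)
    # Sum of the 4-neighbors of every site (zero padding at the border).
    s = np.zeros_like(f)
    s[1:, :] += f[:-1, :]
    s[:-1, :] += f[1:, :]
    s[:, 1:] += f[:, :-1]
    s[:, :-1] += f[:, 1:]
    # P(f_i = x | neighbors) is proportional to exp(beta * x * s_i),
    # x in {-1, +1}, hence
    # log P(f_i | ...) = beta * f_i * s_i - log(exp(beta*s_i) + exp(-beta*s_i)).
    log_p = beta * f * s - np.logaddexp(beta * s, -beta * s)
    return -log_p.sum()

# Usage: a crude 1-D grid search for beta on a sample image.
rng = np.random.default_rng(0)
f = np.where(rng.random((32, 32)) < 0.5, 1.0, -1.0)
betas = np.linspace(-1.0, 1.0, 41)
best = min(betas, key=lambda b: neg_log_pseudo_likelihood(f, b))
print("pseudo-likelihood estimate of beta:", best)

For the i.i.d. random image used here the estimate should be close to zero, since the sample has no spatial coupling; for a genuine MRF texture the grid search would recover a nonzero coupling.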



Figure 6.1: Good prior information produces good results. (Row 1) True texture. (Row 2) Images degraded by noise. (Row 3) Restoration results with the exact parameters for generating the images in row 2. (Rows 4 and 5) Images restored with incorrect parameters (see text). From (Dubes and Jain 1989) with permission; © 1989 Carfax.

The example in Fig. 6.1 illustrates the importance of correct model parameters if MRF labeling procedures are to produce good results. The binary textures in row 1 are generated using the MLL model (1.52), with one parameter setting for the left texture and another for the right. They have pixel values of 100 and 160. Independent, identically distributed zero-mean Gaussian noise is added to the pixel values, giving the degraded images in row 2. Row 3 shows the results obtained using the true parameters of the textures and the noise. Since such accurate information is usually unavailable in practice, results are generally not this good. Rows 4 and 5 show results obtained using two sets of incorrect parameters, respectively. These results are unacceptable.
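For concreteness, the following sketch, again mine rather than the authors', mimics the construction behind Fig. 6.1: it samples a binary texture from a pairwise isotropic model of the MLL/Ising form by single-site Gibbs sampling, maps the two labels to the gray values 100 and 160, and adds i.i.d. zero-mean Gaussian noise. The values beta = 0.7 and sigma = 30 are illustrative assumptions, not the parameters used for the figure:

import numpy as np

def sample_mll_texture(shape, beta, sweeps=50, seed=0):
    """Sample a binary texture from a homogeneous pairwise MRF
    (isotropic MLL / Ising form) by single-site Gibbs sampling."""
    rng = np.random.default_rng(seed)
    f = rng.integers(0, 2, size=shape) * 2 - 1   # labels in {-1, +1}
    H, W = shape
    for _ in range(sweeps):
        for i in range(H):
            for j in range(W):
                # Sum of the 4-neighbors (missing neighbors count as 0).
                s = 0
                if i > 0:     s += f[i - 1, j]
                if i < H - 1: s += f[i + 1, j]
                if j > 0:     s += f[i, j - 1]
                if j < W - 1: s += f[i, j + 1]
                # P(f_i = +1 | neighbors) = 1 / (1 + exp(-2*beta*s))
                p_plus = 1.0 / (1.0 + np.exp(-2.0 * beta * s))
                f[i, j] = 1 if rng.random() < p_plus else -1
    return f

# Binary texture -> gray levels 100 and 160, then i.i.d. zero-mean
# Gaussian noise, as in the degradation described for Fig. 6.1.
rng = np.random.default_rng(1)
f = sample_mll_texture((64, 64), beta=0.7)        # beta is illustrative
img = np.where(f > 0, 160.0, 100.0)
noisy = img + rng.normal(0.0, 30.0, img.shape)    # sigma = 30 is illustrative

Restoring `img` from `noisy` with the true beta and sigma corresponds to row 3 of the figure; deliberately perturbing them reproduces the kind of failure shown in rows 4 and 5.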




