From Learning Models of Natural Image Patches to Whole Image Restoration
These are my notes on Zoran & Weiss (2011).
Background
Learning priors over natural images is quite valuable
Priors are used in multiple restoration tasks, such as denoising and inpainting
The task is hard because of the high dimensionality of whole images
To reduce computation, earlier work therefore learned priors over small patches only
3 Questions to Answer in the Paper
Do patch priors that assign higher likelihoods yield better patch restoration? (better patch priors → better patch restoration?)
Do patch priors that assign higher likelihoods yield better image restoration? (better patch priors → better image restoration?)
Can we learn better patch priors?
Do better patch priors lead to better patch restoration?
The authors trained several existing models on 50,000 8×8 natural image patches
Calculated each model's log-likelihood on a held-out set of natural image patches (the measure of a better prior)
Measured each model's denoising performance using MAP estimates (the measure of better patch restoration; a sketch of the single-Gaussian special case follows below)
Found a strong correlation between log-likelihood and denoising performance
Answer: Yes
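For intuition about the MAP denoising measure: in the special case of a single zero-mean Gaussian prior, the MAP estimate of the clean patch has a closed form (the Wiener filter). A minimal numpy sketch of that special case, not the paper's code:

```python
import numpy as np

def map_denoise_gaussian(y, cov, sigma):
    """MAP estimate of a clean patch under a zero-mean Gaussian prior.

    y     : (d,) noisy patch, flattened (e.g. d = 64 for 8x8)
    cov   : (d, d) prior covariance of clean patches
    sigma : standard deviation of the additive Gaussian noise

    For a Gaussian prior the MAP coincides with the posterior mean and
    reduces to the Wiener filter: cov @ (cov + sigma^2 I)^{-1} @ y.
    """
    d = y.shape[0]
    return cov @ np.linalg.solve(cov + sigma**2 * np.eye(d), y)
```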
Do better patch priors lead to better image restoration?
First, how do we restore a whole image from a patch prior?
Three earlier techniques are mentioned; they are simple but weak
The authors describe a new framework that maximizes the Expected Patch Log Likelihood (EPLL) while ensuring the restored image stays close to the corrupted one
Minimize \(f_p(\mathbf{x} \mid \mathbf{y}) = \frac{\lambda}{2} \lVert \mathbf{A}\mathbf{x} - \mathbf{y} \rVert^2 - EPLL_p(\mathbf{x})\)
\(EPLL_p(\mathbf{x}) = \sum_i \log p(\mathbf{P}_i\mathbf{x})\), where \(\mathbf{P}_i\) is the matrix that extracts the \(i\)-th patch from the image
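A direct (deliberately slow) sketch of the EPLL sum, assuming a grayscale image as a 2-D array and any patch log-density `log_p`, such as a learned GMM's:

```python
import numpy as np

def epll(x, log_p, patch_size=8):
    """Sum log p(P_i x) over every overlapping patch_size x patch_size window."""
    h, w = x.shape
    total = 0.0
    for i in range(h - patch_size + 1):
        for j in range(w - patch_size + 1):
            total += log_p(x[i:i + patch_size, j:j + patch_size].ravel())
    return total
```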
Optimization is done with a method called "half quadratic splitting", which the paper describes in detail: auxiliary patch variables \(\{\mathbf{z}_i\}\) are introduced, and the method alternates between updating each \(\mathbf{z}_i\) (an approximate MAP under the patch prior) and updating \(\mathbf{x}\) (a least-squares problem), while a penalty weight \(\beta\) is gradually increased
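A minimal sketch of this alternating scheme for the denoising case (\(\mathbf{A} = \mathbf{I}\)), where the \(\mathbf{x}\)-update reduces to a pixelwise weighted average. `patch_map` and the `betas` schedule are placeholders, not the paper's exact settings:

```python
import numpy as np

def hqs_denoise(y, patch_map, lam, betas, psz=8):
    """Half quadratic splitting for denoising (A = I).

    patch_map(z, beta) : returns an (approximate) MAP clean patch for a
                         noisy patch z with noise variance 1/beta.
    betas              : increasing penalty schedule, e.g. multiples of 1/sigma^2.
    """
    x = y.astype(float)
    h, w = y.shape
    for beta in betas:
        num = lam * y.astype(float)         # lam*y + beta * sum_i P_i^T z_i
        den = np.full(y.shape, lam, float)  # lam   + beta * (patch overlap count)
        for i in range(h - psz + 1):
            for j in range(w - psz + 1):
                z = patch_map(x[i:i + psz, j:j + psz], beta)
                num[i:i + psz, j:j + psz] += beta * z
                den[i:i + psz, j:j + psz] += beta
        x = num / den                       # closed-form x-update, pixel by pixel
    return x
```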
The authors extend this restoration framework to denoising, deblurring, and inpainting by changing the corruption matrix \(\mathbf{A}\): the identity for denoising, a convolution with the blur kernel for deblurring, and a pixel mask for inpainting.
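A toy sketch of what \(\mathbf{A}\) looks like for each task on a flattened image; the random inpainting mask is a made-up example:

```python
import numpy as np
from scipy import sparse

n = 64 * 64                                      # pixels in a small flattened image
A_denoise = sparse.identity(n)                   # denoising: A = I
keep = (np.random.rand(n) > 0.2).astype(float)   # hypothetical mask: keep 80% of pixels
A_inpaint = sparse.diags(keep)                   # inpainting: missing pixels zeroed out
# Deblurring: A encodes convolution with the blur kernel, i.e. each row
# holds the kernel weights centered on one pixel (construction omitted).
```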
The authors contrast this with other patch-based techniques
A common step in those is restoring each patch independently and averaging the overlapping results to form the final estimate of the image
The authors don't do that: EPLL optimizes the whole image so that every overlapping patch is likely under the prior
Used the priors from the previous section
Used the EPLL framework to restore 5 corrupted images
Measured both patch likelihood and restored-image quality (PSNR; a reference implementation is sketched below)
Found a strong correlation between patch likelihood and PSNR
Answer: Yes
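For reference, the PSNR measure used above, assuming an 8-bit intensity range:

```python
import numpy as np

def psnr(clean, restored, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means better restoration."""
    mse = np.mean((clean.astype(float) - restored.astype(float)) ** 2)
    return 10 * np.log10(max_val**2 / mse)
```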
Can we learn better patch priors?
The authors use a Gaussian Mixture Model (GMM) with full, unconstrained covariance matrices
Learning is done with the Expectation-Maximization (EM) algorithm
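An off-the-shelf stand-in for this training step using scikit-learn's EM implementation (the paper trains its own GMM; it reports using 200 components):

```python
from sklearn.mixture import GaussianMixture

# patches: (N, 64) array of flattened 8x8 patches with their mean (DC) removed
gmm = GaussianMixture(n_components=200,       # number of mixture components
                      covariance_type='full', # the "unconstrained" covariances
                      max_iter=500)
gmm.fit(patches)
log_p = gmm.score_samples  # per-patch log p(x), usable as the prior above
```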
Calculating the log-likelihood of a patch is done through \(\log p(\mathbf{x}) = \log \left( \sum_{k=1}^{K} \pi_k \, \mathcal{N}(\mathbf{x} \mid \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k) \right)\)
\(\pi_k\) are the mixing weights, and \(\boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k\) are the means and covariance matrices of the components
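The same formula computed in log space for numerical stability (direct summation underflows for 64-dimensional patches); function and variable names are mine:

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.special import logsumexp

def gmm_log_likelihood(x, pis, mus, covs):
    """log p(x) = logsumexp_k [log pi_k + log N(x | mu_k, Sigma_k)]."""
    return logsumexp([np.log(pi) + multivariate_normal.logpdf(x, mu, cov)
                      for pi, mu, cov in zip(pis, mus, covs)])
```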
The BLS estimate (the mean, or expected value, of the posterior) can be calculated easily through a closed form
The MAP estimate (the mode, or maximum, of the posterior) is intractable, but the authors describe a way to approximate it
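Sketches of both estimators for a noisy patch \(\mathbf{y} = \mathbf{x} + \mathbf{n}\), \(\mathbf{n} \sim \mathcal{N}(0, \sigma^2 \mathbf{I})\): the BLS is a responsibility-weighted sum of per-component Wiener estimates, and the approximation shown keeps only the single most responsible component (in the spirit of the paper's procedure; names are mine):

```python
import numpy as np
from scipy.stats import multivariate_normal

def _log_resp(y, pis, mus, covs, noise):
    # log of pi_k * N(y | mu_k, Sigma_k + sigma^2 I), up to a shared constant
    return np.array([np.log(pi) + multivariate_normal.logpdf(y, mu, cov + noise)
                     for pi, mu, cov in zip(pis, mus, covs)])

def _wiener(y, mu, cov, noise):
    # posterior mean under one Gaussian component: the Wiener filter
    return mu + cov @ np.linalg.solve(cov + noise, y - mu)

def bls_denoise(y, pis, mus, covs, sigma):
    """BLS (posterior mean): weighted sum of per-component Wiener estimates."""
    noise = sigma**2 * np.eye(y.shape[0])
    log_w = _log_resp(y, pis, mus, covs, noise)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return sum(wk * _wiener(y, mu, cov, noise)
               for wk, mu, cov in zip(w, mus, covs))

def approx_map_denoise(y, pis, mus, covs, sigma):
    """Approximate MAP: Wiener-filter with only the most responsible component."""
    noise = sigma**2 * np.eye(y.shape[0])
    k = int(np.argmax(_log_resp(y, pis, mus, covs, noise)))
    return _wiener(y, mus[k], covs[k], noise)
```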
It outperforms other patch-based models in log-likelihood, patch restoration, and image restoration
It outperforms SOTA generic-prior models in image denoising
It is competitive with image-specific SOTA models in image denoising