Image Restoration in Fluorescence Microscopy

GMP van Kempen

    Research output: Thesis › Dissertation (TU Delft)


Promotor: Prof. dr. I.T. Young
Toegevoegd promotor (co-supervisor): Prof. dr. ir. L.J. van Vliet

Summary

This thesis presents iterative, non-linear image restoration techniques for application to three-dimensional fluorescence microscope images. The goal of this research is to gain a better understanding of the behavior of non-linear image restoration algorithms and to develop novel methods that improve their performance, so that measurements can be performed more accurately on three-dimensional fluorescence images.

The formation and acquisition of a three-dimensional image by means of (confocal) fluorescence microscopy blurs the image and disturbs it with noise. These distortions hide fine details in the image, hampering both its visual and its quantitative analysis. The purpose of image restoration is to invert this blurring and to suppress the noise, restoring the fine details in the image and thereby improving its analysis.

In the first chapter we introduce the principles of fluorescence microscopy and discuss the properties of three-dimensional image formation in a fluorescence microscope. In the second part of this chapter we introduce the principles of image restoration. We give an overview of various restoration techniques used in fluorescence microscopy and discuss the influence of regularization and of the background estimate on the performance of non-linear image restoration algorithms.

Chapter 2 describes the image formation in a fluorescence microscope based on the wave description and the quantum nature of light. Using the wave description of light, the finite resolution of an image obtained with a microscope is derived. We discuss the conditions under which a fluorescence microscope can be modeled as a linear, shift-invariant system.
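The linear, shift-invariant imaging model of chapter 2 can be illustrated with a small simulation: blur an object with the point spread function, add a constant background, and apply Poisson photon noise. A minimal sketch in Python; the function `simulate_acquisition` and its parameter names are ours, not from the thesis.

```python
import numpy as np

def simulate_acquisition(obj, psf, background=0.0, rng=None):
    """Simulate fluorescence image formation as a linear, shift-invariant
    system: blur the object with the PSF, add a constant background, and
    apply Poisson (photon-counting) noise.  (Illustrative sketch; the name
    and signature are assumptions, not thesis code.)"""
    rng = np.random.default_rng() if rng is None else rng
    # Circular convolution via the FFT; the PSF is assumed centered and
    # normalized, so ifftshift moves its peak to the origin.
    blurred = np.real(np.fft.ifftn(np.fft.fftn(obj) *
                                   np.fft.fftn(np.fft.ifftshift(psf))))
    blurred = np.clip(blurred + background, 0.0, None)
    return rng.poisson(blurred).astype(float)

# Tiny 1-D example: a point source blurred by a Gaussian PSF.
x = np.arange(64)
obj = np.zeros(64)
obj[32] = 1000.0
psf = np.exp(-0.5 * ((x - 32) / 2.0) ** 2)
psf /= psf.sum()
img = simulate_acquisition(obj, psf, background=5.0)
```

The same forward model (PSF blur plus Poisson noise at a chosen signal level) is what the simulated test images of chapter 4 are built on.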
Diffraction theory is used to model the field of incident light near focus. Using this model, we derive the image formation in a general fluorescence microscope, having both a finite-sized illumination and a finite-sized detection aperture. The image formation in a confocal fluorescence microscope and that in a wide-field fluorescence microscope are derived as special cases. We discuss sampling theory to formulate the conditions for an error-free conversion of an analog image into a digital representation. Using the quantum nature of light, we describe the noise properties of a light detection system. Both intrinsic and extrinsic noise sources are treated, as well as the photon-limited characteristics of scientific-grade light detectors. Combining both descriptions of light, we model the image acquired by a fluorescence microscope as the original image blurred by a translation-invariant point spread function and distorted by noise.

In chapter 3 several methods for image restoration are discussed. The Wiener filter is the linear filter that minimizes the mean square error between the original image and its restored estimate, assuming that the image is distorted by additive Gaussian noise. The Tikhonov-Miller filter is the linear filter found when minimizing the Tikhonov functional: the squared difference between the acquired image and a blurred estimate of the original object, regularized by a Tikhonov energy bound. Both the Wiener filter and the Tikhonov-Miller filter are linear operations on the recorded image. Therefore they can neither restrict the domain in which the solution is to be found, nor restore information at frequencies that are set to zero by the image formation process. These restrictions are tackled by algorithms discussed in the second part of this chapter. Both the iterative constrained Tikhonov-Miller algorithm and the Carrington algorithm iteratively minimize the Tikhonov functional.
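In the Fourier domain both linear filters take a closed form. A minimal sketch of the Tikhonov-Miller filter with the identity as regularization operator, in which case it coincides with a Wiener-type filter whose constant `reg` plays the role of a noise-to-signal ratio; the function name and the 1-D example are ours, not thesis code.

```python
import numpy as np

def tikhonov_miller_filter(image, psf, reg):
    """Linear Tikhonov-Miller restoration in the Fourier domain with the
    identity as regularization operator:
        F_hat = conj(H) * G / (|H|^2 + reg)
    For this choice the filter has the Wiener form, with `reg` acting as a
    constant noise-to-signal ratio.  (Name and example are illustrative.)"""
    G = np.fft.fftn(image)
    H = np.fft.fftn(np.fft.ifftshift(psf))  # PSF assumed centered, normalized
    return np.real(np.fft.ifftn(np.conj(H) * G / (np.abs(H) ** 2 + reg)))

# 1-D example: restore a noiseless, Gaussian-blurred point source.
x = np.arange(64)
psf = np.exp(-0.5 * ((x - 32) / 2.0) ** 2)
psf /= psf.sum()
obj = np.zeros(64)
obj[20] = 100.0
blurred = np.real(np.fft.ifftn(np.fft.fftn(obj) *
                               np.fft.fftn(np.fft.ifftshift(psf))))
restored = tikhonov_miller_filter(blurred, psf, reg=1e-6)
```

Because the filter acts pointwise on Fourier coefficients, it can only reweight frequencies that survive the imaging process; where the transfer function is zero, nothing is recovered, which is precisely the limitation of linear methods noted above.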
They differ, however, in the way the non-negativity constraint is incorporated. We conclude this chapter with a discussion of the Richardson-Lucy algorithm. This iterative algorithm finds the maximum likelihood solution using the EM algorithm when the acquired image is distorted by Poisson noise.

Chapter 4 deals with various aspects that play a role when testing and comparing iterative image restoration algorithms. We start by defining two performance measures, the mean-square error and the I-divergence, that we use for measuring and comparing the performance of image restoration algorithms. Many of the tests we present have been performed on simulated images. We discuss the properties of objects generated using an analytical description of their Fourier transform. Images are created by distorting these objects with Poisson noise at a predetermined signal-to-noise ratio. We continue with a comparison of different methods to determine the regularization parameter of the Tikhonov functional. These methods use different criteria to balance the fit of the restored data to the measured image against the regularization imposed on the restored data. The last two sections deal with the iterative character of the discussed algorithms. In the first, we compare different choices for the first estimate; in the second, stop criteria for iterative optimizations are discussed.

Chapter 5 discusses several applications of image restoration in fluorescence microscopy. The first section tries to give some insight into why constrained image restoration algorithms perform better than linear algorithms. We measure the performance of these algorithms as a function of the background estimate they use. The performance measured outside the microscope's bandwidth is used to quantify the "superresolution" capabilities of non-linear constrained restoration algorithms.
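The Richardson-Lucy iteration described above can be sketched compactly with FFT-based circular convolutions. The multiplicative update keeps the estimate non-negative by construction and conserves the total photon count; the function name, the flat first estimate, and the 1-D example are illustrative choices, not the thesis implementation.

```python
import numpy as np

def richardson_lucy(image, psf, n_iter=50, background=0.0):
    """EM/Richardson-Lucy iteration for Poisson noise:
        f <- f * correlate(psf, image / (convolve(psf, f) + background))
    implemented with FFT-based circular convolutions.  (Sketch; names and
    the flat first estimate are our assumptions.)"""
    H = np.fft.fftn(np.fft.ifftshift(psf))   # PSF centered, normalized
    conv = lambda f, K: np.real(np.fft.ifftn(np.fft.fftn(f) * K))
    f = np.full_like(image, image.mean())    # flat, positive first estimate
    for _ in range(n_iter):
        estimate = conv(f, H) + background
        ratio = image / np.maximum(estimate, 1e-12)
        f = f * conv(ratio, np.conj(H))      # conj(H): mirrored-PSF correlation
    return f

# 1-D example: sharpen a noiseless, Gaussian-blurred point source.
x = np.arange(64)
psf = np.exp(-0.5 * ((x - 32) / 2.0) ** 2)
psf /= psf.sum()
obj = np.zeros(64)
obj[20] = 100.0
blurred = np.clip(np.real(np.fft.ifftn(np.fft.fftn(obj) *
                                       np.fft.fftn(np.fft.ifftshift(psf)))),
                  0.0, None)
restored = richardson_lucy(blurred, psf, n_iter=100)
```

Since every factor in the update is non-negative, no explicit projection onto the non-negative orthant is needed, in contrast to the iterative constrained Tikhonov-Miller and Carrington algorithms.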
In the next section we show how the performance of image restoration algorithms can be improved by reducing the influence of noise on the restoration. We continue with a study of the influence of image restoration prior to quantitative image analysis. It shows that the accuracy of integrated intensity measurements performed on a spherical object in the neighborhood of another object is increased considerably. The final section of this chapter presents the results of applying image restoration to confocal fluorescence images of the microscopic network structure of gel-like food samples.

The most computationally intensive part of the image restoration algorithms used in this thesis is the convolution with the point spread function. This convolution can be implemented efficiently using the fast Fourier transform, which is an example of the class of separable image processing algorithms. In chapter 6 we show that a straightforward implementation of separable image processing algorithms on modern workstations gives the worst possible performance with respect to data cache utilization on large images. Modern workstations are equipped with fast cache memory to enable the CPU to access the relatively slow main memory without noticeable delay. However, two typical cache characteristics, limited associativity and the power-of-two-based mapping of memory addresses onto cache lines, severely hamper the performance of separable image processing algorithms. We present three methods based on transposing the image to improve the data cache usage for both write-through and write-back caches. Experiments with a 3x3 uniform filter and the fast Fourier transform, performed on a range of Sun workstations, show that the proposed methods considerably improve performance.
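The transposition idea of chapter 6 can be illustrated in a few lines: implement a separable 3x3 uniform filter as two row-wise passes with a transpose in between, so that each pass traverses memory contiguously instead of striding column-wise through the image. A sketch in Python, where interpreter overhead masks the cache effect itself but the access pattern is the point; the helper names are ours.

```python
import numpy as np

def separable_uniform_3x3(img):
    """Separable 3x3 uniform filter via two row-wise passes:
    filter the rows, transpose, filter the rows again, transpose back.
    Each pass then walks memory contiguously, the cache-friendly access
    pattern advocated in chapter 6.  (Illustrative sketch; edge pixels
    are handled by replication.)"""
    def rows_mean3(a):
        out = np.empty(a.shape, dtype=float)
        padded = np.pad(a, ((0, 0), (1, 1)), mode="edge")
        for i in range(a.shape[0]):          # contiguous row traversal
            r = padded[i]
            out[i] = (r[:-2] + r[1:-1] + r[2:]) / 3.0
        return out
    return rows_mean3(rows_mean3(img).T).T

# Example: an impulse spreads into a 3x3 block of value 1/9.
imp = np.zeros((5, 5))
imp[2, 2] = 1.0
response = separable_uniform_3x3(imp)
```

A column-wise second pass would instead touch one element per cache line on each step; with power-of-two image widths, those elements can also map onto the same few cache sets, which is exactly the pathological case the chapter analyzes.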
    Original language: Undefined/Unknown
    Qualification: Doctor of Philosophy
    Awarding Institution:
    • Delft University of Technology
    Supervisors:
    • Young, I.T., Supervisor
    • van Vliet, L.J., Supervisor
    Award date: 11 Jan 1999
    Place of Publication: Delft
    Print ISBNs: 90-407-1792-3
    Publication status: Published - 1999