Implementing a fast Gibson-Lanni PSF solver in Python

Many techniques in modern fluorescence microscopy--such as deconvolution and single molecule localization microscopy--rely on the computation of accurate microscope point spread functions (PSFs). Several models exist for this purpose with varying degrees of accuracy and generality. One such model, the so-called Gibson-Lanni model, approximates the PSF of an immersion microscope objective by accounting for the different refractive indexes of the immersion medium, coverslip, and sample, as well as variations in the thicknesses of the different layers. This post describes a Python implementation of a fast computational approach to the Gibson-Lanni PSF that was presented in Fast and accurate three-dimensional point spread function computation for fluorescence microscopy by Li, Xue, and Blu. The approach is fast because it avoids any explicit integration of the Kirchhoff diffraction integral, such as is done in PSF Generator. The prototype is based on the manuscript and the associated MATLAB code at http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/fastPSF.php

References

  1. J. Li, F. Xue, and T. Blu, "Fast and accurate three-dimensional point spread function computation for fluorescence microscopy." Manuscript and MATLAB code: http://www.ee.cuhk.edu.hk/~tblu/monsite/phps/fastPSF.php

In [1]:
import sys
%pylab inline
import scipy.special
from scipy.interpolate import interp1d
from matplotlib import animation
plt.style.use('dark_background')
plt.rcParams['figure.facecolor'] = '#272b30'
plt.rcParams['image.cmap'] = 'viridis'

print('Python {}\n'.format(sys.version))
print('NumPy\t\t{}'.format(np.__version__))
print('matplotlib\t{}'.format(matplotlib.__version__))
print('SciPy\t\t{}'.format(scipy.__version__))
Populating the interactive namespace from numpy and matplotlib
Python 3.5.2 |Continuum Analytics, Inc.| (default, Jul  2 2016, 17:53:06) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)]

NumPy		1.13.3
matplotlib	2.1.0
SciPy		0.19.1

Simulation setup

Define the simulation parameters

The Gibson-Lanni PSF model accounts for differences between design parameters and real parameters that describe the fluorescence microscope, such as the refractive indexes and thicknesses of the immersion medium, coverslip, and sample. For more information, please see the above references or, if you do not have access to the publications, the description of the model here: http://bigwww.epfl.ch/algorithms/psfgenerator/

In [2]:
# Image properties
# Size of the PSF array, pixels
size_x = 256
size_y = 256
size_z = 128

# Precision control
num_basis    = 100  # Number of rescaled Bessels that approximate the phase function
num_samples  = 1000 # Number of pupil samples along radial direction
oversampling = 2    # Defines the upsampling ratio on the image space grid for computations

# Microscope parameters
NA          = 1.4
wavelength  = 0.610 # microns
M           = 100   # magnification
ns          = 1.33  # specimen refractive index (RI)
ng0         = 1.5   # coverslip RI design value
ng          = 1.5   # coverslip RI experimental value
ni0         = 1.5   # immersion medium RI design value
ni          = 1.5   # immersion medium RI experimental value
ti0         = 150   # microns, working distance (immersion medium thickness) design value
tg0         = 170   # microns, coverslip thickness design value
tg          = 170   # microns, coverslip thickness experimental value
res_lateral = 0.1   # microns
res_axial   = 0.25  # microns
pZ          = 2     # microns, particle distance from coverslip

# Scaling factors for the Fourier-Bessel series expansion
min_wavelength = 0.436 # microns
scaling_factor = NA * (3 * np.arange(1, num_basis + 1) - 2) * min_wavelength / wavelength

Create the coordinate systems

In [3]:
# Place the origin at the center of the final PSF array
x0 = (size_x - 1) / 2
y0 = (size_y - 1) / 2

# Find the maximum possible radius coordinate of the PSF array by finding the distance
# from the center of the array to a corner
max_radius = round(np.sqrt((size_x - x0) * (size_x - x0) + (size_y - y0) * (size_y - y0))) + 1

# Radial coordinates, image space
r = res_lateral * np.arange(0, oversampling * max_radius) / oversampling

# Radial coordinates, pupil space
a = min([NA, ns, ni, ni0, ng, ng0]) / NA
rho = np.linspace(0, a, num_samples)

# Stage displacements away from best focus
z = res_axial * np.arange(-size_z / 2, size_z /2) + res_axial / 2

Step 1: Approximate the pupil phase with a Fourier-Bessel series

z.reshape(-1,1) converts z from a 1D array into a column vector so that it may be broadcast across rho.

The coefficients C are found by a least-squares solution to the equation

\begin{equation*} \mathbf{\phi} \left( \rho , z \right)= \mathbf{J} \left( \rho \right) \mathbf{c} \left( z \right) \end{equation*}

\( \mathbf{c} \) has dimensions num_basis \( \times \) len(z). The J array has dimensions num_basis \( \times \) len(rho) and the phase array has dimensions len(z) \( \times \) len(rho). The J and phase arrays are therefore transposed to get the dimensions right in the call to np.linalg.lstsq.

In [4]:
# Define the wavefront aberration
OPDs = pZ * np.sqrt(ns * ns - NA * NA * rho * rho) # OPD in the sample
OPDi = (z.reshape(-1,1) + ti0) * np.sqrt(ni * ni - NA * NA * rho * rho) - ti0 * np.sqrt(ni0 * ni0 - NA * NA * rho * rho) # OPD in the immersion medium
OPDg = tg * np.sqrt(ng * ng - NA * NA * rho * rho) - tg0 * np.sqrt(ng0 * ng0 - NA * NA * rho * rho) # OPD in the coverslip
W    = 2 * np.pi / wavelength * (OPDs + OPDi + OPDg)

# Sample the phase
# Shape is (number of z samples by number of rho samples)
phase = np.cos(W) + 1j * np.sin(W)

# Define the basis of Bessel functions
# Shape is (number of basis functions by number of rho samples)
J = scipy.special.jv(0, scaling_factor.reshape(-1, 1) * rho)

# Compute the approximation to the sampled pupil phase by finding the least squares
# solution to the complex coefficients of the Fourier-Bessel expansion.
# Shape of C is (number of basis functions by number of z samples).
# Note the matrix transposes to get the dimensions correct.
C, residuals, _, _ = np.linalg.lstsq(J.T, phase.T)

Compare the function to its Fourier-Bessel expansion

In [5]:
# Which z-plane to compute
z0 = 24

# The Fourier-Bessel approximation
est = J.T.dot(C[:,z0])

fig, ax = plt.subplots(ncols=2, sharey=True, figsize=(10,5))
ax[0].plot(rho, np.real(phase[z0, :]), label=r'$ \exp{ \left[ jW \left( \rho \right) \right] }$')
ax[0].plot(rho, np.real(est), '--', label='Fourier-Bessel')
ax[0].set_xlabel(r'$\rho$')
ax[0].set_title('Real')
ax[0].legend(loc='upper left')

ax[1].plot(rho, np.imag(phase[z0, :]))
ax[1].plot(rho, np.imag(est), '--')
ax[1].set_xlabel(r'$\rho$')
ax[1].set_title('Imaginary')
ax[1].set_ylim((-1.5, 1.5))

plt.show()

Step 2: Compute the PSF

Here, we use the Fourier-Bessel series expansion of the phase function and a Bessel integral identity to compute the approximate PSF. Each coefficient \( c_{m} \left( z \right) \) needs to be multiplied by

\begin{equation*} R \left(r; \mathbf{p} \right) = \frac{\sigma_m J_1 \left( \sigma_m a \right) J_0 \left( \beta a \right)a - \beta J_0 \left( \sigma_m a \right) J_1 \left( \beta a \right)a }{\sigma_m^2 - \beta^2} \end{equation*}

and the resulting products summed over the number of basis functions. \( \mathbf{p} \) is the parameter vector for the Gibson-Lanni model, \( \sigma_m \) is the scaling factor for the argument to the \( m'th \) Bessel basis function, and \( \beta = kr\text{NA} \).

b is defined such that R has dimensions of len(r) \( \times \) num_basis.

In [6]:
b = 2 * np.pi * r.reshape(-1, 1) * NA / wavelength

# Convenience functions for J0 and J1 Bessel functions
J0 = lambda x: scipy.special.jv(0, x)
J1 = lambda x: scipy.special.jv(1, x)

# See equation 5 in Li, Xue, and Blu
denom = scaling_factor * scaling_factor - b * b
R = (scaling_factor * J1(scaling_factor * a) * J0(b * a) * a - b * J0(scaling_factor * a) * J1(b * a) * a)
R /= denom

Now compute the point-spread function via

\begin{equation*} PSF \left( r, z; z_p, \mathbf{p} \right) = \left| \mathbf{R} \left( r; \mathbf{p} \right) \mathbf{c} \left( z \right) \right|^2 \end{equation*}
In [7]:
# The transpose places the axial direction along the first dimension of the array, i.e. rows
# This is only for convenience.
PSF_rz = (np.abs(R.dot(C))**2).T

# Normalize to the maximum value
PSF_rz /= np.max(PSF_rz)

Profile in r and z

In [8]:
fig, ax = plt.subplots()

ax.imshow(PSF_rz, extent=(r.min(), r.max(), z.max(), z.min()))
ax.set_xlim((0,2.5))
ax.set_ylim((-6, 0))
ax.set_xlabel(r'r, $\mu m$')
ax.set_ylabel(r'z, $\mu m$')

plt.show()

z is the stage displacement in an inverted microscope. So, negative z's correspond to moving "up" in the sample.

Step 3: Resample the PSF onto a rotationally-symmetric Cartesian grid

Here we generate a two dimensional grid where the value at each grid point is the distance of the point from the center of the grid. These values are supplied to an interpolation function computed from PSF_rz to produce a rotationally-symmetric 2D PSF at each z-position.

In [9]:
# Create the fleshed-out xy grid of radial distances from the center
xy      = np.mgrid[0:size_y, 0:size_x]
r_pixel = np.sqrt((xy[1] - x0) * (xy[1] - x0) + (xy[0] - y0) * (xy[0] - y0)) * res_lateral

PSF = np.zeros((size_y, size_x, size_z))

for z_index in range(PSF.shape[2]):
    # Interpolate the radial PSF function
    PSF_interp = interp1d(r, PSF_rz[z_index, :])
    
    # Evaluate the PSF at each value of r_pixel
    PSF[:,:, z_index] = PSF_interp(r_pixel.ravel()).reshape(size_y, size_x)

Save z-stack to a movie

In [10]:
fig, ax    = plt.subplots(nrows=1, ncols=1)

# Rescale the PSF values logarithmically for better viewing
PSF_log = np.log(PSF)
vmin, vmax = PSF_log.min(), PSF_log.max()

img = ax.imshow(PSF_log[:,:,64], vmin=vmin, vmax=vmax)
img.set_extent([-r.max(), r.max(), -r.max(), r.max()])

txt = ax.text(3, 7, 'z = {:.1f} $\mu m$'.format(z[64]))
ax.set_xlim((-8, 8))
ax.set_xticks((-5, 0, 5))
ax.set_xlabel(r'x, $\mu m$')
ax.set_ylim((-8, 8))
ax.set_yticks((-5, 0, 5))
ax.set_ylabel(r'y, $\mu m$')
ax.spines['top'].set_visible(False)
ax.spines['bottom'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.spines['right'].set_visible(False)

# Initialize the figure with an empty frame
def init():
    img.set_data(np.zeros((size_y, size_x)))
    return img,

# This function is called for each frame of the animation
def animate(frame):   
    img.set_data(np.log(PSF[:, :, frame]))

    txt.set_text('z = {:.1f} $\mu m$'.format(z[frame]))
    return img,

# Create the animation
anim = animation.FuncAnimation(fig, animate, frames=len(z),
                               init_func=init, interval=20, blit=True)

# Save the animation
myWriter = animation.FFMpegWriter(fps=10, extra_args=['-vcodec', 'libx264'])
anim.save('gibson-lanni.mp4', writer = myWriter)
    
plt.close()

Summary

In summary, this method for calculating the Gibson-Lanni PSF is fast because it reduces the problem to a number of matrix operations rather than a numerical integration of the Kirchhoff diffraction integral. I find the Fourier-Bessel approximation of the phase function especially clever, since it is what enables us to apply the integral identity described above. Moreover, it seems that the approach could extend to computing PSFs from other wavefront aberrations (I am referring to the W variable above), though at the moment I do not know how it would handle non-rotationally symmetric pupils.
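As a rough illustration of that idea, below is a minimal sketch (not part of the Li, Xue, and Blu paper or its MATLAB code) of how one might add an extra rotationally symmetric aberration to W before the Fourier-Bessel fit. The amplitude A_sa and the choice of primary spherical aberration are purely illustrative; everything else reuses the arrays defined in the cells above.

# A minimal sketch (not from Li, Xue, and Blu): add a rotationally symmetric
# primary spherical aberration term to the pupil phase before the fit.
# A_sa is a hypothetical aberration amplitude in microns, chosen for illustration.
A_sa  = 0.05                                   # microns of primary spherical aberration
rho_n = rho / a                                # normalized pupil coordinate in [0, 1]
W_sa  = 2 * np.pi / wavelength * A_sa * (6 * rho_n**4 - 6 * rho_n**2 + 1)

# The extra term simply adds to the Gibson-Lanni phase W computed earlier
W_total     = W + W_sa                         # broadcasts over the z dimension
phase_total = np.cos(W_total) + 1j * np.sin(W_total)

# The rest of the pipeline is unchanged: refit the coefficients and recompute the PSF
C_sa, _, _, _ = np.linalg.lstsq(J.T, phase_total.T)
PSF_rz_sa     = (np.abs(R.dot(C_sa))**2).T
PSF_rz_sa    /= np.max(PSF_rz_sa)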

I would like to thank the first author of the work, Jizhou Li, for encouraging me to write this as a blog post and for providing feedback on the implementation presented here.

Accessing the Raspberry Pi camera image sensor

WARNING You can easily ruin your Raspberry Pi camera module by following the steps in this post. Proceed with caution.

Over the past half year I have been making slow but steady progress on my lensless imager project. The purpose, aside from having a bit of fun, is to create an imaging system for basic cell biology that doesn't use an expensive microscope objective.

A lensless imager works just as its name implies: an image sensor records the scattered light from a microscopic, transparent object and computationally reconstructs an image of that object, all without a lens. The best resolutions are achieved when the object is relatively close to the image sensor. Ideally, the separation between the object and the sensor's pixels would be at most about one millimeter. This limit is partly determined by the light source's spatial coherence, but also by the fact that high resolution is achieved by recording the scattered light at very large angles, which is possible only when the sample is close to the sensor.

Today I had a bit of free time so I decided to see whether I could remove the housing that surrounds the Raspberry Pi camera's sensor. The Raspberry Pi Camera Module version 2 sensor is a Sony IMX219 color chip. Directly above the sensor are a filter, a lens, and the housing for both of these, which prevent me from placing anything closer than about half a centimeter from the sensor plane. If I wanted to use this camera for the lensless imager, the housing would have to go.

Now, even without the housing the Raspberry Pi camera is not necessarily the best option for the project because the IMX219 is a color sensor. This means that there is a Bayer filter over its pixels, which would cut the resolution of the imager since I would be using a very narrow band light source. Effectively, only a quarter of the pixels would be receiving any light. Regardless, I had a spare second camera and it interfaces well with the Raspberry Pi, so I figured it would make for a good prototype.

As you will see, I did slightly damage my sensor, though it seems to still work. You can easily ruin your camera module or cut your finger by following these steps, so proceed at your own risk.

Step 0: The Raspberry Pi Camera Module V2

In the picture below you see my Raspberry Pi Camera Module. From above, you can see the small circular aperture with the lens immediately behind it. The sensor is inside the small gray rectangular housing that is attached to the control board by a small ribbon cable (to its right) and a bit of two-sided sticky foam tape (underneath the sensor; not visible).

The Raspberry Pi Camera Module V2

Step 1: Remove the main ribbon cable

To make working on the board a bit easier, I removed the white ribbon cable that attaches the module to the Pi. I did this by pulling on the black tabs on the two ends of the connector until the cable came free. I labeled the sides of the ribbon cable just in case.

Remove the main ribbon cable from the control board

Step 2: Detach the sensor's ribbon cable

Next, I used my finger and thumbnail to remove the small ribbon cable that attaches the sensor to the control board. I essentially applied a small torque to the bottom edge of the connector until it just "popped" up, as seen in the second image below.

The small ribbon cable from the sensor is to the right.

Step 3: Remove the sensor from the control board

In the third step, I used my thumbnail to gently pry the sensor from the control board. The sensor is attached with some two-sided sticky tape and may need a few minutes of work to come free.

Pull the sensor off the control board.

Step 4: Remove the rectangular housing

In this step you risk cutting your finger, so please be careful.

The housing around the sensor is glued on. To remove it, you will need to gently work a knife (or, better yet, a thin screwdriver) between the housing and the sensor board, taking care not to let the blade go too far into the housing, where it could ruin one of the resistors or wire bonds.

Cut carefully on one side of the housing.

Once you get a knife between the two, try popping the housing off of the sensor.

Once the edge is cut, pop the housing off.

When I did this I cut on three sides of the housing, but in retrospect I should have only cut on the side opposite the ribbon cable and pried the other sides loose. This is because I damaged a small resistor when the knife blade went too far into the housing. You can see this below and, at the same time, get an idea of the layout of the sensor board so you know where you can and can't cut.

The resistor near the top of the board was damaged when cutting the housing.

If you have the normal version of the camera, then you can also find the IR blocking filter inside the housing.

The IR blocking filter.

Fortunately for me the camera still works, despite the damaged resistor. I can now place samples directly on the sensor if I wanted to, though the wire bonds from the sensor to its control board appear quite fragile. For this reason, it may make more sense to build a slide holder that holds a sample just above the surface without touching it. For now, I can use this exposed sensor to prototype different methods for mounting the sample.

Modeling noise for image simulations

In my previous post, I described a method for simulating inline holography images. The ultimate purpose of doing this was to generate holography data that could be used to test the performance of different reconstruction algorithms. Since I would know the input object, I could apply the reconstruction algorithm to the holograms and see how closely the reconstructed object matched the known input.

My modeling, however, ignored an important aspect of the holographic process, namely the acquisition of digital images and the associated noise that accompanies the raw data. This is important because holographic reconstruction algorithms need to be robust against the uncertainty that will inevitably be introduced by recording the hologram with a digital camera.

In [1]:
%pylab inline
import sys
import scipy
from numpy.fft import fft2, fftshift, ifftshift
from mpl_toolkits.axes_grid1 import make_axes_locatable
from matplotlib import animation
plt.style.use('dark_background')
plt.rcParams['figure.facecolor'] = '#272b30'
plt.rcParams['image.cmap'] = 'viridis' 

print('Python version:\n{}\n'.format(sys.version))
print('Numpy version:\t\t{}'.format(np.__version__))
print('matplotlib version:\t{}'.format(matplotlib.__version__))
print('Scipy version:\t\t{}'.format(scipy.__version__))
Populating the interactive namespace from numpy and matplotlib
Python version:
3.5.2 |Continuum Analytics, Inc.| (default, Jul  2 2016, 17:53:06) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)]

Numpy version:		1.11.3
matplotlib version:	2.0.0
Scipy version:		0.19.0

A Mathematical Model for Camera Noise

One of the best descriptions of camera noise that I have ever found may be the EMVA 1288 Standard. This document details a standard set of models and procedures to characterize CMOS and CCD cameras and light sensors. From its website:

EMVA launched the initiative to define a unified method to measure, compute and present specification parameters for cameras and image sensors used for machine vision applications... [The EMVA Standard] creates transparency by defining reliable and exact measurement procedures as well as data presentation guidelines and makes the comparison of cameras and image sensors much easier.

In particular, chapters 2 and 3 of the Standard provide a very clear and concise description the mathematical model for CCD/CMOS camera noise. Below, I have adapted the image of the mathematical model presented at the beginning of chapter 2.

The EMVA 1288 camera noise model (adapted from the beginning of chapter 2 of the Standard)

In this model, the camera/sensor is a single pixel that takes an input (a number of photons) and transforms this number into an output (a digital value in analog-to-digital units, or ADU). In performing this transformation, various sources of noise are added to the signal so that, given a camera's output in ADU, we can only make probabilistic inferences about the actual number of photons impinging upon it. There are also a few assumptions in this model that, taken together, constitute what's called an ideal image sensor. They are:

  1. the number of photons collected by the pixel depends on the product of the irradiance \( E \, \left[ W / m^2 \right] \) and the exposure time \( t_{exp} \, \left[ s \right] \);
  2. the sensor is linear, i.e. the digital signal \( y \) increases linearly with the number of photons received;
  3. all noise sources are constant in time and space;
  4. only the total quantum efficiency of the camera is dependent on the wavelength of the incident irradiation;
  5. and only the dark current depends on the temperature.

From these five assumptions, the laws of quantum mechanics, and the laws of probability, expressions for the mean output, \( \mu_y \), the variance of the output \( \sigma_y^2 \), and the signal-to-noise ratio (\(SNR \)) may be derived.

\begin{equation*} \mu_y = K \mu_d + K \eta \mu_p \\ \sigma_y^2 = K^2 \sigma_d^2 + \sigma_q^2 + K \left( \mu_y - K \mu_d \right) \\ SNR = \frac{\eta \mu_p}{\sqrt{\sigma_d^2 + \sigma_q^2/K^2 + \eta \mu_p}} \end{equation*}

In these expressions, \(\mu\) and \(\sigma^2\) represent the mean and variance of the number of photons hitting the camera (\(p\)), the dark noise (\(d\)), and the output signal in ADU's (\(y\)). The number of electrons \( \mu_e \) generated by \(\mu_p\) photons hitting the active area of the sensor is obtained from the expression for the quantum efficiency \(\eta\), an engineered property of the camera that in general depends on the wavelength.

\begin{equation*} \eta \left( \lambda \right) = \frac{\mu_e}{\mu_p} \end{equation*}

The sensitivity of the camera \(K \, \left[ ADU / e^- \right] \) represents the amplification of the voltage in the pixel from the photoelectrons and is also a property of the camera. Finally, \( \sigma_q^2 \) represents the noise introduced by digitizing the continuous voltage signal into a digital one.

The EMVA 1288 Standard explains how to derive these expressions from the assumptions and known physical laws, so I will not go into their derivations in this post. Instead, I will focus on explaining how we can use this model to simulate the effects of camera noise in our holograms.
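As a quick sanity check on these expressions, here is a small sketch of my own (not code from the Standard) that evaluates the mean output, its standard deviation, and the SNR. The function name camera_response is hypothetical; the default parameter values are those of the camera used later in this post, the dark signal is assumed to have zero mean, and the quantization noise is taken to be that of an ideal ADC (\( \sigma_q^2 = 1/12 \, ADU^2 \)).

# A sketch (not from the EMVA 1288 Standard) that evaluates the model
# expressions above. The default parameters match the camera used later in
# this post; the dark signal mean is assumed to be zero and the quantization
# noise is that of an ideal ADC (variance 1/12 ADU^2).
import numpy as np

def camera_response(mu_p, eta=0.69, K=5.88, sigma_d=2.29, sigma_q=np.sqrt(1/12)):
    """Return (mean, standard deviation, SNR) of the output in ADU for a
    mean photon count mu_p."""
    mu_d  = 0                                               # assumed zero-mean dark signal
    mu_y  = K * mu_d + K * eta * mu_p
    var_y = K**2 * sigma_d**2 + sigma_q**2 + K * (mu_y - K * mu_d)
    snr   = eta * mu_p / np.sqrt(sigma_d**2 + sigma_q**2 / K**2 + eta * mu_p)
    return mu_y, np.sqrt(var_y), snr

# Example: a mean of 500 photons per pixel
print(camera_response(500))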

From Model to Simulation

The goal of the simulation will be to take a simulated hologram as input and convert it into an image that might be captured by a digital camera. This means that the final image should contain noise that is realistic and follows the rules of the model.

The model will depend on five free parameters that vary from camera-to-camera. They are:

  1. the total quantum efficiency \( \eta \);
  2. the read noise variance \( \sigma_{r}^2 \) (or its standard deviation);
  3. the dark current \( \mu_I \);
  4. the sensitivity \(K\);
  5. and the bit-depth of the camera \(k\).

The dark noise \( \sigma_d^2 \) is actually comprised of two separate noise sources, i.e. the read noise \( \sigma_r^2 \) and the dark current \( \mu_I \). Because the noise sources are independent, we can write

\begin{equation*} \sigma_d^2 = \sigma_r^2 + \sigma_I^2 \\ \sigma_d^2 = \sigma_r^2 + \mu_I t_{exp} \end{equation*}

where the last step relies on the fact that the generation of thermal electrons is a Poisson process whose mean is equal to its variance. The units of \(\mu_I\) are \(e^- / s \) and values for \( \mu_I \) are often provided on camera spec sheets. Likewise, the read noise \(\sigma_r^2\) is also provided on spec sheets, although in my experience the units are likely to be root-mean-square electrons or median electrons and not electrons-squared.
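For example, here is a back-of-the-envelope sketch of the expression above. The read noise is the one from the camera used later in this post, while the dark current value and the exposure times are hypothetical and chosen only for illustration.

# A back-of-the-envelope sketch of the dark noise expression above. The read
# noise comes from the camera used later in this post; the dark current value
# is hypothetical and chosen only for illustration.
import numpy as np

sigma_r = 2.29                                # read noise, electrons (r.m.s.)
mu_I    = 0.1                                 # dark current, e- / s (hypothetical)
t_exp   = np.array([0.01, 0.1, 1.0, 10.0])    # exposure times, s

sigma_d = np.sqrt(sigma_r**2 + mu_I * t_exp)  # total dark noise, electrons
print(sigma_d)  # the read noise dominates for these values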

We do not need to provide values for any of the other quantities in the model because they are related to these free parameters by physical and mathematical laws.

Finally, let's ignore spatial noise sources in these simulations because it's difficult to characterize them with a single number. Spatial noise is a variation in the free parameters of the camera from pixel to pixel. A common example of spatial noise is photon response non-uniformity, which manifests as a sensitivity \( K \) that varies between pixels. Ignoring spatial noise in the simulations simply means that the values for the read noise, dark current, quantum efficiency, and sensitivity are the same for all pixels.

Step 0: Convert the electric field to photons

Before doing anything, we need to take the electric field from our Fourier optics simulations and convert it to units of photons. If it isn't clear how you would first get an electric field, you can refer to my previous post on calculating the hologram of a thin, circular phase-only object. Typically, the calculation of the electric field will follow from the laws of diffraction and serve as an input for the camera model.

To get the number of photons, we first compute the irradiance of the field using the magnitude of the time-averaged Poynting vector of a linearly-polarized electromagnetic wave in free space, which is

\begin{equation*} E \left(x, y \right) = \frac{1}{2Z_o} \left| U\left(x, y \right) \right|^2 \end{equation*}

Here, \(E \left( x, y \right) \) is the irradiance at point \( \left( x, y \right) \), \(U \left( x, y \right) \) is the scalar electric field at the same point, and \(Z_0 \approx 377 \, \Omega \) is the impedance of free space. We would change the value for the impedance if we were imaging in a medium other than vacuum. The units of the field are \( V / m \) which, when squared and divided by the units of the impedance (Ohms), gives units of power per area, or \( W / m^2 \). It's also worth mentioning that we are implicitly assuming the field varies smoothly on the length scale of a pixel. This is why we can use the expression for the Poynting vector of a plane wave.

The mean number of photons is obtained from the irradiance by the expression

\begin{align*} \mu_p &= \frac{\lambda A}{hc}Et_{exp} \\ &= \frac{\lambda A}{2hcZ_0} \left| U \right|^2 t_{exp} \end{align*}

where \( A \) is the pixel's area and \( \frac{hc}{\lambda} \) is the energy per photon in units of Joules, with \(hc \approx 2 \times 10^{-25} \, J \cdot m \).

This step is a bit cumbersome, so in the rest of what follows I will skip it and use a set number of photons directly as the input to the simulations, rather than a scalar field.
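Still, for completeness, here is a minimal sketch of what the conversion might look like for a scalar field array U sampled on the pixel grid. The function name field_to_photons is hypothetical; the 3.45 micron pixel comes from the camera spec table later in this post, while the 525 nm wavelength, 10 ms exposure time, and 1 V/m field amplitude are illustrative assumptions.

# A minimal sketch of the field-to-photons conversion described above.
# U is assumed to be a complex scalar field in V/m sampled on the pixel grid;
# the exposure time and field amplitude here are illustrative values only.
import numpy as np

Z0         = 376.73                  # impedance of free space, Ohms
hc         = 1.986e-25               # Planck's constant times c, J*m
wavelength = 525e-9                  # m
pixel_size = 3.45e-6                 # m
A          = pixel_size**2           # pixel area, m^2
t_exp      = 10e-3                   # exposure time, s

def field_to_photons(U):
    """Convert a scalar field U (V/m) into the mean photon count per pixel."""
    irradiance = np.abs(U)**2 / (2 * Z0)                   # W / m^2
    mu_p       = wavelength * A * irradiance * t_exp / hc  # photons per pixel
    return mu_p

# Example: a uniform plane wave with an amplitude of 1 V/m gives a few hundred
# photons per pixel for these values
U = np.ones((256, 256), dtype=complex)
print(field_to_photons(U).mean())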

Step 1: Compute the photon shot noise

For the sake of simplicity, let's start by assuming that we have a camera with \( 256 \times 256 \) pixels and a mean of 500 photons incident on each pixel. This would produce an image that looks like this:

In [2]:
num_photons = 500
num_pixels  = 256
mu_p        = num_photons * np.ones((num_pixels, num_pixels))

fig, ax = plt.subplots()
img = ax.imshow(mu_p, vmin=400, vmax=600)
ax.set_xticks([])
ax.set_yticks([])
ax.set_title('No noise')

cb = plt.colorbar(img)
cb.set_label('Photons')

plt.show()

Photon shot noise arises from the fact that the incident number of photons is not deterministic but rather a random process whose statistics are Poissonian. In a Poisson process, the variance of the noise \( \sigma_p^2 \) is equal to the mean number of photons \( \mu_p \).

Poisson random numbers are easy to generate with NumPy. First, we set a seed so that we can reproducibly generate the same random numbers each time and initialize a RandomState instance with it. Using this RandomState instance, we call the poisson method with a mean of num_photons and a size (num_pixels, num_pixels). Finally, we plot the image and compare it to the one where no shot noise is present.

In [3]:
seed       = 42
rs         = np.random.RandomState(seed)
shot_noise = rs.poisson(num_photons, (num_pixels, num_pixels))

fig, (ax0, ax1) = plt.subplots(ncols=2)
img0 = ax0.imshow(mu_p, vmin=400, vmax=600)
ax0.set_xticks([])
ax0.set_yticks([])
ax0.set_title('No shot noise')

divider = make_axes_locatable(ax0)
cax = divider.append_axes("right", size="5%", pad=0.05)
cb0 = plt.colorbar(img0, cax=cax)
cb0.set_ticks([])

img1 = ax1.imshow(shot_noise, vmin=400, vmax=600)
ax1.set_xticks([])
ax1.set_yticks([])
ax1.set_title('Shot noise')

divider = make_axes_locatable(ax1)
cax = divider.append_axes("right", size="5%", pad=0.05)
cb1 = plt.colorbar(img1, cax=cax)
cb1.set_label('Photons')

plt.show()
In [4]:
# It's also interesting to look at the distribution of photons hitting each pixel.
plt.hist(shot_noise.ravel(), bins=np.arange(350, 650))
plt.xlabel('Number of photons per pixel')
plt.ylabel('Frequency')
plt.show()

Step 2: Compute the number of photoelectrons

At this point we're going to need some numbers for our camera. For this example, I chose a camera using the popular Sony IMX264 CMOS chip, the FLIR Chameleon3 (part number CM3-U3-50S5M-CS). Here are some of the key specs for our modeling:

Spec                                 Value
Quantum efficiency (at 525 nm)       0.69
Sensitivity                          5.88 ADU / e-
Temporal dark noise                  2.29 e-
Pixel size                           3.45 microns
Analog-to-digital converter (ADC)    12-bit

Assuming our light has a wavelength of 525 nm, the number of photoelectrons is simply the quantum efficiency times the number of photons arriving at each pixel, i.e. the shot noise realization from the previous step.

In [5]:
quantum_efficiency = 0.69
# Round the result to ensure that we have a discrete number of electrons
electrons = np.round(quantum_efficiency * shot_noise)

fig, (ax0, ax1) = plt.subplots(ncols=2)
img0 = ax0.imshow(shot_noise, vmin=200, vmax=600)
ax0.set_xticks([])
ax0.set_yticks([])
ax0.set_title('Photons')

divider = make_axes_locatable(ax0)
cax = divider.append_axes("right", size="5%", pad=0.05)
cb0 = plt.colorbar(img0, cax=cax)
cb0.set_ticks([])

img1 = ax1.imshow(electrons, vmin=200, vmax=600)
ax1.set_xticks([])
ax1.set_yticks([])
ax1.set_title('Electrons')

divider = make_axes_locatable(ax1)
cax = divider.append_axes("right", size="5%", pad=0.05)
cb = plt.colorbar(img1, cax=cax)

plt.show()

Step 3: Simulate read noise and dark current

Following the schematic of the camera model above, we see that we next need to add the dark noise. Dark noise (not to be confused with dark current) refers to the noise in the pixels when there is no light on the camera. Typically we deal with two sources of dark noise: readout (or just "read") noise and dark current. The camera specs in our example only provide us with the total dark noise, which is not surprising since dark current is not very significant in microscopy or machine vision. (It is, however, significant in astronomy because of the very long exposure times of several seconds or more, which allow an appreciable number of dark current electrons to accumulate during the exposure.)

To the best of my knowledge, when dark noise is primarily read noise we may well model it as a Gaussian distribution whose standard deviation is equivalent to the dark noise spec of the camera. This modeling approach is taken, for example, in the 2016 SMLMS Challenge, a community-driven competition to identify the best reconstruction algorithms for single molecule localization microscopy. Please note that the noise model employed in the challenge was for an electron multiplying CCD (EMCCD), so you may find that it differs slightly from what I am discussing here.

In [6]:
dark_noise = 2.29 # electrons
electrons_out = np.round(rs.normal(scale=dark_noise, size=electrons.shape) + electrons)

fig, (ax0, ax1) = plt.subplots(ncols=2)
img0 = ax0.imshow(electrons, vmin=250, vmax=450)
ax0.set_xticks([])
ax0.set_yticks([])
ax0.set_title('Electrons In')

divider = make_axes_locatable(ax0)
cax = divider.append_axes("right", size="5%", pad=0.05)
cb0 = plt.colorbar(img0, cax=cax)
cb0.set_ticks([])

img1 = ax1.imshow(electrons_out, vmin=250, vmax=450)
ax1.set_xticks([])
ax1.set_yticks([])
ax1.set_title('Electrons Out')

divider = make_axes_locatable(ax1)
cax = divider.append_axes("right", size="5%", pad=0.05)
cb = plt.colorbar(img1, cax=cax)
cb.set_label('Electrons')

plt.show()
In [7]:
# Plot the difference between the two
fig, ax = plt.subplots()
img = ax.imshow(electrons - electrons_out, vmin=-10, vmax=10)
ax.set_xticks([])
ax.set_yticks([])
ax.set_title('Difference')

cb = plt.colorbar(img)
cb.set_label('Electrons')

plt.show()

Step 4: Convert from \( e^- \) to ADU

Now we need to convert the value of each pixel from electrons to ADU, or analog-to-digital units. ADU's are the units of grey scale that are output by monochrome cameras. To do this, we multiply the number of electrons after the addition of read noise by the sensitivity. We also need to ensure that the final ADU count is discrete and to cap the ADU values at \( 2^k - 1 \), where \( k \) is the camera's bit-depth. The latter operation ensures that saturation of the pixels is correctly modeled.

In [8]:
sensitivity = 5.88 # ADU/e-
bitdepth    = 12
max_adu     = np.int(2**bitdepth - 1)
adu         = (electrons_out * sensitivity).astype(np.int)
adu[adu > max_adu] = max_adu # models pixel saturation

fig, ax = plt.subplots()
img = ax.imshow(adu)
ax.set_xticks([])
ax.set_yticks([])

cb = plt.colorbar(img)
cb.set_label('ADU')

plt.show()

Step 5: Add a baseline

Many cameras have a built-in baseline ADU value. This prevents the number of ADU's from becoming negative at low input signal. The Andor Zyla 4.2, for example, has a baseline value of 100 ADU. If you take a large number of dark frames and compute the average of each pixel in a Zyla, the average values will very nearly be 100.

We don't have baseline information for this camera, but 100 seems like a reasonable assumption. Here's what our final image will look like if we had a constant average of 500 photons incident on each pixel.

In [9]:
baseline  = 100 # ADU
adu      += baseline

fig, ax = plt.subplots()
img = ax.imshow(adu)
ax.set_xticks([])
ax.set_yticks([])

cb = plt.colorbar(img)
cb.set_label('ADU')

plt.show()
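To wrap up, here is one way to collect Steps 1 through 5 into a single convenience function. This is a sketch of my own (the function name add_camera_noise is not from any library); it simply chains together the operations performed above with the same default camera parameters, and, as in the steps above, it adds the baseline after the saturation step.

# A sketch that collects Steps 1-5 into a single convenience function.
import numpy as np

def add_camera_noise(photons, qe=0.69, sensitivity=5.88, dark_noise=2.29,
                     bitdepth=12, baseline=100, seed=42):
    """Convert an array of mean photon counts into a noisy image in ADU."""
    rs = np.random.RandomState(seed)

    # Step 1: photon shot noise (Poisson distributed)
    shot_noise = rs.poisson(photons)

    # Step 2: photons to photoelectrons via the quantum efficiency
    electrons = np.round(qe * shot_noise)

    # Step 3: dark noise, modeled as additive Gaussian read noise
    electrons_out = np.round(rs.normal(scale=dark_noise, size=electrons.shape) + electrons)

    # Step 4: convert to ADU and model saturation of the ADC
    max_adu = np.int(2**bitdepth - 1)
    adu     = (electrons_out * sensitivity).astype(np.int)
    adu[adu > max_adu] = max_adu

    # Step 5: add the baseline (after the saturation step, as above)
    adu += baseline
    return adu

# Example: the same uniform, 500-photon-per-pixel input used throughout this post
noisy_img = add_camera_noise(500 * np.ones((256, 256)))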