Simulating Inline Holograms

Software has played an important role in digital holographic imaging, arguably starting with the publication in which Goodman and Lawrence generated images from electronically recorded holograms. One of the things I would like to do for my lensless imager project is to develop a small set of tools to simulate various types of digital holograms. My reason for wanting to do this is that it will help in developing the algorithms that will actually perform the reconstructions from real holograms. In this post, I will explore a simple method for simulating holograms that is explained in an excellent publication I recently found: Tatiana Latychevskaia and Hans-Werner Fink, "Practical algorithms for simulation and reconstruction of digital in-line holograms," arXiv:1412.3674 (2016).

Inline holography with plane waves

Create the ground truth

The first step in the simulation of a digital hologram is to create a ground truth object, i.e. an object whose optical properties we know a priori. Rather than try to infer an object from a hologram--which is what the lensless imager should do--the simulations work in the opposite sense and create a realistic hologram from the ground truth. Later, we can use the simulated holograms to test different reconstruction algorithms and see which ones come closest to reproducing the known objects.

In [1]:
%pylab inline
import sys
from numpy.fft import fft2, ifft2, fftshift, ifftshift
import scipy
from scipy.integrate import simps
from mpl_toolkits.axes_grid1 import make_axes_locatable
plt.style.use('dark_background')
plt.rcParams['figure.facecolor'] = '#272b30'
plt.rcParams['image.cmap'] = 'viridis' 

print('Python version:\n{}\n'.format(sys.version))
print('Numpy version:\t\t{}'.format(np.__version__))
print('matplotlib version:\t{}'.format(matplotlib.__version__))
print('Scipy version:\t\t{}'.format(scipy.__version__))
Populating the interactive namespace from numpy and matplotlib
Python version:
3.5.2 |Continuum Analytics, Inc.| (default, Jul  2 2016, 17:53:06) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)]

Numpy version:		1.11.3
matplotlib version:	2.0.0
Scipy version:		0.19.0

For the first simulation, I will start by making a relatively simple object: a non-absorbing circle with a constant phase. In holography, objects are represented by a position-dependent amplitude and phase. The amplitude determines how much the object absorbs the incident light, and the phase determines by how much the light inside the object is slowed down relative to the light that passes outside of it.

To create a non-absorbing circle, I create a square array of pixels whose values are all equal to one. What's neat about this is that this array already represents what is perhaps the simplest object to simulate: nothing at all! If every pixel is equal to one, it means that the light is neither being absorbed nor slowed down anywhere.

Of course, we eventually want to look at microscopic objects like cells, so we create a simple cell by making a circular mask and adjusting the phase of all the pixels inside the mask relative to those outside it.

In [2]:
numPixels = 512
gtRadius  = 50 # ground truth radius, pixels
gtPhase   = 0.75 # radians
gtCenter  = numPixels / 2 # assumes numPixels is even
W, H      = np.meshgrid(np.arange(0, numPixels), np.arange(0, numPixels)) # coordinates of the array indexes
gtMask    = np.sqrt((W - gtCenter)**2 + (H - gtCenter)**2) <= gtRadius # boundaries of the object

gt = np.ones((numPixels, numPixels), dtype=np.complex)
gt[gtMask] = np.exp(1j * gtPhase)

The last line of the code above, gt[gtMask] = np.exp(1j * gtPhase), sets all the values in the gt array that are inside the circular mask to the complex phasor \( \exp \left( j \phi \right) \), where \( \phi \) is the value of the phase imparted onto the light by the object. For real objects, this phase equals \( 2 \pi / \lambda \) times the optical path difference introduced by the object, i.e. the product of the object's thickness and the difference between its refractive index and that of the surrounding medium.
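As a quick, hypothetical example (none of these numbers appear in the simulation; they only illustrate how a physical object maps onto a phase like gtPhase), consider a 1 \( \mu m \)-thick, cell-like object in water illuminated at 0.5 \( \mu m \):

# Hypothetical values for illustration only
wavelength_ex = 0.5  # microns, illumination wavelength
n_object      = 1.38 # refractive index of the object
n_medium      = 1.33 # refractive index of the surrounding water
thickness     = 1.0  # microns

phase_ex = 2 * np.pi / wavelength_ex * (n_object - n_medium) * thickness
print('Phase delay: {:.2f} radians'.format(phase_ex)) # ~0.63 radians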

Let's now go ahead and visualize the object to make sure we created it correctly.

In [3]:
def plot_complex_field(field, vmin=0, vmax=1):
    fig, ax = plt.subplots(nrows=1, ncols=2)
    
    amp   = ax[0].imshow(np.abs(field))
    phase = ax[1].imshow(np.angle(field), vmin=0, vmax = 2*np.pi)

    ax[0].set_xticks([])
    ax[0].set_yticks([])
    ax[0].set_title('Amplitude')

    divider = make_axes_locatable(ax[0])
    cax = divider.append_axes("right", size="5%", pad=0.05)
    cb_amp = plt.colorbar(amp, cax=cax)

    ax[1].set_xticks([])
    ax[1].set_yticks([])
    ax[1].set_title('Phase')

    divider = make_axes_locatable(ax[1])
    cax  = divider.append_axes("right", size="5%", pad=0.05)
    cb_phase  = plt.colorbar(phase, cax=cax)
    cb_phase.set_ticks([0, 2*np.pi])
    cb_phase.set_ticklabels(['0', r'2$\pi$'])

    return fig, ax, amp, phase, cb_amp, cb_phase
In [4]:
_, _, _, _, cb_amp, _ = plot_complex_field(gt)
cb_amp.set_ticks([1])
plt.show()

As you can see, the amplitude remains equal to 1 everywhere, which is what we would expect for a non-absorbing object. In the plot of the phase you can see the faint circle at the center of the image. I plotted the phase on a scale of \( \left[ 0, 2 \pi \right] \) for two reasons:

  1. to emphasize that the object only weakly perturbs the light;
  2. because one complete rotation of the phasor in the complex plane is \(2 \pi \) radians. Any value outside the range \( \left[ 0, 2 \pi \right] \) can be mapped back onto a value in this range.
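A one-line check (not part of the simulation itself) illustrates the second point: np.angle maps a phase of \( 0.75 + 2 \pi \) radians back onto 0.75 radians.

# Phase is only defined modulo 2*pi
print(np.angle(np.exp(1j * 0.75)))               # 0.75
print(np.angle(np.exp(1j * (0.75 + 2 * np.pi)))) # ~0.75 again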

Set the physical dimensions

The final step in the object creation is to set up the physical dimensions of the problem. There are a few ways we can do this, but I typically like to set the number of pixels that sample the object and the pixel size independently. Then, the physical radius of the object is gtRadius * pixelSize and the total array size is numPixels * pixelSize. What's more, because the array is square, we can create just one coordinate vector for both directions.

In [5]:
pixelSize = 0.1 # microns
x = np.linspace(-pixelSize * numPixels / 2, pixelSize * numPixels / 2, num = numPixels, endpoint = True)

physicalRadius = gtRadius * pixelSize
print('The physical radius is:  {} microns'.format(physicalRadius))
The physical radius is:  5.0 microns

The computational tool that I will use to propagate the wave fronts is known as the angular spectrum propagator. This tool requires two Fourier transforms, so I need to set up the arrays that denote the spatial frequencies as well. As I discussed in a previous post, this can be done like so:

In [6]:
dx = x[1] - x[0]    # Sampling period, microns
fS = 1 / dx         # Spatial sampling frequency, inverse microns
df = fS / numPixels # Spacing between discrete frequency coordinates, inverse microns
fx = np.arange(-fS / 2, fS / 2, step = df) # Spatial frequency, inverse microns

dx is the sampling period in object space, and its inverse fS is the spatial sampling frequency. The highest spatial frequency that the object's Fourier transform can represent is the Nyquist frequency, fS / 2.
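For the values used here, a quick printout (not in the original notebook) confirms the sampling parameters; note that because np.linspace was called with endpoint = True, dx is very slightly larger than pixelSize:

# Check the sampling parameters
print('dx = {:.4f} microns'.format(dx))                            # ~0.1002
print('fS = {:.4f} inverse microns'.format(fS))                    # ~9.98
print('Nyquist frequency = {:.4f} inverse microns'.format(fS / 2)) # ~4.99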

Compute the hologram

Two common ways to generate a hologram in a real experiment are to illuminate the object with a plane wave or with a spherical wave. In this simulation, I will assume plane wave illumination because it is the simpler of the two cases and an easy place to start.

Step 1: Find the Fourier transform of the object

According to the publication by Latychevskaia and Fink, we first need to compute the Fourier transform of the object.

In [7]:
GT = ifftshift(fft2(fftshift(gt))) * dx**2
fig, ax, amp, phase, cb_amp, _ = plot_complex_field(GT)

amp.set_extent([fx[0], fx[-1], fx[0], fx[-1]])
ax[0].set_xticks([-fS / 2, fS / 2])
ax[0].set_yticks([-fS / 2, fS / 2])
ax[0].set_xlabel(r'$f_{x}, \, \mu m^{-1}$')
ax[0].set_ylabel(r'$f_{y}, \, \mu m^{-1}$')
cb_amp.set_ticks([0, 2612]) # I think that there's a bug in matplotlib 2.0 preventing correct tick displays

phase.set_extent([fx[0], fx[-1], fx[0], fx[-1]])
ax[1].set_xticks([-fS / 2, fS / 2])
ax[1].set_yticks([-fS / 2, fS / 2])
ax[1].set_xlabel(r'$f_{x}, \, \mu m^{-1}$')

plt.tight_layout()
plt.show()

We cannot really see much in the amplitude plot, so let's put it on a log-scale.

In [8]:
fig, ax = plt.subplots(nrows=1, ncols=1)

img = ax.imshow(np.log10(np.abs(GT)), extent=[fx[0], fx[-1], fx[0], fx[-1]])
cb  = plt.colorbar(img)
cb.set_label('$\log_{10}$ amplitude')

ax.set_xticks([-fS / 2, fS / 2])
ax.set_yticks([-fS / 2, fS / 2])
ax.set_xlabel(r'$f_{x}, \, \mu m^{-1}$')
ax.set_ylabel(r'$f_{y}, \, \mu m^{-1}$')

plt.show()

Step 2: Verify that the Fourier transform is correct

We can verify that we did everything correctly by computing the discrete integral of the absolute square of the field passing through the original object and of its Fourier transform. By Parseval's theorem, these should be equal. Though it may seem a bit mysterious, Parseval's theorem is just a restatement of the conservation of energy: the energy in the field passing through the object should equal the energy carried by its Fourier transform.

In [9]:
gt_int = simps(simps(np.abs(gt)**2, dx=dx), dx=dx)
GT_int = simps(simps(np.abs(GT)**2, dx=df), dx=df)
print('Integral, object: {:.4f}'.format(gt_int))
print('Integral, Fourier transform of object: {:.4f}'.format(GT_int))
Integral, object: 2621.4400
Integral, Fourier transform of object: 2631.7081

Notice that there's an error here between the sum of squares of the object and its Fourier transform. In this case, the percent error is:

In [10]:
error = np.abs(gt_int - GT_int) / gt_int * 100
print('Percent error: {:.2f}%'.format(error))
Percent error: 0.39%

The error is not large, but it does make you wonder where exactly it comes from. Let's do one more sanity check before going any further. Another property of Fourier transforms is that the DC component (the value of the Fourier transform at a frequency of zero) should be equal to the average of the original signal.

The average of a two-dimensional discrete signal whose samples are equally spaced is expressed as

\begin{equation*} \bar{f} = \frac{1}{XY}\sum_{i=0}^{M-1}\sum_{j=0}^{N-1} f \left( i, j \right) \Delta x \Delta y \end{equation*}

Here, \( X \) and \( Y \) are the physical lengths of the array representing the signal, and \( \Delta x \) and \( \Delta y \) are the spacings between each array element. Furthermore, the array has a number of pixels \( M \) and \( N \) in the \( x \) and \( y \) directions, respectively. In our case the arrays are square and equally sampled in both directions, so \( X = Y \) and \( \Delta x = \Delta y \). Additionally, the number of pixels is just \( M = \frac{X}{\Delta x} \), which means that the above expression may be simplified to

\begin{equation*} \bar{f} = \frac{1}{M^2}\sum_{i=0}^{M-1}\sum_{j=0}^{M-1} f \left( i, j \right) \end{equation*}
In [11]:
# Compute the average of the original object
gt_avg = np.sum(gt) * dx**2 / (x[-1] - x[0])**2
print('Average of the object field: {:.4f}'.format(gt_avg))
Average of the object field: 0.9959+0.0205j

As for the DC term, we need to extract the zero-frequency value, which lies at the center of the GT array. There is one caveat here: the values in the array represent densities in frequency space. We therefore need to multiply the central value by the area of a frequency-space pixel, which is simply df**2.

In [12]:
dc_term = GT[numPixels // 2, numPixels // 2] * df**2
print('DC term: {:.4f}'.format(dc_term))
DC term: 0.9920+0.0204j

As you can see, there is also a slight difference between the average of the input field and the DC term of the input's Fourier transform. Since both sanity checks are relatively close, however, I suspect that the differences are due to numerical rounding errors.

Step 3: Create the angular spectrum propagator

The reason we computed the Fourier transform of the object was to determine its angular spectrum, i.e. the set of plane waves traveling at different angles to the optical axis that coherently combine to form the object. Computing the field at another axial plane means advancing the phase of each of these plane waves by an amount that depends on the wavelength of the light and the angle at which the plane wave travels relative to the optical axis. We can do this by multiplying the Fourier transform of the object by what's known as the transfer function of free space:

\begin{equation*} H \left( f_x, f_y; z \right) = \exp \left[ j \frac{2 \pi z}{\lambda } \sqrt{ 1 - \left( \lambda f_x \right)^2 - \left( \lambda f_y \right)^2 } \right] \end{equation*}

\( z \) is a free parameter: the distance from the object plane at which we wish to compute the field. To get the field in a plane a distance \( z \) from the object, we multiply the Fourier transform of the field by this transfer function and then compute the inverse Fourier transform of the product. This is essentially the operation I performed in a previous post where I computed the defocused image of an isotropic point emitter in a microscope.
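In symbols, with \( \mathcal{F} \) denoting the two-dimensional Fourier transform, the propagated field is

\begin{equation*} u \left( x, y; z \right) = \mathcal{F}^{-1} \left\{ \mathcal{F} \left\{ u \left( x, y; 0 \right) \right\} H \left( f_x, f_y; z \right) \right\} \end{equation*}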

Here's what the phase angle of the transfer function looks like for propagation over \( 10 \, \mu m \):

In [13]:
def H(fx, fy, z, wavelength=0.5):
    # The argument of the square root is negative for evanescent frequencies,
    # where np.sqrt returns nan and raises a RuntimeWarning; suppress it here.
    with np.errstate(invalid='ignore'):
        square_root = np.sqrt(1 - (wavelength**2 * fx**2) - (wavelength**2 * fy**2))
        temp = np.exp(1j * 2 * np.pi * z / wavelength * square_root)
    temp[np.isnan(temp)] = 0 # zero out the evanescent components
    return temp

FX, FY = np.meshgrid(fx, fx)
fig, ax = plt.subplots(nrows=1, ncols=1)
img = ax.imshow(np.angle(H(FX, FY, 10)))
img.set_extent([fx[0], fx[-1], fx[0], fx[-1]])

cb  = plt.colorbar(img)
cb.set_label('phase')

ax.set_xticks([-fS / 2, fS / 2])
ax.set_yticks([-fS / 2, fS / 2])
ax.set_xlabel(r'$f_{x}, \, \mu m^{-1}$')
ax.set_ylabel(r'$f_{y}, \, \mu m^{-1}$')
plt.show()

And the amplitude:

In [14]:
fig, ax = plt.subplots(nrows=1, ncols=1)
img = ax.imshow(np.abs(H(FX, FY, 10)))
img.set_extent([fx[0], fx[-1], fx[0], fx[-1]])

cb  = plt.colorbar(img)
cb.set_label('amplitude')

ax.set_xticks([-fS / 2, fS / 2])
ax.set_yticks([-fS / 2, fS / 2])
ax.set_xlabel(r'$f_{x}, \, \mu m^{-1}$')
ax.set_ylabel(r'$f_{y}, \, \mu m^{-1}$')
plt.show()

You can see right away that the transfer function appears circular. The reason for this is that the argument inside the square root becomes negative for large spatial frequencies, which makes the whole transfer function in this region into a decaying exponential. These spatial frequencies correspond to plane waves that exponentially decay as they propagate away from the sample; they are so-called "evanescent" waves.

For distances much larger than a wavelength of light, the amplitude of the transfer function in this area is effectively zero.
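A rough estimate (not in the original code) shows why zeroing these components is harmless here. Even a component just past the \( 1 / \lambda = 2 \, \mu m^{-1} \) cutoff decays to practically nothing over \( 10 \, \mu m \):

# Amplitude decay of an evanescent component at f = 2.2 inverse microns,
# just beyond the 1 / wavelength = 2 inverse-micron cutoff
wl, f, z = 0.5, 2.2, 10.0
decay = np.exp(-2 * np.pi * z / wl * np.sqrt((wl * f)**2 - 1))
print('Amplitude decay factor: {:.1e}'.format(decay)) # ~1e-25, effectively zero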

Step 4: Propagate the field and create the hologram

Let's now compute the field in the new plane at some distance from the sample. We do this by multiplying the Fourier transform of the ground truth by the transfer function and computing this product's inverse Fourier transform.

In [15]:
distance = 10 # microns
gt_prime = fftshift(ifft2(ifftshift(GT * H(FX, FY, distance)))) / dx**2
_, _, _, _, cb_amp, _ = plot_complex_field(gt_prime)
plt.show()
In [16]:
# Check again for conservation of energy
gt_int       = simps(simps(np.abs(gt)**2, dx=dx), dx=dx)
GT_int       = simps(simps(np.abs(GT)**2, dx=df), dx=df)
gt_prime_int = simps(simps(np.abs(gt_prime)**2, dx=dx), dx=dx)
print('Integral, object: {:.4f}'.format(gt_int))
print('Integral, Fourier transform of object: {:.4f}'.format(GT_int))
print('Integral, hologram plane: {:.4f}'.format(gt_prime_int))
Integral, object: 2621.4400
Integral, Fourier transform of object: 2631.7081
Integral, hologram plane: 2621.0387

Because our hologram will be recorded with a normal camera that is insensitive to the phase of the light, we take the absolute square of the propagated field to obtain the final simulated image of the hologram.

In [17]:
fig, ax = plt.subplots(nrows=1, ncols=1)
img = ax.imshow(np.abs(gt_prime)**2)
img.set_extent([x[0], x[-1], x[0], x[-1]])

cb  = plt.colorbar(img)
cb.set_label('amplitude$^2$')

ax.set_xticks([x[0], x[-1]])
ax.set_yticks([x[0], x[-1]])
ax.set_xlabel(r'$x, \, \mu m$')
ax.set_ylabel(r'$y, \, \mu m$')
plt.show()

Movie of propagating hologram

Here I've made a movie of the hologram propagating along the axial direction between \(z = 0 \, \mu m \) and \(z = 100 \, \mu m \).

In [18]:
from matplotlib import animation
In [19]:
numPoints = 1001
z         = np.linspace(0, 100, num = numPoints)

fig, ax    = plt.subplots(nrows=1, ncols=1)
vmin, vmax = 0, 3

img = ax.imshow(np.abs(gt_prime)**2, vmin=vmin, vmax=vmax)
img.set_extent([x[0], x[-1], x[0], x[-1]])
ax.set_xticks([x[0], x[-1]])
ax.set_yticks([x[0], x[-1]])
ax.set_xlabel(r'$x, \, \mu m$')

cb  = plt.colorbar(img)
cb.set_label('amplitude$^2$')
cb.set_clim([vmin, vmax])

txt = ax.text(10, 20, 'z = {:.1f} $\mu m$'.format(distance))

# Initialize the figure with an empty frame
def init():
    img.set_data(np.zeros((numPixels, numPixels)))
    return img,

# This function is called for each frame of the animation
def animate(frame):
    gt_prime = fftshift(ifft2(ifftshift(GT * H(FX, FY, z[frame])))) / dx**2
    hologram = np.abs(gt_prime)**2
    
    img.set_data(hologram)
    img.set_extent([x[0], x[-1], x[0], x[-1]])

    txt.set_text('z = {:.1f} $\mu m$'.format(z[frame]))
    return img,

# Create the animation
anim = animation.FuncAnimation(fig, animate, frames=numPoints,
                               init_func=init, interval=20, blit=True)

# Save the animation
myWriter = animation.FFMpegWriter(fps=10, extra_args=['-vcodec', 'libx264'])
anim.save('propagating_hologram.mp4', writer = myWriter)
    
plt.close()

Interactively propagate the hologram

Below I've used Jupyter's interactive plotting features to allow me to scan through the axial direction and observe how the hologram changes as a function of distance from the sample.

To use this feature, you will need to download this notebook and run it on your own Jupyter notebook server.

In [22]:
from ipywidgets import interact, FloatSlider
# Change to interactive mode
# This must be run in a Jupyter notebook!
%matplotlib nbagg

fig, ax = plt.subplots(nrows=1, ncols=1)
vmin, vmax = 0, 3
img = ax.imshow(np.abs(gt_prime)**2, vmin=vmin, vmax=vmax)
img.set_extent([x[0], x[-1], x[0], x[-1]])

ax.set_xticks([x[0], x[-1]])
ax.set_yticks([x[0], x[-1]])
ax.set_xlabel(r'$x, \, \mu m$')

cb  = plt.colorbar(img)
cb.set_label('amplitude$^2$')
cb.set_clim([vmin, vmax])

txt = ax.text(10, 20, 'z = {:.1f} $\mu m$'.format(distance))

plt.show()

def compute_hologram(z):
    gt_prime = fftshift(ifft2(ifftshift(GT * H(FX, FY, z)))) / dx**2
    hologram = np.abs(gt_prime)**2
    
    img = ax.imshow(hologram, vmin=vmin, vmax=vmax)
    img.set_extent([x[0], x[-1], x[0], x[-1]])

    txt.set_text('z = {:.1f} $\mu m$'.format(z))   
In [23]:
interact(compute_hologram, z=FloatSlider(min=0,max=50,step=1,value=1))
plt.show()

Summary

Simulating an inline hologram with plane wave illumination requires a few steps: generating a ground-truth object, computing its angular spectrum, and propagating this spectrum to a new axial plane using the transfer function of free space. Two Fourier transforms are required: one for finding the object's angular spectrum and the other to convert the propagated spectrum back to real space.
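Condensed into a single function, the recipe looks something like the sketch below. It reuses the H() function defined earlier, and the function name is mine, chosen just for this summary:

def simulate_inline_hologram(obj, dx, z, wavelength=0.5):
    # obj is a square, complex-valued ground truth array; dx is the pixel size
    numPixels = obj.shape[0]
    fS = 1 / dx
    fx = np.arange(-fS / 2, fS / 2, step = fS / numPixels)
    FX, FY = np.meshgrid(fx, fx)

    OBJ = ifftshift(fft2(fftshift(obj))) * dx**2 # angular spectrum of the object
    obj_prime = fftshift(ifft2(ifftshift(OBJ * H(FX, FY, z, wavelength)))) / dx**2
    return np.abs(obj_prime)**2 # the camera records the irradiance

hologram = simulate_inline_hologram(gt, dx, distance)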

If you have spherical wave illumination rather than plane wave illumination, then you will need to use the Fresnel propagator instead. That will be the topic of a later post.

Micro-Manager on the Raspberry Pi

Micro-Manager is an open source platform for controlling microscope hardware, automating image acquisition, and tracking metadata about how images are acquired. In biomedical imaging research, it serves as an incredibly important tool because it is free and open source, which means that scientists can benefit from the contributions of others to the software without paying costly licensing fees.

I recently managed to compile Micro-Manager version 2.0 on the Raspberry Pi. I did this for a small hobby project I am working on to build a cheap yet effective tool for at-home microscope projects and hacking. Though I am not yet convinced that Micro-Manager will be the best tool for this particular job given its relatively heavy footprint on the Pi's slower hardware, I thought that I would post my notes so that others could benefit from my experience.

Software versions

I am working with a Raspberry Pi 3 Model B:

pi@raspberrypi:~ $ uname -a
Linux raspberrypi 4.4.38-v7+ #938 SMP Thu Dec 15 15:22:21 GMT 2016 armv7l GNU/Linux

pi@raspberrypi:~ $ gcc -dumpversion
4.9.2

pi@raspberrypi:~ $ make -v
GNU Make 4.0

pi@raspberrypi:~ $ ldd --version
ldd (Debian GLIBC 2.19-18+deb8u7) 2.19

Set up a network share for 3rd party libraries

We need to compile Micro-Manager because binaries for the Pi's ARM processor are not distributed by the Micro-Manager team (probably because too few people have ever wanted them). To compile Micro-Manager, we need to check out a rather large set of 3rd party libraries. When I last checked, these libraries occupied 6.7 GB of space on my laptop, a size which can be prohibitive when using the Micro-SD cards that provide storage for the Pi.

To circumvent this problem, I checked out the 3rdpartypublic SVN repository onto my laptop and created a network share from this directory. Then, I mounted the share on my Pi in the directory just above that containing the Micro-Manager source code.

To get started, first have a look at my post on connecting a Pi to a Linux home network for ideas if you haven't already connected the Pi to your other machines at home: http://kmdouglass.github.io/posts/connecting-a-raspberry-pi-to-a-home-linux-network.html

Once the Pi and the laptop are on the same network, check out the 3rdpartypublic SVN repository onto your laptop or home server. You may need to run the command a few times until it completes because the downloads can time out after a few minutes:

svn checkout https://valelab4.ucsf.edu/svn/3rdpartypublic/

Next, we need to set up the network share. If your laptop or server is running Windows, then you will probably need to set up Samba on the Pi to share files between them. I, however, am running a Linux home network, so I decided to use NFS (Network File System) to share the directory between my laptop--which runs Debian Linux--and the Pi. I installed NFS on my laptop with:

sudo apt-get install nfs-kernel-server nfs-common

Once installed, I added the following line to the newly created /etc/exports file:

/home/kmdouglass/src/micro-manager/3rdpartypublic 192.168.0.2/24(ro)

The first part is the directory to share, i.e. where the 3rdpartypublic directory is stored on my laptop. The second part contains the static IP address of the Pi on my home network. The /24 was REQUIRED for my client (the Pi) to mount the share; /24 simply denotes a network mask of 255.255.255.0. If you have a different mask on your network, you can find a good discussion on this topic here: https://arstechnica.com/civis/viewtopic.php?t=751834 Finally, the options in parentheses control how the directory is shared; ro means read only.

After editing the file, export the folder and restart the NFS server:

sudo exportfs -arv
sudo /etc/init.d/nfs-kernel-server restart

On the client (the Pi), the NFS client software was already installed. However, I had to restart the rpcbind service before I could mount the share:

sudo /etc/init.d/rpcbind restart

Finally, I added a line to the /etc/fstab file on the Pi to make mounting the 3rdpartypublic share easier:

192.168.0.102:/home/kmdouglass/src/micro-manager/3rdpartypublic /home/pi/src/micro-manager/3rdpartypublic nfs user,noauto 0 0

The first part indicates the IP of the laptop and the directory to mount. The second part, /home/pi/src/micro-manager/3rdpartypublic, is the directory on the Pi where the share will be mounted. I placed this one directory above where the MM source code is (/home/pi/src/micro-manager/micro-manager on my machine). nfs indicates the type of share to mount, and user,noauto permits any user to mount the share (not just root), though the share will not be automatically mounted when the Pi starts. The final two zeros are explained in the fstab comments but aren't really important for us. To mount the share, type the following on the Pi:

sudo mount /home/pi/src/micro-manager/3rdpartypublic

In case you're interested in learning more about the intricacies of Linux home networking, I found the following sources of information to be incredibly helpful.

  1. https://www.howtoforge.com/install_nfs_server_and_client_on_debian_wheezy
  2. https://www.youtube.com/watch?v=luqq8DUqqCw
  3. http://nfs.sourceforge.net/nfs-howto/ar01s03.html#config_server_setup
  4. http://www.tecmint.com/how-to-setup-nfs-server-in-linux/

Building MM

Once I was able to mount the share containing 3rd party libraries, I installed the following packages on the Pi and checked out the Micro-Manager source code:

sudo apt-get install autoconf automake libtool pkg-config swig ant libboost-dev libboost-all-dev
cd ~/src/micro-manager
git clone https://github.com/micro-manager/micro-manager.git
cd micro-manager
git checkout mm2

The last command switches to the mm2 branch where the Micro-Manager 2.0 source code is found. Note that it may not be necessary to install all of the boost libraries with sudo apt-get install libboost-all-dev, but I did this anyway because I encountered multiple errors due to missing boost library files the first few times I tried compiling.

The next step follows the normal Micro-Manager build routine using make, with the exception of the configuration step. From inside the Micro-Manager source code directory on the Pi, run the following commands one at a time:

./autogen.sh
PYTHON=/usr/bin/python3 ./configure --prefix=/opt/micro-manager --with-ij-jar=/usr/share/java/ij.jar --with-python=/usr/include/python3.4 --with-boost-libdir=/usr/lib/arm-linux-gnueabihf --with-boost=/usr/include/boost
make fetchdeps
make
sudo make install

In the configuration step, I set the Python interpreter to Python 3 because I greatly prefer it over Python 2. This is done by setting the PYTHON environment variable before running configure. --prefix=/opt/micro-manager indicates the preferred installation directory of Micro-Manager. --with-ij-jar=/usr/share/java/ij.jar is the path to the ImageJ Java library, though I am uncertain whether this was necessary. (I installed ImageJ with sudo apt-get install imagej a while ago.) --with-python=/usr/include/python3.4 should point to the directory containing the Python.h header file for the version of Python you are compiling against. --with-boost-libdir should point to the directory containing the boost libraries (.so files); this was critical for getting MM2 to build. If you are unsure where they are located, you can search for them with sudo find / -name "libboost*". Finally, the last option, --with-boost, may or may not be necessary. I set it to the directory containing the boost headers but never checked whether MM compiles without it.

If all goes well, Micro-Manager will compile and install without problems. Compilation time on my Pi took around one hour.

Set the maximum amount of direct memory

In the next step, we need to make a minor edit to the Micro-Manager Linux start script. Edit the script (/opt/micro-manager/bin/micromanager) to reduce the maximum direct memory to something reasonable:

/usr/lib/jvm/jdk-8-oracle-arm32-vfp-hflt/bin/java -Xmx1024M \
  -XX:MaxDirectMemorySize=1000G \
   -classpath "$CLASSPATH" \
   -Dmmcorej.library.loading.stderr.log=yes \
   -Dmmcorej.library.path="/opt/micro-manager/lib/micro-manager" \
   -Dorg.micromanager.plugin.path="/opt/micro-manager/share/micro-manager/mmplugins" \

Change 1000G to 512M or 256M; otherwise the Pi will complain that a MaxDirectMemorySize of 1000G is too large. You can start Micro-Manager by running this modified script.
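After the edit, the relevant line of the start script would look like this (512M shown here):

  -XX:MaxDirectMemorySize=512M \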

What's next?

Though Micro-Manager compiles and runs on the Pi, I have not yet tested it thoroughly with real acquisitions. I am currently waiting on a camera board to arrive in the mail, and when it does, I will attempt to interface with it through Micro-Manager. Though I could write my own Python library, Micro-Manager is appealing because it can save a lot of time by providing a ready-made means to annotate, process, and store imaging data.

Running Micro-Manager on the Pi also raises the possibility of a fully open, embedded biomedical imaging platform, though I am uncertain at the moment whether the hardware on the Pi is up to the task. If you manage to do anything cool with Micro-Manager and the Raspberry Pi, please let me know in the comments!

Connecting a Raspberry Pi to a home Linux network

I recently purchased a Raspberry Pi 3 Model B and have been tinkering with it for a few days. One of the first things I decided to do was to set it up so that I could access it from my laptop over my home network. This post contains a step-by-step explanation of the process. If you have any questions, feel free to leave a comment or send me an e-mail.

Collect the necessary information

To start, we need to collect a little bit of information about the home network. My internet is provided by a local company that supplied me with a Thomson TWG-870 router. This router determines the IP addresses of all the devices on my network. Since my laptop is running Linux (Debian Jessie, to be exact), I can use the netstat command to get the IP address of the router:

kmdouglass@kmd-laptop:~$ netstat -rn
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         192.168.0.1     0.0.0.0         UG        0 0          0 wlan0

The key part of this output is the Gateway column. A gateway is the IP address of the device (i.e. the router) that provides devices on a local network with access to the Internet.

Knowing the IP address of the gateway, we can next try entering it directly into the address bar of a web browser. On my machine, this opened a dialog asking for a username and password. (If you're not sure what these are, try asking your ISP. And if you haven't changed them from the default settings, then you really should do so.) After entering them and clicking OK, the browser window displayed the general configuration pages for the router.

The next few steps will depend on the specific router. The information we are after is the list of IP addresses that the router reserves for static IP's. A static IP address is an address that is assigned to a device and doesn't change. Many routers have a so-called DHCP server that dynamically assigns IP addresses to devices such as smart phones as they log onto the network. We probably want to always find the Pi at the same address, however, so a static IP makes more sense than one that the router dynamically assigns.

To find the list of static IP's on my specific router, I clicked on the link entitled Network in my router's configuration page. The relevant information for me looks like that in the image below:

DHCP address pool

This information tells us that the router reserves addresses 192.168.0.10 through 192.168.0.254 for the DHCP server. We can therefore most probably use 192.168.0.2 through 192.168.0.9 for static IP's. (Remember that 192.168.0.1 is already taken; it's the address of the router.) I tested 192.168.0.2 by pinging it and received no response, so we will use this address for the Raspberry Pi (use Ctrl-C to stop pinging the device):

kmdouglass@kmd-laptop:~$ ping 192.168.0.2
PING 192.168.0.2 (192.168.0.2) 56(84) bytes of data.
From 192.168.0.15 icmp_seq=1 Destination Host Unreachable
From 192.168.0.15 icmp_seq=2 Destination Host Unreachable
From 192.168.0.15 icmp_seq=3 Destination Host Unreachable
^C
--- 192.168.0.2 ping statistics ---
4 packets transmitted, 0 received, +3 errors, 100% packet loss, time 3014ms
pipe 3

For the next step, we need to collect the broadcast address and subnet mask of the network. We can do this from the laptop that is already connected to the network by running the sudo ifconfig command. This command will report information that looks similar to the following example (note that this is not from my machine but is merely for illustration):

eth0 Link encap:Ethernet HWaddr 00:10:5A:1A:DC:65
inet addr:198.209.253.169 Bcast:208.141.109.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:18940 errors:1 dropped:0 overruns:0 frame:2
TX packets:11554 errors:0 dropped:0 overruns:0 carrier:0
collisions:2 txqueuelen:100
RX bytes:4087250 (3.8 Mb) TX bytes:2499423 (2.3 Mb)
Interrupt:11 Base address:0xd000

The very first line tells us that this block of output belongs to the eth0 interface. If you connect to the internet on your laptop through WiFi, then you may need to find the information for the wlan0 interface instead. wlan0 is usually used to refer to wireless interfaces in Ubuntu and Debian Linux.

The first line of output from ifconfig also provides the type of hardware and the ID of the ethernet card. The information we need, however, is on the second line. The device's IP address on the network is inet addr:198.209.253.169, but we don't really need this information. Rather, we need the two numbers that come next. The broadcast IP is reported in Bcast:208.141.109.255 and the subnet mask in Mask:255.255.255.0. The broadcast IP is used to send messages to all devices on the network, whereas the subnet mask is used to separate the parts of an address that identify the network from the parts that identify the devices and possible "sub-networks."

To summarize this section, we need:

  1. The static IP address that we'll assign to the Pi
  2. The IP address of the router, i.e. the gateway address
  3. The broadcast IP
  4. The subnet mask

Configure the Pi

Now that we have decided on an IP address for the Pi, let's boot it up and configure it to always use this IP address. (I am currently using the NOOBS operating system that came with my Pi starter kit, but this should work with other flavors of Debian Linux as well.)

Once logged on to the Pi, open a terminal and make a backup copy of the file /etc/network/interfaces:

sudo cp /etc/network/interfaces /etc/network/interfaces.bak

Making a backup is good practice; in case we ruin the configuration file, we can simply rewrite it using our backup. Next, open the original interfaces file for editing. In this example, I'll use the nano editor:

sudo nano /etc/network/interfaces

In this file, add the following lines (replacing the addresses with those appropriate for your network):

auto eth0
iface eth0 inet static
    address 192.168.0.2
    netmask 255.255.255.0
    gateway 192.168.0.1
    broadcast 192.168.0.255

What do these lines do, you ask? Let's step through them one-by-one.

Start the network interface at boot

First off, we need to identify the network interface. eth0 is the identifier that is referring to the dedicated ethernet port on the Pi. The line auto eth0 means that this interface will be started at boot.

Configure the interface to use a static IP

Next, we see the line iface eth0 inet static. First, iface eth0 means that we are configuring the ethernet port interface that was described in the last section. Following that, inet specifies that the interface uses TCP/IP networking. Finally, static tells the operating system that the interface will use a fixed IP address instead of requesting one from the router's DHCP server. (I obtained this explanation from this forum post.)

Set the various addresses

The next lines are indented because they are properties of the inet static family. If you've read everything until now, you should be able to figure out what addresses to enter next for each option. The desired static IP address for the Pi should follow the address field; the subnet mask, gateway, and broadcast IP's described above should follow netmask, gateway, and broadcast respectively.

The network property (which is not shown above) contains the network address and is required for 2.0.x kernels. These kernels are pretty old by now, so it is unlikely that you will need to specify this property.
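For completeness, if you did need it, the line would sit alongside the others in the stanza and contain the network address, which for the addresses used here would be:

    network 192.168.0.0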

Restart the network interface

Restarting the interface we just configured on our Pi is as simple as entering these terminal commands:

sudo ifdown eth0
sudo ifup eth0

(Remember to replace eth0 with the appropriate interface if yours is different.) If everything goes well, we should be able to use our web browser to navigate on the Internet. We should also be able to ping the Pi from the laptop and vice versa.

Connecting to the Pi

Once the Pi is on the network, we need a way to connect to it from the laptop and other devices so that we can actually use it for something. One way is to use ssh, or Secure SHell. ssh is a program that lets us securely log on to other devices through a shell (i.e. a terminal). This is useful when we only need to work on the command line.

If, on the other hand, we want a "Remote Desktop"-like GUI environment, we can use VNC. The documentation for VNC is quite good but detailed; I'll let you read up on it on your own if you're interested in using it.

I'll now briefly explain how we can set up ssh on the Pi.

EDIT: VNC installation

As it turns out, you may run into some problems if you do try to setup VNC by following the documentation in the link above. Namely, the documentation is missing a key step, at least for me. I had to first install the VNC server software on the Pi via:

sudo apt-get update
sudo apt-get install realvnc-vnc-server

Even though the rest of this post is about ssh, you may still find this information useful.

Enable ssh on the Pi

We need to enable ssh access to the Pi before we can use it. On the Pi, open a terminal and run the configuration utility:

sudo raspi-config

We should see the following window appear.

The raspi-config menu with Interface Options highlighted.

Use the keyboard to highlight Interface Options and tap the Enter key. In the following menu, we should see an option to enable ssh, as in the following image. Use the keyboard to highlight P2 SSH (or the relevant menu item if the name is different on your Pi) and hit the Enter key to enable it. Once ssh is enabled, we can hit Esc or select the <Back> option until we exit the configuration utility.

The raspi-config Interface Options menu with P2 SSH highlighted.

If you're following along, you may need to restart your Pi for these changes to take effect.

Log onto the Pi from the laptop

Now for the moment of truth. After restarting the Pi, we need to first ensure that we are not logged in to it. If we are, simply click the Menu button, followed by Shutdown... -> Logout and log out of the session.

Next, open a terminal on the laptop and enter the following command, changing the IP address to whatever was decided upon for the Pi:

ssh pi@192.168.0.2

This command runs the ssh program and asks to sign into the Pi as the user called pi. After running the command, we may be prompted for a password to log on if one was set on the Pi. (You did set one, didn't you?) Once successfully entering the password, we should notice that the terminal prompt has changed to something like pi@raspberrypi:~ $. This indicates that we are logged on to the Pi. If we enter the ls command, we should see the contents of the Pi's home directory. When we're ready to disconnect from the Pi, we can simply use the exit command at any time in the terminal. The prompt should change to reflect that we are back on our laptop machine when we have successfully exited.

If this is all working as described above, then congratulations on connecting your Pi to your home Linux network! I wish you many happy hours of hacking :)

Further Reading

  1. The Debian network setup manual is very detailed and describes many, many more aspects of setting up a network than I touched upon here. https://www.debian.org/doc/manuals/debian-reference/ch05.en.html
  2. The Raspberry Pi documentation about VNC (Virtual Network Computing) is a great resource for setting up a graphical interface to remotely connect to your Pi. https://www.raspberrypi.org/documentation/remote-access/vnc/

FIFI Parts List

Many people have recently been writing to Suliana or me to ask for a parts list for FIFI, the flat-field illuminator I developed for large field-of-view localization microscopy. (The manuscript may be found here.)

I compiled a list of the primary parts and suggestions for the optional parts that one may need to install FIFI into their own microscope. You may find the link to this document (in the Open Document .ods format) below.

FIFI Parts List

Disclaimer

The prices for the parts are approximate and were taken from the respective vendor websites in January 2017. Do not be surprised if the prices have changed since then and differ from those in the spreadsheet.

If you find any problems or errors in this parts list, please do not hesitate to contact me. It is highly recommended that you perform the initial design of the illuminator for your setup before buying any parts to avoid making costly mistakes.

The parts list is MIT licensed.

Simple pupil function calculations

Note: This is a remake of my original post on pupil functions. I decided to write this remake to fix what I perceived to be some errors in thinking and to better address the problem of units in some of the physical quantities. For the original, click here.

A pupil function is a theoretical tool for characterizing an imaging system. In simple terms, it is a mathematical model for any general arrangement of lenses and mirrors used to collect light from an object and form an image of that object in some other plane. A few of the reasons why pupil functions are useful include:

  1. they reduce a complicated optical system--such as a microscope--to a relatively simple, two-dimensional, and complex-valued function;
  2. they provide a convenient way to represent the aberrations present in the system;
  3. they are easy to simulate on a computer using fast Fourier transforms.

In this post I will show you how to write a simple program that computes the image of an isotropic point source of light from an optical system's pupil function.

Theoretical background

In scalar diffraction theory, light is represented as a three-dimensional function known as the scalar field. At every point in space \( \mathbf{r} \), the scalar field \( u \left( \mathbf{r} \right) \) is a single, complex value that represents the electric field at that point. The physical laws of diffraction require that the scalar field be described by two numbers, an amplitude and a phase, that are derived from the field's real and imaginary parts, \( \text{Re} \left[ u \left( \mathbf{r} \right) \right] \) and \( \text{Im} \left[ u \left( \mathbf{r} \right) \right] \):

\begin{align*} A &= \sqrt{\text{Re} \left[ u \left( \mathbf{r} \right) \right]^2 + \text{Im} \left[ u \left( \mathbf{r} \right) \right]^2 } \\ \phi &= \arctan \left( \frac{\text{Im} \left[ u \left( \mathbf{r} \right) \right]}{\text{Re} \left[ u \left( \mathbf{r} \right) \right]} \right) \end{align*}

If we know the amplitude and phase at a given point, then we know the scalar field at that point. Despite the fact that scalar diffraction theory ignores the polarization of light, it does wonderfully well at describing a large range of optical phenomena.
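In NumPy, these two quantities are computed with np.abs() and np.angle(), which implement exactly the expressions above:

import numpy as np

u = 3 + 4j         # one sample of a scalar field
print(np.abs(u))   # 5.0 = sqrt(3**2 + 4**2)
print(np.angle(u)) # ~0.927 = arctan(4 / 3)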

For most problems in imaging, we don't really need to know the three-dimensional distribution of the field in all of space. Instead, we simplify the problem by asking how an optical system transforms the field in some two-dimensional object plane into a new field distribution in the image plane (see the figure below). Any changes in scale between these two planes are caused by the system's magnification; any blurring or distortion is caused by diffraction and possibly aberrations.

Optical system with pupils

The pupil function is the two-dimensional Fourier transform of the scalar field in the image plane when the object is a point source emitting light equally in all directions, i.e. an isotropic point source. Mathematically, the pupil function is written as

\begin{equation*} P \left(f_x, f_y \right) = \frac{1}{A}\iint_{-\infty}^{\infty} \text{PSF}_A \left( x, y \right) \exp \left[ -j 2 \pi \left( f_x x + f_y y\right) \right] \, dx \, dy \end{equation*}

where \( A \) is a normalizing constant, \( f_x \) and \( f_y \) represent spatial frequencies in the x- and y-directions, and \( j \) is the imaginary number. \( \text{PSF}_A \left( x, y \right) \) is known as the amplitude point spread function. Despite the intimidating name, \( \text{PSF}_A \left( x, y \right) \) is just the scalar field from the isotropic point source in the image plane. The pupil function and the amplitude point spread function form a Fourier transform pair, so we can also write

\begin{equation*} \text{PSF}_A \left(x, y \right) = A \iint_{-\infty}^{\infty} P \left( f_x, f_y \right) \exp \left[ j 2 \pi \left( f_x x + f_y y\right) \right] \, df_x \, df_y \end{equation*}

What all of this means is that we can compute the image of an on-axis, isotropic point source if we know the pupil function that describes the system: compute the two-dimensional Fourier transform of the pupil function and voilà, you have the image (or at least the field that will form the image).

The pupil function is dimensionless; \( \text{PSF}_A \) is a field

By convention the pupil function is a dimensionless complex number and has a magnitude between 0 and 1. The amplitude PSF, however, has units of electric field, or Volts per distance, \( V / m \).

If you perform a dimensional analysis on the Fourier transform expressions above, you can see that the normalizing constant \( A \) must have units of \( V \times m \). For example, if \( \text{PSF}_A \) has units of \( V / m \) and \( dx \) and \( dy \) both have units of \( m \), then \( A \) has units of \( V \times m \) and the pupil function is therefore dimensionless. Sometimes in the literature you will find that units or a normalizing constant are ignored or, worse, that the reader is expected to "fill them in" later. I prefer to be explicit and to define \( \text{PSF}_A \) to be a field.
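Written out explicitly, the unit bookkeeping for the forward transform reads

\begin{equation*} \left[ P \right] = \frac{ \left[ \text{PSF}_A \right] \left[ dx \right] \left[ dy \right] }{ \left[ A \right] } = \frac{ \left( V / m \right) \left( m \right) \left( m \right) }{ V \times m } = 1 \end{equation*}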

Pupil functions are not entrance or exit pupils

The pupil function and the pupil planes of a system are not the same thing. The entrance and exit pupils--which together are known as the pupil planes--are the planes in which the images of the system's aperture stop are located. The pupil function, however, is not an image of the aperture stop of the system; it's a 2D Fourier transform of a field.

There is nevertheless a relationship between the pupil function and the plane of the exit pupil. The pupil function represents the relative amplitude and phase of the field on the surface of a so-called reference sphere that intersects the optics axis in the plane of the system's exit pupil.

Pupil function simulations in Python

The goal of this simulation will be simple: given a pupil function, a single wavelength, an optical system with a numerical aperture NA, and an amount of power that passes through the image plane, compute the image of an on-axis isotropic point source.

There are only a few steps needed to achieve our goal:

  1. define the simulation's input parameters;
  2. setup the image plane and pupil plane coordinate system;
  3. create the pupil plane and normalize it so that the field carries the desired amount of power;
  4. and compute the field in the image plane.

Before we go further, it's worth pointing out that the pupil function and \( \text{PSF}_A \) are obtained by calculating the continuous Fourier transform of one another. On a computer, however, it's often easiest to compute what's called a discrete Fourier transform via the fast Fourier transform (FFT) algorithm. The continuous Fourier transform and the FFT are not, strictly speaking, the same thing. Therefore, we should expect from the start that there may be small differences between the computed \( \text{PSF}_A \) and the analytical calculation.

With this in mind, we'll start by importing a few scientific libraries like Numpy and Scipy.

In [1]:
%pylab inline
import sys
from numpy.fft import fft2, fftshift, ifftshift
import scipy
from scipy.integrate import simps
import seaborn as sns # Used only to set up plots
sns.set_context(context = 'talk')
plt.style.use('dark_background')
plt.rcParams['figure.facecolor'] = '#272b30'
plt.rcParams['image.cmap'] = 'viridis' 

print('Python version:\n{}\n'.format(sys.version))
print('Numpy version:\t\t{}'.format(np.__version__))
print('matplotlib version:\t{}'.format(matplotlib.__version__))
print('Scipy version:\t\t{}'.format(scipy.__version__))
print('Seaborn version:\t{}'.format(sns.__version__))
Populating the interactive namespace from numpy and matplotlib
Python version:
3.5.2 |Continuum Analytics, Inc.| (default, Jul  2 2016, 17:53:06) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)]

Numpy version:		1.11.2
matplotlib version:	1.5.3
Scipy version:		0.18.1
Seaborn version:	0.7.1

Step 1: Define the input parameters

Next, we need to define a few parameters that will determine the output of the simulations. These parameters are:

  1. wavelength Units are \( \mu m \).
  2. NA Numerical aperture of the system. No units.
  3. pixelSize The length of a square pixel in the object space. Units are \( \mu m \).
  4. numPixels The number of pixels in your camera. This will be assumed to be even.
  5. power The total power carried by the field in Watts, \( W \).

Note that pixel size is defined as the size of a pixel in the object space. Defining it like this is often more intuitive than defining the pixel size in the image space. For example, when you see a scale bar on a microscopy image, the distances correspond to object space distances, not the actual distance that the image spans on the piece of paper or computer screen. Furthermore, there is no problem with working with an object space pixel size in the image plane since the coordinate systems in the object and image planes of our perfect optical system can be easily mapped onto one another by a linear scaling by the system's magnification.
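As a hypothetical example (these numbers are not used below), a camera with 6.5 \( \mu m \) pixels behind a total magnification of 65x samples the object plane every 0.1 \( \mu m \):

# Hypothetical camera and magnification for illustration only
cameraPixelSize = 6.5 # microns, physical pixel size on the camera
magnification   = 65  # total magnification of the imaging system
print('Object-space pixel size: {:.2f} microns'.format(cameraPixelSize / magnification))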

We don't necessarily need to use a camera as the detector. Since we are limited to working with discrete arrays in the computer, though, it's convenient to say that we have a camera as a detector since each pixel is a discrete sample of the field.

In addition to the above parameters, we'll assume that the object, the imaging system, and the image plane are all in air. We'll define a constant \( Z_0 = 376.73 \, \Omega \) which is known as the impedance of free space or the vacuum impedance. This is the constant of proportionality between the power carried by the scalar field and the integral of its absolute square in the image plane:

\begin{equation*} P_0 = \frac{1}{Z_0} \iint_{-\infty}^{\infty} \left| \text{PSF}_A \left( x, y \right) \right|^2 \, dx \, dy \end{equation*}

Of course, air does not really have the same impedance as vacuum, but the two values are close enough. I have also used \( P_0 \) to denote the power because I already used \( P \left( f_x, f_y \right) \) to denote the pupil function.

In [2]:
# Setup the simulation parameters
wavelength = 0.68   # microns
NA         = 0.7    # Numerical aperture of the objective
pixelSize  = 0.1    # microns
numPixels  = 2048   # Number of pixels in the camera; keep this even
power      = 0.1    # Watts
Z0         = 376.73 # Ohms; impedance of free space

Step 2: Create the image and pupil plane coordinate systems

Our simulation will transform the values in a two-dimensional square array of complex numbers (the pupil function) into a new two-dimensional array of complex numbers of the same size. Before we do this, however, let's first determine the coordinates of each pixel.

Since we specified the pixel size and the number of pixels, it's easiest to start with the image plane coordinates. We will define the origin of our coordinate system to lie at the center of the array, which, for an even number of pixels and array indexes that start at zero, lies halfway between the pixels \( \left( \frac{\text{numPixels}}{2} \right) - 1 \) and \( \left( \frac{\text{numPixels}}{2} \right) \) in both the horizontal and vertical directions.

In [3]:
# Create the image plane coordinates
x = np.linspace(-pixelSize * numPixels / 2, pixelSize * numPixels / 2, num = numPixels, endpoint = True)

We only need to create a single, one-dimensional array to represent the coordinates because our image and pupil plane arrays will be square; we can use the same array to represent the coordinates in both the horizontal and vertical directions.

With the image plane coordinates taken care of, the next question is: what are the pupil plane coordinates? This question is a frequent source of frustration for students (and full-time scientists). I won't go into the details in this post, but will instead just tell you the two rules you need to remember for Fourier optics simulations:

  1. The number of elements in the pupil function array is the same as the number of elements in the image plane array.
  2. For an even number of array elements, the frequency values along either principal direction in the pupil function run from \( -\frac{f_S}{2} \) to \( f_S \left( \frac{1}{2} - \frac{1}{\text{numPixels}} \right) \), with the spacing between discrete coordinate values equal to \( \frac{f_S}{\text{numPixels}} \).

\( f_S \) is called the sampling frequency and is equal to one divided by the spacing between image space coordinates. We can go ahead now and compute the frequency-space coordinate values.

In [4]:
# Create the Fourier plane
dx = x[1] - x[0]    # Sampling period, microns
fS = 1 / dx         # Spatial sampling frequency, inverse microns
df = fS / numPixels # Spacing between discrete frequency coordinates, inverse microns
fx = np.arange(-fS / 2, fS / 2, step = df) # Spatial frequency, inverse microns

Step 3: Create the pupil function

In nearly all imaging systems, the pupil is circular because its optical elements are circular. The radius of the pupil function is the ratio of the system's numerical aperture to the wavelength of the light, \( \frac{\text{NA}}{\lambda} \) (Hanser, 2004). Perfect systems like the one we are modeling here have a pupil with a constant value everywhere inside this circle and zero outside of it.

We can simulate such a pupil by making a circular mask with a radius of \( \frac{\text{NA}}{\lambda} \). The mask is one inside the circle and zero outside of it.

In [5]:
# Create the pupil, which is defined by the numerical aperture
fNA             = NA / wavelength # radius of the pupil, inverse microns
pupilRadius     = fNA / df        # pixels
pupilCenter     = numPixels / 2   # assumes numPixels is even
W, H            = np.meshgrid(np.arange(0, numPixels), np.arange(0, numPixels)) # coordinates of the array indexes
pupilMask       = np.sqrt((W - pupilCenter)**2 + (H - pupilCenter)**2) <= pupilRadius

Define the power carried by the scalar field

I mentioned in the theoretical background above that the total optical power carried by the field is the two dimensional integral of the absolute square of the field divided by the impedance. If we want to set the power as an input of the simulation, we need to first normalize our pupil values by this integral.

Parseval's theorem tells us that we can integrate over \( \text{PSF}_A \left( x, y \right) \) and \( A \times P \left( f_x, f_y \right) \), i.e. the image plane field or the normalized pupil, respectively, and get the same number:

\begin{equation*} \iint_{-\infty}^{\infty} \left| \text{PSF}_A \left( x, y \right) \right|^2 \, dx \, dy = A^2 \iint_{-\infty}^{\infty} \left| P \left( f_x, f_y \right) \right|^2 \, df_x \, df_y \end{equation*}

Now that we have the pupil, we can perform a numerical integration over it using Simpson's rule to find the normalizing constant. We then multiply the pupil by the square root of this constant times our desired value for the power to set the total power carried by the field.

In [6]:
# Compute normalizing constant
norm_factor = simps(simps(np.abs(pupilMask)**2, dx = df), dx = df) / Z0 # A^2
print('Normalization constant:\t\t{:.4f} W'.format(np.sqrt(norm_factor)))

# Renormalize the pupil values
pupil = pupilMask * (1 + 0j)
normedPupil = pupil * np.sqrt(power / norm_factor)

new_power = simps(simps(np.abs(normedPupil)**2, dx = df), dx = df) / Z0
print('User-defined power:\t\t{:.4f} W'.format(power))
print('Power now carried by field:\t{:.4f} W'.format(new_power))
Normalization constant:		0.0940 W
User-defined power:		0.1000 W
Power now carried by field:	0.1000 W
In [7]:
# Show the pupil
ax = plt.imshow(np.abs(pupil), extent = [fx[0], fx[-1], fx[0], fx[-1]])
plt.grid(False)
cb = plt.colorbar(ax)
cb.set_label('$P \, ( f_x, f_y ) $')
plt.xlabel('$f_x$, $\mu m^{-1}$')
plt.ylabel('$f_y$, $\mu m^{-1}$')
plt.show()

# Compute the power
power_pupil = simps(simps(np.abs(normedPupil)**2, dx = df), dx = df) / Z0
print('Power in pupil plane: {:.4f} W'.format(power_pupil))
Power in pupil plane: 0.1000 W

Step 4: Compute the image of the point source

With the pupil and coordinate systems established, we are now ready to compute the image of the isotropic point source that this system produces.

To do this, we need to perform a few easy but important steps. In the first step, we will shift the origin of the pupil from the center of the array to the array indexes \( \left( 0, 0 \right) \) using the ifftshift() function. The reason we do this is that fft2() expects that the zero frequency value lies at the origin of the array. The two-dimensional FFT of the shifted pupil is then computed, producing a new array with the zero frequency at array indexes \( \left( 0, 0 \right) \) and the Nyquist frequency \( f_S / 2 \) in the middle of the array's axes (numpy.fft.fft2 - NumPy v1.11 Manual, accessed on 2016-11-15). We finish by shifting the origin back to the center of the array using fftshift() so that it makes sense when we visualize the results.

The final step is to multiply the result by the square of the spacing between frequency coordinates. This step ensures that the power is preserved during the FFT operation (Schmidt, 2010, pp. 15-18).
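
You might wonder why we use ifftshift() for the first shift but fftshift() for the last one. For arrays with an even number of elements, like ours, the two functions do exactly the same thing; for odd-sized arrays they differ, and using them in the order above is what guarantees the origin ends up where we expect. A small one-dimensional example illustrates the difference:

a = np.arange(5)               # odd number of elements
fftshift(a)                    # array([3, 4, 0, 1, 2])
ifftshift(a)                   # array([2, 3, 4, 0, 1])
fftshift(ifftshift(a))         # array([0, 1, 2, 3, 4]); they are inverses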

Chaining the functions together, these steps look like:

psf_a = fftshift(fft2(ifftshift(normedPupil))) * df**2

The image is the irradiance in the image plane, which is the absolute square of the field, divided by the vacuum impedance. Its units are power per area, or in this case \( W / \mu m^2 \). In optics, the irradiance is what is measured by a camera, not the field.

image = np.abs(psf_a)**2 / Z0
In [8]:
psf_a = fftshift(fft2(ifftshift(normedPupil))) * df**2
image = np.abs(psf_a)**2 / Z0

# Show the image plane
img = plt.imshow(image, interpolation='nearest', extent = [x[0], x[-1], x[0], x[-1]])
cb = plt.colorbar(img)
plt.gca().set_xlim((-2, 2))
plt.gca().set_ylim((-2, 2))

plt.xlabel('x, $\mu m$')
plt.ylabel('y, $\mu m$')
cb.set_label('Irradiance, $W / \mu m^2$')
plt.grid(False)
plt.show()

Above you can see the image of an isotropic point source. The image is not a point but rather a blurred spot in the center of the image plane due to diffraction at the pupil.
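
We can even predict the size of this spot: the first dark ring of an Airy pattern lies at a radial distance of \( 0.61 \lambda / \text{NA} \) from the center, which is the familiar Rayleigh criterion. Using the values of NA and wavelength defined earlier in the post:

# Radius of the first minimum of the Airy pattern (Rayleigh criterion)
rRayleigh = 0.61 * wavelength / NA
print('Radius of first Airy minimum: {:.2f} microns'.format(rRayleigh))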

Verify the results

The first thing we can do to check whether the above result is correct is to integrate the irradiance over the image plane. By Parseval's theorem, it should equal the integral of the absolute square of the normalized pupil function divided by the vacuum impedance.

In [9]:
power_image = simps(simps(image, dx = dx), dx = dx)
print('Power in pupil plane: {:.4f} W'.format(power_pupil))
print('Power in image plane: {:.4f} W'.format(power_image))
Power in pupil plane: 0.1000 W
Power in image plane: 0.1000 W

So far so good. The next thing that we can do to verify these results is to calculate the sampled values of the analytical solution to this problem. Scalar diffraction theory predicts that the solution for the irradiance of the field diffracted by a circular aperture is an Airy disk:

\begin{equation*} I \left( r \right) = I_0 \left[ \frac{2 J_1 \left( X \right)}{X} \right]^2 \end{equation*}

where \( X = \frac{2 \pi r \, \text{NA}}{\lambda} \), \( r = \sqrt{x^2 + y^2} \) is the radial coordinate, and \( J_1 \) is the first-order Bessel function of the first kind (Weisstein, Mathworld, accessed on 2016-11-16). The Airy disk is not available as a ready-made function in Python's scientific libraries, so we will need to build it ourselves from the Bessel function, which scipy.special does provide.

In [10]:
from scipy.special import j1 as bessel1
def airyDisk(x, y, NA = 0.5, wavelength = 0.532):
    """Computes a 2D Airy disk pattern.
    
    Parameters
    ----------
    x, y : array of float, array of float
        Coordinates where the function will be evaluated.
    NA   : float
        The system's numerical aperture.
    wavelength: float
        The wavelength of the light; same units as x and y.
        
    Returns
    -------
    result : array of float
        The Airy disk pattern, normalized to a peak value of one.
        
    """
    r      = np.sqrt(x**2 + y**2)
    X      = 2 * np.pi * r * NA / wavelength
    
    # Suppress the divide-by-zero warning at the origin; the inf/nan
    # values it produces are replaced with the correct limit below
    with np.errstate(divide = 'ignore', invalid = 'ignore'):
        result = (2 * bessel1(X) / X)**2
    
    try:
        # Replace values where divide-by-zero occurred with 1,
        # the limit of the function as X approaches zero
        result[np.logical_or(np.isinf(result), np.isnan(result))] = 1
    except TypeError:
        # TypeError is thrown when scalars--not arrays--are passed into the function
        result = np.array([result])
        result[np.logical_or(np.isinf(result), np.isnan(result))] = 1
    
    return result

Now we can go ahead and visually compare our image plane calculations with the Airy disk. If we subtract one from the other, we should get values close to zero everywhere.

In [11]:
from mpl_toolkits.axes_grid1 import make_axes_locatable
In [12]:
# Subtracting dx/2 places the origin at the edge of a pixel, not at its center
X, Y = np.meshgrid(x - dx/2, x - dx / 2, indexing = 'xy')

fig, ax = plt.subplots(nrows = 1, ncols = 2, figsize = (12, 8))

img = ax[0].imshow(image, interpolation='nearest', extent = [x[0], x[-1], x[0], x[-1]])

divider = make_axes_locatable(ax[0])
cax = divider.append_axes("right", size="5%", pad=0.05)
cb0 = plt.colorbar(img, cax = cax)

ax[0].grid(False)
ax[0].set_xlim((-2, 2))
ax[0].set_ylim((-2, 2))

ax[0].set_xlabel('x, $\mu m$')
ax[0].set_ylabel('y, $\mu m$')
ax[0].set_title('Simulation')
plt.grid(False)

I0  = np.max(image)
img = ax[1].imshow(I0 * airyDisk(X,Y, NA = NA, wavelength = wavelength), interpolation = 'nearest', extent = [x[0], x[-1], x[0], x[-1]])

divider = make_axes_locatable(ax[1])
cax  = divider.append_axes("right", size="5%", pad=0.05)
cb1  = plt.colorbar(img, cax = cax)

ax[1].grid(False)
ax[1].set_xlim((-2, 2))
ax[1].set_ylim((-2, 2))

ax[1].set_xlabel('x, $\mu m$')
ax[1].set_title('Theory')

cb1.set_label('Irradiance, $W / \mu m^2$')

plt.tight_layout()
plt.show()
In [13]:
# Find and plot the difference between the simulated image and the theoretical one
I0 = np.max(image)
diffImg = image - I0 * airyDisk(X,Y, NA = NA, wavelength = wavelength)

plt.imshow(diffImg, interpolation = 'nearest', extent = [x[0], x[-1], x[0], x[-1]])
cb = plt.colorbar()
cb.set_label('Irradiance, $W / \mu m^2$')
plt.grid(False)
plt.xlabel('x $\mu m$')
plt.xlim((-2, 2))
plt.ylabel('y $\mu m$')
plt.ylim((-2, 2))
plt.title('Difference between simulation and theory')
plt.show()

From the plots above you can see that the simulation does quite well at reproducing the image of the point source. In fact, the simulated and theoretical irradiance values differ by at most a small fraction of a percent.

However, the pattern in the difference image is not random, so these differences are probably not round-off errors. In fact, they come from the minor but important detail that we are calculating a discrete Fourier transform with the FFT command, whereas the theory predicts that the point spread function is a continuous Fourier transform of the pupil.
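
A one-dimensional example is enough to convince yourself of this. The sketch below (with illustrative variable names of my own choosing) compares the DFT of a sampled rect pulse to the sinc function predicted by the continuous Fourier transform; the two agree closely, but the residual is nonzero and shrinks as the sampling period decreases.

# Compare the DFT of a sampled rect pulse to its continuous Fourier
# transform, a sinc; the small residual shrinks with finer sampling
N    = 512
dx1d = 0.1                                # sampling period
x1d  = dx1d * (np.arange(N) - N / 2)      # centered spatial coordinates
f1d  = fftshift(np.fft.fftfreq(N, d = dx1d))

rect = (np.abs(x1d) <= 1).astype(float)   # rect pulse of half-width 1
dft  = np.real(fftshift(np.fft.fft(ifftshift(rect)))) * dx1d
cft  = 2 * np.sinc(2 * f1d)               # continuous-transform prediction

print('Maximum difference: {:.4f}'.format(np.max(np.abs(dft - cft))))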

Final remarks

The calculations I discussed here rely on quite a few simplifying assumptions. The first is that they are based on scalar diffraction theory. The real electromagnetic field is polarized and is described not by a scalar but by a three dimensional, complex-valued vector. The theory of vector diffraction describes how such a field propagates through an optical system; naturally, it is more complicated than scalar diffraction theory.

Isotropic point sources are also an idealization of reality. Single fluorescent molecules, for instance, do not emit light equally in all directions but rather in a pattern known as dipole radiation.

Finally, the optical system as modeled here was a linear, shift-invariant system. This means that the same pupil function can actually be used to compute the image of any off-axis point source as well; we would only need to apply a linear phase ramp to the pupil to account for this, as sketched below. Real optical systems are only approximately linear and shift-invariant, which means that a more complete model would assign a different pupil function to each object point. This allows for aberrations that vary as a function of object position, but requires more information to fully describe the system.
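
As a rough illustration of this shift invariance, multiplying the pupil by a linear phase ramp translates the PSF in the image plane. The snippet below is only a sketch: shiftX and shiftY are hypothetical example values, and the sign of the exponent that produces a shift in the positive direction depends on the FFT convention in use.

# Shift the PSF by (shiftX, shiftY) microns via a linear phase ramp
# applied to the pupil; the sign depends on the FFT convention
shiftX, shiftY = 2, -1  # microns; hypothetical example values
FX, FY = np.meshgrid(fx, fx)
phaseRamp = np.exp(2j * np.pi * (shiftX * FX + shiftY * FY))
psfShifted = fftshift(fft2(ifftshift(normedPupil * phaseRamp))) * df**2
imageShifted = np.abs(psfShifted)**2 / Z0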

Field-dependent photophysics - video

Below you can see a video of a raw STORM movie that I acquired in the lab a few months ago on a standard, inverted laser fluorescence microscope. The sample consists of AlexaFluor 647-labeled tubulin in Cos7 cells; the movie was recorded at a frame rate of 100 fps. (I have slowed things down a bit to more easily see what's going on.) Go ahead and watch the video now; I'll discuss a little bit about what you're seeing afterwards.

In the beginning of the movie you can see the moment when the laser beam first hits the sample. Since the illumination geometry is just widefield epi-illumination, the laser beam has a Gaussian-like irradiance profile. (As explained in the link, irradiance is the amount of power the laser beam carries per unit area. It is often referred to as intensity, though intensity actually refers to a different quantity in the field of radiometry.) This means that the fluorophores in the middle of the beam are exposed to a higher irradiance than those at the edges.

The effect of the laser beam profile for STORM is quite clear when watching the video. The photoswitching rates of the fluorophores to the long-lived dark state are much higher in the center of the beam than at the edges. This leads to a field-dependent density of molecules in the fluorescence-emitting state, which will ultimately affect one's ability to precisely and accurately localize the single molecule emissions. If you look closely, you can see that the brightness of the fluorophores is also higher near the beam center, meaning that these molecules will be localized more precisely because we recorded more photons from them.

All of these features mean that one must take care when performing quantitative analyses on STORM datasets, because the density, precision, and accuracy of the single molecule position estimates will vary across the field of view.

Perlin Noise

Since my last post, I have been experimenting with the noise package from Casey Duncan. This package is the easy solution for generating Perlin noise that I had hoped for. It seems simple and fast, though one downside, at least for me, is that it doesn't play so nicely with numpy arrays. This is not a big deal, though; it was probably developed for game design and computer graphics rather than for scientists.

So, let's use this library to continue exploring the question of why correlated noise often makes for a better model of the real world.

In [1]:
%pylab inline
import seaborn as sns
from matplotlib import animation
from noise import pnoise1, pnoise3
from mpl_toolkits.axes_grid1 import make_axes_locatable
plt.rcParams['savefig.dpi'] = 120
sns.set_color_codes()
Populating the interactive namespace from numpy and matplotlib

Generating Perlin noise with the noise library

The basic function pnoise1() takes a single float argument x. It won't work with numpy arrays. For this reason, I'm generating the noise signal in a for loop.

In [3]:
numPoints = 100
time      = np.linspace(0, 10, num = numPoints)

# Generate the signal one element at a time
signal    = np.zeros(numPoints)

for ctr, t in enumerate(time):
    signal[ctr] = pnoise1(t)

plt.plot(time, signal)
plt.xlabel('Time')
plt.show()
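
If the explicit loop bothers you, np.vectorize() provides a thin convenience wrapper. Note that it still loops in Python internally, so it is a readability convenience rather than a speed-up:

# np.vectorize loops internally; convenient, but no faster than the loop above
pnoise1Vec = np.vectorize(pnoise1)
assert np.allclose(signal, pnoise1Vec(time))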

The octaves and persistence parameters determine how much energy is contained in each successive octave of the noise's power spectrum: octaves controls how many levels of finer detail are present in the noise signal, and persistence controls their relative strength. Before going into them in more detail, we can vary them and observe the outcome.

Here's the result of varying the number of octaves.

In [4]:
signal2 = np.zeros(numPoints)
signal3 = np.zeros(numPoints)
for ctr, t in enumerate(time):
    signal[ctr]  = pnoise1(t, octaves = 1) + 1
    signal2[ctr] = pnoise1(t, octaves = 2)
    signal3[ctr] = pnoise1(t, octaves = 4) - 1

plt.plot(time, signal,  label = '1 octave',  linewidth = 2)
plt.plot(time, signal2, label = '2 octaves', linewidth = 1.5)
plt.plot(time, signal3, label = '4 octaves', linewidth = 1.0)
plt.xlabel('Time')
plt.legend()
plt.show()

In the plot above (the constant offsets are only there to aid visualization), I added one and three additional octaves to the noise while keeping the persistence fixed. Increasing the octaves makes the curve appear more 'jagged' and noisy on the smaller scales, much like extending the frequency range of the filter applied to the random signal from my last post. Over longer time scales, the curve more or less follows the original one.
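
Mechanically, multi-octave noise is just a sum of scaled copies of the base noise: each successive octave doubles the sampling frequency and multiplies the amplitude by the persistence. Here is a minimal sketch of the idea; the noise library's internals may normalize or parameterize this differently:

# Conceptual sketch of multi-octave noise: octave k is sampled at
# 2**k times the base frequency with amplitude persistence**k
def fractalNoise(t, octaves = 4, persistence = 0.5):
    return sum(persistence**k * pnoise1(t * 2**k) for k in range(octaves))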

Below you can see what happens when we use four octaves but increase the persistence gradually. The amplitude of the spikes at small time scales increases with increasing persistence.

In [5]:
for ctr, t in enumerate(time):
    signal[ctr]  = pnoise1(t, octaves = 4, persistence = 0.25) + 1
    signal2[ctr] = pnoise1(t, octaves = 4, persistence = 0.5)
    signal3[ctr] = pnoise1(t, octaves = 4, persistence = 1) - 1

plt.plot(time, signal,  label = 'persistence = 0.25',  linewidth = 2)
plt.plot(time, signal2, label = 'persistence = 0.50', linewidth = 1.5)
plt.plot(time, signal3, label = 'persistence = 1.00', linewidth = 1.0)
plt.xlabel('Time')
plt.legend()
plt.show()