
The visual appeal of SEM images is what captivated me most when I first saw what an SEM was capable of. The remarkable clarity and depth of focus of an SEM micrograph can be enhanced by colorizing the grayscale image, creating stereo pair images (anaglyphs), and applying other post-processing kernels. This project is a detailed analysis of methods to make the already beautiful images from SEMs even better through post-processing kernels, deliberate - not arbitrary - colorization, and the creation of three-dimensional stereo pair anaglyphs. Additionally, a small part of my project was a design study on the feasibility of an optical system to acquire digital images from the Institute of Optics' Transmission Electron Microscope.

1    Sample Preparation/General Imaging Considerations

When considering what types of samples to use for my project, I knew that samples with a lot of texture and surface topography would be the most interesting. Therefore, I chose the following sample suite:

  1. CD pits
  2. DVD pits
  3. Vinyl Record Grooves
  4. Housefly
  5. Ladybug

The housefly and ladybug turned out to be the most interesting to image because of the wide variety of features they possessed. The CD and DVD pits were interesting to look at, but those samples did not offer much variety.

In order to image the pits of the CD, I needed to remove the aluminum layer since the polycarbonate plastic is transparent to visible light but not to an electron beam. In order to do this, I first scored the top surface of the disc with a razor blade. I then firmly applied a strip of carbon tape to the top of the disc, and quickly removed it, pulling up some of the aluminum with it. It was clear that I had removed some of the pits because diffraction could be seen. I then attached the tape to a sample stub and the pits were ready to be imaged with relatively little work.

The DVD pit samples were prepared slightly differently due to the nature of the mastering process for DVDs. At first, I tried to use the same technique that I used for the CD. However, the aluminum that was removed did not show any diffraction and had a dull luster to it. I did notice, however, that the polycarbonate plastic did show diffraction, so I sputter coated the plastic with approximately 60 Angstroms of gold and was able to image the pits this way.

For the vinyl record sample, I simply cut a small section of a record and attached it to a sample stub with carbon tape. I then sputter coated approximately 90 Angstroms of gold onto the grooves. Since the sample was relatively thick (2-3 mm), carbon tape was applied along the side to ensure good conductivity (this was also done with the polycarbonate plastic from the DVD).

The housefly and ladybug required the most in terms of sample preparation. Since they are biological samples, they needed to be completely dehydrated before they could enter the SEM's vacuum chamber. To do this, critical point drying (CPD) was performed on the samples. In CPD, the samples are first placed in ethanol. CO2 then replaces the ethanol, and the CO2 is brought above its critical point (31.1 degrees Celsius and 1072 PSI) such that there is a continuity of state. Above the critical point, the distinction between liquid and gas disappears, so the liquid can be removed without any of the damaging effects that surface tension causes. The gas is then released, reducing the pressure, and the samples are dry of all the liquids used - water, ethanol, and CO2. After dehydrating the samples, they were sputter coated with 90 Angstroms of gold. Carbon paint was necessary to increase the conductivity, since the contact between the bodies of the bugs and the sample stub was not always very good.

Secondary electrons were always used to obtain images, since they are very topographically sensitive. Backscattered electrons would give very little contrast here because they mostly show contrast with changes in atomic number. Both the secondary electron detector by itself and the mix mode (SE detector + in-lens detector) were used. The SE detector by itself gave a much higher signal-to-noise ratio; however, the images were also usually saturated even at the lowest contrast (gamma) setting due to charging or other effects.

Except for the CD and DVD samples, relatively low magnifications and long working distances were used in order to get good depth of focus (this was much more difficult when the samples were tilted nearly 90 degrees).

2    Colorization of Grayscale Images

Adobe Photoshop is an excellent tool to use for applying false-color to digital SEM micrographs. In some cases, a single-color tint was enough to make a big difference in the appearance of the micrograph. In those instances, I simply made a duplicate of the Background layer and modified the blending options. I used a color overlay to add color and in most cases used either the normal, hard light, or vivid light blending modes.
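As a rough numerical illustration of what a single-color tint does, the blend can be sketched outside Photoshop. The snippet below (Python/NumPy rather than Photoshop, and with a hypothetical gold color and a simple multiplicative overlay standing in for the exact blending modes used) tints a grayscale array:

```python
import numpy as np

def tint(gray, color, strength=1.0):
    """Tint a grayscale image (2-D array, 0-255) with an RGB color.

    Blends a multiplicative color overlay with the original gray values;
    strength=1.0 gives a full tint, 0.0 returns the grayscale image.
    """
    g = gray.astype(float) / 255.0
    rgb = np.stack([g, g, g], axis=-1)                   # grayscale -> RGB
    overlay = rgb * (np.asarray(color, float) / 255.0)   # multiply blend
    out = (1 - strength) * rgb + strength * overlay
    return (out * 255).round().astype(np.uint8)

# Hypothetical example: tint a small gradient with a gold color
gray = np.tile(np.arange(0, 256, 64, dtype=np.uint8), (2, 1))
gold = (255, 200, 64)
tinted = tint(gray, gold)
print(tinted.shape)  # (2, 4, 3)
```

Photoshop's hard light and vivid light modes are more involved than a straight multiply, but the idea is the same: each output channel is a function of the gray value and the overlay color.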

In order to segment the image to selectively color it, I used the magnetic lasso tool, which makes it very easy to trace along edges. I selected all the features I wanted and then performed a "Layer via Copy" (layer via cut does not work well when the selection is feathered - feathering is sometimes necessary to avoid sharp boundaries, e.g. with the DVD micrograph which does not have as well defined boundaries). I then performed the colorization to each layer that I needed to.


Small Hairs on Fly

Broken Hair/Antenna on Fly, Between Eyes

Fly's Eye

Fly's Head, Body in Background

DVD Pits

Ladybug Hairs/Cilia


3    Anaglyph Creation

In order to properly view the following images, a pair of 3-D glasses must be worn (with the red filter on the left eye and the blue filter on the right eye). The anaglyphs look much better at full resolution. In some cases, you may need to work a little with your eyes to get the red and blue images to fall on top of each other. Especially at the higher magnifications, it was very difficult to keep the tilt, and therefore the image shift, small. Most of the low-magnification anaglyphs require very little eye strain to view properly.

Fly's Head

Hairs on Ladybug

Grooves of Record

Ladybug on its Back

Ladybug's Claw

Fly's Back

Hair on a Fly


Upside Down Ladybug

The basic premise behind the creation of anaglyphs is to record two images at slightly different tilt angles and display them in a way that allows our eyes to recombine them as one image. The most challenging part of this was finding the eucentric working distance, that is, the working distance at which the axis of rotation of the stage intersects the center of the sample. To find this point, I first returned the sample stage to its center point (RECALL + 8) and recorded these coordinates (X = 37500, Y = 37500). In the center of the sample stage there was an unused stub holder. As I raised the sample stage, Brian tilted the stage from side to side; the stub hole appeared to move less and less to the side as we decreased the working distance. We stopped raising the stage at the point where the stub hole appeared to stay in one spot even as we tilted the stage. I then focused on the surface of the sample stage and recorded the working distance (~26 mm): this is the eucentric working distance. To move to a sample of interest (which was not at the same vertical height as the stub hole), I simply lowered the stage and focused using the z-axis controls (maintaining constant current through the final condenser lens). It is important to keep the sample stage centered left to right (Y = 37500), but it can move in and out (X can change) and stay on the tilt axis; therefore, a combination of in/out movement and stage rotation may be necessary to bring the sample of interest centered under the electron beam. It is also important to apply a 90 degree scan rotation so that when the sample is tilted the image shifts left to right. The first image can be recorded at zero tilt, and a tilt of only a few degrees is sufficient before capturing the second image. It may be beneficial to "bracket" the tilt and capture 3-4 images at tilt intervals of 1.5-3 degrees. In most cases, I tilted the second image less than 5 degrees with respect to the first.
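The tilt angle matters because feature height maps directly into parallax between the two images. As background (this relation is not from the original project, but is the commonly quoted SEM stereometry formula), the height difference is z = p / (2 M sin(α/2)), where p is the parallax measured on the micrographs, M the magnification, and α the tilt difference. A small sketch with hypothetical numbers:

```python
import math

def height_from_parallax(parallax_mm, magnification, tilt_deg):
    """Estimate feature height from stereo-pair parallax.

    Uses the standard SEM stereometry relation
        z = p / (2 * M * sin(alpha / 2)),
    where p is the parallax measured on the micrograph, M the
    magnification, and alpha the tilt between the two images.
    """
    alpha = math.radians(tilt_deg)
    return parallax_mm / (2 * magnification * math.sin(alpha / 2))

# Hypothetical example: 0.5 mm of measured parallax at 100x with 5 degrees
# of tilt between the two images
z = height_from_parallax(0.5, 100, 5.0)
print(f"{z * 1000:.1f} um")  # ~57.3 um of relief
```

The formula also shows why small tilts are forgiving: for a few degrees, sin(α/2) ≈ α/2, so parallax grows roughly linearly with tilt, and a tilt that is too large produces more image shift than the eyes can comfortably fuse.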

A different technique was needed for the record groove sample, the ladybug on its back, and the anaglyph of the entire fly. Without mounting these samples vertically, the only way to image them with the perspective I wanted was to tilt them 90 degrees. Therefore, I simply swapped the functions of the sample tilt and stage rotation mechanisms: I tilted the sample to get the viewing angle I desired and used the stage rotation to produce the tilt between the two images. A scan rotation may or may not be necessary so that rotating the sample stage causes the image to shift left to right. Even though it was impossible to have the sample on the axis of rotation, good anaglyphs could still be constructed. I should have placed the sample in the center stub holder (which I did not realize until later), but good results were still obtained with the sample in the outer stub holders. I rotated the sample stage the smallest amount I could, so that a feature on the CRT screen moved about 0.5 cm. Even this small amount was too much, and I had to shift the images slightly in Photoshop to effectively reduce the tilt and the amount of work our eyes need to do to recombine the images.

After capturing the images, they need to be somehow added together such that our eyes can reconstruct them as one. One way to do this is to place the images next to each other and cross your eyes until the images fall on top of each other (similar to the "Magic Eye" stereograms). This is very difficult to do unless the images are very small and close together. An alternative method is to color one of the images red and the other blue, superimpose the images, and view the resulting image with 3-D glasses so that the left eye sees only the red image and the right eye sees only the blue image.

In order to do this, I used an excellent guide written by Bob Anderhalt from the EDAX Applications Laboratory. The procedure was written for Adobe Photoshop, although other professional image editing software suites (such as Ulead's PhotoImpact) should be able to do the same thing.

  1. Open both the left and right images. The image with lower tilt will be the red image and the image with higher tilt will be the blue image. If you forgot which is which, an easy way to figure it out is to make the images very small (~1 inch), and cross your eyes until they are superimposed on each other. Only one arrangement of the two images will result in a 3-D image.
  2. Click on the title bar of the blue image. Click on the Image menu; select Duplicate... and click OK (using the default file name is fine).
  3. Locate the Channels tab on the Layers/Channels/Path window. With the Channels tab selected, click the triangle in the upper right corner, and select Merge Channels. Select RGB from the pulldown menu and set the number of channels to three.
  4. In the next dialog box, select the left image (lower tilt) for the red channel, the right image (higher tilt) for the green channel, and the duplicate image for the blue channel.
  5. Check the result with 3-D glasses! If you don't see a 3-D image, you may need to either relax or cross your eyes a little bit to force your brain to superimpose the images. If the images are superimposed but you don't see a 3-D image, you may have the two images switched. If that doesn't work, you may not have captured the images correctly.
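The channel-merge recipe above can also be reproduced outside Photoshop. Assuming the two micrographs are available as same-sized grayscale arrays, the sketch below (Python/NumPy, not the Photoshop procedure itself) builds the same red/cyan composite: the lower-tilt image goes to the red channel, and the higher-tilt image goes to both green and blue, which is exactly what the duplicate accomplishes in steps 2-4.

```python
import numpy as np

def make_anaglyph(left_gray, right_gray):
    """Combine two grayscale stereo images into a red/cyan anaglyph.

    The lower-tilt (left-eye) image fills the red channel; the
    higher-tilt (right-eye) image fills both the green and blue
    channels, mirroring the Photoshop Merge Channels procedure.
    """
    left = np.asarray(left_gray, dtype=np.uint8)
    right = np.asarray(right_gray, dtype=np.uint8)
    if left.shape != right.shape:
        raise ValueError("stereo pair images must be the same size")
    return np.stack([left, right, right], axis=-1)

# Tiny synthetic example: a bright square shifted by 2 pixels of parallax
left = np.zeros((8, 8), np.uint8);  left[2:5, 2:5] = 255
right = np.zeros((8, 8), np.uint8); right[2:5, 4:7] = 255
ana = make_anaglyph(left, right)
print(ana.shape)  # (8, 8, 3)
```

Viewed through red/cyan glasses, the red filter passes only the left image and the cyan filter only the right one, so each eye receives its own view.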

4    Image Processing Kernels

Image processing is very important and useful for enhancing images, both for purely aesthetic purposes and to bring out information that may not be readily seen in the original image. With the SEM micrographs I obtained, the image processing was performed mostly to improve the aesthetics of the images and to bring out as much detail as possible.

The following figure relates several frequency-domain filters to their corresponding spatial-domain kernels. The image processing I will be doing consists of matrix convolutions; no Fourier transforms will be computed. Lowpass, highpass, and averaging filters will all be used in my analysis.

To represent these filter functions exactly, they would need to be sampled at many points, and when applied to an image, a filter acts on as many pixels as there are sample points in the kernel. Therefore, small 3 x 3 or 5 x 5 kernels are usually used, which only approximate the exact function describing the filter. Often, slight tweaking of the values is necessary anyway to get the results one desires. Examples of some common filters are given below:

Note that in each case the entries of the matrix sum to one, so that the brightness of a uniform region remains unchanged.
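For concreteness, here are representative 3 x 3 kernels of each type (typical textbook values, not necessarily the exact matrices pictured above), together with a check that each sums to one:

```python
import numpy as np

# Representative 3 x 3 kernels (typical textbook values; the exact
# matrices used in this project may differ slightly).
lowpass = np.full((3, 3), 1 / 9)            # averaging / blur filter

sharpen = np.array([[-1, -1, -1],           # standard sharpen kernel:
                    [-1,  9, -1],           # identity plus a highpass,
                    [-1, -1, -1]], float)   # so the entries sum to 1

# Each kernel sums to 1, so convolving a constant region leaves it unchanged.
for name, k in [("lowpass", lowpass), ("sharpen", sharpen)]:
    print(name, round(k.sum(), 6))
```

A pure highpass kernel by itself sums to zero; adding it to the identity kernel (all zeros with a 1 in the center) yields a sharpening kernel whose sum is one, which is the form shown here.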

To show an example of these filters, I will perform them on a micrograph of record grooves.

Original Image

3 x 3 Sharpening Filter

5 x 5 Sharpening Filter

3 x 3 Sharpening Filter and Blur Filter

The filters I used on the above images were modified versions of the general sharpen and blur filters. If I had used the sharpening filter above, with a central value of 9, too much sharpening would have occurred and too much noise would have been introduced. The standard sharpen kernel was applied to the original image; the result is below:

Therefore, I modified the sharpen filter to have a value of 3 in the middle (and -0.25 in the surrounding entries, so that the matrix still sums to 1.0); as shown above, the result was a much more suitable amount of sharpening - just enough to bring out the in-focus middle section of the micrograph. When the sharpened image was blurred with a lowpass filter, much of the noise was reduced, giving the image in the lower right-hand corner. The lowpass filter was very similar to the example given, with some tweaking to optimize the results.
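The center-3 kernel described above can be written out directly. A Python/NumPy sketch of the sharpen-then-blur sequence (not the exact MatLab session used, and with a generic 3 x 3 averaging filter standing in for the tweaked lowpass) looks like this:

```python
import numpy as np

# Modified sharpen kernel: center value 3, -0.25 elsewhere, so the
# entries sum to 3 - 8*0.25 = 1.0 and mean brightness is preserved.
gentle_sharpen = np.full((3, 3), -0.25)
gentle_sharpen[1, 1] = 3.0

# Generic 3 x 3 averaging (lowpass) kernel to suppress the noise that
# sharpening amplifies; the actual project filter was tweaked slightly.
lowpass = np.full((3, 3), 1 / 9)

def apply_kernel(img, kernel):
    """Correlate with the (symmetric) kernel, replicating edge pixels,
    then clip the result back into the 0-255 range."""
    p = kernel.shape[0] // 2
    padded = np.pad(img.astype(float), p, mode='edge')
    windows = np.lib.stride_tricks.sliding_window_view(padded, kernel.shape)
    out = np.einsum('ijkl,kl->ij', windows, kernel)
    return np.clip(out, 0, 255).astype(np.uint8)

# Synthetic test image: a soft left-to-right gradient
img = np.tile(np.linspace(0, 255, 16), (16, 1)).astype(np.uint8)
sharpened = apply_kernel(img, gentle_sharpen)
smoothed = apply_kernel(sharpened, lowpass)
print(img.shape, sharpened.shape, smoothed.shape)
```

Because both kernels are symmetric, correlation and true convolution give the same result, and because both sum to one, a uniform region passes through either filter unchanged.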

All the image processing was done in MatLab using only a few lines of code. Although image processing may be easier in a program such as Adobe Photoshop, it was much more instructive for me to use MatLab since I could see the effects of different numerical filters. With the image processing toolbox in MatLab, the following code will allow one to process an image using a specified filter:

H = [-1 -1 -1;          % specified image processing kernel
     -1  9 -1;          % (standard 3 x 3 sharpen)
     -1 -1 -1];
sI = imfilter(I, H, 'replicate');   % applies the convolution; 'replicate'
                                    % pads the border by repeating edge pixels,
                                    % and uint8 output is clipped to 0-255
figure; imshow(sI); title('Sharpened Image, 3 x 3 Kernel')   % displays the result

In the next example, I show the difference between simply sharpening an image and first applying a noise reduction filter followed by the same sharpening filter. The noise reduction filter used here is a median filter - that is, it replaces the central value with the median value in an n x m neighborhood (3 x 3 is the most common). Instead of averaging bad or noisy pixels into the rest of the image, it simply eliminates them. This is a good operation to perform before sharpening so that the sharpening filter does not magnify the noise. In MatLab, a median filter can be run using the function medfilt2. It is time consuming since it is not a simple matrix convolution; it must compute the median of 9 numbers for every pixel in the image.
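The median filter's behavior on an isolated noise pixel is easy to demonstrate. A small sketch (Python/NumPy with a hand-rolled 3 x 3 median rather than MatLab's medfilt2, for self-containedness):

```python
import numpy as np

def median3x3(img):
    """3 x 3 median filter: replace each pixel with the median of its
    neighborhood. Edges are handled by replicating border pixels."""
    padded = np.pad(img, 1, mode='edge')
    windows = np.lib.stride_tricks.sliding_window_view(padded, (3, 3))
    return np.median(windows, axis=(2, 3)).astype(img.dtype)

# Uniform image with one 'hot' noise pixel: the median filter removes it
# entirely instead of averaging it into its neighbors.
img = np.full((7, 7), 50, dtype=np.uint8)
img[3, 3] = 255
clean = median3x3(img)
print(clean[3, 3])  # 50: the outlier is eliminated
```

An averaging filter would instead smear the 255 into the surrounding pixels; the median discards it outright, which is why running it before a sharpening kernel keeps the sharpening from amplifying shot noise.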

Original Image (CD pits)

3 x 3 Sharpening Filter

Median Filter

Median Filter and 3 x 3 Sharpening Filter

The difference between the final and original images is very subtle, but some of the fine structure on the pits is brought out by the noise reduction/sharpening filter.


5     Design of Imaging Optics for TEM - Design Study In Progress

Currently, the Institute of Optics' Transmission Electron Microscope (TEM) can only record micrographs onto photographic film; digital image acquisition is not possible. Since the TEM is under high vacuum, a digital camera is most easily integrated via one of the available window ports. Because of the harmful x-rays emitted by the TEM, the viewing windows must be made of thick (at least 0.75 inch) leaded glass; one such glass is Schott F2. However, if one attempts high-resolution imaging through a plane-parallel 0.75 inch thick window, image quality suffers greatly. There are several workarounds: 1) the imaging optics can be designed to compensate for the spherical aberration induced when a converging beam of light travels through the plate; 2) an additional, identical plane-parallel plate of glass can be inserted into the imaging system to compensate for the spherical aberration; or 3) the window can be shaped as an aplanatic surface, i.e., one whose spherical aberration contribution is zero. The surface contribution to spherical aberration is given by the equation:
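(The original equation appears as an image and is not reproduced here. As background, one standard Seidel form of the per-surface spherical aberration contribution - an assumed stand-in for the notation the author used - is:)

```latex
% Seidel spherical aberration contribution of a single refracting surface
% (Welford-style notation; an assumption, since the equation image is missing):
S_I = -A^2 \, h \, \Delta\!\left(\frac{u}{n}\right),
\qquad A = n\,i = n'\,i'
% S_I vanishes when the refraction invariant A = 0 (normal incidence, i = 0)
% or when \Delta(u/n) = 0 (the aplanatic condition).
```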

The two physically possible ways to make this contribution zero are to make the angle of incidence zero at one surface and, at the other surface, to make the angle of incidence equal to the angle of refraction. In practice, a designer would usually not choose to make the angle of incidence zero at a surface because retroreflected rays could cause problems later in the system.

The specifications of the system are as follows:

Design Type:                   Aplanatic Window
Wavelength:                    550 nm +/- 15 Å
Field of View:                 35 mm square (49.5 mm diagonal; 25 mm HFOV)
f/# (Infinite Conjugates):     f/3
f/# (Actual Conjugates):       f/6 (1:1 system)
Object NA:                     0.1667 (a consequence of the f/# specification)
Scintillator–Window Distance:  152.4 mm
Material:                      Schott F2; 19.05 – 38.10 mm thick
Additional Comments:           Scintillating plate can be curved to compensate for field curvature

Because the aplanatic element must be designed not only for the on-axis ray bundle but also for off-axis rays, it deviates slightly from a perfectly aplanatic condition. Shifting the stop away from contact with the lens also slightly modifies the shape of the element.

A Micro-Nikkor lens will image the virtual source leaving the aplanatic window onto a CCD.  

The design is shown below, with a perfect lens focusing the diverging beam. The scintillating plate would be on the left side of the drawing below and would be imaged to a CCD on the far right side.

Assuming a perfect lens (Micro-Nikkor lenses are very high-quality diffraction-limited lenses), the overall system is diffraction limited (composite wavefront error is < 1/40 wave). One major issue is that the required lens diameter for the perfect lens is nearly 80 mm, which is larger than that of the Micro-Nikkor lens. Therefore either the scintillating plate - camera distance must be reduced or the numerical aperture must be reduced. The tradeoffs of this - as well as the possibility of other designs (solid Schmidt, hypergon) - will be studied in the future.

6    Summary

Summary of the techniques used in my project:

  1. Secondary Electron Imaging
  2. Sputter Coating
  3. Critical Point Drying
  4. Anaglyph Formation & Component Image Capture
  5. Micrograph Colorization
  6. Lens Design for TEM Digital Camera

In terms of microscope techniques, one of the biggest things I realized during this project was the drastic effect sample tilt can have on images. The most striking differences are with the record groove sample. The first image shown below is imaged from directly above; the second image is with the sample tilted such that the tilt axis is parallel to the length of a groove; the final image is with the tilt axis perpendicular to the grooves. The images are almost unrecognizable as being micrographs of the same sample!

Imaging from Directly Above - Darkest area is top of groove (uncut material)

Imaging from an angle - Medium gray areas are the top (uncut area);
darkest and lightest gray areas are the 45 degree cuts.

It's finally clear what the grooves actually look like!

NOTE: Higher-resolution record groove images are available here: link to images. Please attribute them properly if you will be publishing them (e.g., "Image courtesy of University of Rochester: URnano").

Several techniques were discussed to improve the visual appeal and the information content available from SEM micrographs. Through the application of image processing kernels and anaglyph formation, much more information can be obtained than from the original micrograph alone. Through colorization, micrographs become drastically more visually appealing, allowing the viewer to quickly and easily see the important parts of the images and to separate like features.

I was extremely pleased with how well the anaglyphs turned out. Although I don't believe we were ever exactly at the eucentric point, in my opinion good quality anaglyphs were produced at various (but all greater than 20 mm) working distances.

I've realized that some images can look great after being altered in an image processing program. Some images look great untouched though; here are some of my favorites with absolutely no post-processing done on them:

Many, many thanks to Brian McIntyre for his help and support with this project and for encouraging us to try to make anaglyphs.

Also thanks to Neil Anderson (course TA) and Jim Zavislan (advisor for optical design work) for their help.
