Part 1: The Nature and Description of Light

False-color strain map from phase-shifting speckle interferometry. Courtesy of F. Lanza, S. S. Hong, G. Cloud.

Light is a form of energy that is marked by two characteristics:

  • It moves—when it ceases to move, it is no longer light.
  • It carries a wealth of information.

Apart from the question of exactly what light energy is, we need to describe and predict its:
  • creation
  • propagation
  • interactions with materials

We have two experience-based systems to describe light:
  • quantum mechanics
  • electromagnetic wave theory

Quantum mechanics tells us that light consists of photons that have characteristics of both waves and particles. This approach facilitates explanation of phenomena such as:
  • photoelectricity
  • lasers
  • photography

Electromagnetic theory teaches that light consists of energy in the form of electromagnetic waves. This approach explains:
  • refraction
  • interference
  • diffraction

Electromagnetic theory is sufficient for most of our purposes. Maxwell’s equations describe the behavior of electromagnetic waves by relating the:
  • wave vectors
  • field quantities
  • material properties

For our applications, Maxwell’s equations reduce to the wave equation. The simplest solution to the wave equation is sufficient for our purposes. It is a harmonic plane wave traveling along an axis. The important wave data are:
  • strength
  • polarization
  • wavelength
  • direction
  • speed

Part 2: Interference of Light Waves

Interference fringes in the cornea of a human eye obtained by reflection photoelasticity. Photo provided by the late Dr. Joseph Der Hovanesian, 1966.

Interference of two light waves is one of the cornerstones in the application of optics to measurement. To use light in measurement, we must be able to determine the phase difference between two light waves. Our eyes and other detectors cannot detect phase, only intensity. Interference converts phase data, which we cannot see, to intensity information, which we are able to quantitatively detect. The process by which we obtain and interpret the result is common to all interferometry applications. We add together the electric vectors for two waves that are identical except for a phase lag. The result is another wave whose amplitude depends on:

  • the amplitudes of the original waves
  • the wavelength of the light
  • the phase lag between the original waves

We can detect only the intensity (irradiance), which is the square of the amplitude. We find a cosine-squared relationship between the intensity and the phase lag. Success! Phase difference, which we cannot see, has been converted to an intensity variation that we can see and measure. Thus, by measuring intensity, we can determine the phase lag, provided we know the wavelength. This relationship between intensity and phase difference is the basis of all methods of interferometric measurement. A problem is that, for a given intensity, the phase lag is not single valued. Additional data are required.
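As a minimal numerical sketch of this conversion (the amplitude and the sample phase lags are illustrative assumptions, not values from the text):

```python
import numpy as np

# Two waves of equal amplitude a, identical except for a phase lag delta.
# Superposition gives amplitude 2*a*cos(delta/2), so the measurable
# intensity follows a cosine-squared law: I = 4*a**2 * cos(delta/2)**2.
a = 1.0                                # amplitude of each wave (arbitrary units)
delta = np.linspace(0, 4 * np.pi, 9)   # sample phase lags, radians
intensity = 4 * a**2 * np.cos(delta / 2)**2

for d, i in zip(delta, intensity):
    print(f"phase lag = {d:5.2f} rad  ->  intensity = {i:4.2f}")
# Note that several phase lags map to the same intensity, which is why
# additional data are needed to recover phase uniquely.
```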

Part 3: Path Length and the Generic Interferometer

Micro-photoelasticity interferogram showing the stress state near a single glass fiber embedded in an epoxy matrix after fiber rupture. Courtesy of Dr. Pedro Herrera-Franco and Dr. L.T. Drzal.

Phase difference is a function of optical path length, which depends on:

  • physical distance
  • the speed of travel of the wave

Wave speed in a medium is described by its refractive index, which is the ratio of the speed of light in vacuum to the speed in the medium. Optical path length is the product of the physical path length and the index of refraction. The path length difference (PLD) between two waves is:
  • the main quantity of interest in interferometric measurement
  • the difference between two optical path lengths

PLD depends on:
  • refractive index of the surrounding medium
  • refractive indexes of materials in either or both of the optical paths
  • physical distance that each wave travels

Most optical methods of measurement in experimental mechanics involve the measurement of PLD using interference. The ‘‘generic interferometer’’ is a unifying conceptual model of the process of interferometric measurement that converts PLD to intensity. The components of all interferometric setups include:
  • light source
  • beam splitter
  • two optical paths
  • beam combiner
  • device to measure intensity

There are two main ways that interferometric measurements can be accomplished:
  • measure directly the path length difference between two paths
  • hold one path constant and measure the intensity change resulting from changes in the second path

An advantage of optical methods is that we can perform interferometry over a broad field to obtain a map of local PLDs. To extend interferometry to the whole field, we apply a multitude of interferometers operating in parallel. We modify the generic interferometer as follows:
  • add a beam expander near the source or separate beam expanders in each path
  • use a beam combiner that is broad enough to cover the field
  • replace the detector with an imaging device such as a camera
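A sketch of the path-length bookkeeping the generic interferometer performs; every number below (wavelength, indices, distances) is an assumed illustration:

```python
import numpy as np

# Optical path length (OPL) = refractive index * physical distance.
# The path length difference (PLD) between the two arms sets the phase
# lag, and hence the intensity, at the beam combiner.
wavelength = 633e-9                      # HeNe source in vacuum (assumed)
n_air, n_glass = 1.000, 1.515            # illustrative refractive indices
opl_1 = n_air * 0.100                    # arm 1: 100 mm of air
opl_2 = n_air * 0.098 + n_glass * 0.002  # arm 2: 98 mm air + 2 mm glass

pld = opl_2 - opl_1
phase_lag = 2 * np.pi * pld / wavelength
intensity = np.cos(phase_lag / 2)**2     # normalized two-beam interference
print(f"PLD = {pld*1e6:.3f} um, detector reads intensity = {intensity:.3f}")
```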

Part 4: Some Basic Methods of Interferometry

Three different types of fringes observed during the Lloyd’s mirror experiment. Bottom fringes are caused by diffraction at the mirror edge. Top fringes are from two-beam interference at about a 0.02 deg. crossing angle. The granular appearance is laser speckle caused by random interference. Argon laser. Courtesy G. Cloud, 2002.

Oblique interference:

  • means that two beams cross one another at some angle and interfere,
  • is fundamental in many techniques including moiré and holography.

A beam is a bundle of waves that are related by direction of travel and a common phase relation. In a plane-parallel beam, the waves travel along parallel axes and are ‘‘synchronized’’ to form plane wavefronts. Interference of crossing beams yields bright and dark patches according to the following rules:
  • where wave maxima cross maxima, a bright patch is produced,
  • where wave minima cross minima, a bright patch is produced,
  • where wave maxima cross wave minima, a dark patch is produced.

For small beam crossing angles:
  • the horizontal spacing of bright patches is less than the wavelength of light and cannot be resolved,
  • the vertical spacing of bright patches is several times the wavelength and can be resolved,
  • alternating light and dark layers (like slats) fill the crossing volume,
  • a screen placed in the crossing volume shows a pattern of dark and light bands.

Changing PLD, crossing angle, or wavefront profile affects the position, spacing, or shape of the layers. As PLD changes by one wavelength, light intensity goes through one light-dark cycle. This is a fringe cycle. Successive fringe cycles can be numbered consecutively. These are fringe orders. For general whole-field interferometry, the light-dark distribution of intensity forms a random interference pattern. If the fringe pattern is caused by a spatially continuous process, then points in the interference pattern having common PLD join to form continuous bands called interference fringes. Interference fringes are loci of points having common PLD. They are seen as loci of points giving constant intensity. A picture showing several interference fringes is a fringe pattern. Lloyd’s mirror is a simple experiment that illustrates oblique interference of two beams and the relations between PLD and fringe order. Lloyd’s mirror is an example of interferometry based on wavefront division.
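The spacing of the light-dark layers follows the standard two-beam oblique-interference result, spacing = wavelength / (2 sin(alpha/2)) for full crossing angle alpha; the wavelength below (argon-ion, as in the photo) is an assumption:

```python
import numpy as np

# Fringe (layer) spacing for two plane beams crossing at angle alpha.
wavelength = 514e-9                          # argon-ion line (assumed)
for alpha_deg in (0.02, 0.1, 1.0, 10.0):
    alpha = np.radians(alpha_deg)
    spacing = wavelength / (2 * np.sin(alpha / 2))
    print(f"crossing angle {alpha_deg:6.2f} deg -> fringe spacing {spacing*1e6:10.1f} um")
# Small crossing angles give widely spaced, easily resolved fringes;
# large angles give spacing near the wavelength, too fine to resolve.
```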

Part 5: A Classic Interferometry: Newton’s Rings

Newton’s Egg: Newton’s fringes in a film of oil on water; 3-D illusion caused by the meniscus. Digital photograph by Gary Cloud, Dec. 2002.

Newton’s rings are all around us and may be seen:

  • in a film of oil on water or glass
  • when glass is pressed onto a glossy photo
  • at the interface of a crack in clear plastic.

Newton’s fringes:
  • are caused by interference between waves that are reflected from two surfaces that are separated by a small gap
  • were noticed long before Newton
  • were explained by Newton using his particulate theory of light.
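A minimal sketch of the gap-to-fringe relation at normal incidence (the wavelength and the air-gap index are assumed; the half-wave reflection phase shift offsets the sequence but not the spacing, and the full relation involving angles appears below):

```python
# Each Newton's fringe cycle corresponds to a gap change of
# wavelength/(2*n) between the two reflecting surfaces.
wavelength = 589e-9   # sodium-like illumination (assumed)
n_gap = 1.0           # air gap
for m in range(5):
    t = m * wavelength / (2 * n_gap)
    print(f"fringe order {m}: gap change = {t*1e9:6.1f} nm")
```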

Fringe order is a function of:

  • wavelength
  • gap between the surfaces
  • index of refraction
  • angle of incidence of light
  • angle of observation.

Extension to a large field:
  • is accomplished using lenses and a partial mirror
  • creates a full-field fringe pattern
  • yields a contour map of the gap between the surfaces.

Newton’s fringes are an example of ‘‘interferometry by amplitude division.’’ The quantitative interpretation of Newton’s fringes is the same as for fringes found by basic forms of other techniques, including:
  • holographic interferometry
  • speckle interferometry

Part 6: Another Classic Interferometry: Young’s Experiment

Part 7: Colored Interferometry Fringes

Detail from light-field photoelastic interferometry fringe pattern obtained using simultaneous illumination from two laser sources: Argon at 488 nm (turquoise) and Helium-Neon at 633 nm (red). The granular appearance is caused by laser speckle. Photo by Gary Cloud, March 2003.

  • Why do we see a brilliant pattern of colored interference fringes in some experiments?
  • What do these colored patterns mean?
  • What are their uses?

We create a thought experiment:
  • involving some type of interferometer
  • a choice of red or blue illumination is provided
  • the PLD is controllable
  • the intensity detector output is plotted as a function of PLD for each wavelength.

The detector output shows that the intensities for red and blue oscillate between zero and maximum at different rates, depending on the wavelength used for illumination. The experiment is then performed with the following changes:
  • combined red and blue illumination is provided
  • a color sensor is substituted for the intensity detector.

The sensor yields a resultant color that is created by the mixture of red and blue components remaining at each PLD. If color is plotted with increasing PLD, we will see a sort of color spectrum that is fairly complex even though only two wavelengths were used in the experiment. As PLD increases from 0 to 1320 nm, the sequence of colors is:
  • black slowly shading through purple to red
  • red shading quickly through purple to blue
  • blue shading quickly through purple to red
  • red shading slowly through purple to black, where minima of both blue and red are reached
  • the color cycle repeats for each successive segment of 1320 nm PLD.
  • for each black-to-black segment, we pass through one stage of pure blue, two stages of pure red, and four stages of purple shades.

Our usual concept of fringe counting does not apply to colored fringe patterns:
  • we must count in terms of the cycles of a certain color
  • for some colors, the cycles are not evenly spaced.

If more than one wavelength of illumination is used in a whole-field interferometer, then we observe a field of colored fringes. If several wavelengths are used, predicting the resultant colors over a range of PLDs becomes quite difficult. For continuous spectrum illumination in an interferometer:
  • the fringe colors are the complement of whatever colors destructively interfere at any PLD
  • for PLDs beyond the first cycle, predicting and interpreting the color sequence is difficult
  • the color sequence is no longer repetitive or periodic
  • for large PLDs the various colors combine in ways that cause the saturation to diminish, producing pastels that eventually tend to white—no colors.

Fringe colors depend on:
  • the spectral content of the source
  • the transmittance spectra of the optical components and specimen
  • the color rendition or accuracy of the observing device.

Colored interferometric fringes:
  • are not often used for quantitative analysis by themselves
  • help us in fringe counting by showing the fringe gradient
  • help with interpolation between fringes.

Colored fringe information is now put to good effect in electronic forms of interferometry including, for example, RGB photoelasticity.
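A sketch of the two-wavelength thought experiment; red = 660 nm and blue = 440 nm are assumed (consistent with the 1320 nm repeat period), along with a dark-field setup so that PLD = 0 is black:

```python
import numpy as np

# Intensity of each color cycles once per wavelength of PLD; the mixture
# repeats every LCM(660, 440) = 1320 nm.
red_wl, blue_wl = 660e-9, 440e-9
pld = np.linspace(0, 1320e-9, 13)
i_red = np.sin(np.pi * pld / red_wl)**2    # dark-field: zero intensity at PLD = 0
i_blue = np.sin(np.pi * pld / blue_wl)**2

for d, r, b in zip(pld, i_red, i_blue):
    print(f"PLD = {d*1e9:6.0f} nm  red = {r:4.2f}  blue = {b:4.2f}")
# Both colors are dark together only at multiples of 1320 nm, giving the
# black-to-black color cycle described above.
```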

Part 8: Michelson Interferometry

Newtonian Paisley: Newton’s fringes in a film of oil on water. Digital photograph by Gary Cloud, Dec. 2002.

Michelson interferometry:

  • is important in the history of physics and engineering
  • teaches the behavior of light
  • is the basis of many measurement techniques.

Michelson invented the interferometer and used it in:
  • determining effects of velocity of observer on speed of light
  • investigating the structure of spectral lines
  • defining the standard meter.

Twyman and Green, among others:
  • converted the Michelson device to large-field
  • greatly expanded the usefulness of this type of interferometry
  • used their device to measure the profiles of lenses and mirrors.

Michelson interferometry and its variants are examples of ‘‘amplitude division’’ interferometry. For a perfectly square setup, the PLD between any two waves arriving at the viewing screen is twice the difference between the distances from the beam splitter to each of the mirrors or test surfaces. Irradiance at any point on the viewing screen depends on the PLD between the waves arriving at that point. If one or both of the mirrors or test surfaces are tilted:
  • the waves arriving at the screen meet obliquely
  • they interfere according to the rules for oblique interference
  • a pattern of parallel fringes is formed
  • fringe spacing indicates the relative tilt of the mirrors.

If one of the mirrors is then moved, the fringes translate across the screen. The Michelson interferometer:
  • is a differential measuring device
  • compares two physically separate paths
  • generates fringes that are loci of constant PLD (Fizeau fringes)
  • yields the difference in contours and/or relative motions of the mirrors.

Since this interferometer compares separate paths, it tends to be affected by vibrations. Careful setup and isolation are required. Common-path interferometers:
  • compare PLD between waves that follow the same physical paths
  • are resistant to vibrations
  • include, among others:
    • photoelasticity
    • Newton’s fringes
    • many shearographic techniques

Separate-path interferometers:
  • Compare PLD between waves that follow different physical paths
  • are susceptible to vibrations
  • include, among others:
    • Michelson interferometry
    • holography and holographic interferometry
    • most speckle interferometry
    • Lloyd’s mirror

Michelson interferometry is a paradigm for many optical measurement techniques, including:
  • holographic interferometry
  • most digital speckle interferometry
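A sketch of the mirror-motion fringe count that follows from the doubled path (source wavelength and motion are assumed values):

```python
# For a square Michelson setup, PLD = 2 * (mirror distance difference),
# so moving one mirror by d sweeps 2*d/wavelength fringes past a point.
wavelength = 633e-9          # HeNe source (assumed)
mirror_motion = 5e-6         # 5 um displacement of one mirror (assumed)
fringes_passed = 2 * mirror_motion / wavelength
print(f"{fringes_passed:.1f} fringe cycles for {mirror_motion*1e6:.1f} um of motion")
```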

Part 9: The Diffraction Problem

Portion of the diffraction pattern from a photographic replica of a crossed bar-space grating (grid) having spatial frequency 1000 lines/in. Argon laser, no enhancement. Orders visible to the eye range from -15 to +15 in both directions. Digital photo by Gary Cloud, Sept. 2003.

Diffraction is the second cornerstone of optical methods of measurement. Examples of diffraction of light waves are all around us, and may also be observed through simple experiments. Illuminate a hole (aperture) in a plate and examine the shadow on a screen. The shadow cast by the sharp edge will be found to be fuzzy and might exhibit interference fringes. If the aperture is made small,

  • the illuminated patch on the screen will be larger than the aperture
  • clear fringes might be observed near the shadow edge
  • we note an inverse relation between aperture size and expansion of the beam.

The pattern on the viewing screen depends on the type of intelligence (signal) in the aperture.
  • If the intelligence consists of two small apertures close together, then we observe Young’s fringes.
  • If a fine mesh is placed in the aperture, we observe an ordered array of bright dots on the screen.
  • The dot spacing is inversely related to the distance between the apertures or the threads of the mesh.

The pattern on the viewing screen also depends on intelligence carried by the beam. If the beam comes from an illuminated object and if the aperture is quite small, then,
  • an image of the object appears on the screen,
  • we have created a ‘‘camera obscura’’ or ‘‘pinhole camera.’’

The ability of a camera lens to render fine detail depends on the size of the aperture:
  • Relatively large apertures allow reproduction of finest detail.
  • Tiny apertures limit the detail that can be reproduced.
  • This behavior is contrary to conventional wisdom in photography.
  • A lens appears to be a tunable low-pass filter.
  • Lens aberrations modify these phenomena.

The ‘‘diffraction problem’’ is stated as follows. Light from a source illuminates an aperture in an opaque plate. Describe the light received at some point downstream from the aperture. Diffraction theory is fundamental in optics, electron microscopy, experimental mechanics, and other fields because it leads to:
  • understanding of image formation of optical systems,
  • ways to specify and test optical devices,
  • conception of apertures and lenses as Fourier transformers,
  • methods to take advantage of frequency response of systems,
  • ability to perform optical whole-field processing to modify frequency content of pictures.

For the experimental mechanician, diffraction theory is important in:
  • geometric moiré
  • moiré interferometry
  • holography and holointerferometry
  • speckle interferometry
  • speckle photography
  • shearography
  • other methods

The diffraction problem:
  • is more difficult than it appears to be,
  • has not been solved in generality,
  • was formulated in 1678 by Huygens, who incorporated a major simplifying assumption,
  • was solved by Fresnel and Fraunhofer in the form established by Huygens,
  • was reformulated and solved as a boundary value problem by Kirchhoff in 1882,
  • was later solved with more rigor by Kottler, Sommerfeld, and others.

Part 10: Complex Amplitude

Diffraction pattern created by passing laser light through a pinhole. HeNe laser. Central portion overexposed to show the first few off-axis rings. Digital photo by Gary Cloud, Dec. 2003.

We require a representation for light waves that is easier to use than the cosine-wave form for the electric vector. The cosine wave is first generalized to describe waves traveling in any direction. The cosine wave is then written in a form that contains:

  • the vector wave number, which specifies:
    • propagation direction
    • wavelength
  • the angular frequency of the radiation, which is related to:
    • wavelength
    • wave velocity

The cosine wave is converted to exponential form, and:
  • a phase term is introduced
  • the phase term is the magnitude of the vector wave number times the PLD
  • the part containing the optical oscillation frequency is dropped, since these frequencies are too large to be observed
  • The polarization specification is also dropped.

What is left is the ‘‘complex amplitude,’’ which contains, as a function of position:
  • amplitude of the wave
  • phase
  • wavelength
  • propagation direction

Intensity or irradiance is:
  • defined as twice the long-time average of the square of the amplitude
  • the quantity of interest since it is what we can measure.

Intensity at a point in the optical field is found to equal the square of the local amplitude. Intensity is also found to equal:
  • the local complex amplitude times its complex conjugate
  • the square of the modulus of the complex amplitude.

Optical calculations involve:
  • determining the complex amplitude field as waves interact with optical components and other waves
  • converting the complex amplitude to intensity distribution
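A minimal sketch of that calculation sequence, with assumed amplitudes and phases:

```python
import numpy as np

# Complex-amplitude bookkeeping: each wave is reduced to U = A*exp(i*phi);
# intensity is U times its complex conjugate, i.e., |U|**2.
A1, phi1 = 1.0, 0.0
A2, phi2 = 0.8, np.pi / 3        # illustrative amplitude and phase lag
U1 = A1 * np.exp(1j * phi1)
U2 = A2 * np.exp(1j * phi2)

U = U1 + U2                      # superpose complex amplitudes...
I = (U * np.conj(U)).real        # ...then convert to measurable intensity
print(f"intensity = {I:.3f}")    # = A1**2 + A2**2 + 2*A1*A2*cos(phi2 - phi1)
```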

Part 11: Diffraction Theory, Part I

Arcade: image of a shopping arcade taken using a pinhole camera. Note the extraordinary depth of field and sharpness obtained by ‘‘lensless photography.’’ Image by Mr. Andrew T. Smith, 2003.

We seek to relate the optical complex amplitude at a point to the complex amplitude field that surrounds the point. Restate the problem: given the complex amplitude on the surface of a vessel, what is the complex amplitude at any observation point P inside the vessel? Use Green’s theorem, which relates certain surface and volume integrals containing two functions that are defined in the vessel. The two functions are taken to be complex amplitudes of optical waves. Because the complex amplitudes are solutions of the wave equation, the entire volume integral portion of Green’s theorem vanishes. The second complex amplitude is taken to be a spherical wave centered at the observing point P. In order to evaluate the surface integral of Green’s theorem:

  • a second surface is taken to surround P
  • this surface is a sphere
  • the radius of the sphere is vanishingly small.

The surface integral is evaluated over the two surfaces, using the adopted spherical wavefront for the second function. Since the functions are well-behaved, the integral over the surface of the small sphere surrounding the observing point reduces to a constant times the complex amplitude at the point. The resulting integral relationship:
  • is known as the Helmholtz-Kirchhoff equation
  • relates the complex amplitude at an observing point inside a vessel to the values that the complex amplitude has on the surface of the vessel
  • is difficult to evaluate for practical problems.
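For reference, one standard textbook form of this result (the notation is an assumption: U is the complex amplitude, r the distance from the observation point P to the surface element, k the wave number, and n the surface normal):

```latex
U(P) = \frac{1}{4\pi} \oint_S \left[ U \,\frac{\partial}{\partial n}\!\left(\frac{e^{ikr}}{r}\right)
       - \frac{e^{ikr}}{r}\,\frac{\partial U}{\partial n} \right] dS
```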

Part 12: Diffraction Theory, Part II

Transmission Laue X-ray diffraction pattern as used to identify crystal structure and orientation. Courtesy of Dr. K. N. Subramanian, Michigan State University.

The Helmholtz-Kirchhoff integral is to be modified so that it can be applied with ease to useful diffraction problems. Kirchhoff greatly simplified the problem by assuming that the aperture is a hole in a vessel that is large, dark, and nonreflective, implying that:

  • The complex amplitude and its normal derivative are zero on the inside vessel surface.
  • There are no reflections or edge effects that modify the complex amplitude at the aperture.
  • The integrand of the diffraction integral is zero everywhere but in the region of the aperture.
  • The integral needs to be evaluated only over the extent of the aperture.

The positions of the source and receiving points are established with respect to an integration element ds in the aperture. A point source outside the vessel is assumed to illuminate the aperture with a spherical wavefront. The complex amplitude falling upon a receiving point inside the vessel is sought. If the source and receiving points are more than a few centimeters from a relatively small aperture, then certain terms can be dropped from the integral. These simplifications reduce the general diffraction integral to the Kirchhoff integral, also called the Fresnel-Kirchhoff formula, giving the complex amplitude anywhere in the region downstream from a diffraction screen that is illuminated by a point source. The Kirchhoff diffraction integral:
  • can be evaluated for certain simple cases,
  • is difficult to evaluate for problems of practical importance,
  • must be simplified further through development of some approximations,
  • needs to be modified to account for the case where some intelligence, such as a transparency, is placed in the aperture.
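One common textbook statement of the Fresnel-Kirchhoff formula, for reference (notation and sign convention are assumptions and vary among texts: A is the source strength, r and s the distances from the aperture element to the source and observation points, and (n, r), (n, s) the angles each makes with the aperture normal):

```latex
U(P) = -\frac{iA}{2\lambda} \iint_\Sigma \frac{e^{ik(r+s)}}{rs}
       \left[\cos(n, r) - \cos(n, s)\right] d\Sigma
```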

Part 13: Diffraction Theory, Part III

Family Portrait: Hubble Space Telescope NICMOS image of NGC 2264 IRS, mother star and baby stars in the Cone Nebula. The rings and spikes emanating from the image form diffraction patterns that demonstrate near-perfect optical performance of the camera. Portion of image no. STScI-PRC1997-16. Image by R. Thompson, M. Rieke, and G. Schneider of University of Arizona and NASA.

The purposes of this article are to:

  • incorporate a signal that might be included in the aperture,
  • simplify the diffraction integral for practical applications.

A transmittance function is included in the diffraction integral:
  • The complex amplitude exiting the aperture is the complex amplitude from the source times the transmittance function.
  • It might be a complex function.
  • It can modify the phase or amplitude distributions, or both.
  • It is defined in local aperture coordinates.

To simplify the integral, certain limitations on the geometry are accepted. The Fresnel approximation:
  • assumes that the source and receiving points are ‘‘quite far’’ from the aperture,
  • assumes that the aperture is ‘‘quite small,’’
  • allows some location variables inside the integral to be replaced by constants,
  • greatly simplifies the integral.

The integrand is further modified by replacing the geometric factors in the exponential with equivalent series expansions. The diffraction integral is now much simpler, but it is still too unwieldy for practical applications. The Fraunhofer approximation:
  • requires that the source and receiving points are ‘‘very far’’ removed from the aperture,
  • requires the aperture to be ‘‘very small,’’
  • eliminates one of the difficult exponential expressions from the integrand,
  • places severe physical restrictions on the application of the integral,
  • reduces the integral to one that is easily evaluated for many applications,
  • proves very useful, even with its inherent restrictions.

The diffraction integral becomes the Fourier transform of the aperture function. Diffraction at an aperture decomposes optical information (e.g., a picture) into its constituent space-frequency components (e.g., lines per millimeter). Distance in a transform plane is proportional to spatial frequency in the aperture signal. An aperture is a physical Fourier transformer.
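A digital stand-in for this physical transform, assuming a one-dimensional slit; the far-field intensity computed by FFT follows the squared sinc profile expected analytically:

```python
import numpy as np

# Fraunhofer diffraction: the far-field complex amplitude is the Fourier
# transform of the aperture transmittance.  Numerical check for a slit:
N, width = 2048, 64                                 # samples; slit width in samples
aperture = np.zeros(N)
aperture[N//2 - width//2 : N//2 + width//2] = 1.0   # "rect" transmittance

far_field = np.fft.fftshift(np.fft.fft(np.fft.fftshift(aperture)))
intensity = np.abs(far_field)**2
intensity /= intensity.max()

# Offset from center in the transform plane corresponds to spatial
# frequency in the aperture plane; zeros fall at multiples of N/width.
center = N // 2
for k in (0, 16, 32, 48, 64):
    print(f"transform-plane offset {k:3d}: relative intensity {intensity[center + k]:.4f}")
```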

Part 14: Diffraction at a Clear Aperture

Diffraction pattern for a clear circular aperture, photographed on monochrome film with wide dynamic range so as to show the rings beyond the 7th order. Compare with the digital color picture shown in Part 10. Photo by G. Cloud.

The objectives are to:

  • calculate the diffraction pattern for two mathematically equivalent clear apertures:
    • a long narrow slit,
    • a circular hole,
  • compare the predictions with experiment,
  • explore some implications and applications of the result.

Collimated light is incident upon a plate containing a small hole or a slit. The complex amplitude at some remote point downstream is sought. Mathematically, the transmittance function for this aperture is the ‘‘rect’’ or ‘‘top-hat.’’ The complex amplitude at general observing point P is the Fourier transform of the transmittance function times a constant and an obliquity factor. These multipliers can usually be ignored. The Fourier transform is of the form (sin ax)/x, which is known as the ‘‘sinc function.’’ The intensity distribution is the square of the sinc function. For the circular hole aperture, the diffraction pattern will be a central bright patch surrounded by concentric light and dark rings that rapidly decrease in visibility with increasing distance from the center. The predictions compare well with laboratory observations. The central bright patch of the diffraction pattern:
  • is called the Airy disc,
  • provides a metric for predicting the resolution of optical systems,
  • is related to the size of laser speckles,
  • has many other applications,
  • has a size that is a function of:
  • aperture size
  • distance to the observing plane
  • wavelength.

The relation between aperture size and the breadth of the diffraction pattern is inverse:
  • Small apertures give a large central patch.
  • Wide apertures give a small central patch.

If the aperture is large, laboratory observations do not seem to agree with the prediction that the diffraction pattern should be small. We see the ‘‘shadow’’ of the aperture plate on a viewing screen. The problem is that, for the large aperture, the diffraction pattern must be observed several hundreds of meters away in order to satisfy the Fraunhofer restrictions.
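A sketch of the Airy-disc radius formula, r = 1.22 × wavelength × distance / aperture diameter, which quantifies this inverse relation (all values assumed):

```python
# Airy disc radius (first dark ring) for a circular aperture, with the
# observing screen at distance z.
wavelength = 633e-9     # HeNe illumination (assumed)
z = 1.0                 # aperture-to-screen distance, m (assumed)
for D in (25e-6, 100e-6, 1e-3, 25e-3):
    r = 1.22 * wavelength * z / D
    print(f"aperture D = {D*1e3:7.3f} mm -> Airy radius = {r*1e3:9.3f} mm")
# The 25 mm aperture yields a pattern only ~0.03 mm across at 1 m, so one
# sees what looks like the plain shadow of the aperture instead.
```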

Part 15: Fourier Optical Processing

Top: High-resolution phase contrast transmission electron micrograph of Ni3Al compound taken at wavelength 1.9 pm. The horizontal distance between the bright dots (atoms) is 2.2 nm. Center: Electron diffraction pattern from the same material formed in the back focal plane of the objective lens of the TEM. Bottom: A fast Fourier transform of the high-resolution image that replicates the general sense of the diffraction pattern, including the varying intensities associated with the lattice ordering. Photos provided by Dr. Martin A. Crimp, Michigan State University.

Objectives are to:

  • use a lens to force the optical Fourier transform to appear close to the aperture, even for large apertures,
  • use spatial filtering to modify the frequency content of an image.

The Fraunhofer limitation implies that the optical transform appears far away from the aperture. The distance might be several kilometers for a broad optical signal such as a moiré grating. A lens is placed in the system adjacent to the aperture, with these results:
  • the optical transform appears in the back focal plane of the lens,
  • the spatial frequency metric in the transform plane is changed.

Several different optical setups can be used:
  • The optical signal can be ahead of or behind the lens,
  • The light passing through the signal need not be collimated,
  • The spatial frequency metric depends on the setup.

Recall that:
  • distance from the center in the transform plane is proportional to spatial frequency at the input plane
  • local intensity in the transform plane is proportional to the amplitude of the corresponding spatial frequency component in the input.

Optical spatial filtering or Fourier optical processing is implemented by:
  • modifying the frequency content of the input signal by use of a filter placed in the transform plane,
  • using a second lens to create an inverse transform.

The inverse transform is an image that is:
  • formed with the light that passes through the spatial filter,
  • a replica of the input picture but now with its spatial frequency content modified.

Optical spatial filtering:
  • can remove unwanted obscuring or confusing details from a photograph,
  • makes sought information more visible,
  • improves signal-to-noise ratio,
  • cannot generate new information that is absent in the picture,
  • is often used with smoothing and blending techniques to improve photographs.

Applications include, for example:
  • removing raster scan lines from pictures,
  • enhancing photographic intelligence gathering,
  • increasing fringe visibility in interferometry,
  • multiplying moiré sensitivity,
  • controlling the frequency bandpass of a lens,
  • making cracks visible.

Fourier processing of pictures is now often done digitally, but the analog version is still useful and sometimes is the only option.
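A digital analog of this process, included to make the transform-filter-inverse-transform sequence concrete; the test image and the raster-line frequency are fabricated for illustration:

```python
import numpy as np

# Fourier processing: transform, mask unwanted spatial frequencies in the
# transform plane, inverse-transform.  Here horizontal raster lines (one
# vertical-frequency component) are removed from a synthetic picture.
ny, nx = 128, 128
y = np.arange(ny)[:, None]
image = np.random.default_rng(0).random((ny, nx))      # stand-in picture content
image = image + 0.5 * (1 + np.sin(2 * np.pi * y / 8))  # add raster lines, period 8 px

spectrum = np.fft.fftshift(np.fft.fft2(image))
mask = np.ones((ny, nx))
fy = ny // 8                                # frequency index of the raster lines
mask[ny//2 + fy, nx//2] = 0                 # block that frequency...
mask[ny//2 - fy, nx//2] = 0                 # ...and its conjugate

filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask)).real
print("row-mean ripple before:", round(image.mean(axis=1).std(), 4))
print("row-mean ripple after :", round(filtered.mean(axis=1).std(), 4))
```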

Part 16: The Optical Doppler Effect

An example of the optical Doppler frequency shift is provided by this photo of galaxy NGC 7673 in the constellation Pegasus. Two other galaxies are seen in the background. These galaxies are farther away and are receding faster, so they appear reddish owing to their greater Doppler red-shift. Photo from the Hubble Wide Field Planetary Camera. Courtesy of the European Space Agency and Nicole Homeier of the European Southern Observatory and University of Wisconsin-Madison.

Laser Doppler interferometry:

  • measures the velocities of objects,
  • uses interferometric observation of the change of frequency of light that is:
    • emitted by a moving source
    • recorded by a moving observer
    • reflected from a moving object
    • some combination of the above
  • is utilized in determining dynamic behavior of materials and structures.

The objective is to relate the Doppler frequency shift to the relative velocity of source and observer. The acoustic Doppler frequency shift that is caused by a moving sound source:
  • provides a good example of the Doppler effect,
  • is easy to detect aurally, as when listening to a train pass on a nearby track or hearing a moving bell.

The optical Doppler effect differs from the acoustic Doppler effect in that:
  • the perceived speed of light waves does not depend on the speed of the observer,
  • the frequency shift for a moving observer differs from that for a moving source.
  • However, the difference is not significant for ordinary terrestrial observations.
  • The explanation for the difference lies in the Theory of Relativity.

Space-time considerations show that if the source is moving, the wavelength of the emitted radiation will be shortened or lengthened, depending on the direction of the motion with respect to the observer. Hence, the frequency will be increased or decreased. The results suggest that observation of the shift of frequency from a moving source can be used to determine the velocity of the source. The Doppler shift in frequency is small relative to the large fundamental optical frequency for velocities encountered in typical engineering applications, so direct measurement of the change of frequency by measuring the original and final frequencies gives uncertain results. A differential measurement of the Doppler shift, wherein the frequency change is determined directly, is required. Interferometric comparison of the original light frequency with the changed light frequency allows accurate determination of the Doppler shift.
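A sketch of the magnitudes involved, using an assumed HeNe carrier and illustrative speeds, which shows why the differential (beat) measurement is needed:

```python
# First-order Doppler shift: delta_f ~ f0 * v / c, tiny against f0 itself.
c = 3.0e8                                 # speed of light, m/s
wavelength = 633e-9                       # assumed source
f0 = c / wavelength                       # ~4.7e14 Hz optical carrier
for v in (0.001, 0.1, 10.0):              # illustrative speeds, m/s
    df = f0 * v / c
    print(f"v = {v:6.3f} m/s -> shift = {df:12.1f} Hz out of {f0:.3e} Hz")
# Measuring f0 and f0 + df separately is hopeless; interferometric beating
# against the original wave measures df directly.
```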

Part 17: Laser Doppler Interferometry

Objectives are to:

  • learn how to measure Doppler shift using interferometry,
  • calculate the frequency shift for light that is reflected from a moving target,
  • explore some useful applications of Doppler interferometry.

Measurement of the Doppler frequency shift for typical velocities involves the determination of the small change in a large quantity. Interferometry is a simple and accurate way to measure the small Doppler frequency shift for optical applications in engineering. The Michelson interferometer:
  • is an excellent classical approach for measuring Doppler shift,
  • provides a paradigm for understanding measurements of this type,
  • interferometrically combines a portion of the original wave from the source with the wave reflected or scattered from the moving target.

Combination of the original beam with the Doppler-shifted beam yields:
  • three waves of such high frequency that they cannot be tracked by a detector and so are not useful,
  • one wave that oscillates at the beat frequency and so can be tracked by a detector to yield the Doppler shift.

The output of the Doppler interferometer is a frequency-modulated wave whose instantaneous frequency is proportional to the speed of the target. An alternative approach to interpreting the Doppler interferometer is to calculate the rate at which oblique interference fringes move across the detector when the target object moves. Introduction of a bias frequency shift:
  • facilitates determination of the sign of target motion,
  • eliminates some data processing artifacts,
  • creates a frequency-modulated carrier wave,
  • is implemented by introducing into the interferometer, for example:
  • motion of the reference mirror,
  • a rotating diffraction grating,
  • a Bragg cell.

The Doppler shift for a moving source differs from that observed when the light is reflected or scattered from a moving target. Consideration of the laws of reflection shows that the Doppler frequency shift for light reflected from a moving target is twice that for light from a moving source. Sample useful applications of optical Doppler interferometry include:
  • Laser Doppler Velocimetry (LDV), as used in fluid flow studies,
  • Laser Doppler Vibrometry (LDV) as used in vibration and modal analysis,
  • calibration of other motion measuring devices such as accelerometers,
  • the laser gyroscope.
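A sketch of the beat-frequency relation for a reflecting target, f_beat = 2v/wavelength (the factor of two comes from the folded path, as stated above); the speeds are illustrative:

```python
# Beat (Doppler) frequency for light reflected from a target moving at
# speed v along the beam axis.
wavelength = 633e-9                        # assumed source
for v in (1e-6, 1e-3, 1.0):                # illustrative target speeds, m/s
    f_beat = 2 * v / wavelength
    print(f"v = {v:8.6f} m/s -> beat frequency = {f_beat:14.1f} Hz")
```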

Part 18: Geometric Moiré Phenomena and Simulations

Part 19: Basic Strain Measurement by Geometric Moiré

Geometric moiré pattern showing the displacement field created by cold-working fastener holes in an array near a plate edge. Grating photography, Fourier optical processing, pitch mismatch, and sensitivity multiplication enhanced results for this case involving plastic deformations. Some of the closely packed fringes are lost in this reduction. Photo by G. Cloud and M. Tipton, 1980.

The relationship between observed moiré fringes, displacement, and strain for uniform strain in one dimension is studied. Moiré fringes are created when:

  • light is projected through two superimposed gratings,
  • the line spacing (pitch) of one grating differs slightly from the pitch of the other grating,
  • light passes through the cracks between the lines of the two superimposed gratings,
  • the imaging system acts as a low-pass filter so that the local intensity differences are smoothed out,
  • alternating light and dark bands called moiré fringes are seen.

One moiré fringe cycle is created whenever n lines of the specimen grating are stretched to fill the space of n + 1 lines in the undistorted master grating. The normal strain along an axis perpendicular to the grating lines is the grating pitch times the gradient of the moiré fringe order (fringes per unit length) along the same axis. Similar relationships can be derived for normal strain in other directions, so the complete strain map over the extent of the specimen can be determined. Determination of strain requires that the derivative of fringe order with respect to distance (fringe gradient) be obtained.
  • Differentiation of primary experimental data tends to increase scatter.
  • Errors are reduced if the fringe gradient is large, meaning the fringe orders are closely packed.
  • Fine gratings (small pitch) are required to obtain high sensitivity.
  • Obtaining adequate sensitivity to measure elastic strains in metals is difficult with basic geometric moire´ because fine-enough gratings are not easily created.
  • Fourier optical processing can be used to enhance sensitivity and fringe contrast in geometric moiré.
  • Moiré methods became practical for measurement of small strains when interferometric moiré and phase shifting methods were developed.

Rotation of one grating with respect to the other also produces moiré fringes. This effect can be analyzed by considering the changes of grating pitch in the cross section that are produced by the rotation. Extension of this simple analysis to nonuniform strain is accomplished by realizing that:
  • the gratings are very fine,
  • only a very small area of the specimen grating is considered,
  • the displacement/strain field in adjacent small areas will differ,
  • the transition of displacement from one small area to another is smooth in a continuum,
  • the moiré fringe gradient changes smoothly across the specimen,
  • the local fringe gradient is proportional to the local strain.

A more general parametric analysis of moiré phenomena is required to understand combined rotation and strain effects.
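A minimal sketch of the pitch-times-fringe-gradient relation; the grating pitch and the fringe-center positions are fabricated data:

```python
import numpy as np

# Moire strain relation: normal strain = pitch * dN/dx, where N is the
# moire fringe order and x is distance perpendicular to the grating lines.
pitch = 25.4e-6                                   # 1000 lines/in grating, m
x = np.array([0.0, 2.1, 4.0, 6.2, 8.1]) * 1e-3    # fringe-center positions, m
orders = np.arange(len(x), dtype=float)           # fringe orders 0, 1, 2, ...

gradient = np.gradient(orders, x)                 # dN/dx, fringes per meter
strain = pitch * gradient                         # local normal strain
print(np.round(strain * 1e6), "microstrain")
```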

Part 20: Parametric Analysis of Geometric Moiré

Parametric analysis of moiré fringes is undertaken to:

  • determine the effects of simultaneous rotation and strain,
  • show that rotation and strain effects are decoupled for small rotations,
  • demonstrate that strain measurement is not affected by rotation,
  • gain understanding of important related techniques such as speckle interferometry.

Two line gratings having different pitches are superimposed, and one is rotated relative to the other. Light fringes are created along the intersections of the grating lines, and these are taken to be the whole-order fringes of interest. The lines of the gratings and the fringe orders are numbered starting from a common origin. Then, analytic geometry is used to describe the line families. A consistent relationship between grating line numbers and moiré fringe order is evident over the whole field. The difference of pitch between the two originally identical gratings, when divided by the original pitch, is the local normal strain in the direction perpendicular to the grating lines. For small rotation, the final result takes the form (strain × x) + (rotation × y) = (pitch × moiré fringe order). Rotation and normal strain effects are decoupled because experiment and analysis show that:
  • fringes caused by small rotation lie perpendicular to the grating lines,
  • fringes caused by strain lie parallel to the grating lines.

If rotation and strain occur simultaneously, the fringe gradient perpendicular to the grating lines:
  • is not seriously contaminated by the gradient of the rotation fringes,
  • yields normal strain when multiplied by the pitch.

The analysis is easily extended to cover related moiré phenomena, including:
  • nonuniform strain,
  • shear strain measurement,
  • pitch mismatch,
  • phase shifting,
  • sensitivity multiplication,
  • other types of gratings,
  • moiré interferometry,
  • speckle interferometry.
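A sketch of the decoupled small-rotation relation from the analysis above; the pitch and the gradients are assumed values:

```python
# Small-rotation moire relation: (strain * x) + (rotation * y) = pitch * N.
# The fringe gradient across the grating lines gives strain; the gradient
# along the lines gives rotation.
pitch = 25.4e-6                 # grating pitch, m (assumed)
dN_dx = 400.0                   # fringes/m perpendicular to grating lines (assumed)
dN_dy = 150.0                   # fringes/m parallel to grating lines (assumed)

strain = pitch * dN_dx          # decoupled from rotation for small angles
rotation = pitch * dN_dy        # radians
print(f"strain = {strain*1e6:.0f} microstrain, rotation = {rotation*1e3:.2f} mrad")
```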

Part 21: Shadow Moiré

Shadow moiré:

  • appears when a grating is superimposed with its own shadow,
  • yields fringes that are loci of points of constant separation between the master grating and the specimen,
  • provides a contour map of the specimen,
  • can be used to measure out-of-plane deformation,
  • is easily implemented.

Shadow moiré:
  • is readily apparent in everyday life,
  • can be created by placing screen material adjacent to a curved surface and illuminating the combination,
  • is easily demonstrated using laser-printed grating transparencies.

Shadow moiré simulations can be generated using CAD software that incorporates a shadow feature. In the shadow of the master grating that appears on the specimen, for a given angle of illumination:
  • the shadow lines are elongated by the inclination of the specimen,
  • the shadow lines are shifted laterally by a factor depending on the distance from the master grating to the specimen.

When the shadow of the master grating is viewed through the master grating:
  • some light areas of the shadow will coincide with spaces in the master to create a light area in the image,
  • some light areas of the shadow will be blocked by lines in the master to create a dark area,
  • dark areas of the shadow will be dark in any case,
  • the light areas blend together to form light fringes,
  • the dark areas blend to form dark fringes.

In the image, one fringe cycle appears when m grating shadows are elongated to fill the space of m ± 1 lines of the master. Simple geometric analysis shows that the gap between specimen and master equals the local fringe order times the grating pitch divided by the tangent of the angle of incidence. A field lens:
  • is required to establish viewing along the normal for the expanse of the master grating,
  • can be eliminated with little loss of accuracy if the viewing distance is much larger than the specimen.

Sensitivity of the method can be increased or lowered by viewing along an inclined axis. Some applications of shadow moiré include:
  • observation of buckling of panels,
  • diagnosis of illnesses that affect body conformation,
  • mapping contours of manufactured parts,
  • quality control in industry.
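A sketch of the gap formula for normal viewing, gap = N × pitch / tan(illumination angle); all values are assumed:

```python
import numpy as np

# Shadow moire gap per fringe order, viewing along the surface normal.
pitch = 0.5e-3                  # coarse grating pitch, m (assumed)
alpha = np.radians(45.0)        # illumination angle from the normal (assumed)
for N in range(4):
    gap = N * pitch / np.tan(alpha)
    print(f"fringe order {N}: gap = {gap*1e3:.2f} mm")
```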

Part 22: Projection Moiré

Projection moiré is a simple technique that provides:

  • contour maps of objects,
  • changes of contour caused by deformation,
  • comparisons of contours of two different objects.

The steps in basic projection moiré are:
  • project a grating on a specimen as by a slide projector,
  • photograph this grating,
  • deform the specimen or replace it with a different object,
  • photograph the grating a second time, often by double exposure,
  • develop the doubly exposed film to see the fringes resulting from superimposition of the two distorted gratings,
  • interpret the fringes to obtain the map of change of elevation between the two specimen states.

Projection moiré yields the change in contour of a specimen, whereas shadow moiré gives an absolute contour map. For the basic setup, and assuming paraxial conditions are met, the change of elevation along the direction of viewing is the fringe order times the pitch of the grating divided by the sine of the angle between the projection axis and the viewing axis. Specimen motion perpendicular to the imaging axis does not affect the meaning of the fringe pattern. But the fringe pattern no longer compares the ‘‘before’’ and ‘‘after’’ elevations of the same specimen point. Because the projection technique involves two different pictures of a projected grating, two entirely different specimens can be compared to obtain the difference between them. Projection moiré gratings are created by:
  • using a Ronchi ruling in a slide projector,
  • using oblique interference of plane coherent light,
  • creating a master using computer graphics and using a presentation projector.

Unlike shadow moiré, the projection technique requires that the imaging system be able to resolve the projected grating lines. This restriction limits the sensitivity of the technique and requires that quality optics be used. The basic fringe order-elevation change equation is in error if paraxial conditions are not met. The grating projector and the imaging device should be far from the specimen to reduce these errors. Digital photography of the grating and digital processing of the grating images:
  • can create resolution and aliasing problems,
  • facilitates rapid processing of the data,
  • allows significant refinements such as phase stepping and filtering to improve accuracy and sensitivity.

Certain laser scanning methods for obtaining digitized surface shape profiles in industry are essentially the same as projection moiré. Applications include:
  • measuring changes of contours of human or animal body components caused by muscular effort,
  • comparing shapes of manufactured objects with a prototype for quality control,
  • observing contour changes in aircraft structures.
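A sketch of the paraxial elevation relation, delta_z = N × pitch / sin(theta); the pitch and the angle are assumed:

```python
import numpy as np

# Projection moire elevation change per fringe order (paraxial case);
# theta is the angle between the projection and viewing axes.
pitch = 1.0e-3                  # projected grating pitch on the specimen, m
theta = np.radians(30.0)        # projection-to-viewing angle (assumed)
for N in range(4):
    dz = N * pitch / np.sin(theta)
    print(f"fringe order {N}: elevation change = {dz*1e3:.2f} mm")
```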

Part 23: Reflection Moiré

Reflection moiré is deserving of study because of its:

  • unique features and applications in structural analysis,
  • significance in understanding and extending other optical methods,
  • ease of implementation.

Reflection moiré:
  • involves recording the image of a remote grating that is reflected in the polished surface of a specimen,
  • requires, in its simplest form, exposures for the ‘‘before’’ and ‘‘after’’ states of the specimen,
  • yields directly a moiré pattern that is a map of the slope change between the specimen states,
  • is especially applicable to plates and similar engineering structures,
  • can be applied over a large range of sizes,
  • requires very little equipment and is simple to implement.

The basic experimental setup requires:
  • a specimen such as a plate with a reflective surface fixed in a loading frame,
  • an illuminated flat or curved coarse master grating fixed at some distance from the plate,
  • an imaging device set up behind a hole in the grating to record the reflected grating.

Procedure:
  • photograph the reflected grating for the initial state of the specimen,
  • load the specimen,
  • photograph the reflected grating for the final state of the specimen,
  • superimpose the photographs or use double exposure.

As the slope at a point on the plate specimen changes, different portions of the master grating are reflected from that point. The reflected grating seems to distort and sweep across the image of the specimen as it deforms. If the viewing distance is large relative to plate size, then the slope of the plate at any point is the moiré fringe order times the grating pitch divided by twice the viewing distance. Good sensitivity can be obtained with rather coarse gratings, and the grating can be simply ruled on posterboard or created with a computer printer. Improvements include:
  • use of a curved grating,
  • rotating the specimen or the grating to obtain slopes in both directions,
  • using a grating image on a computer monitor as the master,
  • use of digital imaging and computer superposition of the grating photos,
  • implementing an alternative setup that gives the slope map with only one exposure,
  • using strobe lighting or averaging techniques to study dynamic problems.
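A sketch of the slope relation for a distant grating, slope = N × pitch / (2 × viewing distance); the numbers are assumed:

```python
# Reflection moire: slope change per fringe order when the grating
# distance L is large relative to the plate size.
pitch = 2.0e-3                  # coarse master grating pitch, m (assumed)
L = 1.5                         # grating-to-plate distance, m (assumed)
for N in range(4):
    slope = N * pitch / (2 * L)
    print(f"fringe order {N}: slope change = {slope*1e3:.3f} mrad")
```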

Part 24: Demonstrations of Laser Speckle Phenomena

Simple experiments that demonstrate basic laser speckle phenomena are described. Images made with laser illumination are contaminated with a ‘‘salt-pepper’’ pattern called ‘‘laser speckle.’’ Laser speckle:

  • was first viewed as a nuisance,
  • proved to be a useful discovery,
  • carries much information about the illuminated object,
  • is the basis of a family of measurement techniques, including:
    • digital and electronic speckle pattern interferometry,
    • speckle photography,
    • speckle shearography.

The demonstration experiments require only:
  • a minimal laser,
  • a beam-expanding lens such as a microscope objective,
  • a screen for viewing the expanded laser beam,
  • stability of the system.

Sit quietly and stare at the illuminated patch on the screen to see the laser speckle. While looking at the speckle pattern, move your head sideways. The speckles will seem to move sideways also. Likewise, if the screen is moved, the speckles will move with it. Since the speckles move sideways with the specimen, they can serve as a fine array of surface markers and, so, can be used to measure displacement by the method called ‘‘speckle photography.’’ To observe the effect of aperture size on speckle size, look at the speckle pattern through an aperture formed by:
  • bringing your eyelids close together,
  • curling your finger against your thumb,
  • crossing the index and middle fingers of one hand over the same fingers of the other hand,
  • punching small holes in a card.

You will find that the smaller the aperture, the larger the speckles, meaning we can control speckle size in an experiment. Some people are able to see the effect of longitudinal head movement on speckle brightness, a difficult feat that can be accomplished with practice. To show that changing distance between eye and specimen changes speckle brightness:
  • stabilize the head as much as possible,
  • look through a small aperture to make the speckle large,
  • focus on only one large speckle,
  • move the head slightly toward or away from the screen.

That speckle brightness changes with longitudinal motion of an object forms the basis of several measurement techniques, including:
  • electronic speckle pattern interferometry (ESPI),
  • digital speckle pattern interferometry (DSPI),
  • electronic and digital speckle shearography (DSS),
  • phase shifting methods that enhance these techniques.

Subsequent articles will discuss:
  • the types of speckle,
  • the physics of speckle formation,
  • speckle size,
  • speckle brightness prediction.

Part 25: Objective Speckle

Objective coherent light speckle:

  • is so-named because a lens is not used in the system,
  • cannot be observed directly,
  • has little practical application,
  • provides the basis for understanding the more useful subjective speckle,
  • helps us estimate speckle size and brightness. To create objective laser speckle patterns:
  • illuminate an object having a matte scattering surface with an expanded laser beam,
  • collect scattered waves on a screen, usually a photo film or a sensor array.

The basic physics of objective speckle formation is that:
  • each point of the screen receives light waves from every point of the illuminated object,
  • each wave travels its own particular path length to a screen point,
  • the multitude of waves arrive at the point with a multitude of different phases,
  • the waves all interfere with one another as they arrive at the point,
  • at some points, the waves are predominately in phase so they constructively interfere to create a bright patch,
  • at other points, the waves are predominately out of phase so they form a dark patch,
  • many points have a mixture of phase differences, so the result is a gray patch.

The simple model described is sufficient if the object is small and at large distance from a small screen, in which case the interference systems are close to the collinear case. For a large object close to the screen, the interference systems are complicated because the waves meet at significant angles. In this case, oblique interference must be considered, and this provides an approach for estimating speckle size. Speckle brightnesses are:
  • not predictable,
  • random because the path length relationships for a given speckle are random,
  • not related to one another,
  • unable to combine to form continuous fringes.

One cannot accurately view an objective speckle pattern with the eye or a camera, because the lens in the viewing system converts the pattern to subjective speckle. To view an objective speckle pattern:
  • expose a photographic film, using no lens, develop the negative, and print it; this image might be affected by film characteristics,
  • use an analog TV sensor without a lens; the pattern is modified by the raster scan,
  • allow the pattern to fall directly on the sensing array in a digital camera; the recorded pattern will be affected by the pixel size and pixel spacing.

Speckle topics to be studied next include:
  • subjective speckle,
  • estimates of speckle size,
  • speckle brightness distributions,
  • combinations of speckle fields.

Part 26: Subjective Speckle

Subjective coherent light speckle:

  • is so-named because a lens is used,
  • affects all pictures taken with coherent illumination,
  • is of great practical value in measurement. To create objective laser speckle patterns:
  • illuminate an object having a matte scattering surface with an expanded laser beam,
  • use a lens to create an image of the illuminated object on a screen, a photo film, or a sensor array.

The basic physics of subjective speckle formation is that:
  • each point of the screen image receives light waves from only one corresponding point of the illuminated object,
  • each wave travels its own particular path length to a screen image point,
  • the multitude of waves arrive at the image point with a multitude of different phases,
  • the waves all interfere with one another as they arrive at the point,
  • at some points the waves are predominately in phase, so they constructively interfere to create a bright speck,
  • at other points, the waves are predominately out of phase so they form a dark speck,
  • many points have a mixture of phase differences, so the result is a gray spot.

The simple model described is sufficient if the lens aperture is small relative to the lens–image distance, in which case the interference systems are close to the collinear case. Otherwise, oblique interference must be considered. The resolution of even a perfect lens is limited by diffraction, so:
  • there is a fundamental limit on the smallness of information that can be resolved in an image,
  • smaller scale detail will be averaged over the diffraction-limited patch or cell,
  • the waves coming into an image cell will mix and interfere,
  • the speckle size would seem to be the same as the cell size.

Lens aberrations and other defects enlarge the resolution cell size considerably beyond the diffraction limit. Subjective speckle is ubiquitous in images made with coherent light and, so, is easily observed.

Part 27: Speckle Size Estimates

Speckle size

  • is a useful parameter in designing certain optical measurement systems,
  • must first be defined,
  • is calculated by various methods.

Our intuitive concept of ‘‘size’’ leads to difficulty and error when used to assess speckle size because the pattern is very intricate. Define speckle size as the center-to-center spacing of adjacent dark or adjacent light spots in the speckle pattern. Assume that the smallest fringe spacing created by oblique interference is the dominant size of the speckles.
  • Interference of only the most widely divergent waves is considered.
  • Larger speckles produced by less-divergent waves are modulated by the smaller ones.

Objective speckle size is calculated from the oblique interference formula to be wavelength × (object-to-sensor distance) / (width of the illuminated object patch). For an object at infinity, subjective speckle size in the image plane is wavelength × (f-number of the lens). For finite object–lens–image conjugate distances, the oblique interference equation predicts the subjective speckle size in the image plane to be wavelength × (lens f-number) × (1 + system magnification). The resolution limit of a lens as determined from diffraction theory provides alternate approaches to determining dominant minimum subjective speckle size. A logical claim is that subjective speckle size is the radius of the Airy disc for a lens. The result is the same as that obtained through interference calculations. The Rayleigh resolution limit, based on a different determination of the radius of the Airy disc, gives a result that is 1.22 times that obtained by oblique interference computation. Lens flaws and aberrations cause the resolution limit from diffraction theory, and therefore the speckle size, to be larger than predicted. In practice, speckle size is difficult to gage because of the intricate randomness of the pattern. Calculations of speckle size are only approximations. But the estimates are useful in optical system design. Recorded speckles can never be smaller than the resolution limit of the recording medium, whether photographic film or digital sensor array.
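The size estimates above, collected as a sketch with assumed values:

```python
# Dominant speckle-size estimates from the oblique-interference formulas.
wavelength = 633e-9                       # assumed illumination

# Objective: wavelength * (object-to-sensor distance) / (patch width)
z, W = 0.5, 5e-3                          # assumed geometry, m
print(f"objective speckle  ~ {wavelength * z / W * 1e6:6.1f} um")

# Subjective: wavelength * (f-number) * (1 + magnification)
f_number, M = 8.0, 0.5                    # assumed lens settings
print(f"subjective speckle ~ {wavelength * f_number * (1 + M) * 1e6:6.1f} um")
```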

Part 28: Speckle Brightness Distributions

This article:

  • summarizes the complex subject of estimating the irradiance distribution in a speckle pattern,
  • determines the probability that any single speckle will exhibit a certain irradiance. Assume that:
  • each speckle is formed from a large number of waves that arrive at the speckle point with a random distribution of phase and amplitude,
  • all the waves have identical polarizations and are able to interfere. The aim is to predict the resultant brightness of any speckle in the field. This problem is one of ‘‘random walk,’’ for which:
  • the walker takes successive random steps in random directions,
  • we require estimation of where the walker will be after many steps,
  • the magnitude and direction of the steps are analogous to amplitude and phase angle of the rays of light in the optics problem. Quantitatively, the problem is to calculate the probability that the walker ends up within an annulus of radius r and thickness dr centered on his starting point. The amplitude r of his final position vector is the resultant amplitude of the accumulated light rays. The probability function is negative exponential showing that:
  • the most likely intensity of a given speckle is zero, meaning dark,
  • bright speckles are least likely. Visual examination of a speckle pattern suggests that the predictions are wrong because bright speckles seem at least as numerous as dark ones. There are three reasons for this perception:
  • the conditions on polarization and capability for interference might not be met; in that case the probability of dark speckle falls,
  • we capture and process speckle patterns to suit our visual needs; the overall brightness is boosted so the pattern ‘‘looks right’’,
  • our vision system is nonlinear; it is more sensitive to dim speckles, so we perceive them as brighter than they actually are.
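A Monte Carlo sketch of the random-walk model just described: many unit-amplitude waves with uniformly random phases are summed at each of many speckle points, and the intensity statistics are compared with the negative exponential prediction. The wave count and sample size are arbitrary choices.

```python
# Random-walk speckle brightness: sum many random-phase unit phasors per
# speckle and compare intensity statistics with the negative exponential law.
import numpy as np

rng = np.random.default_rng(0)
n_waves, n_speckles = 100, 50_000   # arbitrary simulation sizes

phases = rng.uniform(0.0, 2.0 * np.pi, size=(n_speckles, n_waves))
resultant = np.exp(1j * phases).sum(axis=1)   # complex sum = random walk
intensity = np.abs(resultant) ** 2            # intensity = amplitude squared

mean_i = intensity.mean()
# Negative exponential law: P(I > x * mean) = exp(-x), so dark speckles
# (small I) are the most probable and bright ones the least.
for x in (0.5, 1.0, 2.0):
    simulated = (intensity > x * mean_i).mean()
    print(f"P(I > {x:.1f} * mean): simulated {simulated:.3f}, "
          f"predicted {np.exp(-x):.3f}")
```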

Part 29: Photoelasticity I—Birefringence and Relative Retardation

This article discusses the propagation of light through a birefringent material, which is the core of photoelasticity. Photoelasticity:

  • is a highly developed and important tool for stress analysis
  • uses polarized light to obtain the stress state in a loaded transparent model, in a deformed three-dimensional component, or in a coating on the surface of a prototype
  • uses the interaction of light with birefringent materials
  • utilizes optical interference to determine path length difference, which is related to stress
  • is an amplitude-division method of interferometry
  • is a common-path interferometer and so is easy to use in practical situations. Birefringent materials are those in which the index of refraction varies with the direction of polarization of the light passing through them. Experiments that pass plane-polarized light through a slab of birefringent material lead to the following conclusions:
  • the surface of the slab acts as a beam splitter, dividing the entering wave into two waves that are polarized in orthogonal directions called the principal axes of refractive index
  • the amplitudes of the two wave components are found by vector resolution of the entering electric vector
  • the two component waves travel at different speeds that are established by the principal values of refractive index
  • when the waves exit the slab, one lags behind the other by an amount called the path length difference or, in photoelasticity, the relative retardation. Photoelasticity measurement determines the principal directions and the relative retardation through optical interference. Relative retardation:
  • is determined by calculating the difference between the absolute retardations of the two component waves
  • is the difference between the principal indexes times the thickness of the slab divided by the refractive index of the immersion medium
  • is measured by combining portions of the two waves by means of a second polarizer to convert phase difference to intensity.
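Transcribed into symbols (the notation is assumed here for illustration), with n1 and n2 the principal indexes, t the slab thickness, and n0 the refractive index of the immersion medium, the statement above reads:

```latex
R = \frac{t\,(n_1 - n_2)}{n_0}
```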

Part 30: Photoelasticity II—Birefringence

This article establishes the correlations between principal stresses, relative retardation, and stress at a point, which make photoelastic stress analysis possible. Some materials are naturally birefringent; for example, quartz and calcite. In other materials, birefringence can be induced by stress or deformation; for example, glass, many plastics, semiconductors, various fluids, and some biological tissue. For a range of stress and time, experiments show that at any point in a birefringent slab:

  • the axes of principal stress and principal refractive index coincide,
  • each principal refractive index is a linear function of both the principal stresses. The coefficients that relate stress to refractive indexes and relative retardation must be defined and determined to make photoelasticity possible. Experiments support the definition of two absolute photoelastic coefficients that relate absolute retardations to the principal stresses.
  • the absolute retardations are not often used in ordinary photoelasticity,
  • they demonstrate the proportionality between principal refractive indexes and principal stress difference. The stress-optic coefficient:
  • is defined as the difference between the two absolute coefficients,
  • relates the relative retardation directly to the principal stress difference,
  • is determined for the photoelastic material through an experiment on a specimen in which the stress state is known,
  • makes possible the measurement of stress in an unknown stress field by observation of the relative retardation,
  • must be measured and used in a way that accounts for the time-dependent behavior of most photoelastic materials,
  • one way to do this is to record all photoelastic data at the same time after loading; it need not be done quickly,
  • is also a function of wavelength and temperature, and these effects must be considered,
  • is one of several possible stress-birefringence parameters that are used in photoelasticity. Efforts to create fundamental molecular or atomic models that explain stress birefringence in materials in a satisfactory way have not been very successful. Birefringence is a complex phenomenon that merits more study.
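In the customary notation (assumed here), with C the stress-optic coefficient, t the slab thickness, and σ1, σ2 the principal stresses, the relation just described is:

```latex
R = C\,t\,(\sigma_1 - \sigma_2)
```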

Part 31: Photoelasticity III—Theory

This article describes in physical and mathematical terms the function of the simplest optical arrangement for performing photoelasticity measurements of stress. It is the linear dark-field polariscope. The linear polariscope:

  • incorporates:
  • a source of radiation of a single wavelength,
  • two polarizers that have their transmission axes crossed,
  • an intensity sensor,
  • is used for pointwise measurement of birefringence in a specimen that is placed between the polarizers. Unpolarized monochromatic radiation from the source is made to travel along the optical axis and pass through the polarizer. The polarizer passes only waves for which the electric vectors lie in a single plane. The plane-polarized wave passes through a birefringent slab that:
  • divides the wave into two polarized components whose electric vectors are parallel to the principal refractive axes, thus perpendicular to one another,
  • retards the component waves by differing amounts called the absolute retardations. As they exit the birefringent slab, the two orthogonally polarized component waves are out of phase by the difference of the absolute retardations, called the relative retardation. The second polarizer, called the analyzer, passes those components of the two incident waves that are parallel to its transmission axis. Downstream from the analyzer are found two waves that:
  • lie in the same plane,
  • are out of phase by the relative retardation,
  • have equal amplitudes,
  • are able to interfere. The result of the interference is a single wave that:
  • has the same wavelength and velocity as the original wave from the polarizer,
  • is polarized in the plane of the analyzer,
  • has been retarded by a certain amount,
  • has an amplitude that depends on:
  • the relative retardation caused by the birefringent slab,
  • the orientations of the principal axes of the slab relative to the polarizer axis. The sensor:
  • provides output proportional to the intensity of the wave created by interference,
  • gives information about the stress magnitude and principal axes in the birefringent slab. Photoelasticity:
  • is actually quite simple and easy to understand when studied systematically,
  • is a practical example of the ‘‘generic interferometer,’’
  • is of the amplitude-division class of interferometry,
  • is a common-path interferometer and so is stable and easy to use,
  • can give useful data when configured as linear light field, with polarizer and analyzer parallel.
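As an added illustration, one common way to write the intensity passed by the dark-field linear polariscope (the symbols are assumptions of this sketch, not notation from the text) uses I0 for a constant, α for the angle between the polarizer axis and a principal stress axis, R for the relative retardation, and λ for the wavelength:

```latex
I = I_0 \,\sin^2(2\alpha)\,\sin^2\!\left(\frac{\pi R}{\lambda}\right)
```

The intensity vanishes when α is a multiple of 90° or when R is an integer multiple of λ, which are exactly the two extinction cases exploited in the next part.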

Part 32: Photoelasticity IV—Observables and Interpretation

Objectives of this article are to:

  • interpret the observable light from a linear dark-field polariscope that is used to measure stress parameters in a birefringent model,
  • describe a simple experiment to illustrate point-by-point photoelasticity. The sensor responds to intensity, but we need look only at the amplitude for basic photoelasticity. Extract the amplitude from the photoelasticity equation and determine for what conditions the amplitude is zero. Sensor output will be zero for two cases that are useful for determining stress parameters in the birefringent slab or photoelastic model. In the first case, light is extinguished when the principal stress axes are aligned with the crossed axes of polarizer and analyzer. This result yields principal directions. In the second case, no light reaches the sensor when the relative retardation is an integer multiple of the wavelength. This result gives information about stress magnitude. The difference between the principal stresses is an integer multiple m of the wavelength divided by the product of the stress-optic coefficient and the thickness of the photoelastic model. Presupposed is that the correct integer multiple at which the light is extinguished can be correctly ascertained for the loaded specimen. The simple dark-field linear polariscope can be used without any other apparatus for point-by-point determination of stress direction and magnitude in a photoelastic model. This approach:
  • yields excellent results if carefully implemented,
  • is used as an adjunct to whole-field photoelasticity,
  • might be the only option for studies at nonvisible wavelengths, such as in the infrared. As an experiment, set up a linear polariscope with:
  • crossed polarizers,
  • a photoelastic model,
  • a narrow beam light source such as a laser or laser pointer,
  • a photocell, or
  • a white card or ground glass if observations of intensity are to be visual. To determine principal stress directions at the chosen point in the loaded model:
  • rotate polarizer and analyzer together, keeping them crossed,
  • observe intensity at the sensor while rotating the polarizers,
  • stop the rotation when the sensor output is minimum,
  • at this point, the polarizers are aligned with the principal stresses,
  • record the principal angle. To determine difference of principal stresses:
  • rotate polarizer and analyzer about 45° from the position established above,
  • begin with zero load on the model,
  • increase the load and monitor sensor output as it cycles between maximum and minimum,
  • count the number of times the light intensity passes through zero until maximum load is reached,
  • the number of cycles from black to black is the value m that appears in the equation for the principal stress difference. The simple experiment described:
  • has various shortcomings,
  • is easily enhanced to eliminate the problems,
  • still, is capable of yielding excellent results.
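To make the arithmetic concrete, here is a minimal sketch of the principal stress difference relation stated above, (σ1 − σ2) = mλ/(Ct); every numerical value is an assumption for illustration only:

```python
# Principal stress difference from a counted fringe order m:
# (sigma1 - sigma2) = m * wavelength / (C * t).

wavelength = 546e-9   # m, mercury green line (assumed light source)
C = 5.0e-11           # 1/Pa, stress-optic coefficient (assumed material)
t = 6.0e-3            # m, model thickness (assumed)
m = 4                 # cycles counted from black to black (assumed)

stress_difference = m * wavelength / (C * t)   # Pa
print(f"sigma1 - sigma2 = {stress_difference / 1e6:.2f} MPa")   # ~7.28 MPa
```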

Part 33: Photoelasticity V—Fringe Patterns

Photoelastic analysis is extended to the whole field so as to obtain fringe patterns that are easily observed, recorded, and interpreted to concurrently obtain stress magnitudes and principal stress directions over the entire extent of the model. All points in the model are interrogated at once by using a collimated beam of light to create a multitude of interferometers acting in parallel. To create the beam of light, a point light source is used with a collimating lens. To view the entire model, a field lens is also added to the polariscope, as is an imaging device. The relative retardations and/or stress direction variations are spatially continuous in a deformed solid, so all the spots that have common retardations and/or principal directions join up to create patches of uniform intensity, called interference fringes, in the image. Two different fringe systems appear in a photoelastic pattern taken with a linear polariscope, namely:

  • isoclinic fringes,
  • isochromatic fringes. An isoclinic fringe can be defined in at least three equivalent ways as a locus of points:
  • that have constant inclinations of the principal axes of refractive index,
  • that have constant inclinations of the axes of principal stress,
  • where the principal stress axes are aligned with the axes of polarizer and analyzer. Isoclinic fringes:
  • provide a map of principal stress directions over the extent of the model when properly collected and interpreted,
  • are black, even with white-light illumination,
  • appear as only one fringe, perhaps broken into segments, for given azimuth settings of polarizer and analyzer,
  • remain stationary as the load on the model is changed,
  • appear to move when the crossed polarizer and analyzer are rotated relative to the model,
  • dominate wherever they cross isochromatic fringes. Isochromatic fringes can be defined in at least five equivalent ways as loci of points having:
  • specific uniform color,
  • constant path length difference,
  • constant relative retardation,
  • constant principal stress difference,
  • constant maximum shear stress. Isochromatic fringes:
  • yield stress amplitude data if the model stress-optic coefficient is known,
  • change with changes in load,
  • become more numerous as load is increased,
  • must be correctly numbered to obtain quantitative data. If white light is used, the isochromatics are colored. If monochromatic light is used, the integer-order isochromatics are black.

Part 34: Photoelasticity VI—The Circular Polariscope

The goal is to make visible a complete isochromatic fringe pattern that is not obscured by isoclinic fringes. Isoclinics are eliminated through use of the circular polariscope, which uses circularly polarized light to interrogate the photoelastic model. Linearly polarized light is transformed to circular polarization by:

  • use of a birefringent plate that induces a relative retardation of one-quarter the wavelength,
  • placing the quarter-wave plate in the optical path with its principal axes at 45° to the axis of the polarizer. The electric vector of the light exiting the quarter-wave plate:
  • has constant magnitude,
  • at any instant traces out a circular helix in space,
  • at any position along the optical axis traces out a circle over time. Circularly polarized light:
  • carries no directional data,
  • is used to eliminate isoclinics from photoelastic fringe patterns,
  • is also used in compensation methods to precisely measure isochromatic fringe order. To convert the linear polariscope to a circular instrument:
  • obtain two quarter-wave plates,
  • place one of the plates between polarizer and model,
  • place the second plate between model and analyzer,
  • adjust the axes of the quarter-wave plates so they are at 45° to the polarizer axis. The quarter-wave plates should have their ‘‘fast axes’’ crossed. Start with the dark-field linear setup, then ensure that the field is still dark when the quarter-wave plates are installed. This arrangement is known as the dark-field circular polariscope. The light-field circular polariscope is also useful. To convert the dark-field arrangement to a light-field system, merely rotate the polarizer or analyzer by 90° in either direction. The equations for the electromagnetic vector of the light from light-field and dark-field circular polariscopes:
  • are developed in a manner similar to that used for the linear polariscope,
  • involve more terms because of the added optical elements,
  • are the same as those obtained for the linear configurations except that the directional data disappear. In practice, quarter-wave plates:
  • usually do not induce exactly one-quarter wavelength relative retardation,
  • produce elliptically polarized light,
  • create errors in photoelastic measurement that:
  • can be ignored for most applied work,
  • must be considered and eliminated in advanced procedures.
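As an added illustration, not taken from the text: for an ideal quarter-wave plate at 45° to a polarizer that passes amplitude a, one standard way to write the exiting field exhibits the three properties listed above (constant magnitude, a circular helix in space at an instant, a circle in time at a fixed position):

```latex
E_x = \frac{a}{\sqrt{2}}\cos(kz - \omega t), \qquad
E_y = \frac{a}{\sqrt{2}}\sin(kz - \omega t), \qquad
\lvert \vec{E} \rvert = \frac{a}{\sqrt{2}} \ \text{(constant)}
```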

Part 35: Photoelasticity VII—Basic Polariscope

Basic polariscope setups that can be used to obtain both isoclinic and isochromatic data are described. The classic transmission polariscope:

  • contains, in order:
  • a point light source,
  • usually a monochromatic filter,
  • a collimating lens,
  • the polarizer,
  • the first quarter-wave plate,
  • the photoelastic model,
  • the second quarter-wave plate,
  • the analyzer,
  • a field lens,
  • an imaging system,
  • is the version most often used for precise photoelasticity experiments,
  • utilizes light efficiently,
  • requires two large lenses, so tends to be expensive,
  • requires careful setup. The diffused-light polariscope:
  • is identical to the transmission instrument except that the collimating lens is replaced by a diffuser,
  • can utilize a common fluorescent light fixture as the source,
  • works best if the monochromatic filter is placed near the imaging system,
  • is less expensive than the transmission polariscope,
  • is very common and is capable of yielding good results when properly set up,
  • is not efficient in its use of light,
  • requires that the distance from field lens to imaging system be exactly equal to the focal length of the field lens. A very simple polariscope:
  • is identical to the diffused-light instrument except that the field lens is eliminated,
  • is very economical,
  • tends to induce errors because the incidence angle at the model varies over the field,
  • requires that the imaging system be far removed from the model to reduce incidence angle errors,
  • is exceptionally useful for demonstrations because fringe patterns can be seen from a range of viewing positions. Many other polariscope configurations can be devised for particular applications, including, for example:
  • dynamic studies on moving models,
  • microscopic photoelasticity on small specimens,
  • point-by-point interrogation of the model,
  • demonstrations using an overhead projector.

Part 36: Photoelasticity VIII—Recording

This article describes the recording and numbering of isochromatic fringes as the first steps in obtaining quantitative stress information from photoelasticity. The photoelastician should be wary of the ‘‘black box’’ approach because it can lead to undetected errors. The direct approach should be understood even if more sophisticated methods will be used in an experiment. To record isochromatic fringe patterns:

  • set up the circular dark-field polariscope with monochromatic light,
  • place the model in the load device and determine the correct photo exposures,
  • apply the load and wait the selected time interval,
  • record the dark-field photo,
  • rotate the analyzer 90° to convert to light field,
  • record the light-field photo,
  • remove the load on the model, but leave it in the load frame if possible,
  • print some copies of the fringe photos for analysis. An example isochromatic pattern, recorded during a demonstration experiment, is used here. This example is a digital scan from a contact print of an 8 in. × 10 in. negative recorded in about 1967 via a Linhof scientific view camera with a Zeiss lens, a combination rarely seen in experimental mechanics labs anymore. To save space, the fringe orders are shown in red as they were established by the methods outlined in the next sections. A light-field photo of a loaded arch is used to demonstrate the fringe numbering procedure. It also illustrates some common set-up errors. Some important rules and procedures for ordering the isochromatics are summarized as follows:
  • fringes that match the background are whole-order fringes,
  • isochromatic order is proportional to the maximum shear stress at any point,
  • use white light observation to assist in assigning order,
  • the colors of high-order fringes are ‘‘washed-out,’’
  • watch the fringe pattern develop as the load is increased,
  • isochromatics start from points of high stress and spread to areas of low stress,
  • the low-order fringes might disappear,
  • the stresses in a projecting free corner are zero, so the fringe order there is zero,
  • fringe orders at load points are large, but might not be the largest in the field,
  • between adjacent fringes, the order changes by +1 or −1 or zero,
  • use knowledge of related stress fields, particularly to establish the direction of the stress gradient,
  • isotropic points might appear in the field, for which,
  • the principal stresses are equal,
  • the fringe order is zero,
  • the maximum shear stress is zero,
  • the principal stresses are not necessarily zero,
  • push an indentor onto the edge of the loaded model,
  • the motion of the adjacent fringes yields information about the sign of the boundary stress and the direction of the stress gradient. Some of the rules and procedures given are employed to establish the fringe orders in the example. We note that photoelasticity indicates immediately where material can be removed to save weight without affecting strength or stiffness. The results of the analysis seem correct and are internally consistent, but some uncertainty remains because of the handicaps of not being in the lab and having only one isochromatic pattern to work with. Use of white light or changing the load would confirm or improve the result. A similar photoelastic pattern can be downloaded to provide an exercise in fringe numbering.

Part 37: Photoelasticity IX—Fringes

Isochromatic orders are converted to stress data, and a boundary stress plot is created to show graphically the stress distribution. As established earlier, the difference between the principal stresses at a point equals the fringe order at the point times the wavelength used divided by the product of the material stress-optical coefficient and model thickness. Stress distributions from photoelastic experiments:

  • are difficult to visualize directly from isochromatic fringe patterns,
  • must often be made easily accessible for managers and laypersons,
  • are useful for comparison with theoretical and numerical analyses. A boundary stress plot:
  • can be constructed quickly from photoelasticity data,
  • is easily comprehended,
  • facilitates design evaluation and optimization. Reasons for the usefulness of the boundary stress graph include:
  • maximum stresses occur at boundaries, so failures begin there,
  • one of the principal stresses is zero at a free boundary, so the plot represents the distribution of maximum normal stress or twice the maximum shear stress,
  • boundary stress data are easily correlated with data from other techniques, notably resistance strain gages,
  • the stress distribution can be sketched very quickly with minimum effort and resources. The steps for creating a boundary stress plot are:
  • tape a sheet of tracing vellum over the light-field isochromatic pattern,
  • trace the specimen boundaries,
  • place a tick mark at the center of each isochromatic fringe where it touches a boundary,
  • indicate the order of the isochromatic at that location,
  • transfer the sheet of vellum to the other isochromatic pattern and match up the boundaries,
  • repeat the process of placing and numbering ticks where the isochromatics intersect the boundary,
  • construct normals to the specimen boundary at each tick mark,
  • scale the length of each normal so that it is proportional to the fringe order,
  • connect the tips of the scaled normals with a smooth curve,
  • fill the boundary stress plot with extra lines or shade it in to satisfy cosmetic expectations,
  • write in the magnitudes of the boundary stress at critical locations. Constructing a boundary stress graph with computer graphics software parallels the steps for manual implementation. The stress distribution plot indicates immediately how the shape might be changed to save weight while improving strength and/or stiffness. Comments:
  • compressive and tensile stresses should be plotted on opposite sides of the specimen boundary if possible,
  • creativity is useful when drawing the stress plot around re-entrant corners and fillets,
  • fringe orders are not necessarily integers, although integral and half-orders are usually all that are needed for constructing the boundary stress diagram,
  • a complete photoelasticity study to determine stress magnitude and distribution can be executed in a very short time.
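Following the computer-graphics route just outlined, here is a minimal sketch; the boundary positions, fringe orders, and material constants below are hypothetical:

```python
# Boundary stress plot from boundary fringe orders, using the relation
# (sigma1 - sigma2) = N * wavelength / (C * t). Data are hypothetical.
import numpy as np
import matplotlib.pyplot as plt

wavelength, C, t = 546e-9, 5.0e-11, 6.0e-3   # assumed constants, SI units

# Arc-length positions (mm) along the boundary and fringe orders read there
# (half orders from the light-field photo, whole orders from dark field).
position = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
order = np.array([0.0, 0.5, 1.5, 2.5, 2.0, 1.0, 0.0])

boundary_stress = order * wavelength / (C * t) / 1e6   # MPa

plt.plot(position, boundary_stress, marker="o")
plt.xlabel("position along boundary (mm)")
plt.ylabel("boundary stress (MPa)")
plt.title("Boundary stress plot (hypothetical data)")
plt.show()
```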

Part 38: Photoelasticity X—Transfer

Stresses obtained from a photoelastic model are transferred to the prototype, which likely differs from the model in material, size, and shape. To determine stresses in the prototype from model studies, the following questions must be answered.

  • How does one account for the difference of material properties?
  • How does one adjust for the difference of loads?
  • May one make the model larger or smaller than the prototype, and, if so, how does one compensate for the size difference? Transfer of results from a model experiment to the prototype is:
  • a profound topic,
  • productively applied in many fields including, among many more,
  • aviation,
  • fluid mechanics,
  • geomechanical engineering,
  • biomechanics,
  • based on dimensional analysis and known related solutions. The general field of dimensional analysis is sophisticated, but specific applications facilitate less-general solutions that are correct and serviceable. Basic solutions for stress in structural components do not include material properties, so we infer that material properties need not be considered when transferring stresses from model to prototype. This conclusion does not apply to all cases, so more comprehensive study is required. Elasticity theory leads to the following rules that govern the importance of the properties of model and prototype:
  • For shapes with no holes, material properties need not be considered.
  • For shapes with holes that are free of unequilibrated force on any boundary, the material properties need not be considered.
  • If any boundary of a shape with holes carries an unequilibrated load, then Poisson’s ratio is a small factor in the stress distribution.
  • If the body forces are not zero or constant, then the Poisson ratio is a small factor.
  • In those cases where it is a factor, ignoring the difference between Poisson’s ratios of model and prototype induces errors that are usually small enough to ignore.
  • If displacement boundary conditions are imposed, then the modulus of elasticity is an important factor that is easily taken into account. The conclusion is that in most photoelasticity studies, material properties need not be considered when transferring the results of experiment to the prototype. The load difference between model and prototype is easily accounted for because stress is proportional to load, other things being equal. Geometric similarity requires that, while the size of the model can be changed relative to the prototype, the shape must not be changed. Think of a photographic enlargement or reduction. Scaling stresses to account for the difference in size between model and prototype is easy to do, because stress is inversely proportional to the square of the magnification.
  • If the model is twice as large as the prototype, then the stresses are reduced by a factor of four.
  • This conclusion, based on elementary considerations, is supported by strict dimensional analysis. In photoelasticity, the model-to-prototype thickness ratio can be made different from the size ratio. This practice is often useful. In words, the scaling law declares that the stress in the prototype equals the stress in the model times (the ratio of prototype load to model load), times (the ratio of model thickness to prototype thickness), and times (the ratio of model size to prototype size). The scaling laws for all types of model analysis, including photoelasticity, are obtained through dimensional analysis based on the Buckingham Pi Theorem, first used by Lord Rayleigh and formalized by Edgar Buckingham. Dimensional analysis is a powerful aid in understanding complex physical phenomena, and it can lead to the maximization of benefit from experiments on problems involving many parameters.

Part 39: Photoelasticity XI—Polariscope

A simple approach to properly align the axes of the polarizers and quarter-wave plates in a polariscope is described. This calibration is required before accurate photoelastic data can be obtained. No data can be better than the calibration of the instrument used to acquire that data.

  • This axiom applies to all measurement apparatus, including optical devices.
  • It is not wise to trust the calibration provided by others. Assume that:
  • the polarizers and quarter-wave plates can be mounted loosely in holders that allow them to be rotated during calibration and then clamped,
  • the holders rest in mounts so that they can be independently rotated through known increments,
  • the mounts are attached to the polariscope chassis, likely using platforms that slide along an optical bench,
  • the optical bench is level. Calibration steps are summarized as follows:
  • Make or obtain a calibration specimen.
  • This specimen must show a well-defined known isoclinic that is sensitive to misalignment of the polarizers.
  • A disc in exact diametral compression is commonly used.
  • Set the holders to zero rotation angle in their mountings.
  • Establish approximately the polarization axis of the polarizer by creating extinction of obliquely reflected light. Mark this axis and mount the polarizer loosely in its holder with its axis vertical.
  • Repeat the above step for the analyzer, but mount it so its axis is horizontal.
  • Get the axes of polarizer and analyzer crossed by rotating one relative to the other to create minimum intensity of the transmitted light.
  • Mount the calibration specimen in the load frame and apply load.
  • A plumb line assures that the load is along the vertical diameter.
  • Rotate the crossed polarizer and analyzer together to create the zero-order isoclinic in the specimen image.
  • For a disc, the isoclinic will be a cross.
  • This step fixes the polarization axes relative to earth. Clamp the polarizer and analyzer in their respective holders.
  • At this point, the device is calibrated and set up in the linear dark-field configuration.
  • Place the first quarter-wave plate between polarizer and model and adjust it to regain the zero-degree isoclinic, then clamp it in its holder.
  • The quarter-wave plate axes are now aligned with the axes of polarizer and analyzer.
  • Remove this plate from the system.
  • Place the second quarter-wave retarder between model and analyzer, and repeat the step above to align it.
  • Put the first quarter-wave plate back into the system and check the isoclinic with both wave plates in place.
  • If the background has become light instead of dark, rotate one of the quarter-wave plates in its holder by 90° in either direction, then reclamp it.
  • At this point, the polariscope is calibrated, the axes of all four optical components are properly aligned, and the quarter-wave plate fast axes are crossed. The setup is now in the dark-field linear configuration, even with the quarter-wave plates in the optical path. For isoclinic studies, it is better to remove the quarter-wave plates. To create the linear light-field system from the linear dark-field configuration, rotate the polarizer or analyzer by 90°. To convert the linear dark-field polariscope to circular dark field, rotate both quarter-wave plates by 45° in the same direction. To change from dark-field circular to light-field circular, rotate either the polarizer or analyzer through 90°. Additional useful comments:
  • Sheets that combine a polarizer and a quarter-wave plate are available and are useful for simple polariscopes.
  • The zero-degree isoclinic in a disc is very sensitive to misorientation of the crossed polarizers, hence its usefulness as a calibration specimen.
  • Use a reasonably large load and intense light to narrow the isoclinic.
  • White light illumination will wash out distracting high-order isochromatics.
  • Nonlinear processing of the image can help sharpen the isoclinic.
  • Several alternative setups are possible.
  • Errors are reduced if the fast axes of the quarter-wave plates are kept crossed.

Part 40: Photoelasticity XII—Recording

This article:

  • reviews the basic characteristics of isoclinic fringes,
  • mentions various uses of isoclinic fringe data,
  • describes in turn methods of acquiring the stress direction data that are needed to satisfy various application requirements,
  • presents some advanced tips and techniques. An isoclinic fringe provides the inclination of the principal stress axes along only that one fringe. A family of isoclinics is needed to visualize the stress directions over the entire extent of the model. Reasons for acquiring isoclinic fringe data include:
  • discovering the best orientations for strain gages,
  • measuring precise isochromatic fringe orders at particular points by compensation methods,
  • obtaining separate principal stresses σ1 and σ2 by various techniques,
  • comparing experimental observations with numeric or analytic solutions,
  • visualizing the entire stress field for design improvement. To obtain principal directions at a particular point in the model:
  • rotate polarizer and analyzer until the isoclinic covers the point of interest,
  • read the inclination of the stress axes off the calibrated mounting rings containing the polarizers. To obtain the locus of points in the model having known orientations of principal axes,
  • Set the polarizer and analyzer to that angular orientation,
  • record, as by tracing, the complete isoclinic. The complete family of isoclinic fringes is called an isoclinic pattern. Creating the isoclinic pattern requires recording each isoclinic for several incremental rotations of the polarizers. Various techniques can be implemented to accomplish this task. A simple and effective technique is to simply trace the isoclinics by hand. Detailed steps to accomplish the task are:
  • Set up the linear dark-field polariscope in a darkened laboratory with the polarizer mounting rings set to 0° rotation.
  • Put the model in place and in known alignment with the previously determined axes of the polarizers.
  • Use a large-format view camera to focus an image of the model on the ground glass camera back.
  • Tape a sheet of tracing medium to the ground glass.
  • Place enough load on the model to clearly establish the isoclinics in the image.
  • Use a marking pen to trace the outline of the specimen, the load points, and load direction.
  • Trace the centerline of the observed isoclinic and label it as the 0° isoclinic.
  • Indicate clearly on the drawing the direction in which the polarizers are to be rotated in order to create successive isoclinics.
  • Rotate the crossed polarizer and analyzer by a certain increment, say 10°, in the direction chosen.
  • Trace the centerline of the new isoclinic and label it appropriately.
  • Repeat the above step until 90° rotation is reached and the isoclinics start to repeat.
  • Switch on the lab lighting and examine your tracing. Fix mistakes as necessary.
  • Remove the tracing from the ground glass and transfer it to a table. Use another sheet of tracing medium as an overlay and manufacture a tidy version with drawing equipment. Tracing the center of a dark blob in the dark will seem difficult and imprecise. But, the human eye coupled with our highly developed hand-eye coordination assures a good result when local wobbles in the tracings are averaged out. More advanced techniques and useful tips to generate isoclinic patterns include, in increasing order of sophistication:
  • Place the analyzer close to the model and trace the isoclinics directly on the model surface.
  • Reflect the image from a mirror onto a horizontal ground glass or tablet to facilitate tracing.
  • Use an ordinary lens to focus the image onto a large tablet.
  • Install a video camera into the polariscope and couple it directly to a monitor. Tracing medium is then taped directly to the monitor screen and the successive isoclinics traced with a marker.
  • Scan the traced image into a computer and use the tracing function in graphics software to ‘‘trace the tracing’’ and smooth it.
  • Use a video camera, a computer, and a digital projector to create a large image on posterboard, trace it with a coarse marker, then reduce the picture.
  • Employ a video camera with a computerized digital image capture system to record the successive isoclinic fringes separately and then combine them into one comprehensive pattern.
  • Direct analog photography, including multiple exposures of the succession of isoclinics, does not seem to work very well without nonlinear processing.
  • Use white light for observation of isoclinics. Make a model for the isoclinic study from a plastic that has limited stress-birefringence to keep the isochromatic orders low.

Part 41: Photoelasticity XIII—Stress

This article describes how to convert an isoclinic pattern into a system of stress trajectories, which is a picture of the principal stress directions over the entire model. A stress trajectory is a line that is everywhere tangent to one of the principal stresses in a stress field. A stress trajectory pattern:

  • is a network of stress trajectories,
  • shows the principal stress directions throughout the specimen,
  • consists of two families, one for maximum principal stress and one for minimum,
  • is an orthogonal network. The steps to draw a stress trajectory pattern by hand are as follows:
  • Create a large copy of the isoclinic pattern.
  • Draw a large number of small guide crosses along each isoclinic so that,
  • the axes of each cross are parallel to the crossed axes of the polarizer and analyzer as they were set to create that particular isoclinic,
  • the axes of each cross are color-coded to show which is maximum principal stress direction and which is minimum principal stress direction.
  • Attach a transparent overlay to the isoclinic pattern with its multitude of crosses for drawing the stress trajectories.
  • Trace the specimen outline as well as the load points and load directions.
  • Create a stress trajectory by sketching on the overlay a line across the isoclinic pattern that is parallel to the appropriate axis of the nearby crosses wherever the line transects an isoclinic. Helpful tips are:
  • The crosses serve only as local directional guides.
  • Do not just connect the crosses.
  • Use a color that matches the color of whichever principal stress axis you are following.
  • Remember that trajectories that are very near an unloaded boundary must lie parallel or perpendicular to the boundary.
  • Do not switch from one principal stress axis to the other as you sketch a trajectory.
  • Repeat the drawing of stress trajectories until they form a pattern that shows the principal directions over the extent of the specimen.
  • Examine the pattern to see that it forms a smooth orthogonal network that is consistent with the behavior of deformable solids. Touch up as necessary.
  • Use drafting instruments to generate a cosmetic copy on a second overlay. Stress trajectory patterns are easy to generate on a computer using good graphics software and following these steps:
  • Open the isoclinic pattern file, or create it by scanning the isoclinic pattern and using the trace function and node adjustment tools.
  • Create a large cross on the monitor that shows horizontal and vertical.
  • Rotate the image of the isoclinic pattern to align it with the cross, if necessary, in order to make the image coordinates match the global system in which the polariscope was calibrated.
  • Generate a small cross with its axes horizontal and vertical, color-code its axes appropriately, and group it into a single object.
  • Make a multitude of duplicates of this cross.
  • Move these crosses so as to scatter them profusely along the 0° isoclinic. Rotate the master cross in the direction in which the polarizer and analyzer were rotated when making the isoclinics, and by the increment of rotation used.
  • Duplicate this rotated master cross, and distribute the duplicates along the appropriate isoclinic.
  • Continue this process of rotation, duplication, and distribution until all the isoclinics have been covered with crosses showing the corresponding stress directions. Pay attention to the color-coding of the cross axes as you do this.
  • Save this pattern of isoclinics with their crosses.
  • Select the Bezier curve drawing tool.
  • Draw a stress trajectory with mouse or touch pad by placing nodes fairly close together and forcing the curve generated to have the proper inclination as it transects each isoclinic.
  • Use the shape (node adjustment) tool to smooth the curve and get it everywhere aligned with the nearby guide axes.
  • Give the finished trajectory the proper color and line weight.
  • Repeat the above four steps until the pattern is complete. Singular points, including load application points and isotropic stress points, can cause confusion when drawing either isoclinic patterns or stress trajectories. Be aware of these potential difficulties, follow the fundamental rules, and read about them if necessary.

Part 42: Photoelasticity XIV—Reflection

This article describes how stresses or strains over the surface of an actual prototype are determined through the use of photoelastic coatings. Basic devices, applications, theory, limitations, and advantages are discussed. Reflection photoelasticity:

  • provides strains or stresses on the surface of an actual structural component,
  • does not require fabrication and testing of a model,
  • can be used on parts of any size and those having complex geometries,
  • can be used on a wide variety of materials and structures, including concrete, biological structures, machine parts, tires, and so on,
  • can be used to measure dynamic or cyclic strains,
  • is capable of useful accuracy but is often used in semi-quantitative analysis,
  • is easy to use,
  • is cost-effective,
  • is widely accepted in industry. The basic reflection polariscope is similar to a diffused-light transmission polariscope that is folded in the middle and includes in the optical path:
  • a light source that is usually a projection lamp,
  • a crude collimator or diffuser,
  • a polarizer,
  • a quarter-wave plate,
  • the birefringent coating attached to the specimen,
  • a diffusing layer between the coating and the specimen,
  • a second quarter-wave plate,
  • an analyzer that is the second polarizer,
  • a color filter,
  • a system to view or photograph the specimen coating. Birefringent coatings:
  • are available in a wide range of stiffnesses and thicknesses,
  • are chosen to fit the problem at hand,
  • can be made using ordinary photoelastic plastics,
  • are often purchased as flat flexible sheets that can be applied to flat or cylindrical surfaces,
  • may be applied to complex surfaces using the ‘‘contour sheet’’ method,
  • are usually attached to the specimen with cement containing aluminum powder, thereby creating the light-scattering layer between the specimen and the coating. The strains in the photoelastic coating are assumed to be the same as the strains in the specimen surface, so the isochromatic fringe pattern does not yield directly the stresses in the specimen. The interpretation of isoclinic data from photoelastic coatings is the same as that pertaining to transmission photoelasticity. Analysis shows that the principal strain difference at a point on the specimen surface is proportional to the isochromatic fringe order in the coating and involves also:
  • the wavelength of light,
  • the stress-optical coefficient of the coating,
  • the coating thickness,
  • the Poisson ratio of the coating,
  • the elastic modulus of the coating. The coating material properties are often lumped into a ‘‘K’’ factor that is provided by the manufacturer. The analysis can be extended to obtain the principal stress difference in the specimen, in which case the specimen material properties must be considered. Isochromatic orders from photoelastic coatings are usually low, so techniques for fractional fringe measurement must be implemented for precise studies. Potential sources of error in reflection photoelasticity include:
  • The light incidence angle and the angle of viewing are not normal to the coating in the usual setup, so the photoelastic effect is averaged over a finite area.
  • The polarization of light is known to change when it is reflected or scattered.
  • The stress-optic properties of the coating might not be accurately known.
  • The specimen might be reinforced by the coating, thereby changing the strain field.
  • If out-of-plane bending occurs, the coating does not accurately render the strain at the specimen surface. The strain at the edge of a coating might not be the same as the specimen strain; this problem is especially serious at the edges of holes or similar stress risers. Coating thickness, strain-optical coefficient, and stiffness must be chosen with care to achieve a balance between sensitivity and acceptable error. Reflection photoelasticity:
  • is capable of roughly 5 to 20% accuracy, depending on technique,
  • is often used in a semi-quantitative mode using white light for design optimization.
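A minimal sketch of the coating relation in one widely used form, in which the coating properties are lumped into a strain-optic coefficient K supplied by the manufacturer and the factor of two accounts for the light passing through the coating twice; this specific form and all numerical values are assumptions of the sketch:

```python
# Reflection photoelasticity: (eps1 - eps2) = N * wavelength / (2 * t * K),
# one common lumped-coefficient form. All values are assumed.

wavelength = 575e-9   # m, effective wavelength (assumed)
t = 2.0e-3            # m, coating thickness (assumed)
K = 0.15              # coating strain-optic coefficient (assumed)
N = 1.5               # observed isochromatic fringe order (assumed)

strain_difference = N * wavelength / (2.0 * t * K)
print(f"eps1 - eps2 = {strain_difference * 1e6:.0f} microstrain")  # ~1438
```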

Part 43: Photoelasticity XV—Three Dimensions

This article overviews two of the five classical techniques for performing photoelasticity experiments in 3 dimensions. The most useful, demanding, and interesting problems in experimental mechanics require 3-dimensional analysis. 3-dimensional photoelasticity:

  • is more complex and time-intensive than are studies in two dimensions; the need must justify the undertaking,
  • serves as a paradigm for other methods of 3-D analysis, in addition to being very useful itself,
  • requires that the model be optically or mechanically sliced into an assembly of 2-D problems. The five classic approaches to 3-dimensional photoelasticity are:
  • slicing after stress freezing,
  • embedded polariscope,
  • layered models,
  • scattered light,
  • holographic photoelastic interferometry. The isochromatic pattern obtained from a slice yields information about the principal stresses in the plane of the slice that, in general, are different from the absolute principal stresses. Additional experimentation and/or analysis are needed to determine the latter from the former. For a surface slice or any other slice taken perpendicular to one of the principal stresses, the stresses obtained are the true principal stresses. Only these cases are considered in this article, as they are sufficient for the majority of studies. The birefringence is frozen into the model by:
  • heating it to the rubbery state,
  • applying the load or deformation,
  • slowly cooling it while loaded. Following stress-freezing, the model is sawn into slices. The stress-birefringence remains frozen into the slice and is not affected by the sawing. A surface slice:
  • contains two of the true principal stresses,
  • is viewed in an ordinary transmission polariscope,
  • yields isoclinic fringes that indicate the principal directions,
  • allows determination of the principal stress difference (σ1 − σ2) in the plane of the slice. A subslice:
  • is taken from the surface slice so that its edges are parallel to one of the principal stresses, say σ1, as indicated by the isoclinics,
  • is viewed in a polariscope normal to its edges along the σ2-axis,
  • yields the single principal stress σ1 at the point of interest. A sub-subslice:
  • is taken from the subslice so that its edges are normal to the σ1-axis,
  • is viewed along the σ1-axis,
  • provides the second principal stress σ2. The slicing and observing can be repeated to map the stress field for the entire surface. An interior slice normal to one of the principal stresses is treated the same as a surface slice, except that only principal stress differences can be obtained without further work. 3-D photoelasticity with composite models is accomplished by using one of the following:
  • a model with a layer of birefringent material sandwiched inside an otherwise optically inactive material, which is viewed in a polariscope,
  • a model of birefringent material that carries embedded polarizers so as to form an internal polariscope. The embedded polariscope model is viewed normal to the internal slice and polarizers. Technique parallels that for observing isochromatic patterns in ordinary transmission photoelasticity, and interpretation of results is the same.

Part 44: NEXUS

Part 45: Measuring Phase Difference—Part I: The Problem

This article:

  • reviews basic interference concepts,
  • studies non-ideal interference,
  • leads to formulation of the two problems that must be solved to obtain precise data from any implementation of interferometry. In whole-field experiments, the goal of interference measurement is a map of change of phase difference, relative retardation, or change of path length difference. The result is usually called a phase change map. For ideal interference of two identical waves, the observed intensity varies between zero and a maximum as the square of the cosine of the PLD times pi over wavelength. The power of interference techniques is that PLD can, in principle, be obtained from measurements of intensity. Three related problems arise when trying to determine PLD from a measurement of final intensity through use of the retardation-intensity equation, namely:
  • We cannot determine which cycle of the intensity graph is the correct one.
  • When possible, this problem can be solved by counting cycles and half-cycles as they pass.
  • We cannot determine the exact fraction of a cycle without knowing one more datum.
  • Possibly an observation of the maximum intensity can be used.
  • The problem is poorly conditioned near maxima and minima of the graph.
  • Large changes of retardation yield only small changes of intensity in these regions. The problems found in the ideal case are compounded in real-world interferometry. More analysis is needed. Complications experienced in non-ideal interference include:
  • The interfering waves do not have identical amplitudes.
  • They might not have the same polarizations.
  • They might not be perfectly coherent.
  • The starting PLD is probably not zero or at a maximum or minimum.
  • The intensity versus PLD relationship is probably contaminated by noise.
  • The interfering waves might not be brought together along the same axis. Analysis of the collinear interference of two waves that have differing amplitudes shows how to attend to the complications mentioned above and suggests what measurements are required for precise determination of phase difference. The observed intensity can be seen as the average of the intensities of the two waves plus half their difference times the cosine of the phase difference. To determine phase difference from intensity measurements in the non-ideal case, the following problems must be addressed:
  • Again, we do not know which cycle of the curve is correct.
  • Counting cycles and half-cycles of maximum and minimum between start and finish might solve this problem if the experiment allows such a procedure.
  • There are three unknowns in the equation.
  • Measurements of maximum and minimum intensity in addition to the final intensity would allow a determination of the fraction of a cycle.
  • The phase difference at the start of the experiment must also be measured because it is probably not known a priori.
  • The procedure is poorly conditioned.
  • Small changes of intensity correspond to large changes of phase difference near the maxima and minima of the cosine relationship. In summary, to obtain a valid measurement of the change of phase difference:
  • The phase differences at the start and the end of the experiment must be established and the results subtracted.
  • For each determination of phase difference, at least three intensity measurements are required.
  • Some method of counting whole interference cycles through the course of the experiment must be incorporated.
  • The analysis method should be well conditioned to minimize uncertainties.
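In symbols, the two intensity relations summarized in this part can be written as follows, with δ the PLD, Δφ the phase difference, and I_max, I_min the extreme intensities of the interference cycle; the non-ideal case is expressed here in terms of the observed maximum and minimum intensities, and the notation is an assumption of this sketch:

```latex
I = I_{\max}\cos^2\!\left(\frac{\pi\,\delta}{\lambda}\right)
\quad\text{(ideal case)}, \qquad
I = \frac{I_{\max}+I_{\min}}{2} + \frac{I_{\max}-I_{\min}}{2}\cos\Delta\varphi
\quad\text{(non-ideal case)}
```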

Part 46: Measuring Phase Difference—Part II: Compensation

This article introduces the use of a compensator or phase shifter along with a photometric device to accurately measure path length changes via interferometry. An interferometer creates a volume of fringes in space. The objective is to measure exactly the interference order at a specific point in the field. As path length difference changes, the fringes appear to move past the detector. To determine nearest whole- and half-orders, count the cycles of maximum and minimum detector readings as the experiment progresses from its initial state to its final state. To measure exact partial interference order, insert into one of the interferometer paths a graduated calibrated adjustable retarding plate, which is called a phase shifter or a compensator. Adjust the phase shifter so as to bring the detector output to a maximum, which is equivalent to moving the next lower or next higher whole order to the detector location. The phase shifter, if properly calibrated for the wavelength used, reads out the fractional fringe order.

  • Care is required to determine if the reading should be subtracted from the higher order or added to the lower order. If the starting intensity is not a maximum, then the initial interference order is measured using the same procedure, with the nearest whole order arbitrarily numbered. The initial measurement is then subtracted from the final measurement. Many types of phase shifters have been invented and used, including:
  • a glass or quartz wedge with an aperture,
  • two glass or quartz wedges in contact, but with one able to slide over the other so as to create a slab whose thickness can be changed,
  • a mirror fastened to a piezoelectric crystal that is controlled by a driving voltage,
  • rotation of one of the polarizers in photoelasticity,
  • for photoelasticity, a calibrated tensile specimen of birefringent material. Sensitivity can be enhanced by detecting intensities that are at the midpoint of the swing from maximum to minimum. Two compensator readings are taken at this level, one on either side of the whole- or half-order. These readings are averaged to give the correct value.

Part 47: Measuring Phase Difference—Part III: A Phase-Shifting Setup

This article describes a basic phase-shifting interferometer and discusses its limitations and possible refinements. The Michelson-type reflection interferometer is chosen as a tutorial model because it:

  • is inexpensive and easy to set up,
  • is a useful pedagogical exemplar,
  • can be made to give good results,
  • allows us to point out some important limitations and refinements. The function of the interferometer is as follows:
  • Coherent light is collimated.
  • The light is then divided into two beams by a beam splitter.
  • The object beam illuminates the specimen, which is flat and reflective.
  • Light from the specimen is reflected to an array of sensors.
  • The reference beam is diverted to the phase shifter, which is a mirror that can be translated by an actuator.
  • The reference beam is then directed to the sensor array. Surfaces of constant intensity caused by interference between the reference and object beams are created where the beams overlap.
  • These surfaces are loci of constant phase difference.
  • Each sensor in the array generates a voltage that is proportional to the local irradiance.
  • A computer records each sensor voltage and the location of the sensor, the result being a map of intensity distribution in the plane of the sensor array.
  • The computer signals the actuator to move the phase-shifting mirror so as to introduce a known change of path length difference.
  • The sensor voltages are again recorded.
  • This process is repeated until enough data are recorded.
  • An algorithm reduces the data to a map of phase difference.
  • The phase difference map(s) are reduced to displays of initial and final specimen shape, or of the difference between them.

Limitations of the basic setup include:
  • The specimen must be reasonably flat in both its initial and final states.
  • If no imaging lens is included, establishing correspondence between a sensing element and a point on the specimen is difficult.
  • The specimen must not scatter light.
  • The reference beam is not collinear with the object beam.
  • Oblique interference causes in-plane movement to contaminate the measurement of out-of-plane displacement.
  • Motion of the phase-shifting mirror causes discrete but usually small lateral shifts of the reference beam.
  • The mirrors must be large.
  • The heavy phase-shifting mirror is actuated only with difficulty.

The limitations of the basic setup can be eliminated by introducing the following refinements (a simulation sketch of the basic arrangement follows this list):
  • A lens may be used to create an image of the object on the sensor array.
  • Correspondence between points on the specimen and the sensor elements is established.
  • The specimen need not be flat.
  • The specimen need not be reflective but should have a matte finish to scatter the light.
  • Errors caused by ray obliquity might be introduced, but these can be minimized: the path lengths should be made long relative to the beam diameter to reduce the effects of oblique interference on the measurement.
  • The reference beam can be steered so it is collinear with the object beam by introducing a partial mirror between the specimen and the sensor array.
  • The reference beam may be expanded and collimated on its approach to the sensors, and the object beam may be expanded and collimated as it approaches the specimen.
  • The beam splitter and mirrors can be small.
  • The phase-shifting mirror can be small, light, and easily actuated.
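To make the walkthrough concrete, here is a hedged Python simulation of the intensity maps such a sensor array might record. It assumes an ideal collinear reference beam, a slightly tilted flat specimen, and the standard two-beam interference relation I = I_o + I_r + 2·sqrt(I_o·I_r)·cos φ; all dimensions and names are invented for illustration.

```python
import numpy as np

# Simulate the intensity maps a sensor array would record in the
# Michelson-type setup as the phase-shifting mirror is stepped.
wavelength = 633e-9                      # assumed He-Ne source, meters
ny, nx = 128, 128                        # sensor array size
x = np.linspace(0.0, 5e-3, nx)           # 5 mm field of view
height = np.tile(x * 1e-4, (ny, 1))      # specimen tilt: 0.1 um rise per mm

# Out-of-plane height changes the object path twice (illumination and
# return), hence the factor 4*pi/wavelength in the phase difference.
phi = 4.0 * np.pi / wavelength * height

I_obj, I_ref = 1.0, 1.0                  # beam intensities (arbitrary units)
steps = [0.0, np.pi / 2, np.pi, 3 * np.pi / 2]
maps = [I_obj + I_ref + 2 * np.sqrt(I_obj * I_ref) * np.cos(phi + s)
        for s in steps]                  # one intensity map per mirror step
print(maps[0].shape, maps[0].min(), maps[0].max())
```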

Part 48: Measuring Phase Difference—Part IV: Phase-Stepping Algorithms

This article:

  • describes the steps to perform whole-field phase-stepping interferometry,
  • develops the equations for two methods of data reduction.

Determination of the phase difference profile over the optical field necessitates the following steps:
  • record intensity maps for three or more phase steps,
  • perform phase difference calculations for each detector in the array,
  • eliminate ambiguities and adjust the calculated phase difference to modulo 2π,
  • remove the 2π limitation to obtain the final correct phase difference,
  • store and display the results.

From this point onward, φ represents the phase difference between the interfering waves at a point in the detector array; Δφ will be used later to identify the change of phase difference. The intensity at a detector is I = I_av(1 + γ cos φ), where I_av is the average intensity and γ is the intensity modulation. The unknowns are:
  • the average intensity,
  • the intensity modulation,
  • the phase difference.

Assume that:
  • the phase shifter is calibrated,
  • the phase shifts are applied in known discrete steps (phase stepping),
  • the phase shifter is stopped at each step while intensity data are recorded.

In the three-step technique, intensity maps are recorded for phase steps of π/4, 3π/4, and 5π/4. I1, I2, and I3 are the intensities recorded at any specific detector for these three phase steps. At each detector, the phase difference is φ = arctan[(I3 − I2)/(I1 − I2)]. The arctangent function yields the phase difference only modulo π, meaning that we do not know in which quadrant the correct value lies; this ambiguity must be eliminated. The modulation metric is also obtained in terms of the intensity data.

In the four-step technique, intensity maps are recorded at phase steps of 0, π/2, π, and 3π/2. The recorded intensities at each detector are I1 through I4, and the phase difference is φ = arctan[(I4 − I2)/(I1 − I3)]. Both formulas are implemented in the sketch below.
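A minimal NumPy sketch of both formulas, checked against the intensity model quoted above; np.arctan2 is used so that the quadrant bookkeeping of the next Part is handled automatically, and the variable names are illustrative.

```python
import numpy as np

def phase_three_step(I1, I2, I3):
    """Three-step formula for phase steps pi/4, 3pi/4, 5pi/4:
    tan(phi) = (I3 - I2) / (I1 - I2)."""
    return np.arctan2(I3 - I2, I1 - I2)

def phase_four_step(I1, I2, I3, I4):
    """Four-step formula for phase steps 0, pi/2, pi, 3pi/2:
    tan(phi) = (I4 - I2) / (I1 - I3)."""
    return np.arctan2(I4 - I2, I1 - I3)

# Check against the model I = I_av * (1 + gamma * cos(phi + step)).
phi_true, I_av, gamma = 1.2, 1.0, 0.8
frames3 = [I_av * (1 + gamma * np.cos(phi_true + s))
           for s in (np.pi / 4, 3 * np.pi / 4, 5 * np.pi / 4)]
frames4 = [I_av * (1 + gamma * np.cos(phi_true + s))
           for s in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)]
print(phase_three_step(*frames3))   # ~1.2
print(phase_four_step(*frames4))    # ~1.2
```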

Part 49: Measuring Phase Difference—Part V: Phase Calculations

This article:

  • describes two methods to convert raw phase-stepping results to modulo 2π,
  • tells how to construct a wrapped phase difference map,
  • explores the relationships between fringe patterns and wrapped phase maps,
  • suggests how to unwrap the phase data to complete the experiment.

Two ambiguities must be eliminated in order to utilize the data from phase-stepping interferometry, namely:
  • The phase differences are known only modulo π; we do not know in which quadrant the true result lies.
  • We do not know how many multiples of 2π must be added to the measured phase difference.

To refine the data so that they are useful, we must:
  • convert all the values to modulo 2π,
  • eliminate the 2π boundaries to get the complete phase difference profile. In the direct look-up-table method:
  • Only absolute values are used to calculate the phase difference modulo π/2 from the arctangent function derived for phase-stepping,
  • A table is then used to interpret the signs of the numerator and denominator of the arctangent function to place the phase difference in the correct quadrant, modulo 2π.

A convenient way to eliminate the π ambiguity is to use the atan2(y,x) function that is included in most commercial software packages. Care is required because this function is not implemented uniformly across software packages; the sketch below compares the two approaches.
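A small Python sketch contrasting the two routes: the look-up-table logic is written out explicitly, and NumPy's arctan2 (analogous to the atan2 found in most packages) is shown giving the same angle once shifted into [0, 2π). The zero-denominator edge case is omitted for brevity, and the input values are hypothetical.

```python
import numpy as np

def phase_lut(num, den):
    """Look-up-table style: the arctangent of absolute values gives an
    angle in [0, pi/2]; the signs of numerator and denominator then place
    it in the correct quadrant, modulo 2*pi (den == 0 case omitted)."""
    base = np.arctan(abs(num) / abs(den))
    if num >= 0 and den >= 0:
        return base                   # first quadrant
    if num >= 0 and den < 0:
        return np.pi - base           # second quadrant
    if num < 0 and den < 0:
        return np.pi + base           # third quadrant
    return 2.0 * np.pi - base         # fourth quadrant

num, den = -0.4, 0.3                  # hypothetical arctangent-ratio terms
print(phase_lut(num, den))                    # ~5.356
print(np.arctan2(num, den) % (2.0 * np.pi))   # same angle via atan2
```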
Use of the phase difference data depends on the application; applications divide into two general cases:

  • If the phase difference distribution is smooth and the starting values were zero or constant, then the data can be used directly.
  • If the starting phase differences are not constant, then initial and final phase data must be recorded and the initial results subtracted from the final to obtain the change of phase difference during the experiment.

To create a useful picture of the wrapped phase difference, assign colors or gray scale to the phase difference values in the array and plot them at the coordinates of the sensors in specimen space:
  • If in gray scale, sharp black-white breaks will delineate the 2π phase boundaries.
  • The display resembles an interference fringe pattern where the phase boundaries correspond to the whole-order fringes.
  • Phase difference maps should not be called "fringe patterns."

The fringe pattern in an electro-optic birefringent fluid is used to explore the relationships between interference fringes and wrapped phase difference maps:
  • Any interference fringe pattern can be viewed as a phase difference map.
  • But, the phase data are now in a numerical array.
  • We do not need to track and count fringe orders any more.
  • Our insight allows us to infer the saw-tooth appearance of a phase-difference map that corresponds to the fringe pattern.
  • This exercise leads to clues as to how to determine the complete unwrapped phase difference map.
  • Phase unwrapping is analogous to fringe counting in analog interferometry.

Phase unwrapping to complete the experiment requires that we:
  • establish a known starting point in specimen space,
  • unwrap the graph of phase difference versus distance from the starting point to eliminate the 2π breaks along a given cross section of the specimen (a one-dimensional sketch follows this list),
  • repeat the process for all cross sections.
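The cross-section unwrap just listed can be sketched in a few lines of Python, assuming clean data sampled finely enough that true neighbor-to-neighbor changes stay below π:

```python
import numpy as np

def unwrap_1d(wrapped):
    """Unwrap one cross section: scan along the points and, wherever the
    neighbor-to-neighbor jump exceeds pi, add or subtract 2*pi to all
    downstream values to force continuity."""
    out = np.array(wrapped, dtype=float)
    offset = 0.0
    for i in range(1, len(out)):
        jump = wrapped[i] - wrapped[i - 1]
        if jump > np.pi:
            offset -= 2.0 * np.pi
        elif jump < -np.pi:
            offset += 2.0 * np.pi
        out[i] = wrapped[i] + offset
    return out

true_phase = np.linspace(0.0, 6.0 * np.pi, 200)     # smooth 3-cycle ramp
wrapped = np.angle(np.exp(1j * true_phase))         # wrapped to (-pi, pi]
print(np.allclose(unwrap_1d(wrapped), true_phase))  # True
```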

Part 50: Measuring Phase Difference—Part VI: Phase Unwrapping and Determining Displacement

This article:

  • describes how to unwrap a wrapped phase difference map,
  • tells how to obtain specimen displacements from unwrapped phase data,
  • presents a computer routine for conducting a complete phase-stepping experiment,
  • shows a complete example from digital holographic interferometry.

Phase unwrapping is critical in many areas of science, including medicine and geography, in addition to its extensive value in experimental mechanics. Phase unwrapping:
  • seems simple in that it involves stringing the segments of the wrapped plot together so that they form a continuous line,
  • is actually a complex process that has been the focus of much research,
  • can be implemented using many different algorithms.

Unwrapping a phase change map requires the following basic steps:
  • scanning progressively along rows or columns of pixels,
  • calculating the phase differences between adjacent pixels—the ‘‘phase gradient,’’
  • finding where the phase gradient becomes discontinuous,
  • adding or subtracting a multiple of 2π to/from all downstream values to force continuity,
  • repeating the process until the entire phase difference map is covered.

If the phase gradient between adjacent pixels is larger than π, then a phase discontinuity has been located. The simple procedure described here assumes quality data and the absence of physical discontinuities in the specimen; otherwise, more sophisticated approaches must be employed. The unwrapped phase difference values are all relative to the starting point; if that point is fixed, then the values obtained are absolute.

A MATLAB® script is presented that:
  • initializes the input and output devices,
  • sets the phase stepping increment,
  • acquires a before-load intensity map for each of 4 phase steps,
  • incorporates a pause during which the load is applied to the specimen,
  • acquires an after-load intensity map for each of 4 phase steps,
  • uses the 4-step algorithm for computing the before- and after-load phase difference maps,
  • subtracts the before-load map from the after-load map to develop the change-of-phase-difference map,
  • rewraps the change-of-phase-difference map,
  • smooths the map,
  • unwraps the map using the unwrap function built into MATLAB®.

An example application shows the results of a complete time-average digital holographic interferometry analysis of a vibrating clarinet reed. Conversion of change of phase difference to physical specimen displacement requires finding the optical path length change that corresponds to the measured change of phase difference at each sensor. For the simplest case, in which out-of-plane displacement is measured using normal-incidence illumination and viewing, the result from digital phase-stepping interferometry is the same as that found by other interferometry procedures, such as Newton's rings, provided that the phase difference change is converted to fringe order. A sketch of the complete pipeline follows. This article closes the series on the basics of optical methods of measurement.
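As a closing illustration, here is a hedged Python analogue of the pipeline described above, with synthetic data standing in for the camera and phase shifter, np.unwrap standing in for MATLAB's unwrap, and the smoothing step omitted because the synthetic data are noise-free. The λ/4π conversion at the end applies to the normal-incidence, out-of-plane case just discussed; all dimensions are invented.

```python
import numpy as np

wavelength = 633e-9                       # assumed He-Ne source, meters
steps = (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)

def four_step_phase(frames):
    """Four-step wrapped phase: tan(phi) = (I4 - I2) / (I1 - I3)."""
    I1, I2, I3, I4 = frames
    return np.arctan2(I4 - I2, I1 - I3)

def acquire(phi):
    """Synthetic stand-in for camera acquisition at the four phase steps."""
    return [1.0 + 0.8 * np.cos(phi + s) for s in steps]

ny, nx = 64, 64
y, x = np.mgrid[0:ny, 0:nx]
phi_before = 0.002 * x * y                          # smooth initial phase
w_true = 400e-9 * np.sin(np.pi * x / nx)            # out-of-plane displacement, m
phi_after = phi_before + 4.0 * np.pi / wavelength * w_true  # double-pass change

delta = four_step_phase(acquire(phi_after)) - four_step_phase(acquire(phi_before))
delta = np.angle(np.exp(1j * delta))                 # rewrap the difference map
delta = np.unwrap(np.unwrap(delta, axis=1), axis=0)  # unwrap rows, then columns
w_measured = delta * wavelength / (4.0 * np.pi)      # phase back to displacement
print(np.max(np.abs(w_measured - w_true)))           # tiny residual expected
```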