June 14, 2004 This was my
first day at the Laser Teaching Center and I worked on editing my
website using HTML. I also learned some Linux basics. It's
wonderful to be able to learn and use these new computer skills.
June 17, 2004
For the past few days, I have been exploring the topic of
holography. I found an article on Physics Web
that dealt with the current applications of holography in data
storage. I don't know if this is a feasible idea for an experiment,
but I am exploring this topic, nonetheless. I also visited a couple of other websites related to holography, including hologram.net. This website is great for beginners and provides simple explanations and diagrams. It explained that a hologram is a method of producing a three-dimensional image on a photosensitive plate by recording the interference pattern of split laser beams. A hologram records both the phase and amplitude information about an image, while a simple photograph records only amplitude information.

I started reading a book by Winston E. Kock called Lasers and Holography. It introduced me to some basic terminology necessary for understanding holography. Kock briefly mentioned the concepts of zone plates, diffraction gratings, virtual and real images, redundancy in transmission holograms, and interference fringes.

Then, I began reading a different book for beginners called The Complete Book of Holograms by Kasper and Feller. This book, in simpler terms, explained the difference between transmission and reflection holograms. It discussed the properties of the Helium-Neon laser and the coherence of lasers in general. It explained the orthoscopic properties of virtual images and the pseudoscopic properties of real images reconstructed by holograms. Kasper and Feller spent a great deal of the book explaining the different models that are used to describe the properties of holograms, including the geometric model, the Bragg reflection model, and the zone plate model. The geometric model treats the interference fringes recorded in the hologram emulsion as simple reflective surfaces; the Bragg reflection model helps explain the reflective properties of white-light reflection holograms (and why transmission holograms look smeared when viewed with white light); and the zone plate model describes hologram viewing in terms of diffraction and interference of light waves.
The book also introduced some basic setup strategies and processing tips for making holograms. A number of different types of "copy" holograms, such as projection holograms, rainbow transmission holograms, and integrams were addressed as well. After finishing this book, I will be able to formulate more complex and detailed ideas of what my project might entail.
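The recording principle described above (a hologram stores the interference fringes of split laser beams) can be previewed numerically. This is my own illustrative sketch, not from Kock or Kasper and Feller; the wavelength and beam angle are arbitrary assumed values.

```python
import numpy as np

# Interference of two plane waves on a recording plate (z = 0):
# a reference beam at normal incidence and an object beam tilted by theta.
wavelength = 633e-9          # HeNe red, in meters (assumed for illustration)
theta = np.deg2rad(1.0)      # angle between the two beams (assumed)
k = 2 * np.pi / wavelength

x = np.linspace(0, 200e-6, 4000)          # a 200 um strip of the plate
ref = np.ones_like(x, dtype=complex)      # reference: unit amplitude, zero phase
obj = np.exp(1j * k * x * np.sin(theta))  # object: linear phase ramp across plate
intensity = np.abs(ref + obj) ** 2        # what the emulsion actually records

# The recorded fringes have spacing d = wavelength / sin(theta):
d_expected = wavelength / np.sin(theta)
n_fringes = x[-1] / d_expected
print(f"expected fringe spacing: {d_expected*1e6:.1f} um, "
      f"fringes across strip: {n_fringes:.1f}")
```

At larger beam angles the fringe spacing shrinks toward the wavelength itself, which suggests why holography needs fine-grained emulsions and a vibration-free table.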
June 18, 2004 I have
finished The Complete Book of Holograms and got a general idea
about the setup and the equipment needed to construct simple
holograms. More importantly, the book introduced me to the various
applications of holograms, including pattern recognition. This method
uses hologram plates, converging lenses, and transparencies to detect
certain directed patterns in the text of the transparency. This
involves the concept of Fourier holograms and convolutions. I am now trying to find a text that will help me explain, in intelligible terms, the Fourier transformation equation. I hope the book called Who is Fourier? can help me with this mathematical obstacle. Then, I can move on and read about the Fourier series in The Feynman Lectures on Physics . June 21, 2004 I spent
the first half of the day creating my Websites
page. The links have a lot of information, and I have not read all the
materials on the websites. Some of it is review (most of the websites
on holograms) and some of it is new material (most of the material on
Fourier theory). I found the book called Mathematical Physics:
Applied Mathematics for Scientists and Engineers by Bruce Kusse
and Erik Westwig in the Stony Brook Math and Physics library. I looked
through it, and I think I will be able to figure out what it has to
say about Fourier series after I finish reading Who Is
Fourier?. So far, Who Is Fourier? has reviewed basic wave properties and
simple mathematical concepts, such as trigonometric functions, angular
velocity, differentiation, and integration, relating all of this to
Fourier series. Also, the book introduced some new concepts, such as
mathematical filters for certain amplitudes of waves, Fourier
coefficients, and discrete Fourier expansions. The underlying idea
behind Fourier series is that "a wave that is periodic... consists of
the sum of many simple waves." The FFT analyzer was briefly covered as
well. Later that day: Having read more of Who is Fourier?, I have come to the realization that this is a very well-designed book. All the topics flow nicely together, and though seemingly simple, the book allows the reader to view even basic concepts from a fresh perspective. It ingeniously intertwines history and theory, tying apparently disparate topics into a flowing narrative. It clearly explained the relationship between vector calculus and Fourier coefficients, as well as Maclaurin expansions. The formulas of the Fourier series are beginning to make sense, though the questions of how they tie into spatial filters, and how one can apply such abstract mathematics to practical lab procedures, remain to be answered. June 22,
2004 I have finished reading the chapter on Fourier
series in the Feynman lectures. It reviewed some basic concepts
covered in my previous readings. Also, the chapter introduced me to
some new aspects of the applications of the Fourier series, including
Fourier series for discontinuous functions, the energy theorem, and
nonlinear responses of waves, including those of light. Though most of
the lecture concentrated on sound waves, the information was
nonetheless pertinent, concise, and to the point. The websites on my list are very detailed. The one labeled "Fourier Transforms, DFTs, and FFTs" covers a lot of relevant ground. It refers to the Discrete Fourier Transform and the Fast Fourier Transform, as well as to spectrum analyzers and spatial filters, which is the topic I am working my way towards. After I finish looking through this website and the Mathematical Physics book, I am going to begin looking at more practical laboratory procedures and techniques. I am also going to try to understand how an FFT analyzer and a CCD camera work. And, on another note, the REU students, including myself, were given a tour of the Van de Graaff accelerator. The concepts of the ion acceleration process were a bit complex, but also quite intriguing. Also, it was interesting to see how such aged equipment in the Stony Brook lab can still function so efficiently. June 23, 2004 I
looked through Mathematical Physics and I would not recommend
this book to someone working on introductory Fourier optics: it is
too mathematically complex and not practical for a limited lab
setup. I have, on the other hand, found a website
which contains a simple lab setup which would demonstrate fundamental
Fourier principles. Before I can set this up, though, I think I need
more background in general Fourier optics and I found a book that
might help me. It is called Fourier Optics: An Introduction by
E. G. Steward. Along with a college physics textbook (University
Physics) I will slowly work my way through this book, setting up
basic experiments along the way to help me visualize concepts.
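As a numerical warm-up for those bench experiments, here is a short sketch (my own, with assumed HeNe wavelength and slit separation) of the ideal Young's double-slit intensity pattern:

```python
import numpy as np

# Ideal two-slit (Young's) interference: I(theta) = 4*I0*cos^2(pi*D*sin(theta)/wl).
# Wavelength and slit separation are assumed values typical of a HeNe bench setup.
wl = 633e-9    # HeNe wavelength, m
D = 0.25e-3    # slit separation, m
I0 = 1.0       # intensity contributed by a single slit

def intensity(theta):
    """Double-slit interference intensity at angle theta (radians)."""
    return 4 * I0 * np.cos(np.pi * D * np.sin(theta) / wl) ** 2

# Maxima sit where D*sin(theta) = m*wl; check the first order (m = 1):
theta_1 = np.arcsin(1 * wl / D)
print(f"m=1 maximum at {np.degrees(theta_1):.3f} deg, I = {intensity(theta_1):.1f}")
# Halving D roughly doubles the angle of the first maximum (small-angle regime):
print(f"with D/2, m=1 maximum at {np.degrees(np.arcsin(wl / (D / 2))):.3f} deg")
```

The second print illustrates the reciprocal relation between slit separation and fringe spread that keeps coming up in the readings.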
I worked with a Helium-Neon laser and some gratings. I made a basic setup to demonstrate Young's double-slit experiment, as it was explained in this website. It worked fairly well, though the setup was rough and only tentative. I also looked at some specially-made gratings and grating plates. It was interesting to see the difference between single-slit and double-slit diffraction. June 24, 2004 I am slowly trying to
read through Fourier Optics: An Introduction. It's going slowly
because I have to keep cross-referencing University Physics for
simpler mathematical representations. I am reading about mathematical
models for amplitude and intensity patterns in diffraction gratings. I
am also trying to understand the bizarre way Steward's book uses
phasor diagrams to represent the summations of amplitudes in
interference patterns. I am now very comfortable with the interference
pattern formed in Young's experiment, so I can move on to Fraunhofer
diffraction and the Airy pattern with circular apertures. Maybe next
week, I'll get to optical imaging and processing. June 25, 2004
University Physics has an ingenious way of explaining phasor
diagrams and how they are used to calculate intensity in single-slit
diffraction. I am currently reading about the graphical and
mathematical representations of Fraunhofer diffraction patterns. This
website is a good summary of what I have been reading: http://hyperphysics.phy-astr.gsu.edu/hbase/phyopt/sinint.html. I
plan to begin reading further in Fourier Optics next week, as
well as attempting to figure out a way to physically represent Fourier
transforms in a lab setup. June 28, 2004 I spent
most of the day finishing my reading about diffraction and diffraction
gratings. In Fourier Optics I read about two-dimensional
gratings and the patterns that ensue. I think this is related to my
main research topic of optical transforms, and it seemed quite
interesting. June 29, 2004 I have
reached the chapter in Fourier Optics which reviews Fourier
series and coefficients. This chapter relates optical diffraction to
Fourier analysis mathematically. I also spent a large part of the day
trying to figure out why decreasing the spacing between slits spreads
out the fringes in the diffraction pattern. For double-slit diffraction,
squaring the equation for the amplitude of the pattern (attained by
summing up all the different amplitudes at all possible points in the
interference pattern from all the different slits using phasor
diagrams) gives the mathematical function for the intensity pattern of
the diffraction:

I = 4 I_0 [sin(π u a)/(π u a)]^2 [cos(π u D)]^2

where u = sin(θ)/λ, D is the slit separation, a is the slit width, θ is the angle between the ray leaving the slit and the perpendicular to the slit, I is the intensity, and I_0 is the intensity in the straight-ahead direction (where θ = 0). This equation is a combination of the intensity patterns of single-slit diffraction and double-aperture interference. When the path difference between the waves from adjacent slits in the grating equals an integral multiple of the wavelength, complete constructive interference (and thus a maximum) in the pattern occurs. Mathematically: D sin(θ) = nλ, so at the maxima u = n/D. Graphing the intensity pattern in terms of u makes the fringe positions reciprocally related to the slit separation D. So, the smaller the slit separation, the wider the separation of the fringes in the pattern. Also, logically speaking, if the slit separation is smaller, the circular secondary waves coming from the slits are closer together, making their interference pattern more spread out. June 30, 2004 Today,
Professor Metcalf gave a talk, introducing us to laser cooling. He
told us a lot of history behind the development of the model for the
atom and the particle and wave properties of light. He talked about
Planck's constant, de Broglie's wave theory of electrons, and even
touched on Schrödinger's wave equation. He also mentioned the
development of Maxwell's equations for electromagnetic
waves. Dr. Metcalf will finish his talk and describe laser cooling in
detail on Friday. I read another chapter in Fourier Optics. It described how to translate, using Fourier analysis, the aperture function for a diffraction grating (basically a square function) into a sinc function for general Fourier coefficients. The sinc function gives the amplitudes and phases of the pairs of diffraction maxima (of a certain diffraction order) described by the grating aperture function, and "... the amplitude of the nth order pair of diffraction maxima from a grating is a measure of the amplitude of the nth order pair of harmonics comprising its overall aperture function..." An aperture function, or the "electric field distribution across the object mask," as Hecht's Optics refers to it, is analyzed and translated into Fourier coefficients in terms of a sinc function, as shown here. These coefficients can then be used to express the aperture function in terms of sines and cosines.

Example in Hecht's Optics (with k = angular spatial frequency = 2π/λ):

f(x) = A_0/2 + Σ(m=1 to ∞) A_m cos(mkx) + Σ(m=1 to ∞) B_m sin(mkx)
A_m = (2/λ) ∫(0 to λ) f(x) cos(mkx) dx
B_m = (2/λ) ∫(0 to λ) f(x) sin(mkx) dx

A certain aperture function is centered at x = 0 above the x-axis, with λ as its spatial period and a peak width of 2(λ/a). The Fourier coefficients are thus:

A_0 = 4/a
A_m = (4/a)[sin(2πm/a)/(2πm/a)] (a sinc function)

So, for a = 4 (a peak width of λ/2):

f(x) = 1/2 + (2/π)[cos(kx) − (1/3)cos(3kx) + (1/5)cos(5kx) − ...]

Reducing the peak width (by increasing a) increases the number of terms needed in the series to produce the same general resemblance to the square aperture function f(x). July 1, 2004 We had pizza and
strawberries with Dr. Noe and Professor Metcalf. It's strange how a
room full of physicists and physics majors did not once bring up the
topic of physics in the course of a conversation. I read the part of Chapter 7 in Optics that talked about non-periodic functions and expressing them in Fourier form. Non-periodic waves are treated differently in Fourier optics than periodic waves. Since these waves don't have a definite period, the period is said to be infinite. Increasing the spatial period to infinity increases the distance between peaks in an aperture diagram, until they're an infinite distance apart and a single square pulse results. The pulse (or peak) in the aperture function gets smaller compared to the infinite spatial period (λ), and it thus requires higher frequencies to synthesize it. So, as the number of terms in the Fourier function required for precision increases, the discrete sinc (no, not sine!) function representing the Fourier coefficients gradually merges into one solid "envelope." It then becomes meaningless to talk about fundamental frequencies or harmonics in non-periodic waves. Also, the summation symbol used for discrete Fourier functions becomes an integral sign:

f(x) = (1/π)[∫(0 to ∞) A(k)cos(kx) dk + ∫(0 to ∞) B(k)sin(kx) dk]
A(k) = ∫(−∞ to +∞) f(x)cos(kx) dx
B(k) = ∫(−∞ to +∞) f(x)sin(kx) dx

A(k) and B(k) are now called Fourier transforms. Considering the single square pulse:

f(x) = E_0 when x is between −L/2 and L/2, and 0 when x is greater than L/2 or less than −L/2
A(k) = E_0 L sinc(kL/2)

(Notice: when L gets larger, the spacing between the zeroes of A(k) decreases; or, when the slit width increases, the maxima are closer together in the intensity interference pattern.) Then,

f(x) = (1/π) ∫(0 to ∞) E_0 L sinc(kL/2) cos(kx) dk

The chapter also talked about the coherence limits of monochromatic light and how this depends on the length of the wave trains of individual photons. 
It said that the frequency bandwidth of the spectral lines of a beam is: Δν ≈ 1/Δt. Or: the frequency bandwidth (usually the width of the peak of the principal maximum in the Fourier transform, or 2π/L in a transform whose principal maximum lies between −π/L and π/L) is of the same order of magnitude as the reciprocal of the "temporal extent of the pulse." Also, Δl_c = cΔt_c, where Δl_c is the coherence length, c is the speed of light, and Δt_c is the coherence time. Coherence length is the "extent over which the wave is nicely sinusoidal so that its phase can be predicted reliably." July 2, 2004
Professor Metcalf gave a more detailed lecture about laser cooling
today. He blew my mind with velocity-dependent forces and "optical
molasses," which uses light beams to slow atoms. It also made me think of my own topic. I think I should clarify yesterday's entry. Coherence time is the duration of one wave train of a photon. Thus, the frequency bandwidth gets smaller as the length of the wave train becomes larger. If the wave train is infinitely long, the light is completely coherent and the frequency bandwidth becomes zero. That is, the beam is of a single frequency. The Fourier transform would thus be one single vertical line at one frequency. Also, Fourier Optics points out something interesting. Non-periodic waves treat pulses as being infinitely far apart. This can be physically compared to single-slit diffraction. With a single slit, the equation for the Fourier transform is a sinc function, just as it is for an aperture function with a single square pulse of a non-periodic wave. Jose gave us a detailed lecture about the CCD camera. He explained how it responds to light, the photoelectric effect, and blooming. He also explained the difference between a CCD camera and a regular camera. Finally, we got to play around with the camera using the computer in the lab, changing different options in the program. July 6, 2004
The floor was being waxed in the Laser Teaching Center, so my computer access was very limited today. We had a meeting with Professor Graf at noon. He showed us an introductory physics lab and demonstrated some basic optics experiments, such as experiments involving an optical lever, diffraction, and Snell's law. It was mostly review, in terms of the material discussed, but I enjoyed the lecture nonetheless. I got a chance to finish Chapter 4 in Fourier Optics. Here is a summary: • impulse function (δ): a vertical line function that has the value of zero everywhere except at one point, where it is ∞. It represents an ideally narrow strip where there are no path differences in light to cause interference. • The Fourier transform of the N-slit grating is made up of a single-slit term and a grating term (which is a transform of an array of δ-functions defining the "grating lattice"). Thus, the aperture function is the "distribution of a single slit aperture function in accordance with an array of δ-functions defining the lattice on which the grating is based." • convolution: the "distribution of one entity in a manner specified by another." The N-slit Fourier transform is an example of convolution. The general effect of "smearing" of f(x) by g(x): h(x) = ∫(x′ from −∞ to +∞) f(x′)g(x−x′) dx′ = f(x) * g(x) = f * g. This is equivalent to multiplying each output of a function by "the whole of another function and summing the results." g(x) is called the "smearing" function, and is reversed and translated during convolution. • The chapter contains a proof of the convolution theorem: "The Fourier transform of the convolution of the single aperture function with the δ-function array is equal to the product of the individual transforms." Or, "convolution in real object space corresponds to multiplication in diffraction space." 
H(u), F(u), G(u) are the transforms of h(x), f(x), g(x); h(x) = f(x) * g(x) is equivalent to H(u) = F(u)G(u). • Though I am not sure how this relates to image processing, two types of "correlation" were also defined. Autocorrelation: P_ff(u,v) = ∫∫(from −∞ to +∞) f(x,y)f(x+u, y+v) dx dy. The value of P "for any chosen u,v is obtained by shifting the function f with respect to itself by u,v and determining the area of overlap." Cross-correlation (using two different functions f and g and describing their overlap): P_12(u,v) = ∫∫(from −∞ to +∞) f(x,y)g(x+u, y+v) dx dy. • The autocorrelation theorem states that the "Fourier transform of the autocorrelation of a function f(x) is the squared modulus of its transform"; i.e., the transform of |F(u)|^2 is the complex autocorrelation of f(x), so the transform of |F(u)|^2 = f(x) · complex conjugate of f(x) (· represents correlation). The transform of the autocorrelation of f(x), physically speaking, is the power spectrum of f(x) in terms of spatial frequency. July 7, 2004 Rita
gave a lecture today on chaotic coupling and controlling chaos. It was
interesting, especially about the creation of chaotic images from
nonlinear crystals. She mentioned autocorrelation in her talk and this
concept is relevant to my experiment with light processing. Dr. Noe and I discussed my topic in some detail, and then I spent the rest of the day reading part of Chapter 5 in Fourier Optics. This part of the chapter was about incoherent optical imaging and the optical transfer function (OTF) of a system that processes an image and "maps a set of input functions to a set of output functions." Basically, image formation is treated as convolution, or multiplication in Fourier space. The OTF is the Fourier transform of the impulse function, or the "smearing function," mentioned previously. So the multiplication of the OTF and the transform of the object intensity distribution (which is the square of the amplitude frequency spectrum) in Fourier space is equivalent to convolution of the object intensity distribution with the "point-spread" or "smearing" function. This simplifies the analysis of many filters in a system on an image.
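The convolution theorem that underlies all of this ("convolution in real object space corresponds to multiplication in diffraction space") is easy to check numerically with the discrete Fourier transform. A minimal sketch of my own using numpy's FFT on arbitrary test signals (the circular, discrete analogue of the book's continuous statement):

```python
import numpy as np

# Numerical check of the convolution theorem: the Fourier transform of a
# convolution equals the product of the individual transforms.
# Signals are arbitrary random test data; the convolution is circular.
rng = np.random.default_rng(0)
f = rng.standard_normal(256)
g = rng.standard_normal(256)

# Circular convolution computed directly from the definition
# h[n] = sum_m f[m] * g[(n - m) mod N] ...
h_direct = np.array([np.sum(f * np.roll(g[::-1], n + 1)) for n in range(256)])
# ...and via multiplication in Fourier space.
h_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real

print("max difference:", np.max(np.abs(h_direct - h_fft)))
```

The two routes agree to machine precision, which is exactly why FFT-based processing is practical: multiplying spectra is far cheaper than summing every overlap directly.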
The system alters only the intensity and phase of the object distribution. Certain terms need to be defined for incoherent processing. Aberrations in the lens processing the image can be taken into account by the OTF, which is written in terms of the modulation transfer function: OTF = (MTF)exp(i(PTF)), where PTF is the phase transfer function (often negligible) and i is the imaginary unit. The MTF can be viewed as the modulus, or amplitude, of the OTF. This function takes into account the ratio of the modulation (or visibility) of the image as compared to the modulation of the object. With an ideal lens, the MTF is close to linear, fairly high for all frequencies, starting at unity and decreasing as spatial frequency increases until the resolution limit of the lens is reached. As the Handbook of Optical Engineering explains, the OTF has to do with "the performance of an optical system in terms of its capacity to image faithfully the individual spatial frequency components of the object." F(u) is the complex amplitude diffraction pattern of the image of a point source formed from the pupil function of the lens, f(x), which has a value of one inside the aperture and zero outside. For incoherent illumination, [F(u)]^2 = PSF, the intensity distribution function (for coherent illumination, F(u) = PSF). The PSF is the power spectrum of the pupil function, or the electric field in the focal region of a lens, and the OTF describes the spatial frequency content of the PSF. The greater the aperture size, the smaller the PSF and the smaller the degradation of the image caused by the lens. The transform of [F(u)]^2 = OTF = the autocorrelation of f(x), the complex amplitude distribution over the lens aperture (a square wave). As far as I understand this, the above statement makes sense because the system cannot vary the original image in frequency, so the autocorrelation shows the overlap between the amplitude function and its displaced self. 
Thus, the higher the OTF, the greater the overlap, and the more "faithful" the lens is to the original image. July 9,
2004 Yesterday, we had a lecture on optical vortices
and singularities. It was very mathematically complex, but I enjoyed
listening about fork gratings because they are similar to diffraction
gratings that I've studied. I also asked the lecturer about the
phenomenon of the alternating intensities seen when shining a HeNe
laser at a Ronchi grating. He said the blurred maxima seen in the
resulting pattern are actually "missing orders" that occur when maxima of the
multiple-slit interference pattern (at d sin(θ) = mλ, where d
is the slit spacing and theta is the angle that the waves bend)
coincide with the minima of the single-slit diffraction pattern (at
a sin(θ) = mλ, where a is the slit width and theta is the
angle that the waves bend). The location of the missing orders depends
on the ratio between the slit spacing and slit width. For example,
when d = 4a, every fourth maximum is missing (i.e., when d is an
integer multiple of a, missing maxima occur). Professor Metcalf gave a talk today concerning magnetism, to prepare us for a discussion of magneto-optical trapping. He explained the distinction between optical molasses, which works with velocity-dependent forces, and optical trapping, which relies on position-dependent forces. He told us about phase space and, what was even more interesting, about the magnetic properties of all matter, including subatomic particles. He explained that subatomic particles can only have certain discrete orientations in magnetic fields. This baffled all of us, and we are all excited to learn more about it. On a more relevant note, I spent most of yesterday and today trying to sort out the small things that were unclear in my mind about Fourier optics. Thanks to Dr. Metcalf and hours of contemplation (I never knew it took so much work to figure out what it is that you do not understand!), I have finally filled in those holes, and I shall summarize the clarifications at present: • First of all, I finally figured out how wavelength affects diffraction patterns. Basically, if the wavelength is longer, the rays of light from the different slits have to bend at greater angles in order to interfere constructively. This spreads out the diffraction pattern more. • I also found a site that explains the effect of changing the slit grating very well. • The most confusing thing I didn't understand was spatial frequency and orders in a diffraction grating pattern. Professor Metcalf clarified that each maximum in a grating pattern corresponds to an order number, which is simply related to how much the rays bend at the grating edges. Fourier Optics confused me because it calls the term n/D the spatial frequency, when in reality it combines the concepts of an order number (n) and a frequency (1/D, where D is the slit separation). This is what confused me. The book graphs the Fourier transforms in terms of u, which is n/D or sin(θ)/λ. 
So basically, these numbers are all related to spatial frequency, but combine several variables. Thus, the Fourier graphs have maxima when n is an integer, just as in real life the intensity maxima correspond to integer order numbers. • Finally, I was a bit unclear about why the minima for single-slit diffraction correspond to a sin(θ) = mλ (a = slit width, θ is the direction of the bent rays, and m is an integer). I realized that the ray at the top edge of the slit destructively interferes with the ray at the center, so the equation is actually (a/2)sin(θ) = m(λ/2). The ray right below the topmost one interferes destructively with the ray right below the central one, and so they all add up in pairs to produce a minimum in a certain direction. Now that I have all that straight, I can move on to read about optical processing in Fourier Optics. July 12, 2004 I
finished reading the part of Fourier Optics relating to optical
filtering. This introduced me to different kinds of filtering, such as
amplitude filtering, phase filtering, and complex filtering. Now, I
want to make a basic lab setup to demonstrate the properties of
amplitude filters. Phase and complex filters (using holograms which
store both phase and amplitude characteristics) are much more
intricate and can be left to future study. Currently, I have to figure
out how to make a spatial filter, or find some other way to clean up
the laser beam, using fairly primitive materials. Also, Fourier Optics introduced me to the concept of imaging as a process of double diffraction, with an object plane, a diffraction plane, and an image plane. For perfect imaging, an infinite Fourier synthesis is required. The first-order pair of maxima in the diffraction plane interfere in the image plane to reproduce the basic periodicity of the grating, without any fine detail. As more orders are allowed to pass through the aperture, the image becomes more detailed. Limiting the aperture creates a low-pass filter, where only the lower frequencies are allowed to contribute to image formation. A basic lab setup can be used to demonstrate the effects the aperture's characteristics can have on the final image produced. This has important relevance to image-processing equipment, such as microscopes and telescopes. Amplitude filters can also be used in relation to micrographs. Finally, a point that was not clarified earlier: the diffraction pattern one sees is actually an intensity pattern, which is amplitude squared. So, referring to a lens as a transformer is technically inaccurate, because it actually produces a squared Fourier transform of the aperture function. July 13, 2004
Today, Professor Metcalf gave a lecture introducing us to the angular momentum of electrons as it relates to the magnetic moment of the atom. He explained the phenomenon of electron spin and the polarization of light needed to change the total angular momentum of an atom when that light is shone on it. Later in the day, Professor Allen came to speak to us about crystallography and the theoretical structure of substances close to the earth's core. He also explained a mathematical model which attempts to predict the movement of objects in a planetary system. Then, Dr. Noe had us perform a little experiment to understand the thin-lens equation as it works with light sources that are not infinitely far away. This is related to the topic of the 4f lens system used for spatial filtering. I spent the rest of the day looking at the equipment in the lab, trying to find things I might need for a general lab setup for Fourier optics experiments. July 14, 2004
After coming to the distressing realization that I have no idea how lasers work, I read the first three chapters of Understanding Lasers by Hecht and listened to Jose's talk about lasers. The first two chapters of Hecht's book reviewed basic quantum mechanics. What I had forgotten was that each electron in a specific atomic orbital has a certain wavelength assigned to it. Also, this book neatly tied together the particle and wave properties of matter and light. Because λE = hc (tying wave properties to quantum properties), the higher the frequency, the shorter the wavelength, and the higher the photon energy. So, all matter absorbs and emits light, but only at specific wavelengths and frequencies, because energy is quantized at the atomic level and only certain amounts of energy can excite an electron to a higher energy level (i.e., the photoelectric effect). A new thing I learned from the book was about population inversion, where, contrary to the state at equilibrium, there are more atoms in the excited state than in the ground state. When this is true, photons stimulate more emissions instead of being absorbed. Thus a laser produces "light amplified by the stimulated emission of radiation." By achieving a metastable state, where the atoms remain excited for an unusually long period of time on the atomic scale, population inversion can be achieved. Four-level lasers are particularly useful because they don't need as much energy as three-level lasers: with four levels, the population inversion needs to be achieved only between an intermediate lower laser level and the metastable state. The lower level naturally depletes, so there are never many atoms in the lower level, and less energy is needed to maintain the difference between the two states. I also learned about resonant cavities and gain (the amount of stimulated emission a photon can generate as it travels a certain distance). 
A certain range of wavelengths can resonate in a specific cavity under a certain gain bandwidth (there's gain over a range of wavelengths). Longitudinal modes (N) correspond to wavelengths which would resonate in a certain cavity: Nλ = 2L, where L is the length of the cavity. The shape of resonant cavities is modified for different types of lasers. If the gain is high, an unstable cavity can be used, where the photons do not travel back and forth so many times in the cavity. This increases output efficiency, because the light can go through a larger area of the cavity and stimulate more of the medium. For lower gain, the cavity has to retain the photons for a longer time, so the photons travel back and forth, increasing the path distance and, thus, the number of stimulated atoms. Transverse modes determine the pattern of intensity across the cross-section of a laser beam. The lowest-order transverse mode, TEM_{00}, has a Gaussian beam profile; the numbers in the subscript give the locations of minima in the beam cross-section. The intensity (I) function for this Gaussian profile is: I(r) = [2P/(π d^2)] exp(−2r^2/d^2), where r is the distance from the center of the beam, d is the spot size, and P is the power. The TEM_{00} mode is usually desirable because it spreads less than the other modes. Optical pumping is one of the ways to achieve population inversion. It illuminates atoms with light to excite the laser species. A wide range of wavelengths can be used to stimulate emission, because there are many upper levels in a laser which decay to the metastable level. The problem with optical pumping is that the pumping photons must have a higher energy than that of the emitted light, because the laser species must be raised above the upper level from which it later decays, from a starting point usually below the lower level. This makes it difficult to produce a steady beam. Optical pumping can be used with any medium transparent to the pumping light.
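Here is a small Python sketch (my own) of the two formulas above: the TEM00 intensity profile and the cavity resonance condition Nλ = 2L. The 30 cm cavity length and the wavelength window around the HeNe line are assumed values:

```python
import math

def tem00_intensity(r, P, d):
    """TEM00 profile: I(r) = (2P / (pi d^2)) * exp(-2 r^2 / d^2).
    r: distance from beam center, d: spot size, P: total power."""
    return (2 * P / (math.pi * d**2)) * math.exp(-2 * r**2 / d**2)

def resonant_wavelengths(L, lam_lo, lam_hi):
    """All wavelengths satisfying N * lam = 2L within [lam_lo, lam_hi]."""
    N_lo = math.ceil(2 * L / lam_hi)
    N_hi = math.floor(2 * L / lam_lo)
    return [2 * L / N for N in range(N_lo, N_hi + 1)]

# Longitudinal modes of a 30 cm cavity within 0.01 nm of the HeNe line;
# adjacent modes are separated in frequency by c/2L (500 MHz here).
modes = resonant_wavelengths(0.30, 632.80e-9, 632.81e-9)
```

The intensity at r = d falls to exp(−2) of the on-axis peak, which is the usual definition of the edge of the beam.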
I will read about other kinds of laser excitation techniques tomorrow. (Amplification by a laser: output power / input power = (1 + gain)^(length traveled).) Question of the day: Why does an emitted photon travel in phase with its stimulating photon? July 15/16, 2004
I finished reading the first 4 chapters of Understanding Lasers:
• Other forms of excitation in lasers include electrical pumping (usually more efficient than optical pumping because electricity is used directly in excitation) and the use of semiconductors, chemicals, and nuclear radiation.
• Lasers with large atoms (for the laser species) are capable of more transitions because more energy levels are possible.
• Lasers can usually be tuned to function at a specific wavelength by manipulating the cavity or using diffraction gratings.
• Lasers do not have perfect coherence: not all photons are stimulated by one original photon, and there are always tiny variations in wavelength and thermal variations in the laser. All these factors interfere with coherence. Lasers with one longitudinal mode, low gain, and one transverse mode tend to be the most coherent. By the uncertainty principle, as the pulse length increases, the range of wavelengths decreases, so continuous-beam lasers are more coherent.
• Temporal coherence: coherence length = c/(2Δν) (c = speed of light in vacuum, Δν = frequency bandwidth).
• The Doppler effect also broadens the range of wavelengths in the cavity: Δν/ν = [0.545kT/(Mc^2)]^0.5 (Δν/ν = the Doppler bandwidth, i.e. the fractional frequency broadening caused by the effect; M = mass, T = temperature, k = Boltzmann constant).
• The Rayleigh range (the distance over which the laser beam remains fairly parallel, or spatially coherent) = πd^2/λ (d = spot size).
• Beam divergence θ is given by D = 2L tan(θ), where D is the diameter of the beam and L is the distance from the laser; the edge of the beam is defined as where the intensity drops to 1/e^2 of its maximum value.
• If the resonator mirrors are curved, the beam will have a "waist" (w), or narrow point, inside the cavity. The divergence is then defined as θ = λ/(πw).
• Also, the larger the output port, the smaller the divergence. And the greater the number of transverse modes, the larger the divergence of the beam.
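A short Python sketch (mine, using the book's formulas as quoted above) for the coherence length, Doppler bandwidth, and divergence; the neon atomic mass and the 400 K discharge temperature are rough assumed values, not measured ones:

```python
import math

c = 2.998e8     # speed of light, m/s
k = 1.381e-23   # Boltzmann constant, J/K

def coherence_length(delta_nu):
    """Temporal coherence: L_c = c / (2 * frequency bandwidth)."""
    return c / (2 * delta_nu)

def doppler_bandwidth(nu, T, M):
    """Doppler broadening: dnu = nu * sqrt(0.545 k T / (M c^2))."""
    return nu * math.sqrt(0.545 * k * T / (M * c**2))

def divergence(lam, w):
    """Far-field divergence of a beam with waist w: theta = lam/(pi w)."""
    return lam / (math.pi * w)

# HeNe laser: neon atoms (~20 u) in a warm discharge (assumed 400 K)
nu = c / 632.8e-9
dnu = doppler_bandwidth(nu, 400.0, 20 * 1.66e-27)
L_c = coherence_length(dnu)   # comes out to tens of centimeters
```

The Doppler bandwidth of a few hundred MHz is what limits the HeNe coherence length to the tens-of-centimeters scale, consistent with the bullet points above.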
• The energy of a beam with power P(t) emitted over time: E = ∫P(t)dt.
• The chapter also talked about amplifiers and using a Brewster window to polarize laser light. It was interesting to find out that light is made up of only two perpendicular linearly polarized components.
• On a slightly different note, I read about lenses and lasers. Because laser light is fairly parallel, a lens concentrates a laser beam at its focus. Even apart from spherical aberration (the foci differ for light at the edges and center of a lens), the lens can only concentrate the laser beam to a spot of size fλ/D (where f is the focal length and D is the diameter of the beam).
Clarification: "Fraunhofer diffraction deals with the limiting cases where the light approaching the diffracting object is parallel and monochromatic, and where the image plane is at a distance large compared to the size of the diffracting object. The more general case where these restrictions are relaxed is called Fresnel diffraction." I am also working on filtering a laser beam for my setup. July 19, 2004
We visited the Brookhaven National Lab today. It took 5 hours, but it was worth it. We saw huge machinery designed to accelerate electrons and perform collision experiments. It was fascinating to find out about the different kinds of experiments going on in the lab and how versatile basic principles are in the analysis of crystals and even subatomic particles. The tour took up most of the day, but I've succeeded in filtering laser light using 2 biconvex lenses and a 50 micron aperture. I need to fine-tune the setup so that the beam is expanded more and the light is more collimated, but I got a clean beam, which resembles a Gaussian function and is fairly collimated. Dr. Noe somehow created an image of a grating using only one lens, as opposed to the "standard" two-lens system like the one shown in the American Journal of Physics. I'm going to try to find out why he was able to do that and why texts usually use two-lens systems. July 20, 2004
I found a good link explaining how the image resolution of a lens decreases at higher spatial frequencies of a grating (going down until it reaches the resolution limit of the lens); i.e., there is a limit to the frequencies that a lens can image faithfully. What Dr. Noe did yesterday with the one lens in the Fourier setup was simply creating an inverted image of the diffraction grating using simple lens optics. Concerning the 4f optical system: using 2 identical lenses in a Fourier setup simply allows the image distance and the object distance to be equal and saves space. Mathematically, one lens serves as a Fourier transform and the other as an inverse Fourier transform, converting the light back into an image of the object. Thus, the second lens is used for convenience and helps focus the image at a closer plane. Also, this 4f setup makes it easier to find the location of the Fourier plane (with identical lenses, it is exactly halfway between them) and to put different masks in the Fourier plane to filter different frequencies of light from the final image. This helps visualize the explanation. Finally, the two-lens system allows controlled magnification of the image, set by the ratio of the focal lengths (like the single-lens image, the 4f image comes out inverted). Professor Metcalf gave us another talk about MOTs. It's amazing how magnetism and optics interact on an atomic level to create a trap. Also, I fine-tuned my spatial filter. I made the beam wider by using a greater focal length for the second (collimating) lens. I also measured the distances more carefully. As a result, the final beam remains parallel for long distances. Later in the day, I reviewed that Fraunhofer diffraction, as previously explained, is Fresnel diffraction at long distances.
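To see the 4f filtering idea work, I tried a 1-D numerical sketch in Python (an FFT stand-in for the lenses, not my actual optical setup; numpy assumed available). One FFT plays the role of lens 1, a binary mask sits in the "Fourier plane," and an inverse FFT plays lens 2:

```python
import numpy as np

N = 256
x = np.arange(N)
# Object: a Ronchi-grating-like square wave with period 16 samples
obj = ((x // 8) % 2).astype(float)

# Lens 1: Fourier transform of the object field
spectrum = np.fft.fftshift(np.fft.fft(obj))
freqs = np.fft.fftshift(np.fft.fftfreq(N))

# Binary low-pass mask in the Fourier plane: keep only |f| < cutoff.
# The grating's fundamental is at 1/16 = 0.0625, so a 0.03 cutoff
# passes the zero order alone.
mask = (np.abs(freqs) < 0.03).astype(float)

# Lens 2: inverse transform back to the image plane
image = np.fft.ifft(np.fft.ifftshift(spectrum * mask))

# With every diffracted order blocked, the grating structure vanishes:
# only the uniform average transmission (0.5) survives.
```

This is Abbe's point in a nutshell: faithful imagery requires the diffracted orders to reach the image plane; block them all and the grating disappears.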
The lens in the Fraunhofer diffraction setup is there because "It is rather inconvenient to work with infinite or near infinite distances so we use a convenient property of a simple lens to convert an image at infinity to an image at the front focal plane..." Thus, if the light is monochromatic, a simple lens helps transform near-field spherical waves into near-planar waves. This allows the waves to be in phase, making it convenient to make other generalizations about interference which lead to a mathematical description of a simple diffraction pattern. I also discovered that the connection between diffraction and image formation was made by a man named Ernst Abbe in 1873. I found an experiment that confirms his theory and involves the 4f setup. July 22, 2004 I have been
trying to put together the main 4f setup for 2 days now. It seems to
be working, but I am having trouble finding a high-pass filter. I need
to take this weekend to come up with an abstract and to figure out how
to make different kinds of masks. I also talked to someone Dr. Metcalf recommended I speak with, and he helped with the theoretical side of my experiment. Yesterday I gave a talk on Fourier optics for an hour and a half. It was good practice for the symposium. July 26,
2004 I have finished putting together my setup. (I
made the laser beam go straight through the middle of all the lenses,
found similar mounts for the main lenses, optimized the Airy
diffraction pattern in the spatial filter, etc.) I also found a great article called "Spatial filtering in optical data processing." I got really excited because it basically explains my experiment and ties everything together. It explains Abbe's theory of image formation: "In this theory (Abbe 1873) consideration is given to a grating-like object, illuminated by a plane wave such that its diffraction pattern appears in the back focal plane of the objective, and it is demonstrated that for faithful imagery all diffracted orders must pass to the image plane." The article also explains some applications of spatial filtering: "correcting aberrated images," "edge sharpening and the detection of signals immersed in background noise," and, with binary filters (which is what I'm working with), "the removal of non-periodic noise from a strongly periodic signal." Binary and low-pass filters can reduce granularity noise from photographs. Though my project uses coherent illumination, it was interesting to find out that "incoherently illuminated data processing systems are in general less flexible in their operation; in particular the form of the filter function is rather restricted and cannot be specified arbitrarily." The author, Birch, also explains that image formation using a lens system is actually the convolution of the original image and the point spread function: if the "object plane is described by the function A(x, y) the image plane distribution will be A'(x', y'). It should be noted that the function A'(x', y') is not the exact duplicate of A(x, y), transposed to the image plane with a magnification of M = x'/x = y'/y, but is the convolution of the perfect image A(x', y') with a degrading spread function:
A'(x', y') = ∫(−∞ to ∞) ∫(−∞ to ∞) B'{(x' − x_{0}), (y' − y_{0})} A(x_{0}, y_{0}) dx_{0} dy_{0}
In this equation B'(x', y') is the system spread function, describing the two-dimensional distribution in the image of a true point object.
Equation (1) [above] is a convolution integral indicating that each point of the perfect image A(x', y') is replaced by the spread function B'(x', y'). The final amplitude at the point (x', y') is obtained by a summation of the contributions from adjacent spread functions, at distances {(x' − x_{0}), (y' − y_{0})} from the point of interest." Thus, the spread functions of adjacent points influence the image of a point source. One thing that needs to be noted is that a lens is finite, so it acts as a low-pass filter, not allowing some of the higher frequencies to contribute to the final image. "For a lens of diameter d [and focal length f] the highest spatial frequency represented in the uv [frequency] plane is s = d/(2λf)." Mathematically, if a(u, v) is the amplitude distribution of an object in the frequency domain, and t(u, v) is the amplitude transmittance function of a filtering transparency placed in the spatial frequency plane, then: "The amplitude distribution in the back focal plane of lens 2, which is the image plane of the system, is given by the equation ... [A'(x', y') = ∫(−∞ to ∞) ∫(−∞ to ∞) a(u, v) t(u, v) exp{i2π(x'u + y'v)} du dv] ... It will be noted that this equation has taken into account the coordinate system of the image plane, which is inverted relative to that of the object plane and is scaled such that x'/x = y'/y = f_{2}/f_{1}. Thus if f_{2} = f_{1}, that is, if two lenses of equal focal length are used, the magnification of the system is 1 [inverted image]." (Remember that convolution in the spatial domain is multiplication in the frequency domain.) Filters can be differentiated mathematically, as Birch puts it: "The spatial frequency filter t(u, v) can be a complex-valued function with an amplitude transmittance of |t(u, v)| and a phase factor of exp{iφ(u, v)}, that is, t(u, v) = |t(u, v)| exp{iφ(u, v)}. The filters employed in optical data-processing systems are usually passive, that is, they do not amplify incident light distributions, and therefore |t(u, v)| ≤ 1.
Filters which only modify the amplitude of the incident distribution, that is, where φ(u, v) is constant, are termed 'amplitude filters', and if |t(u, v)| only takes the value of unity or zero they are termed 'binary filters'. Filters in which |t(u, v)| is constant but φ(u, v) varies as a function of u and v are termed 'phase filters', and those in which both amplitude and phase transmission vary as a function of u and v are termed 'complex filters' [holographic]." Binary filtering, in particular, can be used to remove "grain" noise and to filter out objects of a specific orientation. Because diffraction near an edge occurs "in the direction perpendicular to the edges and lines of the structure," using a single slit in the diffraction plane can filter out structures of a certain orientation in the image plane. High-pass filtering, also explained in the article, is basically blocking out the lower frequencies by obstructing them with a mask. Actually, this is not truly high-pass filtering but band-pass filtering, because the lens limits the transmittance of the highest frequencies. High-pass filtering allows for image enhancement. The filter highlights the area near the edge of the object in the image plane, and in the center of the highlighted image lies a dark minimum where the edge lies in the original object. This works only for half-planes, with an edge between areas of very high and very low transmittance (refer to page 24 for mathematical details). An intensity plot of high-pass filtering displays 2 sharp maxima (usually, the higher the ratio between the outer, limiting diameter of the lens and the width of the obstruction in the filter, the sharper the maxima) and a very low, distinct minimum, marking the location of the edge. Today, I will put up pictures of my final setup and examples of the different kinds of filtering I performed. Sometime this week, I will use the CCD camera to create intensity distribution graphs of my results. July 29, 2004 I have been
working on my data and setup all week. I used a CCD camera to analyze
my images. I also wrote my abstract. I am reading about high-pass spatial filtering and its applications. From Birch's article called "A spatial frequency filter to remove zero frequency," I found out that high-pass spatial filtering "is advantageous when making quantitative measurements of the profile of an object or observing surface contour irregularities visually." The article also explained high-pass filtering well: "A consequence of the removal of the zero spatial frequency is that diffracting object structure is imaged with enhanced contrast because object areas of constant or slowly varying transmission are suppressed... [by removing lower spatial frequencies] The image of the edge [of an object] may be shown to have the form of a symmetrical spread function with a sharp central dip which falls to zero intensity and which is coincident with the image position of the edge as predicted by geometrical optics." I also found out that there are inherent errors in optical edge enhancement, especially of small objects: "However, in making such measurements of small objects the mutual interference of the spread functions generated by adjacent edges must be considered to determine whether or not a positional error of the central dips of the individual spread functions is introduced." I should also mention that there were some sources of error present in my project as well. First of all, there is a lot of grain noise in my images from the CCD camera. This makes it difficult to view edge enhancement, but I was unfortunately limited by many technological barriers. Also, the transparency used as part of the filter in high-pass filtering had a certain refractive index which slightly changed the phase of the light. Thus, my filter was not purely an amplitude filter. August 3rd, 2004
Professor Metcalf and Dr. Noe made me realize I was not being completely clear about image formation and how a lens contributes to it (Abbe's theory). Collimated light illuminates an object. For simplicity, let's use a Ronchi grating as the object. Light passes through the transparent portions of the grating undiffracted. The edges of the grating bend the light at certain angles to form a diffraction pattern, even without a lens. The center of the pattern contains light that was not bent. Light that is bent forms maxima farther from the middle maximum. These maxima are diffraction orders. A lens "focuses" the diffraction pattern at its focus. More precisely, the unbent light that passes through the grating is focused at the focal distance of the lens because it is collimated. The light that is bent by the grating reaches the lens at an angle, and thus the lens bends it even farther away from the unbent-light maximum at its focus. This is how diffraction maxima are formed at the focal plane of the lens. The lens simply "maps" how much the light bends around the object; it sorts the light into diffraction orders at its focus. After the light passes the focus of the lens, it begins to interfere to re-form the image of the object. Theoretically, the image is never truly focused if the object is located at the focus of the lens, so a second lens is needed to bring what would be an infinite image distance to a particular, finite image plane. (In other words, the second lens makes the infinitely far away image focus on the image plane.)
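To make the "mapping" concrete, here is a Python sketch (my own) combining the grating equation sin θ_m = mλ/d with the lens mapping x_m = f·tan θ_m in the back focal plane; the 100 lines/mm grating pitch and 150 mm focal length are assumed for illustration, not my actual components:

```python
import math

def order_positions(lam, d, f, m_max):
    """Positions of diffraction orders in a lens's back focal plane.
    Grating equation: sin(theta_m) = m * lam / d; the lens maps the
    arrival angle theta_m to the position x_m = f * tan(theta_m)."""
    positions = []
    for m in range(-m_max, m_max + 1):
        s = m * lam / d
        if abs(s) < 1:                      # the order must exist
            theta = math.asin(s)
            positions.append((m, f * math.tan(theta)))
    return positions

# HeNe light on a 100 lines/mm grating behind a 150 mm lens (assumed):
orders = order_positions(632.8e-9, 1e-5, 0.150, 3)
# The zero order (unbent light) focuses on the axis; each higher order
# lands farther out, sorted by how much the grating bent the light.
```

This is the sense in which the focal plane of the lens is the Fourier plane: position there encodes the angle (spatial frequency) at which light left the object.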

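As a final check on the high-pass edge enhancement described in the last few entries, here is a 1-D numerical sketch in Python (numpy assumed; the cutoff frequency is an arbitrary choice, not the dimensions of my actual mask). A central stop in the Fourier plane turns a half-plane edge into bright fringes with a dark dip at the edge position:

```python
import numpy as np

N = 512
x = np.arange(N)
# Half-plane object: transmittance 0 on the left, 1 on the right
half_plane = (x >= N // 2).astype(float)

spectrum = np.fft.fftshift(np.fft.fft(half_plane))
freqs = np.fft.fftshift(np.fft.fftfreq(N))

# High-pass binary filter: a central obstruction blocking |f| < 0.05
stop = (np.abs(freqs) >= 0.05).astype(float)
field = np.fft.ifft(np.fft.ifftshift(spectrum * stop))
intensity = np.abs(field) ** 2

edge = N // 2
# Intensity is concentrated near the edge (bright fringes on both
# sides) and falls off away from it, where transmission is constant:
# areas of slowly varying transmission are suppressed, as Birch says.
```

The dip between the two fringes sits at the geometrical image of the edge, which is what makes high-pass filtering useful for quantitative edge measurements.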