Journal

2014 | 2013 | 2012



Friday 15 August 2014

I stopped into the lab for a couple of hours this morning to finish up a few things, which included organizing the reimbursement submission copies, looking into how to use E-RAS vs. SBU Reporting (See 10 July 2014 journal entry), and tracking down an online pdf of the Nikon D3000 manual to figure out how to change the playback display mode back to the large preview (versus the overview data screen it’s been showing us).

I can’t believe the summer program’s over - but it was once again a rewarding experience for me. I really appreciated being able to return to the LTC to work with these exceptional students and guide them through the unique LTC research experience - what an invaluable opportunity for all of us!


Thursday 14 August 2014

This morning I prepared reimbursement forms for the summer expenses - after totaling everything (making sure to subtract tax this time), I organized the receipts into each expenditure category, filled out a cash voucher, and attached the Excel spreadsheet with details of each expense. I also finalized the overall list of 2014 expenses and sent off that spreadsheet to John. Chris and Eric from Eden’s group stopped by to have their pictures taken for the LTC website, which I then cropped to the usual 300 x 300 pixel squares.

I once again caught up with some of my web journals (see Art/Optics OPN article info below) and did some cleaning in the lab. I put away Ikaasa’s tape layers in a drawer labeled “quarter-wave plates / retarders / polarizers” (they’re in a small envelope with her name on it), packed up the spectrometer in its case and put it on the back room shelves (with the CMOS cameras), and wiped down the student desks.

Peter van der Straten, Hal’s collaborator from Utrecht who co-authored the book on Laser Cooling and Trapping with him, gave a talk this afternoon on “Spin drag in a Bose gas.” The physics was over my head, but it was definitely a good experience - and it was interesting that Hal remarked at the end that it created more questions than answers, which was a compliment to Peter and his research team.

I stumbled upon a very interesting OPN article a while back and hadn’t had a chance to write about it until now - Optical Insights into Renaissance Art (Falco and Hockney 2000), which investigated the use of optical aids by Renaissance painters. Charles M. Falco is an optics and physics professor in the University of Arizona’s College of Optical Sciences, and David Hockney is an English artist and photographer. (This paper is part of a long list of optics-art connections that Falco has researched.)

They argued that some Renaissance artists could have used a concave mirror (or lens) to project their subject directly onto the canvas, making it easier to draw out the correct proportions, etc. A good example is the painting Husband and Wife by Lorenzo Lotto (1523). You can see where the pattern on the tablecloth goes out of focus and has two vanishing points, as if Lotto had been using an optical element and had to reposition it at some point in order to continue drawing past the depth of field.

Knowing the geometry of the original scene and the size of the canvas, they determined the focal length of the lens used (this is an exercise that I then tried with my students during one of our morning discussions!). With that and the depth of field (the total distance on either side of the focal point in which the image is “sharp”), they also figured out the diameter of the lens used. If there were discrepancies in the magnification or vanishing points of different sections of the painting, this probably attested to the fact that the lens was repositioned during the painting process. Distortions in the painting could have resulted from the lens’s optical aberrations.
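Out of curiosity, here’s a little Python sketch of that focal-length exercise (the numbers are purely illustrative - not Lotto’s actual geometry):

```python
# Sketch of the focal-length exercise from the Falco-Hockney analysis:
# knowing the object distance and the magnification on the canvas, the
# thin-lens equation gives the focal length of the projecting lens.
# (Illustrative numbers only.)

def focal_length(object_distance, magnification):
    """Thin lens: 1/f = 1/s_o + 1/s_i, with magnification M = s_i / s_o."""
    image_distance = magnification * object_distance
    return (object_distance * image_distance) / (object_distance + image_distance)

# e.g. a subject 1.5 m from the lens, projected at ~1/3 life size:
f = focal_length(1.5, 1 / 3)
print(f"focal length ≈ {f:.3f} m")
```

The same bookkeeping, combined with the measured depth of field, is what lets you back out the lens diameter.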

There is a lot of controversy surrounding the Hockney-Falco thesis because of its implications, as if it were some sort of attack on the artists’ skill. I personally think this is a fascinating discovery that doesn’t belittle the artists’ talent, but rather shows their innovation and interest in using science to enhance their work. Falco suggests the artists learned about such possibilities from Alhazen’s Book of Optics, written in 1021. A good source that lays out the theory and evidence is this Brandeis website.


Wednesday 13 August 2014

This morning I caught up on detailing the lab’s summer expenses and created a requisition form for Libby’s housing expenses. (This proved to be especially time-consuming… We had previously established that the Requisition Form wasn’t printing with a requisition number because the form is often fickle with Macs and Safari, so I decided to fill it out on John’s computer - but Internet Explorer crashed every time I tried to open the form online. I then moved to the other Windows desktop computer in the lab, but Internet Explorer wouldn’t even open on it, and Firefox wouldn’t let me fill in any of the form boxes. Chrome appeared to be working okay, just incredibly slowly, until I tried printing/saving the finished form - everything just froze… So finally I moved back to my own Mac and used Chrome, which I might not have tried otherwise. I filled out the form once again, and when I hit “print” it prompted me to select a funding source (i.e. SBF) and gave me a requisition number - but deleted everything else I had typed in! Oh well. I handwrote the details after printing - at least we finally got the number to generate!)

In the afternoon there were a couple of grad student talks - the first was Spencer Horton’s oral exam. The title of his talk was “Development of an 8 eV Light Source for Measurements of Excited State Lifetimes and Direct Comparison of Weak and Strong Field Ionization.” He’s working in the ultrafast spectroscopy group with Prof. Weinacht. The second was Zakary Burkley’s masters thesis defense, titled “Towards Single Photon Nonlinearities using Cavity EIT (Electromagnetically Induced Transparency).” He’s been working in the quantum information lab with Prof. Figueroa.

Marissa stopped by this afternoon for a little bit! It was great seeing her. Libby and I showed her the summer students’ projects and the laser light show.

I spent a lot of time organizing the photos I had uploaded yesterday - I’ve now updated the lab photos page and the summer calendar page. I also created a page for the Simons students’ LTC tour and for the Simons symposium. I still have the originals on my computer if anyone wants a higher-quality copy of any of the photos.


Tuesday 12 August 2014

This morning was the poster symposium in the Wang Center for the Simons summer program. It was great seeing our students’ posters up and to also have the opportunity to look at other students’ work. Afterwards, there was a short awards ceremony where each student was presented a certificate for completing the research program.

We then gathered our group and had a nice farewell lunch at the Simons center cafe - 16 of us in total! There was John, me, Libby, Andrea, Marty, Alex and his parents, Jonathan and his mom, Ikaasa and her parents and aunt, and Hal with his friend and collaborator visiting from Holland - Peter van der Straten. Before saying our goodbyes, we took a group photo out on the cafe deck.

Back in the lab, I spent a lot of time sorting through pictures from the tour yesterday and the poster event today, uploading them to my laser account, resizing them and shrinking their overall quality. Tomorrow I’ll put them on a couple of separate webpages.


Monday 11 August 2014

We started off the day with a final morning discussion at the white board - I wrote down a bunch of important equations that we had discussed together at various points during the summer program. But I didn’t label any of them! Instead, I asked the students to go through each equation and explain what it meant, where it comes from, what a sketch of it would look like, etc. These topics included the binomial and small angle approximations, single- and double-slit equations, circular aperture equations and Rayleigh criterion, derivatives and integrals, resonance, Law of Malus, Euler’s formula, the golden ratio and Fibonacci sequence, and of course the pig toy. We hit a few snags along the way, but in general the students were able to give a basic review of each topic.
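For the record, here’s a quick numerical check of a couple of those whiteboard topics (the aperture size is just an illustrative value):

```python
import math

# Two of the whiteboard topics in numbers: the small-angle approximation
# and the Rayleigh criterion for a circular aperture.

theta = 0.1  # radians
print(math.sin(theta), theta)  # sin(theta) is very close to theta when theta is small

# Rayleigh criterion: theta_min = 1.22 * lambda / D
wavelength = 632.8e-9  # red HeNe line, meters
D = 2e-3               # assumed 2 mm aperture diameter
theta_min = 1.22 * wavelength / D
print(f"minimum resolvable angle ≈ {theta_min:.2e} rad")
```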

We then did a lab clean up - putting away unused optics, tidying up optics tables, organizing demos, getting “mood lighting” up and running (i.e. sodium lamp, UV light, Christmas lights), vacuuming, and discussing once more the structure of the tour.

I caught up on some missing journal entries from the previous week - it’s especially hard to keep up during crunch time!

Our lab tour for the Simons students was at 3:30 pm - I think it went well! We started off with everyone in the conference room for some opening remarks from John, which included a description of the LTC, why it’s special, what’s expected of its students (i.e. webpages and lab notebooks - my regular lab notebook and mini notebook were passed around as examples), and how important optics is to all types of research. We then divided the students into 4 groups of about 6-8 students. While the first group moved into the lab, the others stayed behind in the conference room to do interactive demos with John.

  1. Jonathan showed the interferometer and rubber band demo and then talked about his Airy beam project

  2. Andrea talked about her single bubble sonoluminescence project

  3. Alex showed his optical tweezers setup

  4. Ikaasa talked about polarized light and the spectrometer

  5. Libby ran the laser light show.

The groups that finished first circled back to the conference room to see the demos they had missed. I made sure the whole event ran smoothly and took plenty of pictures! Overall, the tour ended up lasting about an hour and a half (3:30pm - 5pm), and I think that it worked well that we had it on the day before the symposium, since everyone had basically finished their abstracts, posters, and research.

Tomorrow we’re planning to have a nice farewell lunch at the Simons Center Cafe with LTC students, parents, and mentors. For future reference - the number to call for reservations at the Cafe is 2-2881.


Friday 8 August 2014

I spent most of the morning discussing posters with Ikaasa and Jonathan and reviewing them with John and Marty.

I also talked a little bit with the group about responsibilities for the upcoming lab tour - they’ll each be giving 5-10 minute mini demonstrations about their projects or other related demos. They’ll need to start off with some good background about the physics involved, since many of these Simons students are working in other fields and may not have a good understanding of optics. Then the idea is to try to involve the students - explain things to them, but not lecture them - give them the opportunity to learn by participating.

Last year we did parallel sessions, and that worked pretty well. But this summer John suggested we try doing it in series, with the laser light show as the grand finale, so I was trying to figure out the best way to organize that. I worked through a plan in which the students will move through the lab from Jonathan, to Andrea, to Alex, to Ikaasa, and finally to Libby for the light show - while groups are waiting to enter the LTC, they’ll do general demos with John in the conference room.

We tidied up the lab a little bit in the afternoon - but the bulk of the clean-up will have to be done Monday morning.

We’ve reached the end of the summer crunch time! The last week is always a little hectic - everyone’s trying to finish last minute things with their projects while doing abstracts and posters, and they should also be keeping up with their web journals and lab notebooks. We’ve got the Simons students’ lab tour Monday afternoon and their poster symposium is Tuesday morning.


Thursday 7 August 2014

Rachel gave a great informational talk on spatial light modulators this morning. She started with a little background on liquid crystals, then covered the different types of SLMs (reflective vs. transmissive; VAN, PAN, and TN), their applications, their advantages and disadvantages, how to control them (either by means of an extended monitor with a Paint, PowerPoint, or Photoshop mask, or using a separate monitor with a Matlab- or Mathematica-generated mask), and how to encode a phase mask and optimize it with a blazed grating instead of a binary one (the laser should also completely fill the SLM’s sensitive area to avoid extra, undesired diffraction).
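Here’s a minimal sketch of the kind of phase masks Rachel described (the SLM width and grating period below are made-up values, just for illustration):

```python
import numpy as np

# A blazed grating ramps the phase linearly from 0 to 2*pi across each period,
# steering light into a single diffraction order; a binary (0 / pi) grating
# instead splits power between symmetric orders. Hypothetical SLM parameters.

width, period = 512, 32                            # pixels; grating period in pixels
x = np.arange(width)

blazed = (2 * np.pi * x / period) % (2 * np.pi)    # sawtooth phase ramp
binary = np.pi * ((x // (period // 2)) % 2)        # alternating 0 / pi stripes

# Gray levels 0-255 would then map onto the SLM's 0-2*pi phase range:
gray_blazed = np.round(blazed / (2 * np.pi) * 255).astype(np.uint8)
print(gray_blazed[:8])
```

A mask like this could then be displayed on the SLM’s extended monitor, as in the control schemes she described.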

Libby, Ikaasa, John and I had a nice lunch at the Simons Center Cafe with Rachel and Niyati Desai (another Simons student working in the physics department) and discussed the Stony Brook Honors College versus the WISE program and choosing colleges in general.

In the afternoon, we worked with Jonathan for a little while to help him model the theoretical expansion of a Gaussian whose beam waist starts at the same narrow width as his Airy beam, to drive home the fact that the Airy beam is non-diffracting. I also talked with Ikaasa more about her project and the math behind it. It seems that those articles I found at the end of the day yesterday aren’t directly relevant, and that the math Ikaasa did shows that rotating a half-wave plate on top of another one doesn’t turn the pair into a quarter-wave plate. We also discussed modeling the birefringence versus wavelength, and she was able to plot this for one layer of tape using her 10-layer data.
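Just to document the Jones-calculus result: a quick numerical check (assuming ideal, lossless wave plates) that stacking two half-wave plates with a relative angle gives a pure polarization rotator - not a quarter-wave plate - consistent with what Ikaasa found:

```python
import numpy as np

def half_wave_plate(theta):
    """Jones matrix of an ideal half-wave plate with fast axis at angle theta."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[c, s], [s, -c]])

def rotator(phi):
    """Pure rotation of the polarization plane by angle phi."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s], [s, c]])

a, b = 0.2, 0.7  # arbitrary fast-axis angles (radians)
combined = half_wave_plate(b) @ half_wave_plate(a)

# The product is exactly a rotation by 2*(b - a); no quarter-wave retardance appears.
print(np.allclose(combined, rotator(2 * (b - a))))  # True
```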


Wednesday 6 August 2014

In the morning I helped individual students with their questions about the research update presentations and the symposium posters. I also worked with Ikaasa - she’s trying to figure out if the theory actually works as far as turning two half wave plates (one rotated with respect to the other) into a quarter wave plate. Using an angle between them that she had calculated with Jones calculus, she got some interesting but unexplainable experimental results, so we’ll need to look more into the physics of what’s happening…

Our pizza lunch meeting this week featured Rachel Sampson, a Stony Brook University rising junior. I worked with her last summer in the LTC, so it was great seeing her again! She spent her summer doing a research experience at the University of Rochester in the Boyd group - her project was titled “Sorting Laguerre-Gaussian Radial Modes Via Projective Measurement,” which was related to her work here in the LTC. After her talk and discussion, we had our students give their research update presentations (with a little more background than last time so that Rachel could be brought up to speed on their projects). Rachel’s abstract and the student research update talk titles can be found here.

We ordered more cylindrical lenses from Thorlabs for Jonathan’s project after he found out it was possible to have the shop put a gold reflective coating over them to make cylindrical mirrors - we got one LK1487L1, an uncoated plano-concave with a -400 mm focal length, and one LJ1363L1, an uncoated plano-convex with a 400 mm focal length.

Throughout the afternoon, I worked more with Ikaasa, and with Marty too, to try to figure out the physics behind her project. Marty’s not quite sure why the “half-wave plate” maxima are flattening out when she rotates one stack of tape on top of the other. At the end of the day, I found a few articles that I didn’t get a chance to really go through yet, but they seem relevant. The first is Achromatic combinations of birefringent plates (Pancharatnam 1955). This article (part 1 of 2) talks about using two half-wave plates and a quarter-wave plate of the same birefringent material to make an achromatic circular polarizer. But it does mention a previous investigation that used two birefringent plates to “transform incident circularly polarized light to plane polarized light... or vice-versa.” There’s also an interesting note about why this can’t be called an achromatic quarter-wave plate, but rather an “achromatic circular polarizer.” That earlier work was “Réalisation d'un quart d'onde quasi achromatique par juxtaposition de deux lames cristallines de même nature” (“Making a quasi-achromatic quarter-wave plate by juxtaposing two crystalline plates of the same material”; Destriau and Prouteau: J. Phys. Radium 10, 2 (1949) 53-55). I unfortunately haven’t been able to find a translation of the full French text yet.

Part 2 of Pancharatnam’s article discusses “superposing birefringent plates in such a manner that the combination as a whole behaves as an achromatic quarter-wave plate.” But it may just be producing elliptically polarized light… We should also do some research into other articles of this type. John pointed out that Pancharatnam is well known for his discovery of a geometric phase (now named after him) for polarized light passing through crystals.


Tuesday 5 August 2014

Today I sent out a couple of poster templates from last summer that the students can use to start making their posters and had another brief discussion with them about these. These are posters that are meant to be presented, so they really shouldn’t include too much text, but just enough so that someone looking at the poster alone can get a good gist of the project. It takes time to put together an informative and aesthetically pleasing poster that has a good balance of words and graphics. Also this morning Ikaasa very generously bought Starbucks for the lab with the extra money on her meal card :)

I spent some time with John trying to figure out an invite list for tomorrow’s pizza lunch. It was decided that he’d cc: many of the LTC alums about the event, since it’s the last Wednesday meeting for this year’s summer program. Though the talk is featuring Rachel’s research experience at the University of Rochester this summer, our current students will give brief research updates as well.

Since particle rotation wasn’t working out the way Alex had hoped in his tweezers setup, he decided to try using a different liquid than water. We went on a small hunt around the lab and stumbled upon some mineral, baby, and vegetable oils that he’ll try. Note - the “chemicals” cabinet (in the back room with the wooden optics table) actually contains more than just traditional chemicals; it’s also got various oils, syrups, sugar, and baking ingredients.

Ikaasa is still working on creating a quarter-wave plate at the wavelengths for which the tape acts as a half-wave plate. It’s possible that the intersection points between the transmission curves for crossed and parallel polarizers signify the wavelengths for which the tape acts as a quarter-wave plate already. Using this idea, we saw that (between crossed polarizers) when she rotated her second 6-layered filter at about a 170 degree angle, the old maxima (which signified where the filter acted as a half-wave plate) dipped down a little bit past where the intersection points were - which agrees with what we were thinking and possibly illustrates a whole group of wavelengths where the two filters together act as quarter-wave plates. Now it’s a matter of playing around with the angle until those dips plateau and checking that this theory makes sense in a model.
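Here’s a rough model of why those intersection points should mark the quarter-wave wavelengths. It assumes an ideal retarder at 45° to the polarizer and a wavelength-independent birefringence, with a made-up value for the total retardance Δn·d:

```python
import numpy as np

# Ideal retarder at 45 deg between polarizers:
#   parallel polarizers: T = cos^2(delta/2)
#   crossed polarizers:  T = sin^2(delta/2)
# where delta = 2*pi*dn*d/lambda. The dn*d value is hypothetical.

dn_d = 1.4e-6                              # assumed birefringence * thickness (m)
lam = np.linspace(400e-9, 700e-9, 20001)   # visible wavelengths
delta = 2 * np.pi * dn_d / lam             # phase retardance (radians)

T_crossed = np.sin(delta / 2) ** 2
T_parallel = np.cos(delta / 2) ** 2

# The two curves intersect where both equal 1/2, i.e. delta = pi/2 + k*pi --
# exactly the quarter-wave (and three-quarter-wave, etc.) condition.
idx = np.where(np.diff(np.sign(T_crossed - 0.5)) != 0)[0]
print(np.round(lam[idx] * 1e9))  # crossing wavelengths, in nm
```

With real tape the birefringence varies with wavelength, so the crossings shift, but the δ = π/2 + kπ logic is the same.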

It seems as though Jonathan created an Airy-like beam from two reflective polarizers curved by attaching them to pieces of an empty paper towel roll. Alex was able to get a good video on my iPhone of a copper oxide particle spinning and then stopping after he removed the spiral phase plate from his setup (possibly proving that it was spinning due to the transfer of angular momentum from the optical vortex).


Monday 4 August 2014

This morning Ikaasa started us off with a derivation of the Law of Malus with Jones calculus. It was fairly straightforward - she started with a diagram of linearly polarized light going through a second polarizer with its axis oriented at some angle relative to the first. Using Jones calculus, a linear polarizer is sandwiched between a basic rotation matrix and its negative version, and this is multiplied by the incident light’s electric field. The result squared and put into a ratio of output intensity over incident intensity boils down to cos^2 of the angle between the incident polarized light and the axis of the second polarizer.
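Ikaasa’s derivation is easy to check numerically - here’s the same sandwich of rotation matrices around a linear polarizer, reducing to cos² of the angle:

```python
import numpy as np

# Numerical version of the Law of Malus derivation: a polarizer at angle theta
# is R(theta) @ P0 @ R(-theta); acting on x-polarized light, the transmitted
# intensity ratio reduces to cos^2(theta).

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

P0 = np.array([[1, 0], [0, 0]])              # linear polarizer along x

def transmitted_intensity(theta):
    E_in = np.array([1.0, 0.0])              # incident light polarized along x
    E_out = rotation(theta) @ P0 @ rotation(-theta) @ E_in
    return np.dot(E_out, E_out)              # intensity proportional to |E|^2

for theta in np.linspace(0, np.pi / 2, 7):
    assert np.isclose(transmitted_intensity(theta), np.cos(theta) ** 2)
print("Law of Malus recovered: I/I0 = cos^2(theta)")
```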

I then had a conversation with the students about posters. (The poster symposium is a week from tomorrow, yikes!) It’s important that they start collecting photos of their setups, making detailed diagrams and graphs, and brainstorming text. John will send them a template from a previous year to work from. We’ll probably need to have these done by Friday at the latest so that we’ll have enough time to get them printed before Tuesday morning.

John, Libby, Andrea, and I had lunch with Doreen, the Academic Advisor of Stony Brook University’s WISE program - Women in Science and Engineering. It’s a 20-year-old program set up to encourage and support undergraduate women majoring in the science, math, and engineering fields. Throughout the academic year they have special events and lectures, as well as an upperclassman-to-underclassman mentoring program. Something like this would have been great to be a part of as an undergraduate - in my graduating class, I was actually the only female physics major! Though since Dickinson was a small school, I received a lot of support from the department and my fellow majors. I think WISE is especially beneficial at a large university, since it gives students the opportunity to bond in a smaller community setting.

A couple of other random tidbits from the day - (1) We got our replacement filter today - the dielectric short pass with the 600 nm cutoff from Thorlabs. And they sent even more lab snacks… (2) The power meter probe is unfortunately missing - Alex needed to use it. We found the meter itself, but the probe wasn’t attached, nor was it in the surrounding area or anywhere else visible in the lab. (3) Alex captured some videos of trapped particles behaving strangely using my iPhone camera - at some points large particles are repelled from the trap, and at other times a bubble-like thing formed when the laser was focused on very large particles. (4) Andrea is working on understanding resonance in an LRC circuit and is currently trying to model her data using Excel. (5) Ikaasa is cleverly using Jones calculus to see how she can use her half-wave plate setup to create a quarter-wave plate. (6) Jonathan is now trying to make cylindrical mirror-like devices to repeat his experiment with a new twist.

Also, John went over to Paul (who does poster printing) in the library to ask about printing deadlines, and he got into a conversation about one-way mirror cubes. What a really neat piece of optics art!


Friday 1 August 2014

Again we put off our morning discussion so that the students could work on their writing (hopefully we’ll have time to start these up again next week!). We’ll still need to have Ikaasa do her Law of Malus derivation with Jones calculus, and I have a couple of other small things I wanted to do with the students as well.

John and I did one-on-one editing with the students - going through their abstracts line-by-line. A good abstract for the webpage and Simons symposium will have an opening with some background about the physics involved and the motivation (starting broadly, and then zeroing in on the focus of the project), a middle paragraph with the experimental methods and results, and (if necessary) a final paragraph with future prospects. It should cite any key papers that the student used and include acknowledgments and the source of financial support (in this case the Simons Foundation).

The CMOS camera arrived today (with more lab snacks)! However, like all Thorlabs equipment, the software is only compatible with Windows computers… I suggested that Alex try downloading the free image/video capture software from Bodelin Technologies - it’s meant to be used with the company’s own ProScope camera (which I had used during my honors thesis research), but it can capture stills and video from any generic USB camera. However, it turned out that the problem was the camera itself not even being recognized on a Mac.

So I did another live chat with a Thorlabs rep and explained that we recently purchased this USB 2.0 CMOS camera (DCC1545M), but the computers we’d like to use it with run Mac OS X. I mentioned that we’re a bit disappointed it apparently can’t be used without installing the included Windows 7 software, and asked whether they had any suggestions for how we could utilize the camera (i.e. any additional drivers we could download, or other camera capture software compatible with a Mac). They apologized for the inconvenience, but said that the camera doesn’t currently support Mac OS directly, and the only option is running it in a Windows virtual machine. How unfortunate!

I also uploaded some pictures from various LTC events/moments and created a lab photos page. I shrunk a lot of the pictures using Linux’s resize command, but I think I’ll also need to decrease their quality, since the file sizes are still quite big.


Thursday 31 July 2014

Today we didn’t do a formal morning discussion again so that the students could work on their abstracts and update their web journals. Though it’s great to get wrapped up in a good project, it’s important to continue to document what’s being accomplished and learned. Being able to communicate well is an important skill for a researcher. Keeping up with the web journals also allows the mentors to be able to gain insight into students’ understanding - sort of like a “window into their head” as John has called it.

So I spent that time updating my own webpage - catching up on journal entries and updating the calendar page. I then repackaged the Thorlabs part (filter with a cutoff that’s too high for a red HeNe) that we’re returning. It’s great that the company always takes care of things very promptly - I emailed them yesterday evening about wanting to exchange the part, and they replied within the hour to give us return instructions and let us know that they’re already in the process of shipping us out the replacement.

There was an AMO physics seminar this afternoon by Michael Keller of the University of Vienna, whose talk was titled “Towards experiments with momentum entangled He* atom pairs.” He described the process of creating a metastable helium source and then the slowing and cooling of this beam to create a BEC. He then talked about how to create momentum entangled atom pairs and his lab’s single-atom detector (which costs something on the order of €50k) for reconstructing 3D momentum space. As usual, the physics at these specialized talks is sometimes too advanced for me to get everything, but I always enjoy the experience and find discussions about quantum entanglement very fascinating. After his talk, Michael came into the LTC for a brief tour.

We also started looking over student abstracts today. As always, these types of things take a lot of editing and re-editing to make sure the wording is just right for what we’re trying to convey. It can even take several hours sometimes just to formulate a suitable title! Tomorrow we’ll do some one-on-one (or rather, two-on-one) revising with the students.


Wednesday 30 July 2014

This morning we skipped our daily talks so that the students could focus on preparing for the pizza lunch. John and I spent some time discussing with Ikaasa the best way to plot the inverse wavelengths of the minima (in her parallel polarizers 10-layer filter setup) as a function of “m” (odd integer multiples of pi) to signify how the filter acts as a half-wave plate for these wavelengths of light.
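For reference, here’s a sketch of the plot and fit we discussed, with synthetic numbers standing in for Ikaasa’s measured minima:

```python
import numpy as np

# For a half-wave plate the retardance satisfies 2*pi*dn*d/lambda = m*pi with
# m an odd integer, so 1/lambda is linear in m with slope 1/(2*dn*d).
# Fitting a line to the measured minima therefore recovers dn*d.
# (The data below are synthetic; real minima wavelengths would replace them.)

dn_d_true = 3.0e-6                        # hypothetical total retardance (m)
m = np.array([9, 11, 13, 15, 17])         # odd half-wave orders
inv_lambda = m / (2 * dn_d_true)          # stand-in "measured" 1/lambda values (1/m)

slope, intercept = np.polyfit(m, inv_lambda, 1)
dn_d_fit = 1 / (2 * slope)
print(f"recovered dn*d = {dn_d_fit:.2e} m")
```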

Our pizza lunch this week featured student research updates in the form of short powerpoint presentations. We invited a much smaller group than the previous few weeks so that the meeting could focus more on discussions about our individual student projects. When I get a chance, I’m going to make a link from the calendar page with descriptions of each student’s progress.

This afternoon I worked mostly with Ikaasa and Jonathan to help them figure out some issues with graphs they were trying to make - (for Ikaasa it was normalizing transmission data of light through a 10-layer filter and crossed polarizers, for Jonathan it was fitting the Airy beam deflection theory to his data). By the end of the day, both were resolved! (Marty and Ikaasa realized that something was fishy about her oscillating input light and fixed the graph after retaking the data, and Jonathan and I realized that being off in his beam FWHM measurement by two pixels was causing the theoretical curve to look radically different than his data, since there’s a 1/w^3 dependence).
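To illustrate just how sensitive that theory curve is, here’s a quick calculation of how a two-pixel error in the FWHM rescales a 1/w³ curve (the pixel size and pixel counts below are assumptions, not Jonathan’s actual numbers):

```python
# With a 1/w^3 dependence, a small error in the measured beam width w
# rescales the entire theoretical curve by (w_true / w_wrong)^-3.

pixel = 5.2e-6                 # assumed camera pixel size, m
w_measured = 40 * pixel        # hypothetical FWHM: 40 pixels
w_off = 42 * pixel             # the same measurement, off by two pixels

ratio = (w_measured / w_off) ** -3   # scale factor on the 1/w^3 curve
print(f"curve rescaled by {ratio:.3f}x")  # ~16% change from a two-pixel error
```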

We received part of the Thorlabs order (the CMOS camera is backordered at the moment); however, it turns out that the filter we ordered doesn’t actually block a 632 nm beam. We had ordered the short pass with a 650 nm cutoff - after doing some research into the product specifications, we decided to exchange it for a short pass with a 600 nm cutoff (FES0600). Since Thorlabs’ online RMA form generator wasn’t working, I did a “Live Chat” with a sales representative, and they suggested that I email the RMA department directly. Hopefully we can take care of this as quickly as possible!


Tuesday 29 July 2014

For our morning discussion, Andrea talked about the derivation of the intensity of sound in decibels as a way to quantify loudness. This equation takes into account the sensitivity of the ear, since we have a certain threshold of hearing. She then went through a calculation she had done earlier with John, which compares the theoretical wavelength of the sound wave in the spherical flask of her sonoluminescence project with the radius of the flask. However, we still need to better understand what the standing waves look like in a spherical cavity.
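The decibel formula itself is simple to play with numerically - the reference intensity is the standard threshold of hearing:

```python
import math

# Sound intensity level in decibels: L = 10 * log10(I / I0),
# where I0 = 1e-12 W/m^2 is the standard threshold of hearing.

I0 = 1e-12  # W/m^2

def sound_level_dB(intensity):
    return 10 * math.log10(intensity / I0)

print(sound_level_dB(1e-12))  # 0 dB: the threshold of hearing itself
print(sound_level_dB(1e-5))   # 70 dB: ten million times the threshold intensity
```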

This paper, which looked at basketballs as spherical acoustic cavities (Russell 2009), modeled some of the pressure nodal surfaces for the lowest modes. But John pointed out that acoustic waves in a fluid will be different from those in air. These lecture notes provide some basic information about pressure and velocity nodes and antinodes. Andrea also found this colloquium presentation that specifically looks at resonance in spherical cavities. This paper (Kontharak 2008) also goes through the theory of capturing bubbles and spherical acoustic mode geometry.

In the conference room, we had a group brainstorm about abstracts in which each student listed some key words and phrases that should be included and thought about possible titles to capture the overall concept of their projects.

  • Ikaasa:

    birefringence, polarization, polymer (i.e. cellophane - though not all cellophane works), Jones calculus (and possibly the Law of Malus derivation), retardance - (half and quarter) wave plates, these methods can be used to create a tunable bandpass filter, spectrometer (with specific model number and results), Excel spreadsheet, Mathematica, evaluated simple Jones matrices by hand.

    Understanding/Examining birefringent properties of cellophane; other good verbs: Applying, Measuring, Modeling, Simulating, Studying

  • Jonathan:

    Airy beam (generating and properties of, history), propagation invariant, compared to straight line propagation, accelerating motion, diffraction, cylinder lenses (utilized those found on hand in lab), Fourier optics, cubic phase profile, coma aberration, compare to other methods - e.g. SLM, equipment, description of setup, light source, camera

    Generating/Creating 1D Airy beams with cylindrical lenses (i.e. simple optical elements)

  • Alex:

    optical vortex, optical tweezers, orbital angular momentum, gradient - scattering forces, torque, spiral phase plate, inverted microscope, particles that were trapped (yeast, latex spheres, copper oxide), topological charge, video camera (model number)

    Demonstrating the transfer of optical angular momentum to particles trapped in optical tweezers; other verbs: Quantifying, Analyzing, Studying

  • Andrea:

    modes, acoustics, frequencies, description of single bubble sonoluminescence, spherical flask, resonance

Right after lunch, before most of the students had returned, a tour group of incoming freshmen from the CSTEP program - Collegiate Science and Technology Entry Program - came through hoping to speak with Hal. We couldn’t find him, but I gave them a brief tour of the LTC anyway - and since Alex was the only student back from lunch, he gave them an explanation of his tweezers setup. I think they’ll be returning with more students another day!

We made another Thorlabs purchase today. First we decided to buy a dielectric filter - the FES0650, a shortpass filter with a 650 nm cutoff - to attenuate the beam incident on the camera. We then spent some time comparing CCD cameras to the CMOS we bought last time, and after considering prices and the uses in our lab, we decided to buy another compact CMOS. We’ll also buy an external C-mount to internal SM1 adapter (SM1A9) to put the filter on the camera - after doing a little bit of research about the difference between camera C-mounts and CS-mounts, this source proved to be useful. The difference is in the flange focal distance - the distance from the mounting flange (the metal ring on the camera and the rear of the lens) to the film plane - which is shorter for CS-mounts (12.5 mm, vs. 17.526 mm for C-mounts).

Also this afternoon I helped Alex get together a photodetector for beam profiling and tried to help Ikaasa normalize and graph her data. Overall, today turned out to be another great productive day!


Monday 28 July 2014

This morning Jonathan started us off with a short lesson on Fourier series and the Fourier transform. The math is a little advanced, but the important difference to understand is that a Fourier series is used when a periodic wave can be decomposed into discrete frequencies, whereas a Fourier transform is used when a non-periodic wave (T → infinity) is decomposed into a continuous spread of frequencies (so there is a certain level of uncertainty). He then talked about Fourier optics and how one can find the Fourier transform of an object or aperture.
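To make the "discrete frequencies" idea concrete for myself, here's a small Python sketch (my own toy example, not part of Jonathan's lesson): the Fourier series of a square wave contains only odd harmonics, with sine coefficients 4/(nπ).

```python
import math

def fourier_coefficient(f, n, T=2 * math.pi, samples=20000):
    """Numerically estimate the sine coefficient b_n of a T-periodic function f
    via b_n = (2/T) * integral of f(t) * sin(2*pi*n*t/T) over one period."""
    dt = T / samples
    return (2 / T) * sum(
        f(i * dt) * math.sin(2 * math.pi * n * i * dt / T) * dt
        for i in range(samples)
    )

# A square wave: +1 on the first half period, -1 on the second.
square = lambda t: 1.0 if (t % (2 * math.pi)) < math.pi else -1.0

# Only odd harmonics survive, with amplitude 4/(n*pi) -- a set of discrete
# frequencies, which is the hallmark of a Fourier *series* for a periodic wave.
for n in range(1, 6):
    print(n, round(fourier_coefficient(square, n), 3))
```

The even coefficients come out essentially zero, and the odd ones match 4/(nπ) to the accuracy of the numerical integration.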

I worked a little bit on my webpage today - uploaded and resized a bunch of photos (using the convert command: convert oldfilename.jpg -resize widthxheight newfilename.jpg) and then used these to update the pizza lunch page for Wednesday 23 July and the 2014 calendar page.

Finally today I took care of the property control form for the spectrometer - sent it through campus mail from the physics office. Later in the day I took a walk down to the Stony Brook Foundation office to submit student support forms.

I spent some time with Ikaasa talking about how she can normalize her data if different sets were collected using different integration times. (She had to use a shorter integration time to attenuate the signal produced by the incident light, since it was otherwise flooding the spectrometer and flatlining in the data.) We came to the understanding that if the integration time was (for example) 8 times longer than the previous sampling, we could divide that data by 8 to account for this. (This is a good informational source about spectrometers that Ikaasa found.)

We then had a discussion about normalizing the data and taking into account the incident light, transmission through each polarizer, and the differing integration times. John is under the impression, which makes sense, that once the data is normalized (dividing output by the input), the filter’s transmission peaks should all be about the same height.
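The bookkeeping is simple; here's a minimal Python sketch with made-up numbers (assuming counts scale linearly with integration time and the detector isn't saturating):

```python
def normalize_counts(counts, integration_time_ms, reference_time_ms):
    """Scale a spectrum taken at one integration time so it is comparable to
    a spectrum taken at a reference integration time (counts scale linearly
    with integration time, assuming no saturation)."""
    scale = integration_time_ms / reference_time_ms
    return [c / scale for c in counts]

# Made-up example: data taken at 8x the reference time gets divided by 8.
raw = [800, 1600, 2400]
print(normalize_counts(raw, integration_time_ms=80, reference_time_ms=10))
# [100.0, 200.0, 300.0]
```

After this step the output/input division for the filter transmission can be done on a common footing.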

Simons abstracts are due a week from today, and the posters not long after that! The summer is really flying by.


Thursday 24 July 2014

Alex started us off with a continuation of the double slit intensity derivation. He did a good job of going through the simplifications and trig identities needed to reduce the intensity equation down to its 4cos^2[ ] form. Then we derived the path length difference in two different ways - the first assumed that the lines connecting each slit to the same point on the screen are approximately parallel near the slits, and the second used the Pythagorean theorem and the binomial approximation. Both arrived at the same answer in the end - so it really came down to a matter of where you make your approximations.
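The agreement between the two routes is easy to check numerically; here's a quick sketch with made-up slit parameters comparing the exact (Pythagorean) path difference with the parallel-ray d·sinθ approximation:

```python
import math

def exact_path_difference(d, L, y):
    """Path difference via the Pythagorean theorem: distances from each slit
    (separated by d, a distance L from the screen) to a point y on the screen."""
    r1 = math.hypot(L, y - d / 2)
    r2 = math.hypot(L, y + d / 2)
    return r2 - r1

def approx_path_difference(d, L, y):
    """Parallel-ray approximation: delta = d * sin(theta), theta = atan(y/L)."""
    return d * math.sin(math.atan2(y, L))

d, L = 50e-6, 1.0   # 50-micron slit separation, 1 m to the screen (made up)
y = 0.01            # point 1 cm off-axis
print(exact_path_difference(d, L, y), approx_path_difference(d, L, y))
```

For d much smaller than L the two agree to many digits, which is exactly the point about where the approximations get made.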

I continued to have individual discussions with students about their projects. It’s challenging trying to keep track of everyone’s research - to be able to have an in-depth conversation with a student about their work and to check their understanding, the mentor first has to have a good understanding of what’s involved. So I’ve been spending a lot of time reading up on the theory behind each person’s work. It was one thing being a student researcher and diving into my own project, but now as a mentor, I have to dive into four different ones!

A couple of odds and ends from the day - (1) We received the package with the adaptor threads already. They arrived this morning even though we only paid for “next-day PM” shipping. Thorlabs is pretty fast - and they sent lab snacks this time! (2) Since we’ll be supporting a couple of Eden’s students, I helped take care of the necessary paperwork, and we’ll also set up webpages for them soon. (3) I finally worked on (and completed) the Responsible Conduct of Research (RCR) course through the CITI program (Collaborative Institutional Training Initiative). There were short modules and quizzes about research misconduct, data management, authorship, peer review, mentoring, conflicts of interest, and collaborative research.

At the end of the day, I tried to help Ikaasa figure out a Jones calculus problem in the textbook that she had been stumped by. The tricky part was figuring out the Jones matrix for a half wave plate with its slow axis oriented at 45 degrees. I didn’t have time to work out the derivation yet, but this source talks about the general matrix form for a wave plate with its slow axis oriented at a certain angle and the specific form for a HWP with a 45 degree slow axis.
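The general rotated wave plate matrix can also be sanity-checked numerically: M = R(−θ)·diag(1, e^{iΓ})·R(θ), with Γ = π for a half wave plate. A stdlib Python sketch (note that sign and phase conventions vary between texts, so this is just one common form, and only correct up to an overall phase):

```python
import cmath, math

def rotation(theta):
    """2x2 rotation matrix R(theta)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, s], [-s, c]]

def matmul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def wave_plate(retardance, theta):
    """Jones matrix of a wave plate with the given retardance (pi for a HWP)
    and slow axis at angle theta: M = R(-theta) diag(1, e^{i*retardance}) R(theta),
    up to an overall phase (conventions differ between sources)."""
    M0 = [[1, 0], [0, cmath.exp(1j * retardance)]]
    return matmul(rotation(-theta), matmul(M0, rotation(theta)))

# HWP with slow axis at 45 degrees: the matrix becomes purely off-diagonal,
# i.e. it swaps the x and y polarization components (up to an overall phase).
hwp45 = wave_plate(math.pi, math.pi / 4)
for row in hwp45:
    print([complex(round(z.real, 6), round(z.imag, 6)) for z in row])
```

The 45-degree case collapses to the off-diagonal "swap" matrix, which is what makes the textbook problem work out.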


Wednesday 23 July 2014

Today our morning discussion featured a short lesson about Jones calculus from Ikaasa. Jones calculus is used to mathematically describe the polarization of light as it emerges from an optical element (such as polarizers, wave plates, birefringent materials, etc.). Optical elements are represented as matrices, and the incident light’s polarization is represented as a vector. The product of these will tell you the polarization of light that gets transmitted. Ikaasa gave a nice introduction to the representation of a wave’s electric field in matrix form and then walked us through a derivation for the Jones vector of circularly polarized light.

We had a group meeting in the conference room to discuss student research updates. Libby has built her basic Michelson interferometer setup with collinear paths; however, she still needs to make the path lengths equal. Eventually she’ll replace the laser with an LED and one of the mirrors with a touch surface. Jon has been able to create an Airy-like beam; however, he’ll need to account for the room light flooding the camera and possibly write to the author with other questions about the experiment. Alex has been able to get the tweezers setup up and running and actually trap a particle; he’ll need to deal with some issues with light scattering off the particles and look into the use of the phase plate.

Hal took the annual LTC and AMO group photo outside on the steps of the earth and space sciences building. We took an LTC group photo as well.

At our pizza lunch today, Eden Figueroa and his students gave a talk about the concepts of quantum information processing and quantum information technology. Quantum information is a very interesting new field that makes use of quantum entanglement and superposition states. Their lab is working to interface single photons and rubidium atoms through the use of coherent laser control. After an introduction by Eden, Mehdi Namazi talked about building a quantum memory for qubits, Bertus Jordaan gave a talk about the production of single photons tuned to atomic transitions, Zakary talked about making single photons interact, and finally undergrads Chris Ianzano and Eric Fackelman discussed their work in characterizing high-finesse optical cavities. We then had a short tour of their (relatively new) lab.

As far as paperwork, today I prepared more documentation for reimbursement under the supplies, postage and shipping, and printing categories. I also looked into buying adaptor threads for the Laser Diode and SLM that we bought from the UK (which have metric threads). There’s the AP25E6M (external M6 to external 1/4”-20) and the AS25E6M (external M6 to internal 1/4”-20). We chose next-day FedEx shipping, so hopefully they’ll be here by tomorrow afternoon!


Tuesday 22 July 2014

Our morning discussion featured a derivation of the single slit minima equation, led by Libby. She did a good job of explaining all of the details, starting from the diagram, creating an equation for the incident wave amplitude, integrating this over the length of the slit, squaring this to get the equation for intensity, and then solving for the zeros of the function - i.e. when we have minima in the diffraction pattern. I wrote this up in my lab notebook. The plan for tomorrow is to have Ikaasa give an introduction to Jones calculus.
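Libby's end result is easy to verify numerically: the Fraunhofer single-slit intensity is a sinc²-shaped function whose zeros sit exactly where a·sinθ = mλ. A sketch with made-up slit parameters:

```python
import math

def single_slit_intensity(theta, a, wavelength):
    """Fraunhofer single-slit intensity (normalized to 1 at theta = 0):
    I = (sin(beta)/beta)^2 with beta = pi * a * sin(theta) / wavelength."""
    beta = math.pi * a * math.sin(theta) / wavelength
    if beta == 0:
        return 1.0
    return (math.sin(beta) / beta) ** 2

a, lam = 100e-6, 650e-9  # 100-micron slit, 650 nm light (made-up values)

# Minima occur where a*sin(theta) = m*wavelength:
for m in (1, 2, 3):
    theta_min = math.asin(m * lam / a)
    print(m, single_slit_intensity(theta_min, a, lam))
```

The printed intensities are zero to floating-point precision, confirming the minima condition from the derivation.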

Again I worked with individual students, helping them with little things here and there with their projects. We also had a long talk with Andrea about possible projects - first we reviewed the two articles she had been focusing on - the Amateur Scientist article about Hele-Shaw cells (which looked at qualitatively studying the behavior and interactions of bubbles in a fluid) and the study of liquid-liquid interfaces article (which quantitatively looked at how the interface between two fluids changes with the passage of a bubble). We also did a couple of Am. J. Phys. searches and found a couple of interesting sonoluminescence articles - one about creating a single sonoluminescent bubble (Seeley 1998) and the other about using a strobed LED to measure the bubble radius (Seeley 1999). Then we checked out the sonoluminescence apparatus created by TeachSpin - they have a great brochure with information about the physics behind sonoluminescence and various experiments that can be conducted to understand resonance, etc. This then led us to do a little research into hydrophones and how it’s possible to create your own. Finally, we talked about the idea of studying electrical resonance by means of a classical RLC circuit, but we weren’t able to locate an Am. J. Phys. article we had found another time… Afterward I did some sleuthing and found it using the original search terms (acoustic AND modes) - Cafarelli 2012.

For our spectrometer, I filled out the equipment inventory control form that was included in our package (Stony Brook took care of assigning a number and labeling our device before we received it), which we’ll have to return to the Property Control Office. Then I did a little more organizing with the receipts that need to be reported to SBF and be reimbursed. I collected and documented all of the “Entertainment” charges from Summer 2013 (i.e. all of our pizza lunches and other special meals with LTC students and guests). All expenses have been written up on a spreadsheet and photocopies will be made of the receipts tomorrow.

I also finally updated my webpage with the total power output of the Sun calculation, here.


Monday 21 July 2014

Today we started out with a brief derivation of the equation for the radius of the first minimum in the Airy diffraction pattern. Though I’ve been asking the question since last week, the students were still stumped by the factor of 1.22. I led them through the derivation part of the way and then let them finish it off. (The details are on my “Resolving two point sources of light” page). We also talked about the possibility of the students doing mini lessons the next couple of mornings about different optics topics they’ve been reading about.

It’s great that most of the students have gotten started with some hands-on work - I spent most of the day running between them to help find optical components, pick up the spectrometer package, set up equipment, have discussions, talk with Marty, get the desktop running, clean optics, etc … Everyone seems to basically be on track! I think it’s been a productive day.

I finally also typed up our calculation of the Earth’s velocity as it orbits the Sun - the details are here.

At the end of the day, I spent some time reading through an article that Andrea was interested in - “Passage of a Gas Bubble through a Liquid-Liquid Interface” Kemiha 2007. In order to better understand it, I had to look up some things (such as surface vs. interfacial tension), and a good resource is this MIT page for the Non-Newtonian Fluid Dynamics research group).


Friday 18 July 2014

This morning during our daily discussion, we first reviewed derivatives and then talked about indefinite and definite integrals. For indefinite integrals, we discussed the mathematical definition and how we “undo” what happens when we take the derivative - called the “antiderivative.” We then talked about definite integrals, and how these represent the difference between the antiderivative evaluated at two limits - also known as “the area under the curve” between these two points. We used an example with a mass on a spring and tried to calculate the work done from pulling the spring a certain distance x. This is a fairly simple calculation, since the graph of the force is just a straight line, but it’s a good example for deriving the definite integral by means of a Riemann sum with an infinite number of rectangles.
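The Riemann-sum picture is easy to demonstrate numerically; here's a sketch with a made-up spring constant, showing the sum of rectangle areas converging to the definite integral (1/2)kx²:

```python
def spring_work_riemann(k, x, rectangles):
    """Approximate the work to stretch a spring (F = k*x) from 0 to x with a
    left-endpoint Riemann sum -- the area under the straight-line force graph."""
    dx = x / rectangles
    return sum(k * (i * dx) * dx for i in range(rectangles))

k, x = 10.0, 0.5            # made-up values: 10 N/m spring stretched 0.5 m
exact = 0.5 * k * x ** 2    # the definite integral: W = (1/2) k x^2

for n in (10, 100, 10000):
    print(n, spring_work_riemann(k, x, n), exact)
```

With 10 rectangles the sum noticeably undershoots; with 10,000 it matches the antiderivative result to several digits, which is the "infinite number of rectangles" limit we talked about.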

I worked on my webpage a little and wrote up the Golden Ratio derivation (and its connection to the Fibonacci sequence) that we went over together last week, with some extra links for more information. The page for this can be found here.

I read through more of the Papazoglou (2010) article about tunable Airy beams that Jon is using for his research and realized that the lens setup isn’t as complicated as I had originally thought - well, it will take some careful calculations, but the apparatus to create a 1D cubic phase modulation is simply two lenses rotated and displaced respectively along the longitudinal and transverse axes. Imparting a cubic phase on a Gaussian beam and then taking the Fourier transform leads to an Airy beam. While Jon was tinkering with a couple of cylindrical lenses, John pointed out an interesting phenomenon in which the net effect of a converging and a diverging lens placed in the right configuration is a converging lens. This is similar to alternating-gradient focusing used in accelerator physics.
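The cubic-phase-to-Airy connection can be sketched numerically - a toy 1D model with invented parameters, not Jon's actual setup: take a Gaussian field, impart a cubic phase, and look at the Fourier transform. The symmetric single peak breaks up into the lopsided multi-lobe profile characteristic of an Airy beam.

```python
import cmath, math

def dft(samples):
    """Plain discrete Fourier transform (O(N^2), fine for a small demo)."""
    N = len(samples)
    return [sum(samples[n] * cmath.exp(-2j * math.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def spectrum_lobes(cubic):
    """Count intensity lobes in the far field of a Gaussian beam carrying a
    cubic phase exp(i * cubic * x^3). The Fourier transform of a cubic phase
    is an Airy function, so a nonzero cubic term produces side lobes."""
    N = 256
    xs = [(n - N // 2) / 16 for n in range(N)]  # toy spatial grid
    field = [cmath.exp(-x * x + 1j * cubic * x ** 3) for x in xs]
    F = dft(field)
    intensity = [abs(z) ** 2 for z in F[N // 2:] + F[:N // 2]]  # center k = 0
    peak = max(intensity)
    # count local maxima above 1% of the main peak
    return sum(1 for i in range(1, N - 1)
               if intensity[i - 1] < intensity[i] > intensity[i + 1]
               and intensity[i] > 0.01 * peak)

# No cubic phase: one symmetric Gaussian lobe. With a cubic phase: an
# Airy-like train of lobes trailing off on one side.
print(spectrum_lobes(0.0), spectrum_lobes(3.0))
```

The lobe counts make the qualitative point; getting the physical scalings right for the real two-cylindrical-lens setup is exactly the "careful calculations" part.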

John then directed the students part-way through the derivation of the intensity of light from two slits. It’s a great derivation that includes the Pythagorean theorem, complex numbers, and the binomial expansion. The goal is to derive the equation for the intensity on the screen at an arbitrary point P by adding the amplitudes of waves coming from each slit and then squaring this for the intensity. At the board we also talked briefly about De Moivre’s theorem and the four fourth roots of 1 (1, -1, i, -i), etc.

We met and talked briefly with Richard Lefferts from the nuclear structure lab, and he invited us to the closing ceremony for Keith Sheppard, who works in the science education department here. Afterwards, we got to talk with Gillian (the Physics Summer Camp director), who stopped by the lab for a little while.

We’ve now finished our third week - time is really flying. But most of the students are now underway with some hands-on work, which is great! Piano, piano, as they would say in Italy.


Thursday 17 July 2014

We started off the morning with a review activity. I had each student make a list of 8-10 things that they understand well - equations and/or concepts that we’ve talked about together or other optics topics that they’ve been researching on their own. Then they passed their lists to the student on the right, and I had them circle the things on the list they received that they didn’t understand. The original lists were passed back to the original students, and we made a master list on the board of the topics that were circled. Then, each student gave a mini lesson on the topics that were circled on his/her list. Jon talked about the binomial approximation and lens aberrations, Ikaasa talked about birefringence, Andrea talked about AOMs and TAG lenses, Libby talked about Poisson’s spot, CCD/CMOS, oximetry and integrating spheres, and Alex talked about Euler’s formula and lenses used for burning (like what we did outside on the first day).

Then I set the students to work on a mini project exploring diffraction through circular, triangular, and square apertures. With the circular one, they had to use the diffraction pattern to calculate the wavelength of the laser (they got 615 nm - pretty good!). They used two different apertures (200 microns and 500 microns) and projected the pattern of the larger first on the door and then to the far wall outside the lab. I also asked them to figure out and be able to explain where the factor of 1.22 comes from in the Rayleigh criterion equation.
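The arithmetic behind extracting the wavelength is just the Rayleigh geometry run in reverse. A sketch with invented numbers of the right order of magnitude (not the students' actual measurements):

```python
def wavelength_from_airy_minimum(radius, aperture_diameter, distance):
    """Back out the laser wavelength from the first dark ring of the Airy
    pattern: sin(theta) ~ radius/distance = 1.22 * wavelength / aperture."""
    return radius * aperture_diameter / (1.22 * distance)

# Invented numbers for illustration: a 500-micron aperture, a screen 4 m
# away, and a first dark ring measured at about 6 mm radius.
lam = wavelength_from_airy_minimum(radius=6e-3,
                                   aperture_diameter=500e-6,
                                   distance=4.0)
print(round(lam * 1e9), "nm")
```

With numbers in this ballpark the formula lands in the red part of the spectrum, consistent with the 615 nm the students extracted from their own measurements.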

David from the Stony Brook Foundation called - he said that there are two possible reasons why a requisition number wasn’t generated on the form, either (1) that when we put the account number, we don’t need a dash in there (though on past forms, it was listed with the dash and it still worked), or (2) that there’s some issue with it when used with Safari on a Mac, and that it would work in Internet Explorer. Either way, we’ve handed in the forms and the spectrometer purchase should be taken care of by the SBF today.

Ludwig Krinner, a grad student working with Dominik, gave his oral exam this afternoon on the topic “A Quantum Zeno’s Paradox.” Though the physics was complicated, it was an interesting experience to witness. He gave a talk about past research done on this topic by other research groups and then talked about what his lab is currently doing. It was good that we had Dominik’s talk yesterday, because it helped me understand parts of Ludwig’s presentation. As far as Zeno’s paradox - I think I understand it better in the quantum sense rather than the classical examples he described, or at least I can wrap my mind around it better in the quantum sense. Here, they’re inhibiting the quantum evolution of a particle by continually measuring it - since it’s continually observed, it never decays.


Wednesday 16 July 2014

In the morning, we skipped our daily discussion so that the students could work more on their project proposal presentations. I talked individually with Jon and Ikaasa about their projects - regarding the idea of spatial frequencies/Fourier optics and birefringent filter theory respectively. Then we had our group meeting at 11 am and each student presented their ideas. Libby is most interested in pursuing white light interferometry for use in characterizing rough surfaces, Andrea talked about the Hele-Shaw cell and acousto optic modulators, Ikaasa is interested in doing a project based on the interference birefringent filter article, Jon would like either to follow the article that described creating Airy beams from lens aberrations or to create other exotic beam modes by means of our SLM, and Alex is interested in quantifying the transfer of orbital angular momentum from an optical vortex to a particle or analyzing optical vortices in some other way.

At our pizza lunch meeting, Dominik Schneble gave a great talk on Ultracold Atoms and how his BEC lab worked. Though Hal had talked a little bit about this stuff last week, it was interesting hearing it from a different physicist’s perspective. Dominik talked about Bose Einstein Condensation, how the wave properties of matter are “hidden” at room temperature (the relationship between the de Broglie wavelength and temperature), the physics of laser cooling (by means of the Doppler effect) and recoil limit (there’s always that final photon that must be emitted so the atoms never have exactly v=0), magneto optical trapping (with two coils that create a point of zero B-field), evaporative cooling (by repeatedly removing hot atoms and waiting for the system to rethermalize), and finally imaging BECs by means of their shadow (with an absorption technique).

Afterwards, he gave us a tour of his lab - pretty intense stuff! He said that they leave the laser and machinery running most of the time and that it takes sometimes the whole day just to realign the setup - sometimes students will stay until the wee hours of the morning to complete an experiment or do data collection, because if they leave the apparatus and come back the next day, it won’t still be aligned.

Back in our lab, I talked with Jon a little bit more about spatial frequencies and filtering with a 4-f setup. We then did a mini demo of light diffracting through a circular aperture to create the Airy pattern. This was at the end of the day, so there really wasn’t time for collecting any measurements etc, but it’ll be a good hands-on activity for the students tomorrow!

Also this afternoon, I filled out a Stony Brook Foundation Requisition form (couldn’t get a requisition number to generate! even with the tricks..) for the Thorlabs spectrometer we’re going to buy. We’ll submit this form, with the company’s W-9 (which I had called them about), and a printout of the online catalogue page. This will be great for Ikaasa’s project and probably future LTC work as well.


Tuesday 15 July 2014

Today the topic of our morning discussion was derivatives. A couple of the students had taken calculus and had some idea of what they are, but the others hadn’t. We first talked about what the definition of the derivative is in words (i.e. the rate of change of the function at a point; the slope of a tangent line at that point) and then in mathematical terms (i.e. the equation for f’(x) in terms of the function and some small change in x, with the limit as Δx goes to zero, etc). Then we went through the derivation, starting with the slope of a line and showing how we can find the difference quotient of the function between x and x + Δx, which tells how the function is changing on average. With a very, very, very small Δx, we are able to understand how the function is instantaneously changing at a point. We talked about some instances of a zero derivative and an undefined derivative, and the impact (or rather lack thereof!) of adding a constant to the function, and then we proceeded to use the definition to find the derivative of a simple quadratic and the sine function.
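The limit definition is easy to watch converge numerically (a toy sketch, using the sine function since we derived its derivative at the board):

```python
import math

def difference_quotient(f, x, dx):
    """The average rate of change of f between x and x + dx; as dx shrinks,
    this approaches the derivative f'(x)."""
    return (f(x + dx) - f(x)) / dx

x = 1.0
for dx in (0.1, 0.01, 0.0001):
    print(dx, difference_quotient(math.sin, x, dx))

print("exact:", math.cos(x))  # d/dx sin(x) = cos(x)
```

Shrinking Δx walks the quotient right onto cos(1), and adding any constant to the function leaves every quotient unchanged, since it cancels in the subtraction.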

We then had another group meeting in the conference room to discuss areas of interest and further home in on possible projects.

With Libby we talked about white light interferometry and Doug’s project. From her interest in Ronchi Testers, we looked at Allison’s project on Talbot images, which then led us to a paper by Michael Berry, Quantum carpets, carpets of light - note, all of his papers can be found here. Then we looked at Michael’s page (his dad was the one who worked in medical diagnostics and gave that neat fiber optic bundle to the lab).

Alex is still interested in optical vortices used for tweezing, and how rotating particles (to study the shape of their orbits) would be easier to see, rather than spinning the particles. The Padgett group at the University of Glasgow is a good place to search for resources.

Jon has taken an interest in the Phys. Rev. article on creating Airy beams (Papazoglou 2010), and in general he’s still interested in creating beam modes with an SLM (which Rachel explored this past Spring with our low-cost SLM).

Looking around the lab for the sonoluminescence setup, we stumbled upon some other cool things, including Jacob’s spatially-varying wave plates, a bunch of QEX quantum electronics lab notebooks, and finally the sonoluminescence setup! All neatly put together in a (TV) box with a couple of articles (2002, 2007) about the skepticism surrounding its means for creating fusion.

I briefly looked into buying a compact CCD spectrometer from Thorlabs, which would be useful for Ikaasa’s project. We would want to get the CCS100 (universal/imperial), which has a wavelength range of 350 to 700 nm and a 0.5 nm spectral resolution, and it costs $1,950. (Note - it’s on pg. 1604 in the big Thorlabs catalogue.)

Tomorrow morning we’ll do project-proposal presentations at 11 am with Marty - in which the students will talk about the research projects they’d like to conduct. The presentations should include some basic information from the paper(s) where they got the idea, as well as relevant diagrams, etc. They should also include a detailed list of all of the physical things that need to be done and physics topics that need to be understood.

I read through a paper that Ikaasa seems interested in doing her project on - creating a birefringent filter using layers of scotch tape (Velasquez 2004), and then did some research into Jones calculus using this source. Hecht’s optics textbook will also be a good source.


Monday 14 July 2014

In our morning meeting, I discussed the idea of the Rayleigh Criterion with the students and had them estimate the distance from which you could resolve the headlights of a truck as two separate light sources (given a long, straight road with no obstructions, and a person with perfect vision). Our calculations revealed that it’d be possible at about 16 km away from the truck. I put together a summary of our discussion here.
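The calculation itself is compact; here's a sketch with assumed inputs (not necessarily the numbers we used at the board), which lands in the same ballpark as our 16 km:

```python
def max_resolving_distance(separation, wavelength, pupil_diameter):
    """Farthest distance at which two point sources a given separation apart
    can just be resolved, using the Rayleigh criterion:
    theta_min = 1.22 * wavelength / aperture."""
    theta_min = 1.22 * wavelength / pupil_diameter
    return separation / theta_min

# Assumed inputs: headlights 2 m apart, 550 nm (mid-visible) light, and a
# fairly dilated 6 mm pupil.
d = max_resolving_distance(separation=2.0,
                           wavelength=550e-9,
                           pupil_diameter=6e-3)
print(round(d / 1000, 1), "km")
```

With these assumptions the answer comes out near 18 km; reasonable tweaks to the pupil size or headlight spacing move it by a few kilometers either way, which is why an estimate like "about 16 km" is the honest way to state it.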

Afterward, we all moved into the conference room for another group brainstorm meeting. We discussed how a good project will be a mix of scholarship and hands-on work. A great place to look for possible project ideas is the American Journal of Physics - its articles often describe tutorials for projects that can be completed in an undergraduate lab. They can provide a great jumping-off point for a more original project, or they can simply help a student understand, simulate, demonstrate, etc. optics phenomena.

We used AIP Scitation to do our AJP searches.

  • Libby is interested in the application of optics to biology and medical instruments, as well as describing nature with math.

    AJP Search: optics AND biology, oximeter

    We found an interesting article about photoacoustic imaging (Leboulluec 2013).

  • Alex is interested in optical vortices, specifically the light and matter interaction (i.e. being able to rotate a particle with the beam).

    AJP Search: optical AND vortex

    I also suggested he look over Jonathan Preston’s webpage and Singular Optics map. In general that’s a good way to organize your ideas about a field of research - looking at all of the prominent people, topics, papers written, etc.

  • Jon is interested in different beam modes, such as Bessel, Airy and Ince-Gaussian beams.
    We found a list of Jeffrey Davis’s (San Diego State University) publications, which is good to go through for different ideas since a lot of his work has been about these different beam modes. There’s also this article about making an Airy beam from a tilted lens (Papazoglou 2010).

  • Andrea is interested in the TAG lens and possibly other acoustics applications.

    AJP Search: acoustic AND modes, acoustic AND lens

    We found a couple of interesting papers - one was about measuring the speed of sound in a fluid by light diffraction (Diego 2002) and the other was about using Mie scattering to determine particle size (Weiner 2001).

  • Ikaasa is interested in fiber optics, the fiber Bragg grating, and possibly creating a noninvasive pressure sensor using acousto-optics.

    AJP Search: doppler AND velocimetry

    We also talked about Molly’s work with Mueller matrices to characterize polarized light and Mike’s project in photorefractive optics (his abstract can be found here on page 8).

Afterward we did some further searching individually. With Jon we looked at how past LTC projects sprouted from mistakes or curiosities (for example, Max’s project imaging intensity from the double-slit interference pattern and Will’s project in understanding the Fresnel diffraction patterns of a circular aperture). With Ikaasa we looked up an AJP article in which tunable filters were created using scotch tape (Velasquez 2005), and other possibilities with Optical Coherence Tomography, as Jon Wu had studied.


Friday 11 July 2014

We started off the morning with a calculation - how fast is the Earth traveling around the Sun? This isn’t trivial to figure out, so I had the students calculate the final number in both meters per second and kilometers per hour. What I should have also done was have them then convert it to miles per hour! I guess I’ve just gotten too used to using metric units for everything in Italy :) I’m in the middle of creating a page with some of the details of the calculation, as well as the other calculations we’ve done together.
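For the record, the whole calculation fits in a few lines of Python (circular-orbit approximation, with the mph conversion we skipped):

```python
import math

# Earth's mean orbital radius and period
R = 1.496e11             # meters (1 astronomical unit)
T = 365.25 * 24 * 3600   # seconds in a year

v = 2 * math.pi * R / T  # orbital speed, treating the orbit as a circle
print(round(v), "m/s")         # ~29,800 m/s
print(round(v * 3.6), "km/h")  # ~107,000 km/h
print(round(v * 2.237), "mph") # ~67,000 mph (1 m/s is about 2.237 mph)
```

About 30 km every second, which always makes for a nice "wow" moment with students.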

Today there was an AMO seminar given by Dr. Yev Lushtak from SAES Getters USA about “Sorption Mechanisms and Pumping Characteristics of Non-Evaporable Getter (NEG) Pumps.” A getter is a device that removes molecules from an evacuated space by sorbing active atmospheric gases. Non-evaporable getters are made of porous reactive alloys and are very effective pumping devices. This company created a number of different compact NEG models, some that combine the NEG with a sputter ion pump (SIP) to take care of non-getterable gases. His presentation was a little bit of a sales pitch for his company’s NEXTorr, but it was interesting to learn about the sorption mechanisms of NEG materials. The company is also based out of Linate, Italy!

In the afternoon we took care of some paperwork and finished the reimbursement forms for the 2013 Lab Equipment purchases. On our way to the SBF to drop everything off, we stopped by the closing ceremony for the Physics Summer Camp and had a talk with the program director, Dr. Gillian Winters. This camp is a one week program meant to introduce high school students to physics by means of hands-on activities. It sounds like a great opportunity for students who have a strong math and science background, but haven’t necessarily taken physics.


Thursday 10 July 2014

Today I started by talking briefly with the students about Enrico Fermi and what “Fermi Problems” are. We then did an estimation problem together about figuring out the focal length used by a camera to take a certain image. I got the idea to do a question like this from this article, Optical Insights into Renaissance Art (which I’ll mention more about soon). The image that I found here has the actual lens focal length listed, so that we were able to check our work at the end. The important thing that I wanted the students to understand was that this problem had nothing to do with the physical size of the photo, but rather estimating the distances of things in the photo. I guided the students through a couple of steps, but they were eventually able to understand how to handle estimation calculations.
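The geometry behind the estimate is just similar triangles: the image size on the sensor is f times the object's angular size (object size over distance). A sketch with entirely invented numbers, not the ones from our discussion:

```python
def focal_length_estimate(object_size, object_distance, image_fraction,
                          sensor_size):
    """Estimate a camera's focal length from a photo: an object of real size s
    at distance d forms an image of size f * s / d on the sensor, and we can
    see what fraction of the frame that image fills."""
    image_size = image_fraction * sensor_size
    return image_size * object_distance / object_size

# Invented numbers for illustration: a 1.7 m person standing 10 m away,
# filling half the height of a 24 mm (full-frame) sensor.
f = focal_length_estimate(object_size=1.7, object_distance=10.0,
                          image_fraction=0.5, sensor_size=24e-3)
print(round(f * 1000, 1), "mm")
```

The key point from the discussion survives in the code: the physical print size never enters, only the estimated real-world sizes and distances of things in the photo.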

I spent some time afterwards putting together a page with the details of this estimation.

We then had a very productive group meeting in the conference room, in which we went through each student’s ideas, looked through past LTC projects, and gave general feedback or suggestions for further research.

  • Libby has been interested in the optical properties of skin. She should check out Foo’s project on using an electro-optical sensor to measure pulse rate. Libby’s also interested in zone plates, and we talked about Pradyoth’s project with linear zone plates.

  • Alex is interested in optical vortices, so John directed him to Azure’s work, Les Allen and Miles Padgett’s book on the orbital angular momentum of light, and a (autographed!) book that Hal has with a collection of Padgett’s papers.

  • Jon has been looking into Optical Coherence Tomography (OCT), so he should look at Jon Wu’s project. He’s also interested in Ince-Gauss beams.

  • Andrea has been reading a lot into supercooling and its possible application to sonoluminescence (though this would require a very powerful pulsed laser, which we don’t have in the lab).

  • Ikaasa, in addition to her research into laser cooling, is interested in acousto-optics and its application to medical imaging. John suggested she talk to Marty, since he’s an expert on AOM etc.

Back in the lab, we showed the students Kathy’s laser tweezers setup as well as the blue Argon laser. We also showed them what happens when the laser goes through a pair of the diffraction grating glasses (the 2D ones make a pretty cool design!) and when it goes through your hand.

We did a little more paperwork and walked over to the Stony Brook Foundation office in Admin for help in understanding the proper way to fill out reimbursement requests. We can continue to use the Cash Voucher, but the purchases must be divided based on the various categories SBF has set up. Also, for account balance information, purchases, etc, you can log in to E-RAS (Electronic Record of Authorized Signatures) for a snapshot of the monthly reports. SBU Reporting can be used to check day-to-day reports. Information for obtaining access to this system can be found here. David (24469) in the Stony Brook Foundation office suggested we get in contact with Michael Danielson if we need further assistance.

I then helped John sort through some old receipts and tried to make an organized system. I labelled a number of small envelopes with SBF’s category names and descriptions and made sure to sort the receipts accordingly. There is of course a miscellaneous “Other” envelope (where I placed Azure’s “SORT” post-it note on the front), for those receipts which don’t fall into any of these. All the envelopes and papers related to this are in a nice neat box for the moment.


Wednesday 9 July 2014

Today was chock-full of great talks and discussions! We started in the morning with a review of yesterday’s overview of some important topics in preparation for Hal’s discussion about laser cooling. These included harmonic motion (with a quick look at the differential equation and solution for motion of a mass on a spring), resonance and the Lorentz function that describes the oscillation intensity, the Doppler shift (and how an atom moving towards the laser light would see a higher frequency), the k’s (spring constant, Boltzmann constant, and wavenumber), the energy levels in an atom (describing energy and momentum of a photon in terms of h-bar), and how the temperature of something is a measure of the random internal kinetic energy of its atoms (and not related to the overall velocity of the object! i.e. a baseball wouldn’t get hot just from being thrown super fast).

Hal’s master class with Ikaasa was a great learning experience. He began with a general introduction to some key concepts about temperature and absolute zero, the relationship between pressure, volume, and temperature in PV=nRT (and why your tires (fixed volume) don’t go flat in the winter: the temperature drop is only a few tens of degrees out of roughly 300 kelvin, so there’s only a slight decrease in pressure), discrete energy levels, the Doppler shift, and the scale of accuracy that is needed to match the resonant frequency of an atom (one part in a billion! like the leg of a small bug compared to the distance from Stony Brook to midtown Manhattan).
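The tire remark is easy to check with the ideal gas law: at fixed volume and amount of gas, pressure scales directly with absolute temperature. (The temperatures below are my own example values, not ones from the class.)

```python
def pressure_ratio(T_cold_K, T_warm_K):
    # PV = nRT with fixed V and n, so P_cold / P_warm = T_cold / T_warm
    # (temperatures must be in kelvin for this to work)
    return T_cold_K / T_warm_K

# going from a 20 C day down to freezing:
drop = 1 - pressure_ratio(273.15, 293.15)
print(f"{drop:.1%}")  # 6.8%
```

A 20-degree Celsius drop is only about a 7% change on the absolute scale, hence only a slight pressure decrease.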

In laser cooling, a moving atom that sees its resonant frequency will absorb the light, also absorbing some momentum. Through spontaneous emission, it reemits the energy and loses momentum. This process continues until collectively the atoms have slowed down, and therefore have been cooled. Six lasers are directed at each other to create a zone of optical molasses. In this region, there will be a force opposing the atom’s motion in any direction. Once the atom is stopped, the forces from the lasers cancel, there’s zero kinetic energy, and therefore the temperature should be absolute zero. However, it’s impossible for an ensemble to reach absolute zero, because even when the average velocity is zero, the individual atoms still retain some random motion.
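A toy one-dimensional version of the optical molasses can be written down directly: two counter-propagating beams each exert a scattering force whose strength depends on the Doppler-shifted detuning. The parameter values below (all in units of the linewidth) are illustrative only, not real rubidium numbers:

```python
def scattering_force(kv, detuning, s0=1.0):
    # Scattering-force magnitude (arbitrary units) from one beam, with the
    # Lorentzian lineshape evaluated at the Doppler-shifted detuning.
    # kv and detuning are in units of the natural linewidth Gamma.
    return 0.5 * s0 / (1 + s0 + (2 * (detuning - kv)) ** 2)

def molasses_force(kv, detuning=-0.5, s0=1.0):
    # The +x beam (shifted by -kv for an atom moving at +v) pushes in +x,
    # the -x beam (shifted by +kv) pushes in -x; the net is the difference.
    return scattering_force(kv, detuning, s0) - scattering_force(-kv, detuning, s0)

# For red detuning (detuning < 0), the net force always opposes the velocity:
for v in (0.3, -0.3, 0.0):
    print(v, molasses_force(v))
```

For an atom moving toward one beam, that beam appears blue-shifted closer to resonance and pushes back harder, which is exactly the velocity-opposing "molasses" force.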

We had our pizza lunch (a big group this time! we went through about 5.5 of 6 pizzas) at the usual time and place. I’ve added the abstracts here. The first presenter was Leighton Zhao, a Simons fellow working with Phil Allen. He discussed his work on analyzing the normal modes of a seven particle system. He was given trajectory data from an experiment conducted by Peter Koch in which seven steel balls (magnetized and confined within a circular steel wall) were jostled in one direction. Leighton described his work in writing programs to fit the data and better understand the behavior of the system.

Taylor Esformes, a Stony Brook masters student working in an off-campus lab, described hyperspectral imaging (an image created by a stack of monochromatic images in which each layer represents spectra from a different wavenumber) and some of its applications. There are two different methods of collecting data for these images: (1) the “push-broom” method, in which data is collected for one strip of pixels at a time (from the air at an altitude of 25,000 m, a pixel would be 1 square meter); (2) the filter wheel, a multispectral device that maps the entire pixel array for one wavenumber at a time. Among the agricultural applications: by examining detailed information about the spectral emissions of fields, one can check the health of plants, the fertility of soil, water content, etc. in a quick and noninvasive manner. Taylor is working to make an inexpensive commercial device that would make use of the filter wheel technique.

Marty then gave a brief overview and demonstration of a BEC imaging project. The goal is to image Rubidium atoms that are contained in a vacuum cell with glass walls. They’re using a finely tuned laser to do so, however in the image there are extra interference fringes that show up from dust on the glass because the laser has high spatial coherence. To limit this, they need to lower the spatial coherence of the beam. Marty says that using an engineered diffuser (similar to ground glass) and a 5-meter multimode fiber could solve the problem. Furthermore, to deal with the speckle in the beam after it goes through the diffuser, he proposes using an acousto-optic modulator to continually deflect the beam through it (this would be better than any sort of mechanical movement of the diffuser, which could cause extra vibrations on their work table). After the diffuser, the beam will need to be coupled through a multimode fiber and at the output have its spatial coherence measured. He finished by showing us how sending a laser pointer through the diffuser creates a large square pattern.

In the afternoon, we did a little bit of organizing in the LTC office - sorting and filing papers etc. I also updated the calendar 2014 page with pictures from the welcome lunch and first day festivities.


Tuesday 8 July 2014

We started off the morning with a brief group meeting to discuss the upcoming two pizza lunches and the possibility of doing mini lessons. I asked the students to start thinking about some optics topic(s) that they’ve either stumbled upon in their literature research or have just found to be interesting. I think these mini lessons would be best presented in the form of a whiteboard discussion, rather than a powerpoint, because this encourages more involvement from the other students. I also encouraged the students to read each other’s online journals, to get an idea of what everyone else is reading about. They might also find something new that interests them, or be able to help if another student didn’t understand something.

The idea of doing a Journal Club is still up for discussion, but I’ve been thinking that it might work best once the students start focusing on their actual projects. Each student will choose a journal article and (possibly) create a mini presentation about it - reviewing the main idea, key points and findings (important equations, graphs, etc), and explaining how this aids in his/her own research. The other students should read the abstract and (at least) skim the rest of the article to have a general idea of what it’s about. We can set a few dates and have 2-3 students present per day.

We sat down for a while with Phil Allen’s student, Leighton Zhao, to review his presentation for tomorrow’s pizza lunch and check on how we can connect his laptop with the projector. The pizza lunch tomorrow will feature two presentations (from Leighton Zhao and Taylor Esformes), a short talk by Marty about a project in the BEC lab that he’s interested in possibly pursuing with an LTC student, and general updates from the other students about what they’ve been researching.

In the afternoon, John and I sat in on what became a preview for Ikaasa and Hal’s master class. They discussed a little bit about her understanding of laser cooling and also the logistics of how this “class” would run - John envisioned a dialogue between Hal and Ikaasa that the others could listen in on. A couple of miscellaneous interesting things that Hal mentioned: the importance of understanding the role of entropy in laser cooling and just in general how sacred the laws of thermodynamics are, the general uncertainty across the internet of the spelling of the Lorentz Gauge (which he says they’ve decided should have the “t”!), and in the spirit of estimation problems, he described how he’s able to understand how much a million is - say you have a million pennies, all laid down next to each other, if you’re far enough away to be able to see all of them, you’re too far to be able to resolve a single one.

Back in the lab, we huddled around a computer and reviewed some topics that would be important for the students to understand for tomorrow’s discussion, namely the Doppler shift, harmonic motion, resonance, Q-factor, temperature and kinetic energy, and the energy and momentum of light. Ikaasa also had found a very helpful video in which a very enthusiastic researcher at the University of Nottingham describes how it’s possible to use laser light to cool atoms.


Monday 7 July 2014

This morning I helped the students through a simple derivation of the golden ratio and showed its geometric connection to the Fibonacci sequence. I’ll include a write-up on some sort of derivation page soon to keep track of these things… I then updated the lab’s calendar (note to future LTC-ers, keep this calendar current, erase and rewrite often - my notes from last summer were still up and it was near impossible to erase the marker!) with the remaining events for the summer. We’ve got a little more than 5 weeks, and a lot to get done! In the morning John and I also made a trip to the administration building and submitted all of the stipend paperwork to the Stony Brook Foundation.
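As a quick numerical illustration of that connection (my own toy example, not part of the morning's derivation): the ratio of successive Fibonacci numbers converges to the golden ratio, the positive root of x^2 = x + 1.

```python
from math import sqrt

# the golden ratio: positive root of x**2 = x + 1
phi = (1 + sqrt(5)) / 2

# iterate a pair of consecutive Fibonacci numbers
a, b = 1, 1
for _ in range(30):
    a, b = b, a + b

print(b / a, phi)  # the ratio agrees with phi to many decimal places
```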

Jonathan had a couple of questions about the journals he’s been reading, so I’ve been trying to catch up a little on laser vibrometry and the articles he’s been looking at. Laser vibrometers detect surface vibrations without having to be in contact with the surface. Jon linked to an article on his webpage (Wang 2009) about using a photo-EMF pulsed laser vibrometer with high sensitivity and more accuracy than a conventional laser vibrometer based on optical interferometers. (He also included a link to Prislan 2008, which describes laser vibrometry in general). Speckle in light beams makes readings of surface vibration measurements from optical interferometers inaccurate (i.e. there will be sudden drop-offs from the speckle, unrelated to vibrations). This is discussed in Numerical simulation of speckle noise in laser vibrometry, Rothberg 2006.

This seems to be an early article that talks about the photo-EMF detection technique (cited by Wang 2009): Measuring vibration amplitudes in the picometer range using moving light gratings in photoconductive GaAs:Cr (Stepanov 1990). It might also be useful to look up laser Doppler anemometry (LDA), out of which laser vibrometry developed (as mentioned by Rothberg 2006). There’s also this article, Frequency detector using photo-EMF effect (Lara 2006), which discusses a novel method for measuring an unknown frequency using a photo-EMF detector, also giving a short explanation of what this effect is. They use two phase-modulated interference patterns incident on a device that will generate a photo-EMF signal - one pattern is phase modulated with a known frequency and amplitude while the other is modulated by a vibrating object of unknown frequency and amplitude.

In trying to understand this photo-EMF sensor, I looked up photoelectric devices. I’ll need to look into these things a bit further, but here are some brief notes - There are two types: a) photoelectric tube (aka phototube): vacuum-tube, photoemission, generates electric current, and b) photoelectric cell (aka photocell): solid-state, internal photoelectric effect, generates photo-emf. Photoelectric devices have 4 main distinguishing characteristics:

  1. luminous sensitivity: ratio of photoelectric current to the luminous flux producing the current

  2. spectral response: optical wavelength range of sensitivity

  3. voltage-current characteristic: relationship between photoelectric current and voltage across the device

  4. conversion efficiency: ratio of electric power generated to the incident luminous power

There was also this interesting 1934 article by Sharp about clarifying the names of photoelectric devices.

We had a group meeting in the afternoon and talked about a few different things. First was the upcoming pizza lunch meeting - we’ll hear talks from Taylor Esformes and Leighton Zhao (a Simons student working with Phil Allen), and Marty will talk briefly about a possible project in the BEC Lab. The LTC students are also expected to give summaries of what they’ve been reading and learning about, and discuss any project ideas they may have. At the pizza lunch next week, the students will put together a powerpoint and discuss 3 possible projects they’ve been thinking about (Marty’s idea to do something like this!). We also talked about the possibility of doing a Journal Club and/or having the students create mini lessons - more to come about these ideas, but I think something along these lines will be good for the students. You don’t really understand something until you can teach someone else about it!


Thursday 3 July 2014

Today I worked on updating my webpage for the new summer program (e.g. creating a Summer 2014 page, creating a new calendar page, adding the abstracts from yesterday’s pizza lunch) and helping with some more of the paper work. For future mentor reference, here is some information about Minors in Research Labs, the required Laboratory Supervisor Safety course, and the Parent/Guardian consent form.

I attended a Responsible Conduct of Research and Scholarship lecture/tutorial with the Simons students given by Professor J. Peter Gergen from the Department of Biochemistry and Cell Biology. First he walked the students through a registration process for RCRS training. And actually, according to the Stony Brook policy on Responsible Conduct of Research and Scholarship, as a “non-degree visitor [conducting] research for less than one year at SBU,” I’ll need to complete the on-line training component through CITI. The Responsible Conduct of Research in the Physical Sciences course consists of 9 modules with quizzes (your quiz average must be at least 80% at the end). Other training resources can be found here.

For the remainder of his presentation, Gergen talked about how a scientist’s reputation is based both on credibility (publishing the truth and not falsifying results just to make headlines) and ‘glamour’ as he called it (publishing results first), the responsibility that comes with authorship of a paper, the necessity of citations and understanding your field as a whole, and the importance of acknowledging others and respecting ownership of intellectual property.


Wednesday 2 July 2014

We did a run-through of each student’s presentation in the conference room this morning and I gave some final feedback before their actual presentations at the pizza lunch. Overall everyone did a good job! The abstracts can be found here.

In the afternoon we did student headshots (cropping and resizing them to 300x300) and got each of them started on their personal webpages – Libby was a great help in guiding the students through the login process and showing them the essential Linux commands. I also found a concise "cheat sheet" that could be useful. Note the login and file upload/download processes are a little different for Mac users:

  1. Open Terminal

  2. Select New Remote Connection from the File menu

  3. Add laser.physics.sunysb.edu to the Server list

    • To log in for normal functions (i.e. viewing directories, editing files),
      select Secure Shell (ssh)

    • To upload (“put”) or download (“get”) files, select Secure File Transfer (sftp)

  4. Fill in username in the User box and click Connect.
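For anyone more comfortable at the command line, the same login and file transfer can be done directly from Terminal (the hostname is the one from step 3; substitute your own username):

```shell
# Interactive login for normal functions (viewing directories, editing files):
ssh username@laser.physics.sunysb.edu

# File-transfer session; inside it, "put" uploads and "get" downloads:
sftp username@laser.physics.sunysb.edu
# sftp> put mypage.html
# sftp> get journal.html
# sftp> quit
```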

I also helped John with some of the paperwork necessary for the summer program. For future mentor reference, each of the students who are being paid a stipend directly from the LTC need: a Participant Stipend Form, a W-9, and either a Stony Brook Foundation Cash Payment Voucher or Requisition Form. (Note the trick to have a requisition number generated on print: SBF must be checked, and “Office Phone” “Office Fax” must be filled in.)


Tuesday 1 July 2014

We spent the entire day reviewing the students’ presentations and abstracts for our pizza lunch tomorrow. The LTC started a tradition (last summer) in which the new students give an informal talk about a project they’ve completed (or optics topic they’ve researched) as a way to “introduce” themselves.

I also put together a brief personal introduction presentation about myself (my experiences in the LTC the past two summers and other research and teaching I’ve done in between) that I’ll share at the lunch meeting.


Monday 30 June 2014

Today marks the beginning of my third summer in the LTC! It’s great to be back :)

I met the high school students this morning at the Simons welcome breakfast – Ikaasa, Jonathan, and Alex, our Simons fellows, and Andrea, our independent high school student. After this we headed to the lab and I met Libby, a rising sophomore who worked in the LTC this past spring and will continue in the lab this summer. We spent some time first talking about past LTC projects (looking over the hallway displays) and then had various discussions in front of the whiteboard (such as talking about the small angle approximation, orders of magnitude, and how to keep a proper lab notebook and mini notebook).

After pausing for a delicious welcome lunch with Marty at the Simons Center Café, we went outside with magnifying glasses and black paper in hand to test out what it takes to use the sun to burn a hole in the paper. The students explored how it was fairly easy with the magnifying glasses (especially when they were doubled, one over the other, making the focal length half as long), but impossible using a pair of glasses (sorry Piggy!). We later had the students calculate the sun’s irradiance that was being focused by the magnifying glasses by comparing the area of the lens to the area of the resulting light spot - turned out to be about 1800 kW/m^2.
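The arithmetic behind that estimate is just an area ratio; here's a minimal sketch (the lens and spot sizes below are made-up but plausible stand-ins, not our measured values):

```python
def concentrated_irradiance(solar_kW_m2, lens_diam_mm, spot_diam_mm):
    # The concentration factor is the ratio of lens area to focused-spot area;
    # the common factor of pi * (d/2)**2 cancels, leaving a ratio of diameters squared.
    area_ratio = (lens_diam_mm / spot_diam_mm) ** 2
    return solar_kW_m2 * area_ratio

# ~1 kW/m^2 of sunlight, a 60 mm lens, and a 1.4 mm focused spot:
print(concentrated_irradiance(1.0, 60.0, 1.4))  # roughly 1800 kW/m^2
```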

For the remainder of the afternoon we looked at various demos (the pig toy, the interferometer, the polarizers) and did a few calculations on the board (such as, the total power output of the sun, which is about 4x10^26 watts, or “400 yottawatts, that’s a lotta watts!” as John pointed out). An interesting derivation that I had never done was rearranging the thin lens equation so that it gives the image distance in terms of the total object-to-screen distance and the focal length of the lens being used. This was a great mathematical representation of the demo in which the students had to find the correct place(s) to put a magnifying glass in front of a light source such that it created a clear image on the whiteboard.
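A small sketch of that rearrangement (my own worked version, in the same spirit as the board derivation): with 1/o + 1/i = 1/f and o + i = D, the image distance satisfies i^2 - D*i + f*D = 0, so there are two lens positions (merging into one when D = 4f) whenever D >= 4f.

```python
from math import sqrt

def image_distances(D, f):
    # Solve i**2 - D*i + f*D = 0 (from combining 1/o + 1/i = 1/f with o + i = D).
    disc = D * D - 4 * f * D
    if disc < 0:
        return []          # screen closer than 4f: no real image position
    root = sqrt(disc)
    return [(D - root) / 2, (D + root) / 2]

# e.g. screen 1 m from the object, 20 cm focal length:
for i in image_distances(1.0, 0.2):
    o = 1.0 - i
    print(i, 1/o + 1/i)    # each position should satisfy 1/o + 1/i = 1/f = 5.0
```

This matches the demo nicely: the two roots are the two places the magnifying glass forms a sharp image.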



Summer 2013


Friday 16 August 2013

Today is my last day as a summer mentor in the LTC. It’s crazy to think how fast the summer has gone. I feel like I just got here! Rachel, Dr. Noé, and I had a nice “Farewell Lunch” at Muse’s family’s Thai restaurant, Phayathai.

I spent the rest of the day looking a little more into the customs issues with our Cambridge Correlators order and also touching up my webpage. Among a few other things, I made sure to update Week 10 on my calendar page, with information about this past Wednesday’s pizza lunch talks. I also tried to organize all of the printouts and forms from the Cambridge Correlators order and just from my research into SLMs in general.

We bought a small package of CD-Rs – I had this idea of using a CD to store data pictures from each student, since there won’t be enough space to hold all of these pictures on the lab camera’s memory card forever! I only had time to start with Samantha’s caustic pictures. Since there turned out to be a lot of extra space, I also put photos of her giving presentations and from the poster symposium on the CD and included a little “table of contents” type sheet in the case. Since a CD-R allows only a one-time write, it would be best to buy CD-RWs, so that more photos and other important things (maybe such as Mathematica simulations or spreadsheets with data, etc) could be continually added. I wanted to get one sample disc done before I left, so hopefully a future LTC student or mentor can continue this job.

Overall, I'm glad I had the opportunity to come back to the LTC this summer - it's been an invaluable experience for me.


Thursday 15 August 2013

The Stony Brook University purchasing agent taking care of our Cambridge Correlators order contacted us this morning to let us know that the SLM Kit seems to be cleared for delivery, however the LM635 Laser Module will need to be inspected by the FDA. She forwarded a form from the FedEx Import Coordinator that we needed to fill out: “Declaration for imported electronic products subject to radiation control standards.” I first contacted the purchasing agent who got in touch with the import coordinator from FedEx with a few questions regarding the form. I then tried to contact Cambridge Correlators, to ask about whether they’ve filed a radiation product report with the FDA/CDRH for their Laser Module or had American customers with similar issues in the past.

As usually seems to be the case with legal documents, the jargon was a little hard to navigate. But after studying the declaration form all morning, I started to understand it better. It looks like unless Cambridge Correlators has filled out a radiation product report, we may need to fill out an FDA 766, which puts a limit on the length of time and dates the product will be in use in the country... (which seems silly because this is the least dangerous out of any of the lasers we've got in the LTC!). I don't see an appropriate reason that this device would not be subject to the radiation performance standards (situations listed under Declaration A), but I guess we'll have to look further into #2 (under Declaration A), and see if the device can be excluded based on the "applicability clause or definition in the standard or by FDA written guidance."

At the bottom of Form FDA 2877, it says that we can consult the following 3 FDA web pages for additional guidance:

  1. http://www.fda.gov/cdrh/ - error: directory listing denied,

  2. http://www.fda.gov/ora/hier/ora_field_names.txt - page not found,

  3. http://www.fda.gov/ora/compliance_ref/rpm_new2/contens.html - a very long manual on regulatory procedures.

While this manual "provides FDA personnel with information on internal procedures to be used in processing domestic and import regulatory and enforcement matters,” I did find the following:

“Questions regarding importation of specific products should be referred to the appropriate Center…medical devices and radiation emitting electronic products or their components should be referred to the Center for Devices and Radiological Health, Office of Compliance, Division of Program Operations (HFZ-305).”
(pg. 9 – 68)

“radiation-emitting devices must meet established standards” (pg. 9 – 94)

But there doesn't seem to be any information in this manual on figuring out if a product meets certain standards or not ... where are these standards listed? Do we have to contact the Center for Devices and Radiological Health directly? Other things to note on FDA 2877 form that could be useful to remember:

The one that was emailed to us by the FedEx Import Coordinator had actually expired on 11/30/2003. I'm not sure that much has changed on the form; however, I found the more recent one online. The bottom does read "Previous version is obsolete."

Under Declaration A – 7, it says to choose this option if the product is being reprocessed in accordance with P.L. 104-134, but after looking into this further, it appears to only pertain to “For Export Only” products.

Googling this Center for Devices and Radiological Health didn’t yield many helpful results. So, I turned to Sam’s Laser FAQ and found a valuable page on Laser Safety Classifications, which led to a link for the performance standard for light emitting products. (This has info on the applicability clause that we were wondering about! However, it doesn’t seem like our laser module is applicable…) Our LM635 is a <2mW laser – according to Sam it’s a Class III A. These are medium-power lasers that can injure the eye if the beam is focused. The official (type-written?!) FDA classification of Class III A lasers is found here.

I then tried to find some further information on this website, first searching radiation-emitting product codes, but this simply listed acronyms for various products, then searching the establishment registration and device listing. There was no “Cambridge Correlators” explicitly listed, but they could possibly use a different name for manufacturing.

Then at the end of the day we talked to Hal about this problem and he said we should either go through the university’s customs broker, or just reject the whole order and start over.


Wednesday 14 August 2013

Today I read some more from Tony Phillips’s column for the American Mathematical Society on catastrophe theory and linguistics. The number of arguments of the verb in a sentence (which is usually three, but at most four) corresponds to the number of critical points (specifically minima) that can simultaneously exist in a function. For the simple minimum (not a catastrophe), represented by a quadratic function, the corresponding sentence type is “I am;” small perturbations to x^2 do not change the location of the minimum by much, so it’s considered stable. For the fold, represented by a cubic function, the corresponding sentence type is “The day begins;” there is a drastic difference in the number of critical points depending on whether the control parameter for a perturbation term is positive or negative, which is consistent with the fact that the process represented by this sentence type is different depending on the “direction” (i.e. “The day begins” vs. “The day ends”). The verb phrases (which come from the process/event in the external world) get more involved with each of the more complex catastrophes.
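A small numerical illustration of the fold (my own toy example, not from the column itself): for g(x) = x^3 + c*x, the number of real critical points jumps as the control parameter c passes through zero, which is exactly the instability described above.

```python
def critical_points(c):
    # Critical points of g(x) = x**3 + c*x are the real roots of
    # g'(x) = 3*x**2 + c: two for c < 0, one (degenerate) at c = 0, none for c > 0.
    if c > 0:
        return []
    if c == 0:
        return [0.0]
    r = (-c / 3) ** 0.5
    return [-r, r]

print(len(critical_points(-1.0)), len(critical_points(1.0)))  # 2 0
```

Contrast this with the quadratic case, where a small perturbation only nudges the single minimum around without changing how many there are.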

Our pizza lunch was small since most of the LTC students left last week; we heard short talks from Seth, James, Casey, Rachel, and Stefan: Casey talked about his work studying the Zeeman effect on the HeNe gain curve, Seth and James gave a joint presentation on the trials and tribulations of working with the ARP (adiabatic rapid passage) experiment, and Seth then gave a short presentation on his bichromatic force computer simulations. Afterwards, Rachel talked about how she was attempting to create optimized optical vortices with our spiral phase plate, and Stefan discussed how he was simulating the interaction between atom clouds and Laguerre-Gaussian beams.

Hal gave us a short lecture on the optical Bloch equations (OBE). He started with a short introduction to quantum mechanics and then derived the Rabi equations (1937), Feynman, Vernon, and Hellwarth’s version for working with real numbers (1957), and Bloch’s vector for the optical case (i.e. parallel to the equations for Rabi oscillations!). It was interesting that these three equations, with three unknowns, were similar structurally to the output you get from taking the cross-product of two vectors (i.e. the equation for du/dt contained only the variables v and w, while the equation for dv/dt contained only the variables u and w, and finally the equation for dw/dt only contained the variables u and v). Note also that these are all real quantities.

Later, Dr. Noé coupled light into a single-mode fiber and sent the output through a 100-micron pinhole, trying to show the Fresnel pattern with a central dark spot (in the same way he tried to show Will an Airy pattern but ended up with this unusual beam), but this time all we could see was the Airy pattern! After a little maneuvering, we did end up seeing the dark central spot when the fiber tip was very close to the pinhole.


Tuesday 13 August 2013

We’re having some slight issues acquiring the Cambridge Correlators SLM – the device has successfully made its way from the UK to the US, however it’s stuck in customs! There’s some debate as to whether there are crystals emitting harmful radio frequencies, when really the tiny liquid crystal display panel has the same electronics as LCD projectors, TV screens, etc. Hopefully this can all be straightened out soon, although it doesn’t look like the SLM will get here in time before I leave on Friday…

I created a short version of the Laser Teaching Center Summer 2013 slideshow and uploaded it to YouTube. Hopefully it’ll be a good piece to share with those who are not necessarily directly connected to the LTC but are interested in what goes on here. It’s about 3 minutes long and contains our group photo, one picture of each student, one of each mentor, a couple from the pizza lunches, and a few from other special events (i.e. tour of Eden's lab, welcome lunch for Laser Sam, Simons students tour the LTC, Simons poster symposium and farewell lunch). I’ve also added this to my slideshow page.

In the afternoon I spent some time talking to Rachel about her setup, which produces a Fresnel diffraction pattern from a pinhole (at N=2) and then sends this through another circular aperture to cut off the outer part of the pattern (which has a dark center). She then uses this output to illuminate a section of a spiral phase plate to theoretically create an optical vortex; however, there was some discussion as to whether the resulting beam was an actual optical vortex or not.

While reaching out to faculty last week who would potentially be interested in Samantha’s caustic/catastrophe theory poster at the Simons symposium, Dr. Noé got in contact with Tony Phillips from the Stony Brook Math Department. Prof. Phillips pointed out a couple of pieces he had written for a column in the American Mathematical Society on the “catastrophe machine” and catastrophe theory and linguistics. The latter, “Topology and Verb Classes,” describes a few examples from René Thom’s Topologie et linguistique (1970) of how the elementary sentence types relate to the elementary topological structures (“elementary catastrophes”) underlying events in the world around us. He proposed that when an event or process happening in spacetime can be characterized by one of the catastrophes, the mental process that perceives the event will imitate the catastrophe, and furthermore the syntax of the verb phrase used to describe the event or process will correspond as well. Prof. Phillips then walked us through a few examples with the first few elementary catastrophes. I found it all a little bit abstract and hard to comprehend because I haven’t studied syntax or sentence structure since middle school. I decided to look up some information on the argument structure of verbs and stumbled upon this Argument Structure guide for Italians learning English!

The AMS column also led me to another interesting article (that Tony Phillips had cited): How We Came To Be Human, by Ian Tattersall in the Dec 2001 issue of Scientific American, which I plan to read when I get a chance. The article evidently poses the question: how was language invented? Consciousness depends on language, yet fossil records of language (i.e. of the necessary cerebral and vocal apparatuses) appear many millennia before there is evidence of conscious activity. The evolution of human intelligence is pretty interesting stuff!


Monday 12 August 2013

I first spent a lot of time catching up on my journal from last week – it had been so busy with all of the deadlines and events that I only had time to jot down notes each day of what I accomplished. Today I finally went through and wrote out the full entries.

I heard a clip from NPR titled Why aren’t more girls attracted to physics?, which gave an interesting view on why more (or fewer) girls study physics in a particular area. While this does sort of play to stereotypical situations, I agree that generally the community and environment in which a child grows up play an important role in shaping the child's views on achievable careers. As a young girl growing up in my small suburban town, the only woman I knew with a Ph.D. was my elementary school principal (and it wasn't even in science), and basically all of my friends' parents and the other women around me were stay-at-home moms, nurses, teachers, lawyers, or musicians. I honestly didn't even consider the possibility of a career in science until college.

I finally finished my CCD vs. CMOS page. Now that we have one of each type of camera in the lab, hopefully this guide will provide future LTC students with a good introduction to their different qualities and uses.

Today I also helped Rachel with her research. We got out the optical fiber breadboard since she’s interested in modeling part of her project after Will’s. We successfully coupled light into the multi-mode fiber; however, we didn’t have time to finish the single-mode fiber because we got distracted trying to optimize her spiral phase plate setup with Stefan – they were hoping to attach the plate to an adjustable mount for more fine-tuned adjustments. Stefan created a useful diagram of the various patterns on the phase plate in one of his journal entries back in February.


Friday 9 August 2013

This morning was the poster symposium for the Simons Summer Research Program! Everyone’s posters looked great! ☺ There was a nice ceremony afterwards in which each student was called forward and presented with a certificate (signed by Jim Simons!). We had a farewell lunch at the Simons Center Café (John Noé, Kevin Zheng, Melia Bonomo, Rachel Sampson, Dave Battin, Samantha and her father, Kathy, William and his parents and two siblings) and took a group picture:

After lunch, William brought his family into the lab to see his project, Rachel’s laser light show, Kevin’s 1.4-meter long laser, and Kathy’s tweezers setup. Kathy stuck around for the rest of the day, since her flight wasn’t until Saturday, however we said our goodbyes to everyone else.

The lab was sadly a lot quieter in the late afternoon… I spent some time uploading pictures from the symposium and organizing these on their own page. I created a Closing Lunch Meeting page, to list each student presentation, provide links to their abstracts, and organize photos from the event. Finally, I updated the calendar page with all of our “Week 9” activities - it’s been a very busy week!


Thursday 8 August 2013

This morning we went over the final estimation problem, which we had kind of been putting off for a few days because of all of the deadlines (abstracts were due Monday, posters were due Tuesday, and presentations were yesterday): If you paved a pathway to the moon, how long would it take to walk, jog, and sprint it? Even though this was a fairly straightforward problem compared to past weeks, the important part was coming up with a way to estimate the distance to the moon. For someone who doesn’t know it off the top of their head, it’s at first hard to fathom coming up with a number for this distance. Obviously it’s impossible to just guess, so you have to think more creatively! For instance, I estimated the radius of the Earth and then used this to come up with a distance – the moon is definitely farther than 10 times the radius of the Earth, and 1000 times seemed too far, so I used 100 times: 400,000 km.

The ranges for our answers were then as follows: it would take 7-9 years to walk to the moon, about 4-5 years to jog, and about 2 years to sprint. Afterwards I asked the students about whether they thought these types of exercises were useful / fun, or if they felt too much like homework problems. They all seemed to agree that the estimation problems were interesting and should be continued for future LTC students.
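As a sanity check on those ranges, here’s a quick back-of-envelope script. The distance is our estimate from above; the walking/jogging/sprinting speeds of 5, 10, and 20 km/h are my own assumed round numbers, and the traveler never stops to rest:

```python
# Years to cover an estimated Earth-Moon distance at a constant speed.
DISTANCE_KM = 400_000          # our estimate: ~100 Earth radii
HOURS_PER_YEAR = 24 * 365

def years_to_cover(speed_kmh):
    return DISTANCE_KM / speed_kmh / HOURS_PER_YEAR

for label, speed in [("walk", 5), ("jog", 10), ("sprint", 20)]:
    print(f"{label} at {speed} km/h: {years_to_cover(speed):.1f} years")
```

With nonstop motion this gives roughly 9, 4.6, and 2.3 years, consistent with the ranges we came up with.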

In the afternoon Dexter Bailey, Vice President for University Advancement and Executive Director of the Simons Foundation, came to visit our lab. After introducing ourselves, each student gave him a brief overview of his/her project. The students are really getting the hang of presenting their research in a variety of forms (i.e. 1-minute verbal progress reports, powerpoint research updates, more formal powerpoint presentations, and demonstrations in the lab) to a range of audiences (i.e. mentors, professors, other LTC students, their peers in other labs, and non-science professionals). I think Dexter Bailey enjoyed his visit and mentioned before he left that he thinks a good meeting is one in which you come away having learned something new – and after an hour in the LTC, there were plenty of things that he learned!

Helping Sam shrink the file size of her poster reminded me that I had to do the same for my Dickinson poster (which I link to on my presentation page). So I went through and shrank the pdf files for my poster, 2012 REU presentation, and 2012 FiO/LS presentation. There’s a pretty simple way of doing so on a Mac (I’m not sure if there are similar options on Windows).

  1. Save the PowerPoint file as a PDF

  2. Open the PDF (the default application for Mac is Preview)

  3. Save As…

  4. Under “Quartz Filter” select “Reduce File Size”

In other news, I found the birthday problem article that had come up in a recent conversation – I remembered that some show host had made the mistake of assuming that the magic number of 23 people (for there to be over a 50% chance that two people share a birthday) meant he could pick a specific birthday and expect someone to have it, but I forgot where I read the article... The article turned out to be part of that same Me, Myself, and Math series from the NY Times Opinionator!
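The two probabilities are easy to compare directly; a minimal sketch (function names are mine):

```python
def p_any_pair_shares(n):
    # chance that at least two of n people share *some* birthday
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (365 - i) / 365
    return 1 - p_all_distinct

def p_specific_birthday(n):
    # chance that at least one of n people has one *particular*, fixed birthday
    return 1 - (364 / 365) ** n
```

With 23 people the first probability is already about 51%, but the second is only about 6% – you’d need roughly 253 people before a specific date becomes a fair bet, which is exactly the host’s mistake.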


Wednesday 7 August 2013

Today was our Closing Pizza Lunch Meeting, which also marks the end (for most of the students) of the Laser Teaching Center’s 15th summer! In honor of this, we shared some updates and remarks from LTC alumni that Dr. Noé had written to: I presented the PowerPoint I had been working on and read the quotes from the alumni who had written back to Dr. Noé. After each of the students gave their ~10-minute presentations, I shared the video slideshow that I’ve been working on. I first explained how this summer, as a mentor and assistant, I’ve had a variety of duties: helping students hands-on with their research and writing, doing literature research on students’ topics of interest to keep up with their work, doing research for the spatial light modulator purchase, and keeping organized records of guests and other special events we’ve had in the LTC this summer.

For the rest of the afternoon, I spent some time adding to my website – I created a page for pictures from the Simons students’ tour of the LTC and an Alumni remarks page with the information/pictures I had put in the PowerPoint. Later, I uploaded the slideshow to YouTube.

I also finished going through each student’s poster one more time before Dr. Noé ran them off to the printer for the 6:30 PM deadline. We just made it in time!


Tuesday 6 August 2013

In the morning I stumbled upon a NY Times article on catastrophe theory – part of the same Opinionator series that produced the singularity article I had used for my senior research: Me, Myself, and Math, by Steven Strogatz. The article discussed the theory’s applications to the economy and sleep patterns, among other things. Just a note about some of the terminology used – contrary to the connotations of the theory’s name, a “catastrophe” isn’t necessarily something positive or negative; it’s simply a sudden, discontinuous change. The word “caustic” literally means “able to burn”; in physics, a caustic is the highly concentrated envelope of light formed by the intersection of reflected or refracted parallel rays from a curved surface.

The NY Times article cited two important sources; one was an article in Nature by Berry about caustics in lollipops. Berry’s poster explained how caustics are catastrophic events in a more straightforward way than anything I’ve read so far: in a swimming pool, you see a bright web of light when your eye, the sun, and the water are all at the right distances, so that the light rays intersect (at critical points).

After lunch, I gathered optics demonstrations for the Simons program’s tour of the LTC this afternoon. I also planned out how the station rotations would play out. The tour began at around 4pm with some remarks by Dr. Noé in the conference room. He went around the room and had each student say what they’re researching and then think about how that relates to optics. Afterwards, I divided the students into three groups that I helped rotate on a 20-minute/station schedule – one group stayed with Dr. Noé for demonstrations, one group listened to William and Kathy discuss their projects, and the third group listened to Sam, Kevin, and Rachel explain their projects and also see a laser light show (run by Rachel). Throughout the event, I made sure to keep things organized and moving smoothly. In the beginning I had to be a little creative and rework the original plan on the spot to fit the lengths of the student explanations. But overall, I’d say it went really well! And it was great to see how our LTC students got really excited about sharing their work with their peers. (I also took lots of pictures!)

Throughout the day and after the tour especially, I looked over the students’ posters and gave them comments/suggestions. Today I also finished the iMovie slideshow, the LTC alumni quote presentation, and a program with the list of student talks/titles for tomorrow’s pizza lunch.


Monday 5 August 2013

Today I did more abstract editing! I sat down with William for the entire morning, editing and rearranging his abstract. Again, it was important that I fully understood each piece of his project in order to make constructive changes, so we also spent some time discussing his procedures and the reasons behind them. I then worked with Sam after lunch on the theory section of her abstract to try to define catastrophe theory (Dr. Noé later suggested using the proverb “the straw that broke the camel’s back,” which I had brought up after hearing it used in Robert Gilmore’s article on catastrophe theory). For now we ended up with: the study of how changing control parameters leads to qualitative changes in the solutions of a differential equation.

I did some more literature research on Samantha’s project and looked over Berry’s 1976 paper that connected caustics with René Thom’s catastrophe theory and his article in Physics Bulletin, which sort of summarized it. (It’s really great that Berry has all of his papers available online.) Structurally stable caustics are those that are unaffected by generic perturbation; these are the “elementary catastrophes,” such as the fold or cusp. The higher-dimensional catastrophes are structurally unstable and go through an unfolding process (e.g. the parabolic umbilic that Sam observed in her evaporating droplet went through a series of stages in which you could see the individual elementary catastrophes). This higher-dimensional catastrophe appears because of surface tension and gravity (from the vertical microscope slide she’s using).


Sunday 4 August 2013

We spent most of the afternoon editing student abstracts – it’s a very time-consuming process! I found out very quickly that in order to properly help someone revise a piece of writing, you really need a firm grasp of the content that’s being written about. Editing is more than just checking spelling and grammar – it’s fixing sentence structure and reorganizing ideas to make sure the student is communicating his/her message clearly and concisely. In Sam’s case, I had been keeping up with her journal and experimental progress but was still not quite sure how the math behind her research into caustics worked. Therefore, it was difficult to check over her theory section.

I started with Nye’s 1979 paper that she’s modeling her work after and the Gilmore catastrophe theory article that I found a couple of weeks ago. After several discussions with her throughout the rest of the day, I think I’ve come away with a better understanding of the theory behind her project!

In short: The germ is a polynomial (in the state variables x, y) that is unique to each type of elementary caustic (these are very complicated to derive), while the unfolding terms may be common among a few different caustics and contain the control variables (i.e. a, b, c, d). For a particular catastrophe (i.e. caustic shape), you produce the generating function by adding the germ and unfolding terms together. Setting the first partial derivative equal to zero (the “formation of the ray”) reveals the function’s critical points, and setting the second partial derivative equal to zero (the “formation of the caustic”) reveals the degeneracy of these points. What she’s interested in is finding the caustic curve equation, which comes from solving the formation-of-the-caustic equation for the state variables. Plotting these against the control variables produces a graph that looks like the caustic shape she observed.
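To keep this straight in my head, here is the standard cusp catastrophe worked through in the same language – a textbook example with one state variable x and two control variables (a, b); the polynomials for Sam’s parabolic umbilic are more involved:

```latex
\begin{align*}
F(x; a, b) &= \underbrace{x^4}_{\text{germ}} + \underbrace{a x^2 + b x}_{\text{unfolding}}
  && \text{(generating function)} \\
\frac{\partial F}{\partial x} &= 4x^3 + 2ax + b = 0
  && \text{(formation of the ray: critical points)} \\
\frac{\partial^2 F}{\partial x^2} &= 12x^2 + 2a = 0
  && \text{(formation of the caustic: degeneracy)} \\
\intertext{Solving the second equation for $a$ and substituting into the first gives the caustic, parameterized by the state variable:}
a = -6x^2, \quad b &= 8x^3
  \quad\Longrightarrow\quad 8a^3 + 27b^2 = 0.
\end{align*}
```

The last relation, 8a³ + 27b² = 0, is the semicubical cusp curve in the (a, b) control plane – the analogue of the caustic curve equation Sam is after.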


Saturday 3 August 2013

Today I worked on the LTC Summer 2013 slideshow a little bit more. I also created a page with general tasks that can be done during a Friday clean up session to keep people on task and make the afternoon as productive as possible. As seen from yesterday’s clean up, when there’s a plan laid out ahead of time, it’s remarkable how much can be accomplished! With the addition of this new page, I decided to organize my Summer 2013 page, since it was starting to become somewhat of a jumbled list.


Friday 2 August 2013

Today I spent a lot of time putting together a slide show of pictures from the LTC’s summer program using iMovie. I first introduced each student/mentor, then I added photos from special events, and finally I had a few general photos of happenings in the lab. I’ll add in a few more photos at the beginning of the week from the Simons LTC tour and then burn the whole thing onto CDs to hand out (since it’s already too large of a file to be emailed).

In the afternoon, we all got together and did some very productive cleaning in the lab – First everyone worked on cleaning up around his/her individual table space, i.e. putting away extra pieces that may have accumulated (e.g. posts, post holders, screws, etc) and making sure that delicate optics that aren't immediately being used (e.g. lenses, wave plates, polarizers, etc) are either put away / covered / placed somewhere safe. We started putting together the "laser light show" on the wooden table for the LTC tour next week, and setting aside some of the other demos that will be used as well. Other things that were done included general neatening up of loose papers / books / magazines, sweeping up loose screws and washers from the floor, and putting up a few more things on the wall.

I think the lab is looking great!


Thursday 1 August 2013

I helped Rachel with a number of Mathematica-related issues, since she’s trying to fit a Bessel function to her Airy diffraction pattern data, similar to what I had done last summer. I had to look back at some of my old Mathematica files, but there were a few things that I couldn’t remember how I did, and unfortunately I don’t have my actual notes from last summer with me right now... One important command that I figured out is the one used to import lists of data from an Excel spreadsheet; the data appears as an array of points that can then be plotted using the ListPlot command.

I was able to help her with a couple of other things that I remembered having difficulty with – extra parentheses around the data set and extra spaces within certain commands, which the program doesn’t like. (The error message that Mathematica spits back out is never much help with this type of thing!) There were still some questions about what certain values were in my equation and why her fit still wasn’t working correctly…

Later in the day I looked over my old Mathematica files some more, and I'm pretty sure now that the values in my equation were in micrometers, not millimeters - which would make sense that the "150" was in there (since my pinhole was a=150um) and that the wavelength was "0.632." I'm still not sure about my L distance of "11310," because I definitely had the photodiode farther than 11mm away from my pinhole! There may have been an issue with my "x" distances not being in microns. I was having trouble locating the raw data on my computer to check this, but it's possible that I made an error in their order of magnitude and then somehow fixed it by adjusting L ... It’s possible that double-checking all of her units again might help her fit work a little better.
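For reference, the model we’re both fitting is the Airy pattern, I(u) = I0 [2 J1(u)/u]² with u = k a x / L. A minimal Python sketch of it (not the actual Mathematica code), taking a as the aperture radius and plugging in the numbers recalled above – a = 150, λ = 0.632, L = 11310 – with all lengths assumed to be in micrometers, which is exactly the unit question at issue:

```python
import math

def bessel_j1(x, n=2000):
    # J1 via its integral form, J1(x) = (1/pi) * integral_0^pi cos(t - x sin t) dt,
    # evaluated with the trapezoidal rule (avoids needing scipy)
    h = math.pi / n
    f = lambda t: math.cos(t - x * math.sin(t))
    total = 0.5 * (f(0.0) + f(math.pi)) + sum(f(i * h) for i in range(1, n))
    return total * h / math.pi

def airy_intensity(x, i0=1.0, a=150.0, lam=0.632, dist=11310.0):
    # Fraunhofer diffraction from a circular aperture; x is the transverse
    # position on the screen (all lengths in micrometers, by assumption)
    u = (2 * math.pi / lam) * a * x / dist
    if abs(u) < 1e-9:
        return i0            # limit of [2 J1(u)/u]^2 as u -> 0
    return i0 * (2 * bessel_j1(u) / u) ** 2
```

If her x values were in millimeters instead of micrometers, u would be off by a factor of 1000 – the kind of order-of-magnitude slip discussed above.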

It was also exciting today that William came full circle in his interests – he actually used a pair of 3D glasses in his setup to demonstrate what effect a left-handed or right-handed circular polarizer had on his output beam.

I created a pizza lunch meeting summary and typed up a brief description of each presenter’s talk from yesterday. I also updated the calendar page with information from Jenny Magnes’ visit. When I was uploading a couple of pictures from yesterday’s events, I found that one of the pictures was set to private (if you tried to view it online, the page was listed as “Forbidden”). This isn’t the first time this has happened to me (I’m not sure what even causes it), but there’s an easy command to make pictures/files public.
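On a Unix web server, a “Forbidden” page usually just means the file isn’t world-readable; the usual fix is a permissions change (e.g. `chmod a+r` from the shell). A hypothetical Python equivalent, for the record (the function name is mine):

```python
import os
import stat

def make_public(path):
    # add the group- and world-readable bits, leaving the other
    # permission bits alone (equivalent to `chmod go+r path`)
    mode = os.stat(path).st_mode
    os.chmod(path, mode | stat.S_IRGRP | stat.S_IROTH)
```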

Also, Rabbit, rabbit, rabbit!


Wednesday 31 July 2013

Today Jenny Magnes from Vassar College visited with three of her students to speak at the pizza lunch and spend the afternoon in the LTC. As per usual for our lunch meetings, I helped set up the projector and her presentation in the morning. (Hopefully we’ll be able to get a new projector for the AMO conference room soon!)

She and her students (Brian, Ramy, and Tewa) spoke about their work on biological applications of diffraction, which ranges from observing changes in the intensity of the diffraction pattern of a regenerating planaria in a changing magnetic field to quantifying the thrashing movements of C. elegans based on its diffraction pattern. A video of her work in Quantitative Locomotion Study of Freely Swimming Micro-organisms using Laser Diffraction can be found in the Journal of Visualized Experiments (JoVE).

After the Vassar group finished presenting, Rachel and Kathy each gave a short presentation on their work – Rachel spoke about her initial interest in identifying bacteria by their diffraction patterns and how she came to start researching reconstructing an object from its diffraction pattern and aperture functions. Kathy explained her optical tweezers setup and the biological applications this type of apparatus has.

I also spent some time during the day working with William to try to retrieve his images off of the “CCD computer.” We tried hooking up the computer tower from Sam’s desk to a monitor - the computer started up fine and went to the log-in screen, but then we had no way of actually communicating with it because the keyboard and mouse were both unresponsive (we even tried multiple different mice).

I suggested we purchase a portable USB floppy drive for the lab, to make transferring images from the “CCD computer” a lot easier. These are fairly inexpensive; however, they’re usually not in stock in stores, so we’ll have to buy one online from Staples, Best Buy, or even Amazon.


Tuesday 30 July 2013

Today I worked with William to try to optimize his setup with the 4-f component. In order to minimize extra scatter, I had him clean most of the optics and tried to find a less-scratched polarizer. We then took some time to set up the CCD camera and computer near his table, and found that the striations in the stress optic were causing extra stripes in the output. Even so, William was able to adjust the optics such that you could see the bull’s-eye beam output with either a dark or bright center depending on the orientation of his linear polarizer. The next issue to deal with is finding empty floppy discs and a way of reading them on a computer that has both a floppy disc reader and a USB port or network connection.

I started brainstorming what to put into my presentation for next week, since I’ve done research into a variety of topics (most in-depth with the SLM, but also little things here and there to keep up with the students’ interests), and I’ve also helped out with different tasks around the lab (whether it be assisting students with projects or recording accounts of the events and visitors we’ve had). I’m thinking that I might put together some sort of “Experiences as a mentor in the LTC” presentation…

I started creating a “CCD vs. CMOS cameras” page with the information that I had been reading about. I’ll have to finish it up in the next few days. (The html code that I used to create the table came from Temple University.)


Monday 29 July 2013

I updated the estimations page with the answers from last week and a new problem for this week – since all of the students are busily engaged in their projects now, I chose something fairly straightforward, but still interesting to think about:

If you paved a pathway to the moon, how long would it take (in years) to:
(a) walk it? (b) jog it? (c) sprint it?

I did some research into the differences between CCD (charge-coupled device) and CMOS (complementary metal-oxide semiconductor) cameras. There was some useful information on the Thor Labs product page, and I also found this Photonics Spectra article. Both provide a means of converting an optical image to an electronic signal, but the main difference is the way they’re wired: in a CCD camera, each pixel’s charge is sequentially transferred to a common output structure where the charge is converted to a voltage; in a CMOS camera, the charge-to-voltage conversion takes place at each pixel. CCD cameras provide superior image quality, but at the expense of a larger system; they are more suitable for top-notch imaging applications such as digital photography, broadcast television, and scientific/medical imaging. CMOS cameras are immune to blooming and the system is often more compact, but at the expense of image quality (especially in low light); these cameras are more suitable for high-volume and space-constrained applications such as security cameras, videoconferencing devices, fax machines, and consumer scanners.

I met Jon Sokolov from the Garcia Center who discussed their Summer Scholar program as well as his research in DNA cutting with soft lithography. (This method allows for greater efficiency with DNA sequencing, because the DNA strands are cut into pieces in such a way that you’re able to determine each one’s overall location in the strand based on the length of the piece.) The scholar program hosts about 60 high school students, 10 undergraduates (REU), and a handful of high school teachers (RET).

Rachel forwarded me a very interesting article: Reading Art through Science. It described the work done by physicists and chemists in the Metropolitan Museum of Art’s science department to study and protect great works of art. Their labs use lasers, electron microscopes, and x-ray machines to perform ultra-sensitive and minimally invasive experiments. One chemist, Marco Leona, describes the importance of science to better understanding the history of a work of art: “Through the ‘materiality’ of a piece we can learn something about the artist as a person who existed at a specific point in time, in a specific place and society, with access to certain knowledge and technologies.” One of my favorite lines from the article is “Hints of [these scientists’] broad interests line their desks: books on laser physics sit next to texts on ancient metallurgy practices, Cambodian history, or Italian Renaissance painters.” The museum offers Conservation and Scientific Research fellowships – for those who have recently completed graduate level training, or even professionals in the field.


Friday 26 July 2013

Today was a very productive day! In the morning I reviewed the estimation problem I had given the students for this week:

If we had a red HeNe laser with a Fabry-Pérot cavity the length of Long Island:

  1. What is the frequency spacing between two adjacent longitudinal modes in the cavity? (Our range of answers was 600-800 Hz)

  2. How many lasing modes would be present at a given time? (We all got about 2 million modes)

After we discussed our answers, Casey jumped in and asked if the students knew where the formula for frequency spacing comes from. They could somewhat explain it, but he encouraged them to write it out. Kevin started, and at one point Kathy and William joined him - so all three were writing on the board at once! They made quick work of it together, and afterwards Casey explained how he arrived at the equation slightly differently. It was great that our discussion evolved into this mini derivation exercise - basically driven by the students.
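For the record, the calculation behind our answers: the longitudinal mode spacing of a cavity of length L is Δν = c / 2L, and the number of lasing modes is roughly the gain bandwidth divided by that spacing. A quick sketch, where Long Island’s ~190 km length and the ~1.5 GHz Doppler-broadened HeNe gain bandwidth are my assumed round numbers:

```python
C = 3.0e8                 # speed of light, m/s
L = 190e3                 # cavity length, m (~length of Long Island, assumed)

mode_spacing = C / (2 * L)      # Hz between adjacent longitudinal modes
gain_bandwidth = 1.5e9          # Hz, typical HeNe Doppler linewidth (assumed)
num_modes = gain_bandwidth / mode_spacing

print(f"mode spacing ~ {mode_spacing:.0f} Hz")
print(f"lasing modes ~ {num_modes:.1e}")
```

This gives about 790 Hz and roughly 2 million modes, consistent with what we found in class.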

I spent most of the day jumping around between various student projects -

I helped Rachel take pictures of her pinhole diffraction pattern with the CCD camera using the old computer on the cart. We ran into some issues when trying to transfer images, since there’s no longer any other (hooked-up) computer in the lab with a floppy disk reader! One possibility is hooking the computer up to the network and signing into laser using SSH … In the meantime, we also tried Kevin’s suggestion of using the Nikon camera with the lens removed. I then showed her how to do an intensity profile of the beam using ImageJ; however, the Nikon images were somewhat saturated by the central disk of the Airy pattern. She instead decided to set up a photodiode to manually profile the beam, but we ran into some issues trying to find the correct banana clips to connect a resistor in parallel with the voltmeter (I had actually just helped Kevin connect the photodiode to an oscilloscope for his cavity-dump laser setup).

With William we discussed the possibility of using a 4-f setup to clean up the output from his stress optic, which has a number of striations affecting the diffraction. Thankfully, I was able to find the lenses from my setup last summer – 333 mm focal length, 40 mm diameter achromats. (I ended up labeling the drawer on the desk in the back electronics room, since it contains other useful spatial filtering supplies.)

Samantha’s project deals with the caustics of water droplets and catastrophe theory. After reading up on catastrophe theory in general, I realized it has ties to my senior research project on singularities in freezing water droplets. The solution to the differential equation I had used to describe the changing shape of the droplet as it freezes undergoes a pitchfork bifurcation at a critical density ratio (of the solid to the liquid). Before the bifurcation, droplets freeze with a flat top, but past this critical point, frozen droplets have a cusp formation at their tip. It turns out that the appearance of new equilibria and the disappearance of old ones are essential to the study of catastrophes. I plan to explore this connection further and have started going through a very detailed and useful source on catastrophe theory – it even has a glossary of terms at the end.

The new CMOS camera we ordered from Thor Labs yesterday arrived this morning! Kathy tried using it in her tweezers setup; however, there wasn’t great contrast on the video feed… Hopefully she can fix this by playing around with some of the settings, but something I’d also like to do is read up on the differences between CCD and CMOS cameras.


Thursday 25 July 2013

Today I compiled a calendar of LTC events, to keep track of all of the special meals and visitors we’ve had this summer. I also did a little bit of research into the USB CMOS camera, which we just purchased, and found the manual.

I updated my SDE1024 page with information on suppressing ghost spots and fixed the paragraph I had at the beginning about twisted nematic liquid crystal cells – since this is a reflective SLM, the incident light’s polarization axis is rotated 90 degrees upon entering the cell but then rotated back as the light reflects back out of it; the net effect is that it’s a phase-only device. I also updated my introduction-to-SLMs page with information on how liquid crystals rotate the polarization of incident light.

Speaking of Rosalind Franklin (Friday 19 July journal entry on the play that was written about her life), today is her birthday!


Wednesday 24 July 2013

Today at the pizza lunch, Giovanni Milione (currently a PhD candidate at CUNY and formerly a student in the LTC) gave a talk on “Classical Entanglement with a Vector Light Beam.” He explained how a vector vortex beam has a combination of spin angular momentum (SAM – circular polarization; the overall polarization pattern is radial or azimuthal) and orbital angular momentum (OAM – the shape of the wavefront, specifically a Laguerre-Gaussian mode). While the polarization and spatial phase of the beam are two separate qualities, they are “classically entangled” such that affecting the SAM affects the OAM. This is mathematically equivalent to quantum entanglement; therefore it opens the door to the possibility of using vector vortex beams for advanced communication (i.e. being able to increase the amount of information transferred through an optical fiber).
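As I understand it, the “classical entanglement” is just the fact that a radially polarized vector vortex beam cannot be written as a single (polarization) × (spatial mode) product. In one common convention (with ê_L, ê_R the circular polarization unit vectors and e^{±iφ} the ℓ = ±1 OAM phase factors):

```latex
\mathbf{E}_{\mathrm{radial}} \;\propto\;
\frac{1}{\sqrt{2}}\left( \hat{e}_{L}\, e^{+i\phi} \;+\; \hat{e}_{R}\, e^{-i\phi} \right)
```

This has the same nonseparable form as a two-particle Bell state, with the polarization playing the role of one “particle” and the spatial mode the other.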

During the discussion after his talk, there was some controversy over how the OAM and SAM could be determined from a single photon, since you can’t necessarily measure both its polarization and its mode simultaneously. Another interesting thing that came up was the no-cloning theorem, which states that one can’t make an exact copy of an unknown quantum state – one reason why quantum encoding of information is so appealing, since one would be able to tell right away if someone tried to “eavesdrop” on an entangled photon.

The students then gave a brief summary of what they’re working on now and how they got to this point. This was good practice for figuring out how to concisely describe your research to someone who has not watched your progress over the past month but still has a substantial knowledge of optics. In such a short amount of time, there is a delicate balance between providing some context and details and not getting bogged down with too much background or the small intricacies of the project.

I also started putting together a calendar of special events to keep track of all the special visitors and pizza lunches we’ve had this summer.


Tuesday 23 July 2013

Today I did a lot of research into answering the question: why do the liquid crystal molecules in a twisted nematic alignment rotate the polarization of light? Most of the papers I had read about SLMs simply stated that light’s polarization axis follows the helical rotation of the liquid crystals, yet none of them ever really explained why that is.

I started by looking at a Polymers and Liquid Crystals page from Case Western Reserve University, which broke down polarization in general and then talked more specifically about birefringence in liquid crystals, with some useful simulations (that I couldn’t run properly on my Mac!). This page on Anisotropy in Liquid Crystals from Kent State University is useful in describing how the extraordinary ray of incident light sees an “effective” index of refraction, based on its angle.

After looking over many sources, compiling this information, and confusing myself further a couple of times, I think I better understand what's going on (i.e. how a liquid crystal molecule's “tilt” changes the phase of an incident beam and how the “helical alignment” rotates the polarization of an incident beam):

If light enters a liquid crystal molecule with its polarization axis parallel to the slow axis of the molecule, this extraordinary ray will be slowed down; the light travels faster when its polarization axis is perpendicular to the slow axis. Therefore, phase modulation occurs as the liquid crystal molecule tilts, taking the slow axis from parallel to perpendicular relative to the polarization axis of the incoming light.

This can be seen in the parallel-aligned nematic liquid crystal, where the molecules begin in an upright position and then tilt towards the direction of the electric field at an angle θ in the longitudinal (y-z) plane. The stronger the electric field, the more the tilting angle increases in the direction of the axis of propagation, and the more the phase is modulated.

On a much smaller scale, Rayleigh scattering occurs between the electric field of the incident light beam and the electrons of the liquid crystal rod-like molecule. This is a type of elastic light scattering, in which negligible energy is transferred and therefore the wavelength of the incident photon is conserved – only the direction changes. Since these electrons are bound to the crystal, scattering occurs along the axis of the rod.

In a twisted nematic liquid crystal, where the molecules are aligned in a helical fashion, the scattering causes light’s polarization axis to follow the helix. Since each molecule is tilted away from the longitudinal axis (in the transverse x-y plane) by a certain angle α, the polarization of incident light will be rotated by an angle α as it exits the molecule.
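One way to sanity-check the “polarization follows the helix” picture numerically is Jones calculus: model the twisted cell as a stack of thin waveplates whose fast axes rotate slightly from slice to slice. (This is my own sketch, not something from the sources above; the slice count, twist, and retardance values are arbitrary illustrative choices.)

```python
import numpy as np

def waveplate(delta, theta):
    """Jones matrix of a linear retarder with retardance delta, fast axis at angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    W = np.array([[np.exp(-1j * delta / 2), 0], [0, np.exp(1j * delta / 2)]])
    return R @ W @ R.T

N = 200            # number of thin slices in the model
twist = np.pi / 2  # total 90-degree twist of the director
gamma = 20 * np.pi # total retardance; gamma >> twist is the adiabatic (Mauguin) regime

J = np.eye(2, dtype=complex)
for k in range(N):
    theta = (k + 0.5) / N * twist          # local director angle of this slice
    J = waveplate(gamma / N, theta) @ J    # later slices multiply on the left

E_out = J @ np.array([1, 0])               # input linearly polarized along x
print(np.abs(E_out))                       # nearly [0, 1]: polarization followed the twist
```

When the total retardance is much larger than the twist, essentially all of the light exits polarized along y, i.e. the polarization has followed the 90° helix; lowering gamma toward zero makes this rotation break down.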

I found a paper that appears to describe this phenomenon: Quasielastic Rayleigh Scattering in Nematic Liquid Crystals, which I'll read more of tomorrow. At first glance, it seemed to keep citing previous work rather than explaining the basics. I tried to work backwards from the earliest paper that they cited multiple times, by Chatelain in 1948, who was the first to study the intensity of light scattered by liquid crystal molecules, but his paper is only in French!

In other news, looking at the information page from Case Western Reserve University reminded me of the Free GRE Flashcards a professor in the physics department put together. I ordered them last summer and have only just recently had the time to start going through them. These have great short-answer style questions that cover all of the basic physics concepts you need to know for the exam (according to the types of questions asked on previous exams), and everything is neatly organized by topic. Unfortunately, the flashcards are currently out of stock, but it seems that they're working on an iPhone / iPad app!


Monday 22 July 2013

Today I reviewed the Stony Brook Solar Array (as William termed it) estimation problem with the students. Our results, on the estimation page, ranged from 10^4 to 10^6 kWh (10^10 to 10^14 J). It was great to see that each of us compared the resulting value to a different thing (i.e. household energy usage, skyscrapers, light bulbs, and lightning bolts). The new problem for this week is:

If we had a red HeNe laser with a Fabry-Pérot cavity the length of Long Island:

  1. What is the frequency spacing between two adjacent longitudinal modes in the cavity? [Hz]

  2. How many lasing modes would be present at a given time?
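A back-of-the-envelope sketch of how one might work this out (Long Island's length and the HeNe gain bandwidth below are my own assumed round numbers, not anyone's actual answer):

```python
# Fabry-Perot cavity the length of Long Island, with a red HeNe gain medium.
c = 3.0e8         # speed of light [m/s]
L = 1.9e5         # cavity length ~ length of Long Island [m] (assumed ~190 km)
gain_bw = 1.5e9   # Doppler-broadened HeNe gain bandwidth [Hz] (assumed)

mode_spacing = c / (2 * L)         # longitudinal mode spacing, c/2L [Hz]
n_modes = gain_bw / mode_spacing   # modes that fit under the gain curve

print(f"mode spacing ~ {mode_spacing:.0f} Hz")    # a few hundred Hz
print(f"number of modes ~ {n_modes:.1e}")         # on the order of millions
```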

I read through the second part of Padgett et al.'s paper, Optimisation of a low-cost SLM for diffraction efficiency and ghost order suppression, which discussed suppressing ghost traps. When trying to create an array of multiple spots in the Fourier plane (for example, two spots in the first diffraction order), they used the complex addition of multiple individual holograms. This, however, often creates a complex field with extra, unwanted light spots known as ghost traps. The problem is that, in addition to the continuous vertical phase gradient, their “desired field” (for creating these two spots) contained a square-wave profile along the horizontal axis that can't be reproduced with a phase-only SLM. Therefore, the SLM output looked as follows:

The high contrast between columns resulted in more high spatial frequencies being present in the first diffraction order. To correct for this, they created a phase profile on the SLM in which the phase contrast was decreased near horizontal phase jumps. Therefore, when they subtracted their “desired field” from the SLM output, the “left over” field contained very little phase modulation, as seen below:

The unwanted light is now concentrated in the zeroth diffraction order (i.e. not diffracted because low contrast means low spatial frequencies, which remain in the center of the Fourier transform). The overall intensity of the resulting first-order diffraction pattern is decreased, but it also no longer contains the unwanted ghost spots.
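A quick 1D numerical sketch of where those ghost orders come from (my own toy example, not the paper's 2D implementation): add two gratings, keep only the phase of the sum, and look at the Fourier plane.

```python
import numpy as np

N = 1024
x = np.arange(N)
k1, k2 = 60, 80                         # the two desired spot positions (FFT bins)
desired = np.exp(2j*np.pi*k1*x/N) + np.exp(2j*np.pi*k2*x/N)   # complex sum of gratings

slm = np.exp(1j * np.angle(desired))    # a phase-only SLM keeps only the argument
spectrum = np.abs(np.fft.fft(slm))**2 / N**2

print(spectrum[k1], spectrum[k2])                 # wanted spots, ~ (2/pi)^2 each
print(spectrum[2*k1 - k2], spectrum[2*k2 - k1])   # ghost orders, ~ (2/(3*pi))^2 each
```

Dropping the amplitude of the two-grating sum turns its envelope into a square wave, and the square wave's higher harmonics are exactly the ghost orders at combinations like 2k1 − k2.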


Friday 19 July 2013

Laser Sam and Dr. Noé showed us an example of laser-induced fluorescence using the green Lightwave NPRO laser (~50 mW) in the side room and a glass cell filled with iodine gas. When we shone the laser through the cell, it excited the gas molecules, causing a sharp green/yellow streak to appear in the laser's path (only at certain times, because of the fluctuating laser). We also tried shining the green laser pointer through, and it was able to create the same reaction at certain locations. Dr. Noé said that a green HeNe could also be used, and that there would even be a weak reaction with a red HeNe. This paper discusses a more in-depth study of the phenomenon.

Image Source: Bayram and Freamat (2012)

Sam was able to get the temperature-stabilized laser somewhat running. With this laser, you can see the oscillation between its two orthogonal modes by placing a polarizer in the path of the beam and watching the beam alternate between getting dimmer and brighter. There's also a feedback circuit with LED lights that show how the laser switches between modes. (The feedback circuit switch: angled away from the board is ON, straight up is OFF, and angled towards the laser is LOCK.) After the laser heats up (~20 minutes), one can flip the switch to LOCK. However, it turned out there was an issue with the circuit board's feedback mechanism, so he's going to bring that back home with him to work on.

Dr. Noé pointed out an article in APS News, Physicists in Outreach Face Tricky Career Choices, which describes the delicate balance between the physics community's need for outreach and an individual's need to establish his or her career. According to the article, older physicists often frown upon early-career physicists who partake in outreach, viewing them as less serious about their scientific work. Those interviewed for the article suggest outreach should remain more of a hobby until a physicist actually establishes himself or herself in the field; for some, however, it may suit them better to pick up outreach as an actual career. It all depends on the individual's interests and goals, and there will inevitably be tradeoffs.

One of the physicists from the article particularly intrigued me - Sidney Perkowitz, a physics professor at Emory University. He was noted as having published over 100 scientific articles along with many works geared towards non-scientific audiences (e.g. five books, two plays, and one performance dance piece). I looked him up on Google and found a biography on his personal website; the first sentence read: “Sidney Perkowitz is that rare blend of scientist and artist—a whole-brain thinker.” Wow! If only I had gone to Emory… The type of work he's done is what I eventually would like to do – communicate interesting and important scientific phenomena and discoveries to the non-scientific community in various artistic forms.

One of the plays he's written is titled Glory Enough, which follows the life and accomplishments of Rosalind Franklin. Franklin played an integral role in Watson and Crick's discovery of DNA's double helix structure, yet she received none of the glory. The play captures this injustice and delves into male views of women in science. His performance-dance piece, titled Albert and Isadora, portrays a series of interactions between Einstein and Isadora Duncan (whom I learned about in a Dance and Culture course I had taken at Dickinson). Duncan was an American dancer who played an integral role in the development of Modern Dance at the turn of the 20th century, and the piece's dialogue reveals the similarities between her dances and views of the universe and Einstein's theory of relativity.

I also updated the Laser Sam visit page.


Thursday 18 July 2013

Today I added a lot to my Summer 2013 page. Dr. Noé wanted to make sure we keep a record of Laser Sam’s visit, so I created a page to document the various events from the week. This includes a separate place for information on the Pizza lunch meeting.

I also typed up a page for Marty’s guide to cleaning lenses and mirrors, which I hope will be useful for current and future LTC students.

In the afternoon, Sam gave a very informative talk on laser safety. He described the dangers that lasers can pose, different classes of lasers, and good habits to get into when creating setups. All of the details can be found on the safety page of his Laser FAQ.

Dr. Noé showed us how one of the IR Cards (infrared sensitive card) works by first exposing it to visible light and then holding it in front of the IR beam. Then, you have to keep moving the card around to reactivate the spot that had come in contact with the beam.


Wednesday 17 July 2013

In the morning I worked on putting together a short presentation to give an update on the Cambridge Correlators SLM and Padgett article that goes along with it. (In the afternoon I typed this information up and made a separate page on my website, which I hope will be a good introductory resource to future students who will be using the SLM).

At our pizza lunch meeting, Laser Sam gave a very comprehensive presentation on the different types of lasers, longitudinal modes in HeNe lasers, and using a scanning Fabry-Pérot interferometer to analyze these modes. Afterwards, each student presented a short update on his or her progress in experimental setups and/or literature research. I'll summarize the information on a separate page tomorrow.

Since Kevin needed some space on the table in the main room, we took apart and packaged some of Bolun’s setup where he studied evanescent waves. For future reference – the box (labeled of course) is in the back electronics room on a shelf.


Tuesday 16 July 2013

Today I read through part of: Independent phase and amplitude control of a laser beam by use of a single-phase-only spatial light modulator. The main idea is that Bagnoud et al. (2004) use a phase-only SLM for amplitude modulation by placing a low-pass spatial filter in the Fourier plane of a 4-f setup (in which the SLM output is the “object”). It appears that they're using the SLM to modulate the phase of the input laser beam at high spatial frequencies, and then cutting those frequencies off with the spatial filter in order to achieve amplitude modulation.
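Here's a 1D toy version of that idea as I understand it (my own sketch with made-up numbers, not the authors' implementation): encode the desired amplitude in the depth of a pixel-scale binary phase grating, then low-pass filter to recover it.

```python
import numpy as np

N = 4096
x = np.arange(N)
A = 0.9 * np.exp(-((x - N/2) / 400.0)**2)   # target amplitude profile (kept below 1)

# A binary 0/depth grating has zeroth-order (undiffracted) amplitude cos(depth/2),
# so depth = 2*arccos(A) encodes the target amplitude in a pure phase pattern.
depth = 2 * np.arccos(A)
field = np.exp(1j * depth * (x % 2))        # phase-only SLM output, |field| = 1 everywhere

F = np.fft.fft(field)
F[N//8 : -N//8] = 0                         # low-pass spatial filter in the Fourier plane
filtered = np.fft.ifft(F)

err = np.max(np.abs(np.abs(filtered) - A))  # recovered amplitude vs. target
print(err)                                  # small: the filtered beam carries A
```

The grating diffracts the unwanted light to high spatial frequencies, and the filter dumps it, leaving a beam whose amplitude matches the target even though the SLM only ever touched the phase.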

I tried to help Kathy find some gold mirrors for her setup, but it turns out most of the ones (pieces) we have are heavily scratched. There were also a lot of somewhat dirty optics, which prompted Marty to give us a short demo on how to properly clean them – he showed us with a plane mirror and with the dichroic mirror. I plan to write this up on a separate webpage later in the week.

After the students left for lunch at the SAC with Laser Sam, Marty and I discussed the progress of the current student projects. It's crazy how fast the summer is going – the undergrads have been here about six weeks, the Simons students four weeks, and this will be the end of Samantha's second week. There's a lot still to be done, but just from the students' journals alone it's clear that we've got a very serious and dedicated group this summer.

I organized the spatial light modulator binder and added a few more articles that I thought were relevant. I also included information on the Cambridge Correlator devices that we purchased. It turns out that Azure (?) had already started a “Liquid Crystals & SLMs” binder – it was up on top of Dr. Noé’s bookshelf! At some point I plan to go through and consolidate these into one resource.

Marty found an article from Padgett's group on the Cambridge Correlators SLM - Optimisation of a low-cost SLM for diffraction efficiency and ghost order suppression. It describes how increasing the contrast of the blazed diffraction grating pattern sent to the SLM optimizes the device's diffraction efficiency even with its shallow phase depth (~0.8π). I haven't read through the entire article yet, but I'll explain some of this at the lunch meeting tomorrow during my update.


Monday 15 July 2013

In the morning I did a notebook check where I briefly looked over each of the Simons students' lab books and gave them some comments. Overall, they looked really good! Everyone seems to be off to a great start and to be following most of the criteria listed on my advice page.

Laser Sam came today! We had a great LTC lunch at the Simons Center Café, and then he immediately started getting involved in the long list of laser projects we had for him this week.

We’ve decided to purchase two SLMs and one laser module from Cambridge Correlators, who offered us a nice volume discount. They said that they would be able to ship before the end of the month, so hopefully we’ll be getting these devices in time for students to use before the summer program finishes.

I also updated my resources page today by adding a section on popular SLM and optics companies. (Eden suggested we keep ValueTronics in mind for inexpensive new and used testing equipment).


Friday 12 July 2013

In the morning I reviewed the “maximum angular resolution of the human eye” estimation problem - it was great to hear that all of the students approached the problem in a slightly different way (William described his creative method in his July 12th journal entry), and our answers were all within about two orders of magnitude of each other (7 × 10^-3 to 8 × 10^-5 radians). The new problem that I gave them to think about over the weekend is:

If we covered all of the roofs of buildings on Stony Brook campus with solar cells, how much energy could we produce in 12 hours? [J] and [kWhr]
(assuming constant, unobstructed sunlight during this period)
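For what it's worth, here is one way such an estimate might be set up; all of the input numbers below are my own assumed round values, not anyone's actual answer.

```python
# Fermi estimate for the campus solar-roof problem (all inputs assumed):
roof_area = 1.0e5     # total campus roof area [m^2] (guess)
insolation = 1000.0   # solar flux on a clear day [W/m^2]
efficiency = 0.15     # typical solar-cell efficiency (guess)
hours = 12.0          # duration of unobstructed sunlight

power = roof_area * insolation * efficiency   # electrical power produced [W]
energy_J = power * hours * 3600               # energy over 12 hours [J]
energy_kWh = energy_J / 3.6e6                 # same energy in [kWh]
print(f"{energy_J:.2e} J = {energy_kWh:.2e} kWh")
```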

I finished up my colloquia page with the list of relevant talks, abstracts, videos, and links to information about each speaker, so that’s up and running!

Samantha gave us a very informative tutorial on Python. It seems to be a great tool for analyzing large amounts of data, especially when having to pull from multiple files. We also spent a good amount of time in the conference room exploring different topics with the AJP and Optics Infobase databases. I decided to include links to these on my resources page.

I spent some time shrinking the file sizes of some of the images on my laser account (I had a lot of large photos from last summer's REU symposium) using ImageMagick's convert command with its -resize option.

There are still a few particularly large files in my home directory (e.g. pdf files of presentations and my senior seminar research poster) that I’ll have to shrink down as well at some point.

Dr. Noé briefly mentioned a parable that he couldn't quite remember the details of, but thankfully there's Google! It's called the Blind Men and an Elephant, and there are numerous versions of it. Most revolve around the same idea: a group of blind men is in a room with an elephant; each man touches a different part of the elephant, so when the men converse, they disagree about what they are touching. There's a great line on Wikipedia that describes the message of the parable: “While one's subjective experience is true, it may not be the totality of truth. Denying something you cannot perceive ends up becoming an argument for your limitations.”


Thursday 11 July 2013

Today I contacted Cambridge Correlators regarding their SDE1024 Low Cost Spatial Light Modulator. The representative that responded said that we could get a volume discount, and that the device can be shipped before the end of the month. He also suggested we buy their laser module to be used in conjunction with the SLM. We’ll have to do a little more research into these options, but hopefully in the next few weeks we can acquire this inexpensive SLM.

Dr. Noé asked me to go through Stony Brook University's physics colloquia pages and pick out all of the talks that are relevant to the LTC. It took quite a while to browse through the lists; there were separate pages for each academic year over the past six years or so (which is part of the reason I'm doing this: so it'll be easier to find the ones that LTC students are most likely to be interested in). I've also started organizing this list on a separate page, which is still clearly under construction!


Wednesday 10 July 2013

Today was Samantha’s first day! I presented her with her LTC lab notebook, mini “on-the-go” notebook, and gave her a brief tour around the lab. She then gave a great talk at our pizza lunch about her research and experiences as an Intel finalist. Afterwards the LTC students each gave a short update on what they’re working on.

The third part of the lunch meeting consisted of looking through some old collectible books that Dr. Noé picked up during his trip to Ithaca. I really enjoyed flipping through all of them. A treatise by William Herschel reminded me of my visit to a museum dedicated to him and his sister Caroline as part of the History of Science course I took in England. (The museum was in Bath, in a tiny three-story home in the middle of a residential area.) I also found Lord Kelvin's book particularly interesting because he mentioned “ether” a lot; reading books like this is a great way of seeing into the minds of scientists of the time. The illustrations are also incredibly intricate, and the writing in general is very detailed.

At the end of the day, I helped Stefan set up the CCD camera for taking pictures of his optical vortex. Dr. Noé ended up having to get a new monitor from another computer, but once we finally got it to work, I gave him a short tutorial on using the Electrim EDC 1000N software for image capture.


Tuesday 9 July 2013

In the morning, I reviewed the Umbilic Torus estimation problem with the Simons students. For the first part, regarding the two lenses that would be needed to expand the beam enough to fill the aperture, we were all around the same order of magnitude for the ratio of the second lens's focal length to that of the first (10^3), and our distances to the far field ranged from high 10^3 to low 10^5 m. Examples of how this problem can be done are in Kathy's July 8th journal entry and Kevin's write-up of the problem. The next problem is to estimate the maximum angular resolution of the human eye (i.e. what is the smallest item we can see at a certain distance, based on the limitations of the eye?). I updated the estimations page.

I also cleared up a small mistake I had made regarding the resistor we used in the photodiode connection for the Simons students' mini project – I had mistakenly explained the resistor's function as if it were in series. Seeing as it is connected in parallel, it acts as a current-to-voltage converter. Without it, the AVO meter wouldn't be sensitive enough to pick up the current created by the laser beam hitting the photodiode. The larger the resistor, the more the signal is amplified.
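A quick illustration of that current-to-voltage conversion with made-up numbers (the actual photocurrent and resistor values in our setup may differ):

```python
# Ohm's law sketch: the parallel resistor converts photocurrent into voltage.
photocurrent = 10e-6           # assume ~10 microamps from the beam (illustrative)
for R in (1e3, 10e3, 100e3):   # candidate load resistors [ohms]
    V = photocurrent * R       # V = I*R: bigger R, bigger signal
    print(f"R = {R:>8.0f} ohm  ->  V = {1e3 * V:6.1f} mV")
```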

For most of the afternoon, I helped Kathy start on her optical tweezers setup. We began by measuring the reflectivity of the dichroic mirror (reflects red light, transmits other wavelengths) that she'll be using, and we noticed some interesting spots on the mirror (which Marty later explained as a product of scattering when light was directed at the "wrong" side of the dichroic mirror). Afterwards, I did some brainstorming with Dr. Noé about possible ways to raise the laser up to the height that she'll need the beam to be at.

At the end of the day, I was able to finish up my page for the square aperture diffraction mini project that Rachel and I worked on last week.


Monday 8 July 2013

The hunt for an SLM continues!

Marty heard back from Prof. Pearson at Dickinson about the spatial light modulators that he’s worked with. Evidently the Holoeye LC-R 2500 (discontinued now) and Holoeye Pluto NIR-II (trial model) both had significant phase-to-amplitude leakage. If I understand correctly, it seems that the issues he describes were due to the coupling of phase and amplitude modulations.

Dr. Noé had a long discussion with Kiko Galvez about the spatial light modulators that they use in their optics labs at Colgate University. He suggested we go with the Hamamatsu; however, he also pointed out a very low cost SLM kit (the SDE1024) from Cambridge Correlators. (I had actually used information on optical correlators from this company's website when I was researching the various uses for SLMs.) The device is a TN LCOS and doesn't have a very substantial phase depth (only 0.8π at 633 nm). However, if we were just using it for amplitude modulation purposes (e.g. to explore diffraction through various apertures or masks), I think this inexpensive spatial light modulator would be a great addition to the LTC.

Kathy has been doing a lot of literature research on optical tweezers, and it seems that she has a lot of good ideas for potential projects. The first step will be to rebuild the inverted optical tweezers setup that Hamsa used (later we could also try making tweezers from optical vortices). We'll need to track down all of the parts, clear off the table with the microscope, and get the laser set up. Even before any of this, a good introductory activity that I helped get her and the other Simons students started on was to (1) practice creating a beam expander (which should help with the estimation problem for this week!) and then (2) profile the resulting beam with a photodiode. I then had them plot their data and fit a Gaussian to the curve.

Marty came in and explained that the beam could be cleaned up by putting an iris diaphragm at the focal point between the two lenses of the beam expander and also cleaning the lenses themselves. (The profile that the students graphed had a strange double peak; even by eye, you could tell when looking at the beam that it was a little messy). Marty also said that he would give us all a little lesson on how to properly clean lenses at some point later this week.


Friday 5 July 2013

This morning I reviewed the “homework” estimation problem with the Simons students - Kevin and Kathy had both estimated 1 × 10^6 m^3, William was also on the 10^6 order of magnitude, and I estimated a high-10^5 volume. It's great to see that we were all relatively close! (Kevin wrote up a very nice description of how he did the problem in his July 2nd journal entry). The problem for next week is as follows:

If we use a red HeNe laser and Stony Brook’s Umbilic Torus as our aperture:

  1. What are the focal lengths of the lenses you would need to expand the beam enough to fill this aperture?

  2. How far would we have to go to see the far-field diffraction pattern?

Now that Kevin had his camera with him in the lab today, we redid the diffraction setup and again sent the laser beam through a square and a triangular aperture (separately) to observe the pattern across the lab on the door. These photos came out slightly better than the ones on Rachel's phone, though it was still hard because the camera doesn't have as large a dynamic range as the human eye. The extra-bright center of the diffraction patterns made it hard for the camera to register the less intense side lobes.

Rachel caught a minor mistake in the calculation we did with Marty yesterday – we had used the equation for the distance to the first minimum in the square aperture's intensity pattern on the door, but had incorrectly multiplied our answer by 2 instead of 3/2 to find the location of the first maximum after the central one. Our calculated value is now about 9 mm.

We also re-measured this distance experimentally and found it to be about 9 mm! (Note: this measurement was made just by marking the distance by eye, so there is some uncertainty that we would need to account for in a further study; but since this is just a mini-exploration of diffraction, our measurements aren't that precise.)
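As a further sanity check, we can invert the corrected formula (first side maximum at roughly (3/2)λL/a for a square aperture of side a) to see what screen distance the ~9 mm measurement implies; the implied distance is my inference, not a value we measured.

```python
# Inverting y1 ~ (3/2) * lambda * L / a to solve for the screen distance L.
lam = 632.8e-9   # HeNe wavelength [m]
a = 1.4e-3       # square aperture side [m]
y1 = 9.0e-3      # measured first side-maximum position [m]

L_screen = y1 * a / (1.5 * lam)   # implied aperture-to-door distance [m]
print(f"{L_screen:.1f} m")        # on the order of ten meters, i.e. across the lab
```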

I decided to make a separate page to show the calculations and setup, which I’ll finish up over the weekend.


Wednesday 3 July 2013

Today we got to start some hands-on work – recently, Rachel has been reading about elastic light scattering and how various bacteria colonies can be identified by their diffraction patterns; she therefore wanted to start by observing the diffraction patterns caused by different shaped apertures. We used the red HeNe in the back room with three different apertures: circular with diameter 200 μm, square with side 1.4 mm, and triangular with side 1.7 mm (the square and triangular ones were originally from David's project).

At first we set it up somewhat crudely, trying to bend the laser light around a little bit in order to avoid disturbing Stefan’s setup. We were able to observe the Airy pattern, but the others didn’t look that great. I gave a brief explanation to the students about pinhole diffraction, Fourier optics, spatial frequencies, and near versus far-field diffraction, and I also mentioned the mini project I did last summer.

After lunch Marty stopped by and helped us set up a beam expander for our setup; we then turned off all of the lights and projected each aperture's diffraction pattern across the lab to the far door. It was pretty neat to see the square and triangular aperture patterns – the square aperture's diffraction pattern had four-fold symmetry, with a very bright square in the center and smaller, less intense squares coming off in a cross shape; the triangular aperture's had a bright triangular shape in the center and two layers of three-fold symmetric arms coming off of it. We were unable to take pictures of these, but below are examples from outside sources that look similar to what we saw. Hopefully on Friday we can use Kevin's camera.

Marty also suggested calculating the distance to the first minimum and comparing that to what we observed experimentally. While the square aperture's diffraction pattern was projected across the room, we marked where the central maximum and first-order maximum were on a sheet of paper. Since these apertures were relatively large, we first calculated the distance we'd have to be at in order to see Fraunhofer diffraction; afterwards we calculated where the first side maximum should be, and it was pretty close to our measurement!
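For reference, the usual far-field condition z ≫ a²/λ (the exact prefactor varies by textbook) gives roughly these scales for the three apertures we used:

```python
# Rough Fraunhofer (far-field) distance scale a^2/lambda for each aperture.
lam = 632.8e-9   # HeNe wavelength [m]
apertures = {"circular": 200e-6, "square": 1.4e-3, "triangular": 1.7e-3}  # sizes [m]

z_far = {name: a**2 / lam for name, a in apertures.items()}
for name, z in z_far.items():
    print(f"{name:>10s}: a^2/lambda ~ {z:6.2f} m")
```

This makes it clear why the circular pinhole pattern was easy to see nearby, while the millimeter-scale apertures needed to be projected clear across the lab.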

[Note: these calculations are in the next journal entry, after we fixed a couple of minor corrections]


Tuesday 2 July 2013

I started off the morning by giving a brief introduction to estimation problems (aka Fermi problems) for the high school students and walking them through a simple example: “How tall is a stack of a trillion one-dollar bills?” I explained the basics of how to approach these types of problems, and together we estimated that it would be about 2 × 10^5 km high. The final important thing to do with estimation problems is to make the answer relatable. For instance, 10^5 km is somewhat abstract – it's such a large number that we can't really comprehend what it means. We decided to compare it to the circumference of the earth and found that our stack of bills would wrap around the earth about 5 times!
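The arithmetic, with the bill thickness as an assumed round value:

```python
# Stack-of-a-trillion-bills estimate (thickness is an assumed round number).
thickness_m = 0.2e-3    # ~0.2 mm per bill (assumption)
n_bills = 1e12
stack_km = n_bills * thickness_m / 1000   # height of the stack [km]

earth_km = 4.0e4                          # earth's circumference, ~40,000 km
print(stack_km, stack_km / earth_km)      # 2e5 km, ~5 wraps around the earth
```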

The “homework” problem I gave them for this week was to figure out “What is the volume of rubber worn off of all the tires in the U.S. in one year?” in cubic meters. This is a problem I had to work on for my senior seminar class last semester. I think it's a good starting estimation for them to make, since it has multiple parts to account for but isn't overly complex. In the upcoming weeks, I'm going to try to develop some questions that incorporate optics topics. I also created a new page to keep a running blog of the problems and results we work on together.

I also finished my Introduction to Spatial Light Modulators page. Hopefully this will be a valuable resource for LTC students to use once we purchase an SLM.

An interesting article that I started to read, Spatial amplitude and phase modulation using commercial twisted nematic LCDs, used a spatial filtering technique to combine four neighboring pixels into one “super pixel.” This method decoupled the phase and polarization (for amplitude) modulations (which are normally coupled in LC SLMs), and the researchers were able to very precisely modulate the phase and amplitude of each of these super pixels.

Dr. Noé and I spoke about the various events, talks, and visits for the coming weeks – there’s a lot we’ll need to squeeze in before the summer is up! I’m going to start working on an online schedule to keep it all organized. A previous LTC student, Sage, had created a calendar on her webpage, so I’ll probably borrow the format from her.


Monday 1 July 2013

I spent some time in the morning reading over the Simons/LTC Fellows’ journals and providing comments/suggestions. It’s great that each of them avidly records their daily activities, and it’s interesting to hear what they’re learning about. I even learned a few things too (e.g. optical tweezing, anamorphic formatting, sonoluminescence, etc).

I also created an introduction to SLMs page on my website using the information from my presentation plus a little more detail. I'm almost finished - I should be done within the next couple of days. I then reorganized my website and made a Summer 2013 page.


Friday 28 June 2013

This morning we had a white-board talk about double-slit interference. Dr. Noé asked the students to derive a function describing the intensity I(y) of the interference pattern of a wave with wavelength λ on a screen a distance L away from two slits (which were a distance 2a apart), introducing only the binomial approximation and the complex notation for waves. The students made very quick work of the derivation! (Maybe even quicker than Marissa, Jonathan, Ariana, and I did last summer.)
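For the record, here is a sketch of the result that derivation should give, in the entry's notation (slits at y = ±a, screen distance L ≫ a, and dropping an overall constant):

```latex
E(y) \;\propto\; e^{ikr_1} + e^{ikr_2},
\qquad
r_{1,2} \;\approx\; L + \frac{(y \mp a)^2}{2L}
\quad \text{(binomial approximation)}
\\[4pt]
\Rightarrow\quad
I(y) \;\propto\; \bigl|\, 1 + e^{ik(r_2 - r_1)} \bigr|^2
\;=\; 4\cos^2\!\left(\frac{k a y}{L}\right)
\;=\; 4\cos^2\!\left(\frac{2\pi a y}{\lambda L}\right)
```

since r₂ − r₁ ≈ 2ay/L; the bright fringes are therefore spaced λL/(2a) apart.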

We then talked a little bit about LaTeX – Rachel presented a useful equation editor that lets you create images of equations from LaTeX code, with an option to choose each symbol by sight instead of having to remember how to code it. This reminded me of another website I had used, Detexify, where you can actually draw the symbol you're thinking of to find its code. Afterwards I gave a brief introduction to the text-editing program TeXShop and showed how to use some of its important features, with my honors thesis as an example. (I remembered later that TeXShop is Mac-only, so editing may be slightly different in a Windows equivalent such as MiKTeX; however, the symbols and codes are the same for any version.)

I started a resources page where I plan to put links to outside sources that I’ve found useful. So far I posted a few things about LaTeX (e.g. a tutorial from Dickinson, where to download different TeX editors, and where to find codes for mathematical symbols). I’ll continue to update this as the summer goes on.

Dr. Noé picked up on a small typo in my journal entry from Wednesday, where I misspelled Casey’s name with a “K.” I think the mistake stemmed from the fact that this summer I’m working with Casey and Kathy, however back home I know a Kasey and Cathy - so one can see why I might occasionally slip up with the spelling :)


Thursday 27 June 2013

The search for an SLM for the LTC continues! I heard back from a Boulder Nonlinear Systems representative, who gave us a quote (for an SLM designed to provide a 2π phase shift at 633 nm) and a helpful description of how BNS SLMs compare to other companies’. For the XY Nematic Series (reflective) SLM, BNS pays special attention to:

  1. Refresh rate: other companies design their SLMs to function as microdisplays, so “phase ripples” often appear because the refresh rate is too slow to sustain a constant phase value across each pixel

  2. Reflectivity: other companies will often coat the silicon chip directly with a dielectric mirror, overlooking the fact that this causes a series of grooves to appear where the pixel gaps are located, which often produces higher-order diffraction

I also contacted the Dickinson student who made an SLM from an LCD projector for part of his senior research project. He used the AJP article we were looking at and said that it was fairly straightforward to convert the LP1000 projector. However, the pixelation of the LCD panel created a diffraction pattern that interfered too much with the diffraction pattern they were trying to program for the SLM. In the end they decided to put it aside because they couldn’t create any useful output from the device. This is something we should probably consider – maybe there’s a way to make an SLM with the instructions from this article using an LCD panel with a larger fill factor.

An article by Bowman, Wright, and Padgett - "An SLM-based Shack-Hartmann wavefront sensor for aberration correction in optical tweezers" - describes how an SLM could be used as a closed-loop adaptive optics system (functioning as both the wavefront sensor and the corrective element) for estimating and correcting aberrations in holographic optical tweezers (HOT). This is based on the idea of a Shack-Hartmann wavefront sensor: an array of lenslets that focuses a collimated beam into an array of spots; the displacement of each spot is proportional to the tilt of the wavefront at that point (which is integrated to obtain phase information).

Bowman et al segmented the active area of an SLM into an array of circular apertures (each with a different blazed diffraction grating). They then used a lens to focus each aperture to a spot, forming an array on the sample plane. By looking at the distortion of the array (which can be seen by eye!) they were able to estimate the tilt of each region on the SLM. From there they were able to estimate a phase map of the aberration and subtract it from the hologram to correct the wavefront.
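The displacement-to-phase chain can be sketched numerically in a toy 1D version (all numbers here are invented for illustration; the real relation depends on the lenslet geometry and focal length):

```python
import numpy as np

# Toy 1D Shack-Hartmann idea: each aperture focuses to a spot, the spot's
# displacement is proportional to the local wavefront tilt, and integrating
# the tilts recovers the phase map (up to a constant).
f = 10.0                          # hypothetical lenslet/aperture focal length
x = np.linspace(0.0, 1.0, 16)     # aperture positions across the beam
true_phase = 0.3 * x**2           # an unknown smooth aberration (defocus-like)

tilt = np.gradient(true_phase, x)   # local wavefront slope at each aperture
spot_shift = f * tilt               # displacement of each focused spot

# Reconstruction from the measured shifts: shift -> tilt -> integrate.
tilt_est = spot_shift / f
phase_est = np.concatenate(
    [[0.0], np.cumsum((tilt_est[1:] + tilt_est[:-1]) / 2 * np.diff(x))]
)  # trapezoid-rule integration; phase is only defined up to a constant
```

The reconstructed `phase_est` matches the original aberration closely, which is the whole point of the closed loop: the estimated phase map is what gets subtracted from the hologram.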

Dr. Noé pointed out a recent AJP article (submitted Jan 2013): "Reconstructing the Poynting vector skew angle and wavefront of optical vortex beams via two-channel moiré deflectometry." The article talks about splitting an optical vortex beam and sending each arm of the beam through a pair of moiré deflectometers. (Moiré deflectometry is an interferometry technique that uses a pair of transmission gratings to create a fringe pattern that corresponds to the optical properties of the object being tested.) The research group then described how the moiré deflectogram revealed a relation between the skew angle of the beam’s Poynting vector (directional energy flux density) and its l value (topological charge). Last summer when Jonathan did his project on optical vortices and Ariana was looking into moiré patterns, we had thrown around the idea of trying to combine the two projects; if only we had followed through!
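The fringe formation itself is easy to simulate: superposing two identical binary gratings at a small relative angle produces coarse beat fringes with spacing on the order of period/angle. A toy sketch (invented parameters; this shows only the moiré effect, not the deflectometry measurement):

```python
import numpy as np

# Two identical binary transmission gratings, one rotated by a small angle.
# Their combined transmission shows coarse moire fringes (spacing ~ period/angle).
period = 10.0     # grating period, in pixels (illustrative)
angle = 0.05      # relative rotation, radians (small)
y, x = np.mgrid[0:512, 0:512]

g1 = np.sin(2 * np.pi * x / period) > 0        # grating 1: transmit / block
xr = x * np.cos(angle) - y * np.sin(angle)     # coordinates rotated by angle
g2 = np.sin(2 * np.pi * xr / period) > 0       # grating 2, slightly rotated

moire = g1 & g2                  # combined transmission
profile = moire.mean(axis=1)     # row averages reveal the slow beat fringes
```

With period 10 px and a 0.05 rad rotation, the beat fringes repeat roughly every 200 px, so the 512-row profile swings between nearly opaque and half-transmitting.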


Wednesday 26 June 2013

Today we had our pizza lunch presentations, and there was a pretty good turnout. I gave my talk on “An introduction to spatial light modulators,” which I’d like to turn into an informational page on my website. Stefan and Casey discussed previous research projects on optical vortices and HeNe laser modes, respectively, Rachel explained how light scattering patterns can be used to identify various colonies of bacteria, Kathy and William gave very informative talks about optical tweezers and 3D effects in film, respectively, and Kevin described some of the research he had done at the University of Minnesota.

Afterwards, we talked with Dave Battin in the LTC. He showed us his Pico Projector - a pretty neat pocket projector that he was able to hook up to his iPhone. We also talked about spatial light modulators and the possibility of making our own, since Dave knew someone who had done the electrical wiring for a similar project.

There’s an AJP article (from Huang et al) that gives detailed instructions on how to turn a low-cost LCD projector into a spatial light modulator. The article gives a brief introduction to transmissive TN LCD SLMs and describes amplitude modulation with a different polarizer configuration than Boruah’s AJP article: Boruah explains that, due to the orientation of the polarizers at the entrance and exit faces of the LCD, light is blocked when there is no electric field, whereas Huang et al use polarizer orientations that block the output light when there is an electric field. The underlying principle is the same in both cases: the TN liquid crystals rotate the polarization of incident light when there is no electric field.

Huang et al mention that they start with an Infocus LP 1000 LCD projector that uses SONY LCX017AL LCD panels. I looked up the datasheet for these LCD panels and found no information on the pixel fill factor. This is an important parameter that will affect the overall effectiveness of the device. At Dickinson, one of the other physics majors actually turned an LCD projector into a spatial light modulator for his senior research project. I vaguely remember that he was having issues with extra diffraction patterns because of the gaps between the light-sensitive areas of the pixels, so much so that he wasn’t able to get a clear output (which he mentions in our class blog). However, I heard about this before I had a good understanding of how SLMs work, so I’ll have to contact him to find out more.

I also spent some time cleaning up my Linux directories by making sub-directories to organize images and files from my Bessel beam project, Airy beam mini project, research journal, and SLM research.


Tuesday 25 June 2013

Today I worked on my spatial light modulator presentation. Since these talks are meant to be informal and encourage conversation, I’m just putting together a few slides to help organize what I’d like to say. I figured the best way to communicate what I had read about would be to first introduce some applications, describe the basics of what an SLM is, go into more detail about electrooptical liquid crystal SLMs, and then explain how a couple of example models work.

The two examples I’m going to talk about are an optically-addressed PAN SLM and an electrically-addressed TN SLM. I’ve already read a lot about the latter, but I decided to look more into how the reflective, optically-addressed PAN works. There was one particularly useful article in Optical Review, Phase Modulation Characteristics Analysis of Optically-Addressed Parallel-Aligned Nematic Liquid Crystal Phase-Only Spatial Light Modulator Combined with a Liquid Crystal Display.

Rachel told us about a great website for making diagrams/charts/schematics called Creately, so I’m using that for most of the figures in my presentation. For example, I was able to create this one to describe how a parallel-aligned nematic liquid crystal cell works:


Monday 24 June 2013

This morning there was a breakfast for the Simons program, where we had the chance to meet the three Simons/LTC Fellows who will be working in our lab – Kathy, William, and Kevin. Afterwards we invited the students and their families to the LTC for a few demonstrations (which included activities with the pig toy, the optical fiber bundle, polarizers, the oscilloscope, the large interferometer setup, 3D glasses, and finished with using magnifying glasses to burn black paper in the sun). We then had a welcome lunch in the Simons Center Café with Marty. The afternoon was spent talking about (1) titles for our lunch meeting talks this Wednesday, (2) how to keep a good lab notebook (all of the details can be found on my Advice on notebooks page) and (3) how to edit the students’ web pages (Rachel made a very useful Linux guide for beginners).

On an unrelated note, Marty mentioned an exhibit currently at the Guggenheim by James Turrell, which explores perception, light, color, and space. Among other works of his on display, Aten Reign is particularly interesting; it’s an installation in the main rotunda of the museum that transforms both the natural and artificial light in the space. Using hundreds of LEDs and a few large concentric fabric circles suspended from the ceiling, Turrell created a mesmerizing display of light that plays with the viewer’s experience of both space and time by means of the Ganzfeld effect – a phenomenon in which a person exposed to a uniform field of color experiences a loss of visual perception and/or hallucinations. The exhibit runs through September, so I may try to visit on a free weekend or at the end of the summer!

As far as the hunt for an SLM: the Holoeye representative I’ve been emailing with said that we would be able to achieve about a 1.5π phase shift with a 632-nm laser, and that unfortunately they do not sell the discontinued devices. I also contacted Boulder Nonlinear Systems about their two-dimensional spatial light modulators, just to see if that’s another option for us.


Friday 21 June 2013

I looked over the LC 2012 user manual – most of the information was already online, but there were some important passages about the connection scheme and sequence for powering up the device (i.e. HDMI cable, then power cable, then USB). An AJP article from 2009 actually went into detail about using the LC 2000 (basically the same as the LC 2012, just with a lower resolution) with a 632-nm laser: Dynamic manipulation of a laser beam using a liquid crystal spatial light modulator, by B. R. Boruah. The article described the theory behind TN LC (twisted nematic, liquid crystal) cells and how polarizers on either end can achieve amplitude and phase modulations; they use a computer generated holography technique to achieve the desired optical wavefronts. Boruah also mentions one key issue stemming from the LC 2000’s poor fill factor (around 50%) - there are optically inactive gaps between adjacent pixels, therefore even when a uniformly bright or dark image is sent to the cells, there will be extra diffraction orders present besides the 0th order.
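Boruah’s fill-factor point can be illustrated with a toy 1D model: tile a partly-transmitting pixel cell across the aperture and look at the far field (an FFT). Even for a uniformly bright image, power shows up at the spatial frequency set by the pixel pitch. All numbers below are illustrative, not the LC 2000’s actual geometry:

```python
import numpy as np

# Each pixel of pitch `pitch` is only partly optically active, so even a
# *uniform* image behaves like a grating and produces extra diffraction orders.
pitch = 8                 # samples per pixel (arbitrary units)
fill = 0.5                # ~50% fill factor, as reported for the LC 2000
n_pixels = 64

cell = np.zeros(pitch)
cell[: int(fill * pitch)] = 1.0       # active area transmits, gap blocks
aperture = np.tile(cell, n_pixels)    # "uniformly bright" image across the SLM

far_field = np.abs(np.fft.fft(aperture)) ** 2
# Power appears not only in the 0th order (index 0) but also at multiples of
# n_pixels, i.e. at the grating frequency set by the pixel pitch:
zeroth_order = far_field[0]
first_extra_order = far_field[n_pixels]
```

Raising the fill factor toward 1 shrinks the gap grating and suppresses these extra orders, which is exactly why a larger fill factor would help the projector-conversion idea above.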

Something else that might be useful to consult is Experiments with the HoloEye LCD spatial light modulator from an MIT research group, and I want to look more into the birefringence of liquid crystals as well.

I contacted Holoeye to ask about possibly purchasing discontinued devices and to double check that the LC 2012 would be able to appropriately phase modulate a 632 nm HeNe beam for our desired applications (optical vortices, higher order Bessel beams, etc). Dr. Noé also suggested that we contact Boulder Nonlinear Systems, to see what the cost and estimated delivery times are for their SLMs.


Thursday 20 June 2013

After discussing my Advice on Notebooks page with Dr. Noé, I added a couple more things and made it a little more specific to researching in the LTC. Hopefully future students will find it useful :)

I emailed back and forth today with Holoeye about their SLMs. The representative I spoke with suggested we look at either the PLUTO, LETO, or LC 2012 models for the applications I had described (vortices, Bessel beams, etc). The PLUTO and LETO SLMs were both expensive, reflective PAN LCOS models (liquid crystal molecules aligned in parallel nematic on silicon), whereas the LC 2012 was a more reasonably priced, transmissive TN model (liquid crystal molecules in a twisted nematic alignment). In addition to the individual SLMs, there’s also an OptiXplorer educational kit, which includes an LC 2002 SLM, laser, polarizers, and various software for completing 6 experiment modules:

  1. Using the SLM as amplitude modulator for image projection experiments

  2. Figuring out the parameters of the SLM’s TN-LC cells by measuring their Jones matrix (see below) components

  3. Creating beam-splitter gratings with the SLM

  4. Using Ronchi gratings for measurement of the phase modulation of the SLM

  5. Computer generated holograms (with lens and prism phase functions)

  6. Interferometric fringe-shift measurement of the phase modulation of the SLM

I think our best bet might be to purchase the individual LC 2012, which seems to be a good fit and price range for the kinds of experiments we’d be doing in the LTC. I looked up some of its specifications and found:

  1. it can perform a maximum 2π phase shift for 532-nm light
    (1π phase shift for 800-nm light)

  2. it has a resolution of 1024x768 with a pixel pitch (size) of 36 μm and a 58% fill factor (the ratio of the light-sensitive area of a pixel to its total area). [A useful resource about pixel size and sensitivity can be found here.]

  3. it can be addressed simply like an external monitor using a standard DVI/HDMI interface or more complexly with the USB interface and SLM software input

The Holoeye SLM software seems very impressive. Among various other abilities, it can program the LCD to act as an aperture (e.g. rectangular, circular, single slit, double slit), a lens (e.g. Fresnel zone, Axicon), or create an image representation of a vortex phase! I found a useful source from MIT about experiments that can be done with the transmissive SLM from Holoeye.
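The vortex phase pattern, in particular, is simple to generate by hand: it’s just the azimuthal angle times the topological charge, wrapped to 2π and mapped to gray levels. A minimal sketch (assuming the LC 2012’s 1024x768 resolution; this is my own illustration, not Holoeye’s software):

```python
import numpy as np

def vortex_phase(width=1024, height=768, ell=1):
    """8-bit grayscale image of an optical-vortex phase; ell = topological charge."""
    y, x = np.mgrid[0:height, 0:width]
    theta = np.arctan2(y - height / 2, x - width / 2)   # azimuthal angle about center
    phase = (ell * theta) % (2 * np.pi)                 # phase ramp, wrapped to [0, 2pi)
    return np.uint8(phase / (2 * np.pi) * 255)          # map phase to gray levels

img = vortex_phase(ell=2)   # charge-2 vortex phase hologram
```

Displayed on a phase-modulating SLM, an image like this imprints the e^(iℓθ) phase that turns a Gaussian beam into an optical vortex.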

As I mentioned above, one of the experiments from the OptiXplorer kit dealt with the “Jones matrix” of the TN-LC cells. I wasn’t really familiar with what that meant, so I quickly looked up Jones calculus. Basically it’s used to figure out the resulting polarization of light (which was already fully polarized) emerging from an optical element. Jones vectors describe polarized light (e.g. linear x, linear at an angle, right circular, etc), whereas Jones matrices represent various optical elements (e.g. lenses, beam splitters, mirrors, polarizers, etc). Therefore, to find the resulting polarization of light, you simply operate on the Jones vector of the incident light with the Jones matrix of the optical element it passed through.
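A minimal numerical sketch of that recipe (the vectors and matrices below are the standard textbook forms, not anything specific to the OptiXplorer experiments):

```python
import numpy as np

# Jones vectors for fully polarized light
horizontal = np.array([1, 0], dtype=complex)                 # linear x polarization
diagonal = np.array([1, 1], dtype=complex) / np.sqrt(2)      # linear at 45 degrees

def linear_polarizer(theta):
    """Jones matrix of an ideal linear polarizer with its axis at angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c * c, c * s], [c * s, s * s]], dtype=complex)

quarter_wave = np.array([[1, 0], [0, 1j]], dtype=complex)    # fast axis along x

# Operate on the incident Jones vector with the element's Jones matrix:
out = linear_polarizer(np.pi / 4) @ horizontal   # x-polarized light through a 45-degree polarizer
circ = quarter_wave @ diagonal                   # 45-degree light becomes circular polarization
```

The first result is 45-degree polarized light at half the amplitude; the second is circular polarization, since the quarter-wave plate delays one component by 90 degrees.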


Wednesday 19 June 2013

I continued reading about SLMs today from two different sources: Hamamatsu LCOS-SLM information sheet and Liquid Crystal Spatial Light Modulators in Optical Metrology. [Also, for future reference, I found a useful technical glossary from Applied Materials, which helped with some of the acronyms I came in contact with.]

During our pizza lunch meeting, Marty brought up an article he recently read about how researchers were trying to switch up how arithmetic is taught in elementary school so that students aren’t simply memorizing answers, but rather learning how to figure them out. It immediately reminded me of a discussion I had had in my Introduction to Discipline-Based Education Research course (Fall 2012 at Dickinson) about how math is traditionally taught, but I couldn’t quite recall the details… After looking over the blog our class had kept, I quickly figured out that I was thinking about an article by Richard Skemp, Relational Understanding and Instrumental Understanding.

Instrumental understanding involves the memorization of rules and mathematical situations to come to an answer; equations and theorems are simply means to an end (for example: memorizing a table of the sine and cosine of special angles). On the other hand, relational understanding is achieved through conceptually appreciating the math behind a certain result, through careful selection of a method to figure out the answer (for example: using a unit circle and special triangles to figure out the sine and cosine values). It’s hard to say which method of learning is better: relational understanding clearly appears to be the more useful type of comprehension; however, instrumental understanding can make complicated problems easier to work through in a less discouraging amount of time. If I become a teacher, I will probably try to incorporate both.
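As a tiny concrete version of the sine/cosine example (my own illustration, not from the Skemp article), the special-angle values derived from the 45-45-90 and 30-60-90 triangles agree with a direct computation:

```python
import math

# "Relational" route: derive (cos, sin) of the special angles from the
# side ratios of the special triangles, instead of recalling a memorized table.
special = {
    30: (math.sqrt(3) / 2, 1 / 2),             # half of an equilateral triangle
    45: (math.sqrt(2) / 2, math.sqrt(2) / 2),  # isosceles right triangle, hypotenuse 1
    60: (1 / 2, math.sqrt(3) / 2),             # the 30-degree triangle with legs swapped
}

# Each derived pair agrees with the "instrumental" lookup (played here by math):
checks = all(
    math.isclose(c, math.cos(math.radians(a)))
    and math.isclose(s, math.sin(math.radians(a)))
    for a, (c, s) in special.items()
)
```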

I started brainstorming some points about the lab notebook and research journal that I definitely want to share with the high school students. These are organized on a new page that I created today. I also put a link for it on my main webpage called “Advice for Notebooks” for now, but I’m not sure if I want to keep that label or call it something different.


Tuesday 18 June 2013

This morning we had a mini LTC meeting to plan out the upcoming week because the Simons/LTC Fellows arrive on Monday! There’s a welcome breakfast at 9am for the Simons program, and then we’ll have an LTC welcome lunch at noon. Also on Monday we’ll be distributing supplies (lab notebook, mini notebook?, optics book?), teaching how to keep a good lab notebook, and showing how to access and update their laser webpages. On Wednesday during the pizza lunch meeting, each student will give a 10-minute talk on something they’ve been working on or some other relevant optics topic. (I agree with Dr. Noé, this should be a good ice breaker/get-to-know-you event!) At some point during the week, we’ll need to also show the students around the LTC equipment and explain the lab rules.

We got the printer on my desk to work! Well really all I did was plug it into my Mac and press “print,” but for some reason it doesn’t work that easily with Windows computers. Until we buy a new printer, I’ll just be the intermediate step for students who want to print from the lab. For future reference, it’s an HP Photosmart C4480, and here is the fact sheet with cartridge information.

Since the Holoeye website doesn’t list prices for its devices, I crafted an email asking for these, an expected delivery date, and the possibility of getting an education discount for the LTC. Hopefully we’ll hear back ASAP so that we can order an SLM and have it delivered in time to be used this summer for research.

Dr. Noé showed Rachel and me an online optics textbook written by Prof. Peatross (from Brigham Young University): Physics of Light and Optics. For future reference, the textbook can be found here.

I started making an information page about spatial light modulators. There’s still a lot to add/format, but it’s certainly a start! I spent a lot of time trying to reorganize the information I got from the tutorial article I’ve been reading (since the way they divided their topics wasn’t exactly the clearest way of presenting the information), and soon I’ll also be adding in info from Azure’s paper and other sources. Next week I’ll be giving a presentation on SLMs to the high school students at our pizza lunch meeting.


Monday 17 June 2013

Today I spent most of my time reading more of Two-dimensional spatial light modulator: A tutorial. Some of the things I’ve taken notes on are: (1) SLM applications: optical correlators (very fascinating!), optical crossbar switches (used in broadcasting, also very interesting), digital optical architectures (for parallel computing), and displays (improving cathode ray tube (CRT) systems); (2) modulation mechanisms: mechanical, magnetic, electrical, and thermal; (3) modulation variables: intensity (amplitude), phase, and polarization; (4) addressing modes: optical (with a special detection mechanism) or electrical (which is functionally identical to CRT displays). I’ve been writing everything up in my lab notebook, and tomorrow I plan to start organizing the information on a webpage.


Friday 14 June - Sunday 16 June 2013

Using the Holoeye website, I’ve learned some more about the differences between each of their spatial light modulator models. First of all, there are two types of liquid crystal displays: LCD (liquid crystal display) models transmit the incident light, while LCOS (liquid crystal on silicon) models reflect it. There are then three types of orientations for these microdisplay cells: VAN (vertical aligned nematic), PAN (parallel aligned nematic), and TN (twisted nematic), where nematic refers to liquid crystal molecules oriented in parallel but not necessarily in well-defined planes. (Twisted nematic orientation of the molecules means that there is typically a 45- or 90-degree twist between the top and bottom of the LC cell, with the in-between molecules arranged in a helix-like structure.) VAN and PAN cells can only modulate the phase of an incident beam, while TN cells can modulate phase and amplitude. There are several other distinguishing characteristics of each model (as I mentioned in my journal entry from 13 June: resolution, input image frame rate, phase shift ability based on the incident light wavelength limits, pixel pitch, and size of active area), which I still need to do more research into.

Two-dimensional spatial light modulator: A tutorial has also proved to be a very useful and detailed source for learning more about SLMs. Some of the things that I’ve read about so far: (1) the two overarching types of modulators: electrically-addressed (which use an electrical signal to change a variable associated with the incident beam) or optically-addressed (which use one light beam to change something about another light beam); (2) main functions: analog multiplication (when an optical wavefront amplitude is modified by the reflectivity/transmissivity of the propagation medium), analog addition (when the optical input signals are summed), signal conversion (to various frequencies, incoherent to coherent, etc), and thresholding (creating a binary image from the analog input). After I finish this article (which is pretty long and highly technical), and also look over Azure’s paper and journal, I’m going to organize all of my notes on SLMs into a separate webpage.

Random Find: From a convoluted chain of internet searches and page surfing, I somehow stumbled upon a very intriguing research center led by the anesthesiologist Dr. Hameroff, which applies quantum mechanics to theories of consciousness. As of right now, consciousness is something that scientists still don’t know much about; even in medicine, it’s unclear how general anesthesia actually works to bring patients into and out of consciousness. Some research suggests that conscious thoughts and responses are products of signals to stimuli that are sent backwards in time; using fMRI to monitor brain activity while test subjects were shown a series of neutral or violent images, Bierman and Scholte found that the subjects exhibited a precognitive emotional response up to 4 seconds before the violent stimuli were displayed. While there is still much research to be done to come to more concrete conclusions about consciousness, I still think it’s a very fascinating interdisciplinary field of study.


Thursday 13 June 2013

In the morning, we watched J. Eberly’s lecture “When Malus tangles with Euclid, who wins?” on how the Bell inequalities do not hold up for quantum mechanics. He began with a classical example with coin tossing, in which there were three coins (penny, nickel, and dime) that could land on heads (P, N, D) or tails (p, n, d). He set up an inequality that was always true:

   N(P,n) + N(N,d) ≥ N(P,d)

This is because both components of the total outcome for (P,d), namely N(P,N,d) and N(P,n,d), are already contained in the two terms on the LHS of the inequality. Therefore, the RHS will always be equal to or less than the LHS. Eberly then transposed the same idea into an example with 3 pairs of birefringent calcite crystals (which have a different index of refraction for beams of light with different polarizations), that have x/y, θ/Θ, and ϕ/Φ polarization channels. He created a (seemingly) similar situation to the three-coin example by setting up three experiments with these loops (the details of which can be read in his article).

However, the Bell inequalities only work in a classical world when we know the coin has to land on heads or tails, even if we’re not “looking” at the outcome of that particular coin. In each calcite crystal experiment there was always one loop that we didn’t “watch,” but in our equation we assume the photon must have been polarized either one way or the other. This is incorrect! Since we didn’t make a measurement, the photon existed in a superposition of both states.
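The classical counting step (that any ensemble of definite coin outcomes satisfies an inequality of the form N(P,n) + N(N,d) ≥ N(P,d)) can be checked by brute force; a quick sketch:

```python
import random
from itertools import product

# Brute-force check of the classical three-coin inequality
#     N(P,n) + N(N,d) >= N(P,d)
# (capital = heads, lowercase = tails). Every (P,d) outcome is either
# (P,N,d), already counted in N(N,d), or (P,n,d), counted in N(P,n),
# so any classical ensemble of definite outcomes satisfies it.
outcomes = list(product([True, False], repeat=3))   # (penny, nickel, dime) heads?

random.seed(0)
sample = [random.choice(outcomes) for _ in range(10_000)]

N_Pn = sum(1 for p, n, d in sample if p and not n)
N_Nd = sum(1 for p, n, d in sample if n and not d)
N_Pd = sum(1 for p, n, d in sample if p and not d)

holds = N_Pn + N_Nd >= N_Pd   # True for every classical sample
```

The check only goes through because every toss is assumed to have a definite heads/tails value whether or not we "look" at it; that hidden assumption is exactly what fails for the unwatched photon.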

Dr. Noé asked me to do some research into Holoeye spatial light modulators since he’d like to buy one for the lab. An SLM in general is a device that can modulate (in space and time) the phase, amplitude, or polarization of incident light waves. They have a liquid crystal microdisplay that’s either translucent (LCD) or reflective (LCOS, liquid crystal on silicon).

Holoeye currently has 5 different models: PLUTO (LCOS, phase only), LETO (LCOS, phase only), LC-R 1080 (LCOS, phase and amplitude), LC-R 720 (LCOS, phase and amplitude), and LC 2012 (LCD, phase only). Tomorrow I’m going to read up on each feature of these models (type of LC microdisplay, resolution of pixels, input image frame rate, wavelength limits, size of active area, etc) to figure out which would be the best fit for our research needs.


Wednesday 12 June 2013

I found the AMO lecture this morning very interesting: A roadmap for the production of ultracold polyatomic molecules, by Dr. Sotir Chervenkov. There are two methods that can be used to produce ultracold molecules: indirectly, by combining ultracold atoms (which is only applicable to a few dimers), or directly, by decelerating a sample and filtering out the slow molecules. While the latter has not yet been achieved, this research group aims to combine three successful cooling processes - opto-electrical cooling, a cryogenic buffer-gas source, and mechanical deceleration - in order to come up with a large sample of slow, cold molecules.

Just an interesting note: the opto-electrical cooling portion is done by means of a Sisyphus process, in which a finite well-like potential is created to trap slow molecules; a radiofrequency field is then applied to trap the molecules in a shallower potential. If we wanted to visualize this, it’s as if the molecules are forced to continually “travel uphill,” therefore slowing down. I then found out that Sisyphus was a figure in Greek mythology who was forced to roll a stone up a hill for eternity in the underworld; every time he reached the top, the stone would roll back down to the bottom.

We were fortunate enough to have Prof. Eden Figueroa give us a tour of his lab, which is working on creating quantum systems that can be used to transfer information. The “work horses” (as Prof. Figueroa called them) that will carry this information are photons. Their aim is to create a system without cooling atoms, which is a method used by other research groups; this is because the eventual goal is to shrink down everything to put into a computer.

He used a good analogy to explain the unique capabilities of quantum computing and information exchange: If a regular car hits a fork in the road, it needs to separately travel down each path to gain information about them; however, a quantum car could simultaneously travel down both paths and obtain information about them at once. As far as computing, our systems currently use bits, which can be either 0 or 1; on the other hand, quantum computing would make use of qubits, which can exist in a superposition of 0 and 1 and therefore carry much more information at once.

Their research group is also working on creating a quantum memory, in which the photon containing information can be stored and released later. The lab is overall very clean and organized (it’s also very new, as of February, so they’re still waiting on some more equipment). There are large covered chambers for each piece of the experiment, computer-controlled lasers, and wires and optical fibers running neatly overhead in special enclosures.

At the Pizza Lunch we discussed important dates and the possibility of having a reunion event in which previous LTC students would come back and give talks. While it might be difficult to schedule one date that works for everyone through email, maybe we can create a doodle poll with a couple of weeks of dates and ask all of the nearby LTC veterans to let us know their availability.

Hal decided that this summer he would talk to us about entanglement, an important phenomenon that distinguishes classical and quantum systems, but is rarely covered in undergraduate quantum mechanics classes. He opened up by explaining that quantum mechanics violates our common sense because we live in a classical world, which is completely true. The “sense” that we’ve acquired has come from a collection of interactions and experiences from living in a classical world; therefore we really don’t have any physical intuition about the quantum world. So as strange as it may seem, we have to just accept the fact that, before a measurement is made, a particle exists in a superposition of all possible states.

Tomorrow Hal will show us Eberly’s lecture on the Bell inequalities, in which he proves how the equations do not hold up for a quantum system. I’ve seen the video once before, when Hal showed it to us last summer, however since I’ve now taken a quantum mechanics course and learned a little bit more about this topic, I feel like I’ll appreciate the lecture more. That’s the thing about physics- sometimes you need to read/see/be taught a topic many times over before you actually come to a complete understanding of it. It’s usually difficult to grasp something (especially as complex and strange as quantum mechanics) from just a single exposure to it.

Dr. Noé brought out the 34th book in Encyclopedia Britannica’s series: “Great Books of the Western World,” which contained the works by Newton and Huygens. I flipped through it a little bit and found a few interesting discussions about visible light in Newton’s Optics section. In Proposition 6, he discussed a detailed scheme for figuring out the color and degree of its intensity for a certain mixture using a complex color wheel.

It was a little difficult to follow at first, but interesting once I understood it better. He simply used the geometry of the circle and devised a method of taking into account the different quantities of reflected light in each mixture to characterize the overall hue. In general, the book was filled with very intricate propositions and diagrams.


Tuesday 11 June 2013

Today I focused on reacquainting myself with the Linux environment and updating my webpage. The last time I had really altered anything was back at the end of last summer, and as a result I had forgotten some essential commands. For instance, I had thankfully written down my password in a safe place, but then once I logged in, I realized I had no idea what the next step was! But after rereading some of my old journal entries and googling a few Linux/html help sites, everything started coming back to me. I successfully figured out how to change directories, list all of my files, edit them, upload pictures, etc.

I then restructured my main page such that I could separate out the research I did last summer from the pages that I’ll continue to update this summer in a way that should be easier to navigate. I also created a new bio that briefly talks about my research last summer, how I recently graduated, what I’ll be doing here this summer, and my plans for next year. (I then put the old one on the Summer 2012 page). Finally, I updated my presentations page to include those that I made for my senior research project at Dickinson, and also to add some pictures of the other students who were involved.


Monday 10 June 2013

Today was my first day back at Stony Brook. After quickly moving into my summer housing, the LTC group had a wonderful lunch in the Simons Center Café with Marty, Hal, and a couple of other undergraduates.

We then spent some time cleaning up and organizing the lab before heading over to the AMO Seminar by Dr. Stephan Ritter: “An Elementary Quantum Network of Single Atoms in Optical Cavities.” A quantum network was created between two spatially separated labs (connected by a 60-meter optical fiber) using two single atoms as nodes. Each was trapped in a magneto-optical trap (MOT), transferred into a cavity, and controlled by means of a 3D optical lattice. A single photon was then produced using a vacuum-stimulated Raman adiabatic passage (vSTIRAP). From what I understood, information could then be exchanged, stored, and reemitted between the two labs using the entanglement created locally between the state of the atom and the photon. This research has applications in quantum communication, cryptography, and computing.

Following the talk, we went back into the lab to do a little more organizing. I found an interesting article in Photonics Spectra while sorting through piles of old magazines: “Photonics for Art’s Sake,” by Hank Hogan (June 2007). The article talked about how photonics was used in a variety of ways to preserve and restore works of art. For instance, using what we know about the absorption of different wavelengths of light by certain pigments and materials, we can alter how a painting is illuminated such that it won’t shorten its lifetime. Another example discussed how infrared reflectography could be used to examine the underdrawing of a painting and possibly reveal hidden features; this is because IR penetrates surface pigments, but is reflected by the prepared canvas/wood underneath. While I had already heard about a few of the methods discussed, I still enjoyed reading the article. I’m always intrigued by the connections between physics and art, and conservation science is one of the flourishing fields at this crossroads.



Summer 2012


Friday 3 August 2012

Today was the big day: our final presentations for the REU program. I decided at the last minute before leaving my room in the morning that I would bring my camera, and good thing I did! It turned out no one else had brought theirs, so I became the official photographer for the event. I put up photos from the symposium here. Overall I enjoyed the event. It was interesting to see the outcomes of everyone’s projects, considering that over the past few weeks we had only heard little snippets of what each person was currently working on at our Wednesday REU meetings. I felt like Marissa and I delivered our presentation successfully, and I’ve put up the newest version on my presentation page. Afterwards, Dr. Noé took the LTC group out to lunch at the Simons Center Café, which was a delicious treat, as always. We then went back to the lab to do some cleaning and neaten up the area around our setups. I also picked out a few more tomatoes to bring home.

It was sad saying bye to some of the other REU students, some of whom I may never get a chance to see again.. But I’m sure we’ll stay in touch! Though I didn’t have to worry about saying a real goodbye to the LTC group, since I’ll be seeing them again at the Rochester undergraduate research symposium in October. Despite the fact that the Stony Brook REU program has ended, there are still a few more loose ends to tie up with my Bessel beam project. So as I continue to work on it, I will continue to update my journal.


Thursday 2 August 2012

Today we all busily worked on putting the finishing touches on our presentations and collecting last minute data.

Using the same technique I explained yesterday with the double-lobed ring of light, I compiled an intensity profile of the Bessel beam at a distance z = 340 mm behind the final lens. (As expected, the central spot varies slightly in size and intensity along the axis of propagation.) I averaged the intensity of 10 different radial lines that stretched the full diameter of the beam (which consisted of about 6 concentric rings). The central spot size at this distance was 44.4 microns.


In total, I took 130 more images of the Bessel beam. First I took a series of images behind the final lens, from 50 mm to 750 mm. By taking out the variable polarizer and using a very long exposure time, it turns out the camera can capture what’s going on right behind the lens! (Also, by taking out the variable polarizer, you can see the whole evolution with the naked eye too: not the Bessel beam’s concentric rings, but the ring of light, the two lobes converging and then diverging on either side, and then the bright center of the Bessel beam.) I didn’t worry about making sure there were no overexposed parts of the images in this round, since I took these to qualitatively analyze the evolution of the beam rather than to quantitatively analyze the intensity profiles.

Next, I took a series of images (from 70 mm to 470 mm behind the second lens) of the beam created from a setup with a collimated laser beam going into the OBJ aperture. This was done by using a telescope configuration: I placed a lens with a short focal length right near the HeNe, followed by another of the 333 mm focal length achromats to collimate the magnified Airy pattern. At the 1 mm aperture, the central Airy disk was about 1 cm in diameter.

In front of the focal plane (which was at 293 mm… even with a collimated beam), I did not observe the mirrored Bessel beam formation, as I had when using an uncollimated beam. The center area was clearly brighter, but there was no well-defined central circle; there were also concentric ring-like features. But this looked nothing like the very clear, bright central circle and crisper concentric rings that appeared in the Bessel beam after the focal plane. Interesting.. When I have time, I will also string these photos into an animation and upload it to my webpage.


Wednesday 1 August 2012

Today I worked on creating an intensity profile of the double-lobed ring of light. I used ImageJ to determine a radial average by measuring the intensity across 20 different lines from the center of the ring to the outside, subtracting the background, and then graphing these all together. It took me a while since I had to line up the peaks in each individual line of data, but once finished, I found that the peak-to-peak spacing of the double-lobed ring was 14.8 microns.
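For reference, this radial-averaging step could also be scripted outside ImageJ. The sketch below (Python with NumPy, not the ImageJ workflow actually used here) samples a number of radial lines outward from the ring center, subtracts a constant background, and averages them; the synthetic double-lobed ring, the line count, and the background level are all made-up stand-ins for the real image data.

```python
import numpy as np

def radial_profile(img, center, n_lines=20, n_samples=200, r_max=None, background=0.0):
    """Average the intensity along n_lines rays from the center outward
    (nearest-pixel sampling), after subtracting a constant background."""
    cy, cx = center
    if r_max is None:
        r_max = min(cy, cx, img.shape[0] - 1 - cy, img.shape[1] - 1 - cx)
    radii = np.linspace(0, r_max, n_samples)
    profiles = []
    for theta in np.linspace(0, 2 * np.pi, n_lines, endpoint=False):
        ys = np.clip(np.round(cy + radii * np.sin(theta)).astype(int), 0, img.shape[0] - 1)
        xs = np.clip(np.round(cx + radii * np.cos(theta)).astype(int), 0, img.shape[1] - 1)
        profiles.append(img[ys, xs] - background)
    return radii, np.mean(profiles, axis=0)

# Synthetic stand-in for the camera image: a double-lobed ring on a background.
yy, xx = np.mgrid[0:201, 0:201]
r = np.hypot(yy - 100, xx - 100)
img = np.exp(-((r - 38) / 3) ** 2) + np.exp(-((r - 42) / 3) ** 2) + 10.0

radii, prof = radial_profile(img, (100, 100), background=10.0)
peak_r = radii[np.argmax(prof)]   # radius (in pixels) of the averaged peak
```

Multiplying the pixel radii by the camera's 7.4 µm/pixel scale would then convert the averaged profile into physical units.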


In regards to the brainstorming I had done yesterday, Marty said that the part he’s unsure about is whether the delayed rays of light coming through near the edge of the OBJ aperture become part of the outer lobe in the intensity profile of the ring of light IMG. He suggested the best thing I could do was to email the author of the 4-f paper and see what their opinion on the matter is. It would also be interesting to learn if they saw the symmetric Bessel beam evolution on either side of the ring of light.

During the final LTC Pizza Lunch, Ariana, Jonathan, Marissa, and I presented our research. It was a good way of hearing feedback and practicing for the presentations we’ll be giving at the REU program’s symposium on Friday. I received a lot of helpful suggestions and spent the rest of the day improving my PowerPoint.

After a long day of working, Dr. Noé took us out to dinner at the tavern restaurant of the Three Village Inn, which served really good oysters! Afterwards we went to Pentimento and had dessert while enjoying another great jazz night with Ray Anderson.


Tuesday 31 July 2012

Today I figured out how to make an animated gif using ImageJ by stringing together a series of images. So I created one of the Bessel beam evolving from the ring source. On each image of the animation I included both a 1000 micron scale and the distance that it was taken from the final lens in my setup. At some point when I have free time, I’ll put some of the images and this animated gif up on my webpage.
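For what it's worth, this stack-to-GIF step can also be scripted. The sketch below uses Python's Pillow library rather than ImageJ, with blank placeholder frames and made-up distances standing in for the real image files; the 135-pixel scale bar assumes the camera's 7.4 µm/pixel figure (1000 µm ≈ 135 px).

```python
from PIL import Image, ImageDraw

# Hypothetical frames/distances; the real series was assembled in ImageJ from
# saved camera images (replace Image.new(...) with Image.open(filename)).
frames = []
for dist_mm in range(50, 150, 25):          # distance behind the final lens
    im = Image.new("L", (320, 240), 0)      # placeholder grayscale frame
    draw = ImageDraw.Draw(im)
    # ~1000 micron scale bar: 1000 / 7.4 um/px ~ 135 px
    draw.line([(10, 230), (145, 230)], fill=255, width=3)
    draw.text((10, 5), f"z = {dist_mm} mm", fill=255)
    frames.append(im)

frames[0].save("bessel_evolution.gif", save_all=True,
               append_images=frames[1:], duration=200, loop=0)
```

The `duration` is the per-frame delay in milliseconds and `loop=0` makes the animation repeat indefinitely.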

I went back and carefully readjusted my setup so that all the distances were the exact f distance apart. It was basically only the final lens that needed adjusting, since it turned out to have been a full centimeter too close to the spatial filter. But even after doing this, the ring of light still came to a focus at 293 mm behind the final lens (instead of at the expected 333 mm). So this means that the too-close focal plane was not caused by misalignment of the setup. Maybe it has to do with the fact that the input light beam at the object plane was not collimated? I’ll have to see what happens when I rearrange my setup to incorporate a collimated beam.

I've been doing a little more thinking about illuminating the OBJ aperture with the image of the 150 micron pinhole instead of a collimated beam. As I had figured out with Dr. Noé, illuminating the OBJ with a diverging wavefront means that the light waves propagating through near the edge of the aperture have a longer distance to travel than those in the center, as determined by the Pythagorean theorem (even though the difference is only 1/16 of a wavelength). I think that if you carry this information through the 4-f setup, it ends up helping in the end..

These delayed rays of light coming through near the edge of the aperture become part of the outer lobe in the intensity profile of the ring of light IMG (see the intensity versus radius graph in Fig. 6 of the 4-f paper). If you look at the graphs comparing the ring of light's amplitude and intensity profiles (Fig. 3, (c) and (f) are clearest), you see that the outer lobe of the intensity profile corresponds to the negative lobe of the amplitude profile. Then if you look at the diagram (Fig. 1) that shows the propagation delay between the lobes and how their constructive interference creates the Bessel beam on axis, you can see that the negative lobe is the one that is delayed (again, this can be determined by the Pythagorean theorem).

Therefore, my thinking is that adding the original delay of these outer light rays (from the phase variation of the waves illuminating the OBJ aperture) to the delay caused by the geometry of the diverging thin ring of light (Fig. 1) increases the overall propagation delay between the two lobes, meaning a longer Bessel beam should form.

Saying that the original phase variation helps the setup obviously goes against the need for a "uniformly illuminated annular aperture" to create a Bessel beam. However, as I discussed in my journal entry from yesterday, this spatial filtering method is fundamentally different than the one described by Durnin and Eberly. It depends on the propagation delay between the two lobes of the thin ring to create the Bessel beam. Tomorrow I’ll discuss whether this logic is correct with Marty and try to figure out if it’s actually beneficial to illuminate the OBJ aperture with a phase varying wavefront.


Monday 30 July 2012

In the morning, Marty helped explain the Fresnel/Fraunhofer zones in my setup that I was confused about. As I already understood, the distance between the OBJ aperture and the first lens is subject to Fresnel diffraction. Since the lens is placed one focal length away from the aperture, we now are moved into the Fraunhofer diffraction regime. This continues through the spatial filter to the second lens. Then after the second lens, we are back in the Fresnel zone: the light rays converge towards the focal point in the reversed way that they converged originally from the OBJ aperture to the first lens. After they come to a focus to form the thin ring, they start to diverge again, in a mirrored process. (I feel like this mirrored Fresnel diffraction might explain why I’m seeing a double Bessel beam form before and after the thin ring of light..)

After going back through some of the new photos I took over the weekend, I zoomed in on the Bessel beam ones using ImageJ and counted that the average central spot size is about 5 pixels in diameter (times 7.4 microns per pixel), which translates to about 37 microns.

Dr. Noé called me after he left to discuss the illumination of my OBJ aperture. He says there is a slight phase variation between the light waves passing through the center of the aperture and those passing through near the edge (which have a slightly longer path, as is clear from the Pythagorean theorem). After doing the calculation, we came out with a phase variation of 1/16 of a wavelength across the wavefront, from the center of the aperture to the edge. We can correct this by collimating the beam and/or sending a beam through the OBJ that has a larger diameter. This can be done by using a lens with a really long focal length or by using two lenses in a telescope configuration: one to magnify the beam right after the pinhole and a second to then collimate it. This way we’ll know there’s no phase/intensity variation in the light illuminating the aperture.
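The path-length estimate itself is a one-liner. In this sketch (Python; the wavelength is the HeNe line, but the source distance d is an arbitrary placeholder, not the distance from this setup), the exact Pythagorean path difference between the edge ray and the central ray is compared with the usual small-angle (sagitta) approximation r²/2d:

```python
import numpy as np

# All numbers illustrative: a point source at distance d illuminates an
# aperture of radius r, so the edge ray travels sqrt(d**2 + r**2) while
# the central ray travels d.
wavelength = 632.8e-9      # HeNe, meters
r = 0.5e-3                 # 1 mm OBJ aperture -> 0.5 mm radius
d = 0.1                    # assumed source distance, meters (placeholder)

exact = np.sqrt(d**2 + r**2) - d
approx = r**2 / (2 * d)            # small-angle approximation
phase_frac = exact / wavelength    # path difference in units of the wavelength
```

Plugging in the actual distance from the pinhole image to the OBJ aperture would reproduce the 1/16-wavelength figure from the real geometry.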

On a different note, I finally think I’ve come to a pretty good understanding of how spatially filtering this circular aperture creates a quasi-Bessel beam, and the reason that we have to block some of the high frequencies (even though we are trying to make an edge-enhanced image). It starts from the fact that the amplitude of light going through a circular aperture will resemble a square wave (see Fig. 2c graph from the 4-f paper); with no frequencies blocked, there is a little bit of a Gibbs overshoot at the edge of the aperture (this in turn means that there is always going to be a zero in the diffracted field of the ring source when r = the radius of the OBJ aperture). Now with the spatial filter, it’s clear that the inner diameter controls how many of the low frequencies are being blocked. The outer diameter determines the shape of the edge-enhanced image. The article describes how a ring image with a well-separated double-lobed amplitude is desirable (as seen in Fig. 3c graph). And this will of course still have the zero at r = radius of OBJ. But if the amplitude lobes are well-separated from the zero, this means there will be a propagation delay between them (as seen in the diagram in Fig. 1).

It is the propagation delay between these two amplitude lobes that causes a zone of constructive interference on axis, aka the Bessel beam! (If there were no propagation delay, there would be a zone of destructive interference, and no Bessel beam would form). So, we want a large separation between these distinct amplitude lobes. We can achieve this if we diminish the overshoot at the edge of the aperture. (A useful connection I made to understand this is visualizing the Fourier series needed to fit a square wave with a summation of sines and cosines. The more sinusoidal terms you use, the closer the overall curve gets to a square wave; however, the Gibbs overshoot at the jump never disappears: it narrows, but its height stays at roughly 9% of the step.) To do so, we limit the number of high frequencies that are allowed through in the Fourier plane, aka using a spatial filter with an outer diameter limit.
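That square-wave analogy can be checked numerically. In this sketch (Python/NumPy, purely illustrating the analogy, not the optics), partial Fourier sums of a unit square wave show that the peak overshoot sits near 1.18 (about 9% of the full jump from −1 to 1) whether 50 or 500 harmonics are kept; adding terms only narrows it.

```python
import numpy as np

def square_partial_sum(x, n_terms):
    """Partial Fourier sum of a unit square wave (odd harmonics only)."""
    s = np.zeros_like(x)
    for k in range(n_terms):
        n = 2 * k + 1
        s += (4 / np.pi) * np.sin(n * x) / n
    return s

# Sample densely just to the right of the jump at x = 0.
x = np.linspace(1e-4, 0.5, 20000)
peak_50 = square_partial_sum(x, 50).max()    # overshoot with 50 harmonics
peak_500 = square_partial_sum(x, 500).max()  # overshoot with 500 harmonics
```

Both peaks come out near 1.18 even though the target square wave has height 1, which is the Gibbs phenomenon.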

So as the thin ring of light diverges, the double lobes of the amplitude continue to be separated by the same propagation delay, but they begin to spread too as you move along the z-axis, creating a line of constructive interference on axis. Eventually at a certain distance away from the ring source, the lobes have diverged to the point where they now destructively interfere on axis, which corresponds to the end of the Bessel beam. This also explains why you can’t include an extra lens at the end of the setup to collimate the thin ring of light, as Durnin and Eberly did. Collimating the thin ring would prevent these two lobes from diverging at all, meaning they destructively interfere on axis from the start and you don’t get a Bessel beam.

Just as a final note, I reworked my abstract with Dr. Noé and Marty for a long time today. It’s sounding much more concise and almost ready to submit. I now have a separate page on my website for it.


Friday 27 July 2012 - Sunday 29 July 2012

Over the weekend I took a bunch more photos of my setup. I started by recording the evolution of the thin ring into the Bessel beam behind the last lens in the setup. This time, I took care to make sure I had at least one photo taken with a 1 ms exposure time for every distance; this way it will be easier to compare one photo to the next just by looking at them. This was done with the initial 150-micron pinhole, imaged at a magnified size of 2.6 mm and sent through the 1 mm OBJ aperture, with spatial filter outer and inner diameters of 20 mm and 6 mm.

Next I tried using the Airy pattern to illuminate the 1 mm OBJ aperture (in other words, with the lens removed from in front of the 150 micron pinhole), but it didn’t seem like there was enough power in the beam by the end of the 4-f setup. I probably will need a smaller inner diameter for the filter or use a larger aperture, namely the 2.4 mm washer hole. Either way, it would call for a lot of realigning, so I figured it would be smarter to finish all the photos with this setup before changing things around.

I next took a progression of photos along the 4-f setup to (a) image the Fresnel diffraction zone between the object and first lens, (b) image the Airy pattern between the first lens and spatial filter, and illustrate the effect of using an annular filter by taking photos behind the Fourier plane (c) with and (d) without it in place. The one snag that I hit was that once the computer memory filled up, it didn’t notify me, but instead continued to “save” my photos. It wasn’t until I had finished that I realized the second half of the images were “0 bytes” and couldn’t be opened. So I had to spend some time redoing these…

I decided to take a break and work a little more in Beam2. I have my setup basically all coded out with the correct measurements in centimeters and the rays programmed correctly as far as I can tell. (I used the Lensmaker’s formula to figure out the necessary “curve” value for my lenses, 0.03, and measured the width and diameter to be 0.6 cm and 4.0 cm respectively.) But it seems like the rays in the diagram cross/come to a focus before the final image plane in the setup.. I tried making a thinner lens, using a smaller focal length, and even throwing off the distances of a few of the optical elements. But in each case, the rays still crossed slightly short of the image plane. This is something I’ll have to look into more..
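As a sanity check on the Beam2 model, the ideal 4-f geometry can be traced with paraxial ABCD matrices (a quick Python sketch, not Beam2 itself; the ray heights and angles are arbitrary). For a perfect system with all spacings exactly f, every ray leaving an object point re-crosses at exactly one focal length behind the final lens, so if the modeled rays cross early, the lens "curve" value or one of the spacings in the Beam2 file is a likely culprit.

```python
import numpy as np

def prop(d):
    return np.array([[1.0, d], [0.0, 1.0]])        # free-space transfer matrix

def lens(f):
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])  # thin-lens matrix

f = 333.0  # mm, matching the achromats in the setup

# 4-f system: object plane -> f -> lens -> 2f -> lens (rightmost acts first)
M_to_last_lens = lens(f) @ prop(2 * f) @ lens(f) @ prop(f)

def crossing_z(y0, theta):
    """Distance past the final lens at which a ray that left the object plane
    at height y0 with angle theta reaches the image height -y0."""
    y, th = M_to_last_lens @ np.array([y0, theta])
    return (-y0 - y) / th        # solve y + z*th = -y0 for z

# Rays leaving the aperture edge (y0 = 0.5 mm) at several angles should all
# reconverge at z = f = 333 mm behind the final lens.
zs = [crossing_z(0.5, t) for t in (0.002, 0.005, 0.01)]
```

Perturbing one of the `prop(...)` spacings in `M_to_last_lens` shows how a misplaced element shifts the crossing distance away from f.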

Afterwards, I realigned everything so that my setup would work with the central disk of the Airy pattern (which expanded to about 20 mm) going through the 2.4 mm diameter aperture. But the ring of light was a lot messier than it looked with the 1 mm aperture (it now had numerous extraneous bright features in the center, kind of like in the photo on p. 232 of the 4-f paper). The Bessel beam wasn't as well defined, and it only formed after the ring of light came into focus, not before. Overall, it seemed like there was less intensity in the beam as it propagated through the setup. (The messier ring could be due to the fact that the 2.4 mm washer hole didn’t provide as clean an aperture as the 1 mm pinhole.)

I then moved the camera behind the aperture, like I had done earlier, and took a few shots of the Fresnel zone diffraction patterns; similar to before there were areas where the center was bright and areas where it was dark. I tried taking pictures of the beam after the filter but I couldn’t collect enough light to find where the rays were. It’s possible that I need to decrease the size of the inner diameter of the filter since the larger OBJ aperture would create a smaller diffraction pattern in the Fourier plane.

We've all got a very busy week ahead of us!


Thursday 26 July 2012

I spent a lot of time today working with Mathematica, attempting to fit the intensity data for my Bessel beam at a certain distance with the actual intensity equation; however, it wasn’t fitting correctly. When I plugged the variables that Mathematica defined based on my data back into the intensity equation, it did not produce a curve that resembled my data plot at all. So I’ll have to look into this more over the weekend and see what’s wrong with the code I’m trying to use.
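In case Mathematica keeps misbehaving, the same fit could be tried in Python with SciPy. This is a sketch on synthetic data, not the actual measurements; one common failure mode for this model is a poor starting guess for the radial frequency kr, since the first zero of J0 pins it at about 2.405 divided by the central-spot radius.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import j0

# Ideal Bessel-beam transverse intensity: I(r) = A * J0(kr*r)**2 + background.
def bessel_intensity(r, A, kr, bg):
    return A * j0(kr * r) ** 2 + bg

# Synthetic "data": a ~44-micron central spot (radius ~22 um -> kr ~ 2.405/22).
rng = np.random.default_rng(0)
r = np.linspace(0, 150, 400)                               # radius in microns
data = bessel_intensity(r, 100.0, 0.108, 5.0) + rng.normal(0, 2.0, r.size)

# Starting guesses: amplitude from the peak, kr from the estimated spot radius.
popt, _ = curve_fit(bessel_intensity, r, data, p0=[80.0, 0.11, 0.0])
A_fit, kr_fit, bg_fit = popt
```

With a kr guess in the right neighborhood the fit converges; starting far off, the oscillatory J0² model tends to get stuck in a local minimum, which could produce exactly the nonsense curves described above.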

After looking through the sequence of images I had taken of the evolution of the Bessel beam, I marked out the distances where the beam formed and where the rays were focused into a ring. The first beam started forming (as in, there was a central core with concentric rings visible) at about 220 mm and dissipated (when there was no longer a clear central core) at a little less than 270 mm. The ring of light came into focus at 293 mm. Then the second Bessel beam started at about 320 mm and ended at a little under 370 mm. The processes were mirror images of each other! I guess that’s to be expected with ray geometry; however, it was still interesting to confirm it with the experimental results. It’s also curious to note that the ring comes to a focus before the 333 mm mark… This could have been caused by the distances between a couple of the optical elements in the 4-f setup being a little under 333 mm.

When I finish creating a ray-tracing model, I first want to see what the ideal ray trace would look like with each element exactly 333 mm away from the next. I then want to see how much the rays change if one or more of the elements are slightly off of the f distance. It would be helpful to use the model to see the effect of changing the filter size and even the object aperture size. I then want to change the filter width in my setup and see if this affects the formation of the pre-focal-plane Bessel beam.

So I started putting the actual values from my setup into Beam 2 and learned how to include the annular filter. One problem I faced was that the light rays didn’t seem to be affected when going through the lenses because the curvature was so shallow (since a focal length of 33.3 cm yields a radius of curvature of 66.6 cm, and the program asks for the reciprocal of this value, 0.015). So I decided I’m going to need to use much smaller but proportional values in this program.
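One thing worth double-checking here: for a lens (unlike a mirror), the radius of curvature is not simply twice the focal length. Assuming the achromats can be treated as thin symmetric biconvex lenses with index n ≈ 1.5 (an assumption, not a measured value), the lensmaker's equation gives R = f, i.e. a "curve" of about 0.03 rather than 0.015:

```python
# Lensmaker's equation for a thin symmetric biconvex lens:
#   1/f = (n - 1) * (1/R1 - 1/R2),  with R2 = -R1 = -R,
# so 1/f = 2*(n - 1)/R  ->  R = 2*(n - 1)*f.
n = 1.5                      # assumed glass index (not measured)
f = 33.3                     # focal length in cm
R = 2 * (n - 1) * f          # = 33.3 cm when n = 1.5
curve = 1 / R                # the reciprocal value Beam2 asks for
```

Under that assumption the curve value comes out to roughly 0.03, which may be part of why the shallow 0.015 curvature left the rays nearly unaffected.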

I also worked on my abstract for the REU presentations that will be held next Friday; Dr. Noé gave me a few initial suggestions for what to include and what to leave out. It’s difficult writing about my project when I haven’t even finished it yet, and even more so because I’m just starting to analyze the results! However, I’m sure by the end of the weekend I’ll have more concrete conclusions to write about.


Wednesday 25 July 2012

I went through my series of 45 images and cropped them all to about the same size. In this way, I’ve zoomed in but kept them all to the same scale. So it’s now possible to flip through them quickly and see the evolution of the beam. Later in the day I actually put them into a separate PowerPoint presentation and labeled the distance, exposure time, and scale on each photo.

At our pizza lunch meeting, each of us in the LTC gave a short talk on what we had been working on and the current goals we have to work towards. We also had a visit from Dave Battin; he gave a lot of valuable input during each of our presentations. After giving a brief background on Bessel beams and my 4-f setup, I discussed my creation of one of these beams and showed the series of images I had taken; I also explained a little bit about the imaging software I would be using to analyze these images. Marissa then explained her interest in the TAG lens and how she’s still in the process of trying to get a hold of one. Jonathan discussed his MPI and how the shop was finally able to drill one for him (later in the day we actually saw it work!). Ariana gave a very informative talk on moiré patterns (superimposing two patterns to create a new one) and aliasing (which occurs when a camera cannot resolve the detail of a pattern with its pixels). Two things that I found especially interesting: (1) that there are two different kinds of aliasing: temporal (for instance, the wagon-wheel illusion of the spokes turning backwards on a forward-moving vehicle) and spatial (for instance, a zebra against a picket fence appearing to be a white horse or a black horse depending on where you look), and (2) that moiré patterns can be used in navigation (for instance, underwater to alert ships of oncoming hazards, with a hovering pattern that changes depending on whether the hazard is being approached or passed over).

I spent a lot of time later in the day analyzing my sequence of images. I created a surface plot of the intensity of each image, so it was also interesting to see the progression of the beam in this way. I then created a graph/list of data of the intensity across each transverse profile, especially taking note of the maximum value. Finally, I attempted to measure the central spot size and its average intensity over the course of the actual Bessel beam.

With 7.4 microns being the smallest distance that the camera was able to resolve, it was difficult to see the exact boundaries of the central bright spot and its concentric rings; even the small detail of the double-lobed structure of the ring source of light was hard to resolve (as seen below, the intensity profile drastically changed from one radial slice to the next due to the coarse gradient of pixels). I’ll have to work on creating average values of the intensity profiles based on combining data from multiple radial axes.


Interesting to note: it seems that there is a Bessel-like beam that forms both before and after the light is focused into a clear ring, which is something I hadn’t expected to see. I found an article that discusses this phenomenon, so I’ll read it over tomorrow to hopefully gain some insight into the matter.

I briefly looked at how to do ray tracing in a program called Beam 2. I started out just trying to make myself familiar with how the program works, but I’d like to use the software to create my own ray trace diagram of the 4-f setup (like Will had done) and maybe see if I can include the spatial filter in the Fourier plane. This would then be a nice way to model how the ray diagram would change with different sized filters.

Dr. Noé took us out to Pentimento for dinner and to see a jazz performance by Ray Anderson. (The name of the restaurant is actually an art term that refers to when part of a painting has been altered, usually with another layer of paint, due to an artist changing his/her mind. "Pentimento" actually means repentance in Italian.) Between the food and the music, it overall was a fun lab night out!


Tuesday 24 July 2012

I started off in the morning by taking more pictures of the resulting beam in my setup. The images were a little overexposed, but my main focus was just to see which object size and initial pinhole would provide the best results. I started out by changing the initial pinhole to 150 microns (instead of 75 microns); the beam of light coming through was then magnified to 2.6 mm. With the 2.4 mm aperture now being used as my object, I found that the ring source at the end of the 4-f setup was not as clean as when I had used the 1 mm aperture. Additionally, there was no clear evolution of the ring into a Bessel beam. To fix this, I went back and tried to make sure all of my optical elements were neatly aligned. I also included a high-pass filter in the image plane where my ring source was forming to try to clean up the center; in other words, I used a small circular block about the size of the middle of the ring in order to eliminate the extra bright features inside. However, there wasn’t much of a difference in the results. A central bright spot formed in the diffraction pattern, but there were no clear rings.

So I decided to go back to the 1 mm aperture. Of the couple of filters I tried, the central circle that worked best was 6 mm in diameter, and the outer diameter that provided the cleanest ring source was 28.5 mm (I figured this out just by slowly reducing the size of an iris aperture, and not measuring until I saw what produced the best results). The ratio of these filter diameters turned out to be 4.75, which is larger than Kowalczyk’s magic ratio of 3.83. This disparity could have arisen from the different ratios of the sizes/strengths of the optical instruments I used when compared to their setup. However, I plan to look into this more and maybe try some other sized filters..

Dr. Noé showed me some of the special features of the EDC 1000N imaging program I’ve been using to capture pictures. We first looked at how to work in sub-array mode, in which only a portion of the camera screen is used (based on the number of rows and columns of pixels you specify); this could presumably save some memory space, though it’s a little more time consuming trying to make sure the light source remains visible in the cropped portion of the screen.

He then showed me how to check for and correct over-exposure. You can see if the camera has been saturated by using the “tag pixels” option and then checking how many pixels are at the maximum intensity (pure white = 254). This is a good tool to consult while adjusting the polarizer; Dr. Noé even suggested adding in a neutral density filter too. Additionally, by capturing a picture when all the light is blocked, you can use the “tag pixels” option again to see at what value the majority of dark pixels are labeled; then you know how much to lower the initial bias (which will retag those dark pixels at a value closer to zero).

It’s also possible to count the number of pixels to figure out the actual size of features of the transverse pattern. To do so, I looked up the size of the camera’s pixels. I couldn’t find an official specifications sheet for the EDC 1000N model; however, I found two separate sources that confirm each pixel is a 7.4-micron square: a website that lists specifications for many different CCD camera models and a journal article describing research that made use of the EDC 1000N.

I reconfigured my camera setup so that the track for sliding the camera back and forth is more stable. To make measuring easier and more consistent, I taped a ruler down at the foot of the lens. I then placed two neutral density filters (total optical density 0.3) between the laser and the initial pinhole, and moved the polarizer right up against the camera to attenuate the beam. When the captured images were still overexposed, I attached a second polarizer directly to the front of the camera and put the variable polarizer between the laser and the initial pinhole.

When I started to take some more pictures, I was promptly notified by a pop-up that the computer was out of hard drive space… (The computer actually only holds 4 GB of storage total; it’s funny to think how advanced technology has become—even my inexpensive flash drive can hold 8 GB!) It suggested that I empty the recycling bin, which freed up about 4 MB. Then Dr. Noé went through to find some files we could delete and cleared up about 10 more MB for me for the time being.

I took a lot more pictures in small increments to really show how the Bessel beam forms from the diffraction of the thin ring source of light. Afterwards, I transferred all of these image files onto floppy disks and then used another computer that had both a floppy drive and USB drive to transfer the image files onto my flash drive and then onto my laptop. With 4 floppy disks, space for 4 images per disk, and about 50 images to transfer, it seemed like it might be a tedious job, but I had a nice rotational system going that got the job done in under 10 minutes.

I downloaded ImageJ onto my mac for analyzing the images. After playing around with it a little bit, I figured out how to set the 7.4 micron scale (even including a key on the image) and how to graph an intensity profile of a portion of the image. So far I determined the intensity across the transverse profile of the ring source and of the Bessel beam. Pretty exciting stuff! Tomorrow I’m going to analyze multiple images and compare the intensity of the beam and calculations of the size of the central spot and rings. In this way, I’ll be able to see if the central bright spot retains its size and power over the axis of propagation.


Monday 23 July 2012

Today I reconfigured my 4-f setup so that I would have more table space at the end for placing a camera (a rough outline of the layout is pictured below). In doing so, I realized that there are a couple of different parameters I can play around with: the object size (which is related to how much the initial pinhole is magnified) and the spatial filter parameters. To start, I’m sending the laser beam through a 75-micron pinhole, magnifying it to 1.3 mm, and sending it through my object aperture (diameter 1.0 mm). The second possible combination uses a 150-micron pinhole at the beginning, which gets magnified to 2.6 mm and then sent through the object aperture (diameter 2.4 mm).


With my spatial filter in place (currently an inner radius of 8 mm and an outer radius of 30 mm), the setup produces a very clear thin ring of light at the end; afterwards the beam becomes very dim, so it’s hard to see what’s going on, if anything. However! Later in the day, I used the CCD camera (with a polarizer in front of it to attenuate the beam a little) and took some photographs of the resulting beam. After the thin ring of light, it appears that the beam evolves into a Bessel beam! (The pattern has a very bright center, which remains at a basically consistent size over a certain distance.) Tomorrow I’m going to experiment with a larger object size and different filters.

I spent some time with Mathematica trying to model what the intensity of my resulting beam will look like at various distances, but I’m having some difficulty at the moment. I was able to model what the intensity would look like in a setup such as Durnin and Eberly’s, which contains a lens after the ring of light; however, it’s more difficult working with the z-dependent diffraction equation… I also looked around the internet to see if I could come up with a simulation of what the ray diagram would look like with two identical lenses and the spatial filter in between them. It seemed highly involved to create one with Mathematica, but I might try to code it if I have some extra time. (Speaking of Durnin and Eberly: I still, at some point, want to try including a lens at the end of my setup to see what would happen, even though Kowalczyk made it clear that his thin ring source of light is fundamentally different from one that comes from a uniformly illuminated circular aperture.)
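The ideal case, at least, is easy to model: an ideal zero-order Bessel beam has a z-independent transverse intensity I(r) ∝ J0(k r sinθ)², where θ is the cone half-angle of the plane-wave k-vectors. A minimal Python sketch (the HeNe wavelength and the cone angle below are assumed illustrative values, not measured ones):

```python
import numpy as np
from scipy.special import j0  # zero-order Bessel function of the first kind

wavelength = 633e-9           # assumed HeNe wavelength (m)
k = 2 * np.pi / wavelength
theta = 0.01                  # assumed cone half-angle of the k-vectors (rad)
k_r = k * np.sin(theta)       # transverse wavenumber

def bessel_intensity(r):
    """Ideal zero-order Bessel beam transverse intensity (z-independent)."""
    return j0(k_r * r) ** 2

# Radius of the central bright spot = first zero of J0, at k_r * r = 2.405
r_spot = 2.405 / k_r
print(f"central spot radius ~ {r_spot * 1e6:.1f} microns")
```

The hard part, as noted above, is the real z-dependent diffraction integral; this sketch only gives the ideal profile that the measured cross-sections can be compared against.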

Also- this afternoon Dr. Noé brought in some tomatoes for us from the farm stand. They were huge! And also very tasty.


Friday 20 July 2012

Since I had been fairly busy this week putting together the Bessel beam presentation, I didn’t have enough time to regularly keep up with my journal entries, so this morning I spent a lot of time catching up with them.

Using a Mathematica package that Jonathan had shown me, I made a couple of lines of different-sized dark circles to use as high-pass filters (since it was too hard to make a clean edge with whiteout). I then took it a step further and worked out the code to make an array of white annular shapes on a black background. As for the dimensions, I wasn’t sure how the graphic might get resized when printed. Therefore, instead of figuring out exact sizes for the inner and outer radii, I calculated what the relative width of the annulus should be based on the ratio proposed by Kowalczyk.
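The same annulus mask can be sketched in a few lines of Python/NumPy. In the same spirit as the relative-width approach, the radii here are given as fractions of the image half-width rather than absolute sizes (the specific fractions are arbitrary illustration, not Kowalczyk’s ratio):

```python
import numpy as np

def annulus_mask(size, inner_frac, outer_frac):
    """White annulus on black: pixel = 1 where inner <= r <= outer.

    Radii are fractions of the image half-width, so the mask keeps its
    relative annulus width no matter how the graphic is resized.
    """
    y, x = np.mgrid[0:size, 0:size]
    r = np.hypot(x - (size - 1) / 2, y - (size - 1) / 2) / ((size - 1) / 2)
    return ((r >= inner_frac) & (r <= outer_frac)).astype(np.uint8)

mask = annulus_mask(201, inner_frac=0.8, outer_frac=0.9)
# Center is black; a thin ring of white pixels surrounds it.
print(mask[100, 100], mask[100, 15])  # 0 1 (on-axis vs. inside the ring)
```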

Marty then helped me print these shapes onto transparency paper (Transparency Film for Monochrome Copiers LCT PBS 100) with the special printer upstairs. The only issue was that the printed black was highly translucent. We tried adjusting the exposure to correct for this, but even the darkest setting didn’t make much of a difference. So what I ended up doing was layering the pages, which turned out to be fairly effective. I then played around with these different-sized filters in my setup, and a faint ring showed up at the end, which is definitely a step in the right direction!

I uploaded a compressed pdf of the Bessel beam presentation to my website on a new Presentations page. This took a little while because I was unfamiliar with the process of transferring a file to the website with a mac. But I was eventually able to figure out the appropriate secure file transfer protocol commands using this helpful website.


Thursday 19 July 2012

Today was a very long but fulfilling day. It began with the drive into the city, during which Marissa and I did a few more practice run-throughs and took notes on useful last-minute suggestions from Dr. Noé. We arrived at the City College of NY for the 2012 Optical Vortex Party around noon.

Giovanni first gave an introductory talk on optical vortices, explaining the idea behind a light wave having angular momentum. He discussed some of their applications, including communication—for instance, being able to encode information in each level l of topological charge and so send a larger amount of information all together. Giovanni also discussed the Berry phase, in which a vortex acquires a phase from moving along the Poincaré Sphere; in other words, the vortex starts with one phase and undergoes a change based on the geometry of the situation. A useful analogy is that of a cat twisting to land on its feet while falling.

Additionally, there were numerous interesting student talks and poster projects. One CCNY student gave a talk on characterizing Q-plates, which can be used to couple a beam’s polarization to its orbital angular momentum. Another CCNY student discussed how an OAM sorter is able to transform the donut-shaped vortex into a line, separating the OAM states while still maintaining the phase. One of the posters, by a Colgate REU student, was about quantum computing and encoding information in entangled photons by altering their polarization and phase (this allows much more information to be encoded than with traditional binary bits, since there are numerous characteristics of a photon that can be altered).

Kiko Galvez gave a very comprehensive talk on the Poincaré Sphere and the idea of mapping polarization states onto its surface. He discussed how the goal of singular optics is to search for singularities. For instance, an optical vortex has a dark center because it contains all phases, which therefore cancel to zero; the same goes for the research on Poincaré modes and polarization singularities. As a side note, I really liked the quote he opened his presentation with: “Research is to see what everybody has seen and to think what nobody has thought” (Albert Szent-Györgyi).

Afterwards, Marissa and I presented our Bessel beam PowerPoint and then Jonathan presented his Multi-Pinhole Interferometer research. Then what was really cool was that we took a short tour of one of their optics labs and actually had the opportunity to see a spatial light modulator in action.

I think that the Optical Vortex Party was a great experience overall! I very much enjoyed sharing my research and hearing from other REU students who were doing similar or even completely different projects. It was also very valuable forging connections with some of the students and making plans to follow up on the research we each were doing. Before heading back to Stony Brook, Dr. Noé very kindly took us out to dinner at Loi, an authentic Greek restaurant on West 70th Street. Everything was absolutely delicious!


Wednesday 18 July 2012

The weekly Wednesday REU meeting was structured a little differently than usual. Today, instead of each of us presenting to the group what we had been working on the past week, Michal Simon gave us a presentation about his work in astronomy, since it was somewhat related to Jonathan’s multi-pinhole interferometer. The difference is that the pinhole layout for the astronomical device is purposefully irregular (developed through trial and error), whereas Jonathan’s pinholes have to be aligned perfectly in a circle formation. The reason for this disparity stems from the way each device is put to use.

Michal Simon is an observational astronomer who focuses on studying young stars. In order to correct for atmospheric blurring and improve resolution when using a telescope, it’s necessary to use a non-redundant mask. Since no two pairs of holes have the same separation vector, each pair provides a set of fringes at a unique spatial frequency in the image plane. Jonathan, on the other hand, needs the pattern of pinholes to be symmetric because he is looking to use the MPI for uncovering the topological charge of his optical vortex without having to interfere or split up the light beam.
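The non-redundancy condition is easy to state computationally: no two pairs of holes may share a separation vector (a separation and its negative produce the same fringes, so they count as one baseline). A small Python sketch of the check, with hypothetical hole coordinates for illustration:

```python
from itertools import combinations

def is_non_redundant(holes):
    """True if no two hole pairs share a separation vector (up to sign)."""
    baselines = set()
    for (x1, y1), (x2, y2) in combinations(holes, 2):
        dx, dy = x2 - x1, y2 - y1
        key = max((dx, dy), (-dx, -dy))  # d and -d give the same fringes
        if key in baselines:
            return False
        baselines.add(key)
    return True

# A regular, evenly spaced line of holes repeats its baselines...
print(is_non_redundant([(0, 0), (1, 0), (2, 0)]))  # False
# ...while an irregular, Golomb-ruler-style spacing does not.
print(is_non_redundant([(0, 0), (1, 0), (3, 0)]))  # True
```

This is exactly why Jonathan’s symmetric circular MPI layout is redundant by design, while the astronomical mask must avoid repeats.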

Besides this meeting, Marissa and I spent the whole day working on our presentation for the Optical Vortices Party tomorrow. We first added together our individual slides into one PowerPoint and took some notes on how we would make the transitions. At the optics pizza lunch we presented what we had so far to the group. It was very helpful hearing everyone’s comments and suggestions after doing a run-through. The afternoon was spent making corrections, rearranging the slide order, and writing out notes for how we would discuss each slide.


Tuesday 17 July 2012

Marty helped me understand a couple of items from the 4-f setup article a little better. First of all, there was the part about not using a final lens in the setup, which was something Durnin and Eberly had included in theirs. It turns out this has to do with the nature of the thin ring source of light. Whereas Durnin and Eberly simply used an annular aperture, Kowalczyk created the ring through spatial filtering; therefore there’s a zero between the two intensity peaks of the ring source, which would not have been present with an annular aperture.

Secondly, I was concerned about the very small diffraction pattern that I presumed I would have to filter with an equally small high-pass filter (that is, a very small whiteout dot to block the central bright spot). This being the case, I figured I would have to magnify the light beam more and use a larger aperture for the object, so that at the Fourier plane the diffraction pattern would be larger. But Marty helped me realize that the high-pass filter mentioned in the article was actually somewhat larger than the central bright spot of the diffraction pattern. Therefore the problem of my filter being too small was not actually that big of a problem after all.

I combed through the 2003 barcode scanner patent to really try to understand how these devices work. Inside the barrel portion, the beam is created from a laser diode inside a metal channeling tube. After going through a first lens to partially collimate the beam, it is directed through an axicon lens, which transforms the Gaussian beam into a Bessel beam. The beam is then reflected out of the tube by a folding mirror towards a pivoting mirror, which oscillates to generate the scanning movement of the beam. After exiting, the beam comes in contact with the optical code. The ability of the beam to resolve the symbol is limited by the density of the bar code, but more importantly by the working range of the laser beam, which is the distance over which the central spot size of the beam is unaffected by diffraction. The use of an axicon produces a beam that has a constant spot size over a more substantial distance, two or three times the range of a conventional Gaussian beam.

After backtracking through the sources cited, I found the earlier 1992 patent for the original idea of making optical scanners with the axicon element. It’s interesting to note that the chief inventor from both patents (Vladimir Gurevich 2003 and Joseph Katz 1992) was from Stony Brook.

I decided to read through McLeod’s article, The Axicon: A New Type of Optical Element, more thoroughly and ended up learning some more important facts. First of all, he listed various examples of axicon optical elements, such as a conical lens, a narrow annular aperture, and certain hollow objects such as a cylinder, cone, flared reflector, or sphere. For the most part, axicon lenses don’t suffer from chromatic aberration, since each color of light finds its own path through the cone to the image. A telescope that employs an axicon lens is able to simultaneously view, in focus, two or more small sources placed along the same line of sight, because the nearer sources do not block light coming from the farther sources. Another important application is autocollimation, in which the axicon element is used to determine whether a mirror surface is perpendicular to the line of sight.

Dr. Noé, Marissa, and I spent some time in the afternoon clearing off the wooden table in the back room so that I would have more space to expand my 4-f setup. It looks a lot cleaner, and there’s so much space now! I then spent the rest of the day working on my part of the Bessel beam presentation.


Monday 16 July 2012

Prof. Metcalf showed us a very interesting video today: a lecture given by Joseph Eberly (when he received the Frederic Ives Medal in 2010) titled When Malus tangles with Euclid, who wins? It described in fairly simple terms why quantum mechanics violates the Bell inequalities. He started out with a classical example of how one of the inequalities holds true when counting the outcomes of a series of penny, nickel, and dime coin tosses. Eberly then applied the same logic to a photon polarization experiment, in which three types of calcite analyzer loops (calcite being a crystal whose index of refraction depends on the polarization of the incident light) took the place of the three types of coins. He developed the inequality using Malus’s law (the intensity of polarized light transmitted by the analyzer is proportional to the squared cosine of the angle between the transmission axes of the analyzer and polarizer), which turned out not to be compatible with Euclidean trigonometry (specifically after using the cosine identity to simplify); the inequality is also violated experimentally. The problem arose from the fact that the inequality was derived by assuming the existence of definite states of photon polarization, which is incorrect according to quantum mechanics. In actuality, the polarization state doesn’t exist if we don’t observe it; specifically here, we chose not to observe one of the three analyzer loops, so those photons never had a definite polarization along that axis. It is from this extra unknown polarization information that the Bell inequality fails.
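The classical coin-toss step can be checked by brute force. The inequality N(penny H, nickel T) + N(nickel H, dime T) ≥ N(penny H, dime T) holds for every one of the eight definite heads/tails assignments, and therefore for any ensemble of tosses; a minimal Python sketch:

```python
from itertools import product

def bell_count_holds():
    """Classical Bell-type counting inequality for coin tosses:
    N(penny H, nickel T) + N(nickel H, dime T) >= N(penny H, dime T).
    It holds term-by-term for every definite heads/tails assignment,
    and hence for any ensemble of tosses."""
    for penny, nickel, dime in product([True, False], repeat=3):  # True = heads
        lhs = int(penny and not nickel) + int(nickel and not dime)
        if lhs < int(penny and not dime):
            return False
    return True

print(bell_count_holds())  # True for coins; photon polarizations violate it
```

The quantum surprise in Eberly’s talk is precisely that Malus’s-law correlations cannot be reproduced by any such table of pre-existing values.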

I played around with my 4-f setup a little more, specifically by adding in the high and low pass components of the spatial filter. Even after making the necessary annular filter width calculations, the outcome didn’t really appear to be much of anything because the incident beam of light was too small at the Fourier plane to properly be affected by the filter. So I still will need to magnify the incident beam some more, but to do so we’ll have to clear some more space on the table first.

Prof. Metcalf later came into the LTC and asked if we knew what Brownian motion was. After reading a little about it, I learned that it refers to the irregular motion of minute particles of matter (about 0.001 mm in diameter and smaller) in a fluid; this random movement is caused by the thermal motion of the molecules of the fluid. A useful analogy is the erratic motion of a very large beach ball in a stadium of people: because people push the ball in random directions as it reaches them, it gets propelled at various angles around the stadium. Brownian motion was important to the determination of Avogadro’s number and therefore the size of molecules.

Marissa and I discussed our Bessel beam presentation some more: we divided up who would present each part and also started making a PowerPoint from our outline. This led me to start organizing all of the Bessel beam articles I had read and rewriting my notes.


Friday 13 July 2012

Today Dr. Michal Simon took the REU group on a trip to the American Museum of Natural History (http://www.amnh.org/). On the train ride, I read through most of Cheng-Shan Guo’s article, Characterizing topological charge of optical vortices by using an annular aperture. The introduction had a very useful summary of what the topological charge of an optical vortex is (it refers to the orbital angular momentum of the beam) and of previous methods that have been used to determine this value (such as interfering a wavefront with a mirror image of itself, using a Mach-Zehnder interferometer with a Dove prism in each arm, or, as Jonathan’s been reading about, using a multi-pinhole interferometer). Guo, however, sent the vortex beam through an annular slit (of about 1 mm width) and used the fact that the resulting beam retained its azimuthal phase variation to measure the vortex’s topological charge. The aperture was placed in the front focal plane of a lens (f = 240 mm) and the screen was located in the rear focal plane. The number of bright rings in the spatial frequency spectrum of the observed far-field diffraction intensity pattern (determined by taking the Fourier transform of that pattern) was equal to the topological charge of the vortex.

Guo specifically notes that the resulting intensity pattern approximated to the square modulus of a higher-order Bessel beam (the order of which determined by the topological charge of the incident vortex beam). However, the article was not focused on the fact that they had stumbled upon another method to generate a higher order Bessel beam, that is, by way of sending a beam with an azimuthal phase variation through an annular slit. This could be an interesting method to try with the intent of making a Bessel beam since a 1 mm ring aperture is definitely achievable (that’s about the size of the ring I had used when playing around with spatial filtering an Airy pattern, 8 July 2012).
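A rough numerical check of the idea fits in a few lines of Python (grid and slit sizes here are arbitrary illustration): the far field of a plain annular slit is bright on axis, while giving the slit any nonzero azimuthal charge exp(ilφ) forces a dark core, just as a higher-order Bessel profile would have.

```python
import numpy as np

N = 256
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
r, phi = np.hypot(x, y), np.arctan2(y, x)

def far_field_center(l, r_in=40, r_out=44):
    """On-axis far-field intensity for an annular slit carrying charge l."""
    field = np.where((r >= r_in) & (r <= r_out), np.exp(1j * l * phi), 0)
    spectrum = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field)))
    return np.abs(spectrum[N // 2, N // 2]) ** 2

# l = 0 gives a bright on-axis spot; any nonzero charge leaves a dark core.
print(far_field_center(1) < 1e-6 * far_field_center(0))  # True
```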

At the museum, we had the opportunity to go behind the scenes to the staff section, where we had been invited to sit in on the AMNH REU program’s weekly meeting. Four students presented on various astrophysical topics based on their personal interests. Jumari had studied the rover Opportunity, which had landed on Mars in 2004 and recently was required to move to a sun-facing slope known as Greeley Haven so that it could maintain its solar-powered batteries. Munazza described the increased frequency of solar activity spikes; what I found most interesting was her explanation of the general makeup of the sun, including the fact that the equatorial region rotates faster than the polar regions. Nicole presented on Hanny’s Voorwerp, a mysterious cloud of green gas lit up by the black hole jet of another galaxy. Finally, Nettie explained the growing issue of light pollution (in the form of light trespass, glare, sky glow, and clutter) and its effect on nocturnal and ecological life as well as human health.

We then had the opportunity to explore the museum on our own. I happened to find, somewhat by accident, a small hallway that described the imaging tools used by the museum’s Microscopy and Imaging Facility. The exhibit described four imaging techniques: confocal laser scanning microscopy (for fluorescence imaging and surface detail), scanning electron microscopy (for magnified details), electron microprobe analysis (for determining chemical composition), and CT scanning (for 3D interior images). This was especially fascinating to me because it dealt with a research endeavor I want to engage in after graduating next year: studying the degradation, conservation, and restoration of cultural heritage objects by employing imaging techniques primarily developed for the medical world. However, I’m more interested in the use of Nuclear Magnetic Resonance, an imaging technique that has yet to become popular in the research facilities of American museums.

I also enjoyed the Creatures of Light: Nature’s Bioluminescence exhibit. Beyond the actual material covered, I was very interested in how the information was presented from an education standpoint. In a very kid-friendly environment, it discussed the science behind organisms that either chemically produce their own light (such as fireflies, certain fungi, glowworms, and various deep-sea fish) or re-emit absorbed light (such as fluorescent coral).


Thursday 12 July 2012

I found a comment on Durnin and Eberly’s paper, written by DeBeer. He explained that he and his colleagues saw a connection between Durnin’s experiment and the Poisson (Arago) spot. (This is a relationship that has now come up a few times in my readings lately...) If an illuminated opaque sphere is placed in the focal plane of a following lens, the bright spot retains its intensity and size along the axis of propagation. It can be thought of as a line image, since the spot doesn’t disappear if an obstacle is placed in its path. This self-reconstruction is evidence that the Poisson spot is a product of conically interfering rays, since it could not re-form behind an obstacle if it were built up from rays traveling on axis.

Marissa and I brainstormed on the white board about how we would structure our Bessel beam presentation at the Optical Vortex party next week. We decided to first touch on the basic properties, appearance, amplitude equation, ideal beam vs. what’s experimentally possible, and the difference between zero-order and higher-order beams. Next we would explain the uses and applications of these nondiffracting beams. Afterwards, we would discuss the various methods of generating Bessel beams that we’ve come across from various physicists in the field: by use of an aperture, an axicon, with spherical aberration, a TAG lens, optical fibers, or an SLM. Finally, Marissa and I would each briefly discuss our current research projects.

Duocastella and Arnold actually wrote a very straightforward summary article about Bessel beams, which will be useful to consult while trying to piece together a 20-minute presentation on all of this information. In the publication, they discuss the distinguishing properties of Bessel beams, each of the methods by which they can be created, and the major applications. I thought it was also interesting that the article discusses the fate of the Bessel beam in the far field (since it has typically been created via Fresnel near-field diffraction); after some distance, the beam actually becomes annular, with a Gaussian intensity profile in the radial direction.

The achromat lenses arrived in the mail today! At the end of the day I spent some time setting up the 4-f spatial filtering system, as described by Kowalczyk. Earlier, Marty had expressed his confusion as to why they did not include a final lens in their setup. The article mentions several times that including the lens would make for a more accurate Bessel beam (as with Durnin and Eberly’s setup), but it also mentions that it was not possible to use one in their experiment. Marty and I decided we would look at what actually comes out of the 4-f setup and see if we run into any problems while attempting to include this final lens.

Though I had some difficulty working with the limited space I had on the table, I was able to fit everything for now (as I start to reconfigure things, I’ll probably add in a couple of mirrors to bend the setup around and make more space). I sent the laser first through a 150-micron pinhole and then through a 10-cm focal length lens to magnify the beam size so that it would fill a 2-mm diameter aperture. About one focal length (33.3 cm) behind the aperture I placed the first achromat. After marking the midway point between the two achromats (for the later inclusion of a spatial filter) and placing the second achromat lens, I came out with the image of the 2-mm diameter aperture one more focal length away. I might need to rearrange things slightly, since the beam size in the Fourier plane (spatial filter location) is still relatively small and it would be difficult to filter a ring out of it. But it was nice to be able to lay everything out as a starting point.
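For reference, the element spacing in a 4-f filtering system with these achromats works out as follows (a minimal sketch with the 33.3-cm focal length; positions are measured from the object aperture):

```python
# Element positions along the optical axis of a 4-f filtering setup,
# measured from the object aperture, using the 33.3 cm achromats.
f = 33.3  # focal length in cm

positions = {
    "object aperture":        0.0,
    "first achromat":         f,        # one focal length behind the object
    "Fourier plane (filter)": 2 * f,    # midway between the two achromats
    "second achromat":        3 * f,
    "image plane":            4 * f,    # re-imaged aperture
}
for name, z in positions.items():
    print(f"{name:24s} {z:6.1f} cm")
```

With everything in place, the whole filtering section is 4f ≈ 133 cm long, which explains the table-space squeeze.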

In other news, I’ve started using EndNote: a really great application (courtesy of Stony Brook) to assemble and categorize all of the articles and resources I’ve been consulting. It organizes the citations, allows you to attach PDF files or links to the articles, and then search through them based on author, title, year, keywords, etc. It’s too bad that I didn’t realize I could download this application at the beginning of the REU, because it would have been easier to just add in each source as I printed it out. Though it will take some time to catch up with entering in all of the articles I’ve read so far, overall this should be a helpful way to organize my research.


Wednesday 11 July 2012

Today I read through Marston’s comment on Kowalczyk’s “Generation of Bessel beams using 4-f spatial filtering system,” which pointed out that their nondiffracting Bessel beam is the result of a diverging pattern and has an approximation similar to that for glory scattering off spheres (glory scattering: scattering of light that causes a bright halo of color around a shadow). Marston thought that to create a Bessel beam approximation a final lens needed to be used, as in Durnin and Eberly’s setup (which I discuss in the next paragraph). There was then a reply by Kowalczyk in which he acknowledged the terminology issue but maintained that their resulting pattern did have a Bessel-function radial profile and the characteristic beam properties. He also explained how their diverging approximation to the beam is more closely related to the Poisson spot (the bright spot in the shadow of an opaque circular disk, created by diffraction off its hard edge) than to glory scattering (a polarization-dependent scattering of light off spherical objects).

I read through Durnin and Eberly’s paper Diffraction-free beams, which was one of the first papers on the experimental realization of a beam with nondiffracting properties. They explained how the intensity distribution of (what we now call) a Bessel beam belongs to a special class of non-spreading solutions of the Helmholtz equation. Durnin and Eberly illuminated an annular slit and then placed a lens in front of it. The 1987 publication put simply what I had inferred from reading several more recent and complex articles on this topic: each point on the slit acts as a point source, and each is transformed by the lens into a plane wave whose k-vector lies on the surface of a cone. The maximum propagation distance of the resulting Bessel beam depends on a large lens radius, a long focal length, and a small annular slit width. This was figured to be much longer than the Rayleigh range (the distance over which a normal beam remains essentially undiffracted while propagating in free space).
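To get a feel for the numbers: with an annular slit of radius a in the focal plane of a lens of radius R and focal length f, the cone half-angle satisfies tanθ = a/f and the beam persists out to roughly z_max = R/tanθ. A quick Python sketch (the slit radius, lens size, and HeNe wavelength below are illustrative assumptions, not the values from the paper):

```python
import math

# Illustrative geometry for a Durnin-Eberly-style annular-slit setup.
wavelength = 633e-9   # m, assumed HeNe
a = 1.5e-3            # annulus radius (m)
f = 0.30              # lens focal length (m)
R = 2.0e-2            # lens radius (m)

tan_theta = a / f                                  # cone half-angle
z_max = R / tan_theta                              # Bessel beam range
r0 = 2.405 * wavelength * f / (2 * math.pi * a)    # central spot radius
z_rayleigh = math.pi * r0 ** 2 / wavelength        # comparable Gaussian range

print(f"Bessel range   ~ {z_max:.2f} m")
print(f"Rayleigh range ~ {z_rayleigh * 100:.2f} cm")
```

Even with these modest numbers, the Bessel range comes out a couple of orders of magnitude beyond the Rayleigh range of a Gaussian beam with the same spot size, which is the paper’s central point.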

At our Wednesday REU meeting, each student explained what he or she had accomplished in the past couple of weeks. I explained my mini-project: profiling the Airy pattern from pinhole diffraction and then fitting the data with Gaussian and Bessel curves using Mathematica. Marissa explained her interest in the tunable acoustic gradient index of refraction (TAG) lens for making Bessel beams and how an acousto-optic modulator works. Sara and Kate have been plotting the wavelength spectra from nine stars (M and K types) and analyzing the presence of certain elements at certain wavelengths, such as the sodium doublet. Jonathan explained his newest interest, the multi-pinhole interferometer, through which one can send a vortex to figure out its orbital angular momentum. David is having trouble with his simulation at the moment because his basis is not orthonormal, so the program cannot run the algorithms needed for understanding atomic band structures. June is currently writing up the report on his mini-project: profiling a laser beam that turned out not to be Gaussian after all. Yakov demonstrated on his laptop some of the beam simulations he’s been programming. Joe, another student working with Dr. Michal Simon, explained how he was looking at the spectral energy distribution of certain stars and trying to detect the presence of dust in the data.

Next, at our optics lunch meeting, each of us in the LTC lab explained to the rest of the undergrad students what we were currently working on. I gave a brief overview of what Bessel beams are and the methods for making them. I then discussed my current reading on axicon lenses, bringing up how the apex angle of the cone is important in determining the range of the Bessel beam, as well as the ring spacing and central spot size. The shallower the angle, the larger all three of these values will be; however, Prof. Metcalf was interested in what the limit on this angle is; in other words, how shallow is too shallow for the resulting beam to still be a conical superposition of plane waves. This is something I haven’t come across yet in my reading, so I’ll be sure to do some brainstorming on the matter.
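The shallow-angle scaling can be made concrete with the usual small-angle axicon relations: the deflection angle is θ ≈ (n − 1)γ for wedge angle γ, the range is z_max ≈ w/θ for input beam radius w, and the central spot radius is r0 = 2.405/(kθ). A Python sketch (the glass index, beam radius, and HeNe wavelength are assumed illustrative values):

```python
import math

def axicon_params(gamma_deg, n=1.5, w=2e-3, wavelength=633e-9):
    """Small-angle axicon relations: returns (beam range, central spot radius).

    theta ~ (n - 1) * gamma is the deflection angle behind the axicon;
    z_max ~ w / theta; the spot radius is the first zero of J0, 2.405/(k*theta).
    """
    theta = (n - 1) * math.radians(gamma_deg)
    k = 2 * math.pi / wavelength
    return w / theta, 2.405 / (k * theta)

z1, r1 = axicon_params(1.0)   # 1-degree wedge angle
z2, r2 = axicon_params(0.5)   # halving the wedge angle...
print(z2 / z1, r2 / r1)       # ...doubles both the range and the spot size
```

Note the sketch says nothing about Prof. Metcalf’s question of where the small-angle picture itself breaks down; these relations simply assume the conical-superposition regime holds.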

Dr. Noé and I spent some time searching for less-expensive axicon lenses, in hopes that we could use one to create a higher-order Bessel beam from an optical vortex, but we didn’t come across anything too helpful. Instead, LTC alum Giovanni is kindly letting us borrow the one from his lab, which we can pick up at the Optical Vortex Party he’s hosting next week. So while we wait for the axicon lens, Dr. Noé suggested I do some research on how to make a higher-order Bessel beam using some of the simpler methods I’ve already come across (in other words, the spatial filtering and annular aperture setups I had studied first).


Tuesday 10 July 2012

Today I read through most of Herman’s Production and uses of diffractionless beams article, which went into both mathematical and conceptual detail about using an axicon or a lens with strong spherical aberration to create a Bessel beam. With the axicon lens, the incoming plane waves bend according to the apex angle and index of refraction of the lens, which results in the superposition of positive and negative conical waves around the optical axis. To use a lens with high spherical aberration (meaning rays at different distances from the optical axis are focused at different points), you illuminate it with a ring of light as far as possible from both the margin and the center of the lens. In this way, the light is focused in a conical fashion between the central and marginal focal points. The very center of the lens is blocked by an obstruction to limit the complicated interference pattern that would arise from these central rays. Herman spoke only of creating zero-order Bessel beams with these methods.

I then decided to geometrically brainstorm a little more about Prof. Metcalf’s proposal of using a glass tube instead of an axicon lens. Here, the rays of the resulting Bessel beam make an angle of 90º − θc with the optical axis, where θc is the critical angle at which the incoming diverging rays undergo total internal reflection between the glass and air. If the light source diverging from a point is placed a distance L from the tube, then the tube would have to be L long, and it would create a Bessel beam about the optical axis with a propagation range of L. The incoming light beam would have to diverge in such a way that, after the distance L from its origin to the tube entrance, it has a diameter D approximately equal to the diameter of the tube, to ensure the light enters at the critical angle. Using a trigonometric analysis of the situation, the ratio D/L is 2.29 for the index of refraction of crown glass and 2.55 for flint glass. The next question is to figure out the necessary optical power for a lens to produce a beam of light that diverges to a diameter D over a distance L.
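Those ratios check out numerically. If the rays make an angle of 90º − θc with the axis, then D/2 = L·tan(90º − θc), so D/L = 2/tan(θc). A quick Python check (the indices 1.52 for crown glass and 1.62 for flint glass are assumed typical values):

```python
import math

def tube_ratio(n):
    """D/L for the glass-tube scheme: rays enter at the critical angle,
    so they make an angle (90 deg - theta_c) with the optical axis and
    D/2 = L * tan(90 deg - theta_c), i.e. D/L = 2 / tan(theta_c)."""
    theta_c = math.asin(1 / n)   # critical angle at the glass-air interface
    return 2 / math.tan(theta_c)

print(round(tube_ratio(1.52), 2))  # crown glass -> 2.29
print(round(tube_ratio(1.62), 2))  # flint glass -> 2.55
```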

Milne’s article Tunable generation of Bessel beams with a fluidic axicon described a method that involved creating a mold of an axicon and then changing the fluid inside of it to alter the index of refraction. But I thought it was a particularly useful article because it clearly laid out the importance of the apex angle (as well as the index of refraction) of the axicon to the maximum beam range, ring spacing, and central maximum size with their respective equations.
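
Milne’s point about the apex angle and index can be illustrated with the standard thin-axicon relations; every number below is an illustrative choice of mine, not a value from the article:

```python
import math

# Standard thin-axicon relations: the deflection angle is beta = (n - 1) * alpha
# for a thin axicon of base angle alpha. All numbers here are illustrative.
n = 1.5                      # assumed index of refraction
alpha = math.radians(1.0)    # assumed base angle
R = 2.0e-3                   # assumed input-beam radius, m
wavelength = 633e-9          # HeNe line, m

beta = (n - 1) * alpha                      # cone half-angle of the output rays
z_max = R / math.tan(beta)                  # maximum propagation range
k = 2 * math.pi / wavelength
r_core = 2.405 / (k * math.sin(beta))       # central-maximum radius (first J0 zero)
# larger alpha or n -> larger beta -> shorter range but tighter central core
```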

I read through an article by Jaroszewicz from Optics and Photonics News: Axicon – the Most Important Optical Element. The actual definition of an axicon, as defined by McLeod in 1953, is an optical element with rotational symmetry that images a point into a line segment along the optical axis. Jaroszewicz explains that the “first axicon” was the pinhole camera, first mentioned by a Chinese philosopher in the fifth century B.C., which provides an infinite depth of focus. The article also provides a brief comparison to the Arago (or Poisson) spot, which is created by interference in the center of the shadow of an opaque disc/sphere when this opaque sphere is being used as an image-forming device. Finally, it brought up Bessel beams and how, as an application of an axicon, they can be used to define a reference line, since the resulting beam is long and narrow.

For future reference, after looking through Pradyoth’s Intel report, we found the actual name of the company he used for his printing job: Darkroom Specialties LLC, Eugene, OR.


Monday 9 July 2012

This morning Dr. Andrew MacRae came in for a tour of the LTC so Jonathan, Marissa, Ariana, and I explained some of the projects we were working on. Later he gave a talk about the generation of arbitrary quantum states from atomic ensembles. He discussed two methods for entangling photons to isolate single states: spontaneous parametric down-conversion and the use of a vapor cell. I was familiar with the SPDC process from my sophomore year Intro to Relativistic and Quantum Physics course, since we had used one of these crystals in our experiments to demonstrate the existence of single quanta of light. Later in the afternoon, Dan Stack presented his thesis: Optical Forces from Adiabatic Rapid Passage, which basically described an alternative method for cooling atoms-- instead of laser cooling, he had used coherent optical forces.

I started reading some of Herman’s Production and uses of diffractionless beams article, which describes two methods to create zero-order Bessel beams: using a conical lens or by means of spherical aberration. He stated outright what I had assumed from reading other papers about the basic Bessel beam criteria: (A) the central region keeps a constant size and intensity because energy is diffracted into it from the surrounding ring system, and (B) the transmitted intensity pattern remains unchanged at a distance past an obstruction of the central intense region because energy is diffracted into the region on the other side of the obstruction. I will finish the article tomorrow.

After reading some of Arlt’s Generation of high-order Bessel beams by use of an axicon paper, I’m beginning to understand the difference between zero-order and higher-order Bessel beams. A J0 beam has a bright central maximum, while a Jn (from this point on, assuming n≠0) beam has a dark central core. You can use the same optical elements to create both a J0 and a Jn beam; the result depends only on the input light beam that is used. In other words, if an axicon is illuminated with a plane wavefront, it yields a J0 beam with an annular spectrum; if the axicon is illuminated with a beam that has an azimuthal phase variation (such as from a Laguerre-Gaussian mode), the resulting Bessel beam has an annular spectrum and azimuthal phase variation, signifying a Jn beam. Again, I will finish reading this article tomorrow.

As for the advances I had made with my pinhole diffraction setup over the weekend, Marty pointed out that, according to Durnin and Eberly’s Diffraction-Free Beams article, it would need a second lens (after the filters) to focus the ring of light into the Bessel beam. He suggested making the first lens (right behind the pinhole) one with a longer focal length.

Dr. Noé helped Jonathan set up a camera that takes pictures of the transverse wavefront of a beam of light. He explained that a circular polarizer was needed to prevent the camera from being flooded. The polarizer was set up very close to the camera and configured so that the light was almost completely attenuated. For future reference- the program that was used to capture images was EDC 1000N.

Marty brought up a fascinating application of Bessel beams that I hadn’t ever realized: evidently the light beams used in the scanning devices at supermarket cash registers to read UPCs are Bessel beams! The 2003 patent describes how the device makes use of an axicon optical system to generate nondiffracting beams of light. Since the central peak of the transverse intensity does not diverge over a range much longer than that of a typical laser beam, it increases the maximum working distance of the scanner to about 520 inches.

Dr. Noé and I brainstormed creating Bessel beams by using a long glass tube, as Prof. Metcalf had suggested during our Wednesday 27 June meeting. After drawing a couple of diagrams and thinking things through based on the other known methods of generating Bessel beams, we decided that sending a Gaussian beam through would yield a zero-order Bessel beam, while sending a vortex beam (from an LG mode) would yield a Jn beam. I tried to search for any publications that might discuss this method but was unsuccessful in finding anything. The search did bring me to Tom’s journal entry from 19 June 2009 where he describes the similar conversation he had had with Prof. Metcalf about how a glass tube would give a similar ray trace as an axicon lens. If we can figure out the correct tube dimensions, this would be an interesting alternative to try. I think once I finish reading the articles that discuss the use of an axicon, I’ll have a better understanding of the proportions that would be needed for the glass tube.


Sunday 8 July 2012

It seems like I might have created a Bessel-esque beam from spatially filtering the diffraction pattern so that only a thin ring of light was allowed to pass through. When following the beam along its axis of propagation, the ring developed a bright spot in its center about 50 cm behind the filter system (which consisted of the hole-punch high-pass filter and iris diaphragm low-pass filter). As the distance increased, a couple of inner concentric rings began to develop from the outside in. The pattern basically remained constant from about 80 cm through 100 cm. Then around 130 cm the rings started to collapse so that there was only a bright center and one ring around it. Farther down, though the central dot remained bright, the rings started to become fuzzy (160 cm) and eventually spread so much that they meshed together (210 cm). Soon after (220 cm), it was possible to see that a very tight Airy diffraction pattern had faded back in.

I decided to test the self-reconstructive abilities of the beam I was observing and placed one of the pieces of transparency paper that had a misshaped whiteout circle at about 80 cm from the filters so that the beam’s bright center was blocked. The center continued to be clearly obstructed until around 50 cm behind the obstruction when a new bright spot began to appear again. By around 90 cm, the center had definitely reappeared, though the outer rings were slightly fuzzier than before.

Despite the fact that what I observed was not a very well-defined Bessel beam, since it was fairly small (maybe only a little more than 1 cm in diameter) and only contained the bright center and two outer rings, it still exhibited the Bessel-like qualities of (A) being the product of a thin-ring light source diffracting, (B) containing a uniformly-sized bright center over considerable distance, and (C) having the ability to self-reconstruct. I’m sure with a thinner ring of light, as would be created with the 4-f spatial filtering system, the Fresnel diffraction pattern would develop into a more pronounced Bessel beam.

I was finally able to figure out the correct code in Mathematica to fit a Bessel function to my Airy pattern intensity data, and it’s clear right away that it’s a more suitable fit than the Gaussian one. This function takes into account the aperture diameter, wavelength of the laser, and angular radius from the pattern maximum. Afterwards, I also plotted all three of these curves on a semi-logarithmic plot. Again, it was clear instantly that the data did not behave like the Gaussian curve, which was parabolic.
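
For the record, the same kind of fit can be sketched in Python with SciPy (synthetic stand-in data here, since my actual intensity values live in the spreadsheet). The model is the Airy form I0·(2J1(u)/u)², with the scale parameter playing the role of the aperture-dependent ring spacing:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import j1

def airy(x, i0, a):
    """Airy pattern intensity: I0 * (2 J1(u)/u)^2 with u = a*x."""
    u = np.where(np.abs(a * x) < 1e-9, 1e-9, a * x)  # guard the u=0 limit
    return i0 * (2 * j1(u) / u) ** 2

# synthetic stand-in for the measured profile (true i0=1.0, a=2.5, small noise)
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 121)
y = airy(x, 1.0, 2.5) + rng.normal(0, 0.002, x.size)

(popt_i0, popt_a), _ = curve_fit(airy, x, y, p0=[1.0, 2.2])
# popt_a recovers the scale parameter, i.e. the fringe spacing of the pattern
```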

My Ideas and Resources page has now been updated with the Bessel beam brainstorm that Dr. Noé and I came up with. It contains all of the major ways we’ve come across to generate these nondiffracting beams as well as links to the articles that describe each method. I’ve started reading through more of these articles and will continue tomorrow.


Friday 6 July 2012

Dr. Noé showed me an article that explained why we were seeing different patterns over a certain distance behind the focused image of the pinhole yesterday. Evidently with pinhole aperture diffraction, the classic Airy pattern is only present at the point of paraxial focus, in other words the intermediate image plane. The article contained an axial intensity distribution plot that showed why we were observing the dark-center diffraction pattern at certain distances; there was also a helpful demo to click through for a transverse representation of the diffraction pattern at each of those distances. I thought it was also interesting to note that there were more higher-order diffraction rings at a distance ±6π from the point of paraxial focus than there are in the classic Airy pattern at this paraxial focus point.

Prof. Metcalf gave us another quantum lecture, this time focusing on the Bloch sphere and Rabi frequency, since these would be topics covered in the thesis defense on Monday. He made an interesting connection to the oscillation lecture he had given to us on Friday 29 June 2012: moving an atom between quantum states is similar to driving an oscillator at its resonant frequency. When an oscillator is driven at its resonant frequency, it is said to be in one of its normal modes and moves in a clean and repetitive motion. Between normal modes, the oscillator’s motion is a superposition of all frequencies. The discrete energy levels of atomic states are in essence the normal modes of the system, and in between these discrete states, the atom exists in a superposition of all its states. For the atom to change states it has to be driven with a certain frequency; in other words, it has to receive a certain amount of energy to make the clean transitions.

Later we located a multi-slit interference simulation for Mathematica on Steph’s LTC page. It was really useful to be able to play around with the different parameters (wavelength of light, number of slits, spacing between the slits, and the distance from the screen) in Young’s experiment model to see what the resulting interference pattern would look like. Additionally, it was interesting that when there were many slits it was possible to observe the Talbot effect (when an image of the slits can be seen at multiples of a discrete length) at various screen distances.
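
For reference, the Talbot self-imaging distance follows the standard formula z_T = 2d²/λ; the grating period below is an arbitrary example of mine, not a value from Steph’s simulation:

```python
# Talbot self-imaging distance z_T = 2 d^2 / lambda (standard formula).
# The slit spacing and wavelength are illustrative choices.
d = 25e-6            # grating period, m
wavelength = 633e-9  # HeNe line, m
z_T = 2 * d**2 / wavelength
print(round(z_T * 1e3, 2), "mm")  # -> 1.97 mm
```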

I did a little bit of internet searching with Ariana to see if we could come up with some possible project ideas that connected a few of the topics she was interested in. We came across a couple of attention-grabbing articles; unfortunately neither would work for a potential optics exploration, but they were interesting nonetheless: one was about how moiré patterns could be used for visual cryptography, and another was about the moiré fringes that appear from interfering two acoustic Airy patterns.

I attempted to set up a high-pass filter in my Airy pattern setup in order to see the effect on the image. Since the high spatial frequencies contain information about the edges of an object, I assumed I would see the pinhole image with a sharper border and maybe a darkened center. I tried creating a circle of whiteout on a piece of transparency, but no matter how uniform the circle looked in my hands, once I placed it in the setup in front of the beam of light, the shadow it created revealed the whiteout shape’s uneven edges. Nonetheless, Marty helped me configure my setup so that the pinhole image would be magnified enough for us to observe the effect of the filter. The image was similar to what we had expected: there was a bright circular ring of light with a dim inside, but it also seemed messy, with an extra, softer ring of light and some speckle outside. This was probably due to the irregular shape of the filter and maybe some kind of aberration effect from the transparency sheet.

Later I tried to solve the filter-edges issue by substituting the whiteout with a hole-punch cutout attached to a microscope slide. Since this was larger than the whiteout circle, some of the inner rings were being blocked along with the bright center. So I decided to take it a step further and spatially filter the beam so that only one ring of the Airy diffraction pattern was allowed to pass. By placing an iris diaphragm about 7 cm behind the high-pass whiteout filter, I was able to isolate a single ring. This was, in effect, a rudimentary version of the methods suggested in Kowalczyk’s and Basano’s articles that call for a thin ring of light to create a Bessel beam. Over the weekend I plan to play around with this setup a little more.


Thursday 5 July 2012

I spent a lot of time working with Mathematica this morning and was able to figure out: 1) how to import data from an Excel spreadsheet, 2) graph the list as points in the x-y plane, 3) fit a Gaussian function to the data to find the parameters of the equation (height, width, and central position), and 4) graph the data points and Gaussian curve on the same axes. In my previous physics courses that touched on Mathematica, we were always just given the codes outright, so I see figuring out these basic operations as a major accomplishment for me. The next step is to figure out how to fit a Bessel function to the data; I’ve tried out a few codes so far, but haven’t figured out the correct one yet. Dr. Noé hinted that I would need to incorporate the aperture diameter as a parameter to find the spacing of the maxima for fitting the Bessel function, so I’ll probably look into this in greater detail over the weekend.
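
The same four steps can be sketched in Python with SciPy; the data below are a synthetic stand-in (the real values came from my Excel sheet):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, height, center, width):
    """Gaussian with the three fit parameters: height, central position, width."""
    return height * np.exp(-((x - center) ** 2) / (2 * width ** 2))

# synthetic stand-in data (true parameters: height 2.0, center 0.3, width 1.2)
x = np.linspace(-5, 5, 101)
y = gaussian(x, 2.0, 0.3, 1.2)

popt, _ = curve_fit(gaussian, x, y, p0=[1.0, 0.0, 1.0])
# popt recovers [height, center, width]; plotting data and curve together
# is then just a matter of evaluating gaussian(x, *popt)
```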

Dr. Noé also explained that the calculations I did yesterday with my “diffraction equation” were incorrect, since I used a relationship reserved for diffraction through two or more narrow slits: d sin θn = nλ. We then had a group discussion about diffraction, starting with the difference between two terms that are sometimes confused: diffraction and interference. Diffraction is what happens to light as new wavefronts are formed, according to Huygens’s principle: each point on the wavefront is a source of a new “wavelet,” and the envelope of wavelets becomes the new wavefront of the propagating wave. Interference is the process of adding up the resultant of individual waves that are constructively or destructively altering each other.

We then discussed the difference in the intensity pattern from different numbers and sizes of slits, and it turns out that diffraction through two finite-width slits produces an interference pattern that combines the one finite slit and two infinitely small slit patterns. As more slits are added, the peaks of the diffraction pattern become thinner and thinner (until they become spectral lines, with the use of a diffraction grating), sort of like how the shape of the Fourier transform graph will change as the amount of uncertainty decreases.

Next we examined the distinction between near- and far-field diffraction. Fresnel diffraction occurs in the near field; the shape of the pattern depends on where along the longitudinal axis of propagation you observe it. Will observed this accidentally when he noted a dark spot in the center of his Airy pattern, and then subsequent changes in the pattern as he moved farther away. For Fraunhofer diffraction in the far field, the pattern’s shape depends only on the angle of diffraction and no longer on the longitudinal distance. Therefore, as you move farther from the aperture, the diffraction pattern gets larger but its shape remains constant. Dr. Noé then helped us derive the equation for the intensity distribution of two infinitely small slits; we saw how the intensity has a cos² shape, as we expected, and also how the Fraunhofer pattern depends only on the angle of the waves arriving at the screen, not on the distance of the screen from the object.

As a group, we looked again at the Airy pattern and the image in focus at the opposite side of the lab. Dr. Noé showed how you could see the Fresnel diffraction patterns as you move a piece of paper closer to the lens to observe the transverse plane of the beam. At a certain distance there was a dark spot in the middle of a light ring, then a light spot in the center, and so on. This pattern was what the lens was imaging right in front of the pinhole, and it alternated until you reached the image of the Fraunhofer diffraction zone. We then saw something unexpected: when we moved the paper farther than the focused pinhole image, out of the lab to the wall across the hall, we noticed the same alternating Fresnel patterns again. Were these patterns what the lens was imaging from the other side of the pinhole? We weren’t exactly sure and are going to think on it a little more.


Wednesday 4 July 2012

I decided to collect more data points for the profile of the laser beam, in order to see more clearly what’s going on in the wings of the graph and show that the beam is not completely Gaussian. This time I collected intensity information from the n = 3 diffraction fringe on one side of the central bright spot to the n = 5 diffraction fringe on the other side (the asymmetric data collection is due to the positioning constraints of the photodiode on the moveable stage). I again graphed the results in Excel and added a Gaussian curve to the graph. This time it was more apparent that the data values did not follow the ideal Gaussian values in the wings of the graph; even more apparent was the non-parabolic shape of the data curve on a logarithmic scale.

I then decided to use the diffraction equation, along with the distance from the photodiode to the pinhole and the spacing between fringes, to work backwards and determine the wavelength of the laser. With a slit distance d = 0.15 mm, a longitudinal distance of 1194 mm, and a transverse distance from the central bright spot to the n = 5 maximum of 25 mm, I calculated the wavelength of the laser as 627 nm. The actual HeNe wavelength is 632.8 nm.

Afterwards I tried sending the diffraction pattern across the length of the lab and projected it onto the main door to again make the appropriate measurements and work backwards with the diffraction equation to check the laser wavelength. The distance from the pinhole to the door was 1317 cm and the distance from the central maximum to the n = 3 diffraction fringe was 16.5 cm, meaning the calculated wavelength was 626.5 nm.
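
Both wavelength estimates can be redone in a couple of lines; this recomputation lands within a nanometer or so of the values above (the small differences are just rounding in the intermediate steps):

```python
import math

def wavelength_nm(d_mm, L_mm, y_mm, n):
    """Back out lambda from d*sin(theta_n) = n*lambda, with theta_n = atan(y/L)."""
    theta = math.atan(y_mm / L_mm)
    return d_mm * math.sin(theta) / n * 1e6   # convert mm -> nm

print(round(wavelength_nm(0.15, 1194, 25, 5), 1))    # tabletop: ~628 nm
print(round(wavelength_nm(0.15, 13170, 165, 3), 1))  # across the lab: ~626 nm
```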

For my final measurements of the day, I spent some time using different lenses at varying distances in the setup to try to focus the image of the pinhole onto the door across the lab. I achieved this with the BSX085 lens (f = 20 cm) at a distance of 21.6 cm from the object and 1295 cm from the image. Using the lens equation, the sum of the reciprocals of the object and image distances was 0.047 cm-1, relatively close to the actual focal length reciprocal of 0.05 cm-1. The magnification equation with these distances gave a magnification of 60 times, which means the calculated image height was 9 mm. The actual crisp-edged pinhole image had a diameter of about 11 mm.
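
The arithmetic for this configuration in a few lines (numbers copied from above, with the 150-micron pinhole as the object):

```python
# Thin-lens check for the BSX085 (f = 20 cm) configuration.
do, di = 21.6, 1295.0      # object and image distances, cm
lens_sum = 1/do + 1/di      # should approximate 1/f = 0.05 cm^-1
m = di / do                 # magnification, ~60x
image_mm = 0.150 * m        # 150-micron pinhole -> ~9 mm image
print(round(lens_sum, 3), round(m), round(image_mm, 1))  # 0.047 60 9.0
```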

The discrepancy between the calculated and actual values in the above-mentioned mini-projects was probably mostly due to the means of measurement. Especially while examining the Airy pattern projected onto the door across the room, it was difficult to see the edges of the diffraction fringes, so the measurements are estimates. Additionally, I was measuring the longitudinal distance on my own with a tape measure, so it may have moved slightly from its actual position while I stretched it across the length of the lab.

Today I was also able to download Mathematica and Maple from the Stony Brook website and tried playing around with them a little. I still haven’t been able to successfully graph my data from profiling the laser beam (with the Gaussian and Bessel curves on the same graph), but after working more with the programs over the next few days I’m confident that I can figure it out.


Tuesday 3 July 2012

After Dr. Noé explained that I could obtain a much better Airy pattern by moving the pinhole closer to the lens (BPX065, f = 7.5 cm) and using a cleaner pinhole, I reconfigured my setup. Now, with a 150-micron pinhole and both the lens and pinhole moved closer to the laser, the Airy pattern is much more prominent. I then experimented with different lenses to see the distances at which I could focus the Airy diffraction pattern back down to create an image of the pinhole. For each configuration, I used the thin lens equation to relate the reciprocals of the object distance, the image distance, and the focal length. In each case, the results were fairly accurate. For instance, with the BSX085 lens (f = 20 cm): the object was 28 cm away, the image 68 cm away, and the sum of their reciprocals was 0.0504 cm-1, very close to the focal length reciprocal of 0.05 cm-1. The magnification equation revealed that the pinhole was magnified 2.43 times (meaning its image was 364 microns). By changing the object-to-lens distance slightly, I observed the following: the object was 21.5 cm away and the image was 365.76 cm away, so the thin lens equation yielded a sum of 0.04924 cm-1, which is still close to the 0.05 cm-1 reciprocal focal length.

Jonathan and I spent some time practicing how to profile the laser beam with a photodiode (connected through a 100 ohm resistor to an AVO meter) placed in the line of the Airy diffraction pattern. We started by sweeping the intensity of the central bright spot of the pattern on a moveable stage (which took some time to set up, since the first stage did not allow a wide enough range of motion and the second one we found didn’t have standard screw-hole sizes, but then we came across another one that worked perfectly for our needs). At first the results we were getting seemed off: the sides of the central beam were Gaussian-esque, but the center seemed to just plateau at a constant value. Dr. Noé explained that the hole of the photodetector was too wide to make a fine measurement of the intensity across the small light spot. We had in fact been measuring a convolution of the wide hole and the Gaussian-shaped intensity of the beam, which (according to Fourier mathematics) appears as a square pulse with rounded edges. He also suggested that we use a much larger resistor to make the AVO meter more sensitive to small changes in intensity.
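
Dr. Noé’s convolution explanation can be mimicked numerically; the widths below are arbitrary model units, not our actual beam and aperture sizes:

```python
import numpy as np

# Toy model of what we measured: a Gaussian spot convolved with a wide
# detector aperture flattens into a round-edged plateau.
x = np.linspace(-5, 5, 1001)
spot = np.exp(-x**2)                          # true Gaussian intensity profile
aperture = (np.abs(x) < 2.5).astype(float)    # wide photodiode opening
measured = np.convolve(spot, aperture, mode="same")
measured /= measured.max()
# near the center the "measured" profile is nearly flat (a rounded square
# pulse), even though the true spot has already fallen to e^-1 at x = 1
```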

So now, with a 200-micron pinhole covering the photodiode and a 10 MΩ resistor in place of the other, we set to work charting the intensity (in millivolts) across the central bright spot of the Airy diffraction pattern. When we had finished, we graphed the results in Excel and tried fitting a Gaussian curve to the data. For the most part it appeared to fit, with the exception of the edges of the data points. However, on a logarithmic y-axis the data points did not form a true parabola like an actual Gaussian curve would. Though it’s too difficult to do in Excel, it would be best to try fitting a Bessel function to the data points; Dr. Noé suggested using Mathematica or Maple.

On another note, Prof. Metcalf pointed out that my explanation of his setup (from Friday 29 June 2012) was incorrect, since the negative momentum doesn’t necessarily signify a negative energy transfer. So I’ll have to rethink the problem a little more.


Monday 2 July 2012

After looking through a few different sources (including Lidiya’s journal) over the past couple of days, I feel like I’ve come to a better understanding of what it actually means to break up an object into its spatial frequencies. As I had superficially understood, high spatial frequencies are associated with the object’s edges. Mathematically, a spatial frequency is the reciprocal of a spatial period: the number of cycles over a certain distance. In other words, it is a rate of change in space. Therefore, thinking of an object as a 2D picture, at an edge where there is a more abrupt change (in the case of a grating, the transition between the transparent slit and the opaque part), there is a steeper gradient (rate of change with direction). It helped to look over a website about edge detection, which described the technique as finding the steepest gradient between neighboring pixels. For the other areas of the object, where there is no (or very slight) variation between pixels, the gradient is zero (or very close to it).
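
The pixel-gradient idea fits in a tiny example (a 1-D “slit” image of my own invention, not from the website):

```python
import numpy as np

# Edge detection in one line: the gradient of a 1-D "slit" image is
# nonzero only at the transparent/opaque transitions.
img = np.array([0, 0, 0, 1, 1, 1, 0, 0, 0], dtype=float)
grad = np.diff(img)
print(grad)  # [ 0.  0.  1.  0.  0. -1.  0.  0.]
```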

In a 4-f set-up, for instance, light has to bend at a greater angle at the edge of an object, therefore the lens will focus these rays at the outer sections of the diffraction pattern. On the other hand, transparent parts of an object allow light to be transmitted unbent, which is why these parallel rays (aka the low spatial frequencies) pass through the lens and become focused down to the focal point in the center of the pattern. The Fourier transform is what organizes the different frequencies that make up the overall object into a diffraction pattern; the order numbers of the pattern refer to how much the rays have bent at the object’s edges, with the higher orders signifying higher spatial frequencies. It would be interesting to play around with spatial filtering and see what a diffraction pattern would look like for a two-dimensional object (such as a transparent slide with a design or simple picture).
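
This Fourier-plane picture can be sketched numerically (the slit and filter sizes are arbitrary choices of mine): the FFT of a slit spreads its sharp edges into high spatial frequencies, and keeping only the central orders returns a slit with softened edges.

```python
import numpy as np

# A slit's sharp edges live in its high spatial frequencies; low-pass
# filtering the spectrum (keeping only the central orders) smooths them.
N = 1024
aperture = np.zeros(N)
aperture[N//2 - 32 : N//2 + 32] = 1.0          # transparent slit
spectrum = np.fft.fftshift(np.fft.fft(aperture))
lowpass = np.abs(np.arange(N) - N//2) < 20     # block the high orders
filtered = np.abs(np.fft.ifft(np.fft.ifftshift(spectrum * lowpass)))
# the filtered slit keeps its bright middle but its steepest pixel-to-pixel
# change is far smaller than the original step of 1.0
```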

Dr. Noé and I sketched out a quick list of the different methods to create a Bessel beam based on the articles we’ve come across so far. This week I plan to reorganize my ideas/resources page to create a map of these different possibilities with links to the publications that describe their methods.

  1. Axicon Lens (to create a zero-order or higher order Bessel beam)

    • We first looked up axicon lenses made by Thorlabs; however, these were fairly expensive. It’s possible that they may be cheaper if we could get them without the anti-reflection coating.

  2. Spherical Aberration (to create a zero-order, possibly higher order beams too?)

  3. Thin Ring Light Source (Bessel beam created through Fresnel diffraction) from:

    • 4-f Spatial Filtering, using:

      • Compound 1000mm FL Planoconvex Lenses

        • We did some brainstorming for the 4-f spatial filtering method to create a thin ring light source; we weren’t able to find the exact lenses used in Lidiya's setup, and the larger ones we did find would be difficult to mount apex-to-apex (as suggested in Kowalczyk’s setup).

      • Achromat Lenses

        • We found some achromat lenses from Surplus Shed that would definitely fit the equipment we have here. (Achromat lenses limit the effects of chromatic and spherical aberrations; they are made from two lenses with different dispersive properties, bringing rays of light with slightly different wavelengths to the same focal point.) Tomorrow Dr. Noé said he would order some of the 400mm focal length lenses.

    • Ring Photo

      • As for creating the transparent ring photo in the setup described by Basano, Dr. Noé showed me on Xfig how easy it is to produce mathematical graphics. (There isn’t a straightforward way to install Xfig for Macs, but I found a set of steps here that I’ll take some time to go through tomorrow.) The project called for a 4mm-diameter transparent ring with a width of 0.016mm; however, we figured out that a normal printer would only be able to resolve 0.025mm, meaning we wouldn’t be able to print out the required size.

        I looked up Pradyoth’s LTC journal to try to find the name of the printing company he used for his project and found that on 3 August 2010 he obtained “contact information of a transparency printing service through Professor Michael Raymer at the University of Oregon.” From the next couple of journal entries it seems that he first tried to order the printing job from a Mr. Walt O’Brien but instead used a Mr. Gene Lewis when the former notified Pradyoth that he couldn’t work with the desired film size. A quick Google search of the names with some keywords didn’t yield much; I’ll do a more thorough search tomorrow or see if Dr. Noé has any ideas. Also, I thought it was interesting that Pradyoth had Bessel beams down as a potential project on his ideas page. He commented on the possibility of devising a tunable lens that could create a Bessel beam approximation, instead of using an axicon lens.

Dr. Noé suggested that I play around with a simple pinhole setup to achieve an Airy pattern (diffraction from a pinhole: a bright center with concentric dark and bright rings). With a second lens after the diffraction pattern, it would be possible to refocus the rays into the shape of the pinhole. I started out using the BPX065 planoconvex lens (75mm focal length) to focus light into a 200-micron pinhole, but the Airy pattern was pretty deformed. I plan to work on this more tomorrow.


Friday 29 June 2012

I started reading through a diffraction grating handbook that Dr. Noé gave me. The first section described the specific properties of two different types of diffraction gratings: reflection and transmission gratings. A reflection grating is an array of evenly spaced grooves on a reflective surface, while a transmission grating is a pattern of evenly spaced transparent slits on an opaque screen. The distance between grooves or slits should be approximately the wavelength of the light being studied. The electromagnetic wave is diffracted from the grating with a change in amplitude, phase, or both.

Professor Metcalf explained that he planned to give a lecture series on quantum mechanics and started out today with oscillations. Specifically, in a doorframe he had set up two masses on separate strings connected by a horizontal straw; the connected pendulums were meant to represent a coupled oscillator. Prof. Metcalf showed us the normal modes of the apparatus: the special cases in which the pendulums would oscillate at the same amplitude for an infinite amount of time. We watched a few more demonstrations of normal modes of different oscillating systems in a very useful video. Finally, Prof. Metcalf left us with something to think about: with the coupled pendulums, you can start moving one mass and have it eventually transfer its energy to the second mass, which starts oscillating as the first mass comes to a stop, and have this continue indefinitely. At one point during the cycle, both masses have the same amplitude and frequency before one starts to slow down as the other continues speeding up. So the question was: how do the masses “know” whose turn it is to have the energy transferred to them when they are in this equal state of motion?

The way I approached the problem was to think conceptually about the momentum and the direction of energy transfer in the system. I drew a series of simple sketches of the basic motion: the first pendulum swings, the second one starts to move as the first slows down, they move at the same speed, the second one moves faster than the first, and eventually the first pendulum is at rest while the second moves with the original speed. At the moment when the first pendulum has slowed down enough and the second has sped up enough that they swing at the same speed, the first mass is already in the process of slowing down and losing kinetic energy; the change in its momentum is negative. The second mass is already in the process of speeding up and gaining kinetic energy; the change in its momentum is positive. This can be seen on the amplitude-versus-time graph that a student once made and posted next to the setup. At the time when both amplitudes are the same, one curve has a positive slope while the other has a negative slope, illustrating how the masses “know” to continue their energy transfer in the appropriate direction.
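The beating energy exchange is easy to reproduce numerically by superposing the two normal modes, whose frequencies are split slightly by the coupling. This is only a sketch with made-up frequencies, not measurements from the doorframe setup:

```python
import numpy as np

# Two weakly coupled pendulums: the motion is a superposition of the
# symmetric (omega_s) and antisymmetric (omega_a) normal modes.
# The frequencies here are illustrative, not measured values.
omega_s, omega_a = 5.00, 5.20                 # rad/s, split by the coupling
t = np.linspace(0, 2 * np.pi / (omega_a - omega_s), 2001)   # one full beat

# Initial condition: pendulum 1 displaced, pendulum 2 at rest.
x1 = 0.5 * (np.cos(omega_s * t) + np.cos(omega_a * t))
x2 = 0.5 * (np.cos(omega_s * t) - np.cos(omega_a * t))

# Energy (~ amplitude squared) sloshes back and forth: halfway through the
# beat, pendulum 1 is essentially at rest and pendulum 2 has all the motion.
mid = len(t) // 2
print(abs(x1[mid]), abs(x2[mid]))
```

Plotting x1 and x2 against t shows exactly the crossing slopes described above: at the handoff point the two envelopes have equal height but opposite slope.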

Later we did some more of our usual Friday afternoon cleaning of the LTC. Today I focused mainly on organizing the desktop shelving units according to the use of the material. In this way, all of the optical instruments and pieces are easy to spot on one side and the mechanical parts for building setups are on the other side.


Thursday 28 June 2012

Dr. Noé left the July/August 2012 issue of OPN Optics and Photonics News on the desk, so I flipped through it a little and came across an article on physics education, Photonics Explorer: Working within the curriculum to engage young minds. The article discussed how an increasing number of students are not showing an interest in science, which is something I've noticed, and actually it's one of the main reasons why I want to go into teaching. Oftentimes teachers are able to spark students' interest with a fascinating demonstration, but this does not last long enough to inspire a career in science. The article therefore suggests that a possible solution is to promote continuous hands-on discoveries throughout the curriculum, even on a daily basis. The Photonics Explorer kit is meant to do just that: make abstract concepts (in light and optics) easier to understand through guided inquiry-based learning. Students have the opportunity to create their own setups, predict results, test different theories, and then discuss the applications of what they found out or how it could be useful outside of their simple experiment. I think it's definitely more valuable to make a real-world connection with the material and come to an inherent understanding instead of simply memorizing equations and concepts.

Then Prof. Metcalf stopped by with a researcher who was looking for help in designing new automotive features, such as a camera that could read street signs without being saturated by light from the taillights of the car in front of it. This sparked my interest in reading another article from OPN, Vision Sensors in Automobiles. It discussed new exterior cameras/sensors (for night/fog vision, 360-degree panoramic views, and 3D imaging) that were being tested in India to improve safety. Instead of the usual CCDs, they used CMOS (complementary metal oxide semiconductor) sensors, in which each pixel's photodiode collects light, generates charge, and immediately converts it into a voltage at the pixel. I found it interesting that as they described each type of vision implement, they also described how much of their inspiration was drawn from nature. For instance, they are looking to develop polarization-based vision systems to see through fog, springing from the fact that fish seem to use polarization to move easily through murky water. This reminded me of a point that was brought up at the graduate school talk yesterday, about how important multidisciplinary research is.

I was also a little curious about this strange phenomenon, so I quickly did a little extra research and found an article about polarization vision in cuttlefish. Evidently most aquatic creatures can sense polarization because the photoreceptors in their eyes are orthogonally oriented; they therefore have a high sensitivity to the orientation of incoming light waves and can improve the contrast of the scene in front of them. Using an imaging polarized-light analyzer, researchers actually found that cuttlefish give off a prominent polarization pattern of light around their eyes and forehead. This pattern, however, disappeared when they were camouflaged, attacking prey, or laying eggs. A study was conducted in which researchers noted the behavioral responses of cuttlefish when the polarization of their reflected image in a mirror was changed. They found that the cuttlefish stayed in place when the polarization of the reflection was distorted, but retreated from their mirror image when the polarization was left unaffected. The researchers concluded that cuttlefish use their polarized vision, and the polarization coming off their own bodies, to recognize and communicate with each other.

I also started reading the Diffraction Grating Handbook that Dr. Noé gave me and updated my Ideas and Resources page (reorganizing the topics a little and adding some more of my interests).


Wednesday 27 June 2012

Today we had our second REU group meeting, in which everyone described his or her progress since our previous get-together (20 June 2012). Kate and Sarah are currently working with the actual spectral data for their star, specifically trying to apply the χ² method to reduce the data for their use; Sarah also mentioned that their task is to study the sodium doublet in the spectral data, which I already knew about from Benjamin's LTC project that I read yesterday. David has started producing actual graphs charting the band structure of silicon crystals and how the bands change when he substitutes boron into the structure. Yakov is in the process of switching over to a new software program to do the calculations for his group's simulations of electron and proton collisions. June has been spending some time figuring out how a new light-sensitive camera will work with the AMO group's apparatus, as well as practicing how to profile Gaussian laser beams. Jonathan explained how he's working on a Singular Optics Map for his website as a resource to trace the research and publications written about optical vortices and related topics. Marissa explained the step-sweep method of aligning a beam to go through an optical fiber and briefly discussed the difference between single-mode and multi-mode fibers.

Afterwards there was a seminar on getting into graduate school, specifically to continue science research. A panel of professors and the dean of the graduate school described the importance of the six key components: your undergrad GPA (especially from junior and senior year), your GRE score (mainly the quantitative and verbal combined), your personal statement (highlighting your passion for a subject), your letters of reference (one of which should definitely come from your research advisor), your on-campus interview (at which you should also ask plenty of questions to demonstrate your engagement and interest in the field), and (most importantly) your previous research experience.

Then we had an optics group meeting in the conference room where each of us discussed our present work. While I was discussing my interest in the creation of Bessel beams using a spatial filter, Professor Metcalf asked if I had grasped the concept of “spatial frequencies” and what it actually means to decompose an image into them. I understood that mathematically the spatial frequency, as the inverse of the wavelength, is the number of cycles of a wave per meter (similar to how the temporal frequency, as the inverse of the period, is the number of cycles of the wave per second). It's easy to see this on the graph of a sine wave; however, I realized that it was difficult to picture how a 2D object in the spatial domain could be decomposed into a pattern of bright and dark spots in the frequency domain, as seen in this example. The Fourier transform describes the object in terms of the individual spatial frequencies that make it up, and by filtering these frequencies you can change the appearance of the image. But it's still confusing to think of an object being described by 2D Fourier analysis. Tomorrow I'm going to look into this some more to see if I can better understand the visual application of this seemingly abstract concept.
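To make the idea concrete for myself, I tried a tiny numerical experiment with NumPy's FFT (the test pattern is made up): an image of pure vertical stripes transforms into just two bright spots, sitting at plus and minus the stripe frequency.

```python
import numpy as np

# A toy "image": vertical sinusoidal stripes, exactly 8 cycles across the frame.
N = 64
x = np.arange(N)
img = np.cos(2 * np.pi * 8 * x / N) * np.ones((N, 1))   # shape (64, 64)

# 2D Fourier transform, shifted so zero frequency sits at the center.
F = np.fft.fftshift(np.fft.fft2(img))
mag = np.abs(F)

# Essentially all the energy lands in two symmetric spots at +/-8 cycles
# along the horizontal (kx) axis, and 0 along the vertical (ky) axis.
peaks = np.argwhere(mag > mag.max() / 2)
print(peaks)        # two spots: (ky, kx) = (32, 24) and (32, 40)
```

A pattern with more stripe directions or sharper edges would light up more spots farther from the center, which is exactly the "pattern of bright and dark spots" in the example.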

Later in the day I also helped out Ariana with some of the basics of Fourier series and then showed her some examples on the oscilloscope (the same useful examples that Dr. Noé had shown me, such as comparing the complex sound wave of your voice to the single sine wave of a tuning fork or a whistle).
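The same demonstration can be mimicked numerically: a Fourier series builds a complicated periodic waveform out of pure sine tones. As a sketch, using a square wave as the stand-in "complex" signal:

```python
import numpy as np

# Build a square wave from its Fourier series: only odd harmonics appear,
# with amplitudes 4/(pi*n). More terms -> sharper corners.
t = np.linspace(0, 1, 1000, endpoint=False)
square = np.sign(np.sin(2 * np.pi * t))

def partial_sum(n_terms):
    s = np.zeros_like(t)
    for k in range(n_terms):
        n = 2 * k + 1                                   # odd harmonic number
        s += (4 / (np.pi * n)) * np.sin(2 * np.pi * n * t)
    return s

# The mean-square error shrinks as harmonics are added.
for n_terms in (1, 5, 50):
    err = np.mean((partial_sum(n_terms) - square) ** 2)
    print(n_terms, err)
```

With one term it is just a sine wave (like the tuning fork); with many terms it approaches the complex waveform, much like a voice on the oscilloscope.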


Tuesday 26 June 2012

Today I read through Benjamin's LTC project Diffraction Grating: Can We Detect the Doublet. He had examined the use of a diffraction grating in spectroscopy. Spectroscopy is the analysis of the radiated energy of an object by looking at its emission spectrum. The “sodium doublet” that Benjamin looked at is a pair of sodium spectral lines that come from two very closely spaced energy levels (split by the interaction between the electron's spin and its orbital motion), so the two transitions have different but very, very similar wavelengths. He was able to resolve these two lines with a pair of toy glasses (the kind that give a light source a rainbow of halos) that had diffraction gratings for “lenses.” A diffraction grating basically spreads out the wavelengths of light that make up a source. It works like the double-slit experiment, in which the light propagating through multiple slits interferes to create a pattern of high and low intensities; for the grating, however, there is a very large number of slits, so the resulting interference pattern becomes very narrow peaks of intensity (i.e., the spectral lines).
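A quick way to see whether a grating can split the doublet is the chromatic resolving power, R = λ/Δλ = mN, where m is the diffraction order and N the number of illuminated grooves. A sketch with the standard D-line wavelengths (the groove-density figure in the comment is my own illustrative number, not from Benjamin's project):

```python
# Chromatic resolving power of a grating: R = lambda / dlambda = m * N,
# with m the diffraction order and N the number of illuminated grooves.
lam1, lam2 = 589.0, 589.6          # sodium D-line wavelengths in nm
R_needed = lam1 / (lam2 - lam1)    # roughly a thousand

def grooves_needed(order):
    return R_needed / order

# In first order, about a thousand grooves must be illuminated -- e.g.
# roughly 2 mm of a 500 line/mm toy-glasses grating (illustrative numbers).
print(R_needed, grooves_needed(1))
```

So even a cheap grating resolves the doublet easily, as long as the beam illuminates a couple of millimeters of it.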

I did some research on the concepts behind Bessel functions, since I was having trouble grasping what the equations actually meant. As I learned from Kowalczyk's article, the Bessel beam created from diffraction of a ring source of light is directly proportional to the zero-order Bessel function. The general Bessel function is a cylindrical function described by a converging series (the first kind of solution to the Bessel differential equation). The different “orders” are parameters that distinguish the various solutions, and they govern the shapes of the graphs: an even-order Bessel function is an even function, an odd-order function is an odd one, and for real orders the curve's amplitude generally decays like a damped oscillation as its argument grows.
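To check my reading of the graphs, scipy.special provides these functions directly. A small sketch confirming the damped-oscillation envelope and the even/odd symmetry:

```python
import numpy as np
from scipy.special import jv      # Bessel function of the first kind, J_nu(x)

x = np.linspace(0.5, 60, 4000)
j0 = jv(0, x)

# J0 looks like a damped oscillation: its extrema shrink toward zero,
# staying under the large-argument envelope sqrt(2/(pi*x)).
envelope = np.sqrt(2 / (np.pi * x))
print(np.all(np.abs(j0) <= envelope))

# Parity matches the order: J0 is an even function, J1 is odd.
print(np.isclose(jv(0, -2.5), jv(0, 2.5)), np.isclose(jv(1, -2.5), -jv(1, 2.5)))
```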

I also was curious about what a “CCD” camera actually was, since it seems to be used often with collecting image data, so I did some quick Wikipedia research (and cross checked with an information page from the University of Oregon). The CCD is a charge-coupled device that contains a number of image sensing pixels. Each pixel is a photosensitive cell made of a semiconductor material that gives off electrons when a photon strikes it. The cell collects these charges and then shifts them within the device to an area where they can be converted into digital values.

I read through most of the article Demonstration experiments on nondiffracting beams generated by thermal light by Lorenzo Basano et al. At one point it started talking about spatial coherence and temporal coherence, and I realized that I wasn't too sure about the distinction between the two. A section in K. K. Sharma's book Optics: Principles and Applications makes the difference easy to picture. Temporal coherence is a longitudinal coherence; it occurs when two points along the direction of propagation (the z-axis) possess a fixed phase relationship. Spatial coherence depends on the physical size of the light source and on whether its points are radiating together; it occurs when two points lying in the transverse plane remain correlated in phase over time. Therefore, the “nondiffracting” Bessel beam is spatially coherent, since its transverse intensity profile is independent of the z-axis.

The apparatus can be used with either a HeNe laser or a 150 W halogen lamp (halogen lamps can run at higher temperatures because a halogen-cycle chemical reaction prevents the tungsten filament from evaporating). The light source is focused through a microscope objective and then through a pinhole aperture. The light is then sent through a transparent annular ring (made from a photographically reproduced black-and-white drawing), focused by a converging lens, and collected by a CCD camera as a quasi-Bessel beam. This setup can be used to study the difference in spatial intensity and self-reconstruction ability between laser light and the halogen lamp, the effect of large and small pinholes, superluminality, and optical coherence theory.

The two setups described by Kowalczyk and Basano are fundamentally similar: a light source is sent through a small aperture (the center of a washer in the first, a pinhole in the second); it is spatially filtered to a certain extent (Kowalczyk uses a washer with its center filled in, suspended concentrically in an iris diaphragm, while Basano uses a black-and-white ring image); and the Bessel beam is the product of a final diffraction of this annular light source (detected a certain distance from the second lens of the 4-f system in Kowalczyk's, and brought together a certain distance from a converging lens in Basano's). However, it seems like the setup Kowalczyk describes (see the description below in 25 June 12) produces a beam truer to an actual Bessel beam (though a true Bessel beam is impossible to make, since it would require an infinite amount of power). The setup described by Basano still seems very useful for becoming acquainted with the peculiar properties of this type of beam.


Monday 25 June 2012

This morning Dr. Noé asked me to join him at the Simons program breakfast to meet Ariana, a high school student who will be working with us in the Laser Teaching Center this summer. It seems like she has a very broad spectrum of interests, like me, so I'm excited that she'll be joining us. After the breakfast, we walked over to the lab (luckily the torrential downpour from this morning had let up!) and had a group meeting to discuss research notebooks and our interests in optics. Then I spent some time helping Marissa and Jonathan try to realign the single-mode optical fiber, by walking the beam with the position and angle mirrors.

I finished the theory section on how to create a ring source (which can then be diffracted to produce a Bessel beam, see 21 June 12 below) from the Generation of Bessel beams using a 4-f spatial filtering system article. The method employs a 4-f system (similar to Lidiya's spatial filtering project setup), in which there are two lenses that facilitate a two-stage operation of successive Fourier transforms. Kowalczyk et al. designate three planes: the object plane, the Fourier plane (between the two lenses), and the image plane. The object plane is the input focal plane for the first stage, the Fourier plane is the output focal plane for the first stage and input focal plane for the second stage, and finally the image plane is the output focal plane for the second stage. The input plane (in both instances) is converted into the output field by a spatial Fourier transform, and we can spatially filter the image by altering the diffraction pattern in between these two fields. The goal is to transmit only the high spatial frequencies that compose the sharp edge of the object's field.

This can be done with an annular mask that has a specific ratio of outer radius to inner radius (3.84 was used in this case). A very large outer radius allows more high spatial frequencies to pass, meaning the image will have a very sharp edge, and the graph of amplitude versus image radius will display the Gibbs phenomenon at the edge of the aperture. The inner radius determines the overall width of the filtered edge, because it suppresses the lower spatial frequencies.
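The whole filtering operation can be imitated with FFTs on a toy grid. In this sketch the grid size and the mask's inner radius are my own illustrative choices; only the 3.84 outer/inner ratio is taken from the paper:

```python
import numpy as np

# Toy version of the 4-f spatial filtering idea: Fourier transform the
# object, keep only an annulus of spatial frequencies, transform back.
# Grid size and inner radius are illustrative, not Kowalczyk's values.
N = 256
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
r = np.hypot(x, y)

obj = (r < 40).astype(float)                  # uniformly illuminated disk

F = np.fft.fftshift(np.fft.fft2(obj))         # "Fourier plane"
mask = (r > 8) & (r < 8 * 3.84)               # annular high-pass filter
img = np.fft.ifft2(np.fft.ifftshift(F * mask))
I = np.abs(img) ** 2                          # intensity in the "image plane"

# The low frequencies (the flat interior) are blocked, so what survives
# is mostly a thin bright ring at the sharp edge of the disk, r ~ 40.
edge = I[(r > 37) & (r < 43)].mean()
interior = I[r < 20].mean()
print(edge / interior)
```

Shrinking the inner radius lets more of the flat interior through and the ring fattens, which is the width effect described above.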

I also read through the apparatus section and made a diagram for myself based on the focal measurements and object/filter radii that Kowalczyk et al. used. A HeNe laser beam is sent through a pinhole to improve the mode quality (by means of two polarizers, two lenses, and a mirror). The beam is then collimated with a third lens and sent through the object, which is a steel washer mounted on a glass slide with an iris diaphragm fitted around the outside to keep excess light from leaking around the washer. The next part is the 4-f spatial filtering system, in which the light rays are sent through a compound-planoconvex lens, a spatial filter, and a second compound-planoconvex lens, all separated by one focal length. Then, about 9 cm past the image plane, the Bessel beam forms from diffraction of the thin ring source and is recorded by a CCD camera.

It's a complicated setup; however, if possible, I'd like to see whether I can build it to create a Bessel beam. There's also supposedly a simpler way to produce one, using thermal light, which I'll read about tomorrow.


Sunday 24 June 2012

I read through the abstract of Jonathan Wu's project, Fourier Transform Spectroscopy, which discussed the Michelson interferometer. He described how Fourier transform spectroscopy plays an important role in Optical Coherence Tomography (OCT), a non-invasive imaging technique that uses a Michelson interferometer. It works with light sources that have a very short coherence length (that is, a short distance over which the various light waves stay in step). On my first day in the LTC, we did a project with one of these interferometers (see 12 June 12 below) where we altered the path length of one of the beams of light and counted the number of passing fringes in the interference pattern to work backwards to the laser's wavelength. But now, after reading through this abstract, I've come to realize that there's so much more to the apparatus: the interference pattern recorded as a function of path-length difference is the Fourier transform of the spectral distribution of the light (intensity versus wavelength).
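The working-backwards step is just arithmetic: each half-wavelength of mirror travel shifts the pattern by one fringe. A sketch with made-up readings (not our actual lab numbers):

```python
# Michelson interferometer wavelength estimate: moving one mirror by a
# distance d changes the path length by 2d, so one fringe passes for every
# half-wavelength of mirror travel: lambda = 2 * d / N_fringes.
# The numbers below are illustrative, not our actual lab values.
mirror_travel_um = 25.0          # mirror displacement read off the micrometer
fringes_counted = 79             # fringes counted while the mirror moved

wavelength_nm = 2 * mirror_travel_um * 1e3 / fringes_counted
print(round(wavelength_nm, 1))   # close to 633 nm, consistent with a HeNe laser
```

Counting more fringes over a longer travel reduces the relative error from miscounting by one.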

This leads into Doug Broege's project, Observing the Sodium Doublet and Laser Diode mode structure with a Czerny-Turner Spectrometer. The spectrometer he used, which was donated by Dr. Wagshul, separated the components of a particular light source according to wavelength. He discussed how he rigged a set of mirrors and a diffraction grating to project a light source onto an array of photodiodes. By connecting the photodiodes to an oscilloscope, he was able to view the relative intensities of different wavelengths of light and how this relationship changed over time as the light source warmed up. Photos of his apparatus and oscilloscope results can be seen here. The spectrometer displayed the separate modes running through a multimode laser and therefore could be used to determine more of its properties. Though the results are obtained by a different means than in Jonathan Wu's project, Doug's project seems very similar to Fourier transform spectroscopy, since it deals with analyzing the spectral components of a light source.


Friday 22 June 2012

Today I read most of the second section of the theory in Generation of Bessel beams using a 4-f spatial filtering system. I am going to finish the last bit on Monday morning, so I will wait until then to post a complete summary. Then in the afternoon we spent some time cleaning the lab. First Dr. Noé gave us a tour of where things should go and showed us which areas needed the most organization. I spent some time straightening up the electronics area: neatening some of the drawers, putting away objects that had been left out, and rearranging the resistor drawers by dividing them up and labeling the different resistance values.

While I was sorting out the resistors, Dr. Noé explained to me that some of the equipment in the lab has special markings to show which pieces were donated by Dr. Mark Wagshul, such as the cabinet I was organizing. Currently, Dr. Wagshul is at the Albert Einstein College of Medicine in the Department of Radiology and the Department of Physiology and Biophysics. His ongoing research efforts include examining the uses of Magnetic Resonance Imaging (MRI) to image blood flow and cerebrospinal fluid flow in the brain, the use of MR spectroscopy to quantify concentrations of common metabolites in the brain, and the manipulation of pulse sequences for MRI to obtain new types of informative images. (That last part is something I can again connect to the medical physics class I took last semester, since we went over how important the sequences of radiofrequency pulses are. For instance, if you want to gain information about the relaxation times of the nuclear spins of a type of atom, you first use a pi pulse to rotate the spins by 180 degrees. However, if you want to gain information about the dephasing times of the spins, you first use a pi/2 pulse to excite the spins and then a pi pulse to produce a spin echo that you can measure.)

Dr. Noé then got onto the topic of Dr. Bill Hersman, a physicist at UNH who discovered that it is possible to use MRI techniques to image the lungs with polarized xenon gas. It was already commonly known that helium (though more expensive) could be polarized by mixing it with vaporized rubidium, polarizing the rubidium with a laser, and then having the rubidium transfer its polarization to the helium atoms before condensing back into a solid. But the problem was that rubidium didn't polarize xenon (a more accessible gas in the medical field) as efficiently. So he created an elongated apparatus that instead pointed the laser against the direction of gas flow to polarize the rubidium better, thereby polarizing 60-70% of the xenon gas. The part I found most interesting about Dr. Hersman's story was that he came up with this apparatus with much less funding than the other leading researchers around the world who were working on the same task. Because he was on a strict budget, he had to come up with a different approach from what was widely anticipated to be the answer, and it turned out that his creative approach is what succeeded. In general, I think it is important to always look at all angles of a situation before diving into the obvious solution, because sometimes it's necessary to use unconventional reasoning.

I found these medical physics tangents very intriguing, since this is a branch of physics that I’ve become increasingly interested in. What I’ve currently been looking into on my own is the connection between various medical imaging techniques and their application to conservation science. In other words, the use of nuclear magnetic resonance imaging or x-ray tomography to non-invasively study the degradation, restoration, and conservation of elements of cultural heritage: artifacts, works of art, or important structures. Conservation science has become an increasingly popular field in Europe, especially in Italy. A particular example is at the University of Bologna where there is a Magnetic Resonance of fluids in Porous Media (MRPM) research group that uses MRI to test the efficacy of certain hydrophobic treatments to elements of Italy’s cultural heritage. As I mentioned in my biography, I’ve always had a love of art and art history, and it is fascinating when these topics can be innovatively incorporated into physics (or vice versa).


Thursday 21 June 2012

I was able to get my Ideas and Resources page up and running after looking up how to add a webpage to my directory. Right now it contains links to the sites and papers I’ve found most helpful and/or interesting. As I begin to fine-tune my project idea, I’ll probably add/remove some of the links.

We spent some time today in the conference room going over past LTC projects; Dr. Noé pointed out some of the ones that pertained to the topics Marissa, Jonathan, and I are interested in. One of the projects that stood out for me was Simone and Daniel's Understanding "Walking the Beam." They analyzed the process of adjusting two mirrors (the position mirror and the angle mirror) in order to send a beam through two consecutive pinhole apertures. It can be a tricky process to make sure everything is aligned; Simone and Daniel described how you have to go systematically back and forth between the two mirrors, optimizing the position mirror, then the angle mirror, then back to the position mirror, and so on. Another interesting project was Sarah Campbell's Single Mode Optical Fiber, where she measured the intensity profile of a Gaussian beam over a wide dynamic range by first measuring the main lobe and then blocking out that part in order to measure the much less intense side lobes. Also, a random but important thing to note: when you're cleaning optical equipment, such as a mirror, you are only supposed to wipe once, in one direction, to avoid altering its optical properties.

I looked over the abstract of Annie's LTC project, Investigating optical vortices created with a single cylinder lens, where she describes how an astigmatic lens causes two orthogonal portions of a laser beam to diverge and gradually become circular in the far field. This process converts the Hermite-Gaussian (HG) laser mode to the Laguerre-Gaussian (LG) mode, a vortex with a dark spot in the center. The hollow part has zero intensity because the helical phase fronts cancel each other out there. She also found that this vortex beam disappears in the focal region, where the HG mode reappears. There's a problem with the link to her actual Intel report; however, I found it with a simple Google search and will read it over tomorrow.

Today I studied part of Generation of Bessel beams using a 4-f spatial filtering system by Jeremy M. D. Kowalczyk. The publication discussed how to make a Bessel beam from the diffraction of a thin ring source of light. The ring source comes from a 4-f spatial filtering setup in which a uniformly illuminated circular aperture is subjected to a high-pass filter. (As I already learned from Lidiya's project, this type of filter enhances the edges of the image by blocking out the lower spatial frequencies located in the middle of the diffraction pattern.) It turns out that the zero-order Bessel beam is a uniform superposition of plane waves (each with a complex amplitude and an angle of propagation relative to the optical axis); with a two-dimensional spatial Fourier transform, you can calculate the angular spectrum of plane waves for the beam.
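That superposition picture can be checked numerically: averaging plane waves whose transverse wavevectors are spread uniformly around a ring of radius k_r (the angular spectrum of a thin annulus) should reproduce J0(k_r·r). A small sketch with arbitrary units:

```python
import numpy as np
from scipy.special import j0

# Zero-order Bessel beam as a uniform superposition of plane waves whose
# transverse wavevectors all lie on a ring of radius k_r (arbitrary units).
k_r = 5.0
phi = np.linspace(0, 2 * np.pi, 720, endpoint=False)   # directions on the ring

x = np.linspace(-3, 3, 121)                            # sample along the x-axis
field = np.mean(np.exp(1j * k_r * np.outer(x, np.cos(phi))), axis=1)

# The field matches the zero-order Bessel function J0(k_r * x).
err = np.max(np.abs(field.real - j0(k_r * x)))
print(err)     # tiny: just the numerical error of the discrete average
```

This is the identity J0(z) = (1/2π)∫ exp(iz cos φ) dφ in disguise, which is exactly why a thin ring in the Fourier plane diffracts into a J0 profile.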

I read through the first part of the theory, which describes creating a Bessel beam from the ring source. When I first started the article, I struggled for a bit with the derivations of the equations. I understood some of the math would be difficult, but this looked downright impossible! I decided to look back at the article on the computer, and it turned out that all of the parentheses, integrals, and summation signs had not come out on the printed copy I was reading from. Strange. After hand-writing those in, the math became easier to follow. It starts with the Fresnel diffraction equation in cylindrical coordinates; this accounts for the curvature of the wavefront approaching the diffraction field, and for how the phase of each individual portion of the light varies with radius (assuming the input field is azimuthally symmetric). Through simplifying, substituting in the Bessel-function cosine identity, and some integration, you arrive at the Hankel transform, which is the two-dimensional Fourier transform of a circularly symmetric function; it is a weighted sum of an infinite number of Bessel functions of the first kind. Using a delta function to represent the thin ring source of light, the result showed that the diffracted field is directly proportional to the zero-order Bessel function.

Evidently the equation does not produce a perfect Bessel beam, since it violates two of the requirements: planar wavefronts (the equation was derived from a spherical wavefront) and a constant spot size throughout propagation (the central lobe has a radius that increases with distance). Using a thin lens to collimate the diffracting waves would convert the beam into a standard Bessel beam with a uniform phase (plane wavefronts) and constant amplitude (constant spot size). Though Kowalczyk et al. did not use a collimator in their setup, they assure the reader that it all still works.

Tomorrow I’ll read the second part of the theory (which discusses how to use a 4-f spatial filtering system to create the ring source), and describe the experimental setup. There was also another interesting article I found about creating Bessel beams with a thermal light, and Jonathan told me about an article that described how to create complex beam shapes with an array of individual phases (which sounded like the optical version of the sonic screwdriver I read about yesterday).


Wednesday 20 June 2012

We had our first REU group meeting today, in which everyone in the group gave a brief summary of the project they were working on. I explained my general progression so far: from reading the book on Fourier analysis, to becoming interested in its role in diffraction patterns and spatial filtering, to trying to correct optical aberrations with amplitude and phase filters, and now to the idea of nondiffracting Bessel beams. David is working on a theoretical solid-state physics project in which he's currently using computer simulations to see what happens when silicon is substituted into gallium nitride. June is with the atomic, molecular, and optical physics group and is currently doing ray tracing with Mathematica in order to eventually understand how to build a diode laser. Yakov has been working on a project at Brookhaven Lab since back in October and is currently using C++ programming to figure out the best materials for an electron beam. Kate and Sarah have so far learned how to create a graph in Linux to chart the luminosity versus effective temperature of different-sized protostars (measured in terms of solar masses) over an extended period of time. Jonathan explained how he is interested in optical vortices, especially in conjunction with their unusual application of the Maxwell equations, and then Marissa described her fascination with optical fibers and the possibility of sending a vortex through one. We'll be meeting again every Wednesday to report our progress.

During the first weekly pizza lunch, Professor Dominik Schneble explained the process of creating ultracold atoms, which was the basic principle behind Bryce Gadway's thesis defense from Monday. The atomic, molecular, and optical physics group is currently working with Bose-Einstein condensates (BECs): a phase of matter in which bosons (one of the fundamental classes of subatomic particles) occupy the lowest quantum state. They are created by laser cooling and magneto-optical trapping (MOT). The AMO group then uses optical lattices to study properties of the BECs. An optical lattice is created from two opposing beams that give the bosons momentum through continuous absorption and stimulated emission. There is a certain level of uncertainty in the energy because the lattice is pulsed on for a very short period of time; this makes sense in connection with the Fourier transform, since the shorter the pulse, the greater the spread of frequencies (and hence energies) in its transform. It was interesting to learn that every time they make a measurement the BEC is destroyed, and they have to create another one (which only takes about one minute!) to do the next measurement. After the lecture we also got a brief tour of the AMO lab, and it was pretty cool to see all of the actual instrumentation.
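That pulse-length/energy-width tradeoff is easy to see with a numerical Fourier transform: compare the spectra of a short and a long Gaussian pulse (the widths here are arbitrary illustrative values, nothing to do with the AMO group's actual lattice pulses):

```python
import numpy as np

# Fourier time-bandwidth relation: the shorter the pulse, the wider the
# spread of frequencies it contains. Compare Gaussian pulses of two widths
# (arbitrary time units).
t = np.linspace(-50, 50, 4096)
dt = t[1] - t[0]
freqs = np.fft.fftfreq(t.size, d=dt)

def spectral_width(sigma_t):
    pulse = np.exp(-t**2 / (2 * sigma_t**2))
    spectrum = np.abs(np.fft.fft(pulse)) ** 2
    # rms width of the power spectrum
    return np.sqrt(np.sum(spectrum * freqs**2) / np.sum(spectrum))

short, long_ = spectral_width(0.5), spectral_width(5.0)
print(short / long_)    # a 10x shorter pulse is about 10x broader in frequency
```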

Dr. Noé gave me an article to read that he found in this month’s issue of Physics Today: Classical vortex beams show their discrete side, by Ashley G. Smart. It reported the invention of a sonic screwdriver, which is an ultrasound device that can generate high-angular-momentum acoustic vortices. The research group from the University of Dundee in the UK was able to create a sound wave with three intertwined helical wavefronts by individually adjusting the elements of a 32 x 32 array of transducers. With a water chamber device that could measure torque exerted by the acoustic vortex and the radiation pressure, they could determine the relative amounts of orbital angular momentum and beam energy, respectively. I had to look up orbital angular momentum (OAM), since it was something I was unfamiliar with. It describes the actual revolving motion of the beam, whereas the spin angular momentum (SAM) is a product of the polarization of the beam. As is the usual case nowadays, there was a connection to the Fourier mathematics I’ve been studying, since the shape of the overall acoustic vortex is really a superposition of each wave produced by each individually phased transducer. Overall it was very interesting stuff! The end of the article suggested advances in developing complex sound beam shapes (such as the nondiffracting Bessel beams) could be useful in ultrasound surgery.
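Out of curiosity I sketched what that element-by-element phasing might look like numerically. This is just my own toy model (not the Dundee group’s actual drive scheme): each transducer is driven with a phase proportional to its azimuthal angle about the array center, so the drive phase winds around ell times per loop, which is what builds the ell intertwined helical wavefronts.

```python
import numpy as np

def vortex_phases(n, ell):
    """Phase offset for each element of an n x n transducer grid so the
    emitted field approximates an acoustic vortex of charge ell.
    Each element is driven with phase ell * phi, where phi is the
    element's azimuthal angle about the array center."""
    idx = np.arange(n) - (n - 1) / 2.0   # element coordinates, centered
    x, y = np.meshgrid(idx, idx)
    phi = np.arctan2(y, x)               # azimuthal angle of each element
    return np.mod(ell * phi, 2 * np.pi)  # drive phase in [0, 2*pi)

phases = vortex_phases(32, 3)            # the 32 x 32, ell = 3 case above
# Going once around the array center, the drive phase wraps three times,
# corresponding to the three intertwined helical wavefronts.
```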

I also read through Will’s LTC project: Modeling Diffraction by a Circular Aperture Illuminated by a Diverging Light Beam. He described the accidental discovery of an unusual diffraction pattern after passing a beam from an optical fiber through a pinhole. It started out as a dark spot in the center, and as the distance between the tip of the fiber and the pinhole decreased, a more complex pattern of concentric dark rings developed. Will was able to model the observed intensity distributions at certain optical fiber-to-pinhole distances by squaring the Fourier transform of the pattern (which was the product of the Gaussian intensity function, the phase function, and the zero order Bessel function). His report also contains a very helpful ray diagram to visualize the diffraction pattern in the Fourier plane of a 4-f spatial filtering system.

First on my list tomorrow is to read through more of the Bessel beam articles I printed and also review some of the mathematics behind Bessel functions.


Tuesday 19 June 2012

Today Dr. Noé helped us to better understand the Polaroid CP-70 contrast enhancement filter from last week. When you hold the filter up to a mirror and look straight through one side, you can see your image clearly in the mirror. However, when you turn it over to the other side and look straight through it to the mirror, everything appears dark. The reason the reflection was blocked only one way is that the Circular Polarizer (CP) is a sandwich of two elements: a linear polarizer and a quarter-wave plate. When light travels through the linear polarizer first, it emerges polarized at 45 degrees to the wave plate’s fast axis, and then after going through the quarter-wave plate, the x and y components acquire a 90-degree relative phase shift, creating circularly polarized light of a certain handedness (either left or right; we don’t know which for this particular filter, though it is possible to figure out with a few reference polarizers). After this light bounces off of the mirror, it comes back circularly polarized with the opposite handedness. This means after going through the quarter-wave plate again, the circular polarization is undone and the linearly polarized light that emerges (perpendicular to the original linear polarization) is blocked. (If you look through the other side of the filter, with the quarter-wave plate in front of your eyes, there is no change in your reflection, since the unpolarized light reaching your eye is essentially unaffected by the quarter-wave plate alone.) This filter can be used on a computer screen to block reflections from outside light sources, therefore eliminating screen glare.
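To convince myself of this, here is a minimal Jones-calculus sketch. It uses the common fixed-frame convention where the handedness reversal at the mirror is captured by the light simply passing through the same quarter-wave plate a second time (conventions for modeling the mirror vary, so this is a sketch rather than a rigorous treatment):

```python
import numpy as np

# Jones matrices in a fixed lab frame; fast axis of the QWP along x
P45 = 0.5 * np.array([[1, 1], [1, 1]])   # linear polarizer at 45 degrees
QWP = np.array([[1, 0], [0, 1j]])        # quarter-wave plate
E_in = np.array([1.0, 0.0])              # some incident polarization

# Single pass: polarizer then QWP -> circularly polarized light
# (equal |x| and |y| amplitudes with a 90-degree relative phase)
E_circ = QWP @ (P45 @ E_in)

# Round trip to the mirror and back: the second pass through the QWP
# makes QWP @ QWP act as a half-wave plate, flipping 45-degree linear
# light to -45 degrees, which the 45-degree polarizer then blocks.
E_out = P45 @ (QWP @ (QWP @ (P45 @ E_in)))
print(np.abs(E_out))   # ~ [0, 0]: the reflection is extinguished
```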

The articles I found on phase filtering used to reduce the effects of optical aberrations turned out to be mainly theoretical and fairly complex. Here is what I’ve come up with so far today:

(1) Firstly, I found a project done by Xueqing titled The Aberration Correction of a Diode Laser after doing a search on the Laser Teaching Center website for projects that dealt with aberrations. The student’s objective was to make a diode output beam as round as possible, since the beam normally diverges differently in different planes and is subject to optical aberrations. The key points from the project: (A) the aberrations were effects of the diode laser itself; in other words, they were not caused by an external deformed lens, but rather the imperfect light source. (B) Correction for these aberrations was done with the use of a lens, not by filtering. However, in addition to the lens, she did use an anamorphic prism to allow for the magnification of the beam size along one axis, but not the other, which I thought was interesting.

(2) Amplitude and phase filters for mitigation of defocus and third-order aberrations (Samir Mezouari, 2004) described a theoretical analysis of possible filter designs. The amplitude filter is easier to create than a phase filter and achieves the same correction, though at the expense of light transmission. There were two amplitude examples discussed: the annular aperture method (which uses a very small effective pupil) and the use of shaded filters (which is a method that uses the whole aperture). The publication then discussed how wavefront coding (carried out by means of an aspherical phase plate) could be used to encode the incoming wavefront so that it would be possible to restore the image (with digital image processing) over a large range of defocus. This method makes it theoretically possible to reach diffraction-limited resolution, which means that for the most part diffraction alone determines image quality (since the aberrations are greatly diminished).

(3) I read through some of Spatial filtering in optical data-processing (Birch, Rep. Prog. Phys., 1972), which is a publication that Lidiya cited in her project. The section on phase filters describes how they are used to advance/retard the relative phase of transmitted light and a few methods of fabrication. The one that seemed the simplest was the use of Vectograph film, which produces a phase shift from two sensitive layers separated by a supporting base layer. Each sensitive layer passes one polarization freely and transmits the perpendicular polarization depending on the properties of the layer. Usually the two polarizations will be given a 180-degree phase shift. Besides optical aberration correction, phase filters can be used for enhancing discontinuities at slits and edges, while at the same time suppressing the illumination of uniform areas.

Since Dr. Noé says it’s best to have multiple project ideas, I’ve also started looking at some articles about Bessel beams. So far what intrigues me is that these beams are nondiffracting and self-reconstructing. After reading more about their properties, I’d like to brainstorm new and interesting ways to analyze Bessel beams.


Monday 18 June 2012

Dr. Noé recommended that I read Lidiya’s report Spatial Filtering in Optical Image Processing since I was interested in Fourier analysis in connection with optics. The project was focused on Abbe’s Theory of Image Formation, which states that objects illuminated by a plane wave form diffraction patterns in the Fourier plane (back focal plane) of an objective lens. The squared magnitude of the Fourier transform (squared because our eyes can’t detect the sign or phase of a light wave; we just see the brightness) is the intensity distribution of an image’s diffraction pattern. In other words, the diffraction pattern is a visual representation of the optical signal divided in terms of spatial frequencies, all with a different weight of importance to the overall image.

Through filtering certain parts of the diffraction pattern, Lidiya observed how different spatial frequencies of the pattern contributed to certain components of image formation. Obstructing the low spatial frequencies (located on the inside of the pattern) with a high-pass filter enhanced the edges. Blocking the high spatial frequencies (located on the outer part of the pattern) with a low-pass filter blurred the image details. Lidiya had used a Ronchi grating as her object. I think it would be interesting to try a similar project with a more complex object, such as a design or picture printed on transparency paper. Though now that I think about that again, it would be difficult since the laser beam would be comparatively too small; I would have to devise a way to expand the beam.
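The high-pass/low-pass idea is easy to mimic numerically. Here is a rough sketch of my own (a toy analogue of the 4-f filtering, with a made-up square-wave “grating” as the object, not Lidiya’s actual setup): masking everything but the center of the 2-D Fourier transform blurs the bars away, while masking the center keeps mostly the edges.

```python
import numpy as np

# A toy "object": a Ronchi-like grating of vertical bars on a 128 x 128 grid
N = 128
x = np.arange(N)
obj = ((x // 8) % 2).astype(float)       # bars of width 8, period 16
img = np.tile(obj, (N, 1))

F = np.fft.fftshift(np.fft.fft2(img))    # the "diffraction pattern" plane
                                         # (the observed intensity is |F|**2)

# Circular masks around the zero-order (center) spot
ky, kx = np.indices((N, N)) - N // 2
r = np.hypot(kx, ky)
cutoff = 6                               # below the grating's fundamental

low_pass = np.where(r <= cutoff, F, 0)   # keep only low spatial frequencies
high_pass = np.where(r > cutoff, F, 0)   # keep only high spatial frequencies

img_lp = np.fft.ifft2(np.fft.ifftshift(low_pass)).real   # bars blurred out
img_hp = np.fft.ifft2(np.fft.ifftshift(high_pass)).real  # edges enhanced
```

Since the two masks partition the Fourier plane, the two filtered images add back up to the original, which is a nice sanity check.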

I am also interested in Mara’s unrelated project on the Optical Activity in Sugar Solutions. She investigated the polarization of light due to chiral molecules as the solution density and path length were altered. When light beams of varying wavelength were sent through a sugar solution, a phase difference was introduced to the x and y components, which rotated the linearly polarized light. As the solution density increased, so did the rotational angle, and it was found that the waves with the shorter wavelengths were rotated the most.
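For my own notes, the linear dependence Mara saw is Biot’s law: the rotation angle equals the specific rotation times the path length times the concentration. A quick sketch (the sucrose value is a textbook figure for the 589 nm sodium line; the cell length and concentration here are made-up numbers, not Mara’s data):

```python
# Optical rotation in a sugar solution (Biot's law): the rotation angle
# grows linearly with path length and concentration, and shorter
# wavelengths rotate more (rotatory dispersion, roughly ~ 1/lambda^2
# far from any absorption line).
def rotation_deg(specific_rotation, path_dm, conc_g_per_ml):
    """alpha = [a] * l * c, with [a] in deg mL/(g dm),
    path length in dm, and concentration in g/mL."""
    return specific_rotation * path_dm * conc_g_per_ml

# Sucrose is about +66.5 deg mL/(g dm) at 589 nm
alpha = rotation_deg(66.5, path_dm=2.0, conc_g_per_ml=0.3)
print(alpha)   # 39.9 degrees for a 2 dm (20 cm) cell of 0.3 g/mL solution
```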

While trying to combine a few of the topics I’ve become interested in (namely astigmatism, using Fourier mathematics to describe the diffraction pattern before image formation, and spatial filtering to improve image quality), I found the following: Circularly symmetric phase filters for control of primary third-order aberrations: coma and astigmatism (Samir Mezouari, J. Opt. Soc. Am. A, Vol. 23, No. 5, May 2006). I haven’t had a chance to go through the mathematics yet, but the article describes the development of a quartic filter, which is a phase mask that makes use of the interference pattern of phase differences to retard portions of a light beam. This is used to improve image resolution and light gathering, making it possible to correct the coma and astigmatic aberrations. Besides this, I found a number of other publications that discussed similar topics. I’m excited to learn more about it tomorrow and maybe try to develop a project somehow from this: that is, from the use of a phase filter to correct optical aberrations (such as astigmatism, coma, spherical aberration) based on the Fourier transform of the diffraction spectrum.


Sunday 17 June 2012

The Bruce W. Shore lectures lasted for about an hour and a half each on Tuesday, Wednesday, and Thursday of this past week. Bruce Shore is known for his work on the theory of coherent atomic excitation in conjunction with multilevel atoms and incoherence. The title of his lecture series was “Visualizing Quantum State Changes.” As someone who has only taken an introductory course on quantum mechanics, I was able to recognize some concepts, but it was still a little hard to follow. The lectures described ways in which to picture quantum state manipulation (that is, transferring population from one state to another) in Hilbert space (referring to different energy levels instead of an orbital atomic structure).

The first day he described a simple two-state example, in which pulsed energy was used to create a population transfer. A gradual turn-on/turn-off of the pulse resulted in the excited population returning to its original state; however, an abrupt pulse left some of the population in the second state. This was all based on the time-dependent Rabi frequency: the frequency at which the population oscillates between the two states. The second day he discussed a vector model of the situation and how a full population transfer could be achieved with a sequence of two pulses. And then on the third day he discussed the slightly more complicated three-state system, which has two separate Rabi frequencies.
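The resonant two-state case has a simple closed form, P_e(t) = sin^2(Omega t / 2), which already shows why the pulse timing matters: a “pi pulse” transfers the whole population, while a “pi/2 pulse” leaves an equal superposition. A quick numerical check (the 1 MHz Rabi frequency is just an assumed number for illustration):

```python
import numpy as np

def excited_population(rabi_freq, t):
    """Resonant two-state Rabi flopping: P_e(t) = sin^2(Omega * t / 2)."""
    return np.sin(rabi_freq * t / 2) ** 2

omega = 2 * np.pi * 1e6      # assumed Rabi frequency (1 MHz)
t_pi = np.pi / omega         # a "pi pulse": full population transfer
t_pi2 = t_pi / 2             # a "pi/2 pulse": 50/50 superposition

print(excited_population(omega, t_pi))    # -> 1.0
print(excited_population(omega, t_pi2))   # -> 0.5
```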

The part of the lecture that really sparked my attention was on the second day, when he began discussing the change in the vector orientations based on the pi and pi/2 pulses, which reminded me of the pulse sequence used in NMR imaging that I learned about last semester in my Medical Physics course. I’m not sure if during his lecture he had mentioned the application to nuclear spins being oriented in a magnetic field, but if he had, I guess I missed it. Bruce Shore was very good at explaining every small detail and term he used, but at the same time some of the material was hard to digest at the continuous speed he was moving, so I found myself falling behind numerous times.

But now, after sitting down with the material on my own time and coming to the initial realization, it all started to click into place! The whole principle of NMR is moving between two states: equilibrium and excitation of the nuclear magnetic moments (which are proportional to the nuclear spins). This is achieved through a series of radiofrequency pulses, pi and pi/2. Based on the sequence and timing of pulses, information about the part of the body being imaged is gained through both the desynchronization and the relaxation of the spins. The Larmor frequency is the frequency of precession of the transverse nuclear magnetic moments about a static magnetic field, which is along the same lines as the Rabi frequency.

Now that it became so clear and obvious, I feel kind of silly for not making the connection myself during the lecture. However, I guess it was partially due to the fact that in my Medical Physics class, we only briefly mentioned the Bloch equations, and we didn’t use the Rabi frequency, the time-dependent Schrödinger equation, or really any linear algebra for that matter.

I scanned through Bruce Shore’s article Coherent manipulation of atoms using laser light [from Acta Physica Slovaca 58, No.3, 243-486 (2008)] and sure enough he mentioned nuclear magnetic resonance! If I have any spare time this week, I’d like to read more of the publication. Now that I’ve recognized a basic application of the material, I feel like it will be easier to teach myself the parts I didn’t understand about the lecture.


Friday 15 June 2012

Since it was sunny today, we spent some time out of the lab to do a few experiments outdoors. Dr. Noé showed us multiple demonstrations: how a shadow becomes blurrier as the object is moved farther away from the screen (which was what brought forth Marissa’s project last semester), how the sky is polarized from Rayleigh scattering (which can be seen through a circular polarizer), the different focal lengths of various sized lenses, and how the distance between a mirror and screen changes the size of the magnified reflection of light.

(1) Image magnification with a mirror: We used a mirror that was completely covered with the exception of a small hole in the center. Marissa then focused the reflected light onto my notebook, and together we adjusted the distance until the reflected spot had a 1 cm diameter (the size of the spot increased as we increased the distance between the mirror and notebook). The distances from the mirror to the image were 86.4 cm, 91.4 cm, and 81.3 cm. Marissa’s project from last semester determined that the diameter of the image and the distance from mirror to image (along with the diameter of the hole in the mirror’s covering) could be used to solve for the angular size of the sun with a hyperbolic function.
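Playing with the numbers afterward: if the hole is small compared to the spot, the sun’s angular size is roughly the spot diameter over the distance (Marissa’s full hyperbolic fit keeps the hole diameter as a parameter, so this is only a crude small-hole estimate). A rough check with our three distances:

```python
import math

# Small-hole estimate: theta ~ spot diameter / mirror-to-image distance
spot_cm = 1.0
for dist_cm in (86.4, 91.4, 81.3):       # the three distances above
    theta_deg = math.degrees(spot_cm / dist_cm)
    print(round(theta_deg, 2))
# All three come out around 0.6-0.7 degrees, the right ballpark for the
# sun's actual angular diameter of about 0.53 degrees.
```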

(2) Using magnifying glasses to burn a piece of paper: Dr. Noé started out by explaining how the scene in The Lord of the Flies where the boys use Piggy’s glasses to start a fire is not physically possible. Since Piggy was nearsighted, he would have had concave lenses for seeing far distances, and concave lenses don’t focus light to a point. That being said, even with a pair of reading glasses (which are made with convex lenses that do focus light rays to a point), there wasn’t enough intensity in the focused beam to ignite a piece of black paper. The focal length was 80 cm and the convex lens was +1.25 diopters. A diopter is the unit of measurement of optical power and is the reciprocal of the focal length in meters. Optical power is the degree to which a lens converges/diverges light.

There are a couple of factors that come into play: the area of the lens (which determines the intensity of the focused light) and the focal length (which is a function of the index of refraction of the lens, the thickness, and the radii of curvature). With the smallest lens we were able to focus the sunlight to a tiny point (at a focal length of 5 cm), but the small area of the lens meant there wasn’t much intensity being concentrated on the paper. With the largest lens, there was plenty of intensity in the focused light, however the focal length was so long (262 cm) that the projected image was a lot larger than that of the smaller lenses. The magnifying glass had the perfect area and focal length to focus an intense amount of sunlight onto a small spot, successfully burning a hole in the black paper.

(*) Interesting extra research: Just a note about optical power: I thought it was interesting that the optical power of two or more lenses close together is approximately equal to the sum of the individual powers. In addition, the optical power changes when a lens is immersed in a refractive medium, which means the focal length changes too. This reminded me of my time on the swim team at my high school and how everything always seemed slightly magnified when I went underwater with my goggles.
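A quick sketch of both facts, the reciprocal relation and the (approximate) adding of powers for thin lenses in contact:

```python
# Optical power in diopters is 1 / (focal length in meters); for thin
# lenses in contact the powers approximately add.
def power_diopters(focal_length_m):
    return 1.0 / focal_length_m

def combined_focal_length(*focal_lengths_m):
    """Thin lenses in contact: P_total ~ P1 + P2 + ..."""
    total_power = sum(1.0 / f for f in focal_lengths_m)
    return 1.0 / total_power

print(power_diopters(0.80))               # the reading glasses: +1.25 D
print(combined_focal_length(0.80, 0.80))  # two such lenses together: 0.4 m
```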

(*) When I was looking up different types of glasses, since personally I don’t wear them and therefore don’t know much about them, I came across astigmatism (a condition I’d heard about but never fully understood). I found it fascinating that an irregular curvature of the cornea causes the formation of multiple focal points along different planes (making the overall image blurry). Furthermore, it was interesting that a toric lens can be used to correct this condition.


Thursday 14 June 2012

Today after getting through more of the Fourier book I came to understand Fourier mathematics in a whole new way. When I first learned about Fourier analysis, to me it simply meant a way to decompose a complicated wave form into its simpler components. I had learned the different forms for the Fourier Series with its separate coefficient formulas, the concise complex versions of both of these, and then Fourier Transform coefficient formula and its inverse. But after going through the entire derivations again, I realized that I hadn’t actually grasped the conceptual differences that well the first time around. (This may contain a little repetition from my last journal, but I just wanted to set everything straight in the same place.)

The Fourier Series formula does exactly what I remembered: it is used to quantify a complicated periodic wave (the key point is that the wave is periodic — it repeats itself in a finite amount of time), to show how it is the sum of simple waves. It contains three important terms, which represent the three types of simple waves being combined: a constant term a0, cosines, and sines.

To find the individual coefficient formulas, it’s a matter of finding the area under the curve of that specific wave and then dividing by the period (since the area is the amplitude times the period). So we take the integral over one period after multiplying the function by either cosine, sine, or nothing (depending on whether we’re looking for an, bn, or a0), in order to cancel out the net areas of all the other terms.

The Fourier spectrum is a graph of the discrete frequencies and their amplitudes contained in the complicated waveform. (Example application: this is an important tool for analyzing which frequencies make up the defining characteristics of a certain sound. For instance, the first two peaks of a vowel, known as “formants,” are what define the vowel sound.)
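As a sanity check, here is a small numerical version of this idea (my own made-up signal, not actual vowel data): build a wave from two sines and read their frequencies and amplitudes back off the spectrum.

```python
import numpy as np

# Build a "complicated" periodic wave from two simple ones, then recover
# their frequencies and amplitudes from the Fourier spectrum.
fs = 1000                                  # samples per second
t = np.arange(fs) / fs                     # exactly one second of signal
wave = 1.0 * np.sin(2*np.pi*50*t) + 0.5 * np.sin(2*np.pi*120*t)

spectrum = np.abs(np.fft.rfft(wave)) / (fs / 2)   # amplitude spectrum
freqs = np.fft.rfftfreq(fs, d=1/fs)

peaks = freqs[spectrum > 0.1]
print(peaks)     # the two component frequencies, 50 Hz and 120 Hz
```

The spectrum is zero everywhere except at 50 Hz (amplitude 1.0) and 120 Hz (amplitude 0.5), exactly the simple waves that went in.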

Now after using Euler’s formula to create a simplified complex number representation of the Fourier series and its coefficients, we don’t end up with an easier way to do the calculations, but rather an easier way to understand the relationship between a complicated waveform and the simple waves it contains.

The Fourier Transform, used to analyze non-periodic waves, is derived from the complex number representation of the coefficients for a wave with an infinite period. It no longer incorporates the integer multiple “n” in the formula because we aren’t calculating discrete amplitude values; we’re looking at a continuous function of frequency, which is still informative about the relative amplitudes of each frequency in the overall function. To do calculations, we have to define a finite period of time; the longer the period of time, the more closely the graph represents the true shape of the relative amplitudes. In other words, it declares with more certainty the frequency of the wave. The fact that the Fourier Transform incorporates the uncertainty of waves is important to quantum mechanics applications of the wave-like properties of quanta, since their behavior at the subatomic level is uncertain and we use probabilities to quantify the possibilities.
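This time-frequency tradeoff is easy to see numerically. In this sketch (my own toy example, with an assumed 100 Hz tone), a pure tone is observed through windows of different lengths; the width of the spectral peak scales roughly like 1/T, so a ten-times-shorter window gives a roughly ten-times-more-uncertain frequency.

```python
import numpy as np

# Width of the spectral peak of a pure tone observed for a time window_s.
def peak_width_hz(window_s, f0=100.0, fs=4000):
    n = int(window_s * fs)
    t = np.arange(n) / fs
    # zero-pad heavily so the peak shape is finely sampled
    spec = np.abs(np.fft.rfft(np.sin(2 * np.pi * f0 * t), n=16 * n))
    freqs = np.fft.rfftfreq(16 * n, d=1 / fs)
    half = spec.max() / 2
    band = freqs[spec >= half]
    return band.max() - band.min()   # full width at half maximum

print(peak_width_hz(0.1))    # long window  -> narrow peak
print(peak_width_hz(0.01))   # 10x shorter  -> roughly 10x wider peak
```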


Wednesday 13 June 2012

Today, after setting up my bio page, Dr. Noé gave me one of his books to read, Who Is Fourier? A Mathematical Adventure, since he knew I was interested in sound as well as optics. While I had already learned about Fourier analysis in my sophomore-year Vibrations, Waves, and Optics course, this proved to be a very useful refresher on the Fourier series equation and coefficients. I hope to finish up the book with the chapter on Fourier Transforms tomorrow.

What I read so far brought up some interesting points regarding the application of Fourier analysis to the way languages are learned. I had never really stopped to think about the fact that babies learn how to talk just from listening to others speak around them constantly, and not in the conventional way language lessons are taught in schools, beginning with basic vocabulary lists and such. This means that in the complex mix of sounds a baby hears, he is able to pick out a simple regularity, which can be analyzed through Fourier analysis. The Transnational College of LEX actually did a study in which they realized that the first two frequency peaks on the Fourier spectrum of every vowel are its defining characteristics. So altering these “formants,” as they’re called, transforms the sound into a different vowel. The study also found that there is a symmetrical pattern formed when the spectra of all the vowels are analyzed together, known as the Formant Diamond.

Another interesting side note was the way the book approached teaching logarithms. It pointed out that logarithms express the perception of things in proportion, and that this actually reflects how humans perceive certain quantities, such as the brightness of light or the spacing of notes on a music scale. For instance, we are under the impression that there are equal intervals between octaves; however, the frequency actually doubles with each octave.


Tuesday 12 June 2012

Today we met up with the summer REU students and mentors for an informal breakfast at the Simons Center, and then afterwards we broke up into our individual research groups. In the Laser Teaching Center, Dr. Noé started us out with a few basic optics demonstrations.

The first dealt with a “mirage toy”: two facing concave mirrors that project an image of two small pig figurines (sitting inside, on the bottom mirror) above a central hole in the top. When he shined a laser at the image of the pigs, it appeared that the light was somehow interacting with the image. However, this was not actually the case. After a little thought, we realized that the beam of light went straight through the image and into the central hole of the apparatus, where it bounced off the concave mirror inside and shined on the actual object. So when we saw the laser light “hitting” the image, we were really just seeing the projection of the laser light hitting the actual object below.

Dr. Noé also explained how changing the distance between these two concave mirrors changed whether the image was upright or inverted. After briefly reviewing ray diagrams for concave mirrors, I understood that this is because the inversion of the image depends on where the object sits relative to the focal points of the two mirrors. With a concave mirror, when the object is closer to the mirror than the focal point, the image is enlarged but not inverted. Therefore, initially, when the two mirrors are placed together, the image is inverted. However, as the second mirror is moved farther away from the first, the image flips back to its true orientation once the object is farther away than the focal point.
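The flip is easy to reproduce with the mirror equation, 1/f = 1/d_o + 1/d_i, where a negative magnification means an inverted image (the 10 cm focal length below is just an assumed number, not the toy’s actual specification):

```python
# Concave mirror imaging: 1/f = 1/d_o + 1/d_i.  A negative magnification
# means an inverted image; the flip happens as the object crosses the
# focal point.
def image_of(focal_len, d_obj):
    d_img = 1.0 / (1.0 / focal_len - 1.0 / d_obj)
    magnification = -d_img / d_obj
    return d_img, magnification

f = 10.0                     # assumed focal length in cm
print(image_of(f, 5.0))      # inside f:  virtual (d_img < 0), upright
print(image_of(f, 25.0))     # outside f: real (d_img > 0), inverted
```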

The other main project we learned about was the Michelson Interferometer, which consisted of a laser beam being sent through a beam splitter to two mirrors. The light was then reflected off of each mirror, through the same beam splitter, and each was then further split so that now there were two beams reflecting back to the laser and two beams sent on a perpendicular path through a lens to a counter. When the two beams were overlapped, they created an interference pattern that shifted as the path length of the beam changed. We used this change in path length (the distance one of the mirrors was moved closer) and the number of passing fringes the counter recorded to determine the wavelength of the laser beam.
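The wavelength calculation itself is one line: moving the mirror by some distance changes the round-trip path by twice that distance, so 2 * Δd = N * λ. A sketch with made-up numbers (not our actual measurements from today):

```python
# Michelson interferometer: moving one mirror by delta_d changes the
# round-trip path length by 2 * delta_d, so N fringes pass when
# 2 * delta_d = N * lambda.
def wavelength_nm(mirror_travel_mm, fringe_count):
    return 2 * mirror_travel_mm * 1e6 / fringe_count   # mm -> nm

# e.g. a hypothetical 0.1 mm of mirror travel and 316 counted fringes:
print(round(wavelength_nm(0.1, 316)))    # -> 633, a HeNe-like wavelength
```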

Random New Fact of the Day: (1) Birefringence: when the refractive index of a material depends on the polarization and propagation direction of the light ray passing through it. In other words, the speed of light in the material is different along different axes.