One of the most frequent tasks when assembling optical setups is to place an element in the focal plane of a lens. It may be a light source, an object, a pinhole, a camera sensor, a projection screen… When an illuminated object is placed in the focal plane of a lens, we usually say that the output beam is collimated and that the image is formed at infinity. Conversely, when a collimated beam enters a lens, it is focused in the focal plane of that lens. When the collimated beam entering the lens comes from an object, the image of that object is then formed in the focal plane of the lens. This is the basic configuration of the infinity-corrected microscope that we already covered in [»] a previous post.
The usual problem when placing an element relative to a lens is knowing where the actual focal plane of that lens is. In some applications the positioning accuracy is not critical but, in others, positioning errors may dramatically degrade the instrument's performance. It is therefore mandatory to have an accurate way of checking whether the element is correctly placed relative to the lens focal plane. Please note that we will focus here only on the axial distance between the two elements, not on potential tilts between them. A good way to prevent tilt is to rely on mechanical tolerances by using, for instance, the cage systems sold by optical manufacturers. This will work for most applications, but you may need more advanced techniques for the most sensitive setups, such as when assembling lens groups to make a camera or microscope objective.
I will focus here on the collimation of illuminated objects such as backlit targets or pinholes (it also works with optical fibers, which can be seen as homogeneously backlit pinholes). Aligning a camera sensor or a projection screen usually requires already having an object projected at infinity and forming the image of that object on the camera sensor or projection screen. Please note also that when positioning the lens relative to the illuminated object, we are trying to place the object, not the light source, at the focal plane of the lens. Failing to do so will produce an image of the light source at infinity, which is not what we want. Also, to prevent having the light source included in the collimated beam image, it is recommended to make the illumination as homogeneous as possible by using at least a strong diffuser and, if possible, a homogenising light rod or a piece of optical fiber.
Let us now go back to our business.
The first way students try to align their object is by using a ruler and the nominal focal length of their lens (e.g. 50 mm, 200 mm…). This is a very bad way of aligning a system for several reasons. First, the focal length printed on the lens is not the actual focal length of the lens; a lens labelled 200 mm can, in some cases, be quite different. You also have manufacturing tolerances of typically 1% on medium-quality lenses, and the focal length will be wavelength dependent if the lens is not achromatic. But the biggest issue with that alignment procedure is knowing where to start measuring your focal length from. Measuring 200 mm from the centre of the lens or from its back surface is not the same thing when the lens is 10 mm thick! Nonetheless, using a ruler is usually a good starting point for further alignment: you begin with an approximation and do not have to scan a large range of distances to find the exact focal plane. If possible, when applying this first rough alignment, use the back focal length given by the manufacturer and measure from the back surface of your lens (without touching it, to avoid scratches!). This usually allows a positioning accuracy of about ±10 mm.
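To see how large that "where do I put the ruler" ambiguity can be, here is a quick sketch for a hypothetical thick plano-convex lens (illustrative numbers, not a specific catalogue part). With the curved side toward the object, the back focal length is the effective focal length minus t/n, a standard thick-lens result:

```python
# Hypothetical 200 mm plano-convex lens, curved side toward the object.
n = 1.5    # refractive index (roughly N-BK7 in the visible)
t = 10.0   # centre thickness, mm
R = 100.0  # radius of curvature of the front surface, mm

efl = R / (n - 1)   # effective focal length from the lensmaker equation
bfl = efl - t / n   # back focal length, measured from the flat back surface
print(efl, bfl)     # -> 200.0 193.33...: almost 7 mm of difference
```

So even before manufacturing tolerances, measuring from the wrong reference surface shifts the answer by several millimetres, which is why the manufacturer's back focal length, measured from the back surface, is the better rough reference.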
Another popular way for students to align a setup is to treat a distant wall as an approximation of a screen placed at infinity. The farther the wall, the better the approximation. This is true, but it is far from accurate enough for most applications. With a bit of rearrangement of the lens formula (1/f=1/i+1/o), it is possible to show that the wall must be placed 101 focal lengths away from the lens for the image to lie within 1% of the focal plane on the other side. With a 100 mm lens, this requires a wall more than 10 meters away for just 1% accuracy! This is far from satisfying and is only worth using to get a better estimate than with a ruler.
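The claim is easy to check with the thin-lens formula; the numbers below just reproduce the 100 mm example from the text:

```python
# Thin-lens formula 1/f = 1/i + 1/o: place the wall (the object) at 101 f
# and check where the image lands relative to the focal plane.
f = 100.0              # focal length, mm
o = 101 * f            # wall distance, mm
i = 1 / (1/f - 1/o)    # image distance, mm
print(i / f)           # -> 1.01: the image sits 1% beyond the focal plane
```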
One other common trick is to use a camera objective with the focus ring set to infinity. This is usually acceptable for non-critical applications, but it is not perfect because it relies heavily on the accuracy of the mechanical positioning of the infinity hard stop on the camera objective. I have tested several objectives with more or less satisfactory results. For instance, the infinity position of the Nikkor AI 200 mm f/4 objective I have was over-estimated by about 1%. That is, to actually reach infinity I would have had to go beyond the mechanical stop of the focus ring, which is not possible. For a camera objective, over-estimating infinity is not a big issue because of the hyper-focal distance: when focusing at the hyper-focal distance, everything looks relatively in focus from half that distance up to infinity. So when shooting far away, the photographer does not, in most cases, see the difference between focusing at true infinity and focusing at, say, 2000 m. More embarrassing was the Nikkor I 80-200 mm I have, because this one under-estimated infinity! This is much more of a problem because when going beyond infinity you leave the imaging condition and the image looks blurry, unlike the previous situation where we were working in the hyper-focal range. The best results I got were with machine-vision C-mount objectives bought directly from optical suppliers, which had much more accurate hard stops for infinity. Last but not least, I do not recommend using objectives without a hard stop for infinity, such as almost all current DSLR camera objectives, because you lose the reproducibility offered by the hard stops.
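The hyper-focal argument can be put in numbers with the standard photographic approximation H ≈ f²/(N·c). The 0.03 mm circle of confusion below is an assumption (a typical full-frame value), used here only to illustrate the 200 mm f/4 case from the text:

```python
# Hyper-focal distance H ~ f^2 / (N * c), standard photographic approximation.
f = 200.0   # focal length, mm
N = 4.0     # f-number
c = 0.03    # circle of confusion, mm (assumed, typical full-frame value)

H = f**2 / (N * c)   # hyper-focal distance, mm
print(H / 1000)      # -> ~333 m: everything from ~167 m to infinity looks sharp
```

With these numbers, focusing anywhere beyond a few hundred metres is visually indistinguishable from true infinity, which is why the over-estimated hard stop is mostly harmless for photography.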
So what is the most accurate procedure for aligning an object at the focal plane of a lens? The answer is auto-collimation.
The principle of auto-collimation is the following: if an object is placed in the focal plane of a lens, the output beam is collimated. We also know that sending a collimated beam into a lens produces an image focused in the focal plane of that lens. The solution is then to light the object and place a mirror after the collimation lens to send the light rays back through the lens. When the object sits exactly at the focal position, the object and its image reflected by the mirror overlap perfectly.
There are several possible implementations of the auto-collimation principle; the one I used for this post is shown in Figure 1.
In this setup we have a light source (a red LED) illuminating a coarse-grit diffuser placed behind a reflective crosshair target with thin lines (25 µm wide). The light then passes through a 50:50 cube beamsplitter and a 200 mm 2” achromatic doublet lens before reflecting off a flat precision mirror. It is important to use precision elements to avoid blurring of the crosshair pattern due to any distortion or aberrations produced by the optical elements. The light rays re-entering the system are then focused back onto the crosshair target if it lies perfectly in the focal plane of the lens. Finally, an imaging system comprising a 50 mm bi-convex lens with an iris and a camera sensor records both the original pattern of the crosshair target and its reflection. To allow capturing the reflection, it is important to use a negative reflective crosshair target such as those available at Thorlabs. Most of the target then acts as a mirror, except for the thin lines that produce the crosshair pattern.
When aligning the setup, you first need to remove the mirror and adjust the re-imaging lens and camera sensor positions until you have a perfectly sharp image of the crosshair pattern. You may have to close the iris slightly to avoid the spherical aberrations that would blur your image. Do not close the iris too much, however, because that increases the depth of field and reduces the accuracy with which you can identify best focus. The idea is to find the optimum between the smallest possible depth of field and the least aberration in the image, such that you get the sharpest image.
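To get a feeling for why closing the iris hurts, the diffraction-limited depth of focus grows with the square of the f-number. This sketch uses the common ±2λN² approximation; the wavelength and f-numbers are illustrative, not measured values from this setup:

```python
# Depth of focus ~ +/- 2 * lambda * N^2 (common diffraction-limited
# approximation): stopping down (larger N) quickly widens the in-focus
# zone and blunts the sensitivity of the focus test.
wavelength = 0.63e-3                 # red LED, mm
for N in (2.8, 5.6, 11.0, 22.0):     # f-numbers
    dof = 2 * wavelength * N**2      # one-sided depth of focus, mm
    print(f"f/{N:g}: +/- {dof * 1000:.0f} um")
```

Going from f/2.8 to f/22 loosens the focus tolerance by a factor of about sixty, so the iris should only be closed as far as needed to tame the aberrations.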
Once you have a sharp image of the crosshair target, place the mirror so that it reflects the beam back into the lens, perpendicular to the optical axis. Then move the 200 mm achromat until the reflection of the crosshair is at its best focus. You may have to tilt the mirror slightly to shift the crosshair reflection to the side and distinguish it from the original crosshair. Use only rigid fixtures to hold everything, because the system is very sensitive to small tilts caused by shaking hands or badly fixed mounts. Figure 2 shows an out-of-focus image (left) obtained when the system was misaligned and the focused image (right) obtained when the system was properly aligned. Notice that the original crosshair image appears strongly saturated because the camera exposure time had to be increased to catch the reflection signal, which was attenuated by the additional beamsplitter reflections.
When your setup is correctly aligned and all the elements properly fixed, you may remove the camera and re-imaging lens and place the camera on the other side of the beamsplitter so as to produce, this time, a sharp image of the reflection signal only. This way you have both a collimated beam to align cameras and a camera at the focal plane to align external beams for collimation. You may also use the setup as a more compact version of the angle measurement setup described in [»] our previous post. A good idea is also to put an iris after the beamsplitter to limit the actual aperture of the lens to about f/5 maximum, to prevent aberrations in the output beam.
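As a quick sanity check on the f/5 recommendation (simple arithmetic, assuming the same 200 mm lens as above):

```python
f = 200.0    # focal length of the collimating achromat, mm
N = 5.0      # target f-number
D = f / N    # required iris opening, mm
print(D)     # -> 40.0: fits comfortably within the 2" (50.8 mm) lens aperture
```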
The auto-collimator setup described here is an extremely powerful tool and I strongly advise you to build one for your own lab. Not only does it give a reference for infinity and allow high-accuracy angular deviation measurements, it has many other uses that I will try to highlight in future posts. So stay tuned for updates!
You may also like:
[»] Building a Microscope from Camera Lenses
[»] The Most Critical Step in OpenRAMAN
[»] #DevOptical Part 1: The Real, the Thin and the Thick Lenses