Measuring small angular deviations (<5°) can be a challenging task, but optics offers a simple solution when the object under study possesses a flat reflective surface. This may seem somewhat restrictive, but attaching a temporary mirror to the object already solves a lot of practical cases. The technique can also be applied to transparent surfaces (such as a glass sheet), which then allows measuring the parallelism of two or more interfaces.
In this post, I will present a technique commonly used in optics to measure the wedge of transparent optical flats. It can also be used to measure the deviation of a beam with a resolution of 0.007° over a range of up to 3°, and it can easily be made even more sensitive by adapting the optics used. The optical principle is shown in Figure 1.
The idea consists of projecting a collimated image of a reticle target onto one or more reflecting surfaces. As the beam is reflected back towards the source, it is focused onto a camera sensor by a second objective. Any tilt of the collimated beam is then translated into a shift at the camera sensor. Because the beam is collimated at the output of the setup, the position of the mirror does not influence the reading: the reflective surface can be moved laterally without problem as long as it keeps reflecting the beam back into the system. Likewise, the position of the target along the optical axis does not matter since the beam is collimated, which allows measuring any number of interfaces, such as in multi-layer glass sheets.
If f2 is the focal length of the objective in front of the camera, then a small angular deviation ∆α (in degrees) will shift the image by f2·∆α·π/180 at the sensor. The angular resolution of the system is therefore fixed by the smallest displacement that can be observed. As a first approximation, one can use the size of a pixel (e.g. 5.2 µm with our black & white Thorlabs camera), but the resolution can be either better or worse than that depending on the actual image of the reticle obtained at the sensor. If the image is well defined, sub-pixel resolution may be possible; but if the reflective surface is not flat (wavy or diffusive, for instance), the reticle image will be distorted or blurry, making the identification of the displacement imprecise. This is one of the reasons for keeping the reticle image thin, on the order of a few pixels wide. To achieve this, I used a Thorlabs R1DS3N target, which features a 1” diameter, 25 µm thick reticle, and two Olympus Plan Achromat 4x microscopy objectives to prevent the field distortion that would have occurred with ordinary spherical lenses. This resulted in a 1:1 imaging ratio and a reticle thickness of about 5 pixels on the camera. The only downside of this implementation is the limited numerical aperture of the microscopy objectives, which restricted the field of view to about half of the sensor.
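As a quick sanity check, the numbers above can be plugged into this relation. The short sketch below assumes an effective focal length of about 45 mm for the 4x objective (an assumption for illustration, not a measured value) and the 5.2 µm pixel pitch mentioned earlier; it reproduces the roughly 0.007° per-pixel resolution quoted above.
% sanity check of the shift/resolution relation (illustrative values)
f2 = 45e-3;        % assumed focal length of the imaging objective [m]
pixel = 5.2e-6;    % camera pixel pitch [m]
dalpha = 0.1;      % example beam deviation [deg]
shift = f2 * dalpha * pi / 180;    % shift at the sensor [m], about 79 um here
res = pixel / f2 * 180 / pi;       % one-pixel angular resolution [deg], about 0.0066 deg
fprintf('shift = %.1f um, resolution = %.4f deg\n', shift * 1e6, res);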
A photograph of the setup is given in Figure 2. From left to right: a red power LED, a ground-glass diffuser with the reticle target, the first microscopy objective, a beam splitter cube, the second microscopy objective and the camera.
The setup was mounted in a 30 mm cage system for ease of handling and has a relatively small footprint, although it is limited to a 0.007° resolution. Better resolution and field of view can be obtained with telephoto lenses, such as 200 mm photography lenses, but they would probably not fit the 30 mm cage system anymore and would then require either a 60 mm cage system or a fixed bench configuration on post holders. Concerning the light source, a conventional low-power LED is already enough for highly reflective surfaces such as mirrors, but measuring transparent flats requires a more powerful LED (> 10 mW). The light source used here is a Thorlabs M625L3 run at 50 mA (typical performance is 600 mW of optical power at 1 A).
The expected behaviour of the system was confirmed by reflecting the beam off a mirror fixed on a precision rotary stage (Thorlabs PR01/M) and recording several images at different positions of the Vernier scale. The results are presented in Figure 3 and confirm that the system behaves linearly over a 3° range.
A closer examination of Figure 3 shows that the experimental points do not fall perfectly on the fitted line, but this is due to the relatively coarse resolution of the Vernier scale of the rotation stage, as consecutive points are only 5 ticks of the micrometer screw apart. This experiment therefore does not give actual information on the system performance, because the system itself is much more sensitive than the rotation stage used. Instead, it gives an indication of the resolution of the Thorlabs PR01/M rotation stage. The Thorlabs datasheet quotes a resolution of 2.4 arcmin per tick (0.04°/tick), and the standard deviation of the error measured during the experiment was 0.035°, which is consistent with the datasheet, although we could have expected the actual resolution to be better than one tick (for example, half a tick).
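For reference, the fit of Figure 3 can be reproduced with a few lines of Matlab. The sketch below uses synthetic numbers for illustration only (stage_deg and shift_px are hypothetical variables holding the Vernier readings and the measured reticle shifts); the slope converts pixels into degrees and the residuals give the kind of scatter quoted above.
% illustrative linear fit of reticle shift versus stage angle (synthetic data)
stage_deg = 0 : 0.2 : 3;                                  % hypothetical Vernier readings [deg]
shift_px = 151 * stage_deg + 5 * randn(size(stage_deg));  % hypothetical measured shifts [px]
p = polyfit(stage_deg, shift_px, 1);                      % linear fit, slope in px/deg
residual_deg = (shift_px - polyval(p, stage_deg)) / p(1); % deviation from the fitted line [deg]
fprintf('slope = %.1f px/deg, residual std = %.3f deg\n', p(1), std(residual_deg));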
The setup was then tested on a few transparent surfaces, including the window glass of my lab room, a 10 mm thick polycarbonate block and the beam splitter cube itself. The results are presented in Figure 4.
It can be seen in Figure 4 that the window glass sheet is actually quite parallel, with only a 0.026° wedge. The most surprising result was obtained with the polycarbonate block, as I would not have expected its two faces to be parallel to within about 0.05°. Note that the reflection off the polycarbonate is not as clean as with glass interfaces. Concerning the beam splitter, it is the light reflected by the inner surfaces of the cube that comes back to the sensor as ghosts, because their reflectivity is not null (although it is quite low thanks to the anti-reflection coating). The parallelism of the two faces was measured to be 0.140°, which falls well within the manufacturer specifications (20 arcmin for the reflected beam, 5 arcmin for the transmitted beam).
Before concluding this post, I would like to discuss briefly how the recorded images are processed to obtain the deviation angle.
The simplest way of measuring the shift of the beam is to locate the centre of the reticle using a program like Paint or Photoshop, which gives an accuracy down to the pixel level. This is actually how I extracted the information from the experiments of Figure 4. For Figure 3, I used an automated method based on phase correlation. There is a nice article about it on [∞] Wikipedia which I recommend reading.
Phase correlation consists of performing the 2D correlation between two images A and B, which yields one (or more) peaks corresponding directly to the offset that gives the best superposition between image A and image B. When the two images represent the same object (for example, our reticle) but shifted, this gives the value of the shift. The nice thing with phase correlation is that you can get sub-pixel accuracy by weighting the maximum of the correlation map with its neighbouring pixels (a similar approach is to upscale the input images using bicubic or bilinear filtering, although this is more computationally expensive). Obviously, do not fall into the common error of confusing the number of digits returned by the algorithm with the actual significance of those digits. Only careful calibration can tell you what resolution you can achieve when working at the sub-pixel level.
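As an illustration of that sub-pixel trick, here is a minimal sketch (a hypothetical helper, not part of the program given further down) that refines the peak of a correlation map Q by taking the intensity-weighted centroid of the 3x3 neighbourhood around the maximum:
% refine the peak of a correlation map Q to sub-pixel accuracy by weighting
% the maximum with its 3x3 neighbourhood (assumes the peak is not on a border)
function [px, py] = subpixel_peak(Q)
[~, i] = max(Q(:));
[px, py] = ind2sub(size(Q), i);
W = Q(px-1:px+1, py-1:py+1);
W = W - min(W(:));                  % keep the weights positive
[X, Y] = meshgrid(-1:1, -1:1);      % X varies along columns, Y along rows
w = sum(W(:)) + eps;
px = px + sum(sum(W .* Y)) / w;     % sub-pixel row coordinate
py = py + sum(sum(W .* X)) / w;     % sub-pixel column coordinate
end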
Nevertheless, phase correlation can be a tricky thing to compute because it is almost impossible (or at least impractical) to evaluate it directly on full 1 Mpixel images. You can try Matlab's xcorr2 function to convince yourself if you need to.
A common workaround is to compute the correlation through a product in Fourier space (a correlation is just a convolution with one of the images flipped, which amounts to conjugating its Fourier transform). Going from this plain correlation to a phase correlation then simply requires normalizing the product of the Fourier transforms. It is up to you to decide which one you want: the unnormalized correlation is less sensitive to noise, while the normalized version is less sensitive to brightness differences between the two images. As our reticle target images are quite similar here, it is worth testing the approach without the normalization step.
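To make this concrete, the core of the Fourier-space approach fits in a few lines. The sketch below builds its own small test image (hypothetical, for illustration only), and the element-wise normalization shown here is the textbook phase-correlation choice, which differs slightly from the global scaling used in the full program below, so treat it as one possible variant rather than the exact code of this post.
% Fourier-space correlation core: ref is the reference image, img the shifted one
ref = zeros(64); ref(30:34, 20:24) = 1;   % hypothetical test image (a small square)
img = circshift(ref, [5, 12]);            % same image shifted by (5, 12) pixels
R = fft2(img) .* conj(fft2(ref));         % plain cross-correlation spectrum
R = R ./ (abs(R) + eps);                  % optional normalization (phase correlation)
Q = real(fftshift(ifft2(R)));             % correlation map, its peak gives the shift
[~, i] = max(Q(:));
[r, c] = ind2sub(size(Q), i);
fprintf('offset = (%d, %d)\n', r - 33, c - 33);   % zero lag sits at (33, 33) for a 64x64 map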
Computing the correlation through Fourier transforms may look relatively easy, but do not forget that, when doing so, you are actually computing a circular correlation, which limits the usable output to half of the width of the image. Indeed, if the two reticle images are near opposite borders (one near the left edge and the other near the right edge, for instance), the circular correlation will return a small negative offset instead of a large positive one. To prevent this, we have to compute the linear correlation of the images. One way of doing this with Fourier transforms is to pad the images so as to cancel the circular behaviour. However, you may then have issues with the image edges just next to the padded zones; to avoid them, you can replicate the last pixels instead of padding with zeroes, or use more sophisticated filtering.
To help you with the analysis of your own data, here is the actual Matlab program that I used for the experiment of Figure 3:
% clean up everything
clear
clc
% load reference image
ref = imread('new exp/0.bmp');
% list all bitmap in directory
ls = dir(fullfile('new exp/*.bmp'));
% process all images
for i=1:length(ls)
s = sprintf('new exp/%s', ls(i).name);
% invoke correlation
[ofs_x, ofs_y] = imcorr(imread(s), ref);
% print results
fprintf('%s\t%.0f\t%.0f\t%.1f\n', s, ofs_x, ofs_y, sqrt(ofs_x * ofs_x + ofs_y * ofs_y));
end
% output the offset between the images 'im1' and 'im2' using fft
% correlation
function [ofs_x, ofs_y] = imcorr(im1, im2)
% convert to floating point for fft2 and pad the images, otherwise the
% fft-based correlation is circular ('replicate' avoids the sharp edges
% that plain zero padding would introduce)
im1 = padarray(double(im1), round(0.5 * size(im1)), 'replicate', 'both');
im2 = padarray(double(im2), round(0.5 * size(im2)), 'replicate', 'both');
% correlation map = F-1[F(im1) * F(im2)']
F = fft2(im1) .* conj(fft2(im2));
% global scaling (a single factor, it does not change the peak location)
F = F / sqrt(sum(F(:)));
Q = real(fftshift(ifft2(F)));
% locate maximum
[sx, sy] = size(Q);
[~, i] = max(Q(:));
[ofs_x, ofs_y] = ind2sub([sx, sy], i);
% remove the centre coordinate to output a signed result
ofs_x = ofs_x - sx * 0.5 - 1;
ofs_y = ofs_y - sy * 0.5 - 1;
end
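Finally, converting the printed pixel offsets into deviation angles only requires the relation given earlier. As a usage example (again assuming a 45 mm focal length and the 5.2 µm pixel pitch, with ofs_px a hypothetical measured offset):
% convert a measured offset (in pixels) into a deviation angle (in degrees)
pixel = 5.2e-6;     % camera pixel pitch [m]
f2 = 45e-3;         % assumed focal length of the imaging objective [m]
ofs_px = 42;        % hypothetical offset returned by imcorr
angle_deg = ofs_px * pixel / f2 * 180 / pi;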