With the exception of telescopes, most optical systems are essentially made of lenses. It was therefore only logical to start our journey into the #DevOptical series with them. There is, however, a lot to say about lenses, much more than could fit in a single post. I will therefore limit myself to describing the important quantities that characterize them before we dig into more complex topics. I will cover notions such as the mounting of lenses, manufacturability criteria and aberration compensation in later posts.
Lenses themselves encompass a large variety of optical elements: singlets, doublets, triplets, aspherical lenses, toroidal lenses, freeforms, etc. I will stick to singlet spherical lenses for the moment and cover the other types later.
In their simplest form, lenses are thin glass cylinders terminated by spherical surfaces of radii R1 and R2. One of the faces may be plane, in which case its radius of curvature is said to be infinite. For that reason, it is sometimes more convenient to talk about curvatures, the reciprocals of the radii, since an infinite radius of curvature yields a zero curvature, a concept much better handled by computer software than infinity.
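As a minimal illustration (a Python sketch with variable names of my own choosing, not code from the series), this is what handling plano surfaces through curvatures can look like:

```python
import math

def curvature(radius_mm):
    """Return the curvature (1/mm) of a surface, treating an infinite
    radius (plano surface) as a zero curvature."""
    return 0.0 if math.isinf(radius_mm) else 1.0 / radius_mm

# A plano-convex lens described by its two surfaces: R1 = 50 mm, R2 = infinity
c1 = curvature(50.0)      # 0.02 mm^-1
c2 = curvature(math.inf)  # 0.0, no special case needed downstream
```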
A CSG (Constructive Solid Geometry) representation of a positive bispherical singlet lens is given in Figure 1: two spheres and an infinite cylinder. The lens is the intersection of all three solids. It is worth keeping in mind that all the represented quantities (thickness of the cylinder, diameter, centers of the spheres and radii) will have manufacturing tolerances – we will talk about that in a later post. The optical axis of the lens is the imaginary line joining the two centers of curvature (the sphere centers) of the lens. It is usually represented as a dashed line. Keep in mind that the optical axis is therefore not the mechanical revolution axis of the lens, since manufacturing errors in positioning the centers of curvature will create a departure between the two. Typical departures are on the order of a few minutes of arc for well-made lenses. This has big consequences when we start talking about the mounting of optics.
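To make the CSG idea more concrete, here is a hedged sketch of a point-membership test for such a lens solid; the function and the numerical values are purely illustrative, not taken from Figure 1:

```python
def inside_lens(p, r_cyl, z1, R1, z2, R2):
    """Point-membership test for a bispherical singlet built by CSG.
    p      : (x, y, z) point, z taken along the mechanical axis
    r_cyl  : radius of the bounding cylinder (half the lens diameter)
    z1, R1 : center position (on axis) and radius of the front sphere
    z2, R2 : center position (on axis) and radius of the back sphere
    The lens is the intersection of the infinite cylinder and the two spheres."""
    x, y, z = p
    in_cylinder = x ** 2 + y ** 2 <= r_cyl ** 2
    in_sphere_1 = x ** 2 + y ** 2 + (z - z1) ** 2 <= R1 ** 2
    in_sphere_2 = x ** 2 + y ** 2 + (z - z2) ** 2 <= R2 ** 2
    return in_cylinder and in_sphere_1 and in_sphere_2

# A symmetric biconvex lens, 25 mm diameter, vertices at z = -2.5 mm and z = +2.5 mm,
# surfaces of radius 50 mm (sphere centers at z = +47.5 mm and z = -47.5 mm)
print(inside_lens((0.0, 0.0, 0.0), 12.5, z1=47.5, R1=50.0, z2=-47.5, R2=50.0))   # True
print(inside_lens((0.0, 0.0, 10.0), 12.5, z1=47.5, R1=50.0, z2=-47.5, R2=50.0))  # False
```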
Lenses can have different overall shapes depending on the sign of their radii of curvature, and each type has its own name, as given in Figure 2. When the positive/negative term is in brackets, it is generally omitted because there should not be any ambiguity. A meniscus lens can be either positive or negative depending on its radii of curvature. Note that the sign convention for a positive lens is R1>0 and R2≤0; conversely, the sign convention for a negative lens is R1<0 and R2≥0.
Lenses have the property of bending light rays in a controlled way. Positive lenses have image-forming properties both for objects located at finite distances and for objects located at infinity (e.g. a star can be considered at infinity relative to the dimensions of the lens), as shown in Figure 3. Negative lenses have useful properties too but need to be combined with positive lenses to yield images. We will cover them later when talking about lens systems.
The drawings of Figure 3 were obtained by a process known as raytracing, which is at the core of optical design. Again, it is a bit too early to talk about raytracing now, but it is important that you understand the fundamentals of it. We will also cover later why we chose a biconvex lens for the top system and a plano-convex one for the bottom system.
When a light ray hits the glass surface, its direction changes due to refraction. The ray then propagates until it reaches another glass/air interface, where it refracts again. Once it leaves the glass, it propagates again in air. Note that the rays do not stop when they all reach the same point; they continue on their way until they hit something else. If you place a screen at the locus where all the rays converge, you will see the image of your object. And since we are usually interested in studying the imaging capabilities of a system, we conveniently stop the raytracing at the plane where all the rays merge.
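Each of these direction changes is governed by Snell's law. As a small illustrative sketch (my own helper, not the code behind Figure 3), here is how the angle of a ray changes when it enters and leaves the glass:

```python
import math

def refract(theta_in, n1, n2):
    """Snell's law: n1 * sin(theta1) = n2 * sin(theta2).
    Angles are measured from the surface normal, in radians.
    Returns None in case of total internal reflection."""
    s = n1 * math.sin(theta_in) / n2
    if abs(s) > 1.0:
        return None  # total internal reflection, no refracted ray
    return math.asin(s)

# A ray hitting a glass surface (n = 1.5) at 30 degrees from the normal
theta_air = math.radians(30.0)
theta_glass = refract(theta_air, n1=1.0, n2=1.5)   # about 19.5 degrees inside the glass
theta_exit = refract(theta_glass, n1=1.5, n2=1.0)  # back to 30 degrees after a parallel exit face
print(math.degrees(theta_glass), math.degrees(theta_exit))
```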
Such a description of the lens, involving raytracing and intersections with spherical surfaces, is referred to as a real lens. It is however not straightforward to design a system using real lenses, because it keeps the mind busy with a ton of details that are not necessary when thinking at the system architecture level.
Let us now take a biconvex lens with a collimated beam and extend the rays as if they did not refract. We obtain the situation of Figure 4. We notice that all the intersections occur in the same plane (the blue plane). We call this plane the principal plane of refraction.
By sending a collimated beam from the other direction, we can draw the second principal plane, as shown in Figure 5. The previous principal plane is kept in the figure in a dimmed color.
If we were picky, we would add that this assertion is only true for rays that are close to the optical axis – the paraxial region. Away from the paraxial region, the principal planes bend into spheres. When sketching up a system, we usually assume that we are working in the paraxial region, even when we are not. Again, it is essentially a matter of simplifying things enough so that the mind is not busy with non-essential details during system design. Refinements to the sketch will be made during the optical design phase using more sophisticated techniques anyway.
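To give a feel for how quickly we leave the paraxial region, here is a quick numerical check (illustrative angles only) of the small-angle approximation sin θ ≈ θ that paraxial optics relies on:

```python
import math

# In the paraxial region we replace sin(theta) by theta (in radians).
for deg in (1, 5, 10, 20):
    theta = math.radians(deg)
    error = (theta - math.sin(theta)) / math.sin(theta) * 100
    print(f"{deg:2d} deg -> approximation error: {error:.2f}%")
# 1 deg: ~0.01%, 5 deg: ~0.13%, 10 deg: ~0.51%, 20 deg: ~2.06%
```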
Going back to our principal planes, for a bispherical lens of center thickness t, refractive index n and radii R1 and R2, the distances between the vertices of the lens and the principal planes are given by the formulas

$$h_1 = -\frac{f\,(n-1)\,t}{n\,R_2}, \qquad h_2 = -\frac{f\,(n-1)\,t}{n\,R_1}$$

and the focal length of the lens, f, by the lensmaker's equation

$$\frac{1}{f} = (n-1)\left[\frac{1}{R_1} - \frac{1}{R_2} + \frac{(n-1)\,t}{n\,R_1 R_2}\right]$$

where h1 is measured from the front vertex, h2 from the back vertex, and positive distances point towards the image side.
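As a quick numerical illustration of these formulas (a sketch with made-up lens parameters, assuming the vertex-referenced convention stated above), for a symmetric biconvex lens in a typical crown glass:

```python
def thick_lens(R1, R2, t, n):
    """Focal length and principal-plane positions of a bispherical singlet.
    R1, R2 : radii of curvature (mm), t : center thickness (mm), n : refractive index.
    h1 is measured from the front vertex, h2 from the back vertex."""
    inv_f = (n - 1.0) * (1.0 / R1 - 1.0 / R2 + (n - 1.0) * t / (n * R1 * R2))
    f = 1.0 / inv_f
    h1 = -f * (n - 1.0) * t / (n * R2)
    h2 = -f * (n - 1.0) * t / (n * R1)
    return f, h1, h2

# Symmetric biconvex lens: R1 = 50 mm, R2 = -50 mm, 5 mm thick, n ~ 1.517
f, h1, h2 = thick_lens(50.0, -50.0, 5.0, 1.517)
print(f"f = {f:.1f} mm, h1 = {h1:+.2f} mm, h2 = {h2:+.2f} mm")
# f = 49.2 mm, h1 = +1.68 mm, h2 = -1.68 mm: both principal planes sit inside the glass
```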
Given that all the refraction occurs at the principal planes, we can already simplify the representation of the lens for a system-level sketch, as shown in Figure 6. We call such a representation of a lens a thick lens.
This is already an improvement for describing our system, but it is still not very practical for sketching out our optical system. If we simplify the thick lens representation by neglecting the small distance between the two principal planes, we obtain the thin lens representation, as shown in Figure 7. We will come back later to when this assumption is valid and when it is not. Let's assume for the moment that we can make it.
Thin lenses are really convenient to work with for sketching up our systems. They have the following properties:
(1) A ray parallel to the optical axis will be focused to a point at a distance f from the lens.
(2) A ray going through the center of the lens exits unchanged.
(3) Rays emerging from a point located at position (f, y), i.e. in the front focal plane at height y, will exit as a collimated beam at an angle θ = tan⁻¹(y/f) relative to the optical axis.
(4) A point source at any other distance o will have an image formed at distance i from the lens according to the thin-lens formula

$$\frac{1}{o} + \frac{1}{i} = \frac{1}{f}$$
It is worth noting that any distance o smaller than f will not generate an image on its own and will require another lens to produce a physical image, because the exit rays diverge, just like with negative lenses. In such conditions, the image is said to be virtual.
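A small sketch of rule #4 in practice (my own helper function, nothing standardized): solving the thin-lens equation for the image distance and flagging virtual images:

```python
def image_distance(o, f):
    """Thin-lens equation 1/o + 1/i = 1/f.
    o : object distance (in front of the lens), f : focal length.
    Returns the image distance i; a negative value means a virtual image."""
    if o == f:
        return float("inf")  # object at the focal point: collimated output, image at infinity
    return 1.0 / (1.0 / f - 1.0 / o)

print(image_distance(100.0, 50.0))  # 100.0 -> real image, 1:1 imaging
print(image_distance(30.0, 50.0))   # -75.0 -> virtual image (object inside the focal length)
```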
Most of the job of the system engineer is to think in terms of thin lenses, and most of the time they are used in a collimated-beam situation, so that no computations are necessary and you can rely almost exclusively on rule #3. Adding the concepts of STOP and pupils, which will follow in Part #3 of this series, gives us 90% of the tools required to think about an optical design like a system engineer.
As an example, Figure 8 shows two very popular ways to make a 1:1 imager. The object to image is often represented as a black arrow. The top version uses a single f=50 mm lens while the bottom version uses two f=50 mm lenses, but the total distance is 200 mm in both cases. Although the second system uses more lenses, it brings many advantages that will be discussed later. It is called a 4-f system and you will learn to love it; it is like the Swiss-army knife of optical systems!
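We can already check the 200 mm total track of both layouts with nothing more than the thin-lens equation (a back-of-the-envelope verification with the f = 50 mm value of Figure 8):

```python
f = 50.0  # mm, focal length of every lens in Figure 8

# Single-lens 1:1 imager: the object and image distances are both 2f
o = i = 2 * f
print("single-lens track:", o + i, "mm")          # 200.0 mm

# 4-f system: object at f, first lens, gap of 2f, second lens, image at f
print("4-f system track:", f + 2 * f + f, "mm")   # 200.0 mm
```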
Now that we know how to go from a real lens to a thin lens, we are close to achieving the inverse – going from a thin lens description to a real lens description. But before we can tackle that task, we need to define a few other important optical concepts first. In the next post, I will explain how you can perform raytracing in the paraxial region for thin and thick lenses.
I would like to give a big thanks to James, Daniel, Naif, Lilith, Cam and Samuel who have supported this post through [∞] Patreon. I also take the occasion to invite you to donate through Patreon, even as little as $1. I cannot stress it enough: you can really help me post more content and make more experiments!
You may also like:
[»] #DevOptical Part 2: Paraxial Raytracing and the ABCD Matrix
[»] #DevOptical Part 7: Replacing Thin-Lenses by Real Lenses
[»] #DevOptical Part 0: Introduction