Camera obscura


The easiest way to project images is the camera obscura, which is nothing more than a dark space with a tiny hole in one of the walls. The hole projects an upside-down and mirrored image of the world outside onto the opposite wall. In the scheme below on the left, a box with a tiny hole is used, which gives a projection on the inside of the box.

If the opposite wall is some sort of sensor, then the picture can also be captured, like in the picture on the right, which was taken with my digital camera covered with a piece of aluminum foil with a very small hole.*

But it's clear that there is room for improvement in the pinhole photo and that's where lenses enter the picture; they are sharper, collect more light and are more versatile in terms of depth of field. Lenses in their simplest form are made of just one element, but the lenses used for photography consist of various elements.


*A nice opportunity for some math! The focal length of the pinhole setup was measured to be about 47-48 mm and this corresponds well with the focal length of the lens used for the photo taken with a lens, which was 48 mm. Furthermore, the pinhole photo needed a shutter speed of 4 seconds, while the photo taken with a lens only needed 0.05 seconds at an aperture of f/13. So the difference in exposure time was about 6.3 stops (log(4/0.05)/log(2) = 6.3), which means that the pinhole had an aperture of about f/115 (13 × √(2^6.3) = 115), which corresponds to a diameter of 48/115 = 0.4 mm.
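As a quick check, the arithmetic in the footnote can be reproduced in a few lines of Python (a sketch; the 48 mm focal length, the f/13 reference aperture and the 4 s and 0.05 s shutter speeds are the values from the text, and small differences from the rounded numbers above are expected):

```python
import math

# Exposure difference between the pinhole shot (4 s) and the lens shot (0.05 s)
stops = math.log2(4 / 0.05)               # about 6.3 stops

# Scale the f/13 reference aperture by the square root of the exposure ratio
pinhole_N = 13 * math.sqrt(2 ** stops)    # about f/115

# Pinhole diameter from focal length / f-number
diameter = 48 / pinhole_N                 # about 0.4 mm

print(round(stops, 1), round(pinhole_N), round(diameter, 2))
```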


Focal length


The focal length of a lens is the distance at which the image of a subject at infinity is formed, as is shown on the right, where f represents the focal length. However, if we want to focus on something at a closer distance, then the image is formed at a distance larger than the focal length, and the lens optics need to be adjusted for that by increasing the distance from the lens to the image plane.


For a thin lens, the amount of adjustment needed can be calculated with the thin lens formula: 1/u + 1/v = 1/f (where f = focal length, u = subject distance and v = image distance). For a lens with multiple elements, this is a bit different, as it has two principal planes instead of just one, and the distance between these two planes is ignored in this formula (in the picture below, the combined lens elements are represented as the thick lens).


So for a subject at infinity, 1/u approaches zero, giving 1/v = 1/f and v = f. But with a 50 mm lens and a subject at 5 meters distance (= 5000 mm), the image distance will be: 1/v = 1/50 − 1/5000 = 0.0198, so v = 1/0.0198 = 50.5 mm. So the focal plane is shifted 0.5 mm (50.5 − 50.0) backwards and, by focusing our lens, we shift the whole lens system 0.5 mm forwards, so that the focal plane coincides with our sensor.
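The worked example above can be written as a small helper function (a sketch for an ideal thin lens; real multi-element lenses behave slightly differently, as noted above):

```python
def image_distance(f, u):
    """Thin-lens formula 1/u + 1/v = 1/f, solved for the image distance v.
    f and u in mm; u must be larger than f for a real image to form."""
    return 1 / (1 / f - 1 / u)

# The worked example from the text: a 50 mm lens focused on a subject at 5 m
v = image_distance(50, 5000)
print(round(v, 1))   # 50.5 mm, i.e. 0.5 mm behind the infinity position
```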


From this formula, it can also be deduced that the shortest focus distance (the distance between subject and sensor, so u + v) has to be 4 times the focal length: the focus distance is shortest when u and v are equally long, so v = u. This transforms the lens formula into 1/v + 1/v = 1/f. This is equal to 2/v = 1/f, or v = 2f. And, since u = v in this case, and focus distance = u + v, the focus distance is 2f + 2f, or 4f. At this point the magnification is 1:1, and a greater magnification can only be obtained by using, for example, extension tubes (giving a longer focus distance).
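The fact that u + v bottoms out at 4f can also be confirmed numerically, by scanning subject distances for a hypothetical 50 mm thin lens (a sketch; any focal length gives the same result):

```python
def total_distance(f, u):
    """Subject-to-sensor distance u + v for a thin lens (all in mm)."""
    v = 1 / (1 / f - 1 / u)
    return u + v

f = 50
# Scan subject distances: the total distance reaches its minimum of 4f = 200 mm
# exactly at u = 2f = 100 mm, as derived above.
candidates = [total_distance(f, u) for u in range(55, 400)]
print(round(min(candidates)))
```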


In general, a focal length equal to the diagonal of the sensor is regarded as normal, so in the case of full frame format this would be 43 mm (50 mm is taken instead, as there are no fixed 43 mm lenses). A 50 mm lens is generally regarded as having approximately the same perspective as a human eye. This is, however, something different than the field of view, as the field of view of a human eye resembles a 22 mm lens.


Lenses which have a focal length shorter than the long side of the sensor are considered wide angle, and if the focal length is shorter than the short side of the sensor, then we're dealing with ultra wide angle lenses. (A special type of ultra wide angle lens is the fisheye lens, which does not produce a rectilinear but a circular image.) For full frame, the different focal lengths are summarized in the table below. Wide angle lenses often give an exaggerated feeling of perspective, while photos taken with telephoto lenses have a flatter feeling.

< 24 mm   ultra wide angle
24 mm - 35 mm   wide angle
50 mm   normal
80 mm - 300 mm   tele
> 300 mm   super tele


In photography there are two types of lenses: fixed lenses (also called prime lenses), which have a fixed focal length, and zoom lenses, which cover a range of focal lengths. Fixed lenses are simpler in design and as a result they usually give better results than zoom lenses (although the price tag is also of big importance! A cheap fixed lens will be inferior to an expensive zoom lens). For zoom lenses there is a so-called zoom factor, which is the maximum focal length divided by the minimum focal length. So a zoom lens with a range from 25 mm to 75 mm has a zoom factor of 3 (= 75/25). The higher the zoom factor, the more problems the lens designers face in terms of corrections and, as a rule of thumb, a larger zoom factor often means a lens with decreased quality.


Two different types of lens design are very common:

• The first is the retrofocus design. In this case, a diverging lens is used as the first component, which makes it possible to place a wide angle lens further away from the sensor than in a conventional design. This is a big advantage when using a wide angle lens on an SLR camera, as the mirror needs more space to function properly than a conventional wide angle design can give. An extra advantage of this design is that the light reaches the sensor at a smaller angle, which decreases vignetting. This does come at a price though: because of the large front element, a retrofocus lens is heavy, and it will often show barrel distortion.

• The inverse of this principle is used for long focal lengths in order to make tele lenses more practical, a lens design known as telephoto. This means that a 300 mm telephoto lens can be significantly shorter than a regular 300 mm tele lens. Like the retrofocus design, this also comes at a price, as telephoto lenses often suffer from pincushion distortion.


FOV (field of view)


The focal length of a lens determines the field of view which the lens covers, and the picture on the right shows some examples of the horizontal field of view for some focal lengths based on full frame. For example, a focal length of 16 mm gives a horizontal angle of 97°, whereas a focal length of 200 mm gives an angle of 10°.


The picture below on the left was taken at 16 mm, and by hovering over the focal lengths you can see what the field of view would have been if I had used that focal length.


Do keep in mind that the field of view changes slightly when adjusting the focus. Especially when working at really close focus distances, like in macro photography, this is readily observable (for more on this, see macro photography/magnification). In the example below on the right, focus is on the front part of the lichen. When looking at the second picture, where the focus is on the rear part of the lichen, we see that the picture has a slightly wider field of view. This is the reason why the calculator below has a checkbox for "take magnification into account".


The field of view can be calculated below.

Focal length (mm):


Focus distance* (m):

Take magnification into account**:

Sensor dimensions (w×h, mm):



Horizontal FOV (°):

Horizontal FOV (m):

Vertical FOV (°):

Vertical FOV (m):

Diagonal FOV (°):

Diagonal FOV (m):

*Focus distance is the distance between sensor and subject, and should be at least 4 × the focal length (if no value for focus distance is submitted, then only the angular values will be calculated).

**Only of importance when taking pictures at high magnifications. For more information see lenses/magnification.


Note: the pupil factor is not taken into account in this calculator; it only starts to have an influence at short focus distances (it is defined as the ratio between the apparent exit pupil diameter and entrance pupil diameter of the lens).
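The core of such a field of view calculator can be sketched in Python. This is the simple case only: magnification and pupil factor are ignored (as if the "take magnification into account" checkbox were unchecked), and the linear field of view is an approximation that treats the focus distance as the distance from the lens to the subject:

```python
import math

def field_of_view(f, sensor_w=36.0, sensor_h=24.0, focus_dist_m=None):
    """Angular field of view (degrees) for a lens focused near infinity,
    and optionally the linear field of view (metres) at the focus distance.
    f and the sensor dimensions are in mm; defaults are full frame."""
    diag = math.hypot(sensor_w, sensor_h)
    angles = {name: math.degrees(2 * math.atan(s / (2 * f)))
              for name, s in (("horizontal", sensor_w),
                              ("vertical", sensor_h),
                              ("diagonal", diag))}
    if focus_dist_m is None:
        return angles, None
    sizes = {name: 2 * focus_dist_m * math.tan(math.radians(a) / 2)
             for name, a in angles.items()}
    return angles, sizes
```

This reproduces the examples given earlier: 16 mm gives a horizontal angle of about 97° and 200 mm about 10° on full frame.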


FOV - panoramas


If you want to increase the field of view but do not have a lens with a smaller focal length at hand, then it is also possible to stitch several photos together afterwards on the computer with special software. As long as each photo has some overlap with the next photo, it should be possible to make a panorama stitch of those pictures. The quality of the resulting panorama depends quite strongly on the software; some programs give better results than others. Below is an example of a panoramic picture, consisting of seven pictures.


It is important for making panoramas to have a fixed exposure for all pictures, so shoot in manual mode. Don't let the camera determine the exposure for each individual photo separately, as this will give problems when stitching the pictures due to the differences in lightness. Determine the correct exposures along the whole scene and select a good average exposure for all the pictures. When there is great contrast along the scene, it may not be possible to capture the panorama at all. Also, when lighting conditions are changing rapidly (cloud formations drifting by, sunset, etc.) it can be tricky to get the photos nailed, and shooting the pictures quickly after each other is the way to go in these situations.


Be careful when using a polarization filter when taking panoramas, as the degree of polarization depends on the angle to the sun. And since a panorama will most likely span a large angular view, it is possible that a polarizer will give bad results. It is not impossible, but be careful with it.


Also be careful when using wide angle lenses for panoramas, because they might have some distortion, which makes stitching them together a bit more problematic. It's probably better to take a slightly longer focal length and a few more pictures instead.

While it is possible to make the shots handheld, it is a lot better to use a tripod, for several reasons: the point of rotation stays in the same place while rotating; it is easier to control the amount of rotation for each photo; and lastly, since the pictures are all perfectly aligned, it is much easier for the software to stitch them, giving better results.


There is one big issue with panoramas and that is the parallax error. When there are no objects at a close distance in the pictures, this problem is negligible, but with objects at a close distance it is definitely something to take into account. The problem is that a camera rotated on a tripod will rotate around a point some distance behind the center of perspective. In the picture on the left there are parts of two pictures taken while simply rotating the camera on the tripod, which gives a serious change in the distance between the chimney and the door on the left. It is clear that even the most expensive stitch software will have great difficulties stitching these together properly. When this is corrected for the parallax error, the stitching will go much smoother (although the bird landing on the chimney introduces a new problem for the software...).


The solution is to rotate the camera around its center of perspective, which can be found by looking directly into your lens, like in the pictures below on the left. The position of the image of the aperture that you see when looking into the lens is the center of perspective. It will probably be situated somewhere in the front of the lens, and if you make sure to rotate around that point, then the resulting pictures are free of parallax errors.


In order to be able to do this, you need to mount the camera differently on the tripod, and there are many devices available for that. If you don't want to spend a lot of money on this, then you can make your own; there are several tutorials for this scattered around the internet! Below on the right is my own homemade creation, which is not as stable as I had hoped for, but it does the job. However, I rarely take it with me because I don't have any space left in my camera bag.

Uncorrected: front view - rotated

Corrected: front view - rotated



Tilt-shift lenses


A tilt-shift lens gives a lot of possibilities to play with perspective and sharpness, which can be really useful in landscape photography. Unfortunately, they are also really expensive... Since I did not want to empty my bank account before having experimented with the tilt and shift effects, I made one myself using a Mamiya 645 50 mm lens. It's a lens used in medium format photography, which is perfect for this purpose for two reasons. First, it has a bigger image circle than a 35 mm format lens, which allows for a lot of freedom in movement. Second, it has a larger flange focal distance (the distance between the lens mount and the sensor), giving some space for shifting and tilting. Of course it's not as versatile and easy to use as a real tilt-shift lens, but it's the next best thing, and it's fun to build! Below is what it looks like.


Tilt-shift lenses - shift


By employing the shift function of your lens, you can play around with the perspective. In the example below on the left, the subject is a piece with a red part and a blue part, and it is important to notice that both parts are equally long. If you can take the picture directly in front of the subject, then that will give the first "normal" example, where the red and blue parts end up equally large on the photo as well.

But if you can't stand in front of the subject, then you can rotate your camera to give the second "rotated" example. This also captures the same scene, but now the red and blue parts will not end up equally large on the photo! The blue part will be larger than the red part, and it is this effect that causes perspective distortion in pictures that are taken at an angle. (It's the same effect that causes the edges of photos taken with a wide angle lens to be distorted.) This effect can be prevented by shifting your lens. So the camera stays at the same place and angle as in the "normal" setup, but the lens is shifted to the left until the same composition is obtained, and this gives a photo that is free from perspective distortion.


On the right is an example of this effect, which is a picture of a piece of wood lying on the ground, where the first picture is taken directly above it. The second picture is taken with the camera rotated to the right, and you can see perspective distortion. The lines of the piece of wood, which are relatively parallel, seem to be converging. Also, you can see how the part on the right is already getting out of focus. Instead, by shifting the lens to the right, the perspective is undistorted and the whole piece of wood is in focus.


Because the shifted images are free from distortion, they can be a good way of making photos that are to be stitched together. The only problem is that it will be limited to a stitch of about three pictures (normal, shifted to the left, shifted to the right), since there is a limit to how far a lens can be shifted.

To the left is an example of a tree which I took from a relatively close distance, where both stitches were made from three pictures. In the first case, the top and bottom pictures were taken with a rotated camera; in the second case those were taken with a shifted lens. In both cases I only manually repositioned the three pictures as they came out of the camera, so no software warping was used to make them match. The first case clearly gives a very bad fit, with several parts not transitioning smoothly. Also, the lower part is already getting out of focus. The shifted stitch, however, shows perfect transitions between the three photos, the whole part is in focus, and there is no perspective distortion.



In architecture photography, the shift function of tilt-shift lenses is often used to take pictures of shiny buildings, where you don't want your reflection to show up on the photo. On the right is an example that I took in my living room. In the first picture, which is taken directly in front of it, you can clearly see my reflection. So I moved my setup to the left, and shifted the lens to the right, which gave a similar picture, but without my reflection.






Tilt-shift lenses - tilt


By tilting a lens, you can play with sharpness, and it is possible to both increase and decrease sharpness. The scheme on the left shows how the depth of field depends on the angle of tilt. In each case, the bright green line is the plane of sharp focus, and an approximation of the depth of field is represented by the two outer, dimmer green lines. Focus in these examples is not on infinity, so that's why the distance between sensor (the black bar) and lens is slightly larger than the focal length f.

This principle of the plane of sharp focus changing because of the tilting of the lens is called the Scheimpflug principle. The position of the plane of sharp focus is determined by two intersection lines, called the Scheimpflug line and the Hinge line, which are shown in the picture below on the right. The plane through these two lines is the plane of sharp focus (the green line).

Since the plane of sharp focus gets tilted in the case of a tilted lens, the depth of field also gets tilted. Moreover, with a tilted lens, the depth of field is no longer bounded by two parallel planes, but has the shape of a wedge, starting at the Hinge line.


Focussing with a tilted lens is a bit trickier than usual. First of all, tilt-shift lenses are manual focus, and second, focussing them is an iterative process. As can be seen below on the left, near focus is determined only by the angle of the lens (since that determines distance J), and not by focussing the lens. So the first thing to do is to find the right angle to get the foreground in focus. After that, you focus the lens with the focus ring to achieve far focus. But, as can be seen in the difference between close focus and focus on infinity in the scheme below on the left, focussing also changes the position of the Scheimpflug line, which means that the angle of the lens has to be adjusted slightly to correct for near focus. But that also changes the far focus, so that has to be slightly corrected as well. After a couple of iterations, everything should be sharp.


As is the case with regular lenses, the depth of field can be regulated with the aperture, as is shown in the scheme below on the left.

Below on the right is a scheme depicting what the influence of the focal length is, while maintaining the same level of tilt.



These are some real life examples of using the tilt function. The first shows how you can achieve very selective focus in both the foreground and the background.


And here are two examples where the depth of field was increased. In both examples, the camera was facing downwards at an angle of approximately 45 degrees and all four photos were taken at f/2.8! In the example on the left, the lens was tilted downwards, and in the second example on the right it was tilted upwards.


The calculator below gives the angle for the plane of sharpest focus, and the distance J.

Focal length (mm):

θ (°):

Focus distance* (m):

J (m):

ψ (°):

*Focus distance is the distance between sensor and subject, and should be at least 4 × the focal length.
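The essence of such a tilt calculator can be sketched with the Merklinger-style approximations for a tilted lens. This is a rough sketch under stated assumptions: it uses J = f / sin(θ) for the Hinge line distance and treats the focus distance as the lens-to-plane distance along the line of sight, so the resulting ψ is approximate:

```python
import math

def tilt_plane(f_mm, theta_deg, focus_dist_m):
    """Approximate Scheimpflug geometry for a tilted lens:
    J:   distance from the lens to the Hinge line (m), J = f / sin(theta)
    psi: angle of the plane of sharp focus (deg), tan(psi) ~ focus distance / J
    A sketch only; thin-lens assumptions, no pupil factor."""
    J = (f_mm / 1000) / math.sin(math.radians(theta_deg))
    psi = math.degrees(math.atan(focus_dist_m / J))
    return J, psi

# Hypothetical example: a 50 mm lens tilted 8 degrees, focused at 2 m
J, psi = tilt_plane(50, 8, 2)
```

The general behaviour matches the schemes above: more tilt pulls the Hinge line (and thus near focus) closer, while focusing further away rotates the plane of sharp focus upwards.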


35 mm equivalent and crop factor


The crop factor of a camera is used to describe the size of a sensor relative to full frame format. Full frame is 36 × 24 mm, so if our camera sensor is 24 × 16 mm, then it will have a crop factor of 1.5. If we are using a camera with a 12 × 8 mm sensor, then we have a crop factor of three.

If you are using a lens on a camera with a crop factor, then this changes the field of view of your picture compared to using the same lens on full frame format. The so-called 35 mm equivalent of a focal length can be calculated by multiplying the focal length by the crop factor. This does not mean that the focal length of the lens changes; it is just a way to compare the result to the result on a full frame camera. So a 50 mm lens will still be a 50 mm lens when used on a camera with a crop factor of 1.5, but due to the cropping caused by the smaller sensor, the resulting field of view is comparable to using a 75 mm lens on a full frame camera.


If you take a picture of the reindeer below with a full frame camera equipped with a 200 mm lens, then you end up with the picture on the right. However, if you take the picture with the same lens on a camera with a crop factor of 1.5, then the picture will be the one within the blue outline. Effectively, this is the same field of view as a 300 mm lens would have given you on a full frame camera, but it's still a 200 mm lens! In the same way, crop factors of two and four give the same fields of view as a 400 mm lens and an 800 mm lens would give on a full frame camera, respectively.


The 35 mm equivalent can be calculated with this calculator.

Focal length (mm):

Crop factor:

35 mm equivalent (mm):


To calculate the crop factor based on the size of your sensor, use this calculator. If the horizontal and vertical crop factors are not identical, then the aspect ratio of the sensor differs from the 35 mm film format (aspect ratio 3:2), in which case the diagonal crop factor is often used. Be sure to fill in the sensor dimensions in the correct order (width × height).

Sensor dimensions (w × h, mm):


Crop factor vertical:

Crop factor horizontal:

Crop factor diagonal:


This one calculates the sensor dimensions of your camera based on the crop factor. It assumes that your camera has an aspect ratio of 3:2.

Crop factor:

Sensor dimensions (w × h, mm):
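All three crop-factor calculators boil down to a few one-liners, sketched here against the full frame reference of 36 × 24 mm used throughout this section:

```python
def equivalent_focal_length(f, crop_factor):
    """35 mm equivalent focal length: focal length times crop factor."""
    return f * crop_factor

def crop_factors(width, height):
    """Horizontal, vertical and diagonal crop factors relative to
    full frame (36 x 24 mm); width and height in mm."""
    ff_diag = (36**2 + 24**2) ** 0.5
    diag = (width**2 + height**2) ** 0.5
    return 36 / width, 24 / height, ff_diag / diag

def sensor_from_crop(crop_factor):
    """Sensor dimensions (w, h) in mm, assuming a 3:2 aspect ratio."""
    return 36 / crop_factor, 24 / crop_factor

# The examples from the text: a 50 mm lens at crop factor 1.5, and a 24 x 16 mm sensor
print(equivalent_focal_length(50, 1.5))   # 75.0
print(crop_factors(24, 16))               # all three factors are 1.5
```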





Magnification


The magnification is the relative size of the subject on the sensor. For example, a focal length of 100 mm and a focus distance (the distance between subject and sensor) of 0.4 m gives a magnification of 1:1, which means that the subject is projected life size on the sensor (the size of the sensor plays no role in magnification). Lenses which can produce magnifications of 1:1 or larger are considered macro lenses.


The scheme on the right shows the relationship between magnification, focal length and subject distance (the subject distance, u, is the distance between lens and subject). Blue is the subject, red is the formed image, f is the focal length.


1: u > 2 × f → mag < 1:1, focus distance > 4 × focal length

2: u = 2 × f → mag = 1:1, focus distance = 4 × focal length

3: u < 2 × f, but u > f → mag > 1:1, focus distance > 4 × focal length

4: u = f → no image formed, no sharp picture recorded

5: u < f → image projected in front of the lens, no sharp picture recorded








The calculator below calculates what magnification you are working with for a known focal length and focus distance. It also gives an effective aperture factor, which becomes important in the case of higher magnifications. When working at higher magnifications, the effective aperture changes because of this magnification (more on that under macro photography/magnification), and you should take that into account when determining at which aperture diffraction starts to degrade image quality. For this reason the calculator for diffraction (under exposing/aperture) can be adjusted with the effective aperture factor that the calculator below gives.

The effective aperture can be calculated by multiplying the aperture number with this factor. E.g. a lens set at f/8 will have an effective aperture of f/16 when the effective aperture factor is 2. This is mostly of importance in the case of macro photography where magnifications are high.

Focal length (mm):

Focus distance* (m):


Effective aperture factor:

* Focus distance is the distance between sensor and subject, and should be at least 4 × the focal length.
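The magnification and effective aperture factor can be derived from the thin lens formula: combining u + v = D (the focus distance) with 1/u + 1/v = 1/f makes u and v the roots of a quadratic. A sketch, assuming a symmetric lens (pupil factor 1, as in the note under the FOV calculator) and the normal non-macro branch where the subject is further away than the image:

```python
import math

def magnification(f_mm, focus_dist_m):
    """Thin-lens magnification for a sensor-to-subject distance D.
    From u + v = D and 1/u + 1/v = 1/f, u and v are the roots of
    x**2 - D*x + f*D = 0. Takes the branch with u >= v (mag <= 1:1).
    Returns (magnification, effective aperture factor 1 + m)."""
    D = focus_dist_m * 1000
    if D < 4 * f_mm:
        raise ValueError("focus distance must be at least 4 x the focal length")
    disc = math.sqrt(D * D - 4 * f_mm * D)
    u, v = (D + disc) / 2, (D - disc) / 2
    m = v / u
    return m, 1 + m

# The 1:1 example from the text: 100 mm lens at a 0.4 m focus distance
m, factor = magnification(100, 0.4)   # m = 1.0, factor = 2.0 (f/8 acts like f/16)
```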


Vibration reduction (VR)


Some lenses are equipped with something called Vibration Reduction (VR) or Image Stabilization (IS), which is a mechanism that reduces blur caused by camera shake. I'm sure many love it, but I love my tripod and see no need for Vibration Reduction in my photography. So I was a bit disappointed when Nikon released the 16-35mm with Vibration Reduction, but I like that lens a lot so I purchased it anyway. However, I soon found out that there was a problem with the lens when doing night photography.

The examples below to the left show a red glow and some other red structures as well, and I can say it took me a long time to realize that the Vibration Reduction was the culprit. Below to the right are some tests I did at home with a lens cap on the lens to test the Vibration Reduction, and indeed the same structures appear on the photo. The structure changes and becomes less noticeable when changing the focal length to 35 mm, but the red glow is still there. Changing to a good old nifty-fifty 50 mm lens removed almost all of the red glow. The only glow visible in that image is caused by my sensor getting really warm after several long exposures (~7 minutes each) at room temperature.

After scouring the internet looking for confirmation, I read that there is an infrared sensor in the lens which is used to control the Vibration Reduction, and this is what causes the red glow. And in case you wondered, turning off the Vibration Reduction does not make a difference; it was off during all the photos below. As far as I know, this problem is not consistent; some people seem to have no problems with their lenses.

So there you go, one more reason for me to dislike Vibration Reduction. But I still really like this lens, I just have to use another wide-angle for night photography.


Lens characteristics


Below is a list of some of the most common lens characteristics.


Lens characteristics - spherical aberration


Spherical aberration is the phenomenon that light rays going through the edge of a spherical lens (meaning that the lens is evenly curved) are not focused at the same point as light that is going through the center of the lens.


This effect is especially observed at large apertures where pictures can be a bit blurry, but stopping down will minimize the spherical aberration. To reduce this effect, lenses often have aspherical lens elements, which means that for these elements, the curve changes from center to edge.


An example of the influence of spherical aberration at large apertures is shown on the right. In this case f/4.5 was wide open and sharpness suffers visibly compared to f/8 or f/11. Enlargements are at 100 %.







Lens characteristics - coma


Coma is related to spherical aberration, but in this case the rays enter the lens at an angle (oblique rays). The result is that points along the optical axis will be sharp (if we ignore any spherical aberration), but points toward the edges will have a comet shape. Stopping down can decrease coma.


Some examples of coma. The first one shows a crop from the left part of a star picture, and shows how the stars are shaped somewhat like a comet. The pictures on the right show an extreme example of coma, and how stopping down minimizes coma. These are crops from the lower left corner.


f/1.8 - f/2.2 - f/2.8 - f/4.0


Lens characteristics - chromatic aberration


There are two main types of chromatic aberration:

• Longitudinal or axial chromatic aberration is caused by the effect that shorter wavelengths are focused closer to the lens than longer wavelengths. Together with spherical aberration, longitudinal chromatic aberration is one of the only on-axis aberrations in this list (meaning that it also occurs in the center of the picture). Longitudinal chromatic aberration is most obvious in areas with great contrast, such as dark objects against a light background, and the effect can be reduced by stopping down the lens.


• Lateral or transverse chromatic aberration is caused by the effect that when light enters the lens at an angle, shorter wavelengths will focus at a greater distance from the optical axis than longer wavelengths. Like longitudinal chromatic aberration, lateral chromatic aberration is most obvious in areas with great contrast, such as dark objects against a light background. However, in this case the effect cannot be reduced by stopping down the lens.


The picture on the left is a nice example of lateral chromatic aberration, taken from the upper left corner of the original picture. The picture on the right is an example of longitudinal chromatic aberration and shows how stopping down can decrease it (f/3.5 is wide open).


f/3.5 - f/4.5 - f/5.6


Many lenses have special constructions to minimize the effect of chromatic aberrations, such as achromats and apochromats:

• In achromats, red and blue light focus in the same plane while green focuses somewhat differently, thereby reducing primary chromatic aberration (chromatic aberration of the primary colors). But there is still secondary chromatic aberration left which can be seen as green and magenta edges (magenta = blue and red together).

• In apochromats, red, green and blue all focus in the same plane, giving very little chromatic aberrations.


However, as a result of correcting lenses for these two aberrations, a different chromatic aberration becomes more noticeable: spherochromatism. Spherochromatism is the result of lenses being corrected for in-focus areas, but not for out-of-focus areas. Like longitudinal chromatic aberration, spherochromatism is caused by the variation of focus distance with wavelength, and thus is also an on-axis aberration.


The result of spherochromatism is greenish out of focus highlights in the background and magenta out of focus highlights in the foreground. A clear example of this can be seen to the right. Spherochromatism is a bigger problem in faster lenses, but it can be reduced by stopping down the lens.




Lens characteristics - field curvature


Field curvature is caused by the fact that oblique rays focus slightly in front of the sensor, yielding a curved image.


On the photo this will result in objects in the edges of the photo being blurred to a certain extent while objects in the center are sharp (example 1). Or, when the focus is on the edge of the picture, the rays going through the center will focus behind the sensor, resulting in the center of the picture being blurred (example 2).


On the right is a picture of a flat rock surface covered with lichen. The first picture shows the whole scene, while the other two show enlarged parts of the center and the lower right corner, respectively. It is clear that the part from the center is a lot sharper than the corner part, which is mostly due to field curvature.







Lens characteristics - astigmatism


Astigmatism is the effect that a lens does not focus at the same point in the tangential/meridional (the wheel) and sagittal (the spokes) orientations.

This is never a problem in the center of the image, as both orientations focus at the same point along the optical axis, but towards the edges of the picture this will change the shape of details. The exact shape of the details depends on how the details are blurred in both directions. If there is more blurring in the tangential direction than in the sagittal direction, then tangential astigmatism is observed. Similarly, if there is more blurring in the sagittal direction, then sagittal astigmatism is observed. Astigmatism can be decreased by stopping down.


An example of astigmatism is shown on the left, where the out of focus parts towards the edges of the picture are elongated outwards (tangential astigmatism).




Lens characteristics - distortion


There are a few types of distortion and the most common are shown below on the left. They are most obvious when straight lines are present in the picture.

Below on the right is an example of distortion, where the lens suffers from quite a bit of moustache distortion. In the corrected example all the distortion is removed. In nature photography, distortion is not that often a big issue, since there are not that many straight lines in nature. In the uncorrected example it is hardly noticeable that there is such a big amount of distortion, until you compare it to the corrected picture!


Lens characteristics - flare


When light from a bright light source (either inside or outside the frame) enters the lens, it can reflect between the different lens elements and form bright spots, called flares. When the light is reflected in such a way that it reaches the sensor as a haze, it is called veiling glare, which lowers the contrast of the scene. Filters in front of the lens add extra surfaces on which light can reflect, potentially enhancing flare.

The brightest flares become visible when looking through the viewfinder and pushing the depth of field preview button (if the camera has one). This stops down the aperture to the value you have chosen for the picture and this will usually reveal flares.


The picture below on the left shows a classic example of flare caused by the sun, whereas the one on the right shows flares with a more peculiar shape. In the enlarged version it can be seen how strange these flares look; similar flares can be found on the internet, where people claim them to be evidence of UFOs, which is pretty hilarious. I am quite sure I did not see anything flying around when I took this picture! Notice that these flares are caused by the lamps of the factory: if you draw lines between the lamps and their corresponding flares, they all meet in the middle of the picture. When the source of the flare is in the picture, its flare often appears on the opposite side of the picture's center (this rule only applies to flares caused by internal reflections in the lens, not to flares caused by, for example, dust on the lens).

When the light source that causes flaring is outside the frame, the flare can be removed by simply blocking that light source. This can be done with a proper lens hood, but that only works well for prime lenses, since on a zoom lens a hood can only be optimal at the widest angle. Another way is to use your hand or something else to block the light, taking care not to let the "blocking device" enter the frame. I never carry lens hoods with me, so my hand is my trusted "blocking device". The examples below show how careful use of a hand can be very effective against flaring. In the first case on the left you can see regular flares and a bit of veiling glare, and both are removed by using my hand. In the second example on the right, the lighter circles on the left side of the photo are caused by light reflecting off dust particles on my lens (I know, shame on me for not cleaning my lens....). Furthermore, there is a massive amount of veiling glare, but, once again, my hand comes to the rescue and gives me a clean picture.

It is of course difficult to block the light source when the light source is part of the composition. In the case of flares caused by dust on the lens, simply cleaning your lens does the job, as can be seen in the pictures below on the left.

For flares that are caused by internal reflections in the lens, things become a bit more complicated, but there is a workaround: take two pictures, one normal and one with the light source blocked. These two can then be combined afterwards on the computer to give a clean picture. In the example below to the right, several flares can be seen throughout the photo, caused by the sun at the top of the frame. By taking another picture while blocking the sun, and combining it with the original, you get a flare-free photo.
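The combination step amounts to a masked blend: inside the flare regions, take the pixels from the blocked exposure. The toy sketch below works on single-channel pixel lists and assumes the two frames are perfectly aligned (a tripod shot); the function name and the hand-drawn mask are my own illustration, not any specific tool. In a real workflow you would paint this mask in an image editor:

```python
def remove_flares(normal, blocked, flare_mask):
    """Blend two aligned exposures: inside the flare regions
    (mask = True) use the 'blocked' frame, which was shot with the
    light source shaded and therefore has no flares; everywhere else
    keep the 'normal' frame, which still contains the light source."""
    return [[b if m else n
             for n, b, m in zip(row_n, row_b, row_m)]
            for row_n, row_b, row_m in zip(normal, blocked, flare_mask)]

# Toy 1-channel example: a flare spot (value 200) in the normal frame
normal  = [[10, 200, 10],
           [255, 10, 10]]   # 255 = the sun itself, which we want to keep
blocked = [[10,  10, 10],
           [  0, 10, 10]]   # shading the sun darkens that spot
mask    = [[False, True, False],
           [False, False, False]]
clean = remove_flares(normal, blocked, mask)   # flare replaced, sun kept
```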


Lens characteristics - vignetting


There are several types of vignetting, all caused differently. I will discuss four of them here:

• Optical vignetting is caused by the lens blocking part of the light traveling at an angle, as demonstrated by the pictures below on the left, where the bright white area in the lens represents the entrance pupil. Seen from an angle instead of head-on, the wide open apertures show a significant decrease in area, and this causes the vignetting. Because light travelling at an angle is partially blocked, a gradual darkening toward the edges of the picture is observed. The smaller apertures have the same size whether they are seen from the optical axis or from an angle. As a result, this type of vignetting can be decreased by stopping down.

The example on the right demonstrates how stopping down decreases optical vignetting. Stopping down one or two stops usually suffices to suppress most of this type of vignetting.

• Mechanical vignetting is the simplest type of vignetting and can be caused by a number of things, such as a lens hood that is too long, or multiple filters stacked on top of each other. These enter the field of view and give rise to dark corners in your picture. It is very easy to prevent this type by using the proper lens hood and not stacking too many filters, especially on wide angle lenses. If you want to be sure, you can check by looking through your viewfinder (although keep in mind that many viewfinders don't show the full 100%) to see if there is any mechanical vignetting. On the right is an example, where the upper corners have turned black because of a filter blocking the way.


• Natural vignetting is caused by multiple factors which together add up to the cos⁴ law, where it is the angle of the light relative to the sensor that matters. The first factor is the inverse square law, which says that light that travels farther decreases in intensity. In our case, light travelling to the edge of the sensor covers a longer distance than light going to the center of the sensor, and the resulting loss of intensity follows a cos² factor.

Second, light hitting a surface at an angle is spread out over a larger area than light hitting the surface straight on. This is the same principle that makes an evening sun cooler than a midday sun: its light is spread out over a larger area. In our case it is again the edges of the sensor that suffer, as they receive their light at an angle, which introduces another cos factor.

Third, a circle seen from an angle does not look like a circle but looks elliptical, and the same principle applies to our aperture. Light entering at an angle does not "see" the aperture as a circle but as an elliptical shape, which effectively reduces the area of the aperture and blocks part of the light. So again it is the edge of the picture that is affected, with yet another cos factor.

These three factors combined give the cos⁴ factor for natural vignetting, and there is no remedy for this type of vignetting: stopping down does not help.
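The falloff can be worked out directly from the field angle. Here is a quick sketch; the 24 mm full-frame numbers are just an illustrative assumption, and real wide-angle designs (retrofocus, offset microlenses) usually suffer less than the pure cos⁴ law predicts:

```python
import math

def natural_falloff(field_angle_deg):
    """Relative illumination from the cos^4 law: cos^2 from the extra
    travel distance (inverse square law), one cos from the light
    spreading over a larger area, and one cos from the elliptical
    foreshortening of the aperture."""
    return math.cos(math.radians(field_angle_deg)) ** 4

# Corner of a full-frame sensor (half-diagonal 21.6 mm) with a simple
# 24 mm lens: field angle = atan(21.6 / 24)
angle = math.degrees(math.atan(21.6 / 24))
loss_stops = -math.log2(natural_falloff(angle))
print(f"corner angle ≈ {angle:.0f}°, falloff ≈ {loss_stops:.1f} stops")
# corner angle ≈ 42°, falloff ≈ 1.7 stops
```

Almost two stops darker in the extreme corner, which is why natural vignetting is so visible on simple wide-angle lenses.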


• Pixel vignetting only applies to digital cameras and is in some ways related to natural vignetting. It is caused by the fact that sensors record light as slightly less bright when it hits the sensor at an angle, as it does at the edges. This can be minimized to a certain extent by placing microlenses on top of the sensor.


Lens characteristics - bokeh and cat's eye effect


Bokeh is a term used to describe the quality of the out-of-focus elements, but it is hard to define what is good bokeh and what is bad bokeh; it is mostly a matter of personal taste. The shape of out-of-focus objects is determined by the aperture, the lens construction, lens aberrations and their corrections. When the aperture is not wide open, the out-of-focus elements take the shape of the aperture, which depends on the number of blades used in the lens. An example where bokeh can be seen in the background is on the left.


The same principle that causes optical vignetting (see lenses/vignetting) is also the cause of the so-called cat's eye effect, which is seen at larger apertures. It is the effect that the bokeh is round on the optical axis, whereas it takes the shape of a cat's eye toward the edges. This is caused by the cat's-eye shape of the entrance pupil when seen from an angle.
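A toy model of this clipping: seen from off-axis, the entrance pupil is the intersection of two circles that slide apart as the field angle grows, and that lens-shaped overlap is exactly the cat's eye. The radii and the shift-per-field-position below are made-up numbers purely for illustration; the intersection-area formula itself is the standard two-circle result:

```python
import math

def overlap_area(r1, r2, d):
    """Area of the intersection of two circles with radii r1 and r2
    whose centers are a distance d apart (sum of two circular segments)."""
    if d >= r1 + r2:                       # no overlap at all
        return 0.0
    if d <= abs(r1 - r2):                  # smaller circle fully inside
        return math.pi * min(r1, r2) ** 2
    a1 = math.acos((d*d + r1*r1 - r2*r2) / (2*d*r1))  # half-angles of
    a2 = math.acos((d*d + r2*r2 - r1*r1) / (2*d*r2))  # the two segments
    return (r1*r1*(a1 - math.sin(2*a1)/2) +
            r2*r2*(a2 - math.sin(2*a2)/2))

# Pupil of radius 1.0 clipped by a second opening of radius 1.2 that
# appears shifted sideways for off-axis image points (made-up geometry,
# shift assumed proportional to the field position).
for frac in (0.0, 0.5, 1.0):              # 0 = image center, 1 = corner
    area = overlap_area(1.0, 1.2, frac)
    print(f"field position {frac:.1f}: relative pupil area {area / math.pi:.2f}")
```

The shrinking overlap toward the corner is the same mechanism as optical vignetting; stopping down shrinks the pupil circle until it no longer reaches the clipping edge, which is why both the darkening and the cat's eye disappear at smaller apertures.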














To the right is an example where the cat's eye effect is clearly visible around the edges. It also shows how stopping down decreases this effect: it is almost gone at f/2.8 and completely gone at f/4. Note that, except for f/1.8 which is wide open, all apertures give heptagonal shapes, resulting from the seven blades of the aperture.