
White balance

 

Different white balance settings =>
[2000 K] [2500 K] [3000 K] [4000 K] [5000 K] [6000 K] [8000 K] [10000 K] [15000 K]

When we photograph an object, we basically record the light that it reflects (unless we're photographing a light source itself, of course). As a consequence, the light an object reflects depends on the light source: if the source emits warm light, the object will reflect warm light as well. Our brain automatically compensates for this, so a white piece of paper looks white whether we stand outside in the shade or inside with the lights on. A camera, however, has to be compensated for this, otherwise the piece of paper will look bluish in the shade and yellowish inside under light bulbs.

Temperature      Type of lighting
1000-2000 K      Candlelight
2500-3000 K      Tungsten lighting
3000-4000 K      Sunrise and sunset
4000-5000 K      Fluorescent light
5000-5500 K      Electronic flash
5500-6500 K      Daylight
6500-8000 K      Overcast skies and shade
9000-10000 K     Heavily overcast skies
10000-18000 K    Sunless blue skies

 

The color temperature is given in kelvin (K) and is based on the type of light a black body would emit when heated to that temperature. It may seem a bit counterintuitive, but the higher the temperature, the cooler the light.

 

You can either let the camera determine the white balance for you, adjust it manually using one of the presets, or choose a color temperature directly. If you're working in RAW format, you can also adjust the white balance afterwards on the computer. The camera often does a pretty good job determining the white balance, but it can also be way off.
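
To make the "camera determines it for you" idea concrete, here is a minimal sketch of one common automatic white balance heuristic, the gray-world assumption: the average color of the scene is assumed to be neutral grey, and each channel is scaled accordingly. Real cameras use more elaborate heuristics, and the function name and float-image format are my own assumptions for illustration.

```python
import numpy as np

def gray_world_white_balance(img: np.ndarray) -> np.ndarray:
    """img: float RGB array in [0, 1], shape (H, W, 3)."""
    channel_means = img.reshape(-1, 3).mean(axis=0)
    # Scale each channel so the three averages end up equal (neutral grey)
    gains = channel_means.mean() / channel_means
    return np.clip(img * gains, 0.0, 1.0)
```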

 

Curves

 

One of the most efficient ways of editing the tonal range of a picture is the curves tool (the white rectangle in the photo below), which is a graph with two bars, both ranging from white to black. The horizontal bar is the input, while the vertical bar is the output. The easiest way to explain this tool is probably by showing some examples for the photo below.

Adjusting using curves =>
[original] [example 1] [example 2] [example 3] [example 4] [example 5] [example 6]

1) In the first example the curve has been dragged down at one point, so every tonal value is mapped to a darker output value than its input, except for the very brightest and darkest parts (the lower-left and upper-right end points of the curve), which stay in place. Since the whole curve has been dragged down, the overall picture is darkened.

2) This is the opposite of the first example: the whole picture becomes lighter.

3) This time two points are used, which makes the darker parts of the picture even darker and the lighter parts even lighter (except, once again, for the very brightest and darkest parts, since the end points have not moved). The midtones, on the other hand, remain relatively unaltered. This is a way of increasing the contrast without changing the white point and black point, and it is known as the S-curve.

4) This is the opposite of the third example and decreases the contrast.

5) This is a way of adjusting the black and white points, and it increases the contrast just as the third example did. The big difference, however, is that by moving the black and white points you may discard information. In this case the blacks become a bit too dark but are still acceptable; the whites, however, are washed out, which is something to avoid. The curve from the third example is the better choice most of the time.

6) Again the opposite of example five, decreasing the contrast.

 

Of course there are many more ways to use the curves tool: more points can be added and much more precise adjustments can be made. It is a very nice and useful tool, although it requires a fair amount of practice.
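
For those who like to see the mechanics spelled out, here is a minimal sketch of what a curve does, assuming a float image with values in [0, 1]. Photoshop interpolates the control points with smooth splines; straight line segments are used here to keep the idea visible.

```python
import numpy as np

def apply_curve(img, in_pts, out_pts):
    """Map input tones (horizontal bar) to output tones (vertical bar),
    interpolating linearly between the control points."""
    return np.interp(img, in_pts, out_pts)

# Example 3, the S-curve: darks pulled down, lights pushed up,
# black and white points (0 and 1) left where they are.
img = np.linspace(0.0, 1.0, 256)  # stand-in for a float image in [0, 1]
s_curved = apply_curve(img, [0.0, 0.25, 0.75, 1.0], [0.0, 0.15, 0.85, 1.0])
```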

 

Contrast mask

 

Using a contrast mask => [original] [contrast mask] [contrast mask applied (69%)]

When a picture is very contrasty, a contrast mask can be applied to lower the contrast. It is created by copying the original picture into a new layer, desaturating that layer (making it black and white) and inverting it. The contrast mask is then blended with the original picture in the "overlay" blending mode, and its opacity is decreased to obtain the desired effect.

Because this slightly decreases the sharpness of the picture (as it is in some way the opposite of applying an unsharp mask), the contrast mask needs to be slightly blurred using a Gaussian blur. This will restore the original sharpness of the picture.

On the left is an example of applying a contrast mask.
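
Here is the recipe above as a minimal sketch in code, assuming a float RGB image in [0, 1]; the blur radius and the 69% opacity from the example are just illustrative starting points.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def overlay(base, blend):
    """Photoshop-style "overlay" blend for float values in [0, 1]."""
    return np.where(base < 0.5, 2 * base * blend, 1 - 2 * (1 - base) * (1 - blend))

def contrast_mask(img, blur_sigma=10.0, opacity=0.69):
    luma = img @ np.array([0.299, 0.587, 0.114])    # desaturate
    mask = gaussian_filter(1.0 - luma, blur_sigma)  # invert, then blur
    blended = overlay(img, mask[..., None])         # overlay blend
    return (1 - opacity) * img + opacity * blended  # reduced opacity
```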

 

Local contrast enhancement (LCE)

 

Local contrast enhancement (LCE) is a great way of enhancing the clarity of a picture, and it is very easy to do using either the unsharp mask or the high pass filter, both of which are described below under sharpening. Sharpening works by enhancing the acutance, but if we use a large radius and a small amount (as opposed to the small radius and relatively large amount used for sharpening), then the same effect is spread out over a much larger area, which gives a boost in contrast around edges.

The great thing is that this only applies to edges: in the example below on the left you can see that the contrast is larger around the edge, but the middle of the circle and the outer parts of the picture are not altered (which is why it is called local contrast enhancement)! Compare this to what adding regular (global) contrast does: it darkens the whole circle evenly and lightens the rest.
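
As a sketch, local contrast enhancement is then just an unsharp mask with the radius and amount swapped around, assuming a float image in [0, 1]; the exact values are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_contrast_enhance(img, radius=50.0, amount=0.2):
    """Unsharp mask with a large radius and a small amount = LCE."""
    blurred = gaussian_filter(img, sigma=(radius, radius, 0))  # blur spatial dims only
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)
```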

 

The reason this works is that our brain is very sensitive to sharp transitions, but not to slow gradients. The brain immediately sees a stronger transition in the local contrast enhancement example, while the gradients in the other parts can only be seen by looking closely and will not even be visible in most photos.

 

There is one thing to keep in mind with this technique: the enhancement in contrast can also clip very bright and very dark parts. In that case you can of course lower the contrast enhancement, but there is a better solution, which is the "blend if" functionality in Photoshop (found under "layer style/blending options").

You'll need two layers, where the background is the original and the second layer is the one with local contrast enhancement applied. If you adjust the "blend if" slider so that the brightest parts (and/or the darkest parts, if needed) of the enhanced picture are not used, this solves the problem in most cases. There is, however, the possibility of artifacts, so always keep an eye on the result! Below on the right is again an example, where I exaggerated the local contrast enhancement for demonstration purposes. Parts of the clouds are washed out after applying local contrast enhancement, but using the "blend if" function recovers the clouds and some of the darker parts of the land.
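
As I understand it, "blend if" boils down to fading out the top layer based on the luminosity of the underlying layer, with the split sliders providing a smooth ramp instead of a hard cutoff. A minimal sketch of the highlights case, with illustrative threshold values:

```python
import numpy as np

def blend_if_highlights(base, enhanced, start=0.8, end=0.95):
    """Use `enhanced` everywhere except where `base` is very bright."""
    luma = base @ np.array([0.299, 0.587, 0.114])
    # Weight is 1 below `start`, fading smoothly to 0 at `end` and above
    weight = np.clip((end - luma) / (end - start), 0.0, 1.0)[..., None]
    return weight * enhanced + (1 - weight) * base
```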

Simplified display of contrasts => [original] [global contrast added] [LCE applied]

 

Luminosity masks

 

Luminosity masks are a really fantastic way to get the most out of your pictures. It's definitely more time consuming and it takes a while to get the hang of it, but it's worth it! Credit where it's due: there is already a good explanation of luminosity masks written by Tony Kuyper, so I won't repeat it here.

 

Dodging/burning

 

Example of dodging and burning => [original] [D/B layer] [D/B applied]

This is another technique that gets its name from the darkroom. Dodging and burning are processes that can be used to selectively darken (burning) or lighten (dodging) parts of an image, and are an integral part of darkroom processing.

 

The dodging and burning tools are available in most editing programs, but I personally prefer working with a grey layer set to "overlay" mode. This way you edit the image non-destructively, and you can always go back to change or undo something. The layer is created by simply adding a layer filled with 50% grey and setting its blending mode to "overlay".

Painting with black and white on this layer has the same effect as burning and dodging, respectively. This works best for me with the opacity of the brush down to a couple of percent (usually 1-5%). If I want the effect to be stronger, I apply several strokes at that spot. By painting with 50% grey, you can undo any dodging or burning you want to get rid of. Finally, the effect of the dodge/burn layer can also be adjusted by changing the opacity of the layer itself.
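
A minimal sketch of this grey-layer technique, assuming float images in [0, 1]: a dodge/burn layer value of 0.5 is neutral, values above 0.5 dodge (lighten) and values below 0.5 burn (darken), exactly as the overlay blend works.

```python
import numpy as np

def apply_dodge_burn(img, db_layer, layer_opacity=1.0):
    """img: float RGB in [0, 1]; db_layer: float (H, W), 0.5 = neutral grey."""
    db = db_layer[..., None]
    # Overlay blend: 0.5 leaves the image untouched
    overlaid = np.where(img < 0.5, 2 * img * db, 1 - 2 * (1 - img) * (1 - db))
    return (1 - layer_opacity) * img + layer_opacity * overlaid
```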

 

I added a fair amount of dodging and burning to the example on the left, to demonstrate what it does.

 

Sharpening

 

Sharpening an edge => [original] [ideal] [sharpened]

Since some blurring occurs when taking a digital picture due to the anti-aliasing filter (and some of the processing algorithms), the picture needs to be sharpened afterwards. If you shoot RAW you'll need to do this yourself; otherwise your camera will do it for you. Unfortunately, sharpening does not really enhance the sharpness of a picture, but rather the apparent sharpness (or acutance). If you look at the example on the right, you'll see that the original is a bit blurry, and ideally we would like to obtain a sharp edge as in the "ideal" example. That is impossible to do afterwards, so we'll have to compromise and enhance the acutance instead, as in the "sharpened" example. As you can see, it is very easy to get all kinds of halos when sharpening (I deliberately oversharpened here to show the halo formation), so the compromise is to sharpen enough that the picture looks sharp, but not so much that halos become apparent. There are several ways of sharpening a picture, and I will discuss two here.

 

Sharpening - unsharp mask

 

Sharpening using an unsharp mask => [original] [gaussian blur] [difference] [sharpened]

The method most often used is the unsharp mask (USM). The reason a sharpening method is called an UNsharp mask is that an unsharp version of the picture (made by applying a Gaussian blur) is needed to determine where the edges are in the original. The difference between the original and the blurred picture tells the software where the edges are (the white parts in the "difference" example on the left) and, accordingly, where to enhance the acutance.

 

This whole process is done by the software itself, and we only need to choose these three parameters (a sketch follows the list):

• Amount: Speaks pretty much for itself, this is the amount of sharpening that is applied.

• Radius: This is the radius that will be applied to get the blurred layer. A larger radius will give broader acutance enhancements.

• Threshold: This is the minimum difference between the original and the blurred layer that will be regarded as an edge.
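
A minimal sketch of the unsharp mask with these three parameters, assuming a float image in [0, 1]. The radius here is the Gaussian sigma, which is defined slightly differently from Photoshop's radius, and the threshold is on the same [0, 1] scale.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, amount=1.0, radius=2.0, threshold=0.0):
    blurred = gaussian_filter(img, sigma=(radius, radius, 0))
    diff = img - blurred                      # the "difference" layer
    edge = np.abs(diff) >= threshold          # only treat this as an edge
    return np.clip(img + amount * diff * edge, 0.0, 1.0)
```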

 

Sharpening - high pass filter

 

Sharpening using a high pass filter => [original] [high pass filter] [sharpened]

Sharpening with a high pass filter gives results pretty similar to a USM, but it has the advantage of being less sensitive to noise, so for noisy pictures the high pass filter method might be preferable. In this case a bit more work is needed, since this is not a standard process in the software.

You start by duplicating your original layer and applying the high pass filter. Desaturate this layer, as you only want to apply sharpening on the luminosity levels. Then change the blending mode to overlay, which will give a sharpened version of your original.

 

The parameters in this process are (see the sketch after this list):

• Radius: Like with the USM method, a large radius will result in broader acutance enhancements.

• Opacity: Changing the opacity of the second layer (the one with the high pass filter) will also change the amount of sharpening.

• Blending mode: This is optional, but by changing the blending mode to 'soft light' or 'hard light', the sharpening can be decreased or increased, respectively.
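
A minimal sketch of this high pass workflow, assuming a float image in [0, 1]. The high pass layer is approximated as the desaturated original minus its blurred copy, centered on 0.5 the way Photoshop displays it, then blended back in overlay mode.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def high_pass_sharpen(img, radius=2.0, opacity=1.0):
    luma = img @ np.array([0.299, 0.587, 0.114])             # desaturated copy
    high_pass = 0.5 + luma - gaussian_filter(luma, radius)   # high pass layer
    hp = high_pass[..., None]
    # Overlay blend: an hp value of 0.5 (no detail) leaves the image unchanged
    overlaid = np.where(img < 0.5, 2 * img * hp, 1 - 2 * (1 - img) * (1 - hp))
    return np.clip((1 - opacity) * img + opacity * overlaid, 0.0, 1.0)
```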

 

Focus stacking

 

Focus stacking is a method for obtaining pictures with a larger depth of field, which is especially useful for macro photography. The depth of field in macro photography is often extremely small, and using a small aperture is not a good solution since diffraction kicks in pretty badly. Focus stacking makes it possible to use a suitable aperture and combine the pictures afterwards on the computer into a larger combined depth of field. More about the use of focus stacking in macro photography can be found under macro photography/focus stacking.

 

In order to do focus stacking, several pictures need to be taken, with the focus moved a small amount between each picture (either by adjusting the focus or by moving the whole camera setup on a rail). These are then combined on the computer with software that determines the sharpest parts of each picture and merges those into a single picture.
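
A minimal sketch of the combining step, assuming the frames are already aligned: local sharpness is estimated from the Laplacian response, and for every pixel the sharpest frame wins. Real stacking software also aligns the frames and blends the seams between them.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def focus_stack(frames):
    """frames: list of float RGB arrays in [0, 1] with identical shapes."""
    stack = np.stack(frames)                          # (N, H, W, 3)
    luma = stack @ np.array([0.299, 0.587, 0.114])    # (N, H, W)
    # Smoothed absolute Laplacian as a per-pixel sharpness measure
    sharpness = np.stack([gaussian_filter(np.abs(laplace(l)), 3) for l in luma])
    best = sharpness.argmax(axis=0)                   # sharpest frame per pixel
    return np.take_along_axis(stack, best[None, ..., None], axis=0)[0]
```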

The picture below on the left shows a macro shot of a weevil, which is a focus stack of 35 photos. You can see a big difference between the focus stack and the single exposure, where a lot of the weevil is out of focus. I also included a GIF with the sequence of the 35 photos to clarify the process a bit.

An important thing with focus stacking is that the shifts in focus between the separate photos should not be too big, because that gives blurry transitions in the end result. Below on the right is an example of this: the photo with bigger steps has several blurry bands, which is not what we want. It's better to take steps that are too small than too big: steps that are too small only give you more photos to work with while still producing a nice end result, whereas steps that are too large will not result in a good photo at all.

Example of stacking => [single exposure] [focus stacked]

The influence of the step size => [whole picture] [small steps] [large steps]

 

Besides macro photography, focus stacking can also be very useful in, for example, landscape photography. Below is an example where I wanted both the foreground flowers and the background mountains in focus. Accomplishing that with one single picture would have meant a very small aperture, which would have resulted in decreased overall sharpness due to diffraction. So, instead, I made four exposures with the focus shifted for each of them. This gave me a photo that is sharp from front to back, without using a small aperture.

Below to the right are close-ups at 100% of both the foreground and the background. In both cases the top half is the single exposure, and the lower half the focus stack version. The background is slightly sharper in the focus stack, but that difference is negligible. The foreground on the other hand is a lot sharper in the stacked version.

The whole picture

Details at 100% => [foreground] [background]

 

In addition to giving sharpness throughout the whole photo, focus stacking is also very effective at blurring out backgrounds. In the example below to the left, the focus stacked version is sharp from top to bottom with a nicely blurred background. If complete focus of the twig had been obtained with a single exposure at a small aperture, the background would have been too detailed, taking attention away from the twig. On the other hand, a single picture with a large aperture would give a nice background, but then only part of the twig would be in focus. So in this case only a focus stack gives the desired result.

 

Still, focus stacking is not some magic procedure that simply makes everything better, and below on the right is an example. The slime mold looks detached from the background in the single picture, which directs the attention to the slime mold. The focus stacked version, however, has lost all sense of depth and the background has become very distracting. So in this case the single picture is the one I prefer. For this reason I often make what I call partial focus stacks, where only part of the subject is focus stacked and therefore in focus, but there is still depth in the photo because of the out-of-focus parts.

Single picture vs focus stack => [single picture] [focus stack]

 

Stitching

 

A panorama made up of seven photos => [panorama] [separate pictures]

If you want to increase the angle of view but don't have a lens with a smaller focal length at hand, it is also possible to stitch several photos together afterwards on the computer with special software. As long as each photo has some overlap with the next, it should be possible to stitch the pictures. The quality of the resulting stitch does depend quite strongly on the software; some programs give better results than others.
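
If you want to experiment with stitching in code, OpenCV ships a high-level stitcher that handles the feature matching, alignment, and blending in one call; the file names below are placeholders.

```python
import cv2

# Seven overlapping shots, taken left to right (placeholder file names)
images = [cv2.imread(f"pano_{i}.jpg") for i in range(1, 8)]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(images)
if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", panorama)
```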

To the right is an example of a panoramic picture, consisting of seven pictures, and below is a vertical stitch of three pictures.

A stitch made up of three photos => [stitch] [separate pictures]

 

For making stitches it is important to have a fixed exposure for all pictures, so shoot in manual mode. Don't let the camera determine the exposure for each individual photo separately, as this will cause problems when stitching due to the differences in brightness. Determine the correct exposure along the whole scene and select a good average exposure for all the pictures. The same goes for the focus: determine a good focus point, focus on it manually, and don't change the focus while shooting the stitch.

 

Be careful when using a polarization filter for panoramas, as the degree of polarization depends on the angle to the sun. Since a panorama will most likely span a large angular view, a polarizer can give bad results. It is not impossible, but be careful with it.

 

The influence of the parallax error => [with parallax error] [without parallax error]

Also be careful when using wide angle lenses for stitches, because they might have some distortion, which makes stitching a bit more problematic. It's probably better to use a slightly longer focal length and take a few more pictures instead.

 

While it is possible to make the shots handheld, it is a lot better to use a tripod for several reasons. The point of rotation is kept at the same place when rotating, it is easier to control the amount of rotation for each photo, and lastly, as the pictures are all perfectly aligned it is much easier for the software to stitch them, giving better results.

 

When lighting conditions are changing rapidly (cloud formations drifting by, sunset, etc.) it can be tricky to get the photos nailed and shooting the pictures quickly after each other is the way to go in these situations.

 

There is one big issue with stitching: the parallax error. When there are no objects at a close distance in the pictures this problem is negligible, but with close objects it is definitely something to take into account. The problem is that a camera rotated on a tripod rotates around a point a small distance behind the center of perspective (which is located somewhere in the lens).

In the picture on the right there are parts of two pictures taken while simply rotating the camera on the tripod. The tree in the distance just left of the foreground tree is at a completely different distance from the foreground tree in the two pictures, which is caused by the parallax error. It is clear that even the most expensive stitching software will have great difficulty stitching these photos together properly. However, when the set-up is corrected for the parallax error, you can see that the distance between the two trees remains the same, no matter how much you rotate the camera.

 

The solution is to rotate the camera around its center of perspective, which can be found by looking directly into your lens, as in the pictures below on the left. The position of the image of the aperture that you see when looking into the lens is the center of perspective. It will probably be situated somewhere in the front of the lens, and if you make sure to rotate around that point, then the resulting pictures are free of parallax errors.

 

In order to do this, you need to mount the camera differently on the tripod, and there are many devices available for that. If you don't want to spend a lot of money, you can make your own; there are several tutorials for this scattered around the internet. Below on the right is my own homemade creation, which is not as stable as I had hoped, but it does the job. I rarely take it with me, though, because I don't have any space left in my camera bag.

Uncorrected for parallax error => [front view] [rotated]
Corrected for parallax error => [front view] [rotated]

My homemade version of a pano head

 

Stitching - bokehrama

 

A special form of stitching is the bokehrama, which simulates the effect of shooting with a very large aperture, giving a very shallow depth of field. Instead of taking a single photo, you take a lot of photos with a longer focal length, with the aperture wide open. If you stitch those into a single picture, you get a photo that looks similar to one taken with a much smaller focal length and a very large aperture, giving a very shallow depth of field that is otherwise difficult to achieve. There is, however, one major difference between a bokehrama and a comparable single photo: the plane of focus in the bokehrama is curved, whereas it is not for the single photo. The plane of focus becomes curved because you rotate the camera around the nodal point for each consecutive photo.

 

Below are two examples, where the single pictures were both taken with a 14 mm lens with the aperture wide open at f/2.8. The bokehramas were both taken with a 50 mm lens wide open at f/1.8, using several tens of pictures to get a field of view similar to the 14 mm photo. In both cases the resulting photos approximately correspond to a single image taken with a 14 mm lens at an aperture of f/0.5, which is why they have such a small depth of field. The effect is especially strong in the example below to the right, since the foreground tree, on which I had focused, was so close to the camera.

Bokehrama stitching => [single photo] [bokehrama stitch]

 

Blending

 

Combining photos to simulate a long exposure => [single picture] [ten pictures combined]

Blending pictures can be done for several reasons, like decreasing noise (see bits and bytes/noise), or making star trails from several consecutive photos (as described in photographing the sky/star trails).

 

It can also be done to simulate a longer exposure on, for example, a bright sunny day, when even a small aperture results in a short shutter speed. If you don't have access to an ND filter, blending might be a solution. It is done by simply taking multiple exposures of the same scene and blending them so that the opacity of each layer is set to 100/(layer number) percent: layer 1 => 100% opacity, layer 2 => 50%, layer 3 => 33%, layer 4 => 25%, etc. If ten pictures, each with a shutter speed of 1/10 s, are blended this way, the result looks like it was taken with a shutter speed of 1 s. The picture to the right shows the difference between a single exposure and a blended picture.
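
The opacity recipe is just an incremental way of computing the average of all frames: layer 2 at 50% gives the mean of two frames, layer 3 at 33% the mean of three, and so on. In code the whole stack therefore collapses to a single mean, assuming aligned float images.

```python
import numpy as np

def simulate_long_exposure(frames):
    """frames: list of float RGB arrays in [0, 1], same scene, tripod-aligned.
    Averaging N frames of 1/N s each approximates one exposure of 1 s."""
    return np.mean(np.stack(frames), axis=0)
```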

 

Another reason to blend photos is when a scene has a large dynamic range and filters are not an option. In that case, you can take several pictures of the same scene with different exposures so as to correctly capture all parts of the scene, and combine those exposures afterwards. It is similar to the high dynamic range process described below, but the results are often more realistic and you have more control over the process (and it's a lot more fun to do!).

 

Combining different exposures => [light exposure] [medium exposure] [dark exposure] [blended]

The image below to the left is an example of exposure blending, where three pictures were combined (with one stop difference between the pictures). The light photo has a good exposure for the shadow areas, but the light areas are washed out and featureless. The dark photo has a good exposure for the bright highlights, but in this case the shadows are too dark. Combining the pictures gives the blended photo where the exposure is more balanced.
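
A minimal sketch of an automatic version of this blend: each exposure is weighted by how close its tones are to mid-grey, so every region is taken mostly from the exposure that captured it best. A real manual blend replaces this automatic weighting with hand-painted or luminosity masks; the Gaussian weighting function is my own illustrative choice.

```python
import numpy as np

def blend_exposures(exposures, sigma=0.25):
    """exposures: list of aligned float RGB arrays in [0, 1]."""
    stack = np.stack(exposures)                        # (N, H, W, 3)
    luma = stack @ np.array([0.299, 0.587, 0.114])     # (N, H, W)
    # Favor well-exposed (mid-grey) pixels, penalize clipped ones
    weights = np.exp(-((luma - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True)      # normalize per pixel
    return (stack * weights[..., None]).sum(axis=0)
```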

 

Two things are important when making pictures for exposure blending. First, only adjust the shutter speed as adjusting the aperture will also alter the depth of field, and second, use a tripod, as that prevents you from having to align the pictures prior to combining them.

 

If you shoot RAW, it is also a good option to create several photos with different brightness from the same RAW file. Combining these often gives good results as well! The advantage is that you don't need to worry about moving objects like branches or clouds; the disadvantage is that the camera needs to capture the whole dynamic range in one exposure, which is not always possible, although modern cameras have a very impressive dynamic range and handle contrasty scenes remarkably well.

 

High dynamic range (HDR)

 

Combining different exposures with the HDR technique =>
[light exposure] [medium exposure] [dark exposure] [HDR blended]

If a scene has more contrast than the dynamic range of your camera can record and filters are not a suitable solution, then high dynamic range photography might also be an option. Once again, this means taking several pictures of the same scene with different exposures and combining them afterwards.

 

In general, producing a high dynamic range photo goes as follows: the software first combines the pictures into a 32-bit high dynamic range image, but since most screens (which are generally 8 bit) can't display this, tone mapping is applied to convert the 32-bit file into an 8-bit file. (So most pictures labelled as high dynamic range are not true high dynamic range, but tone-mapped versions of a high dynamic range original.) Along the way there are several options to adjust settings to get the picture you want.
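
The same pipeline can be sketched with OpenCV's HDR module: the bracketed exposures are merged into a 32-bit image and then tone mapped back to a displayable range. The file names and shutter speeds below are placeholder assumptions.

```python
import cv2
import numpy as np

images = [cv2.imread(f) for f in ("dark.jpg", "medium.jpg", "light.jpg")]
times = np.array([1/250, 1/60, 1/15], dtype=np.float32)  # shutter speeds in seconds

merge = cv2.createMergeDebevec()
hdr = merge.process(images, times)          # 32-bit float HDR image

tonemap = cv2.createTonemapReinhard(gamma=2.2)
ldr = tonemap.process(hdr)                  # tone mapped back to roughly [0, 1]
cv2.imwrite("hdr_result.jpg", np.clip(ldr * 255, 0, 255).astype(np.uint8))
```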

 

In my opinion, high dynamic range is very capable of screwing up a picture, but apparently the same effect that I despise is something many people like. Just do a Google search on high dynamic range and look at the results. The horror... the horror...

I have found that I get the best results when using high dynamic range very modestly, but then again, I am not a big fan and hardly ever use it. I find that if I need to blend several pictures, then doing it manually (as described above) gives far superior results.

 

The same two things that were important for exposure blending are important in this case. So only adjust the shutter speed (adjusting the aperture will also alter the depth of field) and use a tripod (prevents you from having to align the pictures prior to combining them).

 

Dust removal

 

Most of the time, there will only be a few dust particles on your sensor, and removing them one by one is not a daunting task. My infrared cameras, however, I converted myself, and I ended up with quite a lot of dust on the sensors as a result. Since this dust is sandwiched between the sensor and the infrared filter, there is no way to remove it by cleaning. And since it got a bit on my nerves to painstakingly clean my infrared pictures with the dust removal tool, I experimented to find a way to remove larger amounts of dust more easily. It took some time, but I have found a way that seems to work pretty well, and it is a lot faster than removing every single dust bunny. I apply it only to my infrared photos, but it will work on normal photos as well.

 

Here's how I do it: when you take your photo, you take a dust reference photo as well. It is very important that the aperture and focal length of this reference photo are identical to those of your normal photo!

Below to the left are examples of how the aperture changes the look of dust bunnies on photos. The smaller the aperture, the more evident the dust particles will be, and at larger apertures the dust is hardly visible.

The influence of the focal length on the appearance of dust is shown below to the right. Light will hit the sensor at a slightly different angle for different focal lengths so the position of the particles on the photo will also change slightly as a result. This change is most significant for small focal lengths.

Influence of the aperture on dust => [f/22] [f/16] [f/11] [f/8] [f/5.6]

Influence of the focal length on dust => [10 mm] [24 mm]

 

The focus distance also makes a small difference in how the dust appears on the sensor, but not at all as significantly as focal length and aperture. So a reference photo taken at a different focus distance will probably still work fine.

Removing dust bunnies => [original] [dust reference photo] [dust removed]

 

Capture the reference photo by shooting an evenly lit surface with as little detail as possible (a uniform grey or blue sky, snow, etc.) while moving and rotating the camera around. The movement is needed to get a reference photo that is as smooth as possible (it blurs any details in the background), and the longer the shutter speed of the reference photo, the easier this is to achieve.

Having the background for the reference photo out of focus further improves the reference photo by blurring background details, but it is not a requirement as long as there is sufficient motion blur due to the moving around of the camera.

 

On the computer, apply the same RAW settings to both photos and open them both in Photoshop (do this procedure on 16-bit files, not 8-bit files, because the latter can give ugly posterization!). Copy the reference photo as a new layer on top of the normal photo and apply a high pass filter to the reference layer (the radius of the high pass filter can be played around with, but so far 200 pixels seems to work best for me). Invert the reference layer and desaturate it, then change its blend mode to "linear light" and its opacity to 50%. Now most or all of your dust bunnies should have disappeared. An example of this procedure is on the left.
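
A minimal sketch of this recipe in code, assuming both photos are loaded as float RGB images in [0, 1]. I desaturate before the high pass rather than after, which comes down to the same thing for this purpose; the "linear light" blend at 50% opacity then cancels the dark dust shadows.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def linear_light(base, blend):
    """Photoshop-style "linear light" blend; a blend value of 0.5 is neutral."""
    return np.clip(base + 2 * blend - 1, 0.0, 1.0)

def remove_dust(photo, reference, radius=200.0):
    ref_luma = reference @ np.array([0.299, 0.587, 0.114])          # desaturate
    high_pass = 0.5 + ref_luma - gaussian_filter(ref_luma, radius)  # high pass
    inverted = 1.0 - high_pass                                      # invert
    layer = inverted[..., None]
    # 50% opacity: halfway between the original photo and the full blend
    return 0.5 * photo + 0.5 * linear_light(photo, layer)
```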

 

As long as the dust does not move around on your sensor, you can reuse the dust reference photos, provided the aperture and focal length match those of the normal photos. But if you want to be sure, just take a new reference photo. Better safe than sorry!

 
