
Sensor

 

Foveon type sensor versus Bayer type sensor

The principle of a digital sensor is that when a photon hits a pixel, an electron is released. All the electrons released during the exposure are stored, which creates a voltage. After the exposure is done, this voltage is converted to a digital value, and all these values together form the information needed to create the picture.

 

There are basically two main types of sensors at the time of writing (2016): the Bayer type sensor and the Foveon type sensor. Only Sigma delivers cameras with the Foveon type, so the Bayer type is by far the predominant one. The difference between the two can be seen in the picture on the left.

 

So the Bayer type has one color for each pixel, in a green:red:blue ratio of 2:1:1, which is based on the fact that the human eye is most sensitive to green. The Foveon type, however, has three layers on top of each other and detects all three colors at each pixel site (the red and green light is not blocked by the blue layer, and the red light is not blocked by the green layer).

 

This difference in design has some consequences. First, because the Bayer type detects only one specific color at each pixel site, it only uses about 1/3 of the light that hits the sensor (1/2 of the green light, 1/4 of the red and 1/4 of the blue light), while the Foveon type uses all available light. Furthermore, a green pixel in a Bayer type sensor has no information about the blue and red channels, so the values of the other colors for that pixel are estimated with an interpolation algorithm, based on the information of the neighboring pixels (which causes some softening). This method has its flaws, as a 10 mp sensor won't produce a true 10 mp picture (more on this below under "effective vs actual pixels"). But Bayer type sensors have been optimized for a long time, and this is hardly an issue anymore.
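
To make the interpolation a little more concrete, here is a minimal sketch of plain bilinear demosaicing for an assumed RGGB Bayer layout. Real cameras use far more advanced (and proprietary) algorithms, so this is only an illustration of the idea:

    import numpy as np
    from scipy.ndimage import convolve

    def demosaic_bilinear(mosaic):
        """mosaic: 2-D float array of raw values in an RGGB Bayer layout."""
        h, w = mosaic.shape
        r_mask = np.zeros((h, w))
        r_mask[0::2, 0::2] = 1.0            # red samples: even rows, even columns
        b_mask = np.zeros((h, w))
        b_mask[1::2, 1::2] = 1.0            # blue samples: odd rows, odd columns
        g_mask = 1.0 - r_mask - b_mask      # green everywhere else (2 of every 4 pixels)
        k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0    # green: average of 4 direct neighbors
        k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0   # red/blue: sparser sample grid
        r = convolve(mosaic * r_mask, k_rb, mode="mirror")
        g = convolve(mosaic * g_mask, k_g, mode="mirror")
        b = convolve(mosaic * b_mask, k_rb, mode="mirror")
        return np.dstack([r, g, b])         # estimated full-color image

    raw = np.random.rand(8, 8)              # stand-in for raw Bayer data
    rgb = demosaic_bilinear(raw)            # 8 x 8 x 3: every pixel now has three values

Because every missing value is an average of its neighbors, the result is slightly softer than a sensor that measures all three colors at each site, which is exactly the softening mentioned above.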

 

Sensor - effective vs actual pixels

 

Example of recovering hidden pixels => [original] [hidden pixels recovered]

A sensor is often described as having a higher number of actual pixels than effective pixels, which has several causes. First, parts of the outer edges of the sensor are covered and do not receive any light. These pixels are used as black references.

Second, because of the interpolation process described above for Bayer sensors, some more pixels are lost. Since each pixel only has information for one of the three colors, algorithms use information from the neighboring pixels to approximate the other two colors for that pixel. But the outermost rows and columns have fewer neighbors than pixels elsewhere on the sensor, so their values are less accurate, being based on a smaller number of neighbors. For this reason, these pixels are removed.

 

A funny thing is that these hidden pixels can be recovered from the RAW file! You need to download a program called "DNG Recover Edges" (written by none other than Thomas Knoll) and save your RAW file as a DNG. The program then recovers the hidden pixels from the DNG file, like in the example on the right. The original has a size of 4256 × 2832 pixels (12,052,992 pixels), and the recovered picture has 4284 × 2844 pixels (12,183,696 pixels), so 130,704 extra pixels are recovered. It is not a lot, but it might save your picture if you composed it too tightly! Of course, these recovered edges may be of lower quality.

 

 

Sensor - aliasing

 

Moiré in a strap of my backpack

When very fine details are present in the picture, a sensor can give rise to an interference pattern known as aliasing.

The examples below show what happens: the top row is the light that arrives at the sensor as a black and white pattern, the blueish row represents the individual pixels, and the bottom row is what the pixels will detect. Different patterns emerge as the black and white details start to approach the pixel pitch.

Examples of aliasing => [example 1] [example 2] [example 3] [example 4]
[example 5] [example 6] [example 7] [example 8] [example 9]

This is only a monochromatic example, but in the case of Bayer-type camera sensors, this effect can take place for red, green and blue, giving colorful patterns where there aren't any.
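
The monochromatic examples can also be imitated with a short simulation: a black and white stripe pattern is averaged over pixels of a given pitch, and as the stripe period approaches the pixel pitch, the recorded values start to form a slow false pattern. This is only a rough sketch with arbitrary numbers:

    import numpy as np

    def sample_stripes(stripe_period, pixel_pitch=1.0, n_pixels=40, oversample=1000):
        """What each pixel records for a 50/50 black and white stripe pattern."""
        width = n_pixels * pixel_pitch
        x = np.linspace(0.0, width, n_pixels * oversample, endpoint=False)
        light = (np.floor(x / (stripe_period / 2.0)) % 2).astype(float)   # 0 = black, 1 = white
        return light.reshape(n_pixels, oversample).mean(axis=1)           # average per pixel

    for period in (8.0, 2.5, 2.1, 1.9):            # stripe period in units of the pixel pitch
        print(period, np.round(sample_stripes(period)[:12], 2))
    # Coarse stripes (period 8) come out as clean blocks of 0 and 1; near a period of
    # 2 pixels, a slow beating pattern appears that is not present in the real stripes.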

 

To prevent this from occurring too much, an anti-aliasing filter is placed in front of the sensor, which effectively softens the picture a bit (and, together with the interpolation algorithm, is the reason why sharpening is applied to pictures afterwards).

 

On the left is a classic example of a moiré pattern as a result of aliasing. This is a uniformly colored strap of my backpack, but the pattern of the strap confuses the camera and causes it to show a moiré pattern where there is none in real life.

Sensor - pixel pitch

 

The pixel pitch is, as the name suggests, the distance between the pixels on your sensor. Calling this the pixel size is, however, not entirely correct, as there is some unused space between two pixels, making the actual pixel size smaller than the pixel pitch.

 

In general, a larger pixel pitch gives a better signal to noise ratio and will therefore produce less noise and a higher dynamic range, because a smaller pixel receives fewer photons and its signal needs to be amplified to a greater extent. For this reason, a 10 mp photo from a full frame sensor will look less noisy than a 10 mp photo from a compact camera (with, for example, 1/2 the width and height of full frame) when viewing both at the same enlargement, for example 100% on screen.

 

However, this is not an entirely fair comparison, because the pixels are not only smaller but are also spread over a smaller surface. If you take this into account, the two photos should be compared at 100% for the full frame and 50% for the small sensor, which will in most cases give fairly similar results.

Pixel pitch calculator => [sensor dimensions (mm)] [number of megapixels] [pixel pitch (µm)]
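
The calculation behind the calculator above is presumably something like this; a small sketch, assuming square pixels and ignoring the covered border pixels:

    import math

    def pixel_pitch_um(sensor_w_mm, sensor_h_mm, megapixels):
        """Approximate pixel pitch in micrometres."""
        area_um2 = (sensor_w_mm * 1000.0) * (sensor_h_mm * 1000.0)   # sensor area in square µm
        return math.sqrt(area_um2 / (megapixels * 1e6))              # side of one (square) pixel

    print(round(pixel_pitch_um(36.0, 24.0, 24), 1))    # full frame, 24 mp: about 6.0 µm
    print(round(pixel_pitch_um(23.5, 15.6, 24), 1))    # APS-C, 24 mp: about 3.9 µm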

 

Shutter

 

The shutter is the most important way to adjust the exposure over a wide range. The principle behind most shutters is a pair of curtains blocking the sensor from incoming light as long as they are closed. When pressing the button, the first curtain opens; after a while, the second curtain closes the shutter again. For relatively long exposures, the curtain speed (the time it takes the shutter curtains to cross the sensor) is more or less irrelevant, but for shorter exposures it matters. There are three different situations, depending on the chosen shutter speed. Imagine a hypothetical shutter with a curtain speed of one second; the three scenarios then look like this (in real life, most shutters have a curtain speed of about 1/250th of a second):

I) Shutter speeds longer than the curtain speed: If we take a shutter speed of three seconds, it will take one second for the first curtain to reach the other side. The shutter then remains fully open for two seconds, after which the second curtain starts to close.

II) Shutter speed equal to the curtain speed: In this case, the second curtain starts closing as soon as the first curtain has reached the other side.

III) Shutter speeds shorter than the curtain speed: In the case of, for example, a shutter speed of 1/20th of a second, the first curtain opens and, 1/20th of a second later, the second curtain already starts closing. The sensor is then only ever exposed through a narrow slit travelling between the two curtains.
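
The three scenarios can also be written down as a small calculation, using the hypothetical curtain speed of one second: for every row of the sensor (0 = where the curtains start, 1 = where they end) it gives the moment the first curtain uncovers it and the moment the second curtain covers it again.

    def row_exposure(row_position, shutter_speed, curtain_speed=1.0):
        """row_position between 0 and 1; all times in seconds."""
        opens = row_position * curtain_speed                   # first curtain passes this row
        closes = shutter_speed + row_position * curtain_speed  # second curtain passes this row
        return opens, closes

    for shutter in (3.0, 1.0, 1 / 20):                         # cases I, II and III
        first = row_exposure(0.0, shutter)
        last = row_exposure(1.0, shutter)
        fully_open = shutter >= 1.0                            # is the whole sensor ever uncovered at once?
        print(shutter, first, last, fully_open)
    # Every row is exposed for exactly the chosen shutter speed, but in case III
    # the whole sensor is never uncovered at the same time.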

Shutter speed > curtain speed (I) => [take photo]

Shutter speed = curtain speed (II) => [take photo]

Shutter speed < curtain speed (III) => [take photo]

Distortion when moving a camera fast with a short shutter speed =>
[short shutter speed (1/320 s)] [no movement] [long shutter speed (1/20 s)]

 

A curiosity, especially in the case of very short shutter speeds, is that when photographing a really fast moving object (or moving the camera really fast), the camera is aimed at a different place when the first part of the sensor is exposed than when the last part of the sensor is exposed. This causes the scene to be tilted unnaturally.

A famous example is the photo of a racing car by Jacques Henri Lartigue, where the back wheel looks tilted to the right, while the bystanders on the other side of the road are tilted to the left. The bystanders are tilted to the left because Lartigue panned the camera to the right in order to capture the racing car, and the wheel of the racing car is tilted in the other direction because he panned the camera more slowly than the car was moving.

 

On the right is an example of that, taken at 1/320 s while moving the camera from left to right. If you compare the picture taken with a short shutter speed to the one without movement, you can see that the trees are tilted anticlockwise, which means that the curtains travelled from the bottom to the top of the picture. If you take a picture with a long shutter speed while moving the camera more slowly to get the same amount of blur, the trees are not tilted, because the whole sensor is exposed at the same time during most of the exposure.
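
The amount of tilt can be estimated: while the curtains travel across the sensor, the image shifts sideways by the panning speed times the curtain travel time, and that shift over the frame height is the tilt. The numbers below are made up, just to show the idea:

    import math

    def tilt_angle_deg(pan_speed_px_per_s, curtain_travel_s, frame_height_px):
        """Angle by which vertical lines appear tilted (small-angle estimate)."""
        sideways_shift_px = pan_speed_px_per_s * curtain_travel_s
        return math.degrees(math.atan(sideways_shift_px / frame_height_px))

    # e.g. panning so the scene moves 15000 px/s, a 1/250 s curtain travel, 2832 px frame height:
    print(round(tilt_angle_deg(15000, 1 / 250, 2832), 1), "degrees")   # about 1.2 degrees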

 

Short shutter speed versus long shutter speed => [short (1/250 s)] [long (1/15 s)]

 

 

 

Another result of this is that a flash cannot be used when the shutter speed is shorter than the curtain speed. A burst from a flash is many times shorter than the curtain travel time, which means that when the flash fires, only the part of the sensor that is uncovered at that moment will be bright, while the rest will remain dark. The shortest shutter speed at which the whole sensor is uncovered at once is the so-called "flash sync speed", which is about 1/250 s for many cameras.

However, modern cameras have a function called "high speed flash sync", which enables pictures to be taken with a flash at shutter speeds shorter than the flash sync speed. The principle is that the flash does not give one single strong burst, but several weaker bursts for as long as the slit between the curtains is travelling across the sensor, which gives equal brightness across the whole picture.
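
A small sketch of why the flash sync speed matters, assuming a curtain travel time of 1/250 s and treating a normal flash burst as effectively instantaneous:

    def lit_fraction_single_burst(shutter_speed, curtain_travel=1 / 250):
        """Fraction of the frame lit by one (effectively instantaneous) flash burst."""
        if shutter_speed >= curtain_travel:
            return 1.0                              # the whole sensor is uncovered at some moment
        return shutter_speed / curtain_travel       # only a travelling slit is open

    print(lit_fraction_single_burst(1 / 60))        # 1.0: the whole frame gets the flash
    print(lit_fraction_single_burst(1 / 1000))      # 0.25: only a quarter of the frame is lit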

 

Apart from using the shutter speed to adjust the exposure, it is also very useful for creating effects with long exposures. For example, a stream of water will be captured "motionless" with a short shutter speed, while a long shutter speed will blur the water, creating a sense of movement, like in the examples on the left.

 

 

 

 

ISO

 

The ISO value is used for film as a measure of its sensitivity. Besides an increased sensitivity, a higher ISO value also means a larger grain size, which makes the pictures more grainy.

The same ISO standard has found its way into digital cameras, although it is not exactly the same thing. Basically, a digital sensor has only one real sensitivity, often called the base ISO or native ISO, and all other ISO values are derived from that native ISO. There is, however, a big difference in how the other ISO values are obtained. If we assume a native ISO of 100, then the whole-stop multiples of the native value (i.e. 200, 400, 800) are obtained by amplifying the analog signal of the sensor prior to the conversion to a digital signal. The other ISO values (like 125, 160, 250, 320 and 500) are obtained by digital tricks with the nearest multiple after the conversion. For example, ISO 500 is in reality pushed ISO 400, while ISO 320 is pulled ISO 400, which of course has some consequences.

A "pulled ISO" means that the brightness is digitally decreased from the nearest multiple, so ISO 320 is ISO 400 decreased by 1/3 of a stop. This discards the darkest and noisiest 1/3 of a stop, which is why pulled ISO values are often regarded as the cleanest (least noise), but the trade-off is that the dynamic range decreases by 1/3 of a stop compared to ISO 400. Similarly, a "pushed ISO" will generally look noisier.
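
The idea can be put into a small sketch, assuming a native ISO of 100 and analog gain only at whole-stop multiples (how a specific camera actually implements its intermediate ISO values differs per model):

    import math

    def describe_iso(iso, native=100):
        stops_from_native = math.log2(iso / native)
        analog_stops = round(stops_from_native)            # nearest whole-stop multiple
        analog_iso = native * 2 ** analog_stops            # this part is analog gain
        digital_stops = stops_from_native - analog_stops   # applied after the A/D conversion
        if abs(digital_stops) < 1e-9:
            return f"ISO {iso}: analog gain only"
        direction = "pushed" if digital_stops > 0 else "pulled"
        return f"ISO {iso}: ISO {analog_iso} {direction} by {abs(digital_stops):.2f} stop"

    for iso in (100, 125, 160, 200, 320, 400, 500):
        print(describe_iso(iso))
    # ISO 320 comes out as ISO 400 pulled by 0.32 stop, ISO 500 as ISO 400 pushed by 0.32 stop.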

 

Higher ISO values are usually said to give more noise, but that is not strictly true. The real reason is that when people increase the ISO value by, for example, one stop, they usually also decrease the exposure (the shutter and aperture settings) by one stop. That results in only half of the original amount of light hitting the sensor, which decreases the signal to noise ratio and thus gives more noise.

It may seem counterintuitive, but increasing the ISO value while maintaining the same exposure (thus resulting in a brighter picture) actually improves the signal to noise ratio, because higher ISO values have lower read noise levels. If an increased ISO value would simply increase the output signal, there would be no benefit compared to just increasing the brightness afterwards on the computer. But increasing the ISO value does not only increase the output signal, it also results in a higher signal to noise ratio, and a photo taken at a higher ISO is thus superior to a picture that has been brightened to the same level on the computer (more on this can be found under bits & bytes/ISO).

The downside of raising the ISO value is a loss of dynamic range. By doubling the ISO value, a full stop is lost at the bright end of the dynamic range; on the other hand, since the read noise is lower at higher ISO values, some dynamic range is gained at the dark end. However, the gain is smaller than the loss, so effectively the dynamic range decreases.
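
An illustration with made-up numbers: expressing the dynamic range as the number of stops between the full well capacity and the read noise, doubling the ISO halves the headroom (one full stop lost) while the lower read noise only wins back part of a stop.

    import math

    def dynamic_range_stops(full_well_e, read_noise_e):
        return math.log2(full_well_e / read_noise_e)

    base = dynamic_range_stops(60000, 4.0)      # base ISO: hypothetical 60000 e- well, 4 e- read noise
    doubled = dynamic_range_stops(30000, 3.0)   # doubled ISO: half the headroom, somewhat lower read noise
    print(round(base, 2), round(doubled, 2), round(base - doubled, 2))
    # about 13.87, 13.29 and 0.58: a net loss of roughly half a stop in this made-up example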

 

Stops

Interactive exposure example => [shutter: 1/125, 1/30] [aperture: f/8, f/16] [ISO: 200, 800]

A "stop" in photography is used to describe a relative difference in the brightness of light. Doubling the amount of light means a stop brighter; halving the amount of light means a stop darker. When referring to altering the aperture, the terms "stopping down" and "opening up" are often used, where stopping down means a smaller aperture (higher f-number) and opening up means a larger aperture (smaller f-number).

 

• For the shutter speed, every stop represents a factor 2 ×. An exposure time of 2 seconds gives double the amount of light compared to an exposure time of 1 second.

• For the aperture, every stop represents a factor 1.4 × (= √2) in the f-number and, as a result, a factor of 2 × means a difference of 2 stops, a factor of 2.8 × means a difference of 3 stops, etc. (for more information on the aperture see here).

• For the ISO value, every stop represents a factor 2 ×. An ISO value of 200 is twice as sensitive as an ISO value of 100.*

 

I've tried to clarify it with the example on the right. In reality this picture was taken with completely different settings, but let's suppose that it was taken at f/11, 1/60 and ISO 400. By changing the settings you can see what that does to the exposure and what the picture would look like with those settings.
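
For those who prefer numbers, the same idea can be put in a small calculation sketch, using the supposed baseline of f/11, 1/60 and ISO 400 (the helper function is of course just an illustration):

    import math

    def stops_difference(shutter, aperture, iso,
                         base_shutter=1 / 60, base_aperture=11.0, base_iso=400):
        """Positive result = a brighter picture than the baseline, in stops."""
        shutter_stops = math.log2(shutter / base_shutter)          # factor 2 per stop
        aperture_stops = 2 * math.log2(base_aperture / aperture)   # factor 1.4 in the f-number per stop
        iso_stops = math.log2(iso / base_iso)                      # factor 2 per stop
        return shutter_stops + aperture_stops + iso_stops

    print(round(stops_difference(1 / 30, 11, 400), 1))   # +1.0: twice the exposure time
    print(round(stops_difference(1 / 60, 22, 400), 1))   # -2.0: stopping down from f/11 to f/22
    print(round(stops_difference(1 / 15, 22, 800), 1))   # +1.0: two stops slower, two stops smaller, one stop higher ISO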

 

Now, in reality it's not that easy, because we can't just change these settings without affecting other things. Changing the shutter speed can introduce motion blur in moving parts of the picture, or the shutter speed may become too long to take the picture handheld. Changing the aperture changes the depth of field, and changing the ISO has an impact on both noise and dynamic range! All these things have to be taken into account when choosing the right settings, so there is a lot to think about....

 

* Strictly speaking, the ISO value is not part of the exposure, since it has no influence on the amount of light captured by the sensor. When keeping the exposure constant (the shutter speed and aperture settings), changing the ISO only changes the brightness of the resulting photo, and this is done electronically after the picture has been captured by the sensor.

 

Dynamic range (DR)

 

A scene with a large dynamic range

 

The dynamic range is the range of brightness levels a sensor can capture, and runs from the full well capacity of a pixel (the maximum amount of signal a pixel can record without being saturated) down to the lowest level where the signal to noise ratio is 1. It is usually given in stops or EVs (exposure values).
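
A small sketch of what that means in practice, comparing the contrast in a scene with a hypothetical sensor dynamic range; the luminance values are made up:

    import math

    def scene_range_stops(brightest, darkest):
        """Contrast between the brightest and darkest parts that should keep detail."""
        return math.log2(brightest / darkest)

    sensor_dr = 12                                    # hypothetical sensor: 12 stops
    scene_dr = scene_range_stops(40000.0, 5.0)        # e.g. sunlit sky versus deep shade (cd/m2)
    print(round(scene_dr, 1), "stops in the scene")   # about 13 stops
    print("fits in one exposure" if scene_dr <= sensor_dr else "shadows or highlights will clip")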

 

If the dynamic range of a scene exceeds the dynamic range that the sensor is capable of capturing, information will be lost, either in the shadows or in the highlights, depending on the exposure. The combination of our eyes and our brain gives us a much larger dynamic range than cameras have, and one of the most challenging things in photography is to capture the scene the way we see it.

A good way to mimic the lower dynamic range of a camera is to squint your eyes, which gives you an estimate of how the camera "sees" the scene.

 

On the left is an example of a scene with a dynamic range larger than the film was capable of capturing; the shadows on the left side have turned into featureless black.

 

 

 

 

 

How the camera sees the world

 

Getting the exposure right is probably the most important part of photography, but also one of the trickiest, since a camera is far less flexible than our eyes. Not only does the dynamic range of our eyes exceed the dynamic range of camera sensors (film, for that matter, is of course also a type of sensor), but the camera also measures the exposure in a very rigid way, which we need to compensate for.

The way a camera measures the light is that it always adjusts the exposure so that the result averages out to 18% grey (which is mid grey to us, since we perceive light differently than cameras do). The problem is that a camera has no way of telling a black object from a white object, so for a camera, a dark object in bright light looks the same as a light object in the shade. Regardless of the nature of the subject (dark or light), the camera will assume it is a midtone object and expose accordingly. But only a small part of the real world is midtone, so it is often necessary to overexpose or underexpose.

 

For example, if you want to take a picture of a bright white subject like the sunlit snow in the picture below on the left, the camera will still assume it is midtone, and it will choose settings that give an underexposed picture with grey snow. So in order to achieve a correct exposure, we need to overexpose by a certain amount, 1 stop in this case.

The same goes for dark objects: a dark object will also be regarded as midtone, and the photo will be overexposed so that the dark object ends up as a midtone. Take for example the burnt tree below on the right: if we let the camera determine the exposure, the picture will be overexposed, but if we apply an exposure correction of -1 stop, the exposure is fine.

Correcting for a bright scene => [standard camera settings] [1 stop overexposed]

Correcting for a dark scene => [standard camera settings] [1 stop underexposed]

 

So, although it sounds a bit counterintuitive, dark objects need to be underexposed and light objects need to be overexposed! Note that most cameras only measure luminance and are therefore colorblind, so even though grass is green, it is often still a pretty good 18% grey standard, depending on the light conditions.
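
The needed correction can be written as a small formula: how far the subject's brightness is from the 18% grey the meter aims for, in stops. A sketch, with the example values as rough assumptions:

    import math

    def exposure_compensation_stops(subject_reflectance, meter_target=0.18):
        """Correction needed so the subject keeps its own tone instead of turning mid grey."""
        return math.log2(subject_reflectance / meter_target)

    print(round(exposure_compensation_stops(0.36), 1))   # subject twice as bright as mid grey: +1 stop
    print(round(exposure_compensation_stops(0.09), 1))   # subject half as bright as mid grey: -1 stop
    print(round(exposure_compensation_stops(0.18), 1))   # green grass as an 18% stand-in: 0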

 

Then there is also the fact that exposing with a digital camera is different from exposing with film. If you want your pictures to contain as little noise as possible, you'll need to "expose to the right", which is described in detail under bits & bytes/expose to the right.

 

Tripod

 

The difference a good tripod can make => [bad tripod (2007-02-28)] [good tripod (2015-09-28)]

For me, a tripod is without doubt one of the most important pieces of equipment as it enables the use of the full range of shutter speeds without any limitation. Nowadays, I almost feel naked when I am not carrying my tripod with me, even though it means carrying a couple of extra kilograms on my shoulders.... I almost consider it a part of my camera, which is why it is included on this page, even though this page is about cameras.

 

Unfortunately, not just any tripod will do. Just take a look at the picture on the right, which was taken with a tripod I bought while on a very limited budget. Watching the lunar eclipse was really nice, but getting the slides back from the lab was a huge disappointment, and I found out once again why it was necessary to buy a good tripod. This picture is actually the best one of all the pictures I took that night, so it's clear that this was no award-winning tripod, especially since there was hardly any wind! 8.5 years later I did the same thing with a good tripod, and the difference is pretty obvious!

 

But after buying a real tripod I was amazed at how much my pictures improved. For me, there are several reasons why pictures improve when using a good tripod:

• Obviously, the sharpness of the pictures increases due to reduced camera shake.

• Because setting up the tripod takes some time and is a bit of a hassle, I think twice before taking a shot. This results in me taking fewer uninteresting pictures that are likely to be discarded anyway.

• If I do take the picture, it takes more time to do so, which makes me think more carefully about the shot and choose the right settings more accurately.

• Using a tripod means I'm no longer limited by the "reciprocal rule" (when shooting handheld, the shutter speed should be no longer than roughly 1/focal length); a shutter speed of 1/10 s or slower is no longer an obstacle, which gives more freedom in choosing your settings.

• Then there are some extra advantages, like the fact that working with gradual ND filters is a lot easier with a tripod.

 

If you want to make your tripod even more sturdy, you can weigh it down with something heavy, like the camera bag on the left. On the right is a different way of using a tripod: I was taking a focus stack of a red mushroom and had to use my tripod, but that also meant that I couldn't get low enough with my camera to achieve the point of view that I wanted. Therefore, I decided to use the weight of the tripod as the stabilizing factor and just put the whole set-up on its side. By carefully putting branches under my camera, I could get the point of view that I wanted, and the set-up was stable enough to complete the focus stack.

A suspended camera bag stabilizing the tripod with its extra weight







A different way of using a tripod

 

An even better way to get my camera to low points of view is shown below. I made a tripod extension out of aluminum, with a smaller tripod head mounted at the end of it. This gives me much more freedom to put my camera in low positions, and I can even position the camera lower than the tripod itself, as shown below on the right. This set-up is still stable enough for focus stacks and longer exposures.






Using the tripod extension

Using the tripod extension

 

But wait, there's even more! A tripod can also be used for other useful purposes! It can hold your cup of tea or coffee when it's winter and there is no other place to put it because of all the snow. Or, in combination with a hiking pole and a tree/rock/tent poles, it can make a nice drying rack during long hikes.

My tripod as a cup holder during winter

My tripod as part of a drying rack when combined with the two storm poles for my tent

 

Tripod - mirror lock up and shutter delay

 

The advantage of using shutter delay => [no shutter delay] [1 second shutter delay]

When using a tripod, it is a good idea to use either mirror lock up or shutter delay (if your camera is equipped with it, of course). First of all, pressing the button to take the picture can create a lot of camera shake, resulting in a blurry picture. Another reason is that when you take the picture, the mirror inside your camera moves up, which can also cause some camera shake. Note that this camera shake only lasts for a short period of time, so it is especially important for pictures taken with shutter speeds between roughly 1/50th of a second and 2 seconds (although these values depend a lot on the focal length you're using). If you are using shorter shutter speeds, the camera shake will likely not be visible because of the short shutter speed. And if you are using a longer exposure, the duration of the camera shake will be insignificant compared to the exposure time, and won't be visible either.

 

When using mirror lock up, the mirror will move up when the button is pressed, but the shutter won't open until the button is pressed one more time. Obviously, you need to work with a cable release, otherwise you will still introduce camera shake by pressing the button!

Shutter delay works according to basically the same principle: after pressing the button, the camera moves the mirror up and automatically opens the shutter after a chosen period (for example, one second).

 

Both these methods will result in sharper pictures, like in the example above on the right. This was taken at 200 mm and with a shutter speed of 0.3 seconds. The moon is clearly blurred in the first case, but activating the shutter delay gives a sharp moon! In this case, the camera shake was caused by my pressing the button.

 
