A Guest post by Dave Ware from Whalebone Photography.
This note is a quick discussion of High Dynamic Range and possible future enhancements to the technique.
What is High Dynamic Range?
High Dynamic Range is a digital processing technique used in photography to combine a number of images of differing exposures into a single picture that is well exposed throughout the entire frame. This widens the range of luminance (the amount of light) visible within an image.
Why is it required?
The amount of colour and luminance a camera can record is governed by the sensor’s capability and the dynamic range of the camera’s electronics. For example, the Canon EOS 40D uses a 14-bit analogue-to-digital converter (ADC) to digitise the analogue signals received from the sensor. Those 14 bits allow 16,384 distinct tonal levels to be recorded within the camera.
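The arithmetic behind that figure is simply two raised to the power of the bit depth. A quick sketch (the helper function name is invented purely for illustration):

```python
def adc_levels(bits: int) -> int:
    """Number of distinct tonal levels a 'bits'-wide ADC can record."""
    return 2 ** bits

print(adc_levels(14))  # 16384 - the Canon EOS 40D's 14-bit ADC
print(adc_levels(24))  # 16777216 - a hypothetical 24-bit ADC
```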
Looking at a histogram, the horizontal axis shows the luminance levels of an image, while the vertical axis shows how much of the image contains each level of light. For example, a histogram with a single line at the left-hand edge shows an image that is purely black; likewise, a single line at the right-hand edge represents an image that is purely white. The amount of data that can be squeezed into the histogram is limited by the camera’s dynamic range: a very low dynamic range pushes the usable limits of the horizontal axis close together, while a high dynamic range places them far apart.
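To make the histogram idea concrete, here is a small sketch using NumPy (an assumption of this example — a camera computes the equivalent in firmware). Pixels at 0 are pure black, pixels at 255 pure white:

```python
import numpy as np

# A tiny luminance "image": 0 = pure black, 255 = pure white.
image = np.array([[0,   0, 255],
                  [0, 128, 255],
                  [0, 255, 255]], dtype=np.uint8)

# Horizontal axis: luminance level (binned into 4 ranges here);
# vertical axis: how many pixels fall in each range.
counts, bin_edges = np.histogram(image, bins=4, range=(0, 256))
print(counts)  # [4 0 1 4] -> spikes at both the black and white edges
```

The spikes at both ends mirror the balloon example below: deep shadows piled up on the left, bright sky piled up on the right.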
Here, the camera’s exposure has been set for the balloons – they were the subject of the image, and the trees in this case were used to ‘frame’ them. The spike on the left of the histogram represents the trees, and the data on the right represents the balloons and sky. If the photographer wanted both the balloons and the trees well exposed, a compromise would be required: the balloons become slightly over-exposed and the trees only slightly under-exposed.
The above image shows the traditional compromise – the sky has lost some of its colour saturation, but the trees have retained some detail. Notice also that the histogram now shows a slightly narrower spike at the right-hand edge (the balloons are slightly over-exposed), while the left-hand edge indicates that more detail is present (the trees are no longer a complete silhouette).
So, to overcome this, the photographer may take one photo exposed for the background and another exposed for the foreground, with a few more photos usually taken between these two exposures.
Combining the images creates a visually pleasing picture, and the effects can be quite dramatic. This is the basis of digital HDR. A quick Google search will provide some more examples.
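The combining step can be sketched in a few lines. This is a deliberately simplified exposure-fusion scheme — each pixel is weighted by how close it sits to mid-grey (i.e. how well exposed it is) — and not the actual algorithm of any particular HDR software:

```python
import numpy as np

def merge_exposures(exposures):
    """Blend bracketed exposures, weighting each pixel by how well
    exposed it is (closeness to mid-grey). A simplified sketch of
    exposure fusion, not any camera's or tool's real algorithm."""
    stack = np.stack([e.astype(float) / 255.0 for e in exposures])
    weights = 1.0 - np.abs(stack - 0.5) * 2.0  # 1 at mid-grey, 0 at extremes
    weights += 1e-6                            # avoid divide-by-zero
    merged = (stack * weights).sum(axis=0) / weights.sum(axis=0)
    return (merged * 255).astype(np.uint8)

# Two hypothetical 2x2 bracketed frames:
dark  = np.array([[10,  40], [200,  20]], dtype=np.uint8)  # exposed for sky
light = np.array([[120, 250], [255, 140]], dtype=np.uint8) # exposed for trees
merged_image = merge_exposures([dark, light])
print(merged_image)
```

Each output pixel leans towards whichever frame exposed that spot best, which is exactly the intuition behind HDR blending.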
The Future of HDR
Currently, HDR is a post-processing technique, but as cameras advance, it’s possible that this is an area manufacturers could really improve.
The dynamic range of the camera is likely to improve. The 14-bit ADC mentioned above allows 16,384 levels to be recorded; 24-bit ADCs have been manufactured for many years and would allow just under 17 million levels. The sensor would have to be capable of matching this dynamic range, and the camera’s internal processor would have to cope with the extra data. Such processing capability clearly exists – home computers have operated at 32 bits for years and are now up to 64-bit processing. Whether the sensor itself is capable is another matter for discussion, and the additional processing required would increase the time taken to write data to the memory card. This may limit the number of full-speed frames taken before the cache is full and the camera writes the images to the card. These drawbacks are perhaps what is impeding the development of increased in-camera dynamic range – as with many advantages, there is often a trade-off.
Another ‘in camera’ technique may be to use multiple sensors. If one sensor and its accompanying electronics are capable of a certain dynamic range, then two sensors may be used to increase the overall range: one sensor exposes for the highlights while the other exposes for the shadows, creating a higher combined dynamic range. Sensors can be made incredibly small – just look at the size of phones with multi-megapixel cameras – so it would probably be no issue squeezing two sensors (or more!) into a single camera. However, as sensor size decreases, the noise of the recorded image (its ‘graininess’) increases. Once again, this is a trade-off between high dynamic range, image quality and size.
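How might a camera combine two such sensor readings? One minimal (and entirely hypothetical) scheme is to keep, at each pixel, whichever sensor's reading is better exposed:

```python
import numpy as np

# Hypothetical readings from two sensors over the same 2x2 scene:
highlight_sensor = np.array([[30, 120], [110, 90]], dtype=np.uint8)   # exposed for highlights
shadow_sensor    = np.array([[140, 255], [250, 200]], dtype=np.uint8) # exposed for shadows

# Keep whichever reading sits closer to mid-grey (128) at each pixel.
pick_shadow = (np.abs(shadow_sensor.astype(int) - 128)
               < np.abs(highlight_sensor.astype(int) - 128))
combined = np.where(pick_shadow, shadow_sensor, highlight_sensor)
print(combined)  # [[140 120] [110  90]]
```

A real dual-sensor design would blend rather than switch, but the sketch shows why two differently exposed readings widen the usable range.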
Another method could be to use an alternative to the tone-curve algorithm currently applied to images within the camera. When a photo is taken, signals from the sensor are turned into digital bits and sent to the camera’s processor, which turns the data into something meaningful; this mapping is a form of tone curve. Normally one ‘average’ curve is applied across the entire image. Modern techniques, however, can apply an individual tone curve to every single pixel, rendering an image exposed in a manner much closer to what the human eye sees (i.e. with a higher dynamic range). This will inevitably increase processing time within the camera, although since the current method of HDR imaging requires numerous photos at different exposures, the additional processing time for one single image is probably still a huge time saver.
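The difference between the two approaches can be sketched with gamma curves. The per-pixel version below is purely illustrative — its adaptive exponent is an invented stand-in for the far more sophisticated licensed algorithms:

```python
import numpy as np

linear = np.linspace(0.0, 1.0, 5)  # raw sensor values, scaled 0..1

# A single "average" tone curve (a standard 2.2 gamma here),
# applied identically to every pixel.
global_curve = linear ** (1.0 / 2.2)

# A very rough per-pixel curve: the curve's strength adapts to each
# pixel's own brightness, lifting shadows more than highlights.
local_gamma = 1.0 + (1.0 - linear) * 1.2
local_curve = linear ** (1.0 / local_gamma)

print(global_curve.round(3))
print(local_curve.round(3))
```

Both curves leave pure black and pure white untouched; the per-pixel one brightens the shadows while changing the highlights less, which is the essence of local tone mapping.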
This new tone-curve method is being advanced by several companies, and Samsung has recently purchased a licence to use the technology.
Perhaps other manufacturers have an alternative method, or do not consider high dynamic range particularly important in their cameras, or are just biding their time. This technology is still developing and is an exciting area of camera technology, especially as the megapixel battle is becoming old news.
High Dynamic Range techniques can be overused, and images can easily be made to look unnatural. They look unnatural because they extend beyond the range perceivable by the human eye. It would be sad if technology removed the authenticity of photography, which separates this art from the art of painting (where both composition and exposure are limited only by imagination). If, however, technology were able to replicate images as seen by the human eye, then perhaps that would be an acceptable technological milestone.
Check out more of Dave’s work at Whalebone Photography.
Post from: Digital Photography School - Photography Tips.
The Future of HDR and its Use within the Camera