• Members 140 posts
    July 26, 2024, 6:49 p.m.

    As I understand it, dynamic range is the number of stops you can capture or render from pure black to pure white. But dynamic range can also pertain to ranges of color, can't it?

    And if we consider the number of stops available between pure black and pure white, don’t we need to consider the amount of noise encountered when exploiting the full dynamic range available to us? So how is this properly measured, and what is it expressing?

    I exploit my sensor's dynamic range all the time: whenever I push the shadows, drop the highlights, and change the saturation of a RAW image and output it as a clean JPEG.

  • July 26, 2024, 7:47 p.m.

    You opened yet another can of worms :)

    Dynamic range is generally the ratio between the maximum and minimum signal level; in a digital (photographic) context, it's usually the ratio between the maximum [unsaturated] level (you can call that white) and the noise floor. You can express it in EV stops (log2 of that ratio).
    It certainly is NOT the ratio between pure white and pure black, because that would be infinite.
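
    If it helps to see the arithmetic, here is a minimal sketch of that definition in Python. The clipping level and noise floor below are invented round numbers, not measurements of any real sensor:

        import math

        # Engineering definition sketched above:
        # DR in stops = log2(max unsaturated level / noise floor).
        full_well_dn = 16383   # hypothetical 14-bit raw clipping level (DN)
        read_noise_dn = 4.0    # hypothetical noise floor, same units

        dr_stops = math.log2(full_well_dn / read_noise_dn)
        print(f"Dynamic range: {dr_stops:.1f} stops")  # ~12.0 stops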

  • Members 2330 posts
    July 26, 2024, 10:10 p.m.

    Where did he quote that?

  • Members 177 posts
    July 27, 2024, 2:20 a.m.

    I just call it "exposure latitude" - too busy with photography to fret over inane technical terms.

  • Members 4254 posts
    July 27, 2024, 2:24 a.m.

    How do you define "exposure"? It means different things to different people.

    I normally aim to set the largest exposure that meets my DOF and blur requirements without clipping important highlights.

    "Exposure latitude" then becomes just how close to the largest exposure I am prepared to accept.

  • Members 676 posts
    July 27, 2024, 4:04 a.m.

    In my opinion, DR is the most misunderstood aspect of photography. In technical terms, ArvoJ is correct (although he should also have mentioned that we should specify an area, typically a proportion of the entire photo, over which the DR is measured, and that when comparing the DR of different systems it's important to compare over the same proportion of the photo -- e.g. the DR of two pixels combined on a 48 MP sensor vs one pixel on a 24 MP sensor; a toy version of that comparison is sketched below).
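
    To put a number on that comparison, here's a toy calculation (invented figures) under the simplifying assumption that per-pixel noise is independent, so averaging N pixels cuts the noise by sqrt(N):

        import math

        # Toy normalization sketch: combine 2 pixels of a 48 MP sensor to
        # cover the same area as 1 pixel of a 24 MP sensor. Assumes
        # independent per-pixel noise; the 11-stop figure is invented.
        per_pixel_dr_stops = 11.0
        n_binned = 2

        # Averaging N pixels leaves the signal alone and divides the noise
        # by sqrt(N), adding 0.5 * log2(N) stops of DR over that area.
        area_dr_stops = per_pixel_dr_stops + 0.5 * math.log2(n_binned)
        print(f"DR over the combined area: {area_dr_stops:.2f} stops")  # 11.50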

    In nontechnical terms, that is, why we care about DR at all: more DR simply gives us more processing options. It's not dissimilar to resolution or noise in that respect. That is, a sensor that has higher resolution allows us to crop more and still maintain "sufficient" resolution. A sensor that is less noisy allows us to use a shorter exposure time and/or a deeper DOF for a given exposure time, and still have a photo that is "clean enough".

    A sensor that delivers more DR gives us the option to push shadows with less of an IQ penalty and/or use a lower exposure to preserve more highlights (and/or avoid weird colors at the high end of the exposure as a result of some color channels blowing out before others) and then apply the desired tone curve to the rest of the photo with less of a noise penalty.
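
    A quick simulation of why a shadow push behaves this way: a k-stop push scales signal and noise together, so the SNR of those tones is fixed at capture time, and a lower noise floor (more DR) means the same push lands on cleaner data. All numbers here are invented:

        import numpy as np

        rng = np.random.default_rng(0)
        signal = 8.0                      # deep-shadow level, arbitrary units

        for noise_floor in (4.0, 1.0):    # "less DR" vs "more DR" sensor
            samples = signal + rng.normal(0.0, noise_floor, 100_000)
            pushed = samples * 2**3       # a 3-stop shadow push
            snr = pushed.mean() / pushed.std()
            print(f"noise floor {noise_floor}: SNR after push ~ {snr:.1f}")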

    Let me give what I feel is a great example of the utility of more DR. Consider the following photo:

    pbase.com/joemama/image/173969024/original.jpg

    More DR would allow me to lighten the foreground with less of a noise hit and better color fidelity. For this particular photo, DR wasn't that much of an obstacle for what I wanted. But for a different photo, for example:

    pbase.com/joemama/image/173950749/original.jpg

    the sensor may very well have had insufficient DR to lighten the foreground enough to make the scene "realistic" (I forget whether I processed this photo the way I did because I wanted the more dramatic effect, or because the DR was insufficient to maintain the desired level of IQ for a more "realistic" representation and I had no choice).

    So, like less noisy sensors and sensors with more pixels, sensors with more DR allow more processing options. For most situations, the DR of most modern cameras (and even many not-so-modern cameras) is well past the "good enough" point. For other situations, it's not even possible to make a sensor "good enough". In between these two extremes, an excellent strategy, if possible, is to stack and merge multiple exposures. Smartphones use this technique automatically, which, in situations where it works, yields significantly better photos than a single exposure from the best consumer digital camera. Of course, consumer digital cameras can also stack and merge multiple exposures, but smartphones do it so, so, so much more conveniently. And, yeah, convenience is really a big deal. But sometimes "higher IQ" is a bigger deal.
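
    For anyone curious how far stacking gets you, here's a minimal sketch with synthetic data (real implementations also align frames and handle motion, which this skips entirely): averaging N frames leaves the signal unchanged and shrinks random noise by sqrt(N), i.e. roughly 0.5*log2(N) extra stops.

        import numpy as np

        rng = np.random.default_rng(1)
        true_scene = 10.0                  # arbitrary constant "scene" level
        n_frames = 16

        # n_frames noisy readings of the same static, pre-aligned scene
        frames = true_scene + rng.normal(0.0, 2.0, size=(n_frames, 100_000))

        single = frames[0]
        stacked = frames.mean(axis=0)

        print(f"single-frame noise:  {single.std():.2f}")   # ~2.00
        print(f"stacked-frame noise: {stacked.std():.2f}")  # ~0.50 = 2/sqrt(16)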

  • Members 322 posts
    July 27, 2024, 12:42 p.m.

    The problem with dynamic range is that different people use the term to mean different things, so the question you have asked is not a simple one to answer. But there is a misconception buried in your question, a very common one, which is promulgated by quite a few supposed 'experts' and bedevils most teaching about photography. Which is to say: it's not your fault that you picked it up.
    Think about what you mean by 'pure black' and 'pure white'. These are things that don't exist in the physical world. One could argue about 'pure black', which would presumably mean emitting or reflecting zero radiation. As for white, there is physically (at least in terms of classical theory†) no limit to how bright something can be, so there is no 'pure white'. 'Black' and 'white' are perceptual terms.

    It is mixing up the physical and the perceptual that leads to most misunderstandings in photographic theory. Photography is a process that takes measurements in physical space (of electromagnetic energy in the visible wavebands) and transforms them into perceptual images which humans perceive as a scene. An 'image' is not a physical thing; it's a perceptual phenomenon carried by some physical medium (paper, or some kind of light-emitting display device). To keep things clear you need to differentiate the two.

    So, with that preamble over, let's talk about dynamic range. DR comes from communication engineering. It refers essentially to how much information can be carried by a communication channel. You can think of a camera as a 'communication channel' which communicates over both distance and time. The normal engineer's definition is the maximum possible signal divided by the minimum signal.

    'Signal' is another weasel word. In a communication system it's necessary to differentiate between the signal, which is the message being communicated, and the carrier, which is the physical medium used to transmit that signal. Many discussions of the topic confuse the two. Let's start with the minimum. The minimum signal is the point at which some information can be discerned from the background noise. This can be set at a number of levels. Some engineering conventions set it at a signal-to-noise ratio of 1, that is, where the amplitude of the information imposed on the carrier equals the mean value of the noise. Others use a signal-to-noise ratio of 0, which reflects the fact that noise, being random, can be separated from a non-random signal (if it were random it wouldn't be information). Getting the DR this way means dividing the maximum carrier level by the mean noise level.

    This results in a number. Usually it is expressed logarithmically, in the case of photography to log base 2, which can be called 'stops' or 'bits' (the unit of information); in this context the two are equivalent. In a photographic context it means the amount of information per sample (or pixel) that is captured about the exposure (amount of visible light energy) to which that pixel is exposed. The greater the dynamic range, the greater the range of exposures that pixel can measure.
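
    If it helps, here is one way that measurement might be sketched in code; the clip point is an invented 14-bit figure and the 'dark frame' is simulated noise rather than a real raw file:

        import numpy as np

        rng = np.random.default_rng(2)

        clip_level = 16383                            # hypothetical 14-bit clip point
        dark_frame = rng.normal(0.0, 3.5, 1_000_000)  # simulated lens-cap frame

        # Divide the maximum carrier level by the mean noise level, then
        # take log base 2 to get stops (equivalently, bits per sample).
        noise_floor = dark_frame.std()
        dr_bits = np.log2(clip_level / noise_floor)
        print(f"Estimated DR: {dr_bits:.1f} stops / bits per sample")  # ~12.2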

    The term 'dynamic range' was commonly used in TV engineering and transferred from there to digital photography. Along the way its meaning changed for many people, based on the misconceptions above. It started to be used in the way you have put it above, as the range of tones between 'black' and 'white', which is actually pretty meaningless for reasons that I'll go into. To understand this discussion we need to be careful about the word 'exposure'. I'm using it as defined above (though it should strictly be 'per unit area'). Commonly it's misused to mean how light or dark the output image looks; I'll call the latter 'lightness'‡. People get upset about this apparent complication, but if you use the same word for two different things you can't understand what's happening at all.

    A raw file is essentially a set of exposure readings. When it's processed, these are used to calculate a set of lightness values (along with some colour values). The calculation includes setting both white and black points: the exposure readings that will correspond to perceptual 'black' and 'white'. All exposure readings that don't produce a value within that range are essentially clipped, that is, the information is lost. Default processing generally sets a black point considerably above the noise floor and a white point substantially short of the maximum exposure that the sensor can read, so it throws away a fair amount of information that non-default processing could use. 'Dynamic range' gives an estimate of the amount of information available§.
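
    As a toy illustration of that clipping step (the black point, white point and gamma below are invented, not any converter's actual defaults):

        import numpy as np

        def develop(raw, black_point=256, white_point=15000, gamma=1 / 2.2):
            # Readings below the black point all become 0.0 ('black') and
            # readings above the white point all become 1.0 ('white'):
            # information outside the chosen range is simply discarded.
            x = np.clip(raw, black_point, white_point)
            x = (x - black_point) / (white_point - black_point)
            return x ** gamma  # crude stand-in for a tone curve

        raw = np.array([100, 256, 1000, 15000, 16383], dtype=float)
        print(develop(raw))  # first two collapse to 0.0, last two to 1.0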

    † Almost everything in Physics is arguable at some level. If you put too much energy into a fixed volume it will collapse spacetime into a singularity, so it could be argued that somewhere there is an absolute limit to energy density. 'Pure white' is the level just before it turns into 'pure black'.😀

    ‡ In film sensitometry it was called 'density', but that makes little sense for digital. 'Lightness' is a printer's term, and appears in colour space names such as Lab.

    § The amount of information available is not the same as the amount an individual might choose to use, which of course depends on the individual and the context; that's one reason I'm not a fan of 'PDR', which builds in what I consider to be arbitrary decisions about what information is usable. The above is of course a per-sample measure, which can't really be used to compare cameras with different sample rates (pixel counts). To compare those, the measures need to be normalised. The basic point is that more pixels over an area provide more information about that area, so the combined effect of those pixels can't simply be equated with a single larger pixel.

  • Members 616 posts
    July 27, 2024, 2:56 p.m.

    The simple answer: International Standard ISO 15739

    www.iso.org/standard/82233.html

    Not free, so discussed in considerable detail here:

    dougkerr.net/Pumpkin/articles/ISO_Dynamic_range.pdf

  • Members 616 posts
    July 27, 2024, 3:15 p.m.

    No need ... 255/0 = infinity, obviously.

  • July 27, 2024, 3:43 p.m.

    255 and 0 are just (two of many) codes for 'pure white' and 'pure black'. The problem is that neither 'pure white' nor 'pure black' is a point on an infinite scale.

  • Members 1795 posts
    July 27, 2024, 3:48 p.m.

    So camera companies should give us a code reference when stating dynamic range numbers. At least we would have a consistent point of reference for those of us without a degree in Electrical Engineering.

    Then we can argue about the real dynamic range on photo forums.

    I wonder how much lens flare counts in real life. I shoot a lot of interiors, and veiling flare often reduces DR to zero in some local areas.
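
    A back-of-envelope sketch of that effect (invented numbers): uniform veiling flare adds roughly the same pedestal to every tone, so shadows rise far more in relative terms than highlights, and the contrast range reaching the sensor collapses.

        import math

        scene_max, scene_min = 1000.0, 1.0   # a 1000:1 scene, ~10 stops
        flare = 10.0                         # uniform flare pedestal

        before = math.log2(scene_max / scene_min)
        after = math.log2((scene_max + flare) / (scene_min + flare))
        print(f"scene contrast: {before:.1f} stops")  # ~10.0
        print(f"with flare:     {after:.1f} stops")   # ~6.5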

  • Members 616 posts
    July 27, 2024, 4:30 p.m.

    ISO 12232 defines exposure, not "different people".

    exposure definiton.jpg (JPG, 199.6 KB, uploaded by TexasTed on July 27, 2024)

  • Members 616 posts
    July 27, 2024, 4:41 p.m.

    "counts"?

    Back to the Standard, which specifies the DR test Method, which in turn excludes Real Life.

  • Members 2330 posts
    July 27, 2024, 5:37 p.m.