I always imagined that DR was (maximum signal - noise floor) - (minimum signal - noise floor). So you end up with a voltage range for what the sensor (pixel?) has measured. Divide that into discrete units (dB?) and you have the DR.
I assume I am wrong since you guys have been doing this for a lot longer than me.
At its most basic it's just maximum signal / noise floor - other definitions of the lower bound do exist, but that's the default. The minimum signal is defined by the noise floor. Take it in binary terms. If you measure something that's within the noise floor, you don't have any information about what it's intended to be, because it's noise. If you measure something above the noise floor, you know it isn't within the noise floor, so now you have two states - within the noise floor and higher than the noise floor. That's one bit of information. If the signal can also exceed twice the noise floor, you can distinguish a third state; if it can exceed three times the noise floor, a fourth state - that's two bits of information, and so on.
The log scale doesn't have much to do with it; you can measure it as a plain ratio, in dB, or in stops - it's all the same measurement, just expressed differently.
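To make that equivalence concrete, here's a minimal Python sketch. The full-well and read-noise numbers are made up for illustration; the point is that the plain ratio, the stops (which are effectively the bits discussed above), and the dB figure all encode the same thing:

```python
import math

# Hypothetical sensor figures, both in electrons (illustrative only).
full_well = 50_000.0   # maximum signal before clipping
noise_floor = 3.0      # rms read noise

ratio = full_well / noise_floor     # dynamic range as a plain number
stops = math.log2(ratio)            # the same range in stops (and, roughly, bits)
db = 20 * math.log10(ratio)         # the same range in dB (amplitude convention)

print(f"ratio {ratio:.0f}:1  =  {stops:.1f} stops  =  {db:.1f} dB")
```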
I think this thread is extremely helpful!
I understand the pushback: Jim is describing techniques which don’t work in many scenarios. But he is describing how to squeeze every pixel of perfection in an image, and that is certainly photography.
I think you’re reacting the same way you’d react to Ansel’s Zone System. And I just watched a nice video by Jan Wegener in which he explains that low ISO is the enemy of good nature photography: if your shutter speed is too slow you lose the shot, so use whatever ISO that requires.
But knowledge is always good. This information is fantastic, and when you need to take studio shots, you’ll know more about how to optimize them. Nothing wrong with that.
By tricking you into viewing at a lower magnification with the go-to 100% pixel view.
100% pixel views are important to some degree, but they are totally meaningless without context. I contend that if many photographers had access to a 2 GP FF sensor with higher QE and less read noise per unit of area than current sensors, they would say that the camera is "as soft and noisy as all hell".
A certain amount of noise is actually helpful in creating smooth gradients, and the main source of noise for most parts of most images is photon noise, not read noise. See the quantitative discussion at the end of the article referenced in the OP.
And sensor area, not pixel size, is what determines the photon noise at a given print size, all else equal.
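As a rough back-of-the-envelope illustration of that last point, here's a sketch with an assumed photon density and approximate sensor areas. At a fixed exposure the total light captured scales with sensor area, shot noise is Poisson, so the whole-image (print-size) SNR depends on the area, not on how that area is divided into pixels:

```python
import math

photons_per_mm2 = 1.0e6   # assumed photon density for one exposure (per mm^2)

# Approximate sensor areas in mm^2; pixel count deliberately doesn't appear.
for name, area_mm2 in [("Four Thirds", 225), ("APS-C", 368), ("full frame", 864)]:
    total_photons = photons_per_mm2 * area_mm2
    snr_db = 20 * math.log10(math.sqrt(total_photons))   # Poisson: SNR = sqrt(N)
    print(f"{name:11s}: whole-image photon SNR ~ {snr_db:.1f} dB")
```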
Your argument assumes a distribution that read noise doesn't have. For the noise to determine the precision of a single reading the way you describe (and there are reasons why that's not very important), the noise would have to have a uniform distribution, and the metric for the noise would have to be something other than the rms value. But read noise is pseudo-Gaussian, and the metric is usually the rms value.
But I question whether the concept of distinct values is even applicable here. Noise is necessary for the camera to work the way it's supposed to work, and the right amount of noise can make repeated or averaged readings more accurate than no noise at all. The ADC needs to be dithered to work as intended.
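Here's a toy illustration of why that is, with an idealized round-to-nearest quantizer working in LSB units and a Gaussian dither of about 1 LSB rms (both assumptions chosen for convenience, not taken from any particular ADC):

```python
import numpy as np

rng = np.random.default_rng(0)

true_level = 0.3      # input in LSB units, sitting between two ADC codes
n_reads = 10_000

# No dither: every conversion returns the same code, so averaging can never
# recover the fractional level.
undithered = np.round(np.full(n_reads, true_level))

# With ~1 LSB rms Gaussian dither: readings spread across neighbouring codes,
# and their average converges on the true level.
dithered = np.round(true_level + rng.normal(0.0, 1.0, n_reads))

print(f"true level       : {true_level}")
print(f"mean, no dither  : {undithered.mean():.3f}")   # stays at 0.000
print(f"mean, with dither: {dithered.mean():.3f}")     # close to 0.300
```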
By the way, the concept of dithering ADCs is close to my heart:
You mean checking for camera or subject motion blur? A properly resampled display would work fine for that, and not reinforce the idea that there's something magic about turning pixels into little squares on the display.
When I wrote that I had not written any code to read any modern ADCs, I was clearly wrong, since all the ones I have written code for were manufactured at least a decade after your patents.
Of course the camera's samples shouldn't be thought of as squares, but square presentation can help with some kinds of image inspection, as long as you remain aware of scale. Largish, square-ish display pixels make it much easier to distinguish and compare neighboring samples, and therefore to spot many kinds of issues. I have switched many times to Nearest Neighbor at 200/300/400/500% instead of a smoothing, ramping method just so that I can see exactly what is going on. For viewing an image as it is intended to be viewed as a photograph, a super-hires display with proper reconstruction would always be ideal, unless you were doing "mosaic art" intentionally.
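For anyone who wants to reproduce that kind of inspection view outside an editor, here's a small sketch using Pillow (9.1 or later for the Resampling enum); the file names are just placeholders:

```python
from PIL import Image  # Pillow >= 9.1 for Image.Resampling

img = Image.open("test_crop.png")   # placeholder: any small crop to inspect

scale = 4                           # e.g. a "400% view"
size = (img.width * scale, img.height * scale)

# Nearest neighbour: each sample becomes a flat square block, easy to compare
# against its neighbours when inspecting or reverse engineering.
blocks = img.resize(size, Image.Resampling.NEAREST)

# A proper reconstruction filter: closer to how the image should look as a
# photograph at this magnification.
smooth = img.resize(size, Image.Resampling.LANCZOS)

blocks.save("crop_nearest.png")
smooth.save("crop_lanczos.png")
```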
I only find the nearest neighbor resampling useful when I'm testing or reverse engineering, never for actual photography. I wish Ps offered a mode where the displayed image is properly resampled.
I don't feel that it is a pissing contest.
I know for myself that how and what I like to photograph requires a great deal of planning and preparation, which is often difficult, expensive, and time consuming.
There is often a great deal of planning involved, and that can also mean knowing your subject: I may be tracking and following subjects for months, which takes both time and money.
Going through all of this, why would a person not want to know how to take a better photograph by using their camera to its full extent? It can also decrease how much money you are investing, because if you maximize how you use your camera you can often raise its capability to that of a camera costing several times more. One of my favorite subjects to photograph may only come around three or four times a year, and often only allows a single photograph to be taken, so why would I not go the extra step toward taking a better photograph?
Learning how to get the most out of your camera with better exposure management is no different from learning how your camera's AF system works under different circumstances.
The old adage that those who learn how things work from a technical standpoint know nothing about real photography is just plain silly. Where does one stop on the technical side? A great deal of great portraiture comes from a technical understanding of lighting.