That's not what I meant when I said: "Raw DNs give no account for how bright the scene was, too. One can shoot the same scene under the same lights and arrive with different raw DNs, even if the exposure is the same."
"As more photons reach the pixel, you get higher raw values."
Wrong. Highlight clipping is one of the most fundamental concepts ("Hence the use of the word 'essentially'. This makes it clear that the statement is about the fundamental nature of the relationship").
"In the vast majority of cameras, each pixel is behind a colored filter, so you can't draw a definitive conclusion from a single pixel as to how dark or light the scene was at that point."
Wrong.
But what I see dramatically wrong is this:
"All the beginners needs to know"
and I don't mean grammar.
It wouldn't be hard to incorporate my two suggestions into that. The fact that scaling occurs virtually all the time makes the construction easier, and it's not hard to say the counts are for the photons to which the pixel responded. Why say something that is wrong when something that is right is just as simple to say?
Are you suggesting that you get lower values as more photons make it through the Bayer Pattern filter? Or is your issue that the above is only true over a certain range, and at some point the numbers max out?
Are you disputing that most cameras use colored filters in front of the pixels? Or are you suggesting that in the general case you can absolutely tell how dark or light the scene was from the value of a single pixel?
Which aspects of ISO are you suggesting that a beginner absolutely needs to know?
Your comments would be more helpful if we didn't have to guess at the nature of your objections. It's hard to respond to your position when we don't know what it is.
Please drop the "we".
One can start by defining a sensor's operating range as the range over which the sensor statistically operates to its specifications for linearity of response.
Raw DNs, single pixel or not, are no indication of the scene's absolute characteristics. Only limited relative conclusions can be drawn from raw DNs. Raw DNs depend not only on the scene, but also on exposure, ISO setting, ISO implementation ladder, and some other factors.
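To make that concrete, here is a toy sketch in Python. It is not any real camera's pipeline; the gain, black level, and bit depth are all invented. It only shows that the same photon count can come out as very different raw DNs, and can clip, depending on the ISO gain applied:

```python
# Toy model, not any real camera's pipeline: the same photon count can map to
# very different raw DNs depending on ISO gain, black level, and clipping.
# Every constant below (gain, black level, max DN) is invented for illustration.

def raw_dn(photons_collected, iso, base_iso=100, gain_at_base=0.5,
           black_level=512, max_dn=16383):
    """Simplified linear mapping from a photon count to a raw data number."""
    gain = gain_at_base * (iso / base_iso)       # conversion gain scales with ISO here
    dn = black_level + gain * photons_collected
    return min(round(dn), max_dn)                # highlight clipping at saturation

# Same scene, same exposure (same photon count), three different ISO settings:
for iso in (100, 400, 1600):
    print(iso, raw_dn(photons_collected=2000, iso=iso))
# prints 1512, 4512, 16383 (clipped) -- the DN alone says nothing absolute about the scene
```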
Explaining ISO to a beginner, I would be guided by their questions, and not by my presumptions of what they need to know.
You have a reasonable point. My thought in saying "may be scaled" was to emphasize that there is a lot of flexibility in how this is implemented. While most (if not all) implementations scale the numbers, this is a convention, and not a requirement. But I agree, it would also be reasonable to leave out "may be scaled", and just say "are almost always scaled".
I don't think it's important to mention at this point, that pixels are not perfect photon counters. If you think that's important, I would say something like "...These counts, which are not perfectly accurate, are almost always scaled..."
My goal was to hit the important conceptual points without getting lost in the details. To me this means talking about the general case, and not bringing up every exception or limitation. Reasonable people can differ on what details to include up front, and what can be deferred to later.
For instance, if I were teaching someone how to drive a car, I would start by telling them that, as a general rule, the further down you press the gas pedal, the faster you go. I don't know that I need to immediately mention that there is an upper limit on how fast the car can go, nor that other factors may affect the speed (such as the air conditioner cycling on/off, or the large truck you were tailgating moving to a different lane).
My goal was to find a simple way of talking about "the image plane's illuminance times the exposure time". I decided to go with the raindrop analogy, with raindrops being an analogy for photons of light, and buckets the analogy for pixels. I like this analogy as people understand that the buckets get more raindrops if it is raining harder, or they are exposed for a longer period of time. My goal was to limit myself to the details relevant to this analogy. I wanted an easy way to explain that ISO is not simply looking at how bright the light was on the sensor.
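If it helps, the core of the analogy fits in a couple of lines of code. The constant k below is arbitrary and purely illustrative:

```python
# The raindrop analogy in code: what a bucket (pixel) collects is proportional
# to how hard it rains (image-plane illuminance) times how long it sits out
# (exposure time). The constant k is arbitrary, purely for illustration.

def photons_collected(illuminance, exposure_time, k=1000.0):
    return k * illuminance * exposure_time

# Twice the light for half the time fills the bucket to the same level:
print(photons_collected(illuminance=2.0, exposure_time=1 / 100))  # 20.0
print(photons_collected(illuminance=1.0, exposure_time=1 / 50))   # 20.0
```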
I am sure that there are better ways of explaining it. I am sure that reasonable people can differ on which details to include and which to exclude. However, I think the choices I made are reasonable ones (but not the only reasonable ones) for giving a short introduction to ISO.
I guess our difference is that I was trying to give a conceptual overview, and you prefer a more detailed explanation of real-world implementations.
I would characterize your objections as being that my conceptual overview did not include enough detail, nor did it enumerate various limitations of real-world implementations. I will suggest that the ISO standard does not speak to how ISO is implemented, so it is not critical to understand specific implementations in order to understand what the standard describes.
I think we have now both made our positions on this issue clear. Thanks for clarifying your position.
In a Bayer Pattern camera, most photons never reach the sensor. They are filtered out by the color filter and don't reach the pixel.
I think we have a difference in terminology. I suspect you are considering the color filter in front of a pixel as part of the pixel. I consider it separate. Thus, I would say that a red photon that gets filtered out by the color filter never reaches the pixel. I suspect you would say that it reached the pixel, but was not counted due to the color filter.
In either case, the pixel is counting photons. Many photons are not counted because of the color filter. Some are not counted due to the pixel not being perfect.
I was trying to avoid a discussion of the Bayer Pattern color filter, as it is not important to understand it in order to understand the concept of ISO.
My goal here was to introduce a single concept (ISO). I think an introduction to Bayer Pattern filters/sensors would be a wonderful thing. It's just that I think it should be separate from the introduction to ISO.
You could also use the bucket analogy to explain ISO, if you said that there are a bunch of square funnels, one over each bucket. To measure the rain, the height of the water in each bucket is measured. Making the buckets smaller (but keeping the height the same) is like raising the ISO. That makes it easier to measure small amounts of rain, just like high ISO makes it easier to measure small amounts of photons.
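Here is that funnel-and-bucket picture as a tiny sketch; all the numbers are illustrative, nothing is camera-specific:

```python
# The funnel-and-bucket picture in code: the funnel fixes how much rain is
# caught, and the reading is the water height in the bucket. Shrinking the
# bucket's base ("raising ISO") gives a bigger reading for the same catch.
# Numbers and units are illustrative only.

def water_height(rain_caught, bucket_base_area):
    return rain_caught / bucket_base_area  # height = volume / base area

same_catch = 10.0                                       # same rain in both cases
print(water_height(same_catch, bucket_base_area=5.0))   # "low ISO" bucket: 2.0
print(water_height(same_catch, bucket_base_area=1.0))   # "high ISO" bucket: 10.0
```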
Then you should make that clear. I doubt that a beginner understands the sensor toppings.
But there is still the QE of the sensor. It only takes a few words to allow for photons that aren't counted. And even with modern sensors, the QE is not close to unity, and varies with wavelength to boot.
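As a rough illustration of that point (the QE figures below are invented, not measurements of any real sensor):

```python
# Rough illustration of quantum efficiency: only a fraction of the photons
# that arrive are counted, and that fraction depends on wavelength. The QE
# values below are made up for the sake of the example.

quantum_efficiency = {"blue": 0.45, "green": 0.55, "red": 0.40}  # invented values

def photons_counted(photons_incident, color):
    return photons_incident * quantum_efficiency[color]

for color in quantum_efficiency:
    print(color, photons_counted(10_000, color))  # 4500.0, 5500.0, 4000.0
```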
My position is that I see peer-reviewed articles on this and similar subjects, in-depth ones, well-structured, easy to understand yet without any factually wrong shortcuts; and the Beginner's Questions discussion is where the beginner's questions are answered.
The closer you can get to your camera's base ISO (typically ISO 100), the better. You'll maximise DR (dynamic range) and lower the noise in your image, because you're using your sensor at its optimal settings. ENDS.
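For anyone curious, the gist of that advice can be sketched with a deliberately crude model. Nothing here is measured from a real camera, and real read noise also varies with ISO, which this ignores:

```python
import math

# A deliberately rough "engineering DR" sketch, not a measurement of any real
# camera: treat DR as usable full well over read noise, in stops, and assume
# the usable full well shrinks in proportion to ISO gain. All numbers invented.

def dynamic_range_stops(iso, base_iso=100, full_well_e=50_000, read_noise_e=3.0):
    usable_full_well = full_well_e * (base_iso / iso)   # less highlight headroom at high ISO
    return math.log2(usable_full_well / read_noise_e)

for iso in (100, 400, 1600, 6400):
    print(iso, round(dynamic_range_stops(iso), 1))
# DR falls as ISO rises in this toy model, which is the gist of "stay near base ISO".
```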
The reason I simplify this is that such threads are aimed at beginners. And getting too scientific will confuse that audience.
Remember folks, threads like this are there to inform the target audience, in this case beginners. It's not to get into scientific minutiae that scare such folk away...