By 10% and 90%, are you talking raw values? If so, that's a bit tough at the high end and not nearly challenging enough at the low end. When I calculate photon transfer curves, which don't depend on shutter or aperture accuracy, I usually get better than 1% linearity over the region from 10 stops or so down from clipping to the region a stop or so down from clipping.
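For anyone curious what a photon transfer measurement looks like in practice, here is a minimal sketch using synthetic Poisson data and an assumed gain of 2 e-/DN (illustrative only, not anyone's actual pipeline). The point is that variance tracks mean photon count, so the curve doesn't depend on shutter or aperture accuracy:

```python
import numpy as np

def ptc_point(frame_a, frame_b, bias=0.0):
    """One photon-transfer-curve point from a pair of flat-field frames.

    Differencing two frames of the same exposure cancels fixed-pattern
    noise; the pair-difference variance is twice the temporal variance.
    """
    a = frame_a.astype(np.float64) - bias
    b = frame_b.astype(np.float64) - bias
    mean_signal = 0.5 * (a.mean() + b.mean())  # DN
    temporal_var = np.var(a - b) / 2.0         # DN^2
    return mean_signal, temporal_var

# Shot-noise-limited data obeys var = mean / gain, so mean/var recovers
# the gain at every exposure, independent of shutter/aperture errors.
rng = np.random.default_rng(0)
gain = 2.0  # electrons per DN (assumed, for this synthetic example)
points = []
for electrons in (100, 400, 1600, 6400):
    a = rng.poisson(electrons, (32, 256)) / gain
    b = rng.poisson(electrons, (32, 256)) / gain
    points.append(ptc_point(a, b))

for mean_dn, var_dn in points:
    print(f"mean={mean_dn:8.1f} DN  var={var_dn:8.1f} DN^2  "
          f"gain est {mean_dn / var_dn:.2f} e-/DN")
```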
There are other problems with that article, not only the ones you pointed out.
"The nonlinear human eye response." I do not even know what that means. You are the expert, maybe you can correct me. Whether certain object feels twice as bright or not is a feeling, not something objective. Most people cannot even compute a15% or 20% tip, and now we are asking them to quantify something (a perception) which is not even measurable. If I am ever in such an experiment, I would just honestly say that I have no idea what it means some object to feel/look twice as bright. Same with weights, etc.
This creates the impression that noise in the shadows is caused by not having enough levels there (discretization error), but this is not true. There is enough shot noise there for the quantization error not to matter (much). You may get a "posterized noise" that you would hardly distinguish from normal noise, but not the typical posterization.
The just noticeable differences (JNDs) in luminance tend to be the same if luminance is measured as the cube root of Y, with a straight line segment near 0. That is an objective test that can be, and has been performed. It's not a cheap or easy test, though. Weber's law loosely predicts this behavior. No idea about the "feels" part.
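The cube-root-of-Y curve with a straight-line segment near zero is essentially the CIE 1976 lightness function L*. A minimal sketch of that formula:

```python
def cie_lightness(Y, Yn=1.0):
    """CIE 1976 L* from relative luminance Y (Yn = reference white).

    Cube root above a small threshold, a straight line segment near 0.
    """
    t = Y / Yn
    if t > (6 / 29) ** 3:                    # ~0.008856
        f = t ** (1 / 3)
    else:
        f = t / (3 * (6 / 29) ** 2) + 4 / 29  # linear segment near 0
    return 116 * f - 16

# Roughly equal L* steps approximate equal just-noticeable differences:
print(cie_lightness(0.18))  # mid-grey, about 49.5
print(cie_lightness(1.0))   # reference white, 100.0
```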
Indeed. I have covered that elsewhere. There is a good reason to strive for ETTR (photon noise) but quantization levels aren't the reason.
ADC precision is determined by the shadow performance desired and the system noise. The unnecessary precision in the highlight regions is a side effect of that.
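One common rule of thumb for that trade-off (my framing, not necessarily the poster's): pick the ADC bit depth so the quantization noise, LSB/sqrt(12), sits comfortably below the read noise. A sketch with hypothetical sensor numbers:

```python
import math

def min_adc_bits(full_well_e, read_noise_e, headroom=2.0):
    """Smallest ADC bit depth whose quantization noise (LSB/sqrt(12),
    in electrons) stays `headroom` times below the read noise.

    Rule-of-thumb sketch only; real designs weigh many more factors.
    """
    bits = 1
    while (full_well_e / 2 ** bits) / math.sqrt(12) > read_noise_e / headroom:
        bits += 1
    return bits

# Hypothetical sensor: 60 ke- full well, 3 e- read noise.
print(min_adc_bits(60_000, 3.0))  # -> 14
```

Note how the answer is driven entirely by the noise floor: the highlights get more precision than they need as a side effect.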
Part of it is education. There's a saying in audio that 10 dB feels twice as loud. But I've been working with audio so long that 3 dB feels twice as loud to me.
On a tangent: I’ve often wondered how 2.2 came about. Or in the world of that other computer OS, why 1.8? And what is being done behind the curtain to make the same images look good on both PC and Mac?
Are those just carryovers from early image handling software having catered to CRT characteristics that were retained in the transition to LCDs?
I also find it odd that the default pixel pitch setting in Adobe software is 72 per inch when every LCD monitor I’ve ever used is closer to 100 per. I sense a whiff of the cliché of “railroad cars matching Roman donkey carts” there.
2.2 came from TV CRTs, as standardized by SMPTE, and others. I don’t know why Apple picked 1.8. It was a big problem in the days before color management.
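As for what happens behind the curtain: re-targeting an image from one display gamma to another is, at its simplest, decode-to-linear then re-encode. A toy power-law sketch (real pipelines use ICC profiles and piecewise curves like sRGB's, not bare exponents):

```python
def regamma(v, gamma_src=1.8, gamma_dst=2.2):
    """Re-encode a normalized (0..1) pixel value from one display
    gamma to another: decode to linear light, then re-encode.

    Simplified power-law sketch; not a substitute for color management.
    """
    linear = v ** gamma_src             # decode: encoded -> linear light
    return linear ** (1.0 / gamma_dst)  # re-encode for the target display

# A gamma-1.8 midtone needs a higher encoded value on a gamma-2.2
# display to produce the same linear light:
print(regamma(0.5))
```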
I did some testing recently on divergence between EFCS, MECH, & ELEC shutters.
Using a lens with a fly-by-wire only aperture, at (nominal) 1/250 s - 1/4000 s, in 1 stop steps, at base ISO.
I did 16 replications of each combination of shutter speed and type, recording stats for 256 (w) x 32 (h) patches, half-way across the sensor, at about 0%, 25%, 50%, 75%, and 100% of sensor height.
At 1/4000 s, mechanical shutter, for the average of the merged green channels of the centre patch, I got a mean of 9.742 EV (relative to 1 DN = 0 EV), with a population (of patch means) standard deviation estimate of 0.0059 EV.
I.e. the exposure variation (aperture and shutter) was about 0.0059 EV at 1/4000 s.
The least consistent figures were for EFCS. At 1/2000 s, I got a standard deviation estimate of 0.0184 EV. 1/2000 s is the maximum EFCS speed supported by the camera.
All other combinations tested gave standard deviation estimates in the range 0.0041 EV to 0.0069 EV, with electronic shutter slightly more consistent than mechanical, and mechanical slightly more consistent than EFCS.
There were also systematic differences in exposure between EFCS, MECH, & ELEC, not very significant to most folk, but way above my noise floor - that's a different story.
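The EV statistics described above can be sketched roughly like this (my assumptions: mean DN converted to EV as log2(DN) so 1 DN = 0 EV, and the spread taken as the standard deviation over the 16 per-replication patch means; synthetic data, not the actual measurements):

```python
import numpy as np

def patch_ev_stats(patch_means_dn):
    """EV mean and spread across replications of one patch.

    Each patch mean (in DN) is converted to EV as log2(DN), so
    1 DN = 0 EV; the spread is the standard deviation of those
    per-replication EV values.
    """
    ev = np.log2(np.asarray(patch_means_dn, dtype=np.float64))
    return ev.mean(), ev.std()  # population (ddof=0) std

# Synthetic example: 16 replications scattered around ~857 DN (~9.74 EV)
rng = np.random.default_rng(1)
means_dn = 857.0 * 2.0 ** rng.normal(0.0, 0.0059, 16)
mean_ev, sd_ev = patch_ev_stats(means_dn)
print(f"mean {mean_ev:.3f} EV, sd {sd_ev:.4f} EV")
```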
The decision predates “Think Different”, and for that matter “Photoshop”, by a number of years. “Think Print” would be more accurate. 1.8 more closely emulated the output of printers back in the day when DTP and pre-press dictated the primary workflows.
And just before that, in 1990:
"Desktop Color is the greatest thing that ever happened to prepress. It represents the final seduction of the world into having every page in color. However, the desktop color systems can’t do what it is popularly believed that they can do." -- Tom Dunn.
If we were to instead consider our eye's instantaneous dynamic range (where our pupil opening is unchanged), then cameras fare much better. This would be similar to looking at one region within a scene, letting our eyes adjust, and not looking anywhere else. In that case, most estimate that our eyes can see anywhere from 10-14 f-stops of dynamic range, which definitely surpasses most compact cameras (5-7 stops), but is surprisingly similar to that of digital SLR cameras (8-11 stops).
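One way to see where stop figures like these come from: engineering dynamic range is just log2 of the clipping level over the noise floor. A sketch with illustrative, made-up sensor numbers (not measurements of any real camera):

```python
import math

def dynamic_range_stops(full_well_e, read_noise_e):
    """Engineering dynamic range in f-stops (EV): log2 of the
    clipping signal over the noise floor, both in electrons."""
    return math.log2(full_well_e / read_noise_e)

# Hypothetical sensors (numbers chosen only to land in the quoted ranges):
print(f"compact: {dynamic_range_stops(6_000, 60):.1f} stops")
print(f"DSLR:    {dynamic_range_stops(25_000, 30):.1f} stops")
```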