I went from a Nikon D50 -> D7000 -> Z 6, and there was a marked difference in how much shadow noise I had to (not) deal with. Longtime Nikon user, didn't feel like putting Canon in the equation, whatever their DR...
Indirectly I consider frame size. Mostly now, the AF-P 70-300mm lives on the D7000 for tele shots and the Z 6 carries the 24-70mm; I don't like to change lenses while shooting steam locomotives. But if I go to shoot low-light performances, I'll move that lens to the Z 6; I don't take the D7000 to those. Choosing the Z 6 over the D7000 is always a DR consideration, which the sensor size does affect. But I wouldn't be specifically considering frame size until I started shooting landscapes or group photos for printing, where I'd want a starting resolution that looked good in large-format renditions...
I don't think that has a lot to do with DR per se; it has to do with lower read noise and, depending on how you respond to the second question, with getting more light energy into the image.
What I was asking was how you make your exposure decisions when using the different cameras. I'm still not sure what you are thinking of as 'DR'. The D7000 has just shy of 14 stops of DR, the Z7 just over. I very much doubt that unless you have a really coherent and meticulous methodology for setting exposure and processing your photos you're using anything close to that - so DR really shouldn't be an issue between those two cameras. D50 is a somewhat different proposition.
I'm not sure I quite understand what you were doing there, Jim. If I understand correctly, the figures you're getting are the raw numbers? How can you relate those directly to 'sensitivity', and what are you calling 'sensitivity'?
I'd add to your caveat - "I don’t know which camera has the most accurate ISO setting. In fact, as far as I’m concerned, what with all the ways a manufacturer can rate the ISO sensitivities of sensors, I’m not sure that question has much meaning." - of course a camera manufacturer does not rate the ISO 'sensitivity' of its sensors - there is no 'ISO' of a sensor. ISO only applies to processed output, and the effect of the ISO setting on the raw file is not mandated.
Interestingly, the QE appears to be in reverse order compared with this 'sensitivity'.
In this case, sensitivity -- and it's not quite the right word, as you pointed out earlier -- is proportional to the raw count as a percent of full scale over the exposure. It's useful for comparing how several cameras' ISO settings affect raw files -- and, for the most part, the ISO settings do affect the raw files, even though ISO is not defined in terms of raw files. But if you're a raw shooter and you've got a GFX 100S and an X2D, as I have, it's nice to know how the two cameras respond to the same amount of light at various ISO settings.
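For anyone who wants to run the same kind of comparison on their own raws, here's a minimal sketch of the calculation as I'm describing it -- raw count as a fraction of full scale, normalized by exposure. Every number in it is a placeholder for illustration, not a measurement from any of the cameras mentioned.

```python
import math

# Minimal sketch of the comparison described above: treat "sensitivity" as the
# raw count, expressed as a fraction of full scale, divided by the exposure.
# Every number below is a placeholder for illustration, not a measurement.

def relative_sensitivity(raw_mean, black_level, white_level, exposure):
    """Fraction of raw full scale reached per unit of photometric exposure."""
    fraction = (raw_mean - black_level) / (white_level - black_level)
    return fraction / exposure

# Hypothetical green-channel means from two cameras photographing the same
# uniform patch at the same ISO setting and the same exposure (lx*s):
cam_a = relative_sensitivity(raw_mean=8200,  black_level=64,  white_level=15871, exposure=0.10)
cam_b = relative_sensitivity(raw_mean=30100, black_level=256, white_level=65535, exposure=0.10)

print(f"camera A: {cam_a:.2f} of full scale per lx*s")
print(f"camera B: {cam_b:.2f} of full scale per lx*s")
print(f"ratio: {cam_a / cam_b:.2f}x ({math.log2(cam_a / cam_b):+.2f} stops)")
```

The absolute numbers don't mean much; the ratio (or the log2 of it) between two cameras at the same ISO setting is the interesting part.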
That's an interesting point. I've read unsubstantiated claims that a smaller sensor makes it easier to do sensor-based stabilization (less mass to move around) and faster readouts/global shutter sensor designs. I'd be curious to know if these claims actually bear fruit.
Elsewhere, I've seen sensitivity expressed as "The conversion efficiency is 7.14μV/electron" in a joint paper between Alternative Vision Corp and Foveon.
Unfortunately that leaves out photon flux, QE and wavelength ... oh well ...
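For what it's worth, here's a back-of-the-envelope sketch of where a figure like 7.14 μV/electron sits in that chain; the photon count and QE below are made-up numbers, just to show the bookkeeping.

```python
# Back-of-the-envelope sketch of where a figure like 7.14 uV/electron sits in
# the chain mentioned above: photons -> (QE) -> electrons -> (conversion gain)
# -> volts.  The photon count and QE are made-up placeholders.

photons_per_pixel = 20000        # photons arriving during the exposure (assumed)
quantum_efficiency = 0.55        # fraction converted to electrons (assumed)
conversion_gain_uV = 7.14        # microvolts per electron, from the quoted figure

electrons = photons_per_pixel * quantum_efficiency
signal_uV = electrons * conversion_gain_uV

print(f"{electrons:.0f} e-  ->  {signal_uV / 1000:.2f} mV at the pixel output")
```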
Suppose I need a 12.5mm aperture diameter to get the DoF I need and a 2-second exposure time to get the waves blurred the way I want. With a 100mm lens on an FF camera I'm stopping down to f/8; with m4/3 I'm stopping down to f/4 using a 50mm lens. Photographic exposure is 2 stops hotter with m4/3, so maybe in some dual-gain scenarios m4/3 is at some advantage.
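The arithmetic, spelled out (a quick sketch using only the numbers from the example above):

```python
import math

# The arithmetic behind the example above: same 12.5mm aperture diameter, same
# framing, on full frame (100mm lens) and m4/3 (50mm lens), same 2 s shutter.

aperture_diameter_mm = 12.5

f_number_ff  = 100 / aperture_diameter_mm   # -> f/8 on full frame
f_number_m43 = 50 / aperture_diameter_mm    # -> f/4 on m4/3

# Photometric exposure (per unit of sensor area) goes as t / N^2, so:
stops_hotter = math.log2((f_number_ff / f_number_m43) ** 2)

print(f"full frame: f/{f_number_ff:.0f}, m4/3: f/{f_number_m43:.0f}")
print(f"m4/3 exposure is {stops_hotter:.0f} stops hotter at the same shutter speed")
```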
I think there's likely not a lot in the former; a smaller sensor likely makes IBIS a bit more energy-efficient rather than easier. The 'faster readout' thing assumes that wire delay is a bottleneck in readout speed, which it isn't. The reverse is likely true for global shutter. Global shutter requires more circuitry in the pixels, which is easier in a bigger pixel.
Since a lot of it is about angle, in linear measure the smaller the pixel, the more positioning precision may be needed. The idea that lighter cameras need less sturdy tripods doesn't always play out well in practice.
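A rough illustration of that point, with an assumed focal length and an assumed residual shake angle (both made up):

```python
import math

# Rough illustration: the same angular motion covers more pixels when the
# pixels are smaller.  The focal length and shake angle are assumed values.

focal_length_mm = 300
shake_arcsec = 10                               # assumed residual angular shake
shake_rad = math.radians(shake_arcsec / 3600)

blur_um = focal_length_mm * shake_rad * 1000    # blur on the sensor, in microns

for pitch_um in (6.0, 4.3, 3.3):                # three representative pixel pitches
    print(f"{pitch_um} um pixels: blur spans {blur_um / pitch_um:.1f} pixels")
```

Same angular error, more pixels crossed as the pitch shrinks.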
This is the exact reason the Zone System was developed by Fred Archer and Ansel Adams and refined by Minor White and Fred Picker, who also documented it for the photographic world. First of all, one could not trust the ASA rating of film. The rating was highly dependent on the testing method, so the ASA was not adequate for ensuring "proper exposure." The true ASA would even vary between different batches of the same film. The first task when Adams got a new batch of film was to determine his personal ASA for shooting that film. That was done by the use of a precision instrument called a densitometer. The "personal ASA" was a function of exposure, development method and developer used (Adams used mostly dilution B of HC-110).
To be able to calculate N-1 (for high contrast scenes) and N+1 (for low contrast scenes) one first needs to know normal. While the Zone System works best for sheet film, where each sheet is developed independently, it is applicable to medium format, where multiple film backs can hold film for N, N-1 and N+1 use. The film calibration also took into account various offsets in the camera - e.g., a shutter that was 5% too slow, which could happen with mechanically timed shutters.
But the key is that in film, the actual negative density, read with a densitometer, was what was used. That is as close as one can get to the raw capture. In digital it is whatever spin the camera company wants to put on the JPEGs. Then there is the issue of a CFA. With my Leica monochrome I can hang a white sheet, sit the camera on a tripod, spot meter the center of the sheet and use RawDigger to tell me at what ISO setting metered middle gray actually corresponds to middle gray in the raw data. With a CFA that is difficult to do, since we are not directly measuring the sensitivity of the sensor beneath the CFA.
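For the monochrome case, the check itself is simple once RawDigger (or anything that reads raw values) gives you the numbers; a sketch with hypothetical values:

```python
import math

# Sketch of the check described above for a monochrome sensor: meter a uniform
# target, then see (e.g. in RawDigger) where the metered middle gray actually
# lands in the raw data.  The values below are hypothetical.

black_level = 256
white_level = 65535
gray_raw_mean = 8300      # raw mean reported for the metered patch

stops_below_clipping = math.log2((white_level - black_level) / (gray_raw_mean - black_level))
print(f"metered middle gray sits {stops_below_clipping:.2f} stops below raw clipping")
```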
As some have pointed out, there are really two issues here which sometimes get conflated. One is the geometric rendering of a scene, or geometric equivalence. A lens projects the light from the scene in front onto the image plane behind. The geometry is determined by the focal length, that is, by the refraction that changes the cone angle in front to the cone angle behind. When a sensor is put behind the lens, the size of the sensor defines the field of view. Geometric equivalences can then be determined as a function of focal length and sensor size to get the same image projected on different sensors. Then factors such as DoF can be brought in once a circle of confusion is selected, and since one has defined a CoC, diffraction can be factored into the model. While this model may seem geometrically complete, it isn't really: the sensor images are not the same size. One needs to scale the smaller sensor image to the size of the larger, or, better yet, scale both to a standard size, to take into account the enlargement ratio required to make the images the same size. That will modulate the impact of diffraction. Such things as noise, ISO, etc. have little to do with the geometric properties of "equivalent cameras."
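A short sketch of that bookkeeping, with the crop factor and the starting CoC as example values; the point is just that once both images are brought to the same output size, focal length, DoF-equivalent f-number, and CoC all scale together with the crop factor:

```python
# Sketch of that bookkeeping: once both images are scaled to the same output
# size, focal length, DoF-equivalent f-number, and circle of confusion all
# scale with the crop factor.  The numbers are just examples.

def equivalent_on_smaller_format(focal_length_mm, f_number, coc_mm, crop_factor):
    """Settings on the smaller format giving the same framing, the same DoF,
    and the same-size final image as the larger-format starting point."""
    return {
        "focal_length_mm": focal_length_mm / crop_factor,
        "f_number": f_number / crop_factor,   # same aperture diameter -> same DoF
        "coc_mm": coc_mm / crop_factor,       # the smaller image gets enlarged more
    }

# Full frame at 100mm f/8 with a 0.030mm CoC, compared with m4/3 (crop factor 2):
print(equivalent_on_smaller_format(100, 8, 0.030, 2.0))
# -> {'focal_length_mm': 50.0, 'f_number': 4.0, 'coc_mm': 0.015}
```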
I took a week-long workshop with Fred Picker back in the late 1970s. Most of us there had just started using a view camera but were proficient otherwise. The first thing Picker said was that there was only so much one could learn in the indoor "classroom" sitting on our butts. Sure, we could talk about the Scheimpflug line and all sorts of other concepts from projective geometry.
My field of expertise in real life is algebraic geometry, so projective transforms are elements of my dreams, or more like nightmares during my graduate school years 🙀. The important thing is not how the camera, as a projective transform from three-space to two-space, works in theory, but what the image looks like on the print. The best way to understand that is to "do the experiment": take the image, make the print, go back inside and discuss it. An integral part of photography, just as of any discipline, is to develop the intuition (we mathematicians call it "mathematical maturity") so we can look at an image, visualize what we want to do with it, and have the experience to make the correct choices, or at least define a set of choices we can try that will maximize our success rate.
Ansel Adams was able to capture one of his more iconic images, "Moonrise," because he had the intuition to see it and the knowledge base to set up his camera rapidly and let his intuition drive the exposure settings; he didn't have time to dig out his meter, and he still got a workable negative. The rest is, as we say, history.
On the Z 6, I use the highlight-weighted metering mode. It leaves some headroom, but I'd rather have informative highlights and pulled-up shadows than blown highlights. Thom Hogan says the Z cameras in this mode put highlights at middle gray, so dialing in +2EV is prudent. I recently tested that; in my images it let more highlights go to saturation than I desired, so I'm now trying +1EV. For the D7000 I just use regular matrix mode; I tried all the ETTR pet tricks with no satisfaction, so I just try to pay attention to the range of light in the scene, sometimes dialing in some -EV. All of this is cognizant of the respective cameras' ability to resolve light measurements through a given range; whatever the measured DR, my practical experience is that I have not had to use denoise in post with the Z 6 except for one image, a grab shot in very dim light where I didn't have time to properly consider metering. I probably had to deal with shadow noise in about 30% of my D7000 images.
I rather like Bill Claff's PDR definition for comparing cameras.
I do too, but the SNR threshold is getting to be too low as the resolution of cameras increases and we have the ability to make higher quality big prints. When we get to 340MP 4:3 cameras, the PDR SNR threshold will be the same as that of one of the engineering dynamic range definitions (SNR = 1).
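A rough sketch of that scaling. The assumptions here are mine (a print-level SNR criterion of 20 over a circle of confusion of diagonal/1250, with square-root improvement when pixels are binned into the CoC), so treat the exact crossover as approximate rather than as Bill's published definition:

```python
import math

# Rough sketch of that scaling.  Assumptions (mine, not necessarily the exact
# PDR definition): the noise floor is where SNR reaches 20 over a circle of
# confusion of diagonal/1250, and binning pixels into that CoC improves SNR by
# the square root of the number of pixels binned, so the per-pixel threshold
# works out to 20 * pixel_pitch / CoC.

def per_pixel_threshold(width_mm, height_mm, megapixels, print_snr=20.0, coc_divisor=1250.0):
    coc_mm = math.hypot(width_mm, height_mm) / coc_divisor
    pitch_mm = math.sqrt(width_mm * height_mm / (megapixels * 1e6))
    return print_snr * pitch_mm / coc_mm

# A 44x33mm 4:3 sensor at a few resolutions:
for mp in (50, 100, 340):
    print(f"{mp:3d} MP: per-pixel threshold SNR ~ {per_pixel_threshold(44, 33, mp):.2f}")
```

With those assumed parameters, the per-pixel threshold lands right around 1 somewhere in the 300-400MP range on a 44x33mm sensor.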