Equivalence no more "falls down in practice" than does, say, Newtonian Gravity. It is a very good approximation for most common situations. But, the more extreme the situation, the more other variables need to be taken into account.
For example, the first "failure" of Newtonian Gravity is when we ignore the effect of air resistance. Depending on the velocity of the projectile and its ballistic coefficient, this can result in a massive difference between prediction and measurement. Likewise, noise can be significantly different from what Equivalence would predict if we do not account for the differences in QE (quantum efficiency -- the proportion of the light projected on the sensor that is recorded) and electronic noise (the additional noise added by the sensor and supporting hardware) between the sensors, which becomes more and more noticeable as the amount of light lessens. However, sensors of the same generation are typically remarkably close in terms of their efficiency, so as long as we don't compare, say, a 10 year old camera to a modern camera, there's little difference.
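To make that concrete, here's a minimal sketch of a per-pixel noise model, assuming simple Poisson shot noise plus read noise added in quadrature; the QE and read-noise figures are made up for illustration, not measurements of any real sensor:

```python
import math

def image_noise_electrons(photons_on_pixel, qe, read_noise_e):
    """Rough per-pixel signal and noise estimate, in electrons.

    photons_on_pixel : photons striking the pixel during the exposure
    qe               : quantum efficiency (fraction of photons recorded)
    read_noise_e     : noise added by the sensor and supporting hardware (electrons, RMS)

    The recorded signal follows Poisson statistics, so shot noise = sqrt(signal);
    read noise adds in quadrature.
    """
    signal = photons_on_pixel * qe                       # recorded electrons
    shot_noise = math.sqrt(signal)                       # photon shot noise
    total_noise = math.sqrt(shot_noise**2 + read_noise_e**2)
    return signal, total_noise

# Two hypothetical same-generation sensors with similar QE and read noise:
# with plenty of light, shot noise dominates and the differences barely matter...
print(image_noise_electrons(10000, qe=0.55, read_noise_e=3))
print(image_noise_electrons(10000, qe=0.50, read_noise_e=5))

# ...but with very little light, the QE and read-noise differences start to show.
print(image_noise_electrons(50, qe=0.55, read_noise_e=3))
print(image_noise_electrons(50, qe=0.50, read_noise_e=5))
```

Run with plenty of light, the two hypothetical sensors come out nearly identical; starve them of light and the QE and read-noise differences become visible, which is exactly the caveat above.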
However, differences in pixel count (as well as lens sharpness) certainly play a role in differences in resolution. Resolution is not strictly a parameter of Equivalence, but Equivalence does cover the issue nonetheless. Other differences include bokeh, distortion, flare, operation, etc., etc., etc., any and all of which may play a significant role depending on the photo. Again, this is all covered by The Equivalence Essay. It's simply that the anti-Equivalence crowd typically doesn't care that the matter is discussed (or intentionally ignores it).
So many want a short and simple explanation that covers every scenario, which, of course, is simply not possible. As a result, some say that quantifying the differences is simply folly, and that each piece of equipment has its own "character", when, in fact, that "character" can be quantified. Thus, the "CCD look" BS, among other topics.
In short, photographic properties other than the six parameters of Equivalence come with tons of caveats. Ignoring these caveats, which are specifically spelled out, and faulting Equivalence for not explaining all of it in a sentence or two, is, well, how should I put it?
You're right, it's not. It's easy enough to determine what settings are needed. However, if a necessary lens focal length & f-stop combo isn't available or - as you illustrated - an ISO isn't available, the task of making an equivalent photo isn't achievable.
However, in the numerous instances where equivalent photos can be made, there's nothing terribly controversial about determining the necessary settings and making the photos. Equivalent photos can be and are made.
Sure, it seems you can always go downwards, with MF always being able to replicate FF and FF always being able to replicate M43. It also works when ISO levels are high.
I wonder what effect lens resolving power, pixel size, and pixel count have on the theoretical relationship.
Bad analogy. Newton's law of gravity states something about the gravitational force only. It does not predict trajectories. That is done by Newton's second law of motion. If you input gravity and air resistance there as external forces, you get pretty good precision.
A common criticism of Equivalence is that some people say that it does nothing to help them to take better pictures, but this represents a misunderstanding of what Equivalence is all about. Equivalence is simply a framework by which six visual properties -- perspective, framing, DOF / diffraction / total amount of light projected on the sensor, exposure time (motion blur), lightness, and display size -- relate between different formats. It is not an "instruction manual" for how to take a photo, it is not an argument that "FF is best", it does not say that "bigger is always better". In a word or two, it simply explains why the mantra "f/2 = f/2" is no more or less true, or useful, than saying "50mm = 50mm".
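As a rough illustration of those relationships, here is a small sketch that scales focal length and f-number by the ratio of crop factors (and the ISO setting by its square) so that the six properties above are held constant; the function name and the specific crop factors are just for illustration:

```python
def equivalent_settings(focal_mm, f_number, iso, crop_from, crop_to):
    """Translate settings from one format to another so that perspective,
    framing, DOF, exposure time, lightness, and display size stay the same.

    crop_from / crop_to are the crop factors of the two formats relative
    to full frame (e.g. 1.0 for FF, 2.0 for Micro Four Thirds).
    Shutter speed is unchanged, so it is not passed in.
    """
    ratio = crop_from / crop_to            # relative sensor scale
    return {
        "focal_mm": focal_mm * ratio,      # same framing (same angle of view)
        "f_number": f_number * ratio,      # same aperture diameter -> same DOF, same total light
        "iso": iso * ratio**2,             # same lightness at the same shutter speed
    }

# 50mm f/2 ISO 400 on Micro Four Thirds (crop 2.0) ...
# ... corresponds to 100mm f/4 ISO 1600 on full frame (crop 1.0):
print(equivalent_settings(50, 2.0, 400, crop_from=2.0, crop_to=1.0))
```

So 50mm f/2 1/100 ISO 400 on mFT frames the scene the same way, with the same DOF and the same total light projected on the sensor, as 100mm f/4 1/100 ISO 1600 on FF -- which is why "f/2 = f/2" is no more meaningful across formats than "50mm = 50mm".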
If one system can take a photo that another system cannot, and that results in a "better" photo, then, of course, we would do so. For example, if low noise meant more than a deeper DOF in a scene where motion blur were a factor, then we would compare both systems wide open with the same shutter speed, as that would maximize the amount of light falling on the sensor and thus minimize the noise. Equivalence tells us, however, that this would necessarily result in a more shallow DOF for the system using a wider aperture, and thus most likely result in softer corners. So, we surely would not criticize the larger sensor system for having softer corners on the basis of a choice the photographer made.
The point of photography is making photos. As such, one doesn't choose a particular system to get photos which are equivalent to another system. A person chooses a particular system for the best balance of the factors that matter to them, such as price, size, weight, IQ, DOF range, AF, build, etc. By understanding which settings on which system create equivalent images, these factors can be more evenly assessed to choose the system that provides the optimum balance of the needs and wants of a particular photographer.
Actually, perfect analogy. Equivalence talks only about perspective, framing, DOF, motion blur, the amount of light projected on the sensor, and display size. It does not predict noise. That is done by including the properties of the sensor and supporting hardware. If you include the total amount of light projected on the sensor along with the properties of the sensor and supporting hardware, you get pretty good precision.
At Purdue, with the exception of the first year of basic physics, the labs were a separate course. One could have a major in physics without doing the oil drop experiment. So I ended up with a major in both physics and math with only two labs. However, while I wasn't a big personal proponent of redoing historical experiments, the subject of physics progresses by explaining empirical observations in such a way that the explanation can be tested through additional experiments. As Feynman points out, although Millikan received a Nobel for his measurements of the electron charge, we know today that his estimates were not that accurate, as better experiments with better instrumentation have shown. It was also found that Millikan eliminated some of his trials from his calculations without justification, which resulted in a smaller variance.
I have no doubt that in theory, just like Euler column theory and a whole lot of other physics theories, "Equivalence" between photographic formats is correct.
Like many theories it is a nice simple theory. I just question how useful it is when we have to add in all the external variables like sensor technology, lens definition, and other variables. Precision would be very good with a database-driven app that took into account all the variables, such as sensor characteristics and lens characteristics -- just as I do when I run a computer program that takes Euler column theory and adds in all the correctives, such as material and construction tolerances.
As I said, it serves as a quick rough guide, in its basic simplified form.
I hope I have been clear: I am not a denier; I just question its usability in the real world.
I found in the past, that my old EM5 made "darker" images than my Nikon gear, using default settings. It seems this was due to Olympus trying to protect the highlights. I did find some confirmation on Bill Claff's site, Photons to Photos.
The first post in the thread? It doesn't illustrate anything about 'calibration' of ISO. Bill can't measure image plane illuminance, and therefore can't measure the exposure - so nothing about ISO can be taken from his measurements, only the ISO setting. That's why his charts are plotted against ISO setting rather than exposure - causing one of the common interpretation problems.
I was referring to: "It's well known that Fuji implements the ISO setting in an unusual way.
In this case they have given ISO 320 to ISO 12800 an additional 1-stop boost."
You wouldn't find confirmation for that on Bill's site because it only deals in raw files, whilst ISO is all about processed ones. ISO only mandates a single point on the tone curve, so different manufacturers can choose quite different renderings so long as that point is the same.