• Members 796 posts
    Aug. 14, 2025, 4:10 a.m.

    But is it the best way? : )

  • Members 340 posts
    Aug. 14, 2025, 8:21 a.m.

    Snipped for space; I'll answer the complete post as it's a good question and it's the point I'm making, which is that if you don't consider the nature of the optical instrument and the errors it produces, then your observations and conclusions will be flawed.

    My contention is that you measure the differences in an image through your understanding of how cameras form images, e.g. equivalent settings for "exactly the same photo" will produce exactly the same measured blur. I don't dispute this; I just say it isn't the whole story. What you then do is simply assume that the measured differences between the images carry over as visual differences when the images are viewed.

    The example you offer wouldn't make it into a peer-reviewed journal; it would be shredded as a heavily weighted test that most would say is designed to reinforce a pre-formed conclusion, because it ignores the nature of human vision.

    You carefully set the conditions for the test such that the two photos are identical, the same scene with no colour difference, yet in the theory you apply it to all equivalent photos. So alongside this test we do mine: four different (but similar) landscapes, two with "equivalent" settings and two without.

    Back to your test. Yes, the results will be as you say, BUT... this is only because the human visual system is very good at spotting relative differences, or the lack of them, especially if you flick between the two on your computer screen (and you so carefully set up those parameters in the example). This comparison in no way, shape or form proves that the human visual system actually sees either photo correctly, or sees the parameters as you measure them by how the camera forms the image. In fact it is guaranteed that we don't.

    If equivalent images don't need to be the same, then perform the same test with my four similar landscapes, real-world photos. I bet the vast majority can't tell which two share the same settings without looking at the EXIF.

    AOV doesn't transfer to a 2D image. If you take two similar landscapes and display them on screen at the same size... see where I'm going here? Though we know they are photos, and we are familiar with the effects wide-angle lenses have on the perception of distance and allow for it through experience and memory, we still get it wrong. So how does looking at a photo where distances appear to be stretched affect your perception of the actual measured DOF, the DOF as you measure it in your equivalence theory? The possibility exists that two photos with different settings can not only look equivalent, but the photo with the shallower DOF can actually appear to have the greater. (This is important, because if you understand this and start choosing your focus points in line with our perception of what we expect to be sharper, rather than treating DOF as a pure mathematical exercise, you may be surprised at the results.)

    And that's before we even think about different subjects, such as equivalent photos of high-acutance subjects, say reflections on gently rippling water against a field of wheat. But then, normally, the argument falls back along the lines that equivalence still works if we use two equivalent photos of wheat fields, or of reflections. Then you say that equivalence doesn't define how you should take a photo, except that you are applying the condition of equivalent subjects in the proof, and thus cancelling out perceptual effects.

    If you look at all these equivalence threads you may notice that the only examples you ever seem to use are "exactly the same photo". If I mention that I don't see the point of a theory in a creative medium that reduces the camera to a copy machine and effectively cancels out the photo, you correct me by saying that I obviously don't understand equivalence. I say that by not including an understanding of human perception, you fail to see that the metrics you hold to be constant in the camera don't transfer as constants to finished 2D images all viewed at the same size on your computer screen.

    If you quote maths and science at us then you must abide by the same. If you set the conditions for your test and example images, then you must also apply those conditions to the results. So equivalence works fine with exactly the same photo. If you are going to apply it to real-world photography, let's see real-world equivalent photos in the proofs. See what happens... Are the differences that equivalence defines really that visible in real-world images, and do they really play that important a role in defining the visual output of different systems in real photography?

    The point of the upside-down photo is that it's visual proof that the human brain actively modifies the information the eye records, in line with your memory and experience of how you think things should look. Using a human face is a weighted example, and I deliberately do so because it is so difficult to see through even when you know. The fact remains that we do something similar with all photos, especially when we glance. As I said earlier, it's frightening just how much confirmation bias affects what you see, and yet we still assume our vision is absolute.

    Sorry about all the edits, final thoughts to chew on...

  • Members 2534 posts
    Aug. 14, 2025, 9:50 a.m.

    Australia and Vanuatu have struck a funding deal worth $500 million, which will see Australia send aid to the Pacific island nation for climate resilience and security support.
    "climate resilience and security support" sounds as clear as mud (equivalence) 🤔😊

  • Members 568 posts
    Aug. 14, 2025, 2:26 p.m.

    What does that have to do with TCs?

  • Members 1003 posts
    Aug. 14, 2025, 4:38 p.m.

    You said "If you put a 2x TC behind a 70-200/2.8 lens, the combination, which could be rendered as a single unit <> is actually 140-400/5.6".

    Why did you say "they are not an equivalence"? "real and absolute" is no reason for that, so it must be something else.

  • Members 796 posts
    Aug. 14, 2025, 8:49 p.m.

    That's not a Carbon Tax being funneled to a Third World country; it's an economic/defense pact in response to Vanuatu's increasing ties to China:

    www.reuters.com/world/china/australia-vanuatu-agree-325-million-security-economic-pact-amid-china-2025-08-13/
    www.abc.net.au/news/2025-08-13/australia-vanuatu-initial-nakamal-agreement/105650044

    That's not to say that there is nothing about climate change in the agreement, but it's disingenuous in the extreme to cite this as an example of money being funneled to Third World countries from First World countries in the form of a Carbon Tax.

  • Members 796 posts
    Aug. 14, 2025, 8:54 p.m.

    I'm going to snip right here, because your post is quite excellent and hits the core of our disagreement on the head! Equivalent photos, by definition, are necessarily photos of the same scene from the same position with the same framing, etc., etc., etc. Now, Equivalence talks about Equivalent settings, which are settings on the camera that would result in Equivalent photos if the photos had been taken of the same scene from the same position. In other words, something like, "Had you taken the photo from the same position, etc., with a different camera using Equivalent settings, and displayed the photos at the same size on the same medium, etc., then the photos would have been Equivalent."

    In short, there is no such thing as Equivalent photos of different scenes. You can use Equivalent settings for different scenes, but the resulting photos will not be Equivalent. I hope this clears up any misunderstandings!
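
    For anyone following along, here's a minimal sketch of how Equivalent settings scale between formats, assuming the usual crop-factor relations (focal length and f-number scale linearly, ISO by the square, so that AOV, aperture diameter, DOF and total light stay the same); the helper function and the numbers are purely illustrative, not anyone's official tool:

    # Minimal sketch of Equivalent settings between formats (illustrative only).
    def equivalent_settings(focal_mm, f_number, iso, crop_from, crop_to):
        """Scale focal length, f-number and ISO so that AOV, aperture diameter
        (hence DOF) and total light on the sensor stay the same.
        Exposure time is unchanged."""
        r = crop_from / crop_to   # relative crop factor between the two formats
        return focal_mm * r, f_number * r, iso * r * r

    # e.g. 25mm f/1.4 ISO 100 on mFT (crop 2.0) -> full frame (crop 1.0):
    print(equivalent_settings(25, 1.4, 100, 2.0, 1.0))   # (50.0, 2.8, 400.0)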

  • Members 568 posts
    Aug. 15, 2025, 4:49 p.m.

    Are things equivalent to themselves? Doesn't "equivalent" imply some kind of translation? What is left for "equal" or "the same", if they can be replaced by "equivalent"?

    Things that are either equal or not equal: DOF, AOV, pupil size, total projected light onto the sensor, exposure time.
    Things that are either equivalent or not equivalent: f-number, focal length, ISO.

  • Members 1003 posts
    Aug. 15, 2025, 5:11 p.m.

    As you surely must know, "equivalent" means "of equal value", which leaves the determination of value in question, in all its variety, in this benighted thread.

    Like with money: a "basket of goods", so to speak, with most responders having their own, different basket.

  • Members 340 posts
    Aug. 15, 2025, 6:35 p.m.

    It's always a pleasure to disagree with you GB! 😁

  • Members 1003 posts
    Aug. 15, 2025, 11:13 p.m.

    An interesting variant for this thread is Sigma Foveon cameras, which can have different raw MP counts selected via on-chip pixel binning. For example, the SD10 has a half-size option called "low resolution", binned 2x2 on the sensor chip. Raw output is 1134x756 px instead of 2268x1512 px. The sensor size remains 20.7mm x 13.8mm, i.e. the so-called "crop factor" remains at 1.7. The framing remains the same, but I suspect that the CoC does not, due to the doubled pixel size, meaning that the DOF may not be the same, even though the relative crop factor between the two modes, by sensor size, is unity.
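
    Quick back-of-envelope on the pitches involved (just arithmetic from the figures above, not a claim about the CoC itself):

    # Pixel pitch for the SD10 figures quoted above (simple arithmetic):
    sensor_w_mm, full_px, binned_px = 20.7, 2268, 1134
    print(sensor_w_mm / full_px * 1000)    # ~9.1 um native pitch
    print(sensor_w_mm / binned_px * 1000)  # ~18.3 um effective pitch after 2x2 binning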

  • Members 796 posts
    Aug. 16, 2025, 2:14 a.m.

    The CoC is independent of pixel size. For example, the Z7.2 (47 MP) and Z6.2 (24 MP) will produce photos with the same DOF for a given subject-camera distance, framing, f-number, and viewing conditions (e.g. same display size, viewing distance, etc.).

    All the pixel count does is increase/decrease resolution. It doesn't affect DOF, diffraction, or motion blur. Less resolution in a photo does not mean more DOF; more resolution in a photo does not mean less DOF.
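
    To make that concrete, here's a small sketch using the standard thin-lens DOF approximation; notice that the pixel count never appears, only the CoC chosen for the viewing conditions (the numbers are illustrative only):

    # Standard DOF approximation via the hyperfocal distance.
    # The CoC comes from display size / viewing distance, not from pixel pitch.
    def dof_mm(focal_mm, f_number, subject_mm, coc_mm=0.030):
        h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm       # hyperfocal distance
        near = subject_mm * (h - focal_mm) / (h + subject_mm - 2 * focal_mm)
        far = subject_mm * (h - focal_mm) / (h - subject_mm)     # invalid if subject >= h
        return near, far

    # 50mm at f/2.8, subject at 3 m, CoC 0.030 mm (typical FF viewing conditions);
    # the result is the same whether the body behind the lens is 24 MP or 47 MP:
    print(dof_mm(50, 2.8, 3000))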

  • Members 1003 posts
    Aug. 16, 2025, 2:32 a.m.

    Same display size when the raw output is 1134x756 px instead of 2268x1512 px on my monitor?

    Talking 100% monitor zoom, of course. In other words, your quoted Z 7.2 would have to be down-sampled to get the same display size as your Z 6.2.

    AFAIK, down-sampling an image increases its apparent DOF and thereby negates its equivalence. For those two cameras, the only way to get same-looking images would be to shoot from different distances and crop the larger, thereby ruining equivalency.

  • Members 796 posts
    Aug. 16, 2025, 5:23 a.m.

    Resampling a photo only affects the DOF inasmuch as the display size changes. So, if you view photos with different pixel counts at 100%, you will necessarily view them at different display sizes, and that is what changes the DOF. Consider the following scenario: you have two hypothetical 65" monitors, one 24 MP and the other 47 MP (thus, the 47 MP monitor has smaller pixels). Now display the aforementioned photos from the Z6.2 and Z7.2 on the respective monitors. The DOFs will be the same (but, of course, the photo displayed on the 47 MP monitor will be more detailed).
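
    A rough sketch of why that works, taking the usual convention that the CoC referred to the sensor is the blur you can just resolve on the display divided by the enlargement (display width over sensor width); the pixel counts never enter into it (all numbers illustrative):

    # Enlargement depends only on physical sizes, not on pixel counts.
    display_w_mm = 1440.0     # approx. width of a 65" 16:9 monitor
    sensor_w_mm = 36.0        # full-frame sensor width
    enlargement = display_w_mm / sensor_w_mm            # ~40x for both monitors
    resolvable_blur_on_display_mm = 1.0                 # illustrative viewing limit
    coc_on_sensor_mm = resolvable_blur_on_display_mm / enlargement
    print(enlargement, coc_on_sensor_mm)                # identical for 24 MP and 47 MP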

    A corollary to this is that lens sharpness doesn't affect DOF, either. It affects resolution, of course, but not DOF. What happens is that more pixels (and/or a sharper lens) allows one to see a smoother transition from in-focus to out-of-focus, but the depth from the focal plane beyond which we consider the photo to be out-of-focus (as opposed to being low resolution) is not dependent on pixel size (either on the camera or the display) or on the sharpness of the lens.

  • Members 568 posts
    Aug. 16, 2025, 3:58 p.m.

    I don't know if it is as pleasant, but I disagree with you, also. Subjectivity does not dilute the meaning of equivalence. If you sample enough people's subjective evaluation of DOF, you will find that the mean and median evaluations will correlate directly with pupil size and subject distance, if the images are displayed at a reasonably high magnification where any variation in DOF would be visible in an A/B comparison.

  • Members 568 posts
    Aug. 16, 2025, 4:21 p.m.

    That's one clear benefit of a Foveon-like layout or a monochrome sensor; you can potentially do hardware binning to decrease input-referred read noise, or possibly get higher frame rates and faster rolling shutter. With a Bayer sensor, binning would be a wiring mess to bin only pixels with the same color filter, and there would be so much blurring needed to reconstruct the image, although sharp monochrome output is possible. The monochrome, however, may need to be upsampled and color-shifted, to get rid of the color shift of turning exclusive

    RG
    GB

    tiles into single monochrome values, as the red and blue sensitive areas are off-center.
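
    For what it's worth, a toy Monte Carlo of why charge (hardware) binning helps input-referred read noise, under the simplified assumption that analog binning combines four pixels before a single read, while digital summation pays the read noise four times (values are illustrative only):

    import numpy as np

    rng = np.random.default_rng(0)
    n, signal_e, read_e = 100_000, 5.0, 3.0   # per-pixel signal and read noise, in electrons

    # Photon shot noise for the four pixels of a 2x2 tile:
    pix = rng.poisson(signal_e, size=(n, 4)).astype(float)

    analog = pix.sum(axis=1) + rng.normal(0, read_e, n)           # one read per binned superpixel
    digital = (pix + rng.normal(0, read_e, (n, 4))).sum(axis=1)   # four reads, summed digitally

    print(analog.std(), digital.std())   # analog binning shows lower total noise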

  • Members 568 posts
    Aug. 16, 2025, 4:30 p.m.

    Very large pixels can hide the underlying analog differences in smaller blurs. I think primarily in terms of analog DOF, an envelope of minimal point-spread sizes, with pixel resolution as a secondary concern, much like your thinking here; but low pixel resolution hides differences. Other hidden differences can be lack of focus, small motion blurs, light diffraction, small aberrations, etc.

  • Members 340 posts
    Aug. 16, 2025, 4:58 p.m.

    For the same scene then correct; for different scenes then no, I think you will be surprised. This is the problem: if you start off by knowing the photos have equivalent settings, or you arrange the test such that only photos with equivalent settings are compared, then you skew the test. You must also include photos that aren't equivalent and see if your bell curve still fits over that which you wish to prove.

    None of this dilutes the maths of equivalence, but the subjectivity? If we say that equivalence defines the difference between cameras by comparing the same photo taken with each, then you must take the same photo with each. If you take different photos with each camera, then you must compare the different photos and see if equivalence still defines the difference. What I think (know) you will find is that with different photos, subjectivity dilutes the meaning of equivalence to the point that other subtle differences may have an equal if not greater influence on the finished result.

    I don't disagree with the maths; it's just that sometimes we should actually try to understand what we see, rather than try to fit what we see into the maths we understand. I see a disparity.