• Members 979 posts
    Aug. 11, 2025, 8:09 p.m.

    Good link assuming that when it says "aperture" it means aperture size, not f/number.

    It says: Aperture is a measure of light transmission. The exposure of an f1.4 lens on medium format, full frame, M43 or a phone is exactly the same! Equivalence was created for the internet and by influencers that want to ... sound smarter than you.

  • Members 776 posts
    Aug. 11, 2025, 8:24 p.m.

    Let's put that to the test. Using the definition of Equivalence: Equivalent photos are photos of the same scene that are taken from the same position with the same focal point and [diagonal] framing using the same aperture diameter, same exposure time, processed in the same manner, displayed at the same size, and viewed from the same distance on the same medium.

    Now, given that I have two of these photos from two different systems, are you saying that the "scientific visual properties" of perspective, DOF, and motion blur will differ from those same properties when "viewed with human perception"? That is, would someone looking at the two photos taken in the described manner say they are different based on those properties? I mean, some properties may differ, such as flare, distortion, resolution, noise, etc., of course, but if we put additional constraints on the equipment used, these, too, can be accounted for.

    What Equivalence is absolutely not saying is that different photographers (one with, say, a smartphone, another with an EM5 + 12-100/4, and another with an R5 + 50 / 1.4) will take the same photo. Hell, even if all three photographers had the exact same equipment no one is saying they would, or even should, all take the same photo!

    What Equivalence does say, however, is that if, for example, 50mm f/5.6 1/400 ISO 1600 represents the "best" photo on FF, then 25mm f/2.8 1/400 ISO 400 will represent the "best" photo on mFT. Equivalence also says that if 50mm f/1.4 1/400 ISO 1600 represents the "best" photo on FF, then mFT cannot take an equivalent photo and thus must take a different photo. In addition, if 25mm f/1.4 1/400 ISO 1600 on mFT represents the "best" photo, then 50mm f/2.8 1/400 ISO 6400 on FF represents an equivalent photo, but FF might be able to take a "better" photo using settings that mFT has no equivalent for.
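
    The crop-factor arithmetic behind those examples can be sketched in a few lines of Python (a minimal illustration; the function name and the mFT crop factor of 2.0 are my own choices, not from the post):

```python
# Illustrative sketch: mapping "equivalent" settings between formats.
# Assumes an mFT crop factor of 2.0 relative to FF; names are hypothetical.

def ff_equivalent(focal_mm, f_number, iso, crop_factor):
    """FF-equivalent settings for a smaller format, holding the aperture
    diameter and shutter speed fixed: focal length and f-number scale by
    the crop factor, ISO by its square (total light scales the same way)."""
    return {
        "focal_mm": focal_mm * crop_factor,
        "f_number": f_number * crop_factor,
        "iso": iso * crop_factor ** 2,
    }

# The third example from the post: 25mm f/1.4 ISO 1600 on mFT...
print(ff_equivalent(25, 1.4, 1600, 2.0))
# ...comes out as 50mm f/2.8 ISO 6400 on FF.
```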

    As a side note, can mFT, for example, take a "better" photo than FF? Before answering, let me first note that this question is altogether a different question from "Can mFT be the better choice than FF?" -- they are two very, very, very different questions. That said, the answer to the first question is "No" -- assuming relevant factors with regards to the photos taken are more or less equal (but that is not to say that any differences would necessarily have any effect on the "success" of the photo). The answer to the second question is not only "Yes!" but, in my opinion, true for the vast majority of people in the vast majority of situations.

    That's it. That's the whole of it. And, as I noted, it presumes relevant factors are more or less equal (e.g., if one system has image stabilization and the other doesn't, then that throws a rather large wrench in many situations, and, yes, Equivalence does discuss these things -- that's one reason why the Equivalence Essay is so freakin' long!). But, again, Equivalence absolutely does not say that you would, or should, take Equivalent photos using different formats.

    So, apologies for ignoring the rest of the post, but until we have a clear and unambiguous response to the above, it's meaningless to go further.

  • Members 776 posts
    Aug. 11, 2025, 8:26 p.m.

    I'm afraid I have to disagree, even if that were true. For example, the following:

    But also because a crop sensor lens of the same field of view length is wide for example my 25mm vs a 50mm on full. I get more depth of field. No still not aperture equivalence. It’s just that on the m43 I’m shooting at 25mm and get the DOF a 25mm affords. When I don’t want background blur, the M43 camera actually improves my hit rate and makes focusing easier.

    is absolute bunk.

  • Members 979 posts
    Aug. 11, 2025, 8:36 p.m.

    I missed that - you're right.

  • Members 325 posts
    Aug. 12, 2025, 12:05 a.m.

    That is one hell of a weighted question. "Scientific visual properties"?? And defining the same photos are different??

    But YES!!!

    You make a statement above that is a long way from fact:

    They don't, see my post above. This is very provable, and simple to prove. Plus let's look at another example. Take two photos and show them on a screen one after another, the difference being one f-stop. Flick between them and you will clearly see the difference. Proves your point? No, I say it misunderstands the nature of human vision. So we try again. Put one photo on the screen and once viewed send the viewer off to get a cup of coffee and whilst they're gone swap the photo. Now ask if they are different. You may say that's not a fair test, but I still say that you misunderstand the nature of human vision. And the fact remains that if you perform the test in different ways you get different answers. You can't just pick the result that agrees with your theory and ignore the other.

    As for perspective, it's assumed in the real world and is certainly not an absolute quantity in an image. You are so wrong about the nature of perspective when it comes to human vision; it's nothing like that of the camera, see my post above. It is very possible to take two photos of the same scene that aren't equivalent by a long chalk and still have people see them as the same when they are viewed in isolation. Try it with a car: keep it the same size in the photos but use different focal lengths, apertures, and shutter speeds, then view them in isolation (the coffee trip in between again), and I bet people won't notice the perspective difference. They'll notice if it's a different car, though.

    Here:

    You start with the assumption that what you hold as measurable constants actually remain visual constants, or at least become visual differences if they are changed. But this isn't necessarily true. For instance, viewing distance isn't really important, as the initial assumptions you make when you first view an image hold remarkably consistent at nearly all viewing distances. So two people can certainly look at the same photo and form different opinions about what it actually shows. Forget the very narrow parameters that you continue to try to restrict the conversation to:

    Or trying to restrict it to one person viewing the images, it still remains that human perception varies. If you use the same person glancing at the same photo a year apart I bet their description of what they see will be different when you compare the records...

    Honestly GB, you really need to review your assumptions about the nature of human vision. Your conclusions seem to be based on the actual, real differences that can be measured, and the idea that those differences remain constant through human perception is far enough off the mark that your switching between "measured" and "visual" as if they were the same is an assumption, unsupported by evidence, that you need to revisit.

    That's a long way from being:

  • Members 776 posts
    Aug. 12, 2025, 12:28 a.m.

    Equivalence, by definition, is weighted. It concerns itself with perspective, framing, DOF, and motion blur (but can be extended to include resolution, noise, DR, etc. with additional assumptions about the equipment).

    No idea what you are saying here. Are you calling perspective, framing, DOF, and motion blur "scientific visual properties"? If so, then sure. If by "defining the same photos are different" you mean that two photos of the same scene can be different, even though the aforementioned "scientific visual properties" listed are the same, then obviously -- color, distortion, flare, etc., etc., etc.. If you mean anything other than what I spelled out here, then I have no idea what you mean.

    No, that proves my point.

    You're talking about memory here, which is a totally different discussion. If I flick back and forth between two photos and you see a difference, the difference is real. If I show you one photo, you leave the room, come back, show you another photo, and you don't notice a difference, that's not because the photos are not different, it's because you don't remember all the information that was in the photo.

    Let's take a more interesting example. If I have someone look at two photos, one on a 4K monitor and the other on an 8K monitor, and they can't see the difference, it's not that the photos are not different, it's that their visual acuity is unable to discern the difference and/or the difference is so inconsequential that they pay no attention to it.

    No one said, or implied, that perspective is an absolute quantity in a photo. What is being said is that if two photos are taken from the same position, displayed at the same size, and viewed in the same manner, the perspectives will be the same. This is relative, not absolute.

    "Don't notice" is not the same as "isn't there". If I show two photos of the same scene, but change the face of someone in the background and people don't notice, that does not mean the photos are the same (or equivalent). It simply means they didn't notice the difference.

    The "they didn't notice the difference" combined with "they noticed the difference but didn't care" is why I say that pretty much any camera cuts the mustard IQ-wise today, to include smartphones. Absolutely there are situations where the equipment makes a huge difference, but for the vast majority of people in the vast majority of circumstances, the only differences that matter are differences in operation, not differences in IQ and/or DOF options.

    Let me give a nice example. The following photo was taken at 50mm f/1.4 on a 45 MP FF R5 (larger size here):

    pbase.com/joemama/image/175590279/original.jpg

    If it had been taken at 25mm f/5.6 on a 20 MP EM5, how many people do you think would care about the differences? Not many, I would think. Some of those who cared about the differences might well have preferred the mFT photo due to the greater DOF. Obviously, however, if I wanted the deeper DOF, I could have just stopped down (and, for this photo, still remained at base ISO -- I likely did take a stopped-down photo as well, but preferred this one).
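
    A rough back-of-the-envelope check on the DOF difference in that comparison, using the standard thin-lens approximation DOF ≈ 2·u²·N·c/f² (valid well inside the hyperfocal distance). The 3 m subject distance and the circle-of-confusion values (0.030 mm for FF, halved for mFT) are my own illustrative assumptions, not from the post:

```python
# Illustrative DOF approximation; all numbers here are assumptions, not
# the poster's. Only valid for distances well short of hyperfocal.

def approx_dof_mm(dist_mm, f_number, focal_mm, coc_mm):
    # Thin-lens approximation: DOF ~ 2 * u^2 * N * c / f^2
    return 2 * dist_mm**2 * f_number * coc_mm / focal_mm**2

u = 3000                                 # 3 m subject distance (assumed)
ff = approx_dof_mm(u, 1.4, 50, 0.030)    # FF: 50mm f/1.4
mft = approx_dof_mm(u, 5.6, 25, 0.015)   # mFT: 25mm f/5.6, CoC halved
print(round(mft / ff, 2))                # -> 8.0 (mFT shot has ~8x the DOF)
```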

    So, I care about the differences. But I see myself as being in a small minority. What if I had taken it with my smartphone? I think more would notice the differences, but still not care, or, who knows, maybe some would have even liked the smartphone photo the best of the lot. Most, however, would probably only express a preference one way or another if pressed, but that's about it. For sure, I can post photos where the differences probably would make a difference to more people, but, again, I'm talking about the vast majority of photos people take, not the minority.

  • Members 2527 posts
    Aug. 12, 2025, 12:53 a.m.

    the world is in the state it is because governments are taxing the wealthy countries and passing it on to the 3rd world countries via global warming claims without the general population having any clue as to what's going on.

  • Members 979 posts
    Aug. 12, 2025, 2:03 a.m.

    Good to see an actual image posted in this verbose thread along with some actual camera parameters. Slightly puzzled - would the EM5 f/number theoretically be f/0.7 (same aperture diameter, as you said earlier)?

  • Members 776 posts
    Aug. 12, 2025, 4:36 a.m.

    You'll have to be more specific (and cite credible sources) to support your claim. Prove to us all that you know better than ChatGPT:

    chatgpt.com/share/689ac48c-8dac-8007-bd82-ee4952e63bfa

    😁

  • Members 776 posts
    Aug. 12, 2025, 5:19 a.m.

    Yes, the mFT equivalent of f/1.4 on FF is f/0.7. It's a bit nuanced for such wide apertures, however, as the f-number is an approximation for the numerical aperture, which is what really matters. However, I think it's "close enough" even for f-numbers this low. More importantly for extremely fast lenses, lens aberrations get exponentially worse the wider the aperture, which makes correcting for them rather problematic -- the lenses get significantly larger, heavier, and more expensive (consider the size, weight, and cost difference of, say, a 50 / 1.4 and a 50 / 1.2, and that's only about a half-stop difference!).
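
    The f/0.7 figure follows directly from holding the aperture (entrance pupil) diameter constant. A quick sketch (function name mine, purely illustrative):

```python
# Illustrative: same entrance pupil diameter => equivalent f-numbers.

def entrance_pupil_mm(focal_mm, f_number):
    # By definition, f-number = focal length / entrance pupil diameter.
    return focal_mm / f_number

pupil = entrance_pupil_mm(50, 1.4)   # ~35.7 mm for a 50/1.4 on FF
mft_f_number = 25 / pupil            # a 25 mm mFT lens with the same pupil
print(round(mft_f_number, 2))        # -> 0.7
```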

    Interestingly, Canon is using software corrections in their VCM line to keep the size and weight of the lenses down while still maintaining high sharpness (the lens used in the above photo is the Canon RF 50 / 1.4L VCM, although this particular VCM lens is not much different, if different at all, from an optically corrected 50 / 1.4 with regards to distortion and vignetting, although the wider VCM lenses "require" significantly greater corrections).

    This is all part and parcel of the "all else equal" consideration that Equivalence makes use of. Usually, this clause is used in noise equivalence, where the implicit assumption is that the cameras being compared have sensors with more or less the same QE (Quantum Efficiency -- the proportion of light projected on the sensor that gets recorded) and electronic noise (the additional noise generated by the sensor and supporting hardware). With regards to QE, this assumption is more or less valid for sensors of the past decade, regardless of brand or pixel count. With regards to electronic noise, however, there is still some significant variation, although it is still quite low, so it does not become an issue except when heavily pushing shadows (4+ stops) or when shooting in light so low that you'd be using, say, ISO 12800 FF equivalent or higher. This disparity is exacerbated in ultra-high-frame-rate cameras, as the high readout speed does increase electronic noise. In addition to noise/DR, there is also the matter of resolution, which, obviously, depends strongly on the pixel count of the sensor, where there is a lot of variation.

    So, for sure, everything in Equivalence is an approximation -- no two systems ever have "all else equal" -- but, aside from extremes, it's so close in practice for systems of more or less the same generation that it's a very good approximation, like Newtonian Gravity vs General Relativity. You need General Relativity to understand gravity near a neutron star or black hole, but the much simpler Newtonian Gravity works fine even for something as large as the Sun, aside from a few fringe cases (like the precession of Mercury's orbit).

    Basically, Equivalence comes down to something like this: an mFT shooter wants a 35-100 / 2. Their options are:

    1) Adapt and use the huge, heavy, and expensive Four Thirds 35-100 / 2.
    2) Use a 0.7x focal reducer ("speed booster") on the ("ancient" and, if you can find one, inexpensive) Sigma 50-150 / 2.8.
    3) Buy a FF camera and use a 70-200 / 4 on it.
    4) Meh -- the 35-100 / 2.8 is easily "good enough". (1) - (3) are absolutely not worth it for just one extra stop!

    Obviously (I hope), Option 4 is the clear solution. But now add more to what you want -- more resolution, more DR, more light-gathering power and/or more DOF options on the shallow end with other lenses, too. Then Option 3 is likely the best path, money permitting and size/weight not being too much of a burden.
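
    The speed-booster arithmetic in Option 2 can be checked in a couple of lines (a sketch; the function name is mine):

```python
# Illustrative: a 0.7x focal reducer shortens the focal length but leaves
# the entrance pupil alone, so the f-number shrinks by the same factor.

def with_reducer(focal_mm, f_number, factor=0.7):
    return focal_mm * factor, f_number * factor

short_end = with_reducer(50, 2.8)    # ~35mm f/1.96
long_end = with_reducer(150, 2.8)    # ~105mm f/1.96
# The 50-150/2.8 becomes, near enough, the desired 35-100/2.
```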

    Honestly, this is what Equivalence is all about. All the resistance to Equivalence, in my opinion, comes from not understanding what Equivalence actually says (sometimes actively and willfully -- the source of all the "entertainment" when it's "discussed" -- usually from people who are fixated on exposure, and even then completely misunderstand what exposure actually is) or not caring about it, since they are not comparing different formats, and Equivalence does nothing to help them take a good (or better) photo using the equipment in hand (although understanding the principles of Equivalence may help to that end regarding certain technical elements, such as noise, resolution, motion blur, etc.).

  • Members 325 posts
    Aug. 12, 2025, 11 a.m.

    See photo below.

    In the quote above, and as a scientist, how many of the statements you make are supported by proven observation and how many are assumption? How many assumptions are you making that support the premise that measured differences on the set parameters you state actually translate to constant visual parameters on the image through human perception?

    What I say below is true:

    It is actually quite frightening just how much our vision is modified by confirmation bias. It could be true to say that not being able to see past the end of one's nose is more of a constant in human perception than any of the parameters you list as visual properties of the image. We do not see as the camera does, by a country mile. This means that the image the camera produces is always inconsistent with our experience of how we see the real world, and therefore when we view an image we always modify/adapt/translate, and mostly in a way that supports our confirmation bias. This process is not consistent across individuals, and so images cease to be visually consistent when we add human perception.

    If you use a device to make a measurement you must consider the nature of that device and if it adds an error during the observation. Human perception clearly does and you must examine this in statements like "Scientific visual properties".

    If we look at this in a way you can relate to the maths... Take AOV, which you hold as a constant through your understanding of how a camera produces equivalent images and the maths behind it. AOV is not a property of an image, only of the camera that took the photo. It doesn't transfer to a 2D image. When we view an image with human perception we simply guess, and from that guess comes the further assumption regarding the distance between objects, and so you can see that even DOF is an assumption we make based on incorrect assumptions of perspective.

    If we flick between two similar photos and see the different DOF, it is not because we see them as absolute but because we see them relative to each other (so it doesn't prove your point). Then apply that to a real-world situation: as you say, equivalent photos don't need to be the same, they just need equivalent settings, so suppose we have four landscapes, two of which have equivalent settings.

    I get that we classify the properties of images based on how the camera forms them, I get that we can measure the differences between how cameras record images. What you don't seem to get is just how much you are using that framework of how a camera forms an image to underpin your understanding of how we see those images. We do not see the world the same way as a camera does and as a consequence we can never relate a photograph to an absolute memory. And so we always have to assume/guess, and we ALWAYS get this wrong. We never see any photograph correctly.

    Yet you still say that, for images to be equivalent so we can compare camera capabilities, we must level the field, and then you just transfer that understanding of how the image is formed to directly describe the visual properties of that image. Don't you see the problem?

    Make that abstract leap... The controls, and the relationships between them, that you use in equivalence to define image properties do not remain constant visual properties in images through human perception, even mathematically. By not understanding the nature of the instrument you use to make the observations, you fail to allow that AOV/DOF/motion blur etc. become variable as perceptual properties. Take our four landscapes from earlier, viewed one after the other for a minimum of one minute each with a gap of approximately 30 seconds between each (which is actually a very fair way to counter some of the perceptual quirks of human perception and allow a more impartial view), and I bet most people will not be able to tell which two are equivalent without looking at the numbers.

    Or to put it bluntly: equivalence is not really visible in finished images, or real-world visual equivalence is not dependent on camera settings. Equivalence is staggeringly incomplete, but even in this conversation it's impossible to move beyond it; we still get pulled back to the same basic conversation.

    upsidedown.jpg

    JPG, 275.1 KB, uploaded by Andrew546 on Aug. 12, 2025.

  • Members 2527 posts
    Aug. 12, 2025, 11:01 a.m.

    It's called "Equivalence".