• Members 983 posts
    Aug. 11, 2025, 8:09 p.m.

    Good link assuming that when it says "aperture" it means aperture size, not f/number.

    It says: Aperture is a measure of light transmission. The exposure of an f1.4 lens on medium format, full frame, M43 or a phone is exactly the same! Equivalence was created for the internet and by influencers that want to ... sound smarter than you.

  • Members 781 posts
    Aug. 11, 2025, 8:24 p.m.

    Let's put that to the test. Using the definition of Equivalence: Equivalent photos are photos of the same scene that are taken from the same position with the same focal point and [diagonal] framing using the same aperture diameter, same exposure time, processed in the same manner, displayed at the same size, and viewed from the same distance on the same medium.

    Now, given that I have two of these photos from two different systems, are you saying that the "scientific visual properties" of perspective, DOF, and motion blur will differ from those same properties when "viewed with human perception"? That is, would someone looking at the two photos taken in the described manner say they are different based on those properties? I mean, some properties may differ, such as flare, distortion, resolution, noise, etc., of course, but if we put additional constraints on the equipment used, these, too, can be accounted for.

    What Equivalence is absolutely not saying is that different photographers (one with, say, a smartphone, another with an EM5 + 12-100/4, and another with an R5 + 50 / 1.4) will take the same photo. Hell, even if all three photographers had the exact same equipment no one is saying they would, or even should, all take the same photo!

    What Equivalence does say, however, is that if, for example, 50mm f/5.6 1/400 ISO 1600 represents the "best" photo on FF, then 25mm f/2.8 1/400 ISO 400 will represent the "best" photo on mFT. Equivalence also says that if 50mm f/1.4 1/400 ISO 1600 represents the "best" photo on FF, then mFT cannot take an equivalent photo and thus must take a different photo. In addition, if 25mm f/1.4 1/400 ISO 1600 on mFT represents the "best" photo, then 50mm f/2.8 1/400 ISO 6400 on FF represents an equivalent photo, but FF might be able to take a "better" photo using settings that mFT has no equivalent for.
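
    To make that mapping concrete, here is a minimal sketch of the arithmetic (my own hypothetical helper, not anything from the Equivalence Essay): equivalent settings keep the same shutter speed, scale focal length and f-number by the ratio of crop factors, and scale ISO by that ratio squared.

    ```python
    # Hypothetical helper illustrating the settings mapping described above.
    # Equivalent settings: same shutter speed, focal length and f-number scaled
    # by the ratio of crop factors, ISO scaled by that ratio squared (same
    # total light on the sensor, same output lightness).

    def equivalent_settings(focal_mm, f_number, iso, crop_from, crop_to):
        """Map settings on one format to equivalent settings on another.

        Crop factors are relative to full frame (FF = 1.0, mFT = 2.0).
        """
        r = crop_from / crop_to          # e.g. FF -> mFT: 1.0 / 2.0 = 0.5
        return {
            "focal_mm": focal_mm * r,    # same diagonal angle of view
            "f_number": f_number * r,    # same aperture (entrance pupil) diameter
            "iso": iso * r ** 2,         # same lightness from the same total light
        }

    # FF 50mm f/5.6 ISO 1600 -> mFT equivalent:
    print(equivalent_settings(50, 5.6, 1600, crop_from=1.0, crop_to=2.0))
    # {'focal_mm': 25.0, 'f_number': 2.8, 'iso': 400.0}
    ```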

    As a side, can mFT, for example, take a "better" photo than FF? Before answering, let me first note that this question is altogether a different question than "Can mFT be the better choice than FF?" -- they are two very, very, very different questions. That said, the answer to the first question is "No" -- assuming relevant factors with regards to the photos taken are more or less equal (but that is not to say that any differences would necessarily have any effect on the "success" of the photo). The answer to the second question is not only "Yes!", but, in my opinion, usually true for the vast majority of people in the vast majority of situations.

    That's it. That's the whole of it. And, as I noted, it presumes relevant factors are more or less equal (e.g., if one system has image stabilization and the other doesn't, then that throws a rather large wrench in many situations, and, yes, Equivalence does discuss these things -- that's one reason why the Equivalence Essay is so freakin' long!). But, again, Equivalence absolutely does not say that you would, or should, take Equivalent photos using different formats.

    So, apologies for ignoring the rest of the post, but until we have a clear and unambiguous response to the above, it's meaningless to go further.

  • Members 781 posts
    Aug. 11, 2025, 8:26 p.m.

    I'm afraid I have to disagree, even if that were true. For example, the following:

    But also because a crop sensor lens of the same field of view is wider in focal length -- for example, my 25mm vs a 50mm on full frame. I get more depth of field. No, still not aperture equivalence. It's just that on the m43 I'm shooting at 25mm and get the DOF a 25mm affords. When I don't want background blur, the M43 camera actually improves my hit rate and makes focusing easier.

    is absolute bunk.

  • Members 983 posts
    Aug. 11, 2025, 8:36 p.m.

    I missed that - you're right.

  • Members 326 posts
    Aug. 12, 2025, 12:05 a.m.

    That is one hell of a weighted question. "Scientific visual properties"?? And defining that the same photos are different??

    But YES!!!

    You make a statement above that is a long way from fact:

    They don't -- see my post above. This is easily provable. Plus, let's look at another example. Take two photos and show them on a screen one after another, the difference being one f-stop. Flick between them and you will clearly see the difference. Proves your point? No, I say it misunderstands the nature of human vision. So we try again. Put one photo on the screen and, once viewed, send the viewer off to get a cup of coffee, and whilst they're gone swap the photo. Now ask if they are different. You may say that's not a fair test, but I still say that you misunderstand the nature of human vision. And the fact remains that if you perform the test in different ways you get different answers. You can't just pick the result that agrees with your theory and ignore the other.

    As for perspective, it's assumed in the real world and certainly not an absolute quantity in an image. You are so wrong about the nature of perspective when it comes to human vision; it's nothing like that of the camera -- see my post above. It is very possible to take two photos of the same scene that aren't equivalent by a long chalk and still have people seeing them as the same when they are viewed in isolation. Try it with a car: same size in the photos, but use different focal lengths, apertures, and shutter speeds, and view them in isolation (the coffee trip in between again), and I bet people don't notice the perspective difference. They'll notice if it's a different car though.

    Here:

    You start with the assumption that what you hold as measurable constants actually remain as visual constants, or at least become visual differences if they are changed. But this isn't necessarily true. For instance, viewing distance isn't really important, as the initial assumptions you make when you first view remain remarkably consistent at nearly all viewing distances. So two people can certainly look at the same photo and form different opinions about what it actually shows. Forget the very narrow parameters that you continue to try and restrict the conversation to:

    Or trying to restrict it to one person viewing the images, it still remains that human perception varies. If you use the same person glancing at the same photo a year apart I bet their description of what they see will be different when you compare the records...

    Honestly GB, you really need to review your assumptions about the nature of human vision. The conclusions you make seem to be based upon your understanding of the actual, real differences that can be measured; the idea that they remain constant through human perception is far enough off the mark for me to say that your treating "measured" and "visual" as the same is an assumption not supported by evidence, and one you need to re-visit.

    That's a long way from being:

  • Members 781 posts
    Aug. 12, 2025, 12:28 a.m.

    Equivalence, by definition, is weighted. It concerns itself with perspective, framing, DOF, and motion blur (but can be extended to include resolution, noise, DR, etc. with additional assumptions about the equipment).

    No idea what you are saying here. Are you calling perspective, framing, DOF, and motion blur "scientific visual properties"? If so, then sure. If by "defining the same photos are different" you mean that two photos of the same scene can be different, even though the aforementioned "scientific visual properties" listed are the same, then obviously -- color, distortion, flare, etc., etc., etc. If you mean anything other than what I spelled out here, then I have no idea what you mean.

    No, that proves my point.

    You're talking about memory here, which is a totally different discussion. If I flick back and forth between two photos and you see a difference, the difference is real. If I show you one photo, you leave the room, come back, show you another photo, and you don't notice a difference, that's not because the photos are not different, it's because you don't remember all the information that was in the photo.

    Let's take a more interesting example. If I have someone look at two photos, one on a 4K monitor and the other on an 8K monitor, and they can't see the difference, it's not that the photos are not different, it's that their visual acuity is unable to discern the difference and/or the difference is so inconsequential that they pay no attention to it.

    No one said, or implied, that perspective is an absolute quantity in a photo. What is being said is that if two photos are taken from the same position, displayed at the same size, and viewed in the same manner, the perspectives will be the same. This is relative, not absolute.

    "Don't notice" is not the same as "isn't there". If I show two photos of the same scene, but change the face of someone in the background and people don't notice, that does not mean the photos are the same (or equivalent). It simply means they didn't notice the difference.

    The "they didn't notice the difference" combined with "they noticed the difference but didn't care" is why I say that pretty much any camera cuts the mustard IQ-wise today, to include smartphones. Absolutely there are situations where the equipment makes a huge difference, but for the vast majority of people in the vast majority of circumstances, the only differences that matter are differences in operation, not differences in IQ and/or DOF options.

    Let me give a nice example. The following photo was taken at 50mm f/1.4 on a 45 MP FF R5 (larger size here):

    pbase.com/joemama/image/175590279/original.jpg

    If it had been taken at 25mm f/5.6 on a 20 MP EM5, how many people do you think would care about the differences? Not many, I would think. Some of those that cared about the differences might well have preferred the mFT photo due to the greater DOF. Obviously, however, if I wanted the deeper DOF, I could have just stopped down (and, for this photo, still remained at base ISO -- I likely did take a stopped down photo as well, but preferred this one).

    So, I care about the differences. But I see myself as being in a small minority. What if I had taken it with my smartphone? I think more would notice the differences, but still not care, or, who knows, maybe some would have even liked the smartphone photo the best of the lot. Most, however, would probably only express a preference one way or another if pressed, but that's about it. For sure, I can post photos where the differences probably would make a difference to more people, but, again, I'm talking about the vast majority of photos people take, not the minority.

  • Members 2530 posts
    Aug. 12, 2025, 12:53 a.m.

    the world is in the state it is because governments are taxing the wealthy countries and passing it on to the 3rd world countries via global warming claims without the general population having any clue as to what's going on.

  • Members 983 posts
    Aug. 12, 2025, 2:03 a.m.

    Good to see an actual image posted in this verbacious thread along with some actual camera parameters. Slightly puzzled - would the EM5 equivalent f/number theoretically be f/0.7 (same aperture diameter, as you said earlier) ?

  • Members 781 posts
    Aug. 12, 2025, 4:36 a.m.

    You'll have to be more specific (and cite credible sources) to support your claim. Prove to us all that you know better than ChatGPT:

    chatgpt.com/share/689ac48c-8dac-8007-bd82-ee4952e63bfa

    😁

  • Members 781 posts
    Aug. 12, 2025, 5:19 a.m.

    Yes, the mFT equivalent for f/1.4 on FF is f/0.7. It's a bit nuanced for such wide apertures, however, as the f-number is an approximation for the numerical aperture, which is what really matters. That said, I think it's "close enough" even for f-numbers this low. More importantly for extremely fast lenses, lens aberrations get exponentially worse the wider the aperture, which makes correcting for them rather problematic -- the lenses get significantly larger, heavier, and more expensive (consider the size, weight, and cost difference of, say, a 50 / 1.4 and 50 / 1.2, and that's only a 1/3 stop difference!).
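
    For the curious, here's a toy illustration (my own sketch, not from any optics reference in this thread) of how the paraxial shorthand NA ≈ 1/(2N) drifts away from the geometric marginal-ray angle as the f-number drops:

    ```python
    # Toy comparison of the paraxial approximation NA ~ 1/(2N) with the
    # angle you get from simple thin-lens geometry (a sketch; real lenses
    # differ, but it shows why f/0.7 is where the approximation strains).
    import math

    def na_paraxial(n):
        return 1 / (2 * n)

    def na_geometric(n):
        # half-angle of the light cone: tan(theta) = 1 / (2N)
        return math.sin(math.atan(1 / (2 * n)))

    for n in (5.6, 2.8, 1.4, 0.7):
        print(f"f/{n}: paraxial NA = {na_paraxial(n):.3f}, "
              f"geometric NA = {na_geometric(n):.3f}")
    # Agreement is within ~0.5% at f/5.6, but the two differ by
    # roughly 20% at f/0.7.
    ```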

    Interestingly, Canon is using software corrections in their VCM line to keep the size and weight of the lenses down while still maintaining high sharpness (the lens used in the above photo is the Canon RF 50 / 1.4L VCM, although this particular VCM lens is not much different, if different at all, from an optically corrected 50 / 1.4 with regards to distortion and vignetting; the wider VCM lenses, however, "require" significantly greater corrections).

    This is all part and parcel of the "all else equal" consideration that Equivalence makes use of. Usually, this clause is used in noise equivalence, where the implicit assumption is that the cameras being compared have sensors with more or less the same QE (Quantum Efficiency -- the proportion of light projected on the sensor that gets recorded) and electronic noise (the additional noise generated by the sensor and supporting hardware). With regards to QE, this assumption is more or less valid for sensors of the past decade, regardless of brand or pixel count. With regards to electronic noise, however, there is still some significant variation, although it is still quite low, so it does not become an issue except when heavily pushing shadows (4+ stops) or when shooting in light so low that you'd be using, say, ISO 12800 FF equivalent or higher. This disparity is exacerbated with ultra high frame rate cameras, as the high readout speed does increase electronic noise. In addition to noise/DR, there is also the matter of resolution, which, obviously, depends strongly on the pixel count of the sensor, where there is a lot of variation.
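
    A toy per-pixel noise model (my own sketch, with made-up but plausible numbers) shows why the QE and electronic-noise assumptions matter mostly in deep shadows and very low light:

    ```python
    # Toy SNR model: photon shot noise plus electronic (read) noise.
    # QE is the fraction of incident photons recorded; read noise is in
    # electrons. The numbers are illustrative, not from any specific sensor.
    import math

    def snr(photons, qe=0.55, read_noise_e=2.0):
        signal = qe * photons                          # recorded photoelectrons
        noise = math.sqrt(signal + read_noise_e ** 2)  # Poisson: variance = mean
        return signal / noise

    # Read noise barely matters at high signal, dominates at very low signal:
    for p in (10_000, 100, 10):
        print(f"{p:>6} photons -> SNR {snr(p):.1f}")
    ```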

    So, for sure, everything in Equivalence is an approximation -- no two systems ever have "all else equal" -- but, aside from extremes, it's so close in practice for systems of more or less the same generation that it's a very good approximation, like Newtonian Gravity vs General Relativity. You need General Relativity to understand gravity near a neutron star or black hole, but the much simpler Newtonian Gravity works fine even for something as large as the Sun, aside from a few fringe cases (like the precession of Mercury's orbit).

    Basically, Equivalence comes down to something like this: an mFT shooter wants a 35-100 / 2. Their options are:

    1) Adapt and use the huge, heavy, and expensive Four Thirds 35-100 / 2.
    2) Use a 0.7x focal reducer ("speed booster") on the ("ancient" and, if you can find one, inexpensive) Sigma 50-150 / 2.8.
    3) Buy a FF camera and use a 70-200 / 4 on it.
    4) Meh -- the 35-100 / 2.8 is easily "good enough". (1) - (3) are absolutely not worth it for just one extra stop!

    Obviously (I hope), Option 4 is the clear solution. But now add more to what you want -- more resolution, more DR, more light gathering power and/or more DOF options on the shallow end with other lenses, too. Then Option 3 is likely the best path, money and size/weight permitting.
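
    For what it's worth, the FF-equivalent arithmetic behind those options is just crop-factor scaling (a sketch; the lenses are the ones named above, and a 0.7x focal reducer multiplies both focal length and f-number by 0.7 before the crop factor applies):

    ```python
    # FF-equivalent focal length and f-number for the options above.
    # A 0.7x focal reducer scales both focal length and f-number by 0.7
    # before the mFT crop factor (2.0) is applied.

    def ff_equivalent(focal_mm, f_number, crop, reducer=1.0):
        return focal_mm * reducer * crop, f_number * reducer * crop

    print(ff_equivalent(35, 2.0, 2.0))                # FT 35-100/2, wide end -> (70.0, 4.0)
    print(ff_equivalent(100, 2.0, 2.0))               # FT 35-100/2, long end -> (200.0, 4.0)
    print(ff_equivalent(50, 2.8, 2.0, reducer=0.7))   # 50-150/2.8 + 0.7x     -> (70.0, ~3.9)
    print(ff_equivalent(70, 4.0, 1.0))                # FF 70-200/4 as-is     -> (70.0, 4.0)
    ```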

    Honestly, this is what Equivalence is all about. All the resistance to Equivalence, in my opinion, comes from not understanding what Equivalence actually says (sometimes actively and willfully -- the source of all the "entertainment" when it's "discussed" -- usually from people who are fixated on exposure, and even then completely misunderstand what exposure actually is) or not caring about it, since they are not comparing different formats, and Equivalence does nothing to help them take a good (or better) photo using the equipment in hand (although understanding the principles of Equivalence may help to that end regarding certain technical elements, such as noise, resolution, motion blur, etc.).

  • Members 326 posts
    Aug. 12, 2025, 11 a.m.

    See photo below.

    In the quote above, and as a scientist, how many of the statements you make are supported by proven observation and how many are assumption? How many assumptions are you making that support the premise that measured differences on the set parameters you state actually translate to constant visual parameters on the image through human perception?

    What I say below is true:

    It is actually quite frightening just how much our vision is modified by confirmation bias. It could be true to say that not being able to see past the end of one's nose is more of a constant in human perception than any of the parameters you list as visual properties of the image. We do not see as the camera does, by a country mile. This means that the image the camera produces is always inconsistent with our experience of how we see the real world, and therefore when we view an image we always modify/adapt/translate, and mostly in a way that supports our confirmation bias. This process is not consistent across individuals, and so images cease to be visually consistent when we add human perception.

    If you use a device to make a measurement you must consider the nature of that device and if it adds an error during the observation. Human perception clearly does and you must examine this in statements like "Scientific visual properties".

    If we look at this in a way you can relate to, maths... Take AOV, which you hold as a constant through your understanding of how a camera produces equivalent images and the maths behind it. AOV is not a property of an image, only of the camera that took the photo. It doesn't transfer to a 2D image. When we view an image with human perception we simply guess, and from that guess comes the further assumption regarding the distance between objects, and so you can see that even DOF is an assumption we make based on incorrect assumptions of perspective.

    If we flick between two similar photos and see the different DOF, it is not because we see them as absolute but because we see them relative to each other (it doesn't prove your point). Then apply that to a real world situation: as you say, equivalent photos don't need to be the same, they just need equivalent settings, so we have 4 landscapes, two of which have equivalent settings.

    I get that we classify the properties of images based on how the camera forms them, I get that we can measure the differences between how cameras record images. What you don't seem to get is just how much you are using that framework of how a camera forms an image to underpin your understanding of how we see those images. We do not see the world the same way as a camera does and as a consequence we can never relate a photograph to an absolute memory. And so we always have to assume/guess, and we ALWAYS get this wrong. We never see any photograph correctly.

    Yet you still say that for images to be equivalent, so we can compare camera capabilities, we must level the field, and then you just transfer that understanding of how the image is formed to directly describe the visual properties of that image. Don't you see the problem?

    Make that abstract leap... The controls, and the relationships between them, that you use in equivalence to define image properties do not remain constant visual properties in images through human perception, even mathematically. By not understanding the nature of the instrument you use to make the observations, you fail to allow that AOV/DOF/motion blur etc. become variable as perceptual properties. With our four landscapes earlier, viewed one after the other for a minimum of 1 minute each with a gap of approximately 30 seconds between each (which is actually a very fair way to counter some of the perceptual quirks of human perception and allow a more impartial view), I bet most people will not be able to tell without looking at the numbers.

    Or to put it bluntly, equivalence is not really visible in finished images, or real world visual equivalence is not dependent on camera settings. Equivalence is staggeringly incomplete, but even in this conversation it's impossible to move beyond it; we still get pulled back to the same basic conversation.

    upsidedown.jpg

    JPG, 275.1 KB, uploaded by Andrew546 on Aug. 12, 2025.

  • Members 2530 posts
    Aug. 12, 2025, 11:01 a.m.

    it's called "Equivalence"

  • Members 562 posts
    Aug. 12, 2025, 3:26 p.m.

    Why mention TCs? The focal length and f-number ranges created by the TC factor are real and absolute; they are not an equivalence. If you put a 2x TC behind a 70-200/2.8 lens, the combination, which could be rendered as a single unit with some duct tape, superglue or welding, is actually 140-400/5.6. You could call it a 140-400/5.6 that breaks down into two pieces to make it shorter.

    Certainly, it is important to know that a TC does not reduce pupil size, which would be useful in a discussion of etendue, but "Equivalence" is about translations that are "like" something else on another system.
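
    A quick sketch of the distinction (my own illustration): the TC rescales focal length and f-number -- real, absolute changes to the combined system -- while the entrance pupil diameter, focal length divided by f-number, stays put:

    ```python
    # A 2x TC doubles focal length and f-number; the entrance pupil
    # (focal length / f-number) is unchanged, which is why the light
    # gathered from the framed scene doesn't increase.
    def with_tc(focal_mm, f_number, tc=2.0):
        return focal_mm * tc, f_number * tc

    for f in (70, 200):
        f2, n2 = with_tc(f, 2.8)
        print(f"{f}/2.8 + 2x TC -> {f2:.0f}/{n2:.1f}, "
              f"pupil {f / 2.8:.1f}mm -> {f2 / n2:.1f}mm")
    # 70/2.8 + 2x TC -> 140/5.6, pupil 25.0mm -> 25.0mm
    # 200/2.8 + 2x TC -> 400/5.6, pupil 71.4mm -> 71.4mm
    ```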

  • Members 983 posts
    Aug. 13, 2025, 8:45 p.m.

    So, with a crop factor of 2, say m4/3 to FF, 70-200/2.8 is not equivalent to 140-400/5.6 ?!!

  • Members 983 posts
    Aug. 13, 2025, 10:17 p.m.

    Good point because NA is directly related to Angle of View which is directly related to Equivalence ...

  • Members 781 posts
    Aug. 14, 2025, 4:03 a.m.

    I guess we'd have to take that statement by statement, as I've made so many. That said, none of my statements have been sent to a scientific journal and undergone peer review, if that's what you are asking.

    As for the rest of your post, let me ask your opinion about an experiment that could be done, could be sent to a scientific journal, and could be peer reviewed:

    1) Take a photo of a scene that has motion (such as the same car driving by at the same speed for each photo) from the same position with, say, an OM1.2 at 25mm f/2.8 1/400 ISO 1600 and a Z5.2 at 50mm f/5.6 1/400 ISO 6400 from a distance of at least 5m (to reduce any differences in [diagonal] framing as a result of possible differences in focus breathing), crop both to the same framing (due to differences in the aspect ratio), process both photos so that differences in color are minimized, display both at the same size on the same media, and have them viewed under the same viewing conditions.

    2) Have 100 people view the photos with, say, 10 trials side-by-side, with one photo on the left for 5 of the trials and on the right for the other 5, and then another 10 trials one after the other, again, half with one photo shown first, the other half with the other shown first.

    3) Instruct all participants on what differences in perspective, framing, DOF, motion blur, and noise are, and provide examples, from minor to severe, of each.

    4) After viewing each pairing of photos, ask the participants to note any differences in perspective, framing, DOF, motion blur, and noise, if any, and rate the differences as none, negligible, minor, moderate, or strong. Then ask each to say whether they prefer Photo 1 or Photo 2 on a scale of 1 to 5 (where 1 represents "just a tiny bit better" and 5 represents "a lot better") or say 0 ("no preference").

    5) Compute the average score and standard deviation of scores in each of the 5 categories, and compute the average and standard deviation of the preference scores for each of the two photos.
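
    A minimal sketch of the computation in step 5 (placeholder data only -- again, no such study has actually been done):

    ```python
    # Step 5 sketch: mean and standard deviation per difference category,
    # plus the same for the preference scores. All data below is placeholder.
    from statistics import mean, stdev

    categories = ["perspective", "framing", "DOF", "motion blur", "noise"]
    ratings = {c: [0, 1, 0, 0, 2, 0] for c in categories}  # 0=none .. 4=strong
    preferences = [0, 1, -1, 0, 0, 2]  # signed 1-5 toward Photo 1/2, 0 = none

    for c in categories:
        print(f"{c}: mean {mean(ratings[c]):.2f}, sd {stdev(ratings[c]):.2f}")
    print(f"preference: mean {mean(preferences):.2f}, "
          f"sd {stdev(preferences):.2f}")
    ```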

    First of all, I think you can see why none of my claims have been "supported by proven observation", as it is clear that such an endeavor, as outlined above, is not something one would do to make a point in an internet debate.

    Secondly, what is your opinion on what the results of such a test would be? I'm thinking it would be in line with exactly what I say it is. By the way, I don't understand the purpose of your upside down photo. Again, Equivalent photos would be viewed on the same media under the same viewing conditions. So, if we were showing one photo right side up and the other photo upside down, in an effort to distort a viewer's perception of any particular property, it would be a violation of not only the conditions of Equivalence, but the entire purpose of Equivalence.

  • Members 781 posts
    Aug. 14, 2025, 4:09 a.m.

    Everything you say is absolutely correct. However, just as some make the claim that "f/2 = f/2 = f/2" (e.g. a 25 / 2 on mFT, 33 / 2 on APS-C, and 50 / 2 on FF are all "equivalent"), some may say that, for example, a TC mounted behind an f/2.8 lens doesn't change the f-number of the lens. Technically true, of course -- the f-number of the lens does not change -- but the f-number of the system changes. Likewise, it is true that a 25 / 2 on mFT and 50 / 2 on FF are both f/2 lenses, but the effect of f/2, in terms of the visual properties of the resulting photo, is, of course, different.

    TLDR: I just put it in to cover all possible bases that I'm aware of at the moment. : )

  • Members 781 posts
    Aug. 14, 2025, 4:10 a.m.

    But is it the best way? : )

  • Members 326 posts
    Aug. 14, 2025, 8:21 a.m.

    Snipped for space; I'll answer the complete post as it's a good question, and it is the point I'm making: if you don't consider the nature of the optical instrument and the error it produces, then your observations and conclusions will be flawed.

    My contention is that you measure the differences in an image by your understanding of how cameras form images -- e.g., equivalent settings for "exactly the same photo" will produce exactly the same measured blur. I don't dispute this. I just say it isn't the whole story. What you then do is simply assume that the measured differences in the images are transferred as visual differences to the image.

    The example you offer wouldn't make it into a peer-reviewed journal; it would be shredded, because it is a highly weighted test that most would say is designed to reinforce a pre-formed conclusion, since it ignores the nature of human vision.

    You carefully set the condition for the test that the two photos should be identical, the same scene with no colour difference, yet in the theory you apply it to all equivalent photos. So alongside this test we do mine: four different (but similar) landscapes, two with "equivalent" settings, two without.

    Back to your test. Yes, the results will be as you say, BUT... This is only because the human visual system is very good at spotting relative differences, or the lack of them, especially if you flick between them on your computer screen (and you so carefully set up those parameters in the example). This comparison in no way, shape, or form proves that the human visual system actually sees either photo correctly, or sees the parameters as you measure them by how the camera forms the image. In fact it is guaranteed that we don't.

    If equivalent images don't need to be the same then perform the same test with my four similar landscapes, real world photos. I bet the vast majority can't tell which two are the same settings without looking at the exif.

    AOV doesn't transfer to a 2D image. If you take two similar landscapes and display them on screen at the same size... See where I'm going here? Though we know they are photos, and we are familiar with the effects wide angle lenses have on the perception of distance, and allow for it through experience and memory, we still get it wrong. So how does looking at a photo where distances appear to be stretched affect your perception of the actual measured DOF, as you measure it in your equivalence theory? The possibility exists that two photos with different settings can not only look equivalent, but the photo with shallower DOF can actually appear to have the greater. (This is important, because if you understand this and start choosing your focus points in line with our perception of what we expect to be sharper, rather than using DOF as a pure mathematical exercise, you may be surprised at the results.) And that's before we even think about different subjects, such as equivalent photos of high acutance subjects -- reflections on gently rippling water against a field of wheat. But then, normally, the argument falls back along the lines that equivalence still works if we use two equivalent photos of wheat fields, or reflections. Then you say that equivalence doesn't define how you should take a photo, except that you are applying the condition of equivalent subjects in the proof, and thus cancel perceptual effects.

    If you look at all these equivalence threads you may notice that the only examples you ever seem to use are "exactly the same photo". If I mention that I don't see the point of a theory in a creative medium that reduces the camera to a copy machine and effectively cancels out the photo, you correct me by saying that I obviously don't understand equivalence. I say that by not including an understanding of human perception you fail to understand that the metrics you hold to be constant in the camera don't transfer as constants in finished 2D images all viewed the same size on your computer screen.

    If you quote maths and science at us then you must abide by the same. If you set the conditions for your test and example images then you must also apply those conditions to the results. So equivalence works fine with exactly the same photo. If you are going to apply it to real world photography, let's see real world equivalent photos in the proofs. See what happens... Are the differences that equivalence defines really that visible in real world images, and do they really play that important a role in defining the visual output of different systems in real photography?

    The point of the upside down photo is that it's visual proof that the human brain actively modifies the information the eye records, in line with your memory and experience of how you think things should look. Using a human face is a weighted example, and I deliberately do so because it is so difficult to see through even when you know. The fact remains that we do similar with all photos, especially when we glance. As I said earlier, it's frightening just how much confirmation bias affects what you see, and yet we still assume our vision is absolute.

    Sorry about all the edits, final thoughts to chew on...

  • Members 2530 posts
    Aug. 14, 2025, 9:50 a.m.

    Australia and Vanuatu have struck a funding deal worth $500 million, which will see Australia send aid to the Pacific island nation for climate resilience and security support.
    "climate resilience and security support" sounds as clear as mud (equivalence) 🤔😊

  • Members 562 posts
    Aug. 14, 2025, 2:26 p.m.

    What does that have to do with TCs?