• Members 86 posts
    Jan. 31, 2024, 1:35 p.m.

    I have a problem with the “Ansel Adams Fallacy” and “Telephoto Compression” being a function of the distance from which you view the 2D image. There are discrepancies between prediction and observation, as well as problems with trying to make the maths cover what are really human perceptual effects.

    There are aspects of this subject I don’t mention, but that doesn’t mean that, for example, the angle of view of the image doesn’t affect our perception of perspective; it just means that I’m not scrabbling about for a mathematical answer to perceptual issues.

    What I’m trying to do is keep it simple and so highlight the limit of the maths and, in turn, the nature of perception. And before someone quotes the title and uses it as absolute verified proof that it’s wrong, I may be joking here. Besides, the I-said/he-said method is not really scientific proof…

    Ok, so we stand at a point in the landscape and take two photos, one with a 24mm lens and one with a 200mm lens.

    If we look entirely at the maths we find that the geometry of image formation predicts that the perspective in both should be identical. And sure enough, if we scale one and do a pixel overlay of one on top of the other, we see that they are indeed identical (though the telephoto shot doesn’t contain any part of the image we say has wide angle distortion).

    Compression/elongation of objects is entirely a function of camera position/distance.
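
    To put a few made-up numbers on that, here’s a minimal pinhole-projection sketch in Python (the scene figures are pure assumptions). The focal length only scales every projected point by the same factor, so the relative geometry, i.e. the perspective, is identical for the 24mm and 200mm shots:

        def image_height(f_mm, h_m, d_m):
            """Projected height on the sensor, in mm, for an ideal pinhole camera."""
            return f_mm * h_m / d_m

        # (height in m, distance from camera in m) -- hypothetical scene
        scene = {"near fence post": (1.5, 20.0),
                 "distant barn":    (6.0, 400.0)}

        for f in (24, 200):
            sizes = {name: image_height(f, h, d) for name, (h, d) in scene.items()}
            ratio = sizes["distant barn"] / sizes["near fence post"]
            print(f"{f}mm lens: {sizes}  barn/post ratio = {ratio:.3f}")

    The barn/post ratio comes out the same for both focal lengths, which is exactly what the pixel overlay shows.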

    But if we include the human visual system by looking at the two images side by side, both at the same dimensions, we find that they do not look the same at all.

    Why does what we see differ from the maths? We need to look at two things here, the maths, and because we have introduced the human visual system we must at least consider any possible effect.

    So we may say, “hang on a minute. The maths says that the image was formed by ray tracing from the actual objects in the scene through the centre of the lens on to the 2D sensor. So you need to define a point in 3D space from which to view the image in order for the maths to be reversed!”

    So we go from the formation of the image here:
    screenshot-2023-.png

    To this:
    screenshot-2023--1.png

    There is an element of human perception that renders this very difficult to show by visual example. So you’ll have to take it on faith that if you tested this observation in a controlled manner that you would find it to be true. There is a theoretical point at which we can view an image (from the lens in the eye) and see the perspective as natural, or closest to what we saw from the camera position. Further to this there is a direct mathematical relationship between this point and the centre point of the camera lens when taking the photo.
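
    For anyone who wants that relationship spelled out, here’s a rough sketch (the full-frame sensor and print size are assumptions, purely for illustration): the centre of perspective sits at the focal length multiplied by the enlargement from sensor to print.

        def centre_of_perspective(f_mm, sensor_width_mm, print_width_mm):
            """Viewing distance (mm) at which the print subtends the same
            angles at the eye as the scene did at the lens."""
            enlargement = print_width_mm / sensor_width_mm
            return f_mm * enlargement

        sensor_w = 36.0      # full-frame sensor width, mm (assumption)
        print_w = 360.0      # a 36 cm wide print (assumption)

        for f in (24, 200):
            d = centre_of_perspective(f, sensor_w, print_w)
            print(f"{f}mm shot printed {print_w/10:.0f} cm wide: view from ~{d/10:.0f} cm")

    Which is also why, at normal viewing distances, a telephoto print is almost always viewed from well in front of its centre of perspective.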

    Now it can be very tempting to relate this entirely to the maths of image formation because it gives the neatly symmetrical and logical order people desire. But let’s see why this doesn’t work.

    So let's look at the actual, mathematically correct representation of mapping a 2D image with a lens, one without the assumptions of perception:
    screenshot-2023-3.jpg

    In our modified view we see that the mathematical reality of mapping a 2D image to a 2D image, whether via camera, retina, or photocopier, is the same: the 2D image gets mapped exactly as it is.

    There is no projection of a 3D understanding behind it predicted by the maths; that is entirely the desire of the human mind to form that understanding, or to relate it to our understanding of the real world.

    So the absolute, mathematically correct perspective predicted by the geometry of image formation, and contained within our image of that distant barn, is foreshortened. The maths therefore predicts that we see “telephoto compression” when we view an image from the centre of perspective, but we don’t.

    And it’s worth seeing the simple point here:

    When we view an image from the centre of perspective we don’t see the perspective as it’s contained in the image.

    “Whoa!” I hear you all gasp (not), “but you said that if we viewed an image from the centre of perspective we see the same perspective as we would if we stood on the same spot as the camera.”

    Yes.

    “So we don’t see the mathematically correct perspective of the real 3D scene either?”

    No, you don’t.

    Human vision is empirical, which means we learn by trial and error, memory and experience. Our binocular vision and ability to move within a space mean we form a very accurate understanding of our immediate surroundings, but with one important subtraction.

    Mathematical perspective dictates that the perception of shape changes with position: close objects elongate, distant ones foreshorten. You can prove this by taking a photo. But we don’t see it as we move through a room (unless you’re wearing glasses for the first time, perhaps…).
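
    A quick numerical check of that foreshortening (figures invented): for a box of depth L with its near face at distance d, the far face renders smaller than the near face by the factor d / (d + L), which tends to 1 as the box recedes, i.e. distant objects are drawn increasingly flat.

        def face_ratio(d, depth):
            """Far-face size divided by near-face size in the projected image."""
            return d / (d + depth)

        depth = 10.0                 # say, a 10 m deep barn (assumption)
        for d in (5, 20, 100, 400):
            print(f"near face at {d:3.0f} m: far/near size ratio = {face_ratio(d, depth):.3f}")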

    So let’s say it would be confusing to see absolute perspective, where objects change shape and the distances between them increase/decrease as we move about. So to make the world less confusing the brain has learnt to subtract that effect and present us with a view where objects maintain a consistent shape and spatial understanding.

    In short: if we have learnt to subtract the foreshortening of distant objects, how would this affect how we perceive perspective?

    Well it would mean that in the real world our understanding of relative shape and distance would remain consistent as we approached objects.

    But what would this mean when we view an image? In the real world one building only looks half the size of an identical one when it stands twice as far away; as you move closer that relationship changes. In an image those apparent relationships are fixed: one building is still half the size of the other even when you stand in front of the centre of perspective.
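
    A rough sketch of that point (the distances are invented): in the real scene the angular-size ratio of two identical buildings changes as you walk towards them, while in a print of that scene the ratio is frozen at whatever it was from the camera position.

        from math import atan, degrees

        def angular_size(height, distance):
            """Angle subtended at the eye, in degrees."""
            return degrees(2 * atan(height / (2 * distance)))

        near = (20.0, 100.0)        # (height m, distance m from camera) -- assumptions
        far = (20.0, 200.0)

        for walked in (0, 50, 90):  # metres walked towards the buildings
            ratio = angular_size(near[0], near[1] - walked) / angular_size(far[0], far[1] - walked)
            print(f"walked {walked:2d} m: near/far angular-size ratio = {ratio:.2f}")

        # In a photo taken from the start point that ratio stays at ~2.0
        # no matter where you stand relative to the print.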

    (Now this is a perceptual shift, especially if you are still trying to relate this to a mathematical model)

    If we view the image of our barn from the centre of perspective then we subtract the same foreshortening as we would from the real scene. That foreshortening is fixed in the 2D image; it doesn’t change, so as we move closer and subtract less (because we’ve learnt to subtract relative to distance) the barn appears to shrink in depth.

    As mentioned before, there are problems with demonstrating this, simply because the whole reason our brains do this is to render our understanding consistent; it follows that we would also try to form a consistent understanding of perspective in images. Which we do.

    Let’s also be fair here: if we take an image of our barn with an 800mm lens and move further and further away, it never takes on a wide angle perspective; it always looks like a distant object. We simply make incorrect assumptions about its depth, in the same way that when we view it too close we make the incorrect assumption that is the whole definition of “telephoto compression”. Similarly with a wide angle shot: as you get closer to the image there isn’t a point where a close object looks like a distant one.

    Telephoto compression and wide angle distortion are a function of camera position alone, because you can never create them by viewing position alone, even if you viewed your telephoto shot from a country mile away.

    Of course the way we see and perceive the world still relates to the mathematical model of perspective, our understanding of the space we occupy would be fairly inaccurate if it didn’t! But our understanding is empirical, we learn through trial and error, memory and experience, not maths.

    Perspective is assumed by the viewer, and we don’t always make correct assumptions with more distant objects. The very nature of a 2D object, with its fixed relationships, means that our assumptions about perspective in images generate different errors, which is why it’s incredibly difficult to determine the exact point from which an image was taken by holding that image up to the landscape. There is no exact.

    We often assume our understanding is correct and mathematically provable simply because we assume vision is absolute. Even though there is no evidence to suggest that it is, or even that it’s advantageous to us for it to be that way.

    Hopefully the above gives some insight.

  • Members 482 posts
    Jan. 31, 2024, 2:42 p.m.

    Your argument contains some fundamental errors of logical thinking. Consider this piece of the argument:

    Projective geometry tells us that the image with the 200mm lens is a highly magnified crop from the centre of the image from the 24mm lens. The relative sizes of objects are the same in the two images, but the absolute sizes are not the same in the two images.

    The absolute size of an object affects our perception of how far away it is. The absolute sizes are just as important as relative sizes. We know by experience the correlation between the absolute size (we are talking about angular size, of course) and distance away, although most of us are very bad at expressing such distances in yards or metres.

    However, in other respects our judgement is remarkably good. If you regularly drive a vehicle you are probably very good at judging the point at which you need to start braking in order to pull up at a red light. You can judge the distance pretty accurately even if you cannot say what it is in feet or in metres. You do that mainly by the absolute size of familiar objects, i.e. the angle they subtend at your eye.

    So the maths explains precisely why we see telephoto compression in one image and not in the other.
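
    To put rough numbers on it (all figures hypothetical): same barn, same camera position, both frames printed 36 cm wide and viewed from 50 cm. The barn subtends a far larger angle at the eye in the 200mm print, so the two prints cannot look alike.

        from math import atan, degrees

        barn_height_m, barn_distance_m = 6.0, 400.0          # assumed scene
        sensor_w_mm, print_w_mm, view_mm = 36.0, 360.0, 500.0  # assumed sensor, print, viewing distance

        for f in (24, 200):
            on_sensor_mm = f * barn_height_m / barn_distance_m       # pinhole projection
            on_print_mm = on_sensor_mm * (print_w_mm / sensor_w_mm)  # enlargement to print
            at_eye_deg = degrees(2 * atan(on_print_mm / (2 * view_mm)))
            print(f"{f:3d}mm: barn {on_print_mm:5.1f} mm tall on the print, "
                  f"~{at_eye_deg:.2f} degrees at the eye")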

  • Removed user
    Jan. 31, 2024, 3:10 p.m.

    Good to see angular measure mentioned - anathema to some.

  • Members 86 posts
    Jan. 31, 2024, 4:27 p.m.

    I thought we were discussing how changing the distance at which we view an image changes our perception of actual shape, i.e. "Telephoto Compression".

    Wouldn't it be wise to keep the photo size constant and observe the results, rather than keep the object size constant, where no difference is visible? Just a thought. Besides, you are still seeing a distortion of the perspective contained in both images.

    Not in dispute, second paragraph. But the absolute size of relative objects is not preserved in human vision. It's quite easy to demonstrate, but probably hard to conquer a strong confirmation bias with the easy demo. And:

    Again you inject absolute size as something that is known by the observer. So support this statement. If we don't know how far away our barn is, how do we know its absolute size? And if we don't actually know its absolute size, then how can this unknown absolute size be used as a proof in your argument?

    And you learn this how? By tape measure or experience and memory? Remember I'm not disputing that we are remarkably good at forming an accurate understanding of our immediate surroundings, I think I made a clear statement to that very effect.

    I'm not seeing how anything you've said above explains precisely why we see telephoto compression?

    Not really sure what you are talking about here (not a trick question, just not sure what you're talking about or how it relates). I'm talking about "Telephoto Compression", or how you can look at a picture of a distant barn from the centre of perspective, which has the absolute and mathematically correct perspective A burned into the image, and actually see B; or why you don't see "Telephoto Compression" when you look at a photo from the centre of perspective.

    Per-1.jpg

  • Removed user
    Jan. 31, 2024, 6:11 p.m.

    Is A simply B with a bit cropped off at right?

    If so, each one has the same single viewpoint - for what that's worth.

  • Members 86 posts
    Jan. 31, 2024, 7:23 p.m.

    No, unfortunately the transform in PS also scales back the thickness of the line. That they have an identical angular measurement was to give you a chance; your point was the consistency of angular measure?

    My point remains the same: if you look at an image of an object (the barn) foreshortened by distance, then its absolute geometry is A, so why do you see B?

    As far as angular measure goes, I thought it was the opposite. Isn't it precisely because we have binocular vision (see things from two slightly different angles, i.e. with different angular measurements) and the ability to move around objects (view with different angular measurements) that we gain a far more consistent and accurate understanding of our immediate environment? How many times when viewing distant objects do you sway to the side (get a slightly different angle, a view with a different angular measurement) to gain greater understanding?

    Again you seem to be supporting a theory by attaching an importance to something where the observational data suggests the opposite. I don't understand the relevance of the absolute angular measurement. Again not a trick question, please explain.

  • Members 482 posts
    Jan. 31, 2024, 7:24 p.m.

    You never mentioned changing the distance! I am not prepared to attempt to discuss with someone who keeps changing their mind. Exactly what situation do you want to discuss? When you decide then describe that situation precisely and I'll discuss it with you. Your chopping and changing makes any discussion pointless.

  • Members 86 posts
    Jan. 31, 2024, 7:33 p.m.

    Did you actually read the OP? You know the one above that starts:

    Do you really take me for that much of a fool, that you can turn the topic on its head, the opposite of what you have discussed, and force me to argue that position?

    Tom, you are a fool if you think you can trip me up with the nonsense you've posted so far in this thread. Science? I know what sounds most like a conspiracy theory to me.

    LOL 😂

  • Members 482 posts
    Jan. 31, 2024, 8:03 p.m.

    I'll start again with this quotation of yours. You say at the beginning of your OP that you have a problem with telephoto compression being a function of viewing distance. Yet the example quoted above talks about taking photos at two different focal lengths and then viewing the images side by side, which I assume means that you view them at the same viewing distance.

    Which situation do you want to consider? Viewing different images at the same viewing distance or viewing the same image at different viewing distances?

    That is what I mean by chopping and changing.

    Please don't accuse me of trying to trip you up. I'm trying to pin down what you are actually saying. You appear to change your position whenever you are asked a question.

  • Members 86 posts
    Jan. 31, 2024, 8:35 p.m.

    OK. And this is not a trick question, really. And it's the same as the post above, nothing has moved or shifted despite your word salad suggesting it has. The illustration simply highlights that changing viewing distance changes how you interpret perspective in an image. I really don't see the problem.

    You said in your thread "The Ansel Adams Fallacy":

    You are clearly discussing how our perception of an image changes as we change our viewing distance. It is also implied quite clearly that if we view at the centre of perspective we are at a null point where we see neither Telephoto Compression nor Wide Angle Distortion.

    And yet the image geometry is quite clear that distant objects are rendered on 2D planes foreshortened. So if the absolute fact of the image geometry of our barn taken with a 200mm lens is that it is foreshortened, then why do we see the null point as suggested in your statement above, rather than see the correct perspective in the image, which would be foreshortened as in Telephoto Compression (when viewed from the centre of perspective)?

    Again, this is not a trick question. I just want to know how you get around this.

  • Members 86 posts
    Jan. 31, 2024, 8:51 p.m.

    Don't forget that rather clever optical illusion posted some time back on DPR, the one with the three vans? If angular measure were preserved in human vision then there can be no doubt that we must always see them at the same size.

    BTW, I was thinking during the other thread, "all it would take now is for someone to post the 'Mona Lisa' in support of linear perspective." Then I found the earlier thread. So thanks for that, but your point may have been lost. 😂

  • Members 482 posts
    Jan. 31, 2024, 8:52 p.m.

    Quite correct. Your null point is when we see the correct perspective (not distorted).

    All objects that are not parallel to the image plane will appear foreshortened. That foreshortening is seen whatever the viewing distance (and whatever focal length lens is used). You see the same foreshortening in correct perspective as you see in telephoto compression and in wide-angle distortion. They all show foreshortening in the same way.

    Has this answered your question? I'm really not sure what question you are asking.

    Edit:

    One point about foreshortening that does change with viewing distance is how the foreshortening varies from one side of the image to the other. Suppose we take a photo of a brick wall. If it is taken with a wide-angle lens, then the foreshortening of the bricks will be much greater if we see the wall at an acute angle than if we see it head on.

    20190317-091647.jpg
    In the photo above, a line from the camera to the left corner of the building is at an angle of very approximately 50 degrees to the facade of the building, while a line from the camera to the right corner is at approximately 90 degrees to the facade. So the bricks on the extreme right are hardly foreshortened at all whereas those at the extreme left are strongly foreshortened (because of the large difference in viewing angle).

    If we view the image so that the angle between the left and right corners of the building (seen at our eyes) is only 20 degrees (say) then the variation in foreshortening is much greater than would be expected for that difference in angle. So we see wide-angle distortion in our view of the image.
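
    A simplified sketch of that variation (the 50 and 90 degree figures are from the description above, everything else is assumed): the width of a brick projected perpendicular to the line of sight scales roughly with sin(theta), where theta is the angle between the line of sight and the plane of the facade.

        from math import sin, radians

        # angle between line of sight and facade plane at a few points across the frame
        for where, theta_deg in (("right corner", 90), ("mid facade", 70), ("left corner", 50)):
            factor = sin(radians(theta_deg))
            print(f"{where:12s}: sight line at {theta_deg:2d} deg -> bricks drawn at "
                  f"~{factor:.2f} of their head-on width")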

  • Removed user
    Jan. 31, 2024, 9:33 p.m.

    Never said that it did.

    Obviously.

  • Members 86 posts
    Jan. 31, 2024, 10:04 p.m.

    So you are saying here that when we view our image of the distant barn taken with a 200mm lens from the centre of perspective we see the correct perspective as rendered in the image, as in not distorted? So we see it foreshortened as in we see telephoto compression? Because foreshortening of distant objects is the correct perspective for distant objects in both telephoto and wide angle shots?

    So if we look at our image taken with the 24mm lens from the same camera position why then do we get an exaggerated view of perspective, as in the distant objects and distances between them look elongated or further apart than when we stand at the same point as the image was taken?

    Clearly the wide angle shot does not show the foreshortening of distant objects regardless of the viewing distance, which also seems to contradict your second statement here. Again this is not a trick question, the observational data is clearly at odds with what you are describing we should see.

    What you seem to be trying to say is that we always see distant objects as foreshortened, even in wide angle shots. Which, if true, would be proof that the effect of Telephoto Compression was indeed a function of subject distance/camera position alone. Which kind of nullifies your whole thread titled "The Ansel Adams Fallacy".

    Indeed.

    Though I don't agree with the entire statement of yours that I quoted in the reply above, what we see at the null point does match observation. The difference between that and what is demanded by the image geometry is the human perception element in this equation.

    The consistent global overview we have of the space we occupy, rather than the forever-changing and distorted one that perspective from a single point of view dictates, is exactly what human perception gives us. It's a massive advantage, and yet you dismiss and deny it (contradicting yourself in the process) because you wish the answer to be in the form you already understand, limited to geometry and maths.

    [EDIT - this bit added to the main text for simplicity and clarity] I've just seen your edit, where we are looking at a flat plane (brick wall) that is at an angle to the image plane, where the foreshortening varies from one side to the other. Right so far? Sorry, I'm drawing this in my head as though I were drawing it on paper, and surely you're describing the geometry of rendering a curved wall? I'm also struggling to see the relevance of the angles; surely if you draw lines from the camera position to the ends of a limited flat plane, such as the flat front of a building, then the angles that form the triangle are simply describing the angle of that building from the viewing position?

    [Further edit] Umm...

    So if it's parallel to the image plane there's no foreshortening, and if we shoot at an acute angle there is? (And if it's an obtuse angle we won't see the wall from the camera position?)

    I'm lost as to your point; it [still] makes no sense.

    [EDIT - again for clarity] Telephoto compression - the apparent compression of distant objects in a photo when viewed from in front of the centre of perspective. This generally happens when you view shots taken with a telephoto lens at "arm's length".
    Wide angle distortion - the apparent exaggeration of perspective when you view close objects in a photo from behind the centre of perspective. Commonly seen when you view wide angle shots from "arm's length".

  • Members 293 posts
    Feb. 2, 2024, 10:59 p.m.

    The thing that is missed in the OP's argument is that he is bouncing back and forth between how humans see and how a camera sees. A human sees with two eyes, and as such a human can reconstruct an estimation of 3-dimensional space. If he/she could not do that they could not walk around a room without tripping or running into something and breaking their nose. 🤪 Stereo imagery, with two or more images taken from diverse locations, can be used to reconstruct a 3D image. This is what makes modern aids, be it "AI cruise control," automatic braking, parking assist, etc., work. A single image cannot be used to reconstruct 3 dimensions. Since a projective transform does not preserve the Euclidean metric - i.e., distances and angles - these metrics are lost.

    A camera is a single projective transform of 3 dimensions onto two. All information relating to distances and angles is lost and cannot be recovered without additional information.
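
    A tiny sketch of that loss of information (coordinates made up): under an ideal pinhole projection, every 3D point on the same ray through the lens centre lands on the same image point, so a single image cannot distinguish between them.

        def project(point, f=1.0):
            """Ideal pinhole projection of (X, Y, Z) onto the image plane."""
            X, Y, Z = point
            return (f * X / Z, f * Y / Z)

        near = (1.0, 2.0, 5.0)
        far = (4.0, 8.0, 20.0)    # same ray through the lens centre, four times further away

        print(project(near), project(far))   # identical image coordinates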

    Most all of these questions, arguments, etc. are addressed in any reasonable text on computer vision. Humans use multiple cameras (two eyes) to provide input to the processor (the brain). Modern automobiles use multiple cameras and FMCW radar to provide sensor data to the processor. The mathematics of "perspective", or really algebraic geometry, is the foundation of computer vision. Computational algebraic geometry is a very active area today, as it underpins computer vision, how robots assemble cars, how cars drive themselves and how cars prevent accidents when drivers fail. In fact the field has progressed to the point of spinning off a new field in applied mathematics and computer science called Algebraic Vision.

    arxiv.org/pdf/2210.11443.pdf

  • Members 86 posts
    Feb. 3, 2024, 10:23 a.m.

    Couple of points here.

    1. Binocular vision is only effective at near distances, and has no role when viewing 2D images. And telephoto compression is an effect we see in a 2D image formed by a camera, in regard to the isolated point quoted above.

    2. It's quite a big leap to assume that our vision and robotic/computer vision are the same, and therefore that the same maths applies. We are not equipped with a radar system, for instance.

    The maths of rendering a 2D image through a lens onto a 2D plane (retina) is quite simple; there is no 3D transformation involved. How does your robot (with radar) translate a 2D image, say a brick wall with a mural? Probably the same as us initially, as a wall with a mural. But would it be able to form an opinion of that mural and interpret the 3D space it represents (as a separate and abstract space that doesn't exist)? How would it do that? As you quite correctly say in your post above, it would involve a programmed "memory" of a variety of different 3D scenes with corresponding side elevations and measurements; it would also involve measuring and comparing absolute sizes of objects, etc.

    And it would fail to mimic human vision because in doing so it would also fail to see the optical illusions as we do.

    Every time you look closely you find the observational data is quite clearly showing us that human vision doesn't match the mathematical model. Which is pretty much what I'm trying to highlight in the OP.

    So to recap, the 3D understanding of a 2D image is not contained in the geometry of forming that image on another 2D plane (retina/camera sensor), it is purely a supposition of the human visual system/AI robot.

    It is not rebuilt through pure maths but also requires a "memory" to reconstruct a "most likely/most probable" understanding.

    There are still consistent, observable and predictable differences between the mathematical model and what we see through our eyes. And so any theory that includes "look at this image with your own eyes, it proves the maths" is flawed unless you also consider the nature of human vision.

    Our human understanding of the 3D space we occupy is very remarkable in a number of ways. Of course it resembles the mathematical model, it wouldn't be very remarkable if it didn't. But it's a massive leap of assumption that we can describe human perception by the maths of image geometry. And yet we still insist, without evidence, that there must be a mathematical explanation. Another human trait perhaps?

    😀

  • Members 86 posts
    Feb. 16, 2024, 4:16 p.m.

    The Ames Room and standing in the same spot as the camera.

    What the Ames Room does is force us to make the wrong choice when determining perspective, or depth cues if you like. It’s designed deliberately to do this by presenting us with a room that’s built to look like a rectangle from a certain spot.

    What it tells us, quite simply, is that perspective is a series of choices based on memory and experience and not an exercise in maths. Present us with a room that must have such a convoluted shape for what we see to be correct regarding the two figures and we dump the maths (including angular size) to go with the “most likely” solution, that the room is rectangular and the people are different heights.

    Pure geometry can describe the retinal image at any point, just like a frozen moment in time, but when we arrive at the “same spot as the camera” we do not suddenly arrive at a frozen moment in time; we arrive through a series of events as we travel to reach that spot. The image you see is the product of taking the changing retinal image over that period of time and “processing” it: making “most likely” choices, based on memory and experience, about how to decipher the many depth cues, while the brain also actively tries to subtract the constant movement as shapes change in accordance with pure geometry.

    You don’t see pure geometry when you stand at the camera position.

    Similarly you don’t suddenly arrive at a frozen moment in time when you approach a photo; you approach it and make a series of assumptions based on memory. And the nature of a 2D image is so different from the 3D world that there is no maths (a) to describe what you see or (b) to even remotely guarantee that it will be the same as what you see when standing at the same spot as the camera. In fact you would normally form a significantly different interpretation if you viewed each in isolation.

    However, if you were to hold the photo and compare it to the actual scene, your brain would have one of those “hang on, it makes more sense like this” moments and the two would look very similar.

    Now you probably could stand in the same spot and the retinal image would be the same as the image projected on the camera sensor. But extending the geometry that formed that image directly to the processed image you actually see (the only image you actually see), and treating that as a frozen moment in time relatable directly to maths and angles of light, etc.?

    Well let’s just say the idea may have some flaws.