• Members 557 posts
    Jan. 31, 2024, 8:05 a.m.

    It is simply a matter of terminology. Some people misuse the term "centre of perspective" to mean the vanishing point. I used centre of perspective in its original meaning. It does not mean the vanishing point.

    If you find it confusing, then use the term "centre of projection" instead of centre of perspective. Bruce MacEvoy does this in his article, probably because centre of perspective has been so widely misused.

    As I defined the term (following classical usage), centre of perspective is not synonymous with vanishing point. They are two entirely different concepts.

  • Removed user
    Jan. 31, 2024, 2:14 p.m.

    Thank you for clarifying your usage of the term. As to "centre of projection", does the following transcript from my AI buddy apply to the pin-hole in your earlier illustrations?

    *Optics - Center of Projection:

    In optics, particularly in the context of perspective projection, the term "origin" may refer to the center of projection. **This is the point in space (often located at the center of the camera lens) from which rays of light are considered to emanate.** It plays a crucial role in determining how a three-dimensional scene is projected onto a two-dimensional image.*
    

    Pardon the clip format ....

  • Members 557 posts
    Jan. 31, 2024, 4:09 p.m.

    Yes, that is precisely the centre of perspective (or centre of projection).

  • Members 273 posts
    Feb. 5, 2024, 7:11 p.m.

    I think you're wrong, and it's not a matter of science, it's a matter of language.

    "Perspective" in this context means "point of view", namely the "point of view" of the camera.

    Viewing conditions are another important concept, but one that shouldn't be conflated with "perspective"; it should be kept separate, most particularly because they are, in fact, separate. Where the camera is placed is controlled by the photographer; where the viewer stands to view the image is controlled by the viewer. Obviously, it makes more sense to keep these things separate. Further, "telephoto compression" and related terms are driven solely by camera position, simply because the term refers to the relative sizes of objects in the image, not to the way the entire image is viewed. The viewer's viewpoint cannot change relative sizes in a 2D image.

  • Members 557 posts
    Feb. 5, 2024, 8:43 p.m.

    Then you should be able to give a formula for working out how much telephoto compression will be present in a photograph taken from a known position.

    Whenever I have asked the question "what is this formula?" it is met with a deafening silence!

  • Members 273 posts
    Feb. 5, 2024, 9:35 p.m.

    It's simple trig to determine the relative sizes of things. You need the size in real life and the distance from the camera to the object plus SOH CAH TOA. Nothing to it.
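
    For example, a minimal sketch of that trig in Python (the object heights and distances are my own illustrative numbers, not anything from this thread):

    ```python
    import math

    # "Simple trig": the angular size of an object seen from the camera position is
    # 2 * atan((height / 2) / distance), i.e. the TOA part of SOH CAH TOA.
    def angular_size(height_m, distance_m):
        return 2 * math.atan((height_m / 2) / distance_m)

    # Two illustrative objects: a person 5 m from the camera and a tree 50 m away.
    person = angular_size(1.8, 5.0)
    tree = angular_size(10.0, 50.0)

    # Their relative size in any photo taken from this spot is fixed by these angles.
    print(f"person/tree angular-size ratio: {person / tree:.2f}")
    ```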

  • Members 86 posts
    Feb. 5, 2024, 10:26 p.m.

    I get your meaning, and yes (ish).

    Telephoto compression and wide-angle distortion are not opposites in the way TomAxford's theory suggests, and certainly not in Ansel Adams' book (I've just checked mine and strangely it still smells strongly of darkroom chemicals...). Wide-angle distortion occurs when we view images of close objects from behind the centre of perspective. Telephoto compression occurs when we view photos of distant objects from in front of the centre of perspective.

    There are a couple of points here. Firstly, the two are not reversible. It is not possible to take a single photo, say a head and shoulders with a 50mm lens (35mm equivalent), and just adjust your viewing position to see the two different perspectives. There is no way you can stand far enough away (as the "maths" predicts) and see that face distorted in the same way as in a photo of the same face taken with the nose nearly touching the lens. Similarly, you could stand with your nose against the head-and-shoulders shot and still not see it as a distant object. Distant objects in wide-angle shots viewed from behind the centre of perspective do, however, appear stretched, as do the distances between them.

    The perspective formed by the image geometry and baked into the photo is a function of subject-to-camera distance for near and far objects. It is set by the maths of image geometry and holds as fact completely independently of any viewing of the image.

    Now, as to your mnemonic: please demonstrate it in the mapping of a 2D image onto the back of the retina. The maths is that the 2D image is copied exactly as depicted, 2D image to 2D image, with a simple change of scale dependent on distance.

    Honestly, I'm not kidding here, show me the maths.
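
    For what it's worth, here is a minimal sketch of that 2D-to-2D scaling claim, assuming a simple pinhole model of the eye and made-up numbers (my assumptions, nothing stated above):

    ```python
    # Pinhole model: a point at lateral offset x on a flat print, viewed square-on
    # from distance d, lands on the retina (focal length f) at position f * x / d.
    # Because every point on a flat print shares the same d, the whole print is
    # copied with a single scale factor f / d: a 2D image to a 2D image with a
    # change of scale.
    f = 17.0                              # rough focal length of the eye in mm (illustrative)
    marks_mm = [0.0, 5.0, 20.0, 120.0]    # lateral positions of some marks on the print

    for d in (250.0, 500.0, 1000.0):      # viewing distances in mm
        retina = [round(f * x / d, 3) for x in marks_mm]
        print(f"d = {d:6.0f} mm  scale = {f / d:.4f}  retina positions = {retina}")
    ```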

    So when we realise that a photo is basically a front elevation and can't be rebuilt accurately without the corresponding side elevation (which is precisely the information that is lost in a 2D photo), we realise that the 3D projection is purely an assumption made by the human viewer.

    Further to that, we must then realise that, for our assumptions of relative distance and scale in an image to change while not a pixel moves as we change our viewing distance, we must be seeing that perspective incorrectly at every viewing distance but one.

    And given that we don't have the side elevation, we are guessing based on a memory of relative sizes rather than any absolute scale...

    Surely the amount of foreshortening present in a photograph taken from a known position is a matter of simple geometry?

    This is not the same as our perception of perspective when we view the image and assume a distance and scale that is not contained within that photograph.

    Personally I think that the assumption of perspective in a 2D image is a human perception. It doesn't exist in the maths without the corresponding side elevation; it is a guess, assumed by the viewer. And if we never see perspective in images correctly (we don't have the side elevation), then there is no way that the absolute maths of the real 3D scene will ever match what we see in a 2D image.

  • Members 557 posts
    Feb. 6, 2024, 7:20 a.m.

    As I expected, a reply that completely dodges the question! Not even a mention of telephoto compression.

  • Members 86 posts
    Feb. 6, 2024, 9:10 a.m.

    Mine does. But then you chose to ignore me:

  • Members 557 posts
    Feb. 6, 2024, 10:01 a.m.

    Yes, of course. I think we can all agree on that.

    Again, I think we all agree that it is not the same as our perception. And, we assume a distance and a scale that is not contained within the photograph, but is based on our experience of reality.

    I agree.

    I agree that it is a guess, assumed by the viewer. We only see perspective when we see (or think we see) familiar objects or familiar scenes in the image.

    I think, so far, we are almost completely in agreement.

    Yet you still haven't attempted to give a formula to work out whether or not telephoto compression occurs in a photo taken from a known position.

  • Members 86 posts
    Feb. 6, 2024, 2:10 p.m.

    I'm really not sure what relevance the "photo taken from a known position" has. That we see wide angle distortion or telephoto compression in images is completely independent of whether we know the camera position or not.

    So let's look at the definition: the term "telephoto compression" describes an effect whereby photos of distant objects appear to be compressed when we view them from a point in front of the centre of perspective.

    It's an apparent effect that is fundamentally based on us misinterpreting the perspective in an image: apparent because there is no change in the perspective captured in the photo, which remains constant; all that changes is our interpretation of that perspective. So we see "telephoto compression" when we make an error of judgement, one that appears to change with our viewing distance.

    Ok, we have a camera on a tripod and we take two pics, one with a 200mm lens and one with a 24mm lens. We view both as A4 prints, so as to see two photos of exactly the same perspective (true perspective as defined by camera position) at different magnifications. We see the perspective in the magnified image (200mm) as compressed and the perspective of distant objects in the wide-angle shot as stretched.

    This suggests there is a null point where our perception switches from compressed to stretched, but let's see if we can nail this definition of normal down a little further.
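
    To put a tentative number on that null point: the centre of perspective of a print sits at the focal length multiplied by the enlargement factor. Assuming a full-frame camera (36 mm wide sensor) printed edge to edge across the long side of A4 (297 mm), which are my assumptions rather than anything stated here, a rough sketch is:

    ```python
    # Centre of perspective of a print: viewing distance = focal length * enlargement,
    # where enlargement = print width / sensor width. Full-frame sensor and an
    # edge-to-edge A4 print are assumptions for illustration only.
    SENSOR_W_MM = 36.0
    PRINT_W_MM = 297.0
    enlargement = PRINT_W_MM / SENSOR_W_MM      # about 8.25x

    for focal_mm in (24.0, 200.0):
        d = focal_mm * enlargement
        print(f"{focal_mm:5.0f} mm lens -> centre of perspective ~{d:.0f} mm from the A4 print")
    ```

    Viewed side by side from a typical half a metre or so, the 200mm print is then seen from well inside its centre of perspective and the 24mm print from well outside it, which matches the compressed/stretched impressions described above.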

    It just so happens that my photography career never took off and I'm actually taking these photos of a lovely country scene from my burger van at the side of a road. So my wide-angle shot includes the counter of my burger bar, where there are circular objects such as a sugar bowl and a few cups in the corner. They are distinctly oval in the shot; in fact their true perspective, as per the rules of geometry, is for them to be rendered as oval.

    But if I put my nose against my photo and view it from the centre of perspective that distortion disappears and I clearly see those shapes as being perfectly circular.

    I think we should unpack that last statement, because it suggests something that so far nobody seems to acknowledge. When we view our wide-angle shot from the centre of perspective, we do not see the objects at the edge in their true perspective, that is, as the maths of image geometry predicts any object must be rendered on a 2D plane when projected from a single point in space.

    We form an understanding of the true and absolute shape of those objects.

    Is that confined to the edges of wide angle shots, or does it apply to complete photos in general?

    So we are at a point where we could also say that, when we view an image from the centre of perspective, we are viewing it from the same relative position from which we view the real 3D world. From that position, our ability to see through the distortion caused by the perspective of a single viewpoint in the real 3D world aligns with that of the image, and so we see it as normal.

    If you're still having trouble, try this:

    The thing about a 2D photo is that the true perspective, as defined by camera position, is baked absolutely firm and unchanging into the image. So if our eyesight were absolute, then we would have to see a photo exactly as it is, and always exactly as it is.

    That we actually see different perspectives at different magnifications of images is a clear indication that human perception of perspective varies with the assumed distance: the shape we see changes depending on the distance at which we assume it is viewed. Which is similar to the correction you would apply to, say, make objects appear constant in shape and scale in a world where perspective dictates they must be ever-changing.

    And there is no formula that really covers it, but I find Ansel Adams' to be pretty close whilst still being true to the maths of image geometry.

  • Members 273 posts
    Feb. 6, 2024, 2:42 p.m.

    "Telephoto compression" is nothing but an odd description of the relative sizes of things in an image. And I did answer it.

    You will get exactly the same relative sizes of objects in an image with any focal length taken from the same location. In other words, "telephoto compression" is another way of saying "I took the picture from far away". So, back to Ansel being exactly right - location is all that matters in this context.
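
    A small sketch of that point, using the pinhole relation image height = focal length x object height / distance (the object sizes and distances are mine, purely illustrative):

    ```python
    # Pinhole projection: an object of height H at distance D renders f * H / D tall
    # on the sensor. Changing f multiplies every rendered height by the same factor,
    # so the ratio between any two objects depends only on their distances.
    def image_height_mm(f_mm, height_m, distance_m):
        return f_mm * height_m / distance_m

    objects = [("fence post", 1.5, 10.0), ("barn", 8.0, 80.0)]  # (name, height m, distance m)

    for f in (24, 50, 200):
        post = image_height_mm(f, *objects[0][1:])
        barn = image_height_mm(f, *objects[1][1:])
        print(f"f = {f:3d} mm: post = {post:5.2f} mm, barn = {barn:5.2f} mm, ratio = {post / barn:.2f}")
    ```

    The ratio comes out the same for every focal length; only the overall magnification changes.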

    If you want to work on a myth, work on what I call "the 50mm myth". It's much more related to what you are talking about than Ansel's absolutely correct statement.

  • Members 557 posts
    Feb. 6, 2024, 2:56 p.m.

    I am very pleased that you have come round to my way of thinking. This is supporting what I have been saying all along.

    It is a little more than just misinterpreting the perspective in an image.

    The perspective seen by our eyes changes when the viewing distance changes. The relative sizes of objects do not change, but the absolute sizes (i.e. angular sizes) do change. It is this change of angular size that causes us to misinterpret the perspective captured in the image.

    If the image is moved closer to our eyes, then everything in the image looks larger (angular size) and hence everything looks closer by the usual rule of perspective: object size / object distance = image size / image distance. If we increase the image distance then the object distance appears to increase in proportion.
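
    A worked version of that rule with made-up numbers (the 1.8 m person and the 30 mm print size are mine, for illustration only):

    ```python
    # The rule quoted above: object size / object distance = image size / image distance.
    # Rearranged for the object distance our eyes infer from a fixed print size:
    #     apparent_distance = object_size * image_distance / image_size
    # so doubling the viewing distance doubles the apparent distance of everything.
    def apparent_distance_m(object_size_m, image_size_mm, image_distance_mm):
        return object_size_m * image_distance_mm / image_size_mm

    PERSON_M = 1.8        # assumed real height of a person in the photo
    PRINT_SIZE_MM = 30.0  # how tall that person is rendered on the print

    for view_mm in (250, 500, 1000):
        d = apparent_distance_m(PERSON_M, PRINT_SIZE_MM, view_mm)
        print(f"viewed from {view_mm:4d} mm -> the person reads as ~{d:.0f} m away")
    ```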

    I am not having trouble with this and I never have. I thought you were having trouble with it!

    I agree.

    No, the perspective we see in a photo also depends on our viewpoint. It is a combination of the perspective captured in the photo and any perspective distortion produced by not viewing it from the centre of perspective. [Maybe this is a problem of terminology if you do not like using the word perspective for the distortions introduced by the viewing position.]

    For example, take the skull in Hans Holbein's painting "The Ambassadors".
    Viewed normally, the skull is seen with a highly distorted perspective. To see it with the correct perspective it must be viewed from a particular position to the right of the painting and at a very oblique angle. The perspective we see depends on our viewing position.

  • Members 273 posts
    Feb. 6, 2024, 4:02 p.m.

    No it doesn't.

    Glad you agree.

    Which is not "perspective".

    Humans are very insensitive to that effect, and in any case, it's not "perspective".

    What changes when you change "perspective" is relative sizes of objects in the image. This is why the "dolly zoom" effect works - no change in field of view, but a change in relative sizes of the objects in the image, caused solely by moving the camera position.
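
    A quick sketch of that dolly-zoom arithmetic (all numbers illustrative, not taken from anywhere in this thread): keep the subject the same size on the sensor by scaling the focal length with the subject distance, and the background's rendered size changes relative to it.

    ```python
    # Pinhole projection again: rendered height = f * H / D.
    def image_height_mm(f_mm, height_m, distance_m):
        return f_mm * height_m / distance_m

    SUBJECT_H = 1.8        # subject height in metres
    BACKGROUND_H = 10.0    # background object height in metres
    BACKGROUND_GAP = 20.0  # background sits 20 m behind the subject
    TARGET_MM = 18.0       # keep the subject 18 mm tall on the sensor

    for D in (2.0, 5.0, 10.0):                      # camera-to-subject distance in metres
        f = TARGET_MM * D / SUBJECT_H               # focal length that holds the subject size
        bg = image_height_mm(f, BACKGROUND_H, D + BACKGROUND_GAP)
        print(f"D = {D:4.1f} m, f = {f:5.1f} mm, subject = {TARGET_MM} mm, background = {bg:4.1f} mm")
    ```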

    No, it does not. The "perspective" is "in the photo". We can't change it after it has been captured. Viewpoint of the image is separate from the "perspective" (point of view) of the image, which is solely caused by the location from which the image was taken. This is effortless to prove. Take a picture of a wall. Now, take a picture from the other side of the wall. Now, look at the first picture from any location you want and see if you can see what's in the second picture.

  • Members 86 posts
    Feb. 6, 2024, 5:43 p.m.

    The absolute geometry of image formation and the perspective from the camera position is also a fact regardless of whether we know it or not.

    We're talking about an effect that is quite distinct from normal vision here, and from normal perspective. To create it you must hold the image plane at a fairly oblique angle to the axis of the lens (or pinhole). This produces a unique distortion of an object that is never seen in normal vision and is as such undecipherable. Until, that is, you view it from exactly that angle, along the axis of the lens/pinhole. Then the transformation can be quite remarkable, almost 3D. Pavement artists use the same technique with their 3D chasms.

    I still think that you are missing the point here. How perspective is captured in a 2D image is not being questioned, we both agree here.

    The maths of "ray tracing" a 2D image through a lens onto another 2D surface is also beyond doubt: you get an exact copy.

    That we recognise a 2D surface as an abstraction of the 3D world we inhabit, and form a 3D understanding of it, is entirely a human cognitive function. It is not a mathematical function of the image, because we never learnt to interpret our surroundings in a mathematical way. It is the reverse: we learn maths and angles long after we learn to navigate, and simply try to describe our space in that language. Which works, in that we can define the space and perspective in a way that's very advanced.

    Perspective is the distortion of relative scale and shape caused by a unique viewpoint. It is what a camera captures with mathematical accuracy.

    And it is something you never see in real life.

    If you stand next to a barn (again, a shape with a perspective) and back away, it completely fails, firstly, to show anything like wide-angle distortion when you stand close, and then fails to shrink into telephoto compression as you move further away.

    But I tell you what: a 2D image with baked-in perspective does something close to the exact opposite. It looks compressed if you stand too close and stretches as you move further away.

    So why does real 3D perspective that does change with viewing position appear consistent to our eyes, and perspective in images which is fixed and immutable appear to change as we move?

    I think you are holding the wrong thing constant. That perspective distorts is a mathematical fact; that we don't see the distortion and retain a correct understanding is a human cognitive function. We subtract the effects of perspective distortion as we move through the real world; our understanding of shape doesn't distort but remains constant. If we freeze that, say in an image, and move around it, we see almost the exact reverse of the effect that would cancel the wide-angle distortion/telephoto compression we should see in the real 3D world.

  • Feb. 6, 2024, 9:50 p.m.

    Is that not what Tom has claimed from the beginning? I mean, about how a 2D image looks when you look at it from the wrong distance?

    (Sorry for quoting out of context; otherwise I absolutely agree that human perception plays tricks and a purely geometrical interpretation is only part of the equation.)

  • Members 557 posts
    Feb. 7, 2024, 8:05 a.m.

    It seems that we agree on all the key points. Telephoto compression is something that we see when we view an image from closer than the centre of perspective.
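
    One common way of putting a number on that (a sketch of the usual geometric argument, not the only consistent reading of a flat image): if a print's centre of perspective is at distance c but we view it from distance v, the image is consistent with a scene whose depths are scaled by v / c while lateral sizes are unchanged, so v < c reads as compressed depth.

    ```python
    # Depth scaling when a print is viewed from the "wrong" distance. The 1650 mm
    # figure is the centre of perspective of the hypothetical 200 mm-lens A4 print
    # sketched earlier in the thread; the viewing distances are illustrative.
    CENTRE_OF_PERSPECTIVE_MM = 1650.0

    for v in (400.0, 825.0, 1650.0, 3300.0):
        k = v / CENTRE_OF_PERSPECTIVE_MM
        label = "compressed" if k < 1 else ("correct" if k == 1 else "stretched")
        print(f"viewed from {v:6.0f} mm -> apparent depths scaled by {k:.2f} ({label})")
    ```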

  • Members 557 posts
    Feb. 7, 2024, 10:51 a.m.

    The effect here is quite extreme, but it is not different in principle. As you say, the image plane must be at an extreme angle to the optical axis of the camera. To see the image correctly, it must be viewed from a similarly extreme angle (at the centre of perspective).