• Members 856 posts
    May 28, 2025, 6 p.m.

    OK, now I do ...

    ?

    Thank you. Representative units resulting from "modified by"?

  • Members 856 posts
    May 28, 2025, 10:43 p.m.

    Yes, for example per here, an MTF can be determined for a system (camera sensor plus lens plus the viewer) because each part has an MTF at a given frequency, and these can be multiplied together to get the system MTF at that frequency. It covers MTF for lenses, scanners, sensors, monitors and prints.
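
    To make the multiplication concrete, here is a minimal Python sketch; the component values are made up purely for illustration, not measured from any real kit.

        # Minimal sketch: the system MTF at one spatial frequency is the
        # product of the component MTFs at that same frequency.
        def system_mtf(component_mtfs):
            """Multiply per-component MTF values (all at the same frequency)."""
            result = 1.0
            for mtf in component_mtfs:
                result *= mtf
            return result

        # Hypothetical values at, say, 30 lp/mm:
        lens, sensor, display = 0.80, 0.70, 0.90
        print(system_mtf([lens, sensor, display]))  # 0.504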

  • Members 293 posts
    May 28, 2025, 11:09 p.m.

    Photographs have to be viewed by the human eye or they have no purpose. So I don't get this obsession to reduce photography to a numerical understanding of the performance of the kit.

    Why can't we include a visual understanding, such as how do changes in resolution affect how a photograph looks to human eyes? The full story includes how changes in resolution are perceived by the photo viewing public.

    I don't know the units. I know the principle but don't attach any importance to knowing the name of the label.

  • Members 856 posts
    May 28, 2025, 11:40 p.m.

    Ignoring the truism and the provocative "this obsession": I often analyze images and compare cameras by quantifying various parameters rather than seeing how they look on my screen.

    We can use subjective verbiage as to how changes in resolution affect how a photograph looks to human eyes.

    If you don't know the units, I think we're done.

    I leave the coveted Last Word to you.

  • Members 2426 posts
    May 29, 2025, 4:56 a.m.

    I'm with you. I'm still waiting for anyone who uses MF to post an image at 10X with a FOV of 4 mm that can out-resolve either my APS-C or FF. So far only blurry images have been posted, resolving very little detail at 100 meg.

  • Members 293 posts
    May 29, 2025, 8:07 a.m.

    But by ignoring perception and trying to cancel it from the equation, you fail to allow that all images, from the preview on your camera to the first time you open the raw, are modified by perceptual intent. For instance, if you use a "Fast Fourier Transform" to measure detail (quote: where "Detail" refers to the texture or sharpness or acutance of stuff in the image) in an over-sharpened image, then you are by definition measuring the amount of sharpening, a perceptual modification applied to the image by software to give the illusion of detail.
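
    A toy illustration of that point in Python (my own sketch, not anyone's actual measurement tool): score "detail" as the share of FFT energy in the upper part of the spectrum of a scan line, then apply a crude unsharp mask. The sharpened line scores higher even though no real detail was added, only edge contrast.

        import numpy as np

        rng = np.random.default_rng(0)
        # A soft "scan line": random values smoothed so high frequencies are weak.
        line = np.convolve(rng.random(512), np.ones(9) / 9, mode="same")

        def detail_score(signal):
            """Fraction of spectral energy above 0.25 cycles per sample."""
            spectrum = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
            freqs = np.fft.rfftfreq(signal.size)   # 0 .. 0.5 cycles per sample
            return spectrum[freqs > 0.25].sum() / spectrum.sum()

        blurred = np.convolve(line, np.ones(5) / 5, mode="same")
        sharpened = line + 1.5 * (line - blurred)  # unsharp mask: boosts high frequencies

        print(detail_score(line), detail_score(sharpened))  # the second number is larger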

    There's also a science that defines human vision with its own numbers and terms, not subjective verbiage...

    There could be subjective verbiage to cover comments like these though... 😁😁😁😁

    Those thoughts come from your perception, not mine. I'm sorry, I don't agree with you. But I thought that was where discussion started, not ended...

  • May 29, 2025, 8:40 p.m.

    Ted, please be more polite. The above is not.

    Alan

  • Members 856 posts
    May 30, 2025, 5:16 a.m.

    Not sure which of my above two sentences is impolite.

    In view of the onslaught by my worthy opponent and my increasing irritability - best move the whole thing to the Dumpster IMHO.

    The problem with continuous exchanges between he-who-knows-nothing and he-who-knows-everything is that others who might have made a valid contribution are put off and the thread dies while the Titans battle on.

    Leaving the coveted Last Word to one's opponent ends the exchange, and my worthy opponent has done just that, with all his winning points frozen on this site forever.

  • Members 293 posts
    May 30, 2025, 8:33 a.m.

    As you repeat in your comment above, you switch from discussing resolution to implying that I'm an overbearing bully and you the victim. It's an argumentative strategy, and what seems to trigger it is a misreading or misunderstanding of another comment.

    Personally I don't take any offense at it.

    If you are too invested in one idea alone, then there is a possibility that you see one answer as being the "correct" one and therefore the others as "wrong". If you fall into the trap of thinking that answers are absolute, then you can mistake a different opinion for an assertion that yours is "wrong".

    In my case I have found that trying to connect the visual attributes of a photograph to a logical and scientific understanding is detrimental to gaining a human understanding of what constitutes a good photograph. If you like, I found that the process of ordering visual stimuli into the "correct" box, in the scientifically "correct" and logical order, prevents you from making the illogical and abstract connections.

    It's a different opinion, a choice I made. My comments can be taken as read rather than filtered through your own opinion.

    The science doesn't disappear or become less relevant. Think about the design of a modern standard focal length prime, and why science dictates this, then apply that science to the human eye. It just seems odd to me that we discuss what the photograph is scientifically without acknowledging how the limitations of the human visual system and their solutions can distort what we absolutely see. We look for the direct connection rather than the abstract. Accepting we don't always see things clearly is the "absolute" nature of human vision.

  • Members 1413 posts
    May 30, 2025, 9:56 a.m.

    Today is the first day of my retirement, so a bit of time on hand for a somewhat longer reply 😉

    I can see a lot of merit in both the scientific approach and the human perception approach when discussing resolution or other parameters.
    I’d like to propose that these two approaches are not mutually exclusive; in fact, the human perception part is, in a way, just an extension of the proven scientific method, as I see it.

    I “see” it like this, in the broadest sense... (perhaps this is not perfectly and exactly scientifically expressed, but here goes):
    There is some “real object” out there that we can try to “measure”.
    This is often done by taking many samples of the measurement in different places and saving them all in a 2D grid of numbers.
    In our case the “image” is projected onto a 2D image sensor and samples are made for each pixel.

    Before the image gets to the sensor it has to go through a lens which will modify the signal in some way.
    The image sensor itself will also modify the signal in some way (both steps will usually degrade the result)
    This all happens automatically but each of these modifying steps can be thought of mathematically as a convolution of the original real signal with the response function of the measuring device. (see note 1 below)

    These 2 steps are not completely linear, which complicates the situation; e.g. if the object is too “bright” then the sensor will be saturated.
    There are also several further processing steps inside the camera that will modify the output even more, and these are certainly non-linear.
    Anyway, we’ll finally display a jpg on a screen. This display equipment has again modified the signal in some linear and non-linear ways.
    Perhaps in further steps there will even be a print, which again modifies the result and uses many multi-coloured micro dots.

    So far this is (my interpretation of) the scientific approach, and this part is all fairly easily "measurable".
    But up to this point there was no mention of the response of a human to the presented result.

    ...To actually see the result you have to use an additional system, and this system will of course, in some similar way, change the result further.
    Our biological system also modifies what we “experience in our minds” when we look at the displayed result.

    The biological system is a combination of the optical lenses in our eyes and the complex neurological processing that begins in the retina and continues in the brain. These processes are anything but purely linear transformations; they are in fact highly non-linear.
    The many illusions we’ve seen in books are a testimony to the fact that we also have the ability to “hallucinate” and “misinterpret” what we see.
    The processes are also based on our previous experiences and have evolved partly to give us some sort of advantage in a dangerous world to see predators or find food etc.

    When the biological part is included “as a part of the whole viewing system”, it becomes incredibly difficult to predict what we'll really, finally see and experience, because it is so non-linear and subjective, and based partly on our own personal experiences. But nevertheless it is part of the complete system and should be considered.
    The language and methods to even discuss this difficult biological part are perhaps not as well developed as the purely scientific discussion of physical systems, but they are just as important.

    (Note 1: As an aside, since convolutions are difficult to do directly, it’s also possible to convert both the real signal and the response function of the sensor to the 2D frequency domain, simply multiply these two together, and convert back again.)
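
    A small Python sketch of note 1, with a made-up signal and a tiny made-up PSF, just to show the two routes agree: a direct (circular) convolution computed from the definition matches multiplying the two FFTs and transforming back.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 16
        signal = rng.random((n, n))        # the sampled "real object"
        psf = np.zeros((n, n))
        psf[0, 0] = 0.5                    # made-up response function of the device
        psf[0, 1] = psf[1, 0] = 0.25

        # Direct circular convolution, straight from the definition (slow but explicit).
        direct = np.zeros((n, n))
        for y in range(n):
            for x in range(n):
                for j in range(n):
                    for i in range(n):
                        direct[y, x] += signal[j, i] * psf[(y - j) % n, (x - i) % n]

        # Same operation via the 2D frequency domain: FFT both, multiply, inverse FFT.
        via_fft = np.real(np.fft.ifft2(np.fft.fft2(signal) * np.fft.fft2(psf)))

        print(np.allclose(direct, via_fft))   # True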

  • Members 1431 posts
    May 30, 2025, 10 a.m.

    Personally I think resolution should be defined scientifically and quantified / formalised in some manner.

    I am fully aware that at any given time my mind may give favour to a situation that is beyond what the eye sees. At the human level, that is the point where one needs to develop a "critical eye" to ensure one's state of mind doesn't interfere with the reality.

    Where would the world be without the scientific endeavour that seeks to quantify and reproduce? Look at the optical engineering that goes into a high quality lens to remove all manner of aberrations through multiple groups of lenses. Why would they bother if the human condition was so frail that their efforts weren't noticed?

    It doesn't matter if one individual doesn't observe a lack of sharpness as long as those who can or need to, do.

    Back to the OP, for a required level of sharpness at a given print size / viewing distance, there is a minimum level of resolution that needs to be maintained through every stage. An experienced person may not need to apply any formulas because they already know the capabilities of the components. At some point it is necessary to quantify each component to formalise the outcome and allow consistent reproduction.
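
    As a very rough Python sketch of that "quantify each stage" idea (all of the stage numbers below are invented for illustration): the delivered resolution can never exceed the weakest stage in the chain, so you compare the limiting stage against what the print requires.

        # Hypothetical usable pixels across the long edge at each stage.
        stages = {
            "lens/sensor capture": 6000,
            "raw conversion/crop": 5400,
            "print driver output": 7200,
        }

        required_pixels = 300 * 20   # e.g. 300 ppi over a 20 inch print width

        limiting = min(stages, key=stages.get)
        print(f"limiting stage: {limiting} ({stages[limiting]} px), "
              f"required: {required_pixels} px, "
              f"sufficient: {stages[limiting] >= required_pixels}")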

    [Written before I saw Fireplace's post so not in answer to that]

  • May 30, 2025, 10:37 a.m.

    From wikipedia (en.wikipedia.org/wiki/Visual_acuity#Physiology):

    The maximum angular resolution of the human eye is 28 arc seconds or 0.47 arc minutes;[23] this gives an angular resolution of 0.008 degrees, and at a distance of 1 km corresponds to 136 mm. This is equal to 0.94 arc minutes per line pair (one white and one black line), or 0.016 degrees. For a pixel pair (one white and one black pixel) this gives a pixel density of 128 pixels per degree (PPD).

    IMO this is a precise enough scientific basis to calculate the required resolution for every step of the photographic pipeline. It also means that the final result depends on viewing distance, which is not that scientific, although it can be statistically predicted for different viewing media and conditions.
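
    For example, a short Python sketch turning the 128 pixels-per-degree figure quoted above into a required pixel density at a few arbitrary viewing distances (plain small-angle geometry, nothing more):

        import math

        PPD = 128                                  # pixels per degree, from the quote above

        def required_ppi(viewing_distance_mm):
            """Pixels per inch needed so one pixel subtends 1/PPD of a degree."""
            pixel_pitch_mm = viewing_distance_mm * math.tan(math.radians(1.0 / PPD))
            return 25.4 / pixel_pitch_mm

        for d_mm in (300, 500, 1000):              # roughly: in hand, desk, wall print
            print(f"{d_mm} mm -> {required_ppi(d_mm):.0f} ppi")   # ~621, 373 and 186 ppi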

  • Members 293 posts
    May 30, 2025, 12:15 p.m.

    No, not really. This is my point: be wary of making this connection between resolution and sharpness. Sharpness in images is mainly a perceptual phenomenon and, more importantly, its main driver is not resolution.

    Yes you can define the resolution of a camera/lens system by precise measurement. You can do the same with a display system, and relate this to measurable attributes of the human visual system. But a resultant high resolution should not be confused with a sharp rendering as the level of detail doesn't define sharpness.

    I'll try to explain. Say I have two adjacent squares, one black, the other white. The sharpest output device would be one that has two adjacent pixels, with the squares mapping directly to one pixel each, but this would also have the lowest resolution.

    If we increase the resolution to 3 pixels across then the output image would be three squares across but the middle square would be a shade of grey dependent on the resizing algorithm.

    You could (and should) increase the "resolution" so the pitch falls below that of human acuity (a relationship that can be expressed mathematically), and the apparent sharpness should be the same as with the original two pixels.

    But what happens when you change our picture from the two squares to a step-less gradient from black to white? The two pixel screen still appears the sharpest, but only the pixel pitch that falls below the level of human acuity renders it as a step-less gradient. The high resolution system doesn't render a step-less gradient as "sharp".
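
    Here's a tiny Python sketch of that thought experiment, using simple box-average downsampling as a stand-in for the output device (my choice of resampling, purely for illustration): at 2 pixels the step-less gradient collapses into one maximum-contrast step, while a finer pitch keeps the smooth ramp.

        import numpy as np

        gradient = np.linspace(0.0, 1.0, 4096)      # the "step-less" black-to-white original

        def display_at(signal, n_pixels):
            """Average the signal into n_pixels equal bins (a crude output device)."""
            return signal.reshape(n_pixels, -1).mean(axis=1)

        two_px = display_at(gradient, 2)            # ~[0.25, 0.75]: one hard, high-contrast edge
        many_px = display_at(gradient, 256)         # still reads as a smooth ramp

        print(two_px)
        print(many_px[:3], "...", many_px[-3:])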

    What resolution renders is accuracy of visual detail, not sharpness. This renders to the human eye as "texture", again not sharpness. So many times I've seen photographers on forums converting to B&W and claiming to have "revealed the texture", when what they've actually done is over-sharpen, which pushes our step-less gradient towards the two-pixel output of one black square alongside one white. If your texture is a smooth gradient then increasing sharpness lowers resolution. "Apparent" sharpness tends to increase as detail (resolution) decreases and tends to decrease as detail (resolution) increases. Take an etching made with black ink on white paper: to make bold, sharp lines you use large cuts with large spacing, and to make a smooth gradient you use small lines with small spacing.

    This is the problem, if we associate resolution with increased sharpness we tend to define high resolution by sharpness and low resolution by softness. If we only use sharpness as our yardstick it is much the same as "if you only have a hammer then everything becomes a nail."

    With high-res cameras and modern high-res output devices this is not necessary. The algorithms used in modern printers render this step completely redundant; leave it to the print driver.

  • Members 293 posts
    May 30, 2025, 12:49 p.m.

    Riddle me this...

    This is going to be so hard to explain. I get where you're coming from, but to design a system and define its resolution we have to stick to the centre of the bell curve and ignore the outliers. In this way the perceptual part can be defined by equation, and so you can express the perceptual intent through maths.

    But what if your text quoted above is describing the way we are approaching this problem rather than the problem itself? What if the biological part ... included “as a part of the whole viewing system” which makes it incredibly difficult to predict what we'll really finally see and experience because it is so non-linear and subjective is the fact that we make a perceptual link between resolution and perceived sharpness that doesn't exist in reality?

    Absolutely, because the biological part is that we glance and make assumptions, often based on the meanings of words alone. For instance, what if we assume that increasing resolution (or mapping resolution from one device to another) should increase sharpness? What if this wasn't true, as in the case of our step-less gradient? Then we would be creating an expectation that could be described as: so non-linear and subjective, and based partly on our own personal experiences ???

  • Members 856 posts
    May 30, 2025, 7:22 p.m.

    As to the term "sharpness" there is more to it than just resolution.

    Sharpness refers to the visual clarity of detail in an image and is determined by:

    Resolution – the ability of a system to distinguish fine detail and render it accurately, say MTF versus detail frequency.

    Acutance – the perceived edge contrast of the transition between the different tones of a sharp edge, say pixels per 10-90% edge rise.

    Human visual acuity - the angular resolution of the average human eye, say cycles per degree or radian.

    For this reason, the use of the term "sharpness" may not always be appropriate to this thread which was only intended to be about camera and/or lens resolution.
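
    On the acutance point above, a small Python sketch of the "pixels per 10-90% edge rise" idea; the edge profile is made up to represent a slightly soft edge:

        import numpy as np

        # Made-up edge profile across a dark-to-light transition, one value per pixel.
        edge = np.array([0.02, 0.03, 0.05, 0.15, 0.40, 0.70, 0.90, 0.97, 0.98, 0.99])

        lo, hi = edge.min(), edge.max()
        t10, t90 = lo + 0.1 * (hi - lo), lo + 0.9 * (hi - lo)

        # First index where the (monotonic) profile crosses each threshold.
        i10 = int(np.argmax(edge >= t10))
        i90 = int(np.argmax(edge >= t90))
        print(f"10-90% rise: {i90 - i10} pixels")   # fewer pixels = higher acutance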