Continuation of discussion from Weekly C&C #085 (13/11/2024). I'm continuing the discussion here as it wasn't appropriate to clog up the Weekly C&C further.
I'm specifically talking to Kumsal here as a continuation of the earlier thread.
When we look at a scene in real life, what happens is very different from what happens when a camera captures an image, and different again from what happens when we look at a photograph.
In real life, the image the brain sees is variable. The eye moves over the scene. We actually "see" only a very small area at once even though there is lots more in our peripheral vision. Consider focus. Most of what the eye sees at any one time is out of focus. Our eye moves around and changes focus as it covers all the details we want to see in the scene. The same happens with exposure, although to a lesser extent. The eye adjusts as it moves around the scene. If there is a particular subject that interests the viewer, the eye will adjust to get the best possible focus and exposure it can manage for that area. What the brain puts together is a sort of composite of the memories of all the different versions it has received. If there was something of particular interest to the viewer, that's likely to be weighted in the memory.
A camera is quite different. What is captured in a single image is static. A decision is made by the photographer or the camera about where the focus point will be and how much of the image will be in sharp focus. Same, to a slightly lesser extent, for exposure. A decision is made to either do some kind of averaging across the entire image or to set the exposure according to a particular point, usually the subject. When a viewer looks at the resulting image, they see something that was fixed by the decisions that were made when the image was taken.
Of course, this is where PP comes in. Multiple images with different exposures can be stacked. Looking at the final stacked image is closer to what we think we saw in real life because our eye now sees the details in areas that it wouldn't have picked up until we focused on them. Ironically, most of us feel that such HDR images don't look real. They look over-processed.
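For anyone curious what "stacking" means mechanically: one common family of approaches (exposure fusion) blends the frames per pixel, favouring whichever frame is best exposed at that spot. This is a toy numpy sketch of that idea, not the algorithm any particular editor uses; the frame values and the mid-grey weighting are illustrative assumptions.

```python
import numpy as np

def blend_exposures(frames, sigma=0.2):
    """Toy exposure fusion: weight each pixel by how close its value is
    to mid-grey (0.5), then take the weighted average across frames.
    `frames` is a list of float arrays with values in [0, 1]."""
    stack = np.stack(frames)                       # shape (n_frames, H, W)
    # Gaussian "well-exposedness" weight: near-black and near-white pixels
    # count for little, mid-tones count for a lot.
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True)  # normalise per pixel
    return (weights * stack).sum(axis=0)

# Two hypothetical exposures of the same one-pixel "scene":
dark = np.array([[0.05]])    # underexposed frame keeps highlight detail
bright = np.array([[0.95]])  # overexposed frame keeps shadow detail
fused = blend_exposures([dark, bright])
```

Here the two frames are equally far from mid-grey, so the fused pixel lands halfway between them; with real frames, each region is dominated by whichever exposure rendered it best.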
Or PP can be done by adjusting exposure in specific areas of a single image. The dynamic range of the eye is a lot wider than that of the camera/recording media. Even so, cameras, especially when shooting in RAW, can capture more detail than may at first be seen. So shadow areas, according to the wishes of the photographer, can be raised or darkened. Same with highlights, to some extent. The photographer can use this to direct our attention to wherever they want us to look. When we look at a scene, it's our brain that makes the decisions as to what we want to look at, and it's a composite of lots of different images. When a photographer makes an image, he/she makes those decisions for us. How far should a photographer go in controlling the vision of the person looking at a photograph? That's the art. Looking at a photograph is never like looking at reality.
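A "shadow lift" slider in a RAW editor is essentially a masked brightness adjustment: only dark pixels are raised, and the effect fades out before it reaches the mid-tones so there's no visible seam. This is a minimal sketch of that idea (the threshold, amounts, and sample pixel values are my own assumptions, not any editor's actual tone curve):

```python
import numpy as np

def lift_shadows(img, amount=0.3, threshold=0.4):
    """Toy shadow lift: brighten only pixels below `threshold`,
    with the adjustment fading to zero at the threshold.
    `img` is a float array with values in [0, 1]."""
    # mask is 1.0 in the deepest shadows, 0.0 at and above the threshold
    mask = np.clip((threshold - img) / threshold, 0.0, 1.0)
    # push shadow pixels toward white, scaled by the mask
    return np.clip(img + amount * mask * (1.0 - img), 0.0, 1.0)

# Hypothetical pixels running from deep shadow to bright highlight:
cobbles = np.array([0.05, 0.2, 0.5, 0.9])
lifted = lift_shadows(cobbles)
```

The two shadow pixels come up, while the mid-tone and highlight pixels are untouched, which is the point: the adjustment is selective, exactly the kind of area-specific manipulation being discussed.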
OK. Back to the two photos we have been discussing.
Version 1 was definitely too bright. I explained how I got there.
In the original that I deleted, as it showed small on the Dprev preview, that wall looked too dark and the orange sign and edge lost their kick. So I did a general brightening of everything and posted that. That's the first version you are seeing on this week's thread. Yes, I agreed with the criticism: it was too bright, and highlight areas and deeper shadows had problems.
V2. I had choices. I could have worked on the image as a series of small sections and adjusted them individually, but I went for the entire image with adjustments to brighter and darker areas. I wanted the detail in the cobblestones in the shadow at the front. Darkening the image made this area a bit of a shapeless, detail-less blob. I didn't want the foreground wall to be darker either. Looking at the scene in real life, all the cobblestone detail was there, much as it is in the photo. If I moved my attention to the wall and sign, they were in shadow but still quite bright. Then there is the area on the walls behind. It was a bright sunny day and it got brighter further up the street. I wanted some texture on the walls, but I was (and still am) entirely happy to have the suggestion that brightness increases going up the street. It was probably possible to cut back the highlights up there and darken the whole image. But here's the point: it could only be done by selective manipulation of areas of the image. What you are seeing here is close to what your eye would have seen when it made the adjustments and looked at the subject area, the closer parts, of the image.
If I have been understanding your point of view correctly, Kumsal (and perhaps I'm not), you don't approve of manipulation of areas of an image. You want one image as recorded by the camera. That's reality as seen by a camera, but not as seen by our eyes.
Take the car picture. I suggest it is "real" as you would see it if your eye swept across the whole scene. If however you were specifically looking at the car, it would have looked brighter to you. In taking the photograph there was a choice. The exposure could have been done to lift the car so it stands out a bit more from the background. This is entirely consistent with what the eye/brain does in real life. Or it could have been adjusted in PP. Or it could be left as you have left it.
For me, the photographer has the artistic licence to choose whatever version they wish. That's the art. But they are all "real" photos, just not "reality."