• Members 724 posts
    April 10, 2025, 7:18 a.m.

    Of course, "die" and "soon" have a lot of latitude, but here's what I'm thinking. I was at an art show not long ago, and there were a few photographers there selling prints for really high prices (by "high", I mean, even if I liked the photo a lot, there's no way I'd pay that price for it). The vast majority of the photos bordered on digital art. That is, you could tell that they were heavily processed. So much so, that my wife was really put off by most of it, whereas I just took it to mean that that's the style that sells now -- at least, it's the style that sells to people who will pay "that much" for a photo.

    Anyway, I myself almost always strive for technical perfection, and do a fair amount of processing on my photos (where by "processing", I mean getting the parameters for the RAW conversion "just right", sometimes trying multiple conversions and merging them with various weights, etc.). But I usually shy away from settings that take my photo into the realm of "digital art". Where that line sits is, of course, subject to the viewer's aesthetics, but it's certainly well short of what I saw at the art show.

    However, I got to thinking. What if I just snapped a pic of the scene with my smartphone, and there was software so advanced that it could give it the same "high IQ" as my 45 MP FF camera? And, using AI, I could also just tell it how to tweak the photo to get the look I want, just like I tweak the parameters in the RAW conversion, but using natural language? And then the resulting photo actually looks like a photo -- not some AI monstrosity or highly polished piece of digital art, but actually looks just like a photo? With [AI interpolated] resolution as high as I want it?

    We're not there yet. I don't think there's any software that can take a photo from a smartphone and make it just as good (IQ-wise) as a photo from a high MP FF camera at 100% viewing enlargement. I mean, sure, smartphone photos are often already "as good as" dedicated camera photos when viewed on a phone, and sometimes smartphones stack photos to get an even higher IQ photo than a single exposure from a dedicated camera, but as a general rule, "large" sensor cameras produce photos with "significantly higher IQ".

    But I think that might no longer be the case in the very near future. I'm thinking that a smartphone photo combined with AI software will be able to produce results as good as, and as "natural looking" as, those from even a 45 MP FF digital camera. In fact, they might even be "better", since the AI processing will be far, far, far more capable than what the vast majority of photographers can manage.

    This will be possible because the software will be able to create realistic looking fake detail. And it would be able to remove detail better, too. That is, it could create the illusion of shallow DOF with "perfect" bokeh, if you so desired. It could make a still shot look perfectly panned with motion blur. It could do anything I could do, and do it better.

    Now, for sure, there will still be individuals out there who will excel and outdo the software. But the rest of us mere mortals won't be able to. And more than that, the rest of us mortals couldn't tell the difference anyway, unless the photos were side-by-side, and even then we'd guess wrong half the time as to which was "real" and which was "AI enhanced".

    In short, I rushed to get what I consider my "last camera" and "last lenses" because what's out there now is so good that, even though there will always be better, better won't make any difference to my photography. In that rush, I think I forgot that I might be carrying my "last camera" with me all the time, while I text and browse Reddit with it.

  • Members 1971 posts
    April 10, 2025, 7:59 a.m.

    You are probably right about the direction it's all travelling. But it isn't necessarily the end of photography. Just as there is a market for handmade furniture as well as the mass produced, I think (well, I hope) there will be a market for handmade photography. By that I mean limited edition: signed, numbered, and analogue. I get into discussions with young photographers who ask much the same question. Is there any future in making prints for sale?
    My advice is to make prints that can be identified as being original and limited. Try an exhibition of images made with film, and individually printed on photo paper. Sign and number them and make a point of giving plenty of coverage to how these were made. Even better, do something like platinum prints and talk up the process being used.
    I don't know whether it will work, but people like to feel that what they have on their wall is something relatively unique: traditional film and silver halide papers. If trying my own suggestion, I'd include a certificate with each print saying this was how it was made.

  • Members 262 posts
    April 10, 2025, 9:52 a.m.

    Devil's Advocate here, because these questions need to be asked and examined, not because there's anything wrong with your ideas.

    I don't see where this "higher IQ = better photo" equation comes in. It is stated as fact whereas the evidence is against it; it is pure assumption. I also note that when you explain it, you do so by relating it to the process of how the camera works, not how we respond to visual stimuli.

    Also, what is the definition of technical perfection? Technical perfection can only come from one place, the only place in the picture-taking process where it exists: the way a camera works. It can't be based on the human eye, because we simply do not see either the technical or the perfect world. Our vision is empirical and is far more biased and "rose tinted" than photographers wish to believe. We wish to believe our photos have validity, and so we create a framework that confirms or reinforces that belief, a "technical language" if you like, one that converts scientific ideas into visual forms that we recognise and piece together to form photos that make sense to us, because they satisfy the sense of order that we wish to impose, the one that gives our photos more validity.

    This is not as daft as it seems, because we all believe that we see the world clearly, that we have a global overview, and that this global overview is correct, is the way the world actually is. The world can be explained by science, so our correct worldview can be related to that science, and a simple translation of that can in turn be related to the technology that governs a picture as taken by a camera. There is a logical process between the real world and the reality of a photo, one that's underpinned by real science and real fact.

    anothermike over on another forum said something interesting (I was just browsing out of boredom, and sometimes something interesting crosses the "we are a gear site and so only talk about the things we buy" line). In reference to "colour science" and the output of camera JPEGs, he said there was no science, because we simply don't like correct colour. And this is true. The colour output of camera JPEGs is developed through the emotive response of a representative test group, and yet photographers label it as science and try to define it against reality.

    We do not see the world as recognised and measured by scientific equipment, nor do we see photographs as they actually are. But then we are not as logical a race as we wish to believe ourselves to be. We are quite emotive, and our vision has developed by entirely empirical means (not science or logic), and so it is correct to say that we see the world as we wish it to be, or as we prefer to believe it is.

    And this is where popular photography has always been, and by popular I mean mass market/lowest common denominator.

    The photographer produces a technically excellent portrait and the sitter prefers the slightly blurred one. The photographer judges the photo by (basically, though they may also be valid comparisons) the same metrics that the marketing departments promote to sell the kit. The sitter bases the choice on an emotive response, often with only a glance, because the photo, or what they think they saw, fits the world as they wish to believe it, or the way they want things to be. Often when you blur a concept and just glance, the way that concept sharpens, or how it's brought into focus, is usually in line with our bias, how we wish to see the world. What we see becomes fact because it was in a photo taken by a camera.

    The new phone becomes available on the market with an improved camera; that improvement, in marketing speak, is a simple comparison of a published number and a simple assumption that the published number is both relevant and better when higher. But they are also selling more "WOW", in that the photos they use for promotion are not real but have that "look at me" emotive appeal.

    Both are true.

    Your yardstick is the measured performance of a camera, the artist's is the emotive response to visual stimuli in a rapidly changing and consumer led marketplace.

    People may also only be buying what they can't produce for themselves with their phones, the market may actually be that simple.

    But also, popular photography as defined above will become redundant, as it will all be accessible from a standard smartphone, as it already largely is. And the technical understanding that separates the photographer from the snap-shooter will also cease to be relevant. The tools that are already available give direct access to the emotive content whilst denying the photographer the ability to quantify and understand it logically. Even the basic sliders in Lightroom work on a perceptual intent, one that uses the emotive response of a representative group.

    Perhaps it's the technical approach and understanding that's holding photography back... (?)

    "Real" photography has always been a niche, whatever your definition of real may be.

  • Members 2039 posts
    April 10, 2025, 7:15 p.m.

    It is an interesting proposition, that we can take a picture with a cell phone, with all its technical limitations, such as dynamic range and noise, and an AI algorithm will turn it into a medium format quality picture. It is not so far fetched. I use handheld bracketed HDR for interiors, and it works. We have programs that sharpen out-of-focus and blurred pictures pretty well. I used Topaz to create a picture with an apparent increased depth of field, once. It was quite convincing. Software can already create convincing fake detail. It is already possible to turn a very sharp picture into a blurred grainy picture, if this is what you want artistically.
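    That kind of handheld bracketed merge is already routine in software. Here's a minimal Python sketch using OpenCV's alignment and Mertens exposure fusion (the filenames are placeholders, not anything from this thread):

        import cv2

        # Three hypothetical bracketed frames, shot handheld.
        files = ["under.jpg", "base.jpg", "over.jpg"]
        images = [cv2.imread(f) for f in files]

        # Align the handheld frames first; AlignMTB shifts them to match.
        cv2.createAlignMTB().process(images, images)

        # Mertens exposure fusion blends the brackets directly, no tonemapper needed.
        fused = cv2.createMergeMertens().process(images)
        cv2.imwrite("fused.jpg", (fused * 255).clip(0, 255).astype("uint8"))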

    I think we are almost at peak technical quality for cameras and lenses. Progress will be a question of in camera software.

    Yes, over processed photographs sell better than a less dramatic realistic rendering of a subject. It has always been this way, and always will. People do not want a mere photograph hanging on their walls, especially if they have paid a lot of money for it.

    Thanks for a thought provoking post.

  • Members 724 posts
    April 10, 2025, 7:19 p.m.

    I should have clarified that I didn't mean that photography itself would end, per se, but "end" in the same way that film photography has "ended" in exactly the manner you describe with your furniture analogy.

    My advice is to make prints that can be identified as being original and limited. Try an exhibition of images made with film, and individually printed on photo paper. Sign and number them and make a point of giving plenty of coverage to how these were made. Even better, do something like platinum prints and talk up the process being used.
    I don't know whether it will work, but people like to feel that what they have on their wall is something relatively unique: traditional film and silver halide papers. If trying my own suggestion, I'd include a certificate with each print saying this was how it was made.

    I'm just a hobbyist, so that's not a concern of mine. However, the reason that I was at the art show was because a family friend was selling his photos there and it absolutely is a concern for him, although we didn't talk about this particular take as it didn't occur to me at the time. But his photos are more "traditional" (although he does shoot digital, just doesn't do the heavy processing) and he is struggling. That said, maybe the other photographers at the show were struggling, too -- I didn't ask.

    To me, "higher IQ" is directly a result of the equipment, else we'd all just use smartphones, or, at the very least, much smaller and less expensive cameras (if we don't like the "feel" of the smartphone). You can take a "high IQ" photo and process it in a way that makes it look "more authentic" (e.g. as if it were taken with film, among other things), but what I'm talking about is going the other way (i.e., making a "low IQ" photo appear to be "high IQ", which, to some extent, modern software can do, but I'm talking about way, way, way more advanced in that regard).

    By "high IQ", I mean, for example, higher resolution (for the portions of the scene within the DOF), less noise, greater DR, focal point in perfect focus, etc., etc., etc.. All these things can be altered if they do not suit the "artistic purpose" of the photo by making them "lower IQ", but the reverse is not possible at this time (except in certain limited circumstances).

    Some of my favorite photos fall well short of technical perfection. But most of my photography relies on it, although, I admit, it's mostly a non-issue with my equipment. That is, while better equipment will take photos with even "higher IQ" than mine does, there will be no difference in the "success" of the photo, simply because what I have is [more than] "good enough" for the sizes at which I display my photos.

    What I'm talking about is that we can drop a glass on the floor and get a beautiful pattern of glass shards on the floor which, depending on the reason we dropped the glass, are far more appealing than the glass itself. But we can't take the glass shards on the floor and get the original glass back -- yet. In my opinion, the time is coming, and coming soon, that we can take a photo of glass shards on the floor (or even describe the scene) and tell the software the kind of glass we want, and it will produce a hyper-realistic rendition of that glass that will be indistinguishable from an actual photo of the glass that was dropped.

    In short, I'm saying that I think in the very near future, the information contained in a smartphone photo will be more than enough for software to turn that smartphone photo into anything you like, so there's no need, whatsoever, for an actual dedicated camera with regards to the final photo (of course, there could be other reasons to prefer a dedicated camera, like the shooting experience, but I'm just talking about the final photo).

    The bottom line might be best explained via example: consider any scene, any scene whatsoever. I take a photo of it with my best equipment. Then, assuming the scene has not changed in any significant way, I take another photo of the scene with my smartphone. After processing (with the intent to make the final photos look similar), I display the photos at any particular size on any particular medium and show them to people, and ask which photo was from the smartphone and which was from my 45 MP FF camera. People would not be able to tell; the guesses would be pure chance, with half choosing one photo and half the other.
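    That proposed test is just a coin-flip experiment in disguise, and scoring it is straightforward. A minimal Python sketch (the counts are hypothetical, purely to show the calculation):

        from scipy.stats import binomtest

        n_viewers = 100
        n_correct = 54  # hypothetical: 54 of 100 correctly picked the FF photo

        # If viewers are guessing, correct picks follow a fair-coin binomial.
        result = binomtest(n_correct, n_viewers, p=0.5)
        print(f"p-value = {result.pvalue:.2f}")  # ~0.48, indistinguishable from chance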

  • April 10, 2025, 7:50 p.m.

    I don't think photography will ever die - but how we produce the images we view may well change as technology advances.

    It's interesting that companies like OnePlus (a phone manufacturer) are linking up with camera companies (in this case, Hasselblad) to produce better phone cameras/processing. That knowledge transfer may well work both ways in the future -- so your M43, APS-C, FF etc. cameras may get more internal processing before chucking out a ready-made JPG, so you don't need much post processing -- and because of the extra quality of a larger sensor and better lenses, the quality will be much better than anything produced by a phone. Just give it time...

  • Members 262 posts
    April 11, 2025, 1:05 a.m.

    Get what you mean, but even here you are talking about both the object and the photos as real and measurable realities, and your measurements are the physical properties as defined by the camera. What you don't ask or allow for is the other question:

    Which photo was taken by a human?

    And again, don't try to give it a precise definition. The question is more about which photo appears to have been taken by a human, which one shows, or appears to show, a human understanding of the subject. Two people won't see the same photo in the same way; I mean that the exact same physical attributes in a photo register differently with different viewers. They will interpret the same pattern of dots in line with their own memories and experiences, which will vary, and vary considerably depending on whether they actually look, or just glance and assume.

    I still don't see the relevance of the situation where you can't tell which camera was used to take a photo. Most people can't anyway, and most people don't care. It's like you're separating two identical photos by the invisible: one is more valid than the other because it's a real camera and not a mimicked one, whilst missing the point that both photos are the same, i.e. neither is unique. You still talk about the information in a photo as being "reality" that holds a "truth", and that truth can be reproduced by software. I think you ask the wrong question, make the wrong comparisons.

    If I may give an example: what makes a good violin concerto, a genuine Stradivarius or the unique touch of the human hand that plays it? With the violins themselves, we can ask what it is about them that makes them great, and whether they would still sound as great if they all sounded identical. Perhaps it's the variations introduced by the human hand that make the music come alive and still sound fresh. If you synthesised the sound to be exactly the same, then the real Stradivarius would have no value; it has value when it becomes a real tactile object in human hands. There is an empirical connection between the movement of the hand and the sound produced that allows an instinctive expression of emotion.

    The question is, can a machine produce that? And the answer, at least in the popular market, is yes. Not so much because the machines are very good and can mimic, but more because we are very lazy and just glance with a human eye, interpret with human understanding, and fill in the gaps with human experience.

    Besides, most digital cameras are much the same and produce very similar results in most situations regardless of sensor size (especially when edited by the human eye); the differences are mainly in application and ease of use rather than results.

  • Members 2389 posts
    April 11, 2025, 6:39 a.m.

    I have a friend that's a retired food photographer, one of the world's best, and probably the best. He told me that years ago, when he was shooting with a Phase One digital back, he threw in an image from a Lumix point-and-shoot just for fun to the editors of a magazine he worked for. He had a laugh as they featured the image in their magazine 😂 You don't need AI, just talent.

  • Members 724 posts
    April 11, 2025, 8:59 a.m.

    I have to disagree. I mean, sure, people can't look at a photo and tell what camera took it, just as they can't look at a photo and know what lens took it. However, with two photos of the same scene, all else equal, yes, the camera and lens can make a difference. How much of a difference? Well, that depends tremendously on the scene, the processing, and the display size, but they do make a difference, else we'd all just be using smartphones exclusively already.

    Take a gander at these magnificent photos:

    www.fredmiranda.com/forum/topic/1889066/

    Do you notice that there's sparse detail in the foreground? I'm thinking that's likely because they were nuked with extreme noise filtering. But I did say "magnificent", right? Nonetheless, they'd be a lot "more magnificent" if the foreground had detail.

    So, yes, we can take "magnificent" photos with pretty much any [digital] camera from the past two decades, or even longer. Sure, differences in operation (AF speed/accuracy, frame rate, etc.) may get more modern cameras a lot more keepers for various forms of photography, but IQ-wise, they're all "good enough". That said, the newer cameras are "more good". Enough so to make a difference to people at the sizes the photos are displayed and on the media they're displayed on? No way, no how. That's why the dedicated camera market has tanked: smartphones are so, so, so much more convenient, are "good enough" (and even indistinguishable for some photos, especially when viewed on a smartphone, or displayed relatively small), and, perhaps most importantly, have such insanely good processing that it is likely better than what many (or even most) photographers could accomplish with RAW, not to mention OOC JPGs.

    Yet, some people (like me) want better. Not just a little better, but a lot better. And my R5 smokes any smartphone with regards to IQ. But does it matter? To me, absolutely. In fact, I was showing photos from my website on my phone, and people were amazed that my phone took such great photos (yes, I know the quip: "You must have great pots and pans because your food is so good!"), but I explained those were photos I took with a dedicated camera, and they nodded with "understanding".

    Now, I was displaying those photos on my phone, so I sincerely doubt what they saw was the IQ differential between my phone and the camera I used to take the photos; rather, it was my processing of the photos, which is likely rather different from how the phone would have processed them. And this brings me back to my OP. Currently, I don't think a smartphone can do what a dedicated camera system can do.

    Here's a nice photo I shot with my smartphone that I even did additional processing on to make it look better:

    dprevived.com/media/attachments/3b/06/3OQ5cgPIMC09wK6zK6G8GqZxvKYU2JzhvIPLJCCyJPtNkwEMcH4PokZ4xKWgHlEK/sunset-29-nov-20.jpg

    Is it nice? I think so. Would it have been better had I taken it with my dedicated cameras? Way better. So much better, I might have even been inclined to print, frame, and hang it. But the IQ isn't there in that photo for me to do that.

    But in the very near future, I think smartphones may well be able to. I think photos taken with smartphones, processed with yet-to-be-invented software using natural language and insanely good fake (but "realistic", or, at least, as "realistic" as you want it to be) detail supplementation, will be able to produce photos that no one could distinguish from what even the best cameras today can do, at any size. For example, I would be able to take a photo of a goalie at a football game from the stands at night with my smartphone, and it would be able to render a photo every bit as good as what the photographer on the sidelines with a Z9 + 600 / 4 lens gets. Yes, my photo would have tons of fake detail, but that fake detail would be so cleverly created that no one would be able to tell that it's fake. And it could do that from a frame of a 120 fps 8K video I'm taking of the scene, so the only role the photographer would play is simply pointing the phone in the right direction with the video running and telling the natural language editor how they want it to look. In this [near] future, the "genius" of the photographer is in how well they can tell the software how they want the photo to look, so the "success" of the "photographer" is in their ability to "communicate" the look they want to the software.

    And then, a few years further in the future, you won't even need the smartphone -- you'll just describe the scene, and the software will create the "photo". Then the "photographer" can keep tweaking how they want it to look until it's "perfect". Music will likely be "composed" and "sung" in a similar manner. In fact, I read a short while ago that they had a computer compose songs, played the AI-created songs and actual human-composed and human-performed songs to a bunch of people, and asked which were AI and which were human. The majority of people chose more of the AI songs as being human than the human songs! And so it will come to be with photography. (As an aside, don't get me started on what this will mean for news and politics -- by the time the danger is apparent, it will be way too late, just like so many other things.)

    In all honesty, I think a smartphone will do fine for a magazine cover featuring food photography. And years ago, a compact would have gotten the job done quite nicely.


  • Members 262 posts
    April 11, 2025, 1:12 p.m.

    Well, I still can't help but notice you make the same base assumption that higher IQ = a better photo, without providing any evidence other than a comparison of sensor and lens performance, not visual preference. On one hand you have this metric of acceptable IQ where, once gained, it can't be lost or the photo will be degraded; on the other you readily accept that the distortion caused by lens aberrations in the OOF areas (and with it the loss of IQ) is a subjective quality in a photo that is desirable. It's like you won't let go of the order and precision by which you quantify... 😋

    The majority of people are using smartphones exclusively.

    Would they? I'm not sure that's true. This is one of the odd things about human vision: we don't measure it against the visual reality of the actual scene. We measure it against an emotive and flawed memory, to the point that if we presented the visual reality of the scene, people would as likely as not dismiss it as unrealistic. (Just as an aside, though I don't doubt that the photographer didn't touch the saturation slider, those colours are unrealistic and heavily saturated; even a highly visible G5 storm overhead doesn't look like that. A statement that implies there is validity in the colour because a camera was used misunderstands both visual colour and the process.) The problem is memory: when we view the Aurora we see and remember the main part and not the peripheral details, and we can only really see it on quite dark nights, when our colour vision is failing for lack of light. So if you provide the detail in the foreground, it may satisfy your logical mind and may well fail to resemble people's memory. Plus it's a surreal event, in that it is vastly different from normal experience; if you keep trying to root it in visual reality, you will counter that. Besides, how do you square the visual inaccuracy of the colour against the need for accurate foreground detail in a coherent photo? Left and right hand again...

    Again, people don't know and don't care. They see the photos on your phone and make an assumption based on their experience of taking and viewing photos. Now, did that assumption add to the WOW the photos produced, and did your injection of the reality of how they were produced diminish it, or was it the other way around?

    I don't deny that the higher IQ of FF over a smartphone has a visual impact, and just like with my 5x4, I want that look. But I didn't choose FF for IQ; I chose it because I liked the format and was used to the relationship between shutter/aperture/field of view (the compromise between DoF and shutter speed, if you like) and the visual effect this creates. This is what I use for my "niche" photography, my prints on the wall; for everyday shared images with friends and family I much prefer the smartphone. I find people connect far better when I stick with it, because they assume as above and so expect to see the visual language of the smartphone -- that is, they view and interpret the images as though they were taken with a smartphone.

    Yes, but... the gear is not diminished; it has the same capabilities. It is the photo that loses its value:

    This has always been a discussion in photography; there have always been those who try to root it to a representation of reality that holds a truth for them. But the camera has always lied, and with every turn of technology those distortions can be presented as more and more realistic. Photography has moved to the smartphone, and with it, it has moved away from the technically skilled photographer with the "real camera". It's also moving away from that skilled photographer's need for an "artistic sensibility", as this is already being replaced with software. The "high art photo" will be available to all at the touch of a button, and simultaneously it becomes commonplace. The language is evolving as it always has, and we need to understand this rather than spend 150 posts arguing that the definitions of terms should be fixed and archaic. But people still appreciate the old style of photography; visitors still spend time looking at the prints I have on the wall, though the audience is necessarily smaller. I think the mistake was to believe that higher IQ cameras were ever "where it's at" in the glance-and-move-on world of the internet and viewing on screen; even here it was only ever a niche, rapidly outdated by the smartphone.

  • Members 789 posts
    April 11, 2025, 7:26 p.m.

    I only know of one: ... 'CPIQ v2', although there may be other, less comprehensive ones:

    www.imatest.com/imaging/cpiq/

    Spatial Frequency Response (SFR)
    Lateral Chromatic Displacement (LCD)
    Color Uniformity
    Local Geometric Distortion
    Texture Blur
    Visual Noise
    Chroma Level
    Auto Exposure
    

    One could allocate a weighting to each of the above, take an average and get your personal degree of "technical perfection" ...

    ... mine:

    Spatial Frequency Response (SFR) 80
    Lateral Chromatic Displacement (LCD) 20
    Color Uniformity 10
    Local Geometric Distortion 70
    Texture Blur 90
    Visual Noise 30
    Chroma Level 50
    Auto Exposure 10
    

    Mean = 45 ...
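    The arithmetic, as a minimal Python sketch (the weights are the ones listed above; the weighted_score helper and any per-metric scores fed to it are hypothetical):

        weights = {
            "SFR": 80, "LCD": 20, "Color Uniformity": 10,
            "Local Geometric Distortion": 70, "Texture Blur": 90,
            "Visual Noise": 30, "Chroma Level": 50, "Auto Exposure": 10,
        }
        print(sum(weights.values()) / len(weights))  # 45.0, the mean above

        # To score an actual camera, weight hypothetical per-metric scores (0-100):
        def weighted_score(scores, weights):
            total = sum(weights.values())
            return sum(scores[k] * weights[k] for k in weights) / total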

  • Members 262 posts
    April 11, 2025, 8:27 p.m.

    Rhetorical question, in that it isn't the software's ability to replace or duplicate a camera that will redefine photography, but that it can eventually replace the photographer. What is the difference between the sports shot with the fully automated Z9 and the software-enhanced smartphone, apart from weight? Producing a duplicate of a photo you can already easily take doesn't change photography; it only makes it more accessible. If the photos are identical, then where is the progress?

  • Members 2389 posts
    April 11, 2025, 9:50 p.m.

    Agree. I put the challenge up on FM and it upset a lot of people. I got rid of the pixel-peeper tech heads that look at charts, and we played on a level playing field. So I printed a 36-inch print from the GFX100 and the A6700, then set my studio up and shot an image with my A7IV, which can easily out-resolve human vision. To see the resolution/detail difference I had to look through a jeweler's loupe from 3 cm, and even then it was barely visible. Then I resized the image from the A6700 to 100 MP and reprinted: there was, in all practical terms, no difference in a 12-foot-wide print. That DR and tonal graduation depend on sensor size is just a myth; it's pixel size, period.
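    A rough back-of-the-envelope supports this: assuming the eye resolves about one arcminute, the PPI a viewer can actually distinguish falls off quickly with viewing distance (a minimal Python sketch; the distances are illustrative):

        import math

        def max_resolvable_ppi(distance_inches, acuity_arcmin=1.0):
            # Smallest dot pitch the eye can separate at this distance, inverted to PPI.
            pitch = distance_inches * math.tan(math.radians(acuity_arcmin / 60))
            return 1 / pitch

        print(round(max_resolvable_ppi(18)))   # ~191 PPI at a close 18 in
        print(round(max_resolvable_ppi(120)))  # ~29 PPI from 10 feet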

    test image 3 copy vibe (2025_03_25 05_46_59 UTC) (2025_03_31 22_34_04 UTC).jpg

  • Members 724 posts
    April 11, 2025, 10:10 p.m.

    For me, they would. For others, I cannot say. I suspect most wouldn't care, to be honest.

    I don't understand this. You can shoot whatever DOF and exposure time you like with any format, within the limits of the available apertures of the available lenses and the exposure times of the camera body. For example, I could shoot a night-time sporting event at f/16 and 1/8000 with my camera if I wanted to. The photos would be insanely noisy, but that's an IQ thing, not a DOF / shutter speed thing.
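    To put numbers on "insanely noisy", counting stops makes the trade explicit. A minimal Python sketch, where the f/2.8 at 1/500 baseline is an assumed night-sports exposure, not anything from this thread:

        import math

        def stops_lost(f_from, f_to, t_from, t_to):
            aperture = 2 * math.log2(f_to / f_from)  # light falls with f-number squared
            shutter = math.log2(t_from / t_to)       # halving the time costs one stop
            return aperture + shutter

        print(round(stops_lost(2.8, 16, 1/500, 1/8000)))  # ~9 stops less light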

    If I didn't care about IQ (or shallow DOF), I'd use the smallest and/or least expensive camera that would record the scene and that also had the AF speed/accuracy, frame rate, etc., that I want. Lenses wouldn't matter, if DOF and IQ didn't matter, so long as the system had a lens wide enough that I could crop to whatever framing I wanted. My smartphone goes to 14mm FF equivalent, and that's wide enough for me. So, technically, given that IQ and shallow DOF don't matter, my smartphone is enough, 'cause I can just crop to the desired framing (again, assuming AF speed/accuracy and frame rate are "good enough").

    Now, maybe what you're saying is that the IQ of all modern cameras (including smartphones?) is at or past the "good enough" point for most everyone, which means that still higher IQ has no effect on the "success" of the photo. For example, whether I took a landscape photo with my smartphone or with my R5 and a sharp lens wouldn't make any difference to the vast majority of people out there. I can't really argue against that, if that's what you're saying.

    But that's not why I have an R5 and sharp lenses; if I were just out to please "the vast majority" instead of satisfying my own aesthetics, well, I would just use my smartphone. And, if I'm being completely honest about it, for most of the photos I see posted that are within the realm of focal lengths smartphones have, I don't see much difference, if any difference at all (that is, I would not be able to tell if it was a smartphone photo or a photo from a GFX 100S II). However, in some of the photos I see, there most certainly is a difference, and that difference is as clear as night and day to my eyes. And those photos (all else equal) are absolutely superior photos (to my eyes).

    So, for me, IQ (and the option for shallow DOF) absolutely do matter. What I'm saying is that, in the near future, software will be able to completely close the gap on perceived IQ. The resulting photo may not be a forensically accurate depiction of the scene, but it will be a realistic depiction of the scene -- so realistic, in fact, that differences between the AI processed photo and a "high IQ" photo would not matter to anyone except someone who wanted a forensically accurate rendition of the scene.

    Photos are not, and never will be, "realistic". For one, they're 2D, not 3D. Also, they do not record the whole scene. For photojournalism, that's a huge deal. You could take a photo of people rioting, but not include the people on the side selling rioters food, drinks, and memorabilia and it changes the entire context of the scene. If you want "reality", the best you can do is UWA video. I'm not even saying the Holy Grail of photography is realism. Sometimes, low IQ is what gives the photo its impact. Indeed, some of my favorite photos that I took myself are a blurry mess. But, for me, more often than not, "high IQ" is the goal.

    So, I'm not talking about recording "reality" in some sort of forensic manner. I'm talking about creating a photo that gives the illusion of reality in a convincing fashion, in the same way that the world looks much nicer with my glasses on than with my glasses off (which is not to say that I'm dismissing "artistic" photography where realistic representations are no more relevant to the photo than they were to Picasso). To that end, IQ absolutely matters (to me), and matters a lot. However, the point I was trying to make in this thread is that, in my opinion, "fake high IQ" will be as good as "real high IQ" in the very near future with regards to the impact the photo has, even for an IQ whore like myself. It won't be as forensically accurate as a "high IQ" camera/lens can provide, but it won't matter unless forensic accuracy is primary to the "success" of the photo (which, in my opinion, is rarely the case outside of science -- people just want a convincing and pleasing rendition of a scene and don't care if it's 100% accurate or not). In fact, a more pleasing fake rendition that looks real may well trump a less pleasing rendition that is closer to forensic reality (thus, the prevalence of "over processed" photos at the art show).

    In other words, I'm thinking photography is close to ending. You want a "great" photo that looks like Ansel Adams took it? Video the scene with your smartphone, select the "Ansel Adams" filter, and it will be so detailed, the DR so sublime, that it can be enlarged to any size you want with every little [fake] detail perfectly rendered. Yes, there are such filters now, but the photos fall apart quickly upon inspection. I'm saying that in the near future, you won't be able to tell fake from real, and the smartphone will be all you need.

    Will people still take photos the traditional way? Of course, just as people still shoot film on manual focus bodies mounted on tripods. And this style of photography will likely always exist (albeit in exponentially decreasing numbers) and have a market. What I mean is that traditional photography will be dead in the near future in the same way that film photography is dead now. Me taking my R5 out with a 50 / 1.4 lens will be as bizarre as someone putting their large format camera on a tripod.

  • Members 724 posts
    April 11, 2025, 11:13 p.m.

    Nope. I took a look at the two crops you posted above. Night and day difference between the two (the top is much better). Now, will that difference survive at the size the photos are displayed, on the medium they're displayed on, viewed from the distance they'll be viewed at? That's an entirely different question.

    I think people have an "IQ threshold" that is very likely normally distributed, where any IQ higher than their threshold has no effect on the "success" of the photo. By "success", I mean a higher IQ version of the photo wouldn't sell for more money (or sell more copies), wouldn't place higher in a competition, wouldn't get more "likes", etc., etc., etc.. And I think for most scenes that people take photos of and the sizes they display their photos, the modern smartphone is smack in the middle of that bell curve.

    In the near future, what I'm saying is that the smartphone will be all the way to the right on that bell curve. Even for photos at, say, 1000mm FF equivalent, where a smartphone has only, say, a 75mm FF equivalent lens, the photo will be able to be cropped and enhanced/supplemented with fake but pleasing and "realistic" detail that makes the differences between it and a FF camera with a 1000mm lens moot. The only reason to use a dedicated camera will then be not for the photo itself (unless, of course, "forensic reality" is the goal), but for the "shooting experience" and/or for people who want "genuine photos".
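    The size of the gap the AI would have to fill in that 1000mm example is easy to quantify (a minimal Python sketch; the 50 MP smartphone sensor is an assumption):

        crop = 1000 / 75              # ~13.3x linear crop to reach the framing
        pixels_kept = 50e6 / crop**2  # pixel count shrinks with the square of the crop
        print(f"{crop:.1f}x crop leaves {pixels_kept / 1e6:.2f} MP")  # ~0.28 MP

    Everything beyond that ~0.28 MP of real information would have to be invented by the software, which is exactly the point.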

    Actually, that last comment just gave me a perfect analogy for what I'm trying to say: in the near future, smartphone photos enhanced with significantly better AI than is available today will be like synthetic diamonds as opposed to real diamonds. Almost no one will be able to tell the difference, and even then, not with their eyes, but some people will want the "real thing" (which, to me, is understandable with regards to photography, but incomprehensible with regards to diamonds).

  • Members 789 posts
    April 12, 2025, 2 a.m.

    This discussion continues to be written in vague terms, with nary a single technical comparison of, e.g., Spatial Frequency Response (SFR) from my list above showing numbers and MTF curves.

    If the MTF50 from one camera is higher than that from another, but the difference is only ever expressed as "better detail", what does that tell us? (rhetorical)
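    To make MTF50 concrete: it is the spatial frequency at which the measured SFR falls to 0.5. A minimal Python sketch that reads it off a curve by linear interpolation (the sample curve is invented, not taken from the posted comparison):

        import numpy as np

        freq = np.linspace(0, 0.5, 11)  # cycles/pixel
        sfr = np.array([1.0, .97, .9, .8, .68, .55, .43, .33, .25, .18, .13])

        def mtf50(freq, sfr):
            i = np.argmax(sfr < 0.5)  # first sample below 0.5 (assumes a crossing)
            f0, f1, s0, s1 = freq[i - 1], freq[i], sfr[i - 1], sfr[i]
            return f0 + (0.5 - s0) * (f1 - f0) / (s1 - s0)

        print(f"MTF50 = {mtf50(freq, sfr):.3f} cycles/pixel")  # ~0.271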

    Allow me to post a comparison between two cameras' SFR:

    compRGBG9SD9.png

    Which camera's SFR is "better" than the other?

    Lord Rayleigh must be turning in his grave.


  • Members 262 posts
    April 12, 2025, 2:12 a.m.

    I know you don't get this, but most of the photographic language we use, from shallow DoF to motion blur to gritty realism, is rooted in the compromises made when using 35mm cameras hand-held with fixed film speeds. All these things are ways the images were abstracted, how they differed from reality and experience, because of the limitations of what was really the largest format you could use hand-held at the time. The high contrast/blocked shadows/grain translation to "gritty realism" is an abstract connection we've learned to associate with the look, not the "logical consequence" of taking photos in low light. Because of this you can use aperture to "isolate the subject", blur to "suggest motion", grain to suggest "realism".

    In a digital model where equivalent photos from different formats can be reduced to a comparison of digital noise, how come that has not taken on a similar meaning?

    You still link reality directly to camera IQ, which implies the assumption that human vision is absolute. Which is not true. You also seem to be using the "truth of the object", the way the real world actually is, as your benchmark of objective reality. Then the greater detail and accuracy with which we can capture this can be measured as IQ, and therefore the goal of the software is to represent reality in the same way.

    But we don't see reality through the human eye. Or, to put it another way: if you want to mimic how a high IQ photo looks, then you have to start with how such a photo looks through human eyes. You need to start with what we are actually looking at and how it appears through human eyes, not what we are not looking at and how human eyes can never see it.

    A photo is not reality; it is a 2D representation, and we recognise it as a 2D representation and then apply a 3D understanding based on our experience of the real 3D world. We do not "see" that reality based only on the information captured in that photo; it is not a stand-alone information source. If you want to construct an AI process that mimics other cameras with a smartphone, then you must include the way we see the actual photos as part of your model.

    Higher IQ from smaller sensors has always been the goal and IQ is at least part software generated in all cameras these days.

  • Members 724 posts
    April 12, 2025, 2:56 a.m.

    What I'm saying is that the difference between 4x5 and [35mm] FF is not "the relationship between shutter/aperture/field of view (the compromise between DoF and shutter speed if you like) and the visual effect this creates". The difference is resolution, noise, and DR. Aspect ratio is a consideration, but either can be cropped to the same framing as the other, where, again, cropping reduces resolution, increases noise, and decreases DR. Whatever "shutter/aperture/field of view" you can do on one format, you can do on another (presuming lenses with the same equivalent focal lengths and f-numbers exist).

    I'm not sure what you mean, but the difference, IQ-wise, between formats is, in fact, resolution, noise, and DR (again, presuming lenses with equivalent focal lengths and f-numbers are available). And assuming negligible differences in operation between two systems (e.g. same AF speed/accuracy, frame rate, etc.), the only advantages a larger format will have over a smaller format are resolution, noise, DR, and usually the option for a more shallow DOF (for a given perspective and framing). If these IQ differences don't matter, and the option for shallow DOF does not matter, then, IQ-wise, there is no reason to choose one format over another.
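    For what it's worth, the equivalence arithmetic being referred to here is purely mechanical. A minimal Python sketch (crop factors are relative to 35mm FF; the 4x5 figure is a rough approximation):

        def equivalent(focal_mm, f_number, crop_from, crop_to):
            # Same framing and DOF on another format: scale focal length and
            # f-number by the ratio of crop factors (FF=1.0, MFT=2.0, 4x5~0.28).
            scale = crop_from / crop_to
            return focal_mm * scale, f_number * scale

        # A FF 50mm f/2.8 look, rendered on Micro Four Thirds:
        print(equivalent(50, 2.8, 1.0, 2.0))  # (25.0, 1.4)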

    I guess I am not communicating what I mean well, as you keep thinking that I mean "realistic" looking photos are the be-all and end-all of photography, which I am absolutely not saying. I'm saying that, for me, personally, resolution, noise, DR, and the option of shallow DOF matter for a large part of my photography. I'm not saying they're the be-all and end-all of everyone's photography, or even important. In fact, I even specifically stated that some of my favorite photos that I have taken are a "blurry mess", for example, and I have intentionally reduced resolution, introduced noise, and reduced DR to get a "more pleasing" photo on numerous occasions.

    In short, if resolution, DR, noise, and the option for a more shallow DOF do not matter to someone (or, alternatively, if pretty much all modern dedicated camera systems, and even not-so-modern systems and smartphones, are past the "good enough" point), then there is no reason to choose one format over another (there may, of course, be reasons to choose one camera over another with regard to operational differences, but that would not be a function of the format, just the particular camera). And if there's no reason to choose one format over another, IQ-wise, then the only reason to choose one camera over another is for differences in operation and/or price.

    The camera-lens system records information from the light emitted from the scene. Processing, both internal and external, manipulates that information so that we can see it manifested as a photo. How we interpret the photo is another matter all together. My thesis is that smartphones already record enough information about the scene which more advanced AI software than we currently have (with photographer input, of course, just as I direct the RAW converter on how to process the photos I take) can then use to create a photo that, for the viewer, is indistinguishable from a photo created from the highest IQ equipment that exists today, and possibly even "better".

    What do I mean by "better"? I mean looks every bit as "realistic", if not more so, if that's the look you're going for. Looks every bit as "arty", if not more so, if that's the look you're going for. Looks every bit as...if that look, whatever it is, is what you are going for. Our eyes can only take in so much, and our brains can only process so much, and I'm saying that the modern smartphone records "enough" information of any scene that, supplemented with the power of AI processing, will be able to do what any modern camera system can do today.