• Members 2389 posts
    April 14, 2025, 11:49 a.m.

    I'm talking about replicating a person's face accurately with a phone using AI. It's never going to happen. With the tools I have, I thought it was going to work, but it was nowhere near it.

  • Members 262 posts
    April 14, 2025, 1:16 p.m.

    Accurately compared to what? Your understanding of how current cameras and lenses work or how human perception works? You said before:

    You call it "distortion" because it looks distorted, i.e. it doesn't match what you normally see. This is what I was trying to explain in my posts with GB: if you use your technical understanding of current cameras as your baseline, you fail to see past their flaws. Because you understand that the perspective distortion in current cameras is "technically and scientifically correct for the technology", you automatically apply that "scientific correctness" to the new technology, unaware that your premise is "match the distortion" rather than make it more realistic, i.e. match the way we see and perceive faces in the real world.
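
    To put the distance part of that in concrete terms, here is a minimal sketch (my own toy numbers and standard pinhole geometry, not anything measured in this thread) of why close-up phone portraits look "distorted": projected size falls off as 1/distance, so a nose sitting roughly 10 cm closer to the lens than the ears is magnified noticeably more at selfie distance than from a couple of metres back. The 10 cm face depth and the distances are illustrative assumptions only.

    ```python
    # Minimal pinhole-model sketch of close-distance "face distortion".
    # Assumed values: ~10 cm nose-to-ear depth, typical selfie/portrait distances.

    def relative_magnification(camera_to_nose_m: float, face_depth_m: float = 0.10) -> float:
        """Ratio of the projected size of the nose to that of the ears.

        Under a pinhole model, projected size scales as 1/distance, so this
        ratio depends only on subject distance -- the focal length cancels out.
        """
        nose_dist = camera_to_nose_m
        ear_dist = camera_to_nose_m + face_depth_m
        return ear_dist / nose_dist  # >1 means the nose looks enlarged relative to the ears

    for d in (0.3, 0.6, 1.0, 2.0):  # metres: roughly selfie arm's length out to portrait distance
        print(f"{d:.1f} m: nose rendered {relative_magnification(d):.2f}x larger than ears")

    # 0.3 m -> 1.33x (the classic phone-selfie "big nose")
    # 2.0 m -> 1.05x (looks "normal")
    ```

    The focal length cancels out completely, which is why this is a camera-to-subject-distance effect rather than a lens effect; the lens only decides how much of the frame the face fills from that distance.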

  • Members 724 posts
    April 14, 2025, 5:47 p.m.

    I disagree with "it's never going to happen". I think it's going to be possible (though not necessarily happen, per se) sooner rather than later. In fact, let me take it a step further. You already have photos of your daughter, right? You could load a few past photos into the AI software and, using natural language, describe the scene, and the software would produce a photo so "realistic" that if you showed it to people, at any size, no one would know it wasn't an actual photo.

  • Members 2389 posts
    April 14, 2025, 8:28 p.m.

    I don't believe that. I have tried that with my spiders and it doesn't work, not even close. AI is not making anything it hasn't already been shown; it's actually not intelligent. You only get out of a computer what you put in. A computer is not smart at all; it just has fast access to information that has already been input.

  • Members 2389 posts
    April 14, 2025, 8:35 p.m.

    We have two eyes, while a camera has only one lens, so we are able to correct distortion at close distances.

  • Members 789 posts
    April 14, 2025, 8:43 p.m.

    That it does ... RIP Danno and "exposure" for example.

  • April 14, 2025, 8:51 p.m.

    I can't follow all these pages of discussion, but how is all this more or less technical talk related to photography?
    It's like photography itself - painting did not die out, it just transformed - realistic paintings were simply no longer in demand.
    So with computational photography, and especially with AI, classical photography will probably not die out either; it will just transform - more art, fewer snapshots.
    IMO.

  • Members 789 posts
    April 14, 2025, 9:14 p.m.

    On the other hand, if I am comparing two examples of "the kit", I prefer to compare raw renderings where the only "software generation" is the scaling and bit depth of the output.

  • Members 724 posts
    April 15, 2025, 7:24 a.m.

    AI is definitely "creating" content it has not been shown. That said, current AI models, once trained, cannot keep learning. Many believe that very soon there will be AI implementations that can learn and change. That will be a game changer.

    Putting it that way, I think what I'm trying to say is that things will come full circle and photography will become more like painting. That is, it will use real scenes as the basis for a digital painting. One can say that photography is already used in that manner (e.g. all those smartphone filters), and that's true. However, that type of photography is usually quite low resolution and/or obviously fake. I guess what I'm saying is that the dominant form of photography will be like the filters used today, except that people will have the ability to make the AI-enhanced photo appear absolutely realistic. So much so that no one will "trust" any photo (or video) anymore.

    Now that I step back and look at it from this perspective, I guess it's already happened. Fake news, fake photos, fake everything, everywhere and all the time. The near-future AI-enhanced software I'm talking about in this thread will simply put the nail in the coffin with regard to photography (and video). For example, these photos that I linked to earlier:

    www.fredmiranda.com/forum/topic/1889066/

    In the future, I'll see photos like that and, no matter what the person posting them says, just think, "Maybe real, maybe not. Who knows/cares? They're pretty -- that's all that matters." When Tiananmen Square happens here in the US and photos are published, they will be met with the same ambivalent disbelief. You'll know they could be faked, no matter how realistic they look, and you'll know the news and/or government have every reason to lie to you one way or another, so people will just believe what they want to believe, based on their personal prejudices.

    Now, how will that affect my photography? It won't make the slightest difference. Just more powerful editing tools at my disposal -- less work removing telephone poles/wires and/or trash on the ground from a photo, changing DOF after the fact, etc., etc., etc. If someone were to ask, "Did it really look like that?" I would simply respond, "Absolutely -- trust me, bro!" 😁

  • Members 2389 posts
    April 15, 2025, 7:35 a.m.

    The northern lights show images are fake. I went out to capture the lights once and you can only see them through your phone 😒
    I'm also concerned about the present, not the future. Even the latest Ps can't clone properly, so much for progress. The computer I'm on at the moment still can't remember my calibration; it's just turned blue 🤔

  • Members 262 posts
    April 15, 2025, 9:54 a.m.

    The simple version is that our foveal vision is narrow, so your "wide angle view" of the room you occupy is a composite of your eye scanning the scene, i.e. you look directly at objects, so there is no peripheral distortion, and your brain combines those views into a single "AI enhanced retinal image". With human faces it becomes more complicated, as tests indicate that recognising them is possibly genetic and not necessarily learnt behaviour.

    The question is still the same, though: why replicate the distortions current cameras introduce when you can remove them in the same way the eye does? (Rough sketch of the geometry at the end of this post.)

    G1-G2 storms, maybe, but we've had two G5s recently and they were both clearly and spectacularly visible to the naked eye (not nearly as saturated in colour as the photos indicated, though...).

    Absolutely. I don't think computational photography will follow the path of the larger-sensor cameras and their IQ upgrades. It will find a different direction, simply because why would you use new technology to duplicate what you can already do? I also think GB has a valid point about belief and authenticity, so there will be a demand for the more authentic OOC JPEG that current cameras deliver, even if only to keep the belief that there is still some integrity in some of the photographs produced.
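
    Here is the rough sketch I mentioned above (standard pinhole geometry, my own illustrative angles, nothing measured in this thread) of one of those distortions: in a rectilinear wide-angle projection, a small object off-axis is stretched radially relative to tangentially by about 1/cos(angle), the familiar "egg-shaped heads at the edge of a group shot" effect. Pointing the camera, or the eye, straight at each subject puts it on-axis, where the stretch factor is 1.0, which is the sense in which compositing many narrow, centred views removes the distortion.

    ```python
    # Rough sketch of edge "stretch" in a rectilinear (pinhole) wide-angle projection.
    # The angles are illustrative; ~50 degrees off-axis is roughly the corner of a phone ultra-wide.
    import math

    def rectilinear_stretch(field_angle_deg: float) -> float:
        """Ratio of radial to tangential magnification for a small object
        at the given off-axis angle in a rectilinear projection.

        Radial scale goes as 1/cos^2(theta), tangential as 1/cos(theta),
        so the anamorphic stretch is 1/cos(theta): 1.0 on-axis, growing
        toward the edge of the frame.
        """
        theta = math.radians(field_angle_deg)
        radial = 1.0 / math.cos(theta) ** 2
        tangential = 1.0 / math.cos(theta)
        return radial / tangential

    for angle in (0, 20, 35, 50):
        print(f"{angle:2d} deg off-axis: stretched {rectilinear_stretch(angle):.2f}x")

    # 0 deg  -> 1.00x (looking straight at the subject, as the scanning eye does)
    # 50 deg -> 1.56x (why faces near the edge of an ultra-wide shot look smeared)
    ```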