• Members 106 posts
    May 1, 2023, 3:10 p.m.

    Phones have been developing computational features for several years now, and they successfully took over the entry-level camera market despite their smaller sensors. While most such features center on addressing the low-light disadvantages of smaller sensors, there are significant others: portrait modes, background blurring, removing unwanted objects from images, fixing minor blur, etc.

    There have been comments on various forums for a while that cameras need to incorporate computational features from phones in order to stay relevant. I am not good at deciphering this technology, but I feel that it's happening. Two examples come to mind.

    My 'aha' moment was when I saw the DPR article explaining the GH6's DR Boost mode. This sounds very much like how phones address low-light situations with their small sensors.

    As I think about it more, Olympus's implementation of Live Composite mode probably qualifies as a computational photography feature.
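
    For context, Live Composite essentially keeps a running 'lighten' blend: the camera exposes continuously and only updates pixels that got brighter. A minimal sketch of that idea in Python with numpy (my own illustration of the concept, not OM's actual firmware logic):

    ```python
    import numpy as np

    def live_composite(frames):
        # Start from the base exposure, then for every new frame keep only
        # the pixels that got brighter (star trails, fireworks, light painting).
        composite = frames[0].astype(np.float32)
        for frame in frames[1:]:
            composite = np.maximum(composite, frame.astype(np.float32))
        return composite
    ```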

    With DSLRs mostly out of the limelight, and companies focusing their efforts on ML technologies, it appears that we are seeing computational features, just not to the extent seen in phones. AI in AF has been the hot topic in the past few years, but I never thought of it as a computational photography feature. Many of you know this technology and the terminology better and can add relevant facts.

    Your thoughts?

    Thanks.
    Satya

  • Members 243 posts
    May 1, 2023, 9:20 p.m.

    Nikon uses software to correct shortcomings deliberately designed into Z lenses as a way of making them smaller and lighter. I think we will be seeing a lot of this in the future.

  • Members 260 posts
    May 1, 2023, 9:28 p.m.

    Optics correction predates the Z system by a decade if not more... this is not a "computational feature", it is "primitive" math.

  • Members 56 posts
    May 2, 2023, 3:55 a.m.

    When mirrorless was invented, computational features started appearing. Phones have just made them more powerful, more automatic, and sometimes compulsory.

    E.g. my G1, the first MILC ever made, supports iDynamic, iResolution, etc., which were inherited from the earlier compacts. I suppose long-shutter NR through dark frame subtraction should be classified as a computational feature too.
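
    For anyone unfamiliar with it, dark frame subtraction is simple enough to sketch; a minimal Python/numpy illustration, assuming a 12-bit sensor and arrays standing in for the raw data:

    ```python
    import numpy as np

    def long_exposure_nr(exposure, dark_frame):
        # Subtract a dark frame (same duration, shutter closed) to remove
        # hot pixels and thermal noise that build up during long exposures.
        corrected = exposure.astype(np.float32) - dark_frame.astype(np.float32)
        # Clip back into the sensor's valid range (12-bit assumed here).
        return np.clip(corrected, 0, 4095).astype(np.uint16)
    ```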

    Later models added Shadow/Highlight curve adjustment, in-camera HDR, in-camera focus stacking, in-camera panorama, up to Live Composite, etc. Not to mention filter effects like soft focus, miniature/toy effect, mono color effect, etc. They just require the shooter to initiate them rather than being applied automatically.

    As a matter of fact, under Intelligent Auto mode, the camera will apply some of those effects when it sees fit.

    With improving processing power, it wouldn't surprise me to see more of these features invented.

  • Members 360 posts
    May 2, 2023, 5:58 a.m.

    Not all of it comes from software. Quad-pixel sensors with fast readout are a very important aspect of this on mobile phones. Cameras can't do that yet, because it is much harder and more expensive to scale onto FF, or even APS-C.

    But we are getting there:
    Lens correction
    Dark frame noise subtraction
    Dual ISO in a single frame
    Automatic HDR
    Automatic stacking
    Eye AF
    Scene recognition
    Super-res modes with IBIS
    Flicker compensation
    We can expect upright correction and scan modes too.
    Augmentation is slowly creeping in.

    So it is just around the corner; we will see much more this decade.

  • May 2, 2023, 6:43 a.m.

    Doesn't the OM System OM-1 have a quad-pixel sensor with fast readout?

  • Members 746 posts
    May 2, 2023, 6:45 a.m.

    Hi-res and handheld hi-res modes. Panasonic's implementation also takes it one step further by removing parts of the scene that have moved between frames.
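
    A very rough sketch of the general approach (my assumption of how such merging works, not Panasonic's actual algorithm): average the aligned burst, but fall back to a reference frame wherever the frames disagree too much, i.e. where something moved.

    ```python
    import numpy as np

    def merge_with_motion_rejection(frames, threshold=12.0):
        # Average a burst of aligned frames for noise/resolution benefits,
        # but keep the first (reference) frame wherever any frame deviates
        # strongly from it, so moving subjects don't smear or ghost.
        stack = np.stack([f.astype(np.float32) for f in frames])
        reference = stack[0]
        merged = stack.mean(axis=0)
        moved = np.abs(stack - reference).max(axis=0) > threshold
        merged[moved] = reference[moved]
        return merged
    ```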

  • Members 360 posts
    May 2, 2023, 8:31 p.m.

    No idea. But that is not a large sensor, so it doesn't solve the scaling problem.
    Canon just announced they purchased a new lithography machine, yet it is seemingly the same old 500nm "POS". This stuff needs to go down to 130-90nm (and I know it can create much smaller features than that specification implies).

  • Members 106 posts
    May 2, 2023, 9:20 p.m.

    I wonder what qualifies as a primitive vs. a computational feature. To me that's a key point in understanding how long 'computational features' have existed in cameras.

    I am thinking of your idea of 'primitive' as static vs. dynamic. For example, every lens has aperture settings. Taking full stops from F/1.0 to F/22, that's ten stops. Similarly, zoom lenses have a range of focal lengths; taking 5mm increments, a lens like a 24-70mm has about ten steps. So we end up with a hundred aperture/focal-length combinations. A hundred sets of distortion correction settings could be put into the lens or camera body as a lookup list, with hardly any math at all: if it is combination 36, apply correction 36 from the list. Scene changes may not matter for this correction.
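
    To make the lookup idea concrete, here is a minimal Python sketch; the table values are made up purely for illustration:

    ```python
    import numpy as np

    # Ten full stops and ten focal lengths -> one hundred combinations.
    APERTURES = [1.0, 1.4, 2.0, 2.8, 4.0, 5.6, 8.0, 11.0, 16.0, 22.0]
    FOCALS_MM = [24, 29, 34, 39, 44, 49, 54, 59, 64, 70]

    # Placeholder table: in a real lens each entry would hold
    # factory-measured distortion corrections for that combination.
    CORRECTIONS = np.arange(100).reshape(10, 10)

    def lookup_correction(aperture, focal_mm):
        # Snap to the nearest stored aperture and focal length, then index
        # the table. No scene analysis, no per-shot math: pure lookup.
        a = min(range(len(APERTURES)), key=lambda i: abs(APERTURES[i] - aperture))
        f = min(range(len(FOCALS_MM)), key=lambda i: abs(FOCALS_MM[i] - focal_mm))
        return CORRECTIONS[a, f]  # "if it is combination 36, apply correction 36"
    ```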

    On the other hand, anything that is dynamic based on the scene, subject, or shooting conditions probably needs AI algorithms and cannot be put into static lookups: changing lighting, noise that grows with long exposure, subjects that look alike, subjects whose apparent size varies with distance, subject movement across multiple shots (@Ghundred's point above). Is this a good definition of 'computational photography'?

    If that is a good definition, then the recent AI-based AF algorithms should also qualify as 'computational photography features'.

    Overall, I am convinced that the concept has come to cameras, contrary to the common complaint that only phones use it. I am just not sure what qualifies and how long these features have been around. One way I would generalize it: it started with mirrorless cameras (as @AlbertM43user stated above), is moving slowly compared to phones, and will probably accelerate in the next few years.

  • Members 260 posts
    May 2, 2023, 9:30 p.m.

    Stacked sensors with quad pixels are in dSLM cameras already; the OM-1, for example...

  • Members 260 posts
    May 2, 2023, 9:31 p.m.

    The size does not matter; what matters is that it is in a dSLM camera...

  • Members 56 posts
    May 3, 2023, 2:25 a.m.

    Permit me to say that the existing photo community was once dominated by DSLRs, and it largely still is even now, despite more people moving to mirrorless since Nikon and Canon joined the party.

    As we know, the majority of DSLR features are actually carried over from SLRs, which were largely analog: the same good old style of operation, with the image projected through the lens, part of the light directed to the OVF for composition, and the shot captured by a digital sensor instead of film.

    That is the biggest difference between a DSLR and a MILC, because live-view-based operation is itself a sort of computational operation (using the constant data feed from the sensor for AF, metering, previewing filters/effects, etc.), and computational features can be inserted anywhere in that pipeline. A phone is really a sort of mirrorless fixed-lens camera.

    It may still be early days for the big guns on their own MILCs, so their computational features and applications might still be limited. I expect they will catch up and might get ahead of the game very quickly. Once the majority of people use MILCs, the complaints about cameras lacking computational features should quiet down.

    My 2 cents.

  • Members 56 posts
    May 3, 2023, 3:41 a.m.

    Dear friend, please allow me to say a little more on this topic.

    Those computational features are indeed not new, and might not be useful to many experienced photographers.

    TBH they are just tricks long used by the community, simply made automatic by programmers for less experienced users. Sadly, those users are the majority of the market.

    Similar to preset scene modes, those much-applauded computational features were well known to the craft long before phone cameras. E.g. I use stacking in post-processing to eliminate noise in ISO 25600 shots (which lets me use a faster shutter for stability in low light) or to eliminate moving objects from the frame, focus bracketing for focus stacking, exposure bracketing for HDR, multiple shots for panorama stitching... Not to mention those scene presets, which are just various combinations of parameters, JPEG settings, exposure bias, etc. I have used these tricks for a very long time, very often with better results; e.g. AI background blur or facial retouching will hardly be better than using the right lens or doing it manually. (A sketch of the stacking idea follows below.)
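
    For anyone curious, the stacking trick is just per-pixel averaging (or a median, to drop moving objects); a minimal sketch in Python/numpy, assuming the frames are already aligned:

    ```python
    import numpy as np

    def stack_mean(frames):
        # Averaging N aligned frames leaves the signal intact while
        # random noise drops by roughly a factor of sqrt(N).
        return np.mean([f.astype(np.float32) for f in frames], axis=0)

    def stack_median(frames):
        # A per-pixel median instead: transient objects (people walking
        # through the scene) appear in only a few frames, so they vanish.
        return np.median([f.astype(np.float32) for f in frames], axis=0)
    ```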

    When shooting RAW, those computational effects are generally not applied either.

    So those computational features are only for those who are new to the hobby or who want results without the learning curve. For other users here or on other photo-specific forums, lacking them would hardly be a problem.

    To me: why not have them? But I would rarely use them.

  • Members 140 posts
    May 3, 2023, 6 p.m.

    I’m certain that all camera makers are considering this with great care. Cameras have had computational photography features for a long time. I would argue it began with Nikon’s Matrix Metering and continued with predictive, subject-tracking AF, and now HDR features, eye tracking, and so on.

    ILC cameras, I think, will always have advantages with their larger sensors, greater light gathering, and ability to use much longer focal lengths, so smartphone-like snapshot features would be useful. It’s just a lot of R&D.