Phones have been developing computational features for several years now and have successfully taken over the entry-level camera market despite their smaller sensors. While most of these features have centered on addressing the disadvantages of small sensors in low light, there are other significant ones: portrait modes, background blurring, removing unwanted objects from images, fixing minor blur, and so on.
For a while now there have been comments on various forums that cameras need to incorporate computational features from phones in order to stay relevant. I am not good at deciphering this technology, but I feel it is already happening. Two examples come to mind.
My 'aha' moment came when I saw the DPR article explaining the GH6's DR Boost mode. It sounds very much like how phones address low-light situations with their small sensors.
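As I understand it (this is my rough reading of the DPR coverage, not Panasonic's documented pipeline), DR Boost merges two readouts of the same exposure taken at different analog gains: the high-gain readout has clean shadows but clips early, and the low-gain readout keeps the highlights. A toy sketch of that kind of merge, with made-up gain and bit-depth numbers:

```python
import numpy as np

# Illustrative values only -- not the GH6's actual figures.
GAIN_RATIO = 4.0   # high gain / low gain (assumed)
CLIP = 4095.0      # 12-bit raw full scale (assumed)

def merge_dual_gain(low_gain: np.ndarray, high_gain: np.ndarray) -> np.ndarray:
    """Merge two readouts of the SAME exposure made at different analog gains.

    The high-gain readout has cleaner shadows but clips in the highlights;
    the low-gain readout keeps the highlights. Scaling the low-gain data up
    to match and blending near the clip point yields one frame with
    extended dynamic range.
    """
    low = low_gain.astype(np.float64) * GAIN_RATIO  # bring to the same scale
    high = high_gain.astype(np.float64)

    # Trust the high-gain readout until it approaches clipping, then
    # fade over to the noisier but unclipped low-gain readout.
    w = np.clip((CLIP - high) / (0.1 * CLIP), 0.0, 1.0)
    return w * high + (1.0 - w) * low
```

Phones do something similar by stacking multiple frames rather than multiple gains, which is why the two approaches feel related to me.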
Thinking about it more, Olympus's implementation of Live Composite mode probably qualifies as a computational photography feature as well.
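In spirit it is a simple per-pixel operation, if my understanding is right: each new exposure only updates pixels that got brighter, which is why star trails and light painting build up while the static parts of the scene stay put. A toy version (my sketch, not Olympus's actual firmware):

```python
import numpy as np

def live_composite(frames):
    """Lighten-blend a sequence of equally exposed frames.

    A new frame can only brighten a pixel, never darken it, so the
    static scene holds steady while moving lights (stars, fireworks,
    light painting) accumulate as trails.
    """
    frames = iter(frames)
    composite = next(frames).astype(np.float64)
    for frame in frames:
        composite = np.maximum(composite, frame.astype(np.float64))
    return composite
```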
With DSLRs mostly out of the limelight and companies focusing their efforts on ML technologies, it appears we are seeing computational features, just not to the extent seen in phones. AI-based autofocus has been the hot topic of the past few years, but I never thought of it as a computational photography feature. Many of you know this technology and the terminology better than I do and can add relevant facts.
Your thoughts?
Thanks.
Satya