Equivalence is easy to explain: "ignore it" or shoot FF 😁 It's been so liberating moving to FF 5 years ago from m43. I can't believe how much people crap on about it on the m43 and MF forums; to be honest, the MF forums are worse than m43.
It's interesting that opponents of Equivalence call it 'equivalency'. 'Equivalence' and 'equivalency' are synonyms, but for some reason those who get Equivalence always say 'equivalence', whilst those who don't go with 'equivalency'.
Anyhow, computational photography doesn't yet make up for any of the things that a bigger aperture gets you. Maybe it will in time.
There isn't one. Portrait mode works by creating a depth map (using LiDAR in the iPhone 15) and using it to apply graded blur that (kind of) simulates out-of-focus blur. So it can simulate one aspect of having a larger aperture. The other aspects are shot noise (at a given shutter speed) and diffraction blur, and both result in a lack of actual information in the raw file. 'AI computational photography' works by making an educated guess at what that information might have been had it been there. It can be convincing, but it's not the ground truth.
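To make the depth-map point concrete, here's a minimal sketch of depth-graded blur in Python with numpy/scipy. The function name, the linear blur ramp, and the layer blending are all my own illustration, not Apple's actual pipeline:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def graded_blur(img, depth, focus_dist, max_sigma=8.0, n_layers=8):
    """Blend blur layers, weighted by each pixel's distance from the focal plane."""
    # Desired blur radius grows with |depth - focus_dist|, normalised to max_sigma.
    dist = np.abs(depth - focus_dist)
    sigma_map = max_sigma * dist / (dist.max() + 1e-8)
    step = max_sigma / (n_layers - 1)
    out = np.zeros_like(img, dtype=np.float64)
    total_w = np.zeros(depth.shape, dtype=np.float64)
    for sigma in np.linspace(0.0, max_sigma, n_layers):
        # Blur the whole frame at this layer's strength (sigma 0 = sharp original).
        layer = (img.astype(np.float64) if sigma == 0
                 else gaussian_filter(img.astype(np.float64), (sigma, sigma, 0)))
        # Tent weight: each pixel favours the layer closest to its desired sigma.
        w = np.maximum(0.0, 1.0 - np.abs(sigma_map - sigma) / step)
        out += layer * w[..., None]
        total_w += w
    return out / np.maximum(total_w[..., None], 1e-8)
```

A single global blur plus a mask would leak sharp detail across depth boundaries, which is why the blur is graded by depth rather than applied as an on/off mask.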
I must confess that I did think about it one day, considering how much harder I could put my foot down on the ISO accelerator compared with what I had done in a similar situation with m43. The famous 2 stops of noise.
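For the record, the arithmetic behind the 'famous 2 stops' is just the sensor area ratio. A quick sketch, assuming equal per-area sensor efficiency (which real sensors only approximate):

```python
import math

crop = 2.0               # m43 crop factor relative to full frame
area_ratio = crop ** 2   # FF gathers 4x the total light at the same exposure
stops = math.log2(area_ratio)
print(f"{area_ratio:.0f}x total light = {stops:.0f} stops")  # 4x = 2 stops

# So FF at ISO 3200 shows roughly the shot noise of m43 at ISO 800:
# the 4x sensor area makes up for the 2 stops less exposure per area.
```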
Or maybe I still haven't got it.
But then, if I drop a kilogram of steel on your head from 2 meters and then a kilogram of feathers, the steel will hurt more, as air resistance will slow the falling feathers down, at least if they are loose and not in a bag!
That is the basic misunderstanding that causes people not to understand equivalence. It comes from the all-too-common teaching that exposure is entirely about lightness. Exposure is light per unit area; at the same exposure, a larger sensor collects more total light, and total light is what governs shot noise.
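A toy calculation makes that concrete. The photon figure below is invented and the sensor dimensions are nominal; only the ratios matter:

```python
import math

photons_per_mm2 = 10_000       # same exposure on both sensors (made-up figure)
area_m43 = 17.3 * 13.0         # m43 sensor area, mm^2
area_ff = 36.0 * 24.0          # full-frame sensor area, mm^2

for name, area in [("m43", area_m43), ("FF", area_ff)]:
    total = photons_per_mm2 * area
    snr = math.sqrt(total)     # shot noise is Poisson: SNR = sqrt(N)
    print(f"{name}: {total:.3e} photons, SNR {snr:.0f}")

# FF collects ~3.8x the photons, i.e. close to 2 stops more total light,
# so whole-image shot-noise SNR is ~sqrt(3.8) = ~2x better at equal exposure.
```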
It depends. In articles about computational photography (especially in the phone context), internal image stacking is a big part of this 'computational' thing. Users usually don't know about it, but in many situations the phone camera takes multiple images and either combines them or selects the best of them.
Human-'induced' stacking is not computational in itself (automatic HDR modes being a kind of exception), but any processing afterwards can be described as computational, be it based on a single image or a stack of them :)
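To put a number on why phones stack silently: averaging N frames cuts random shot noise by the square root of N. A toy demonstration with synthetic Poisson noise (assumes numpy; the scene and counts are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
scene = np.full((100, 100), 50.0)          # "true" photon count per pixel
frames = rng.poisson(scene, size=(8, 100, 100)).astype(float)

single = frames[0]
stacked = frames.mean(axis=0)              # what a phone does behind your back

print("single-frame noise:", single.std())   # ~sqrt(50) = ~7.1
print("8-frame stack noise:", stacked.std()) # ~7.1 / sqrt(8) = ~2.5
```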
The operation is the same. You can even consider it to be computation, just using a different type of computer (one where numbers are represented by optical density). I suppose I just defeated my own argument there!
I don't call stacking with Zerene computational, as you're not stacking the images on top of each other; it's a stitch of the sharpest detail from each image.
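That selection step can itself be written as a small computation, which is maybe the point. Here's a naive sketch of per-pixel sharpest-frame selection; this is my own illustration, not Zerene's actual PMax/DMap algorithms, and `frames` is assumed to be an (N, H, W) stack of aligned grayscale images:

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def focus_stack(frames):
    # Local sharpness: smoothed squared Laplacian response of each frame.
    sharpness = np.stack(
        [uniform_filter(laplace(f) ** 2, size=9) for f in frames]
    )
    best = sharpness.argmax(axis=0)        # index of sharpest frame per pixel
    rows, cols = np.indices(best.shape)
    return frames[best, rows, cols]        # stitch the winners together
```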