I agree with part of what is in the article. The question is how much more computing power does a camera need and for what price?
petapixel.com/2023/04/05/what-canon-nikon-and-sony-need-to-learn-from-apple-and-google-before-its-too-late/
No thanks! I don't like shooting with camera phones or tablets; I want to be the one taking the photo, not the computer in my device.
I'm sure there are computational options that would appeal to me in some situations, but I want to stay in charge of the camera (and I definitely don't want it connecting to the net by itself). A slightly more involved upload routine ensures I don't just upload all my junk!
The author seems unfamiliar with how modern cameras work, which dilutes his point. He repeatedly calls for cameras to correct for chromatic aberration (or even to connect to the internet to download lens corrections), when cameras already do that quite happily. The lens sends its correction parameters to the body over the electronic mount interface for all manner of corrections, and the system works very well - it's rare for a new lens to need a body firmware update to work as it should.
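To make that concrete, here's a toy sketch of the kind of correction those parameters drive. The coefficient names and the simple Brown-Conrady radial model are my assumptions for illustration - the actual mount protocols (Canon RF, Nikon Z, Sony E) are proprietary - but the principle is the same: the lens reports a few numbers, and the body remaps pixels accordingly.

```python
# Toy sketch of in-camera radial distortion correction. The lens profile
# fields and the Brown-Conrady model are illustrative assumptions, not any
# manufacturer's actual firmware.
import numpy as np

def correct_radial_distortion(image: np.ndarray, k1: float, k2: float) -> np.ndarray:
    """Approximately undo radial distortion using coefficients (k1, k2)
    that the lens reported over its electronic mount interface."""
    h, w = image.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    # Pixel coordinates normalised relative to the optical centre.
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    xn, yn = (xs - cx) / cx, (ys - cy) / cy
    r2 = xn**2 + yn**2
    # Brown-Conrady radial term: the corrected image samples the distorted
    # frame at a radially scaled position (nearest-neighbour for brevity).
    scale = 1.0 + k1 * r2 + k2 * r2**2
    src_x = np.clip(xn * scale * cx + cx, 0, w - 1).astype(int)
    src_y = np.clip(yn * scale * cy + cy, 0, h - 1).astype(int)
    return image[src_y, src_x]

# Hypothetical coefficients, as a lens might report them to the body:
lens_profile = {"k1": -0.12, "k2": 0.03}
raw = np.random.rand(480, 640, 3)  # stand-in for a demosaiced frame
corrected = correct_radial_distortion(raw, **lens_profile)
```

Real pipelines interpolate rather than snap to the nearest pixel and handle chromatic aberration by applying slightly different scales per colour channel, but the data flow - lens reports, body remaps - is exactly why body firmware updates are rarely needed.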
As for his suggestion of multi-sensor cameras, phone manufacturers have already given up on that approach for extra dynamic range. Nokia tried it once, and the only other related example I can think of is Google using AI-upscaled, denoised faces from the ultrawide camera to replace faces blurred by motion on the main camera - a narrow and AI-heavy use case that most photographers (who want to actually take the picture themselves) won't appreciate. It would also be nigh-unworkable for ILCs, as shown by how few interchangeable-lens TLRs there are.
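For what it's worth, the multi-sensor dynamic range idea boils down to merging differently exposed frames of the same scene, roughly like the sketch below. The Gaussian "well-exposedness" weight is borrowed from Mertens-style exposure fusion; aligning the two sensors' views (the genuinely hard part, and a big reason phones moved on) is hand-waved away here.

```python
# Toy sketch of merging two frames of different exposure for dynamic range.
# Sensor geometry, alignment, and the weighting function are simplified
# assumptions, not any vendor's pipeline.
import numpy as np

def fuse_exposures(under: np.ndarray, over: np.ndarray) -> np.ndarray:
    """Blend an underexposed and an overexposed frame (floats in [0, 1]),
    favouring whichever frame is better exposed at each pixel."""
    def well_exposedness(img: np.ndarray) -> np.ndarray:
        # Gaussian weight peaking at mid-grey: near-black and near-white
        # pixels contribute little to the blend.
        return np.exp(-((img - 0.5) ** 2) / (2 * 0.2**2))

    w_under = well_exposedness(under)
    w_over = well_exposedness(over)
    total = w_under + w_over + 1e-8  # avoid division by zero
    return (under * w_under + over * w_over) / total

# Stand-ins for frames from two sensors at different exposures:
dark = np.clip(np.random.rand(480, 640, 3) * 0.4, 0, 1)
bright = np.clip(dark * 2.5, 0, 1)
fused = fuse_exposures(dark, bright)
```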