I'm a longtime Adobe subscriber. A factor Tom didn't mention that was central to Adobe's move from selling products to leasing a service is the cost to operate the business.
Every software app has bugs. Every software app that's a product gets updated, from time to time, to retain existing customers and attract new ones. When Adobe was selling their apps, a significant portion of their cost of operation was servicing customers who chose not to install bug fixes or who made some change to their computer's configuration that broke the app.
Adobe also lost customers through sales of software to people who never upgraded. It's an odd situation because, in a weird way, they still had the customer. That photographer processed their images in an Adobe product. But Adobe never saw another sale to that customer.
The subscription model ensures consistent and predictable revenue. As important - arguably more so - the subscription model lowers the cost of operating the business. Adobe controls how the software is installed and automatically updates the software with bug fixes. When a subscriber migrates to a new computer, Adobe manages the transition: decommissioning the product on the old system and installing the app on the new one. Some apps live in the cloud, where they're 100% safe from the fumbling and bumbling of the customer.
I've got to believe Adobe's cost per customer to provide support in responding to questions about the software not working properly has significantly decreased as a result of adopting the subscription model.
From a customer retention standpoint, that too has improved. Customers get used to paying the monthly subscription fee. They budget for it and, as a result, stop thinking about it. When Adobe drops a major software update - usually one or two each year - the customer automatically has access to whatever new tools are included. If the customer finds one they like, they're reminded why they're not shopping around to "upgrade" image processing apps.
The biggest vulnerability of photography - and this is something no camera manufacturer or designer of photo processing and editing apps can change - is that photography is an image-making process. There are two kinds of photographers: the person who does photography as their preferred process for making images and the person who makes images and uses the photographic process to do so. The difference is subtle but important, and the second type of photographer is where the medium is most vulnerable.
There are far more people on this planet who enjoy making images than who enjoy doing photography. We've seen this reflected in the rise of the smartphone over the last 15 years as the preferred image-making platform for the vast majority of people. They use the smartphone camera as part of their image-making workflow but they don't self-identify as photographers. They use apps to transform their photos into images...avatars, dreamed-of vacation spots, humor and other topical commentary.
The resulting image isn't a photo of an actual person, place, thing, or event. It's a different kind of image: an illustration, photo art, digital watercolor or some other thing.
Please note that this observation isn't a criticism. I'm not suggesting anything untoward or unethical occurs. It's just a different approach to image-making than a dedicated photographer uses. Neither is right or wrong; they're just different processes.
AI has emerged as an image-making process that's not photography but is capable of making images that look like photographs. It also doesn't require a skillset beyond inputting a simple set of parameters or requests for the type of image one wants made.
It's the perfect tool for the person who wants or needs (for work or another commitment) a realistic image but isn't a photographer and doesn't have the budget to pay for or lease use of a photo.
The need is for an image of a certain type or having a certain look. Photography used to be the sole process capable of addressing that need. That's no longer the case. AI is a competitor in the arena. AI will overtake photography as the dominant image-making process. That is virtually guaranteed.
As more people abandon photography as a process used to make realistic-looking images, camera makers and photo processing app companies like Adobe will have to adapt. The camera makers will adapt to serving a much smaller customer base: those who love doing photography to make images. The software companies will adapt by tailoring their products to support image-making. Photo processing may be among the available tools but won't be the core or most-used tool of the app.
In this context, Adobe is supremely positioned to adapt to this emerging reality. Photoshop has always been an app that's useful to photographers but isn't strictly a photo processing app. It's an image-making app. Yes, you can start with a photo and end with a photo. But that's just a small subset of what Photoshop can do. It is first and foremost an image-making app. Several Adobe products fall into this realm.
The real and immediate challenge for users of AI tools is the ethics of how the images are used. Generations ago, when Walter Cronkite signed off each newscast with the catchphrase, "That's the way it is," CBS was selling the integrity of one man as the reason you, the viewer, tuned in to watch. In the not too distant past, Dos Equis sold lots of beer to customers who identified with a fictional character known as "the most interesting man in the world." The actor was real but the character was fictional. And everyone knew he was fictional...at least, they should have known.
In this new AI world in which we live, where will the line be drawn defining the acceptable use of realistic imagery? Personally, if an AI image is presented as something that's real - a real person, place, thing, or event - when, in fact, it's manufactured from whole cloth, I take issue with that. It's an act of deception; it's unethical and should be treated as such.
But if an AI image is presented as what it is, something fictional (the most interesting man in the world), I don't see an inherent problem with that. Nobody's being lied to. We're adults and can reasonably be expected to exercise reason in our consumption of persuasive messaging.
But that element of how the consumer responds to persuasion is a factor we must consider. A child, for instance, simply doesn't have the mental or emotional maturity to tell the difference between reality and fiction. They're also more impressionable and more easily manipulated. Strict limitations are needed on the use of realistic-looking AI imagery in children's content.
There's also the very real issue of how adults respond to persuasion, especially to intentional misinformation. Millions of adults believe lies and myths about the world, human history, and real people, places, things and events. What happens when a significant segment of the population becomes incapable of discerning reality from an AI-generated fantasy? How do we respond when half the population in developed countries sits around all day wearing VR goggles and living in a fantasy?
They're adults and, as such, have a right to self-determination. But they also have a responsibility to be active members of society in the real world. How do we balance the conflicting rights of the individual vs. the needs of society?
If only everybody realized they could be living more full, richer lives being out in the world doing photography 😉