When I started photography again, this time in digital after a long hiatus from film, I wanted to understand the medium as well as I did film (not that I understood film that well, in my stupidity-laced youth... :D ). So, I ended up writing my own raw processor, coding each operation either by working out the algorithm myself or by pulling code from others (with the proper license, of course). In doing that, I learned raw processing soup-to-nuts, so there are no surprises in my renditions.
Sounds tedious, but I wrapped it in an interface that allows both specific control of the toolchain and convenience defaults. I also decided early on to embed the toolchain that made a rendition in the rendition file itself (JPEG, mostly). That has proved to be a conceptually significant factor in my workflow, where I have only two types of files: 1) raw data files from the scene, and 2) various renditions of that raw data for different purposes. I sometimes produce intermediate renditions to take to other programs for work my raw processor doesn't support, but those are still just renditions.
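To show what "embed the toolchain in the rendition" can mean mechanically, here's a minimal sketch (not my actual code, and the operation names and parameters are made up): it serializes a toolchain description to JSON and tucks it into a JPEG as a COM (comment) segment right after the SOI marker, where it can be read back later. A real implementation might prefer an EXIF or XMP field instead.

```python
import json
import struct

def embed_toolchain(jpeg_bytes: bytes, toolchain: list) -> bytes:
    """Insert the toolchain, as JSON, into a JPEG COM (0xFFFE) segment."""
    assert jpeg_bytes[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    payload = json.dumps(toolchain).encode("utf-8")
    # the 2-byte length field counts itself plus the payload
    segment = b"\xff\xfe" + struct.pack(">H", len(payload) + 2) + payload
    return jpeg_bytes[:2] + segment + jpeg_bytes[2:]

def read_toolchain(jpeg_bytes: bytes) -> list:
    """Scan segments after SOI and parse the first COM segment as JSON."""
    i = 2
    while i + 4 <= len(jpeg_bytes) and jpeg_bytes[i] == 0xFF:
        marker = jpeg_bytes[i + 1]
        length = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])[0]
        if marker == 0xFE:  # COM segment
            return json.loads(jpeg_bytes[i + 4:i + 2 + length])
        i += 2 + length
    raise ValueError("no toolchain comment found")

# A toy toolchain: each entry is an operation name plus its parameters.
chain = [{"op": "subtract", "value": 256},
         {"op": "whitebalance", "mult": [2.1, 1.0, 1.6]},
         {"op": "tone", "curve": "filmic"}]

fake_jpeg = b"\xff\xd8\xff\xd9"  # SOI + EOI, a degenerate stand-in image
tagged = embed_toolchain(fake_jpeg, chain)
recovered = read_toolchain(tagged)
```

The nice property is exactly the one described above: any rendition carries a complete record of how it was made, so a raw file plus its renditions is all you ever need to keep.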
As I've moved into other aspects of photography such as camera profiling, it's become important to produce renditions that skip default processing like the tone curve or even white balance (ha, did you know you can white balance an image just by making a target-shot camera profile from a shot that isn't white balanced? Rather tedious way to do it, but it works quite nicely...). Even to make a standard target-shot camera profile you need a "scene-linear" rendition to feed the profile software, and with my software I can reliably apply a toolchain built specifically for that objective.
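A "scene-linear" toolchain can be sketched as just two steps: subtract the sensor's black level and scale by its saturation point, leaving the values linear in scene light with no white balance and no tone curve applied. This is a hypothetical illustration; the black and white levels below are assumed, not from any particular camera.

```python
BLACK = 256    # assumed sensor black level
WHITE = 16383  # assumed saturation point for a 14-bit raw file

def scene_linear(raw_values):
    """Map raw counts to [0.0, 1.0] with a purely linear transform."""
    scale = WHITE - BLACK
    return [max(0.0, min(1.0, (v - BLACK) / scale)) for v in raw_values]

pixels = [256, 8319, 16383]  # black level, mid exposure, clipped
lin = scene_linear(pixels)
```

Because the mapping is linear, twice the scene light gives twice the encoded value, which is exactly the relationship profile software needs to see in a target shot.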
So, to the intent of the thread, I really didn't "switch". I just looked at the others at the beginning, and decided to go my own way...