Sorry, I did not intend to get side-tracked into a discussion of computer architecture.
Again, thank you for writing about techniques that are sometimes useful. I will always eagerly read what you publish.
Is your code snippet from libraw? What does the apparent assignment do and does it trigger a call to a class method?
I have no idea whether you already know what I am about to write. Your code snippet makes me suspect that you might not, or that you think I do not. Sorry in advance if I am not good at communicating and identifying areas of shared knowledge.
I meant to say that calculating exponentials and fitting curves is faster using the standard library than implementing the same thing with scaled integers. I have done both, including on 8-bit CPUs. Of course, those standard library functions can be used to build a lookup table. I have not written any code for a modern analog-to-digital converter. There was a time when even sqrt() was faster hand-coded. I once wrote a binary-coded-decimal large-integer arithmetic library in x86 assembler, but today I would use a library written by others.
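For example, here is a minimal sketch of what I mean: use the standard library once to build the table, then apply it with integer indexing only. The 2.2 gamma and 16-bit depth are just illustrative assumptions on my part, not anything from your article:

```cpp
// Minimal sketch: build a gamma table once with the standard library,
// then apply it with integer indexing only. The 2.2 gamma and 16-bit
// depth are illustrative assumptions.
#include <cmath>
#include <cstdint>
#include <vector>

std::vector<std::uint16_t> make_gamma_lut(double gamma = 2.2, int bits = 16) {
    const int n = 1 << bits;                                // 65536 entries
    std::vector<std::uint16_t> lut(n);
    for (int i = 0; i < n; ++i) {
        double linear  = static_cast<double>(i) / (n - 1);  // 0.0 .. 1.0
        double encoded = std::pow(linear, 1.0 / gamma);     // gamma encode
        lut[i] = static_cast<std::uint16_t>(encoded * (n - 1) + 0.5);
    }
    return lut;
}

// Applying it is then a pure integer operation per pixel:
//   out[p] = lut[in[p]];
```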
Whether a lookup table is faster than a calculation depends on the size and speed of main memory, the third-level (L3) data cache, the instruction cache, the number of CPUs, and whether the cache is shared among them. Thirty years ago, a lookup table would have been faster on any system I had access to. Even in 2001, on a machine with eight 64-bit CPUs and 32 GB of RAM, it was sometimes faster to calculate than to use a lookup table, because each CPU had a separate cache that was kept coherent by invalidation. (Both HP and IBM sold multi-CPU machines at the time, but I was using a Sun. In the 1990s I saw an HP prototype in a lab in Massachusetts while I was working on firmware for a network router.) Even on a vector processor such as a GPU, the cost of moving memory to and from the device must be considered.
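If you really want to know which wins on a particular machine, the only reliable answer is to time it there. Here is a minimal sketch of such a timing harness (buffer size, bit depth, and gamma are illustrative assumptions):

```cpp
// Minimal timing sketch: LUT vs. direct computation on one machine.
// Compile with optimization (e.g. -O2); a real benchmark would also
// need to defeat dead-code elimination and repeat the runs.
#include <chrono>
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
    const int n = 1 << 16;                          // 16-bit sample range
    std::vector<std::uint16_t> lut(n);
    for (int i = 0; i < n; ++i)
        lut[i] = static_cast<std::uint16_t>(
            std::pow(i / double(n - 1), 1 / 2.2) * (n - 1) + 0.5);

    std::vector<std::uint16_t> src(1 << 24), dst(1 << 24);  // 16M samples
    for (std::size_t i = 0; i < src.size(); ++i) src[i] = i & 0xFFFF;

    auto t0 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < src.size(); ++i)    // table lookup per sample
        dst[i] = lut[src[i]];
    auto t1 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < src.size(); ++i)    // recompute per sample
        dst[i] = static_cast<std::uint16_t>(
            std::pow(src[i] / double(n - 1), 1 / 2.2) * (n - 1) + 0.5);
    auto t2 = std::chrono::steady_clock::now();

    using ms = std::chrono::milliseconds;
    std::printf("LUT: %lld ms, pow(): %lld ms, check: %u\n",
        (long long)std::chrono::duration_cast<ms>(t1 - t0).count(),
        (long long)std::chrono::duration_cast<ms>(t2 - t1).count(),
        (unsigned)dst[12345]);                      // keep dst observable
}
```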
Most modern CPUs can do multiple floating-point operations in the time of a single main-memory access. Some even include an on-chip vector unit that does floating-point arithmetic in parallel, so keeping the data in registers matters more than avoiding calculation.
LLVM/Clang has good optimization diagnostics if you really want to know what the compiler actually did with your code.
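For instance, Clang's optimization-remark flags will report whether a loop was auto-vectorized (the file name here is a hypothetical example):

```cpp
// A loop simple enough for the auto-vectorizer. Compile with, e.g.,
//   clang++ -O3 -Rpass=loop-vectorize -Rpass-missed=loop-vectorize scale.cpp
// and Clang reports whether (and at what width) the loop was vectorized.
// (scale.cpp is a hypothetical file name.)
#include <cstddef>

void scale(float* out, const float* in, std::size_t n, float k) {
    for (std::size_t i = 0; i < n; ++i)
        out[i] = in[i] * k;        // candidate for SIMD
}
```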
I still do not know why I might want to use a gamma curve to transform linear integer data using only integers and no floating point, but much of what I know might be out of date. I stopped reading ACM SIGGRAPH and IEEE Computer Graphics in about 1999.
There is nothing there that suggests those are your photos, so what point are you attempting to make?
Are you not aware that photos of the model who appears in some of the photos you post are also on the dark web? So who knows where any of the photos you post come from.
🤣😂🤣😎 You want to see a 2 mm FOV? I shoot extreme macro with live subjects. Wrap your eyes around perfection 😁 a mosquito head. Wanna see a live maggot that just arrived, with a translucent capsule and a baby fly inside? 😎🥶
The in-camera histogram is. Enable the live RGB histogram and start changing white balance. You'll see how it affects the histogram in real time in the field...
It's been over a month now, and you still don't see the difference between the "normal" exposure for an ISO setting and how much exposure you can actually give at that setting? Most current digital cameras, especially ones with dual conversion gain, can take exposures suitable for half the base ISO setting, at a minimum, especially if the scene is evenly lit and there are no lights or specular reflections in it. I've done a "sunny f/16" type exposure for EI 25 while using the base ISO 100 setting (or extended ISO 50) on my Canon R5 on a bright day, with very little raw clipping (glossy bright white paint). If you want to crop hard or display very large, that can make it much easier to sharpen details without over-sharpening noise.
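To spell out the arithmetic behind that example (ignoring the usual rounding of marked shutter speeds):

\[
\text{sunny 16 at EI 100: } f/16,\ \tfrac{1}{100}\,\mathrm{s};
\qquad
\text{sunny 16 at EI 25: } f/16,\ \tfrac{1}{25}\,\mathrm{s};
\qquad
\log_2\tfrac{100}{25} = 2\ \text{stops}.
\]

So exposing for EI 25 while the camera is set to ISO 100 gives two stops more light than the meter's "normal" exposure for that setting.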
At higher exposure indexes, too, it can help to use a higher ISO setting to get your exposure to the right of the raw histogram, but that won't improve photon noise, only read noise (which can still be helpful for some purposes).
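In the usual simplified sensor model (ignoring dark current and pixel response non-uniformity), with \(S\) the photoelectrons collected and \(r\) the read noise in electrons:

\[
\mathrm{SNR} = \frac{S}{\sqrt{S + r^{2}}}
\]

\(S\) is set by the exposure itself (aperture, shutter time, scene light), not by the ISO dial; on many sensors \(r\) drops as ISO rises, and that read-noise term is the only one a higher ISO setting can improve.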
It gets the raw and JPEG histograms to have similar (though not identical; highly saturated colors may vary) ratios of red, green, and blue, but it does not address the difference in overall headroom.
"Best exposure" for getting normal image lightness with conversion defaults, and "best exposure" for highest SNR are two different things. It helps to clarify exactly what you're talking about.