It's definitely more accurate: you literally can't do better than a raw data analyser/histogram. The only issue is that there's no raw histogram in live view in any camera, but nobody argued that. It's rarely practical to use an external device to analyse the raw histogram in the field.
But for post-shooting analysis it's the best you can get.
From one who never looks at 'raw' histograms, I am confused.
The raw data itself has information encoded in there somewhere. But you cannot simply look at the raw data (like you can with a JPEG) and 'see' the histogram. You have to do some sort of processing. So, unless there is a standard way to process the raw data - across all the different types of cameras - you are at the mercy of the processing algorithms to determine what the raw data actually means.
You can make a histogram of any data. Make one of raw data and it tells you the distribution of data numbers in the raw file, which is exactly what you need if you're trying to optimise raw exposure.
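As a minimal sketch of that, assuming the rawpy (LibRaw) Python bindings and a hypothetical file name - any raw reader would do the same job:

```python
import numpy as np
import rawpy  # LibRaw bindings

# Count how often each raw data number (DN) occurs, before any
# demosaicking or white balance -- that count *is* the raw histogram.
with rawpy.imread("IMG_0001.NEF") as raw:     # hypothetical file name
    dn = raw.raw_image_visible.ravel()        # one DN per photosite
    counts = np.bincount(dn)                  # counts[v] = photosites at DN v
    print("peak DN:", dn.max(), "| nominal white level:", raw.white_level)
```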
To be of use when actually out shooting, choose a flat JPEG picture style (standard, natural, neutral or whatever your camera calls it) even when you're shooting raw. That's what the blinkies/histogram shown in the EVF are based on. Once you've learned your camera with that, you'll understand how much headroom (or lack of it) you have, given what's shown in the EVF. That's how I figure it, anyway.
Raw file formats are different across the brands and camera models, but the information encoded in them is largely the same: digital numbers for R, G, G, B channels before demosaicking. Then it's possible to deduce black and white levels, so the digital numbers can be normalised and shown in a raw histogram. There are different RGB mosaics (colour filter arrays) in some sensors but the main point is, it's possible to extract raw RGB channels from raw data for a more or less unified representation as a raw histogram.
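A hedged sketch of pulling that more or less unified representation out of a file, again assuming rawpy; the file name is hypothetical and the channel indexing follows a typical Bayer layout:

```python
import numpy as np
import rawpy

# Split the raw DNs into their CFA channels and normalise each one to
# 0..1 using the black and white levels, so different cameras land on
# the same scale.
with rawpy.imread("IMG_0001.NEF") as raw:             # hypothetical path
    dn = raw.raw_image_visible.astype(np.float64)
    cfa = raw.raw_colors_visible                      # per-photosite channel index
    black = raw.black_level_per_channel               # one black level per channel
    white = float(raw.white_level)
    for c, name in enumerate(raw.color_desc.decode()):    # e.g. "RGBG"
        vals = dn[cfa == c]
        norm = (vals - black[c]) / (white - black[c])     # 0 = black, 1 = clipping
        print(f"{name}{c}: max {norm.max():.3f}")
```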
That sums it up. You need to know what your camera's headroom is with the in-camera tools. I'm very happy that both my a74 and a6300 have very accurate tools.
Even so, if I'm on a pro shoot I will still use my standard approach: in-camera metering, allowing more headroom than I know the camera needs. I must admit the a74 has to be the most forgiving camera I have owned to date, and very rarely can I not recover a bad exposure.
I agree in theory (mathematically), but can you do that in reality? To do that, you have to know where the numbers are for each different sensor and also what they represent. So, unless you have a large table of 'sensor value' vs 'data bits', how can you say what the raw data is actually telling you?
What they represent is raw numbers, which is what you want if you're trying to optimise use of the raw scale. The raw numbers have some maximum value, which both the camera manufacturer and the people who produce raw conversion software know. There is also generally a zero offset, also well known. It's a linear scale. The histogram just needs to cover zero to full value - it doesn't matter what the actual numbers are for this purpose. The main issue is implementation, not theory. The histograms generally come from the VF feed, which is processed image data. It's also done in hardware - and if the HW doesn't allow a histogram pick-off earlier in the processing chain, it's more complicated to do.
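As a toy illustration of that zero-offset-to-full-scale idea, with made-up numbers (a hypothetical 14-bit camera; real black and white levels vary by model):

```python
# Hypothetical 14-bit camera: zero offset (black level) of 512 and a
# full-scale (white level) of 15871 -- made-up values, real ones vary.
BLACK, WHITE = 512, 15871

def normalise(dn):
    """Map a raw DN onto 0.0 (black) .. 1.0 (clipping); the scale is linear."""
    return (dn - BLACK) / (WHITE - BLACK)

print(normalise(512))     # 0.0  -> nothing above the offset
print(normalise(8192))    # ~0.5 -> roughly mid-scale
print(normalise(15871))   # 1.0  -> full scale / clipped
```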
You can't. And to be honest, from all the experimenting during this thread, I have found even the FastStone and Photoshop histograms accurate as well, and with today's modern cameras it's really hard to make a mistake. The only way I'm going to make that mistake is using off-camera fill set to manual and misjudging my exposure/fill against the background brightness.
I think of it like this - each pixel on the sensor will have a number between 0 and 16384 (in theory at least, for 14-bit raw data) in the raw data. You then plot a histogram of the frequency of each value. This is the raw histogram.
For example, the range of the x-axis will be 0-16384 and the y-axis is the frequency of each of those values in the sensor pixels' data (see the sketch below).
In practice the 16384 might not be exact, but the explanation for why is way, way above 'my pay grade' and people like JimKasson, Iliah Borg or Bob will be able to explain.
The raw data is then demosaicked, has WB applied, is mapped to a colour space and has other "mathemagics" applied to render a SOOC JPEG or an RGB image in a raw converter app.
So, for raw data, you don't care what is red, green or blue; all you care about (in the example) is whether something gets close to 16384.
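Not the poster's own method, just one way to draw that plot, again assuming rawpy and a hypothetical file name, with matplotlib for the display:

```python
import matplotlib.pyplot as plt
import numpy as np
import rawpy

# Plot the raw histogram exactly as described above: x = possible DN
# values, y = how many photosites recorded each value.
with rawpy.imread("IMG_0001.NEF") as raw:          # hypothetical path
    dn = raw.raw_image_visible.ravel()
    counts = np.bincount(dn, minlength=16384)      # 14-bit scale, 0..16383
    near_full = int((dn > 0.99 * 16383).sum())     # crude "close to the top" check

plt.plot(counts)
plt.xlabel("raw data number (DN)")
plt.ylabel("frequency")
plt.yscale("log")       # a log y-axis makes the sparse top end visible
plt.show()
print("photosites in the top 1% of the scale:", near_full)
```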
How does this help you? Is there only one thing that determines whether a pixel value gets close to 16384? And (I may be opening a can of worms here), does ISO come in before or after you collect this Raw data?
Pretty much, yes. Apps like RawDigger will output a raw histogram for each of the red, green and blue channels. Ideally you don't want any of the 3 channels clipping. Clipping means that the sensor's pixel was full and couldn't accept any more photons.
By looking at raw histograms of raw data where the camera's histogram (which is based on the camera's JPEG produced from the raw data) showed clipping, you can determine how much more exposure* you could have added before the raw data in any of the 3 channels was actually clipped. From experience I now know I can comfortably add a maximum of 1/3 to 1/2 a stop of exposure* after the camera's histogram and/or blinkies show clipping, before the raw data is actually clipped.
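A sketch of how you might put a number on that remaining headroom from the raw file itself, assuming rawpy; the 1/3 to 1/2 stop figure above is the poster's measured experience, not something this code asserts:

```python
import numpy as np
import rawpy

# Estimate remaining raw headroom in stops: compare the brightest
# recorded DN against the white level on the black-subtracted linear scale.
with rawpy.imread("IMG_0001.NEF") as raw:            # hypothetical path
    dn = raw.raw_image_visible.astype(np.float64)
    black = float(max(raw.black_level_per_channel))  # simplification: one black level
    peak = dn.max() - black
    full = raw.white_level - black
    if peak >= full:
        print("the raw data is already clipped")
    else:
        print(f"raw headroom: about {np.log2(full / peak):.2f} stops")
```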
How the effect of ISO is implemented varies between camera manufacturers, but generally it is applied either to the output voltage of each sensor pixel after the shutter has closed, or to the digital data after the analogue signal (voltage) has passed through the Analogue-to-Digital Converter (ADC); a toy sketch after the footnotes below illustrates the difference.
Again, here people like JimKasson, Iliah Borg and Bob et al can provide much more info/detail about how/where ISO is applied.
* exposure - amount of light that struck the sensor per unit area while the shutter was open
** optimal exposure - the maximum exposure* within DOF and motion blur requirements without clipping important highlights.
*** under exposed - more exposure* could have been added with the DOF and blur constraints still being met without clipping important highlights.
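To make the "before or after the ADC" question concrete, here is a deliberately crude toy model - nothing camera-specific, just the same x4 gain applied in the two possible places:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy signal: photoelectron counts plus a little read noise. Purely
# illustrative; it only shows where gain can sit in the chain.
signal = rng.poisson(100.0, 100_000) + rng.normal(0.0, 2.0, 100_000)

GAIN = 4  # e.g. raising ISO by two stops

# Analogue gain: amplify the "voltage" first, then quantise in the ADC.
analogue = np.round(np.clip(signal, 0, None) * GAIN).astype(int)

# Digital gain: quantise in the ADC first, then multiply the DNs.
digital = np.round(np.clip(signal, 0, None)).astype(int) * GAIN

# Digital gain leaves every DN a multiple of 4, so the raw histogram
# has gaps; analogue gain fills the scale continuously.
print("distinct DNs, analogue gain:", np.unique(analogue).size)
print("distinct DNs, digital gain: ", np.unique(digital).size)
```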
No, as above, you do care about RGB channels. Run RawDigger and you'll see all colour channels.
ISO usually (but not always) comes into play before the numbers are written.
Also, just a little thing, but with a 14-bit representation you never get a value of 16384; the maximum possible is 16383. And the white level usually lies below 16383.
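A quick way to see those levels for your own camera, assuming rawpy and a hypothetical file name (the numbers vary by model and sometimes by ISO):

```python
import rawpy

# The nominal white level is often below the 14-bit maximum of 16383.
with rawpy.imread("IMG_0001.NEF") as raw:    # hypothetical path
    print("14-bit maximum :", 2 ** 14 - 1)   # 16383
    print("white level    :", raw.white_level)
    print("black levels   :", raw.black_level_per_channel)
```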