quarkcharmed posts raw digger stats and you still think you know what you're talking about. I've just revisited FRV and still can't stop laughing at how ridiculous some members' posts and their assumptions are. I will wait till you have figured it out yourself before telling you you are wrong 😎☺
When photographing artworks (or other things that might be in a museum) it makes a difference whether the object is flat (paintings, drawings, tiles, etc) or three-dimensional. You need a small aperture for the 3D objects. (And possibly stacking if you have plenty of time.)
Raw files often use a "signed" number space because even though electron charges from light and dark current are always "positive", read noise can make values higher or lower, and for signals close to black, this means negative pixel values, less than black. So, a camera could have black at raw level 2048 and a max raw value of 16383, but that "16383" really represents only 14,335 levels above black.
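A minimal sketch of that arithmetic, using the black level of 2048 and 14-bit maximum of 16383 from the post above (the sample pixel values are made up for illustration):

```python
import numpy as np

BLACK = 2048   # raw level that represents zero light
WHITE = 16383  # 14-bit maximum raw value

# Hypothetical pixel readings; read noise can dip a dark pixel below black.
raw = np.array([1995, 2048, 2100, 16383])

# Subtract the black offset in a signed space so below-black values survive.
signal = raw.astype(np.int32) - BLACK
print(signal)          # first value is negative: noise below black
print(WHITE - BLACK)   # 14335 usable levels above black
```

Clamping to zero before this subtraction would throw away the below-black half of the noise distribution, which is exactly why the signed representation matters for weak exposures.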
Even with cameras that have black at raw value zero (not an ideal situation if you really need to make the most of very weak exposures), the original raw digitization had a black offset, and they just subtracted it before writing out the raw file, and in some cases, the firmware may stretch the values with scaling to go back up to 16383 again.
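A sketch of the subtract-then-stretch step some firmwares apply before writing the file; the 2048/16383 levels are carried over from the earlier example, and the exact rounding a given camera uses is an assumption here:

```python
BLACK, WHITE = 2048, 16383  # black offset and 14-bit full scale

def subtract_and_rescale(v: int) -> int:
    """Subtract the black offset, clamp at zero, then stretch the
    remaining range back up so full scale is 16383 again."""
    shifted = max(v - BLACK, 0)
    return round(shifted * WHITE / (WHITE - BLACK))

print(subtract_and_rescale(2048))   # black maps to 0
print(subtract_and_rescale(16383))  # full scale maps back to 16383
```

Note the clamp: once black is written out as zero, the negative noise excursions described above are gone for good, which is the "not ideal" situation the post refers to.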
Of course experience matters, but experience generally only hones one's understanding within the quandaries of one's finite understanding. So one can get better and better within their paradigm, without realizing that there is terrain unlit by their own lantern.
What I am saying is that if you alter the gamma of that JPEG image to pull values down, the clouds at the top do not have large contiguous areas of clipping. What looks like clipping is actually steps of 255, 254, 253, 252, etcetera representing a much larger range of scene brightness.
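A quick sketch of that pull-down, using a simple power-law gamma (the exponent 2.2 is an illustrative choice, not taken from the image in question):

```python
# Pulling near-white JPEG values down with a gamma > 1 spreads the
# top steps apart: 255, 254, 253, 252 map to visibly distinct levels
# instead of merging into one "clipped" blob.
def pull_down(v: int, gamma: float = 2.2) -> int:
    return round(255 * (v / 255) ** gamma)

for v in (255, 254, 253, 252):
    print(v, "->", pull_down(v))
```

If the clouds were truly clipped over a large area, every pixel there would already be 255 and this adjustment would leave them all at one flat value; distinct steps appearing instead is the tell that real tonal information survives.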
Observing is OK if it is used to confirm an understanding of something technical. However, Real Worlders often imply that technicality is unnecessary, preferring instead to "learn by doing". Such learning is OK when it works - but probably expensive and time-consuming when shooting film, eh?
Perhaps some are reading too much into what I wrote and why I wrote it. My point was only that this image, as given, doesn't actually clip the clouds over any large areas, and doesn't have enough noise to avoid posterization. Obviously, if the conversion was in a 16-bit space instead of an 8-bit, the noise could be sufficient and the highlight compression could be reversed without posterization.
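A sketch of why bit depth matters here: quantizing a smooth highlight ramp to 8 bits leaves only a few dozen levels (visible banding when stretched), while 16 bits keeps the steps far below the visibility threshold. The ramp endpoints are arbitrary illustrative values:

```python
import numpy as np

# A smooth highlight gradient from 90% to 100% of full scale.
grad = np.linspace(0.90, 1.00, 1000)

as8  = np.round(grad * 255).astype(np.uint16)    # 8-bit quantization
as16 = np.round(grad * 65535).astype(np.uint16)  # 16-bit quantization

print(len(np.unique(as8)))    # only ~26 distinct levels -> posterization
print(len(np.unique(as16)))   # far more levels -> smooth after editing
```

With a touch of noise added before quantization, even the 8-bit steps dither away, which is the "enough noise to avoid posterization" condition mentioned above.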
The question is when it really does work. For instance, a traditional way of treating wounds, developed by 'learning by doing' over generations, is a 'bread poultice': a hunk of wet, stale bread is secured over the wound. Sometimes it works. The reason it works is that if the bread is going mouldy, and if that mould includes penicillium, then it has an antibiotic effect which stops the wound going septic. Knowing why it works means that the penicillin can be isolated, applied on its own (without other potentially toxic moulds), and a much higher success rate achieved.
FastRawViewer Preferences -> "Image Display" -> "Ignore exposure correction/baseline exposure in linear mode" - check.
FastRawViewer Preferences -> "Exposure" -> "Exposure adjustments affect OverExposure display" - uncheck.
Display the image and press Shift-L. This will help to assess what is truly clipped in raw data and what a raw converter will deal with as the source data.
The result:
More on this here: www.fastrawviewer.com/blog/fastrawviewer1-7-new-view-mode