• Members 174 posts
    Oct. 23, 2024, 9:24 p.m.

    The terminology is a bit fuzzy, but generally,

    Exposure bracketing means taking shots at different settings so you can choose the best one later on, or use them for blending. So bracketing is about the shooting technique, not about postprocessing.

    Exposure blending is where you blend shots taken at different exposures in postprocessing; this includes HDR.

    Exposure stacking typically means taking multiple shots with the same exposure settings and then averaging them in post.
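    To illustrate the stacking idea, here is a minimal numpy sketch (not any particular program's pipeline; the scene values and noise level are made up for illustration) showing how averaging same-exposure frames reduces random noise:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    scene = np.full((100, 100), 0.5)            # "true" scene luminance
    n_frames, sigma = 16, 0.1                   # 16 shots, per-frame noise level

    # Simulate 16 identical exposures, each with independent random noise
    frames = [scene + rng.normal(0, sigma, scene.shape) for _ in range(n_frames)]
    stacked = np.mean(frames, axis=0)           # average them in "post"

    # Noise standard deviation drops roughly by sqrt(n_frames):
    print(np.std(frames[0] - scene))            # per-frame noise, ~0.1
    print(np.std(stacked - scene))              # stacked noise, ~0.1/4
    ```

    With 16 frames the random noise falls by about a factor of four, which is the whole point of averaging in post.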

  • Members 1415 posts
    Oct. 23, 2024, 10:37 p.m.

    Thank you John. I think I get that now.
    Generally I find it is no longer an issue for me unless I intend to use the shot for a large print. Anyway, the new noise reduction programs, used with some moderation, do a good enough job for me when required.
    I got interested in this years ago when Sony added a noise reduction mode to its first NEX mirrorless cameras. These took multiple shots at the same settings and then reassembled them into a final image. It worked as long as there were only small movements in the scene. It could only be done in JPEG; the processing power in the cameras couldn't handle RAW files.

  • Members 3952 posts
    Oct. 23, 2024, 10:47 p.m.

    I think you have that the wrong way around.

    ETTR is useful for increasing, not decreasing, SNR.

    It is a low SNR due to a low exposure* that makes noise visible.

    The more light that hits the sensor, the higher will be the SNR.

    The higher the SNR, the less visible will be the noise.

    ETTR is all about getting as much light onto the sensor within dof and blur requirements without clipping important highlights.

    *exposure - amount of light striking the sensor per unit area during a shutter actuation
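    The light-vs-SNR relationship above can be sketched numerically. Photon arrival is Poisson-distributed, so for a mean of N photons the shot noise is sqrt(N) and SNR = N/sqrt(N) = sqrt(N). The photon counts below are illustrative, not measurements from any real sensor:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    for mean_photons in (100, 400, 1600):        # 2-stop increments of exposure
        samples = rng.poisson(mean_photons, size=100_000)
        snr = samples.mean() / samples.std()
        print(mean_photons, round(snr, 1))       # SNR ≈ sqrt(mean_photons)
    ```

    Each 2-stop increase in light doubles the SNR, which is why ETTR raises image quality rather than lowering it.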

  • Members 415 posts
    Oct. 23, 2024, 11:24 p.m.

    Is there an authoritative reference that addresses this controversy?

  • Members 174 posts
    Oct. 23, 2024, 11:52 p.m.

    I don't think there's an authoritative body that defines the standard terminology. You can 'stack' multiple images taken with different exposures, but usually I see this term used in astrophotography. That's why I said 'typically'.

  • Members 415 posts
    Oct. 23, 2024, 11:55 p.m.

    Thanks for clarifying your point of view.

    I don't know whether this site approves of AI but I did check here:

    I asked the older free ChatGPT 3.5. Click to see transcript.

    Astrophotography and HDR get mentioned.

  • Members 174 posts
    Oct. 24, 2024, 12:15 a.m.

    That's clearly wrong because the term is used in astrophotography. Just as an example: www.lonelyspeck.com/milky-way-exposure-stacking-with-manual-alignment-in-adobe-photoshop/ In astrophotography, you use the same exposure settings for every shot; it makes no sense to change them between frames that will be stacked.

    Also there's a technique where you simulate a long exposure by taking multiple shorter-exposure shots and averaging them, in effect simulating an ND filter. So the shots don't have to be taken at different exposure levels.
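    A toy numpy sketch of that ND-filter simulation (a moving bright point standing in for, say, flowing water; the frame size is arbitrary): averaging many short same-exposure frames smears the motion exactly as one long exposure would.

    ```python
    import numpy as np

    width, n_frames = 8, 8
    frames = np.zeros((n_frames, width))
    for i in range(n_frames):
        frames[i, i] = 1.0                 # the subject sits at a new position each frame

    # Averaging the short exposures smears the moving subject into a trail,
    # the same result a single 8x-longer exposure through an ND filter gives
    long_exposure = frames.mean(axis=0)
    print(long_exposure)                   # uniform low-level trail, 1/8 everywhere
    ```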

    So 'stacking' is a pretty broad term.
    But when you actually change the exposure settings between the shots, 'bracketing' will be more accurate, and the term is used in camera manuals.

  • Members 415 posts
    Oct. 24, 2024, 12:19 a.m.

    O.K. no point in continuing ...

  • Members 534 posts
    Oct. 24, 2024, 1:33 p.m.

    Even if you don't do an HDR look at all, you still get better data with the staggered combined exposures. High DR in single exposures is a great thing to have, but it can't compete with computational photography when the subject matter is stable.

    Of course, no matter how you get your high-DR results, you still have to deal with the fact that the shaded areas and the sunlit areas of the same scene are very different in ways that have nothing to do with the total intensity of light, per se. Without clouds or haze, the sunlit areas are dominated by a small disk light source which creates high-contrast micro-shadows in textures, which renders all other light contributions of blue sky or second-hand sunlight off of buildings and trees relatively irrelevant. The shaded areas are being lit only by second-hand sun and blue skies, a huge broad diffuser that renders texture details flat, and, the shaded areas are very blue unless second-hand sun dominates and it has a lot of red light.

    Our visual perception is evolved to de-emphasize this difference with context-based local adjustment in the brain, but the sensor is a literalist, and sees images on flat media as the texture/color of the actual print or display surface; not a scene to be locally adjusted at normal adjustment strength. So, the ideal approach might be that after you have created your high-DR image, you make two separate conversions, a "normal" daylight one for the sunlit areas, and a "shade" conversion that is altered to be much less blue and lightened, with more micro-contrast. You could then blend them with a mask, to make such an image scene look more like "real life perception" than any global image adjustments could.

  • Members 534 posts
    Oct. 24, 2024, 1:41 p.m.

    A "single shot" can be an ETTR shot. Did you mean something like, "single shot with standard exposure for the ISO setting"?

  • Members 534 posts
    Oct. 24, 2024, 2:02 p.m.

    If the software blends based on levels, then it will use the highest-quality image for any final tonal level, and should never include a clipped image area unless that area is clipped in all of the frames and it has no choice. Of course, what actual software does and what could be done by software are not always the same.

    So, ETTR, ETTR +2 EC, and ETTR +4 EC as three input images can be fine, because the highlights will all come from the plain ETTR one, and the +EC ones will only contribute to shadows. Of course, if you just literally add the three images with no conditional blending, then the highlights would have extremely low contrast, dominated by whiteness, and the shadows would be way too bright.
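    A minimal sketch of that conditional, level-based blend, not what any particular HDR program actually does: for each pixel, take the most-exposed frame that isn't clipped, then divide out its exposure factor so all frames share one linear scale. The scene values and clip point are made up for illustration.

    ```python
    import numpy as np

    CLIP = 0.99                                     # treat values above this as clipped
    scene = np.array([0.9, 0.1, 0.01])              # "true" linear scene luminances
    exposures = [1, 4, 16]                          # ETTR, +2 EV, +4 EV
    frames = [np.clip(scene * e, 0.0, 1.0) for e in exposures]

    merged = np.empty_like(scene)
    for i in range(scene.size):
        # Walk from the most-exposed frame down to the base ETTR frame
        for frame, e in zip(frames[::-1], exposures[::-1]):
            if frame[i] < CLIP or e == exposures[0]:
                merged[i] = frame[i] / e            # base frame is the fallback
                break
    print(merged)                                   # recovers the scene values
    ```

    The highlight pixel comes from the plain ETTR frame and the shadow pixels from the +EC frames, matching the description above; naively summing the three frames instead would wash out the highlights.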

  • Members 415 posts
    Oct. 24, 2024, 4:39 p.m.

    I believe the meaning is: if the scene DR exceeds that of the camera then, in any one single shot, pixels will be blown at one level or another or both, ETTR or not. Obviously, several shots are required if the scene DR exceeds that of the camera and the shooter wants to exclude blown pixels. Otherwise, do as I do and take your pick - keep the highlights or keep the shadows, can't have both. 😪

  • Members 1620 posts
    Oct. 24, 2024, 4:56 p.m.

    This is getting overcomplicated. I proposed a really simple way of getting more DR than our camera sensor can handle. It is perfect for use in the field.

    I get the exposure about right for the main part of the picture, or a picture that nails the mid tones. I then take two other shots. One +2EV and one -2EV.

    Capture One works some magic. Mission accomplished.

  • Members 174 posts
    Oct. 24, 2024, 9:17 p.m.

    If I understand it correctly, the method you described is what many people use as their default method of capturing HDR.

    In fact it doesn't guarantee you don't blow the highlights, although the chances of that will be reduced. What you describe as "about right for the main part of the picture" sounds quite arbitrary, and going -2ev from midtones often means you get a heavily underexposed shot with little to no additional information, but increased chances of ghosting.

    I almost always use ETTR in landscape shooting. From the histogram, I can see whether the scene fits the DR of my camera, which it does in, say, 80-90% of cases.

    When it doesn't, I take just two shots: ETTR and ETTR +2ev. This is enough for HDR blending in 99% of cases. It's much easier to blend just two images instead of three - less chance of ghosting.

    In very high contrast cases (e.g. the sun in the frame) I'd take a third shot ETTR +4ev.

  • Members 3952 posts
    Oct. 25, 2024, 8 p.m.

    Yes, that is spot on.

    Most of the time only 2 shots are required to capture the full dynamic range as you described.

    There is no need to underexpose the highlights by 2 stops, or whatever, since they are not being clipped when using ETTR.

  • Members 415 posts
    Oct. 26, 2024, 9:14 p.m.

    I would be interested in how shooting with a negative EC increases chances of ghosting, assuming a faster shutter speed.

    Perhaps "ghosting" has a different meaning here ...

  • Members 174 posts
    Oct. 26, 2024, 10:23 p.m.

    Any additional image used for HDR merge increases chances of ghosting, no matter how fast the shutter speed was.

    The method described by NCV involves taking 3 shots for HDR merge: some arbitrary exposure that looks right for midtones (call it 0ev), then -2ev and +2ev.

    But note that if the highlights aren't blown at 0ev, there's no actual need to take a -2ev shot because it has very little additional information, but it increases chances of ghosting if used in an HDR merge.

    On the other hand, if the highlights are blown at 0ev, there's no guarantee they won't be blown at -2ev.

  • Members 415 posts
    Oct. 27, 2024, 2:23 a.m.

    OK, that kind of ghosting, thank you.

    Personally, if I merge a stack, I align the images first - so as to avoid "ghosting". I'm a bit surprised that others don't bother with that.

    Um ... "ev" stands for electron-volts ... pardon the pedantry ...