• March 22, 2024, 6:23 p.m.

That's not 'all'; that's technology that we don't have, or anything like it - though my son had an idea for a system that might do that. It was interesting enough for his company to investigate, and whilst the theory was sound, there was no way of actually realising it at present. So what we settle for is a description of the scene that allows us to create stimuli that the human visual system perceives as quite similar to looking at the original scene. As photographers we play around creatively with the several ways that this optical illusion can fail.

The problem is that we rarely view images in such a way that the image forms enough of the visual environment that our visual system will adapt solely to it. When 'colour science' was developed it was about printed reproduction. You view a print by reflected light, in an environment to which your visual system has adapted. The lighting conditions in that environment will rarely be the same as in the original scene. In a print the colorimetry assumes daylight viewing conditions, and our visual system adapts when the image is viewed in other lighting conditions - up to a point. For critical assessment of prints it was necessary to use the right D-series illuminant (usually D65 or D50).

Metamerism is the condition where objects emitting different spectral distributions are perceived as the same hue. It's an inevitable consequence of a colour vision system that does not have infinitely many different stimuli covering the whole visible spectrum (in classical theory - quantum theory and the Heisenberg uncertainty principle would provide a computable upper bound). The number of stimuli required to do away with metamerism would certainly be a lot more than three.

Since metamerism is about the spectral distribution, it depends on both the objects in the scene and the spectral distribution of the illuminant. Thus paints will sometimes look to be different colours under different illuminants. That's not strictly metamerism failure, which is when a reproduction system presents spectral distributions that should be metamers as different hues. In practice we tend to say 'metamerism failure' when things in our photos turn out unexpected colours.

The key thing here is that colour photography does not try to capture the spectral distributions of the objects in the scene. What it's trying to capture is the three human visual stimuli that the scene would produce, or at least sufficient information to allow them to be reproduced.
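As a toy numeric sketch of that last point (the sensitivity curves below are invented Gaussians, not real CIE data), the capture target is not the spectrum itself but the three responses it would provoke in a standard observer - a whole spectral power distribution collapses to just three numbers:

```python
import numpy as np

# Invented cone-like sensitivity curves on a 10 nm grid, 400-700 nm.
wl = np.arange(400.0, 701.0, 10.0)                  # wavelength grid, nm
g = lambda peak: np.exp(-((wl - peak) / 40.0) ** 2)
lms = np.vstack([g(570), g(540), g(445)])           # toy 'LMS' curves

def stimuli(spd):
    """Collapse a 31-sample spectral power distribution to 3 numbers:
    the integrals of sensitivity times spectral power."""
    return lms @ spd * (wl[1] - wl[0])

daylightish = np.linspace(0.8, 1.2, wl.size)        # made-up smooth spectrum
print(stimuli(daylightish).shape)                    # (3,) - all the eye keeps
```

Everything else about the spectrum - the other 28 degrees of freedom in this toy grid - is discarded, which is exactly where metamerism comes from.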

  • March 22, 2024, 6:46 p.m.

    Please explain.

BTW, there was a vocal dpreview member who would jump on me any time I used "metamerism" in a way he did not approve of...

    I hope that was not me :)

  • Members 86 posts
    March 22, 2024, 6:52 p.m.

    Are you sure about that? I think there might be orange in there...

So you're talking about how the scene looks to human eyes, and how the colour appears, but haven't mentioned the exact word 'colour'? Sounds like a side-step to me.

What I say (without the mention of a certain word...) is that you are still treating an input measurement as a baseline truth; you are still treating that measurement as the absolute reading. So how does that work with the additive colour system of your screen, or even a print from any medium? Show me how you recreate the exact input data without recreating the whole exact 3D scene in real life. Show me how that is even relevant when the human eye doesn't preserve it, the camera doesn't preserve it, and it's impossible to recreate in any current medium. I just don't see its relevance, as it's not part of the system and not likely to be.

  • Members 2322 posts
    March 22, 2024, 7:53 p.m.

there is so much chest beating on topics that don't even contribute to amazing images; just go and shoot things the human eye can't even see 🫣 or should I say "perceive", it's real 😁

    2024-03-04-03.04.22 ZS PMax copyfs (2024_03_04 06_39_11 UTC).jpg


    JPG, 8.4 MB, uploaded by DonaldB on March 22, 2024.

  • March 22, 2024, 10:37 p.m.

    Donald,

    I think this subject is interesting and may contribute towards great images.

    Alan

  • Members 105 posts
    March 23, 2024, 8:49 a.m.

A suitable conclusion to this long debate about colour accuracy would be to acknowledge that taking photographs is about enjoying the results, not about measurements (except for some scientific purposes).

I recall when I preferred Agfachrome to Kodachrome for some subject matter, even though Agfa slides tended to fade while Kodak's robust colours survived.

Current sensor filters and readout and decoding programs may vary according to what the resident salespeople trust in, but should nevertheless be chosen on the basis of preference. Less dense pixel filters would presumably give a noise bonus, and maximum flexibility in Raw massaging would be desirable.

    p.

  • Members 86 posts
    March 23, 2024, 9:44 a.m.

    You must be aware of the problem here. You do know the principle of an additive colour system?

    Possibly not - "Almost guaranteed"? Absolutely definitely, written in stone guaranteed to be a complete illusion.

RGB is not real colour; it doesn't contain anything more than three narrow bands of R, G and B.

    There is no spectral density as it only contains a very small portion of the visible spectrum.

The eye doesn't respond in the same way to light from a computer screen because it is so completely different to that from the original scene.

No combination of RGB produces yellow, or even orange, but by shining known quantities of them into a human eye you can create the same response in the cones and so trick the eye. (NB - those quantities are derived from colour LUTs, themselves defined by the standard human eye and not by reference to the actual spectral density of the light from the original scene.)

That's kinda the point: the preservation of how the colour appears to the standard human eye given strict ambient light, which in the case of a computer screen is quite difficult because ambient light reflecting off the objects around the screen does affect how you see the colour on the screen. View that screen in the dark and the eye adapts to all sorts of irregularities and colour casts; the image becomes quite unstable.

You've lost me completely here; I've never come across this in anything I've read about colour. Grey is the colour the brain presents us with when there is no dominant hue present in the reflected wavelengths. We have 3 colour receptors in the eye, so the entire visible spectrum is converted into three signals. Those three signals define the entire visible spectrum, obviously, because what the eye doesn't see is also not visible. I have no idea where 4-9 come from, or even the relevance of (1,2,3,9,8,8,9,1,2,3). Sorry, all Greek to me.

  • Removed user
    March 23, 2024, 2:12 p.m.

    That's pretty rude, bearing in mind who you're asking!

And yet, everybody knows that RGB = 1,1,0 in the aforementioned additive system produces yellow!!
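For what it's worth, the coordinate claim is easy to check in code - in HSV terms, full red plus full green with no blue sits at a hue of 60°, i.e. yellow. This says nothing about spectra, only about the additive RGB coordinate system; the light the screen actually emits is still just R plus G, and the 'yellow' is the eye's response:

```python
import colorsys  # Python standard library

# RGB = (1, 1, 0) in an additive system: the hue lands at 60 degrees
# (yellow) at full saturation and value.
h, s, v = colorsys.rgb_to_hsv(1.0, 1.0, 0.0)
print(round(h * 360), s, v)   # 60 1.0 1.0
```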

    Pardon me for snipping all the obfuscation ...

  • Members 86 posts
    March 23, 2024, 3:27 p.m.

Err... You specified it? It was you who said we were talking about computer screens, wasn't it? Whereas bobn2 was discussing prints - a playing field where the input and output were both subtractive colour systems.

Err... I think you are talking exclusively about theoretical colour models, or how we standardise a model of colour so we can create a numerical sequence that can be programmed into computers. You could leave the colour out of it and talk only about absolute wavelengths and spectral densities of the real scene, and about matching them in the output, as I think you seem to be trying to do.

    But then you talk about the output device being a computer screen, or RGB device, or an additive colour system.

Not rude at all, but an honest question. Again, because in your second response you do not seem to understand the difference between the theoretical colour space and the nature of the light emitted from your computer screen, and how it actually creates the sensation of colour in the human brain. If I am missing any humour, again I must apologise, because I am quite lost trying to understand.

JACS seems to be talking about preserving the integrity of the input data, so that if you matched that in the output then the eye would have the same response and you could forget about it.

But how does this work when your output device is a computer screen - an optical illusion calibrated specifically to the human eye? The nature of the light emitted from your pic of a daffodil on screen is completely different to the yellow reflected off a daffodil in sunlight, and so the human eye's response, being mainly chemical, will not be the same. We are also talking about an output device that must be calibrated to how a human eye perceives colour, and to be relevant it must also be calibrated to how it sees colour under reference ambient light. This is the only way you can have calibrated colour on an output device that uses additive colour.

You can't calibrate a computer screen to match or recreate the absolute light of the scene; that would to me indicate a misunderstanding of the principle involved. But perhaps I'm misunderstanding what you are trying to communicate, so again I apologise.

  • March 23, 2024, 3:36 p.m.

    I see...

You have developed a terminology shift. Metamerism itself is a purely physiological term, which describes human perception - seeing different spectral combinations as the same color. What you describe could be called 'metameric failure', 'metamerism errors' or something similar.

Then - your examples of 4D are not relevant. All photography and imaging talk about metamerism is limited to the visual range, without any other, invisible dimension. The simplest model is 1D - the spectral distribution of received light; the next iteration would be 2D - spectrum plus light intensity (human eye receptor sensitivities depend on intensity in a slightly different way).

In other words - the problem is not in frequencies we cannot see; the problem is that the eye's response to different spectral compositions is not the same as the sensor's response. And even if it were the same (technically not impossible, but, like you said, with some side effects), then what could we do with the recorded information? Send it directly into our optic nerves? This could work :) But we need to reproduce it with the help of some visualisation medium, like a computer screen or printed paper - and here we have much bigger problems - there is no set of three independent color channels able to generate the same response in our eyes as the original does. Increasing the channel count (some Sharp panels have 4 - RGB + yellow) makes the channels dependent and does not help.
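The eye-versus-sensor point can be made concrete with a toy sketch (all curves below are invented Gaussians, not measured data): build a spectral difference that a three-channel 'eye' literally cannot see - a so-called metameric black, the residual left after projecting a spectrum onto the eye's sensitivity curves - and note that a 'camera' with different filter curves still sees it:

```python
import numpy as np

wl = np.arange(400.0, 701.0, 10.0)                 # wavelength grid, nm
g = lambda peak, width: np.exp(-((wl - peak) / width) ** 2)

eye = np.vstack([g(570, 40), g(540, 40), g(445, 40)])   # toy 'LMS' curves
cam = np.vstack([g(600, 30), g(530, 30), g(460, 30)])   # toy RGB filter curves

def respond(curves, spd):
    """Three integrals of sensitivity times spectral power."""
    return curves @ spd * (wl[1] - wl[0])

# Metameric black: the part of the camera's red filter curve that is
# invisible to all three eye curves (least-squares residual).
coeffs, *_ = np.linalg.lstsq(eye.T, cam[0], rcond=None)
black = cam[0] - eye.T @ coeffs                    # eye @ black ~ 0

spd_a = np.ones_like(wl)                           # flat spectrum
spd_b = spd_a + 0.5 * black                        # physically different spectrum

print(np.allclose(respond(eye, spd_a), respond(eye, spd_b)))   # True: metamers to the eye
print(np.allclose(respond(cam, spd_a), respond(cam, spd_b)))   # False: the camera sees it
```

The same construction run the other way round (a black for the camera's curves) gives spectra the camera records as identical but the eye distinguishes, which is the usual direction of 'metamerism error' in photography.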

  • Members 86 posts
    March 23, 2024, 4:16 p.m.

But the eye's response here is still post-'colour correction'; you're recreating the eye's response and not the original illumination.

  • Removed user
    March 23, 2024, 4:41 p.m.

Asking @JACS if he knows what additive color mixing is, is insulting, bearing in mind his obvious knowledge of all things color.

    Tantamount to asking a truck driver if he knows which way to turn the steering wheel to go to the right 😀

    I was being deliberately obtuse in order to show that your remarkably complex responses will not really educate simple folks like myself in spite of your intent.