Ok, before we go too far down a rabbit hole here regarding "colour science", we have to be clear that we are talking specifically about colour perception, not colour itself, nor really anything to do with the physics of light.
@bobn2 Yes, absolutely.
RGB is misleading, especially the symmetry implied by the shared labels. There is no real link between RGB sensors, RGB colour spaces and the RGB output of additive colour systems such as computer/phone screens, other than the way the human eye sees and perceives colour. Colour accuracy is defined by comparisons made by the standard (or average) human eye under strict reference illumination, in the subtractive colour model.
The additive colour from your screen, the RGB colour, is not colour at all. No colours are composed of R, G and B components, nor can they be broken down into them; similarly, R, G and B mixed do not create the colours you see on your screen.
The eye is a remarkable thing in the way it gets around the problem of balancing accuracy against efficiency. Any real colour stimulus is a combination of many different wavelengths. So if you want high accuracy, you really need to record those wavelengths and their relative intensities, but the problem is that this takes lots of sensors tuned to different bands, so efficiency (and with it sample rate, and therefore accuracy) falls as increasingly more of the light lands on sensors that fail to record it.
The human eye effectively does this with just three sensor types (in the simple model), reducing every colour to three variable signals.
So it is theoretically, and practically, possible to produce the sensation of colour in a human eye simply by stimulating the three receptors to produce the same signals a real colour would produce, or basically by shining narrow bands (wavelengths) of red, green and blue light directly at the eye.
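A minimal Python sketch of that reduction, assuming invented Gaussian curves as stand-ins for the cone sensitivities (not real CIE or physiological data):

```python
# Toy sketch of how three receptors reduce a full spectrum to three signals.
# ASSUMPTIONS: the Gaussian "cone" curves and the stimulus below are invented
# for illustration -- they are not real CIE or physiological data.
import numpy as np

wavelengths = np.arange(400, 701, 5)  # visible range, 5 nm steps

def gaussian(centre, width):
    """A smooth bump centred on `centre` nm, used as a stand-in curve."""
    return np.exp(-0.5 * ((wavelengths - centre) / width) ** 2)

# Three made-up receptor sensitivities (stand-ins for the L, M, S cones)
cones = np.stack([gaussian(565, 45),   # "red"   / long
                  gaussian(540, 40),   # "green" / medium
                  gaussian(445, 25)])  # "blue"  / short

# A broadband stimulus: energy at many wavelengths all at once
spectrum = 0.6 * gaussian(580, 60) + 0.3 * gaussian(460, 30)

# The reduction: each receptor sums the spectrum weighted by its own
# sensitivity (a discrete version of the integral), so the entire
# spectral detail collapses to just three numbers.
signals = cones @ spectrum
print(signals)  # three values; everything else about the spectrum is lost
```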
Now the problems here are that a camera sensor can't replicate either the complexity of the spectral sensitivities of the cones, or their behaviour when exposed to light. So we have a compromise, both in the spectral sensitivities of the sensor's RGB filters and in the selection and widths of the RGB primaries emitted by the screen. And to do all this on a computer does require that we create an absolute model of numerical colour.
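The last step of that absolute numerical model is at least fully standardised: the published sRGB definition (IEC 61966-2-1) fixes the matrix and transfer curve that turn device-independent CIE XYZ values into the three numbers driven to the screen. A small Python sketch; only the sample XYZ triple is an arbitrary illustration:

```python
# Mapping device-independent CIE XYZ values to sRGB for display.
# The matrix and transfer curve are the published sRGB (IEC 61966-2-1)
# definitions; the sample XYZ triple is just an arbitrary example.
import numpy as np

# Standard XYZ -> linear sRGB matrix (D65 white point)
XYZ_TO_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def srgb_encode(linear):
    """Apply the sRGB transfer ("gamma") curve to linear light values."""
    linear = np.clip(linear, 0.0, 1.0)
    return np.where(linear <= 0.0031308,
                    12.92 * linear,
                    1.055 * linear ** (1 / 2.4) - 0.055)

xyz = np.array([0.36, 0.40, 0.10])      # an arbitrary yellowish stimulus
rgb = srgb_encode(XYZ_TO_SRGB @ xyz)    # three display drive values
print(np.round(rgb * 255).astype(int))  # 8-bit values sent to the screen
```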
But again, two things:
Colour accuracy and the LUTs are still defined by the standard (or average) human eye, not by wavelength or measurement of actual light.
RGB has little to nothing to do with real colour or the physics of light.
When you display JPEGs on a computer screen you are creating an illusion of colour that looks similar to your own human eye. There is no measured accuracy beyond that, and no colour fidelity; in fact there is no actual colour accuracy at all, as it is all a complete illusion based on nothing more than the fact that it looks the same to the standard (average) human eye.
[EDIT] To make it clearer: if you were to place a vase of daffodils on your desk next to your monitor, which was displaying the excellent photo you just took of those flowers, and then measured the actual light being reflected off the flowers and the light emitted by the screen's rendering of the same flowers...

...you would find nothing even remotely comparable between the two readings, other than that they look the same to a standard human eye.
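Here is a toy version of that measurement in Python, reusing the same invented cone curves as the sketch above; the "flower" and "screen" spectra are made up for illustration, but the outcome is the point: identical three-cone readings from physically dissimilar light.

```python
# Putting numbers on the daffodil experiment: a broadband "flower" spectrum
# and a three-spike "screen" spectrum that are physically nothing alike,
# yet produce identical signals in the same toy three-cone eye as above.
# ASSUMPTIONS: every curve here is invented for illustration, not measured.
import numpy as np

wavelengths = np.arange(400, 701, 5)

def gaussian(centre, width):
    return np.exp(-0.5 * ((wavelengths - centre) / width) ** 2)

cones = np.stack([gaussian(565, 45), gaussian(540, 40), gaussian(445, 25)])

# "Flower": broadband reflected daylight, energy across the whole spectrum
flower = 0.8 * gaussian(575, 70) + 0.2 * gaussian(450, 40)

# "Screen": three narrow emission spikes at fixed primary wavelengths
primaries = np.stack([gaussian(610, 8), gaussian(545, 8), gaussian(465, 8)])

# Solve a 3x3 system for the primary intensities that give the SAME three
# cone signals as the flower (ignoring gamut limits: a real display cannot
# emit negative light, so not every target is reachable in practice).
weights = np.linalg.solve(cones @ primaries.T, cones @ flower)
screen = weights @ primaries

print(cones @ flower)                    # three cone signals from the flower
print(cones @ screen)                    # the identical three from the screen
print(np.linalg.norm(flower - screen))   # ...yet the spectra barely overlap
```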
That holds whatever voltage/charge may be on whatever device, and however that is transformed into whatever numerical colour model.
Simple rule though: if you want to sell cameras to a mass market, you want good, clean and punchy primaries. We all want our photos and lives to look better than other people's on social media. That is what we perceive as reality, and so it becomes reality. You can measure it if you like, but accuracy doesn't sell cameras; perversely, it doesn't even look real compared to our perception/memory.