Let's get things going in this forum. Here's a start on a topic often referred to in discussions about pixel density and high ISO vs. sharpness and noise.
Here are two, highly enlarged, pictures of single pixels: one pixel from a very sharp part of an ISO 64 image, the other pixel from a very blurry part of an ISO 12800 image.
Please discuss which is which, and what general conclusions can be drawn from these examples.
I'm going with 2.jpg as the ISO64 shot, because at 129x128 pixels (0.016512 megapixels, perceptual or otherwise) it clearly has higher "resolution" than 1.jpg, which weighs in at only 128x128 pixels (0.016384 megapixels).
I tried some slanted-edge analyses, but the contrast across the edge was too low to provide any additional insights.
You also can't say anything about color from a single pixel. Color is not about wavelengths, nor even energy distribution over wavelengths, but about relative behavior across multiple pixels. Yes, I'm talking about that stuff that puzzled Land for so long, and that Wendy Carlos (the musician) actually explains rather well on her WWW site: www.wendycarlos.com/colorvis/color.html. What colors do you see in:
Open the image in a separate tab for the best effect -- it gets scaled funny here.
That image really only contains red, white, and black...
Too narrow. ;-) Just to be clear, the colors that aren't really there can actually be photographed using color film, etc. In fact, that's how this cover shot from May 1959 Scientific American was made:
I first learned of this way back in the mid-1970s, and quickly confirmed that you really could take two monochrome images with color filters as close as just a few tens of nm, process the B&W images and then project them, and see -- and photograph -- full color images from typical scenes. Being clever, I figured I would shoot bright-line spectra this way to really get the color response nailed down... however, the bright-line spectra handled this way looked monochromatic!

In any case, one pixel tells you nothing about color because (1) one pixel would normally be sampling only one color "channel" and (2) wildly different energy distributions over the sensed wavelengths can result in the exact same single pixel value.
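To put (2) in concrete terms, here's a minimal numeric sketch (the sensitivity curve and both spectra are invented for illustration): two very different spectral energy distributions land on exactly the same single-pixel value.

```python
import numpy as np

# Hypothetical single color channel: a Gaussian spectral sensitivity
# centered at 550 nm (curve invented for illustration).
wl = np.arange(400, 701, 10).astype(float)       # wavelengths, nm
sensitivity = np.exp(-((wl - 550) / 60.0) ** 2)  # channel response S(lambda)

def pixel_value(E):
    """Pixel value ~ integral of E(lambda) * S(lambda) d(lambda)."""
    return float(np.sum(E * sensitivity) * 10.0)  # Riemann sum, 10 nm steps

# Two wildly different energy distributions E(lambda):
flat = np.full_like(wl, 0.5)   # broadband, "white-ish" light
spiky = np.zeros_like(wl)      # two narrow emission lines
spiky[wl == 520] = 1.0
spiky[wl == 580] = 1.0
spiky *= pixel_value(flat) / pixel_value(spiky)  # scale to force a match

print(pixel_value(flat))   # same number...
print(pixel_value(spiky))  # ...from completely different light
```

Scaling the line spectrum to match is just a stand-in for the countless metamers a single channel cannot tell apart.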
Incidentally, my then high-school AP physics teacher told me about Land's work when I asked him for an explanation of something even stranger. When my brother and I were very young, we only had a B&W TV, but lots of shows were filmed in color and regularly announced the colors of various objects so folks could adjust their color TVs. Well, both my brother and I somehow learned to see B&W TV broadcasts in color! Our parents confirmed that we both were able to identify colors in shows we hadn't seen before with disturbingly high accuracy.

We both lost that ability by the time we were something like 6 years old, but the fact that we had it always bothered me until I read Land's work, which confirms that distinguishing colors in monochrome images actually is possible, provided the scene content has the right kind of structure. The retinex algorithm was the best result from Land's work, but he never arrived at a complete explanation of what is happening, nor has anyone else that I'm aware of. Incidentally, the 2-color trick that Land "discovered" was widely used in the printing industry as far back as the 1800s to reduce the cost of making apparently full-color printed images; this weirdness has been around, and in use, for a very long time without being well understood.
You're in great company on that; Land spent something like two decades on this.
I personally spent a while doing things related to retinex too. Never improved upon it, but did get some spin-offs. One was a novel method for reconstructing full color stereo pairs from a single anaglyph image: Reprocessing anaglyph images. The other involved some hacks for obtaining multispectral data with more bands than the sensor had filters. I first did this using a Canon PowerShot G1 (CMYG CFA) in 2001, and by 2005 I could sometimes extract up to 8 bands from a Sony F828 (RGBE CFA with NIR filter disengaged), but the method was very sensitive to noise. Anyway, we used a similar method in Multispectral, high dynamic range, time domain continuous imaging. Maybe one day one of us will figure out how color really works...? Until then, I see all this as a very deep and ponderous rabbit hole with "color science" being the term slapped on whatever magical approximation currently works best. ;-)
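For anyone curious what that sort of band extraction looks like in the abstract -- this is a generic sketch of the underlying linear inverse problem, not the actual method from the work cited above -- recovering more bands than the sensor has channels is underdetermined, and regularized least squares shows exactly the noise sensitivity mentioned:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 4 CFA channels observing 8 spectral bands through
# broad, overlapping sensitivities (the mixing matrix is invented).
n_bands, n_channels = 8, 4
bands = np.arange(n_bands, dtype=float)
centers = np.linspace(0, n_bands - 1, n_channels)
A = np.exp(-((bands[None, :] - centers[:, None]) / 2.0) ** 2)  # 4x8

true_spectrum = rng.uniform(0, 1, n_bands)  # unknown per-pixel band energies
readings = A @ true_spectrum                # what the 4 channels record

def recover(y, lam=1e-2):
    """Tikhonov-regularized least squares: stabilizes the 8-from-4 inversion."""
    return np.linalg.solve(A.T @ A + lam * np.eye(n_bands), A.T @ y)

clean = recover(readings)
noisy = recover(readings + rng.normal(0, 0.01, n_channels))
print(np.abs(clean - noisy).max())  # small channel noise, amplified band error
```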
A most interesting piece about colour perception, but to my mind the original question about SINGLE-pixel sharpness seems nonsensical.
A pixel on the sensor is a tiny "light meter" preceded by a coloured filter, and its physical size is fixed in the fabrication process. So how the "light meter" is calibrated cannot possibly deform the chip or make it squishy. The perception of the aggregate of pixels is another matter.
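To make that concrete: any sharpness measure (edge slope, gradient, MTF) is built from differences between neighboring samples, so it simply has no definition for one pixel in isolation. A toy sketch (everything here is invented for illustration):

```python
import numpy as np

def sharpness(strip):
    """Crude sharpness proxy: the steepest brightness step between
    neighboring pixels in a 1-D strip of an image."""
    if len(strip) < 2:
        raise ValueError("sharpness is undefined for a single pixel")
    return int(np.max(np.abs(np.diff(strip))))

sharp_edge  = np.array([0, 0, 0, 255, 255, 255])     # abrupt transition
blurry_edge = np.array([0, 40, 105, 150, 215, 255])  # gradual transition

print(sharpness(sharp_edge))   # 255
print(sharpness(blurry_edge))  # 65
try:
    sharpness(np.array([128]))  # one pixel: no neighbors, no edges
except ValueError as e:
    print(e)
```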