• Members 5 posts
    June 25, 2024, 9:04 a.m.

    Per pixel. That is not relevant at all; it is saturation at the image level that matters.

    Sensors do not exist in a vacuum. They provide information which is processed for the desired result. Having more pixels tends to increase read noise slightly (roughly in proportion to the square root of the ratio of pixel counts, assuming the same per-pixel read noise, and depending on the ADC contribution), thus the amount of information is reduced. However, sampling the image with more pixels captures more information, information that the lower pixel count doesn't have. The question then is which set of input information can be used to create a better-quality output. That doesn't have a trivial answer, especially since different sets of information should be processed in different ways for optimal results. The evidence seems to point in the direction that it is usually better to have more pixels.
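
    As a numeric sketch of the square-root relationship above (a minimal illustration; the pixel counts and per-pixel read noise below are assumed values, not figures from any real sensor):

        import math

        read_noise_per_pixel = 2.0            # e- RMS, assumed identical for both sensors
        pixels_low, pixels_high = 24e6, 45e6  # hypothetical pixel counts over the same sensor area

        # Uncorrelated per-pixel read noise adds in quadrature, so over a fixed
        # image area the aggregate read noise grows as the square root of the pixel count.
        noise_low = read_noise_per_pixel * math.sqrt(pixels_low)
        noise_high = read_noise_per_pixel * math.sqrt(pixels_high)

        print(noise_high / noise_low)  # ~1.37, i.e. the square root of the 45/24 pixel-count ratio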

    Also, more pixels help to reduce aliasing.

    More pixels improve resolution even with poor lenses. Today's lenses, and even the smallest of pixels (in interchangeable-lens cameras), result in massive aliasing. There is a lot of room there for finer sampling.

    Pixel counts have gone up very slowly compared to sizes of storage media over the last 10-20 years. Same with computer processing power.

    I doubt anyone has made such a claim.

  • June 25, 2024, 9:17 a.m.

    What matters is the end result you can get. More pixels give more information, which allows NR to do a better job. If you insist on negating the advantages they offer, of course you'll get a misleading result.

  • June 25, 2024, 9:20 a.m.

    It's not the properties of the individual pixel that are relevant. It's the amount of information that an aggregate of pixels can yield about the scene.

  • June 25, 2024, 9:37 a.m.

    It depends on what you mean by 'pixel cells'. The A9 III's pixels are the same size as those of any other FF 24MP sensor, but global shutter pixels contain two photodiodes, only one of which is receptive to incident light. FWC is a somewhat anachronistic term when it comes to CMOS sensors, because the saturation charge isn't determined by the capacity of the potential well in the way that it was with CCDs. Instead it's determined by the voltage swing of the pixel's read transistor and the capacitance of one node in the pixel circuit (which isn't the photodiode, but the floating diffusion that connects to that transistor). If you're cramming more into the pixel, that node tends to be smaller and has lower capacitance, so the saturation charge capacity is reduced.

  • Members 278 posts
    June 25, 2024, 10 p.m.

    Capacity is not equal to capacitance.
    FWC is measured in electrons [e-], not farad [F].
    Thus voltage swing is irrelevant.

  • June 26, 2024, 8:11 a.m.

    I wouldn't assume that you know more than you do. I have a Physics degree and an engineering PhD, so I have some background in this stuff. I'm not confusing capacity and capacitance.

    'FWC' refers to the charge capacity of the potential well. In a CCD the potential well that keeps electrons in place in a pixel is imposed by a potential on a transparent electrode over the cell. By changing that potential the charge can be shifted along the device for read-out. The depth (in terms of electrical, not gravitational, potential) of the well is determined by the potential (voltage) on that electrode and the size of the well. As more charge (electrons) fills the well, it changes the potential in the well until it equals that on the electrode, and further electrons can no longer be contained.

    In a CMOS sensor the individual cells are insulated from each other, so they can, in theory, hold any number of electrons (at least until the insulation breaks down). What limits the capacity is the ability to read them, which in the end is dictated by the voltage swing of the source-follower transistor that performs the read-out, along with the conversion gain, which is determined by the capacitance of the floating diffusion. I learned this originally from Eric Fossum, so in the end you're choosing to disagree with someone who can undeniably claim to know how CMOS image sensors work.

  • Members 278 posts
    June 26, 2024, 9:06 a.m.

    FWC is commonly used with CMOS image sensors.

    AFAIK the D4 has a CMOS image sensor.
    What would Eric Fossum say: is Bill Claff's generalisation too much?

  • June 26, 2024, 9:35 a.m.

    One may use very different terms and definitions, but IMO this all boils down to a simple equation: voltage = charge / capacitance. In a given sensor implementation the (measurable) voltage (limit) and the (cell) capacitance are fixed, and this gives the maximal measurable charge, which can be expressed as an electron count or 'capacity'.
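
    A minimal worked example of that equation, assuming round illustrative numbers rather than any real sensor's figures:

        e = 1.602e-19     # elementary charge [C]
        C_node = 1.6e-15  # assumed cell capacitance [F] (1.6 fF)
        V_max = 1.0       # assumed measurable voltage limit [V]

        Q_max = C_node * V_max  # maximal measurable charge [C], from voltage = charge / capacitance
        N_max = Q_max / e       # the same limit expressed as an electron count ('capacity')
        print(round(N_max))     # ~10,000 e-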

  • June 26, 2024, 12:54 p.m.

    I know. But that doesn't mean that it's correct.

    Bill's actually saying that he's using a particular definition of FWC (as 'the limits of the ADC'). Bill's quite a one for his own definitions. I don't think anyone in the sensor business would use that definition. It tends to be used more by amateurs in the sensor analysis field (which includes me, by the way). In the old days, when sensors and ADCs were separate units the 'FWC' was a sensor parameter, independent of the ADC.
    Still, it doesn't help your case, because by Bill's definition FWC depends even less on pixel size; instead it's an ADC characteristic. The fact is that mostly (but not invariably) sensors for photography are run well short of FWC by either the charge-overflow or the voltage-swing definition, because the response becomes non-linear as that limit is approached. Generally the gain between the sensor output and the ADC is set so that the ADC tops out well before pixel saturation. Even extended low ISOs tend not to go into that non-linear territory.

  • June 26, 2024, 12:59 p.m.

    In fact the voltage isn't very significant, because you (as in the outside observer; obviously the designers know) don't know either the pixel output voltage or the full-scale voltage of the ADC. In the end we can lump this together in a metric of electrons per DN (which amateur sensor measurers tend, somewhat misleadingly, to call the 'gain'). Given that you do know the full-scale DN, from that you can work out the maximum measurable electron count per pixel. Often that's called the 'full well capacity', but that is, as I said, a misnomer.
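
    A quick sketch with assumed numbers (the 0.8 e-/DN and the 14-bit ADC are hypothetical, not measurements of any particular camera):

        e_per_DN = 0.8             # assumed 'gain' in electrons per DN
        full_scale_DN = 2**14 - 1  # full scale of an assumed 14-bit ADC (16383 DN)

        max_electrons = e_per_DN * full_scale_DN  # maximum measurable electron count per pixel
        print(round(max_electrons))               # ~13,100 e-, often (mis)named 'full well capacity'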

  • Members 278 posts
    June 26, 2024, 2:39 p.m.

    So what would pro sensor measurers name it properly instead, Full Pixel Capacity (FPC)?

  • Members 10 posts
    June 26, 2024, 4:09 p.m.

    It doesn’t really matter much what they call them.
    What matters more is to get away from pixel centric obsession and instead focus on images.
    I was misled early on by Phil Askey's misguided campaign against small pixels, but relatively quickly realized how fallacious his arguments were, thanks to people like bobn2 and many others back in 2008 when I got sucked into digital photography. I've been allergic to this line of thinking ever since.
    I did similar tests for myself with the D3x vs the D3 and D700, the a7r vs a7 vs a7s series, and many more contemporary pairs of cameras of different resolution. The overwhelming trend, so far as I have seen, has been that smaller pixels led to better DR at base ISO without significant penalties at low exposures.
    Maybe that’s a free lunch 😉
    One of my all-time favorite posts (in 6 parts), which summarizes the pixel battle back then, is by Daniel Browning and starts here: [1/6] Myth busted: small pixels bad, 4 legs good - part 1

  • June 26, 2024, 4:54 p.m.

    I don't think there's such a thing as 'pro sensor measurers'.
    You're missing the point, really. I wasn't criticising you for using the wrong term - that's why in my original post I said the term 'FWC' was somewhat anachronistic, not wrong. Simply, it's not the measurement that matters so much with modern cameras. Further, its use leads to some misunderstandings of where the limits are. Also, the fact that people like Bill Claff and others use the term loosely to mean different things leads to equivocation errors. Let me try and explain the situation.
    Here is a diagram
    pixel1.png
    Here PD is the photodiode, TG is the transfer gate, and FD is the floating diffusion connecting TG with the source-follower amplifier SF. Remember that source followers have a voltage gain of unity or a little less, so the input at FD is limited by the voltage swing of the output. What determines the size of the input? As PD collects charge Q it will gain a potential Vpd of Q/Cpd, where Cpd is the capacitance of the photodiode. The photodiode itself is generally isolated by trench oxide, so the only thing limiting Vpd is the breakdown drain-source voltage of TG. When TG is opened (made conductive) the charge flows into FD, causing a potential of Q/Cpdfd, where Cpdfd is the parallel capacitance of PD and FD. Now this voltage cannot be larger than the output swing of SF, as discussed above. Also, to ensure that most of the charge transfers to FD, and therefore is used, PD must have a much lower capacitance than FD. Thus it is the capacitance of FD and the output swing of SF that determine the maximum usable charge.

    In many modern sensors there is a further capacitor which can be switched in parallel with FD, thus lowering the potential for a given charge and allowing more charge to be used - sometimes (wrongly) called 'dual base ISO'. This feature is an indication that it is not the 'well' that dictates the size of the usable charge - rather it is the capacitance of FD and the voltage swing of SF.
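
    A minimal numeric sketch of that limit; every component value below is an assumption chosen for illustration, not taken from any real pixel design:

        e = 1.602e-19   # elementary charge [C]
        C_pd = 0.2e-15  # assumed photodiode (PD) capacitance [F]
        C_fd = 1.6e-15  # assumed floating diffusion (FD) capacitance [F]
        V_swing = 1.0   # assumed usable output swing of the source follower SF [V]

        # Fraction of the collected charge that ends up on FD when TG opens;
        # PD must have much lower capacitance than FD for this to be close to 1.
        transfer_fraction = C_fd / (C_pd + C_fd)
        print(round(transfer_fraction, 2))  # ~0.89 with these assumed values

        def usable_electrons(c_fd, v_swing):
            """Charge (in e-) at which the FD potential reaches the SF output swing (Cpd neglected)."""
            return c_fd * v_swing / e

        print(round(usable_electrons(C_fd, V_swing)))  # ~10,000 e-

        # Switching an extra capacitor in parallel with FD lowers the potential for a
        # given charge, so more charge becomes usable (the 'dual conversion gain' feature):
        C_extra = 3.2e-15  # assumed switchable capacitance [F]
        print(round(usable_electrons(C_fd + C_extra, V_swing)))  # ~30,000 e-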

    This diagram shows the circuit of a global shutter pixel:
    pixel2.png
    Here a second photodiode SD has been included, though it doesn't act as a photodiode because it is protected from light by a mask. It's there to save the charge from PD between the time the exposure ends and the time that the pixel can be read. At the end of the exposure the gate SG opens to allow the charge from PD to transfer to SD, then closes. From then on the pixel operates like the standard pixel above. So what limits the performance compared with a standard pixel? One factor is the degree to which complete charge transfer can be effected between PD and SD. This is in effect a very small CCD, so it can be made quite efficient - though limited by the gate voltages on CMOS sensors being much smaller than on CCDs. The second limitation is how much capacitance there is in FD, given that the pixel has more to fit in the same space. If the capacitance is smaller, the maximum usable charge is smaller. However the pixel is still collecting light from the same area (assuming good microlenses), so it can only tolerate a lower maximum exposure, which means that a higher ISO must be used.
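
    To put assumed numbers on that last point (both charge figures are purely illustrative):

        q_max_standard = 10000  # assumed usable charge of a conventional pixel [e-]
        q_max_global = 6000     # assumed usable charge of a global shutter pixel of the same size [e-]

        # With the same light-collecting area and QE assumed, the saturation
        # exposure scales directly with the usable charge per pixel.
        exposure_ratio = q_max_global / q_max_standard  # 0.6: saturates at 60% of the exposure
        iso_factor = 1 / exposure_ratio                 # so roughly a 1.7x higher ISO rating is needed
        print(exposure_ratio, round(iso_factor, 2))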

  • June 26, 2024, 4:56 p.m.

    Phil Askey banned me for that! 😀

  • Members 278 posts
    June 26, 2024, 11:33 p.m.

    You said "misnomer". For my limited understanding of your language:
    misnomer = wrong name.

    I'm not talking about how measurements are made, just using FWC [e-] as a convenient (virtual) parameter to compare CCD and CMOS sensors alike for (some) amateur sensor measurers.

    Why should there be any difference between the breakdown drain-source voltage in TG (fig.1) and SG or TG (fig.2)?

    IMHO it would make more sense to close switch SG during the exposure and open it as soon as the exposure ends. Too bad I never bothered about CCD shift intrinsics.
    (I didn't attend semiconductor physics lectures beyond 101)

    That's my point:
    lower maximum exposure = less capacity
    (call it anything you like if you think FWC is a misnomer for that)
    No matter what actual capacitance(s) and voltage swing(s).

  • Members 10 posts
    June 27, 2024, 6:15 a.m.

    I do remember!
    Daniel Browning’s great essay on the subject ends like this:
    “ Overall, DPR is a great site and highly informative; but there are some important flaws, and the DPR war on pixel density is one of them. I'm disappointed to see that Bob Newman has been banned from DPR. I hope I wont be the next one up on the chopping block.”

    It was actually pretty shocking that people could be banned for stating factual information.
    Not the finest moment at dpr! Later they silently pulled the articles with the misinformation. I never saw Askey acknowledge he was wrong, let alone apologize…

  • June 27, 2024, 7:17 a.m.

    Sure, but the sense I get is 'name that is misleading' rather than 'name that someone applied wrongly'. The dictionary I looked at used as an example 'morning sickness', which leads one to think that it happens only in the morning, when it can happen at any time of day. The point I'm making is that what people call 'full well capacity' these days has very little to do with the capacity of the wells and whether they are full or not. Unless you dig deep into it, the natural assumption is that the size of the wells dictates their capacity, when it doesn't.

    Yes, but your argument was based on the size of the photodiodes directly affecting the 'FWC', which isn't really the case. It was that which I responded to. It's an example of terminology that I think is unhelpful, because it leads people to make perfectly reasonable assumptions from the term which are wrong. The other problem, as I said, is that FWC is being used in several different ways, and that adds more confusion. Does it mean the charge capacity of the potential wells, the limits of the read-out circuitry, or the limits of the ADC count? There are plenty of cases where the same sensor presents with different 'FWC' in different cameras, simply because the camera designers have configured the pre-ADC gain differently. So the sensor measurers are not actually measuring the sensor; they are evaluating the usage of a sensor in a particular camera.

    I shouldn't think that there is, and if there were, it wouldn't make any difference - because that breakdown should never be reached in a properly designed pixel. Those transistors will be made as small as they can be while still giving a breakdown voltage above that required for the full output swing of SF.

    It's possible that some do it that way, but papers I've seen on GS sensors say that it is only PD that collects exposure charge.

    Not 'less capacity' by itself - less capacity per pixel area (since we're talking about exposure). In the context of this discussion, where we're just talking about 24MP FF sensors, I suppose that's a given - but those important little qualifications have a way of wriggling out of the discussion.

  • June 27, 2024, 7:20 a.m.

    I stopped taking it seriously at that point and re-appeared under a number of aliases. I was re-instated by Simon Joinson when he took over. Unfortunately that Askey doctrine was followed by some moderators, who decided that what was 'factual' depended on which forum you were posting on.