• Members 293 posts
    Jan. 1, 2024, 10:48 p.m.

    This is getting somewhat ridiculous. The OTF is the Fourier transform of the point spread function (PSF) of an optical system. The OTF is a complex valued function of two spatial frequency variables, even though the PSF is a real valued function. The MTF is the magnitude of the OTF, that is, the OTF stripped of its phase. The phase of the OTF is known as the phase transfer function (PTF). There is a lot of valuable information in the PTF, but for lens performance the MTF is the primary metric used, since the phase information is destroyed by the projection on the sensor.

    www.montana.edu/jshaw/documents/18%20EELE582_S15_OTFMTF.pdf
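    To make the relationships concrete, here is a minimal Python sketch (assuming a made-up Gaussian PSF, not any particular lens): the OTF comes out complex valued, the MTF is its magnitude, and the PTF is its phase.

```python
import numpy as np

# Sketch: derive OTF, MTF and PTF from a sampled PSF (assumed Gaussian here).
x = np.linspace(-1, 1, 256)
xx, yy = np.meshgrid(x, x)
psf = np.exp(-(xx**2 + yy**2) / (2 * 0.05**2))  # real, non-negative PSF
psf /= psf.sum()                                 # normalize to unit volume

otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf)))  # complex-valued OTF
mtf = np.abs(otf)      # MTF: magnitude only, no phase information
ptf = np.angle(otf)    # PTF: the phase transfer function

print(mtf.max())  # 1.0 at zero frequency for a normalized PSF
```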

  • Members 293 posts
    Jan. 2, 2024, 11:29 p.m.

    No that is not what I was saying. It has nothing to do with a sinusoid pattern. The MTF is the amplitude of the OTF. The MTF contains no phase information, since it is nothing more than the amplitude of the complex OTF at each point. This is because the sensor simply measures the amplitude of the incident light on it.

    In reality the MTF is the amplitude spectral response of an optical system. How it is estimated does not change the definition nor what it measures. Of course the PSF/OTF/MTF make sense in the analog to digital conversion, since the sensor that is used to measure and estimate them is in the digital domain. The subtlety here is the fact that there is no such thing as periodic sampling in the image domain. For the current crop of sensors the sampling rate along the linear dimensions is different than along the diagonal dimension. That is, the Shannon-Nyquist sampling theorem does not directly apply. This was of course addressed in the Petersen-Middleton sampling theorem. It also turns out that no sensor in consumer cameras provides optimal sampling, since a reticular lattice is not optimal.

    scholarworks.wm.edu/cgi/viewcontent.cgi?article=3744&context=etd

    In reality our image sensors based on a rectangular lattice are not optimal; a hexagonal lattice is, for resolution in all directions. But at the end of the day none of this has anything to do with the physical definitions of PSF, OTF or MTF. How we estimate these functions is a separate question.

  • Members 293 posts
    Jan. 3, 2024, 5:04 p.m.

    Read the reference. Take a standard image sensor which is based on a reticular lattice. The Shannon sampling theorem requires a periodic one dimensional sampling grid. When one has that, one can perfectly reconstruct a band limited one dimensional analog waveform by sampling at a rate of at least twice the highest frequency present in the waveform. However, with a two dimensional sensor, if the sampling interval along the horizontal and vertical dimensions is x, the sampling interval along the diagonal of the lattice is Sqrt(2)·x. The highest spatial frequency that can be resolved is 1/(2x) along the vertical and horizontal but only 1/(2·Sqrt(2)·x) along the diagonal. The Shannon theorem does not apply because its conditions are not met; Shannon's sampling theorem is valid for one dimension only. For a discussion of this see,

    Richard R. Legault. The aliasing problems in two-dimensional sampled imagery. In
    Lucien M. Biberman, editor, Perception of Displayed Information, chapter 7, pages
    279-312. Plenum Press, New York, NY, 1973.
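    As a quick numerical illustration of the direction dependence described above (a sketch, with pixel pitch x = 1 in arbitrary units):

```python
import math

# Square pixel lattice with pitch x: the nearest-neighbor spacing along the
# diagonal is sqrt(2)*x, so the per-direction "Nyquist" limit is lower there.
x = 1.0                                  # pixel pitch, arbitrary units
f_axis = 1.0 / (2.0 * x)                 # limit along rows/columns: 1/(2x)
f_diag = 1.0 / (2.0 * math.sqrt(2) * x)  # limit along the diagonal
print(f_axis, round(f_diag, 3))          # 0.5 and 0.354 cycles per unit length
```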

    Daniel Petersen, a student of David Middleton, addressed this by analyzing sampling lattices for multidimensional sampling. There is a lot of similarity between this problem and the sphere packing problem in lattice theory. The one dimensional case is easy; for dimensions greater than one it becomes much more subtle. Petersen addressed this problem in his thesis, and in 1962 Petersen and Middleton published their seminal paper:

    Daniel P. Petersen and David Middleton. Sampling and reconstruction of wave-number limited functions in n-dimensional euclidean spaces. *Information and Control,* 5(4):279-323, 1962.

    In this paper they show the optimal sampling lattice is not the rectangular lattice which has become the default today but the hexagonal lattice. To be able to perfectly reconstruct the image would require a pixel density about 13% higher with the rectangular lattice than with the hexagonal lattice. In fact Fujifilm used a hexagonal lattice in the S3Pro and S5Pro cameras.
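    The 13% figure can be checked with a short computation (a sketch of the standard density comparison for a signal band limited to a circle of radius B; the variable names are mine):

```python
import math

# Samples per unit area needed to capture a signal whose spectrum is zero
# outside a circle of radius B: square lattice vs optimal hexagonal lattice.
B = 1.0
square_density = (2 * B) ** 2            # spacing 1/(2B) in each direction
hex_density = 2 * math.sqrt(3) * B ** 2  # optimal hexagonal lattice density
savings = 1 - hex_density / square_density
print(f"hexagonal needs {savings:.1%} fewer samples")  # about 13.4%
```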

    So resolving a spatial frequency of 1/(2x) in all directions would require a higher density rectangular sensor. In the paper
    scholarworks.wm.edu/cgi/viewcontent.cgi?article=3744&context=etd
    the author analyzes the difference associated with these two lattices, the default rectangular lattice used today and the optimal hexagonal lattice.

    None of this of course is new. Petersen and Middleton published their paper in 1962. The analysis was used extensively in the development of the first digital image sensors developed by the US Government for both airborne and satellite reconnaissance in the 1970's.
    So why has the rectangular lattice become the "default" lattice, even though it requires a higher pixel density to reproduce the same spatial frequencies as the hexagonal lattice? It most likely has to do with it being more economical (lower NRE cost and higher yield) to produce a denser rectangular lattice sensor than a hexagonal sensor.

  • Removed user
    Jan. 3, 2024, 9:50 p.m.

    Gentlemen,

    I am failing completely to understand why this thread has devolved into a discussion about the spatial sampling rate relationship between the diagonal and the x,y directions!!

    Does anyone NOT know that the factor is sqrt(2) ?

    Feel free to "clarify" with as-scientific-as-possible-terminology so as to keep other members' eyes glazed over ... 😉

  • Removed user
    Jan. 3, 2024, 10:12 p.m.

    After a couple of days reading about OTF, Fourier, et al, I am beginning to understand the above question and the reason it was posed ... and I thank @JACS for that.

  • Removed user
    Jan. 3, 2024, 10:25 p.m.

    All agreed and thank you for your patience in spite of my occasional sniping.

  • Removed user
    Jan. 22, 2024, 8:24 p.m.

    Would just like to mention that my copy of QuickMTF can show what they call a "Line Spread Function" which looks like a PSF cross-section.

  • Removed user
    Jan. 22, 2024, 10:46 p.m.

    I don't understand that ... mine also opens TIFF, PNG and BMP.

    What is a "converted JPEG"?

    Why is it "not so great" and what type of JPEG is being referred to? I can imagine a very low quality 4:2:0 not producing a nice slant edge but what about a 100% quality RGB JPEG as opposed to any Y'CbCr?

  • Removed user
    Jan. 22, 2024, 11:06 p.m.

    I see. Please note that I shoot Foveon-based cameras - should have said - ergo, no demosaicing for my stuff.

    For serious analysis I do use RawDigger's raw composite output which excludes converter high jinks and, yes, I do realize that the said analysis still includes the lens performance.

  • Members 293 posts
    Jan. 23, 2024, 4:20 p.m.

    There are several issues with defining the "resolution." Resolution of an optical system is pretty simple for a monochrome sensor system. The first thing to consider is that this operates under the assumption of diffraction limited optics. Secondly, the resolution for a given sampling lattice in this case is well understood, as is the optimal sampling lattice: the basic lattice is hexagonal for an isotropic wavenumber limited function, that is, for functions whose Fourier transform is zero outside a circle. See for example, www.sciencedirect.com/science/article/pii/S0019995862906332?ref=pdf_download&fr=RR-2&rr=84a1341f0e2f3998

    This result follows from the fact that, in the case of isotropic wavenumber limited functions, the problem is mathematically equivalent to the sphere packing problem in the plane. This was all well established by the early 1960's.

    Monochrome is pretty easy. Where it becomes more subtle is in the case of color images, where three color channels must be generated from the underlying sensor. This can be accomplished by using some sort of color filter array (normally RGB) on the sensor, e.g., Bayer or Fujifilm's X-Trans, or by the Foveon 3 layer approach with its own unique color space from which RGB is derived. The Foveon approach results in a sample of each color at each pixel location, whereas the CFA approach requires a convolution with neighboring pixels to generate a sample of each color at each pixel location.
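    To illustrate the neighborhood interpolation a CFA requires, here is a toy Python sketch (bilinear fill-in of the green channel only; the checkerboard layout and helper name are my own illustration, not any real camera pipeline):

```python
import numpy as np

# Toy Bayer green plane: green samples sit on a checkerboard, so half the
# pixel locations have no green measurement and must borrow from neighbors.
def bilinear_green(mosaic, green_mask):
    """Fill missing green values with the mean of the 4 nearest green sites."""
    est = mosaic * green_mask
    acc = np.zeros_like(est)
    cnt = np.zeros_like(est)
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        acc += np.roll(est, (dy, dx), axis=(0, 1))        # wrap at the edges
        cnt += np.roll(green_mask, (dy, dx), axis=(0, 1))
    out = est.copy()
    missing = green_mask == 0
    out[missing] = acc[missing] / cnt[missing]
    return out

i, j = np.indices((6, 6))
mask = ((i + j) % 2 == 0).astype(float)  # checkerboard of green sample sites
flat = np.full((6, 6), 5.0)              # a flat gray scene
print(np.allclose(bilinear_green(flat, mask), 5.0))  # True: flat field survives
```

    The averaging is exactly the low-pass convolution that costs resolution relative to the native sensor grid.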

    It is this "demosaicing" that results in a loss of resolution compared to the native sensor resolution. In reality it results in two different values for resolution: one for green and another for red and blue. This is where it gets interesting. Jim Kasson in his blog gives detailed measurements of the GFX 100 sensor (100 MP CFA) vs the Leica Q2M monochrome sensor as far as resolution goes, and found that the 47 MP monochrome sensor out-resolves the 100 MP Bayer CFA using a Siemens star target. The other issue with CFA resolution in the real world is that it is dependent on the color content of the image. Foveon showed this during the development of the X3 chip using three color charts, which is why they felt justified in comparing a 3.3 MP Foveon X3 to a 10 MP Bayer CFA sensor. The rule of thumb is that the resolution loss of a CFA compared to the native resolution of the underlying sensor is between 1/4 and 1/2, with about 1/3 being typical. From what I can remember of Kasson's analysis, that seems about right.

    What might be lost in such discussions is that resolution might not be the proper metric to use. Shannon found that what is more important in the "real world" of communications is information capacity, which has the benefit of factoring in multiple effects. That is, one does not have to worry about the contrast on an MTF chart; it is factored into the information capacity metric. Below is a description of how the information capacity is calculated, and it is a good read.

    www.imatest.com/docs/shannon/
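    As a rough Python sketch of the idea only (the Gaussian MTF curve and the flat noise assumption here are made-up stand-ins, not the actual Imatest procedure): capacity integrates log2(1 + SNR) across spatial frequency, so MTF contrast and noise both fold into one number.

```python
import numpy as np

# Shannon-style capacity per pixel: integrate log2(1 + SNR(f)) over the band,
# with the signal attenuated by the (squared) MTF at each spatial frequency.
f = np.linspace(0.0, 0.5, 500)   # cycles/pixel, up to the Nyquist limit
mtf = np.exp(-(f / 0.25) ** 2)   # assumed combined lens+sensor MTF curve
snr0 = 100.0                     # assumed low-frequency signal-to-noise ratio
integrand = np.log2(1.0 + (mtf ** 2) * snr0)
capacity = float(np.sum(integrand) * (f[1] - f[0]))  # simple Riemann sum
print(round(capacity, 2), "bits/pixel (one dimension)")
```

    A lens with lower MTF contrast or a noisier sensor both shrink the integrand, which is exactly why the single metric needs no separate contrast bookkeeping.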

  • Members 293 posts
    Jan. 23, 2024, 9:47 p.m.

    Actually, information and entropy are two of the most reliable metrics in the sciences. Not only that, they have a firm mathematical basis arising out of ergodic theory and dynamical systems theory.