MTF

Jbryant61
4-Participant

MTF

Any suggestions on how to get the average of the max and min values in the modulation image?

Thanks
Jason

Hi Val. Thanks.

This is great if I know the minimum values, i.e. zero in your case, but that's the problem: I need to average the troughs and the peaks.

BTW, the only 'if' syntax I know is different from yours.

Thanks
Jason

Here's a rather simple way to do it ... i.e. it won't work with noisy, more general data.

- Guy
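
(The attached worksheet is in Mathcad and isn't reproduced in this thread. As a rough sketch of the same peak-and-trough averaging idea - not the actual attachment - a Python/numpy version might look like the following; the profile, background value, and use of scipy's find_peaks are purely illustrative, and as Guy says it won't cope with noisy data.)

    import numpy as np
    from scipy.signal import find_peaks

    def modulation_from_profile(profile, bg=0.0):
        # Average the local maxima and minima of a 1-D intensity profile and
        # return (Imax - Imin) / (Imax + Imin - 2*bg).
        peaks, _ = find_peaks(profile)       # indices of local maxima
        troughs, _ = find_peaks(-profile)    # indices of local minima
        i_max = profile[peaks].mean()        # average peak level
        i_min = profile[troughs].mean()      # average trough level
        return (i_max - i_min) / (i_max + i_min - 2.0 * bg)

    # Illustrative, noise-free profile: offset 100, amplitude 40, background 20
    x = np.linspace(0, 6 * np.pi, 600)
    print(modulation_from_profile(100 + 40 * np.sin(x), bg=20.0))   # ~0.5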
Jbryant61
4-Participant
(To:GuyB)

Sorry - just read this after replying to a later message.

You ask about background subtraction for the contrast. The image enclosed is a raw image. I thought I could define the modulation as (Imax - Imin)/(Imax + Imin - 2*bg), where bg is a value of the image away from the bars.

It's great what you have done - I love the plots of the points on the peaks and troughs!

I think it was you who suggested using edges (or line pairs) as a means to measure image quality rather than beads.

Thank you.
Jason

On 5/13/2009 5:18:54 PM, Jbryant61 wrote:
>Its great what you have done -

Thank you.


>I think it was you who
>suggested using edges (or line
>pairs) as a means to measure
>image quality rather than
>beads.

Yup - now that you have these targets you can get much cleaner measurements of your optical response. BTW, it's a good thing that you have several periods of the square wave - it has been shown that unless you have at least 5 periods you don't get an accurate measurement because of edge effects.

- Guy
RichardJ
19-Tanzanite
(To:Jbryant61)

If what you want is the amplitude of the MTF, there is a better approach than trying to find the averages of all the maxima and minima.

Richard
RichardJ
19-Tanzanite
(To:RichardJ)

I should add that if you change the size of the bars a simple sinusoid may not adequately represent them. You can still get parameter estimates of the fundamental frequency using the FFT, but you'll have to change the fitting function for the second step.

Richard

On 5/13/2009 9:57:50 AM, rijackson wrote:
>If what you want is the
>amplitude of the MTF there is
>a better approach
>
>Richard

The way you did it isn't quite right - the MTF must be normalized to the zero-frequency response, so you cannot subtract off the mean value of the image to get an MTF.

Furthermore, the properly-normalized MTF of the image does not correspond to the MTF of the lens system. If your object has no feature at a spatial frequency f, then neither will your image. So your ability to measure MTF at a frequency f is affected by your object's spatial frequency spectrum.

To obtain the MTF of your optics alone means dividing the spatial frequency spectrum of your image by the spatial frequency spectrum of your object - and the frequency spectrum at f of a sinusoidal object at f will be different from the f-component of a bar pattern at f.

- Guy
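
(As a sketch of that division - not from the attached worksheets - here is a 1-D Python/numpy illustration. The synthetic bar pattern, the Gaussian blur standing in for the optics, and all the numbers are assumptions made up for illustration.)

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    n = 1024
    periods = 8                                        # bar pattern: 8 periods across the profile
    x = np.arange(n)
    obj = (np.sin(2 * np.pi * periods * x / n) > 0) * 1.0   # 0/1 square-wave "object"
    img = gaussian_filter1d(obj, sigma=12)             # stand-in for the optical blur

    OBJ = np.fft.rfft(obj)
    IMG = np.fft.rfft(img)
    k = periods                                        # FFT bin of the fundamental frequency

    # Normalize each spectrum to its own zero-frequency term, then divide:
    obj_term = np.abs(OBJ[k]) / np.abs(OBJ[0])
    img_term = np.abs(IMG[k]) / np.abs(IMG[0])
    print(img_term / obj_term)                         # MTF of the (stand-in) optics at that frequency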
Jbryant61
4-Participant
(To:GuyB)

Richard & Guy, thankyou both very much for your help. I don't follow Guy's comments about object MTF.

I have a range of targets that have different frequencies (from 0.5 um lines up to 5 um lines) and wanted to plot the modulation (Imax - Imin)/(Imax + Imin - 2*background) as a function of line pairs. So the image I included would just represent one point.

Thanks
Jason

On 5/13/2009 3:04:38 PM, Jbryant61 wrote:
>... I have a range of targets that
>have different frequencies
>(from 0.5um lines upto 5um
>lines) and wanted to plot the
>moudlation (Imax - Imin) /
>(Imax + Imin - 2*background))
>as a function of line pairs.
>So the image I included would
>just represent one point.

Yes, but technically the contrast you measure with bar patterns is not the same as the MTF. The MTF expresses the contrast you would see if your object had a sinusoidal variation of intensity - smoothly varying from dark to light in a sinusoidal way.

Bar patterns (step functions of intensity) are far easier to fabricate, however, so they are a common imaging tool. Strictly speaking, the contrast value you measure with bar patterns is not the value of the MTF.

This is easily explained. Because the peaks of a bar pattern are brighter, on average, and the valleys darker than those of a sine pattern at the same frequency, it is easier to see bar patterns at the edge of a system's response than sine patterns. Therefore, you'd expect to measure a higher contrast for bar patterns than you would for sine patterns at the same fundamental frequency.

How much higher? If the image of your bar pattern is degraded enough that it really looks like a sine wave[*], then the contrast you measure will be higher than the true MTF by a factor of 4/pi.

How do we arrive at that factor? The intensity of a sine pattern varies via 0.5*(1+sin(x)), and its normalized MTF at the fundamental frequency has an amplitude of 0.5. Blurry imaging of a square wave, however, reduces the image to just the fundamental Fourier component of a square wave, which has a normalized value of 2/pi. (See the Fourier peaks of the two waveforms in my last sheet in this thread.) So, contrast measurements of a blurry square wave are bigger than those of a sine wave by a factor of (2/pi) / (1/2) = 4/pi.

- Guy

[*] If the image is _not_ washed out, however, and still looks like a step function, then the contrast you measure should be much closer to the true value of the MTF ... of course it also means you're measuring a value close to 1 anyway.
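
(A quick numerical check of that 4/pi factor - an illustrative Python/numpy sketch, not part of the original worksheets: compare the fundamental Fourier component, normalized to the zero-frequency term, of a 0-to-1 square wave and of 0.5*(1+sin(x)).)

    import numpy as np

    n = 4096
    x = np.linspace(0, 2 * np.pi, n, endpoint=False)
    square = (np.sin(x) > 0) * 1.0          # 0-to-1 bar pattern, one period
    sine = 0.5 * (1 + np.sin(x))            # sinusoidal pattern, one period

    # Fundamental Fourier component of each, normalized to its zero-frequency term
    sq = np.abs(np.fft.rfft(square)[1]) / np.abs(np.fft.rfft(square)[0])
    sn = np.abs(np.fft.rfft(sine)[1]) / np.abs(np.fft.rfft(sine)[0])

    print(sq, sn, sq / sn, 4 / np.pi)       # ~0.6366 (= 2/pi), ~0.5, ~1.273, 1.273...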
IRstuff
3-Visitor
(To:GuyB)

On 5/14/2009 12:05:04 AM, GuyBeadie wrote:
>Yes, but technically the contrast you
>measure with bar patterns is not the
>same as the MTF. The MTF expresses the
>contrast you would see if your object
>had a sinusoidal variation of intensity
>- smoothly varying from dark to light in
>a sinusoidal way.

The modulation function for bar targets is called the contrast transfer function (CTF).

TTFN,
Eden

On 5/13/2009 3:04:38 PM, Jbryant61 wrote:
>... I have a range of targets that
>have different frequencies

Don't forget to measure the contrast values with both horizontal and vertical bars out at the edge of your field of view.

You will likely find real differences in the H vs V response at the edges of the field.

- Guy
Jbryant61
4-Participant
(To:GuyB)

Yes, we do in fact see this. Is astigmatism a likely cause? Do you know of any other reasons why this is so?

Thanks
Jason
Jbryant61
4-Participant
(To:Jbryant61)

I have taken Richard's & Guy's excellent worksheets and combined them into one. I think I have been mixing up modulation depth and MTF. It's still not clear how to extract the value of the MTF for the given pattern.

Also, can the modulation depth be used at all, or related to a theoretical diffraction-limited prediction, or is the MTF better to use?

There seem to be two ways to get the MTF: (a) ratio the modulation of image to object, or (b) ratio the frequency components - I can't get (a) to agree.


I think this is a much better way than all of the point (bead) fitting using Virendra's equations!

Thanks
Jason

On 5/14/2009 6:51:45 AM, Jbryant61 wrote:
>... Its still not
>clear how to extract the value
>of the MTF for the given
>pattern.
>
>Also can the modulation depth
>be used at all - or related to
>theoretical diffraction
>limited prediction or is the
>MTF better to use?

I think it's time for you to either read up on what the MTF is or (more valuable) experiment on your own w/ Mathcad. Take an object's Fourier transform, multiply it by an assumed MTF function, and inverse FFT the product to see how your optics will blur an object. Plug in actual numbers to compare your simulations w/ your measurements, and you will see on-screen the relationships between them.

- Guy

An object in (x,y) can be expressed as a Fourier series of spatial frequencies (kx,ky).

A perfect imaging system would transfer the spatial frequencies with no loss and generate an image that is the inverse Fourier transform of the spectrum at (kx/M, ky/M) - the original frequencies scaled by the magnification factor M of the optics.

Real imaging systems blur images, however. The way this happens, in the math, is that the Fourier spectrum of the image is not a perfectly-scaled copy of the object spectrum: it is the product of the original spectrum and the modulation transfer function (MTF) of the optics[*].

So, to see the effect of an imaging system you take the perfect Fourier transform of your object, multiply it by the MTF of your optics, and inverse Fourier transform the product. Your inverse Fourier transform will generate a model of the image you detect - it will be blurred if the MTF is less than 1 for any of the spatial frequencies in your object.


[*] As long as you're dealing w/ broadband light, you don't need to include any complex numbers for the transfer function of the optics.
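
(As a minimal sketch of that experiment - shown in Python/numpy rather than Mathcad, with an assumed Gaussian-shaped MTF that is not the real optics and an arbitrary cutoff:)

    import numpy as np

    n = 1024
    periods = 8
    x = np.arange(n)
    obj = (np.sin(2 * np.pi * periods * x / n) > 0) * 1.0   # bar-pattern object

    freqs = np.fft.rfftfreq(n)                # spatial frequencies, cycles per sample
    mtf = np.exp(-(freqs / 0.02) ** 2)        # assumed MTF shape; MTF(0) = 1

    # Image = inverse FFT of (object spectrum * MTF); blurred wherever MTF < 1
    img = np.fft.irfft(np.fft.rfft(obj) * mtf, n)

    print(obj.max() - obj.min(), img.max() - img.min())     # contrast before and after blurring

Plugging in numbers that match your targets and comparing the simulated contrast with your measurements is the kind of experiment Guy is suggesting.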

On 5/14/2009 6:51:45 AM, Jbryant61 wrote:
>I have taken Richards & Guys
>excellent worksheets and
>combined into one...

Comments on the direct contrast measurement enclosed.

- Guy
Jbryant61
4-Participant
(To:GuyB)

Wow... Thanks Guy, this is the best way I have seen the MTF explained - very simple. For the alternative way, where the ratio of the frequency components was used after taking the FT, is there any reason why I need to consider this given what you have shown with the "contrast" method?

Thank you very, very much!
Jason
RichardJ
19-Tanzanite
(To:Jbryant61)

On 5/14/2009 12:21:25 PM, Jbryant61 wrote:
> For
>the alternative way where the
>ratio's of frequencies was
>used after taking the FT, is
>there any reason why I need to
>consider this given what you
>have shown with the "contrast"
>method.

The FFT is just a way to get approximate numbers as guesses for the least squares fit. Both the contrast method and the FFT suffer from the same problem: just picking maximum and minimum points does not give you the actual values of the maxima and minima. The least squares fit gives the most accurate numbers.

Richard
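
(A rough Python sketch of that two-step approach - not Richard's actual worksheet; the model, initial guesses, and test numbers are illustrative only.)

    import numpy as np
    from scipy.optimize import curve_fit

    def fit_modulation(profile):
        # Step 1: the FFT gives rough guesses for offset, amplitude, frequency, phase.
        n = len(profile)
        x = np.arange(n)
        spec = np.fft.rfft(profile)
        k = np.argmax(np.abs(spec[1:])) + 1               # strongest non-DC bin
        p0 = [profile.mean(),                             # offset
              2 * np.abs(spec[k]) / n,                    # amplitude
              k / n,                                      # frequency, cycles per sample
              np.angle(spec[k]) + np.pi / 2]              # phase of a sine at that bin

        # Step 2: a least-squares sinusoid fit refines those numbers.
        def model(x, offset, amp, f, phase):
            return offset + amp * np.sin(2 * np.pi * f * x + phase)

        popt, _ = curve_fit(model, x, profile, p0=p0)
        offset, amp = popt[0], abs(popt[1])
        return amp / offset                               # modulation depth from the fit

    # Synthetic test: offset 100, amplitude 30, a little noise
    x = np.arange(512)
    test = 100 + 30 * np.sin(2 * np.pi * 0.02 * x + 0.3) + np.random.normal(0, 2, 512)
    print(fit_modulation(test))                           # ~0.3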
RichardJ
19-Tanzanite
(To:GuyB)

More comments and corrections.

Richard
RichardJ
19-Tanzanite
(To:Jbryant61)

On 5/14/2009 6:51:45 AM, Jbryant61 wrote:

>there seems to be 2 ways to
>get the MTF a: ratio the
>modulation of image to object,
>or b: ratio the frequencies -
>I can't get a: to agree.

Because the first point after the FFT is not equal to the DC component, it's equal to twice the DC component (Guy makes mistakes too 🙂 ).

>I think this is a real better
>way than all of the point
>(beads) fitting using
>virendra's equations!

That depends on what you want from all of this. Are you actually trying to characterize the optical system? If you are, then the MTF is useful, but it only tells you so much. It shows how far away from the diffraction limit you are, but not where any deviation came from. Any distortion of the wavefront decreases the MTF.

Richard
PhilipOakley
5-Regular Member
(To:RichardJ)

On 5/14/2009 5:28:47 PM, rijackson wrote:
>On 5/14/2009 6:51:45 AM, Jbryant61
>wrote:
>
>>there seems to be 2 ways to
>>get the MTF a: ratio the
>>modulation of image to object,
>>or b: ratio the frequencies -
>>I can't get a: to agree.
>
>Because the first point after the FFT is
>not equal to the DC component, it's
>equal to twice the DC component (Guy
>makes mistakes too 🙂 ).
>Richard

As Richard effectively notes, it is always worth checking the convention in use for scaling. The FFT/fft and CFFT/cfft, and other people's FFTs (e.g. Matlab's, or your local professor's), all have their own local conventions, foibles, and explanations for their choice!

It is always worth creating one's own known sine wave and/or DC level and applying the 'fft' and seeing the answer it gives, then also changing the sample length...

A colleague today came asking about a 3 dB 'loss' that may have just such a cause!

Philip Oakley
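
(As an illustration of that sanity check - shown here with numpy, whose scaling convention differs from Mathcad's fft/FFT/cfft/CFFT, so the specific factors below apply only to numpy - feed in a known DC level and a known sine and see what comes back.)

    import numpy as np

    n = 256
    x = np.arange(n)
    signal = 3.0 + 1.0 * np.sin(2 * np.pi * 4 * x / n)   # known DC level 3, known sine amplitude 1

    spec = np.fft.rfft(signal)
    print(np.abs(spec[0]) / n)       # numpy's convention: DC bin = n * mean  -> prints 3.0
    print(2 * np.abs(spec[4]) / n)   # one-sided bins carry half the amplitude -> prints 1.0

    # Re-run with a different n: the raw bin values change, the normalized ones don't.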

On 5/14/2009 5:44:15 PM, philipoakley wrote:

>It is always worth creating one's own
>known sine wave and/or DC level and
>applying the 'fft' and seeing the answer
>it gives, then also change the sample
>length...

I agree. In fact, adding a known offset is exactly what I did 🙂

Richard


Everybody is talking only about optical effects. Looking at the plot of the large bar (where you have adequate resolution), it looks to me like those edge effects are mainly electrical, in the processing of the analog video signal. Much of the edge effect is explained by a simple RC low-pass filter. The remaining effects will be at least partly optical, but may also include additional electrical components, perhaps a small inductance.

I've also added a sample and hold facility, so you can have multiple selected regions available at the same time, without duplicating the main picture component.
__________________
Tom Gutman
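
(A minimal sketch of that RC idea - not Tom's worksheet; the intensity levels and time constant are made up. A first-order low-pass turns an ideal bar edge into an exponential approach toward the new level, which is the edge shape under discussion.)

    import numpy as np

    # Ideal bar edge: a dark-to-light step in pixel intensity
    n = 200
    step = np.where(np.arange(n) < n // 2, 20.0, 140.0)

    # First-order (RC-style) low pass applied along the readout direction;
    # tau is in pixels and is purely illustrative.
    tau = 8.0
    alpha = 1.0 / tau
    smeared = np.empty_like(step)
    smeared[0] = step[0]
    for i in range(1, n):
        # each sample relaxes toward the ideal value with time constant tau
        smeared[i] = smeared[i - 1] + alpha * (step[i] - smeared[i - 1])

    # 'smeared' rises exponentially toward the bright level instead of jumping.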
RichardJ
19-Tanzanite
(To:TomGutman)

I also noticed that, and don't know what to make of it. If the wide bar is supposed to transition abruptly from light to dark and dark to light then the measured intensity is really wrong. I don't think it's an analog signal processing issue (even though it may look like one), because the ADC is probably bonded right to the CCD chip and there isn't much in the way of analog electronics. It is weird though. I think the primary question (to Jason) is what is the transition supposed to look like?

Richard

Even if the ADC is built on the same chip as the CCD, there's still a fair bit of analog processing. The CCD itself is busily transferring charges (read: charging capacitors), and the input to the ADC is all analog (and may have significant capacitance). Transmission lines are not the only place analog signals get degraded.

Note the shape of the curve for the bar patterns. To me it also looks like the early part is charging up a capacitor to the mean value. That's what you were fitting with an arbitrary polynomial (one with no theoretical basis).

Do you know of any optical effect that would give that exponential decay curve? Absent evidence to the contrary, Occam's razor calls for problems in the analog signal processing.
__________________
Tom Gutman
RichardJ
19-Tanzanite
(To:TomGutman)

On 5/15/2009 7:16:40 PM, Tom_Gutman wrote:
>Even if the ADC is built on
>the same chip as the CCD,
>there's still a fair bit of
>analog processing. The CCD
>itself is busily transfering
>charges (read charging
>capacitors) and the input to
>the ADC is all analog (and may
>have significant capacitance).
>Transmission lines are not the
>only place analog signals get
>degraded.

Maybe, but there should not be that much cross talk between pixels. If the readout electronics are carrying charge not just from one pixel to the next, but to several subsequent pixels, something's not right. The CCD could not have been designed with a capacitance so high that at the maximum frame rate of the camera you see such effects.

I did wonder if the CCD might be saturated. I've seen some weird and seemingly inexplicable effects when that happens. It's easy to do too, if the camera is in autoexposure mode and there's a lot of black in the scene.


>Note the shape of the curve
>for the bar patterns. To me
>it also looks like the early
>part is charging up a
>capacitor to the mean value.
>What you were fitting with an
>arbitrary polyomial (one with
>no theoretical basis).

Because I don't have any theory 🙂

>Do you know of any optical
>effect that would give that
>exponential decay curve?

No. I cannot imagine how it could be an optical effect. I agree it's probably something in the camera, I'm just not convinced it's as simple as a capacitor charging.

>Absent evidence to the
>contrary, Occam's razor calls
>for problems in the analog
>signal processing.

For me, Occam's razor just calls for "don't know".

It certainly needs to be resolved though, because it's pointless trying to measure the MTF with this going on.

Richard
PhilipOakley
5-Regular Member
(To:RichardJ)

On 5/16/2009 10:29:58 AM, rijackson wrote:
>For me, Occam's razor just calls for
>"don't know".
>
>It certainly needs to be resolved
>though, because it's pointless trying to
>measure the MTF with this going on.

I'd support Tom's idea that if an RC-style decay is seen, it is quite likely to be an artefact of the camera system.
Most folk wouldn't notice it, and there probably has to be an RC decay in the circuit to keep the overall cost within limits. Even expensive systems can have such simple mistakes if they were designed for a slightly different application - most cameras assume natural scenery rather than science experiments.
I remember a line scan camera I designed many years ago which had such artefacts. Plus you can get apparent asymmetries with these things because of short, intense signals, which tend to exaggerate one edge relative to the other.

Philip

Philip Oakley

On 5/16/2009 10:54:31 AM, philipoakley wrote:

>I'd support Tom's idea that if an RC
>style decay is seen that it is quite
>likely to be an artefact of the camera
>system.

I think we all agree it's from the camera. I'm just saying that we don't know what is going on in the camera that causes it. It's an exponential decay, so you can model it as an RC circuit, but that doesn't mean it's actually that simple.

>Most folk wouldn't notice it

At that level? I disagree. If you had such an effect for a scene of a backlit person standing in front of a window, I think it would be really obvious.

> and ther
>probaly has to be an RC decay in the
>circuit to keep the overall cost within
>limits.

The electronics obviously must have some time constant, yes. But I have a very hard time believing that someone would design a camera with a readout rate of x and a time constant of 5x, although that's certainly what it looks like. The only way I can envision that is if both the integration time and readout rate are user adjustable, and the camera lets you set them independently regardless of whether the combination makes sense. That would be a really bad piece of design, but it's possible.

Richard

I agree that there shouldn't be that much cross talk between pixels. But shouldn't, isn't, and can't be are three different things.

There may well be more to the artifact than simple RC charging. But I know that RC charging artifacts are almost inevitable (isn't the typical ADC based on charging a capacitor and then reading out its charge?). Good design should keep such artifacts down to an insignificant level. But an actual camera may not (Philip covered various reasons for that). It could be as simple as somebody overclocking the chip, resulting in inadequate settling time for the ADC. For now my position is that a good deal of the distortion is explainable as, and so likely to be caused by, simple RC charging. The overall distortion will include optical effects, and the results of my attempt at sharpening the image suggest that there are some additional electrical effects.

I have tried adding a bit of inductance, but that was unsuccessful. I had trouble with the stability of the equations, and the result was no better than the simple RC alone. That is consistent with very short leads, and hence no significant inductance.

It is also rather odd that this effect was not noticeable in the bead images. Perhaps this image is from a different camera than those images. I would expect that this level of smearing would have produced quite noticeable comas in the bead images.

In any case, this system is not going to characterize the optical system. As with the bead images, the pixel resolution is too low to accurately determine the intensity curves. Given a theory, it is barely enough to do fitting to get parameters. But not to validate the theory, or to distinguish one theory from another. And with the smearing artifact, the overall measured response is likely to reflect the camera system more than the optical system.
__________________
Tom Gutman