MTF

Jbryant61
4-Participant

Any suggestions on how to get the average of the max and min values in the modulation image?

Thanks
Jason
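A minimal numpy sketch of the question itself, assuming the modulation image is available as an array (the random array here is only a stand-in for the camera data):

```python
import numpy as np

# Stand-in for the modulation image; in practice this comes from the camera.
img = np.random.rand(256, 256)

midpoint = (img.max() + img.min()) / 2   # average of the max and min values
# The related quantity discussed later in the thread:
modulation = (img.max() - img.min()) / (img.max() + img.min())
```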
RichardJ
19-Tanzanite
(To:TomGutman)

On 5/16/2009 3:57:02 PM, Tom_Gutman wrote:

>It is also rather odd that
>this effect was not noticeable
>in the bead images. Perhaps
>this image is from a different
>camera from those images.

I think it's different settings on the same camera. The beads were weak fluorescent sources; now we have transmission through a target using a halogen lamp as a source.

>IAC this system is not going
>to characterize the optical
>system.

Until this issue is resolved, that's for sure.

>As with the bead
>images, the pixel resolution
>is too low to accurately
>determine the intensity
>curves.

For a perfect bar pattern I think it's good enough to get a reasonable measure of the MTF.

Richard

>>For a perfect bar pattern I think it's good enough to get a reasonable measure of the MTF.<<

But without additional assumptions the MTF does not really characterize the system. It seems to assume that sinusoidal input results in sinusoidal response, at the same frequency. Not necessarily true if there are non-linear distortions.

Consider the bar pattern. Distortion by a simple low pass filter, with an effective cutoff somewhere between the bar frequency and three times that, will reduce the bar pattern to a sinusoid. An exponential decay, such as from an RC circuit, will create a sawtooth wave. I don't think the resolution here is good enough to distinguish between the two possibilities.
__________________
        Tom Gutman
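A sketch of the two cases Tom describes, in numpy rather than Mathcad (the bar frequency and filter constants are invented for illustration):

```python
import numpy as np

N = 1024
x = np.arange(N)
f0 = 8 / N                                             # bar frequency, cycles per pixel
bars = np.where(np.sin(2 * np.pi * f0 * x) >= 0, 1.0, 0.0)

freqs = np.fft.fftfreq(N)
spec = np.fft.fft(bars)

# Case 1: a low pass with cutoff between f0 and 3*f0 keeps only the
# fundamental, so the bars come out as a sinusoid.
brickwall = np.abs(freqs) < 2 * f0
sinusoid = np.fft.ifft(spec * brickwall).real

# Case 2: a single-pole RC response, H(f) = 1/(1 + i*f/fc), leaves
# exponential tails on each edge, i.e. a sawtooth-like wave.
fc = 2 * f0
rc = 1.0 / (1.0 + 1j * freqs / fc)
sawtoothish = np.fft.ifft(spec * rc).real
# Plot sinusoid and sawtoothish against x to see the two shapes.
```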
RichardJ
19-Tanzanite
(To:TomGutman)

On 5/17/2009 5:45:11 PM, Tom_Gutman wrote:
>>>For a perfect bar pattern I think it's good enough to get a reasonable measure of the MTF.<<
>
>But without additional
>assumptions the MTF does not
>really characterize the
>system. It seems to assume
>that sinusoidal input results
>in sinusoidal response, at the
>same frequency. Not
>necessarily true if there are
>non-linear distortions.

I agree. I meant that if the only distortion is optical, it's good enough.

Richard

Due to some last minute fixes, the sample and hold process was not working properly. This fixes it, and makes some minor emendations.

I also replaced Richard's polynomial adjustment with an exponential. Fits nicely.
__________________
        Tom Gutman

On 5/14/2009 3:24:23 AM, Jbryant61 wrote:
>Yes we do infact see this. Is
>astigmatism a likely cause -
>do you know of any other
>reasons why this is so.

There are all kinds of aberrations that lead to distortion at the edge of a field. It's just a fact of optics. It's not particularly instructive to identify what they are if you're not designing & optimizing the system.

Just measure them and know that they're there if you need to correct for them.

- Guy
IRstuff
3-Visitor
(To:GuyB)

The only other reason for measuring the aberrations would be as a calibration, i.e., to correct for the aberrations as a function of position in the field.

TTFN,
Eden
Jbryant61
4-Participant
(To:GuyB)

Guy, thanks for your time and explanations. I fully understand if you don't have time or don't want to answer my questions.


>I think it's time for you to either read
>up on what the MTF is or (more valuable)
>experiment on your own w/ Mathcad. Take
>an object's Fourier transform, multiply
>it by an assumed MTF function, and
>inverse FFT the product to see how your
>optics will blur an object. Plug in
>actual numbers to compare your
>simulations w/ your measurements, and
>you will see on-screen the relationships
>between them.
>
> - Guy

This is why I kept with the Virendra equations for so long - I couldn't get my head around this MTF theory. I have been reading up and that's where the confusion is. I read that you can divide the spectra to get the MTFs, as well as divide the modulation depths (or contrast, as you term it). The modulation depth of the object doesn't make sense to me.



>An object in (x,y) can be expressed as a
>Fourier series of spatial frequencies
>(kx,ky).
>
>A perfect imaging system would tranfer
>the spatial frequencies with no loss and
>generate an image that is the inverse
>Fourier transform of (kx/M,ky/M), which
>are the original frequencies scaled by
>the magnification factor M of the
>optics.

OK, so the image spectrum, divided by the magnification, all divided by the object spectrum, gives the MTF. So in your worksheet why is the frequency f/Npts? Why doesn't NA come into this, since after all it defines resolution (0.61*lambda/NA)?



>Real imaging systems blur images,
>however. The way this happens, in the
>math, is that the Fourier spectrum of
>the image is not a perfectly-scaled copy
>of the object spectrum: it is the
>product of the original spectrum and the
>modulation transfer function (MTF) of
>the optics[*].
>
>So, to see the effect of an imaging
>system you take the perfect Fourier
>transform of your object, multiply it by
>the MTF of your optics, and inverse
>Fourier transform the product. Your
>inverse Fourier transform will generate
>a model of the image you detect - it
>will be blurred if the MTF is less than
>1 for any of the spatial frequencies in
>your object.

I read that there is a cut off frequency associated with the diffraction limit. But again there seems to be a strange way to represent this on a graph where both the x- and y-axes run from 0 to 1. What value <1 is considered equivalent to the Strehl ratio = 0.8 criterion for the diffraction limit?



>
>[*] As long as you're dealing w/
>broadband light, you don't need to
>include any complex numbers for the
>transfer function of the optics.
RichardJ
19-Tanzanite
(To:Jbryant61)

On 5/14/2009 10:40:07 AM, Jbryant61 wrote:

>This is why I kept with the
>Virendra equations for so long
>- I couldn't get my head
>around this MTF theory.

The concept of MTF is easy. Take an object with a perfect sinusoidal intensity going from light (intensity=1) to dark (intensity=0). If that is imaged through a perfect optical system we will get a sinusoidal image going from intensity=1 to intensity=0. No optical system is perfect though, if only because of diffraction. So the sinusoid is blurred, and energy is moved from the peaks to the valleys and the valleys to the peaks. The sinusoid therefore no longer goes from 0 to 1, but from something greater than zero to something less than 1. In a diffraction limited case the blurring depends on the spatial frequency of the sinusoid and the wavelength of the light.

To represent the intensity variation as a number we can calculate (max-min)/(max+min), which is the modulation at the frequency of the sinusoid, for a specific wavelength. If we divide the image modulation by the object modulation we have the MTF for a specific spatial frequency and a specific wavelength. When the optical system has aberrations they will also blur the image, and the MTF will be lower than the diffraction limited case.

A graph of MTF vs spatial frequency is very useful, because it shows you the resolving power in a far more detailed way than just looking at a simple formula like the Rayleigh criterion. Any wavefront distortion will lower the MTF though, so if the MTF is low you do not know why, only that it is. You can get the MTF either from measurements or by calculation in a software package like Zemax (look up MTF in the manual: it can use more than one method to calculate it).
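A minimal numpy rendering of that definition, with invented amplitudes so the answer is known in advance:

```python
import numpy as np

# Illustrative sinusoidal "object" and a blurred "image" of it.
x = np.linspace(0, 1, 500)
f = 10                                        # spatial frequency, cycles per unit length
obj = 0.5 + 0.5 * np.sin(2 * np.pi * f * x)   # perfect sinusoid, 0..1
img = 0.5 + 0.3 * np.sin(2 * np.pi * f * x)   # blurred copy: peaks pulled down, valleys filled in

def modulation(profile):
    """(max - min) / (max + min): the contrast of a sinusoidal profile."""
    return (profile.max() - profile.min()) / (profile.max() + profile.min())

# MTF at this frequency = image modulation / object modulation
mtf = modulation(img) / modulation(obj)
print(mtf)   # ~0.6 here, by construction
```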

> I have
>been reading up and thats
>where the confusion is. I read
>that you can divide the
>spectra to get the MTF's as
>well as divide the modulation
>depths (or contrast as you
>term it). It dosen't make
>sense to me the modulation
>depth of the object.

The spectra are just the amplitude vs frequency, so in either case, for a sinusoid, you are just calculating the ratio of the amplitudes.

For sinusoidal objects, the modulation depth of the object is just its contrast.

> So in your
>worksheet why is the frequency
>f/Npts?.

I fixed that so f is now just f, in pixels.
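For what it's worth, the f/Npts business is just DFT indexing; a short numpy illustration (Npts = 256 here is arbitrary):

```python
import numpy as np

N = 256                        # number of samples along the profile
freqs = np.fft.fftfreq(N)      # bin k sits at k/N cycles per pixel
print(freqs[1], 1 / N)         # both 0.00390625: the "f/Npts" scaling
```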

> Why doesn't NA come
>into this, after all this
>defines resolution (0.61*lamda
>/ NA)?

It does, in the theory. This is a good introductory write up of MTF:

http://www.mellesgriot.com/products/optics/os_2_2.htm


>I read that there is a cut off
>frequency associated with the
>diffraction limit. But again
>there seems to be a strange
>way to represent this on a
>graph where both x and y-axes
>are 0 to unity.

No, there is no real cutoff. The MTF is obviously wavelength dependent, and sometimes it would be nice to have a theoretical expression that removes that dependence, for example when plotting the diffraction limited MTF. It's the same concept as plotting the Airy disk with x-axis units of wavelengths.

The MTF drops with increasing spatial frequency, so we can pick an upper limit to that frequency and say that we don't care about anything higher. Then we normalize the spatial frequencies to that upper frequency. If the diffraction limited MTF graph is plotted with normalized frequencies, the x-axis goes from 0 to 1 and the graph is the same for all wavelengths. We could plot the graph for values greater than 1, but since we decided that higher frequencies are not important, we don't. There are several ways to calculate the upper frequency: see the link above.

Richard
RichardJ
19-Tanzanite
(To:GuyB)

On 5/13/2009 2:35:14 PM, GuyBeadie wrote:
>On 5/13/2009 9:57:50 AM, rijackson
>wrote:
>>If what you want is the
>>amplitude of the MTF there is
>>a better approach
>>
>>Richard
>
>The way you did it isn't quite right -
>the MTF must be normalized to the
>zero-frequency response, so you cannot
>subtract off the mean value of the image
>to get an MTF.

Yes I can, because all I took from the FFT was an estimate of the sinusoid parameters. Subtracting the mean makes it easier to do so. With a f**ked up frequency calculation 🙂 (now fixed). Actually, atan was also a bad choice for calculating the phase: atan2 is much better.

>Furthermore, the properly-normalized MTF
>of the image does not correspond to the
>MTF of the lens system. If your object
>has no feature at a spatial frequency f,
>then neither will your image. So your
>ability to measure MTF at a frequency f
>is affected by your object's spatial
>frequency spectrum.
>
>To obtain the MTF of your optics alone
>means dividing the spatial frequency
>spectrum of your image by the spatial
>frequency spectrum of your object

That is all true, but I didn't calculate the MTF. I shouldn't have called the fitting function MTF, because it's not. It's just the image intensity. Once you have the amplitude, A, the modulation can be calculated as simply A/MEAN, where MEAN is what I subtracted off.

Except that, as you observed, the background has not been subtracted, so the calculated value is ridiculously low.
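A sketch of the fit-then-A/MEAN calculation Richard describes, on a synthetic profile (the mean, amplitude, frequency, and noise level are all invented; note the background caveat in the comments):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
x = np.arange(200, dtype=float)
profile = 100.0 + 30.0 * np.sin(2 * np.pi * 0.05 * x + 0.3)   # invented sinusoid
profile += rng.normal(0, 2.0, x.size)                          # detector noise

def sine(x, mean, amp, freq, phase):
    return mean + amp * np.sin(2 * np.pi * freq * x + phase)

# The FFT is only needed to seed the frequency guess; the amplitude comes
# from the fit itself.
p0 = [profile.mean(), profile.std() * np.sqrt(2), 0.05, 0.0]
popt, _ = curve_fit(sine, x, profile, p0=p0)
mean_fit, amp_fit = popt[0], abs(popt[1])

# Any unsubtracted background inflates MEAN and so deflates this number.
print(amp_fit / mean_fit)    # Richard's A/MEAN, ~0.3 here by construction
```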

> - and
>the frequency spectrum at f of a
>sinusoidal object at f will be different
>from the f-component of a bar pattern at
>f.

That is true. But as you noted in another post, it's just a scaling factor.

Richard

On 5/14/2009 4:56:41 PM, rijackson wrote:
>.... I shouldn't have
>called the fitting function MTF, because
>it's not. It's just the image intensity.
>Once you have the amplitude, A, the
>modulation can be calculated as simply
>A/MEAN, where MEAN is what I subtracted
>off.

I agree with this, and with those corrections it all appears correct (including my factor of 2 error 🙂 ).

There's still one thing that puzzles me after all that, though. After you do the FFT you go back to the original data and fit it to a sine wave to get your actual sinusoidal amplitude.

Did you need to do the FFT to begin with? I agree that a fit will extract a better approximation of the amplitude than averaging over discretized peaks, but I don't see the FFT as being required for the fit.

- Guy
IRstuff
3-Visitor
(To:GuyB)

Another approach is to FFT the line spread function. It's a bit tricky, but the entire MTF curve can be generated, and it avoids some sampling artifacts.

TTFN,
Eden
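A sketch of Eden's suggestion, assuming a Gaussian line spread function purely for illustration:

```python
import numpy as np

x = np.arange(-64, 64)                 # pixels
sigma = 2.0                            # assumed LSF width, pixels
lsf = np.exp(-x**2 / (2 * sigma**2))   # stand-in line spread function

otf = np.fft.rfft(lsf)                 # one-sided spectrum is enough
mtf = np.abs(otf) / np.abs(otf[0])     # normalize to the zero-frequency response
freqs = np.fft.rfftfreq(lsf.size)      # cycles per pixel

# mtf[k] is the transfer at spatial frequency freqs[k]; the whole curve
# comes out of one transform, which is the appeal of the method.
```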
RichardJ
19-Tanzanite
(To:IRstuff)

On 5/15/2009 12:29:06 AM, eden_mei wrote:
>Another approach is to FFT the
>line spread function. It's a
>bit tricky, but the entire MTF
>curve can be generated, and it
>avoids some sampling
>artifacts.

Essentially an impulse response approach. Interesting idea. That should also be possible using an edge, which gets around the problem that the ideal line is infinitely narrow, and therefore not well resolved by most real world optical systems 🙂

Richard

On 5/15/2009 5:21:43 PM, rijackson wrote:
>On 5/15/2009 12:29:06 AM, eden_mei
>wrote:
>>Another approach is to FFT the
>>line spread function. It's a
>>bit tricky, but the entire MTF
>>curve can be generated, and it
>>avoids some sampling
>>artifacts.
>
>Essentially an impulse response
>approach. Interesting idea. That should
>also be possible using an edge, which
>gets around the problem that the ideal
>line is infinitely narrow, and therefore
>not well resolved by most real world
>optical systems 🙂
>
>Richard

We started out doing edge response, but you wind up having to differentiate at some point, which generates a TON of noise.

As for the exponential decay, I haven't actually looked at the data, but one potential source is poor drive on the CCD. If the buckets are not fully transferred on the correct clock, then the residual charge being eventually clocked out will have an exponential decay.

TTFN,
Eden
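A quick illustration of the noise problem Eden mentions with the edge route (the edge shape and noise level are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.arange(-50, 50, dtype=float)
esf = 0.5 * (1 + np.tanh(x / 3.0))            # smooth stand-in edge profile
noisy = esf + rng.normal(0, 0.01, x.size)     # 1% pixel noise

lsf_clean = np.gradient(esf, x)               # differentiate ESF -> LSF
lsf_noisy = np.gradient(noisy, x)

# The derivative amplifies the noise; in the tails, where the true LSF is
# essentially zero, the result is pure noise.
print(np.std(lsf_noisy - lsf_clean), lsf_clean.max())
```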
PhilipOakley
5-Regular Member
(To:IRstuff)

On 5/16/2009 8:12:36 PM, eden_mei wrote:
>As for the exponential decay, I haven't
>actually looked at the data, but one
>potential source is poor drive on the
>CCD. If the buckets are not fully
>transferred on the correct clock, then
>the residual charge being eventually
>clocked out will have an exponential
>decay.
>
>TTFN,
>Eden

Interesting point. It is something I will have to look out for.
Does it also occur in CMOS multiplexed designs?

Philip Oakley

On 5/17/2009 5:42:47 AM, philipoakley wrote:
>On 5/16/2009 8:12:36 PM, eden_mei wrote:
>>As for the exponential decay, I haven't
>>actually looked at the data, but one
>>potential source is poor drive on the
>>CCD. If the buckets are not fully
>>transferred on the correct clock, then
>>the residual charge being eventually
>>clocked out will have an exponential
>>decay.
>>
>>TTFN,
>>Eden
>
>Interesting point. It is something I
>will have to look out for.
>Does it also occur in CMOS
>multiplexed designs?
>
>Philip Oakley

It shouldn't occur in CMOS cameras, since they do not have CCD structures.

> The "exponential" tail is ENORMOUS, ~35 pixels. There is definitely a problem in the camera. Whatever it is, I don't think you're going to get good MTFs out of that data. The tails are one-sided, which automatically rules out the optics, and so it's probably circuit-related. Sometimes, chip outputs behave like open-drain devices, but both rising and falling transitions are being affected. There could be Miller-coupling on the gate driver that drives the final stage. Gate drivers tend to be open-drain, and a push-pull stage could have open-drain drives for both pullup and pulldown. The most obvious place is the output of the camera sensor itself, and it may need to be buffered from whatever it's trying to drive.

> The uncorrupted transition regions appear to be way too long. It looks as if it would be around 10 pixels, if it hadn't been messed up by the tail. I haven't been following these threads in detail, so my calculations may be completely off the mark, but unless your optics are >f/10-ish, and your pixels are under 2 um, either the optics are defocused, or there are serious aberrations.

> This may be partly corroborated by the fact that I can see "bloom" above and below the smaller bars, but it might be something else, like DIRTY optics. Optics that sit in open air, particularly in a factory environment, can collect smog, etc., resulting in a film of oily residue that degrades optical performance. I'm not convinced that it's that, since it doesn't appear on the leading edges, but I can't think of anything that affects only 3 edges, either.

> Another possibility is that your bar target illumination is not perfectly symmetrical, or there are glint artifacts from your bar targets. Is your bar target a metal mask? Or a photolithographic mask? Just how is your mask being illuminated?

TTFN,
Eden
RichardJ
19-Tanzanite
(To:IRstuff)

On 5/17/2009 6:13:51 PM, eden_mei wrote:
>
>> The uncorrupted transition regions appear to be way too long. It looks as if it would be around 10 pixels, if it hadn't been messed up by the tail. I haven't been following these threads in detail, so my calculations may be completely off the mark, but unless your optics are >f/10-ish, and your pixels are under 2 um, either the optics are defocused, or there are serious aberrations.

I don't know the f#, but there are about 7 pixels/micron.

The 0.5 micron lines look reasonably well resolved.

Richard

On 5/18/2009 8:27:55 AM, rijackson wrote:
>On 5/17/2009 6:13:51 PM, eden_mei wrote:
>>
>>> The uncorrupted transition regions appear to be way too long. It looks as if it would be around 10 pixels, if it hadn't been messed up by the tail. I haven't been following these threads in detail, so my calculations may be completely off the mark, but unless your optics are >f/10-ish, and your pixels are under 2 um, either the optics are defocused, or there are serious aberrations.
>
>I don't know the f#, but there are about
>7 pixels/micron.
>
>The 0.5 micron lines look reasonably
>well resolved.
>
>Richard

I was referring to image space. The classical diffraction blur is 2.44*lambda*f#, and assuming visible light at 550 nm, the diffraction blur should be on the order of 1 physical pixel. A typical CMOS camera is no smaller than about 1.5 um per pixel, which means that a sharp edge should blur across, at most, 1 or 2 pixels. The transitions back to background levels are dozens of pixels, which should not be the case.
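Eden's numbers, run explicitly (550 nm and 1.5 um pixels are from the post; the f-number range is only for illustration):

```python
wavelength_um = 0.55   # green light, per the post
pixel_um = 1.5         # small CMOS pixel, per the post
for fnum in (1, 2, 5, 10):
    blur_um = 2.44 * wavelength_um * fnum                          # classical diffraction blur
    print(fnum, round(blur_um, 2), round(blur_um / pixel_um, 1))   # f#, um, pixels
```

At f/1 to f/2 the blur is roughly 1 to 2 pixels, as stated; only around f/10 does it approach 10 pixels.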

I hadn't realized that the camera is looking at a microscope slide. The slide, hopefully, is only backside illuminated, ideally with a diffused source. There must be zero frontside illumination; any frontside light will seriously mess up the measurement. If you are in a room that cannot be darkened, you should use a hood over the microscope.

Uniformity is mandatory; otherwise, the uneven illumination will cause measurement errors.

The reason we concentrate on the large target is precisely because it's supposed to be a no-brainer. If you can't even get the large target to behave ideally, how can you expect the smaller targets to be correct?

TTFN,
Eden
Jbryant61
4-Participant
(To:IRstuff)


"The reason we concentrate on the large target is precisely because it's supposed to be a no-brainer. "


This is what attracted me to this method, away from the spatial bead measurement method where I just measured the volume that encloses 50% of the total energy.

But it does seem that even with this MTF method there are still uncertainties:

1: The actual theory of what the ultimate (diffraction) limit is

2: The scalar correction factors for using a bar chart rather than a sine chart. At what point do you stop using the 4/pi factor?

thanks
Jason

On 5/18/2009 12:24:40 PM, Jbryant61 wrote:
>
>2: the scalar correction
>factors for using a Bar chart
>rather than a sine chart. At
>what point do you stop using
>the 4/pi factor?

Set up your own mathcad sheet where you model the input & output of an optical system.

Do the processing on the output and you'll see.

- Guy
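A numpy version of the experiment Guy suggests, which also bears on the 4/pi question; the Gaussian MTF is an arbitrary stand-in, not a real lens model:

```python
import numpy as np

N = 1024
x = np.arange(N)
f0 = 16 / N                                    # bar frequency, cycles per pixel
bars = np.where(np.sin(2 * np.pi * f0 * x) >= 0, 1.0, 0.0)   # square-wave target
sine = 0.5 + 0.5 * np.sin(2 * np.pi * f0 * x)                # sine target

freqs = np.fft.fftfreq(N)
mtf = np.exp(-(freqs / 0.02) ** 2)             # assumed Gaussian MTF, not a real lens

def image_of(obj):
    """Blur an object: FFT, multiply by the MTF, inverse FFT."""
    return np.fft.ifft(np.fft.fft(obj) * mtf).real

def contrast(img):
    return (img.max() - img.min()) / (img.max() + img.min())

# Here the MTF has killed the bars' harmonics (3*f0, 5*f0, ...), so only the
# fundamental survives, and it is 4/pi times larger than the sine's:
print(contrast(image_of(bars)) / contrast(image_of(sine)))   # about 4/pi = 1.27
```

Widen the assumed MTF until the third harmonic gets through and the ratio drifts away from 4/pi; that, presumably, is the point at which the factor stops applying, which is what Guy's exercise shows.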
IRstuff
3-Visitor
(To:GuyB)

One other thing I keep forgetting to mention. You should really use a dark field target. That'll limit the amount of light on the camera, and minimize any annoying problems with blooming, etc.


TTFN,
Eden

One thing is that averaging something like 100 frames will reduce the noise impact, so that's probably worth doing, regardless.

But, as was mentioned yesterday, your lighting issues are confounding and overwhelming the measurement issues.


TTFN,
Eden
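The sqrt(N) payoff of Eden's frame-averaging suggestion, in a couple of lines (the frame count and noise level are invented):

```python
import numpy as np

rng = np.random.default_rng(2)
frames = rng.normal(100.0, 5.0, size=(100, 64, 64))   # 100 noisy frames
print(frames[0].std(), frames.mean(axis=0).std())     # ~5.0 vs ~0.5: a sqrt(100) gain
```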

>>The 0.5 micron lines look reasonably well resolved.<<

I am not at all certain about the sizes of the bars. The comments call for .5 and 1.5 μm bars. But the large bars are nowhere near three times the size of the small bars. Maybe 50% larger. I therefore consider the actual sizes of the bars to be unknown, and any assumptions speculative.
__________________
        Tom Gutman

Almost exactly 50%. I get a span of about 63 pixels for 9 bars and spaces for the small bars, and about 95 pixels for the 9 bars and spaces for the larger bars.

TTFN,
Eden
Jbryant61
4-Participant
(To:IRstuff)

Hi Eden, this was an error on my part. The sizes of the bars are 0.5um, 0.75um, 1um and 1.5um.

Sorry for any confusion.
Jason

But only two sizes are actually shown. Presumably either the .5μm and .75μm or the 1μm and 1.5μm.

As to why the interest in the wide border bar: because it is on a large enough scale that you have enough resolution to see the large camera artifacts. Those same artifacts will affect the fine scale grids as well. You can see them affecting the baseline. They will also affect the shape and amplitude of the oscillations, although you don't have enough resolution to actually see that effect.
__________________
        Tom Gutman
RichardJ
19-Tanzanite
(To:IRstuff)

You obviously know much more about CCDs than I do 🙂

Richard
Jbryant61
4-Participant
(To:RichardJ)

I think too much attention has been focused on the thick lines, which are not part of the targets; they just enclose a group of target patterns (different spatial frequencies), so I have no proof that they are etched to the same level and have the same flatness/thickness spec.

I do have tilt on my sample.


Also, the CCD isn't saturated. I was able to get a more symmetric response by changing focus, but as I said earlier, this line isn't of any interest to me. (Sorry for not responding over the weekend.)

I still can't see how Guy obtained his expression for vmax in terms of the NA!


Thanks
Jason

On 5/18/2009 9:31:58 AM, Jbryant61 wrote:
>I think too much attention has
>been focused on the thick
>lines which are not part of
>the targets,

Ah, but they serve as their own targets in a way, and the asymmetry indicates possible asymmetries that could be affecting your results.



>I do have tilt on my sample.
>
>.... I was able to get a more
>symmetric response by changing
>focus, but as I said earlier,
>this line isn't of any
>interest to me.

This, I think, augments my suspicion that the problem isn't electrical but rather optical. I suspect a combination of nonuniform illumination and improper focusing gives rise to the blurring.

I have certainly seen asymmetric streaks when moving a light source around, though I admit to being puzzled that this would occur on a well-controlled microscope platform. A tilted sample, however, means all bets are off. The tilt will definitely induce asymmetric effects on the imaging.

The other contributing factor will be the effect of focusing to the bar patterns: the best contrast for some of the bar patterns will be at focus spots that would make a real image look fuzzy.

As the spatial frequency goes up in an imaging system, it becomes harder and harder to see what's in focus. As a result, there's a surprisingly large range of focal positions to choose from when picking your measurement spot for a single-frequency object. With a true image, where there's a broad spread in frequency information, there's less ambiguity in the overall focus.

So, by concentrating on the patterns, it's not too surprising that the sharp edge nearby is out of focus.

- Guy

Your thread:

http://collab.mathsoft.com/~Mathcad2000/read?124147,63

Only Guy & Richard (or vice versa) have the optical answers. I have read enough on the web to see that manufacturers have worked around the very high difficulty, near impossibility, of fitting a function to such a data set: each has invented its own MTF soup. There is a MIL standard, where you can see some of Guy's functions in the "edge detect" that he gently redirected me to. As you can see from the attached, no method is best versus the others. All that is because the Fourier DFT does not distinguish shapes. Each experiment will have to be treated individually, and for that there is no general method known.

My Delta is not bad at all; I just made it demo a Boltzmann, but it could be an Airy ... not tried, a RemToDo. At least Delta would be an invariant, and by the same token "invariant error-wise". If you visit this worksheet and some links, don't miss "My preferred cow"! As you can see, my interest starts at the data set only, which I then plug into the DFT. I don't want to spend more time digitizing the picture in the first example; it is of too low quality. The data set would make a lot of difference.

Also, bear in mind that fitting a data set is a conjecture, based on the fact that some data points are true but they end up weighted by untrue ones, and there is no way to distinguish which ones would raise a finger and say "I'm a good jack".

Salut Jason.

Jean
Jbryant61
4-Participant
(To:IRstuff)

"Another possibility is that your bar target illumination is not perfectly symmetrical, or there are glint artifacts from your bar targets. Is your bar target a metal mask? Or a photolithographic mask? Just how is your mask being illuminated?"


The bar target is a mask on a glass slide that is ca. 1 mm thick. The collection optics (an infinity corrected objective lens) collect from above. The illumination is from below, i.e. an electroluminescent white light source that I stick to the bottom of the slide. The light is incoherent and pretty much uniform over the region I image.