
resample function

Jbryant61
4-Participant

resample function

Hi.

Eden kindly gave me this fantastic function to resample a matrix at higher resolution. It is only now that I am using it and can't quite work out what's actually happening - although it does the job beautifully. Would any experts be kind enough to give me a quick explanation?

I can see it starts with taking the FFT and then checking for odd number of pixels in the image. But from there I am lost.
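I don't have Eden's worksheet in front of me, so the following is only a sketch of the standard technique it appears to use: zero-padding the FFT, which performs band-limited (sinc) interpolation. The function name and all details are mine, not Eden's.

```python
import numpy as np

def fft_resample(img, factor):
    """Upsample a 2D array by zero-padding its spectrum.

    The FFT is centred with fftshift, surrounded by zeros, and
    inverse-transformed; the factor**2 rescaling preserves the
    mean intensity of the image.
    """
    n0, n1 = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))
    N0, N1 = factor * n0, factor * n1
    big = np.zeros((N0, N1), dtype=complex)
    r0, r1 = (N0 - n0) // 2, (N1 - n1) // 2
    big[r0:r0 + n0, r1:r1 + n1] = F
    return np.fft.ifft2(np.fft.ifftshift(big)).real * factor**2
```

With an odd number of pixels the shifted spectrum is symmetric about its centre bin and this padding is exact; with an even number there is a lone Nyquist row/column that strictly should be split between the two edges of the padded spectrum. That is presumably what the odd-pixel test you noticed in the worksheet is handling.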

thanks
Jason
83 REPLIES
Jbryant61
4-Participant
(To:TomGutman)

Thanks Tom.
>
>One reason I largely dropped
>out of the original discussion
>is that you were blindly
>misapplying tools without
>properly validating them. You
>never really validated the
>Virenda equation as a suitable
>fit.

I sort of did. 1: It is the expected functional form of the PSF and 2: it was better than my previous attempts at using Gaussian and Bessel functions. So it was an improvement.

>The circle of half the
>total power is not a suitable
>measure of the size of the
>image. Because of their large
>size the rings can have
>appreciable power, but they do
>not contribute to the image
>size, which is controled by
>the central peak.

Well, encircled energy is exactly this measure and is used throughout the optics world - in particular Zemax. The reason why we should cater for the rings (if we can) is that in the presence of spherical aberration, on one side of focus, the ringing is enhanced and the actual central peak is narrower - in fact narrower than the diffraction limit. I must avoid calling this the best focus because it isn't.
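For readers following along, here is a minimal numerical sketch (my own, not from Zemax) of the encircled-energy measure being discussed: sort pixels by distance from the intensity centroid and find the radius enclosing a given fraction of the total power.

```python
import numpy as np

def encircled_energy_radius(img, fraction=0.5):
    """Radius (in pixels, from the intensity centroid) enclosing
    `fraction` of the total power in the image."""
    y, x = np.indices(img.shape)
    total = img.sum()
    cy, cx = (y * img).sum() / total, (x * img).sum() / total
    r = np.hypot(y - cy, x - cx).ravel()
    order = np.argsort(r)
    cum = np.cumsum(img.ravel()[order])
    return r[order][np.searchsorted(cum, fraction * total)]
```

As a sanity check: for a uniform disk, the half-power radius should come out as the disk radius divided by sqrt(2).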


>You never
>validated the assumption that
>the beads were small enough to
>be considered as points for
>your analysis.

I think this was done some time ago by Richard in another topic, "convolution". The other reason for building this into the "theoretical calculation" part, where I take best estimates at everything involved, is that I can then extend this to estimate what any other sized object should image as, e.g. 2um, 5um.

>You compared
>them to the wavelength of the
>light, but never compared them
>to the size of the spots you
>were trying to measure, nor,
>perhaps more importantly, to
>the accuracy that you were
>looking for.
>__________________
>    Tom Gutman

I was happy with the accuracy from Virendra for a single bead. I did want to take say 10 single beads and recreate a superresolved image as each one of these samples the pixels slightly differently and gives more information, but I don't know how to do it.



Obviously when another method comes along that can reproduce what I actually observe on the CCD, that appears to do so better than Virendra, I get "seduced!" This was the case of Resample.

Jason
PhilipOakley
5-Regular Member
(To:Jbryant61)

Yes there is a lot of 'fiddling' going on. 😉

It is almost always the case that imaging performance is a compromise between the resolution and field of view, balanced against the available pixels and the quality of the optics. [I call this the 'anchor chain' philosophy - everything is a weakest link - nothing can be improved unless everything is improved.]

The various techniques being proffered are then the community's negotiated best-fit compromise: trying to measure things that haven't quite been measured, claim performance that can't actually be seen, infer capability that perhaps isn't there, etc.

However, it is a really good exercise in understanding the mathematics and its basis, so that the claims can be justified given appropriate, well-stated assumptions (did we miss that last bit? 😉)

It's a bit like Engineering.....

Philip Oakley

Engineering is the art of moulding materials we do not wholly understand into shapes we cannot precisely analyse, so as to withstand forces we cannot really assess, in such a way that the community at large has no reason to suspect the extent of our ignorance. (Dr AR Dykes, Inst Struc Eng. 1976)

If I understand correctly you're:

1) taking a bead that is almost, but not quite, small enough to be considered a point source

2) magnifying its image onto a CCD with optics that are near, but not quite, diffraction limited

3) resolving the image with pixels large enough that you can only get a few pixels across the PSF

and

4) Trying to take the few-pixels worth of data to answer:
- What is the best focus spot?
- How many times diffraction-limited are my optics?
- Can I find a better set of optics to do my imaging?

Is this an accurate statement of the overall task set?

- Guy
GuyB
(To:Jbryant61)

Oh - one important question I forgot to ask:

When acquiring your images, what is the wavelength range of light incident on the CCD from the beads and the surrounding background?

- Guy
Jbryant61
4-Participant
(To:GuyB)

The fluorescence band is 500 - 540nm.

Background light is all but eliminated by filters.

thanks
Jason

As I see it, your experimental parameters are:

>> They [beads] are 0.2um so

>> The fluorescence band is 500-540nm.

>> [CCD resolution] 10 pixels diameter
>> to the 1st minima.

>> DOF is c.a 500nm. I can clearly
>> define focus to within 200nm.


And you're looking for:

>> Im looking for [MTF] changes of
>> 10 - 20%, not 1-2%.


I think you're probably stuck. Your uncertainties are greater than 10-20%, so you probably won't get any better analysis than you've already got.

The beads have a diameter 40% of the wavelength - they aren't point sources. Furthermore, as Tom pointed out, they aren't plane objects either - they have a thickness which corresponds to a large fraction of your DOF. So, they're not only structured along x & y but z as well. Convolving the fluorescence emitted at each point within a bead will create blurring just from the difference in focus from top to bottom.
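Guy's point about the bead size can be checked numerically. The sketch below uses only my own assumptions: a Gaussian stand-in for the PSF with a 250 nm FWHM, the 200 nm bead modelled as a uniform flat disk, and no z-structure at all. Even this simplified 2-D bead broadens the spot measurably.

```python
import numpy as np

dx = 5.0                                  # grid step, nm
x = np.arange(-2000.0, 2000.0, dx)
X, Y = np.meshgrid(x, x)
R = np.hypot(X, Y)

sigma = 250.0 / 2.355                     # Gaussian sigma for a 250 nm FWHM PSF
psf = np.exp(-R**2 / (2 * sigma**2))      # stand-in for the diffraction PSF
bead = (R <= 100.0).astype(float)         # 200 nm bead as a uniform flat disk

# FFT convolution; both arrays are centred, hence the shift bookkeeping
img = np.fft.fftshift(np.fft.ifft2(
    np.fft.fft2(np.fft.ifftshift(psf)) * np.fft.fft2(np.fft.ifftshift(bead)))).real

def fwhm(profile, step):
    """Crude FWHM: count samples at or above half maximum."""
    p = profile / profile.max()
    return step * np.count_nonzero(p >= 0.5)

mid = len(x) // 2
w_point, w_bead = fwhm(psf[mid], dx), fwhm(img[mid], dx)
```

Under these assumptions the finite bead broadens the spot by roughly ten percent, i.e. comparable to the MTF changes being sought, before any axial structure is even considered.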

The CCD resolution is also a factor. With only 10 pixels across the main peak it requires a lot of assumptions of CCD performance to confidently process the images (interpolation, FFT, etc.) in the way you wish.

Can you get your hands on some fluorescent quantum dots? They won't help with the CCD resolution, but if you've got the necessary sensitivity you can relieve the other uncertainties - even with overcoats for solubility dots are typically <25 nm in diameter.

- Guy
Jbryant61
4-Participant
(To:GuyB)

Hi Guy.



"The beads have a diameter 40% of the wavelength - they aren't point sources. Furthermore, as Tom pointed out, they aren't plane objects either - they have a thickness which corresponds to a large fraction of your DOF. So, they're not only structured along x & y but z as well. Convolving the fluorescence emitted at each point within a bead will create blurring just from the difference in focus from top to bottom."

We use TIR excitation so there is some restriction of the excitation to a thin plane (the evanescent field).

"I think you're probably stuck. Your uncertainties are greater than 10-20%, so you probably won't get any better analysis than you've already got."

Yes, I was expecting this - I thought the same with the Virendra stuff, that I can't really get any better; it was only when Lou introduced me to the freq. padding that I wanted to see if this was slightly better.


Would you still come to the same conclusion (of being stuck) if I took 10 or 100 beads and combined them to create a super resolved spot?

Thanks
Jason

On 8/19/2009 4:57:47 PM, Jbryant61 wrote:
>We use TIR excitation so there
>is some restriction of the
>excitation to a thin plane
>(the evanescent field).

The evanescent field isn't likely to be any thinner than your beads - it can extend for quite some distance.


>...only when Lou introduced me to
>the Freq. padding that I
>wanted to see if this was
>slightly better.

Frequency padding doesn't give you any new information. Doing a least squares fit to your original pixels will provide just as much useful information as fitting them to the interpolated data.
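As an illustration of Guy's point (a hypothetical example, not Jason's actual data): a nonlinear least-squares fit applied straight to the coarse pixel values recovers a sub-pixel centre and width; interpolating first adds no information the fit doesn't already use.

```python
import numpy as np
from scipy.optimize import curve_fit

def spot(x, amp, x0, sigma, offset):
    """1-D Gaussian spot model with a background offset."""
    return amp * np.exp(-(x - x0)**2 / (2 * sigma**2)) + offset

x = np.arange(10.0)                   # ten coarse pixels across the peak
y = spot(x, 100.0, 4.3, 1.7, 5.0)     # synthetic spot centred at pixel 4.3

popt, _ = curve_fit(spot, x, y, p0=[80.0, 5.0, 2.0, 0.0])
amp, x0, sigma, offset = popt         # sub-pixel centre recovered from raw pixels
```

The ten raw samples pin down the centre to far better than a pixel; fitting the same model to FFT-interpolated data would return the same answer, because the interpolation is a deterministic function of those same ten values.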


>Would you still come to the
>same conclusion (of being
>stuck) if I took 10 or 100
>beads and combined them to
>create a super resolved spot?

No, because I've never believed that the beads are identically sized. If you could take 10-100 images where the same beads have been slightly "jiggled" between exposures (random displacements <1 pixel) then there are methods to super-resolve the images.

Astronomers do that frequently, but the math is intense.
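The astronomers' approach Guy mentions can be sketched in toy form. This is nearest-neighbour shift-and-add onto a finer grid with the sub-pixel shifts assumed known - a heavy simplification of real "drizzle"-style algorithms, and every number here is made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N, UP = 16, 4                        # coarse frame size and super-resolution factor

def capture(sx, sy):
    """Simulate one coarse exposure of a Gaussian spot jiggled by (sx, sy) pixels."""
    x = np.arange(N)
    X, Y = np.meshgrid(x, x)
    return np.exp(-((X - 7.5 - sx)**2 + (Y - 7.5 - sy)**2) / (2 * 1.2**2))

shifts = rng.uniform(-0.5, 0.5, size=(100, 2))   # known sub-pixel displacements

# accumulate each frame onto a 4x finer grid at its measured offset
acc = np.zeros((N * UP, N * UP))
cnt = np.zeros_like(acc)
for sx, sy in shifts:
    frame = capture(sx, sy)
    ix = np.round((np.arange(N) - sx) * UP).astype(int) % (N * UP)
    iy = np.round((np.arange(N) - sy) * UP).astype(int) % (N * UP)
    acc[np.ix_(iy, ix)] += frame
    cnt[np.ix_(iy, ix)] += 1

result = acc / np.maximum(cnt, 1)    # super-resolved estimate of the spot
```

The catch, as Guy notes, is that the shifts must be measured to sub-pixel accuracy and the spot must actually be the same object in every frame - which is exactly why differently sized beads won't combine this way.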

If you truly need a better handle on your optics than you already have (which, IMHO you do not) then you'll need to go with fluorescing quantum dots (lots of places supply them) or, once again, back to the straight edge analysis.

- Guy
Jbryant61
4-Participant
(To:GuyB)

Hi Guy.

What will quantum dots offer me that fluorescence beads don't?

thanks
Jason

>
>What will quantum dots offer
>me that fluorescence beads
>don't?

If you disperse dots in a solvent and let them dry onto a microscope slide, you can control the density so you know they're separated. Then it's possible to get images from individual quantum dots.

Having an object that is truly point-like (~20 nm in diameter) will eliminate many of the issues that arise in your analysis.

All that's left will be the pixelation of your camera - and with quantum dots you really could try super-resolution.

The only caveat is that your camera may not be sensitive enough. I've never tried imaging single dots. I know it's possible, lots of people have reported doing so, but I don't know whether you need a special camera to do it.

- Guy
Jbryant61
4-Participant
(To:GuyB)

OK, I see where you are coming from now. I have tried 20nm beads; the problem is that they bleach quite quickly - time to bring out the EMCCD - ah damn it, it has too large pixels!



Do you really think a 200nm bead is really causing me to image c.a. 10%-20% larger than the diffraction limit?

Have you ever heard of lateral charge diffusion specific to back-illuminated CCD's?

thanks
Jason

On 8/21/2009 10:39:23 AM, Jbryant61 wrote:
>Do you really thin a 200nm
>bead is really causing me to
>image c.a. 10%-20% larger than
>the diffraction limit.

More than anything else you have a 40 nm bandwidth for a central wavelength of 520 nm. Because a diffraction-limited spot diameter is proportional to wavelength (d = 1.22 λ/NA), the spread across your band is 40/520, so you already have a chromatically-aberrated spot good for an 8% blur.

Then, because you have a lambda/2 bead you are going to get coherent effects - partial interference of the light from one part of the bead interfering with light emitted from another part of the bead, messing up the response on a CCD pixel.


>Have you ever heard of lateral
>charge diffusion specific to
>back-illuminated CCD's?

Back-illuminated? Not so much - more front-illuminated. Unless you've got a >200 um thick substrate.

- Guy
Jbryant61
4-Participant
(To:GuyB)

Hi Guy.


>More than anything else you have
>a 40 nm bandwidth for a central
>wavelength of 520 nm. Because a
>diffraction-limited spot diameter is
>proportional to wavelength,
>you already have a
>chromatically-aberrated spot good for an
>8% blur.


I always used the reddest wavelength present in my fluorescence, so I think I have this taken care of.

>Then, because you have a
>lambda/2 bead you are going to get
>coherent effects - partial interference
>of the light from one part of the bead
>interfering with light emitted from
>another part of the beam messing up the
>response on a CCD pixel.


This is really interesting. I was playing today with Zemax - in particular the partial coherence image analysis tool - you can model the convolved PSF either incoherently, coherently or partially coherently. I have always done it incoherently because my dyes undergo rotational diffusion and hence have random polarisation, so I assumed there would be little interference.

If there was interference how would this mess up the detection?

I'm going on PTO for 2 weeks tonight, so you won't be hearing from me for a while. Thanks for all your conversations - they're really interesting!

Jason

Yeah, you "sort of" validated the Virendra equation. Like the girl who "sort of" took precautions and is only slightly pregnant.

That the Virendra equation might be the expected function for a point image (might be - do your optics meet the conditions that lead to the Virendra equation?) is different from it being the expected function for your beads. This relates to the related issue of whether the beads can be considered as point sources.

Your information in your later post here suggests nowhere near. You list a bead size of 200nm, a DOF (whatever that is, I don't recognize the acronym off the top of my head) of 500nm and an accuracy of 200nm. Your bead size is 40% of the DOF and equal to your stated accuracy. That is far from negligible. The beads might be small enough to be considered as point sources for determining the general behaviour, and the overall form of the diffraction pattern, but that is quite different from the detailed analysis you are trying for.

Encircled energy is a perfectly good measure -- for some purposes. If you were looking to focus a laser beam then it would be an appropriate measure of how good your focusing was. But that is not your task. Your task is resolving images -- something very different. The energy in the rings is irrelevant to that resolution (the amplitude is, but because of the size of the rings a negligible amplitude can still represent significant energy).

And why should you not consider a focus that is not quite the theoretical focal plane but does result in a narrower central peak? "defocusing" to improve resolution and contrast is not exactly an unknown nor unused technique.
__________________
    Tom Gutman
Jbryant61
4-Participant
(To:TomGutman)

Hi Tom.

>Yeah, you "sort of" validated the Virenda
>equation. Like the girl who "sort of" took
>precautions and is only slightly pregnant.

I've not heard this saying before - quite amusing!


>That the Virenda equation might be the expected function for a point image (might be, do your optics meet the conditions that lead to the Virenda equation?) that is different from it being the expected function for your beads.

See the attached. Firstly, the Virendra equations fully describe the (unconvolved) PSF when spherical aberration is present. As this is my dominant aberration (albeit small), it's a good starting point - much better than what I had before (Gaussian or Bessel). So yes, it does represent the real PSF. But does it represent the convolved PSF?

Well, correctly or incorrectly, I am using the fact that the Virendra equations have enough freedom to allow an interpolation of my convolved PSF. As & Ad allow for shape change, and psi allows for size change. So the values actually obtained for As, Ad and psi are not necessarily the real values for the PSF, but they are values that allow a good fit to my data (again I am using "good" to mean better than a previous method).




>Your information in your later post here
>suggests nowhere near. You list a bead size of
>200nm, a DOF (whatever that is, I don't

"Depth of focus". My beads sit in the x-y plane, and the depth of focus is along z and is dictated by my objective lens (and lambda).


>recognize the acronym off the top of my head) of
>500nm and an accuracy of 200nm. Your bead size is
>40% of the DOF and equal to your stated
>accuracy.

I can determine focus to within the limits of my z-stage (c.a. 200 nm). By plotting the measured size against z position (for a through-focus set of data), the curve is very smooth and parabolic, so I can obtain best focus quite well (or at least well within the range of sizes that my optical changes make).
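That parabola method can be written down directly. With made-up numbers (a noise-free quadratic through seven z positions at the stated 200 nm step), the fitted vertex locates best focus far more finely than the stage resolution:

```python
import numpy as np

# Illustrative through-focus data: spot size vs z-stage position (nm).
z = np.array([-600.0, -400.0, -200.0, 0.0, 200.0, 400.0, 600.0])
size = 1.00 + 2e-6 * (z - 130.0)**2      # "true" best focus placed at z = 130 nm

a, b, c = np.polyfit(z, size, 2)         # fit size = a*z**2 + b*z + c
z_best = -b / (2 * a)                    # parabola vertex = best-focus estimate
```

With real, noisy size measurements the vertex estimate degrades, of course, but a smooth through-focus curve like Jason describes is exactly the case where the fit beats the raw 200 nm step.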


>That is far from negligible. The beads might be
>small enough to be considered as point sources
>for determining the general behaviour, and the
>overall form of the diffraction pattern, but
>that is quite different from the detailed
>analysis you are trying for.

Why is it? The perfect system contains 83.8% of its total energy within the first dark ring. I'm using a similar measure.



>Encircled energy is a perfectly good measure --
>for some purposes. If you were looking to focus
>a laser beam then it would be an appropriate
>measure of how good your focusing was. But that
>is not your task.

But it is - we need the peak intensity as high as possible, and I cannot change how the analysis works; it's better for higher peak intensities. As we randomly populate a slide with these beads, any ringing reduces the contrast and also screws up our algorithms.


>Your task is resolving images -- something very
>different.

Not really - it's identifying the locations of objects and deciding in which channel the signal is strongest.

>The energy in the rings is irrelevant to that resolution (the amplitude is, but because of the size of the rings a negligible amplitude can still represent significant energy).

>And why should you not consider a focus that is
>not quite the theoretical focal plane but does
>result in a narrower central peak? "defocusing"
>to improve resolution and contrast is not
>exactly an unknown nor unused technique.
Yes, this is a way to improve resolution, but see attached - it's at the expense of contrast.


Whatever happens, my audience want to see a method where I can then reproduce what we see on the camera - compare raw blocky image with reconstructed blocky image.

Thanks
Jason

>Like the girl who "sort of" took precautions and is only slightly pregnant.
__________________________

There are variants(ssssss) !
... grandma will get twins next week ...
and so on ...
depending upon the degree of precautions.

Oh ! my dear, where did you have the head ?
In the dash, Grandpa !

"Pixelation", as you call it, is an act of forgery, no matter how many pixels you fabricate. I have passed along the camera matrix calibration procedure, but you said you couldn't apply it. The first conjecture is there: if you can't calibrate a camera you don't have an image, you just have an unknown array. Virendra or else "populate", but don't pixelate. The "Mathcad Improc fractional zoom" is one of many C# codes (yet unknown). It does "pixelate" in your sense; it is the sole pixelator you can apply. But as it stands you must code it for the electronic gadget you have, to design and equip your camera. Lots of readers of this thread are like kids - they have a piece of pipe between the ears, nothing prints in the brain. If your camera does not have enough sensors for the details you need, get a better one.
I have been unsuccessful getting some C# zoom codes ... I ended up with UniBlue (Microsoft), which wanted to clean my registry - it did, for 15 errors - but then it wanted $, so "F it" and zap from my box, which works fine for my Mathcad.

The question is on your side: is your camera for one (1) spot only or for general use, capturing some real scene of the life around ?

Jean
LouP
11-Garnet
(To:Jbryant61)

On 8/13/2009 6:47:57 AM, Jbryant61 wrote:
>Hi Lou, following your
>comments below i have
>revisited comparing the two
>techniques and now believe
>that your method is the best.
>It appears to fit the
>background better. I think the
>fitting equation in the space
>domain assumes rotational
>symmetry, whereas your method
>doesn't.
As far as the background goes, if the fitting function has an adjustable offset, then it should be possible to pick it so that the integral beyond some radius adds no significant contribution. I haven't looked at it.


>I think the worksheet supports
>the conclusion your method is
>better. would you agree?
>For the numerical
>approximation, if I increase
>the oversampling from 32 to 64
>i see no effect. Does this
>suggest that 32 is adequate
>and the approaximation is very
>good?
Given the low resolution of the original data, even this may be overkill, but it certainly appears to be sufficient. The inherent accuracy of either is not clear, but the analysis can be done to good precision (lots of digits, but not necessarily meaningful).

>>For the
>>2D DFT method, the function
>>was simply offset by its
>>global minimum before
>>integration(via scale fct.),
>>leaving all contributions in
>>the "dark" region to be
>>positive.
>>
>Is there another way to define
>the background. The integrated
>value really depends on this,
>and I assume using the scale
>fucntion to define the
>background will just use the
>lowest value pixel.

True, but I think the correct offset is that for which the integral beyond some radius adds no significant contribution, as mentioned above. If you believe this should be the case, then there is a unique offset that gives this. The area vs. radius should approach a horizontal asymptote.
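Lou's criterion can be implemented directly. A sketch with synthetic data (function name and numbers are mine): estimate the offset as the mean level outside some radius, rather than the single lowest pixel; subtracting it leaves the encircled-energy curve flat beyond the spot, i.e. the horizontal asymptote Lou describes.

```python
import numpy as np

def background_offset(img, r_bg):
    """Offset estimate: mean pixel value outside radius r_bg from the
    image centre.  Subtracting it makes the integral beyond the spot
    add no significant contribution."""
    y, x = np.indices(img.shape)
    cy, cx = (np.array(img.shape) - 1) / 2.0
    return img[np.hypot(y - cy, x - cx) > r_bg].mean()

# toy image: Gaussian spot of peak 100 on a constant background of 5
y, x = np.indices((101, 101))
r = np.hypot(y - 50, x - 50)
img = 100.0 * np.exp(-r**2 / (2 * 4.0**2)) + 5.0

offset = background_offset(img, 30.0)    # recovers ~5, the true background
```

Using an outer-annulus mean instead of the global minimum also makes the estimate robust to a single anomalously dark pixel, which was Jason's worry about the scale-function approach.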

Lou
Jbryant61
4-Participant
(To:LouP)

Some time ago, Jean kindly gave me a worksheet that allowed me to take a function and plot it in 3D with half the section cut away.

Is it possible to decentre this graphic from (0,0) to some centroid coordinate (cx,cy)?

Thanks
Jason

On 8/18/2009 12:03:37 PM, Jbryant61 wrote:
>Sometime ago, Jean kindly gave
>me a worksheet that allowed me
>to take a function and plot it
>in 3D with half the section
>cut away.
>
>Is it possible to decentre
>this graphic from (0,0) to
>some centroid coordinate
>(cx,cy)?
>
>Thanks
>Jason
__________________________

>How to displace this graphic so it is centred on cx,cy
I don't understand: this plot is a needle seated at ≈ 5. If you decenter from [0, 0] it will really sit in the blue! Note that we have two plots because the axis system does not support lighting. Then the 2nd plot is single + lighted. Back to your question: decenter directly in the fvir ... (x-xc)²+(y-yc)², would that do?
Answer = NO, I have tried. What could be done is:
1. mesh finer
2. explode the 3D level
3. extract submatrix the portion of interest

Is that what you mean ?
If not, you may have to rework Virendra.

Jean



Jbryant61
4-Participant
(To:ptc-1368288)

Hi Jean, thanks for your effort.

The attached shows what I would like. I love your sliced graphic and I want to superimpose it on the raw data - but the raw data has coordinates starting at (0,0)! So I thought of displacing the graphic to the centroid coordinates of the raw data.

thanks
Jason

On 8/19/2009 6:45:05 AM, Jbryant61 wrote:
>Hi Jeans, thanks for you
>effort.
>
>The attached shows what I
>would like. I love your sliced
>graphic and I want to
>superimpose it on the raw data
>- but the raw data has
>coordinates starting at 0,0!
>So I thought of displacing the
>graphic to the centroid
>coordinates of the raw data.
>
>thanks
>Jason
______________________

Maybe tonight.



On 8/19/2009 6:45:05 AM, Jbryant61 wrote:
>Hi Jeans, thanks for you
>effort.
>
>The attached shows what I
>would like. I love your sliced
>graphic and I want to
>superimpose it on the raw data
>- but the raw data has
>coordinates starting at 0,0!
>So I thought of displacing the
>graphic to the centroid
>coordinates of the raw data.
>
>thanks
>Jason
______________________________

In all those look-alike threads, I can't find the data set and the Virendra fit. And you need it to plug into CreateMesh, in order to have the fit "centered" on the data set. It dates back several months.

Jean


