I have a high-resolution image but want to illustrate what the image would look like if it were acquired using a camera with lower resolution, e.g., 1 m resolution at a distance of ~500 m. I would think the best way to do this is a convolution, but I'm not sure what kernel I should use. Help would be appreciated.
It depends on why the camera has lower resolution. Is it because you have gone to a smaller lens, so the diffraction-limited resolution is worse? Or because the optics have aberrations? Or just because there aren't as many pixels in the CCD?
I have a visible-spectrum image which has been rendered for temperature and simulated in Zemax. We want to simulate what it would look like using a microbolometer, the FLIR Tau 640, which uses a 17 micron pixel pitch and a 640 (H) × 512 (V) array. The Zemax simulation appears too high resolution. Does this help? My thought was to scale the image, look for two objects that are ~1 m apart, then apply a Gaussian blur until it is difficult to resolve the two objects.
The optics blur the image according to the point spread function, which is most likely an Airy pattern (involving the first-order Bessel function J1 — can't say for sure without knowing more about the optics), not a Gaussian. If the image was rendered using visible wavelengths, what you need to do is change the wavelengths in Zemax to the IR wavelengths the bolometer will actually see. That will reduce your image resolution a lot. Then you can look at the spatial resolution of the optics by looking at the point spread function and/or the cutoff of the MTF in Zemax. If it's still much better than 17 microns, then the camera will be the limiting factor. If it's much worse than 17 microns (at least 5-6 pixels across the width of the Airy disc), the effect of the camera is minimal.
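As a rough back-of-envelope check of that comparison (my own sketch; the wavelength and f-number below are assumed example values, not the actual parameters of these optics): the diffraction-limited Airy disc diameter is about 2.44 λ F#, which you can put next to the 17 micron pitch.

```python
# Back-of-envelope: Airy disc diameter vs. detector pixel pitch.
# Wavelength and f-number are assumed placeholders, not the real optics.

def airy_disc_diameter(wavelength_m, f_number):
    """Null-to-null diameter of the Airy disc: 2.44 * lambda * F#."""
    return 2.44 * wavelength_m * f_number

wavelength = 10e-6   # 10 micron, mid-LWIR (assumed)
f_number = 1.0       # a typical fast IR lens (assumed)
pixel_pitch = 17e-6  # FLIR Tau 640 pitch, from the thread

d = airy_disc_diameter(wavelength, f_number)
print(f"Airy disc diameter: {d * 1e6:.1f} um")           # 24.4 um
print(f"Pixels across Airy disc: {d / pixel_pitch:.2f}")  # 1.44
```

With these assumed numbers the Airy disc spans only about 1.4 pixels, so the detector, not diffraction, would dominate — but the real answer depends entirely on the actual wavelength band and f-number.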
The effect of the camera is just pixelation, and you can simulate it just by downsampling the image in a graphics program. If you do it that way, do not use anti-aliasing, because the optics in the camera clearly cannot do that! Do not use any smoothing or convolution either.
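That downsampling-without-smoothing step can be sketched in a few lines (a minimal example of mine, using a synthetic array; with a real image you would first load it into a NumPy array, e.g. via PIL):

```python
# Simulate pure pixelation by decimation: keep every k-th sample in each
# axis, with no interpolation or smoothing, per the advice above.
import numpy as np

def pixelate(image, factor):
    """Downsample by keeping every `factor`-th sample along each axis."""
    return image[::factor, ::factor]

high_res = np.arange(64 * 64, dtype=float).reshape(64, 64)  # stand-in image
low_res = pixelate(high_res, 4)
print(low_res.shape)  # (16, 16)
```

One could argue that a real detector pixel integrates (averages) light over its area, which is closer to block averaging than to decimation; the sketch above follows the no-smoothing advice in the post as stated.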
Thank you Richard,
I believe that the colleague who ran the Zemax simulation did indeed use IR wavelengths, and the pixel pitch from the FLIR documentation. I think the array size he used was 512x512 rather than 640x512 as stated in the literature. I think we were all surprised at the detail in the simulated image, since we expected a "blob". Is it O.K. for me to email the image to you so you can see what I mean?
I think the array size he used was 512x512 rather than 640x512 as stated in the literature.
That's not going to make much difference (and in one direction, obviously no difference whatsoever).
I think we were all surprised at the detail in the simulated image since we expected a "blob". Is it O.K. for me to email the image to you so you can see what I mean?
Sure. I'll send you a private message with my e-mail address. I'm not sure what I can tell you though, because I don't know anything about your optics. The key is to look at the MTF. That will tell you the expected resolution (I am not an expert in Zemax, but I am fairly sure the MTF is calculated without taking the detector into consideration. That's actually good, because the MTF of an FPA detector is a rather wishy-washy thing!), which you can compare to the pixel spacing.