Hello All,
I would like to know more about the "genfit" algorithm. My main question is:
Why can't the "genfit" routine find a good fit when the guess values are not chosen properly? What is the normal behavior of the genfit outputs with respect to different guess values?
I would appreciate it if you could point me to some references or articles for further information.
Thank you.
Payman
The algorithm is "Levenberg-Marquardt". Like all non-linear least squares solvers, it is iterative. It starts from the guess value and heads "downhill" on the error surface. If the error surface has only one minimum then it will eventually converge to a solution at that minimum. However, in many cases there are local minima, as well as the global minimum. If that is the case the iterative solver can descend into a local minimum, and determine that is the best solution it can find. Which minimum it finds, local or global, will depend on the guess values at the start of the iterative procedure.
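The dependence on the starting guess can be sketched in a few lines. This is not the poster's worksheet: it uses SciPy's curve_fit (which wraps a Levenberg-Marquardt solver) on assumed single-cosine data, purely to illustrate how two nearby guesses land in different minima.

```python
# Illustration (assumed data/model): Levenberg-Marquardt sensitivity to the
# starting guess when the error surface has many local minima.
import numpy as np
from scipy.optimize import curve_fit

x = np.linspace(0, 20, 200)
y = np.cos(1.5 * x)                      # "true" frequency is 1.5

def model(x, w):
    return np.cos(w * x)

# Good guess: the solver descends straight into the global minimum.
w_good, _ = curve_fit(model, x, y, p0=[1.45])

# Bad guess: the solver gets trapped in a nearby local minimum instead.
w_bad, _ = curve_fit(model, x, y, p0=[5.0])

print(w_good[0])   # essentially 1.5
print(w_bad[0])    # stuck far from 1.5
```

Both runs use the same data and the same solver; only the starting point differs, which is exactly the behaviour described above.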
Thank you Richard,
The attached figure is an example of genfit result vs different guess values.
When the guess value is exactly the same as the actual value (1.5 in this example), genfit returns the best answer.
However, a slight change in the guess value causes an abrupt change in the genfit result.
Is this normal behaviour for genfit? Does such sharp convergence reveal any special information about the system?
Your data happens to be very badly behaved for non-linear least squares fitting. You have very many local minima close to the global minimum. This is basically telling you that a non-linear least squares fit is not a good way to model your data to find the parameters. At least, not unless you first find a way to get very good guesses.
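The "very many local minima" claim is easy to see numerically. The sketch below (assumed cosine data, not the poster's worksheet) evaluates the sum-of-squares error over a grid of candidate frequencies and counts the grid points that are lower than both neighbours.

```python
# Illustration (assumed data): the least-squares error surface of a cosine
# model has many local minima besides the global one, which is why the fit
# is so sensitive to the starting guess.
import numpy as np

x = np.linspace(0, 20, 200)
y = np.cos(1.5 * x)                       # "true" frequency is 1.5

ws = np.linspace(0.1, 6.0, 600)           # candidate frequencies
sse = np.array([np.sum((np.cos(w * x) - y) ** 2) for w in ws])

# A grid point is a local minimum if it is lower than both neighbours.
is_min = (sse[1:-1] < sse[:-2]) & (sse[1:-1] < sse[2:])
n_minima = int(np.sum(is_min))
print(n_minima)   # many minima; only the one near w = 1.5 is global
```

Each of those minima is a basin that a descent-based solver can fall into and stop, which matches the abrupt jumps in the attached figure.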
Do you mean that "genfit" is not suitable here? My model to fit the data looks like this:
Is there a better routine to be used here?
No. I mean iterative non-linear least squares fitting is not suitable, which includes genfit and minerr. If you want to fit the data, simulated annealing might work; it's well suited to problems with many local minima. But your data is the sum of sinusoids, so why not just FT it and get the parameters that way?
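As a rough sketch of the annealing idea, SciPy's dual_annealing (a generalised simulated annealing) is used below as a stand-in; the single-frequency model and bounds are assumptions for illustration, not the poster's actual problem.

```python
# Illustration (assumed model): a global optimiser in place of a
# gradient-descent fit.  It needs bounds rather than a good guess, and it
# can escape the local minima that trap Levenberg-Marquardt.
import numpy as np
from scipy.optimize import dual_annealing

x = np.linspace(0, 20, 200)
y = np.cos(1.5 * x)                      # "true" frequency is 1.5

def sse(params):
    return np.sum((np.cos(params[0] * x) - y) ** 2)

result = dual_annealing(sse, bounds=[(0.1, 6.0)], seed=0)
print(result.x[0])   # recovers a frequency near 1.5 without any guess
```

The trade-off is cost: the annealer evaluates the error function many more times than a local solver would.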
Thank you Richard. I need more time to learn about "annealing".
Regarding FT, I am not quite sure how to implement it in my program. Can I ask you to show it in my attached program?
Happy Holidays
Payman
See the attached worksheet.
Your fitted function is the sum of three sinusoids, each with an amplitude and a frequency (but no phase terms in any of them). If you know those amplitudes and frequencies you can calculate the b coefficients. The amplitudes and frequencies can be found by performing a Fourier transform on the data and finding the positions and heights of the peaks. To do that you will have to normalize both of the axes after the FT. The maximum value on the frequency axis is half your sampling frequency for the data. I am not inclined to spend the time required to examine your program that generates the "data" to figure out what that is. The normalization for the amplitude axis depends on which FFT routine you use, but for CFFT it's sqrt(N) (see the help for more information). The coefficients found via the FT may be good enough for you, but if not they will at least be very accurate guesses for the least squares fit.
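The FT approach can be sketched in NumPy rather than Mathcad's CFFT (so the normalisation differs from the sqrt(N) mentioned above: for a real-input FFT the amplitude of a non-DC bin is 2*|FFT|/N). The test signal, its frequencies, and its amplitudes are assumptions chosen so that the peaks fall exactly on FFT bins.

```python
# Illustration (assumed signal): recovering sinusoid amplitudes and
# frequencies from the positions and heights of FFT peaks, with both axes
# normalised as described above.
import numpy as np

fs = 64                                   # sampling frequency (Hz)
N = 64                                    # number of samples
t = np.arange(N) / fs
y = 2.0 * np.cos(2 * np.pi * 3 * t) + 1.0 * np.cos(2 * np.pi * 7 * t)

spec = np.fft.rfft(y)
freqs = np.fft.rfftfreq(N, d=1 / fs)      # axis runs from 0 to fs/2
amps = 2 * np.abs(spec) / N               # amplitude normalisation
amps[0] /= 2                              # DC bin must not be doubled

# The two tallest peaks give the frequencies and amplitudes directly.
peaks = np.argsort(amps)[-2:]
for k in sorted(peaks):
    print(freqs[k], amps[k])              # peaks near 3 Hz (amp 2) and 7 Hz (amp 1)
```

If the true frequencies fall between bins, the peak heights leak into neighbouring bins; the values read off are then approximate, but still good enough to serve as guesses for a subsequent least squares fit, as suggested above.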
Note that your fitting function assumes a phase of zero for all three cosine functions. I didn't check your data to see if that's actually the case, but you should (even if only to check for an accidental phase caused by a programming error).
genfit works with units in Prime!
One example: