There is more to consider in curve fitting than thinking you are missing something. Clean data do not exist unless they come from a pure mathematical function; collected data are always subject to digitization noise. A good figure is ±0.001, yet the Lanczos data set implies ±0.000005, which would be a damn good piece of lab equipment. What is ridiculous in the NIST publication is the whole string of trailing zeros [nonsense]. The claim that their "statistical methods" arrive at the coefficients and exponents in increasing magnitude is erroneous: a series of exponentials is very reflexive [not reflective] and reactive, i.e. highly sensitive to the data. The free, raw PWMinerr arrives at a more realistic fit much faster and is more economical in calculation. PWMinerr was discovered after the DAEP was designed. As for your "Acc" question: just forget it and go ahead with more advanced fitting techniques instead.
jmG
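A quick way to see why those trailing zeros are meaningless is to quantize clean data and watch the fitted coefficients scatter. The sketch below uses Python/SciPy rather than Mathcad; the single-exponential model, its true coefficients, and the ±0.001 quantization step are illustrative assumptions, not anything from the NIST file:

import numpy as np
from scipy.optimize import curve_fit

# Illustrative model (an assumption, standing in for an exponential fit).
def model(x, a, b):
    return a * np.exp(-b * x)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0, 50)
clean = model(x, 1.5, 0.7)  # a "pure mathematical function"

fits = []
for _ in range(200):
    # Digitization noise: dither within +/- 0.0005, then round to 0.001,
    # mimicking an instrument good to about +/- 0.001.
    noisy = np.round(clean + rng.uniform(-5e-4, 5e-4, x.size), 3)
    p, _ = curve_fit(model, x, noisy, p0=[1.0, 1.0])
    fits.append(p)

fits = np.array(fits)
print("mean of (a, b):", fits.mean(axis=0))
print("std of (a, b): ", fits.std(axis=0))
# The standard deviation shows how many digits of a and b are meaningful;
# any digits beyond that scatter are noise, not precision.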
Thanks Jean,
I understand what you are saying.
So, increasing the acc does not necessarily reduce the error every time; it depends on the data itself.
Am I right? (I am just curious.)
Raj sharma wrote:
So, increasing the acc does not necessarily reduce the error every time; it depends on the data itself. Am I right?
Totally correct, Raj. For years in this collab nothing but PWMinerr has been used, and genfit has been completely abandoned. Most of the time Levenberg-Marquardt [LM] is the best option, and generally the only option; on very difficult models [not necessarily huge models] it will be Conjugate Gradient [CG]. Often we have to iterate manually, and often, too, initialising the fit is very difficult; when it is too difficult, the starting values must be found by hand. Mathcad has never failed to fit data. As a detail, I have done a lot of comparative fits between Mathcad PWMinerr and OriginLab's advanced statistical fitting: OriginLab often fails, Mathcad never has.
Jean
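For readers without Mathcad, the workflow Jean describes translates roughly as follows in Python/SciPy (my own translation, not his worksheet): Levenberg-Marquardt as the default, a hand-picked starting vector, and a Conjugate Gradient fallback on the same sum of squares for stubborn models. The two-exponential model and its coefficients are assumed for illustration:

import numpy as np
from scipy.optimize import least_squares, minimize

# Two-exponential model: a classically difficult fit, because the two
# decay rates can nearly trade off against each other.
def model(x, p):
    a1, b1, a2, b2 = p
    return a1 * np.exp(-b1 * x) + a2 * np.exp(-b2 * x)

rng = np.random.default_rng(1)
x = np.linspace(0.0, 5.0, 80)
y = model(x, [1.0, 0.5, 0.8, 3.0]) + rng.normal(0.0, 0.01, x.size)

def residuals(p):
    return model(x, p) - y

# "Initialising the fit manually": a starting vector chosen by eye.
p0 = [1.0, 1.0, 1.0, 2.0]

# Default choice: Levenberg-Marquardt on the residual vector.
fit_lm = least_squares(residuals, p0, method='lm')

# Fallback for difficult models: Conjugate Gradient on the scalar
# sum of squared residuals.
sse = lambda p: float(np.sum(residuals(p) ** 2))
fit_cg = minimize(sse, p0, method='CG')

print("LM:", fit_lm.x, "SSE:", sse(fit_lm.x))
print("CG:", fit_cg.x, "SSE:", sse(fit_cg.x))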
It seems to be a quirk of the iterative procedure. An iterative algorithm will not always converge to the same point. If you change the starting guesses, for example (try changing the first one to 0.7), it will converge to a slightly different solution. In a sense, there is some random error in the point at which it stops iterating. The change in tolerance is too small to override this, and it happens that you get a larger residual with the smaller tolerance. If you change the tolerance to 10^-20 it will give you the smaller residual you expect.
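The same behaviour is easy to reproduce outside Mathcad. In this SciPy sketch (the model, data, and guesses are placeholders, not the worksheet from this thread), two starting vectors stop at slightly different points under a loose tolerance, and only a much tighter tolerance pulls them onto the same minimum:

import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
x = np.linspace(0.0, 4.0, 60)
y = 2.0 * np.exp(-1.3 * x) + rng.normal(0.0, 0.02, x.size)

def residuals(p):
    return p[0] * np.exp(-p[1] * x) - y

for p0 in ([1.0, 1.0], [0.7, 1.0]):      # two starting guesses
    for tol in (1e-8, 1e-15):            # loose vs tight tolerance
        r = least_squares(residuals, p0, xtol=tol, ftol=tol, gtol=tol)
        print(f"p0={p0}  tol={tol:g}  p={r.x}  SSE={2.0 * r.cost:.12g}")
# With the loose tolerance the two starts stop at slightly different points,
# so the reported residual depends on where the iteration happened to quit;
# with the tight tolerance both land on essentially the same minimum.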
As an aside, that function is not necessary for basic least-squares fitting, although it is useful if you intend to use the optional parameters. For basic fitting, just use minerr.
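A similar split exists in other environments. As a SciPy analogy (mine, not the poster's), the basic path is a plain curve_fit call, and the option-heavy solver is only worth reaching for when you need bounds, custom tolerances, or a particular algorithm:

import numpy as np
from scipy.optimize import curve_fit, least_squares

x = np.linspace(0.0, 4.0, 60)
y = 2.0 * np.exp(-1.3 * x)  # substitute your measured data here

def f(x, a, b):
    return a * np.exp(-b * x)

# Basic least-squares fit: no optional parameters needed.
p_basic, _ = curve_fit(f, x, y, p0=[1.0, 1.0])

# The option-laden route, worthwhile only when you actually need
# bounds, tolerances, or a specific method.
r = least_squares(lambda p: f(x, *p) - y, [1.0, 1.0],
                  bounds=([0.0, 0.0], [10.0, 10.0]),
                  method='trf', xtol=1e-12)

print("basic:", p_basic)
print("bounded:", r.x)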