Hi,
What could be the reason that, when setting g:=2, the first local minimum found is 11.095 (so it jumps/skips over the first local minimum), whereas when using g:=2.3 the first local minimum found is 4.812, i.e. the minimum I expected near the value 2.3? I would have expected to see the first local minimum as 4.812 when using g:=2 as well.
My understanding is that by setting g to some value, the Mathcad functions maximize(f,g) and minimize(f,g) will find the first local maximum and the first local minimum around this given value of g. Is my understanding not right?
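To illustrate the guess-dependence in general terms (a sketch only - the function f below is a made-up stand-in with two local minima, not the worksheet's function), a naive descent in Python settles into whichever basin the guess falls in:

```python
import math

def f(x):
    # Made-up stand-in with two local minima (near x = -1.31 and x = 3.84);
    # this is NOT the function from the worksheet.
    return math.sin(x) + 0.1 * x * x

def local_minimize(f, guess, step=1e-3, tol=1e-8, max_iter=200000):
    """Naive gradient descent from the guess, using a numerical derivative.
    Stops when the slope is (numerically) zero, i.e. at a local minimum."""
    x = guess
    for _ in range(max_iter):
        d = (f(x + 1e-6) - f(x - 1e-6)) / 2e-6   # central difference
        if abs(d) < tol:
            break
        x -= step * d
    return x

# The guess selects the basin: starting at 4.0 ends near 3.84,
# starting at 0.0 ends near -1.31.
```

A real solver takes much larger, cleverer steps than this, so it can overshoot the basin the guess sits in - which is presumably the kind of thing happening between g:=2 and g:=2.3.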
P10 changes the solver engine. We need to check all data and answers.
I do not know how you obtained all these results, since you showed only the plot.
I'm going to take a wild guess that Tetsurō evaluated the function at a number of points and used a set of functions to return the zeros, minima and maxima of x & f(x)?
Something like this, perhaps ...
vec(z) flattens a matrix into a vector and converts a range variable to a vector. index(v) returns a range variable that can be used as an index into vector v.
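For anyone without the worksheet open, a rough plain-Python analogy of those two helpers (ignoring Mathcad's ORIGIN setting and its range-variable handling) might be:

```python
def vec(z):
    """Flatten a matrix (a list of rows) into one vector, row by row."""
    return [item for row in z for item in row]

def index(v):
    """Return the valid indices into vector v (0-based here)."""
    return list(range(len(v)))

# vec([[1, 2], [3, 4]]) -> [1, 2, 3, 4]
# index([10, 20, 30])   -> [0, 1, 2]
```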
Note: I was just playing with vlookup - it's been a long time since I've used it - and wanted to keep it all on one page. Look closely at the y[zeros] values and you should see they either contain 0 or pairs of values on either side of 0 (ie, the zero is probably somewhere between a pair of points).
If I was being slightly more serious, I'd have written a function to detect the zero crossings and make a choice as to which 'side' was nearer to zero and just choose that x,y pair. However, zero detection within a vector needs some thought to take care of awkward situations (eg, multiple crossings in a small region, or apparent adjacency that only occurs because there are too few points to show the true behaviour), and my brain is too small to deal with that at the moment. Of course, in this instance, it's easy enough to calculate the zeros within a given x range, but that's cheating.
Stuart
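In case it helps, that "choose the nearer side" idea can be sketched as follows (Python rather than Mathcad; x and y are assumed to be the sampled points, and the awkward multiple-crossing cases mentioned above are deliberately ignored):

```python
def zero_crossings(x, y):
    """Scan sampled data for sign changes; for each crossing, keep the
    (x, y) sample whose y is nearer to zero. Exact zeros are kept as-is.
    Crossings that fall entirely between two samples of the same sign
    are necessarily missed - the sampling-density caveat from above."""
    found = []
    for i in range(len(y) - 1):
        if y[i] == 0.0:
            found.append((x[i], y[i]))
        elif y[i] * y[i + 1] < 0:                 # sign change between samples
            j = i if abs(y[i]) <= abs(y[i + 1]) else i + 1
            found.append((x[j], y[j]))
    if y and y[-1] == 0.0:                        # loop stops one short of the end
        found.append((x[-1], y[-1]))
    return found
```

Each reported zero is only as accurate as the nearest sample, so the result is an approximation of where the zero lies, not the zero itself.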
An alternate option for the zeros:
Advantage is higher precision
On the other hand, we get a very large vector with a lot of duplicate values, as well as values outside the desired range.
And because we rely on the algorithm used by the "root" function, we cannot be absolutely sure that with our selection of x-values (guesses) we can really catch ALL zeros within that range.
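The root-from-many-guesses approach, together with both caveats (duplicates and out-of-range hits), can be sketched like this in Python, using a hand-rolled Newton iteration as a stand-in for Prime's root function:

```python
import math

def newton(f, x0, h=1e-7, tol=1e-12, max_iter=100):
    """Newton's method with a numerical derivative; returns None on failure."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        d = (f(x + h) - f(x - h)) / (2 * h)
        if d == 0:
            return None
        x -= fx / d
    return None

def zeros_from_guesses(f, guesses, lo, hi, decimals=6):
    """Run the root finder from many guesses, then drop duplicates and
    any roots that landed outside [lo, hi] - the two caveats above."""
    roots = set()
    for g in guesses:
        r = newton(f, g)
        if r is not None and lo <= r <= hi:
            roots.add(round(r, decimals))   # rounding-based de-duplication
    return sorted(roots)
```

A guess near a peak of the function gets thrown far away by the Newton step, which is exactly how out-of-range roots appear; and since several guesses converge to the same zero, the de-duplication is needed. And, as said, nothing guarantees that every zero in the range is hit by some guess.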
Indeed, one can use root, Werner, but according to my rules (see last sentence of my post), that's cheating as it assumes knowledge of the function that generated the data, rather than trying to find the zeros present in or implied by zero-crossings. Now, I know if one isn't cheating, one isn't trying, but sometimes I play by Humpty Dumpty Rules. 🙂
(Granted, only Tetsurō knows what he actually did to get the zeros. He may have cheated also ... as one would do in a real-world situation)
Stuart
Humpty Dumpty Rules:
“When I use a word,” Humpty Dumpty said in rather a scornful tone, “it means just what I choose it to mean—neither more nor less.”
“The question is,” said Alice, “whether you can make words mean so many different things.”
“The question is,” said Humpty Dumpty, “which is to be master—that's all.”
Lewis Carroll: Through the Looking-Glass
@Cornel wrote:
Hi,
What could be the reason of which when setting ...
The reason surely is the underlying numerical algorithm PTC chose to implement 😉
As PTC did not provide a detailed reference to the algorithm and because we don't have access to the source code of Prime, we can't say much more by way of an answer.
@ttokoro just mentioned that PTC seems to have changed the algorithm with Prime 10. Here is a comparison of the two, using the minima of the function you provided. The differences are clearly recognizable. And in both versions you get some outliers.
File in P9 format attached
It's amusing to play with the solver, Werner. Here's one that *almost* works. Not all of the choices are so well behaved.
Stuart
@StuartBruff wrote:
It's amusing to play with the solver, Werner. Here's one that *almost* works. Not all of the choices are so well behaved.
Are you sure?
Here is this very algorithm without an autoscaled y-axis 🙂
I was being somewhat understated. One of the others was even worse.
Stuart
Guess I'll stick with Mathcad 11 for serious work:
(Method is Nonlinear -> Conjugate gradient, by default)
Luc
@LucMeekes wrote:
Guess I'll stick with Mathcad 11 for serious work:
That's sure a wise decision 🙂
BTW, MC15 shows the very same behavior as your MC11. No unexpected "outliers", no matter which nonlinear algorithm (Conjugate Gradient or Quasi-Newton) we choose.
But it looks like Mathcad has trouble finding the minima at x=-7.754 and x=-14.037.
Here Prime 10 is doing a better job (despite its erratic outliers)
Mathcad Prime 10 uses the KNITRO solvers, which do (allegedly) differ from the previous MP solvers to a degree. See https://en.wikipedia.org/wiki/Artelys_Knitro and MP10 Help for further information.
You can choose which of the available solvers to use by right-clicking on minimize/maximize and selecting from the context menu.
Stuart
From Mathcad Prime 10 Help:
Ahh, I forgot that in P10 we got back the ability to choose an algorithm.
But none of the algorithms in P10 duplicates the behaviour of the algorithm used in P9.
Additionally, as far as I am aware, the KNITRO algorithms were implemented from the very start of Prime, in lower versions too. At least the help in P4 already states that (amongst others) "minimize" is now based on KNITRO. So this does not explain the differences between P9 and P10.
So it still seems that PTC has modified the underlying algorithm in P10. From the plots I showed it looks like P10 is a little bit less 'erratic' concerning the initial guess value.
P.S.:
@StuartBruff wrote:
.....
From Mathcad Prime 10 Help:
Algorithm Selection for Find, Minerr, Minimize, and Maximize
I could not find this in the help, but if the help claims that LM would be available for "minimize" and "maximize", then that's wrong. These two functions seem to default to "conjugate gradients" rather than to LM (which is not among the choices in the list).
BTW, the same applies to real Mathcad as well, where LM was not available for "minimize" and "maximize".
Good to know, Werner. I'd forgotten I even knew which solvers Prime was using anyway.
I have a vague recollection of seeing references to the Mathcad Original algorithms listed in the manual (or whatever it was called). I also vaguely recall writing my own, just for fun ... I might have a look in the dusty archives, provided I can push my way past the virtual spider webs and electron dust.
Stuart
I think the original algorithms in real Mathcad were mainly based on the "Numerical Recipes".
BTW, the Prime help suggests adding a constraint to get a solution in the desired area.
As can be seen in the picture, this can work pretty well but can also fail awfully - strange!
(Algorithm in all three cases is "auto select")
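The constraint trick from the Help amounts to restricting the search to an interval. As a sketch of why that can work (Python, not Prime's actual algorithm), a bracketed method such as golden-section search cannot leave the desired area at all, assuming a single minimum inside the interval:

```python
import math

def golden_min(f, a, b, tol=1e-6):
    """Golden-section search: minimize f on [a, b]. The interval itself is
    the constraint, so the answer can never land outside the desired area.
    Assumes f has a single minimum in [a, b]; re-evaluates f at both probe
    points each loop for simplicity (a real implementation reuses one)."""
    invphi = (math.sqrt(5) - 1) / 2            # 1/phi = 0.618...
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):                        # minimum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                                  # minimum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

# golden_min(math.cos, 0, 6) finds the minimum at pi;
# golden_min(math.cos, 7, 12) finds the one at 3*pi.
```

When the interval actually contains more than one minimum, a bracketed method can settle on either of them, so constraining is no silver bullet - which fits the mixed results in the picture.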
By one of those moments of serendipity, according to today's email, Manning are offering a discount on a book on optimisation. For those who might be interested in spending $30 ...
I wonder, PTC, if it would be of interest/worthwhile writing a Mathcad book for Manning or similar publisher?
Stuart