I cannot get the algorithm in the attached file, a Gauss-Seidel operator for linear systems, to work. Can someone give me some advice, please? The file is attached.
The easiest way still is
You may say that's Jacobi, not Gauß-Seidel. But because of the order in which the (new) x[j] are calculated and overwrite the old values, the sum uses the new, just-calculated values for k < j and automatically uses the old values for k > j, as those have not yet been calculated and overwritten. Seen that way, it's really Gauß-Seidel.
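The scheme can be sketched in Python (a hypothetical stand-in for the Mathcad program, not the posted worksheet itself): because each x[j] is overwritten as soon as it is computed, the sum automatically mixes new values (k < j) with old ones (k > j), which is exactly the Gauß-Seidel update.

```python
def gauss_seidel(A, b, x0=None, tol=1e-5, max_iter=999):
    """Gauss-Seidel sweeps: x[j] is overwritten as soon as it is computed,
    so the sum uses new values for k < j and old values for k > j."""
    n = len(A)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(max_iter):
        max_change = 0.0
        for j in range(n):
            s = sum(A[j][k] * x[k] for k in range(n) if k != j)
            new_xj = (b[j] - s) / A[j][j]
            max_change = max(max_change, abs(new_xj - x[j]))
            x[j] = new_xj  # overwrite in place: later rows see the new value
        if max_change < tol:
            break
    return x
```

Storing the new values in a separate vector and swapping only after the full sweep would turn the same loop into Jacobi.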
To illustrate Werner's "more stable and robust with better error checking":
and
For this matrix it does not converge, why?
On my old pocket calculator (an HP 41SX from the seventies), however, the same example with the same algorithm, written back then, converges.
@-MFra- wrote:
For this matrix it does not converge, why?
On my old pocket calculator (an HP 41SX from the nineties), however, the same example with the same algorithm converges.
Looks like the algorithm you implemented (we don't see it, we only have a picture of results) is not robust enough.
As was already said, the algorithms in the number cruncher Mathcad are heavily tested and tuned for robustness. Like your HP calculator, Mathcad encounters no problem in solving the task:
Knowing an algorithm which theoretically works is one thing; knowing how to implement algorithms so they also work in special cases, avoiding overflows, etc., is another.
Numerical math is a science on its own and not the simplest. And there is a lot of good literature on the subject if you really are interested in devising your own robust routines.
There is a limit on the number of iterations, I think 999. You required 10^-5 accuracy.
Another thing I am curious about: how can I find out the number of iterations performed before convergence is reached?
The problem in the two examples where I showed that your routine fails has nothing to do with the number of iterations or the tolerance. The same seems to apply to FM's example.
The limit of 999 is something you introduced. You could make it a higher number if you want, but I doubt that would make much sense.
I am not sure what you mean with
how can I find out the number of iterations performed before convergence is reached?
Do you simply want to know how many iterations were done before your program returns its value? Implement a counter in your program and then return a vector, containing that counter and the vector x as its two elements.
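A sketch of that counter idea in Python (hypothetical, not the poster's actual worksheet): the function returns the sweep count together with the solution vector, as the suggested two-element result.

```python
def gauss_seidel_counted(A, b, tol=1e-5, max_iter=999):
    """Gauss-Seidel iteration that also reports how many sweeps were needed."""
    n = len(A)
    x = [0.0] * n
    for it in range(1, max_iter + 1):
        max_change = 0.0
        for j in range(n):
            s = sum(A[j][k] * x[k] for k in range(n) if k != j)
            new_xj = (b[j] - s) / A[j][j]
            max_change = max(max_change, abs(new_xj - x[j]))
            x[j] = new_xj
        if max_change < tol:
            return it, x  # counter and solution vector, as a pair
    return max_iter, x
```

In Mathcad the analogous move is to return a nested vector whose two elements are the counter and x.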
But first you should repair the first part of your program. The matrix created must not have a 0 in its diagonal.
I have done so:
It sure doesn't make much sense trying to debug a picture 😉
From what I see in your first post, it looks like those big numbers appear right after the first iteration step?
Have you tried using a different start vector for x? As the program is written, no start guess is provided, so it's (0,0,0).
Otherwise you will have a good time trying to debug the algorithm and its implementation to find out what the cause for the problem is.
BTW, if you or anybody else is interested in the implementation of numerical math, it's highly recommended to study the "Numerical Recipes" -> http://numerical.recipes/
Many routines built into Mathcad (up to version 15) are based on them, and in the good old times (pre version 13) we were also offered a Mathcad e-book, "Exploring Numerical Recipes".
It might also be worth taking a look at Knuth, "Seminumerical Algorithms".
And to complete the list: most functions implemented in Mathcad are based on the "Handbook of Mathematical Functions" by Abramowitz & Stegun.
When PTC destroyed Mathcad and created Prime, they also changed some of the underlying numerical methods to the KNITRO routines (which I don't know much/anything about). They claimed those routines would be better, more stable, etc. Some threads in this forum proved that claim to be untrue, if I remember correctly.
Looks like MF found the number of iterations too by introducing a vector. Nice. All the best to all.
It should not matter how we choose the components of A. In fact, Gauss-Seidel has its own limitations; that's why other methods appeared, like Gauss-Seidel with over-relaxation, etc.
@tubar wrote:
Looks like MF found the number of iterations too by introducing a vector. Nice. All the best to all.
It should not matter how we choose the components of A. In fact, Gauss-Seidel has its own limitations; that's why other methods appeared, like Gauss-Seidel with over-relaxation, etc.
Is this a wish?
In fact it matters a lot what the values in the matrix A are and how they are arranged. As I have just written, both Jacobi and Gauß-Seidel are very happy with very big diagonal elements. It can be proven that they converge for sure if the matrix is diagonally dominant. If this is not the case, the algorithms may converge anyway, but there is no guarantee. The smaller the diagonal elements are, the bigger the chance that the algorithms will not converge, which can be seen in the two examples above where Mathcad throws the floating-point error (the calculated values possibly exceed the IEEE limit of approx. 10^307).
The part of your program which you copied from that pdf tries to cope with that problem by rearranging the rows. This may help in some cases, but in others it makes things even worse. In the second example I posted, the rows are rearranged in such a manner that the third diagonal element is zero. Of course this throws the division-by-zero error. If the program had not rearranged the matrix elements, we would have got a solution.
In other examples the opposite might be the case and it's a good thing to rearrange. A cleverly written program would "see" when to rearrange and how, would make the best of the given matrix A, and only then feed it to Jacobi or GS. Coping with all or most possible cases is a difficult programming task; that's the reason why in other programming languages, too, math libraries can be bought separately for good money. If the implementation of these libraries is done professionally, that's money well spent.
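A rough Python sketch of such a row-swapping heuristic (the name and details are my assumption, since we only see pictures of the actual program) shows how it can backfire:

```python
def rearrange_rows(A, b):
    """Greedy heuristic: for each column j, swap the row (from j downward)
    with the largest |entry| in that column onto the diagonal position.
    Working column by column like this can still leave a zero on the
    diagonal, which then causes a division by zero in Gauss-Seidel."""
    n = len(A)
    A = [row[:] for row in A]  # work on copies
    b = b[:]
    for j in range(n):
        p = max(range(j, n), key=lambda i: abs(A[i][j]))  # best pivot row
        A[j], A[p] = A[p], A[j]
        b[j], b[p] = b[p], b[j]
    return A, b
```

For example, A = [[1, 0], [2, 5]] is diagonally dominant as given, so Gauß-Seidel converges on it directly; the greedy swap turns it into [[2, 5], [1, 0]] and puts the 0 onto the diagonal.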
It's an illusion that you can simply read about an algorithm and then be able to implement it in Java, C#, C++ or R. There's much more you'll have to consider.
Again, I can only suggest having a look at the "Numerical Recipes" to get an idea.
The problem with Gauss-Seidel is the division by A_j,j, which indeed must not be zero.
In general, every programming language builds the algorithm from an initial logical tree that is universal for the given problem. Of course, the particularities of each software/compiler developer are then built on top of that tree.
Thank you very much Werner, and the other guys. I inserted your last suggestions and now it works.
I attached the file of Gauss-Seidel Method in order to benefit others in the future.
@tubar wrote:
Thank you very much Werner, and the other guys. I inserted your last suggestions and now it works.
That's fine, but ...
I attached the file of Gauss-Seidel Method in order to benefit others in the future.
Better make your program more robust!! The remarks I made above concerning the wrong manipulation of the matrix A in the first part of your program still apply. See the screenshot below; it's from your latest file.
At the right I copied the matrix your program created right before going into the iteration section.
Not sure what the problem is in the first example; maybe it's simply not robust enough. In the second example it's clear that a zero in the diagonal must produce that error.
Note that for orders up to and including 5 (that is, when A is a 5x5 matrix), an exact solution can be found:
So I guess, if an approximation algorithm is not stable enough to solve all solvable problems up to and including order 5, it's simply not good enough.
Success!
Luc
Symbolic calculation is magnificent but limited. If, for example, you want to EXPAND the term (a+b+c+d)^6, it will say that there is a display problem, something you cannot fix through settings. Very bad... I did some work with algebraic equations many years ago....
You mean like this?
No problem. Note that the above shows only the first page or so. The total is 7 pages wide in this 'collect'ed fashion. Uncollected it spans 12 pages.
Luc
The instruction COLLECT is very nice, but the limitation of Mathcad remains. Many years ago I was blocked in my symbolic calculation due to the graphical/display limitation of Mathcad at that time. I think Mathcad 13 was the most stable Mathcad version of all.
Luc's Mathcad 11 (which uses Maple for symbolics) can expand that expression, but Mathcad 15 (and 14) and also Prime use MuPAD for symbolics and would refuse to display the result. I don't think this is due to the symbolic engine but rather to a design change in Mathcad itself. Somebody at Mathsoft probably thought it would be a good idea not to let a symbolic result spread across several page widths; sorry guy, but it was not a good idea.
While MC13 still uses Maple (which is far superior to MuPAD), it also had some severe deficiencies. That's the reason many consider Mathcad 11 the best version ever. Alas, times go by, and from today's point of view it's probably MC15 to favor.
Jacobi or Gauß-Seidel sure are not algorithms you would like to use for real life problems.
They simply are not robust enough. At least not the original algorithm out of the box without some smart modifications.
If the matrix A is diagonally dominant (meaning that the absolute value of each diagonal element is greater than the sum of the absolute value of the other elements in the same row) the algorithm converges for sure.
But if A is not diagonally dominant, the algorithm may converge, but it's also likely to fail, and this is exactly what we see in the examples above.
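The dominance criterion is easy to check programmatically. A minimal sketch (my own illustration, testing strict row dominance as defined above):

```python
def is_diagonally_dominant(A):
    """Strict row diagonal dominance: |A[i][i]| > sum of |A[i][k]| for k != i.
    A sufficient (not necessary) condition for Jacobi/Gauss-Seidel to converge."""
    return all(
        abs(row[i]) > sum(abs(v) for k, v in enumerate(row) if k != i)
        for i, row in enumerate(A)
    )
```

A robust solver could run this test first and fall back to a direct method (or a rearrangement) when it fails, instead of iterating blindly.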
It's nice that we have an exact solution for 5 variables, but you won't use an iterative algorithm for that few variables anyway.
Iterative algorithms would be used if you have tens of thousands or maybe millions of variables. Then you sure would be happy to get a solution accurate to 3 decimals in one day by iteration, rather than waiting two weeks to get it with machine precision.
The program discussed here tries to cope with the convergence problem by rearranging the rows so that the largest number of each column becomes the diagonal element. A rather helpless attempt, as it works row by row, and it may happen that we are left with a 0 in the lower-right corner of the matrix that way, which surely results in a division by zero.
I am not well versed enough in numerical methods to suggest a simple solution. I am not sure a simple solution exists at all. Ideally, the algorithm would realize that it doesn't converge and would automatically choose another algorithm, maybe one specialized for the specific case.
Did I already mention that numerical math isn't easy and trivial?
But some people out there surely need it, as an "exact" solution cannot be found for every situation and problem (and not always in finite time).
One area of usage for such numerical approximations is the simulation of electronic circuits. Programs like SPICE (and its modern descendants PSpice and LTspice) do a good job of finding stable solutions for all currents and voltages in circuits with thousands of elements, often within a second.
I suppose you're not interested in the simple solution: let Mathcad do the work.
Some usage examples:
Success!
Luc
I have adapted the program to the case of an arbitrary starting point. It is known that convergence is guaranteed for diagonally dominant matrices and for symmetric, positive definite matrices.
This new approach suffers from the same problem as the OP's variant.
The algorithm works if the diagonal elements are large, and it is guaranteed to work if the matrix is diagonally dominant, like your second example. Unfortunately it's not always possible to rearrange the rows or columns easily in such a way that the matrix becomes suitable for this algorithm.
And applying more work to transform the system/matrix would mean an effort similar to solving the system exactly via Gauß elimination or the like.
??
Yes, also restrictive, like the criterion of diagonal dominance.
So quite often the algorithm will not converge and will fail. None of these algorithms will always converge; that's the point.
But by analyzing the given matrix and applying the appropriate one of a couple of "tricks", one may be able to turn a non-converging system into a converging one. This is what well-implemented numerical algorithms do, and the programming work to implement the special treatment of a number of special cases is usually much more than the implementation of the pure algorithm. That's the reason I and others advised against using this self-written program for real work; better to rely on the tested methods implemented in Mathcad.
Of course there is much literature about convergence of these iteration algorithms and error reduction.
http://www.iosrjournals.org/iosr-jm/papers/vol2-issue2/D0222023.pdf
http://math.unice.fr/~frapetti/CorsoF/cours3.pdf
.....
Yes, the most robust algorithm remains the Newton-Raphson method. While it may converge more slowly, it is still widely used for nonlinear systems of equations. For nonlinear systems solved in the least-squares sense, Newton-Raphson is extended to the Gauss-Newton method.
Nonlinear systems of equations are my challenge now; I am trying some calculations in this area at the moment.
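A minimal Newton-Raphson sketch for a 2x2 nonlinear system (an illustration of the general idea under my own example, not anyone's posted worksheet): each step solves the linear system J·Δx = -f(x), here via Cramer's rule since the Jacobian is 2x2, and updates the estimate.

```python
def newton_system(f, jac, x, tol=1e-10, max_iter=50):
    """Newton-Raphson for a 2x2 nonlinear system f(x) = 0:
    at each step solve J * dx = -f(x) (via Cramer's rule) and update x."""
    for _ in range(max_iter):
        f1, f2 = f(x)
        if abs(f1) < tol and abs(f2) < tol:
            break
        (a, b), (c, d) = jac(x)     # 2x2 Jacobian at the current point
        det = a * d - b * c
        dx = (-f1 * d + b * f2) / det  # Cramer's rule for J * (dx, dy) = -f
        dy = (-a * f2 + c * f1) / det
        x = (x[0] + dx, x[1] + dy)
    return x
```

For instance, solving x^2 - y = 0 together with x + y - 2 = 0 from the start point (2, 0) converges to the root (1, 1).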
Very nice, but unfortunately you shared only an image. It is hard for busy people to re-type the content of an algorithm :). It is like in primary school. A complete file would be more helpful.
Thank you -MF. Your file is quite useful. I will try to tweak it further in order to better evaluate the solutions of a nonlinear system of equations.
Hi tubar,
Since, as is known, numerical computation of the inverse matrix is inefficient, it can be avoided by making the following modifications (t is a parameter that is defined only if it is present in the equation system):