What's the relationship between p-level, solution time, mesh & accuracy?

cwenner
1-Newbie


First, some perspective: I deal with very large structural (rocket) problems, so to keep solution times manageable (I run lots of optimizations) I am strongly incentivized to use single-pass analyses rather than multi-pass. So readers should understand that my question is about finding an approach that maximizes the prospects of getting reasonable results from single-pass analysis.

Up until now I have "accepted" that if the p-level of my problem stayed below 9 (the maximum), then I could expect my solutions to be reasonably accurate numerically, since "the math hadn't red-lined". The first question: does anyone know that to be "mostly true" or "a reckless assumption"? To answer that, you should know that I deal with concept-level designs: I do NOT need absolutely accurate numerical results... I just need results that are accurate enough to yield accurate design insight.

The next question is somewhat related but deals specifically with solution time. Despite the p-levels in my model being almost entirely 7 and below, it's taking 3 hours to run a simple 4-parameter optimization. More elements mean more equations... but they also mean that stress gradients are much easier for the math to deal with. My question is this (kind of hoping a developer will answer): is there a target "mathematical sweet spot" for geometrically complex problems? If the default, coarse mesh results in a lot of p=8 elements and I want to reduce my solution time, should I refine the mesh just enough to get most elements down to p=7, or should I, if possible, increase the mesh density until most of the model is at p=5 or lower?
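To put some rough numbers on "more elements mean more equations", here is a back-of-the-envelope sketch. This is purely my own simplification: it assumes one complete degree-p polynomial space per element and three displacement components, whereas Mechanica's hierarchic, edge-based shape functions are shared between elements, so actual equation counts will differ; the element counts and p-levels below are made up.

    # Rough DOF estimate for a p-version solid mesh (a simplification, not
    # how the solver actually counts equations).
    from math import comb

    def rough_dofs(n_elements, p):
        modes_per_field = comb(p + 3, 3)          # complete degree-p polynomials in 3-D
        return 3 * n_elements * modes_per_field   # 3 displacement components

    for n, p in [(5000, 8), (10000, 7), (20000, 5)]:
        print(f"{n:6d} elements at p={p}: ~{rough_dofs(n, p):,} DOF")

Even quadrupling the element count while dropping from p=8 to p=5 keeps the equation count in the same ballpark, which is exactly the trade-off I'd like guidance on.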

Thanks.


4 REPLIES

Hi Chauncey,

An interesting question!

First off, congratulations on getting a 4-parameter optimization study to run. I've yet to have success with even a 2-parameter study - I just get stuck with "recovering from invalid geometry" down to ridiculously small step sizes.

Obviously for confidence in your results, multi-pass has the huge advantage that you can plot a convergence graph. If you're going to do that with single-pass, you'll have to repeat the run multiple times and plot your own graph in Excel... although if you do a lot of similar problems (and/or you capture the 'right' metrics, whether that's maximum P-value, worst element angle, or something else we haven't thought of) then it might be a worthwhile exercise.
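Just to illustrate the "plot your own graph" idea, something along these lines would do. It's only a sketch: the run labels and stress values are invented placeholders, and you'd substitute whatever metric you read out of each run's summary file (Excel works just as well).

    # Minimal DIY convergence plot from repeated single-pass runs.
    import matplotlib.pyplot as plt

    runs = ["coarse", "medium", "fine", "finer"]   # successive mesh refinements
    max_stress = [412.0, 437.5, 445.2, 446.1]      # e.g. max von Mises (MPa) per run

    plt.plot(runs, max_stress, marker="o")
    plt.xlabel("Mesh refinement")
    plt.ylabel("Max von Mises stress (MPa)")
    plt.title("DIY convergence check for single-pass runs")
    plt.show()

If the curve has flattened out by the mesh you intend to use, that gives you at least some of the confidence a multi-pass convergence graph would.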

My subjective view is that I get better (= quicker with similar confidence) results by applying aggressive mesh controls locally to areas of concern, probably using more elements in those areas than is actually necessary - but this does involve human intervention and judgement. Solving a few [hundred or thousand] extra elements for a good solution first time out seems to be quicker than waiting for MPA to complete every time - but in terms of absolute rules, I only have my own judgement / experience on the types of component I typically analyse.

If someone does fancy doing some research into element size / run time / P-level / convergence then that would be really interesting.

Also, if you haven't already done so, have a read of some of the other threads on getting Mechanica to run better. First, buy lots of memory (RAM) - it's dirt cheap really - and second, put all your temporary and results files on a RAM drive if you can. People here are probably bored of me posting this, but it's transformed Mechanica for us from "quite useful when you need it" to "I'll just analyse it anyway 'cos it'll only take a few minutes".

As a quick guide: on a multi-core CPU with a linear static analysis, it should be possible to have the total CPU time greater than the total elapsed time...

Thanks for the response.

For what it's worth, since WF5 I routinely optimize on up to 9 parameters, generally using a failure_index < 1.0 optimization criterion. I do find "9" takes painfully long, especially when, hours into it, I find I did something that won't work (like not providing enough range for dimensions and model parameters to change within). To be sure, I am happiest with 6 parameters or fewer. I've tried using exclusion for singularity areas... it is not worth the computational overhead (at least not for large problems).

Without exclusion my 6-parameter runs take 1-5 hours depending on lots of variables; with exclusion everything took more than twice as long. One reason could well be that I use small elements in excluded areas, which greatly increases the element count.

I do have a 64-bit, 24 GB RAM, 6-processor machine though.

Two other quick thoughts:

1) I presume you've already increased SOLRAM (the memory allocation in the Options dialog box) from the default 128 MB? 4-8 GB might be a better first guess.

2) In case you haven't found it, the Edge Length by Curvature mesh control is good for putting elements where you need them and not where you don't. It can even be applied globally on some models. Somewhere between 2.0 and 1.0 (lower is more aggressive) is a good starting point.

Hi Chauncey,

I agree that if your p-levels remain below 9 during a single-pass adaptive (SPA) analysis, then the solution is reasonably accurate numerically. Of course, it's impossible to make precise statements about accuracy because so much depends on the specifics of the model and of the analysis.

As you probably already know, you can get an idea of the accuracy of your results by looking at the RMS stress error estimate that is printed in the summary file. You can also create a result window that shows the distribution of stress error estimates. You can check the accuracy of the default SPA analysis by running a similar SPA analysis or multi-pass adaptive (MPA) analysis with tighter convergence criteria.
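If you want to make that comparison systematic, a simple relative-difference check is usually enough. A sketch follows; the two stress values are placeholders you would read from the summary files of the two runs, and the 5% threshold is an arbitrary example for concept-level work, not a recommendation.

    # Compare a key result from the default SPA run against a run with
    # tighter convergence (or an MPA run). Placeholder values below.
    spa_default = 446.1   # e.g. max von Mises stress from the default SPA run
    spa_tight = 451.8     # same measure from the tighter-convergence run

    rel_diff = abs(spa_tight - spa_default) / abs(spa_tight)
    print(f"Relative difference: {rel_diff:.1%}")
    if rel_diff < 0.05:
        print("Within ~5%: likely adequate for concept-level design insight.")
    else:
        print("Consider refining the mesh or tightening convergence.")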

I don't think there is a "mathematical sweet spot" or rule of thumb that balances the number of elements with the p-levels of the edges for all problems. I do know that, in general, increasing the polynomial order of the edges increases the accuracy of the solution faster than increasing the number of elements does, at least when measuring error in the energy norm versus the number of degrees of freedom. (See, for example, Chapter 8 of Joseph Flaherty's course notes on Finite Element Analysis.) However, singularities and stress concentrations introduce exceptions to this general rule, and that's why adding smaller elements near fillets and isolating singularities often help to obtain accurate solutions faster. In another post, Jonathan Hodgson mentioned the Edge Length by Curvature AutoGEM control, and I agree that it's an excellent tool for helping to create good meshes.
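For reference, the classical a priori estimates behind that statement have roughly the following form in the energy norm, with N the number of degrees of freedom and lambda measuring the strength of the dominant singularity. These are the standard two-dimensional results, so treat them as the shape of the trend rather than exact rates for any particular 3-D model:

    \|e\|_E \le C\,N^{-\min(p,\lambda)/2}   \quad \text{(h-refinement, uniform mesh, fixed } p\text{)}
    \|e\|_E \le C\,e^{-\gamma N^{\theta}}   \quad \text{(p-refinement, smooth solution)}
    \|e\|_E \le C\,N^{-\lambda}             \quad \text{(p-refinement, singular solution)}

The exponential rate for smooth solutions is why raising p usually wins, and the algebraic rate limited by lambda is why adding smaller elements near fillets and isolating singularities pay off.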

Tad Doxsee

PTC, R&D
