
Mechanica memory allocation vs actual usage

JonathanHodgson

We've established here that setting Mechanica's memory allocation to around half your available memory is still a pretty good guideline, even though that's more likely to mean 6 GB than the 128 MB default.

In my team we're getting some amazing performance using a RAM drive to supplement WF4's 8 GB Mechanica limit, and it's a very useful (and usable) tool these days.

However, this number doesn't seem to be a hard limit - to copy from a recent .rpt file:

Memory and Disk Usage:

Machine Type: Windows XP 64 Bit Edition
RAM Allocation for Solver (megabytes): 8192.0

Total Elapsed Time (seconds): 5162.19
Total CPU Time (seconds): 3089.44
Maximum Memory Usage (kilobytes): 19861061
Working Directory Disk Usage (kilobytes): 18542428

Results Directory Size (kilobytes):
5172630 i:\casing_dyn_3rd_3mounts_3casings

Very small components usually do use only around 8 GB, but as the run gets bigger so does the memory taken by msengine.exe, and I suspect that this recent run started hitting the Windows swapfile (20 GB Mechanica + 6 GB RAM drive + xtop + Windows on a 24 GB machine...) - which can't be a good thing!
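As a rough sketch of that budget (Python; the xtop + Windows figure is an assumed overhead for illustration, not something reported anywhere in this thread):

# Rough RAM budget for the run described above, in GB.
total_ram   = 24.0   # physical RAM in the machine
msengine    = 19.9   # reported Maximum Memory Usage (19861061 kB)
ram_drive   = 6.0    # RAM drive used for the working directory
xtop_and_os = 1.5    # assumed overhead for xtop.exe plus Windows itself

demand = msengine + ram_drive + xtop_and_os
print(f"demand {demand:.1f} GB vs physical {total_ram:.1f} GB "
      f"-> shortfall {demand - total_ram:.1f} GB, likely paged to the swapfile")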

What's the exact relationship between the memory allocation setting and what msengine.exe actually uses? By trial and error we'll probably turn it down to 6144 next time, but as each model is different it's a little hard to predict for a new run.

Thanks!



Hello Jonathan,

I believe this was answered in an older thread that you were involved in: http://communities.ptc.com/message/161358#161358

As far as memory allocation goes, my understanding is that it's the size of the chunks of memory that Mechanica will take when it needs them. Thus, when we used to get the "insufficient memory" error on 32-bit machines, you could sometimes make the run go by reducing the memory allocation, which is counterintuitive. But the reason it worked is that Mechanica was taking smaller bites of memory, getting closer to exactly what the run needed instead of throwing more than enough RAM at it. So most often, I would expect you to see a lower Maximum Memory Usage with a smaller memory allocation.

Smaller bites = more writing to disk, larger bites = less writing to disk. In general, you can see a decent improvement in run speed by taking larger bites, but my experience was that it was model and/or machine dependent as to how much speed benefit you see. Put SSD into the picture and it might be irrelevant.

Brad Green

The "chunks" explanation doesn't hold water for me. We now use 8192 as our standard SOLRAM value, for everything. With that setting, I've seen memory 'takes' of around 8 GB, 10 GB, 13 GB, 16 GB, 20 GB... there's no sign that it uses multiples of SOLRAM. It just seems to vary as it wants.

Running my own quick test:

SOLRAM (MB)   Max Mem (kB)   Ratio   E.T. (s)
512           668852         1.28    3.84
1024          1168884        1.11    3.07
2048          4093364        1.95    2.78
3072          5093364        1.62    3.63
4096          6093364        1.45    3.17
8192          10093364       1.20    3.13

(Also, how bad is the font and tab management in this text editor?!?)

So, from that I'd conclude that for any given model, larger SOLRAM means larger memory take, but not in any sensible or linear fashion.
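For reference, the Ratio column is just Max Mem divided by SOLRAM converted to kB; a minimal check using only the numbers from the table above:

runs = [(512, 668852), (1024, 1168884), (2048, 4093364),
        (3072, 5093364), (4096, 6093364), (8192, 10093364)]  # (SOLRAM MB, Max Mem kB)

for solram_mb, max_mem_kb in runs:
    ratio = max_mem_kb / (solram_mb * 1024)   # SOLRAM is entered in MB, the report is in kB
    print(f"SOLRAM {solram_mb:5d} MB -> {max_mem_kb:9d} kB, ratio {ratio:.2f}")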

Mostly I'm just a bit concerned that it doesn't stop at a ratio of 2.0 (my original example went to 2.37× the SOLRAM value) which isn't helpful when you're trying to make best use of finite RAM.

This is only really a problem when we're running really big models - 8192 works for all the smaller ones - but that's potentially when it has the biggest impact.

The "093364" string looks strange...

Keep in mind that 1 MB = 1024 kB.

Indeed - it's as though it's a multiple plus a constant, but as you suggest the basic number doesn't increase in the right multiples.

I wonder if PTC's 128 "MB" is really 128 000 000 bytes...

It doesn't have to be 128 000 000 bytes.

Since 1 MB = 1024 kB and 1 kB = 1024 bytes, 128 MB = 128 × 1024 × 1024 = 134 217 728 bytes.

EDIT: I've just realised you're wondering whether one of PTC's "MB" is actually 1 000 000 bytes.

Did you check the memory usage in Windows Task Manager during your tests?

(I really don't like this forum layout, which forces us to jump up and down depending on who is replying to whom...)

I didn't look at Task Manager on this occasion, but when we've been running 10-20 GB jobs it usually seems to agree with the reported memory usage.

To your other reply (yes, this branching structure can get very messy!):

I was thinking that the fixed digits in the memory usage (...093364) may mean that Mechanica is using 4 000 000 + 93 364, or 5 000 000 + 93 364, or 6 000 000 + 93 364 - the only reason we should keep seeing the same digits when reading the number in base-10 is if the value that's actually changing is a 'round number' in base-10.
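A tiny check of that idea, using only the Max Mem values from the quick test above:

# Subtracting the fixed 93364 kB should leave a round base-10 number of kB
# if the varying part really is a whole number of millions.
constant = 93364
for value in (4093364, 5093364, 6093364, 10093364):
    remainder = value - constant
    is_round = remainder % 1_000_000 == 0
    print(f"{value} - {constant} = {remainder} ({'round millions' if is_round else 'not round'})")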

My 2 cents:

I recently noticed that the memory usage reported by Mechanica depends on the SOLRAM setting more than on the matrix size.

As an example, here is the analysis of a small rod with solram set to 500 MB:

Machine Type: Windows XP 64 Bit Edition
RAM Allocation for Solver (megabytes): 500.0

Total Elapsed Time (seconds): 0.90
Total CPU Time (seconds): 0.33
Maximum Memory Usage (kilobytes): 553282

with solram set to 1000 MB:

Machine Type: Windows XP 64 Bit Edition
RAM Allocation for Solver (megabytes): 1000.0

Total Elapsed Time (seconds): 2.25
Total CPU Time (seconds): 0.94
Maximum Memory Usage (kilobytes): 1101019
Working Directory Disk Usage (kilobytes): 0

with solram set to 2000 MB:

Machine Type: Windows XP 64 Bit Edition
RAM Allocation for Solver (megabytes): 2000.0

Total Elapsed Time (seconds): 1.15
Total CPU Time (seconds): 0.33
Maximum Memory Usage (kilobytes): 4016333
Working Directory Disk Usage (kilobytes): 0

But in all cases, the computer's actual memory usage increased by only ~30 MB.
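Putting those three runs side by side (a small sketch using only the reported figures above; the "actual growth" note is the ~30 MB Task Manager observation, not a report value):

# (solram MB, reported Maximum Memory Usage kB) for the three rod runs above.
runs = [(500, 553282), (1000, 1101019), (2000, 4016333)]

for solram_mb, reported_kb in runs:
    solram_kb = solram_mb * 1024
    print(f"solram {solram_mb:4d} MB: reported {reported_kb:8d} kB "
          f"= {reported_kb / solram_kb:.2f} x solram (actual growth only ~30 MB)")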

I also found this in the *.pas file:

Size of global matrix profile (kb): 249.888
Number of terms in global matrix profile: 31236
Minimum recommended solram for direct solver: 2

and this little advice:

If you set solram too high, performance will usually suffer, even on machines with very large RAM, because there will not be enough machine RAM for other important data. For example, Mechanica allocates many large, non-solver memory areas that will cause excessive swapping unless you leave enough spare machine RAM.
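One rough way to act on that advice, sketched in Python - the overhead figure is an assumption for illustration, not anything from PTC documentation, and the 8192 MB cap is simply the WF4 limit mentioned at the top of this thread:

def suggest_solram_mb(physical_ram_mb, matrix_profile_mb, other_overhead_mb=2048):
    """Pick a solram (MB) that covers the matrix if possible, while leaving
    room for Mechanica's non-solver memory areas and the OS."""
    ceiling = physical_ram_mb - other_overhead_mb   # leave spare machine RAM
    wf4_cap = 8192                                  # WF4's SOLRAM ceiling
    return max(128, min(matrix_profile_mb, ceiling, wf4_cap))

# Example: 24 GB machine, 10 GB matrix, assume ~6 GB of non-solver data + OS.
print(suggest_solram_mb(24 * 1024, 10 * 1024, other_overhead_mb=6 * 1024))   # -> 8192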

Yes, the "may case excessive swapping" warning is exactly what I'm concerned about.

For small models it seems that there's no significant effect on speed (I don't really care whether a model takes 3.1 or 3.8 seconds to solve!) but with larger runs there's definitely a penalty if you set the value too low, certainly when using a mechanical hard drive for the working directory.

Interestingly, all my test runs in my other post had the same working directory disk usage (11573).

OK, I'm now re-reading Tad Doxsee's reply to my original post.

http://communities.ptc.com/thread/33467?start=0&tstart=0#160574

If I've understood it all correctly, my theory is this:

msengine.exe (Mechanica) has at least two elements to its memory usage: the global stiffness matrix, which it will fit into SOLRAM if it can and page to disk if it can't; and a general 'database'.

For very small models and adequate SOLRAM, SOLRAM is bigger than everything, so it claims that amount of memory (plus a tiny bit for the database) but probably doesn't use it all.

For slightly larger models, the database is big enough to add a noticeable amount to the memory use - but still small in relation to SOLRAM. At some point the matrix may become too large for SOLRAM, and start to page to disk.

For very large models (and with the WF4 8 GB SOLRAM limit), the database can be as big as, or bigger than, SOLRAM. Now the memory usage can exceed 2× SOLRAM, and the matrix is almost certainly already paging to disk. This is probably only possible with a 64 bit operating system!

So, the summary is: to analyse very large models, you may need to reduce SOLRAM to allow space for the other data that Mechanica needs to keep in memory. There's probably no way to predict this; just be aware of the possibility, and be ready to reduce SOLRAM if your computer gets close to running out of memory during a run.
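Restating that theory as a sketch (the matrix/database split below is just the model described above, not PTC's documented internals; the numbers are from the 20 GB run earlier in the thread):

def estimate_footprint_kb(matrix_kb, database_kb, solram_kb):
    """Two-component model: solram plus the 'database' stay resident,
    while any matrix overflow is paged to the working directory."""
    resident = solram_kb + database_kb
    paged = max(0, matrix_kb - solram_kb)
    return resident, paged

# 20 GB run: solram 8388608 kB, matrix 13.8528 GB (~13852800 kB),
# 'database' inferred as reported max mem minus solram (11472453 kB).
resident, paged = estimate_footprint_kb(13852800, 11472453, 8388608)
print(f"resident ~{resident} kB, paged to disk ~{paged} kB")   # ~19861061 kB resident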

Presumably this means that it's possible to create an analysis where the database takes up or even exceeds all the available RAM, leaving no space for SOLRAM at all... what then? Is that the limiting factor for model size / complexity?

Jonathan Hodgson wrote:

Presumably this means that it's possible to create an analysis where the database takes up or even exceeds all the available RAM, leaving no space for SOLRAM at all... what then? Is that the limiting factor for model size / complexity?

I think that swapping to disk will occur and performance will decrease, but it should run fine (assuming the OS virtual memory is big enough).

One should keep in mind that the memory usage reported by Mechanica isn't the actual RAM used. It's just the amount of memory allocated to Mechanica, and things can go wrong if the global stiffness matrix is "too big".

I mean, if we set solram to, say, 10 GB on a machine with 4 GB of RAM, all will be fine until the 4 GB of RAM is entirely filled.

Tad Doxsee wrote:

For example, suppose you have a machine with 4 GB of RAM and 4 GB of disk allocated to swap space. You run an analysis which needs 1 GB for the global stiffness matrix, K, and 2 GB for everything else, which I'll call DB. If you set solram to 1.5 GB, then, ignoring the RAM used by the operating system and other applications, the memory usage looks like this.

Available:

              RAM                             swap
|--------------------------------|--------------------------------|

Used by Mechanica:

      DB            K
****************(########----)                                Ideal
                   solram

DB + solram < RAM    good (no OS swapping)
K < solram           good (no matrix equation swapping)

In the above, the memory used by DB is shown as ****, the memory used by K is shown as ###, and the memory allocated to solram is inside parentheses (###--). Because K is smaller than solram, there is some memory allocated to solram that is unused, shown as ----. This is an ideal situation: because K < solram and DB + solram < RAM, no swapping will occur.
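As a quick numerical check of those two conditions with the figures in that example (GB values copied from the quote above):

ram, db, k, solram = 4.0, 2.0, 1.0, 1.5   # GB, from Tad Doxsee's example

no_os_swapping     = db + solram < ram    # Mechanica's resident total fits in physical RAM
no_matrix_swapping = k < solram           # stiffness matrix fits inside solram

print(f"DB + solram = {db + solram} GB < RAM = {ram} GB : {no_os_swapping}")
print(f"K = {k} GB < solram = {solram} GB : {no_matrix_swapping}")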

But in my example the maximum memory usage reported by Mechanica would be ~10 GB or more.

I'm curious to see what the global matrix size was for the analysis you ran above as a quick test.

Could you have a look at it?

The small run that I tried at several values of SOLRAM reports: Size of global matrix profile (mb): 9.51816

The run that hit 20 GB: Size of global matrix profile (gb): 13.8528

I just thought that the best way to optimize the solram size could be to run an analysis with solram set to 8 GB, then have a look at the checkpoints tab in the run status and note the size of the global matrix profile.

The last step is to stop the analysis, set the solram size a little bit bigger than the global matrix profile size, and finally re-run the analysis.
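That procedure, sketched as a couple of helper functions - read_matrix_profile_mb() is a stand-in for "read the 'Size of global matrix profile' line from the checkpoints tab", not a Mechanica API:

def read_matrix_profile_mb(report_text):
    """Pull the global matrix profile size (MB) out of pasted report text."""
    for line in report_text.splitlines():
        if "Size of global matrix profile (mb):" in line:
            return float(line.split(":")[1])
    raise ValueError("matrix profile line not found")

def retuned_solram_mb(matrix_profile_mb, margin=1.1):
    """Second pass: set solram a little bigger than the matrix profile."""
    return int(matrix_profile_mb * margin)

sample = "Size of global matrix profile (mb): 146.063"
print(retuned_solram_mb(read_matrix_profile_mb(sample)))   # -> 160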

In your case, with a matrix of 14 GB, a ramdisk of 6 GB and 20 GB of RAM, it could be hard to define the right solram size because of the unknown amount of RAM used by the msengine.exe process.

I know, this is your primary question ("What's the exact relationship between the memory allocation setting and what msengine.exe actually uses?")

As an example, I have just run an analysis and watched the msengine.exe memory usage in Task Manager; it reached a maximum of ~360 000 kB.

Here are parts of the report:

Size of global matrix profile (mb): 146.063
Number of terms in global matrix profile: 18257900
Minimum recommended solram for direct solver: 10

Size of element file (mb): 50.1274
Maximum element matrix size (kb): 14.64
Average element matrix size (kb): 12.6361

Machine Type: Windows XP 64 Bit Edition
RAM Allocation for Solver (megabytes): 1000.0

Total Elapsed Time (seconds): 1156.78
Total CPU Time (seconds): 1770.81
Maximum Memory Usage (kilobytes): 1183990
Working Directory Disk Usage (kilobytes): 116789

If I'm right:

The difference between max mem usage and solram is 159 990 kB (with solram at 1 024 000 kB); that should be the memory used by the "other part" of msengine.exe.

If we add this to the global matrix profile size, we get a total RAM usage of 309 404 kB, not far from the memory usage seen in Task Manager...

With your 20 GB run:

matrix = 13.8528 GB, solram = 8 388 608 kB, max mem usage = 19 861 061 kB

msengine.exe should have used the full 19 861 061 kB, as your solram was entirely filled.

The "non-matrix part" of the memory used by msengine was 11 472 453 kB (max mem usage minus solram size).

The non-SOLRAM memory usage of msengine seems to be approximately equal to the global matrix size (in both your example and mine).

To be confirmed by experiment.
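A quick check of that, using the two runs discussed above (all values in kB; the matrix sizes are converted from the MB/GB figures in the reports, assuming decimal units):

runs = {
    "small run (solram 1000 MB)": (1183990, 1024000, 146063),
    "20 GB run (solram 8192 MB)": (19861061, 8388608, 13852800),
}  # (reported max mem kB, solram kB, matrix profile kB)

for name, (max_mem, solram, matrix) in runs.items():
    non_solram = max_mem - solram
    print(f"{name}: non-SOLRAM = {non_solram} kB, "
          f"matrix = {matrix} kB, ratio {non_solram / matrix:.2f}")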

We've just done a first pass on another large analysis:

Size of global matrix profile (gb): 9.79986
Number of terms in global matrix profile: 1224982668
Minimum recommended solram for direct solver: 198
Size of element file (gb): 2.30161

Machine Type: Windows XP 64 Bit Edition
RAM Allocation for Solver (megabytes): 8192.0

Total Elapsed Time (seconds): 2599.75
Total CPU Time (seconds): 2072.20
Maximum Memory Usage (kilobytes): 14121478
Working Directory Disk Usage (kilobytes): 12805830

Therefore:

SOLRAM = 8 388 608 kB (8192 × 1024)

Non-SOLRAM memory used = 14 121 478 − 8 388 608 = 5 732 870 kB

So in this case, the non-SOLRAM memory is a little more than 50% of the matrix size (and nearly 2.5× the 'element file' size)...
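And the arithmetic behind those two ratios (the GB figures from the report converted to kB assuming decimal units):

max_mem_kb  = 14121478
solram_kb   = 8192 * 1024    # 8388608
matrix_kb   = 9799860        # 9.79986 GB
element_kb  = 2301610        # 2.30161 GB

non_solram = max_mem_kb - solram_kb   # 5732870
print(f"non-SOLRAM = {non_solram} kB")
print(f"vs matrix profile: {non_solram / matrix_kb:.0%}")    # ~58%
print(f"vs element file:   {non_solram / element_kb:.2f}x")  # ~2.49x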

Yes, you saw the same thing in your quick test too... (not sure my English is right)

Latest update: the Windows memory usage appears to be slightly lower than Mechanica reports. At the moment we have a run showing 19 GB in use, whilst Windows shows msengine using 17 GB... although the total of xtop.exe and msengine.exe is very close to the Mechanica value.

And a run can crash when it uses up all the memory - we've just proved this!

I don't remember exactly how we concluded this (it's been a few years), but we believed that even when enough memory was present/free to keep the job completely in physical RAM, Windows was still sending some of it to virtual memory.
