Hello.
We are discussing whether a server should be acquired to run Creo Simulate calculations.
Is it possible for me to set up the analysis on my workstation, and when I press run,
have the calculations done on the server rather than locally on my workstation?
Thanks in advance
I believe the Distributed Batch option will allow you to do this.
You would need an engine license available on all machines for distributed batch.
I don't think it's possible to buy these separately from the UI license.
The difference between a 'server' and a workstation ...
Better to get a top-of-the-range Dell T7910 workstation: 64 GB RAM, dual Intel® Xeon® E5-2670 v3 processors (12C, 2.3GHz, Turbo, HT, 30M, 120W) and a 256 GB SSHD.
About £5.6k at the moment.
The s/w will use all cores by default (I am assuming there are no s/w limitations so that would be 24 cores)
Make it available to everyone via the network only (no monitor). Perhaps configure a RAM disk. Stick it in a dust-free, cooled room.
Replace it every 3-5 years (speed roughly doubles, warranties run out, dust needs to be shovelled out, and reliability declines).
Float your license(s) and use batch files to queue (see the sketch at the end of this post).
Use your 'IT' dept to write some scripts for client machines to make daily life simple, or teach users how to log in remotely, etc.
Have lighter weight (basic CAD) user machines for model prep/test/post using the same floating license.
Note, you can pre-/post-process on one machine whilst the engine license(s) is busy elsewhere.
(I don't sell Dell computers; there are many other manufacturers out there.)
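To illustrate the 'batch files to queue' point above: a minimal sketch in Python of an overnight queue runner, assuming Creo Simulate has already written one .bat file per study (e.g. via its batch run option). The folder, drive letter and file naming are placeholder assumptions, not anything Creo mandates.

```python
# Overnight queue runner (sketch). Assumes each queued study already
# has a Creo-generated .bat file in STUDY_DIR; all paths are placeholders.
import subprocess
from pathlib import Path

STUDY_DIR = Path(r"D:\sim_queue")   # hypothetical drop folder for study .bat files
LOG = STUDY_DIR / "queue.log"

for bat in sorted(STUDY_DIR.glob("*.bat")):
    with LOG.open("a") as log:
        log.write(f"starting {bat.name}\n")
    # Run each study to completion before starting the next, so a single
    # floating engine license is reused rather than fought over.
    result = subprocess.run(f'"{bat}"', shell=True, cwd=STUDY_DIR)
    with LOG.open("a") as log:
        log.write(f"finished {bat.name} (exit code {result.returncode})\n")
```

Schedule something like that with Task Scheduler for the evening and the queue drains overnight.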
Thank you.
I was thinking a huge supply of RAM would cut the simulation time a lot.
Which is more important: RAM or an extreme CPU?
Hi,
Our machines have 48 GB RAM. Our Simulate studies rarely go above 24 GB used (in total), and these are sizeable models. I think this is due to the block solver memory allocation being limited to 16 GB.
Our Ansys and CFD models can use all the RAM.
Fatigue calculations scale linearly, so as many CPU cores as possible is good (again, licensing limitations may apply).
I think there was a suggestion in an earlier thread that the block solver memory allocation limit will eventually be lifted and therefore more memory=good.
Extreme CPU is always welcome. But as earlier threads also point out, read and write speed becomes a bottleneck at various points in the analysis, hence an SSHD is a major step forward; striping disks is good, and RAM disks are about as fast as it gets.
However, all this h/w and s/w talk must be balanced against opportunities for users' time management to hit peak efficiency. Estimate, plan, batch, and run models overnight and over weekends, holidays, meetings, timesheets, coffee, Facebook, lunch, report writing, etc.
A slower machine than you might think will probably be adequate.
Regards
RAM requirement depends mostly on the model size, and I think it is also limited by Creo (16 GB max SOLRAM in Creo 2? - but remember you need about twice as much machine RAM as the SOLRAM setting).
For small models, once you have allocated 'enough' RAM there is no speed increase. For large models, once you have allocated the maximum permitted, no more is possible (but of course future versions of Creo will probably increase this limit).
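To put numbers on the 'twice as much machine RAM as SOLRAM' rule of thumb above, here is a tiny sketch; the halving and the 16 GB cap are this thread's figures, not an official PTC formula.

```python
def suggested_solram_gb(machine_ram_gb, cap_gb=16):
    """Half the machine RAM, capped at the (assumed) 16 GB SOLRAM limit."""
    return min(machine_ram_gb / 2, cap_gb)

for ram in (24, 32, 48, 64):
    print(f"{ram} GB machine -> SOLRAM ~{suggested_solram_gb(ram):.0f} GB")
# 24 -> 12, 32 -> 16, 48 -> 16, 64 -> 16: above 32 GB the cap binds,
# so extra RAM goes to OS cache, a RAM disk, or other apps instead.
```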
However, as Charles mentions, there are some gains to be had by creating a RAM disk for the temporary files and the results - if you have an SSD then the gains will be smaller (but still there); but using a RAM disk instead of an SSD will also save 'using up' write cycles on the SSD.
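If you do point the engine's temporary/output directories at a RAM disk, a quick pre-flight space check can save a failed overnight run. A sketch, assuming the RAM disk is mounted as R: and that 8 GB free is 'enough' for your studies - both are placeholder assumptions:

```python
# Pre-flight free-space check for a RAM disk (sketch).
# Drive letter and threshold are placeholder assumptions.
import shutil
import sys

RAM_DISK = "R:\\"
MIN_FREE_GB = 8

free_gb = shutil.disk_usage(RAM_DISK).free / 1024**3
if free_gb < MIN_FREE_GB:
    sys.exit(f"only {free_gb:.1f} GB free on {RAM_DISK} - "
             "temporary files may not fit, aborting the queue")
print(f"{free_gb:.1f} GB free on {RAM_DISK}, OK to run")
```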
RAM is relatively cheap and in general will help most aspects of performance, so 64 GB as Charles suggests seems like a good idea. However, at the moment I suspect you will get along pretty well with 32 GB if it makes a difference to the budget. Happy for others to come along and contradict me here!
In terms of CPU, I would consider targeting something with fast single-thread speed, since some types of analysis (e.g. LDA) still seem to run on only one core, and even those that do multi-thread still spend some time single-threaded while splitting and re-combining the analysis.
Based on the results at http://www.cpubenchmark.net/singleThread.html (which from some recent testing of my new workstation correlate quite well with Creo regen times), Charles' suggestion of an E5-2670 v3 looks OK for single-thread and produces very good multi-thread results ( http://www.cpubenchmark.net/high_end_cpus.html ), but the i7-4770k gives better single-thread speed albeit with significantly lower overall performance, and could be overclocked if you choose (typically up to about 4.5 GHz so about 28%).
Based on Passmark as above, the i7 is 25% faster single-threaded, but the Xeon is 74% faster multi-threaded... have a look at Task Manager when running some typical 'heavy' analyses on your current machines to see how much time is spent multi-threaded.
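To make that trade-off concrete, here is a sketch using the two Passmark ratios quoted above (i7 ~25% faster single-thread, Xeon ~74% faster multi-thread). The multi-threaded fraction p is the unknown you'd read off Task Manager; the values below are just guesses.

```python
def rel_time(p, st, mt):
    """Relative run time when fraction p of the work runs at multi-thread
    rate mt and the rest at single-thread rate st (higher rate = faster)."""
    return (1 - p) / st + p / mt

# Rates normalised from the Passmark ratios quoted above; p is a guess.
for p in (0.3, 0.5, 0.8):
    i7 = rel_time(p, st=1.25, mt=1.00)
    xeon = rel_time(p, st=1.00, mt=1.74)
    print(f"p={p:.0%}: i7 {i7:.2f}, Xeon {xeon:.2f} (lower is faster)")
```

On those made-up numbers the crossover is around p ≈ 32%: mostly single-threaded work favours the i7, heavily multi-threaded work favours the dual Xeon.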
I forgot about the SSD write cycles. That's important. After 3-5 years of heavy Simulate use, I wonder how long they would actually last?
The problem with deviating from a standard offering, such as dropping the spec from 64 to 32 GB, is that the cost saving is minimal compared with the paperwork effort required and the loss of moderate future-proofing.
Similarly, the spec I suggested has a huge graphics card and other bits and pieces not required that can't be removed/descoped, which presumably has something to do with economies of scale when offering a 'complete' machine.
Deviations from 'mass produced' machines require specialist builders and much more effort on the part of the client with attendant hidden costs and risks.
Anyway, if you do specify an overclocked machine you really must get some of those lovely blue lights fitted inside to show off the water cooling system.
ttfn
Indeed, it may depend on the size of your company. If you're a one-man consultant you can specify whatever you like; if you work for Ford then you probably have to take what the IT department gives you! You may be able to get a better performance:price ratio using a small, specialist PC builder but you may not get the same warranty / support as you might from Dell (again, assuming your company is a big enough customer...)
Customising up to 64 GB and dropping to a K2000, this system comes out at £3000 (it's not the 4770k but they sell that as a component for similar money):
http://www.scan.co.uk/3xs/configurator/custom-built-graphics-cad-workstation-pc-uk-3xs-gw-ht20
so that's an interesting comparison to £5600 for the Dell mentioned.
Olaf Corten's Pro/E benchmark site (http://proesite.com/) has a number of o/c'd machines topping the benchmark tables, so certainly people do this for PESC* workstations, but they're probably in the minority.
*Pro/E, sorry, Creo
That's a very interesting machine Jonathan.
I think the majority of the price difference is that the Dell has 2-off Xeons.
Dear Santa ...
Yes, I realised after I posted that the Dell was dual-CPU - which, apart from another ~£500-1000 for the CPUs, probably means a more expensive motherboard too...
Scan/3XS seem to have a good reputation on a forum I read - I've had components from them myself but not a system.
Hello Eirik,
I saw that your thread generated quite a lot of feedback.
Did one or more of the posts help you come to a resolution for your discussion, and/or were there helpful answers you may want to acknowledge?
Thanks,
Gunter