Sorry if this is in the wrong forum (modeling), but I couldn't find a more appropriate one:
Can anyone tell me what exactly more expensive video cards buy you? I understand they have drivers optimized for the software and for OpenGL, etc. But I would like to hear, from your own experience using ProE, what the practical result is. Does it make everything faster? Does it only make animations with large assemblies faster? Are they really useful for ProE? I don't notice any slowness in ProE while designing parts. Maybe it is people who do lots of animations, rendering, and running mechanisms that need the better cards. Or maybe my CPU is doing a lot of extra work because I don't have a powerful OpenGL card and I just don't realize it, so my regen and assembly load times are higher than they need to be.
Anyone know the story on video cards and ProE?
Here are some benchmark tests from NVidia http://www.nvidia.com/object/ptc-creo-parametric-2.html and AMD http://www.anandtech.com/show/5747/amd-partners-with-ptc-for-creo-parametric-20
This is for Creo 2.0, are you still on ProE and if so which version?
I'm on WF5 but about to move to Creo 2. Thanks for the links. In case you're interested, I also saw some more at Tom's Hardware. But benchmark numbers don't really tell me exactly what the difference in user experience is. Do you know what I mean? Does the fact that I don't notice my computer being slow during standard reorienting of assemblies while modeling mean I wouldn't benefit from a better card? I work with some pretty big assemblies. At least I think so.
My experience is that the CPU is the limiting factor, at least in WF4; Creo 2 may have some improvements that make better use of the graphics card. Transparency seems to be the main thing that works the GPU.
If your company allows you, install something like nVidia Inspector or GPU-Z so that you can view a graph of your GPU usage. I would suggest that if it never reaches 100% in your normal usage, there would be little benefit in a more powerful card.
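If you want hard numbers without staring at a graph, here's a minimal sketch (assuming an NVIDIA card and a Python install; nvidia-smi is the command-line tool that ships with the NVIDIA driver) that prints GPU load once a second while you work:

    # Poll GPU utilization once a second via nvidia-smi.
    # Assumes an NVIDIA card with nvidia-smi on the PATH (it ships with the driver).
    # Spin/zoom a big assembly in ProE while this runs and watch the numbers.
    import subprocess
    import time

    while True:
        out = subprocess.check_output(
            ["nvidia-smi",
             "--query-gpu=utilization.gpu",
             "--format=csv,noheader,nounits"],
            text=True,
        )
        print("GPU load: " + out.strip() + "%")
        time.sleep(1)

If it never gets near 100% while you're reorienting models, the card isn't your bottleneck.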
Watch Windows Task Manager as well: set it to "one graph, all CPUs". Because ProE is largely single-threaded, if the overall CPU usage hovers around [100 / number of cores] percent, chances are one core is maxed out and you're CPU limited at that moment.
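To make that math concrete: on a 4-core machine, a single maxed-out core shows up as only 100/4 = 25% overall. Here's a rough sketch (assuming Python with the third-party psutil package installed) that flags that situation directly from the per-core numbers instead:

    # Flag the single-core-saturated case that an overall CPU graph can hide.
    # Assumes Python with the third-party psutil package (pip install psutil).
    import psutil

    while True:
        per_core = psutil.cpu_percent(interval=1.0, percpu=True)  # blocks ~1s
        overall = sum(per_core) / len(per_core)
        if max(per_core) > 95.0:
            print("Likely CPU limited: one core at %.0f%%, overall only %.0f%%"
                  % (max(per_core), overall))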
As mentioned in the nVidia link, if you do have spare GPU capacity then you could potentially use it for anti-aliasing, which makes things look a little nicer - certainly in my gaming experience, that's heavy on the video card.
If you'd like to see more benchmarks go here: http://www.proesite.com/newframe.htm?/OCUSB6/ocusb6.htm
So do the benchmarks basically load assemblies and then move them around a lot, like a maximum-speed automation of what users do while interacting with a model on screen? Do I have it correct that the idea behind getting fancier video cards is primarily that it lets you open larger assemblies with more transparency without reorientations becoming very slow? Thanks for the tips on how to measure CPU and GPU load.
Do I have it correct that the idea behind getting fancier video cards is primarily that it lets you open larger assemblies with more transparency without reorientations becoming very slow?
That's my understanding. It's possible that there are other processes (maybe rendering) which use the GPU more intensively, but I don't personally use them.
You can't have too good a video card, but having a cheap/poor one can SERIOUSLY affect performance. Turn off the stupid enhanced rendering, and turn off pre-highlight. Enable levels of detail (set the config option "lods_enabled" to yes and "lods_value" to 75 or so).
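For reference, those levels-of-detail settings would look like the lines below in config.pro (75 is just the starting point suggested above; the exact option names for pre-highlight and enhanced rendering vary by release, so check the config.pro reference for yours):

    ! config.pro - coarsen shaded geometry during spins/pans so the
    ! display keeps up; full detail comes back when the model stops moving.
    lods_enabled yes
    lods_value 75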