Scenarios for Windchill Performance Testing

dwilliams
6-Contributor

Hi All,

Does anyone have a list of scenarios that can be used to measure
Windchill performance? We are hoping to measure both transaction and
navigation response time inside of Windchill (9.1 M040).



Thanks for your help,

Dax

3 REPLIES

Hi, Dax,


Have you seen the EDC document entitled "Using the PTC Day-in-the-Life Performance Benchmark"? It may be useful to you, and I know that it includes links to available datasets as well.


Regards,


Jane


Jane Zupfer
VP, Windchill Publications
T 763.957.8299
E -

AL_ANDERSON
5-Regular Member
(To:dwilliams)

Warning: This post is too long to read unless you are really, really
interested in doing a lot of performance testing and tuning for a major
Windchill upgrade (such as Windchill 9.1 to Windchill 10.0) AND you are
interested in using statistical analysis to present your results to
project stakeholders.

......

When moving from 9.1 to 10.0, we set a goal of "performance as good or
better in 10.0 than 9.1." We defined 81 simple, transactional use cases
and 11 business-unit sub-processes that incorporated "day in the life"
scenarios crossing from Pro/E to Windchill and back. We found that
the longer the test case, the greater the variability in the results, and
the less useful the data. Also, for any kind of BOM-related testing, we
had to separate "large BOM" tests from "small" or "medium" BOM tests.

For each of our 92 tests, we defined a common set of test objects for the
testers to use, and each tester ran multiple tests using the same test
objects in both systems. For the 81 simple use cases, we used 1 tester
each with 10 objects (the same 10 in both systems). For the sub-process
tests, each tester ran the same test on 3 objects, and we used at least
3 testers for each test.
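
As a rough sanity check on the scale of this design, here is a sketch of the implied baseline run counts. It assumes one timed run per object per tester per system and uses the floor of 3 testers for the sub-processes; the actual total was higher because of repetitions and reruns after fixes.

# Baseline run counts implied by the test design above (illustrative
# arithmetic, not a script we actually used).
simple_runs = 81 * 1 * 10 * 2    # 81 use cases x 1 tester x 10 objects x 2 systems
subproc_runs = 11 * 3 * 3 * 2    # 11 sub-processes x 3 testers x 3 objects x 2 systems
print(simple_runs, subproc_runs, simple_runs + subproc_runs)  # 1620 198 1818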

We then compared the average times and standard deviations of the results
from 9.1 to 10.0. We also ran the entire population of results through a
2-sample t-test to determine whether there was a statistically significant
difference between 10.0 and 9.1 across all the results. This helped a lot.
Initially, 10.0 was significantly slower across the entire population. By
focusing on the specific test results, we eventually got 10.0 (on average)
faster than 9.1, although the difference was not statistically significant
across the entire population.
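
If you want to reproduce this kind of analysis, here is a minimal sketch in Python using SciPy's ttest_ind. The timing values are made up for illustration; they are not our test data.

from statistics import mean, stdev
from scipy import stats

# Hypothetical response times (seconds) for one use case on each release.
times_91 = [12.4, 13.1, 11.8, 12.7, 12.0, 13.5, 12.9, 11.6, 12.3, 12.8]
times_100 = [11.9, 12.2, 11.5, 13.0, 12.6, 11.8, 12.1, 12.4, 11.7, 12.0]

# Compare average times and standard deviations, as described above.
print(f"9.1:  mean={mean(times_91):.2f}s  sd={stdev(times_91):.2f}s")
print(f"10.0: mean={mean(times_100):.2f}s  sd={stdev(times_100):.2f}s")

# 2-sample t-test; the null hypothesis is that the two means are equal.
t_stat, p_value = stats.ttest_ind(times_91, times_100)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")

# If p > 0.05 we fail to reject the null hypothesis, i.e. we cannot claim
# 10.0 is significantly faster or slower than 9.1 on this sample.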


Simple Transactional Use Case Tests
Non-CAD RICEF-Solar Where Used
Part-BOM-Open MPSE from Part Information Page
Part-BOM-Save As
Part-BOM-Where Used Expand All (BOM)
Non-CAD RICEF-Create Project from Template
Part-BOM-MPSE Modify Structure (BOM)
ECR-Add Linked Object
Document-Move to New Context
Document-Keyword Search and Read
CAD-Check In
CAD-View Relationship Report
Part-BOM-HTML Check In Structure (BOM)
Part-BOM-Insert an Existing on MPSE
ECR-File Upload
Non-CAD RICEF-Part Displacement Editor "Add" Record from ECN
Document-Delete
Document-File Upload
ECT-Remove Linked Object
Non-CAD RICEF-Advanced Add for 50 Child Part Numbers
ECT-Add Linked Object
Part-BOM-Modify Attributes
Part-BOM-MPSE Check In Structure (BOM)
Part-BOM-Classification Search and Read
Part-BOM-Move to New Context
ECR-Basic Search and Read
Part-BOM-Set State
Document-Revise
Assignment (Workitem)-Open Worklist
CAD-Keyword Search and Read
Part-BOM-HTML Check Out Structure (BOM)
Document-Saved Search Search and Read
Non-CAD RICEF-Change Status Report
Non-CAD RICEF-Project Termination Report Print
Part-BOM-Opening of Revise Page
ECR-Remove Linked Object
Part-BOM-Mass Change Search for Parts
CAD-Set State
Assignment (Workitem)-Reassign Task
Part-BOM-Delete
Non-CAD RICEF-Part Displacement Editor "Edit" Record from ECN
Part-BOM-MPSE Check Out Structure (BOM)
ECR-File Download
Document-Expand All for Structure (BOM)
Document-Check Out Structure (BOM)
Document-Basic Search and Read
ECR-Saved Search Search and Read
Part-BOM-Remove Linked Object
ECR-Modify Attributes
User-Search on User in Participant Tab
ECR-Set State
Assignment (Workitem)-Complete Setup Participants Task
Part-BOM-Opening of Classification Node Tree
Document-Create Document
Document-HTML Modify Structure (BOM)
CAD-Saved Search and Read
Part-BOM-Revise
Part-BOM-Saved Search Search and Read
Part-BOM-New View Version
Part-BOM-Basic Search and Read
Part-BOM-Add Linked Object
Part-BOM-Click on Classification Search
Part-BOM-Keyword Search and Read
CAD-Save As
CAD Customization-CAD-BOM Compare Report (MFO Only)
Part-BOM-Create Part
Part-BOM-Classify Part
ECN and ECT-Create ECN and ECT
CAD-Basic Search and Read
Document-Check In Structure (BOM)
Part-BOM-HTML Expand All for Structure
Part-BOM-HTML Modify Structure (BOM)
Document-File Download
Non-CAD RICEF-MPDC Termination Report (non cached)
CAD-Delete
CAD-Create New Revision
CAD-Add to Work Space
CAD-Check Out
Non-CAD RICEF-RSPL Editor (Open, Expand All, Edit Value, Submit Change)
Non-CAD RICEF-MPDC Termination Report (cached)
ECR-Create ECR
Supplier-Search on a Supplier whose name is

Sub-Process Business Tests
PSE ME-Custom Feature Investigation
PSE ME-Create Save As Traveler
PSE ME-Custom Feature Edit
PSE DE-Generate New Core Number
PSE DE-Add Additional Assignee
PSE DE-Reassign task to ME
PSE DE-Duplicate Assembly and export BOM
PSE DE-CAD Model to BOM
PSE ME-Create PCS BOM
PSE DE-Create ECN & ECT
PSE DE-Create Windchill Part Objects

The lists above show our results, ordered with the slowest 10.0 tests
relative to 9.1 on top and the fastest 10.0 tests relative to 9.1 on the
bottom. Note that the 2 slowest processes were sub-processes tested by
business users. Those two "bad" results had such high standard deviations
that we could not really tell whether 10.0 was actually slower or the
users were just unfamiliar with the new picks and clicks. I suspect that
most of the slowness in just those 2 cases was due to unfamiliarity with
the new system.

"RICEF" means Custom Report, Interface (aka Integration), Conversion, E
xtension, or Form. We heavily tested customized elements of the
application where performance was thought to be a potential problem.

Our 2-sample t-test for 9.1 vs. 10.0 indicated that while 10.0 is
"faster" than 9.1 on average in our test, the difference is not
statistically significant.



Since the P-Value is greater than 0.05, we fail to reject the null
hypothesis that the means are the same - essentially saying that 10.0 is
not significantly better or worse than 9.1 for our implementation.

Interestingly, when we first did this analysis, our 10.0 average was about
5 seconds slower than 9.1 with a P-Value of 0.000, meaning we had to
reject the null hypothesis that the means were the same - essentially
saying that 10.0 was significantly slower than 9.1. That remained true
until we fixed the problems we could identify and reran our tests.

We also built a histogram of the 2,642 tests that we performed as part of
our performance assessment for the 10.0 upgrade, to "prove" to our
business stakeholders that 10.0 was as good as or better than 9.1.
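
As a sketch of how such a histogram can be drawn (randomly generated data for illustration only, not our 2,642 results):

import numpy as np
import matplotlib.pyplot as plt

# Fake lognormal response-time samples (seconds) standing in for the
# two populations of test results.
rng = np.random.default_rng(0)
times_91 = rng.lognormal(mean=2.5, sigma=0.4, size=1300)
times_100 = rng.lognormal(mean=2.45, sigma=0.4, size=1342)

# Overlaid histograms make the shift between releases easy to see.
bins = np.linspace(0, 40, 41)
plt.hist(times_91, bins=bins, alpha=0.5, label="Windchill 9.1")
plt.hist(times_100, bins=bins, alpha=0.5, label="Windchill 10.0")
plt.xlabel("Response time (s)")
plt.ylabel("Number of test runs")
plt.title("Distribution of performance test results")
plt.legend()
plt.show()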


Early in the performance tuning process, we ended up making an Ishikawa
(fishbone) diagram of all the various sources of "performance" problems,
both to help us make overall performance better and to communicate that
"performance" isn't one thing so much as a lot of things that can overlap
and interact. The bottom section is all Solar-specific stuff (mostly
customizations), but the top section is probably useful to any company.
The colors of the lines (green/yellow/red) indicate the status of the
various performance problems, or potential sources of problems, as of
3 months prior to our go-live. By go-live, every arrow was green. This
helped us communicate and categorize our performance work efforts. The
alphabet soup of acronyms under "Solar Integrations" is the various other
IT systems that we have connected to Windchill.


We also did CAD-only testing using LinkTuner for Pro/E, but since this
thread is about Windchill, I won't include those results here. Basically,
they were load tests of CAD operations on various-sized models, and they
showed that under load, 10.0 was actually a little faster for Pro/E than
9.1 was for our data set.

Finally, we did all of our tests on Windows XP 32-bit PCs using IE8 with
Google Chrome Frame, and we also ran a small number of regression tests on
Windows 7 using IE8 with Google Chrome Frame. Prior to all the testing
and analysis above, we did a smaller round of testing that showed IE8
without Google Chrome Frame (GCF) was too slow to use, forcing us to go
with GCF, since we could not move enough of our user base to IE9 while
they were still on XP. We also found that client RAM plays a role in
performance when viewing large BOMs in Windchill 10.0 in a way it did not
in 9.1: users with less than 2 GB of RAM found 10.0 too slow to use for
large BOMs, so we identified those heavy users with insufficient RAM and
upgraded their PCs prior to our go-live.

Al Anderson
PLM Functional Architect
Solar Turbines Incorporated






dwilliams
6-Contributor
(To:AL_ANDERSON)

Al,
I couldn't have asked for a better response. Thank you for sharing this information with us.

Dax Williams
Engineering E-Tools Administrator
GE Healthcare OEC
Surgery

T +1 801 536 4917
F +1 801 536 4696

384 Wright Brothers Drive
Salt Lake City, Utah 84116
