
Upgrade to Oracle 11g or not?

akelly
11-Garnet


We are using Pro/Intralink 9.1 M030 and will be updating to M062 for compatibility with Windows 7 on the Pro/Engineer (4.0, not Creo) workstations.

The software compatibility matrices say Oracle 11g is bundled with Pro/Intralink 9.1 M040 and later. They also say both Oracle 10g and 11g are compatible with Pro/Intralink 9.1, so it looks like we have a choice of whether or not to upgrade Oracle at the time of the Windchill maintenance update.

Are there any advantages (Pro/Intralink) of upgrading to Oracle 11g?

Are there any drawbacks (Pro/Intralink) of remaining at Oracle 10g?

Andrew Kelly, P.E. | Senior Engineer | Crane Aerospace & Electronics | +1 440 326 5555 | F: +1 440 284 1090

1 REPLY

We are currently running this on an HP 460c G6 blade with 2x Intel quad-core Xeon 5570 (2.9 GHz) CPUs and 48 GB of DDR3 RAM. A 10 Gb backbone switch connects it to our other Windchill HP blade and to 6 Pro/E workers on a third HP blade, with a 1 Gb downlink module to the core. Both the Windchill and Oracle blades run Red Hat 5.4. We have about 50 to 70 Pro/E users plus Windchill document users, and we scaled this machine to handle 700 Windchill users including those 50 to 70 Pro/E users. Right now Oracle 11g R2 is only at about 2% utilization with a database memory size (SGA and PGA) of 40% of RAM. We found that if you exceed 40% you will have database issues. For Red Hat Linux, I suggest going to the Oracle p10098816_112020_Linux-x86-64 patch set, which is also a complete install.
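The 40% rule of thumb above can be sketched as a quick calculation; this is just arithmetic on the numbers in the post (48 GB of RAM, SGA+PGA capped at 40%), not an official sizing formula:

```shell
#!/bin/sh
# Sketch: cap combined SGA+PGA at 40% of physical RAM, per the advice above.
TOTAL_RAM_MB=$((48 * 1024))                  # 48 GB blade = 49152 MB
DB_MEM_CAP_MB=$((TOTAL_RAM_MB * 40 / 100))   # 40% ceiling before issues appear
echo "SGA+PGA cap: ${DB_MEM_CAP_MB} MB"      # about 19.2 GB on this box
```

On this 48 GB blade that works out to roughly 19.2 GB for the database memory target.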

You are going to find that most things run faster on Red Hat 5.4 x86_64. Too bad Pro/E is not a multi-process app; it only ever uses one CPU core, which is why I could put 6 to 7 Pro/E workers on the third HP 460c G6 blade (Windows 2008 R2 64-bit) to keep up with the demand of publishing on the fly at each check-in, change of state, and change of identity.

The only thing I suggest when creating a new 11g database is to use a 32 KB block size. You will have to add the following to your site.xconf and run JavaGen to regenerate your SQL to accommodate the new block size.

<property name="wt.generation.sql.tinyBlobSize" overridable="true"
          targetFile="codebase/user.properties"
          value="64k"/>

<property name="wt.generation.sql.smallBlobSize" overridable="true"
          targetFile="codebase/user.properties"
          value="64k"/>
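Rather than hand-editing site.xconf, the same properties can be set with Windchill's xconfmanager utility, which writes the site.xconf entries and propagates them for you. A sketch, run from a windchill shell (property names and target file as above):

```shell
# Sketch: set both blob-size properties via xconfmanager; -t names the
# target file the property is declared for, -p propagates the change.
xconfmanager -s wt.generation.sql.tinyBlobSize=64k -t codebase/user.properties
xconfmanager -s wt.generation.sql.smallBlobSize=64k -t codebase/user.properties -p
```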

Then, in a windchill shell, run the following:

$WT_HOME/bin/JavaGen.sh (or %WT_HOME%\bin\JavaGen.bat on Windows) registry false false true false false

This will allow you to have 128 GB per datafile in your smallfile tablespaces, in case you require larger datafiles.
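The 128 GB figure follows from Oracle's smallfile datafile limit of 2^22 - 1 blocks per datafile; a quick sanity check of that arithmetic at a 32 KB block size:

```shell
#!/bin/sh
# Sanity check: smallfile datafile limit is 2^22 - 1 = 4194303 blocks,
# so with 32 KB blocks the maximum datafile size is just under 128 GB.
BLOCK_SIZE=32768          # 32 KB block size, as recommended above
MAX_BLOCKS=4194303        # 2^22 - 1 blocks per smallfile datafile
echo "$((BLOCK_SIZE * MAX_BLOCKS / 1024 / 1024 / 1024)) GB"   # 127 GB (just under 128)
```

With the default 8 KB block size the same limit caps datafiles at about 32 GB, which is why the larger block size matters here.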

If you want to move to Oracle Standard Edition, there is currently no TPI supported for Oracle 11g. There is the Oracle 9i PTC TPI 147256, which uses the legacy exp and imp commands. I found that if you follow TPI 147256 as written instead of using the expdp and impdp methods, you will get a ton of Oracle errors: feature not enabled: deferred segment creation (ORA-00439), ORA-39083 (failed to create), ORA-31684, and ORA-39152. For 11g, the method I use is as follows; it doesn't produce any major errors.

  1. On the mirrored/production Oracle Enterprise 11g database server:
    • In SQL*Plus as sysdba:
      • ALTER SYSTEM SET DEFERRED_SEGMENT_CREATION=FALSE; (this way I should not even get an error)
      • CREATE DIRECTORY dump_dir AS '<path>';
      • GRANT READ, WRITE ON DIRECTORY dump_dir TO system;
    • From the command line:
      • expdp system/PASSWD@SID directory=dump_dir dumpfile=dumpfile_name.dmp logfile=logfile_name.log schemas=schema_name(s)

  2. On the future Oracle Standard database server:
    • Create a database with a 32 KB block size and tablespaces with generous extra free space compared to the source database.
    • In SQL*Plus as sysdba:
      • ALTER SYSTEM SET DEFERRED_SEGMENT_CREATION=FALSE; (this way I should not even get an error)
      • CREATE DIRECTORY dump_dir AS '<path>';
      • GRANT READ, WRITE ON DIRECTORY dump_dir TO system;
      • You don't have to create the Windchill or Cognos user schemas because impdp will import them.
    • impdp system/PASSWD@SID directory=dump_dir dumpfile=dumpfile_name.dmp logfile=logfile_name.log schemas=schema_name(s)
    • Recompile the 2 Windchill schema packages that always have issues compiling during an import:
      • BASELINEPK
      • EPMWORKSPACEPK

And you should be done.
