
Tips for Large INTRALINK Database Migrations

StuartHarvey
6-Contributor

I am working on a migration of a large INTRALINK database (~2M PIV). I have encountered issues with the migration that are specifically related to the volume of data. Below are some of the issues and remediations.

<> Poor overnight performance -- turn off all backups for the duration of the migration.
<> Archive logs flood the disk -- turn off archive logging.
<> Rollback/undo logs flood the disk -- cap undo retention on the undo tablespace at a 30-minute maximum (see the sketch below).
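
On an Oracle back end, the archive-log and undo changes look roughly like this (a minimal sketch; it reads "Undo TS to 30 minutes max" as the UNDO_RETENTION parameter, and switching the log mode requires a restart through MOUNT -- remember to re-enable archiving and backups once the migration completes):

## Hedged sketch: disable archive logging and cap undo retention (Oracle)
sqlplus / as sysdba <<EOF
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE NOARCHIVELOG;
ALTER DATABASE OPEN;
-- UNDO_RETENTION is in seconds; 1800 = 30 minutes
ALTER SYSTEM SET UNDO_RETENTION = 1800;
EXIT;
EOF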

I've also noticed that the UDA loader causes the database to double in size, from ~30 GB to 60 GB.
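
You can watch that growth from the data dictionary (a minimal sketch using the standard DBA_DATA_FILES view; this reports allocated datafile space per tablespace, not used space):

## Hedged sketch: report allocated space per tablespace, in MB
sqlplus / as sysdba <<EOF
SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024) AS mb
FROM dba_data_files
GROUP BY tablespace_name
ORDER BY mb DESC;
EXIT;
EOF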

Are there any other tips for working with large databases? Does reordering the loaders from the recommended sequence improve overall performance? What commands are you executing to gather statistics? Etc.

Stuart Harvey
1 REPLY

In addition to the other advice above:

Hopefully you have a new test server with an import of the metadata only. Some companies like to keep everything in BLOBs, including content, which makes it really hard to constantly copy the entire tablespaces or perform exports. My advice is to create external vaults, point Pro/INTRALINK at them, and then perform an export.

Next, while upgrading: if the upgrade takes more than 5 hours, trim your logs, because performance has a lot to do with the I/O of writing to the log files (always scrolling, then appending). I've attached a log-trimming script for Unix which you can modify to point at your upgrade log. I suggest you run it at 2-hour intervals (a sample cron entry follows) so you can at least read the log if it grows to more than 10 MB. This trimming also applies to a current live system.
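
To schedule it at 2-hour intervals, a cron entry along these lines would work (hypothetical: the script path and output file are placeholders for wherever you install the attached script):

## Hypothetical crontab entry: trim the logs every 2 hours
0 */2 * * * /usr/local/bin/trim_logs.sh >> /tmp/trim_logs.out 2>&1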

I've spoken to a Windows expert here, and there is a method to echo to null the log files on a running system. If you import your data into another database, it usually makes the data more contiguous; then run statistics for better performance (see the sketch below).
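
For the statistics step, something along these lines is typical on Oracle (a minimal sketch; the schema owner PDMLINK is an assumption -- substitute your actual Intralink schema owner):

## Hedged sketch: gather optimizer statistics for the application schema
sqlplus / as sysdba <<EOF
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'PDMLINK', cascade => TRUE);
EXIT;
EOF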

Make sure your OS is properly tuned with kernel parameters, patches and so on. Best to apply this to your test upgrade server before production.

If you are using the PTC migration tools, add more RAM to the upgrade process with the memory settings in the command -- hopefully more than the 64 MB default. I usually set it to 1 to 2 GB.
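
The exact command depends on the PTC tool you launch, but the idea is to raise the JVM heap flags (a hypothetical illustration; <migration-tool-and-args> is a placeholder for the actual command line, not the real syntax):

## Hypothetical: raise the JVM heap for the migration process
## -Xms sets the initial heap, -Xmx the maximum
java -Xms1024m -Xmx2048m <migration-tool-and-args>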

Here is a sample of the log-nulling and trimming script below.

## Nulling Apache Logs
/opt/java1.5/bin/jar -cvf $WT_LOGS/$(date +%y%m%d)Apache.$(date +%y%m%d%H%M).jar $APACHE_HOME/logs/.
cat /dev/null > /opt/hpws/apache/logs/error_log
cat /dev/null > /opt/hpws/apache/logs/mod_jk.log
cat /dev/null > /opt/hpws/apache/logs/access.log

## Nulling Tomcat Logs
/opt/java1.5/bin/jar -cvf $WT_LOGS/$(date +%y%m%d)Tomcat.$(date +%y%m%d%H%M).jar $TOMCAT_HOME/logs/.
cat /dev/null > $TOMCAT_HOME/logs/catalina.out
cat /dev/null > $TOMCAT_HOME/logs/windchill.log

## Nulling Windchill Logs
/opt/java1.5/bin/jar -cvf $WT_LOGS/$(date +%y%m%d)Windchill.$(date +%y%m%d%H%M).jar $WT_HOME/logs/.
cat /dev/null > $WT_HOME/logs/DCA.log
cat /dev/null > $WT_HOME/logs/HTTPGateway.log

## Nulling Cognos Logs
/opt/java1.5/bin/jar -cvf $WT_LOGS/$(date +%y%m%d)Cognos.$(date +%y%m%d%H%M).jar $COGNOS_HOME/logs/.
cat /dev/null > $COGNOS_HOME/logs/tomcat.log
cat /dev/null > $COGNOS_HOME/logs/cbs_cnfgtest_run.log
cat /dev/null > $COGNOS_HOME/logs/cbs_isrunning.log
cat /dev/null > $COGNOS_HOME/logs/cbs_run.log
cat /dev/null > $COGNOS_HOME/logs/cbs_start.log
cat /dev/null > $COGNOS_HOME/logs/cbs_stop.log
cat /dev/null > $COGNOS_HOME/logs/cbsdefault.log
cat /dev/null > $COGNOS_HOME/logs/ccl.log
cat /dev/null > $COGNOS_HOME/logs/cogconfigipf.log
cat /dev/null > $COGNOS_HOME/logs/cogserver.log

 

## This script deletes files under $WT_HOME/logs, $TOMCAT_HOME/logs,
## $COGNOS_HOME/logs and $APACHE_HOME/logs that have not been modified
## within 7 days, and files under $WT_LOGS that have not been modified
## within 28 days, respectively.
##

date

find $WT_HOME/logs $TOMCAT_HOME/logs $COGNOS_HOME/logs $APACHE_HOME/logs -mtime +7 -exec rm -f {} \;

find $WT_LOGS -mtime +28 -exec rm -f {} \;
