

Rehost - Minimize Size of Vaults

BenPerry
15-Moonstone

I am looking for a way to minimize the footprint of our rehosted test and development sites.  Currently the 50 GB DB is exported from Production and imported into the test/dev sites, and then on our Unix server we use rsync to copy the application files and vault files (1500 GB) from the production disk to the test/dev disks.  After the rehost is done, we run "remove unreferenced files" to tidy up, so any "extra" files carried over from the production disk to the test/dev disk are deleted.
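
For context, the vault copy step is just a bulk rsync along these lines (the paths here are illustrative, not our actual mount points):

  # full copy of the production vaults onto the test/dev disk
  rsync -a /disk/prod/vaults/ /disk/testdev/vaults/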

But maybe we only want to work with a 100 GB vault on the test/dev sites.  How can we rehost only the most recent year of data, for example?

Additionally, I thought of limiting the rsync to files last modified within the previous year, but I'm not sure that is a good idea.  Are there necessary files in the vaults that are older than one year, such as workflow templates?  Those might be stored in the DB, but that is just one example of the data I'm worried about missing if I restrict the scope of rsync to the previous year.

11 REPLIES
BineshKumar1
13-Aquamarine
(To:BenPerry)

There are several approaches you could use, depending on the technology behind your vaults:

  • Copying only the delta data added since the last refresh is what we commonly follow. You can write a SQL query to find out which files were modified after your last refresh and feed that list to rsync via the --files-from=FILE flag (see the sketch after this list).
  • If you are using VMs, you can take a VNX snapshot of production and attach it to your test environments - https://community.emc.com/docs/DOC-24251
  • Going by the logic that Windchill writes only to folders whose read-only flag is disabled, you could share your read-only folders between environments. You can mark/verify whether the folders are read-only through SQL. This is indeed risky, and you should put all sorts of checks in place to prevent data corruption.
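
For the first approach, the copy step would look something like this (paths are illustrative; the file list itself comes from the SQL query):

  # delta_files.txt holds vault file paths, relative to the vault root,
  # that were added or modified since the last refresh (output of the SQL)
  rsync -av --files-from=/tmp/delta_files.txt /vaults/prod/ /vaults/testdev/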

Thank you

Binesh Kumar

Binesh,

I have a few follow-up questions, matching your bullet points.

  • Delta sync.  We rehost every month, but we want perhaps one year of data.  This still doesn't get at the root issue: how do I get rid of all the data older than one year?  Just not copying it over doesn't work, because it is still referenced in the DB, so the objects exist but their primary content is missing from the vault, which produces errors.
  • Snapshot.  I'm not sure how this would help.
  • Share read-only vault folders.  I agree it seems risky, and I would rather not do this.
BineshKumar1
13-Aquamarine
(To:BenPerry)

Hello Ben,

Sorry for the delayed response. The best way to get the list of files is to run a query on the application data, joining mastertables/fvitems/fvfolders, to get the list of files that are less than one year old and necessary.  Since you are doing this on a monthly basis, I think you can refine the query based on the errors you get.
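
Something along these lines, though the join keys and column names below are from memory and must be verified against your Windchill schema before use:

  -- sketch only: vault files added in the last 12 months
  SELECT fm.path || '/' || fo.name || '/' || fi.filename
  FROM   fvitem fi
         JOIN fvfolder fo ON fi.ida3folderref = fo.ida2a2
         JOIN fvmount  fm ON fm.ida3folderref = fo.ida2a2
  WHERE  fi.createstampa2 >= ADD_MONTHS(SYSDATE, -12);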

Regarding the snapshot method: we use vFilers for vault storage, which provide snapshots/SnapVaults to share a copy of the vault and to enable DR. I will message you a document on this.

Thank you

Binesh Kumar

Medtronic - MITG

RandyJones
20-Turquoise
(To:BenPerry)

You don't specify which "Unix" you are running. If it's Solaris, you can take advantage of ZFS snapshots/clones and zones. We run Solaris with the following setup:

  • 1 server with zones
    • production zone
    • test/dev 1 zone
    • test/dev 2 zone
    • test/dev 3 zone
  • ZFS filesystems for
    • Windchill install
    • Windchill vaults
      • master vault
        • only on the master server
      • cache vault
      • replica vault
        • on the replica servers
    • Windchill Oracle database
    • Oracle install

We snapshot and then clone the production Windchill filesystems and use those for the test/dev zones. This takes a minimal amount of disk space: space is only consumed where the cloned filesystem diverges from the original. In our typical usage the Oracle tables are by far the largest "divergers".

If you didn't want to run test/dev on the same server as production, you could still snapshot/clone the production filesystems and then NFS-mount them on the test/dev server(s).
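
The core of a refresh cycle is only a couple of commands (the pool/filesystem names here are illustrative):

  # snapshot the production vaults filesystem
  zfs snapshot tank/wcvaults@rehost-2016-01
  # writable clone for a test/dev zone; uses almost no space until it diverges
  zfs clone tank/wcvaults@rehost-2016-01 tank/wcvaults-dev1
  # optionally share the clone over NFS for a separate test/dev server
  zfs set sharenfs=on tank/wcvaults-dev1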

BenPerry
15-Moonstone
(To:RandyJones)

We're using Red Hat Linux.  Since I'm not really a server admin, I'm confused about how your divergent vault stays so small.  Do your test/dev vaults point at the Production vault?

Right now we keep a separate 1.5 TB disk for the test/dev site vault, which is a copy of the Production vault, so it is completely separated.  If we were able to read from the Production vault read-only but put new data into the test/dev vault, that would be great.  I'm not sure how to do that, though.

RandyJones
20-Turquoise
(To:BenPerry)

Ben Perry wrote:

We're using Red Hat Linux.  Since I'm not really a server admin, I'm confused about how your divergent vault stays so small.  Do your test/dev vaults point at the Production vault?

My test/dev points to cloned snapshots of the Production filesystems: vaults, Windchill install, Oracle tables, Oracle install.

http://docs.oracle.com/cd/E19253-01/819-5461/6n7ht6r4n/index.html

ZFS - Wikipedia: https://en.wikipedia.org/wiki/ZFS

Right now we keep a separate 1.5 TB disk for the test/dev site vault, which is a copy of the Production vault, so it is completely separated.  If we were able to read from the Production vault read-only but put new data into the test/dev vault, that would be great.  I'm not sure how to do that, though.

A cloned ZFS filesystem is a writable filesystem, but when you write to it you are not writing to the original (Production) filesystem.

TomU
23-Emerald IV
(To:BenPerry)

Here is an interesting article on ZFS.  Fun stuff. 

http://www.smbitjournal.com/2014/05/the-cult-of-zfs/

TomU
23-Emerald IV
(To:BenPerry)

Ben Perry,

I'm wondering if the "Selective Content Settings" in the Windchill Rehost Tool (3.0) would do what you're looking for.  From the rehost.properties file:

########################### Selective Content Settings #############################
#  These properties identify a set of files in vault folders in a
#    staging location that will be copied to the target system
#  target.content.assembly.configspec could be:
#    As Stored (default value), Latest,
#    Latest|STATE (such as, Latest|In Work),
#    Baseline|Number(such as, Baseline|0001),
#    or Promotion|Number(for e.g. Promotion|0001)
#
#  Only required dependencies and family tables are included by default
#    to include all, set the corresponding property to true.
####################################################################################
# target.content.assembly.filename.0=t2_assly.asm
# target.content.assembly.version.0=B.1
# target.content.assembly.configspec.0=Latest
# target.content.assembly.includeAllDependencies.0=false
# target.content.assembly.includeAllFamilyTables.0=false
# target.content.assembly.viewables.0=true
# target.content.assembly.drawings.0=true
# target.content.assembly.includesource.0= false
# target.content.assembly.includeimage.0=false
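
Based on those comments, a filled-in entry would presumably look something like this (the assembly name and version here are made up for illustration):

  target.content.assembly.filename.0=top_assembly.asm
  target.content.assembly.version.0=A.2
  target.content.assembly.configspec.0=Latest
  target.content.assembly.viewables.0=true
  target.content.assembly.drawings.0=true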

BenPerry
15-Moonstone
(To:TomU)

Tom,

Selective Content is kind of what I was thinking of in the first place.  But this ZFS thing is pretty interesting: it seems we could have access to the entire 1.5 TB of data without actually making a separate copy of all 1.5 TB.

I'm going to talk with our Unix admins and see if this is something we could get set up with Red Hat.

Anyone else using ZFS, or something similar to what Randy has mentioned?

One approach to this issue is not to "migrate" the existing vaults at all.

In the new environment, the existing vault would link to the vault path on the production server.

You would mark this vault as read-only.

You would create a new vault mounted in the new environment to hold newly created data.

If the new environment has only read access (Windows/OS permissions) to the production folder paths, it is impossible to delete or break anything.
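
At the OS level this is just a read-only mount plus a new writable directory (the host and paths here are illustrative):

  # on the test/dev server: production vault path, mounted read-only
  mount -o ro prodhost:/vaults/prod /vaults/prod-ro
  # new vault root for data created on test/dev
  mkdir -p /vaults/testdev-new

In Windchill you would then point the existing vault's mount at /vaults/prod-ro, mark that vault read-only in the vault configuration, and create the new vault on /vaults/testdev-new.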

Joao,

That is interesting.  What are some of the key steps you take to execute that scenario?  I understand it at a high level, but are there particular settings or steps you need to take?
