Publishing w/Remote Site & WAN Utilization (long)

TomU
23-Emerald IV

We currently have one master site and one remote site with limited bandwidth between them. A remote file server has been configured to reduce the load on the WAN connection during business hours and improve CAD application performance for the remote users. Unfortunately, publishing seems to be defeating this functionality. Publishing is currently configured to happen on check-in. I'm curious if there is a better way to configure things.

Desired Functionality:

1.) Remote user uploads data --> data is stored on remote file server only

2.) Later, when replication is scheduled to take place (off hours)

a.) Data on the remote file server should be copied (replicated) back to the master server.

b.) Data on the remote file server should be moved to the replica vault (instead of the cache vault).

3.) Once data arrives at the master server, it should then be published.

- Of course, for users at the master site, publishing should still take place immediately upon check-in.

Actual Functionality:

1.) Remote user uploads data --> data is stored on remote file server only

2.) WVS publishes the data

a.) The file is copied from the remote file server to the publish server - ACROSS the WAN.

b.) The publisher publishes the files and then stores them on the master server (local).

c.) The files used for publishing are then deleted from the publish server.

3.) Later, when replication is scheduled to take place (off hours)

a.) Data on the remote file server is copied (replicated) back to the master server - ACROSS the WAN AGAIN.

b.) Data on the remote file server is moved to the replica vault (instead of the cache vault).

I'm not sure of the impact, but WVS authentication is currently set to use the user's credentials during the publish job (auth=$user). I'm concerned that this may cause WVS to pull from whatever file server the user has configured for their default vault, regardless of where the publisher is physically located.
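
For context, the setting in question looks something like this (a minimal sketch; the thread only quotes the value auth=$user, so the file location, typically wvs.properties, and the fixed-account alternative in the comments are assumptions):

    # wvs.properties (exact file location varies by release - assumption)
    # $user = run publish jobs with the checking-in user's credentials.
    # A fixed administrative account (auth=<user>:<password>) could be
    # substituted so jobs don't depend on the individual user's context.
    auth=$user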

Is there a way to have all data checked in at the master site published immediately on check-in, while delaying the publishing of data checked in at the remote site until after it has replicated?
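
(For what it's worth, the check-in trigger itself appears to be a site-wide WVS switch rather than a per-site one, which is why this is awkward. A minimal sketch, assuming the commonly cited wvs.properties key publish.publishoncheckin; verify the exact key for your release:)

    # Assumed key - verify against your release's wvs.properties.
    # true  = publish representations automatically at check-in (current behavior)
    # false = no check-in publishing anywhere; rely on scheduled/batch publishing
    publish.publishoncheckin=true

Because the switch is global, simply turning it off would also stop immediate publishing for master-site users.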

Darrell
12-Amethyst
(To:TomU)

Have you made any progress on this? Could you share your experience since posting?

We are adding a remote site that will generate a lot of publishing. I know you can configure Distributed File Server Workers for Pro/E-Creo to publish at remote sites and avoid the WAN transfer, but I haven't tried it: http://support.ptc.com/cs/help/windchill_hc/wc100_hc/index.jspx?id=ConfigDistributeFileServerWorkIntro&action=show

I'm also wondering if you have any master vaults at remote sites. We currently only have one master vault at the master site, but I think you can create a master vault on any site. It would probably make sense to do that if most of the data generated at a particular site will only be used there. Any experience with that?

TomU
23-Emerald IV
(To:Darrell)

PTC's recommendation (after much discussion) was to place an additional publisher at the remote site. There are some settings you can add that will allow data checked in at a remote site to be published by a publisher also connected to that remote site. When the scheduled synchronization happens, both the primary CAD objects and the published representations will be copied back to the master.
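
(The specific settings aren't named in the thread. Purely as a hypothetical illustration of the mechanism - the property name below is a placeholder, not a documented PTC key - WVS settings of this kind are added with xconfmanager:)

    # Placeholder property name - get the real key(s) from PTC support.
    # Standard xconfmanager usage: -s set a value, -t target file, -p propagate.
    # (wvs.properties path varies by release, e.g. codebase/WEB-INF/conf/wvs.properties in 10.x)
    xconfmanager -s publish.remoteSitePublishing=true -t codebase/WEB-INF/conf/wvs.properties -p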

I am running with a single master. If there was a master at the remote site (for certain content), then the content would stay there unless there were replication rules to push it somewhere else.

PTC has published a 39-page technical brief called PTC Windchill Vaulting and Replication Planning. It does a pretty good job of explaining how most things work, but some of the recommendations don't hold true in a single-master scenario. (For example, remote sites without a master should not have both replica vaults and cache vaults - they should have only cache vaults, and the replication rules should point to this vault.)

GaryMansell
6-Contributor
(To:TomU)

Tom,

Where did you get the information that Windchill installations with only a single Master should have just Cache Vaults at the Replica sites, rather than both a Replica and a Cache Vault?

This is contrary to my own understanding, to the documentation you refer to, and also to a PTC Tech Support guy (who really knows his stuff) who helped me architect our new 10.1 System.

Our previous 9.1 System only had Cache Vaults, whereas we were advised by PTC to split this into a Cache and a Replica Vault for our 10.1 System. I have to say, it has been running like this for 9 months now and seems to work fine.

Please can you explain why a single Cache Vault is best for a Replica Site without a Master rather than both a Cache and a Replica Vault?

Thanks and Regards

Gary

TomU
23-Emerald IV
(To:GaryMansell)

Gary,

I agree, running with a single vault seems to contradict the technical brief, but that is what PTC tech support told me to do.

There are multiple issues at play here, so this will be a really long post.

Vault Configuration

When I was testing this I had two vaults set up: a cache vault and a replica vault. The cache vault was set as the default target for the site (the remote site) and was connected to a single folder. The replica vault was set up with multiple folders and configured to allow automatic folder creation based on file count.

Synchronization Behavior

I was taught during PTC training that data created at a remote site (in the cache vault) would automatically be converted to replica data (moved to the replica vault) after synchronization with the master. PTC technical support initially confirmed that this understanding was correct. The technical brief also seemed to agree with this. From page 20:

If the original, cached content is moved to another site as a result of the synchronization process, the cached content is then converted to further replicated content.

Unfortunately, in my tests this movement from the cache vault to the replica vault never took place. After extensive testing, PTC determined that the system is working correctly, but both their understanding and mine were incorrect. The original content in the cache vault is converted to replica content after synchronization, but it remains in the cache vault. It does not get moved to the replica vault. This of course is a problem because my cache vault wasn't configured for automatic folder creation. Anything created at the remote site would have its replica content stuck in the cache vault (forever), and since the vault had only a single folder associated with it, there was nothing to prevent unrestrained growth in that one folder.

Ad-Hoc Replication

In the process of testing I also noticed that any CAD models, 3D thumbnails, or visualizations I accessed directly from Creo or from any web browser would automatically get downloaded to the cache vault before being sent to my computer. This is good in the sense that the next time someone else at this same remote site wants to access these files, they are already there. It is bad, though, because this content also goes into the cache vault (which isn't set up for automatic folder creation). Changing the "default target for site" check box from the cache vault to the replica vault changed this behavior. Now all the ad-hoc, on-the-fly content goes to the replica vault instead of the cache vault.

Conclusion

Based on how data is converted to replica content after synchronization, I really only see two options: either use one vault for everything (as I'm currently doing) or make sure that the cache vault and the replica vault are both configured for automatic folder creation. I don't see a good reason to keep the locally created (but now replica) data separate from the replicated data coming from the master, and neither did the PTC technical support person. If you think about it, this is exactly how the master site is set up as well. We have "wt.fv.forceContentToVault" turned on with a single master vault (with automatic folder creation). The cache vault at the master site is never used; all new content goes directly to the master vault. Why shouldn't the remote site simply work the same way?
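
For anyone wanting to mirror that master-site setup, the property itself can be set with xconfmanager (a minimal sketch; wt.fv.forceContentToVault is the property named above, while the vault and folder configuration is still done separately in the vault configuration utility):

    # Run from <Windchill>/bin; writes the property and propagates it (-p).
    xconfmanager -s wt.fv.forceContentToVault=true -t codebase/wt.properties -p
    # Restart the method servers for the change to take effect.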

If you have anything I can take back to tech support to show why this is a bad idea, please let me know. Thanks!

Tom

GaryMansell
6-Contributor
(To:TomU)

Hi Tom,

Thanks for such a detailed explanation. I have to say that I see identical behaviour on my 10.1 M040 system and agree with your hypothesis, and with your resulting decision to have only one cache vault at the Remote Site rather than both a cache and a replica vault. I think this is the "neatest" solution based on the actual behaviour of the System (rather than what the documentation says).

As you say, this is the nub of it:

The original content in the cache vault is converted to replica content after synchronization, but it remains in the cache vault. It does not get moved to the replica vault.

I have checked the Cache Vaults on all my System's Replica sites and they all have loads of vault data in loads of folders (I used auto folder creation, so there is no risk of me running out of vault space, though). You would not expect to see many files in the Cache Vault if the behaviour were as per the PTC documentation and files were actually being moved to the Replica Vault on the Remote Site. As you say, this proves that the Cached data is just being marked as Replica data while remaining located in the Cache Vault, rather than being moved to the Replica Vault.

Hence, I agree that there seems very little point in having a Cache Vault containing both Cached and Replica data, plus another Replica Vault containing some additional Replica data. It's much neater to have just one Cache Vault holding all the Cached and Replica data, as you say.

I was sold the idea of using separate vaults by the PTC Tech Support guy on the basis that it would separate the Cached and Replica data at my Remote Site, meaning I would only need to back up the Cache Vault and not the Replica Vault on the Remote Site - which seemed like a good idea to me. In fairness to him, this would have been a neat solution if the System had worked as per the documentation.

Good stuff - I am glad this has been discussed.


Rgds

Gary
