Does anyone have suggestions on how to manage a single vault share for multiple Windchill systems on Windows?
We have PRODUCTION and two TEST systems in place. As more testing demands came up, we copied the vaults to the TEST systems, doubling and then tripling the disk space required for the vaults. Ultimately we'd like to have a single vault share for all three systems, giving PRODUCTION read/write access and the TEST systems read-only access, along with their own unique "active" vault on a different share for testing purposes. These unique shares would get wiped and re-attached for a refresh.
I've looked at some older posts here but didn't find anything conclusive.
I did a bunch of research on this several years ago, and while it was very simple to do on Unix/Linux operating systems, it was much more difficult on Windows. Storage space is cheap enough, and our vault is small enough (750 GB or so), that I just decided to copy the vault from one system to the other. I can fire off a robocopy script and only the new/changed files actually get copied. It doesn't usually take too long.
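For reference, a minimal sketch of that kind of incremental copy; the hosts, paths, thread count, and log location below are placeholders, not from the original post:

```
:: Mirror the production vault to the test vault location. /MIR copies only
:: new/changed files and removes files no longer present on the source.
:: Hosts and paths are hypothetical -- substitute your own.
robocopy \\prodhost\Vaults\prod_root \\testhost\Vaults\prod_root ^
    /MIR /COPY:DAT /R:2 /W:5 /MT:16 /LOG+:C:\logs\vault_sync.log
```

The /MT multithreading flag helps a lot over the WAN, and /LOG+ keeps an appended history of each sync run.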
If your vault is very large and/or you don't have sufficient space, I'd take a look at sharing the drive read-only at the VM level (assuming you're using VMware or something similar).
Officially, Windchill doesn't support mounting the same folders to multiple Windchill environments.
But you can try it at your own risk; make sure that at the OS level the test system has only read access to the shared production vault folder. On the test system, the vault status will show as Invalid.
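As a sketch of that OS-level restriction, one way is to publish the vault folder as a share that grants the test system's Windchill service account read-only access; the share name, path, domain, and account below are hypothetical:

```
:: Share the production vault folder read-only to the test system's
:: Windchill service account (all names are placeholders).
net share ProdVault=D:\ptc\vaults\prod_root /GRANT:MYDOMAIN\wcTestSvc,READ
```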
I would suggest you post this request in https://community.ptc.com/t5/Windchill-Ideas/idb-p/WindchillIdeas.
Also, to reduce the cost of vault storage, you can use an AWS S3 bucket.
At a few places I've worked, we shared the vault read-only to the test systems and then mounted it read-only. This worked for both Windows Server and AIX.
The Achilles' heel of sharing file vaults between production and test systems is 'Remove Unreferenced Files'. We can't hide/disable the option from the UI, and it has to be run periodically to recover file vault disk space in production. So there need to be at least two copies of the file vaults (production and non-production); otherwise, the non-production systems need to be recloned/rehosted each time Remove Unreferenced Files is run against production. There must also be a hard rule never to run Remove Unreferenced Files in a non-production system, or at least never against the first master root folder.
Sharing among multiple test systems is possible/practical. As you mentioned, at the time of cloning/rehosting, mark the master root folder as read-only and create a second root folder (i.e. the "active" one) for that system's unique content files. Going forward, never run Remove Unreferenced Files against the first master root folder in any of the non-production systems. Additional file vault rehosting steps may be needed if using the defaultUploadVault.
Going another direction... NetApp and other hardware vendors offer block-level cloning/file de-duplication capabilities. These solutions don't actually clone the full set of file vaults. Instead, they snapshot the LUN (disk) and track changes going forward, or they point multiple operating systems at the same file on disk. I've seen Windchill systems cloned in minutes using these types of technologies. Unfortunately, they aren't free and are almost never on IT's radar when scoping hardware for Windchill environments.
I forgot to mention that, since this isn't officially supported by PTC, the 'MOUNT_VALIDATION\site*' file sometimes causes issues when sharing vaults between multiple systems. If that's the case, you may need to do some OS-level folder mounting magic (e.g. Windows symbolic links) to share the vaults between multiple systems.
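If you go the symbolic-link route, it looks along these lines; the link and target paths are hypothetical:

```
:: Create a directory symbolic link so the local vault path Windchill
:: expects resolves to the remote read-only share (paths are placeholders).
mklink /D D:\ptc\vaults\prod_root \\prodhost\ProdVault
```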
I've recently discovered we had a license for NetApp's FlexClone, which we are starting to use for non-production environments.
https://community.netapp.com/t5/Tech-OnTap-Articles/Back-to-Basics-FlexClone/ta-p/84874
Still early days with it, but it could be a big change to our vaulting strategy.
You can actually mount your production vault to a test system in read-only mode. The only issue you'll face is that mount validation will not work on a read-only vault (if it's controlled by the OS and not by the application), so you'll have to manually update the mount status to valid in the database in case it switches to invalid. We've been using this method for a while now, saving 60 TB+ of storage space. No issues found so far.
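A sketch of that manual fix, with heavy caveats: the FvMount status column and its value encoding vary by Windchill release, so treat the column name and the 0 = valid assumption below as placeholders and verify them against your own schema before touching anything:

```
-- Force the shared read-only mount back to valid after validation flags
-- it invalid. ASSUMPTION: FvMount.status exists and 0 means valid --
-- confirm both in your Windchill release first.
UPDATE FvMount
   SET status = 0
 WHERE idA2A2 = 12345;   -- placeholder id of the read-only mount row
COMMIT;
```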
I am doing the same; it works fine until I restart the Windchill service.
Are you saying that you set the status to VALID every time the test server is restarted?
Do you have a workaround for this yet?
Basically, this is what I have done:
My test system has local vaults.
I am moving them to a Windows DFS location that is read-only at the OS level for the service account running the Windchill instance.
At the DB level, I change the FvMount and RootMount entries to the new path.
On starting the Windchill instance, because of the read-only access at the OS level, I get the error in the MethodServer (MS) logs, so I update the mount status at the DB level to VALID (rough SQL sketch below).
Am I missing anything?
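For reference, the DB edits in those steps look roughly like this; the ids, path, and column names are placeholders, so verify the FvMount/RootMount schema in your Windchill release first:

```
-- Repoint the test system's mounts at the new read-only DFS path
-- (hypothetical ids and path; verify column names in your schema).
UPDATE FvMount   SET path = '\\corp\dfs\vaults\prod_root' WHERE idA2A2 = 12345;
UPDATE RootMount SET path = '\\corp\dfs\vaults\prod_root' WHERE idA2A2 = 67890;
COMMIT;
```

After startup flags the mount invalid, the same FvMount status update sketched earlier in the thread forces it back to VALID.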
I found the issue.
I had to disable the Automatic Mount Validation option.
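I don't have the exact setting name handy, and it varies by release, but Windchill properties are normally changed with xconfmanager along these lines; the property name below is a placeholder only, so look up the actual automatic mount validation setting for your release:

```
:: Sketch only: set a Windchill property via xconfmanager and propagate it.
:: "wt.fv.autoMountValidation" is a PLACEHOLDER name, not a confirmed property.
xconfmanager -s "wt.fv.autoMountValidation=false" -t codebase/wt.properties -p
```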