We have been dealing with vault stability issues over the last few months (our vault keeps getting set to read-only). While addressing this, we are also looking at re-architecting our storage to improve performance and 'future proof' the system, and are looking for advice and experience related to this. At least some of our issues are caused by running out of inodes on our NAS.
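For anyone wanting to watch inode consumption before the filesystem goes read-only, here is a minimal sketch using Python's standard `os.statvfs` (POSIX only; the `/` path is just an example — point it at your vault mount):

```python
import os

def inode_usage(path: str):
    """Return (total inodes, free inodes, percent used) for the
    filesystem containing `path`."""
    st = os.statvfs(path)
    used = st.f_files - st.f_ffree
    pct = 100.0 * used / st.f_files if st.f_files else 0.0
    return st.f_files, st.f_ffree, pct

# Example: check the root filesystem; substitute your vault mount point.
total, free, pct = inode_usage("/")
print(f"inodes: {total:,} total, {free:,} free ({pct:.1f}% used)")
```

The same numbers are available from `df -i` on the NAS host; a script like this is just easier to wire into an alert threshold.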
Our environment: WC 11.0 (soon to be upgraded to 11.2), ~13 TB and 135 million objects in a single external vault (NetApp, 7,200 RPM, 2 TB drives).
- Are there storage advantages to using multiple vaults? Either by using vaulting rules to separate objects into multiple vaults, or by setting a vault to read-only and activating a new one once it reaches a given size? (Yes, we are using folder size limits with autofolder creation at 50k items.)
- If more than one root folder is active, will files be written to both? If so, what determines what gets written where? Are multiple active root folders beneficial?
- What configurations can/should be done on the NetApp side?
- Does any of this vaulting strategy/advice change with 11.2? 12.0?
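On the 50k autofolder limit above: since each vault subfolder's file count is what triggers new folder creation, it can help to monitor how close each folder is to the threshold. A minimal sketch (the `VAULT_ROOT` path and the 90% warning threshold are assumptions, not Windchill settings):

```python
import os

VAULT_ROOT = "/vault/root1"  # hypothetical vault root folder path
LIMIT = 50_000               # the autofolder creation limit mentioned above

def folder_counts(root: str) -> dict:
    """Count regular files in each immediate subfolder of `root`."""
    counts = {}
    for entry in os.scandir(root):
        if entry.is_dir():
            counts[entry.name] = sum(
                1 for e in os.scandir(entry.path) if e.is_file()
            )
    return counts

if os.path.isdir(VAULT_ROOT):
    for name, n in sorted(folder_counts(VAULT_ROOT).items()):
        flag = "  <-- near limit" if n > 0.9 * LIMIT else ""
        print(f"{name}: {n:,}{flag}")
```

This only looks one level deep; adjust if your vault nests folders differently.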
I have similar concerns regarding vault storage. Our developers created a bulkload tool, and our users are now uploading terabytes of data into our system. I am not entirely certain the data is even related to any product in the PLM system.
We are currently at ~40 TB, and one user is planning to add another 30 TB over the next couple of months. I am very concerned about the downstream load this puts on the PDMLink application server.
Can some community members chime in with their take on the downstream impact of having a large vault with millions of objects?