I am moving vaults to a new NetApp volume and making long-overdue changes to automatically create folders for that vault. I have 3 vaults that I will reduce to 2 so that wt.fv.forceContentToVault is properly set to false (https://www.ptc.com/en/support/article/CS68929?source=search). I also have the following set:
wt.fv.useFvFileThreshold=false
wt.fv.useVaultsForAllContent=true
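For the record, here is roughly how I apply those, as a sketch using xconfmanager from the Windchill shell (the wt.properties path may differ on your install, and a method server restart is typically needed afterwards):

xconfmanager -s wt.fv.forceContentToVault=false -s wt.fv.useFvFileThreshold=false -s wt.fv.useVaultsForAllContent=true -t codebase/wt.properties -p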
I know in the past we hit some NetApp or OS limit on the number of files in a single folder, but I think that limit no longer exists (except for the max number of files on the entire volume). The vault folders currently have anywhere from 100K or 300K up to 1M files in them and I have had no issues (xfs or ext4, I think). So the question is: does setting a limit even matter anymore?
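For anyone wanting to check their own numbers, this is how I get per-folder counts on Linux (a sketch; the /vault path is a placeholder for your mount point):

# count files directly in each vault folder (no recursion), largest first
for d in /vault/*/; do
  printf '%s\t%s\n' "$(find "$d" -maxdepth 1 -type f | wc -l)" "$d"
done | sort -rn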
Hi @avillanueva
It was a limit on the number of files that Explorer can display. It was also a performance issue: with 1M+ files it took ages to load, and sometimes it was impossible to load all the files.
I don't believe that Microsoft has solved that issue 😄
PetrH
Same on RHEL but I never really cared to browse the vault folders.
I have mine set at 50,000 but I am using Windows Servers.
I found some references that indicate it is a Microsoft limitation of the index file they use for each folder.
It's enabled out of the box now at 50,000 files, so unless you explicitly change it, that's what you will get.
Unless wt.fv.useFvFileThreshold is set to false. I am at 156,000 files and counting as it revaults.
550K files in a single folder after revaulting. My main vault has 13M files, so I may set a limit like 1M just for sanity, but it's not for any technical reason. @HelesicPetr mentioned the hit you'd take if you happened to peer into those folders, so there's no harm in putting a limit.
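If I do cap it, the change would look something like this (a sketch that re-enables the threshold and sets it to 1M; verify the property names against your release):

xconfmanager -s wt.fv.useFvFileThreshold=true -s wt.fv.fvFileThreshold=1000000 -t codebase/wt.properties -p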
The performance issue would perhaps show up when you have spinning disks, but nobody has those now.
Arcane file system trivia: a large directory gets divided via sub-inodes and sub-sub-inodes (corrections welcome). When this results in access to several tracks on the physical disk, the seek latency kills you.
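Related tip: if you ever do need to peek into one of those huge folders on Linux, skipping the sort is where most of the time is saved (a sketch; the path is a placeholder):

ls -f /vault/folder001 | wc -l                      # -f disables sorting, so entries stream out immediately
find /vault/folder001 -maxdepth 1 -type f | head    # stream the first few names without listing everything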