Hi everyone,
We're currently facing a performance challenge in our ThingWorx implementation related to saving a large volume of image files (potentially millions over time).
We’ve observed that writing files directly to a file share (network storage) from ThingWorx is quite slow, whereas saving them to the local repository (on the same server that hosts the ThingWorx Application Server/Experience Service) is significantly faster.
To address this, our development team has proposed the following approach:
Save incoming files temporarily to the local ThingWorx repository (for fast write operations).
Then, in the background, run a scheduled service (e.g., every 15 or 30 minutes) that:
Moves these files to the file share for long-term storage.
Deletes them from the local repository after successful transfer.
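To make the proposed flow concrete, here is a minimal sketch of the background migration step. In ThingWorx itself this would be a timer- or scheduler-triggered service operating on a FileRepository Thing; the Python below only illustrates the general pattern of moving "settled" files and deleting the local copy after the transfer is verified. The paths, the age threshold, and the size-based verification are assumptions for illustration, not part of the original post.

```python
import os
import shutil
import time

# Hypothetical locations: adjust to your local repository path and file share mount.
LOCAL_REPO = "/opt/thingworx/repo/incoming"
FILE_SHARE = "/mnt/fileshare/archive"
MIN_AGE_SECONDS = 60  # skip files modified very recently; they may still be uploading

def migrate_files(local_dir: str, share_dir: str) -> int:
    """Move settled files from the local repository to the file share.

    The local copy is deleted only after the transferred file's size
    matches, so a failed or partial copy leaves the original in place
    to be retried on the next scheduled run.
    """
    moved = 0
    now = time.time()
    for name in os.listdir(local_dir):
        src = os.path.join(local_dir, name)
        if not os.path.isfile(src):
            continue
        # Guard against files that are still being written.
        if now - os.path.getmtime(src) < MIN_AGE_SECONDS:
            continue
        dst = os.path.join(share_dir, name)
        shutil.copy2(src, dst)
        # Verify the copy before removing the local original.
        if os.path.getsize(dst) == os.path.getsize(src):
            os.remove(src)
            moved += 1
    return moved
```

One design point worth noting: copy-verify-delete (rather than a bare move) keeps the operation idempotent, so a crash mid-run at worst leaves a duplicate on the share, never a lost file.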
We’d like to ask the community:
Is this a recommended or sustainable strategy for handling large-scale file storage in ThingWorx?
Are there risks or performance concerns (e.g., file locking, memory issues, scaling problems) we should be aware of with this kind of background file migration?
Any alternatives or best practices others have used for similar use cases?
Thanks in advance for your insights!
Hello,
I don't have much experience with volumes that large, but I'll try to give you some input while we wait for others to weigh in!
Regards,
Guillaume
Hello Guillaume,
Thank you very much for your input and for taking the time to share these ideas!
Thanks again for your suggestions!
Hi @MA8731174,
I wanted to follow up on your post to see whether your question has been answered.
If you have more to share on your issue, please let the Community know so that we can continue to support you.
Thanks,
Abhi