Hi ThingWorx Community,
I’m working on a large data migration project in ThingWorx and would love your advice or best practices!
I have around 100 DataTables, each with a significant number of rows.
For each DataTable, I read all entries, transform the data (sometimes with nested loops and extra logic), and then either build a large InfoTable result or update the entries.
This logic works fine for small datasets, but with real data volumes I’m consistently hitting the script execution timeout error.
I’ve seen loops that just insert thousands of rows run much longer without hitting this limit.
Now that I’m reading + transforming + updating, I’m hitting the limit quickly.
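For context, here is a simplified sketch of what one of these migration services looks like today; the Thing, DataShape, and field names are placeholders rather than my real entities:

```javascript
// Simplified sketch of the current (monolithic) approach that hits the timeout.
// "MySourceDataTable", "MyTargetDataShape", and the field names are placeholders.

// Read ALL entries of the DataTable in one go
var allEntries = Things["MySourceDataTable"].GetDataTableEntries({
    maxItems: 1000000 // effectively "give me everything"
});

// Build one large InfoTable with the transformed rows
var result = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape({
    infoTableName: "TransformedRows",
    dataShapeName: "MyTargetDataShape"
});

for (var i = 0; i < allEntries.rows.length; i++) {
    var row = allEntries.rows[i];

    // ... nested loops and extra transformation logic per row ...

    result.AddRow({
        id: row.id,        // placeholder fields
        value: row.value
    });
}

// With ~100 DataTables and real data volumes, this single pass
// exceeds the ScriptTimeout.
```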
How do others handle large migrations like this without hitting the script timeout?
Is there a proven batching or chunking pattern that works well inside ThingWorx services?
Should I switch to a Timer/Scheduler Thing approach? Any examples or pitfalls?
Any other general tips for making heavy transformations scalable and safe?
I’d really appreciate any real-world lessons or suggestions.
Thanks in advance for your help!
Best regards,
I've seen ScriptTimeouts of 2 hours and larger in the field.
If you have development under control, meaning the developers don't code against this limit and understand that long-running transactions block threads, you can increase the ScriptTimeout and run the job when there is low activity on the system.
That applies especially when this is a one-time migration and you can switch the setting back afterwards.
The other way would be to run batches from a scheduler (see the sketch below); the right batch size depends heavily on the data.
We talked about this before: DataTables don't scale well with large data volumes, so long term you may want to consider SQL tables instead. That might also reduce the time needed for the updates.
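To make the scheduler idea concrete, here is a minimal sketch of a batched migration service. It assumes a Scheduler Thing that calls it periodically, a persistent NUMBER property named migrationOffset on the Thing that owns the service, and a SQL service named InsertMigratedRow that you would define yourself on a Database Thing; all of these names are placeholders, not existing entities:

```javascript
// Minimal sketch of a batched migration service, meant to be triggered
// repeatedly by a Scheduler Thing until the source DataTable is exhausted.
// All Thing, property, field, and service names here are placeholders.

var batchSize = 500; // tune to your data volume and transformation cost

// Persistent NUMBER property on this Thing that remembers how far we got
var offset = me.migrationOffset;

// GetDataTableEntries has no offset/skip parameter, so we read
// offset + batchSize rows and skip the ones that were already migrated
var entries = Things["MySourceDataTable"].GetDataTableEntries({
    maxItems: offset + batchSize
});

var processed = 0;
for (var i = offset; i < entries.rows.length; i++) {
    var row = entries.rows[i];

    // ... your per-row transformation logic goes here ...

    // Write the transformed row through a SQL Command service you define
    // yourself on a Database Thing ("InsertMigratedRow" is hypothetical)
    Things["MySqlThing"].InsertMigratedRow({
        id: row.id,        // placeholder columns
        value: row.value
    });

    processed++;
}

// Persist progress so the next scheduler run continues where this one stopped
me.migrationOffset = offset + processed;

// A short run means the source is exhausted: stop the scheduler
if (processed < batchSize) {
    Things["MigrationScheduler"].DisableThing();
}
```

Because GetDataTableEntries has no offset parameter, this sketch re-reads the already-migrated rows on every run; if your rows carry a monotonically increasing key, a QueryDataTableEntries call with a greater-than filter on that key would avoid the re-read.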
Hi @Rocko
Thanks a lot for your insights — very helpful!
Yes, you’re right: I’m actually already using SQL Server for this, and the goal of this migration is to move the data over to the SQL tables for better long-term scalability. So this post was specifically about how to handle this one-time migration efficiently.
Given that it’s a one-time migration, do you think it’s reasonable to temporarily increase the ScriptTimeout to something like 2 or even 4 hours and run the job during low system activity?
I’ll make sure to switch it back afterwards once the migration is complete.
Appreciate your advice and best practices on this!
Totally reasonable; the only downside is that you have to restart the server for the setting to take effect.
+1 to @Rocko's suggestion. That's exactly why this ScriptTimeout limit is configurable and not hard-coded.
Hello @MA8731174,
I wanted to follow up with you on your post to see if your question has been answered.
If so, please mark the appropriate reply as the Accepted Solution for the benefit of other members who may have the same question.
Of course, if you have more to share on your issue, please let the Community know so that we can continue to support you.
Thanks,
Abhi