ValueStream cleaning

dbologna
13-Aquamarine

Hello everyone, I am using the ThingWorx 9.6 platform, on which I use the purge services "PurgeSelectedPropertyHistory"/"PurgeAllPropertyHistory" on a ValueStream to manage data retention. I often get the following error:

ERROR [java.lang.RuntimeException: com.thingworx.common.exceptions.DataAccessException: [1,018] Data store unknown error: [Error occurred while accessing the data provider.]

The message is returned after 602 seconds. Can anyone help me? It is important to keep only a small amount of data active in order to avoid overflowing the database.
Thanks
Dimitri

 

ACCEPTED SOLUTION
slangley
23-Emerald II
(To:dbologna)

Hi @dbologna 

 

From a review of your case, it appeared that you made some updates to your purge script, possibly a timeout setting.  If that is correct, please confirm by marking this response as the Accepted Solution.  If you made other settings changes, please post an update and mark that as the Accepted Solution for the benefit of others in the community.

 

Regards.

 

--Sharon



PTC has this article regarding ValueStream cleanup: https://www.ptc.com/en/support/article/CS271772

 

I assume the failure you are getting is the 600-second timeout for database queries. Since you have so much data, the deletion query simply takes very long.

 

You can change this timeout or change your purge mechanism (depending on the amount of data you have). 1. Purge multiple times with different start times (first run: purge all data older than 10 months; second run: all data older than 9 months; ...) so that each run only deletes about one month of data.
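A sketch of that chunked approach in plain JavaScript (the date math is testable as-is; the actual purge call is shown only as a comment, because the exact service parameters should be verified against your ThingWorx version):

```javascript
// Build one purge window per month so each run deletes roughly one
// month of data instead of everything at once.
// oldestMonths: age of the oldest data present (e.g. 12)
// retentionMonths: how much data to keep (e.g. 6)
function monthlyPurgeWindows(oldestMonths, retentionMonths, now) {
    now = now || new Date();
    var windows = [];
    // Work from the oldest data toward the retention boundary,
    // e.g. 12 -> 11 -> ... -> 7 months old.
    for (var age = oldestMonths; age > retentionMonths; age--) {
        var start = new Date(now);
        start.setMonth(start.getMonth() - age);
        var end = new Date(now);
        end.setMonth(end.getMonth() - (age - 1));
        windows.push({ startDate: start, endDate: end });
    }
    return windows;
}

// Inside a ThingWorx service you would then loop over the windows, e.g.:
// (parameter names are assumptions -- check the service definition first)
// monthlyPurgeWindows(12, 6).forEach(function (w) {
//     Things["MyThing"].PurgeAllPropertyHistory({
//         startDate: w.startDate, endDate: w.endDate, immediate: true
//     });
// });
```

Each window then stays well under the query timeout, because the database only has to delete one month's worth of rows per call.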

 

2. You can configure the query timeout in the "ThingworxPersistenceProvider" entity:

[Screenshots: query timeout setting in the persistence provider configuration]

But this setting applies to all queries to the database and may impact platform performance under certain conditions, so the better alternative is to fine-tune your purge mechanism.

 

A third option would be to implement the purge at the database level, without ThingWorx functionality, e.g. a custom query along the lines of "DELETE FROM table WHERE timestamp > 1.1.2025" (it will not work exactly like this). But I do not have such a query.

 

There is unfortunately no feature in the platform itself for this.

On option 3): I would assume that if it takes that long using PurgeSelectedPropertyHistory, the DELETE FROM will also take very long. I would recommend option 1): make smaller chunks, and also call it more often (e.g. weekly instead of monthly) for the same effect, with less data to delete in each chunk.

 

If the ValueStream is very large, also consider moving the data for the affected Thing to its own ValueStream. It will end up in the same database table, but it might allow for a shorter deletion time if a different index is used.

dbologna
13-Aquamarine
(To:Rocko)

Hi Rocko,

the cleanup runs daily and covers the data of a single day, identified by the retention period.

Each Thing has its own ValueStream, and on average there are about 20,000 rows of data per day (measured with QueryNumberPropertyHistory). Instead of deleting all the data with a single request, I have now created a loop that purges each hour of the day separately, but the problem does not change.
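The hourly loop described here can be sketched like this in plain JavaScript (the window math runs as-is; the purge call itself is left as a comment, since its exact parameters are an assumption to verify on your platform):

```javascript
// Split one retention day into 24 one-hour purge windows, so each
// purge call only touches a small slice of the ~20,000 daily rows.
function hourlyWindows(dayStart) {
    var windows = [];
    var HOUR_MS = 3600 * 1000;
    for (var h = 0; h < 24; h++) {
        windows.push({
            start: new Date(dayStart.getTime() + h * HOUR_MS),
            end: new Date(dayStart.getTime() + (h + 1) * HOUR_MS)
        });
    }
    return windows;
}

// In a ThingWorx service (service/parameter names are assumptions):
// hourlyWindows(retentionDay).forEach(function (w) {
//     Things[thingName].PurgeAllPropertyHistory({
//         startDate: w.start, endDate: w.end, immediate: true
//     });
// });
```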
Thanks
Dimitri

I suggest creating a ticket with support, then. They can look into the system with you to identify the issue; deleting a couple of rows shouldn't take that long.

dbologna
13-Aquamarine
(To:Rocko)

Ticket opened at PTC

BR

Dimitri


Just as a note (as I had the same issue): with the new mechanism you need to consider the data still in the ValueStream. E.g., I implemented deleting all data older than 6 months, but I had already collected 12 months of data, so the first deletion run had to delete 6 months of data at once, which takes long.

So I first had to delete all data older than 11 months, then decrease to 10, ... until 6 months was reached. Now it runs smoothly, as it does not have too much data to delete every day.

The mechanism is: every hour, for 100 devices, purge history older than 6 months. So also limit the number of ValueStreams per run (to avoid hitting the ScriptTimeout).
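That batching scheme (a fixed number of devices per hourly run, so no single script execution exceeds the timeout) could be sketched as follows; the batch selection is plain JavaScript, and the per-Thing purge call is only indicated in a comment because its parameters are assumptions:

```javascript
// Split the full device list into fixed-size batches; each scheduled
// run processes one batch, cycling through the list over time so
// every device's ValueStream gets purged regularly.
function batchForRun(devices, batchSize, runIndex) {
    var batchCount = Math.ceil(devices.length / batchSize);
    var i = runIndex % batchCount; // wrap around so every batch gets a turn
    return devices.slice(i * batchSize, (i + 1) * batchSize);
}

// Inside the hourly scheduled ThingWorx service you would then purge
// each returned Thing's history older than 6 months, e.g. (assumed
// parameter names -- verify against your platform version):
// batchForRun(allDeviceNames, 100, runCounter).forEach(function (name) {
//     // Things[name].PurgeAllPropertyHistory({ ... });
// });
```

Keeping the batch size at 100 (or whatever your timeout allows) bounds the work of each run, while the wrap-around index guarantees the whole fleet is covered over successive hours.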
