I'm trying to shrink the primary file for my Windchill database. When I try, it has taken over a day trying to go from 25GB to 13GB. I think the culprit might be the entryNumber_seq table. It is the largest table record-wise, with 77 million rows, and through a query it is where the shrink stays the longest. Does anyone know what it is really for? It has only two columns, and one is just an 'x'. Can it be truncated?
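For what it's worth, the query was along these lines (a generic size query against standard catalog views, nothing Windchill-specific; it needs VIEW DATABASE STATE permission):

SELECT OBJECT_NAME(ps.object_id) AS table_name,
       SUM(ps.row_count) AS row_count,
       SUM(ps.reserved_page_count) * 8 / 1024 AS reserved_mb
FROM sys.dm_db_partition_stats ps
WHERE ps.index_id IN (0, 1)   -- base table only (heap or clustered index)
GROUP BY ps.object_id
ORDER BY reserved_mb DESC;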
Not sure that's a table. You're using SQL Server, right? In Oracle, that is a sequence. Likely it's the source of IDA2A2 values; I have 144 million numbers used. Perhaps SQL Server does that differently. But I do not think it's the source of your file size in the DB. If this still works from 2005:
Unfortunately, it is a table in SQL Server. It is the third-largest table at 2.5 GB. It seems to be hindering my shrink due to the number of records.
What's your process on shrinking? Are you running purge jobs? If there is an entry per entry number, purging should be removing entries.
FYI: The SQL Server equivalent functionality to an Oracle sequence is a table with two columns ('dummy' and 'value'), a stored procedure, and no indexes.
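To make that concrete, here is a minimal sketch of that identity-table pattern. The names and definitions are illustrative guesses, not the actual Windchill DDL:

-- Illustrative only; not the real Windchill objects.
CREATE TABLE entryNumber_seq (
    value BIGINT IDENTITY(1,1) NOT NULL,  -- the generated number
    dummy CHAR(1) NOT NULL                -- placeholder, always 'x'
);
GO
CREATE PROCEDURE nextEntryNumber @next BIGINT OUTPUT
AS
BEGIN
    -- Each call inserts a row and returns the generated identity.
    -- Rows are never read back or deleted, so the table only grows.
    INSERT INTO entryNumber_seq (dummy) VALUES ('x');
    SET @next = SCOPE_IDENTITY();
END;
GO

Every generated number leaves a row behind, which would explain both the 'x' column and the 77 million rows.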
We don't believe this heap table (i.e. no indexes) should be an issue. This is just where the shrink process stalls out... indefinitely.
The entryNumber_seq sequence/table appears to exist to populate the QueueEntry.entryNumber column. As a sequence table, it is only read when a new QueueEntry is created, and only the latest entry is read to determine the last/next entryNumber value. So the ~80 million rows (2.5 GB of disk space) are definitely overkill for maybe 200 rows of existing queue entries.
The values are incrementally increasing, but they don't appear to have a uniqueness constraint. Nearly all the content of the QueueEntry table is temporary; it may have a hundred rows in it at any time.
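A quick read-only sanity check of that mismatch (table names assuming Windchill's defaults):

SELECT (SELECT COUNT(*) FROM QueueEntry)      AS live_queue_entries,
       (SELECT COUNT(*) FROM entryNumber_seq) AS sequence_rows;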
Option A
We could truncate the table and update the procedure to start at 80 million. That way we don't have duplicates and the table is minimized. This is probably the cleaner/safer option; a rough sketch follows the options below.
Option B
We could truncate the table and allow the procedure to restart at 1. There would be a chance of some entryNumber duplication. The entryNumber value appears in the Queue Manager as the Number column. Usually 'Number' implies uniqueness, but I'm not certain that is a requirement for non-persistent database entries. The real question: can we have duplicates, and is it an issue?
Option C
We could move the table to another tablespace, finish the shrink, and move it back if the table really can't be truncated.
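For Option A, assuming the identity-table pattern sketched earlier, the truncate-and-reseed would look roughly like this (run only with Windchill stopped and a fresh full backup; 80000000 is just our current high-water mark):

-- Rough sketch, not a vetted script.
TRUNCATE TABLE entryNumber_seq;
-- On a freshly truncated table, the next row uses the reseed value itself,
-- so pick a value above the highest number ever issued.
DBCC CHECKIDENT ('entryNumber_seq', RESEED, 80000000);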
Insights?
Great insight that it's tied to queue entries. Has this been raised with PTC? It makes sense to use Option A, but I would check first.
Perhaps PTC can recommend this as part of this article: https://www.ptc.com/en/support/article/CS81914
Seems like a poor design that would be common across SQL Server DBs. Ideally, Option B would work if you could also update the queue entry IDs to match the new sequence. But if you have enough headroom, A works just the same.
Calls were logged with PTC yesterday. Too early for answers.
That article works well for non-PRIMARY data files; PRIMARY data files like to hang on shrink operations. This blog article, particularly the part about "Shrink using smaller intervals", works better for PRIMARY data files. We have been using a 500MB shrink increment.
https://eitanblumin.com/2020/04/07/troubleshooting-long-running-shrink-operations/
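A rough sketch of that incremental loop (the database name and file_id are placeholders for our setup; DBCC SHRINKFILE won't accept variables, hence the dynamic SQL):

USE WindchillDB;  -- placeholder database name

DECLARE @target INT = 13000;  -- final target size in MB
DECLARE @current INT, @sql NVARCHAR(200);

-- size is reported in 8 KB pages; dividing by 128 converts to MB
SELECT @current = size / 128
FROM sys.database_files
WHERE file_id = 1;            -- the PRIMARY data file

WHILE @current > @target
BEGIN
    SET @current = CASE WHEN @current - 500 < @target
                        THEN @target ELSE @current - 500 END;
    SET @sql = N'DBCC SHRINKFILE (1, ' + CAST(@current AS NVARCHAR(10)) + N');';
    EXEC sp_executesql @sql;
END;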
But it is still hanging in this specific database.
Thanks
This is the response from PTC Technical Support...
Regarding the case, kindly refer to the information below and the attached article.
Table Name          Rows
------------------  ---------
entryNumber_seq     256541430
id_sequence          67836810
Note: please take a complete backup of the database first.
The SQL below can be used to clear out the entries from id_sequence. Windchill must be stopped before running these commands:
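(PTC's actual script was attached to the case and is not reproduced here. Assuming the identity-table pattern, it would amount to something like the following, reseeding above the row counts listed above:)

-- Illustrative reconstruction only; use PTC's attached script.
TRUNCATE TABLE id_sequence;
DBCC CHECKIDENT ('id_sequence', RESEED, 67836810);
TRUNCATE TABLE entryNumber_seq;
DBCC CHECKIDENT ('entryNumber_seq', RESEED, 256541430);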
Let me know if anything comes out of the above suggestions. Feel free to contact me if you have more concerns related to the issue.
So, we can truncate sequence tables. Ultimately that is what we did and our shrink operation continued forward without further issues.