
Error: Could not create a transaction for ThingworxPersistenceProvider

seanccc
17-Peridot


Hi,

I got the error "Could not create a transaction for ThingworxPersistenceProvider". According to the article https://www.ptc.com/en/support/article/CS229649 , the issue should only occur in ThingWorx versions lower than 7.0, but I'm on ThingWorx 8.4.2.

 

Please check the attached trace_log. I don't know how to reproduce it, and I don't know when it will occur again.

 

Database: PostgreSQL 10.x

 

Regards,

Sean

Accepted Solution

Hello Sean,

 

The top error message says it all: 

Key (entity_name, entity_type)=(TestingDataReportHistorytestline001Stream, 2401) already exists

The error message suggests that you are attempting to create duplicate streams, so that's where you should be looking first, IMHO.
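
 

If you do keep creating those streams from a service, a guard before the creation call removes the duplicate-key error at its source (even though, as I say below, I'd rather not create streams programmatically at all). This is only a minimal sketch in ThingWorx server-side JavaScript; the stream name pattern and the lineName variable are hypothetical, so adapt it to whatever your CreateStream service actually derives:

    // Minimal sketch: only create the stream if it does not exist yet.
    // The name pattern below is hypothetical -- use whatever CreateStream uses.
    var streamName = "TestingDataReportHistory" + lineName + "Stream";

    if (Things[streamName] === undefined) {
        Resources["EntityServices"].CreateThing({
            name: streamName,
            description: "Report history stream for " + lineName,
            thingTemplateName: "Stream"   // or your own stream template
        });
        Things[streamName].EnableThing();
        Things[streamName].RestartThing();
    }
    // If the stream already exists, nothing is created -- no duplicate key,
    // no rollback.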

 

The exception is thrown at the beginning of your CreateStream service. I guess you catch it there and then try to clean up ghost entities by listing them first with the GetGhostEntities service. Because the transaction has already been aborted at that point, GetGhostEntities fails as well (that's the "ERROR: current transaction is aborted, commands ignored until end of transaction block").

 

As for the reason for "Could not create a transaction for ThingworxPersistenceProvider", I can only guess that it is some intermittent side effect of rolling back transactions at a rate of 14 rollbacks per 50 milliseconds. I don't know how Postgres (and its Java driver) handles that, but I wouldn't be too surprised if 10+ almost simultaneous rollbacks could trigger some internal race condition under unfortunate circumstances. I'm pretty sure there are background jobs in Postgres that take care of cleanup, optimization, reindexing, etc., and that not all of that transaction processing happens synchronously, which increases the risk of race conditions and odd side effects. That's just my hypothesis, not even a theory.

 

My advice for improving the situation (after you find out where those duplicates come from):

 

  1. Do not use GetGhostEntities; instead, try to delete the thing in the catch block, wrapping that deletion in a try/catch as well (see the sketch after this list);
  2. Do not create streams programmatically unless you have a very good reason to;
  3. The users who create a work order probably do not expect it to also create lines and streams. Not only is that confusing, it also affects performance and may have an undesirable effect on security (the user who creates work orders effectively gets the same access rights as the one who defines the production line structure).
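
 

For point 1, here is a rough sketch of what that catch block could look like, again in ThingWorx server-side JavaScript. The streamName is the same hypothetical name as in the earlier sketch, and the inner try/catch ensures a failed cleanup does not mask the original error:

    // Sketch of recommendation 1: on failure, try to delete the half-created
    // ("ghost") thing directly instead of calling GetGhostEntities.
    try {
        Resources["EntityServices"].CreateThing({
            name: streamName,                 // hypothetical, see above
            thingTemplateName: "Stream"
        });
        Things[streamName].EnableThing();
        Things[streamName].RestartThing();
    } catch (err) {
        logger.warn("Creating " + streamName + " failed: " + err);
        try {
            // Best-effort cleanup, wrapped so a failed delete does not
            // replace the original error.
            Resources["EntityServices"].DeleteThing({ name: streamName });
        } catch (cleanupErr) {
            logger.warn("Cleanup of " + streamName + " also failed: " + cleanupErr);
        }
    }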

 

Regards,

Constantine


@Constantine,

 

Thank you for the quick reply; I'll adopt your suggestions.

 

Regards,

Sean
