
[1,018] Data store unknown error: [Error occurred while accessing the model provider. ?

seanccc
17-Peridot


Hi,

When creating a DataTable dynamically from a service, I sometimes get the error "[1,018] Data store unknown error: [Error occurred while accessing the model provider.".

 

var params = {
    name: dataTableName,
    description: "",
    thingTemplateName: "DataTable"
};

Resources["EntityServices"].CreateThing(params);

Things[dataTableName].SetDataShape({ name: shapeName });
//Things[dataTableName].SetProjectName({ projectName: "ABCD" });

Things[dataTableName].EnableThing();
Things[dataTableName].RestartThing();

Things[dataTableName].SetPersistenceProvider({
    name: persistentProvider
});

Resources["EntityServices"].AddShapeToThing({
    name: dataTableName /* THINGNAME */,
    thingShapeName: "LockableThing" /* THINGSHAPENAME */
});

Things[dataTableName].EnableThing();
Things[dataTableName].RestartThing();

 

And if an error occurs, I have the following exception handling to roll back:

 

var params = {
    name: dataTableName /* THINGNAME */
};
try {
    Resources["EntityServices"].DeleteThing(params);
    logger.debug("Rollback data table " + dataTableName + " finished");
} catch (error2) {
    logger.debug("Fail to rollback data table " + dataTableName + ". Error: " + error2);
    // result: INFOTABLE dataShape: "EntityReference"
    var existingGhostEntities = Resources["GhostEntitiesResource"].GetGhostEntities({
        deleteGhosts: true /* BOOLEAN */
    });
}
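
For clarity, the two snippets above fit together roughly like this (a simplified sketch of the whole service, with the creation sequence in the try block and the rollback as the catch branch):

try {
    Resources["EntityServices"].CreateThing({
        name: dataTableName,
        description: "",
        thingTemplateName: "DataTable"
    });

    Things[dataTableName].SetDataShape({ name: shapeName });
    Things[dataTableName].EnableThing();
    Things[dataTableName].RestartThing();

    Things[dataTableName].SetPersistenceProvider({ name: persistentProvider });

    Resources["EntityServices"].AddShapeToThing({
        name: dataTableName,
        thingShapeName: "LockableThing"
    });

    Things[dataTableName].EnableThing();
    Things[dataTableName].RestartThing();
} catch (error) {
    // rollback: delete the half-created thing; if that fails, clean up ghost entities
    try {
        Resources["EntityServices"].DeleteThing({ name: dataTableName });
        logger.debug("Rollback data table " + dataTableName + " finished");
    } catch (error2) {
        logger.debug("Fail to rollback data table " + dataTableName + ". Error: " + error2);
        Resources["GhostEntitiesResource"].GetGhostEntities({ deleteGhosts: true });
    }
}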

 

However, once the "Data store unknown error" occurs, it keeps occurring until Tomcat is restarted. Any suggestions?

 

ThingWorx 8.4.2 + PostgreSQL 10.x

 

Regards,

Sean

12 REPLIES
PaiChung
22-Sapphire I
(To:seanccc)

Is this creation service executed in a set security context?

Data store unknown could be a visibility permission problem.

i.e. the security context (user) that is executing the service may not have permission to see the Persistence Provider associated with the DataTable you are creating.
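
As a quick sanity check, you could log the executing user right before CreateThing to see which security context actually runs the creation. A minimal sketch, assuming the standard CurrentSessionInfo resource is available:

// log the user the service is actually running as (sketch)
var currentUser = Resources["CurrentSessionInfo"].GetCurrentUser();
logger.warn("Creating " + dataTableName + " as user: " + currentUser +
    ", persistence provider: " + persistentProvider);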

seanccc
17-Peridot
(To:PaiChung)

@PaiChung ,

 

No, I'm currently using Administrator to run the creation service. It will be executed in a security context when my project goes live, but for now it's only executed by Administrator.

The strange thing is that the error occurs randomly, about once every 5~20 executions.

Hi,

 

You can try to look into the Tomcat logs (stderr); sometimes there is another error behind the "Data store unknown error" that gives a hint.

 

Thanks,

Raluca Edu

Hello Sean,

 

I'm sorry for not answering your question; I'm just wondering what your use case is for creating data tables programmatically -- is that for visibility purposes?

 

/ Constantine

@seanccc , 

 

Have you looked at Article CS239982? It discusses how to enable a stack trace, which may provide more information on what action is causing the problem.

@Constantine ,

My customer has multiple production lines and each line will have 50,000 records per year. Since a data table is only suitable for fewer than 300,000 records, I create a data table for each line. The customer may have more lines in the future, so the tables are created programmatically. Also, I want to keep the flexibility to assign a different persistence provider to each line's data table, to spread the data across different DB schemas/tablespaces, although they all use the same persistence provider for now.

Sean,

All DTs are stored internally in the same SQL table, so I would advise you to run some basic performance tests first. For example, fill DT1 with 300K rows and measure a query, then fill DT2 and see if it has an impact on both queries. Because of the underlying reindexing I would also expect some impact on INSERT performance: with 10 customers, at the end of the year you'll be inserting rows into a table with 500K rows instead of 50K. This is all hypothetical, I've never measured it myself, but gut feeling suggests that running 10 DTs each with 50K records will be slow.
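
Something along these lines would do as a rough sketch (the "lineId"/"counter" fields are placeholders for whatever your DataShape actually contains; AddDataTableEntries and QueryDataTableEntries are the built-in DataTable services):

// fill DT1 with test rows in batches, then time a query (rough sketch)
var total = 300000;
var batchSize = 1000;
for (var b = 0; b < total / batchSize; b++) {
    var batch = Things["DT1"].CreateValues();   // empty InfoTable with DT1's DataShape
    for (var i = 0; i < batchSize; i++) {
        batch.AddRow({ lineId: "LINE-01", counter: b * batchSize + i });   // placeholder fields
    }
    Things["DT1"].AddDataTableEntries({ values: batch });
}

var start = new Date().getTime();
var result = Things["DT1"].QueryDataTableEntries({ maxItems: 1000 });
logger.warn("Query over " + total + " rows took " + (new Date().getTime() - start) +
    " ms, returned " + result.rows.length + " rows");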

May I ask what your use case is for that? Do you accumulate some running totals?

/ Constantine

Hi @Constantine ,

I don't need to accumulate running totals. I separate the data tables by production line just to get better query performance, as I noticed that there is an entity_key constraint on the data table and the table name is part of the constraint.

 

But I just checked the constraint again and found something new. In pgAdmin the table name (entity_id) appears as the 1st column of the constraint, so I thought querying by table name could use the index; however, checking the table creation script, I found it is actually the 2nd column. A query condition on the table name causes a full table scan and is slow. So I will abandon the approach of creating dynamic data tables.

 

I also have several dynamic streams for each production line. For streams the stream name (entity_id) is indexed, so I will keep creating the streams dynamically.

Sean,

 

On a side note, we found that in most use cases (~10K rows) DTs are only useful for querying by ID. For pretty much anything else we use streams and SQL.

 

/ Constantine

@Constantine ,

 

My case has lots of CRUD operations on time-independent data; also, streams don't have an index on "field_values", so a stream is not suitable. SQL is always the last choice, I think; I don't want the main business logic implemented in SQL, as it would cause more maintenance effort in the future.

 

The good news for using a data table in my case is that the data won't be changed once it is confirmed/submitted a few weeks later, so I have a chance to archive the data to another schema/database. Meanwhile, I can compress the lines by transforming the data into a complex JSON object, so that the data volume in the DT won't grow too large over the years.
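
Roughly what I have in mind for the archiving step (just a sketch; the "status", "orderId" and "quantity" fields and the lineId value are placeholders for my real DataShape):

// sketch: collapse one line's confirmed rows into a single JSON document for archiving
var lineId = "LINE-01";   // placeholder line identifier
var rows = Things[dataTableName].QueryDataTableEntries({
    maxItems: 50000,
    query: { filters: { type: "EQ", fieldName: "status", value: "confirmed" } }   // placeholder filter
});

var archive = { line: lineId, entries: [] };
for (var i = 0; i < rows.rows.length; i++) {
    var row = rows.rows[i];
    archive.entries.push({ orderId: row.orderId, quantity: row.quantity });   // placeholder fields
}
var archiveJson = JSON.stringify(archive);   // one compact document instead of thousands of rows
// archiveJson would then be written to the archive schema/DB and the original rows removed from the DT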

@seanccc 

 

We had a somewhat similar use case (stored workflow instances in DTs, each ~20K when serialized to JSON), also archived old instances. This setup started having performance issues at around ~20K rows, and while doing optimizations it became obvious that it won't scale for one more year, and so we ended up rewriting it in SQL, reusing the same DB instance and putting all DDL into ThingWorx services. It's actually not that bad in terms of maintenance until you attempt any refactoring. Luckily we were exporting / importing the whole thing on each major release, which provided a simple and convenient way to "migrate" data between changing SQL table definitions.

 

/ Constantine

@Constantine ,

 

Thank you very much for sharing such valuable experience. 20K rows also won't scale for my case, even if the lines are compressed into JSON. I'll test my case as well.

 

The lucky thing is that the query condition is relatively simple for the archived data; maybe I can avoid querying on field_values and always query on the entity_key field (service: GetDataTableEntryByKey). Hopefully that would be fast even for 20K records.
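
i.e. something like this, where the lookup always goes through the primary key (sketch; "ORDER-0001" is a placeholder key value):

// look up a single archived record by its primary key only (sketch)
var orderId = "ORDER-0001";   // placeholder key value
var entry = Things[dataTableName].GetDataTableEntryByKey({ key: orderId });
if (entry.rows.length > 0) {
    logger.debug("Found entry for key " + orderId);
}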
