
How do I get evaluation metrics on a separate dataset?

JA_10708351

In Thingworx 9, I know that training a model will result in an automatically generated confusion matrix but is there a way to run another dataset through a model that is already trained, and get a confusion matrix on just the new dataset?

8 Replies

@JA_10708351 ,

 

Are you able to elaborate on your use case here?

You can retrain a model based on updated datasets: https://support.ptc.com/help/thingworx/analytics/r9/en/#page/analytics/analytics_builder/retrain_model.html

 

You can append to an existing dataset by uploading additional data: https://support.ptc.com/help/thingworx/analytics/r9/en/#page/analytics/analytics_builder/upload_additional_data.html

 

You can generate confidence models for an existing predictive model without retraining it: https://support.ptc.com/help/thingworx/analytics/r9/en/#page/analytics/twxa_funtionality/generate_confidence_for_existing_model.html

 

That said, the most direct approach for a new dataset is simply to upload it and train a new model on that dataset itself.

 

Regards,

 

Neel

 

Basically, is there a way to train a model on dataset A, but get the confusion matrix of the model predicting on a different dataset B?

@JA_10708351 ,

 

Essentially, the Confusion Matrix is generated at the time the model is created.

 

It is dependent on the data supplied at the time of creation, and there is currently no built-in process to produce a confusion matrix for a different dataset against an already-trained model.

 

The Retraining of the model using Additional Data (info linked in my original comment) would be the closest to your use case.
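
In the meantime, if you need a confusion matrix for dataset B against a model already trained on dataset A, one workaround is to compute it outside of ThingWorx. The sketch below is a minimal example, assuming you can export the model's predictions on dataset B together with the actual goal values to a CSV (the file and column names here are placeholders, not ThingWorx defaults):

import pandas as pd
from sklearn.metrics import confusion_matrix, classification_report

# Hypothetical export: the trained model's predictions on dataset B,
# kept alongside the actual goal values in one CSV file.
df = pd.read_csv("dataset_b_predictions.csv")

y_true = df["goal"]        # actual outcome column (placeholder name)
y_pred = df["prediction"]  # predicted outcome column (placeholder name)

print(confusion_matrix(y_true, y_pred))
print(classification_report(y_true, y_pred))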

 

Regards,

 

Neel

OK. I would like to make a feature request for that, because often someone will engineer the data in Python or a similar tool but still want the evaluation metrics/confusion matrix computed on the un-engineered data. For example, if I upsampled a class that was easy to predict, then running the evaluation metrics on that upsampled data (instead of the untouched data) would make the metrics look artificially high, and the confusion matrix would look artificially good.
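
The quick sketch below (made-up numbers, scikit-learn assumed) shows the effect: the same model scores much higher once the easy class is upsampled.

from sklearn.metrics import accuracy_score

# Hypothetical labels: class 1 is easy and always predicted correctly,
# class 0 is hard and usually missed.
y_true_orig = [0, 0, 0, 0, 1, 1]
y_pred_orig = [0, 1, 1, 1, 1, 1]                 # 3 of 6 correct
print(accuracy_score(y_true_orig, y_pred_orig))  # 0.5

# Upsample the easy class and the same predictions suddenly look great.
y_true_up = y_true_orig + [1] * 12
y_pred_up = y_pred_orig + [1] * 12               # 15 of 18 correct
print(accuracy_score(y_true_up, y_pred_up))      # ~0.83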

@JA_10708351 ,

 

Please submit your Product Feature Request to the ThingWorx Ideas Community where it can be voted on by the community members.

 

https://community.ptc.com/t5/Welcome-How-To-s/Getting-to-Know-PTC-Community-Ideas/td-p/607208
https://community.ptc.com/t5/ThingWorx-Ideas/idb-p/thingworxideas

When you submit your Feature Request, please provide the following information in your post:

 

1. What version of ThingWorx are you currently running?
2. Describe the problem you are trying to solve. Please include detailed documentation such as screenshots, images, or video.
3. What business value would your suggestion represent for your organization?

 

Regards,

 

Neel

slangley
(To:JA_10708351)

Hi @JA_10708351.

 

If you or another member of your organization created a request for this on the ThingWorx Ideas forum, please post the link here so others can vote for it.

 

Regards.

 

--Sharon

I wish I could, but the suggestion button is greyed out for me.

slangley
(To:JA_10708351)

Hi @JA_10708351.

 

You will need a PTC support login in order to post an idea. Is there someone else in your organization who can do so?

 

Regards.

 

--Sharon
