I am trying prescriptive analytics on ThingWorx with Analytics Manager.
The data set I am using is an example data set with Temperature and Vibration values.
I selected the values for the levers and was able to generate the above prescriptive scores.
However, I am confused by the results. The goal field I selected was Goal, and it has three values: good, normal, and bad.
1. How does the model understand the string values ('good', 'normal', and 'bad')?
2. What do the score values mean?
E.g., what is the meaning of 0.22221755904502294? This value seems low; does that mean the result is not optimized?
Hello @hujingrui ,
Have you seen our Help Center topic on Prescriptive Scoring?
The originalScore and optimizedScore values indicate how close your results are to the best-fit results. You can take the difference of the two scores to see how close you are to an optimal model.
originalScore - the score produced by the existing model with the lever attributes unchanged.
optimizedScore - the score after the optimal values are applied to the lever attributes.
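To make the comparison concrete, here is a minimal sketch (plain Python, not ThingWorx code): the field names `originalScore` and `optimizedScore` match the result fields described above, while the `improvement` helper and the sample row are hypothetical.

```python
# Hypothetical sketch: comparing the two scores from a prescriptive
# scoring result row. Not part of the ThingWorx Analytics API.

def improvement(row):
    """Difference between optimizedScore and originalScore.

    A value near 0 means the unaltered levers were already close to
    optimal; a larger gap means the suggested lever values improve
    the predicted goal noticeably.
    """
    return row["optimizedScore"] - row["originalScore"]

# Example row with made-up scores:
row = {"originalScore": 0.22221755904502294, "optimizedScore": 0.61}
print(improvement(row))  # the gap between the two scores
```

Note that a low original score by itself does not mean the model failed; it is the gap between the two scores that tells you how much room the levers left for improvement.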
Please let me know if this answers your question.
Yes, I have seen it. However, I am unsure about the meaning of the optimized score. Say I train with two readings, temperature and vibration; do they have to be:
1. a sequence (time series) plus one Goal?
2. individual values (one temperature + one vibration) plus one Goal?
For the goal, it has to be ordinal and not categorical, right? And the ordinal goal has to support the notion of 'more is better' or 'less is better', right?
We are working on providing a more detailed response for you; it should be posted shortly.
After speaking internally, here is some additional feedback:
You are correct that the Goal has to be Ordinal; running it as Categorical would throw an error.
As an Ordinal, the values Bad, Normal, and Good have an inherent order, so "more is better" is only part of it. Generally speaking, prescriptive scoring offers options to 'Maximize' or 'Minimize' the goal variable. TWXA can figure out how to alter the levers to produce a goal ('Optimized Score') that is higher or lower than the prediction that would have resulted from unaltered levers ('Original Score').
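The idea above can be sketched in a few lines of plain Python. This is a hypothetical illustration, not the actual TWXA algorithm: the string goal values are mapped onto an ordered scale, and a simple search over lever combinations finds the setting with the best predicted goal.

```python
# Hypothetical sketch of ordinal goals and lever optimization.
# Not the actual ThingWorx Analytics implementation.
from itertools import product

# Ordinal encoding: the strings carry an inherent order.
GOAL_ORDER = {"bad": 0, "normal": 1, "good": 2}

def predict(temperature, vibration):
    """Stand-in predictive model returning a goal score in [0, 2].
    A real model would be trained from the dataset; this toy rule
    just says cooler and less vibration -> better goal."""
    return max(0.0, 2.0 - temperature / 50 - vibration / 5)

def prescribe(temp_levels, vib_levels, maximize=True):
    """Search lever combinations for the best predicted goal
    ('Maximize' or 'Minimize')."""
    choose = max if maximize else min
    return choose(product(temp_levels, vib_levels),
                  key=lambda levers: predict(*levers))

def label(score_value):
    """Map a numeric goal score back to the nearest ordinal label."""
    return min(GOAL_ORDER, key=lambda k: abs(GOAL_ORDER[k] - score_value))

original = predict(80, 4)                        # unaltered levers ('Original Score')
best_levers = prescribe([40, 60, 80], [1, 2, 4])  # candidate lever values (made up)
optimized = predict(*best_levers)                # altered levers ('Optimized Score')
```

In this sketch, the optimized prediction is at least as good as the original one by construction, which mirrors how the prescriptive step searches lever values rather than retraining the model.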
I see that you are using the Time Series Vibration data set, but not as a time series dataset for your prescriptive scoring; that would also throw an error, since the process cannot be performed on a time series model.
The results you are seeing are likely due to a poor model that gave no weight to the Temp and Vibration fields, which are your levers. This would also explain why all the original scores for the model are identical.
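A quick way to see why such a model yields identical scores: if the learned weights on the lever fields are zero, changing those fields cannot change the prediction, so every original score collapses to the same constant. The sketch below is purely illustrative; the weights, bias, and sample rows are all made up.

```python
# Hypothetical linear model whose lever weights are zero,
# illustrating why all original scores come out identical.

WEIGHTS = {"Temperature": 0.0, "Vibration": 0.0}  # poor model: no weight on levers
BIAS = 0.2222  # the constant every prediction collapses to (made-up value)

def score(row):
    """Prediction = bias + weighted sum of the lever fields."""
    return BIAS + sum(WEIGHTS[f] * row[f] for f in WEIGHTS)

rows = [{"Temperature": t, "Vibration": v} for t, v in [(20, 1), (90, 5), (55, 3)]]
scores = [score(r) for r in rows]
# All entries in `scores` are identical because the levers carry zero weight.
```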
Please let me know if you have any additional questions.
What kind of algorithm is used behind ThingWorx Analytics? Does it support LSTM (RNN)? Those may be needed for time series.
According to the reply, does that mean it could work if, for training, I use:
a time series dataset where each row is a sequence of vibration and temperature values plus an expected goal?
For scoring, I would use a sequence of the same length; can I then expect a sequence of optimized levers in return?
We typically do not share the mechanics, code, or specific algorithms behind how ThingWorx Analytics works.
If you have questions around how to work with Time Series Data in ThingWorx Analytics, we have a knowledge article available for you here: Using Time Series Data in ThingWorx Analytics
Some of your questions appear to be about solution development rather than Technical Support / general-use topics. Would you like to be put in touch with the Analytics Professional Services consultants to work with you on operationalizing ThingWorx Analytics?