In my previous question in the community, I asked about the availability of DataConnect in ThingWorx 8.1. Since it is not available, or rather the analytics functionality is now native to the ThingWorx Core, I wanted to know how the data generated by the ThingWorx Core can be used in ThingWorx Analytics.
The latest PTC release notes mention an edge agent that is automatically connected and available through microservices, which is a little confusing to me. Is there a blog, video tutorial, or piece of documentation that explains how the data is sent to the ThingWorx Analytics server for model generation, or could someone outline some basic steps here with a simple example?
Also, are Data Analysis Definitions still relevant, and how do I generate the CSV files required by the Analytics server?
In short, a tutorial or set of steps on how to take data generated by the ThingWorx Core (e.g. speed) and use it to build a prediction model in ThingWorx Analytics would be highly appreciated.
Hi Tushar Yadav
For reference, here is Christophe's reply to your previous post:
The functionality provided by DataConnect is being rewritten and integrated into the platform. This should become available in a future release of ThingWorx. As of now we don't have any video or documentation covering this for 8.1.
If you need this functionality, you will need to use ThingWorx 8.0 and Analytics 8.0 with DataConnect 8.0; there is no replacement in 8.1. Alternatively, you can manually upload the CSV and JSON files to Analytics.
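For the manual route, the dataset is essentially a CSV of historical property values plus a small JSON metadata file describing each column. A minimal sketch of preparing such a pair follows; the sample values, file names, and the fieldName/dataType/opType keys in the metadata are assumptions, so check the exact schema against the Analytics documentation for your release:

```python
import csv
import json

# Hypothetical historical property values exported from ThingWorx
# (timestamp, speed, vibration, failure label) -- sample data only.
rows = [
    ("2017-10-01T00:00:00Z", 42.0, 0.11, "false"),
    ("2017-10-01T00:05:00Z", 57.5, 0.34, "true"),
]

# Write the training CSV with a header row.
with open("dataset.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "speed", "vibration", "failure"])
    writer.writerows(rows)

# Companion JSON describing each column. NOTE: the key names
# (fieldName/dataType/opType) are an assumption -- verify them against
# the dataset-upload format documented for your Analytics version.
metadata = [
    {"fieldName": "timestamp", "dataType": "STRING",  "opType": "INFORMATIONAL"},
    {"fieldName": "speed",     "dataType": "DOUBLE",  "opType": "CONTINUOUS"},
    {"fieldName": "vibration", "dataType": "DOUBLE",  "opType": "CONTINUOUS"},
    {"fieldName": "failure",   "dataType": "BOOLEAN", "opType": "CATEGORICAL"},
]
with open("dataset.json", "w") as f:
    json.dump(metadata, f, indent=2)
```

Both files can then be uploaded together through the Analytics interface.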
Do let me know in case of any questions.
Hey Mohit Goel
Thanks for the reply. I understand what Christophe meant in my previous post, but I think this post of mine was not clear enough.
What I wanted to know is: in the absence of DataConnect in ThingWorx Analytics 8.1, is there a workaround that can be employed to send data from the ThingWorx Core to Analytics Server 8.1?
Since DataConnect is not part of the ThingWorx 8.1 package, is there an approach, employed by PTC or others, through which the property values from the ThingWorx Core can be prepared so that they are ready to be fed into the Analytics server?
Also, what significance do Data Analysis Definitions now hold with respect to the Analytics Server 8.1 architecture, and is there documentation or a tutorial available that addresses these issues?
I hope the questions are clear enough now.
Please do let me know if there is still some confusion with my questions.
Looking forward to a solution.
Thanks for reaching out. We expect to release more refined transform functionality in a future release. In the meantime, there are a couple of ways to handle this.
Option 1: Export from the ThingWorx platform to a CSV file, then import it via Analytics Builder into the Analytics Server as a dataset for training.
Option 2 (if you don’t mind a bit of coding): Load the CSV file programmatically using the Dataset Creation service, which takes a URI supporting formats such as file:// and thingworx://. With file://, the path points to a location on the Analytics Server deployment's local filesystem. With thingworx://, the file must first be uploaded to the AnalyticsUploadStorage file repository in ThingWorx.
Note: for smaller amounts of data you can also use body://, where the data is passed as an InfoTable directly in the service call. This should only be used for relatively small amounts of data, since InfoTables are held in memory. A great use case for the body:// URI is predictive scoring.
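For the thingworx:// route, the CSV first has to land in the AnalyticsUploadStorage repository. Below is a rough sketch of doing that over the ThingWorx REST API, assuming file repositories are exposed at /Thingworx/FileRepositories/&lt;repo&gt;/&lt;path&gt; and authenticated with an appKey header; verify both assumptions against your deployment, and note that the host name, key, and helper names are illustrative:

```python
import urllib.request

def build_upload_request(base_url, repo, path, csv_bytes, app_key):
    """Build a PUT request that writes csv_bytes into a ThingWorx
    file repository. ASSUMPTION: repositories are reachable at
    /Thingworx/FileRepositories/<repo>/<path> with appKey auth."""
    url = f"{base_url}/Thingworx/FileRepositories/{repo}/{path}"
    req = urllib.request.Request(url, data=csv_bytes, method="PUT")
    req.add_header("appKey", app_key)
    req.add_header("Content-Type", "text/csv")
    return req

def upload_csv(base_url, repo, path, csv_bytes, app_key):
    """Send the request; returns the HTTP status code."""
    req = build_upload_request(base_url, repo, path, csv_bytes, app_key)
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Usage (illustrative host and key):
# with open("dataset.csv", "rb") as f:
#     upload_csv("https://tw-host:8443", "AnalyticsUploadStorage",
#                "datasets/dataset.csv", f.read(), "<app-key>")
#
# After the upload, point the Dataset Creation service at the file,
# e.g. thingworx://AnalyticsUploadStorage/datasets/dataset.csv
# (exact URI form assumed -- confirm in the Analytics service docs).
```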
I agree that we should create some how-to documentation, and we’ll release that shortly. Please feel free to reach out to me with any questions.
I need to provide my data scientists with a way to create predictive models using historical data stored in value streams.
The 8.2 release is around the corner. Can you tell us what is changing or improving in ThingWorx Analytics regarding this topic?
I need to know how to feed real-time and historical data to the Analytics engine so that my data scientists can detect abnormalities in current data points and also find problems across historical data point values (predictive scoring).
We are talking about data lakes, but I want to find out how and where I will need to store data that is kept for 3 months (for the purpose of regular day-to-day reporting on data collected from IoT devices) and data kept for more than 3 months for long-term storage and analysis (which can also be used for predictive scoring and batch-based ML).