I'm wearing my creativity hat (the green one).
Just wanted to spark a discussion about the scalability of ThingWorx and handling "large" amounts of time-series data from sensors.
There is a great "Why Use InfluxDB in a Small ThingWorx Application" post by @ttielebein, but InfluxDB and PostgreSQL are only compared there in terms of ingestion performance.
So just as a starter question...
To what extent can ThingWorx and the underlying databases be used for storing historical data? Can someone provide guidelines on how many datapoints can be handled, and at what point we should downsample and/or dump "old" data?
I've recently been working on a PoC/MVP where a device with 140 sensors sampling at 1/60 Hz (one datapoint per minute) produced 201,600 datapoints per day and ~73,500,000 per year.
For 10 devices that would be ~735,000,000 datapoints per year. The sensors actually work at 1 Hz, so that would be ~44,100,000,000 datapoints per year.
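The arithmetic behind those figures can be checked with a quick back-of-the-envelope calculation (the sensor and device counts are taken from the scenario above; a 365-day year is assumed):

```python
# Back-of-the-envelope datapoint volumes for the scenario above.
SENSORS_PER_DEVICE = 140
DEVICES = 10
DAYS_PER_YEAR = 365

# At 1/60 Hz: one sample per sensor per minute.
per_day = SENSORS_PER_DEVICE * 60 * 24            # 201,600 per device per day
per_year = per_day * DAYS_PER_YEAR                # ~73.5 million per device per year

# At the real 1 Hz rate, across all 10 devices: 60x more samples.
per_year_1hz_fleet = per_year * 60 * DEVICES      # ~44.1 billion per year

print(f"{per_day:,} / day / device")
print(f"{per_year:,} / year / device")
print(f"{per_year_1hz_fleet:,} / year for the whole fleet at 1 Hz")
```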
Does anyone have experience working with such amounts of data? Is it considered a small-/medium-/high-scale project?
Of course, there are ways to reduce the volume (e.g. downsampling, or only storing the last X days), but if the system works "as is", keeping all the data would naturally be the preferred choice.
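To illustrate what downsampling buys you, here is a minimal sketch (not ThingWorx-specific; it just averages fixed-size time buckets, e.g. sixty 1 Hz samples into one per-minute value, for a 60x storage reduction):

```python
from statistics import mean

def downsample(samples, factor):
    """Average consecutive groups of `factor` samples into one value each."""
    return [mean(samples[i:i + factor]) for i in range(0, len(samples), factor)]

# One hour of 1 Hz readings (dummy values), reduced to per-minute averages.
one_second_values = [float(v % 60) for v in range(3600)]
per_minute_values = downsample(one_second_values, 60)
print(len(one_second_values), "->", len(per_minute_values))  # 3600 -> 60
```

Averaging is only one policy; min/max or last-value-per-bucket may be more appropriate depending on what questions the historical data has to answer.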
Basically, the question boils down to: "To what extent can ThingWorx act as a kind of data warehouse?"
Hi @DmitryTsarev,
In case it can help, the official PTC sizing guidelines are documented here: ThingWorx Sizing Guide.
Hi @DmitryTsarev.
If the previous response answered your question, please mark it as the Accepted Solution for the benefit of others with the same question.
Regards.
--Sharon