
Enhancing Time series chart widget to support large data points

Currently the built-in time series chart widget supports only 2K data points. This is not sufficient for our requirements: most of the sensor data coming into the platform arrives at a frequency of 1 Hz (one data point per second), so if we can show only 2,000 data points on a chart, we can display at most about 30 minutes of data at a time. But we have the following use cases:

  • Users need to monitor data across a process, and some of these processes run for hours; 2,000 data points won't cover them.
  • For facility data, users need to see the last 7 days of data on a mashup. Even if we downsample to one data point per minute, 2,000 points would cover only about 1.4 days.
  • We also have sensors with a frequency of 10 Hz; at that rate, 2,000 points cover barely 3 minutes of data on the chart, which is far too little.

We therefore request an enhancement ticket for this issue: the time series chart should support at least 20K data points to meet all our requirements.

7 Comments
ATAT
5-Regular Member
+1 This is a good idea. We are also facing a similar issue. Would love to see time series charts capable of supporting more data points.
cbaldwin
13-Aquamarine

We can likely increase the 2K limit. A question, though: what are your users looking for in that amount of data?

 

Do you need to see the raw 20K points?  Or do you just need to see the mins, maxs, and maybe "important" points on the graph?  Are you using the charts for a rough visual representation of activity over that time period, or relying on the human eye to make decisions on 20K items?  Could other strategies like TWX Alerting be used to tell people when there are deviations in the data that they need to act on, rather than relying on visual chart inspection?

 

Generally speaking, when displaying data at that frequency in any software, you run into performance challenges getting the data from server to browser, performance challenges in the browser processing and visualizing that data in a timely fashion, and then an inability to distinguish important data because the trend becomes a smudged art project.  Other software products offer different sampling modes or features to handle this amount of data without actually transmitting and displaying 20K points.  Highstock uses data grouping, for example; other solutions use interpolation, downsampling, trend modes, etc. to let humans make decisions without truly dealing with the large data set.

cbaldwin
13-Aquamarine
Status changed to: Delivered

The latest charts released in 9.0 have much better handling of larger data sets.  Please check them out!

iguerra
14-Alexandrite

It's not useful for me to have more than 2K points on a chart, considering that an FHD screen is 1920 pixels wide.

You should implement a downsampler.

It needs these inputs:

- source data (INFOTABLE)

- start_time

- end_time

- number_of_points (a good value is close to the x-size of the chart, in pixels)

It should return at most number_of_points records, all equally spaced in time between start_time and end_time (this is a basic downsample).

As for the discarded points: I just discard them, but they could be averaged instead.
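The basic downsampler described above could be sketched roughly like this (assuming a plain list of timestamped records rather than an actual ThingWorx INFOTABLE, and using the parameter names proposed here as hypothetical names):

```python
from datetime import datetime, timedelta

def downsample(rows, start_time, end_time, number_of_points):
    """Split [start_time, end_time] into number_of_points equal time buckets
    and keep the first record in each bucket; the rest of each bucket is
    discarded (they could be averaged instead)."""
    span = (end_time - start_time) / number_of_points
    buckets = [None] * number_of_points
    for row in rows:
        ts = row["timestamp"]
        if ts < start_time or ts > end_time:
            continue  # outside the requested range
        idx = min(int((ts - start_time) / span), number_of_points - 1)
        if buckets[idx] is None:
            buckets[idx] = row
    return [b for b in buckets if b is not None]

# Zooming just means calling the same service again with a narrower range:
#   downsample(rows, zoom_start, zoom_end, 2000)
```

Because the output size is bounded by number_of_points regardless of the input range, re-querying on zoom always fills the chart at the maximum detail the screen can show.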

 

This way, if you need to zoom in on the chart, you can call the downsampler service again and it will return another 2K points, all within the zoomed time range, so the screen always shows the maximum detail possible.

Moreover, this way you will not transfer thousands of useless records to the web browser (where the widget runs), just those that can fill the chart with the maximum possible detail.

 

I have to say that, IMHO, this should be a basic feature of ThingWorx, in particular when working with logged property history.

Doing this with a service (I do it!) is not very fast, and it can be tricky or impossible over a big time range, because you first need to read ALL the records and then downsample them, which, as you can understand, is not a good approach.
It should be done at a lower, more optimized level.

ShlomitRigler
7-Bedrock

Hi, 

 

Most of the new charts have a sampling mechanism built in. In general, the main principles are: 

  • The chart decides how many data points it wants to display, depending on the width of the x-axis. Naturally, a wide chart gets more points than a narrow one (this is not configurable today).
  • If the chart has more points than the current width allows, it downsamples. It does this by grouping the data points into same-sized chunks and taking the same number of points from each chunk.
  • In some chart types, the sampling does not use outlier detection and the sampled points are taken uniformly.
  • In other charts, the sampling uses outlier detection and the sampler makes sure that the lowest and highest data points in each chunk are included.
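The chunked min/max variant described in the last bullet could be sketched like this (a generic illustration of the technique, not the actual widget code):

```python
def minmax_sample(values, max_points):
    """Downsample by splitting values into chunks and keeping each chunk's
    minimum and maximum, so spikes/outliers survive the sampling."""
    if len(values) <= max_points:
        return list(values)
    n_chunks = max(max_points // 2, 1)  # up to two points kept per chunk
    step = len(values) / n_chunks
    out = []
    for c in range(n_chunks):
        seg = values[int(c * step):int((c + 1) * step)]
        lo = min(range(len(seg)), key=seg.__getitem__)  # index of chunk minimum
        hi = max(range(len(seg)), key=seg.__getitem__)  # index of chunk maximum
        for i in sorted({lo, hi}):  # emit in original time order
            out.append(seg[i])
    return out
```

A uniform sampler taking every Nth point would likely drop a single one-sample spike; the min/max sampler is guaranteed to keep it, which is exactly the outlier-preserving behavior described above.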

In the future, we will consider enhancing the sampling by making it configurable and adding a property to control the maximum number of data points. 

 

Regards, 

Shlomit

iguerra
14-Alexandrite

Yes, it makes sense to always downsample at the chart level; this means, however, that a lot of data (all the points) is transferred from backend to frontend, and it is the frontend widget that downsamples if needed.
But doing this also at the backend (and possibly at the stream/value stream level) would transfer much less data to the widget, so everything runs faster; it is more optimized, IMHO.

I currently have ThingWorx 8.5, so I can't test the latest charts in version 9.0.


olivierlp
Community Manager
Status changed to: Under Consideration

Good suggestion.

A key area of consideration is the amount of data we send to clients: 8 tags with 20K data points each would mean 160K data points being pulled.

Currently we do not have OPC HDA-like queries against the value streams, so we get back the individual points.

We are looking to improve this (i.e., letting you choose your result mode: interpolated, ...).

We chose the 2K-point limit because you can have 8 properties being viewed, for a total of 16K points; with 2K pixels being drawn, that fits most resolutions.

Having a more responsive chart is a key consideration going forward (i.e., the chart could query for more or less data depending on its zoom level).

But there are layers to getting this to work.