Enhancing Time series chart widget to support large data points


Currently, the built-in time series chart supports only 2K data points, which is not sufficient for our requirements. Most of the sensor data coming into the platform arrives at 1 Hz (one data point per second), so if we can show only 2,000 data points on a chart, we can display at most about 30 minutes of data at a time. But we have the following use cases:

  • Users need to monitor data across a process, and some of these processes run for hours; 2,000 data points won't meet that requirement.
  • For facility data, users need to see the last 7 days of data on the mashup. Even if we downsample to one data point per minute, we could show at most about 1.5 days of data.
  • We also have sensors with a frequency of 10 Hz; at the current limit we could plot only about 3 minutes of their data on the chart, which is far too little.
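The durations in these use cases follow directly from the 2K limit; a quick sanity check in plain Python (the rates and limit are the ones stated in the post):

```python
# Sanity check: how much time does a 2,000-point chart cover at each sample rate?
POINT_LIMIT = 2000

def coverage_minutes(points: int, hz: float) -> float:
    """Minutes of data covered by `points` samples arriving at `hz` samples/second."""
    return points / hz / 60

print(coverage_minutes(POINT_LIMIT, 1.0))               # 1 Hz sensor: ~33 min (the post rounds to 30)
print(coverage_minutes(POINT_LIMIT, 1 / 60) / 60 / 24)  # 1 point/min: ~1.4 days
print(coverage_minutes(POINT_LIMIT, 10.0))              # 10 Hz sensor: ~3.3 min
```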

We therefore request an enhancement ticket for this issue: the time series chart should support at least 20K data points to meet all our requirements.

Regular Member
+1 This is a good idea. We are also facing a similar issue. Would love to see time series charts capable of supporting more data points.

We can likely increase the 2K limit. A question, though: what are your users looking for in that amount of data?


Do you need to see the raw 20K points?  Or do you just need to see the mins, maxs, and maybe "important" points on the graph?  Are you using the charts for a rough visual representation of activity over that time period, or relying on the human eye to make decisions on 20K items?  Could other strategies like TWX Alerting be used to tell people when there are deviations in the data that they need to act on, rather than relying on visual chart inspection?


Generally speaking, when displaying that frequency of data in any software, you run into performance challenges getting the data from server to browser, performance challenges in the browser processing and visualizing that data in a timely fashion, and then an inability to distinguish important data because the trend becomes a smudged art project.  Other software products offer different sampling modes or features to handle this amount of data without actually transmitting/displaying 20K points: Highstock uses data grouping, for example, while other solutions use interpolation, downsampling, trend modes, etc. to let humans make decisions without truly dealing with the large data set.

Status changed to: Delivered

The latest charts released in 9.0 have much better handling of larger data sets.  Please check them out!


It's not useful for me to have more than 2K points on a chart, considering that an FHD screen is 1920 pixels wide.

You should implement a downsampler.

It needs as inputs:

- source data (INFOTABLE)

- start-time

- end-time

- number_of_points (a good value is close to the x-size of the chart, in pixels)


It should return at most number_of_points records, all equally spaced in time between start-time and end-time (this is a basic downsample).

As for the discarded points: I simply drop them, but they could be averaged instead.

This way, if you need to zoom in on the chart, you can call the downsampler service again and it will return another 2K points, all within the zoomed time range, so the screen always shows the maximum detail possible.

Moreover, this way you don't transfer thousands of useless records to the web browser (where the widget runs), only those that can fill the chart with the maximum detail possible.
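The basic downsampler described above can be sketched in plain Python rather than a ThingWorx service (the INFOTABLE is replaced by a sorted list of (timestamp, value) tuples; all names here are illustrative, not ThingWorx APIs):

```python
def downsample(rows, start_time, end_time, number_of_points):
    """Keep at most `number_of_points` rows between start_time and end_time,
    one per equally sized time slot. Surplus rows in a slot are simply
    discarded, as in the post (they could be averaged instead).
    `rows` is a list of (timestamp, value) tuples sorted by timestamp;
    timestamps are plain numbers, e.g. epoch seconds."""
    slot_width = (end_time - start_time) / number_of_points
    kept, last_slot = [], -1
    for t, v in rows:
        if t < start_time or t > end_time:
            continue
        slot = min(int((t - start_time) / slot_width), number_of_points - 1)
        if slot != last_slot:        # first row of each slot wins; the rest are dropped
            kept.append((t, v))
            last_slot = slot
    return kept
```

Zooming is then just another call with a narrower window, e.g. `downsample(rows, zoom_start, zoom_end, 2000)`, so the browser only ever receives as many points as it can actually draw.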


I have to say that, IMHO, this should be a basic feature of ThingWorx, in particular when working with logged property history.

Doing this with a service (I do!) is not very fast, and it may be tricky or impossible over a big time range, because you first need to read ALL the records and only then downsample them; as you can understand, that is not a good approach.
It should be done at a lower, more optimized level.




Most of the new charts have a sampling mechanism built in. In general, the main principles are: 

  • The chart decides how many data points it wants to display, depending on the width of the x-axis. Obviously, a wide chart gets more points than a narrow one (this is not configurable today).
  • If the chart has more points than the current width allows, it samples the data down. It does this by grouping the data points into same-sized chunks and taking the same number of points from each chunk.
  • Depending on the chart type, in some charts the sampling doesn't use outlier detection and the sampled points are taken uniformly.
  • In other charts, the sampling uses outlier detection, and the sampler makes sure that the lowest and highest data points in each chunk are included.
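The outlier-preserving variant in the last bullet can be sketched like this (illustrative Python under the assumption of per-chunk min/max selection; the actual chart internals are not published):

```python
def minmax_sample(points, chunks):
    """Group `points` (a list of (x, y) tuples sorted by x) into `chunks`
    same-sized chunks and keep the lowest and highest y in each chunk,
    so short spikes survive the downsampling.
    Returns at most 2 * chunks points."""
    if len(points) <= chunks:
        return list(points)
    size = len(points) / chunks
    out = []
    for i in range(chunks):
        chunk = points[int(i * size):int((i + 1) * size)]
        lo = min(chunk, key=lambda p: p[1])
        hi = max(chunk, key=lambda p: p[1])
        out.extend(sorted({lo, hi}, key=lambda p: p[0]))  # set dedups when lo == hi
    return out
```

Uniform sampling, by contrast, would just take every k-th point from each chunk, which is cheaper but can erase a one-sample spike entirely.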

In the future, we will consider enhancing the sampling by making it configurable and adding a property to control the maximum number of data points.





Yes, it makes sense to always downsample at the chart level, but this means a lot of data (all the points) is still transferred from the backend to the frontend, and the frontend widget downsamples only if needed.
Doing this at the backend as well (and possibly at the stream/value stream level) would result in much less data transferred to the widget, so everything runs faster; it is more optimized, IMHO.

I am currently on ThingWorx 8.5, so I can't test the latest charts in version 9.0.



Community Manager
Status changed to: Under Consideration

Good suggestion.

A key area of consideration is the amount of data we send to the clients: 8 tags with 20K data points each would be 160K data points being pulled.

Currently, we do not have OPC HA-style queries against the value streams, so we get back the individual points.

We are looking to improve this (i.e., letting you choose your result mode: interpolated, ...).

We chose the 2K-point limit because you can have 8 properties being viewed, for a total of 16K points; with 2K pixels being drawn, that fits most resolutions.

Having a more responsive chart is a key consideration going forward (i.e., the chart could query for more or less data depending on its zoom level).

But there are layers to getting this to work.

Status changed to: Current Functionality


Thank you for your idea.

We recommend using the new line chart:

The new ptcs-charts have no fixed maximum number of points; we are of course limited by 64-bit integers, and it also depends on available RAM and CPU speed. However, when the distance between x-values is less than one pixel, it is typically of no value to show more points. We have built-in sampling for such cases where not all data points can be shown, and the user can then use "zoom" controls to zoom in (if the ThingWorx developer configures them to be shown to the end user, using the chart properties). In the upcoming ThingWorx 9.4 release, planned for June 2023, we added a new property that allows the developer to define the sampling size, i.e. the maximum number of data points to display when visualizing large data sets on the chart.


Let us know if this helps with your idea.




Director, ThingWorx Product Management


I didn't quite understand where you'll put the new property in ThingWorx 9.4; probably on the widget.


The best option for me, however, would be to have this sampling-size property on the QueryPropertyHistory functions, so the output data is already reduced/downsampled.

QueryPropertyHistory should return the points to be shown on the widget, and no more. If I need to zoom, I'll call QueryPropertyHistory again on a smaller time range.


Take as an example the need to plot 2 months of time series data with a point every 10 seconds.

That would be a huge quantity of data, slow to load and prone to memory problems if you try to load it all and bind it to a widget (even if the widget downsamples).

So it should be downsampled at the database query level.

Downsampling at the widget level can be useless with millions of input points, because a "problem" may occur before the data ever reaches the widget.


InfluxDB does this downsampling natively (aggregate parameters), and plotting one year of data from a 1-second time series is remarkably fast, because the downsampling is done at the DB level.
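The DB-level aggregation being described (e.g. an InfluxDB `GROUP BY time(...)` query with a mean aggregate) amounts to a time-bucketed average. A minimal sketch in Python, not InfluxDB's actual implementation:

```python
def bucket_mean(rows, bucket_seconds):
    """Collapse (timestamp, value) rows into fixed-width time buckets and
    return one (bucket_start, mean_value) pair per non-empty bucket,
    mimicking what an aggregate query pushes down to the database.
    Timestamps are plain numbers, e.g. epoch seconds."""
    acc = {}
    for t, v in rows:
        bucket = int(t // bucket_seconds) * bucket_seconds
        total, count = acc.get(bucket, (0.0, 0))
        acc[bucket] = (total + v, count + 1)
    return [(b, total / count) for b, (total, count) in sorted(acc.items())]
```

A year of 1-second data (about 31.5 million rows) collapses to roughly 8,760 rows with one-hour buckets, which is why pushing the aggregation into the database stays fast while shipping raw points does not.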





Thanks for the feedback, and we agree: the data needs to be sent to the widget AFTER downsampling. We plan to provide additional capabilities leveraging InfluxDB to downsample the query results by exposing this within the QueryPropertyHistory function. That said, we do have sampling on the web-component-based charts such as the line chart, combo chart (from ThingWorx v9.4), Pareto, and waterfall.

Sampling shows the trend of the data, but obviously, the chart will be more accurate if the data is narrowed before it gets to the chart.



Ayush Tiwari

Director Product Management, ThingWorx