
IoT Tips

Modbus is a commonly used communications protocol that allows data transfer between computers and PLCs. This is intended to be a simple guide on setting up and using a Modbus PLC Simulator with ThingWorx. ThingWorx provides Modbus packages for Windows, Linux and Linux ARM. The Modbus package contains libraries and Lua files intended to be used along with the Edge MicroServer (EMS). Note: the Modbus package is not intended as an out-of-the-box solution.

Requirements:
- ThingWorx Platform
- Edge MicroServer
- Modbus Package
- Modbus PLC Simulator

In this guide, a free Modbus PLC Simulator is used. Here is the direct download link for their v8.20 binary release.

Configuring the EMS:
The first step is to configure the EMS as a gateway. This is done by adding an auto_bind section to the config.json:

"auto_bind": [ {
    "name": "ModbusGateway",
    "gateway": true
}]

This creates an ephemeral Thing that only exists while the EMS is running. The next step is to modify the config.lua to include the Modbus configuration. Copy the contents of the etc folder of the Modbus package over to the etc folder of the EMS. A sample config_modbus.lua is provided in the Modbus package as a reference. The following code defines a Thing called MyPLC (which is a Remote Thing created on the Platform):

scripts.MyPLC = {
    file = "thing.lua",
    template = "modbusExample",
    identifier = "plc",
    updateRate = 2000
}
scripts.Thingworx = {
    file = "thingworx.lua"
}
scripts.modbus_handler = {
    file = "modbus_handler.lua",
    name = "modbus_handler",
    host = "localhost"
}

Setting template = "modbusExample" in the above script makes the EMS use the template of the same name located at /etc/custom/templates/. 'modbusExample' is a reference point for creating a script to add the registers of the PLC, and the given template has examples for different base types. The different types of available registers are noted and referenced in the modbus.lua file available under /etc/thingworx/lua/.

Setting up the PLC Simulator:
Extract mod_RSsim to a folder and run the executable. Since we are 'simulating' a PLC connection, set the protocol to Modbus TCP/IP. Change the I/O to Holding Registers (or any other relevant option), with the Address set to Dec. In the Simulation menu, select 'No animation' if you want to enter values manually, or use 'Increment BYTES' to automatically generate values. This PLC Simulator will run on port 502.

The Connection:
With the EMS & luaScriptResource running, the PLC Simulator should have a connection to the platform with activity on the received/sent section. Now if you open the Remote Thing 'MyPLC' in the platform, the isConnected property (under the Properties section) should be true. (If not, go back to General Information, click on Browse in the Identifier section and select 'plc'.) Go back to the Properties section and click on Manage Bindings. Click on the Remote tab and the list of defined properties should appear. For example, the following line from the modbusExample.lua:

properties.Int16HoldRegExample = {key="holding_register/1/40001?format=Int16", handler="modbus_handler", basetype="NUMBER"}

denotes a property named Int16HoldRegExample at register 40001. The value at address 40001 in the PLC Simulator should correspond with the value on the platform once this property is added and the Thing saved.

If you are running into any errors when connecting with a Raspberry Pi, please take a look at Duan Gauche's follow-up guide - Using your Raspberry Pi with the Edge Microserver and Modbus.
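As a closing illustration, here is how further registers might be mapped in the template's properties table. Only the first line is taken from the post above; the other two entries are assumptions for illustration only - the exact key syntax and supported format strings for each register type must be verified against modbus.lua under /etc/thingworx/lua/:

properties.Int16HoldRegExample = {key="holding_register/1/40001?format=Int16", handler="modbus_handler", basetype="NUMBER"}
-- hypothetical: a 32-bit value spanning two consecutive holding registers
properties.Int32HoldRegExample = {key="holding_register/2/40003?format=Int32", handler="modbus_handler", basetype="NUMBER"}
-- hypothetical: a single coil read as a boolean
properties.CoilExample = {key="coil/1/00001", handler="modbus_handler", basetype="BOOLEAN"}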
Hi, everyone!

We're actively working towards the ThingWorx 9.0 release and we're ready to provide a sneak peek into the biggest feature of 9.0: Active-Active Clustering for High Availability configuration.

You may be wondering: doesn't ThingWorx already offer High Availability?

Yes, ThingWorx already supports a High Availability configuration. Previous versions of ThingWorx, such as ThingWorx 8.X version releases, support Active-Passive configuration, where one "active" ThingWorx server performs all processing and maintains the live connections to other systems such as databases and connected assets. Meanwhile, in parallel, there is a second "passive" ThingWorx server that is a mirror image and regularly updated with data, but does not maintain active connections to any of the other systems. If the "active" ThingWorx server fails, the "passive" ThingWorx server is made the primary server, but this can take a few minutes to establish connections to the other systems.

So, how is ThingWorx Active-Active different?

Active-Active configuration differs from Active-Passive in that all the ThingWorx servers in the cluster are "active." Not only is data mirrored across all ThingWorx servers, but all of the servers, instead of only one, maintain live connections with the other systems. This way, if any of the ThingWorx servers fail, the other ThingWorx servers take over instantaneously with no recovery time.

Since all ThingWorx servers are active, they process in parallel and, as a result, the cluster can process more data than a single server or a cluster with an Active-Passive configuration. Simply put, multiple servers working together outperform a single server. This allows customers to scale their deployment by simply adding more ThingWorx servers to the cluster (horizontal scaling), which avoids the scaling limits you hit when the only option is increasing the performance of the server itself.

What does that mean for me?
- Higher Availability: you can avoid single points of failure and configure the ThingWorx Foundation platform in an Active-Active cluster mode to achieve the highest availability for your IIoT systems and applications.
- Increased Scalability: now you can horizontally scale from one to many ThingWorx servers to easily manage large amounts of your IIoT data at scale more smoothly than ever before.

Stay tuned! We'll be posting more information on Active-Active Clustering - how it's achieved in ThingWorx, architectural component overviews, and what it means for your ThingWorx deployment!

In the meantime, we're running the Active-Active Clustering Beta Program. Interested in participating? Reach out to Ryan Servais (rservais@ptc.com) or Ayush Tiwari (atiwari@ptc.com) to learn more about participation!

Stay connected!
Kaya
ThingWorx Performance Monitoring with Grafana, authored by EDC team member Desheng Xu (@xudesheng)

Monitoring ThingWorx performance is crucially important, both during the load testing of a newly completed application and after the deployment of new code in an existing application. Monitoring performance ensures that everything works as expected at the Enterprise level. This tutorial steps you through configuring and installing a tool which runs on the same network as the ThingWorx instance. This tool collects data from the Platform and translates it into something visual and easy to understand via Grafana.

tsample is small and customizable, and it plays a similar role to telegraf. Its focus is on gathering ThingWorx performance metrics. Historically, this tool also supported collecting OS-level performance metrics, but this is no longer supported. It is highly recommended to collect OS-level performance metrics by using telegraf, a tool designed specifically for that purpose (and not discussed here). This is not the only way to monitor ThingWorx performance, but this tool uses a very good approach that has been proven effective both at customer sites and internally by PTC to monitor scale tests.

Find the most recent release here.

Recommended Deployment Architecture
tsample can be deployed on the same box where ThingWorx Tomcat is running, but it's recommended to deploy it on a separate box to minimize any performance impact caused by the collector. tsample supports export to InfluxDB and/or a local file. In this document, it is assumed that InfluxDB will be used for monitoring purposes. Please note that this is not the same instance of InfluxDB being used by ThingWorx (if configured). This article will not cover setting up InfluxDB or NGINX (if necessary), so please configure these before beginning this tutorial.

Supported Platforms
tsample has been tested on Windows 2016, MacOS 10.15, Ubuntu 16.04, and Redhat 7.x. It's anticipated to work on more general Ubuntu/Redhat/Mac/Windows releases as well. Please leave a comment or contact the author, @xudesheng, if Raspberry Pi support is needed.

Configuration File
Where to Store the Configuration File
tsample will pick up the configuration file in the following sequence:

1. From the command line:

./tsample -c <path to configuration file>

2. From the environment:

Linux:
export TSAMPLE_CONFIG=<path to configuration file>
./tsample

Windows:
set TSAMPLE_CONFIG=<path to configuration file>
tsample.exe

3. From a default location: tsample will try to find a file with the name "config.toml" in the same folder in which it starts.

How to Craft a Configuration File
You can use the following command to generate a sample file:

./tsample -c config.toml -e

or:

./tsample -c config.toml --export

A file with the name "config.toml" will be generated with a sample configuration. You can then adjust its content in accordance with the following.

Configuration File Content
Format
The configuration file must be in TOML format.

title and owner sections
Both sections are optional. The intention of these two sections is to support a documentation tool in the future.

TestMachine section
This section is required, and it defines where this tool will run.

thingworx_servers section
This section is where you define the targeted ThingWorx applications. Multiple ThingWorx servers can be defined, with the same or different metrics to be collected for each.
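For orientation, here is a minimal illustrative sketch of what such a configuration can look like; the metrics sub-section is described next. The generated sample file is the authoritative reference - apart from sampling_cycle_inseconds (discussed in Step 10 below), the key names and section layout here are assumptions:

# illustrative excerpt only; generate the real template with ./tsample -c config.toml -e
[testmachine]
sampling_cycle_inseconds = 30   # 30s is the interval recommended later in this article

[[thingworx_servers]]
host = "twx.example.com"        # hypothetical server address
port = 443
app_key = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

[[thingworx_servers.metrics]]
name = "ValueStreamProcessingSubsystem"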
thingworx_servers.metrics sections
Underneath each thingworx_servers section, there are several metrics. In the default example, the following metrics are included:
- ValueStreamProcessingSubsystem
- DataTableProcessingSubsystem
- EventProcessingSubsystem
- PlatformSubsystem
- StreamProcessingSubsystem
- WSCommunicationsSubsystem
- WSExecutionProcessingSubsystem
- TunnelSubsystem
- AlertProcessingSubsystem
- FederationSubsystem

You can add your own customized metrics, as long as the result follows the same Data Shape. The default Data Shape has 3 columns; if the output Data Shape exceeds this limit, the tool will likely not work properly.

result_export_to_db section
This section defines the target InfluxDB as a sink for collected performance metrics.

result_export_to_file section
This section defines the target file storage for collected performance metrics.

Grafana Configuration Example
Monitor Value Stream

Step 1. Connect Grafana to InfluxDB

Step 2. Create a New Dashboard

Step 3. Create a New Query
Depending on which metrics you defined to collect in the tsample configuration file, you will see a different choice of measurements in Grafana. Here, we will use ValueStreamProcessingSubsystem as an example.

Step 4. Choose the Right Platform and Storage Provider
Some metrics depend on the database storage provider, like Value Stream and Stream.

Step 5. Choose the Metrics Figures
Select "remove" to get rid of the default 'mean' calculation. Select "non_negative_difference" from Transformations; using this transformation, Grafana can show us the speed of writes. Then remove the default GROUP BY "time" clause and assign a meaningful alias to this query.

Step 6. Add Another Query
You can add another query, such as 'Value Stream Queued Speed', by following the same steps.

Step 7. Assign a Panel Title

Step 8. Review the Result
Go back to the dashboard page and select "last 15 minutes" or "last 5 minutes" from the top right corner. It should show a result similar to the chart below.

Step 9. Save the Dashboard
Don't forget to save your dashboard before adding more panels.

Step 10. Refine the Panel
It's difficult to figure out the high-level write speed from the above panel, so let's enhance it. Add a new query with the following configuration. In this query, there are two additional figures: 20s and 1m. How do you choose? The 20s should be the same as sampling_cycle_inseconds in your tsample configuration file; if you choose a different value, you could end up with misleading results. Larger values such as "1m" may give you a smoother result, but they could also hide system instability. Going larger than 1m is not recommended in most cases. With this new query, it's much easier to figure out the average write speed in the current test.

Tip: if your sampling_cycle_inseconds is 30s, then you may not need this additional query. At a 30s interval time, an additional average query is not needed to get a smooth write speed, whereas at a 10s interval time you may not get a meaningful understanding of the write speed without one. Based on these examples, it's recommended to configure the sampling interval at 30s, or anything larger than 20s, and then decide whether you need additional queries based on the visualization result.

Step 11. Further Refinement
The above charts illustrate the queuing and writing speed.
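For reference, the point-and-click queries built in Steps 5 and 10 correspond roughly to InfluxQL like the following. The measurement name matches the metric name as described above, but the field name ("totalWritesPerformed") is an assumption for illustration - pick the actual field from the dropdown in your own data:

-- Step 5 equivalent: write speed per sample, no GROUP BY time
SELECT non_negative_difference("totalWritesPerformed") FROM "ValueStreamProcessingSubsystem" WHERE $timeFilter

-- Step 10 style smoothing: aggregate first, then difference (20s = sampling_cycle_inseconds)
SELECT non_negative_difference(mean("totalWritesPerformed")) FROM "ValueStreamProcessingSubsystem" WHERE $timeFilter GROUP BY time(20s)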
However, it is possible that the Value Stream performs at a reasonable speed while the Value Stream queue is growing and could exceed its capacity. Let's add another query to monitor this. This chart is difficult to read at first, since it has a different value range on the y-axis; moving this query to a second y-axis on the right makes the view much easier to read. The current queue size or remaining queue size will always move up and down; it is healthy as long as it does not continue to grow to a high level.

What Else Can Be Monitored?
The following metrics would be monitored by most customers:
- Value Stream write and queue speed
- Value Stream queue size
- Stream write and queue speed
- Stream queue size
- Event performed speed (completedTaskCount)
- Event submitted speed (submittedTaskCount)
- Event queue size
- Websocket communication
- Websocket connection

ThingWorx Memory Usage Monitoring
Create a new panel and add a new query. In a running system, memory usage will always move up and down - at times sharply or quickly - when the system is busy. The system is healthy as long as memory doesn't go up continuously or stay at a maximum for a long period of time.

Conclusion
Setting up monitoring is absolutely crucial to managing the performance of an enterprise ThingWorx application. Using Grafana makes tracking and visualizing the performance much easier. Stay tuned to the EDC tag for more monitoring tips to come!
This might be a well-known topic for some, but I recently had a need that Event Routers fit into perfectly and wanted to share. If you have some neat applications for Event Routers on mashups, feel free to reply!

What?
Event Routers are a function on Mashups that let you connect multiple inputs to a single output. For my use case, this was extremely helpful to let me have two different Service Outputs go to the same Widget. They are a really simple tool that can save a lot of headache.

How?
Event Routers work by funneling the latest data through to a single output. This is particularly useful for user-activated actions with the output tied to a widget or another service. The Event Router automatically activates when any one of the Inputs changes.

Example
I have two services that generate HTML from different sources, but I want to display just the latest one that the user had activated in a single HTML Text Area widget. The two different services are activated with two different buttons. But how do I show these two outputs in a single widget? Create an Event Router with two HTML inputs!

Now I just tie each service output to the Inputs and tie the Output to the HTML Text Area Text (note: the icon for Input2 is incorrect - it should be HTML as well; this system is running 8.5.1, perhaps it's an issue in that release).

Now when the user clicks on either button, the correct service's HTML is sent to the HTML Text Area. Ta-da!

P.S. I noticed in some older posts that Event Routers used to be a widget or extension that came and went. Now (8.5+) it is baked into the Functions on the far right side of Mashup Builder.
When using Value Streams to log historical data, there's a service to purge the Value Stream entries from the Thing itself. But what do you do when a Thing that once logged values into a Value Stream has been deleted? Currently, there's no OOTB way to delete these entries if they're not being used anymore. I was asked this question recently and wanted to share the answer with the entire community. I created a utility application that queries the ThingWorx DB directly for Things that are present in the Value Stream but don't exist anymore, and allows a user to purge them.

These services assume PostgreSQL as the persistence provider for the Value Streams. The services can be modified if you're using SQL Server. They do not apply to InfluxDB persistence providers.

The twxDBConnector Thing is based on the PostgreSqlServer template, which is present in the Relational Databases extension. It has 4 main services:

- getEntriesToPurge: queries the TWX DB for all the entries related to a Thing. It does not consider the Value Stream id, so it will purge all the entries across all value streams. Requires a Thing name as an input.
- getMissingThings: queries Things that are present in the Value Stream DB table but not in the Things table, meaning that they were deleted.
- purgeThingEntries: purges the entries related to a Thing. It does not consider the Value Stream id, so it will purge all the entries across all value streams.
- purgeAllEntries: purges all the entries related to Things that were deleted.

The queries can be modified to allow the selection of the value stream to be cleaned. I also added a sample mashup that leverages the services.

The twxDBConnector has a configuration table that requires the DB connection string, user and password.

You can also do it directly from the DB using pgAdmin and purge it all:

DELETE FROM value_stream
WHERE value_stream.entry_id IN
    -- queries all entries in the value stream table that belong to an inexistent thing
    (SELECT entry_id
     FROM value_stream
     LEFT JOIN thing_model ON value_stream.source_id = thing_model.name
     WHERE thing_model.name IS NULL)

Attention: these services change the TWX DB directly, so use them carefully.

To use it:
1. Import the PostgresSQLServer extension (you might need to change the JAR in the extension depending on the TWX version you're using);
2. Import the entities from the purgeVSEntries.xml.

Thanks @dsantos for the help on optimizing the queries.

Hope it helps.
Ewerton
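If you only want to purge the entries of a single deleted Thing by hand (the manual equivalent of purgeThingEntries), the same schema supports a narrower statement. This assumes, as in the join above, that source_id holds the Thing name; 'MyDeletedThing' is a placeholder:

-- purge all value stream entries whose source was the deleted Thing
DELETE FROM value_stream
WHERE value_stream.source_id = 'MyDeletedThing';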
Remote Monitoring of Assets Benchmark

As @ttielebein introduced previously, one of the missions of the IOT Enterprise Deployment Center (EDC) is to publish benchmarks that showcase the ThingWorx Platform deployed to solve real-world IOT business problems.

Our goal is that these benchmarks can be used as a reference or baseline for architects working on their own implementations... showing not only a successful at-scale implementation, but also what happens when that same implementation is pushed to... or even past... its limits.

Please find the first installment attached - a reference benchmark demonstrating ThingWorx deployed to monitor 15,000 assets with a high volume of data properties per asset. Over 250 hours of simulations were conducted as part of producing this benchmark.

The IOT EDC team will be monitoring this post (as well as our other posts in the IOT Tech Tips forum) to answer any questions we can about the approaches taken in designing, deploying and simulating this implementation.

As the team publishes more benchmarks like this in the future, we also greatly value any feedback you have that can help us improve the content of future documents.
Users of ThingWorx Analytics (TWA) may choose to create a predictive model using TWA or import a predictive model that was created using other software. When importing into or exporting out of TWA, the predictive model must be in PMML (Predictive Model Markup Language) version 4.3+ format. This post describes how to complete the import and export processes.

Exporting:
The user may create a model in two main ways inside of TWA: using the Builder user interface, or by using the 'Create Job' service that exists on the Training Thing. Whichever method is used, a model Job ID is created automatically by TWA for that model. It is this model Job ID that is used to identify the model inside of TWA, regardless of what is being done with that model.

If a model is trained using Builder, the user may highlight that model, click 'Job Details', and then copy the Job ID.

Next, the user will navigate to Browse --> Things --> ...TrainingThing. This is the Training Microservice inside of TWA where all the functionality involved with training a model exists. Within the ...TrainingThing, the user will execute the 'RetrieveModel' service under Services. When executing the service, the user will paste the model Job ID (ex. 49704f1a-7fcd-4e38-ab53-84ef46517d0a) they copied earlier, and press 'Execute'. The resulting text can then be highlighted, copied to Notepad or some other text editor, and saved in .pmml format (ex. 'ModelExport.pmml').

Importing Through the Results Microservice:
To import a model that has been saved in PMML 4.3+ format into TWA using the Results Microservice, the user will navigate to Manage --> Repositories (ex. AnalyticsUploadStorage) --> Actions --> Upload, and choose the PMML file. The user will then navigate to Browse --> Things --> ...ResultsThing. This is the Results Microservice inside of TWA where all the functionality related to previously trained models exists. Within the ...ResultsThing, the user will execute the 'UploadModel' service under Services. Alternatively, the user can upload the model from any repository using the 'UploadModelFromRepository' service.

To create a model from the uploaded PMML inside of TWA, the user will fill out the filePath and name, then execute the service. Note: this model will not show up in Builder, as that would require model validation information that is not part of the imported PMML file.

The resulting Job ID can be used to make predictions, such as by using the ...PredictionThing's BatchScore or RealtimeScore services. At this point, the uploaded model acts the same way as if it were created inside of that TWA environment.

Importing Through Analytics Manager:
To import a model that has been saved in PMML 4.3+ format into TWA using the Analytics Manager, the user will navigate to Analytics --> Analytics Manager --> Analysis Models, and click the green "New" button. Next the user will choose the provider name (or create a new one by navigating to Analytics --> Analytics Manager --> Analysis Providers). The user will also check the box to "Upload Model", and click the grey "Choose File" button to find the PMML file. Finally, the user will click the black "Upload" button, then the green "Save" button.

At this point, the model is uploaded into ThingWorx Analytics, and the user may progress through the subsequent steps to set up "Analysis Events" and "Analysis Jobs" that will be powered by the imported model.
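For those who prefer to script the Results Microservice steps, a minimal sketch of the two service calls is below. The Thing names keep the elided "..." prefix from the post (they are environment-specific), the UploadModel parameters filePath and name come from the post above, and the RetrieveModel input parameter name jobId is an assumption for illustration:

// Export: retrieve the PMML text for a model by its Job ID
var pmmlText = Things["...TrainingThing"].RetrieveModel({
    jobId: "49704f1a-7fcd-4e38-ab53-84ef46517d0a" // parameter name assumed; paste your model's Job ID
});

// Import: create a model from a PMML file previously uploaded to a repository
Things["...ResultsThing"].UploadModel({
    filePath: "/ModelExport.pmml", // path within the repository
    name: "ImportedModel"          // hypothetical model name
});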
This is a lessons-learned write-up that I proposed to present at LiveWorx, but it didn't make the cut. I did want to share it with all the developer folks, though. Please note that this was written before we added InfluxDB and microservices, which help improve the landscape. Oh, and it's long 🙂

------------------------------------------

This is written as of Thingworx 8.2.

Different ways to scale Data and Processing with Thingworx

Two main issues are targeted:
- Data storage
- Platform processing

Data Storage in Thingworx

Background
The issue around storage is that, due to the limited indexing in the Persistence Provider, the actual values according to the Data Shape are stored in a JSON blob. So when you look in the Persistence Provider you'll see:

Source
sourceType
Location
entityID
Datetime
Tags
ValueJSONBlob

The first six carry an index; the JSON blob, which holds the values according to the Data Shape, does not. It can read something like {value1:firstvalue, value2:secondvalue, value3:[ .... ]} etc. This means that any queries beyond the standard keys - date/time, entityID (name of Stream or DataTable), source, sourceType, tags, location - become very inefficient, because the Platform will query the records and then apply the Data Shape based filter server side. Potentially this can cause you to pull way more records over from the Persistence Provider to the Platform than intended. I.e.: a query on Temperature in my data that should return 25 records for a given month will perhaps first return 250K records and then filter down to 25.

The second issue with storage is that all Streams are stored in one table in the Persistence Provider, using entityID as an additional key to figure out which Stream the record belongs to. This means that your record count per table goes up much faster than you'd expect. I.e.: if I have defined 5 ValueStreams for 5 different asset types, ultimately all that data is still in one table in the Persistence Provider. So if each has 250K records, a query against one ValueStream will in actuality be a query against 1.25 million records. I think both of these issues are well known and documented by now, and Dev is working on them.

Solution approaches
So if you are expecting to store a lot of records, what can you do?

Archive
The easiest solution is to keep a limited set and archive off the rest of the data, preferably into a client's data lake that is not part of the Persistence Provider. Remember, archiving from one Stream to another Stream is not a solution! Unless ... you use multiple Persistence Providers.

Multiple Persistence Providers
Thingworx does support multiple Persistence Providers for storing data. So you can spin up extra schemas (potentially even in the same database server) to be the store for additional Persistence Providers, which are then mapped to a specific Stream/ValueStream/DataTable/Blog/Wiki. You still have to deal with the query challenge, but you now have fewer records per data store to query through.

Direct queries in the Persistence Provider
If you have full access to your Persistence Provider (NOTE: PTC Cloud Services does NOT provide this right now), you can create an additional JDBC connection to the Persistence Provider and query the stream table directly. This allows you to query on the indexed records with, in addition, a text search through the JSON blob, all server side. With this approach, a query that at times took several minutes Platform side using QueryStreamEntries took only a few seconds. The biggest savings was the fact that you didn't have to transfer so many records back to the Platform server.
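To make the direct-query approach concrete, here is an illustrative PostgreSQL sketch against the single stream table described above. The physical table and column names vary by ThingWorx version, and the JSON field name ('Temperature') is hypothetical, so treat this purely as a pattern rather than a ready-made query:

-- indexed predicates first (entity_id, time), then a server-side filter on the JSON blob
SELECT time, data::jsonb ->> 'Temperature' AS temperature
FROM stream
WHERE entity_id = 'MyValueStream'
  AND time BETWEEN '2018-01-01' AND '2018-02-01'
  AND data::jsonb ->> 'Temperature' IS NOT NULL;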
Additional Schemas
You can create your own schema, either within the Persistence Provider DB (again, not supported by PTC Cloud Services) or in a database server of your choice, and connect to it with JDBC/REST. (NOTE: I believe PTC Cloud Services may/might offer a standalone server with actual root access.) This does mean you have to create your own getter/setter services to retrieve and store information, plus you'll need some event to trigger the store (like DataChange). This approach is probably a common, if not best practice, recommendation right now if historical information is required for the solution, the record count looks to go over 1 million records, and the data can't just be queried based on timestamp.

Thingworx Event Processing

Background
Thingworx will consistently deal with many Things that have many Properties, and oftentimes there will be Alerts/Rules that need to run based on value changes. When you are using straight-up Alerts based on a limit value, this isn't such a challenge. But what if you need to add some latch/lock/debounce logic, or need to check against historical values or multiple conditions? How can you design something that can handle evaluating these complex rules, hold some historical or derived values, avoid race conditions and stay responsive?

Potential Problems
- Race conditions: multiple Events may need to update the same permanent or temporary store for the determination of a condition.
- Duplicates: if you don't have some 'central' tracker, you may possibly trigger the same rule multiple times.
- Slow response: you are potentially triggering thousands or more events at the same time. Depending on how you've set up your logic, your response could become so slow that the next event fires before the previous one finishes, and you'll overload the system.
- System queue overrun: if your events trigger faster than you can handle them, you will slowly build up and finally overrun the event queue.
- System thread count overrun: based on the number of cores in your system, you can overrun the number of threads that can be handled.
- Connection pool overrun: each read/write to a Stream/DataTable, but also each Property persist, uses the connection pool to your Persistence Provider. If you fire a lot at once, you can stack up requests and cause deadlocks.
- System out of memory: if the in-memory information you depend on in handling the events grows over time, you could hit an 'Out of Memory' issue.

Solution Approaches

Batch processing
Especially with agents/sources that write a set of property updates, you potentially trigger multiple threads that all may need the same source information or update the same target information. If you are able to process this as a batch, you can take all values into account, process everything as a single event, and have just a single read from source or single write to target. This will be difficult to achieve when using something like Kepware, unless it is transferring as something non-standard like MQTT. But if you can have the data come in as a single REST POST, this approach becomes possible.

In Memory vs. Table/Stream Storage
To speed up response time, you can put necessary information into memory instead of into a DataTable or Stream. For example, if you need the most current received record together with some historical values, you could:
- Use a Stream, but carry the current value, because the stream updates asynchronously
  (i.e. adding the current value to the stream doesn't guarantee that when you read from the stream it has already been committed)
- Use a DataTable, because they are synchronous; but this can make the execution slow, especially if you are reaching 100K records or more
- Use an InfoTable or JSON property: this information is in memory, runs the fastest and is synchronous (see the sketch at the end of this post). Note that in some speed testing a JSON object was faster than an InfoTable, and way faster than a DataTable. One challenge is that you would have to do a full overwrite if you need to persist this information. Doing a full write does open up the danger of a race condition if this information is being updated by multiple threads at the same time. If it is OK to keep the information in memory, then an InfoTable is nice because you can just add/delete rows in memory. I sadly haven't figured out yet how to directly do this to a JSON object property :(. It is important to consider disaster recovery scenarios if you are only keeping this information in memory.

Centralized Processing vs. Distributed Processing
Think about how you can possibly execute some logic within the context of the Entity itself (logic within the ThingShape/ThingTemplate) vs. having it fire into a centralized service (sync or async) on a separate Entity.

Scheduler or Timer
As much as Schedulers and Timers are often the culprit of too many threads at the same time, a well set up piece of logic that is triggered by a Scheduler or Timer can be the solution to avoid race conditions. If you are working with multiple timers, you may want to consider multiple Schedulers which will trigger at a specific time, which means you can eliminate concurrence (several timers firing at the same time). Think about staggering execution if necessary, by using the hated, looked down upon ... but oft necessary ... pause() function!!!!

Synchronous vs. Asynchronous
Asynchronous execution can give great savings on the processing speed of a thread, since it will kick off the async parts and continue on. The terrible drawback: you can't tell when it is finished, nor what the resulting output is. As you mix and match sync/async vs. processing speed, you may need to consider other ways to pick up when an async process finishes, e.g. some Property elsewhere that will trigger a DataChange.

Interesting examples

Batch Process
With one client there was a batch process that would post several hundred results at once that all had to be evaluated. The evaluation also relied on historical information. So with some logic these properties were processed as a batch, related to each other and also compared to information held in memory, besides historically storing the information that came in. This utilized several in-memory objects and ultimately also an eval() statement to have the greatest flexibility and performance.

Mix and Match
With another client, there was a requirement for latch/lock and escalation logic. This means that some information needs to be persisted; however, because all of the several hundred properties per asset are coming in through Kepware once a second, it also had to be very fast. The approach here was to have the DataChange place information into an in-memory InfoTable that was then picked up by a separate latch/lock/escalation timer to move it over to the persistent side. This allowed for the instantaneous processing of DataChange and Alerts, but also a more persistent processing of the latch/lock/escalation logic.

In Conclusion
Remember that PTC created its software for specific purposes.
I don't think there ever will be a perfect magical platform that will do everything we need and want. Thingworx started out on a specific path, which was a very high speed data ingest and event platform with agnostic all-around connectivity, that provided a very nice holistic modeling approach and a simple way to build UI/UX. Our use cases will sometimes go right past everything, and at times to the final frontier aka the bleeding edge, and few are a carbon copy of another. This means we need to be innovative and creative. Hopefully all of you can use the expert knowledge you have about our products to create those solutions, but then also be proactive and please share with everyone else!
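As a minimal sketch of the in-memory InfoTable approach mentioned above: an InfoTable held in a non-persistent INFOTABLE property can be appended to synchronously in memory. The property and Data Shape names here are hypothetical; CreateInfoTableFromDataShape and AddRow are standard ThingWorx InfoTable mechanics:

// build an empty InfoTable from a (hypothetical) Data Shape with timestamp/value fields
var buffer = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape({
    infoTableName: "buffer",
    dataShapeName: "ReadingShape" // hypothetical Data Shape
});
buffer.AddRow({ timestamp: new Date(), value: 42 });

// store on a non-persistent INFOTABLE property: fast and synchronous, but memory only,
// so consider disaster recovery if this data must survive a restart
me.RecentReadings = buffer;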
This Best Practices document should offer some guidelines and tips & tricks on how to work with Timers and Schedulers in ThingWorx. After exploring the configuration and creation of Timers and Schedulers via the UI or JavaScript services, this document will also highlight some of the most common performance issues and troubleshooting techniques.

Timers and Schedulers can be used to run jobs or fire events on a regular basis. Both are implemented as Thing Templates in ThingWorx. New Timer and Scheduler Things can be created based on these Templates to introduce time-based actions. Timers can be used to fire events at a certain interval, defined in the Timer's Update Rate (the default is 60000 milliseconds = 1 minute). Schedulers can be used to run jobs based on a cron pattern (such as once a day or once an hour). Schedulers also allow for a more detailed time-based setup, e.g. based on seconds, hours, days of week, days of month etc. Events fired by both Timers and Schedulers can be subscribed to with Subscriptions, which can be utilized to execute custom service scripts, e.g. to generate "fake" or random demo data to update Remote Things in a test environment. In general, subscriptions and scripts can be used to run regular maintenance tasks or periodically required functions (e.g. for data aggregation).

For more information about setting up Timers and Schedulers, it's recommended to also have a look at the following content:
- How to set up and configure Timers
- How to set up and configure Schedulers
- How to create and configure Timers and Schedulers via JavaScript Services
- Events and Subscriptions for Timers and Schedulers

Example
The following example will illustrate how to create a Timer Thing updating a Remote Thing using random values. To avoid any conflicts with permissions and visibility, use the Administrator user to create Things.

Remote Thing
- Create a new Thing based on the Remote Thing Template, called myRemoteThing.
- Add two properties, numberA and numberB - both Integers and marked as persistent.
- Save myRemoteThing.

Timer Thing
- Create a new Thing based on the Timer Template, called myTimerThing.
- In the Configuration, change the Update Rate to 5000, to fire the Event every 5 seconds.
- Set the User Context to Administrator. This will run the related services with the Administrator's user visibility and permissions.
- Save myTimerThing.

Subscriptions
To update the myRemoteThing properties when the Timer Event fires, there are two options:
1. Configure a Subscription on myRemoteThing and listen to Timer Events on myTimerThing.
2. Configure a Subscription on myTimerThing and listen to Timer Events on itself as a source.

In this example, let's go with the first option and edit myRemoteThing.
- Create a new Subscription pointing to myTimerThing as a Source.
- Select the Timer Event. Note that if no source is selected, the Timer Event is not available, as myRemoteThing is based on the Remote Thing Template and not the Timer Template.
- Enable the Subscription.
- In the Script area, use the following code to assign two random numbers to the Thing's custom properties:

me.numberA = Math.floor(Math.random() * 100);
me.numberB = Math.floor(Math.random() * 100);

- Save myRemoteThing.

Validation
The Subscription will be enabled and active on saving it. Switch to the myRemoteThing Properties. Refreshing the Values will show updates with random numbers between 0 and 99 every 5 seconds (the Timer Update Rate).
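For comparison, option 2 above (a Subscription on myTimerThing listening to its own Timer Event) would use a script like the following minimal sketch. Since the subscription no longer runs on myRemoteThing, me refers to the Timer, and the Remote Thing must be addressed explicitly:

// Subscription script on myTimerThing, source = self, event = Timer
Things["myRemoteThing"].numberA = Math.floor(Math.random() * 100);
Things["myRemoteThing"].numberB = Math.floor(Math.random() * 100);

As noted in the performance considerations below, subscribing to an Event on the same Thing bypasses the Event Processing Subsystem, which can be a deliberate design choice.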
Performance considerations
- Timers and Schedulers are handled via the Event Processing Subsystem. Metrics that impact current performance can be seen in Monitoring > Subsystems > Event Processing.
- Implementing Timers and Schedulers on a Thing Template level might flood the system with service executions originating from Subscriptions to Timer / Scheduler triggered Events.
- Subscribing to another Thing's Events is handled via the Event Processing Subsystem. Subscribing to an Event on the same Thing is not handled via the Event Processing Subsystem, but rather executes on the already open in-memory Thing.
- If Timers and Schedulers are not strictly needed, the services can be triggered e.g. via Data Change Events, UI interactions etc.
- Recursion can be a hidden performance contributor, where a Subscription to a certain Event executes a service, triggering another Event with recursive dependencies. Ensure there are no circular dependencies and service calls across Entities.
- If possible, avoid reading from disk for each and every action. Performance can be increased by storing relevant information in memory, using Streams or DataTables for persistence.
- If possible, call other services from within the Subscription instead of handling all code within the Subscription itself.

For full details, see also Timers and Schedulers - Best Practice.

How to identify and troubleshoot technical issues
- Check the Event Processing Subsystem for any spikes in queued Events (tasks submitted) while the total number of tasks completed is not, or only slowly, increasing.
- For a historical overview, search the ApplicationLog for "Thingworx System Metrics" to get system metrics since the server has been (re-)started.
- In the ApplicationLog, the message "Subsystem EventProcessingSubsystem is started" indicates that the Subsystem is indeed started and available.
- Use custom loggers in services to get more context around errors and execution in the ScriptLog. Custom loggers can be used to identify if Events have fired and Subscriptions are actually triggered. Example: logger.debug("myThing: executing subscribed service")
- For issues with service execution, see also CS268218.
- Infinite loops in services could render the server unresponsive and might flood the system with various Events.
- To change the timing of a Timer, restarting the Thing is not enough. The Timer must be disabled and enabled at the desired start time. Schedulers allow for much more flexible timing, and execution times can be set or changed in advance.
- For further analysis, it's recommended to generate Thread Dumps to get more information about the current state of Threads in the JVM. The ThingWorx Support Tools Extension can help in generating those. See also CS245547 for more information and usage.
It's been challenging, in the absence of an OOTB Software Content Management (SCM) system within ThingWorx, to track and maintain code changes in different entities, or to roll back in case an entity is wrongly edited or removed. There is some possibility to compare differences when importing entities back from the source control repository in ThingworxStorage, but this approach has its own limitations. Note: this is not an official document, rather a trick to help work around this challenge.

I'll be using the following components in this blog to help set up a workable environment:
- Git
- VSCode
- ThingWorx Scheduler Thing

The first two components, Git & VSCode, are something I preferred to use due to their ease and my familiarity. However, you are free to choose your own versioning platform and IDE to review the branches, commits, diffs, etc., or simply use Git Bash to review the branches and all related changes to the code.

This blog is divided into the following sections:
1. Installing & setting up code versioning software
2. Installing an IDE
3. Creating a Scheduler Thing in ThingWorx
4. Reviewing the changes

1. Installing & setting up code versioning software
As mentioned, you can use any code versioning platform of your choice; in this blog I'll be using Git. To download and install Git, refer to Git - Installing Git.

Setting up the Git repository
Navigate to \\ThingworxStorage, as this is the folder which contains the path to the \repository folder, which in turn contains the SystemRepository folder. Note: more custom repositories could of course be created if the preference is to separate them from the SystemRepository (available OOTB).

Once Git is installed, let's initialize the repository. If you are initializing the repository in ThingworxStorage and would like only the repository folder to be tracked, be sure to create .gitignore and add the following to it:

# folders to ignore
database
esapi
exports
extensions
logs
reports
certificates

# keystore to ignore
keystore.jks

Note: simply remove a folder/file from the .gitignore file if you'd like that file/folder to be tracked within the Git repository.

The following commands are issued in Git Bash, which can be started from Windows Start > Git Bash. From Git Bash, navigate to the ThingworxStorage/repository folder.

Initialize the repository:

$ git init

Check the status of tracked/untracked files/folders:

$ git status

This may or may not return a list of files/folders that are not tracked. When issuing this command for the first time, it'll show that the repository and its content, i.e. the SystemRepository folder, are not tracked (file/folder names will likely be highlighted in red).

Configure the user, then add the required files/folders for tracking:

$ git config --global user.name ""
$ git config --global user.email ""
$ git add .

This adds all the folders/files that are not ignored in the .gitignore file created above.

$ git commit -m ""

This performs the first commit to the master branch, which is the default branch after the initial setup of the Git repository.

$ git branch -a

This lists all available branches within this repository.

$ git branch

e.g.

$ git branch newfeatureA

This creates a new branch with that name; however, to switch to that branch, the checkout command needs to be used:

$ git checkout newfeatureA

Note this will be reflected in the command prompt, e.g.
it'll switch from

MINGW64 /c/ThingworxStorage/repository (master)

to

MINGW64 /c/ThingworxStorage/repository (newfeatureA)

If there's a warning with files/folders highlighted in red, it may mean that the files/folders are not yet staged (not yet ready to be committed under the new branch and not tracked under the new branch). To stage them:

$ git add .

Then, to commit them for tracking under this branch:

$ git commit -m "Initial commit under branch newfeatureA"

The above command commits with the message defined via the "-m" parameter.

2. Installing an IDE
Now that Git is installed and configured for use, there are several options here: using an IDE like VSCode or Atom or any other IDE for that matter, using Git Bash to review the branches and commit code via commands, or using the Git GUI. I'll be using VSCode (because, apart from tracking Git repos, I can also debug HTML/CSS and XML with minimum setup time, and likely these are the languages mostly used when working with ThingWorx entities) and will install certain extensions to simplify accessing and reviewing the branches containing code changes, new entities being created or committed by different users, etc.

To install VSCode, refer to Setting up Visual Studio Code. This covers all the platforms which you might be working on.

Here is the list of extensions that I have installed within VSCode to work with the Git repository:

Required extensions
- Git Extension Pack

Optional extensions
- Markdown All in One
- XML Tools
- HTML CSS Support

Once done installing all the above mentioned extensions, simply navigate to the VSCode application's File > Open Folder > \\ThingworxStorage\repository. This will open the SystemRepository folder, which will also populate the GitLens section highlighting the list of branches, which one is the active branch (marked with a check icon) & what uncommitted changes remain in which branch.

To view the history of all the branches, use the extension installed with the Git Extension Pack by pressing the F1 key, then searching for Git: View History (git log) > Select all branches; this provides an overview.

3. Creating a Scheduler Thing in ThingWorx
Now that we have the playground set up, we can either:
- Export ThingWorx entities manually by navigating to ThingWorx Composer > Import/Export > Export > Source Control Entities, or
- Invoke the ExportSourceControlledEntities service automatically (based on a Scheduler's CRON job), available under Resources > SourceControlFunctions.

To invoke this service automatically, a subscription can be created to the Scheduler Thing's Event which invokes the execution of ExportSourceControlledEntities periodically. I'm using this service with the following input parameters:

var params = {
    path: "/" /* STRING */,
    endDate: undefined /* DATETIME */,
    includeDependents: undefined /* BOOLEAN */,
    collection: undefined /* STRING */,
    repositoryName: "SystemRepository" /* THINGNAME */,
    projectName: undefined /* PROJECTNAME */,
    startDate: undefined /* DATETIME */,
    tags: undefined /* TAGS */
};
// no return
Resources["SourceControlFunctions"].ExportSourceControlledEntities(params);

The service only takes path & repositoryName as input here. Of course, this can be updated based on the type of entities, datetime, collection type, etc. that you want to track in your code versioning system.

4. Reviewing the changes
With the help of the Git toolkit installed in VSCode (covered in section 2,
Installing an IDE), we can now track the changed / newly created entities immediately (as soon as they are exported to the repository) in the Source Control section (also accessible via the key combination Ctrl + Shift + G).

Based on the scheduled job, the entities are exported to the specified repository periodically, which in turn shows up in the branch under which that repository is being tracked. Since I'm using VSCode with the Git extension, I can track these changes under the GitLens tab. Additionally, for quick access to all the new entities created or existing ones modified, the Source Control section can be checked. Changes marked with "U" are new entities that got added to the repository and are still untracked, and changes marked with "M" are entities modified compared to their last committed state in the specific branch. To quickly view the differences/modifications in an entity, simply click on the modified file on the left, which will then highlight the difference on the right side.
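If you prefer reviewing from Git Bash instead of the VSCode views, the same history and diffs are available on the command line, e.g. using the branch from the earlier example:

$ git log --oneline --all
$ git diff master..newfeatureA

The first command summarizes commits across all branches; the second shows exactly what changed in the exported entity files between master and newfeatureA.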
Welcome to the ThingWorx Manufacturing Apps Community!

The ThingWorx Manufacturing Apps are easy to deploy, pre-configured, role-based starter apps that are built on PTC's industry-leading IoT platform, ThingWorx. These Apps provide manufacturers with real-time visibility into operational information, improved decision making, accelerated time to value, and unmatched flexibility to drive factory performance.

This Community page is open to all users - including licensed ThingWorx users, Express ("freemium") users, or anyone interested in trying the Apps. Tech Support community advocates serve users on this site, and are here to answer your questions about downloading, installing, and configuring the ThingWorx Manufacturing Apps.

A. Sign up:
ThingWorx Manufacturing Apps Community: PTC account credentials are needed to participate in the ThingWorx Community. If you have not yet registered a PTC eSupport account, start with the Basic Account Creation page.
Manufacturing Apps Web portal: Register a login for the ThingWorx Manufacturing Apps web portal, where you can download the free trial and navigate to the additional resources discussed below.

B. Download:
Choose a download/packaging option to get started.
i. Express/Freemium Installer (best for users who are new to ThingWorx): If you want to quickly install the ThingWorx Manufacturing Apps (including ThingWorx), use the following installer: Download the Express/Freemium Installer
ii. 30-day Developer Kit trial: To experience the capabilities of the ThingWorx Platform with the Manufacturing Apps and create your own Apps: Download the 30-day Developer Kit trial
iii. Import as a ThingWorx Extension (for users with a Manufacturing Apps entitlement - including ThingWorx commercial customers, PTC employees, and PTC Partners): ThingWorx Manufacturing Apps can be imported as ThingWorx extensions into an existing ThingWorx Platform install (v8.1.0). To locate the download, open the PTC Software Download Page and expand the following folders:
ThingWorx Platform | Release 8.x | ThingWorx Manufacturing Apps Extension | Most Recent Datacode

C. Learn:
After downloading the installer or extensions, begin with Installation and Configuration. Follow the steps laid out in the ThingWorx Manufacturing Apps Setup and Configuration Guide 8.2. Find helpful getting-started guides and videos available within the 'Get Started' section of the ThingWorx Manufacturing Apps Portal.

D. Customize:
Once you have successfully downloaded, installed, and configured the Manufacturing Apps, begin to explore the deeper potential of the Apps and the ThingWorx Platform. Follow along with the discussion and steps contained in the ThingWorx Manufacturing Apps and Service Apps Customization Guide 8.2. Also within the 'Get Started' page of the ThingWorx Manufacturing Apps Portal, find the "Evolve and Expand" section, featuring:
- Custom Plant Layout application
- Custom Asset Advisor application
- Global Plant View application
- ThingWorx Manufacturing Apps Technical Lab with Sigma Tile (Raspberry Pi application)
- Configuring the Apps with demo data set and simulator
- Additional Advanced Documentation

E. Get help / give feedback / interact:
Use the ThingWorx Manufacturing Apps Community page as a resource to find documentation, peruse past forum threads, or post a question to start a discussion! For advanced troubleshooting, licensed users are encouraged to submit support tickets to the PTC My eSupport portal.
You might have seen the Performance Advisor for some of your other favorite PTC products like Creo, Windchill or Integrity. Good news... it's now also available for ThingWorx!

In case you're not familiar with the Performance Advisor, it's new functionality allowing you to work more closely with the PTC / ThingWorx team to improve your usage of ThingWorx and to improve ThingWorx itself in the areas that matter most to you.

ThingWorx Performance Advisor:
- delivers information dashboards driven by data on the features, usage and performance of your ThingWorx systems
- unlocks information that can reduce wasted development and improve design cycles
- allows comprehensive visibility into software versions in use to manage software upgrade plans
- simplifies compliance and revenue allocation by monitoring usage
- enables quick access to system and usage statistics across your organization
- uses personalized dashboards for viewing, reporting and trend analysis

The Performance Advisor for ThingWorx has just been released, so we want you to share your experience and data to get you and us started on analyzing usage statistics and needs for further features.

The Performance Advisor is easy to connect. It just takes three simple steps and a minute of your time. This results in improved transparency, stability, productivity, product performance and compliance administration, as well as increased administrative efficiency, and it allows the ThingWorx R&D team to continuously improve the platform through analytical insights from the data collected.

As ThingWorx is growing fast, be sure to participate and actively shape the way you're using ThingWorx and the way that ThingWorx is designed.

With newer versions of ThingWorx, the capabilities and benefits of the Performance Advisor will be improved to ensure we're capturing the most accurate information to help you grow your Internet of Things business and scale your solutions to your application's needs and requirements. We're just at the beginning of the journey...

How to enable the ThingWorx Performance Advisor
Enabling Metrics Reporting and setting up the Performance Advisor capabilities is described in detail in CS262960. Just follow the steps and: congratulations! It's as simple and fast as that - you've enabled the ThingWorx Performance Advisor... quite easy, right?

Where can I see the data / metrics I have sent to PTC?
The information can be seen on the Performance Advisor Homepage. Here's how the current views look - they might change over time, introducing new features and views to maximize the impact and benefit for you.

At a first glance, the basic information of what has been collected can be seen in the Summary. The Connected System Details show more about which systems are currently connected, with their user counts and number of remote things, while the Connected System History shows how those parameters changed over time. For a more detailed historic overview of all the data being sent, check out the Historical Property Data.

Questions?
For specific questions, check out article CS262967, which holds the FAQs for the Performance Advisor. If you have specific questions not addressed in the article, you can always comment on this blog post, open a new community thread or open a case with Support Services.

We want your feedback
After enabling metrics collection and reviewing the Performance Advisor dashboards, what do you think?
What features would you like to see in the future? Is there anything missing that would help you as a System Administrator and make your life easier?

As we're trying to improve functionality over time, make sure your voice is heard as well, and feel free to leave some feedback.
All of the entities and data in storage entities (Stream, ValueStream, DataTable, Wiki, Blog) are kept in the Persistence Provider. To enhance the storage volume of ThingWorx, it is advised to use a bigger Persistence Provider (PostgreSQL or Cassandra rather than Neo4j or H2), to move and keep certain parts of the data in an external database, or to set up a second Persistence Provider:

- Data storage entities (Stream, ValueStream, DataTable) and collaboration entities (Wiki, Blog) can be assigned to the new Persistence Provider to keep the business data.
- The original Persistence Provider (from the installation) will be freed from large data input and respond faster, holding only the model data (entity information).
- With a small Persistence Provider like H2 or Neo4j, which does not require extra setup steps during the ThingWorx installation process, a second Persistence Provider cannot be set up (they don't have .bat files to configure the database).

Here is an example of setting up a second PostgreSQL instance (it may also be applicable to Cassandra):

1. Install a new PostgreSQL on the server.
   - Give the files different folder names, or the two PostgreSQL installations will affect each other.
   - PostgreSQL 9.5 could work with ThingWorx 7.2.x, but cannot be used with 7.3 and above. Version 9.5 is not officially supported or advised at this time, and users do so at their own risk.
   - The new PostgreSQL will have a new port number (e.g. if the first port is 5432, the new port is 5433).
2. Open PostgreSQL using pgAdmin 3.
3. Create a new user role:
   a. Right-click PostgreSQL9.4 (localhost:5433).
   b. Select New Object > New Login Role. On the Properties tab, in the Role name field, enter the user role name.
   c. On the Definition tab, in the Password field, enter a unique password (you will be prompted to enter it twice).
4. Add the new <postgres-installation>/bin folder to the system path variable.
5. Edit thingworxPostgresDBSetup.bat and thingworxPostgresSchemaSetup.bat from the ThingWorx software download package, and change the port property to 5433.
6. Execute the two scripts; a new database and tablespace will be created.
7. The platform-settings.json file does not need extra configuration.
8. Restart the Tomcat server; a new folder will be created under the ThingworxPostgresqlStorage folder.
9. Go to the ThingWorx Composer and create a new Persistence Provider from the Home list.
10. Give it a new name, and select the Persistence Provider Package (only one choice). After the selection, a Configuration field will show up under ENTITY INFORMATION (this field will not appear in a Neo4j-based ThingWorx).
11. Open Configuration, change the JDBC URL to jdbc:postgresql://localhost:5433/thingworx, update the Password, and click Save.

The new Persistence Provider can now be used in data storage entities, and it can also be set as the default provider.
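If you would rather use a SQL console than the pgAdmin clicks in step 3, the equivalent is a single statement. This is only a sketch: the role name and password below are placeholders to be replaced with your own values, which are then reused in step 11:

```sql
-- Run against the new instance (port 5433), e.g. from pgAdmin's query tool or psql
CREATE ROLE twadmin WITH LOGIN PASSWORD 'changeme';
```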
This post covers the challenges I've had while going through the setup of the .NET SDK based ADO Service for a SQL Server DB connection. I'll be starting from scratch on setting up the service, to present the full picture of the setup.

Pre-requisites

1. Download and install Microsoft SQL Server Express or Enterprise edition; for testing I worked with the Express edition: https://www.microsoft.com/en-us/sql-server/sql-server-editions-express
2. Once installed, it's imperative that the TCP/IP protocol is enabled in the SQL Server Configuration Manager for the SQL Server.
3. Download the ThingWorx Edge ADO Service from the PTC Software download page.

What is the ThingWorx ADO Service?

An ActiveX Data Objects service that allows connecting a Microsoft database source, e.g. MS SQL Server or MS Excel, or an MS .NET application to the ThingWorx platform. It is based on the ThingWorx .NET SDK.

Installing the ADO Service

Let me begin by saying this is just a summary, in a crude way of course, of the ThingWorx Edge ADO Service Configuration Guide. So when in doubt, it's strongly recommended to go through the guide, which is also provided together with the downloaded package. I'll be using the ThingWorx ADO Service v5.6.1, the most recent release, for the purpose of this blog.

Depending on whether you are on x86 or x64 Windows, navigate to C:\Windows\Microsoft.NET to access InstallUtil.exe. You'll find the file under the following two locations; use the one that applies to your use case:

i) For x64: C:\Windows\Microsoft.NET\Framework64\v4.0.30319
ii) For x86: C:\Windows\Microsoft.NET\Framework\v4.0.30319

1. Copy the appropriate InstallUtil.exe to the location where you have unzipped the ADO Service package, e.g. I've put mine at C:\Software\ThingWorxSoftware\ADOService\
2. Start a command prompt (Windows Start Menu > Command Prompt) and execute InstallUtil.exe ThingWorxADOService.exe
3. This should create a service and write some additional info to the \\ADOService folder in the form of InstallUtil.InstallLog
4. Check the log for confirmation; you should see something similar to:

```
Running a transacted installation.
...
The Commit phase completed successfully.
The transacted install has completed.
```

5. In Windows Explorer, navigate to the folder containing all the unzipped files and edit AdoThing.config
6. For this blog I've disabled security, though obviously in production you'd definitely want to enable it
7. Configure the ConnectionSettings as per your requirements (refer to the guide for more detail on the settings). Below I'm noting the settings that require configuration in their most minimal form (I've also attached my complete AdoThing.config file for reference):

```json
"rows": [
  {
    "Address": "localhost",
    "Port": 8080,
    "Resource": "/Thingworx/WS",
    "IsSecure": false,
    "ThingName": "AdoThing",
    "AppKey": "f7e230ac-3ce9-4d91-8560-ad035b09fc70",
    "AllowSelfSignedCertificates": false,
    "DisableCertValidation": true,
    "DisableEncryption": true
  }
]
```

8. Configure the connection string for the SQL Server in the following section of the same file:

```json
"rows": [
  {
    "ConnectionType": "OleDb",
    "ConnectionString": "Provider=SQLNCLI11;Server=localhost\\SQLEXPRESS;Database=TWXDB;Uid=sa;Pwd=login123;",
    "AlwaysConnected": true,
    "QueryEnabled": true,
    "CommandEnabled": true,
    "CommandTimeout": 60
  }
]
```

9. Just to highlight what's what in the ConnectionString above:

```
"ConnectionString": "Provider=SQLNCLI11;Server=<Machine/ClientName>\\SQLServerInstanceName;Database=<databaseName>;Uid=<userName>;Pwd=<password>;"
```

10. To get the correct connection string syntax for other sources, refer to ConnectionStrings.com
11. Save the file
12. Navigate to the Windows services by opening Windows Start > Run > services.msc
13. Check for the service ThingWorx .NET ADO Client, as you'll have to start it if it's set to Manual (like it was in my case)

The following message will be logged in DotNETSDK-X-X-X.log on a successful connection:

```
[Critical] twWs_Connect: Websocket connected!
```

At the end of the blog I'll share some of the errors that I came across while working on this and how to go about addressing them.

Creating and connecting to a Remote Database Thing

Now let's navigate to the ThingWorx Composer and create a Thing with the RemoteDatabase template to consume the resource created above in the form of the ADO Service. I've named my Thing AdoThing while creating it in the ThingWorx Composer, which matches the ThingName used in the AdoThing.config file. If everything went through as needed, you should see isConnected = true in the AdoThing's Properties section. Since this is a database Thing, I can now go about creating all the required services covering the Create, Read, Update, Delete (CRUD) operations, just like for any database Thing created using the RDBMS Connector.

Handling errors while setting up the ADO Service

Here are some of the errors that I encountered while setting up the ADO Service for this blog:

Error 1: com.thingworx.ado.AdoThing Cannot connect to database. : System.Data.OleDb.OleDbException: Login timeout expired
Note: Logged in DotNetSDK-X-X-X.log
Cause & Resolution:
- The service is not able to successfully reach, or authenticate against, the SQL Server Express DB instance
- Ensure that TCP/IP is enabled in the protocols for SQL Express, as noted in the pre-requisites above
- Make sure that the username/password used for authenticating with the database is correctly provided in the OLEDB section of AdoThing.config

Error 2: com.thingworx.ado.AdoThing GetTables OleDbException error : System.Data.OleDb.OleDbException
Note: Logged in Application.log on the ThingWorx platform
Cause & Resolution:
- This exception is thrown when the user attempts to check for the available tables while creating a service in the ThingWorx Composer
- The resolution is similar to that mentioned above for Error 1

Error 3: [U: SYSTEM] [O: com.thingworx.ado.AdoThing] OleDbException [code = -2147217865, message = Invalid object name 'TWXDB.DemoTable'.] executing SQL query
Note: Logged in Application.log on the ThingWorx platform while testing/executing the SQL service created in the ThingWorx Composer
Cause & Resolution:
- The error is due to the usage of the DB name in front of the table name; it's not required since the DB name is already selected in the connection string

Error 4: [O: com.thingworx.Configuration] Could not read configuration file. : Newtonsoft.Json.JsonReaderException: Bad JSON escape sequence: \S. Path 'Settings.rows[0].ConnectionString', line 656, position 71.
Note: Logged in DotNetSDK-X-X-X.log
Cause & Resolution:
- This is caused by "ConnectionString": "Provider=SQLNCLI11;Server=<machineNameOrIP>\SQLEXPRESS;Database=TWXDB;Uid=sa;Pwd=login123;"
- JSON requires the backslash to be escaped, thus switching to "ConnectionString": "Provider=SQLNCLI11;Server=<machineNameOrIP>\\SQLEXPRESS;Database=TWXDB;Uid=sa;Pwd=login123;" resolved the issue
- Among many others, https://jsonformatter.curiousconcept.com/ is quite helpful in weeding out issues in the JSON syntax

Error 5: [O: com.thingworx.ado.AdoClient] Error while initializing new AdoThing, or opening connection to Platform. : System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
    at com.thingworx.communications.client.TwApiWrapper.twApi_Connect(UInt32 timeout, Int32 retries)
    at com.thingworx.communications.client.TwApiWrapper.Connect(UInt32 timeout, Int16 retries)
    at com.thingworx.communications.client.BaseClient.start()
    at com.thingworx.ado.AdoClient.run()
Note: Logged in DotNetSDK-X-X-X.log
Cause & Resolution:
- This error is observed when using the FIPS version of the ADO Service, especially when downloaded from the ThingWorx Marketplace
- Make sure to recheck the SSL configuration
- When not using SSL, check that the x64 and x86 directories only contain twApi.dll, as by default the FIPS version contains two additional dlls, i.e. libeay32.dll & ssleay32.dll, in both the x64 & x86 directories
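To round this out with a quick usage sketch: once isConnected is true, services are added to AdoThing in Composer the same way as for RDBMS Connector Things, by creating a SQL query or SQL command service and writing plain SQL in the body. Assuming the DemoTable from Error 3 above exists in TWXDB, a minimal query service body would be:

```sql
SELECT * FROM DemoTable
```

Note the table is referenced without the TWXDB. prefix, per the resolution of Error 3. If the service defines an input parameter (say id), it can be referenced in the body with ThingWorx's double-bracket substitution, e.g. SELECT * FROM DemoTable WHERE id = [[id]]; treat that syntax as something to verify against your platform version's documentation.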
There are now three new places where you can get and/or share ThingWorx code examples in the ThingWorx Community:

- ThingWorx Platform Services
- ThingWorx Extensions and Widgets
- ThingWorx Edge and Edge SDKs

We encourage you to share your own relevant code examples in the appropriate space. Be sure to read the how-to and guidelines for posting to the Code Examples Libraries before you create your document. Any official code from ThingWorx Support Services will be marked with an official designation at the top of the document. Keep an eye out for more code examples as we ramp up these libraries, and don't forget to share your own examples!
Setting up the ThingWorx Server: RemoteThing, ApplicationKey, and TunnelSubsystem

Tunneling from the ThingWorx platform to an Edge device can be easily done with a few preparation steps on the platform side:

1. Create an ApplicationKey entity on the ThingWorx server so that the EMS or SDK you are using can authenticate with the platform.
2. Create a RemoteThingWithTunnels or RemoteThingWithTunnelsAndFileTransfer Thing for the remote device to bind to. Either ThingTemplate will work; the only difference is whether you want to use the native file transfer capabilities provided by ThingWorx.
3. In the newly created Thing, on the General Information page, click on the drop-down menu next to Enable Tunneling and select Override - Enabled.
4. Go to the Configuration section under Entity Information on the right and click on the Add My Tunnel button.
   - The Tunnel Name is used to identify which tunnel to use in the RemoteAccessWidget you will bind to the tunnel.
   - The Host will remain 127.0.0.1 because this is from the perspective of where the VNC server is relative to the remote device (in my example they are on the same device).
   - The Port value should be the port that the server is listening on. This is typically 5900, but my VNC server is running on port 5901 for this example.
   - The App URI can be cleared out because we do not need to reference that file. Here is a link to a further explanation of what the App URI is for: ThingWorx Tunneling App URI's
   - The # of Connections and Protocol can remain at their default values unless you have a reason to change them.
5. Navigate back to Home and look for the TunnelSubsystem under the Subsystems page.
6. Click on the TunnelSubsystem, then click on the Configuration option on the left.
7. Modify the "Public host name used for tunnels" field and the "Public port used for tunnels" field to the host and port of your ThingWorx server.
8. Save and close the TunnelSubsystem.

Configuring the Edge Device

For this example I'm going to keep it simple and set up an EMS (Edge MicroServer) instead of an SDK. This EMS will be on a totally separate device (an Ubuntu machine), while my ThingWorx server is on my local machine.

1. Download the latest EMS onto a separate machine.
2. Configure the config.json file settings to match the server's host, port, and application key. The tunnel block will be necessary to add as well; see the config.json sketch at the end of this step list.
3. Configure the config.lua file to match the name of the RemoteThingWithTunnels we created earlier; in this instance the name of my RemoteThing is EdgeThing.
4. Run the EMS and LSR (Lua Script Resource). The LSR EdgeThing will bind automatically to the RemoteThingWithTunnels we created earlier.
5. To verify there is a successful connection between the platform and the EMS, go to the EdgeThing's Properties page and check whether the isConnected property is currently set to true. If it's not, please refer to this Help Center section for further troubleshooting. There is a list of error codes here.
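The original post showed the config.json and config.lua contents as screenshots, which are not reproduced here. As a stand-in, here is a minimal sketch of the two files. The host, port, Thing name and key are this example's placeholder values, and the tunnel block's field names are recalled from the EMS reference documentation rather than taken from this post, so verify them against the configuration guide for your EMS version before use:

```json
{
    "ws_servers": [ { "host": "<your-thingworx-server>", "port": 8080 } ],
    "appkey": "<the ApplicationKey created earlier>",
    "auto_bind": [ { "name": "EdgeThing" } ],
    "tunnel": {
        "buffer_size": 8192,
        "read_timeout": 10,
        "idle_timeout": 300000
    }
}
```

And the config.lua entry, whose script name must match the RemoteThingWithTunnels name on the platform:

```lua
-- EdgeThing must match the Thing name created on the platform
scripts.EdgeThing = {
    file = "thing.lua",
    template = "<your template name>"  -- a template under etc/custom/templates or etc/thingworx/templates
}
```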
Installing a VNC Viewer and Server

The next series of steps covers configuring a VNC server on the EMS machine and a VNC client on the computer you are using to connect to the server. For this example I will be using the packages tightvncserver, xfce4, xfce4-goodies, and vnc4server on the Ubuntu machine that hosts the EMS, and I will be using the tightvnc viewer available for download here.

The following steps describe how to configure the Ubuntu machine so that it will be ready to accept VNC requests. I want to note that I am specifically using a 64-bit Ubuntu 14.04 LTS OS.

1. Run the following commands:

```bash
sudo apt-get update
sudo apt-get install xfce4 xfce4-goodies tightvncserver
```

2. Run vncserver and you will be prompted to set up a password. I used "password" to keep it simple, but you will want to use something relatively secure.
3. We will want to kill this instance right away so we can proceed with further configuration:

```bash
vncserver -kill :1
```

4. Make a backup of the xstartup file in case things go awry:

```bash
mv ~/.vnc/xstartup ~/.vnc/xstartup.bak
```

5. Create a new xstartup file to proceed with the setup:

```bash
nano ~/.vnc/xstartup
```

6. Insert the following commands into the file; they will be exercised every time the server starts or is restarted:

```bash
#!/bin/bash
xrdb $HOME/.Xresources
startxfce4 &
```

The first command in the file tells the VNC GUI framework to reference the .Xresources file, which is where a user can change VNC settings. The second command launches XFCE -- the graphical software.

7. Ensure that the xstartup file has executable privileges:

```bash
sudo chmod +x ~/.vnc/xstartup
```

8. Start the server back up with vncserver

For the machine that will be used to view the Mashup, install the tightvnc viewer from the link mentioned above. You should double-click the tightvnc-jviewer.jar file to run the viewer application now, so it is up and ready for the Establishing a Tunnel section.

Creating the RemoteAccess Mashup

This next portion of the tutorial covers creating the Mashup that will be used by any user who wants to remote into the Edge device.

1. Go to the Composer Home and open the Mashup menu option on the left side of the screen.
2. Add a new Static or Dynamic Mashup.
3. Drag and drop a RemoteAccessWidget onto the Mashup.
4. Click on the RemoteAccessWidget and modify the RemoteThingName, TunnelName, and AcceptSelfSignedCertificates properties for the connection:
   - The RemoteThingName is the name of the Edge Thing the remote device is bound to.
   - The TunnelName is the name of the tunnel we added to the Edge Thing in the Configuration screen.
   - AcceptSelfSignedCertificates is only used when using an SSL connection with self-signed certs.
5. View the Mashup; the RemoteAccess widget should have a green plus sign on it if the connection from the EMS to the platform is up and connected.

Establishing a Tunnel

The following section is the last part of the process, where we actually establish a tunnel between the client, platform, and remote device.

1. Open the Mashup with the RemoteAccess widget if you closed it.
2. Click on the RemoteAccess widget to begin the wsadapter.jnlp download.
3. Once that has completed, click on the wsadapter.jnlp file to run it. Keep in mind that there is a default 90-second timeout defined in the TunnelSubsystem that will render the wsadapter.jnlp file useless, and you will have to download a new one if the connection is not established within that timeframe.
4. If you receive an error message saying that the thingworx-tunnel-launcher.jar was unable to be found at that address, you may need to revisit the TunnelSubsystem configuration options for your server.
5. If you receive a Java security error message, you will need to modify your security settings in your Java options.
This is done by opening Configure Java, navigating to the Security tab, and then adding your ThingWorx server's IP and port to the site list via the Edit Site List... button.

6. Upon successfully finding the thingworx-tunnel-launcher.jar file, you should receive a Security Warning message; click the Run button and check "I accept the risk and want to run this application".
7. A pop-up will be seen letting you know the tunnel is now open for tightvnc to connect through. Do not click OK; instead, please proceed to the next step. Clicking OK will close the tunnel if you have not connected to the EMS via the VNC viewer yet.
8. Open tightvnc-jviewer.jar and type in the corresponding host and port that the VNC connection should be established to: localhost and port 16345 are used because we have already established a connection to the EMS and it is listening for a VNC connection on port 16345 -- per the ThingWorx pop-up we just saw.
9. Click Connect and a new window should appear showing the GUI environment of your Ubuntu server.
I had just finished writing an integration test that needed to update a Thing on a ThingWorx server using only classes in the Java JDK, with as few dependencies as possible, and before I moved on I thought I would blog about this example, since it makes a great starting point for posting data to ThingWorx. ThingWorx has a Java SDK, which uses the WebSockets protocol; you can download it from the ThingWorx IoT Marketplace, and it offers great performance and far more capabilities than this example. If you are looking, however, for the simplest, minimum-dependency example of delivering data to ThingWorx, this is it.

This example uses the REST interface to your ThingWorx server. It requires only classes already found in your JDK (JDK 7) and optionally includes the JSON Simple jar. References to this jar can be removed if you want to create your property update JSON object yourself. Below is the Java class:

```java
package com.thingworx.rest;

import org.json.simple.JSONObject;

import javax.net.ssl.HostnameVerifier;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSession;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.net.HttpURLConnection;
import java.net.URL;
import java.security.KeyManagementException;
import java.security.NoSuchAlgorithmException;
import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;

/**
 * Author: bill.reichardt@thingworx.com
 * Date: 4/22/16
 */
public class SimpleThingworxRestPropertyUpdater {

    static {
        // Disable All SSL Security Testing (Not for production!)
        try {
            disableSSLCertificateChecking();
        } catch (Exception e) {
            e.printStackTrace();
        }
        HttpsURLConnection.setDefaultHostnameVerifier(new HostnameVerifier() {
            public boolean verify(String hostname, SSLSession session) { return true; }
        });
    }

    public static void main(String[] args) {
        // like http://localhost:8080 or https://localhost:443
        String serverUrl = args[0];
        // Generate one of these from the composer under Application Keys
        String appKey = args[1];
        String thingName = args[2];

        // You don't have to use the Simple JSON class, just pass a JSON string to restUpdateProperties()
        // This Thing has three properties, a (NUMBER), b (STRING) and c (BOOLEAN)
        JSONObject properties = new JSONObject();
        properties.put("a", new Integer(100));
        properties.put("b", "My New String Value");
        properties.put("c", true);
        String payload = properties.toJSONString();

        try {
            int response = restUpdateProperties(serverUrl, appKey, thingName, payload);
            System.out.println("Response Status=" + response);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public static int restUpdateProperties(String serverUrl, String appKey, String thingName, String payload) throws IOException {
        String httpUrlString = serverUrl + "/Thingworx/Things/" + thingName + "/Properties/*";
        System.out.println("Performing HTTP PUT request to " + httpUrlString);
        System.out.println("Payload is " + payload);
        URL url = new URL(httpUrlString);
        HttpURLConnection httpURLConnection = (HttpURLConnection) url.openConnection();
        httpURLConnection.setUseCaches(false);
        httpURLConnection.setDoOutput(true);
        httpURLConnection.setRequestMethod("PUT");
        httpURLConnection.setRequestProperty("Content-Type", "application/json");
        httpURLConnection.setRequestProperty("appKey", appKey);
        OutputStreamWriter out = new OutputStreamWriter(httpURLConnection.getOutputStream());
        out.write(payload);
        out.close();
        // Reading the input stream forces the request to execute
        httpURLConnection.getInputStream();
        return httpURLConnection.getResponseCode();
    }

    /**
     * Disables the SSL certificate checking for new instances of {@link HttpsURLConnection}.
     * This has been created to aid testing on a local box, not for use on production.
     */
    private static void disableSSLCertificateChecking() throws KeyManagementException, NoSuchAlgorithmException {
        TrustManager[] trustAllCerts = new TrustManager[] { new X509TrustManager() {
            public X509Certificate[] getAcceptedIssuers() {
                return null;
            }
            public void checkClientTrusted(X509Certificate[] arg0, String arg1) throws CertificateException {}
            public void checkServerTrusted(X509Certificate[] arg0, String arg1) throws CertificateException {}
        } };
        SSLContext sc = SSLContext.getInstance("TLS");
        sc.init(null, trustAllCerts, new java.security.SecureRandom());
        HttpsURLConnection.setDefaultSSLSocketFactory(sc.getSocketFactory());
    }
}
```

When run, it prints the target URL, the JSON payload and the response status:

```
Performing HTTP PUT request to https://localhost:443/Thingworx/Things/SimpleThing/Properties/*
Payload is {"a":100,"b":"My New String Value","c":true}
Response Status=200
```

I have attached the full Gradle project that builds and runs this example class as a zip file to this article. When you download it, if you have Java JDK 7 already installed and on your path, you can run the example with the command:

On Linux or OSX: ./gradlew simplerest
On Windows: gradlew.bat simplerest

Don't forget to edit the build.gradle file to use your server's URL and application key. You will also find the Thing used in this example in the entities folder of this project, and you can import it on your server to test it out. It is a Thing based on GenericThing with three properties: a (NUMBER), b (STRING) and c (BOOLEAN).
The ThingWorx team would like to be one of the first to wish you a happy holiday season.

To kick it off, I'll share the top three things we're thankful for:

1. Our customers—you! Thank you for all that you do.
2. The entire team behind ThingWorx. Thank you for being such a fun team to work with!
3. Our robust platform—recognized by Gartner as a leader in industrial IoT platforms.

Launching today, 9.1 features a cornucopia of new functionality, including:

- Runtime support for AWS Corretto OpenJDK 11, now available to help you save Oracle Java licensing costs
- A new Project View in Composer
- Pareto charts, auto refresh, pagination and chip-based data filter web components
- Enhancements to auditing, Thing Group UI & export to source control
- Asset Advisor scale improvements
- Operator Advisor process plan caching
- Azure AD integration with Solution Central to manage users
- ARM templates, now available to allow you to easily deploy the necessary ThingWorx infrastructure on Azure

Read the release notes and download 9.1 from the Software Downloads Page today!

Stay connected & happy holidays,
Kaya
The Property Set Approach

This article details an approach developed by Prachi Rath and Roy Clarke, refined by the EDC team in December 2019's Remote Monitoring of Assets Reference Benchmark, and used to handle multi-property business rules in an Enterprise ThingWorx application.

Introduction

If there are logic rules which depend upon multiple properties, and each property receives its updates one at a time, then each property will need to have an identical subscription, because there is no way for any one subscription to know the most up-to-date values of the other properties. This inefficient approach would create redundancy and sizing constraints, reducing the capacity of the application to scale up to the Enterprise level. The Property Set Approach resolves this issue by sending in all property updates as one Info Table or JSON property (called the "property set"), which can then have a single subscription. The property set is assembled on the Edge when an update needs to be sent, and the Platform then dissects, processes, and stores the data within this property set as required by the business logic.

This approach also involves caching the last property value into a runtime variable so that it can be referenced within the business logic subscription without having to be retrieved from the database. This can significantly improve the runtime of the subscription, reducing the number of resources required to sustain the business logic and ensuring that any alerts or events resulting from the business logic occur as soon as possible. It also reduces the load on the database, ensuring that data ingestion can complete unhindered.

So, while there are many benefits to this approach, it is also more complicated. It tightly couples the development of the Edge and Platform code and increases the application complexity, making it slightly harder to maintain the application in the long run. The property set also requires a little more bandwidth and a more stable internet connection between Edge devices and the Platform, since there is more metadata in an Info Table property, and therefore every update is slightly larger than it would be otherwise. So this approach is only recommended when multi-property rules are a requirement of the application and a stable internet connection exists between the Edge and Platform.

Platform Implementation

I. Create an Info Table (or JSON) Property

This tutorial uses the out-of-the-box Data Shape called NamedVTQ for the Info Table property, which is defined on a Thing Template as a remote property. It is important that this is not marked as persistent or logged, as the purpose is to reduce the amount of database writes and reads required by the Platform. The Info Table property has the following property definition:

```xml
<PropertyDefinition aspect.dataChangeType="ALWAYS" aspect.dataShape="NamedVTQ" aspect.isPersistent="false" baseType="INFOTABLE" isLocalOnly="false" name="numberPropertySetAsInfotable"/>
```
II. Create a Data Change Event Subscription for the Info Table Property

The subscription has three parts:

1. Cache the last value for the property in a runtime variable
2. Start off the business rules processing, sending in the whole Info Table
3. Send the Info Table to be logged as individual local properties in the database

```javascript
// First step caches the last value, refer to the next step…

// Second step sets off the business rules processing with the Info Table
me.ScaleTestBusinessRuleForPropertySetAsInfotable({
    PropertySetAsInfotable: eventData.newValue.value
});

// Third step sends the Info Table as one property into a service which parses it into the
// individual properties, updating both the runtime properties on the remote thing and the database
me.UpdatePropertyValues({
    values: eventData.newValue.value /* INFOTABLE */
});
```

III. Set-Up Caching

Each property which needs to be cached should be created on the Thing Template level and named in a similar way, say by placing the word "Last" at the end, such as "Property1" => "Property1Last", "Property2" => "Property2Last", etc. This property should NOT be logged or persistent, as the point of this is to store the most recent value in memory, removing any superfluous dependency on database queries in the process. Note that while storing the property in runtime memory makes it much more accessible, it also means that the property needs to be rewritten manually upon Platform restart. Additional code (not provided here, but see the sketch after the two options below) must be written to populate these properties from the database upon application start-up.

The following code should be placed in the data change event subscription (option 1 in the case where only a few properties need caching, or option 2 if every property value needs to be cached):

Option 1: Some but Not All Properties Need Caching

```javascript
// Names of properties for which you want to cache the last value
var propertyNames = ['number1', 'number2'];

// Loop through the properties and cache their time if they are found in the property set
propertyNames.map(assignLast);

// This function can be split into two functions for Age and Last separately if need be
function assignLast(propertyName) {
    logger.debug("Looping for property -> " + propertyName);
    var searchprop = new Object();
    searchprop.name = propertyName;
    property = eventData.newValue.value.Find(searchprop);
    if (property) {
        logger.debug("Found Row. Name= " + property.name);
        var lastPropertyName = propertyName + "Last";
        if (property.value) {
            // Set the cache property on me, this entity, to the current property value
            me[lastPropertyName] = me[propertyName];
        }
    } else {
        logger.debug("Property Not Found in property set -> " + propertyName);
    }
}
```

Option 2: All Properties Need Caching

```javascript
var rowCount = eventData.newValue.value.getRowCount();
for (var i = 0; i < rowCount; i++) {
    logger.warn("property name->" + eventData.newValue.value[i].name + "----- property new value->" + eventData.newValue.value[i].value.value);
    var propertyName = eventData.newValue.value[i].name;
    var lastPropertyName = propertyName + "Last";
    me[lastPropertyName] = me[propertyName];
    logger.warn("done last subscription, last property value for lastPropertyName" + me[lastPropertyName]);
}
```
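As promised above, here is a minimal sketch of the start-up repopulation step. It assumes the individual properties are logged to a value stream (step 3 of the subscription above does this via UpdatePropertyValues) and uses the standard QueryPropertyHistory service; the property names and the "Last" suffix follow the examples above, so adjust both to your own model:

```javascript
// Sketch: run once at application start-up (e.g. from a ThingStart subscription)
// to seed the runtime cache properties after a Platform restart.
var propertyNames = ['number1', 'number2'];

// Pull the most recent logged row for this Thing; one row is enough to seed the cache
var history = me.QueryPropertyHistory({
    maxItems: 1,
    oldestFirst: false
});

if (history.getRowCount() > 0) {
    var latest = history.rows[0];
    propertyNames.map(function (propertyName) {
        // Only seed the cache when the property was present in the logged row
        if (latest[propertyName] !== undefined && latest[propertyName] !== null) {
            me[propertyName + "Last"] = latest[propertyName];
        }
    });
}
```

Note that QueryPropertyHistory returns rows by timestamp, so with maxItems: 1 a property that was not updated in the most recent row will remain unseeded; per-property history queries are the more thorough, if slower, alternative.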
Useful Platform Code Snippets

I. Age Calculation

```javascript
var date1 = new Date();
var date2 = me.GetPropertyTime({
    propertyName: propertyName /* STRING */
});
var result = millisToMinutesAndSeconds(dateDifference(date1, date2));

// This function converts from an unintelligibly large number in milliseconds to something formatted in minutes and seconds
function millisToMinutesAndSeconds(millis) {
    var minutes = Math.floor(millis / 60000);
    var seconds = ((millis % 60000) / 1000).toFixed(0);
    return (seconds == 60 ? (minutes + 1) + ":00" : minutes + ":" + (seconds < 10 ? "0" : "") + seconds);
}
```

II. Sort the Info Table by Time

```javascript
var params = {
    sortColumn: "time" /* STRING */,
    t: me.propertySet /* INFOTABLE */,
    ascending: ascending /* BOOLEAN */
};
var result = Resources["InfoTableFunctions"].Sort(params);
```

III. Search the Info Table for a Property

```javascript
var searchprop = new Object();
searchprop.name = propertyName;
property = PropertySetAsInfotable.Find(searchprop);
if (property === null) {
    logger.info("Property Not Found -> " + propertyNumber1);
} else {
    logger.info("Found Row. Name= [" + property.name + "], value= " + property.value.value);
}
```

Edge Implementation

This example implementation uses the .NET Edge SDK to build a property set Info Table at the Edge.

I. Define the Data Shape

A standard Data Shape is used (NamedVTQ), but because this Data Shape is not exposed in the Edge SDK code, it has to be created manually:

```csharp
// Data Shape definition for NamedVTQ
FieldDefinitionCollection namedVTQFields = new FieldDefinitionCollection();
namedVTQFields.addFieldDefinition(new FieldDefinition(CommonPropertyNames.PROP_NAME, BaseTypes.STRING));
namedVTQFields.addFieldDefinition(new FieldDefinition(CommonPropertyNames.PROP_VALUE, BaseTypes.VARIANT));
namedVTQFields.addFieldDefinition(new FieldDefinition(CommonPropertyNames.PROP_TIME, BaseTypes.DATETIME));
namedVTQFields.addFieldDefinition(new FieldDefinition(CommonPropertyNames.PROP_QUALITY, BaseTypes.STRING));
base.defineDataShapeDefinition("NamedVTQ", namedVTQFields);
```

II. Define the Info Table Property

The property defined should NOT be logged or persistent, and it can be read-only, since data is always pushed from the Edge and read from the server cache when accessed on the Platform. Note that the push type of the Info Table property MUST be set to "ALWAYS" (if set to "VALUE", the data change event will only fire if the number of rows changes):

```csharp
// Property Set Definitions
[ThingworxPropertyDefinition(
    name = "DevicePropertySet",
    description = "Alternative representation of properties as an Info Table for rules processing",
    baseType = "INFOTABLE",
    category = "Status",
    aspects = new string[] {
        "isReadOnly:true",
        "isPersistent:false",
        "isLogged:false",
        "dataShape:NamedVTQ",
        "cacheTime:0",
        "pushType:ALWAYS"
    }
)]
```

III. Define a Property to Store the GOOD Quality Status

```csharp
private static String QUALITY_STATUS_GOOD = QualityStatus.GOOD.name();
```

IV. Define Functions to Populate the Value Collections

An Info Table is really just made up of many Value Collections, where each Value Collection is considered a row. These helper functions take in the name and value of a property and return a Value Collection object which can be added to the property set Info Table.
```csharp
public ValueCollection createNumberValueCollection(String name, double value)
{
    ValueCollection vc = new ValueCollection();
    // Add quality and time entries to the Value Collection
    vc.SetStringValue(CommonPropertyNames.PROP_QUALITY, QUALITY_STATUS_GOOD);
    vc.SetDateTimeValue(CommonPropertyNames.PROP_TIME, new DatetimePrimitive(DateTime.UtcNow));
    vc.SetStringValue(CommonPropertyNames.PROP_NAME, name);
    vc.SetNumberValue(CommonPropertyNames.PROP_VALUE, value);
    return vc;
}

public ValueCollection createBooleanValueCollection(String name, Boolean value)
{
    ValueCollection vc = new ValueCollection();
    // Add quality and time entries to the Value Collection
    vc.SetStringValue(CommonPropertyNames.PROP_QUALITY, QUALITY_STATUS_GOOD);
    vc.SetDateTimeValue(CommonPropertyNames.PROP_TIME, new DatetimePrimitive(DateTime.UtcNow));
    vc.SetStringValue(CommonPropertyNames.PROP_NAME, name);
    vc.SetBooleanValue(CommonPropertyNames.PROP_VALUE, value);
    return vc;
}
```

V. Build the Property Set

Call this code from the processScanRequest method to build the property set:

```csharp
// Create an instance of a new Info Table using the standard "NamedVTQ" Data Shape
InfoTable propertySet = new InfoTable(getDataShapeDefinition("NamedVTQ"));

// Set name/value for Temperature using convenience function
propertySet.addRow(createNumberValueCollection("Temperature", temperature));

// Set name/value for Pressure using convenience function
propertySet.addRow(createNumberValueCollection("Pressure", pressure));

// Set name/value for TotalFlow using convenience function
propertySet.addRow(createNumberValueCollection("TotalFlow", this._totalFlow));

// Set name/value for InletValve using convenience function
propertySet.addRow(createBooleanValueCollection("InletValve", inletValveStatus));

// Set name/value for FaultStatus using convenience function
propertySet.addRow(createBooleanValueCollection("FaultStatus", faultStatus));

// Set the property set Info Table property
base.setProperty("DevicePropertySet", propertySet);
```

VI. Update the Subscribed Properties

These two lines of code update the properties and events, actually sending the property set (containing all property updates) to the Platform:

```csharp
base.updateSubscribedProperties(15000);
base.updateSubscribedEvents(60000);
```

Conclusion

Following these steps will enable the Edge to build a property set before sending any property updates to the Platform. The Platform can then rely on caching to process the business logic with no database dependency, which is faster and more efficient than any other approach. Finally, the updates are still written to the database, so in the end there is no functional difference between using a property set and binding each property individually. Please don't hesitate to comment here with any questions about this approach.
Here is a tutorial explaining the process of uploading a PMML file from an external system to ThingWorx Analytics. The tutorial steps are explained in the attached PDF, and all referenced files can be found in the attached ZIP.
There are times when the raw sensor readings are not directly useful for monitoring conditions on a machine. The raw data may need to be transformed before it can provide value within your monitoring applications. For example, instead of monitoring individual pressure readings reported each second, you may only be concerned with the maximum pressure reading each minute. Or, maybe you want to monitor the median value of the electrical current pulled by a machine every five seconds to smooth out the noise of raw sub-second sensor readings. Or, maybe you want to monitor whether the average hourly temperature of a machine exceeds a control limit in 2 of the past 3 hours.

Let's take the example of monitoring the max pressure of a valve over the past 45 seconds for your performance dashboard. How do you do it? Today, you might add a new property (e.g. "MaxPressure") to your valve Thing. Then, you might add a subscription that triggers when the Pressure property value changes, and then call a service FindMax() to return the maximum pressure for that time interval. Lastly, you might write that maximum result value to the new property MaxPressure to store it and visualize it in the dashboard. Admittedly, not the worst process, but also not the most efficient (a sketch of this workaround follows at the end of this post).

Coming in 8.4, we will now offer Property Transforms, which enable you to automatically execute common statistical calculations—like min, max, average, median, mode and standard deviation, as well as SPC calculations—directly within a property itself. These transforms are configurable to run at certain intervals of time or points collected and can also be used with our alerting subsystem to drive behavior and user action where necessary. There is no longer a need to create an elaborate subscription-based logic flow just to do simple calculations! This is just another way that ThingWorx 8.4 offers a more productive environment for IoT developers than ever before.

Ready to see it in action? Check out the video below by our product manager Mark!

Comment your thoughts below!

Stay connected,
Kaya
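P.S. For anyone who wants to see the old workaround spelled out, here is a rough sketch of the subscription logic described above. It is illustrative only: it assumes the Thing logs Pressure to a value stream, MaxPressure is the helper property from the example, and the inlined loop plays the role of the hypothetical FindMax() service.

```javascript
// Sketch of a pre-8.4 DataChange subscription on the valve Thing's Pressure property.
// Computes the max pressure over the past 45 seconds and stores it in MaxPressure.
var now = new Date();
var windowStart = new Date(now.getTime() - 45000);

// Requires Pressure to be logged to a value stream
var history = me.QueryNumberPropertyHistory({
    propertyName: "Pressure",
    startDate: windowStart,
    endDate: now,
    oldestFirst: false
});

// Inline equivalent of the FindMax() helper mentioned above;
// each history row is assumed to carry the reading in its "value" field
var max = null;
for (var i = 0; i < history.getRowCount(); i++) {
    var reading = history.rows[i].value;
    if (max === null || reading > max) {
        max = reading;
    }
}

// Persist the result for the dashboard to bind against
if (max !== null) {
    me.MaxPressure = max;
}
```

With 8.4's Property Transforms, all of this collapses into a configuration option on the property itself.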