IoT Tips

I have created a mashup which allows you to easily use and test the Prescriptions functionality in ThingWorx Analytics (TWA). This is where you choose one or more fields for optimization, and TWA tells you how to adjust those fields to get an optimal outcome. The functionality is based on a public sample dataset for concrete mixtures; full details are included in the attached documentation.
Welcome to the ThingWorx Manufacturing Apps Community! The ThingWorx Manufacturing Apps are easy-to-deploy, pre-configured, role-based starter apps that are built on PTC's industry-leading IoT platform, ThingWorx. These apps provide manufacturers with real-time visibility into operational information, improved decision making, accelerated time to value, and unmatched flexibility to drive factory performance.

This Community page is open to all users, including licensed ThingWorx users, Express ("freemium") users, and anyone interested in trying the apps. Tech Support community advocates serve users on this site and are here to answer your questions about downloading, installing, and configuring the ThingWorx Manufacturing Apps.

A. Sign up
ThingWorx Manufacturing Apps Community: PTC account credentials are needed to participate in the ThingWorx Community. If you have not yet registered a PTC eSupport account, start with the Basic Account Creation page.
Manufacturing Apps Web portal: Register a login for the ThingWorx Manufacturing Apps web portal, where you can download the free trial and navigate to the additional resources discussed below.

B. Download
Choose a download/packaging option to get started.
i. Express/Freemium Installer (best for users who are new to ThingWorx): If you want to quickly install the ThingWorx Manufacturing Apps (including ThingWorx), use the following installer: Download the Express/Freemium Installer
ii. 30-day Developer Kit trial: To experience the capabilities of the ThingWorx Platform with the Manufacturing Apps and create your own apps: Download the 30-day Developer Kit trial
iii. Import as a ThingWorx Extension (for users with a Manufacturing Apps entitlement, including ThingWorx commercial customers, PTC employees, and PTC Partners): The ThingWorx Manufacturing Apps can be imported as ThingWorx extensions into an existing ThingWorx Platform install (v8.1.0). To locate the download, open the PTC Software Download Page and expand the following folders: ThingWorx Platform | Release 8.x | ThingWorx Manufacturing Apps Extension | Most Recent Datacode

C. Learn
After downloading the installer or extensions, begin with installation and configuration.
Follow the steps laid out in the ThingWorx Manufacturing Apps Setup and Configuration Guide 8.2.
Find helpful getting-started guides and videos available within the 'Get Started' section of the ThingWorx Manufacturing Apps Portal.

D. Customize
Once you have successfully downloaded, installed, and configured the Manufacturing Apps, begin to explore the deeper potential of the apps and the ThingWorx Platform.
Follow along with the discussion and steps contained in the ThingWorx Manufacturing Apps and Service Apps Customization Guide 8.2.
Also contained within the 'Get Started' page of the ThingWorx Manufacturing Apps Portal is the "Evolve and Expand" section, featuring:
- Custom Plant Layout application
- Custom Asset Advisor application
- Global Plant View application
- ThingWorx Manufacturing Apps Technical Lab with Sigma Tile (Raspberry Pi application)
- Configuring the apps with a demo data set and simulator
- Additional advanced documentation

E. Get help / give feedback / interact
Use the ThingWorx Manufacturing Apps Community page as a resource to find documentation, peruse past forum threads, or post a question to start a discussion! For advanced troubleshooting, licensed users are encouraged to submit support tickets to the PTC My eSupport portal.
Overview
REST stands for representational state transfer and is a software architectural style common in the World Wide Web. Anything with a RESTful interface can be communicated with using standard REST syntax. ThingWorx has such an interface built in to make viewing and updating Thing properties, as well as executing services, easy to do independently of the Web UI.

How to Use the REST API
The ThingWorx REST API is entirely accessible via URL. Calls follow the pattern shown in the examples below, e.g. http://<server>/Thingworx/Things/<ThingName>/Services/<ServiceName>?method=POST for executing a service. (Precision LMS. Getting Started With ThingWorx 5.4 (Part 1 of Introduction to ThingWorx 5.4). PTC University. https://precisionlms.ptc.com/viewer/course/en/21332822/page/21332905.)

As an example, a service called "GetBlogEntriesWithComments" found on the "ThingWorxTrainingMaintenanceBlog" Thing can be executed this way. Notice that even though this service gets XML-formatted data, the method is type "POST" and "GET" will not work in this scenario (further reading: https://support.ptc.com/appserver/cs/view/solution.jsp?n=CS214689&lang=en_US).

In order to be able to run REST API calls from the browser, one must allow request method switching. This can be enabled by checking the box "Allow Request Method Switch" in PlatformSubsystem (further reading: https://support.ptc.com/appserver/cs/view/solution.jsp?n=CS224211&lang=en_US).

Access the REST API from Postman
Postman is a commonly used REST client which can ping servers via REST API in a manner which mimics third-party software. It is free and easy to use, with a full tutorial located here: https://www.getpostman.com/docs/

In order to make a request, populate the URL field with a properly formatted REST API call (see the previous section). Parameters will not automatically be URL-encoded, but right-clicking on a highlighted portion of the URL and selecting EncodeURIComponent encodes the section.

Next, click the Headers tab. Here is where the content-type, accept, and authorization are set for the REST call. Accept refers to which response format the REST call is expecting, while content-type refers to the format of the request being sent to the server. Authorization is required for accessing ThingWorx, even via the REST API (see the previous section for examples authenticating using an app key; in Postman you can also use Basic Auth with a username and password).

In Postman, there is also ample opportunity to modify the request body under the Body tab. There are several options here for setting parameters. Form-data and x-www-form-urlencoded both allow for setting key-value pairs easily and cleanly, and in the latter case, encoding occurs automatically (e.g. "Hello World" becomes %22Hello%20World%22). Raw request types can contain anything, and Postman will not touch anything entered except to replace environment variables. Whatever is placed in the text area under raw will get sent with the request (normally XML or JSON, as specified by content-type). Finally, binary allows for sending things which cannot normally be entered into Postman, e.g. image, text, or audio files.
REST API Examples
For introductory-level examples, see the previous blog document found here: https://community.thingworx.com/docs/DOC-3315

Retrieving property values from "MyThing" using GET, the default method type (notice how no "method=GET" is required here, though it would still work with that as well):
http://localhost/Thingworx/Things/MyThing/Properties/

Updating "MyProperty" with the value "hello" on "MyThing" using PUT:
http://localhost/Thingworx/Things/MyThing/Properties/MyProperty?method=PUT&value=hello

In Postman, you can send multiple property updates at once via the query body (in this case updating all of the properties, the string "Prop1" and the number "Prop2", on MyThing):
Query: http://localhost/Thingworx/Things/MyThing/Properties/*
Query Type: PUT
Query Headers:
Content-Type: application/json
Authorization: Basic Auth (input username and password on the Authorization tab and this will auto-populate)
Body (JSON): {"Prop1":"hello world","Prop2":10}
Note: you can also specify multiple properties as shown, but only update one at a time in Postman by utilizing the browser syntax given above.

Calling "MyService" (a service on "TestThing") with a String input parameter ("InputString"):
http://localhost/Thingworx/Things/TestThing/Services/MyService?method=post&InputString=input

It is easier to pass things like XML and JSON into services using Postman. This query calls "MyJSONService" on "MyThing" with a JSON input parameter:
Query: http://localhost/Thingworx/Things/MyThing/Services/MyJSONService
Query Type: POST
Query Headers:
Accept: should match the service output (text/html for String)
Content-Type: application/json
Authorization: Basic Auth (input username and password on the Authorization tab and this will auto-populate)
Body (JSON): {"InputJSON":"{\"JSONInput\":{\"PropertyName\":\"TestingProp\",\"PropertyValue\":\"Test\"}}"}
Body (XML): {"xmlInput": "<xml><name>User1</name></xml>"}

Viewing "BasicMashup" using AppKey authentication (no login is required because this Application Key is set up to log in as a user who has permissions to view the Mashup):
http://localhost/Thingworx/Mashups/BasicMashup?appKey=b101903d-af6f-43ae-9ad8-0e8c604141af&x-thingworx-session=true
Read more here: https://support.ptc.com/appserver/cs/view/solution.jsp?n=CS227935

Downloading log information from "ApplicationLog" (or other log types):
http://localhost/Thingworx/Logs/ApplicationLog/Services/QueryLogEntries?method=POST

In Postman, more information can be passed into some queries via the query body:
Query: http://localhost/Thingworx/Logs/ApplicationLog/Services/QueryLogEntries
Query Type: POST
Query Headers:
Accept: application/octet-stream
Content-Type: application/json
Authorization: Basic Auth (input username and password on the Authorization tab and this will auto-populate)
Body: {"searchExpression":"", "origin":"", "instance":"", "thread":"", "startDate":1462457344702, "endDate":1462543744702, "maxItems":100}

Downloading "MyFile.txt" from the "MyRepo" FileRepository (here, "/" refers to the home folder of this FileRepository and the full path would be something like "C:\ThingworxStorage\repository\MyRepo\MyFolder\MyFile.txt"):
http://localhost/Thingworx/FileRepositoryDownloader?download-repository=MyRepo&download-path=/MyFolder/MyFile.txt

Uploading files to FileRepository-type Things is a bit tricky, as anything uploaded must be Base64-encoded prior to making the service call.
In Postman, this is the configuration used to send a file called "HelloWorld.txt", containing the string "Hello World!", to a folder called "FolderInRepo" on a FileRepository named "MyRepo":
Query: http://localhost/Thingworx/Things/MyRepo/Services/SaveBinary
Query Type: POST
Query Headers:
Accept: application/json
Content-Type: application/json
Authorization: Basic Auth (input username and password on the Authorization tab and this will auto-populate)
Body: {"path" : "/FolderInRepo/HelloWorld.txt", "content" : "SGVsbG8gV29ybGQh"}
Notice here that the content has been encoded to Base64 using a free online service. In most cases, this step can be handled more easily by programming language code, especially for more challenging file content (see the short Node.js sketch at the end of this tip).

Resources and other built-in Things can be accessed in similar fashion to user-created Things. This query searches for Things with the "GenericThing" ThingTemplate implemented:
http://localhost/Thingworx/Resources/SearchFunctions/Services/SearchThingsByTemplate?method=POST&thingTemplate=GenericThing

Deleting "MyThing" (try using services for this instead when possible, since they are likely safer):
http://localhost/Thingworx/Things/MyThing1?method=DELETE&content-type=application/JSON

Exporting all data within ThingWorx using the DataExporter functionality:
http://localhost/Thingworx/DataExporter?Accept=application/octet-stream

Exporting all entities which have the Model Tag "Applications:TestTerm" within ThingWorx using the Exporter functionality:
http://localhost/Thingworx/Exporter?Accept=text/xml&searchTags=Applications:TestTerm
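For the SaveBinary example above, the Base64 step can be scripted instead of using an online service. A minimal Node.js sketch (the file and folder names are the ones from that example; this is an illustration, not an official ThingWorx utility):

// encode a local file and build the SaveBinary request body
const fs = require('fs');

const content = fs.readFileSync('HelloWorld.txt');  // Buffer with the raw file bytes
const encoded = content.toString('base64');         // "SGVsbG8gV29ybGQh" for "Hello World!"

const body = {
    path: '/FolderInRepo/HelloWorld.txt',
    content: encoded
};
console.log(JSON.stringify(body));                  // paste this into the Postman raw body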
Javascript, everyone knows it, at least a little bit. What if I told you that you could do serious data acquisition with just a little bit of Javascript, and that you may already have the tools to do it, right now, on your "off the shelf" device? Node.js is a command line implementation of Javascript that can be run on common, credit-card-sized devices like the Raspberry Pi or the Intel Edison. I suspect that if you already know about Node.js, you may have encountered its non-blocking, asynchronous, "callback" style of programming, which can be a little different than most other languages, which block or wait for commands to complete. While this can be a benefit for increasing performance, it can also be a barrier to entry for new users. This is the problem that Node Red really solves.

Node Red is a web based Integrated Development Environment (IDE) that turns the "callback" style Javascript programming of Node.js into a series of interconnected nodes, each node of which represents a Javascript function which is connected by a callback to another node/function. A simple hello world program in Node Red would look something like this (with annotations in red). You can re-create this program using the Node Red IDE yourself. Here is a brief video (with no sound) which should familiarize you with how to create your own hello world flow. Video Link: 1333

How can you install Node Red on your own system to try it out? The good news is, if you have a Raspberry Pi 2 with NOOBS installed on it, Node.js and Node Red come pre-installed. If you do not already have it installed, or want to install it on your own system, it is still pretty simple. Here are the steps:
1. Download and install Node.js (https://nodejs.org/en/download/)
2. Run the command: sudo npm install -g --unsafe-perm node-red
   Omit the sudo on Windows (see http://nodered.org/docs/getting-started/installation.html for more info)
3. You now have Node Red. To run it, just type node-red on your command line.
4. Using your web browser, go to http://localhost:1880 and the Node Red IDE will appear in your browser.

How about a real hardware integration example? Node Red comes with many built-in nodes and many more nodes you can add to connect to specific peripherals you may have on your device. Rather than provide a complete tutorial on Node Red, I will focus on using this IDE to re-create a hardware integration that I created in the past using the Java SDK, the Raspberry Pi, and the AM2302 weather station (see Weather Applications with Raspberry Pi | ThingWorx). That example contains detailed specifics on the attachment of the AM2302 temperature/humidity sensor to your Raspberry Pi. I am going to assume you have the hardware already attached to your Raspberry Pi as described in this tutorial (https://learn.adafruit.com/dht-humidity-sensing-on-raspberry-pi-with-gdocs-logging/overview). I am also assuming that you have installed the python based sample program described in that tutorial as well, and that you now have a python script called "AdafruitDHT.py" installed on your Pi that produces the following output when it is run:

pi@raspberrypi:~/projects/Adafruit_Python_DHT/examples $ sudo ./AdafruitDHT.py 2302 4
Temp=22.3*  Humidity=30.6%
pi@raspberrypi:~/projects/Adafruit_Python_DHT/examples $

If you don't have any of this hardware installed, you can still proceed with this example and just create your own temperature and humidity values manually.
We are going to connect the output of this python script directly to ThingWorx and sample its output value every 5 seconds. I will start by assuming you do not have the AM2302 hardware and create simulated values. I will then replace them with the actual output of the python script as a final step.

Polling versus Interrupt Driven Data Collection
In the Java SDK version of this example, we are polling for changes in data. Every so many seconds our device will wake up and take a reading. How do we recreate the same effect in Node Red without having to push an inject button every 5 seconds? We need an input node that activates on its own every 5 seconds. The Inject node will do this. Drag out an inject node and configure it as shown below. This is an input node, so it will be starting a new flow. It will fire off every 5 seconds from the minute this sheet is deployed.

Simulate Data Collection
Let's generate a random humidity and temperature value before getting the actual data. For this node we will use a Function node. Drag one out and configure it as shown below. Here is the Javascript for this node so you can cut and paste it into this dialog:

var tempF = Math.random() * 40 + 60;
var tempC = (tempF - 32) / 1.8;
var humidity = Math.random() * 80 + 20;
msg.payload = {
    "tempF": tempF,
    "tempC": tempC,
    "humidity": humidity
};
return msg;

Remember that the returned message is the message that the next node will receive. The payload property is the standard or default property of a message that most nodes use to pass data between each other. Here, our payload is an object with all of our simulated data in it.

Let's Test It Out
Connect the two nodes together, add a debug output node, and deploy your sheet. The completed flow will look like this. As soon as you deploy, you should see the following output in your debug tab, and every five seconds another data sample will be generated.

So how does this data get to ThingWorx? What we need to do is take this data and deliver it to ThingWorx in the form of a REST web service call. This is easier to do than it sounds. First off, let's create a Thing on your ThingWorx server that looks like this. Now give it these properties. Next, create an Application Key in the Application Keys section of the Composer. Assign it to the "Administrator" user. Your keyId will of course be different. This key will be the credential you need to post your data.

Installing the ThingRest Node Red Node
To simplify the process of posting the data to ThingWorx, I have created my own custom node to post data. To install a custom node into your Node Red installation, you have to find the directory Node Red is using to store your sheets in. By default this is a directory called ".node-red" in your home directory. On a Raspberry Pi this directory would be /home/pi/.node-red. If you are running Node Red now, quit it by hitting Ctrl-C and cd into the .node-red directory. Below is the sequence of commands you would issue on your Pi to install the ThingRest node:

cd ~/.node-red
npm install git+https://git@github.com/obiwan314/node-red-node-thingrest.git
node-red

The node package manager (npm) will install this new node automatically into your .node-red directory. Now re-run node-red, go back to your browser, and refresh your Node Red IDE. You should now have a "REST Thing" node.

Adding a REST Thing node to your flow
Drag a REST Thing output node into your flow and configure it as shown below.
Remember, your Application Key will be different than the one shown here. Also, your ThingWorx server URL may be different if your server is not on the same machine you are working on. Now connect it as shown below. When you deploy this sheet, you will be posting data to ThingWorx. Go back to your WeatherStation1 Thing in ThingWorx and use the Refresh button shown below to see your data changing.

Wait, that's it? That's the whole data collection program? Yes. The flow above is the equivalent of the Java SDK code from the Java weather station example.

Now for Some Real Data
As promised, we will now replace the simulated data in the Generate Data node with real data obtained from the "~/projects/Adafruit_Python_DHT/examples/AdafruitDHT.py 2302 4" python command on your Raspberry Pi using an Exec node. The exec node can be found at the very bottom of your node palette. It executes a command and returns the results as msg.payload to the next node in the flow. You may have noticed it has three outputs instead of one. In order, these outputs are the standard output, the standard error, and the integer return code of the process. Use the first output to get the results of this command. Now connect this in place of the Generate Data node as shown below.

At this point, we can't connect the collected data to the WeatherStation1 Thing because it is in the wrong format. It is console output, and we need it in the form of a Javascript object. We are going to need a function to parse the console output into a Javascript object. Add the function node shown below. Here is the Javascript for cut and paste convenience:

var temphumidArray = msg.payload.split(" ");
var tempC = parseFloat(temphumidArray[0].replace("*","").split('=')[1]);
var tempF = tempC * 1.8 + 32;
var humidity = parseFloat(temphumidArray[2].replace("%","").replace("\n","").split('=')[1]);
msg.payload = {
    "humidity": humidity,
    "tempF": tempF,
    "tempC": tempC
};
return msg;

Now msg.payload contains a javascript object identical to the one we were generating at random, but now it is using real data. Connect up your nodes so they appear as shown below, but when you deploy, don't expect it to work yet, because there is still one problem you will have to get around: this python script expects to be run as the root user.

How to run Node Red as Root
You can start Node Red as root with the following command:

sudo node-red -u /home/pi/.node-red

Note that the -u argument is required to make sure you keep using the pi user's .node-red directory. If you lose your REST Thing node, you are not using the pi user's .node-red directory, but root's instead. If you see any error messages in your debug window, try re-attaching the debug node to the Collect Data node and see what is being produced by the exec node. Don't forget to verify that your tempC, tempF and humidity properties are updating in ThingWorx.

Let's Add a GPS Location
You may have noticed that there is a stationLocation property on the WeatherStation1 Thing. Let's set that to a fixed location of 40.0568764,-75.6720953,18 to complete this example. Below is the modified Javascript to update in the Parse Data node to add this location.
var temphumidArray = msg.payload.split(" ");
var tempC = parseFloat(temphumidArray[0].replace("*","").split('=')[1]);
var tempF = tempC * 1.8 + 32;
var humidity = parseFloat(temphumidArray[2].replace("%","").replace("\n","").split('=')[1]);
msg.payload = {
    "humidity": humidity,
    "tempF": tempF,
    "tempC": tempC,
    "stationLocation": "40.0568764,-75.6720953,18"
};
return msg;

What's Next?
Node Red has many more nodes that you can add to your project through the use of the npm command. There is a GPIO node library you can install at https://github.com/monteslu/node-red-contrib-gpio which will give you input and output nodes for the GPIO pins on your Pi as well. This library also supports accessing Arduinos attached to the Pi over a USB cable, which expands the possibilities for data collection and peripheral control. Hopefully this article has exposed you to the many other possibilities for connecting devices to your ThingWorx server. The REST Thing node is using the HTTP REST protocol to talk to ThingWorx. In the near future, with the introduction of the ThingWorx Javascript SDK, a Node Red library can be created that uses the ThingWorx AlwaysOn WebSockets protocol to communicate with your ThingWorx server, which will offer even more capabilities and better performance.
The following is valid for ThingWorx Analytics (TWA) 52.0.2 through 8.0. For release 8.3.0 and above, see How to score new data in ThingWorx Analytics 8.3.x.

Overview
The main steps are as follows:
1. Create a dataset
2. Configure the dataset
3. Upload data to the dataset
4. Optimize the dataset
5. Create filters for training and scoring data
6. Train the model
7. Execute scoring on existing data
8. Upload new data to the dataset
9. Execute scoring on new data

TWA models are dataset-centric, which means a model created with one dataset cannot be reused with a different dataset. In order to be able to score new data, a specific feature, record purpose in the below example, is included in the dataset. This feature needs to be included from the beginning, when the data is first uploaded to TWA. A filter on that feature can then be created to isolate the desired data. When new data comes in, it is added to the original dataset but with a specific value for the filtered feature (record purpose), which makes it possible to discriminate and score only those new records.

Process
1. Create a dataset
This example uses the beanpro demo dataset. Creating the dataset is done through a POST on the datasets REST API.
2. Configure the dataset
This is done through a POST on the <dataset>/configuration REST API.
3. Upload data
4. Optimize the dataset
5. Create filters
The dataset includes a feature named record purpose, created especially to differentiate between the rows to be used for training and the rows to be used for scoring. New data to be added will have record purpose set to scoringnew, which allows a scoring job to be executed that is limited to those filtered new rows. Create a filter for the training data and a filter for the new scoring data.
6. Train the model
This is done through a POST on the <dataset>/prediction API.
7. Score the training data
This is done through a POST on the <dataset>/predictive_scores API. Note the use of the filter TrainingData created earlier. This scores only the rows with training as the value for the record purpose feature. Note: scoring could also be done without a filter at this stage, in which case all the data in the dataset will be scored, not just the rows with training for record purpose. Retrieving the scoring result shows all the records in the dataset.
8. Upload new data
The newly uploaded CSV file should only contain new records. These will be appended to the existing ones. Note that the new record (there could be more than one) has the value scoringnew for the record purpose feature. This allows the previously created filter ScoringNewData to be used, so that a new scoring job will only take this new record into account.
9. Score new data
A POST on the predictive_scores API is executed, this time using the filter ScoringNewData. This results in only the newly added data being scored, and therefore a much quicker execution time too. Retrieving the scoring result shows only the new record.
Original Post Date: June 6, 2016

Description: This is a video tutorial on creating an InfoTable through a service, creating and adding a Data Shape, adding rows to the InfoTable through a service, adding rows to an InfoTable property, and querying the InfoTable.
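For reference, here is a minimal sketch of the pattern the video walks through; the DataShape name "MyDataShape" and its fields "name"/"value" are hypothetical placeholders:

// create an InfoTable from a DataShape and add a row to it
var params = {
    infoTableName: "InfoTable" /* STRING */,
    dataShapeName: "MyDataShape" /* DATASHAPENAME */
};
var myInfoTable = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);

// the row fields must match the DataShape (here: "name" STRING, "value" NUMBER)
var newRow = new Object();
newRow.name = "sensor-01";
newRow.value = 42;
myInfoTable.AddRow(newRow);

// return the table from the service (service output baseType INFOTABLE)
var result = myInfoTable;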
In this blog I will be testing the SAPODataConnector using the SAP Gateway - Demo Consumption System.

Overview
The SAPODataConnector enables the connection to the SAP NetWeaver Gateway through the OData specification. It is a specialized implementation of the ODataConnector. See Integration Connectors for documentation.

It relies on three components:
Integration Runtime: a microservice that runs outside of ThingWorx and has to be deployed separately; it uses WebSocket to communicate with the ThingWorx platform (similar to EMS).
Integration Subsystem: available by default since 7.4 (no extension needed).
Integration Connector: SAPODataConnector, available by default in 8.0 (no extension needed).

ThingWorx can use OAuth to access SAP, but in this blog I will just use basic authentication.

SAP NetWeaver Gateway demo system registration
1. Create an account on the Gateway demo system (the credentials to be used on the connector are sent by email).
2. Verify that the account has access to the basic OData sample service: https://sapes4.sapdevcenter.com/sap/opu/odata/IWBEP/GWSAMPLE_BASIC/

Integration Runtime microservice setup
1. Follow WindchillSwaggerConnector hands-on (7.4) - Integration Runtime microservice setup.
Note: Only one Integration Runtime instance is required for all your Integration Connectors (multiple instances are supported for high availability and scale).

SAPODataConnector setup
Use the new Composer UI (some settings, such as API maps, are not available in the ThingWorx legacy Composer).
1. Create a DataShape that is used to map the attributes being retrieved from SAP. SAPObjectDS: Id (STRING), Name (STRING), Price (NUMBER).
2. Create a Thing named TestSAPConnector that uses SAPODataConnector as its Thing Template.
3. Set up the SAP NetWeaver Gateway connection under TestSAPConnector > Configuration:
Generic Connector Connection Settings: Authentication Type = fixed
HTTP Connector Connection Settings:
Username = <SAP Gateway user>
Password = <SAP Gateway pwd>
Base URL: https://sapes4.sapdevcenter.com/sap
Relative URL: /opu/odata/IWBEP/GWSAMPLE_BASIC/
Connection URL: /opu/odata/IWBEP/GWSAMPLE_BASIC/$metadata
4. Create the API maps and service under TestSAPConnector > API Maps (new Composer only):
Mapping ID: sap
EndPoint: getProductSet
Select DataShape: SAPObjectDS (created at step 1) and map the following attributes:
Name <- Name
Id <- ProductID
Price <- Price
Pick "Create a Service from this mapping"

Testing our Connector
Test the TestSAPConnector::getProductSet service (keep all the input parameters blank).
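Once the mapped service exists, it can also be called from any other ThingWorx service like a regular service. A minimal sketch (input parameters are left blank as in the test above; the result fields come from the SAPObjectDS DataShape):

// call the generated connector service and log the mapped rows
var products = Things["TestSAPConnector"].getProductSet({ /* inputs left blank */ });
for each (var row in products.rows) {
    logger.info(row.Id + " / " + row.Name + " / " + row.Price);
}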
In this post, I show how you can downsample time-series data on the server side using the LTTB algorithm. The export comes with a service to set up sample data and a mashup which shows the data with weak to strong downsampling.

Motivation: Users displaying time series data on mashups and dashboards (usually by a service using a QueryPropertyHistory flavor in the background) might request large amounts of data by selecting large date ranges to be visualized, or data being recorded in high resolution. The newer chart widgets in ThingWorx can work much better with a higher number of data points to display. Some also provide their own downsampling, so only the "necessary" points are drawn (e.g. no need to paint beyond the screen's resolution). See the discussion here. However, as this is done in the widgets, the data reduction happens on the client side, so data is sent over the network only to be discarded. It would be beneficial to reduce the number of points delivered to the client beforehand. This would also improve the behavior of older widgets which don't have support for downsampling.

Many methods for downsampling are available. One option is partitioning the data and averaging out each partition, as described here. A disadvantage is that this creates and displays points which are not in the original data. The approach here uses Largest-Triangle-Three-Buckets (LTTB) for two reasons: the resulting data points exist in the original data set, and the algorithm preserves the shape of the original curve very well, i.e. outliers are displayed and not averaged out. It also seems computationally not too hard on the server.

Setting it up:
1. Import the entities from LTTB_Entities.xml.
2. Navigate to the Thing LTTB.TestThing in project LTTB and run the service downsampleSetup to set up some sample data.
3. Open the mashup LTTB.Sampling_MU. Initially, there are 8000 rows sent back. The chart widget decides how many of them are displayed. You can see the row count in the debug info. Using the button bar, you determine to how many points the result will be downsampled and sent to the client. Notice how the curve gets rougher, but the shape is preserved.

How it works: The potentially large result of QueryPropertyHistory is downsampled by running it through LTTB. The resulting InfoTable is sent to the widget (see service LTTB.TestThing.getData). The LTTB implementation itself is in the service downsampleTimeseries. Debug mode allows you to see how much data is sent over the network, and how much the number decreases proportionally with the downsampling.

The export and the widget are done with TWX 9, but it's only the widget that really needs TWX 9. I guess the code would need some more error-checking for robustness, but it's a good starting point.
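For readers who want to see the bucketing idea without opening the export, here is a plain-JavaScript sketch of LTTB, independent of the attached downsampleTimeseries service (the {x, y} point format is an assumption for illustration):

// Largest-Triangle-Three-Buckets: reduce `data` to `threshold` points
// data: array of {x, y} sorted by x; threshold: desired number of points (> 2)
function lttb(data, threshold) {
    var n = data.length;
    if (threshold >= n || threshold <= 2) return data.slice();
    var sampled = [data[0]];                   // always keep the first point
    var bucketSize = (n - 2) / (threshold - 2);
    var a = 0;                                 // index of the last selected point
    for (var i = 0; i < threshold - 2; i++) {
        // average point of the NEXT bucket
        var avgStart = Math.floor((i + 1) * bucketSize) + 1;
        var avgEnd = Math.min(Math.floor((i + 2) * bucketSize) + 1, n);
        var avgX = 0, avgY = 0;
        for (var j = avgStart; j < avgEnd; j++) { avgX += data[j].x; avgY += data[j].y; }
        avgX /= (avgEnd - avgStart);
        avgY /= (avgEnd - avgStart);
        // pick the point in the CURRENT bucket that forms the largest triangle
        // with the last selected point and the next bucket's average
        var start = Math.floor(i * bucketSize) + 1;
        var end = Math.floor((i + 1) * bucketSize) + 1;
        var maxArea = -1, maxIndex = start;
        for (var k = start; k < end; k++) {
            var area = Math.abs((data[a].x - avgX) * (data[k].y - data[a].y) -
                                (data[a].x - data[k].x) * (avgY - data[a].y));
            if (area > maxArea) { maxArea = area; maxIndex = k; }
        }
        sampled.push(data[maxIndex]);
        a = maxIndex;
    }
    sampled.push(data[n - 1]);                 // always keep the last point
    return sampled;
}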
Hello, there have been some inquiries about how one can use AngularJS for developing custom parts that can run in the ThingWorx environment. To address these inquiries I have created a document that describes the process of integrating AngularJS with ThingWorx. The attached document comes with the source code for the examples presented throughout the document and an extension for AngularJS 1.5.8 and angular-material components. Feedback is appreciated. Thank you.
Sometimes you need the values from different ThingTemplate members in ONE grid. Therefore it would be great if you could join two "GetImplementedThingsWithData" results into a common one. Here is a script that works generally, as long as you don't mess with datatypes on same-named columns. I'm very interested if someone can find a much easier solution. The Union function was the only one I found suited for the task, but it needs preparation of the InfoTables upfront.

Input:
Table1: Infotable
Table2: Infotable
Output: Infotable

Here is the snippet:

// Define params for an InfoTable to hold column names
var params = {
    infoTableName: "field" /* STRING */
};

// Define column 1
var newField = new Object();
newField.name = "field";
newField.baseType = 'STRING';

// Two 1-column InfoTables to store the field definitions
var field1 = Resources["InfoTableFunctions"].CreateInfoTable(params);
field1.AddField(newField);
var field2 = Resources["InfoTableFunctions"].CreateInfoTable(params);
field2.AddField(newField);

// Define the cell to add to the InfoTables
var myField = new Object();
myField.field = "";
myField.baseType = "STRING";

// Loop through Table1
var dataShapeFields = Table1.dataShape.fields;
for (var fieldName in dataShapeFields) {
    logger.debug('field1 name is ' + dataShapeFields[fieldName].name);
    myField.field = dataShapeFields[fieldName].name;
    field1.AddRow(myField);
}

// Loop through Table2
var dataShapeFields = Table2.dataShape.fields;
for (var fieldName in dataShapeFields) {
    logger.debug('field2 name is ' + dataShapeFields[fieldName].name);
    myField.field = dataShapeFields[fieldName].name;
    field2.AddRow(myField);
}

// Use the inner-join functionality to keep only the columns that exist in both
var params = {
    columns1: "field" /* STRING */,
    columns2: "field" /* STRING */,
    joinType: "INNER" /* STRING */,
    t1: field1 /* INFOTABLE */,
    t2: field2 /* INFOTABLE */,
    joinColumns1: "field" /* STRING */,
    joinColumns2: "field" /* STRING */
};
var commonFields = Resources["InfoTableFunctions"].Intersect(params);

// Loop over the result to build a comma-separated column list
var commonColumns = "";
var tableLength = commonFields.rows.length;
for (var x = 0; x < tableLength; x++) {
    var row = commonFields.rows[x];
    commonColumns = commonColumns + row.field + ",";
}

// Reduce Table1 to match only the common columns
var params = {
    t: Table1 /* INFOTABLE */,
    columns: commonColumns /* STRING */
};
var result1 = Resources["InfoTableFunctions"].Distinct(params);

// Reduce Table2 to match only the common columns
var params = {
    t: Table2 /* INFOTABLE */,
    columns: commonColumns /* STRING */
};
var result2 = Resources["InfoTableFunctions"].Distinct(params);

// At the END, JOIN the tables together (does not work if columns are different)
var params = {
    t1: result1 /* INFOTABLE */,
    t2: result2 /* INFOTABLE */
};
var result = Resources["InfoTableFunctions"].Union(params);
In this post, I will use an instance of InfluxDB and Chronograf. See this post for installing both using Docker.

InfluxDB - Time Series Databases
InfluxDB is a time series database. It allows users to work with and organize time series data. The advantage of such a database system is that it comes with built-in functionality to easily aggregate and operate on data based on time intervals. Other types of databases can do this as well - but time series databases are heavily optimized for this kind of data structure, which shows in storage space and performance.

Data is stored in the database with its timestamp, its value and one or more tags.

Time | Temperature | Humidity | Location
2019-01-24T00:00:00 | 23 | 42 | Home
2019-01-24T00:01:00 | 22 | 43 | Home
2019-01-24T00:02:00 | 21 | 44 | Home
2019-01-24T00:03:00 | 23 | 45 | Home
2019-01-24T00:04:00 | 24 | 42 | Home
2019-01-24T00:05:00 | 25 | 43 | Home
2019-01-24T00:06:00 | 23 | 44 | Home

Values can be aggregated by intervals, e.g. "give me the temperature values within the last hour and take the average for 5 minutes". This would result in (60 / 5) = 12 results, each with a value that represents the average temperature within its 5-minute interval.

Example: temperature data averaged by 4 minutes

Time | Temperature
2019-01-24T00:00:00 | (23 + 22 + 21 + 23) / 4 = 22.25
2019-01-24T00:04:00 | (24 + 25 + 23) / 3 = 24

To find out more about InfluxDB, see also https://www.influxdata.com/time-series-database/ and https://www.influxdata.com/time-series-platform/

InfluxDB in ThingWorx
The new ThingWorx 8.4 release comes with an option to set up InfluxDB as an additional Persistence Provider. Metadata like Entity Definitions will still be stored in PostgreSQL. Streams, Value Streams and Data Tables however can be stored in InfluxDB.

The InfluxDB Persistence Provider setup is delivered with the PostgreSQL installation package for ThingWorx. Currently ThingWorx does not allow any aggregation of data with its built-in InfluxDB capabilities.

Prepare InfluxDB
InfluxDB will need a user and a database. Connect via Chronograf - the graphical UI to administer InfluxDB - and create a new user via
InfluxDB Admin > Users
Default username = twadmin
Default password = password
Permissions = ALL

Create a new database via
InfluxDB Admin > Databases
Default database name = thingworx

Configure ThingWorx
Create a new Persistence Provider for InfluxDB in ThingWorx - but don't mark it as active yet! Switch to the Configuration and change the username / password, database and hostname to match your installation. Save the configuration, switch back to the General tab and mark the InfluxDB Persistence Provider as Active. Save again and a "successful" message will be shown. If the save action failed, the connection settings are not correct - check for the correct ports and for any typos.

Creating Entities & Testing
Streams, Value Streams and Data Tables can now be created using the new InfluxDB Persistence Provider.

To test with a Value Stream:
1. Create a new Thing with some NUMBER properties, e.g. 'a', 'b' and 'c' - ensure they are marked as Logged as well. Name = InfluxValueStreamThing
2. Create a new Value Stream and change its Persistence Provider to the InfluxDB one created above. Name = InfluxValueStream
3. Save both entities.

Setting values for the properties will now automatically create the entries in InfluxDB - including the Entity name "InfluxValueStreamThing". Running the QueryPropertyHistory service on the Thing will return the results as an InfoTable (see the quick smoke test below); the stored entries can also be inspected in Chronograf.

ThingWorx 8.4 will be released end of January 2019. Be sure to check out and test the new Persistence Provider features!
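As a quick smoke test of the setup (the Thing name matches the example above; maxItems is arbitrary), run something like this in a service or the Composer's service test dialog:

// write a few values, which the Value Stream logs to InfluxDB ...
Things["InfluxValueStreamThing"].a = 1;
Things["InfluxValueStreamThing"].b = 2;
Things["InfluxValueStreamThing"].c = 3;

// ... then read the history back
var history = Things["InfluxValueStreamThing"].QueryPropertyHistory({
    maxItems: 100 /* NUMBER */
});
// history is an InfoTable with one row per logged value change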
Recently I needed to be able to parse and handle XML data natively inside of a ThingWorx script, and this XML file happened to have a SOAP namespace as well. I learned a few things along the way that I couldn't find a lot of documentation on, so I am sharing them here.

Lessons Learned
The biggest lesson I learned is that ThingWorx uses "E4X" XML handling. This is a language that Mozilla created as a way for JavaScript to handle XML (the full name is "ECMAScript for XML"). While Mozilla deprecated the language in 2014, Rhino, the JavaScript engine that ThingWorx uses on the server, still supports it, so ThingWorx does too. Here's a tutorial on E4X: https://developer.mozilla.org/en-US/docs/Archive/Web/E4X_tutorial
The built-in linter in ThingWorx will complain about E4X syntax, but it still works. I learned how to get to the data I wanted and loop through to create an InfoTable. Hopefully this is what you want to do as well.

Selecting an Element and Iterating
My data came inside of a SOAP envelope, which was meaningless information to me. I wanted to get down a few layers. Here's a sample of my data that has made-up information in place of the customer's original data:

<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" headers="">
    <SOAP-ENV:Body>
        <get_part_schResponse xmlns="urn:schemas-iwaysoftware-com:iwse">
            <get_part_schResult>
                <get_part_schRow>
                    <PART_NO>123456</PART_NO>
                    <ORD_PROC_DIV_CD>E</ORD_PROC_DIV_CD>
                    <MFG_DIV_CD>E</MFG_DIV_CD>
                    <SCHED_DT>2020-01-01</SCHED_DT>
                </get_part_schRow>
                <get_part_schRow>
                    <PART_NO>789456</PART_NO>
                    <ORD_PROC_DIV_CD>E</ORD_PROC_DIV_CD>
                    <MFG_DIV_CD>E</MFG_DIV_CD>
                    <SCHED_DT>2020-01-01</SCHED_DT>
                </get_part_schRow>
            </get_part_schResult>
        </get_part_schResponse>
    </SOAP-ENV:Body>
</SOAP-ENV:Envelope>

To get to the schRow data, I need to get past SOAP and into a few layers of XML. To do that, I make a new variable and use the E4X selections to get there:

var data = resultXML.*::Body.*::get_part_schResponse.*::get_part_schResult.*;

Note a few things:
- resultXML is a variable in the service that contains the XML data.
- I skipped the Envelope tag since that's the root.
- The .* syntax does not mean "all the following", it means "all namespaces". You can define and specify the namespaces instead of using .*, but I didn't find value in that.
- I found some sample code that theoretically should work on a VMware forum: https://communities.vmware.com/thread/592000

This gives me schRow as an XML List that I can iterate through. You can see what you have at this point by converting the data to a String and outputting it:

var result = String(data);

Now that I am down to the schRow data, I can use a for loop to add each row to an InfoTable (result here being an InfoTable created earlier in the service):

for each (var row in data) {
    result.AddRow({
        PartNumber: row.*::PART_NO,
        OrderProcessingDivCD: row.*::ORD_PROC_DIV_CD,
        ManufacturingDivCD: row.*::MFG_DIV_CD,
        ScheduledDate: row.*::SCHED_DT
    });
}

Shoo! That's it! Data into an InfoTable! Next time, I'll ask for a JSON API. 😊
Here is a tutorial to explain the process of uploading a PMML file from an external system to Thingworx Analytics. The tutorial steps are explained in the attached PDF and all referenced files can be found in the attached ZIP.  
Large files can cause slow response times. In some cases large queries might produce extensively large response files, e.g. calling a ThingWorx service that returns an extensively large result set as a JSON file.

Those massive files have to be transferred over the network and require additional bandwidth - for each and every call. The more bandwidth is used, the more time is taken on the network, and the bigger the impact on performance can be. Imagine transferring tens or hundreds of MB for service calls for each and every call - over and over again.

To reduce the bandwidth, compression can be activated. Instead of transferring MBs per service call, the server only has to transfer a couple of KB per call (best-case scenario). This needs to be configured on the Tomcat level. There is some information available in the official Tomcat documentation at https://tomcat.apache.org/tomcat-8.5-doc/config/http.html - search for the "compression" attribute.

Gzip compression
Usually Tomcat is compressing content as gzip. To verify if a certain response is in fact compressed or not, the browser's Development Tools or Fiddler can be used. The Response Headers usually mention the compression type if the content is compressed (left: no compression; right: compression on the Tomcat level).

Not so straightforward - the network vs. compression time trade-off
There's however a pitfall with compression on the Tomcat side. Each response will add additional strain on time and resources (like CPU) to compress on the server and decompress the content on the client. Especially for small files this might be an unnecessary overhead, as the time and resources to compress might take longer than just transferring a couple of uncompressed KB.

In the end it's a trade-off between network speed and the speed of compressing and decompressing response files on server and client. With the compressionMinSize attribute a compromise size can be set to find the best balance between compression and bandwidth.

This trade-off can be clearly seen for small content: while the size of the content shrinks, the time increases. For larger content files however the time will only increase slightly due to the compression overhead, whereas the size can potentially be reduced by a massive factor - especially for text-based files.

The above test has been performed on a local virtual machine, which basically neglects most of the network-related traffic problems resulting in performance issues - therefore the overhead in time is a couple of milliseconds for the compression / decompression.

The default for compressionMinSize is 2048 bytes.

High potential performance improvement
Looking at the Combined.js, the content size can be reduced significantly from 4.3 MB to only 886 KB. For my simple mashup showing a chart with temperature and humidity, this also decreases the total load time from 32 to 2 seconds - while decreasing the content size from 6.1 MB to 1.2 MB!

This decreases load time and size by a factor of 16x and 5x - the total time until the page has finished rendering has been decreased by a factor of almost 22x! (for this particular use case)

Configuration
To configure compression, open Tomcat's server.xml. In the <Connector> definitions add the following:

compression="on" compressibleMimeType="text/html,text/xml,text/plain,text/css,text/javascript,application/javascript,application/json"

This will use the default compressionMinSize of 2048 bytes.
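Put together, a Connector entry might then look like the following sketch; the port, protocol and SSL attributes are placeholders that depend on your existing server.xml, and only the compression-related attributes are the point here:

<Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol"
           SSLEnabled="true" scheme="https" secure="true"
           compression="on"
           compressionMinSize="2048"
           compressibleMimeType="text/html,text/xml,text/plain,text/css,text/javascript,application/javascript,application/json" />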
In addition to the default MIME types I've also added application/json, to compress ThingWorx service call results.

This needs to be configured for all Connectors that users will access - e.g. for both the HTTP and HTTPS connectors. For testing purposes I have an HTTPS connector with compression while HTTP is running without it.

Conclusion
If possible, enable compression to speed up content download for the client. However, there are some scenarios where compression is actually not a good idea - e.g. when using a WAN accelerator or other network components that bring their own content compression. This not only adds unnecessary overhead but also compresses twice, which might lead to errors on the client side when decompressing the content.

Especially when dealing with large responses, compression can help decrease the impact on performance. As compressing and decompressing adds some overhead, the min size limit can be experimented with to find the optimal compromise in the network vs. compression time trade-off.
Create a new Thing using the Scheduler Thing Template. The Scheduler Thing will fire a ScheduledEvent event when the configured schedule is reached. The event is automatically present and does not need to be added manually.

Configuration
The Scheduler configuration is quite straightforward and allows for an exact setup of the schedule based on units of time, e.g. seconds, minutes, hours, days of week etc. It can be accessed via the Thing's Entity Configuration.

Configuration allows for:
Changing the runAsUser context - in which the events will be handled. The user will need visibility and permission on e.g. executing services or depending Things which are required to run the service triggered by the event.
Changing the schedule - at which times the events will be fired (by default every minute). The schedule is displayed in CRON string notation and can be changed and viewed in detail by clicking on "More". The CRON string will be generated automatically based on the inputs. Schedules can be configured in Manual mode - allowing for full configuration of each and every time-based attribute - or for a specific time type, allowing for configuration based only on seconds, minutes, hours, days, weeks, months or years. The screenshots below show schedules running every minute and every Saturday / Sunday at 12:00 ("Every Weekend Day").

Services
Scheduler Things inherit two services by default from the Thing Template:
DisableScheduler
EnableScheduler
These will activate / de-activate the Scheduler and allow / disallow firing events once a scheduled time is reached. Whether a Scheduler is currently enabled or disabled can be seen in its properties.
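Both services can also be called from another service, e.g. to pause a schedule during a maintenance window. A minimal sketch (the Thing name "MyScheduler" is a placeholder; the subscription on the ScheduledEvent itself is configured on the Thing in Composer):

// temporarily stop the Scheduler from firing its ScheduledEvent
Things["MyScheduler"].DisableScheduler();

// ... maintenance work ...

// resume firing on the configured schedule
Things["MyScheduler"].EnableScheduler();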
The AddStreamEntries snippet does not offer too much information, except that it needs an InfoTable as input. It is however based on the InfoTable for the AddStreamEntity service.

To use AddStreamEntries, an InfoTable based on sourceType, values, location, source, timestamp and tags must be used.

In this example, I started with a new Thing based on a Stream template and the following DataShape.

This DataShape must be converted into an InfoTable which is used as the values parameter. It's important that the timestamp parameter has distinct values! Otherwise values matching the same timestamp will be overwritten!

We don't really need the sourceType, as ThingWorx will automatically determine the type by knowing the source and which kind of Entity type it is.

I created a new MyStreamThing with a new service, filling the InfoTable and the Stream. The result is the following code which will add 5 rows to the Stream:

// *** SET UP META DATA FOR INFO TABLE ***

// create a new InfoTable based on AddStreamEntries parameters (timestamp, location, source, sourceType, tags, values)
var myInfoTable = { dataShape: { fieldDefinitions : {} }, rows: [] };

myInfoTable.dataShape.fieldDefinitions['timestamp']  = { name: 'timestamp', baseType: 'DATETIME' };
myInfoTable.dataShape.fieldDefinitions['location']   = { name: 'location', baseType: 'LOCATION' };
myInfoTable.dataShape.fieldDefinitions['source']     = { name: 'source', baseType: 'STRING' };
myInfoTable.dataShape.fieldDefinitions['sourceType'] = { name: 'sourceType', baseType: 'STRING' };
myInfoTable.dataShape.fieldDefinitions['tags']       = { name: 'tags', baseType: 'TAGS' };
myInfoTable.dataShape.fieldDefinitions['values']     = { name: 'values', baseType: 'INFOTABLE' };

// *** SET UP ACTUAL VALUES FOR INFO TABLE ***

// create new meta data
var tags = new Array();
var timestamp = new Date();
var location = new Object();
location.latitude = 0;
location.longitude = 0;
location.elevation = 0;
location.units = "WGS84";

// add rows to InfoTable (~5 times)
for (var i = 0; i < 5; i++) {

    // create new values based on Stream DataShape
    var params = {
        infoTableName : "InfoTable",
        dataShapeName : "Cxx-DS"
    };
    var values = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);

    // add something to the values to make them unique
    // create and add new row based on Stream DataShape
    // only a single line allowed!
    var newValues = new Object();
    newValues.a = "aaa" + i; // STRING - isPrimaryKey = true
    newValues.b = "bbb" + i; // STRING
    newValues.c = "ccc" + i; // STRING

    values.AddRow(newValues);

    // create new InfoTable row based on meta data & values
    // add 10 ms to each object, to make its timestamp unique
    // otherwise entries with the same timestamp will be overwritten
    var newEntry = new Object();
    newEntry.timestamp = new Date(Date.now() + (i * 10));
    newEntry.location = location;
    newEntry.source = me.name;
    newEntry.tags = tags;
    newEntry.values = values;

    // add the new InfoTable row to the InfoTable
    myInfoTable.rows[i] = newEntry;
}

// *** ADD myInfoTable (HOLDING MULTIPLE STREAM ENTRIES) TO THE STREAM ***

// add stream entries in the InfoTable
var params = {
    values: myInfoTable /* INFOTABLE */
};

// no return
Things["MyStreamThing"].AddStreamEntries(params);

To verify the values have been added correctly, call the GetStreamEntriesWithData service on the MyStreamThing.
If you have ever tested mashup rendering on mobile phones, you have probably experienced that the mashup does not size to fit your mobile display. This "MobileHeader" extension enables the mashup to auto-adapt to mobile displays.

It adds the following parameters to the HTML header:
<meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=0">
<meta name="apple-mobile-web-app-capable" content="yes">
<meta name="apple-mobile-web-app-status-bar-style" content="black-translucent">

In the Composer, just drop the "MobileHeader" extension into a section of the mashup.

This extension was tested up to version 7.4.
This blog presents a simple Java utility to validate the deployment of ThingWatcher. It is important to note that the utility used is not a real-life situation; the intent was to keep it as simple as possible in order to achieve its aim: validation of the deployment. An understanding of a Java IDE (such as Eclipse) is necessary in order to run the utility with the relevant dependency and classpath setup. Those are beyond the scope of this posting.

We will cover the following points:
1. Pre-requisites
2. Using the sample utility
3. Code walk-through
4. Validate training job creation
5. Validate model job creation
6. Update for ThingWorx Analytics 8.0

Pre-requisites
A strict adherence to the ThingWatcher deployment guide is recommended in order to first deploy the training and model microservices as well as to familiarize yourself with the ThingWatcher APIs. Prior to testing ThingWatcher, both the training and model microservices should be up and running. The media for ThingWatcher (including the model and training microservices) should be downloaded from the PTC Software Download page. The commands to deploy the microservices will vary depending on the platform used and are presented in the ThingWatcher deployment guide. As a reference example, on Windows the commands will be similar to the following:

1. Start Docker: Start > Program > Docker > Docker Quick Start Terminal
2. Load the model microservice tar:
$ docker load < "D:\PTC\MED-61147-CD-522_F000_ThingWorx-Analytics-ThingWatcher-52-2\components\ModelService\ModelService\model-service.tar"
3. Install the model service:
$ docker run -d -p 8080:8080 -v '/d/TWatcherStorage/model:/data/models' -v '/d/TWatcherStorage/db:/tmp/' twxml/model-service:1.0 -Dfile.storage.path=/data/models -jar maven/model-1.0.jar server maven/standalone-evaluator.yml
4. Load the training microservice tar file:
$ docker load < "D:\PTC\MED-61147-CD-522_F000_ThingWorx-Analytics-ThingWatcher-52-2\components\TrainingService\TrainingService\training-service.tar"
5. Install the training service:
$ docker run -d -p 8090:8080 twxml/training-service:1.0.0 -Dmodel.destination.uri=model://192.168.99.100:8080/models -jar maven/training-standalone-1.0.0-bin.jar server /maven/training-standalone-single.yml
Note: the -Dmodel.destination.uri points here to the model microservice host. To find the IP address, enter docker-machine ip on the model microservice docker machine.
6. Validate the microservices deployment: execute docker ps and confirm that both services are up, as in the following example:
CONTAINER ID        IMAGE                          COMMAND                      CREATED            STATUS              PORTS NAMES
5b6a29b95611        twxml/training-service:1.0.0  "java -Dmodel.destina"  13 days ago        Up 44 minutes      8081/tcp, 0.0.0.0:8090->8080/tcp  modest_albattani
8c13c0bc910e        twxml/model-service:1.0        "java -Dfile.storage."      2 weeks ago        Up 44 minutes      0.0.0.0:8080->8080/tcp, 8081/tcp  thirsty_ptolemy

Using the sample utility
1. Download the attachment Main.java
2. Import Main.java into Eclipse (or an IDE of choice) with the ThingWatcher dependencies added to the classpath.
3. Update the trainingBaseURI (see below) to point to the training microservice.
4. The utility should be ready to execute.
Code walk-through
The code declares a ThingWatcher in the following snippet:

ThingWatcher thingwatcher = new ThingWatcherBuilder()
.certainty(90.0)
.trainingDataDuration(60)
.trainingDataDurationUnit(DurationUnit.SECOND)
.trainingBaseURI("http://192.168.99.100:8090/training")
.getThingWatcher();

In the above code it is important to update the trainingBaseURI argument with the correct IP address and port for the training microservice host.

The code then loops 10000 times and sends a new value, which simulates the sensor data, at a simulated 100 ms interval. The value is computed as Math.sin(i) for the whole calibrating phase and most of the monitoring phase too. We artificially introduce an anomaly by sending a value of Math.incrementExact(i) between the 9000th and 9900th iterations. During the monitoring phase, the code logs the value, the anomalous status and the ThingWatcher state.

It is advised to save the output to a file in order to review the logging once the utility has run. In Eclipse this can be done by right-clicking Main.java > Run As… > Run Configuration > Common, ticking Output File under Standard Input and Output, and specifying a location for the output file.

A review of the output log file will show that somewhere between timestamps 900000 and 990000 the isAnomalousValue is true. Note that this does not start and end exactly at 900000 and 990000, as ThingWatcher needs a few occurrences before reporting it as an anomaly.

Sample output indicating an anomalous state:
[main] INFO com.thingworx.analytics.demo.Main - Value = 901700,9017.0,-9016.403802019577
[main] INFO com.thingworx.analytics.demo.Main - isAnomalousValue = true
[main] INFO com.thingworx.analytics.demo.Main - ThingWatcherStat = MONITORING

As part of validating the successful deployment of ThingWatcher, it is recommended to validate the correct creation of a training and a model job.

Validate training job creation
In order to validate the successful creation of a training job, execute a GET request to the training microservice: http://192.168.99.100:8090/training (update the IP address to the one on your system). This should return a COMPLETED job.

Validate model job creation
In order to validate the successful creation of a model job, execute a GET request to http://192.168.99.100:8080/models (update the IP address to the one on your system) to see all the models that have been created. Alternatively, click (or use) the URI reported in the training job output, here http://192.168.99.100:8080/models/6/pmml.xml, to see the complete model definition.

When this sample test runs correctly, the ThingWatcher deployment has been validated.

Update for ThingWorx Analytics 8.0
For deploying the microservices, see Video Link: 1937. For updated Java code, see "Does anyone know how to use java api to achieve anomaly detection with Thingwatcher8.0?"

To note: the utility provided is for testing purposes only. The code does not represent any kind of best practice and is not meant to be a perfect Java coding example. It is provided as-is with no guarantee.
There are times when the raw sensor readings are not directly useful for monitoring conditions on a machine. The raw data may need to be transformed before it can provide value within your monitoring applications. For example, instead of monitoring individual pressure readings reported each second, you may only be concerned with the maximum pressure reading each minute. Or maybe you want to monitor the median value of the electrical current pulled by a machine every five seconds to smooth out the noise of raw sub-second sensor readings. Or maybe you want to monitor whether the average hourly temperature of a machine exceeds a control limit in 2 of the past 3 hours.

Let's take the example of monitoring the max pressure of a valve reading over the past 45 seconds for your performance dashboard. How do you do it? Today, you might add a new property (e.g. "MaxPressure") to your valve Thing. Then, you might add a subscription that triggers when the Pressure property value changes and calls a service FindMax() to return the maximum pressure for that time interval. Lastly, you might write that maximum result value to the new property MaxPressure to store it and visualize it in the dashboard. Admittedly, not the worst process, but also not the most efficient (a rough sketch of this workaround appears at the end of this post).

Coming in 8.4, we will now offer Property Transforms, which enable you to automatically execute common statistical calculations - like min, max, average, median, mode and standard deviation, as well as SPC calculations - directly within a property itself. These transforms are configurable to run at certain intervals of time or points collected, and can also be used with our alerting subsystem to drive behavior and user action where necessary. There is no longer a need to create an elaborate subscription-based logic flow just to do simple calculations! This is just another way that ThingWorx 8.4 offers a more productive environment for IoT developers than ever before.

Ready to see it in action? Check out the video below by our product manager Mark!

Comment your thoughts below!

Stay connected,
Kaya
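For context, the pre-8.4 workaround described above could be sketched inside a DataChange subscription on Pressure roughly like this; the property names and the 45-second window come from the example, and the FindMax() service is stood in for by a simple loop, since the post doesn't define it:

// query the last 45 seconds of Pressure history
var history = me.QueryNumberPropertyHistory({
    propertyName: "Pressure" /* STRING */,
    startDate: dateAddSeconds(new Date(), -45) /* DATETIME */,
    endDate: new Date() /* DATETIME */,
    oldestFirst: true /* BOOLEAN */
});

// find the maximum reading in the window
var max = null;
for each (var row in history.rows) {
    if (max === null || row.value > max) {
        max = row.value;
    }
}

// write it back for the dashboard to display
me.MaxPressure = max;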
This is the simplest structure to loop through an InfoTable. It's simple, it avoids having to use row indexes, and it cleans up the code for readability as well.

// Assume an incoming InfoTable parameter named "thingList"
for each (var row in thingList.rows) {
    // Each row is already assigned to the row variable in the loop
    var thingName = row.name;
}

You can also nest these loops (just use a different variable from "row"), as shown below. It is also important not to add or remove rows of the InfoTable inside the loop; in that case you may end up skipping or repeating rows, since the indexes will be changed.
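Here is the nesting remark sketched out, with a hypothetical second InfoTable parameter "childList":

// nested iteration over two InfoTables - note the distinct loop variables
for each (var thingRow in thingList.rows) {
    for each (var childRow in childList.rows) {
        if (childRow.parentName === thingRow.name) {
            // process the matching pair here
        }
    }
}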
There are now three new places where you can get and/or share ThingWorx code examples in the ThingWorx Community:
- ThingWorx Platform Services
- ThingWorx Extensions and Widgets
- ThingWorx Edge and Edge SDKs
We encourage you to share your own relevant code examples in the appropriate space. Be sure to read the how-to and guidelines for posting to the Code Examples Libraries before you create your document. Any official code from ThingWorx Support Services will be marked with an official designation at the top of the document. Keep an eye out for more code examples as we ramp up these libraries, and don't forget to share your own examples!