IoT Tips

Timers and Schedulers can also be created and configured programmatically via custom services. The following service, which can be created on any Thing, creates a new Timer from its inputs (ThingName, updateRate and user):

// create new Thing
var params = {
    name: ThingName /* STRING */,
    description: undefined /* STRING */,
    thingTemplateName: "Timer" /* THINGTEMPLATENAME */,
    tags: undefined /* TAGS */
};
Resources["EntityServices"].CreateThing(params);

// read initial configuration
// result: INFOTABLE
var configtable = Things[ThingName].GetConfigurationTable({tableName: "Settings"});

// update configuration with service parameters
configtable.updateRate = updateRate;
configtable.runAsUser = user;

// set new configuration table
var params = {
    configurationTable: configtable /* INFOTABLE */,
    persistent: true /* BOOLEAN */,
    tableName: "Settings" /* STRING */
};
Things[ThingName].SetConfigurationTable(params);

This code is an example which could also be used to create a new Scheduler. The configuration table for a Timer has the following attributes: updateRate, enabled, runAsUser. The configuration table for a Scheduler has the following attributes: schedule, enabled, runAsUser. A sketch of the Scheduler variant follows below.
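As a minimal sketch of the Scheduler variant (assuming the cron-style schedule string used by Scheduler Things; the cron expression here is only an example):

// create new Scheduler Thing
var params = {
    name: ThingName /* STRING */,
    description: undefined /* STRING */,
    thingTemplateName: "Scheduler" /* THINGTEMPLATENAME */,
    tags: undefined /* TAGS */
};
Resources["EntityServices"].CreateThing(params);

// update the Settings table with a cron expression instead of an update rate
var configtable = Things[ThingName].GetConfigurationTable({tableName: "Settings"});
configtable.schedule = "0 0 * * * ?"; // assumed example: fire every hour, on the hour
configtable.runAsUser = user;
Things[ThingName].SetConfigurationTable({
    configurationTable: configtable /* INFOTABLE */,
    persistent: true /* BOOLEAN */,
    tableName: "Settings" /* STRING */
});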
View full tip
Original Post Date: June 6, 2016

Description: This tutorial video will walk you through the installation process for the PostgreSQL-based version of the ThingWorx Platform in a Windows environment. All required software components will be covered in this video.
View full tip
Video Author: Christophe Morfin
Original Post Date: October 2, 2017
Applicable Releases: ThingWorx Analytics 8.1

Description: In this video we will walk through the installation steps of ThingWorx Analytics Server 8.1. This covers the native Linux installation, though the steps will be similar for a Docker installation on Windows or Linux.
View full tip
Ran into this recently and thought I'd share an approach to getting a table with a multi-column distinct while retaining all the columns of the row. If you use Distinct, you get only the columns you do the Distinct on. This isn't very helpful if you want the 'latest' or the 'first occurrences' of records in your table with a combination of fields being unique. For example, I had Process, Part, Dimension and Point for which I had multiple value and date time entries, but I only wanted the latest entries. Following is how I solved it; if you have a better way please leave a comment! P.S.: for the query I used the awesome query builder available in the snippet section!
---------------------------------------
var q1Result = Things["MyThing"].QueryStreamEntriesWithData({maxItems: 99999, query: query1});

// Below creates a temporary measurement table to store the latest measurement values
var params = {
    infoTableName: "InfoTable",
    dataShapeName: "MyDatashape.DS"
};
// CreateInfoTableFromDataShape(infoTableName:STRING("InfoTable"), dataShapeName:STRING):INFOTABLE(MyDataShape.DS)
var tempTable1 = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);

// Extract only the latest measurements for the PART from the measurement result table 'q1Result'
// The way we are going to reduce this to unique measurements is
// 1. records are in reverse order of date time
// 2. get distinct by Process Part Dim Point
// 3. Step through and match against distinct set
// 4. First match goes into final set
// 5. Upon match remove from distinct set
// 6. If no match then skip record
// 7. If no more distinct match records break loop
var params = {
    t: q1Result /* INFOTABLE */,
    columns: 'ProcessID,PartID,Dimension,Point' /* STRING */
};
// result: INFOTABLE
var distinctResult = Resources["InfoTableFunctions"].Distinct(params);

for (var x = 0; x < q1Result.rows.length; x++) {
    var query = {
        "filters": {
            "type": "AND",
            "filters": [
                { "fieldName": "ProcessID", "type": "EQ", "value": q1Result.rows[x].ProcessID },
                { "fieldName": "PartID", "type": "EQ", "value": q1Result.rows[x].PartID },
                { "fieldName": "Dimension", "type": "EQ", "value": q1Result.rows[x].Dimension },
                { "fieldName": "Point", "type": "EQ", "value": q1Result.rows[x].Point }
            ]
        }
    };

    var params = {
        t: distinctResult /* INFOTABLE */,
        query: query /* QUERY */
    };
    // result: INFOTABLE
    var matchResult = Resources["InfoTableFunctions"].Query(params);
    if (matchResult.rows.length == 1) {
        tempTable1.AddRow(q1Result.rows[x]);
        var params = {
            t: distinctResult /* INFOTABLE */,
            query: query /* QUERY */
        };
        // result: INFOTABLE
        distinctResult = Resources["InfoTableFunctions"].DeleteQuery(params);
        if (distinctResult.rows.length == 0) {
            break;
        }
    }
}

// I now have tempTable1 with the full rows and the 4 fields distinct
result = tempTable1;
View full tip
/* Define a DataShape used in an InfoTable Parameter for this service call */
twDataShape* sampleInfoTableAsParameterDs = twDataShape_Create(twDataShapeEntry_Create("ColumnA", NO_DESCRIPTION, TW_STRING));
twDataShape_AddEntry(sampleInfoTableAsParameterDs, twDataShapeEntry_Create("ColumnB", NO_DESCRIPTION, TW_NUMBER));
twDataShape_AddEntry(sampleInfoTableAsParameterDs, twDataShapeEntry_Create("ColumnC", NO_DESCRIPTION, TW_BOOLEAN));
twDataShape_SetName(sampleInfoTableAsParameterDs, "SampleInfoTableAsParameterDataShape");

/* Define Input Parameter that is an InfoTable of Shape SampleInfoTableAsParameterDataShape */
twDataShapeEntry* infoTableDsEntry = twDataShapeEntry_Create("itParam", NULL, TW_INFOTABLE);
twDataShapeEntry_AddAspect(infoTableDsEntry, "dataShape", twPrimitive_CreateFromString("SampleInfoTableAsParameterDataShape", TRUE));
twDataShape* inputParametersDefinitionDs = twDataShape_Create(infoTableDsEntry);

/* Register remote function */
twApi_RegisterService(TW_THING, SERVICE_INTEGRATION_THINGNAME, "testMultiRowInfotable", NO_DESCRIPTION,
    inputParametersDefinitionDs, TW_NOTHING, NULL, PlatformCallsServiceWithMultiRowInfoTableServiceImpl, NULL);

/* Note that you will have to manually create the datashape in ThingWorx before attempting to add this remote service to your Thing. */
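The registered handler itself is not shown above. As a rough sketch only (assuming the standard C SDK service callback signature; "itParam" and "ColumnA" come from the registration above, everything else is illustrative):

/* Minimal sketch of the service implementation: read the InfoTable parameter
   and pull a value out of its first row. Error handling is elided. */
enum msgCodeEnum PlatformCallsServiceWithMultiRowInfoTableServiceImpl(
        const char * entityName, const char * serviceName,
        twInfoTable * params, twInfoTable ** content) {
    twInfoTable * itParam = NULL;
    char * colA = NULL;
    if (!params) return TWX_BAD_REQUEST;
    /* "itParam" is the InfoTable input parameter registered above */
    if (twInfoTable_GetInfoTable(params, "itParam", 0, &itParam)) return TWX_BAD_REQUEST;
    /* Read ColumnA of row 0 as an example */
    twInfoTable_GetString(itParam, "ColumnA", 0, &colA);
    /* ... process the remaining rows here ... */
    TW_FREE(colA);
    return TWX_SUCCESS;
}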
View full tip
It often happens that we need to copy a large file to the ThingWorx server periodically, and what's worse, the big file keeps changing (like a log file). This sample gives a simpler way to implement it. The main ideas in the sample are:
1. Lower the management burden on the ThingWorx server by putting all the work on the edge SDK side
2. Save network bandwidth by uploading only the file increment and appending it to the older file on the ThingWorx server

Java SDK version in this sample: 6.0.1-255
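As a minimal sketch of the edge-side idea only (plain Java I/O, assuming we persist the last uploaded offset somewhere; the actual sample uses the Java SDK's file transfer services to ship the bytes):

import java.io.IOException;
import java.io.RandomAccessFile;

public class FileDeltaReader {
    /**
     * Reads only the bytes appended since the last upload.
     * @param path       the growing file, e.g. a log file
     * @param lastOffset the file length at the time of the previous upload
     * @return the new bytes to send and append on the server side
     */
    public static byte[] readIncrement(String path, long lastOffset) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(path, "r")) {
            long length = raf.length();
            if (length <= lastOffset) {
                return new byte[0]; // nothing new (or the file was rotated)
            }
            byte[] delta = new byte[(int) (length - lastOffset)];
            raf.seek(lastOffset);
            raf.readFully(delta);
            return delta; // caller uploads this and stores 'length' as the new offset
        }
    }
}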
View full tip
Disclaimer: example was provided by Hatcher Chad - chad@onfarmsystems.com

//
// For this example, we'll have a Math service
// which takes two numbers, and an operation.
// The result will be that operation performed on the two inputs.

//
// We either need an Application Key,
// or user credentials to perform the reads and writes.
// App keys are a little safer.
// In this demo, we'll store it on the Entity as a Property.
var appKey = me.appKey;

//
// The service name needs to be unique and not already in use.
var serviceName = "MyMath";

//
// What are the inputs to the service?
// We'll define them nicely here, but manipulate this object later.
var parameters = {
    "op" : "STRING",
    "x" : "NUMBER",
    "y" : "NUMBER"
};

//
// What datatype does the service return?
// If it's an infotable,
// then you'll also have to specify the data shape
// as part of the resultType's aspect,
// but I won't demonstrate that here.
var output = "NUMBER";

//
// What is the actual service script?
// We'll define it here as an array of lines, and then join them together.
var serviceScript = [
    "var result = (function() {",
    "    switch(op) {",
    "    case \"add\": return x + y;",
    "    case \"sub\": return x - y;",
    "    case \"mult\": return x * y;",
    "    case \"div\": return x / y;",
    "    default: return op in Math ? Math[op](x, y) : 0;",
    "    };",
    "})();",
].join("\n");

////////

//
// Let's convert the friendly parameter definition
// into the structure that ThingWorx uses:
var parameterDefinitions = Object.keys(parameters).reduce(function(parameterDefinitions, parameterName, index) {
    var parameterType = parameters[parameterName];
    parameterDefinitions[parameterName] = {
        "name": parameterName,
        "aspects": {},
        "description": "",
        "baseType": parameterType,
        "ordinal": index
    };
    return parameterDefinitions;
}, {});

//
// Now let's set up our service definition and implementation.
var definition = {
    "isAllowOverride": false,
    "isOpen": false,
    "sourceType": "Unknown",
    "parameterDefinitions": parameterDefinitions,
    "name": serviceName,
    "aspects": {
        "isAsync": false
    },
    "isLocalOnly": false,
    "description": "",
    "isPrivate": false,
    "sourceName": "",
    "category": "",
    "resultType": {
        "name": "result",
        "aspects": {},
        "description": "",
        "baseType": output,
        "ordinal": 0
    }
};

var implementation = {
    "name": serviceName,
    "description": "",
    "handlerName": "Script",
    "configurationTables": {
        "Script": {
            "isMultiRow": false,
            "name": "Script",
            "description": "Script",
            "rows": [{
                "code": serviceScript
            }],
            "ordinal": 0,
            "dataShape": {
                "fieldDefinitions": {
                    "code": {
                        "name": "code",
                        "aspects": {},
                        "description": "code",
                        "baseType": "STRING",
                        "ordinal": 0
                    }
                }
            }
        }
    }
};

////////

//
// Here are the URLs we'll need in order to make updates.
// You can change the thing name ('ServiceModifier' here)
// to something else.
// If you use credentials instead of an app key,
// then you can remove the appKey parameter here,
// but you'll have to add the username and password
// to the two ContentLoaderFunctions calls.
var url = {
    export : "http://127.0.0.1:8080/Thingworx/Things/ServiceModifier?Accept=application/json&appKey="+appKey,
    import : "http://127.0.0.1:8080/Thingworx/Things/ServiceModifier?appKey="+appKey
};

//
// We can download the entity to modify as a JSON object.
// Older versions of ThingWorx might not support this.
var config = Resources.ContentLoaderFunctions.GetJSON({
    url : url.export,
});

//
// We have to modify both the 'effectiveShape',
// as well as the 'thingShape'.
config.effectiveShape.serviceDefinitions[serviceName] = definition;
config.effectiveShape.serviceImplementations[serviceName] = implementation;

config.thingShape.serviceDefinitions[serviceName] = definition;
config.thingShape.serviceImplementations[serviceName] = implementation;

//
// Finally, we can push our updates back into ThingWorx.
Resources.ContentLoaderFunctions.PutText({
    url : url.import,
    content : JSON.stringify(config),
    contentType : "application/json",
});

// The end.
View full tip
1. Create a network and add all Entities that implement a specific ThingShape to the network
2. Create a ThingShape mashup as below
Note: Bind the Entity parameter to DynamicThingShapes_TracotrShape's service GetProperties input EntityName. Also bind the mashup's RefreshRequested event to that service
3. Create a mashup named ContentShape, and add a Tree widget and a ContainedMashup widget to it
4. Bind the service GetNetworkConnection's Selected Row(s) result and SelectedRowsChanged event to the ContainedMashup widget
Note: A Master can totally replace the ThingShape mashup. Suggest using a Master after ThingWorx 6.0
View full tip
I had a better example; this is my original one... If I find the other one I'll upload that as well.
View full tip
The following script is a component of the Axeda Connected Configuration (CMDB) feature. It is used to provide configuration data for controlling package deployments via Connected Content (SCM).

ConfigItem_CRU.groovy
*Takes a POST request, not parameters

import static com.axeda.sdk.v2.dsl.Bridges.*
import com.axeda.drm.sdk.scripto.Request
import com.axeda.services.v2.ConfigurationItem
import com.axeda.services.v2.ConfigurationItemCriteria
import com.axeda.services.v2.AssetConfiguration
import com.axeda.services.v2.Asset
import com.axeda.services.v2.ExecutionResult
import groovy.json.JsonSlurper
import net.sf.json.JSONObject
import groovy.xml.MarkupBuilder

/**
 * ConfigItem_CRU.groovy
 * -----------------------
 *
 * Reads in json from an http post request and reads, adds, deletes or updates Configuration Items.
 *
 * @note this parses a post and does not take any additional parameters.
 *
 * @author sara streeter <sstreeter@axeda.com>
 */

def contentType = "application/json"
final def serviceName = "ConfigItem_CRU"
def response = [:]
def writer = new StringWriter()
def xml = new MarkupBuilder(writer)

try {
    // BUSINESS LOGIC BEGIN
    def assetId
    def validationOnly
    def validationResponse = ""
    List<ConfigurationItem> configItemList

    if (Request?.body != null && Request?.body != "") {
        def slurper = new JsonSlurper()
        def request = slurper.parseText(Request?.body)
        assetId = request.result.assetId
        validationOnly = request.result.validationOnly?.toBoolean()
        if (request.result.items != null && request.result.items.size() > 0) {
            configItemList = request.result.items.inject([]) { target, item ->
                if (item && item.path != "" && item.key != "" && item.path != null && item.key != null) {
                    ConfigurationItem configItem = new ConfigurationItem()
                    configItem.path = item.path + item.key
                    configItem.value = item.value
                    target << configItem
                }
                target
            }
        }
    }

    if (assetId != null) {
        def asset = assetBridge.find([assetId])[0]
        AssetConfiguration config = assetConfigurationBridge.getAssetConfiguration(assetId, "")
        def itemToDelete

        if (config == null) {
            createConfigXML(xml)
            AssetConfiguration configToCreate = assetConfigurationBridge.fromXml(writer.toString(), asset.id)
            ExecutionResult result = assetConfigurationBridge.create(configToCreate)
            AssetConfiguration config2 = assetConfigurationBridge.getAssetConfiguration(asset.id, "")
            config = config2
            itemToDelete = "/Item"
        }
        if (configItemList != null && configItemList?.size() > 0) {
            List<ConfigurationItem> compareList = config.items
            def intersectingCompareItems = compareList.inject(["save": [], "delete": []]) { map, item ->
                // find whether to delete
                def foundItem = configItemList.findAll { compare -> item?.path == compare?.path && item?.value == compare?.value }
                map[foundItem.size() > 0 ? "save" : "delete"] << item
                map
            }
            intersectingCompareItems.delete = intersectingCompareItems.delete.collect { it.path }
            if (itemToDelete) {
                intersectingCompareItems.delete.add(itemToDelete)
            }
            def intersectingConfigItems = configItemList.inject(["old": [], "new": []]) { map, item ->
                // find whether it's old
                def foundItem = compareList.findAll { compare -> item?.path == compare?.path && item?.value == compare?.value }
                map[foundItem.size() > 0 ? "old" : "new"] << item
                map
            }
            assetConfigurationBridge.deleteConfigurationItems(config, intersectingCompareItems.delete)
            assetConfigurationBridge.appendConfigurationItems(config, intersectingConfigItems.new)
            def exResult = assetConfigurationBridge.validate(config)
            if (exResult.successful) {
                validationResponse = "success"
                if (!validationOnly) {
                    assetConfigurationBridge.update(config)
                }
            }
            else {
                validationResponse = exResult.failures[0]?.details
            }
        }
        response = [
            assetId: assetId,
            items: config?.items?.collect { item ->
                def origpath = item.path
                def lastSlash = origpath.lastIndexOf("/")
                def key = origpath.substring(lastSlash + 1, origpath.length())
                def path = origpath.replace("/" + key, "")
                path += "/"
                [
                    path: path,
                    key: key,
                    value: item.value
                ]
            },
            validationResponse: validationResponse
        ]
    }
    else {
        throw new Exception("Error: Asset Id must be provided.")
    }
}
catch (Exception ex) {
    logger.error ex
    response = [
        error: [
            type: "Backend Application Error",
            msg: ex.getLocalizedMessage()
        ]
    ]
}

return ['Content-Type': 'application/json', 'Content': JSONObject.fromObject(response).toString(2)]

/**
 * Create the Success response.
 *
 * @param xml : The xml response.<br>
 * @param info : If this is set to "1" the info element will be included in the response.<br>
 * @param infos : Collection of information to include within the info element of the response.<br>
 */
private void createConfigXML(xml) {
    xml.Item()
}
View full tip
I recently had a customer who wanted to run services on ThingWorx from Power BI to retrieve existing operational data, and we were a bit stumped on how to pass the API key over in the headers, so I did a bit of Googling and pieced together the solution. It's not quite intuitive on the Power BI side, so I thought it would be helpful to share. If you have any other experience with integrating ThingWorx with Power BI, feel free to add a comment.

Prepare ThingWorx
Create an Application Key that has Run Time execution access to the services you need.
Understand the inputs needed for the service you would like to call. I'll have examples of none, one, an InfoTable, and multiple inputs.

Power BI
Follow these steps in Power BI:
1. In Power BI, create a new blank query
2. On the left, right-click on Query1 and go to the Advanced Editor:
3. Replace all of the body content with the following, replacing your API key, appropriate endpoint, and base URL as needed (this is an example with NO input parameters; I'll follow with examples of other parameters):

let
    appKey = "your-application-key-here",
    endpoint = "Things/YourThingNameHere/Services/YourServiceNameHere",
    baseUrl = "https://YourServerNameHere/Thingworx/",
    url = Text.Combine({baseUrl, endpoint}),
    body = "",
    request = Web.Contents(
        url,
        [
            Headers = [
                appKey = appKey,
                #"Content-Type" = "application/json",
                Accept = "application/json"
            ],
            Content = Text.ToBinary(body)
        ]
    ),
    Source = Json.Document(request)
in
    Source

4. Click "Done", and now you'll have a warning about how to connect. Click the "Edit Credentials" button.
5. Leave it on Anonymous and click "Connect":
6. You should now see the return data coming from ThingWorx.

Note that I had a little trouble with this authentication initially and it saved the wrong method. To clear that out, go to the ribbon bar item "Data source settings", select the server, and clear it out.

Other Examples
Here is an example for sending a single string parameter:

let
    appKey = "your-application-key-here",
    endpoint = "Things/YourThingNameHere/Services/YourServiceNameHere",
    baseUrl = "https://YourServerNameHere/Thingworx/",
    url = Text.Combine({baseUrl, endpoint}),
    body = "{""InputParameter"": ""InputValue""}",
    request = Web.Contents(
        url,
        [
            Headers = [
                appKey = appKey,
                #"Content-Type" = "application/json",
                Accept = "application/json"
            ],
            Content = Text.ToBinary(body)
        ]
    ),
    Source = Json.Document(request)
in
    Source

Here's an example of sending a string and an integer:

let
    appKey = "your-application-key-here",
    endpoint = "Things/YourThingNameHere/Services/YourServiceNameHere",
    baseUrl = "https://YourServerNameHere/Thingworx/",
    url = Text.Combine({baseUrl, endpoint}),
    body = "{""InputString"": ""Hello, world!"", ""InputNumber"" : 42}",
    request = Web.Contents(
        url,
        [
            Headers = [
                appKey = appKey,
                #"Content-Type" = "application/json",
                Accept = "application/json"
            ],
            Content = Text.ToBinary(body)
        ]
    ),
    Source = Json.Document(request)
in
    Source

Here is an example for sending an InfoTable. Note that you must supply the dataShape with fieldDefinitions. If you're using an existing Data Shape, you can get the JSON by using the service GetDataShapeMetadataAsJSON() that is on the data shape.

let
    appKey = "your-application-key-here",
    endpoint = "Things/YourThingNameHere/Services/YourServiceNameHere",
    baseUrl = "https://YourServerNameHere/Thingworx/",
    url = Text.Combine({baseUrl, endpoint}),
    body = "{""propertyNames"": { ""rows"": [ { ""name"": ""FirstEntityName"", ""description"": ""The first entity"" }, { ""name"": ""SecondEntityName"", ""description"": ""The second entity"" }], ""dataShape"": { ""fieldDefinitions"": { ""name"": { ""name"": ""name"", ""aspects"": { ""isPrimaryKey"": true }, ""description"": ""Entity name"", ""baseType"": ""STRING"", ""ordinal"": 0 }, ""description"": { ""name"": ""description"", ""aspects"": {}, ""description"": ""Entity description"", ""baseType"": ""STRING"", ""ordinal"": 0 } } } }}",
    request = Web.Contents(
        url,
        [
            Headers = [
                appKey = appKey,
                #"Content-Type" = "application/json",
                Accept = "application/json"
            ],
            Content = Text.ToBinary(body)
        ]
    ),
    Source = Json.Document(request)
in
    Source

If I find any more interesting ways to use Power BI with ThingWorx services, I'll add them on here.
View full tip
Background
Getting a performance benchmark of your running application is an important thing to do when deploying and scaling up an application in production. This not only helps focus in on performance issues quickly, but also allows for safely planning for scaling up and resource sizing based on real, concrete data.

I recently created a tool and made a post about capturing and analysing ThingWorx utilisation statistics to do such an analysis, as well as identifying potential performance bottlenecks. Although they are rich and precise, utilisation statistics fall short in a number of areas - specifically in counting and timing specific service executions, and in identifying and sorting based on the host executing the service.

Tomcat Access Log Analysis
As ThingWorx is a Tomcat web application, Tomcat logs details of the requests being made to the application server and ThingWorx REST API. The default settings include the host (IP address), date/timestamp, and request URI, which can be decoded to reveal relevant details like the calling entities and service executions.

Adding 3 key additional variables (%s %B %D) to the access log pattern in server.xml also gives us the HTTP response code (%s), the bytes returned from Tomcat (%B), and the service execution time in milliseconds (%D). This is super useful as we can now determine the exact time of service executions, and run statistics on their execution totals and execution time. (A sample Valve configuration is shown at the end of this post.)

Once you have an access log file looking like the one above, you can attempt to load it into the access_log sheet in the analysis Excel workbook that I created. You do this by clicking on the access_log table, then selecting "Data > Get Data > Data Source Settings". You'll then be prompted with the following or similar pop-up allowing you to navigate to your access_log file to select and then load.

It should be noted that you'll have to Refresh the table after selecting the new access_log.txt file so that it is read in and populates the table. You can do this by right-clicking on the table and selecting Refresh, or by using the Data > Refresh button.

This workbook relies on a number of formulas to slice and dice the timestamp, and during my attempts at importing I had significant issues with this due to some of the ways that Excel does things automatically without any manual options. You really need to make sure that the timestamps are imported and converted correctly, or something in the workbook will likely not work as intended. One thing that I had to do was to add 1 second to round up 00:00:00 for the first entries, as these were being imported as a date without the time part while the subsequent lines imported as a date/time.

Depending on how many lines your file has, you'll likely also have to "Fill Down" the formulas on the right side of the sheet, which may be empty in the table after importing your new data set. I had the best results by selecting the cells in question on the last row, then going down to the bottom corner, pushing and holding Shift, clicking on the last cell bottom right, and then selecting Home > Fill > Down to pull the formulas down from the top.

Once the data is loaded, you'll be able to start poking around. The filters and sorting by the named columns are really helpful, as you can start out by doing things like removing a particular host, sorting by longest execution times, selecting execution times greater than 4 seconds, or only showing activity aimed at a particular entity or service.
You really need to make sure that the imported data worked fine and looks perfect, as the next steps will totally break if not. With the data loaded, you can now go to the Summary Data sheet, right-click on one of the pivot tables, and select Refresh. This will reload the data into the pivot table and re-run its calculations.

Once the refresh is complete, you should see the table summary as shown here; there are Day, Hour, and Minute expand/collapse buttons. You should also see the Day, Hour, Month fields showing in the Field Definitions on the right. This is the part that is painful -- if the dates are in the wrong format and Excel is unable to auto-detect everything in the same way, then you will not get these automatically created fields.

With the data reloaded and the Pivot Tables re-built, you should be able to go over to the Dashboard sheet to start looking at and analysing the graphs. This one shows the Top 10 services organised into hourly buckets with cumulated service execution times.

I'm not going to go into all of the workbook's features, but you can also individually select a set of key services that you want to have a look at together, across both the execution count and execution time dimensions.

Next you can see the coordinated view of both total service execution time and number of service executions. This is helpful for looking for patterns where a service may be executing longer but being triggered the same number of times, compared to both being executed more often and taking more time. I've created a YouTube video (see bottom) which goes through using all of the features as well as providing other pointers.

Getting into a finer level of detail, this "bonus" sheet provides a Pivot Table and Pivot Chart which allow for exploring minimum, maximum and average execution time for a specific service. Comparing this with the utilisation subsystem metrics taken during the same period now provides much deeper insight, as we can pinpoint where the peaks were, how long they lasted, and where the slow executions were in relation to other services being executed at that time (example: identifying many queries/data processing occurring simultaneously).

Without further ado, you can download and play with my ThingWorx Tomcat Access Log Analysis Excel Workbook, and check out the recorded demonstration and explanation for more details on loading and analysis use. [YouTube] ThingWorx Tomcat Access Logs - Service Performance Analysis
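For reference, a sample AccessLogValve entry in server.xml with the three extra fields appended might look like this (a sketch; the directory, prefix and the rest of the pattern should match your own environment):

<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
       prefix="localhost_access_log" suffix=".txt"
       pattern="%h %l %u %t &quot;%r&quot; %s %B %D" />

Here %s is the HTTP status code, %B the bytes sent, and %D the time taken to process the request in milliseconds.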
View full tip
Requirements: 6.1.2+

Geofences are virtual geometric shapes drawn on a geographical area, representing a fence that can be crossed by a device. The Axeda Platform has built-in support for mobile locations and geofences, which can be linked to the rules engine to enable notifications based on geofence crossing.

What this tutorial covers
This tutorial demonstrates the workflow of creating a geofence through to creating the expression rules with notifications, then how the mobile location can trigger the rules.
1) Creating the Geofence
2) Creating the Expression Rule

There is currently no user interface built into the Axeda Applications Console which interacts with geofences. For a sample application with a geofence user interface, see Sample Project: Traxeda (TODO). For a single Custom Object that includes all of the functionality described below, see the end of this document.

Creating the Geofence
The properties of a geofence are a name, a description, and a series of coordinates based on Well-Known Text (WKT) syntax (see the OpenGIS Simple Features Specification).

def addGeofence(CONTEXT, map){
    Geofence myGeofence = new Geofence(CONTEXT)
    myGeofence.name = map.name
    if(map.type != "polygon" && map.type != "circle")
    {
        throw new Exception("Invalid type: need 'polygon' or 'circle', not '$map.type'")
    }
    else if(map.type == "polygon")
    {
        def geo = map.locs.loc.inject( "POLYGON (("){ str, item ->
            def lng = item.lng
            def lat = item.lat
            str += "$lng $lat,"
            str
        }
        //the first location also has to be the last location
        myGeofence.geometry = geo + map.locs.loc[0].lng + " " + map.locs.loc[0].lat + "))"
        //Something like this is built:
        //POLYGON ((-71.082118 42.383892,-70.867198 42.540923,-71.203654 42.495374,-71.284678 42.349394,-71.163829 42.221382,-71.003154 42.266114,-71.082118 42.383892))
    }
    else if(map.type == "circle")
    {
        def lng = map.locs.loc[0].lng
        def lat = map.locs.loc[0].lat
        myGeofence.geometry = "POINT ($lng $lat)"
        //POINT (-71.082118 42.383892)
        myGeofence.buffer = map.radius.toDouble()
    }
    myGeofence.description = "ALERT:::$map.alertType:::$map.alert"
    try {
        myGeofence.store()
    }
    catch (e){
        logger.info e.localizedMessage
        return null
    }
    myGeofence
}

The geofence itself does not interact with devices in any way. Rather it is the Expression Rule that is applied to models and devices and that invokes the geofence when a mobile location is passed in.

Creating the Expression Rule
The Expression Rule for the Geofence is built as follows:
TYPE: MobileLocation
IF: Expression set to "InNamedGeofence" for entering and "!InNamedGeofence" for exiting.
The following function creates this expression rule:

/* Sample call
createGeofenceExpressionRule(CONTEXT, "My Geofence", "rule_MyGeofence", "in", "You entered the geofence!", "SDK Generated Geofence Rule", 100)
*/
def createGeofenceExpressionRule(com.axeda.drm.sdk.Context CONTEXT, String geofencename, String rulename, String alertType, String alertMessage, String ruledescription, int severity){
    ExpressionRuleFinder erf = new ExpressionRuleFinder(CONTEXT)
    erf.setName(rulename)
    ExpressionRule expressionRule1 = erf.findOne()
    expressionRule1?.delete()

    def expressionRule = new ExpressionRule(CONTEXT)
    expressionRule.setName(rulename)
    expressionRule.setDescription(ruledescription)
    expressionRule.setTriggerName("MobileLocation")
    def ifExpStr = "InNamedGeofence(\"$geofencename\", Location.location)"
    if(alertType == "out"){
        ifExpStr = "!" + ifExpStr
    }
    expressionRule.setIfExpression(new Expression(ifExpStr))
    expressionRule.setThenExpression(new Expression("CreateAlarm(\"$alertMessage\", severity)"))
    expressionRule.setEnabled(true)
    expressionRule.setConsecutive(false)
    expressionRule.store()
    expressionRule
}

Then the rule associations must be created to apply the rule to a model or device.

/* Sample call
findOrCreateRuleAssociations(CONTEXT, myModel, expressionRule, "EXPRESSION_RULE", "MODEL")
Where expressionRule is the rule created in the above example
*/
def findOrCreateRuleAssociations(Context CONTEXT, Object entity, Object rule, String ruleType, String entityType){
    // rule type is whether this is an expression rule
    ruleType = ruleType ?: "EXPRESSION_RULE"
    entityType = entityType ?: "DEVICE_INCLUDE"
    RuleAssociationFinder ruleAssociationFinder = new RuleAssociationFinder(CONTEXT)
    ruleAssociationFinder.setRuleId(rule.id.value)
    ruleAssociationFinder.setRuleType(RuleType.valueOf(ruleType))
    ruleAssociationFinder.setEntityId(entity.id.value)
    ruleAssociationFinder.setEntityType(EntityType.valueOf(entityType))
    def ruleAssociations = ruleAssociationFinder.findAll()
    if (!ruleAssociations || ruleAssociations?.size() == 0){
        def ruleAssociation = new RuleAssociation(CONTEXT)
        ruleAssociation.entityId = entity.id.value
        ruleAssociation.entityType = EntityType.valueOf(entityType)
        ruleAssociation.ruleType = RuleType.valueOf(ruleType)
        ruleAssociation.setRuleId(rule.id.value)
        ruleAssociation.store()
        ruleAssociations = [ruleAssociation]
    }
    return ruleAssociations
}

The rule will now be triggered when any device of the applied model sends a mobile location within the geofence, which in turn will create an alarm.
Here is a custom object with the complete geofence functionality:

import com.axeda.drm.sdk.Context
import com.axeda.drm.sdk.geofence.Geofence
import com.axeda.drm.sdk.geofence.GeofenceFinder
import com.axeda.drm.sdk.rules.engine.Expression
import com.axeda.drm.sdk.rules.engine.ExpressionRule
import com.axeda.drm.sdk.rules.engine.ExpressionRuleFinder
import com.axeda.drm.sdk.rules.engine.RuleAssociation
import com.axeda.drm.sdk.rules.engine.RuleAssociationFinder
import com.axeda.drm.sdk.rules.engine.RuleType
import com.axeda.drm.sdk.common.EntityType
import com.axeda.drm.sdk.device.Model
import com.axeda.drm.sdk.device.ModelFinder

try {
    def Context CONTEXT = Context.getSDKContext()
    def model = findOrCreateModel(CONTEXT, "FooModel")
    def sampleCircle = [
        "name": "My Circle",
        "alert": "My Geofence Alert Text",
        "type": "circle",
        "alertType": "in",
        "radius": "65.76",
        "locs": [
            ["loc": [ "lat": "42.60970621339408", "lng": "-73.201904296875" ]]
        ]
    ]
    def samplePolygon = [
        "name": "My Polygon",
        "alert": "My Geofence Alert Text",
        "type": "polygon",
        "alertType": "out",
        "locs": [
            ["loc": [ "lng": -71.2604999542236, "lat": 42.3384903145478 ]],
            ["loc": [ "lng": -71.4218616485596, "lat": 42.3242772020001 ]],
            ["loc": [ "lng": -71.5585041046143, "lat": 42.2653600946699 ]],
            ["loc": [ "lng": -71.5413379669189, "lat": 42.1885837119108 ]],
            ["loc": [ "lng": -71.4719867706299, "lat": 42.1137514551207 ]],
            ["loc": [ "lng": -71.3737964630127, "lat": 42.0398506628541 ]],
            ["loc": [ "lng": -71.2508869171143, "lat": 42.0311807962068 ]],
            ["loc": [ "lng": -71.1355304718018, "lat": 42.2084223174036 ]],
            ["loc": [ "lng": -71.2604999542236, "lat": 42.3384903145478 ]]
        ]
    ]
    // find geofence if it exists
    def circle = findGeofenceByName(CONTEXT, sampleCircle.name)
    // create circular geofence
    if (!circle){
        circle = addGeofence(CONTEXT, sampleCircle)
    }
    // create rule for circular geofence
    def circleRule = createGeofenceExpressionRule(CONTEXT, circle.name, "${circle.name}__Rule",
                                                  sampleCircle.alertType, sampleCircle.alert, "SDK Generated Geofence Rule", 100)
    // apply rule to new Model
    findOrCreateRuleAssociations(CONTEXT, model, circleRule, "EXPRESSION_RULE", "MODEL")
    def polygon = findGeofenceByName(CONTEXT, samplePolygon.name)
    if (!polygon){
        polygon = addGeofence(CONTEXT, samplePolygon)
    }
    def polygonRule = createGeofenceExpressionRule(CONTEXT, polygon.name, "${polygon.name}__Rule",
                                                   samplePolygon.alertType, samplePolygon.alert, "SDK Generated Geofence Rule", 100)
    // apply rule to new Model
    findOrCreateRuleAssociations(CONTEXT, model, polygonRule, "EXPRESSION_RULE", "MODEL")
}
catch (Exception e) {
    logger.info(e.localizedMessage)
}
return true

def findGeofenceByName(CONTEXT, name){
    GeofenceFinder geofenceFinder = new GeofenceFinder(CONTEXT)
    geofenceFinder.setName(name)
    def geofence = geofenceFinder.find()
    geofence
}

def addGeofence(CONTEXT, map){
    Geofence myGeofence = new Geofence(CONTEXT)
    myGeofence.name = map.name
    if(map.type != "polygon" && map.type != "circle") {
        throw new Exception("Invalid type: need 'polygon' or 'circle', not '$map.type'")
    } else if(map.type == "polygon") {
        def geo = map.locs.loc.inject( "POLYGON (("){ str, item ->
            def lng = item.lng
            def lat = item.lat
            str += "$lng $lat,"
            str
        }
        //the first location also has to be the last location
        myGeofence.geometry = geo + map.locs.loc[0].lng + " " + map.locs.loc[0].lat + "))"
        //Something like this is built:
        //POLYGON ((-71.082118 42.383892,-70.867198 42.540923,-71.203654 42.495374,-71.284678 42.349394,-71.163829 42.221382,-71.003154 42.266114,-71.082118 42.383892))
    } else if(map.type == "circle") {
        def lng = map.locs.loc[0].lng
        def lat = map.locs.loc[0].lat
        myGeofence.geometry = "POINT ($lng $lat)"
        //POINT (-71.082118 42.383892)
        myGeofence.buffer = map.radius.toDouble()
    }
    myGeofence.description = "ALERT:::$map.alertType:::$map.alert"
    try {
        myGeofence.store()
    } catch (e) {
        logger.info e.localizedMessage
        return null
    }
    myGeofence
}

def createGeofenceExpressionRule(com.axeda.drm.sdk.Context CONTEXT, String geofencename, String rulename,
                                 String alertType, String alertMessage, String ruledescription, int severity) {
    ExpressionRuleFinder erf = new ExpressionRuleFinder(CONTEXT)
    erf.setName(rulename)
    ExpressionRule expressionRule1 = erf.findOne()
    expressionRule1?.delete()
    def expressionRule = new ExpressionRule(CONTEXT)
    expressionRule.setName(rulename)
    expressionRule.setDescription(ruledescription)
    expressionRule.setTriggerName("MobileLocation")
    def ifExpStr = "InNamedGeofence(\"$geofencename\", Location.location)"
    if(alertType == "out"){
        ifExpStr = "!" + ifExpStr
    }
    expressionRule.setIfExpression(new Expression(ifExpStr))
    expressionRule.setThenExpression(new Expression("CreateAlarm(\"$alertMessage\", severity)"))
    expressionRule.setEnabled(true)
    expressionRule.setConsecutive(false)
    expressionRule.store()
    expressionRule
}

def findOrCreateRuleAssociations(Context CONTEXT, Object entity, Object rule, String ruleType, String entityType) {
    // rule type is whether this is an expression rule
    ruleType = ruleType ?: "EXPRESSION_RULE"
    entityType = entityType ?: "DEVICE_INCLUDE"
    RuleAssociationFinder ruleAssociationFinder = new RuleAssociationFinder(CONTEXT)
    ruleAssociationFinder.setRuleId(rule.id.value)
    ruleAssociationFinder.setRuleType(RuleType.valueOf(ruleType))
    ruleAssociationFinder.setEntityId(entity.id.value)
    ruleAssociationFinder.setEntityType(EntityType.valueOf(entityType))
    def ruleAssociations = ruleAssociationFinder.findAll()
    if (!ruleAssociations || ruleAssociations?.size() == 0){
        def ruleAssociation = new RuleAssociation(CONTEXT)
        ruleAssociation.entityId = entity.id.value
        ruleAssociation.entityType = EntityType.valueOf(entityType)
        ruleAssociation.ruleType = RuleType.valueOf(ruleType)
        ruleAssociation.setRuleId(rule.id.value)
        ruleAssociation.store()
        ruleAssociations = [ruleAssociation]
    }
    return ruleAssociations
}

def findOrCreateModel(Context CONTEXT, String modelName) {
    ModelFinder modelFinder = new ModelFinder(CONTEXT)
    modelFinder.setName(modelName)
    def model = modelFinder.find()
    if (!model){
        model = new Model(CONTEXT, modelName);
        model.store();
    }
    return model
}

https://gist.github.com/axeda/6529288/raw/5ffca58c3c48256b81287d6a6f2d2db63cd5cd2b/AddGeofence.groovy
View full tip
This is a slide deck I created while learning how to post data from an Arduino to ThingWorx using the MQTT protocol.
View full tip
This might be a well-known topic for some, but I recently had a need that Event Routers fit into perfectly and wanted to share. If you have some neat applications for Event Routers on mashups, feel free to reply!

What?
Event Routers are a function on Mashups that lets you connect multiple inputs to a single output. For my use case, this was extremely helpful in letting me have two different Service Outputs go to the same Widget. They are a really simple tool that can save a lot of headache.

How?
Event Routers work by funneling the latest data through to a single output. This is particularly useful for user-activated actions with the output tied to a widget or another service. The Event Router automatically activates when any one of the Inputs changes.

Example
I have two services that generate HTML from different sources, but I want to display just the latest one that the user activated in a single HTML Text Area widget. The two different services are activated with two different buttons. But how do I show these two outputs in a single widget? Create an Event Router with two HTML inputs!

Now I just tie each service output to the Inputs and tie the Output to the HTML Text Area Text (note: the icon for Input2 is incorrect; it should be HTML as well. This system is running 8.5.1, so perhaps it's an issue in that release).

Now when the user clicks on either button, the correct service's HTML is sent to the HTML Text Area. Ta-da!

P.S. I noticed in some older posts that Event Routers used to be a widget or extension that came and went. Now (8.5+) it is baked into the Functions on the far right side of Mashup Builder.
View full tip
When using Value Streams to log historical data, there's a service to purge the ValueStream entries from the Thing itself. But what to do when a Thing that once logged values into a ValueStream has been deleted? Currently, there's no OOTB way to delete these entries if they're not being used anymore. I was recently asked this question and wanted to share the answer with the entire community. I created a utility application that queries the TWX DB directly for Things that are present in the ValueStream but don't exist anymore, and allows a user to purge them.

These services assume PostgreSQL as the persistence provider for the ValueStreams. The services can be modified if you're using SQLServer. They do not apply to InfluxDB persistence providers.

The twxDBConnector thing is based on the PostgreSqlServer template, which is present in the Relational Databases extension. It has 4 main services:

getEntriesToPurge: Queries the TWX DB for all the entries related to a Thing. It does not consider the ValueStream id, so it will purge all the entries across all value streams. Requires a Thing name as an input;
getMissingThings: Queries Things that are present in the ValueStream DB table but not in the Things table, meaning that they were deleted;
purgeThingEntries: Purges the entries related to a Thing. It does not consider the ValueStream id, so it will purge all the entries across all value streams;
purgeAllEntries: Purges all the entries related to Things that were deleted.

The queries can be modified to allow the selection of the value stream to be cleaned.

I also added a sample mashup that leverages the services.

The twxDBConnector has a configuration table that requires the DB connection string, user and password.

You can also do it directly from the DB using PGAdmin and purge it all:

DELETE FROM value_stream
WHERE value_stream.entry_id IN
    -- Queries all entries in the value stream table that belong to an inexistent thing
    (SELECT entry_id
     FROM value_stream
     LEFT JOIN thing_model ON value_stream.source_id = thing_model.name
     WHERE thing_model.name IS NULL)

Attention: These services change the TWX DB directly, so use them carefully.

To use it:

Import the PostgresSQLServer Extension (you might need to change the JAR in the extension depending on the TWX version you're using);
Import the entities from the purgeVSEntries.xml

Thanks @dsantos for the help on optimizing the queries.

Hope it helps.
Ewerton
View full tip
Users of ThingWorx Analytics (TWA) may choose to create a predictive model using TWA or import a predictive model that was created using other software. When importing into or exporting out of TWA, this predictive model must be in a PMML (Predictive Model Markup Language) version 4.3+ format. This post describes how to complete the import and export processes.

Exporting:
The user may create a model in two main ways inside of TWA: using the Builder user interface, or by using the 'Create Job' service that exists in the Training Thing. Whichever method is used, a model Job Id is created automatically by TWA for that model. It is this model Job Id that is used to identify the model inside of TWA, regardless of what is being done with that model.

If a model is trained using Builder, the user may highlight that model, click 'Job Details', and then copy the Job ID. This is done as follows:

Next, the user will navigate to Browse --> Things --> …TrainingThing. This is the Training Microservice inside of TWA where all the functionality involved with training a model exists. Within the …TrainingThing, the user will execute the 'RetrieveModel' service under Services. When executing the service, the user will paste the model Job ID (ex. 49704f1a-7fcd-4e38-ab53-84ef46517d0a) they copied earlier, and press 'Execute'. The resulting text can then be highlighted and copied to Notepad or some other text editor, and saved in .pmml format (ex. 'ModelExport.pmml').

Importing Through Results Microservice:
To import a model that has been saved in PMML 4.3+ format into TWA using the Results Microservice, the user will navigate to Manage --> Repositories (ex. AnalyticsUploadStorage) --> Actions --> Upload, and choose the PMML file. The user will then navigate to Browse --> Things --> …ResultsThing. This is the Results Microservice inside of TWA where all the functionality exists related to previously trained models. Within the …ResultsThing, the user will execute the 'UploadModel' service under Services. Alternatively, the user can upload the model from any repository using the 'UploadModelFromRepository' service.

To create a model from the uploaded PMML inside of TWA, the user will fill out the filePath and name, then execute the service. Note: This model will not show up in Builder, as that would require model validation information that is not part of the imported PMML file.

The resulting Job Id can be used to make predictions, such as by using the …PredictionThing's BatchScore or RealtimeScore services. At this point, the uploaded model acts the same way as if the model were created inside of that TWA environment.

Importing Through Analytics Manager:
To import a model that has been saved in PMML 4.3+ format into TWA using the Analytics Manager, the user will navigate to Analytics --> Analytics Manager --> Analysis Models, and click the green "New" button. Next the user will choose the provider name (or create a new one by navigating to Analytics --> Analytics Manager --> Analysis Providers). The user will also check the box to "Upload Model", and click the grey "Choose File" button to find the PMML file. Finally, the user will click the black "Upload" button, then the green "Save" button.

At this point, the model is uploaded into ThingWorx Analytics, and the user may progress through the subsequent steps to set up "Analysis Events" and "Analysis Jobs" that will be powered by the imported model.
View full tip
Please find here a LabVIEW implementation for connecting to ThingWorx via REST calls. Have fun using it. Any feedback is appreciated. https://github.com/Seppel1985/LabVIEW_TWX_RestAPI
View full tip
When predicting a Boolean goal, such as failure in the next hour or any other goal that has a yes or no answer, ThingWorx Analytics (TWXA) models will output a 'risk' of the event occurring. TWXA will intelligently pick a threshold beyond which that risk warrants attention.

1. In Analytics Builder, click on the export button
2. This will export a PMML model and download it for you
3. Open up the PMML model; in the output section, you will find a condition that explains the threshold that was selected by TWX Analytics.

In this example case, TWXA chose 0.5 as the best threshold.

Note: The export button will only be available in Builder for TWXA 8.4+.
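For illustration only, such a threshold condition in a PMML Output section might look roughly like this (a hypothetical hand-written fragment, not an actual TWXA export; the field names are made up):

<Output>
  <OutputField name="probability_true" optype="continuous" dataType="double" feature="probability" value="true"/>
  <!-- Flag the record when the predicted risk exceeds the selected 0.5 threshold -->
  <OutputField name="alert" optype="categorical" dataType="string" feature="transformedValue">
    <Apply function="if">
      <Apply function="greaterThan">
        <FieldRef field="probability_true"/>
        <Constant dataType="double">0.5</Constant>
      </Apply>
      <Constant dataType="string">true</Constant>
      <Constant dataType="string">false</Constant>
    </Apply>
  </OutputField>
</Output>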
View full tip
Installing an Open Source Time Series Platform
For testing InfluxDB and its graphical user interface, Chronograf, I'm using Docker images for easy deployment. For this post I assume you have worked with Docker before.

In this setup, InfluxDB and Chronograf will share an internal Docker network to exchange data.

InfluxDB can be accessed, e.g. by ThingWorx, via its exposed port 8086. Chronograf can be accessed for administrative purposes via its port 8888. The following commands can be used to create an InfluxDB environment.

Pull images

sudo docker pull influxdb:latest
sudo docker pull chronograf:latest

Create a virtual network

sudo docker network create influxdb

Start the containers

sudo docker run -d --name=influxdb -p 8086:8086 --net=influxdb --restart=always influxdb
sudo docker run -d --name=chronograf -p 8888:8888 --net=influxdb --restart=always chronograf --influxdb-url=http://influxdb:8086

InfluxDB should now be reachable and will also restart automatically when Docker (or the Operating System) is restarted.
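To quickly verify that InfluxDB is up, its /ping health endpoint can be queried from the Docker host (a 204 No Content response means the database is reachable):

curl -i http://localhost:8086/ping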
View full tip
This post covers how to build and operationalize a time series model using ThingWorx Analytics. A lookback window is used to read multiple previous rows before the current one, and to base the prediction on those lookback rows.

In this example we use time series data to predict water flow for different water pumps in a system.

A full explanation of the method is attached, and all necessary resources are included in the attached files. A generic sketch of the lookback idea itself is shown below.
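To illustrate the lookback concept (a generic Python sketch, not TWXA's internal implementation; the window size and example values are made up):

# Turn a univariate series into rows of `lookback` previous values plus the value to predict.
def make_lookback_rows(series, lookback=3):
    rows = []
    for i in range(lookback, len(series)):
        features = series[i - lookback:i]  # the lookback window
        target = series[i]                 # the value to predict
        rows.append((features, target))
    return rows

# Example: hourly water flow readings for one pump
flow = [10.2, 10.5, 11.1, 12.8, 12.3, 11.9]
for features, target in make_lookback_rows(flow, lookback=3):
    print(features, "->", target)
# [10.2, 10.5, 11.1] -> 12.8
# [10.5, 11.1, 12.8] -> 12.3
# [11.1, 12.8, 12.3] -> 11.9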
View full tip