
IoT Tips

Applicable Releases: ThingWorx Platform 7.0 to 8.5

Description: Introduction to ThingWorx Extension Development, with the following topics:
- What is an Extension
- Why build an Extension
- Prerequisites
- Installing the Eclipse plugin and features
- Creating entities with the plugin and including exported Entities in an Extension Project
- Upgrading or updating an existing Extension in ThingWorx
- Building with Gradle and Ant

ThingWorx Extension Development Guide
Recently I needed to parse and handle XML data natively inside of a ThingWorx script, and this XML file happened to have a SOAP namespace as well. I learned a few things along the way that I couldn't find much documentation on, so I am sharing them here.

Lessons Learned

The biggest lesson I learned is that ThingWorx uses "E4X" XML handling. This is a language that Mozilla created as a way for JavaScript to handle XML (the full name is "ECMAScript for XML"). While Mozilla deprecated the language in 2014, Rhino, the JavaScript engine that ThingWorx uses on the server, still supports it, so ThingWorx does too. Here's a tutorial on E4X: https://developer.mozilla.org/en-US/docs/Archive/Web/E4X_tutorial. The built-in linter in ThingWorx will complain about E4X syntax, but it still works. I learned how to get to the data I wanted and loop through it to create an InfoTable. Hopefully this is what you want to do as well.

Selecting an Element and Iterating

My data came inside of a SOAP envelope, which was meaningless information to me. I wanted to get down a few layers. Here's a sample of my data that has made-up information in place of the customer's original data:

<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" headers="">
    <SOAP-ENV:Body>
        <get_part_schResponse xmlns="urn:schemas-iwaysoftware-com:iwse">
            <get_part_schResult>
                <get_part_schRow>
                    <PART_NO>123456</PART_NO>
                    <ORD_PROC_DIV_CD>E</ORD_PROC_DIV_CD>
                    <MFG_DIV_CD>E</MFG_DIV_CD>
                    <SCHED_DT>2020-01-01</SCHED_DT>
                </get_part_schRow>
                <get_part_schRow>
                    <PART_NO>789456</PART_NO>
                    <ORD_PROC_DIV_CD>E</ORD_PROC_DIV_CD>
                    <MFG_DIV_CD>E</MFG_DIV_CD>
                    <SCHED_DT>2020-01-01</SCHED_DT>
                </get_part_schRow>
            </get_part_schResult>
        </get_part_schResponse>
    </SOAP-ENV:Body>
</SOAP-ENV:Envelope>

To get to the schRow data, I need to get past SOAP and into a few layers of XML. To do that, I make a new variable and use E4X selectors to get there:

var data = resultXML.*::Body.*::get_part_schResponse.*::get_part_schResult.*;

Note a few things:
- resultXML is a variable in the service that contains the XML data.
- I skipped the Envelope tag since that's the root.
- The .* syntax does not mean "all the following"; it means "all namespaces". You can define and specify the namespaces instead of using .*, but I didn't find value in that. I found some sample code that theoretically should work on a VMware forum: https://communities.vmware.com/thread/592000.

This gives me schRow as an XMLList that I can iterate through. You can see what you have at this point by converting the data to a String and outputting it:

var debugText = String(data);

Now that I am at the schRow data, I can use a for each loop to add to an InfoTable. Note that result must already be an InfoTable created earlier in the service (see the sketch below):

for each (var row in data) {
    result.AddRow({
        PartNumber: row.*::PART_NO,
        OrderProcessingDivCD: row.*::ORD_PROC_DIV_CD,
        ManufacturingDivCD: row.*::MFG_DIV_CD,
        ScheduledDate: row.*::SCHED_DT
    });
}

Whew! That's it! Data into an InfoTable! Next time, I'll ask for a JSON API. 😊
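A quick aside on that loop: here is a minimal sketch of creating the result InfoTable beforehand, assuming a hypothetical DataShape named PartScheduleDS whose fields match the four columns above. Depending on the fields' base types, you may also need to wrap each E4X value in String(...) or Number(...) before adding the row.

// Build the output InfoTable from a DataShape
// (PartScheduleDS is a hypothetical DataShape with the
// PartNumber, OrderProcessingDivCD, ManufacturingDivCD
// and ScheduledDate fields used in the loop above)
var result = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape({
    infoTableName: "InfoTable",
    dataShapeName: "PartScheduleDS"
});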
Here is a tutorial explaining the process of uploading a PMML file from an external system to ThingWorx Analytics. The tutorial steps are explained in the attached PDF, and all referenced files can be found in the attached ZIP.
The AddStreamEntries snippet does not offer much information, except that it needs an InfoTable as input. It is, however, based on the InfoTable for the AddStreamEntry service.

To use the AddStreamEntries service, an InfoTable based on sourceType, values, location, source, timestamp and tags must be used.

In this example, I started with a new Thing based on a Stream template and a DataShape with three STRING fields a, b and c (a is the primary key), matching the code below.

This DataShape must be converted into an InfoTable which is used as the values parameter. It's important that the timestamp parameter has distinct values! Otherwise values matching the same timestamp will be overwritten!

We don't really need the sourceType, as ThingWorx will automatically determine the type by knowing the source and which kind of Entity Type it is.

I created a new MyStreamThing with a new service, filling the InfoTable and the Stream. The result is the following code, which will add 5 rows to the Stream:

// *** SET UP META DATA FOR INFO TABLE ***

// create a new InfoTable based on AddStreamEntries parameters (timestamp, location, source, sourceType, tags, values)

var myInfoTable = { dataShape: { fieldDefinitions : {} }, rows: [] };

myInfoTable.dataShape.fieldDefinitions['timestamp']  = { name: 'timestamp', baseType: 'DATETIME' };
myInfoTable.dataShape.fieldDefinitions['location']   = { name: 'location', baseType: 'LOCATION' };
myInfoTable.dataShape.fieldDefinitions['source']     = { name: 'source', baseType: 'STRING' };
myInfoTable.dataShape.fieldDefinitions['sourceType'] = { name: 'sourceType', baseType: 'STRING' };
myInfoTable.dataShape.fieldDefinitions['tags']       = { name: 'tags', baseType: 'TAGS' };
myInfoTable.dataShape.fieldDefinitions['values']     = { name: 'values', baseType: 'INFOTABLE' };

// *** SET UP ACTUAL VALUES FOR INFO TABLE ***

// create new meta data

var tags = new Array();
var timestamp = new Date();
var location = new Object();
location.latitude = 0;
location.longitude = 0;
location.elevation = 0;
location.units = "WGS84";

// add rows to the InfoTable (5 times)

for (var i = 0; i < 5; i++) {

    // create new values based on the Stream DataShape

    var params = {
        infoTableName : "InfoTable",
        dataShapeName : "Cxx-DS"
    };

    var values = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);

    // add something to the values to make them unique
    // create and add a new row based on the Stream DataShape
    // (only a single row is allowed per Stream entry!)

    var newValues = new Object();
    newValues.a = "aaa" + i; // STRING - isPrimaryKey = true
    newValues.b = "bbb" + i; // STRING
    newValues.c = "ccc" + i; // STRING

    values.AddRow(newValues);

    // create a new InfoTable row based on meta data & values;
    // add 10 ms to each entry to make its timestamp unique,
    // otherwise entries with the same timestamp will be overwritten

    var newEntry = new Object();
    newEntry.timestamp = new Date(Date.now() + (i * 10));
    newEntry.location = location;
    newEntry.source = me.name;
    newEntry.tags = tags;
    newEntry.values = values;

    // add the new row to the InfoTable
    // (push, so earlier rows are kept rather than replaced)

    myInfoTable.rows.push(newEntry);
}

// *** ADD myInfoTable (HOLDING MULTIPLE STREAM ENTRIES) TO THE STREAM ***

// add the stream entries in the InfoTable

var params = {
    values: myInfoTable /* INFOTABLE */
};

// no return

Things["MyStreamThing"].AddStreamEntries(params);

To verify the values have been added correctly, call the GetStreamEntriesWithData service on the MyStreamThing.
If you ever tested mashup rendering on mobile phones, you probably experienced that the mashup was not sizing to fit your mobile display. This "MobileHeader" extension enables the mashup to auto-adapt to mobile displays.

It adds the following parameters to the HTML header:

<meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=0">
<meta name="apple-mobile-web-app-capable" content="yes">
<meta name="apple-mobile-web-app-status-bar-style" content="black-translucent">

In the composer, just drop the "MobileHeader" extension into a section of the mashup.

This extension was tested up to version 7.4.
This is the simplest structure for looping through an InfoTable. It's simple, but it avoids having to use row indexes and cleans up the code for readability as well.

//Assume an incoming InfoTable parameter named "thingList"
for each (var row in thingList.rows) {
    // Each row is already assigned to the row variable in the loop
    var thingName = row.name;
}

You can also nest these loops (just use a different variable from "row"), as shown in the sketch below. Also important to note: do not add or remove row entries of the InfoTable inside the loop. If you do, you may end up skipping or repeating rows in the loop, since the indexes will be changed.
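For reference, a minimal sketch of the nested form mentioned above, assuming a second, hypothetical InfoTable parameter named lineList with thingName and lineName fields:

// Outer and inner loops must use different variable names
for each (var row in thingList.rows) {
    for each (var lineRow in lineList.rows) {
        // e.g. match things to lines by name (illustrative only)
        if (row.name === lineRow.thingName) {
            logger.debug(row.name + " belongs to " + lineRow.lineName);
        }
    }
}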
The following code is best practice when creating any "entity" in a ThingWorx service script. When a new entity is created (like a Thing) it will be loaded into the JVM memory immediately, but it is not committed to disk until a transaction (service) successfully completes. For this reason, ALL code in such a service must be in a try/catch block to handle exceptions. In order to roll back the create call, the catch must call a delete for any entity created. In-line comments give further detail.

try {
    var params = {
        name: "NewThingName",
        description: "This Is A New Thing",
        thingTemplateName: "GenericThing"
    };
    Resources["EntityServices"].CreateThing(params);

    // Always enable and restart a new thing to make it active on the Platform
    Things["NewThingName"].Enable();
    Things["NewThingName"].Restart();

    // Now create an Organization for the new Thing
    var params = {
        topOUName: "NewOrgName",
        name: "NewOrgName",
        description: "New Organization for new Thing",
        topOUDescription: "New Org Main"
    };
    Resources["EntityServices"].CreateOrganization(params);

    // Any code that could potentially cause an exception should
    // also be included in the try-catch block.
} catch (err) {
    // If an exception is caught, we need to attempt to delete everything
    // that was created to roll back the entire transaction.
    // If we do not do this, a "ghost" entity will remain in memory.
    // We must do this in reverse order of creation so there are no dependency conflicts.
    // We also do not know where it failed, so we must attempt to remove all of them,
    // but also handle exceptions in case they were not created.

    try {
        var params = {name: "NewOrgName"};
        Resources["EntityServices"].DeleteOrganization(params);
    }
    catch(ex2) { //Org was not created
    }

    try {
        var params = {name: "NewThingName"};
        Resources["EntityServices"].DeleteThing(params);
    }
    catch(ex2) { //Thing was not created
    }
}
Super simple widget that embeds the HTML5 audio tag, allowing MP3 files to be played and/or triggered by another mashup event.
In case it's useful for anyone, I successfully used the ThingWorx Importer to import an Entity using version 8.4.1. The import command was in a CURL script (which can also be run using e.g. Cygwin on Windows) and the entity data was contained in an XML file. The XML file is attached to this post, and the CURL script is copied into the bottom of the post body.

Notes on using the script:
- Copy the CURL code into import.sh
- You might need to change the line endings to UNIX, e.g. in Notepad++ menu option Edit > EOL Conversion > Unix
- Give permissions to run import.sh (chmod +x import.sh)
- ./import.sh

>>>>>>>>>>>>
#!/bin/bash
APPKEY="2e6704c0-XXXX-XXXX-XXXX-a1589e387d1a"
TWX_HTTP_PORT="8018"
FILE_PATH_TWX="Things_testImport.xml"
PROTOCOL="http://"
IP="localhost"
URL="$PROTOCOL""$IP"":""$TWX_HTTP_PORT""/Thingworx/Importer?purpose=import"

curl -X POST -H 'appKey: '$APPKEY \
-H 'Content-Type: multipart/form-data' \
-H 'Accept: application/json' \
-H 'x-thingworx-session: true' \
-H 'X-XSRF-TOKEN: TWX-XSRF-TOKEN-VALUE' \
-F 'upload=@'$FILE_PATH_TWX \
$URL
<<<<<<<<<<<<

Also, the TWX Importer is explained in this support article.
Those who have been working with ThingWorx for many years will have noticed the work done around ingress stress testing and performance optimization. Adding InfluxDB as a time-series data persistence provider really helped level up these capabilities while simultaneously decreasing the overall resources required by the infrastructure. However, with this ease of ingestion comes a hidden challenge: the query and data-processing performance required to work the data into something useful.

Often It's Too Much Data

In general, most customers that I work with want to collect far too much data -- without knowing what it will be used for, or what processing will be required in order to make it usable and useful. This is a trap in how many people envision IoT projects: they are told by infrastructure providers that cloud storage and compute resources are abundant and cheap, and that they should collect as much data as possible. This buildup of data means that more effort needs to be spent working it into something useful (data engineering/feature extraction) and addressing common data issues (quality, gaps, precision, etc.). This might be fine for mature companies with large data analytics teams; however, this is a makeup that I've only seen in the largest of our customers. Some advice - figure out what you need and how you'll use it, and then collect that. Work on extracting value today rather than hoping that extra data collected now will provide some insights years from now.

Example - Problem Statement

You got your Thing Model designed and edge devices connected. Now you've got data flowing in and being stored every 5 seconds in InfluxDB. Great progress! Now on to building the applications which cover the various use cases. The raw data will most likely need to be processed, and potentially even significantly transformed into other information, in order to make it useful. Turning a "powered on and running" BOOLEAN into an "hour meter" INTEGER is a simple example. Then you may need to provide a report showing equipment run time hours by day over a month. The maintenance team may also have asked to look for usage patterns which lead to breakdowns, requiring other data points to be extracted from the initial one, like the number of daily starts, average daily run time, and average time between restarts. The problem here is that unless you have prepared these new data points and stored them as well (say in a Stream), you are going to have to build these data sets on the fly, which can be time and resource intensive and not give you the response time expected. As you can imagine, repeatedly querying and processing large volumes of unchanging raw data has resource and time implications - which is why data collection and data use need to be thought about separately.

Data Engineering

In the above examples, the key is creating new data points which are calculated progressively throughout normal operation. This not only makes the information that you want available when you need it - in the right format - but it also significantly reduces resource requirements by avoiding the constant reprocessing of raw data. It also helps with managing data purging: as you create and store usable insights, you can eventually just archive away your old raw data streams.

Direct Database Queries vs. ThingWorx Data Services

Despite the above being a rule of thumb, sometimes a simple, well-structured database query can get you exactly what you need, and do so quite quickly.
This is especially true for InfluxDB when working with extremely large time-series datasets. The challenge here is that ThingWorx persistence providers abstract away the complexity of writing one's own database queries, so we can't easily get at the database's raw power and are forced to query back more data than needed and work it into a usable format in memory (which is not fast).

Leveraging the InfluxDB API using the ContentLoader Technique

As InfluxDB's API is 100% REST, we can access it using the in-built ThingWorx ContentLoader services. Check out this demonstration and explanation video where I talk about how to interact directly with InfluxDB in order to crush massive time-series data and get back much more usable and manageable data sets. It is important to note that you should use a read-only database user here, as you should never modify the ThingWorx databases, to avoid untested scenarios which may lead to data corruption.

Optimizing ThingWorx query performance with the InfluxDB REST API - YouTube
InfluxToolBox ThingWorx demo project (by T. Wobben)
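To make the technique concrete, here is a minimal sketch of calling the InfluxDB v2 query API from a ThingWorx service with ContentLoaderFunctions. The server URL, organization, bucket, measurement and token property are placeholders, and the Flux script is only an example; the endpoint returns annotated CSV, which you would still parse into an InfoTable.

// Aggregate a day of raw samples down to hourly means inside InfluxDB,
// so only the small result set comes back to ThingWorx
var fluxQuery = 'from(bucket: "thingworx")' +
    ' |> range(start: -24h)' +
    ' |> filter(fn: (r) => r._measurement == "MyThing" and r._field == "Power")' +
    ' |> aggregateWindow(every: 1h, fn: mean)';

var csvResult = Resources["ContentLoaderFunctions"].PostText({
    url: "https://influx.example.com/api/v2/query?org=myorg", // placeholder server and org
    content: fluxQuery,
    contentType: "application/vnd.flux",
    headers: {
        "Authorization": "Token " + me.influxReadOnlyToken, // hypothetical property holding a read-only token
        "Accept": "application/csv"
    }
});
// csvResult holds annotated CSV; parse it into an InfoTable as needed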
/* Define a DataShape used in an InfoTable Parameter for this service call */
twDataShape* sampleInfoTableAsParameterDs = twDataShape_Create(twDataShapeEntry_Create("ColumnA", NO_DESCRIPTION, TW_STRING));
twDataShape_AddEntry(sampleInfoTableAsParameterDs, twDataShapeEntry_Create("ColumnB", NO_DESCRIPTION, TW_NUMBER));
twDataShape_AddEntry(sampleInfoTableAsParameterDs, twDataShapeEntry_Create("ColumnC", NO_DESCRIPTION, TW_BOOLEAN));
twDataShape_SetName(sampleInfoTableAsParameterDs, "SampleInfoTableAsParameterDataShape");

/* Define an input parameter that is an InfoTable of shape SampleInfoTableAsParameterDataShape */
twDataShapeEntry* infoTableDsEntry = twDataShapeEntry_Create("itParam", NULL, TW_INFOTABLE);
twDataShapeEntry_AddAspect(infoTableDsEntry, "dataShape", twPrimitive_CreateFromString("SampleInfoTableAsParameterDataShape", TRUE));

twDataShape* inputParametersDefinitionDs = twDataShape_Create(infoTableDsEntry);

/* Register the remote function */
twApi_RegisterService(TW_THING, SERVICE_INTEGRATION_THINGNAME, "testMultiRowInfotable", NO_DESCRIPTION,
    inputParametersDefinitionDs, TW_NOTHING, NULL, PlatformCallsServiceWithMultiRowInfoTableServiceImpl, NULL);

/* Note that you will have to manually create the DataShape in ThingWorx before attempting to add this remote service to your Thing. */
It often happens that we need to copy a large file to the ThingWorx server periodically, and what's worse, the big file keeps changing (like a log file). This sample gives a simpler way to implement this. The main ideas in the sample are:
1. Lower the management burden on the ThingWorx server by putting all the work on the edge SDK side
2. Save network bandwidth by uploading only the file increment and appending it to the older file on the ThingWorx server

Java SDK version in this sample: 6.0.1-255
Disclaimer: example was provided by Hatcher Chad - chad@onfarmsystems.com

//
// For this example, we'll have a Math service
// which takes two numbers, and an operation.
// The result will be that operation performed on the two inputs.

//
// We either need an Application Key,
// or user credentials to perform the reads and writes.
// App keys are a little safer.
// In this demo, we'll store it on the Entity as a Property.
var appKey = me.appKey;

//
// The service name needs to be unique and not already in use.
var serviceName = "MyMath";

//
// What are the inputs to the service?
// We'll define them nicely here, but manipulate this object later.
var parameters = {
    "op" : "STRING",
    "x" : "NUMBER",
    "y" : "NUMBER"
};

//
// What datatype does the service return?
// If it's an infotable,
// then you'll also have to specify the data shape
// as part of the resultType's aspect,
// but I won't demonstrate that here.
var output = "NUMBER";

//
// What is the actual service script?
// We'll define it here as an array of lines, and then join them together.
var serviceScript = [
    "var result = (function() {",
    "    switch(op) {",
    "        case \"add\": return x + y;",
    "        case \"sub\": return x - y;",
    "        case \"mult\": return x * y;",
    "        case \"div\": return x / y;",
    "        default: return op in Math ? Math[op](x, y) : 0;",
    "    };",
    "})();",
].join("\n");

////////

//
// Let's convert the friendly parameter definition
// into the structure that ThingWorx uses:
var parameterDefinitions = Object.keys(parameters).reduce(function(parameterDefinitions, parameterName, index) {
    var parameterType = parameters[parameterName];
    parameterDefinitions[parameterName] = {
        "name": parameterName,
        "aspects": {},
        "description": "",
        "baseType": parameterType,
        "ordinal": index
    };
    return parameterDefinitions;
}, {});

//
// Now let's set up our service definition and implementation.
var definition = {
    "isAllowOverride": false,
    "isOpen": false,
    "sourceType": "Unknown",
    "parameterDefinitions": parameterDefinitions,
    "name": serviceName,
    "aspects": {
        "isAsync": false
    },
    "isLocalOnly": false,
    "description": "",
    "isPrivate": false,
    "sourceName": "",
    "category": "",
    "resultType": {
        "name": "result",
        "aspects": {},
        "description": "",
        "baseType": output,
        "ordinal": 0
    }
};

var implementation = {
    "name": serviceName,
    "description": "",
    "handlerName": "Script",
    "configurationTables": {
        "Script": {
            "isMultiRow": false,
            "name": "Script",
            "description": "Script",
            "rows": [{
                "code": serviceScript
            }],
            "ordinal": 0,
            "dataShape": {
                "fieldDefinitions": {
                    "code": {
                        "name": "code",
                        "aspects": {},
                        "description": "code",
                        "baseType": "STRING",
                        "ordinal": 0
                    }
                }
            }
        }
    }
};

////////

//
// Here are the URLs we'll need in order to make updates.
// You can change the thing name ('ServiceModifier' here)
// to something else.
// If you use credentials instead of an app key,
// then you can remove the appKey parameter here,
// but you'll have to add the username and password
// to the two ContentLoaderFunctions calls.
var url = {
    export : "http://127.0.0.1:8080/Thingworx/Things/ServiceModifier?Accept=application/json&appKey="+appKey,
    import : "http://127.0.0.1:8080/Thingworx/Things/ServiceModifier?appKey="+appKey
};

//
// We can download the entity to modify as a JSON object.
// Older versions of ThingWorx might not support this.
var config = Resources.ContentLoaderFunctions.GetJSON({
    url : url.export,
});

//
// We have to modify both the 'effectiveShape',
// as well as the 'thingShape'.
config.effectiveShape.serviceDefinitions[serviceName] = definition;
config.effectiveShape.serviceImplementations[serviceName] = implementation;

config.thingShape.serviceDefinitions[serviceName] = definition;
config.thingShape.serviceImplementations[serviceName] = implementation;

// Finally, we can push our updates back into ThingWorx,
// using the import URL defined above.
Resources.ContentLoaderFunctions.PutText({
    url : url.import,
    content : JSON.stringify(config),
    contentType : "application/json",
});

// The end.
I recently had a customer who wanted to run services on ThingWorx from Power BI to retrieve existing operational data, and we were a bit stumped on how to pass the API key over in the headers, so I did a bit of Googling and pieced together the solution. It's not quite intuitive on the Power BI side, so I thought it would be helpful to share. If you have any other experience with integrating ThingWorx with Power BI, feel free to add a comment.

Prepare ThingWorx
- Create an Application Key that has Run Time execution access to the services you need.
- Understand the inputs needed for the service you would like. I'll have examples of none, one, an InfoTable, and multiple inputs.

Power BI
Follow these steps in Power BI:

1. In Power BI, create a new blank query.

2. On the left, right-click on Query1 and go to the Advanced Editor.

3. Replace all of the body content with the following, replacing your API key, appropriate endpoint, and base URL as needed (this is an example with NO input parameters; examples with other parameters follow):

let
    appKey = "your-application-key-here",
    endpoint = "Things/YourThingNameHere/Services/YourServiceNameHere",
    baseUrl = "https://YourServerNameHere/Thingworx/",
    url = Text.Combine({baseUrl, endpoint}),
    body = "",
    request = Web.Contents(
        url,
        [
            Headers = [
                appKey = appKey,
                #"Content-Type" = "application/json",
                Accept = "application/json"
            ],
            Content = Text.ToBinary(body)
        ]
    ),
    Source = Json.Document(request)
in
    Source

4. Click "Done", and now you'll have a warning about how to connect. Click the "Edit Credentials" button.

5. Leave it on Anonymous and click "Connect".

6. You should now see the return data coming from ThingWorx.

Note that I had a little trouble with this authentication initially and it saved the wrong method. To clear that out, go to the ribbon bar item "Data source settings", select the server, and clear it out.

Other Examples

Here is an example for sending a single string parameter (only the body binding changes):

let
    appKey = "your-application-key-here",
    endpoint = "Things/YourThingNameHere/Services/YourServiceNameHere",
    baseUrl = "https://YourServerNameHere/Thingworx/",
    url = Text.Combine({baseUrl, endpoint}),
    body = "{""InputParameter"": ""InputValue""}",
    request = Web.Contents(
        url,
        [
            Headers = [
                appKey = appKey,
                #"Content-Type" = "application/json",
                Accept = "application/json"
            ],
            Content = Text.ToBinary(body)
        ]
    ),
    Source = Json.Document(request)
in
    Source

Here's an example of sending a string and an integer:

let
    appKey = "your-application-key-here",
    endpoint = "Things/YourThingNameHere/Services/YourServiceNameHere",
    baseUrl = "https://YourServerNameHere/Thingworx/",
    url = Text.Combine({baseUrl, endpoint}),
    body = "{""InputString"": ""Hello, world!"", ""InputNumber"" : 42}",
    request = Web.Contents(
        url,
        [
            Headers = [
                appKey = appKey,
                #"Content-Type" = "application/json",
                Accept = "application/json"
            ],
            Content = Text.ToBinary(body)
        ]
    ),
    Source = Json.Document(request)
in
    Source

Here is an example for sending an InfoTable. Note that you must supply the dataShape with fieldDefinitions. If you're using an existing Data Shape, you can get the JSON by using the service GetDataShapeMetadataAsJSON() that is on the data shape.

let
    appKey = "your-application-key-here",
    endpoint = "Things/YourThingNameHere/Services/YourServiceNameHere",
    baseUrl = "https://YourServerNameHere/Thingworx/",
    url = Text.Combine({baseUrl, endpoint}),
    body = "{""propertyNames"": { ""rows"": [ { ""name"": ""FirstEntityName"", ""description"": ""The first entity"" }, { ""name"": ""SecondEntityName"", ""description"": ""The second entity"" }], ""dataShape"": { ""fieldDefinitions"": { ""name"": { ""name"": ""name"", ""aspects"": { ""isPrimaryKey"": true }, ""description"": ""Entity name"", ""baseType"": ""STRING"", ""ordinal"": 0 }, ""description"": { ""name"": ""description"", ""aspects"": {}, ""description"": ""Entity description"", ""baseType"": ""STRING"", ""ordinal"": 0 } } } }}",
    request = Web.Contents(
        url,
        [
            Headers = [
                appKey = appKey,
                #"Content-Type" = "application/json",
                Accept = "application/json"
            ],
            Content = Text.ToBinary(body)
        ]
    ),
    Source = Json.Document(request)
in
    Source

If I find any more interesting ways to use Power BI with ThingWorx services, I'll add them on here.
ThingWorx Analytics is offered through the user interface called Analytics Builder with some pre-configured functionality. However, should you want to create your own jobs and mashups, all features from Analytics Builder, and some more, are available through the ThingWorx services. Running most functionality requires that you provide some data to run the Analytics services, and this is where the datasetRef parameter is required (a sketch of a datasetRef is shown after the lists below).

Data uploaded through Analytics Builder
- Any dataset uploaded through Builder will have a datasetUri, as shown in the image above, and the format will be parquet (all lowercase)
- The datasetUri can be obtained from the list of datasets in Builder

Passing data as an in-body Dataset
- If data isn't uploaded through Analytics Builder, data can be supplied as an InfoTable in the data parameter of the datasetRef
- Metadata will also need to be supplied if a new dataset is being created (create job of the AnalyticsServer_DataThing)
- If this data is being supplied for a scoring job, then as long as the column names match up to what the model is expecting, TWX Analytics will inference them appropriately
- The filter parameter is for parquet datasets already uploaded into TWXA and takes an ANSI SQL statement format to add conditions that reduce the number of rows
- Exclusions is a single-column infotable list of the columns you wish to remove from the job you are trying to submit. Example: if you want Profiles to run on only 5 out of 10 columns, you would give a list of the 5 columns you don't want included in this exclusions infotable
- Data may also be supplied as a csv file in the file repo in some cases, in which case you would give the datasetUri parameter the location of the file on the TWX file repo (of the format thingworx://UseCaseFileRepo/tempdata.csv) and the format, which would be csv
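A loose illustration of a datasetRef assembled in a service, using only the fields described above; the values are placeholders, and the exact structure should be checked against the Analytics service metadata for your version:

// Hypothetical datasetRef for a parquet dataset already uploaded via Analytics Builder
var datasetRef = {
    datasetUri: "<uri from the Builder dataset list>", // placeholder
    format: "parquet",                                 // all lowercase, as noted above
    filter: "pumpId = 'P100'"                          // optional ANSI SQL-style row condition (hypothetical column)
};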
This post covers how to build and operationalize a time series model using ThingWorx Analytics. A lookback window is used to read multiple previous rows before the current one and base the prediction on those lookback rows.

In this example we use time series data to predict water flow for different water pumps in a system.

There is a full explanation of the method attached, and all necessary resources are included in the attached files.
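As a toy illustration of the lookback idea in plain service script (the real configuration is covered in the attached explanation; the readings array and field names here are hypothetical):

// Turn a flow series into lookback rows: each output row carries the
// current reading plus the three previous readings as separate features
var readings = [10.2, 11.0, 10.8, 12.1, 12.4, 11.9]; // hypothetical water-flow samples
var lookback = 3;
var featureRows = [];
for (var i = lookback; i < readings.length; i++) {
    featureRows.push({
        flow_t0: readings[i],     // current value
        flow_t1: readings[i - 1], // one step back
        flow_t2: readings[i - 2], // two steps back
        flow_t3: readings[i - 3]  // three steps back
    });
}
// featureRows now holds (readings.length - lookback) rows ready for training/scoring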
Hello everyone,

Following a recent experience, I felt it was important to share my insights with you. The core of this article is to demonstrate how you can format a Flux request in ThingWorx and post it to InfluxDB, with the aim of offloading performance-heavy calculations to InfluxDB. The context is renewable energy. This article is not about Kepware, nor about connecting to InfluxDB. As a prerequisite, you may like to read this article: Using Influx to store Value Stream properties from... - PTC Community

Introduction

The following InfluxDB usage has been developed for an electricity energy provider.

Technical Context
- Kepware is used as a source of data. A simulation for Wind assets based on an Excel file is configured, delivering data in real time.
- A SQL database also gathers the same data as the simulation in Kepware. It is used to load historical data into InfluxDB, addressing cases of temporary data loss. Once back online, SQL helps to record the lost data in InfluxDB and compute the KPIs.
- InfluxDB is used to store data over time as well as calculated KPIs.
- An invoicing third-party system is simulated to get the electricity price according to the time of day.

Orchestration of InfluxDB operations with ThingWorx (v9.4.4)
- Set the numeric property to log
- Maintain control over execution logic
- Format the Flux request with dynamic inputs to send to InfluxDB

InfluxDB Cloud v2
- Store logged properties
- Enable quick data reads
- Execute calculations
Note: The free InfluxDB version is slower in write and read, and offers only 30 days max data retention.

ThingWorx model and services

ThingWorx context
Because relevant numeric properties are logged over time, new KPIs are calculated based on the logged data. In the following example, each Wind asset triggers a calculation every minute to get the monetary gain based on the current power produced and the current electricity price. The request is formatted in ThingWorx, then pushed to and executed in InfluxDB. Thus, ThingWorx server memory is not used for this calculation.

Services breakdown

CalculateMonetaryKPIs
Entry point service to calculate monetary KPIs. It uses the two following services: it triggers the FormatFlux service, then injects the result into the Post service.
Inputs: none
Output: NOTHING

FormatFlux_CalculateMonetaryKPI
Formats the request for the monetary KPI calculation, respecting the Flux syntax used by InfluxDB.
Inputs: bucketName (STRING), thingName (STRING)
Output: TEXT

PostTextToInflux
Generic service to post a request to InfluxDB, whatever the request is.
Inputs: FluxQuery (TEXT), influxToken (STRING), influxUrl (STRING), influxOrgName (STRING), influxBucket (STRING), thingName (STRING)
Output: INFOTABLE

Highlights - CalculateMonetaryKPIs
Find the full script in the attached "CalculateMonetaryKPIs script.docx". Url, token, organization and bucket are configured in the Persistence Provider used by the ValueStream. We dynamically get them from the ValueStream attached to this thing. From here, we can reuse them to set the inputs of the two other services using "MyConfig".

Highlights - FormatFlux_CalculateMonetaryKPI
Find the full script in the attached "FormatFlux_CalculateMonetaryKPI script.docx". The major part of this script is text in Flux syntax, into which we inject dynamic values. The service gets the last values of ElectricityPrice, Power and Capacity to calculate ImmediateMonetaryGain, PotentialMaxMonetaryGain and PotentialMonetaryLoss, as illustrated in the rough sketch below.
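To give an idea of the shape of that service, here is a rough sketch of how such a Flux text can be assembled with injected values (the real script is in the attachment; the measurement/field naming follows the example tables below, and the final to() write-back step is omitted):

var flux =
    'priceData = from(bucket: "' + bucketName + '")' +
    ' |> range(start: -1h)' +
    ' |> filter(fn: (r) => r._measurement == "' + thingName + '" and r._field == "ElectricityPrice")' +
    ' |> last()\n' +
    'powerData = from(bucket: "' + bucketName + '")' +
    ' |> range(start: -1h)' +
    ' |> filter(fn: (r) => r._measurement == "' + thingName + '" and r._field == "Power")' +
    ' |> last()\n' +
    // join on _time, then multiply price by power to get the gain;
    // Flux suffixes the joined value columns with the table keys (_value_p, _value_w)
    'join(tables: {p: priceData, w: powerData}, on: ["_time"])' +
    ' |> map(fn: (r) => ({ _time: r._time, _measurement: "' + thingName + '",' +
    ' _field: "ImmediateMonetaryGain", _value: r._value_p * r._value_w }))';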
Flux logic might not be easy for beginners, so let's break down the intermediate variables created on the fly in the Flux request. Let's take the example of the existing data in the bucket (with only two minutes of values):

_time                 | _measurement | _field           | _value
2024-07-03T14:00:00Z  | WindAsset1   | ElectricityPrice | 0.12
2024-07-03T14:00:00Z  | WindAsset1   | Power            | 100
2024-07-03T14:00:00Z  | WindAsset1   | Capacity         | 150
2024-07-03T15:00:00Z  | WindAsset1   | ElectricityPrice | 0.15
2024-07-03T15:00:00Z  | WindAsset1   | Power            | 120
2024-07-03T15:00:00Z  | WindAsset1   | Capacity         | 160

The request is articulated in the following steps:

1. Get the source values:
- Get the last price and store it in priceData:
  _time: 2024-07-03T15:00:00Z, ElectricityPrice: 0.15
- Get the last power and store it in powerData:
  _time: 2024-07-03T15:00:00Z, Power: 120
- Get the last capacity and store it in capacityData:
  _time: 2024-07-03T15:00:00Z, Capacity: 160

2. Join the three *Data tables on the same time. Note that the last values of price, power and capacity may not be set at the same time, so the final joinedData may be empty.
  _time: 2024-07-03T15:00:00Z, ElectricityPrice: 0.15, Power: 120, Capacity: 160

3. Perform the calculations:
- gainData stores the result of ElectricityPrice * Power:
  _time: 2024-07-03T15:00:00Z, _measurement: WindAsset1, _field: ImmediateMonetaryGain, _value: 18
- maxGainData stores the result of ElectricityPrice * Capacity
- lossData stores the result of ElectricityPrice * (Capacity - Power)

4. Add the results to the original bucket.

Highlights - PostTextToInflux
Find the full script in the attached "PostTextToInflux script.docx". It is a pretty straightforward script; the idea is to have a generic service to post a request. The header is quite original with the vnd.flux content type, and the URL needs to be formatted according to the InfluxDB API.

Well done!

Thanks to these steps, calculated values are stored in InfluxDB. Other services can be created to retrieve relevant InfluxDB data and visualize it in a mashup.

Last comment

It was the first time I was in touch with Flux script, so I wasn't comfortable, and I am still far from proficient. After spending more than a week browsing through the InfluxDB documentation and running multiple tests, I achieved limited success but nothing substantial for a final outcome. As a last resort, I turned to ChatGPT. Through a few interactions, I quickly obtained convincing results. Within a day, I had a satisfactory outcome, which I fine-tuned for relevant use.

Here are two examples of two consecutive ChatGPT prompts and answers. The output might need to be fine-tuned after the first answer.

Right after, I asked to convert it to a ThingWorx script format:

In this last picture, the script won't work: the fluxQuery is not well formatted for TWX. Please refer to the provided script "FormatFlux_CalculateMonetaryKPI script.docx" to see how to format the Flux query and insert variables inside. Despite mistakes, ChatGPT still mainly provides relevant code structure for beginners in Flux and is an undeniable boost for writing code.
This example provides the ability to generate a simple entity structure and some historical data for each entity. The historical data is run through a ThingWorx service to generate histogram data for display in a bar chart. The attached ThingWorx entities and PDF document contain the example as well as its documentation.
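For a sense of what such a service does, here is a minimal sketch of equal-width binning in service script; the values, field names and bin count are illustrative, and the attached entities contain the actual implementation.

// Bin numeric history values into equal-width buckets for a bar chart
var values = [4.2, 7.9, 5.5, 6.1, 9.8, 5.0, 7.2]; // e.g. collected from QueryPropertyHistory
var binCount = 5;
var minVal = Math.min.apply(null, values);
var maxVal = Math.max.apply(null, values);
var width = (maxVal - minVal) / binCount || 1; // guard against zero width when all values are equal
var counts = [];
for (var b = 0; b < binCount; b++) counts[b] = 0;
for (var i = 0; i < values.length; i++) {
    // clamp the top edge so the maximum value lands in the last bin
    var bin = Math.min(Math.floor((values[i] - minVal) / width), binCount - 1);
    counts[bin]++;
}
// counts[] can now be written into an InfoTable (bin label + count) for the bar chart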
Hi,

I have just been playing around with the Edge MicroServer on a couple of Raspberry Pis. I obviously want them connected to ThingWorx; however, they stop when the SSH session closes, which isn't ideal. I thought about doing something really quick and dirty using 'nohup', but this could have led to many running processes, and still wouldn't have started automatically when the Pi booted. So I did it right instead, using init.d daemon configurations.

There are two files; one for the EMS and one for the Lua Script Resource. You need to put these in /etc/init.d, then make sure they are owned by root.

sudo chown root /etc/init.d/thingworx*
sudo chgrp root /etc/init.d/thingworx*

You'll need to modify the paths in the first lines of these files to match where you have your microserver folder. Then you need to update the init.d daemon configs and enable the services.

sudo update-rc.d thingworx-ems defaults
sudo update-rc.d thingworx-ems enable
sudo update-rc.d thingworx-lsr defaults
sudo update-rc.d thingworx-lsr enable

You can then start (stop, etc.) like this:

sudo service thingworx-ems start
sudo service thingworx-lsr start

They should both also start automatically after a reboot.

Regards,

Greg
This simple example creates an infotable of hierarchical data from an existing datashape that can be used in a tree or D3 Tree widget. Attachments are provided that give a PDF document as well as the ThingWorx entities for the example. If you are just interested in the service code, it is listed below:

var params = {
    infoTableName : "InfoTable",
    dataShapeName : "TreeDataShape"
};

// CreateInfoTableFromDataShape(infoTableName:STRING("InfoTable"), dataShapeName:STRING):INFOTABLE(TreeDataShape)
var result = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);

var NewRow = {};
NewRow.Parent = "No Parent"; NewRow.Child = "Enterprise"; NewRow.ChildDescription = "Enterprise"; result.AddRow(NewRow);
NewRow.Parent = "Enterprise"; NewRow.Child = "Site1"; NewRow.ChildDescription = "Wilmington Plant"; result.AddRow(NewRow);
NewRow.Parent = "Site1"; NewRow.Child = "Site1.Line1"; NewRow.ChildDescription = "Blow Molding"; result.AddRow(NewRow);
NewRow.Parent = "Site1.Line1"; NewRow.Child = "Site1.Line1.Asset1"; NewRow.ChildDescription = "Preform Staging"; result.AddRow(NewRow);
NewRow.Parent = "Site1.Line1"; NewRow.Child = "Site1.Line1.Asset2"; NewRow.ChildDescription = "Blow Molder"; result.AddRow(NewRow);
NewRow.Parent = "Site1.Line1"; NewRow.Child = "Site1.Line1.Asset3"; NewRow.ChildDescription = "Bottle Unscrambler"; result.AddRow(NewRow);
NewRow.Parent = "Site1"; NewRow.Child = "Site1.Line2"; NewRow.ChildDescription = "Filling"; result.AddRow(NewRow);
NewRow.Parent = "Site1.Line2"; NewRow.Child = "Site1.Line2.Asset1"; NewRow.ChildDescription = "Rinser"; result.AddRow(NewRow);
NewRow.Parent = "Site1.Line2"; NewRow.Child = "Site1.Line2.Asset2"; NewRow.ChildDescription = "Filler"; result.AddRow(NewRow);
NewRow.Parent = "Site1.Line2"; NewRow.Child = "Site1.Line2.Asset3"; NewRow.ChildDescription = "Capper"; result.AddRow(NewRow);
NewRow.Parent = "Site1"; NewRow.Child = "Site1.Line3"; NewRow.ChildDescription = "Packaging"; result.AddRow(NewRow);
NewRow.Parent = "Site1.Line3"; NewRow.Child = "Site1.Line3.Asset1"; NewRow.ChildDescription = "Printer Labeler"; result.AddRow(NewRow);
NewRow.Parent = "Site1.Line3"; NewRow.Child = "Site1.Line3.Asset2"; NewRow.ChildDescription = "Packer"; result.AddRow(NewRow);
NewRow.Parent = "Site1.Line3"; NewRow.Child = "Site1.Line3.Asset3"; NewRow.ChildDescription = "Palletizing"; result.AddRow(NewRow);
NewRow.Parent = "Enterprise"; NewRow.Child = "Site2"; NewRow.ChildDescription = "Mobile Plant"; result.AddRow(NewRow);
NewRow.Parent = "Site2"; NewRow.Child = "Site2.Line1"; NewRow.ChildDescription = "Blow Molding"; result.AddRow(NewRow);
NewRow.Parent = "Site2.Line1"; NewRow.Child = "Site2.Line1.Asset1"; NewRow.ChildDescription = "Preform Staging"; result.AddRow(NewRow);
NewRow.Parent = "Site2.Line1"; NewRow.Child = "Site2.Line1.Asset2"; NewRow.ChildDescription = "Blow Molder"; result.AddRow(NewRow);
NewRow.Parent = "Site2.Line1"; NewRow.Child = "Site2.Line1.Asset3"; NewRow.ChildDescription = "Bottle Unscrambler"; result.AddRow(NewRow);
NewRow.Parent = "Site2"; NewRow.Child = "Site2.Line2"; NewRow.ChildDescription = "Filling"; result.AddRow(NewRow);
NewRow.Parent = "Site2.Line2"; NewRow.Child = "Site2.Line2.Asset1"; NewRow.ChildDescription = "Rinser"; result.AddRow(NewRow);
NewRow.Parent = "Site2.Line2"; NewRow.Child = "Site2.Line2.Asset2"; NewRow.ChildDescription = "Filler"; result.AddRow(NewRow);
NewRow.Parent = "Site2.Line2"; NewRow.Child = "Site2.Line2.Asset3"; NewRow.ChildDescription = "Capper"; result.AddRow(NewRow);
NewRow.Parent = "Site2"; NewRow.Child = "Site2.Line3"; NewRow.ChildDescription = "Packaging"; result.AddRow(NewRow);
NewRow.Parent = "Site2.Line3"; NewRow.Child = "Site2.Line3.Asset1"; NewRow.ChildDescription = "Printer Labeler"; result.AddRow(NewRow);
NewRow.Parent = "Site2.Line3"; NewRow.Child = "Site2.Line3.Asset2"; NewRow.ChildDescription = "Packer"; result.AddRow(NewRow);
NewRow.Parent = "Site2.Line3"; NewRow.Child = "Site2.Line3.Asset3"; NewRow.ChildDescription = "Palletizing"; result.AddRow(NewRow);
This JavaScript snippet creates a random integer between the two limits min and max (inclusive); define the limits first:

var min = 1;   // lower limit
var max = 10;  // upper limit
var dbl_Value = Math.floor(Math.random()*(max-min+1)+min); // integer between min and max (inclusive)