IoT & Connectivity Tips

This tutorial applies to Axeda version 6.1.6+, with sections applicable to 6.5+ (indicated below). Custom objects (or Groovy scripts) are the backbone of Axeda custom applications. As the developer, you decide what content type to give the data returned by the script.

What this tutorial covers

This tutorial provides examples of outputting data in different formats from Groovy scripts and consuming that data via JavaScript using the jQuery framework. While JavaScript and jQuery are preferred by the Axeda Innovation team, any front-end technology that can consume web services can be used to build applications on the Axeda Machine Cloud. Likewise, the formats discussed in this article are only a few examples of the wide variety of content types that Groovy scripts can output via Scripto. The content types available via Scripto are limited only by their portability over the TCP protocol, a qualification that includes all text-based and downloadable binary MIME types. As of July 2013, the UDP protocol (content streaming) is not supported by the current version of the Axeda Platform.

Formats discussed in this article:
1) JSON
2) XML
3) CSV
4) Binary content, with an emphasis on image files (6.5+)

For a tutorial on how to create custom objects that work with custom applications, check out Using Google Charts API with Scripto. For a discussion of what Scripto is and how it relates to Groovy scripts and Axeda web services, take a look at Unleashing the Power of the Axeda Platform via Scripto.

Serializing Data

JSON

For those building custom applications with JavaScript, serializing data from scripts into JSON is a great choice, as the data is easily consumable as native JavaScript objects. The net.sf.json JSON library is available in the SDK. It offers an easy way to serialize objects on the Platform, particularly v2 SDK objects.
import net.sf.json.JSONArray import static com.axeda.sdk.v2.dsl.Bridges.* def asset = assetBridge.findById(parameters.assetId) def response = JSONArray.fromObject(asset).toString(2) return ["Content-Type": "application/json", "Content": response] Outputs: [{     "buildVersion": "",     "condition": {         "detail": "",         "id": "3",         "label": "",         "restUrl": "",         "systemId": "3"     },     "customer": {         "detail": "",         "id": "2",         "label": "Default Organization",         "restUrl": "",         "systemId": "2"     },     "dateRegistered": {         "date": 11,         "day": 1,         "hours": 18,         "minutes": 7,         "month": 2,         "seconds": 49,         "time": 1363025269253,         "timezoneOffset": 0,         "year": 113     },     "description": "",     "detail": "testasset",     "details": null,     "gateways": [],     "id": "12345",     "label": "",     "location": {         "detail": "Default Organization",         "id": "2",         "label": "Default Location",         "restUrl": "",         "systemId": "2"     },     "model": {         "detail": "testmodel",         "id": "2345",         "label": "standalone",         "restUrl": "",         "systemId": "2345"     },     "name": "testasset",     "pingRate": 0,     "properties": [         {             "detail": "",             "id": "1",             "label": "TestProperty",             "name": "TestProperty",             "parentId": "2345",             "restUrl": "",             "systemId": "1",             "value": ""         },         {             "detail": "",             "id": "4",             "label": "TestProperty0",             "name": "TestProperty0",             "parentId": "2345",             "restUrl": "",             "systemId": "4",             "value": ""         },         {             "detail": "",             "id": "3",             "label": "TestProperty1",             "name": "TestProperty1",             "parentId": "2345",             "restUrl": "",             "systemId": "3",             "value": ""         },         {             "detail": "",             "id": "2",             "label": "TestProperty2",             "name": "TestProperty2",             "parentId": "2345",             "restUrl": "",             "systemId": "2",             "value": ""         }     ],     "restUrl": "",     "serialNumber": "testasset",     "sharedKey": [],     "systemId": "12345",     "timeZone": "GMT" }] This output can be traversed as Javascript object with its nodes accessible using dot (.) notation. For example, if you set the above JSON as the content of variable "json", you can access it in the following way, without any preliminary parsing needed: assert json[0].condition.id == 3 If you use jQuery, a Javascript library, feel free to make use of axeda.js, which contains utility functions to pass data to and from the Axeda Platform.  One function in particular is used in most example custom applications found on this site, the axeda.callScripto function.  It relies on the jQuery ajax function to make the underlying call. /**   * makes a call to the enterprise platform services with the name of a script and passes   * the script any parameters provided.   
*   * default is GET if the method is unknown   *   * Notes: Added POST semantics - plombardi @ 2011-09-07   *   * original author: Zack Klink & Philip Lombardi   * added on: 2011/7/23   */ // options - localstoreoff: "yes" for no local storage, contentType: "application/json; charset=utf-8", axeda.callScripto = function (method, scriptName, scriptParams, attempts, callback, options) {   var reqUrl = axeda.host + SERVICES_PATH + 'Scripto/execute/' + scriptName + '?sessionid=' + SESSION_ID   var contentType = options.contentType ? options.contentType : "application/json; charset=utf-8"   var local   var daystring = keygen()   if (options.localstoreoff == null) {   if (localStorage) {   local = localStorage.getItem(scriptName + JSON.stringify(scriptParams))   }   if (local != null && local == daystring) {   return dfdgen(reqUrl + JSON.stringify(scriptParams))   } else {   localStorage.setItem(scriptName + JSON.stringify(scriptParams), daystring)   }   }   return $.ajax({   type: method,   url: reqUrl,   data: scriptParams,   contentType: contentType,   dataType: "text",   error: function () {   if (attempts) {   expiredSessionLogin();   setTimeout(function () {   axeda.callScripto('POST', scriptName, scriptParams, attempts - 1, callback, options)   }, 1500);   }   },   success: function (data) {   if (options.localstoreoff == null) {   localStorage.setItem(reqUrl + JSON.stringify(scriptParams), JSON.stringify([data]))   }   if (contentType.match("json")) {   callback(unwrapResponse(data))   } else {   callback(data)   }   }   }) }; Using the axeda.callScripto function: var postToPlatform = function (scriptname, callback, map) {         var options = {             localstoreoff: "yes",             contentType: "application/json; charset=utf-8"         }        // Javascript object "map" has to be stringified to post to Axeda Platform         axeda.callScripto("POST", scriptname, JSON.stringify(map), 2, function (json) {             // callback gets the JSON object output by the Groovy script             callback(json)         }, options)     } The JSON object is discussed in more detail here. Back to Top XML XML is the preferred language of integration with external applications and services. Groovy provides utilities to make XML serialization a trivial exercise. import groovy.xml.MarkupBuilder import static com.axeda.sdk.v2.dsl.Bridges.* def writer = new StringWriter() def xml = new MarkupBuilder(writer) def findAssetResult = assetBridge.find(new AssetCriteria(modelNumber: parameters.modelName)) // find operation returns AssetReference class. Contains asset id only def assets = findAssetResult.assets      xml.Response() {   Assets() {   assets.each { AssetReference assetRef ->   def asset = assetBridge.findById(assetRef.id)               // asset contains a ModelReference object instead of a Model.  ModelReference has a detail property, not a name property   Asset() {   id(asset.id)   name(asset.name)   serial_number(asset.serialNumber)   model_id(asset.model.id)   model_name(asset.model.detail)   }   }   }   } return ['Content-Type': 'text/xml', 'Content': writer.toString()] Output: <Assets>   <Asset>   <id>98765</id>   <name>testasset</name>   <serial_number>testasset</serial_number>   <model_id>4321</model_id>   <model_name>testmodel</model_name>   </Asset> </Assets Although XML is not a native Javascript object as is JSON, Javascript libraries and utilities are available for parsing XML into an object traversable in Javascript. 
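As a quick illustration of consuming that XML response on the client, here is a minimal sketch using jQuery (it assumes the XML returned by the Groovy script above is held in a string variable named xmlString):

// Parse the XML string into a DOM document, then wrap it with jQuery
var xmlDoc = jQuery.parseXML(xmlString);
var $xml = jQuery(xmlDoc);

// Walk each Asset node and read its child element values
$xml.find("Asset").each(function () {
    var assetName = jQuery(this).find("name").text();
    var serial = jQuery(this).find("serial_number").text();
    console.log(assetName + " / " + serial);
});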
For more information on parsing XML in Javascript, see W3 Schools XML Parser.  For those using jQuery, check out the jQuery.parseXML function. Back to Top Outputting Files (Binary content types) CSV CSV comes in handy for spreadsheet generation as it is compatible with Microsoft Excel. The following example is suitable for Axeda version 6.1.6+ as it makes use of the Data Accumulator feature to create a downloadable file. import com.axeda.drm.sdk.device.ModelFinder import com.axeda.drm.sdk.Context import com.axeda.drm.sdk.scripto.Request import com.axeda.common.sdk.id.Identifier import com.axeda.drm.sdk.device.Model import com.axeda.drm.sdk.device.DataItem import com.axeda.drm.sdk.device.DataItemValue import com.axeda.drm.sdk.data.DataValue import com.axeda.drm.sdk.device.DeviceFinder import com.axeda.drm.sdk.device.Device import com.axeda.drm.sdk.mobilelocation.MobileLocation import com.axeda.drm.sdk.data.DataValueList import com.axeda.drm.sdk.data.CurrentDataFinder import com.axeda.drm.sdk.mobilelocation.CurrentMobileLocationFinder import groovy.xml.MarkupBuilder import com.axeda.platform.sdk.v1.services.ServiceFactory /* * ExportObjectToCSV.groovy * * Creates a csv file from either all assets of a model of a single asset that can then be used to import them back into another system. * * @param model        -   (REQ):Str model name. * @param serial        -   (OPT):Str serial number. * * @author Sara Streeter <sstreeter@axeda.com> */ def writer = new StringWriter() def xml = new MarkupBuilder(writer) InputStream is try {    Context CONTEXT = Context.getSDKContext()    ModelFinder modelFinder = new ModelFinder(CONTEXT)     modelFinder.setName(Request.parameters.model)    Model model = modelFinder.find()    DeviceFinder deviceFinder = new DeviceFinder(CONTEXT)    deviceFinder.setModel(model)    List<Device> devices = [] def exportkey = model.name Device founddevice if (Request.parameters.serial){     deviceFinder.setSerialNumber(Request.parameters.serial)    founddevice = deviceFinder.find()    logger.info(founddevice?.serialNumber)    if (founddevice != null){    devices.add(founddevice)    }    else throw new Exception("Device ${Request.parameters.serial} cannot be found.")    exportkey += "${founddevice.serialNumber}" } else {     devices = deviceFinder.findAll()     exportkey += "all" } // use a Data Accumulator to store the information def dataStoreIdentifier = "FILE-CSV-export_____" + exportkey def daSvc = new ServiceFactory().dataAccumulatorService if (daSvc.doesAccumulationExist(dataStoreIdentifier, devices[0].id.value)) {   daSvc.deleteAccumulation(dataStoreIdentifier, devices[0].id.value) } List<DataItem> dataItemList = devices[0].model.dataItems def firstrow = [ "model", "serial", "devicename", "conditionname", "currentlat","currentlng" ]                     def tempfirstrow = dataItemList.inject([]){list, dataItem ->             list << dataItem.name;             list         }         firstrow += tempfirstrow            firstrow = firstrow.join(',')         firstrow += '\n'         daSvc.writeChunk(dataStoreIdentifier, devices[0].id.value, firstrow);     CurrentMobileLocationFinder currentMobileLocationFinder = new CurrentMobileLocationFinder(CONTEXT) devices.each{ device ->                 CurrentDataFinder currentDataFinder = new CurrentDataFinder(CONTEXT, device)                 currentMobileLocationFinder.deviceId = device.id.value                 MobileLocation mobileLocation = currentMobileLocationFinder.find()                 def lat = 0                 def lng = 0       
          if (mobileLocation){                     lat = mobileLocation?.lat                     lng = mobileLocation?.lng                 }                 def row =                 [                     device.model.name,                     device.serialNumber,                     device.name,                     device.condition?.name,                     lat,                     lng                     ]                                     def temprow = dataItemList.inject([]){ subList,dataItem ->                         DataValue value = currentDataFinder.find(dataItem.name)                                             def val = "NULL"                         val = value?.asString() != "?" ? value?.asString() : val                         subList <<  val                         subList                     }                 row += temprow                 row = row.join(',')                 row += '\n'                 daSvc.writeChunk(dataStoreIdentifier, devices[0].id.value, row);             }    // stream the data accumulator to create the file is = daSvc.streamAccumulation(dataStoreIdentifier, devices[0].id.value) def disposition = 'attachment; filename=CSVFile' + exportkey + '.csv' return ['Content-Type': 'text/csv', 'Content-Disposition':disposition, 'Content': is.text] } catch (def ex) {    xml.Response() {        Fault {            Code('Groovy Exception')            Message(ex.getMessage())            StringWriter sw = new StringWriter();            PrintWriter pw = new PrintWriter(sw);            ex.printStackTrace(pw);            Detail(sw.toString())        }    } logger.info(writer.toString()) return ['Content-Type': 'text/xml', 'Content': writer.toString()] } return ['Content-Type': 'text/xml', 'Content': writer.toString()] Back to Top Image Files (6.5+) The FileStore in Axeda version 6.5+ allows fine-grained control of uploaded and downloaded files. As Groovy scripts can return binary data via Scripto, this allows use cases such as embedding a Groovy script url as the source for an image. The following example uses the FileStore API to create an Image out of a valid image file, scales it to a smaller size and stores this smaller file. import com.axeda.drm.sdk.Context import com.axeda.drm.sdk.data.* import com.axeda.drm.sdk.device.* import com.axeda.drm.sdk.mobilelocation.CurrentMobileLocationFinder import com.axeda.drm.sdk.mobilelocation.MobileLocation import com.axeda.drm.sdk.mobilelocation.MobileLocationFinder import com.axeda.sdk.v2.bridge.FileInfoBridge import static com.axeda.sdk.v2.dsl.Bridges.* import com.axeda.services.v2.ExecutionResult import com.axeda.services.v2.FileInfo import com.axeda.services.v2.FileInfoReference import com.axeda.services.v2.FileUploadSession import net.sf.json.JSONObject import groovy.json.JsonBuilder import net.sf.json.JSONArray import com.axeda.drm.sdk.scripto.Request import org.apache.commons.io.IOUtils import org.apache.commons.lang.exception.ExceptionUtils import com.axeda.common.sdk.id.Identifier import groovy.json.* import javax.imageio.ImageIO; import java.awt.RenderingHints import java.awt.image.BufferedImage import java.io.ByteArrayOutputStream; import java.awt.* import java.awt.geom.* import javax.imageio.* import java.awt.image.* import java.awt.Graphics2D import javax.imageio.stream.ImageInputStream /*    Image-specific FileStore entry point to post and store files */ def contentType = "application/json" final def serviceName = "StoreScaledImage" // Create a JSON Builder def json = new JsonBuilder() // Global try/catch. 
Gotta have it, you never know when your code will be exceptional! try {       Context CONTEXT = Context.getSDKContext()     def filesList = []     def datestring = new Date().time     InputStream inputStream = Request.inputStream       def reqbody = Request.body     // all of our Request Parameters are available here     def params = Request.parameters     def filename = Request?.headers?.'Content-Disposition' ?     Request?.headers?.'Content-Disposition' : "file___" + datestring + ".txt"     def filelabel = Request.parameters.filelabel ?: filename     def description = Request.parameters.description ?: filename     def contType = Request.headers?."content-type" ?: "image/jpeg"     def tag = Request.parameters.tag ?: "cappimg"     def encoded = Request.parameters.encoded?.toBoolean()   def dimlimit = params.dimlimit ? params.dimlimit : 280     // host is available in the headers when the script is called with AJAX     def domain = Request.headers?.host     byte[] bytes = IOUtils.toByteArray(inputStream);     def fileext = filename.substring(filename.indexOf(".") + 1,filename.size())     def outerMap = [:]     // check that file extension matches an image type     if (fileext ==~ /([^\s]+(\.(?i)(jpg|jpeg|png|gif|bmp))$)/){         if (inputStream.available() > 0) {                 def scaledImg                               try {                     def img = ImageIO.read(inputStream)                     def width = img?.width                              def height = img?.height                     def ratio = 1.0                     def newBytes                                       if (img){                                               if (width > dimlimit || height > dimlimit){                             // shrink by the smaller side so it can still be over the limit                             def dimtochange = width > height ? 
height : width                             ratio = dimlimit / dimtochange                                                       width = Math.floor(width * ratio).toInteger()                             height = Math.floor(height * ratio).toInteger()                         }                                             newBytes = doScale(img, width, height, ratio, fileext)                      if (newBytes?.size() > 0){                         bytes = newBytes                      }                     }                 }                 catch(Exception e){                     logger.info(e.localizedMessage)                                   }                                           outerMap.byteCount = bytes.size()                    FileInfoBridge fib = fileInfoBridge                 FileInfo myImageFile = new FileInfo(filelabel: filelabel,                                                     filename: filename,                                                     filesize: bytes?.size(),                                                     description: description,                                                     tags: tag                                                     )                    myImageFile.contentType = contType                    FileUploadSession fus = new FileUploadSession();                 fus.files = [myImageFile]                    ExecutionResult fer = fileUploadSessionBridge.create(fus);                 myImageFile.sessionId = fer.succeeded.getAt(0)?.id                               ExecutionResult fileInfoResult = fib.create(myImageFile)                               if (fileInfoResult.successful) {                     outerMap.fileInfoSave = "File Info Saved"                     outerMap.sessionId = "File Upload SessionID: "+fer.succeeded.getAt(0)?.id                     outerMap.fileInfoId = "FileInfo ID: "+fileInfoResult?.succeeded.getAt(0)?.id                     ExecutionResult er = fib.saveOrUpdate(fileInfoResult.succeeded.getAt(0).id,new ByteArrayInputStream(bytes))                     def fileInfoId = fileInfoResult?.succeeded.getAt(0)?.id                     String url = "${domain}/services/v1/rest/Scripto/execute/DownloadFile?fileId=${fileInfoId}"                     if (er.successful) {                         outerMap.url = url                     } else {                         outerMap.save = "false"                         logger.info(logFailure(er,outerMap))                     }                 } else {                     logger.info(logFailure(fileInfoResult, outerMap))                 }                } else {                 outerMap.bytesAvail = "No bytes found to upload"             }         } else {             outerMap.imagetype = "Extension $fileext is not a supported image file type."         }     filesList << outerMap     // return the JSONBuilder contents     // we specify the content type, and any object as the return (even an outputstream!)     return ["Content-Type": contentType,"Content":JSONArray.fromObject(filesList).toString(2)]     // alternately you may just want to serial an Object as JSON:     // return ["Content-Type": contentType,"Content":JSONArray.fromObject(invertedMessages).toString(2)] } catch (Exception e) {     // I knew you were exceptional!     // we'll capture the output of the stack trace and return it in JSON     json.Exception(             description: "Execution Failed!!! 
An Exception was caught...",             stack: ExceptionUtils.getFullStackTrace(e)     )     // return the output     return ["Content-Type": contentType, "Content": json.toPrettyString()] } def doScale(image, width, height, ratio, fileext){     if (image){     ByteArrayOutputStream baos = new ByteArrayOutputStream();     def bytes      def scaledImg = new BufferedImage( width, height, BufferedImage.TYPE_INT_RGB )        Graphics2D g = scaledImg.createGraphics();         g.setRenderingHint(RenderingHints.KEY_INTERPOLATION, RenderingHints.VALUE_INTERPOLATION_BILINEAR);         g.scale(ratio,ratio)         g.drawImage(image, null, null);         g.dispose();              ImageIO.write( scaledImg, fileext, baos )       baos.flush()       bytes = baos.toByteArray()       baos.close()     }     else {         logger.info("image to be scaled is null")         return false     }   return bytes   } private void logFailure(ExecutionResult fileInfoResult, LinkedHashMap outerMap) {     outerMap.message = fileInfoResult.failures.getAt(0)?.message     outerMap.source = fileInfoResult.failures.getAt(0)?.sourceOfFailure     outerMap.details = fileInfoResult.failures.getAt(0)?.details?.toString()     outerMap.fileInfoSave = "false" } The next example makes use of the jQuery framework to upload an image to this script via an http POST. Note: This snippet is available as a jsFiddle at http://jsfiddle.net/LrxWF/18/ With HTML5 button: <input type="file" id="fileinput" value="Upload" /> var PLATFORM_HOST = document.URL.split('/apps/')[0]; // this is how you would retrieve the host on an Axeda instance var SESSION_ID = null // usually retrieved from login function included below /*** * Depends on jQuery 1.7+ and HTML5, assumes an HTML5 element such as the following: * <input type="file" id="fileinput" value="Upload" /> * **/ $("#fileinput").off("click.filein").on("click.filein", function () {     fileUpload() }) var fileUpload = function () {     $("#fileinput").off('change.fileinput')     $("#fileinput").on('change.fileinput', function (event) {         if (this.files && this.files.length > 0) {             handleFiles("http://" + PLATFORM_HOST, this.files)         }     }) } var handleFiles = function (host, files) {     $.each(files, function (index, file) {         var formData = new FormData();         var filename = file.name         formData.append(filename, file)         var url = host + '/services/v1/rest/Scripto/execute/StoreScaledImage?filelabel=' + filename + "&tag=myimg"         url = setSessionId(url)         jQuery.ajax(url, {             beforeSend: function (xhr) {                 xhr.setRequestHeader('Content-Disposition', filename);             },             cache: false,             cache: false,             processData: false,             type: 'POST',             contentType: false,             data: formData,             success: function (json) {                 refreshPage(json)                 console.log(json)             }         });     }) } var setSessionId = function (url) {     // you would already have this from logging in     return url + "&sessionid=" + SESSION_ID } var refreshPage = function (json) {     // here you would refresh your page with the returned JSON     return } /*** *  The following functions are not used in this demonstration, however they are necessary for a complete app and are found in axeda.js  http://gist.github.com/axeda/4340789 ***/     function login(username, password, success, failure) {         var reqUrl = host + SERVICES_PATH + 'Auth/login';     
    localStorage.clear()         return $.get(reqUrl, {             'principal.username': username,                 'password': password         }, function (xml) {             var sessionId = $(xml).find("ns1\\:sessionId, sessionId").text()             // var sessionId = $(xml).find("[nodeName='ns1:sessionId']").text(); - no longer works past jquery 1.7             if (sessionId) {                 // set the username and password vars for future logins.                         storeSession(sessionId);                 success(SESSION_ID); // return the freshly stored contents of SESSION_ID             } else {                 failure($(xml).find("faultstring").text());             }         }).error(function () {             $('#loginerror').html('Login Failed, please try again')         });     }; function storeSession(sessionId) {     var date = new Date();     date.setTime(date.getTime() + SESSION_EXPIRATION);     SESSION_ID = sessionId     document.cookie = APP_NAME + '_sessionId=' + SESSION_ID + '; expires=' + date.toGMTString() + '; path=/';     return true; }; The return JSON includes a URL that you can use as the source for images: [{   "byteCount": 14863,   "fileInfoSave": "File Info Saved",   "sessionId": "File Upload SessionID: 01234",   "fileInfoId": "FileInfo ID: 12345",   "url": "http://yourdomain.axeda.com/services/v1/rest/Scripto/execute/DownloadFile?fileId=12345" }] The DownloadFile Custom Object looks like the following: import static com.axeda.sdk.v2.dsl.Bridges.* import javax.activation.MimetypesFileTypeMap import com.axeda.services.v2.* import com.axeda.sdk.v2.exception.* import com.axeda.drm.sdk.scripto.Request def knowntypes = [          [png: 'image/png']         ,[gif: 'image/gif']         ,[jpg: 'image/jpeg']     ] def params = Request.parameters.size() > 0 ? Request.parameters : parameters def response = fileInfoBridge.getFileData(params.fileId) def fileinfo = fileInfoBridge.findById(params.fileId) def type = fileinfo.filename.substring(fileinfo.filename.indexOf('.') + 1,fileinfo.filename.size()) type = returnType(knowntypes, type) def contentType = params.type ?: (type ?: 'image/jpg') return ['Content': response, 'Content-Disposition': contentType, 'Content-Type':contentType] def returnType(knowntypes, ext){     return knowntypes.find{ it.containsKey(ext) }?."$ext" } Make sure to append a valid session id to the end of the URL when using it as the source for an image. The techniques discussed above can be applied to any type of binary file output with consideration for the type of file being processed. A Word on Streaming Content streaming such as streaming of video or audio files over UDP is not currently supported by the Axeda Platform.
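To tie the upload and download pieces together, here is a hedged sketch of using the URL returned by StoreScaledImage as the source of an image on the page (it assumes jQuery, the SESSION_ID handling shown earlier, and a hypothetical img element with id scaled-image):

// json is the array returned by StoreScaledImage; append a valid session id to the download URL
var imageUrl = json[0].url + "&sessionid=" + SESSION_ID;

// Point an existing <img> element at the Scripto DownloadFile URL
jQuery("#scaled-image").attr("src", imageUrl);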
    Go beyond functional application development and learn how you can dress up your UI to enhance the user experience. In this webinar, we'll show you how to implement common design elements throughout your UI including custom logos, color schemes, and styles.   In this video, UI expert Tsveta Saul will demonstrate valuable tips and tricks for making a sophisticated IoT application with ThingWorx, including: Masters and Menus - enhance your app framework with personalized menu titles, backgrounds, and custom headers Layout and Labels - create consistency in your application by combining commonly used widgets State and Style Definitions - define widget styles to illustrate your brand Images - integrate visuals to describe thousands of words' worth of design and development specifications Watch the recording above, and download this sample Mashup containing all the data and entities shared in the video.   Q&A   We didn’t have time to get to all of the questions during the live webcast, but we’ve answered them here on our blog. Have any additional questions? Please leave us a comment. WHERE ELSE CAN STATE DEFINITIONS BE APPLIED? State Definitions can be applied to the Map Widget. Not only can you add color-coded pins, but also images that describe certain types of assets, for example. You can also add State Definitions to your Time Series Chart. For instance, you can color-code for values that exceed a certain threshold.   ARE THERE ANY IMPORTANT OR HIDDEN PROPERTIES THAT I SHOULD KNOW ABOUT? One feature to take note of is the Z-index, which helps you control the position of a widget in relation to another widget. This option can control whether users see the specific widget or not. ThingWorx has a robust a system that handles permissions – from users and groups to organizations. There’s a lot of control as to who can see what either through Model, or through specific services and conditions inside the Mashup.   HOW CAN I DISABLE A USER FROM SEEING SOMETHING OR DOING A CERTAIN ACTION? Again, that is handled through the security system of ThingWorx. There are multiple ways of authenticating users from other systems. You can create services that are related to your Things or user session parameters, and define whether a user can see those parameters or not.   HOW CAN WE CHANGE DATE FORMAT FOR X-AXIS IN TIME SERIES CHART? This property is available within the Time Series Chart Widget. It uses standard programming language – you can add or delete in the property.   CAN LAYOUT SPACING BE A PERCENTAGE? The spacing within the layout (none, 5px, 10px, 15px, or 20px).
Analytics projects typically involve using the Analytics API rather than Analytics Builder to accomplish different tasks. The attached documentation provides code snippets that can be used to automate the most common analytics tasks on a project, such as:
Creating a dataset
Training a model
Real-time scoring (predictive and prescriptive)
Retrieving the validation metrics for a model
Appending additional data to a dataset
Retraining the model
The documentation also provides examples that are specific to time series datasets. The attached .zip file contains both the document and some entities that you need to import into ThingWorx to access the services used in the examples.
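The snippets in the attached document are the authoritative reference for the Analytics API itself; as a generic illustration of the calling pattern, the sketch below invokes a ThingWorx service over the REST API from client-side JavaScript. The Thing name (AnalyticsJobManager), the service name (SubmitTrainingJob), the input parameter, and the application key are all placeholders standing in for whatever entities and services the attached .zip provides:

// Invoke a ThingWorx service through the REST API (entity and service names are placeholders)
fetch("https://your-server/Thingworx/Things/AnalyticsJobManager/Services/SubmitTrainingJob", {
    method: "POST",
    headers: {
        "Content-Type": "application/json",
        "Accept": "application/json",
        "appKey": "your-application-key"   // assumed appKey-based authentication
    },
    body: JSON.stringify({ datasetRef: "myDataset" })   // placeholder input parameter
})
    .then(function (response) { return response.json(); })
    .then(function (result) { console.log(result); });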
Best Practices in Data Preparation for ThingWorx Analytics
Update to Connected Factories Benchmark

Scenario Three: One Kepware Server in ThingWorx 9.0
The goal of this scenario is to confirm that ThingWorx 9.0 delivers the same performance seen in scenario one, where one Kepware Server represented a single factory in version 8.5.

Matrix 1 - Slow (15s slow properties, 1s fast)
The lower-frequency tests performed the same in 9.0. Even the 10k ingestion test, which lies very close to the boundary for a single Kepware Server, passed with no errors.

Matrix 2 - Fast (5s slow properties, 500ms fast)
These showed similar results, but the 500-thing, 50-10-property test had data loss in 9.0. However, that write rate is much higher than PTC recommends for a single Kepware Server anyway.

Matrix 3 - Faster (1s slow properties, 200ms fast)
The fastest tests had similar results as well. The larger tests ran more successfully with two Kepware Servers (data not shown here).

Conclusions
ThingWorx 9.0 is similarly capable of ingesting data using Kepware Server. A single instance can still achieve up to 10k wps. Future scenarios will now make use of ThingWorx 9.0.

Download the updated draft here!
  It’s here! It’s happening! ThingWorx 9.0 released today!   Unleash the Power of Now and leverage exciting new features like: Improved performance and higher availability via active-active clustering Faster, more streamlined development with: New widgets like line, bar & schedule charts Undo & redo functionality in Mashup Builder (look out for an upcoming Ask Kaya tech tip) New Mashup mobile settings Simpler management of entities and applications: Improved auditing scale & usability Ability to group entities via Thing Groups Ability to deploy apps built prior to 8.5 with Solution Central And more—like: Confidence model training Delegated authorization for Flow [Navigate] Component-based app development for custom production use apps (upgrade-safe) And even more—the list goes on and on. Check out the full list of new functionality in the 9.0 release notes and discover our What’s New page.   Access 9.0 under “ThingWorx Foundation” on the PTC Downloads Page and unleash its power today! As you get started, check out the ThingWorx Help Center for guidance.   Enjoy 9.0 and let us know what you think below! Kaya
This video continues Module 9: Anomaly Detection of the ThingWorx Analytics Training videos. It begins with a ThingWatcher exercise, and concludes by describing Statistical Process Control (SPC). The "SPC Accelerator" will be covered in Module 9 Part 3.
Video Author: Polina Osipova
Original Post Date: June 10, 2016
Applicable Releases: ThingWorx

Description: This is a video tutorial on creating a Media Entity, and importing and displaying an image.
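Once a Media Entity exists, its content can also be referenced directly by URL, which is how an image is typically displayed in a Mashup or a custom page. Below is a minimal sketch; the entity name pump-image and the element id preview are hypothetical, and an authenticated session or application key is assumed:

// Media entities are served from /Thingworx/MediaEntities/<entity name>
var mediaUrl = "/Thingworx/MediaEntities/pump-image";

// Show it in an existing <img> element on the page
document.getElementById("preview").src = mediaUrl;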
Hi everybody,

In this blog post I want to share my local ThingWorx installation, with some optimizations I made for local development.

- Use -XX:+UseConcMarkSweepGC. This selects the older CMS garbage collector in the JVM instead of the G1GC collector recommended by the ThingWorx Installation Guide since version 7.2. The advantage of ConcMarkSweepGC is that startup time is faster and the total memory footprint of Tomcat is far lower than with G1GC.

- Use -agentlib:jdwp=transport=dt_socket,address=1049,server=y,suspend=n. This lets you attach the Java IDE of your choice directly to the Tomcat server, so you can debug your Extension code, or even the ThingWorx code itself using, for example, the Eclipse Class Decompilers. Change 1049 to whatever port you want to expose for server debugging.

- Use -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=60000 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false. This enables JMX monitoring of the server. I usually use VisualVM from the JDK bin folder, but you can use any JMX monitoring tool. This configuration uses no authentication and no SSL, and listens on port 60000 - modify it if you need to.

I usually start Tomcat manually via startup.bat, and my setenv.bat looks like this:

set JAVA_HOME=C:\Program Files\Java\jdk1.8.0_102
set JRE_HOME=C:\Program Files\Java\jdk1.8.0_102
REM THINGWORX_PLATFORM_SETTINGS is where the platform-settings.json file is located
set THINGWORX_PLATFORM_SETTINGS=D:\Work\servers\apache-tomcat-8.0.33
set CATALINA_OPTS=-d64 -XX:+UseNUMA -XX:+UseConcMarkSweepGC -Dfile.encoding=UTF-8 -agentlib:jdwp=transport=dt_socket,address=1049,server=y,suspend=n -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=60000 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false

In this mode I can watch for errors in near real time from the console, and it makes killing the server for a Java Extension reload a breeze: Ctrl+C.

Please don't hesitate to provide feedback on this document; I certainly welcome it.

Be warned: THESE ARE NOT PRODUCTION SETTINGS.

Best regards,
Vladimir
Build an Equipment Dashboard Guide Part 2   Step 5: Display Data   Now that you have configured the visual part of your application, you need to bind the Widgets in your Mashup to a data source.   Add Services to Mashup   In the top-right, ensure the Data tab is selected. Click the green + symbol. In the Entities Filter field, search for and select MyPump. In the Services Filter field, type GetPropertyValues. Click the right-arrow beside GetPropertyValues. Note how GetPropertyValues was added to the right-side under Selected Services Check the checkbox for Execute on Load. This causes the Service to execute when the Mashup starts.        7. In the Services Filter field, type QueryPropertyHistory. 8. Click the right-arrow beside QueryPropertyHistory. 9. Check the checkbox for Execute on Load. 10. Click Done to close the pop-up. Note how the Services have been added to the Data tab in the top-right.          11. Click Save. Now that we have access to the backend data, we want to bind it to our Widgets.   Value Display   Configure the Value Display to display the SerialNumber of the pump. Under the Data tab, expand GetPropertyValues > Returned Data > All Data. Drag-and-drop GetPropertyValues > serialNumber onto the Value Display Widget in the top section. On the Select Binding Target popup, select Data. Image   We want to use an Image Widget to display a thumbnail picture of the pump for easy reference. To do that, though, you first need to upload an image to Foundation by creating a Media Entity. Right-click the image below, then click "Save image as..." to download. Click Browse > Visualization > Media. Click + New. In the Name field, type pump-thumbnail. If Project is not already set, click the + in the Project text box and select the PTCDefaultProject. Under Image, click Change. Navigate to and select the pump-image.png file you just downloaded. On the navigation pop-up, click Open to close the pop-up and confirm the image selection. At the top of Foundation, click Save. Change Image to pump   We will now update the Image Widget to display the ThingWorx Media Entity we just created. Return to the pump-dashboard Mashup. Click the Image Widget to select it, and ensure that the bottom-left Properties tab is active. In the bottom-left Properties' Filter field, type SourceURL. For the SourceURL Property, search for and select pump-thumbnail. Click Save.   Line Chart   Configure the Line Chart to display Property values changing over time. In the top-right Data tab, expand QueryPropertyHistory > Returned Data. Drag and drop QueryPropertyHistory > All Data onto the Line Chart Widget in the bottom-right Canvas section. On the Select Binding Target pop-up, select Data. Ensure the Line Chart Widget is selected. On the Line Chart's Property panel in the bottom-left, in the Filter field, type XAxisField. For the XAxisField Property, select timestamp. In the Filter field, type LegendFilter. Check the checkbox for LegendFilter. Click Save.   Verify Data Bindings   You can see the configuration of data sources bound to Widgets in the bottom-center Connections pane. In the top-right Data tab, click GetPropertyValues. Check the diagram in the bottom-center Connections window to confirm a data source is bound to the Value Display Widget.       2. Also in the top-right Data tab, click QueryPropertyHistory. Confirm that the diagram shows it is bound to the Line Chart.         3. Click Save.     Step 6: Test Application   Browse to your Mashup and click View Mashup to launch the application. 
NOTE: You may need to enable pop-ups in your browser to see the Mashup.
2. Confirm that data is being displayed in each of the sections.
3. Open the MyPump Thing, then click the Properties and Alerts tab.
4. Click Set Value on the line of the serialNumber Property.
5. Enter a value for the serial number, then click the Check-mark button.
6. Click Refresh to confirm the value is changed.
7. Refresh the browser window showing the dashboard to see the new serial number value. (For a programmatic way to verify the same Property values, see the service sketch at the end of this guide.)

Step 7: Next Steps

Congratulations! You've successfully completed the Build an Equipment Dashboard guide, and learned how to:
Use Composer to create a Thing Shape and a Thing Template
Make a Thing using a custom Thing Template
Store Property change history in a Value Stream
Create an application UI with Mashup Builder
Display data from connected devices
Test a sample application

If you'd like to return to previous guides in the learning path, select Connect and Monitor Industrial Plant Equipment.
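As a complement to checking the dashboard in the browser, the same data can be read programmatically. The sketch below is a server-side JavaScript snippet (for example, the body of a new Service on any Thing) that reads the MyPump Thing created in this guide; the warn-level logging is purely illustrative:

// Read the current Property values of MyPump (returned as a single-row InfoTable)
var current = Things["MyPump"].GetPropertyValues();
logger.warn("MyPump serial number: " + current.rows[0].serialNumber);

// Query the history logged to the Value Stream (most recent 10 entries first)
var history = Things["MyPump"].QueryPropertyHistory({
    maxItems: 10,
    oldestFirst: false
});
logger.warn("History entries returned: " + history.rows.length);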
When using Value Streams to log historical data, there's a service to purge the ValueStream entries from the Thing itself. But, what to do when a Thing that once logged values into a ValueStream was deleted? Currently, there's no OOTB way to delete these entries if they're not being used anymore. Currently, I was asked this question and wanted to share this with the entire community. I created a utility application that queries directly the TWX DB for Things that are present in the ValueStream but don't exist anymore and allows a user to purge it.    These services are considering  PostgreSQLServer as persistence provider for the ValueStreams. The services can be modified if you're using SQLServer.  Do not apply for InfluxDB persistence providers   The twxDBConnector thing is based on the PostgreSqlServer template, that is present in the Relational Databases extension. It has 4  main services:   getEntriesToPurge:  Queries the TWX DB for all the entries related to a Thing. It does not consider the ValueStream id, so it will purge all the entries across all value streams. Requires a Thing name as an input; getMissingThings: Queries Things that are present in the ValueStream DB table that are not present in the Things table, meaning that they were deleted; purgeThingEntries: Purges the entries related to a Thing. It does not consider the ValueStream id, so it will purge all the entries across all value streams; purgeAllEntries: Purge all the entries related to Things that were deleted.   The queries can be modified to allow the selection of the value stream to be cleaned.   I also added a sample mashup that leverages the services.       The twxDBConnector has a configuration table that requires the DB Connection string, user and password.   You can also do it directly from the DB using PGAdmin and purge it all.   DELETE FROM value_stream WHERE value_stream.entry_id IN --Queries all entries in the value stream table that belong to an inexistent thing (SELECT entry_id FROM value_stream LEFT JOIN thing_model ON value_stream.source_id = thing_model.name WHERE thing_model.name IS NULL)   Attention: These services are changing directly the TWX DB, so use it carefully.   To use it:   Import the PostgresSQLServer Extension (you might need to change the JAR in the extension depending on the TWX version you're using); Import the entities from the purgeVSEntries.xml Thanks  @dsantos for the help on optimizing the queries.   Hope it helps. Ewerton  
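As an illustrative aside, the query and purge services above could also be chained into a single cleanup routine. The sketch below is hedged: the input parameter name (thingName) and the result column name (source_id) are assumptions based on the SQL shown above and may differ from the attached entities:

// Find every Thing that still has Value Stream rows but no longer exists in thing_model
var missing = me.getMissingThings();

// Purge the orphaned entries one Thing at a time
for (var i = 0; i < missing.rows.length; i++) {
    var row = missing.rows[i];
    // Column and parameter names are assumptions; adjust them to match the imported entities
    me.purgeThingEntries({ thingName: row.source_id });
}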
  Utilizing the ThingWorx monitoring system   Guide Concept   Being able to view your logs is an important part of knowing what is happening in your system. You can't keep things secure if you don't know who is doing what.   These concepts and provide you with ways to find information about what is going on in your application and system.   We will teach you how to access the monitoring panel and help keep your application running the way you need it to.     You'll learn how to   How to log data and capture useful information How to filter and find what you need in the monitoring pages.   NOTE:  The estimated time to complete this guide is 30 minutes     Step 1: Example and Strategy   If you’d like to skip ahead, download the completed example of the Aerospace and Defense learning path: AerospaceEntitiesGuide1.zip. Extract, then import the .twx files included.   In the last guide, we ended with viewing the Monitoring section of ThingWorx. What you’ll now learn is a more detailed method to knowing exactly what is happening in your application. Part of the magic in logging comes from informative statements and how you perform your error handling.   We will create an Entity with a number of services that will allow us to see what is happening in our application. In order to automate these services and view the logs that they generate, we’ll be using Schedulers. When we have everything running, we’ll go over how to utilize the Monitoring section to its strengths.     Step 2: Utilizing Logging Functions   Errors happen. Sooner or later, there will be an error. When you have an error in your application, you need to know about it and have information to help resolve the problem. There are 5 main logging functions under the universal logger object – info, debug, trace, warn, and error.   Function  Description  info Used for high level and general information. debug Used for debugging an application and capturing data. trace Used for low level and tracking what is happening. warn Used to warn for possible errors or problems. error Used for showing an error in a log.   Let’s create a Thing that performs logging for different data types (complex types will be handled in the next section).   1.  In the ThingWorx Composer, click the + New button in the top left.  2. In the dropdown list, click Thing.   3. In the Name field, give our agency name, such as  LoggingServices. 4. In the Base Thing Template field, select GenericThing. 5. Select a Project, such as PTCDefaultProject. Set this as the default context if you wish.   6. Click Save.     Let's start creating our services.   1. Click the Services tab. 2. Click the Add button to create a new JavaScript service.   3. Enter LogInformation in the Name field of the service. 4. Enter the below lines as the code for the service.   logger.trace("LoggingServices.LogInformation(): Entering Service."); logger.info("LoggingServices.LogInformation(): Logging at Information Level"); try { var x = 10; var y = 20; var z = 0; logger.debug("LoggingServices.LogInformation(): Logging Addition - " + (x + y)); logger.debug("LoggingServices.LogInformation(): Logging Division - " + (y / z)); } catch(error) { logger.error("LoggingServices.LogInformation(): Error - " + error); } logger.trace("LoggingServices.LogInformation(): Ending Service."); 5. Click the Save and Continue button. Hit the Execute button.   
When triggered, this service logs the trace statements, the information-level message, and the debug output for the addition and the division. Note that in JavaScript a division by zero evaluates to Infinity rather than throwing, so the error branch only fires when the code actually throws an exception. In the next section, we'll cover the log file types and how to filter to get what we want.

Step 3: Logging Complex Objects

As you know, with JavaScript, objects can be seen as JSON. That being said, the JSON.stringify function is very handy for printing out complex objects in ThingWorx (NOTE: to print out an InfoTable, use <infotable>.toJSON() ). Let's create a new service to log more complex types.

1. Open the LoggingServices Thing that we created in the earlier sections.
2. Click the Services tab.
3. Click the Add button to create a new service.
4. Enter LogComplexObjects in the Name field.
5. Enter the below statements into the code area.

logger.trace("LoggingServices.LogComplexObjects(): Entering Service.");
logger.info("LoggingServices.LogComplexObjects(): Logging at Information Level");
try {
    var x = {};
    x.y = 20;
    x.z = 0;
    // Serialize the whole object so its structure is visible in the log
    logger.debug("LoggingServices.LogComplexObjects(): Logging Object - " + JSON.stringify(x));
} catch (error) {
    logger.error("LoggingServices.LogComplexObjects(): Error - " + error);
}
logger.trace("LoggingServices.LogComplexObjects(): Ending Service.");

After you execute this service, you should be able to see the outcome in the ScriptLog. Try using some of the filtering methods we cover below to find the log statements.

Step 4: Viewing and Filtering Logs

In the list below, you'll see what each of the logs showcases. Keep in mind that while some of these logs are accessible in the ThingWorx Composer, you can view all of them on the server in the /ThingworxStorage/logs directory.

Application Log: Contains all of the messages that ThingWorx logs while operating. Depending on your settings, this log can display errors only or every execution of the platform.
Communication Log: Contains all communication activity with ThingWorx.
Configuration Log: Contains all of the messages that the ThingWorx application generates for any create, modify, and delete done in ThingWorx. For example, if a Thing or Mashup is created, modified, or deleted, that information will be included in the ConfigurationLog.
Database Log: Contains all messages related to database activity.
Script Log: Contains all of the messages that the ThingWorx application generates while executing JavaScript services. You can use logger.warn (or logger.info, logger.trace, logger.debug, logger.error). By default, the log displays warnings and errors, so we recommend using the warn function to log this information from the services you are running. Generally, ThingWorx will only publish errors that were incurred while running a service to this log.
Security Log: Contains all of the messages that the ThingWorx application generates regarding users. This information can include login data and page requests depending on the log level.
Script Error Log: Contains the stack trace for scripts created in the platform and is only available in the ScriptErrorLog.log file. Not accessible in the Composer.
Let’s go to the Monitoring section of the ThingWorx Composer and view the results from the server we triggered earlier.   1.     Click on the Monitoring section on the left-hand side of the Composer.   2.     Click on ScriptLog. Based on your role on this team, this might be the log that you view the most.     3.     You should be able to see or find the logs we recently generated from executing our new service.   The logging views in Composer will generally show you the last 24-hour period. This can be helpful if you know something ran within the last day. Based on the amount of logging your system performs, that can be way too large a window. Let’s start playing with the filters and searches.       # Function  1 The search text box allows you to search for specific words in the log based on your time window. Once you have entered the word, hit the **ENTER/RETURN key on your keyboard. 2 The filter button provides a menu that provides more features to filter the logs. See below. 3 The configure button provides a pop-up that allows you to filter incoming log statements based on a certain logging level. See below. 4 The date range will default to the last 24 hours but can be updated to a wider or smaller window. You will need to click the Apply button when ready. 5 The max rows field provides how many records will come back in the view. The search will continue until it hits this number, or your date range is met. You can shorten this field for faster searching or increase it up to 1000 to showcase more information. You will need to click the Apply button when ready. 6 Apply and Reset buttons. Apply will run the search you currently have on the screen. Reset will reset the date range and max rows ONLY. 7 Auto Refresh allows for the logs to continue rolling in based on your current filters. Think of this as a specified **tail command on a server log. 8 This grid provides the actual logging information. The information provided here can be used to also help filter what you’re looking for. If you see specific thread has the data you need, use the filter menu (2) to filter based on that thread. You can also click on these items to view the log statement clearer. 9 This log message field provides the message from the grid (8) entry clicked on.     The Filter Menu   The filter menu provides very helpful filters. See below for some information on how it works.   # Function 1 This User field allows you to filter the logs to the user that ran the service or function. This is helpful when you know a specific person has logged in and having problems with your application. 2 The Origin field can help when trying to zero in your search. If you notice in the log message grid, there is an origin field. Use this field to help fine tune your search. 3 The Thread field is amazing when you have multiple processes running and would like to see the logs from one process only. You’ll need to find a log entry before you know which thread to filter on. 4 This is similar to the Origin field in the sense that once you find the instance value for a log entry, it can be used here to help filter. 5 The log level range is exactly as the name states. It will help filter message based on logging level.     The Configure Pop-up     The configure pop-up helps with logging going forward. If you only want to see debug level information or you’d like to see everything down to the trace messages, configure it here and you’ll see the change as new messages come in.     Step 5: Next Steps   Congratulations! 
You've successfully completed the Tracking Activities and Stats guide, and learned how to use the ThingWorx Platform to help track what is happening in your application.

Learn More

We recommend the following resources to continue your learning experience:

Capability / Guide
Build: Design Your Data Model
Manage: Data Model Implementation Guide

Additional Resources

If you have questions, issues, or need additional information, refer to:

Resource / Link
Community: Developer Community Forum
Support: REST API Help Center
Sometimes the following error is seen filling up the application log: "HTTP header: referrer". It does not cause noticeable issues, but it does fill up the log. This article has been updated to reflect the workaround: https://www.ptc.com/en/support/article?n=CS223714

To clear the error, locate the validation.properties file inside the ThingworxStorage\esapi directory, then change the values of both Validator.HTTPHeaderValue_cookie and Validator.HTTPHeaderValue_referer to:

Validator.HTTPHeaderValue_cookie= ^.*$
Validator.HTTPHeaderValue_referer= ^.*$
Insight into Performance
- Ensure asset availability by managing asset condition
- Identify potential equipment failure through rule-based alerts
- Faster notification providing more time to respond
- Access to real-time performance data enables time-sensitive decision making
- Review performance history of equipment to improve troubleshooting
- Key indicators/signals of future failures
- View asset information and sensor history to understand context of alert
- View multiple sensor data and trends to understand any correlations across sensors

Diagnose and Understand
- Accelerate diagnostic processes to understand the root cause of issues more quickly, prior to the customer reporting the issue
- Improve interactive diagnostics processes by incorporating connected machine data
- View detailed sensor information for speedy issue identification
- Automate service issue notifications
- Machine-created events to alert of potential problems
- Notifications to service technicians or service dispatch systems about potential problems

Resolve Service Issues
- Increase equipment uptime by remotely resolving equipment issues before they occur, to improve customer retention and reduce service cost
- Leverage remote access to connected equipment
- Dashboard to view alerts and summary across all customers and assets
- Identify common issues across multiple assets
- Prevent unnecessary onsite service visits with remote service capabilities
- Capture logs and diagnostic information for speedy issue resolution
- Remotely access equipment to perform diagnostics and resolve issues through configuration changes
ThingWorx Docker Overview and Pitfalls to Avoid    by Tori Firewind of the IoT EDC Containers are isolated and can run side-by-side on the same machine, but they share the host OS, making them more efficient in terms of memory usage and scalability.   Docker is a great tool for deploying ThingWorx instances because everything is pre-packaged within the Docker image and can be stored in a repository ready for deployment at any time with little configuration required.  By using a different container for every component of an application, conflicting dependencies can be avoided. Containers also facilitate the dev ops process, providing consistent application deployments which can be set up, taken down, and tested automatically using scripts.   Using containers is advantageous for many reasons: simplified configuration, easier dev ops management, continuous integration and deployment, cost savings, decreased delivery time for new application versions, and many versions of an application running side-by-side without any wasted resources setting them up or tearing them down. The ThingWorx Help Center is a great resource for setting up Docker and obtaining the ThingWorx Docker files from the PTC Software Downloads website. The files provided by PTC handle the creation of the image entirely, simplifying the process immensely. All one has to do is place the ThingWorx version and all of the required dependencies in the staging folder, configure the YML file, and run the build scripts. The Help Center has all of the detailed information required, but there are a few things worth noting here about the configuration process.   For one thing, the platform-settings.json file is generated based on the options given in the YML file, so configuration changes made within this configuration file will not persist if the same options aren’t given in the YML file. If using Docker Desktop to run an image on a Windows machine, then the configuration options must be given in an ENV file that can be referenced from the command used to start the image. The names of the configuration parameters differ from the platform-settings.json file in ways that are not always obvious, and a full list can be found here.   For example, if extension imports need to be enabled on a ThingWorx instance running in Docker, then the EXTPKG_IMPORT_POLICY_ENABLED option must be added to the environment section of the YML file like this:     environment: - "CATALINA_OPTS=-Xms2g -Xmx4g" # NOTE: TWX_DATABASE_USERNAME and TWX_DATABASE_PASSWORD for H2 platform must # be set to create the initial database, or connect to a previous instance. - "TWX_DATABASE_USERNAME=dbadmin" - "TWX_DATABASE_PASSWORD=dbadmin" - "EXTPKG_IMPORT_POLICY_ENABLED=true" - "EXTPKG_IMPORT_POLICY_ALLOW_JARRES=true" - "EXTPKG_IMPORT_POLICY_ALLOW_JSRES=true" - "EXTPKG_IMPORT_POLICY_ALLOW_CSSRES=true" - "EXTPKG_IMPORT_POLICY_ALLOW_JSONRES=true" - "EXTPKG_IMPORT_POLICY_ALLOW_WEBAPPRES=true" - "EXTPKG_IMPORT_POLICY_ALLOW_ENTITIES=true" - "EXTPKG_IMPORT_POLICY_ALLOW_EXTENTITIES=true" - "EXTPKG_IMPORT_POLICY_HA_COMPATIBILITY_LEVEL=WARN" - "DOCKER_DEBUG=true" - "THINGWORX_INITIAL_ADMIN_PASSWORD=Pleasechangemenow"   Note that if the container is started and then stopped in order for changes to the YML file to be made, the license file will need to be renamed from "successful_license_capability_response.bin" to "license_capability_response.bin" so that the Foundation server can rename it. 
Failing to rename this file may cause an error to appear in the Application Log, and the server to act as if no license was ever installed: "Error reading license feature info for twx_realtime_data_sub".

In Docker Desktop on a Windows machine, create a file called whatever.env and list the parameters in it (a sketch of such a file appears at the end of this article). Then, reference this environment file when bringing up the machine using the following command in Powershell:

docker run -d --env-file h2.env -p 8080:8080 -v ${pwd}/ThingworxPlatform:/ThingworxPlatform -v ${pwd}/ThingworxStorage:/ThingworxStorage -it <image_id>

Notice in this command that the volumes for the ThingworxPlatform and ThingworxStorage folders are specified with the "-v" options. When building the Docker image in Linux, these are given in the YML file under the volumes section like this (only change the local mount path on the left side of the colon, as the container mount on the right side will never change):

volumes:
  - ./ThingworxPlatform:/ThingworxPlatform
  - ./ThingworxStorage:/ThingworxStorage
  - ./tomcat-logs:/opt/apache-tomcat/logs

Specifying the volumes this way allows the ThingWorx logs and configuration files to be accessed directly, a crucial requirement for debugging any issues within the Foundation instance. These volumes must be mapped to existing folders (which have write permissions, of course) so that if the instance won't come up, or there are any other issues which require help from Tech Support, the logs can be copied out and shared. Otherwise, the Docker container is like a black box which obscures what is really going on. There may not be any errors in the Docker logs; the container may just quit without error, with no sign of why it won't stay up. Checking the ThingWorx and Tomcat logs is necessary for debugging, so be sure to map these volumes correctly.

Once these volumes are mapped and ThingWorx is successfully making use of them, adding a license file to the Docker instance is simple. Use the output in the ThingworxPlatform folder to obtain the device ID, grab a valid license file, and put it right back into that ThingworxPlatform folder, exactly the same way as on a regular instance of ThingWorx. However, if the Docker image is being used for a dev ops process, a license may not be necessary. The ThingWorx instance will work and allow development for a time before the trial license expires, which normally will be enough time for developers to make their changes, push those changes to a repository, and tear the container down.

Another thing worth noting about ThingWorx Docker image creation is that the version of Java supplied in the staging folder must match the compatibility requirements for each version of ThingWorx. This is the version of Java used by the container to run the Foundation server. In ThingWorx 9.2+, this means using the Amazon Corretto version of Java. The image absolutely will not start ThingWorx successfully if older versions of Java are used, even if the scripts do successfully build the image.

Also note that in the newer versions of ThingWorx Docker, the ThingWorx Foundation version within the build.env file is used throughout the Docker image creation process. Therefore, while the archive name can be hard-coded to whatever is desired, the version should be left as is, including any additional specifications beyond just the version number.
For example, the name of the archive can be given as Thingworx-Platform-H2-9.2.0.zip (a prettier version of the archive name than is used by default), but the PLATFORM_VERSION should still be set to 9.2.0-b30 (which should be how it appears within the build.env file upon download of the ThingWorx Docker files).

Paying attention to every note in the Help Center is critically important to using ThingWorx Docker, as the process is extensive and can become very complicated depending on how the image will be used. However, as long as the volumes are specified and the log files accessible, debugging any issues while bringing up a Docker-contained ThingWorx instance is fairly straightforward.

Credits: Images borrowed from ThingWorx Docker Containerization Tech Talk by Adrian Petrescu
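Returning to the Docker Desktop ENV file mentioned above, here is a minimal sketch of what such a file (for example, the h2.env referenced in the Powershell command) might contain. This is an illustration only, assuming the same settings as the YML environment section shown earlier; the parameter names should be verified against the full list in the Help Center.

# Hypothetical h2.env sketch; names mirror the YML environment entries above
CATALINA_OPTS=-Xms2g -Xmx4g
TWX_DATABASE_USERNAME=dbadmin
TWX_DATABASE_PASSWORD=dbadmin
EXTPKG_IMPORT_POLICY_ENABLED=true
THINGWORX_INITIAL_ADMIN_PASSWORD=Pleasechangemenow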
Precision and Recall are the evaluation metrics used to evaluate a machine learning algorithm. This post assumes some prior understanding of the confusion matrix; we recommend going through it here.

Example of Animal Image Recognition

Consider the confusion matrix below for input of animal images, with the algorithm trying to identify each animal correctly (rows are the actual animals, columns are the predicted animals):

ANIMALS    Cat   Dog   Leopard   Tiger   Jaguar   Puma
Cat         62     2         0       0        1      0
Dog          1    50         1       0        4      0
Leopard      0     2        98       4        0      0
Tiger        0     0        10      78        2      0
Jaguar       0     1         8       0       46      0
Puma         2     0         0       1        1     42

Explaining a Few Cells:
- [Cat, Cat]: This cell has the value 62, meaning the image of a cat was identified as a cat 62 times.
- [Cat, Dog]: This cell has the value 2, meaning the image of a cat was identified as a dog twice.
- [Leopard, Tiger]: This cell has the value 4, meaning the image of a leopard was identified as a tiger 4 times.

Questions To Find Some Answers

Q. How many times did our algorithm predict the image to be Tiger?
A. Looking at the Tiger column: 0 + 0 + 4 + 78 + 0 + 1 = 83

Q. What is the probability that Puma will be classified correctly?
A. Looking at the matrix above, we can see that Puma was classified correctly 42 times, but it was classified as Cat twice, as Tiger once, and as Jaguar once. So the probability comes to: 42/(42+2+1+1) = 42/46 = 0.91
This concept is called RECALL. It is the fraction of correctly predicted positives out of all actual positives, so we can say that Recall = (True Positives) / (True Positives + False Negatives)

Q. What is the probability that when our algorithm identifies an image as Cat, it is actually a Cat?
A. Looking at the matrix above, we can see that our algorithm identified a Dog as a Cat once, a Puma as a Cat twice, and a Cat as a Cat 62 times. So the probability comes to: 62/(62+1+2) = 0.95
This concept is called PRECISION. It is the fraction of correctly predicted positives out of all predicted positives, so we can say that Precision = (True Positives) / (True Positives + False Positives)
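To make the arithmetic above concrete, here is a small illustrative sketch (in Python, not part of the original post) that computes per-class recall and precision from the same confusion matrix; rows are the actual classes and columns are the predicted classes, exactly as in the table:

labels = ["Cat", "Dog", "Leopard", "Tiger", "Jaguar", "Puma"]
matrix = [
    [62,  2,  0,  0,  1,  0],   # actual Cat
    [ 1, 50,  1,  0,  4,  0],   # actual Dog
    [ 0,  2, 98,  4,  0,  0],   # actual Leopard
    [ 0,  0, 10, 78,  2,  0],   # actual Tiger
    [ 0,  1,  8,  0, 46,  0],   # actual Jaguar
    [ 2,  0,  0,  1,  1, 42],   # actual Puma
]

for i, label in enumerate(labels):
    true_positives = matrix[i][i]
    actual_total = sum(matrix[i])                    # TP + FN for this class (row sum)
    predicted_total = sum(row[i] for row in matrix)  # TP + FP for this class (column sum)
    recall = true_positives / actual_total
    precision = true_positives / predicted_total
    print(f"{label}: recall = {recall:.2f}, precision = {precision:.2f}")

This reproduces the worked answers above: Puma recall = 42/46 ≈ 0.91 and Cat precision = 62/65 ≈ 0.95.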
Create Your Application Guide UI Part 5

Step 7: Test Your GUI

At this point in the lesson, your Mashup now contains:
- Graphics to represent data
- Data
- Logical connections between the graphical elements and the data

Run Application

Execute the following steps to test that your application works as expected.
1. Click the View Mashup button at the top. You may have to disable your pop-up blocker to load the Mashup in a new browser tab.
2. Wait and observe the Mashup for at least 20 seconds, clicking the Manually Retrieve Counts button regularly.
NOTE: The values in the Mashup will increase roughly every 10 seconds, after you press the button to manually retrieve them. This happens because the Thing you imported earlier is simulating data and the work already done by the Edge and Backend Developers in this scenario. In this simulation, different production lines are continually producing new inventory, and that information is being fed into ThingWorx Foundation. As the count for each production line reaches 100, the batch is shipped out, and the count resets to 0 for the next iteration.
3. Check the Gears Manually Set Checkbox, change the Gears Count Text Field to read 7, and click the Manually Set Counts button. After 10+ seconds have passed, click the Manually Retrieve Counts button again.
NOTE: This simulates a situation where an IoT sensor in the inventory warehouse has failed. Maybe it's over- or under-counting inventory. Maybe there's a network issue. Maybe the sensor has simply stopped working entirely. Regardless, the warehouse floor has contacted a user of your GUI and asked them to manually set the inventory to the correct amount. Performing this step is all that is required to do so. As long as the Gears Manually Set checkbox is active, the inventory does not increase on the 10-second timer. This is to prevent a malfunctioning sensor from providing incorrect data to your GUI.
4. Uncheck the Gears Manually Set Checkbox and click Manually Set Counts.
NOTE: The Gears Count starts increasing again after 10 seconds have passed. Remember to click the Manually Retrieve Counts button to see this change. This represents the previously-malfunctioning IoT sensor having been repaired.

Spend some time interacting with your Mashup GUI to get a better sense of its functionality. For the purposes of this exercise, this is a Minimum Viable Product. After testing it and getting feedback from your users, you would likely make further enhancements.

Collect Feedback

When testing your GUI, it is a best practice to collect feedback from users regarding your design, so that you can improve the experience with your application.
- Feedback: Manually Set Counts button is too easy to accidentally press. Possible resolution: Implement a Confirmation pop-up.
- Feedback: Manually Retrieve Counts button is not ideal; should automate. Possible resolution: Remove the manual button and implement auto-update functionality instead.
- Feedback: Manually Set Checkboxes have confusing names. Possible resolution: Change the text to read Disabled, indicating that the sensor-input is being ignored.
- Feedback: User wants to use your GUI on their smartphone and on a desktop computer. Possible resolution: Re-implement the GUI using a Responsive (rather than Static) Mashup, so that there isn't as much dead space on larger-resolution screens.
- Feedback: User wants a visual indication of each inventory line amount at which the pallet is shipped out. Possible resolution: Instead of using TextBoxes to show the current inventory counts, you could implement them as Gauges, showing a minimum and maximum amount that is easy to see at-a-glance.
Now that we've gotten some feedback, let's implement one such change-request as an example.

Enhance your GUI

Set up GetProperties to automatically update whenever data changes, instead of using a button to manually retrieve the data.
1. Close your MVP Mashup browser-tab and return to the Mashup Builder.
2. Click the button-manual-retrieve Widget to select it.
3. Hit your keyboard's Delete key.
4. Click Yes in the Remove the selected widgets? pop-up window.
5. Click the green </> button beside Things_MBQSThing in the top-right Data tab.
6. In the Services Filter field, type getprop.
7. Click the right-arrow beside GetProperties. Be sure to select GetProperties, not GetPropertyValues.
8. Check the Execute on Load checkbox.
9. Click Done.
10. In the Data tab on the top-right, expand the newly-added GetProperties Service.
11. Drag-and-drop Gears_Count onto textbox-gears-count.
12. On the Select Binding Target popup, click Text.
13. On the Confirm Binding Replace pop-up, click Yes.
14. Repeat steps 11-13 for Pistons_Count and Wheels_Count onto their respective TextBox Widgets.
15. Drag-and-drop Gears_Count_Manually_Set onto checkbox-gears-manual.
16. On the Select Binding Target popup, click State.
17. On the Confirm Binding Replace pop-up, click Yes.
18. Repeat steps 15-17 for Pistons_Count_Manually_Set and Wheels_Count_Manually_Set onto their respective Checkboxes.
19. Click the GetProperties Service to select it.
20. In the bottom-right Data Properties section, check the Automatically update values when able box.
21. Click Save.
22. Click View Mashup.

You have now enhanced your MVP Mashup based on user feedback. The Counts in each TextBox will automatically update whenever there is a change in the Property values, as requested. ThingWorx Foundation provides a platform that allows you to quickly create a GUI for your IoT application in an extremely flexible and agile manner. The options to continue to improve your GUI are entirely up to you.

Step 8: Next Steps

Congratulations! You've successfully completed the Create Your Application UI guide, and learned how to:
- Create new Mashups
- Choose a Static or Responsive layout
- Add Widgets to your Mashup
- Bind data Services to Widgets in your Mashup
- Create a functional GUI with applied usage of Widgets and Services

The next guide in the Customize UI and Display Options to Deploy Applications learning path is Basic Mashup Widgets.

Learn More

We recommend the following resources to continue your learning experience:
- Capability: Build. Guide: Data Model Introduction

Additional Resources

If you have questions, issues, or need additional information, refer to:
- Community: Developer Community Forum
- Support: Mashup Builder Support Help Center
I know most of us very happily use the Administrator account in ThingWorx; however, this is very bad practice for development and even administration of the platform! Administrator is there by default and should be used to set up your initial users, which should include your actual Platform Administrator (with a strong password, of course). After that, change the Administrator password and remove it from the Administrators group. I recommend this as a best practice even in your own development environments, but especially in runtime. Your very first steps would look like:
1. Install ThingWorx
2. Log in as Administrator
3. Set up the new Platform Administrator account
4. Remove Administrator from the Administrators group
5. Change the Administrator password
This is an example showing how to return an uploaded file as binary content from a Groovy script.

Parameters:
- model_name
- serial_number

import java.io.StringWriter
import java.io.PrintWriter
import com.axeda.drm.sdk.Context
import com.axeda.drm.sdk.data.*
import com.axeda.drm.sdk.device.*
import javax.activation.MimetypesFileTypeMap

try {
    Context ctx = Context.getUserContext()
    ModelFinder modelFinder = new ModelFinder(ctx)
    modelFinder.setName(parameters.model_name)
    Model model = modelFinder.find()
    DeviceFinder dfinder = new DeviceFinder(ctx)
    dfinder.setModel(model)
    dfinder.setSerialNumber(parameters.serial_number)
    Device d = dfinder.find()
    UploadedFileFinder uff = new UploadedFileFinder(ctx)
    uff.device = d
    def ufiles = uff.findAll()
    UploadedFile ufile
    if (ufiles.size() > 0) {
        ufile = ufiles[0]
        File f = ufile.extractFile()
        def type = getType(f)
        return ['Content-Type': type, 'Content': new FileInputStream(f)]
    } else {
        return ['Content-Type': 'text/plain', 'Content': 'No files have been uploaded']
    }
} catch (Exception e) {
    logger.info(e.message)
    StringWriter logStringWriter = new StringWriter()
    PrintWriter logPrintWriter = new PrintWriter(logStringWriter)
    e.printStackTrace(logPrintWriter)
    logger.info(logStringWriter.toString())
}

static String getType(File f) {
    MimetypesFileTypeMap mimeTypesMap = new MimetypesFileTypeMap()
    return mimeTypesMap.getContentType(f)
}
Hello!   “ThingWorx on Air” Episode 9 is now available! Grab your headphones and listen to Neal, an Azure subject matter expert, and Janie, a PM focused on Azure functionality, introduce a new integration we’re working on between Kepware, Azure IoT Edge, Azure IoT Hub and ThingWorx.   Discover how you’ll be able to leverage and model OPC UA data directly in the Thing Model, how you’ll be able to connect just about any OPC UA device through the Azure stack and to the Cloud, and so much more. We can’t wait to continue to extend ThingWorx functionality to support industry standards like the OPC UA protocols.   Are you excited? Wish you could get your hands on this functionality early? You can! Reach out to Janie at jpascoe@ptc.com to learn how you can become involved in an exclusive OPC UA & ThingWorx preview program.   Enjoy the episode and let me know what you think below!   Stay connected, Kaya
One of the issues we encountered recently was that we could not establish a VNC remote session. The edge device was located outside the internal network where Tomcat was hosted, and all access to the instance was through an Apache reverse proxy. The EMS was able to connect successfully to the server because Apache had the WebSocket forwarding correctly set up through the following directive:

ProxyPass "/Thingworx/WS/"  "wss://192.168.0.2/Thingworx/WS"

However, we saw that tunnels closed immediately after creation and, as a result (or so we thought), we could not connect from the HTML5 VNC viewer. More diagnostics revealed that you need ProxyPass directives for the following endpoints as well (a sketch of the additional directives appears below):
- The EMS makes calls to another WS endpoint, called WSTunnelServer. Once this is set up, the EMS is able to create tunnels to the server.
- The HTML5 VNC page makes a WebSocket call to yet another WS endpoint, called WSTunnelClient. Only after this step can you successfully use tunnels through a reverse proxy.
Hope it helps!
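As an illustration only (not from the original post), the two additional directives might look like the following sketch, which simply mirrors the existing /Thingworx/WS rule; the backend host 192.168.0.2 comes from that example and should be replaced with your own, and the exact endpoint paths should be verified for your ThingWorx version:

# Hypothetical additions mirroring the /Thingworx/WS rule above
ProxyPass "/Thingworx/WSTunnelServer/" "wss://192.168.0.2/Thingworx/WSTunnelServer/"
ProxyPass "/Thingworx/WSTunnelClient/" "wss://192.168.0.2/Thingworx/WSTunnelClient/"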