IoT Tips

This blog post provides information on the technical changes in ThingWorx 8.0: New Technical Changes in ThingWorx 8.0.0. Here are some common questions and answers regarding the licensing change:

Does this mean all the extensions in the marketplace will no longer be free?
It depends on the extension. The main extensions we are licensing for 8.0 are Navigate, Manufacturing and Utilities. We are not licensing the MailExtension on the marketplace, for example. Partners and customers can still import their custom apps/extensions.

If ThingWorx connects to a remote platform (RP) that has its own subscription-based Flexera license (InService, for example), how does license validation work for this interaction? Does a server-to-server connection count as a user logging in directly to a PTC product?
License files are per ThingWorx instance. For remote platforms, each would have its own license files. User counts (if entitled and enforced) are generic to each system.
View full tip
This article (https://support.ptc.com/appserver/cs/view/solution.jsp?n=CS264270) and the blog post New Technical Changes in ThingWorx 8.0.0 provide information on the technical changes in ThingWorx 8.0. Here are some common questions and answers regarding the change:

Is there any way that one could lose access to the application key keystore (restoring a backup, etc.)?
Yes, it is possible to lose the keystore. If someone simply deletes or renames it, the existing application keys can no longer be decrypted and are therefore unusable.

Is there a way to back up the keystore and any necessary secrets so that it can be recovered?
Yes, but this would be handled at the file system level, not inside ThingWorx. We have some best practices documented for managing the keystore.

For developers who configure Edge applications and Connectors to connect to the platform, where do they copy the app key from? Are they copying the encrypted version of the app key?
They would do the same as before. We are not encrypting the app key in the UI, only on disk.

Is there any way to get a summary of the app keys that are being used via URL?
We do have some requirements for highlighting "repeat offenders". In 8.0, one can query the logs to find those app keys. We don't log the app key id, only the app key name.
View full tip
Let us consider that we have two properties, Property1 and Property2, on a Thing named Thing1 that we want to update using the UpdatePropertyValues service. In our service we will use the system-defined DataShape NamedVTQ, which has the following field definitions. In Thing1, create a custom service like the following:

var params1 = {
    infoTableName: "InfoTable",
    dataShapeName: "NamedVTQ"
};

// CreateInfoTableFromDataShape(infoTableName:STRING("InfoTable"), dataShapeName:STRING):INFOTABLE(NamedVTQ)
var InputInfoTable = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params1);

var now = new Date();

// Thing1 entry object
var newEntry = new Object();
newEntry.time = now;             // DATETIME - isPrimaryKey = true
newEntry.quality = undefined;    // STRING
newEntry.name = "Property1";     // STRING
newEntry.value = "Value1";       // STRING
InputInfoTable.AddRow(newEntry);

newEntry.name = "Property2";     // STRING
newEntry.value = "Value2";       // STRING
InputInfoTable.AddRow(newEntry);

var params2 = {
    values: InputInfoTable /* INFOTABLE */
};

me.UpdatePropertyValues(params2);
View full tip
This is the simplest structure for looping through an InfoTable. It is simple, but it avoids having to use row indexes and cleans up the code for readability as well.

// Assume an incoming InfoTable parameter named "thingList"
for each (row in thingList.rows) {
    // Each row is already assigned to the row variable in the loop
    var thingName = row.name;
}

You can also nest these loops (just use a different variable from "row"); see the sketch below. It is also important not to add or remove rows of the InfoTable inside the loop; if you do, you may end up skipping or repeating rows, since the indexes will have changed.
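As a minimal illustration of the nesting note above (a hedged sketch, not part of the original tip: the second InfoTable parameter "propertyList" and the "name" columns are assumptions):

// Outer loop over the "thingList" InfoTable
for each (row in thingList.rows) {
    var thingName = row.name;
    // Inner loop uses a different variable name ("propRow") so it does not clobber "row"
    for each (propRow in propertyList.rows) {
        logger.info("Thing " + thingName + " / property " + propRow.name);
    }
}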
View full tip
Recently, mentor.axeda.com was retired. The content has not yet been fully migrated to Thingworx Community, though a plan is in place to do this over the coming weeks. Attached, please find OneNote 2016 and PDF attachments that contain the content that was previously available on the Axeda Mentor website.
View full tip
ThingWorx provides a library of InfoTable functions, one of the most powerful being DeriveFields (besides that, I use Aggregate and Query a lot, and... GetRowCount). DeriveFields can add additional columns to your InfoTable and fill them with values that can be derived from nearly anything: hard-coded values, a service you call, a property value, or other values within the InfoTable you are adding the column to. Just remember that for this service (as well as Aggregate) there must be no spaces between the different column definitions, and a , (comma) is used as the separator. Here are two powerful examples:

// Calling another function using DeriveFields
// Note that the value thingTemplate is the actual value in the row of column thingTemplate!
var params = {
    types: "STRING" /* STRING */,
    t: AllItems /* INFOTABLE */,
    columns: "BaseTemplate" /* STRING */,
    expressions: "Things['PTC.RemoteMonitoring.GeneralServices'].RetrieveBaseTemplate({ThingTemplateName:thingTemplate})" /* STRING */
};
// result: INFOTABLE
var AllItemsWithBase = Resources["InfoTableFunctions"].DeriveFields(params);

// Getting values from other Properties
// "to" in this case is the value of the row in the column "to"
// Note the use of , and no spaces
// NOTE: You can make this even more generic with something like Things[to][propName]
var params = {
    types: "NUMBER,STRING,STRING,LOCATION" /* STRING */,
    t: AllAssets /* INFOTABLE */,
    columns: "Status,StatusLabel,Description,AssetLocation" /* STRING */,
    expressions: "Things[to].Status,Things[to].StatusLabel,Things[to].description,Things[to].AssetLocation" /* STRING */
};
// result: INFOTABLE
var AllAssetsWithStatus = Resources["InfoTableFunctions"].DeriveFields(params);
View full tip
The following is a set of custom objects that will trigger from an Expression Rule and cause a file uploaded by a remote agent to be sent on to the ThingWorx platform instance of your choice once completed. The expression rule should be configured as follows:

Type: File
IF: 1 == 1
THEN: ExecuteCustomObject("SendUploadedFiles")

SendUploadedFiles.groovy:

import com.axeda.drm.sdk.data.UploadedFile
import static com.axeda.sdk.v2.dsl.Bridges.*

logger.info("Executing AsyncExecutor")

// Spawn an async thread for each upload
compressedFile.getFiles().each { UploadedFile upFile ->
    // last parameter is a timeout for the execution measured in seconds (200 minutes)
    customObjectBridge.initiateAsync("SendToThingWorx",
                                     [
                                         fileID: upFile.id,
                                         hint: parameters.hint,
                                         deviceID: context.device.id
                                     ], 200 * 60)
}

SendToThingWorx.groovy:

import static com.axeda.sdk.v2.dsl.Bridges.*
import com.axeda.services.v2.*
import com.axeda.sdk.v2.exception.*
import groovyx.net.http.HTTPBuilder
import static groovyx.net.http.ContentType.*
import static groovyx.net.http.Method.*
import com.axeda.drm.sdk.data.*  // UploadedFileFinder and related classes
import com.axeda.drm.sdk.Context
import com.axeda.common.sdk.id.Identifier
import org.apache.commons.codec.binary.Base64

def retStr = "this is a sample groovy script. your username: ${parameters.username}\n"
def context = Context.getAdminContext()
def thingName = 'ExampleThing'
def thingworxHost = 'sample.cloud.thingworx.com'
def twxApplicationKey = '00000000-0000-0000-0000-00000000000'
def fileid = parameters.fileID

def finder = new UploadedFileFinder(context)
finder.setId( new Identifier( fileid.toLong()) )
UploadedFile uf = finder.find()
def is = new FileInputStream( uf.extractFile() )

retStr += "UF: ${uf.name} ${uf.fileSize} ${uf.actualFileName}\n"
logger.info "UF: ${uf.name} ${uf.fileSize} ${uf.actualFileName}\n"

def bOut = new ByteArrayOutputStream()
int b = 0
int count = 0
while ( (b = is.read()) != -1 ) {
    count++
    bOut.write(b)
}
is?.close()

byte[] bRes = bOut.toByteArray()
logger.info "Length: ${bRes.length}"
retStr += "Count: ${count}  Length: ${bRes.length}\n"

def b64 = new Base64()
def outputStr = b64.encodeBase64String(bRes)
retStr += "Length of base64 string: ${outputStr.length()}\n"
logger.info "Length of base64 string: ${outputStr.length()}\n"
logger.info "==========================================="
logger.info outputStr
logger.info "==========================================="

def http = new HTTPBuilder("https://${thingworxHost}")
http.request(POST, JSON) {
    uri.path = "/Thingworx/Things/${thingName}/Services/SaveBinary"
    body = [path: uf.name, content: outputStr]
    headers = [appKey: twxApplicationKey,
               Accept: 'application/json',
               "content-type": 'application/json']
    response.success = { resp ->
        println "POST response status: ${resp.statusLine}"
        logger.info "POST RESPONSE status: ${resp.statusLine}"
    }
    response.failure = { resp ->
        logger.info "RequestMessage: ${resp.statusLine}"
        logger.info "Request failed: ${resp.status}"
    }
}

return retStr
View full tip
Sometimes M2M Assets should poll the platform on demand, for example to avoid excessive data charges from chatty assets. A mechanism was developed that instructs the Asset to contact (poll) the platform for actions that the Asset needs to act on, such as File Uploads, Set DataItem, etc. The Shoulder Tap SMS message is the platform's way of contacting the Asset - tapping it on the shoulder to let it know there is a message waiting for it. The Asset responds by polling for the waiting message. This implementation in the platform provides a way to configure the Model Profile that is responsible for sending an SMS Shoulder Tap message to an M2M Asset. The Model Profile contains model-wide instructions for how and when a Shoulder Tap message should be sent.

How does it work?
The M2M Asset is set not to poll the Axeda Platform for a long period, but the Enterprise user has some actions that the Asset needs to act upon, such as FOTA (Firmware Over-the-Air):
1. A software package is deployed to the M2M Asset from the Axeda Platform and put into the egress queue.
2. The Shoulder Tap mechanism executes a Custom Object that sends a message to the Asset through a delivery method like SMS, UDP, etc.
3. The Asset's SMS (or other) handler receives the message, and the Asset then sends a POLL to the Platform and acts upon the action in the egress queue.

How do you make Shoulder Tap work for your M2M Assets?
The first step is to create a Model Profile; the Model Profile tells Assets of this model how to communicate. For example, if the Model Profile is Shoulder Tap, then the mechanism used to communicate with the Asset will imply Shoulder Tap. Execute the attached custom object, createSMSModelProfile.groovy, and it will create a Model Profile named "SMSModelProfile". When you create a new Model, you will see "SMSModelProfile" appear in the Communication Profile dropdown list.

The next step is to create the Custom Object Transport script, which is responsible for sending out the SMS or other method of communication to the Asset. In this example the custom object is named SMSCustomObject. The contents of this custom object are outside the scope of this article, but it could make REST API calls to Twilio, Jasper or a wireless provider's REST APIs to communicate with the remote device using an SMS message. It could also use the IntegrationPublisher API to send a JMS message to a server the customer controls, which could then talk directly to custom libraries that are not directly compatible with the Axeda Platform.

Once the Shoulder Tap scripting has been tested and is working correctly, you can send a Shoulder Tap to the Asset directly from an action or through an Extended UI Module, as shown below:

import com.axeda.platform.sdk.v1.services.ServiceFactory

final ServiceFactory sFact = new ServiceFactory()
def assetId = (Long) parameters.get("assetId")
// use the ServiceFactory instance created above to obtain the Shoulder Tap service
def stapService = sFact.getShoulderTapService()
stapService.sendShoulderTap( assetId )

See Extending the Axeda Platform UI - Custom Tabs and Modules for more about creating and configuring Extended UI Modules.

What about retries?
maxRetryCount - This built-in attribute's value defines the number of times the platform will retry sending the Shoulder Tap message before it gives up.
retryInterval - The retry interval that is used if the message delivery needs to be retried.
Retry count and interval are configured in the Model Profile Custom Object like so:

final DeliveryMethodDescriptor dmd = new DeliveryMethodDescriptor();
dmd.setMaxRetryCount(2);
dmd.setRetryInterval(60);
View full tip
Objective
Learn how the Scripto Web Service helps you present platform information in your HTML with JavaScript and dynamic page updating. After this tutorial, you will know how to create Axeda Custom Objects that return formatted results to JavaScript using XMLHttpRequest, and how a very simple page can incorporate platform data into your browser-based user interface.

Part 1 - Simple Scripto
In Using Scripto, you learned how Scripto can be called from very simple clients, even the most basic HTTP tools. This tutorial builds on the examples in that tutorial. The following HelloWorld script accepts a parameter named "foo". This means that the caller of the script may supply a value for this parameter, and the script simply returns a message that includes the value supplied.

import static com.axeda.sdk.v2.dsl.Bridges.*
import com.axeda.services.v2.*
import com.axeda.sdk.v2.exception.*

return "Hello world, ${parameters.foo}"

In the first part of this tutorial, we'll create an HTML page with some JavaScript that simply calls the HelloWorld script and puts the result on the page.

Create an HTML File
Open up your favorite text editor and create a blank document. Paste in this simple scaffold, which includes a very simple FORM with fields for your developer platform email and password, and the "foo" parameter.

<html>
<head>
<title>Axeda Developer Connection Simple Ajax HelloWorld Example</title>
</head>
<body>
<form name="f1">
    Platform email (login): <input name="username" type="text"><br/>
    Password: <input name="password" type="password"><br/>
    foo: <input name="foo" type="text"><br/>
    <input value="Go" type="button" onclick='JavaScript: callScripto()'/>
    <div id="result"></div>
</form>
</body>
</html>

Pretty basic HTML that you've seen lots of times. Notice that the form onclick refers to a JavaScript function. We'll be adding that next.

Add the JavaScript
Directly under the <title> tag, add the following:

<script language="JavaScript">
var scriptoURL = "http://dev6.axeda.com/services/v1/rest/Scripto/execute/";
var scriptName = "HelloWorld2";
</script>

This defines our JavaScript block and a couple of constants that tell our script where the server's Scripto REST endpoint is and the name of the script we will be running. Let's add in our callScripto() function. Paste the following directly under the scriptName variable declaration:

function callScripto() {
    try {
        netscape.security.PrivilegeManager.enablePrivilege("UniversalBrowserRead");
    } catch (e) {
        // must be IE
    }

    var xmlHttpReq = false;
    var self = this;

    // Mozilla/Safari
    if (window.XMLHttpRequest) {
        self.xmlHttpReq = new XMLHttpRequest();
    }
    // IE
    else if (window.ActiveXObject) {
        self.xmlHttpReq = new ActiveXObject("Microsoft.XMLHTTP");
    }

    var form = document.forms['f1'];
    var username = form.username.value;
    var password = form.password.value;
    var url = scriptoURL + scriptName + "?username=" + username + "&password=" + password;

    self.xmlHttpReq.open('POST', url, true);
    self.xmlHttpReq.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
    self.xmlHttpReq.onreadystatechange = function() {
        if (self.xmlHttpReq.readyState == 4) {
            updatepage(self.xmlHttpReq.responseText);
        }
    };

    var foo = form.foo.value;
    var qstr = 'foo=' + escape(foo);
    self.xmlHttpReq.send(qstr);
}

That was a lot to process in one chunk, so let's examine each piece.
This piece just tells the browser that we'll be making some Ajax calls to a remote server. We'll be running the example right off a local file system (at first), so this is necessary to ask for permission.

try {
    netscape.security.PrivilegeManager.enablePrivilege("UniversalBrowserRead");
} catch (e) {
    // must be IE
}

This part creates an XMLHttpRequest object, which is a standard object available in browsers via JavaScript. Because of slight browser differences, this code creates the correct object based on the browser type.

var xmlHttpReq = false;
var self = this;

// Mozilla/Safari
if (window.XMLHttpRequest) {
    self.xmlHttpReq = new XMLHttpRequest();
}
// IE
else if (window.ActiveXObject) {
    self.xmlHttpReq = new ActiveXObject("Microsoft.XMLHTTP");
}

Next we create the URL that will be used to make the HTTP call. This simply combines our scriptoURL, scriptName, and platform credentials.

var form = document.forms['f1'];
var username = form.username.value;
var password = form.password.value;
var url = scriptoURL + scriptName + "?username=" + username + "&password=" + password;

Now let's tell the xmlHttpReq object what we want from it. We'll also reference the name of another JavaScript function, which will be invoked when the operation completes.

self.xmlHttpReq.open('POST', url, true);
self.xmlHttpReq.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
self.xmlHttpReq.onreadystatechange = function() {
    if (self.xmlHttpReq.readyState == 4) {
        updatepage(self.xmlHttpReq.responseText);
    }
};

Finally, for this function, we'll grab the "foo" parameter from the form and tell the prepped xmlHttpReq object to post it.

var foo = form.foo.value;
var qstr = 'foo=' + escape(foo);
self.xmlHttpReq.send(qstr);

Almost done. We just need to supply the updatepage function that we referenced above. Add this code directly before the </script> close tag:

function updatepage(str) {
    document.getElementById("result").innerHTML = str;
}

Try it out
Save your file as helloworld.html and open it in a browser by starting your browser and choosing "Open File". You can also download a zip with the file prepared for you at the end of this page. If you are using Internet Explorer, IE will show a bar asking whether it is OK for the script inside this page to execute; choose "Allow Blocked Content". Type in your platform email address (the address you registered for the developer connection with) and your password. Enter any text that you like for "foo". When you click "Go", the area below the button will display the result of the Scripto call. Note that if you are using Mozilla Firefox, you will be warned about the script wanting to access a remote server. Click "Allow".

Congratulations! You have learned how to call a Custom Object-backed Scripto service to display dynamic platform content inside a very simple HTML page.

Next Steps
Be sure to check out the tutorial on Hosting Custom Applications to learn how you can make this page get served directly from your platform account, with its very own URL. Also explore code samples that show more sophisticated HTML+AJAX examples using Google Charts and other presentation tools.
View full tip
Expression rules are the heart of the Axeda Platform's processing capability. These rules have an If-Then-Else structure that's easy to create and understand. We think of them like a formula in a spreadsheet. For example, say your asset has a dataitem reading for temperature:

IF: temperature > 80
THEN: CreateAlarm("High Temp", 100)

This rule compares the temperature to 80 every time a reading is received. When the condition is true, the rule creates an alarm with the name "High Temp" and severity 100. Dataitems represent readings from an asset. They are typically sensors or monitoring parameters in an application. But you can also think of dataitems as variables. The rule can be changed to

IF: temperature > threshold

so that each asset has its own threshold that can be adjusted independently.

Look at the complete list of Expression Rule:
- triggers - the events that trigger a rule to run
- variables - the information you can access in an expression
- functions - the functions that can be used within an expression
- actions - these are called in the Then or Else part of an expression to make something happen

A rule can calculate a new value. For example, if you wanted to know the maximum temperature:

IF: temperature > maxTemperature
THEN: SetDataItem("maxTemperature", temperature)

To convert a temperature in Celsius to Fahrenheit:

IF: temperature
THEN: SetDataItem("tempF", temperature*9/5 + 32)

The If simply names the variable, so any change to that variable triggers the rule to run. There may be lots of other dataitems reported for an asset, and changes to the other dataitems should not recalculate the temperature. When rules should run only when an asset is in a particular mode or state, or when there is a complex sequence to model, read about how State Machines come to the rescue.

Creating and Testing an Expression Rule
We're going to create a simple Expression Rule and show it running in a few steps. Above, you saw a rule that created an alarm when temperature > 80. Now we will make one that converts a temperature in Celsius to one in Fahrenheit. An Expression Rule consists of a few things:
- Name
- Description - an optional field to describe the rule
- Trigger - what makes this rule run? The trigger tells the rule whether it applies to Alarms, Data, Files, or many others.
- If - the logic expression for the condition to evaluate
- Then - the logic to run if the condition is true
- Else - the logic to run if the condition is false

To begin, log into an Axeda Platform instance.
1. Navigate to the Manage tab.
2. Select New, then Expression Rule.
3. Enter this Expression Rule information:
   Name: TempConvert
   Type: Data
   Description: (optional)
   Enabled: leave checked
   If: TempC
   Then: SetDataItem("TempF", TempC*9/5 + 32)
   If you click on functions or scroll down to actions in the Expression Tree, you will see a description in Details.
4. Click the Apply to Asset button to select models and specific assets to apply this rule to.

Now that you have an Expression Rule, let's try it.

Testing the Expression Rule (NEEDS UPDATING)
You can test the expression rule by simulating the TempC data using the Axeda Simulator, as instructed below. Or you can use the Expression Rules Debugger to simulate the reading and display the results. For information about using the Expression Rules Debugger, see the Expression Rules Debugger documentation in the on-line Help system.

Simulate a TempC reading
Launch the Axeda Simulator. The Axeda Simulator will launch in a new browser window. Enter your registered email address and Developer Connection password, and click Login.
Select asset1 from the Asset dropdown. Under the Data tab, enter the dataitem name TempC and a value like 28, then click Send.

To see the exciting result, go back to the Platform window and navigate to the Service tab, and you should see that 28C = 82.4F. You created an Expression Rule that triggers when a value of TempC is received and creates a new dataitem TempF with a calculated value. This rule applies to your model, but if you had many models of assets, it could apply to as many as you want. You could change the rule to do the conversion only If: TempC > 9 and simulate inputs to see that this is the new behavior.

Further Reading
Read about how Rule Timers can trigger rules to run on a scheduled basis. (TODO)
View full tip
This example shows how a file can be retrieved via Scripto and then displayed on a Web page. The precondition is that an asset has an uploaded file. This script assumes the file is there and that it is not extremely large (under 1 megabyte). This example uses base64 encoding to convert the file into a string. Future versions of Scripto will support other data streams so that base64 encoding will not be necessary.

import com.axeda.drm.sdk.Context
import com.axeda.common.sdk.id.Identifier   // required for new Identifier(...) below
import com.axeda.drm.sdk.data.UploadedFile
import com.axeda.drm.sdk.data.UploadedFileFinder
import com.axeda.drm.sdk.device.Device
import com.axeda.drm.sdk.device.DeviceFinder

// This script requires parameter "id"
Context ctx = Context.create(parameters.username);

def response = ''
try {
    DeviceFinder deviceFinder = new DeviceFinder(ctx, new Identifier(parameters.id as Integer));
    Device device = deviceFinder.find();

    UploadedFileFinder uff = new UploadedFileFinder(ctx)
    uff.device = device
    uff.hint = 'photo'
    def ufiles = uff.findAll()

    UploadedFile ufile
    if (ufiles.size() > 0) {
        ufile = ufiles[0]
        File f = ufile.extractFile()
        response = getBytes(f).encodeBase64(false).toString()
    }
} catch (Exception e) {
    logger.info(e.message);
    response = [
            faultcode: 'Groovy Exception',
            faultstring: e.message
    ];
}

return ['Content-Type': 'data:image/png;base64', 'Content': response];

static byte[] getBytes(File file) throws IOException {
    return getBytes(new FileInputStream(file));
}

static byte[] getBytes(InputStream is) throws IOException {
    ByteArrayOutputStream answer = new ByteArrayOutputStream();
    // reading the content of the file within a byte buffer
    byte[] byteBuffer = new byte[8192];
    int nbByteRead /* = 0*/;
    try {
        while ((nbByteRead = is.read(byteBuffer)) != -1) {
            // appends buffer
            answer.write(byteBuffer, 0, nbByteRead);
        }
    } finally {
        is.close()
    }
    return answer.toByteArray();
}
View full tip
This Groovy script creates an XML output of the audit log filtered by the User Access category, i.e. the dates when users logged in or logged out.

Parameter: days - number of days to search

import com.axeda.drm.sdk.device.ModelFinder
import com.axeda.drm.sdk.Context
import com.axeda.common.sdk.id.Identifier
import com.axeda.drm.sdk.device.Model
import com.axeda.drm.sdk.device.DeviceFinder
import com.axeda.drm.sdk.device.Device
import com.axeda.drm.sdk.audit.AuditCategoryList
import com.axeda.drm.sdk.audit.AuditCategory
import com.axeda.drm.sdk.audit.AuditEntryFinder
import com.axeda.drm.sdk.audit.SortType
import com.axeda.drm.sdk.audit.AuditEntry
import groovy.xml.MarkupBuilder

/*
 * AuditEntryList.groovy
 *
 * Creates an xml output of the audit log filtered by the User Access category,
 * so dates of when users logged in or logged out.
 *
 * @param days (REQ): Str number of days to search.
 *
 * @author Sara Streeter <sstreeter@axeda.com>
 */

def writer = new StringWriter()
def xml = new MarkupBuilder(writer)

try {
    def ctx = Context.getUserContext()
    ModelFinder modelFinder = new ModelFinder(ctx, new Identifier(1))
    Model model = modelFinder.find()
    DeviceFinder deviceFinder = new DeviceFinder(ctx, new Identifier(1))
    Device device = deviceFinder.find()

    AuditCategoryList acl = new AuditCategoryList()
    acl.add(AuditCategory.USER_ACCESS)

    long now = System.currentTimeMillis()
    Date today = new Date(now)
    def paramdays = parameters.days ? parameters.days : 5
    long days = 1000 * 60 * 60 * 24 * Integer.valueOf(paramdays)

    AuditEntryFinder aef = new AuditEntryFinder(ctx)
    aef.setCategories(acl)
    aef.setToDate(today)
    aef.setFromDate(new Date(now - (days)))
    aef.setSortType(SortType.DATE)
    aef.sortDescending()
    List<AuditEntry> audits = aef.findAll()

    // assemble the response
    xml.Response() {
        audits.each { AuditEntry audit ->
            Audit() {
                id(audit?.id.value)
                user(audit?.user?.username)
                date(audit?.date)
                category(audit?.category?.bundleKey)
                message(audit?.message)
            }
        }
    }
} catch (def ex) {
    xml.Response() {
        Fault {
            Code('Groovy Exception')
            Message(ex.getMessage())
            StringWriter sw = new StringWriter();
            PrintWriter pw = new PrintWriter(sw);
            ex.printStackTrace(pw);
            Detail(sw.toString())
        }
    }
}

return ['Content-Type': 'text/xml', 'Content': writer.toString()]
View full tip
Recently I have been working with an integration partner and end customer on an issue with ThingWorx resource exhaustion. Early on it seemed like this was an issue with the ThingWorx Azure IoT Hub Connector, as it would freeze up and become unresponsive. Following a root cause analysis it became clear that it was actually caused by the lack of a number of standard cloud design patterns which, if used, would have automatically adapted the operation of the overall solution to be far more resilient as well as resource optimized.

The way the logic was structured, it prioritized job execution on entities with the oldest last success time and would continue to retry these executions (IoT Direct Methods) every few seconds until successful. There were a number of problems here, but I'll unpack a few in order to tie the problem to the solution via design patterns.

1) No exception handling. When the direct method execution failed, timed out, or the system reported being unable to execute the remote service, this response was not used to adapt the solution's behavior.
2) No backoff retry mechanism. As exceptions were not caught, an adaptive retry mechanism with incremental or exponential backoff could not be leveraged to limit the impact of the build-up of failing retries.
3) No exception tracking. Tracking and counting exceptions would allow powering an exponential backoff retry algorithm (with jitter), a Cancel or Circuit Breaker pattern (stop doing something which is just broken), as well as provide alerting to address the specific areas of the distributed solution experiencing issues.
4) Conflicting priorities. It was interesting to see the manifestation of the conflicting interests of wanting to ensure checks and balances (having all needed data) and system resiliency. Retries and resource usage built up exponentially due to the transient error instead of being backed off. Trying so hard to get the needed data from failing sensors meant that operational sensors were deprioritized and their data was not received either, spreading the localized issue to the whole system.

Around the time that I shared my recommendations and some examples of how to make the solution more resilient, one of my technical colleagues at Microsoft shared some extremely interesting and relevant design patterns documented by Microsoft as part of the "Microsoft Azure Well-Architected Framework". This framework, with design patterns for specific cloud application goals, allows applying well-known, industry-standard approaches to dealing with the challenges of large-scale distributed enterprise systems (reliability, performance, cost optimization). She later shared a blog post describing exactly the exponential backoff retry with jitter pattern which we had together recommended to the systems integrator.

What's interesting for us ThingWorx people is that this framework from Microsoft is about well-architected cloud solutions and does not specifically reference the Azure stack, so many of these approaches and design practices can be employed in your ThingWorx applications. What are you waiting for? Go check them out!
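To make the backoff-with-jitter idea concrete, here is a minimal ThingWorx service sketch (not from the original post; the remote call InvokeRemoteDirectMethod and the Thing properties failureCount and nextRetryTime are assumptions chosen for illustration):

// Minimal sketch: exponential backoff with jitter around a retried remote call.
// Assumes NUMBER property "failureCount" and DATETIME property "nextRetryTime" on this Thing.
var baseDelayMs = 1000;           // first retry after roughly 1 second
var maxDelayMs = 15 * 60 * 1000;  // cap the backoff at 15 minutes

try {
    // Hypothetical remote invocation that may time out or fail
    me.InvokeRemoteDirectMethod();
    me.failureCount = 0;                  // success: reset the backoff state
    me.nextRetryTime = new Date();
} catch (err) {
    me.failureCount = me.failureCount + 1;
    // Exponential backoff: base * 2^failures, capped at maxDelayMs
    var delay = Math.min(baseDelayMs * Math.pow(2, me.failureCount), maxDelayMs);
    // Full jitter: pick a random delay between 0 and the computed backoff
    var jitteredDelay = Math.random() * delay;
    me.nextRetryTime = new Date(new Date().getTime() + jitteredDelay);
    logger.warn("Call failed " + me.failureCount + " time(s); next retry in " +
                Math.round(jitteredDelay / 1000) + "s");
}

A scheduler or timer subscription would then only attempt the call once the current time is past nextRetryTime, which is the adaptive behavior the post argues was missing.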
View full tip
JavaMelody is an open source (LGPL) application that measures and calculates statistical information based on application usage. The resulting data can be viewed in a variety of formats, including evolution charts, which track various operations and server attributes over time. There are also robust reporting options that allow data to be exported in either HTML or PDF format.

Installation
Installation is fairly simple and can be done in just a few minutes. Download the distribution from the JavaMelody Wiki and extract javamelody.jar, available at https://github.com/javamelody/javamelody/releases

Step 1: Download the JavaMelody file (in Unix, use the following command*):
wget javamelody.googlecode.com/files/javamelody-1.49.0.zip
*Note: Check for the latest version at the link provided above before executing the command, and modify the version accordingly.

Step 2: Extract the zip file (using the following command in Unix; note the version from step 1):
unzip javamelody-1.49.0.zip

Step 3: Copy javamelody.jar and jrobin-x.jar from the JavaMelody installable to the WEB-INF/lib directory of the war file deployed in Tomcat, using the following command in Unix:
cp -pr javamelody-1.49.0 jrobin-x.jar /opt/tomcat/server/webapps/<application name>/WEB-INF/lib

Step 4: Edit the web.xml file in the WEB-INF directory of the war file deployed in Tomcat and add the following lines before the servlet definitions, i.e. near the beginning of the web.xml file:

<filter>
    <filter-name>monitoring</filter-name>
    <filter-class>net.bull.javamelody.MonitoringFilter</filter-class>
</filter>
<filter-mapping>
    <filter-name>monitoring</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>
<listener>
    <listener-class>net.bull.javamelody.SessionListener</listener-class>
</listener>

Step 5: Restart the Tomcat server after editing the web.xml and access the JavaMelody page using the following URL pattern:
http://<hostname on which tomcat is configured>:<port number on which the application is accessed>/<application name>/monitoring

The URL can be customized in the configuration file. Reports can be viewed in weekly, daily, or monthly formats. They can also be downloaded or sent over email in PDF format. The iText library for web apps and Java's Mail and Activation libraries are required on the server in order to use the mail session. The report provides the same information that can be found on the monitoring web page, with both high-level and detailed views: CPU and memory usage, detailed SQL information, SQL statistics, server requests, system threads, caches, and data caches.

System Overhead
On the JavaMelody Wiki, https://github.com/javamelody/javamelody/wiki/Overhead, one can find a healthy discussion about system overhead. The general consensus is that the overhead cost caused by JavaMelody is very low and that the feature is safe to enable full-time in a QA environment:
- JavaMelody records only statistics and not events, so the memory overhead is quite minimal.
- There is no I/O on the wire and minimal I/O on disk.
If no problems arise, enabling JavaMelody in the production environment can be considered as well. Using a tool like JavaMelody can lead to valuable insights on how to optimize servers or uncover otherwise hidden issues, providing value that exceeds the overhead cost.
View full tip
We will host a live Expert Session: "Thingworx Mashup 101 - Do's and Don'ts" on February 24th, 13h30 EST.

Please find below the description of the expert session and the registration link.

Expert Session: Thingworx Mashup 101 - Do's and Don'ts
Date and Time: February 24th, 13h30 EST
Duration: 1 hour
Host: Aanjan Ravi - Technical Product Manager
Registration here: https://www.ptc.com/en/events/thingworx-mashup-101

Description: This session covers the most common and useful tips about how to correctly use Mashup Builder, Widgets and Layouts - and what to avoid - to create applications with good UI/UX principles that are easier to maintain.

Existing recorded sessions can be found on the support portal using the keyword 'Expert Sessions'. You can also suggest topics for upcoming sessions using this small form.

Here are some recorded sessions that might be of interest. You can find recordings for the full library of webinars using the keyword 'Expert Sessions' in the PTC support portal search.

Thingworx Active Active Clustering - This session covers the main aspects of the High Availability Clustering feature launched with the ThingWorx 9.0 release. Recording Link
Upgrade to Thingworx 9 - How to Plan / Evaluate Impacts - This session highlights the key points you should evaluate to properly plan your upgrade to Thingworx 9. Recording Link
Top 5 items to check for Thingworx Performance Troubleshooting - How to troubleshoot performance issues in a Thingworx environment? Here we cover the top 5 investigation steps that will help you understand the source of your environment issues and allow better communication with PTC Technical Support. Recording Link
View full tip
We will host a live Expert Session: "Thingworx Navigate 3D Viewer" on October 9th at 11:00 AM EST.

Please find below the description of the expert session and the registration link.

Expert Session: Thingworx Navigate 3D Viewer
Date and Time: Friday, October 9th, 2020, 11:00 AM EST
Duration: 1 hour
Host: Robbie Morrison, Product Management Senior Manager

Description: Following the series of new capabilities released with Navigate 9.0, this session will focus on the details of the Navigate 3D Viewer and how to leverage it for your use cases.

Register here

Existing recorded sessions can be found on the support portal using the keyword 'Expert Sessions'. You can also suggest topics for upcoming sessions using this small form.

Here are some recorded sessions that might be of interest. You can find recordings for the full library of webinars using the keyword 'Expert Sessions' in the PTC support portal search.

Navigate 9.0 - What's New? - This session is the intro of a series covering the new capabilities of the recent Navigate 9 release and the value each can bring to your implementation. Further sessions will cover the details of some of them. Recording Link
Top 5 items to check for Thingworx Performance Troubleshooting - How to troubleshoot performance issues in a Thingworx environment? Here we cover the top 5 investigation steps that will help you understand the source of your environment issues and allow better communication with PTC Technical Support. Recording Link
Thingworx 9.0 Component Based App Development - Following the series of new capabilities released with Navigate 9.0, this session focuses on the details of Navigate Component Based app development and how to leverage it for your use cases. Recording Link
View full tip
We will host a live Expert Session: "Thingworx Navigate Component Based App Development" on Wednesday 09/30, 08:00 AM Eastern Daylight Time.

Please find below the description of the expert session as well as the link to register.

Expert Session: Thingworx Navigate Component Based App Development
Date and Time: Wednesday 09/30, 08:00 AM Eastern Daylight Time
Duration: 1 hour
Host: Pratibha Bhatnagar
Description: Following the series of new capabilities released with Navigate 9.0, this session will focus on the details of Navigate Component Based app development and how to leverage it for your use cases.

Existing recorded sessions can be found on the support portal using the keyword 'Expert Sessions'. You can also suggest topics for upcoming sessions using this small form.
View full tip
Applicable Releases: ThingWorx Platform 7.0 to 8.4

Description:
A practical example of how to build a data model in ThingWorx following a pre-defined design. Following topics are covered:
- Review existing Design Plan
- Build all required entities in ThingWorx Composer
- Test the model and review scalability and reusability

The session was recorded using the old ThingWorx Composer, but the concepts are still applicable.
Related Success Service - Principles of Thingworx Modeling
Related Service - Design your Thingworx Model
View full tip
Hi Community,

Although we have reference architectures and integration paths for connecting devices to ThingWorx through Azure IoT, no one has ever written anything about doing the same from one ThingWorx to another. I thought I'd change that and put some ideas out there around how one might go about doing this. Although this is not officially supported or recommended by PTC, I have consulted with a number of leading SMEs on the subject, who have participated in forming the basis of my thinking outlined here.

Components required (in order of communication path):
1. On-premise ThingWorx Platform
2. Protocol Adapter Toolkit* (CXS) - MQTT
3. Azure IoT Edge
4. Azure IoT Hub
5. ThingWorx Azure IoT Hub Connector (CXS)
6. Azure cloud-hosted ThingWorx Platform

The PAT (2), with a codec to encode MQTT messages, publishes to the on-premise IoT Edge MQTT endpoint, which handles store-and-forward of messages to the IoT Hub. An Azure IoT device would exist for each Thing you wish to represent on the ThingWorx servers. The Azure IoT Hub Connector would pick up the incoming messages and pass them on to the cloud ThingWorx, which would decode the MQTT payload and map it to Thing property updates.

The only part that I presently don't like about this approach is that you'll need to decode the MQTT messages on the ThingWorx platform in the cloud when they are received from the IoT Hub, and this mechanism will also need to handle encoding and publishing back to the IoT Hub if C2D (Cloud-to-Device) messages are to be implemented (aka bi-directional). This is required as ThingWorx only supports AlwaysOn as an application-level protocol, so some form of mapping needs to be done.

* Another approach would be to replace the PAT with a custom agent which implements both the ThingWorx Edge SDK and the Azure IoT device SDK.

Regards,

Greg Eva
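To make the decode-and-map step more tangible, here is a hedged sketch of a cloud-side ThingWorx service (not part of the reference architecture; the payload shape, the "payload" STRING parameter, and the property names are assumptions):

// Minimal sketch: decode an incoming JSON payload and map it onto Thing properties.
// Assumes an input parameter "payload" (STRING), e.g.:
//   {"thingName":"PressLine1","temperature":72.5,"status":"RUNNING"}
var msg = JSON.parse(payload);

// Hypothetical naming convention: the Azure IoT device id matches the Thing name
var targetThing = Things[msg.thingName];

if (!targetThing) {
    logger.warn("No Thing found for payload thingName: " + msg.thingName);
} else {
    // Map payload fields onto assumed Thing properties
    targetThing.temperature = msg.temperature; // NUMBER property
    targetThing.status = msg.status;           // STRING property
}

A similar service (reversing the mapping and re-encoding to JSON) would be needed for the C2D direction mentioned above.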
View full tip
In this post I show how to use Federation in ThingWorx to execute services on a different ThingWorx platform instance. In the use case below I set up one ThingWorx instance in the Factory and another instance in the Cloud, whereby the latter executes a service that actually runs on the former.

Please find the document in attachment. A brief sketch of what such a call can look like from the Cloud side is included below.

HTH,
Alessio Marchetti
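As a hedged illustration only (the Thing name "FactoryLine1" and the service GetProductionStatus are assumptions, not from the attached document): once the Factory instance publishes a Thing and the Cloud instance has a corresponding remote Thing bound to that federated publisher, invoking the remote service looks just like a local call.

// On the Cloud instance: "FactoryLine1" is assumed to be a remote Thing bound via
// Federation to a published Thing of the same name on the Factory instance.
var status = Things["FactoryLine1"].GetProductionStatus(); // hypothetical remote service
logger.info("Federated call returned: " + status);

The appeal of this approach is that the calling code does not need to know where the service actually executes; Federation handles the routing between the two platforms.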
View full tip