
IoT Tips

Integrating LDAP authentication into ThingWorx is fairly simple. Since release 5.0, the out-of-the-box (OOTB) ThingWorx authenticators already include the code needed to validate a user's credentials against an LDAP server. On every login attempt, these authenticators check whether an LDAP server is connected and whether the user exists on that server. If the username exists in LDAP, ThingWorx checks the entered password against the password stored in LDAP; if it does not match, ThingWorx then checks it against the password stored in ThingWorx for that user. So for a user to log in to ThingWorx, they must have a user Thing created for them within ThingWorx Composer (this can be done programmatically, see below) and a valid password that matches either their LDAP account password or the password set for that user in ThingWorx Composer.

The first thing a developer needs to do to integrate LDAP is configure their ThingWorx instance so that it can find the LDAP server and access its contents. This is done by importing an XML file which exposes a Thing that comes with the ThingWorx platform (see attached file "directoryServices.xml"). The Thing that needs configuring is called ApacheDS3, and it is a DirectoryServices Thing.

The largest task for a developer integrating LDAP into ThingWorx is importing their LDAP users into ThingWorx. Getting the LDAP usernames out of the LDAP server will vary depending on which distribution of LDAP is in use. However, once the developer acquires this information, using it to create users in ThingWorx is simple. The developer will need to create a Thing Service which creates a dummy password and assigns the LDAP username in the parameters. Then they can pass the parameters into the CreateUser service of the "EntityServices" resource:

var params = {
    password: "SOMETHING_COMPLICATED", // dummy password; it does not matter, but you don't want an accidental match, so make it something very complicated and standard across your company's LDAP users
    name: ldap_username, // retrieved from LDAP
    description: "This user was created as part of LDAP import", // can be whatever you'd like
    tags: undefined
};
Resources["EntityServices"].CreateUser(params); // no return

Any users created in this way will be redirected to SQUEAL if there is no home mashup assigned, so you will have to add an additional bit of code which assigns home mashups to users, looping through something like this:

var params = {
    name: "dashboard" // replace this with the String name of an existing mashup
};
Users[username].SetHomeMashup(params);

For full steps on integrating LDAP and ThingWorx, including instructions on how to set up an ApacheDS test LDAP server, see the ThingWorx support article titled "Integrate LDAP Authentication and Import LDAP User Directory into ThingWorx" (reference document CS221840).
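Putting the two services above together, a minimal import loop might look like the following sketch. The infotable parameter ldapUsers and its STRING field username are hypothetical names standing in for whatever structure your LDAP export produces:

// Minimal sketch: provision ThingWorx users from an LDAP export.
// 'ldapUsers' is a hypothetical INFOTABLE service parameter with a STRING field 'username'.
for (var i = 0; i < ldapUsers.rows.length; i++) {
    var username = ldapUsers.rows[i].username;

    // Create the user with the dummy password discussed above
    Resources["EntityServices"].CreateUser({
        password: "SOMETHING_COMPLICATED",
        name: username,
        description: "This user was created as part of LDAP import",
        tags: undefined
    });

    // Assign a home mashup so the new user is not redirected to SQUEAL
    Users[username].SetHomeMashup({ name: "dashboard" }); // mashup must exist
}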
View full tip
Hi Community,

I've recently had a number of questions from colleagues around architectures involving MQTT and what our preferred approach was. After some internal verification, I wanted to share an aggregate of my findings with the ThingWorx Architect and Developer Community.

PTC currently supports three methods for integrating with MQTT for IoT projects:

ThingWorx Azure IoT Hub Connector
ThingWorx MQTT Extension
ThingWorx Kepware Server

Choice is nice, but it adds complexity and sometimes confusion. The intent of this article is to clarify and provide direction on the subject to help others choose the path best suited for their situation.

ThingWorx MQTT Extension

The ThingWorx MQTT extension has been available on the marketplace as an unsupported "PTC Labs" extension for a number of years. Recently its status has been upgraded to "PTC Supported" and it has received some attention from R&D, getting some bug fixes and security enhancements. Most people who have used MQTT with ThingWorx are familiar with this extension. As with anything, it has advantages and disadvantages. You can easily import the extension without having administrative access to the machine, it's easy to move around and store with projects, and it can be up and running quite quickly. However, it is also quite limited when it comes to the flexibility required when building a production application, is tied directly to the core platform, and does not get feature/functionality updates.

The MQTT extension is a good choice for PoCs, demos, benchmarks, and prototypes, as it provides MQTT integration relatively quickly and easily. As an extension which runs with the core platform, it is not a good choice as part of a client/enterprise application where MQTT communication reliability is critical.

ThingWorx Azure IoT Hub Connector

Although Azure IoT Hub is not a fully functional MQTT broker, Azure IoT does support MQTT endpoints on both IoT Hub and IoT Edge. This makes it an interesting option: MQTT devices can publish to Azure IoT and be integrated into ThingWorx using the Azure IoT Hub Connector, without actually requiring an MQTT broker to run and be maintained. The Azure IoT Hub Connector works similarly to the PAT and is built on the Connection Server, but adds the notion of device management and security provided by Azure IoT.

When using Azure IoT Edge configured as a transparent gateway with buffering (store and forward) enabled, this approach has the added benefit of being able to buffer MQTT device messages at a remote site, with the ability to handle Internet interruptions without losing data.

This approach also has far greater integrated security capabilities, by leveraging certificates and tying into Azure Key Vault, as well as easily scaling up the resources receiving the MQTT messages (IoT Hub and the Azure IoT Hub Connector). Considering that this approach is built on the Connection Server core, it also follows our deployment guidance of processing communications outside of the core platform (unlike the extension approach).

ThingWorx Kepware Server

As some will note, Kepware has some pretty awesome MQTT capabilities, both as northbound and southbound interfaces. The MQTT Client driver allows creating an MQTT channel to devices communicating via MQTT, with auto-tag creation (from the MQTT payload). Coupled with the native ThingWorx AlwaysOn connection, you can easily connect Kepware to an on-premise MQTT broker and connect these devices to ThingWorx over AlwaysOn.
The IoT Gateway plug-in has an MQTT agent which allows publishing data from all of your Kepware-connected devices to an MQTT broker or endpoint. The MQTT agent can also receive tag updates on a different topic and write back to the controllers. We've used this MQTT agent to connect industrial control system data to ThingWorx through cloud platforms like Azure IoT, AWS, and communications providers.

ThingWorx Product Segment Direction

A key factor in deciding how to design your solution should be alignment with our product development direction. The ThingWorx Product Management and R&D teams have for years been putting their focus on scalable and enterprise-ready approaches that our partners and customers can build upon. I mention this to make it clear that not all supported approaches carry the same weight. Although we do support the MQTT extension, it is not in active development, due to the fact that out-of-platform, microservices-based communication interfaces are our direction forward.

The Azure IoT Hub Connector, being built on the Connection Server, is currently the way forward for MQTT communications to ThingWorx Foundation.

Regards,

Greg Eva
View full tip
In this session, we pick up where we left off with the mashup we worked on in part 1 of our Advanced Mashup Expert Session series. Specifically, we will explore the concepts of master mashups, session variables, and media entities, using each to further enhance the look and feel of our mashup.

For full-sized viewing, click on the YouTube link in the player controls. Visit the Online Success Guide to access our Expert Session videos at any time, as well as additional information about ThingWorx training and services.
View full tip
About

This is part of a ThingBerry-related blog post series.

ThingBerry is ThingWorx installed on a Raspberry Pi, which can be used for portable demonstrations without the need of utilizing e.g. customer networks. Instead, the ThingBerry provides its own custom WiFi hotspot and allows Things to connect and send / receive demo data on a small scale.

In this particular blog post we'll discuss how to connect an ESP8266 module to the ThingBerry WiFi hotspot and send data from a DHT-11 sensor via the MQTT protocol.

As the ThingBerry is a highly unsupported environment for ThingWorx, please see this blog post for all related warnings.

Install MQTT broker on the ThingBerry

To install mosquitto as an MQTT broker, log in to the ThingBerry and run

sudo apt-get install mosquitto

This will provide a basic broker installation, which is good enough for this example. MQTT clients (including ThingWorx) will connect to this broker to exchange messages. There will be no added security, like encrypted traffic, shown in this example; it is however good practice to secure MQTT broker / client connections.

While the ESP8266 module is publishing information, ThingWorx will subscribe to the corresponding topics to update its internal property values with what is sent by the ESP8266 module.

For more information on MQTT, how to configure it for ThingWorx, or more security-relevant information, also see

https://community.thingworx.com/message/5063#5063
https://community.thingworx.com/community/developers/blog/2016/08/08/securing-mqtt-connection-to-thingworx-platform?sr=tcontent

Configure the ESP8266

There are plenty of instructions on the web already on how to initially set up the ESP8266 and use it with the Arduino IDE. I'll therefore just refer to Google, which covers the topic more extensively than I ever could.

All coding in this example is done in the Arduino IDE and is pushed to the ESP8266 (NodeMCU) via USB. For this you might need to install a CH340G USB driver for the NodeMCU.

In the Arduino IDE under Tools, I have set my environment to

Board: NodeMCU 1.0 (ESP-12E Module)
CPU Frequency: 80 MHz
Flash Size: 4M (3M SPIFFS)
Upload Speed: 115200
Port: COM3

Under Sketch > Include Library > Manage Libraries add / install the following libraries:

DHT sensor library by Adafruit
Adafruit Unified Sensor by Adafruit
PubSubClient by Nick O'Leary

These bring in the libraries necessary to read data from the DHT-11 sensor and to configure the ESP8266 as an MQTT client.

Wiring the DHT-11 sensor

The following image shows the PINs on the ESP8266.

I'm using a DHT-11 sensor with cables included and already fixed to a board with 3 PINs. In case you're using a different version, there might be additional components and wiring required, like a resistor etc. Google might help here as well.

Ensure that neither board nor sensor is plugged in, and the ESP8266 is powered off.

To hook the sensor up to the ESP8266, join

( - ) to GND
( + ) to 3.3V
(out) to D3

After all the connections are made, connect the ESP8266 via USB to a computer / laptop with the Arduino IDE configured.

Coding

In the Arduino IDE use the following code - adjust the WiFi settings and the MQTT broker configuration. Ensure to rename the ESP_xx name / topic to something more meaningful, e.g. a specific device name (or just leave it as is if in doubt).

Use the ssid and wpa_passphrase from the hostapd.conf used to configure the ThingBerry as WiFi hotspot.
Copy & paste the code below into the Arduino IDE, verify it and upload it to the ESP8266.

While searching for a WiFi connection, the device's blue LED will blink. A successful connection to the broker and publishing the values will result in a static blue LED. In case the LED is off, the connection to the broker is lost or messages cannot be published.

For troubleshooting, use the Serial Monitor function (at 115200 baud) in the Arduino IDE. In case sensor data cannot be read, but the wiring is correct and the code addresses the correct PIN, verify the sensor is indeed working. It took me a long time to figure out that the first sensor I used was a defective device.

The current configuration sends updates every 10 seconds - longer intervals might make more sense, but can trigger a timeout for the MQTT broker. In this case the program will re-connect automatically and log corresponding messages in the Serial Monitor. This might seem like an error, but is indeed intended behavior of the code and the MQTT broker.

Configure MQTT Thing in ThingWorx

Create a new Thing in ThingWorx based on the MQTT Template. Add two properties:

temperature
humidity

Set both to persistent and logged, with Data Change Type set to ALWAYS. Also configure a Value Stream to log a history of values.

In the configuration, add two more subscriptions. Activate the "subscribe" checkbox and map name (local property) to topic (MQTT topic), e.g.

name = temperature; topic = ESP_xx/temp
name = humidity; topic = ESP_xx/hum

Ensure the correct servernames, ports etc. are configured (an empty servername will use the localhost).

Save the configuration. Property values should now be updated from the MQTT broker, depending on what the device is sending.

Code

#include "DHT.h"
#include "PubSubClient.h"
#include "ESP8266WiFi.h"

/*
 * Configure parameters for sensor and network / MQTT connections
 */

// setup DHT 11 pin and sensor
#define DHTPin D3
#define DHTTYPE DHT11

// setup WiFi credentials
#define WLAN_SSID "mySSID"
#define WLAN_PASS "WIFIpassword"

// setup MQTT
#define MQTTBROKER "mqttbrokerhostname"
#define MQTTPORT 1883

// setup built-in blue LED
#define LED 2

/*
 * ============================================================
 * DO NOT CHANGE ANYTHING BELOW
 * (unless you know what you're doing)
 */

// initiate DHT
DHT dht(DHTPin, DHTTYPE);

// initiate MQTT client
WiFiClient wifiClient;
PubSubClient client(MQTTBROKER, MQTTPORT, wifiClient);

/*
 * setup
 */
void setup() {

  // switch off internal LED
  pinMode(LED, OUTPUT);
  digitalWrite(LED, HIGH);

  // start serial monitor
  Serial.begin(115200);

  // start DHT
  dht.begin();

  // start WiFi
  WiFi.begin(WLAN_SSID, WLAN_PASS);
}

/*
 * the loop
 */
void loop() {

  // while not connected to WiFi, print "."
  // after connection exit the loop
  // blink LED while having no WiFi signal
  boolean wifiReconnect = false;

  while (WiFi.status() != WL_CONNECTED) {
    digitalWrite(LED, LOW);
    delay(200);
    Serial.print(".");
    digitalWrite(LED, HIGH);
    delay(300);
    wifiReconnect = true;
  }

  // if WiFi has reconnected, print new connection information and turn on LED
  if (wifiReconnect == true) {

    // print connection information and local IP address, mac address
    Serial.println();
    Serial.println("WiFi connected");
    Serial.println(WiFi.localIP());
    Serial.println(WiFi.macAddress());
    Serial.println();

    // turn on built-in LED to indicate successful WiFi connection
    digitalWrite(LED, LOW);
  }

  // if MQTT client is not connected, connect again
  // turn on built-in LED to indicate a successful connection
  if (!client.connected()) {

    Serial.println("Disconnected from MQTT server... trying to connect");

    if (client.connect("ESP_xx")) {
      Serial.println("Connected to MQTT server");
      Serial.println("Topic = ESP_xx");
      digitalWrite(LED, LOW);
    } else {
      Serial.println("MQTT connection failed");
      digitalWrite(LED, HIGH);
    }

    Serial.println();
  }

  // read temperature and humidity from sensor
  float t = dht.readTemperature();
  float h = dht.readHumidity();

  if (isnan(t) || isnan(h)) {

    // if temperature or humidity is not a number, print error
    Serial.println("Failed retrieving data from DHT sensor");

  } else {

    // print temperature and humidity
    Serial.print(t);
    Serial.print("° - ");
    Serial.print(h);
    Serial.print("%");
    Serial.println();

    // only send values to MQTT broker if client is connected
    if (client.connected()) {

      // boolean to check for errors during payload transfer
      bool isError = false;

      // create payload and publish values via MQTT client
      // use buffer to convert float to char*
      char buffer[10];

      dtostrf(t, 0, 0, buffer);
      if (client.publish("ESP_xx/temp", buffer)) {
        Serial.print("  published /temp  ");
      } else {
        Serial.print("  failed /temp  ");
        isError = true;
      }

      dtostrf(h, 0, 0, buffer);
      if (client.publish("ESP_xx/hum", buffer)) {
        Serial.print("  published /hum  ");
      } else {
        Serial.print("  failed /hum  ");
        isError = true;
      }

      Serial.println();

      // on error, turn off LED
      if (isError == true) {
        digitalWrite(LED, HIGH);
      } else {
        digitalWrite(LED, LOW);
      }
    }
  }

  // sleep for 10 seconds
  // if sleep > default mosquitto timeout, a reconnect is forced for each update cycle
  delay(10000);
}
View full tip
Design Your Data Model Guide Part 1

Overview

This project will introduce the process of taking your IoT solution from concept to design. Following the steps in this guide, you will create a solution that doesn't need to be constantly revamped, by creating a comprehensive Data Model before starting to build and test your solution. We will teach you how to utilize a few proposed best practices for designing the ThingWorx Data Model and provide some prescriptive methods to help you generate a high-quality framework that meets your business needs.

NOTE: This guide's content aligns with ThingWorx 9.3. The estimated time to complete ALL 3 parts of this guide is 60 minutes. All content is relevant, but there are additional tools and design patterns you should be aware of. Please go to this link for more details.

Step 1: Data Model Methodology

We will start by outlining the overall process for the proposed Data Model Methodology.

1. User Stories - Identify who will use the application and what information they need. By approaching the design from a User perspective, you should be able to identify what elements are necessary for your system.
2. Data Sources - Identify the real-world objects or systems which you are trying to model. To create a solid design, you need to identify what the "things" are in your system and what data or functionality they expose.
3. Model Breakdown - Compose a representative model of modular components to enable uniformity and reuse of functionality wherever possible. Break down user requirements and data, identifying how the system will be modeled in Foundation.
4. Data Strategy - Identify the sources of data, then evaluate how many different types of data you will have, what they are, and how your data should be stored. From that, you may determine the data types and data storage requirements.
5. Business Logic Strategy - Examine the functional needs, and map them to your design for proper business logic implementation. Determine the business logic as a strategic flow of data, and make sure everything in your design fits together in logical chunks.
6. User Access Strategy - Identify each user's access and permission levels for your application. Before you start building anything, it is important to understand the strategy behind user access. Who can see or do what? And why?

NOTE: Due to the length of this subject, the ThingWorx Data Model Methodology has been divided into multiple parts. This guide focuses on the first three steps: User Stories, Data Sources, and Model Breakdown. Guides covering the last three steps are linked in the final Next Steps page.

Step 2: User Stories

With a user-based approach to design, you identify requirements for users at the outset of the process. This increases the likelihood of user satisfaction with the result. Utilizing this methodology, you consider each type of user that will be accessing your application and determine their requirements according to each of the following two categories:

Functionality - Determine what the user needs to do. This will define what kind of Services and Subscriptions will need to be in the system and which data elements and Properties must be gathered from the connected Things.
Information - What information do they need? Examine the functional requirements of the user to identify which pieces of information the users need to know in order to accomplish their responsibilities.

Factory Example

Let's revisit our Smart Factory example scenario.
The first step of the User Story phase of the design process is to identify the potential users of your system. In this example scenario, we have defined three different types of users for our solution:

Maintenance
Operations
Management

Each of these users will have a different role in the system. Therefore, they will have different functional and informational needs.

Maintenance

It is the maintenance engineer's job to keep machines up and running so that the operator can assemble and deliver products. To do this well, they need access to granular data for the machine's operating status to better understand healthy operation and identify causes of failure. They also need to integrate their maintenance request management system to consolidate their efforts and to create triggers for automatic maintenance requests generated by the connected machines.

Required Functionality
- Get granular data values from all assets
- Get a list of maintenance requests
- Update maintenance requests
- Set triggers for automatic maintenance request generation
- Automatically create maintenance requests when triggers have been activated

Required Information
- Granular details for each asset to better understand healthy asset behavior
- Current alert status for each asset
- When the last maintenance was performed on an asset
- When the next maintenance is scheduled for an asset
- Maintenance request information, including creation date, due date, progress notes

Operations

The operator's job is to keep the line running and make sure that it's producing quality products. To do this, operators must keep track of how well their line is running (both in terms of speed and quality). They also need to be able to file maintenance requests when they have issues with the assets on their line.

Required Functionality
- File maintenance requests
- Get quality data from assets on their line
- Get performance data for the whole line
- Get a prioritized list of production orders for their line
- Create maintenance requests

Required Information
- Individual asset performance metrics
- Full line performance metrics
- Product quality readings

Management

The production manager oversees the dispatch of production orders and ensures quotas are being met. Managers care about the productivity of all lines and the status of maintenance requests.

Required Functionality
- Create production orders
- Update production orders
- Cancel production orders
- Access line productivity data
- Elevate maintenance request priority

Required Information
- Production line productivity levels (OEE)
- List of open maintenance requests

Step 3: Data Sources – Thing List

Once you have identified the users' requirements, you'll need to determine what parts of your system must be connected. These will be the Things in your solution. Keep in mind that a Thing can represent many different types of connected endpoints. Here are some examples of possible Things in your system:

- Devices deployed in the field with direct connectivity or gateway-connectivity to Foundation
- Devices deployed in the field through third-party device clouds
- Remote databases
- Connections to external business systems (e.g., Salesforce.com, Weather.com, etc.)

Factory Example

In our Smart Factory example, we have already identified the users of the system and listed requirements for each of those users. The next step is to identify the Things in our solution. In our example, we are running a factory floor with multiple identical production lines.
Each of these lines has multiple different devices associated with it. Let's consider each of those items to be a connected Thing.

Things in each line:
- Conveyor belt x 2
- Pneumatic gate
- Robotic arm
- Quality-check camera

Let's also assume we already have both a Maintenance Request System and a Production Order System that are in use today. To add these to our solution, we want to build a connector between Foundation and each existing system. These connectors will be Things as well:

- Internal system connection Thing for the Production Order System
- Internal system connection Thing for the Maintenance Request System

NOTE: It is entirely possible to have scenarios in which you want to examine more granular-level details of your assets. For example, the arm and the hand of the assembly robot could be represented separately. There are endless possibilities, but for simplicity's sake, we will keep the list shorter and more high-level. Keep in mind that you can be as detailed as needed for this and future iterations of your solution. However, being too granular could potentially create unnecessary complexity and data overload.

Click here to view Part 2 of this guide.
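As a footnote to the Thing list above, here is a minimal sketch of how one of these Things could be created programmatically through the EntityServices resource. The Thing and template names ("ConveyorBelt1", "GenericThing") are illustrative assumptions, not part of the guide; a real solution would use the purpose-built ThingTemplates identified in the Model Breakdown step:

// Create a Thing for one of the connected devices on a production line.
// "GenericThing" is a placeholder template for this sketch.
Resources["EntityServices"].CreateThing({
    name: "ConveyorBelt1" /* STRING */,
    description: "Conveyor belt on production line 1" /* STRING */,
    thingTemplateName: "GenericThing" /* THINGTEMPLATENAME */,
    tags: undefined /* TAGS */
});

// A newly created Thing must be enabled and restarted before use
Things["ConveyorBelt1"].EnableThing();
Things["ConveyorBelt1"].RestartThing();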
View full tip
Introduction to the ThingWorx Composer and a demonstration of how you go about building out the design plan.

For full-sized viewing, click on the YouTube link in the player controls.

Visit the Online Success Guide to access our Expert Session videos at any time as well as additional information about ThingWorx training and services.
View full tip
Create compelling, modern application user interfaces in ThingWorx with the latest enhancements to our Mashup visualization platform - Collection and Custom CSS.

In this webinar with IoT application designer Gabriel Bucur, we'll show how the new Collection widget makes it easy to replicate visual content in your UI for menu systems, dashboards, tables, and more. You'll learn about several of the 60+ configuration properties available for collections, many of which offer input/output bindings for dynamic flexibility.

Gabriel will also demonstrate the styling and UX power of the latest feature in the Next Gen Composer, which allows you to write classes and CSS for your Mashups, masters, and widgets.

Watch the recording above, and download this sample Mashup containing all the data and entities shared in the video.

Q&A

We didn't have time to get to all of the questions during the live webcast, but we've answered them here on our blog. Have any additional questions? Please leave us a comment.

WILL PTC CONTINUE SUPPORT FOR THE REPEATER WIDGET IN THINGWORX 8.2, OR WILL IT BE REMOVED?
The Repeater Widget will not be removed. However, due to limited performance in various browsers, switching to the Collection is highly recommended.

WHAT'S THE DIFFERENCE BETWEEN REPEATER AND COLLECTION, AND ARE THERE PROS AND CONS FOR EACH WIDGET?
The Collection is an advanced widget that allows you to contain a series of repeated Mashups within a collection. Its functionality is similar to the Repeater Widget, but it contains more properties that provide additional options and better performance, especially when handling large amounts of data.

IS IT POSSIBLE TO ADD A DRAG AND DROP ACTION TO LISTS OR REPEATERS, E.G. DRAGGING AN ELEMENT FROM ONE CONTAINER TO ANOTHER?
Drag and drop functionality is not available in the Collection Widget at this time. It is, however, in consideration for future ThingWorx releases.

IN THE EVENT I HAVE MORE THAN ONE MASHUP (FOR EXAMPLE, MASHUP A AND MASHUP B), CAN I BIND DIFFERENT PROPERTIES TO THE SAME MASHUP PARAMETER ACCORDING TO THE MASHUP NAME?
The MashupName row goes to the MashupNameField in the collection, where you'll have a dropdown after you populate it with the InfoTable that contains the MashupName. You can put all the bindings there, even if you don't use them in all the Mashups. For example: {"valueA":"MashupA","valueB":"MashupB"}

IS IT POSSIBLE TO ORDER SECTIONS HORIZONTALLY IN THE COLLECTION?
Sections can only be ordered vertically at this time.

WHAT IS THE DIFFERENCE BETWEEN GLOBAL PROPERTIES AND SESSION VALUES?
Global Properties are only available within the Collection. These properties offer a way to control, from other widgets, the Things the Collection is displaying.

IF THERE ARE MULTIPLE COLLECTIONS, DO THEY SHARE THE SAME SET OF GLOBALPARAMETERS?
No. If you defined a Boolean on your Collection, when you drag the Boolean output from a checkbox on the Collection you will see that you can bind it to that defined Boolean in the GlobalParameters.

WHEN USING CUSTOM CSS, DO YOU HAVE TO DEFINE STYLING FOR EACH ELEMENT, OR CAN YOU CREATE A STYLE THING WITH CSS?
Widgets differ in functionality, so the same class might not apply to the same widget. However, if you define a styling in CSS for a button, for example, you can apply that class on any button you want.

DOES CUSTOM CSS ALWAYS OVERRIDE THE WIDGET STYLES?
Yes. That is the essential purpose of custom CSS integration – to rewrite styles.
IF YOU HAVE TWO CONFLICTING STYLES – ONE IN CSS AND THE OTHER IN A STYLE DEFINITION – WHICH ONE TAKES PRECEDENCE?
CSS will typically rewrite the ThingWorx styles; however, it depends on the specificity of the CSS target definition. For example, ".button-element" will be overwritten by ".default-button .button-element". Visit https://developer.mozilla.org/en-US/docs/Web/CSS/Specificity for more details regarding this topic.

CAN I RESIZE MY WIDGETS DURING RUNTIME?
The size of the widget is determined by the CSS and how it renders in ThingWorx. While you can bind different sizing classes to the CustomClass property of the widget, resizing with your mouse is not available at Runtime.
View full tip
DataShape

Simply put, a DataShape represents the data in your model, giving your application a built-in sense of how to represent the data in different scenarios. A DataShape is defined with a set of field definitions and related metadata, e.g. the DataType. DataShapes are a must-have (except for ValueStreams) when creating entities that deal with data storage, i.e. DataTables & Streams. For more detail on DataShapes and the DataTypes, see DataShapes in the ThingWorx Help Center.

Note: See ThingShape : Nuances, Tips & Tricks for ThingShape vs DataShape.

Ways to create a DataShape

Via the ThingWorx Composer

1. Navigate to ThingWorx Composer and click on New > DataShape
2. Provide a unique name for the DataShape entity
3. Navigate to the Field Definitions section and add the required Field Definitions

Via a custom service in ThingWorx

1. Navigate to an entity under which the service is to be created, e.g. a Thing
2. Switch to the Services section for the Thing and click Add to create a new service
3. The OOTB service CreateDataShape, available from Resources > EntityServices, can be used:

// snippet creating an infotable whose fields will define the new DataShape
var params = {
    infoTableName : "InfoTable",
    dataShapeName : "DemoDataShape"
};

// CreateInfoTableFromDataShape(infoTableName:STRING("InfoTable"), dataShapeName:STRING):INFOTABLE(DemoDataShape)
var result = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);

// snippet creating the DataShape using the infotable queried above,
// which returns the fields and the metadata on those fields
// DSName is a string holding the name of the DataShape to be created
Resources["EntityServices"].CreateDataShape({
    name: DSName /* STRING */,
    description: "Custom created DataShape" /* STRING */,
    fields: result /* INFOTABLE */,
    tags: undefined /* TAGS */
});

Via the ThingWorx Extension SDK

The following example snippet shows the definition and usage of a DataShape while creating a custom extension with the Extension SDK:

@ThingworxConfigurationTableDefinitions(tables = {
    @ThingworxConfigurationTableDefinition(name = "ConfigTableExample1", description = "Example 1 config table", isMultiRow = false,
        dataShape = @ThingworxDataShapeDefinition(fields = {
            @ThingworxFieldDefinition(name = "field1", description = "", baseType = "STRING"),
            @ThingworxFieldDefinition(name = "field2", description = "", baseType = "NUMBER")
        }))
})

Note: Refer to ThingShape : Nuances, Tips & Tricks for Tips & Tricks.

Other related reads

How are DataShape used while storing the data in ThingWorx
How to pass a DataShape as parameter
Can two DataShapes have the same service name if used on the same thing in ThingWorx?
DataShape in ThingWorx Help Center
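Once a DataShape exists, a common follow-up is to build an infotable from it and add rows, for example when returning structured results from a service. A minimal sketch, assuming a DataShape named "DemoDataShape" with a STRING field "field1" and a NUMBER field "field2":

// Build an empty infotable shaped by DemoDataShape
var table = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape({
    infoTableName: "InfoTable",
    dataShapeName: "DemoDataShape"
});

// Add a row; the property names must match the DataShape's field definitions
table.AddRow({
    field1: "Hello DataShape",
    field2: 42
});

// A service returning this infotable would declare DemoDataShape
// as the DataShape of its INFOTABLE result
var result = table;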
View full tip
Original Post Date: June 6, 2016

Description: This tutorial video will walk you through the installation process for the PostgreSQL-based version of the ThingWorx Platform in a Windows environment. All required software components will be covered in this video.
View full tip
The accuracy of a predictive model can be boosted in two ways: either by embracing feature engineering or by applying boosting algorithms straight away. There are multiple boosting algorithms, like Gradient Boosting, XGBoost, AdaBoost, GentleBoost, etc. Every algorithm has its own underlying mathematics, and a slight variation is observed while applying each of them.

While working with boosting algorithms, we come across two frequently occurring buzzwords: Bagging and Boosting.

Bagging: An approach where you take random samples of data, build learning algorithms, and take simple means to find bagging probabilities.

Boosting: Boosting is similar, however the selection of samples is made more intelligently: we subsequently give more and more weight to hard-to-classify observations.

Below are the default algorithms used in predictive models generated in ThingWorx Analytics:

- Decision Tree
- Gradient Boost
- Linear Regression
- Neural Net
- Random Forest
- Logistic Regression

Gradient boosting is a machine learning technique for regression and classification problems which produces a prediction model in the form of an ensemble of weak prediction models, typically decision trees. It builds the model in a stage-wise fashion like other boosting methods do, and it generalizes them by allowing optimization of an arbitrary differentiable loss function.

Let's begin with an easy example. Assume you are given a previous model M to improve on, and you observe that the model currently has an accuracy of 80% (by any metric). How do you go further about it? One simple way is to build an entirely different model using a new set of input variables and trying better ensemble learners. On the contrary, we have a much simpler way to suggest. It goes like this:

Y = M(x) + error

What if we are able to see that the error is not white noise but has some correlation with the outcome (Y)? What if we can develop a model on this error term?

error = G(x) + error2

Probably, we will see the error rate improve to a higher number, say 84%. Let's take another step and regress against error2:

error2 = H(x) + error3

Now we combine all these together:

Y = M(x) + G(x) + H(x) + error3

This will probably have an accuracy of even more than 84%. What if we can find optimal weights for each of the three learners?

Y = alpha * M(x) + beta * G(x) + gamma * H(x) + error4

How Gradient Boosting Works:

1. Loss Function: The loss function used depends on the type of problem being solved. It must be differentiable, but many standard loss functions are supported and you can define your own. A benefit of the gradient boosting framework is that a new boosting algorithm does not have to be derived for each loss function that may want to be used; instead, it is a generic enough framework that any differentiable loss function can be used.

2. Weak Learner: Decision trees are used as the weak learner in gradient boosting. Specifically, regression trees are used that output real values for splits and whose outputs can be added together, allowing subsequent models' outputs to be added to "correct" the residuals in the predictions. Trees are constructed in a greedy manner, choosing the best split points based on purity scores like Gini, or to minimize the loss.

3. Additive Model: Trees are added one at a time, and existing trees in the model are not changed. A gradient descent procedure is used to minimize the loss when adding trees to the ensemble of weak learner sub-models, or more specifically, decision trees.
After calculating the loss, to perform the gradient descent procedure we must add a tree to the model that reduces the loss.

Improvements to Basic Gradient Boosting:

1. Tree Constraints: It is important that the weak learners have skill but remain weak. Below are some constraints that can be imposed on the construction of decision trees:

- Number of trees: Generally, adding more trees to the model is very slow to overfit. The advice is to keep adding trees until no further improvement is observed.
- Tree depth: Deeper trees are more complex trees, and shorter trees are preferred. Generally, better results are seen with 4-8 levels.
- Number of nodes or number of leaves: Like depth, this can constrain the size of the tree, but it is not constrained to a symmetrical structure if other constraints are used.
- Number of observations per split: Imposes a minimum constraint on the amount of training data at a training node before a split can be considered.
- Minimum improvement to loss: A constraint on the improvement of any split added to a tree.

2. Weighted Updates: The contribution of each tree to the running sum can be weighted to slow down the learning by the algorithm. This weighting is called a shrinkage or a learning rate: each update is simply scaled by the value of the learning rate parameter v.

3. Stochastic Gradient Boosting: At each iteration, a subsample of the training data is drawn at random (without replacement) from the full training data set. The randomly selected subsample is then used, instead of the full sample, to fit the base learner.

4. Penalized Gradient Boosting: An additional regularization term helps to smooth the final learnt weights to avoid over-fitting. Intuitively, the regularized objective will tend to select a model employing simple and predictive functions.
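Tying the stage-wise example and the gradient descent procedure together, the standard textbook formulation (not something specific to ThingWorx Analytics) fits each new tree to the negative gradient of the loss and scales its contribution by the learning rate:

\[ r_{im} = -\left[\frac{\partial L\big(y_i, F(x_i)\big)}{\partial F(x_i)}\right]_{F = F_{m-1}} \]
\[ F_m(x) = F_{m-1}(x) + v\,\gamma_m\,h_m(x), \qquad 0 < v \le 1 \]

Here \(h_m\) is the regression tree fit to the pseudo-residuals \(r_{im}\), \(\gamma_m\) is the step size chosen to minimize the loss, and \(v\) is the shrinkage / learning rate from point 2 above. With squared-error loss, \(r_{im}\) is exactly the residual \(y_i - F_{m-1}(x_i)\), recovering the Y = M(x) + G(x) + H(x) intuition from the example.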
View full tip
Welcome to the ThingWorx Community area for code examples and sharing.

We have a few how-to items and basic guidelines for posting content in this space. The Jive platform that our community runs on provides some tools for posting and highlighting code in the document format that this area is based on. Please try to follow these settings to make the area easy to use, read and follow.

- At the top of your new document, please give a brief description of the code sample that you are posting.
- Use the code formatting tool provided for all parts of code samples (including if there are multiple in one post).
- Try your best to put comments in the code to describe behavior where needed.
- You can edit documents, but note each time you save them a new version is created. You can delete old versions if needed.
- You may add comments to others' code documents and modify your own code samples based on comments.
- If you have alternative ways to accomplish the same as an existing code sample, please post it to the comments. We encourage everyone to add alternatives noted in comments to the main post/document.

Format code: the double blue arrows allow you to select the type of code being inserted and will do keyword highlighting, as well as add line numbers for reference and discussions.
View full tip
We will host a live Expert Session, "Top 5 ThingWorx Environment Monitoring Best Practices", on March 25th, 10h00 EST.

Please find below the description of the expert session and the registration link.

Expert Session: Top 5 ThingWorx environment monitoring best practices
Date and Time: March 25th, 10h00 EST
Duration: 1 hour
Host: Tori Firewind, Tim Atwood and Dave Bernbeck from the Enterprise Deployment Center
Registration Here: https://www.ptc.com/en/resources/iiot/webcast/top-5-thingworx-monitoring-best-practices

In this session, we will be reviewing the main monitoring practices to keep a healthy environment and discuss the main issues raised by the audience. Bring your questions!

Existing recorded sessions can be found on the support portal using the keyword 'Expert Sessions'. You can also suggest topics for upcoming sessions using this small form.

Here are some recorded sessions that might be of interest. You can find recordings for the full library of webinars using the keyword 'Expert Sessions' in the PTC support portal search.

ThingWorx Active-Active Clustering - This session will cover the main aspects of the High Availability Clustering feature launched with the ThingWorx 9.0 release. Recording Link

Upgrade to ThingWorx 9 – How to Plan / Evaluate Impacts - This session highlights the key points you should evaluate to properly plan your upgrade to ThingWorx 9. Recording Link

Top 5 items to check for ThingWorx Performance Troubleshooting - How to troubleshoot performance issues in a ThingWorx environment? Here we cover the top 5 investigation steps that will help you understand the source of your environment issues and allow better communication with PTC Technical Support. Recording Link
View full tip
Background: In the event that a Gateway/Connector Agent is offline or unable to connect to the Axeda Cloud Server, it uses an internal message queue to store information until the connection is restored. The message queue size is configured in the Axeda Builder project. By default, the queue is 200KB in size. Depending on how frequently your Agent sends data or how much data your Agent is collecting and trying to send, 200KB may be too small. If the queue is too small, the data will "overflow" the queue. The queue is kept in memory only; data is not stored to disk and will be removed in a First-In-First-Out (FIFO) manner when the queue overflows. If you see queue overflow error messages in the Agent log (either EKernel.log or xGate.log), it may be time to change the size of the outbound message queue.

The correct size setting for the Agent outbound message queue takes three variables into consideration:

1. How much information are you sending?
2. What is the maximum expected duration of a loss of connection to the Internet (Cloud Server)?
3. How much memory is available to your process?

The more information the Agent is trying to send, the larger the queue size setting should be. Consider also that if your Agents are offline (disconnected) for a long period of time, they will likely accumulate lots of data, which may overflow the outbound message queue. If this is the case, you'll need to increase the queue or risk losing data.

Recommendation: Consider how the Agent operates (offline/online data collection) and how much data may be queued. When selecting the size of the queue, it's important to maintain a balance between protecting against data loss and not occupying too much memory. If you do determine that you need to increase the outbound message queue size, it's important to note that Axeda recommends a maximum outbound message queue size of about 2MB.

Need more information? For information about specifying Agent outbound message queue size, see the online help in Axeda® Builder (Enterprise Server Settings). For information about how the Agent delivers data to the Platform (via EEnterpriseProxy/xgEnterpriseProxy), see the Agent user's guide for your Agent: either Axeda® Platform Axeda® Gateway User's Guide (PDF) or Axeda® Platform Axeda® Connector User's Guide (PDF). Axeda Support Site links: Axeda® Gateway User's Guide, Axeda® Connector User's Guide.
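As a rough, hypothetical worked example of the first two variables (the message size and rate here are made-up figures, not Axeda defaults): an Agent that queues one 500-byte message per minute and must ride out an 8-hour outage needs about

500 bytes/message × 60 messages/hour × 8 hours = 240,000 bytes ≈ 235KB,

which already exceeds the 200KB default while staying comfortably under the recommended 2MB maximum.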
View full tip
Database backups are vital when it comes to ensuring data integrity and data safety. PostgreSQL offers simple solutions to generate backups of an existing ThingWorx database instance and to recover them when needed.

Please note that this does not replace a proper and well-defined disaster recovery plan. Export and import are part of such a strategy, but do not constitute the complete strategy. The commands used in this post are for Windows, but can be adjusted to work on Linux-based systems as well.

Backup

To create a backup, the export / dump functionality of PostgreSQL can be used:

pg_dump -U postgres -C thingworx > thingworxDump.sql

The -C option will include the statement to create the database in the .sql file and map it to the existing tablespace and user (e.g. 'thingworx' and 'twadmin'). The tablespace and user can be seen in the .sql file in the line with "ALTER DATABASE <dbname> OWNER TO <user>;"

In the above example, we're backing up the thingworx database schema and dumping it into the thingworxDump.sql file. Tablespace, username & password are also included in the platform_settings.json.

Restore

To restore the database, we assume an empty PostgreSQL installation. We need to create the DB schema user first via the following command line:

psql -U postgres -c "CREATE USER twadmin WITH PASSWORD 'ts';"

With the user created, we can now re-generate the tablespace and grant permissions to the twadmin user:

psql -U postgres -c "CREATE TABLESPACE thingworx OWNER twadmin LOCATION 'C:\ThingWorx\ThingworxPostgresqlStorage';"
psql -U postgres -c "GRANT ALL PRIVILEGES ON TABLESPACE thingworx TO twadmin;"

Finally, the database itself can be restored by using the following command line:

psql -U postgres < thingworxDump.sql

This will create the database and populate it with tables, functions and sequences, and will also restore the data from the .sql file.

It is important to have the database, username and tablespace match the original system - otherwise granting permissions and re-creating data might fail on recovery. User and tablespace can also be reused, so only the database has to be deleted before restoring it.

Part of the strategy

Part of the backup and recovery strategy should be to actually test the backup as well as the recovery part of the procedure. It should be well tested on a test environment before being deployed to any production environment. Take backups on a regular basis and test for disaster recovery once or twice a year, to ensure the procedure is still valid.

Data is the most important resource in your application - protect it!
View full tip
Original Post Date: June 6, 2016

Description: This is a video tutorial on creating a Stream, adding a Data Shape with properties, and writing values to the Stream.
View full tip
Previously
- Installing & Connecting C SDK to Federated ThingWorx with VNC Tunneling to the Edge device
- Installing and configuring Web Socket Tunnel Extension on ThingWorx Platform

Overview

Using the C SDK Edge client configuration we did earlier, we'll now create the setup illustrated above. This C SDK client pushes data to the ThingWorx publisher (server name TW802Neo), which is federated to the ThingWorx subscriber (server name TW81). Notice that the SteamSensor2 on the publisher server is the one binding to the C SDK client, and the FederatedSteamThing on the subscriber only receives data from SteamSensor2. Let's crack on!

Content
- Configure ThingWorx to publish
- Configure ThingWorx to subscribe
- Publish entities from the publisher to the subscriber
- Create a mashup to view data published to the subscriber

Pre-requisite

The minimum requirement is to have two ThingWorx servers running. Note that both ThingWorx systems can be publisher & subscriber at the same time.

Configure ThingWorx Publisher

Configuring the Federation Subsystem

1. Navigate to ThingWorx Composer > Subsystems > Federation Subsystems and configure the following sections:

- Server Identification, i.e. My Server name (FQDN / IP), My Server Description (optional)
- Federation subscribers this server publishes to, i.e. all the servers you want to publish to from this server

Refer to the Federation Subsystem doc in the Help Center for a detailed description of each configurable parameter.

2. Save the federation subsystem.

Configuring a Thing to be published

1. Navigate back to the Composer home page and select the entity which you'd like to publish.
2. In this case I'm using SteamSensor2, which was created to connect to the C SDK client.
3. To publish, edit the entity and click on the Publish checkbox.
4. Save the entity.

Configure ThingWorx Subscriber

Configuring the Federation Subsystem

1. Navigate to ThingWorx Composer > Subsystems > Federation Subsystems and configure the Server Identification, i.e. My Server name (FQDN / IP), My Server Description (optional). Refer to the Help Center doc on Federation Subsystems should you need more detail on the configurable parameters. If you only want to use this server as a subscriber of entities from the publishing ThingWorx server, you don't have to configure the section Federation subscribers this server publishes to; I've configured it here anyway to show that both servers can work as publishers and subscribers.

2. Save the federation subsystem.

Configuring a Thing to subscribe to a published Thing

1. Subscribing to an entity is fairly straightforward. I'll demonstrate by utilizing the C SDK client, which is currently pushing values to my remote thing called SteamSensor2 on server https://tw802neo:443/Thingworx.
2. I have already published SteamSensor2; see the section Configuring a Thing to be published above.
3. Create a Thing called FederatedSteamThing with RemoteThingWithTunnels as its ThingTemplate.
4. Browse for the Identifier and select the required entity to which binding must be done.
5. Navigate to the Properties section for the entity and click Manage Bindings to search for the remote properties and add them to this Thing.
6. Save the entity; we can then see the values that were pushed from the C SDK client.
7. Finally, we can also quickly see the values pulled via a mashup created on the subscriber ThingWorx; below is a simple mashup with a grid widget pulling values using the QueryPropertyHistory service.
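For reference, the grid in such a mashup is typically bound to a call like the following. A minimal sketch of fetching the subscriber-side history, assuming the FederatedSteamThing's properties are logged to the configured Value Stream (maxItems is an illustrative value):

// Pull the logged property history for the federated thing (run on the subscriber)
var history = Things["FederatedSteamThing"].QueryPropertyHistory({
    maxItems: 100,      // illustrative limit
    oldestFirst: false, // newest values first
    query: undefined
});

// 'history' is an infotable that can back a grid widget directly
var result = history;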
View full tip
Original Post Date: September 30, 2016

Description: This tutorial video will walk you through the installation process for the PostgreSQL-based version of the ThingWorx Platform (7.2) in a RHEL environment. All required software components will be covered in this video.
View full tip
In our interactions with PTC customers we often learn they have previously performed analytics modeling in Python, Matlab, R, or even built home-grown analyses in languages such as Java or C++. As expected, when adopting an Industrial Innovation Platform such as ThingWorx, which also has its own ThingWorx Analytics module, customers do not want to reimplement everything from scratch and would rather integrate their previous work into the smart applications built in ThingWorx, leveraging a combination of their existing toolset together with ThingWorx Analytics modeling. That is certainly possible, and there are multiple ways to do it. In this article we will focus on several general approaches, but it is important to keep in mind that language-specific approaches are also possible, and we are happy to discuss those in the specific context of the customer.

Here are five different ways to bring existing analytics into ThingWorx:

1. If the task is to reuse an existing predictive model developed in a language such as Python/R/Matlab, typically one can export that model in PMML (Predictive Model Markup Language), an XML format, and import it into ThingWorx Analytics using the AnalyticsServer_ResultsThing -> UploadModel service. Libraries such as sklearn2pmml & r2pmml can be utilized towards that goal. The imported model can then be used in the same fashion as a ThingWorx Analytics developed model to power smart applications built in ThingWorx. If the analysis involves more complex tasks than predictive modeling, such as custom data normalizations, non-standard machine learning models, or home-grown algorithms, one can use the options below.

2. Call the ThingWorx exposed REST Web API from Python/Matlab/R/Java/JavaScript. Every service in ThingWorx can be called that way, and the API can also be used to push analysis results into ThingWorx for further consumption, perhaps together with other sources of data such as sensor readings, in the smart applications built there. The documentation for the ThingWorx REST API can be found here. (A sketch of such a call appears after this list.)

3. Expose the existing analytics using a thin layer of REST Web Services. For example, in Python, this can be done using Flask with a few lines of code. The orchestration can then happen from ThingWorx by calling the exposed Web Service and weaving the results back into smart applications.

4. Often our customers' current architecture involves a relational database (e.g. SQL Server, Oracle, etc.) that is powering the existing analytics and stores the end results (predictions, correlations, etc.). In this scenario, we can connect ThingWorx directly to that database to read these results.

5. Finally, in the case of complex analytics where a tighter integration with ThingWorx is desired, existing analytics / algorithms can be wrapped into a ThingWorx Extension or an Analytics Provider using the corresponding PTC SDKs.

When choosing an integration option, customers need to carefully balance complexity of integration, constraints of their architecture, analytics modeling complexity, and end-user consumption requirements.
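As a minimal sketch of option 2 - pushing an externally computed result into ThingWorx over the REST API - the following Node.js snippet PUTs a value onto a Thing property. The host, application key, and Thing/property names are placeholder assumptions, not part of the article:

// Push an externally computed prediction into a ThingWorx property via REST.
// Host, appKey, Thing and property names are placeholders.
const https = require("https");

const body = JSON.stringify({ predictedFailureScore: 0.87 });

const req = https.request({
    host: "my-thingworx-host",
    path: "/Thingworx/Things/PumpAnalyticsThing/Properties/predictedFailureScore",
    method: "PUT",
    headers: {
        "Content-Type": "application/json",
        "appKey": "your-application-key" // ThingWorx application key
    }
}, (res) => console.log("Status:", res.statusCode));

req.on("error", (err) => console.error(err));
req.write(body);
req.end();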
View full tip
There's a reason why many "Hello World" tutorials begin with writing to the logging tool. Developers live in the console, inserting breakpoints and watching variables while debugging. Especially with interconnected, complex systems, logging becomes crucial to saving developer hours and shipping on time.

What this tutorial covers

This tutorial introduces the three principal methods of monitoring the status of assets and the output of operations on an Axeda instance:

- Audit Logs
- Custom Object Log
- Reports

The Audit Log

You can filter the Audit Log by date range or by category. A list of the Audit Categories is available in the Help documentation at http://<<yourhost>>.axeda.com/help/en/audit_log_categories_for_filtering...

You can write to the Audit Log from an Expression Rule or from a Custom Object.

Writing to the Audit Log from an Expression Rule

Use the Audit action to write to the Audit Log:

Audit("data-management", "The temperature " + DataItem.temp.value + " was reported at " + FormatDate(Now(), "yyyy/MM/dd hh:mm:ss"))

You can insert values from the Variables or Functions list by using the plus operator as a string concatenator.

Writing to the Audit Log from a Custom Object

import com.axeda.common.sdk.id.Identifier
import com.axeda.drm.sdk.Context
import com.axeda.drm.sdk.audit.AuditCategory
import com.axeda.drm.sdk.audit.AuditMessage

auditMessage(Context.getSDKContext(), "data_management", "Thread started timestamp has ended.", context.device.id)

private def auditMessage(Context CONTEXT, String category, String message, Identifier assetId) {
    AuditCategory auditCategory = AuditCategory.getByString(category) ?: AuditCategory.DATA_MANAGEMENT
    if (assetId == null) {
        new AuditMessage(CONTEXT, "com.axeda.drm.rules.functions.AuditLogAction", auditCategory, [message]).store()
    } else {
        new AuditMessage(CONTEXT, "com.axeda.drm.rules.functions.AuditLogAction", auditCategory, [message], assetId).store()
    }
}

In either case, a message written in the context of an Asset will be displayed on the Asset Dashboard (assuming the Audit module is enabled for the Asset Dashboard in the Model Preferences).

The Custom Object Log

The Configuration (6.1-6.5) / Manage (6.6+) tab provides access to the Custom Objects log when Custom Objects is selected from the View sub-menu. This link allows you to open or save a zip archive of text files called customobject.logX, where X is a digit indicating that the log has rolled over into the next file (i.e. customobject.log1). The most current is customobject.log, without a digit. These files contain logging information in chronological order, identified by Custom Object name. The log contains full stack traces of exceptions, as well as text written to the log:

ERROR 2013-06-20 18:26:02,613 [sstreBinaryReturn,ajp-10.16.70.164-8009-6] Exception occurred in sstreBinaryReturn.groovy: java.lang.NullPointerException
    at com.axeda.platform.sdk.v1.services.extobject.ExtendedObject.getPropertyByName(ExtendedObject.java:276)
    at com.axeda.platform.sdk.v1.services.extobject.ExtendedObject$getPropertyByName.call(Unknown Source)

The Logger object in Custom Objects is a custom class, ScriptoDebuggerLogger, that is injected into the script and does not need to be explicitly imported. The following methods are available on the Logger object:

logger.info()
logger.debug()
logger.error()

All objects can be converted to a String by using the dump() function.
logger.info(context.device.dump())

Additionally, a JSON utility (net.sf.json) can be used with all SDK v2 domain objects and some SDK v1 domain objects to get a pretty-printed JSON string of their attributes:

import net.sf.json.JSONArray

logger.info(JSONArray.fromObject(context.device).toString(2))

// Outputs:
[{
  "buildVersion": "",
  "condition": {
    "detail": "",
    "id": "3",
    "label": "",
    "restUrl": "",
    "systemId": "3"
  },
  "customer": {
    "detail": "",
    "id": "2",
    "label": "Default Organization",
    "restUrl": "",
    "systemId": "2"
  },
  "dateRegistered": {
    "date": 31,
    "day": 4,
    "hours": 18,
    "minutes": 39,
    "month": 0,
    "seconds": 31,
    "time": 1359657571070,
    "timezoneOffset": 0,
    "year": 113
  },
  "description": "",
  "detail": "mwc_location_1",
  "details": null,
  "gateways": [],
  "id": "12345",
  "label": "",
  "location": {
    "detail": "Default Organization",
    "id": "2",
    "label": "Default Location",
    "restUrl": "",
    "systemId": "2"
  },
  "model": {
    "detail": "mwc_location",
    "id": "4321",
    "label": "standalone",
    "restUrl": "",
    "systemId": "4321"
  },
  "name": "mwc_location_1",
  "pingRate": 10000,
  "properties": [],
  "restUrl": "",
  "serialNumber": "mwc_location_1",
  "sharedKey": [],
  "systemId": "12345",
  "timeZone": "America/New_York"
}]

Custom Object logs may be retrieved by navigating to the Configuration (6.1-6.5) / Manage (6.6+) tab and selecting Custom Objects from the View sub-menu. Click the "Log" button at the bottom of the table, save the archive, and then view customobject.log in a text editor.

Reports
Reports provide a summary of data about the state of objects on the Axeda Platform. Report titles are generally indicative of what they report, for example the Missing Devices Report, the Auditing Report, and the Users Report. A separate license is needed in order to use the Reports feature, and new report types can only be created by a Reports administrator. To run a report, click Run from the Reports tab. You can manage Reports from the Administration tab.

These three tools together offer a full view of the state of domain objects on the Axeda Platform. Make sure to take advantage of them while troubleshooting assets and applications.
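As the usage sketch promised above, the following fragment (our own illustration, not Axeda sample code) combines the injected Logger methods with exception handling inside a custom object; context.device is assumed to be in scope, as in the rule-triggered examples earlier in this tip.

try {
    // Log the asset name, then its full attribute dump at debug level
    logger.info("Processing asset: " + context.device.name)
    logger.debug(context.device.dump())
} catch (Exception e) {
    // The message, and the full stack trace, also land in customobject.log
    logger.error("Failed to inspect asset: " + e.message)
}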
View full tip
Background:
In very rare situations, the Firewall-Friendly Agent process may stop running. If the Agent is not running, no machine monitoring or communication with the Cloud Server is possible.

Recommendation:
WatchDog is a little-known yet very helpful feature available with Firewall-Friendly Agents. This program lets you monitor whether an Agent is running; if it is not, WatchDog can restart it. If the Agent process fails, WatchDog can bring it back up! WatchDog can also be configured to watch other processes.

You can configure WatchDog to run as a service (on Windows) or as a daemon (on Linux), and register it accordingly. The WatchDog configuration file specifies the process(es) to be monitored and what to do when one exits: the options are to attempt to restart the process or to restart the system. A purely illustrative sketch of such an entry appears at the end of this tip.

Note: WatchDog detects only whether a watched process has exited. It will not detect or report on processes that may be hanging.

Need more information?
For information about configuring and using WatchDog, see the Agent User's Guide for your Agent: either the Axeda® Platform Axeda® Gateway User's Guide (PDF) or the Axeda® Platform Axeda® Connector User's Guide (PDF).
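The actual directive names are defined in the Agent User's Guides cited above; the fragment below is a hypothetical illustration of the two decisions the configuration file encodes (which process to watch, and which action to take when it exits) and is not real WatchDog syntax.

# Hypothetical illustration only -- these keys are NOT the real WatchDog
# directives; consult the Gateway or Connector Agent User's Guide for the
# actual configuration syntax.
watched_process = Gateway           # name of the Agent process to monitor
on_exit_action  = RestartProcess    # alternative: RestartSystem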
View full tip
Sometimes M2M Assets should poll the platform only on demand, for example to avoid excessive data charges from chatty assets. A mechanism was developed that instructs the Asset to contact (poll) the platform for actions that the Asset needs to act on, such as File Uploads, Set DataItem, and so on. The Shoulder Tap SMS message is the platform's way of contacting the Asset, tapping it on the shoulder to let it know there is a message waiting for it. The Asset responds by polling for the waiting message. This implementation in the platform provides a way to configure the Model Profile that is responsible for sending an SMS Shoulder Tap message to an M2M Asset. The Model Profile contains model-wide instructions for how and when a Shoulder Tap message should be sent.

How does it work?
The M2M Asset is set not to poll the Axeda Platform for a long period, but the Enterprise user has actions that the Asset needs to act upon, such as FOTA (Firmware Over-the-Air):

1. A software package is deployed to the M2M Asset from the Axeda Platform and put into the egress queue.
2. The Shoulder Tap mechanism executes a Custom Object that sends a message to the Asset through a delivery method such as SMS or UDP.
3. The Asset's SMS (or other) handler receives the message, and the Asset then sends a POLL to the Platform and acts upon the action in the egress queue.

How do you make Shoulder Tap work for your M2M Assets?
The first step is to create a Model Profile; the Model Profile tells Assets of that model how to communicate. For example, if the Model Profile is Shoulder Tap, the platform will use the Shoulder Tap mechanism to contact Assets of that model. Execute the attached custom object, createSMSModelProfile.groovy, and it will create a Model Profile named "SMSModelProfile". When you create a new Model, "SMSModelProfile" will appear in the Communication Profile dropdown list.

The next step is to create the Custom Object transport script, which is responsible for sending the SMS (or other form of communication) to the Asset. In this example the custom object is named SMSCustomObject. Its contents are outside the scope of this article, but it could make REST API calls to Twilio, Jasper, or another wireless provider's REST API to communicate with the remote device using an SMS message. It could also use the IntegrationPublisher API to send a JMS message to a server the customer controls, which could then talk directly with custom libraries that are not directly compatible with the Axeda Platform. A hedged sketch of such a transport script appears at the end of this tip.

Once the Shoulder Tap scripting has been tested and is working correctly, you can send a Shoulder Tap to the Asset directly from an action or through an Extended UI Module, as shown below:

import com.axeda.platform.sdk.v1.services.ServiceFactory

final ServiceFactory sFact = new ServiceFactory()
def assetId = (Long) parameters.get("assetId")
def stapService = sFact.getShoulderTapService()  // call on the factory instance created above
stapService.sendShoulderTap(assetId)

See Extending the Axeda Platform UI - Custom Tabs and Modules for more about creating and configuring Extended UI Modules.

What about retries?
maxRetryCount - This built-in attribute defines the number of times the platform will retry sending the Shoulder Tap message before giving up.
retryInterval - The interval to wait before a message delivery is retried.
Retry count and interval are configured in the Model Profile Custom Object like so:

final DeliveryMethodDescriptor dmd = new DeliveryMethodDescriptor()
dmd.setMaxRetryCount(2)    // number of retries before the platform gives up
dmd.setRetryInterval(60)   // interval between retry attempts
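And here is the transport sketch promised above: a minimal, hypothetical illustration of what the core of SMSCustomObject might look like if it posted to a generic SMS provider's REST endpoint. The endpoint URL, credential, JSON field names, and the phoneNumber parameter key are all assumptions for illustration, not a real provider API.

import java.net.HttpURLConnection

// Hypothetical transport: POST an SMS request to a generic provider endpoint.
def phone = parameters.get("phoneNumber")   // hypothetical input parameter
def url = new URL("https://sms-provider.example.com/v1/messages")   // placeholder endpoint
def conn = (HttpURLConnection) url.openConnection()
conn.setRequestMethod("POST")
conn.setRequestProperty("Authorization", "Bearer YOUR_TOKEN")   // placeholder credential
conn.setRequestProperty("Content-Type", "application/json")
conn.doOutput = true
// The body just needs to wake the Asset; "POLL" here is an illustrative payload
conn.outputStream.withWriter("UTF-8") { it << '{"to": "' + phone + '", "body": "POLL"}' }
logger.info("SMS gateway responded: HTTP " + conn.responseCode)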
View full tip