IoT Tips

Hi Community,

I've recently had a number of questions from colleagues about architectures involving MQTT and what our preferred approach is. After some internal verification, I wanted to share an aggregate of my findings with the ThingWorx Architect and Developer Community.

PTC currently supports three methods for integrating MQTT into IoT projects:
- ThingWorx MQTT Extension
- ThingWorx Azure IoT Hub Connector
- ThingWorx Kepware Server

Choice is nice, but it adds complexity and sometimes confusion. The intent of this article is to clarify the options and provide direction to help others choose the path best suited for their situation.

ThingWorx MQTT Extension
The ThingWorx MQTT extension has been available on the Marketplace as an unsupported "PTC Labs" extension for a number of years. Recently its status was upgraded to "PTC Supported," and it has received some attention from R&D in the form of bug fixes and security enhancements. Most people who have used MQTT with ThingWorx are familiar with this extension. As with anything, it has advantages and disadvantages. You can easily import the extension without having administrative access to the machine, it is easy to move around and store with projects, and it can be up and running quite quickly. However, it is also quite limited when it comes to the flexibility required when building a production application, it is tied directly to the core platform, and it does not receive feature or functionality updates.

The MQTT extension is a good choice for PoCs, demos, benchmarks, and prototypes, as it provides MQTT integration relatively quickly and easily. Because it runs with the core platform, it is not a good choice as part of a client or enterprise application where MQTT communication reliability is critical.

ThingWorx Azure IoT Hub Connector
Although Azure IoT Hub is not a fully functional MQTT broker, Azure IoT does support MQTT endpoints on both IoT Hub and IoT Edge. This can be an interesting option: MQTT devices publish to Azure IoT and are integrated with ThingWorx using the Azure IoT Hub Connector, without actually requiring an MQTT broker to run and be maintained. The Azure IoT Hub Connector works similarly to the Protocol Adapter Toolkit (PAT) and is built on the Connection Server, but adds the device management and security provided by Azure IoT.

When Azure IoT Edge is configured as a transparent gateway with buffering (store and forward) enabled, this approach has the added benefit of buffering MQTT device messages at a remote site, so Internet interruptions can be handled without losing data.

This approach also offers far greater integrated security capabilities by leveraging certificates and tying into Azure Key Vault, and the resources receiving the MQTT messages (IoT Hub and the Azure IoT Hub Connector) can easily be scaled up. Because this approach is built on the Connection Server core, it also follows our deployment guidance of processing communications outside of the core platform (unlike the extension approach).

ThingWorx Kepware Server
As some will note, Kepware has some pretty awesome MQTT capabilities, both as northbound and southbound interfaces. The MQTT Client driver allows creating an MQTT channel to devices communicating via MQTT, with auto-tag creation from the MQTT payload. Coupled with the native ThingWorx AlwaysOn connection, you can easily connect Kepware to an on-premise MQTT broker and connect these devices to ThingWorx over AlwaysOn.

The IoT Gateway plug-in has an MQTT agent which allows publishing data from all of your Kepware-connected devices to an MQTT broker or endpoint. The MQTT agent can also receive tag updates on a different topic and write back to the controllers. We've used this MQTT agent to connect industrial control system data to ThingWorx through cloud platforms like Azure IoT and AWS, and through communications providers.

ThingWorx Product Segment Direction
A key factor in deciding how to design your solution should be alignment with our product development direction. The ThingWorx Product Management and R&D teams have for years focused on scalable and enterprise-ready approaches that our partners and customers can build upon. I mention this to make it clear that not all supported approaches carry the same weight. Although we do support the MQTT extension, it is not in active development, because out-of-platform, microservices-based communication interfaces are our direction forward.

The Azure IoT Hub Connector, being built on the Connection Server, is currently the way forward for MQTT communications to ThingWorx Foundation.

Regards,

Greg Eva
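Whichever of these paths you choose, the device side of the conversation looks the same: a client publishes a payload to a topic on a broker or endpoint. As a purely illustrative sketch (the broker address, topic name and payload fields are made up for this example, and each of the three approaches above expects its own topic and payload conventions), a test publish with the Mosquitto command-line client might look like this:

    # Illustrative only: publish a sample JSON telemetry message to a local broker
    mosquitto_pub -h localhost -p 1883 \
      -t "factory/line1/pump42/telemetry" \
      -m '{"temp": 56.3, "pressure": 2.1}' \
      -q 1

Whatever subscribes to that topic (the MQTT extension, a Kepware MQTT Client channel, or an Azure IoT ingestion path) then carries the value into ThingWorx.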
Everywhere in the ThingWorx Platform (even the edge and extensions) you see the data structure called InfoTables. What are they? They are used to return data from services, map values in mashups, and move information around the platform. What they are is very simple; how they are set up and used is also simple, but there are a lot of ways to manipulate them. Simply put, InfoTables are JSON data, that is all. However, they use a standard structure that the platform can recognize and use.

There are two pieces to an InfoTable: the DataShape definition and the rows array. The DataShape is the definition of each row value in the rows array. This is not accessible directly in service code, but there are functions and structures to manipulate it in services if needed.

Example InfoTable definition and values:

    {
        dataShape: {
            fieldDefinitions: {
                ColOneName: { name: "ColOneName", baseType: "STRING" },
                ColTwoName: { name: "ColTwoName", baseType: "NUMBER" }
            }
        },
        rows: [
            { ColOneName: "FirstValue", ColTwoName: 13 },
            { ColOneName: "SecondValue", ColTwoName: 14 }
        ]
    }

So you can see that the dataShape value is made up of a group of JSON objects under the fieldDefinitions element. Each field has a "name" element, which of course defines the field name, and a "baseType" element, which is the ThingWorx primitive type of the named field. Typically this structure is created automatically by using a DataShape object that is defined in the platform. This is also the reason DataShapes need to be defined: so that fields can be defined not only for InfoTables, but also for DataTables and Streams. This is how Mashups know what the structure of the data is when creating bindings to widgets, and how other parts of the platform can display data in a structured format.

The other part is the "rows" element, which contains an array of JSON objects holding the actual data in the InfoTable.

Accessing the values in the rows is as simple as using standard JavaScript syntax for JSON. To access the number in the first row of the InfoTable referenced above (if the name of the InfoTable variable is "MyInfoTable"), use MyInfoTable.rows[0].ColTwoName. This returns a value of 13. As you can note, the JSON array index starts at zero.

Looping through an InfoTable in service script is also very simple. You can use the index in a standard "for" loop, but a slightly cleaner way is to use a "for each" loop like this:

    for each (row in MyInfoTable.rows) {
        var colOneVal = row.ColOneName;
        ...
    }

It is important to note that many base services in the platform have an output of the InfoTable type, and that most of these have system-defined DataShapes built into the platform (such as QueryDataTableEntries, GetImplementingThings, QueryNumberPropertyHistory and many, many more). Also, all service results from query services accessing external databases are returned in the structure of an InfoTable.

Manipulating an InfoTable in script is easy using various functions built into the platform. Many of these can be found in the "Snippets" tab of the service editor in Composer, in both the InfoTableFunctions Resource and the InfoTable Code Snippets. Some of my favorites and most commonly used:

Create a blank InfoTable:

    var params = {
        infoTableName: "MyTable"
    };
    var MyInfoTable = Resources["InfoTableFunctions"].CreateInfoTable(params);

Add a new field to any InfoTable:

    MyInfoTable.AddField({name: "ColNameThree", baseType: "BOOLEAN"});

Delete a field:

    MyInfoTable.RemoveField("ColNameThree");

Add a data row:

    MyInfoTable.AddRow({ColOneName: "NewRowValue", ColTwoName: 15});

Delete one or more data rows matching the values defined (note that you can define multiple fields in this statement):

    //delete all rows that have a value of 13 in ColTwoName
    MyInfoTable.Delete({ColTwoName: 13});

Create an InfoTable using a predefined DataShape:

    var params = {
        infoTableName: "MyInfoTable",
        dataShapeName: "dataShapeName"
    };
    var MyInfoTable = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);

There are many more functions built into the platform, including ones to filter, sort and query rows. These can be extremely useful when trying to return limited or more strictly structured InfoTable data. Hopefully this gives you a better understanding and use of this critical part of the ThingWorx Platform.
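To tie the snippets above together, here is a minimal, illustrative service body. The DataShape name, field names and threshold are assumptions made up for this example; any DataShape with matching field names and base types would work the same way.

    // Sketch: build an InfoTable from a hypothetical DataShape "SensorReadingDS"
    // (fields: SensorName STRING, Temp NUMBER), add rows, then loop over them.
    var params = {
        infoTableName: "SensorReadings",
        dataShapeName: "SensorReadingDS"   // assumed to exist on the platform
    };
    var readings = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);

    // Populate the table
    readings.AddRow({ SensorName: "Pump42", Temp: 56.3 });
    readings.AddRow({ SensorName: "Pump43", Temp: 61.8 });

    // Loop through the rows and collect names above an arbitrary threshold
    var hotSensors = "";
    for each (var row in readings.rows) {
        if (row.Temp > 60) {
            hotSensors += row.SensorName + " ";
        }
    }

    // Assign the table to the service output (assumes the result baseType is INFOTABLE)
    var result = readings;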
Hi everyone,

Maybe you got my email, I just wanted to post this also here.

I built a CSS generator for a few widgets, 8 at the moment. This is on a cloud instance accessible by anyone.

Advantages:
- you don't have to style buttons and apply style definitions over and over again
- greater flexibility in styling the widget
- you don't have to write any CSS code

The way this works: you use the configurator to style the widget as you want. Use also a class to define what that widget is, or how it's styled, for example "primary-btn", "secondary-btn", etc. Copy the generated CSS code into the CustomCSS tab in ThingWorx and, on the widget, put the CustomClass specified in the configurator.

As a best practice, I'd recommend placing all your CSS code into the Master mashup. Then all your mashups that use that master will also get the CustomCSS, so the only thing you have to do to your widgets in the mashups is fill the CustomClass property with the desired generated style. Also, comment your differently styled widgets by separating them with /* My red button */ for example.

The mashups for this won't be released, this will only be offered as a service. As you'll see in the configurator, they are not that pretty, the main goal was functionality.

Here is the link to the configurator: https://pp-18121912279c.portal.ptc.io/Thingworx/Runtime/index.html#master=CSSMaster&mashup=ButtonVariables
User: guest
Password: guest123123

Give it a go and have fun! 🙂

NOTE: I will add more widgets to this in the future and will not take any requests in making it for a specific widget, I make these based on usage and styling capabilities.
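For anyone curious what such generated output can look like before trying the configurator, here is a purely illustrative fragment (the class name, inner selectors and colors are made up, and the exact selectors depend on the widget's generated markup). Rules like these would go into the CustomCSS tab, with the widget's CustomClass property set to primary-btn:

    /* My primary button (illustrative only) */
    .primary-btn button {
        background-color: #0066b3;
        color: #ffffff;
        border-radius: 4px;
    }
    .primary-btn button:hover {
        background-color: #004f8a;
    }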
Exciting news! ThingWorx now has improved support for Docker containers to help you manage CI/CD, improve development efficiency in your organization and save costs. Check out these FAQs below and, as always, reach out to me if you have any additional questions.

Stay connected,
Kaya

FAQs: ThingWorx Docker Containers

What are Docker containers?
From Docker.com: "a Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings". Learn more here.

What's the difference between Docker containers and VMs?
Containers are an abstraction at the app layer that packages code and dependencies together, whereas Virtual Machines (VMs) are an abstraction of physical hardware, turning one server into many servers. There are some great discussions on this on Stack Overflow: Containers vs. VMs

How can I build ThingWorx Docker images?
Check out the Building ThingWorx 8.3 Docker Images Guide or watch this video on how to build and test Docker containers. (view in My Videos)

How does PTC support building ThingWorx Docker images?
PTC provides the ability for customers and partners to build ThingWorx Docker images. A customer can download the Dockerfiles and scripts, packaged as a zip folder, from the PTC Software Downloads Portal under "ThingWorx Platform," then "Release 8.3," then "ThingWorx Dockerfiles." (Please note that you must be logged in for the link to function properly.) The zip folder contains the Dockerfiles, the template jar, and scripts to fetch Tomcat and the ThingWorx WAR files using the CLI. Java must be downloaded manually from the vendor's website. We also provide an instructional guide called "Build ThingWorx Docker Images," available on the Reference Documents page on the Support Portal.

How are ThingWorx Docker images different from the usual delivery media of WAR files?
The WAR file delivery is typically accompanied by an installation guide that contains the manual steps for creating the VM or bare-metal environment. That guide includes instructions for the administrator to manually install the prerequisites, including Tomcat, Java, and the ThingWorx platform settings files. To deploy and run the WAR file, the administrator follows the guide to create the runtime environment on an OS. In contrast, the Dockerfile build in this delivery automates the creation of a Docker image once supplied with the prerequisites.

Do you have any reference deployment and guidance?
Yes, you can refer to our blog post to learn how to deploy and run ThingWorx Docker containers on your existing Kubernetes environment.

Is there any recommendation on which Container Orchestrator as a Service (CaaS) a customer should run ThingWorx Foundation Docker container images on?
You can use Docker Compose for testing, but it is generally not suggested for production deployment use cases. In a production environment, customers should use container orchestrators such as Kubernetes, OpenShift, Azure Kubernetes Service (AKS), or Amazon Elastic Container Service for Kubernetes (Amazon EKS) to deploy and manage ThingWorx Docker images.

What are the skill sets required?
Familiarity with the OS CLI and Docker tools is required to build the ThingWorx Docker images. Familiarity with Docker Compose is needed to run the resulting containers and test the builds. We don't recommend Docker Compose for production use, but when using it for local testing and demo purposes, users can rapidly install ThingWorx and get it up and running in minutes. We expect PTC partners and customers who want to run ThingWorx containerized instances in their production environment to possess the required skill sets within their DevOps team.

How is ThingWorx licensing handled with the Docker images?
By default, the container created from these Docker images starts up in a limited mode with no license supplied. You can configure your username and password for the PTC licensing portal to automatically load a license via environment variables passed into the container on startup. Additionally, you can mount a volume to the /ThingworxPlatform directory, which contains your license file, or use it to retrieve a license request. To keep your Host ID consistent, ensure that the /ThingworxStorage and /ThingworxPlatform directories are persisted and not removed with individual container restarts (see the sketch at the end of this FAQ). More detailed instructions can be found in the build guide or in a Kubernetes blog post.

Is Docker free? What version of Docker does PTC support for ThingWorx?
Docker is open source and licensed under the Apache 2 license. Information on Docker licensing can be found here. The following Docker versions are required:
- Docker Community Edition (docker-ce): Version 18.05.0-ce is recommended. To install the Docker Community Edition on your system, follow the instructions for your operating system on the Docker website.
- Docker Compose (docker-compose): Version 1.17.1 is recommended. To install Docker Compose on your system, follow the instructions for your operating system on the Docker website.

What persistence providers are currently supported?
PTC provides the ability to build ThingWorx Foundation containers for the following supported persistence providers:
- H2
- Microsoft SQL Server
- PostgreSQL
Additional persistence providers will be added to the Docker build delivery as the ThingWorx Foundation Platform releases support for those new databases in future releases.

What are some of the security best practices?
For production use, customers are strongly advised to secure their Docker environments by following all the recommendations provided by Docker. Review and implement the best practices detailed at https://docs.docker.com/engine/security/security/.

Can we build Docker images for ThingWorx High Availability (HA) architecture?
Yes. ThingWorx Dockerfiles are provided for both the basic ThingWorx deployment architecture and the HA ThingWorx deployment architecture.

How easy is the rehosting and upgrading of ThingWorx releases on Docker with existing data?
In a Kubernetes environment, data is kept in a separate volume and can be attached to different containers. When one container dies, the data can be attached to a different container and the container should start without issue. For more information, please refer to the upgrade section of the Building ThingWorx 8.3 Docker Images Guide.

Is it okay to use Docker exec and access the bash shell to make config changes, or should I always rebuild the image and re-deploy?
Although using Docker exec to gain access to the container internals is useful for testing and troubleshooting issues, any changes made will not be saved after a container is stopped. To configure a container's environment, variables are passed in during the start process. This can be done with Docker start commands, using compose files with environment variables defined, or with Helm charts. More detailed instructions can be found in the build guide or in this blog post.

What if there are issues? Should I call PTC Technical Support?
We are providing the scripts and reference documents solely to empower our community to build ThingWorx Docker images. We believe that customers using Docker in their production processes have the expertise to manage running Docker containers themselves. If there are any issues or questions regarding the build scripts provided in the PTC official downloads portal, customers can contact PTC Technical Support at 1-800-477-6435 or visit us online at http://support.ptc.com. PTC does not provide support for orchestration troubleshooting.

What can you share about future roadmap plans?
As we are enabling our customers and partners to build ThingWorx Foundation Platform Docker images, we plan to do the same for upcoming products such as ThingWorx Integration & Orchestration and ThingWorx Analytics, and for upcoming persistence providers such as InfluxDB. We also plan to provide additional reference architecture examples and use cases to help developers understand how to use Docker containers in their DevOps and production environments.

Where can I learn more about Docker containers and container orchestrators?
See these resources for additional information:
https://training.docker.com/
https://kubernetes.io/docs/tutorials/online-training/overview/
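As a quick illustration of the licensing and persistence advice above, here is a minimal sketch of starting a locally built image with the two directories mounted from the host. The image name, host paths and port mapping are assumptions made for this example; your own build output and the build guide take precedence.

    # Illustrative only: persist /ThingworxPlatform and /ThingworxStorage across restarts
    docker run -d --name thingworx-foundation \
      -p 8080:8080 \
      -v /opt/tw/ThingworxPlatform:/ThingworxPlatform \
      -v /opt/tw/ThingworxStorage:/ThingworxStorage \
      my-registry/thingworx-foundation:8.3-h2

Because both directories live on the host, the Host ID and license file survive container restarts and upgrades.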
I have created a mashup which allows you to easily use and test the Prescriptions functionality in ThingWorx Analytics (TWA). This is where you choose one or more fields for optimization, and TWA tells you how to adjust those fields to get an optimal outcome.

The functionality is based on a public sample dataset for concrete mixtures; full details are included in the attached documentation.
The Protocol Adapter Toolkit (PAT) is an SDK that allows developers to write a custom Connector that enables edge devices to connect to and communicate with the ThingWorx Platform.

In this blog, I will be dabbling with the MQTT Sample Project that uses the MQTT Channel introduced in PAT 1.1.2.

Preamble
All the PAT sample projects are documented in detail in their respective README.md. This post is an illustrated walk-through for the MQTT Sample project; please review its README.md for in-depth information. More reading in Protocol Adapter Toolkit (PAT) overview. PAT 1.1.2 is supported with ThingWorx Platform 8.0 and 8.1 - not fully supported with 8.2 yet.

MQTT Connector features
The MQTT Sample project provides a Codec implementation that services MQTT requests and a command-line MQTT client to test the Connector.

The sample MQTT Codec handles Edge-initiated requests:
- read a property from the ThingWorx Platform
- write a property to the ThingWorx Platform
- execute a service on the ThingWorx Platform
- send an event to the ThingWorx Platform (also uses a ServiceEntityNameMapper to map an edgeId to an entityName)
All these actions require a security token that is validated by a Platform service via an InvokeServiceAuthenticator.

This Codec also handles Platform-initiated requests (egress messages):
- write a property to the Edge device
- execute a service without response on the Edge device

Components and Terminology
MQTT messages originated from the Edge Device (inbound) are published to the sample MQTT topic. MQTT messages originated from the Connector (outbound) are published to the mqtt/outbound MQTT topic.

Codec: A pluggable component that interprets Edge Messages and converts them to ThingWorx Platform Messages to enable interoperability between an Edge Device and the ThingWorx Platform.
Connector: A running instance of a Protocol Adapter created using the Protocol Adapter Toolkit.
Edge Device: The device that exists external to the Connector which may produce and/or consume Edge Messages.
(mqtt) Edge Message: A data structure used for communication defined by the Edge Protocol. An Edge Message may include routing information for the Channel and a payload for the Codec. Edge Messages originate from the Edge Device (inbound) as well as the Codec (outbound).
(mqtt) Channel: The specific mechanism used to transmit Edge Messages to and from Edge Devices. The Protocol Adapter Toolkit currently includes support for HTTP, WebSocket, MQTT, and custom Channels. A Channel takes the data off of the network and produces an Edge Message for consumption by the Codec, and takes Edge Messages produced by the Codec and places the message payload data back onto the network.
Platform Connection: The connection layer between a Connector and the ThingWorx core.
Platform Message: The abstract representation of a message destined for and coming from the ThingWorx Platform (e.g. WriteProperty, InvokeService). Platform Messages are produced by the Codec for incoming messages and provided to the Codec for outgoing messages/responses.
Installation and Build

Protocol Adapter Toolkit installation
The media is available from PTC Software Downloads: ThingWorx Connection Server > Release 8.2 > ThingWorx Protocol Adapter Toolkit. Just unzip the media on your filesystem; there is no installer. The MQTT Sample Project is available in <protocol-adapter-toolkit>\samples\mqtt.

Eclipse Project setup
Prerequisites: Eclipse IDE (I'm using the Neon.3 release) and an Eclipse Gradle extension - for example the Gradle IDE Pack available in the Eclipse Marketplace.
Import the MQTT Project: File > Import > Gradle (STS) > Gradle (STS) Project. Browse to <protocol-adapter-toolkit>\samples\mqtt, then [Build Model] and select the mqtt project.

Review the sample MQTT codec and test client
Connector: mqtt > src/main/java > com.thingworx.connector.sdk.samples.codec.MqttSampleCodec
- decode: converts an MqttEdgeMessage to a PlatformRequest
- encode (3 flavors): converts a PlatformMessage, an InfoTable or a Throwable to an MqttEdgeMessage
Note that most of the conversion logic is common to all sample projects (websocket, rest, mqtt) and is done in a helper class: SampleProtocol. The SampleProtocol sources are available in the <protocol-adapter-toolkit>\samples\connector-api-sample-protocol project - it can be imported into Eclipse the same way as the mqtt project. SampleTokenAuthenticator and SampleEntityNameMapper are also defined in the <protocol-adapter-toolkit>\samples\connector-api-sample-protocol project.
Client: mqtt > src/client/java > com.thingworx.connector.sdk.samples.MqttClient
A command-line MQTT client based on Eclipse Paho that allows testing edge-initiated and platform-initiated requests.

Build the sample MQTT Connector and test client
Select the mqtt project, then RMB > Gradle (STS) > Task Quick Launcher > type Clean build + [enter].
This creates a distributable archive (zip+tar) in <protocol-adapter-toolkit>\samples\mqtt\build\distributions that packages the sample mqtt connector, some startup scripts, an xml with sample entities to import on the platform and a sample connector.conf. Note that I will test the connector and the client directly from Eclipse, and will not use this package.

Runtime configuration and setup

MQTT broker
I'm just using a Mosquitto broker Docker image from Docker Hub:

    docker run -d -p 1883:1883 --name mqtt ncarlier/mqtt

ThingWorx Platform appKey and ConnectionServicesExtension
From the ThingWorx Composer:
- Create an Application Key for your Connector (remember to increase the expiration date - to make it simple I bind it to Administrator)
- Import the ConnectionServicesExtension-x.y.z.zip and pat-extension-x.y.z.zip extensions available in <protocol-adapter-toolkit>\requiredExtensions

Connector configuration
Edit <protocol-adapter-toolkit>\samples\mqtt\src\main\dist\connector.conf and update the highlighted entries below to match your configuration:

    include "application"
    cx-server {
      connector {
        active-channel = "mqtt"
        bind-on-first-communication = true
        channel.mqtt {
          broker-urls = [ "tcp://localhost:1883" ]
          // at least one subscription must be defined
          subscriptions {
            "sample": [ "com.thingworx.connector.sdk.samples.codec.MqttSampleCodec", 1 ]
          }
          outbound-codec-class = "com.thingworx.connector.sdk.samples.codec.MqttSampleCodec"
        }
      }
      transport.websockets {
        app-key = "00000000-0000-0000-0000-000000000000"
        platforms = "wss://thingWorxServer:8443/Thingworx/WS"
      }
      // Health check service default port (9009) was in use on my machine.
      // Added the following block to change it.
      health-check {
        port = 9010
      }
    }

Start the Connector
Run the Connector directly from Eclipse using the Gradle Task: RMB > Run As ... > Gradle (STS) Build.
(Alternate technique) Debug as Java Application from Eclipse: select the mqtt project, then Run > Debug Configurations ...
- Name: mqtt-connector
- Main class: com.thingworx.connectionserver.ConnectionServer
- On the Arguments tab add a VM argument: -Dconfig.file=<protocol-adapter-toolkit>\samples\mqtt\src\main\dist\connector.conf
- Select [Debug]

Verify connection to the Platform
From the ThingWorx Composer, Monitoring > Connection Servers: verify that a Connection Server with the name protocol-adapter-cxserver-<uuid> is listed.

Testing

Import the ThingWorx Platform sample Things
From the ThingWorx Composer, Import/Export > From File: <protocol-adapter-toolkit>\samples\mqtt\src\main\dist\SampleEntities.xml
Verify that WeatherThing, EntityNameConverter and EdgeTokenAuthenticator have been imported.
- WeatherThing: RemoteThing that is used to test our Connector
- EdgeTokenAuthenticator: holds a sample service (ValidateToken) used to validate the security token provided by the Edge device
- EntityNameConverter: holds a sample service (GetEntityName) used to map an edgeId to an entityName

Start the test MQTT client
I will run the test client directly from Eclipse: select the mqtt project, then Run > Run Configurations ...
- Name: mqtt-client
- Main class: com.thingworx.connector.sdk.samples.MqttClient
- On the Arguments tab add a Program argument: tcp://<mqtt_broker_host>:1883
- Select [Run]
Type the client commands in the Eclipse Console.

Test Edge initiated requests

Read a property from the ThingWorx Platform
In the MQTT client console enter: readProperty WeatherThing temp

    Sending message: {"propertyName":"temp","requestId":1,"authToken":"token1234","action":"readProperty","deviceId":"WeatherThing"} to topic: sample
    Received message: {"temp":56.3,"requestId":1} from topic: mqtt/outbound

Notes:
- An authToken is sent with the request; it is validated by a platform service using the SampleTokenAuthenticator (this authenticator is common to all the PAT samples and is defined in <protocol-adapter-toolkit>\samples\connector-api-sample-protocol)
- EntityNameMapper is not used by readProperty (no special reason for that)
- The PlatformRequest message built by the codec is ReadPropertyMessage

Write a property to the ThingWorx Platform
In the MQTT client console enter: writeProperty WeatherThing temp 20

    Sending message: {"temp":"20","propertyName":"temp","requestId":2,"authToken":"token1234","action":"writeProperty","deviceId":"WeatherThing"} to topic: sample

Notes:
- An authToken is sent with the request; it is validated by a platform service using the SampleTokenAuthenticator
- EntityNameMapper is not used by writeProperty
- The PlatformRequest message built by the codec is WritePropertyMessage
- No Edge message is sent back to the device

Send an event to the ThingWorx Platform
In the MQTT client console enter: fireEvent Weather WeatherEvent SomeDescription

    Sending message: {"requestId":5,"authToken":"token1234","action":"fireEvent","eventName":"WeatherEvent","message":"Some description","deviceId":"Weather"} to topic: sample

Notes:
- An authToken is sent with the request; it is validated by a platform service using the SampleTokenAuthenticator
- fireEvent uses an EntityNameMapper (SampleEntityNameMapper) to map the deviceId (Weather) to a Thing name (WeatherThing); the mapping is done by a platform service
- The PlatformRequest message built by the codec is FireEventMessage
- No Edge message is sent back to the device

Execute a service on the ThingWorx Platform
... can be tested with the GetAverageTemperature service on WeatherThing ...

Test Platform initiated requests

Write a property to the Edge device
The MQTT Connector must be configured to bind the Thing with the Platform when the first message is received for the Thing. This was done by setting bind-on-first-communication=true in connector.conf. When a Thing is bound, the remote egress messages will be forwarded to the Connector. The Edge initiated requests above should have done the binding, but if the Connector was restarted since, just bind again with: readProperty WeatherThing isConnected
From the ThingWorx Composer, update the temp property value on WeatherThing to 30. An egress message is logged in the MQTT client console:

    Received message: {"egressMessages":[{"propertyName":"temp","propertyValue":30,"type":"PROPERTY"}]} from topic: mqtt/outbound

Execute a service on the Edge device
... can be tested with the SetNtpService on WeatherThing ...
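If you would rather exercise the Connector without the Eclipse test client, the same messages can be sent with any MQTT client. Here is an illustrative sketch with the Mosquitto command-line tools, assuming the broker started with the docker run command above is on localhost; the payload mirrors the readProperty message shown earlier, and mqtt/outbound is the topic the sample Codec publishes responses to.

    # Watch the Connector's outbound topic in one terminal
    mosquitto_sub -h localhost -p 1883 -t mqtt/outbound

    # In another terminal, publish a readProperty request to the "sample" topic
    mosquitto_pub -h localhost -p 1883 -t sample \
      -m '{"propertyName":"temp","requestId":1,"authToken":"token1234","action":"readProperty","deviceId":"WeatherThing"}'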
In this blog I will be testing the SAPODataConnector using the SAP Gateway - Demo Consumption System.

Overview

The SAPODataConnector enables the connection to the SAP Netweaver Gateway through the OData specification. It is a specialized implementation of the ODataConnector. See Integration Connectors for documentation.

It relies on three components:
- Integration Runtime: a microservice that runs outside of ThingWorx and has to be deployed separately; it uses Web Socket to communicate with the ThingWorx platform (similar to EMS)
- Integration Subsystem: available by default since 7.4 (no extension needed)
- Integration Connector: SAPODataConnector, available by default in 8.0 (no extension needed)

ThingWorx can use OAuth to access SAP, but in this blog I will just use basic authentication.

SAP Netweaver Gateway Demo system registration
1. Create an account on the Gateway Demo system (credentials to be used on the connector are sent by email)
2. Verify that the account has access to the basic OData sample service: https://sapes4.sapdevcenter.com/sap/opu/odata/IWBEP/GWSAMPLE_BASIC/

Integration Runtime microservice setup
1. Follow WindchillSwaggerConnector hands-on (7.4) - Integration Runtime microservice setup
Note: Only one Integration Runtime instance is required for all your Integration Connectors (multiple instances are supported for High Availability and scale).

SAPODataConnector setup
Use the New Composer UI (some settings, such as API maps, are not available in the ThingWorx legacy composer).
1. Create a DataShape that is used to map the attributes being retrieved from SAP. SAPObjectDS: Id (STRING), Name (STRING), Price (NUMBER)
2. Create a Thing named TestSAPConnector that uses SAPODataConnector as its thing template
3. Set up the SAP Netweaver Gateway connection under TestSAPConnector > Configuration
   Generic Connector Connection Settings: Authentication Type = fixed
   HTTP Connector Connection Settings:
   Username = <SAP Gateway user>
   Password = <SAP Gateway pwd>
   Base URL: https://sapes4.sapdevcenter.com/sap
   Relative URL: /opu/odata/IWBEP/GWSAMPLE_BASIC/
   Connection URL: /opu/odata/IWBEP/GWSAMPLE_BASIC/$metadata
4. Create the API maps and service under TestSAPConnector > API Maps (New Composer only)
   Mapping ID: sap
   EndPoint: getProductSet
   Select DataShape: SAPObjectDS (created at step 1) and map the following attributes:
   Name <- Name
   Id <- ProductID
   Price <- Price
   Pick "Create a Service from this mapping"

Testing our Connector
Test the TestSAPConnector::getProductSet service (keep all the input parameters blank).
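Beyond testing it in Composer, the generated service can be called from any other ThingWorx service like a normal Thing service. A minimal sketch follows; the price threshold and output handling are made up for illustration, while getProductSet is the service created from the API map above and returns an InfoTable shaped by SAPObjectDS.

    // Call the service generated from the "sap" API map and work with the resulting InfoTable
    var products = Things["TestSAPConnector"].getProductSet();

    // Collect the products above an arbitrary price threshold
    var expensive = "";
    for each (var row in products.rows) {
        if (row.Price > 100) {
            expensive += row.Id + ": " + row.Name + "\n";
        }
    }
    var result = expensive; // assumes this service's result baseType is STRING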
Internationalization and Localization Internationalization (often abbreviated I18N – from "I" + 18 more letters + "n") is the process of developing software that supports many languages, including those with non-Latin character sets. Localization (L10N) refers to developing applications that can be delivered in many languages, relying on the underlying architecture of I18N. This how-to article focuses mostly on localization, since the infrastructure is in place and stable. Create a Localization Table You create a Localization Table entity when you need to add support for another language to the application you're developing. Someone from Sales has said "There's an opportunity if we can deliver the Spiffy application in Estonian." This suggests that an Estonian-speaking end user should be able to run Spiffy and see all of its labels, messages, prompts, dialogs, and so on in Estonian. Most of the cost of adding Estonian language support is in a (usually contracted) service that does the English-to-Estonian (or whatever target language) translations. Such services employ native speakers who can get the nuances of translation correct. See Tips for translators below for suggestions on improving the accuracy of the translation. In Composer, view the Localization Tables list. Begin by duplicating an existing table (e.g. check Default or another language and click Duplicate) or by clicking New. A new tab will open with a New Localization Table in edit mode. The fields shown are: Locale (required). This is the official language tag of the new language. Language tags are defined by an Internet standard, IETF BCP 47. Briefly, they consist of a standard abbreviation for a language (e.g. en for English, de for German), followed optionally by a script subtag (e.g. Cyrl for Cyrilic), followed optionally by a region code (a country code, such as CH for Switzerland or HK for Hong Kong, or a U.N. region number), followed optionally by other qualifiers such as dialect. A simple example is es, Spanish. A complex one is sl-Latn-IT-nedis, Slovenian rendered in Latin characters as spoken in Italy in the Natisone dialect. Software rarely needs such highly specific language tags; the most specific practical examples are the various scripts and regions for Chinese (e.g. zh-Hans-CN, zh-Hant-TW). Language Name (Native) (required). This is the name of the language as written in that language, such that it would be readable by a native speaker. For example, 日本語 for Japanese, ਪੰਜਾਬੀ ਦੇ for Punjabi, or Deutsche for German. Language Name (Common). This is the name of the language as written in a common administrative language. For an application delivered internationally, English is probably a safe choice. Administrators at a customer site might change these to be in the language of the headquarters country. Description. Free form text describing the language. This will appear to end-users as a tooltip as they hover over language choices. Tags. Standard ThingWorx entity tags. Home Mashup. Does not apply. Avatar. An icon for this language. The default is . No other icons are delivered as standard, but language selection interfaces in many products use national flags to help distinguish choices, and those could be supplied here. Avatars are 48x48px images. There may be political implications in choosing a flag or other symbol for a language; use caution. Note that subtags of a language tag are separated by a hyphen, as in zh-Hans-SG. 
Using underscore is a Java convention that does not conform to BCP 47.A complete properties definition for Czech might look like this: Once the table has been created and saved, you can edit the translated text in Composer. Under Entity Information, select Localization Tokens. A grid similar to this will appear: The columns shown are: Token Name. This is the symbol used by mashup developers to insert a localized string into a certain place in a widget. For example, no matter how the phrase "Add New Page" is rendered (Neue Seite hinzufügen, Adicionar nova página, 새 페이지 추가...) the application developer is only concerned that the token addNewPage appears on the proper widget. See How tokens are resolved below for more information. This Language. How the text is to be represented in this language, that is, the language of the Localization Table currently being viewed or edited. Language. How the text has already been represented in any other language currently defined on the system. This is simply for reference purposes, to compare one translation with another. Usage. Can be set to Label, Message, or left unspecified. This is a guide to translators, who have to be concerned about the size of translated text. Usage Label suggests that the text needs to fit in a confined space, such as in a column header or on the face of a button. Usage Message suggests that the text is meant for a popup, error message, help, or somewhere that full sentences can be accommodated. Context. This is a free-form text field to provide instructions, advice, context, or other explanatory material to the translator. For the token book, for example, the context field can distinguish between the senses of book (something to read), book a table, book a sale, or book a prisoner, which may all have different translations. Translations can be entered in Composer. However, it's also likely that a third-party translator will do the work without using this editor. See Tips for translators below. Define language preferences for a user The reason for localization is to present user interfaces in the best language for a given user. To support this, each ThingWorx user is associated with one or more languages – those that that user can read comfortably. Some applications might offer just one language or a few, some many, and the supported languages may or may not overlap. So each user defines an ordered preference list, saying in effect: my best language is Catalan, but I'm decent in Spanish, and if those aren't available I did spend a few years in Hungary, and as a last resort there was some French in school. This would be represented in ThingWorx as: ca,es,hu,fr. A user from Scotland might have language preference en-UK,en, meaning that English with United Kingdom spellings and vocabulary is best (tyre, windscreen), but if not available then any English will do (tire, windshield). (It is not necessary to spell out related preferences of this type – see How tokens are resolved.) Any application then interacts with a given user in the best language that the application and user have in common.To define the language preference(s) for a user, open the Users list in Composer: Then choose an existing user to edit, or click New to create a new account. The only localization related information here is the Languages field. An administrator who knows the names of available languages may edit or paste an ordered, comma-separated list into the Languages field (e.g.  ca,es,hu,fr-CA). Clicking the Edit... 
button brings up a drag-and-drop preferences editor: The column on the left shows available (unselected) languages. The column on the right shows this user's languages, with the top entry being the most preferred language. Dragging a language from left to right adds it to the user's list; from right to left removes it; dragging rows up and down on the right changes the preference order. As language entries are dragged, a highlight appears to show where they might be dropped: A user with no language preference set will have all tokens resolved from the Default and System tables. Language Preferences can be set programmatically, as detailed in KCS Article CS243270. Localize Mashups The job of the application developer is to keep hard-coded natural language strings out of applications. To support this, widgets define an attribute isLocalizable: true for widget properties that can contain text. This shows up in the Mashup editor as a globe icon next to each localizable property. In this example, both the Text and ToolTipField properties are localizable: Clicking the globe icon changes the property from static to localized. The appearance in the Mashup editor changes accordingly: Clicking the magic wand icon opens the localization token picker: The list of tokens on the right corresponds to the Token Name column in the Localization Table editor. This is the key that is common to the meaning of a word or phrase, independent of its translation into natural languages. Select one from the list, or click to create a new one. Enter the token name and its Default (usually English) value: Note that, complying with best practices for extension developers, the token name has been namespaced: this token belongs to Acme Inc.'s Spiffy application. The rest of the name is descriptive and may reflect other development standards.When a new token is created, it becomes available to edit in every configured Localization Table. If these are not updated, then the default (English) value will be shown wherever the token occurs. How tokens are resolved What happens at run time when the UI needs to display the value of a localization token? The answer is determined by the current user's language preferences the set of Localization Tables configured on the system the presence or absence of a translation for a given token in a given table To visualize this, picture the user's language preferences as a stack, with the most preferred language on top and the least one sitting on the floor – where the floor consists of the Default and System Localization Tables: The user's language preference is fr,pt,ru,hi (French, Portuguese, Russian, Hindi, with French most preferred). The system is configured with Localization Tables, which have no order, for it (Italian), fr-CA (Canadian French), ru (Russian), pt-BR(Brazilian Portuguese), es (Spanish), and the default (likely Engish). Now the UI needs to present this user with the best value for the token com.acme.spiffy.labelAssembly. To resolve this, we start at the top of the stack. Is there a fr Localization Table? There is. Does it contain a translation for com.acme.spiffy.labelAssembly? For the sake of illustration, assume that it does not – perhaps other applications have French support, but the Spiffy application doesn't, so there aren't any com.acme.spiffy.* tokens in the French Localization Table. So we still need a value. Continuing down through the user's preferences, the next acceptable language is pt. Is there a pt localization table? No. 
There is a Brazilian Portuguese translation, but that won't help a user from Portugal. Still looking, we move to the next language, ru. Is there a ru Localization Table? There is. Does it contain a translation forcom.acme.spiffy.labelAssembly? It does: Ассамблея – so the token has a value, and that is what gets displayed in the UI. Suppose that the user's preferences were more specific, something like this: The users's language preference is fr-CA,pt-BR,ru-Cyrl-RU,sl-Latn-IT-nedis (Canadian French, Brazilian Portuguese, Russian in Cyrillic characters as used in Russia, Slovenian in Latin characters as used in Italy where the Natisone River dialect prevails). ThingWorx treats this by internally expanding the stack to include acceptable fall-back languages. In effect, it looks like: Of the four languages that the user can accept and that the system defines (fr-CA, fr, pt-BR, ru) the first one containing the desired token determines its value in the UI. Token and translation management for applications While it's possible to edit localized values using the Localization Table editor in Composer, translations are usually done in bulk by subject-matter experts. While workflow will vary among organizations and projects, the following example illustrates the basic process. ACME, Inc. is developing a ThingWorx application called Cambot for controlling security cameras. ACME's developer begins by constructing a mashup: This is the first draft. There is an area for the video widget, to be added later, and some button and label widgets for choosing and controlling a camera. The widgets have been given static labels: As shown here, the text for the pan left button has been entered simply as "Pan Left." But the Cambot app needs to be localized, and delivered in English, French, and Spanish. The next step for the developer is to replace all of the static text with localization tokens. Clicking the globe icon to the left of the label property changes the text from static to tokenized: and adds a magic picker for localization tokens. This is a new application, and will need its own set of localization tokens. To create the one for "Pan Left," click the magic wand to open the tokens picker: and then click "+ Localization Token" to add a new one. A dialog opens prompting for the token name and its default (English) value: Note that the token name has been namespaced for two reasons: to prevent conflicts with tokens from other sources, and to allow the developer and translators to work only with application-specific tokens. On clicking "Add Localization Token," the token is created and the default value saved. The mashup builder now shows: . After all of the tokens needed by the application have been defined, they and their values may be seen on the Localization Tokens editor for the Default Localization Table. By entering the namespace prefix in the filter textbox, the display can be restricted to the tokens for this application: As application development continues, and more tokens are required, this process is repeated. When tokens are defined, the developer should edit the Default Localization Table to supply Usage and Context information for each one: Finally, it's time to do the translations for French and Spanish. First, create the localization tables for those languages, as described above in "Create a Localization Table." From the Import/Export menu, select EXPORT / To File: Then, depending on the file format desired, choose either the Entities or Single Entity tab. 
For Entities, set the Collections value to Localization Tables, enter the namespace in the Token Prefix field, and choose XML as the Export Type: This will produce a single output file, containing a Localization Table element for every language defined on the system – in this example, English, French, and Spanish -- but including only the com.acme.cambot tokens. For Single Entity, choose the language to export, specify the prefix, and choose XML: This must be repeated, once for each language, and creates a separate XML file for each. In either case, the translator should be supplied with the Default XML and the file for the language to be added. (Or, the tokens and values may be converted to and from other formats, depending on the requirements of the translation service. In any case, the translated values must be in the same XML format before they can be imported.) The Default export file will contain a <Rows> element like this: < Rows >     < Row >         < usage > <![CDATA[label]]> </ usage >         < name > <![CDATA[com.acme.cambot.buttonnext]]> </ name >         < context > <![CDATA[Button to switch view to next camera]]> </ context >         < value > <![CDATA[Next Camera]]> </ value >     </ Row >     < Row >         < usage > <![CDATA[label]]> </ usage >         < name > <![CDATA[com.acme.cambot.buttonpanleft]]> </ name >         < context > <![CDATA[Button to pan view to the left]]> </ context >         < value > <![CDATA[Pan Left]]> </ value >     </ Row >     < Row >         < usage > <![CDATA[label]]> </ usage >         < name > <![CDATA[com.acme.cambot.buttonpanright]]> </ name >         < context > <![CDATA[Button to pan view to the right]]> </ context >         < value > <![CDATA[Pan Right]]> </ value >     </ Row >     < Row >         < usage > <![CDATA[label]]> </ usage >         < name > <![CDATA[com.acme.cambot.buttonprev]]> </ name >         < context > <![CDATA[Button to switch view to previous camera]]> </ context >         < value > <![CDATA[Prev. 
Camera]]> </ value >     </ Row >     < Row >         < usage > <![CDATA[label]]> </ usage >         < name > <![CDATA[com.acme.cambot.buttontiltdown]]> </ name >         < context > <![CDATA[Button to tilt view down]]> </ context >         < value > <![CDATA[Tilt Down]]> </ value >     </ Row >     < Row >         < usage > <![CDATA[label]]> </ usage >         < name > <![CDATA[com.acme.cambot.buttontiltup]]> </ name >         < context > <![CDATA[Button to tilt view up]]> </ context >         < value > <![CDATA[Tilt Up]]> </ value >     </ Row >     < Row >         < usage > <![CDATA[label]]> </ usage >         < name > <![CDATA[com.acme.cambot.buttonzoomin]]> </ name >         < context > <![CDATA[Button to view more detail]]> </ context >         < value > <![CDATA[Zoom In]]> </ value >     </ Row >     < Row >         < usage > <![CDATA[label]]> </ usage >         < name > <![CDATA[com.acme.cambot.buttonzoomout]]> </ name >         < context > <![CDATA[Button to expand view]]> </ context >         < value > <![CDATA[Zoom Out]]> </ value >     </ Row >     < Row >         < usage > <![CDATA[label]]> </ usage >         < name > <![CDATA[com.acme.cambot.labelcamera]]> </ name >         < context > <![CDATA[Label for current camera name]]> </ context >         < value > <![CDATA[Camera:]]> </ value >     </ Row >     < Row >         < usage > <![CDATA[label]]> </ usage >         < name > <![CDATA[com.acme.cambot.labelrecording]]> </ name >         < context > <![CDATA[Notice displayed when camera is recording]]> </ context >         < value > <![CDATA[Recording]]> </ value >     </ Row > </ Rows > Whereas the French and Spanish export files will contain an empty <Rows/> element. This is where the new translations should be added. When the translations are ready, check that the <LocalizationTable> attributes (name, description, languageCommon, languageNative) are correct. Then import the new languages and inspect the results using the Localization Table editor. Localization tables for an application may be bundled into an extension .zip file as other entities are handled; on import, the tokens for the application will be merged with existing localization tables for the same language. In the case that a brand new language is being introduced, note that many widgets use tokens from the System localization table. These will need to be translated as well – however, there is no easy way to restrict the set of tokens to those actually used. At present this is a manual filtering step. For existing languages, check to see if the System tokens have already been translated. Important note on character encoding In handling the export, transmission and editing of XML files, it's important to ensure that UTF-8 encoding is maintained throughout. Encoding problems can show up either as errors when the file is re-imported, or as localized strings with question marks or other unexpected characters in place of accented letters. ThingWorx must run with UTF-8 as the default file encoding. Specify the Java option -Dfile.encoding=UTF-8 on launch. Windows In %CATALINA_HOME%\bin\setenv.bat, include this command:     set CATALINA_OPTS=-Dfile.encoding=UTF-8 Tips for translators Each token in an exported Localization Table XML file is defined by four fields: name, value, usage, and context. While name might be suggestive, it is actually arbitrary and should not be relied on. Value contains the natural language value for the token in another language (as agreed upon). 
Translating from this language into the target language is the object.

Usage hints at constraints on the size of the translated text. ThingWorx widgets do not in general resize to fit contents, so a button label, column heading, field label, etc. may be more difficult to translate. Because the default language is likely to be English, and English is a particularly compact language, the application may have been designed with narrow constraints. Such tokens should be marked as tricky by having a usage value of Label. Tokens with a usage of Message are for strings in more adaptable spaces, such as a textarea, warning message, etc.

Context allows the application developer to provide translation hints. This may disambiguate synonyms, explain usage, discuss space constraints, specify tone of voice, or anything else applicable.

The interesting section of a language's XML representation is contained in the <Rows> element. For example:

 1  <Rows>
 2      <Row>
 3          <usage/>
 4          <name><![CDATA[com.acme.spiffy.labelPart]]></name>
 5          <context/>
 6          <value><![CDATA[Part]]></value>
 7      </Row>
 8      <Row>
 9          <usage><![CDATA[Label]]></usage>
10          <name><![CDATA[com.acme.spiffy.labelAssembly]]></name>
11          <context><![CDATA[Label identifying the name of the assembly being edited, appears as Assembly: external_name]]></context>
12          <value><![CDATA[Assembly]]></value>
13      </Row>
14      <Row>
15          <usage><![CDATA[Message]]></usage>
16          <name><![CDATA[com.acme.spiffy.warningIncomplete]]></name>
17          <context><![CDATA[Pop-up warning message on Save]]></context>
18          <value><![CDATA[A referenced part is missing, undefined, or not allowed in this assembly.]]></value>
19      </Row>
20  </Rows>

In this example, the token defined in lines 2 through 7 is missing the translation cues usage and context. The translator's only option is to intuit the sense of "Part" – is it a noun or a verb? – and attempt a reasonable guess. Access to a running example of the application would clearly be helpful. Lines 8 through 13 identify a label and describe how it is used; lines 14 through 19 do the same for a message. The translator would know that space for the translation of "Assembly" might be limited, but that the warning message can be expressed naturally.
A translator working on French might then edit this file as follows (again, only the <Rows> element is illustrated):

After translating:

<Rows>
    <Row>
        <usage/>
        <name><![CDATA[com.acme.spiffy.labelPart]]></name>
        <context/>
        <value><![CDATA[Partie]]></value>
    </Row>
    <Row>
        <usage><![CDATA[Label]]></usage>
        <name><![CDATA[com.acme.spiffy.labelAssembly]]></name>
        <context><![CDATA[Label identifying the name of the assembly being edited, appears as Assembly: external_name]]></context>
        <value><![CDATA[Assemblée]]></value>
    </Row>
    <Row>
        <usage><![CDATA[Message]]></usage>
        <name><![CDATA[com.acme.spiffy.warningIncomplete]]></name>
        <context><![CDATA[Pop-up warning message on Save]]></context>
        <value><![CDATA[Une partie référencé est manquant, indéfini, ou non autorisés dans cette assemblée.]]></value>
    </Row>
</Rows>

Note that only the <value> elements need to be translated – the context and usage are hints for the translator.

System tokens for international data formats
There are several tokens used for formatting that are also subject to localization:

datepickerDayNamesMin: Su,Mo,Tu,We,Th,Fr,Sa (day-of-week abbreviations used in the calendar heading)
datepickerFirstDay: 0 (first day of the week, 0 for Sunday, 1 for Monday...)
datepickerMonthNames: January,February,March,April,May,June,July,August,September,October,November,December (month names used in the calendar heading)
dateTimeFormat_Default: yyyy-MM-dd HH:mm:ss (date and time format codes are defined by the moment.js library)
dateTimeFormat_FullDateTime: LLLL
dateTimeFormat_LongDate: LL
dateTimeFormat_LongDateTime: LLL
dateTimeFormat_MediumDate: ll
dateTimeFormat_ShortDate: l
dateTimeFormat_TimeOnly: LT
shortDateFormat: mm/DD/yyyy

See also KCS Article CS241828 for details about numeric localization.

Allowing users to set their own language preferences
It may not be practical for the Administrator to set the language preferences for each user. An application may elect to expose the preferences editor to the end user, so that each user may select from the available languages those that are useful. To support this, ThingWorx Composer offers a Preferences widget in the Mashup builder. The widget may be inserted into any application wherever the designer chooses. It may be tied to a button or menu item, or simply appear in a layout with other widgets – perhaps along with application-specific preferences and other settings.

To use the Preferences widget, design a mashup for it to appear in. The minimal case would be a responsive page mashup containing nothing but the Preferences widget. Add the Preferences widget by dragging it into place: a placeholder for the widget appears in the mashup. The widget may be customized by setting various properties. These properties are specific to the Preferences widget:

ShowClearRecent: Check this to include the option for the user to clear the Most Recently Used history. You may specify a localized tooltip.
ShowRestoreTabs: Check this to include the option for the user to set tab restoration to ask, always, or never. You may specify a localized tooltip.
ShowLanguages: Check this to include the option for the user to edit language preferences. You may specify a localized tooltip.
ShowUserName: Check this to label the preferences widget with the user's name.
ShowUserAvatar: Check this to label the preferences widget with the user's avatar, if one is defined.
Style: Style the preferences widget itself.
ButtonStyle: Style the Clear Recent and Edit buttons. These should probably be set to the application's primary button style.

After adding the Preferences widget to a mashup, provide some way for the user to navigate to it, consistent with the application's UI design. The mashup may be tied to a menu entry, or assigned to a Navigation widget, or included in a page within the application's workflow – whatever suits the application design. Here is an example of providing access to preferences through a button in the application's title area:

1) The Navigation widget is placed in the page header.
2) The MashupName property is set to the mashup containing a Preferences widget.
3) The TargetWindow property is set to Modal Popup.
4) For a more interesting UI, the button label is bound from the user's name.

At runtime, the example looks like this: note that there is also a menu item leading to the mashup with the Preferences widget.
View full tip
ThingWorx DevOps
By Victoria Firewind, IoT EDC

This presentation accompanies a recent Expert Session, with video content including demos of the following topics, found here.

DevOps is a process for taking planned changes through development, through testing, and into production, where they can be accessed by end users. One test instance typically has automated tests (integration testing) which ensure application logic is preserved in spite of whatever changes the developers are making, and often there is another test instance to ensure the application is usable (UAT testing) and able to handle a production load (load testing).

So, a DevOps pipeline starts with a task manager tasking out planned changes, where each task will become a branch in the repository. Each time a new branch is created, a new pipe is needed, which in this case is produced by Docker Hub. Developers then make changes within that pipe, which then flow along the pipe into testing. In this diagram, testing is shown as the valve which, when open (i.e. when all tests pass), allows the changes to flow along the pipe into production. A good DevOps process has good flow along the various pipelines, with as much automated or scripted as possible to reduce the chances for errors in deployments. In order to create a seamless pipeline, whether or not it winds up automated, several third-party tools are useful.

Container software is a very good way to improve the maintainability and updatability of a ThingWorx instance, while minimizing the amount of resources needed to host each component.

1. Create the Docker image. Consult the Help Center if need be. Update your YML file with everything you need before starting the image (see the example in the PTC community). License the instance using the license management website. Follow the instructions from Docker for installing the required tools: Docker itself (docker) and Docker Compose (docker-compose).

2. Save the Docker image in a Docker repository. Docker Hub has some free options, and if a license is purchased, it can host more than a single Docker image and tag. It is also possible to set up your own Docker registry.

3. Access the image in Docker Desktop. Download Docker Desktop and sign in to the Docker account which hosts the repository. Create some folders for storing the h2.env file and the ThingworxStorage and ThingworxPlatform mounted folders. Remember to license these containers as well: developers log in to the license management site themselves and put the license file into the ThingworxPlatform mounted folder ("license_capability_response.bin").

Git is a very versatile tool that can be used through many different mediums, like Azure DevOps or GitHub Desktop. To get started as a totally new Git user, try downloading GitHub Desktop on your local machine and create a local repository with the provided sample code. This can then be cloned on a Linux machine, presumably whichever instance hosts the integration ThingWorx instance, using the provided scripts (once they are configured). Remember to install Git on the Linux machine, if necessary (sudo apt-get install git).

A sample ThingWorx application (which is not officially supported, and is provided just as an example of how to do DevOps-related tasks in ThingWorx) is attached to this post in a zip file, containing two directories, one for scripts and one for ThingWorx entities.
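For reference, the kind of YML file mentioned in step 1 might look roughly like the sketch below. This is only an illustration: the image name, tag, and credential values are placeholders, and the real starting point should be the PTC-provided ThingWorx Docker package.

    # Minimal docker-compose sketch for a ThingWorx Foundation H2 container.
    # Image name, tag, and values below are assumptions -- use the PTC-provided files for real deployments.
    version: "3"
    services:
      thingworx:
        image: mycompany/thingworx-platform-h2:9.2.0   # hypothetical image name
        ports:
          - "8080:8080"
        environment:
          - "CATALINA_OPTS=-Xms2g -Xmx4g"
          - "TWX_DATABASE_USERNAME=dbadmin"
          - "TWX_DATABASE_PASSWORD=dbadmin"
          - "THINGWORX_INITIAL_ADMIN_PASSWORD=Pleasechangemenow"
        volumes:
          - ./ThingworxPlatform:/ThingworxPlatform
          - ./ThingworxStorage:/ThingworxStorage
          - ./tomcat-logs:/opt/apache-tomcat/logs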
Copy the Git scripts and config file into the top level, above the repository folder, and update the GitConfig.sh file with the URL for your Git repository and your login credentials. These scripts can then be used to sync your Linux server with your Git repository, which any developer can easily update from their local machine. This also keeps changes secure, and enables other DevOps procedures like tasks, epics, and corresponding branches of code.

Steps to DevOps using the provided code as an example:
1. Clone the repository into the SystemRepository, or any other created repository, using the provided scripts in a Linux environment.
2. Import the DeploymentUtilities entity, which again is scripted for Linux or for use with a development IDE with bash support.
3. Import the ThingWorx application from source control, or use the script (which itself makes use of that DeploymentUtilities entity).
4. Now create some local changes, add things, etc. and try out the UpdateApplication script, or export to source control and then push to the Git repo. Data and localization table exports are also possible.
5. Run the tests using the provided IntegrationTester thing, create your own by overriding the IntegrationTestTS thing shape, or use the TestTwxApplication script from a Linux terminal.
6. Design a process for your application which allows for easy application exports and updates to and from a repository, so that developers can easily send in their changes, which can then be easily loaded and tested in another environment.

In Conclusion: DevOps is a complex topic and every PTC customer will have their own process based around their unique requirements and applications. In the future, more mature pipeline solutions will be covered, including ones that also publish to Solution Central for easier deployment between the various testing instances and production.
View full tip
Recently I needed to be able to parse and handle XML data natively inside of a ThingWorx script, and this XML file happened to have a SOAP namespace as well. I learned a few things along the way that I couldn’t find a lot of documentation on, so am sharing here.   Lessons Learned The biggest lesson I learned is that ThingWorx uses “E4X” XML handling. This is a language that Mozilla created as a way for JavaScript to handle XML (the full name is “ECMAscript for XML”). While Mozilla deprecated the language in 2014, Rhino, the JavaScript engine that ThingWorx uses on the server, still supports it, so ThingWorx does too. Here’s a tutorial on E4X - https://developer.mozilla.org/en-US/docs/Archive/Web/E4X_tutorial The built-in linter in ThingWorx will complain about E4X syntax, but it still works. I learned how to get to the data I wanted and loop through to create an InfoTable. Hopefully this is what you want to do as well.   Selecting an Element and Iterating My data came inside of a SOAP envelope, which was meaningless information to me. I wanted to get down a few layers. Here’s a sample of my data that has made-up information in place of the customer's original data:                <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" headers="">     <SOAP-ENV:Body>         <get_part_schResponse xmlns="urn:schemas-iwaysoftware-com:iwse">             <get_part_schResult>                 <get_part_schRow>                     <PART_NO>123456</PART_NO>                     <ORD_PROC_DIV_CD>E</ORD_PROC_DIV_CD>                     <MFG_DIV_CD>E</MFG_DIV_CD>                     <SCHED_DT>2020-01-01</SCHED_DT>                 </get_part_schRow>                 <get_part_schRow>                     <PART_NO>789456</PART_NO>                     <ORD_PROC_DIV_CD>E</ORD_PROC_DIV_CD>                     <MFG_DIV_CD>E</MFG_DIV_CD>                     <SCHED_DT>2020-01-01</SCHED_DT>                 </get_part_schRow>             </get_part_schResult>         </get_part_schResponse>     </SOAP-ENV:Body> </SOAP-ENV:Envelope> To get to the schRow data, I need to get past SOAP and into a few layers of XML. To do that, I make a new variable and use the E4X selections to get there: var data = resultXML.*::Body.*::get_part_schResponse.*::get_part_schResult.*; Note a few things: resultXML is a variable in the service that contains the XML data. I skipped the Envelope tag since that’s the root. The .* syntax does not mean “all the following”, it means “all namespaces”. You can define and specify the namespaces instead of using .*, but I didn’t find value in that. I found some sample code that theoretically should work on a VMware forum: https://communities.vmware.com/thread/592000. This gives me schRow as an XML List that I can iterate through. You can see what you have at this point by converting the data to a String and outputting it: var result = String(data); Now that I am to the schRow data, I can use a for loop to add to an InfoTable: for each (var row in data) {      result.AddRow({         PartNumber: row.*::PART_NO,         OrderProcessingDivCD: row.*::ORD_PROC_DIV_CD,         ManufacturingDivCD: row.*::MFG_DIV_CD,         ScheduledDate: row.*::SCHED_DT     }); } Shoo! That’s it! Data into an InfoTable! Next time, I'll ask for a JSON API. 😊
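One note on the snippet above: the InfoTable that result.AddRow() writes into is never shown being created (the earlier result variable was just the debug String). A minimal sketch of that missing step, assuming a hypothetical DataShape named "PartScheduleRow" with the four fields used in the loop, might look like this:

    // Build the output InfoTable before the for-each loop.
    // "PartScheduleRow" is a hypothetical DataShape with PartNumber, OrderProcessingDivCD,
    // ManufacturingDivCD (STRING) and ScheduledDate (DATETIME) fields.
    var result = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape({
        infoTableName: "InfoTable",
        dataShapeName: "PartScheduleRow"
    });

It may also be worth casting each E4X value (for example with String() or new Date()) before adding it to the row, since E4X nodes are XML objects rather than primitive JavaScript types.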
View full tip
Smoothing Large Data Sets

Purpose
In this post, learn how to smooth large data sources down into what can be rendered and processed more easily on Mashups. Note that the Time Series Chart widget is limited to loading 8,000 points (hard-coded). This is because rendering more points than this is almost never necessary or beneficial, given that the human eye can only discern so many points and the average monitor can only render so many pixels. Reducing large data sources through smoothing is a recommended best practice for ThingWorx, and for data analysis in general.

To show how this is done, there are sample entities provided which can be downloaded and imported into ThingWorx. These demonstrate the capacity of ThingWorx to reduce tens of thousands of data points based on a "smooth factor" live on Mashups, without much added load time required. The tutorial below steps through setting these entities up, including the code used to generate the dummy data.

Smoothing the Data on Mashups
Create a Value Stream for storing the historical data. Create a Data Shape for use in the queries; the fields should be: TestProperty - NUMBER; timestamp - DATETIME. Create a Thing (TestChartCapacityThing) for simulating property updates and therefore Value Stream updates. There is one property: TestProperty - NUMBER - not persistent - logged. The custom query service on this Thing (QueryNamedPropertyHistory) will have the logic for smoothing the data. Essentially, many points are averaged into one point, reducing the overall size, before the data is returned to the mashup. Unfortunately, there is no built-in (OOTB) service to do this. The code is here (input parameters are to - DATETIME; from - DATETIME; SmoothFactor - INTEGER):

    // This is just for passing the property name into the query
    var infotable = Resources["InfoTableFunctions"].CreateInfoTable({infotableName: "NamedProperties"});
    infotable.AddField({name: "name", baseType: "STRING"});
    infotable.AddRow({name: "TestProperty"});
    var queryResults = me.QueryNamedPropertyHistory({
        maxItems: 9999999,
        endDate: to,
        propertyNames: infotable,
        startDate: from
    });
    // This will be filled in below, based on the smoothing calculation
    var result = Resources["InfoTableFunctions"].CreateInfoTable({infotableName: "SmoothedQueryResults"});
    result.AddField({name: "TestProperty", baseType: "NUMBER"});
    result.AddField({name: "timestamp", baseType: "DATETIME"});
    // If there is no smooth factor, then just return everything
    if(SmoothFactor === 0 || SmoothFactor === undefined || SmoothFactor === "")
        result = queryResults;
    else {
        // Increment by smooth factor
        for(var i = 0; i < queryResults.rows.length; i += SmoothFactor) {
            var sum = 0;
            var count = 0;
            // Increment by one to average all points in this interval
            for(var j = i; j < (i+SmoothFactor); j++) {
                if(j < queryResults.rows.length)
                    if(j === i) {
                        // First time, set sum equal to the first property value
                        sum = queryResults.getRow(j).TestProperty;
                        count++;
                    } else {
                        // All other times, add property values to the first value
                        sum += queryResults.getRow(j).TestProperty;
                        count++;
                    }
            }
            // Use count because the last interval may not equal the smooth factor
            var average = sum / count;
            result.AddRow({TestProperty: average, timestamp: queryResults.getRow(i).timestamp});
        }
    }

Create a Timer for updating the property values on the Thing.
The Timer should subscribe to itself, containing this code (ensure it is enabled as well): var now = new Date(); if(now.getMilliseconds() % 3 === 0) // Randomly reset the number to simulate outliers Things["TestChartCapacityThing"].TestProperty = Math.random()*100; else if(Things["TestChartCapacityThing"].TestProperty > 100) Things["TestChartCapacityThing"].TestProperty -= Math.random()*10; else Things["TestChartCapacityThing"].TestProperty += Math.random()*10; Don't forget to set the runAsUser in the Timer configuration. To generate many properties, set the updateRate to a small value, like 10 milliseconds. Disable the Timer after many thousands of properties are logged in the Value Stream. Create a Mashup for displaying the property data and capacity of the query to smooth the data. The Mashup should run the service created in step 4 on load. The service input comes from widgets on the mashup: Bindings: Place a Time Series Chart widget in the bottom of the Mashup layout. Bind the data from the query to the chart. View the Mashup. Note the difference in the data... All points in one minute: And a smooth factor of 10 in one minute: Note that the outliers still appear, and the peaks are much easier to see. With fewer points, trends become easier to spot and data is easier to understand. For monitoring the specific nature of the outliers, utilize alerts and other types of displays. Alternative forms of data reduction could involve using the mean of each interval (given by the smoothing factor) or the min or max, as needed for the specific use case. Display multiple types of these options for an even more detailed view. Remember, though, the more data needs to be processed, the slower the Mashup will load. As usual, ensure all mashups are load tested and that the number of end users per Mashup is considered during application design.
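As noted above, other aggregations besides the mean can be swapped into the same smoothing loop. Here is a hedged sketch of a min/max variation, assuming the result InfoTable has had MinValue and MaxValue NUMBER fields added instead of (or alongside) TestProperty:

    // Variation on the smoothing loop: keep the minimum and maximum of each interval.
    for (var i = 0; i < queryResults.rows.length; i += SmoothFactor) {
        var min = queryResults.getRow(i).TestProperty;
        var max = queryResults.getRow(i).TestProperty;
        for (var j = i; j < (i + SmoothFactor) && j < queryResults.rows.length; j++) {
            var value = queryResults.getRow(j).TestProperty;
            if (value < min) min = value;
            if (value > max) max = value;
        }
        // MinValue and MaxValue are assumed fields added via result.AddField() beforehand
        result.AddRow({ MinValue: min, MaxValue: max, timestamp: queryResults.getRow(i).timestamp });
    }

Displaying the min and max alongside the smoothed average makes outliers visible without rendering every raw point.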
View full tip
This video will provide you with a brief introduction to the New Composer interface, which has been made available in the 7.4 and later releases of the ThingWorx Platform.  For complete details on what functionality is available within this next generation composer interface, and to also see what lies ahead on our road map, please refer to the following post in the ThingWorx Community: NG Composer feature availability
View full tip
The term ‘Extension’ or ‘Plugin’ often has many different meanings. For example, from the point of view of a Product Manager it often means an ‘easy’ way to add additional functionality to an existing piece of software. In contrast, from the point of view of a Software Developer it often means new syntax to memorize, extensive amounts of API documentation to read and very often weeks or even months of trial and error to make this ‘easy’ addition of functionality.  We at ThingWorx recognized that in order to be a true Platform we need to take action to make the creation of Extensions easier at all phases of the process. Our development team took on the challenge of understanding what it is that normally makes this such a difficult process, and try and find solutions. We took to the open source community looking for a Platform that could offer our Extension developers a wide array of functionality that was easily accessible and familiar. This is where the Eclipse IDE comes in. We were able to create a Plugin of our own for the Eclipse IDE that makes it easy for an Extension Developer to create an Extension Project, generate ThingWorx specific code, manage all the project configuration and build files and also package the Extension. We can do all of this without having a developer read any API documentation or manually write any code, leaving the Extension Developer to focus on what they are best at, which is adding that additional functionality we mentioned earlier. Extensibility and the true nature of a Platform Extensibility is a core aspect of any true Platform as it allows users to add functionality at any time to meet new and changing requirements. The capabilities of extensions are almost endless but here are a few examples: Adding new UI Widgets to be used to visualize data Adding any custom third party Libraries to be used seamlessly in ThingWorx Easily accessing REST APIs outside of ThingWorx Creating helper Resources to be used across the entire platform Add custom Entities easily to multiple ThingWorx instances Provide custom Authenticators and Directory Services As you can see, it is possible to do practically anything that you or our community might find useful for the Internet of Things. This is the nature of a true ‘Platform’. How do I get started developing an extension? There are three steps that will help you dive into Extension Development quickly. First, an instance of ThingWorx Foundation and the ability to navigate the UI, called Composer. Second, a basic understanding of the ThingWorx Model, or “Thing Model”, is necessary. Finally, you will need an installation of the Eclipse IDE with the ThingWorx Eclipse Plugin installed to get started developing your extension. 1)  Getting familiar with ThingWorx Foundation The easiest way to get started playing with the ThingWorx platform is to head over to the Developer Portal and spin up a hosted ThingWorx Foundation server. This is as easy as clicking the ‘Create Foundation Server’ button and a 30-day hosted instance will be created for you to start using as your own personal development playground. If you prefer to set up and work in your own environment, you can also download a Developer Trial Edition to host on your own machine. In order to get familiar with ThingWorx Foundation, I recommend going through our ThingWorx Foundation Quickstart Guide that introduces you to the core building blocks of the platform as well as guide you through a typical scenario of creating a simple IoT application. 
2)  Understanding the Thing Model Basics If you are already familiar with the Thing Model and know the basics of using the ThingWorx Platform, then you can probably skip over this section. If you aren’t, or just want a refresher, I’ll go over the basics here. The Thing Model is a collection of Entities that define your solution or business model in ThingWorx. You need a Thing Model for a few reasons.  For those in software development, the Thing Model and its benefits are very similar to those of the Object Oriented programming model.  A good model allows you to maximize reusability, maintainability, and encapsulation.  Having a sound Thing Model means that the future of your IIoT solution will be minimally affected by things like migration, iterative changes, permission changes and security vulnerabilities. The three most commonly used and most important Entities within your model are Things, Thing Templates, and Thing Shapes. These entity types will be the main building blocks for your Thing Model. These are only a few of the Entity Types provided by ThingWorx. It is not necessary but definitely recommended, to have a more comprehensive understanding of the Thing Model, and to work with the entire collection of Entity Types within the ThingWorx Platform by going through the ThingWorx Foundation Quickstart Guide on the Developer Portal. 3)  Dive into the Eclipse Plugin and develop your own extension Lastly, if you don’t have Eclipse IDE installed, head over to the eclipse.org download page and get one installed on your machine. Once you have that, you can find the Eclipse Plugin on our Marketplace here. In a next step, you will want to create a new Extension Project. We added a ThingWorx Extension perspective that enables all of the custom functionality.   Once in the correct perspective, we tried to make our plugin as intuitive as possible. We did this by following as many of the Eclipse usability standards as we could, which means that if you are familiar with the Eclipse IDE you should be able to find most of the ThingWorx functionality on your own. For Example, a simple ‘File’ >> ‘New’ will show you all the options for creating a Project. Creating a new ThingWorx Extension project requires a Name and a ThingWorx Extension SDK (found on the ThingWorx Marketplace) of the version of ThingWorx that you are building your extension for.   By utilizing the capabilities of the Eclipse IDE, we were able to automate the creation of many of the artifacts that had slowed extension developers down. The wizard allows the plugin to handle creating and managing a build file and a metadata.xml as well as manage all of the project dependencies. Once you have an Extension Project, you can use the ThingWorx menus to create your Entities. These actions will create the necessary Java files and manage their applicable entry within the metadata.xml of your project.   After creating your Entity, you can right click the applicable java file which will show you the ‘ThingWorx Source’ menu. This houses the options to generate the code for additional characteristics (Services, Properties, etc.) making the need to learn all of the custom annotations and method signatures a much less daunting process.   Once you have generated some code with the Plugin, it is time to get started implementing your solution - This is the point where my expertise ends and yours begins! 
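To give a feel for what the generated code looks like, here is a rough sketch of a minimal Resource with one service, written against the annotation classes from the ThingWorx Extension SDK. The class, service, and parameter names are made up for illustration; the Eclipse plugin generates the real scaffolding and maintains the metadata.xml entry for you.

    // Hypothetical example of an extension Resource (names are illustrative only).
    import com.thingworx.metadata.annotations.ThingworxServiceDefinition;
    import com.thingworx.metadata.annotations.ThingworxServiceParameter;
    import com.thingworx.metadata.annotations.ThingworxServiceResult;
    import com.thingworx.resources.Resource;

    public class GreetingResource extends Resource {

        @ThingworxServiceDefinition(name = "SayHello", description = "Returns a simple greeting")
        @ThingworxServiceResult(name = "result", description = "The greeting", baseType = "STRING")
        public String SayHello(
                @ThingworxServiceParameter(name = "name", description = "Who to greet",
                                           baseType = "STRING") String name) {
            // Business logic goes here; this is the part the developer focuses on
            return "Hello, " + name + "!";
        }
    }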
If you are interested in getting some more in-depth information on this topic, check out these additional resources: Tutorial: We have created a tutorial that guides you step-by-step through the entire process of developing a ThingWorx extension. Webcast: Watch this 60-minute, interactive deep-dive into IIoT development and learn how to use the Eclipse Plugin to rapidly create a custom ThingWorx extension. Head over to the Developer Portal and start bringing all of your great ideas to life!
View full tip
Part I – Securing connection from remote device to ThingWorx platform

The goal of this first part is to set up a certificate authority (CA) and sign the certificates used to authenticate MQTT clients. At the end of this first part, the MQTT broker will only accept clients with a valid certificate. A note on terminology: TLS (Transport Layer Security) is the new name for SSL (Secure Sockets Layer).

Requirements
The certificates will be generated with openssl (check if it is already installed by your distribution). Demonstrations will be done with the open source MQTT broker, mosquitto. To install, use the apt-get command:
$ sudo apt-get install mosquitto
$ sudo apt-get install mosquitto-clients

Procedure
NOTE: This procedure assumes all the steps will be performed on the same system.

1. Set up a protected workspace
Warning: the keys for the certificates are not protected with a password. Create and use a directory that does not grant access to other users.
$ mkdir myCA
$ chmod 700 myCA
$ cd myCA

2. Set up a CA and generate the server certificates
Download and run the generate-CA.sh script to create the certificate authority (CA) files, generate server certificates and use the CA to sign the certificates. NOTE: Open the script to customize it at your convenience.
$ wget https://github.com/owntracks/tools/raw/master/TLS/generate-CA.sh .
$ bash ./generate-CA.sh
The script produces six files: ca.crt, ca.key, ca.srl, myhost.crt, myhost.csr, and myhost.key. These are certificates (.crt), keys (.key), a request (.csr), and a serial number record file (.srl) used in the signing process. Note that the myhost files will have different names on your system (ubuntu in my case). Three of them get copied to the /etc/mosquitto/ directories:
$ sudo cp ca.crt /etc/mosquitto/ca_certificates/
$ sudo cp myhost.crt myhost.key /etc/mosquitto/certs/
They are referenced in the /etc/mosquitto/mosquitto.conf file like this: After copying the files and modifying the mosquitto.conf file, restart the server:
$ sudo service mosquitto restart

3. Checkpoint
To validate the setup at this point, use the mosquitto_sub client. If not already installed, please install it. Change folder to ca_certificates and run the command: The topics are updated every 10 seconds. If debugging is needed, you can add the -d flag to mosquitto_sub and/or look at /var/log/mosquitto/mosquitto.log.

4. Generate client certificates
The following openssl commands will create the certificates:
$ openssl genrsa -out client.key 2048
$ openssl req -new -out client.csr -key client.key -subj "/CN=client/O=example.com"
$ openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAserial ./ca.srl -out client.crt -days 3650 -addtrust clientAuth
The argument -addtrust clientAuth makes the resulting signed certificate suitable for use with a client.

5. Reconfigure
Change the mosquitto configuration file to add the require_certificate line to the end of the /etc/mosquitto/mosquitto.conf file so that it looks like this: Restart the server:
$ sudo service mosquitto restart

6. Test
The mosquitto_sub command we used above now fails: Adding the --cert and --key arguments satisfies the server:
$ mosquitto_sub -t \$SYS/broker/bytes/\# -v --cafile ca.crt --cert client.crt --key client.key
To obtain the corresponding certificates and key for my server (named ubuntu), use the following syntax: And run the following command:

Conclusion
This first part established a secure connection from a remote thing to the MQTT broker.
In the next part we will restrict this connection to TLS 1.2 clients only and allow the websocket connection.
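For readers following along without the original screenshots, the TLS-related portion of /etc/mosquitto/mosquitto.conf referenced in steps 2 and 5 would look roughly like the sketch below. The listener port and certificate file names here are assumptions; use whatever generate-CA.sh actually produced on your system.

    # TLS listener configuration for mosquitto (sketch)
    listener 8883
    cafile /etc/mosquitto/ca_certificates/ca.crt
    certfile /etc/mosquitto/certs/myhost.crt
    keyfile /etc/mosquitto/certs/myhost.key
    # Added in step 5: clients must present a certificate signed by the CA
    require_certificate true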
View full tip
In ThingWorx Analytics, you have the possibility to use an external model for scoring. In this written tutorial, I would like to provide an overview of how you can use a model developed in Python, using the scikit-learn library, in ThingWorx Analytics. The provided attachment contains an archive with the following files:
iris_data.csv: A dataset for pattern recognition that has a categorical goal. You can click here to read more about this dataset.
TestRFToPmml.ipynb: A Jupyter notebook file with the source code for the Python model as well as the steps to export it to PMML.
RF_Iris.pmml: The PMML file with the model that you can directly upload in Analytics without going through the steps of training the model in Python.

The tutorial assumes you already have some knowledge of ThingWorx and ThingWorx Analytics. Also, if you plan to run the Python code and train the model yourself, you need to have Jupyter notebook installed (I used the one from the Anaconda distribution). For demonstration purposes, I have created a very simple random forest model in Python. To convert the model to PMML, I have used the sklearn2pmml library. Because ThingWorx Analytics supports PMML format 4.3, you need to install sklearn2pmml version 0.56.2 (the highest version that supports PMML 4.3). To read more about this library, please click here. Furthermore, to use your model with this older version of sklearn2pmml, I have installed scikit-learn version 0.23.2. You will find the commands to install the two libraries in the first two cells of the notebook.

Code Walkthrough
The first step is to import the required libraries (please note that the pandas library is also required to transform the .csv to a Dataframe object):

import pandas
from sklearn.ensemble import RandomForestClassifier
from sklearn2pmml import sklearn2pmml
from sklearn.model_selection import GridSearchCV
from sklearn2pmml.pipeline import PMMLPipeline

After importing the required libraries, we convert the iris_data.csv to a pandas dataframe and then create the features (X) as well as the goal (y) vectors:

iris_df = pandas.read_csv("iris_data.csv")
iris_X = iris_df[iris_df.columns.difference(["class"])]
iris_y = iris_df["class"]

To best tune the random forest, we will use GridSearchCV and cross-validation. We want to test which parameters have the best validation metrics, and for this, we will use a utility function that will print the results:

def print_results(results):
    print('BEST PARAMS: {}\n'.format(results.best_params_))
    means = results.cv_results_['mean_test_score']
    stds = results.cv_results_['std_test_score']
    for mean, std, params in zip(means, stds, results.cv_results_['params']):
        print('{} (+/-{}) for {}'.format(round(mean, 3), round(std * 2, 3), params))

We create the random forest model and train it with different numbers of estimators and maximum depths. We will then call the previous function to compare the results for the different parameters:

rf = RandomForestClassifier()
parameters = {
    'n_estimators': [5, 50, 250],
    'max_depth': [2, 4, 8, 16, 32, None]
}
cv = GridSearchCV(rf, parameters, cv=5)
cv.fit(iris_X, iris_y)
print_results(cv)

To convert the model to a PMML file, we need to create a PMMLPipeline object, in which we pass the RandomForestClassifier with the tuning parameters we identified in the previous step (please note that in your case, the parameters can be different than in my example).
You can check the sklearn2pmml  documentation  to see other examples for creating this PMMLPipeline object :   pipeline = PMMLPipeline([ ("classifier", RandomForestClassifier(max_depth=4,n_estimators=5)) ]) pipeline.fit(iris_X, iris_y)   Then we perform the export:   sklearn2pmml(pipeline, "RF_Iris.pmml", with_repr = True)   The model has now been exported as a PMML file in the same folder as the Jupyter Notebook file and we can upload it to ThingWorx Analytics.   Uploading and Exploring the PMML in Analytics To upload and use the model for scoring, there are two steps that you need to do: First, the PMML file needs to be uploaded to a ThingWorx File Repository Then, go to your Analytics Results thing (the name should be YourAnalyticsGateway_ResultsThing) and execute the service UploadModelFromRepository. Here you will need to specify the repository name and path for your PMML file, as well as a name for your model (and optionally a description)   If everything goes well, the result of the service will be an id. You can save this id to a separate file because you will use it later on. You can verify the status of this model and if it’s ready to use by executing the service GetDetails:   Assuming you want to use the PMML for scoring, but you were not the one to develop the model, maybe you don’t know what the expected inputs and the output of the model are. There are two services that can help you with this: QueryInputFields – to verify the fields expected as input parameters for a scoring job   QueryOutputFields – to verify the expected output of the model The resultType input parameter can be either MODELS or CLUSTERS, depending on the type of model,    Using the PMML for Scoring With all this information at hand, we are now ready to use this PMML for real-time scoring. In a Thing of your choice, define a service to test out the scoring for the PMML we have just uploaded. Create a new service with an infotable as the output (don’t add a datashape). The input data for scoring will be hardcoded in the service, but you can also add it as service input parameters and pass them via a Mashup or from another source. The script will be as follows:   // Values: INFOTABLE dataShape: "" let datasetRef = DataShapes["AnalyticsDatasetRef"].CreateValues(); // Values: INFOTABLE dataShape: "" let data = DataShapes["IrisData"].CreateValues(); data.AddRow({ sepal_length: 2.7, sepal_width: 3.1, petal_length: 2.1, petal_width: 0.4 }); datasetRef.AddRow({ data: data}); // predictiveScores: INFOTABLE dataShape: "" let result = Things["AnalyticsServer_PredictionThing"].RealtimeScore({ modelUri: "results:/models/" + "97471e07-137a-41bb-9f29-f43f107bf9ca", //replace with your own id datasetRef: datasetRef /* INFOTABLE */, });   Once you execute the service, the output should look like this (as we would have expected, according to the output fields in the PMML model):   As you have seen, it is easy to use a model built in Python in ThingWorx Analytics. Please note that you may use it only for scoring, and the model will not appear in Analytics Builder since you have created it on a different platform. If you have any questions about this brief written tutorial, let me know.
View full tip
Back in 2018 an interesting capability was added to ThingWorx Foundation allowing you to enable statistical calculation of service and subscription execution.

We typically advise customers to approach this with caution for production systems, as the additional overhead can be more than you want to add to the work the platform needs to handle. This said, these statistics, used consciously, can be extremely helpful during development, testing, and troubleshooting to help ascertain which entities are executing what services and where potential system bottlenecks or areas deserving performance optimization may lie.

Although I've used the Utilization Subsystem services for statistics for some time now, I've always found that the Composer table view is not sufficient for a deeper multi-dimensional analysis. Today I took a first step in remedying this by getting these metrics into Excel, and I wanted to share it with the community as it can be quite helpful in giving developers and architects another view into their ThingWorx applications, and in taking and comparing benchmarks to ensure that operation and scaling are happening as expected when the application was put into production.

Utilization Subsystem Statistics
You can enable and configure statistics calculation from the Subsystem Configuration tab. The help documentation does a good job of explaining this so I won't repeat it here. Base guidance is not to use Persisted statistics, nor percentile calculation, as both have significant performance impacts. Aggregate statistics are less resource intensive as there are fewer counters, so this would be more appropriate for a production environment. Specific entity statistics require greater resources, and this will scale up as well with the number of provisioned entities that you have (i.e. 1,000 machines versus 10,000 machines), whereas aggregate statistics will remain more constant as you scale up your deployment and its load.

Utilization Subsystem Services
In the subsystem Services tab, you can select "UtilizationSubsystem" from the filter drop down and you will see all of the relevant services to retrieve and reset the statistics. Here I'm using the GetEntityStatistics service to get entity statistics for Services and Subscriptions, giving us something like this.

Using Postman to Save the Results to File
I have used Postman to do the same REST API call, to format the results as HTML, and to save these results to file so that they can be imported into Excel. You need to call '/Thingworx/Subsystems/UtilizationSubsystem/Services/GetEntityStatistics' as a POST request with the Content-Type and Accept headers set to 'application/xml'. Of course you also need to add an appropriately permissioned and secured AppKey to the headers in order to authenticate and get service execution authorization. You'll note the Export Results > Save to a file menu over on the right to get your results saved.

Importing the HTML Results into Excel
As simple as I would like to hope that getting a standard web formatted file into Excel should be, it didn't turn out to be as easy as I would have hoped, and so I had to switch over to Windows to take advantage of Power Query. From the Data ribbon, select Get Data > From File > From XML. Then find and select the HTML file saved in the previous step. Once it has loaded the file and done some preparation, you'll need to select the GetEntityStatistics table in the results on the left.
This should display all of the statistics in a preview table on the right. Once the query completes, you should have a table showing your statistical data ready for... well... slicing and dicing. The good news is that I did the hard part for you, so you can just download the attached spreadsheet and update the dataset with your fresh data to have everything parsed out into separate columns for you. Now you can use the column filters to search for entity or service patterns, or to select specific entities or attributes that you want to analyze. You'll need to clear the column filters later to get your whole dataset back.

Updating the Spreadsheet with Fresh Data
In order to make this data and its analysis more relevant, I went back, reset all of the statistics, and took a new sample which was exactly one hour long. This way I would get correct recent min/max execution time values, as well as a better understanding of just how many executions/triggers are happening in a one-hour period for my benchmark. Once I got the new HTML file saved, I went into Excel's Data ribbon, selected a cell in the data table area, and clicked "Queries & Connections", which brought up the pane on the right showing my original query. Hovering over this query, I'm prompted with some options and chose "Edit". Then I clicked on the tiny little gear to the right of "Source" over on the pane on the right side. Finally I was able to select the new file and Power Query opened it up for me. I just needed to click "Close & Load" to save and refresh the query providing data to the table. The only thing at this point is that I didn't have my nice little sparklines, as my regional decimal character is not a period - so I selected the time columns and did a "Replace All" from '.' to ',' to turn them into numbers instead of text.

Et voila! There you have it - ready to sort, filter, search and review to help you better understand which parts of your application may be overly resource hungry, or even to spot faulty equipment that may be communicating and triggering workflows far more often than it should.

Specific vs General
Depending on the type of analysis that you're doing, you might find that the aggregate statistics are a better option. As there will be far, far fewer of them than the entity-specific statistics, they'll do a better job of giving you a holistic view of the types of things that are happening with your ThingWorx application's execution. The entity-specific data set that I'm showing here would be a better choice for troubleshooting and diagnostics, to try to understand why certain customers/assets/machines are behaving strangely, as we can specifically drill into these stats. Keep in mind however that you should then compare these findings with the general baseline to see how this particular asset is behaving compared to the whole fleet.

As a size guideline: I did an entity-specific version of this file for a customer with 1,000 machines and the Excel spreadsheet was 7 MB compared to the 30 KB of the one attached here, and just opening it and saving it was tough for Excel (likely due to all of my nested formulas). Just keep this in mind as you use this feature, as there is memory overhead, meaning also garbage collection and associated CPU usage.
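For those who prefer a command line over Postman, the same REST call described above can be scripted. A sketch with curl is shown below; the server URL and AppKey values are placeholders:

    # Same GetEntityStatistics call as made in Postman above; replace the host and appKey values.
    curl -X POST \
         -H "Content-Type: application/xml" \
         -H "Accept: application/xml" \
         -H "appKey: <your-app-key>" \
         -o GetEntityStatistics.xml \
         "https://your-server/Thingworx/Subsystems/UtilizationSubsystem/Services/GetEntityStatistics"

The saved file can then be pulled into Excel through the same Power Query steps described above.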
View full tip
ThingWorx Docker Overview and Pitfalls to Avoid    by Tori Firewind of the IoT EDC Containers are isolated and can run side-by-side on the same machine, but they share the host OS, making them more efficient in terms of memory usage and scalability.   Docker is a great tool for deploying ThingWorx instances because everything is pre-packaged within the Docker image and can be stored in a repository ready for deployment at any time with little configuration required.  By using a different container for every component of an application, conflicting dependencies can be avoided. Containers also facilitate the dev ops process, providing consistent application deployments which can be set up, taken down, and tested automatically using scripts.   Using containers is advantageous for many reasons: simplified configuration, easier dev ops management, continuous integration and deployment, cost savings, decreased delivery time for new application versions, and many versions of an application running side-by-side without any wasted resources setting them up or tearing them down. The ThingWorx Help Center is a great resource for setting up Docker and obtaining the ThingWorx Docker files from the PTC Software Downloads website. The files provided by PTC handle the creation of the image entirely, simplifying the process immensely. All one has to do is place the ThingWorx version and all of the required dependencies in the staging folder, configure the YML file, and run the build scripts. The Help Center has all of the detailed information required, but there are a few things worth noting here about the configuration process.   For one thing, the platform-settings.json file is generated based on the options given in the YML file, so configuration changes made within this configuration file will not persist if the same options aren’t given in the YML file. If using Docker Desktop to run an image on a Windows machine, then the configuration options must be given in an ENV file that can be referenced from the command used to start the image. The names of the configuration parameters differ from the platform-settings.json file in ways that are not always obvious, and a full list can be found here.   For example, if extension imports need to be enabled on a ThingWorx instance running in Docker, then the EXTPKG_IMPORT_POLICY_ENABLED option must be added to the environment section of the YML file like this:     environment: - "CATALINA_OPTS=-Xms2g -Xmx4g" # NOTE: TWX_DATABASE_USERNAME and TWX_DATABASE_PASSWORD for H2 platform must # be set to create the initial database, or connect to a previous instance. - "TWX_DATABASE_USERNAME=dbadmin" - "TWX_DATABASE_PASSWORD=dbadmin" - "EXTPKG_IMPORT_POLICY_ENABLED=true" - "EXTPKG_IMPORT_POLICY_ALLOW_JARRES=true" - "EXTPKG_IMPORT_POLICY_ALLOW_JSRES=true" - "EXTPKG_IMPORT_POLICY_ALLOW_CSSRES=true" - "EXTPKG_IMPORT_POLICY_ALLOW_JSONRES=true" - "EXTPKG_IMPORT_POLICY_ALLOW_WEBAPPRES=true" - "EXTPKG_IMPORT_POLICY_ALLOW_ENTITIES=true" - "EXTPKG_IMPORT_POLICY_ALLOW_EXTENTITIES=true" - "EXTPKG_IMPORT_POLICY_HA_COMPATIBILITY_LEVEL=WARN" - "DOCKER_DEBUG=true" - "THINGWORX_INITIAL_ADMIN_PASSWORD=Pleasechangemenow"   Note that if the container is started and then stopped in order for changes to the YML file to be made, the license file will need to be renamed from "successful_license_capability_response.bin" to "license_capability_response.bin" so that the Foundation server can rename it. 
Failing to rename this file may cause an error to appear in the Application Log, and the server to act as if no license was ever installed: "Error reading license feature info for twx_realtime_data_sub".   In Docker Desktop on a Windows machine, create a file called whatever.env and list the parameters as shown here: Then, reference this environment file when bringing up the machine using the following command in Powershell:      docker run -d --env-file h2.env -p 8080:8080 -v ${pwd}/ThingworxPlatform:/ThingworxPlatform -v ${pwd}/ThingworxStorage:/ThingworxStorage -it <image_id>     Notice in this command that the volumes for the ThingworxPlatform and ThingworxStorage folders are specified with the “-v” options. When building the Docker image in Linux, these are given in the YML file under the volumes section like this (only change the path to local mount on the left side of the colon, as the container mount on the right side will never change):      volumes: - ./ThingworxPlatform:/ThingworxPlatform - ./ThingworxStorage:/ThingworxStorage - ./tomcat-logs:/opt/apache-tomcat/logs     Specifying the volumes this way allows for ThingWorx logs and configuration files to be accessed directly, a crucial requirement to debugging any issues within the Foundation instance. These volumes must be mapped to existing folders (which have write permissions of course) so that if the instance won’t come up or there are any other issues which require help from Tech Support, the logs can be copied out and shared. Otherwise, the Docker container is like a black box which obscures what is really going on. There may not be any errors in the Docker logs; the container may just quit without error with no sign of why it won’t stay up. Checking the ThingWorx and Tomcat logs is necessary to debugging, so be sure to map these volumes correctly.   Once these volumes are mapped and ThingWorx is successfully making use of them, adding a license file to the Docker instance is simple. Use the output in the ThingworxPlatform folder to obtain the device ID, grab a valid license file, and put it right back into that ThingworxPlatform folder, exactly the same way as on a regular instance of ThingWorx. However, if the Docker image is being used for a dev ops process, a license may not be necessary. The ThingWorx instance will work and allow development for a time before the trial license expires, which normally will be enough time for developers to make their changes, push those changes to a repository, and tear the container down.   Another thing worth noting about ThingWorx Docker image creation is that the version of Java supplied in the staging folder must match the compatibility requirements for each version of ThingWorx. This is the version of Java used by the container to run the Foundation server. In versions of ThingWorx 9.2+, this means using the Amazon Corretto version of Java. The image absolutely will not start ThingWorx successfully if older versions of Java are used, even if the scripts do successfully build the image.   Also note that in the newer versions of ThingWorx Docker, the ThingWorx Foundation version within the build.env file is used throughout the Docker image creation process. Therefore, while the archive name can be hard-coded to whatever is desired, the version should be left as is, including any additional specifications beyond just the version number. 
For example, the name of the archive can be given as Thingworx-Platform-H2-9.2.0.zip (a prettier version of the archive name than is used by default), but the PLATFORM_VERSION should still be set to 9.2.0-b30 (which should be how it appears within the build.env file upon download of the ThingWorx Docker files).   Paying attention to every note in the Help Center is critically important to using ThingWorx Docker, as the process is extensive and can become very complicated depending on how the image will be used. However, as long as the volumes are specified and the log files accessible, debugging any issues while bringing up a Docker-contained ThingWorx instance is fairly straightforward.     Credits: Images borrowed from ThingWorx Docker Containerization Tech Talk by Adrian Petrescu
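For completeness, here is a sketch of what the whatever.env (or h2.env) file mentioned earlier for Docker Desktop might contain. The parameter names follow the ThingWorx Docker conventions shown above; the values are examples only:

    # Example env file for docker run --env-file (values are placeholders)
    CATALINA_OPTS=-Xms2g -Xmx4g
    TWX_DATABASE_USERNAME=dbadmin
    TWX_DATABASE_PASSWORD=dbadmin
    THINGWORX_INITIAL_ADMIN_PASSWORD=Pleasechangemenow
    EXTPKG_IMPORT_POLICY_ENABLED=true
    DOCKER_DEBUG=true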
View full tip
5 Common Mistakes to Developing Scalable IoT Applications by Tori Firewind and the IoT EDC Team Introduction To build scalable applications, it’s necessary to identify common mistakes and avoid them at the early stages of development. In an expert session this past month, the PTC Enterprise Deployment Team elaborated on why scalability is important and how to avoid the common development pitfalls in IoT. That video presentation has been adapted here for visual consumption of the content as well.   What is Scalability and Why Does it Matter Enterprise ready applications can scale and easily be maintained, which is important even from day 1 because scalability concerns are the largest cause for delays to Go Lives.  Applications balance many competing requirements, and performance testing is crucial to ensure an application is ready for Go Live. However, don't just test how many remote assets can connect at once, but also any metrics that are expected to increase in time, like the number of remote properties per thing, the frequency of reporting from those properties, or the number of users accessing the system at once. Also consider how connecting more assets will affect the user experience and business logic, and not just the ability to ingest data.   Common Mistake 1: Edge Property Updates Because ThingWorx is always listening for updates pushed from the Edge and those resources are always in use, pulling updates from the Foundation side wastes resources. Fetch from remote every read is essentially a round trip, so it's slower and more memory intensive, but there are reasons to do it, like if the quality tag is needed since the cache doesn't store it. Say a property is pushed at 11:01, and then there's a network issue at 11:02. If the property is pulled from the cache, it will pull the value sent at 11:01 without any indication of there being a more recent value on the Edge device. Most people will use the default options here: read from server cache, which relies on the Edge to push updates, and the VALUE push type, and configuring a threshold is a good idea as well. This way, only those property updates which are truly necessary are sent to the Foundation server. Details on property aspects can be found in KCS Article 252792.   This is well documented in another PTC Community post. This approach is necessary and considered a best practice if there is event logic which depends on multiple properties at once. Sending all of the necessary properties to determine if an event should fire in one Infotable ensures there is no need to query the database each time a property update comes in from the Edge, which ensures independent business logic and reduces the load on the database to improve ingestion performance. This is a very broad topic and future articles will address it more specifically. The When Disconnected property aspect is a good way to configure what happens with Edge property values in a mass disconnect scenario. If revenue depends on uptime, consider losing any data that changes while a device is disconnected. All of the updates can be folded into a single value if the changes themselves aren't needed but an updated value is needed to populate remote properties upon reconnect. Many customers will want to keep all of their data, even when a device is offline and use data stores. In this case, consider how much data each Edge device can store (due to memory limitations on the devices themselves), and therefore how long an outage can last before data is lost anyway. 
Also consider if Foundation can handle massive spikes in activity when this data comes streaming in. Usually, a Connection Server isn't enough. Remember that the more data needs to be kept, the greater the potential for a thundering herd scenario.   Handling a thundering herd scenario goes beyond sizing considerations. It is absolutely crucial to randomize the delay each device will wait before attempting to reconnect. It should be considered a requirement to have the devices connect slowly and "ramp up" over time for multiple reasons. One is that too much data coming in too fast could overwhelm the ingestion queue and result in data loss. Another is that the business logic could demand so many system resources, that the Foundation server crashes again and again and cannot be recovered. Turning off the business logic it isn't possible if the downtime is unexpected, so definitely rely instead on randomized reconnection times for Edge devices.   Common Mistake 2: Overlooking Differences in HA To accommodate a shared thing model across many servers, changes had to be made in how the thing model is stored and the model tree is walked by the Foundation servers. Model information is no longer cached at the Thing level, and the model tree is therefore walked every time model information is needed, so the number of times a Thing is directly referenced within each service should be limited (see the Help Center for details).   It's best to store whatever information is needed from a Thing in an Infotable, making the Things[thingName] reference a single time, outside of any loops. Storing the property definitions outside of the loop prevents the repetitious Thing references within the service, which otherwise would have occurred twice for each property (for both the name and the description), and then again for every single property on the Thing, a runtime nightmare.   Certain states previously held in memory are now shared across the cluster, like property values, Thing states, and connection statuses. Improvements have been made to minimize the effects of latency on queries, like how they now only return property values on associated Thing Shapes or Thing Templates. Filtering for properties on implementing Things is still possible, but now there is a specific service to do it, called GetThingPropertyValues (covered in detail in the Help Center).    In the script shown above, the first step is a query to get the names of all implementing things of a particular Thing Shape. This is done outside of any loops or queries, so once per service call. Then, an Infotable is built to store what would have been a direct reference to each thing in a traditional loop. This is a very quick loop that doesn't add much by way of runtime since it is all in memory, with no references to the thing model or the database, instead using the results of the first query to build the Infotable. Finally, this thing reference Infotable is passed into the new service GetThingPropertyValues to retrieve all of the property info for all of these things at once, thereby only walking the thing model once. The easiest mistake people would make here is to do a direct thing reference inside of a loop, using code like Things[thingName].Get() over and over again, thereby traversing the thing model repeatedly and adding a lot of runtime. QueryImplementingThingsOptimized is another new service with new parameters for advanced configuration. 
Searches can now be done on particular networks or to particular depths, and there's an offset parameter that allows for a maximum number of items to be returned starting at any place in the list of Things, where previously, if you needed the Things at the end of the list, you had to return all of the Things. All of these options are detailed in the Help Center, as well as the restrictions listed in the image above.

Common Mistake 3: Async Service Misuse
Async services are sometimes required, say if a user has to trigger many updates on many remote things at once by the click of a button on a mashup that should not be locked up waiting for service completion. Too many async service calls, though, result in spikes in activity and competition for resources. To avoid this mistake, do not use async unless strictly necessary, and avoid launching too many async threads in parallel. A thread dump will show how many threads there are and what they are doing.

Common Mistake 4: Thread Pool Overload
Adding more threads to the pool may be beneficial in certain circumstances, like if the threads are waiting on other resources to complete their tasks, look stuff up in the database (I/O), or unlock data that can only be accessed one thread at a time (property writes). In this case, threads are waiting on other resources, and not the CPU, so adding more threads to the pool can improve performance. However, with too many threads, performance degradation will occur due to increased contention, wasted CPU cycles, and context switches. To check if there are too many or not enough threads in the pool, take thread dumps and time the completion of requests in the system. Also watch the subsystem memory usage, and note that the size of the queue should never approach the max. Also consider monitoring the overall performance of the system (CPU and Memory) with a tool like Grafana, and remember that a good performance test properly exercises all of the business logic and induces threads in a similar way to real world expectations.

Common Mistake 5: Stream Etiquette
Upserts, or updates to database tables, are expensive operations that can interfere with ingestion if they are performed on the wrong tables. This is why Value Stream and Stream data should never be updated by end users of the application. As described in the DGIS document on best practices, aggregation is the key to unlocking optimal performance because it reduces the size of database tables that require upserts. Each data structure shown here has an optimal use in a well-designed ThingWorx application.

Data Tables are great for storing overview information on all of the Things in one view, and queries on this data source are the fastest. Update this data source as often as possible (by timer), allowing enough time for updates to be gathered and any necessary calculations made. Data Tables can also be updated by end users directly because each row locks one at a time during updates. Data Tables should be kept as small as possible to improve performance on mashups, so for instance, consider using one to show all Things per region if there are millions of Things. Roll up information is best stored here to avoid calculations upon mashup load (a sketch of such roll-up logic follows below), and while a real-time view of many thousands of things at once is practically impossible, this option allows for a frequently updated overview of many things, which can also drill down to other mashup views that are real-time for one Thing at a time.
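To make the roll-up idea concrete, a timer-driven service might periodically condense recent history into one overview row per Thing in a Data Table. This is only a sketch under assumptions: the Thing Shape, Data Table, property, and DataShape names are invented, Temperature is assumed to be a logged property, and the Data Table's DataShape is assumed to use MachineName as its primary key.

    // Sketch of roll-up logic run by a timer subscription (all entity names are hypothetical).
    var machines = ThingShapes["MachineShape"].QueryImplementingThings({ maxItems: 1000 });

    for each (var machine in machines.rows) {
        // Pull the last 15 minutes of logged history for this machine
        var history = Things[machine.name].QueryPropertyHistory({
            startDate: dateAddMinutes(new Date(), -15),
            endDate: new Date(),
            maxItems: 10000
        });

        // Reduce it to a single average value
        var sum = 0;
        for each (var row in history.rows) { sum += row.Temperature; }
        var avg = history.rows.length > 0 ? sum / history.rows.length : 0;

        // Upsert one overview row per machine into the Data Table
        var values = Things["MachineOverviewTable"].CreateValues();
        values.AddRow({ MachineName: machine.name, AvgTemperature: avg, LastUpdated: new Date() });
        Things["MachineOverviewTable"].AddOrUpdateDataTableEntry({
            values: values,
            sourceType: "Thing",
            source: me.name
        });
    }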
Value Streams are best used for data ingestion, and queries against them should be kept to a minimum, largely performed by the roll-up logic that populates the Data Tables mentioned above. Queries that chart all of the incoming data are best placed on individual Thing views so that only a handful of users are querying the same data sources at a time. Also be sure to use start and end dates and to make use of the "source" field to improve query performance and create a better user experience. Due to the massive size of the corresponding database tables, it's best to avoid updating Value Streams outside of the data ingestion process altogether.

Streams are similar, but better suited for storing aggregated, historical data. Usually once per day or per week (outside of business hours if possible), Value Stream data is smoothed or reduced into fewer data points and then stored in Streams. This allows data to be kept for longer periods of time on the server without using as much memory or hurting query performance. The high-volume ingested data sources can then be purged frequently, as discussed below.

Infotables are the most memory intensive, and are really designed to hold only a small number of rows at a time, usually to facilitate the business logic. Sometimes they are stored in Streams or Data Tables if they aren't expected to grow large (see the DGIS Coffee Machine App for an example). Infotables should never be logged; if they are used to transmit Edge property updates (as in the Property Set Approach), they should be processed into other logged (usually local) properties.

Referring to the properties themselves is how to get real-time information onto a mashup, for example by using the GetProperties service and its auto-update option, which relies on internal websockets. This should be done on individual Thing views only, and sizing considerations need to be made if there will be many of these websockets open at once, for example if many end users are viewing real-time data at the same time.

In newer versions of ThingWorx, the ingestion batching settings cannot be updated directly; instead, find the system object called ThingWorxPersistenceProvider and use the service UpdateStreamDataProcessingSettings. ThingWorx Foundation processes data received from remote devices in batches in order to manage the data flow and reduce database churn. These settings configure how large those batches are and how frequently they are flushed to the database (detailed in full in KCS Article 240607). This is very advanced configuration that depends heavily on use case and infrastructure, but some guidance applies to most deployments: adjusting the scan rate is usually not beneficial; a healthy queue should never approach its maximum; and the defaults differ by database because the databases function differently. InfluxDB generally works better with fewer processing threads and more Things per thread, while PostgreSQL can have many threads, preferably with fewer Things per thread. That's why the defaults use the same number of threads for both (and this can be changed), but Influx has a larger block size and size threshold because it can handle more items per thread.

Value Streams ingest all data into the Foundation server, so the database tables that correspond to these data sources grow very large, very quickly, and need to be purged often and outside of business hours, usually once a day or once per week. A minimal sketch of such a roll-up and purge job is shown below.
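Here is a minimal sketch of a once-a-day job for a single Thing: it averages the last day of one logged property from the Value Stream, appends a single aggregated point to a historical Stream, and then purges the raw history. The Thing name ("Asset001"), Stream name ("AssetDailyStream"), Data Shape fields, property name, and the purge service parameters are all assumptions to verify against the Help Center.

```javascript
// Sketch only: daily aggregation of Value Stream data into a Stream, then purge.
// Entity names, field names, and purge parameters are hypothetical.
var end = new Date();
var start = dateAddDays(end, -1);   // aggregate the last 24 hours

// Query the raw ingested history, bounded by start and end dates.
var history = Things["Asset001"].QueryPropertyHistory({
    startDate: start,
    endDate: end,
    maxItems: 100000,
    oldestFirst: true
});

// Reduce the raw points to a single aggregated value (a simple average here).
var sum = 0;
for each (var row in history.rows) {
    sum += row.temperature;
}
var avg = history.rows.length > 0 ? sum / history.rows.length : 0;

// Append the aggregated point to the historical Stream.
var entry = Things["AssetDailyStream"].CreateValues();
entry.AddRow({ thingName: "Asset001", avgTemperature: avg });
Things["AssetDailyStream"].AddStreamEntry({
    values: entry,
    timestamp: end,
    sourceType: "Thing",
    source: me.name
});

// Purge the raw Value Stream data that has now been aggregated.
// (Parameter names are assumptions; confirm the signature for your version.)
Things["Asset001"].PurgeAllPropertyHistory({
    startDate: start,
    endDate: end,
    immediate: true
});
```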
The key is to reduce the data down to fewer points and push them into Streams for historical reference. For a span of years, a single point per day might be enough; for a span of hours, a point per minute might suffice. Push the aggregated data into Streams, and then purge the raw data as soon as it is no longer needed.

In Conclusion
The document attached to this blog entry came out of my first exposure to using the C SDK on a Raspberry Pi. I took notes on what I had to do to get my own simple edge application working, and I think it is a good introduction to using the C SDK to report real, sampled data. It also demonstrates how to use the C SDK without HTTPS, including how to turn off HTTPS support. I would appreciate any feedback on this document and on what additions might be useful to anyone else who tries this on their own.