IoT Tips
Hi, everyone!

We’re actively working towards the ThingWorx 9.0 release, and we’re ready to provide a sneak peek into the biggest feature of 9.0: Active-Active Clustering for High Availability configuration.

You may be wondering: doesn’t ThingWorx already offer High Availability?

Yes, ThingWorx already supports a High Availability configuration. Previous versions of ThingWorx, such as the 8.x releases, support an Active-Passive configuration, where one “active” ThingWorx server performs all processing and maintains the live connections to other systems such as databases and connected assets. Meanwhile, in parallel, a second “passive” ThingWorx server acts as a mirror image that is regularly updated with data but does not maintain active connections to any of the other systems. If the “active” ThingWorx server fails, the “passive” server is promoted to primary, but it can take a few minutes for it to establish connections to the other systems.

So, how is ThingWorx Active-Active different?

Active-Active configuration differs from Active-Passive in that all the ThingWorx servers in the cluster are “active.” Not only is data mirrored across all ThingWorx servers, but all of the servers, instead of only one, maintain live connections with the other systems. This way, if any of the ThingWorx servers fail, the other servers take over instantaneously with no recovery time.

Since all ThingWorx servers are active, they process in parallel, so the cluster can handle more data than a single server or an Active-Passive cluster. Simply put, multiple servers working together outperform a single server. This allows customers to scale a deployment simply by adding more ThingWorx servers to the cluster (horizontal scaling), which avoids the scaling limits you hit when increasing the performance of a single server.

What does that mean for me?
- Higher Availability: You can avoid single points of failure and configure the ThingWorx Foundation platform in an Active-Active cluster mode to achieve the highest availability for your IIoT systems and applications.
- Increased Scalability: You can horizontally scale from one to many ThingWorx servers to easily manage large amounts of your IIoT data at scale more smoothly than ever before.

Stay tuned! We’ll be posting more information on Active-Active Clustering: how it's achieved in ThingWorx, architectural component overviews, and what it means for your ThingWorx deployment!

In the meantime, we're running the Active-Active Clustering Beta Program. Interested in participating? Reach out to Ryan Servais (rservais@ptc.com) or Ayush Tiwari (atiwari@ptc.com) to learn more about participation!

Stay connected!
Kaya
The term ‘Extension’ or ‘Plugin’ often has many different meanings. From the point of view of a Product Manager, it often means an ‘easy’ way to add additional functionality to an existing piece of software. From the point of view of a Software Developer, it often means new syntax to memorize, extensive API documentation to read, and very often weeks or even months of trial and error to make this ‘easy’ addition of functionality.

We at ThingWorx recognized that, in order to be a true Platform, we need to make the creation of Extensions easier at all phases of the process. Our development team took on the challenge of understanding what normally makes this such a difficult process and set out to find solutions. We looked to the open source community for a platform that could offer our Extension developers a wide array of functionality that was easily accessible and familiar. This is where the Eclipse IDE comes in. We created a plugin of our own for the Eclipse IDE that makes it easy for an Extension Developer to create an Extension Project, generate ThingWorx-specific code, manage all the project configuration and build files, and package the Extension. All of this happens without the developer reading any API documentation or manually writing boilerplate code, leaving the Extension Developer free to focus on what they are best at: adding that additional functionality we mentioned earlier.

Extensibility and the true nature of a Platform

Extensibility is a core aspect of any true Platform, as it allows users to add functionality at any time to meet new and changing requirements. The capabilities of extensions are almost endless, but here are a few examples:
- Adding new UI Widgets to be used to visualize data
- Adding custom third-party libraries to be used seamlessly in ThingWorx
- Easily accessing REST APIs outside of ThingWorx
- Creating helper Resources to be used across the entire platform
- Adding custom Entities easily to multiple ThingWorx instances
- Providing custom Authenticators and Directory Services

As you can see, it is possible to do practically anything that you or our community might find useful for the Internet of Things. This is the nature of a true ‘Platform’.

How do I get started developing an extension?

Three steps will help you dive into Extension Development quickly. First, you need an instance of ThingWorx Foundation and the ability to navigate its UI, called Composer. Second, a basic understanding of the ThingWorx Model, or “Thing Model”, is necessary. Finally, you will need an installation of the Eclipse IDE with the ThingWorx Eclipse Plugin installed.

1)  Getting familiar with ThingWorx Foundation

The easiest way to get started playing with the ThingWorx platform is to head over to the Developer Portal and spin up a hosted ThingWorx Foundation server. This is as easy as clicking the ‘Create Foundation Server’ button, and a 30-day hosted instance will be created for you to use as your own personal development playground. If you prefer to set up and work in your own environment, you can also download a Developer Trial Edition to host on your own machine. To get familiar with ThingWorx Foundation, I recommend going through our ThingWorx Foundation Quickstart Guide, which introduces the core building blocks of the platform and guides you through a typical scenario of creating a simple IoT application.
2)  Understanding the Thing Model Basics

If you are already familiar with the Thing Model and know the basics of using the ThingWorx Platform, you can probably skip this section. If you aren’t, or just want a refresher, I’ll go over the basics here. The Thing Model is a collection of Entities that define your solution or business model in ThingWorx. You need a Thing Model for a few reasons. For those in software development, the Thing Model and its benefits are very similar to those of the object-oriented programming model. A good model allows you to maximize reusability, maintainability, and encapsulation. Having a sound Thing Model means that the future of your IIoT solution will be minimally affected by things like migration, iterative changes, permission changes, and security vulnerabilities.

The three most commonly used and most important Entities within your model are Things, Thing Templates, and Thing Shapes. These entity types will be the main building blocks for your Thing Model, but they are only a few of the Entity Types provided by ThingWorx. It is not necessary, but definitely recommended, to build a more comprehensive understanding of the Thing Model and to work with the entire collection of Entity Types within the ThingWorx Platform by going through the ThingWorx Foundation Quickstart Guide on the Developer Portal.

3)  Dive into the Eclipse Plugin and develop your own extension

Lastly, if you don’t have the Eclipse IDE installed, head over to the eclipse.org download page and get it installed on your machine. Once you have that, you can find the Eclipse Plugin on our Marketplace. As a next step, you will want to create a new Extension Project. We added a ThingWorx Extension perspective that enables all of the custom functionality.

Once in the correct perspective, we tried to make our plugin as intuitive as possible. We did this by following as many of the Eclipse usability standards as we could, which means that if you are familiar with the Eclipse IDE, you should be able to find most of the ThingWorx functionality on your own. For example, a simple ‘File’ >> ‘New’ will show you all the options for creating a Project. Creating a new ThingWorx Extension project requires a name and a ThingWorx Extension SDK (found on the ThingWorx Marketplace) matching the version of ThingWorx that you are building your extension for.

By utilizing the capabilities of the Eclipse IDE, we were able to automate the creation of many of the artifacts that had slowed extension developers down. The project wizard lets the plugin create and manage the build file and the metadata.xml, as well as manage all of the project dependencies. Once you have an Extension Project, you can use the ThingWorx menus to create your Entities. These actions create the necessary Java files and manage their corresponding entries within the metadata.xml of your project.

After creating your Entity, you can right-click the applicable Java file to reveal the ‘ThingWorx Source’ menu. This houses the options to generate the code for additional characteristics (Services, Properties, etc.), making the need to learn all of the custom annotations and method signatures a much less daunting process.

Once you have generated some code with the Plugin, it is time to get started implementing your solution - this is the point where my expertise ends and yours begins!
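To give a feel for the end result: once your extension is built and imported, the entities it contributes are addressed just like built-in ones from platform scripting. Here is a minimal sketch, assuming a hypothetical helper Resource named MyHelperFunctions with an Add service (both names are illustrative only, not part of the plugin or SDK):

// "MyHelperFunctions" and its "Add" service are hypothetical names for
// a Resource packaged in your extension; substitute your own entities.
var params = {
    a: 2 /* NUMBER */,
    b: 3 /* NUMBER */
};
// Resources[...] is how ThingWorx scripting addresses Resource entities,
// whether built-in or contributed by an extension.
var sum = Resources["MyHelperFunctions"].Add(params);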
If you are interested in getting some more in-depth information on this topic, check out these additional resources:
- Tutorial: We have created a tutorial that guides you step-by-step through the entire process of developing a ThingWorx extension.
- Webcast: Watch this 60-minute, interactive deep-dive into IIoT development and learn how to use the Eclipse Plugin to rapidly create a custom ThingWorx extension.

Head over to the Developer Portal and start bringing all of your great ideas to life!
Let's consider that we have two Streams, Stream1 and Stream2, with the same DataShape StreamDS. DataShape StreamDS has two fields: Id (NUMBER) and Name (STRING). We want to copy all the entries from Stream1 to Stream2.

Steps:
1. Open the Stream1 Stream in Composer and run the GetStreamEntriesWithData service.
2. In the popup, click the Create DataShape from Result option to create a new DataShape named GetStreamEntriesDS.
3. Create a Service and use JavaScript like the following (comments added for detail):

// Create a temporary InfoTable to hold the output of the GetStreamEntriesWithData service
var paramsForInfotable = {
    infoTableName: "InfoTable" /* STRING */,
    dataShapeName: "GetStreamEntriesDS" /* DATASHAPENAME */
};
// result: INFOTABLE
var InfotableForCopy = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(paramsForInfotable);

// Save the output of the GetStreamEntriesWithData service to the temporary InfoTable InfotableForCopy
var paramsForGetStreamEntriesWithDataService = {
    oldestFirst: false /* BOOLEAN */,
    maxItems: 10000 /* NUMBER */
};
// result: INFOTABLE dataShape: "GetStreamEntriesDS"
InfotableForCopy = Things["Stream1"].GetStreamEntriesWithData(paramsForGetStreamEntriesWithDataService);

// Read the data from the InfoTable row by row and add each entry to the new Stream
var tableLength = InfotableForCopy.rows.length;
for (var x = 0; x < tableLength; x++) {
    var row = InfotableForCopy.rows[x];

    // values: INFOTABLE (DataShape: StreamDS)
    var values = Things["Stream2"].CreateValues();
    values.Id = row.Id; // NUMBER
    values.Name = row.Name; // STRING

    var paramsForAddStreamEntryService = {
        sourceType: row.sourceType /* STRING */,
        values: values /* INFOTABLE */,
        location: row.location /* LOCATION */,
        source: row.source /* STRING */,
        timestamp: row.timestamp /* DATETIME */,
        tags: row.tags /* TAGS */
    };
    // AddStreamEntry(tags:TAGS, timestamp:DATETIME, source:STRING, values:INFOTABLE(StreamDS), location:LOCATION):NOTHING
    Things["Stream2"].AddStreamEntry(paramsForAddStreamEntryService);
}

var result = InfotableForCopy;
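As a quick sanity check after the copy, the same read service can be pointed at both Streams and the entry counts compared. A minimal sketch using only the GetStreamEntriesWithData service already shown above (note that maxItems bounds both reads, so a stream holding more entries than that would need a larger value or a batched comparison):

// Re-read both streams and confirm every entry made it across
var source = Things["Stream1"].GetStreamEntriesWithData({ oldestFirst: false, maxItems: 10000 });
var target = Things["Stream2"].GetStreamEntriesWithData({ oldestFirst: false, maxItems: 10000 });
// result: BOOLEAN - true when the entry counts match
var result = (source.rows.length === target.rows.length);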
DevOps. It’s not just a buzzword. It’s a true development methodology that can make all the difference in your application quality and release time. Today, I’ll walk you through how you can continuously integrate and deploy your ThingWorx applications to achieve CI/CD objectives as part of a DevOps-focused culture. At the end, I’ll provide a sneak peek of what you can expect in a future release (hint: we’re working on some awesome new CI/CD functionality).

Overview of ThingWorx DevOps and Common Tools

I’ll start by providing an overview of the DevOps cycle, and then I’ll provide more details around each step of the cycle. Before we can start, you’ll need to define your high-level architecture and functional requirements as part of the “Plan” phase.

Now, let’s build your ThingWorx app. Ready? Here we go!

Code

As with any software platform, developers can start working in any number of areas of the IoT application, from edge to visualization, rules authoring to data modelling. For the purpose of this article, we’ll start with the UI, but many of the same steps can be applied in any order. Also, we’ll just call out the high-level steps of development; for more info on building out each aspect of your application, please visit developer.thingworx.com.

- In ThingWorx Composer, build out your user interface with Mashups. Starting with UI can help you think about the types of data you want to collect from devices and systems and how you want to solve your unique requirements for the business. Starting at this point can also help you show live POCs and functional mockups to stakeholders.
- Once you’ve built some starter screens and a skeleton of app navigation, you can start adding in data through configuration in Composer by creating your Things, Templates, properties and services.
- [Optional] We offer 65 out-of-the-box widgets for the UI in the ThingWorx platform. There are times when you have specific visualization requirements for your application and the out-of-the-box widgets don’t quite satisfy them. We have a path for that, through our custom widget extensions. If you choose to develop your own widget extensions, you can do so through other IDEs like Eclipse or WebStorm.
- Custom development and extensions are not just for UI. We also allow you to define Thing entities and their custom services in Java. If you are developing extensions in this way, we’d recommend you do so using Eclipse to code and Gradle to build and drive tests. For instructions on how to create your own extension, see “Creating Customized ThingWorx Widgets” on page 42 of the ThingWorx Application Development Guide posted on Ask Kaya.
- With a good start on the data model, business logic and UI, some quick testing and validation is in order. You’ll probably also want to save all of this work to share with colleagues or move to other integration environments. Capture all of your entity and code artifacts (Mashups, style definitions, Thing Shapes, Thing Templates, JavaScript, etc.) by using the “Export to Source Control” feature from ThingWorx Composer to write entities to the file system. You can use Git or other source systems to monitor the file system and push to the remote repository of your choice (e.g. GitHub, Bitbucket, etc.). Again, if you are developing extensions outside of Composer, you’ll want to source control those items, too, from Eclipse or the file system directly.
Build

- [Optional] You can build an application package as an extension, with all entities and code, from Eclipse using the ThingWorx Eclipse plugin. When you build the project, it will create an extension zip file. Again, more info in the Application Development Guide.
- Make your life easy by using tools like Gradle or Maven. ThingWorx is very similar to other Java development systems, so Gradle and Maven can track your dependencies and create a package with all of the referenced extensions you may be using, putting them into one single zip file package.
- Once you have a package built, you can import it into test or integration environments. For added automation, create repeatable tasks like a job in Jenkins, so that every time your code is changed in the source repository (e.g. Git), it triggers a job to increment the version, build the project and create the package deliverables. Consider also configuring the Jenkins jobs to push artifacts to a central repository like Artifactory.

Test

Once your code has been built, we can’t forget about testing! Automation is king for DevOps!

- For ThingWorx apps, you should still design a test strategy for your application, and then define and create your tests. These can run in your local developer environment, as well as be triggered via build tasks/changes in the source repository. Tools like JUnit for your entities and Java-backed services, or Selenium for testing the Mashup UIs, can be used.
- You can create separate jobs in Jenkins, alongside the build, to run the integration and unit tests against an instance of ThingWorx that has the latest artifacts deployed into it.
- You can also do static code analysis using tools like PMD to find bugs, check style issues or identify inefficient code paths. To round out your app with performance and load testing, JMeter is one tool that you can leverage.

Release

Releasing is the culmination of the team’s great work! If the test results pass and the builds are green, you are good to go, and it’s time to establish your release build.

- Make sure that you consider a versioning scheme for your application and its artifacts. Semantic versioning is a pattern that can be implemented for your ThingWorx application. Correct versioning of ThingWorx packages affects your upgrade plans and is a signal to your users about the intent and content of the release. Again, see the Application Development Guide.
- Once a release milestone is met, you can create a source branch in Git for that milestone, which will have all the changes encompassed in that release. Configure a Jenkins job to create builds from that milestone branch for maintenance purposes.

Deploy + Operate + Monitor

If you’ve tested and released your application, it’s time for production and real users! Using the build and testing infrastructure you’ve set up earlier in the development process, you can also deploy your release builds to your target staging and production ThingWorx environments with Jenkins jobs, Artifactory and automated steps. Finally, as with anything, it is important to measure success and monitor performance via KPIs, trends and logs. You can also extract application insights and recommendations from the PTC System Monitor (PSM) tool, which uses Dynatrace; there is a guide available on how to install and deploy PSM.

There are many different paths through the platform and options for developers to match your local team processes and tools; this was simply a quick overview. Congrats!
You’re now equipped to build ThingWorx apps while leveraging software best practices and incorporating a DevOps culture!

What can I expect in a future release of ThingWorx?

Coming in a near-term release of ThingWorx, we’ll make it easier for you to continuously integrate and deploy your ThingWorx applications. How? Through new functionality that bolsters our packaging concepts, new cloud services to assist in deployment to environments, and an error-proof way to integrate applications with automated dependency awareness.

Stay tuned for more info about this exciting new deployment and application management functionality targeted for Fall 2019!

Reach out with any questions and stay connected.

-Kaya
Get MQTT (like mosquitto) operating with SSL:

1. Use http://rockingdlabs.dunmire.org/exercises-experiments/ssl-client-certs-to-secure-mqtt as your primary guide to building out your self-signed CA cert and your server cert and key. Simply follow their directions, with the one caveat of setting the IPLIST and HOSTLIST environment variables prior to executing the generate-CA.sh script. This is necessary for hosted environments like AWS, where the actual IP address of the system cannot be used to access the server from the internet. Put the external-facing IP address in IPLIST and the external-facing fully qualified domain name (FQDN) in HOSTLIST. If you have multiple usable IP addresses or hostname aliases, enclose them in quotes and separate them with spaces (export IPLIST="1.2.3.4 5.6.7.8"). Complete steps 1-3 in those instructions. This is sufficient to get the MQTT traffic encrypted and use it with ThingWorx.
2. Do not proceed until you can make a mosquitto_pub and mosquitto_sub pass data using the --cafile option, and get an error if you do not supply the --cafile option. Make sure you have a copy of the ca.crt file generated by the script above to reference in the commands. Note that it may be necessary to use the IP address rather than the FQDN.
mosquitto_sub --cafile path/to/ca.crt -h ipaddr -t topic
mosquitto_pub --cafile path/to/ca.crt -h ipaddr -t topic -m message
3. Create an MQTT Thing in ThingWorx based on the MQTT ThingTemplate, and create a property in the new Thing for sending messages to the MQTT broker. In the configuration page for the new MQTT Thing, set the serverName and serverPort, and check the useSSL checkbox. In the Property to MQTT topic mappings, create a publish entry that points to the property you created in the Thing, and set the topic to the MQTT topic on which you want to publish.
4. The ca.crt file created by the script above is the certificate for a new Certificate Authority (self-signed, so not really official). Clients may have to import this certificate into their trusted CA root store in order to make the encryption work. Add the ca.crt file from the MQTT broker system to a keystore file that will become Tomcat's truststore (the list of CAs trusted by the server). See the Tomcat documentation if you need to configure HTTPS on Tomcat as well. Create a new keystore if one does not already exist as a truststore:
keytool -import -trustcacerts -file /path/to/ca/ca.crt -alias CA_ALIAS -keystore path/to/TrustStore -storepass mypassword
Replace CA_ALIAS with some identifying string like MyPrivateCertificateAuthority (it did not appear to matter which CA_ALIAS value was used). Replace path/to/TrustStore with the file that already exists or that you want to create.
5. Add the following to the CATALINA_OPTS for starting Tomcat:
-Djavax.net.ssl.trustStore=path/to/TrustStore -Djavax.net.ssl.trustStorePassword=xxxx
Replace path/to/TrustStore with the pathname of the file you created or updated with keytool above, and replace xxxx with whatever password you used in the keytool command above.
6. Restart Tomcat. Check the MQTT Thing for its isConnected property; it should now be true. If it is not, check the log files for mosquitto and for Tomcat, looking for SSL issues.
7. Change the property value and see it appear in the output of a (properly constructed) subscriber, e.g. mosquitto_sub --cafile path/to/ca.crt -h ipaddr -t topic.
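For step 7 above, the property change does not have to be made by hand in Composer; any server-side script can drive it. A minimal sketch, assuming the MQTT Thing is named MqttThing1 and the publish-mapped property is named message (both placeholders for whatever you created in step 3):

// Writing the mapped property fires the publish mapping configured above,
// so the running mosquitto_sub should print this value on the topic.
Things["MqttThing1"].message = "hello over TLS";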
The Protocol Adapter Toolkit (PAT) is an SDK that allows developers to write a custom Connector that enables edge devices to connect to and communicate with the ThingWorx Platform.

In this blog, I will be dabbling with the MQTT Sample Project that uses the MQTT Channel introduced in PAT 1.1.2.

Preamble

All the PAT sample projects are documented in detail in their respective README.md. This post is an illustrated walk-through for the MQTT Sample project; please review its README.md for in-depth information. More reading is available in the Protocol Adapter Toolkit (PAT) overview. PAT 1.1.2 is supported with ThingWorx Platform 8.0 and 8.1 - not fully supported with 8.2 yet.

MQTT Connector features

The MQTT Sample project provides a Codec implementation that services MQTT requests, and a command-line MQTT client to test the Connector. The sample MQTT Codec handles Edge-initiated requests:
- read a property from the ThingWorx Platform
- write a property to the ThingWorx Platform
- execute a service on the ThingWorx Platform
- send an event to the ThingWorx Platform (also uses a ServiceEntityNameMapper to map an edgeId to an entityName)
All these actions require a security token that will be validated by a Platform service via an InvokeServiceAuthenticator.

This Codec also handles Platform-initiated requests (egress messages):
- write a property to the Edge device
- execute a service without response on the Edge device

Components and Terminology

MQTT messages originating from the Edge Device (inbound) are published to the sample MQTT topic. MQTT messages originating from the Connector (outbound) are published to the mqtt/outbound MQTT topic.

- Codec: A pluggable component that interprets Edge Messages and converts them to ThingWorx Platform Messages, to enable interoperability between an Edge Device and the ThingWorx Platform.
- Connector: A running instance of a Protocol Adapter created using the Protocol Adapter Toolkit.
- Edge Device: The device that exists external to the Connector which may produce and/or consume Edge Messages.
- (mqtt) Edge Message: A data structure used for communication defined by the Edge Protocol. An Edge Message may include routing information for the Channel and a payload for the Codec. Edge Messages originate from the Edge Device (inbound) as well as the Codec (outbound).
- (mqtt) Channel: The specific mechanism used to transmit Edge Messages to and from Edge Devices. The Protocol Adapter Toolkit currently includes support for HTTP, WebSocket, MQTT, and custom Channels. A Channel takes the data off of the network and produces an Edge Message for consumption by the Codec, and takes Edge Messages produced by the Codec and places the message payload data back onto the network.
- Platform Connection: The connection layer between a Connector and the ThingWorx core.
- Platform Message: The abstract representation of a message destined for and coming from the ThingWorx Platform (e.g. WriteProperty, InvokeService). Platform Messages are produced by the Codec for incoming messages and provided to the Codec for outgoing messages/responses.
Installation and Build

Protocol Adapter Toolkit installation

The media is available from PTC Software Downloads: ThingWorx Connection Server > Release 8.2 > ThingWorx Protocol Adapter Toolkit. Just unzip the media on your filesystem; there is no installer. The MQTT Sample Project is available in <protocol-adapter-toolkit>\samples\mqtt.

Eclipse Project setup

Prerequisites: Eclipse IDE (I'm using the Neon.3 release) and an Eclipse Gradle extension - for example the Gradle IDE Pack available in the Eclipse Marketplace. Import the MQTT Project via File > Import > Gradle (STS) > Gradle (STS) Project. Browse to <protocol-adapter-toolkit>\samples\mqtt, then [Build Model] and select the mqtt project.

Review the sample MQTT codec and test client

- Connector: mqtt > src/main/java > com.thingworx.connector.sdk.samples.codec.MqttSampleCodec
  - decode: converts an MqttEdgeMessage to a PlatformRequest
  - encode (3 flavors): converts a PlatformMessage, an InfoTable, or a Throwable to an MqttEdgeMessage
  Note that most of the conversion logic is common to all sample projects (websocket, rest, mqtt) and is done in a helper class: SampleProtocol. The SampleProtocol sources are available in the <protocol-adapter-toolkit>\samples\connector-api-sample-protocol project - it can be imported into Eclipse the same way as the mqtt project. SampleTokenAuthenticator and SampleEntityNameMapper are also defined in the <protocol-adapter-toolkit>\samples\connector-api-sample-protocol project.
- Client: mqtt > src/client/java > com.thingworx.connector.sdk.samples.MqttClient
  A command-line MQTT client based on Eclipse Paho that allows testing of edge-initiated and platform-initiated requests.

Build the sample MQTT Connector and test client

Select the mqtt project, then RMB > Gradle (STS) > Task Quick Launcher > type Clean build + [enter]. This creates a distributable archive (zip+tar) in <protocol-adapter-toolkit>\samples\mqtt\build\distributions that packages the sample mqtt connector, some startup scripts, an xml with sample entities to import on the platform, and a sample connector.conf. Note that I will test the connector and the client directly from Eclipse, and will not use this package.

Runtime configuration and setup

MQTT broker

I'm just using a Mosquitto broker Docker image from Docker Hub:

docker run -d -p 1883:1883 --name mqtt ncarlier/mqtt

ThingWorx Platform appKey and ConnectionServicesExtension

From the ThingWorx Composer:
- Create an Application Key for your Connector (remember to increase the expiration date - to make it simple I bind it to Administrator)
- Import the ConnectionServicesExtension-x.y.z.zip and pat-extension-x.y.z.zip extensions available in <protocol-adapter-toolkit>\requiredExtensions

Connector configuration

Edit <protocol-adapter-toolkit>\samples\mqtt\src\main\dist\connector.conf and update the highlighted entries below to match your configuration:

include "application"
cx-server {
  connector {
    active-channel = "mqtt"
    bind-on-first-communication = true
    channel.mqtt {
      broker-urls = [ "tcp://localhost:1883" ]
      // at least one subscription must be defined
      subscriptions {
        "sample": [ "com.thingworx.connector.sdk.samples.codec.MqttSampleCodec", 1 ]
      }
      outbound-codec-class = "com.thingworx.connector.sdk.samples.codec.MqttSampleCodec"
    }
  }
  transport.websockets {
    app-key = "00000000-0000-0000-0000-000000000000"
    platforms = "wss://thingWorxServer:8443/Thingworx/WS"
  }
  // Health check service default port (9009) was in use on my machine.
  // Added the following block to change it.
  health-check {
    port = 9010
  }
}

Start the Connector

Run the Connector directly from Eclipse using the Gradle Task: RMB > Run As ... > Gradle (STS) Build.

(Alternate technique) Debug as Java Application from Eclipse. Select the mqtt project, then Run > Debug Configurations ...
- Name: mqtt-connector
- Main class: com.thingworx.connectionserver.ConnectionServer
- On the Arguments tab, add a VM argument: -Dconfig.file=<protocol-adapter-toolkit>\samples\mqtt\src\main\dist\connector.conf
- Select [Debug]

Verify connection to the Platform

From the ThingWorx Composer, go to Monitoring > Connection Servers and verify that a Connection Server with the name protocol-adapter-cxserver-<uuid> is listed.

Testing

Import the ThingWorx Platform sample Things

From the ThingWorx Composer, Import/Export > From File: <protocol-adapter-toolkit>\samples\mqtt\src\main\dist\SampleEntities.xml. Verify that WeatherThing, EntityNameConverter and EdgeTokenAuthenticator have been imported.
- WeatherThing: RemoteThing that is used to test our Connector
- EdgeTokenAuthenticator: holds a sample service (ValidateToken) used to validate the security token provided by the Edge device
- EntityNameConverter: holds a sample service (GetEntityName) used to map an edgeId to an entityName

Start the test MQTT client

I will run the test client directly from Eclipse. Select the mqtt project, then Run > Run Configurations ...
- Name: mqtt-client
- Main class: com.thingworx.connector.sdk.samples.MqttClient
- On the Arguments tab, add a Program argument: tcp://<mqtt_broker_host>:1883
- Select [Run]
Type the client commands in the Eclipse Console.

Test Edge-initiated requests

Read a property from the ThingWorx Platform. In the MQTT client console enter: readProperty WeatherThing temp

Sending message: {"propertyName":"temp","requestId":1,"authToken":"token1234","action":"readProperty","deviceId":"WeatherThing"} to topic: sample
Received message: {"temp":56.3,"requestId":1} from topic: mqtt/outbound

Notes:
- An authToken is sent with the request; it is validated by a platform service using the SampleTokenAuthenticator (this authenticator is common to all the PAT samples and is defined in <protocol-adapter-toolkit>\samples\connector-api-sample-protocol)
- An EntityNameMapper is not used by readProperty (no special reason for that)
- The PlatformRequest message built by the codec is ReadPropertyMessage

Write a property to the ThingWorx Platform. In the MQTT client console enter: writeProperty WeatherThing temp 20

Sending message: {"temp":"20","propertyName":"temp","requestId":2,"authToken":"token1234","action":"writeProperty","deviceId":"WeatherThing"} to topic: sample

Notes:
- An authToken is sent with the request; it is validated by a platform service using the SampleTokenAuthenticator
- An EntityNameMapper is not used by writeProperty
- The PlatformRequest message built by the codec is WritePropertyMessage
- No Edge message is sent back to the device

Send an event to the ThingWorx Platform. In the MQTT client console enter: fireEvent Weather WeatherEvent SomeDescription

Sending message: {"requestId":5,"authToken":"token1234","action":"fireEvent","eventName":"WeatherEvent","message":"Some description","deviceId":"Weather"} to topic: sample

Notes:
- An authToken is sent with the request; it is validated by a platform service using the SampleTokenAuthenticator
- fireEvent uses an EntityNameMapper (SampleEntityNameMapper) to map the deviceId (Weather) to a Thing name (WeatherThing); the mapping is done by a platform service
- The PlatformRequest message built by the codec is FireEventMessage
- No Edge message is sent back to the device

Execute a service on the ThingWorx Platform ... can be tested with the GetAverageTemperature service on WeatherThing ...

Test Platform-initiated requests

Write a property to the Edge device. The MQTT Connector must be configured to bind the Thing with the Platform when the first message is received for the Thing; this was done by setting bind-on-first-communication=true in connector.conf. When a Thing is bound, the remote egress messages will be forwarded to the Connector. The Edge-initiated requests above should have done the binding, but if the Connector was restarted since, just bind again with: readProperty WeatherThing isConnected

From the ThingWorx Composer, update the temp property value on WeatherThing to 30. An egress message is logged in the MQTT client console:

Received message: {"egressMessages":[{"propertyName":"temp","propertyValue":30,"type":"PROPERTY"}]} from topic: mqtt/outbound

Execute a service on the Edge device ... can be tested with the SetNtpService on WeatherThing ...
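As a side note, the temp update used above to trigger the egress message can also be scripted instead of edited in Composer. A one-line sketch from any platform service, using the sample's WeatherThing:

// Updating the bound remote property pushes the change to the Connector,
// which the MQTT test client logs as an egress message.
Things["WeatherThing"].temp = 30;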
In this blog post I'm covering the practical aspects of setting up a Chain of Trust and configuring ThingWorx to use it. There are already some examples available on how to do this via the command line using a self-signed certificate, e.g. https://support.ptc.com/appserver/cs/view/solution.jsp?n=CS193947. In this example I'll be using the KeyStore Explorer, a graphical tool for managing keystores: http://keystore-explorer.org/. To learn more about the theory behind trust and encryption, see also Trust & Encryption - Theory.

The Chain of Trust

Creating a Root CA

- Open the KeyStore Explorer and create a new JKS KeyStore via File > New > JKS.
- Via Tools > Generate Key Pair, create a new key pair. Use the RSA algorithm with a key size of 4096. This size is more than sufficient.
- Set a validity period of e.g. 302 days. It's important that all subsequent certificates have a lower validity period than the original signer.
- Leave the other defaults and fill out the Name (address book icon). Only the Common Name is required - usually I never use the Email field, as I want to avoid receiving more spam than necessary.
- Click OK and enter an alias for the Root CA (leave the default). A new password needs to be set for the private key part of the certificate.
- The new Root CA will now show up in the list of certificates.

Creating an Intermediate CA

The Intermediate CA is signed by the Root CA and used to sign the server-specific certificate. This is a common approach because, in case the signing CA for the server-specific certificate gets corrupted, not all of the created certificates are affected, but only the ones signed by the corrupted CA.

- In the list, right-click the Root CA and choose Sign > Sign New Key Pair.
- Use the RSA algorithm with a key size of 4096 again.
- Ensure the validity period is less than for the Root CA, e.g. 301 days (because every second counts).
- Fill out the Name again - this time as Intermediate Certificate.
- An Intermediate Certificate is a Certification Authority (CA) in itself. To reflect this, a Basic Constraint must be added via Add Extensions: mark the checkbox for Subject is a CA.
- Confirm with OK, set the alias (or leave the default) and set the password for the private key part.
- The new Intermediate CA will now show up in the list of certificates.

Creating a Server Specific Certificate

- In the list, right-click the Intermediate CA and choose Sign > Sign New Key Pair.
- Use the RSA algorithm with a key size of 4096 again.
- Ensure the validity period is less than for the Intermediate CA, e.g. 300 days (because still, every second counts).
- Fill out the Name again - this time it's important to actually use the public server name, so that browsers can match the server identity with its name in the browser's address bar. If the server name does not match exactly, the certificates and establishing trust will not work!
- In addition to this, a Subject Alternative Name must be defined - otherwise the latest Chrome versions consider the certificate as invalid. Click on Add Extensions and the + sign, choose Subject Alternative Name, and add a new name with the + sign. Make it the same as the Common Name (CN) above.
- As before, confirm everything with OK, enter an alias and a password for the private key part.

Saving the keystore

All certificates should now show with their corresponding properties in the certificate list. Notice the expiry dates: the higher up the chain, the sooner the expiry date. If this is not the case, the certificates and establishing trust will not work!

Save the keystore via File > Save As and set a password - this time for the keystore itself. As Tomcat has a restriction, the password for the keystore must be the same as the password for the private key of the server-specific certificate. It's highly recommended and (hopefully) obvious that passwords for the other private keys should be different, and that the password for the keystore should be completely different as well. The keystore password will be shared with e.g. Tomcat - the private key password should never be shared with a source you do not trust!

Configuring ThingWorx

In ThingWorx, configure <Tomcat>/conf/server.xml as e.g. mentioned in https://support.ptc.com/appserver/cs/view/solution.jsp?n=CS193947. I'm using a connector on port 443 (as it's the default HTTPS port):

<Connector port="443" protocol="org.apache.coyote.http11.Http11NioProtocol" maxThreads="150"
    SSLEnabled="true" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" enableLookups="false"
    connectionTimeout="20000" keystoreFile="C:\ThingWorx\myKeystore.jks" keystorePass="changeme" redirectPort="8443" />

Restart Tomcat to pick up the new configuration. ThingWorx can now be called via https://<myserver.com>/Thingworx. It's not yet trusted by the client - so it shows as potentially unsafe, and for me it shows in German as well *sorry*. However, it's already using encrypted traffic, as every HTTPS connection will use TLS. In this case we're communicating in a secure way but do not know if the server (and its providers) can be trusted or not. Sharing credit card information or other personal details could be dodgy in such a scenario.

Trusting the Root CA

To trust our Root CA, we first have to export it. Every browser allows you to view the certificate; e.g. in Chrome this is done via the Developer Tools > Security. As we're using KeyStore Explorer, we can use it to export the Root CA: right-click it in the certificate list and Export > Export Certificate Chain. Leave the defaults and export as a .cer file.

Open the Certificate Manager by typing certmgr into the Windows search field. Open the Trusted Root Certification Authorities and right-click on the Certificates folder. Under All Tasks > Import, choose the file you've just exported for the Root CA. The Root CA is now a trusted certificate authority. As the ThingWorx server-specific certificate falls back to the Root CA, it's now also recognized by the browser, and the connection is considered secure and trustworthy. Congratulations!
We are excited to announce ThingWorx 8.4 is now available for download!

Key functional highlights

ThingWorx 8.4 covers the following areas of the product portfolio: ThingWorx Analytics and ThingWorx Foundation, which includes Connection Server and Edge capabilities.

ThingWorx Foundation

Next Generation Composer:
- File Repository Editor added for application file management
- New entity Config Table Editor to enable application configurability and customization
- Localization support for new languages: Italian, Japanese, Korean, Spanish, Russian, Chinese/Taiwan, Chinese/Simplified

Mashup Builder:
- Responsive Layout with new Layout Editor
- 13 new and updated widgets (beta)
- Theming Editor (beta)
- New Functions Editor
- New Personalized Workspace

Platform:
- Added support for AzureSQL, a relational database-as-a-service (DBaaS), as a new persistence provider: a PaaS database that is always running on the latest stable version of SQL Server Database Engine and patched OS, with 99.99% availability.
- Added support for InfluxData, a leading time series storage platform, as a new ThingWorx persistence provider: supports ingesting large amounts of IoT data and offers high availability with a clustering setup.
- Remote Access Extension: new extension for Remote Access and Control, supporting VNC and RDP desktop sharing for any remote device, with HTTP and SSH connectivity supported.
- Query Microservice: an optional microservice to offload the ThingWorx server by allowing query execution to occur in a separate process on the same or on a different physical machine.
- Click-and-go installers for Windows and Linux (RHEL) for the Postgres, AzureSQL and InfluxDB versions of ThingWorx.
- Thing Presence feature introduced, which indicates whether the connection of a thing is “normal” based on the expected behavior of the device.

Security:
- Major investments include updating 3rd party libraries, handling of data to address cross-site scripting (XSS) issues, and enhancements to the password policy, including a password blacklist.
- A significant number of security issues have been fixed in this release. It is recommended that customers upgrade as soon as possible to take advantage of these important improvements.

Docker Support:
- Added Dockerfile as a distribution medium for ThingWorx Foundation and Analytics
- Allows building a Docker container image that unlocks the potential of Dev and Ops

Note: Legacy Composer has been removed and replaced with the New Composer.

Documentation: ThingWorx 8.4 Reference Documents
- ThingWorx Platform 8.4 Release Notes
- ThingWorx Platform Help Center
- ThingWorx Analytics Help Center
- ThingWorx Connection Services Help Center
Embedded databases come with the installation of the ThingWorx Platform; no additional installation or configuration is required for them. Read about the various benefits and pitfalls of embedded versus external databases below.

Database Options

H2
- RDBMS (relational database management system), written in Java
- Has a small memory footprint; embedded into ThingWorx for easy installation
- Not as robust as other database options
- Not scalable in production environments (unless used alongside a separate, external database for stream, value stream, and other data); see KCS Article CS243975 for further reading on the use of external databases
- Meant to be used for quick deployments and testing environments

PostgreSQL
- ORDBMS (object-relational database management system), written in C
- PostgreSQL is the ThingWorx recommended database for production systems
- More robust: an external database installed separately from ThingWorx. This is beneficial because external databases can be specifically configured for use in production, while embedded databases cannot, and it can efficiently handle larger amounts of data and store more data without affecting ThingWorx system performance.
- Greater stability: recover from data corruption more easily by accessing the database from an external application (separate from ThingWorx) using simple SQL statements; easier to back up the database in case of issues (further reading in KCS Article CS246598); less risky and simpler upgrade procedure, which occurs "in-place" - instead of exporting and importing data and entities, a simple schema update allows these to automatically persist into the new version. If the ThingworxStorage folder is accidentally deleted, entities and data are secure in the external database.
- More secure: HA (High Availability) allows for multiple server instances at different locations in the network, which assists in time of failover - i.e. if one server fails, the other can immediately take over, securing the data and preventing further data loss. Customizable security settings, complex password requirements, and fewer security vulnerabilities than other databases.
- Because Postgres is an external database, it can be harder to install. Follow the steps in the installation guide closely, and see KCS Articles CS235937 and CS230085 for troubleshooting and help with installation and configuration.

Hana
- RDBMS (relational database management system)
- In-memory, column-based data storage
- For more information on this database, please see the Getting Started with SAP HANA Guide

Neo4J
- GDBMS (graph database management system), written in Java
- Data is not easily accessed by external applications, and CQL must be used instead of SQL, making recovery from corruption very difficult
- Embedded database with limited configuration options
- Known to have issues with deadlocks
- Deprecated in version 7.0 (related KCS Article: CS228537)

For full installation steps for H2 and PostgreSQL, see the ThingWorx Installation Guide.
When to Include InfluxDB in the ThingWorx Development Lifecycle (this article is also available for download as a PDF, attached)

The Short Answer

InfluxDB is a time series database designed specifically for data ingestion. Historically, InfluxDB has been viewed as a high-scale expansion option for ThingWorx: a way to ensure the application works as intended, even when scaled up to the enterprise level. This is certainly one way to view it, because when there are many, many remote things, each with a lot of properties writing to the Platform at short intervals, then InfluxDB is a sure choice. However, what about in smaller applications? Is there still a benefit to using an optimized data ingestion tool in any case? The short answer is: yes, there is!

Using InfluxDB for optimized data ingestion is a good idea even in smaller-sized applications, especially if there are plans to scale the application up in the future. It is far better to design the application around InfluxDB from the start than to adjust the data model of the application later on, when an optimized data ingestion process is required. PostgreSQL and InfluxDB simply handle the storage of data in different ways, with the former functioning better with many Value Streams, and the latter with fewer Value Streams. Switching the way data is retained and referenced later, when the application is already on the larger side, causes delays in growing the application and adding more devices. Likewise, if the Platform reaches its ingestion limits in a production environment, there can be costly downtime and data loss while a proper solution (which likely involves reworking the application to work optimally with InfluxDB) is implemented.

Don't think that InfluxDB is for expansion only; it is an optimized ingestion database that has benefits at every level of the application development lifecycle. From end to end, InfluxDB can ensure reliable data ingestion, reduced risk of data loss, and reduced memory and CPU usage for the deployment overall. Preliminary sizing and benchmark data is provided in this article to explain these recommendations. Consider how ThingWorx is ingesting data now, and how much CPU and other resources are used just for acquiring the data; perhaps InfluxDB would be a benefit to improve application performance.

The Long Answer

In order to uncover just how beneficial InfluxDB can be in any size application, the IoT Enterprise Deployment Center has run some simulations with small and medium sized applications. The use case in the simulation is simple, with user requests coming from a collection of basic mashups and data ingestion coming from various numbers of things, each with a collection of "fast" and "slow" properties which update at different rates. This synthetic load of data does not include a more complete application scenario, so the memory and CPU usage shown here should not be used as sizing recommendations. For those types of recommendations, stay tuned for the soon-to-release ThingWorx 9.0 Sizing Guide (or check out the current 8.5 Sizing Guide).

Comparing Runs

When determining the health of the ThingWorx Platform, there are several categories to inspect: Value Stream Queue Rate and Queue Size, HTTP Requests, and the overall Memory and CPU use for each server. Using Grafana to store the metrics results in charts like those below, which can easily be compared and contrasted, and used to evaluate which hardware configuration results in the best performance.
The size of the numbers on the vertical axis indicates the total resources used for that metric, while the slope or trend of each chart indicates bottlenecks and inadequate resource allocation for the use case.

In this case, all darker charts represent data from PostgreSQL-only configurations, while the lighter charts represent the InfluxDB instances. Because this is not a sizing guide, whether each of these charts comes from the small or the medium run is unimportant, as long as they match (for valid comparisons between runs with Influx and without it). The smaller run had something like 20k Things, and the larger closer to 60k, both with 275 total Platform users (25 Admins) and 3 mashups, which were each called at various refresh rates over the course of the 1-hour testing period. Note that in the PostgreSQL-only instances, there were more Thing Templates and corresponding Value Streams. This change is necessary between runs because only with fewer Value Streams does InfluxDB begin to demonstrate notable improvements.

The most important thing to note is that the lighter charts clearly demonstrate better performance for both size runs. Each section below will break down what the improvement looks like in the charts, to show how to use Grafana to verify the best performance.

Value Stream Queue

The vertical axis on the Value Stream Queue Rate chart shows how many total writes per second (WPS) the Platform can handle. The average is 10 WPS higher using InfluxDB in both scenarios, and InfluxDB is also much more stable, meaning that the writes happen more reliably. The Value Stream Queue Size chart demonstrates how well the writes within the queue are processed. Both of these are necessary to determine the health of data ingestion.

If the queue size were to increase and trend upward in the lighter Queue Size chart, then that would mean the Platform couldn't handle the higher ingestion rate. However, since the Queue Size is stable and close to 0 the entire time, it is clear that the Platform is capable of clearing out the Value Stream Queue immediately and reliably throughout the entire test.

Figure 1 - These represent the data getting stored into the data provider. Note: the former is much lower than the latter.

Figure 2 - Note the data loss in the non-Influx instance (the queue in green reaches the max in yellow). The Influx instance has less trouble clearing out the queue, as demonstrated by the consistently low queue size.

HTTP Requests

Taking the strain of ingestion off of the Platform's primary database frees its resources up for other activities. This in turn improves the performance and reliability of the Platform when responding to HTTP requests: those which, in a typical application, are used to aggregate data into smaller data stores (depending on the use case) and which render the mashups for the end users. The business logic and mashups can be more complex when there is one database designated for ingestion (InfluxDB) and one for everything else (PostgreSQL).

Figure 3 - The darker chart shows a lot of choppiness, meaning that while the Platform was responding the whole time, it was not doing so reliably. The smoother second chart shows how much more easily the Platform can handle these requests when the load is distributed intelligently across multiple servers, each optimized for the type of data they receive. The "staircase" shape occurs because the simulator increases the work load every 10 minutes until it breaks.
Likewise, the nature of Postgres lends itself well to this differentiation, given that many more database tables are required for supporting the HTTP requests, something Postgres does well. That leaves Influx to handle the time-series data and ingestion, which are the primary strengths of that software. Splitting the load across multiple servers in this way results in smaller server sizes overall, each stream-lined and optimized to handle exactly what the Platform gives it.

Note that in both of these charts, there are no bad requests, so both would seem to be successful runs. However, as later charts demonstrate more clearly, there is a catastrophic failure when the load is increased around 12:30p. The simulation ends before the server begins to show any real symptoms of the issue, and that is why there are no bad requests. The maximum Operations Per Second (OPS) in the Hardware Specifications and Performance section is taken from before the failure begins.

Clearly the InfluxDB instance has better performance, given that the average Operations Per Second (OPS) is substantially higher, nearly 4 times what is seen in the PostgreSQL-only instance. Obviously, how well the Platform manages the business logic and mashup loading will depend on many factors. In this test scenario, the OPS was increased by increasing the mashup refresh rate on the InfluxDB instances (which could handle over double the operations). Likewise, the number of Stream writes to the PostgreSQL database could be double what it was when PostgreSQL was the only database. Therefore, configuring InfluxDB for the data ingestion and leaving Postgres for the rest of the application makes the load much easier on the Platform, and the same would be true even in a much more complex scenario.

Memory and CPU

The important thing here is to keep memory use low enough that any spikes in usage won't cause a server malfunction. CPU usage should stay at or below around 75%, and memory should never exceed around 80% of the total allocated to the server. The sizing guides can help determine what this allocation of memory needs to be.

Of note in these charts are the slight, upward slope of the CPU usage in the darker chart, indicating the start of a catastrophic failure, and the difference in the total memory needed for the ThingWorx Platform and Postgres servers when Influx is used or not. As is apparent, the servers use much less memory when the database load is split up intelligently across multiple servers.

Figure 4 - The ThingWorx CPU is about the same here as in the InfluxDB configuration below, but look at how much more memory both the Platform and the Postgres database need allocated to them in this configuration (64 GB apiece). Also note the jump in CPU and memory usage after 12:30p. This is referenced in the previous section, and the upward slope of the usage after that point indicates the start of a catastrophic failure. The test ends too soon to see any symptoms of failure, but it is a sure thing after the increase in load around 12:30p.

Figure 5 - Influx needs an extra server, but the size of the Influx and Postgres servers together is less than half the size of the single Postgres database required in the Postgres-only configuration (8 GB). ThingWorx is smaller too (32 GB).

Hardware Specifications and Performance

These are the exact specifications for each simulated instance, broken down by size and whether InfluxDB is configured or not.
Note that some of the hardware specifications may be more than is necessary, depending on the real-world use case. As stated previously, this document is not a sizing guide (use the official ThingWorx Sizing Guide). Note that the maximum WPS and OPS are shown here. The maximums are higher in the InfluxDB scenarios, meaning that even with smaller-sized servers, the InfluxDB configurations can handle much greater loads.

Summary
In conclusion, if InfluxDB may at some point be needed in the lifecycle of an application, because the expected number of things or the number of properties on each thing is large enough that it would otherwise push the Platform to its limits, then InfluxDB should be used from the very start. There are benefits to using InfluxDB for data ingestion at every size, from performance to reliability, and of course improved scalability as well.

Reworking the application for use with InfluxDB later on can be costly and cause delays. This is why the benefits and costs associated with an InfluxDB-centric hardware configuration should be considered from the start. More servers are required for InfluxDB, but each of these servers can be sized smaller (depending on the use case), and all of this will affect the overall cost of hosting the ThingWorx application. The benefits of InfluxDB are especially pronounced when used in conjunction with clusters, which will be demonstrated fully in the 9.0 Sizing Guide (soon to be released). If InfluxDB is used to interface with the clusters, then there are even more resources to spare for user requests.

It is considered ThingWorx best practice for high-ingestion customers to make use of InfluxDB in applications of any size. Note, though, that this will mean the number of Value Streams per Influx database will need to be limited to single digits. We hope this helps, and from everyone here at the EDC, happy developing!
View full tip
Hi, ThingWorx users! We're excited to share that we have partnered with InfluxData to make time series analysis in ThingWorx even easier. InfluxDB is a database by InfluxData that is "built specifically for metrics and events that empower developers to build next-generation IIoT, analytics and monitoring applications."

Why InfluxData?
Today, application developers expect robust querying capabilities, fast response time, easy ways of aggregating and pivoting on data, and the ability to leverage results for reporting and visualization. IT and devops administrators also expect cost-effective storage, easy ways of aging data through archiving, and the ability to keep large amounts of historical data to satisfy analysis requirements.

That's why we've partnered with InfluxData to make it easier for developers to store, analyze and act on IIoT data in real-time. With InfluxData, developers can build connected IIoT applications more quickly while still incorporating the following capabilities:
- monitoring
- real-time alerting
- predictive maintenance
- streaming data
- anomaly and event detection
- visual and report-based analysis

We considered a few technologies for the purpose of improving ThingWorx time series analysis. Here are a few reasons we chose InfluxData:
- high compression of data (~45x)
- ability to handle millions of writes per second*
- ability to read thousands of rows in milliseconds*
- support for the standard time series functions of sampling, interpolation, time bucketing, aggregation, selector, transformation, predictor, etc.

* Query and write times will vary based on an individual ThingWorx application's implementation with Influx. For example, as the number of concurrent reads increases, the query speed decreases. With the upcoming 8.4 release, the ThingWorx Sizing Guide will be updated to reflect representative performance for ThingWorx developers.

In addition to improved query capability, ThingWorx time series with Influx can now use less memory and CPU, giving your platform servers a bit of a break.

To start strategizing on how InfluxData can help you in your ThingWorx journey, here is a sneak preview of what it will look like:

New Features
The new ThingWorx Influx Persistence Provider will make query services like the ValueStream Thing's QueryPropertyHistory and the Stream Thing's QueryStreamData and QueryStreamEntries even better. Simply create a new instance of the persistence provider, configure it to use your InfluxDB instance, create a new value stream (or stream) from the new persistence provider, and you'll be writing, reading and analyzing your time series data like never before.

We're also introducing a new enhancement to improve InfoTable support with time series data, including providing the ability to use a driver property. The driver property can be specified with the QueryPropertyHistoryWithDriverProperty service for time alignment and filling backward/forward in your stream queries.

Let's walk through a driver property example where you have the properties of temperature, speed and battery level.

Timestamp            Temperature  Speed  Battery Level
1480589076592000000  80.003       5012   79
1480589077537000000  80.010       5011   79
1480589077550000000  80.010       5009   79
1480589077562000000  80.030       5011   78

Let's say temperature is the key driver for your analysis. In other words, you are not concerned if speed or battery level changes; you only care about when temperature changes. 
We can specify temperature to be the driver property and only return stream values for temperature, speed and battery level at those times when temperature changes. If speed or battery level changes (but temperature does not change), the rows associated with those changes would not be included in the result set, because neither speed nor battery level is a driver property. See the chart below.

Timestamp            Temperature (driver)  Speed  Battery Level
1480589076592000000  80.003                5012   79
1480589077537000000  80.010                5011   79
1480589077562000000  80.030                5011   78

Note that only three of the four rows are returned above, because one entry in the original table did not have a change in temperature.

Stay Tuned
Look out for these time series improvements and InfluxData integration in our upcoming 8.4 release. I'll be sure to keep you updated on additional new features coming in our next release (like Orchestration and Mashup Builder 2.0), so check back shortly or subscribe to this Community so we can stay in touch. As always, if you have any questions, just ask Kaya!

Stay connected,
Kaya
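P.S. If you want to start experimenting once 8.4 is out, here's a rough sketch of what calling the new driver-property service from a ThingWorx service might look like. Note that the parameter names below are my assumptions for illustration, not the confirmed API; check the 8.4 documentation for the real signature:

// Hedged sketch: query history with temperature as the driver property
var result = me.QueryPropertyHistoryWithDriverProperty({
    driverPropertyName: "temperature",        // assumed parameter name
    maxItems: 500,                            // assumed parameter name
    startDate: dateAddDays(new Date(), -1),   // assumed window: last 24 hours
    endDate: new Date()
});
// result would then contain one row per temperature change,
// with speed and battery level aligned to those timestamps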
View full tip
Let us consider that we have two properties, Property1 and Property2, on a Thing named Thing1 that we want to update using the UpdatePropertyValues service. In our service we will use the system-defined Data Shape NamedVTQ, which has the field definitions name, time, value, and quality. In Thing1, create a custom service like the following:

var params1 = {
    infoTableName: "InfoTable",
    dataShapeName: "NamedVTQ"
};
// CreateInfoTableFromDataShape(infoTableName:STRING("InfoTable"), dataShapeName:STRING):INFOTABLE(NamedVTQ)
var InputInfoTable = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params1);

var now = new Date();

// Thing1 entry object
var newEntry = new Object();
newEntry.time = now; // DATETIME - isPrimaryKey = true
newEntry.quality = undefined; // STRING
newEntry.name = "Property1"; // STRING
newEntry.value = "Value1"; // STRING
InputInfoTable.AddRow(newEntry);

// reuse the entry object for the second property
newEntry.name = "Property2"; // STRING
newEntry.value = "Value2"; // STRING
InputInfoTable.AddRow(newEntry);

var params2 = {
    values: InputInfoTable /* INFOTABLE */
};
me.UpdatePropertyValues(params2);
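Once the service has run, a quick way to confirm the update is to read the properties straight off the Thing (a minimal check, assuming Property1 and Property2 are STRING properties):

// Quick verification: read the values back from Thing1
var p1 = Things["Thing1"].Property1; // "Value1"
var p2 = Things["Thing1"].Property2; // "Value2"
logger.info("Property1=" + p1 + ", Property2=" + p2);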
View full tip
The New and Improved DGIS Guide to ThingWorx Development Written by Victoria Firewind of the IoT EDC   The classic Developing Great IoT Solutions guide has been reskinned and revamped for newer versions of ThingWorx! The same information on how to build a quality IoT application is now available for versions of ThingWorx 9.1+, and now, a complete sample application is included to demonstrate these ideas.    Find within the attached archive a PDF with high-level overview information on development and application design geared towards managers and business users, so that everyone can understand the necessary requirements, common terms, and key tips on how to ensure an application is scalable and maintainable right from the very start. Reduce your chances of running into issues between PoC and Go Live by reviewing this information today!   Also find within this PDF a series of tutorials which teach not just how to use the ThingWorx software, but which also educate on how to make good application design choices. A basic rules engine for sending real-time notifications is included here, as well as a complete demo application which illustrates each concept in a real-world use case. This Coffee Machine Demo App relies upon the tutorial entities, which can also now be imported directly using the other XML files provided here. This ensures that anyone can review these concepts, regardless of how much time one can commit or how much knowledge one already has on the subject.   This is a complex guide, and any issues, questions, or bugs found within can be reported right here on this thread. Happy developing from the IoT EDC!
View full tip
Get Started with ThingWorx for IoT Guide Part 3

Step 7: Create Alerts and Subscriptions

An Event is a custom-defined message published by a Thing, usually when the value of a Property changes. A Subscription listens for a specific Event, then executes JavaScript code. In this step, you will create an Alert, which is a quick way to define both an Event and the logic for when the Event is published.

Create Alert

Create an Alert that will be sent when the temperature property falls below 32 degrees.
1. Click Thing Shapes under the Modeling tab in Composer, then open the ThermostatShape Thing Shape from the list.
2. Click the Properties and Alerts tab.
3. Click the temperature property.
4. Click the green Edit button if not already in edit mode, then click the + in the Alerts column.
5. Choose Below from the Alert Type drop-down list.
6. Type freezeWarning in the Name field.
7. Enter 32 in the Limit field. Keep all other default settings in place. NOTE: This will cause the Alert to be sent when the temperature property is at or below 32.
8. Click the ✓ button above the new alert panel.
9. Click Save.

Create Subscription

Create a Subscription to this Event that uses JavaScript to record an entry in the error log and update a status message.
1. Open the MyHouse Thing, then click the Subscriptions tab.
2. Click Edit if not already in edit mode, then click + Add.
3. Type freezeWarningSubscription in the Name field.
4. After clicking the Inputs tab, click the Event drop-down list, then choose Alert.
5. In the Property field drop-down, choose temperature.
6. Click the Subscription Info tab, then check the Enabled checkbox.

Create Subscription Code

Follow the steps below to create code that sets the message property value and writes a Warning message to the ThingWorx log.
1. Enter the following JavaScript in the Script text box to the right to set the message property.

me.message = "Warning: Below Freezing";

2. Click the Snippets tab. NOTE: Snippets provide many built-in code samples and functions you can use.
3. Click inside the Script text box and hit the Enter key to place the cursor on a new line.
4. Type warn into the snippets filter text box or scroll down to locate the warn Snippet.
5. Click All, then click the arrow next to warn, and JavaScript code will be added to the script window.
6. Add an error message in between the quotation marks.

logger.warn("The freezeWarning subscription was triggered");

7. Click Done.
8. Click Save.

Step 8: Create Application UI

In ThingWorx, you can create customized web applications that display and interact with data from multiple sources. These web applications are called Mashups and are created using the Mashup Builder. The Mashup Builder is where you create your web application by dragging and dropping Widgets such as grids, charts, maps, and buttons onto a canvas. All of the user interface elements in your application are Widgets. We will build a web application with three Widgets: a map showing your house's location on an interactive map, a gauge displaying the current value of the watts property, and a graph showing the temperature property value trend over time.

Build Mashup

1. Start on the Browse, folder icon tab of ThingWorx Composer.
2. Select Mashups in the left-hand navigation, then click + New to create a new Mashup.
3. For Mashup Type, select Responsive.
4. Click OK. 
5. Enter widgetMashup in the Name text field.
6. If Project is not already set, click the + in the Project text box and select the PTCDefaultProject.
7. Click Save.
8. Select the Design tab to display Mashup Builder.

Organize UI

1. On the upper left side of the design workspace, in the Widget panel, be sure the Layout tab is selected, then click Add Bottom to split your UI into two halves.
2. Click in the bottom half to be sure it is selected before clicking Add Left.
3. Click anywhere inside the lower left container, then scroll down in the Layout panel to select Fixed Size.
4. Enter 200 in the Width text box that appears, then press the Tab key of your computer to record your entry.
5. Click Save.

Step 9: Add Widgets

1. Click the Widgets tab on the top left of the Widget panel, then scroll down until you see the Gauge Widget.
2. Drag the Gauge Widget onto the lower left area of the canvas on the right. This Widget will show the simulated watts in use.
3. Select the Gauge object on the canvas, and the bottom left side of the screen will show the Widget properties.
4. Select Bindable from the Category drop-down, enter Watts for the Legend property value, and then press Tab.
5. Click and drag the Google Map Widget onto the top area of the canvas. NOTE: The Google Map Widget has been provisioned on PTC Cloud hosted trial servers. If it is not available, download and install the Google Map Extension using the step-by-step guide for using Google Maps with ThingWorx.
6. Click and drag the Line Chart Widget onto the lower right area of the canvas.
7. Click Save.
View full tip
Build extensions quickly and extend your application functionality with the Eclipse Plugin.

GUIDE CONCEPT

Extensions enable you to quickly and easily add new functionality to an IoT solution. Extensions can be service (function/method) libraries, connector templates, functional widgets, and more.

The Eclipse Plugin for ThingWorx Extension Development (Eclipse Plugin) is designed to streamline and enhance the creation of extensions for the ThingWorx Platform. The plugin makes it easier to develop and build extensions by automatically generating source files, annotations, and methods, as well as updating the metadata file to ensure the extension can be imported.

These features allow you to focus on developing functionality in your extension, rather than worrying about getting the syntax and format of annotations and the metadata file correct.

YOU'LL LEARN HOW TO

- Install the Eclipse Plugin and Extension SDK
- Create and configure an Extension project
- Create Services, Events and Subscriptions
- Add Composer entities
- Build and import an Extension

NOTE: This guide's content aligns with ThingWorx 9.3. The estimated time to complete all parts of this guide is 60 minutes.

Step 1: Completed Example

Download the attached file needed for this tutorial: ExtensionSampleFiles.zip.

The ExtensionSampleFiles.zip file provided to you contains a completed example of the scenario you will walk through in the following steps. Utilize this file if you would like to see a finished example as a reference or if you become stuck during this guide and need some extra help.

Step 2: Download Plugin and SDK

The ThingWorx Extension SDK provides supported classes and APIs to build Java-based extensions. The APIs included in this SDK allow manipulation of ThingWorx platform objects to create Java-based extensions that can extend the capability of the existing ThingWorx platform.

The Eclipse Plugin assists in working with the Extension SDK to create projects, entities, and samples.
1. Download the Eclipse Plugin.
2. Download the Extension SDK.
3. Make a note of the directory where the plugin and the extension SDK are stored. The path of the directory will be required in upcoming steps. Do not extract the zip files.

Step 3: Install and Configure

Before you install the plugin, ensure that the software requirements are met for proper installation of the plugin.
1. Open Eclipse and choose a suitable directory as a workspace.
2. Go to the menu bar of the Eclipse window and select Help->Install New Software…
3. After the Install window opens, click Add to add the Eclipse Plugin repository.
4. Click Archive… and browse to the directory where the Eclipse Plugin zip file is stored and click Open.
5. Enter a name (for example, Eclipse Plugin).
6. Click OK. NOTE: Do not extract this zip file.
7. Ensure that the Group items by category checkbox is not selected.
8. Select ThingWorx Extension Builder in the items list of the Install window.
9. Click Next, and the items to be installed are listed.
10. Click Next and review the license agreement.
11. Accept the license agreement and click Finish to complete the installation process. NOTE: If a warning for unsigned content is displayed, click OK to complete the installation process.
12. Restart Eclipse.
13. When Eclipse starts again, ensure that you are in the ThingWorx Extension perspective. If not, select Window->Perspective->Open Perspective->Other->ThingWorx Extension, then click OK.
NOTE: Opening any item from File->New->Other…->ThingWorx will also change the perspective to ThingWorx Extension.     You are ready to start a ThingWorx Extension Project!     Click here to view Part 2 of this guide.  
View full tip
Get Started with ThingWorx for IoT Guide Part 4

Step 10: Display Data

Now that you have configured the visual part of your application, you need to bind the Widgets in your Mashup to a data source and enable your application to display data from your connected devices.

Add Services to Mashup

1. Click the Data tab in the top-right section of the Mashup Builder.
2. Click on the green + symbol in the Data tab.
3. Type MyHouse in the Entity textbox.
4. Click MyHouse.
5. In the Filter textbox below Services, type GetPropertyValues.
6. Click the arrow to the right of the GetPropertyValues service to add it.
7. Select the checkbox under Execute on Load. NOTE: If you check the Execute on Load option, the service will execute when the Mashup starts.
8. In the Filter textbox under Services, type QueryProperty.
9. Add the QueryPropertyHistory service by clicking the arrow to the right of the service name.
10. Click the checkbox under Execute on Load.
11. Click Done.
12. Click Save.

Bind Data to Widgets

We will now connect the Services we added to the Widgets in the Mashup that will display their data.

Gauge

Configure the Gauge to display the current power value.
1. Expand the GetPropertyValues Service as well as the Returned Data and All Data sections.
2. Drag and drop the watts property onto the Gauge Widget.
3. When the Select Binding Target dialogue box appears, select # Data.

Map

Configure Google Maps to display the location of the home.
1. Expand the GetPropertyValues service as well as the Returned Data section.
2. Drag and drop All Data onto the map widget.
3. When the Select Binding Target dialogue box appears, select Data.
4. Click on the Google Map Widget on the canvas to display properties that can be configured in the lower left panel.
5. Set the LocationField property in the lower left panel by selecting building_lat_lng from the drop-down menu.

Chart

Configure the Chart to display property values changing over time.
1. Expand the QueryPropertyHistory Service as well as the Returned Data section.
2. Drag and drop All Data onto the Line Chart Widget.
3. When the Select Binding Target dialogue box appears, select Data.
4. In the Property panel in the lower left, select All from the Category drop-down.
5. Enter series in the Filter Properties text box, then enter 1 in NumberOfSeries.
6. Enter field in the Filter Properties text box, then click XAxisField.
7. Select the timestamp property value from the XAxisField drop-down.
8. Select temperature from the DataField1 drop-down.

Verify Data Bindings

You can see the configuration of data sources bound to widgets displayed in the Connections pane.
1. Click on GetPropertyValues in the data source panel, then check the diagram on the bottom of the screen to confirm a data source is bound to the Gauge and Map Widgets.
2. Click on the QueryPropertyHistory data source and confirm that the diagram shows the Chart is bound to it.
3. Click Save.

Step 11: Simulate a Data Source

At this point, you have created a Value Stream to store changing property value data and applied it to the BuildingTemplate. This guide does not include connecting edge devices; another guide covers choosing a connectivity method. We will import a pre-made Thing that creates simulated data to represent types of information from a connected home. The imported Thing uses JavaScript code saved in a Subscription that updates the watts and temperature properties of the MyHouse Thing every time it is triggered by its timer Event (a sketch of this logic appears at the end of this part).

Import Data Simulation Entities

1. Download the attached sample: Things_House_data_simulator.xml. 
2. In Composer, click the Import/Export icon at the lower-left of the page.
3. Click Import.
4. Leave all default values and click Browse to select the Things_House_data_simulator.xml file that you just downloaded.
5. Click Open, then Import, and once you see the success message, click Close.

Explore Imported Entities

1. Navigate to the House_data_simulator Thing by using the search bar at the top of the screen.
2. Click the Subscriptions tab.
3. Click Event.Timer under Name.
4. Select the Subscription Info tab. NOTE: Every 30 seconds, the timer event will trigger this subscription and the JavaScript code in the Script panel will run. The running script updates the temperature and watts properties of the MyHouse Thing using logic based on both the temperature property from MyHouse and the isACrunning property of the simulator itself.
5. Expand the Subscription Info menu. The simulator will send data when the Enabled checkbox is checked.
6. Click Done, then Save to save any changes.

Verify Data Simulation

1. Open the MyHouse Thing and click the Properties and Alerts tab.
2. Click the Refresh button above where the current property values are shown.
3. Notice that the temperature property value changes every 30 seconds when the triggered service runs. The watts property value is 100 when the temperature exceeds 72 to simulate an air conditioner turning on.

Step 12: Test Application

1. Browse to your Mashup and click View Mashup to launch the application. NOTE: You may need to enable pop-ups in your browser to see the Mashup.
2. Confirm that data is being displayed in each of the sections.

Test Alert

1. Open the MyHouse Thing.
2. Click the Properties and Alerts tab.
3. Find the temperature Property and click the pencil icon in the Value column.
4. Enter a temperature value of 29 in the upper right panel.
5. Click the check mark icon to save the value. This will trigger the freezeWarning alert.
6. Click Refresh to see the value of the message property automatically set.
7. Click the Monitoring icon on the left, then click ScriptLog to see your message written to the script log.

Click here to view Part 5 of this guide. 
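For reference, the logic inside the simulator's Event.Timer subscription looks roughly like the sketch below. This is an approximation based on the behavior described above (the real script ships inside Things_House_data_simulator.xml); the baseline watts value and the temperature steps are placeholder assumptions:

// Approximate sketch of the simulator's subscription logic (runs every 30 seconds)
var temp = Things["MyHouse"].temperature;
if (temp > 72) {
    me.isACrunning = true;
    Things["MyHouse"].watts = 100;            // AC on, per the behavior described above
    Things["MyHouse"].temperature = temp - 1; // assumed cooling step
} else {
    me.isACrunning = false;
    Things["MyHouse"].watts = 10;             // placeholder baseline load
    Things["MyHouse"].temperature = temp + 1; // assumed warming step
}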
View full tip
Timers and Schedulers can also be created and configured programmatically via custom services. The following service, which can be created on any Thing, will create a new Timer using the inputs ThingName, updateRate, and user:

// create the new Thing from the Timer template
var params = {
    name: ThingName /* STRING */,
    description: undefined /* STRING */,
    thingTemplateName: "Timer" /* THINGTEMPLATENAME */,
    tags: undefined /* TAGS */
};
Resources["EntityServices"].CreateThing(params);

// read the initial configuration
// result: INFOTABLE
var configtable = Things[ThingName].GetConfigurationTable({tableName: "Settings"});

// update the configuration with the service parameters
configtable.updateRate = updateRate;
configtable.runAsUser = user;

// set the new configuration table
var params2 = {
    configurationTable: configtable /* INFOTABLE */,
    persistent: true /* BOOLEAN */,
    tableName: "Settings" /* STRING */
};
Things[ThingName].SetConfigurationTable(params2);

This code is an example which could also be used to create a new Scheduler. The configuration table for a Timer has the following attributes:
- updateRate
- enabled
- runAsUser

The configuration table for a Scheduler has the following attributes:
- schedule
- enabled
- runAsUser
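As a sketch, the same pattern applied to a Scheduler might look like the following. The cron-style schedule string, Thing name, and run-as user below are illustrative assumptions; adjust them to your environment:

// create the new Thing from the Scheduler template
var params = {
    name: "MyScheduler" /* STRING */,
    description: undefined /* STRING */,
    thingTemplateName: "Scheduler" /* THINGTEMPLATENAME */,
    tags: undefined /* TAGS */
};
Resources["EntityServices"].CreateThing(params);

// read, update, and write back the Scheduler's configuration
var configtable = Things["MyScheduler"].GetConfigurationTable({tableName: "Settings"});
configtable.schedule = "0 0 * * * ?";        // assumed cron-style expression: top of every hour
configtable.enabled = true;
configtable.runAsUser = "Administrator";     // illustrative user name
Things["MyScheduler"].SetConfigurationTable({
    configurationTable: configtable /* INFOTABLE */,
    persistent: true /* BOOLEAN */,
    tableName: "Settings" /* STRING */
});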
View full tip
The following is valid for ThingWorx Analytics (TWA) 52.0.2 through 8.0. For release 8.3.0 and above, see How to score new data in ThingWorx Analytics 8.3.x?

Overview
The main steps are as follows:
1. Create a dataset
2. Configure the dataset
3. Upload data to the dataset
4. Optimize the dataset
5. Create filters for training and scoring data
6. Train the model
7. Execute scoring on existing data
8. Upload new data to the dataset
9. Execute scoring on new data

TWA models are dataset-centric, which means a model created with one dataset cannot be reused with a different dataset. In order to be able to score new data, a specific feature, record purpose in the below example, is included in the dataset. This feature needs to be included from the beginning, when the data is first uploaded to TWA. A filter on that feature can then be created to isolate the desired data. When new data comes in, it is added to the original dataset, but with a specific value for the filtered feature (record purpose), which makes it possible to discriminate and score only those new records.

Process

1. Create a dataset
This example uses the beanpro demo dataset. Creating the dataset is done through a POST on the datasets REST API, as below.

2. Configure the dataset
This is done through a POST on the <dataset>/configuration REST API.

3. Upload data

4. Optimize the dataset

5. Create filters
The dataset includes a feature named record purpose created especially to differentiate between the rows to be used for training and the rows to be used for scoring. New data to be added will have record purpose set to scoringnew, which will allow a scoring job to be executed against only those filtered new rows.
Filter for training data:
Filter for new scoring data:

6. Train the model
This is done through a POST on the <dataset>/prediction API.

7. Score the training data
This is done through a POST on the <dataset>/predictive_scores API. Note the use of the filter TrainingData created earlier. This allows scoring only the rows with training as the value of the record purpose feature. Note: scoring could also be done without a filter at this stage, in which case all the data in the dataset would be scored, not just the rows with training for record purpose.
Retrieving the scoring result shows all the records in the dataset.

8. Upload new data
The newly uploaded CSV file should only contain new records. These will be appended to the existing ones. Note that the new record (there could be more than one) has the value scoringnew for the record purpose feature. This allows use of the previously created filter ScoringNewData, so that a new scoring job will only take this new record into account.

9. Score new data
A POST on the predictive_scores API is executed, this time using the filter ScoringNewData. This results in only the newly added data being scored, and therefore a much quicker execution time too. Retrieving the scoring result shows only the new record.
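As a rough sketch of what step 9 could look like when driven from a ThingWorx service, here is a call using the built-in ContentLoaderFunctions resource. The host, port, dataset name, credentials, and request-body fields below are assumptions for illustration only; consult the TWA REST API documentation for your version before relying on any of them:

// Hedged sketch: trigger a scoring job on the new data only
var result = Resources["ContentLoaderFunctions"].PostJSON({
    url: "http://twa-server:8080/1.0/datasets/beanpro/predictive_scores", // hypothetical host/path
    content: {
        filter: "ScoringNewData" // hypothetical body field; reuses the filter from step 5
    },
    username: "twa-user",     // hypothetical credentials
    password: "twa-password"
});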
View full tip
ThingWorx DevOps with Jenkins

DevOps as a topic is vast and has been addressed many times throughout the history of the PTC Community. Previous posts address what DevOps is, teach how to make use of DevOps like a pro, announce updates to the PTC Git Extension, and explain why this extension is so helpful for achieving continuous Git integration with ThingWorx.

This post provides a PDF guide on Jenkins integration with ThingWorx, including tutorials with detailed information on how to set up your ThingWorx instance and how to configure your Jenkins Pipeline. The PDF is listed for download separately, but it is also included in the zip with the other required files for the tutorial. The Jenkins Pipeline provided here is intended as an example and starting point for managing your DevOps in ThingWorx and can easily be extended. Please note that this Pipeline is not officially supported by PTC. 
View full tip
A fresh look at getting started with ThingWorx in a relevant context, outlining the DevOps needed to kick-start your programming.     For full-sized viewing, click on the YouTube link in the player controls. Visit the Online Success Guide to access our Expert Session videos at any time as well as additional information about ThingWorx training and services.
View full tip
A confusion matrix is a technique for summarizing the performance of a classification algorithm. Classification accuracy alone can be misleading if you have an unequal number of observations in each class or if you have more than two classes in your data set. Calculating a confusion matrix can give you a better idea of what your classification model is getting right and what types of errors it is making.

Classification Accuracy and its Limitations:
Classification Accuracy = Correct Predictions / Total Predictions

The main problem with classification accuracy is that it hides the detail you need to better understand the performance of your classification model. Below are two examples:
1. When your data has more than 2 classes. With 3 or more classes you may get a classification accuracy of 80%, but you don't know if that is because all classes are being predicted equally well or whether one or two classes are being neglected by the model.
2. When your data does not have an even distribution of observations across classes. You may achieve an accuracy of 90% or more, but this is not a good score if 90 records out of every 100 belong to one class, because you can achieve this score by always predicting the most common class value.

Classification accuracy can hide the detail you need to diagnose the performance of your model. But thankfully we can tease apart this detail by using a confusion matrix.

Confusion Matrix Terminology:
A confusion matrix is a table that is often used to describe the performance of a classification model on a set of test data for which the true values are known. Let's start with an example for a binary classifier:

N=165        Predicted No   Predicted Yes
Actual No    50             10
Actual Yes   5              100

What can we learn from this confusion matrix?
- There are two possible predicted classes: "yes" and "no". If we were predicting the presence of a disease, for example, "yes" would mean they have the disease, and "no" would mean they don't have the disease.
- The classifier made a total of 165 predictions (e.g., 165 patients were being tested for the presence of that disease).
- Out of those 165 cases, the classifier predicted "yes" 110 times, and "no" 55 times.
- In reality, 105 patients in the sample have the disease, and 60 patients do not.

Let's now define the most basic terms, which are whole numbers (not rates):
- True positives (TP): These are cases in which we predicted yes (they have the disease), and they do have the disease.
- True negatives (TN): We predicted no, and they don't have the disease.
- False positives (FP): We predicted yes, but they don't actually have the disease. (Also known as a "Type I error.")
- False negatives (FN): We predicted no, but they actually do have the disease. (Also known as a "Type II error.")

N=165        Predicted No   Predicted Yes   Total
Actual No    TN = 50        FP = 10         60
Actual Yes   FN = 5         TP = 100        105
Total        55             110

This is a list of rates that are often computed from a confusion matrix for a binary classifier:
- Accuracy: Overall, how often is the classifier correct? (TP+TN)/total = (100+50)/165 = 0.91
- Misclassification Rate: Overall, how often is it wrong? (FP+FN)/total = (10+5)/165 = 0.09. Equivalent to 1 minus Accuracy. Also known as "Error Rate".
- True Positive Rate: When it's actually yes, how often does it predict yes? TP/actual yes = 100/105 = 0.95. Also known as "Sensitivity" or "Recall".
- False Positive Rate: When it's actually no, how often does it predict yes? FP/actual no = 10/60 = 0.17
- Specificity: When it's actually no, how often does it predict no? TN/actual no = 50/60 = 0.83. Equivalent to 1 minus the False Positive Rate.
- Precision: When it predicts yes, how often is it correct? TP/predicted yes = 100/110 = 0.91
- Prevalence: How often does the yes condition actually occur in our sample? Actual yes/total = 105/165 = 0.64
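Since all of these rates are simple arithmetic on the four counts, a short script makes the example concrete (a minimal sketch using the numbers from the table above):

// Worked example: computing the rates above from the confusion matrix counts
var TP = 100, TN = 50, FP = 10, FN = 5;
var total = TP + TN + FP + FN;                  // 165
var accuracy = (TP + TN) / total;               // (100+50)/165 = 0.91
var misclassificationRate = (FP + FN) / total;  // (10+5)/165 = 0.09
var truePositiveRate = TP / (TP + FN);          // 100/105 = 0.95 (sensitivity/recall)
var falsePositiveRate = FP / (FP + TN);         // 10/60 = 0.17
var specificity = TN / (TN + FP);               // 50/60 = 0.83
var precision = TP / (TP + FP);                 // 100/110 = 0.91
var prevalence = (TP + FN) / total;             // 105/165 = 0.64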
View full tip