IoT & Connectivity Tips

Applicable Releases: ThingWorx Navigate 1.6.0 to 1.7.0
Description: Covers how to configure ThingWorx Navigate to use Windchill Authentication:
- Background and prerequisites
- X.509 Public Key Infrastructure (PKIX): brief introduction
- Steps to configure ThingWorx Navigate with Windchill Authentication: Windchill, Integration Runtime, ThingWorx Navigate
- Additional information: Navigate SSL configuration for Windchill Authentication, general checklist
Applicable Releases: ThingWorx Navigate 1.6.0 to 8.5.0
Description: How to use PingFederate script:
- Prerequisites
- Configuration
- Run the script
- Generated artifacts
- Live Demo
Associated documentation is available in the PTC Single Sign-on Architecture and Configuration Overview guide.
The .NET SDK can be configured to emit very detailed debugging and diagnostic information to a log file during execution. It uses the standard .NET System.Diagnostics infrastructure for logging; as such, all configuration of the .NET SDK logger is done via the standard .NET logging configuration system. By default, logging is configured via the standard .NET "App.config" file. Log messages can be routed to any standard .NET TraceListener. Optionally, ThingWorx provides a FixedFieldTraceListener that can be used to output log messages to a file, and its use is recommended. When configured, the FixedFieldTraceListener automatically creates a "logs" directory in the same location as (a sibling to) the running executable file (.exe); this "logs" directory will contain the log files.

Every .NET class can be configured as a specific "Trace Source" that emits log messages. To receive the most useful amount of information, it is recommended to add at least the following Trace Sources to your App.config file:
- com.thingworx.communications.client.BaseClient
- com.thingworx.communications.client.ConnectedThingClient
- com.thingworx.communications.client.things.VirtualThing
- com.thingworx.communications.client.TwApiWrapper
- com.thingworx.communications.client.things.filetransfer.FileTransferVirtualThing
- com.thingworx.communications.client.things.contentloader.ContentLoaderVirtualThing

The amount of information emitted can range from very low-level trace messages (the Verbose setting) to nothing at all (the Off setting). The SourceLevels enumeration controls how much information is written out to the log file; for reference, this is the <add name="SourceSwitch" value="Information" /> element in the sample below. Below is a sample App.config file.
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.diagnostics>
    <sources>
      <source name="com.thingworx.common.utils.JSONUtilities" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch">
        <listeners>
          <add name="file" />
        </listeners>
      </source>
      <source name="com.thingworx.communications.client.TwApiWrapper" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch">
        <listeners>
          <add name="file" />
        </listeners>
      </source>
      <source name="com.thingworx.communications.client.BaseClient" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch">
        <listeners>
          <add name="file" />
        </listeners>
      </source>
      <source name="com.thingworx.communications.client.ConnectedThingClient" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch">
        <listeners>
          <add name="file" />
        </listeners>
      </source>
      <source name="com.thingworx.communications.client.things.contentloader.ContentLoaderVirtualThing" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch">
        <listeners>
          <add name="file" />
        </listeners>
      </source>
      <source name="com.thingworx.communications.client.things.filetransfer.FileTransferVirtualThing" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch">
        <listeners>
          <add name="file" />
        </listeners>
      </source>
      <source name="com.thingworx.communications.client.things.VirtualThing" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch">
        <listeners>
          <add name="file" />
        </listeners>
      </source>
      <source name="com.thingworx.metadata.annotations.MetadataAnnotationParser" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch">
        <listeners>
          <add name="file" />
        </listeners>
      </source>
    </sources>
    <switches>
      <add name="SourceSwitch" value="Information" />
    </switches>
    <sharedListeners>
      <add name="file" type="com.thingworx.common.logging.FixedFieldTraceListener, thingworx-dotnet-common, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" initializeData="false"/>
    </sharedListeners>
    <trace autoflush="true" indentsize="4" />
  </system.diagnostics>
</configuration>
Applicable Releases: ThingWorx Navigate 1.6.0 to 8.5.0
Description: Covers Single Sign-on concepts and the main items to take into account when defining an SSO architecture, with the following agenda:
- What is Single Sign-on?
- What is PTC's strategy for Single Sign-on?
- How does PingFederate fit in the existing SSO federation?
- What products can currently be configured for SSO?
- Optional SSO in Navigate 1.6
- What are the key differences in SSO across the different Navigate versions?
- What are we doing internally to prepare for SSO?
- What resources are available for SSO discussions?
Related articles for further information:
- Related Service: Configure SSO for Navigate
Applicable Releases: ThingWorx Platform 7.0 to 8.5
Description: Covers how to apply patch upgrades to a ThingWorx installation, with the following agenda:
- How to read the ThingWorx version
- Upgrading to a major/minor version of the platform
- Focus on upgrading to a patch version of the platform
- Upgrading extensions
Always check the patch release notes for additional information and specific steps.
Applicable Releases: ThingWorx Platform 7.0 to 8.5
Description: Covers the main topics that need to be considered when evaluating or designing a scalability strategy for the environment and applications. The agenda contains the following topics:
- Introduction
- Connection Server
- Federation
- High Availability
Some of the databases mentioned in the presentation are no longer supported. Please refer to the compatibility matrix to check supported databases.
Related Articles:
- How to configure a Federate architecture
- Is high availability supported in ThingWorx
Applicable Releases: ThingWorx Platform 7.0 to 8.4
Description: A practical example of how to build a data model in ThingWorx following a pre-defined design. The following topics are covered:
- Review the existing design plan
- Build all required entities in ThingWorx Composer
- Test the model and review scalability and reusability
The session was recorded using the old ThingWorx Composer, but the concepts are still applicable.
Related Success Service - Principles of ThingWorx Modeling
Related Service - Design your ThingWorx Model
Applicable Releases: ThingWorx Platform 7.0 to 8.5
Description: Introduction to ThingWorx Extension development, with the following topics:
- What is an Extension
- Why build an Extension
- Prerequisites
- Installing the Eclipse plugin and features
- Creating entities with the plugin and including exported entities in an Extension project
- Upgrading or updating an existing Extension in ThingWorx
- Building with Gradle and Ant
See also the ThingWorx Extension Development Guide.
Applicable Releases: ThingWorx Platform 7.0 to 8.4
Description: Concepts and basic Mashup design using a use case as an example. The following topics are covered:
- Recap scenario requirements and review the concept design
- Introduction to Mashup Builder
- Design a Mashup to visualize data
Related Success Service
The session was recorded using the old ThingWorx Composer, but the concepts are still applicable.
Applicable Releases: ThingWorx Platform 7.0 to 8.5
Description: Introduction to Edge connectivity in ThingWorx Foundation:
- Edge concept and definition
- Available Edge products
- Why use Edge products
- What are the Edge MicroServer and the Lua Script Resource
- What are the SDKs
- What are Connection Servers
- AlwaysOn and HTTP protocols
- ThingTemplates to connect remote devices
The session was recorded in an old ThingWorx version, but all the concepts are still applicable.
Applicable Releases: ThingWorx Platform 8.3 to 8.5
Description: Installation walkthrough of ThingWorx Foundation using PostgreSQL, detailing some of the main steps that can be difficult to follow in the installation guides.
Reference installation guides are available for each version.
Recently a customer from the ThingWorx Academic Program sent in a sample program they were having problems with. They were trying to post data from a Raspberry Pi using Python to their ThingWorx server. It turns out that their program did work just fine, and it is also a great example of posting data from a Pi using REST. Here is how to set up this example.

1. Import the attached "Things_TempAndHumidityThing.xml" entity file.
2. From the Pi, run 'sudo pip install requests'.
3. From the Pi, run 'sudo pip install logging'.
4. From the Pi, run 'sudo pip install http_client'.
5. Create a Python file called test.py that contains this example code:

#!/usr/bin/python
import requests
import json
import logging
import sys

# These two lines enable debugging at httplib level (requests->urllib3->http.client)
# You will see the REQUEST, including HEADERS and DATA, and RESPONSE with HEADERS but without DATA.
# The only thing missing will be the response.body which is not logged.
try:
    import http.client as http_client
except ImportError:
    # Python 2
    import httplib as http_client
http_client.HTTPConnection.debuglevel = 1

# You must initialize logging, otherwise you'll not see debug output.
logging.basicConfig()
logging.getLogger().setLevel(logging.DEBUG)
requests_log = logging.getLogger("requests.packages.urllib3")
requests_log.setLevel(logging.DEBUG)
requests_log.propagate = True

# NYP web server URL in ThingWorx and application key, passed on the command line
NYP_Webhost = sys.argv[1]
App_Key = sys.argv[2]
ThingName = 'TempAndHumidityThing'

headers = {'Content-Type': 'application/json', 'appKey': App_Key}
payload = {'Prop_Temperature': 45, 'Prop_Humidity': 33}

# PUT the property values to the Thing's Properties endpoint
response = requests.put(NYP_Webhost + '/Thingworx/Things/' + ThingName + '/Properties/*',
                        headers=headers, json=payload, verify=False)

6. From the command line, run './test.py http://twhome:8080 e9274d87-58aa-4d60-b27f-e67962f3e5c4', substituting your own server URL and application key.
7. A successful response should look like:

INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): twhome
send: 'PUT /Thingworx/Things/TempAndHumidityThing/Properties/* HTTP/1.1\r\nHost: twhome:8080\r\nappKey: e9274d87-58aa-4d60-b27f-e67962f3e5c4\r\nContent-Length: 45\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nUser-Agent: python-requests/2.8.1\r\nConnection: keep-alive\r\nContent-Type: application/json\r\n\r\n{"Prop_Temperature": 45, "Prop_Humidity": 33}'
reply: 'HTTP/1.1 200 OK\r\n'
header: Server: Apache-Coyote/1.1
header: Set-Cookie: JSESSIONID=E7436D2E6AE81C84EC197D406E7E365A; Path=/Thingworx/; HttpOnly
header: Expires: 0
header: Cache-Control: no-store, no-cache
header: Cache-Control: post-check=0, pre-check=0
header: Pragma: no-cache
header: Content-Type: text/html;charset=UTF-8
header: Transfer-Encoding: chunked
header: Date: Mon, 09 Nov 2015 12:39:24 GMT
DEBUG:requests.packages.urllib3.connectionpool:"PUT /Thingworx/Things/TempAndHumidityThing/Properties/* HTTP/1.1" 200 None

My thanks to the customer who sent in this simple example.
Applicable Releases: ThingWorx Platform 7.0 to 8.4
Description: Strategy and tools for ThingWorx application backups:
- Backup terminology and concepts
- Drivers to define a backup strategy
- Tips for executing a backup of a ThingWorx instance: Tomcat, certificates, configuration and file system data, application-specific files, database
The Neo4j database mentioned in the session is no longer supported.
For more information, check Best Practices for ThingWorx Backup.
Applicable Releases: ThingWorx Platform 8.0 to 8.5; ThingWorx Navigate 1.5.0 to 8.5.0
Description: Definition and concepts of Single Sign-on (SSO): terminology, components and architecture, configuration prerequisites and high-level steps to configure SSO using PingFederate with Windchill and Navigate, and the main troubleshooting techniques.
Applicable Releases: ThingWorx Platform 7.0 to 8.5
Description: Main concepts and best practices for a DevOps methodology, such as:
- Naming conventions
- Setup and management of environments for development and testing
- Import/export process and application deployment
- Use of Tags and Projects to control your development
- Coding standards
- Validation best practices
For project packaging and deployment, make sure to check the content about Solution Central, created after this session was released.
Hello, IIoT Developers!

Today, I'm going to provide you an overview of a new SDK we're offering for developers to build custom web content. It's called our Visual SDK. We released 9.0 in June, so it's time to start getting excited! In case you missed it, check out these other 9.0 posts on active-active clustering, Composer and Mashup Builder, and the 9.0 release overall.

In 8.4, we introduced a new visualization architecture and set of web components based on Polymer. We've continually updated and added new widgets and features with this architecture each release, including new Chart Components in 9.0. This SDK was previously available mostly as style guidance, informing designers and developers what elements and behaviors were available in the new PTC web component-based widgets. You could use this SDK to style custom elements of the web components in CSS in a mashup-based application. In 8.4, however, you weren't really able to use this SDK to actually create your own Polymer web components and have them be as robust in the Mashup Builder in ThingWorx.

In 9.0, this SDK has been expanded so you not only have style and behavioral descriptions of PTC components, but you also have tutorials and utilities that let you create your own components and import them into the Mashup Builder. The possibilities are endless here for custom content, so let's look at what's inside.

The SDK guide has a quick outline of some prerequisites you should know about as you enter custom web development with ThingWorx: knowledge of Polymer 3, downloading common tools like NPM and Gulp CLI, Aurelia, etc. Much of this info is also included in markdown documents within the SDK files, but the SDK web content makes it easier to follow and search.

From there, the guide walks you through more setup of your SDK directories, NPM installation of PTC components, and basics around dependencies, styling, and demo pages. Each PTC-developed web component is also available in the SDK pages with more information on what it offers and its basic design. This is useful if you would like to reference the PTC components as imports into your own web component. This technique is very useful for reuse and upgrade safety when developing custom components on top of ThingWorx.

Sample overview pages in the SDK for ptcs-chart

The SDK also includes a getting-started tutorial, a sample Polymer component, and a widget called simple-el, which are helpful references during development and while familiarizing yourself with the concepts. The component is functional and offers a theme dropdown so you can see how the theming engine and events work.

Sample Polymer component included in the SDK, called simple-el

Once you have created your web component, there is also a new utility called mub, which scans your component project and wraps it in a shell for the Mashup Builder. If you run the mub utility on your component, you'll find it produces a zip file with the relevant design and runtime wrappers for the mashup environment already mapped to your component. You can also use it to define properties for your new component in the mashup environment, include custom code defining the widget's design-time and runtime behaviors, and add icons, categories and other standard platform features.

Running the mub utility on a web component project

Once you have run the utility, you just import the artifact into a ThingWorx platform and it will be available for your application developers to use in their mashups as a widget.
Again, how it appears in the design experience, what properties are exposed, and how it responds to platform binding and theming events are all customizable in the SDK.

Sample Polymer component wrapped as a widget for use in a mashup

Once you get the hang of things with the sample code and understand the ins and outs, you can then use those same patterns to develop your own content! These are the same techniques that the PTC R&D team uses when they make each of the new widgets that you see in our product, like the 9.0 charts! Uber cool stuff!

Like what you see? Have a question? Drop us a line in the comments!

Stay connected!
Kaya
Hi, everyone!

In previous tech tips, I've introduced the ThingWorx 9.0 active-active clustering feature and provided architectural details and configurations. If you haven't already, I recommend you check them out to learn more about how active-active clustering enables higher availability for ThingWorx:
- 9.0 Sneak Peek: Active-Active Clustering for ThingWorx
- 9.0 Sneak Peek: ThingWorx Architecture for Active-Active Clustering
- 9.0 Sneak Peek: Flexible Deployments of Active-Active Clustering for ThingWorx
- "ThingWorx on Air" Ep. 08: FAQs: ThingWorx Active-Active Clustering for Higher Availability

Today, I'll provide more details around the load balancer in the active-active clustering architecture, some of its requirements, and a few configuration examples. Ready? Here we go! Here are the top four FAQs around the load balancer that will help you maximize your use of active-active clustering.

What do you mean by load balancing?
Load balancing is the process of distributing network traffic across multiple servers. An algorithm employed by the load balancer or a proxy determines how the traffic is distributed. Round robin, fastest response, and least established connections are some of the most common methods of load balancing; they provide different benefits, but all fundamentally ensure no single server bears too much demand. By spreading the traffic, load balancing improves application responsiveness. It also increases availability of applications and websites for users. Modern applications cannot run without load balancers. In general, load balancers run either as hardware appliances or as software-defined services. Hardware appliances often run proprietary software optimized to run on custom processors; as traffic increases, the vendor simply adds more load balancing appliances to handle the volume. Software-defined load balancers usually run on less expensive, standard Intel x86 hardware, and installing the software in cloud environments like Azure VMs or AWS EC2 eliminates the need for a physical appliance.

Following the seven-layer Open Systems Interconnection (OSI) model, load balancing occurs between layers four and seven (L4 Transport, L5 Session, L6 Presentation and L7 Application), whereas network firewalls operate at layers one to three (L1 Physical Wiring, L2 Data Link and L3 Network). Load balancers have various capabilities, which include:
- L4: directs traffic based on data from network and transport layer protocols, such as IP address and TCP port.
- L7: adds content switching to load balancing. This allows routing decisions based on attributes like HTTP header, uniform resource identifier, SSL session ID and HTML form data.
- GSLB: Global Server Load Balancing extends L4 and L7 capabilities to servers in different geographic locations.
More enterprises are seeking to deploy cloud-native applications in data centers and public clouds. This is leading to significant changes in the capability of load balancers.

What is a load balancer's role in the ThingWorx Active-Active Clustering setup?
As is true of any load balancer, the load balancer required in the ThingWorx Foundation active-active clustering architecture is responsible for distributing incoming traffic across the nodes within the cluster.

In the active-active clustering architecture for ThingWorx, the load balancer distributes the traffic using a round-robin method. Please note that there are several algorithms that provide load balancing techniques, and this article is a good read for further understanding of them.
A round-robin method rotates servers by directing traffic to the first available server and then moving that server to the bottom of the queue.

In the ThingWorx clustering setup, both WebSocket and HTTP incoming traffic are handled in a round-robin manner, but they are routed differently by the load balancer.

HTTP traffic is distributed directly amongst the ThingWorx Foundation Servers within the cluster. Sticky sessions are used for the HTTP sessions (sticky via cookie), so individual users are tied directly to a single server node and see all of their changes instantaneously.

WebSocket traffic is distributed across the ThingWorx Connection Servers and is balanced via source IP to ensure each request from a device goes through the same Connection Server. From the ThingWorx Connection Server, the device traffic is distributed amongst the underlying ThingWorx Foundation Servers, so another load balancer is not required between the ThingWorx Connection Servers and the ThingWorx Foundation Servers.

Please note that neither the WebSocket traffic load nor the incoming requests are necessarily distributed evenly, due to stickiness. For example:
- Two users connect over HTTP; one sends 100 requests and the other sends 2. Since they are sticky, the load is not distributed evenly.
- Two devices connect to a ThingWorx Connection Server; one is a gateway for 100 other devices, and all requests from the gateway go to the same Connection Server. The Connection Server does a round-robin to the underlying Foundation Servers so that the load is better distributed, but the load balancer is sticky to a single ThingWorx Connection Server.

Which load balancer can I choose for setting up ThingWorx in an active-active cluster mode?
ThingWorx active-active clustering is pretty much load balancer agnostic, meaning that if the load balancer of your choosing, which you might already be using in your IT center, meets the requirements, it can be utilized within the active-active clustering architecture. The load balancer is required to support the following features:
- Based on a Layer-7 architecture.
- Supports HTTP and WebSocket traffic.
- Ability to support sticky sessions for HTTP traffic and/or IP-based stickiness. IP-based means all traffic from a specific IP will be routed to the same server (this can be a problem with gateway-type scenarios). Sticky sessions are based on a cookie; sessions are routed to the same server based on the cookie, so different users from the same IP could route to different machines.
- Health checking on server endpoints.
- (Optional) It can manage SSL termination and SSL internal endpoints.
- Supports path-based routing: the ability to route to specific backends based on the URL or part of the URL. By default, all routes should go to the platform servers, but the following routes should go to the Connection Server:
  - /Thingworx/WS
  - /Thingworx/WSTunnelServer
  - /Thingworx/WSTunnelClient
  - /Thingworx/VWS
All servers should be set up to only be part of load balancing based on their health configuration. When configuring health check frequency, checks should be run at a rate based on the tolerance for bad requests to be processed. ThingWorx Foundation has /health and /ready endpoints; the /Thingworx/ready endpoint should be used by the load balancer, and it will return a 200 when the server is ready to receive traffic. The Connection Server checks health requests on a specific port and will return 200 when healthy. A minimal sketch illustrating these routing and health-check requirements follows below.
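For illustration only, here is a minimal HAProxy configuration sketch of the routing rules described above. It is not the full reference configuration from the Help Center; the hostnames, IP addresses, ports, and certificate path are placeholder assumptions, and the Connection Server health-check endpoint and port in particular should be taken from your own deployment.

# Minimal HAProxy sketch (placeholder addresses and ports) for an active-active ThingWorx cluster
frontend thingworx_front
    bind *:443 ssl crt /etc/haproxy/certs/thingworx.pem   # placeholder certificate path (SSL termination is optional)
    # WebSocket and tunnel traffic is routed to the Connection Servers
    acl is_ws path_beg /Thingworx/WS /Thingworx/WSTunnelServer /Thingworx/WSTunnelClient /Thingworx/VWS
    use_backend connection_servers if is_ws
    default_backend platform_servers

backend platform_servers
    balance roundrobin
    cookie SRV insert indirect nocache                    # cookie-based sticky sessions for HTTP users
    option httpchk GET /Thingworx/ready                   # only ready Foundation nodes receive traffic
    server twx1 10.0.0.11:8080 check cookie twx1          # placeholder Foundation node
    server twx2 10.0.0.12:8080 check cookie twx2          # placeholder Foundation node

backend connection_servers
    balance source                                        # source-IP stickiness for WebSocket traffic
    option httpchk GET /health                            # placeholder health check; port/path per your deployment
    server cxs1 10.0.0.21:8080 check                      # placeholder Connection Server
    server cxs2 10.0.0.22:8080 check                      # placeholder Connection Server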
What are some of the compatible load balancers that I can use?
While you can use any load balancer that satisfies the above requirements and meets your IT standards, below are some of the third-party load balancers that provide the features required by the active-active clustering architecture:
- HAProxy: HAProxy is free, open-source software that provides a high-availability load balancer and proxy server for TCP- and HTTP-based applications, spreading requests across multiple servers. It is very powerful and supports monitoring capabilities out of the box. PTC tests the clustering architecture using HAProxy and provides a reference configuration in the ThingWorx Foundation Help Center docs. Please note that it runs only on Linux environments. For a quick reference example of how to set up an HAProxy load balancer, see our Help Center here.
- NGINX: NGINX is an HTTP and reverse proxy server, a mail proxy server, and a generic TCP/UDP proxy server. NGINX provides proxy capabilities as well as web server options. Some features, like sticky sessions and advanced monitoring, are not available in the open-source version and require you to upgrade to NGINX Plus. If you're a Windows shop or already use NGINX Plus in your IT, then you may choose this load balancer offering. However, please note that PTC doesn't provide official configuration steps for setting it up in our Help Center documentation. For a quick reference example of how to set up an NGINX load balancer, see our Help Center here.
- AWS Application Load Balancer: Application Load Balancer (ALB) is best suited for load balancing of HTTP and HTTPS traffic and provides advanced request routing targeted at the underlying ThingWorx applications. Operating at the individual request level (Layer 7), Application Load Balancer routes traffic to targets within Amazon Virtual Private Cloud (Amazon VPC) based on the content of the request. If you're running ThingWorx deployments on AWS, then you may choose to use the AWS-offered managed load balancing services.
- F5: F5 Networks, through its BIG-IP Local Traffic Manager solution, provides advanced load balancing techniques such as a full proxy where you can inspect, manage, and report on application traffic entering and exiting your network, with additional features around SSL and performance optimization.
Load balancers are another area where ThingWorx allows for flexibility and extensibility by enabling you to use the load balancer you are most comfortable with or that best suits your needs (provided it meets the criteria above). You can also configure SSL or TLS for HAProxy when using ThingWorx HA clustering for end-to-end security. I hope this tech tip helped you develop a deeper understanding of how active-active clustering leverages load balancers to further increase your performance, and thus availability and machine uptime, among other benefits.

If you're not already on 9.0 and using active-active clustering, be sure to upgrade!

Stay connected,
Kaya
Hello!   “ThingWorx on Air” Episode 9 is now available! Grab your headphones and listen to Neal, an Azure subject matter expert, and Janie, a PM focused on Azure functionality, introduce a new integration we’re working on between Kepware, Azure IoT Edge, Azure IoT Hub and ThingWorx.   Discover how you’ll be able to leverage and model OPC UA data directly in the Thing Model, how you’ll be able to connect just about any OPC UA device through the Azure stack and to the Cloud, and so much more. We can’t wait to continue to extend ThingWorx functionality to support industry standards like the OPC UA protocols.   Are you excited? Wish you could get your hands on this functionality early? You can! Reach out to Janie at jpascoe@ptc.com to learn how you can become involved in an exclusive OPC UA & ThingWorx preview program.   Enjoy the episode and let me know what you think below!   Stay connected, Kaya
This small tutorial enables you to manage payload decoding for Adeunis devices within ThingWorx Composer in less than 10 minutes. Adeunis devices communicate on LPWAN networks (LoRaWAN / Sigfox), covering sectors such as smart building, smart industry and smart city. Encoding is also possible, but it will be covered in another article.

1. Get the Adeunis codec
Adeunis provides a codec enabling payload encoding and decoding. Download the resource file containing the codec here. Unzip the file and edit "script.txt" with your favorite text editor. Copy all text contained in the file.

2. Create the AdeunisCodec Thing
Create a Thing called "AdeunisCodec" based on the GenericThing Template.

3. Create a service called "Decode"
Create a Decode service with the following setup:
- Inputs: type (String), payload (String)
- Output: JSON
Paste the previously copied "script.txt" content and save.

4. Correct a couple of warnings
Remove all "var codec;" occurrences except the first one at line 1191. Remove the semicolons at lines 985, 1088, 1096 and 1172.

5. Remove the following section
The codec relies on implementing functions on JavaScript prototypes, which is not supported by the ThingWorx Rhino JavaScript engine. See the following documentation section, here.

Remove lines 1109 to 1157. The following class overrides will be removed:
- Uint8Array.prototype.readUInt16BE
- Uint8Array.prototype.readInt16BE
- Uint8Array.prototype.readUInt8
- Uint8Array.prototype.readUInt32BE
- Uint8Array.prototype.writeUInt16BE
- Uint8Array.prototype.writeUInt8
- Uint8Array.prototype.writeUInt32BE

6. Add new implementations of the removed functions
The functions are adapted from a JavaScript framework that provides helpers for dealing with binary data, here. Insert the following section at the top of the "Decode" script:

function readInt16BE(payload, offset) {
    checkOffset(offset, 2, payload.length);
    var val = payload[offset + 1] | (payload[offset] << 8);
    return (val & 0x8000) ? val | 0xFFFF0000 : val;
}

function readUInt32BE(payload, offset) {
    checkOffset(offset, 4, payload.length);
    return (payload[offset] * 0x1000000) +
        ((payload[offset + 1] << 16) | (payload[offset + 2] << 8) | payload[offset + 3]);
}

function readUInt16BE(payload, offset) {
    checkOffset(offset, 2, payload.length);
    return (payload[offset] << 8) | payload[offset + 1];
}

function readUInt8(payload, offset) {
    checkOffset(offset, 1, payload.length);
    return payload[offset];
}

function writeUInt16BE(payload, value, offset) {
    value = +value;
    offset = offset >>> 0;
    checkInt(payload, value, offset, 2, 0xffff, 0);
    if (Buffer.TYPED_ARRAY_SUPPORT) {
        payload[offset] = (value >>> 8);
        payload[offset + 1] = value;
    } else objectWriteUInt16(payload, value, offset, false);
    return offset + 2;
}

function writeUInt8(payload, value, offset) {
    value = +value;
    offset = offset >>> 0;
    checkInt(payload, value, offset, 1, 0xff, 0);
    if (!Buffer.TYPED_ARRAY_SUPPORT) value = Math.floor(value);
    payload[offset] = value;
    return offset + 1;
}

function writeUInt32BE(payload, value, offset) {
    value = +value;
    offset = offset >>> 0;
    checkInt(payload, value, offset, 4, 0xffffffff, 0);
    if (Buffer.TYPED_ARRAY_SUPPORT) {
        payload[offset] = (value >>> 24);
        payload[offset + 1] = (value >>> 16);
        payload[offset + 2] = (value >>> 8);
        payload[offset + 3] = value;
    } else objectWriteUInt32(payload, value, offset, false);
    return offset + 4;
}

function objectWriteUInt16(buf, value, offset, littleEndian) {
    if (value < 0) value = 0xffff + value + 1;
    for (var i = 0, j = Math.min(buf.length - offset, 2); i < j; i++) {
        buf[offset + i] = (value & (0xff << (8 * (littleEndian ? i : 1 - i)))) >>>
            (littleEndian ? i : 1 - i) * 8;
    }
}

function objectWriteUInt32(buf, value, offset, littleEndian) {
    if (value < 0) value = 0xffffffff + value + 1;
    for (var i = 0, j = Math.min(buf.length - offset, 4); i < j; i++) {
        buf[offset + i] = (value >>> (littleEndian ? i : 3 - i) * 8) & 0xff;
    }
}

7. Add the following function to support the previously inserted functions:

function checkOffset(offset, ext, length) {
    if ((offset % 1) !== 0 || offset < 0) throw new Error('offset is not uint');
    if (offset + ext > length) throw new Error('Trying to access beyond buffer length');
}

8. Add the following function for casting a String to bytes:

function splitInBytes(data) {
    var bytes = [];
    var bytesAsString = '';
    for (var i = 0, j = 0; i < data.length; i += 2, j++) {
        bytes[j] = parseInt(data.substr(i, 2), 16);
        bytesAsString += bytes[j] + ' ';
    }
    return bytes;
}

9. Remap function calls to the newly inserted functions
Use the built-in script editor replace feature. Within the service script, perform a Replace for each of the following lines:

Search: payload.readInt16BE(    Replace by: readInt16BE(payload,
Search: payload.readUInt32BE(   Replace by: readUInt32BE(payload,
Search: payload.readUInt16BE(   Replace by: readUInt16BE(payload,
Search: payload.readUInt8(      Replace by: readUInt8(payload,
Search: payload.writeUInt16BE(  Replace by: writeUInt16BE(payload,
Search: payload.writeUInt8(     Replace by: writeUInt8(payload,
Search: payload.writeUInt32BE(  Replace by: writeUInt32BE(payload,

10. At the bottom, update the following
Replace: decoder.setDeviceType("temp");
By: decoder.setDeviceType(type);

11. Insert the following at the bottom
var result = Decoder(splitInBytes(payload), 0);

12. Save the service and the Thing

13. Create a test service for the Adeunis Temp device
Within the "AdeunisCodec" Thing, create a new service called "test_decode_temp" with Output as String and insert the following code:

// result: STRING
var result = me.Decode({type: "temp" /* STRING */, payload: "43400100F40200F1" /* STRING */});

Save and execute. The expected result is:

{"temperatures":[{"unit":"°C","name":"probe 1","id":0,"value":24.4},{"unit":"°C","name":"probe 2","id":0,"value":24.1}],"type":"0x43 Temperature data","status":{"frameCounter":2,"lowBattery":false,"hardwareError":false,"probe1Alarm":false,"configurationDone":false,"probe2Alarm":false}}

Please visit the Decoder test section of the Adeunis website to see the reference for the Temp device test case, here.

These resources have been tested on ThingWorx 8.5 and with the latest and greatest ThingWorx 9. If you are more interested in the result than in the implementation process, import the attached "Things_AdeunisCodec.xml" 😉
ThingWorx provides multiple ways to deliver your data to the server. You can choose from the C-based EMS, your own C application that uses the C SDK, or SDKs for many popular languages. But what can you do if the device you want to collect data from is so small that it needs a very lightweight data delivery method? Normally you would consider using the REST web service interface and writing your own custom client to post your data, but there is an alternative: MQTT. MQTT is a lightweight protocol that can be used from an Arduino with an Ethernet Shield to stream real-time data directly to ThingWorx by installing the MQTT Marketplace Extension on your server. To learn more about how this kind of solution works, I created this slide deck while building a hardware example: DeliveringArduinoDataToThingworx.pdf. Hopefully it can help others who want to create this kind of solution as well; a short, generic publishing sketch follows below.
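To make the idea of lightweight MQTT publishing concrete, here is a minimal, hypothetical sketch in Python using the paho-mqtt client library (1.x API). It is not the Arduino sketch from the slide deck above, and the broker hostname, port, and topic name are placeholder assumptions; the actual topic-to-property mapping depends entirely on how the MQTT extension is configured on your ThingWorx server.

import json
import paho.mqtt.client as mqtt  # third-party MQTT client library

# Placeholder connection details: replace with your own broker host and port.
BROKER_HOST = "broker.example.com"
BROKER_PORT = 1883
TOPIC = "demo/TempAndHumidityThing"  # hypothetical topic; must match the extension's subscription

client = mqtt.Client()
client.connect(BROKER_HOST, BROKER_PORT, keepalive=60)

# Publish a small JSON payload; the subscribing side maps it to Thing properties.
payload = json.dumps({"Prop_Temperature": 45, "Prop_Humidity": 33})
client.publish(TOPIC, payload, qos=1)

client.disconnect()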
JMeter for ThingWorx

Overview
Apache JMeter is an open-source tool designed for load testing and measuring the performance of a web application. JMeter has a wide range of features to facilitate this testing, including support for a variety of server and protocol types, a full-featured testing IDE with the ability to record test steps from either a browser or a native application, and built-in debugging tools. Information about JMeter can be found on Apache's website.

Working with JMeter is not always intuitive, but it also isn't that much harder than regular software development. Take some time to explore the official Apache JMeter documentation and figure out where things go and how to mechanically make use of the JMeter IDE. Then step through this tutorial to create a basic test that logs in to ThingWorx, accesses a mashup, and clicks on a few widgets. This is the first in a series to come, courtesy of IoT EDC Engineer Tim Atwood ( @atwood ) and the whole EDC team.

Installation
Download JMeter from Apache's website. Unpack the archive and copy the files to a desired location. Run the application by double-clicking the "ApacheJMeter.jar" file within the bin directory. JMeter is now installed and ready to use.

Creating a Test
1. Set up a proxy in your browser of choice (or on the OS in settings).
2. Select the green "templates" icon in JMeter, and then select "Recording" for the template.
3. Configure the recording template to point towards your ThingWorx Navigate or Foundation server, then click "Create".
4. Hit "Start" under the "HTTP(S) Test Script Recorder" tab of the new JMeter project. Make sure the port is set correctly under Global Settings.
5. A pop-up box will appear that always stays visible on top of the active browser window, so that the recording can be controlled and stopped at any time. Leave the "Transaction name" field empty so that each transaction recorded by the software is automatically named after the web request (this helps differentiate one from the other, and they can each be renamed later).
6. Open your browser and navigate (via direct URL if possible, to keep things simple) to the mashup you wish to test. Log in and let the page load.
7. Click on anything you'd like on the mashup to capture the activity of that test. Then click "Stop" on the pop-up recorder window to stop the recording.

Each transaction will be assigned an index, and the source code behind each of these transactions can be reviewed and manually modified in the main JMeter window. Here is the login request, for instance:

The HTTP Authorization Manager is used to automatically authorize a defined user login for the thread to any of the Base URLs listed. In this case, though, there are two separate servers being accessed during the test, and one may need to be added manually:

Save the project before continuing, as manual modifications come next.

Within the task page, as you do the recording, a set of parameters or body data will be recorded. Modifying this is how you parametrize the test scenario with variables like the username and password. To simulate logging in as other users, you have to parameterize this and not rely on the administrator account name and password entered into the browser.
Rename the task controller to "MyTasks" or something more easily identified than the long string it has now:

Some recorded items, like static images and stylesheets, will be non-essential: things the browser processes for better graphical representation, but which are often cached and do not greatly affect the scalability results of the test. These can be highlighted and disabled all at once:

Also ensure that any cascading stylesheets have been disabled. Enable the "View Results Tree" to ensure you can review the results of the test script during the editing phase. However, this "Listener" element has a high memory footprint during test execution, so it should be disabled before running an actual scale test.

Next we need to parametrize the user login information and pull it from a CSV file.

The colon means that "Administrator" is the default user to use for login.

You can add other properties as well, like ramp-up time, run time, number of users, and protocols to use. The ramp-up time determines how quickly the threads are allocated for the test, which, if done slowly enough, prevents the thundering herd scenario. In more complex scenarios, logic controllers can be inserted to control the flow of the test. This allows for options such as if-then conditions for different user permissions, or parameter-based routes for better randomization of actions in different threads. This will be covered in more detail in a future article.

Pre- and post-processors can be used as well, the latter much more than the former here, to extract information from a response and feed it into the variables used by follow-up requests. For example, see the script in this image: This one has a variable that it extracts from the object number property, defined in the CSV file, and converts it into another variable that is used in subsequent scripts. This script uses the object number reference to pull the name out of the body data and make the request, which is then post-processed by a set of extractors. One is a JSON extractor that tries to get an ID out of the JSON response. There are also a regular expression extractor and a BeanShell post-processor, which populate some variables based on the response. Once it extracts all of the variables from the response to this particular request (GetSearchResults in this case), it then tailors the additional requests based on these.

Customize the script according to the needs of your own application. Alternate between recording and manually modifying the recording code to ensure the test performs exactly as required and from the perspective of different users with different permissions. Also vary the type of activity performed on the mashup. Highlight the "View Results Tree" tab and click the green start button at the top of the window to see the results appear.

If you are getting an unauthorized message, ensure that the scope is right for the login information, which may require moving the "HTTP Authorization Manager" component around in the project. Be sure to check the URLs and credentials entered for each type of user. Occasionally the recorder will insert a long authentication string into the URL, and you want to manually set the URL for the credentials to the most generic URL possible for the server.
This can be parametrized too:

Referencing the CSV file defined here:

Which looks like this for a more complicated scenario (covered in a future article):

The columns here represent the username, password, object number in Windchill, and object name in Windchill, as well as a wait time used to vary how the logic is executed and some extra variables that tell the logic switches what to do, creating a more varied and realistic test (a purely hypothetical example of such a file appears at the end of this tip).

Conclusion
Following these steps again and again on the various mashups throughout an application can ensure that a script for each web page, and for each type of user on each web page, is created and added to the testing suite. This results in a load test that is representative of the real-world user load placed on an application. Load testing is a critical part of the development lifecycle in any application, and ThingWorx is no exception. Any further questions about the capabilities of JMeter not covered here can be answered by the full JMeter user manual, found on the Apache website. Future articles will include some basic scripts that test basic things, which can serve as examples for more complex ThingWorx JMeter script development. Here is an example of one tool PTC uses for internal QA of ThingWorx, designed to load test a Navigate application (specifically its built-in mashups):

Something similar to this tool may be available for public use later this summer. In the meantime, feel free to use the tutorial above to create scripts of your own. Any issues building your custom load tests in JMeter can be discussed right here on this thread with our JMeter experts. Happy developing!
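For reference, a purely hypothetical users.csv matching the column layout described above (username, password, Windchill object number, Windchill object name, wait time in milliseconds) might look like the following; every value here is an invented placeholder, not real test data.

user1,Password1,0000000123,Sample Part A,2000
user2,Password2,0000000456,Sample Assembly B,5000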