
IoT & Connectivity Tips

Hello, IIoT Developers!

9.0 is out—let’s dive right into what’s new with Composer and Mashup Builder. (If you haven’t already seen what’s new in 9.0 with active-active clustering, be sure to check out this tech tip.) We have a lot of new functionality that we can’t wait for you to start using, so without further ado, let’s begin!

Mashup Builder

In 9.0, we continue to make great advancements to our Mashup Builder and visualization toolset. We’re all about productive developers building the coolest IIoT apps!

Mashup in 9.0 with new Line Chart widgets!

Undo/Redo

Ever spent a good half hour arranging a layout, making some data bindings, and adding some styles, only to decide, once you view the Mashup, that you aren’t quite happy with your last few tweaks? The panic sets in when you forget exactly what you changed, and you don’t want to lose all of your edits. What is a developer to do? In older versions of ThingWorx, you might cancel without saving your edits, or you might try to surgically get back to a good state. Either way, you were not a happy developer.

ThingWorx 9.0 will make you happy again. All actions in the Mashup are now tracked by an undo/redo buffer. Buttons are now available in the toolbar to help you revert actions. An action history drop-down is also available if you want to undo or redo a few jumps at once. Sometimes, it’s the little things!

Undo and Redo actions now available in the Mashup Builder.

Mobile Settings

For a few releases now, we’ve been upgrading our visualization toolset and examining ways we can be better for desktop and mobile experiences. It starts with our latest layout engine/editor introduced in 8.4 and with the new Polymer-based web components with responsiveness “designed in,” introduced in 8.4, 8.5, and 9.0. For the more adventurous folks out there, you can also use Custom CSS media queries to influence your layouts based on the viewport settings. There are also custom resolutions and screen orientations available in the Mashup Builder toolbar itself, so you can view your content in design mode with each of those targets in mind.

In 9.0, we have introduced a new Mobile Settings configuration editor on the Mashup. This allows you to define your scaling and width, as well as your height and zoom settings, for mobile browsers. There are even iOS-specific settings for shortcuts and the status bar.

Mobile Settings Editor and iPhone view of a 9.0 Mashup.

New Configure Bindings Dialog

The heart of any application is the data and how it is leveraged in the UI. For Mashups, there are many good ways to do that with our drag-and-drop functionality or our Bindings panel. But in 9.0, we have completely revamped the Configure Bindings dialog. You’ll quickly notice when you open the new dialog that it has a more usable interface with more screen real estate to explore services, properties, sources, and targets. There is now a clear separation between the Widget, Data, and Function sources, which makes things easier to locate and build. If you’ve made bindings already, you’ll also see the complete map of bindings for your context. New search enhancements and chip-based filters for target bindings have been added as well.

New Configure Bindings Dialog

Another cool feature is that the bindings graph in the dialog will also show you any circular references you may have inadvertently created; see the red circle icon with a number 1 inside it in the diagram below. A circular reference is almost always a bug, so we may as well tell you about it!
New Circular references checking!

New Web Components

If you look at almost any IIoT application, you’re almost sure to see a chart. IIoT decisions are always centered around looking at telemetry and KPIs at specific moments in time, events, history, and future projections. ThingWorx has new charts in 9.0 for Line, Bar, and Schedule. They look sharp, they are powerful, and they are a true upgrade over the former ThingWorx charts.

Here are some highlights. All are based on the D3 framework and follow the PTC Design System. The Line chart also supports the sub-types Run, Step, Area, Streamgraph, and Scatter plot. The Bar chart also supports a column-based view. The Schedule chart is a great way to visualize downtime events, production orders, machine states, or device alarms. All charts feature responsive layout, advanced performance and data sampling, tooltips, multiple series support, multiple orientations for legends and axis labels, and plenty of styling and data configurations. They also all have great zooming capability for larger data sets, including horizontal and vertical pan, drag/lasso zoom, interval controls, range zoom, and zoom slider controls.

Line Charts with filters, zoom sliders, and markers

Bar Charts with zoom sliders, horizontal and vertical orientations, and configurable legends

Schedule Chart with drilldown and hover tooltips

Other Coolness

The team was busy with these highlighted features, but there is so much more in 9.0! For Mashup, we also added:
- Improved tooltip and icon hover support for all web component-based widgets
- Accessibility improvements for keyboard navigation and focus
- New category filters for widget property configuration editors
- New data tools panel
- New context menu options
- New theming options for layout containers

Composer

For application development outside of the Mashup area, you’ll also notice some new changes in the Composer tool. One of our favorites is the new navigation panels. If you’ve been with ThingWorx for a few releases, you’ve seen many redesigns and updates to the Composer interface. We are constantly evaluating and testing this interface’s design with users to make it a highly productive and intuitive environment. You’ll now see much more horizontal real estate in the Composer because we’ve moved the top header bar into a new left-hand navigation. We’ve also improved the grid resizing in the entity and other list views in the interface to work better with larger result sets.

New Composer Layout with updated left-hand navigation

One more bonus feature to highlight! We now have quick copy buttons in common places in the interface where you might want to copy entity names or application keys. Just click and that text is in your clipboard. Very handy for searching or making bindings!

Quick copy buttons on entity names

As you can see, there are plenty of awesome new features and upgrades in the ThingWorx 9.0 application development tools. We also have a brand-new visual SDK available in the 9.0 release so that you can make your own widgets with Polymer and import them into the Mashup Builder. Stay tuned for another Ask Kaya tech tip soon on the SDK.

Like what you see? Have a question? Drop us a line in the comments!

Stay Connected!
Kaya
It’s here! It’s happening! ThingWorx 9.0 released today!

Unleash the Power of Now and leverage exciting new features like:
- Improved performance and higher availability via active-active clustering
- Faster, more streamlined development with:
  - New widgets like line, bar & schedule charts
  - Undo & redo functionality in Mashup Builder (look out for an upcoming Ask Kaya tech tip)
  - New Mashup mobile settings
- Simpler management of entities and applications:
  - Improved auditing scale & usability
  - Ability to group entities via Thing Groups
  - Ability to deploy apps built prior to 8.5 with Solution Central
- And more—like:
  - Confidence model training
  - Delegated authorization for Flow [Navigate]
  - Component-based app development for custom production use apps (upgrade-safe)

And even more—the list goes on and on. Check out the full list of new functionality in the 9.0 release notes and discover our What’s New page.

Access 9.0 under “ThingWorx Foundation” on the PTC Downloads Page and unleash its power today! As you get started, check out the ThingWorx Help Center for guidance.

Enjoy 9.0 and let us know what you think below!
Kaya
Hello everyone!

We’re back with Episode 08 of ThingWorx on Air! In this episode, I sit down with Ryan Servais, one of our High Availability (HA) experts on the ThingWorx product management team. We continue our HA discussion from previous Ask Kaya tech tips and cover some frequently asked questions, like: What are the benefits of active-active clustering? How does active-active clustering enable horizontal scale? How can I get started?

Brand-new to active-active clustering? Check out these tech tips to start:
- 9.0 Sneak Peek: Active-Active Clustering for ThingWorx
- 9.0 Sneak Peek: ThingWorx Architecture for Active-Active Clustering
- 9.0 Sneak Peek: Flexible Deployments of Active-Active Clustering for ThingWorx

Click here to listen to how active-active clustering can help you in a variety of scenarios:
- If you have a request overflow in production and your servers are slowing down, try out active-active clustering!
- If your IT admin keeps delaying replacing the network card on your HPE rack server and you keep losing connections, check out the power of active-active clustering!
- If your team is challenged to provision 1000s of additional assets into your system and you’re worried one server can’t handle it, use active-active clustering for horizontal scale!

Finally, if you haven’t already, check out Ryan’s LiveWorx session with Senior IoT Product Manager Ayush Tiwari, where they break down availability into its core components and explain how you can leverage active-active clustering to achieve key benefits like reduced downtime, increased cost savings, and more.

Enjoy!

Stay connected,
Kaya
This is a slide deck I created while learning how to post data from an Arduino to ThingWorx using the MQTT protocol.
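The deck itself targets Arduino C++, but the publish flow it walks through is the same in any MQTT client. Here is a minimal sketch of that flow in Python with paho-mqtt; the broker host, topic, and payload shape are placeholders of my own, not values from the deck:

```python
import json
import paho.mqtt.client as mqtt

# Placeholder connection details -- substitute your own broker and topic.
BROKER = "broker.example.com"
TOPIC = "sensors/arduino1/telemetry"

client = mqtt.Client()
client.connect(BROKER, 1883, keepalive=60)

# Publish one reading as JSON; on the platform side, the ThingWorx MQTT
# integration would map this payload onto Thing properties.
reading = {"temperature": 21.7, "humidity": 48.2}
client.publish(TOPIC, json.dumps(reading), qos=1)
client.disconnect()
```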
Hiya,

I recently prepared a short demo which shows how to onboard and use Azure IoT devices in ThingWorx, and added some usability tips and tricks to help others who might struggle with some of the things that I did.

The good news: I recorded and posted it to YouTube here.
- Connect Azure IoT Hub with ThingWorx (to be updated soon for the 9.0 release)
- Using the Azure IoT Dev Kit with ThingWorx
- Getting the Azure IoT Hub Connector Up and Running (V3/8.5)

Enjoy, and don't hesitate to comment with your own tips and feedback.

Cheers,

Greg
When to Include InfluxDB in the ThingWorx Development Lifecycle
(This article is also available for download as an attached PDF.)

The Short Answer

InfluxDB is a time series database designed specifically for data ingestion. Historically, InfluxDB has been viewed as a high-scale expansion option for ThingWorx: a way to ensure the application works as intended, even when scaled up to the enterprise level. This is certainly one way to view it, because when there are many, many remote things, each with a lot of properties writing to the Platform at short intervals, then InfluxDB is a sure choice. However, what about in smaller applications? Is there still a benefit to using an optimized data ingestion tool in any case? The short answer is: yes, there is!

Using InfluxDB for optimized data ingestion is a good idea even in smaller-sized applications, especially if there are plans to scale the application up in the future. It is far better to design the application around InfluxDB from the start than to adjust the data model of the application later on, when an optimized data ingestion process is required. PostgreSQL and InfluxDB simply handle the storage of data in different ways, with the former functioning better with many Value Streams and the latter with fewer Value Streams. Switching the way data is retained and referenced later, when the application is already on the larger side, delays growing the application and adding more devices. Likewise, if the Platform reaches its ingestion limits in a production environment, there can be costly downtime and data loss while a proper solution (which likely involves reworking the application to work optimally with InfluxDB) is implemented.

Don’t think that InfluxDB is for expansion only; it is an optimized ingestion database that has benefits at every stage of the application development lifecycle. From end to end, InfluxDB can ensure reliable data ingestion, reduced risk of data loss, and reduced memory and CPU usage by the deployment overall. Preliminary sizing and benchmark data is provided in this article to explain these recommendations. Consider how ThingWorx is ingesting data now and how much CPU and other resources are used just for acquiring the data; perhaps InfluxDB would be a benefit to your application’s performance.

The Long Answer

In order to uncover just how beneficial InfluxDB can be in any size application, the IoT Enterprise Deployment Center has run some simulations with small and medium sized applications. The use case in the simulation is simple, with user requests coming from a collection of basic mashups and data ingestion coming from various numbers of things, each with a collection of “fast” and “slow” properties which update at different rates. This synthetic load does not represent a complete application scenario, so the memory and CPU usage shown here should not be used as sizing recommendations. For those, stay tuned for the soon-to-be-released ThingWorx 9.0 Sizing Guide (or check out the current 8.5 Sizing Guide).

Comparing Runs

When determining the health of the ThingWorx Platform, there are several categories to inspect: Value Stream Queue Rate and Queue Size, HTTP Requests, and the overall Memory and CPU use of each server. Using Grafana to store the metrics results in charts like those below, which can easily be compared and contrasted and used to evaluate which hardware configuration performs best.
The size of the numbers on the vertical axis indicates the total resources used for that metric, while the slope or trend of each chart indicates bottlenecks and inadequate resource allocation for the use case.

In this case, all darker charts represent data from PostgreSQL-only configurations, while the lighter charts represent the InfluxDB instances. Because this is not a sizing guide, whether each of these charts comes from the small or medium run is unimportant, as long as they match (for valid comparisons between with and without Influx). The smaller run had roughly 20k Things, and the larger closer to 60k, both with 275 total Platform users (25 Admins) and 3 mashups, each called at various refresh rates over the course of the 1-hour testing period. Note that in the PostgreSQL-only instances, there were more Thing Templates and corresponding Value Streams. This change is necessary between runs because only with fewer Value Streams does InfluxDB begin to demonstrate notable improvements.

The most important thing to note is that the lighter charts clearly demonstrate better performance for both size runs. Each section below will break down what the improvement looks like in the charts to show how to use Grafana to verify the best performance.

Value Stream Queue

The vertical axis on the Value Stream Queue Rate chart shows how many total writes per second (WPS) the Platform can handle. The average is 10 WPS higher using InfluxDB in both scenarios, and InfluxDB is also much more stable, meaning that the writes happen more reliably. The Value Stream Queue Size chart demonstrates how well the writes within the queue are processed. Both of these are necessary to determine the health of data ingestion.

If the queue size were to increase and trend upward in the lighter Queue Size chart, that would mean the Platform couldn’t handle the higher ingestion rate. However, since the Queue Size is stable and close to 0 the entire time, it is clear that the Platform is capable of clearing out the Value Stream Queue immediately and reliably throughout the entire test.

FIGURE 1 – THESE REPRESENT THE DATA GETTING STORED INTO THE DATA PROVIDER. NOTE: THE FORMER IS MUCH LOWER THAN THE LATTER.

FIGURE 2 – NOTE THE DATA LOSS IN THE NON-INFLUX INSTANCE (THE QUEUE IN GREEN REACHES THE MAX IN YELLOW). THE INFLUX INSTANCE HAS LESS TROUBLE CLEARING OUT THE QUEUE, AS DEMONSTRATED BY THE CONSISTENTLY LOW QUEUE SIZE.

HTTP Requests

Taking the strain of ingestion off of the Platform’s primary database frees its resources up for other activities. This in turn improves the performance and reliability of the Platform in responding to HTTP requests: those which, in a typical application, aggregate data into smaller data stores (depending on the use case) and render the mashups for the end users. The business logic and mashups can be more complex when there is one database designated for ingestion (InfluxDB) and one for everything else (PostgreSQL).

FIGURE 3 – THE DARKER CHART SHOWS A LOT OF CHOPPINESS, MEANING THAT WHILE THE PLATFORM WAS RESPONDING THE WHOLE TIME, IT WAS NOT DOING SO RELIABLY. THE SMOOTHER SECOND CHART SHOWS HOW MUCH MORE EASILY THE PLATFORM CAN HANDLE THESE REQUESTS WHEN THE LOAD IS DISTRIBUTED INTELLIGENTLY ACROSS MULTIPLE SERVERS, EACH OPTIMIZED FOR THE TYPE OF DATA THEY RECEIVE. THE “STAIRCASE” SHAPE OCCURS BECAUSE THE SIMULATOR INCREASES THE WORK LOAD EVERY 10 MINUTES UNTIL IT BREAKS.
Likewise, the nature of Postgres lends itself well to this differentiation, given that there are many more database tables required for supporting the HTTP requests, something Postgres does well. That leaves Influx to handle the time-series data and ingestion, the primary strengths of that software. Splitting the load across multiple servers in this way results in smaller server sizes overall, each of which is streamlined and optimized to handle exactly what the Platform gives it.

Note that in both of these charts, there are no bad requests, so both would seem to be successful runs. However, as later charts will demonstrate more clearly, there is a catastrophic failure when the load is increased around 12:30p. The simulation ends before the server begins to show any real symptoms of the issue, which is why there are no bad requests. The maximum Operations Per Second (OPS) in the Hardware Specifications and Performance section is taken from before the failure begins.

Clearly the InfluxDB instance has better performance, given that the average Operations Per Second (OPS) is substantially higher, nearly 4 times what is seen in the PostgreSQL-only instance. How well the Platform manages the business logic and mashup loading will, of course, depend on many factors. In this test scenario, the OPS was increased by increasing the mashup refresh rate on the InfluxDB instances (which could handle over double the operations). Likewise, the number of Stream writes to the PostgreSQL database could be double what it was when PostgreSQL was the only database. Therefore, configuring InfluxDB for the data ingestion and leaving Postgres for the rest of the application makes the load much easier on the Platform, and the same would be true even in a much more complex scenario.

Memory and CPU

The important thing here is to keep memory use low enough that any spikes in usage won’t cause a server malfunction. CPU usage should stay at or below around 75%, and memory should never exceed around 80% of the total allocated to the server. The sizing guides can help determine what this allocation of memory needs to be.

Of note in these charts are the slight upward slope of the CPU usage in the darker chart, indicating the start of a catastrophic failure, and the difference in the total memory needed for the ThingWorx Platform and Postgres servers when Influx is used or not. As is apparent, the servers use much less memory when the database load is split up intelligently across multiple servers.

FIGURE 4 – THE THINGWORX CPU IS ABOUT THE SAME HERE AS IN THE INFLUXDB CONFIGURATION BELOW, BUT LOOK AT HOW MUCH MORE MEMORY BOTH THE PLATFORM AND THE POSTGRES DATABASE NEED ALLOCATED TO THEM IN THIS CONFIGURATION (64 GB APIECE). ALSO NOTE THE JUMP IN CPU AND MEMORY USAGE AFTER 12:30P. THIS IS REFERENCED IN THE PREVIOUS SECTION, AND THE UPWARD SLOPE OF THE USAGE AFTER THAT POINT INDICATES THE START OF A CATASTROPHIC FAILURE. THE TEST ENDS TOO SOON TO SEE ANY SYMPTOMS OF FAILURE, BUT IT IS A SURE THING AFTER THE INCREASE IN LOAD AROUND 12:30P.

FIGURE 5 – INFLUX NEEDS AN EXTRA SERVER, BUT THE INFLUX AND POSTGRES SERVERS TOGETHER ARE LESS THAN HALF THE SIZE OF THE SINGLE POSTGRES DATABASE SERVER IN THE POSTGRES-ONLY CONFIGURATION (8 GB). THINGWORX IS SMALLER TOO (32 GB).

Hardware Specifications and Performance

These are the exact specifications for each simulated instance, broken down by size and by whether InfluxDB is configured or not.
Note that some of the hardware specifications may be more than is necessary for a given real-world use case. As stated previously, this document is not a sizing guide (use the official ThingWorx Sizing Guide). Note that the maximum numbers of WPS and OPS are shown here. The maximums are higher in the InfluxDB scenarios, meaning that even with smaller-sized servers, the InfluxDB configurations can handle much greater loads.

Summary

In conclusion, if InfluxDB may at some point be needed in the lifecycle of an application, because the expected number of things or the number of properties on each thing is large enough that it would otherwise max out the limitations of the Platform, then InfluxDB should be used from the very start. There are benefits to using InfluxDB for data ingestion at every size, from performance to reliability, and of course the obviously improved scalability as well.

Reworking the application for use with InfluxDB later on can be costly and cause delays. This is why the benefits and costs associated with an InfluxDB-centric hardware configuration should be considered from the start. More servers are required for InfluxDB, but each of these servers can be sized smaller (depending on the use case), and all of this will affect the overall cost of hosting the ThingWorx application. The benefits of InfluxDB are especially pronounced when used in conjunction with clusters, which will be demonstrated fully in the 9.0 Sizing Guide (soon to be released). If InfluxDB is used to interface with the clusters, then there are even more resources to spare for user requests.

It is considered ThingWorx best practice for high-ingestion customers to make use of InfluxDB in applications of any size. Note, though, that this means the number of Value Streams per Influx database will need to be limited to single digits. We hope this helps, and from everyone here at the EDC, happy developing!
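As a quick gut check on where your own application sits relative to the ingestion discussion above, a back-of-envelope estimate of the steady-state write rate is useful. All inputs below are illustrative placeholders, not benchmark values from this article:

```python
# Rough steady-state ingestion estimate: property writes per second (WPS)
# arriving at the Value Stream queue. Substitute your own numbers.
things = 20_000         # connected Things
fast_props = 5          # properties per Thing updating every 60 s
slow_props = 20         # properties per Thing updating every 3600 s

wps = things * (fast_props / 60 + slow_props / 3600)
print(f"Estimated ingestion: {wps:,.0f} WPS")  # ~1,778 WPS for these inputs
```

Compare a number like this against what your own Value Stream Queue Rate and Queue Size charts show the Platform actually clearing; a queue size that trends upward at your estimated WPS is the signal that an optimized ingestion provider like InfluxDB belongs in the design.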
Hi Community,

Although we have reference architectures and integration paths for connecting devices to ThingWorx through Azure IoT, no one has ever written anything about doing the same from one ThingWorx to another. I thought I’d change that and put some ideas out there around how one might go about doing this. Although this is not officially supported or recommended by PTC, I have consulted with a number of leading SMEs on the subject, who have participated in forming the basis of my thinking outlined here.

Components required (in order of communication path):
1. On-premise ThingWorx Platform
2. Protocol Adapter Toolkit* (CXS) - MQTT
3. Azure IoT Edge
4. Azure IoT Hub
5. ThingWorx Azure IoT Hub Connector (CXS)
6. Azure Cloud-hosted ThingWorx Platform

The PAT (2), with a codec to encode MQTT messages, publishes to the on-premise IoT Edge MQTT endpoint, which handles store-and-forward of messages to IoT Hub. An Azure IoT device would exist for each Thing you wish to represent on the ThingWorx servers. The Azure IoT Hub Connector would pick up the incoming messages and pass them on to the cloud ThingWorx, which would decode the MQTT payload and map it to Thing property updates.

The only part that I presently don’t like about this approach is that you’ll need to decode the MQTT messages on the ThingWorx platform in the cloud when they are received from the IoT Hub, and this mechanism will also need to handle encoding and publishing back to the IoT Hub if C2D (Cloud-to-Device) messages are to be implemented (aka bi-directional). This is required as ThingWorx only supports AlwaysOn as an application level protocol, so some form of mapping needs to be done.

* Another approach would be to replace the PAT with a custom agent which implements both the ThingWorx Edge SDK and the Azure IoT device SDK.

Regards,

Greg Eva
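P.S. To make the encode/publish step above concrete, here is a rough sketch of what the codec might emit, written in Python with paho-mqtt rather than as a PAT codec. The `devices/<id>/messages/events/` topic follows Azure IoT’s MQTT convention, but the JSON payload shape is my own invention, and the TLS/authentication details against IoT Edge are deliberately omitted:

```python
import json
import paho.mqtt.client as mqtt

DEVICE_ID = "Thing001"  # one Azure IoT device per Thing, as described above
TOPIC = f"devices/{DEVICE_ID}/messages/events/"  # Azure IoT Hub/Edge MQTT topic

client = mqtt.Client(client_id=DEVICE_ID)
# Real deployments must configure TLS plus SAS-token or X.509 auth here;
# see the Azure IoT Edge docs for the username/password conventions.
client.connect("iot-edge.example.local", 8883)

# Invented payload shape: the cloud-side ThingWorx decodes this and maps
# it onto Thing property updates.
payload = {"thingName": DEVICE_ID,
           "properties": {"pressure": 3.2, "state": "RUNNING"}}
client.publish(TOPIC, json.dumps(payload), qos=1)
client.disconnect()
```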
Hi Community,

I’ve recently had a number of questions from colleagues around architectures involving MQTT and what our preferred approach is. After some internal verification, I wanted to share an aggregate of my findings with the ThingWorx Architect and Developer Community.

PTC currently supports three methods for integrating with MQTT for IoT projects:
- ThingWorx Azure IoT Hub Connector
- ThingWorx MQTT Extension
- ThingWorx Kepware Server

Choice is nice, but it adds complexity and sometimes confusion. The intent of this article is to clarify and provide direction on the subject to help others choose the path best suited for their situation.

ThingWorx MQTT Extension

The ThingWorx MQTT extension has been available on the marketplace as an unsupported “PTC Labs” extension for a number of years. Recently its status has been upgraded to “PTC Supported,” and it has received some attention from R&D in the form of bug fixes and security enhancements. Most people who have used MQTT with ThingWorx are familiar with this extension. As with anything, it has advantages and disadvantages. You can easily import the extension without having administrative access to the machine, it’s easy to move around and store with projects, and it can be up and running quite quickly. However, it is also quite limited when it comes to the flexibility required when building a production application, is tied directly to the core platform, and does not get feature/functionality updates.

The MQTT extension is a good choice for PoCs, demos, benchmarks, and prototypes, as it provides MQTT integration relatively quickly and easily. As an extension which runs with the core platform, it is not a good choice as a part of a client/enterprise application where MQTT communication reliability is critical.

ThingWorx Azure IoT Hub Connector

Although Azure IoT Hub is not a fully functional MQTT broker, Azure IoT does support MQTT endpoints on both IoT Hub and IoT Edge. This makes an interesting option: have MQTT devices publish to Azure IoT and be integrated to ThingWorx using the Azure IoT Hub Connector, without actually requiring an MQTT broker to run and be maintained. The Azure IoT Hub Connector works similarly to the PAT and is built on the Connection Server, but adds the notion of device management and security provided by Azure IoT.

When using Azure IoT Edge configured as a transparent gateway with buffering (store and forward) enabled, this approach has the added benefit of being able to buffer MQTT device messages at a remote site, with the ability to handle Internet interruptions without losing data.

This approach also has far greater integrated security capabilities, leveraging certificates and tying into Azure Key Vault, as well as easily scaling up the resources receiving the MQTT messages (IoT Hub and the Azure IoT Hub Connector). Considering that this approach is built on the Connection Server core, it also follows our deployment guidance for processing communications outside of the core platform (unlike the extension approach).

ThingWorx Kepware Server

As some will note, KepWare has some pretty awesome MQTT capabilities, both as north- and southbound interfaces. The MQTT Client driver allows creating an MQTT channel to devices communicating via MQTT, with auto-tag creation (from the MQTT payload). Coupled with the native ThingWorx AlwaysOn connection, you can easily connect KepWare to an on-premise MQTT broker and connect these devices to ThingWorx over AlwaysOn.
The IoT Gateway plug-in has an MQTT agent which allows publishing data from all of your KepWare-connected devices to an MQTT broker or endpoint. The MQTT agent can also receive tag updates on a different topic and write back to the controllers. We’ve used this MQTT agent to connect industrial control system data to ThingWorx through cloud platforms like Azure IoT, AWS, and communications providers.

ThingWorx Product Segment Direction

A key factor in deciding how to design your solution should be alignment with our product development direction. The ThingWorx Product Management and R&D teams have for years been putting their focus on scalable and enterprise-ready approaches that our partners and customers can build upon. I mention this to make it clear that not all supported approaches carry the same weight. Although we do support the MQTT extension, it is not in active development, because out-of-platform, microservices-based communication interfaces are our direction forward.

The Azure IoT Hub Connector, being built on the Connection Server, is currently the way forward for MQTT communications to ThingWorx Foundation.

Regards,

Greg Eva
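P.S. For reference, the IoT Gateway MQTT agent publishes JSON documents roughly along these lines, and a consumer only needs a few lines to unpack them. Treat the topic name and payload shape here as assumptions to verify against your own KepWare configuration and version:

```python
import json
import paho.mqtt.client as mqtt

# Approximate shape of an IoT Gateway MQTT agent message: a "values" array
# whose entries carry the tag id, (v)alue, (q)uality, and (t)imestamp.
def on_message(client, userdata, msg):
    doc = json.loads(msg.payload)
    for item in doc.get("values", []):
        print(item["id"], item["v"], item["q"], item["t"])

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.example.com", 1883)
client.subscribe("iotgateway")  # default-style topic name; an assumption here
client.loop_forever()
```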
ThingWorx is great for storing large amounts of data coming from your devices, but it can also be used like a traditional, row-based database for information you would like to integrate with your thing data. Attached to this blog entry is a short example of creating an address book database using a DataTable and a DataShape. It does not focus on creating mashups but sticks with discussing the modeling and service calls you would use to create a simple database.
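As a small illustration of the service-call side (separate from the attached example), rows can also be added over the REST API by invoking the DataTable’s AddDataTableEntry service. The Thing name, field names, and the exact JSON accepted for the INFOTABLE `values` parameter below are assumptions to verify against your version’s Help Center:

```python
import requests

BASE = "https://twx.example.com/Thingworx"
HEADERS = {
    "appKey": "<your-app-key>",        # placeholder credential
    "Content-Type": "application/json",
    "Accept": "application/json",
}

# "AddressBook" and its fields are hypothetical; some versions also expect
# a dataShape block alongside "rows" for INFOTABLE parameters.
row = {"name": "Ada Lovelace", "phone": "555-0100", "email": "ada@example.com"}
resp = requests.post(
    f"{BASE}/Things/AddressBook/Services/AddDataTableEntry",
    headers=HEADERS,
    json={"values": {"rows": [row]}},
)
resp.raise_for_status()
```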
Ever dreamed of participating in the design of the latest ThingWorx features? Now’s your chance! Join the UX Lab in usability and research sessions to help inform the design of your favorite ThingWorx features, from the Edge to Kepware to remote monitoring to Solution Central and more!

For those of you newer to PTC, the UX Lab is an opportunity to see early views of product mockups, wireframes, and more, and to provide your direct feedback. The UX Lab is part of LiveWorx, our definitive event for digital transformation, and this year both are virtual!

Click here to influence the design of Edge, Kepware, remote monitoring, Solution Central, and so much more!

Reach out with any questions!

Stay connected,
Kaya
We are bringing the LiveWorx UX Lab to you this year! Contribute to the design and development of PTC’s IoT & AR products from the comfort of your home.

Participate in online 1:1 sessions with PTC researchers and designers to take our prototypes and conceptual designs for a spin, and share your expertise to directly impact the experience of our future products.

For all LiveWorx 2020 UX Lab sessions, click here.

IoT & AR session links:
- SOLUTION CENTRAL: User Management
- SOLUTION CENTRAL: PTC Solution Deployment
- SERVICE: Remote Monitoring Solution
- THINGWORX: Managing Building Blocks in Composer
- THINGWORX KEPWARE EDGE: Container Based Software at Scale
- EDGE: Next Generation IoT Edge (Asset) Manager
- THINGWORX: Asset Modeling
- DIGITAL PERFORMANCE MANAGEMENT SOLUTION: Bottleneck ID & Balanced Scorecard
- MANUFACTURING: PTC Solution Strategy using Building Blocks
- IMPLEMENTING AR and IOT: Successes and Difficulties
- VUFORIA EDITOR: Authoring Work Instructions
- VUFORIA EDITOR: Authoring Work Containing Augmented Reality
ThingWorx 8.5 Architecture Deployment Guide

The ThingWorx 8.5 Architecture Deployment Guide has recently been updated with a few bug fixes and semantic corrections. Note that older guides are still available as well. Look out for the 9.0 Deployment Guide, coming soon.

Happy developing!
Hi, everyone!

In previous tech tips, we’ve introduced 9.0’s biggest feature, active-active clustering, and covered some of the main architectural components. Today, I’ll cover some other architectural configurations and the flexibility that active-active clustering provides to meet your IoT deployment needs.

Deployment Flexibility

With active-active clustering, ThingWorx allows you to achieve the highest availability of your IIoT system while still retaining the high degree of deployment flexibility that ThingWorx is known for.

It’s important to emphasize flexibility not only in where you deploy ThingWorx, but also in how you deploy it. Flexibility is embedded in the ThingWorx architecture to suit your deployment needs. Let’s look at two interesting options.

Cost-Efficient Deployment

The first example is deploying ThingWorx in a cost-efficient architecture for smaller or cost-sensitive deployments. Instead of each component running on its own separate VM or box, the ThingWorx Connection Server, ThingWorx Foundation Server, ZooKeeper, and Ignite are all installed on the same box.

With three boxes in parallel, the system still achieves high availability while reducing deployment costs by minimizing the number of servers utilized. A minimum of three servers is needed to achieve the odd number of instances required by ZooKeeper while still providing redundancy.

In order to optimize for cost while still providing high availability, there are a few factors to consider with this deployment. In this configuration, Ignite runs within the same JVM as ThingWorx Foundation rather than on a different server. While this does reduce the overall footprint, Ignite will consume additional memory, sharing resources with the ThingWorx Foundation platform. And since the ThingWorx Connection Server runs on the same VM as the ThingWorx Foundation Server, this introduces socket limitations, which can restrict the maximum number of connected devices.

In short, the example above illustrates how, while being in an active-active clustered environment, you still retain the deployment flexibility to optimize for your deployment needs. In this scenario, the architecture is optimized for a leaner, more cost-effective deployment at the expense of number of connections and memory.

Now, let’s walk through some common use cases where ThingWorx is deployed on premise, on VMs or bare metal, to support use cases that require a balance of availability and scalability at an affordable cost. A few examples include:
- On-premise deployment of ThingWorx Foundation to support a dedicated factory production line on a plant floor to improve its connected manufacturing operations.
- VM-based on-site deployment of ThingWorx Foundation to support IoT applications for a Smart Building scenario.
- Smart Ship deployments managing key ship operations, where ThingWorx is deployed on a ship and sends data intermittently to private data centers running ThingWorx on shore through a satellite network, leveraging ThingWorx federation technology.
- A failover-proof redundant setup required to capture key service metrics for medical equipment for predictive failures and maintenance, where ThingWorx is deployed on hospital premises.

OEM Deployment with Common Components

The second example covers how active-active clustering architectural components can be shared across multiple clusters to reduce the cost and complexity of deployment. This can include scenarios where a large deployment spans multiple sites, geographies, or end customers.
In this deployment, there are two ThingWorx clusters that share a common ZooKeeper cluster and database layer. Within each cluster, both the Connection Server and ThingWorx Foundation (with Ignite running embedded) are deployed on a VM.

For the shared components, proper separation measures must be taken to ensure security. For ZooKeeper, separate namespaces and authentication from the clients are required. For the shared database layer, each cluster must have a unique schema, a separate shared file system, and separate user credentials with database-level isolation. Please note that with a large database shared across tenants, rather than a dedicated database for each tenant of an HA cluster, there is a risk that issues caused by one tenant impact the performance and availability of the others.

This architecture is optimized for a larger deployment comprised of multiple ThingWorx clusters requiring separation between the clusters, while still allowing the provider to share a common infrastructure for cost reduction.

Again, let’s walk through a few common use cases, including:
- A ThingWorx OEM hosting multiple ThingWorx Foundation instances in their managed data center or public cloud to offer applications with further customization abilities to their end users. In this case, an OEM vendor can offer multiple ThingWorx clusters in active-active cluster mode with shared pieces of infrastructure across multiple tenants.
- A ThingWorx enterprise customer looking to build and host different ThingWorx-based applications in their IT data center, where each application is powered by an individual active-active cluster satisfying specific digital transformation use cases across the enterprise value chain, including those related to engineering, product, sales, marketing, supply chain, partners, service, and support.
- A ThingWorx factory customer looking to host multiple ThingWorx applications from their main regional factory IT data center to cater to the IoT needs of different factories located in surrounding geographic regions. In this case, a customer could dedicate one HA cluster to each factory to enable that factory/site to power multiple IoT applications efficiently.

These two configurations are some of the lesser known—yet equally powerful—deployments of this new architecture. Again, flexibility of our deployment architecture is one of the key superpowers of the ThingWorx platform. And the active-active fun doesn’t stop there! Be on the lookout for an upcoming LiveWorx session titled Active-Active Clustering with ThingWorx 9.0 (Session ID: IP1117B), future tech tips like this one, and, of course, the GA release of ThingWorx 9.0 (targeted for June 2020), where you can take advantage of this new functionality!

Let us know what you think in the comments!

Stay connected,
Kaya
Here is a spreadsheet that I created which helps to estimate data transfer volumes, for the purpose of estimating egress costs when transferring data out of region. You’ll find a number of input parameters, like numbers of assets, properties, file sizes, and compression ratio, as well as a page with the cost elements which can be updated from the Interweb.
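If you just want the shape of the calculation before opening the spreadsheet, it boils down to a few multiplications. Every input below is a placeholder to replace with your own values:

```python
# Back-of-envelope egress volume estimate -- placeholder inputs only.
assets = 5_000
props_per_asset = 25
bytes_per_update = 200         # serialized size of one property update
updates_per_hour = 60
file_mb_per_asset_day = 1.0    # file transfers per asset per day
compression_ratio = 0.4        # fraction remaining after compression

prop_gb = assets * props_per_asset * bytes_per_update * updates_per_hour * 24 * 30 / 1e9
file_gb = assets * file_mb_per_asset_day * 30 / 1e3
total_gb = (prop_gb + file_gb) * compression_ratio
print(f"~{total_gb:,.0f} GB/month; multiply by your provider's $/GB egress rate")
```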
Hi everyone,

In anticipation of ThingWorx 9.0’s biggest feature, active-active clustering, we’d like to provide an architectural overview of a sample active-active configuration and its underlying components. If you haven’t already seen it, we invite you to read our previous Community tech tip, where we introduce the concept of active-active clustering for ThingWorx Foundation, which enables you to:
- significantly reduce unplanned downtime for your mission-critical services and apps
- support horizontal scalability of the ThingWorx Server, where you can scale your services up and down based on your requirements
- easily run, package, deploy, and operate advanced apps and services with the help of intuitive browser-based navigation, interactive monitoring and debugging tools, and more
- deploy anywhere: public cloud, private datacenter, on-premise, hybrid, or even locally on your laptop, with deep optimizations for Azure

Now, before we go too deep, we’d like to let you know that you can continue to seamlessly upgrade from previous ThingWorx releases to the upcoming ThingWorx 9.0 releases. Previously, you could deploy ThingWorx Foundation in “single server” mode and, for high availability, in “active-passive cluster” mode (see here for details). From the ThingWorx 9.0 release onwards, you’ll be able to continue to deploy ThingWorx Foundation in “single server” mode and, for high availability scenarios, in our new “active-active cluster” mode. Please note that the active-passive clustering configuration will no longer be supported in ThingWorx 9.0 or onwards.

Let’s start with a quick recap of how ThingWorx Foundation 9.0 looks in a single server deployment.

ThingWorx 9.0 Deployed in a “Single-Server” Architecture

Below is a high-level diagram depicting the main architectural layers and components of ThingWorx Foundation deployed in single server mode.

Deployment Components

Below is a brief summary of the major architectural components and their purpose in the deployment architecture.

The Client Layer

This layer is comprised of everything that connects with, sends data to, and receives content from the ThingWorx platform. It can be broken down into two groups:
- Devices/Things: Things, devices, agents, and other assets.
- Users/Clients: People and the respective products (primarily web browsers) they use to access ThingWorx.

The Application Layer

This layer is where ThingWorx Foundation and other applications deployed with ThingWorx Foundation reside, such as ThingWorx Analytics, ThingWorx Connection Server, ThingWorx Azure IoT Hub Connector, and others. This layer provides connectivity to the client layer, performs authentication and authorization checks, ingests/processes/analyzes content, and reacts to conditions by sending alerts. For a specific ThingWorx Foundation deployment that needs basic device data ingestion, processing, and storage, you can set up only the ThingWorx Foundation server. In some cases, with large numbers of device connections, you may want to set up the ThingWorx Connection Server alongside ThingWorx Foundation for further scalable connectivity.
- ThingWorx Foundation: ThingWorx Foundation is a Java-based application that serves as a rapid, model-based application development platform.
- Shared File Storage: Shared disk space to contain ThingWorx Storage repositories and to store and archive log files accessed by all ThingWorx Foundation servers.
A NAS file storage, AWS Elastic File System, Azure Files, or similar could be used for this purpose.

The Data Layer

ThingWorx Foundation includes several persistence provider implementations that enable you to choose a database option that best fits your use case. A persistence provider enables the connection to a data store and the ability to perform CRUD operations on that data. See here for more information. Currently, there are two basic variations of persistence providers:
- Model Provider: Responsible for ThingWorx model metadata and system data.
- Data Provider: Responsible for runtime data ingested against the model elements, including streams, value streams, data tables, etc.

ThingWorx supports H2 (in-memory database), PostgreSQL, MS SQL Server, and Azure SQL as both model and data providers, and InfluxDB as a data provider only. Please see here for model and data best practices.

ThingWorx 9.0 Deployed in an Active-Active Clustering Reference Architecture

Below is a reference architecture diagram for ThingWorx 9.0 with multiple ThingWorx Foundation servers configured in an active-active cluster deployment. Please note that this is only one reference example of how ThingWorx 9.0 can be deployed in an active-active clustered environment. There could be other architectural configurations depending upon the needs of the specific deployment.

Deployment Components

Once you have developed an understanding of the basic architectural components in single server mode, below are the additional components required to run ThingWorx in active-active cluster mode.

The Client Layer

This is similar to what has been described in the single server configuration above.

The Application Layer

In this layer, if you’re familiar with the ThingWorx active-passive cluster configuration, you may be aware of most of the components used below, with the exception of a new component, Apache Ignite, which provides distributed caching for the horizontally scalable ThingWorx Foundation servers.
- Load Balancer: A third-party device that receives network traffic and distributes requests among available servers. In an active-active cluster configuration, the load balancer is used to direct WebSocket-based traffic to the ThingWorx Connection Servers, while user request (HTTP/HTTPS) traffic is distributed directly to the ThingWorx Foundation servers. Users can continue to use the load balancer with ThingWorx 9.0 that they might already be using for their existing active-passive or single server deployments with ThingWorx 8.X or previous releases. Some example load balancers include, but are not limited to: HAProxy, Azure Application Gateway, and AWS Application Load Balancer.
- ThingWorx Connection Services: These services handle all message routing to and from devices, providing scalable connectivity to the ThingWorx Foundation server. With the ThingWorx 9.0 release, ThingWorx Connection Services have been upgraded with many additional features to support active-active clustering of the ThingWorx Foundation servers; they now route all WebSocket traffic in round-robin fashion to the connected ThingWorx Foundation servers. Depending upon the use case, one could use the various ThingWorx Connection Services available, such as the ThingWorx Connection Server, ThingWorx Azure IoT Hub Connector, and ThingWorx Protocol Adapter Toolkit. Please see here for further details.
Please note that for the ThingWorx 9.0 releases, the ThingWorx Connection Server is required in an active-active configuration to support all WebSocket-based traffic routing, including egress of files and device messages from multiple ThingWorx Foundation servers back to the devices; it also serves the WebSocket communication from ThingWorx Mashup-based applications to ThingWorx Composer.
- ThingWorx Foundation: From ThingWorx 9.0 onwards, you can set up ThingWorx Foundation servers in an N-node active-active cluster model to provide higher availability for your applications and horizontally scale the Foundation server nodes up and down based on your scalability needs.
- Apache ZooKeeper: Apache ZooKeeper is a centralized service for maintaining configuration information and naming, as well as providing distributed synchronization and group services. It is a coordination service for distributed applications that enables synchronization across a cluster. Specific to ThingWorx, ZooKeeper is used for distributed locking, selecting a singleton server during server initialization, and service discovery for Apache Ignite, allowing it to find instances of the ThingWorx Foundation servers.
- Apache Ignite: This offers a distributed cache for the active-active cluster setup. It is used by the ThingWorx Foundation servers to share state. It may be embedded with each ThingWorx instance or run as a standalone cluster for larger scale. In this configuration, Ignite is set up as a standalone cluster, but it can also run embedded within ThingWorx Foundation. Running Ignite as a standalone cluster is more suitable for larger scale, as it supports higher vertical scale of memory in the deployment setup.

The Data Layer

ThingWorx is largely database agnostic. You can continue to use the officially supported persistence providers that you may already be using in your existing deployments based on ThingWorx 8.X or previous releases. Please look out for an upcoming ThingWorx update, as well as enhanced installation documents, to help with your upgrade and migration questions when ThingWorx 9.0 becomes generally available. Please note that this diagram does not make the distinction between model and data providers; depending on your data ingestion needs, separate model and data providers can be used. As a reminder, all databases should be deployed in a high-availability configuration to help eliminate any single point of failure.

In closing, we can’t wait to launch active-active clustering in 9.0 to help you:
- dramatically reduce application downtime
- scale your deployments and more efficiently manage your apps, regardless of where they’re deployed

If you have any questions about active-active clustering or its architecture, please do not hesitate to reach out!

Stay connected!
Kaya
I created this recently for another group; it might be useful.

Video link: Create a Database Connection to your Postgres ThingWorx Database
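If you’d rather sanity-check the connection from outside ThingWorx first, a direct client connection works too. This is a generic PostgreSQL connection sketch; the host, database name, and credentials are placeholders for whatever your installation uses:

```python
import psycopg2  # generic PostgreSQL client, independent of ThingWorx

conn = psycopg2.connect(
    host="twx-db.example.com",  # placeholder connection details
    dbname="thingworx",         # placeholder; verify your install's DB name
    user="twadmin",
    password="<password>",
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT version();")  # harmless connectivity check
    print(cur.fetchone()[0])
```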
When using the Auto-bind section of an EMS configuration, it is very important to note the difference between "gateway":true and "gateway":false. Either gateway value, when used with a valid "name" field, will result in the EMS attempting to bind the Thing with the ThingWorx platform and will allow the EMS to respond to file transfer and tunnel services related to the auto-bound things, but this is about where the similarities end.

Non-Gateway: This type of auto-bound thing can be thought of as a placeholder, because the EMS will still require a LuaScriptResource to be bound in order to respond to property/service/event related messages. There must be a corresponding Thing based on the RemoteThing template (or any RemoteThing-derived template, e.g. RemoteThingWithFileTransfer) on the ThingWorx server in order for the bind to succeed. There are many reasons to use this type of auto-bound thing, but the most common is to bind a simple thing that can facilitate file transfer and tunnel services but does not need any custom services, properties, or events that would be provided by custom Lua scripts within the LuaScriptResource.

Gateway: An auto-bound gateway can be bound to the ThingWorx platform ephemerally if there is no Thing present to bind with on the platform. To clarify: if no Thing exists with the matching Thing name on the platform and the EMS is attempting to bind a gateway, a Thing will be automatically created on the platform to bind with the auto-bound gateway. This newly created ephemeral thing will only be accessible through the ThingWorx REST API, and once the EMS unbinds the gateway, the ephemeral thing will be deleted. If there is a pre-existing Thing on the ThingWorx server, then it must be based off of the EMSGateway template in order for the bind to succeed. The EMSGateway template, used both normally and ephemerally, provides some gateway-specific services that would otherwise be inaccessible to a normal remote thing. See the EMSGateway Class Documentation for more details.
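For reference, here is roughly what the two variants look like side by side in the auto-bind section of an EMS config.json. The surrounding settings are omitted and the Thing names are placeholders; only the "name" and "gateway" fields are taken from the behavior described above:

```json
{
  "auto_bind": [
    { "name": "SimplePlaceholderThing", "gateway": false },
    { "name": "MyEdgeGateway", "gateway": true }
  ]
}
```

In the first entry, a RemoteThing-based Thing named SimplePlaceholderThing must already exist on the platform; in the second, an EMSGateway-based Thing is used if one exists, or an ephemeral Thing is created and later deleted when the EMS unbinds.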
Still not sure what the reference benchmarks have to offer? Check out this short video abstract reviewing the purpose of the reference benchmarks, some notes on how to read the guide, and information about what these guides will have to offer. Then, check out the reference benchmark for remote monitoring here!
The App URI in the ThingWorx Remote Thing tunnel configuration specifies the endpoint of the specified tunnel. The default value (/Thingworx/tunnel/vnc.jsp) points to the built-in ThingWorx VNC client that can be downloaded through the Remote Access Widget in a Mashup to provide VNC remote desktop access. Leaving the App URI blank will result in the tunnel being connected to the listen port on the user’s machine, as specified in the Remote Access Widget. In this case, the user must supply the application client (e.g. an SSH client) in order to connect to the tunnel endpoint.
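For example, if the App URI is left blank and the Remote Access Widget is listening locally on port 10222 (an arbitrary example port), an SSH session over the tunnel would be started with the user’s own client:

```
ssh -p 10222 deviceuser@127.0.0.1
```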
Hi, everyone!

We’re actively working towards the ThingWorx 9.0 release, and we’re ready to provide a sneak peek into the biggest feature of 9.0: Active-Active Clustering for High Availability configuration.

You may be wondering: doesn’t ThingWorx already offer High Availability?

Yes, ThingWorx already supports a High Availability configuration. Previous versions of ThingWorx, such as the ThingWorx 8.X releases, support an Active-Passive configuration, where one “active” ThingWorx server performs all processing and maintains the live connections to other systems such as databases and connected assets. Meanwhile, in parallel, there is a second “passive” ThingWorx server that is a mirror image and regularly updated with data but does not maintain active connections to any of the other systems. If the “active” ThingWorx server fails, the “passive” ThingWorx server is made the primary server, but it can take a few minutes for it to establish connections to the other systems.

So, how is ThingWorx Active-Active different?

Active-Active configuration differs from Active-Passive in that all the ThingWorx servers in the cluster are “active.” Not only is data mirrored across all ThingWorx servers, but all of the servers, instead of only one, maintain live connections with the other systems. This way, if any of the ThingWorx servers fail, the others take over instantaneously with no recovery time.

Since all ThingWorx servers are active, they process in parallel and, as a result, the cluster can handle more data than a single server or an Active-Passive cluster. Simply put, multiple servers working together outperform a single server. This allows customers to scale their deployment by simply adding more ThingWorx servers to the cluster (horizontally scaling), which does not have the same limitations of scale that come with increasing the performance of the server itself.

What does that mean for me?
- Higher Availability: You can avoid single points of failure and configure the ThingWorx Foundation platform in Active-Active cluster mode to achieve the highest availability for your IIoT systems and applications.
- Increased Scalability: Now, you can horizontally scale from one to many ThingWorx servers to easily manage large amounts of your IIoT data at scale more smoothly than ever before.

Stay tuned! We’ll be posting more information on Active-Active Clustering: how it’s achieved in ThingWorx, architectural component overviews, and what it means for your ThingWorx deployment!

In the meantime, we’re running the Active-Active Clustering Beta Program. Interested in participating? Reach out to Ryan Servais (rservais@ptc.com) or Ayush Tiwari (atiwari@ptc.com) to learn more!

Stay connected!
Kaya
Developing with Axeda Artisan (for Axeda Platform, v6.8 and later)

Axeda Artisan is a development tool based on the Apache Maven build system. Artisan includes authoring, management, and deployment mechanisms that define and manage the development of Axeda Platform extension points, allowing you to flexibly install, update, and uninstall many types of objects on the Axeda Platform.

The Components of Artisan

The Artisan framework is comprised of several components:
- The Axeda Platform: Each instance of the Axeda Platform contains libraries and sample Artisan Projects (archetypes) that are downloaded by developers to author and deploy their own Artisan Projects.
- The Artisan Project: An Artisan Project is a collection of content and configuration files that are used to define a set of objects that will be installed on the Axeda Platform.
- The Artisan Installer: The Artisan Installer is a flexible, configurable application that uses the Apache Maven build life-cycle to upload the contents of an Artisan Project to the Axeda Platform.
- Axeda Platform APIs: The Artisan Installer interacts with the Axeda Platform and uses Axeda APIs to install, upgrade, or uninstall Artisan Projects.

How You Work With Artisan

The goal of using Artisan is to develop an Artisan Project that will become an object or application that runs on the Axeda Platform. There are several phases involved in creating an Artisan Project:
- Preparing Your Development Environment: When first working with Artisan, you need to prepare your development environment by installing or configuring Java, Apache Maven, and any other required development tools.
- Creating an Artisan Project: Artisan uses archetypes as the basis for Artisan Projects. Archetypes are used to generate an Artisan Project that contains the correct type of development framework, which you then modify to create your own Artisan Project. The Hello World archetype provides a sample project you can use to create common objects on the Axeda Platform; the Machine Streams archetype provides a sample project you can use to configure machine streams for your Axeda Platform.
- Installing an Artisan Project on the Axeda Platform: The Artisan Installer packages the contents of the Artisan Project and deploys them on the Axeda Platform.
- Testing the Artisan Project: Once installed, you can test your Artisan Project to verify its functionality and identify any issues that need to be addressed. It is common for an Artisan Project to be installed on the Axeda Platform several times as issues are identified and corrected during development.
- Creating a Stand-alone Installer: Once ready for deployment to production, the Artisan Installer is used to create a stand-alone installer for distributing the project contents to other instances of the Axeda Platform.

Where to Go from Here

To begin working with the Artisan framework to create your own projects, refer to the following:
- Artisan Landing Page: A page hosted on the Axeda Platform that introduces Artisan and serves as a starting point for creating an Artisan Project. The Artisan Landing Page can be found at the following location:

  http://instance_name:port/artisan

  where instance_name:port is the IP address and port of an instance of the Axeda Platform.
- Axeda® Artisan Developer’s Guide: A reference that introduces Artisan, describes how to prepare your development environment, and provides detailed technical information concerning creating an Artisan Project and configuring the Artisan Installer.
The Axeda® Artisan Developer’s Guide is available with all Axeda product documentation from the PTC Support site, http://support.ptc.com/
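Because the Artisan Installer rides on the standard Maven build life-cycle, deploying a project is, at a high level, an ordinary Maven run from the project root. The exact goals, profiles, and platform credential configuration are covered in the Developer’s Guide; this is only the general shape:

```
mvn clean package   # assemble the Artisan Project contents
mvn install         # invoke the Artisan Installer to upload them to the platform
```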