IoT Tips

Question: What is the best way to use Git with ThingWorx?

Disclaimer: Please note that, while the ThingWorx Git Backup Extension is a very useful tool, it is not a PTC product, nor is it supported by PTC.

After the release of ThingWorx 8.4 two weeks ago, are you looking for even more? Can’t get enough of ThingWorx? Good thing—because we’ve got you covered.

We have just released Version 2.0 of the ThingWorx Git Backup Extension! Reach out if you'd like to learn how to obtain access to it.

In the newest version, you’ll find:
- Major UX improvements and UI restyling. The extension now includes a new page called GitBackup.Main.Mashup, which offers access to all the functionality previously available in the Home Mashup. GitBackup.Main.Mashup is now the single interface for all the GitBackupThings in the system; you’ll no longer need to go to Composer to manage them individually.
- Support for querying and selecting the Bitbucket repositories that you, as a user, have access to.
- An updated ExtensionExportExtension with bugfixes.

If you’re looking for guidance on how to configure Git with ThingWorx, check out one of my earlier posts that explains how you can use Git to achieve continuous integration with ThingWorx, or view the updated Git Backup Extension User Guide attached (see the “Attachments” section to the right).

Shoutouts to Vladimir, Gabriel, Bogdan, Moritz and Pierre for making this available.

Let me know what you think of Version 2.0 in the comments below!

The open-source Git Backup Extension can be found here.

Stay connected,
Kaya
We will host a live Expert Session: ThingWorx Navigate Component Based App Development on Wednesday 09/30 at 08:00 AM Eastern Daylight Time.

Please find below the description of the expert session as well as the link to register.

Expert Session: ThingWorx Navigate Component Based App Development
Date and Time: Wednesday 09/30, 08:00 AM Eastern Daylight Time
Duration: 1 hour
Host: Pratibha Bhatnagar
Description: Following the series of new capabilities released with Navigate 9.0, this session will focus on the details of Navigate Component Based app development and how to leverage it for your use cases.

Existing recorded sessions can be found on the support portal using the keyword ‘Expert Sessions’.

You can also suggest topics for upcoming sessions using this small form.
ThingWorx 8.4 is here!   We know you’ve been patient, as we’ve released sneak peeks on Ask Kaya of various new or updated features, including: InfluxDB as New Time Series Data Persistence Provider Responsive Mashup Layout with New Layout Editor ThingPresence to Address Assets that Always Appear Offline Functions to Allow Expression & Validator Widgets to No Longer Crowd Canvases at Design Time Property Transforms to Do Statistical Transforms for Property Values No longer are you forced to sit idly as we give you glimpses of the new functionality without the ability to play with it. Now that it’s available, go run with the wind!   To discover even more features and details, check out the release notes.   ThingWorx 8.4 can be downloaded here.   Let us know what you think of the new release below!   - Kaya
Hello!

We will host a live Expert Session: "Understanding ThingWorx Navigate Licensing" on February 11th at 10:00 AM EST.

Please find below the description of the expert session and the registration link.

Expert Session: Understanding ThingWorx Navigate Licensing
Date and Time: February 11th, 10:00 AM EST
Duration: 1 hour
Host: Christoph Braeuchle, Emily Larkin and Steve Scheib - ThingWorx Navigate PM team
Registration Here: https://www.ptc.com/en/resources/plm/webcast/understanding-thingworx-navigate-licensing
Description: ThingWorx Navigate licensing gives many users a way to access PLM data and functionality at an attractive price when they don’t need the full power of Windchill functionality. The licensing and packaging have changed over the past 1.5 years, so this is the perfect time to share an update on the available license types and answer essential questions like:
- Which license types do my end users really need?
- What capabilities are provided by each license type?
- What are the best ways to understand and control license usage in my company?
Don’t miss this session if you want to understand how ThingWorx Navigate licensing works and which options are available.

Existing recorded sessions can be found on the support portal using the keyword ‘Expert Sessions’. You can also suggest topics for upcoming sessions using this small form.

Here are some recorded sessions that might be of interest. You can find recordings for the full library of webinars by searching the PTC support portal for the keyword ‘Expert Sessions’.

Navigate - SSL & Windchill Authentication: This Expert Session will take you through a step-by-step approach for configuring authentication between Windchill and Navigate with SSL. Recording Link

Top 5 Items to Check for ThingWorx Performance Troubleshooting: How do you troubleshoot performance issues in a ThingWorx environment? Here we cover the top five investigation steps that will help you understand the source of your environment issues and allow better communication with PTC Technical Support. Recording Link

ThingWorx 9.0 Component Based App Development: Following the series of new capabilities released with Navigate 9.0, this session focuses on the details of Navigate Component Based app development and how to leverage it for your use cases. Recording Link
We will host a live Expert Session: "Windchill & ThingWorx Navigate Authentication" on November 10th at 10:30 AM EST.

Please find below the description of the expert session and the registration link.

Expert Session: Windchill & ThingWorx Navigate Authentication
Date and Time: Tuesday, November 10th, 2020, 10:30 AM EST
Duration: 1 hour
Host: Arshad Imam, PLM Product Technology Lead
Description: This Expert Session will take you through a step-by-step approach for configuring authentication between Windchill and Navigate with SSL. Plus, you can take advantage of a unique opportunity to ask questions in a live Q&A following the presentation.

Register here

Existing recorded sessions can be found on the support portal using the keyword ‘Expert Sessions’. You can also suggest topics for upcoming sessions using this small form.

Here are some recorded sessions that might be of interest. You can find recordings for the full library of webinars by searching the PTC support portal for the keyword ‘Expert Sessions’.

Navigate 9.0 – What’s New?: This session is the intro to a series that will cover the new capabilities of the recent Navigate 9 release and the value that each can bring to your implementation. Further sessions will cover the details of some of them. Recording Link

Top 5 Items to Check for ThingWorx Performance Troubleshooting: How do you troubleshoot performance issues in a ThingWorx environment? Here we cover the top five investigation steps that will help you understand the source of your environment issues and allow better communication with PTC Technical Support. Recording Link

ThingWorx 9.0 Component Based App Development: Following the series of new capabilities released with Navigate 9.0, this session focuses on the details of Navigate Component Based app development and how to leverage it for your use cases. Recording Link

ThingWorx Navigate 3D Viewer: Following the series of new capabilities released with Navigate 9.0, this session focuses on the details of the Navigate 3D Viewer and how to leverage it for your use cases. Recording Link
  Hi everyone,   Ever feel like your hands are too full? Are you juggling your cup of coffee in one hand and your tablet in another so that you can read Ask Kaya on the go?   Problem solved.   Today, we’re introducing ThingWorx On Air—the Ask Kaya developer-focused podcast designed to take the complexity out of building IIoT solutions.   Listen to our first episode here or search “ThingWorx on Air” on iTunes.   In Episode 01, we introduce Operator Advisor, a brand-new PTC manufacturing solution that helps you accelerate your development of IIoT applications for workers on the shop floor. Learn how you can use it to quickly build solutions that provide greater visibility of equipment statuses across your factory to improve workforce efficiency. I hope you enjoy!   Be sure to tune into Episode 02 where we’ll share the “Wowza Widget of the Week.”   Stay connected, Kaya   P.S. If you have any questions you’d like answered in our next episode, comment below!
We will host a live Expert Session: "ThingWorx Navigate 3D Viewer" on October 9th at 11:00 AM EST.

Please find below the description of the expert session and the registration link.

Expert Session: ThingWorx Navigate 3D Viewer
Date and Time: Friday, October 9th, 2020, 11:00 AM EST
Duration: 1 hour
Host: Robbie Morrison, Product Management Senior Manager
Description: Following the series of new capabilities released with Navigate 9.0, this session will focus on the details of the Navigate 3D Viewer and how to leverage it for your use cases.

Register here

Existing recorded sessions can be found on the support portal using the keyword ‘Expert Sessions’. You can also suggest topics for upcoming sessions using this small form.

Here are some recorded sessions that might be of interest. You can find recordings for the full library of webinars by searching the PTC support portal for the keyword ‘Expert Sessions’.

Navigate 9.0 – What’s New?: This session is the intro to a series that will cover the new capabilities of the recent Navigate 9 release and the value that each can bring to your implementation. Further sessions will cover the details of some of them. Recording Link

Top 5 Items to Check for ThingWorx Performance Troubleshooting: How do you troubleshoot performance issues in a ThingWorx environment? Here we cover the top five investigation steps that will help you understand the source of your environment issues and allow better communication with PTC Technical Support. Recording Link

ThingWorx 9.0 Component Based App Development: Following the series of new capabilities released with Navigate 9.0, this session focuses on the details of Navigate Component Based app development and how to leverage it for your use cases. Recording Link
  Hi everyone,   In case you’re looking for more reasons to appreciate the power of Azure, today we’re answering 10 frequently asked questions around how and why to use Azure SQL with ThingWorx.   You likely already know that we support multiple persistence providers, like Azure SQL, InfluxDB, H2, MSSQL and PostgreSQL, for you to store and persist your ThingWorx data. Here’s an up-close-and-personal look into why we recommend Azure SQL.   1. What is Azure SQL?         Azure SQL is a relational database hosted in the Azure cloud and is a fully managed Platform as a Service (PaaS) Database Engine. Azure SQL Database engine is based on the Enterprise Edition of SQL Server. The Azure platform fully manages every Azure SQL Database with a high percentage of data availability and guarantees no data loss. Azure SQL Database comes with built-in high availability, disaster recovery, and upgrade for the database. Refer to Microsoft's Azure SQL Database - Platform as a Service documentation for more information on Azure SQL Database and its features.   2. What are the top 3 reasons to use Azure SQL with ThingWorx?   Ease of Use and Management: Azure SQL greatly reduces the need to manage database resources for ThingWorx. It helps to reduce your total cost of ownership for managing database resources for ThingWorx by managing virtual machines, operating system, database software, upgrades, high availability, and backups for you, so you can focus on building your IoT solution. It provides unmatched scale and high availability for compute and storage without sacrificing performance. With Azure SQL, you can scale your application on demand with up to 99.95% availability.   Hybrid Deployments: ThingWorx supports multiple persistence providers to store IoT data for different use cases. Please refer to the ThingWorx Model and Data Best Practices Guide to learn more. If you’re already using Microsoft SQL Server with ThingWorx on premise, then you can use Azure SQL for your cloud deployments of ThingWorx-based IIoT solutions in hybrid scenarios. This allows you to reduce development time—develop once and deploy anywhere through a common programming surface area across Azure SQL (on cloud) and SQL Server (on premise). You can leverage ThingWorx federation to run ThingWorx in different deployment topologies.   If needed, you can also accelerate your on-premise SQL Server migrations without changing the application code by leveraging Managed Instance. Use the Azure Hybrid Benefit Savings Calculator to calculate your TCO. Enjoy additional deployment flexibility with Single Database for SQL applications created in the cloud or Elastic Pool for multi-tenant applications.   Security and Compliance: Azure SQL Database meets the most stringent compliance standards with built-in auditing and information protection technology. With its availability in different regions, its best suited for Government cloud and sovereign cloud. Please see this link to check for the latest update on Azure product availability by region. You can also get multi-layered security provided by Microsoft across physical datacenters, infrastructure, and operations and will always have the latest SQL Server capabilities in the cloud, with no patching or upgrading. It also offers protection to your databases from malicious acts with fine-grained access controls, Always Encrypted technology, and advanced threat protection capabilities.   3. How do I configure ThingWorx for Azure SQL? 
From ThingWorx Foundation platform version 8.4 release onwards, ThingWorx provides you an option to choose Azure SQL as a persistence provider to store your value stream, stream, and data table data. This Help Center provides all the details and steps to help you set up Azure SQL with ThingWorx.    You can run ThingWorx with Azure SQL either by downloading the ThingWorx Azure SQL .WAR file or by running it as containerized ThingWorx Docker images by downloading ThingWorx Dockerfiles. For reference, see the below image to help you download ThingWorx 8.4 artifacts.   Here’s a video demonstrating how to install ThingWorx. (view in My Videos)   Here’s a second video that walks you through configuring ThingWorx with Azure SQL. (view in My Videos)   4. Which versions of Azure SQL does ThingWorx support? Consult the latest system requirements guide here to learn which versions of Azure SQL ThingWorx supports.   5. What database deployment options do I have? In Azure, you can have your SQL Server workloads running in a hosted infrastructure (IaaS) or running as a hosted service (PaaS). Within PaaS, you have multiple deployment options and service tiers within each deployment option, such as Single Database, Elastic Pool sets, and managed Instance. ThingWorx supports all the deployment options to setup Azure SQL as a persistence provider. You can refer to this link on Azure SQL Database versus SQL Server to help you choose an option that works best for your business needs.   6. Why would I want to use an PaaS database? Service tools and built-in features enable a more streamlined and automated means of controlling and operating your database. The need for constant manual control and tweaking of information, recovery tools, compliance and updates is now configured and built into Azure SQL for a more hands-off approach to your storage database. Here is a table to inform you on how Azure SQL PaaS helps.   7. Which features are new to Azure SQL 2019? Azure SQL now offers Always Encrypted data transfer through TLS and auto-failover for managed instance deployment to enable transparent and coordinated failover of multiple databases. Azure SQL also offers a data migration assistant, which detects compatibility issues that can impact functionality when upgrading your database. For more information on features and functionality, see Microsoft SQL documentation or Azure SQL’s latest release notes.   8. Is there any guidance available to help me migrate to Azure SQL? Yes! Microsoft’s Database Migration Service enables seamless migration to Managed Instance with downtime measured in minutes. The process is highly automated and risk-free while streamlining the transition of SQL Server and on-Microsoft database systems such as Oracle to Azure SQL Database. You can learn more about upgrading to Azure SQL here.   9. What purchasing models are available to me?   vCore based (recommended) - For customers that prioritize flexibility and control, this model offers scaling of compute, storage, and I/O resources independently to optimize price based on need. The customer chooses the hardware and service tier based on high-availability design, storage type, fault-isolation methods, and I/O ranges.   DTU based - Three distinct available tiers are differentiated based directly upon compute, memory, and I/O resources. This model bundles the measures together for customers who want pre-configured or simplified resource options. You can refer to more pricing and purchase options here.   10. 
What should I do if I need technical support? If you select Azure SQL as your persistence provider, then all support requests related to configuring Azure SQL can be logged through PTC Technical Support at https://support.ptc.com or by calling 1-800-477-6435.   You may also want to use the PTC Community to learn and collaborate with the growing PTC developer community. For all other requests related to database management, troubleshooting, monitoring, and administration, we encourage you to reach out to Microsoft directly.   Let me know what you think in the comments below.   Stay connected, Kaya
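As a small, hedged illustration of what "storing your value stream, stream, and data table data in Azure SQL" means in day-to-day scripting, here is a minimal ThingWorx service-script sketch that writes to and reads from a Stream backed by the Azure SQL persistence provider described above. The Stream name, fields, and parameter values are illustrative assumptions, not product documentation, so treat this as a sketch rather than a definitive implementation.

    // Hypothetical Stream Thing "ProductionEventsStream" created against the Azure SQL
    // persistence provider; entity, field and parameter names below are illustrative.
    var values = Things["ProductionEventsStream"].CreateValues(); // empty InfoTable matching the Stream's Data Shape
    values.AddRow({ line: "Line-2", status: "RUNNING", throughput: 118 });

    Things["ProductionEventsStream"].AddStreamEntry({
        values: values,
        source: me.name,
        sourceType: "Thing",
        timestamp: new Date()
    });

    // Read the most recent entries back; the rows come out of Azure SQL transparently.
    var result = Things["ProductionEventsStream"].QueryStreamEntriesWithData({
        maxItems: 100,
        oldestFirst: false
    });

Because the persistence provider is configured on the Stream (or value stream) entity rather than in the script, none of this code changes if you later point the same entities at a different supported database, which is what makes the hybrid SQL Server/Azure SQL scenario above practical.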
Disclaimer: Please note that, while the ThingWorx Git Backup Extension is a very useful tool, it is not a PTC product, nor is it supported by PTC.

Hi ThingWorx users,

Trying to manage your ThingWorx application artifacts in a CI process? Wondering who changed that line of code in your Thing Service? Trying to see what your Mashup looked like last release? Time to Git excited! Introducing the Git Backup Extension, an open-source tool available here to offer a stronger integration with the Git source repository. This Git feature can push or pull code and artifacts (like entities, data exports or extension dependencies) to your Git repository.

Here are some highlights of how this works within ThingWorx:

First, configure your Git repo to work with ThingWorx by creating a Git Backup Thing. Then, simply open your new Thing, navigate to the Configuration editor and enter information like your Git URL, your Git username and password, your repo and branch names, etc. See the example below.

[Screenshot: Configuring your Git repo]

With this configuration in place, you can now use the Home Mashup of this new Git thing to browse the repository and pull down contents to your local ThingWorx instance. For new projects, you can also push new entities to the repo as you work on your application.

As you and your team are working, you’ll want to see the differences in the files you are editing and working on collaboratively. The Git extension makes this easy. Just as you can see diffs clearly delineated for a file with your Git client, you can see the same with this Git integration in ThingWorx. Similar to the git status command, the Git ThingWorx extension will show you the list of files you have changed that are available to push, as well as their diffs. See an example below.

[Screenshot: Checking the Git status]

While working, if you want to switch branches or pull down a new project, you can check out a specific version and see all commits available on that branch (see below).

[Screenshot: Checking out a specific commit]

Want to learn more or try it for yourself? Find the open-source Git Backup Extension here and check out the Git Backup Extension User Guide for guidance.

Stay connected,
Kaya

P.S. What do you think? Comment your thoughts below!
Hi all,

Our expert session, ThingWorx Flow Overview, is tomorrow! Click the link below to register, and remember to mention it to colleagues who might benefit from its content.

Expert Session: ThingWorx Flow Overview
Date and Time: December 10th, 8:00 AM EST
Duration: 1 hour
Host: Antony Moffa; Vinay Vaidya - ThingWorx IoT Platform Senior Directors
Registration Here: https://www.ptc.com/en/customer-success/expert-sessions-for-thingworx-foundation-webcasts

See you there!

Here are other upcoming sessions that might be of interest:

Upgrade to ThingWorx 9 – How to Plan / Evaluate Impacts: This session will highlight the key points you should evaluate to properly plan your upgrade to ThingWorx 9. Register Here

Active-Active Clustering: This session will cover the main aspects of the High Availability Clustering feature launched with the ThingWorx 9.0 release. Register Here
Hello!

We will host a live Expert Session: "Top 5 Items to Check for ThingWorx Performance Troubleshooting" on September 3rd at 09:00 AM EST.

Please find below the description of the expert session as well as the link to register.

Expert Session: Top 5 Items to Check for ThingWorx Performance Troubleshooting
Date and Time: Thursday, September 3rd, 2020, 09:00 AM EST
Duration: 1 hour
Description: How do you troubleshoot performance issues in a ThingWorx environment? Here we will cover the top five investigation steps that will help you understand the source of your environment issues and allow better communication with PTC Technical Support.
Registration: here

Existing recorded sessions can be found on the support portal using the keyword ‘Expert Sessions’. You can also suggest topics for upcoming sessions using this small form.
Hello!

We will host a live Expert Session: "What's New in Navigate 9.0" on August 18th at 01:00 PM EST. Please find below the description of the expert session as well as the link to register.

Expert Session: What's New in Navigate 9.0
Date and Time: Tuesday, August 18th, 2020, 01:00 PM EST
Duration: 1 hour
Registration link: https://www.ptc.com/en/special-event/thingworx-navigate
Description: This session is the intro to a series that will cover the new capabilities of the recent Navigate 9 release and the value that each can bring to your implementation. Further sessions will cover the details of some of them.

You can also suggest topics for upcoming sessions using this small form.
  Question: What should I know about using ThingWorx with InfluxDB to store my time series data? Hi, ThingWorx users!   It’s here! Thanks for waiting patiently since my previous post announcing ThingWorx’ new support of InfluxDB as a time series persistence provider.   As of our 8.4 release, you can now use InfluxDB to store your ThingWorx time series data with incredible power and ease.   Want to learn more? Check out the following FAQs:   1. What is InfluxDB? Who is InfluxData? InfluxDB is a time series database designed to handle high write and query loads. It is meant to be used as a backing store for any use case involving large amounts of timestamped data, like monitoring, application metrics, IoT sensor data, and real-time analytics that you’d find in ThingWorx.   InfluxDB is created by InfluxData, an awesome company that we are proud to call a PTC partner.   2. When would I want to use InfluxDB for IIoT? While the ThingWorx IIoT platform supports multiple databases to persist IIoT data and is agnostic when it comes to the storage layer, InfluxDB is the ideal choice for time series. When the number of connected devices increases, along with the amount of streaming data, the need to have a high-scale telemetry database choice is obvious.   For very high scale data ingestion, InfluxDB should be used as a persistent provider with the ThingWorx platform for multiple reasons. Its flexibility and ease of use provides native support for standard time series functions, including: sampling, interpolation, time bucketing, aggregation, selector, transformation, predictor, etc. It does all of this while supporting a high compression of data (~45x) with the ability to handle thousands of writes per second and read thousands of rows in milliseconds.   Check out this article by our Enterprise Deployment Center (EDC) explaining why InfluxDB is great for small ThingWorx applications.   3. What are the three different flavors of InfluxDB? InfluxDB Open Source (TICK Stack), InfluxDB Enterprise & InfluxDB Cloud. Here’s more info on each: InfluxDB Open Source (TICK Stack): This is the open-source version of the product available to download via the InfluxData website. Also included here are the other projects that comprise the TICK Stack, including: [T] Telegraf; open source collection agent [I] InfluxDB; open source time series database [C] Chronograf; open source visualization application [K] Kapacitor; open source streaming processing engine; side car to InfluxDB InfluxDB Enterprise: This is the commercial software version of InfluxDB for high availability clustering and the recommended time series database to be used for production with ThingWorx 8.4 and later. InfluxDB Enterprise works with the rest of the TICK stack interchangeably (Telegraf, Chronograf, Kapacitor). InfluxDB Cloud: This is the commercial service version of InfluxDB, hosted on AWS, managed by InfluxData, and delivered as a service to customers. InfluxDB Cloud works with the rest of the TICK stack interchangeably (Telegraf, Chronograf, Kapacitor). To learn more about the different modules of InfluxDB (Telegraf, Chronograf, Kapcitor), check out InfluxData Introduction for documentation or InfluxData Products for product info.   4. What is the difference between InfluxDB opensource and enterprise? InfluxDB Open Source is available in a single (1 only) data node configuration only, albeit with “n” number of vCPU or “cores” provisioned on that single node.  
InfluxDB Enterprise is available in multiple (2 or more) data node configuration, also with “n” number of vCPU or “cores” provisioned to each node. The Enterprise edition is generally preferred for production deployments that require high availability, replication, and redundancy. Provisioned along with the data nodes are three (3) meta nodes and a load balancer to distribute data workload across the multiple nodes. Typical configurations are in even increments of data nodes (i.e. 2, 4, 6, 8, etc.).   5. Where can I find the pricing overview for buying enterprise licenses for InfluxDB? The PTC product and go-to-market team have defined commercial pricing for InfluxDB Enterprise. For help with pricing, reach out to Chris Wensley (cwensley@ptc.com) and Anders Hinrichsen (anders@influxdata.com).   6. How do I configure InfluxDB with ThingWorx? We’ve outlined the steps for you in the ThingWorx Help Center and created a quick video to instruct you on how to install InfluxDB with ThingWorx. (view in My Videos) To see the current version of InfluxDB that we support, read our ThingWorx 9.0 System Requirements guide.   7. How do I configure InfluxDB and ThingWorx in a high availability scenario? With the ability to leverage multiple data stores, we work to provide the flexibility to best meet the needs of your IT preferences and investments. InfluxDB helps us do that. To configure ThingWorx for High Availability, please refer to this section of the ThingWorx Platform 9 Help Center. To configure InfluxDB for High Availability at the database level, please refer to InfluxData’s documentation on how to Install and deploy InfluxDB Enterprise clusters.   8. Where can I learn more about how to monitor and manage InfluxDB? Monitoring info for InfluxDB can be found here: Monitoring Tools for TICK Stack.   9. How can I tune and optimize InfluxDB with ThingWorx? The best approach for running InfluxDB with PTC ThingWorx 8.4 (or later) is to treat the workload and configuration just as you would in a stand-alone deployment. We suggest to stick to the recommendations in the InfluxDB and TICK stack documentation.   10. How do I perform backup and recovery of ThingWorx with InfluxDB? Please see the ThingWorx Platform Backup and Recovery Planning Technical Brief to plan for back and recovery. You can also find more more details on taking backups and restoring data from InfluxDB in the Backing up and restoring in InfluxDB Enterprise overview.   11. Where can I learn more about sampling, interpolation, time bucketing, aggregation, pivot​ and other key features of InfluxDB? Features of InfluxDB can be found here: InfluxData Time Series Platform. Implementation of InfluxDB features can be found here: Getting Started with InfluxDB.   12. What are all the different persistence providers supported with ThingWorx? When should I use InfluxDB? ThingWorx supports the following model and data provider storage options: H2, PostgreSQL, MS SQL Server and AzureSQL ThingWorx supports the following data provider only storage options: InfluxDB Please refer to the model and data best practices section of the ThingWorx 9 Help Center for further information on options how to store your model and data with ThingWorx.   We have also updated the ThingWorx Platform 9.0 Sizing Guide to provide relevant information to estimate the amount of processing and memory that ThingWorx may need to meet your requirements. It also provides guidance on when to use InfluxDB for your scale needs.   13. 
When should I use InfluxDB over DataStax Enterprise (DSE)? Here is a good blog post that benchmarks time series data performance of InfluxDB vs. Cassandra, which is the core of DataStax Enterprise (DSE). In specific use cases, InfluxData Enterprise may be more cost effective when compared to similar telemetry use cases with DSE.   14. How can I migrate my data from PostgreSQL to InfluxDB? Migration from PostgreSQL or MSSQL is supported by the ThingWorx in-built data tools, which can export entities and data from PostgreSQL or MSSQL and then import them into InfluxDB.   Details on how to upgrade to ThingWorx 9.0 can be found in the Upgrading ThingWorx  section of the ThingWorx 9 Help Center.   15. Should I use InfluxDB as a time series store rather than OSI PI, IP21, or others? For ThingWorx 8.4 and later, InfluxDB is the recommended time series store. This can be implemented at the edge with ThingWorx (i.e. “front end”) using the open source edition and can also be implemented at the hub (i.e. “back end”) using either of the commercial editions designed for HA production workloads.   As always, ThingWorx can connect to most industrial software, including OSI PI, IP21, etc. with our integration toolset.   That’s a wrap—almost! We’ve added two extra questions for you.   16. What’s on the roadmap for ThingWorx with InfluxDB? Key development work to fully leverage built-in InfluxDB querying capabilities and support InfluxDB 2.0 in future ThingWorx releases Leveraging query operations capabilities from InfluxDB to further improve query performance Supporting additional native InfluxDB features (e.g. continuous queries)   17. What should I do if I need technical support with InfluxDB? If you select InfluxDB as your persistence provider, then all support requests related to configuring InfluxDB can be logged through PTC Technical Support at https://support.ptc.com or by calling 1-800-477-6435. You may also want to use the PTC Community to learn and collaborate with the growing PTC developer community. For all other requests related to database management, troubleshooting, monitoring, and administration, we encourage you to reach out to InfluxData directly based on your enterprise purchase contract made with InfluxData. PTC customers using InfluxDB can also email ptc-support@influxdata.com for support requests related to InfluxData.   If you’re as excited as I am about the ability to store your time series data with InfluxDB, let me know in the comments below!   Until next time, if you have any questions, just ask Kaya!
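As a complement to the FAQ above, here is a hedged sketch of the kind of time series read that benefits most from InfluxDB as the persistence provider: a filtered property-history query against a Thing whose value stream lives in InfluxDB. The Thing name, property, and filter values are illustrative assumptions.

    // Hypothetical Thing "ChillerUnit07" logging a "temperature" property to an
    // InfluxDB-backed value stream; names and thresholds are examples only.
    var result = Things["ChillerUnit07"].QueryPropertyHistory({
        maxItems: 5000,
        startDate: dateAddDays(new Date(), -7),   // last seven days
        endDate: new Date(),
        oldestFirst: true,
        query: {
            filters: {
                type: "GT",                       // only rows where temperature exceeded the limit
                fieldName: "temperature",
                value: 80
            }
        }
    });
    logger.info("Rows over limit in the last week: " + result.rows.length);

The service call itself is identical whichever persistence provider backs the value stream; the practical difference is that InfluxDB keeps queries like this fast as row counts grow into the millions, which is the high-scale ingestion case described in question 2.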
Today on Ask Kaya, I have a riddle.

I was effective and trendy, but now I can be annoying. I sometimes tend to look out of place and I get in others’ space. I look easy to learn, but few have truly mastered my intricacies. Am I the floss dance?

No, I’m not the floss dance. I’m the expression and validator widgets.

It’s time to say goodbye to those pesky widgets that were super useful but super annoying. Yep, those widgets that littered your design canvas but were “Invisible at Runtime.” I’m talking about expressions, validators, status messages and event routers. In our next release, expression and validator widgets will no longer appear on the canvas at build time.

You may remember the previous post titled “Ask the Expert: What are the top three features in ThingWorx 8.4 that I might not know about?” In that post, we discussed the concept of Data Helpers, now known as “Functions.”

What do Functions do? Functions give you the ability to add custom logic and bindings to improve UI application functionality. Before we describe how to configure them, let’s first explain what they are.

An expression widget runs any expression you give it. That piece of logic might be something like result = a + b. While an expression can run any type of logic—not just numbers—you must specify the output base type to be the same as the input base type.

Expressions can also be used to run a different service based on an event. For example, a user may write an expression to run a service if a value changes. They may not care about the value itself but rather just want to know that the value changed.

A validator widget is similar to an expression widget; the key difference is that a validator only outputs a Boolean. When the result is true, you bind to one service; when false, another. Unlike an expression widget, the validator widget does not need matching input and output datatypes because the output datatype will always be Boolean.

The input for a validator can be anything. You can create a scenario in which a validator widget outputs a status message that reads “the value is within the acceptable range” when the validator returns true or “the value is outside of the acceptable range” when the validator returns false.

Ready for the extra good stuff? We’re introducing a new editor for you to create, add and configure expressions and validators.

How can I use Functions? Let’s walk through an example using the following steps:
1. Create a new Mashup. You’ll see a new tab called “Functions,” which, by default, appears in the bottom right panel.
2. Click the “+” arrow in the top right of the “Functions” panel.
3. Choose “expression.”
4. Use the new Functions editor to write your expression. In this example, we’ll say that result = a + b;

[Screenshot: New Functions Editor]

We’ll then set the default values for a and b to be 2 and 3, respectively, to output a result of 5.

Expressions are just as powerful as they were before, but they no longer take up space on your mashup during design time, and they can now be configured in our brand new editor! (To spread the joy even more, the same holds true for validator widgets.)

Reach out with any questions or thoughts below!

Stay connected and keep floss dancing,
Kaya
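To make the distinction concrete, here are two tiny Function bodies in the style of the example above. The input names (a, b, value, lowerLimit, upperLimit) are parameters you would define yourself in the Functions editor, so treat them as illustrative.

    // Expression Function: the output base type must match the inputs (NUMBER here).
    result = a + b;

    // Validator Function: the output is always BOOLEAN, so the inputs can be any base type.
    // Bind the True event to your "value is within the acceptable range" logic and the
    // False event to the "value is outside of the acceptable range" logic.
    result = (value >= lowerLimit) && (value <= upperLimit);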
Exciting news! ThingWorx now has improved support for Docker containers to help you manage CI/CD, improve development efficiency in your organization and save costs. Check out these FAQs below and, as always, reach out to me if you have any additional questions.   Stay connected, Kaya   FAQs: ThingWorx Docker Containers   What are Docker Containers? From Docker.com: “a Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings”. Learn more here.   What's the difference between Docker containers and VMs? Containers are an abstraction at the app layer that packages code and dependencies together, whereas Virtual Machines (VMs) are an abstraction of physical hardware turning one server into many servers. Here are some great discussions on it on Stack Overflow. Containers vs. VMs   How can I build ThingWorx Docker images? Check out the Building ThingWorx 8.3 Docker Images Guide or watch this video to instruct you on how to build and test Docker containers. (view in My Videos)   How does PTC support building ThingWorx Docker images? PTC provides the ability for customers and partners to build ThingWorx Docker images. A customer can download the Dockerfiles and scripts packaged as a zip folder from the PTC Software Downloads Portal under “ThingWorx Platform,” then “Release 8.3”  then“ThingWorx Dockerfiles.” (Please note that you must be logged in for the link to function properly.) PTC Software Downloads PortalThe zip folder contains the Dockerfiles, template jar, and scripts to fetch Tomcat, and ThingWorx WAR files using CLI. Java must be downloaded manually from the vendor's website. We also provide an instructional guide called “Build ThingWorx Docker Images” available on the Reference Documents page on the Support Portal.   How are ThingWorx Docker images different from the usual delivery media of WAR files? The WAR file delivery is typically accompanied by an installation guide that contains the manual steps for creating the VM or bare-metal environment. That guide includes instructions for the administrator to manually install the prerequisites, including Tomcat, Java, and ThingWorx platform settings files. To deploy and run the WAR file, the administrator follows the guide to create the runtime environment on an OS. In contrast, the Dockerfile build in this delivery automates the creation of a Docker image once supplied with the prerequisites.   Do you have any reference deployment and guidance? Yes, you can refer to our blog post to learn how to deploy and run ThingWorx Docker containers on your existing Kubernetes environment.   Is there any recommendation on which Container Orchestrator as a Service (CaaS) a customer should run ThingWorx Foundation Docker container images on? You can use Docker-Compose for testing, but it is generally not suggested for production deployment use cases. In a production environment, customers should use container orchestrators such as Kubernetes, OpenShift, Azure Kubernetes Service (AKS), or Amazon Elastic Container Service for Kubernetes (Amazon EKS), to deploy and manage ThingWorx Docker images.   What are the skill sets required? Familiarity with OS CLI and Docker tools is required to build building the ThingWorx Docker images. Familiarity with Docker-compose to run the resulting Docker containers is needed to test the resulting builds. 
We don’t recommend Docker-Compose for production use, but when using it for local testing and demo purposes, users can rapidly install ThingWorx and get it up and running in minutes. We expect PTC partners and customers who want to run ThingWorx containerized instances in their production environment to possess the required skill sets within their DevOps team.   How is ThingWorx licensing handled with the Docker images? By default, the container created from these Docker images starts up in a limited mode with no license supplied. You can configure your username and password for the PTC licensing portal to automatically load a license via environment variables passed into the container on startup. Additionally, you can mount a volume to the /ThingworxPlatform directory, which contains your license file, or to retrieve a license request. To keep your Host ID consistent, ensure that the /ThingworxStorage and /ThingworxPlatform directories are persisted and not removed with individual container restarts. More detailed instructions can be found in the build guide or in a Kubernetes blog post .   Is Docker free? What version of Docker does PTC support for ThingWorx? Docker is open-source and licensed under the Apache 2 license. Information on Docker licensing can be found here. The following Docker versions are required: Docker Community Edition (docker-ce) Version 18.05.0-ce is recommended. To install the Docker Community Edition on your system, follow the instructions for your operating system on the Docker website here. Docker Compose (docker-compose) Version 1.17.1 is recommended. To install the Docker Compose on your system, follow the instructions for your operating system on the Docker website here. What persistence providers are currently supported? PTC provides the ability to build ThingWorx Foundation containers for the following supported persistence providers: H2 Microsoft SQL Server PostgreSQL Additional persistence providers will be added to the Docker build delivery as the ThingWorx Foundation Platform releases support for those new databases in future releases.   What are some of the security best practices? For production use, customers are strongly advised to secure their Docker environments by following all the recommendations provided by Docker. Review and implement the best practices detailed at https://docs.docker.com/engine/security/security/.   Can we build Docker images for ThingWorx High Availability (HA) architecture? Yes. ThingWorx Dockerfiles are provided for both basic ThingWorx deployment architecture and HA ThingWorx deployment architecture.   How easy is the rehosting and upgrading of ThingWorx releases on Docker with existing data? In Kubernetes environment, data is kept in a separate volume and can be attached to different containers. When one container dies, the data can be attached to a different container and the container should start without issue. For more information, please refer to the upgrade section of the Building ThingWorx 8.3 Docker Images Guide.   Is it okay to use the Docker exec and access the bash shell to make config changes or should I always rebuild the image and re-deploy?­ Although using Docker exec to gain access to the container internals is useful for testing and troubleshooting issues, any changes made will not be saved after a container is stopped. To configure a container's environment, variables are passed in during the start process. 
This can be done with Docker start commands, using compose files with environment variables defined, or with helm charts. More detailed instructions can be found in the build guide or in this blog post .   What if there are issues? Should I call PTC Technical Support? We are providing the scripts and reference documents solely to empower our community to build ThingWorx Docker images. We believe that customers using Docker in their production processes would have expertise to manage running Docker containers themselves. If there are any issues or questions regarding the build scripts provided in the PTC official downloads portal, then customers can contact PTC Technical Support at 1-800-477-6435 or visit us online at: http://support.ptc.com. PTC does not provide support for orchestration troubleshooting.   What can you share about future roadmap plans? As we are enabling our customers and partners to build ThingWorx Foundation Platform Docker images, we plan to do the same for upcoming products such as ThingWorx Integration & Orchestration, ThingWorx Analytics, upcoming persistence providers such as InfluxDB, and many more. We also plan to provide additional reference architecture examples and use cases to help developers understand how to use Docker containers in their DevOps and production environments.   Where can I learn more about Docker containers and container orchestrators? See these resources below for additional information: https://training.docker.com/ https://kubernetes.io/docs/tutorials/online-training/overview/
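One small, hedged example of the "test the resulting builds" step mentioned above: a Node.js script that waits for a locally started ThingWorx container to begin answering HTTP before your test suite runs. The URL and port are assumptions based on a typical Tomcat mapping in a docker-compose file; adjust them to your own stack.

    // wait-for-thingworx.js (Node.js 18+, uses the built-in fetch)
    // The default URL assumes Tomcat is published on localhost:8080; adjust as needed.
    const url = process.env.TWX_URL || "http://localhost:8080/Thingworx";

    async function waitForThingWorx(retries = 30, delayMs = 10000) {
      for (let i = 0; i < retries; i++) {
        try {
          const res = await fetch(url, { redirect: "manual" });
          // Any non-5xx answer (200, or a redirect to the login page) means the platform is serving requests.
          if (res.status < 500) {
            console.log(`ThingWorx is responding (HTTP ${res.status})`);
            return;
          }
        } catch (e) {
          // Connection refused while the container is still starting up: ignore and retry.
        }
        await new Promise((resolve) => setTimeout(resolve, delayMs));
      }
      throw new Error("ThingWorx did not come up in time");
    }

    waitForThingWorx().catch((err) => { console.error(err); process.exit(1); });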
  Hi, everyone!   We’re actively working towards the ThingWorx 9.0 release and we’re ready to provide a sneak peek into the biggest feature of 9.0: Active-Active Clustering for High Availability configuration.   You may be wondering: doesn’t ThingWorx already offer High Availability?   Yes, ThingWorx already supports a High Availability configuration. Previous versions of ThingWorx, such as ThingWorx 8.X version releases, support Active-Passive configuration, where one “active” ThingWorx server performs all processing and maintains the live connections to other systems such as databases and connected assets. Meanwhile, in parallel, there is a second “passive” ThingWorx server that is a mirror image and regularly updated with data but does not maintain active connections to any of the other systems. If the “active” ThingWorx server fails, the “passive” ThingWorx server is made the primary server, but this can take a few minutes to establish connections to the other systems.   So, how is ThingWorx Active-Active different?   Active-Active configuration differs from Active-Passive in that all the ThingWorx servers in the cluster are “active.” Not only is data mirrored across all ThingWorx servers, but all of the servers, instead of only one, maintain live connections with the other systems. This way, if any of the ThingWorx servers fail, the other ThingWorx servers take over instantaneously with no recovery time.   Since all ThingWorx servers are active, they are processing in parallel and, as a result, the cluster can process more data than that of a single server or a cluster with an Active-Passive configuration. Simply put, multiple servers working together outperform a single server. This allows customers to scale their deployment by simply adding more ThingWorx servers to the cluster (horizontally scaling), which does not have the same limitations of scale that is achieved by increasing the performance of the server itself.   What does that mean for me? Higher Availability - You can avoid single points of failure and configure the ThingWorx Foundation platform in an Active-Active cluster mode to achieve the highest availability for your IIoT systems and applications. Increased Scalability - Now, you can horizontally scale from one to many ThingWorx servers to easily manage large amounts of your IIoT data at scale more smoothly than ever before.   Stay tuned! We’ll be posting more information on Active-Active Clustering—how it's achieved in ThingWorx, architectural component overviews, and what it means for your ThingWorx deployment!   In the meantime, we're running the Active-Active Clustering Beta Program. Interested in participating? Reach out to Ryan Servais (rservais@ptc.com) or Ayush Tiwari (atiwari@ptc.com) to learn more about participation!   Stay connected! Kaya
  DevOps. It’s not just a buzzword. It’s a true development methodology that can make all the difference in your application quality and release time. Today, I’ll walk you through how you can continuously integrate and deploy your ThingWorx applications to achieve CI/CD objectives as part of a DevOps-focused culture. At the end, I’ll provide you a sneak peek of what you can expect in a future release (hint: we’re working on some awesome new CI/CD functionality). Overview of ThingWorx DevOps and Common Tools I’ll start by providing an overview of the DevOps cycle, and then I’ll provide more details around each step of the cycle. Before we can start, you’ll need to define your high-level architecture and functional requirements as part of the “Plan” phase.   Now, let’s build your ThingWorx app. Ready? Here we go!   Code As with any software platform, developers can start working in any number of areas of the IoT application—from edge, to visualization, rules authoring to data modelling. For the purpose of this article, we’ll start with the UI, but much of the same steps can be applied in any order. Also, we’ll just call out high level steps of development, but for more info on building out each aspect of your application, please visit developer.thingworx.com.   In ThingWorx Composer, build out your user interface with Mashups. Starting with UI can help you think about the types of data you want to collect from devices and systems and how you want to solve your unique requirements for the business. Starting at this point can also help you show live POCs and functional mockups to stakeholders. Once you’ve built some starter screens and a skeleton of app navigation, you can start adding in data through configuration in Composer by creating your Things, Templates, properties and services. [Optional] We offer 65 out-of-the-box widgets for the UI in the ThingWorx platform. There are times when you have specific visualization requirements for your application and the out-of-the-box widgets don’t quite satisfy them. We have a path for that, through our custom widget extensions. If you choose to develop your own widget extensions, you can do so through other IDEs like Eclipse or WebStorm. Custom development and extensions are not just for UI. We also allow you to define Thing entities and their custom services in Java. If you are developing extensions in this way, we’d recommend you do so using Eclipse to code and Gradle to build and drive tests. For instructions on how to create your own extension, see “Creating Customized ThingWorx Widgets” on page 42 of the ThingWorx Application Development Guide posted on Ask Kaya. With a good start on the data model, business logic and UI, some quick testing and validation is in order. You’ll probably also want to save all of this work also to share with colleagues or move to other integration environments. Capture all of your entity and code artifacts (Mashups, style definitions, Thing shapes, Thing templates, JavaScript, etc.) by using the “Export to Source Control” feature from ThingWorx Composer to write entities to the file system. You can use Git or other source systems to monitor the file system and push to the remote repository of your choice (e.g. GitHub, Bitbucket, etc.). Again, if you are developing extensions outside of Composer, you’ll want to source control those items, too, from Eclipse or the file system directly. 
Build [Optional] You can build an application package as an extension with all entities and code from Eclipse using the ThingWorx Eclipse plugin. When you build the project, it will create an extension zip file. Again, more info in the Application Development Guide. Make your life easy by using tools like Gradle or Maven. ThingWorx is very similar to other Java development systems, so Gradle and Maven track your dependencies and create a package with all of the referenced extensions you may be using and put them into one single zip file package. Once you have a package built, you can import it into test or integration environments. For added automation, create repeatable tasks like a job in Jenkins so that every time your code is changed in the source repository (e.g. Git), it triggers a job to increment the version, build the project and create the package deliverables. Consider also configuring the Jenkins jobs to push artifacts to a central repository like Artifactory. Test Once your code has been built, we can’t forget about testing! Automation is king for DevOps! For ThingWorx apps, you should still design a test strategy for your application, and then define and create your tests. These can run in your local developer environment, as well as be triggered via build tasks/changes in the source repository. Tools like JUnit for your entities and Java-backed services or Selenium for testing the Mashup UIs can be used. You can create separate jobs in Jenkins along with the build to run the integration and unit tests against an instance of ThingWorx that has the latest artifacts deployed into it. You can also do static code analysis using tools like PMD to find bugs, check style issues or identify inefficient code paths. To round out your app also with performance and load testing, JMeter is one tool that you can leverage. Release Releasing is the culmination of the team’s great work! If the test results pass and the builds are green, you are good to go, and it’s time to establish your release build. Make sure that you consider a versioning scheme for your application and its artifacts. Semantic versioning is a pattern that can be implemented for your ThingWorx application. Correct versioning of ThingWorx packages affects your upgrade plans and is a signal to your users on the intent and content of the release. Again, see the Application Development Guide. Once a release milestone is met, you can create a source branch in Git for that milestone, which will have all the changes encompassed in that release. Configure a Jenkins job to create builds from that milestone branch for maintenance purposes. Deploy + Operate + Monitor   If you’ve tested and released your application, it’s time for production and real users! Using the build and testing infrastructure you’ve set up earlier in the development process, you can also deploy your release builds to your target staging and production ThingWorx environments with Jenkins jobs, Artifactory and automated steps. Finally, as with anything, it is important to measure success and monitor performance via KPIs, trends and logs. You can also extract application insights and recommendations from the PTC System Monitor (PSM) tool, which uses Dynatrace; here is a guide on how to install and deploy PSM.  There are many different paths through the platform and options for developers to match your local team processes and tools—this was simply a quick overview. Congrats! 
You’re now equipped to build ThingWorx apps while leveraging software best practices and incorporating a DevOps culture!   What can I expect in a future release of ThingWorx? Coming in a near-term release of ThingWorx, we’ll make it easier for you to continuously integrate and deploy your ThingWorx applications. How? Through new functionality that bolsters our packaging concepts, new cloud services to assist in deployment to environments and an error-proof way to integrate applications with an automated dependency awareness.   Stay tuned for more info about this exciting new deployment and application management functionality targeted for Fall 2019!   Reach out with any questions and stay connected.   -Kaya
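To make the "Deploy + Operate + Monitor" step a little more concrete, here is a hedged sketch of a post-deployment smoke test that a Jenkins job could run after pushing a build to a staging instance. It uses the standard ThingWorx REST pattern of POSTing to /Thingworx/Things/&lt;thing&gt;/Services/&lt;service&gt; with an application key; the host, Thing, and service names are placeholders for whatever health-check service exists in your own application.

    // smoke-test.js (Node.js 18+, built-in fetch); run as a pipeline step, e.g. "node smoke-test.js"
    const BASE_URL = process.env.TWX_URL || "https://twx-staging.example.com/Thingworx"; // placeholder host
    const APP_KEY  = process.env.TWX_APP_KEY; // application key of a dedicated test user

    async function smokeTest() {
      // "SmokeTestThing" and "GetHealthSummary" are hypothetical names for your own health-check service.
      const res = await fetch(`${BASE_URL}/Things/SmokeTestThing/Services/GetHealthSummary`, {
        method: "POST",
        headers: {
          appKey: APP_KEY,
          "Content-Type": "application/json",
          Accept: "application/json"
        },
        body: JSON.stringify({}) // service inputs, if any
      });
      if (!res.ok) {
        throw new Error(`Smoke test failed: HTTP ${res.status}`);
      }
      console.log("Smoke test passed:", JSON.stringify(await res.json()));
    }

    smokeTest().catch((err) => { console.error(err); process.exit(1); });

A failing exit code here fails the Jenkins stage, so a broken deployment never silently reaches users.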
Hi, ThingWorx users! We’re excited to share that we have partnered with InfluxData to make time series analysis in ThingWorx even easier. InfluxDB is a database by InfluxData that is “built specifically for metrics and events that empower developers to build next-generation IIoT, analytics and monitoring applications.”

Why InfluxData?
Today, application developers expect robust querying capabilities, fast response times, easy ways of aggregating and pivoting on data, and the ability to leverage results for reporting and visualization. IT and DevOps administrators also expect cost-effective storage, easy ways of aging data through archiving, and the ability to keep large amounts of historical data to satisfy analysis requirements.

That’s why we’ve partnered with InfluxData to make it easier for developers to store, analyze and act on IIoT data in real time. With InfluxData, developers can build connected IIoT applications more quickly while still incorporating the following capabilities:
- monitoring
- real-time alerting
- predictive maintenance
- streaming data
- anomaly and event detection
- visual and report-based analysis

We considered a few technologies for the purpose of improving ThingWorx time series analysis. Here are a few reasons we chose InfluxData:
- high compression of data (~45x)
- ability to handle millions of writes per second*
- ability to read thousands of rows in milliseconds*
- support for the standard time series functions of sampling, interpolation, time bucketing, aggregation, selector, transformation, predictor, etc.

* Query and write times will vary based on an individual ThingWorx application’s implementation with Influx. For example, as the number of concurrent reads increases, the query speed decreases. With the upcoming 8.4 release, the ThingWorx Sizing Guide will be updated to reflect representative performance for ThingWorx developers.

In addition to improved query capability, ThingWorx time series with Influx can now use less memory and CPU, giving your platform servers a bit of a break.

To start strategizing on how InfluxData can help you in your ThingWorx journey, here is a sneak preview of what it will look like:

New Features
The new ThingWorx Influx Persistence Provider will make query services like the ValueStream Thing QueryPropertyHistory and the Stream Thing QueryStreamData and QueryStreamEntries even better. Simply create a new instance of the persistence provider, configure it to use your InfluxDB instance, create a new value stream (or stream) from the new persistence provider, and you’ll be writing, reading and analyzing your time series data like never before.

We’re also introducing a new enhancement to improve InfoTable support with time series data, including the ability to use a driver property. The driver property can be specified with the QueryPropertyHistoryWithDriverProperty service for time alignment and filling backward/forward in your stream queries.

Let’s walk through a driver property example where you have the properties temperature, speed and battery level.

Timestamp            | Temperature | Speed | Battery Level
1480589076592000000  | 80.003      | 5012  | 79
1480589077537000000  | 80.010      | 5011  | 79
1480589077550000000  | 80.010      | 5009  | 79
1480589077562000000  | 80.030      | 5011  | 78

Let’s say temperature is the key driver for your analysis. In other words, you are not concerned if speed or battery level changes—you only care about when temperature changes.
We’re also introducing a new enhancement to improve InfoTable support with time series data, including the ability to use a driver property. The driver property can be specified with the QueryPropertyHistoryWithDriverProperty service for time alignment and filling backward/forward in your stream queries.

Let’s walk through a driver property example where you have the properties of temperature, speed and battery level.

Timestamp              Temperature   Speed   Battery Level
1480589076592000000    80.003        5012    79
1480589077537000000    80.010        5011    79
1480589077550000000    80.010        5009    79
1480589077562000000    80.030        5011    78

Let’s say temperature is the key driver for your analysis. In other words, you are not concerned if speed or battery level changes; you only care about when temperature changes.

We can specify temperature to be the driver property for that particular time and only return stream values for temperature, speed and battery level when temperature changes. If speed or battery level changes (but temperature does not change), the rows associated with those changes are not included in the result set, because neither speed nor battery level is a driver property. See the chart below; a hedged sketch of such a query appears at the end of this post.

Timestamp              Temperature (driver)   Speed   Battery Level
1480589076592000000    80.003                 5012    79
1480589077537000000    80.010                 5011    79
1480589077562000000    80.030                 5011    78

Note that only three of the four rows are returned above because one entry in the original table did not have a change in temperature.

Stay Tuned
Look out for these time series improvements and InfluxData integration in our upcoming 8.4 release. I’ll be sure to keep you updated on additional new features coming in our next release (like Orchestration and Mashup Builder 2.0), so check back shortly or subscribe to this Community so we can stay in touch. As always, if you have any questions, just ask Kaya!

Stay connected,
Kaya
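As promised above, here is a rough sketch of that driver-property query. The service name comes straight from this post, but the parameter names (in particular the one selecting the driver property) are assumptions made for illustration; check the service definition on your 8.4 instance for the exact signature.

// Hypothetical call: return temperature, speed and battery level only for rows
// where temperature (the driver) changed. Parameter names are assumed here.
var drivenHistory = Things["MyAssetThing"].QueryPropertyHistoryWithDriverProperty({
    driverProperty: "temperature",           // assumed name for the driver property input
    maxItems: 500,
    startDate: dateAddDays(new Date(), -1),  // last 24 hours
    endDate: new Date(),
    oldestFirst: true
});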
View full tip
A fresh look at getting started with ThingWorx in a relevant context, outlining the DevOps needed to kick-start your programming.

For full-sized viewing, click the YouTube link in the player controls. Visit the Online Success Guide to access our Expert Session videos at any time, as well as additional information about ThingWorx training and services.
View full tip
The natively exposed ThingWorx Platform performance metrics can be extremely valuable for understanding overall platform performance and certain core subsystem operations; however, as a development platform, this doesn't give any visibility into what your built solution is or is not doing.

Here is an amazing little trick that you can use to embed custom performance metrics into your application so that they show up automatically in your Prometheus monitoring system. What you do with these metrics is up to your creativity (with some constraints of course). Imagine a request counter for specific services which may be incredibly important or costly to run, or an exception metric that is incremented each time you catch an exception, or a query result size metric that informs you of how much data is being queried from the database.

Refer to Resources > MetricServices:
GetCounterMetric
GetGaugeMetric
IncrementCounterMetric
DecrementCounterMetric
SetGaugeMetric

You'll need to give your metric a name, identified by key, and this is meant to be dotted notation, which will then be converted to underscores when the metric is exposed on the OpenMetrics endpoint. Use sections/domains in the dotted notation to structure your metrics in line with your application design.

COUNTER type metrics are the most commonly used and relate to things happening through time. They are a running index which gets timestamped as it is collected by Prometheus, so that you can look back in time and investigate what happened when and what the scale or impact was. After-the-fact functions and queries (delta over time, increase, rate per second) are what make these metrics most useful.

Common examples of counter type metrics are: requests, executions, bytes transferred, rows queried, seconds elapsed, execution time.

Resources["MetricServices"].IncrementCounterMetric({
    basetype: "LONG",
    value: 1,
    key: "__PTC_Reported.integration.mes.requests",
    aggregate: false
});

GAUGE type metrics are a point-in-time status of something being measured.

Common gauge type metrics are: CPU load/utilization, memory utilization, free disk space, used disk space, busy/active threads.

Resources["MetricServices"].SetGaugeMetric({
    basetype: "NUMBER",
    value: 12,
    key: "__PTC_Reported.Users.ConnectedOperatorCount",
    aggregate: true
});

Be aware of the aggregate flag, as it will make this custom metric cluster-level, which can have some unintended consequences. Normally you want performance metrics for the specific node, so you can see what work is happening where and confirm that it is being properly distributed within the cluster. There are some situations, however, where you do want cluster aggregation, like the concurrently connected operator count above. A small read-back sketch follows at the end of this tip.

Happy Monitoring!
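For a quick sanity check while developing, you can read a custom metric back inside a service. This is a hedged sketch: GetCounterMetric is listed above, but its exact input is not shown there, so the single "key" parameter is an assumption; verify the service definition on your own instance.

// Read back the request counter incremented in the example above.
// The "key" input is an assumption made for illustration.
var requestCount = Resources["MetricServices"].GetCounterMetric({
    key: "__PTC_Reported.integration.mes.requests"
});
logger.info("MES integration requests counted so far: " + requestCount);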
View full tip
Hello everyone,

If you’re like me, you’re always looking for the optimal or most efficient way to do something. Today, I’ll share a quick trick and two tips to help you develop your awesome IoT solutions with ThingWorx.

#1. Trick: Finding Dependency References
We are targeting a new “Where Used” Composer feature in an upcoming release of the platform to help you find your references of bindings, properties, mashups, and services. In the meantime, did you know you can get some of that information yourself today with a quick service call?

As of ThingWorx 8.5, a new service is present on Project entities; the service crawls the contents of your project and highlights the full external dependency list to help you find references. On any Project entity, ListExternalDependencies() shows output like this in 9.0:

ListExternalDependencies() output

For each entity (“A”) in the project, the service calls out any entities (“B”) that it is referencing, along with the referenced dependency’s extension package if present. It will only find dependencies external to the project and will not currently list dependencies within the project. Notice also that the last column of the infotable output, “where used,” even lists the type of reference (e.g. coded in JavaScript, Mashup Data, Resource, Property binding, etc.). Pretty handy!

Code reference from “Where Used” service output

Click this link for additional help content that explains the service output and usage. Again, it only searches for entity references outside of your current project scope. Also, this service will stop crawling the dependency hierarchy when it finds items in a project, since its current purpose is packaging. Consider if you have Thing T1 in Project P1, which uses ThingTemplate TT2 that is not in a project. TT2, in turn, uses ThingShape TS3, which is also not in a project. Calling ListExternalDependencies() on Project P1 will find both TT2 and TS3. If, however, we then put TT2 in a Project P2 and call the service on Project P1, the scan will stop at TT2 and NOT identify TS3. The reason for this is that the service assumes that when you package P2, it will find the orphan TS3.

We know this doesn’t cover all “where used” type use cases, so there is still a planned feature to really complete this concept on the platform. But even in the 8.5 or 9.0 releases, if you wanted to see entity references (inside and outside of its project) for a single Thing A, you could quickly assign Thing A to a new project, run the ListExternalDependencies() service to find all of its references, and then assign Thing A back to its original project once you’ve found what you are looking for. Moving entities into projects just for searching is not something I would recommend doing often, but it can work in a pinch!

#2. Tip: JavaScript looping
When iterating through data from infotables, use a .forEach() loop! Consider these four code options and their average performance on the Rhino engine:

Infotable looping performance

Very clearly, the .forEach() syntax is the most performant and, in my opinion, the cleanest to read. Try it out in your app (a minimal sketch follows below)! We plan to update our help documentation with more of these ThingWorx JavaScript best practices in 9.1. We also plan to provide some updates to our Code Snippets features in an upcoming Composer release so we can recommend these good practices right from the start.
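Here is what that looks like in practice. This is a minimal sketch with made-up infotable and field names, following the .forEach() pattern recommended above.

// 'result' is assumed to be an infotable returned by some query service,
// with a numeric 'temperature' field on each row (both names are hypothetical).
var total = 0;
result.rows.forEach(function (row) {
    total += row.temperature;    // do your per-row work here
});
// If your ThingWorx version does not expose forEach directly on rows,
// result.rows.toArray().forEach(...) is an equivalent form.
logger.info("Summed temperature over " + result.getRowCount() + " rows: " + total);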
#3. Tip: Code optimizations
As with many performance bottlenecks, it is those pesky loops that can really amplify degradation. Here are two ThingWorx patterns for your consideration:

Wrong Way:
In this block of code, we set up the property names we are looking for and then loop through them to build a logger message. While creating each logger message, we make an API call to look up the Thing named me.name and execute the service GetMetadataAsJSON() on that Thing, which walks the hierarchy to build a JSON representation of itself. In this trivial example, we make the same two API calls for each item in the propertyNames list, even though the Thing reference and JSON definitions never change. Pretty expensive.

Correct Way:
Notice that in this example we declare not only the propertyNames outside of the loop, but also the propertyDefinitions. This significantly improves performance and reduces the number of API calls and round trips to the application server. Again, this is a trivial example, but it can pay off in larger and more complex code areas. (A hedged sketch of both versions appears at the end of this post.)

If you like these quick tips, check out more best practices here! Got a tip of your own? Have a question on how to tackle something? As always, just Ask Kaya!

Stay connected!
Kaya
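As promised, here is a hedged reconstruction of the two patterns described above. The original post showed these as screenshots; the property names and log format below are made up for illustration, and GetMetadataAsJSON() is the metadata service named in the text. The two versions are alternatives, shown together only for comparison.

// Wrong way: both the Thing lookup and GetMetadataAsJSON() run on every iteration,
// even though their results never change.
var propertyNames = ["temperature", "speed", "batteryLevel"];   // hypothetical properties
for (var i = 0; i < propertyNames.length; i++) {
    var definition = Things[me.name].GetMetadataAsJSON().propertyDefinitions[propertyNames[i]];
    logger.info(propertyNames[i] + ": " + JSON.stringify(definition));
}

// Correct way: declare propertyNames and propertyDefinitions once, outside the loop.
var propertyNames = ["temperature", "speed", "batteryLevel"];
var propertyDefinitions = Things[me.name].GetMetadataAsJSON().propertyDefinitions;
propertyNames.forEach(function (name) {
    logger.info(name + ": " + JSON.stringify(propertyDefinitions[name]));
});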
View full tip