
IoT & Connectivity Tips

This document contains information that should be reviewed before installing or upgrading to the latest version of ThingWorx, for both new and existing customers. Note that many of the links in this document require that you have created and validated an account on the PTC website.

Account Creation
For users who do not yet have an active maintenance agreement, an account can be created by accessing the Basic Account Creation page. With a basic account, you will have access to the ThingWorx Community, product documentation, and support articles. A basic account also grants access to our new Developer eSupport Portal, which is a great resource for users of all levels to become more proficient with ThingWorx and the emerging world of IoT in general. For more details on the new Developer eSupport Portal, please refer to our Getting Started with the New eSupport Portal guide.

For users who have an active maintenance agreement with PTC but have not yet created an account on the PTC website, a new customer account can be created by accessing the New Customer Account page. With a customer-level account, you will have access to all of the basic account resources listed above, plus access to download all licensed PTC products. You will also have access to our dedicated application support team. In order to create a customer account, you will be asked to provide your Customer Number and one of either your Service Contract Number (SCN), Sales Order Number (SON), or Site Number. This information will have been included in documentation sent to your company by your PTC Sales Representative. If you have any questions or concerns about the account creation process, please contact us using the Web Account Case Logger.

Document Structure
Throughout this document, references are made to specific documentation related to the ThingWorx Platform. A listing of links to this supporting documentation is provided at the end of this article for all supported releases of the ThingWorx Platform. Please use that listing to review documentation specific to the release version being installed in your local environment.

Overview of Changes
For a complete listing of new features and enhancements introduced in the latest version of the ThingWorx Platform, please refer to the Release Notes documentation, which is included within the platform downloadable zip file. In addition to providing brief descriptions of each enhancement, this document indicates where you can find more comprehensive coverage where applicable. Release notes are also available online for review. For a complete listing of release notes for all supported releases of ThingWorx, please refer to the link at the end of this document.

Required Software
The following components are required for a complete installation of the ThingWorx Platform:
Oracle JDK: Oracle JDK Download Page
Apache Tomcat: Apache Tomcat Download Page
PostgreSQL: PostgreSQL Download Page
Full details on installing and configuring the above components are provided in the ThingWorx Installation Guide for all supported OS environments. Please also take note of any version requirements for the above components based on the version of the ThingWorx Platform being installed. Version requirements are noted in the ThingWorx System Requirements guide.

ThingWorx Installation Files
The desired version of the ThingWorx Platform can be obtained through the PTC eSupport Site (reminder: maintenance agreement required).
The file is provided in zip format, with a name that starts with "MED-XXXXXX-CD...", where each X is a digit. Once the archive is extracted, the Tomcat-deployable file is Thingworx.war. Release notes are included within the zip archive.

Installation and Reference Documentation
In addition to the listing at the end of this guide, all installation and reference documentation is also available online from the PTC Reference Documents page. To locate documentation within this link related to the latest release of ThingWorx, follow these steps:
1. Set the Product field to ThingWorx.
2. Set the Reported Release appropriately.
3. Narrow the search results by setting the Document Type field if desired, or leave this set to All Document Types.
4. Leave the User Role field set to the default All User Roles selection.
It is strongly recommended to review the following documents at minimum:
ThingWorx Platform System Requirements
Installing ThingWorx (for new installations)
Upgrading ThingWorx (for customers migrating from an older release)

Upgrade Planning for Returning Customers
For existing customers who are upgrading from a prior release of the ThingWorx Platform, PTC offers an Upgrade Planning Guide that can help in preparing for the upgrade process. This guide provides a checklist of activities that are critical to performing a successful upgrade to the latest version of ThingWorx. PTC recommends reviewing this guide for assistance in planning for the upgrade process.

Additional References and Troubleshooting
The following reference and troubleshooting material covers common issues involving the installation of the platform.
General Functionality:
Frequently Seen Errors upon launching the ThingWorx application: Link
"HTTP Status 401 - Could not handle request" error when attempting to access a new PostgreSQL-based installation of ThingWorx: Link

Contacting Technical Support
Should you have any questions about the installation process, or if you encounter any issues during the process, our qualified team of technical support engineers is available to assist you. With an active maintenance agreement for ThingWorx, you will have access to web-based technical assistance as well as live phone-based support. Contact details vary depending on your region. For comprehensive information on how to obtain technical support, please refer to our online Customer Support Guide.

Links to Documentation for Supported Releases of ThingWorx
For links to supporting documentation for current and legacy releases of ThingWorx, please refer to the following article: Where to Find ThingWorx Documentation
View full tip
This example shows how a file can be retrieved via Scripto and then displayed on a web page. The precondition is that an asset has an uploaded file. This script assumes the file is there and that it is not extremely large (under 1 megabyte). This example uses base64 encoding to convert the file into a string. Future versions of Scripto will support other data streams so that base64 encoding will not be necessary.

import com.axeda.drm.sdk.Context
import com.axeda.drm.sdk.data.UploadedFile
import com.axeda.drm.sdk.data.UploadedFileFinder
import com.axeda.drm.sdk.device.Device
import com.axeda.drm.sdk.device.DeviceFinder
import com.axeda.common.sdk.id.Identifier   // assumed package for the Identifier class used below

// This script requires parameter "id"
Context ctx = Context.create(parameters.username);
def response = ''
try {
    DeviceFinder deviceFinder = new DeviceFinder(ctx, new Identifier(parameters.id as Integer));
    Device device = deviceFinder.find();
    UploadedFileFinder uff = new UploadedFileFinder(ctx)
    uff.device = device
    uff.hint = 'photo'
    def ufiles = uff.findAll()
    UploadedFile ufile
    if (ufiles.size() > 0) {
        ufile = ufiles[0]
        File f = ufile.extractFile()
        response = getBytes(f).encodeBase64(false).toString()
    }
} catch (Exception e) {
    logger.info(e.message);
    response = [
            faultcode: 'Groovy Exception',
            faultstring: e.message
    ];
}
return ['Content-Type': 'data:image/png;base64', 'Content': response];

static byte[] getBytes(File file) throws IOException {
    return getBytes(new FileInputStream(file));
}

static byte[] getBytes(InputStream is) throws IOException {
    ByteArrayOutputStream answer = new ByteArrayOutputStream();
    // read the content of the file into a byte buffer
    byte[] byteBuffer = new byte[8192];
    int nbByteRead /* = 0*/;
    try {
        while ((nbByteRead = is.read(byteBuffer)) != -1) {
            // append the buffer to the output
            answer.write(byteBuffer, 0, nbByteRead);
        }
    } finally {
        is.close()
    }
    return answer.toByteArray();
}
View full tip
This project is a simple custom tab that allows you to search all models and see their assets with basic information. It is packaged as an Axeda SDK v2 Artisan project.

Further Reading
Developing with Axeda Artisan (Axeda Platform v6.8 and later)
Axeda Sample Application: Populating A Web Page with Data Items
Extending the Axeda Platform UI - Custom Tabs and Modules
View full tip
The following script takes a model name, a device serial number, and a data item name as parameters, finds the asset location, and uses that longitude to determine the current time zone. It then converts the data item timestamp to an Eastern Standard Time timestamp.

import groovy.xml.MarkupBuilder
import com.axeda.drm.sdk.Context
import java.util.TimeZone
import com.axeda.drm.sdk.data.*
import com.axeda.drm.sdk.device.*
import com.axeda.common.sdk.jdbc.*;
import net.sf.json.JSONObject
import net.sf.json.JSONArray
import com.axeda.drm.sdk.mobilelocation.MobileLocationFinder
import com.axeda.drm.sdk.mobilelocation.MobileLocation
import com.axeda.drm.sdk.mobilelocation.CurrentMobileLocationFinder

def response
try {
    Context ctx = Context.getUserContext()
    ModelFinder mfinder = new ModelFinder(ctx)
    mfinder.setName(parameters.model_name)
    Model m = mfinder.find()
    DeviceFinder dfinder = new DeviceFinder(ctx)
    dfinder.setModel(m);
    dfinder.setSerialNumber(parameters.device)
    Device d = dfinder.find()
    CurrentMobileLocationFinder cmlFinder = new CurrentMobileLocationFinder(ctx);
    cmlFinder.setDeviceId(d.id.getValue());
    MobileLocation ml = cmlFinder.find();
    def lng = -72.158203125
    if (ml?.lng){
        lng = ml?.lng
    }
    // set boundaries for timezones - longitudes
    def est = setUSTimeZone(-75.0)   // a longitude within the EST band, so "est" resolves to Eastern time
    def tz = setUSTimeZone(lng)
    CurrentDataFinder cdfinder = new CurrentDataFinder(ctx, d)
    DataValue dvalue = cdfinder.find(parameters.data_item_name)
    def adjtime = convertToNewTimeZone(dvalue.getTimestamp(), tz, est)
    def results = JSONObject.fromObject(lat: ml?.lat, lng: ml?.lng, current: [name: dvalue.dataItem.name, time: adjtime.format("MM/dd/yyyy HH:mm"), value: dvalue.asString()]).toString(2)
    response = results
} catch (Exception e) {
    response = [
            message: "Error: " + e.message
    ]
    response = JSONObject.fromObject(response).toString(2)
}
return ['Content-Type': 'application/json', 'Cache-Control':'no-cache', 'Content': response]

def setUSTimeZone(lng){
    TimeZone tz
    // set boundaries for US timezones by longitude
    if (lng <= -67.1484375 && lng > -85.517578125){
        tz = TimeZone.getTimeZone("EST");
    }
    else if (lng <= -85.517578125 && lng > -96.591796875){
        tz = TimeZone.getTimeZone("CST");
    }
    else if (lng <= -96.591796875 && lng > -113.90625){
        tz = TimeZone.getTimeZone("MST");
    }
    else if (lng <= -113.90625){
        tz = TimeZone.getTimeZone("PST");
    }
    logger.info(tz)
    return tz
}

public Date convertToNewTimeZone(Date date, TimeZone oldTimeZone, TimeZone newTimeZone){
    // oldTimeZone.rawOffset returns the difference (in milliseconds) between time in that timezone and GMT
    // date.time returns the milliseconds of the date
    long oldDateinMilliSeconds = date.time - oldTimeZone.rawOffset
    Date dateInGMT = new Date(oldDateinMilliSeconds)
    long convertedDateInMilliSeconds = dateInGMT.time + newTimeZone.rawOffset
    Date convertedDate = new Date(convertedDateInMilliSeconds)
    return convertedDate
}
View full tip
User Load Testing in ThingWorx Java Client Tutorial
Written by Tori Firewind, IoT EDC

Introduction
As stated in previous posts, user load testing is a critical component of ensuring a ThingWorx solution is Enterprise-ready. Even a sturdy new feature that seems to function well in development can run into issues once larger loads are thrown into the mix. That's why no piece of code should be considered production-ready until it has undergone not just unit and integration testing (detailed in our Comprehensive DevOps Guide), but also load testing that ensures a positive user experience and an adequately sized server to facilitate the user load.

The EDC has spent quite a few posts detailing the process of setting up an accurate, real-world testing suite using JMeter for ThingWorx. In this piece, we detail an alternative approach that makes use of the Java Spring Boot Framework to call REST requests against the ThingWorx server and simulate the user load. This Java Client tutorial produces a very immature user load client, one which would still take a lot of development to function as flexibly as the JMeter tutorial counterpart. For Java developers, however, this is still a very attractive approach; it allows for more custom, robust testing suites that come only as an investment made in a solid testing tool.

For someone experienced in Java, the risk is smaller of overlooking some aspect of simulation that JMeter may have handled automatically. For example, JMeter automatically creates more than one HTTP session, and it's much easier to implement randomized user logins instead of one account. The Java Client could do it with some extra work (not demonstrated here), but it uses just the Administrator login by default for a quick and dirty sort of load test, one focused less on the customer experience and more on server and database performance under the strain of the user requests (the method used in our sizing guidance, for instance, to see if a server is sized correctly).

The amount of time required to develop a Java Client isn't so bad for a Java developer, and when compared with learning the JMeter Framework, might be a better investment. A tool like this can handle a greater number of threads on a single testing VM; JMeter caps out around 250 threads per client on an 8 GB VM (under ideal conditions), while a Java Client can have thousands of threads easily. Likewise, a Java Client has less memory overhead than JMeter, less concern for garbage collection, and less likelihood that influence from heap memory management will affect the test results.

However, remember that everything in a Java Client has to be built from scratch and maintained over time. That means that beyond the basic tutorial here, there needs to be some kind of metrics gathering and analysis tool implemented (JMeter has built-in reporting tools), the calls need to be randomized rather than made at set intervals like they are here (which is not a very accurate representation of user load compared to a real-world scenario), and the number of users accessing the system at once should probably vary over time (to resemble peak usage hours). JMeter has a recording tool to ensure all the necessary REST requests to simulate a mashup load are made, so great care has to be taken to ensure all of the necessary REST calls for a mashup are made by the Java Client if a true simulation is called for by that approach.
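To make the approach more concrete before the conclusion, here is a minimal sketch of what the core of such a client might look like. It is written with the JDK's built-in HttpClient rather than Spring Boot so that it stays self-contained, and every name in it (server URL, app key, Thing and service names) is an illustrative placeholder rather than part of the original tutorial; a real client along the lines described above would add randomized logins and think times, multiple HTTP sessions, and proper metrics collection.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Minimal load-test sketch: N threads repeatedly invoke one ThingWorx REST service.
// BASE_URL, APP_KEY, and the Thing/service names are placeholders for illustration only.
public class SimpleLoadClient {

    private static final String BASE_URL = "https://twx.example.com/Thingworx";
    private static final String APP_KEY  = "replace-with-app-key";
    private static final int THREADS     = 100;    // simulated users
    private static final int REQUESTS    = 50;     // requests per simulated user
    private static final long PAUSE_MS   = 2000;   // fixed think time; randomize for realism

    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(30))
                .build();

        ExecutorService pool = Executors.newFixedThreadPool(THREADS);
        for (int user = 0; user < THREADS; user++) {
            pool.submit(() -> {
                for (int i = 0; i < REQUESTS; i++) {
                    try {
                        // Example: invoke a service on a Thing; the names here are hypothetical.
                        HttpRequest req = HttpRequest.newBuilder()
                                .uri(URI.create(BASE_URL + "/Things/ExampleThing/Services/GetDashboardData"))
                                .header("appKey", APP_KEY)
                                .header("Content-Type", "application/json")
                                .header("Accept", "application/json")
                                .POST(HttpRequest.BodyPublishers.ofString("{}"))
                                .build();

                        long start = System.nanoTime();
                        HttpResponse<String> resp = http.send(req, HttpResponse.BodyHandlers.ofString());
                        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

                        // A real client would collect these metrics for later analysis instead of printing.
                        System.out.println(resp.statusCode() + " in " + elapsedMs + " ms");

                        Thread.sleep(PAUSE_MS);
                    } catch (Exception e) {
                        System.err.println("Request failed: " + e.getMessage());
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }
}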
Java Client Tutorial

Conclusion
Neither a Java Client nor a JMeter testing suite is inherently better than the other, and both have their place within PTC's various testing processes. The best test of all is to stand up any sort of user load testing client, from either of these approaches, at the same time as the UAT or QA user experience testing. QA testers who load and click about on mashups in true user fashion can then see most accurately how the mashups will perform and what the users will experience in the Enterprise-ready, production application once the changes go out.
View full tip
The DPM User Experience
Written by Tori Firewind, IoT EDC Team

As discussed in a previous post, DPM is a tool designed to be beneficial at all levels of a company, from the operators monitoring automated data on production events from the factory machines themselves, to the production supervisors who need to establish, task out, and track machine maintenance and improvement measures. DPM also engages continuous improvement and plant leadership by providing a standardized way to monitor performance that ultimately rolls up to the executive level. The end users of DPM are therefore diverse both in how they access DPM and in how they make use of its various features.

One of the perks of building DPM on top of the ThingWorx Foundation is that many of the webpages (called "mashups") within ThingWorx are already responsive, and any which aren't responsive OOTB can be modified and custom designed for different size viewing screens to ensure that, if necessary, end users can access DPM from a variety of locations and devices. Most of the time, end users will be accessing mashups from hard-wired dashboards mounted on the actual devices, or from wireless laptops which have standard size screens with standard resolutions. For use cases involving phones or tablets, however, it may be necessary to see how DPM will perform across a variety of bandwidth and latency conditions. Often, cellular or satellite connection is a must to facilitate field team cooperation, and 5G networks often result in worsened performance.

So, to demonstrate the influence of bandwidth and latency on the responsiveness of DPM, the Production Dashboard was loaded in the Google Chrome browser repeatedly under varying conditions. This dashboard is the webpage most operators and field users would access to log event information and production details (so it is widely used by end users). This provides a sort of benchmark of the DPM solution, something which indicates what can be expected and tells us a few things about how DPM should be deployed and configured.

Latency was introduced by hosting the servers involved in the test in different regions (all Azure cloud hosted servers: one in US East, one in US West, and one in Japan East). Bandwidth limitations were introduced using a tool on the PC, with either no bandwidth limit or 4 megabits/second.

Browser caching was turned on and off as well, to simulate the difference between new and return users; new users would not have the webpage cached, so their load times are expected to be longer. Tomcat compression was also configured in half of the runs to demonstrate the importance of compression for optimal performance.

Each of these 24 scenarios was then tested 10 times from each location, and the actual data can be found in the attached benchmark document (a working solution benchmark, which is not designed to be referenced directly, as matters of infrastructure may influence the exact performance of the solution). Even with bandwidth limits, every region sees better performance for return users versus new users, which may be important to note. However, because DPM field users most commonly access DPM often, the return user time is a better indicator of adoption, and those numbers look great in our simulations. Notice the top line, which shows the very worst of mobile performance: what happens over bandwidth-limited networks when Tomcat Compression is not enabled.
Load times vary only slightly on regular networks when Tomcat Compression is enabled, but enabling it vastly improves performance across regions and on mobile networks, so it is highly recommended (instructions on how to enable it are below).

Key Takeaways
Latency and bandwidth impact DPM performance in exactly the way one would expect of a web application. While any DPM server can be accessed from any region, regions with more latency will experience delays proportional to the amount of latency. In the chart here, the three regions are each represented three times by three different colors (different from the charts above):
The three different shades of each color represent the different regions.
Green represents the optimal configuration settings (Tomcat compression enabled, caching turned on) for returning users with bandwidth limitations (i.e. mobile networks like 5G).
Blue shows first-time page visitors with no bandwidth limitations.
Purple shows first-time visitors that do have bandwidth limitations.
The uncompressed first-time load for mobile users (those with bandwidth limitations imposed) within the same region is also given to demonstrate the importance of enabling Tomcat Compression (load times only get worse without compression the farther the region).
Notice how the green series has lower load times across the board than the blue one, meaning that return users, even with bandwidth limitations, have better performance across every region than new users. Also notice how the gap is larger between lighter colors and darker colors, where the darker the color, the farther the region from the DPM servers. This indicates that network latency has a more significant influence on performance than bandwidth, with only longer running transactions like file uploads seeing a significant performance hit when on a network with bandwidth limitations.

Find out how to enable Tomcat compression and review the full solution benchmark in the document attached.
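As a point of reference, HTTP compression is typically enabled on the existing HTTP(S) Connector in Tomcat's conf/server.xml. The attribute names below are the standard ones for Tomcat 9, but the port, protocol, minimum size, and MIME type list are example values only; consult the attached document and the official DPM/ThingWorx documentation for the settings recommended for your environment.

<!-- In conf/server.xml, add the compression attributes to the existing Connector;
     the other attributes (port, protocol, SSL settings, etc.) stay as they are. -->
<Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol"
           compression="on"
           compressionMinSize="2048"
           compressibleMimeType="text/html,text/css,application/javascript,application/json" />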
View full tip
In a recent post, I gave an overview of the types of Building Blocks that are available with the ThingWorx Platform. As a reminder, Building Blocks are a collection of entities packaged together for modular software development. They are intended to be reusable, repeatable, and scalable, and they are the fastest way to either build your own solution or customize a pre-made PTC solution, like ThingWorx Digital Performance Management. There are four types of Building Blocks we will talk about for the development of IIoT applications and solutions on the ThingWorx platform: Connectors, Domain Models, Business Logic, and UI. In this post, we are going to do a deep dive on Connectors, which improve application performance and the transfer of data from disparate devices and systems.

What does a Connector look like in ThingWorx?
All ThingWorx Building Blocks follow the same naming convention of CompanyName.BuildingBlockName, so any PTC-created Connectors will appear as PTC.Connector. Connectors in ThingWorx are external integrations that can come in through an industrial system, like an MES that could be connected with ThingWorx Kepware, or a business system, like a CRM that could be connected via ThingWorx Flow or REST APIs. It could also be a connection to an external database. These are your data connections, so their structure will be somewhat dependent upon your database and assets.

What does a Connector look like in use in a PTC Solution?
If we use the example of Digital Performance Management (DPM), one of the connectors we use is a Database Manager (ptc.DBConnection.Manager). It pulls information from the database that is being used for the implementation of DPM. If you think of Building Blocks like bricks, Connectors are the foundation. In this case, the Database Manager sits at the bottom layer of bricks to connect the asset data to the next layer of bricks (Domain Models, which I will cover in the next post) and allows you to pull any information you need.

How can you use a Connector in your solutions?
As mentioned above, a Connector is the foundation building block for most solutions. It is what aggregates and transfers your solution-related data into the ThingWorx platform for use. The Connectors we currently have available on the ThingWorx platform will "talk" to your database and the other building blocks you use in your solutions, so for your own solutions, a Connector will be the entry point of your data into your solution.

How can you adapt a Connector for your own solutions?
Because all PTC building blocks are built with JavaScript in the ThingWorx Mashup Builder, you can leverage existing Connectors on the ThingWorx platform and extend these same Connectors for your unique use case, or build your own. You can view the code we used to create Connectors, so if they don't pull data into your solution the way you want it to flow, you can override the Connector's functions with your own capabilities.

The ThingWorx PM team is here to listen to your thoughts and feedback, so tell us: What questions do you have about Connectors and how they can improve your experience building solutions in the ThingWorx platform? Or, if you are waiting for the full deep dive into Building Blocks, keep an eye out for our next post on Domain Models, where we will cover the next "layer up" of the types of Building Blocks for use in ThingWorx.

Stay Connected,
Rachel
View full tip
The long-awaited manufacturing solution, ThingWorx Digital Performance Management (DPM), has arrived! Announced at PTC's Manufacturing Live event, DPM provides key use cases around overall equipment effectiveness and real-time performance monitoring, while delivering insights with analytics and automated bottleneck identification tools. DPM gives customers clear insight into where and what to fix to drive efficiencies. Composed of modular building blocks with a foundation on the ThingWorx platform, DPM is easily configurable and customizable for closed-loop problem solving that drives productivity.

Let's take a deeper look into what DPM is and how you can implement it to ensure your investment in the ThingWorx platform and digital transformation delivers business impact.

Monitor in real-time with Production Dashboard
The Production Dashboard allows for automated or manual data entry of reason codes with a simple interface for limited disruption. Rather than providing front-line workers with the typical, difficult to understand, percentage-based KPIs, Production Dashboard standardizes all losses, so operators can proactively resolve issues during production. You can configure this dashboard to collect granular data and allow opportunities for continuous improvement in process tracking.

Focus with Bottleneck Analysis
Bottleneck Analysis automatically identifies bottlenecks across the manufacturing process. Identifying bottlenecks can help you prioritize the highest-impact opportunities in the business process. This saves you having to manually identify and analyze potential issues and frees you up to work on other projects.

Prioritize with Time Loss Waterfall and Analyze with Loss Reason Pareto
Monitor and analyze performance with data visualizations that help you pinpoint root causes and suggest improvements. Bring together your siloed data into one system and create a standard for how performance is measured and reported.

Improve with Action Tracker
Action Tracker allows you to create continuous improvement actions tied to real production losses, to ensure your actions are having positive impact and return. Create a digital workspace for teams to collaborate and learn from each other. Plus, you can track the improvements delivered through each individual action, so you can drill down and create transparency of work being done.

Confirm value delivered with Scorecard (Available in Later Versions)
With the Scorecard feature, you can leverage a standard scorecard for enterprise-wide KPIs to summarize factory health and compare similar factory operations. Use the scorecard to create trending and reporting that can be filtered based on the audience you are presenting data to. The scorecard gives you a consistent view that measures performance across the network and drives visibility and accountability across your business.

How do you plan to leverage DPM or the building blocks that make it up? We'd love to hear your thoughts on the first manufacturing solution from PTC.

Stay connected,
Rachel
View full tip
ThingWorx 9.2 is here! Deploy an entire solution and all its dependencies in one click with Solution Central's one-click deploy, garner deeper analytic insight with our new waterfall charts, and manage and authenticate users more seamlessly with an Azure Active Directory integration. Discover these features and more in my 9.2 preview post here!

Review our release notes here and be sure to upgrade to 9.2!

Stay connected,
Kaya
View full tip
Hi, I have attached a Postman collection that can be used as a template and modified. Steps to import the collection into Postman:
1. In your Postman window, click Import.
2. Once you have clicked Import, choose your file.
3. The collection is now visible on the left side of the window.
View full tip
Hi everyone,

We're back! And we've got exciting news about Solution Central! As tempted as I am to share the news myself, I thought it only fitting to have Janie Pascoe, Product Manager of Solution Central, share the news with you. You may remember Janie from this post on ThingWorx's OPCUA functionality or this one on what it's like to transition from a ThingWorx developer to a ThingWorx product manager. Janie, welcome back! The floor is yours.

Janie, PM of Solution Central: Thank you so much, Kaija. I wanted to bring your readers up to speed on some of the latest and greatest in Solution Central. If you haven't logged in to the Solution Central portal in a while, I highly recommend you do so, because you will immediately be notified of what's new in the application, as you can see here in our newly added what's-new popup blurb!

But in the spirit of giving you even more detail, let me tell you a bit more about what is new in Solution Central 3.0. This release is full of more intelligence than ever before! You can now not only deploy the solutions themselves from Solution Central, but also deploy all of a solution's dependencies with a single click of a button. So, instead of having to deploy each dependency separately and in a specific order, Solution Central is now smart enough to understand the dependencies and deploy them for you. We have also added enhancements to our Solution Detail panel to make it even more intuitive and easy to find what you're looking for. And when it comes to cleanup activities, we have you covered: Solution Central can now forget an instance when it's no longer needed, so there is no more questioning whether an instance is in active use or not.

Kaya: Thanks, Janie. Exciting stuff! Readers, you can learn all about these new features and more in our Release Notes and Help Center documentation. Be sure to try out the latest functionality!

Any questions, comments, or ideas for enhancements to Solution Central can be sent directly to jpascoe@ptc.com.

Stay on the lookout for our next release!

As always, stay connected,
Kaya
View full tip
ThingWorx community,

As part of a continuous re-evaluation of our third-party software requirements, we regularly add and remove support for versions of operating systems, persistence providers, and web browsers.

On this note, we are planning to end support for Ubuntu 18.04 beginning with the ThingWorx release targeted for mid-CY2022. Per its release cycle, Ubuntu will move version 18.04 into Extended Security Maintenance at this point, meaning it will no longer receive regular maintenance updates.

PTC will continue to support Ubuntu 20.04 for the immediate future and will consider supporting Ubuntu 22.04 once it becomes Generally Available (GA).

Please let me know if you have any questions or concerns.

Regards,
Walter Haydock
ThingWorx Product Management
View full tip
ThingWorx community,

As part of a continuous re-evaluation of our third-party software requirements, we regularly add and remove support for versions of operating systems, persistence providers, and web browsers.

On this note, for the ThingWorx release after 9.2 (currently targeted for the end of CY 2021), we are planning to end support for Windows Server 2016. Already five years old, this product has generally been replaced by Windows Server 2019 in common usage.

As per Microsoft's published lifecycle, this version will be going out of "Mainstream Support" approximately one year after the aforementioned ThingWorx release. PTC will continue to support Windows Server 2019 for the immediate future, and will consider supporting Windows Server 2022 once it becomes Generally Available (GA).

Please let me know if you have any questions or concerns.

Regards,
Walter Haydock
ThingWorx Product Management
View full tip
Hello!

We will host a live Expert Session, "Understanding ThingWorx Navigate Licensing", on February 11th, 10h EST.

Please find below the description of the expert session and the registration link.

Expert Session: Understanding ThingWorx Navigate Licensing
Date and Time: February 11th, 10h EST
Duration: 1 hour
Host: Christoph Braeuchle, Emily Larkin and Steve Scheib - ThingWorx Navigate PM team
Registration Here: https://www.ptc.com/en/resources/plm/webcast/understanding-thingworx-navigate-licensing

Description: ThingWorx Navigate licensing gives many users a way to access PLM data and functionality at an attractive price tag when they don't need to use the full power of Windchill functionality. This licensing and packaging have changed over the past 1.5 years, and this is the perfect time to share an update on available license types and answer essential questions like:
Which license types do my end users really need?
What capabilities are provided by each license type?
What are the best ways to understand and control license usage in my company?
Don't miss this session if you want to understand how ThingWorx Navigate licensing works and which options are available.

Existing recorded sessions can be found on the support portal using the keyword 'Expert Sessions'. You can also suggest topics for upcoming sessions using this small form.

Here are some recorded sessions that might be of interest. You can find recordings for the full library of webinars using the keyword 'Expert Sessions' in the PTC support portal search.

Navigate - SSL & Windchill Authentication: This Expert Session will take you through a step-by-step approach for configuring authentication between Windchill and Navigate with SSL. Recording Link

Top 5 items to check for Thingworx Performance Troubleshooting: How to troubleshoot performance issues in a Thingworx environment? Here we cover the top 5 investigation steps that will help you understand the source of your environment issues and allow better communication with PTC Technical Support. Recording Link

Thingworx 9.0 Component Based App Development: Following the series of new capabilities released with Navigate 9.0, this session will focus on the details of Navigate Component Based app development and how to leverage this for your use cases. Recording Link
View full tip
From the documentation, a Solr node is only needed when using DataTables. If the Solr configuration field is left blank, the extension will request an input.

Q: Are Solr nodes required or optional in order to use DSE with ThingWorx (in the hypothetical case of not using DataTables)?
A: As far as ThingWorx functionality is concerned, a Solr node is not required. However, the extension does try to validate the configuration, and hence, at this point, a Solr node is mandatory to properly configure the extension. This will be fixed in the future.

Q: When there are two entries for addresses, one for a Cassandra cluster and one for a Solr cluster, are they the same cluster or different clusters?
A: They could be either. There can be one machine with Solr enabled, using the same IP for both Cassandra and Solr. However, this is not recommended for production workloads. It would be perfectly fine for development or test environments.

In a cluster, in order to have both Solr and Cassandra nodes, use of datacenters is required. Even if a datacenter isn't explicitly defined, a default install of DSE will create two datacenters called "Cassandra" and "Solr", which is what would be seen in the default "Cassandra Keyspace Settings" property in the configuration. If the user does create datacenters with specific names, then they will have to update the "Cassandra Keyspace Settings" property to reflect the same, i.e.:
replication = {'class':'NetworkTopologyStrategy', 'Cassandra':1, 'Solr':1}
The number next to each datacenter name (1 being the default) represents the replication factor (https://docs.datastax.com/en/cql/3.1/cql/cql_using/update_ks_rf_t.html), depending on the number of nodes in each datacenter.
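For reference, the keyspace replication can be applied or changed with a CQL ALTER KEYSPACE statement. The keyspace name below ("thingworx") is only a placeholder, and the datacenter names and replication factors must match the actual DSE topology:

ALTER KEYSPACE thingworx
WITH replication = {'class': 'NetworkTopologyStrategy', 'Cassandra': 1, 'Solr': 1};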
View full tip
The purpose of this post is to provide some ideas to help diagnose issues in a mashup. First, check whether the problem occurs at mashup runtime or in design (edit) mode.

1. Runtime:
Is the issue visual, or related to improper service execution? (For example, "my data is displaying correctly but the styling or formatting is wrong" is visual; "my data is displayed incorrectly but the styling and formatting is right" is improper service execution.)
For visual/styling/formatting issues: return to the edit mode of the mashup and ensure the proper style definitions were set up. Ensure the logic behind the connections is correct. Check the configuration of the widget(s) involved. Were there any changes made to the styles after the mashup was saved and run the first time? If so, try clearing the browser cache and reconnecting the dependent entity with the style involved in the issue. If the problem persists, contact technical support to raise a cosmetic defect ticket.
For improper service execution: return to Composer and use the "Test" button on the service to execute it and validate the output. If the outputs are incorrect, check the code inside the service. If the outputs come out as expected, try reconnecting the service in the mashup design mode and clearing the browser cache. If the issue is related to data from the user database not displaying, ensure database connectivity and proper credentials. If the problem persists, reach out to technical support to raise a defect.

2. Design/edit mode:
If the widgets are not displaying correctly or not appearing in the list: check that the extensions involved appear under the Extension Manager. Re-upload if needed and restart Composer. If the Google Maps widget is not showing in the mashup the first time it is used, allow up to 2 hours for it to load and cache. Submit a ticket to technical support, including screenshots of the issue.
For other styling, formatting, or improper display issues at design time: document the observation and supply screenshots to the technical support team for investigation.

Note: See Tools and approaches used in troubleshooting Twx issues.
View full tip
For a recent project, I was needing to find all of the children in a Network Hierarchy of a particular template type... so I put together a little script that I thought I'd share. Maybe this will be useful to others as well.

In my situation, this script lived in the Location template. This was useful so that I could find all the Sensor Things under any particular node, no matter how deep they are.

For example, given a network like this:
Location 1
    Sensor 1
    Location 1A
        Sensor 2
        Sensor 3
        Location 1AA
            Sensor 4
    Location 1B
        Sensor 5

If you run this service in Location 1, you'll get an InfoTable with these Things: Sensor 1, Sensor 2, Sensor 3, Sensor 4, Sensor 5
From Location 1A: Sensor 2, Sensor 3, Sensor 4
From Location 1AA: Sensor 4
From Location 1B: Sensor 5

For this service, these are the inputs/outputs:
Inputs: none
Output: InfoTable of type NetworkConnection

// CreateInfoTableFromDataShape(infoTableName:STRING("InfoTable"), dataShapeName:STRING):INFOTABLE(AlertSummary)
let result = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape({
    infoTableName : "InfoTable",
    dataShapeName : "NetworkConnection"
});

// since the hierarchy could contain locations or sensors, need to recursively loop down to get all the sensors
function findChildrenSensors(thingName) {
    let childrenThings = Networks["Hierarchy_NW"].GetChildConnections({
        name: thingName /* STRING */
    });
    for each (var row in childrenThings.rows) {
        // row.to has the name of the child Thing
        if (Things[row.to].IsDerivedFromTemplate({thingTemplateName: "Location_TT"})) {
            findChildrenSensors(row.to);
        } else if (Things[row.to].IsDerivedFromTemplate({thingTemplateName: "Sensor_TT"})) {
            result.AddRow(row);
        }
    }
}

findChildrenSensors(me.name);
View full tip
Here are some tips on how to submit a ticket to the ThingWorx technical support team and what to expect. Providing this typical minimum information is always a good practice, as it lessens the questions and unnecessary back-and-forth communication prior to the actual investigation of the problem. Open a new ticket for each separate issue; we do track every technical issue that comes in.

If the ticket is being submitted for troubleshooting:
Provide the versions of ThingWorx, Tomcat, and Java, plus the operating system and specs.
Attach the list of the extensions used.
Include a detailed description of the problem; if applicable, include screenshots.
Evaluate the business impact caused by the issue.
Optional: state the preferred method of contact, whether phone or email, and time if applicable.
Expect a support engineer (SE) to establish first contact via email, letting you know of the case ownership and further investigation.

If the ticket is being submitted for an enhancement request or improvement:
Provide a clear description of the feature, use case(s), expectations, and any additional details that might play a role in prioritizing the request.
Once the ticket has been created, it will be assigned to a support engineer (SE), who will then place a request (Jira) to R&D and provide the Jira number to the point of contact in the support ticket.
Enhancement requests and improvements are always considered; however, delivery is not guaranteed.
Once an SE provides the case contact with the Jira number, the support ticket will be closed, and the point of contact may reach out to the SE at any time to check on the status of the Jira.

If the ticket is being submitted for a bug or a defect:
Provide the versions of ThingWorx, Tomcat, and Java, plus the operating system and specs.
Include a clear description of the problem, the expected result, and the current result.
Evaluate the business impact.
If reproducible, include the steps.
Optional: include the entities and data (.xml, .json if applicable) to demonstrate the issue.
Once the ticket has been created, it will be assigned to a support engineer (SE), who will then place a request (Jira) to R&D and provide the Jira number to the point of contact in the support ticket (assuming no further information is required).
R&D will provide an estimated release after the issue is evaluated. Upon sending the ETA to the case contact, the SE will close the support ticket.
View full tip
Generating and Reviewing JMeter Results

Overview
The 4th in a series of articles on load testing with JMeter, this one covers pushing the limits of a test to see how much the application can handle, as well as generating and analyzing reports once the testing completes. This article rounds off the basics of JMeter, such that anyone should be able to perform enterprise-level load testing after reviewing the content here.

Multiple criteria can be used to evaluate results, including:
response time (as monitored both by JMeter, and by some other tool on the system side)
throughput
number of errors
resource saturation
CPU, memory, disk, and network utilization
Depending on use case, some of these may be considered more important than others. For instance, some customers don't care if users wait a while for results to appear on the page (response time), because they set their users' expectations and mitigate the experience with well-designed loading graphics. With response times secondary, the real issues center around data loss or system outages, with resource utilization and number of errors becoming the more important indicators of system health. Request and database timeout errors are more important indicators, as they occur most often when resources are saturated and there is data loss.

It is typical for many customers to find preventing data loss and/or promoting data integrity to be more important than preventing long response times. Consider which of these factors is most important to your use case as you determine what kind of information to gather and review in your reports.

How to Create Client-Side Reports in JMeter
Creating reports for the client-side data is very simple using JMeter, both from the command line and within the UI (as shown in the tutorial below). These reports have graphical displays of response times, information about the number and type of response errors, and other criteria of performance used to gauge the success or failure of a load test. Follow these steps to generate an index file, which, when opened in your browser of choice, will show all of the relevant JMeter data.

Tutorial:
1. Create an empty directory in which to store reports.
2. Start the JMeter test with these options, or run these commands after the fact, to generate the HTML report.
Once the test completes, use: jmeter -g <outputfile.jtl/csv> -o <path to output folder for html report>
To start a test with the correct command for report generation, use: jmeter -n -t <test JMX file> -l <outputfile.jtl/csv> -e -o <Path to output folder>
Running the above commands will generate these files:
3. When the test is complete, the many JMeter client consoles will look like this:
Go ahead and close the windows to terminate once they are finished. Optionally, you can run multiple tests sequentially using the same jmeter-server windows.
4. Click on the "index.html" file to open the results viewing window:

At any time, modify the settings of this "HTML dashboard" using the details from the JMeter user manual. This citation describes many options for these dashboards, as well as recommendations on how to group and format the results in ways which best convey the success or failure of the test, based on the custom requirements of the application and how granular the view needs to be.
Most of the time, the default settings work fine, showing something similar to this:
The charts aren't labeled very well here, so click on the Response Times submenu. This page may take some time to render if there is a lot of data.
Next, scroll down to see all the requests that occurred and sort them by how long they took to complete. Anything which took over 5 seconds (or more, depending on what is expected) should be investigated as part of the post-test analysis. Does something need to be tuned or optimized? This is how to tell which request is holding things up for your customers.
There is also a chart that shows the overview, grouping the response times by how long they took to demonstrate the health of the system more concretely. Typically, the bars look something like this:
This represents expected behavior, where most of the requests are quite fast, and then there are a few that had errors or took a bit longer. This is pretty typical for web activity.
You can also generate the report through the main JMeter client: give it a results file and an output directory to generate the same index file.
There are log files in each of the JMeter client directories called "jmeter-server.log". These files may show the wrong timezone, but the elapsed times are correct, and they will show when the JMeter clients started, how many threads they ran, which servers were which, and whether there were any errors. Not all errors will mean a failed test, so review anything that appears and determine what is expected. Consider designing a batch script to gather all of these logs together, or even analyze them automatically to extract only relevant information.

How to Create Server-Side Results in DynaTrace
Collecting data from the environment, including CPU usage, memory utilization (used vs. total), garbage collection times, and other metrics of system health on the server, will require the use of an external tool. PTC's official tool for this is called DynaTrace (PTC System Monitor), shown here. PTC offers a runtime license for DynaTrace to anyone who buys certain products, including Kepware Server, ThingWorx Foundation and Navigate, Windchill, Integrity, and more. Read more information about DevOps on the PTC Community, and stay tuned for more articles on the subject to come from the EDC.

Another option would be something like Telegraf and Grafana (from the previous blog post), which facilitate the option to create dashboards around the data output specific to the needs of the application, which can still be monitored even once the application goes live. It can certainly be worth it to use such a tool for monitoring the server side, but the setup takes more time. Likewise, many VMs have monitoring faculties for CPU usage and memory utilization built in, but DynaTrace also has visualization, consolidation of system elements, and other features that make it easy to use right out of the box. See the screenshots below for some examples of how to use DynaTrace, and be sure to review PTC's full documentation here.

The example shown here is a ThingWorx Navigate system, with Windchill and ThingWorx Foundation set up side-by-side. This chart shows the overall response times of the server side of the system. JMeter collects the statistics on what the client looks like, while another tool is required to collect the server-side metrics like CPU usage and memory utilization, things that indicate the health of the VM or computer hosting the clients.
An older version of DynaTrace is depicted here, available for free for all ThingWorx customers from the PTC Downloads Site (under various product listings).

In DynaTrace, you can build new dashboards using PurePaths. You can also look at the response times for each service, but be sure to change the response limit to a large number so that all the results are returned; changing the response limit to a large number ensures all of the results show in the PurePaths dashboard.

Highlighted here in DynaTrace is the longest service that ran, which in this case took 95 seconds to fully respond. More specific analysis of this service can now begin. Perhaps it needs to be tuned, or otherwise optimized to handle the number of threads, i.e. the number of users. Perhaps the system needs more resources, or the VM isn't large enough for the test. Perhaps more JMeter clients and system resources are required. Something will explain this long response time, and that will inform as to what work might still remain before this system can scale up to the enterprise level.

How to Use the Test Results
Load testing often means scaling the test up a little more each time until the system eventually breaks, or the target performance is reached. Within JMeter, this won't mean increasing the overall number of threads per one JMeter client, but instead scaling horizontally to other JMeter clients (as covered in the previous blog post). Now that the remote or distributed clients are configured and the test running, how do we know when the test is beginning to fail?

It turns out that this answer is not a simple one. Which results are considered desirable will vary from one customer to the next based on many factors, and analyzing the test results is a massive topic all on its own. However, there is one thing that any customer would care to review, and that is the response time overview chart found within the JMeter reports. This chart can be used to compare the performance of the majority of threads against a baseline, indicating the point at which the test begins to fail, i.e. the point at which the limits of the system are reached.

The easiest way to determine a good standard response time for a load test, a baseline, is to start with a single JMeter client and record the response times for just 1-5 threads. You can record the response times for individual requests, particularly queries and other services with expected long response times, or the average response times across all requests or groups of requests, if the performance of some mashups is more important than others.

This approach is better than relying on the response times seen in a browser, because HTML pages load differently when rendered in a browser, with differing graphical resource requirements than what is requested in JMeter. Note that some customers will also manually record response times within a separate browser-based test scenario during load testing, either as a sanity check or as part of their overall benchmarking, in order to further validate the scalability of the application; but this wouldn't involve JMeter, given that browsers load things differently and cross-comparison is a bad idea.

Once the baseline response times are established, start increasing the thread counts across the many JMeter clients until you see the response times go up on average.
PTC's standard criteria for load testing are exceeded when the average response times roughly double, or when the system seems overwhelmed by the user load on the server side (which is what to look out for in DynaTrace or the external system monitor). At this point, the application is said to have reached a bottleneck, which could be a simple tuning problem, or it could be saturated by resource requirements. Either way, the bottleneck is proof that the system can't take any more threads without users beginning to notice and the response times approaching an unreasonable delay.

Other criteria can be used as well, say if any one thread takes more than 5 seconds to respond. Also ensure there are no unexpected errors, as gateway errors represent failed tests too. Sometimes there will be errors even when the test is successful, though, so consider monitoring the error percentage, a column in the Summary Report tab of JMeter, to see what is normal. The throughput column may also be something to monitor. Many watch for increases in throughput as the thread count increases to ensure there is no degradation in performance (which may indicate hardware or sizing constraints).

The Summary Report will look something like this, with thread group results from all of the clients appearing side by side, differentiated from each other by the unique port:

Conclusions
Generating and reviewing reports within JMeter is straightforward and easily customizable. Be sure to also monitor the system itself using an external tool like DynaTrace, PTC's official System Monitor, which has a lot of value considering how easy it is to use out of the box. If the system looks healthy on the server side and the response times are within an acceptable range on the client side, then the application is ready for enterprise use. Be sure to generate a baseline for response times within JMeter, remembering that browsers have different loading processes than JMeter, and not to cross-compare.

This article constitutes the end of the basics. The final article to come will talk about more advanced test design features and best practices, so stay tuned!
View full tip
Distributed Testing with JMeter

Overview
Running JMeter at the scale required by most customers demands additional considerations beyond those discussed in the previous two articles. At scale, a test may need to simulate thousands of users, which will require more than just one JMeter client be set up on one or many hosts, as shown in this 3rd JMeter article, in a tutorial on Distributed Testing.

Distributed Testing
Remote testing configuration in which the main JMeter client is located at one IP address, controlling the rest as they step through their own copies of the JMeter tests, based on their own unique data files as necessary, to simulate a user load across a network, a series of regions, or simply across many machines if limited by the size of the physical hardware. [JMeter link for this image in text body below]

One key aspect of a proper JMeter load test is distributed or remote testing, i.e. making use of more than one JMeter client at a time to simulate the user load on the Application server. There are many reasons to make use of a network of clients such as this, like mimicking cross-region user access to the Foundation server, simulating different levels of latency for different users, and increasing the overall number of users which can contribute to the load test, while minimizing the performance cost of hosting that many threads on any single server.

A single JMeter client has a practical limit of 150-250 threads across all groups and requires about 1 CPU and 8 GB of RAM. After this point, the amount of garbage collection and other processing there is for each client to do is substantial. As the client processes its own data and sends requests to the Application server at the same time, there are diminishing returns, and the responses begin to take longer (or errors start occurring) simply because of resource starvation within the client process rather than on the Application server. Therefore, distributed testing is required for most customers doing larger load tests using JMeter. Many applications will have more than a few hundred users and/or will have users accessing the system from a variety of regions and networks, each of which could have significantly different network latency. So, in order to work with the limitations of the JMeter executable and address regional concerns, distributed or remote testing is typically required for almost all of PTC's customers who scale test with JMeter.

With a simple (monolithic) distributed test, all of the JMeter clients are located on the same host and share an IP address, but each must be configured with a unique RMI port to connect to the controlling process. If these are located on a VM, then the resource specifications can merely be increased and the VM sized larger as necessary to ensure the network of JMeter clients runs as expected. Each JMeter client requires around 8 GB for its heap size and 1 CPU (with some additional resources for the host operating system). Multi-hosted testing becomes the required option when limited by physical hardware (or a relatively small VM hardware host). If there are only 4-core, 32-GB machines, then plan for a machine per every 3 JMeter clients. If simulating thousands of users, this could mean half a dozen machines or more are required, which can still sometimes work out to be more cost efficient than one large, 256-GB VM hosted in the cloud. Using many hosts in physical locations can also simulate regions with different network characteristics.
A tutorial for distributed testing across one host is shown here. For more information, see the Apache web articles on each topic: Remote Testing and Distributed Testing Step by Step.

Tutorial: Set Up a Distributed Test on One Host
1. Copy the source directory for the whole JMeter project and rename it however many times as required. Here there are 22 JMeter clients side-by-side on a single, 256-GB VM (3000+ users).
2. Each directory (shown above) is identical, except that the "jmeter.properties" files (found in the bin directory of each project) have unique settings, namely the RMI port.
3. Each JMeter client must contain a copy of the same test scripts found on the main server.
4. In the "jmeter.properties" file for the main server, specify the IPs and ports for each remote/distributed client (under remote_hosts), as shown. In this image, the IPs are all the same, with just the port differing from client to client. Here only 4 clients are in use, with the rest commented out for future tests. This is how to scale up and test incrementally more users each time: just add another server to add another 150-250 users, until eventually the target number of users is reached or the server is saturated. These IPs will differ if doing a true remote test, with each being the server location of the JMeter client within the same network. The combination of IP address and port must still be unique in every case, and communication between the overall JMeter controller and the clients over the RMI ports needs to be allowed by the network/firewalls. (A minimal sketch of these property settings appears after these steps.)
5. Note that the number of users is set using the parameter under "Test Plan" which was set up last time. This value represents the number of users by specifying the number of threads per thread group, and it can remain the same for every client or vary accordingly, if for instance one region is smaller than another. The "Test Plan" parameters are shown here.
6. To optionally start all of the clients at once in preparation for test execution, create a basic batch or shell script which goes to the bin directory of each agent and calls the start command: "jmeter-server". In this image from a Windows JMeter host, only the first few agents are in use, but removing the "rem" to uncomment the other start command lines in this file would add more servers to be started. Note how the Java parameter for java.rmi.server.hostname must match the main JMeter client network configuration here for them to connect (see the Apache links above for more information). This will start each of them in their own CMD window, which, once closed, will terminate the JMeter client processes.
7. Parameters like rampUp time within the main test script will scale with the number of client processes. For example, 100 users and a 300-second rampUp with 4 clients results in 400 overall user threads that are all logged in after 300 seconds.
8. Once all clients are running, click Remote Start All to start the test across every server from a GUI (usually for debugging), or execute the test using the command line: jmeter -n -r -t <test.jmx> -l <results.jtl>
9. The main server sends the actions to the remote clients to run, so all the clients need is input parameters. For instance, a CSV file may exist in each directory which has different data from client to client, to create pseudo-random user loads and represent different kinds of user activity.
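As referenced in the steps above, here is a minimal sketch of the relevant jmeter.properties entries. The port numbers and IPs are example values only, and the SSL setting reflects a common lab configuration rather than a recommendation for every environment:

# In each JMeter client (agent) copy: bin/jmeter.properties
server_port=2010                 # unique RMI port per client on the same host
server.rmi.localport=2010
server.rmi.ssl.disable=true      # RMI-over-SSL is on by default in recent JMeter versions; often disabled in lab setups

# In the controlling (main) JMeter client: bin/jmeter.properties
remote_hosts=127.0.0.1:2010,127.0.0.1:2011,127.0.0.1:2012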
The CSV input file is different, and unique, in each of the client directories.

Conclusion
Here, we learned how to horizontally scale the load test, setting up more JMeter clients to facilitate larger, more complete user loads. We also discussed the difference between distributed and remote testing, and how the former is easier to set up and use, especially on VMs, but the latter might be better for simulating region differences and the impact of network latency. The latter will likely also be required if there are hardware constraints to consider, since each JMeter client needs about 8 GB for its heap, and another 8 GB, or a core or two of similar size, is needed per every 3 JMeter clients for the communication and processing of data. Stay tuned for the next article on generating and reviewing the results of the load tests.
View full tip
Hello!

We will host a live Expert Session, "Top 5 items to check for Thingworx Performance Troubleshooting", on Sept 3rd at 09:00 AM EST.

Please find below the description of the expert session as well as the link to register.

Expert Session: Top 5 items to check for Thingworx Performance Troubleshooting
Date and Time: Thursday, Sept 3rd, 2020, 09:00 AM EST
Duration: 1 hour
Description: How do you troubleshoot performance issues in a Thingworx environment? Here we will cover the top 5 investigation steps that will help you understand the source of your environment issues and allow better communication with PTC Technical Support.
Registration: here

Existing recorded sessions can be found on the support portal using the keyword 'Expert Sessions'.

You can also suggest topics for upcoming sessions using this small form.
View full tip