IoT Tips

Build a remote monitoring application with our developer toolkit for real-time insight into a simulated SMT assembly line.

Guide Concept

This project introduces methods for creating an IoT application with the goal of analyzing real-time information. Following the steps in this guide, you will create an IoT application with the ThingWorx Java SDK based on the functionality of an SMT assembly line. We will teach you how to use the ThingWorx Java SDK, ThingWorx Composer, and the ThingWorx Mashup Builder to connect and build a fully functional IoT application running numerous queues and "moving parts".

You'll learn how to:
- Use ThingWorx Composer to build an application that uses simulated data
- Track diagnostics and performance in real time

NOTE: The estimated time to complete this guide is 60 minutes.

Step 1: Completed Example

Download the completed files for this tutorial, attached here: ManagementApplication.zip.

In this tutorial, we walk through a real-world scenario for a Raspberry Pi assembly line. The ManagementApplication.zip file contains a completed example of an SMT application. Use this file to see a finished example, and return to it as a reference if you become stuck while creating your own fully fleshed-out application. Keep in mind that this download uses the exact Entity names used in this tutorial; if you would like to import this example and also create Entities on your own, change the names of the Entities you create. The download contains the following Java classes that support this scenario:

- Motherboard: Abstract representation of a Thing inheriting from a MotherboardTemplate
- AssemblyLine: Abstract representation of a Thing inheriting from an SMTAssemblyLineTemplate
- AssemblyMachine: Abstract representation of a Thing inheriting from an AssemblyMachineTemplate

Once you complete the Java environment setup by installing a Java JDK, import the Entities/ThingWorxEntities.xml file into ThingWorx Composer. This file contains the various Data Shapes, Mashups, Value Streams, Things, and Thing Templates necessary to support the application. The more important Entities are as follows:

- RaspberryPi 1 - 6 (Thing): Things that inherit from the motherboard template
- SolderPasteAssemblyMachine (Thing): A Thing that inherits from the assembly machine template
- PickPlaceAssemblyMachine (Thing): A Thing that inherits from the assembly machine template
- ReflowSolderAssemblyMachine (Thing): A Thing that inherits from the assembly machine template
- InspectionAssemblyMachine (Thing): A Thing that inherits from the assembly machine template
- RaspberryPiSMTAssemblyLine (Thing): A Thing that inherits from the assembly line template
- MotherboardTemplate (ThingTemplate): A template used for building motherboard devices
- AssemblyMachineTemplate (ThingTemplate): A template used to create the various types of SMT assembly machines
- SMTAssemblyLineTemplate (ThingTemplate): A template used to represent the entire assembly line and all devices in it
- Advisor (User): User created to be used with the Java SDK examples
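The Java classes listed above are not reproduced in this post, but as a rough sketch of their shape: a simulated device in the ThingWorx Java SDK is typically a subclass of VirtualThing that binds to a server-side Thing of the same name. This is a minimal sketch only, assuming the standard SDK classes and the constructor pattern from the SDK's published examples; the real classes in ManagementApplication.zip contain substantially more logic.

import com.thingworx.communications.client.ConnectedThingClient;
import com.thingworx.communications.client.things.VirtualThing;

// Minimal sketch of a simulated device: a VirtualThing binds to a
// server-side Thing of the same name once the client connects.
public class Motherboard extends VirtualThing {
    public Motherboard(String name, String description, ConnectedThingClient client)
            throws Exception {
        super(name, description, client);
        initializeFromAnnotations(); // registers any annotated property/service definitions
    }
}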
NOTE: An Application Key is NOT included in the zip file you downloaded. You will need to create your own Application Key and assign it to the Advisor user provided in the ThingWorxEntities.xml file, to the Administrator (not recommended for production applications), or to any user you've created. If you do not know how to create one, or just need a refresher, visit the Create An Application Key guide, then come back to this guide.

Step 2: Run Application

The Java code provided in the download is pre-configured to run and connect to the Entities in the ThingWorxEntities.xml file. Open the executable script in a text editor and edit it with your host and port:

- Mac/Linux: Script.sh
- Windows: Script.bat

Update the <HOST> and <PORT> arguments to those of your ThingWorx Composer, and update the Application Key argument to the one you created. Use the examples in the file for assistance.

NOTE: If you are using the hosted trial server, follow the HTTPS example and use 443 as the port.

After updating the script for your operating system, double-click or run Script.sh (Linux, Mac) or Script.bat (Windows) to run the Java program. In your browser, go to the following URL (replace the host field with your ThingWorx Composer host) to see the application work:

<host>/Thingworx/Runtime/index.html#master=AssemblyLineMaster&mashup=RaspberryPiAssemblyLine

You can also open the RaspberryPiAssemblyLine Mashup in Composer and click View Mashup.

You should see rows of assembly machines with buttons. Click the Start button to start the assembly line. Click the Add Board button to add Raspberry Pi motherboards.

NOTE: The screen will not update and properties cannot be changed until the Java backend starts running. Ensure the connection is made before attempting to start the assembly line.
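Under the hood, the script simply launches the SDK client with those arguments. Purely as an illustration of what the connection step amounts to, here is a minimal sketch using the standard ThingWorx Java SDK client classes; the URI, port, and Application Key values are placeholders you would substitute as described above.

import com.thingworx.communications.client.ClientConfigurator;
import com.thingworx.communications.client.ConnectedThingClient;

public class ConnectionSketch {
    public static void main(String[] args) throws Exception {
        ClientConfigurator config = new ClientConfigurator();
        // For the hosted trial server, follow the HTTPS example: wss:// and port 443.
        config.setUri("wss://<HOST>:<PORT>/Thingworx/WS");
        config.setAppKey("<APPLICATION-KEY>");
        config.ignoreSSLErrors(true); // test environments only

        ConnectedThingClient client = new ConnectedThingClient(config);
        client.start(); // opens the AlwaysOn websocket connection

        // Wait until the connection is established before driving the assembly line.
        while (!client.isConnected()) {
            Thread.sleep(1000);
        }
        System.out.println("Connected to ThingWorx");
    }
}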
Functional Breakdown

At runtime, the Mashup executes the following functions:

1. Assembly Machines: Selecting an assembly machine provides information on its diagnostic status and access to charts highlighting its performance.
2. Start Button: Starts up the assembly line and all assembly machines.
3. Shutdown Button: Stops the assembly line and shuts down all assembly machines. Queues are not purged.
4. Motherboard Add Dropdown: A dropdown that shows the available motherboards that can be added to the assembly line.
5. Add Boards Button: If a MotherboardTemplate Entity is selected in the Motherboard Add dropdown, that Raspberry Pi is added to the assembly line. If no motherboard is selected, this adds a new Raspberry Pi Thing to the assembly line.
6. Motherboard Image: Shows all motherboards currently inside the Raspberry Pi assembly line queue.
7. Motherboard Pick Up Dropdown: A dropdown that shows the motherboards in the assembly line that are not in a Complete Stage.
8. Add Pick Up Button: If a MotherboardTemplate Entity is selected in the Motherboard Pick Up dropdown, that Raspberry Pi is removed from the assembly line and is no longer available. This can be done if a Raspberry Pi is slowing down the other queues.
9. Box Image: Shows all motherboards currently in the Complete Stage.

Step 3: Services and Java Implementation

JavaScript using ThingWorx Services

To support and run the application quickly, ThingWorx Services are utilized as much as possible. This maintains the speed and quality of the application while ensuring code changes can be made quickly.

Opening and Starting Up

Open the RaspberryPiAssemblyLine Mashup by going to the URL provided in the last section. The machines will all be in a shut-down (RED) state. This is ensured by a call to the Shutdown service within the SMTAssemblyLineTemplate ThingTemplate, which begins the process of resetting the Motherboards to their default states and the AssemblyMachines to a shutdown state.

Click the Start button to call the StartUp service. This call notifies the Java code to turn the simulated machines on and begin waiting for any motherboards to be added to the queue.

INFO: The StartUp and Shutdown services call other services, some of which can be overridden. If you would like to change the implementation, make the change in an implementation of the SMTAssemblyLineTemplate ThingTemplate. You can use RaspberryPiSMTAssemblyLine as an example.

New Raspberry Pi Names

The CommonServices Entity provides services that can easily be reused by other Entities. The GenerateRandomThingName service is utilized to create a pseudo-random name for a new Motherboard. You can use this service to create names; names may start with "Raspberry," but not necessarily, as this depends on how you set the parameters.

Creating and Adding Boards

Select the Add Board button to call the AddBoard service of the SMTAssemblyLineTemplate ThingTemplate. This service calls the CommonServices Thing to create a new name for the Motherboard, then begins the process of creating, enabling, and adding that Motherboard to the simulated devices in the Java code.

Pickup Boards

Select the Pickup Board button to call the PickUpMotherboard service of the SMTAssemblyLineTemplate ThingTemplate. This service removes a Motherboard from the assembly line, updates its status to picked up, and ensures the simulated devices are updated with this new information.

Queue Processing

A Motherboard is added to the available queue of a machine when it is ready to be worked on by that machine. A machine will NOT know about a Motherboard until that Motherboard is ready for that stage of processing.

The Motherboard is then added to the internal queue of the machine based on the size of that machine's internal queue. Being in the internal queue of a machine does not mean it is being worked on; the Motherboard is ONLY being worked on when the machine has added it to its working queue. The size of the working queue is based on the machine's placement heads. You can play with these values to increase or decrease queue performance.

INFO: The heads, speeds, and queue sizes of the machines are created in the RaspberryPiSMTAssemblyLine Thing. To change these configurations, update the AddStartingMachines service with new values or new machines.
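The Mashup buttons call these services server-side, but the same services can also be invoked from the Java side through the connected client. The following is a hedged sketch only: it assumes the common ConnectedThingClient.invokeService signature from the SDK examples and assumes AddBoard requires no input parameters, which may not match the downloaded code exactly.

import com.thingworx.communications.client.ConnectedThingClient;
import com.thingworx.relationships.RelationshipTypes.ThingworxEntityTypes;
import com.thingworx.types.InfoTable;
import com.thingworx.types.collections.ValueCollection;

public class AddBoardSketch {
    // Invoke the AddBoard service on the assembly line Thing from a connected client.
    static void addBoard(ConnectedThingClient client) throws Exception {
        ValueCollection params = new ValueCollection(); // AddBoard generates its own board name
        InfoTable result = client.invokeService(ThingworxEntityTypes.Things,
                "RaspberryPiSMTAssemblyLine", "AddBoard", params, 10000); // 10-second timeout
        System.out.println("AddBoard returned " + result.getRowCount() + " row(s)");
    }
}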
Java Implementation using ThingWorx Java SDK

The Java code created for the Assembly Line scenario connects to ThingWorx Composer as any ThingWorx SDK utility would. This code allows extended functionality for the application and mimics the behavior of devices or machines connected to ThingWorx Composer.

Motherboard Class

The Motherboard class contains several methods to ensure the location of the motherboard is known at all times. It also updates the status level from 0 to 100 as the motherboard is being assembled.

AssemblyMachine Class

The fields in the AssemblyMachine class ensure that the queues handled by the machine work correctly. When an AssemblyMachine is created, it will load both the available queue and the internal queue if the machine is the first stage in the assembly line (soldering). If it is not a solder machine, the queues will be empty, as no device is pending its task. If the machine is on, it will continue to work based on the current status of the motherboards in its queue. When a machine is turned on or the current task is complete, the AssemblyMachine re-evaluates the queues to optimize timing and decrease idle time.

Challenge: Find a way to improve the timing of the queue and reduce the idle time even more. Think of a problem an assembly line might have when machines are waiting on a prior machine to complete a task.

SMTAssemblyLine Class

The SMTAssemblyLine class handles the overall process and controls how motherboards are handled when entering and exiting the assembly line. There are also listeners to start up the assembly machines.

When a board is added to the queue of the assembly line, it is instantly added to the available queue for a solder machine to begin processing; this is the only machine that has immediate access to the motherboard. When a board is picked up from the assembly line queue, the status of the board is set to "PICKED UP". That motherboard will be available later for processing by the assembly line.

Click here to view Part 2 of this guide.
KEPServerEX requires the 32-bit version of Java if you are using the IoT Gateway Plug-in. If you do not have the 32-bit version installed and attempt to connect the IoT Gateway, the KEPServerEX Event Log will report the following error: "IoT Gateway failed to start, 32-bit JRE required." Some of the Manufacturing Applications training content relies on this Plug-in as well. As a best practice, it is recommended that both the 32-bit and 64-bit versions of Java be installed. The install is available for download from the Oracle website, here: Java SE Runtime Environment 8 - Downloads
By Tim Atwood and Dave Bernbeck, Edited by Tori Firewind
Adapted from the March 2021 Expert Session Produced by the IoT Enterprise Deployment Center

The primary purpose of monitoring is to determine when your application may be exhausting the available resources. Knowledge of the infrastructure limits helps establish these monitoring boundaries, determining straightforward thresholds that indicate an app has gone too far. The four main areas to monitor in this way are CPU, Memory, Networking, and Disk.

For the CPU, we want to know how many cores are available to the application, and potentially the temperature of each or other indicators of overtaxation. For Memory, we want to know how much RAM is available for the application. For Networking, we want to know the network throughput, the available bandwidth, and how capable the network cards are in general. For Disk, we keep track of the read and write rates of the disks used by the application, as well as how much space remains on them.

There are several major infrastructure categories which reflect common modes of operation for ThingWorx applications. One is Bare Metal, which relies upon the traditional use of hardware to connect directly between operating system and hardware, with no intermediary. Limits of the hardware in this case can be found in manufacturing specifications and within the operating system settings, and are normally listed somewhere within the IT department. The IT team is a great resource for obtaining these limits in general, and also keeps track of such things in VMware and virtualized infrastructure models.

VMware is an intermediary between the operating system and the hardware, and often its limits are determined based on the sizing of the application and set by the IT team when the infrastructure is established. These can often be resized as needed, and the IT team will be well aware of the limits here, often already monitoring some of the performance themselves. This is especially so if Cloud Providers are used, given that these are scaled-up virtualizations configured in easy-to-use cloud portals.

Lastly, Containers can be used to designate operating system resources as needed, in a much more specific way that better supports the sharing of resources across multiple systems. Here the limits are defined in the configuration files or charts that define the container.

The difficulties here center around learning what the limits are, especially in the case of network and disk usage. Network bandwidth can fluctuate, and increased latency and network congestion can occur at random times for seemingly no reason. Most monitoring scenarios can therefore make do with collecting network send and receive rates, as well as disk read and write rates, performed on the server.

Cloud Providers like Azure provide VM and disk sizing options that allow you to select exactly what you need, but for network throughput or network IO, the choices are not as varied. Network IO tends to increase with the size of the VM, proportional to the number of CPU cores and the amount of Memory, so this may mean that a VM has to be oversized for the user load and the bulk of the application in order to accommodate a large or noisy edge fleet. The operating metrics and common thresholds for each area are listed below.
We often use these thresholds in our own simulations here at PTC, but note that each use case is different, and each situation should be analyzed individually before determining set limits of performance.

Generally, you will want to monitor: % utilization of all CPU cores, leaving plenty of room for spikes in activity; and total and used memory, ensuring total memory remains constant throughout and used memory remains below a reasonable percentage of the total, which for smaller systems (16 GB and lower) means leaving around 20% of memory for the OS, and for larger systems, usually around 3-4 GB.

For disks, monitor the read and write rates, and ensure there is ample free space for spikes, to avoid any situation that might result in system downtime. For networking, monitor the send and receive rates, which should stay below 70% or so, again to leave room for spikes.

In any monitoring situation, consistently high utilization should trigger concern and an investigation into what's happening. Were new assets added? Has any recent change caused regression or other issues? Any recent changes should be inspected, and the infrastructure sizing should be considered as well. For ThingWorx-specific monitoring, we look at max queue sizes, entries performed, pool sizes, alerts, submitted task counts, and anything that might indicate some kind of data loss. We want the queues to be consistently cleared out to reduce the risk of losing data in the case of an interruption, and to ensure there is no reason for resource use to build up and cause issues over time.

In order for a monitoring set-up to be truly helpful, it needs to make certain information easily accessible to administrative users of the application. Any metrics that are applicable to performance need to be processed and recorded in a location that can be accessed quickly and easily from wherever the admins are. They should know the health of the application at a glance, without needing to drill down a lot to be made aware of issues. Likewise, the alerts that happen should be meaningful, with minimal false alarms, and it is best if this is configurable by the admins from within the application via some sort of rules engine (see the DGIS guide, soon to be released in version 9.1). The monitoring tool should also be able to save the system history and export it for further analysis, all in the name of reducing future downtime and creating a stable, enterprise system.

A good example dashboard rolls up a number of performance criteria into health indicators for various aspects of the application, for instance with a Green-Yellow-Red color-coding system for issues like web requests taking longer than 30 seconds, 3 minutes, or more to respond.

Grafana is another application our team uses for monitoring internally. The easy dashboard creation feature and built-in chart modes make this tool super easy to get started with, and certainly easy to refer to from a central location over time. Setting this up is helpful for load testing and readying an application, but it is also beneficial for continued monitoring post-go-live, which is why it is a worthy investment. Our team usually builds a link based on the start and end times of each simulation performed, with all of the various servers being monitored by one Grafana server, one reference point.

Consider using PTC Performance Advisor (also called DynaTrace) to help monitor these kinds of things more easily.
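The threshold logic itself is easy to prototype outside of any particular tool. Below is a minimal sketch in plain Java using the JDK's OperatingSystemMXBean; this is only an illustration of the alerting idea, not how DynaTrace or Grafana are implemented, and the threshold values simply mirror the rough guidance above.

import java.lang.management.ManagementFactory;

public class ThresholdMonitor {
    // Illustrative thresholds: investigate sustained CPU spikes,
    // and keep roughly 20% memory headroom on smaller systems.
    private static final double CPU_ALERT = 0.80;
    private static final double MEM_HEADROOM = 0.20;

    public static void main(String[] args) throws InterruptedException {
        com.sun.management.OperatingSystemMXBean os =
                (com.sun.management.OperatingSystemMXBean)
                        ManagementFactory.getOperatingSystemMXBean();
        while (true) {
            double cpu = os.getSystemCpuLoad(); // 0.0 - 1.0 across all cores
            double freeFrac = (double) os.getFreePhysicalMemorySize()
                    / os.getTotalPhysicalMemorySize();
            if (cpu > CPU_ALERT)
                System.out.printf("ALERT: CPU at %.0f%%%n", cpu * 100);
            if (freeFrac < MEM_HEADROOM)
                System.out.printf("ALERT: only %.0f%% memory free%n", freeFrac * 100);
            Thread.sleep(10_000); // sample every 10 seconds
        }
    }
}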
When most administrators think of monitoring, they think of reading and reacting to dashboards, alerts, and reports. Rarely does the idea of benchmarking come to mind as a monitoring activity, and yet having successful benchmarks of system performance can be a crucial part of knowing whether an application is functioning as expected before there are major issues. Benchmarks also look at the response time of the server and can better enable tracking of the actual end-user experience. The best option is to automate such tests using JMeter or other applications, producing a daily snapshot of user performance that can anticipate future issues and create a more reliable experience for end users over time.

JMeter, which also has the option to build custom reports, is good for simulating the user load, which often makes up most of the server load of a ThingWorx application, especially considering that ingestion is typically optimized independently and given the most thought. The most unexpected issues tend to pop up within the application itself, after the project has gone live.

PTC publishes example benchmarks, such as one from a Windchill application, to facilitate comparison between optimized test systems and real-life performance. Likewise, DynaTrace can show an automated baseline (using Smart URL Detection) on Response Time (median and 90th percentile) as well as Failure Rate. We can also look at Throughput and compare it with the expected value range based on historical throughput data.

Monitoring typically increases system performance and availability, but its other advantage is to provide faster, more effective troubleshooting. Establish a systematic process or checklist to step through when problems occur, something that is organized to be done quickly, but still takes the time to find and fix the underlying problems. This will prevent issues from happening again and again, and polish the system periodically as problems occur, so that the stability and integrity of the system only improve over time. Push for real solutions if possible, not band-aids, even if more downtime is needed up front; it is always better to have planned downtime up front than unplanned downtime down the line. Close any monitoring gaps when issues do occur, which is the valid RCA response if not enough information was captured to actually diagnose or resolve the issue.

PTC Tech Support developed a diagnostic data gathering query for Oracle that customers can use, found in our knowledge base. This is an example of RCA troubleshooting that looks at different database factors, reporting on which queries perform the worst based on input criteria. Another example of troubleshooting is for the Java JVM, where an automated, documented process examines a standard set of JVM metrics and generates a report for easy end-user consumption.

Don't hesitate to reach out to PTC Technical Support in advance to go over your RCA processes, to review benchmark discrepancies between what PTC publishes and what your real-life systems show, and to ensure your monitoring is adequate to maintain system stability and availability at all times.
In the ptc-windchill-extension-1.0.0-14.zip archive there is an extension called infotableselector_ExtensionPackage.zip. This extension enables the Widget called Infotable Selector, which can be used to clear the selection in a grid. For how to use this widget, see the attached picture.
Recently, mentor.axeda.com was retired. The content has not yet been fully migrated to the ThingWorx Community, though a plan is in place to do this over the coming weeks. Attached are OneNote 2016 and PDF files containing the content that was previously available on the Axeda Mentor website.
Generating and Reviewing JMeter Results

Overview

The 4th in a series of articles on load testing with JMeter, this one covers pushing the limits of a test to see how much the application can handle, as well as generating and analyzing reports once the testing completes. This article rounds off the basics of JMeter, such that anyone should be able to perform enterprise-level load testing after reviewing the content here.

Multiple criteria can be used to evaluate results, including:
- response time (as monitored both by JMeter and by some other tool on the system side)
- throughput
- number of errors
- resource saturation
- CPU, memory, disk, and network utilization

Depending on the use case, some of these may be considered more important than others. For instance, some customers don't care if users wait a while for results to appear on the page (response time), because they set their users' expectations and mitigate the experience with well-designed loading graphics. With response times secondary, the real issues center around data loss or system outages, with resource utilization and number of errors becoming the more important indicators of system health. Request and database timeout errors are especially important indicators, as they occur most often when resources are saturated and there is data loss.

It is typical for many customers to find preventing data loss and/or promoting data integrity more important than preventing long response times. Consider which of these factors is most important to your use case as you determine what kind of information to gather and review in your reports.

How to Create Client-Side Reports in JMeter

Creating reports for the client-side data is very simple using JMeter, both from the command line and within the UI. These reports have graphical displays of response times, information about the number and type of response errors, and other criteria of performance used to gauge the success or failure of a load test. Follow these steps to generate an index file which, when opened in your browser of choice, shows all of the relevant JMeter data:

1. Create an empty directory in which to store reports.
2. To generate the HTML report from a test that has already run, use:
jmeter -g <outputfile.jtl/csv> -o <path to output folder for html report>
3. To start a test with report generation enabled from the outset, use:
jmeter -n -t <test JMX file> -l <outputfile.jtl/csv> -e -o <path to output folder>
4. When the test completes, the JMeter client consoles will indicate that they are finished. Close the windows to terminate, or optionally run multiple tests sequentially using the same jmeter-server windows.
5. Click the generated index.html file to open the results viewing window.

At any time, you can modify the settings of this HTML dashboard using the details in the JMeter user manual. The manual describes many options for these dashboards, as well as recommendations on how to group and format the results in ways that best convey the success or failure of the test, based on the custom requirements of the application and how granular the view needs to be.
Most of the time, the default settings work fine. The charts aren't labeled very well on the landing page, so click on the Response Times submenu; this page may take some time to render if there is a lot of data. Next, scroll down to see all the requests that occurred and sort them by how long they took to complete. Anything that took over 5 seconds (or more, depending on what is expected) should be investigated as part of the post-test analysis. Does something need to be tuned or optimized? This is how to tell which request is holding things up for your customers.

There is also a chart that shows the overview, grouping the response times by how long they took to demonstrate the health of the system more concretely. Typically, most of the requests are quite fast, with a few that had errors or took a bit longer; this represents expected behavior and is pretty typical for web activity.

You can also generate the report through the main JMeter client: give it a results file and an output directory, and it produces the same index file.

There are log files in each of the JMeter client directories called jmeter-server.log. These files may show the wrong timezone, but the elapsed times are correct, and they show when the JMeter clients started, how many threads they ran, which servers were which, and whether there were any errors. Not all errors mean a failed test, so review anything that appears and determine what is expected. Consider designing a batch script to gather all of these logs together, or even to analyze them automatically and extract only the relevant information.

How to Create Server-Side Results in DynaTrace

Collecting data from the environment, including CPU usage, memory utilization (used vs. total), garbage collection times, and other metrics of system health on the server, requires an external tool. PTC's official tool for this is DynaTrace (PTC System Monitor). PTC offers a runtime license for DynaTrace to anyone who buys certain products, including Kepware Server, ThingWorx Foundation and Navigate, Windchill, Integrity, and more. Read more information about DevOps on the PTC Community, and stay tuned for more articles on the subject to come from the EDC.

Another option would be something like telegraf and Grafana (from the previous blog post), which facilitate creating dashboards around the data output specific to the needs of the application, and which can still be used for monitoring once the application goes live. It can certainly be worth using such a tool for monitoring the server side, but the setup takes more time. Likewise, many VMs have built-in monitoring faculties for CPU usage and memory utilization, but DynaTrace also has visualization, consolidation of system elements, and other features that make it easy to use right out of the box. Be sure to review PTC's full documentation.

The example described here is a ThingWorx Navigate system, with Windchill and ThingWorx Foundation set up side by side, where a chart shows the overall response times of the server side of the system. JMeter collects the statistics on what the client looks like, while another tool is required to collect the server-side metrics like CPU usage and memory utilization, things that indicate the health of the VM or computer hosting the clients.
An older version of DynaTrace is available for free to all ThingWorx customers from the PTC Downloads Site (under various product listings).

In DynaTrace, you can build new dashboards using PurePaths. You can also look at the response times for each service; be sure to change the response limit to a large number so that all of the results are returned in the PurePaths dashboard.

In one example, the longest service that ran took 95 seconds to fully respond. More specific analysis of this service can then begin. Perhaps it needs to be tuned or otherwise optimized to handle the number of threads, i.e. the number of users. Perhaps the system needs more resources, or the VM isn't large enough for the test. Perhaps more JMeter clients and system resources are required. Something will explain this long response time, and that will inform what work might still remain before the system can scale up to the enterprise level.

How to Use the Test Results

Load testing often means scaling the test up a little more each time until the system eventually breaks or the target performance is reached. Within JMeter, this won't mean increasing the overall number of threads per JMeter client, but instead scaling horizontally to other JMeter clients (as covered in the previous blog post). Now that the remote or distributed clients are configured and the test is running, how do we know when the test is beginning to fail?

It turns out that this answer is not a simple one. Which results are considered desirable varies from one customer to the next based on many factors, and analyzing test results is a massive topic all on its own. However, there is one thing any customer would care to review: the response time overview chart found within the JMeter reports. This chart can be used to compare the performance of the majority of threads against a baseline, indicating the point at which the test begins to fail, i.e. the point at which the limits of the system are reached.

The easiest way to determine a good standard response time for a load test, a baseline, is to start with a single JMeter client and record the response times for just 1-5 threads. You can record the response times for individual requests, particularly queries and other services with expected long response times, or the average response times across all requests or groups of requests if the performance of some mashups is more important than others.

This approach is better than relying on the response times seen in a browser, because HTML pages load differently when rendered in a browser, with different graphical resource requirements than what is requested in JMeter. Note that some customers will also manually record response times within a separate browser-based test scenario during load testing, either as a sanity check or as part of their overall benchmarking, in order to further validate the scalability of the application; but this wouldn't involve JMeter, given that browsers load things differently and cross-comparison is a bad idea.

Once the baseline response times are established, start increasing the thread counts across the many JMeter clients until you see the response times go up on average.
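This baseline comparison can also be automated. The following is a minimal sketch only: it assumes the default CSV JTL layout, where the second column is the elapsed time in milliseconds and the third column is the sample label, and it does not handle quoted labels containing commas. It aggregates average response times per request label and flags anything above twice a supplied baseline, in line with the criterion described next.

import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class BaselineCheck {
    public static void main(String[] args) throws Exception {
        // args[0] = results.jtl (CSV format), args[1] = baseline average in ms
        double baseline = Double.parseDouble(args[1]);
        List<String> lines = Files.readAllLines(Path.of(args[0]));
        Map<String, long[]> byLabel = new TreeMap<>(); // label -> {count, total elapsed}
        for (String line : lines.subList(1, lines.size())) { // skip the header row
            String[] cols = line.split(",");
            long elapsed = Long.parseLong(cols[1]); // "elapsed" column in the default layout
            long[] agg = byLabel.computeIfAbsent(cols[2], k -> new long[2]);
            agg[0]++;
            agg[1] += elapsed;
        }
        byLabel.forEach((label, agg) -> {
            double avg = (double) agg[1] / agg[0];
            String flag = avg > 2 * baseline ? "  <-- exceeds 2x baseline" : "";
            System.out.printf("%-40s avg %.0f ms over %d samples%s%n",
                    label, avg, agg[0], flag);
        });
    }
}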
PTC's standard criterion for load testing is exceeded when the average response times roughly double, or when the system seems overwhelmed by the user load on the server side (which is what to look out for in DynaTrace or the external system monitor). At this point, the application is said to have reached a bottleneck, which could be a simple tuning problem, or it could be saturated by resource requirements. Either way, the bottleneck is proof that the system can't take any more threads without users beginning to notice and the response times approaching an unreasonable delay.

Other criteria can be used as well, say if any one thread takes more than 5 seconds to respond. Also ensure there are no unexpected errors, as gateway errors represent failed tests too. Sometimes there will be errors even when the test is successful, though, so consider monitoring the error percentage, a column in the Summary Report tab of JMeter, to see what is normal. The throughput column may also be something to monitor. Many watch for increases in throughput as the thread count increases to ensure there is no degradation in performance (which may indicate hardware or sizing constraints).

In the Summary Report, thread group results from all of the clients appear side by side, differentiated from each other by their unique ports.

Conclusions

Generating and reviewing reports within JMeter is straightforward and easily customizable. Be sure to also monitor the system itself using an external tool like DynaTrace, PTC's official System Monitor, which has a lot of value considering how easy it is to use out of the box. If the system looks healthy on the server side and the response times are within an acceptable range on the client side, then the application is ready for enterprise use. Be sure to generate a baseline for response times within JMeter, remembering that browsers have different loading processes than JMeter, and not to cross-compare.

This article constitutes the end of the basics. The final article to come will talk about more advanced test design features and best practices, so stay tuned!
To set up Single Sign-On with Windchill, you can simply follow the steps in the Windchill extension guide. However, there is a significant problem when using WebSockets for the EMS or Edge SDKs from devices, since the Apache server for Windchill blocks the "ws" and "wss" protocols, much like a proxy server would. There may be a couple of ways to avoid this issue, but I suggest changing the filter mappings for the SSO filter. The Windchill extension guide has users set filters for all incoming ThingWorx URLs by using "/*" filter mappings. Instead, use the settings below in the "web.xml" of the ThingWorx server to avoid the problem stated above. The list looks quite long and complicated, but it is essentially the default filter mappings already defined for "AuthenticationFilter", minus the WebSocket-related URLs.

<!-- Windchill Extension SSO Start-->
<filter>
<filter-name>IdentityProviderAuthenticationFilter</filter-name>
<filter-class>com.ptc.connected.plm.thingworx.wc.idp.client.filter.IdentityProviderAuthenticationFilter</filter-class>
<init-param>
<param-name>idpLoginUrl</param-name>
<param-value>http(s)://<SERVERHOSTURL>/Windchill/wtcore/jsp/genIdKey.jsp</param-value>
</init-param>
</filter>
<filter-mapping> <filter-name>IdentityProviderAuthenticationFilter</filter-name> <url-pattern>/extensions/*</url-pattern> </filter-mapping>
<filter-mapping> <filter-name>IdentityProviderAuthenticationFilter</filter-name> <url-pattern>/action-authenticate/*</url-pattern> </filter-mapping>
<filter-mapping> <filter-name>IdentityProviderAuthenticationFilter</filter-name> <url-pattern>/action-login/*</url-pattern> </filter-mapping>
<filter-mapping> <filter-name>IdentityProviderAuthenticationFilter</filter-name> <url-pattern>/action-confirm-creds/*</url-pattern> </filter-mapping>
<filter-mapping> <filter-name>IdentityProviderAuthenticationFilter</filter-name> <url-pattern>/action-change-password/*</url-pattern> </filter-mapping>
<filter-mapping> <filter-name>IdentityProviderAuthenticationFilter</filter-name> <url-pattern>/ThingworxMain.html</url-pattern> </filter-mapping>
<filter-mapping> <filter-name>IdentityProviderAuthenticationFilter</filter-name> <url-pattern>/ThingworxMain.html/*</url-pattern> </filter-mapping>
<filter-mapping> <filter-name>IdentityProviderAuthenticationFilter</filter-name> <url-pattern>/Server/*</url-pattern> </filter-mapping>
<filter-mapping> <filter-name>IdentityProviderAuthenticationFilter</filter-name> <url-pattern>/ApplicationKeys/*</url-pattern> </filter-mapping>
<filter-mapping> <filter-name>IdentityProviderAuthenticationFilter</filter-name> <url-pattern>/Networks/*</url-pattern> </filter-mapping>
<filter-mapping> <filter-name>IdentityProviderAuthenticationFilter</filter-name> <url-pattern>/Dashboards/*</url-pattern> </filter-mapping>
<filter-mapping> <filter-name>IdentityProviderAuthenticationFilter</filter-name> <url-pattern>/DirectoryServices/*</url-pattern> </filter-mapping>
<filter-mapping> <filter-name>IdentityProviderAuthenticationFilter</filter-name> <url-pattern>/Authenticators/*</url-pattern> </filter-mapping>
<filter-mapping> <filter-name>IdentityProviderAuthenticationFilter</filter-name> <url-pattern>/PersistenceProviderPackages/*</url-pattern> </filter-mapping>
<filter-mapping> <filter-name>IdentityProviderAuthenticationFilter</filter-name>
<url-pattern>/tunnel/wsadapter.jsp</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/tunnel/adapter.jsp</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Logs/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Resources/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Subsystems/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Users/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Home/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/StateDefinitions/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/StyleDefinitions/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/ScriptFunctionLibraries/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/AtomFeedService/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/DataShapes/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Importer/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/ImageEncoder/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Exporter/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/ExportDatabase/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/ExportTheme/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/ExportDefaultEntities/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/ImportDatabase/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/DataExporter/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/DataImporter/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Widgets/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Groups/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     
<url-pattern>/ThingPackages/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Things/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/ThingTemplates/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/ThingShapes/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/DataTags/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/ModelTags/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Composer/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Squeal/index.html</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Runtime/index.html</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Mashups/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Menus/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/MediaEntities/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/loaders/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/demos/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/ExtensionPackageUploader/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/ExtensionPackages/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/FileRepositoryUploader/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/FileRepositoryDownloader/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/FileRepositories/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/xmpp/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/LocalizationTables/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Organizations/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/RemoteTunnel/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     
<url-pattern>/PersistenceProviders/*</url-pattern>   </filter-mapping> <filter> <filter-name>IdentityProviderKeyValidationFilter</filter-name> <filter-class>com.ptc.connected.plm.thingworx.wc.idp.client.filter.IdentityProviderKeyValidationFilter</filter-class> <init-param> <param-name>keyValidationUrl</param-name> <param-value>http(s)://<SERVERHOSTURL>/Windchill/login/validateIdKey.jsp</param-value> </init-param> </filter> <filter-mapping>   <filter-name>IdentityProviderKeyValidationFilter</filter-name>   <url-pattern>/extensions/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/action-authenticate/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/action-login/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/action-confirm-creds/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/action-change-password/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ThingworxMain.html</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ThingworxMain.html/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Server/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ApplicationKeys/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Networks/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Dashboards/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/DirectoryServices/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Authenticators/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/PersistenceProviderPackages/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/tunnel/wsadapter.jsp</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/tunnel/adapter.jsp</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Logs/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Resources/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Subsystems/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Users/*</url-pattern>   </filter-mapping>   <filter-mapping>     
<filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Home/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/StateDefinitions/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/StyleDefinitions/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ScriptFunctionLibraries/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/AtomFeedService/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/DataShapes/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Importer/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ImageEncoder/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Exporter/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ExportDatabase/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ExportTheme/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ExportDefaultEntities/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ImportDatabase/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/DataExporter/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/DataImporter/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Widgets/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Groups/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ThingPackages/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Things/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ThingTemplates/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ThingShapes/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/DataTags/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ModelTags/*</url-pattern>   </filter-mapping>   <filter-mapping>     
<filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Composer/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Squeal/index.html</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Runtime/index.html</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Mashups/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Menus/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/MediaEntities/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/loaders/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/demos/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ExtensionPackageUploader/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ExtensionPackages/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/FileRepositoryUploader/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/FileRepositoryDownloader/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/FileRepositories/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/xmpp/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/LocalizationTables/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Organizations/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/RemoteTunnel/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/PersistenceProviders/*</url-pattern>   </filter-mapping> <!-- Windchill Extension SSO End-->
Just like the perfect sandwich, we know that you have specific preferences and requirements for your ThingWorx deployment. Whether you like to keep things simple with a classic grilled cheese or you like to spice things up with a more elaborate chipotle mayo BLT, we’ve got you covered. Our ThingWorx Deployment Architecture Guide explains what you’ll need to deploy ThingWorx in three different scenarios: production, enterprise and high-availability (pictured below: Deployment Architecture for ThingWorx on Azure in High-Availability).

We’ve recently published Version 1.1 of the ThingWorx Deployment Architecture Guide. In it, you can find updated deployment architecture diagrams to more distinctly show the data and application layers within a ThingWorx environment. Our team has also added a new section on what you’ll need to deploy ThingWorx on Microsoft Azure, PTC’s preferred cloud platform.

Check it out here or in the attachment section on the right.

Stay connected, Kaya
ThingWorx 7.4 introduces a new licensing system. A license file (license.bin) needs to be placed in the ThingworxPlatform folder. A new license file is also required when you upgrade from 7.4 to a later major or minor release (but not for service pack-level releases). For example:
• If you are using version 7.3, a license is not required.
• If you upgrade from version 7.4.1 to version 7.4.2, a license upgrade is not required.
• If you upgrade from version 7.4.3 to version 7.5.2, a license upgrade is required.
Refer to the Installing ThingWorx 7.4 guide or the Upgrading ThingWorx 7.4 guide for detailed process steps.
Paid customers have unlimited use of entities for 7.4.0. Because the license file is currently locked to a version rather than to an SCN/host, and is part of the download package on PTC Support, customers can use the same download for multiple instances.
The Developer Trial Edition provides a constrained license file (5 users, 100 things, 120 days); the license file is part of the on-premise download package on the Dev Portal. The Developer Trial Edition for Manufacturing (Kinex) provides a constrained license file (5 users, 100 things, no Composer access); the license file is part of the download package on the Kepware Portal.
A new Licensing Subsystem is now available. Licensing Subsystem services include:
- AcquireLicense: retrieves the feature entitlements in license.bin; used when a new license is dropped in the folder (no instance restart needed)
- GetCurrentLicenseInfo: returns info on the current license file
- GetRemainingDaysInLicense: used for trial editions
- GetLicenseUsageData: returns information about the user's license usage
- PurgeLicenseUsageData: deletes license usage data that is two years old or older
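For a quick look at what these services return, they can be called from any server-side script. Here's a minimal sketch, assuming the subsystem is registered under the name LicensingSubsystem (service names as listed above):

// Read the current license entitlements and the remaining trial days
var licenseInfo = Subsystems["LicensingSubsystem"].GetCurrentLicenseInfo(); // INFOTABLE of license details
var daysLeft = Subsystems["LicensingSubsystem"].GetRemainingDaysInLicense(); // NUMBER, mainly useful for trial editions
logger.info("Days remaining in license: " + daysLeft);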
Putting this out there because this is a difficult problem to troubleshoot if you don't do it right. Let's say you have an application where visibility permissions are in effect, so you have the Users group removed from the Everyone Organization. Now you have a Thing "Thing1" with Properties that are being logged to a ValueStream "VS1". What do you need to make this work? Obviously, the necessary permissions to write the values to Thing1 and read the values from Thing1 (for the UI). But for visibility, what you'll need is:
- Visibility to Thing1 (makes sense)
- Visibility to the Persistence Provider of the ValueStream VS1!!!!
No, you don't need visibility to the ValueStream itself, but you DO need visibility to the Persistence Provider of that ValueStream. The way the lack of this permission showed up in the Application Log was a message about trying to provide a Null value.
Whether you’re new to ThingWorx or you’re a seasoned user, understanding the Thing Model is key to accelerating your IoT development. Today, I’ll dive into what ThingShapes, ThingTemplates and Things are and how to use them to accelerate development.

Before I dive into the definitions of these concepts, let’s first consider the wide array of machines that exist out there in the world. The variety is huge—there are MRI machines, 3D printers, laser cutters, CNC machines, tractors, and so much more.

At their core, all MRI machines share similar properties and capabilities—they have a name, a physical location, a magnetic strength, a radio frequency current, and the ability to visually display what’s going on inside the human body. There are, however, different types of MRI machines, and, while they are fundamentally the same type of machine, there are notable differences as well. When creating our IoT app, it’s important that we have a way to model these differences so that we can cascade changes across entities and reduce development time.

Let’s walk through an example using MRI machines. Consider the various MRI machines that exist today; there’s the traditional closed MRI machine, the open MRI machine and the standing/sitting MRI machine.

To represent the fundamental properties (i.e., characteristics or readings) and services (i.e., functionality) of a generic MRI machine—name, location, magnetic strength, etc.—we’ll create a ThingTemplate. The ThingTemplate is the general definition/representation of the real-world physical thing (i.e., the MRI machine) that is being modeled. You can think of a ThingTemplate as a blueprint of what you’re modeling. A ThingTemplate defines what a Thing is; if you’re familiar with object-oriented programming, a ThingTemplate is similar to the concept of inheritance; it defines an “is a” relationship. Using our ThingTemplate, we’re able to create multiple instances of the template that inherit the properties and services from that template. If you have 100 MRI machines in a particular region, rather than updating each one separately, simply updating the template will allow you to propagate these changes.

Let’s say that, of our 100 MRI machines, 40 are traditional closed machines, 30 are open machines and 30 are standing/sitting machines. The traditional machines have a specific diameter of the opening where the patient goes in to lay down, and the sitting/standing machines may have a particular height of the seat where the patient sits. Because the machines have unique components/parts, the different types of machines also have different maintenance services.

To model each of these “add-on” properties, we’ll want to create a ThingShape. A ThingShape is a representation of particular properties or services that may optionally come in some versions of the machine but not others. The ThingShape is a single feature or piece of the physical thing that’s being modeled. You can think of a ThingShape as a reusable part, or a set of properties/services that comes with some versions, but not all. A ThingShape defines what a Thing has; if you’re familiar with object-oriented programming, a ThingShape is similar to the concept of composition; it defines a “has a” relationship. So, for our MRI example, we could create one ThingShape for the standing MRI and a second ThingShape for the closed MRI. The StandingMRIThingShape would have a property of “SeatHeight” and a service of “StandingMRIMaintenanceService.” The ClosedMRIThingShape would have a property of “OpeningDiameter” and a service of “ClosedMRIMaintenanceService.” Just like a ThingTemplate, the properties and services that make up a ThingShape are inherited by the instances that use that ThingShape.

Finally, Things. A Thing is simply an instance of a ThingTemplate with (optionally) ThingShapes added for additional unique properties/services.

Let’s say we want to model a single closed MRI machine. We’ll represent the machine as a Thing that inherits from Templates and Shapes. We’ll start with the MRIMachineThingTemplate so that we can create an MRI Machine Thing (i.e., instance). Since this is a closed MRI machine and has the additional property of opening diameter, we’ll want to make sure we include that property. To do this, we’ll add the ClosedMRIThingShape.

Voila! We now have a digital twin of our closed MRI machine with all the base properties of an MRI machine from our MRIMachineThingTemplate and all the special add-ons of the closed version with our ClosedMRIThingShape.

Here’s a visual recap of what we just modeled.

If you’re looking for even further guidance on how to model your data with the Thing Model, check out the Data Model Introduction guide on the Developer Portal to get started and the Design Your Data Model guide to learn even more.

Happy data modeling!

Stay connected,
Kaya
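P.S.: One nice consequence of this model is that both relationships can be checked at runtime from a server-side script. Here's a minimal sketch, assuming a Thing named ClosedMRIMachine001 (a hypothetical name) was created from MRIMachineThingTemplate with ClosedMRIThingShape added:

// "is a" check: does this Thing derive from the MRI machine template?
var isMRIMachine = Things["ClosedMRIMachine001"].IsDerivedFromTemplate({
    thingTemplateName: "MRIMachineThingTemplate" /* THINGTEMPLATENAME */
});

// "has a" check: does this Thing implement the closed-bore add-on shape?
var hasClosedAddOns = Things["ClosedMRIMachine001"].ImplementsShape({
    thingShapeName: "ClosedMRIThingShape" /* THINGSHAPENAME */
});

logger.info("Is an MRI machine: " + isMRIMachine + "; has closed add-ons: " + hasClosedAddOns);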
There are times when the raw sensor readings are not directly useful for monitoring conditions on a machine. The raw data may need to be transformed before it can provide value within your monitoring applications. For example, instead of monitoring individual pressure readings reported each second, you may only be concerned with the maximum pressure reading each minute. Or, maybe you want to monitor the median value of the electrical current pulled by a machine every five seconds to smooth out the noise of raw sub-second sensor readings. Or, maybe you want to monitor whether the average hourly temperature of a machine exceeds a control limit in 2 of the past 3 hours.

Let’s take the example of monitoring the max pressure of a valve reading over the past 45 seconds for your performance dashboard. How do you do it? Today, you might add a new property (e.g., “MaxPressure”) to your valve Thing. Then, you might add a subscription that triggers when the Pressure property value changes and calls a service FindMax() to return the maximum pressure for that time interval. Lastly, you might write that maximum result value to the new property MaxPressure to store it and visualize it in the dashboard. Admittedly, not the worst process, but also not the most efficient.

Coming in 8.4, we will now offer Property Transforms, which enable you to automatically execute common statistical calculations—like min, max, average, median, mode and standard deviation, as well as SPC calculations—directly within a property itself. These transforms are configurable to run at certain intervals of time or points collected, and can also be used with our alerting subsystem to drive behavior and user action where necessary. There is no longer a need to create an elaborate subscription-based logic flow just to do simple calculations! This is just another way that ThingWorx 8.4 offers a more productive environment for IoT developers than ever before.

Ready to see it in action? Check out the video below by our product manager Mark!

Comment your thoughts below!

Stay connected,
Kaya
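P.S.: For contrast, here's roughly what the subscription-based approach described above looks like in a service script (a sketch only, with hypothetical property names, assuming Pressure is logged to a value stream):

// Inside a subscription on the valve Thing's Pressure DataChange event
var end = new Date();
var start = new Date(end.getTime() - 45000); // the last 45 seconds

// Pull the logged Pressure history for the window
var history = me.QueryNumberPropertyHistory({
    propertyName: "Pressure" /* STRING */,
    startDate: start /* DATETIME */,
    endDate: end /* DATETIME */,
    maxItems: 99999 /* NUMBER */,
    oldestFirst: false /* BOOLEAN */
});

// FindMax(): scan the history rows for the largest value
var max = null;
for (var i = 0; i < history.rows.length; i++) {
    if (max === null || history.rows[i].value > max) {
        max = history.rows[i].value;
    }
}

// Store the result so the dashboard can visualize it
if (max !== null) {
    me.MaxPressure = max;
}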
The App URI in the ThingWorx Remote Thing Tunnel configuration specifies the endpoint of the specified tunnel. The default value (/Thingworx/tunnel/vnc.jsp) points to the built-in ThingWorx VNC client that can be downloaded through the Remote Access Widget in a Mashup to provide VNC remote desktop access. Leaving the App URI blank will result in the tunnel being connected to the listen port on the user's machine, as specified in the Remote Access Widget. In this case the user must supply the application client (e.g., an SSH client) in order to connect to the tunnel endpoint.
Connect and Monitor Industrial Plant Equipment Learning Path

Learn how to connect and monitor equipment that is used at a processing plant or on a factory floor.

NOTE: Complete the following guides in sequential order. The estimated time to complete this learning path is 180 minutes.

1. Create An Application Key
2. Install ThingWorx Kepware Server
3. Connect Kepware Server to ThingWorx Foundation (Part 1, Part 2)
4. Create Industrial Equipment Model
5. Build an Equipment Dashboard (Part 1, Part 2)
For those of you that aren't aware - the newest version of the Eclipse Plugin for Extension Development was made available last week in the ThingWorx Marketplace here. Because of the infancy of the product, there is not an official process for supplying release notes along with the plugin. These are not official or all-encompassing, but cover the main items worked on for 7.0.

New Features:
- Added Configuration Table Wizard for code generation
- SDK Javadocs now automatically linked to SDK resources on project creation
- When creating a Service, Trace logging statements are generated inside of it (along with appropriate initializers)
- ThingWorx Source actions are now available from the right-click menu within a .java file

Bugs:
- Fixed problem where some BaseTypes are not uppercase in annotations when generating code
- Fixed error when creating and importing Extension Projects when the Eclipse install has a space in the file path
- Fixed inconsistent formatting in the metadata.xml when adding new Entities

We are hoping to have a more official release note process for the next release. Feel free to reply with questions or concerns.
Tune in to The Lean Manufacturer podcast where expert guests bring their outside view of the IIoT and discuss various aspects of manufacturing. Over the course of the series, we’ll cover some of the most important ways the IIoT can maximize manufacturing efficiency and bring value to your organization, including the need for reducing planned and unplanned downtime, enabling operational efficiency, ensuring digital continuous improvement, and so much more.      
Ran into this recently and thought I'd share an approach to getting a table with a multi-column distinct while retaining all the columns of each row. If you use Distinct, you get only the columns you run Distinct on. This isn't very helpful if you want the 'latest' or the 'first occurrences' of records in your table with a combination of fields being unique. For example, I had Process, Part, Dimension and Point, for which I had multiple value and date-time entries, but I only wanted the latest entries. Following is how I solved it; if you have a better way, please leave a comment! P.S.: for the query I used the awesome query builder available in the snippet section!
---------------------------------------
// query1 was built with the query builder mentioned above (definition not shown here)
var q1Result = Things["MyThing"].QueryStreamEntriesWithData({ maxItems: 99999, query: query1 });

// Below creates a temporary measurement table to store the latest measurement values
var params = {
    infoTableName: "InfoTable",
    dataShapeName: "MyDatashape.DS"
};
// CreateInfoTableFromDataShape(infoTableName:STRING("InfoTable"), dataShapeName:STRING):INFOTABLE(MyDataShape.DS)
var tempTable1 = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);

// Extract only the latest measurements from the measurement result table 'q1Result'.
// The way we reduce this to unique measurements is:
// 1. Records are in reverse order of date time
// 2. Get distinct by Process, Part, Dimension, Point
// 3. Step through and match against the distinct set
// 4. First match goes into the final set
// 5. Upon match, remove that entry from the distinct set
// 6. If no match, skip the record
// 7. If no more distinct match records remain, break the loop
var params = {
    t: q1Result /* INFOTABLE */,
    columns: 'ProcessID,PartID,Dimension,Point' /* STRING */
};
// result: INFOTABLE
var distinctResult = Resources["InfoTableFunctions"].Distinct(params);

for (var x = 0; x < q1Result.rows.length; x++) {
    var query = {
        "filters": {
            "type": "AND",
            "filters": [
                { "fieldName": "ProcessID", "type": "EQ", "value": q1Result.rows[x].ProcessID },
                { "fieldName": "PartID", "type": "EQ", "value": q1Result.rows[x].PartID },
                { "fieldName": "Dimension", "type": "EQ", "value": q1Result.rows[x].Dimension },
                { "fieldName": "Point", "type": "EQ", "value": q1Result.rows[x].Point }
            ]
        }
    };

    var params = {
        t: distinctResult /* INFOTABLE */,
        query: query /* QUERY */
    };
    // result: INFOTABLE
    var matchResult = Resources["InfoTableFunctions"].Query(params);

    if (matchResult.rows.length == 1) {
        tempTable1.AddRow(q1Result.rows[x]);

        // Remove the matched combination from the distinct set
        distinctResult = Resources["InfoTableFunctions"].DeleteQuery({
            t: distinctResult /* INFOTABLE */,
            query: query /* QUERY */
        });

        if (distinctResult.rows.length == 0) {
            break;
        }
    }
}

// I now have a tempTable1 with the full rows, distinct across the 4 fields
result = tempTable1;
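In the spirit of the "better way" invitation above, a simpler single-pass alternative is to track the composite keys you've already seen in a plain JavaScript object, with the same setup as above (q1Result sorted newest-first, tempTable1 already created):

// Single pass: keep only the first (newest) row per ProcessID/PartID/Dimension/Point combination
var seen = {};
for (var i = 0; i < q1Result.rows.length; i++) {
    var row = q1Result.rows[i];
    var key = row.ProcessID + "|" + row.PartID + "|" + row.Dimension + "|" + row.Point;
    if (!seen[key]) {
        seen[key] = true;
        tempTable1.AddRow(row);
    }
}
result = tempTable1;

This avoids the repeated Query/DeleteQuery calls, so it walks the result table only once.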
Part I – Securing the connection from a remote device to the ThingWorx platform

The goal of this first part is to set up a certificate authority (CA) and sign the certificates used to authenticate MQTT clients. At the end of this first part the MQTT broker will only accept clients with a valid certificate. A note on terminology: TLS (Transport Layer Security) is the new name for SSL (Secure Sockets Layer).

Requirements
The certificates will be generated with openssl (check whether your distribution has already installed it). Demonstrations will be done with the open source MQTT broker, mosquitto. To install, use the apt-get command:
$ sudo apt-get install mosquitto
$ sudo apt-get install mosquitto-clients

Procedure
NOTE: This procedure assumes all the steps will be performed on the same system.

1. Set up a protected workspace
Warning: the keys for the certificates are not protected with a password. Create and use a directory that does not grant access to other users.
$ mkdir myCA
$ chmod 700 myCA
$ cd myCA

2. Set up a CA and generate the server certificates
Download and run the generate-CA.sh script to create the certificate authority (CA) files, generate the server certificates and use the CA to sign them. NOTE: Open the script to customize it at your convenience.
$ wget https://github.com/owntracks/tools/raw/master/TLS/generate-CA.sh .
$ bash ./generate-CA.sh
The script produces six files: ca.crt, ca.key, ca.srl, myhost.crt, myhost.csr, and myhost.key. These are certificates (.crt), keys (.key), a certificate signing request (.csr), and a serial number record file (.srl) used in the signing process. Note that the myhost files will have different names on your system (ubuntu in my case).
Three of them get copied to the /etc/mosquitto/ directories:
$ sudo cp ca.crt /etc/mosquitto/ca_certificates/
$ sudo cp myhost.crt myhost.key /etc/mosquitto/certs/
They are referenced in the /etc/mosquitto/mosquitto.conf file like this (with your own host name in place of myhost):
cafile /etc/mosquitto/ca_certificates/ca.crt
certfile /etc/mosquitto/certs/myhost.crt
keyfile /etc/mosquitto/certs/myhost.key
After copying the files and modifying the mosquitto.conf file, restart the server:
$ sudo service mosquitto restart

3. Checkpoint
To validate the setup at this point, use the mosquitto_sub client (if not already installed, it is part of the mosquitto-clients package installed above). Change to the ca_certificates folder and run:
$ mosquitto_sub -t \$SYS/broker/bytes/\# -v --cafile ca.crt
The topics are updated every 10 seconds. If debugging is needed you can add the -d flag to mosquitto_sub and/or look at /var/log/mosquitto/mosquitto.log.

4. Generate client certificates
The following openssl commands create the client certificates:
$ openssl genrsa -out client.key 2048
$ openssl req -new -out client.csr -key client.key -subj "/CN=client/O=example.com"
$ openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAserial ./ca.srl -out client.crt -days 3650 -addtrust clientAuth
The argument -addtrust clientAuth makes the resulting signed certificate suitable for use with a client.

5. Reconfigure
Add the require_certificate line to the end of the /etc/mosquitto/mosquitto.conf file:
require_certificate true
Restart the server:
$ sudo service mosquitto restart

6. Test
The mosquitto_sub command we used above now fails, since the client no longer presents a valid certificate. Adding the --cert and --key arguments satisfies the server:
$ mosquitto_sub -t \$SYS/broker/bytes/\# -v --cafile ca.crt --cert client.crt --key client.key
To obtain the corresponding certificates and key for my server (named ubuntu), the same syntax and commands apply, with the server's names in place of the client's.

Conclusion
This first part established a secure connection from a remote thing to the MQTT broker.
In the next part we will restrict this connection to TLS 1.2 clients only and enable the websocket connection.
Saw this great question in the Developers forum https://community.ptc.com/t5/ThingWorx-Developers/Thingworx-Permission-Hierarchy/m-p/556829#M29312. Answered it there; copying it here:

Question:
Hi, I have a few questions regarding the permissions model in ThingWorx. I can't find any documentation that explains it clearly. Hoping someone can help, or point me in the right direction for more in-depth documentation.
My understanding is that permissions can be set at a number of different levels:
- Collection Level
- Template Level
- Instance Level
- Thing Level
My question is, how do these levels interact with one another? Do they all get 'AND'ed together, or do those at the lower levels supersede the ones set at higher levels? e.g., if I set some visibility at the collection level, would this be overridden by me setting a different visibility at, say, the Template instance level, or would both visibility permissions be valid?
At each level there is the ability to override (e.g., for a particular property or service). How does that fit in the hierarchy?
I have read that in ThingWorx 'deny' always supersedes an 'allow' permission. Is this still the case if I set deny at the collection level and then gave 'allow' permissions at a lower level; would the deny take precedence?
As far as I can tell, 'Create' permissions can only be set at the collection level. Does this mean that I am unable to restrict one set of users to create things of one template, and a different set of users to create another type of thing?
Thanks in advance for any replies.

Answer:
Great question!
Thing/Entity level permissions always take precedence. So if you set permissions on the Collection, then on the Template, then on the Entity, it will first look at the Entity, then fill in with the Template, and then the Collection. For example, if:
- the Collection says you can't execute any Service,
- the Template says you can execute Service 1 but not Service 2, and
- the Entity says you can execute Service 2 and leaves Service 1 as inherited,
the end result is that the user can execute Services 1 and 2.
On the Template and Entity you can find the Override ability, that is, to specifically allow or disallow the execution of a Service or the read/write of a Property.
What is a BEST PRACTICE?
1. Give the System user all service execute permissions at the Collection level
2. Give User Groups 'blanket' permissions to Property read/write at the ThingTemplate level
3. Give User Groups only Override permissions to execute Services at the ThingTemplate level
4. Override User Group permissions to DENY property read on potential properties they are not supposed to read, at the ThingTemplate level
Generally, most properties can be fully accessed by all users, and the blanket permission on a ThingTemplate is fine. It is very BAD to give user groups blanket permission to Service execute; that should always be done by Override.
The Entity hierarchy overrides the Allow/Deny hierarchy, but within a single level (Collection / Template / Entity) Deny wins over Allow.
Create is indeed only set at the Collection level; however, the way to secure this is to give the System user the Create ability and create wrapper services that use the CreateThing service, which you can then secure for specific Groups. So you could create a CreateNewThingType1 and a CreateNewThingType2, for example, and give User Group 1 permission to Type 1 creation and User Group 2 permission to Type 2 creation.
Hope that helps.
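As a quick illustration of the wrapper approach, here's a sketch of what a CreateNewThingType1 service body might look like (the template name and the newThingName STRING input are illustrative; grant User Group 1 execute permission on this wrapper while only the System user holds collection-level Create):

// Wrapper around the collection-level CreateThing service
var params = {
    name: newThingName /* STRING, service input */,
    description: "Created via secured wrapper" /* STRING */,
    thingTemplateName: "ThingTemplateType1" /* THINGTEMPLATENAME */
};
Resources["EntityServices"].CreateThing(params);

// New Things must be enabled and restarted before use
Things[newThingName].EnableThing();
Things[newThingName].RestartThing();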
PostgreSQL is a powerful, open source object-relational database system that provides unlimited database size. ThingWorx 6.5 introduces PostgreSQL as a persistence provider and supports High Availability. The main advantages of ThingWorx with PostgreSQL are:
1. Highly customizable: PostgreSQL includes a framework that allows developers to define and create their own custom data types, along with supporting functions and operators that define their behavior. Triggers and stored procedures can be written in C and loaded into the database as a library, allowing great flexibility in extending its capabilities.
2. Synchronous replication: PostgreSQL streaming replication is asynchronous by default. Synchronous replication offers the ability to confirm that all changes made by a transaction have been transferred to one synchronous standby server. This extends the standard level of durability offered by a transaction commit. The only way data can be lost is if both the primary and the standby suffer crashes at the same time.
3. Write-ahead logging for fault tolerance: The Write Ahead Log (WAL) is the feature of PostgreSQL that allows it to recover data, usually up to the point where the server stopped. As you make changes to your data, PostgreSQL aggressively writes those changes to the WAL. PostgreSQL issues a checkpoint when a buffer limit is reached. When PostgreSQL restarts, it replays the changes from the WAL since the last checkpoint, to bring the database back to the state of the last completed commit. The master node sends a live stream of data changes to the slave nodes through the WAL; the slaves apply this data and stay up to date.
4. Point-in-time recovery: Point-in-time Recovery (PITR) is also called incremental database backup, online backup or archive backup. This mechanism uses the history records stored in the WAL files to roll forward changes made since the last full database backup. With PITR, database backup downtime can be totally eliminated because backups and system access can happen at the same time, and we back up only the latest archive log files since the last backup instead of taking a full database backup every day.
ThingWorx streams data from connected devices, and PostgreSQL handles it with greater scalability. In ThingWorx, PostgreSQL acts as a persistence provider that stores both run-time data and metadata about Things. Run-time data is the data that is persisted once the Things are composed and is used by connected devices to store their data. Streams and value streams collect huge amounts of data; once the streaming data reaches a limit of 50 GB, Neo4j can't handle the performance. For example, a single Stream that has 50 properties gathering data from 10,000 devices will quickly hit the memory limit with the Neo persistence provider. So it is strongly recommended to choose PostgreSQL to avoid these performance issues.
Overview of Installing ThingWorx PostgreSQL:
1. Install the latest version of Java and make sure environment variables are configured.
2. Follow the instructions in Installing Thingworx 6.5 to install Tomcat. Instructions/commands may vary for different Linux flavors.
3. Install PostgreSQL. For Linux/Unix environments, see the YUM-Installation Guidelines.
4. Create 'ThingworxPostgresqlStorage' and 'ThingworxPlatform' folders in the root directory ( / ) and assign access permissions to the user.
5. Copy the modelproviderconfig.json file (from the ThingWorx download package) to the 'ThingworxPlatform' folder.
6. Execute the ThingworxPostgresSchemaSetup and ThingworxPostgresDBSetup scripts (.bat for Windows and .sh for Unix/Linux environments); for further instructions follow the Getting Started with PostgreSQL ThingWorx Administrators Guide.
7. Restart Tomcat.
Official name: DataStax Enterprise (DSE), sometimes referred to as Cassandra. Note: DBA skills are required; free self-paced training can be found here: Training | DataStax. The extension package can be obtained through Technical Support.
ThingWorx 6.0 introduces DSE as a backend database, scaling to a much greater byte count, as Neo4j hits performance limitations at 50 GB. Some of the main reasons to consider DSE are:
1. Elastic scalability: Allows you to easily add capacity online to accommodate more customers and more data when needed.
2. Always-on architecture: Contains no single point of failure (as with traditional master/slave RDBMSs and other NoSQL solutions), resulting in continuous availability for business-critical applications that can't afford to go down.
3. Fast linear-scale performance: Enables sub-second response times with linear scalability (double the throughput with two nodes, quadruple it with four, and so on).
4. Flexible data storage: Easily accommodates the full range of data formats (structured, semi-structured and unstructured) that run through today's modern applications.
5. Easy data distribution: Read and write to any node, with all changes automatically synchronized across the cluster, giving maximum flexibility to distribute data by replicating across multiple datacenters, the cloud, and even mixed cloud/on-premise environments.
Note: Windows+DSE is currently not fully supported.
Connecting ThingWorx (prerequisite: a fully configured DSE database):
1. Obtain the dse_persistancePackage.
2. Import it as an extension in Composer.
3. In Composer, create a new persistence provider.
4. Select the imported package as the Persistence Provider Package.
5. In the Configuration tab:
   - For Cassandra Cluster Host, enter the IP address set in cassandra.yaml, or localhost if hosted locally
   - Enter a new or existing Cassandra Keyspace name
   - Enter the Solr Cluster URL
   - Other fields can be left at default (*)
6. Go to Services and execute the TestConnectivity service to ensure a True response.
7. When creating a new Stream, Value Stream, or Data Table, set Persistence Provider to the one created in the previous steps.
Currently all reads and writes are done through ThingWorx, and all ThingWorx data is encoded in DSE. Opcenter still allows you to see connected streams, data tables and value streams.
(*) SimpleStrategy can be used for a single data center, but NetworkTopologyStrategy is recommended for most deployments because it is much easier to expand to multiple data centers when required by future expansion.
Is there a limit of data per node? 1 TB is a reasonable limit on how much data a single node can handle, but in reality a node is not at all limited by the size of the data, only by the rate of operations. A node might have only 80 GB of data on it, but if it's continuously hit with random reads and doesn't have a lot of RAM, it might not even be able to handle that number of requests at a reasonable rate. Similarly, a node might have 10 TB of data, but if it's rarely read from, or if only a small portion of its data is hot (so it can be effectively cached), it will do just fine. If the replication factor is above 1 and reads are not done at consistency level ALL, other replicas will be able to respond quickly to read requests, so there won't be a large difference in latency seen from a client's perspective.