
IoT Tips

Generating and Reviewing JMeter Results

Overview

The fourth in a series of articles on load testing with JMeter, this one covers pushing the limits of a test to see how much the application can handle, as well as generating and analyzing reports once the testing completes. This article rounds off the basics of JMeter, such that anyone should be able to perform enterprise-level load testing after reviewing the content here.

Multiple criteria can be used to evaluate results, including:
- response time (as monitored both by JMeter and by some other tool on the system side)
- throughput
- number of errors
- resource saturation
- CPU, memory, disk, and network utilization

Depending on the use case, some of these may be considered more important than others. For instance, some customers don't care if users wait a while for results to appear on the page (response time), because they set their users' expectations and mitigate the experience with well-designed loading graphics. With response times secondary, the real issues center around data loss or system outages, so resource utilization and number of errors become the more important indicators of system health. Request and database timeout errors are particularly telling, as they occur most often when resources are saturated and data loss follows.

It is typical for many customers to find preventing data loss and promoting data integrity more important than preventing long response times. Consider which of these factors matters most to your use case as you determine what kind of information to gather and review in your reports.

How to Create Client-Side Reports in JMeter

Creating reports for the client-side data is very simple using JMeter, both from the command line and within the UI. These reports include graphical displays of response times, information about the number and type of response errors, and other criteria of performance used to gauge the success or failure of a load test. Follow these steps to generate an index file which, when opened in your browser of choice, shows all of the relevant JMeter data.

Tutorial:
1. Create an empty directory in which to store the reports.
2. To start a test with the correct options for report generation, use this command:
   jmeter -n -t <test JMX file> -l <outputfile.jtl/csv> -e -o <path to output folder>
3. Alternatively, once a test completes, generate the HTML report after the fact with:
   jmeter -g <outputfile.jtl/csv> -o <path to output folder for html report>
4. Running the above commands generates the report files in the output folder. When the test is complete, go ahead and close the JMeter client console windows to terminate them once they are finished; optionally, run multiple tests sequentially using the same jmeter-server windows.
5. Open the "index.html" file in the output folder to view the results.

At any time, modify the settings of this "HTML dashboard" using the details from the JMeter user manual, which describes many options for these dashboards, as well as recommendations on how to group and format the results in ways which best convey the success or failure of the test, based on the custom requirements of the application and how granular the view needs to be. A concrete end-to-end example of the commands above follows below.
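As a quick illustration of these commands with concrete values (the file names are made up, and the report output folder must be new or empty):

jmeter -n -t loadtest.jmx -l results.csv -e -o ./html-report

# or, to build the report afterwards from an existing results file:
jmeter -g results.csv -o ./html-report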
Most of the time, the default settings work fine. The charts on the landing page aren't labeled very well, so click on the Response Times submenu (this page may take some time to render if there is a lot of data). Next, scroll down to see all the requests that occurred and sort them by how long they took to complete. Anything which took over 5 seconds (or more, depending on what is expected) should be investigated as part of the post-test analysis: does something need to be tuned or optimized? This is how to tell which request is holding things up for your customers.

There is also a chart that shows the overview, grouping the response times by how long they took to demonstrate the health of the system more concretely. Typically, most of the requests are quite fast, with a few that had errors or took a bit longer; this represents expected behavior and is pretty typical for web activity.

You can also generate the report through the main JMeter client: give it a results file and an output directory, and it produces the same index file.

There are log files in each of the JMeter client directories called "jmeter-server.log". These files may show the wrong timezone, but the elapsed times are correct, and they will show when the JMeter clients started, how many threads they ran, which servers were which, and whether there were any errors. Not all errors mean a failed test, so review anything that appears and determine what is expected. Consider designing a batch script to gather all of these logs together, or even analyze them automatically to extract only the relevant information; a minimal sketch follows below.
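For instance, a minimal shell sketch (the client directory layout here is an assumption; adjust the paths to match your installation):

# collect the jmeter-server.log from each client directory into one place
mkdir -p ./collected-logs
for dir in ./jmeter-client-*; do
  cp "$dir/jmeter-server.log" "./collected-logs/$(basename "$dir").log"
done
# flag anything that looks like an error for manual review
grep -i "error" ./collected-logs/*.log > ./collected-logs/error-summary.txt || true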
How to Create Server-Side Results in DynaTrace

Collecting data from the environment, including CPU usage, memory utilization (used vs. total), garbage collection times, and other metrics of system health on the server, requires an external tool. PTC's official tool for this is DynaTrace (PTC System Monitor). PTC offers a runtime license for DynaTrace to anyone who buys certain products, including Kepware Server, ThingWorx Foundation and Navigate, Windchill, Integrity, and more. Read more information about DevOps on the PTC Community, and stay tuned for more articles on the subject to come from the EDC.

Another option would be something like Telegraf and Grafana (from the previous blog post), which facilitate creating dashboards around the data output specific to the needs of the application, and which can keep monitoring the application even once it goes live. It can certainly be worth using such a tool for monitoring the server side, but the set-up takes more time. Likewise, many VMs have monitoring faculties for CPU usage and memory utilization built in, but DynaTrace also has visualization, consolidation of system elements, and other features that make it easy to use right out of the box. Be sure to review PTC's full DynaTrace documentation.

As an example, consider a ThingWorx Navigate system with Windchill and ThingWorx Foundation set up side-by-side, where DynaTrace charts the overall response times of the server side of the system. JMeter collects the statistics on what the client experience looks like, while another tool is required to collect the server-side metrics like CPU usage and memory utilization, things that indicate the health of the VM or computer hosting the application. An older version of DynaTrace, available for free to all ThingWorx customers from the PTC Downloads Site (under various product listings), works much the same way.

In DynaTrace, you can build new dashboards using PurePaths. You can also look at the response times for each service, but be sure to change the response limit to a large number so that all of the results are returned in the PurePaths dashboard.

In the example system, the longest service that ran took 95 seconds to fully respond. More specific analysis of that service can now begin. Perhaps it needs to be tuned or otherwise optimized to handle the number of threads, i.e. the number of users. Perhaps the system needs more resources, or the VM isn't large enough for the test. Perhaps more JMeter clients and system resources are required. Something will explain the long response time, and that will inform what work might still remain before the system can scale up to the enterprise level.

How to Use the Test Results

Load testing often means scaling the test up a little more each time until the system eventually breaks or the target performance is reached. Within JMeter, this won't mean increasing the overall number of threads per JMeter client, but instead scaling horizontally to other JMeter clients (as covered in the previous blog post). Now that the remote or distributed clients are configured and the test is running, how do we know when the test is beginning to fail?

It turns out that this answer is not a simple one. Which results are considered desirable will vary from one customer to the next based on many factors, and analyzing the test results is a massive topic all on its own. However, there is one thing any customer will care to review, and that is the response time overview chart found within the JMeter reports. This chart can be used to compare the performance of the majority of threads against a baseline, indicating the point at which the test begins to fail, i.e. the point at which the limits of the system are reached.

The easiest way to determine a good standard response time for a load test, a baseline, is to start with a single JMeter client and record the response times for just 1-5 threads. You can record the response times for individual requests, particularly queries and other services with expected long response times, or the average response times across all requests or groups of requests, if the performance of some mashups is more important than others.

This approach is better than relying on the response times seen in a browser, because HTML pages load differently when rendered in a browser, with different graphical resource requirements than what is requested in JMeter. Note that some customers will also manually record response times within a separate browser-based test scenario during load testing, either as a sanity check or as part of their overall benchmarking to further validate the scalability of the application; but this wouldn't involve JMeter, given that browsers load things differently and cross-comparison is a bad idea.

Once the baseline response times are established, start increasing the thread counts across the many JMeter clients until you see the response times go up on average.
PTC's standard criteria for load testing are exceeded when the average response times roughly double, or when the system seems overwhelmed by the user load on the server side (which is what to look out for in DynaTrace or the external system monitor). At this point, the application is said to have reached a bottleneck, which could be a simple tuning problem, or the system could be saturated by resource requirements. Either way, the bottleneck is proof that the system can't take any more threads without users beginning to notice and the response times approaching an unreasonable delay.

Other criteria can be used as well, say if any one thread takes more than 5 seconds to respond. Also ensure there are no unexpected errors, as gateway errors represent failed tests too. Sometimes there will be errors even when the test is successful, though, so consider monitoring the error percentage, a column in the Summary Report tab of JMeter, to see what is normal. The throughput column may also be something to monitor: many watch throughput as the thread count increases to ensure there is no degradation in performance (which may indicate hardware or sizing constraints). In the Summary Report, thread group results from all of the clients appear side by side, differentiated from each other by their unique ports.

Conclusions

Generating and reviewing reports within JMeter is straightforward and easily customizable. Be sure to also monitor the system itself using an external tool like DynaTrace, PTC's official System Monitor, which has a lot of value considering how easy it is to use out of the box. If the system looks healthy on the server side and the response times are within an acceptable range on the client side, then the application is ready for enterprise use. Be sure to generate a baseline for response times within JMeter, remembering that browsers have different loading processes than JMeter, and not to cross-compare.

This article constitutes the end of the basics. The final article to come will talk about more advanced test design features and best practices, so stay tuned!
In the ptc-windchill-extension-1.0.0-14.zip archive there is an extension called infotableselector_ExtensionPackage.zip. This extension enables the use of the widget called Infotable Selector, which can be used to clear the selection in a grid.
To set up single sign-on (SSO) with Windchill, we can just follow the steps in the Windchill extension guide. However, there is a major problem using WebSocket connections for the EMS or Edge SDKs from devices, because the Apache server in front of Windchill blocks the "ws" and "wss" protocols, much as a proxy server might. There may be a couple of ways to avoid this issue, but I suggest changing the filter mappings for the SSO filter. The Windchill extension guide says to set the filters for all incoming ThingWorx URLs by using the "/*" filter mapping. Instead, use the settings below in the "web.xml" of the ThingWorx server to avoid the problem stated above. They look quite long and complicated, but they simply enumerate the URL patterns that the "AuthenticationFilter" already covers by default, minus the WebSocket-related URLs.

<!-- Windchill Extension SSO Start -->
<filter>
  <filter-name>IdentityProviderAuthenticationFilter</filter-name>
  <filter-class>com.ptc.connected.plm.thingworx.wc.idp.client.filter.IdentityProviderAuthenticationFilter</filter-class>
  <init-param>
    <param-name>idpLoginUrl</param-name>
    <param-value>http(s)://<SERVERHOSTURL>/Windchill/wtcore/jsp/genIdKey.jsp</param-value>
  </init-param>
</filter>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/extensions/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/action-authenticate/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/action-login/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/action-confirm-creds/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/action-change-password/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/ThingworxMain.html</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/ThingworxMain.html/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/Server/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/ApplicationKeys/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/Networks/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/Dashboards/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/DirectoryServices/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/Authenticators/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/PersistenceProviderPackages/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/tunnel/wsadapter.jsp</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/tunnel/adapter.jsp</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/Logs/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/Resources/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/Subsystems/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/Users/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/Home/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/StateDefinitions/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/StyleDefinitions/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/ScriptFunctionLibraries/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/AtomFeedService/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/DataShapes/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/Importer/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/ImageEncoder/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/Exporter/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/ExportDatabase/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/ExportTheme/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/ExportDefaultEntities/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/ImportDatabase/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/DataExporter/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/DataImporter/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/Widgets/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/Groups/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/ThingPackages/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/Things/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/ThingTemplates/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/ThingShapes/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/DataTags/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/ModelTags/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/Composer/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/Squeal/index.html</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/Runtime/index.html</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/Mashups/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/Menus/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/MediaEntities/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/loaders/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/demos/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/ExtensionPackageUploader/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/ExtensionPackages/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/FileRepositoryUploader/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/FileRepositoryDownloader/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/FileRepositories/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/xmpp/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/LocalizationTables/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/Organizations/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/RemoteTunnel/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/PersistenceProviders/*</url-pattern></filter-mapping>
<filter>
  <filter-name>IdentityProviderKeyValidationFilter</filter-name>
  <filter-class>com.ptc.connected.plm.thingworx.wc.idp.client.filter.IdentityProviderKeyValidationFilter</filter-class>
  <init-param>
    <param-name>keyValidationUrl</param-name>
    <param-value>http(s)://<SERVERHOSTURL>/Windchill/login/validateIdKey.jsp</param-value>
  </init-param>
</filter>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/extensions/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/action-authenticate/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/action-login/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/action-confirm-creds/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/action-change-password/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/ThingworxMain.html</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/ThingworxMain.html/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/Server/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/ApplicationKeys/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/Networks/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/Dashboards/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/DirectoryServices/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/Authenticators/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/PersistenceProviderPackages/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/tunnel/wsadapter.jsp</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/tunnel/adapter.jsp</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/Logs/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/Resources/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/Subsystems/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/Users/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/Home/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/StateDefinitions/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/StyleDefinitions/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/ScriptFunctionLibraries/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/AtomFeedService/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/DataShapes/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/Importer/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/ImageEncoder/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/Exporter/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/ExportDatabase/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/ExportTheme/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/ExportDefaultEntities/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/ImportDatabase/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/DataExporter/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/DataImporter/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/Widgets/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/Groups/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/ThingPackages/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/Things/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/ThingTemplates/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/ThingShapes/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/DataTags/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/ModelTags/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/Composer/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/Squeal/index.html</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/Runtime/index.html</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/Mashups/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/Menus/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/MediaEntities/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/loaders/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/demos/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/ExtensionPackageUploader/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/ExtensionPackages/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/FileRepositoryUploader/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/FileRepositoryDownloader/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/FileRepositories/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/xmpp/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/LocalizationTables/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/Organizations/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/RemoteTunnel/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/PersistenceProviders/*</url-pattern></filter-mapping>
<!-- Windchill Extension SSO End -->
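After saving these changes to web.xml, restart the ThingWorx Tomcat instance so the new filter mappings take effect. For example, on a Linux install running Tomcat as a service (the service name is an assumption and varies by installation):

sudo systemctl restart tomcat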
ThingWorx 7.4 introduces a new licensing system. A license file (license.bin) needs to be placed in the ThingworxPlatform folder. A new license file is also required when you upgrade from 7.4 to a major or minor release (but not for service pack-level releases). For example:
- If you are using version 7.3, a license is not required.
- If you upgrade from version 7.4.1 to version 7.4.2, a license upgrade is not required.
- If you upgrade from version 7.4.3 to version 7.5.2, a license upgrade is required.
Refer to the Installing ThingWorx 7.4 guide or Upgrading ThingWorx 7.4 guide for detailed process steps. Paid customers have unlimited use of entities in 7.4.0. Because the license file is currently locked to the version rather than to an SCN/host, and is part of the download package on PTC Support, customers can use the same download for multiple instances. The Developer Trial Edition provides a constrained license file (5 users, 100 Things, 120 days), included in the on-premise download package on the Dev Portal. The Developer Trial Edition for Manufacturing (Kinex) provides a constrained license file (5 users, 100 Things, no Composer access), included in the download package on the Kepware Portal.

A new Licensing subsystem is now available, with the following services:
- AcquireLicense – retrieves the feature entitlements in license.bin; used when a new license is dropped in the folder (no instance restart needed)
- GetCurrentLicenseInfo – returns info on the current license file
- GetRemainingDaysInLicense – used for trial editions
- GetLicenseUsageData – returns information about users' license usage
- PurgeLicenseUsageData – deletes the license usage data that is two years old and older
These services can also be invoked over REST like any other subsystem service; a sketch follows below.
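A rough sketch with curl (the credentials and host are placeholders, and this assumes the subsystem is exposed under the name "LicensingSubsystem"):

curl -u Administrator:password -X POST \
  -H "Content-Type: application/json" -H "Accept: application/json" \
  "https://localhost:8443/Thingworx/Subsystems/LicensingSubsystem/Services/GetCurrentLicenseInfo"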
Announcing the Final Installment

JMeter for ThingWorx, the Comprehensive Guide and Best Practice Tips

This is the final post on using JMeter for ThingWorx. Below are best practice tips for using JMeter and for load testing in general. Attached to this post is a comprehensive guide including all of the information from every post we've made on JMeter, including the tutorials. For a more central source, feel free to download the guide, or see the past posts here:
- JMeter for ThingWorx (original post)
- Building More Complex Tests in JMeter
- Distributed Testing with JMeter
- Generating and Reviewing JMeter Results

JMeter Best Practice Tips

Use Distributed Testing. As already mentioned in a previous post, each JMeter client can only handle about 150-250 threads depending on the complexity of the tests, and each client will need around 1 CPU and up to 8 GB of RAM for the Java heap. Some test plans will run with fewer host resources, so resizing the test client VM up or down is often required during test development. Create a batch or shell script to start the multiple JMeter clients for greater ease of use; a minimal sketch follows below.
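For example, a shell sketch of such a launch script, starting three JMeter server processes on one host (the ports are illustrative, and the property names for port handling can vary by JMeter version):

# start one JMeter server per port so each client gets its own RMI endpoint
for port in 1099 2099 3099; do
  jmeter-server -Dserver_port=$port -Jserver.rmi.localport=$port &
done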
Use Non-Graphical Mode. Non-graphical mode allows the system to scale up higher; client processing uses up resources just to keep the simulation running, but with graphical mode turned off, there is less of an impact on the response times and other results. Graphical mode is essentially only used for debugging.

Turn off Embedded Resources. This setting reloads all of the typically cached requests over and over; there will be far more download requests, to the exclusion of other requests, than is helpful. Ensure this box is not checked, especially in the HTTP Request Defaults element. Browser caching means that this setting doesn't actually simulate a proper user load, given that many of the reloaded resources would not be reloaded by actual users. Use it incrementally, for one or two HTTP requests only, if there is a reason why those requests might need to download fresh images, scripts, or other resources with each call; for instance, simulate page timeouts using this once per hour or something similar. Using this across the whole project will prevent it from scaling well, while not actually simulating real-world conditions.

Avoid Using Listeners. For instance, the "View Results Tree" listener uses additional resources that may impact the results in disingenuous ways, reflecting the needs of the clients themselves rather than the actual response times of the server. Many listeners are only for debugging a handful of threads while designing the tests; a list of recommended listeners for different purposes is in the JMeter documentation. The Summary Report is the only one you want enabled, as it exports the results as a CSV or similarly formatted file, which can then be used to build reports.

JMeter CAN Handle SSO. JMeter can authenticate into and test an SSO-enabled system. Sometimes the SSO configuration is essential for customers, and they may be quick to assume that they therefore cannot use JMeter, but that's not entirely true. Some external tools that might help with this are BlazeMeter (mentioned again in just a moment) and Fiddler, a good tool for decoding what data a particular SSO setup exchanges during the authentication process.

Use Logic Controllers for Parametrization. Parametrization is critical to mirroring a proper user load and allowing different data sets to be queried or created; the load should seem organic, random in the right ways, with actions occurring at random times rather than predictable ones, to prevent artificial peaks of usage that don't represent real usage of the Foundation server. Random Order Controllers direct the threads down different paths based on random dice rolls, allowing for a randomized collection of user activity each time, not something that has to be regenerated like a set of Boolean values specified in an input CSV and used to navigate a series of true-or-false switches. Switches just look for an environment variable to be either 1 or 0, and when a thread hits a switch that's a 1, it triggers the switch below, running the elements in the order given under the transaction controller that goes with the switch. The 1's and 0's are given in the CSV input file, so randomizing that input file randomizes the execution of the switches too.

Use Commercial Add-Ons. There are many external, add-on tools and plugins which enhance JMeter's capabilities. One such tool is BlazeMeter, which has some free and some paid options to help create better reports, automatically removing much of the "garbage" REST calls (which would otherwise need to be deleted manually) and providing more consumable test reports right out of the box. Other tools and plugins include:
- Maven
- Netbeans
- SonarQube
- Jenkins
- Autometer
- Gradle
- Amazon EC2
- Lightning
- IntelliJ IDEA
- Cassandra
- Grafana
For more best practice information, see the JMeter Best Practices manual.

General Load Testing Guidelines

Concurrency Requirements – How to Properly Estimate the Size of the Load Test

Take a brand new ThingWorx-based app: how will people access the system, and how often? How many are business users? How many are engineers? What do they do? Many assume that every named user in the corporate LDAP will need access to the server, often tens of thousands of users; this generally drastically oversizes the system. Load testing for many thousands of users is very hard and requires a lot of set-up, tuning, and optimization to get right, so if it seems that thousands of users are expected, validating this claim is important: most customers don't really have that many concurrent users in an engineering system. Use estimates based on how many people work at which offices, which time zones those people are in, and what kinds of users they are. Do they need access to engineering data? Perhaps there are simpler mashups for them that use fewer resources. One tool PTC offers for these sorts of estimations is the Windchill Sizing Calculator, which accounts for office time zone overlap. Other ways to estimate include:
- Analyzing the business processes: things like how long workloads typically take to complete and how many workloads are generated per day, converted into hours, minutes, or seconds as desired for the peak duration, the length of the test.
- "Day in the Life" modelling, or considering things like "what does user X do in a day?" Maybe user X checks out some drawings, edits them, and then checks them back in at 4:30. Maybe user Y actually digs into the underlying parts and assemblies, putting in change requests or orders throughout the day instead of waiting for the end. Models are made based around the types of users.
- Also consider the worst case scenarios:
What are the longest running activities? What produces the largest data transfers? Which activities have large, heavy database queries? When is the peak overlap of usage: beginning- and end-of-day downloads and check-ins, or reports that are generated regularly? How do these impact the foreground users?

For a simpler estimate, start with a percentage of the named user count; anywhere from 5-15% is a good ballpark (for example, 2,000 named users would suggest roughly 100-300 concurrent users). Don't overestimate just to feel like the application has been financially worth it: even if everyone is logged in and using it all at once, which is unlikely, load testing for every single user doesn't take into account the fact that people pause between clicks to think, type emails, get coffee, and so forth. Fewer people than expected are actually doing concurrent activities like loading web pages and updating data streams. Whenever possible, use concurrency data from existing customer systems to guide the estimate for the new system; legacy systems are great places to start.

Use Grafana to monitor the system side throughout the load test, which is also required to know that the test has been successful, and set up Grafana to monitor the application once it goes live as well, both to prevent technical issues with the server and to mitigate them more rapidly. Also remember that PTC Technical Support is here to help! Provide thread dumps with an open case to any TSE, and they will help troubleshoot the tests and review any errors in the ThingWorx or Tomcat logs.
There are times when the raw sensor readings are not directly useful for monitoring conditions on a machine. The raw data may need to be transformed before it can provide value within your monitoring applications. For example, instead of monitoring individual pressure readings reported each second, you may only be concerned with the maximum pressure reading each minute. Or maybe you want to monitor the median value of the electrical current pulled by a machine every five seconds to smooth out the noise of raw sub-second sensor readings. Or maybe you want to monitor whether the average hourly temperature of a machine exceeds a control limit in 2 of the past 3 hours.

Let's take the example of monitoring the max pressure of a valve reading over the past 45 seconds for your performance dashboard. How do you do it? Today, you might add a new property (e.g., "MaxPressure") to your valve Thing. Then, you might add a subscription that triggers when the Pressure property value changes and calls a service FindMax() to return the maximum pressure for that time interval. Lastly, you might write that maximum result value to the new property MaxPressure to store it and visualize it in the dashboard. Admittedly not the worst process, but also not the most efficient.

Coming in 8.4, we will now offer Property Transforms, which enable you to automatically execute common statistical calculations—like min, max, average, median, mode and standard deviation, as well as SPC calculations—directly within a property itself. These transforms are configurable to run at certain intervals of time or points collected, and can also be used with our alerting subsystem to drive behavior and user action where necessary. There is no longer a need to create an elaborate subscription-based logic flow just to do simple calculations! This is just another way that ThingWorx 8.4 offers a more productive environment for IoT developers than ever before.

Ready to see it in action? Check out the video below by our product manager Mark!

Comment your thoughts below!

Stay connected,
Kaya
The App URI in the ThingWorx Remote Thing tunnel configuration specifies the endpoint of the specified tunnel. The default value (/Thingworx/tunnel/vnc.jsp) points to the built-in ThingWorx VNC client that can be downloaded through the Remote Access Widget in a mashup to provide VNC remote desktop access. Leaving the App URI blank results in the tunnel being connected to the listen port on the user's machine, as specified in the Remote Access Widget. In this case the user must supply the application client (e.g., an SSH client) in order to connect to the tunnel endpoint.
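For example, if the widget's listen port is set to 8345 (an illustrative value) and the tunnel's remote endpoint is an SSH server on the device, connecting with a standard SSH client might look like this:

ssh -p 8345 deviceuser@127.0.0.1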
Putting this out there because this is a difficult problem to troubleshoot if you don't do it right. Let's say you have an application where visibility permissions are in effect, so the Users group has been removed from the Everyone organization. Now you have a Thing "Thing1" with properties that are being logged to a ValueStream "VS1". What do you need to make this work? Obviously, the necessary permissions to write the values to Thing1 and read the values from Thing1 (for the UI). But for visibility, what you'll need is:
- Visibility to Thing1 (makes sense)
- Visibility to the Persistence Provider of the ValueStream VS1!
No, you don't need visibility to the ValueStream itself, but you DO need visibility to the Persistence Provider of that ValueStream. The way the lack of this permission showed up in the Application Log was a message about trying to provide a Null value.
Whether you're new to ThingWorx or you're a seasoned user, understanding the Thing Model is key to accelerating your IoT development. Today, I'll dive into what ThingShapes, ThingTemplates and Things are and how to use them to accelerate development.

Before I dive into the definitions of these concepts, let's first consider the wide array of machines that exist out there in the world. The variety is huge: there are MRI machines, 3D printers, laser cutters, CNC machines, tractors, and so much more.

At their core, all MRI machines share similar properties and capabilities: they have a name, a physical location, a magnetic strength, a radio frequency current, and the ability to visually display what's going on inside the human body. There are, however, different types of MRI machines, and, while they are fundamentally the same type of machine, there are notable differences as well. When creating our IoT app, it's important that we have a way to model these differences so that we can cascade changes across entities and reduce development time.

Let's walk through an example using MRI machines. Consider the various MRI machines that exist today: there's the traditional closed MRI machine, the open MRI machine and the standing/sitting MRI machine.

To represent the fundamental properties (i.e., characteristics or readings) and services (i.e., functionality) of a generic MRI machine (name, location, magnetic strength, etc.), we'll create a ThingTemplate. The ThingTemplate is the general definition/representation of the real-world physical thing being modeled. You can think of a ThingTemplate as a blueprint of what you're modeling. A ThingTemplate defines what a Thing is; if you're familiar with object-oriented programming, a ThingTemplate is similar to the concept of inheritance; it defines an "is a" relationship. Using our ThingTemplate, we're able to create multiple instances of the template that inherit the properties and services from that template. If you have 100 MRI machines in a particular region, rather than updating each one separately, simply updating the template will allow you to propagate these changes.

Let's say that, of our 100 MRI machines, 40 are traditional closed machines, 30 are open machines and 30 are standing/sitting machines. The traditional machines have a specific diameter of the opening where the patient goes in to lay down, and the sitting/standing machines may have a particular height of the seat where the patient sits. Because the machines have unique components/parts, the different types of machines have different maintenance services.

To model each of these "add-on" properties, we'll want to create a ThingShape. A ThingShape is a representation of particular properties or services that may optionally come in some versions of the machine but not others. The ThingShape is a single feature or piece of the physical thing that's being modeled. You can think of a ThingShape as a reusable part, or a set of properties/services that comes with some versions, but not all. A ThingShape defines what a Thing has; if you're familiar with object-oriented programming, a ThingShape is similar to the concept of composition; it defines a "has a" relationship. So, for our MRI example, we could create one ThingShape for the standing MRI and a second ThingShape for the closed MRI. The StandingMRIThingShape would have a property of "SeatHeight" and a service of "StandingMRIMaintenanceService." The ClosedMRIThingShape would have a property of "opening diameter" and a service of "ClosedMRIMaintenanceService." Just like a ThingTemplate, the properties and services that make up a ThingShape are inherited by the instances that use that ThingShape.

Finally, Things. A Thing is simply an instance of a ThingTemplate with (optionally) ThingShapes added for additional unique properties/services.

Let's say we want to model a single closed MRI machine. We'll represent the machine as a Thing that inherits from Templates and Shapes. We'll start with the MRIMachineThingTemplate so that we can create an MRI machine Thing (i.e., instance). Since this is a closed MRI machine and has the additional property of opening diameter, we'll want to make sure we include that property. To do this, we'll add the ClosedMRIThingShape.

Voilà! We now have a digital twin of our closed MRI machine with all the base properties of an MRI machine from our MRIMachineThingTemplate and all the special add-ons of the closed version with our ClosedMRIMachineThingShape.

If you're looking for even further guidance on how to model your data with the Thing Model, check out the Data Model Introduction guide on the Developer Portal to get started and the Design Your Data Model guide to learn even more.

Happy data modeling!

Stay connected,
Kaya
For those of you who aren't aware, the newest version of the Eclipse Plugin for Extension Development was made available last week in the ThingWorx Marketplace here. Because of the infancy of the product, there is not an official process for supplying release notes along with the plugin. These notes are not official or all-encompassing, but they cover the main items worked on for 7.0.

New Features:
- Added Configuration Table Wizard for code generation
- SDK Javadocs now automatically linked to SDK resources on project creation
- When creating a Service, Trace logging statements are generated inside of it (along with appropriate initializers)
- ThingWorx Source actions are now available from the right-click menu within a .java file

Bugs:
- Fixed problem where some BaseTypes were not uppercase in annotations when generating code
- Fixed error when creating and importing Extension Projects when the Eclipse install has a space in the file path
- Fixed inconsistent formatting in the metadata.xml when adding new Entities

We are hoping to have a more official release note process for the next release. Feel free to reply with questions or concerns.
Just like the perfect sandwich, we know that you have specific preferences and requirements for your ThingWorx deployment. Whether you like to keep things simple with a classic grilled cheese or you like to spice things up with a more elaborate chipotle mayo BLT, we've got you covered. Our ThingWorx Deployment Architecture Guide explains what you'll need to deploy ThingWorx in three different scenarios: production, enterprise and high-availability.

Figure: Deployment Architecture for ThingWorx on Azure in High-Availability

We've recently published Version 1.1 of the ThingWorx Deployment Architecture Guide. In it, you can find updated deployment architecture diagrams that more distinctly show the data and application layers within a ThingWorx environment. Our team has also added a new section on what you'll need to deploy ThingWorx on Microsoft Azure, PTC's preferred cloud platform.

Check it out here or in the attachment section on the right.

Stay connected,
Kaya
Connect and Monitor Industrial Plant Equipment Learning Path

Learn how to connect and monitor equipment that is used at a processing plant or on a factory floor.

NOTE: Complete the following guides in sequential order. The estimated time to complete this learning path is 180 minutes.

1. Create An Application Key
2. Install ThingWorx Kepware Server
3. Connect Kepware Server to ThingWorx Foundation (Part 1, Part 2)
4. Create Industrial Equipment Model
5. Build an Equipment Dashboard (Part 1, Part 2)
ThingWorx Monitoring and Alerting, Part 1
Using Prometheus and Grafana
By Tori Firewind, IoT EDC

Introduction and Getting Started

As ThingWorx has become a more mature product during the lifetime of the IoT EDC, so too have our dev ops recommendations. As we've stated throughout many posts now, testing is a key part of ensuring enterprise readiness, and it occurs at every stage of the process: from unit testing to preserve individual service logic, to integration tests which preserve the functionality of the application as a whole, to user and edge load testing and user experience testing, which ensure enterprise readiness. So testing is a critical component, but the process of dev ops never stops. In order to effectively test the system, a comprehensive monitoring solution is also required.

Once the application is tested and the changes pushed into production, there is no knowing with certainty that everything will run smoothly indefinitely. Random spikes in usage, server bandwidth or availability, any unforeseeable factors like these can come along and cause issues for a system. If these issues aren't detected and addressed early, then they can very rapidly morph into much larger problems: outages, data loss, inflated data tables which are hard to revert due to their size. It is critical to detect performance issues on a system as early as possible, to have as much information as is necessary to figure out where the problem is heading and what may have started it. Monitoring is key to a healthy system.

CI/CD stands for "Continuous Integration/Continuous Deployment", a never-ending cycle of improvement. Testing just once before the initial go-live isn't enough. Each system should have automated tests that run continuously, as well as monitors and alerts which reveal problems sooner. Diagnostic tools play a role as well, being the bridge from the end of the dev ops process cycle back to the beginning (monitoring into planning). A good CI/CD dev ops process will ensure that problems are found earlier, fixed more rapidly, and fixed for everyone using the system.

In a fully mature dev ops pipeline, issues are anticipated, discovered and researched before they become production outages or critical issues. These investigations or testing follow-ups produce development tasks (usually bugs, but also features at times) which then start the dev ops cycle all over again. This is why a good, efficient dev ops pipeline is needed, one which allows changes to quickly and safely go from development to production.

This is also why diagnostic tools play a role in the monitoring piece of the dev ops process. They are the bridge between monitoring and planning. Tools like Dynatrace can be configured to provide call stacks and take thread dumps when issues start to occur, before the system is performing so poorly it needs a restart, which happens automatically in a cluster and can clear out any trace of the issue.

Thread dumps are often necessary to diagnose the root cause of an issue (to permanently fix it), and doing so quickly ensures application stability and availability. That is, after all, the purpose of the dev ops process. Diagnostics is therefore an equally important piece of the dev ops Figure-8-shaped pie, and one which deserves its own spotlight in an article to come.

Every piece of the dev ops process must be viewed as equally important in its own way, lest the dev ops cycle get hung up on bottlenecks of its own.
A safe and stable system is not one which never experiences issues; it is one which has a good, efficient plan in place to handle recovery and to prevent repetition. A wholesome DevOps process is a happy DevOps process.

The Monitoring Stack

There are many monitoring options available, but in our experience one of the easiest and most effective monitoring stacks to use with ThingWorx is Prometheus for metrics gathering, with Grafana for metrics analysis and review. In a mature monitoring stack, Telegraf is also commonly installed on each VM/host to gather the system metrics (like CPU and memory usage, which we've stated are good indicators of system performance and stability in past articles on scale and size testing) and output them in Prometheus format.

Prometheus is a highly scalable open-source monitoring framework that contains out-of-the-box monitoring and alert capabilities for Kubernetes-based deployments (not covered in this article). Using Prometheus is very simple because the ThingWorx application exposes a metrics endpoint which is formatted directly for use by Prometheus. There is also built-in alerting in Prometheus, but not the ability to build dashboards for reviewing data or screenshotting it for documentation purposes. That's where Grafana comes into play. Grafana has a preconfigured Prometheus-type data source and many preconfigured dashboard templates for various applications and services. Telegraf is also easily imported into Grafana, as is shown in the section below.

The Prometheus targets in the larger diagram are expanded out on the left. For each target, some tool exports the data in a syntax which Prometheus can scrape: for VMs this can be Telegraf; for Kubernetes, the Node Exporter. The JVM has a JMX Exporter, and other tools like CX Server use Graphite. Many apps already have a Prometheus endpoint built in, like ThingWorx and ZooKeeper. Telegraf is not strictly necessary; the Node Exporter can also be used on VMs, but Telegraf is the more common choice since it is the more mature DevOps tool.

Once Prometheus is scraping the targets, alerting on them can be done with out-of-the-box Prometheus functionality, and dashboards for monitoring can be made easily in Grafana (which has built-in support for this as well). This stack does not include the diagnostics piece, i.e. something which triggers thread dumps or the like when issues do occur; there are too many ways to build a successful diagnostics piece to cover here.

How to Get Started

Getting started monitoring a ThingWorx application is incredibly easy in the latest versions. Simply open up a browser and type in the ThingWorx URL followed by "/Metrics" (i.e. <server>/Thingworx/Metrics). At this endpoint there is a specially formatted response, containing subsystem and service data, that can be read automatically by the Prometheus monitoring software. In addition to the application metrics, Prometheus can be configured to collect metrics from a node exporter at the (virtualized) operating system or container (Kubernetes) level as well.

If you haven't already, install Grafana, install Telegraf as a service, and install Docker Desktop. These are the tools required (in addition to ThingWorx, of course) to set up a simple sandbox system for familiarization with the monitoring stack recommended by PTC. The easiest way to try Prometheus on a local Windows instance is to use Docker. The command for that is found below, but first open up Docker Desktop to set the contextual parameters that the command line will need.
Then, modify the configuration file for Telegraf or create one (called telegraf.conf, in the same folder as the exe file), and put the following into the file (or uncomment it; the default config file has thousands of lines, so just search for "prometheus"):

# Output plugin
[[outputs.prometheus_client]]
  listen = "0.0.0.0:9125"

Alternatively, install the Prometheus Node Exporter tool, which will likely require some additions (not covered here) to the Prometheus config file we are about to create.

Then, create a configuration file (called prom_config_localhost_scraper.yml in the command to come) and add the following (assuming a standard localhost installation of ThingWorx):

# my global config
global:
  scrape_interval: 45s
  evaluation_interval: 30s
  scrape_timeout: 30s
  # scrape_timeout is set to the global default (10s) if unset.

rule_files:
  - prom_config_rules.yml

scrape_configs:
  - job_name: thingworx
    static_configs:
      - targets: ['host.docker.internal:8080']
    basic_auth:
      username: "Administrator"
      password: "admin!123456789"
    metrics_path: /Thingworx/Metrics
    scheme: http
    params:
      x-thingworx-session:
        - "false"

  - job_name: prometheus
    static_configs:
      - targets: ['localhost:9090']

  - job_name: Telegraf
    # If Telegraf is installed, grab stats about the local
    # machine by default.
    static_configs:
      - targets: ['host.docker.internal:9125']

This example configuration uses host.docker.internal instead of localhost for the ThingWorx target because ThingWorx is running outside of the Docker container which contains Prometheus. The yml file configures Prometheus to monitor both ThingWorx and itself, as well as the server metrics coming from Telegraf (as long as Telegraf is configured to push them). It is a sandbox-only configuration, really, as you wouldn't want to use the Administrator user, or have the password printed in plain text in the config file, in a real system. Also note the need for the x-thingworx-session parameter: runaway sessions which spawn every 30s or so (whatever the scrape interval is) will result in memory issues over time, so we don't want the scrapes to use sessions here.

The rules file referenced here (prom_config_rules.yml) needs to be created separately. This is where all of the alert rules will be defined. A rule determines whether an alert state is occurring, but without configuring the alert manager there won't be any notification; that isn't covered here but is covered extensively in the Grafana docs. Here is an alert example:

groups:
- name: alert.rules
  rules:
  # Alert when memory usage on the local machine is high.
  - alert: HighMemory
    expr: mem_used > 14000000
    for: 1s
    labels:
      severity: page
    annotations:
      summary: "High Memory"
      description: "Localhost Memory Usage is High"

Now, save these files and use Powershell to run the Docker container:

docker run -p 9090:9090 -v C:\<path_to_document>\prom_config_localhost_scraper.yml:/etc/prometheus/prometheus.yml prom/prometheus

This should download Prometheus and install it in that container (if this is the first time), allowing you to very rapidly deploy it to an endpoint of localhost:9090 by default. If there is an error like the one shown below, it means that you forgot to start Docker Desktop (the application) before opening Powershell. Docker Desktop sets system parameters required for containers to run from a command line (in Linux, it should work as long as Docker is installed for use by the command line, simple as that).
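Before relying on the Prometheus scrape, it can be worth sanity-checking the metrics endpoint and the basic-auth credentials directly. As a minimal sketch (assuming a localhost install and the same sandbox credentials as above; adjust the URL, user, and password to your system), a ThingWorx service could fetch the endpoint like this:

// Sketch: fetch the Prometheus-formatted metrics endpoint to confirm
// it responds and the configured credentials work.
var params = {
    url: "http://localhost:8080/Thingworx/Metrics" /* STRING */,
    username: "Administrator" /* STRING */,
    password: "admin!123456789" /* STRING */
};

// result: STRING in the Prometheus exposition format (one metric per line)
var result = Resources["ContentLoaderFunctions"].GetText(params);

// Log only the start of the response so the Script log stays readable
logger.info(result.substring(0, 300));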
The localhost endpoints are accessible in a browser: ThingWorx defaults to localhost:8080, Prometheus defaults to localhost:9090, and Telegraf is on port 9125. Open any of these in a browser tab to see the full monitoring stack. You can easily see whether Prometheus is working by clicking "Status" > "Targets" at localhost:9090:
If all of the targets appear as blue and show "last scrape" with a time stamp, then they're working as expected. If they don't, ensure you have the right ports, that there aren't any firewall issues (if things aren't all on localhost), and that everything is running without errors.
The last step in the process here is to install a dashboard tool like Grafana. Once this is installed and running on localhost:3000 (by default), you can display the data from Prometheus with a few configuration steps in the Grafana UI. Hover over the settings icon in the bottom left of the screen, and then click on "Data sources". Select the "Add data source" button, and then click on Prometheus. You have to type the URL again (localhost:9090), but most of the defaults will be ok here, and all you have to do is click "Save and test".
Now both targets should appear within Grafana, with their metrics showing up throughout the Grafana UI. This data source is what allows for the building of monitoring dashboards.
View full tip
Ran into this recently and thought I'd share an approach to getting a table with a multi-column distinct while still retaining all the columns of the row. If you use Distinct, you get only the columns you do the Distinct on. This isn't very helpful if you want the 'latest' or the 'first occurrences' of records in your table with a combination of fields being unique. For example, I had Process, Part, Dimension and Point, for which I had multiple value and date time entries, but I only wanted the latest entries. Following is how I solved it; if you have a better way please leave a comment! P.S.: for the query I used the awesome query builder available in the snippet section!
---------------------------------------
// query1 was built with the query builder snippet (definition omitted in the original post)
var q1Result = Things["MyThing"].QueryStreamEntriesWithData({maxItems: 99999, query: query1});

// Below creates a temporary measurement table to store the latest measurement values
var params = {
    infoTableName: "InfoTable",
    dataShapeName: "MyDatashape.DS"
};

// CreateInfoTableFromDataShape(infoTableName:STRING("InfoTable"), dataShapeName:STRING):INFOTABLE(MyDataShape.DS)
var tempTable1 = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);

// Extract only the latest measurements for the PART from the measurement result table 'q1Result'
// The way we are going to reduce this to unique measurements is:
// 1. records are in reverse order of date time
// 2. get distinct by Process, Part, Dimension, Point
// 3. step through and match against the distinct set
// 4. first match goes into the final set
// 5. upon match, remove from the distinct set
// 6. if no match then skip the record
// 7. if no more distinct records to match, break the loop
var params = {
    t: q1Result /* INFOTABLE */,
    columns: 'ProcessID,PartID,Dimension,Point' /* STRING */
};

// result: INFOTABLE
var distinctResult = Resources["InfoTableFunctions"].Distinct(params);

for (var x = 0; x < q1Result.rows.length; x++) {
    var query = {
        "filters": {
            "type": "AND",
            "filters": [
                { "fieldName": "ProcessID", "type": "EQ", "value": q1Result.rows[x].ProcessID },
                { "fieldName": "PartID", "type": "EQ", "value": q1Result.rows[x].PartID },
                { "fieldName": "Dimension", "type": "EQ", "value": q1Result.rows[x].Dimension },
                { "fieldName": "Point", "type": "EQ", "value": q1Result.rows[x].Point }
            ]
        }
    };

    var params = {
        t: distinctResult /* INFOTABLE */,
        query: query /* QUERY */
    };

    // result: INFOTABLE
    var matchResult = Resources["InfoTableFunctions"].Query(params);

    if (matchResult.rows.length == 1) {
        tempTable1.AddRow(q1Result.rows[x]);

        var params = {
            t: distinctResult /* INFOTABLE */,
            query: query /* QUERY */
        };

        // result: INFOTABLE
        var distinctResult = Resources["InfoTableFunctions"].DeleteQuery(params);

        if (distinctResult.rows.length == 0) {
            break;
        }
    }
}

// tempTable1 now holds the full rows, distinct on the 4 fields
result = tempTable1;
View full tip
Part I – Securing connection from remote device to ThingWorx platform

The goal of this first part is to set up a certificate authority (CA) and sign the certificates used to authenticate MQTT clients. At the end of this first part, the MQTT broker will only accept clients with a valid certificate. A note on terminology: TLS (Transport Layer Security) is the new name for SSL (Secure Sockets Layer).

Requirements
The certificates will be generated with openssl (check if it is already installed by your distribution). Demonstrations will be done with the open source MQTT broker, mosquitto. To install, use the apt-get command:
$ sudo apt-get install mosquitto
$ sudo apt-get install mosquitto-clients

Procedure
NOTE: This procedure assumes all the steps will be performed on the same system.

1. Set up a protected workspace
Warning: the keys for the certificates are not protected with a password. Create and use a directory that does not grant access to other users.
$ mkdir myCA
$ chmod 700 myCA
$ cd myCA

2. Set up a CA and generate the server certificates
Download and run the generate-CA.sh script to create the certificate authority (CA) files, generate the server certificates, and use the CA to sign the certificates. NOTE: Open the script to customize it at your convenience.
$ wget https://github.com/owntracks/tools/raw/master/TLS/generate-CA.sh
$ bash ./generate-CA.sh
The script produces six files: ca.crt, ca.key, ca.srl, myhost.crt, myhost.csr, and myhost.key. These are certificates (.crt), keys (.key), a signing request (.csr), and a serial number record file (.srl) used in the signing process. Note that the myhost files will have different names on your system (ubuntu in my case).
Three of them get copied to the /etc/mosquitto/ directories:
$ sudo cp ca.crt /etc/mosquitto/ca_certificates/
$ sudo cp myhost.crt myhost.key /etc/mosquitto/certs/
They are then referenced in the /etc/mosquitto/mosquitto.conf file (a sketch of this configuration is given at the end of this tip). After copying the files and modifying mosquitto.conf, restart the server:
$ sudo service mosquitto restart

3. Checkpoint
To validate the setup at this point, use the mosquitto_sub client (installed above with the mosquitto-clients package). Change folder to ca_certificates and run:
$ mosquitto_sub -t \$SYS/broker/bytes/\# -v --cafile ca.crt
The topics are updated every 10 seconds. If debugging is needed you can add the -d flag to mosquitto_sub and/or look at /var/log/mosquitto/mosquitto.log.

4. Generate client certificates
The following openssl commands create the client certificates:
$ openssl genrsa -out client.key 2048
$ openssl req -new -out client.csr -key client.key -subj "/CN=client/O=example.com"
$ openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAserial ./ca.srl -out client.crt -days 3650 -addtrust clientAuth
The argument -addtrust clientAuth makes the resulting signed certificate suitable for use with a client.

5. Reconfigure
Change the mosquitto configuration file to add the require_certificate line to the end of the /etc/mosquitto/mosquitto.conf file, then restart the server:
$ sudo service mosquitto restart

6. Test
The mosquitto_sub command we used above now fails; adding the --cert and --key arguments satisfies the server:
$ mosquitto_sub -t \$SYS/broker/bytes/\# -v --cafile ca.crt --cert client.crt --key client.key
To obtain the corresponding certificates and key for my server (named ubuntu), the same openssl syntax as in step 4 is used with the server's name, and the subscription command is run again.

Conclusion
This first part established a secure connection from a remote thing to the MQTT broker.
In the next part we will restrict this connection to TLS 1.2 clients only and allow the websocket connection.
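For reference, a minimal sketch of the TLS portion of the /etc/mosquitto/mosquitto.conf file described in steps 2 and 5 might look like the following (this assumes the directory layout used above and server certificate files named myhost; your filenames will differ):

# TLS listener (8883 is the conventional MQTT-over-TLS port)
listener 8883
cafile /etc/mosquitto/ca_certificates/ca.crt
certfile /etc/mosquitto/certs/myhost.crt
keyfile /etc/mosquitto/certs/myhost.key
# Added in step 5: clients must present a certificate signed by our CA
require_certificate true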
View full tip
Saw this great question in the Developers forum https://community.ptc.com/t5/ThingWorx-Developers/Thingworx-Permission-Hierarchy/m-p/556829#M29312. Answered it there, copying it to here:

Question
Hi, I have a few questions regarding the permissions model in ThingWorx. I can't find any documentation that explains it clearly. Hoping someone can help, or point me in the right direction for more in-depth documentation.

My understanding is that permissions can be set at a number of different levels:
Collection level
Template level
Instance level
Thing level
My questions are: how do these levels interact with one another? Do they all get 'AND'ed together, or do those at the lower levels supersede the ones set at higher levels? E.g. if I set some visibility at the Collection level, would this be overridden by setting a different visibility at, say, the Template instance level, or would both visibility permissions be valid? At each level there is the ability to override (e.g. for a particular property or service); how does that fit in the hierarchy? I have read that in ThingWorx 'deny' always supersedes an 'allow' permission. Is this still the case if I set deny at the Collection level and then at a lower level gave 'allow' permissions - would the deny take precedence? As far as I can tell, 'Create' permissions can only be set at the Collection level. Does this mean that I am unable to restrict one set of users to create Things of one template, and a different set of users to create another type of Thing? Thanks in advance for any replies.

Answer
Great question. Thing/Entity level permissions always take precedence. So if you set permissions on Collection, then on Template, then on Entity, it will first look at the Entity, then fill in with the Template, and finally the Collection. So if the Collection says you can't do Service Execute, the Template says you can execute Service 1 but not Service 2, and the Entity says you can execute Service 2 and leaves Service 1 as inherited, the end result is that the user can execute Services 1 and 2.

In Template and Entity you can find the Override ability, that is, to specifically allow or disallow the execution of a Service or the read/write of a Property.

What is a BEST PRACTICE?
1. Give the System user all service execute on the Collection level
2. Give User Groups 'blanket' permissions to Property Read/Write on the ThingTemplate level
3. Give User Groups only Override permission to execute Services on the ThingTemplate level
4. Override User Group permission to DENY property read, on the ThingTemplate level, for any properties they are not supposed to read

Generally most properties can be fully accessed by all users, and the blanket permission on a ThingTemplate is fine. It is very BAD to give user groups blanket permission to Service execute; that should always be done by Override.

The Entity hierarchy overrides the Allow/Deny hierarchy, but within a single level (Collection / Template / Entity) Deny wins over Allow.

Create is indeed only set on the Collection level. However, the way to secure this is to give the System user the Create ability and to create wrapper services that use the CreateThing service, which you can then secure for specific Groups. So you could create a CreateNewThingType1 and a CreateNewThingType2, for example, and give User Group 1 permission to Type 1 creation and User Group 2 permission to Type 2 creation (see the sketch below).

Hope that helps.
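As a minimal sketch of such a wrapper service (the template name ThingType1.TT and the generated Thing name are illustrative; the wrapper itself is the service you grant to User Group 1):

// Sketch of a wrapper around the generic CreateThing service.
// "ThingType1.TT" and the naming scheme are illustrative only.
var newName = "Type1_" + new Date().getTime();

Resources["EntityServices"].CreateThing({
    name: newName /* STRING */,
    description: "Created via wrapper service" /* STRING */,
    thingTemplateName: "ThingType1.TT" /* THINGTEMPLATENAME */
});

// New Things must be enabled (and restarted) before they can be used
Things[newName].EnableThing();
Things[newName].RestartThing();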
View full tip
Thing Subscription
This post is intended for novice ThingWorx users who want to understand what a Thing Subscription is and the overall purpose of using Thing Subscriptions.

What is a Thing Subscription?
A Thing Subscription is a script (JavaScript) that is called each time an event occurs. Events are property states which are of interest to end users (e.g. temperature) and therefore serve as triggers to kick off some functionality in a Thing Subscription when action is needed. Events can, for example, be triggered by an Alert that detects a change or an anomaly in property values. The Thing Subscription is explicitly linked to an event, and when the event is fired the data is passed to the subscriber.

Why Use a Thing Subscription?
Imagine your machine is running 24 hours, 7 days a week, with supervised human interaction. If a pump temperature exceeds the accepted value it needs to be regulated by the manufacturing department. But no one in the department knows when the temperature will exceed the accepted value or drop suddenly; therefore, the machine is always sporadically physically supervised by humans, which leads to heavy costs for the manufacturer. With a Thing Subscription, a notification email can be sent directly to the department manager, who acts based on the email notification.

A Thing Subscription must have a defined rule which gets executed when an event occurs. The definition of the rule may accommodate any appropriate business logic.

Thing Subscription example process
In this scenario the Thing Subscription uses a predictive analytics model to detect a data change or any anomalous values coming through a Thing property. Based on historical data, including failure information, a predictive analytics model analyzes run-time values from individual Things/properties on the analytics server. The model detects patterns matching past failures; when it predicts a failure/event based on the analyzed patterns, an action is fired via a Thing Subscription. That action could be for ThingWorx to create a service ticket or send a notification email to the service department.

Example of a simple Thing Subscription set-up, using a built-in ThingWorx alert instead of an analytics model to analyze data
The example Thing Subscription below sends a notification email when temperature exceeds the values defined in the ThingWorx alert configuration. Prerequisite: it is necessary to have a mail server extension imported into the ThingWorx Composer; this enables the service department to receive the email notification when an event has occurred. The extension can be downloaded from the Marketplace.

1. Create a Thing with the MailServer[i] as the Base Thing Template.
2. Create a new Thing and add properties, together with an alert that is triggered when the value exceeds a user-defined temperature.
3. Enable the Thing Subscription by selecting Subscriptions and clicking +Add:
- Make sure to mark the checkbox Enabled
- Select your Event name and your Property name
- In the right side of the screen, enter the script/function that will notify the ThingWorx email service to create the email notification (see the sketch at the end of this tip)
- Select Done and Save
4. Enable email notification by selecting Services:
- Provide a name
- Select Me/Entities
- Mark Other entity
- Find your Thing where the MailServer is the Thing Template
5. Then find the SendMessage snippet/script and fill out the snippet with your personal information.
[i] View this blog for more information on how to install the MailServer
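A minimal sketch of the subscription script from step 3 might look like the following (the Thing name MailServerThing, the addresses, and the exact SendMessage parameter names are assumptions; check the SendMessage service definition on your mail server Thing, as in step 5):

// Sketch: notify the service department when the temperature alert fires.
// "MailServerThing" is the Thing created in step 1 (the name is illustrative).
Things["MailServerThing"].SendMessage({
    from: "thingworx@example.com" /* STRING */,
    to: "service.department@example.com" /* STRING */,
    subject: "Pump temperature alert" /* STRING */,
    body: "Temperature on " + me.name + " exceeded the configured limit." /* STRING */
});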
View full tip
A common issue that is seen when trying to deploy, design or scale up a ThingWorx application is performance. Slow responses, delayed data and the application stopping have all been seen when a performance problem either slowly grows or suddenly pops up. There are some common themes, typically around application model or design. Here are a few of the common problems and some thoughts on what to do about them or how to avoid them.
Service Execution
This covers a wide range of possibilities and is most commonly seen when trying to scale an application. Data access within a loop is one particular thing to avoid. Accessing data from a Thing, another service or a query may be fast when only testing it over 100 loops, but when the application grows and you have 1000, suddenly it's slow. Access all data in one query and use that as an in-memory reference. Writing data to a data store (Stream, DataTable or ValueStream) and then querying that same data in the same service can cause problems as well; run the query first, then use all the data you have in the service variables.
To troubleshoot service executions there are a few methods that can be used. Some of these will not be practical for a production system, since it is not always advisable to change code without testing first.
- Use browser development tools to see the execution time of a service. This is especially helpful when a mashup is slow to load or respond, as it allows quickly identifying which of multiple services may be the issue.
- Add logging in a service. Once a service is identified, adding simple logging points in the service can narrow down which code in the service causes the slowdown (it may be another service call). These logging statements show up in the script logs with time stamps (you can also log the current time with the logging statements); see the sketch after this section.
- Use the test button in Composer. This is a simple one, but if the service does not have many parameters (or has defaults) it's a fast and easy way to see how long a service takes to return.
- When all else fails, you can get thread dumps from the JVM. ThingWorx Support created an extension that assists with this; you can find it on the Marketplace with instructions on how to use it. You can manually examine the output files or open a ticket with support to allow them to assist. Just be careful about doing memory dumps: they are much larger, harder to analyze, and take a lot of memory. https://marketplace.thingworx.com/tools/thingworx-support-tools
Queries
These of course are services too, but of a specific type. Accessing data in ThingWorx storage structures or from external sources seems fairly straightforward, but can be tricky when dealing with large data sets. When designing and dealing with internal platform storage, refer to this guide as a baseline to decide where to store data: Where Should I Store My Thingworx Data?
NEVER store historical data in infotable properties. These are held in memory (even if they are persistent), and as they grow so will the JVM memory use, until the application runs out of it. We all know what happens then. Finally, one other note that causes occasional confusion: the setting on a custom or standard ThingWorx query service that limits the number of records returned. This is how many records are returned from the service at the end of processing, not how many are processed or loaded in memory. That number may be much higher and could cause the same types of issues.
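Returning to the logging approach above, a minimal sketch of timing a suspect block inside a service might look like this (the logger object is available in every ThingWorx service script; the message text is illustrative):

// Record the start time before the suspect block
var start = new Date().getTime();

// ... the code being measured, e.g. a query or a loop ...

// Log the elapsed time; this appears in the Script log with a timestamp
var elapsed = new Date().getTime() - start;
logger.warn("Suspect block took " + elapsed + " ms");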
Subscriptions and Events
This is similar to services; however, there is an added element: frequency. Typical events are data changes and timers/schedulers. This again is often an issue only when scaling up the number of Things or the amount of data that needs to be referenced. A general reference on timers and schedulers can be found here, which also describes some of the event processing that takes place on the platform: Timers and Schedulers - Best Practice
For data change events, be very cautious about adding these to very rapidly changing property values. When a property is updating very quickly, for example two times each second, the subscription to that event must be able to complete in under 0.5 seconds to stay ahead of processing. Again, this may work for 5-10 Things with such properties but will not work with 500, due to resources, speed, and the need to briefly lock the value to get an accurate current read. In these cases any data processing should be done at the edge when possible (or in the originating system) and pushed to the platform in a separate property or service call. This allows for more parallel processing since it is de-centralized.
A good practice for allowing easier testing of this type of subscription code is to take all of the script/logic and move it to a service call, then pass any of the needed event data to parameters in the service. This allows for easier debugging since the event does not need to fire to make the logic execute; in fact, it can essentially run stand-alone via the test button in Composer (a sketch of this pattern appears at the end of this tip).
Mashup Performance
This one can be very tricky since additional browser elements and rendering can come into play. Sometimes service execution is the root of the issue, as reviewed above; other times it is UI elements and design that cause the slowdown. The Repeater widget is a common culprit. The biggest thing to note here is that each Repeater will need to render every element that is repeated, and all of the data and formatting for each of those widgets in the repeated mashup. So any complex mashup that is repeated many times may become slow to load. You can minimize this to a degree with the Load/Unload setting of the widget, based on when the slowness is more acceptable (when loading or when scrolling). When a mashup is launched from Composer it comes with some built-in debugging tools to see errors and execution; using these with the browser debug tools can be very helpful.
Scaling an Application
When initially modeling an application, scale must be considered from the start. It is a challenge (but not impossible) to modify an application after deployment or design to be very efficient. Many new developers on the ThingWorx platform fall into what I call the .Net trap. Back when .Net was released, one of the quotes I recall hearing about its inefficiencies was "memory is cheap": it was more cost efficient to purchase and install more memory than to take extra development time to optimize memory use. This was absolutely true for installed applications, where all of the code was compiled and stored on every system. Web-based applications are not quite as forgiving, since most processing and execution is done on the single central web server. Keep this in mind especially when creating Shapes, Templates and Subscriptions. While you may be writing one piece of code, when this code is repeated on 1,000 Things they will all be in memory and all be executing this code in parallel.
You can quickly see how competition for resources, locks on databases and clean access to in-memory structures can slow everything down (and just think when there are 10,000 pieces of that same code!!). Two specific things around this must be stated again (though they were covered in the above sections). First, data held in properties has fast access since it is in JVM memory. But this is held in memory for each individual Thing, so holding 5 MB of information in one Thing seems small, yet loading 10,000 Things means instant use of 50 GB of memory!! Second, service execution. When 10 Things are running, a service execution takes 2 seconds. Slow, but not too bad, and it may not be too noticeable in the UI. Now 10,000 Things compete for the same data structure and resources; I have seen execution time jump to 2 minutes or more. Aside from design, the best thing you can do is TEST on a scaled-up structure. If you will have 1,000 Things next year, test your application early at that level of deployment to help identify any potential bottlenecks early. Never assume more memory will alleviate the issue. Also, do NOT test scale on your development system. This introduces edits, changes and other variables which can affect actual real-world results. Have a QA system set up that mirrors a production environment and simulate data and execution load. Additional suggestions are welcome in the comments, and this will likely be updated as tools and the platform change.
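As a minimal sketch of the subscription-to-service pattern described in the Subscriptions and Events section above (the service name HandleTemperatureChange and its parameters are illustrative; the eventData fields shown are the ones typically available to a data change subscription):

// Subscription body: keep it thin and delegate to a testable service.
// For a DataChange event, eventData carries the new value as a VTQ.
me.HandleTemperatureChange({
    newValue: eventData.newValue.value /* NUMBER */,
    changeTime: eventData.newValue.time /* DATETIME */
});

// HandleTemperatureChange holds all of the real logic and can be run
// stand-alone from the test button in Composer, no event firing needed.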
View full tip
Preface
This guide applies to a clean installation of the CentOS 7 Minimal distribution. This is labeled as "Minimal ISO" on the CentOS.org website, and the filename of the iso image used to install the operating system will resemble "CentOS-7-x86_64-Minimal-1611.iso". The machine used in this guide was a virtual machine created using Oracle VirtualBox, but the same steps should apply to any machine with a clean CentOS 7 Minimal install. It is however possible that some installations may encounter slight variations due to hardware configurations.

Before starting
Unzip the downloaded "MED-..._ThingWorx-Analytics-Server-Linux-Standalone-8-0-0.zip". Inside the unzipped directory you will find a file called "ThingWorxAnalyticsServer-8.0.0-linux-x64-installer.run". Before running step number 10, upload that file to your CentOS machine using an SFTP/SCP tool of your choice.

Configuration and installation steps

Step 1: Install Docker with the following commands (these steps are presented at https://docs.docker.com/engine/installation/linux/docker-ce/centos/#install-using-the-repository):
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum makecache fast
yum -y install docker-ce

Step 2: Create a group called docker (if this command reports the group already exists, that is ok; you can move to the next step):
groupadd docker

Step 3: Add your non-root user to the docker group. In this example my non-root user is called "thingworx"; please replace with the correct username:
usermod -aG docker thingworx

Step 4: Start the Docker service and enable it to auto-start after reboot:
systemctl start docker
systemctl enable docker

Step 5: Verify that docker is working:
docker ps

Step 6: After running the above command you should see a single line of output that resembles the following:
"CONTAINER ID        IMAGE              COMMAND            CREATED            STATUS              PORTS              NAMES"

Step 7: Disable selinux with the two following commands. Note that by doing this, if this is a public-facing server, you will want to make sure you take appropriate security measures to lock down the system.
setenforce 0
sed -i -e 's/SELINUX=enforcing/SELINUX=disabled/' /etc/sysconfig/selinux

Step 8: Set the hostname of your machine to something other than the default, which is "localhost.localdomain". In this case I am using the name "centos"; this can be replaced with a name of your choosing:
hostname centos
echo "centos" > /etc/hostname

Step 9: Allow traffic through the default CentOS firewall. Note that in a production environment, the firewall should be configured more granularly to allow incoming traffic to only the required ports (5432, 2181 and 8080). Please refer to CentOS documentation and consult security best practices within your organization for more information. The following commands will completely disable the CentOS firewall:
systemctl disable firewalld
systemctl stop firewalld

Step 10: Ensure the ThingWorx Analytics Server installer is executable, then run the installer. You may have to change to the directory where the installer was uploaded to the machine; in this case I have it in the home directory of the user named thingworx. Please replace that path with the correct path for your machine. Note that below are 3 separate commands.
cd /home/thingworx
chmod +x ThingWorxAnalyticsServer-8.0.0-linux-x64-installer.run
./ThingWorxAnalyticsServer-8.0.0-linux-x64-installer.run

Step 11: Verify that the ThingWorx Analytics Server installation is successful. Note that it may take a few minutes for the system to become available; retry the command after a few minutes if an error is initially encountered.
curl http://127.0.0.1:8080/analytics/1.0/about/versioninfo
NOTE: The response from the above command should resemble the following:
{"implementationVersion":"8.0.0"}
View full tip
Twilio extends the ThingWorx functionality to send SMS and voice messages in a variety of languages. Starting from scratch, I'll cover the steps required to set up the Twilio extension, along with the basic account registration (trial version) and setup at Twilio.com. Twilio allows registering with a trial account, which comes with $15 as an initial account balance, something which is quite useful for testing the initial setup and operation of the Twilio extension in conjunction with ThingWorx.

Prerequisites
1. Sign up, if not already done, with Twilio.com for a free trial account
2. Configure the account to set up a verified phone number to receive text and voice messages; the free trial account allows setting up 1 verified phone number that can receive text and voice messages
Note: See What's the difference between a verified phone number and a Twilio phone number?
3. Choose and configure a Twilio phone number capable of sending text and voice messages; the free trial account allows setting up 1 Twilio number, which will be used in the extension configuration in ThingWorx for sending the messages
4. Download the Twilio Extension from the ThingWorx Marketplace

Note that the Twilio phone number should be enabled with the send SMS or send voice message capability, depending on what your use case is. Failing to do so will lead to errors while testing the sendSMS or sendVOICE services (the two default services provided with the Twilio Extension when imported into ThingWorx).

Signing up and setting up a Twilio account
1. Sign up on Twilio.com with your details and email ID. When registering for the first time you'll be prompted for your personal phone number as part of the security requirement. With the free trial account this will also be the only phone number to which you'll be able to send SMS or voice messages via the Twilio Extension in ThingWorx
2. Next, set up the Twilio number with voice and text messaging; navigate to https://www.twilio.com/console/phone-numbers/getting-started
3. Do ensure that you set up the voice or text capabilities for the new Twilio number; failing to do so will lead to errors when sending messages via the Twilio Extension in ThingWorx
4. Different Twilio numbers have different capabilities; it's quite possible that you don't find a number with the required capabilities, i.e. Voice and SMS. In that scenario simply search for a different number, one that has the capabilities you are looking for
5. With the trial account only 1 Twilio number is available for setup
6. While setting up the Twilio number you will be required to provide a valid local address for your country

The cost of setting up the number, and of sending SMS messages later, will only be deducted from the $15 made available when you signed up for the first time. Navigate to your account details on Twilio.com to make note of the following information, which will be used when configuring the Twilio Extension in ThingWorx:
a. Account SID
b. Authentication ID
c. Your verified phone number (will be used for receiving the messages)
d. The Twilio phone number created with the required capabilities, e.g. a number capable of sending text messages

To check your Twilio phone number and the verified phone number once you are successfully registered, log on to twilio.com/console and navigate to twilio.com/phone-numbers. The number I bought from Twilio is capable of sending Voice and SMS. With this, all is set and ready to configure the Twilio Extension in ThingWorx, which is pretty straightforward.
Setup Twilio Extension in ThingWorx
1. Set up the extension by navigating to ThingWorx Composer > Import/Export > Extensions > Import, browsing to the location where you saved the Twilio Extension from the ThingWorx Marketplace, and clicking Import
2. Once imported successfully, refresh the Composer and navigate to Modeling > Things to create a new Thing implementing the Twilio Template (imported with the extension); in this example the Thing is named DemoTwilioThing
3. Navigate to the Configuration section of that Thing to provide the information we noted above from the Twilio account
Note: this demo Thing is set up to send the text SMS to my verified phone number only; therefore the CallerID in the configuration is the Twilio number I created after signing up on Twilio with text message capability.
4. Save the created Thing
5. Finally, we can test with one of the two default services provided with the Twilio Template, i.e. the SendSMSMessage service (see the sketch at the end of this tip)
6. While testing the SendSMSMessage service, use your verified phone number in To for receiving the message

Troubleshooting some setup related issues
I'll try to cover some of the generic errors that came up while doing this whole setup with the Twilio Extension.

Error making outgoing SMS: 404 / TwilioResponse 20404
Reason
If the Twilio number is not enabled with SMS sending capabilities you may run into such an error when testing the service. Here's the full error stack:
Unable to Invoke Service SendSMSMessage on DemoTwilioThing : Error making outgoing SMS: 404 <?xml version='1.0' encoding='UTF-8'?><TwilioResponse><RestException><Code>20404</Code><Message>The requested resource /2010-04-01/Accounts/<AccountSID>/SMS/Messages was not found</Message><MoreInfo>https://www.twilio.com/docs/errors/20404</MoreInfo><Status>404</Status></RestException></TwilioResponse>
Resolution
Navigate back to the Phone Number section on Twilio's website and check whether the Twilio number is enabled with SMS capabilities.

Error making outgoing SMS: 400 / TwilioResponse 21608
Reason
You may encounter the following error when attempting to send an SMS/voice message. Here's the full error detail:
Wrapped java.lang.Exception: Error making outgoing SMS: 400 <?xml version='1.0' encoding='UTF-8'?><TwilioResponse><RestException><Code>21608</Code><Message>The number <somePhoneNumber> is unverified. Trial accounts cannot send messages to unverified numbers; verify <somePhoneNumber> at twilio.com/user/account/phone-numbers/verified, or purchase a Twilio number to send messages to unverified numbers.</Message><MoreInfo>https://www.twilio.com/docs/errors/21608</MoreInfo><Status>400</Status></RestException></TwilioResponse>
Resolution
As the error points out clearly, the number used in the To section while testing the SendSMSMessage service was not a verified number. With a free trial account you can only use the registered verified phone number as the destination for SMS/voice messages. If you want to use a different number, an account upgrade is required.
Error making outgoing SMS: 400 / TwilioResponse 21606
Reason
The following error is thrown while testing the SendSMSMessage service with a different Twilio number, one which is either not the number you bought when setting up the trial account or does not have SMS sending capability. Here's the full error stack:
Wrapped java.lang.Exception: Error making outgoing SMS: 400 <?xml version='1.0' encoding='UTF-8'?><TwilioResponse><RestException><Code>21606</Code><Message>The From phone number <TwilioNumber> is not a valid, SMS-capable inbound phone number or short code for your account.</Message><MoreInfo>https://www.twilio.com/docs/errors/21606</MoreInfo><Status>400</Status></RestException></TwilioResponse>
Resolution
Check the configuration in the ThingWorx Composer for the Thing created from the Twilio Template, and confirm whether the CallerID is configured with the correct Twilio account number.
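As a minimal sketch of invoking the service from step 5 in a ThingWorx script, rather than through the Composer test dialog (the parameter names to and message are assumptions; verify the actual inputs of SendSMSMessage in Composer before using):

// Sketch: send a test SMS through the Twilio Thing created above.
// Parameter names are illustrative; verify them in Composer.
Things["DemoTwilioThing"].SendSMSMessage({
    to: "+15551234567" /* STRING: your verified phone number */,
    message: "Test message from ThingWorx" /* STRING */
});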
View full tip
It often happens that we need to copy a large file to the ThingWorx server periodically, and what's worse, the big file keeps changing (like a log file). This sample gives a simpler way to implement that. The main ideas in the sample are:
1. Lower the management burden on the ThingWorx server by instead putting all the work on the edge SDK side
2. Save network bandwidth by uploading only the incremental part of the file and appending it to the older file on the ThingWorx server (see the sketch below for a server-side view of this pattern)

Java SDK version in this sample: 6.0.1-255
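The sample itself is built on the Java Edge SDK, but as a minimal ThingWorx-side sketch of the idea (the repository name LogRepository, the path, and the use of GetFileInfo are assumptions; your repository and available services may differ), a service could report the current size of the server-side copy so the edge knows from which byte offset to resume the upload:

// Sketch: return the current size of the server-side file so the edge
// can upload only the bytes past this offset and append them.
// "LogRepository" and the path are illustrative.
var info = Things["LogRepository"].GetFileInfo({
    path: "/uploads/machine.log" /* STRING */
});

// result: NUMBER - the byte offset the edge should resume from
var result = info.rows[0].size;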
View full tip