
IoT & Connectivity Tips

In our interactions with PTC customers we often learn they have previously performed analytics modeling in Python, Matlab, or R, or have even built home-grown analyses in languages such as Java or C++. As expected, when adopting an industrial innovation platform such as ThingWorx, which also has its own ThingWorx Analytics module, customers do not want to reimplement everything from scratch. They would rather integrate their previous work into the smart applications built in ThingWorx, leveraging their existing toolset together with ThingWorx Analytics modeling. That is certainly possible, and there are multiple ways to do it. This article focuses on several general approaches; keep in mind that language-specific approaches are also possible, and we are happy to discuss those in the specific context of the customer.

Here are five different ways to bring existing analytics into ThingWorx:

1. If the task is to reuse an existing predictive model developed in a language such as Python, R, or Matlab, you can typically export that model as PMML (Predictive Model Markup Language), an XML format, and import it into ThingWorx Analytics using the AnalyticsServer_ResultsThing -> UploadModel service. Libraries such as sklearn2pmml and r2pmml can be used for this. The imported model can then be used in the same fashion as a model developed in ThingWorx Analytics to power smart applications built in ThingWorx. If the analysis involves more complex tasks than predictive modeling, such as custom data normalizations, non-standard machine learning models, or home-grown algorithms, use one of the options below.

2. Call the REST Web API exposed by ThingWorx from Python/Matlab/R/Java/JavaScript. Every ThingWorx service can be called that way, and the API can also be used to push analysis results into ThingWorx for further consumption in the smart applications built there, perhaps together with other sources of data such as sensor readings. The documentation for the ThingWorx REST API can be found here.

3. Expose the existing analytics via a thin layer of REST Web Services. In Python, for example, this can be done using Flask with a few lines of code. The orchestration can then happen from ThingWorx by calling the exposed web service and weaving the results back into smart applications.

4. Often our customers' current architecture involves a relational database (e.g. SQL Server, Oracle) that powers the existing analytics and stores the end results (predictions, correlations, etc.). In this scenario, ThingWorx can connect directly to that database to read these results.

5. Finally, for complex analytics where a tighter integration with ThingWorx is desired, existing analytics/algorithms can be wrapped into a ThingWorx Extension or an Analytics Provider using the corresponding PTC SDKs.

When choosing an integration option, customers need to carefully balance the complexity of the integration, the constraints of their architecture, the complexity of the analytics modeling, and the end-user consumption requirements.
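To make option 3 above a little more concrete, here is a minimal sketch of a ThingWorx JavaScript service that calls a Python model exposed behind a thin Flask-style REST layer and hands the response back for use in a smart application. The endpoint URL, the /predict route, and the input field names are purely hypothetical placeholders, and the exact shape of the response depends entirely on how the wrapper service is written:

// Hypothetical endpoint exposed by a thin Flask (or similar) wrapper around the existing model
var endpoint = "http://analytics-host:5000/predict";

// Example inputs; in a real service these would come from Thing properties or service parameters
var payload = {
    sensor1: 0.42,
    sensor2: 0.87
};

// PostJSON is part of the built-in ContentLoaderFunctions resource and performs the HTTP POST
var response = Resources["ContentLoaderFunctions"].PostJSON({
    url: endpoint,
    content: payload
});

// The response mirrors the JSON returned by the wrapper, so reading the prediction
// out of it depends on your own API design
logger.debug("External model response: " + JSON.stringify(response));
result = response;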
One of the interesting features of ThingWorx Analytics Manager is its ability to run distributed models created in Excel (and more, of course). Most people who have been tasked with understanding data have built models in Excel, and have sometimes built quite complex models (or even applications) with it. The ability to tie these models to real data coming from various systems connected through ThingWorx, and to operationalise their execution, is a really simple way for people to leverage their existing work and I.P. on a connected analytics journey.

To demonstrate this power and ease of implementation, I created a sample data set with historical data, a traffic profile, and a simple anomaly detection model to execute with Analytics Manager (files are attached).

The online help center was quite helpful in explaining the process of Creating the Excel Workbook; however, I got stuck at the XML mapping stage. The Analytics and Excel documentation both neglect to mention one important detail: you must be using the Windows version of Excel in order to get the XML Source functionality (and I use Mac). Once using Windows, it was easy to do - here is a video of the XML mapping part of the process (for the inputs and results).
This is part of the continuing series of blog posts regarding Troubleshooting the Application. This article discusses more advanced issues that some clients and customers have encountered while building or using ThingWorx Analytics.

Packer Script Error – Unable to Download CentOS Image

As the application is developed and built inside a CentOS image, the ThingWorx Analytics Packer Script tool for Virtual Machine Appliance creation utilizes the CentOS mirror repository in the creation process. When the end user is attempting to build the Virtual Machine Appliance with the Packer Script media creation tool, part of the process is to download the CentOS 7 ISO image file as the basis for the operating system that the ThingWorx Analytics Server software will be installed to. If CentOS updates or changes their mirror links for the source ISO file, you may encounter the following error:

==> virtualbox-iso: Downloading or copying Guest additions
virtualbox-iso: Downloading or copying: file:///C:/Program%20Files/Oracle/VirtualBox/VBoxGuestAdditions.iso
==> virtualbox-iso: Downloading or copying ISO
virtualbox-iso: Downloading or copying: file:///local-file-repo/CentOS-7-x86_64-Minimal-1511.iso
virtualbox-iso: Error downloading: open local-file-repo/CentOS-7-x86_64-Minimal-1511.iso: The system cannot find the path specified.
virtualbox-iso: Downloading or copying: http://mirror.spro.net/centos/7/isos/x86_64/CentOS-7-x86_64-Minimal-1511.iso
virtualbox-iso: Error downloading: checksums didn't match expected: 88c0437f0a14c6e2c94426df9d43cd67
==> virtualbox-iso: ISO download failed.
Build 'virtualbox-iso' errored: ISO download failed.
==> Some builds didn't complete successfully and had errors:
--> virtualbox-iso: ISO download failed.
==> Builds finished but no artifacts were created.

Solution

Method 1: Configuration File Replacement

We have created a custom JSON configuration file that resolves the mirror issue for CentOS 7 v1611. You can download the JSON file here; you may have to right-click and “save link as” a JSON extension file. Also note, you will have to save/rename this JSON file as neuron-solo-variables.json. Using this file, navigate to your Packer Script builder directory, usually found in the following path:

<PATH>\ThingWorx-Analytics-Server-Standalone\components\vm-builder\neuron-vm-builder

Copy the new JSON file into this directory, replacing the existing copy. You can now re-run the Packer Script for your desired Virtual Machine Appliance output.

Method 2: Manual Configuration File Adjustment

You will have to locate an active mirror for CentOS 7. A list of current active mirrors can be found here. When selecting a mirror, you will need to select the Minimal ISO install, as this is the base image that is used for the VM creation.

Next, open the current neuron-solo-variables.json configuration file located in the <PATH>\ThingWorx-Analytics-Server-Standalone\components\vm-builder\neuron-vm-builder directory. Replace the os_image_download_url value with an active mirror URL from the list above. Then, for the os_iso_md5_checksum variable, replace the entry with the new SHA256 checksum from CentOS, which can be located here. Save the changes and close the neuron-solo-variables.json configuration file.

CentOS has switched over from MD5 to SHA256 checksums. Even though the variable name has “MD5” in the string, we will be modifying a second JSON configuration file to address this. In the same directory that we are currently working in, open the neuron-solo.json configuration file and modify the iso_checksum_type attribute to sha256. Save the changes and close the neuron-solo.json configuration file.

You can now re-run the Packer Script for your desired Virtual Machine Appliance output.
In this video we cover a short introduction to ThingWorx Analytics Builder and the import of the ThingWorx Analytics Builder extension. This video applies to ThingWorx Analytics 52.1 through 8.1.

Updated link for access to this video: Installing ThingWorx Analytics Builder: Part 1 of 3
Video Author: Christophe Morfin
Original Post Date: June 9, 2017
Applicable Releases: ThingWorx Analytics 8.0

Description: In this video we go through the steps to install ThingWorx Analytics Server 8.0.
In this video we cover the process of installing ThingWorx Analytics Server 52.1. Make sure you have reviewed the Part 1 video about the prerequisites.

Updated link for access to this video: Installing ThingWorx Analytics Server: Part 2 of 2
Build a Predictive Analytics Model Guide Part 2

Step 5: Profiles

The Profiles section of ThingWorx Analytics looks for combinations of data which are highly correlated with your desired goal.

1. On the left, click ANALYTICS BUILDER > Profiles.
2. Click New.... The New Profile pop-up will open. NOTE: Notice the Text Data Only section, which is new in ThingWorx 9.3.
3. In the Profile Name field, enter vibration_profile.
4. In the Dataset field, select vibration_dataset.
5. Leave the Goal field set to the default of low_grease.
6. Leave the Filter field set to the default of all_data.
7. Leave the Excluded Fields from Profile field set to the default of empty.
8. Click Submit.
9. After ~30 seconds, the Signal State will change to COMPLETED. The results will be displayed at the bottom.

The results show several Profiles (combinations of data) that appear to be statistically significant. Only the first few Profiles, however, have a significant percentage of the total number of records; the later Profiles can largely be ignored. Of those first Profiles, Frequency Bands from both Sensor 1 and Sensor 2 appear. But in combination with the result from Signals (where Sensor 1 was always more important), this could possibly indicate that Sensor 1 is still the most important overall. In other words, since Sensor 1 is statistically significant both by itself and in combination (but Sensor 2 is only significant in combination with Sensor 1), Sensor 2 may not be necessary.

Step 6: Create Model

Models are primarily used by Analytics Manager (which is beyond the scope of this guide), but they can still be used to measure the accuracy of predictions. When Models are calculated, they inherently withhold a certain amount of data. The prediction model is then run against the withheld data. This provides a form of "accuracy measure", which we'll use to determine whether Sensor 2 is necessary to the detection of a low grease condition by creating two different Models. The first Model (which you will create below) will contain all the data, while the second Model (in the next step) will exclude Sensor 2.

1. On the left, click ANALYTICS BUILDER > Models.
2. Click New…. The New Predictive Model pop-up will open.
3. In the Model Name field, enter vibration_model.
4. In the Dataset field, select vibration_dataset.
5. Leave the Goal field set to the default of low_grease.
6. Leave the Filter field set to the default of all_data.
7. Leave the Excluded Fields from Model section at its default of empty.
8. Click Submit.
9. After ~60 seconds, the Model Status will change to COMPLETED.

View Model

Now that the prediction model is COMPLETED, you can view the results.

1. Select the model that was created in the previous step, i.e. vibration_model.
2. Click View… to open the Model Information page.
3. Review the visualization of the validation results. Note that your results may differ slightly from the picture, as the automatically-withheld "test" portion of the dataset is randomly chosen.
4. Click on the ? icon to the right of the chart for details on the information displayed.

The desired outcome is for the model to have a relatively high level of accuracy. The True Positive Rate shown on the Receiver Operating Characteristic (ROC) chart is much higher than the False Positive Rate, and the curve is relatively high and to the left, which indicates a high accuracy level.
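For reference, all of these accuracy measures derive from four validation counts: true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN). The True Positive Rate is TP / (TP + FN), the False Positive Rate is FP / (FP + TN), and overall accuracy is (TP + TN) / (TP + TN + FP + FN). As a purely hypothetical example, a validation run with TP = 45, TN = 40, FP = 5, and FN = 10 would give a True Positive Rate of about 0.82, a False Positive Rate of about 0.11, and an accuracy of 0.85.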
You may also click on the Confusion Matrix tab in the top-left, which will show you the number of True Positives and True Negatives in comparison to False Positives and False Negatives. Note that the number of correct predictions is much higher than the number of incorrect predictions. As such, we now know that our Sensors have a relatively good chance of predicting an impending failure by detecting low grease conditions before they cause catastrophic engine failure.

Step 7: Refine Model

We will now try comparing this first Model that includes both Sensors to a simpler Model using only Sensor 1. We do this because we suspect that Sensor 2 may not be necessary to achieve our goal.

1. On the left, click ANALYTICS BUILDER > Models.
2. Click New….
3. In the Model Name field, enter vibration_model_s1_only.
4. In the Dataset field, select vibration_dataset.
5. Leave the Goal field set to the default of low_grease.
6. Leave the Filter field set to the default of all_data.
7. On the right beside Excluded Fields from Model, click the Excluded Fields button. The Fields To Be Excluded From Job pop-up will open.
8. Click s2_fb1 to select the first Sensor 2 Frequency Band.
9. Select the rest of the Frequency Bands through s2_fb5 to choose all of the Sensor 2 frequencies.
10. While all the s2 values are selected, click the green "right arrow", i.e. the > button in the middle.
11. At the bottom-left, click Save. The Fields To Be Excluded From Job pop-up will close.
12. Click Submit.
13. After ~60 seconds, the Model State will change to COMPLETED.
14. With vibration_model_s1_only selected, click View....

The ROC chart is comparable to the original model (including Sensor 2). Likewise, the Confusion Matrix (on the other tab) indicates a good ratio of correct predictions versus incorrect predictions.

NOTE: These Models may vary slightly from your own final scores, as what data is used for the prediction versus for evaluation is random.

ThingWorx Analytics' Models have indicated that you are likely to receive roughly the same accuracy in predicting a low-grease condition whether you use one sensor or two! If we can get an accurate early warning of the low grease condition with just one sensor, it then becomes a business decision as to whether the extra cost of Sensor 2 is necessary.

Step 8: Next Steps

Congratulations! You've successfully completed the Build a Predictive Analytics Model guide, and learned how to:

Load an IoT dataset
Generate machine learning predictions
Evaluate the analytics output to gain insight

This is the last guide in the Getting Started on the ThingWorx Platform learning path.
This is the last guide in the Monitor Factory Supplies and Consumables learning path.
The next guide in the Design and Implement Data Models to Enable Predictive Analytics learning path is Operationalize an Analytics Model.

Additional Resources

If you have questions, issues, or need additional information, refer to:

Resource | Link
Support | Analytics Builder Help Center
Hi, everyone!

In previous tech tips, I’ve introduced the ThingWorx 9.0 active-active clustering feature and provided architectural details and configurations. If you haven’t already, I recommend you check them out to learn more about how active-active clustering enables higher availability for ThingWorx:
9.0 Sneak Peek: Active-Active Clustering for ThingWorx
9.0 Sneak Peek: ThingWorx Architecture for Active-Active Clustering
9.0 Sneak Peek: Flexible Deployments of Active-Active Clustering for ThingWorx
“ThingWorx on Air” Ep. 08: FAQs: ThingWorx Active-Active Clustering for Higher Availability

Today, I’ll provide more details around the load balancer in the active-active clustering architecture, some of its requirements, and a few configuration examples. Ready? Here we go! Here are the top four FAQs around the load balancer that will help you maximize your use of active-active clustering.

What do you mean by load balancing?

Load balancing is the process of distributing network traffic across multiple servers. An algorithm employed by the load balancer or a proxy determines how the traffic is distributed. Round robin, fastest response, and least established connections are some of the most common load balancing methods; they provide different benefits, but all fundamentally ensure that no single server bears too much demand. By spreading the traffic, load balancing improves application responsiveness. It also increases the availability of applications and websites for users. Modern applications cannot run without load balancers.

In general, load balancers run either as hardware appliances or as software-defined load balancers. Hardware appliances often run proprietary software optimized to run on custom processors; as traffic increases, the vendor simply adds more load balancing appliances to handle the volume. Software-defined load balancers usually run on less-expensive, standard Intel x86 hardware. Installing the software in cloud environments like Azure VMs or AWS EC2 eliminates the need for a physical appliance.

Following the seven-layer Open System Interconnection (OSI) model, load balancing occurs between layers four and seven (L4-Transport, L5-Session, L6-Presentation and L7-Application), whereas network firewalls operate at levels one to three (L1-Physical Wiring, L2-Data Link and L3-Network). Load balancers have various capabilities, which include:
L4: directs traffic based on data from network and transport layer protocols, such as IP address and TCP port.
L7: adds content switching to load balancing. This allows routing decisions based on attributes like HTTP header, uniform resource identifier, SSL session ID and HTML form data.
GSLB: Global Server Load Balancing extends L4 and L7 capabilities to servers in different geographic locations.

More enterprises are seeking to deploy cloud-native applications in data centers and public clouds. This is leading to significant changes in the capability of load balancers.

What is a load balancer’s role in the ThingWorx Active-Active Clustering setup?

As is true of any load balancer, the load balancer required in the ThingWorx Foundation active-active clustering architecture is responsible for distributing incoming traffic across the nodes within the cluster. In the Active-Active Clustering architecture for ThingWorx, the load balancer distributes the traffic using a round-robin method. Please note that there are several algorithms that provide load balancing techniques, and this article is a good read for further understanding of them.
A round-robin method rotates servers by directing traffic to the first available server and then moving that server to the bottom of the queue. In the ThingWorx clustering setup, while both WebSocket and HTTP incoming traffic are handled in a round-robin manner, they are routed differently by the load balancer.

HTTP traffic is distributed directly amongst the ThingWorx Foundation Servers within the cluster. Sticky sessions (sticky via cookie) are used for the HTTP sessions, so individual users are tied directly to a single server node and see all of their changes instantaneously.

WebSocket traffic is distributed across the ThingWorx Connection Servers and is balanced via source IP to ensure each request from a device goes through the same Connection Server. From the ThingWorx Connection Server, the device traffic is distributed amongst the underlying ThingWorx Foundation Servers, so no additional load balancer is required between the ThingWorx Connection Servers and ThingWorx Foundation Servers.

Please note that the WebSocket traffic load does not necessarily get distributed evenly, nor do the incoming requests, due to stickiness. For example:
2 users connect over HTTP; one sends 100 requests and the other sends 2. Since the sessions are sticky, the load is not distributed evenly.
2 devices connect to a ThingWorx Connection Server; 1 is a gateway for 100 other devices, so all requests from the gateway go to the same Connection Server. The Connection Server does a round-robin to the underlying Foundation Servers so that the load is better distributed across them, but the load balancer is sticky to a ThingWorx Connection Server.

Which load balancer can I choose for setting up ThingWorx in an Active-Active Cluster mode?

ThingWorx active-active clustering is pretty much load balancer agnostic, meaning that if the load balancer of your choosing, which you might already be using in your IT center, meets the requirements, it can be utilized within the active-active clustering architecture. The load balancer is required to support the following features:
Based on Layer-7 architecture.
Supports HTTP and WebSocket traffic.
Ability to support sticky sessions for HTTP traffic and/or IP-based stickiness. IP-based means all traffic from a specific IP will be routed to the same server (this can be a problem with gateway-type scenarios). Sticky sessions are based on a cookie; sessions are routed to the same server based on the cookie, so different users from the same IP could route to different machines.
Health checking on server endpoints.
(Optional) It can manage SSL termination and SSL internal endpoints.
Supports path-based routing, i.e. the ability to route to specific backends based on the URL or part of the URL. By default, all routes should go to the platform servers, but the following routes should go to the Connection Server:
/Thingworx/WS
/Thingworx/WSTunnelServer
/Thingworx/WSTunnelClient
/Thingworx/VWS

All servers should be set up to only be part of load balancing based on their health configuration. When configuring health check frequency, checks should run at a rate based on the tolerance for bad requests to be processed. ThingWorx Foundation has a /health and a /ready endpoint; the /Thingworx/ready endpoint should be used for the load balancer. It will return a 200 when the server is ready to receive traffic. The Connection Server checks health requests on a specific port and will return 200 when healthy.

What are some of the compatible load balancers that I can use?
While you can use any load balancer that satisfies the above requirements and meets your IT standards, below are some of the third-party load balancers that provide the features required by the active-active clustering architecture:

HAProxy: HAProxy is free, open-source software that provides a high-availability load balancer and proxy server for TCP and HTTP-based applications, spreading requests across multiple servers. It is very powerful and supports monitoring capabilities out of the box. PTC tests the clustering architecture using HAProxy and provides a reference document for it in the ThingWorx Foundation Help Center docs. Please note that it runs only on Linux environments. For a quick reference example of how to set up an HAProxy load balancer, see our Help Center here.

NGINX: NGINX is an HTTP and reverse proxy server, a mail proxy server, and a generic TCP/UDP proxy server. NGINX provides proxy capabilities as well as web server options. Some features, like sticky sessions and advanced monitoring, are not available in the open-source version and require an upgrade to NGINX Plus. If you’re a Windows shop or already use NGINX Plus in your IT environment, then you may choose this load balancer offering. However, please note that PTC doesn’t provide official configuration steps for setting it up in our Help Center documentation. For a quick reference example of how to set up an NGINX load balancer, see our Help Center here.

AWS Application Load Balancer: Application Load Balancer (ALB) is best suited for load balancing of HTTP and HTTPS traffic and provides advanced request routing targeted at the underlying ThingWorx applications. Operating at the individual request level (Layer 7), Application Load Balancer routes traffic to targets within Amazon Virtual Private Cloud (Amazon VPC) based on the content of the request. If you’re running ThingWorx deployments on AWS, then you may choose to use AWS-offered managed load balancing services.

F5: F5 Networks, through its BIG-IP Local Traffic Manager solution, provides advanced load balancing techniques such as a full proxy, where you can inspect, manage, and report on application traffic entering and exiting your network, with additional features around SSL and performance optimization.

Load balancers are another area where ThingWorx allows for flexibility and extensibility by enabling you to use the load balancer of your choosing, whichever you’re most comfortable with or best suits your needs (provided it meets the criteria above). You can also configure SSL or TLS for HAProxy when using ThingWorx HA clustering for end-to-end security.

I hope this tech tip helped you develop a deeper understanding of how active-active clustering leverages load balancers to further increase your performance and thus availability and machine uptime, among many other benefits.

If you’re not already on 9.0 and using active-active clustering, be sure to upgrade!

Stay connected,
Kaya
In case it's useful for anyone, I successfully used the Thingworx Importer to import an Entity using version 8.4.1. The import command was in a CURL script (can also be run using e.g. Cygwin on Windows) and the entity data was contained in an XML file. The XML file is attached to this post, and the CURL script is copied into the bottom of the post body.

Notes on using the script:
Copy the CURL code into import.sh
You might need to change the line endings to UNIX, e.g. in Notepad++ menu option Edit > EOL Conversion > Unix
Give permissions to run import.sh (chmod +x import.sh)
./import.sh

>>>>>>>>>>>>
#!/bin/bash
APPKEY="2e6704c0-XXXX-XXXX-XXXX-a1589e387d1a"
TWX_HTTP_PORT="8018"
FILE_PATH_TWX="Things_testImport.xml"
PROTOCOL="http://"
IP="localhost"
URL="$PROTOCOL""$IP"":""$TWX_HTTP_PORT""/Thingworx/Importer?purpose=import"

curl -X POST -H 'appKey: '$APPKEY \
     -H 'Content-Type: multipart/form-data' \
     -H 'Accept: application/json' \
     -H 'x-thingworx-session: true' \
     -H 'X-XSRF-TOKEN: TWX-XSRF-TOKEN-VALUE' \
     -F 'upload=@'$FILE_PATH_TWX \
     $URL
<<<<<<<<<<<<

Also, the TWX Importer is explained in this support article.
Background: In the event that a Gateway/Connector Agent is offline or unable to connect to the Axeda Cloud Server, it uses an internal message queue to store information until the connection is restored. The message queue size is configured in the Axeda Builder project. By default, the queue is 200KB in size. Depending on how frequently your Agent sends data, or how much data your Agent is collecting and trying to send, 200KB may be too small. If the queue is too small, the data will “overflow” the queue. The queue is kept in memory only; data is not stored to disk and will be removed in a First-In-First-Out (FIFO) manner when the queue overflows. If you see queue overflow error messages in the Agent log (either EKernel.log or xGate.log), it may be time to change the size of the outbound message queue.

The correct size setting for the Agent outbound message queue takes three variables into consideration:
How much information are you sending?
What is the maximum expected duration for loss of connection to the Internet (Cloud Server)?
How much memory is available for your process?

The more information the Agent is trying to send, the larger the queue size setting should be. Consider also that if your Agents are offline (disconnected) for a long period of time, they will likely accumulate lots of data, which may overflow the outbound message queue. If this is the case, you’ll need to increase the queue or risk losing data.

Recommendation: Consider how the Agent operates (offline/online data collection) and how much data may be queued. When selecting the size of the queue, it’s important to maintain a balance between protecting against data loss and not occupying too much memory. If you do determine that you need to increase the outbound message queue size, note that Axeda recommends a maximum outbound message queue size of about 2MB.

Need more information? For information about specifying Agent outbound message queue size, see the online help in Axeda® Builder (Enterprise Server Settings). For information about how the Agent delivers data to the Platform (via EEnterpriseProxy/xgEnterpriseProxy), see the Agent user’s guide for your Agent: either Axeda® Platform Axeda® Gateway User’s Guide (PDF) or Axeda® Platform Axeda® Connector User’s Guide (PDF). Axeda Support Site links: Axeda® Gateway User’s Guide, Axeda® Connector User’s Guide.
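To put the sizing guidance above into rough numbers (the rates here are assumptions for illustration only): if an Agent queues about 1KB of data per minute and must ride out an 8-hour loss of connectivity, it would accumulate roughly 1KB x 60 x 8 = 480KB, which is well above the 200KB default but still below the recommended 2MB ceiling, so a queue size of around 512KB would be a reasonable setting in that scenario.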
Introduction to the ThingWorx Composer and a demonstration of how you go about building out the design plan.   For full-sized viewing, click on the YouTube link in the player controls.   Visit the Online Success Guide to access our Expert Session videos at any time as well as additional information about ThingWorx training and services.
Distributed Testing with JMeter

Overview

Running JMeter at the scale required by most customers demands additional considerations beyond those discussed in the previous two articles. At scale, a test may need to simulate thousands of users, which will require more than just one JMeter client set up on one or many hosts, as shown here in the 3rd JMeter article, a tutorial on Distributed Testing.

Distributed Testing

(Image: Remote Testing configuration in which the main JMeter client is located at one IP address, controlling the rest as they step through their own copies of the JMeter tests, based on their own unique data files as necessary, to simulate a user load across a network, a series of regions, or simply across many machines if limited by the size of the physical hardware. The JMeter link for this image is in the text body below.)

One key aspect of a proper JMeter load test is distributed or remote testing, i.e. making use of more than one JMeter client at a time to simulate the user load on the Application server. There are many reasons to make use of a network of clients such as this, like mimicking cross-region user access to the Foundation server, simulating different levels of latency for different users, and increasing the overall number of users which can contribute to the load test, while minimizing the performance cost of hosting that many threads on any single server.

A single JMeter client has a practical limit of 150-250 threads across all groups and requires about 1 CPU and 8 GB of RAM. After this point, the amount of garbage collection and other processing each client has to do is substantial. As the client processes its own data and sends requests to the Application server at the same time, there are diminishing returns, and the responses begin to take longer (or errors start occurring) simply because of resource starvation within the client process rather than on the Application server. Therefore, distributed testing is required for most customers doing larger load tests using JMeter. Many applications will have more than a few hundred users and/or will have users accessing the system from a variety of regions and networks, each of which could have significantly different network latency. So, in order to work within the limitations of the JMeter executable and address regional concerns, distributed or remote testing is typically required for almost all of PTC’s customers who scale test with JMeter.

With a simple (monolithic) distributed test, all of the JMeter clients are located on the same host and share an IP address, but each must be configured with a unique RMI port to connect to the controlling process. If these are located on a VM, then the resource specifications can simply be increased and the VM sized larger as necessary to ensure the network of JMeter clients runs as expected. Each JMeter client requires around 8 GB for its heap size and 1 CPU (with some additional resources for the host operating system). Multi-hosted testing becomes the required option when limited by physical hardware (or a relatively small VM hardware host). If there are only 4-core, 32-GB machines, then plan for a machine per every 3 JMeter clients. If simulating thousands of users, this could mean half a dozen machines or more are required, which can still sometimes work out to be more cost efficient than one large, 256 GB VM hosted in the cloud. Using many hosts in physical locations can also simulate regions with different network characteristics.
A tutorial for distributed testing across one host is shown here. For more information, see the Apache web articles on each topic: Remote Testing and Distributed Testing Step by Step.

Tutorial: Set Up a Distributed Test on One Host

1. Copy the source directory for the whole JMeter project and rename it as many times as required. Here there are 22 JMeter clients side-by-side on a single, 256-GB VM (3000+ users).
2. Each directory (shown above) is identical, except that the "jmeter.properties" files (found in the bin directory of each project) have unique settings, namely the RMI port.
3. Each JMeter client must contain a copy of the same test scripts found on the main server.
4. In the "jmeter.properties" file for the main server, specify the IPs and ports for each remote/distributed client (under remote_hosts), as shown. In this image, the IPs are all the same, with just the port differing from client to client. Here only 4 clients are in use, with the rest commented out for future tests. This is how to scale up and test incrementally more users each time: just add another server to add another 150-250 users, until eventually the target number of users is reached, or the server is saturated. These IPs will differ if doing a true remote test, with each being the server location of the JMeter client within the same network. The combination of IP address and port must still be unique, and communication between the overall JMeter controller and the clients over the RMI ports needs to be allowed by the network/firewalls. Note that the number of users is set using the parameter under "Test Plan" which was set up last time. This value represents the number of users by specifying the number of threads per thread group, and it can remain the same for every client or vary accordingly, if for instance one region is smaller than another. The "Test Plan" parameters are shown here.
5. To optionally start all of the clients at once in preparation for test execution, create a basic batch or shell script which goes to the bin directory of each agent and calls the start command: "jmeter-server". In this image from a Windows JMeter host, only the first few agents are in use, but removing the "rem" to uncomment the other start command lines in this file would add more servers to be started. Note how the Java parameter for java.rmi.server.hostname must match the main JMeter client network configuration here for the clients to connect (see the Apache links above for more information). This will start each of them in its own CMD window, which, once closed, will terminate the JMeter client processes. Parameters like rampUp time within the main test script will scale with the number of client processes. For example, 100 users and 300 seconds rampUp with 4 clients results in 400 overall user threads that are all logged in after 300 seconds.
6. Once all clients are running, click Remote Start All to start the test across every server from the GUI (usually for debugging), or execute the test from the command line: jmeter -n -r -t <test.jmx> -l <results.jtl>
7. The main server sends the actions to the remote clients to run, so all the clients need is input parameters. For instance, a CSV file may exist in each directory which has different data from client to client, to create pseudo-random user loads and represent different kinds of user activity.
The file shown in this image is different, and unique, in each of the client directories.

Conclusion

Here, we learned how to horizontally scale the load test, setting up more JMeter clients to facilitate larger, more complete user loads. We also discussed the difference between distributed and remote testing, and how the former is easier to set up and use, especially on VMs, but the latter might be better for simulating region differences and the impact of network latency. The latter will likely also be required if there are hardware constraints to consider, since each JMeter client needs about 8 GB for its heap, and another 8 GB, or a core or two of similar size, is needed per every 3 JMeter clients for the communication and processing of data. Stay tuned for the next article on generating and reviewing the results of the load tests.
I'm getting up to speed on all the great new stuff in 8.5, and have found that since the JavaScript engine was upgraded to Rhino 1.7.11, there's some awesome new JavaScript ES6 functionality available. I have tested arrow functions, filter, map, and reduce. Compose does not look like it is supported.   If you're not familiar with this functionality, I highly recommend reading up on them. Filter, map, and reduce are incredibly useful for working with arrays. They can save you a lot of annoying logic.   Here's some resources that I've found helpful for learning: JavaScript Functional Programming - map, filter and reduce Arrow Functions: Fat and Concise Syntax in Javascript If you really want to dive into ES6, Wes Bos has incredible tutorial sessions that are worth every penny: Wes Bos: ES6 for Everyone!   Have you played around with ES6 functionality in ThingWorx 8.5 yet?
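If you want to try it quickly, here is a small, self-contained snippet that should run inside any ThingWorx 8.5 JavaScript service; the readings array is just made-up sample data:

// Sample data standing in for values you might pull from an InfoTable or an external source
var readings = [12.1, 48.7, 3.4, 77.2, 55.0, 19.8];

// ES6 arrow functions combined with filter, map and reduce
var highReadings = readings.filter(r => r > 20);        // keep only values above a threshold
var rounded = highReadings.map(r => Math.round(r));     // transform each remaining value
var total = rounded.reduce((sum, r) => sum + r, 0);     // aggregate down to a single number

logger.info("High readings: " + rounded.join(", ") + " | total: " + total);
result = total;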
ThingWorx offers Docker based installations utilizing existing PostgreSQL databases. In newer releases the ThingWorx Docker installers also offer using other databases.

Personally I'm using a certain method of deployment where I can just easily exchange some files, create new images and have a H2 based environment running for some quick tests.

As H2 is a built-in database, I will not dive into setting up the platform-settings.json for other connectivity. However other databases can be connected to by adjusting the platform-settings.json. This might also require an internal Docker Network structure which I will not elaborate on here.

Note: the following procedure is not fully supported as it's not using the deployment methods provided by the installers!

Create the Directory Structure

My directory structure looks like the following (expanded for the 8.2.x branch):

/home/ts/docker/
    twx.8.0.x.h2
    twx.8.1.x.h2
    twx.8.2.x.h2
        Dockerfile
        settings
            platform-settings.json
            <license_file>
        storage
        Thingworx.war
    twx.8.3.x.h2

I have a directory for every version I want to test with.

In each directory there's the Dockerfile - the recipe file I'm using. There's also the version specific Thingworx.war file as well as two directories: settings and storage, which I will map to the ThingWorx directories inside the image later.

The Recipe File

FROM tomcat:latest
MAINTAINER me@somewhere.com
LABEL version = "8.2.0"
LABEL database = "H2"
RUN mkdir -p /ThingworxPlatform
RUN mkdir -p /ThingworxStorage
RUN mkdir -p /ThingworxBackupStorage
ENV LANG=C.UTF-8
ENV JAVA_OPTS="-server -d64 -Djava.awt.headless=true -Djava.net.preferIPv4Stack=true -Dfile.encoding=UTF-8 -Duser.timezone=GMT -XX:+UseNUMA -XX:+UseG1GC -Djava.library.path=/usr/local/tomcat/webapps/Thingworx/WEB-INF/extensions"
COPY Thingworx.war /usr/local/tomcat/webapps
VOLUME ["/ThingworxPlatform", "/ThingworxStorage"]
EXPOSE 8080

I change the version label to keep track of the versions for each recipe.

Deploying

Build the Docker Image by navigating to the directory where the recipe file is located:

sudo docker build -t twx.8.2.x.h2 .

Create a Docker Container and start it:

sudo docker run -d --name=twx.8.2.x.h2 -p 82:8080 -v /home/ts/docker/twx.8.2.x.h2/storage:/ThingworxStorage -v /home/ts/docker/twx.8.2.x.h2/settings:/ThingworxPlatform twx.8.2.x.h2

I change the name of the Image and the Container as well as the external port to distinguish all the different versions. The -v option maps the paths in my Operating System to the paths in the Docker Container, so I can browse the ThingworxStorage and ThingworxPlatform folders without connecting inside the Container. That's quite handy to check the logs, or place the license file.

Starting and Stopping

I can fire up and shut down the Containers I need with the following commands:

sudo docker start twx.8.2.x.h2
sudo docker stop twx.8.2.x.h2

What next

That's just my basic setup. Usually I copy & paste a working directory for deploying another version and adjust what needs to be changed. You could use this as a basis for quick and easy deployment where even additional features could be added, i.e. HTTPS configuration or auto-deploying certain ThingWorx Extensions via a REST API call.

To ensure starting with a clean Image, for building new Images I delete the contents of the storage folder and only leave the platform-settings.json in the settings folder (I copy the license later after generating it with my new Device ID).
Original Post Date: June 6, 2016

Description: This tutorial video will walk you through the installation process for the PostgreSQL-based version of the ThingWorx Platform in a Windows environment. All required software components will be covered in this video.
Connect to an existing database and design a connected data model.

GUIDE CONCEPT

There are times when you already have your database designed and only need to integrate it with the ThingWorx environment. These concepts and steps will allow you to focus on development of your application while still utilizing the power of ThingWorx! We will teach you how to create a data model around your database design and connect to that database.

YOU'LL LEARN HOW TO

How to connect an external database and create services to be used with it
How to design and implement a new data model based on an external resource
Using data models with database services

Note: The estimated time to complete this guide is 30 minutes.

Step 1: Examples and Strategy

If you’d like to skip ahead, download and unzip the completed example of the Aerospace and Defense learning path attached to this guide: AerospaceEntitiesGuide1.zip.

By now, you likely know how to create a data model from scratch. You have likely already created services that work with Info Tables. What you might not have done yet is combine a new data model, handling data in services, and connecting it all to an external database.

Our organization, PTC Defense Department, has existed for years and has an existing database setup. Developers in our organization refuse to remodel the database, so we must model the ThingWorx data model to our database schema. With ThingWorx, this is not a difficult task, but there are numerous decisions and options that we will explore in this guide.

Step 2: Database Connections

ThingWorx is based on the Java programming language and can make connections to any database that supports a Java-based connection. Dropping the JAR file for the database JDBC driver into the lib folder of Tomcat is all that is needed to make the connection available to the ThingWorx Platform. Follow the steps below to get started creating the connection.

To establish the connection and begin working with an external database, you will need to create a Database Thing and configure the connection string and credentials. Let us start with our database connection. If you have not done so already, download the Aerospace and Defense database scripts: DatabaseApplication.zip. Use the README.txt file to create the database schema. It is based on Microsoft SQL Server, but you can configure the scripts to your database preferences.

NOTE: You will not need to connect to a database to utilize this guide as a learning utility. For your services to work, however, you will need to connect to a database.

1. In ThingWorx Composer, click + New at the top-left of the screen.
2. Select Thing in the dropdown.
3. Name the Thing `DatabaseController.Facilities` and select Database as the Base Thing Template.
4. Click Save and go to the Configurations tab.

In this tab, you will enter the class name of your driver, the connection string for that database connection, and the credentials to access the database. Keep in mind that the JDBC Driver Class Name, JDBC Connection String, and connection Validation String values are all database-type specific. For example, to connect to a SQL Server database, the below configuration can be used.

Title | Description | Example
JDBC Driver Class Name | The specific class name of the driver being used for the connection. | net.sourceforge.jtds.jdbc.Driver (SQL Server) or oracle.jdbc.driver.OracleDriver (Oracle)
JDBC Connection String | The connection string to locate the database by host/port/database name. | jdbc:jtds:sqlserver://server:port/databaseName (SQL Server) or jdbc:oracle:thin:@hostname:port:databaseName (Oracle)
connectionValidationString | A simple query that should always work in a database. | SELECT GetDate() (SQL Server) or SELECT SYSDATE FROM DUAL (Oracle)

5. After entering credentials, click Save.
6. Go to the Properties and Alerts tab.

The connected Property should be checked. This property informs us of the current connection to the database. The lastConnection Datetime Property should show a current timestamp. This property informs us of the last time there was a valid connection to the database. This is an easy way to confirm the connection to the database.

If you do not have a connection, work on your configurations in the Configurations tab and validate the credentials being used. If you are still having trouble, see the examples section below or use the next section for help trying a query against the database.

Help and Troubleshooting

For help finding the correct configuration for you, check out these JDBC Configuration Examples or try out this Connection String Reference for help with your connection string.

You have just established your first database connection! Now jump to the next section and let us begin to build a data model to match the database schema.

Step 3: Query Data from External Database

Now that you're connected to the database, you can begin querying it for information and the flow of data. The queries and data shown below are based on the table scripts provided in the download. Examples of how the ThingWorx entity should look can be seen in the SQLServerDatabaseController and OracleDatabaseController entities.

Running a Query

As you may have noticed while working in ThingWorx and developing applications, an InfoTable is often used to work with large sets of data. An InfoTable is also how data is returned from a database query. If you're expecting only one value in the result set, the data will be held in the first index of the InfoTable. If you're expecting rows of data coming back, expect there to be rows of information inside of the InfoTable.

Follow the steps below to set up a helper service to perform queries for the database. While other services might generate the query to be used, this helper service will be a shared execution service.

1. In the DatabaseController entity, go to the Services tab.
2. Create a new service of type SQL (Query) called RunDatabaseQuery.
3. Set the Output as InfoTable, but do not set the DataShape for the InfoTable.
4. Add the following parameter:

Name | Base Type | Required
query | String | True

5. Add the following code to the new service:

<<query>>

6. Click Save and Continue. Your service signature should look like the below example.

You now have a service that can run queries against the database. This is also a simple way to test/troubleshoot the database connection or run a quick query. Run your service with a simple query. You might notice that no matter what fields are in the result set, the InfoTable will match them based on field type and field name.

There are two ways you can go from here. You can either query the database using services that call this helper service, or you can create more SQL command services that query the database directly.
Let's go over each method next, starting with a service that calls our helper.

1. In the Services tab of the DatabaseController entity, create a new service of type JavaScript.
2. Name the service JavaScriptQuery_PersonsTable.
3. Set the Output as InfoTable, but do not set the DataShape for the InfoTable.
4. Add the following code to your new service:

try {
    var query = "SELECT * FROM Persons";
    logger.debug("DatabaseController.JavaScriptQuery_PersonsTable(): Query - " + query);
    var result = me.RunDatabaseQuery({query:query});
} catch(error) {
    logger.error("DatabaseController.JavaScriptQuery_PersonsTable(): Error - " + error.message);
}

5. Click Save and Continue.

Any parameter, especially one entered by users, that is passed into an SQL statement using the Database Connectors should be fully validated and sanitized before executing the statement! Failure to do so could result in the service becoming an SQL injection vector.

This is a simple way to query the database, since much of your development inside of ThingWorx is already based in JavaScript.

Now, let's utilize the second method and create a query directly against the database. You can use open and close brackets to create parameters for your query. You can also use << >> to mark a value that will need to be replaced. As you build your query, use [[Parameter Name]] for parameter/variable substitution and << >> for string substitution.

1. In the Services tab of the DatabaseController entity, create a new service of type SQL (Query).
2. Name the service SQLQuery_GetPersonByEmail.
3. Ensure the Output is InfoTable.
4. Add the following code to your new service:

SELECT * FROM Persons WHERE person_email = [[email]];

5. Add the following parameter:

Name | Base Type | Required
email | String | True

6. Click Save and Continue.

An example of using the string replacement is as follows:

DELETE FROM <<TableName>> WHERE (FieldName = '[[MatchName]]');
DELETE FROM <<TableName>> WHERE (FieldName = [[MatchNumber]]);

Click here to view Part 2 of this guide.
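As a quick illustration of how these services get consumed (this snippet is not part of the original guide, and the email value is just a placeholder), another JavaScript service on the same DatabaseController entity could call the parameterized query and work with the returned InfoTable like this:

// Call the SQL (Query) service defined above; parameters are passed by name
var person = me.SQLQuery_GetPersonByEmail({ email: "jane.doe@example.com" });

// The result is an InfoTable whose columns mirror the Persons table
if (person.rows.length > 0) {
    logger.debug("Found person with email: " + person.rows[0].person_email);
} else {
    logger.debug("No matching person found");
}

result = person;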
This video is the 3rd part of a series of 3 videos walking you through how to set up ThingWatcher for Anomaly Detection. In this video we will use the Anomaly Mashup to visualize data received from my remote device.

Updated Link for access to this video: Anomaly Detection 8.0: Viewing Data via Anomaly Mashup: Part 3 of 3
I have put together a small sample of how to get property values from a Windows Powershell command into Thingworx through an agent using the Java SDK. In order to use this you need to import entities from ExampleExport.xml and then run SteamSensorClient.java passing in the parameters shown in run-configuration.txt (URL, port and AppKey must be adapted for your system). ExampleExport.xml is a sample file distributed with the Java SDK which I’ve also included in the zipfile attached to this post. You need to go in Thingworx Composer to Import/Export … Import from File … Entities … Single File … Choose File … Import. Further instructions / details are given in this short video: Video Link : 2181
ThingWorx 8.2 System Requirements
ThingWorx 8.2 Helpcenter

The following feature enhancements and bug fixes exist in ThingWorx 8.2.0. Due to security updates, a minimum version of Apache Tomcat 8.0.47 or 8.5.23 should be used with ThingWorx.

Enhancements

Platform
• Included information about opting out of metrics reporting. For more information, see the ThingWorx Metrics Reporting Services Configuration section in the Platform Subsystem help topic.
• The Script Log Error has been added to improve error logging for scripts.
• Added support to allow mashups to be rendered using jQuery 3.x runtime.
• Query service optimization. This includes improved performance for the QueryPropertyHistory and QueryPropertyNamedHistory services. Previously, a database call was made for every logged property. With this improvement, one database call is made for all logged properties, resulting in the following improvements:
  ▪ A ~20% decrease in memory usage for the QueryPropertyHistory and QueryNamedPropertyHistory service queries if no filters are applied (PostgreSQL and MSSQL).
  ▪ Decreased time to execute query (~10%) for the QueryPropertyHistory and QueryNamedPropertyHistory services. Depends on latency to the database (PostgreSQL and MSSQL).
  ▪ Additional decrease in memory, based on the filter supplied during the query for the QueryPropertyHistory and QueryNamedPropertyHistory services (PostgreSQL and MSSQL). If a filter is applied that reduces the record count by 50%, then there is an additional 50% decrease in memory usage on top of the other 50% described in the first point. This optimization also results in an approximate 10% decrease in memory for single property queries.
• The Audit Subsystem has been added. It supports the following capabilities:
  ▪ Automatically add audit entries to online storage.
  ▪ Search for audit entries (use the QueryAuditHistory service) stored online.
  ▪ Archive online audit entries to offline storage (automatically performed daily by default).
  ▪ Export audit data, using the language selected for the export.
  ▪ Purge online audit data on the basis of a specified number of days for audit data to remain online and also on the specified number of rows to keep online.
  ▪ Clean up archived audit data automatically, based on a configured schedule.
• The security of the PASSWORD base type has been enhanced and is now encrypted. See Passwords for more information.
• Added the Collection Widget, which allows you to replicate/repeat mashups and content by using infotables to dynamically supply visual content and data. Refer to the KCS article for additional information here.
• Additional capability has been added to New Composer. For more information, refer to the ThingWorx Community blog.
• The licensing process has been improved. An activation ID is no longer required to obtain a license, and a new license file is not required for minor or major release upgrades.
  ▪ For connected scenarios, activation IDs are no longer required in the platform-settings.json file.
  ▪ For disconnected scenarios, go to the enhanced PTC Support site pages, select the product, enter a Device ID, and retrieve a license.
• You can enable the Application Key Authenticator when SSO is enabled by editing the sso-settings.json configuration. For more information, see Configure the sso-settings.json File.
• The CSS Editor was added to Mashup Builder, which allows developers to create modern experiences with responsiveness, animations, and advanced styling and behaviors. Refer to the KCS article for additional information here.
• Added support for "Store and Forward" functionality to the interface between KEPServerEX and the ThingWorx platform. KEPServerEX can be configured to store updated property data to disk when disconnected from the ThingWorx platform and will send that data gracefully when the connection is re-established.
• In mashups, row and column gadget sizes 1 to 8 are now available. (TW-25477)

Bug Fixes

Platform
• Fixed an issue with Thing Shapes when editing subscriptions twice before canceling or closing in which the second edit was not saved. (TW-28718)
• Fixed an issue that was causing SQL Server apparent deadlock exceptions. (TW-28208)
• Added useful log information for troubleshooting LDAP and Active Directory errors. (TW-23873)
• Fixed an issue with exception handling in DSLProcessor in which line numbers were not included in the log. (TW-18042, TW-17255)
• Fixed an issue in which opening/closing brackets are not highlighted if there were 100 or more lines of code in a JavaScript service. (TW-12740)

Mashup Builder
• Service error notification messages were fixed to display on multiple lines based on line breaks in the message. (TW-24738)
• Fixed an issue in which a master mashup header image was not fully displayed. (PSPT-3365)

Extensions
• The Google Maps JavaScript API was updated to prevent the use of the library without an API key. If you are using the Google Map extension in your application, verify that the extension's metadata.xml file is updated with the correct URL (https://maps.google.com/maps/api/js?sensor=false&key=YOUR_API_KEY). Re-zip the extension and reimport into ThingWorx after making this change.
With the new licensing introduction, it can be confusing at first to know how to obtain and apply licenses, especially with more than one app in place. This is an example of how to apply both the Foundation and Manufacturing App licenses when installing ThingWorx 8.

1) Install Manufacturing App 8.0 and needed components (ex: Kepware) per the guide, with the manufacturing app license - the manufacturing app widget can now be accessed.
2) Accessing /Thingworx reports a licensing issue.
3) Download the Thingworx license from the license portal.
4) Rename the manufacturing app license.bin to <name>.bin and put the Thingworx license.bin in the ThingworxPlatform folder.
5) Restart the Thingworx service.
6) Access /Thingworx and accept the license agreement.
7) Change license.bin back to the original manufacturing app license.bin (step 4).
8) Restart the Thingworx server.
9) Both manufacturing app and foundation functions are available.
DataShape

Simply put, a DataShape represents the data in your model, giving your application a built-in sense of how to represent the data in different scenarios. A DataShape is defined by a set of field definitions and related metadata, e.g. the DataType. DataShapes are a must-have (except for ValueStreams) when creating entities that deal with data storage, i.e. DataTables and Streams. For more detail on DataShapes and the DataTypes, see DataShapes in ThingWorx Help Center.

Note: See ThingShape : Nuances, Tips & Tricks for ThingShape vs DataShape.

Ways to create a DataShape

Via the ThingWorx Composer
Navigate to ThingWorx Composer and click on New > DataShape.
Provide a unique name to the DataShape entity. (DataShape creation in ThingWorx Composer)
Navigate to the Field Definition section and add the required Field Definitions. (Defining Fields for the DataShape)

Via a custom service in ThingWorx
Navigate to an entity under which the service is to be created, e.g. a Thing.
Switch to the Services section for the Thing and click Add to create a new service.
The OOTB service CreateDataShape can be used from Resources > EntityServices:

// snippet creating an infotable based on an existing DataShape
var params = {
    infoTableName : "InfoTable",
    dataShapeName : "DemoDataShape"
};
// CreateInfoTableFromDataShape(infoTableName:STRING("InfoTable"), dataShapeName:STRING):INFOTABLE(DemoDataShape)
var result = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);

// snippet creating the DataShape using the infotable queried above, which returns the fields and the metadata on those fields
// DSName used below holds the name of the DataShape to be created
Resources["EntityServices"].CreateDataShape({
    name: DSName /* STRING */,
    description: "Custom created DataShape" /* STRING */,
    fields: result /* INFOTABLE */,
    tags: undefined /* TAGS */
});

Here's how it'd appear in the Service editor. (DataShape creation with JavaScript service in ThingWorx)

Via the ThingWorx Extension SDK

The following example snippet shows the creation and usage of a DataShape while creating a custom extension with the Extension SDK:

@ThingworxConfigurationTableDefinitions(tables = {
    @ThingworxConfigurationTableDefinition(name = "ConfigTableExample1", description = "Example 1 config table", isMultiRow = false,
        dataShape = @ThingworxDataShapeDefinition(fields = {
            @ThingworxFieldDefinition(name = "field1", description = "", baseType = "STRING"),
            @ThingworxFieldDefinition(name = "field2", description = "", baseType = "NUMBER")
        }))
})

Note: Refer to the ThingShape : Nuances, Tips & Tricks for Tips & Tricks.

Other related reads
How are DataShapes used while storing the data in ThingWorx
How to pass a DataShape as parameter
Can two DataShapes have the same service name if used on the same thing in ThingWorx?
DataShape in ThingWorx Help Center
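Building on the snippets above, here is a small usage sketch showing what you can do with an InfoTable created from a DataShape; it assumes a DataShape named DemoDataShape exists with a STRING field field1 and a NUMBER field field2 (the row values are just placeholders):

// Create an empty InfoTable whose columns come from the DataShape
var table = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape({
    infoTableName: "InfoTable",
    dataShapeName: "DemoDataShape"
});

// AddRow takes a JSON object whose keys match the DataShape field names
table.AddRow({
    field1: "Pump-101",
    field2: 42
});

logger.debug("Rows in table: " + table.rows.length);
result = table;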