
IoT & Connectivity Tips

It's critical to configure all the correct parameters when running your application in a production environment, or even in development. While the GUI makes it very user-friendly and easy to set the right values in the right fields, it's useful to know how to do the same programmatically, without the "Configure Tomcat" utility.
If you're running Tomcat as a Windows service, one way is to adjust the JVM options from the bin directory:
tomcat8 //US//MYSERVICENAME ++JvmOptions=-Dexample.license.directory="C:\Program Files\example"
Stop the service before you do this and restart it when you finish.
cd $CATALINA_HOME
.\bin\service.bat install tomcat
.\bin\tomcat8.exe //US//tomcat8 --JvmMs=512 --JvmMx=1024 --JvmSs=1024
Setting the --Jvm* parameters may not be enough; you may also need to specify the JVM memory values explicitly. From the command line it may look like this:
bin\tomcat8w.exe //US//tomcat8 --JavaOptions=-Xmx=1024;-Xms=512;..
Be careful not to override the other JavaOptions.
The best and recommended way, however, is to use setenv.sh/setenv.bat (Linux/Windows respectively). The file isn't included in the as-downloaded Tomcat, but if you look in catalina.sh/catalina.bat there is a check for a file called setenv; if it's there, it's run. That's where you set JAVA_OPTS, CATALINA_OPTS, etc. We use it to set JAVA_HOME, JAVA_OPTS, CATALINA_OPTS and JPDA_ADDR. Putting all your environment variables into this file is ideal because you then don't have to change the stock startup scripts. When monitoring the log you can then see the parameters being picked up.
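For reference, here is a minimal sketch of what such a setenv.bat might contain; the JDK path, heap sizes, and license directory are placeholders and should be adjusted to your environment (a setenv.sh on Linux would export the same variables):

```bat
rem Place in the Tomcat bin directory (CATALINA_BASE or CATALINA_HOME); catalina.bat runs it automatically if present
set JAVA_HOME=C:\Program Files\Java\jdk1.8.0
set JAVA_OPTS=-Xms512m -Xmx1024m -Dexample.license.directory="C:\Program Files\example"
set CATALINA_OPTS=-server
```

Keep in mind that when Tomcat runs as a Windows service, the service wrapper does not read setenv.bat; in that case the tomcat8/tomcat8w commands shown above remain the way to adjust the service's JVM settings.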
View full tip
A common issue seen when trying to deploy, design or scale up a ThingWorx application is performance. Slow response, delayed data and the application stopping have all been seen when a performance problem either grows slowly or suddenly pops up. There are some common themes, typically around the application model or design. Here are a few of the common problems and some thoughts on what to do about them or how to avoid them.
Service Execution
This covers a wide range of possibilities and is most commonly seen when trying to scale an application. Data access within a loop is one particular thing to avoid. Accessing data from a Thing, another service or a query may be fast when only testing it over 100 loop iterations, but when the application grows to 1,000 it is suddenly slow. Access all the data in one query and use that as an in-memory reference (a short sketch follows below). Writing data to a data store (Stream, DataTable or ValueStream) and then querying that same data within one service can cause problems as well: run the query first, then use the data you already have in service variables.
To troubleshoot service executions there are a few methods that can be used. Some will not be practical for a production system, since it is not always advisable to change code without testing first.
- Use browser development tools to see the execution time of a service. This is especially helpful when a mashup is slow to load or respond, because it quickly identifies which of several services may be the issue.
- Add logging to a service. Once a service is identified, simple logging points can narrow down which code in the service causes the slowdown (it may be another service call). These logging statements show up in the Script log with timestamps (you can also log the current time within the statements).
- Use the Test button in Composer. This is a simple one, but if the service does not have many parameters (or has defaults) it is a fast and easy way to see how long a service takes to return.
- When all else fails you can take thread dumps from the JVM. ThingWorx Support created an extension that assists with this; you can find it on the Marketplace with instructions on how to use it: https://marketplace.thingworx.com/tools/thingworx-support-tools. You can manually examine the output files or open a ticket with Support to allow them to assist. Just be careful with memory dumps; they are much larger, harder to analyze and take a lot of memory.
Queries
These are services too, of course, but a specific type. Accessing data in ThingWorx storage structures or from external sources seems fairly straightforward, but can be tricky when dealing with large data sets. When designing and dealing with internal platform storage, refer to this guide as a baseline to decide where to store data: Where Should I Store My ThingWorx Data? NEVER store historical data in infotable properties. These are held in memory (even if they are persistent), and as they grow so does the JVM memory use, until the application runs out of it; we all know what happens then. Finally, one other note that causes occasional confusion: the setting on a query service (or a standard ThingWorx query service) that limits the number of records returned controls how many records are returned from the service at the end of processing, not how many are processed or loaded in memory. That number may be much higher and could cause the same types of issues.
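To make the "query once, then work in memory" advice concrete, here is a minimal sketch of the preferred pattern inside a ThingWorx service. The Stream, Thing and field names are placeholders, and `assets` is assumed to be an input InfoTable listing the Things to process:

```javascript
// Preferred pattern: one stream query up front, then filter the in-memory result per row,
// instead of calling QueryStreamEntriesWithData once per loop iteration.
var history = Things["MyValueStream"].QueryStreamEntriesWithData({ maxItems: 10000 });

var start = new Date();
for each (var row in assets.rows) {
    var params = {
        t: history /* INFOTABLE */,
        query: {
            "filters": {
                "type": "AND",
                "filters": [
                    { "fieldName": "AssetName", "type": "EQ", "value": row.AssetName }
                ]
            }
        } /* QUERY */
    };
    // result: INFOTABLE -- the subset of history rows for this asset, filtered in memory
    var subset = Resources["InfoTableFunctions"].Query(params);
    // ... work with subset here ...
}
// Simple timing point; this shows up in the Script log with a timestamp
logger.warn("Processed " + assets.rows.length + " assets in " + (new Date() - start) + " ms");
```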
Subscriptions and Events
This is similar to services, but with an added element: frequency. Typical events are data changes and timers/schedulers. Again, this is often an issue only when scaling up the number of Things or the amount of data that needs to be referenced. A general reference on timers and schedulers, which also describes some of the event processing that takes place on the platform, can be found here: Timers and Schedulers - Best Practice. For data change events, be very cautious about adding these to very rapidly changing property values. When a property updates very quickly, for example twice each second, the subscription to that event must be able to complete in under 0.5 seconds to keep up with processing. Again, this may work for 5-10 Things with such properties, but it will not work with 500, due to resources, speed and the need to briefly lock the value to get an accurate current read. In these cases any data processing should be done at the edge when possible (or in the originating system) and pushed to the platform in a separate property or service call. This allows for more parallel processing since it is decentralized. A good practice that allows easier testing of this type of subscription code is to take all of the script/logic and move it into a service, then pass any of the needed event data as parameters to that service (a short sketch appears at the end of this post). This makes debugging easier since the event does not need to fire to make the logic execute; in fact it can essentially be run standalone with the Test button in Composer.
Mashup Performance
This one can be very tricky, since additional browser elements and rendering come into play. Sometimes service execution is the root of the issue, as reviewed above; other times it is UI elements and design that cause the slowdown. The Repeater widget is a common culprit. The biggest thing to note here is that each Repeater must render every element that is repeated, along with all of the data and formatting for each widget in the repeated mashup, so any complex mashup that is repeated many times may become slow to load. You can minimize this to a degree with the widget's Load/Unload setting, based on when the slowness is more acceptable (when loading or when scrolling). When a mashup is launched from Composer it comes with some built-in debugging tools to see errors and execution; using these along with browser debug tools can be very helpful.
Scaling an Application
When initially modeling an application, scale must be considered from the start. It is a challenge (though not impossible) to modify an application after deployment or design to make it very efficient. Many new developers on the ThingWorx platform fall into what I call the .NET trap. Back when .NET was released, one of the quotes I recall hearing about its inefficiencies was "memory is cheap": it was more cost-effective to purchase and install more memory than to spend extra development time optimizing memory use. That was absolutely true for installed applications, where all of the code was compiled and stored on every system. Web-based applications are not quite as forgiving, since most processing and execution is done on a single central web server. Keep this in mind especially when creating Shapes, Templates and Subscriptions. While you may be writing one piece of code, when that code is repeated on 1,000 Things they will all be in memory and all be executing that code in parallel.
You can quickly see how competition for resources, locks on databases and clean access to in-memory structures can slow everything down (and just think what happens when there are 10,000 copies of that same code!). Two specific points around this must be stated again (though they were covered in the sections above). First, data held in properties has fast access since it is in JVM memory, but it is held in memory for each individual Thing: holding 5 MB of information in one Thing seems small, but loading 10,000 Things means an instant 50 GB of memory in use! Second, service execution: when 10 Things are running, a service execution may take 2 seconds. Slow, but not too bad, and it may not be very noticeable in the UI. With 10,000 Things competing for the same data structures and resources, I have seen execution time jump to 2 minutes or more. Aside from design, the best thing you can do is TEST on a scaled-up structure. If you will have 1,000 Things next year, test your application early at that level of deployment to help identify any potential bottlenecks; never assume more memory will alleviate the issue. Also, do NOT test scale on your development system. That introduces edits, changes and other variables which can affect real-world results. Have a QA system set up that mirrors a production environment, and simulate data and execution load. Additional suggestions are welcome in the comments, and this post will likely be updated as tools and the platform change.
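As an illustration of the subscription pattern mentioned above (moving the logic into a service so it can be tested without firing the event), the body of a data change subscription can be reduced to a single delegating call. The service name and parameters here are hypothetical:

```javascript
// Body of a DataChange subscription -- delegate immediately to a service that holds the real logic.
// The eventData field names follow the standard DataChange event data; adjust to your model as needed.
me.ProcessMyPropertyChange({
    newValue: eventData.newValue.value,   // the changed property value
    changeTime: eventData.newValue.time   // when the change occurred
});
```

ProcessMyPropertyChange can then be exercised standalone with the Test button in Composer, using any values you like for its parameters.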
View full tip
This video shows the commands to execute to deploy the training and result microservices as Docker containers. It is based on Docker Toolbox, to highlight the specific settings required on Toolbox.   Updated link for access to this video:  Deploying Training & Result Microservices via Docker Containers for Anomaly Detection
View full tip
This Expert Session will walk you through the components involved in the ThingWorx Studio augmented reality environment, a detailed architecture, supported devices, and the available resources. The session provides insight into how ThingWorx Studio works and the technicalities involved.   For full-sized viewing, click on the YouTube link in the player controls.   Visit the Online Success Guide to access our Expert Session videos at any time as well as additional information about ThingWorx training and services.
View full tip
This Expert Session is designed to help beginners get up and running with ThingWorx Analytics. It covers basic concepts such as what APIs are and how to configure the metadata file, and includes a live demo that shows how to interact with and use ThingWorx Analytics in real time. This Expert Session will also be useful for experienced users who need a refresher course.   For full-sized viewing, click on the YouTube link in the player controls.   Visit the Online Success Guide to access our Expert Session videos at any time as well as additional information about ThingWorx training and services.
View full tip
This Expert Session will walk you through the complete installation of ThingWorx Analytics from the Prerequisites to Confirming the Installation is successful and all steps in between. The first half of the video gives a breakdown of the components and the process of the installation with the second half being an actual Demo of the Installation.   For full-sized viewing, click on the YouTube link in the player controls.   Visit the Online Success Guide to access our Expert Session videos at any time as well as additional information about ThingWorx training and services.
View full tip
This Expert Session is about platform sizing, its dependency on the type of environment, and the correlated scalability options. It covers federation and high availability, and provides visual diagrams to help you understand the architecture of different ThingWorx solutions.   For full-sized viewing, click on the YouTube link in the player controls.   Visit the Online Success Guide to access our Expert Session videos at any time as well as additional information about ThingWorx training and services.
View full tip
There are four types of analytics; this post focuses on prescriptive analytics.
Prescriptive analytics: What should I do about it?
Prescriptive analytics is about using data and analytics to improve decisions and therefore the effectiveness of actions. It is related to both descriptive and predictive analytics: while descriptive analytics aims to provide insight into what has happened and predictive analytics helps model and forecast what might happen, prescriptive analytics seeks to determine the best solution or outcome among various choices, given the known parameters. It has been defined as "any combination of analytics, math, experiments, simulation, and/or artificial intelligence used to improve the effectiveness of decisions made by humans or by decision logic embedded in applications." These analytics go beyond descriptive and predictive analytics by recommending one or more possible courses of action. Essentially they predict multiple futures and allow companies to assess a number of possible outcomes based upon their actions. Prescriptive analytics uses a combination of techniques and tools such as business rules, algorithms, machine learning and computational modelling procedures. It can also suggest decision options for how to take advantage of a future opportunity or mitigate a future risk, and illustrate the implications of each decision option. In practice, prescriptive analytics can continually and automatically process new data to improve the accuracy of predictions and provide better decision options.
Prescriptive analytics can be used in two ways:
Inform decision logic with analytics: Decision logic needs data as an input to make the decision. The veracity and timeliness of data will ensure that the decision logic operates as expected. It doesn't matter whether the decision logic is that of a person or embedded in an application; in both cases, prescriptive analytics provides the input to the process. Prescriptive analytics can be as simple as aggregate analytics about how much a customer spent on products last month, or as sophisticated as a predictive model that predicts the next best offer to a customer. The decision logic may even include an optimization model to determine how much, if any, discount to offer the customer.
Evolve decision logic: Decision logic must evolve to improve or maintain its effectiveness. In some cases, decision logic itself may be flawed or degrade over time. Measuring and analyzing the effectiveness or ineffectiveness of enterprise decisions allows developers to refine or redo decision logic to make it even better. It can be as simple as marketing managers reviewing email conversion rates and adjusting the decision logic to target an additional audience. Alternatively, it can be as sophisticated as embedding a machine learning model in the decision logic of an email marketing campaign to automatically adjust what content is sent to target audiences.
Different technologies of prescriptive analytics used to create action:
Search and knowledge discovery: Information leads to insights, and insights lead to knowledge. That knowledge enables employees to become smarter about the decisions they make for the benefit of the enterprise. Developers can embed search technology in decision logic to find the knowledge used to make decisions in large pools of unstructured big data.
Simulation: Simulation imitates a real-world process or system over time using a computer model. Because digital simulation relies on a model of the real world, the usefulness and accuracy of simulation for improving decisions depends heavily on the fidelity of the model. Simulation has long been used in multiple industries to test new ideas, or how modifications will affect an existing process or system.
Mathematical optimization: Mathematical optimization is the process of finding the optimal solution to a problem that has numerically expressed constraints.
Machine learning: "Learning" means that the algorithms analyze sets of data to look for patterns and/or correlations that result in insights. Those insights can become deeper and more accurate as the algorithms analyze new data sets. The models created and continuously updated by machine learning can be used as input to decision logic, or to improve the decision logic automatically.
Pragmatic AI: Enterprises can use AI to program machines to continuously learn from new information, build knowledge, and then use that knowledge to make decisions and interact with people and/or other machines.
Use of prescriptive analytics in ThingWorx Analytics:
Thing Optimizer: Thing Optimizer functionality provides the prescriptive scoring and optimization capabilities of ThingWorx Analytics. While predictive scoring allows you to make predictions about future outcomes, prescriptive scoring allows you to see how certain changes might affect future outcomes. After you have generated a prediction model (also called training a model), you can modify the prescriptive attributes in your data (those attributes marked as levers) to alter the predictions. The prescriptive scoring process evaluates each lever attribute and returns an optimal value for that feature, depending on whether you want to minimize or maximize the goal variable. Prescriptive scoring results include both an original score (the score before any lever attributes are changed) and an optimized score (the score after optimal values are applied to the lever attributes). In addition, for each attribute identified in your data as a lever, original and optimal values are included in the prescriptive scoring results.
How to access Thing Optimizer functionality: ThingWorx Analytics prescriptive scoring can only be accessed via the REST API service. Using a REST client, you can access the Scoring service, which includes a series of API endpoints to submit scoring requests, retrieve results, list jobs, and more. This requires installation of the ThingWorx Analytics Server.
How to avoid mistakes - below are some common mistakes made while doing prescriptive analytics:
- Starting digital analytics without a clear goal
- Ignoring core metrics
- Choosing overkill analytics tools
- Creating beautiful reports with little business value
- Failing to detect tracking errors
Image source: Wikipedia; content: go.forrester.com (partially)
View full tip
This blog is intended to help diagnose and fix the most common issues that may be encountered when working with ThingWatcher. It cannot be stressed strongly enough that you should be familiar with your data, including the average time interval between data points and the collection duration and certainty threshold you specified. Before you start troubleshooting ThingWatcher, check that the result and training microservices are running.
- To test the result microservice, open a web browser and paste in the results URL: http://<IP of microservices>:<Port of results microservice>/results/models (e.g., http://localhost:8096/results/models)
- To test the training microservice, open a web browser and paste in the training URL: http://<IP of microservices>:<Port of training microservice>/training (e.g., http://localhost:8091/training)
If you see either {"values":[],"total":0,"next":null,"previous":null} or a list of training jobs in JSON format, the result and training microservices are available.
1. Question: I haven't seen an anomaly, but I believe that my 'property' is anomalous.
This can have several causes; here are the most common:
- The certainty is too high. If the certainty is too high, ThingWatcher is conservative in its categorization of "true positives" and therefore may emit more "false negatives". Reducing the certainty will change this behavior, but note that ThingWatcher may then categorize too many "false positives" as a result. In other words, ThingWatcher may detect the desired anomalies but also some non-anomalies.
- The 'property' was anomalous during training data collection. If ThingWatcher creates a predictive model from anomalous data, it may not be able to detect the desired anomalies during MONITORING because the data does not really appear to be anomalous; ThingWatcher treats this pattern as 'normal'. Therefore, ensure that 'property' values are non-anomalous during training as well.
- There are long time gaps during the monitoring state, so ThingWatcher stays in BUFFERING and categorizes these data points as non-anomalous.
2. Question: ThingWatcher detects an anomaly, but my 'property' is non-anomalous.
- The certainty might be too low. In this case, ThingWatcher reports anomalies when the incoming data pattern looks even slightly different from the expected data pattern.
- ThingWatcher might need more training data. If the 'property' data has a pattern that occurs over a long time span, ThingWatcher needs to collect multiple cycles of all these patterns in order to detect a true anomaly without emitting too many false positives.
3. Question: ThingWatcher is in FAILED state, why?
There are many possible reasons for a failed state; here are the most likely problems that can cause one:
- ThingWatcher emits a FAILED state because the training service has not been set up or is down, or, similarly, because the result service is not available. Note: messageText=Unexpected exception. {Throwable=[ConnectException: Operation timed out}]] or messageText=Unexpected exception. {Throwable=[ConnectException: Connection refused}]]. Note that ThingWatcher is still able to collect all training data, and you will only begin to see these failed states after ThingWatcher tries to post the training request.
- ThingWatcher emits a FAILED state because time gaps prevent the data collection for training. You will see this warning in the log messages: "A long time gap was detected in the data that is greater than the threshold of {n}". This means you have a long gap in the training data and ThingWatcher will recollect the data. If there are more than 3 recollections due to long time gaps, ThingWatcher transitions to a failed state and will not be able to recover. In this case you can either instruct ThingWatcher to retrain and try again, or check the data source to make sure it does not have long gaps.
4. Question: Why does ThingWatcher remain in BUFFERING?
There are many possible reasons for ThingWatcher to remain in BUFFERING, but the most likely issue is time gaps, which cause ThingWatcher to get stuck there. If the incoming data regularly contains long time gaps, you will notice that ThingWatcher keeps alternating between the MONITORING and BUFFERING states. You may need to provide better quality data, i.e. more evenly spaced data.
Source: Alex Meng, Specialist Software Engineer
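As a quick scripted version of the health checks described at the top of this post, the same two URLs can be probed with curl (the host and ports are the example values used above):

```sh
# Result microservice -- expect {"values":[],"total":0,"next":null,"previous":null} or a JSON list of models
curl http://localhost:8096/results/models

# Training microservice -- expect an empty list or a JSON list of training jobs
curl http://localhost:8091/training
```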
View full tip
An introduction to installing the ThingWorx platform. Information on the environment, prerequisites, and configuration steps when installing ThingWorx. Includes walkthroughs of installing with H2 and PostgreSQL databases, an introduction and demonstration of the Linux installation script, solutions to common installation problems and more.     For full-sized viewing, click on the YouTube link in the player controls.   Visit the Online Success Guide to access our Expert Session videos at any time as well as additional information about ThingWorx training and services.
View full tip
A confusion matrix is a technique for summarizing the performance of a classification algorithm. Classification accuracy alone can be misleading if you have an unequal number of observations in each class or if you have more than two classes in your data set. Calculating a confusion matrix can give you a better idea of what your classification model is getting right and what types of errors it is making.
Classification accuracy and its limitations:
Classification Accuracy = Correct Predictions / Total Predictions
The main problem with classification accuracy is that it hides the detail you need to better understand the performance of your classification model. Below are two examples:
1. When your data has more than 2 classes. With 3 or more classes you may get a classification accuracy of 80%, but you don't know if that is because all classes are being predicted equally well or whether one or two classes are being neglected by the model.
2. When your data does not have an even number of classes. You may achieve an accuracy of 90% or more, but this is not a good score if 90 records out of every 100 belong to one class, because you can achieve that score by always predicting the most common class value.
Classification accuracy can hide the detail you need to diagnose the performance of your model. Thankfully, we can tease apart this detail by using a confusion matrix.
Confusion matrix terminology:
A confusion matrix is a table that is often used to describe the performance of a classification model on a set of test data for which the true values are known. Let's start with an example for a binary classifier:
N=165          Predicted no   Predicted yes
Actual no            50             10
Actual yes            5            100
What can we learn from this confusion matrix?
- There are two possible predicted classes: "yes" and "no". If we were predicting the presence of a disease, for example, "yes" would mean they have the disease, and "no" would mean they don't.
- The classifier made a total of 165 predictions (e.g., 165 patients were tested for the presence of the disease).
- Out of those 165 cases, the classifier predicted "yes" 110 times and "no" 55 times.
- In reality, 105 patients in the sample have the disease, and 60 patients do not.
Let's now define the most basic terms, which are whole numbers (not rates):
- True positives (TP): cases in which we predicted yes (they have the disease), and they do have the disease.
- True negatives (TN): we predicted no, and they don't have the disease.
- False positives (FP): we predicted yes, but they don't actually have the disease. (Also known as a "Type I error.")
- False negatives (FN): we predicted no, but they actually do have the disease. (Also known as a "Type II error.")
N=165          Predicted no   Predicted yes   Total
Actual no          TN=50          FP=10          60
Actual yes          FN=5         TP=100         105
Total                  55            110
This is a list of rates that are often computed from a confusion matrix for a binary classifier:
- Accuracy: Overall, how often is the classifier correct? (TP+TN)/total = (100+50)/165 = 0.91
- Misclassification rate: Overall, how often is it wrong? (FP+FN)/total = (10+5)/165 = 0.09. Equivalent to 1 minus accuracy; also known as "error rate".
- True positive rate: When it's actually yes, how often does it predict yes? TP/actual yes = 100/105 = 0.95. Also known as "sensitivity" or "recall".
- False positive rate: When it's actually no, how often does it predict yes? FP/actual no = 10/60 = 0.17
- Specificity: When it's actually no, how often does it predict no? TN/actual no = 50/60 = 0.83. Equivalent to 1 minus the false positive rate.
- Precision: When it predicts yes, how often is it correct? TP/predicted yes = 100/110 = 0.91
- Prevalence: How often does the yes condition actually occur in our sample? Actual yes/total = 105/165 = 0.64
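Since all of these rates follow directly from the four counts, they are easy to compute in a few lines; here is a small sketch using the numbers from the example above:

```javascript
// Counts taken from the example confusion matrix above
var TP = 100, TN = 50, FP = 10, FN = 5;
var total = TP + TN + FP + FN;                  // 165

var accuracy          = (TP + TN) / total;      // (100+50)/165 = 0.91
var misclassification = (FP + FN) / total;      // (10+5)/165   = 0.09
var truePositiveRate  = TP / (TP + FN);         // 100/105      = 0.95 (sensitivity / recall)
var falsePositiveRate = FP / (FP + TN);         // 10/60        = 0.17
var specificity       = TN / (FP + TN);         // 50/60        = 0.83
var precision         = TP / (TP + FP);         // 100/110      = 0.91
var prevalence        = (TP + FN) / total;      // 105/165      = 0.64
```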
View full tip
The following PowerPoint contains some reference slides to get started with DSE/ThingWorx integration. Start by understanding the DSE architecture and, specifically, the differences compared to regular relational databases. Free online courses offered by DataStax Academy: –https://academy.datastax.com/courses/understanding-cassandra-architecture –https://academy.datastax.com/courses/installing-and-configuring-cassandra   The following section will guide you through some of the specifics: http://datastax.com/documentation/cassandra/2.0/cassandra/architecture/architecturePlanningAbout_c.html
View full tip
Key Functional Highlights
Production Advisor is now available in the Freemium and Developer Kit downloads. Plant managers are provided with real-time monitoring of production status and critical KPIs such as utilization, performance, quality and OEE, by unifying data from disparate lines, assets and sensors. With Production Advisor, plant managers can detect and react instantly to production issues, reaching lower downtime, higher production throughput and better quality from factory resources.
Compatibility
ThingWorx 8.0.1
KEPServerEX 6.2
KEPServerEX V6.1 and older, as well as different OPC Servers (with the Kepware OPC aggregator)
Documentation
ThingWorx Manufacturing Apps Setup and Configuration Guide: https://support.ptc.com/WCMS/files/173133/en/ThingWorxManufacturingAppsSetup_8-0-1.pdf
ThingWorx Manufacturing Apps Customization Guide: https://support.ptc.com/WCMS/files/173135/en/ThingWorxManufacturingAppsCust_8-0-1.pdf
Get Started documentation on the portal: https://www.ptc.com/en/thingworx/manufacturing-apps/Dashboard/Get-Started (PTC users should use their normal login credentials and do not need to register on the portal)
Download
Freemium and Developer Kit (8.0.1) are available for download here: https://www.ptc.com/en/thingworx/manufacturing-apps/Dashboard (PTC users should use their normal login credentials and do not need to register on the portal)
ThingWorx Platform Extensions (8.1.0, released 1 Nov 2017) are available for download here: https://support.ptc.com/appserver/auth/it/esd/product.jsp?prodFamily=TWA
View full tip
Preface
This guide applies to a clean installation of the CentOS 7 Minimal distribution. This is labeled as "Minimal ISO" on the CentOS.org website, and the filename of the ISO image used to install the operating system will resemble "CentOS-7-x86_64-Minimal-1611.iso". The machine used in this guide was a virtual machine created using Oracle VirtualBox, but the same steps should apply to any machine with a clean CentOS 7 Minimal install. It is, however, possible that some installations may encounter slight variations due to hardware configurations.
Before starting
Unzip the downloaded "MED-..._ThingWorx-Analytics-Server-Linux-Standalone-8-0-0.zip". Inside the unzipped directory you will find a file called "ThingWorxAnalyticsServer-8.0.0-linux-x64-installer.run". Before running step 10, upload that file to your CentOS machine using an SFTP/SCP tool of your choice (an scp example is shown at the end of this post).
Configuration and installation steps
Step 1: Install Docker with the following commands (these steps are presented at https://docs.docker.com/engine/installation/linux/docker-ce/centos/#install-using-the-repository):
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum makecache fast
yum -y install docker-ce
Step 2: Create a group called docker (if this command reports that the group already exists, that is OK; move to the next step):
groupadd docker
Step 3: Add your non-root user to the docker group. In this example the non-root user is called "thingworx"; please replace it with the correct username:
usermod -aG docker thingworx
Step 4: Start the Docker service and enable it to auto-start after reboot:
systemctl start docker
systemctl enable docker
Step 5: Verify that Docker is working:
docker ps
Step 6: After running the above command you should see a single line of output that resembles the following:
"CONTAINER ID        IMAGE              COMMAND            CREATED            STATUS              PORTS              NAMES"
Step 7: Disable SELinux with the two following commands. Note that by doing this, if this is a public-facing server, you will want to take appropriate security measures to lock down the system.
setenforce 0
sed -i -e 's/SELINUX=enforcing/SELINUX=disabled/' /etc/sysconfig/selinux
Step 8: Set the hostname of your machine to something other than the default, which is "localhost.localdomain". In this case the name "centos" is used; this can be replaced with a name of your choosing:
hostname centos
echo "centos" > /etc/hostname
Step 9: Allow traffic through the default CentOS firewall. Note that in a production environment the firewall should be configured in a more granular way, allowing incoming traffic only on the required ports (5432, 2181 and 8080). Please refer to the CentOS documentation and consult security best practices within your organization for more information. The following commands completely disable the CentOS firewall.
systemctl disable firewalld
systemctl stop firewalld
Step 10: Ensure the ThingWorx Analytics Server installer is executable, then run the installer. You may have to change to the directory where the installer was uploaded to the machine; in this case it is in the home directory of the user named thingworx. Please replace that path with the correct path for your machine. Note that below are 3 separate commands.
cd /home/thingworx
chmod +x ThingWorxAnalyticsServer-8.0.0-linux-x64-installer.run
./ThingWorxAnalyticsServer-8.0.0-linux-x64-installer.run
Step 11: Verify that the ThingWorx Analytics Server installation is successful. Note that it may take a few minutes for the system to become available; retry the command after a few minutes if an error is initially encountered.
curl http://127.0.0.1:8080/analytics/1.0/about/versioninfo
NOTE: The response from the above command should resemble the following:
{"implementationVersion":"8.0.0"}
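The installer upload mentioned in the preface can be done with any SCP/SFTP client; from a Linux or macOS workstation a plain scp call would look like the following (the username, host placeholder and target directory match the examples used in this guide):

```sh
scp ThingWorxAnalyticsServer-8.0.0-linux-x64-installer.run thingworx@<centos-host>:/home/thingworx/
```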
View full tip
There was a test on Connection Server 7.2. With a 4-core CPU and 8 GB of memory, we sent 1,000 HTTP requests every second and 5% of the HTTP requests were lost. After changing the Connection Server configuration, the loss rate dropped to 0.86%. Here are some suggestions to improve Connection Server performance:
1. Reset parameters in the Connection Server configuration file cxserver.conf (..\conf\cxserver.conf): adjust the parameters max-connection-pool-size and max-wait-queue-size.
2. Change the default JVM settings (increase the memory available to the JVM appropriately). In this case, a new file named startMyConnectionServer.bat was created with the code below:
SET CONNECTION_SERVER_HOME=C:\connection-server-7.2.0.2095
SET JAVA_OPTS=-Xms2G -Xmx2G
%CONNECTION_SERVER_HOME%\bin\connection-server.bat
3. Increase the Connection Server's hardware (memory, CPU cores). The minimum system requirement is 16 GB memory and 4 CPU cores; refer to the ThingWorx Core 8.0 System Requirements for more hardware information.
View full tip
Please note that the below configuration is intended for testing purposes only. Make sure that your final deployment is within your business security policies.
The installation guide can be found at: http://support.ptc.com/WCMS/files/173161/en/ThingWorxDockerInstaller.pdf
Postgres:
Reference the installation guide above for a supported version of Postgres. Once deployed, configure it to support remote connections:
- Navigate to <PostgresInstallPoint>\data
- Open pg_hba.conf with a text editor
- Find the line for IPv4 local connections and change 127.0.0.1/32 to 0.0.0.0/0 (for example, a typical entry such as "host  all  all  127.0.0.1/32  md5" becomes "host  all  all  0.0.0.0/0  md5")
- Restart the PostgreSQL server
NOTE: This can open up security vulnerabilities to the database, so make sure you take appropriate security measures if the data will be sensitive.
Docker:
Find the appropriate Docker platform for your OS:
- Docker Community Edition: for Windows Server 2016, there is a download for the Edge (Windows Server 2016) edition under the above link -> Docker CE for Windows -> then scroll down a little.
- Docker Toolbox: if you try to deploy the Docker Community Edition on a system that doesn't support it, it will direct you to this installation instead.
At some point during or after the installation, it will prompt you to enable Hyper-V. If this is a physical server, these settings will be in your BIOS. For VMware, while the VM is powered down, go to VM -> Virtual Machine Properties -> Hardware -> Processors -> enable 'Virtualize Intel VT-x/EPT or AMD-V/RVI'.
Restart, and make sure Docker is running (whale icon in your system tray for the Windows Server 2016 Edge version).
With Docker running, open a command prompt and look at your IP settings. For Windows Server 2016, right-click the Start menu -> Command Prompt (admin) and run IPCONFIG. Write down the IP assigned to DockerNAT, as this will be your Postgres host later.
Share your main drive with Docker. In Windows Server 2016, right-click the Docker icon in the system tray -> Settings -> Shared Drives -> C:
ThingWorx installation:
At this point you should have Docker installed and Postgres configured for remote connections with only the admin user (postgres). The installer will create the image/container inside of Docker, install Tomcat, and configure your database. Below is a capture of the settings used in the above screenshots.
Anything not listed (like the container name, which is twxfoundation by default) was left at the default values:
      Installation Directory: C:\Program Files (x86)\twxEnterpriseFoundationPostgresDocker
      ThingWorx License Directory: C:\Users\Administrator\Desktop\license.bin
      Local ThingWorx Foundation Port: 8080
      Java Initial Heap setting for TWX Foundation: 1024
      Java Max Heap setting for TWX Foundation: 2048
      RDS Instance: 1
      PostgreSQL Host: 10.0.75.1
      PostgreSQL Port: 5432
      PostgreSQL Admin Schema: postgres
      PostgreSQL Admin Username: postgres
      PostgreSQL Admin Password: <see note>
      PostgreSQL ThingWorx Foundation Schema: thingworx
      PostgreSQL ThingWorx Foundation Username: thingworx
      PostgreSQL ThingWorx Foundation Password: <see note>
      PostgreSQL ThingWorx Tablespace Location: /
NOTE: It is highly recommended to use a complex password (letters of all cases, numbers, and symbols), as we have opened the database to remote connections.
- RDS was set to Yes (the default is No).
- PostgreSQL Host is the IP taken from the earlier steps.
- In this example, the tablespace location is defined inside of Docker, not Windows.
Post install:
Confirm that ThingWorx is running properly by opening a browser and attempting to log in. For this example, the URL is http://10.0.75.1:8080/Thingworx
Troubleshooting:
If the installation fails, refer to the end of the installation guide for where to look for logs and which items need to be cleaned up before attempting to install again.
If the install was successful but connecting fails, run the following in the command prompt to look at the Docker server's startup logs for hints:
docker logs -f twxfoundation
*Note that twxfoundation is the default name during installation. If this was changed in your installation, use that name instead.
View full tip
Ran into this recently, so I thought I'd share an approach to getting a table with a multi-column distinct while retaining all the columns of the row. If you use Distinct, you get only the columns you do the Distinct on. This isn't very helpful if you want the 'latest' or the 'first occurrences' of records in your table with a combination of fields being unique. For example, I had Process, Part, Dimension and Point, for which I had multiple value and date-time entries, but I only wanted the latest entries. Following is how I solved it; if you have a better way, please leave a comment!
P.S.: for the query I used the awesome query builder available in the snippet section!
---------------------------------------
var q1Result = Things["MyThing"].QueryStreamEntriesWithData({maxItems:99999, query:query1});

// Below creates a temporary measurement table to store the latest measurement values
var params = {
    infoTableName : "InfoTable",
    dataShapeName : "MyDatashape.DS"
};
// CreateInfoTableFromDataShape(infoTableName:STRING("InfoTable"), dataShapeName:STRING):INFOTABLE(MyDataShape.DS)
var tempTable1 = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);

// Extract only the latest measurements for the PART from the measurement result table 'q1Result'
// The way we are going to reduce this to unique measurements is:
// 1. records are in reverse order of date time
// 2. get distinct by Process, Part, Dimension, Point
// 3. step through and match against the distinct set
// 4. first match goes into the final set
// 5. upon match, remove it from the distinct set
// 6. if there is no match, skip the record
// 7. if there are no more distinct records to match, break the loop
var params = {
    t: q1Result /* INFOTABLE */,
    columns: 'ProcessID,PartID,Dimension,Point' /* STRING */
};
// result: INFOTABLE
var distinctResult = Resources["InfoTableFunctions"].Distinct(params);

for (var x = 0; x < q1Result.rows.length; x++) {
    var query = {
      "filters": {
        "type": "AND",
        "filters": [
          { "fieldName": "ProcessID", "type": "EQ", "value": q1Result.rows[x].ProcessID },
          { "fieldName": "PartID",    "type": "EQ", "value": q1Result.rows[x].PartID },
          { "fieldName": "Dimension", "type": "EQ", "value": q1Result.rows[x].Dimension },
          { "fieldName": "Point",     "type": "EQ", "value": q1Result.rows[x].Point }
        ]
      }
    };

    var params = {
        t: distinctResult /* INFOTABLE */,
        query: query /* QUERY */
    };
    // result: INFOTABLE
    var matchResult = Resources["InfoTableFunctions"].Query(params);

    if (matchResult.rows.length == 1) {
        tempTable1.AddRow(q1Result.rows[x]);

        var params = {
            t: distinctResult /* INFOTABLE */,
            query: query /* QUERY */
        };
        // result: INFOTABLE
        var distinctResult = Resources["InfoTableFunctions"].DeleteQuery(params);
        if (distinctResult.rows.length == 0) {
            break;
        }
    }
}

// We now have a tempTable1 with the full rows, distinct on the 4 fields
result = tempTable1;
View full tip
This has been moved to its new home in the Augmented Reality Category in the PTC Community.
View full tip
This is a useful trick for rolling up metrics in ThingWorx across various levels of a hierarchy by using Networks, ThingShapes, and recursive service definitions. Say that you have a hierarchy of Things in your model such as a Global view, a Region view, and a Store view -- this could be a manufacturing plant, a building, an asset, etc.; whatever the core metric-producing Thing is in your model -- where your Store has KPIs that you want to roll up across regions and globally.
First, create a template for each of your hierarchical levels. In my case these are GlobalTemplate, RegionalTemplate, and StoreTemplate. Add a property to your StoreTemplate that will be the KPI. Now, create a Thing for the Globe, and one for each of your Regions and Stores, and add them to a hierarchical Network as such:
Now we need to create a ThingShape to aggregate our KPIs and apply it to the Global, Regional, and Store templates. We will define a recursive function on our ThingShape called GetKPI and define it with the following:

// define our base case: the thing template we are on is the lowest level of our hierarchy, in this case the StoreTemplate
if (me.thingTemplate == "StoreTemplate") {
    // in our base case, the result is just the property for the metric we want to aggregate
    result = me.someMetric;
} else {
    // otherwise, we are at some other level in the hierarchy and we need to get our child connections from the network
    // this gets all the things below us in the network
    var params = {
        name: me.name /* STRING */
    };
    // result: INFOTABLE dataShape: NetworkConnection
    var network = Networks["Network"].GetChildConnections(params);
    // loop through each of the things below us in the hierarchy and recursively add the result of GetKPI() to our result
    result = 0;
    for each (var row in network.rows) {
        result += Things[row.to].GetKPI();
    }
}

This is a simple case of just summing up a single property, but we can take this further using the Union and Aggregate snippets provided by ThingWorx to do other kinds of summarization. First add a new property called someAvgMetric to our StoreTemplate, and define a new service GetKPIProperties, with an InfoTable result, on the StoreTemplate:

var params = {
    propertyNames: {"items": ["someMetric", "someAvgMetric"]} /* JSON */
};
// result: INFOTABLE dataShape: "undefined"
var result = me.GetNamedProperties(params);

Now, define a new service on our ThingShape to utilize this service as our base case and aggregate the resulting InfoTable when necessary. We'll call this service GetKPIAggregates:

// define our base case
if (me.thingTemplate == "StoreTemplate") {
    // this service is defined on the StoreTemplate and returns the base infotable
    result = me.GetKPIProperties();
} else {
    // grab our network
    var params = {
        name: me.name /* STRING */
    };
    // result: INFOTABLE dataShape: NetworkConnection
    var network = Networks["Network"].GetChildConnections(params);
    // need to create an empty infotable to union into. I glossed over this, but you'll need a datashape here
    // create empty infotable
    var result = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape({ infoTableName: "InfoTable", dataShapeName: "KPIDataShape" });
    // loop through and union each of our results into the new infotable
    for each (var row in network.rows) {
        var params = {
            t1: result /* INFOTABLE */,
            t2: Things[row.to].GetKPIAggregates() /* INFOTABLE */
        };
        var result = Resources["InfoTableFunctions"].Union(params);
    }
    // aggregate each of our fields
    var params = {
        t: result /* INFOTABLE */,
        columns: "someMetric,someAvgMetric" /* STRING */,
        aggregates: "SUM,AVERAGE" /* STRING */,
        groupByColumns: undefined /* STRING */
    };
    // result: INFOTABLE
    var result = Resources["InfoTableFunctions"].Aggregate(params);
    // need to loop through each of our field names and make them match our base infotable
    // infotable datashape iteration
    var dataShapeFields = result.dataShape.fields;
    for (var fieldName in dataShapeFields) {
        var stringName = dataShapeFields[fieldName].name;
        var params = {
            t: result /* INFOTABLE */,
            from: stringName /* STRING */,
            to: stringName.split("_")[1] /* STRING */
        };
        // result: INFOTABLE
        var result = Resources["InfoTableFunctions"].RenameField(params);
    }
}

Now, in our mashups, we can use a DynamicThingShape and call our GetKPI services at any level in our network, and our data will be aggregated correctly for whatever level we are at in the hierarchy!
View full tip
This has been moved to its new home in the Augmented Reality Category in the PTC Community.
View full tip
Sometimes it is necessary to delete an existing PostgreSQL database, especially if a different major version was installed by mistake (for example, 9.6 in place of the supported 9.4). It is then absolutely necessary to ensure the database is fully deleted and there are no database "ghosts". The proper way to uninstall is to go to the PostgreSQL server installation directory and find the uninstall-postgresql file; double-click it to run the uninstaller, and it will uninstall PostgreSQL. In case the uninstall wasn't performed correctly, below are the manual steps to clean it up. One sign of an existing "ghost" database is randomly seeing a second PostgreSQL server in pgAdmin III, or experiencing problems with no reported errors when running the ThingWorx installation scripts. To uninstall manually (in this example 9.6 is the version to delete; please replace it with your own where needed):
1. Remove the PostgreSQL server installation directory: rd /s /q "C:\Program Files\PostgreSQL\9.6" (assuming the default location)
2. Delete the user 'postgres': net user postgres /delete
3. Remove the registry entries: HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Installations\postgresql-9.6 and HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Services\postgresql-9.6
4. Remove the postgresql-9.6 service: sc delete postgresql-9.6
View full tip