IoT & Connectivity Tips

Predictive models: Predictive modeling is one of the core techniques of predictive analytics. It is the development of models that are trained on historical data and then make predictions on new data; the models analyze current data records in combination with historical records.

Use of Predictive Analytics in Thingworx Analytics and How to Access Predictive Analysis Functionality via Thingworx Analytics

Bias and variance are the two components of imprecision in predictive models. Bias is a measure of model rigidity and inflexibility: a high-bias model is not capturing all the signal it could from the data. Bias is also known as under-fitting. Variance, on the other hand, is a measure of model inconsistency: high-variance models tend to perform very well on some data points and very poorly on others. This is also known as over-fitting, and it means the model is too flexible for the amount of training data available, so it picks up noise in addition to signal.

If your model performs well on the training set but much worse on the hold-out set, it is suffering from high variance. If it performs poorly on both the training and test sets, it is suffering from high bias.

Techniques to improve a model:

Add more data: More data lets the "data speak for itself" instead of relying on assumptions and weak correlations, and it generally yields more accurate models. How much more is enough cannot be quantified in general; it depends on the problem you are working on and the algorithm you are implementing. For example, with time-series data you should look for at least one year of history, and neural network algorithms in particular need large training sets or the model won't generalize.

Feature engineering: Adding new features decreases bias at the expense of variance. New features can help the algorithm explain the variance in the data more effectively. Spend enough time during hypothesis generation on the features the model requires, then create those features from the existing data sets.

Feature selection: This is one of the most important aspects of predictive modeling. It is always advisable to identify the important features and rebuild the model using only the significant ones. For example, say you have 100 variables; a subset of them will drive most of the variance of the model. If you select features on a p-value basis alone, you may still be left with more than 50 variables, so look at other measures, such as each variable's individual contribution to the model. If 90% of the model's variance is explained by only 15 variables, then choose only those 15 variables for the final model.

Multiple algorithms: Choosing the right machine learning algorithm is the ideal way to achieve higher accuracy. Some algorithms are better suited to particular types of data sets than others, so apply all relevant models and compare their performance.

Algorithm tuning: Machine learning algorithms are driven by parameters, and these parameters strongly influence the outcome of the learning process. The objective of parameter tuning is to find the optimum value of each parameter to improve the accuracy of the model.
To tune these parameters, you must have a good understanding of their meaning and their individual impact on the model, and you can repeat the process across a number of well-performing models. For example, a random forest has parameters such as max_features, the number of trees, random_state, and oob_score; informed optimization of these parameter values results in better and more accurate models.

Cross validation: Cross validation is one of the most important concepts in data modeling. The idea is to hold out a sample that the model is never trained on and to test the model on that sample before finalizing it. This method helps the model learn more generalized relationships.

Ensemble methods: This is the approach found most often in the winning solutions of data science competitions. The technique combines the results of multiple weak models to produce a better result, and it can be achieved in several ways:

Bagging: Uses several versions of the same model trained on slightly different samples of the training data to reduce variance without any noticeable effect on bias. Bagging can be computationally intensive, especially in terms of memory.

Boosting: A slightly more complicated concept that trains several models successively, each trying to learn from the errors of the models preceding it. Boosting decreases bias and hardly affects variance.
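To make cross validation and tuning concrete, here is a minimal sketch in Python with scikit-learn; this is an assumption for illustration only (ThingWorx Analytics exposes these concepts through its own services, not this API), and the data set, parameter grid, and values are invented:

# Hypothetical illustration with scikit-learn; names and values are examples only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, cross_val_score

# Synthetic stand-in for historical training data.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

# Cross validation: score the model on folds it was never trained on.
scores = cross_val_score(RandomForestClassifier(random_state=42), X, y, cv=5)
print("Accuracy per fold:", scores)

# Algorithm tuning: search over parameters such as tree count and max_features.
grid = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid={"n_estimators": [50, 100], "max_features": ["sqrt", 0.5]},
    cv=5,
)
grid.fit(X, y)
print("Best parameters:", grid.best_params_)

In an experiment like this, a large gap between the training score and the cross-validated score is the high-variance signature described above, while uniformly poor scores point to high bias.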
Welcome to the ThingWorx Manufacturing Apps Community! The ThingWorx Manufacturing Apps are easy-to-deploy, pre-configured, role-based starter apps built on PTC's industry-leading IoT platform, ThingWorx. These Apps provide manufacturers with real-time visibility into operational information, improved decision making, accelerated time to value, and unmatched flexibility to drive factory performance.

This Community page is open to all users, including licensed ThingWorx users, Express ("freemium") users, and anyone interested in trying the Apps. Tech Support community advocates serve users on this site and are here to answer your questions about downloading, installing, and configuring the ThingWorx Manufacturing Apps.

A. Sign up

ThingWorx Manufacturing Apps Community: PTC account credentials are needed to participate in the ThingWorx Community. If you have not yet registered a PTC eSupport account, start with the Basic Account Creation page.

Manufacturing Apps web portal: Register a login for the ThingWorx Manufacturing Apps web portal, where you can download the free trial and navigate to the additional resources discussed below.

B. Download

Choose a download/packaging option to get started.

i. Express/Freemium installer (best for users who are new to ThingWorx): If you want to quickly install the ThingWorx Manufacturing Apps (including ThingWorx), use the following installer: Download the Express/Freemium Installer.

ii. 30-day Developer Kit trial: To experience the capabilities of the ThingWorx Platform with the Manufacturing Apps and create your own Apps: Download the 30-day Developer Kit trial.

iii. Import as a ThingWorx extension (for users with a Manufacturing Apps entitlement, including ThingWorx commercial customers, PTC employees, and PTC Partners): The ThingWorx Manufacturing Apps can be imported as ThingWorx extensions into an existing ThingWorx Platform install (v8.1.0). To locate the download, open the PTC Software Download Page and expand the following folders:

ThingWorx Platform | Release 8.x | ThingWorx Manufacturing Apps Extension | Most Recent Datacode

C. Learn

After downloading the installer or extensions, begin with installation and configuration. Follow the steps laid out in the ThingWorx Manufacturing Apps Setup and Configuration Guide 8.2, and find helpful getting-started guides and videos within the 'Get Started' section of the ThingWorx Manufacturing Apps Portal.

D. Customize

Once you have successfully downloaded, installed, and configured the Manufacturing Apps, begin to explore the deeper potential of the Apps and the ThingWorx Platform. Follow along with the discussion and steps contained in the ThingWorx Manufacturing Apps and Service Apps Customization Guide 8.2. Also contained within the 'Get Started' page of the ThingWorx Manufacturing Apps Portal is the "Evolve and Expand" section, featuring:

- Custom Plant Layout application
- Custom Asset Advisor application
- Global Plant View application
- ThingWorx Manufacturing Apps Technical Lab with Sigma Tile (Raspberry Pi application)
- Configuring the Apps with a demo data set and simulator
- Additional advanced documentation

E. Get help / give feedback / interact

Use the ThingWorx Manufacturing Apps Community page as a resource to find documentation, peruse past forum threads, or post a question to start a discussion! For advanced troubleshooting, licensed users are encouraged to submit support tickets to the PTC My eSupport portal.
Maintain cookies and security information by implementing session parameters in your application.

Guide Concept

This project introduces creating and accessing session data for a User logged into your application. Session data consists of global, session-specific parameters that can be used on both the client and server side. Following the steps in this guide, you will be able to access the logged-in User's information and their set values. We will teach you how to access session data that can later be used to provide Users with unique experiences and a more robust application.

You'll learn how to:

Create session data
Access stored session data

NOTE: This guide's content aligns with ThingWorx 9.3. The estimated time to complete this guide is 30 minutes.

Step 1: Completed Example

Download the completed files for this tutorial: Sessions.xml. The Sessions.xml file contains a completed example of session parameters. Use this file to see a finished example, and return to it as a reference if you become stuck during this guide. Keep in mind, this download uses the exact names for entities used in this tutorial; if you would like to import this example and also create entities on your own, change the names of the entities you create.

1. In the bottom-left of Composer, click Import/Export.
2. Click IMPORT.
3. In the Import pop-up, keep the default values and click Browse. Navigate to the Sessions.xml file you downloaded.
4. Select it and click Open.
5. Click Import in the Import pop-up.
6. Click Close to close the pop-up.

Step 2: Create Session Parameters

1. Click the Browse folder on the left-hand side. Under System, select Subsystems.
2. Filter for UserManagementSubsystem and open it in Edit mode.
3. Select Services and filter for the AddSessionShape Service.
4. Click the Play button to open the Execute window.
5. Enter UserLogin (the provided ThingShape) in the name input field and click Execute.
6. Click Done.

You've just created your first session parameter. These values are used for content held in a cookie for a website, or for information that might be static for the User or session.

Best practice: For information that will be static for the entire application and not based on the session, use a database option or a stored value in a Thing.

Step 3: Access Session Parameters

1. Click the Browse folder on the left-hand side. Under System, select Resources.
2. Filter for CurrentSessionInfo and open it.
3. Select Services and filter for the GetGlobalSessionValues Service.
4. Click the Play button to open the Execute window, then click Execute. Notice that the result is a list of the properties in the UserLogin ThingShape; your result might differ from mine.
5. Click Done.

NOTE: There is a difference between session parameters and Mashup parameters. Mashups can have input values that are used for services or content of that Mashup ONLY. Session parameters are based on the user's application session: the data is accessible throughout the application and lasts until they have completed their usage. This guide shows how to create session parameters, which are considered global session parameters.

Step 4: Next Steps

Congratulations!
You've successfully completed the Create Session Parameters guide, and learned how to:

Access a logged-in user's information and their set values
Use session data to provide users with unique experiences and a more robust application

Learn More

We recommend the following resources to continue your learning experience:

Capability: Build - Create Custom Business Logic
Capability: Build - Data Model Introduction

Additional Resources

If you have questions, issues, or need additional information, refer to:

Community: Developer Community Forum
Support: Session Parameter Help Center
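If you want to inspect session values from outside Composer, a minimal sketch like the following can call the same GetGlobalSessionValues service over the ThingWorx REST API. The host and credentials here are placeholders, and the script assumes the UserLogin ThingShape was added as in Step 2:

# Hypothetical sketch: server URL and credentials are placeholders.
import requests

BASE = "https://your-thingworx-server/Thingworx"
resp = requests.post(
    BASE + "/Resources/CurrentSessionInfo/Services/GetGlobalSessionValues",
    auth=("YourUser", "YourPassword"),  # the user whose session values you want
    headers={"Accept": "application/json", "Content-Type": "application/json"},
    json={},
)
resp.raise_for_status()
# Each row corresponds to a property on the UserLogin ThingShape.
for row in resp.json().get("rows", []):
    print(row)

The rows returned should mirror what the Execute window showed in Step 3.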
Introduction

The Oracle 12c release introduced the concept of a multi-tenant architecture, which houses several databases running as services under a single database installation. This post addresses the connectivity and configuration required to connect to one of the pluggable databases running in the multi-tenant architecture.

Multi-tenant database architecture in the scope of a ThingWorx external data source

What is a multi-tenant database architecture? It is the running of multiple databases under a single database installation. Oracle 12c allows the user to create one database, called the Container Database (CDB), and then spawn several databases, called Pluggable Databases (PDB), running as services under it.

Why use the multi-tenant architecture? Such a setup allows users to spawn a new PDB as and when needed with limited resource requirements, and to easily administer several PDBs just by administering the container database, since all the PDBs are contained within a single database's tablespace structure. Individual PDBs can be started and stopped independently, leading to lower cost in maintaining different databases, as resource management is limited to one CDB.

When should you use the multi-tenant architecture? In scenarios like creating PoCs, provisioning different test environments that require external data storage, or maintaining different versions of a data set, running in the multi-tenant architecture can help save time, money, and effort.

Create a Container Database (CDB)

Creating a Container Database (CDB) is not very different from creating a non-container database; use the attached guide, Installing Oracle Database Software and Creating a Database.pdf (also accessible online).

Create a Pluggable Database (PDB)

Use the attached guide, Multitenant: Create and Configure a Pluggable Database (PDB) in Oracle Database 12c (also accessible online), to create a Pluggable Database and plug it into the Container Database created in the previous step. Using the above guide, I have a bunch of pluggable databases, as can be seen below. I'll be using TW724 to connect to the ThingWorx server as an external data source in the following example.

Connect to a Pluggable Database (PDB) as an external data source for ThingWorx

1. Download and unzip the Relational Databases Connectors Extension from the ThingWorx Marketplace and extract Oracle12Connector_Extension.
2. Import Oracle12Connector_Extension into ThingWorx using Extension -> Import.
3. Create a Thing using the OracleDBServer12 Thing Template, e.g. TW724_PDB_Thing.
4. Navigate to the Configuration for TW724_PDB_Thing and update the default configuration:

JDBC Driver Class Name: oracle.jdbc.OracleDriver
JDBC Connection String: jdbc:oracle:thin:@//oravm.ptcnet.ptc.com:1521/tw724.ptcnet.ptc.com
Database Username: <UserName>
Database Password: <password>

5. Once done, save the entity.

Note: A PDB in a container database can be reached only as a service, not via the CDB's SID. In the above configuration, TW724 is a PDB which can be connected to via its service name, i.e. TW724.PTCNET.PTC.COM.

Let's head to the Services tab for TW724_PDB_Thing to query and access the PDB data.

Creating services to access the PDB as an external database source for ThingWorx

Once the configuration is done, the TW724_PDB_Thing is ready for use. The queries remain the same as any other SQL query needed to access data from Oracle.
Service for creating a table

On the Services tab for the TW724_PDB_Thing, click Add My Service and select the service handler SQL Command, then use the following script to create testTable1 in the PDB:

create table testTable1 (
    id NUMBER GENERATED ALWAYS AS IDENTITY primary key,
    col1 varchar2(100),
    col2 number
)

Note: The GENERATED ALWAYS AS IDENTITY option is Oracle 12c specific. I included it here because, with Oracle 12c, auto-generation is now built in, simplifying sequence generation compared with older Oracle versions such as Oracle 11g. The user creating the table will need rights to create tables and sequences; check the Oracle documentation on Identity for more on this.

Service for getting all the data from the table

Add another service with the script Select * from testTable1 to get all the data from the table.

Service for inserting data into the table

Add another service with the script insert into testTable1 (col1, col2) values ('TextValue', 123) to insert data into the table created above.

Service for getting all tables from the PDB, i.e. TW724

Using Select * from tab lists all the available tables in the TW724 PDB.

Summary

For a quick visual wrap-up of how this looks, refer to the following image. Since this is a scalable setup, given a platform with enough resources it is possible to create up to 252 PDBs under a CDB; each of those 252 PDBs could be configured to its own Thing extending the OracleDBServer12 Thing Template.

______________________________________________________________________________________________________________________________________________

Edit: Common connection troubleshooting

If you observe an error like this:

Unable to Invoke Service GetAllPDBTables on TW724_PDB_Thing : ORA-01033: ORACLE initialization or shutdown in progress

ensure that the pluggable database (TW724 in this error, since that is what I created and used above in my services) is opened and accessible. If it is not open, log in to the CDB (ORCL here) as sys/system (with admin rights) via SQL*Plus, SQL Developer, or any SQL utility of your choice capable of connecting to an Oracle DB, and open the pluggable database using the command:

alter pluggable database tw724 open;
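For completeness, the same service-name connection can be checked from outside ThingWorx; the sketch below uses Python's cx_Oracle as an assumption (the host, service name, and credentials mirror the example configuration above and should be replaced with your own):

# Sketch only: connect to the PDB by its service name, never by the CDB's SID.
import cx_Oracle

conn = cx_Oracle.connect(
    "your_user",
    "your_password",
    "oravm.ptcnet.ptc.com:1521/tw724.ptcnet.ptc.com",  # host:port/service_name
)
cur = conn.cursor()
cur.execute("select * from testTable1")
for row in cur:
    print(row)
conn.close()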
The document attached to this blog entry came out of my first exposure to using the C SDK on a Raspberry Pi. I took notes on what I had to do to get my own simple edge application working, and I think it is a good introduction to using the C SDK to report real, sampled data. It also demonstrates how you can use the C SDK without HTTPS, including how to turn off HTTPS support. I would appreciate any feedback on this document and what additions might be useful to anyone else who tries this on their own.
GUIDE CONCEPT

This guide introduces connecting an Allen-Bradley PLC to ThingWorx Kepware Server.

YOU'LL LEARN HOW TO

Create and run a simple ladder logic application on an Allen-Bradley PLC
Connect the PLC to ThingWorx Kepware Server

NOTE: The estimated time to complete this guide is 30 minutes.

Step 1: Learning Path Overview

Assuming you are using this guide as part of the Rockwell Automation Learning Path, you have now completed each of the following installations:

1. Connected Components Workbench
2. ThingWorx Kepware Server
3. ThingWorx Foundation (for Windows)

In this step, you'll connect an Allen-Bradley PLC to Connected Components Workbench and then to ThingWorx Kepware Server. In a later guide, we'll propagate that information further, from ThingWorx Kepware Server into ThingWorx Foundation.

NOTE: Both Rockwell Automation's Connected Components Workbench and ThingWorx Kepware Server are time-limited trials. If significant time has passed while pursuing this Learning Path, you may need to reinitialize them. Consult the Troubleshooting step of this guide for more information.

Step 2: Setup PLC

This guide uses an inexpensive Allen-Bradley Micro820 PLC as a demonstration. ThingWorx Kepware Server offers drivers for hundreds of devices, making this step the only one that contains device-specific instructions. Read and understand the installation instructions before making any electrical connections to the PLC.

1. Connect the positive lead of a 24V power supply, along with a 6" test lead, to Terminal 1 of the output terminal block.
2. Connect the negative lead of the power supply to Terminal 2.
3. Confirm the test lead is secured from making contact with anything conductive; it will be connected to +24V. Power on the supply and confirm the LEDs briefly light.
4. Carefully touch the test lead to the Input 1 terminal and confirm the indicator LED for Input 1 turns on.
5. Power off the supply before continuing to the next step.

Step 3: Create PLC Project

In this step, you will create a simple PLC application. This application will connect to a ThingWorx Mashup in subsequent guides in the Learning Path.

1. After opening Connected Components Workbench, click New... in the Project section.
2. Enter ThingWorxGuide in the Name field and click Create.
3. Browse to the PLC model you are using and click Select, then Add to Project.
4. Right-click Program, then left-click Add > New LD: Ladder Diagram.
5. Double-click Prog1 to open the ladder window.

Ladder Logic

You will create a simple application that turns on output 2 when there is a signal on input 2.

1. Right-click in the box to the left of the rung, hover over Insert Ladder Elements, then click Direct Coil.
2. Click the I/O - Micro820 tab towards the right and select an output coil - this guide uses _IO_EM_DO_02. Then click OK.
3. Add an input contact by right-clicking in the box to the left of the rung, hovering over Insert Ladder Elements, then clicking Direct Contact.
4. Click the I/O - Micro820 tab and scroll down to select an input - this guide uses _IO_EM_DI_02. Then click OK.
5. The program window should now look like this:

Upload

Next, you will propagate the program to the PLC.

1. Secure the test lead, then apply power to the PLC.
2. Connect an Ethernet cable directly between the PLC and your Windows computer.
3. Click Device > Connect to connect to the PLC; a pop-up will appear saying the project does not match the program in the controller.

NOTE: When either your PLC or computer is restarted, it may be assigned a new IP address, requiring you to reconfigure the connection. Click the tab labeled with your PLC, then click the pencil icon next to the connection path, click Browse, expand the Ethernet driver, highlight the active controller, and click OK. Click Close and then Connect.

4. Click Download current project to the controller.
5. Confirm overwriting any program in the controller by clicking Download.
6. After your project is downloaded, run it on the controller by clicking Yes.
7. Touch the test lead to the I-02 terminal, and your program will turn on the #2 output. You can confirm your project is working by both hearing the soft click from the PLC and seeing the output indicator turn on.

Step 4: Configure ThingWorx Kepware Server

Now that you have a simple project running on the PLC, you need to configure ThingWorx Kepware Server to monitor it.

1. Open ThingWorx Kepware Server, right-click on Connectivity, and click New Channel.
2. Select Allen-Bradley Micro800 Ethernet from the drop-down, then click Next.
3. Click Next to accept the defaults, and click Finish to create Channel2.
4. Click Click to add a device below Channel2, enter myPLC in the name field, and click Next.
5. Enter the IP address of your PLC, then click Next. The IP address of your PLC is shown in Connected Components Workbench in Device > Configure.

NOTE: The IP address of the PLC may change when it is power cycled and must be updated in ThingWorx Kepware Server to match.

6. Click Next to accept the default values in each pop-up, and click Finish to create the myPLC device.
7. Click the Click to add a static tag message.
8. Enter Coil2 in the Name field, _IO_EM_DO_02 in the Address field, change the Data Type drop-down to Boolean, and click OK. The address must exactly match a variable name in the PLC.
9. Create a second tag by right-clicking on myPLC again and clicking New Tag.
10. Enter Coil3 in the Name field, _IO_EM_DO_03 in the Address field, select Boolean from the Data Type drop-down, and click OK.

Step 5: Troubleshooting

1. If the connection to the PLC stops working and there is a Thumbs Down icon next to your Properties, the ThingWorx Kepware Server trial edition drivers are no longer connected to your PLC. The trial edition stops running after 2 hours and must be stopped and restarted: right-click the ThingWorx icon in the system tray, click Stop Runtime service, wait a minute for the process to stop, then click Start Runtime service.
2. If Connected Components Workbench does not connect to the PLC, check the IP address of the PLC using the RSLinx Classic software installed as part of Connected Components Workbench. RSLinx Classic is located at Start > All Programs > Rockwell Software > RSLinx > RSLinx Classic. Click AB_ETHIP-1, Ethernet, and the IP addresses of connected PLCs will be discovered.

NOTE: A changed PLC IP address (typically seen through Connected Components Workbench) will require an IP address change in the ThingWorx Kepware Server settings.

Step 6: Next Steps

Congratulations! You've successfully completed the Connect to an Allen-Bradley PLC tutorial.
You've learned how to:

Create and upload a simple ladder logic application to a PLC
Connect a PLC to ThingWorx Kepware Server

The next guide in the Using an Allen-Bradley PLC with ThingWorx learning path is Create an Application Key.

Learn More

Capability: Analyze - Monitor an SMT Assembly Line

Additional Resources

For additional information on ThingWorx Kepware Server:

Website: Connecting & Managing Industrial Assets
Documentation: Kepware documentation
Support: Kepware Support site
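Once the tags exist, it can be handy to verify them from outside the Kepware UI. The sketch below is my own addition, not part of the original guide: it reads Coil2 over OPC UA using the python-opcua library. The endpoint port (49320 is Kepware's default OPC UA port) and the namespace index are assumptions you should confirm against your server's project properties.

# Hypothetical verification sketch using python-opcua (pip install opcua).
from opcua import Client

client = Client("opc.tcp://localhost:49320")  # Kepware's default OPC UA endpoint
client.connect()
try:
    # Kepware exposes tags as <Channel>.<Device>.<Tag> string NodeIds.
    coil2 = client.get_node("ns=2;s=Channel2.myPLC.Coil2")
    print("Coil2 =", coil2.get_value())
finally:
    client.disconnect()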
I imagine a lot of people facing this problem might be using Session Parameters, but there is a secret lost Ninja art that lets you do it with Mashup parameters, which is much more contextual and direct. The key is to have Mashup parameters with the same name.

End Result

Starting out on my main mashup, you can see the Tree Data in the Grid below. Clicking on the next node now shows the new mashup and the TO field inside; that TO value was passed in using a mashup parameter. Clicking the next node, you can see it is actually a different mashup, but I am still passing the TO value.

How is it done?

Here is my mashup with the Tree and Contained Mashup; you can see the bindings are already in place, but how did I do it, since the Contained Mashup is empty?

First, create the new mashups with a mashup parameter named the SAME, in this case EntityName. Here is Mashup2, and you can see the Mashup parameter with the same name, EntityName, bound to one of the Value Displays.

Now, how do I bind from my main mashup? Temporarily assign one of the mashups to the Contained Mashup; here I am showing Mashup1 assigned. This now allows you to bind not just the Mashup Name, but also a value to the Mashup Parameter in that mashup. Just drag your selected row values onto the Contained Mashup. Here you can see the parameter showing as a property; I just dropped my value on the Contained Mashup, and I can bind to Name (the name of the mashup to show) and EntityName (the value I want to pass to the mashup parameter).

Now just remove the assigned mashup from the Contained Mashup, and you'll note that the bindings stay intact. That's it!
Sometimes it's necessary to delete an existing PostgreSQL installation, especially if a different major version was installed at first by mistake (for example, 9.6 in place of the supported 9.4). In that case it's absolutely necessary to ensure the database is fully deleted and no database "ghosts" remain. The proper way to uninstall is to go to the PostgreSQL server installation directory, find the uninstall-postgresql file, and double-click it to run the uninstaller.

In case the uninstall wasn't performed correctly, below are the manual steps to clean it up. One sign of an existing "ghost" database is randomly seeing a second PostgreSQL server in pgAdmin III, or experiencing error-less problems when running the ThingWorx installation scripts. For this manual uninstall example, we will use 9.6 as the version to delete; replace it with your own where needed.

1. Remove the PostgreSQL server installation directory (assuming the default location):

rd /s /q "C:\Program Files\PostgreSQL\9.6"

2. Delete the user 'postgres':

net user postgres /delete

3. Remove the registry entries:

HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Installations\postgresql-9.6
HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Services\postgresql-9.6

4. Remove the postgresql-9.6 service:

sc delete postgresql-9.6
The New and Improved DGIS Guide to ThingWorx Development Written by Victoria Firewind of the IoT EDC   The classic Developing Great IoT Solutions guide has been reskinned and revamped for newer versions of ThingWorx! The same information on how to build a quality IoT application is now available for versions of ThingWorx 9.1+, and now, a complete sample application is included to demonstrate these ideas.    Find within the attached archive a PDF with high-level overview information on development and application design geared towards managers and business users, so that everyone can understand the necessary requirements, common terms, and key tips on how to ensure an application is scalable and maintainable right from the very start. Reduce your chances of running into issues between PoC and Go Live by reviewing this information today!   Also find within this PDF a series of tutorials which teach not just how to use the ThingWorx software, but which also educate on how to make good application design choices. A basic rules engine for sending real-time notifications is included here, as well as a complete demo application which illustrates each concept in a real-world use case. This Coffee Machine Demo App relies upon the tutorial entities, which can also now be imported directly using the other XML files provided here. This ensures that anyone can review these concepts, regardless of how much time one can commit or how much knowledge one already has on the subject.   This is a complex guide, and any issues, questions, or bugs found within can be reported right here on this thread. Happy developing from the IoT EDC!
It has been challenging, in the absence of an OOTB Software Content Management (SCM) system within ThingWorx, to track and maintain code changes in different entities, or to roll back in case an entity is wrongly edited or removed. There is some possibility of comparing the differences when importing entities back from the source control repository in ThingworxStorage, but this approach has its own limitations.

Note: This is not an official document, rather a trick to help work around this challenge.

I'll be using the following components in this blog to help set up a workable environment:

Git
VSCode
ThingWorx Scheduler Thing

The first two components, Git and VSCode, are my preference due to their ease of use and my familiarity with them. However, you are free to choose your own versioning platform and IDE to review branches, commits, diffs, etc., or to simply use Git Bash to review the branches and all related code changes.

This blog is divided into the following sections:

1. Installing and setting up code versioning software
2. Installing an IDE
3. Creating a Scheduler Thing in ThingWorx
4. Reviewing the changes

1. Installing and setting up code versioning software

As mentioned, you can use any code versioning platform of your choice; in this blog I'll be using Git. To download and install Git, refer to Git - Installing Git.

Setting up the Git repository

Navigate to \\ThingworxStorage, as this folder contains the \repository folder, which in turn contains the SystemRepository folder.

Note: Of course, more custom repositories could be created if you prefer to separate them from the SystemRepository (available OOTB).

Once Git is installed, let's initialize the repository. If you are initializing the repository in ThingworxStorage and would like only the repository folder to be tracked, be sure to create a .gitignore file and add the following to it:

# folders to ignore
database
esapi
exports
extensions
logs
reports
certificates
# keystore to ignore
keystore.jks

Note: Simply remove a folder/file from the .gitignore file if you'd like that file/folder to be tracked in the Git repository.

The following commands are issued in Git Bash, which can be started from Windows Start > Git Bash. From Git Bash, navigate to the ThingworxStorage/repository folder.

Initialize the repository:

$ git init

Check the status of tracked/untracked files and folders:

$ git status

This may or may not return lists of files/folders that are tracked/untracked. When issuing this command for the first time, it will show that the repository and its content, i.e. the SystemRepository folder, are not tracked (file/folder names will likely be highlighted in red).

Configure a user, then add the required files/folders for tracking:

$ git config --global user.name ""
$ git config --global user.email ""
$ git add .

This adds all the folders/files that are not ignored in the .gitignore file created above.

$ git commit -m ""

This performs the first commit to the master branch, which is the default branch after the initial setup of the Git repository.

$ git branch -a

This lists all available branches within the repository.

$ git branch newfeatureA

This creates a new branch with that name; however, to switch to that branch, the checkout command must be used:

$ git checkout newfeatureA

Note: this is reflected in the command prompt; for example,
it'll switch from MINGW64 /c/ThingworxStorage/repository (master) to MINGW64 /c/ThingworxStorage/repository (newfeatureA).

If there's a warning with files/folders highlighted in red, it may mean that the files/folders are not yet staged (not yet ready to be committed under the new branch and not tracked under the new branch). To stage them:

$ git add .

To commit them for tracking under this branch:

$ git commit -m "Initial commit under branch newfeatureA"

The above command commits with the message defined by the -m parameter.

2. Installing an IDE

Now that Git is installed and configured for use, there are several options: use an IDE like VSCode or Atom (or any other IDE, for that matter), use Git Bash to review branches and commit code via commands, or use the Git GUI. I'll be using VSCode (because, apart from tracking Git repos, I can also debug HTML/CSS and XML with minimal setup time, and these are likely the languages most used when working with ThingWorx entities) and will install certain extensions to simplify accessing and reviewing the branches containing code changes, new entities, commits from different users, etc.

To install VSCode, refer to Setting up Visual Studio Code, which covers all the platforms you might be working on.

Here are the extensions I have installed within VSCode to work with the Git repository:

Required extensions
- Git Extension Pack

Optional extensions
- Markdown All in One
- XML Tools
- HTML CSS Support

Once all the above extensions are installed, navigate in VSCode to File > Open Folder > \\ThingworxStorage\repository. This opens the SystemRepository folder, which also populates the GitLens section, highlighting the list of branches, which one is the active branch (marked with a check icon), and what uncommitted changes remain in which branch; see the following.

To view the history of all the branches, use the extension installed with the Git Extension Pack: press F1, search for Git: View History (git log), and select all branches. This provides an overview such as this.

3. Creating a Scheduler Thing in ThingWorx

Now that we have the playground set up, we can either:

Export ThingWorx entities manually by navigating to ThingWorx Composer > Import/Export > Export > Source Control Entities, or
Invoke the ExportSourceControlledEntities service automatically (based on a Scheduler's CRON job), available under Resources > SourceControlFunctions.

To invoke this service automatically, create a subscription to the Scheduler Thing's Event that periodically executes the ExportSourceControlledEntities service. I'm using this service with the following input parameters:

var params = {
    path: "/" /* STRING */,
    endDate: undefined /* DATETIME */,
    includeDependents: undefined /* BOOLEAN */,
    collection: undefined /* STRING */,
    repositoryName: "SystemRepository" /* THINGNAME */,
    projectName: undefined /* PROJECTNAME */,
    startDate: undefined /* DATETIME */,
    tags: undefined /* TAGS */
};

// no return
Resources["SourceControlFunctions"].ExportSourceControlledEntities(params);

The service only takes path and repositoryName as input here. Of course, this can be adjusted based on the type of entities, date range, collection type, etc. that you want to track in your code versioning system.

4. Reviewing the changes

With the help of the Git toolkit installed in VSCode (covered in section 2,
Installing an IDE), we can now track changed or newly created entities immediately (as soon as they are exported to the repository) in the Source Control section (also accessible via the key combination Ctrl + Shift + G).

Based on the scheduled job, the entities are exported periodically to the specified repository, and the changes then show up in the branch under which that repository is being tracked. Since I'm using VSCode with the Git extension, I can track these changes under the GitLens tab.

Additionally, for quick access to all the new entities created or existing ones modified, check the Source Control section. Changes marked with "U" are new entities added to the repository that are still untracked, and changes marked with "M" are entities modified relative to their last committed state in the specific branch.

To quickly view the differences/modifications in an entity, simply click the modified file on the left; the differences are then highlighted on the right side, like so.
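For reference, the same review can be done straight from Git Bash without the IDE; these are standard Git commands (the branch and folder names are the examples used above):

$ git log --oneline --graph --all

Shows the commit history across all branches, including the exports committed from the Scheduler-driven service.

$ git diff master newfeatureA -- SystemRepository

Shows entity-level differences between the master branch and the newfeatureA branch, limited to the SystemRepository folder.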
JMeter for ThingWorx

Overview

Apache JMeter is an open-source tool designed for load testing and measuring the performance of a web application. JMeter has a wide range of features to facilitate this testing, including support for a variety of server and protocol types, a full-featured testing IDE with the ability to record test steps from both a browser and a native application, and built-in debugging tools. Information about JMeter can be found on Apache's website.

Working with JMeter is not always intuitive, but it also isn't much harder than regular software development. Take some time to explore the official Apache JMeter documentation and figure out where things go and how to mechanically make use of the JMeter IDE. Then step through this tutorial to create a basic test that logs in to ThingWorx, accesses a mashup, and clicks on a few widgets. This is the first in a series to come, courtesy of IoT EDC Engineer Tim Atwood ( @atwood ) and the whole EDC team.

Installation

1. Download JMeter from Apache's website.
2. Unpack the archive and copy the files to a desired location.
3. Run the application by double-clicking the "ApacheJMeter.jar" file within the bin directory.

JMeter is now installed and ready to use.

Creating a Test

1. Set up a proxy in your browser of choice (or in the OS settings).
2. Select the green "templates" icon in JMeter, and then select "Recording" for the template.
3. Configure the recording template to point to your ThingWorx Navigate or Foundation server, then click "Create".
4. Hit "Start" under the "HTTP(S) Test Script Recorder" tab of the new JMeter project. Make sure the port is set correctly under Global Settings.
5. A pop-up box will appear that always stays visible on top of the active browser window, so that the recording can be controlled and stopped at any time. Leave the "Transaction name" field empty so that each transaction recorded by the software is automatically named after the web request (this helps differentiate one from another, and each can be renamed later).
6. Open your browser and navigate (via direct URL if possible, to keep things simple) to the mashup you wish to test. Log in and let the page load.
7. Click on anything you'd like on the mashup to capture the activity of that test. Then click "Stop" on the pop-up recorder window to stop the recording.

Each transaction is assigned an index as well, and the source code behind each transaction can be reviewed and manually modified in the main JMeter window. Here is the login request, for instance:

The HTTP Authorization Manager is used to automatically authorize a defined user login for the thread to any of the Base URLs listed. In this case, though, two separate servers are being accessed during the test, and one may need to be added manually:

Save the project before continuing, as manual modifications come next.

Within the recorded requests, a set of parameters or body data will have been captured. Modifying this is how you parametrize the test scenario, for variables like the username and password. To simulate logging in as other users, you have to parameterize this rather than rely on the administrator account name and password entered into the browser.
Rename the transaction controller to "MyTasks" or something more easily identified than the long string it has now.

Some recorded items, like static images and stylesheets, are non-essential: things the browser processes for better graphical representation, but which are often cached and do not greatly affect the scalability results of the test. These can be highlighted and disabled all at once. Also ensure that any cascading stylesheets have been disabled.

Enable the "View Results Tree" so you can review the results of the test script during the editing phase. However, this Listener element has a high memory footprint during test execution, so it should be disabled before running an actual scale test.

Next, we need to parametrize the user login information and pull it from a CSV file. The colon means that "Administrator" is the default user for login.

You can add other properties as well, like ramp-up time, run time, number of users, and protocols to use. The ramp-up time determines how quickly the threads are allocated for the test, which, if done slowly enough, prevents a thundering-herd scenario. In more complex scenarios, logic controllers can be inserted to control the flow of the test. This allows for options such as if-then conditions for different user permissions, or parameter-based routes for better randomization of actions across threads. This will be covered in more detail in a future article.

Pre- and post-processors can be used as well, the latter much more than the former here, to extract information from a response so it can feed the variables going into a follow-up request. For example, see the script in this image: it has a variable extracted from the object number property defined in the CSV file, which is converted into another variable used in subsequent scripts. The script uses the object number reference to pull the name out of the body data and make the request, which is then post-processed by a set of extractors. One is a JSON extractor trying to get an ID out of the JSON response. There is also a regular expression extractor and a BeanShell post-processor, which populates some variables based on the response. Once all the variables are extracted from the response to a particular request (GetSearchResults in this case), the additional requests are tailored based on them.

Customize the script according to the needs of your own application. Alternate between recording and manually modifying the recorded code to ensure the test performs exactly as required, from the perspective of different users with different permissions. Also vary the type of activity performed on the mashup. Highlight the "View Results Tree" tab and click the green start button at the top of the window to see the results appear.

If you are getting an unauthorized message, ensure that the scope is right for the login information, which may require moving the "HTTP Authentication Manager" component around in the project. Be sure to check the URLs and credentials entered for each type of user. Occasionally the recorder inserts a long authentication string into the URL; you want to manually set the URL for the credentials to the most generic URL possible for the server.
This can be parametrized too, referencing the CSV file defined here, which looks like this for a more complicated scenario (covered in the future). The columns represent the username, password, object number in Windchill, and object name in Windchill, as well as the wait time used to vary the way the logic is executed, plus some extra variables that tell the switches what to do in order to create a more varied and realistic test. A hypothetical sample of such a file follows at the end of this article.

Conclusion

Following these steps again and again on the various mashups throughout an application ensures that a script for each web page, and for each type of user on each web page, is created and added to the testing suite. This results in a load test that is representative of the real-world user load placed on an application. Load testing is a critical part of the development lifecycle of any application, and ThingWorx is no exception. Any further questions about the capabilities of JMeter not covered here are answered in the full JMeter user manual, found on the Apache website.

Future articles will include some basic scripts that test basic things, which can serve as examples for more complex ThingWorx JMeter script development. Here is an example of one tool PTC uses for internal QA of ThingWorx, designed to load test a Navigate application (specifically its built-in mashups).

Something similar to this tool may be available for public use later this summer. In the meantime, feel free to use the tutorial above to create scripts of your own. Any issues building your custom load tests in JMeter can be discussed right here on this thread with our JMeter experts. Happy developing!
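As referenced above, here is a hypothetical sample of such a CSV data file. All values are invented for illustration, the column names are not from the original screenshots, and whether you include a header row depends on how your CSV Data Set Config element is set up:

username,password,objectNumber,objectName,waitTime
user01,Secret01,0000000123,DemoPartA,1000
user02,Secret02,0000000456,DemoPartB,2500
user03,Secret03,0000000789,DemoPartC,1500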
Put together a quick example mashup in support of a couple of analytics projects to demonstrate use of the new TWX Analytics 8.1 APIs within the TWX mashup builder. The intention here is for use in PoCs, to provide a quick way of demonstrating customer-facing analytics outputs along with the more detailed view available in Analytics Builder.

Required prerequisites are:

ThingWorx 8.1 + Analytics Extensions
ThingWorx Analytics Server 8.1
Carousel TWX UI widget (attached) imported into TWX
Data set(s) loaded with signals/profiles generated

The demo can be installed by importing the attached entities file into TWX Composer, then launching the mashup 'EMEA.Analytics.CustomerInsightMashUp'.

A quick run-through of the functionality: On launching the mashup, data sets and models are displayed for selection on the left-hand side. On selecting a data set and model, signals are presented in two tabs, the first an overview of all signals. The list on the left can be expanded by changing the value for 'Top <n> Contributing Features'. On selecting a signal from the list, the 'Selected Signal Details' tab displays additional charting for value ranges, average goal, etc. The number of 'bins' to display can be edited. Similarly, profiles can be viewed from the 'Profiles' tab; each profile can be selected by dragging the upper carousel.

This is all done using the Analytics 8.1 "Things" in TWX, along with an additional custom Thing with some scripted services (EMEA.Analytics.Helper). Thanks to Arian Van Huelsen and Tanveer Saifee at PTC for their support; all comments/feedback welcome.
ThingWorx provides multiple ways to deliver your data to the server. You can choose from the C-based EMS, your own C application built with the C SDK, or SDKs for many popular languages. But what can you do if the device you want to collect data on is so small that it needs a very lightweight data delivery method? Normally you would consider using the REST web service interface and writing your own custom client to post your data, but there is an alternative: MQTT. MQTT is a lightweight protocol that can be used from an Arduino with an Ethernet Shield to stream real-time data directly to ThingWorx, by installing the MQTT Marketplace Extension on your server. To learn more about how this kind of solution works, I created this slide deck while building a hardware example: DeliveringArduinoDataToThingworx.pdf. Hopefully it can help others who want to create this kind of solution as well.
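The slide deck uses an Arduino client, but the protocol flow is easy to see from any language. Here is a minimal publisher sketch using the Eclipse Paho library for Python (paho-mqtt 1.x); the broker host and topic are placeholders, and how topics map to Thing properties depends on how the MQTT extension is configured on your server:

# Hypothetical sketch: broker host and topic are placeholders.
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("your-mqtt-broker", 1883, 60)  # default unencrypted MQTT port
client.loop_start()
client.publish("sensors/device1/temperature", payload="21.5", qos=0)
client.loop_stop()
client.disconnect()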
This video shows the steps to install ThingWorx Analytics release 8.3  
Note: The following tutorial is based on ThingWorx/CWC 9.5. Steps and names may differ in other versions.

Context

As a human reaction, the tracked time displayed may be misperceived by the Operator, and this can lead to rejection of the solution. CWC doesn't have (yet?) the capability to configure visibility so as to hide the timer. The purpose of this tutorial is a quick, straight-to-the-point customization that hides the timer in the execution screen. All other features, services, and interfaces are left untouched.

As a big picture, here are the 6 modifications you will need to make:

Modify the 4 mashups
Modify 2 values in 2 tables of the MSSQL database

Status

The mashup containing the timer is PTC.FSU.CWC.Execution.Overview_MU. It is easy to duplicate it and hide the timer widget (switch the Visible property to false). But then, how do you set the duplicate in the standard interface? To do that, you need to duplicate the mashups linked to the Execution.Overview_MU mashup.

PTC.FSU.CWC.Execution.Overview_MU is directly referenced by the following entities:

PTC.FSU.CWC.Authoring.Preview_MU
PTC.FSU.CWC.Execution.WorkInstructionStart_MU
PTC.FSU.CWC.GlobalUI.ApplicationSpecificHeader_HD
PTC.FSU.CWC.WorkDefinitionExecution.StationSelectionContainer_EP

Customization

Duplicate all those mashups except Authoring.Preview_MU, since we will focus only on the execution side of CWC, not the authoring side. Hereafter each duplicate is called the same as the original + _DUPLICATE. Perform the following modifications.

Open PTC.FSU.CWC.GlobalUI.ApplicationSpecificHeader_HD_DUPLICATE, then in Functions:
- open the expression named NavigateToStationSelection. Change the name of the mashup to the relevant one, e.g. PTC.FSU.CWC.WorkDefinitionExecution.StationSelectionContainer_EP_DUPLICATE
- open the validator named ShowRaiseHand. Change the names of the 2 mashups to the relevant ones, e.g. PTC.FSU.CWC.Execution.WorkInstructionStart_MU_DUPLICATE and PTC.FSU.CWC.Execution.Overview_MU_DUPLICATE
- open the validator named ShowStationSelection. Change the names of the 2 mashups to the relevant ones, e.g. PTC.FSU.CWC.Execution.WorkInstructionStart_MU_DUPLICATE

Open PTC.FSU.CWC.Execution.Overview_MU_DUPLICATE, then in Functions:
- open the expression named SetMashupToWorkInstructionStart. Change the name of the mashup to the relevant one, e.g. PTC.FSU.CWC.Execution.WorkInstructionStart_MU_DUPLICATE
- open the expression named SetMashupToStationSelection. Change the name of the mashup to the relevant one, e.g. PTC.FSU.CWC.WorkDefinitionExecution.StationSelectionContainer_EP_DUPLICATE

Open PTC.FSU.CWC.Execution.WorkInstructionStart_MU_DUPLICATE, then in Functions:
- open the expression named NavigateToStart. Change the name of the mashup to the relevant one, e.g. PTC.FSU.CWC.Execution.Overview_MU_DUPLICATE
- open the expression named NavigateToStationSelection. Change the name of the mashup to the relevant one, e.g. PTC.FSU.CWC.WorkDefinitionExecution.StationSelectionContainer_EP_DUPLICATE

Open PTC.FSU.CWC.WorkDefinitionExecution.StationSelectionContainer_EP_DUPLICATE, then in Functions:
- open the expression named NavigateToStart. Change the name of the mashup to the relevant one, e.g. PTC.FSU.CWC.Execution.Overview_MU_DUPLICATE
- open the expression named NavigateToWorkInstructionStart. Change the name of the mashup to the relevant one, e.g. PTC.FSU.CWC.Execution.WorkInstructionStart_MU_DUPLICATE

Now, let's change the database values. In MSSQL, navigate to the thingworxapps database and edit the dbo.menu table.
Find the line for AssemblyExecution (by default, line 22) and look at the value in the targetmashuplink column. Switch the original value PTC.FSU.CWC.WorkDefinitionExecution.StationSelectionContainer_EP to the name of the duplicate of this mashup.

Lastly, edit the dbo.menucontext table. Find the line related to CWC (application UID = 5) and look at the value in the targetmashuplink column. Switch the original value PTC.FSU.CWC.GlobalUI.ApplicationSpecificHeader_HD to the name of the duplicate of this mashup. (A scripted version of these two edits is sketched at the end of this article.)

Result

After these modifications, you can start and check an operation. You should see the following result:
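As promised, here is a scripted version of the two database edits. This is a sketch under stated assumptions (Python with pyodbc, ODBC Driver 17 for SQL Server, and the _DUPLICATE names used above; server and credentials are placeholders); editing the rows directly in SQL Server Management Studio works just as well:

# Hypothetical sketch: server, credentials, and mashup names must match your environment.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=your-sql-host;"
    "DATABASE=thingworxapps;UID=your_user;PWD=your_password"
)
cur = conn.cursor()

# dbo.menu: point the AssemblyExecution entry at the duplicated station-selection mashup.
cur.execute(
    "UPDATE dbo.menu SET targetmashuplink = ? WHERE targetmashuplink = ?",
    "PTC.FSU.CWC.WorkDefinitionExecution.StationSelectionContainer_EP_DUPLICATE",
    "PTC.FSU.CWC.WorkDefinitionExecution.StationSelectionContainer_EP",
)

# dbo.menucontext: point the CWC entry (application UID = 5) at the duplicated header.
cur.execute(
    "UPDATE dbo.menucontext SET targetmashuplink = ? WHERE targetmashuplink = ?",
    "PTC.FSU.CWC.GlobalUI.ApplicationSpecificHeader_HD_DUPLICATE",
    "PTC.FSU.CWC.GlobalUI.ApplicationSpecificHeader_HD",
)

conn.commit()
conn.close()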
Overview

A global leader in chemical processing and industrial manufacturing, with a strong international footprint and multiple production sites worldwide, set out to transform its production ecosystem by adopting Industrial IoT (IIoT). The objective was to unify fragmented factory data, enable real-time analytics, and drive operational efficiency through AI-powered insights. Based on detailed use case documentation and architectural workshop findings, this reference architecture outlines a robust, scalable solution designed to integrate factory systems, deliver AI-supported insights in real time, and empower teams through self-service applications.

The solution leverages PTC's ThingWorx suite, along with Microsoft Azure services and complementary technologies, to address key challenges in production, quality, and efficiency across engineering, manufacturing, and operations. (This post is part of the Beyond the Pilot series.)

Use Case

A. Engineering - Process Optimization & Quality Control

Problem: Resolving data integration and visibility challenges

The customer's engineering teams struggled with fragmented data across various factory systems, limiting their ability to analyze process performance and optimize production parameters. Without a unified data platform, engineers could not effectively compare historical and real-time machine centerlining values, making it difficult to maintain consistent production quality.

Solution: Unified data integration and advanced process analytics

The reference architecture establishes a central, cloud-based data platform that aggregates and correlates machine data from various sources in real time. By integrating OPC Aggregators and Kepware with Azure IoT Hub, factory data is ingested, processed, and made accessible via ThingWorx applications. Engineers can now visualize mechanical and digital process values, set dynamic thresholds, and receive alerts when deviations occur, ensuring precise process control and quality optimization.

Role of PTC products:

PTC Kepware: Standardizes and integrates machine data from disparate factory systems, ensuring a seamless flow of real-time process variables.
ThingWorx Platform: Provides a robust dashboard for analyzing centerlining data, visualizing production trends, and enabling data-driven decision-making.
ThingWorx Digital Performance Management (DPM): Automates the identification of process inefficiencies, allowing engineers to fine-tune machine settings dynamically.

B. Manufacturing - Scrap Reduction & Production Efficiency

Problem: Enhancing scalability and reducing operational inefficiencies

The customer faced challenges in scaling its IIoT solution as new sensors and data sources were introduced. Traditional systems struggled with the increased volume of factory data, leading to slow system response times and ineffective real-time analytics. Additionally, manual process adjustments resulted in inconsistencies, contributing to increased scrap rates and wasted materials.

Solution: Cloud-scalable infrastructure with real-time process optimization

To address these issues, the architecture leverages Azure IoT Hub, Azure Data Explorer (ADX), and InfluxDB to handle massive data streams and provide low-latency analytics. This ensures that production trends, environmental conditions, and machine parameters are continuously monitored and optimized in real time. Advanced machine learning models predict process inefficiencies, enabling operators to make automatic adjustments to reduce scrap and optimize yield.
Role of PTC Products:

ThingWorx Platform: Acts as the central command hub, enabling real-time decision-making based on factory data trends.
ThingWorx Digital Performance Management (DPM): Uses historical data to provide AI-supported recommendations for reducing material waste and improving overall equipment effectiveness (OEE).
PTC Kepware: Ensures reliable, high-speed data acquisition from sensors, production lines, and environmental monitoring systems, feeding critical information into ThingWorx for optimization.

C. Driving Digital Transformation & Quality Optimization

Problem: Lack of Digital Process Automation & AI-Powered Decision Making

The customer's previous factory systems relied on manual reporting and fixed thresholds for process control, limiting the ability to detect and respond to process inefficiencies in real time. Operators needed a system that could provide intelligent, self-service applications with AI-driven recommendations for optimal production performance.

Solution: AI-Driven Automation & Dynamic Quality Control

The IIoT architecture integrates AI-powered predictive analytics to analyze deviations in real time and suggest automatic machine adjustments. Real-time applications, customizable process recipes, and dynamic alerting systems empower production teams with actionable insights. By embedding self-service applications in ThingWorx, engineers and operators can fine-tune process settings and receive automated recommendations for improving quality and efficiency.

Role of PTC Products:

ThingWorx Platform: Serves as the central analytics hub, delivering AI-powered insights for continuous process improvement.
ThingWorx DPM: Uses machine learning to correlate scrap rates with process variables, recommending changes that minimize waste and enhance quality.
PTC Kepware: Captures real-time process data, ensuring that AI models receive accurate inputs for predictive analysis.

The customer's digital transformation journey is now backed by a robust, PTC-powered IIoT ecosystem that delivers continuous improvement, higher production efficiency, and proactive maintenance capabilities, ultimately driving the future of smart manufacturing.

Technical Architecture and Implementation Details

This section combines detailed technical descriptions with the overall reference architecture. It describes the core components, integration points, and implementation strategies that deliver a robust IIoT solution for the customer.

A. Architecture Overview Diagram

(Diagram: high-level architecture for the final solution.)

B. Detailed Technical Components

OPC Aggregators & Kepware
Role: Stream and bridge machine data from production, DEV, and QA environments to Azure IoT Hub for real-time processing in ThingWorx.
Key Features: Scalable ingestion; latency monitoring; secure device connectivity; segregated closed environments for DEV/QA.

Azure IoT Hub
Role: Ingests and secures machine telemetry data for analytics.
Key Features: Centralized data ingestion; integration with Azure services.

ThingWorx on VMs
Role: Hosts the core IIoT application that processes data, provides end-user applications, and manages workflows.
Key Features: High performance; disaster recovery via VM snapshots; enhanced security through Azure AD integration and SSL support.

Managed PostgreSQL
Role: Provides high availability for persistent application data through replication and failover.
Key Features: Data redundancy; managed service benefits; automated backup and recovery.
Azure Data Explorer / InfluxDB
Role: Handles advanced analytics, timeseries visualization, and predictive insights for telemetry data.
Key Features: Real-time analytics; anomaly detection; cost-effective long-term storage.

Monitoring & Logging Tools
Role: Ensure comprehensive observability and prompt incident response across all components.
Key Features: Real-time application monitoring; alerting; centralized log aggregation.

RESTful APIs
Role: Enable seamless integration with ERP systems, legacy data sources, and other IoT devices.
Key Features: Secure data exchange; standardized connectivity protocols.

C. User Personas

The success of this solution relies on a well-defined team of technical experts responsible for deployment and ongoing management:

Plant Manager: Oversees overall factory performance, uses data insights for strategic decision-making, and drives process improvements and efficiency.
Digital Transformation Lead: Analyzes and prioritizes valuable use cases for the business; implements IIoT solutions across factory operations, scales AI-driven automation and data analytics, and ensures long-term digital innovation and adoption.
Operations Manager: Oversees production lines, ensures efficiency, optimizes machine settings based on real-time insights, and troubleshoots and resolves process issues quickly.
Quality Assurance Engineer: Monitors production quality in real time, ensures compliance with quality standards, and reduces scrap and rework by addressing deviations early.
Maintenance Engineer: Monitors equipment health, responds to alerts, performs predictive maintenance to prevent failures, and minimizes downtime through proactive repairs.
Software Engineer: Develops and maintains IIoT backend and frontend systems, ensures seamless data integration and API connectivity, and optimizes system performance and scalability.
Cloud Architect: Designs and manages the IIoT cloud infrastructure, ensures scalable and secure cloud deployments, and optimizes data storage and processing in the cloud.
Security Analyst: Implements and monitors security measures for IIoT systems, conducts risk assessments and threat analysis, and ensures compliance with cybersecurity standards.
DevOps Engineer: Manages CI/CD pipelines for IIoT applications, automates deployments and infrastructure management, and optimizes system performance and reliability.

NOTE: Although all of these personas were required, their needs were covered by a team of only 4–5 developers, each effectively playing multiple roles.

Outcome

Optimized Production Efficiency
By unifying machine telemetry, process parameters, and historical trends, the customer empowers engineers with real-time insights. AI-driven recommendations and automated adjustments replace trial-and-error, enabling precise, dynamic optimizations. Bottlenecks and inefficiencies are identified instantly, allowing rapid corrective actions for peak performance.

Reduced Waste & Enhanced Quality
Real-time process optimization and automated quality control significantly reduce material waste and variability. The system detects deviations at the source, enabling instant adjustments and ensuring consistent product quality, minimizing scrap, rework, and compliance risks.

Seamless Data Visibility & Collaboration
A centralized dashboard provides real-time access to critical metrics, eliminating fragmented reports and delays. Engineers and operators can compare production data across sites, standardize best practices, and drive continuous improvements across the network.
Future-Ready Innovation
Beyond the immediate gains, this IIoT transformation lays the foundation for scalable sensor integration, AI-driven automation, and advanced predictive analytics. It is not just a solution for today's challenges: it establishes a long-term, adaptive framework that will continue to evolve, enabling the customer to remain at the forefront of smart manufacturing and industrial digitalization.

Additional Information

This section provides further insights into the project implementation and future strategic direction.

Time to First Go-Live: Estimated duration from project initiation to initial production deployment. (Approximately 16 weeks.)
Partner Involvement: Key strategic and technical partners collaborating on the deployment. (Microsoft, Ansys, and Deloitte supported the digital transformation initiative centered on ThingWorx.)
Customer Roadmap: Future enhancements planned by the customer, such as AI-based predictive analytics and further automation. (An expansion to incorporate AI and advanced machine learning–driven insights is planned.)

Vineet Khokhar
Principal Product Manager, IoT Security

Disclaimer: These reference architectures are based on real-world implementations; however, specific customer details and proprietary information are omitted or generalized to maintain confidentiality.

Stay tuned for more updates, and as always, in case of issues, feel free to reach out to <support.ptc.com>
Original Post Date:     June 6, 2016   Description: This is a video tutorial on creating a DataTable with a DataShape, and adding and retrieving an entry.  
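For readers who prefer text, here is a minimal sketch of the steps the video covers, assuming a hypothetical DataShape MyDataShape (primary key field id plus a description field) and a DataTable Thing MyDataTable built on it:

// Build a values infotable matching the DataTable's DataShape
var values = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape({
    infoTableName: "values",
    dataShapeName: "MyDataShape"
});
values.AddRow({ id: "asset-001", description: "First entry" });

// Add (or update) the entry in the DataTable
Things["MyDataTable"].AddOrUpdateDataTableEntry({
    values: values,
    sourceType: "Service",
    source: "exampleService"
});

// Retrieve the entry back by its primary key; 'entry' is an infotable holding the matching row
var entry = Things["MyDataTable"].GetDataTableEntryByKey({ key: "asset-001" });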
Modbus is a commonly used communications protocol that allows data transfer between computers and PLCs. This is intended to be a simple guide to setting up and using a Modbus PLC simulator with ThingWorx. ThingWorx provides Modbus packages for Windows, Linux and Linux ARM. The Modbus package contains libraries and Lua files intended to be used along with the Edge Microserver. Note: The Modbus package is not intended as an out-of-the-box solution.

Requirements:
ThingWorx Platform
Edge Microserver
Modbus Package
Modbus PLC Simulator
In this guide, a free Modbus PLC simulator is used. Here is the direct download link for their v8.20 binary release.

Configuring the EMS:
The first step is to configure the EMS as a gateway. This is done by adding an auto_bind section to config.json:

"auto_bind": [ {
    "name": "ModbusGateway",
    "gateway": true
}]

This creates an ephemeral Thing that only exists while the EMS is running. The next step is to modify config.lua to include the Modbus configuration. Copy the contents of the etc folder of the Modbus package into the etc folder of the EMS. A sample config_modbus.lua is provided in the Modbus package as a reference. The following code defines a Thing called MyPLC (which is a Remote Thing created on the platform):

scripts.MyPLC = {
    file = "thing.lua",
    template = "modbusExample",
    identifier = "plc",
    updateRate = 2000
}
scripts.Thingworx = {
    file = "thingworx.lua"
}
scripts.modbus_handler = {
    file = "modbus_handler.lua",
    name = "modbus_handler",
    host = "localhost"
}

Setting template = "modbusExample" above makes the EMS use the template file of the same name located at /etc/custom/templates/. 'modbusExample' is a reference point for creating a script that adds the registers of the PLC; the given template has examples for the different basetypes. The different types of available registers are listed and referenced in the modbus.lua file available under /etc/thingworx/lua/.

Setting up the PLC Simulator:
Extract mod_RSsim to a folder and run the executable. Since we are 'simulating' a PLC connection, set the protocol to Modbus TCP/IP. Change the I/O to Holding Registers (or any other relevant option), with the Address set to Dec. In the Simulation menu, select 'No animation' if you want to enter values manually, or 'Increment BYTES' to generate values automatically. This PLC simulator runs on port 502.

The Connection:
With the EMS and luaScriptResource running, the PLC simulator should show a connection to the platform, with activity in the received/sent section. Now if you open the Remote Thing 'MyPLC' in the platform, the isConnected property (under the Properties section) should be true. (If not, go back to General Information, click Browse in the Identifier section and select 'plc'.) Go back to the Properties section and click Manage Bindings. Click the Remote tab and the list of defined properties should appear. For example, the following line from modbusExample.lua:

properties.Int16HoldRegExample = {key="holding_register/1/40001?format=Int16", handler="modbus_handler", basetype="NUMBER"}

defines a property named Int16HoldRegExample at register 40001. Once this property is added and the Thing saved, the value at address 40001 in the PLC simulator should correspond with the value on the platform. If you are running into any errors when connecting with a Raspberry Pi, please take a look at Duan Gauche's follow-up guide: Using your Raspberry Pi with the Edge Microserver and Modbus
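Once the binding is in place, the register value can be used like any other Thing property. As a minimal platform-side sketch (for example inside a Service or Subscription), assuming the MyPLC Thing and the Int16HoldRegExample property defined above:

// Read the bound Modbus register value from the MyPLC Remote Thing and log it
var value = Things["MyPLC"].Int16HoldRegExample;
logger.info("Register 40001 currently reads: " + value);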
Predicting time to failure (TTF) or remaining useful life (RUL) is a common need in the IIoT world. We are looking here at some ways to implement it. We are going to use one of the NASA datasets publicly available that simulates turbofan engine degradation (https://c3.nasa.gov/dashlink/resources/139/). The original dataset has 26 features, as below:

Column 1 – asset id
Column 2 – cycle/time of sensor data collection
Columns 3–5 – operational settings
Columns 6–26 – sensor measurements

In the training dataset the sensor measurements end when the failure occurs.

Data Collection

Since the prediction model is based on historic data, data collection is a critical point. In some cases the data will already have been collected in the past and you need to make the best of it; see the Data Preparation chapter below. In situations where you are collecting data, a few points are good to keep in mind (some may or may not apply depending on the type of data to be collected):

More frequent (higher-frequency) collection is usually better, especially for electronic measurements.
Where one or more specific sensor values are known to impact the TTF, it is good to take measurements at different values of this sensor until the failure, without artificially modifying the values. For example, for a light bulb with a normal working voltage of 1.5V, it is good to take measurements at, say, 1V, 1.5V, 2V, 3V and 4V, but each time run until the failure. Do not start at 1.5V and switch to 4V after 1h; this would compromise what the model can learn.
More variation is better, as it helps the prediction model to generalize. In the same voltage example, it is best to collect data for 1V, 1.5V, 2V, 3V and 4V rather than just 1.5V, which would be the normal running condition. This also depends on the use case: for example, if we know for sure that the voltage will always be between 1.45V and 1.55V, then we could focus data collection on this range only.
Once the failure is reached, stop collecting data. We are not interested in what happens after the failure, and collecting data after it will also lower the prediction model's accuracy.
Each failure run should be a separate cycle in the dataset. In other words, from a metadata standpoint, each failure run should be represented by a different ENTITY_ID.

TTF business need

Before going into data preparation and model creation we need to understand what information is important in terms of TTF prediction for our business need. There are several ways to conceive the TTF, for example:

Exact time value when failure might occur. This is probably going to be the most challenging to predict. However, one should consider whether it is really necessary: do we need to know that a failure might occur in 12 minutes as opposed to 14 minutes? Very often, knowing that the time to failure is less than X minutes is what matters, not the exact time, so the following options are often more appropriate.
Threshold. For some applications, knowing that a critical threshold has been reached is all that is needed. In this case a Boolean goal, for example lessThan30min or healthy with yes/no values, can be used. This is usually much easier to predict than the exact value above.
Range. For other applications we may need a bit more insight and try to predict ranges, for example lessThan30min, 30to60min, 60to90min and moreThan90min. In this case we define an ordinal goal. The caveat is that ThingWorx Analytics Builder does not currently support ordinal goals, though ThingWorx Analytics Server does; it only means that model creation needs to be done through the API. This is the option we will take with the NASA dataset. (The original post includes a picture illustrating the three types of TTF listed above.)

Data Preparation

General feature engineering
Data preparation is always a very important step for any machine learning work. It is important to present the data in the way best suited for the algorithms to give the best results. There are a lot of practices that can be used, but they are beyond the scope of this post. The Feature Engineering post gives some starting points, and there are also many resources available on the Internet, though the help of a data scientist may be necessary. As an example, in the original NASA dataset a few features have a constant value; they are unlikely to impact the prediction and will be removed. This frees computational resources and prevents confusion in the model.

Sensor data resampling
The data sampling across the different sensors should be uniform. In a real-case scenario we may have sensor data collected at different time intervals, so data transformation/extrapolation should be done so that all sensor values are at the same frequency in the uploaded dataset.

TTF feature
Since we want to predict the time to failure, we need a column in the dataset that represents this value for the data we have. In a real-case scenario we obviously cannot measure the time to failure, but we usually have sensor data up to the point of failure, which we can use to derive the TTF values. This is what happens in the NASA dataset: the last cycle corresponds to the time when the failure occurred. We can therefore derive a new TTF feature in this dataset, starting at 0 for the last cycle when the failure occurred and incremented by 1 back to the very first measurement.

Once this TTF column is defined, we may need to transform it further depending on the path we choose for TTF prediction, as described in the TTF business need chapter. In the case of the NASA dataset we choose a range TTF with values of more100, 50to100, 10to50 and less10 to represent the number of remaining cycles until the predicted failure. This is the information we need to predict in order to plan a suitable maintenance action. The resulting TTF column can be seen in the attached train_FD001-TTF-transformed.csv.

Once the data in csv is ready, we need to create the json file representing the metadata. In the case of a range TTF this is defined as an ordinal goal, as below (see the attachments for the full metadata json file):

{
    "fieldName": "TTF",
    "values": ["less10", "10to50", "50to100", "more100"],
    "range": null,
    "dataType": "STRING",
    "opType": "ORDINAL",
    "timeSamplingInterval": null,
    "isStatic": false
}

Model creation

Once the data is ready it can be uploaded into ThingWorx Analytics and work on the prediction model can start. ThingWorx Analytics is designed to make machine learning easy and accessible to non-data scientists, so these steps will be easier than with other solutions.
However, some trial and error is needed to refine the model, which may also involve reworking the dataset. Important considerations:

When dealing with time-to-failure prediction, it is usually necessary to unset Use Goal History in the Advanced parameters of the model creation wizard. If using the API, the equivalent is to set the virtualSensor parameter to true.
Tests with the Redundancy Filter enabled should be done, as this has been shown to give better results.
In a first attempt it is a good idea to keep Lookback Size at 0. This tells ThingWorx Analytics to find the best lookback size among 2, 4, 8 and 16. If you need a different value or know that a different value is better suited, you can change it accordingly. However, bear in mind the following: a larger lookback size leaves less data available for training, since more data is needed to predict one goal, and a larger lookback leads to a significant memory increase – see https://www.ptc.com/en/support/article?n=CS294545

In the case of the NASA dataset, since we are using an ordinal goal, we need to execute the training through the API. For a more productive workflow this can be done through a mashup and services (see How to work with ordinal and categorical data in ThingWorx Analytics? for an example). As a quick test, the TrainingThing.CreateJob service can be called directly from Composer.

Once the model is created we can check some performance statistics in ThingWorx Analytics Builder or, in the case of an ordinal goal, via the ValidationThing.RetrieveResults service. The most relevant metric for an ordinal goal is the confusion matrix. (The original post includes the resulting confusion matrix.)

Another validation is to compute some PVA (Predicted vs. Actual) results for some validation data. ThingWorx Analytics runs validation automatically when using ThingWorx Analytics Builder and presents some useful performance metrics and graphs. In the case of an ordinal goal we still get this automatic validation run (hence the confusion matrix above), but no PVA graph or data is available. This can be done manually if some data is kept aside and not passed to the training microservice. Once the model is completed, we can then score this validation dataset (using PredictionThing.RealTimeScore or BatchScore for an ordinal goal, or the Builder UI for other goals) and compare the predicted results with the actual values.

Depending on the business case this model can be deemed acceptable or may need rework, such as changing the range values, changing learner parameters, or modifying the dataset. There is certainly a fair amount of experimentation before arriving at the optimal model, but hopefully this post gives some good starting points.

Resources:

Original dataset attached as train_FD001-original.csv
Transformed dataset attached as train_FD001-TTF-transformed.csv
json metadata file for the transformed dataset attached as train_FD001-ttford.json
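As a companion to the TTF feature chapter above, here is a minimal sketch of the derivation and binning logic in plain JavaScript; the rows array and its id and cycle fields are hypothetical stand-ins for however the dataset is held before export to csv:

// Bin a raw TTF (remaining cycles) into the ordinal ranges used as the goal
function binTTF(ttf) {
    if (ttf < 10) return "less10";
    if (ttf < 50) return "10to50";
    if (ttf < 100) return "50to100";
    return "more100";
}

// First pass: find the failure (last) cycle per asset
var lastCycle = {};
rows.forEach(function (row) {
    if (!(row.id in lastCycle) || row.cycle > lastCycle[row.id]) {
        lastCycle[row.id] = row.cycle;
    }
});

// Second pass: TTF = lastCycle - cycle, then attach the binned value to every row
rows.forEach(function (row) {
    row.TTF = binTTF(lastCycle[row.id] - row.cycle);
});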
Get Started with ThingWorx for IoT Guide Part 2

Step 4: Create Thing

A Thing is used to digitally represent a specific component of your application in ThingWorx. In Java programming terms, a Thing is similar to an instance of a class. In this step, you will create a Thing that represents an individual house using the Thing Template we created in the previous step. Using a Thing Template allows you to increase development velocity by creating multiple Things without re-entering the same information each time.

1. Start on the Browse, folder icon tab on the far left of ThingWorx Composer.
2. Under the Modeling tab, hover over Things then click the + button.
3. Type MyHouse in the Name field.
NOTE: This name, with matching capitalization, is required for the data simulator which will be imported in a later step.
4. If Project is not already set, click the + in the Project text box and select the PTCDefaultProject.
5. In the Base Thing Template text box, click the + and select the recently created BuildingTemplate.
6. In the Implemented Shapes text box, click the + and select the recently created ThermostatShape.
7. Click Save.

Step 5: Store Data in Value Stream

Now that you have created the MyHouse Thing to model your application in ThingWorx, you need to create a storage entity to record changing property values. This guide shows ways to store data in ThingWorx Foundation. This exercise uses a Value Stream, which is a quick and easy way to save time-series data.

Create Value Stream

1. Start on the Browse, folder icon tab on the far left of ThingWorx Composer.
2. Under the Data Storage section of the left-hand navigation panel, hover over Value Streams and click the + button.
3. Select the ValueStream template option, then click OK.
4. Enter Foundation_Quickstart_ValueStream in the Name field.
5. If Project is not already set, click the + in the Project text box and select the PTCDefaultProject.
6. Click Save.

Update Thing Template

1. Navigate to the BuildingTemplate Thing Template.
TIP: You can use the Search box at the top if the tab is closed.
2. Confirm you are on the General Information tab.
3. Click the Edit button if it is visible; then, in the Value Stream text entry box, click the + and select Foundation_Quickstart_ValueStream.
4. Click Save.

Step 6: Create Custom Service

The ThingWorx Foundation server provides the ability to create and execute custom Services written in JavaScript. Expedite your development with sample code snippets, code completion, and linting in the Services editor for Things, Thing Templates, and Thing Shapes. In this section, you will create a custom Service in the Electric Meter Thing Shape that will calculate the current hourly cost of electricity based on both the simulated live data and the electricity rate saved in your model. You will write a JavaScript Service that multiplies the current meter reading by the cost per hour and stores the result in a property that tracks the current cost.

1. Click Thing Shapes under the Modeling tab on the left navigation pane, then click on MeterShape in the list.
2. Click the Services tab, then click + Add and select Local (Javascript).
3. Type calculateCost into the Name field.
4. Click Me/Entities to open the tab.
5. Click Properties.
NOTE: There are a number of properties, including costPerKWh, currentCost and currentPower. These come from the Thing Shape you defined earlier in this tutorial.
6. Click the arrow next to the currentCost property. This will add the JavaScript code for accessing the currentCost property to the script box.
7. Reproduce the code below by typing it in the script box, or by clicking the other required properties under the Me tab:

me.currentCost = me.costPerKWh * me.currentPower;

8. Click Done.
9. Click Save.
NOTE: ThingWorx 9.3 introduced a feature that lets you execute tests for Services right where they are defined, so you can quickly test solution code.

Click here to view Part 3 of this guide.
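As an aside, here is a minimal way to sanity-check the calculateCost Service from any server-side script; the input values are hypothetical examples, and this assumes MyHouse picks up MeterShape through its template, as built in this guide series:

// Set example inputs, run the Service, then read the computed result
var house = Things["MyHouse"];
house.costPerKWh = 0.12;   // example rate per kWh
house.currentPower = 1.5;  // example current draw in kW
house.calculateCost();
logger.info("currentCost is now: " + house.currentCost);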
Sometimes you need the values from different ThingTemplate members in ONE grid, so it would be great if you could join two "GetImplementedThingsWithData" results into a common one. Here is a script that works generally, as long as you don't mix datatypes on columns with the same name. I'm very interested to see if someone can find a much easier solution. The Union function was the only one I found suited to the task, but it needs preparation of the infotables upfront.

Input: Table1: Infotable, Table2: Infotable
Output: Infotable

Here is the "Snippet":

// Define params for an infotable to hold column names
var params = {
    infoTableName: "field" /* STRING */
};
// Define the single column
var newField = new Object();
newField.name = "field";
newField.baseType = 'STRING';
// Two one-column infotables to store the field definitions
var field1 = Resources["InfoTableFunctions"].CreateInfoTable(params);
field1.AddField(newField);
var field2 = Resources["InfoTableFunctions"].CreateInfoTable(params);
field2.AddField(newField);
// Define the cell to add to the infotables
var myField = new Object();
myField.field = "";
myField.baseType = "STRING";
// Loop through Table1's DataShape and collect its field names
var dataShapeFields = Table1.dataShape.fields;
for (var fieldName in dataShapeFields) {
    logger.debug('field1 name is ' + dataShapeFields[fieldName].name);
    myField.field = dataShapeFields[fieldName].name;
    field1.AddRow(myField);
}
// Loop through Table2's DataShape and collect its field names
var dataShapeFields = Table2.dataShape.fields;
for (var fieldName in dataShapeFields) {
    logger.debug('field2 name is ' + dataShapeFields[fieldName].name);
    myField.field = dataShapeFields[fieldName].name;
    field2.AddRow(myField);
}
// Use the inner-join (Intersect) functionality to keep only the columns that exist in both
var params = {
    columns2: "field" /* STRING */,
    columns1: "field" /* STRING */,
    joinType: "INNER" /* STRING */,
    t1: field1 /* INFOTABLE */,
    t2: field2 /* INFOTABLE */,
    joinColumns1: "field" /* STRING */,
    joinColumns2: "field" /* STRING */
};
var commonFields = Resources["InfoTableFunctions"].Intersect(params);
// Loop over the result to build a comma-separated column list
var commonColumns = "";
var tableLength = commonFields.rows.length;
for (var x = 0; x < tableLength; x++) {
    var row = commonFields.rows[x];
    commonColumns = commonColumns + row.field + ",";
}
// Drop the trailing comma
commonColumns = commonColumns.slice(0, -1);
// Reduce Table1 to only the common columns
var params = {
    t: Table1 /* INFOTABLE */,
    columns: commonColumns /* STRING */
};
var result1 = Resources["InfoTableFunctions"].Distinct(params);
// Reduce Table2 to only the common columns
var params = {
    t: Table2 /* INFOTABLE */,
    columns: commonColumns /* STRING */
};
var result2 = Resources["InfoTableFunctions"].Distinct(params);
// At the END, JOIN the tables together with Union (does not work if columns are different)
var params = {
    t1: result1 /* INFOTABLE */,
    t2: result2 /* INFOTABLE */
};
var result = Resources["InfoTableFunctions"].Union(params);
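For context, here is a minimal usage sketch showing where Table1 and Table2 would typically come from (the template names are placeholders):

// Gather the two result sets to be combined
var Table1 = ThingTemplates["TemplateA"].GetImplementedThingsWithData();
var Table2 = ThingTemplates["TemplateB"].GetImplementedThingsWithData();
// ...then run the snippet above; 'result' holds one grid with the common columns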