IoT Tips

ThingWorx Analytics is offered through a user interface called Analytics Builder, which exposes some pre-configured functionality. However, should you want to create your own jobs and mashups, all of the features from Analytics Builder, and some more, are available through the ThingWorx services. Running most of this functionality requires that you provide some data to the Analytics services, and this is where the datasetRef parameter comes in.

Data uploaded through Analytics Builder
Any dataset uploaded through Analytics Builder will have a datasetUri, and its format will be parquet (all lowercase). The datasetUri can be obtained from the list of datasets in Analytics Builder.

Passing data as an in-body dataset
If the data wasn't uploaded through Analytics Builder, it can be supplied as an infotable in the data parameter of the datasetRef. Metadata will also need to be supplied if a new dataset is being created (the create job of the AnalyticsServer_DataThing). If the data is being supplied for a scoring job, ThingWorx Analytics will inference the columns appropriately as long as the column names match what the model is expecting.

The filter parameter applies to parquet datasets already uploaded into ThingWorx Analytics and takes an ANSI SQL statement to add conditions that reduce the number of rows. Exclusions is a single-column infotable listing the columns you wish to remove from the job you are submitting. For example, if you want Profiles to run on only 5 out of 10 columns, you would list the 5 columns you don't want to include in the exclusions infotable.

In some cases, data may also be supplied as a CSV file in a file repository. In that case, you would set the datasetUri parameter to the location of the file on the ThingWorx file repository (in the format thingworx://UseCaseFileRepo/tempdata.csv) and set the format to csv.
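To make the parameter shape more concrete, below is a minimal sketch of how a datasetRef infotable might be assembled inside a ThingWorx service, modeled on the scoring example further down this page. AnalyticsDatasetRef is the standard Analytics data shape used there; the dataset URI value, the MyScoringData data shape and its column names are placeholders for your own dataset.

// Option 1: reference a dataset already uploaded through Analytics Builder
let datasetRef = DataShapes["AnalyticsDatasetRef"].CreateValues();
datasetRef.AddRow({
    datasetUri: "<datasetUri copied from the dataset list in Analytics Builder>",  // placeholder value
    format: "parquet"  // all lowercase, as noted above
});

// Option 2: pass the data in-body instead of referencing an uploaded dataset
let data = DataShapes["MyScoringData"].CreateValues();  // MyScoringData is a hypothetical data shape
data.AddRow({ temperature: 71.3, pressure: 2.4 });      // hypothetical columns matching the model's expected inputs
let inBodyRef = DataShapes["AnalyticsDatasetRef"].CreateValues();
inBodyRef.AddRow({ data: data });

Either infotable can then be passed as the datasetRef parameter of the Analytics service you are calling (scoring, profiles, training, and so on).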
View full tip
Hello, everyone!

With the release of ThingWorx 8.5, we've incorporated a lot of new functionality into our manufacturing and service apps. To cover a few, I've included the list below.
- We created a manufacturing common layer extension to bundle all the PTC-offered IoT apps (Operator Advisor, Asset Advisor, Production KPIs and Controls Advisor) into one extension so you can start using them even more quickly.
- We added a UI to Operator Advisor to strengthen your development of work instructions.
- We introduced new shift and crew data models and user interfaces to standardize how to track workers' shifts and crew availability.
- We also introduced flexible KPIs to help you more rapidly develop apps that calculate common metrics.
- We enhanced the Operator Advisor MPMLink connector to allow users to access navigation criteria for filtering parameters based on required criteria, with support for standard processes.
- We incorporated multiple context support for assets (like different business units, separating maintenance views from production views, or segmenting sites by location) to allow segmentation based on role and responsibility, showing the power of ThingWorx networks and permissions.

Today, I'd like to highlight one of these areas in particular: flexible KPI calculations.

I spoke with one of our product managers, Ward (who you may recognize from this post), to learn more about what this new feature does and the value it brings to the business. Here's what he said:

Kaya: Why did we create flexible KPIs? What was the challenge users were facing that led us to create them?
Ward: While there is an industry standard for KPIs such as OEE, availability, productivity and quality, many customers choose to customize the calculations slightly and use their own specific versions in their decision-making process or production monitoring.

Kaya: What do the flexible KPIs do? Can you provide an example?
Ward: KPIs can now be customized by ThingTemplate; this allows users to calculate KPIs differently for different "classes" of things. Imagine you have a group of CNC machines and a group of pumps. You want to calculate the availability of each group, but the availability calculation for the CNC machines may differ slightly from the one for the pumps. With our new flexible KPIs, you're able to customize the availability calculation to make slight tweaks or changes based on differences in machines or devices. So, you can calculate the availability for both your CNC machines and your pumps using your own customized availability calculations. You can also create your own KPIs to calculate metrics like safety incidents or waste.

Kaya: Let's dive a little deeper there. If I want to create a quality station for my robot with custom KPIs, how exactly would I do that?
Ward: Let's consider OEE. We have an OEE ThingShape applied to our ISA95 physical ThingTemplate. This shape has services to perform the OEE calculation. You're able to customize the service on this template, so you have the flexibility to change the way you perform OEE calculations. Now, let's say you want to add a new KPI like mean time between repair (MTBR). To do so, you would create an MTBR ThingShape and add it to the ThingTemplates where you want it calculated. Then, you would update the KPI manager service GetKPINames to add your ThingShape to the list of KPIs to be executed each iteration. The SCO apps will then execute your MTBR service along with the other KPIs.

Kaya: What are their use cases? How do they improve the business?
Ward: Customers can have their cells roll up OEE based on the worst-performing asset and have their operations roll up on a different criterion. If a customer wants to use a modified OEE on a machine that includes the size of the crew operating it, they can now do so on that one machine, on classes of machines or on all machines.

Kaya: Wow. You were certainly busy with 8.5 with all these new features. Can you tell me what you're most excited about for 9.0?
Ward: I'm most looking forward to High Availability. The ability to have multiple servers in an active-active mode allows me to do more processing in ThingWorx and provide a level of reliability to my customers. (Look out for info on this exciting new functionality in the future!)

Ready to get started using the flexible KPIs yourself? Check out the ThingWorx Apps Customization Guide! While you're at it, be sure to also check out the other new features in 8.5 listed above!

Reach out with any questions and stay connected!
Kaya
View full tip
Is your team operating an effective DevOps pipeline? DevOps is an important part of a mature, enterprise-ready application, but the process isn't simple.

This expert session will focus on what a full DevOps pipeline looks like and how PTC can help you build a seamless one. Join us for our upcoming Expert Session to learn how to create a Docker image, integrate Azure with Docker and Git, and set up a seamless DevOps pipeline.

When? Thursday, September 30th 2021 | 11 AM EST
Host: Tori Firewind, Senior Engineer in the PTC IoT Enterprise Deployment Center
Registration link: https://www.ptc.com/en/resources/iiot/webcast/devops-pipeline-thingworx
View full tip
  Hi everyone,   This week, Anthony Moffa returns to Ask Kaya in a different form from his original appearance explaining the benefits of Thing Presence in ThingWorx. As much as we enjoy reading Anthony in print, you can now hear from the man himself in the “Moffa Monitoring Minute!”   Listen to Episode 04 of “ThingWorx on Air” as he explains what Asset Advisor is and how you can use it to remotely monitor assets, shorten service cycles, and improve visibility of your device fleet.   Want to learn even more about Asset Advisor? Check out this video or read through our website!   Reach out with any questions and just Ask Kaya!   Stay connected, Kaya
View full tip
   Who’s ready for exciting new functionality like smoother integration with Azure, rapid deployment capabilities, flexible KPIs and so much more?   Good news! Yesterday, we released ThingWorx 8.5—equipped with features like a new Azure connector for ThingWorx Flow, new software content management (SCM) capabilities with the Azure IoT Hub Connector, streamlined deployments with Solution Central, flexible KPI calculations with our PTC manufacturing and service apps—just to name a few. Check out the 8.5 release notes to discover all the highlights and goodness of our latest release and hear our CTO of IoT, Joe Biron (who you may recognize from previous Ask Kaya posts) and one of our product experience specialists, Sebastian Bergner, highlight new functionality and share demos in this exciting webcast (please note that we've had a little snafu with the link and the recording should be available later this week).   Play around with our new features by downloading ThingWorx 8.5, and let the good times roll.   Let us know what you think in the comments below and be on the lookout for future Ask Kaya posts highlighting new 8.5 functionality.   Stay connected, Kaya
View full tip
Applicable Releases: ThingWorx Platform 7.0 to 8.4

Description:
- Strategy and tools for ThingWorx application backups
- Backup terminology and concepts
- Drivers to define a backup strategy
- Tips for executing a backup in a ThingWorx instance: Tomcat, certificates, configuration and file system data, application-specific files, database

Note: the Neo4j database mentioned in the session is no longer supported. For more information, check Best Practices for ThingWorx Backup.
View full tip
To simplify the development of IIoT applications and solutions on the ThingWorx platform, we introduce the concept of Building Blocks. The intent of Building Blocks is to ease the creation of your own solutions and the customization of PTC's solutions. These Building Blocks are domain-specific business logic pre-made for reusability, which means you won't need to build from scratch on ThingWorx and can accelerate your time to value.

What do we mean by Building Blocks?
Building Blocks are premade components that enable modular software development. They are reusable, replaceable packages of functionality that can be connected into an architecture framework. Building Blocks allow for quicker development and customization of solutions and applications.

What are the different types of Building Blocks?

Connectors
Leverage the same connectors we use for PTC solutions for better overall application performance and seamless transfer of data from disparate devices and systems. Identify the devices and systems you would like to monitor and let the connector do the rest.

Domain Models
Incorporate behavior and data from your devices and systems into a conceptual model of the domain, which is prepackaged based on common use cases. You can also leverage our out-of-the-box models to connect and build dependencies between domains.

Business Logic
Encode real-world business rules that determine how data can be created, stored, and changed. Create KPIs for your devices and systems with these rules and create alerts based on your unique parameters.

UI
Construct widgets to view or analyze key data points in a graphical user interface that you can customize and leverage to extend functionality. Created with manufacturing and service use cases in mind, the UIs are predesigned to make it easy to view and understand data.

Building Blocks build upon the ThingWorx platform and are the base of all of PTC's current and future solutions. We will continue to discuss Building Blocks in future posts, but in the meantime: How will you leverage Building Blocks in your own solutions? Is there more you want to know?

Stay connected,
Rachel
View full tip
  Hello, ThingWorx Users!   As promised, we are back with Episode 02 of ThingWorx on Air. Listen to our PM Milan share the secrets of Operator Advisor and how we built the solution with an eye for IIoT developers.   Learn how Operator Advisor provides you with pre-built snippets of code for widgets, services, etc. targeted specifically for shop floor operators. No more starting from scratch!   Reach out if you have any questions or topic requests!   Stay connected, Kaya   P.S. Keep your ears peeled for the “Wowza Widget of the Week!”
View full tip
Helloooooo ThingWorx users,

Ever wanted to see the coolest technology in action? Ever wished you could surround yourself with awesome ThingWorx developers? Maybe you've even wished you could meet the ThingWorx product management team!

If that's the case, you're in luck! Join me for LiveWorx 2019 from June 10 – 13 in Boston this summer to discover how you can make digital transformation a reality for your organization. See what all the hype's about here!

I'll be presenting a rockin' session on some exciting new functionality coming in ThingWorx to help with enterprise-wide app deployment. See me present with Chris Baldwin on Tues, June 11, @ 1:15pm in our session titled Introducing Solution Central: Your Gateway to Accelerated IIoT Value Across the Enterprise!

For a sneak peek of what's to come at LiveWorx, here are seven sensational sessions our developers can't miss! (Note: Dates and times are subject to change.)
- It's Electric: How Caterpillar Develops Compelling IIoT Apps That Resonate With Customers & Dealers - Mon, June 10, @ 4:30 (45 min)
- ThingWorx and Microsoft Azure from A to Z - Tues, June 11, @ 4:00pm (45 min)
- It's All About The Apps: Introducing ThingWorx Mashup Builder 2.0 and More! - Wed, June 12, @ 9:00am (45 min)
- Connecting Asset Advisor to Azure - Wed, June 12, @ 3:00pm (45 min)
- ThingWorx for Scalability with InfluxDB and Beyond! - Wed, June 12, @ 3:00pm (45 min)
- From Pilot to Production: Tips & Tricks for Vuforia Studio - Thurs, June 13, @ 12:00pm (45 min)

Hope to see you all there and meet you in person!

Stay connected,
Kaya
View full tip
In a recent post, I gave an overview of the types of Building Blocks that are available with the ThingWorx platform. As a reminder, Building Blocks are a collection of entities packaged together for modular software development. They are intended to be reusable, repeatable, and scalable, and they are the fastest way to either build your own solution or customize a pre-made PTC solution, like ThingWorx Digital Performance Management. There are four types of Building Blocks we will talk about for the development of IIoT applications and solutions on the ThingWorx platform: Connectors, Domain Models, Business Logic, and UI. In this post, we are going to do a deep dive on Connectors, which improve application performance and the transfer of data from disparate devices and systems.

What does a Connector look like in ThingWorx?
All ThingWorx Building Blocks follow the same naming convention of CompanyName.BuildingBlockName, so any PTC-created Connectors will appear as PTC.Connector. Connectors in ThingWorx are external integrations that can come in through an industrial system, like an MES that could be connected to with ThingWorx Kepware, or a business system, like a CRM that could be connected to via ThingWorx Flow or REST APIs. A Connector could also be a connection to an external database. These are your data connections, so their structure will be somewhat dependent upon your database and assets.

What does a Connector look like in use in a PTC solution?
If we use the example of Digital Performance Management (DPM), one of the Connectors we use is a Database Manager (ptc.DBConnection.Manager). It pulls information from the database that is being used for the implementation of DPM. If you think of Building Blocks like bricks, Connectors are the foundation. In this case, the Database Manager sits at the bottom layer of bricks to connect the asset data to the next layer of bricks (Domain Models, which I will cover in the next post) and allows you to pull any information you need.

How can you use a Connector in your solutions?
As mentioned above, a Connector is the foundational building block for most solutions. It is what aggregates and transfers your solution-related data into the ThingWorx platform for use. The Connectors we currently have available on the ThingWorx platform will "talk" to your database and the other building blocks you use in your solutions, so for your own solutions, a Connector will be the entry point of your data.

How can you adapt a Connector for your own solutions?
Because all PTC Building Blocks are built with JavaScript in the ThingWorx Mashup Builder, you can leverage existing Connectors on the ThingWorx platform and extend them for your unique use case, or build your own. You can view the code we used to create Connectors, so if they don't pull data into your solution the way you want it to flow, you can override a Connector's functions with your own capabilities.

The ThingWorx PM team is here to listen to your thoughts and feedback, so tell us: What questions do you have about Connectors and how they can improve your experience building solutions on the ThingWorx platform? Or, if you are waiting for the full deep dive into Building Blocks, keep an eye out for our next post on Domain Models, where we will cover the next "layer up" of the types of Building Blocks for use in ThingWorx.

Stay Connected,
Rachel
View full tip
  ThingWorx 9.2 is here! Deploy an entire solution and all its dependencies in one click with Solution Central’s one-click deploy, garner deeper analytic insight with our new waterfall charts, and manage and authenticate users more seamlessly with an Azure Active Directory integration. Discover these features and more in my 9.2 preview post here!   Review our release notes here and be sure to upgrade to 9.2!   Stay connected, Kaya
View full tip
In ThingWorx Analytics, you have the possibility to use an external model for scoring. In this written tutorial, I would like to provide an overview of how you can use a model developed in Python with the scikit-learn library in ThingWorx Analytics. The provided attachment contains an archive with the following files:
- iris_data.csv: a dataset for pattern recognition that has a categorical goal. You can click here to read more about this dataset.
- TestRFToPmml.ipynb: a Jupyter notebook file with the source code for the Python model as well as the steps to export it to PMML.
- RF_Iris.pmml: the PMML file with the model, which you can directly upload in Analytics without going through the steps of training the model in Python.

The tutorial assumes you already have some knowledge of ThingWorx and ThingWorx Analytics. Also, if you plan to run the Python code and train the model yourself, you need to have Jupyter notebook installed (I used the one from the Anaconda distribution). For demonstration purposes, I have created a very simple random forest model in Python. To convert the model to PMML, I have used the sklearn2pmml library. Because ThingWorx Analytics supports PMML format 4.3, you need to install sklearn2pmml version 0.56.2 (the highest version that supports PMML 4.3). To read more about this library, please click here. Furthermore, to use your model with the older version of sklearn2pmml, I have installed scikit-learn version 0.23.2. You will find the commands to install the two libraries in the first two cells of the notebook.

Code Walkthrough

The first step is to import the required libraries (please note that the pandas library is also required to transform the .csv to a DataFrame object):

import pandas
from sklearn.ensemble import RandomForestClassifier
from sklearn2pmml import sklearn2pmml
from sklearn.model_selection import GridSearchCV
from sklearn2pmml.pipeline import PMMLPipeline

After importing the required libraries, we convert iris_data.csv to a pandas dataframe and then create the features (X) as well as the goal (Y) vectors:

iris_df = pandas.read_csv("iris_data.csv")
iris_X = iris_df[iris_df.columns.difference(["class"])]
iris_y = iris_df["class"]

To best tune the random forest, we will use GridSearchCV and cross-validation. We want to test which parameters have the best validation metrics, and for this we will use a utility function that prints the results:

def print_results(results):
    print('BEST PARAMS: {}\n'.format(results.best_params_))
    means = results.cv_results_['mean_test_score']
    stds = results.cv_results_['std_test_score']
    for mean, std, params in zip(means, stds, results.cv_results_['params']):
        print('{} (+/-{}) for {}'.format(round(mean, 3), round(std * 2, 3), params))

We create the random forest model and train it with different numbers of estimators and maximum depths. We then call the previous function to compare the results for the different parameters:

rf = RandomForestClassifier()
parameters = {
    'n_estimators': [5, 50, 250],
    'max_depth': [2, 4, 8, 16, 32, None]
}
cv = GridSearchCV(rf, parameters, cv=5)
cv.fit(iris_X, iris_y)
print_results(cv)

To convert the model to a PMML file, we need to create a PMMLPipeline object, in which we pass the RandomForestClassifier with the tuning parameters we identified in the previous step (please note that in your case, the parameters can be different than in my example).
You can check the sklearn2pmml documentation to see other examples for creating this PMMLPipeline object:

pipeline = PMMLPipeline([
    ("classifier", RandomForestClassifier(max_depth=4, n_estimators=5))
])
pipeline.fit(iris_X, iris_y)

Then we perform the export:

sklearn2pmml(pipeline, "RF_Iris.pmml", with_repr = True)

The model has now been exported as a PMML file in the same folder as the Jupyter Notebook file and we can upload it to ThingWorx Analytics.

Uploading and Exploring the PMML in Analytics

To upload and use the model for scoring, there are two steps that you need to do:
- First, the PMML file needs to be uploaded to a ThingWorx File Repository.
- Then, go to your Analytics Results Thing (the name should be YourAnalyticsGateway_ResultsThing) and execute the service UploadModelFromRepository. Here you will need to specify the repository name and the path to your PMML file, as well as a name for your model (and optionally a description).

If everything goes well, the result of the service will be an id. You can save this id to a separate file because you will use it later on. You can verify the status of this model and whether it is ready to use by executing the service GetDetails.

Assuming you want to use the PMML for scoring, but you were not the one to develop the model, maybe you don't know what the expected inputs and the output of the model are. There are two services that can help you with this:
- QueryInputFields - to verify the fields expected as input parameters for a scoring job
- QueryOutputFields - to verify the expected output of the model. The resultType input parameter can be either MODELS or CLUSTERS, depending on the type of model.

Using the PMML for Scoring

With all this information at hand, we are now ready to use this PMML for real-time scoring. In a Thing of your choice, define a service to test out the scoring for the PMML we have just uploaded. Create a new service with an infotable as the output (don't add a datashape). The input data for scoring will be hardcoded in the service, but you can also add it as service input parameters and pass them via a Mashup or from another source. The script will be as follows:

// Values: INFOTABLE dataShape: ""
let datasetRef = DataShapes["AnalyticsDatasetRef"].CreateValues();
// Values: INFOTABLE dataShape: ""
let data = DataShapes["IrisData"].CreateValues();
data.AddRow({
    sepal_length: 2.7,
    sepal_width: 3.1,
    petal_length: 2.1,
    petal_width: 0.4
});
datasetRef.AddRow({ data: data });
// predictiveScores: INFOTABLE dataShape: ""
let result = Things["AnalyticsServer_PredictionThing"].RealtimeScore({
    modelUri: "results:/models/" + "97471e07-137a-41bb-9f29-f43f107bf9ca", //replace with your own id
    datasetRef: datasetRef /* INFOTABLE */
});

Once you execute the service, the output should look as we would have expected, according to the output fields in the PMML model.

As you have seen, it is easy to use a model built in Python in ThingWorx Analytics. Please note that you may use it only for scoring, and the model will not appear in Analytics Builder since you have created it on a different platform. If you have any questions about this brief written tutorial, let me know.
View full tip
Applicable Releases: ThingWorx Navigate 1.6.0 to 1.7.0

Description:

Covers how to configure ThingWorx Navigate to use Windchill authentication:
- Background and prerequisites
- X.509 Public Key Infrastructure (PKIX) brief introduction
- Steps to configure ThingWorx Navigate with Windchill authentication: Windchill, Integration Runtime, ThingWorx Navigate
- Additional information: Navigate SSL configuration for Windchill authentication, general checklist
View full tip
In the summer heat, keep your operators cool with Operator Advisor. Sit by the pool and relax to the tunes of Episode 05 of "ThingWorx on Air."

High five! We're back with Episode 05 of "ThingWorx on Air," our developer-focused IoT podcast.

In today's episode, Jordan Chaisson, a super talented product manager, joins me to share even more about Operator Advisor (OA). You may remember that we introduced OA in our very first episode of "ThingWorx on Air." Today, we dive deeper into its business value and reveal what's on its roadmap. Plus, hear the coolest use case she's seen yet with Operator Advisor!

OA is an accelerator application built on the ThingWorx platform that gives manufacturing operators digital work instructions and a comprehensive user experience, so they receive the right data at the right time to minimize scrap and maximize efficiency.

Looking for more? Check out the Operator Advisor Guide or discover where to download Operator Advisor today.

Reach out with any questions, and, as always, stay connected!

- Kaya
View full tip
I've had a lot of questions over the years working with Azure IoT, Kepware, and ThingWorx that I really struggled to get answers to. I was always grateful when someone took the time to help me understand, and now it is time to repay the favour.

People ask me many things about Azure (in a ThingWorx context), and one of the common questions has been about MQTT communications from Kepware to ThingWorx using IoT Hub. Recently the topic has come up again as more and more of the ThingWorx expert community start to work with Azure IoT. Today, I took the time to build, test, validate, and share an approach and utilities to do this in cases where the Azure Industrial IoT OPC UA integration is overkill or simply a step later in the project plan. Enjoy!

- End to end Integration of Kepware to ThingWorx using MQTT over Azure IoT (YouTube 45 minute deep-dive)
- ThingWorx entities for import (ThingWorx 9.0)

This approach can be quite good for a simple demo if you have a Kepware Integrator or Kepware Enterprise license, but the use of IoT Gateway for many servers and tags can be quite costly.

Those looking to leverage Azure IoT Hub for MQTT integration to ThingWorx would likely also find this recorded session and shared utilities quite helpful.

Cheers,
Greg
View full tip
Back in 2018 an interesting capability was added to ThingWorx Foundation allowing you to enable statistical calculation of service and subscription execution.

We typically advise customers to approach this with caution for production systems, as the additional overhead can be more than you want to add to the work the platform needs to handle. That said, these statistics, if used consciously, can be extremely helpful during development, testing, and troubleshooting to help ascertain which entities are executing which services and where potential system bottlenecks or areas deserving performance optimization may lie.

Although I've used the Utilization Subsystem services for statistics for some time now, I've always found that the Composer table view is not sufficient for a deeper multi-dimensional analysis. Today I took a first step in remedying this by getting these metrics into Excel, and I wanted to share it with the community as it can be quite helpful in giving developers and architects another view into their ThingWorx applications, and in taking and comparing benchmarks to ensure that operation and scaling are happening as expected once the application is put into production.

Utilization Subsystem Statistics

You can enable and configure statistics calculation from the Subsystem Configuration tab. The help documentation does a good job of explaining this, so I won't repeat it here. The base guidance is not to use persisted statistics or percentile calculation, as both have significant performance impacts. Aggregate statistics are less resource intensive as there are fewer counters, so they are more appropriate for a production environment. Specific entity statistics require greater resources, and this will scale up with the number of provisioned entities that you have (i.e. 1,000 machines versus 10,000 machines), whereas aggregate statistics will remain more constant as you scale up your deployment and its load.

Utilization Subsystem Services

In the subsystem Services tab, you can select "UtilizationSubsystem" from the filter drop-down and you will see all of the relevant services to retrieve and reset the statistics. Here I'm using the GetEntityStatistics service to get entity statistics for services and subscriptions.

Using Postman to Save the Results to File

I have used Postman to do the same REST API call, format the results as HTML, and save these results to file so that they can be imported into Excel. You need to call '/Thingworx/Subsystems/UtilizationSubsystem/Services/GetEntityStatistics' as a POST request with the Content-Type and Accept headers set to 'application/xml'. Of course, you also need to add an appropriately permissioned and secured AppKey to the headers in order to authenticate and get service execution authorization. You'll note the Export Results > Save to a file menu over on the right to get your results saved.

Importing the HTML Results into Excel

As simple as I would like to hope that getting a standard web-formatted file into Excel should be, it didn't turn out to be as easy as I would have hoped, so I had to switch over to Windows to take advantage of Power Query. From the Data ribbon, select Get Data > From File > From XML. Then find and select the HTML file saved in the previous step. Once it has loaded the file and done some preparation, you'll need to select the GetEntityStatistics table in the results on the left. This should display all of the statistics in a preview table on the right. Once the query completes, you should have a table showing your statistical data ready for... well... slicing and dicing.

The good news is that I did the hard part for you, so you can just download the attached spreadsheet and update the dataset with your fresh data to have everything parsed out into separate columns for you. Now you can use the column filters to search for entity or service patterns or to select specific entities or attributes that you want to analyze. You'll need to later clear the column filters to get your whole dataset back.

Updating the Spreadsheet with Fresh Data

In order to make this data and its analysis more relevant, I went back, reset all of the statistics, and took a new sample which was exactly one hour long. This way I would get correct recent min/max execution time values as well as a better understanding of just how many executions/triggers are happening in a one-hour period for my benchmark. Once I got the new HTML file saved, I went into Excel's Data ribbon, selected a cell in the data table area, and clicked "Queries & Connections", which brought up the pane on the right showing my original query. Hovering over this query, I was prompted with some options and chose "Edit". Then I clicked on the tiny little gear to the right of "Source" in the pane on the right side. Finally, I was able to select the new file and Power Query opened it up for me. I just needed to click "Close & Load" to save and refresh the query providing data to the table. The only thing at this point is that I didn't have my nice little sparklines, as my regional decimal character is not a period - so I selected the time columns and did a "Replace All" from '.' to ',' to turn them into numbers instead of text.

Et Voila!

There you have it - ready to sort, filter, search, and review to help you better understand which parts of your application may be overly resource hungry, or even to spot faulty equipment that may be communicating and triggering workflows far more often than it should.

Specific vs General

Depending on the type of analysis that you're doing, you might find that the aggregate statistics are a better option. As there will be far, far fewer of them than the entity-specific statistics, they'll do a better job of giving you a holistic view of the types of things that are happening with your ThingWorx application's execution. The entity-specific data set that I'm showing here would be a better choice for troubleshooting and diagnostics, to try to understand why certain customers/assets/machines are behaving strangely, as we can drill into these stats specifically. Keep in mind, however, that you should then compare these findings with the general baseline to see how a particular asset is behaving compared to the whole fleet.

As a size guideline: I did an entity-specific version of this file for a customer with 1,000 machines, and the Excel spreadsheet was 7 MB compared to the 30 KB of the one attached here; just opening and saving it was tough for Excel (likely due to all of my nested formulas). Just keep this in mind as you use this feature, as there is memory overhead, which also means garbage collection and associated CPU usage.
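If you'd rather pull the same numbers from within the platform (for example, to write them to a stream or file repository on a schedule), the statistics services can also be invoked from a ThingWorx service script. Below is a minimal sketch; it assumes, as with the Postman call above, that GetEntityStatistics can be invoked without input parameters, so check the service definition on the subsystem in Composer before relying on it.

// Invoke the Utilization Subsystem directly from a service script (assumes no required inputs)
let entityStats = Subsystems["UtilizationSubsystem"].GetEntityStatistics();

// entityStats is an infotable; each row holds the counters for one entity/service combination.
// From here you could log a summary, write the rows to a stream, or export them for analysis.
logger.info("Retrieved " + entityStats.rows.length + " utilization statistic rows");
result = entityStats;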
View full tip
  Ever dreamed of participating in the design of the latest ThingWorx features? Now’s your chance! Join the UX Lab in usability and research sessions to help inform the design of your favorite ThingWorx features from the Edge to Kepware to remote monitoring to Solution Central and more!   For those of you newer to PTC, the UX Lab is an opportunity to see early views of product mockups, wireframes, etc. to provide your direct feedback; the UX Lab is part of LiveWorx, our definitive event for digital transformation, and this year both are virtual!   Click here to influence the design of Edge, Kepware, remote monitoring, Solution Central, and so much more!   Reach out with any questions!   Stay connected, Kaya
View full tip
Hi Community,

Although we have reference architectures and integration paths for connecting devices to ThingWorx through Azure IoT, no one has ever written anything about doing the same from one ThingWorx to another. I thought I'd change that and put some ideas out there around how one might go about doing this. Although this is not officially supported or recommended by PTC, I have consulted with a number of leading SMEs on the subject, who have participated in forming the basis of my thinking outlined here.

Components required (in order of communication path):
1. On-premise ThingWorx Platform
2. Protocol Adapter Toolkit* (CXS) - MQTT
3. Azure IoT Edge
4. Azure IoT Hub
5. ThingWorx Azure IoT Hub Connector (CXS)
6. Azure Cloud-hosted ThingWorx Platform

The PAT (2), with a codec to encode MQTT messages, publishes to the on-premise IoT Edge MQTT endpoint, which handles store-and-forward of messages to IoT Hub. An Azure IoT device would exist for each Thing you wish to represent on the ThingWorx servers. The Azure IoT Hub Connector would pick up the incoming messages and pass them on to the cloud ThingWorx, which would decode the MQTT payload and map it to Thing property updates.

The only part that I presently don't like about this approach is that you'll need to decode the MQTT messages on the ThingWorx platform in the cloud when they are received from the IoT Hub, and this mechanism will also need to handle encoding and publishing back to the IoT Hub if C2D (Cloud-to-Device) messages are to be implemented (aka bi-directional). This is required as ThingWorx only supports AlwaysOn as an application-level protocol, so some form of mapping needs to be done (see the rough sketch in the P.S. below).

* Another approach would be to replace the PAT with a custom agent which implements both the ThingWorx Edge SDK and the Azure IoT device SDK

Regards,

Greg Eva
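P.S. For anyone wondering what the decode-and-map step on the cloud ThingWorx could look like, here is a rough, hypothetical sketch of a service script that parses an incoming JSON payload and writes the values onto the matching Thing. The input parameter name (message), the payload fields (thingName, temperature, pressure) and the property names are all assumptions for illustration; your codec, payload structure and property names will differ.

// 'message' is assumed to be a STRING input parameter on this service holding the raw JSON payload
let payload = JSON.parse(message);

// Look up the Thing that this Azure IoT device represents (payload.thingName is a hypothetical field)
let targetThing = Things[payload.thingName];

// Map payload fields onto Thing properties (property names are placeholders)
targetThing.temperature = payload.temperature;
targetThing.pressure = payload.pressure;

A real implementation would also need error handling for unknown Things and malformed payloads, plus a mirror-image encode service if C2D messaging is in scope.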
View full tip
  Happy New Year, everyone! New year, same mission. As we continue to improve ThingWorx, we remain committed to taking into account what you, our users, are saying. What are you using ThingWorx for? What do you want it to do? What additional tools are you looking for?   To hear from you directly, our PM team has created a quick survey to understand your identity management and SaaS strategies a little better.   Complete the survey below for a chance to win a free Solution Central t-shirt!   Thanks in advance! At the end of the survey, all respondents will receive a link to check out the latest in our ThingWorx 8.5 release. One lucky winner will receive the Solution Central t-shirt.   Thanks for sharing your info! We’ll be sure to study it as we continue to develop our robust IoT solutions platform.   Stay connected in the New Year! Kaya
View full tip
  When it’s time to make pizza, most of us head to the fridge for our bag of dough; we don’t head for the flour and yeast to start from scratch. So, why would your ThingWorx apps be any different? Start with pre-built solutions like Asset Advisor to rapidly create health monitoring apps and dramatically reduce your development time.   We previously introduced Asset Advisor on Episode 04 of “ThingWorx on Air.” Today, we dive deeper into Asset Advisor with Greg Huet, Asset Advisor’s technical product manager (aka product owner). Listen to Ep. 06: Rapidly Build IIoT Apps for Service & Monitoring with Asset Advisor to hear Greg share our strategy of studying existing use cases and finding similarities that we can pre-build into solutions so that you don’t have to build them from scratch. Hear how you can use Asset Advisor out-of-the-box with tweaks for your company’s configurations or as an accelerated starting point where you can add as much customization as your use case desires—it’s like building a custom pizza, but starting with pre-made dough, rather than yeast and flour.   Greg also mentions the ThingWorx Application Development Guide. Be sure to check out my previous post, where Ward, one of the document creators, shares four of his top tips from the guide.   Now, sit back, relax and go enjoy some pizza while you listen to Episode 06.   As always, stay connected! Kaya
View full tip
Hi everyone,

Today I'm here with a very exciting guest: Janie! Janie recently joined PTC after spending the last two years at a global professional services firm, where she held many roles, including that of a lead ThingWorx developer. While Janie did do some front-end development work and UI creation, the bulk of her time was spent on the backend writing services that enabled the application's functionality. In this role, she not only wrote code herself, she also played a large part in the in-platform architectural decisions, determining the best way to utilize the platform in order to satisfy complex business requirements.

Throughout her last two years, she worked on various IIoT projects in the manufacturing industry, developing a real-time asset health monitoring and OEE/production tracking application to monitor key metrics like downtime and output. For each customer engagement, an assessment of IoT platforms was often undertaken to weigh strengths and weaknesses against the business requirements of the customer, and while she spent time looking into a variety of platforms, ThingWorx always came out on top.

That said, her experience with ThingWorx showed her how much and how quickly the technology can change the way a manufacturing company operates. She decided that instead of doing implementations of the technology, she wanted to have an impact on the technology itself and have a hand in the direction it goes and the way it continues to change the businesses we know today. This desire is what led her to product management and the role she currently holds at PTC today.

I recently sat down with Janie to hear how it has been moving from a developer on the platform to a PM for it, and to learn about her past ThingWorx experiences and how those will influence the work she does at PTC.

Here's how our convo went.

Kaya: In your previous role as an IoT developer at another company, what made you choose ThingWorx over other IoT solution platforms?
Janie: We chose ThingWorx over other IoT platforms due to its strong ability to connect such a wide variety of machines using ThingWorx Industrial Connectivity via Kepware, its industry-leading standing in the market and the fact that it did not require its users to have extensive development knowledge, given its low-code environment, which enabled non-developers to be successful.

Kaya: Why do you think ThingWorx saved you time over other development platforms/products?
Janie: There were three key ways the platform helped to save me time. First, I didn't have to write code from scratch. For example, when needing to manipulate arrays or parse through uploaded CSV files, there were pieces of code readily available for me to use without having to know how to do it all myself. Second, due to the widgets ThingWorx already had available, I didn't have to spend time making custom front-end UIs or widgets. And, thirdly, I saved a ton of time during the connectivity phase because connectivity to devices was supported OOTB through Kepware's extensive suite of drivers, including some of the key ones we utilized such as Allen-Bradley Control Logix, Modbus, Siemens, and user-configurable drivers.

Kaya: Along the lines of saving time, how would your experience with the platform have differed had you leveraged one of our manufacturing apps, like Production KPIs?
Janie: As I talked about previously, having access to a pre-built solution would have evidently saved a ton of development time and accelerated time to value. It also would have reduced the complexity of our completed app, which would have made it more scalable. While I understand the manufacturing apps are not 100% ready to go out of the box, but rather configurable stepping stones toward a larger and more holistic solution, if we could have had Production KPIs as our development starting point, we would have had a more sound and already proven way of tackling production and OEE tracking, and we could have added even more value on top of that.

Kaya: What were some of your favorite aspects of the platform?
Janie: The code snippets to get started were a big favorite, as were the OOTB widgets that allowed for quick visualization of important data. The OOTB industrial connectivity with seamless integration to ThingWorx was huge: we were able to connect multiple devices and stream information in real time with little difficulty, which enabled us to derive even more value from the platform and really focus on becoming a fully smart and connected operation. Finally, the drag-and-drop UI was simple and intuitive.

Kaya: What do you think the top misconception is about IoT?
Janie: I would say repeatability. Scaling an IoT solution across an enterprise is not as simple as it is often represented. I know we're releasing new functionality to help users more easily deploy their solutions, so I'm excited about that.

Kaya: So, now that you work as a PM for the platform, what are a few things you wish you had known as a developer working on ThingWorx?
Janie: Seeing everything we're working on for the platform now is very exciting; I wish I could have been more aware of what's on the roadmap so I knew what was coming as I was developing on the platform.

Kaya: On a similar note, now that you are working on the ThingWorx PM team, what items are you hoping to drive or change based on your experience?
Janie: First, I'd like to drive stronger interaction with our system integrators (SIs) and partners by providing them insights into our roadmaps and allowing them to provide feedback in a more seamless way. Next, I'd like to improve upon our essential platform developer capabilities, such as CI/CD, debugging, versioning, testing, etc. That said, Solution Central and PTC's commitment to and integration with Microsoft products are very exciting roadmap features for me.

Kaya: I completely agree. Those are some great points. As I'm sure you're aware, though our readers might not be, we are working both within the platform itself and on a new feature, Solution Central, to provide more of that essential functionality so that, in addition to building blocks, low-code development and a drag-and-drop UI, we can continue to help you accelerate your time to value.

Readers, I hope you enjoyed hearing from Janie! We're excited to have her on the ThingWorx team and we look forward to making ThingWorx even stronger.

Let me know what you think in the comments below.

Stay connected,
Kaya
View full tip