IoT Tips

Hello everyone,

Following a recent project experience, I felt it was important to share my insights with you. The core of this article is to demonstrate how you can format a Flux request in ThingWorx and post it to InfluxDB, with the aim of offloading calculation-heavy workloads to InfluxDB for performance. The context is renewable energy. This article is neither about Kepware nor about connecting to InfluxDB. As a prerequisite, you may like to read this article: Using Influx to store Value Stream properties from... - PTC Community

Introduction

The following InfluxDB usage was developed for an electricity provider.

Technical Context
- Kepware is used as the source of data. A simulation for wind assets, based on an Excel file, is configured and delivers data in real time.
- An SQL database also gathers the same data as the Kepware simulation. It is used to load historical data into InfluxDB, addressing cases of temporary data loss. Once the connection is back online, the SQL database is used to backfill the lost data into InfluxDB and recompute the KPIs.
- InfluxDB is used to store data over time as well as the calculated KPIs.
- A third-party invoicing system is simulated to provide the electricity price according to the time of day.

Orchestration of InfluxDB operations with ThingWorx
- ThingWorx v9.4.4: set the numeric properties to be logged, maintain control over the execution logic, and format the Flux request with dynamic inputs to send to InfluxDB.
- InfluxDB Cloud v2: store the logged properties, enable quick data reads, and execute the calculations.
Note: the free InfluxDB tier is slower in write and read, and limits data retention to 30 days.

ThingWorx model and services

ThingWorx context
Because the relevant numeric properties are logged over time, new KPIs can be calculated from the logged data. In the following example, each wind asset triggers a calculation every minute to get the monetary gain based on the current power produced and the current electricity price. The request is formatted in ThingWorx, then pushed to and executed in InfluxDB. Thus, ThingWorx server memory is not used for this calculation.

Services breakdown

CalculateMonetaryKPIs
Entry point service to calculate monetary KPIs. It uses the two following services: it triggers the FormatFlux service, then injects the result into the Post service.
Inputs: none
Output: NOTHING

FormatFlux_CalculateMonetaryKPI
Formats the request in Flux for the monetary KPI calculation, respecting the Flux syntax used by InfluxDB.
Inputs: bucketName (STRING), thingName (STRING)
Output: TEXT

PostTextToInflux
Generic service to post any request to InfluxDB, whatever the request is.
Inputs: FluxQuery (TEXT), influxToken (STRING), influxUrl (STRING), influxOrgName (STRING), influxBucket (STRING), thingName (STRING)
Output: INFOTABLE

Highlights - CalculateMonetaryKPIs
Find the full script in the attachment "CalculateMonetaryKPIs script.docx". The URL, token, organization and bucket are configured in the Persistence Provider used by the Value Stream. We dynamically read them from the Value Stream attached to this Thing, and from there we reuse them to set the inputs of the two other services via "MyConfig".

Highlights - FormatFlux_CalculateMonetaryKPI
Find the full script in the attachment "FormatFlux_CalculateMonetaryKPI script.docx". The major part of this script is a text block, in Flux syntax, into which we inject dynamic values. The service gets the last values of ElectricityPrice, Power and Capacity to calculate ImmediateMonetaryGain, PotentialMaxMonetaryGain and PotentialMonetaryLoss.
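To make the pattern concrete without opening the attachments, here is a minimal, simplified sketch of how this pair of services can look in a ThingWorx JavaScript service. The bucket, Thing, host, organization and token values are placeholders, and the Flux body is reduced to a single read; the real multi-variable join and calculation logic lives in the attached scripts.

// --- FormatFlux (simplified): build a Flux query with dynamic values injected ---
var bucketName = "energy";        // placeholder, normally a service input
var thingName = "WindAsset1";     // placeholder, normally me.name or a service input
var fluxQuery =
    'from(bucket: "' + bucketName + '")' +
    ' |> range(start: -1h)' +
    ' |> filter(fn: (r) => r._measurement == "' + thingName + '")' +
    ' |> filter(fn: (r) => r._field == "Power" or r._field == "ElectricityPrice")' +
    ' |> last()';

// --- PostTextToInflux (simplified): post the raw Flux text to the InfluxDB v2 query API ---
var headers = {
    "Authorization": "Token " + "INSERT-YOUR-INFLUX-TOKEN",      // placeholder token
    "Content-Type": "application/vnd.flux",
    "Accept": "application/csv"
};
var result = Resources["ContentLoaderFunctions"].PostText({
    url: "https://your-influx-host/api/v2/query?org=your-org",   // placeholder host and org
    content: fluxQuery,
    headers: headers
});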
Flux logic might not be easy for beginners, so let's break down the intermediate variables created on the fly in the Flux request. Let's take the example of existing data in the bucket (with values at only two timestamps):

_time | _measurement | _field | _value
2024-07-03T14:00:00Z | WindAsset1 | ElectricityPrice | 0.12
2024-07-03T14:00:00Z | WindAsset1 | Power | 100
2024-07-03T14:00:00Z | WindAsset1 | Capacity | 150
2024-07-03T15:00:00Z | WindAsset1 | ElectricityPrice | 0.15
2024-07-03T15:00:00Z | WindAsset1 | Power | 120
2024-07-03T15:00:00Z | WindAsset1 | Capacity | 160

The request is articulated in the following steps:

1. Get the source values.
   - Get the last price and store it in priceData:
     _time | ElectricityPrice
     2024-07-03T15:00:00Z | 0.15
   - Get the last power and store it in powerData:
     _time | Power
     2024-07-03T15:00:00Z | 120
   - Get the last capacity and store it in capacityData:
     _time | Capacity
     2024-07-03T15:00:00Z | 160
2. Join the three *Data tables on the same time. The last values of price, power and capacity may not have been set at the same time, so the final joinedData may be empty.
   _time | ElectricityPrice | Power | Capacity
   2024-07-03T15:00:00Z | 0.15 | 120 | 160
3. Perform the calculations.
   - gainData stores the result of ElectricityPrice * Power:
     _time | _measurement | _field | _value
     2024-07-03T15:00:00Z | WindAsset1 | ImmediateMonetaryGain | 18
   - maxGainData stores the result of ElectricityPrice * Capacity.
   - lossData stores the result of ElectricityPrice * (Capacity - Power).
4. Add the results to the original bucket.

Highlights - PostTextToInflux
Find the full script in the attachment "PostTextToInflux script.docx". This is a pretty straightforward script; the idea is to have a generic service to post a request. The header is somewhat unusual, using the application/vnd.flux content type, and the URL needs to be formatted according to the InfluxDB API.

Well done!

Thanks to these steps, the calculated values are stored in InfluxDB. Other services can be created to retrieve the relevant InfluxDB data and visualize it in a mashup.

Last comment
It was the first time I had been in touch with Flux scripting, so I wasn't comfortable, and I am still far from proficient. After spending more than a week browsing the InfluxDB documentation and running multiple tests, I achieved limited success but nothing substantial as a final outcome. As a last resort, I turned to ChatGPT. Through a few interactions, I quickly obtained convincing results. Within a day, I had a satisfactory outcome, which I then fine-tuned for the actual use case.

Here are two examples of consecutive ChatGPT prompts and answers; the output might need to be fine-tuned after the first answer. Right after, I asked it to convert the result to a ThingWorx script format. In that last answer, the script won't work as-is: the fluxQuery is not well formatted for ThingWorx. Please refer to the provided script "FormatFlux_CalculateMonetaryKPI script.docx" to see how to format the Flux query and insert variables inside it. Despite its mistakes, ChatGPT still mainly provides relevant code structure for beginners in Flux and is an undeniable boost for writing code.
Design and Implement Data Models to Enable Predictive Analytics Learning Path

Design and implement your data model, create logic, and operationalize an analytics model.

NOTE: Complete the following guides in sequential order. The estimated time to complete this learning path is 390 minutes.

1. Data Model Introduction
2. Design Your Data Model (Part 1, Part 2, Part 3)
3. Data Model Implementation (Part 1, Part 2, Part 3)
4. Create Custom Business Logic
5. Implement Services, Events, and Subscriptions (Part 1, Part 2)
6. Build a Predictive Analytics Model (Part 1, Part 2)
7. Operationalize an Analytics Model (Part 1, Part 2)
Create Custom Business Logic

Overview

This project will introduce you to creating your first ThingWorx business rules engine.

Following the steps in this guide, you will know how to create your business rules engine and have an idea of how you might want to develop your own. We will teach you how to use your data model with Services, Events, and Subscriptions to establish a rules engine within the ThingWorx platform.

NOTE: This guide's content aligns with ThingWorx 9.3. The estimated time to complete this guide is 60 minutes.

Step 1: Completed Example

Download the attached, completed files for this tutorial: BusinessLogicEntities.xml.

The BusinessLogicEntities.xml file contains a completed example of a business rules engine. Use this file to see a finished example, and return to it as a reference if you become stuck during this guide and need some extra help or clarification. Keep in mind that this download uses the exact entity names used in this tutorial. If you would like to import this example and also create entities on your own, change the names of the entities you create.

Step 2: Rules Engine Introduction

Before implementing a business rules engine from scratch, there are a number of questions that should first be answered. There are times when a business rules engine is necessary, and times when the work can be done entirely within regular application coding.

When to create a rules engine:
- When there are logic changes that will occur often within the application. These can be decisions on how to do billing based on the state, or on how machines in factories should operate based on a release.
- When business analysts are directly involved in the development or utilization of the application. In general, these roles are non-technical, but being involved with the application directly means they need a way to make changes.
- When a problem is highly complex and no obvious algorithm can be created for the solution. This often covers scenarios in which an algorithm might not be the best option, but a set of conditions will suffice.

Advantages of a rules engine:
- The key reward is having an outlet to express solutions to difficult problems in a form that can be easily verified.
- A consolidated knowledge base for how a part of a system works, and a possible source of documentation. This source of information gives people with varying levels of technical skill insight into the business model.

Business logic with the ThingWorx Core platform:
- A centralized location for development, data management, versioning, tagging, and utilization of third-party applications.
- The ability to create the rules engine within the ThingWorx platform or outside of it. Because the rules engine can be created outside of the ThingWorx platform, third-party rules engines can also be used.
- The ThingWorx platform provides customizable security and built-in services that can decrease development time.

Step 3: Establish Rules

In order to design a business rules engine and establish rules before starting the development phase, you must capture requirements and designate rule characteristics.

Capture Requirements
The first step in building a business rules engine is to understand the needs of the system and capture the rules necessary for success.
- Brainstorm and discuss the conditions that will be covered within the rules engine.
- Construct a precise list.
- Identify exact rules and tie them to specific business requirements.
Each business rule, and each set of conditions within a business rule, needs to be independent of other business rules. When there are several scenarios involved, it is best to create multiple rules, one handling each. When business rules relate to similar scenarios, the best methodology is to group the rules into categories:

Category | Description
Decision Rules | Set of conditions regarding business choices
Validation Rules | Set of conditions regarding data verifications
Generation Rules | Set of conditions used for data object creation in the system
Calculation Rules | Set of conditions that handle data input used for computing values or assessments

Designate Rule Characteristics
Characteristics for the rules include, but are not limited to:
- Naming conventions/identifiers
- Rule grouping
- Rule definition/description
- Priority
- Actions that take place in each rule

After this is completed, you will be ready to tie business requirements to business rules, and those directly to creating your business rules engine within the platform.

Rules Translation to ThingWorx
There are different methods for making one-to-one connections between rules and ThingWorx. The simplified method below shows one way this can all be done within the ThingWorx platform:

Characteristic | ThingWorx Aspect
Rule name/identifier | Service Name
Ruleset | Thing/ThingTemplate
Rule definition | Service Implementation
Rule conditions | Service Implementation
Rule actions | Service Implementation
Data management | DataTables/Streams

Much of the rule implementation is handled by ThingWorx Services using JavaScript. This allows direct access to data and to other provided Services, and gives a central location for all information pertaining to a set of rules. The design provided above also allows for easier testing and security management.

Step 4: Scenario Business Rules Engine

An important aspect to think about before implementing your business rules engine is how the Service implementation will flow.

Will you have a single entry path for the entire rules engine, or several entries based on what is being requested of it? Will you create only Services to handle each path, or will you create Events and Subscriptions (triggers and listeners) in addition to Services to split the workload?

How you answer those questions dictates how you will need to break up your implementation. The business rules for the delivery truck scenario are below. Think about how you would break down this implementation.

High Level Flow
1. A customer makes an order with a company (the merchant).
   1.A. Customer-to-merchant order information is created.
2. The merchant creates an order with our delivery company, PTCDelivers.
   2.A. Merchant order information is populated.
   2.B. The merchant sets the requested delivery speed.
   2.C. The merchant sets the customer information for the delivery.
3. The package is added to a vehicle owned by PTCDelivers.
4. The vehicle makes the delivery to the merchant's customer.

Lower Level: Vehicles
1. The package is loaded onto a vehicle.
   1.i. Based on the speed selected, it is added to a truck or a plane.
   1.ii. The Ground speed option uses a truck.
   1.iii. The Air and Expedited speed options are based on plane usage, with trucks when needed.
2. The delivery system handles the deliveries of packages.
3. The delivery system finds the best vehicle option for the delivery.
4. An airplane or truck can be fitted with a limited number of packages.

Lower Level: Delivery
1. The delivery speed is set by the customer and passed on to PTCDelivers.
2. Delivery pricing is set based on a simple formula of (Speed Multiplier * Weight) + $1 (flat fee); see the sketch at the end of this guide.
   2.i. Ground arrives in 7 days. The ground speed multiplier is $2.
   2.ii. Air arrives in 4 days. The air speed multiplier is $8.
   2.iii. Expedited arrives in 1 day. The expedited speed multiplier is $16.
3. Deliveries can be prioritized based on a number of outside variables.
4. Bulk rate pricing can be implemented.

How would you implement this logic and add in your own business logic for added profits? Logic such as finding the appropriate vehicle to make a delivery can be handled by regular Services. Bulk rates, prioritizing merchants and packages, delivery pricing, and how orders are handled fall under business logic. The MerchantThingTemplate Thing contains a DataChange Subscription for its list of orders. This Subscription triggers an Event in the PTCDelivers Thing.

The PTCDelivers Thing contains an Event for new orders coming in and a Subscription for adding orders and merchants to their respective DataTables. This Subscription can be seen as the entry point for this scenario. Nevertheless, you can create a follow-up Service to handle your business logic. We have created the PTCDeliversBusinessLogic Thing to house your business rules engine.

Step 5: Scenario Data Model Breakdown

This guide will not go into detail on the data model of the application, but here is a high-level view of the roles played within the application.

Thing Shapes
- ClientThingShape: Shape used to represent the various types of clients the business faces (merchants/customers).
- VehicleThingShape: Shape used to represent different forms of transportation throughout the system.

Templates
- PlaneThingTemplate: Template used to construct all representations of a delivery plane.
- TruckThingTemplate: Template used to construct all representations of a delivery truck.
- MerchantThingTemplate: Template used to construct all representations of a merchant from whom goods are purchased.
- CustomerThingTemplate: Template used to construct all representations of a customer who purchases goods.

Things/Systems
- PTCDeliversBusinessLogic: This Thing holds the majority of the business rule implementation and convenience services.
- PTCDelivers: A Thing that provides helper functions in the application.

DataShapes
- PackageDeliveryDataShape: DataShape used with the package delivery event; provides the necessary information about deliveries.
- PackageDataShape: DataShape used for processing a package.
- OrderDataShape: DataShape used for processing customer orders.
- MerchantOrderDataShape: DataShape used for processing merchant orders.
- MerchantDataShape: DataShape used for tracking merchants.

DataTables
- OrdersDatabase: DataTable used to store all orders made with customers.
- MerchantDatabase: DataTable used to store all information for merchants.

Step 6: Next Steps

Congratulations! You've successfully completed the Create Custom Business Logic guide, and learned how to:
- Create business logic for IoT with resources provided in the ThingWorx platform
- Utilize the ThingWorx Edge SDK platforms with a pre-established business rules engine

We hope you found this guide useful.

The next guide in the Design and Implement Data Models to Enable Predictive Analytics learning path is Implement Services, Events, and Subscriptions.
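As a closing illustration of the Calculation Rules category from Step 3, here is a minimal sketch of how the delivery-pricing rule above could be written as a Service on the PTCDeliversBusinessLogic Thing. The service name, input names, and initial values are illustrative assumptions and are not part of the downloadable example.

// CalculateDeliveryPrice (sketch): returns (speed multiplier * weight) + flat fee
// Assumed inputs: speed (STRING: "Ground" | "Air" | "Expedited"), weight (NUMBER)
// Assumed output: NUMBER
var FLAT_FEE = 1.0;
var multipliers = {
    "Ground": 2.0,     // arrives in 7 days
    "Air": 8.0,        // arrives in 4 days
    "Expedited": 16.0  // arrives in 1 day
};

if (!multipliers.hasOwnProperty(speed)) {
    throw "Unknown delivery speed: " + speed;
}

var result = (multipliers[speed] * weight) + FLAT_FEE;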
Data Model Implementation Guide Part 1

Overview

This project will introduce you to methods for creating the data model that you have designed and are ready to implement. Following the steps in this guide, you will implement the data model you've already designed. After having insight into your desired data model, this guide provides instructions and examples on how to build out your application using the ThingWorx platform. We will teach you how to utilize the ThingWorx platform to implement your fully functional IoT application.

NOTE: This guide's content aligns with ThingWorx 9.3. The estimated time to complete ALL 3 parts of this guide is 60 minutes. All content is relevant, but there are additional tools and design patterns you should be aware of. Please go to this link for more details.

Step 1: Completed Example

Download the completed files for this tutorial: DataModelEntities.xml.

The DataModelEntities.xml file provided to you contains a completed example of the data model implementation. Use this file to see a finished example, and return to it as a reference if you become stuck during this guide and need some extra help or clarification. Keep in mind that this download uses the exact entity names used in this tutorial. If you would like to import this example and also create entities on your own, change the names of the entities you create.

Step 2: Data Model Scenario

This guide will implement the scenario shown in the Data Model Design guide. Let's revisit our Smart Factory example scenario.

Name | Description
Operations | User who keeps the line running and makes sure that it's producing quality products
Maintenance | User who keeps machines up and running so that the operator can crank out products
Management | User in charge of dispatching production orders and making sure the quotas are being met
Conveyor Belts | Thing on the factory line that passes items along to the next stage
Pneumatic Gate | Thing on the factory line
Robotic Arm | Thing on the factory line
Quality Check Camera | Final Thing on the factory line to ensure quality

In order to add this to our solution, we will want to build a "connector" between ThingWorx and the existing system. These connectors will be Things as well.
- Internal system connection Thing for the Production Order System
- Internal system connection Thing for the Maintenance Request System

Operator

Required Functionality
1. File maintenance requests
2. Get quality data from assets on their line
3. Get performance data for the whole line
4. Get a prioritized list of production orders for their line
5. Create maintenance requests

Required Information
1. Individual asset performance metrics
2. Full line performance metrics
3. Product quality readings

Maintenance

Required Functionality
1. Get granular data values from all assets
2. Get a list of maintenance requests
3. Update maintenance requests
4. Set triggers for automatic maintenance request generation
5. Automatically create maintenance requests when triggers have been activated

Required Information
1. Granular details for each asset, in order to better understand healthy asset behavior
2. Current alert status for each asset, to know if something is going wrong with an asset
3. When the last maintenance was performed on an asset
4. When the next maintenance is scheduled for an asset
5. Maintenance request info: creation date, due date, progress notes

Management

Required Functionality
1. Create production orders
2. Update production orders
3. Cancel production orders
4. Access line productivity data
5. Elevate maintenance request priority

Required Information
1. Production line productivity levels (OEE)
2. List of open maintenance requests

Overlapping Matrix
This matrix represents all of the overlapping components that are shared by multiple types of Things in our system.

Unique Matrix
This matrix represents the components unique to each type of Thing.

Step 3: LineAsset Thing Template

After prioritizing and grouping common functionality and information, we came up with the list below for the first Thing Template to create: LineAsset, with five Properties, one Event, and one Subscription.

Follow the instructions below to create this entity and get the implementation phase of your development cycle going.

Line Asset Properties

Let's build out our Properties.
1. In the ThingWorx Composer, click + New at the top of the screen.
2. Select Thing Template in the dropdown.
3. In the name field, enter LineAsset and set the Project (ie, PTCDefaultProject).
4. For the Base Thing Template field, select GenericThing.
5. Click Save.
6. Switch to the Properties and Alerts tab.
7. Click the plus button to add a new Property.

The Properties for the LineAsset Thing Template are as follows:

Name | Base Type | Aspects | Data Change Type
State | STRING | Persistent and Logged | ALWAYS
SerialNumber | STRING | Persistent, Read Only, and Logged | NEVER
LastMaintenance | DATETIME | Persistent and Logged | VALUE
NextMaintenance | DATETIME | Persistent and Logged | VALUE
PowerConsumption | NUMBER, Min Value: 0 | Persistent and Logged | ALWAYS

Follow the next steps for all the properties shown in the template property table:
1. Click Add.
2. Enter the name of the property (ie, State).
3. Select the Base Type of the property from the dropdown.
4. Check the checkboxes for the property Aspects.
5. Select the Data Change Type from the dropdown.
6. Click Done when finished creating the property.

Your properties should match the configurations above.

Line Asset Event
1. Switch to the Events tab.
2. Click Add.
3. Enter the name of the Event (ie, Error).
4. Select AlertStatus as the Data Shape.
This DataShape will allow us to provide simple information including an alert type, the property name, and a status message.

5. Click Done.

Your Event should match the configuration above.

Line Asset Subscription
1. Switch to the Subscriptions tab.
2. Click Add.
3. Check the Enabled checkbox.
4. Switch to the Inputs tab.
5. Select the name of the Event (ie, Error).
6. Click Done.

Your Subscription should match the configuration above.

Challenge Yourself

We have left the Subscription code empty. Think of a way to handle Error Events coming from your line asset and implement it in this section. One possible approach is sketched after this guide.

Click here to view Part 2 of this guide.
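For reference, here is one minimal way the Error Subscription body could be filled in, assuming you simply want to record the problem and flag the asset's State property. This is only a sketch of one possible answer to the challenge, not part of the downloadable example, and the AlertStatus field names used below (alertType, message) are assumptions based on the Data Shape description above.

// Error event Subscription on the LineAsset Thing Template (sketch)
// eventData is the INFOTABLE carrying the Error event's AlertStatus payload
var alertRow = eventData.rows[0];

// Record the error for troubleshooting (field names are assumed)
logger.warn("Error event on " + me.name + " [" + alertRow.alertType + "]: " + alertRow.message);

// Reflect the problem on the asset itself
me.State = "Error: " + alertRow.message;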
Data Model Implementation Guide Part 3

Step 7: Unique Components Thing Templates

All of the shared component groups have been created. The next stage is creating the unique component group of Thing Templates. Each of the sections below covers one Thing Template, how the final property configuration should look, and any other aspects that should be added. The breakdown for the unique component group Thing Templates is as follows:

Robotic Arm Properties

The properties for the RoboticArm Thing Template are as follows:
Name | Base Type | Aspects | Data Change Type
TimeSincePickup | NUMBER, Min Value: 0 | Persistent and Logged | ALWAYS
Axis1 | STRING | Persistent and Logged | VALUE
Axis2 | STRING | Persistent and Logged | VALUE
Axis3 | STRING | Persistent and Logged | VALUE
ClampPressure | NUMBER, Min Value: 0 | Persistent and Logged | ALWAYS
ClampStatus | STRING | Persistent and Logged | ALWAYS

Your properties should match the configurations above.

Pneumatic Gate Properties

The properties for the PneumaticGate Thing Template are as follows:
Name | Base Type | Aspects | Data Change Type
GateStatus | STRING | Persistent and Logged | ALWAYS

Your properties should match the configurations above.

Conveyor Belt Properties

The properties for the ConveyorBelt Thing Template are as follows:
Name | Base Type | Aspects | Data Change Type
BeltSpeed | INTEGER, Min Value: 0 | Persistent and Logged | ALWAYS
BeltTemp | INTEGER, Min Value: 0 | Persistent and Logged | ALWAYS
BeltRPM | INTEGER, Min Value: 0 | Persistent and Logged | ALWAYS

Your properties should match the configurations above.

Quality Control Camera

Properties

The properties for the QualityControlCamera Thing Template are as follows:
Name | Base Type | Aspects | Data Change Type
QualityReading | INTEGER, Min Value: 0 | Persistent and Logged | ALWAYS
QualitySettings | STRING | Persistent and Logged | ALWAYS
CurrentImage | IMAGE | Persistent and Logged | ALWAYS

Your properties should match the configurations above.

Event
1. Create a new Event named BadQuality.
2. Select AlertStatus as the Data Shape.

Your Event should match the configuration above.

Step 8: Data Tables and Data Shapes

For the data model we created, an individual DataTable is best suited for each of the products, production orders, and maintenance requests. Utilizing DataTables allows us to store and track all of these items within our application. In order to have DataTables, we need DataShapes to create the schema that each DataTable will follow. This database creation aspect can be considered part of the data model design or part of the database design. Nevertheless, the decision of whether to create DataTables comes down to whether you need real-time information or static information. Products, production orders, and maintenance requests can be considered static data. Tracking the location of a moving truck is a need for real-time data. This scenario calls for using DataTables, but a real-time application will often have places where Streams and ValueStreams are utilized (DataShapes are also needed for Streams and ValueStreams).

NOTE: The DataShapes (schemas) shown below are for a simplified example. There are different ways you can create your database setup based on your own needs and wants.
DataTable Name | DataShape | Purpose
MaintenanceRequestDataTable | MaintenanceRequest | Stores information about all maintenance requests created
ProductDataTable | ProductDataShape | Stores information about the product line
ProductionOrderDataTable | ProductionOrderDataShape | Stores all information about production orders that have been placed

Maintenance Requests DataShape

The maintenance request DataShape needs to be trackable (unique) and contain helpful information for completing the request. The DataShape fields are as follows:
Name | Base Type | Additional Info | Primary Key
ID | STRING | NONE | YES
Title | STRING | NONE | NO
Request | STRING | NONE | NO
CompletionDate | DATETIME | NONE | NO

Unless you've decided to change things around, your DataShape fields should match the configurations above.

Products DataShape

The product DataShape needs to be trackable (unique) and contain information about the product. The DataShape fields are as follows:
Name | Base Type | Additional Info | Primary Key
ProductId | STRING | NONE | YES
Product | STRING | NONE | NO
Description | STRING | NONE | NO
Cost | NUMBER | Minimum: 0 | NO

Unless you've decided to change things around, your DataShape fields should match the configurations above.

Production Order DataShape

The production order DataShape needs to be trackable (unique), contain information that allows the operator and manager to know where it is in production, and contain information that helps with decision making. The DataShape fields are as follows:
Name | Base Type | Additional Info | Primary Key
OrderId | STRING | NONE | YES
Product | InfoTable: DataShape: ProductDataShape | NONE | NO
ProductionStage | STRING | NONE | NO
OrderDate | DATETIME | NONE | NO
DueDate | DATETIME | NONE | NO

Unless you've decided to change things around, your DataShape fields should match the configurations above.

Step 9: SystemConnections Implementation

We have created the Thing Templates and Thing Shapes that will be utilized within our data model for creating instances (Things). Before we finish the build-out of our data model, let's create the Services that will be necessary for the MaintenanceSystem and ProductionOrderSystem Things.

This guide will not cover the JavaScript and business logic aspect of creating an application. When you have completed the sections below, see the Summary page for how to create that level of definition. A sketch of one such service body is shown at the end of this guide.

Maintenance System

This is the system managed by the maintenance user and geared towards their needs.

Properties

The properties for the MaintenanceSystem Thing are as follows:
Name | Base Type | Aspects | Data Change Type
MaintEngineerCredentials | PASSWORD | Persistent | VALUE

Your properties should match the configurations above.

Services

The Services for the MaintenanceSystem Thing are as follows:
Service Name | Parameters | Output Base Type | Purpose
GetAllMaintenanceRequests | NONE | InfoTable: MaintenanceRequest | Get all of the maintenance requests filed, for the maintenance user
GetFilteredMaintenanceRequests | String: TitleFilter | InfoTable: MaintenanceRequest | Get a filtered version of all maintenance requests filed, for the maintenance user
UpdateMaintenanceRequests | String: RequestTitle | NOTHING | Update a maintenance request already in the system

Use the same method for creating Services that was provided in earlier sections. Your Services list should match the configurations above.

Production Order System

This is the system utilized by the operator and product manager users and geared towards their needs.
Services

The Services for the ProductionOrderSystem Thing are as follows:
Service Name | Parameters | Output Base Type
AssignProductionOrders | String: Operator, String: ProductOrder | NOTHING
CreateProductionOrders | String: OrderNumber, String: Product, DATETIME: DueDate | NOTHING
DeleteProductionOrders | String: ProductOrder | NOTHING
GetFilteredProductionOrders | String: ProductOrder | InfoTable: ProductionOrder
GetProductionLineList | NONE | InfoTable: ProductDataShape
GetUnfilteredProductionOrders | NONE | InfoTable: ProductionOrder
MarkSelfOperator | NONE | BOOLEAN
UpdateProductionOrdersOP | String: ProductOrder, String: UpdatedInformation | NOTHING
UpdateProductionOrdersPM | String: ProductOrder, String: UpdatedInformation | NOTHING

Use the same method for creating Services that was provided in earlier sections. Your Services list should match the configurations above.

Challenge Yourself

Complete the implementation of the data model shown below by creating the Thing instances of the Thing Templates we have created. When finished, add more to the data model. Some ideas are below:
1. Add users and permissions
2. Add Mashups to view maintenance requests, products, and production orders
3. Add business logic to the data model

Step 10: Next Steps

Congratulations! You've successfully completed the Data Model Implementation Guide. This guide has given you the basic tools to:
- Create Things, Thing Templates, and Thing Shapes
- Add Events and Subscriptions

The next guide in the Design and Implement Data Models to Enable Predictive Analytics learning path is Create Custom Business Logic.
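As mentioned in Step 9, the service bodies themselves are out of scope for this guide, but here is a minimal sketch of what CreateProductionOrders could look like, using the ProductionOrderDataTable and the DataShapes defined in Step 8. The initial ProductionStage value and the placeholder product fields are illustrative assumptions.

// CreateProductionOrders (sketch)
// Inputs: OrderNumber (STRING), Product (STRING), DueDate (DATETIME)

// Build a single-row infotable matching ProductionOrderDataShape
var values = DataShapes["ProductionOrderDataShape"].CreateValues();

// The Product field is itself an infotable of ProductDataShape rows
var productInfo = DataShapes["ProductDataShape"].CreateValues();
productInfo.AddRow({ ProductId: Product, Product: Product, Description: "", Cost: 0 });

values.AddRow({
    OrderId: OrderNumber,
    Product: productInfo,
    ProductionStage: "Created",   // assumed initial stage
    OrderDate: new Date(),
    DueDate: DueDate
});

// Store the new order in the DataTable
Things["ProductionOrderDataTable"].AddDataTableEntry({
    values: values,
    sourceType: "Service",
    source: me.name
});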
Data Model Implementation Guide Part 2

Step 4: SystemConnector Thing Template

After grouping our second set of common functionality and information, we came up with the list below for the second Thing Template to create: SystemConnector, with 3 Properties. The breakdown for the SystemConnector Thing Template is as follows:

Follow the instructions below to create this entity and get the implementation phase of your development cycle going.

System Connector Properties

Let's jump right in.
1. In the ThingWorx Composer, click + New at the top of the screen.
2. Select Thing Template in the dropdown.
3. In the name field, enter SystemConnector and select a Project (ie, PTCDefaultProject).
4. For the Base Thing Template field, select GenericThing.
5. Click Save.
6. Switch to the Properties and Alerts tab.
7. Click the plus button to add a new Property.

The Properties for the SystemConnector Thing Template are as follows:
Name | Base Type | Aspects | Data Change Type
EndPointConfig | STRING | Persistent and Logged | VALUE
OperatorCredentials | PASSWORD | Persistent | VALUE
ProdManagerCredentials | PASSWORD | Persistent | VALUE

Follow the next steps for all the Properties shown in the template property table:
1. Click Add.
2. Enter the name of the property (ie, EndPointConfig).
3. Select the Base Type of the property from the dropdown.
4. Check the checkboxes for the property Aspects.
5. Select the Data Change Type from the dropdown.
6. Click Done when finished creating the property.

Your Properties should match the configurations above.

Step 5: HazardousAsset Thing Template

After another round of prioritizing and grouping common functionality and information, we came up with the third Thing Template to create: HazardousAsset. It is a child of the LineAsset Thing Template with one added Service. The breakdown for the HazardousAsset Thing Template is as follows:

Hazardous Asset Service
1. In the ThingWorx Composer, click + New at the top of the screen.
2. Select Thing Template in the dropdown.
3. For the Base Thing Template field, select LineAsset and select a Project (ie, PTCDefaultProject).
4. In the name field, enter HazardousAsset.
5. Click Save, then Edit, to store all changes now.
6. Switch to the Services tab.
7. Click Add.
8. Enter EmergencyShutdown as the name of the service.
9. Switch to the Me/Entities tab.
10. Expand Properties.
11. Click the arrow next to the State property.
12. Modify the generated code to match the following:

me.State = "Danger!! Emergency Shutdown";

Your first Service is complete!
13. Click Done.
14. Click Save to save your changes.

Your Service should match the configuration above.

Step 6: InventoryManager Thing Shape

This time around, we will create our first Thing Shape: InventoryManager, with 1 Property. The breakdown for the InventoryManager Thing Shape is as follows:

Follow the instructions below to create this entity and get the implementation phase of your development cycle going.

InventoryManager Properties

The properties for the InventoryManager Thing Shape are as follows:
Name | Base Type | Aspects | Data Change Type
ProductCount | INTEGER, Min Value: 0 | Persistent and Logged | ALWAYS

1. In the ThingWorx Composer, click + New at the top of the screen.
2. Select Thing Shape in the dropdown.
3. In the name field, enter InventoryManager and select a Project (ie, PTCDefaultProject).
4. Click Save, then Edit, to store all changes now.
5. Switch to the Properties tab.
6. Click Add.
7. Enter ProductCount as the name of the property.
8. Select the Base Type of the property from the dropdown (ie, INTEGER).
9. Check the checkboxes for the property Aspects.
10. Select the Data Change Type from the dropdown.
11. Click Done when finished creating the property.

Your Properties should match the configurations above.

Add Thing Shape to Template

We can see that there is some overlap in the components of our HazardousAsset and LineAsset Thing Templates. In particular, both want information about the product count. Because HazardousAsset inherits from LineAsset, we only need to change LineAsset. Follow the steps below to make this change:
1. Open the LineAsset Thing Template.
2. In the Implemented Shapes field, enter and select InventoryManager.
3. Save your changes.

Click here to view Part 3 of this guide.
In ThingWorx Analytics, you have the possibility to use an external model for scoring. In this written tutorial, I would like to provide an overview of how you can use a model developed in Python with the scikit-learn library in ThingWorx Analytics. The provided attachment contains an archive with the following files:
- iris_data.csv: a dataset for pattern recognition that has a categorical goal. You can click here to read more about this dataset.
- TestRFToPmml.ipynb: a Jupyter notebook file with the source code for the Python model as well as the steps to export it to PMML.
- RF_Iris.pmml: the PMML file with the model, which you can directly upload into Analytics without going through the steps of training the model in Python.

The tutorial assumes you already have some knowledge of ThingWorx and ThingWorx Analytics. Also, if you plan to run the Python code and train the model yourself, you need to have Jupyter Notebook installed (I used the one from the Anaconda distribution). For demonstration purposes, I have created a very simple random forest model in Python. To convert the model to PMML, I have used the sklearn2pmml library. Because ThingWorx Analytics supports PMML format 4.3, you need to install sklearn2pmml version 0.56.2 (the highest version that supports PMML 4.3). To read more about this library, please click here. Furthermore, to use the model with this older version of sklearn2pmml, I have installed scikit-learn version 0.23.2. You will find the commands to install the two libraries in the first two cells of the notebook.

Code Walkthrough

The first step is to import the required libraries (please note that the pandas library is also required to transform the .csv into a DataFrame object):

import pandas
from sklearn.ensemble import RandomForestClassifier
from sklearn2pmml import sklearn2pmml
from sklearn.model_selection import GridSearchCV
from sklearn2pmml.pipeline import PMMLPipeline

After importing the required libraries, we convert iris_data.csv to a pandas DataFrame and then create the features (X) as well as the goal (y) vectors:

iris_df = pandas.read_csv("iris_data.csv")
iris_X = iris_df[iris_df.columns.difference(["class"])]
iris_y = iris_df["class"]

To best tune the random forest, we will use GridSearchCV and cross-validation. We want to test which parameters have the best validation metrics, and for this we will use a utility function that prints the results:

def print_results(results):
    print('BEST PARAMS: {}\n'.format(results.best_params_))
    means = results.cv_results_['mean_test_score']
    stds = results.cv_results_['std_test_score']
    for mean, std, params in zip(means, stds, results.cv_results_['params']):
        print('{} (+/-{}) for {}'.format(round(mean, 3), round(std * 2, 3), params))

We create the random forest model and train it with different numbers of estimators and maximum depths. We then call the previous function to compare the results for the different parameters:

rf = RandomForestClassifier()
parameters = {
    'n_estimators': [5, 50, 250],
    'max_depth': [2, 4, 8, 16, 32, None]
}
cv = GridSearchCV(rf, parameters, cv=5)
cv.fit(iris_X, iris_y)
print_results(cv)

To convert the model to a PMML file, we need to create a PMMLPipeline object, into which we pass the RandomForestClassifier with the tuning parameters identified in the previous step (please note that in your case, the parameters may be different than in my example).
You can check the sklearn2pmml documentation to see other examples of creating this PMMLPipeline object:

pipeline = PMMLPipeline([
    ("classifier", RandomForestClassifier(max_depth=4, n_estimators=5))
])
pipeline.fit(iris_X, iris_y)

Then we perform the export:

sklearn2pmml(pipeline, "RF_Iris.pmml", with_repr = True)

The model has now been exported as a PMML file in the same folder as the Jupyter notebook file, and we can upload it to ThingWorx Analytics.

Uploading and Exploring the PMML in Analytics

To upload and use the model for scoring, there are two steps you need to perform:
1. First, the PMML file needs to be uploaded to a ThingWorx File Repository.
2. Then, go to your Analytics Results Thing (the name should be YourAnalyticsGateway_ResultsThing) and execute the service UploadModelFromRepository. Here you will need to specify the repository name and path for your PMML file, as well as a name for your model (and optionally a description).

If everything goes well, the result of the service will be an id. Save this id, because you will use it later on. You can verify the status of this model, and whether it is ready to use, by executing the service GetDetails.

Assuming you want to use the PMML for scoring but you were not the one who developed the model, you may not know what the expected inputs and output of the model are. There are two services that can help you with this:
- QueryInputFields – to verify the fields expected as input parameters for a scoring job
- QueryOutputFields – to verify the expected output of the model. The resultType input parameter can be either MODELS or CLUSTERS, depending on the type of model.

Using the PMML for Scoring

With all this information at hand, we are now ready to use this PMML for real-time scoring. In a Thing of your choice, define a service to test the scoring for the PMML we have just uploaded. Create a new service with an infotable as the output (don't add a datashape). The input data for scoring will be hardcoded in the service, but you can also add it as service input parameters and pass them in via a mashup or from another source. The script is as follows:

// Values: INFOTABLE dataShape: ""
let datasetRef = DataShapes["AnalyticsDatasetRef"].CreateValues();
// Values: INFOTABLE dataShape: ""
let data = DataShapes["IrisData"].CreateValues();
data.AddRow({
    sepal_length: 2.7,
    sepal_width: 3.1,
    petal_length: 2.1,
    petal_width: 0.4
});
datasetRef.AddRow({ data: data });
// predictiveScores: INFOTABLE dataShape: ""
let result = Things["AnalyticsServer_PredictionThing"].RealtimeScore({
    modelUri: "results:/models/" + "97471e07-137a-41bb-9f29-f43f107bf9ca", // replace with your own id
    datasetRef: datasetRef /* INFOTABLE */
});

Once you execute the service, the output should match what we would expect according to the output fields in the PMML model.

As you have seen, it is easy to use a model built in Python in ThingWorx Analytics. Please note that you may use it only for scoring, and the model will not appear in Analytics Builder since you created it on a different platform. If you have any questions about this brief written tutorial, let me know.
I recently had a customer who wanted to run services on ThingWorx from Power BI to retrieve existing operational data, and we were a bit stumped on how to pass the API key over in the headers, so I did a bit of Googling and pieced together the solution. It's not quite intuitive on the Power BI side, so I thought it would be helpful to share. If you have any other experience with integrating ThingWorx with Power BI, feel free to add a comment.

Prepare ThingWorx
1. Create an Application Key that has run-time execution access to the services you need.
2. Understand the inputs needed for the service you would like to call. Below are examples with no inputs, one input, multiple inputs, and an InfoTable input.

Power BI
Follow these steps in Power BI:
1. In Power BI, create a new blank query.
2. On the left, right-click on Query1 and go to the Advanced Editor.
3. Replace all of the body content with the following, replacing your API key, the appropriate endpoint, and the base URL as needed (this is an example with NO input parameters; examples with other parameters follow):

let
    appKey = "your-application-key-here",
    endpoint = "Things/YourThingNameHere/Services/YourServiceNameHere",
    baseUrl = "https://YourServerNameHere/Thingworx/",
    url = Text.Combine({baseUrl, endpoint}),
    body = "",
    request = Web.Contents(
        url,
        [
            Headers = [
                appKey = appKey,
                #"Content-Type" = "application/json",
                Accept = "application/json"
            ],
            Content = Text.ToBinary(body)
        ]
    ),
    Source = Json.Document(request)
in
    Source

4. Click "Done", and you'll see a warning about how to connect. Click the "Edit Credentials" button.
5. Leave it on Anonymous and click "Connect".
6. You should now see the return data coming from ThingWorx.

Note that I had a little trouble with this authentication initially and it saved the wrong method. To clear that out, go to the ribbon bar item "Data source settings", select the server, and clear it.

Other Examples

Here is an example of sending a single string parameter:

let
    appKey = "your-application-key-here",
    endpoint = "Things/YourThingNameHere/Services/YourServiceNameHere",
    baseUrl = "https://YourServerNameHere/Thingworx/",
    url = Text.Combine({baseUrl, endpoint}),
    body = "{""InputParameter"": ""InputValue""}",
    request = Web.Contents(
        url,
        [
            Headers = [
                appKey = appKey,
                #"Content-Type" = "application/json",
                Accept = "application/json"
            ],
            Content = Text.ToBinary(body)
        ]
    ),
    Source = Json.Document(request)
in
    Source

Here's an example of sending a string and an integer:

let
    appKey = "your-application-key-here",
    endpoint = "Things/YourThingNameHere/Services/YourServiceNameHere",
    baseUrl = "https://YourServerNameHere/Thingworx/",
    url = Text.Combine({baseUrl, endpoint}),
    body = "{""InputString"": ""Hello, world!"", ""InputNumber"" : 42}",
    request = Web.Contents(
        url,
        [
            Headers = [
                appKey = appKey,
                #"Content-Type" = "application/json",
                Accept = "application/json"
            ],
            Content = Text.ToBinary(body)
        ]
    ),
    Source = Json.Document(request)
in
    Source

Here is an example of sending an InfoTable. Note that you must supply the dataShape with fieldDefinitions. If you're using an existing Data Shape, you can get the JSON by using the service GetDataShapeMetadataAsJSON() that is on the data shape.
let
    appKey = "your-application-key-here",
    endpoint = "Things/YourThingNameHere/Services/YourServiceNameHere",
    baseUrl = "https://YourServerNameHere/Thingworx/",
    url = Text.Combine({baseUrl, endpoint}),
    body = "{""propertyNames"": { ""rows"": [ { ""name"": ""FirstEntityName"", ""description"": ""The first entity"" }, { ""name"": ""SecondEntityName"", ""description"": ""The second entity"" }], ""dataShape"": { ""fieldDefinitions"": { ""name"": { ""name"": ""name"", ""aspects"": { ""isPrimaryKey"": true }, ""description"": ""Entity name"", ""baseType"": ""STRING"", ""ordinal"": 0 }, ""description"": { ""name"": ""description"", ""aspects"": {}, ""description"": ""Entity description"", ""baseType"": ""STRING"", ""ordinal"": 0 } } } }}",
    request = Web.Contents(
        url,
        [
            Headers = [
                appKey = appKey,
                #"Content-Type" = "application/json",
                Accept = "application/json"
            ],
            Content = Text.ToBinary(body)
        ]
    ),
    Source = Json.Document(request)
in
    Source

If I find any more interesting ways to use Power BI with ThingWorx services, I'll add them on here.
Distributed Timer and Scheduler Execution in a ThingWorx High Availability (HA) Cluster
Written by Desheng Xu and edited by Mike Jasperson

Overview

Starting with the 9.0 release, ThingWorx supports an "active-active" high availability (or HA) configuration, with multiple nodes providing redundancy in the event of hardware failures as well as horizontal scalability for workloads that can be distributed across the cluster.

In this architecture, one of the ThingWorx nodes is elected as the "singleton" (or lead) node of the cluster. This node is responsible for managing the execution of all events triggered by timers or schedulers – they are not distributed across the cluster.

This design has proved challenging for some implementations, as it presents the potential for a ThingWorx application to generate an imbalanced workload if complex timers and schedulers are needed.

However, your ThingWorx applications can overcome this limitation and still use timers and schedulers to trigger workloads that will be distributed across the cluster. This article will demonstrate both how to reproduce this imbalanced workload scenario and the approach you can take to overcome it.

Demonstration Setup

For the purposes of this demonstration, a two-node ThingWorx cluster was used, similar to the deployment diagram below.

Demonstrating Event Workload on the Singleton Node

Imagine this simple scenario: you have a list of vendors, and you need to process some logic for one of them at random every few seconds.

First, we will create a timer in ThingWorx to trigger an event – in this example, every 5 seconds.

Next, we will create a helper utility that has a task that will randomly select one of the vendors and process some logic for it – in this case, we will simply log the selected vendor in the ThingWorx ScriptLog.

Finally, we will subscribe to the timer event and call the helper utility.

Now with that code in place, let's check where these services are being executed in the ScriptLog.

Look at the PlatformID column in the log... notice that the timer and the helper utility are always running on the same node – in this case Platform2, which is the current singleton node in the cluster.

As the complexity of your helper utility increases, you can imagine how the workload will become unbalanced, with the singleton node handling the bulk of this timer-driven workload in addition to the other workloads being spread across the cluster.

This workload can be distributed across multiple cluster nodes, but a little more effort is needed to make it happen.

Timers that Distribute Tasks Across Multiple ThingWorx HA Cluster Nodes

This time let's update our subscription code – using the PostJSON service from the ContentLoader entity to send the service requests to the cluster entry point instead of running them locally.
const headers = {
    "Content-Type": "application/json",
    "Accept": "application/json",
    "appKey": "INSERT-YOUR-APPKEY-HERE"
};
const url = "https://testcluster.edc.ptc.io/Thingworx/Things/DistributeTaskDemo_HelperThing/services/TimerBackend_Service";
let result = Resources["ContentLoaderFunctions"].PostJSON({
    proxyScheme: undefined /* STRING */,
    headers: headers /* JSON */,
    ignoreSSLErrors: undefined /* BOOLEAN */,
    useNTLM: undefined /* BOOLEAN */,
    workstation: undefined /* STRING */,
    useProxy: undefined /* BOOLEAN */,
    withCookies: undefined /* BOOLEAN */,
    proxyHost: undefined /* STRING */,
    url: url /* STRING */,
    content: {} /* JSON */,
    timeout: undefined /* NUMBER */,
    proxyPort: undefined /* INTEGER */,
    password: undefined /* STRING */,
    domain: undefined /* STRING */,
    username: undefined /* STRING */
});

Note that the URL used in this example – https://testcluster.edc.ptc.io/Thingworx – is the entry point of the ThingWorx cluster. Replace this value to match your cluster's entry point if you want to duplicate this in your own cluster.

Now, let's check the result again.

Notice that the helper utility TimerBackend_Service is now running on both cluster nodes, Platform1 and Platform2.

Is this Magic? No! What is Happening Here?

The timer or scheduler itself is still being executed on the singleton node, but now, instead of triggering the helper utility locally, the PostJSON service call from the subscription is being routed back to the cluster entry point – the load balancer. As a result, the request is routed (usually round-robin) to any available cluster node that is behind the load balancer and reporting as healthy.

Usually, the load balancer will be configured with cookie-based affinity: the load balancer will route a request to the node that has the same cookie value as the request. Since this PostJSON service call is a RESTful call, any cookie value associated with the response will not be attached to the next request. As a result, the cookie-based affinity will not impact the round-robin routing in this case.

Considerations for Using this Approach

Authentication: As illustrated in the demo, make sure to use an Application Key with an appropriate user assigned in the header. You could alternatively use username/password or a token to authenticate the request, but this could be less ideal from a security perspective.

App Deployment: The hostname in the URL must match the hostname of the cluster entry point. As the URL of your implementation is now part of your code, if you deploy this code from one ThingWorx instance to another, you will need to modify the hostname/port/protocol in the URL. Consider creating a variable in the helper utility which holds the hostname/port/protocol value, making it easier to modify during deployment (see the sketch after this article).

Firewall Rules: If your load balancer has firewall rules which limit traffic to specific known IP addresses, you will need to determine which IP addresses will be used when a service is invoked from each of the ThingWorx cluster nodes, and then configure the load balancer to allow traffic from each of these public IP addresses. Alternatively, you could configure an internal IP address endpoint for the load balancer and use the local /etc/hosts name resolution on each ThingWorx node to point to the internal load balancer IP, or register this internal IP in an internal DNS as the cluster entry point.
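A minimal sketch of the deployment consideration above: instead of hard-coding the cluster entry point, the subscription could read it from a string property and build the URL from it. The property name (ClusterEntryPoint) and the idea of holding it on the helper Thing are assumptions for illustration, not part of the original demo.

// Assumes a STRING property "ClusterEntryPoint" on this Thing,
// e.g. "https://testcluster.edc.ptc.io/Thingworx", set per environment
const baseUrl = me.ClusterEntryPoint;
const url = baseUrl + "/Things/DistributeTaskDemo_HelperThing/Services/TimerBackend_Service";

const headers = {
    "Content-Type": "application/json",
    "Accept": "application/json",
    "appKey": "INSERT-YOUR-APPKEY-HERE"
};

let result = Resources["ContentLoaderFunctions"].PostJSON({
    url: url,
    headers: headers,
    content: {}
});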
With ThingWorx, we can already use univariate anomaly alerts (on a single sensor value). However, in many situations the readings from an individual sensor may not tell you much about the overall issue, and a multivariate anomaly detector can be more useful. This post is intended to provide an overview of the Azure Anomaly Detector and how it can be integrated with ThingWorx. The attachment contains:
- A document with detailed instructions about the setup
- A .csv file with the multivariate timeseries dataset
- A .twx file with some entities that need to be imported into ThingWorx, as well as the CSVParser extension that needs to be installed
- A .zip file that will need to be uploaded to an Azure Blob Container at some point in the setup
Those who have been working with ThingWorx for many years will have noticed the work done around ingress stress testing and performance optimization. Adding InfluxDB as a time-series data persistence provider really helped level up these capabilities while simultaneously decreasing the overall resources required by the infrastructure. However, with this ease comes a hidden challenge: the query and data processing performance needed to work the data into something useful.

Often It's Too Much Data

In general, most customers that I work with want to collect far too much data – without knowing what it will be used for, or what processing will be required in order to make it usable and useful. This is a trap in general with how many people envision IoT projects, being told by infrastructure providers that cloud storage and compute resources are abundant and cheap and that they should collect as much data as possible. This buildup of data means that more effort needs to be spent working it into something useful (data engineering/feature extraction) and addressing common data issues (quality, gaps, precision, etc.). This might be fine for mature companies with large data analytics teams; however, this is a makeup that I've only seen in the largest of our customers. Some advice: figure out what you need and how you'll use it, and then collect that. Work on extracting value today rather than hoping that extra data collected now will provide some insights years from now.

Example - Problem Statement

You've got your Thing Model designed and edge devices connected. Now you've got data flowing in and being stored every 5 seconds in InfluxDB. Great progress! Now on to building the applications which cover the various use cases.

The raw data will most likely need to be processed, and potentially even significantly transformed, into other information in order to make it useful. Turning a "powered on and running" BOOLEAN into an "hour meter" INTEGER is a simple example. Then you may need to provide a report showing equipment run-time hours by day over a month. The maintenance team may also have asked to look for usage patterns which lead to breakdowns, requiring other data points to be extracted from the initial one, like the number of daily starts, average daily run time, and average time between restarts.

The problem here is that unless you have prepared these new data points and stored them as well (say in a Stream), you are going to have to build these data sets on the fly, and that can be time and resource intensive and not give you the response time expected. As you can imagine, repeatedly querying and processing large volumes of unchanging raw data is going to have resource and time implications – so this is why data collection and data use need to be thought about separately.

Data Engineering

In the above examples, the key is creating new data points which are calculated progressively throughout normal operation. This not only makes the information that you want available when you need it – in the right format – but it also significantly reduces resource requirements by avoiding constant reprocessing of raw data. It also helps with managing data purging, because as you create and store usable insights, you can eventually just archive away your old raw data streams.

Direct Database Queries vs. ThingWorx Data Services

Despite the above being a rule of thumb, sometimes a simple, well structured database query can get you exactly what you need and do so quite quickly.
Direct Database Queries vs. ThingWorx Data Services
Despite the above being a good rule of thumb, sometimes a simple, well-structured database query can get you exactly what you need, and do so quite quickly. This is especially true for InfluxDB when working with extremely large time-series datasets. The challenge here is that ThingWorx persistence providers abstract away the complexity of writing one's own database queries, so we can't easily get at the database's raw power and are forced to query back more data than needed and work it into a usable format in memory (which is not fast).

Leveraging the InfluxDB API using the ContentLoader Technique
As InfluxDB's API is 100% REST, we can access it using the built-in ThingWorx ContentLoader services (a minimal request sketch follows the links below). Check out this demonstration and explanation video where I talk about how to interact directly with InfluxDB in order to crush massive time-series data and get back much more usable and manageable data sets. It is important to note that you should use a read-only database user for this; you should never modify the ThingWorx databases, as that creates untested scenarios which may lead to data corruption.

Optimizing ThingWorx query performance with the InfluxDB REST API - YouTube
InfluxToolBox ThingWorx demo project (by T. Wobben)
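Here is a minimal sketch of the technique for an InfluxDB 2.x instance: the Flux query performs the aggregation server-side, so only the downsampled rows come back to ThingWorx. The URL, organization, bucket, measurement and field names, and the readOnlyToken property are placeholders/assumptions; the /api/v2/query endpoint, headers, and annotated-CSV response are standard InfluxDB 2.x behaviour.

// Build a Flux query that aggregates 30 days of raw 5-second data into daily means
var flux = 'from(bucket: "thingworx")' +
    ' |> range(start: -30d)' +
    ' |> filter(fn: (r) => r._measurement == "MyEquipmentStream" and r._field == "Power")' +
    ' |> aggregateWindow(every: 1d, fn: mean)';

// result: STRING containing the annotated CSV returned by InfluxDB
var result = Resources["ContentLoaderFunctions"].PostText({
    url: "https://influx.example.com/api/v2/query?org=MyOrg",   // placeholder URL and org
    content: flux,
    contentType: "application/vnd.flux",
    headers: {
        "Authorization": "Token " + me.readOnlyToken,            // read-only API token stored on the Thing (assumed property)
        "Accept": "application/csv"
    }
});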
View full tip
Analytics projects typically involve using the Analytics API rather than Analytics Builder to accomplish different tasks. The attached documentation provides examples of code snippets that can be used to automate the most common analytics tasks on a project, such as creating a dataset, training a model, real-time scoring (predictive and prescriptive), retrieving the validation metrics for a model, appending additional data to a dataset, and retraining the model. The documentation also provides examples that are specific to time series datasets. The attached .zip file contains both the document as well as some entities that you need to import in ThingWorx to access the services provided in the examples.
View full tip
I've had a lot of questions over the years working with Azure IoT, Kepware, and ThingWorx that I really struggled getting answers to. I was always grateful when someone took the time to help me understand, and now it is time to repay the favour.   People ask me many things about Azure (in a ThingWorx context), and one of the common ones has been about MQTT communications from Kepware to ThingWorx using IoT Hub. Recently the topic has come up again as more and more of the ThingWorx expert community start to work with Azure IoT. Today, I took the time to build, test, validate, and share an approach and utilities to do this in cases where the Azure Industrial IoT OPC UA integration is overkill or simply a step later in the project plan. Enjoy!   End to end Integration of Kepware to ThingWorx using MQTT over Azure IoT (YouTube 45-minute deep-dive)   ThingWorx entities for import (ThingWorx 9.0)   This approach can be quite good for a simple demo if you have a Kepware Integrator or Kepware Enterprise license, but the use of IoT Gateway for many servers and tags can be quite costly.   Those looking to leverage Azure IoT Hub for MQTT integration to ThingWorx would likely also find this recorded session and shared utilities quite helpful.   Cheers, Greg
View full tip
For a recent project, I needed to find all of the children of a particular template type in a Network Hierarchy... so I put together a little script that I thought I'd share. Maybe this will be useful to others as well.

In my situation, this script lived in the Location template. This was useful so that I could find all the Sensor Things under any particular node, no matter how deep they are.

For example, given a network like this:

Location 1
    Sensor 1
    Location 1A
        Sensor 2
        Sensor 3
        Location 1AA
            Sensor 4
    Location 1B
        Sensor 5

If you run this service on Location 1, you'll get an InfoTable with these Things: Sensor 1, Sensor 2, Sensor 3, Sensor 4, Sensor 5. From Location 1A: Sensor 2, Sensor 3, Sensor 4. From Location 1AA: Sensor 4. From Location 1B: Sensor 5.

For this service, these are the inputs/outputs:
Inputs: none
Output: InfoTable of type NetworkConnection

// create the result InfoTable using the NetworkConnection DataShape
// CreateInfoTableFromDataShape(infoTableName:STRING("InfoTable"), dataShapeName:STRING):INFOTABLE(NetworkConnection)
let result = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape({
    infoTableName : "InfoTable",
    dataShapeName : "NetworkConnection"
});

// since the hierarchy could contain locations or sensors, recursively loop down to get all the sensors
function findChildrenSensors(thingName) {
    let childrenThings = Networks["Hierarchy_NW"].GetChildConnections({
        name: thingName /* STRING */
    });
    for each (var row in childrenThings.rows) {
        // row.to has the name of the child Thing
        if (Things[row.to].IsDerivedFromTemplate({thingTemplateName: "Location_TT"})) {
            findChildrenSensors(row.to);
        } else if (Things[row.to].IsDerivedFromTemplate({thingTemplateName: "Sensor_TT"})) {
            result.AddRow(row);
        }
    }
}

findChildrenSensors(me.name);
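If the script above is saved as a service on the Location template, say GetDescendantSensors (a hypothetical name), any Location Thing can then call it directly from another service or a Mashup data service:

// hypothetical usage, assuming the script was saved as "GetDescendantSensors" on Location_TT
var sensors = Things["Location 1"].GetDescendantSensors();
logger.info("Found " + sensors.rows.length + " sensors under Location 1");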
View full tip
Applicable Releases: ThingWorx Navigate 1.6.0 to 8.5.0     Description:     How to use PingFederate script: Prerequisites Configuration Run the script Generated artifacts Live Demo         Associated documentation is available in PTC Single Sign On Architecture and Configuration Overview guide: PTC Single Sign-on Architecture and Configuration Overview  
View full tip
Applicable Releases: ThingWorx Platform 7.0 to 8.5   Description:   Introduction to ThingWorx Extension Development, with the following topics: What is an Extension Why build an Extension Prerequisites Installing the Eclipse plugin and features Creating entities with the plugin and including exported Entities in an Extension Project Upgrading or Updating an Existing Extension in ThingWorx Building with Gradle and Ant       ThingWorx Extension Development Guide
View full tip
Applicable Releases: ThingWorx Platform 7.0 to 8.5   Description:   Concepts and methodology for designing a data model, using a use case as an example. The following topics are covered: Real-world Product Example ThingWorx Terminology and Concepts Formulate an Implementation Strategy       Related Success Service
View full tip
This small tutorial enables you to manage payload decoding for Adeunis devices within ThingWorx Composer in less than 10 minutes. Adeunis devices communicate on LPWAN networks (LoRaWAN / Sigfox), covering sectors such as smart building, smart industry and smart city. Encoding is also possible, but it will be covered in another article.

1. Get the Adeunis Codec
Adeunis provides a codec enabling payload encoding and decoding. Download here the resource file containing the codec. Unzip the file and edit "script.txt" with your favorite text editor. Copy all text contained in the file.

2. Create the AdeunisCodec Thing
Create a Thing called "AdeunisCodec" based on the GenericThing Template.

3. Create a service called "Decode"
Create a Decode service with the following setup:
Inputs: type (String), payload (String)
Output as JSON
Paste the previously copied "script.txt" content
Save

4. Correct a couple of warnings
Remove all "var codec;" occurrences except the first one at line 1191.
Remove the semicolons at lines 985, 1088, 1096 and 1172.

5. Remove the following section
The codec relies on overriding functions on built-in JavaScript prototypes, which is not supported by the ThingWorx Rhino JavaScript engine. See the following documentation section, here. Remove from line 1109 to 1157. The following prototype overrides will be removed:
Uint8Array.prototype.readUInt16BE
Uint8Array.prototype.readInt16BE
Uint8Array.prototype.readUInt8
Uint8Array.prototype.readUInt32BE
Uint8Array.prototype.writeUInt16BE
Uint8Array.prototype.writeUInt8
Uint8Array.prototype.writeUInt32BE

6. Add new implementations of the removed functions
The functions are adapted from a JavaScript framework which contains resources that help with handling binary data, here. Insert the following section at the top of the "Decode" script. (Note: only the read* helpers are exercised when decoding; the write* helpers, which reference Buffer and checkInt, would only matter for encoding, which is out of scope here.)

function readInt16BE(payload, offset) {
    checkOffset(offset, 2, payload.length);
    var val = payload[offset + 1] | (payload[offset] << 8);
    return (val & 0x8000) ? val | 0xFFFF0000 : val;
}

function readUInt32BE(payload, offset) {
    checkOffset(offset, 4, payload.length);
    return (payload[offset] * 0x1000000) + ((payload[offset + 1] << 16) | (payload[offset + 2] << 8) | payload[offset + 3]);
}

function readUInt16BE(payload, offset) {
    checkOffset(offset, 2, payload.length);
    return (payload[offset] << 8) | payload[offset + 1];
}

function readUInt8(payload, offset) {
    checkOffset(offset, 1, payload.length);
    return payload[offset];
}

function writeUInt16BE(payload, value, offset) {
    value = +value;
    offset = offset >>> 0;
    checkInt(payload, value, offset, 2, 0xffff, 0);
    if (Buffer.TYPED_ARRAY_SUPPORT) {
        payload[offset] = (value >>> 8);
        payload[offset + 1] = value;
    } else objectWriteUInt16(payload, value, offset, false);
    return offset + 2;
}

function writeUInt8(payload, value, offset) {
    value = +value;
    offset = offset >>> 0;
    checkInt(payload, value, offset, 1, 0xff, 0);
    if (!Buffer.TYPED_ARRAY_SUPPORT) value = Math.floor(value);
    payload[offset] = value;
    return offset + 1;
}

function writeUInt32BE(payload, value, offset) {
    value = +value;
    offset = offset >>> 0;
    checkInt(payload, value, offset, 4, 0xffffffff, 0);
    if (Buffer.TYPED_ARRAY_SUPPORT) {
        payload[offset] = (value >>> 24);
        payload[offset + 1] = (value >>> 16);
        payload[offset + 2] = (value >>> 8);
        payload[offset + 3] = value;
    } else objectWriteUInt32(payload, value, offset, false);
    return offset + 4;
}

function objectWriteUInt16(buf, value, offset, littleEndian) {
    if (value < 0) value = 0xffff + value + 1;
    for (var i = 0, j = Math.min(buf.length - offset, 2); i < j; i++) {
        buf[offset + i] = (value & (0xff << (8 * (littleEndian ? i : 1 - i)))) >>> (littleEndian ? i : 1 - i) * 8;
    }
}

function objectWriteUInt32(buf, value, offset, littleEndian) {
    if (value < 0) value = 0xffffffff + value + 1;
    for (var i = 0, j = Math.min(buf.length - offset, 4); i < j; i++) {
        buf[offset + i] = (value >>> (littleEndian ? i : 3 - i) * 8) & 0xff;
    }
}

7. Add the following function to support the previously inserted functions

function checkOffset(offset, ext, length) {
    if ((offset % 1) !== 0 || offset < 0) throw new Error('offset is not uint');
    if (offset + ext > length) throw new Error('Trying to access beyond buffer length');
}

8. Add the following function for casting a String to bytes

function splitInBytes(data) {
    var bytes = [];
    var bytesAsString = '';
    for (var i = 0, j = 0; i < data.length; i += 2, j++) {
        bytes[j] = parseInt(data.substr(i, 2), 16);
        bytesAsString += bytes[j] + ' ';
    }
    return bytes;
}

9. Remap function calls to the newly inserted functions
Use the built-in script editor Replace feature within the service script for each of the following:

Search: payload.readInt16BE(      Replace by: readInt16BE(payload,
Search: payload.readUInt32BE(     Replace by: readUInt32BE(payload,
Search: payload.readUInt16BE(     Replace by: readUInt16BE(payload,
Search: payload.readUInt8(        Replace by: readUInt8(payload,
Search: payload.writeUInt16BE(    Replace by: writeUInt16BE(payload,
Search: payload.writeUInt8(       Replace by: writeUInt8(payload,
Search: payload.writeUInt32BE(    Replace by: writeUInt32BE(payload,

10. At the bottom, update the following
Replace: decoder.setDeviceType("temp");
By: decoder.setDeviceType(type);

11. Insert the following at the bottom
var result = Decoder(splitInBytes(payload), 0);

12. Save the Service and the Thing

13.
Create a test service for the Adeunis Temp device
Within the "AdeunisCodec" Thing, create a new service called "test_decode_temp" with Output as String and insert the following code:

// result: STRING
var result = me.Decode({type: "temp" /* STRING */, payload: "43400100F40200F1" /* STRING */});

Save & Execute. The expected result is:

{"temperatures":[{"unit":"°C","name":"probe 1","id":0,"value":24.4},{"unit":"°C","name":"probe 2","id":0,"value":24.1}],"type":"0x43 Temperature data","status":{"frameCounter":2,"lowBattery":false,"hardwareError":false,"probe1Alarm":false,"configurationDone":false,"probe2Alarm":false}}

Please visit the Decoder test section of the Adeunis website to see the reference for the Temp device test case, here.

Spoiler (Highlight to read)
The resources have been tested on ThingWorx 8.5 and with the latest and greatest ThingWorx 9... If you are more interested in the result than in the implementation process, then import the attached "Things_AdeunisCodec.xml" 😉
View full tip
Recently I needed to be able to parse and handle XML data natively inside of a ThingWorx script, and this XML file happened to have a SOAP namespace as well. I learned a few things along the way that I couldn't find a lot of documentation on, so I am sharing them here.

Lessons Learned
The biggest lesson I learned is that ThingWorx uses "E4X" XML handling. This is a language that Mozilla created as a way for JavaScript to handle XML (the full name is "ECMAScript for XML"). While Mozilla deprecated the language in 2014, Rhino, the JavaScript engine that ThingWorx uses on the server, still supports it, so ThingWorx does too. Here's a tutorial on E4X - https://developer.mozilla.org/en-US/docs/Archive/Web/E4X_tutorial
The built-in linter in ThingWorx will complain about E4X syntax, but it still works. I learned how to get to the data I wanted and loop through it to create an InfoTable. Hopefully this is what you want to do as well.

Selecting an Element and Iterating
My data came inside of a SOAP envelope, which was meaningless information to me. I wanted to get down a few layers. Here's a sample of my data that has made-up information in place of the customer's original data:

<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" headers="">
    <SOAP-ENV:Body>
        <get_part_schResponse xmlns="urn:schemas-iwaysoftware-com:iwse">
            <get_part_schResult>
                <get_part_schRow>
                    <PART_NO>123456</PART_NO>
                    <ORD_PROC_DIV_CD>E</ORD_PROC_DIV_CD>
                    <MFG_DIV_CD>E</MFG_DIV_CD>
                    <SCHED_DT>2020-01-01</SCHED_DT>
                </get_part_schRow>
                <get_part_schRow>
                    <PART_NO>789456</PART_NO>
                    <ORD_PROC_DIV_CD>E</ORD_PROC_DIV_CD>
                    <MFG_DIV_CD>E</MFG_DIV_CD>
                    <SCHED_DT>2020-01-01</SCHED_DT>
                </get_part_schRow>
            </get_part_schResult>
        </get_part_schResponse>
    </SOAP-ENV:Body>
</SOAP-ENV:Envelope>

To get to the schRow data, I need to get past SOAP and into a few layers of XML. To do that, I make a new variable and use the E4X selections to get there:

var data = resultXML.*::Body.*::get_part_schResponse.*::get_part_schResult.*;

Note a few things:
resultXML is a variable in the service that contains the XML data.
I skipped the Envelope tag since that's the root.
The .* syntax does not mean "all the following", it means "all namespaces". You can define and specify the namespaces instead of using .*, but I didn't find value in that. I found some sample code that theoretically should work on a VMware forum: https://communities.vmware.com/thread/592000.

This gives me schRow as an XML List that I can iterate through. You can see what you have at this point by converting the data to a String and outputting it (this reuses the name result only for a quick debug print; in the actual service, result should be an InfoTable created from your DataShape before the loop below - see the consolidated sketch at the end of this post):

var result = String(data);

Now that I am down to the schRow data, I can use a for loop to add to an InfoTable:

for each (var row in data) {
    result.AddRow({
        PartNumber: row.*::PART_NO,
        OrderProcessingDivCD: row.*::ORD_PROC_DIV_CD,
        ManufacturingDivCD: row.*::MFG_DIV_CD,
        ScheduledDate: row.*::SCHED_DT
    });
}

Shoo! That's it! Data into an InfoTable! Next time, I'll ask for a JSON API. 😊
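For convenience, here is a consolidated sketch of the whole approach in one service body. The input name soapText, the DataShape name PartSchedule and its field names are hypothetical placeholders; the E4X navigation and the InfoTable functions are the same ones discussed above. One quirk worth noting: the E4X XML() constructor rejects the <?xml ... ?> declaration, so it is stripped first.

// soapText: STRING service input containing the SOAP response (hypothetical input name)
// E4X cannot parse a document that still has the XML declaration, so remove it first
var resultXML = new XML(soapText.replace(/^<\?xml[^>]*\?>\s*/, ""));

// navigate past the SOAP envelope, matching any namespace at each level
var data = resultXML.*::Body.*::get_part_schResponse.*::get_part_schResult.*;

// build the output InfoTable (the "PartSchedule" DataShape and its fields are hypothetical)
var result = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape({
    infoTableName: "InfoTable",
    dataShapeName: "PartSchedule"
});

for each (var row in data) {
    result.AddRow({
        PartNumber: String(row.*::PART_NO),
        OrderProcessingDivCD: String(row.*::ORD_PROC_DIV_CD),
        ManufacturingDivCD: String(row.*::MFG_DIV_CD),
        ScheduledDate: String(row.*::SCHED_DT)
    });
}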
View full tip
ThingWorx Analytics is offered through a user interface called Analytics Builder, which provides some pre-configured functionality. However, should you want to create your own jobs and mashups, all features from Analytics Builder, and some more, are available through the ThingWorx services. Running most of this functionality requires that you provide some data to the Analytics services, and this is where the datasetRef parameter is required.

Data uploaded through Analytics Builder
Any dataset uploaded through Builder will have a datasetUri as shown in the image above, and the format will be parquet (all lowercase). The datasetUri can be obtained from the list of datasets in Builder.

Passing data as an in-body dataset
If data isn't uploaded through Analytics Builder, it can be supplied as an InfoTable in the data parameter of the datasetRef. Metadata will also need to be supplied if a new dataset is being created (the create job of the AnalyticsServer_DataThing). If this data is being supplied for a scoring job, then as long as the column names match up with what the model is expecting, ThingWorx Analytics will inference them appropriately.
The filter parameter is for parquet datasets already uploaded into TWXA and takes an ANSI SQL statement format to add conditions that reduce the number of rows.
Exclusions is a single-column InfoTable listing the columns you wish to remove from the job you are trying to submit. Example: if you want Profiles to run on only 5 out of 10 columns, you would give the list of 5 columns that you don't want to include in this exclusions InfoTable.
Data may also be supplied as a csv file in the file repository in some cases, in which case you would give the datasetUri parameter the location of the file on the TWX file repository (in the format thingworx://UseCaseFileRepo/tempdata.csv) and the format, which would be csv.
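As an illustration, the sketch below builds a datasetRef infotable that points at a dataset already uploaded through Analytics Builder, with an optional row filter. The DataShape name "AnalyticsDatasetRef" is an assumption to verify against your installation in Composer, and the datasetUri value must be copied from the dataset list in Builder.

// build a one-row datasetRef infotable (the DataShape name is an assumption -- verify it in Composer)
var datasetRef = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape({
    infoTableName: "InfoTable",
    dataShapeName: "AnalyticsDatasetRef"
});

datasetRef.AddRow({
    datasetUri: "<datasetUri copied from Analytics Builder>",  // see the dataset list in Builder
    format: "parquet",                                         // all lowercase, as noted above
    filter: "Temperature > 20"                                 // optional ANSI SQL-style condition to reduce rows
});

// the resulting infotable can then be passed as the datasetRef input of an Analytics job service,
// for example a profiles or scoring job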
View full tip
OpenDJ is a directory server which is also the base for WindchillDS. It can be used for centralized user management, and ThingWorx can be configured to log in with users from this directory service.

Before we start
Prerequisites:
Docker on Ubuntu
JKS keystore with a valid certificate
JKS keystore is stored in /docker/certificates - on the machine that runs the Docker environments
Certificate is generated with a Subject Alternative Name (SAN) extension for the hostname, fully qualified hostname and IP address of the opendj (Docker) server
Change the blue phrases to the correct passwords, machine names, etc. when following the instructions
If possible, use a more secure password than "Password123456"... the one I use is really bad

Related Links
https://hub.docker.com/r/openidentityplatform/opendj/
https://backstage.forgerock.com/docs/opendj/2.6/admin-guide/#chap-change-certs
https://backstage.forgerock.com/knowledge/kb/article/a43576900

Configuration
Generate the PKCS12 certificate
Assume this is our working directory on the Docker machine (with the JKS certificate in it):
cd /docker/certificates

Create the .pin file containing the keystore password:
echo "Password123456" > keystore.pin

Convert the existing JKS keystore into a new PKCS12 keystore:
keytool -importkeystore -srcalias muc-twx-docker -destalias server-cert -srckeystore muc-twx-docker.jks -srcstoretype JKS -srcstorepass `cat keystore.pin` -destkeystore keystore -deststoretype PKCS12 -deststorepass `cat keystore.pin` -destkeypass `cat keystore.pin`

Export the keystore and import it into the truststore:
keytool -export -alias server-cert -keystore keystore -storepass `cat keystore.pin` -file server-cert.crt
keytool -import -alias server-cert -keystore truststore -storepass `cat keystore.pin` -file server-cert.crt

Docker Image & Container
Download and run:
sudo docker pull openidentityplatform/opendj
sudo docker run -d --name opendj --restart=always -p 389:1389 -p 636:1636 -p 4444:4444 -e BASE_DN=o=opendj -e ROOT_USER_DN=cn=Manager -e ROOT_PASSWORD=Password123456 -e SECRET_VOLUME=/var/secrets/opendj -v /docker/certificates:/var/secrets/opendj:ro openidentityplatform/opendj

After building the container, it MUST be restarted immediately in order for it to recognize the new certificates:
sudo docker restart opendj

Verify that the certificate is the correct one; execute on the machine that runs the Docker environments:
openssl s_client -connect localhost:636 -showcerts

Load the .ldif
Use e.g. JXplorer and connect.
Select the opendj node, then LDIF > Import File (my demo breakingbad.ldif is attached to this post).
Skip any warnings and messages and continue to import the file.

ThingWorx Tomcat
If ThingWorx runs in Docker as well, a pre-defined keystore could be copied during image creation. Otherwise connect to the container via the command line:
sudo docker exec -it <ThingworxImageName> /bin/sh

Tomcat configuration:
cd /usr/local/openjdk-8/jre/lib/security
openssl s_client -connect 10.164.132.9:636 -showcerts

Copy the certificate between BEGIN CERTIFICATE and END CERTIFICATE of the above output into opendj.pem, e.g.
echo "<cert_goes_here>" > opendj.pem

Import the certificate:
keytool -keystore cacerts -import -alias opendj -file opendj.pem -storepass changeit

ThingWorx Composer
As the IP address is used (the hostname is not mapped in the Docker container), the certificate must have a SAN containing the IP address.
This only works with the TWLDAPExample Directory Service, not ADDS1, because ADDS1 uses hard-coded Active Directory queries and structures and therefore does not work with OpenDJ.
The User ID (cn) must be pre-created in ThingWorx so the user can log in. There is no automatic user creation by the Directory Service.
Make sure the Thing is Enabled under General Information.

Appendix
LDAP structure for breakingbad.ldif:
cn=Manager / Password123456
All users with password Password123456
View full tip