IoT Tips

Hello everyone,

After a recent presales experience, I wanted to share what I learned. The core of this article is to demonstrate how you can format a Flux request in ThingWorx and post it to InfluxDB, with the aim of offloading calculation-heavy work to InfluxDB. The context is renewable energy. Kepware and the details of connecting to InfluxDB are not covered here. As a prerequisite, you may want to read this article: Using Influx to store Value Stream properties from... - PTC Community

Introduction

The following InfluxDB usage was developed for an electricity provider dealing with wind farms, solar farms, and battery storage.

Technical context
- Kepware is used as the data source. A simulation of the wind assets, based on an Excel file, is configured and delivers data in real time.
- A SQL database also gathers the same data as the Kepware simulation. It is used to load historical data into InfluxDB, addressing cases of temporary data loss: once the connection is back online, SQL helps record the lost data in InfluxDB and compute the KPIs.
- InfluxDB is used to store data over time as well as the calculated KPIs.
- A third-party invoicing system is simulated to get the electricity price according to the time of day.

Orchestration between ThingWorx and InfluxDB
- ThingWorx v9.4.4: set the numeric properties to be logged, maintain control over the execution logic, and format the Flux request with dynamic inputs to send to InfluxDB.
- InfluxDB Cloud v2: store the logged properties, enable quick data reads, and execute the calculations.
Note: the free InfluxDB tier is slower for writes and reads, and data retention is limited to 30 days.

ThingWorx model and services

ThingWorx context
Because the relevant numeric properties are logged over time, new KPIs can be calculated from the logged data. In the following example, each wind asset triggers a calculation every minute to get the monetary gain based on the current power produced and the current electricity price. The request is formatted in ThingWorx, then pushed to and executed in InfluxDB, so ThingWorx server memory is not used for the calculation.

Services breakdown
- CalculateMonetaryKPIs: entry point service to calculate the monetary KPIs. It triggers the FormatFlux service, then injects the result into the Post service. Inputs: none. Output: NOTHING.
- FormatFlux_CalculateMonetaryKPI: formats the request in Flux for the monetary KPI calculation, respecting the Flux syntax expected by InfluxDB. Inputs: bucketName (STRING), thingName (STRING). Output: TEXT.
- PostTextToInflux: generic service to post any request to InfluxDB. Inputs: FluxQuery (TEXT), influxToken (STRING), influxUrl (STRING), influxOrgName (STRING), influxBucket (STRING), thingName (STRING). Output: INFOTABLE.

Highlights - CalculateMonetaryKPIs
The full script is attached as "CalculateMonetaryKPIs script.docx". The URL, token, organization, and bucket are configured in the Persistence Provider used by the Value Stream. We read them dynamically from the Value Stream attached to the Thing and reuse them ("MyConfig") to set the inputs of the two other services.

Highlights - FormatFlux_CalculateMonetaryKPI
The full script is attached as "FormatFlux_CalculateMonetaryKPI script.docx". The major part of this script is a text block in Flux syntax into which we inject dynamic values. The service gets the last values of ElectricityPrice, Power, and Capacity to calculate ImmediateMonetaryGain, PotentialMaxMonetaryGain, and PotentialMonetaryLoss. A simplified sketch follows below.
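Before the step-by-step breakdown that follows, here is a minimal sketch of what a FormatFlux-style service body could look like in ThingWorx server-side JavaScript. This is not the attached script: the one-hour range, the join on _time, and the single ImmediateMonetaryGain calculation are simplifying assumptions, and bucketName / thingName are assumed to be the service inputs listed above (the asset name is used as the InfluxDB _measurement).

// Minimal sketch of a FormatFlux-style service (ThingWorx server-side JavaScript).
// Assumptions: bucketName and thingName are service inputs; only ImmediateMonetaryGain
// is computed here for brevity.
var flux =
    'priceData = from(bucket: "' + bucketName + '")\n' +
    '  |> range(start: -1h)\n' +
    '  |> filter(fn: (r) => r._measurement == "' + thingName + '" and r._field == "ElectricityPrice")\n' +
    '  |> last()\n' +
    'powerData = from(bucket: "' + bucketName + '")\n' +
    '  |> range(start: -1h)\n' +
    '  |> filter(fn: (r) => r._measurement == "' + thingName + '" and r._field == "Power")\n' +
    '  |> last()\n' +
    '// join on _time, then ImmediateMonetaryGain = ElectricityPrice * Power\n' +
    'joinedData = join(tables: {price: priceData, power: powerData}, on: ["_time"])\n' +
    'gainData = joinedData\n' +
    '  |> map(fn: (r) => ({_time: r._time, _measurement: "' + thingName + '",\n' +
    '      _field: "ImmediateMonetaryGain", _value: r._value_price * r._value_power}))\n' +
    'gainData |> to(bucket: "' + bucketName + '")\n';

var result = flux; // TEXT output of the service

The attached script additionally reads Capacity and computes PotentialMaxMonetaryGain and PotentialMonetaryLoss, as described above.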
Flux logic might not be easy for beginners, so let's break down the intermediate variables created on the fly in the Flux request. Take the example of the following data in the bucket (with only two minutes of values):

_time                  _measurement  _field            _value
2024-07-03T14:00:00Z   WindAsset1    ElectricityPrice  0.12
2024-07-03T14:00:00Z   WindAsset1    Power             100
2024-07-03T14:00:00Z   WindAsset1    Capacity          150
2024-07-03T15:00:00Z   WindAsset1    ElectricityPrice  0.15
2024-07-03T15:00:00Z   WindAsset1    Power             120
2024-07-03T15:00:00Z   WindAsset1    Capacity          160

The request is built in the following steps:

1. Get the source values.
   Get the last price and store it in priceData:
   _time                  ElectricityPrice
   2024-07-03T15:00:00Z   0.15
   Get the last power and store it in powerData:
   _time                  Power
   2024-07-03T15:00:00Z   120
   Get the last capacity and store it in capacityData:
   _time                  Capacity
   2024-07-03T15:00:00Z   160
2. Join the three *Data tables on the same timestamp. The last values of price, power, and capacity may not have been recorded at the same time, in which case the final joinedData may be empty.
   _time                  ElectricityPrice  Power  Capacity
   2024-07-03T15:00:00Z   0.15              120    160
3. Perform the calculations:
   gainData stores the result of ElectricityPrice * Power:
   _time                  _measurement  _field                 _value
   2024-07-03T15:00:00Z   WindAsset1    ImmediateMonetaryGain  18
   maxGainData stores the result of ElectricityPrice * Capacity.
   lossData stores the result of ElectricityPrice * (Capacity - Power).
4. Add the results to the original bucket.

Highlights - PostTextToInflux
The full script is attached as "PostTextToInflux script.docx". It is a straightforward, generic script for posting a request (a minimal sketch of such a posting service is included at the end of this post). Two points are worth noting: the request header uses the somewhat unusual vnd.flux content type, and the URL needs to be formatted according to the InfluxDB API.

Well done!

Thanks to these steps, the calculated values are stored in InfluxDB. Other services can be created to retrieve the relevant InfluxDB data and visualize it in a mashup.

Last comment
This was my first contact with Flux scripting, so I was not comfortable with it, and I am still far from proficient. After spending more than a week browsing the InfluxDB documentation and running multiple tests, I achieved limited success but nothing substantial for a final outcome. As a last resort, I turned to ChatGPT. Through a few interactions, I quickly obtained convincing results; within a day I had a satisfactory outcome, which I then fine-tuned for the actual use case.

Here are two examples of consecutive ChatGPT prompts and answers; the first answer may need some fine-tuning. Right after, I asked it to convert the result to a ThingWorx script format. In that last screenshot, the script will not work as-is: the fluxQuery is not correctly formatted for ThingWorx. Please refer to the attached "FormatFlux_CalculateMonetaryKPI script.docx" to see how to format the Flux query and insert variables into it. Despite such mistakes, ChatGPT still provides a largely relevant code structure for Flux beginners and is an undeniable boost for writing code.
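To close, and to complement the PostTextToInflux highlights above, here is a minimal sketch of what a generic posting service could look like. It assumes the built-in ContentLoaderFunctions.PostText service is used for the HTTP call (check its exact parameter names in your ThingWorx version) and the InfluxDB Cloud v2 query endpoint /api/v2/query; the attached PostTextToInflux script may differ.

// Minimal sketch of a generic "post Flux text to InfluxDB" service (ThingWorx server-side JavaScript).
// Assumptions: FluxQuery, influxToken, influxUrl and influxOrgName are the service inputs listed above.
var queryUrl = influxUrl + "/api/v2/query?org=" + encodeURIComponent(influxOrgName);

var headers = {
    "Authorization": "Token " + influxToken,
    "Content-Type": "application/vnd.flux", // raw Flux text in the request body
    "Accept": "application/csv"             // InfluxDB answers with annotated CSV
};

// PostText sends the Flux text as-is and returns the response as an infotable.
var result = Resources["ContentLoaderFunctions"].PostText({
    url: queryUrl,
    content: FluxQuery,
    headers: headers
});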
Overview

REST stands for representational state transfer and is a software architectural style common on the World Wide Web. Anything with a RESTful interface can be communicated with using standard REST syntax. ThingWorx has such an interface built in, which makes viewing and updating Thing properties, as well as executing services, easy to do independently of the Web UI.

How to Use the REST API

The ThingWorx REST API is entirely accessible via URL, using the general form:

http(s)://<server>/Thingworx/<EntityCollection>/<EntityName>/<Services or Properties>/<Name>?<parameters>

(Precision LMS. Getting Started With ThingWorx 5.4 (Part 1 of Introduction to ThingWorx 5.4). PTC University. https://precisionlms.ptc.com/viewer/course/en/21332822/page/21332905.)

For example, a URL of this form can access a service called "GetBlogEntriesWithComments" found on the "ThingWorxTrainingMaintenanceBlog" Thing. Notice that even though this service gets XML-formatted data, the method type is "POST"; "GET" will not work in this scenario (further reading: https://support.ptc.com/appserver/cs/view/solution.jsp?n=CS214689&lang=en_US).

In order to run REST API calls from the browser, one must allow request method switching. This can be enabled by checking the box "Allow Request Method Switch" in PlatformSubsystem (further reading: https://support.ptc.com/appserver/cs/view/solution.jsp?n=CS224211&lang=en_US).

Access the REST API from Postman

Postman is a commonly used REST client which can ping servers via the REST API in a manner which mimics third-party software. It is free and easy to use, with a full tutorial located here: https://www.getpostman.com/docs/

To make a request, populate the URL field with a properly formatted REST API call (see the previous section). Parameters are not automatically URL-encoded, but right-clicking a highlighted portion of the URL and selecting EncodeURIComponent encodes that section.

Next, click the Headers tab. This is where the Content-Type, Accept, and Authorization headers are set for the REST call. Accept refers to the response format the REST call is expecting, while Content-Type refers to the format of the request being sent to the server. Authorization is required for accessing ThingWorx, even via the REST API (an app key can be used, as shown in the examples below, but in Postman you can also use Basic Auth with a username and password).

In Postman, there is also ample opportunity to modify the request body under the Body tab. There are several options for setting parameters. Form-data and x-www-form-urlencoded both allow setting key-value pairs easily and cleanly, and in the latter case encoding occurs automatically (e.g. "Hello World" becomes %22Hello%20World%22). Raw request bodies can contain anything, and Postman will not touch anything entered except to replace environment variables; whatever is placed in the text area under raw gets sent with the request (normally XML or JSON, as specified by Content-Type). Finally, binary allows sending things which cannot normally be entered into Postman, e.g. image, text, or audio files.
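The same kind of call can also be made from code instead of Postman. Below is a short sketch using Node.js 18+ (where fetch is built in); the host, Thing name, service name, input parameter, and application key value are placeholders, and the headers mirror what is described above.

// Sketch: executing a ThingWorx service over REST from Node.js 18+.
// MyThing, MyService, InputString and the appKey value are placeholders.
(async () => {
    const response = await fetch("https://localhost/Thingworx/Things/MyThing/Services/MyService", {
        method: "POST",                                // service execution uses POST
        headers: {
            "appKey": "your-application-key-here",     // ThingWorx application key authentication
            "Content-Type": "application/json",        // format of the request body
            "Accept": "application/json"               // format expected in the response
        },
        body: JSON.stringify({ InputString: "input" }) // service inputs as a JSON object
    });
    console.log(response.status, await response.text());
})();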
REST API Examples

For introductory-level examples, see the previous Blog document found here: https://community.thingworx.com/docs/DOC-3315

Retrieving property values from "MyThing" using GET, the default method type (notice that no "method=GET" is required here, though it would still work with it):
http://localhost/Thingworx/Things/MyThing/Properties/

Updating "MyProperty" with the value "hello" on "MyThing" using PUT:
http://localhost/Thingworx/Things/MyThing/Properties/MyProperty?method=PUT&value=hello

In Postman, you can send multiple property updates at once via the query body (in this case updating all of the properties of MyThing, the string "Prop1" and the number "Prop2"):
Query: http://localhost/Thingworx/Things/MyThing/Properties/*
Query Type: PUT
Query Headers:
- Content-Type: application/json
- Authorization: Basic Auth (enter the username and password on the Authorization tab and this header will auto-populate)
Body (JSON): {"Prop1":"hello world","Prop2":10}
Note: you can also specify multiple properties as shown, but update only one at a time in Postman by using the browser syntax given above.

Calling "MyService" (a service on "TestThing") with a String input parameter ("InputString"):
http://localhost/Thingworx/Things/TestThing/Services/MyService?method=post&InputString=input

It is easier to pass things like XML and JSON into services using Postman. This query calls "MyJSONService" on "MyThing" with a JSON input parameter:
Query: http://localhost/Thingworx/Things/MyThing/Services/MyJSONService
Query Type: POST
Query Headers:
- Accept: should match the service output (text/html for a String output)
- Content-Type: application/json
- Authorization: Basic Auth (enter the username and password on the Authorization tab and this header will auto-populate)
Body (JSON): {"InputJSON":"{\"JSONInput\":{\"PropertyName\":\"TestingProp\",\"PropertyValue\":\"Test\"}}"}
Body (XML): {"xmlInput": "<xml><name>User1</name></xml>"}

Viewing "BasicMashup" using AppKey authentication (no login is required because this Application Key is set up to log in as a user who has permission to view the Mashup):
http://localhost/Thingworx/Mashups/BasicMashup?appKey=b101903d-af6f-43ae-9ad8-0e8c604141af&x-thingworx-session=true
Read more here: https://support.ptc.com/appserver/cs/view/solution.jsp?n=CS227935

Downloading log information from "ApplicationLog" (or other log types):
http://localhost/Thingworx/Logs/ApplicationLog/Services/QueryLogEntries?method=POST

In Postman, more information can be passed into some queries via the query body:
Query: http://localhost/Thingworx/Logs/ApplicationLog/Services/QueryLogEntries
Query Type: POST
Query Headers:
- Accept: application/octet-stream
- Content-Type: application/json
- Authorization: Basic Auth (enter the username and password on the Authorization tab and this header will auto-populate)
Body: {"searchExpression":"", "origin":"", "instance":"", "thread":"", "startDate":1462457344702, "endDate":1462543744702, "maxItems":100}

Downloading "MyFile.txt" from the "MyRepo" FileRepository (here, "/" refers to the home folder of this FileRepository, and the full path would be something like "C:\ThingworxStorage\repository\MyRepo\MyFolder\MyFile.txt"):
http://localhost/Thingworx/FileRepositoryDownloader?download-repository=MyRepo&download-path=/MyFolder/MyFile.txt

Uploading files to FileRepository-type Things is a bit trickier, as anything uploaded must be Base64-encoded prior to making the service call.
In Postman, this is the configuration used to send a file called "HelloWorld.txt", containing the string "Hello World!", to a folder called "FolderInRepo" on a FileRepository named "MyRepo":
Query: http://localhost/Thingworx/Things/MyRepo/Services/SaveBinary
Query Type: POST
Query Headers:
- Accept: application/json
- Content-Type: application/json
- Authorization: Basic Auth (enter the username and password on the Authorization tab and this header will auto-populate)
Body: {"path" : "/FolderInRepo/HelloWorld.txt", "content" : "SGVsbG8gV29ybGQh"}
Notice that the content has been encoded to Base64 using a free online service. In most cases, this step can be handled more easily (and for more challenging file content) by programming language code; see the sketch at the end of this section.

Resources and other built-in Things can be accessed in a similar fashion to user-created Things. This query searches for Things that implement the "GenericThing" Thing Template:
http://localhost/Thingworx/Resources/SearchFunctions/Services/SearchThingsByTemplate?method=POST&thingTemplate=GenericThing

Deleting "MyThing" (when possible, use services for this instead, since they are likely safer):
http://localhost/Thingworx/Things/MyThing?method=DELETE&content-type=application/JSON

Exporting all data within ThingWorx using the DataExporter functionality:
http://localhost/Thingworx/DataExporter?Accept=application/octet-stream

Exporting all entities which have the Model Tag "Applications:TestTerm" within ThingWorx using the Exporter functionality:
http://localhost/Thingworx/Exporter?Accept=text/xml&searchTags=Applications:TestTerm
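Returning to the file-upload example above, the Base64 step is usually simpler from code. Here is a short sketch in Node.js 18+; the host, credentials, repository, folder, and file names are placeholders.

// Sketch: uploading a local file to a FileRepository Thing via SaveBinary from Node.js 18+.
// Host, credentials, repository and folder names are placeholders.
const fs = require("fs");

const base64Content = fs.readFileSync("HelloWorld.txt").toString("base64"); // Base64-encode the file

(async () => {
    const response = await fetch("http://localhost/Thingworx/Things/MyRepo/Services/SaveBinary", {
        method: "POST",
        headers: {
            "Authorization": "Basic " + Buffer.from("username:password").toString("base64"),
            "Content-Type": "application/json",
            "Accept": "application/json"
        },
        body: JSON.stringify({ path: "/FolderInRepo/HelloWorld.txt", content: base64Content })
    });
    console.log(response.status);
})();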
Learn how to use the DBConnection building block to create your own DB tables in ThingWorx.
Build an Equipment Dashboard Guide Part 1

Overview

This project will introduce you to the principles of ThingWorx Foundation by creating an equipment dashboard. Following the steps in this guide, you will create the building blocks of your first IoT application, including Things and Streams. We introduce the basics of creating an IoT application. NOTE: This guide's content aligns with ThingWorx 9.3. The estimated time to complete both parts of this guide is 30 minutes.

Step 1: Learning Path Overview

This guide explains the steps to create an industrial equipment dashboard and is part of the Connect and Monitor Industrial Plant Equipment Learning Path. You can use this guide independently from the full Learning Path; if you want to learn the basics of creating an equipment dashboard with ThingWorx, this guide will be useful to you. When used as part of the Industrial Plant Learning Path, you should already have ThingWorx Kepware Server installed and sending data to ThingWorx Foundation, and you should have previously created the Thing Shape and Thing Template used for this dashboard. We hope you enjoy this Learning Path.

Step 2: Create Thing

A Thing is used to digitally represent a specific component of your application in ThingWorx. In Java programming terms, a Thing is similar to an instance of a class. In this step, you will create a Thing that represents an individual pump using the Thing Template created in the previous guide. Using a Thing Template allows you to increase development velocity by creating multiple Things without re-entering the same information each time.
1. In ThingWorx Foundation, navigate to Browse > Modeling > Things.
2. Click + New.
3. In the Name field, type MyPump. NOTE: This name, with matching capitalization, is required for the data display created in a later step.
4. If Project is not already set, click the + in the Project text box and select the PTCDefaultProject.
5. In the Base Thing Template field, search for and select the previously-created PumpTemplate.
6. At the top, click Save.

Manage Property Bindings
1. At the top, click Properties and Alerts.
2. At the top, click Manage Bindings.
3. In the top-left Local > Search Things field, search for and select IndConn_Tag1.
4. Drag-and-drop Simulation_Examples_Functions_Random3's + symbol onto the watts Property on the right.
5. At the bottom-right of the pop-up, click Done. Note how the Tag from ThingWorx Kepware Server is now bound to the watts Property.
6. Click Save.
7. Click Refresh repeatedly to confirm the watts Property value is changing.

Step 3: Store Data in Value Stream

Now that you have created the MyPump Thing to model your application in ThingWorx, you need a storage Entity to record changing Property values. This step shows how to save time-series data in a Value Stream already created in a previous guide. To learn more, refer to the Methods for Data Storage guide.
1. Navigate to Browse > Modeling > Thing Templates.
2. Click the previously-created PumpTemplate Thing Template to open it.
3. Confirm you are on the General Information tab. If necessary, click Edit to allow changes.
4. In the Value Stream field, search for and select IndConn_ValueStream.
5. Click Save.

Step 4: Create Application UI

ThingWorx Foundation is used to create customized web applications that can display and interact with data from multiple sources. These web applications are called Mashups and are created using the Mashup Builder.
The Mashup Builder is where you create your web application by dragging and dropping Widgets such as Grids, Charts, Maps, and Buttons onto a Canvas. All of the user interface elements in your application are Widgets. We will build a web application with three Widgets:
- an Image showing a picture of the pump,
- a Value Display showing the pump serial number,
- a Line Chart showing the trend of the watts Property value over time.

Create New Mashup
1. Navigate to Browse > Visualization > Mashups.
2. Click + New.
3. Keep the defaults and click OK.
4. In the Name field, type pump-dashboard.
5. If Project is not already set, click the + in the Project text box and select the PTCDefaultProject.
6. Click Save.
7. At the top, click Design.

Define Mashup Areas
1. At the top-left, ensure the Layout tab is selected.
2. Click Add Bottom to split your UI into two halves.
3. Click the newly-created bottom half to select it.
4. Click Add Left.
5. Click the bottom-left container to select it.
6. In the top-left Layout section, scroll down and select Fixed Size.
7. Type 200 in the Width text box that appears, then press your keyboard's Tab key to record your entry.

Add Widgets
1. In the top-left, click the Widgets tab.
2. In the Filter field, type image.
3. Drag-and-drop an Image Widget onto the lower-left area of the central Canvas. This Widget will show an image of the pump in use.
4. In a similar manner, drag-and-drop a Value Display Widget onto the top area.
5. Likewise, drag-and-drop a Line Chart Widget onto the lower-right area.
6. Click Save.

Click here to view Part 2 of this guide.
Build an Equipment Dashboard Guide Part 2

Step 5: Display Data

Now that you have configured the visual part of your application, you need to bind the Widgets in your Mashup to a data source.

Add Services to Mashup
1. In the top-right, ensure the Data tab is selected.
2. Click the green + symbol.
3. In the Entities Filter field, search for and select MyPump.
4. In the Services Filter field, type GetPropertyValues.
5. Click the right-arrow beside GetPropertyValues. Note how GetPropertyValues was added to the right side under Selected Services.
6. Check the checkbox for Execute on Load. This causes the Service to execute when the Mashup starts.
7. In the Services Filter field, type QueryPropertyHistory.
8. Click the right-arrow beside QueryPropertyHistory.
9. Check the checkbox for Execute on Load.
10. Click Done to close the pop-up. Note how the Services have been added to the Data tab in the top-right.
11. Click Save.
Now that we have access to the backend data, we want to bind it to our Widgets.

Value Display
Configure the Value Display to show the serialNumber of the pump.
1. Under the Data tab, expand GetPropertyValues > Returned Data > All Data.
2. Drag-and-drop GetPropertyValues > serialNumber onto the Value Display Widget in the top section.
3. On the Select Binding Target pop-up, select Data.

Image
We want to use an Image Widget to display a thumbnail picture of the pump for easy reference. To do that, you first need to upload an image to Foundation by creating a Media Entity.
1. Right-click the pump image provided in the guide, then click "Save image as..." to download it.
2. Click Browse > Visualization > Media.
3. Click + New.
4. In the Name field, type pump-thumbnail.
5. If Project is not already set, click the + in the Project text box and select the PTCDefaultProject.
6. Under Image, click Change.
7. Navigate to and select the pump-image.png file you just downloaded.
8. On the navigation pop-up, click Open to close the pop-up and confirm the image selection.
9. At the top of Foundation, click Save.

Change Image to Pump
We will now update the Image Widget to display the ThingWorx Media Entity we just created.
1. Return to the pump-dashboard Mashup.
2. Click the Image Widget to select it, and ensure that the bottom-left Properties tab is active.
3. In the bottom-left Properties' Filter field, type SourceURL.
4. For the SourceURL Property, search for and select pump-thumbnail.
5. Click Save.

Line Chart
Configure the Line Chart to display Property values changing over time.
1. In the top-right Data tab, expand QueryPropertyHistory > Returned Data.
2. Drag-and-drop QueryPropertyHistory > All Data onto the Line Chart Widget in the bottom-right Canvas section.
3. On the Select Binding Target pop-up, select Data.
4. Ensure the Line Chart Widget is selected.
5. On the Line Chart's Property panel in the bottom-left, in the Filter field, type XAxisField.
6. For the XAxisField Property, select timestamp.
7. In the Filter field, type LegendFilter.
8. Check the checkbox for LegendFilter.
9. Click Save.

Verify Data Bindings
You can see the configuration of data sources bound to Widgets in the bottom-center Connections pane.
1. In the top-right Data tab, click GetPropertyValues. Check the diagram in the bottom-center Connections window to confirm a data source is bound to the Value Display Widget.
2. Also in the top-right Data tab, click QueryPropertyHistory. Confirm that the diagram shows it is bound to the Line Chart.
3. Click Save.

Step 6: Test Application

1. Browse to your Mashup and click View Mashup to launch the application.
NOTE: You may need to enable pop-ups in your browser to see the Mashup.
2. Confirm that data is being displayed in each of the sections.
3. Open the MyPump Thing, then click the Properties and Alerts tab.
4. Click Set Value on the line of the serialNumber Property.
5. Enter a value for the serial number, then click the check-mark button.
6. Click Refresh to confirm the value has changed.
7. Refresh the browser window showing the dashboard to see the new serial number value.

Step 7: Next Steps

Congratulations! You've successfully completed the Build an Equipment Dashboard guide and learned how to:
- Use Composer to create a Thing Shape and a Thing Template
- Make a Thing using a custom Thing Template
- Store Property change history in a Value Stream
- Create an application UI with Mashup Builder
- Display data from connected devices
- Test a sample application

If you would like to return to previous guides in the learning path, select Connect and Monitor Industrial Plant Equipment.
Connect and Monitor Industrial Plant Equipment Learning Path   Learn how to connect and monitor equipment that is used at a processing plant or on a factory floor.   NOTE: Complete the following guides in sequential order. The estimated time to complete this learning path is 180 minutes.   Create An Application Key  Install ThingWorx Kepware Server Connect Kepware Server to ThingWorx Foundation Part 1 Part 2 Create Industrial Equipment Model Build an Equipment Dashboard Part 1 Part 2
In this post, I show how you can downsample time-series data on the server side using the LTTB algorithm. The export comes with a service to set up sample data and a mashup which shows the data with weak to strong downsampling.

Motivation
Users displaying time-series data on mashups and dashboards (usually via a service using a QueryPropertyHistory flavor in the background) might request large amounts of data, either by selecting large date ranges to visualize or because the data is recorded in high resolution. The newer chart widgets in ThingWorx cope much better with a high number of data points to display, and some also provide their own downsampling so that only the "necessary" points are drawn (e.g. there is no need to paint beyond the screen's resolution). See the discussion here. However, as this is done in the widgets, the data reduction happens on the client side, so data is sent over the network only to be discarded. It would be beneficial to reduce the number of points delivered to the client beforehand. This would also improve the behavior of older widgets which do not support downsampling.
Many downsampling methods are available. One option is partitioning the data and averaging each partition, as described here. A disadvantage is that this creates and displays points which are not in the original data. The approach here uses Largest-Triangle-Three-Buckets (LTTB) for two reasons: the resulting data points exist in the original data set, and the algorithm preserves the shape of the original curve very well, i.e. outliers are displayed rather than averaged out. It also appears to be computationally inexpensive on the server.

Setting it up
1. Import the entities from LTTB_Entities.xml.
2. Navigate to the Thing LTTB.TestThing in project LTTB and run the service downsampleSetup to set up some sample data.
3. Open the mashup LTTB.Sampling_MU. Initially, 8000 rows are sent back; the chart widget decides how many of them are displayed, and you can see the row count in the debug info. Using the button bar, you determine to how many points the result will be downsampled before being sent to the client. Notice how the curve gets rougher, but the shape is preserved.

How it works
The potentially large result of QueryPropertyHistory is downsampled by running it through LTTB, and the resulting infotable is sent to the widget (see service LTTB.TestThing.getData). The LTTB implementation itself is in the service downsampleTimeseries (a standalone sketch of the algorithm is included at the end of this post). Debug mode allows you to see how much data is sent over the network and how the number decreases proportionally with the downsampling.

The export and the widget were built with ThingWorx 9, but only the widget really needs ThingWorx 9. The code would need some more error-checking for robustness, but it is a good starting point.
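For readers who want to see the shape of the algorithm without opening the export, here is a minimal, self-contained sketch of LTTB over an array of { timestamp, value } rows. This is not the downsampleTimeseries service from the export (which works on infotables); it only illustrates the bucket-and-largest-triangle logic.

// Largest-Triangle-Three-Buckets sketch (plain JavaScript).
// data: array of { timestamp: number, value: number }, sorted by timestamp.
// threshold: number of points to keep (should be >= 3).
function lttb(data, threshold) {
    if (threshold >= data.length || threshold < 3) {
        return data.slice();                               // nothing to downsample
    }
    var sampled = [data[0]];                               // always keep the first point
    var bucketSize = (data.length - 2) / (threshold - 2);  // size of each middle bucket
    var a = 0;                                             // index of the previously selected point

    for (var i = 0; i < threshold - 2; i++) {
        // Average point of the NEXT bucket (third corner of the triangle)
        var rangeStart = Math.floor((i + 1) * bucketSize) + 1;
        var rangeEnd = Math.min(Math.floor((i + 2) * bucketSize) + 1, data.length);
        var avgX = 0, avgY = 0;
        for (var j = rangeStart; j < rangeEnd; j++) {
            avgX += data[j].timestamp;
            avgY += data[j].value;
        }
        avgX /= (rangeEnd - rangeStart);
        avgY /= (rangeEnd - rangeStart);

        // Pick the point in the CURRENT bucket forming the largest triangle
        var bucketStart = Math.floor(i * bucketSize) + 1;
        var bucketEnd = Math.floor((i + 1) * bucketSize) + 1;
        var maxArea = -1, maxIndex = bucketStart;
        for (var k = bucketStart; k < bucketEnd; k++) {
            var area = Math.abs(
                (data[a].timestamp - avgX) * (data[k].value - data[a].value) -
                (data[a].timestamp - data[k].timestamp) * (avgY - data[a].value)
            ) / 2;
            if (area > maxArea) { maxArea = area; maxIndex = k; }
        }
        sampled.push(data[maxIndex]);
        a = maxIndex;
    }
    sampled.push(data[data.length - 1]);                   // always keep the last point
    return sampled;
}

Called as lttb(rows, 500), it keeps the first and last points and, for each bucket in between, selects the point that forms the largest triangle with the previously selected point and the average of the following bucket, which is why outliers survive the reduction.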
Design Your Data Model Guide Part 3

Step 7: Prioritize

The first step in the design process is to use the Thing-Component Matrix to identify and prioritize groups of Components that are shared across multiple Things. These groups are prioritized by the number of shared Components, from highest to lowest, enabling us to break out the most commonly used groups of Components and package them into reusable pieces. Let's examine our example Thing-Component Matrix to identify and prioritize groups. In the table below, we have done this and recognized that there are FOUR groups.

NOTE: Each item in our unique Thing-Component Matrix would also count as a group on its own. This can be dealt with almost separately from our process, though, because there is no overlap between different Things. The "Templates for Unique Components" and "Adding Components Directly to Things" sections in the Iterate step of this guide cover these "one-offs."

Step 8: Largest Group

The base building block we use most often is the Thing Template. To start the design process, the first step is to create a Thing Template for the largest Component group. Applying this to our Smart Factory scenario, we'll take the largest group ("Group 1") and turn it into a Thing Template using the Entity Relationship Diagram schematic.

Since every item on our production line shares these Components, we will name this Thing Template Line Asset. Now, let's build this using our Entity Relationship Diagram. The result is a ThingWorx Thing Template with five Properties, one Event, and one Subscription.

Step 9: Iterate

Once an initial base template has been created for the largest group, the rest of the groups can be added by selecting the appropriate entity type (Thing Template, Thing Shape, or directly-instantiated Thing). The following Entity Decision Flowchart explains which entity type is used in which scenario.

Now that we have established our "Line Asset" Thing Template for our largest group, the next step is to iterate through each of our remaining groups. Following the flowchart, we will identify which entity type each should be and add it to our design.

Group 2 - "System Connector"

The second group represents connectors into both of our internal business systems. We will call them System Connectors. If we look at Group 2 versus our "Largest Group" Thing Template, we can see that there is no overlap between their Components. This represents the third branch of the Entity Decision Flowchart, which means we want to create a new Thing Template. Following this rule, here is the resulting template:

Group 3 - "Hazardous Asset"

The third group represents line assets that require emergency shutdown capabilities because, under certain conditions, the machinery can become dangerous. We will call these Hazardous Assets. If we look at our two previous Thing Templates, we can see that there is full overlap of these Components with our previous largest group, the "Line Asset" Thing Template. This represents the fourth branch of the Entity Decision Flowchart, which means we want to create a CHILD Thing Template. Following this rule, here is the resulting Thing Template:

Group 4 - "Inventory Manager"

The fourth group represents line assets that keep track of inventory count, to ensure the number of assembled products is equal to the number of checked products for quality.
If we look at our existing Thing Templates, we can see that there is some overlap of the Components in our "Hazardous Asset" and "Line Asset" Thing Templates. This represents the second branch of the Entity Decision Flowchart, which means we want to create a Thing Shape.   Following this rule, here is the resulting Thing Shape:   Templates for Unique Components   Now that we have handled all our shared component groups, we also want to look at the unique component groups. Since we have already established that each group in our Unique Thing-Component Matrix does not share its Components with other Things, we can create Child Thing Templates for these line assets.   NOTE: Refer to the finished design at the bottom to reference all of the inherited Thing Templates and Thing Shapes for these Child Thing Templates.   Adding Components Directly to Things   In many cases, we will have Components that only exist for a single Thing. This frequently occurs when there will be only one of something in a system. In our case, we will only need one "System Connector" for each of the Maintenance System and the Production Order System.   Instantiate Your Things   At this point, the Data Model is fully built. All we need to do now is instantiate the actual Things which represent our real-world machines and digital-connections.     Step 10: Validate     The final step of the model breakdown design process is validation through instantiation. This is the process by which we take our designed Thing Templates / Thing Shapes / Things and actually create them on the ThingWorx platform to ensure they meet all of our requirements. This is done by tracing back through the chain of inheritance for all the Things in the data model to ensure they contain all of the required Components from the Thing-Component Matrix. Once we have verified that each Thing contains all of its requirements, the data model is complete.   Using this technique on each of your Things, you can explicitly prove that all of the requirements have been met.   Step 11: Next Steps   Congratulations! You've successfully completed the Design Your Data Model Guide covering the first three steps of the proposed data model design strategy for ThingWorx. This guide has given you the basic tools to: Create user stories Identify endpoints in your system Break down your data model using an Entity Relationship Diagram Decide when to use Thing Templates vs. Thing Shapes vs. directly-instantiated Things     The next guide in the Design and Implement Data Models to Enable Predictive Analytics learning path is Data Model Implementation.
Getting Started on the ThingWorx Platform Learning Path   Learn hands-on how ThingWorx simplifies the end-to-end process of implementing IoT solutions.   NOTE: Complete the following guides in sequential order. The estimated time to complete this learning path is 210 minutes.   Get Started with ThingWorx for IoT   Part 1 Part 2 Part 3 Part 4 Part 5 Data Model Introduction Configure Permissions Part 1 Part 2 Build a Predictive Analytics Model  Part 1 Part 2
Data Model Introduction

Overview

This project will introduce the ThingWorx Foundation Data Model. Following the steps in this guide, you will consider data interactions based on user needs and requirements, as well as application modularity, reusability, and future updates. We will teach you how to think about a properly constructed foundation that will allow your application to be scalable, flexible, and more secure. NOTE: This guide's content aligns with ThingWorx 9.3. The estimated time to complete this guide is 30 minutes.

Step 1: Benefits

A Data Model creates a uniform representation of all items that interact with one another. There are multiple benefits to such an approach, and the ability to break up items and reuse components is considered a best practice. ThingWorx has adopted this model at a high level to represent individual components of an IoT solution.
- Flexibility: Once a model has been created, it is simple to update, modify, or remove components without needing to rework the system or retest existing components.
- Scalability: It's easy to clone and modify devices that are either identical or similar when changing from a Proof of Concept or Pilot Program to a Scaled Business Model.
- Interoperability: Seamlessly plug into other applications.
- Collaboration: A Data Model allows pre-defined links between components, meaning that various parts can be defined when designing the model so that multiple people can work on those individual parts without compromising the interoperability of the components.
- Seamless platform: A Data Model allows for seamless integration with other systems. A properly-formed model will make it easier to create high-value IoT capabilities such as analytics, augmented/virtual reality, industrial connectivity, etc.

Step 2: Entities

Building an IoT solution in Foundation begins with defining your Data Model, the collection of Entities that represent your connected devices, business processes, and your application. Entities are the highest-level objects created and maintained in Foundation, as explained below.

Thing Shape

Thing Shapes provide a set of characteristics represented as Properties, Services, Events, and Subscriptions that are shared across a group of physical assets. A Thing Shape is best used for composition to describe relationships between objects in your model. They promote reuse of contained Properties and business logic that can be inherited by one or more Thing Templates. In Foundation, the model allows a Thing Template to implement one or more Thing Shapes, which is similar to a class definition in C++ that has multiple inheritance. When you make a change to the Thing Shape, the change is propagated to the Thing Templates and Things that implement that Thing Shape, so maintaining the model is quick and easy.

Thing Template

Thing Templates provide base functionality with Properties, Services, Events, and Subscriptions that Thing instances use in their execution. Every Thing is created from a Thing Template. A Thing Template can extend another Thing Template. When you release a new version of a product, you simply add the additional characteristics of the version without having to redefine the entire model. This model configuration provides multiple levels of generalization of an asset. A Thing Template can derive one or more additional characteristics by implementing Thing Shapes.
When you make a change to the Thing Template, the change is propagated to the Things that implement that Thing Template; so again, maintaining the model is quick and easy. A Thing Template can be used to classify the kind of a Thing or asset class, or to represent a specific product model with unique capabilities. If you have two product models and their interaction with the solution is the same (same Properties, Services, and Events), you could model them as one Thing Template. Classifying Thing Templates is useful for aggregating Things into collections, which are useful in Mashups. You may want separate Thing Templates for indexing, searching, and future evolutions of the products.

Thing

Things are representations of physical devices, assets, products, systems, people, or processes that have Properties and business logic. All Things are based on Thing Templates (inheritance) and can implement one or more Thing Shapes (composition). A Thing can have its own Properties, Services, Events, and Subscriptions and can inherit other Properties, Services, Events, and Subscriptions from its Thing Template and Thing Shape(s). How you model the interconnected Things, Thing Templates, and Thing Shapes is key to making your solution easy to develop and maintain as the physical assets change. End users will interface with Things for information in applications and for reading/writing data.

Best Practice: Create a Thing Template to describe a Thing, then create an instance of that Thing Template as a Thing. This practice leverages inheritance in your model and reduces the amount of time you spend maintaining and updating your model.

Step 3: Inheritance Model

Defining Things, Thing Templates, and Thing Shapes in your Data Model allows your application to handle both simple and complex scenarios.
- Thing Shapes: assemble individual components.
- Thing Templates: combine those components into fully functional objects.
- Thing: a unique representation of a set of identical components defined by the Thing Template.

In this example, there is a Parent/Child model between two related Thing Templates. NOTE: Things and Thing Templates may only inherit ONE Thing Template. Both Things and Thing Templates may inherit any number of Thing Shapes. Thing Templates employ a linear relationship, while Thing Shapes employ a modular relationship. Any Thing or Thing Template may have any number of sub-components (i.e. Thing Shapes), but each Thing or Thing Template is just one description of one object as a whole. How you decide to compartmentalize your Data Model into Thing Shapes and Thing Templates to create the actual Things that you'll be using is a custom design that will be specific to each implementation.

Step 4: Scenario

The ThingWorx Data Model provides a way for you to describe your connected devices and match the complexity of a real-world scenario. Things, Thing Templates, and Thing Shapes are building blocks that define your data model. You can define the components of Things, Thing Templates, and Thing Shapes, including Properties, Services, Events, and Subscriptions.
- Properties: Each Property has a name, description, and a data type (Base Type). Depending on the base type, additional fields may be enabled. A simple scalar type, like a number or string, adds basic fields like a default value. More complex base types have more options. Properties can be static (e.g. Model Number) or dynamic (e.g. Temperature).
- Services: A Service is a method/function defined by a block of code that performs logic specifying actions a Thing can take. There are several implementation methods, or handlers (for example: Script, SQLQuery, and SQL command), for Services, depending on the template you use. The specific implementation of a user-defined Service is done via a server-side script. The Service can then be invoked through a URL, a REST-client-capable application, or another Service in ThingWorx. When you create a new Service, you can define input properties and an output. You can define individual runtime permissions for each Service.
- Events: Events are triggers that define changes of state (for example: device is on, temperature is above/below threshold) of an asset or system, and often require an action to correct or respond to the change. Business logic and actions in a ThingWorx application are driven by Events.
- Subscriptions: A Subscription is the action associated with an Event and the primary method to set up intelligence in the ThingWorx model, enabling you to optimize and automate. Subscriptions use JavaScript code to define what you want your application to do when the Event occurs.

NOTE: Anything inherited by a Thing Template or Thing will inherit the associated Components.

This diagram shows what a specific Inheritance Model might look like for a connected tractor. There is one master Template at the top; in this case, it is a collection of similar types of tractors. The parent Template inherits a few Shapes - an Engine and a Deck that have been used in previous designs. Importing them as Shapes allows us to reuse previous design work and expedite the development process. One of the child Templates incorporates another Shape, this time in the form of a GPS tracking device. Then, at the bottom, there are the specific tractors with individual serial numbers that will report their connected data back to an IoT application.

Step 5: Next Steps

Congratulations! You've successfully completed the Data Model Introduction and learned about:
- The function of a data model for your IoT application
- Data model components, including Thing, Thing Shape, and Thing Template
- How ThingWorx components correspond to connected devices

Please comment on this post so we can improve this guide in future ThingWorx version iterations.

This guide is part of two learning paths: the next guide in the Getting Started on the ThingWorx Platform learning path is Configure Permissions, and the next guide in the Design and Implement Data Models to Enable Predictive Analytics learning path is Design Your Data Model.
Design Your Data Model Guide Part 1

Overview

This project will introduce the process of taking your IoT solution from concept to design. Following the steps in this guide, you will create a solution that doesn't need to be constantly revamped, by creating a comprehensive Data Model before starting to build and test your solution. We will teach you how to utilize a few proposed best practices for designing the ThingWorx Data Model and provide some prescriptive methods to help you generate a high-quality framework that meets your business needs. NOTE: This guide's content aligns with ThingWorx 9.3. The estimated time to complete all three parts of this guide is 60 minutes. All content is relevant, but there are additional tools and design patterns you should be aware of; please go to this link for more details.

Step 1: Data Model Methodology

We will start by outlining the overall process for the proposed Data Model Methodology.
1. User Stories: Identify who will use the application and what information they need. By approaching the design from a user perspective, you should be able to identify what elements are necessary for your system.
2. Data Sources: Identify the real-world objects or systems which you are trying to model. To create a solid design, you need to identify what the "things" are in your system and what data or functionality they expose.
3. Model Breakdown: Compose a representative model of modular components to enable uniformity and reuse of functionality wherever possible. Break down user requirements and data, identifying how the system will be modeled in Foundation.
4. Data Strategy: Identify the sources of data, then evaluate how many different types of data you will have, what they are, and how your data should be stored. From that, you may determine the data types and data storage requirements.
5. Business Logic Strategy: Examine the functional needs and map them to your design for proper business logic implementation. Determine the business logic as a strategic flow of data, and make sure everything in your design fits together in logical chunks.
6. User Access Strategy: Identify each user's access and permission levels for your application. Before you start building anything, it is important to understand the strategy behind user access. Who can see or do what? And why?
NOTE: Due to the length of this subject, the ThingWorx Data Model Methodology has been divided into multiple parts. This guide focuses on the first three steps: User Stories, Data Sources, and Model Breakdown. Guides covering the last three steps are linked on the final Next Steps page.

Step 2: User Stories

With a user-based approach to design, you identify requirements for users at the outset of the process. This increases the likelihood of user satisfaction with the result. Utilizing this methodology, you consider each type of user that will be accessing your application and determine their requirements according to the following two categories:
- Functionality: Determine what the user needs to do. This will define what kind of Services and Subscriptions will need to be in the system and which data elements and Properties must be gathered from the connected Things.
- Information: What information do they need? Examine the functional requirements of the user to identify which pieces of information the users need to know in order to accomplish their responsibilities.

Factory Example

Let's revisit our Smart Factory example scenario.
The first step of the User Story phase of the design process is to identify the potential users of your system. In this example scenario, we have defined three different types of users for our solution: Maintenance, Operations, and Management. Each of these users will have a different role in the system; therefore, they will have different functional and informational needs.

Maintenance

It is the maintenance engineer's job to keep machines up and running so that the operator can assemble and deliver products. To do this well, they need access to granular data for the machine's operating status to better understand healthy operation and identify causes of failure. They also need to integrate their maintenance request management system to consolidate their efforts and to create triggers for automatic maintenance requests generated by the connected machines.
Required Functionality: get granular data values from all assets; get a list of maintenance requests; update maintenance requests; set triggers for automatic maintenance request generation; automatically create maintenance requests when triggers have been activated.
Required Information: granular details for each asset to better understand healthy asset behavior; current alert status for each asset; when the last maintenance was performed on an asset; when the next maintenance is scheduled for an asset; maintenance request information, including creation date, due date, and progress notes.

Operations

The operator's job is to keep the line running and make sure that it is producing quality products. To do this, operators must keep track of how well their line is running (both in terms of speed and quality). They also need to be able to file maintenance requests when they have issues with the assets on their line.
Required Functionality: file maintenance requests; get quality data from assets on their line; get performance data for the whole line; get a prioritized list of production orders for their line; create maintenance requests.
Required Information: individual asset performance metrics; full line performance metrics; product quality readings.

Management

The production manager oversees the dispatch of production orders and ensures quotas are being met. Managers care about the productivity of all lines and the status of maintenance requests.
Required Functionality: create production orders; update production orders; cancel production orders; access line productivity data; elevate maintenance request priority.
Required Information: production line productivity levels (OEE); list of open maintenance requests.

Step 3: Data Sources – Thing List

Once you have identified the users' requirements, you'll need to determine what parts of your system must be connected. These will be the Things in your solution. Keep in mind that a Thing can represent many different types of connected endpoints. Here are some examples of possible Things in your system: devices deployed in the field with direct connectivity or gateway-connectivity to Foundation; devices deployed in the field through third-party device clouds; remote databases; connections to external business systems (e.g., Salesforce.com, Weather.com, etc.).

Factory Example

In our Smart Factory example, we have already identified the users of the system and listed requirements for each of those users. The next step is to identify the Things in our solution. In our example, we are running a factory floor with multiple identical production lines.
Each of these lines has multiple different devices associated with it. Let’s consider each of those items to be a connected Thing. Things in each line: Conveyor belt x 2 Pneumatic gate Robotic Arm Quality Check Camera Let's also assume we already have both a Maintenance Request System and a Production Order System that are in use today. To add this to our solution, we want to build a connector between Foundation and the existing system. These connectors will be Things as well. Internal system connection Thing for Production Order System Internal system connection Thing for Maintenance Request System NOTE: It is entirely possible to have scenarios in which you want to examine more granular-level details of your assets. For example, the arm and the hand of the assembly robot could be represented separately. There are endless possibilities, but for simplicity's sake, we will keep the list shorter and more high-level. Keep in mind that you can be as detailed as needed for this and future iterations of your solution. However, being too granular could potentially create unnecessary complexity and data overload.    Click here to view Part 2 of this guide.
Design Your Data Model Guide Part 2   Step 4: Data Sources – Component Breakout   Component Breakout     Once you have a full list of Things in your system (as well as requirements for each user), the next step is to identify the information needed from each Thing (based on the user's requirements). This involves evaluating the available data and functionality for each Thing. You then align the data and functionality with the user's requirements to determine exactly what you need, while eliminating that which you do not. This is important, as there can be cost and security benefits to only collecting data you need, and leaving what you don't. NOTE: Remember from the Data Model Introduction that a Thing's Components include Properties, Services, Events, and Subscriptions.   Factory Example   Using the Smart Factory example, let’s go through the different Things and break down each Thing's components that are needed for each of our users.   Conveyor Belts   The conveyor belt is simple in operation but could potentially have a lot of available data. Maintenance Engineer - needs to know granular data for the belt and if it has any alerts emergency shutdown (service) machine state (on/off) (property) serial number (property) last maintenance date (property) next scheduled maintenance date(property) power consumption (property) belt speed (property) belt motor temp (property) belt motor rpm (property) error notification (event) auto-generated maintenance requests (subscription) Operator - needs to know if the belt is working as intended belt speed (property) alert status (event) Production Manager - wants access to the data the Operator can see but otherwise has no new requirements   Robotic Arm   The robotic arm has 3 axes of rotation as well as a clamp hand. Maintenance Engineer - needs to know granular data for the arm and if it has any alerts time since last pickup (property): how long it has been since the last part was picked up by this hand? product count (property): how many products the hand has completed emergency shutdown (service) machine state (on/off) (property) serial number (property) last maintenance date (property) next scheduled maintenance date (property) power consumption (property) arm rotation axis 1 (property) arm rotation axis 2 (property) arm rotation axis 3 (property) clamp pressure (property) clamp status (open/closed) (property) error notification (event) 15.auto-generated maintenance requests (subscription) Operator - needs to know if the robotic arm is working as intended clamp status (open/closed) (property) error notification (event) product count (property): How many products has the hand completed? Production Manager - wants access to the data the Operator can see but otherwise has no new requirements   Pneumatic Gate   The pneumatic gate has two states, open and closed. Maintenance Engineer - needs to know granular data for the gate and if it has any alerts emergency shutdown (service) machine state (on/off) (property) serial number (property) last maintenance date (property) next scheduled maintenance date (property) power consumption (property) gate status (open/closed) (property) error notification (event) auto-generated maintenance requests (subscription) Operator - needs to know if the pneumatic gate is working as intended. 
gate status open/closed (property); error notification (event).
Production Manager - wants access to the data the Operator can see but otherwise has no new requirements.

Quality Control Camera

The QC camera uses visual checks to make sure a product has been constructed properly.
Maintenance Engineer - needs to know granular data for the camera and if it has any alerts: machine state on/off (property); serial number (property); last maintenance date (property); next scheduled maintenance date (property); power consumption (property); current product quality reading (property); images being read (property); settings for production quality assessment (property); error notification (event); auto-generated maintenance requests (subscription); product count (property: how many products the camera has seen).
Operator - needs to keep track of the quality check results and any problems with the camera setup: settings for production quality assessment (property); error notification (event); bad quality flag (event); product count (property: how many products the camera has seen).
Production Manager - wants access to the data the Operator can see but otherwise has no new requirements.

Maintenance Request System Connector

Determining the data needed from the Maintenance Request System is more complex than from the physical components, as it will be much more actively used by all of our users. It is important to note that the required functionality already exists in our system as-is, but it needs bridges created to connect it to a centralized system.
Maintenance Engineer - needs to receive and update maintenance requests: maintenance engineer credentials (property: authentication with the maintenance system); endpoint configuration for connecting to the system (property); get unfiltered list of maintenance requests (service); update description of maintenance request (service); close maintenance request (service).
Operator - needs to create and track maintenance requests: operator credentials (property: authentication with the maintenance system); endpoint configuration for connecting to the system (property); create maintenance request (service); get filtered list of maintenance requests for this operator (service).
Production Manager - needs to monitor the entire system, both the creation and tracking of maintenance requests, and needs to prioritize maintenance requests to keep operations flowing smoothly: production manager credentials (property: authentication with the maintenance system); endpoint configuration for connecting to the system (property); create maintenance request (service); get unfiltered list of maintenance requests (service); update priority of maintenance request (service).

Production Order System Connector

Working with the Production Order System is also more complex than the physical components of the lines, as it will be more actively used by two of the three users. It is important to note that the required functionality already exists in our existing production order system as-is, but it needs bridges created to connect to a centralized system.
Maintenance Engineer - will not need to know anything about production orders, as it is outside the scope of their job needs

Operator - needs to know which production orders have been set up for the line, and needs to mark orders as started or completed
operator credentials (property): authentication with the production order system
endpoint configuration for connecting to the system (property)
mark themselves as working a specific production line (service)
get a list of filtered production orders for their line (service)
update production orders as started/completed (service)

Production Manager - needs to view the status of all production orders and who is working on which line
production manager credentials (property): authentication with the production order system
endpoint configuration for connecting to the system (property)
get a list of production lines with who is working them (service)
get the list of production orders with filtering options (service)
create new production orders (service)
update existing production orders for quantity and priority (service)
assign a production order to a production line (service)
delete production orders (service)

Step 5: Data Sources – Thing-Component Matrix

Now that you have identified the Components necessary to build your solution (as well as the Things involved in enabling said Components), you are almost ready to create your Data Model design. Before moving on to the design, however, it is very helpful to get a good picture of how these Components interact with different parts of your solution. To do that, we recommend using a Thing-Component Matrix.

A Thing-Component Matrix is a grid in which you list Things in rows and Components in columns. This allows you to identify where there are overlaps between Components. From there, you can break those Components down into reusable Groups. Really, all you're doing in this step is taking the list of individual Things and their corresponding Components and organizing them. Instead of thinking of each item's individually-required functionality, you are now thinking of how those Components might interact and/or be reused across multiple Things.

Sample Thing-Component Matrix

As a generic example, look at the chart presented here. You have a series of Things down the rows, while there are a series of Components (i.e. Properties, Services, Events, and Subscriptions) in the columns. This allows you to visually identify which Components are common across multiple Things (which is very important in determining our recommendations for when to use Thing Templates vs. Thing Shapes vs. directly-instantiated Things).

If we were to apply this idea to our Smart Factory example, we would create two sections of our Thing-Component Matrix, i.e. the Overlapping versus Unique Components.

NOTE: It is not necessary to divide your Thing-Component Matrix between Overlapping vs. Unique if you don't wish to do so. It is done here largely for the sake of readability.

Overlapping Matrix

This matrix represents all the overlapping Components that are shared by multiple types of Things in our system:

Unique Matrix

This matrix represents the Components unique to each type of Thing:

Step 6: Model Breakdown

Breaking down your use case into a Data Model is the most important part of the design process for ThingWorx. It creates the basis on which every other aspect of your solution is overlaid. To do it effectively, we will use a multi-step approach.
This will allow us to identify parts we can group and separate, leading to a more modular design.

Entity Relationship Diagram

To standardize how Data Models are represented, it is important to have a unified view of what a representation might look like. For this example, we have developed an Entity Relationship Diagram schematic used for Data Model representation. We will use this representation to examine how to build a Data Model.

Breakdown Process

ThingWorx recommends following an orderly system when building the specifics of your Data Model. You've examined your users and their needs. You've determined the real-world objects and systems you want to model. You've broken down those real-world items by their Component functionality. Now, you will follow these steps to build a specific Data Model for your application.

Step 1: Prioritize the Groups of Components from your Thing-Component Matrix by each Group's Component quantity.
Step 2: Create a base Thing Template for the largest Group.
Step 3: Iterate over each Group, deciding which entity type to create.
Step 4: Validate the design through instantiation.

In the next several pages, we'll examine each of these steps in-depth.

Click here to view Part 3 of this guide.
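As a rough illustration of step 4 above (validating the design through instantiation), a minimal ThingWorx service sketch might look like the following. The entity names used here (Belt_TT, Belt100_ValidationTest) are placeholders, not part of the guide.

// Sketch only - assumes a base Thing Template named Belt_TT was created in step 2
var testThingName = "Belt100_ValidationTest";

// Instantiate a throwaway Thing from the base Thing Template
Resources["EntityServices"].CreateThing({
    name: testThingName,
    description: "Temporary Thing used to validate the Data Model design",
    thingTemplateName: "Belt_TT"
});
Things[testThingName].EnableThing();
Things[testThingName].RestartThing();

// Spot-check that the instantiated Thing derives from the expected template
var isValid = Things[testThingName].IsDerivedFromTemplate({ thingTemplateName: "Belt_TT" });
logger.info("Validation Thing derives from Belt_TT: " + isValid);

If the instantiated Thing exposes the Properties, Services, Events, and Subscriptions your users need, the breakdown holds; if not, revisit the Groups before building out the rest of the model.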
I recently had a customer who wanted to run services on ThingWorx from Power BI to retrieve existing operational data, and we were a bit stumped on how to pass the API key over in the headers, so I did a bit of Googling and pieced together the solution. It's not quite intuitive on the Power BI side, so I thought it would be helpful to share. If you have any other experience with integrating ThingWorx with Power BI, feel free to add a comment.

Prepare ThingWorx
Create an Application Key that has Run Time execution access to the services you need.
Understand the inputs needed for the service you would like to call. Below are examples with no parameters, a single string, a string plus an integer, and an InfoTable.

Power BI
Follow these steps in Power BI:

1. In Power BI, create a new blank query.

2. On the left, right-click on Query1 and go to the Advanced Editor.

3. Replace all of the body content with the following, substituting your API key, appropriate endpoint, and base URL as needed (this example has NO input parameters; examples with other parameters follow):

let
    appKey = "your-application-key-here",
    endpoint = "Things/YourThingNameHere/Services/YourServiceNameHere",
    baseUrl = "https://YourServerNameHere/Thingworx/",
    url = Text.Combine({baseUrl,endpoint}),
    body = "",
    request = Web.Contents(
        url,
        [
            Headers = [
                appKey = appKey,
                #"Content-Type" = "application/json",
                Accept = "application/json"
            ],
            Content = Text.ToBinary(body)
        ]
    ),
    Source = Json.Document(request)
in
    Source

4. Click "Done". You'll now see a warning about how to connect; click the "Edit Credentials" button.

5. Leave it on Anonymous and click "Connect".

6. You should now see the return data coming from ThingWorx.

Note that I had a little trouble with this authentication initially and it saved the wrong method. To clear that out, go to the ribbon bar item "Data source settings", select the server, and clear it out.

Other Examples

Here is an example for sending a single string parameter:

let
    appKey = "your-application-key-here",
    endpoint = "Things/YourThingNameHere/Services/YourServiceNameHere",
    baseUrl = "https://YourServerNameHere/Thingworx/",
    url = Text.Combine({baseUrl,endpoint}),
    body = "{""InputParameter"": ""InputValue""}",
    request = Web.Contents(
        url,
        [
            Headers = [
                appKey = appKey,
                #"Content-Type" = "application/json",
                Accept = "application/json"
            ],
            Content = Text.ToBinary(body)
        ]
    ),
    Source = Json.Document(request)
in
    Source

Here's an example of sending a string and an integer:

let
    appKey = "your-application-key-here",
    endpoint = "Things/YourThingNameHere/Services/YourServiceNameHere",
    baseUrl = "https://YourServerNameHere/Thingworx/",
    url = Text.Combine({baseUrl,endpoint}),
    body = "{""InputString"": ""Hello, world!"", ""InputNumber"" : 42}",
    request = Web.Contents(
        url,
        [
            Headers = [
                appKey = appKey,
                #"Content-Type" = "application/json",
                Accept = "application/json"
            ],
            Content = Text.ToBinary(body)
        ]
    ),
    Source = Json.Document(request)
in
    Source

Here is an example for sending an InfoTable. Note that you must supply the dataShape with fieldDefinitions. If you're using an existing Data Shape, you can get the JSON by using the service GetDataShapeMetadataAsJSON() that is on the data shape.
let
    appKey = "your-application-key-here",
    endpoint = "Things/YourThingNameHere/Services/YourServiceNameHere",
    baseUrl = "https://YourServerNameHere/Thingworx/",
    url = Text.Combine({baseUrl,endpoint}),
    body = "{""propertyNames"": { ""rows"": [ { ""name"": ""FirstEntityName"", ""description"": ""The first entity"" }, { ""name"": ""SecondEntityName"", ""description"": ""The second entity"" }], ""dataShape"": { ""fieldDefinitions"": { ""name"": { ""name"": ""name"", ""aspects"": { ""isPrimaryKey"": true }, ""description"": ""Entity name"", ""baseType"": ""STRING"", ""ordinal"": 0 }, ""description"": { ""name"": ""description"", ""aspects"": {}, ""description"": ""Entity description"", ""baseType"": ""STRING"", ""ordinal"": 0 } } } }}",
    request = Web.Contents(
        url,
        [
            Headers = [
                appKey = appKey,
                #"Content-Type" = "application/json",
                Accept = "application/json"
            ],
            Content = Text.ToBinary(body)
        ]
    ),
    Source = Json.Document(request)
in
    Source

If I find any more interesting ways to use Power BI with ThingWorx services, I'll add them on here.
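If you need a quick service on the ThingWorx side to point these queries at while testing, a minimal sketch might look like the one below. The Thing, service, and DataShape names are placeholders, and the DataShape is assumed to have name and description STRING fields.

// Sketch of a test service with an INFOTABLE output - "EntityInfo_DS" is a placeholder DataShape
var result = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape({
    infoTableName: "InfoTable",
    dataShapeName: "EntityInfo_DS"
});

// A couple of static rows so the Power BI query has something to display
result.AddRow({ name: "FirstEntityName", description: "The first entity" });
result.AddRow({ name: "SecondEntityName", description: "The second entity" });

Point the no-parameter query above at this service and the two rows should come back as JSON for Power BI to expand.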
Background

Getting a performance benchmark of your running application is an important thing to do when deploying and scaling up an application in production. This not only helps you focus in on performance issues quickly, but also allows for safe planning of scale-up and resource sizing based on real, concrete data.

I recently created a tool and made a post about capturing and analysing ThingWorx utilisation statistics to do such an analysis, as well as identifying potential performance bottlenecks. Although they are rich and precise, utilisation statistics fall short in a number of areas - specifically counting and timing specific service executions, as well as identifying and sorting by the host executing the service.

Tomcat Access Log Analysis

As ThingWorx is a Tomcat web application, Tomcat logs details of the requests being made to the application server and ThingWorx REST API. The default settings include the host (IP address), date/timestamp, and request URI, which can be decoded to reveal relevant details like the calling entities and service executions.

Adding 3 key additional variables (%s %B %D) to the server.xml access log value also gives us the HTTP response code, the bytes returned from Tomcat, and the service execution time. This is super useful as we can now determine the exact time of service executions, and run statistics on their execution totals and execution time.

Once you have an access log file looking like the one above, you can attempt to load it into the access_log sheet in the analysis Excel workbook that I created. You do this by clicking on the access_log table, then selecting "Data > Get Data > Data Source Settings". You'll then be prompted with the following or similar pop-up allowing you to navigate to your access_log file to select and then load.

It should be noted that you'll have to Refresh the table after selecting the new access_log.txt file so that it is read in and populates the table. You can do this by right-clicking on the table and selecting Refresh, or by using the Data > Refresh button.

This workbook relies on a number of formulas to slice and dice the timestamp, and during my attempts at importing I had significant issues with this due to some of the ways that Excel does things automatically without any manual options. You really need to make sure that the timestamps are imported and converted correctly, or something in the workbook will likely not work as intended. One thing that I had to do was to add 1 second to round up 00:00:00 for the first entries, as these were being imported as a date without the time part, while the following lines imported as a date/time.

Depending on how many lines your file has, you'll likely also have to "Fill Down" the formulas on the right side of the sheet, which may be empty in the table after importing your new data set. I had the best results by selecting the cells in question on the last filled row, scrolling down to the bottom of the table, holding Shift, clicking on the bottom-right cell, and then selecting Home > Fill > Down to pull the formulas down from the top.

Once the data is loaded, you'll be able to start poking around. The filters and sorting by the named columns are really helpful, as you can start out by doing things like removing a particular host, sorting by longest execution times, selecting execution times greater than 4 seconds, or only showing activity aimed at a particular entity or service.
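For reference, the access log Valve entry in Tomcat's server.xml might end up looking something like the sketch below once the extra pattern codes are added; the directory, prefix, and suffix values shown are just common defaults and will vary by install (%s is the HTTP status code, %B the bytes returned, and %D the request time in milliseconds).

<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
       prefix="localhost_access_log" suffix=".txt"
       pattern="%h %l %u %t &quot;%r&quot; %s %B %D" />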
You really need to make sure that the imported data worked fine and looks right, as the next steps will completely break if not. With the data loaded, you can now go to the Summary Data sheet, right-click on one of the tables, and select Refresh. This will reload the data into the Pivot Tables and re-run their calculations.

Once the refresh is complete, you should see the table summary as shown here; there are Day, Hour, and Minute expand/collapse buttons. You should also see the Day, Hour, and Month fields showing in the Field Definitions on the right. This is the part that is painful -- if the dates are in the wrong format and Excel is unable to auto-detect everything in the same way, then you will not get these automatically created fields.

With the data reloaded and Pivot Tables re-built, you should be able to go over to the Dashboard sheet to start looking at and analysing the graphs. This one is showing the Top 10 services organised into hourly buckets with cumulative service execution times.

I'm not going to go into all of the workbook's features, but you can also individually select a set of key services that you want to look at together across both the execution count and execution time dimensions.

Next you can see the coordinated view of total service execution time over the number of service executions. This is helpful for spotting patterns, such as a service executing longer while being triggered the same number of times, versus one that is both executed more often and taking more time. I've created a YouTube video (see bottom) which goes through using all of the features as well as providing other pointers on using it.

Getting into a finer level of detail, this "bonus" sheet provides a Pivot Table and Pivot Chart which allow for exploring minimum, maximum and average execution time for a specific service. Comparing this with the utilisation subsystem metrics taken during the same period provides much deeper insight, as we can pinpoint where the peaks were, how long they lasted, and where the slow executions were in relation to other services being executed at that time (for example, identifying many queries/data-processing services occurring simultaneously).

Without further ado, you can download and play with my ThingWorx Tomcat Access Log Analysis Excel Workbook, and check out the recorded demonstration and explanation for more details on loading and analysis use. [YouTube] ThingWorx Tomcat Access Logs - Service Performance Analysis
Back in 2018 an interesting capability was added to ThingWorx Foundation allowing you to enable statistical calculation of service and subscription executions.

We typically advise customers to approach this with caution for production systems, as the additional overhead can be more than you want to add to the work the platform needs to handle. That said, these statistics, if used consciously, can be extremely helpful during development, testing, and troubleshooting to help ascertain which entities are executing which services and where potential system bottlenecks or areas deserving performance optimization may lie.

Although I've used the Utilization Subsystem services for statistics for some time now, I've always found that the Composer table view is not sufficient for a deeper multi-dimensional analysis. Today I took a first step in remedying this by getting these metrics into Excel, and I wanted to share it with the community as it can be quite helpful in giving developers and architects another view into their ThingWorx applications, and in taking and comparing benchmarks to ensure that operation and scaling are happening as expected when the application is put into production.

Utilization Subsystem Statistics

You can enable and configure statistics calculation from the Subsystem Configuration tab. The help documentation does a good job of explaining this, so I won't repeat it here. Base guidance is not to use Persisted statistics, nor percentile calculation, as both have significant performance impacts. Aggregate statistics are less resource intensive as there are fewer counters, so these would be more appropriate for a production environment. Specific entity statistics require greater resources, and this will scale up with the number of provisioned entities that you have (i.e. 1,000 machines versus 10,000 machines), whereas aggregate statistics will remain more constant as you scale up your deployment and its load.

Utilization Subsystem Services

In the subsystem Services tab, you can select "UtilizationSubsystem" from the filter drop-down and you will see all of the relevant services to retrieve and reset the statistics.

Here I'm using the GetEntityStatistics service to get entity statistics for Services and Subscriptions, giving us something like this.

Using Postman to Save the Results to File

I have used Postman to do the same REST API call, format the results as HTML, and save these results to file so that they can be imported into Excel.

You need to call '/Thingworx/Subsystems/UtilizationSubsystem/Services/GetEntityStatistics' as a POST request with the Content-Type and Accept headers set to 'application/xml'. Of course you also need to add an appropriately permissioned and secured AppKey to the headers in order to authenticate and get service execution authorization.

You'll note the Export Results > Save to a file menu over on the right to get your results saved.

Importing the HTML Results into Excel

As simple as getting a standard web-formatted file into Excel should be, it didn't turn out to be as easy as I would have hoped, and so I had to switch over to Windows to take advantage of Power Query.

From the Data ribbon, select Get Data > From File > From XML. Then find and select the HTML file saved in the previous step.

Once it has loaded the file and done some preparation, you'll need to select the GetEntityStatistics table in the results on the left.
This should display all of the statistics in a preview table on the right.

Once the query completes, you should have a table showing your statistical data ready for... well... slicing and dicing.

The good news is that I did the hard part for you, so you can just download the attached spreadsheet and update the dataset with your fresh data to have everything parsed out into separate columns for you.

Now you can use the column filters to search for entity or service patterns, or to select specific entities or attributes that you want to analyze. You'll need to clear the column filters later to get your whole dataset back.

Updating the Spreadsheet with Fresh Data

In order to make this data and its analysis more relevant, I went back, reset all of the statistics, and took a new sample which was exactly one hour long. This way I would get correct recent min/max execution time values, as well as a better understanding of just how many executions/triggers are happening in a one-hour period for my benchmark.

Once I had the new HTML file saved, I went into Excel's Data ribbon, selected a cell in the data table area, and clicked "Queries & Connections", which brought up the pane on the right showing my original query.

Hovering over this query, I'm prompted with some options and I chose "Edit".

Then I clicked on the tiny little gear to the right of "Source" in the pane on the right side.

Finally I was able to select the new file and Power Query opened it up for me.

I just needed to click "Close & Load" to save and refresh the query providing data to the table.

The only thing at this point was that I didn't have my nice little sparklines, as my regional decimal character is not a period - so I selected the time columns and did a "Replace All" from '.' to ',' to turn them into numbers instead of text.

Et voilà!

There you have it - ready to sort, filter, search and review to help you better understand which parts of your application may be overly resource hungry, or even to spot faulty equipment that may be communicating and triggering workflows far more often than it should.

Specific vs General

Depending on the type of analysis that you're doing, you might find that the aggregate statistics are a better option. As there will be far, far fewer of them than the entity-specific statistics, they'll do a better job of giving you a holistic view of what is happening with your ThingWorx application's execution.

The entity-specific data set that I'm showing here would be a better choice for troubleshooting and diagnostics, to try to understand why certain customers/assets/machines are behaving strangely, as we can drill specifically into these stats. Keep in mind however that you should then compare these findings with the general baseline to see how this particular asset is behaving compared to the whole fleet.

As a size guideline - I did an entity-specific version of this file for a customer with 1,000 machines and the Excel spreadsheet was 7 MB, compared to the 30 kB of the one attached here, and just opening and saving it was tough for Excel (likely due to all of my nested formulas). Just keep this in mind as you use this feature, as there is memory overhead, which also means garbage collection and associated CPU usage.
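If you prefer scripting the export over Postman, the same request can be reproduced with a few lines of Node.js - a sketch only, assuming Node 18+ with the built-in fetch, and with the server URL and AppKey as placeholders. Note that the attached workbook was built around Postman's HTML export, so treat this raw XML output as a starting point.

// Reproduces the Postman call: POST GetEntityStatistics with XML in/out and an appKey header
const fs = require("fs");

const url = "https://YourServerNameHere/Thingworx/Subsystems/UtilizationSubsystem/Services/GetEntityStatistics";

fetch(url, {
    method: "POST",
    headers: {
        appKey: "your-application-key-here",
        "Content-Type": "application/xml",
        Accept: "application/xml"
    }
})
    .then(res => res.text())
    .then(xml => fs.writeFileSync("GetEntityStatistics.xml", xml))  // save for import into Excel / Power Query
    .catch(err => console.error("Request failed:", err));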
Analytics projects typically involve using the Analytics API rather than Analytics Builder to accomplish different tasks. The attached documentation provides examples of code snippets that can be used to automate the most common analytics tasks on a project, such as:
Creating a dataset
Training a model
Real-time scoring (predictive and prescriptive)
Retrieving the validation metrics for a model
Appending additional data to a dataset
Retraining the model
The documentation also provides examples that are specific to time series datasets. The attached .zip file contains both the document and some entities that you need to import into ThingWorx to access the services provided in the examples.
I've had a lot of questions over the years working with Azure IoT, Kepware, and ThingWorx that I really struggled to get answers to. I was always grateful when someone took the time to help me understand, and now it is time to repay the favour.

People ask me many things about Azure (in a ThingWorx context), and one of the common ones has been about MQTT communications from Kepware to ThingWorx using IoT Hub. Recently the topic has come up again as more and more of the ThingWorx expert community start to work with Azure IoT. Today, I took the time to build, test, validate, and share an approach and utilities to do this in cases where the Azure Industrial IoT OPC UA integration is overkill or simply a step later in the project plan. Enjoy!

End-to-end Integration of Kepware to ThingWorx using MQTT over Azure IoT (YouTube 45-minute deep-dive)

ThingWorx entities for import (ThingWorx 9.0)

This approach can be quite good for a simple demo if you have a Kepware Integrator or Kepware Enterprise license, but the use of IoT Gateway for many servers and tags can be quite costly.

Those looking to leverage Azure IoT Hub for MQTT integration to ThingWorx would likely also find this recorded session and shared utilities quite helpful.

Cheers, Greg
For a recent project, I was needing to find all of the children in a Network Hierarchy of a particular template type... so I put together a little script that I thought I'd share. Maybe this will be useful to others as well.

In my situation, this script lived in the Location template. This was useful so that I could find all the Sensor Things under any particular node, no matter how deep they are.

For example, given a network like this:
Location 1
    Sensor 1
    Location 1A
        Sensor 2
        Sensor 3
        Location 1AA
            Sensor 4
    Location 1B
        Sensor 5

If you run this service in Location 1, you'll get an InfoTable with these Things: Sensor 1, Sensor 2, Sensor 3, Sensor 4, Sensor 5
From Location 1A: Sensor 2, Sensor 3, Sensor 4
From Location 1AA: Sensor 4
From Location 1B: Sensor 5

For this service, these are the inputs/outputs:
Inputs: none
Output: InfoTable of type NetworkConnection

// CreateInfoTableFromDataShape(infoTableName:STRING("InfoTable"), dataShapeName:STRING):INFOTABLE(AlertSummary)
let result = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape({
    infoTableName : "InfoTable",
    dataShapeName : "NetworkConnection"
});

// since the hierarchy could contain locations or sensors, need to recursively loop down to get all the sensors
function findChildrenSensors(thingName) {
    let childrenThings = Networks["Hierarchy_NW"].GetChildConnections({
        name: thingName /* STRING */
    });

    for each (var row in childrenThings.rows) {
        // row.to has the name of the child Thing
        if (Things[row.to].IsDerivedFromTemplate({thingTemplateName: "Location_TT"})) {
            findChildrenSensors(row.to);
        } else if (Things[row.to].IsDerivedFromTemplate({thingTemplateName: "Sensor_TT"})) {
            result.AddRow(row);
        }
    }
}

findChildrenSensors(me.name);
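As a quick usage sketch - assuming the script above is saved as a service named, say, GetDescendantSensors on the Location template (the name is just a placeholder) - another service could then call it like this:

// Hypothetical service name - adjust to whatever you named the service above
var sensors = Things["Location 1"].GetDescendantSensors();

for each (var row in sensors.rows) {
    // row.to holds the name of each Sensor Thing found under this node
    logger.info("Found sensor: " + row.to);
}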
Applicable Releases: ThingWorx Platform 7.0 to 8.5

Description:

Covers how to apply patch upgrades to a ThingWorx installation, with the following agenda:
How to read the ThingWorx version
Upgrading to a major/minor version of the platform
Focus on upgrading to a patch version of the platform
Upgrading extensions

Always check the patch release notes for additional information and specific steps.
Hiya,

I recently prepared a short demo which shows how to onboard and use Azure IoT devices in ThingWorx, and added some usability tips and tricks to help others who might struggle with some of the things that I did.

The good news... I recorded and posted it to YouTube here.

•Connect Azure IoT Hub with ThingWorx (to be updated soon for the 9.0 release)
•Using the Azure IoT Dev Kit with ThingWorx
•Getting the Azure IoT Hub Connector Up and Running (V3/8.5)

Enjoy, and don't hesitate to comment with your own tips and feedback.

Cheers,

Greg
Here is a spreadsheet that I created which helps to estimate data transfer volumes for the purpose of estimating egress costs when transferring data out of region.

You'll find that there are a number of input parameters - such as numbers of assets, properties, file sizes, and compression ratio - as well as a page with the cost elements, which can be updated from the Interweb.
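For a rough idea of the arithmetic behind such an estimate, here is a sketch only - the figures and formula are illustrative assumptions, not the contents of the attached spreadsheet:

// Illustrative egress volume estimate - every number here is a made-up placeholder
var assets = 1000;                    // connected assets
var propertiesPerAsset = 25;          // logged properties per asset
var writesPerPropertyPerDay = 1440;   // e.g. one value per minute
var bytesPerValue = 50;               // rough payload size per logged value
var compressionRatio = 4;             // e.g. 4:1 compression on the wire

var bytesPerDay = (assets * propertiesPerAsset * writesPerPropertyPerDay * bytesPerValue) / compressionRatio;
var gbPerMonth = (bytesPerDay * 30) / (1024 * 1024 * 1024);

console.log("Estimated egress: " + gbPerMonth.toFixed(1) + " GB/month");

Multiplying that figure by the per-GB egress price for your cloud region then gives an indicative monthly cost, which is the kind of number the spreadsheet's cost elements page is there to refine.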