IoT & Connectivity Tips

The Connection Server, InfluxDB, and Grafana can work together to display Connection Server metrics in graphical charts.

1. Download and set up the Connection Server.
2. Download InfluxDB and configure it by editing the <influxdb home>\influxdb.conf file. Note: the [admin] section controls the InfluxDB admin login page, [http] holds the settings Grafana connects to, and [graphite] holds the settings the Connection Server connects to (a sketch of these sections is shown after the steps below).
3. Run influxd.exe from a Windows command prompt: <path>/influxd -config <path>/influxdb.conf
4. Run influx.exe (in this example the port is 8008): <path>/influx -port 8008
5. Start the Connection Server.
6. Log in to InfluxDB using the URL http://localhost:8018, choose the database graphite, and run the query SHOW MEASUREMENTS. Note: many Connection Server metrics should be listed.
7. Double-click grafana-server.exe (<path>/grafana-4.2.0.windows-x64/grafana-4.2.0/bin).
8. Log in to Grafana at http://localhost:3000. The default username and password are admin/admin.
9. Create a new Data Source. Note: the Type, Url, and Database properties are required. Click Save & Test; if the connection is set up successfully, a green confirmation message pops up.
10. Create a new dashboard.
11. Add a new Row and set up its metrics by creating new queries.

Sample result:
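For reference, here is a minimal sketch of what the influxdb.conf sections mentioned in step 2 might look like on an InfluxDB 1.x install. The exact keys should be checked against your InfluxDB version; the ports follow the example above, and the graphite listener port (2003) is only the usual default, not something stated in the original steps.

# influxdb.conf (sketch for InfluxDB 1.x; ports follow the example above)

[admin]
  # InfluxDB admin login page (http://localhost:8018 in this example)
  enabled = true
  bind-address = ":8018"

[http]
  # HTTP API that Grafana and the influx CLI connect to (port 8008 in this example)
  enabled = true
  bind-address = ":8008"

[[graphite]]
  # Graphite listener that the Connection Server writes its metrics to
  enabled = true
  bind-address = ":2003"
  database = "graphite"
  protocol = "tcp"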
View full tip
We are bringing the LiveWorx UX Lab to you this year! Contribute to the design and development of PTC's IoT & AR products from the comfort of your home. Participate in online 1:1 sessions with PTC researchers and designers to take our prototypes and conceptual designs for a spin, and share your expertise to directly impact the experience of our future products.

For all LiveWorx 2020 UX Lab sessions, click here.

IoT & AR session links:
SOLUTION CENTRAL: User Management
SOLUTION CENTRAL: PTC Solution Deployment
SERVICE: Remote Monitoring Solution
THINGWORX: Managing Building Blocks in Composer
THINGWORX KEPWARE EDGE: Container Based Software at Scale
EDGE: Next Generation IoT Edge (Asset) Manager
THINGWORX: Asset Modeling
DIGITAL PERFORMANCE MANAGEMENT SOLUTION: Bottleneck ID & Balanced Scorecard
MANUFACTURING: PTC Solution Strategy using Building Blocks
IMPLEMENTING AR and IOT: Successes and Difficulties
VUFORIA EDITOR: Authoring Work Instructions
VUFORIA EDITOR: Authoring Work Containing Augmented Reality
View full tip
Video Author: Asia Garrouj
Original Post Date: March 31, 2017
Applicable Releases: ThingWorx Analytics 7.4 to 8.1

Description: This video walks you through the first steps of setting up Analytics Manager for real-time scoring and demonstrates how to create an Analysis Provider and start the ThingPredictor Agent.

Please note: the startup command for the Agent shown in this video has changed in Release 8.1. Please refer to the PTC Help Center.
View full tip
I've been working with the 7.x and 8.x versions of ThingWorx over the last several months doing integrations, so I have a few development instances I've been working with. I'd like to go over some of the issues I've encountered and provide some potential best practices for working with ThingWorx in development mode and transitioning to production. Typically, I'll create an instance and develop the integration using the Administrator user, which is the only user created when you start up a ThingWorx instance. Lately, I've been having trouble with a lot of authentication failures as I build.

Problem number 1: The new User Lockout feature
Around 7.2, a User Lockout feature was added to ThingWorx to help prevent brute-force cracking of passwords. You are now allowed only so many authentication failures in a given period of time before the user is automatically put in lock mode for a number of minutes. Unfortunately (but realistically, in a more secure manner), a lockout appears as more authentication failures; in reality, it is because the user you just successfully authenticated as has been automatically locked. I came very close to wiping out an entire instance because I just couldn't get signed in. Then I remembered the lockout, worked on something else for a while, and tried the server again; this time I was successful because the lock timeout had expired. To see the settings in your system, look at the User Management Subsystem configuration page. The Account Lockout Settings default to 5 failures in a 5 minute period, which results in a 15 minute lockout. One note: setting the lockout duration to 0 doesn't disable it, it permanently locks out the account, which will then have to be reset by a member of the Administrators group.

Problem number 2: AppKeys
There is a new setting in the PlatformSubsystem that by default prevents the use of AppKeys in a URL. This setting is present because it is more secure: if you use an appKey as a query parameter in a URL, the appKey will be logged as clear text in your web server's access log. This is a security risk - an appKey, especially one not using a whitelist setting, might make it possible for someone to gain access to your system by managing to see the access log (maybe via some log analysis tool you feed your logs to). You can disable this setting, but that is not recommended for production. You may have to disable it for a while because you may have code that uses this technique and must be updated before you can enforce the policy, but you should deal with this in your production systems as soon as practical.

Problem number 3: REST API testing
As a ThingWorx developer, you're probably aware of tools like Curl, Postman, and even the web browser that let you exercise a REST API as you develop your code so you can validate your functionality. The REST guidelines specify that you should use the GET method to retrieve data and other methods such as PUT and POST to create or update data. The issue is that it is easiest to test an API call if you can execute it from a web browser; however, the web browser always uses the GET method to make a request. This means that PUT and POST (along with other methods) will not work from your browser. ThingWorx originally interpreted the incoming request and would internally reroute it to the POST or PUT functionality. This is insecure because it makes it too easy to execute services from a browser.
A setting was added to the PlatformSubsystem to allow for a gradual transition to the more secure configuration. Turn this on in developer mode to simplify your testing of REST calls, but don't leave it on in production mode, as it provides a potential attack vector for your server.

So I have some recommendations:

1) Set up an additional administrative user upon installation. If you only have one user defined and it gets locked out, you're stuck until the lockout times out. Worse, if for some reason you set the timeout value to 0, you're locked out of ThingWorx forever; your only choices will be to hack the database to unlock the user or to wipe out the instance and start over. I just went through a situation where I did create the second user but forgot to add it to the Administrators group. So I did something else for 20 minutes to make sure the lockout had cleared, then added the user to the Administrators group, but got distracted and never pressed the Save button, so it locked up again. Make sure you have the user created and functional immediately upon installing the instance - don't wait until you're getting locked out by some loop that's not authenticating properly. Even if you were logged in as your Administrator user, the lockout will cause a failure the next time you try to do something in Composer, like turn off the lockout checkbox!

2) Test your REST calls with Curl or Postman - not the web browser. Don't test your code in a loop until you've tested it in isolation to be sure it's not going to fail authentication for some reason (which may include violating the PlatformSubsystem settings above). Don't use the browser to do the testing - it will require disabling the secure settings. Use Curl or, even better, Postman or a similar tool to test your REST calls - it will give you better formatted output than Curl. And you can easily put the appKey in as a header (where it should be) instead of as a parameter on the URL or in the body (a minimal curl sketch is included at the end of this post).

3) Tighten up your appKeys where possible. Since an appKey is effectively a user/password replacement, you should protect it in the same manner - keep it out of log files by not allowing it as a URL parameter, and use the whitelist to keep it from being used for other purposes. If you have a server-to-server connection, whitelist the server that will be making the calls to you. What I'm not sure of is whether this is really IP addresses only or whether you can use a DNS name and it will look up the IP address and ensure the request is in fact coming from the expected source. Someone else might be able to comment on this.

4) Test with the PlatformSubsystem settings off. Make sure you can run your server without the method redirect or appKey-as-parameter settings in the PlatformSubsystem. Those settings are potential security vulnerabilities. You may find some ThingWorx code that requires one or the other of those settings; please be sure to report this through PTC Tech Support so it can be fixed.
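For recommendation 2, here is a minimal curl sketch of a service call with the appKey passed as a request header instead of a URL parameter. The server, Thing, service, and key values are placeholders, not anything from this post.

# Execute a ThingWorx service with the appKey sent as a request header.
# Replace <server>, <ThingName>, <ServiceName>, and <appKey value> with your own values.
curl -X POST "https://<server>/Thingworx/Things/<ThingName>/Services/<ServiceName>" \
     -H "appKey: <appKey value>" \
     -H "Content-Type: application/json" \
     -H "Accept: application/json" \
     -d '{}'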
View full tip
Durable Queues Are Here
by Tori Firewind, Principal Cloud Architect

Introduction
Well folks, the durable queue is here, and it is… durable! We tried everything in our dev ops arsenal to bring it down, but no matter what we threw at it, Event Hub stayed up. No data was lost in any test scenario. ThingWorx 10.0 is a remarkably more mature and stable offering than ever before with its new use of Kafka to prevent data loss, as well as its internal queue management and queue diagnostics features.

As we announced last quarter, new diagnostics features allow us to record diagnostic data from the moment a problem starts, ensuring RCA can begin immediately, without further time spent waiting for issues to recur. These are highly configurable, and PTC is ready to support customers opting in to acceleration-based diagnostics!

Coming out now is the new internal throttling mechanism within ThingWorx, which ensures that even when queues max out, regardless of what those queues are doing, ThingWorx remains up and capable of other activity. In some of our failed scale test scenarios, the event queue was maxed out for many hours without any subsequent out-of-memory crash of the Platform. It was remarkably durable!

It is even better if the durable queue is opted in to, because then those events also happen faster and more reliably. Durable events fire immediately within the Platform when a durable property is updated, and both the property update and the event go to Event Hub simultaneously. The load within Event Hub is balanced independently and processed more quickly than by ThingWorx, improving the overall performance of both property updates and events, while still leaning heavily on ThingWorx for web access, data redirection, and storage.

When the lag is well controlled (the subject of most of the rest of this article), the property values go in, they come out within milliseconds, and the latency is not significant in spite of the added component. And of course, if something happens to the Platform in the meantime, never fear, for the data truly is preserved and accessible within Event Hub.

Data loss is a thing of the past with ThingWorx 10.0!

Configuration Situation
The one sacrifice of durability is scale, which can be challenging with Event Hub. There are some key considerations when optimizing ThingWorx for throughput, which should be considered necessary, as well as when sizing Event Hub.

ThingWorx Throughput Optimization
Within ThingWorx, go to the system object and edit it (this may require admin permissions). Modify the Configuration to reduce the overall number of threads, which in turn reduces the distribution of the Event Hub load, allowing each thread's share to be processed more quickly. Also lower the buffer scan rate for persistent properties so they flush more often.

Especially lower the maximum number of items before flushing, the buffer that usually delays writes to the database so as not to overwhelm it. That is less of a factor here, as Event Hub has an internal load balancer and is better suited for throughput than a database would be. These are the settings to apply to all opt-in queues for optimal performance.

Event Hub Sizing and Partition Optimization
Within Event Hub, there are several types of processing units. We will focus on the lower two tiers, as the highest tier is very expensive and less common to use, and the concept is the same as for the Premium (mid) tier.

The Standard tier uses TUs, a.k.a. "Throughput Units", which are less performant but also less resource intensive, and so much less expensive. There is a maximum of 40 TUs overall, and 32 partitions per Event Hub, in Standard. There is one Event Hub each for Logged Properties, Persistent Properties, and Unordered Events in both Standard and Premium.

The Premium tier instead uses PUs, a.k.a. "Processing Units", which are more performant and more resource intensive, with lower commit request latency, meaning the commits within Event Hub happen faster. The data is received faster and the cost is greater, but the stability is also greater and the risk of runaway lag or eventual data loss is much lower. The risks are much milder than before, and recovery is discussed below.

In Premium, there is a maximum of 100 partitions per Event Hub, with 200 total per PU. There is a maximum of 16 PUs, and these go only in increments of 2. There are diminishing returns with more resources, however, directly proportional to the number of things: more things overall will reduce the write capabilities within Event Hub, as more CPU resources have to be spent on the network communication portion of the data exchange.

[Charts comparing low, medium, and high partition counts]

It is better to use more partitions than fewer, and a higher number of partitions will result in less latency and lower mean lag. There is always some lag, however, as it is calculated from the number of items queued versus completed. Both of these queues are very active, and healthy lag is usually between 60 - 80% of the total property writes per second, with peaks that do not increase over time. Sometimes the lag can be spiky, which must be considered in the alerting infrastructure.

The mean load should be significantly less than 16 PUs and should be load tested, so that there is room to scale up and recover any lag that accrues from the unpredictable nature of production systems. Always leave room for spikes!

Recovery of the Event Hub
The short version: do not modify the partitions to resolve runaway lag.

If more partitions are added while the lag is falling behind, then instead of helping to catch up, they significantly delay the recovery. Anything currently in Event Hub will not be distributed across the new partitions, only new things that are added later, but all of the partitions, including the new ones, will still be polled for data, which slows things down even more.

The right way to deal with runaway lag is to increase the TUs or PUs to a decently higher setting temporarily, let the lag catch up, then increase the number of partitions, and then wait and see how the server responds before finally downsizing once again. It is important to consider that there is a maximum size for processing data and a maximum number of partitions per Event Hub, creating a hard upper limit for performance and scale.

Make sure any Event Hub instance is sized small enough to allow for an upsize in the case of runaway lag. Edge load is not guaranteed to remain perfectly steady; generally speaking, there can be surprise disconnections, reconnections, and spikes in utilization. There really is no other way to ensure no data loss occurs due to runaway lag, especially since there usually is no way to turn off the Edge load at will in production.

[Chart caption: Lag grew to the hundreds of thousands quickly and was surely beyond recovery at this size. The partitions were increased at 11:45 to demonstrate the poor distribution of data processing within Event Hub. It took around 15 hours to recover.]

[Chart caption: Here it is up close; see how every partition is doing a tiny amount of work, and how long it takes?]

With too much lag, the data will be lost in one of two ways: by not being added to an already full queue within Event Hub, or by erroring out as Event Hub tries to pass it back, with a variety of errors. If Event Hub backs up too much to be recovered by upsizing, or it cannot be upsized enough, it can be deleted; only the type of data affected will see loss, and with no downtime for ThingWorx.

A Healthy Example
An XL ThingWorx deployment was used to ensure that the Platform was not the limiting factor. The required TUs and PUs are the figures calculated by the Grafana dashboard, coming from the Kafka metrics. The average latency for subscriptions is calculated by having a start datetime property (not logged or persisted) update when the rest of the property updates fire, and an end datetime property update when the subscriptions to the persistent properties run; the timespan is then calculated and written to the script log.

This example was an XL-sized application: 80k things, each thing with 20 properties total, 10 Logged and 10 Persistent, writing to Event Hub twice a minute. There were 5 events as well to measure the latency, but due to the design of the test (property updates fired from a timer subscription), opting in to Durable Events causes performance issues that affect the test results. That is why events show up in the Event Queue, which does not happen in opt-in tests.

These are calculated by the Kafka Metrics dashboard:
Required TUs: 115
Required PUs: 2

These were what was configured for this test:
TUs Configured: N/A
PUs Configured: 16
Partitions (respectively): 100, 100, 0
Average Latency for Subscriptions: < 100ms

The test begins at 11 am. Lag is steady and the spikes are not increasing over time (though they come close).

The property write rate includes the 20 properties that go to Event Hub, plus the 10 datetime properties for measuring latency, and one additional infotable property for a more realistic load.

This looks the same as it usually does; there is no change to performance.

This is high because of the design of the test: all of the things update on thing-template-level timer subscriptions. It is much lower with opt-in for durable events.

How durable!
View full tip
Reminder (and for some, announcement!) that the new ThingWorx 8 sizing guide is available here  https://www.ptc.com/en/support/refdoc/ThingWorx_Platform/8.0/ThingWorx_Platform_8_x_Sizing_Guide
View full tip
Introduction
The In-Memory Column Store keeps data in a columnar format, in contrast to the conventional row format, which allows users to run faster analytics; the idea is to push the computation as close to the data store as possible. In this post I'll configure the Oracle database to enable this feature and then populate one or more tables into the In-Memory Column Store. This could be particularly helpful if you are using Oracle 12c as an external data store - storing data in database tables via a JDBC connection, or current/historic values from a DataTable, Stream, or ValueStream - for running analytics or DML with lots of joins that require a lot of computation before the data is finally presented on your Mashup(s). For this post I used data generated by a temperature sensor and stored in a ValueStream, exported it to CSV from the ValueStream, and imported it into an Oracle table.

In-Memory Column Store vs. In-Memory Database
As mentioned above, Oracle 12c version 12.1.2 comes with a built-in In-Memory Column Store feature. As the name suggests, it allows data to be populated in RAM, enabling high-speed transactions and analytics without the need to traverse the HDD; in some cases this is much faster than the buffer cache of the SGA. Without going into too much nitty-gritty, it's important to note that the In-Memory Column Store does not equate to an in-memory database. While it could be possible to move an entire schema, if it's small enough, into the memory defined for the In-Memory Column Store, the idea is to speed up analytics on one or more tables that are heavily queried by users. If you are interested in an in-memory database as a persistence provider for ThingWorx, please refer to the documentation Using SAP HANA as the Persistence Provider, which is one option among the other available persistence providers for ThingWorx.

What changes are required to the current Oracle 12c installation or the JDBC connection to ThingWorx?
The In-Memory Column Store is an inbuilt feature in Oracle 12.1.2 and only needs to be enabled, as it is not enabled by default. It can be enabled without any change to the following:
1. The existing SQL services created within ThingWorx
2. The general application architecture accessing the tables in the Oracle database over JDBC
3. The existing Oracle 12c installation

Getting Started
What will it take to enable the In-Memory Column Store? The feature can be enabled in a few steps:
1. Enable the feature in the Oracle 12.1.2 installation by assigning some RAM to the In-Memory Column Store
2. Adjust the SGA size for the database to incorporate the memory assigned to the In-Memory Column Store
3. Bounce the database

As mentioned above, though this is an inbuilt feature of Oracle 12.1.2, it is not enabled by default. We can confirm this by executing the following SQL in SQL*Plus or Oracle SQL Developer while connected to the database for which we are enabling the feature:
SQL> show parameter INMEMORY;

Things to consider before enabling
1. Ensure that the hardware/VM hosting the Oracle installation has sufficient RAM.
2.
Ensure you bump up the SGA by the amount of memory assigned to the In-Memory Column Store; failing to do so may leave the database unable to start and requiring recovery.
Note: The minimum memory that can be assigned to the In-Memory Column Store is 100M.

Setting it all up
For my test setup I will assign 5G to the In-Memory Column Store and add this amount to the current SGA. To do this, let's start SQL*Plus with rights that allow changes to the existing SGA, so I'm using sys@orcl as sysdba (ORCL is the name of my test database).
Step 1: Start SQL*Plus, e.g. sqlplus sys@orcl as sysdba
Step 2: ALTER SYSTEM SET INMEMORY_SIZE = 5G SCOPE=SPFILE;
Step 3: ALTER SYSTEM SET SGA_TARGET = 20G SCOPE=SPFILE;
Once done, bounce the database. And that's it! We should now be able to confirm, via SQL*Plus, that a certain amount of memory (5G in my case) has been assigned to the In-Memory Column Store feature:
SQL> show parameter inmemory

Populating the In-Memory Column Store
The In-Memory Column Store populates data from a table only on first use, or, if the table is marked critical, as soon as the database comes online after a restart. For more detail on the commands concerning the In-Memory Column Store, refer to the OTN webpage. I'll now use the SensorHistory table, which holds the ValueStream data I exported to CSV (currently ~32 million+ rows), and populate it into the columnar structure of the In-Memory Column Store with the following command:
SQL> ALTER TABLE SENSORHISTORY INMEMORY; -- mark the table as eligible for the In-Memory Column Store with default parameters
At this point the data is still not populated, since we have only marked the table as eligible; querying the dynamic view V$IM_SEGMENTS for current In-Memory usage confirms this (a consolidated SQL sketch is included just before the Conclusion). Now let's populate the In-Memory Column Store with a query that requires a full table scan, e.g.
SQL> select property_name, count(*) from sensorhistory group by property_name;
Then recheck the dynamic view V$IM_SEGMENTS.
As mentioned above, this is completely transparent to the application layer, so if you already have an existing JDBC connection from ThingWorx to Oracle, all the existing services created for that table will continue to work as expected. If you don't have an existing JDBC connection to Oracle, it can be created with the usual steps - no special configuration is needed for In-Memory.

Creating the JDBC connection
I'm including this section for the purpose of completeness; if you already have a working JDBC connection to Oracle 12.1.2 you can skip to the Conclusion below. To access the above database, including the In-Memory Column Store table, we'll set up the JDBC connection to Oracle. Download and import TW_Download_Relational Databases Connectors.zip (ThingWorx Marketplace), then unzip it to access Oracle12Connector_Extension.zip.
Step 1: Import the extension into ThingWorx by navigating to Import/Export > Import > Extensions
Step 2: Create a Thing using the OracleDBServer12 Template, part of the extension we just imported
Step 3: Configure the Thing to connect to the database, ORCL in this case
Step 4: Navigate to the Properties in the Entity Information panel on the left and verify that the isConnected property value is True.
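Before the conclusion, here is the consolidated SQL sketch referenced above. The V$IM_SEGMENTS query is a common way to check population status; the exact columns shown are my assumption, since the original post only showed the result as a screenshot.

-- Mark the table as eligible for the In-Memory Column Store (default parameters)
ALTER TABLE SENSORHISTORY INMEMORY;

-- Trigger population with a query that requires a full table scan
SELECT property_name, COUNT(*)
  FROM sensorhistory
 GROUP BY property_name;

-- Check population status; BYTES_NOT_POPULATED = 0 means the segment is fully populated
SELECT segment_name, populate_status, bytes_not_populated
  FROM v$im_segments;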
Conclusion
This is a very short introduction to a setup that can improve the performance of analytics on stored data many times over. The data in the In-Memory Column Store is not stored in the conventional row format, but rather in a large columnar format. If you only need simple SQL queries without many joins, the SGA cache may be sufficient - and probably faster - and you may not gain much by configuring the In-Memory Column Store. However, for queries requiring heavy computation on large data sets, having the In-Memory Column Store configured can bring a manifold increase in performance. If you want more guidelines on where to use the In-Memory Column Store, give the following good reads a try, along with real-world use cases for reference. I will try to find some time to run my own benchmark and put it out in a separate blog on performance gains.
1. Oracle Database In-Memory with Oracle Database 12c Release 2: Oracle white paper
2. When to Use Oracle Database In-Memory: Identifying Use Cases for Application Acceleration
3. Oracle Database 12c In-Memory Option
4. Testing Oracle In-Memory Column Store @ CERN
View full tip
Key Functional Highlights
Production Advisor is now available in the Freemium and Developer Kit downloads. Plant managers are provided with real-time monitoring of production status and critical KPIs such as utilization, performance, quality, and OEE, by unifying data from disparate lines, assets, and sensors. With Production Advisor, plant managers have the ability to detect and react instantly to production issues, achieving lower downtime, higher production throughput, and better quality from factory resources.

Compatibility
ThingWorx 8.0.1
KEPServerEX 6.2
KEPServerEX V6.1 and older, as well as different OPC Servers (with the Kepware OPC aggregator)

Documentation
ThingWorx Manufacturing Apps Setup and Configuration Guide: https://support.ptc.com/WCMS/files/173133/en/ThingWorxManufacturingAppsSetup_8-0-1.pdf
ThingWorx Manufacturing Apps Customization Guide: https://support.ptc.com/WCMS/files/173135/en/ThingWorxManufacturingAppsCust_8-0-1.pdf
Get Started Documentation on Portal: https://www.ptc.com/en/thingworx/manufacturing-apps/Dashboard/Get-Started (PTC users should use their normal login credentials and do not need to register on the portal)

Download
Freemium and Developer Kit (8.0.1) are available for download here: https://www.ptc.com/en/thingworx/manufacturing-apps/Dashboard (PTC users should use their normal login credentials and do not need to register on the portal)
ThingWorx Platform Extensions (8.1.0, released 1 Nov 2017) are available for download here: https://support.ptc.com/appserver/auth/it/esd/product.jsp?prodFamily=TWA
View full tip
How to Display Data in Charts Guide Part 2

Step 4: Create Thing

In order for the Mashup to pull data into the display, you first need to create an Entity. In this example, we utilize a Thing with an Info Table Property, as well as a Time Series Property. We'll use the DataShape we created in the last step to format the Info Table Property. Then, we'll assign the Info Table Property some default values to display in our non-time-series charts. We will change the values of the Time Series Property (which will record them to the Value Stream) for display in our time-series charts.

Info Table Property

1. From the Browse tab of ThingWorx Composer, click Modeling > Things, + New.
2. In the Name field, enter DDCThing.
3. If Project is not set, search for and select PTCDefaultProject.
4. In the Base Thing Template field, search for and select GenericThing.
5. In the Value Stream field, search for and select DDCValueStream.
6. At the top, click Properties and Alerts.
7. Click + Add.
8. In the Name field, enter InfoTableProperty.
9. Select INFOTABLE from the Base Type drop-down.
10. In the Data Shape field, search for and select DDCDataShape.
11. Check the Persistent checkbox.

First Default Value

1. Check the box for Has Default Value. A new DDCDataShape button will appear under Has Default Value.
2. Click the new DDCDataShape button under Has Default Value.
3. Click + Add.
4. Enter the following values for each field:
PrimaryKey: 1
Label: A - 25%
Value: 25
XAxis: 1
Data: 5
ASensor: 5
BSensor: 3
CSensor: 1
XValue: 1
YValue: 5
BubbleValue: 7
5. At the bottom-right of the pop-up, click the green Add button to apply the first Default Values.

Second Default Value

1. Click + Add.
2. Enter the following values for each field:
PrimaryKey: 2
Label: B - 35%
Value: 35
XAxis: 2
Data: 10
ASensor: 10
BSensor: 6
CSensor: 2
XValue: 2
YValue: 10
BubbleValue: 9
3. At the bottom-right of the pop-up, click the green Add button to apply the second Default Values.

Third Default Value

1. Click + Add.
2. Enter the following values for each field:
PrimaryKey: 3
Label: C - 40%
Value: 40
XAxis: 3
Data: 20
ASensor: 15
BSensor: 9
CSensor: 3
XValue: 3
YValue: 20
BubbleValue: 14
3. At the bottom-right of the pop-up, click the green Add button to apply the third Default Values.
4. At the bottom-right of the pop-up, click the green Save button to close the pop-up.

Time Series Property

1. At the top-right, click the "Check with a +" button for Done and Add.
2. In the Name field, enter TimeSeriesProperty.
3. Change the Base Type to NUMBER.
4. Check the Persistent checkbox. Checking this box causes the last Value entered into the Property to persist through reboots of the system.
5. Check the Logged box. Checking this box causes all changes to the Property to be logged if you have a Value Stream defined to record the changes.
6. At the top-right, click the "Check" button for Done.
7. At the top, click Save.

If the DDCThing's TimeSeriesProperty changes from this point onward, it will now be recorded in the DDCValueStream. Normally, any changes would come from an Edge IoT sensor of some type, but for the purposes of this guide, we will manually change the value repeatedly to simulate these types of changes.

Set 5

1. Under TimeSeriesProperty's Value column, click the "pencil" button for Set value of property.
2. In the slide-out at the top-right, enter 5.
3. At the top-right, click the "Check" button for Done.

Set 10

1. Under TimeSeriesProperty's Value column, click the "pencil" button for Set value of property.
2. In the slide-out at the top-right, enter 10.
3. At the top-right, click the "Check" button for Done.

Set 15

1. Under TimeSeriesProperty's Value column, click the "pencil" button for Set value of property.
2. In the slide-out at the top-right, enter 15.
3. At the top-right, click the "Check" button for Done.

Set 20

1. Under TimeSeriesProperty's Value column, click the "pencil" button for Set value of property.
2. In the slide-out at the top-right, enter 20.
3. At the top-right, click the "Check" button for Done.

Set 25

1. Under TimeSeriesProperty's Value column, click the "pencil" button for Set value of property.
2. In the slide-out at the top-right, enter 25.
3. At the top-right, click the "Check" button for Done.
4. At the top, click Save.

Step 5: Create Mashup

Before we can bind the data to the various Chart Widgets, we first have to create a Mashup and add the charts to it.

1. From the Browse tab of ThingWorx Composer, click Visualization > Mashups, + New.
2. Keep the default of Responsive, and click OK.
3. In the Name field, enter DDCMashup.
4. If Project is not set, search for and select PTCDefaultProject.
5. At the top, click Save.
6. At the top, click Design.

Divide the Mashup

1. On the top-left under the Layout section, click two times on Add Left.
2. With the left-third selected in the central Canvas window, click Add Bottom.
3. With the middle-third selected in the central Canvas window, click Add Bottom.
4. With the right-third selected in the central Canvas window, click Add Bottom.
5. At the top, click Save.

Add the Charts

1. At the top-left, click on the Widgets tab.
2. Search for pie inside the Filter Widgets field in the top-left.
3. Drag-and-drop a Pie Chart Widget onto the top-left section of the Canvas.
4. In the top-left, change Category to Legacy, and search for label chart inside the Filter Widgets field in the top-left.
5. Drag-and-drop a Label Chart Widget onto the top-middle section of the Canvas. Note that, even though the Label Chart is a Legacy Widget, it will still work.
6. Change Category back to Standard, and search for proportional inside the Filter Widgets field in the top-left.
7. Drag-and-drop a Proportional Chart Widget onto the top-right section of the Canvas.
8. Search for bubble inside the Filter Widgets field in the top-left.
9. Drag-and-drop a Bubble Chart Widget onto the bottom-left section of the Canvas.
10. Search for line inside the Filter Widgets field in the top-left.
11. Drag-and-drop a Line Chart Widget onto the bottom-middle section of the Canvas.
12. Search for event inside the Filter Widgets field in the top-left.
13. Drag-and-drop an Event Chart Widget onto the bottom-right section of the Canvas.
14. Click Save.

Click here to view Part 3 of this guide.
View full tip
Video Author: Christophe Morfin
Original Post Date: June 2, 2017
Applicable Releases: ThingWorx Analytics 7.4 to 8.1

Description: In this video we show a simple mashup and services in order to display ThingPredictor's real-time scoring results.
View full tip
Slides used during the What's New in ThingWorx Manufacturing Apps 8.1 update training webinar held Nov. 15, 2017
View full tip
Axeda Machine Streams enables external Platform integrators to access the current, raw data from connected assets. The Platform can stream data item, alarm, mobile location, and registration messages from connected assets to an ActiveMQ server or Azure Service Bus endpoint. Streamed data can be used for data analytics or reporting, or simply for storage.

This article explains the Machine Streams Data Relay project that Axeda provides. This sample project illustrates how stream consumers can create their own projects to relay Machine Streams messages from ActiveMQ or Azure Service Bus into their environments. The Machine Streams Data Relay project was created using Apache Maven. The project operates by dispatching messages to a log message processor; each Machine Streams message is logged to stdout.

Note: The "Axeda Features Guide" provides a high-level introduction to the Axeda Machine Streams feature. That PDF is available from PTC Support (http://support.ptc.com/).

Downloading and Installing the Project
The machine-streams-data-relay project is provided as a tar.gz archive for Linux users and a .zip archive for Windows users. Each archive includes a Maven project with all source code. This page provides downloads and full source for the machine-streams-data-relay Maven project. The Data Relay project files are available from here.

Prerequisites
To download, build, and compile the machine-streams-data-relay project, you will need the following:
- Access to an Axeda Platform instance configured to stream asset data (for an ActiveMQ endpoint this includes the Axeda-provided ActiveMQ machine-streams plugin/overlay, axeda-jms-plugin-r<SVN_REVISION>-machine-streams.zip, which is provided here)
- An ActiveMQ or Azure Service Bus server configured for Machine Streams. Instructions for configuring an ActiveMQ or Azure Service Bus server for Machine Streams are provided in the "Axeda® Machine Streams: A Guide to Setting Up Broker Endpoints", available with all documentation from PTC Support (http://support.ptc.com/).
- At least one machine stream (Axeda Artisan Machine Streams Archetype) configured to stream data to the ActiveMQ or Azure Service Bus server for your assets. (Complete information about creating machine streams and adding machine stream support to the Axeda Platform is provided in the "Axeda v2 API/Services Developers Reference Guide" available from PTC Support (http://support.ptc.com/).)
- Access to the ActiveMQ or Azure Service Bus server configured as the endpoint for streamed Machine Streams content
- Oracle Java JDK 1.7 or greater, with java and javac installed and available in your PATH (if you need instructions for this, see http://www.oracle.com/technetwork/java/javase/downloads/index.html)
- Maven 3.0.4 or greater, with mvn installed and available in your PATH (if you need instructions for this, see http://maven.apache.org/download.cgi)

Note: For the Machine Streams Data Relay project to work successfully, the Axeda Platform instance and the ActiveMQ or Azure Service Bus server instance must be configured with support for Axeda Machine Streams, and at least one machine stream must be configured to stream data.
Complete information about configuring Axeda Platform for Axeda Machine Streams, including the data format for the resulting streams (XML or JSON), is available in the "Axeda v2 API/Services Developers Reference Guide." Instructions for configuring an ActiveMQ or Azure Service Bus server for Machine Streams are provided in the "Axeda® Machine Streams: A Guide to Setting Up Broker Endpoints" Reference Guide (available from PTC Support, http://support.ptc.com/).

Building the Project
This page provides instructions for building the Data Relay project for Linux and for Windows environments.

1. Download and uncompress the project for your environment.
Linux: Click here for the machine-streams-data-relay-1.0.3-project.tar.gz
# tar -zxvf machine-streams-data-relay-1.0.3-project.tar.gz
# cd machine-streams-data-relay-1.0.3
Windows: Click here for the machine-streams-data-relay-1.0.3-project.zip
Unzip the project to the following directory: C:\machine-streams-data-relay-1.0.3

2. Edit the ActiveMQ or Azure Service Bus configuration file (configAMQ.properties or configASB.properties) in src\main\scripts\ as needed.

Sample config.properties files for the MachineStreamsDataRelay component:

For ActiveMQ broker endpoints - configAMQ.properties
# The ActiveMQ broker URL.
brokerURL=tcp://localhost:62000
# The ActiveMQ queue name to process messages from.
# It can be a single queue: MachineStream.stream01
# Or a wildcard queue: MachineStream.>
queueName=MachineStream.>
# The username used to connect to the ActiveMQ queue
username=axedaadmin
# The password used to connect to the ActiveMQ queue
password=zQXuLzhQgcyRZ25JCDXYEPBCT2kx48
# The number of ActiveMQ broker connections.
numConnections=10
# The number of sessions per connection. Note that each session will create a separate thread.
numSessionsPerConnection=5
# The number of concurrent threads used for processing machine streams messages.
numProcessingThreads=100
# The type of message listener container.
# default = single queue name per connection.
# multiDestination = supports multiple queue names per connection
messageListenerContainerType=default

For Azure Service Bus broker endpoints - configASB.properties
# The ASB broker URL.
brokerURL=amqps://your-azure-service-bus-namespace.servicebus.windows.net
# The ASB queues to process messages from.
# It can be a single queue: MachineStream.stream01
# Or multiple queues separated by a comma: MachineStream.stream01,MachineStream.stream02
# Or a queue range defined by the following syntax: MachineStream.stream[01-20]
queueName=MachineStream.stream[01-50]
# The username used to connect to the ASB queue(s)
username=your-azure-service-bus-username
# The password used to connect to the ASB queue(s)
password=the-password-for-your-azure-service-bus-username
# The max number of ASB broker connections.
numConnections=10
# The number of concurrent threads used for processing machine streams messages.
numProcessingThreads=100
# The type of message listener container.
# default = single queue name per connection.
# multiDestination = supports multiple queue names per connection
messageListenerContainerType=multiDestination

Note: messageListenerContainerType is provided because Azure Service Bus does not support wildcard queue names.
The configuration details are as follows:

brokerURL - Location (URL) of the ActiveMQ or Azure Service Bus server (broker).
queueName - Name of the ActiveMQ or Azure Service Bus queue(s) from which you want to process messages. It can be a single queue (MachineStream.stream01), multiple queues separated by a comma (MachineStream.stream01,MachineStream.stream02), a queue range defined by the syntax MachineStream.stream[01-20] (for example, queueName=MachineStream.stream[01-50]), or, for ActiveMQ, a wildcard queue name (MachineStream.>). If you have multiple queues and you want to use ASB, you must use multiDestination and the range syntax.
username - Username used to connect to the ActiveMQ or Azure Service Bus queue(s). For ActiveMQ: username=axedaadmin. For ASB: username=your-azure-service-bus-username.
password - Password used to connect to the ActiveMQ or ASB queue(s).
numConnections - Number of ActiveMQ or Azure Service Bus broker connections. Default is 10 broker connections.
numSessionsPerConnection - Number of sessions per connection; note that each session will create a separate thread. (This key is used infrequently.) APPLICABLE TO ACTIVEMQ ONLY. Default is 5 sessions per connection.
numProcessingThreads - Number of concurrent threads used for processing machine streams messages. Default is 100 concurrent threads.
messageListenerContainerType - Type of message listener container. Default is a single queue name per connection; multiDestination supports multiple queue names per connection.

3. Build the code using Maven. Use the -DskipTests option if you want to skip tests. This will build all source code and produce a bin archive in the target directory.
For Linux: # mvn package -DskipTests
For Windows: c:\> mvn package -DskipTests

4. Enter the target directory, uncompress the *bin.tar.gz (or *bin.zip) archive, and enter the resulting directory.
For Linux:
# cd target
# tar -zxvf machine-streams-data-relay-1.0.3-bin.tar.gz
# cd machine-streams-data-relay-1.0.3
For Windows:
c:\> cd target
c:\> unzip machine-streams-data-relay-1.0.3.bin.zip
c:\> cd machine-streams-data-relay-1.0.3

5. Start the application.
For Linux: # ./machineStreamsDataRelay.sh <config properties file>, e.g. ./machineStreamsDataRelay.sh configOfYourChoice.properties
For Windows: c:\> machineStreamDataRelay.bat <config properties file>, e.g. machineStreamDataRelay.bat configOfYourChoice.properties
See the two example config files included within the project: configASB.properties (for Azure Service Bus) and configAMQ.properties (for ActiveMQ).

6. Scan the output.
If your ActiveMQ configuration is correct, output similar to the following should appear, and no ERRORS should be shown: 2014-03-26 10:27:06.179 [main] INFO  [MessageListenerServiceImpl]: Initializing connections to tcp://localhost:62000 username=axedaadmin 2014-03-26 10:27:06.346 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 1: queue=MachineStream.> numSessions=5 2014-03-26 10:27:06.351 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 2: queue=MachineStream.> numSessions=5 2014-03-26 10:27:06.356 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 3: queue=MachineStream.> numSessions=5 2014-03-26 10:27:06.365 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 4: queue=MachineStream.> numSessions=5 2014-03-26 10:27:06.369 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 5: queue=MachineStream.> numSessions=5 2014-03-26 10:27:06.381 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 6: queue=MachineStream.> numSessions=5 2014-03-26 10:27:06.388 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 7: queue=MachineStream.> numSessions=5 2014-03-26 10:27:06.402 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 8: queue=MachineStream.> numSessions=5 2014-03-26 10:27:06.411 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 9: queue=MachineStream.> numSessions=5 2014-03-26 10:27:06.416 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 10: queue=MachineStream.> numSessions=5 If your Azure Service Bus configuration is correct, output similar to the following should appear, and no ERRORS should be shown: 2014-10-01 16:51:30.114 [main] INFO [MessageListenerServiceImpl]: Initializing Connections to amqps://acme.servicebus.windows.net username=owner 2014-10-01 16:51:31.613 [ConnectionRecovery-thread-6] INFO [MultiDestinationMessageListenerContainer]: Connection 6 created 0/10 queue consumers 2014-10-01 16:51:31.614 [ConnectionRecovery-thread-8] INFO [MultiDestinationMessageListenerContainer]: Connection 8 created 0/10 queue consumers 2014-10-01 16:51:31.614 [ConnectionRecovery-thread-10] INFO [MultiDestinationMessageListenerContainer]: Connection 10 created 0/9 queue consumers 2014-10-01 16:51:31.614 [ConnectionRecovery-thread-2] INFO [MultiDestinationMessageListenerContainer]: Connection 2 created 0/10 queue consumers 2014-10-01 16:51:31.614 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 0/10 queue consumers 2014-10-01 16:51:31.614 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 0/10 queue consumers 2014-10-01 16:51:31.615 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 0/10 queue consumers 2014-10-01 16:51:31.615 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 0/10 queue consumers 2014-10-01 16:51:31.621 [ConnectionRecovery-thread-7] INFO [MultiDestinationMessageListenerContainer]: Connection 7 created 0/10 queue consumers 2014-10-01 16:51:31.756 [ConnectionRecovery-thread-1] INFO [MultiDestinationMessageListenerContainer]: Connection 1 created 0/10 queue consumers 2014-10-01 16:51:32.613 [ConnectionRecovery-thread-6] INFO [MultiDestinationMessageListenerContainer]: Connection 6 created 9/10 queue consumers 2014-10-01 16:51:32.614 [ConnectionRecovery-thread-8] INFO [MultiDestinationMessageListenerContainer]: Connection 8 created 9/10 queue 
consumers 2014-10-01 16:51:32.614 [ConnectionRecovery-thread-10] INFO [MultiDestinationMessageListenerContainer]: Connection 10 created 7/9 queue consumers 2014-10-01 16:51:32.614 [ConnectionRecovery-thread-2] INFO [MultiDestinationMessageListenerContainer]: Connection 2 created 10/10 queue consumers 2014-10-01 16:51:32.615 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers 2014-10-01 16:51:32.615 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 0/10 queue consumers 2014-10-01 16:51:32.615 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 7/10 queue consumers 2014-10-01 16:51:32.615 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers 2014-10-01 16:51:32.623 [ConnectionRecovery-thread-7] INFO [MultiDestinationMessageListenerContainer]: Connection 7 created 9/10 queue consumers 2014-10-01 16:51:32.756 [ConnectionRecovery-thread-1] INFO [MultiDestinationMessageListenerContainer]: Connection 1 created 10/10 queue consumers 2014-10-01 16:51:32.833 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 1: numQueues=10 initTimeMillis=2631 millis 2014-10-01 16:51:32.833 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 2: numQueues=10 initTimeMillis=2488 millis 2014-10-01 16:51:33.613 [ConnectionRecovery-thread-6] INFO [MultiDestinationMessageListenerContainer]: Connection 6 created 10/10 queue consumers 2014-10-01 16:51:33.614 [ConnectionRecovery-thread-8] INFO [MultiDestinationMessageListenerContainer]: Connection 8 created 10/10 queue consumers 2014-10-01 16:51:33.614 [ConnectionRecovery-thread-10] INFO [MultiDestinationMessageListenerContainer]: Connection 10 created 9/9 queue consumers 2014-10-01 16:51:33.615 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers 2014-10-01 16:51:33.615 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 0/10 queue consumers 2014-10-01 16:51:33.615 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers 2014-10-01 16:51:33.615 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers 2014-10-01 16:51:33.623 [ConnectionRecovery-thread-7] INFO [MultiDestinationMessageListenerContainer]: Connection 7 created 10/10 queue consumers 2014-10-01 16:51:34.615 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 0/10 queue consumers 2014-10-01 16:51:34.615 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers 2014-10-01 16:51:34.615 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers 2014-10-01 16:51:34.615 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers 2014-10-01 16:51:35.615 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 9/10 queue consumers 2014-10-01 16:51:35.615 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers 2014-10-01 16:51:35.615 [ConnectionRecovery-thread-9] INFO 
[MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers 2014-10-01 16:51:35.616 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers 2014-10-01 16:51:36.616 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 9/10 queue consumers 2014-10-01 16:51:36.616 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers 2014-10-01 16:51:36.616 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers 2014-10-01 16:51:36.616 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers 2014-10-01 16:51:37.616 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 9/10 queue consumers 2014-10-01 16:51:37.616 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers 2014-10-01 16:51:37.616 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers 2014-10-01 16:51:37.616 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers 2014-10-01 16:51:38.616 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 10/10 queue consumers 2014-10-01 16:51:38.617 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 10/10 queue consumers 2014-10-01 16:51:38.616 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 10/10 queue consumers 2014-10-01 16:51:38.616 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 10/10 queue consumers 2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 3: numQueues=10 initTimeMillis=8491 millis 2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 4: numQueues=10 initTimeMillis=8490 millis 2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 5: numQueues=10 initTimeMillis=8490 millis 2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 6: numQueues=10 initTimeMillis=3485 millis 2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 7: numQueues=10 initTimeMillis=3495 millis 2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 8: numQueues=10 initTimeMillis=3485 millis 2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 9: numQueues=10 initTimeMillis=8488 millis 2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 10: numQueues=9 initTimeMillis=3485 millis 7. To verify that messages are being streamed properly from the Axeda Platform, send DataItems from your connected Assets. You should see messages similar to the following. (Remember that each Asset you are testing must have an associated Machine Stream.) 
2014-03-26 10:45:16.309 [pool-1-thread-1] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset1,799021d6-70a3-7c32-0000-00000000021d,false,Wed Mar 26 14:45:16 EDT 2014,temp,43,analog
2014-03-26 10:45:21.137 [pool-1-thread-2] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset2,799021d6-70a3-7c32-0000-000000000225,false,Wed Mar 26 14:45:21 EDT 2014,temp,43,analog
2014-03-26 10:45:26.134 [pool-1-thread-3] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset1,799021d6-70a3-7c32-0000-00000000022b,false,Wed Mar 26 14:45:26 EDT 2014,temp,44,analog
2014-03-26 10:45:31.135 [pool-1-thread-4] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset2,799021d6-70a3-7c32-0000-000000000231,false,Wed Mar 26 14:45:31 EDT 2014,temp,44,analog
2014-03-26 10:45:36.142 [pool-1-thread-5] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset1,799021d6-70a3-7c32-0000-000000000237,false,Wed Mar 26 14:45:36 EDT 2014,temp,45,analog
2014-03-26 10:45:41.146 [pool-1-thread-6] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset2,799021d6-70a3-7c32-0000-00000000023d,false,Wed Mar 26 14:45:41 EDT 2014,temp,45,analog

Configuring a CustomMessageProcessor

By default, the project is configured to use a LogMessageProcessor that logs each streamed message it receives to standard out. The project takes a StreamedMessage in either XML or JSON format (as configured in the MachineStream SDKv2 object) and decodes the message into a StreamedMessage Java object. LogMessageProcessor.java implements the MessageProcessor interface. Here is the MessageProcessor.java interface:

MessageProcessor.java

package com.axeda.tools.streams.processor;

import com.axeda.tools.streams.model.StreamedMessage;

/**
 * This interface defines the message processor callback that will be called for message processing.
 * Note that this method will be called by multiple threads concurrently.
 */
public interface MessageProcessor {

    /**
     * Process a machine stream message. Note that this method will be called by multiple threads concurrently.
     * The number of concurrent processing threads is defined in MachineStreamsConfig.getNumProcessingThreads().
     * If you add code here that significantly slows down message processing, then there is the potential that
     * MessageListenerService threads will also block. When the MessageListenerService threads block, this means
     * that messages will start to back up in the ActiveMQ or ASB message queues. If you are processing a large
     * number of messages, then you may need to adjust your configuration parameters or optimize your
     * processMessage() code.
     * @param message machine streams message to process
     */
    public void processMessage(StreamedMessage message);
}

An additional class named CustomMessageProcessor.java has been provided so that you can add your own custom message processing logic:

CustomMessageProcessor.java

package com.axeda.tools.streams.processor;

import org.springframework.stereotype.Component;

import com.axeda.tools.streams.model.StreamedAlarm;
import com.axeda.tools.streams.model.StreamedDataItemMessage;
import com.axeda.tools.streams.model.StreamedMessage;
import com.axeda.tools.streams.model.StreamedMobileLocation;
import com.axeda.tools.streams.model.StreamedRegistrationMessage;

/**
 * This class was provided for customers to implement their own message processing business logic.
 * To use this class, change the @Autowired messageProcessor qualifier in
 * MessageProcessingServiceImpl.java to @Qualifier("customMessageProcessor").
 */
@Component("customMessageProcessor")
public class CustomMessageProcessor implements MessageProcessor {

    /**
     * (non-Javadoc)
     * @see com.axeda.tools.streams.processor.MessageProcessor#processMessage(com.axeda.tools.streams.model.StreamedMessage)
     *
     * Process a machine stream message. Note that this method will be called by multiple threads
     * concurrently. The number of concurrent processing threads is defined in
     * MachineStreamsConfig.getNumProcessingThreads(). If you add code here that significantly slows down
     * message processing, then there is the potential that MessageListenerService threads will also block.
     * When the MessageListenerService threads block, this means that messages will start to back up in the
     * ActiveMQ or Azure Service Bus message queues. If you are processing a large number of messages, then
     * you may need to adjust your configuration parameters or optimize your processMessage() code.
     */
    @SuppressWarnings("unused")
    @Override
    public void processMessage(StreamedMessage message) {
        if (message instanceof StreamedDataItemMessage) {
            StreamedDataItemMessage dataItem = (StreamedDataItemMessage) message;
            // add your business logic here
        } else if (message instanceof StreamedAlarm) {
            StreamedAlarm alarm = (StreamedAlarm) message;
            // add your business logic here
        } else if (message instanceof StreamedMobileLocation) {
            StreamedMobileLocation mobileLocation = (StreamedMobileLocation) message;
            // add your business logic here
        } else if (message instanceof StreamedRegistrationMessage) {
            StreamedRegistrationMessage registration = (StreamedRegistrationMessage) message;
            // add your business logic here
        }
    }
}

The Axeda Platform Machine Streams feature currently supports four message types:

StreamedDataItemMessage
StreamedAlarm
StreamedMobileLocation
StreamedRegistrationMessage

For each of these message types, add your own message processing business logic. You may want to write each message to your favorite NoSQL database or to a flat file. Once you have completed your changes to CustomMessageProcessor, you must make one change in the MessageProcessingServiceImpl.java class so that this Spring bean is used:

Uncomment this line: // @Qualifier("customMessageProcessor")
Comment out this line: @Qualifier("logMessageProcessor")

The following code snippet shows what your changes should look like when you are finished:

MessageProcessingServiceImpl.java

@Component("messageProcessingService")
public class MessageProcessingServiceImpl implements MessageProcessingService {

    private static final Logger LOGGER = LoggerFactory.getLogger(MessageProcessingServiceImpl.class);

    @Autowired
    private MessageDecoder messageDecoder;

    @Autowired
    // If you want to use the CustomMessageProcessor instead of the default LogMessageProcessor,
    // then change this Qualifier to @Qualifier("customMessageProcessor")
    @Qualifier("customMessageProcessor")
    //@Qualifier("logMessageProcessor")
    private MessageProcessor messageProcessor;

    private ExecutorService executorService;
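As a concrete illustration of the pattern above, here is a minimal sketch of a processor that appends every streamed data item to a flat file, one of the options mentioned earlier. This is not part of the toolkit: the class name, the output file name, and the reliance on the model object's toString() (used deliberately to avoid assuming specific accessor names on StreamedDataItemMessage) are all assumptions made for the example. To use something like it, register it under its own qualifier and point the @Qualifier in MessageProcessingServiceImpl.java at it, exactly as described for customMessageProcessor.

package com.axeda.tools.streams.processor;

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

import org.springframework.stereotype.Component;

import com.axeda.tools.streams.model.StreamedDataItemMessage;
import com.axeda.tools.streams.model.StreamedMessage;

/**
 * Illustrative sketch only: appends each streamed data item to a flat file.
 * File name and bean name are hypothetical; toString() is used so no accessor
 * names on the model class are assumed.
 */
@Component("fileLoggingMessageProcessor")
public class FileLoggingMessageProcessor implements MessageProcessor {

    private static final Path OUT_FILE = Paths.get("streamed-dataitems.log");

    @Override
    public void processMessage(StreamedMessage message) {
        if (message instanceof StreamedDataItemMessage) {
            StreamedDataItemMessage dataItem = (StreamedDataItemMessage) message;
            String line = dataItem.toString() + System.lineSeparator();
            try {
                // CREATE + APPEND makes repeated calls safe; remember that processMessage
                // is invoked by multiple threads, so a production implementation should
                // synchronize writes or hand records off to a dedicated writer thread.
                Files.write(OUT_FILE, line.getBytes(StandardCharsets.UTF_8),
                        StandardOpenOption.CREATE, StandardOpenOption.APPEND);
            } catch (IOException e) {
                throw new RuntimeException("Failed to write streamed data item", e);
            }
        }
        // Alarms, mobile locations, and registrations are ignored in this sketch.
    }
}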
View full tip
Use the Edge MicroServer (EMS), Foundation, and Analytics to engineer a Smart, Connected Products play for the Automotive segment.

NOTE: Complete the following guides in sequential order. The estimated time to complete this learning path is 240 minutes.

1. Use the Edge MicroServer (EMS) to Connect to ThingWorx (Part 1, Part 2, Part 3)
2. Use the EMS to Create an Engine Simulator (Part 1, Part 2, Part 3, Part 4)
3. Engine Simulator Data Storage (Part 1, Part 2)
4. Build an Engine Analytical Model (Part 1, Part 2, Part 3)
5. Manage an Engine Analytical Model (Part 1, Part 2)
6. Engine Failure-Prediction GUI
7. Enhanced Engine Failure Visualization (Part 1, Part 2, Part 3)
View full tip
Create Your Application Guide UI Part 2

Step 4: Bind Data to Widgets

Mashup Data Services allow you to access IoT data so you can push data to the platform's backend as well as pull data into the Mashup itself. Some common Mashup Data Services that enable this functionality are described below:

GetProperties: Pulls in a particular Thing's Properties, as well as the associated current values contained within those Properties. GetProperties has an option to automatically update the Mashup whenever a Property value changes.
GetPropertyValues: Pulls in a particular Thing's Properties and their associated current values. However, it does so in an All Data grouping, opening the possibility of assigning all Properties to the same Widget. This can be helpful with some Widgets, such as the List Widget.
SetProperties: Pushes a value from some Widget (Checkbox, TextBox, etc.) into the Property of a Thing.

NOTE: Using a combination of a "Get" and "Set" Mashup Service establishes bidirectional communication between the Mashup and the backend IoT data storage. (A hedged example of calling GetPropertyValues directly over the REST API appears at the end of this step.)

Import an Entity

In order to focus on the Mashup Builder in this lesson, we have created a Thing for you to download and import. This Thing has predefined Properties and associated values which will simplify the demonstration of binding data to Widgets in subsequent steps.

1. Download and unzip the attached MBQS_Entities.zip file.
2. In the bottom-left, click the "up/down arrows" for Import/Export, then Import.
3. In the Import pop-up window, click Browse.
4. In the OS's pop-up, navigate to, select, and open the MBQS_Entities.twx file you downloaded earlier.
5. Click Import.
6. Click Close.

Retrieve Data

You now have a Thing called MBQSThing from which you can retrieve data. To demonstrate this, we'll use the GetPropertyValues Service.

1. At the top-right of Mashup Builder, click the Data tab.
2. Click the + symbol.
3. In the Add Data pop-up window's Entity filter field, type mbqs.
4. Select the MBQSThing.
5. In the Services Filter field, type getprop.
6. Click the right arrow to select GetPropertyValues. The GetPropertyValues Service for MBQSThing has now been added to the Selected Services section on the right side of the Add Data pop-up window.
7. Check the box for Execute on Load under Selected Services. Checking this box causes the GetPropertyValues Service to execute as soon as the Mashup is loaded.
8. Click Done.
9. At the top, click Save. Note how the GetPropertyValues Service now appears in the top-right under the Data tab.

View Data Connections

In the top-left of the Mashup Builder, click the Explorer tab and select the Mashup itself; you can then see the connections between the Mashup and the Data sources. The main Mashup Builder sections are:

Data: Previously empty, now has a reference to the GetPropertyValues Service of the MBQSThing.
GetPropertyValues: On the Data tab, click GetPropertyValues. The Connections window in the bottom-center will update.
Bindings: Shows the logical flow of the Mashup. In this instance, it shows how GetPropertyValues is called from the Event triggered by the Mashup being loaded in a web browser. This means that all Properties of the MBQSThing will be available to the Mashup as soon as your UI is opened.

Bind Widget to Property

Follow the steps below to place a Widget in the Mashup and bind the Property named Gauge_Value (of MBQSThing) to it.

1. On the left side of the Mashup Builder, select the Widgets tab.
2. In the Filter Widgets search box, type gauge.
3. Drag-and-drop a Gauge Widget onto the central Canvas area.
4. On the right side, expand GetPropertyValues > Returned Data > All Data. There is a clear left arrow pointing away from the Gauge_Value Property. This indicates that this Property is to be used to set the value on something else. If the arrow had been pointing towards the Gauge_Value Property, that would indicate that it was ready to accept an external value. The clear status of the arrow indicates that it has not yet been tied to anything.
5. Drag-and-drop MBQSThing > GetPropertyValues > Returned Data > All Data > Gauge_Value onto the Gauge Widget in the central Canvas area.
6. On the Select Binding Target pop-up, select Data.
7. At the top, click Save.
8. Click View Mashup. You must enable pop-ups in order to view the Mashup in a new tab.

In your new Mashup, notice that the Gauge has been set to the Gauge_Value default value of 25. In a real-world scenario, you would likely utilize an IoT sensor that would report back to the Thing storing the value. When the Mashup loads, that value would be set to the real-world sensor data value. In the next few steps, you will build a GUI using several different Widgets and Services.

Click here to view Part 3 of this guide.
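As promised above, here is the hedged REST sketch. The same data that the GetPropertyValues binding feeds into the Gauge can also be retrieved directly over the ThingWorx REST API, which is a convenient way to check exactly what the service returns while debugging a Mashup. This is only an illustration: the server URL and application key are placeholders for your own environment, and the key is sent as a request header rather than a URL parameter so it does not end up in access logs.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

/**
 * Minimal sketch: invoke GetPropertyValues on MBQSThing over the ThingWorx REST API.
 * "your-thingworx-server" and "your-app-key" are placeholders for your installation.
 */
public class GetPropertyValuesExample {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://your-thingworx-server/Thingworx/Things/MBQSThing/Services/GetPropertyValues");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");                      // services are invoked with POST
        conn.setRequestProperty("appKey", "your-app-key");  // header keeps the key out of the URL
        conn.setRequestProperty("Accept", "application/json");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream os = conn.getOutputStream()) {
            os.write("{}".getBytes(StandardCharsets.UTF_8)); // GetPropertyValues takes no input parameters
        }
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // JSON InfoTable with the Thing's current Property values
            }
        }
    }
}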
View full tip
Since it is not available on support.ptc.com, please provide documentation or a guide for the Creo View and Thing View widgets so that a 3D object can be viewed in a custom mashup/UI outside of the ThingWorx Navigate app.

I am posting this request to the community, not to the ThingWorx developers portal, after discussing it with PTC technical support. Please refer to article CS291582.

LeighPTC, I had no option but to move this to the community again, yet this has happened: the post "Creo View and Thing View Widget Documentation to view 3D Object in custom mashup/UI" was moved by LeighPTC. Please don't move this request to the ThingWorx developers portal, so that PTC customers can have Creo View and Thing View widget documentation for viewing a 3D object in a custom mashup/UI, as it is not currently available.

Many thanks, Rahul
View full tip
Many users of our software have submitted cases regarding the Third-Party Components and their functions within ThingWorx Analytics. This short blog post lists the main components used by our software and explains their functionality.

ThingWorx Analytics uses the following components in its default installation:

Apache ZooKeeper
ZooKeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services.
ThingWorx Analytics uses ZooKeeper as the gatekeeper to API calls and processes to the Application.
Component Homepage: https://zookeeper.apache.org/

Apache Tomcat
Apache Tomcat software is an open source implementation of the Java Servlet, JavaServer Pages, Java Expression Language, and Java WebSocket technologies.
ThingWorx Analytics uses Tomcat to handle web services and API communications. This enables the use of ThingWorx Foundation (Core) mashups with ThingWorx Analytics Server.
Component Homepage: http://tomcat.apache.org/

PostgreSQL Server
PostgreSQL is an open source object-relational database system.
ThingWorx Analytics uses PostgreSQL Server to store analytical results for later retrieval.
Component Homepage: https://www.postgresql.org/
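Because ZooKeeper sits in front of API calls to the application, a quick way to confirm it is responsive is ZooKeeper's standard four-letter "ruok" command, which answers "imok" when the server is serving requests. The sketch below is only an illustration: localhost and port 2181 (the ZooKeeper default) are assumptions about your installation, and on newer ZooKeeper releases the four-letter commands may need to be explicitly whitelisted before they respond.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

/**
 * Minimal health-check sketch: send ZooKeeper's "ruok" four-letter command and
 * print the reply. Host and port are assumptions about your installation.
 */
public class ZooKeeperHealthCheck {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("localhost", 2181)) {
            OutputStream out = socket.getOutputStream();
            out.write("ruok".getBytes(StandardCharsets.US_ASCII));
            out.flush();
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream(), StandardCharsets.US_ASCII));
            String reply = in.readLine(); // ZooKeeper answers "imok" if it is healthy
            System.out.println("ZooKeeper replied: " + reply);
        }
    }
}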
View full tip
Video Author:                     Mohammed Amine Chehaibi
Original Post Date:            November 28, 2016
Applicable Releases:        ThingWorx Analytics 5.2 to 8.0

Description: In this video we will cover how to start your virtual image of ThingWorx Analytics using Oracle VirtualBox.
View full tip
Scripto Editor is an enhanced Groovy Script Editor that allows the developer to compile and test uploaded Groovy Scripts on the fly. Please note that Scripto Editor is not a replacement for an IDE and should be used mainly for debugging Groovy Scripts.

Installation:
1. Download the JavaScript Scripto Editor archive attached to this post.
2. Install the archive as a custom app:
   - Log into the Axeda Platform
   - Navigate to Administration > More Links > Extended Applications
   - Click Browse and select the file downloaded in step 1
   - Set the URL as "ScriptoEditor"
   - Set the Default Index as ScriptoEditor.html
   - Set Display Mode as Standalone
   - Optionally enter a Description, such as "Scripto Editor for Groovy Objects"
   - Click Upload
3. Open Scripto Editor by navigating to https://yourServicelink.axeda.com/apps/ScriptoEditor/ScriptoEditor.html
4. Log in using your Axeda Platform credentials
5. Double-click any previously uploaded Groovy Script in the list to open it
6. Add or edit parameters in the Properties sidebar
7. Test the script by clicking the Test tab in the sidebar and clicking "Run Test"
8. Results will appear in the console at the bottom of the screen
9. Save the Groovy Script by clicking "Save"

Note: if the session expires before you have finished editing, the application will alert you with an "Http Request Error" pop-up. You will be unable to save your changes; at this point it is recommended to open a new tab and copy your changes back into Scripto Editor. For this reason, Scripto Editor is not a replacement for an IDE and should be used sparingly for on-the-fly debugging. Additionally, any changes made in Scripto Editor will need to be manually copied back into the local development source code.

WARNING: Scripto Editor has a 1000-line code limit. If your custom objects are longer than this, Scripto Editor will truncate them when saving!
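Once a script runs cleanly in the editor, you may also want to exercise it from outside the browser. The sketch below is a hedged illustration of calling a Groovy custom object through the Axeda Scripto web service from Java. The endpoint path (/services/v1/rest/Scripto/execute/<scriptName>), the username/password query parameters, and the script name MyGroovyScript with its assetId parameter are assumptions based on the Axeda Scripto documentation and may differ on your instance; passing credentials in the URL is for quick testing only and should be avoided in production.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

/**
 * Hedged sketch: execute an uploaded Groovy custom object via the Axeda Scripto
 * web service. Endpoint path, credentials, script name, and parameter are assumptions.
 */
public class ScriptoCallExample {
    public static void main(String[] args) throws Exception {
        String base = "https://yourServicelink.axeda.com/services/v1/rest/Scripto/execute/MyGroovyScript";
        String query = "username=" + URLEncoder.encode("yourUser", "UTF-8")
                + "&password=" + URLEncoder.encode("yourPassword", "UTF-8")
                + "&assetId=" + URLEncoder.encode("12345", "UTF-8"); // hypothetical script parameter
        HttpURLConnection conn = (HttpURLConnection) new URL(base + "?" + query).openConnection();
        conn.setRequestMethod("GET");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // whatever the Groovy script returns as its Content
            }
        }
    }
}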
View full tip
Previously: Installing & Connecting C SDK to Federated ThingWorx with VNC Tunneling to the Edge device

Prerequisite: Download and install the Web Sockets Tunnels Widget and Library Extension from PTC Marketplace.

Configuring the Tunnel Subsystem

1. Log on to ThingWorx Composer > System > Tunnel Subsystem > Configuration.
2. The "Public host name used for tunnels" and "Public port used for tunnels" parameters require a publicly addressable FQDN or IP of the instance running the ThingWorx server and the port on which it is listening.
3. In the screenshot above, TW802Neo is the name of the server and 443 is the port ThingWorx is configured to listen on.
4. Navigate back to the RemoteThing we created above to connect our C SDK client to on the ThingWorx platform, and ensure that Enable Tunneling is turned on.
5. Click on Configuration and click the Add My Tunnel button to configure where the tunnel should be opened to.
6. In the example above, the Host and Port parameters refer to the Ubuntu machine where my C SDK client is running together with the VNC server. If you are looking for more detail on how to configure these topics, refer to the Simple diagnostic utility to analyse tunneling performance in ThingWorx.
7. Once done, Save the entity.

Configuring Remote Access & WebSocket Tunnel widgets in a Mashup

1. Navigate to ThingWorx Composer > Visualization > Mashups > New to create a Mashup.
2. From the list of Widgets, drag and drop the following widgets that are added as part of the Web Socket Tunnel Widget and Library extension:
   Remote Access
   Web Socket Tunnel
   AcceptSelfSignedCert (if you are configuring ThingWorx with a self-signed certificate)
4. Since my C SDK client binds to SteamSensor2 (remember, it was created with the RemoteThingWithTunnel ThingTemplate), I have selected that as the RemoteThingName.
5. TunnelName is vnc, as configured in the SteamSensor2 configuration.
6. For the Web Socket Tunnel Widget the following configuration is required:
   RemoteThingName
   TunnelName
   VNCPassword
7. The first two parameters are exactly the same as those already configured for the Remote Access Widget above; the VNCPassword is the password with which you configured your VNC server.
8. Once done, Save the mashup and View it.
9. Click on Remote Access to download the websocket adapter plugin; once downloaded, click on it to initiate the websocket. You will be prompted with the following: Accept and Run. This will open the remote tunnel with the following confirmation. Don't click OK, or the tunnel will close. You can now utilize this tunnel to perform the required action. Note that if there is no activity through the opened tunnel, it will time out and close automatically.
10. For remote desktop access to the edge device I will use the Remote Device button, which will open a new browser window like so.
11. Click on Connect to initiate the remote desktop session, like so.
12. I can now start the terminal on the edge device and navigate through it.

Up Next: Configuring ThingWorx Federation for fetching data from the C SDK client from publisher to subscriber ThingWorx entity
View full tip


Video Author:                     Stefan Taka
Original Post Date:            June 6, 2016

Description: In part 1 of the ThingWorx REST API tutorial, we will introduce you to the ThingWorx API structure, and also demonstrate how the API can be invoked.

Blog post with text examples found here: REST API Overview and Examples
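To complement the video, here is a minimal, hedged sketch of one such invocation from Java: reading a single Property value with a GET request against the /Thingworx/Things/{name}/Properties/{property} endpoint. The server URL, Thing name, Property name, and application key are placeholders for your own environment, and the key is passed as a header rather than a URL parameter so it does not appear in access logs.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

/**
 * Minimal sketch of invoking the ThingWorx REST API from Java: read a single
 * Property value with a GET request. Server, Thing, Property, and key are placeholders.
 */
public class ReadPropertyExample {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://your-thingworx-server/Thingworx/Things/MyThing/Properties/MyProperty");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");                       // reads use GET
        conn.setRequestProperty("appKey", "your-app-key");  // key as a header, not a URL parameter
        conn.setRequestProperty("Accept", "application/json");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // single-row InfoTable containing the Property value
            }
        }
    }
}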
View full tip