
IoT Tips

Large files can cause slow response times. Large queries in particular may produce very large response files, e.g. calling a ThingWorx service that returns a huge result set as a JSON file. Those massive files have to be transferred over the network and require additional bandwidth - for each and every call. The more bandwidth is used, the more time is spent on the network and the bigger the potential impact on performance. Imagine transferring tens or hundreds of MB per service call - over and over again.

To reduce the bandwidth, compression can be activated. Instead of transferring MBs per service call, the server only has to transfer a couple of KB per call (best case scenario). This needs to be configured at the Tomcat level. There is some information available in the official Tomcat documentation at https://tomcat.apache.org/tomcat-8.5-doc/config/http.html - search for the "compression" attribute.

Gzip compression

Usually Tomcat compresses content with gzip. To verify whether a certain response is in fact compressed or not, the browser's Development Tools or Fiddler can be used. The Response Headers usually mention the compression type if the content is compressed:

Left: no compression / Right: compression on Tomcat level

Not so straightforward - the network vs. compression time trade-off

There is however a pitfall with compression on the Tomcat side. Each response adds additional strain on time and resources (like CPU) to compress on the server and decompress on the client. Especially for small files this can be unnecessary overhead, as compressing and decompressing might take longer than just transferring a couple of uncompressed KB.

In the end it's a trade-off between network speed and the time spent compressing and decompressing response files on server and client. With the compressionMinSize attribute, a threshold size can be set to find the best balance between compression and bandwidth.

This trade-off can be clearly seen (for small content) here:

While the Size of the content shrinks, the Time increases. For larger content files the Time will increase slightly as well due to the compression overhead, whereas the Size can potentially drop by a massive factor - especially for text-based files.

The test above was performed on a local virtual machine, which neglects most of the network-related traffic problems that cause performance issues - therefore the overhead in Time is only a couple of milliseconds for compression / decompression.

The default for compressionMinSize is 2048 bytes.

High potential performance improvement

Looking at the Combined.js, the content size can be reduced significantly from 4.3 MB to only 886 KB. For my simple Mashup showing a chart with Temperature and Humidity this also decreases total load time from 32 to 2 seconds - and the total content size from 6.1 MB to 1.2 MB!

This decreases load time and size by factors of 16x and 5x - and the total time until the page finishes rendering drops by a factor of almost 22x (for this particular use case)!

Configuration

To configure compression, open Tomcat's server.xml. In the <Connector> definitions add the following attributes:

compression="on"
compressibleMimeType="text/html,text/xml,text/plain,text/css,text/javascript,application/javascript,application/json"

This will use the default compressionMinSize of 2048 bytes.
In addition to the default MIME types I've also added application/json to compress ThingWorx service call results.

This needs to be configured for all Connectors that users access - e.g. for both the HTTP and HTTPS connectors. For testing purposes I have an HTTPS connector with compression while HTTP is running without it.

Conclusion

If possible, enable compression to speed up content download for the client. However, there are some scenarios where compression is actually not a good idea - e.g. when using a WAN accelerator or other network components that bring their own content compression. This not only adds unnecessary overhead but compresses twice, which might lead to errors on the client side when decompressing the content.

Compression helps most when dealing with large responses. As compressing and decompressing adds some overhead, experiment with the minimum size limit to find the optimal compromise in the network vs. compression time trade-off.
Timers and schedulers can be useful tools in a ThingWorx application. Their only purpose, of course, is to create events that can be used by the platform to perform any number of tasks. These can range from requesting data from an edge device, to doing calculations for alerts, to running archive functions for data. Sounds like a simple enough process. Then why do most platform performance issues seem to come from these two simple templates?

It all has to do with how the event is subscribed to and how the platform needs to process events and subscriptions. The task of handling MOST events and their related subscription logic falls to the EventProcessingSubsystem. You can see its metrics via the Monitoring -> Subsystems menu in Composer. This will show you how many events have been processed and how many events are waiting in the queue to be processed, along with some other settings. You can often identify issues with Timers and Schedulers here: you will see the number of queued events climb while the number of processed events stagnates.

But why!? Shouldn't this multi-threaded processing take care of all of that? Most times it can easily do so, but when you suddenly flood it with transactions all trying to access the same resources at the same time, it can grind to a halt.

This typically occurs when you create a timer/scheduler and subscribe to its event at a template level. To illustrate this, let's look at an example of what might occur. In this scenario, imagine we have 1,000 edge devices that we must pull data from. We only need to get this information every 5 minutes. When we retrieve it we must look up some data mapping from a DataTable and store the data in a Stream. At the 5 minute interval the timer fires its event. Suddenly, all at once, the EventProcessingSubsystem gets 1,000 events. This by itself is not a problem, but to be efficient it will concurrently try to process as many as it can. So we now have multiple transactions all trying to query a single DataTable at once. In order to read this table the database (no matter which back-end persistence provider) will lock parts or all of the table (depending on the query). As you can probably guess, things begin to slow down because each transaction holds the lock while many others are trying to acquire one. This happens over and over until all 1,000 transactions are complete. In the meantime we are also executing other commands in the subscription and writing Stream entries to the same database inside the same transactions. Additionally, remember that all of these transactions and the data they access must be held in memory while they are running. You will also see a memory spike and, depending on resources, can run into a problem here as well.

Regular events can easily be part of any use case, so how should this work? The trick here comes in two parts. First, any event a Thing raises can be subscribed to on that same Thing. When you do this, the subscription transaction does not go into the EventProcessingSubsystem. It will execute on the threads already open in memory for that Thing. So subscribing to a timer event on the Timer Thing that raised the event will not flood the subsystem.

In the previous example, how would you go about polling all of these Things? Simple: take the exact logic you would have executed in the template subscription and move it to the timer subscription.
To keep the context of the Thing, use the template's GetImplementingThings service to retrieve the list of all 1,000 Things created from it. Then loop through these Things and execute the logic. This also means that all of the DataTable queries and logic will be executed sequentially, so the database locking issue goes away as well. Memory issues decrease too, because the memory allocated for the queries is either reused or can be cleaned up during garbage collection, since the variable holding each result is reassigned on every loop iteration. A sketch of this pattern follows below.

Overall it is best not to use Timers and Schedulers whenever possible. Use data-triggered events, UI interactions or REST API calls to initiate transactions where you can. It lowers the overall risk of flooding the system with resource demands - processor, memory, threads and database. Sometimes, though, they are needed. Follow the basic guidance laid out here and things should run smoothly!
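As a quick illustration, here is a minimal sketch of that timer subscription (the template name and the PollAndStoreData service are illustrative assumptions, not part of the original post):

```javascript
// Subscription script on the Timer Thing, listening to its own Timer event.
// Retrieve every Thing implementing the device template.
var implementingThings = ThingTemplates["MyEdgeDeviceTemplate"].GetImplementingThings();

// Process each device sequentially - no flood of parallel subscription
// transactions, so DataTable reads and Stream writes no longer fight for locks.
for (var i = 0; i < implementingThings.rows.length; i++) {
    var row = implementingThings.rows.getRow(i);
    // e.g. pull data from the device, look up the mapping, write a Stream entry
    Things[row.name].PollAndStoreData(); // hypothetical service on the template
}
```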
A common issue seen when trying to deploy, design or scale up a ThingWorx application is performance. Slow responses, delayed data and the application stopping have all been seen when a performance problem either grows slowly or suddenly pops up. There are some common themes, typically around application model or design. Here are a few of the common problems and some thoughts on what to do about them or how to avoid them.

Service Execution

This covers a wide range of possibilities and is most commonly seen when trying to scale an application. Data access within a loop is one particular thing to avoid. Accessing data from a Thing, another service or a query may be fast when testing with only 100 loops, but when the application grows to 1,000 it is suddenly slow. Access all data in one query and use that as an in-memory reference (see the sketch after this section). Writing data to a data store (Stream, DataTable or ValueStream) and then querying that same data within one service can cause problems as well. Run the query first, then use the data you already have in the service variables.

To troubleshoot service executions there are a few methods that can be used. Some will not be practical for a production system, since it is not always advisable to change code without testing first.

Use browser development tools to see the execution time of a service. This is especially helpful when a mashup is slow to load or respond. It allows you to quickly identify which of multiple services may be the issue.

Add logging to a service. Once a service is identified, adding simple logging points in the service can narrow down which code in the service causes the slowdown (it may be another service call). These logging statements show up in the script logs with timestamps (you can also log the current time in the logging statements).

Use the Test button in Composer. This is a simple one, but if the service does not have many parameters (or has defaults) it's a fast and easy way to see how long a service takes to return.

When all else fails you can get thread dumps from the JVM. ThingWorx Support created an extension that assists with this. You can find it on the Marketplace with instructions on how to use it. You can manually examine the output files or open a ticket with support to allow them to assist. Just be careful with memory dumps; they are much larger, hard to analyse and take a lot of memory. https://marketplace.thingworx.com/tools/thingworx-support-tools

Queries

These of course are services too, but of a specific type. Accessing data in ThingWorx storage structures or from external sources seems fairly straightforward, but can be tricky when dealing with large data sets. When designing internal platform storage, refer to this guide as a baseline for deciding where to store data: Where Should I Store My Thingworx Data?

NEVER store historical data in infotable properties. These are held in memory (even if they are persistent) and as they grow, so will the JVM memory use, until the application runs out of it. We all know what happens then. Finally, one other note that has caused occasional confusion: the setting on a query service or standard ThingWorx query service that limits the number of records returned. This is how many records are returned from the service at the end of processing, not how many are processed or loaded in memory. That number may be much higher and could cause the same types of issues.
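To illustrate the "query once, use an in-memory reference" advice from the Service Execution section, here is a minimal sketch (the DataTable name and the deviceId field are illustrative assumptions):

```javascript
// Query the mapping DataTable a single time, up front.
var mapping = Things["MyMappingDataTable"].GetDataTableEntries({ maxItems: 10000 });

// Build an in-memory lookup keyed on deviceId instead of querying inside the loop.
var lookup = {};
for (var i = 0; i < mapping.rows.length; i++) {
    var row = mapping.rows.getRow(i);
    lookup[row.deviceId] = row;
}

// Later, inside the processing loop, reference lookup[deviceId] - no further queries.
```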
Subscriptions and Events

These are similar to services, however there is an added element: frequency. Typical events are data changes and timers/schedulers. This again is often an issue only when scaling up the number of Things or the amount of data that needs to be referenced. A general reference on timers and schedulers, which also describes some of the event processing that takes place on the platform, can be found here: Timers and Schedulers - Best Practice

For data change events, be very cautious about adding these to very rapidly changing property values. When a property updates very quickly, for example twice each second, the subscription to that event must be able to complete in under 0.5 seconds to keep up with processing. Again, this may work for 5-10 Things with such properties, but will not work with 500 due to resources, speed and the need to briefly lock the value to get an accurate current read. In these cases any data processing should be done at the edge when possible (or in the originating system) and pushed to the platform in a separate property or service call. This allows for more parallel processing since it is de-centralized.

A good practice for easier testing of this type of subscription code is to take all of the script/logic and move it to a service call, then pass any of the needed event data to parameters of the service (see the sketch at the end of this tip). This allows for easier debugging, since the event does not need to fire to make the logic execute. In fact it can be tested essentially stand-alone via the Test button in Composer.

Mashup Performance

This one can be very tricky, since additional browser elements and rendering come into play. Sometimes service execution is the root of the issue, as reviewed above; other times UI elements and design cause the slowdown. The Repeater widget is a common culprit. The biggest thing to note here is that each repeater has to render every element that is repeated, including all of the data and formatting for each of the widgets in the repeated mashup. So any complex mashup that is repeated many times may become slow to load. You can minimize this to a degree with the Load/Unload setting of the widget, depending on when the slowness is more acceptable (when loading or when scrolling). When a mashup is launched from Composer it comes with some built-in debugging tools to see errors and execution. Using these together with the browser's debug tools can be very helpful.

Scaling an Application

When initially modeling an application, scale must be considered from the start. It is a challenge (but not impossible) to modify an application after deployment or design to be very efficient. Many new developers on the ThingWorx platform fall into what I call the .NET trap. Back when .NET was released, one of the quotes I recall hearing about its inefficiencies was "memory is cheap". It was more cost efficient to purchase and install more memory than to spend extra development time optimizing memory use. This was absolutely true for installed applications, where all of the code was compiled and stored on every system. Web-based applications are not quite as forgiving, since most processing and execution is done on the single central web server. Keep this in mind especially when creating Shapes, Templates and Subscriptions. While you may be writing one piece of code, when that code is repeated on 1,000 Things they will all be in memory and all be executing this code in parallel.
You can quickly see how competition for resources, locks on databases and clean access to in-memory structures can slow everything down (and just think what happens when there are 10,000 pieces of that same code!!). Two specific points around this must be stated again (though they were covered in the sections above). Data held in properties has fast access since it is in JVM memory, but it is held in memory for each individual Thing: holding 5 MB of information in one Thing seems small, but loading 10,000 Things means an instant use of 50 GB of memory!! Next, service execution: when 10 Things are running, a service execution takes 2 seconds. Slow, but not too bad, and maybe not too noticeable in the UI. Now have 10,000 Things competing for the same data structure and resources; I have seen execution time jump to 2 minutes or more.

Aside from design, the best thing you can do is TEST on a scaled-up structure. If you will have 1,000 Things next year, test your application early at that level of deployment to help identify any potential bottlenecks early. Never assume more memory will alleviate the issue. Also do NOT test scale on your development system. This introduces edits, changes and other variables which can affect the real-world results. Have a QA system set up that mirrors a production environment and simulate data and execution load.

Additional suggestions are welcome in the comments, and this will likely be updated as additional tools and platform updates change things.
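Coming back to the subscription advice above, here is a minimal sketch of a data change subscription that simply delegates to a separately testable service (the HandleTemperatureChange service and its parameters are illustrative assumptions):

```javascript
// Data change subscription body: hand the event data straight to a service on
// the same Thing, so the logic can be run from the Composer Test button
// without waiting for the event to fire.
me.HandleTemperatureChange({
    newValue: eventData.newValue.value,  // property value from the event payload
    changeTime: eventData.newValue.time  // timestamp from the event payload
});
```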
Putting this out there because this is a difficult problem to troubleshoot if you don't do it right.

Let's say you have an application where visibility permissions are in effect, so you have the Users group removed from the Everyone Organization. Now you have a Thing "Thing1" with Properties that are being logged to a ValueStream "VS1". What do you need to make this work? Obviously, the necessary permissions to write the values to Thing1 and read the values from Thing1 (for the UI). But for visibility what you'll need is:

Visibility to Thing1 (makes sense)
Visibility to the Persistence Provider of the ValueStream VS1 !!!!

Nope, you don't need visibility to the ValueStream itself, but you DO need visibility to the Persistence Provider of that ValueStream. The way the lack of this permission showed up in the Application Log was a message about trying to provide a Null value.
This document is designed to help troubleshoot some commonly seen issues while installing or upgrading the ThingWorx application, prior to or instead of contacting Tech Support. This is not a defined template for a guaranteed solution, but rather a reference guide that provides an opportunity to eliminate some of the possible root causes. While following the installation guide and matching the system requirements is sufficient to get a successfully running instance of ThingWorx, some issues could still occur upon launching the app for the first time. Generally, those issues arise from minor environmental details and can be easily fixed by aligning with the proper installation process. Currently, the majority of the installation hiccups come from the PostgreSQL side. That being said, the very first thing to note, whether it's a new user trying out the platform or a returning one switching the database to PostgreSQL: the PostgreSQL database must be installed, configured, and running prior to the ThingWorx installation.

ThingWorx 7.0+: Installation errors out with 'failed to succeed more than the maximum number of allowed acquisition attempts'; 'Platform being shut down because System Ownership cannot be acquired' error; ERROR: relation "system_version" does not exist

Resolution: Generally, this type of error points at a security/permission issue. As all of the installation operations should be performed by a root/Administrator role, the following points should be verified:

Ensure both the Tomcat and ThingworxPlatform folders have the relevant read/write permissions.
The title and contents of the configuration file in the ThingworxPlatform folder changed from 6.x to 7.x; check that the right configuration file is in the folder.
Verify that the name and password provided in this configuration file match the ones set in the Postgres DB.
Run the database cleanup script, and then set up the database again. Verify by checking the thingworx table space (about 53 tables should be created).

ThingWorx Application: Blank screen, no errors in the logs, "waiting for <url>" gears spinning but never actually loading, eventually timing out.

Resolution: Ensure that Java in Tomcat is pointing to the right path; it should be something like: C:\Program Files\Java\jre1.8.0_101\bin\server\jvm.dll

6.5+ Postgres:

1. Error when executing thingworxpostgresDBSetup.bat: psql:./thingworx-database-setup.sql:1: ERROR: could not set permissions on directory "D:/ThingworxPostgresqlStorage": Permission denied

Resolution: The error means that the postgres user was not able to create a directory in the 'ThingworxPostgresqlStorage' directory. As it's related to security/permissions, the following steps can be taken to clear the error: Assign read/write permissions to the Everyone user group to fix the script execution, and then execute the batch file. Right-click on the 'ThingworxPostgresqlStorage' directory -> Share with -> Specific people; select the drop-down, add the Everyone group and update the permission level to Read/Write; click Share. Then execute the batch file as admin.

2. Installation error message "relation root_entity_collection does not exist" is displayed with the PostgreSQL version of the ThingWorx platform.

Resolution: Such an error message is displayed only if the schema parameter passed to the thingworxPostgresSchemaSetup.sh script is different from $USER or PUBLIC. To clear the error, edit the PostgreSQL configuration file, postgresql.conf, and add your own schema to the SEARCH_PATH item.
Other common errors upon launching the application: two of the most commonly seen errors are 404 and 401. While there can be numerous reasons to see those errors, here are the root causes that fall under the "very likely" category:

404 Application not found during a new install:
Ensure Thingworx.war was deployed - check the hard drive directory of Tomcat/webapps and ensure Thingworx.war and the Thingworx folder are present, as well as ThingworxStorage in the root (or custom selected location).
Ensure the Thingworx.war is not corrupted (re-download it from support if needed and compare the size).

401 Application cannot be accessed during a new install or upgrade:
For PostgreSQL, ensure the database is running and can be connected to; also see the Basic Troubleshooting points below.
Verify that the Tomcat, Java, and database (in case of PostgreSQL) versions match the system requirements guide for the appropriate platform version.
Ensure the upgrade was performed according to the guide and the necessary folders were removed (after copying them as a preventative measure).
Ensure the correct port is specified in platform-settings.json (for PostgreSQL); by default the connection string is jdbc:postgresql://localhost:5432/thingworx

Again, keep in mind that while the symptoms are common and can generally be resolved with the same solution, every system environment is unique and may require an individual approach for a guaranteed resolution.

Basic troubleshooting points for: Validating PostgreSQL installation; Postgres install troubleshooting; java.lang.NullPointerException error during PostgreSQL installation; ***CRITICAL ERROR: permission denied for relation root_entity_collection; Error while running scripts: Could not set permissions on directory "/ThingworxPostgresqlStorage": Permission Denied; Acquisition Attempt Failed error

Resolution:
Ensure the 'ThingworxStorage', 'ThingworxPlatform' and 'ThingworxPostgresqlStorage' folders are created. The folders have to be present in the root directory unless specifically changed in any configurations.
It is recommended to grant sufficient privileges (if not all) to the database user (twadmin).
Note: While running the script to create a database, if a schema name other than 'public' is used, the "search_path" in "postgresql.conf" must be changed to reflect 'NewSchemaName, public'.
Grant the user permission to access the root folders containing 'ThingworxPostgresqlStorage' and 'ThingworxPlatform'.
The password set for the default 'twadmin' in the pgAdmin III tool must match the password set in the configuration file under the ThingworxPlatform folder.
Ensure the THINGWORX_PLATFORM_SETTINGS variable is set up.

Error:
psql:./thingworx-database-setup.sql:14: ERROR: could not create directory "pg_tblspc/16419/PG_9.4_201409291/16420": No such file or directory
psql:./thingworx-database-setup.sql:16: ERROR: database "thingworx" does not exist

Resolution: Replace /ThingworxPostgresqlStorage in the .bat file with C:\ThingworxPostgresqlStorage and omit the -l option in the command window. Also see: Troubleshooting Syntax Error when running postgresql set up scripts
Back in 2018 an interesting capability was added to ThingWorx Foundation allowing you to enable statistical calculation of service and subscription execution.

We typically advise customers to approach this with caution for production systems, as the additional overhead can be more than you want to add to the work the platform needs to handle. That said, these statistics, used consciously, can be extremely helpful during development, testing, and troubleshooting to help ascertain which entities are executing which services and where potential system bottlenecks or areas deserving performance optimization may lie.

Although I've used the Utilization Subsystem services for statistics for some time now, I've always found that the Composer table view is not sufficient for a deeper multi-dimensional analysis. Today I took a first step in remedying this by getting these metrics into Excel, and I wanted to share it with the community, as it can be quite helpful in giving developers and architects another view into their ThingWorx applications and in taking and comparing benchmarks to ensure that operation and scaling are happening as expected when the application was put into production.

Utilization Subsystem Statistics

You can enable and configure statistics calculation from the Subsystem Configuration tab. The help documentation does a good job of explaining this, so I won't repeat it here. The base guidance is not to use persisted statistics, nor percentile calculation, as both have significant performance impacts. Aggregate statistics are less resource intensive, as there are fewer counters, so they are more appropriate for a production environment. Entity-specific statistics require greater resources, and this scales up with the number of provisioned entities that you have (i.e. 1,000 machines versus 10,000 machines), whereas aggregate statistics remain more constant as you scale up your deployment and its load.

Utilization Subsystem Services

In the subsystem Services tab, you can select "UtilizationSubsystem" from the filter drop-down and you will see all of the relevant services to retrieve and reset the statistics. Here I'm using the GetEntityStatistics service to get entity statistics for Services and Subscriptions.

Using Postman to Save the Results to File

I have used Postman to do the same REST API call, format the results as HTML, and save these results to a file so that they can be imported into Excel. You need to call '/Thingworx/Subsystems/UtilizationSubsystem/Services/GetEntityStatistics' as a POST request with the Content-Type and Accept headers set to 'application/xml'. Of course you also need to add an appropriately permissioned and secured AppKey to the headers in order to authenticate and get service execution authorization. You'll note the Export Results > Save to a file menu over on the right to get your results saved.

Importing the HTML Results into Excel

As simple as I would like to hope getting a standard web-formatted file into Excel should be, it didn't turn out to be as easy as I would have hoped, so I had to switch over to Windows to take advantage of Power Query. From the Data ribbon, select Get Data > From File > From XML. Then find and select the HTML file saved in the previous step. Once it has loaded the file and done some preparation, you'll need to select the GetEntityStatistics table in the results on the left.
This should display all of the statistics in a preview table on the right. Once the query has completed, you should have a table showing your statistical data ready for... well... slicing and dicing.

The good news is that I did the hard part for you, so you can just download the attached spreadsheet and update the dataset with your fresh data to have everything parsed out into separate columns. Now you can use the column filters to search for entity or service patterns or to select specific entities or attributes that you want to analyze. You'll need to clear the column filters later to get your whole dataset back.

Updating the Spreadsheet with Fresh Data

In order to make this data and its analysis more relevant, I went back, reset all of the statistics, and took a new sample that was exactly one hour long. This way I get correct recent min/max execution time values as well as a better understanding of just how many executions / triggers happen in a one-hour period for my benchmark. Once I had the new HTML file saved, I went into Excel's Data ribbon, selected a cell in the data table area, and clicked "Queries & Connections", which brought up the pane on the right showing my original query. Hovering over this query, I was prompted with some options and chose "Edit". Then I clicked on the tiny little gear to the right of "Source" in the pane on the right side. Finally I was able to select the new file, and Power Query opened it up for me. I just needed to click "Close & Load" to save and refresh the query providing data to the table.

The only thing at this point was that I didn't have my nice little sparklines, as my regional decimal character is not a period - so I selected the time columns and did a "Replace All" from '.' to ',' to turn them into numbers instead of text.

Et voilà! There you have it - ready to sort, filter, search and review to help you better understand which parts of your application may be overly resource hungry, or even to spot faulty equipment that may be communicating and triggering workflows far more often than it should.

Specific vs General

Depending on the type of analysis that you're doing, you might find that the aggregate statistics are a better option. As there are far, far fewer of them than the entity-specific statistics, they do a better job of giving you a holistic view of the types of things happening within your ThingWorx application's execution. The entity-specific data set that I'm showing here is a better choice for troubleshooting and diagnostics, to try to understand why certain customers/assets/machines are behaving strangely, as we can drill specifically into these stats. Keep in mind however that you should then compare these findings with the general baseline to see how this particular asset behaves compared to the whole fleet.

As a size guideline: I did an entity-specific version of this file for a customer with 1,000 machines and the Excel spreadsheet was 7 MB, compared to the 30 KB of the one attached here, and just opening and saving it was tough for Excel (likely due to all of my nested formulas). Just keep this in mind as you use this feature, as there is memory overhead, meaning also garbage collection and associated CPU usage.
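As a quick server-side alternative to the Postman route, the same statistics can also be pulled from within a custom service; a minimal sketch, assuming GetEntityStatistics is invoked with no input parameters:

```javascript
// Fetch entity statistics directly from the Utilization Subsystem.
var stats = Subsystems["UtilizationSubsystem"].GetEntityStatistics();

// stats is an InfoTable, so it can be filtered or sorted here, or returned
// from a wrapper service and exported in whatever format suits the analysis.
var rowCount = stats.rows.length;
```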
This Best Practices document offers guidelines and tips & tricks on how to work with Timers and Schedulers in ThingWorx. After exploring the configuration and creation of Timers and Schedulers via the UI or JavaScript services, this document will also highlight some of the most common performance issues and troubleshooting techniques.

Timers and Schedulers can be used to run jobs or fire events on a regular basis. Both are implemented as Thing Templates in ThingWorx. New Timer and Scheduler Things can be created based on these Templates to introduce time-based actions. Timers can be used to fire events at a certain interval, defined in the Timer's Update Rate (default is 60000 milliseconds = 1 minute). Schedulers can be used to run jobs based on a cron pattern (such as once a day or once an hour). Schedulers also allow for a more detailed time-based setup, e.g. based on seconds, hours, days of the week or days of the month etc. Events fired by both Timers and Schedulers can be subscribed to with Subscriptions, which can be utilized to execute custom service scripts, e.g. to generate "fake" or random demo data to update Remote Things in a test environment. In general, subscriptions and scripts can be used to e.g. run regular maintenance tasks or periodically required functions (e.g. for data aggregation).

For more information about setting up Timers and Schedulers it's recommended to also have a look at the following content:

How to set up and configure Timers
How to set up and configure Schedulers
How to create and configure Timers and Schedulers via JavaScript Services
Events and Subscriptions for Timers and Schedulers

Example

The following example illustrates how to create a Timer Thing that updates a Remote Thing with random values. To avoid any conflicts with permissions and visibility, use the Administrator user to create the Things.

Remote Thing

Create a new Thing based on the Remote Thing Template, called myRemoteThing.
Add two properties, numberA and numberB - both Integers and marked as persistent.
Save myRemoteThing.

Timer Thing

Create a new Thing based on the Timer Template, called myTimerThing.
In the Configuration, change the Update Rate to 5000, to fire the Event every 5 seconds.
Set the User Context to Administrator. This will run the related services with the Administrator's visibility and permissions.
Save myTimerThing.

Subscriptions

To update the myRemoteThing properties when the Timer Event fires, there are two options: configure a Subscription on myRemoteThing listening to Timer Events on myTimerThing, or configure a Subscription on myTimerThing listening to Timer Events on itself as a source. In this example, let's go with the first option and edit myRemoteThing.

Create a new Subscription pointing to myTimerThing as a Source.
Select the Timer Event. Note that if no source is selected, the Timer Event is not available, as myRemoteThing is based on the Remote Thing Template and not the Timer Template.
Enable the Subscription.
In the Script area use the following code to assign two random numbers to the Thing's custom properties:

me.numberA = Math.floor(Math.random() * 100);
me.numberB = Math.floor(Math.random() * 100);

Save myRemoteThing.

Validation

The Subscription will be enabled and active on saving. Switch to the myRemoteThing Properties; refreshing the Values will show updates with random numbers between 0 and 99 every 5 seconds (the Timer Update Rate).
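The example Things can also be created programmatically; here is a minimal sketch using the EntityServices resource (setting the Update Rate via the Thing's configuration table is omitted here):

```javascript
// Create a Timer Thing from the Timer template (names match the example above).
Resources["EntityServices"].CreateThing({
    name: "myTimerThing",
    description: "Timer created via a service",
    thingTemplateName: "Timer"
});

// A newly created Thing must be enabled and restarted before it fires events.
Things["myTimerThing"].EnableThing();
Things["myTimerThing"].RestartThing();
```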
Performance considerations

Timers and Schedulers are handled via the Event Processing Subsystem. Metrics that reflect current performance can be seen in Monitoring > Subsystems > Event Processing. Implementing Timers and Schedulers at a Thing Template level might flood the system with service executions originating from Subscriptions to Timer / Scheduler triggered Events. Subscribing to another Thing's Events will be handled via the Event Processing Subsystem. Subscribing to an Event on the same Thing will not be handled via the Event Processing Subsystem, but rather executes on the already open in-memory Thing. If Timers and Schedulers are not strictly needed, the services can be triggered e.g. via Data Change Events, UI interactions etc. Recursion can be a hidden performance contributor, where a Subscription to a certain Event executes a service, triggering another Event with recursive dependencies. Ensure there are no circular dependencies and service calls across entities. If possible, reads from disk for each and every action should be avoided. Performance can be increased by storing relevant information in memory and using Streams or DataTables for persistence. If possible, call other services from within the Subscription instead of handling all code within the Subscription itself. For full details, see also Timers and Schedulers - Best Practice.

How to identify and troubleshoot technical issues

Check the Event Processing Subsystem for any spikes in queued Events (tasks submitted) while the total number of tasks completed is not, or is only slowly, increasing. For a historical overview, search the ApplicationLog for "Thingworx System Metrics" to get system metrics since the server has been (re-)started. In the ApplicationLog the message "Subsystem EventProcessingSubsystem is started" indicates that the Subsystem is indeed started and available. Use custom loggers in services to get more context around errors and execution in the ScriptLog. Custom loggers can be used to identify whether Events have fired and Subscriptions are actually triggered. Example: logger.debug("myThing: executing subscribed service"). For issues with service execution, see also CS268218. Infinite loops in services could render the server unresponsive and might flood the system with various Events. To change the timing of a Timer, restarting the Thing is not enough; the Timer must be disabled and re-enabled at the desired start time. Schedulers allow for much more flexible timing and setting / changing execution times in advance. For further analysis it's recommended to generate Thread Dumps to get more information about the current state of threads in the JVM. The ThingWorx Support Tools Extension can help in generating those. See also CS245547 for more information and usage.
Welcome to the ThingWorx Manufacturing Apps Community! The ThingWorx Manufacturing Apps are easy to deploy, pre-configured, role-based starter apps that are built on PTC's industry-leading IoT platform, ThingWorx. These Apps provide manufacturers with real-time visibility into operational information, improved decision making, accelerated time to value, and unmatched flexibility to drive factory performance.

This Community page is open to all users -- including licensed ThingWorx users, Express ("freemium") users, or anyone interested in trying the Apps. Tech Support community advocates serve users on this site and are here to answer your questions about downloading, installing, and configuring the ThingWorx Manufacturing Apps.

A. Sign up:
ThingWorx Manufacturing Apps Community: PTC account credentials are needed to participate in the ThingWorx Community. If you have not yet registered a PTC eSupport account, start with the Basic Account Creation page.
Manufacturing Apps Web portal: Register a login for the ThingWorx Manufacturing Apps web portal, where you can download the free trial and navigate to the additional resources discussed below.

B. Download: Choose a download/packaging option to get started.
i. Express/Freemium Installer (best for users who are new to ThingWorx): If you want to quickly install the ThingWorx Manufacturing Apps (including ThingWorx), use the following installer: Download the Express/Freemium Installer
ii. 30-day Developer Kit trial: To experience the capabilities of the ThingWorx Platform with the Manufacturing Apps and create your own Apps: Download the 30-day Developer Kit trial
iii. Import as a ThingWorx Extension (for users with a Manufacturing Apps entitlement -- including ThingWorx commercial customers, PTC employees, and PTC Partners): ThingWorx Manufacturing Apps can be imported as ThingWorx extensions into an existing ThingWorx Platform install (v8.1.0). To locate the download, open the PTC Software Download Page and expand the following folders: ThingWorx Platform | Release 8.x | ThingWorx Manufacturing Apps Extension | Most Recent Datacode

C. Learn
After downloading the installer or extensions, begin with Installation and Configuration. Follow the steps laid out in the ThingWorx Manufacturing Apps Setup and Configuration Guide 8.2. Find helpful getting-started guides and videos within the 'Get Started' section of the ThingWorx Manufacturing Apps Portal.

D. Customize
Once you have successfully downloaded, installed, and configured the Manufacturing Apps, begin to explore the deeper potential of the Apps and the ThingWorx Platform. Follow along with the discussion and steps contained in the ThingWorx Manufacturing Apps and Service Apps Customization Guide 8.2. Also contained within the 'Get Started' page of the ThingWorx Manufacturing Apps Portal is the "Evolve and Expand" section, featuring:
-Custom Plant Layout application
-Custom Asset Advisor application
-Global Plant View application
-ThingWorx Manufacturing Apps Technical Lab with Sigma Tile (Raspberry Pi application)
-Configuring the Apps with demo data set and simulator
-Additional Advanced Documentation

E. Get help / give feedback / interact
Use the ThingWorx Manufacturing Apps Community page as a resource to find documentation, peruse past forum threads, or post a question to start a discussion! For advanced troubleshooting, licensed users are encouraged to submit support tickets to the PTC My eSupport portal.
Validator widgets provide an easy way to evaluate simple expressions and allow users to see different outcomes in a Mashup. Using a validator is fairly intuitive for simple expressions, such as: is my field blank? But if we need to evaluate a more complex scenario based on multiple parameters, then we can use our validator with a backing service that performs the more complex analysis. To show how services can work in conjunction with a validator widget, let's consider a slightly more complicated scenario: a web form needs to validate that the zip or postal code entered by the user is consistent with the country the user selected.

Let's go through the steps of validating our form:

Create a service with two input parameters. Our service will use regex to validate a postal code based on the user's country. Here's some sample code we could use on an example Thing:

//Input parameters: Country and PostalCode (strings)
//Country-based regular expressions:
var reCAD = /^[ABCEGHJKLMNPRSTVXY]{1}\d{1}[A-Z]{1} *\d{1}[A-Z]{1}\d{1}$/;
var reUS = /^\d{5}(-\d{4})?$/;
var reUK = /^[A-Za-z]{1,2}[\d]{1,2}([A-Za-z])?\s?[\d][A-Za-z]{2}$/;
var search = "";
//Validate based on Country:
if (Country==="CAD")
    search = reCAD.exec(PostalCode);
else if (Country==="US")
    search = reUS.exec(PostalCode);
else if (Country==="UK")
    search = reUK.exec(PostalCode);
(search == null) ? result = false : result = true;

Set up a simple mashup to collect the parameters and pass them into our service.
Add a validator widget.
Configure the validator widget to parse the output of the service. ServiceInvokeComplete on the service should trigger the evaluation and pass the result of the service into a new parameter on the widget called ServiceResult (Boolean). The expression for the evaluator would then be: ServiceResult ? true : false
Based on the output of the validator, provide a message confirming whether the postal code is valid or invalid.
Add a button to activate the service and the validator evaluation.

Of course, in addition to providing a message we can also use the results of the validator to activate additional services (such as writing the results of the form to the database). For an example you can import into your ThingWorx environment, please see the attached .zip file, which contains a sample mashup and a test thing with a validator service.
Background

Getting a performance benchmark of your running application is an important thing to do when deploying and scaling up an application in production. This not only helps focus in on performance issues quickly, but also allows for safely planning scale-up and resource sizing based on real, concrete data.

I recently created a tool and made a post about capturing and analysing ThingWorx utilisation statistics to do such an analysis, as well as identifying potential performance bottlenecks. Although they are rich and precise, utilisation statistics fall short in a number of areas however - specifically being able to count and time specific service executions, as well as identifying and sorting based on the host executing the service.

Tomcat Access Log Analysis

As ThingWorx is a Tomcat web application, Tomcat logs details of the requests being made to the application server and the ThingWorx REST API. The default settings include the host (IP address), date/timestamp, and request URI, which can be decoded to reveal relevant details like the calling entities and service executions. Adding 3 key additional variables (%s %B %D) to the access log pattern in server.xml also gives us the HTTP response code, bytes returned from Tomcat, and service execution time. For example, a pattern along the lines of %h %l %u %t "%r" %s %B %D (the stock pattern plus the three extra variables) produces entries like the one shown. This is super useful, as we can now determine the exact time of service executions and run statistics on their execution totals and execution times.

Once you have an access log file looking like the one above, you can attempt to load it into the access_log sheet in the analysis Excel workbook that I created. You do this by clicking on the access_log table, then selecting "Data > Get Data > Data Source Settings". You'll then be prompted with the following or similar pop-up, allowing you to navigate to your access_log file to select and then load.

It should be noted that you'll have to refresh the table after selecting the new access_log.txt file so that it is read in and populates the table. You can do this by right-clicking on the table and selecting Refresh, or using the Data > Refresh button.

This workbook relies on a number of formulas to slice and dice the timestamp, and during my attempts at importing I had significant issues with this due to some of the ways that Excel does things automatically without any manual options. You really need to make sure that the timestamps are imported and converted correctly, or something in the workbook will likely not work as intended. One thing that I had to do was to add 1 second to round up 00:00:00 for the first entries, as this was being imported as a date without the time part, with the subsequent lines imported as date/time.

Depending on how many lines your file has, you'll likely also have to "Fill Down" the formulas on the right side of the sheet, which may be empty in the table after importing your new data set. I had the best results by selecting the cells in question on the last row, then going down to the bottom corner, pushing and holding Shift, clicking on the last cell bottom right, and then selecting Home > Fill > Down to pull the formulas down from the top.

Once the data is loaded, you'll be able to start poking around. The filters and sorting by the named columns are really helpful, as you can start out by doing things like removing a particular host, sorting by longest execution times, selecting execution times greater than 4 seconds, or only showing activity aimed at a particular entity or service.
You really need to make sure that the imported data is fine and looks perfect, as the next steps will totally break if not. With the data loaded, you can now go to the Summary Data table, right-click on one of the tables, and select Refresh. This reloads the data into the pivot table and re-runs its calculations.

Once the refresh is complete, you should see the table summary as shown here; there are Day, Hour, and Minute expand/collapse buttons. You should also see the Day, Hour, Month fields showing in the Field Definitions on the right. This is the part that is painful - if the dates are in the wrong format and Excel is unable to auto-detect everything in the same way, then you will not get these automatically created fields.

With the data reloaded and the Pivot Tables re-built, you should be able to go over to the Dashboard sheet to start looking at and analysing the graphs. This one is showing the Top 10 services organised into hourly buckets with cumulated service execution times.

I'm not going to go into all of the workbook's features, but you can also individually select a set of key services that you want to look at together, across both the execution count and execution time dimensions. Next you can see the coordinated view of both total service execution time and number of service executions. This is helpful when looking for patterns, e.g. a service taking longer while being triggered the same number of times, versus a service both being triggered more often and taking more total time. I've created a YouTube video (see bottom) which goes through using all of the features as well as providing other pointers.

Getting into a finer level of detail, this "bonus" sheet provides a Pivot Table and Pivot Chart which allow exploring the minimum, maximum and average execution time for a specific service. Comparing this with the utilisation subsystem metrics taken during the same period now provides much deeper insight, as we can pinpoint where the peaks were, how long they lasted, and where the slow executions were in relation to other services being executed at that time (example: identifying many queries/data processing operations occurring simultaneously).

Without further ado, you can download and play with my ThingWorx Tomcat Access Log Analysis Excel Workbook, and check out the recorded demonstration and explanation for more details on loading and analysis use. [YouTube] ThingWorx Tomcat Access Logs - Service Performance Analysis
If a ThingWorx 7.4 installation with an MSSQL database doesn't start, showing a license error on the splash screen despite taking the necessary steps to specify the license path, the error might be misleading: the problem may actually lie in the database connection. Going to ThingworxStorage/logs and opening ApplicationLog.log might reveal the following (or similar) errors:

2017-04-13 10:26:17.993-0400 [L: INFO] [O: c.t.s.ThingWorxServer] [I: ] [U: SuperUser] [S: ] [T: localhost-startStop-1] Sending Post Start Notifications...
2017-04-13 10:26:17.999-0400 [L: ERROR] [O: c.t.p.m.MssqlModelExceptionTranslator] [I: ] [U: SuperUser] [S: ] [T: localhost-startStop-1] [message: Could not create a transaction for ThingworxPersistenceProvider]
2017-04-13 10:26:18.001-0400 [L: ERROR] [O: E.c.t.p.d.FileTransferDocumentModelProvider] [I: ] [U: SuperUser] [S: ] [T: localhost-startStop-1] [context: Could not create a transaction for ThingworxPersistenceProvider][message: Could not create a transaction for ThingworxPersistenceProvider]
2017-04-13 10:26:18.004-0400 [L: ERROR] [O: c.t.p.m.MssqlModelExceptionTranslator] [I: ] [U: SuperUser] [S: ] [T: localhost-startStop-1] [message: Could not create a transaction for ThingworxPersistenceProvider]
2017-04-13 10:26:18.005-0400 [L: ERROR] [O: c.t.s.s.f.FileTransferSubsystem] [I: ] [U: SuperUser] [S: ] [T: localhost-startStop-1] Error loading queued transfer jobs from persistence provider
2017-04-13 10:26:18.006-0400 [L: ERROR] [O: c.t.p.m.MssqlModelExceptionTranslator] [I: ] [U: SuperUser] [S: ] [T: localhost-startStop-1] [message: Could not create a transaction for ThingworxPersistenceProvider]
2017-04-13 10:26:18.006-0400 [L: ERROR] [O: E.c.t.p.d.FileTransferDocumentModelProvider] [I: ] [U: SuperUser] [S: ] [T: localhost-startStop-1] [context: Could not create a transaction for ThingworxPersistenceProvider][message: Could not create a transaction for ThingworxPersistenceProvider]
2017-04-13 10:26:18.007-0400 [L: ERROR] [O: c.t.p.m.MssqlModelExceptionTranslator] [I: ] [U: SuperUser] [S: ] [T: localhost-startStop-1] [message: Could not create a transaction for ThingworxPersistenceProvider]
2017-04-13 10:26:18.008-0400 [L: ERROR] [O: c.t.s.s.f.FileTransferSubsystem] [I: ] [U: SuperUser] [S: ] [T: localhost-startStop-1] Error loading queued transfer jobs from persistence provider
2017-04-13 10:26:18.008-0400 [L: INFO] [O: c.t.s.ThingWorxServer] [I: ] [U: SuperUser] [S: ] [T: localhost-startStop-1] Thingworx Server Application...ON

A few things to check here. First, how the database was set up. The standard expected port is 1433; however, if you chose a different port, ensure that the port is available. If it isn't, update the port setting in the SQL Server Configuration Manager.

Another possible root cause: the database scripts did not run properly during setup. Currently, there is an active Jira, PSPT-3587, for resolving the script issue (NOTE: this thread is to be updated once the Jira is fixed). There are two options here: either run the .sql files manually (found in the "install" folder of the download), or edit the .bat scripts manually. Editing the script files is recommended, because running the sql commands manually leaves more room for mistakes.
The following line in the .bat scripts needs to be edited to have the ".\" removed:

sqlcmd.exe -S %server%\%serverinstance%,%port% -U %adminusername% -v loginname=%loginname% -v database=%database% -v thingworxusername=%thingworxusername% -v schema=%schema% -i .\thingworx-database-setup.sql

so that the -i option reads: -i thingworx-database-setup.sql

Note that the scripts need to be run from the same directory ("/install").
Want to do a REST call from ThingWorx? Want to use REST to send a request to an external system? Want to get data from another system using REST? Here is how you can do this...

ThingWorx has the ContentLoaderFunctions API, which provides services to load or post content to and from other web applications. One can issue an HTTP request using any of the allowed actions (GET, POST, PUT, DELETE). List of available ContentLoaderFunctions:

Delete, GetCookies, GetJSON, GetText, GetXML, LoadBinary, LoadImage, LoadJSON, LoadMediaEntity, LoadText, LoadXML, PostBinary, PostImage, PostJSON, PostMultipart, PostText, PostXML, PutBinary, PutJSON, PutText, PutXML

Example: Using the LoadXML snippet in a custom service to retrieve an XML document from a specific URL.

Insert the LoadXML snippet into a custom service:

var params = {
    proxyScheme: undefined /* STRING */,
    headers: "{ 'header1':'value1','header2':'value2'}" /* JSON */,
    ignoreSSLErrors: false /* BOOLEAN */,
    useProxy: undefined /* BOOLEAN */,
    proxyHost: undefined /* STRING */,
    url: "http://some_url/sampleXMLDocument.xml" /* STRING */,
    timeout: 30000 /* NUMBER */,
    proxyPort: undefined /* INTEGER */,
    password: "fakePassword" /* STRING */,
    username: "Administrator" /* STRING */
};
var result = Resources["ContentLoaderFunctions"].LoadXML(params);

The snippet above contains an example of how to format any headers (in JSON) that need to be passed in, the URL that points directly to some XML document, a password, a username, a timeout, and ignoreSSLErrors set to false. When LoadXML is exercised, it retrieves the XML document, which can then be parsed or handled however necessary. To see the XML document returned from this service, the service can be called from a third-party client, such as Postman. Note: If a proxy or username and password are required to connect to the URL, those parameters MUST be specified.

Example: Using the PostXML snippet in a custom service to send content to another URL - in this case, another service in Composer.

Insert the PostXML snippet into a custom service:

var content = "<xml><tag1>NAME</tag1><tag2>AGE</tag2></xml>";
var params = {
    url: "http://localhost/Thingworx/Things/thingName/Services/serviceName?postParameter=parameterName" /* STRING */,
    content: content /* STRING */,
    password: "admin" /* STRING */,
    username: "Administrator" /* STRING */
};
var result = Resources["ContentLoaderFunctions"].PostXML(params);

When posting XML content to another ThingWorx service, the postParameter header must be defined in the url parameter for the PostXML snippet. The postParameter header in the url parameter is set equal to the name of the input parameter of the service we are POSTing to; change the parameterName variable in the url to the name of the input parameter defined for that service. The content parameter is set to the XML content that will be passed into the function or manually specified.

Note: When declaring namespace URLs in an element, make sure that there is whitespace in between each declaration, e.g.: <root xmlns:h="http://www.w3.org/TR/html4/" xmlns:f="http://www.w3schools.com/furniture">
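For JSON-based endpoints the pattern is the same; here is a minimal sketch using LoadJSON (the URL is illustrative, and only the commonly needed parameters are shown):

```javascript
// Fetch and parse a JSON document from an external REST endpoint.
var params = {
    url: "https://example.com/api/readings" /* STRING - illustrative URL */,
    timeout: 30000 /* NUMBER */
};
// result holds the parsed JSON response, ready to be used in the service.
var result = Resources["ContentLoaderFunctions"].LoadJSON(params);
```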
Thingworx provides a library of InfoTable functions, one of the most powerful ones being DeriveFields (besides that, I use Aggregate and Query a lot and ... getRowCount). DeriveFields can generate additional columns for your InfoTable and fill them with values that can be derived from ... nearly anything! Hard-coded values, a Service you call, a Property value, or other values within the InfoTable you are adding the columns to.

Just remember for this Service (as well as Aggregate): no spaces between the different column definitions, and use a , (comma) as separator.

Here are two powerful examples:

//Calling another function using DeriveFields
//Note that the value thingTemplate is the actual value in the row of column thingTemplate!
var params = {
    types: "STRING" /* STRING */,
    t: AllItems /* INFOTABLE */,
    columns: "BaseTemplate" /* STRING */,
    expressions: "Things['PTC.RemoteMonitoring.GeneralServices'].RetrieveBaseTemplate({ThingTemplateName:thingTemplate})" /* STRING */
};
// result: INFOTABLE
var AllItemsWithBase = Resources["InfoTableFunctions"].DeriveFields(params);

//Getting values from other Properties
//"to" in this case is the value of the row in the column to
//Note the use of , and no spaces
//NOTE: You can make this even more generic with something like Things[to][propName]
var params = {
    types: "NUMBER,STRING,STRING,LOCATION" /* STRING */,
    t: AllAssets /* INFOTABLE */,
    columns: "Status,StatusLabel,Description,AssetLocation" /* STRING */,
    expressions: "Things[to].Status,Things[to].StatusLabel,Things[to].description,Things[to].AssetLocation" /* STRING */
};
// result: INFOTABLE
var AllAssetsWithStatus = Resources["InfoTableFunctions"].DeriveFields(params);
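Since Aggregate follows the same comma-separated, no-spaces convention, here is a minimal sketch of it as well. The InfoTable and column names are assumptions for illustration, and the snippet assumes that the columns and aggregates lists pair up positionally - check the parameter hints of your InfoTableFunctions resource for the exact signature:

//Aggregating a numeric column from the result above (assumed positional pairing)
var params = {
    t: AllAssetsWithStatus /* INFOTABLE */,
    columns: "Status,Status,Status" /* STRING */,
    aggregates: "MIN,MAX,AVERAGE" /* STRING */,
    groupByColumns: undefined /* STRING */
};
// result: INFOTABLE with the aggregated values
var StatusStats = Resources["InfoTableFunctions"].Aggregate(params);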
JavaMelody is an open source (LGPL) application that measures and calculates statistical information based on application usage. The resulting data can be viewed in a variety of formats, including evolution charts, which track various operations and server attributes over time. There are also robust reporting options that allow data to be exported in either HTML or PDF format.

Installation

Installation is fairly simple and can be done in just a few minutes. Download the distribution from the JavaMelody Wiki and extract javamelody.jar, available at https://github.com/javamelody/javamelody/releases

Step 1: Download the JavaMelody file (in unix, use the following command*):
wget javamelody.googlecode.com/files/javamelody-1.49.0.zip
Note: Check for the latest version available at the link provided above before executing the unix command, and modify the version accordingly.

Step 2: Extract the zip file (using the following command in unix; note the version from step 1):
unzip javamelody-1.49.0.zip

Step 3: Copy javamelody.jar and jrobin-x.jar from the extracted distribution to the WEB-INF/lib directory of the war file deployed in Tomcat, using the following command in unix:
cp -p javamelody.jar jrobin-x.jar /opt/tomcat/server/webapps/<application name>/WEB-INF/lib

Step 4: Edit the web.xml file in the WEB-INF directory of the war file deployed in Tomcat and add the following lines before the servlet descriptions, i.e. near the beginning of the web.xml file:

<filter>
    <filter-name>monitoring</filter-name>
    <filter-class>net.bull.javamelody.MonitoringFilter</filter-class>
</filter>
<filter-mapping>
    <filter-name>monitoring</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>
<listener>
    <listener-class>net.bull.javamelody.SessionListener</listener-class>
</listener>

Step 5: Restart the Tomcat server after editing web.xml and access the JavaMelody page using the following url pattern:
http://<hostname on which tomcat is configured>:<Port number on which the application is accessed>/<application name>/monitoring

The url can be customized in the configuration file. Reports can be viewed in weekly, daily, or monthly formats. They can also be downloaded or sent over email in pdf format. The iText library for WebApps and Java's Mail and Activation libraries are required on the server in order to use the mail session. The report provides the same information that can be found on the monitoring web page, with both high-level and detailed information: CPU and memory usage, detailed SQL information, SQL statistics, server requests, system threads and caches, and data caches.

System Overhead

On the JavaMelody Wiki, https://github.com/javamelody/javamelody/wiki/Overhead, one can find a healthy discussion about system overhead. The general consensus is that the overhead cost caused by JavaMelody is very low and that the feature is safe to enable full-time in a QA environment:
- JavaMelody records only statistics, not events, so the memory overhead is quite minimal.
- No I/O on the wire and minimal I/O on disk.
If no problems arise, enabling JavaMelody in the production environment can be considered as well. Using a tool like JavaMelody can lead to valuable insights on how to optimize servers or uncover otherwise hidden issues, providing value that exceeds the overhead cost.
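As an example of the url customization mentioned above, here is a minimal sketch that moves the report off the default /monitoring path. It assumes JavaMelody's monitoring-path filter parameter (verify the supported parameters on the JavaMelody wiki for your version) and goes inside the <filter> element from Step 4:

<filter>
    <filter-name>monitoring</filter-name>
    <filter-class>net.bull.javamelody.MonitoringFilter</filter-class>
    <init-param>
        <!-- assumed parameter: serves the report at /<application name>/admin/monitoring instead -->
        <param-name>monitoring-path</param-name>
        <param-value>/admin/monitoring</param-value>
    </init-param>
</filter>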
For those of you that aren't aware - the newest version of the Eclipse Plugin for Extension Development was made available last week in the ThingWorx Marketplace here. Because the product is still young, there is not yet an official process for supplying release notes along with the plugin. These notes are neither official nor all-encompassing, but they cover the main items worked on for 7.0.

New Features:
- Added Configuration Table Wizard for code generation
- SDK Javadocs now automatically linked to SDK resources on project creation
- When creating a Service, Trace logging statements are generated inside of it (along with appropriate initializers)
- ThingWorx Source actions are now available from the right-click menu within a .java file

Bugs:
- Fixed problem where some BaseTypes were not uppercase in annotations when generating code
- Fixed error when creating and importing Extension Projects when the Eclipse install has a space in its file path
- Fixed inconsistent formatting in metadata.xml when adding new Entities

We are hoping to have a more official release note process for the next release. Feel free to reply with questions or concerns.
Hi all, Here is the recording of the expert session hosted on September 3rd. For full-sized viewing, click on the YouTube link in the player controls. Your feedback is very important to us! After watching the recording, please take 2 minutes to complete this survey.
Background: In very rare situations, it is possible that the Firewall-Friendly Agent process may stop running. If the Agent is not running, no machine monitoring or communication with the Cloud Server is possible.

Recommendation: WatchDog is a little-known yet very helpful feature available with Firewall-Friendly Agents. This program lets you monitor whether an Agent is running; if the Agent process fails, WatchDog can bring it back up! WatchDog can also be configured to watch other processes. You can configure WatchDog to run as a service (for Windows) or daemon (for Linux), and you will register it to run as such. The WatchDog configuration file specifies the process(es) to be monitored and what to do when one exits: the options are to attempt to restart the process or to restart the system.

Note: WatchDog detects only whether a watched process exits. It will not detect or report on processes that may be "hanging".

Need more information? For information about configuring and using WatchDog, see the Agent User's Guide for your Agent: either Axeda® Platform Axeda® Gateway User's Guide (PDF) or Axeda® Platform Axeda® Connector User's Guide (PDF).
There are many choices in life, and ThingWorx offers some persistence provider options as well. As of ThingWorx release 8.2, five database options are provided:

1. PostgreSQL (9.4.5 minimum)
2. DataStax Enterprise Edition (4.6.3, 5)
3. SAP HANA (SPS 11, 12)
4. Microsoft SQL Server (2014 and later)
5. H2 (version info is not available, maybe because it's embedded?)

H2 is for small scale, mainly for testing purposes; PostgreSQL and Microsoft SQL Server are for middle scale; and DataStax Enterprise Edition is for big scale. I don't have enough information about SAP HANA, so I would like to leave it untouched in my comment... I don't have numbers as to how many customers are using which database, but my gut feeling tells me that PostgreSQL is a popular option, especially cost-wise. PostgreSQL offers powerful tools, such as logging and utilities, to troubleshoot issues.

In this post I would like to cover some useful information you can retrieve by using pgstattuple and pgstatindex from the contrib module. By default, PostgreSQL takes good care of fragmentation and reindexing by itself. But in some cases you may want to review the status of the database to narrow down the cause of an issue you are troubleshooting. There are many ways to achieve this, and the contrib module is provided to review the stats of tables and indexes. As explained in this article, it is recommended to keep the number of records in value_stream and stream below 100,000. That means you'll insert and delete many records while running ThingWorx. What happens then?

- If you delete (or update) a record in a table, PostgreSQL keeps the previous record in a page but marks it as deleted (and, for an update, inserts a new record).
- If the number of those logically deleted records increases, PostgreSQL needs to access many pages of the table to obtain the records that meet the given criteria, and users might experience slow performance because of this.
- Those logically deleted records are ultimately removed from the pages when vacuum is run.

If you have installed and enabled the contrib module, you can review the stats with the commands below;

select * from pgstattuple('stream');                 -- returns the stats of the stream table
select * from pgstatindex('stream_id_time_index');   -- returns the stats of an index on the stream table

pgstattuple returns the information below (I modified the format to make it more readable in this post); the meaning of each item is explained in the documentation.

table_len | tuple_count | tuple_len | tuple_percent | dead_tuple_count | dead_tuple_len | dead_tuple_percent | free_space | free_percent
     8192 |           1 |        33 |           0.4 |                3 |             97 |               1.18 |       8004 |        97.71

Before obtaining the stats, I inserted 4 records and deleted 3 of them; the output therefore shows a tuple_count of 1 (the active record) and a dead_tuple_count of 3 (the logically deleted records), with a dead_tuple_percent of 1.18. If dead_tuple_percent is high, it means the table has not been vacuumed, or many DML statements were executed after the last vacuum operation - and this could be the cause of slow system performance.

* IMPORTANT: pgstattuple and pgstatindex consume resources, so it's recommended to run them during a maintenance window.

Takaaki
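If the contrib functions are not yet available in your database, here is a minimal sketch of enabling them and cleaning up the dead tuples reported above. It assumes PostgreSQL 9.1 or later (where the extension mechanism exists) and superuser privileges; as with the stats functions themselves, run this during a maintenance window:

-- enable the stats functions (once per database)
CREATE EXTENSION IF NOT EXISTS pgstattuple;

-- reclaim logically deleted tuples and refresh planner statistics
VACUUM ANALYZE stream;

-- rebuild the index if pgstatindex reports heavy fragmentation
REINDEX INDEX stream_id_time_index;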
Sometimes the following error is seen filling up the application log: "HTTP header: referrer". It does not cause noticeable issues, but it does fill up the log. This article has been updated to reflect the workaround: https://www.ptc.com/en/support/article?n=CS223714

To clear the error, locate validation.properties inside the ThingworxStorage\esapi directory. Then change the values of both Validator.HTTPHeaderValue_cookie and Validator.HTTPHeaderValue_referer to:

Validator.HTTPHeaderValue_cookie= ^.*$
Validator.HTTPHeaderValue_referer= ^.*$
Preface

In this blog post, we will discuss how to start and stop ThingWorx Analytics, as well as some other useful triaging/troubleshooting commands. This applies to all flavors of the native Linux installation of the application.

In order to perform these steps, you will have to have sudo or root access on the host machine, as you will have to execute a shell script and view the outputs.

The example screenshots below were taken on a virtual CentOS 7 server with a GUI, as the root user.

Checking ThingWorx Analytics Server Application Status

1. Change directory to the installation destination of the ThingWorx Analytics (TWA) application. In the screenshot below, the application is installed to the /opt/ThingWorxAnalyticsServer directory.
2. The install directory contains a series of folders and files. You can use the ls command to see a list of files and folders in the installation directory.
   a. You will need to navigate one more level down, into the ./ThingWorxAnalyticsServer/bin directory, by using the command cd ./bin
   b. As you can see above, we used the pwd command to verify that we are in the correct directory.
3. In the ./ThingWorxAnalyticsServer/bin directory, there should be three shell files: configure-apirouter.sh, configure-user.sh, and twas.sh
   a. To run a status check of the application, use the command ./twas.sh status
      i. This will produce a list of outputs and a few warning messages. This is normal; see the screenshot below.
   b. You will see a series of services, each either green "active (running)" or red "not active (stopped)".
      i. List of services:
         twas-results-ms.service - ThingWorx Analytics - Results Microservice
         twas-data-ms.service - ThingWorx Analytics - Data Microservice
         twas-analytics-ms.service - ThingWorx Analytics - Analytics Microservice
         twas-profiling-ms.service - ThingWorx Analytics - Profiling Microservice
         twas-clustering-ms.service - ThingWorx Analytics - Clustering Microservice
         twas-prediction-ms.service - ThingWorx Analytics - Prediction Microservice
         twas-training-ms.service - ThingWorx Analytics - Training Microservice
         twas-validation-ms.service - ThingWorx Analytics - Validation Microservice
         twas-apirouter.service - ThingWorx Analytics - API Router
         twas-edge-ms.service - ThingWorx Analytics - Edge Microservice

Starting and Stopping ThingWorx Analytics

If you encounter any errors or stopped services in the above, a good solution would be to restart the TWA Server application. There are two methods to restart the application: the restart command, or the stop and start commands.

Method 1 - Restart Command:

1. In the same ./ThingWorxAnalyticsServer/bin directory, run the following command: ./twas.sh restart
   a. The output of a successful restart will look like the following:
2. The restart should only take a few seconds to complete.

Method 2 - Stop / Start Commands:

1. In the same ./ThingWorxAnalyticsServer/bin directory, run the following command: ./twas.sh stop
2. After the application stops, run the following command: ./twas.sh start

Note: You can confirm the status of the TWA Server application by following the steps in the "Checking ThingWorx Analytics Server Application Status" section above.
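Because the microservices above are registered as systemd units (note the .service suffix), you can also drill into a single service rather than the whole stack. A sketch, assuming systemd is the init system (as on the CentOS 7 host used here):

# check the state of one microservice
systemctl status twas-apirouter.service

# follow its log output while triaging a startup problem
journalctl -u twas-apirouter.service -f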
Licensing summary, installing 8.1:
- An instance-specific license is now used, which prevents license sharing and further protects our intellectual property
- 2 paths for getting up and running with a license:
  - Connected: platform-settings configuration only
  - Disconnected: text file generated, self-serve creation from the PTC support portal, license_capability_response.bin generated to be placed in the platform folder

Connected / Disconnected mode
LicensingSubsystem::GetLicenseState returns:
UNINITIALIZED in disconnected mode
LICENSE_EXISTS in connected mode

User Journey
Refer to Licensing ThingWorx 8.1 and Later for specific instructions for generating a license for your instance.

NOTE: Each instance needs a valid license file to be running.

Troubleshooting:
- If any misconfiguration or connection failures occur, error messages will be written to the Application log. Example: unable to fetch license file with device id; Unable to connect to the PTC license server. Please make sure the LicensingConnectionSettings settings...
- A connection attempt occurs upon ThingWorx startup. If the connection can't be made, the instance will follow the disconnected scenario.
- Make sure pop-up blockers are turned off.

Platform Settings - Licensing
License Connection Settings (in platform-settings.json - plain text sample) - new section for licensing connection strings:
- username - used to connect to the PTC support portal
- password - encrypted (recommended) or plain text password used to connect to the PTC support portal

"LicensingConnectionSettings": {
    "username": "<PTC_support_portal_username>",
    "password": "<PTC_support_portal_password>"
}

New Services on Licensing Subsystem
- GetInstanceID - returns the instance/device ID, which is created at startup
- WriteLicenseUsageData - writes encrypted license usage (same as the system data table) to the ThingworxStorage\reports\LicenseUsageReport folder

Instance ID
The license file is now bound to the platform Instance ID, aka Device ID (unlike most other PTC products, where the license is bound to the hardware: CPUID, MAC, ...). This Instance ID is generated during first startup and stored in the database. The Instance ID is accessible in Composer with LicensingSubsystem::GetInstanceID

Q: What happens when license files are bad or missing?
A: With 8.0, a valid license.bin needs to be in the folder before starting up; otherwise Tomcat will crash. In 8.1, no license file is needed as long as the instance is connected (platform-settings.json) - no extra step, the connection is dynamic. In the disconnected scenario, ThingWorx will run for 30 days in limited mode, with the monitoring mashup accessible to check logs.

Q: Where is the Instance ID stored?
A: It's stored in the database.

Q: Will the Instance ID change during updates (minor, e.g. 8.1.0 to 8.1.1)?
A: Device IDs don't change.

Q: What will happen in the disconnected scenario if there is no valid license after 30 days? Will the application not start anymore, or will users be unable to log in?
A: It shuts down, so there is no more "limited" mode. However, a user can come along on day 55 (for example), drop in a valid license, and start the web app to get it fully running.

Q: What happens if the customer has to reinstall the platform after the license has been fetched? Is it possible to "return" the license?
A: Reinstalling the platform results in the generation of a new Device ID, therefore a new license file will need to be generated.
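The licensing subsystem services can also be invoked over REST, which is handy when scripting license checks. A minimal sketch using curl, assuming the standard /Thingworx/Subsystems/<name>/Services/<service> URL pattern, a local instance on port 8080, and basic authentication (an application key header would work as well):

curl -X POST \
  -u Administrator:<password> \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  http://localhost:8080/Thingworx/Subsystems/LicensingSubsystem/Services/GetInstanceID

The response contains the instance/device ID that the license file is bound to; GetLicenseState can be called the same way to verify connected vs. disconnected mode.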