IoT Tips

Applicable Releases: ThingWorx Navigate 1.6.0 to 8.5.0

Description: Covers Single Sign-on (SSO) concepts and the main items to take into account when defining an SSO architecture, with the following agenda:
- What is Single Sign-on?
- What is the PTC strategy for Single Sign-on?
- How does PingFederate fit into the existing SSO federation?
- Which products can currently be configured for SSO?
- Optional SSO in Navigate 1.6
- What are the key differences in SSO across the different Navigate versions?
- What are we doing internally to prepare for SSO?
- What resources are available for SSO discussions?

Related articles for further information:
Related Service: Configure SSO for Navigate
Applicable Releases: ThingWorx Platform 7.0 to 8.5

Description: Covers how to apply patch upgrades to a ThingWorx installation, with the following agenda:
- How to read a ThingWorx version number
- Upgrading to a major/minor version of the platform
- Focus on upgrading to a patch version of the platform
- Upgrading extensions

Always check the patch release notes for additional information and specific steps.
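As a small illustration of the "how to read a ThingWorx version" point: a version string reads major.minor.patch, so comparing two strings tells you which upgrade procedure applies. The following sketch is purely illustrative (the function name and return strings are invented for this example):

// Illustrative sketch: classify an upgrade from two version strings, e.g. "8.5.2"
function upgradeType(fromVersion, toVersion) {
    var from = fromVersion.split(".");
    var to = toVersion.split(".");
    if (from[0] !== to[0]) return "major upgrade";
    if (from[1] !== to[1]) return "minor upgrade";
    return "patch upgrade";
}
// upgradeType("8.5.1", "8.5.2") => "patch upgrade" (in-place patch procedure)
// upgradeType("8.4.4", "8.5.0") => "minor upgrade" (full upgrade procedure)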
Applicable Releases: ThingWorx Platform 7.0 to 8.5

Description: Covers the main topics to consider when evaluating or designing a scalability strategy for the environment and applications. The agenda contains the following topics:
- Introduction
- Connection Server
- Federation
- High Availability

Some of the databases mentioned in the presentation are no longer supported; please refer to the compatibility matrix to check supported databases.
Related Articles:
How to configure a Federated architecture
Is high availability supported in ThingWorx?
Applicable Releases: ThingWorx Platform 7.0 to 8.4

Description: A practical example of how to build a data model in ThingWorx following a pre-defined design. The following topics are covered:
- Review the existing design plan
- Build all required entities in ThingWorx Composer
- Test the model and review scalability and reusability

The session was recorded using the old ThingWorx Composer, but the concepts are still applicable.
Related Success Service - Principles of ThingWorx Modeling
Related Service - Design your ThingWorx Model
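As a companion to the Composer walkthrough, the same entities can also be created programmatically from a server-side service using the platform's EntityServices resource. A minimal sketch; the Thing and ThingTemplate names here are invented for illustration:

// Illustrative names; assumes a ThingTemplate "PumpTemplate" already exists
Resources["EntityServices"].CreateThing({
    name: "Pump001" /* STRING */,
    description: "Demo pump created from the design plan" /* STRING */,
    thingTemplateName: "PumpTemplate" /* THINGTEMPLATENAME */
});
// New Things start disabled; enable and restart them before use
Things["Pump001"].EnableThing();
Things["Pump001"].RestartThing();

Building the model in Composer remains the approach shown in the session; a scripted variant like this is mainly useful for repeatable test setups.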
Applicable Releases: ThingWorx Platform 7.0 to 8.4

Description: Concepts and basic Mashup design, using a use case as an example. The following topics are covered:
- Recap scenario requirements and review the concept design
- Introduction to Mashup Builder
- Design a Mashup to visualize data

Related Success Service
The session was recorded using the old ThingWorx Composer, but the concepts are still applicable.
Applicable Releases: ThingWorx Platform 7.0 to 8.5

Description: Introduction to Edge connectivity in ThingWorx Foundation:
- Edge concept and definition
- Available Edge products
- Why use Edge products
- What the Edge MicroServer and Lua Script Resource are
- What the SDKs are
- What Connection Servers are
- AlwaysOn and HTTP protocols
- ThingTemplates to connect remote devices

The session was recorded in an old ThingWorx version, but all the concepts are still applicable.
Applicable Releases: ThingWorx Platform 8.3 to 8.5

Description: Installation walkthrough of ThingWorx Foundation using PostgreSQL, working through some of the main steps that can be difficult to follow in the installation guides.

Reference the installation guides for each version.
Applicable Releases: ThingWorx Platform 7.0 to 8.4

Description: Strategy and tools for ThingWorx application backups:
- Backup terminology and concepts
- Drivers to define a backup strategy
- Tips for executing a backup of a ThingWorx instance: Tomcat, certificates, configuration and file system data, application-specific files, database

The Neo4j database mentioned in the session is no longer supported. For more information check Best Practices for ThingWorx Backup.
Applicable Releases: ThingWorx Platform 7.0 to 8.5

Description: Main concepts and best practices for a DevOps methodology, such as:
- Naming conventions
- Setup and management of environments for development and testing
- Import/export process and application deployment
- Use of Tags and Projects to control your development
- Coding standards
- Validation best practices

For project packaging and deployment, make sure to check the content about Solution Central, created after this session was released.
Whether your ThingWorx instances are deployed on premise, in the cloud or a hybrid of the two, I’d like you to imagine: You have a super cool app. You want to deploy it securely. You’re not a security expert. What do you do? How do you know how to securely deploy your app? Where do you go for security best practices? Introducing the new ThingWorx Secure Deployment Hub!

The ThingWorx Secure Deployment Hub is a new section of our support site that will introduce you to the ThingWorx security landscape and direct you to security resources pertaining to the Edge, the platform and beyond.

From permissions and provisioning, to subsystems and SSO, the hub is packed with our recommendations and best practices for you to deploy your app in a secure fashion.

Happy deploying!
Kaya
The Property Set Approach

This article details an approach developed by Prachi Rath and Roy Clarke, refined by the EDC team in December 2019's Remote Monitoring of Assets Reference Benchmark, and used to handle multi-property business rules in an Enterprise ThingWorx application.

Introduction
If there are logic rules which depend upon multiple properties, and each property receives its updates one at a time, then each property will need to have an identical subscription, because there is no way for any one subscription to know the most up-to-date values of the other properties. This inefficient approach would create redundancy and sizing constraints, reducing the capacity of the application to scale up to the Enterprise level. The Property Set Approach resolves this issue by sending in all property updates as one Info Table or JSON property (called the “property set”), which can then have a single subscription. The property set is assembled on the Edge when an update needs to be sent, and the Platform then dissects, processes, and stores the data within this property set as required by the business logic.

This approach also involves caching the last property value into a runtime variable so that it can be referenced within the business logic subscription without having to be retrieved from the database. This can significantly improve the runtime of the subscription, reducing the number of resources required to sustain the business logic and ensuring that any alerts or events resulting from the business logic occur as soon as possible. It also reduces the load on the database, ensuring that data ingestion can complete unhindered.

So, while there are many benefits to this approach, it is also more complicated. It tightly couples the development of the Edge and Platform code and increases the application complexity, making the application somewhat harder to maintain in the long run. The property set also requires a little more bandwidth and a more stable internet connection between the Edge devices and the Platform, since there is more metadata in an Info Table property, and therefore every update is slightly larger than it would be otherwise. So this approach is only recommended when multi-property rules are a requirement of the application and a stable internet connection exists between the Edge and the Platform.

Platform Implementation

I. Create an Info Table (or JSON) Property
This tutorial uses the out-of-the-box Data Shape called NamedVTQ for the Info Table property, which is defined on a Thing Template as a remote property. It is important that this property is not marked as persistent or logged, as the purpose is to reduce the number of database writes and reads required by the Platform. The Info Table property has the following property definition:

<PropertyDefinition aspect.dataChangeType="ALWAYS" aspect.dataShape="NamedVTQ" aspect.isPersistent="false" baseType="INFOTABLE" isLocalOnly="false" name="numberPropertySetAsInfotable"/>
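For orientation, this is roughly what a two-row property set looks like when serialized as JSON. The rows follow the NamedVTQ shape (name, value, time, quality); the property names are illustrative, and the dataShape portion of the Info Table serialization is omitted for brevity:

{
  "rows": [
    { "name": "number1", "value": 42.0, "time": "2019-12-05T12:00:00.000Z", "quality": "GOOD" },
    { "name": "number2", "value": 17.5, "time": "2019-12-05T12:00:00.000Z", "quality": "GOOD" }
  ]
}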
II. Create a Data Change Event Subscription for the Info Table Property
The subscription has three parts:
- Cache the last value for each property in a runtime variable
- Start off the business rules processing, sending in the whole Info Table
- Send the Info Table to be logged as individual local properties in the database

// First step caches the last value (see step III below)
// Second step sets off the business rules processing with the Info Table
me.ScaleTestBusinessRuleForPropertySetAsInfotable({
    PropertySetAsInfotable: eventData.newValue.value
});
// Third step sends the Info Table as one property into a service which parses it into the
// individual properties, updating both the runtime properties on the remote thing and the database
me.UpdatePropertyValues({
    values: eventData.newValue.value /* INFOTABLE */
});

III. Set Up Caching
Each property which needs to be cached should be created on the Thing Template level and named in a similar way, say by placing the word “Last” at the end, such as “Property1” => “Property1Last”, “Property2” => “Property2Last”, etc. These properties should NOT be logged or persistent, as the point is to store the most recent value in memory, removing any superfluous dependency on database queries in the process. Note that while storing the property in runtime memory makes it much more accessible, it also means that the property needs to be rewritten manually upon Platform restart. Additional code (not provided here) must be written to populate these properties from the database upon application start-up.

The following code should be placed in the data change event subscription (option 1 in the case where only a few properties need caching, or option 2 if every property value needs to be cached):

Option 1: Some but Not All Properties Need Caching

// Names of properties for which you want to cache the last value
var propertyNames = ['number1', 'number2'];
// Loop through the properties and cache their values if they are found in the property set
propertyNames.map(assignLast);
// This function can be split into two functions for Age and Last separately if need be
function assignLast(propertyName) {
    logger.debug("Looping for property -> " + propertyName);
    var searchprop = new Object();
    searchprop.name = propertyName;
    var property = eventData.newValue.value.Find(searchprop);
    if (property) {
        logger.debug("Found Row. Name= " + property.name);
        var lastPropertyName = propertyName + "Last";
        if (property.value) {
            // Set the cache property on me, this entity, to the current property value
            me[lastPropertyName] = me[propertyName];
        }
    } else {
        logger.debug("Property Not Found in property set -> " + propertyName);
    }
}

Option 2: All Properties Need Caching

var rowCount = eventData.newValue.value.getRowCount();
for (var i = 0; i < rowCount; i++) {
    logger.warn("property name->" + eventData.newValue.value[i].name +
        "----- property new value->" + eventData.newValue.value[i].value.value);
    var propertyName = eventData.newValue.value[i].name;
    var lastPropertyName = propertyName + "Last";
    me[lastPropertyName] = me[propertyName];
    logger.warn("done last subscription, last property value for " + lastPropertyName + ": " + me[lastPropertyName]);
}
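As noted in step III, code to repopulate the cache after a Platform restart is not part of this article. Here is a hedged sketch of what such a start-up service might look like, assuming the individual properties are logged NUMBER properties (so their most recent values can be read back from the value stream) and reusing the illustrative property names from Option 1:

// Hedged sketch: repopulate the "Last" cache from the value stream after a restart.
// Assumes number1/number2 are logged NUMBER properties on this Thing; adjust the
// typed Query*PropertyHistory service to match each property's base type.
var propertyNames = ['number1', 'number2'];
propertyNames.map(function (propertyName) {
    var history = me.QueryNumberPropertyHistory({
        propertyName: propertyName /* STRING */,
        maxItems: 1 /* NUMBER */,
        oldestFirst: false /* BOOLEAN */
    });
    if (history.rows.length > 0) {
        // Most recent logged value becomes the cached "Last" value
        me[propertyName + "Last"] = history.rows[0].value;
    }
});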
Useful Platform Code Snippets

I. Age Calculation

var date1 = new Date();
var date2 = me.GetPropertyTime({
    propertyName: propertyName /* STRING */
});
var result = millisToMinutesAndSeconds(dateDifference(date1, date2));
// This function converts from an unintelligibly large number in milliseconds to something formatted in minutes and seconds
function millisToMinutesAndSeconds(millis) {
    var minutes = Math.floor(millis / 60000);
    var seconds = ((millis % 60000) / 1000).toFixed(0);
    return (seconds == 60 ? (minutes + 1) + ":00" : minutes + ":" + (seconds < 10 ? "0" : "") + seconds);
}

II. Sort the Info Table by Time

var params = {
    sortColumn: "time" /* STRING */,
    t: me.propertySet /* INFOTABLE */,
    ascending: ascending /* BOOLEAN */
};
var result = Resources["InfoTableFunctions"].Sort(params);

III. Search the Info Table for a Property

var searchprop = new Object();
searchprop.name = propertyName;
var property = PropertySetAsInfotable.Find(searchprop);
if (property === null) {
    logger.info("Property Not Found -> " + propertyName);
} else {
    logger.info("Found Row. Name= [" + property.name + "], value= " + property.value.value);
}

Edge Implementation
This example implementation uses the .NET Edge SDK to build a property set Info Table at the Edge.

I. Define the Data Shape
A standard Data Shape is used (NamedVTQ), but because this Data Shape is not exposed in the Edge SDK code, it has to be created manually.

// Data Shape definition for NamedVTQ
FieldDefinitionCollection namedVTQFields = new FieldDefinitionCollection();
namedVTQFields.addFieldDefinition(new FieldDefinition(CommonPropertyNames.PROP_NAME, BaseTypes.STRING));
namedVTQFields.addFieldDefinition(new FieldDefinition(CommonPropertyNames.PROP_VALUE, BaseTypes.VARIANT));
namedVTQFields.addFieldDefinition(new FieldDefinition(CommonPropertyNames.PROP_TIME, BaseTypes.DATETIME));
namedVTQFields.addFieldDefinition(new FieldDefinition(CommonPropertyNames.PROP_QUALITY, BaseTypes.STRING));
base.defineDataShapeDefinition("NamedVTQ", namedVTQFields);

II. Define the Info Table Property
The property defined should NOT be logged or persistent, and it can be read-only, since data is always pushed from the Edge and read from the server cache when accessed on the Platform. Note that the push type of the Info Table property MUST be set to "ALWAYS" (if set to "VALUE", the data change event will only fire if the number of rows changes).

// Property Set Definition
[ThingworxPropertyDefinition(
    name = "DevicePropertySet",
    description = "Alternative representation of properties as an Info Table for rules processing",
    baseType = "INFOTABLE",
    category = "Status",
    aspects = new string[] {
        "isReadOnly:true",
        "isPersistent:false",
        "isLogged:false",
        "dataShape:NamedVTQ",
        "cacheTime:0",
        "pushType:ALWAYS"
    }
)]

III. Define a Property to Store the GOOD Quality Status

private static String QUALITY_STATUS_GOOD = QualityStatus.GOOD.name();

IV. Define Functions to Populate the Value Collections
An Info Table is really just made up of many Value Collections, where each Value Collection is considered a row. These functions take in the name and value of a property and return a Value Collection object which can be added to the property set Info Table.
public ValueCollection createNumberValueCollection(String name, double value)
{
    ValueCollection vc = new ValueCollection();
    // Add quality and time entries to the Value Collection
    vc.SetStringValue(CommonPropertyNames.PROP_QUALITY, QUALITY_STATUS_GOOD);
    vc.SetDateTimeValue(CommonPropertyNames.PROP_TIME, new DatetimePrimitive(DateTime.UtcNow));
    vc.SetStringValue(CommonPropertyNames.PROP_NAME, name);
    vc.SetNumberValue(CommonPropertyNames.PROP_VALUE, value);
    return vc;
}

public ValueCollection createBooleanValueCollection(String name, Boolean value)
{
    ValueCollection vc = new ValueCollection();
    // Add quality and time entries to the Value Collection
    vc.SetStringValue(CommonPropertyNames.PROP_QUALITY, QUALITY_STATUS_GOOD);
    vc.SetDateTimeValue(CommonPropertyNames.PROP_TIME, new DatetimePrimitive(DateTime.UtcNow));
    vc.SetStringValue(CommonPropertyNames.PROP_NAME, name);
    vc.SetBooleanValue(CommonPropertyNames.PROP_VALUE, value);
    return vc;
}

V. Build the Property Set
Call this code from the processScanRequest method to build the property set.

// Create an instance of a new Info Table using the standard "NamedVTQ" Data Shape
InfoTable propertySet = new InfoTable(getDataShapeDefinition("NamedVTQ"));
// Set name/value for Temperature using the convenience function
propertySet.addRow(createNumberValueCollection("Temperature", temperature));
// Set name/value for Pressure using the convenience function
propertySet.addRow(createNumberValueCollection("Pressure", pressure));
// Set name/value for TotalFlow using the convenience function
propertySet.addRow(createNumberValueCollection("TotalFlow", this._totalFlow));
// Set name/value for InletValve using the convenience function
propertySet.addRow(createBooleanValueCollection("InletValve", inletValveStatus));
// Set name/value for FaultStatus using the convenience function
propertySet.addRow(createBooleanValueCollection("FaultStatus", faultStatus));
// Set the property set Info Table property
base.setProperty("DevicePropertySet", propertySet);

VI. Update the Subscribed Properties
These two lines of code update the properties and events, actually sending the property set (containing all property updates) to the Platform.

base.updateSubscribedProperties(15000);
base.updateSubscribedEvents(60000);

Conclusion
Following these steps will enable the Edge to build a property set before sending any property updates to the Platform. The Platform can then rely on caching to process the business logic with no database dependency, which is faster and more efficient than any other approach. Finally, the updates are still written to the database, so in the end, there is no functional difference between using a property set and binding each property individually. Please don't hesitate to comment here with any questions about this approach.
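As an appendix to the above: the article invokes ScaleTestBusinessRuleForPropertySetAsInfotable in step II but does not show its body. Here is a hedged sketch of what a multi-property rule inside such a service might look like; the threshold logic and property names are invented for the example, not part of the original benchmark code:

// Hedged sketch: a rule that needs two properties at once. PropertySetAsInfotable
// is the service's INFOTABLE input; the rising-values rule is purely illustrative.
function findValue(propertyName) {
    var searchprop = new Object();
    searchprop.name = propertyName;
    var row = PropertySetAsInfotable.Find(searchprop);
    return row ? row.value.value : null;
}
var number1 = findValue("number1");
var number2 = findValue("number2");
// Compare the fresh values against the cached previous values, with no database read
if (number1 !== null && number2 !== null) {
    if (number1 > me.number1Last && number2 > me.number2Last) {
        logger.info("Both values rising; firing illustrative event");
        // An event trigger or alert would go here in a real application
    }
}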
Joint Study by Cognizant and the Economist Intelligence Unit (EIU) - Making products smart can deliver game-changing innovation, enriched customer experiences and new, across-the-board levels of efficiency. From R&D and manufacturing, through distribution and after-sales support, product data is changing how products are built, sold and cared for. Our latest research reveals practical steps business leaders can take to benefit from this quickly intensifying and accelerating trend.
One of the signature features of the Axeda Platform is our alarm notification, signalling and auditing capabilities. Our dashboard offers a simplified view into assets that are in an alarm state, and provides interaction between devices and operators. For some customers the dashboard may be too extensive for their application needs. The Axeda Platform, from version 6.6 onward, provides a number of ways of interacting with Alarms that allow you to present this data to remote clients (Android, iOS, etc.) or to build extended business logic around alarm processing.

If one were to create a remote management application for Android, for example, the REST APIs are available to interact with Assets and Alarms. For aggregate operations where network traffic and round-trip time can be a concern, we also have our Scripto API, which allows you to use the Custom Object functionality to deliver information on many different aggregating criteria, letting developers get the data needed to build the applications that solve their business requirements.

Shown below is a REST API call you might make to retrieve all alarms between a certain time and date.

POST: https://INSTANCENAME/services/v2/rest/alarm/find

<v2:AlarmCriteria xmlns:v2="http://www.axeda.com/services/v2" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <v2:date xsi:type="v2:BetweenDateQuery">
    <v2:start>2015-01-01T00:00:00.000Z</v2:start>
    <v2:end>2015-01-31T23:59:59.000Z</v2:end>
  </v2:date>
  <v2:states/>
</v2:AlarmCriteria>

In a custom object, this would look like the following:

import static com.axeda.sdk.v2.dsl.Bridges.*
import com.axeda.services.v2.*
import com.axeda.sdk.v2.exception.*

def q = new com.axeda.services.v2.BetweenDateQuery()
q.start = new Date()
q.end = new Date()
ac = new AlarmCriteria(date:q)
aresults = alarmBridge.find(ac)

Using the same API endpoint, here's how you would retrieve data by severity:

<v2:AlarmCriteria xmlns:v2="http://www.axeda.com/services/v2" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <v2:severity xsi:type="v2:GreaterThanEqualToNumericQuery">
    <v2:value>900</v2:value>
  </v2:severity>
  <v2:states/>
</v2:AlarmCriteria>

Or in a custom object:

import static com.axeda.sdk.v2.dsl.Bridges.*
import com.axeda.services.v2.*
import com.axeda.sdk.v2.exception.*

def q = new com.axeda.services.v2.GreaterThanEqualToNumericQuery()
q.value = 900
ac = new AlarmCriteria(severity:q)
aresults = alarmBridge.find(ac)

Currently the query types do not map properly in JSON objects - use XML to perform these types of queries via the REST APIs.

References:
Axeda v2 API/Services Developer's Reference Guide 6.6
Axeda Platform Web Services Developer Reference v2 REST 6.6
Axeda v2 API/Services Developer's Reference Guide 6.8
Axeda Platform Web Services Developer Reference v2 REST 6.8
Background: Firewall-Friendly Agents can be configured for server certificate authentication in the Axeda Builder project or via the Axeda Deployment Utility. When server certificate authentication is configured, the Agent will compare the certificate chain sent by the Platform to a local copy of the CA certificate chain stored in the SSLCACert.pem file in the Agent’s home directory.

The certificate validation checks three things:
- Does the name of the Platform certificate match the name in the request?
- Does the CA certificate match the CA certificate that signed the Platform certificate?
- Is the Platform or CA certificate unexpired?

If the answer to any of these questions is “no”, the connection is refused and the Agent does not communicate further with the Platform. To determine if certificate trouble is an issue, see the Agent log: EKernel.log or xGate.log.

Recommendation: For Agent-Platform communications, we recommend always using SSL/HTTPS. If the Agent is not configured to validate the server certificate (via the trusted CA certificate), the system is vulnerable to a number of security attacks, including “man in the middle” attacks. This is critically important from a security perspective.

Note: For on-premise customers, if the Platform certificate needs to change, always update the SSLCACert.pem file on all Agents before updating the Platform certificate. (If the certificate is changed on the Platform before it is changed on the Agents, communications from the Agent will stop.)

Note: Axeda ODC automatically notifies on-demand customers about any certificate updates and renewals. At this point, though, Axeda ODC certificate updates are not scheduled for several years.

Finally, it is recommended that your Axeda Builder project always specify “Validate Server Certificate” and set the encryption level to the strongest level supported by the Web server. Axeda recommends 168-bit encryption, which will use one of the following encryption ciphers: AES256-SHA or DES-CBC3-SHA.

Need more information? For information about configuring and managing Agent certificate authentication, see the Using SSL with Axeda® Platform Guide.
Background: Axeda Agents can be configured with standard drivers to collect event-driven data, which is then sent to the Platform. Axeda provides many standard event-driven data (EDD) drivers for use with the Axeda Agent (as explained in the Axeda® Agents EDD Toolkit Reference (PDF)). All EDD drivers are configured by an XML file and enabled in Axeda Builder, through the Agent Data Items configuration. You can configure an EDD driver to send important information from your process to the Agent, including data items, events and alarms. The manner in which you configure your drivers will affect the ability of your project to operate efficiently.

Recommendation: Use drivers to reduce the amount of data sent to the Platform. Instead of sending data items to the Platform, which then generates an event or alarm, it is possible to use the drivers to scan for specific data points or conditions and send an event or alarm directly. Before you can configure your Agents, you first need to determine how often your Agent needs to send data to the Enterprise server.

Two example workflows and recommendations:
- If you want to monitor a data item every second or two, configure the Agent to do the monitoring.
- If you want to trend information once per day, perform that logic at the Enterprise server.

These examples may address your actual use case, or your needs may fall somewhere in between. Ultimately, the time scale (how often you want to monitor or trend data) and the resulting data volume should drive how your system handles data. More data is available at the Agent, and at a higher frequency, than is needed at the Platform. Processing at the Agent ensures that only the important results are communicated to the Platform, leading to a “cleaner” experience for the Platform. Using this guidance as a best practice will help reduce network traffic for your customers as well as ensure the best experience for Enterprise users using server data in their dashboards, reports, and custom applications.

Need more information? For information about the standard EDD drivers, see the Axeda® Agents: EDD Drivers Reference (PDF).
Background: The frequency with which an Agent checks its connection to the Axeda Cloud Server is called the Agent “ping rate” (also known as heartbeat). (For Axeda IDM Agents, the ping rate is referred to as “poll rate”; the meaning is the same.)

Pings are a very important aspect of Firewall-Friendly communication. All communication between the Agent and the Cloud Server is initiated by the Agent. In addition to indicating that the Agent is still active, the ping also gives the Cloud Server an opportunity to send commands back to the Agent on the ping acknowledgement. The ping rate effectively defines how long users must wait before they can deliver a command or request to an Agent. Typical commands may include setting a data item, starting an Access Session, or running a script.

The place where ping rate is most noticeable to system users is when requesting a remote session. When a session request has been submitted by the user, the Cloud Server waits for the next Agent ping in order to send down the command to begin the session process. A longer ping rate means the remote session takes longer to get started. (Note that the same is true of any command initiated from the Axeda Cloud Server.)

Ping traffic comprises the majority of inbound traffic to the Cloud Server. The higher the ping rate, the more resources are consumed on the Server and the greater the network bandwidth requirements for the customer. Unnecessarily high ping rates will result in an increase in network traffic on your customer's network.

By default, the ping rate for Firewall-Friendly Agents is 60 seconds, or every 1 minute. The Agent ping rate is set using Axeda Builder when configuring the project. The ping rate can also be set via an action from the Axeda Cloud Server. When set via an action, the new ping rate is in effect until the next Agent restart (at which time the Agent will go back to the default ping rate set in the project).

The Axeda Cloud Server also uses the Agent ping rate to determine when assets are missing. One of the model settings defines how many missed pings (or missed pings and time) will cause a device to be marked as missing. The default setting for new models is to mark assets as missing after they’ve missed 3 consecutive pings.

Recommendations: Make sure that your Agents’ ping rates are set to reasonable frequencies. The ping rate should be set based on use case and not necessarily volume. The recommended practice is to make sure the ping rate is never set to less than 60 seconds. Where possible, the ping rate should be set to 2 minutes or higher. In the end, it is often user expectations around starting Access sessions that drive the ping rate value. If only occasional user access is required, one recommendation is to dynamically adjust the ping rate when conditions require expedited communication with the Cloud Server. One use case is to expedite a remote session when a device is in an alarm condition or when an end user needs assistance; in this case you would temporarily increase the ping rate. This can be done using an action from the Cloud Server, by downloading a software package ping rate update, or by Agent extension using the SDK. (For information about using the Agent SDK, see the Axeda® Platform Extending Axeda® Agents PDF.)

You can configure alerts to indicate if an asset is missing. Axeda recommends that you configure the alert to a reasonable time given your resources and the expense of tracking every missing asset.
A reasonable missing-asset alert for your organization may be 1-2 days, meaning the Server generates the missing asset alert only after the asset has been missing for one or two days, based on its ping rate, and an asset should be marked as missing only after 15 missed pings or 30 minutes (whichever is less). The most common cause of a missing asset is not an issue with the device but rather the loss of Internet connectivity.

Note: Any communication from an Agent also serves the function of a ping. For example, if the ping rate is set to 30 minutes and the device is sending a data value every 5 minutes, the effective ping rate is 5 minutes.

Need more information? For information about specifying the Agent ping rate, see the online help in Axeda® Builder (Enterprise Server Settings). If setting the ping rate from Platform actions or verifying Agent ping rates, see the online help of the Axeda® Connected Management Applications.
The RabbitMQ Management plugin provides a web-based interface into the inner workings of the messaging bus behind ThingWorx Flow. It is installed by the Flow installers, but it is an HTTP service by default and a totally different web server than the NGINX used to front-end ThingWorx Flow. This article describes how to integrate it into the NGINX on your ThingWorx Flow server. This is necessitated by some recent browser behavior changes that make it very hard to get to the HTTP port once you've used an HTTPS service on the same machine from the same browser.

First - let's find the user name and password for the RabbitMQ Management plugin. On a Linux server, the file /etc/rabbitmq/definitions.json holds the name and password for the plugin's UI:

"users": [{
        "name": "flowuser",
        "password": "1780edc6b8628ace2ace72465cdc7b048c88",
        "tags": "administrator"
}],

On a Windows server, the definitions.json file can be found under [flow install location]\modules\RabbitMQ.

Of course, access to these directories should be limited.

Second - let's integrate the plugin into NGINX. The best way to integrate the plugin into Flow is to let NGINX reverse proxy to the other HTTP server running the UI for the plugin, which is exactly what happens for ThingWorx itself. That way, only NGINX has to be configured for HTTPS and no other ports need to be opened to allow access to the plugin.

You need to find the file vhost-flow.conf on your system. On Linux, this will be /etc/nginx/conf.d/vhost-flow.conf. On Windows, it will be at C:\Program Files\nginx-[version]\conf\conf.d\vhost-flow.conf by default. Add the following fragment after the last location xxx {…} segment in the file:

    # deal with the RabbitMQ admin tool
    location ~* /rabbitmq/api/(.*?)/(.*) {
        proxy_pass http://127.0.0.1:15672/api/$1/%2F/$2?$query_string;
        proxy_buffering                    off;
        proxy_set_header Host              $http_host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location /rabbitmq {
        rewrite ^/rabbitmq$ /rabbitmq/ permanent;
    }

    location ~* /rabbitmq/(.*) {
        rewrite ^/rabbitmq/(.*)$ /$1 break;
        proxy_pass http://127.0.0.1:15672;
        proxy_buffering                    off;
        proxy_set_header Host              $http_host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

This makes a request for /rabbitmq get pushed over to the web server at port 15672 on the Flow server.

Test the updated config file with (nginx may not exist in your normal path):
nginx -t

Restart the NGINX service:
Linux (one of these will work depending upon your Linux version):
systemctl restart nginx
service nginx restart
Windows:
Net stop ThingWorxOrchestrationNginx
Net start ThingWorxOrchestrationNginx
-or- use the Services app to restart the service

Thanks to https://groups.google.com/forum/#!topic/rabbitmq-users/l_IxtiXeZC8 for the needed config changes.

You can now use https://yourserver/rabbitmq to get to the login page for the management plugin. Log in with the user and password from the definitions.json file on your system, and you can now monitor the behavior of your RabbitMQ environment.
Background: In very rare situations, it is possible that the Firewall-Friendly Agent process may stop running. If the Agent is not running, no machine monitoring or communication with the Cloud Server is possible.

Recommendation: WatchDog is a little-known yet very helpful feature available with Firewall-Friendly Agents. This program lets you monitor whether an Agent is running; if the Agent process fails, WatchDog can bring it back up! WatchDog can also be configured to watch other processes.

You can configure WatchDog to run as a service (for Windows) or daemon (for Linux). The WatchDog configuration file specifies the process(es) to be monitored and what to do when one exits. The options are to attempt to restart the process or to restart the system.

Note: WatchDog detects only if a watched process exits. It will not detect or report on processes that may be “hanging”.

Need more information? For information about configuring and using WatchDog, see the Agent User’s Guide for your Agent: either Axeda® Platform Axeda® Gateway User’s Guide (PDF) or Axeda® Platform Axeda® Connector User’s Guide (PDF).
Having trouble remembering how to get into Flow? How about making /Flow the URL?

Since the Flow environment uses NGINX to front-end the various components that make up Flow, there is a very sophisticated set of rewrites and proxy_pass directives in the NGINX configuration. All you have to do is add another 'location' fragment to the vhost-flow.conf file that will push /Flow over to /Thingworx/Composer/apps/flow:

    location /Flow {
      rewrite ^/Flow$ $proxy_scheme://$server_host/Thingworx/Composer/apps/flow permanent;
    }

On Linux, the file should be at /etc/nginx/conf.d/vhost-flow.conf

On Windows, the file should be at c:\Program Files\nginx-[version]\conf\conf.d\vhost-flow.conf

Test the updated config file with (nginx may not exist in your normal path):
nginx -t

Restart the NGINX service:
Linux (one of these will work depending upon your Linux version):
systemctl restart nginx
service nginx restart
Windows:
Net stop ThingWorxOrchestrationNginx
Net start ThingWorxOrchestrationNginx
-or- use the Services app to restart the service

From this point forward, https://yourserver/Flow will take you to ThingWorx Flow's home page.
Background: Customer machines create files containing data, configuration, log data, etc. These files may contain critical information for service technicians about the asset and its data. How can you make sure you get this important information to the Cloud as soon as it’s available? Use File Watcher, a standard tool available with Firewall-Friendly Agents. You can configure File Watcher to monitor a directory or file specification and automatically upload the file(s) to the Axeda Cloud. By monitoring selected directories and files on the device, the File Watcher determines which files have changed or appeared and then uploads those files to the Axeda Cloud Server.

Sometimes the size of the file being written or copied into the file watch target is large enough that it takes time to complete. Without taking the right precautions, the file upload may try to start before the file creation process is completed.

Recommendations: To ensure that large files don’t cause problems, we recommend using a “move” or “rename” operation instead of a file copy operation to place your file into a watched directory. Unlike a “copy” operation, atomic operations like move or rename ensure that the Agent doesn’t detect a file in the midst of growing and prematurely begin to upload it.

In situations when it isn’t possible to use the atomic “move” or “rename” operations, you can implement a delay. Setting a delay in the File Watcher configuration will prevent the Agent from sending files to the Axeda Cloud Server before those files have transferred completely to the watched file directory.

Finally, use file compression wisely. You need to balance the benefits of compressing files before sending with the potential adverse impact that compression will have on the Agent. Compressing very large files BEFORE sending to the Platform will affect the Agent; however, compressing smaller files can be beneficial. If the Agent’s computer is of lower power, the CPU may have to work overtime to compress huge files or to send huge uncompressed files. As a general rule, compressing files before sending is recommended. However, depending on your Agent and network setup, it may prove more beneficial to stream uncompressed files rather than compress first and then send.

Need more information? For information about configuring and using file watchers, see the Axeda® Builder User’s Guide (PDF) or the online help in Axeda Builder.
OpenDJ is a directory server which is also the base for WindchillDS. It can be used for centralized user management, and ThingWorx can be configured to log in with users from this directory service.

Before we start

Prerequisites:
- Docker on Ubuntu
- JKS keystore with a valid certificate
- The JKS keystore is stored in /docker/certificates on the machine that runs the Docker environments
- The certificate is generated with a Subject Alternative Name (SAN) extension for the hostname, fully qualified hostname and IP address of the opendj (Docker) server
- Adjust the passwords, machine names, etc. to your own values when following the instructions
- If possible, use a more secure password than "Password123456"... the one I use is really bad

Related Links
https://hub.docker.com/r/openidentityplatform/opendj/
https://backstage.forgerock.com/docs/opendj/2.6/admin-guide/#chap-change-certs
https://backstage.forgerock.com/knowledge/kb/article/a43576900

Configuration

Generate the PKCS12 certificate

Assume this is our working directory on the Docker machine (with the JKS certificate in it):

cd /docker/certificates

Create a .pin file containing the keystore password:

echo "Password123456" > keystore.pin

Convert the existing JKS keystore into a new PKCS12 keystore:

keytool -importkeystore -srcalias muc-twx-docker -destalias server-cert -srckeystore muc-twx-docker.jks -srcstoretype JKS -srcstorepass `cat keystore.pin` -destkeystore keystore -deststoretype PKCS12 -deststorepass `cat keystore.pin` -destkeypass `cat keystore.pin`

Export the keystore and import it into the truststore:

keytool -export -alias server-cert -keystore keystore -storepass `cat keystore.pin` -file server-cert.crt
keytool -import -alias server-cert -keystore truststore -storepass `cat keystore.pin` -file server-cert.crt

Docker Image & Container

Download and run:

sudo docker pull openidentityplatform/opendj
sudo docker run -d --name opendj --restart=always -p 389:1389 -p 636:1636 -p 4444:4444 -e BASE_DN=o=opendj -e ROOT_USER_DN=cn=Manager -e ROOT_PASSWORD=Password123456 -e SECRET_VOLUME=/var/secrets/opendj -v /docker/certificates:/var/secrets/opendj:ro openidentityplatform/opendj

After building the container, it MUST be restarted immediately in order to recognize the new certificates:

sudo docker restart opendj

Verify that the certificate is the correct one; execute on the machine that runs the Docker environments:

openssl s_client -connect localhost:636 -showcerts

Load the .ldif

Use e.g. JXplorer and connect. Select the opendj node, then LDIF > Import File (my demo breakingbad.ldif is attached to this post). Skip any warnings and messages and continue to import the file.

ThingWorx Tomcat

If ThingWorx runs in Docker as well, a pre-defined keystore could be copied during image creation. Otherwise connect to the container via the command line:

sudo docker exec -it <ThingworxImageName> /bin/sh

Tomcat configuration:

cd /usr/local/openjdk-8/jre/lib/security
openssl s_client -connect 10.164.132.9:636 -showcerts

Copy the certificate between BEGIN CERTIFICATE and END CERTIFICATE of the above output into opendj.pem, e.g.
echo "<cert_goes_here>" > opendj.pem Import the certificate keytool -keystore cacerts -import -alias opendj -file opendj.pem -storepass changeit   ThingWorx Composer As the IP address is used (the hostname is not mapped in Docker container) the certificate must have a SAN containing the IP address     Only works with the TWLDAPExample Directory Service not the ADDS1, because ADDS1 uses hard coded Active Directory queries and structures and therefore does not work with OpenDJ. User ID (cn) must be pre-created in ThingWorx, so the user can login. There is no automatic user creation by the Directory Service. Make sure the Thing is Enabled under General Information   Appendix LDAP Structure for breakingbad.ldif cn=Manager / Password123456 All users with password Password123456    