IoT Tips

The Axeda Platform has a mature data model that is important to understand when planning to build applications. This tip introduces the existing objects and how they relate to each other.

Axeda Agents can communicate the following objects:

Model – the definition of a type of asset. The model consists of a set of dataitems (its inputs and outputs) and alarms. The platform applies logic to a model, so as the number of assets grows, the system remains scalable in terms of management.
Asset – sometimes called a Device. An asset has an identifier called a Serial Number, which must be unique within its model. Agents report information in terms of the asset, and logic is applied to data and events about that asset.
Dataitem – a named reading, such as a sensor or computed value. Dataitems are timestamped values in a sequence: for example, hourly temperatures, odometer readings, or daily usage statistics. The number of named dataitems is unlimited. Dataitems can be written as well as read, so a value can be sent to an "output". A dataitem can be a Digital (boolean), an Analog (real value), or a String.
Mobile Location – a lat/long pair, typically read from GPS. This is used to map assets as they move.
Alarm – has a name, severity, description, active flag, timestamp, and an optional embedded dataitem and value. Alarms sent from an agent may result from logic that detects a condition, or from traps, error codes, etc. An alarm indicates that something is wrong.
File – arbitrary files can be uploaded from an agent. Files are sent with a Hint string, metadata that allows rules to process the file based on something other than the file extension. Files are often uploaded when an alarm has been raised, or on demand from a user or rule.

Axeda agents have flexible ways to send and receive this information based on time, data changes, user request, etc. The Adaptive Machine Messaging Protocol (AMMP) allows anyone to build an agent that interacts with the Axeda Platform using the same data model.

Axeda Platform is asset-centric. An asset is an instance of a Model, and each asset is identified by its Model and Serial Number pair. Associated with an asset are:

Organization – typically the customer.
Location – the home of the asset (a street address). A location is in a Region.
Contacts – people who have a relationship to the asset. Contacts have a role, such as Owner or Service Agent.
Asset Groups – assets are members of groups, and groups can be used to grant privileges, for navigation, or to apply commands.
Properties – additional named attributes of an asset. Properties do not have a time-series history like a dataitem. The value of a property may be used to dynamically group assets.
Condition – the current condition may be good, warning, error, or needs maintenance, based for example on the existence of alarms.

And, of course, dataitems, alarms, and files. Information is processed and organized in the context of an asset, but the processing is managed for models. The only scalable way to manage a large number of assets is to apply rules by kind of asset, not to individual assets.

Rules apply logic to data as it happens. When a new dataitem is reported, a rule may check it against a threshold. When an alarm is created, a rule may create a trouble ticket or notify a user. All types of rules in the platform – Expression Rules, State Machines, and Threshold Rules – are event based. Rules apply to models, or sometimes to all assets (such as a standard way of sending notifications on Alarms). The only exceptions are rules that apply to user logins and rules on a system timer.
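To make the rule style concrete, here is a minimal threshold check written in the If/Then notation used by Axeda Expression Rules (a sketch only: the dataitem name Temperature, the threshold, and the alarm name are illustrative, and the exact action functions available depend on your Axeda release):

If:   Temperature.changed && Temperature.value > 100
Then: CreateAlarm("OverTemp", 500)

When the Temperature dataitem reports a value above 100, the rule raises an alarm named OverTemp with severity 500; from there, another rule could open a trouble ticket or notify a user, as described above.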
Software packages are another entity in the Platform. Packages are used to distribute files, software, patches, etc., and to script some commands around their delivery. Packages often upgrade or patch software, or load a new option or help file. Packages are defined for a model, and deployment may be automatic or manual, to one asset or to many.

User logins are members of user groups. User groups have both privileges (what they can do) and visibility (which assets they can see). User group visibility allows the group to access assets in an Asset Group, a Model, or a Region.

How do solutions take advantage of this? Dataitems can be configured to store no data, the current value, or history. History is needed if you want to see, say, the temperature plot over the last day. Often, the current value is all that is needed to process rules and see the state of an asset. The option not to store a dataitem makes sense if the dataitem is only used to run a rule, or if it will just be sent to another application. An agent can send a dataitem string to the server, and the server puts the string on the Message Queue to deliver to another application; in this pass-through mode, the dataitem doesn't need to be stored at all. A similar situation arises when a string dataitem is parsed by a rule calling a Groovy script. The script can parse the string (which may be XML or part of a log file) and use the SDK to take some action.

Alarms are almost always used to notify people that they should do something. Alarms in Axeda have a lifecycle that corresponds to how people interact with them. An alarm begins its life when it is created. From that point, the alarm can be:

Acknowledged – someone has seen it.
Escalated – the alarm condition hasn't been fixed for some time.
Closed – the end of the alarm's life.
Suppressed – the alarm is logged in the history, but users don't see it. Set an alarm to suppressed when it's just an annoyance and doesn't require any action.
Disabled – occurrences of this alarm are thrown away. Rules don't even see them.

The Suppressed and Disabled modes are applied to an alarm of a given name, because they affect all future alarms by that name.

Files are uploaded for a few reasons. Log files are typically uploaded so a service tech can diagnose a problem. Data files can be uploaded so a script or external system can process the file and take appropriate action; this can be another way of sending information that doesn't fit in a dataitem.

The configuration of an asset – both hardware and software – is called Inventory. The inventory of assets is important in diagnostics, planning spare parts, knowing which patch to apply, and much more.

Extended Objects are attributes that can be added to the objects described here, or complete objects that live on their own. Your application can read and write these objects or attributes, and query them. The use is up to you.

Resources: You can find more information on the architecture of the platform in the Introduction to the Axeda Platform. The Platform SDK and Web Services expose most of these objects for configuration as well as runtime. That means an application can provision models and assets, create rules and apply them to models, then monitor the behavior of assets, all through Web Services.
While working with the Axeda Platform you will come across guard rails that limit the size, recurrence, and duration of certain actions. When you run into these limitations, it may be an opportunity to re-examine the architecture of your solution and improve efficiency.

What this tutorial covers

This tutorial discusses the kinds of limits that exist across the Platform; however, it does not include the exact values of the limits, as these may vary across instances. Skip to the last section, System Configuration, to see how to determine the read-only properties of your Axeda instance. You can also contact your Axeda Support technician to find out more about how these properties are configured.

Types of limits discussed:

Rule Sniper
Domain Object Field Length Constraints
File Store Limits
System Configuration

Avoiding Rule Sniper Issues

There are two ways a rule can be sniped based on statistics (recursive rules are handled differently): frequency count and execution time. When a rule is killed, an email will be sent explaining the statistics behind the event. Here is what those numbers actually mean:

CurrentAverageExecTime = loadExecTime / frequencyCount. This determines which rule is sniped: the longest-running rule on average, NOT the one that runs the most per time period.
FrequencyCount = how many times this rule ran in this period.
TotalExecTime = the total time this rule has executed (for the rule in general, not just this period).
MaxExecTime = the longest time this rule has ever taken to run.
ExecCount = the number of times this rule has ever run.
MaxFrequencyCount = the maximum number of times this rule has ever run in a period.

The Rule Sniper monitors all the rules as a unit. When the entire system is beyond the "load point", it chooses the heaviest-hitting rule and kills it. Some definitions (a sketch of the related DRMConfig.properties settings appears at the end of this tip):

Execution count - how many times the rule has run since it was last enabled.
Maximum execution time - the maximum time a rule can run. This is controlled by the following setting in your DRMConfig.properties: com.axeda.drm.rules.statistics.rule-time-threshold
Total execution time - the time that the rule actually ran.
Frequency count - how many times the same expression rule runs in a set period of time. The period of time is set in DRMConfig.properties by: com.axeda.drm.rules.statistics.rule-frequency-period
Maximum frequency count - the maximum number of times the expression can run in that period.

Recursive expression rules

Rules can be triggered by actions such as file uploads, device registration, and data item changes. A scenario may occur in which an Expression Rule initiates a Then or Else action that triggers itself, such as a Data type Expression Rule setting a data item. This scenario has led to the existence of the Rule Sniper, which disables Expression Rules that are triggered several times in quick succession. At times, an Expression Rule may be sniped simply for being triggered too many times in too short a period, even though the rule was not recursive.

Setting a Data Item from a Data type Rule

In one scenario, one data item comes in, say Temperature, and you need to set a different data item, Climate, based on the value of Temperature. Without any checking, a Data type Rule that sets a data item value will trigger itself, leading to a recursive rule execution that will be shut down by the Rule Sniper.
A way to do this without the rule being sniped is to check in the If expression that the data item change triggering the rule is the one we are interested in, as opposed to the data item that is changed because the rule set it:

If:   Temperature.changed && Temperature.value > 75
Then: SetDataItem("Climate", "Hot")

Since it was Climate that changed as a result of the Then statement, the rule will not be triggered again.

***Update: In an ironic twist of fate, it turns out that the solution above only works for data items that are set to be stored On Change, rather than for Stored data items. Stored data items are updated whenever a new value is entered, even if it is the same value. In this case, Temperature.changed would not trigger because the value would be the same; only the timestamp would be different. This matters if the same value could occur twice consecutively and you need the rule to trigger both times, but not on any other data item. The correct solution is the following:

If:   (!Temperature.changed || Temperature.changed) && Temperature.value > 75
Then: SetDataItem("Climate", "Hot")

Admittedly inelegant, this works because if any other data item is passed in, Temperature will not be passed in, so there will be no value for Temperature.changed. If Temperature is passed in, it will trigger either one of the cases (not changed if the value is the same, changed if it isn't).

An alternate solution is to make use of the consecutive property of the Expression Rule. "Execute action each time rule evaluates to true" corresponds to the consecutive property, which determines whether the rule will fire every time the If expression evaluates to true. If the consecutive property is true, it will fire every time. If it is false, the rule will trigger once when the If expression evaluates to true, and then it won't be triggered again until the If expression evaluates to false and then to true again. With the consecutive property set to true, in our scenario above, whenever Temperature changes and is over 75, the rule will set Climate to Hot. With consecutive set to false, the rule will set Climate to Hot once, and then Temperature will have to fall below 75 and rise above 75 again to trigger the rule again.

Recurring Actions

Sometimes you may need a recurring action to take place. An example would be if you don't need to evaluate a temperature in real time as it changes, but can check its status periodically. If the recurrence either requires or can tolerate a set delay, the best practice is to use a Rule Timer. A Rule Timer allows you to execute an Expression Rule on a schedule, much like a cron job; in fact, the Rule Timer syntax is expressed in crontab format.

In order to use a Rule Timer, create an Expression Rule of type System Timer or Asset Timer. The Asset Timer allows you to scope the rule to a certain set of assets like other rules, while a System Timer is not scoped to assets. This makes a System Timer more appropriate for a rule that executes a Custom Object, as opposed to one that creates an alarm directly on an asset. Then create the Timer itself, which lets you set the schedule: navigate to Configuration > New > Rule Timer. With a Rule Timer, you can set a rule to run automatically with a preset delay and avoid the recurrence limit on the rule.
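Since the Rule Timer schedule is expressed in crontab format, a few illustrative schedules might look like the following (a sketch; confirm the exact supported syntax against your Axeda release):

*/15 * * * *     run the rule every 15 minutes
0 * * * *        run the rule at the top of every hour
0 6 * * 1        run the rule at 06:00 every Monday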
For more information on the Rule Sniper, a Salesforce Knowledgebase Solution article is available to Axeda customers, called "What are the Rule Sniper and Rule Executor Monitor Features For and How Do They Work?", as well as the Rules Design and Best Practices Guide.

Domain Object Field Length Constraints

Every stored object has limits on the length of its fields, such as name and value. If a script attempts to store a value for a field that exceeds the field length constraints, the value will be truncated to the maximum limit. The maximum size of a data item value in the database is 4000 bytes. Two additional constraints are a limit on the number of lines in a custom object (typically 1000 lines) and on the size of a stored data accumulation that can be read out as a string (1 MB). The Help documentation available through the Axeda Applications Console contains information regarding field constraints (such as the Help pages on String Length Constraints at http://<<yourdomain>>.axeda.com/help/en/rule_action_data_entry_string.htm ).

Limits on File Store

Configurable quota limits exist on files that can be uploaded to the Axeda File Store via the SDK v2 FileInfoBridge. If these limits are exceeded, they will prevent creating FileUploadSessions, creating or updating FileInfos, or uploading file data:

File count: the maximum number of files that can be stored on the system
Maximum file size: the maximum size of any one file
Total stored bytes: the total bytes for all files that may be stored on the system

The configuration of these limits can be found on your system by navigating to Administration > System Configuration, as described below, and searching for "file" in the Read-Only Properties.

System Configuration

The System Configuration link under the Administration tab is a useful reference for viewing the Read-Only properties of how your instance is configured. Check here when troubleshooting to determine any limit that may influence your app's implementation.

Common Question

An expression rule has a Data trigger, and in its Then statement it sets a data item. Why is it getting disabled?

Answer: The rule is being recursively triggered, so the Rule Sniper is disabling it.
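As promised above, here is a sketch of how the two Rule Sniper settings named earlier might appear in DRMConfig.properties. The property names come from this tip; the values and their units are illustrative assumptions, not shipped defaults:

# Threshold on rule execution time used by the Rule Sniper (example value, assumed unit)
com.axeda.drm.rules.statistics.rule-time-threshold=10000
# Length of the window used when counting rule executions for the frequency check (example value, assumed unit)
com.axeda.drm.rules.statistics.rule-frequency-period=60000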
Requirements: Axeda 6.1.6+

The Axeda Applications User Interface can be extended to accommodate varying degrees of customization. This ability to customize the base product enables repurposing the Axeda Applications User Interface to serve a specific audience.

What this tutorial covers

This tutorial discusses three ways to extend the Axeda Applications User Interface, which can be achieved via the following features:

Customizing the Look and Feel - use your own custom stylesheet to replace the default page styles, even on a per-user basis
Extended UI Modules - insert your own Extended UI Module into the Service > Asset Dashboard
Custom Tab - create a custom tab that loads content from a custom M2M application

Customizing the Look and Feel of the Axeda Applications User Interface

You can add style changes to a user.css file, which you then upload like any other custom application, via the Administration > Extensions tab, as a zip archive. Make sure to adhere to the expected directory structure and follow the naming convention for the zip archive:

Images - store image files in a directory called <userName>/images
Styles - store user.css and any style sheet(s) that it imports in a directory called <userName>/styles
Documentation - store documentation files in a directory called <userName>/doc

The naming convention is to name the archive after the username of the user who should see the changes; i.e., if jsmith is the username, then jsmith.zip is the archive name. For step-by-step instructions for customizing the UI, Axeda customers may refer to http://<<yourdomain>>.axeda.com/help/en/stylesheets_for_user_branding.htm and http://<<yourdomain>>.axeda.com/help/en/upload_user_branding.htm .

Extended UI Modules

Extended UI Modules can be added to the Asset Dashboard to provide custom content alongside the default modules. The modules can contain the output of a custom object or a custom application, all within the context of the particular asset being viewed.

Create the Extended UI Content

Option 1: an Extended UI Type Custom Object

Navigate to Configuration > New > Custom Object. This Custom Object should output HTML with any Javascript and/or CSS styling embedded inline. Parameters may be defined here and made available to the script as "parameters.label".

Example:

def iframehtml = """<html>
  <head>
    <script type='text/javascript' src='https://www.google.com/jsapi'></script>
    <script type='text/javascript'>
      google.load('visualization', '1', {packages:['gauge']});
      google.setOnLoadCallback(drawChart);
      function drawChart() {
        var data = new google.visualization.DataTable();
        data.addColumn('string', 'Label');
        data.addColumn('number', 'Value');
        data.addRows([
          ['$parameters.label', $parameters.value]
        ]);
        var options = {
          redFrom: 90, redTo: 100,
          yellowFrom: 75, yellowTo: 90,
          minorTicks: 5
        };
        var chart = new google.visualization.Gauge(document.getElementById('chart_div'));
        chart.draw(data, options);
      }
    </script>
  </head>
  <body style="background: white;">
    <div id='chart_div'></div>
  </body>
</html>
"""
return ['Content-Type': 'text/html', 'Content': iframehtml.toString()]

Option 2: a Custom Application

Create a zip file that contains an index html file at the root of the directory, plus any stylesheets, scripts, and images you prefer, and upload the zip as a Custom Application (see the example zip file included at the end of this article).
Navigate to Administration > Extensions. Enter the information for the zip file and upload.

Create the Extended UI Object

Option 1: Using the Axeda Applications Console

Navigate to Configuration > New > Extended UI Module. Note that the parameters are entered in URI format:

myvalue=mykey&othervalue=otherkey

If Content Source is set to Custom Application rather than Custom Object, the Custom Applications become available as the Extended UI Module content.

Option 2: Use Axeda Artisan

Check out Developing with Axeda Artisan in order to make use of this method. Add the Extended UI Module to the apc-metadata.xml and it will be created for you automatically on Maven upload. Note that Artisan does not support Model Preferences, so you will still have to add the module through the Axeda UI as described below.

<extendedUIModule>
    <!-- you can create the module here, but you still have to use the Axeda Console to apply it to the model where the module should show up -->
    <title>extendedUI_name</title>
    <height>180</height>
    <source>
        <type>CUSTOM_APPLICATION</type>
        <name>customapp_name</name>
    </source>
</extendedUIModule>

Add the Extended UI Module to the Model Preferences

Navigate to Configuration > View > Model and click Preferences under UI Configuration next to the model that should display the Extended UI Module for its assets. Click Asset Dashboard Layout. Select the Extended UI Module from the left and click the arrow to add it to the desired column; the asterisks indicate Extended UI Modules, as opposed to default modules. Click Submit and navigate to an Asset Dashboard to see the module displayed. Now you have an Extended UI Module with your custom content.

Custom Tabs

Upload a custom application as a custom tab, and there you have it. For Artisan developers, to enable a custom application as a custom tab, insert the following into the apc-metadata.xml:

<application>
    <description>string</description>
    <applicationId>string</applicationId>
    <indexFile>string</indexFile>
    <zipFile>relative path from apc-metadata.xml to the zip file containing the application files</zipFile>
    <customTab>
        <tabPrivilegeName>the privilege name required for the tab to be shown</tabPrivilegeName>
        <afterTab>the name of the tab after which to place this tab</afterTab>
        <showFooter>[true|false]</showFooter>
        <tabNames>
            <label>
                <locale>the i18n locale (for example en_US or ja_JP)</locale>
                <name>the name to be displayed for the locale</name>
            </label>
        </tabNames>
    </customTab>
</application>

Authentication within Extended UI Components

When working with Custom Applications in custom tabs or modules, the user session ID is made available through a special variable that you can access from the landing page (such as index.html) only:

%%SESSIONID_TOKEN%%

This variable is substituted directly with the session ID, which makes authentication for viewing the Extended UI component appear seamless to the end user. In order to make this ID available for AJAX calls, the index.html file should store the session ID as it initializes. Additionally, index.html should instruct the browser not to cache the page, or the session ID may mistakenly be used for authentication after it expires.
In index.html:

<html>
    <head>
        <title>My Custom App</title>
        <META HTTP-EQUIV="CACHE-CONTROL" CONTENT="NO-CACHE">
        <link media="screen" href="styles/axeda.css" rel="stylesheet" type="text/css"/>
        <script src="scripts/jquery-1.9.0.min.js" type="text/javascript"></script>
        <script type="text/javascript">
            $(window).load(function () {
                App.init(encodeURIComponent("%%SESSIONID_TOKEN%%"));
            })
        </script>
    </head>

In App.js:

App.init = function (sessionID) {
    // put initial processing here
    storeSessionId( sessionID )
    App.callScriptoWithStoredSessionID()
}

That's it! You can now customize the look and feel of the Axeda Applications Console, as well as add an Extended UI Module and a Custom Tab.

Further Reading

Developing with Axeda Artisan
Axeda Sample Application: Populating A Web Page with Data Items

Common Questions

I want to display my custom app on a custom tab. How should I manage authentication within my custom tab app?
Answer: Use Javascript to store the session ID injected as a variable into the index.html page, then use it to authenticate Scripto calls to the Axeda Platform.

Are there example programs to get started?
Answer: There are several examples of Artisan projects to get started:
Axeda Sample Application: Populating A Web Page with Data Items
An Axeda instance - https://<customerInstance>.axeda.com/artisan
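To round out the authentication discussion, here is a sketch of a Scripto call using the stored session ID (the custom object name MyScript and the App.sessionId storage are illustrative assumptions; verify the Scripto endpoint path against your instance's documentation):

// In App.js - a sketch of calling an Axeda Scripto custom object with the stored session ID.
App.callScriptoWithStoredSessionID = function () {
    $.ajax({
        // "MyScript" is a hypothetical custom object name
        url: "/services/v1/rest/Scripto/execute/MyScript",
        // assumes storeSessionId() saved the ID on App.sessionId
        data: { sessionid: App.sessionId },
        dataType: "json",
        success: function (result) {
            console.log("Scripto result: ", result);
        }
    });
};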
Distributed Timer and Scheduler Execution in a ThingWorx High Availability (HA) Cluster

Written by Desheng Xu and edited by Mike Jasperson

Overview

Starting with the 9.0 release, ThingWorx supports an "active-active" high availability (or HA) configuration, with multiple nodes providing redundancy in the event of hardware failures as well as horizontal scalability for workloads that can be distributed across the cluster.

In this architecture, one of the ThingWorx nodes is elected as the "singleton" (or lead) node of the cluster. This node is responsible for managing the execution of all events triggered by timers or schedulers – they are not distributed across the cluster.

This design has proved challenging for some implementations, as it presents the potential for a ThingWorx application to generate an imbalanced workload if complex timers and schedulers are needed. However, your ThingWorx applications can overcome this limitation and still use timers and schedulers to trigger workloads that will distribute across the cluster. This article demonstrates both how to reproduce this imbalanced workload scenario and the approach you can take to overcome it.

Demonstration Setup

For the purposes of this demonstration, a two-node ThingWorx cluster was used, similar to the deployment diagram below.

Demonstrating Event Workload on the Singleton Node

Imagine this simple scenario: you have a list of vendors, and you need to process some logic for one of them at random every few seconds.

First, we create a timer in ThingWorx to trigger an event – in this example, every 5 seconds. Next, we create a helper utility with a task that randomly selects one of the vendors and processes some logic for it – in this case, it simply logs the selected vendor in the ThingWorx ScriptLog. Finally, we subscribe to the timer event and call the helper utility.

Now, with that code in place, let's check where these services are being executed in the ScriptLog. Look at the PlatformID column in the log... notice that the Timer and the helper utility are always running on the same node – in this case Platform2, which is the current singleton node in the cluster.

As the complexity of your helper utility increases, you can imagine how the workload becomes unbalanced, with the singleton node handling the bulk of this timer-driven workload in addition to the other workloads being spread across the cluster. This workload can be distributed across multiple cluster nodes, but a little more effort is needed to make it happen.

Timers that Distribute Tasks Across Multiple ThingWorx HA Cluster Nodes

This time, let's update our subscription code – using the PostJSON service from the ContentLoader entity to send the service requests to the cluster entry point instead of running them locally.
const headers = {
    "Content-Type": "application/json",
    "Accept": "application/json",
    "appKey": "INSERT-YOUR-APPKEY-HERE"
};
const url = "https://testcluster.edc.ptc.io/Thingworx/Things/DistributeTaskDemo_HelperThing/services/TimerBackend_Service";
let result = Resources["ContentLoaderFunctions"].PostJSON({
    proxyScheme: undefined /* STRING */,
    headers: headers /* JSON */,
    ignoreSSLErrors: undefined /* BOOLEAN */,
    useNTLM: undefined /* BOOLEAN */,
    workstation: undefined /* STRING */,
    useProxy: undefined /* BOOLEAN */,
    withCookies: undefined /* BOOLEAN */,
    proxyHost: undefined /* STRING */,
    url: url /* STRING */,
    content: {} /* JSON */,
    timeout: undefined /* NUMBER */,
    proxyPort: undefined /* INTEGER */,
    password: undefined /* STRING */,
    domain: undefined /* STRING */,
    username: undefined /* STRING */
});

Note that the URL used in this example - https://testcluster.edc.ptc.io/Thingworx - is the entry point of the ThingWorx cluster. Replace this value with your cluster's entry point if you want to duplicate this in your own cluster.

Now, let's check the result again. Notice that the helper utility TimerBackend_Service is now running on both cluster nodes, Platform1 and Platform2.

Is this Magic? No! What is Happening Here?

The timer or scheduler itself is still executed on the singleton node, but now, instead of triggering the helper utility locally, the PostJSON service call from the subscription is routed back to the cluster entry point – the load balancer. As a result, the request is routed (usually round-robin) to any available cluster node that is behind the load balancer and reporting as healthy.

Usually, the load balancer will be configured with cookie-based affinity: the load balancer routes each request to the node that has the same cookie value as the request. Since this PostJSON service call is a RESTful call, any cookie value associated with the response is not attached to the next request. As a result, cookie-based affinity does not impact the round-robin routing in this case.

Considerations for Using this Approach

Authentication: As illustrated in the demo, make sure to use an Application Key with an appropriate user assigned in the header. You could alternatively use username/password or a token to authenticate the request, but this could be less ideal from a security perspective.

App Deployment: The hostname in the URL must match the hostname of the cluster entry point. As the URL of your implementation is now part of your code, if you deploy this code from one ThingWorx instance to another, you will need to modify the hostname/port/protocol in the URL. Consider creating a variable in the helper utility which holds the hostname/port/protocol value, making it easier to modify during deployment (see the sketch below).

Firewall Rules: If your load balancer has firewall rules which limit traffic to specific known IP addresses, you will need to determine which IP addresses will be used when a service is invoked from each of the ThingWorx cluster nodes, and then configure the load balancer to allow traffic from each of these public IP addresses. Alternatively, you could configure an internal IP address endpoint for the load balancer and use the local /etc/hosts name resolution on each ThingWorx node to point to the internal load balancer IP, or register this internal IP in an internal DNS as the cluster entry point.
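As a minimal sketch of that deployment suggestion (assuming a hypothetical string property named clusterBaseUrl on the helper Thing; the property name is illustrative, not part of the original demo):

// Build the request URL from a configurable property instead of hard-coding it.
// "clusterBaseUrl" would hold something like "https://testcluster.edc.ptc.io/Thingworx".
const base = Things["DistributeTaskDemo_HelperThing"].clusterBaseUrl;
const url = base + "/Things/DistributeTaskDemo_HelperThing/services/TimerBackend_Service";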
One of the killer features of the Axeda Platform is the Axeda Console, a browser-based online portal where developers and business users alike can browse information in an out-of-the-box graphical user interface. The Axeda Console is functional, re-brandable, and extensible, and can easily form the foundation for a customized connected product experience. Let's take a tour of the Axeda Console and explore what it means to have a full-featured connected app right at the start of your development.

What this tutorial covers

This tutorial discusses the landscape of the online browser-based suite of tools accessible to Axeda customers. It does not do a deep dive into each of the available applications, but rather serves as an introduction to the user interface. Sections of the Axeda Applications Console that are discussed:

Landing Page (Home)
User Preferences
Asset Dashboard
Axeda Help

Note: This article features screenshots from Axeda 6.5, which is the current release as of July 1, 2013. In prior versions, the Axeda Applications Console has also been referred to as ServiceLink. Stay tuned for Axeda 6.6!

What can I do from here?

From the landing page for the Axeda Console, you can access recent assets in the right sidebar or search for assets in the left sidebar. Each of the links in the main Welcome text corresponds to a main tab:

Troubleshoot, Monitor, and Service Assets - (Service tab) An overview of the status of assets, filterable by a search on fields such as serial number, model, and organization.
Access and Control Remote Assets - (Access tab) If you are familiar with Windows Remote Desktop, this will seem familiar. It allows you to log into and control an asset as if you were typing from a physical keyboard directly attached to it, without having to be on the same network or in the same location. This is particularly useful when the asset is behind a firewall or other controlled network.
Install and Deploy Software Updates - (Software tab) This tool provides the ability to create, view, configure, delete, and deploy software packages (such as a file that contains an update) to assets.
View Usage Data and Asset Charts - (Usage tab) You can use the Axeda Usage application to track and analyze asset usage.
Add New Assets, Organizations and Models - (Configuration tab) Find tools here for creating, updating, and deleting domain objects.
Administer Users, Groups and Assets - (Administration tab) Manage users, groups, auditing, and system-setup tasks.

The remaining tabs that are not linked from the Home page are either custom tabs or less frequently used tabs (depending on use case). The custom tabs are examples of custom applications that are not distributed out of the box with an Axeda instance:

Wireless - (custom tab) An integration with the Jasper API that allows the user to monitor SIMs activated in their assets.
Maintenance - Track information about the operation of machines against service cycles.
Case - Manage the resolution of asset issues.
Report - (requires an additional license) Provides a suite of standard reports; custom reports may also be added.
Dashboard - Allows you to create a landing page that displays information that is interesting to you.
Simulator - (custom tab) An app that allows you to set data items, alarms, mobile locations, and geofences on an asset.

For more details on Custom Tabs and the Extended UI, please take a look at Extending the UI - Custom Tabs and Modules (coming soon).
User Preferences

Each user in an Axeda instance has a certain set of privileges and visibility, which determine what actions she can take and what information she can see. A user also has control over certain aspects of her own use of the Axeda Console, which are configurable from the yellow Preferences link located in the top right corner of the page. This opens the User Preferences page, which allows you to set defaults for your user only. From here you can change the following settings:

User Attributes (email and password)
Locale - Change the locale, which also sets the display language.
Time Zone - Change the time zone as displayed in the Applications Console (note that this does NOT affect an individual asset's time zone; asset time zone is reported by the agent).
Notification Styles - Specify which contact methods are appropriate for you, and for what severity of triggered alert.
Default Application - Set which tab should open when you log into the Axeda Console.
Items Per Page (Long Table) - For longer listings of items, how many rows should be displayed.
Items Per Page (Short Table) - For shorter listings of items, how many rows should be displayed.

Asset Dashboard

As the asset is the center of the Axeda universe, the Asset Dashboard could be considered the central feature of the Axeda Console. You can open the dashboard for any particular asset by clicking it in the Service tab or in the Recent Assets shortcuts. You can also add modules within the Asset Dashboard that are either a custom application or the output of an Extended UI Module type custom object. From the Asset Dashboard you have an at-a-glance view of the asset's current data, alarms, uploaded files, and location, to name a few.

The Asset Dashboard is built for viewing information about the asset. To perform create/read/update/delete functions on the asset, you will need to search using the Configuration tool instead. To view a list of models, or any domain object available for configuration, click the drop-down arrow next to the View sub-tab and select the object name. Once you have the list of models displayed, click the Preferences link on a model to access a Model Preferences Dashboard that allows you to configure the model image, the modules displayed, and other features of the Asset Dashboard.

Axeda Help

As part of learning more about the Axeda Console, make use of the documentation available to you by clicking the Help link in the top right corner of the page. This opens a pop-up which contains information about the page you have open. It allows you to do a deep dive into any aspect of the Axeda Console, and includes search and a browsable index of Axeda topics. Make sure to research topics in the Help section while troubleshooting your assets and applications.
With ThingWorx, we can already use univariate anomaly alerts (on a single sensor value). However, in many situations the readings from an individual sensor may not tell you much about the overall issue, and a multivariate anomaly detector can be more useful. This post provides an overview of the Azure Anomaly Detector and how it can be integrated with ThingWorx. The attachment contains:

A document with detailed instructions about the setup;
A .csv file with the multivariate timeseries dataset;
A .twx file with some entities that need to be imported in ThingWorx, as well as the CSVParser extension that needs to be installed;
A .zip file that will need to be uploaded to an Azure Blob Container at some point in the setup.
We will host a live Expert Session: "5 Common Mistakes for Developing Scalable IoT Applications" on June 22nd, 11h00 EST.

Please find below the description of the expert session and the registration link.

Expert Session: 5 Common Mistakes for Developing Scalable IoT Applications
Date and Time: June 22nd, 11h00 EST
Duration: 1 hour
Host: Tori Firewind, Mike Jasperson and Prachi Rath - Enterprise Deployment Center
Registration Here: https://www.ptc.com/en/resources/iiot/webcast/5-common-dev-mistakes-for-scalable-iot-applications

Description: To build scalable applications, it's necessary to identify the common mistakes made and ensure they are avoided at the early stages of development. In this expert session, the PTC Enterprise Deployment Team will elaborate on why scalability is important and how you can avoid the common development pitfalls in IoT.

Existing recorded sessions can be found on the support portal using the keyword 'Expert Sessions'. You can also suggest topics for upcoming sessions using this small form.

Here are some recorded sessions that might be of interest. You can find recordings for the full library of webinars using the keyword 'Expert Sessions' in the PTC support portal search.

ThingWorx Active Active Clustering - This session covers the main aspects of the High Availability Clustering feature launched with the ThingWorx 9.0 release. Recording Link
Upgrade to ThingWorx 9 – How to Plan / Evaluate Impacts - This session highlights the key points you should evaluate to properly plan your upgrade to ThingWorx 9. Recording Link
Top 5 items to check for ThingWorx Performance Troubleshooting - How to troubleshoot performance issues in a ThingWorx environment? Here we cover the top 5 investigation steps that will help you understand the source of your environment issues and allow better communication with PTC Technical Support. Recording Link
Recently I have been accompanying an integration partner and end customer around an issue experienced with ThingWorx resource exhaustion. Early on, it seemed like this was an issue with the ThingWorx Azure IoT Hub Connector, as it would freeze up and become unresponsive. Following a root cause analysis, it became clear that it was actually caused by the lack of a number of standard cloud design patterns which, if used, would have automatically adapted the operation of the overall solution to be far more resilient as well as resource optimized.

The way the logic was structured, it prioritized job execution on entities with the oldest last-success time and would continue to retry these executions (IoT Direct Methods) every few seconds until successful. There were a number of problems here, but I'll unpack a few in order to tie the problem to the solution via design patterns.

1) No exception handling
When the direct method execution failed/timed out, or the system reported being unable to execute the remote service, this response was not used to adapt the solution's behavior.

2) No backoff retry mechanism
As exceptions were not caught, an adaptive retry mechanism with incremental or exponential backoff could not be leveraged to limit the impact of the build-up of failing retries.

3) No exception tracking
Tracking and counting exceptions would allow powering an exponential backoff retry algorithm (with jitter), a Cancel or Circuit Breaker pattern (stop doing something which is just broken), as well as alerting to address specific areas of the distributed solution experiencing issues.

4) Conflicting priorities
It was interesting to see the manifestation of the conflicting interests of wanting to ensure checks and balances (having all needed data) and system resiliency. Retries and resource usage built up exponentially due to the transient error instead of being backed off. Trying so hard to get the needed data from failing sensors meant that operational sensors were deprioritized and their data was not received either - spreading the localized issue to the whole system.

Around the time that I shared my recommendations and some examples of how to make the solution more resilient, one of my technical colleagues at Microsoft shared some extremely interesting and relevant design patterns documented by Microsoft as part of the "Microsoft Azure Well-Architected Framework". This framework, with its included design patterns for specific cloud application goals, allows applying well-known, industry-standard approaches to dealing with the challenges of large-scale distributed enterprise systems (reliability, performance, cost optimization). She later shared this blog post describing exactly the exponential backoff retry with jitter pattern which we had together recommended to the systems integrator.

What's interesting for us ThingWorx people is that this framework from Microsoft is about well-architected cloud solutions and does not specifically reference the Azure stack, and as such, many of these approaches and design practices can be employed in your ThingWorx applications. What are you waiting for? Go check them out!
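To make the backoff pattern concrete, here is a minimal sketch of exponential backoff with "full jitter" as it might look inside a ThingWorx service (variable and property names such as retryCount are illustrative assumptions, not part of the connector or the platform):

// Sketch: compute the next retry delay using exponential backoff with full jitter.
var baseDelayMs = 1000;            // first retry after roughly 1 second
var maxDelayMs = 5 * 60 * 1000;    // cap the wait at 5 minutes
var retryCount = me.retryCount;    // assumes a numeric property tracking consecutive failures

// Exponential growth capped at the maximum...
var cappedDelay = Math.min(maxDelayMs, baseDelayMs * Math.pow(2, retryCount));
// ...then full jitter: a random delay between 0 and the capped value, so that
// many failing devices do not all retry at the same instant.
var result = Math.floor(Math.random() * cappedDelay);

The jitter is what prevents synchronized retry storms: without it, every failing entity retries on the same schedule and the load spikes simply recur.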
Check out our expert session recording library! The recordings will also be published in our Customer Events library, posted on each event. Stay tuned!

Your feedback is very important to us! After watching the recordings, please take 2 minutes to complete this survey.

ThingWorx Foundation

Session Name | Link | Duration
Thingworx Mashup 101 - Do's and Don'ts | Recording link | 00:33:41
Thingworx Active Active Clustering (High Availability) | Recording link | 00:26:24
Upgrade to Thingworx 9 – How to Plan / Evaluate Impacts | Recording link | 00:27:02
Thingworx Flow Overview | Recording link | 00:43:40
Top 5 items to check for Thingworx Performance Troubleshooting | Recording link | 00:26:55
ThingWorx DEVOPS QuickStart Guide | Recording link | 00:45:05
ThingWorx Backup And Recovery | Recording Link | 00:20:14
Expert Session - Designing your Data Model in Thingworx | Recording link | 00:26:45
ThingWorx Installation | Recording link | 00:15:07
Expert Session - Introduction To Edge Connectivity | Recording link | 00:15:56
Expert Session - Basic Mashup Design in Thingworx | Recording link | 00:36:31
Expert Session - Extensions101 | Recording Link | 00:30:08
Expert Session – Developing your Data Model in Thingworx | Recording link | 00:39:19
Thingworx Scalability | Recording link | 00:09:18
Expert Sessions - ThingWorx Patch Upgrade | Recording link | 00:03:19

ThingWorx Navigate

Session Name | Link | Duration
Understanding license requirements for Thingworx Navigate | Recording link | 00:32:40
Navigate SSL and Authentication | Recording Link | 00:34:30
Navigate 3D Viewer | Recording Link | 00:43:25
Component Based App Development | Recording Link | 00:24:07
Navigate 9.0 – What's new | Recording link | 00:27:07
Overview of SSO Implementation for ThingWorx Navigate and Windchill with PingFederate | Recording link | 00:18:36
Identifying the right SSO mix for Navigate 1.6 | Recording link | 00:57:56
Navigate Configuration - PingFederate Automation Script | Recording link | 00:51:07
Expert Session - Navigate Configuration/Windchill Authentication | Recording link | 00:23:07
What's new with Navigate 1.8 and the new Navigate 1.8 installer | Recording link | 01:05:26
Creating an I*E task for use in Navigate | Recording link | 00:05:36

Vuforia Expert Capture

Session Name | Link | Duration
VEC In a Nutshell | Video Link | 00:31:39
We will host a live Expert Session: "Top 5 ThingWorx environment monitoring best practices" on March 25th, 10h00 EST.

Please find below the description of the expert session and the registration link.

Expert Session: Top 5 ThingWorx environment monitoring best practices
Date and Time: March 25th, 10h00 EST
Duration: 1 hour
Host: Tori Firewind, Tim Atwood and Dave Bernbeck from the Enterprise Deployment Center
Registration Here: https://www.ptc.com/en/resources/iiot/webcast/top-5-thingworx-monitoring-best-practices

In this session, we will review the main monitoring practices for keeping a healthy environment and discuss the main issues raised by the audience. Bring your questions!

Existing recorded sessions can be found on the support portal using the keyword 'Expert Sessions'. You can also suggest topics for upcoming sessions using this small form.

Here are some recorded sessions that might be of interest. You can find recordings for the full library of webinars using the keyword 'Expert Sessions' in the PTC support portal search.

ThingWorx Active Active Clustering - This session covers the main aspects of the High Availability Clustering feature launched with the ThingWorx 9.0 release. Recording Link
Upgrade to ThingWorx 9 – How to Plan / Evaluate Impacts - This session highlights the key points you should evaluate to properly plan your upgrade to ThingWorx 9. Recording Link
Top 5 items to check for ThingWorx Performance Troubleshooting - How to troubleshoot performance issues in a ThingWorx environment? Here we cover the top 5 investigation steps that will help you understand the source of your environment issues and allow better communication with PTC Technical Support. Recording Link
Hi All,

We will host a live Expert Session: "ThingWorx Active Active Clustering" on January 21st, 8h00 EST. Please find below the description of the expert session and the registration link.

Expert Session: ThingWorx Active Active Clustering
Date and Time: January 21st, 8h00 EST
Duration: 1 hour
Host: Ayush Tiwari - IoT Product Manager
Registration Here: https://www.ptc.com/en/customer-success/expert-sessions-for-thingworx-foundation-webcasts (scroll down; the session is at the bottom of the page)

Description: This session will cover the main aspects of the High Availability Clustering feature launched with the ThingWorx 9.0 release. Join us and bring your questions with you!

Existing recorded sessions can be found on the support portal using the keyword 'Expert Sessions'. You can also suggest topics for upcoming sessions using this small form.

Here are some recorded sessions that might be of interest. You can find recordings for the full library of webinars using the keyword 'Expert Sessions' in the PTC support portal search.

Upgrade to ThingWorx 9 – How to Plan / Evaluate Impacts - This session highlights the key points you should evaluate to properly plan your upgrade to ThingWorx 9. Recording Link
ThingWorx Flow Overview - Flow is a powerful component of the ThingWorx platform. This session takes the Flow discussion beyond basic applications and into more customized and complex solutions, focusing on use cases, main features such as triggers and connector options, the main enhancements in ThingWorx 9.0, and a short demonstration. Recording Link
From the documentation, a Solr node is only needed when using DataTables. If the Solr configuration field is left blank, the extension will request an input.

Q: Are Solr nodes required, or optional, in order to use DSE with ThingWorx (in the hypothetical case of not using DataTables)?

A: For the functionality of ThingWorx itself, a Solr node is not required. However, the extension does try to validate the configuration, and hence, at this point, a Solr node is mandatory to properly configure the extension. This will be fixed in the future.

Q: When there are two entries for addresses, one for a Cassandra cluster and one for a Solr cluster, are they the same cluster or different clusters?

A: They could be either. There can be one machine with Solr enabled, using the same IP for both Cassandra and Solr. However, this is not recommended for production workloads, though it would be perfectly fine for development or test environments. In a cluster, in order to have both Solr and Cassandra nodes, the use of datacenters is required. Even if a datacenter isn't explicitly defined, a default install of DSE will create two data centers called "Cassandra" and "Solr", which is what you would see in the default "Cassandra Keyspace Settings" property in the configuration. If the user does create datacenters with specific names, then they will have to update the "Cassandra Keyspace Settings" property to reflect the same:

replication = {'class':'NetworkTopologyStrategy', 'Cassandra':1, 'Solr':1}

The number (1 being the default) represents the replication factor (https://docs.datastax.com/en/cql/3.1/cql/cql_using/update_ks_rf_t.html), which depends on the number of nodes in each data center.
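For example, a keyspace using this replication setting could be created or altered with CQL like the following (the keyspace name thingworx is an illustrative assumption; adjust the data center names and factors to your cluster):

-- Create a keyspace replicated across the two default DSE data centers.
CREATE KEYSPACE thingworx
  WITH replication = {'class': 'NetworkTopologyStrategy', 'Cassandra': 1, 'Solr': 1};

-- Raise the replication factors later, e.g. to tolerate node failures:
ALTER KEYSPACE thingworx
  WITH replication = {'class': 'NetworkTopologyStrategy', 'Cassandra': 3, 'Solr': 2};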
The purpose of this post is to provide some ideas to help diagnose issues in mashups. First, check whether the problem occurs at mashup runtime or in design (edit) mode.

1. Runtime

Is the issue visual, or related to improper service execution? (E.g., "my data is displayed correctly but the styling or formatting is wrong" is visual; "my data is displayed incorrectly but the styling and formatting are right" is improper service execution.)

For visual/styling/formatting issues:
Return to the edit mode of the mashup and ensure the proper style definitions were set up.
Ensure the logic behind the connections is correct.
Check the configuration of the widget(s) involved.
Were any changes made to the styles after the mashup was saved and run the first time? If so, try clearing the browser cache and reconnecting the dependent entity with the style involved in the issue.
If the problem persists, contact technical support to raise a cosmetic defect ticket.

For improper service execution:
Return to Composer and use the "test" button on the service to execute it and validate the output.
If the outputs are incorrect, check the code inside the service.
If the outputs come out as expected, try reconnecting the service in the mashup design mode and clearing the browser cache.
If the issue is related to data from the user database not displaying, verify the database connectivity and proper credentials.
If the problem persists, reach out to technical support to raise a defect.

2. Design/edit mode

If the widgets are not displaying correctly or not appearing in the list:
Check that the extensions involved appear under the Extension Manager. Re-upload if needed and restart Composer.
If the Google Maps widget is not showing in the mashup the first time it is used, allow up to 2 hours for it to load and cache.
Submit a ticket to technical support, including screenshots of the issue.

For other styling, formatting, or improper display issues at design time: document the observation and supply screenshots to the technical support team for investigation.

Note: See Tools and approaches used in troubleshooting Twx issues.
We will host a live Expert Session: "ThingWorx Flow Overview" on December 10th, 8h00 EST.

Please find below the description of the expert session and the registration link.

Expert Session: ThingWorx Flow Overview
Date and Time: December 10th, 8h00 EST
Duration: 1 hour
Host: Antony Moffa; Vinay Vaidya - ThingWorx IoT Platform Senior Directors
Registration Here: https://www.ptc.com/en/customer-success/expert-sessions-for-thingworx-foundation-webcasts

Description: An overview of ThingWorx Flow, an application for integration and orchestration between systems. This will focus on use cases, main features such as triggers and connector options, the main enhancements in ThingWorx 9.0, and a short demonstration.

Existing recorded sessions can be found on the support portal using the keyword 'Expert Sessions'. You can also suggest topics for upcoming sessions using this small form.

Here are some recorded and upcoming sessions that might be of interest. You can find recordings for the full library of webinars using the keyword 'Expert Sessions' in the PTC support portal search.

Top 5 items to check for ThingWorx Performance Troubleshooting - How to troubleshoot performance issues in a ThingWorx environment? Here we cover the top 5 investigation steps that will help you understand the source of your environment issues and allow better communication with PTC Technical Support. Recording Link
Upgrade to ThingWorx 9 – How to Plan / Evaluate Impacts - This session will highlight the key points you should evaluate to properly plan your upgrade to ThingWorx 9. Register Here
Active Active Clustering - This session will cover the main aspects of the High Availability Clustering feature launched with the ThingWorx 9.0 release. Register Here
I've had a lot of questions over the years working with Azure IoT, Kepware, and ThingWorx that I really struggled getting answers to. I was always grateful when someone took the time to help me understand, and now it is time to repay the favour.

People ask me many things about Azure (in a ThingWorx context), and one of the common ones has been about MQTT communications from Kepware to ThingWorx using IoT Hub. Recently the topic has come up again as more and more of the ThingWorx expert community start to work with Azure IoT. Today, I took the time to build, test, validate, and share an approach and utilities to do this in cases where the Azure Industrial IoT OPC UA integration is overkill, or simply a step later in the project plan. Enjoy!

End to end Integration of Kepware to ThingWorx using MQTT over Azure IoT (YouTube 45-minute deep-dive)

ThingWorx entities for import (ThingWorx 9.0)

This approach can be quite good for a simple demo if you have a Kepware Integrator or Kepware Enterprise license, but the use of IoT Gateway for many servers and tags can be quite costly. Those looking to leverage Azure IoT Hub for MQTT integration to ThingWorx would likely also find this recorded session and the shared utilities quite helpful.

Cheers, Greg


Hi all, here is the recording of the expert session hosted on September 3rd. For full-sized viewing, click on the YouTube link in the player controls. Your feedback is very important to us! After watching the recording, please take 2 minutes to complete this survey.
Hi all, here is the recording of the expert session hosted on August 25th. For full-sized viewing, click on the YouTube link in the player controls.