IoT & Connectivity Tips
Data is NOT free. It is easy to overlook the cost of data collection, but all data incurs some cost when it is collected. Data collection in and of itself does not bring business value: if you don't know why you're collecting the data, then you probably won't use it once you have it. For a wireless product, the cost is felt in the bytes transferred, which makes for an expensive solution but happy telcos. Even for wired installations, data transfer isn't free. Imagine a supermarket with 20 checkout lanes, only a 56K DSL line, and a connection shared with the credit card terminals; it is important to upload only the necessary data during business hours. For the end user, too much data leads to information clutter, and too much information increases the time necessary to locate and access critical data.

All enterprise applications have some associated "Infrastructure Tax", and the Axeda Platform is no exception. This is the cost of maintaining the existing infrastructure, as well as increasing capacity through the addition of new systems infrastructure. It includes:
The cost of the physical hardware
The additional software licenses
The cost of the network bandwidth
The cost of IT staff to maintain the servers
The cost of attached storage

Optimizing your data profile will maximize the performance of your existing infrastructure. Scaling decisions should be based on load, because 50,000 well-defined Assets can yield less data than 2,000 extremely "chatty" Assets.

Types of Data
To develop your data profile, first identify the types of data you're collecting:
"Actionable Data": used to drive business logic. This is your most crucial data, and tends to be "real-time".
"Informational Data": changes at a very low rate, and represents properties of your assets as opposed to status.
"Historical Data": sometimes you need to step back to appreciate a work of art. Historical data is best viewed with a wide lens to identify trends.
"Payload Data": data which is being packaged and shipped to an external system.

Actionable Data
Actionable Data controls the flow of business logic and has three common attributes:
It tends to represent the status of the Asset
It is typically the highest priority data you will receive
It usually has more frequent occurrences than other data

Informational Data
Informational Data is typically system or software data; some examples include:
OS version
Firmware information
Geographical region

Historical Data
Historical Data represents the results of long-term operations and is typically used for operational review of trends. It:
May be sourced from Data Items, File uploads, or Web Services operations
May feed the Axeda integrated business intelligence solution, or internal customer BI systems

Payload Data
Payload Data travels through the Cloud to your system of record. In this case, the Axeda Platform is a key actor in your system, but its presence is not directly visible to the end user.

Data Types Key Points
Understanding the nature of your data helps to inform your data collection strategy. The four primary attributes are the following:
Frequency
Quantity
Storage
Format

Knowing what to store, what to process, and what to pass through for storage is the first key to optimizing your data profile. The "everything first" approach is an easy choice, but a tough one from which to realize value.
A "bottom up" or use-case driven approach will add data incrementally, and will reveal the subset of data you actually need to be collecting. Knowing your target audience for the data is the next step. A best practice to better understand who is trying to innovate, and how they are looking to do it, begins with questions such as the following:
Is Marketing looking for trends to highlight?
Is R&D looking for areas to improve the product?
Is the Service team looking to proactively troubleshoot assets in the field?
Is Sales looking to sell more consumables?
Is Finance trying to resolve a billing dispute?

Answers to these questions will help determine which data contributes to solving real business problems. Most Service technicians only access a handful of pieces of information about an Asset while troubleshooting, regardless of how many they have access to. It's important to close the information loop when finding out which data is actually being used.

In addition to understanding the correct target audience and their goals, milestone events are also opportunities to revisit your strategy, specifically times like:
New Model rollouts
Migration to the Cloud
New program launch

Once your data profile has been established, the next phase of optimization is to plan the way the data will be received.

Strategies
Data Item vs. File Upload
A decision should be made as to the best way to transfer data to the Axeda Platform, whether that is data items, events, alarms, or file transfers. Here's a Best Practice approach that's fairly universal:
Choose a Data Item if: (a) you are sending Actionable Data, or (b) you are sending discrete Informational Data.
Choose a File Upload if: (a) you are sending bulk data which does not need to trigger an immediate response, or (b) you intend to forward the data to an external system.

Agent-Side Business Logic
Keep in mind that the Axeda Platform allows business logic to be implemented before transmitting any data. The Agent can be configured to determine when data needs to be sent via numerous mechanisms:
Scripts provide the ability to trigger on-demand uploads of data, either via a human UI interaction or an automated process.
The "Black Box" configuration allows for a rolling sample window, and will only upload the data in the window based on a configured condition.

Agent Rules
Agent Rules allow the Agent to monitor internal data values to decide when to send data to the Cloud. Data can be continuously sampled and compared against configured thresholds to determine when a value worthy of transmission is encountered. This provides a very powerful mechanism to filter outbound data. For example, an Agent might monitor a data flow and transmit only when it reaches an Absolute-high value of 1200 (a small illustrative sketch of this idea follows at the end of this article).

Axeda provides a versatile platform for managing the flow of data through your Asset ecosystem. It helps to cultivate an awareness not only of what the data set is, but what it represents and to whom it has value. While storing data is cheap, the hidden costs of data transmission make it worthwhile to do your "data profiling homework" or risk paying a high price in the longer term.
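As referenced above, here is a minimal illustrative sketch of the Absolute-high filtering idea, written as plain Groovy rather than actual Agent Rule configuration syntax (the threshold of 1200 is the example value from the text, and the sample readings are hypothetical):

// Illustrative only: a real Agent Rule expresses this declaratively in the Agent configuration.
def absoluteHigh = 1200                          // example threshold from the discussion above
def samples = [850, 990, 1150, 1275, 1080]       // hypothetical rolling sample window

samples.each { reading ->
    if (reading >= absoluteHigh) {
        // Only readings that cross the threshold are worth transmitting;
        // everything else stays on the Agent and never consumes bandwidth.
        println "Transmit to Cloud: ${reading}"
    } else {
        println "Filtered locally: ${reading}"
    }
}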
The Axeda Platform has a mature data model that is important to understand when planning to build applications. This tip first introduces the existing objects and how they relate to each other.

Axeda Agents can communicate:
Model – the definition of a type of asset. The model consists of a set of dataitems (its inputs and outputs) and alarms. The platform applies logic to a model, so as assets grow, the system is scalable in terms of management.
Asset – sometimes called Device. An asset has an identifier called a Serial Number, which must be unique within its model. Agents report information in terms of the asset, and logic is applied to data and events about that asset.
Dataitem – a named reading, such as a sensor or computer value. Dataitems are timestamped values in a sequence: for example, hourly temperatures, odometer readings, or daily usage statistics. The number of named dataitems is unlimited. Dataitems can be written as well as read, so a value can be sent to an "output". A dataitem can be a Digital (boolean), Analog (real value), or a String.
Mobile Location – a lat/long pair, typically read from GPS. This is used to map assets as they move.
Alarms – have a name, severity, description, active flag, timestamp, and an optional embedded dataitem and value. Alarms sent from an agent may result from logic that detects a condition, or from traps, error codes, etc. An alarm indicates something that's wrong.
Files – arbitrary files can be uploaded from an agent. Files are sent with a Hint string: metadata that allows rules to process the file based on something other than the file extension. Files are often uploaded when an alarm has been raised, or on demand from a user or rule.

Axeda agents have flexible ways to send and receive this information based on time, data changes, user request, etc. The Adaptive Machine Messaging Protocol (AMMP) allows anyone to make an agent that interacts with the Axeda Platform using the same data model.

The Axeda Platform is asset-centric. An asset is an instance of a Model, and each asset is identified by its Model and Serial Number pair. Associated with an asset are:
Organization – typically the customer.
Location – the home of the asset (a street address). A location is in a Region.
Contacts – people who have a relationship to the asset. Contacts have a role, such as Owner or Service Agent.
Asset Groups – assets are members of groups, and groups can be used to grant privileges, for navigation, or to apply commands.
Properties – additional named attributes of an asset. Properties do not have a time-series history like a dataitem. The value of a property may be used to dynamically group assets.
Condition – the current condition may be good, warning, error, or needs maintenance, based on the existence of alarms, for example.
And, of course, dataitems, alarms, and files.

Information is processed and organized in the context of an asset, but the processing is managed for models. The only scalable way to manage a lot of assets is to apply rules by kind of asset, not to individual assets. Rules apply logic to data as it happens: when a new dataitem is reported, a rule may check it against its threshold; when an alarm is created, a rule may create a trouble ticket or notify the user. All types of rules in the platform – Expression Rules, State Machines, and Threshold Rules – are event based. Rules apply to models, or sometimes to all assets (such as a standard way of notification on Alarms). The only exceptions are rules that apply to user logins and rules on a system timer.
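To make the event-based rule concept concrete, here is a small hedged example in the If/Then Expression Rule style used elsewhere in these tips (the data item name Temperature and the threshold of 100 are hypothetical; the Audit action and the Temperature.changed / Temperature.value syntax appear in the other tips in this collection):

Type: Data
If:   Temperature.changed && Temperature.value > 100
Then: Audit("data-management", "Temperature exceeded 100: " + Temperature.value)

A rule like this is defined once for the model, and the platform evaluates it for every asset of that model as new dataitem values arrive.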
Software packages are another entity in the Platform. Packages are used to distribute files, software, patches, etc., and to script some commands around their delivery, so packages often upgrade or patch software, or load a new option or help file. Packages are defined for a model, and the deployment may be automatic or manual, to one asset or to many.

User logins are members of user groups. User groups have both privileges (what they can do) and visibility (which assets they can see). User group visibility allows the group to access assets in an Asset Group, a Model, or a Region.

How do solutions take advantage of this? Dataitems can be configured to store no data, the current value, or history. History is needed if you want to see the temperature plot over the last day. Many times, the current value is all that's needed to process rules and see the state of an asset. The option not to store a dataitem makes sense if the dataitem is only used to run a rule, or if it will just be sent to another application. An agent can send a dataitem string to the server, and the server puts the string on the Message Queue to deliver to another application; in such a pass-through mode, the dataitem doesn't need to be stored at all. A similar situation is a string dataitem that is parsed by a rule calling a Groovy script: the script can parse the string (which may be XML or part of a log file) and use the SDK to take some action.

Alarms are almost always used to notify people that they should do something. Alarms in Axeda have a lifecycle that corresponds to how people interact with them. An alarm begins its life when it's created. From that point, the alarm can be:
Acknowledged – someone has seen it.
Escalated – the alarm condition hasn't been fixed for some time.
Closed – the end of the alarm's life.
Suppressed – the alarm is logged in the history, but users don't see it. Set an alarm to Suppressed when it's just an annoyance and doesn't have any action required.
Disabled – occurrences of this alarm are thrown away; rules don't even see them.
The Suppressed and Disabled modes are applied to an alarm of a given name, because they affect all future alarms by that name.

Files are uploaded for a few reasons. Log files are typically uploaded so a service tech can diagnose a problem. Data files can be uploaded so a script or an external system can process the file and take appropriate action; this can be another way of sending information that doesn't fit in a dataitem.

The configuration of an asset – both hardware and software – is called Inventory, and the inventory of assets is important in diagnostics, planning spare parts, knowing what patch to apply, and much more.

Extended Objects are attributes that can be added to the objects described here, or can be complete objects that live on their own. Your application can read and write these objects or attributes, and query them. The use is up to you.

Resources
You can find more information on the architecture of the platform in the Introduction to the Axeda Platform. The Platform SDK and Web Services expose most of these objects for configuration as well as runtime. That means an application can provision models and assets, create rules and apply them to models, then monitor the behavior of assets, all through Web Services.
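To show how these objects are reached programmatically, the following is a minimal hedged sketch in Groovy. It uses only SDK classes that appear in the code snippets later in this collection; the model name and serial number are hypothetical, and exact finder behavior should be confirmed against the SDK documentation for your Platform version:

import com.axeda.drm.sdk.Context
import com.axeda.drm.sdk.device.ModelFinder
import com.axeda.drm.sdk.device.Model
import com.axeda.drm.sdk.device.DeviceFinder
import com.axeda.drm.sdk.device.Device

// Run inside a Custom Object so the user context and logger are injected
final def CONTEXT = Context.getUserContext()

// Find the model (the definition of this kind of asset)
ModelFinder modelFinder = new ModelFinder(CONTEXT)
modelFinder.setName("ExampleModel")                 // hypothetical model name
Model model = modelFinder.findOne()

// Find one asset; a Model and Serial Number pair identifies it uniquely
DeviceFinder deviceFinder = new DeviceFinder(CONTEXT)
deviceFinder.setSerialNumber("ExampleDevice-001")   // hypothetical serial number
Device asset = deviceFinder.find()
logger.info("Found asset with id: ${asset?.id?.value}")

// List the dataitems defined for this kind of asset
model.dataItems.each { dataItem ->
    logger.info("Model defines dataitem: ${dataItem.name}")
}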
Adaptive Machine Messaging Protocol
The Adaptive Machine Messaging Protocol (AMMP) is a simple, byte-efficient, lightweight messaging protocol used to facilitate Internet of Things (IoT) communications and to build IoT connectivity into your product. Using a RESTful API, AMMP provides a semantic structure for IoT information exchange and leverages HTTPS as the means for sending and receiving messages between an edge device and the Axeda® Machine Cloud®. AMMP uses JavaScript Object Notation (JSON), allowing any device that is capable of making an HTTP transmission to interact with the Axeda Platform. By utilizing a common network transport that is friendly to local network proxies and firewalls, and at the same time using JSON for a compact, human-readable, language-independent, and easily constructed data representation, AMMP simplifies device communication and reduces the work needed to connect to the Axeda Machine Cloud. For complete information about the Adaptive Machine Messaging Protocol, refer to the Adaptive Machine Messaging Protocol (AMMP) Technical Reference.

AMMP Toolkits
The AMMP Toolkits are libraries that allow you to connect your devices to the Axeda Platform using AMMP. The AMMP Toolkits support transmission of data, alarms, events, and locations; error handling and reporting; as well as exchanging files with the Axeda Platform.

AMMP Android-Based Toolkit
The AMMP Android-Based Toolkit library conforms to the AMMP Protocol Version 1.1.
AMMP Android Toolkit
AMMP Android Toolkit Developers Reference
AMMP Protocol v1.1 Technical Reference

AMMP Java-Based Toolkit
The AMMP Java-Based Toolkit library conforms to the AMMP Protocol Version 1.1.
AMMP Java Toolkit
AMMP Java Toolkit Developers Reference
AMMP Protocol v1.1 Technical Reference

AMMP C-Based Toolkit
The AMMP C-Based Toolkit library conforms to the AMMP Protocol Version 1.1.
AMMP C Toolkit
AMMP C Toolkit Developers Reference
AMMP Protocol v1.1 Technical Reference

The above resources may be found at the PTC Support Portal.
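Because AMMP is simply JSON over HTTP(S), any HTTP client can exercise it. As a hedged sketch (the instance name example-connect.axeda.com, the model, and the serial number are placeholders; the URL and JSON body mirror the registration example in the AMMP getting-started tip later in this collection), a device registration could be sent with curl:

curl -X POST "https://example-connect.axeda.com/ammp/assets/1" \
     -H "Content-Type: application/json" \
     -d '{ "id": { "mn": "ExampleModel", "sn": "ExampleDevice-001", "tn": 0 }, "pingRate": 60 }'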
While working with the Axeda Platform you will come across guard rails that limit the size, recurrence, and duration of certain actions. When you run into these limitations, it may be an opportunity to re-examine the architecture of your solution and improve efficiency.

What this tutorial covers
This tutorial discusses the kinds of limits that exist across the Platform; it does not include the exact values of the limits, as these may vary across instances. Skip to the last section on System Configuration to see how to determine the read-only properties of your Axeda instance. You can also contact your Axeda Support technician to find out more about how these properties are configured. Types of limits discussed:
Rule Sniper
Domain Object Field Length Constraints
File Store Limits
System Configuration

Avoiding Rule Sniper Issues
There are two ways a rule can be sniped from statistics (recursive rules are handled differently): frequency count and execution time. When a rule is killed, an email is sent explaining the statistics behind the event. Here is what those numbers actually mean:
CurrentAverageExecTime = loadExecTime / frequencyCount. This determines which rule is sniped: the longest-running rule on average, NOT the one that runs the most per time period.
FrequencyCount = how many times this rule ran in this period.
TotalExecTime = total time this rule has executed for (for the rule in general, not just this period).
MaxExecTime = the longest time this rule has ever taken to run.
ExecCount = the number of times this rule has ever run.
MaxFrequencyCount = the maximum number of times this rule has ever run in a period.

The Rule Sniper monitors all the rules as a unit. When the entire system is beyond the "load point", it chooses the heaviest-hitting rule and kills it. Some definitions:
Execution count – how many times the rule has run since it was last enabled.
Maximum execution time – the maximum time a rule can run. This is controlled by the following setting in DRMConfig.properties: com.axeda.drm.rules.statistics.rule-time-threshold
Total execution time – the time that the rule actually ran.
Frequency count – how many times the same Expression Rule runs in a set period of time. The period of time is set in DRMConfig.properties by: com.axeda.drm.rules.statistics.rule-frequency-period
Maximum frequency count – the maximum number of times the expression can run.

Recursive Expression Rules
Rules can be triggered by actions such as file uploads, device registration, and data item changes. A scenario may occur in which an Expression Rule initiates a Then or Else action that triggers itself, such as a Data type Expression Rule setting a data item. This scenario has led to the existence of the Rule Sniper, which disables Expression Rules that are triggered several times in quick succession. At times an Expression Rule may be sniped simply for being triggered too many times in too short a period of time, even though the rule was not recursive.

Setting a Data Item from a Data type Rule
In one scenario, one data item comes in, say Temperature, and you need to set a different data item, Climate, based on the value of Temperature. Without any checking, a Data type Rule that sets a Data Item value will trigger itself, leading to a recursive rule execution that will be shut down by the Rule Sniper.
A way to do this without the rule being sniped is to check in the If expression that the data item change triggering the rule is the one we are interested in, as opposed to the data item that changed because it was set by the rule.

If: Temperature.changed && Temperature.value > 75
Then: SetDataItem("Climate", "Hot")

Since it was Climate that changed as a result of the Then statement, the rule will not be triggered again.

***Update: In an ironic twist of fate, it turns out that the solution above only works for data items that are set to be stored On Change rather than as Stored data items. Stored data items are updated whenever a new value is entered, even if it is the same value. In this case, Temperature.changed would not trigger because the value would be the same; only the timestamp would be different. This would matter if you had the possibility of the same value happening twice consecutively and needed the rule to trigger both times, but not on any other data item. The correct solution is the following:

If: (!Temperature.changed || Temperature.changed) && Temperature.value > 75
Then: SetDataItem("Climate", "Hot")

Admittedly inelegant, this works because if any other data item is passed in, Temperature will not be passed in, so there will be no value for Temperature.changed. If Temperature is passed in, it will trigger either one of the cases (not changed if the value is the same, changed if it isn't).

An alternate solution is to make use of the consecutive property of the Expression Rule. "Execute action each time rule evaluates to true" corresponds to the consecutive property, which determines whether the rule will fire every time the If expression evaluates to true. If the consecutive property is true, it will fire every time. If it is false, the rule will trigger one time when the If expression evaluates to true, and then it won't be triggered again until the If expression evaluates to false and then to true again. With the consecutive property set to true, in our scenario above, whenever the Temperature changes and is over 75 it will set Climate to Hot. With consecutive set to false, the rule will set Climate to Hot once, and then Temperature will have to fall below 75 and rise above 75 again to trigger the rule again.

Recurring Actions
Sometimes you may need a recurring action to take place. An example would be if you don't need to evaluate a temperature in real time as it changes, but can check its status periodically. If the recurrence either requires or can tolerate a set delay, the best practice is to use a Rule Timer. A Rule Timer allows you to execute an Expression Rule on a schedule, much like a cron job; in fact, the Rule Timer syntax is expressed in crontab format (example schedules are sketched below). In order to use a Rule Timer, create an Expression Rule of type System Timer or Asset Timer. The Asset Timer allows you to scope the rule to a certain set of assets like other rules, while a System Timer is not scoped to assets. This makes a System Timer more appropriate for a rule that would execute a Custom Object, as opposed to one that creates an alarm directly on an asset. Then create the Timer itself, which will allow you to set the schedule: navigate to Configuration > New > Rule Timer. With a Rule Timer, you can set a rule to run automatically with a preset delay and avoid the recurrence limit on the rule.
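As an example, a Rule Timer schedule in the crontab format mentioned above might look like one of the following (the schedules themselves are arbitrary illustrations):

*/15 * * * *      evaluate the timer-driven rule every 15 minutes
0 2 * * *         evaluate it once a day at 02:00

The fields are, in order: minute, hour, day of month, month, and day of week.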
For more information on the Rule Sniper, there is a Salesforce Knowledgebase Solution article available to Axeda customers called "What are the Rule Sniper and Rule Executor Monitor Features For and How Do They Work?", as well as the Rules Design and Best Practices Guide.

Domain Object Field Length Constraints
Every stored object has limits on the length of its fields, such as name and value. If a script attempts to store a value for a field that exceeds the field length constraints, the value will be truncated to the maximum limit. The maximum size of a data item value in the database is 4000 bytes. Two additional constraints are a limit on the number of lines in a custom object (typically 1000 lines) and on the size of a stored data accumulation that can be read out as a string (1 MB). The Help documentation available through the Axeda Applications Console contains information regarding field constraints (such as the Help pages on String Length Constraints at http://<<yourdomain>>.axeda.com/help/en/rule_action_data_entry_string.htm). A small defensive-trimming sketch appears at the end of this tip.

Limits on File Store
Configurable quota limits exist on files that can be uploaded to the Axeda File Store via the SDK v2 FileInfoBridge. These limits will prevent creating FileUploadSessions, creating or updating FileInfos, or uploading file data if they are exceeded:
File count: the maximum number of files that can be stored on the system
Maximum file size: the maximum size of any one file
Total stored bytes: the total bytes for all files that may be stored on the system
The configuration of these limits can be found on your system by navigating to Administration > System Configuration as described below and searching for "file" in the Read-Only Properties.

System Configuration
The System Configuration link under the Administration tab is a useful reference for viewing the Read-Only properties of how your instance is configured. Check here when troubleshooting to determine any limit that may influence your app's implementation.

Common Question
An Expression Rule has a Data trigger, and in the Then statement it sets a data item. Why is it getting disabled?
Answer: The rule is being recursively triggered, so the Rule Sniper is disabling it.
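As referenced in the field length discussion above, here is a minimal hedged Groovy sketch of trimming a value before storing it, so that truncation is explicit rather than silent (the 4000-character check stands in for the 4000-byte data item value limit quoted above, and the incoming_value parameter is hypothetical):

def maxLength = 4000                               // data item value limit noted above
def value = parameters.incoming_value ?: ""        // hypothetical script parameter

if (value.length() > maxLength) {
    logger.info("Value exceeds ${maxLength} characters and would be truncated; trimming explicitly")
    value = value.substring(0, maxLength)
}
// ... store or forward the (possibly trimmed) value here ...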
There's a reason why many "Hello World" tutorials begin with writing to the logging tool. Developers live in the console, inserting breakpoints and watching variables while debugging. Especially with interconnected, complex systems, logging becomes crucial to saving developer hours and shipping on time.

What this tutorial covers
This tutorial introduces the three principal methods of monitoring the status of assets and the output of operations on an Axeda instance:
Audit Logs
Custom Object Log
Reports

The Audit Log
You can filter the Audit Log by date range or by category. A list of the Audit Categories is available in the Help documentation at http://<<yourhost>>.axeda.com/help/en/audit_log_categories_for_filtering... You can write to the Audit Log from an Expression Rule or from a Custom Object.

Writing to the Audit Log from an Expression Rule
Use the Audit action to write to the Audit Log:

Audit("data-management", "The temperature " + DataItem.temp.value + " was reported at " + FormatDate(Now(), "yyyy/MM/dd hh:mm:ss"))

You can insert values from the Variables or Functions list by using the plus operator as a string concatenator.

Writing to the Audit Log from a Custom Object

import com.axeda.common.sdk.id.Identifier
import com.axeda.drm.sdk.Context
import com.axeda.drm.sdk.audit.AuditCategory
import com.axeda.drm.sdk.audit.AuditMessage

auditMessage(Context.getSDKContext(), "data_management", "Thread started timestamp has ended.", context.device.id)

private def auditMessage(Context CONTEXT, String category, String message, Identifier assetId) {
    AuditCategory auditCategory = AuditCategory.getByString(category) ?: AuditCategory.DATA_MANAGEMENT
    if (assetId == null) {
        new AuditMessage(CONTEXT, "com.axeda.drm.rules.functions.AuditLogAction", auditCategory, [message]).store()
    } else {
        new AuditMessage(CONTEXT, "com.axeda.drm.rules.functions.AuditLogAction", auditCategory, [message], assetId).store()
    }
}

In either case, a message written in the context of an Asset will be displayed on the Asset Dashboard (assuming the Audit module is enabled for the Asset Dashboard in the Model Preferences).

The Custom Object Log
The Configuration (6.1-6.5)/Manage (6.6+) tab provides access to the Custom Objects log when Custom Objects are selected from the View sub-menu. This link allows you to open or save a zip archive of text files called customobject.logX, where X is a digit indicating that the log has rolled over into the next file (i.e., customobject.log1); the most current is customobject.log without a digit. These files contain logging information in chronological order, identified by Custom Object name. The log contains full stack traces of exceptions, as well as text written to the log:

ERROR 2013-06-20 18:26:02,613 [sstreBinaryReturn,ajp-10.16.70.164-8009-6] Exception occurred in sstreBinaryReturn.groovy: java.lang.NullPointerException
    at com.axeda.platform.sdk.v1.services.extobject.ExtendedObject.getPropertyByName(ExtendedObject.java:276)
    at com.axeda.platform.sdk.v1.services.extobject.ExtendedObject$getPropertyByName.call(Unknown Source)

The Logger object in Custom Objects is a custom class, ScriptoDebuggerLogger, that is injected into the script and does not need to be explicitly imported. The following methods are available on the Logger object:
logger.info()
logger.debug()
logger.error()

All objects can be converted to a String by using the dump() function.
logger.info(context.device.dump())

Additionally, a JSON utility can be used with all SDK v2 domain objects and some SDK v1 domain objects to get a JSON pretty-print string of their attributes:

import net.sf.json.JSONArray
logger.info(JSONArray.fromObject(context.device).toString(2))

// Outputs:
[{
  "buildVersion": "",
  "condition": { "detail": "", "id": "3", "label": "", "restUrl": "", "systemId": "3" },
  "customer": { "detail": "", "id": "2", "label": "Default Organization", "restUrl": "", "systemId": "2" },
  "dateRegistered": { "date": 31, "day": 4, "hours": 18, "minutes": 39, "month": 0, "seconds": 31, "time": 1359657571070, "timezoneOffset": 0, "year": 113 },
  "description": "",
  "detail": "mwc_location_1",
  "details": null,
  "gateways": [],
  "id": "12345",
  "label": "",
  "location": { "detail": "Default Organization", "id": "2", "label": "Default Location", "restUrl": "", "systemId": "2" },
  "model": { "detail": "mwc_location", "id": "4321", "label": "standalone", "restUrl": "", "systemId": "4321" },
  "name": "mwc_location_1",
  "pingRate": 10000,
  "properties": [],
  "restUrl": "",
  "serialNumber": "mwc_location_1",
  "sharedKey": [],
  "systemId": "12345",
  "timeZone": "America/New_York"
}]

Custom Object logs may be retrieved by navigating to the Configuration (6.1-6.5)/Manage (6.6+) tab and selecting Custom Objects from the View sub-menu. Click the "Log" button at the bottom of the table and save, then view customobject.log in a text editor.

Reports
Reports provide a summary of data about the state of objects on the Axeda Platform. Report titles are generally indicative of what they're reporting, such as Missing Devices Report, Auditing Report, and Users Report. A separate license is needed in order to use the Reports feature, and new report types can only be created by a Reports administrator. To run a report, click Run from the Reports tab. You can manage Reports from the Administration tab.

These three tools together offer a full view of the state of domain objects on the Axeda Platform. Make sure to take advantage of them while troubleshooting assets and applications.
This project is a simple custom tab that allows you to search all models and see their assets with basic information.  It is packaged as an Axeda SDK v2 Artisan project. Further Reading Developing with Axeda Artisan (Axeda Platform v6.8 and later) Axeda Sample Application: Populating A Web Page with Data Items Extending the Axeda Platform UI - Custom Tabs and Modules
Requirements: Axeda 6.1.6+

The Axeda Applications User Interface can be extended to accommodate varying degrees of customization. This ability to customize the base product enables repurposing the Axeda Applications User Interface to serve a specific audience.

What this tutorial covers
This tutorial discusses three ways to extend the Axeda Applications User Interface, which can be achieved via the following features:
Customizing the Look and Feel – use your own custom stylesheet to replace the default page styles, even on a per-user basis
Extended UI Modules – insert your own Extended UI Module into the Service > Asset Dashboard
Custom Tab – create a custom tab that loads content from a custom M2M application

Customizing the Look and Feel of the Axeda Applications User Interface
You can add style changes into a user.css file, which you then upload like any other custom application, via the Administration > Extensions tab, as a zip archive. Make sure to adhere to the expected directory structure and follow the naming convention for the zip archive:
Images – store image files in a directory called <userName>/images
Styles – store user.css and any style sheet(s) that it imports in a directory called <userName>/styles
Documentation – store documentation files in a directory called <userName>/doc
The naming convention is to name the archive after the username of the user who should see the changes, i.e. if jsmith is the username, then jsmith.zip is the archive name. For step-by-step instructions for customizing the UI, Axeda customers may refer to http://<<yourdomain>>.axeda.com/help/en/stylesheets_for_user_branding.htm and http://<<yourdomain>>.axeda.com/help/en/upload_user_branding.htm .

Extended UI Modules
Extended UI Modules can be added to the Asset Dashboard to provide custom content alongside the default modules. The modules can contain the output of a custom object or a custom application, all within the context of the particular asset being viewed.

Create the Extended UI Content
Option 1: an Extended UI type Custom Object
Navigate to Configuration > New > Custom Object. This Custom Object should output HTML with any Javascript and/or CSS styling embedded inline. Parameters may be defined here and made available to the script as "parameters.label".

Example:

def iframehtml = """<html>
  <head>
    <script type='text/javascript' src='https://www.google.com/jsapi'></script>
    <script type='text/javascript'>
      google.load('visualization', '1', {packages:['gauge']});
      google.setOnLoadCallback(drawChart);
      function drawChart() {
        var data = new google.visualization.DataTable();
        data.addColumn('string', 'Label');
        data.addColumn('number', 'Value');
        data.addRows([
          ['$parameters.label', $parameters.value]
        ]);
        var options = {
          redFrom: 90, redTo: 100,
          yellowFrom: 75, yellowTo: 90,
          minorTicks: 5
        };
        var chart = new google.visualization.Gauge(document.getElementById('chart_div'));
        chart.draw(data, options);
      }
    </script>
  </head>
  <body style="background: white;">
    <div id='chart_div'></div>
  </body>
</html>
"""

return ['Content-Type': 'text/html', 'Content': iframehtml.toString()]

Option 2: a Custom Application
Create a zip file that contains an index html file at the root of the directory, plus any stylesheets, scripts, and images you prefer, and upload the zip as a Custom Application (see the example zip file included at the end of this article).
Navigate to Administration > Extensions. Enter the information for the zip file and upload.

Create the Extended UI Object
Option 1: Using the Axeda Applications Console
Navigate to Configuration > New > Extended UI Module. Note that the parameters are entered in URI query format, i.e. key=value pairs such as mykey=myvalue&otherkey=othervalue. If Content Source is set to Custom Application rather than Custom Object, the Custom Applications will become available as the Extended UI Module content.

Option 2: Use Axeda Artisan
Check out Developing with Axeda Artisan in order to make use of this method. Add the Extended UI Module to the apc-metadata.xml and it will be created for you automatically on Maven upload. Note that Artisan does not support Model Preferences, so you will still have to add the module through the Axeda UI as described below.

<extendedUIModule>
    <!-- you can create the module here, but you still have to use the Axeda Console to apply it to the model where the module should show up -->
    <title>extendedUI_name</title>
    <height>180</height>
    <source>
        <type>CUSTOM_APPLICATION</type>
        <name>customapp_name</name>
    </source>
</extendedUIModule>

Add the Extended UI Module to the Model Preferences
Navigate to Configuration > View > Model and click Preferences under UI Configuration next to the model that should display the Extended UI Module for its assets. Click Asset Dashboard Layout. Select the Extended UI Module from the left and click the arrow to add it to the desired column; the asterisks indicate Extended UI Modules, as opposed to default modules. Click Submit and navigate to an Asset Dashboard to see the module displayed. Now you have an Extended UI Module with your custom content.

Custom Tabs
Upload a custom application as a custom tab, and there you have it. For Artisan developers, to enable a custom application as a custom tab, insert the following into the apc-metadata.xml:

<application>
    <description>string</description>
    <applicationId>string</applicationId>
    <indexFile>string</indexFile>
    <zipFile>relative path from apc-metadata.xml to the zip file containing the application files</zipFile>
    <customTab>
        <tabPrivilegeName>the privilege name required for the tab to be shown</tabPrivilegeName>
        <afterTab>the name of the tab after which to place this tab</afterTab>
        <showFooter>[true|false]</showFooter>
        <tabNames>
            <label>
                <locale>the i18n locale (for example en_US or ja_JP)</locale>
                <name>the name to be displayed for the locale</name>
            </label>
        </tabNames>
    </customTab>
</application>

Authentication within Extended UI Components
When working with Custom Applications in custom tabs or modules, the user session ID is made available through a special variable that you can access from the landing page (such as index.html) only:

%%SESSIONID_TOKEN%%

This variable is substituted directly for the session ID, which makes the authentication for viewing the Extended UI component appear seamless to the end user. In order to make this ID available for AJAX calls, the index.html file should store the session ID as it is initializing. Additionally, index.html should instruct the browser not to cache the page, or the session ID may mistakenly be used to authenticate after it expires.
In index.html:

<html>
    <head>
        <title>My Custom App</title>
        <META HTTP-EQUIV="CACHE-CONTROL" CONTENT="NO-CACHE">
        <link media="screen" href="styles/axeda.css" rel="stylesheet" type="text/css"/>
        <script src="scripts/jquery-1.9.0.min.js" type="text/javascript"></script>
        <script type="text/javascript">
            $(window).load(function () {
                App.init(encodeURIComponent("%%SESSIONID_TOKEN%%"));
            })
        </script>
    </head>

In App.js:

App.init = function (sessionID) {
    // put initial processing here
    storeSessionId( sessionID )
    App.callScriptoWithStoredSessionID()
}

(A hedged sketch of App.callScriptoWithStoredSessionID appears at the end of this tip.)

That's it! You can now customize the look and feel of the Axeda Applications Console, as well as add an Extended UI Module and a Custom Tab.

Further Reading
Developing with Axeda Artisan
Axeda Sample Application: Populating A Web Page with Data Items

Common Questions
I want to display my custom app on a custom tab. How should I manage authentication within my custom tab app?
Answer: Use Javascript to store the session ID injected as a variable into the index.html page, then use that to authenticate Scripto calls to the Axeda Platform.

Are there example programs to get started?
Answer: There are several examples of Artisan projects to get started:
Axeda Sample Application: Populating A Web Page with Data Items
An Axeda instance - https://<customerInstance>.axeda.com/artisan
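Following on from App.js above, here is a hedged sketch of how the stored session ID might be used in an AJAX call to Scripto. The /services/v1/rest/Scripto/execute/... URL pattern, the custom object name MyCustomObject, and the getStoredSessionId helper are assumptions - verify the Scripto endpoint against the documentation for your instance:

App.callScriptoWithStoredSessionID = function () {
    // URL pattern is an assumption; confirm against your instance's Scripto documentation
    var url = "/services/v1/rest/Scripto/execute/MyCustomObject";   // MyCustomObject is hypothetical
    $.ajax({
        url: url,
        type: "GET",
        data: { sessionid: getStoredSessionId() },   // counterpart of storeSessionId() in App.init
        success: function (result) {
            // Render the custom object's output inside the custom tab or module
            $("#content").html(result);
        },
        error: function () {
            alert("Scripto call failed - the session may have expired.");
        }
    });
};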
Scripto Editor is an enhanced Groovy script editor that allows the developer to compile and test uploaded Groovy scripts on the fly. Please note that Scripto Editor is not a replacement for an IDE and should be used mainly for debugging Groovy scripts.

Installation:
Download the Javascript Scripto Editor archive attached to this post.
Install the archive as a custom app:
- Log into the Axeda Platform
- Navigate to Administration > More Links > Extended Applications
- Click Browse and select the file downloaded in step 1
- Set the URL as "ScriptoEditor"
- Set the Default Index as ScriptoEditor.html
- Set Display Mode as Standalone
- Optionally enter a Description, such as "Scripto Editor for Groovy Objects"
- Click Upload
Open Scripto Editor by navigating to https://yourServicelink.axeda.com/apps/ScriptoEditor/ScriptoEditor.html
Log in using your Axeda Platform credentials.
Double click any previously uploaded Groovy script in the list to open it.
Add or edit parameters in the Properties sidebar.
Test the script by clicking the Test tab in the sidebar and clicking "Run Test"; results will appear in the console at the bottom of the screen.
Save the Groovy script by clicking "Save".

Note: if the session expires before you have finished editing, the application will alert you with an "Http Request Error" pop up. You will be unable to save your changes - at this point it is recommended to open a new tab and copy your changes back into Scripto Editor. For this reason, Scripto Editor is not a replacement for an IDE and should be used sparingly for on-the-fly debugging. Additionally, any changes made in Scripto Editor will need to be manually copied back into the local development source code.

WARNING: Scripto Editor has a 1000 line code limit. If your custom objects are longer than this, Scripto Editor will truncate them when saving!
Introduction
The Edge MicroServer (EMS) and Lua Script Resource (LSR) are Edge software that can be used to connect remote devices to the ThingWorx platform. Using a Gateway is beneficial because this allows you to run one instance of the EMS on a server and then many instances of the LSR on different devices all over the world; all communication to the platform will be handled by this one EMS Gateway server. The EMS Gateway can be set up in two different types of scenarios: Self-Identifying Remote Things and Explicitly Defined Remote Things. The scenario discussed below involves explicitly defined Remote Things, a ThingWorx server, an EMS, and an LSR. We will need at least one server to run the ThingWorx platform and EMS, but these can always be on separate servers as well. We will also need some other machine or device that will run the LSR. Visit the support downloads page to find the latest EMS releases; the LSR is contained within the EMS download. You can also navigate to the Edge Support site to read more about the EMS and LSR if this is the first time you have ever configured one. The "ThingWorx WebSocket-based Edge MicroServer Developer's Guide" is also provided inside of the zip file that contains the EMS for further information.

Setting up the EMS
Once we have obtained the EMS download from the support site (see the section above for links) we can begin creating our config.json file. The original post showed a working config.json file for using the EMS as a Gateway; the settings in it are particular to specific IP addresses and an Application Key, but the concept remains the same (a hedged sketch of such a file appears after this section). The necessary sections are described below.

ws_servers
The host and port parameters are always set to the IP address and port that the ThingWorx platform is being hosted on. When the EMS and ThingWorx platform are on the same server, "localhost" can be used instead of an IP address.

appKey
The appKey section is the value of an Application Key in the ThingWorx platform that should be used for the authentication of the EMS to the platform. An Application Key will need to be created and assigned a user with proper privileges prior to authenticating.

certificates
The certificates section should be validating and pointing to proper certificates, but in the example described here no certificates are validated, for the sake of simplicity. More can be read about the certificates sections here.

logger
The logging section is out of scope of this article, but further reading on logger configurations can be found here. The defaults will work for basic logging needs.

http_server
The http_server section configuration parameters tell the EMS what host and port to spin up a server on, and whether authentication is required for any LSRs trying to connect. The LSR has settings that explicitly call out whatever values are set for the host and port in this section, so make sure to set these to an open port that is not in use or blocked behind a firewall. Further reading on the http_server section can be found here.

auto_bind
Two objects are defined in the auto_bind section.
One of these objects binds the EMS to an EMSGateway Thing in the platform called "EdgeGateway", and the other is defined in the config.lua file for the LSR. The gateway parameter is set to true only in the object, "EdgeGateway", that the EMS binds to. The host and port defined for the "OtherEdgeThing" should point to the port and IP address that the LSR is running on in the other device. By default, the LSR runs on port 8001, but you can always double check the listening port by finding the Process Identification (PID) number of luaScriptResource.exe and then matching the PID to the corresponding line item in the output of the netstat -ao command in a console window. The protocol can be set to "http" in an example application, but make sure to use "https" when security is of concern. All further reading on the sections of the config.json file can be found in the config.json.complete file included with the EMS download, and on the Edge Help Center under the "Creating a Configuration File" and "Viewing All Options" sections.

Setting up the LSR
In this example, the LSR is going to run on a separate server and point to the EMS server. Two very important additions (rap_host and rap_port) must be made to the default config.lua file (also shown in the sketch after this section):

rap_host
The rap_host field should be set to the IP address where the EMS is hosted.

rap_port
The rap_port field should be set to the port parameter defined in the config.json http_server section.

script_resource_host
The script_resource_host field must be set to ensure that the EMS will know what IP address to communicate with the LSR at.

scripts.OtherEdgeThing
This line is necessary to identify the name of the LSR that will register with the EMS to bind to the platform. "OtherEdgeThing" can be changed to anything, but make sure that the auto_bind section in the config.json aligns with what you've defined in the config.lua file at this line.

Running the EMS and LSR
Now that we have configured the LSR and EMS to point to each other and the platform, we can try running both of these applications to make sure we are successful:
Make sure the ThingWorx platform is running.
Create a RemoteThing with the name given in the auto_bind section for the LSR we are connecting.
Create an EMSGateway with the name given in the auto_bind section for the EMS as a Gateway to bind to.
Start the EMS. This can be done by double clicking wsems.exe in Windows, running it as a service, or running it directly from the command line.
Start the LSR. This can be done by double clicking luaScriptResource.exe in Windows, running it as a service, or running it directly from the command line.
Navigate to the ThingWorx platform and make sure that the Things you have created are connected. Do this by navigating to the Properties menu option and refreshing the isConnected property.
You should be able to browse remote properties and services for each bound RemoteThing, and this means you have successfully set up the EMS as a Gateway device to external LSR applications running on remote devices.

Any further questions about browsing remote properties or other configuration settings in the config files are most likely addressed in the Edge Help Center under the EMS section, and if not, feel free to comment directly on this document.
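Since the original screenshots are not reproduced here, the following is a hedged sketch of what the two configuration files could look like for this Gateway scenario. The IP addresses, Application Key, and Thing names are placeholders, and the exact set and spelling of keys should be confirmed against the config.json.complete file shipped with the EMS and the Edge Help Center:

config.json (on the EMS Gateway server):

{
  "ws_servers": [ { "host": "10.0.0.10", "port": 443 } ],
  "appKey": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
  "http_server": { "host": "10.0.0.20", "port": 8080, "authenticate": false },
  "auto_bind": [
    { "name": "EdgeGateway", "gateway": true },
    { "name": "OtherEdgeThing", "host": "10.0.0.30", "port": 8001, "protocol": "http" }
  ]
}

config.lua (on the remote device running the LSR):

rap_host = "10.0.0.20"                -- IP address of the EMS Gateway server
rap_port = 8080                       -- must match the http_server port in config.json
script_resource_host = "10.0.0.30"    -- IP address the LSR itself is reachable at
scripts.OtherEdgeThing = { file = "OtherEdgeThing.lua" }   -- name must match the auto_bind entry in config.json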
This code snippet finds an uploaded file associated with an asset and emails it to a destination email address. It uses a data accumulator to create a temporary file.

import org.apache.commons.codec.binary.Base64;
import java.util.Date;
import java.util.Properties;
import java.io.StringWriter
import java.io.PrintWriter
import com.axeda.drm.sdk.Context
import com.axeda.drm.sdk.data.*
import com.axeda.drm.sdk.device.*
import groovy.json.JsonSlurper
import javax.activation.DataHandler;
import javax.activation.FileDataSource;
import org.apache.axiom.attachments.ByteArrayDataSource;
import com.axeda.platform.sdk.v1.services.ServiceFactory;
import com.thoughtworks.xstream.XStream;
import javax.mail.Authenticator;
import javax.mail.Message;
import javax.mail.MessagingException;
import javax.mail.Multipart;
import javax.mail.PasswordAuthentication;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.AddressException;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeBodyPart;
import javax.mail.internet.MimeMessage;
import javax.mail.internet.MimeMultipart;

try {
    Context ctx = Context.create(parameters.username)
    DeviceFinder dfinder = new DeviceFinder(ctx)
    def bytes
    dfinder.setSerialNumber(parameters.serial_number)
    Device d = dfinder.find()
    UploadedFileFinder uff = new UploadedFileFinder(ctx)
    uff.device = d
    def ufiles = uff.findAll()
    UploadedFile ufile
    if (ufiles.size() > 0) {
        ufile = ufiles[0]
        File f = ufile.extractFile()
        def slurper = new JsonSlurper()
        def objects = slurper.parseText(f.getText())
        def bugreport = objects.objects[0].mobj_update[0].bugreport
        String from = "demo@axeda.com";
        String to = "destination@axeda.com";
        String subject = "My file";
        String mailContent = "Attaching test";
        String filename = "payload.tar.gz";
        def dataStoreIdentifier = "FILE-IO-SUB-testing"
        def daSvc = new ServiceFactory().dataAccumulatorService
        if (daSvc.doesAccumulationExist(dataStoreIdentifier, d.id.value)) {
            daSvc.deleteAccumulation(dataStoreIdentifier, d.id.value)
        }
        daSvc.writeChunk(dataStoreIdentifier, d.id.value, bugreport);
        InputStream is = daSvc.streamAccumulation(dataStoreIdentifier, d.id.value)
        Base64 base64 = new Base64()
        ByteArrayDataSource rawData = new ByteArrayDataSource(base64.decodeBase64(is.getBytes()));

        // You need to create a properties object to store mail server
        // smtp information such as the host name and the port number.
        // With these properties we create a Session object from
        // which we'll create the Message object.
        Properties properties = new Properties();
        properties.put("mail.smtp.host", "mail01.bo2.axeda.com");
        properties.put("mail.smtp.port", "25");
        properties.put("mail.smtp.auth", "true");
        Authenticator authenticator = new CustomAuthenticator();
        Session session = Session.getInstance(properties, authenticator);
        MimeMessage message = new MimeMessage(session);
        message.setFrom(new InternetAddress(from));
        message.setRecipient(Message.RecipientType.TO, new InternetAddress(to));
        message.setSubject(subject);
        message.setSentDate(new Date());

        // Set the email message text.
        MimeBodyPart messagePart = new MimeBodyPart();
        messagePart.setText(mailContent);

        // Set the email attachment file
        MimeBodyPart attachmentPart = new MimeBodyPart();
        //      FileDataSource fileDataSource = new FileDataSource(file)
        attachmentPart.setDataHandler(new DataHandler(rawData))  //fileDataSource));
        attachmentPart.setFileName(filename);

        Multipart multipart = new MimeMultipart();
        multipart.addBodyPart(messagePart);
        multipart.addBodyPart(attachmentPart);

        // Set the content
        message.setContent(multipart);

        // Send the message with attachment
        Transport.send(message);
    }
} catch (Exception e) {
    logger.info(e.message)
    StringWriter logStringWriter = new StringWriter();
    PrintWriter logPrintWriter = new PrintWriter(logStringWriter)
    e.printStackTrace(logPrintWriter)
    logger.info(logStringWriter.toString())
}

// This class is the implementation of the Authenticator,
// where you need to implement getPasswordAuthentication
// to provide the username and password
public class CustomAuthenticator extends Authenticator {
    protected PasswordAuthentication getPasswordAuthentication() {
        String username = "";
        String password = "";
        return new PasswordAuthentication(username, password);
    }
}

static byte[] getBytes(File file) throws IOException {
    return getBytes(new FileInputStream(file));
}

static byte[] getBytes(InputStream is) throws IOException {
    ByteArrayOutputStream answer = new ByteArrayOutputStream();
    // reading the content of the file within a byte buffer
    byte[] byteBuffer = new byte[8192];
    int nbByteRead /* = 0*/;
    try {
        while ((nbByteRead = is.read(byteBuffer)) != -1) {
            // appends buffer
            answer.write(byteBuffer, 0, nbByteRead);
        }
    } finally {
        is.close()
    }
    return answer.toByteArray();
}
This code snippet creates and then deletes a data item to illustrate the CRUD technique. Parameter: model_name

import com.axeda.drm.sdk.Context
import com.axeda.drm.sdk.device.ModelFinder
import com.axeda.drm.sdk.device.Model
import com.axeda.drm.sdk.device.DeviceFinder
import com.axeda.drm.sdk.data.CurrentDataFinder
import com.axeda.drm.sdk.device.Device
import com.axeda.drm.sdk.data.HistoricalDataFinder
import groovy.xml.MarkupBuilder
import com.axeda.drm.sdk.device.DataItem
import com.axeda.drm.services.device.DataItemType

/*
 * DeleteDataItem.groovy
 *
 * Delete a data item.
 *
 * @param model_name        -   (REQ):Str name of the model.
 *
 * @author Sara Streeter <sstreeter@axeda.com>
 */

def response = [:]
def writer = new StringWriter()
def xml = new MarkupBuilder(writer)

try {
    // getUserContext is supported as of release 6.1.5 and higher
    final def CONTEXT = Context.getUserContext()

    // find the model
    def modelFinder = new ModelFinder(CONTEXT)
    modelFinder.setName(parameters.model_name)
    Model model = modelFinder.findOne()

    // throw exception if no model found
    if (!model) {
        throw new Exception("No model found for ${parameters.model_name}.")
    }

    // Add a dummy data item
    DataItem dataitem = new DataItem(CONTEXT, model, DataItemType.STRING, "MyDataItem");
    dataitem.store();

    // find the data items on the model and delete the one just created
    model.dataItems.each {
        logger.info(it.name)
        if (it.name == "MyDataItem") {
            it.delete()
        }
    }
} catch (def ex) {
    xml.Response() {
        Fault {
            Code('Groovy Exception')
            Message(ex.getMessage())
            StringWriter sw = new StringWriter();
            PrintWriter pw = new PrintWriter(sw);
            ex.printStackTrace(pw);
            Detail(sw.toString())
        }
    }
}

return ['Content-Type': 'text/xml', 'Content': writer.toString()]
This document is designed to help troubleshoot some commonly seen issues while installing or upgrading the ThingWorx application, prior to or instead of contacting Tech Support. This is not a defined template for a guaranteed solution, but rather a reference guide that provides an opportunity to eliminate some of the possible root causes. While following the installation guide and matching the system requirements is sufficient to get a successfully running instance of ThingWorx, some issues could still occur upon launching the app for the first time. Generally, those issues arise from minor environmental details and can be easily fixed by aligning with the proper installation process. Currently, the majority of installation hiccups come from the PostgreSQL side. That being said, the very first thing to note, whether it's a new user trying out the platform or a returning one switching the database to PostgreSQL: the PostgreSQL database must be installed, configured, and running prior to the further ThingWorx installation.

ThingWorx 7.0+: Installation errors out with 'failed to succeed more than the maximum number of allowed acquisition attempts'
Platform being shut down because System Ownership cannot be acquired error
ERROR: relation "system_version" does not exist
Resolution:
Generally, this type of error points at a security/permission issue. As all of the installation operations should be performed by a root/Administrator role, the following points should be verified:
Ensure both the Tomcat and ThingworxPlatform folders have the relevant read/write permissions.
The title and contents of the configuration file in the ThingworxPlatform folder have changed from 6.x to 7.x; check that the right configuration file is in the folder.
Verify that the name and password provided in this configuration file match the ones set in the Postgres DB.
Run the database cleanup script, and then set up the database again. Verify by checking the thingworx table space (about 53 tables should be created).

ThingWorx Application: blank screen, no errors in the logs, "waiting for <url>" gears running but never actually loading, eventually times out
Resolution:
Ensure that Java in Tomcat is pointing to the right path, which should be something like this: C:\Program Files\Java\jre1.8.0_101\bin\server\jvm.dll

6.5+ Postgres: Error when executing thingworxpostgresDBSetup.bat
psql:./thingworx-database-setup.sql:1: ERROR: could not set permissions on directory "D:/ThingworxPostgresqlStorage": Permission denied
Resolution:
The error means that the postgres user was not able to create a directory in the 'ThingworxPostgresStorage' directory. As it is related to security/permissions, the following steps can be taken to clear the error:
Assign read/write permissions to the Everyone user group to fix the script execution, and then execute the batch file: right-click the 'ThingworxPostgresqlStorage' directory -> Share with -> Specific people; select the drop-down, add the Everyone group and update the permission level to Read/Write; click Share.
Execute the batch file as admin.

Installation error message "relation root_entity_collection does not exist" is displayed with the PostgreSQL version of the ThingWorx platform
Resolution:
Such an error message is displayed only if the schema parameter passed to the thingworxPostgresSchemaSetup.sh script is different than $USER or PUBLIC. To clear the error:
Edit the PostgreSQL configuration file, postgresql.conf, to add your own schema to the SEARCH_PATH item.
Other common errors upon launching the application: two of the most commonly seen errors are 404 and 401. While there can be numerous reasons to see those errors, here are the root causes that fall under the "very likely" category:

404 Application not found during a new install:
- Ensure Thingworx.war was deployed -- check the Tomcat/webapps directory on the hard drive and ensure Thingworx.war and the Thingworx folder are present, as well as ThingworxStorage in the root (or custom selected) location.
- Ensure the Thingworx.war is not corrupted (you may re-download it from the support site and compare the file size).

401 Application cannot be accessed during a new install or upgrade:
- For PostgreSQL, ensure the database is running and can be connected to; also see the basic troubleshooting points below.
- Verify that the Tomcat, Java, and database (in the case of PostgreSQL) versions match the system requirements guide for the appropriate platform version.
- Ensure the upgrade was performed according to the guide and the necessary folders were removed (after copying them as a preventative measure).
- Ensure the correct port is specified in platform-settings.json (for PostgreSQL); by default the connection string is jdbc:postgresql://localhost:5432/thingworx

Again, it should be kept in mind that while the symptoms are common and can generally be resolved with the same solution, every system environment is unique and may require an individual approach to reach a guaranteed resolution.

Basic troubleshooting points for: validating the PostgreSQL installation, Postgres install troubleshooting, the java.lang.NullPointerException error during PostgreSQL installation, ***CRITICAL ERROR: permission denied for relation root_entity_collection, the error while running scripts: Could not set permissions on directory "/ThingworxPostgresqlStorage": Permission Denied, and the Acquisition Attempt Failed error.
Resolution:
- Ensure the 'ThingworxStorage', 'ThingworxPlatform' and 'ThingworxPostgresqlStorage' folders are created. The folders have to be present in the root directory unless specifically changed in any configurations.
- It is recommended to grant sufficient privileges (if not all) to the database user (twadmin). Note: while running the script to create the database, if a schema name other than 'public' is used, the "search_path" in "postgresql.conf" must be changed to reflect 'NewSchemaName, public'.
- Grant the user permission to access the root folders containing 'ThingworxPostgresqlStorage' and 'ThingworxPlatform'.
- The password set for the default 'twadmin' user in the pgAdmin III tool must match the password set in the configuration file under the ThingworxPlatform folder (see the sketch below).
- Ensure the THINGWORX_PLATFORM_SETTINGS variable is set up.

Error: psql:./thingworx-database-setup.sql:14: ERROR: could not create directory "pg_tblspc/16419/PG_9.4_201409291/16420": No such file or directory, followed by psql:./thingworx-database-setup.sql:16: ERROR: database "thingworx" does not exist
Resolution: Replace /ThingworxPostgresqlStorage in the .bat file with C:\ThingworxPostgresqlStorage and omit the -l option in the command window. Also see the related article: Troubleshooting Syntax Error when running PostgreSQL setup scripts.
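To make the connection-related points above concrete, here is a minimal sketch of the PostgreSQL connection block in platform-settings.json. It is illustrative only: the exact property names and surrounding structure vary between ThingWorx versions, so always confirm against the installation guide for your release, and the username, password, and URL shown are placeholders.

{
  "PersistenceProviderPackageConfigs": {
    "PostgresPersistenceProviderPackage": {
      "ConnectionInformation": {
        "jdbcUrl": "jdbc:postgresql://localhost:5432/thingworx",
        "username": "twadmin",
        "password": "<must match the password set for twadmin in PostgreSQL>"
      }
    }
  }
}

If the password here does not match the one set for twadmin in the database, or the port in the jdbcUrl does not match the port PostgreSQL is actually listening on, the 401 and acquisition errors described above are a common result.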
The past few years have seen a tremendous explosion in the number of devices with Internet connectivity, and the Axeda product has continued to evolve to meet the requirements of these devices, which are often memory and CPU constrained. Beginning with the Axeda Platform 6.6 release, the Axeda Adaptive Machine Messaging Protocol (AMMP) has been available, allowing any device that can make an HTTP GET/POST request to send data to the Axeda Machine Cloud. AMMP uses standard JSON messages sent inside HTTP requests.

How to get AMMP
AMMP is a separately available product that can be added to an Axeda installation. Customers should discuss with their Account Managers to get access to the Axeda AMMP product offering. AMMP is a device codec that requires the installation of the Axeda AnyDevice Codec Server (ACS) component. This configuration presents a new endpoint for customer installations - an https://example.axeda.com instance will have an ACS instance available at https://example-connect.axeda.com. Self-hosted customers can create their own hosting/URL infrastructure, but this is the default available to Axeda On-Demand Center customers.

How to set up assets to talk AMMP
Before you can start deploying devices to talk AMMP to an Axeda instance, a Model Communication Profile must be created for any Model that is expected to communicate with the AMMP protocol. This is required in order to configure the proper egress mechanism so that the device can get access to messages waiting for it on the Axeda Platform. Attached to this document is a file called AMMP_CreateCommProfile.groovy.zip. It contains a Groovy-based script that is copied into the Platform as a Custom Object and then executed via Scripto; it needs to be run for each device Model that requires access via AMMP.

First Requests
Since all requests are via HTTP, it is useful to acquire a client that can perform REST API requests. One highly recommended client is Postman (available in Chrome and standalone versions). The first time a device connects to the Axeda Machine Cloud it should send a registration message. A registration message must contain a model and serial number and can optionally include a ping rate. The ping rate is how often the server should expect to hear from the device; the server will mark a device as offline if it does not get a message before a configurable number of ping times pass. A device should also send a registration message whenever it powers up.

Using Postman, select POST from the drop-down next to the URL. Enter the URL as shown below to register a new device (customers should use their own instances in the following examples):
https://example-connect.axeda.com/ammp/assets/1
Click Headers to add a header named 'Content-Type' with a value of 'application/json'. Then click the Body button and enter the line below. The "mn" field should be an already registered and configured model.
{  "id": { "mn" : "ExampleModel", "sn" : "ExampleDevice-001", "tn" : 0  },  "pingRate": 60  }
Click the Send button - an HTTP 200 OK response is the expected result. Logging into the platform and searching for 'ExampleDevice-001' should show that it is registered.

Now that a device has been registered, it is possible to send live information to the Axeda Machine Cloud. Sending information is also done with a POST to a URL. Instead of the asset's resource, we will be sending to the data resource.
Using Postman, the HTTP POST request will be structured as follows:
https://example-connect.axeda.com/ammp/data/1/ExampleModel!ExampleDevice-001
Change the body as follows:
{"alarms":[{"name":"over_temp","description":"freezer hot" }]}
Click the 'Send' button; after the response returns, the alarm can be seen on the Asset Status page in the Axeda Machine Cloud.

Four different types of information can be sent to the data resource, and any or all can be included in one POST message. All four types can have an optional time and priority field. If no time is specified, the time on the server at arrival will be used.

Alarm fields:
- name (String, required): Identifies the alarm. Valid string characters; length 0 <= N <= ?.
- description (String, optional): Describes the alarm. Length 0 <= N <= ?; default: none.
- severity (Integer, optional): Describes the severity. Range 0 <= N <= 1000; default: 0.
- cause (String, optional): Identifies the cause. Length 0 <= N <= ?; default: none.
- reason (String, optional): Describes the cause. Length 0 <= N <= ?; default: none.
- time (Integer epoch timestamp or ISO-8601 string, optional): Time when the alarm occurred. Default: none; the server time is used if omitted.
- priority (Integer, optional): How much priority the server should give to processing. Range 1 <= N <= 100; default: 1.

Alarm example JSON:
{
    "alarms": [
        {
            "name": "RadiationLeak",
            "description": "A radiation leak has been detected",
            "severity": 1000,
            "cause": "CoolantPipeBurst",
            "reason": "The main coolant pipe exploded",
            "time": 1364443200000,
            "priority": 100
        }
    ]
}

Once alarms reach the Axeda Machine Cloud, they will be in the "Started" state. Once an alarm is received, it can be "Acknowledged", "Escalated", or "Closed".

Event fields:
- name (String, required): Identifies the event. Valid string characters; length 0 <= N <= ?.
- description (String, optional): Describes the event. Length 0 <= N <= ?; default: none.
- time (Integer epoch timestamp or ISO-8601 string, optional): Time when the event occurred. Default: none; the server time is used if omitted.
- priority (Integer, optional): How much priority the server should give to processing. Range 1 <= N <= 100; default: 1.

Event example JSON:
{
    "events": [
        {
            "name": "RadiationLeak",
            "description": "A radiation leak has been detected",
            "time": 1364443200000,
            "priority": 100
        }
    ]
}

Mobile location fields:
- latitude (Float, required): Latitude. Range -90 <= N <= +90.
- longitude (Float, required): Longitude. Range -180 <= N <= +180.
- altitude (Float, optional): Altitude/elevation. Unbounded.
- time (Integer epoch timestamp or ISO-8601 string, optional): Time when the location was recorded. Default: none; the server time is used if omitted.
- priority (Integer, optional): How much priority the server should give to processing. Range 1 <= N <= 100; default: 1.

Location example JSON:
{
    "locations": [
        {
            "latitude": 42.034061,
            "longitude": -71.237472,
            "altitude": 0.0,
            "time": 1364443200000,
            "priority": 100
        }
    ]
}

Note: The platform records the history of all the mobile locations a device has reported. This has implications for the positions displayed in the Asset Status Map charting components.

Data item set fields:
- dataItems (Object<String, JSON type>, required): A collection of key/value pairs. Unbounded.
- time (Integer epoch timestamp or ISO-8601 string, optional): Time when the data items were sampled. Default: none; the server time is used if omitted.
- priority (Integer, optional): How much priority the server should give to processing. Range 1 <= N <= 100; default: 1.

Data item set example JSON:
{
    "data": [
        {
            "dataItems": {
                "CurrentSong": "Comfortably Numb",
                "PreviousSong": "Rain When I Die",
                "NextSong": "Whole Lotta Love",
                "FreeMemory": 1237.24,
                "DebugModeEnabled": true
            },
            "time": 1364443200000,
            "priority": 100
        },
        {
            "dataItems": {
                "bar": "camp",
                "pot1": 23.3
            },
            "time": 1364443234000,
            "priority": 100
        }
    ]
}

Data items are sent as sets that share a common recording time and priority. Data item values follow JSON representation standards and can be string, numeric, or Boolean. Below is an example message showing all four information types that can be sent:
{
    "alarms": [
        {
            "name": "RadiationLeak",
            "description": "A radiation leak has been detected",
            "time": 1364443200000
        }
    ],
    "events": [
        {
            "name": "Foo",
            "description": "A Foo occurred"
        }
    ],
    "data": [
        {
            "dataItems": {
                "bar": "camp",
                "pot1": 23.3
            },
            "time": 1364443200000
        }
    ],
    "locations": [
        {
            "latitude": 32.00,
            "longitude": -78.00
        }
    ]
}

Polling
Axeda AMMP is designed to provide a minimalist communication protocol between devices and the platform. As such, each request has egress items returned in the HTTP response body, no matter the type of data sent as the request body, so that extra requests to retrieve egress do not have to be made. But if a device has no updates to make, it is still able to make periodic polling requests to the Codec Server to get any egress items available to it. This is simply an HTTP POST with an empty request body to the example URL:
https://example-connect.axeda.com/ammp/assets/1/ExampleModel!ExampleDevice-001

Next steps and caveats
Keep in mind that egress data can be returned in ANY HTTP response body. If a device has a programming error, or a power failure occurs before the request is processed, then it is possible for that request to be lost - permanently. The Axeda Platform does not replay egress items once they have been delivered to the device. Additional logical facilities are available on the Axeda Platform to provide replay/retry communications to the device.

Bibliography
Using Axeda Scripto
Axeda AMMP Technical Reference (1.2.0 Dec 2014)
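For readers who prefer a scriptable client over Postman, the same requests shown in this walkthrough can also be issued with cURL. This is a hedged sketch only, reusing the example host, model, and serial number from above; substitute your own ACS endpoint and an already configured Model before trying it.

# Register the device (same body as the Postman registration example above)
curl -X POST 'https://example-connect.axeda.com/ammp/assets/1' \
  -H 'Content-Type: application/json' \
  -d '{ "id": { "mn": "ExampleModel", "sn": "ExampleDevice-001", "tn": 0 }, "pingRate": 60 }'

# Send an alarm to the data resource for the registered asset
curl -X POST 'https://example-connect.axeda.com/ammp/data/1/ExampleModel!ExampleDevice-001' \
  -H 'Content-Type: application/json' \
  -d '{ "alarms": [ { "name": "over_temp", "description": "freezer hot" } ] }'

# Poll for egress with an empty body
curl -X POST 'https://example-connect.axeda.com/ammp/assets/1/ExampleModel!ExampleDevice-001' \
  -H 'Content-Type: application/json' \
  -d ''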
The following script takes a parameter of a model name, a device serial number and a data item name, finds the asset location and uses that longitude to determine the current TimeZone.  It then converts the Timezone of the data item timestamp to an Eastern Standard Timezone timestamp. import groovy.xml.MarkupBuilder import com.axeda.drm.sdk.Context import java.util.TimeZone import com.axeda.drm.sdk.data.* import com.axeda.drm.sdk.device.* import com.axeda.common.sdk.jdbc.*; import net.sf.json.JSONObject import net.sf.json.JSONArray import com.axeda.drm.sdk.mobilelocation.MobileLocationFinder import com.axeda.drm.sdk.mobilelocation.MobileLocation import com.axeda.drm.sdk.mobilelocation.CurrentMobileLocationFinder def response try {     Context ctx = Context.getUserContext()     ModelFinder mfinder = new ModelFinder(ctx)     mfinder.setName(parameters.model_name)     Model m = mfinder.find()     DeviceFinder dfinder = new DeviceFinder(ctx)     dfinder.setModel(m);     dfinder.setSerialNumber(parameters.device)     Device d = dfinder.find()     CurrentMobileLocationFinder cmlFinder = new CurrentMobileLocationFinder(ctx);     cmlFinder.setDeviceId(d.id.getValue());     MobileLocation ml = cmlFinder.find();     def lng = -72.158203125     if (ml?.lng){         lng = ml?.lng     }     // set boundaries for timezones - longitudes     def est = setUSTimeZone(-157.95415000000003)     def tz = setUSTimeZone(lng)     CurrentDataFinder cdfinder = new CurrentDataFinder(ctx, d)     DataValue dvalue = cdfinder.find(parameters.data_item_name)     def adjtime = convertToNewTimeZone(dvalue.getTimestamp(),tz,est)     def results = JSONObject.fromObject(lat: ml?.lat, lng: ml?.lng, current: [name: dvalue.dataItem.name, time: adjtime.format("MM/dd/yyyy HH:mm"), value: dvalue.asString()]).toString(2)     response = results } catch (Exception e) {     response = [                 message: "Error: " + e.message             ]     response =  JSONObject.fromObject(response).toString(2) } return ['Content-Type': 'application/json', 'Cache-Control':'no-cache', 'Content': response] def setUSTimeZone(lng){     TimeZone tz     // set boundaries for US timezones by longitude     if (lng <= -67.1484375 && lng > -85.517578125){         tz = TimeZone.getTimeZone("EST");     }     else if (lng <= -85.517578125 && lng > -96.591796875){         tz = TimeZone.getTimeZone("CST");     }     else if (lng <= -96.591796875 && lng > -113.90625){         tz = TimeZone.getTimeZone("MST");     }     else if (lng <= -113.90625){         tz = TimeZone.getTimeZone("PST");     }     logger.info(tz)     return tz } public Date convertToNewTimeZone(Date date, TimeZone oldTimeZone, TimeZone newTimeZone){     long oldDateinMilliSeconds=date.time - oldTimeZone.rawOffset     // oldtimeZone.rawOffset returns the difference(in milliSeconds) of time in that timezone with the time in GMT     // date.time returns the milliseconds of the date     Date dateInGMT=new Date(oldDateinMilliSeconds)     long convertedDateInMilliSeconds = dateInGMT.time + newTimeZone.rawOffset     Date convertedDate = new Date(convertedDateInMilliSeconds)     return convertedDate }
The following script is a component of the Axeda Connected Configuration (CMDB) feature.  It is used to provide configuration data for controlling package deployments via Connected Content (SCM). ​ ConfigItem_CRU.groovy *Takes a POST request, not parameters import static com.axeda.sdk.v2.dsl.Bridges.* import com.axeda.drm.sdk.scripto.Request import com.axeda.services.v2.ConfigurationItem import com.axeda.services.v2.ConfigurationItemCriteria import com.axeda.services.v2.AssetConfiguration import com.axeda.services.v2.Asset import com.axeda.services.v2.ExecutionResult import groovy.json.JsonSlurper import net.sf.json.JSONObject import groovy.xml.MarkupBuilder /** * ConfigItem_CRU.groovy * ----------------------- * * Reads in json from an http post request and reads, adds, deletes or updates Configuration Items. * * * @note this parses a post and does not take any additional parameters. * * @author sara streeter <sstreeter@axeda.com> */ def contentType = "application/json" final def serviceName = "ConfigItem_CRU" def response = [:] def writer = new StringWriter() def xml = new MarkupBuilder(writer) try {     // BUSINESS LOGIC BEGIN     def assetId     def validationOnly     def validationResponse = ""     List<ConfigurationItem> configItemList     if (Request?.body != null && Request?.body !="") {         def slurper = new JsonSlurper()         def request = slurper.parseText(Request?.body)         assetId = request.result.assetId         validationOnly = request.result.validationOnly?.toBoolean()         if (request.result.items != null && request.result.items.size() > 0){             configItemList = request.result.items.inject([]) { target, item ->               if (item && item.path != "" && item.key != "" && item.path != null && item.key != null){                     ConfigurationItem configItem = new ConfigurationItem()                     configItem.path = item.path + item.key                     configItem.value = item.value                     target << configItem                 }                 target             }         }     }       if (assetId != null) {               def asset = assetBridge.find([assetId])[0]             AssetConfiguration config = assetConfigurationBridge.getAssetConfiguration(assetId, "")               def itemToDelete                        if (config == null) {                     createConfigXML(xml)                     AssetConfiguration configToCreate = assetConfigurationBridge.fromXml(writer.toString(), asset.id)                     ExecutionResult result = assetConfigurationBridge.create(configToCreate)                     AssetConfiguration config2 = assetConfigurationBridge.getAssetConfiguration(asset.id, "")                     config = config2                     itemToDelete = "/Item"                 }                 if (configItemList != null && configItemList?.size() > 0){                 List<ConfigurationItem> compareList = config.items                 def intersectingCompareItems = compareList.inject(["save": [], "delete": []]) { map, item ->                     // find whether to delete                     def foundItem = configItemList.findAll{ compare -> item?.path == compare?.path && item?.value == compare?.value  }                     map[foundItem.size() > 0 ? 
"save" : "delete"] << item                     map                 }               intersectingCompareItems.delete = intersectingCompareItems.delete.collect{it.path}               if (itemToDelete){                 intersectingCompareItems.delete.add(itemToDelete)               }                 def intersectingConfigItems = configItemList.inject(["old": [], "new": []]) { map, item ->                     // find whether it's old                     def foundItem = compareList.findAll{ compare -> item?.path == compare?.path && item?.value == compare?.value }                     map[foundItem.size() > 0 ? "old" : "new"] << item                     map                 }                 assetConfigurationBridge.deleteConfigurationItems(config, intersectingCompareItems.delete)                 assetConfigurationBridge.appendConfigurationItems(config, intersectingConfigItems.new)               def exResult = assetConfigurationBridge.validate(config)               if (exResult.successful){                     validationResponse = "success"                     if (!validationOnly){                         assetConfigurationBridge.update(config)                     }               }                 else {                     validationResponse = exResult.failures[0]?.details                 }             }             response = [                 assetId: assetId,                 items: config?.items?.collect { item ->                 def origpath = item.path                 def lastSlash = origpath.lastIndexOf("/")                 def key = origpath.substring(lastSlash + 1, origpath.length())                        def path = origpath.replace("/" + key, "")                 path += "/"                     [                         path: path,                         key: key,                         value: item.value                     ]                 },                 validationResponse: validationResponse             ]       }         else {             throw new Exception("Error: Asset Id must be provided.")         } } catch (Exception ex) {       logger.error ex   response = [           error:  [                   type: "Backend Application Error"                   , msg: ex.getLocalizedMessage()           ]   ] } return ['Content-Type': 'application/json', 'Content': JSONObject.fromObject(response).toString(2)] /** * Create the Success response. * * @param xml : The xml response.<br> * @param info : If this is set to "1" the info element will be included in the response.<br> * @param infos : Collection of information to include within the info element of the response.<br> */ private void createConfigXML(xml) {     xml.Item() }  
In this post, I show how you can downsample time-series data on the server side using the LTTB algorithm. The export comes with a service to set up sample data and a mashup which shows the data with weak to strong downsampling.

Motivation: Users displaying time-series data on mashups and dashboards (usually through a service using a QueryPropertyHistory flavor in the background) might request large amounts of data by selecting large date ranges to visualize, or because data is recorded in high resolution. The newer chart widgets in ThingWorx handle a higher number of data points much better. Some also provide their own downsampling so only the "necessary" points are drawn (e.g. there is no need to paint beyond the screen's resolution). See the discussion here. However, as this is done in the widgets, the data reduction happens on the client side, so data is sent over the network only to be discarded. It would be beneficial to reduce the number of points delivered to the client beforehand. This would also improve the behavior of older widgets which don't have support for downsampling. Many methods for downsampling are available. One option is partitioning the data and averaging out each partition, as described here. A disadvantage is that this creates and displays points which are not in the original data. The approach here uses Largest-Triangle-Three-Buckets (LTTB) for two reasons: the resulting data points exist in the original data set, and the algorithm preserves the shape of the original curve very well, i.e. outliers are displayed and not averaged out. It also seems computationally not too hard on the server.

Setting it up:
- Import entities from LTTB_Entities.xml.
- Navigate to thing LTTB.TestThing in project LTTB and run service downsampleSetup to set up some sample data.
- Open mashup LTTB.Sampling_MU: initially, there are 8000 rows sent back. The chart widget decides how many of them are displayed. You can see the row count in the debug info. Using the button bar, you determine to how many points the result will be downsampled and sent to the client. Notice how the curve gets rougher, but the shape is preserved.

How it works: The potentially large result of QueryPropertyHistory is downsampled by running it through LTTB. The resulting Infotable is sent to the widget (see service LTTB.TestThing.getData). The LTTB implementation itself is in service downsampleTimeseries. Debug mode allows you to see how much data is sent over the network, and how the number decreases proportionally with the downsampling. The export and the widget were done with TWX 9, but it's only the widget that really needs TWX 9. I guess the code would need some more error-checking for robustness, but it's a good starting point.
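To make the algorithm itself concrete without importing the export, below is a hedged, standalone sketch of LTTB. The actual downsampleTimeseries service in the export is a ThingWorx JavaScript service operating on an Infotable; this version is written in plain Java over parallel timestamp/value arrays purely to illustrate the bucketing and triangle-area selection.

import java.util.ArrayList;
import java.util.List;

public class Lttb {

    /**
     * Downsamples (x, y) points to roughly 'threshold' points using
     * Largest-Triangle-Three-Buckets. Returns the indices of the kept points,
     * so every returned point exists in the original series.
     */
    public static List<Integer> downsample(double[] x, double[] y, int threshold) {
        int n = x.length;
        List<Integer> kept = new ArrayList<>();
        if (threshold >= n || threshold < 3) {
            for (int i = 0; i < n; i++) kept.add(i);   // nothing to reduce
            return kept;
        }
        double every = (double) (n - 2) / (threshold - 2);
        int a = 0;                                      // index of the last selected point
        kept.add(0);                                    // always keep the first point
        for (int i = 0; i < threshold - 2; i++) {
            // the average of the *next* bucket acts as the third triangle corner
            int avgStart = (int) Math.floor((i + 1) * every) + 1;
            int avgEnd = Math.min((int) Math.floor((i + 2) * every) + 1, n);
            double avgX = 0, avgY = 0;
            for (int j = avgStart; j < avgEnd; j++) { avgX += x[j]; avgY += y[j]; }
            avgX /= (avgEnd - avgStart);
            avgY /= (avgEnd - avgStart);
            // pick the point in the current bucket forming the largest triangle
            int rangeStart = (int) Math.floor(i * every) + 1;
            int rangeEnd = (int) Math.floor((i + 1) * every) + 1;
            double maxArea = -1;
            int next = rangeStart;
            for (int j = rangeStart; j < rangeEnd; j++) {
                double area = Math.abs((x[a] - avgX) * (y[j] - y[a])
                        - (x[a] - x[j]) * (avgY - y[a])) * 0.5;
                if (area > maxArea) { maxArea = area; next = j; }
            }
            kept.add(next);
            a = next;
        }
        kept.add(n - 1);                                // always keep the last point
        return kept;
    }
}

In the ThingWorx service, the same loop would walk the rows of the Infotable returned by QueryPropertyHistory (timestamp as x, property value as y) and build a new Infotable from the kept rows.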
ThingWorx Dev Portal users, as you may have heard in prior communication, the ThingWorx Dev Portal is being retired. All ThingWorx Dev Portal content will remain accessible until September 28, 2022. Your favorite and most useful ThingWorx Dev Portal content will be copied into the PTC Community in our IoT Tips board. The Community Team is in the process of making changes to all our "Community Tips" boards. Subscribe and watch for an official announcement on our Community Announcements board about the change. The Community Tips board changes go into effect on September 1st. You can preview the ThingWorx Developer Portal content being migrated here. Please let us know if you have any questions.
Using the Solution Central API Pitfalls to Avoid by Victoria Firewind, IoT EDC   Introduction The Solution Central API provides a new process for publishing ThingWorx solutions that are developed or modified outside of the ThingWorx Platform. For those building extensions, using third party libraries, or who just are more comfortable developing in an IDE external to ThingWorx, the SC API makes it simple to still utilize Solution Central for all solution management and deployment needs, according to ThingWorx dev ops best practices. This article hones in one some pitfalls that may arise while setting up the infrastructure to use the SC API and assumes that there is already AD integration and an oauth token fetcher application configured for these requests.   CURL One of the easiest ways to interface with the SC API is via cURL. In this way, publishing solutions to Solution Central really involves a series of cURL requests which can be scripted and automated as part of a mature dev ops process. In previous posts, the process of acquiring an oauth token is demonstrated. This oauth token is good for a few moments, for any number of requests, so the easiest thing to do is to request a token once before each step of the process.   1. GET info about a solution (shown) or all solutions (by leaving off everything after "solutions" in the URL)     $RESULT=$(curl -s -o test.zip --location --request GET "https://<your_sc_url>/sc/api/solutions/org.ptc:somethingoriginal12345:1.0.0/files/SampleTwxExtension.zip" ` --header "Authorization: Bearer $ACCESS_TOKEN" ` --header 'Content-Type: application/json' ` )     Shown here in the URL, is the GAV ID (Group:Artifact_ID:Version). This is shown throughout the Swagger UI (found under Help within your Solution Central portal) as {ID}, and it includes the colons. To query for solutions, see the different parameter options available in the Swagger UI found under Help in the SC Portal (cURL syntax for providing such parameters is shown in the next example).   Potential Pitfall: if your solution is not published yet, then you can get the information about it, where it exists in the SC repo, and what files it contains, but none of the files will be downloadable until it is published. Any attempt to retrieve unpublished files will result in a 404.   2. Create a new solution using POST     $RESULT=$(curl -s --location --request POST "https://<your_sc_url>/sc/api/solutions" ` --header "Authorization: Bearer $ACCESS_TOKEN" ` --header 'Content-Type: application/json' ` -d '"{\"groupId\": \"org.ptc\", \"artifactId\": \"somethingelseoriginal12345\", \"version\": \"1.0.0\", \"displayName\": \"SampleExtProject\", \"packageType\": \"thingworx-extension\", \"packageMetadata\": {}, \"targetPlatform\": \"ThingWorx\", \"targetPlatformMinVersion\": \"9.3.1\", \"description\": \"\", \"createdBy\": \"vfirewind\"}"' )     It will depend on your Powershell or Bash settings whether or not the escape characters are needed for the double quotes, and exact syntax may vary. If you get a 201 response, this was successful.   Potential Pitfalls: the group ID and artifact ID syntax are very particular, and despite other sources, the artifact ID often cannot contain capital letters. The artifact ID has to be unique to previously published solutions, unless those solutions are first deleted in the SC portal. The created by field does not need to be a valid ThingWorx username, and most of the parameters given here are required fields.   3.  
PUT the files into the project     $RESULT=$(curl -L -v --location --request PUT "<your_sc_url>/sc/api/solutions/org.ptc:somethingelseoriginal12345:1.0.0/files" ` --header "Authorization: Bearer $ACCESS_TOKEN" ` --header 'Accept: application/json' ` --header 'x-sc-primary-file:true' ` --header 'Content-MD5:08a0e49172859144cb61c57f0d844c93' ` --header 'x-sc-filename:SampleTwxExtension.zip' ` -d "@SampleTwxExtension.zip" ) $RESULT=$(curl -L --location --request PUT "https://<your_sc_url>/sc/api/solutions/org.ptc:somethingelseoriginal12345:1.0.0/files" ` --header "Authorization: Bearer $ACCESS_TOKEN" ` --header 'Accept: application/json' ` --header 'Content-MD5:fa1269ea0d8c8723b5734305e48f7d46' ` --header 'x-sc-filename:SampleTwxExtension.sha' ` -d "@SampleTwxExtension.sha" )     This is really TWO requests, because both the archive of source files and its hash have to be sent to Solution Central for verifying authenticity. In addition to the hash file being sent separately, the MD5 checksum on both the source file archive and the hash has to be provided, as shown here with the header parameter "Content-MD5". This will be a unique hex string that represents the contents of the file, and it will be calculated by Azure as well to ensure the file contains what it should.   There are a few ways to calculate the MD5 checksums and the hash: scripts can be created which use built-in Windows tools like certutil to run a few commands and manually save the hash string to a file:      certutil -hashfile SampleTwxExtension.zip MD5 certutil -hashfile SampleTwxExtension.zip SHA256 # By some means, save this SHA value to a file named SampleTwxExtension.sha certutil -hashfile SampleTwxExtension.sha MD5       Another way is to use Java to generate the SHA file and calculate the MD5 values:      public class Main { private static String pathToProject = "C:\\Users\\vfirewind\\eclipse-workspace\\SampleTwxExtension\\build\\distributions"; private static String fileName = "SampleTwxExtension"; public static void main(String[] args) throws NoSuchAlgorithmException, FileNotFoundException { String zip_filename = pathToProject + "\\" + fileName + ".zip"; String sha_filename = pathToProject + "\\" + fileName + ".sha"; File zip_file = new File(zip_filename); FileInputStream zip_is = new FileInputStream(zip_file); try { // Calculate the MD5 of the zip file String md5_zip = DigestUtils.md5Hex(zip_is); System.out.println("------------------------------------"); System.out.println("Zip file MD5: " + md5_zip); System.out.println("------------------------------------"); } catch(IOException e) { System.out.println("[ERROR] Could not calculate MD5 on zip file named: " + zip_filename + "; " + e.getMessage()); e.printStackTrace(); } try { // Calculate the hash of the zip and write it to a file String sha = DigestUtils.sha256Hex(zip_is); File sha_output = new File(sha_filename); FileWriter fout = new FileWriter(sha_output); fout.write(sha); fout.close(); System.out.println("[INFO] SHA: " + sha + "; written to file: " + fileName + ".sha"); // Now calculate MD5 on the hash file FileInputStream sha_is = new FileInputStream(sha_output); String md5_sha = DigestUtils.md5Hex(sha_is); System.out.println("------------------------------------"); System.out.println("Zip file MD5: " + md5_sha); System.out.println("------------------------------------"); } catch (IOException e) { System.out.println("[ERROR] Could not calculate MD5 on file name: " + sha_filename + "; " + e.getMessage()); e.printStackTrace(); } }     This method requires the use 
of a third party library called the commons codec. Be sure to add this not just to the class path for the Java project, but if building as a part of a ThingWorx extension, then to the build.gradle file as well:     repositories { mavenCentral() } dependencies { compile fileTree(dir:'twx-lib', include:'*.jar') compile fileTree(dir:'lib', include:'*.jar') compile 'commons-codec:commons-codec:1.15' }       Potential Pitfalls: Solution Central will only accept MD5 values provided in hex, and not base64. The file paths are not shown here, as the archive file and associated hash file shown here were in the same folder as the cURL scripts. The @ syntax in Powershell is very particular, and refers to reading the contents of the file, in this case, or uploading it to SC (and not just the string value that is the name of the file). Every time the source files are rebuilt, the MD5 and SHA values need to be recalculated, which is why scripting this process is recommended.   4. Do another PUT request to publish the project      $RESULT=$(curl -L --location --request PUT "https://<your_sc_url>/sc/api/solutions/org.ptc:somethingelseoriginal12345:1.0.0/publish" ` --header "Authorization: Bearer $ACCESS_TOKEN" ` --header 'Accept: application/json' ` --header 'Content-Type: application/json' ` -d '"{\"publishedBy\": \"vfirewind\"}"' )     The published by parameter is necessary here, but it does not have to be a valid ThingWorx user for the request to work. If this request is successful, then the solution will show up as published in the SC Portal:    Other Pitfalls Remember that for this process to work, the extensions within the source file archive must contain certain identifiers. The group ID, artifact ID, and version have to be consistent across a couple of files in each extension: the metadata.xml file for the extension and the project.xml file which specifies which projects the extensions belong to within ThingWorx. If any of this information is incorrect, the final PUT to publish the solution will fail.   Example Metadata File:     <?xml version="1.0" encoding="UTF-8"?> <Entities> <ExtensionPackages> <ExtensionPackage artifactId="somethingoriginal12345" dependsOn="" description="" groupId="org.ptc" haCompatible="false" minimumThingWorxVersion="9.3.0" name="SampleTwxExtension" packageVersion="1.0.0" vendor=""> <JarResources> <FileResource description="" file="sampletwxextension.jar" type="JAR"></FileResource> </JarResources> </ExtensionPackage> </ExtensionPackages> <ThingPackages> <ThingPackage className="SampleTT" description="" name="SampleTTPackage"></ThingPackage> </ThingPackages> <ThingTemplates> <ThingTemplate aspect.isEditableExtensionObject="false" description="" name="SampleTT" thingPackage="SampleTTPackage"></ThingTemplate> </ThingTemplates> </Entities>       Example Projects XML File:     <?xml version="1.0" encoding="UTF-8"?> <Entities> <Projects> <Project artifactId="somethingoriginal12345" dependsOn="{&quot;extensions&quot;:&quot;&quot;,&quot;projects&quot;:&quot;&quot;}" description="" documentationContent="" groupId="org.ptc" homeMashup="" minPlatformVersion="" name="SampleExtProject" packageVersion="1.0.0" projectName="SampleExtProject" publishResult="" state="DRAFT" tags=""> </Project> </Projects> </Entities>       Another large issue that may come up is that requests often fail with a 500 error and without any message. There are often more details in the server logs, which can be reviewed internally by PTC if a support case is opened. 
Common causes of 500 errors include missing required parameter values, invalid characters in the parameter strings, and using an API URL which is not the correct endpoint for the type of request. Another large cause of 500 errors is providing MD5 or hash values that are not valid (a mismatch will show differently).

Another common error is the 400 error, which happens if any of the code that SC uses to parse the request breaks. A 400 error will also occur if the files are not being opened or uploaded correctly due to some issue with the @ syntax (mentioned above). Another common 400 error is a mismatch between the provided MD5 value for the zip or SHA file and the one calculated by Azure ("message: Md5Mismatch"), which can indicate that there has been some corruption in the content of the upload, or simply that the MD5 values aren't being calculated correctly. Note that cURL will often report a file as 100% uploaded even when it isn't complete; in that case errors appear in the console, or the size of the uploaded file is smaller than it should be for a complete upload.

Conclusion
Debugging with cURL can be a challenge. Note that adding "-v" to a cURL command provides additional information, such as the number of bytes in each request and a reprint of the parameters, so you can confirm they were read correctly. Even still, it isn't always possible for SC to indicate what the real cause of an issue is. There are many things that can go wrong in this process, but when it goes right, it goes very right. The SC API can be entirely scripted and automated, allowing for seamless inclusion of externally-developed tools into a mature dev ops process.
User Load Testing in ThingWorx Java Client Tutorial Written by Tori Firewind, IoT EDC   Introduction As stated in previous posts, user load testing is a critical component of ensuring a ThingWorx solution is Enterprise-ready. Even a sturdy new feature that seems to function well in development can run into issues once larger loads are thrown into the mix. That's why no piece of code should be considered production-ready until it has undergone not just unit and integration testing (detailed in our Comprehensive DevOps Guide), but also load testing that ensures a positive user experience and an adequately sized server to facilitate the user load.    The EDC has spent quite a few posts detailing the process of setting up an accurate, real-world testing suite using JMeter for ThingWorx. In this piece, we detail an alternative approach that makes use of the Java Spring Boot Framework to call rest requests against the ThingWorx server and simulate the user load. This Java Client tutorial produces a very immature user load client, one which would still take a lot of development to function as flexibly as the JMeter tutorial counterpart. For Java developers, however, this is still a very attractive approach; it allows for more custom, robust testing suites that come only as an investment made in a solid testing tool.   For someone experienced in Java, the risk is smaller of overlooking some aspect of simulation that JMeter may have handled automatically. For example, JMeter automatically creates more than one HTTP session, and it's much easier to implement randomized user logins instead of one account. The Java Client could do it with some extra work (not demonstrated here), but it uses just the Administrator login by default for a quick and dirty sort of load test, one focused less on the customer experience and more on server and database performance under the strain of the user requests (the method used in our sizing guidance, for instance, to see if a server is sized correctly).   The amount of time required to develop a Java Client isn't so bad for a Java developer, and when compared with learning the JMeter Framework, might be a better investment. A tool like this can handle a greater number of threads on a single testing VM; JMeter caps out around 250 threads per client on an 8Gb VM (under ideal conditions), while a Java Client can have thousands of threads easily. Likewise, a Java Client has less memory overhead than JMeter, less concern for garbage collection, and less likelihood that influence from heap memory management will affect the test results.   However, remember that everything in a Java Client has to be built from scratch and maintained over time. That means that beyond the basic tutorial here, there needs to be some kind of metrics gathering and analysis tool implemented (JMeter has built-in reporting tools), the calls need to be randomized, and not called at set intervals like they are here (which is not a very accurate representation of user load compared to a real-world scenario), and the number of users accessing the system at once should probably vary over time (to resemble peak usage hours). JMeter has a recording tool to ensure all the necessary REST requests to simulate a mashup load are made, so great care has to be taken to ensure all of the necessary REST calls for a mashup are made by the Java Client if a true simulation is called for by that approach.    
Java Client Tutorial
(The step-by-step tutorial itself is not reproduced here; a minimal illustrative sketch of such a client follows the conclusion below.)

Conclusion
Neither a Java Client nor a JMeter testing suite is inherently better than the other, and both have their place within PTC's various testing processes. The best test of all is to stand up a user load testing client of either kind at the same time as the UAT or QA user experience testing. QA testers who load and click about on mashups in true user fashion can then see most accurately how the mashups will perform and what the users will experience in the Enterprise-ready production application once the changes go out.
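As a companion to the tutorial referenced above, here is a hedged, minimal sketch of the core of such a Java load client. It is not the tutorial's Spring Boot client; it simply spins up a fixed pool of threads that repeatedly call a ThingWorx service over REST. The base URL, thing name, service name, and application key are placeholders, and a real test should randomize users, sessions, and call timing as discussed above.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SimpleTwxLoadClient {

    // Placeholders -- point these at a test server, Thing, and service of your own.
    private static final String BASE_URL = "https://twx-test.example.com/Thingworx";
    private static final String APP_KEY  = "<application key>";
    private static final int THREADS = 100;
    private static final int CALLS_PER_THREAD = 50;

    public static void main(String[] args) throws InterruptedException {
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(10))
                .build();
        ExecutorService pool = Executors.newFixedThreadPool(THREADS);
        for (int t = 0; t < THREADS; t++) {
            pool.submit(() -> {
                for (int i = 0; i < CALLS_PER_THREAD; i++) {
                    callService(client, "ExampleThing", "GetExampleData");
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.MINUTES);
    }

    private static void callService(HttpClient client, String thing, String service) {
        // ThingWorx REST convention: POST /Thingworx/Things/{thing}/Services/{service}
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(BASE_URL + "/Things/" + thing + "/Services/" + service))
                .header("appKey", APP_KEY)
                .header("Content-Type", "application/json")
                .header("Accept", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{}"))
                .build();
        try {
            long start = System.nanoTime();
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            // A real harness would record these metrics for later analysis.
            System.out.println(service + " -> " + response.statusCode() + " in " + elapsedMs + " ms");
        } catch (Exception e) {
            System.out.println("Request failed: " + e.getMessage());
        }
    }
}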
This script illustrates how to call a Groovy script as an external web service.  This example also applies to calling any external web service that relies on a username and password. Parameters: external_username external_password script_name import com.axeda.drm.sdk.Context import com.axeda.drm.sdk.device.DeviceFinder import com.axeda.drm.sdk.data.CurrentDataFinder import com.axeda.drm.sdk.device.Device import com.axeda.drm.sdk.data.HistoricalDataFinder import com.axeda.drm.sdk.device.DataItem import net.sf.json.JSONObject import com.axeda.drm.sdk.device.ModelFinder import groovyx.net.http.* import static groovyx.net.http.ContentType.* import static groovyx.net.http.Method.* /** * CallScriptoAsExternalWebService.groovy * * This script illustrates how to call a Groovy script as an external web service. * * @param external_username       -   (REQ):Str Username for the external web service. * @param external_password       -   (REQ):Str Password for the external web service. * @param script_name             -   (REQ):Str Script Name to call. * * */ def result try { validateParameters(actual: parameters, expected: ["external_username", "external_password", "script_name"]) // authentication tokens (username + password) def auth_tokens = [username: parameters.external_username, password: parameters.external_password] http = new HTTPBuilder( 'http://platform.axeda.com/services/v1/rest/Scripto/execute/'+parameters.script_name ) // pass in dummy parameters to the script for illustration def parammap = [key1: "val1", key2: "val2"] // Call the script     http.request (GET, JSON) {       uri.query = auth_tokens + parammap       response.success = { resp, json ->         // traverse the wrapped json response     result = json.wsScriptoExecuteResponse.content.$          }       response.failure = { resp ->         result = response.failure       }      } } catch (Throwable any) {     logger.error any.localizedMessage } return ['Content-Type': 'application/json', 'Content': result] static def validateParameters(Map args) {     if (!args.containsKey("actual")) {         throw new Exception("validateParameters(args) requires 'actual' key.")     }     if (!args.containsKey("expected")) {         throw new Exception("validateParameters(args) requires 'expected' key.")     }     def config = [             require_username: false     ]     Map actualParameters = args.actual.clone() as Map     List expectedParameters = args.expected     config.each { key, value ->         if (args.options?.containsKey(key)) {             config[key] = args.options[key]         }     }     if (!config.require_username) { actualParameters.remove("username") }     expectedParameters.each { paramName ->         if (!actualParameters.containsKey(paramName) || !actualParameters[paramName]) {             throw new IllegalArgumentException(                     "Parameter '${paramName}' was not found in the query; '${paramName}' is a reqd. parameter.")         }     } }
Calling external services from M2M applications is a critical aspect of building end-to-end solutions. Knowing how to apply network timeouts when connecting to external servers can prevent unexpected and problematic network hang-ups. Let's investigate how to create a safe networking flow using HttpClient, HttpBuilder, and Apache's FTPClient class.

Background
Custom Objects called from Expression Rules have a configurable maximum execution time. This is set by the com.axeda.drm.rules.statistics.rule-time-threshold property. Without this safeguard in place, long-running or misbehaved Custom Objects can cause internal processing queues to fill, and the server will suffer a performance degradation. In Java (and Groovy) all network calls internally use InputStream.read() to establish the socket connection and to read data from the socket. It is possible for faulty external servers (such as an FTP server) to hang and not respond properly. This means that the InputStream.read() method will continuously wait for the server to respond with data, and the server will never respond. According to the Java spec, InputStream.read() may be uninterruptable while it is waiting for data. This means that if a Custom Object has exceeded the com.axeda.drm.rules.statistics.rule-time-threshold, the Rule Sniper will still not be able to interrupt the Custom Object's execution while it is waiting on InputStream.read(). Because the Custom Object cannot be stopped, the internal processing queues will eventually fill. Even though InputStream.read() is uninterruptable, it is still possible to set timeouts that allow it to give up on a connection. Beyond that, we want to make sure that the connection is completely disconnected.

Types of Timeouts
There are typically two types of timeouts that should be set when making calls over the web: the Connection Timeout and the Socket Timeout. The Connection Timeout is the maximum amount of time that should be allowed when establishing the bi-directional socket connection between the client and server. Behind the scenes, a socket connection involves resolving the domain name of the server to an IP address, and then the server opening a port to connect with the client's port. The Socket Timeout limits the amount of time each socket operation is allowed to take; in particular, it limits the amount of time InputStream.read() will listen for a server's response. If a server is faulty or overloaded it may take a long time (or forever) to respond to a request, and this timeout limits how long the client will wait for that response.

When making any calls from a Custom Object to an external server (either WebService calls or FTP transfers), you should always set the Connection Timeout and the Socket Timeout, and keep them as reasonably small as possible. Failure to do so could unexpectedly impact your Axeda server. Consider a Custom Object that takes an average of 10 seconds to run and is called once a minute to make an external WebService call. This will not cause any issues and the system will be stable. If the external server suddenly suffers a performance degradation and the external WebService call now takes over a minute to run, the execution queue will eventually fill, causing performance degradation to the Axeda system. To protect against this scenario, set the timeouts to limit the call to one minute, and log whenever the time limit is exceeded.
Examples Provided below are examples of properly set timeouts and thorough connection management use HttpClient, HttpBuilder, and FTPClient.  All of these examples assume they are being executed from Custom Objects. By default, set the Connection Timeout to 10 seconds.  In normal circumstances, connections should not take more then 10 seconds.  If they are exceeding this time there is a good chance of networking issues between the client and server. The Socket Timeout can vary per use-case.  The examples provided set the Socket Timeout to 30 seconds, which should be sufficient for typical WebService calls and small to medium sized FTP file transfers.  Depending exactly on what is being done, the timout may have to be increased.  If you expect the call to go over 5 minutes please contact Axeda Support to investigate increasing  com.axeda.drm.rules.statistics.rule-time-threshold property (which defaults to 5 minutes). ​HttpClient​ //HttpClient import org.apache.http.client.HttpClient import org.apache.http.impl.client.DefaultHttpClient import org.apache.http.client.methods.HttpGet import org.apache.http.HttpResponse import org.apache.http.params.BasicHttpParams import org.apache.http.params.HttpParams import org.apache.http.params.HttpConnectionParams int TENSECONDS  = 10*1000 int THIRTYSECONDS = 30*1000 final HttpParams httpParams = new BasicHttpParams() //Establishing the connection should take <10 seconds in most circumstances HttpConnectionParams.setConnectionTimeout(httpParams, TENSECONDS) //The data transfer/call should take <30 seconds.  Adjust as necessary if receiving large data sets. HttpConnectionParams.setSoTimeout(httpParams, THIRTYSECONDS) HttpClient hc = new DefaultHttpClient(httpParams) try {   //Simply get the contents of http://www.axeda.com and log it to the Custom Object Log   HttpGet get = new HttpGet("http://www.axeda.com")   HttpResponse response = hc.execute(get)   BufferedReader br = new BufferedReader( new InputStreamReader( response.getEntity().getContent()))   br.readLines().each {     logger.info it   } } finally {   //Make sure to shutdown the connectionManager   hc.getConnectionManager().shutdown() } return true https://gist.github.com/axeda/5189092/raw/2f7b93c5f96ed8f445df4364b885486bc6fa1feb/HttpClientTimeouts.groovy HttpBuilder import groovyx.net.http.HTTPBuilder import static groovyx.net.http.ContentType.* import static groovyx.net.http.Method.* int TENSECONDS  = 10*1000; int THIRTYSECONDS = 30*1000; HTTPBuilder builder = new HTTPBuilder('http://www.axeda.com') //HTTPBuilder has no direct methods to add timeouts.  We have to add them to the HttpParams of the underlying HttpClient builder.getClient().getParams().setParameter("http.connection.timeout", new Integer(TENSECONDS)) builder.getClient().getParams().setParameter("http.socket.timeout", new Integer(THIRTYSECONDS)) try {   //Simply get the contents of http://www.axeda.com and log it to the Custom Object Log   builder.request(GET, TEXT){     response.success = { resp, res ->       res.readLines().each {         logger.info it       }       }   } } finally {   //Make sure to always shut down the HTTPBuilder when you’re done with it   builder.shutdown() } return true https://gist.github.com/axeda/5189102/raw/66bb3a4f4f096681847de1d2d38971e6293c4c6b/HttpBuilderTimeouts.groovy FtpClient Apache’s FTPClient has a third type of timeout, the Default Timeout.  The Default Timeout is a timeout that further ensures that socket timeouts are always used.  
Note: Default Timeout does not set a timeout for the .connect() method. import org.apache.commons.net.ftp.* import java.io.InputStream import java.io.ByteArrayInputStream String ftphost = "127.0.0.1" String ftpuser = "test" String ftppwd = "test" int ftpport = 21 String ftpDir = "tmp/FTP" int TENSECONDS  = 10*1000 int THIRTYSECONDS = 30*1000 //Declare FTP client FTPClient ftp = new FTPClient() try {   ftp.setConnectTimeout(TENSECONDS)   ftp.setDefaultTimeout(TENSECONDS)   ftp.connect(ftphost, ftpport)   //30 seconds to log on.  Also 30 seconds to change to working directory.   ftp.setSoTimeout(THIRTYSECONDS)   def reply = ftp.getReplyCode()   if (!FTPReply.isPositiveCompletion(reply))   {     throw new Exception("Unable to connect to FTP server")   }   if (!ftp.login(ftpuser, ftppwd))   {     throw new Exception("Unable to login to FTP server")   }   if (!ftp.changeWorkingDirectory(ftpDir))   {     throw new Exception("Unable to change working directory on FTP server")   }   //Change the timeout here for a large file transfer that will take over 30 seconds   //ftp.setSoTimeout(THIRTYSECONDS);   ftp.setFileType(FTPClient.ASCII_FILE_TYPE)   ftp.enterLocalPassiveMode()   String filetxt = "Some String file content"   InputStream is = new ByteArrayInputStream(filetxt.getBytes('US-ASCII'))   try   {     if (!ftp.storeFile("myFile.txt", is))     {       throw new Exception("Unable to write file to FTP server")     }   } finally   {     //Make sure to always close the inputStream     is.close()   } } catch(Exception e) {   //handle exceptions here by logging or auditing } finally {   //if the IO is timed out or force disconnected, exceptions may be thrown when trying to logout/disconnect   try   {     //10 seconds to log off.  Also 10 seconds to disconnect.     ftp.setSoTimeout(TENSECONDS);     ftp.logout();     //depending on the state of the server the .logout() may throw an exception,     //we want to ensure complete disconnect.   }   catch(Exception innerException)   {       //You potentially just want to log that there was a logout exception.     }   finally   {     //Make sure to always disconnect.  If not, there is a chance you will leave hanging sockects     ftp.disconnect();   } } return true https://gist.github.com/axeda/5189120/raw/83545305a38d03b6a73a80fbf4999be3d6b3e74e/FtpClientConnectionTimeouts.groovy