IoT Tips

This document is designed to help troubleshoot some commonly seen issues while installing or upgrading the ThingWorx application, prior to or instead of contacting Tech Support. It is not a template for a guaranteed solution, but rather a reference guide that provides an opportunity to eliminate some of the possible root causes. While following the installation guide and matching the system requirements is sufficient to get a successfully running instance of ThingWorx, some issues could still occur upon launching the app for the first time. Generally, those issues arise from minor environmental details and can be easily fixed by aligning with the proper installation process. Currently, the majority of installation hiccups come from the PostgreSQL side. That being said, the very first thing to note, whether you are a new user trying out the platform or a returning one switching the database to PostgreSQL, is this: the PostgreSQL database must be installed, configured, and running prior to the ThingWorx installation.

ThingWorx 7.0+: Installation errors out with "failed to succeed more than the maximum number of allowed acquisition attempts", "Platform being shut down because System Ownership cannot be acquired", or ERROR: relation "system_version" does not exist.

Resolution: Generally, this type of error points to a security/permission issue. As all of the installation operations should be performed by a root/Administrator role, the following points should be verified:
- Ensure both the Tomcat and ThingworxPlatform folders have the relevant read/write permissions.
- The title and contents of the configuration file in the ThingworxPlatform folder changed from 6.x to 7.x; check that the right configuration file is in the folder.
- Verify that the name and password provided in this configuration file match the ones set in the Postgres DB.
- Run the database cleanup script, and then set up the database again. Verify by checking the thingworx tablespace (about 53 tables should be created).

ThingWorx application: Blank screen, no errors in the logs, "waiting for <url>" gears spinning but never actually loading; eventually times out.

Resolution: Ensure that Java in Tomcat is pointing to the right path; it should be something like C:\Program Files\Java\jre1.8.0_101\bin\server\jvm.dll

6.5+ PostgreSQL:

1. Error when executing thingworxpostgresDBSetup.bat: psql:./thingworx-database-setup.sql:1: ERROR: could not set permissions on directory "D:/ThingworxPostgresqlStorage": Permission denied

Resolution: The error means that the postgres user was not able to create a directory in the 'ThingworxPostgresqlStorage' directory. As this is related to security/permissions, the following steps can be taken to clear the error:
- Assign read/write permissions to the Everyone user group to fix the script execution, and then execute the batch file: right-click the 'ThingworxPostgresqlStorage' directory -> Share with -> Specific people, select the drop-down, add the Everyone group, update the permission level to Read/Write, and click Share.
- Execute the batch file as admin.

2. Installation error message "relation root_entity_collection does not exist" is displayed with the PostgreSQL version of the ThingWorx platform.

Resolution: This error message is displayed only if the schema parameter passed to the thingworxPostgresSchemaSetup.sh script is different from $USER or PUBLIC. To clear the error, edit the PostgreSQL configuration file, postgresql.conf, and add your own schema to the search_path item.
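For illustration, the search_path entry in postgresql.conf would then look something like the line below, where mySchema is a placeholder for whatever schema name was passed to the setup script; restart PostgreSQL after saving the change:

search_path = 'mySchema, public'    # mySchema is a placeholder for your own schema name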
Other common errors upon launching the application: two of the most commonly seen errors are 404 and 401. While there can be numerous reasons for those errors, here are the root causes that fall under the "very likely" category.

404 Application not found during a new install:
- Ensure Thingworx.war was deployed: check the Tomcat/webapps directory on the hard drive and ensure that Thingworx.war and the Thingworx folder are present, as well as ThingworxStorage in the root (or custom selected) location.
- Ensure the Thingworx.war is not corrupted (you may re-download it from support and compare the size).

401 Application cannot be accessed during a new install or upgrade:
- For PostgreSQL, ensure the database is running and can be connected to; also see the basic troubleshooting points below.
- Verify that the Tomcat, Java, and database (in the case of PostgreSQL) versions match the system requirements guide for the appropriate platform version.
- Ensure the upgrade was performed according to the guide and the necessary folders were removed (after copying them as a preventative measure).
- Ensure the correct port is specified in platform-settings.json (for PostgreSQL); by default the connection string is jdbc:postgresql://localhost:5432/thingworx (see the sample sketch at the end of this tip).

Again, it should be kept in mind that while the symptoms are common and can generally be resolved with the same solution, every system environment is unique and may require an individual approach for a guaranteed resolution.

Basic troubleshooting points for:
- Validating the PostgreSQL installation / Postgres install troubleshooting
- java.lang.NullPointerException error during PostgreSQL installation
- ***CRITICAL ERROR: permission denied for relation root_entity_collection
- Error while running scripts: could not set permissions on directory "/ThingworxPostgresqlStorage": Permission denied
- Acquisition Attempt Failed error

Resolution:
- Ensure the 'ThingworxStorage', 'ThingworxPlatform' and 'ThingworxPostgresqlStorage' folders are created. The folders have to be present in the root directory unless specifically changed in any configuration.
- It is recommended to grant sufficient privileges (if not all) to the database user (twadmin). Note: while running the script to create a database, if a schema name other than 'public' is used, the "search_path" in "postgresql.conf" must be changed to reflect 'NewSchemaName, public'.
- Grant the user permission to access the root folders containing 'ThingworxPostgresqlStorage' and 'ThingworxPlatform'.
- The password set for the default 'twadmin' user in the pgAdmin III tool must match the password set in the configuration file under the ThingworxPlatform folder.
- Ensure the THINGWORX_PLATFORM_SETTINGS variable is set up.

Error:
psql:./thingworx-database-setup.sql:14: ERROR: could not create directory "pg_tblspc/16419/PG_9.4_201409291/16420": No such file or directory
psql:./thingworx-database-setup.sql:16: ERROR: database "thingworx" does not exist

Resolution: Replace /ThingworxPostgresqlStorage in the .bat file with C:\ThingworxPostgresqlStorage and omit the -l option in the command window. Also note the related article: Troubleshooting Syntax Error when running PostgreSQL setup scripts.
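As referenced above, a minimal sketch of the PostgreSQL section of platform-settings.json is shown below. Treat the key names as approximate, since they vary between ThingWorx versions, and replace the host, port, username, and password placeholders with your own values (they must match what was set in the Postgres DB):

{
    "PersistenceProviderPackageConfigs": {
        "PostgresPersistenceProviderPackage": {
            "ConnectionInformation": {
                "jdbcUrl": "jdbc:postgresql://localhost:5432/thingworx",
                "username": "twadmin",
                "password": "<your-twadmin-password>"
            }
        }
    }
}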
One of the killer features of the Axeda Platform is the Axeda Console, a browser-based online portal where developers and business users alike can browse information in an out-of-the-box graphical user interface. The Axeda Console is functional, re-brandable and extensible, and can easily form the foundation for a customized connected product experience. Let's take a tour of the Axeda Console and explore what it means to have a full-featured connected app right at the start of your development.

What this tutorial covers

This tutorial discusses the landscape of the online browser-based suite of tools accessible to Axeda customers. It does not do a deep dive into each of the available applications, but rather serves as an introduction to the user interface. Sections of the Axeda Applications Console that are discussed:
- Landing Page (Home)
- User Preferences
- Asset Dashboard
- Axeda Help

Note: This article features screenshots from Axeda 6.5, which is the current release as of July 1, 2013. In prior versions the Axeda Applications Console has also been referred to as ServiceLink. Stay tuned for Axeda 6.6!

What can I do from here?

From the landing page of the Axeda Console, you can access recent assets in the right sidebar or search for assets in the left sidebar. Each of the links in the main Welcome text corresponds to a main tab:
- Troubleshoot, Monitor, and Service Assets (Service tab): an overview of the status of assets, filterable by a search on fields such as serial number, model, organization, etc.
- Access and Control Remote Assets (Access tab): if you are familiar with Windows Remote Desktop, this will seem familiar. It allows you to log into and control an asset as if you were typing from a physical keyboard directly into it, without having to be on the same network or in the same location. This is particularly useful when the asset is behind a firewall or other controlled network.
- Install and Deploy Software Updates (Software tab): this tool provides the ability to create, view, configure, delete and deploy software packages (such as a file that contains an update) to assets.
- View Usage Data and Asset Charts (Usage tab): you can use the Axeda Usage application to track and analyze asset usage.
- Add New Assets, Organizations and Models (Configuration tab): find tools here for creating, updating and deleting domain objects.
- Administer Users, Groups and Assets (Administration tab): manage users, groups, auditing, and system-setup tasks.

The remaining tabs that are not linked from the Home page are either custom tabs or less frequently used tabs (depending on use case). The custom tabs are examples of custom applications that are not distributed out of the box with an Axeda instance:
- Wireless (custom tab): an integration with the Jasper API that allows the user to monitor SIMs activated in their assets.
- Maintenance: track information about the operation of machines against service cycles.
- Case: manage the resolution of asset issues.
- Report (requires an additional license): provides a suite of standard reports; custom reports may also be added.
- Dashboard: allows you to create a landing page that displays information that is interesting to you.
- Simulator (custom tab): an app that allows you to set data items, alarms, mobile locations, and geofences on an asset.

For more details on Custom Tabs and the Extended UI, please take a look at [Extending the UI - Custom Tabs and Modules] (coming soon).
User Preferences

Each user in an Axeda instance has a certain set of privileges and visibility, which determine what actions she can take and what information she can see. A user also has control over certain aspects of her own use of the Axeda Console, which are configurable from the yellow Preferences link, located in the top right corner of the page. This opens the User Preferences page, which allows you to set defaults for your user only. From here you can change the following settings:
- User Attributes: email and password.
- Locale: change the locale, which also sets the display language.
- Time Zone: change the time zone as displayed in the Applications Console. (Note that this does NOT affect individual asset time zones; asset time zone is reported by the agent.)
- Notification Styles: specify which contact methods are appropriate for you, and for what severity of triggered alert.
- Default Application: set which tab should open when you log into the Axeda Console.
- Items Per Page (Long Table): for longer listings of items, how many rows should be displayed.
- Items Per Page (Short Table): for shorter listings of items, how many rows should be displayed.

Asset Dashboard

As the asset is the center of the Axeda universe, the Asset Dashboard could be considered the central feature of the Axeda Console. You can open the dashboard for any particular asset by clicking it in the Service tab or in the Recent Assets shortcuts. You can also add modules within the Asset Dashboard that are either a custom application or the output of an Extended UI Module type custom object. From the Asset Dashboard you have an at-a-glance view of the asset's current data, alarms, uploaded files, and location, to name a few.

The Asset Dashboard is built for viewing information about the asset. To perform create/read/update/delete functions on the asset, you will need to search using the Configuration tool instead. To view a list of models or any domain object available for configuration, click the drop-down arrow next to the View sub-tab and select the object name. Once you have the list of models displayed, click the Preferences link on a model to access the Model Preferences Dashboard, which allows you to configure the model image, the modules displayed, and other features of the Asset Dashboard.

Axeda Help

As part of learning more about the Axeda Console, make use of the documentation available to you by clicking the Help link in the top right corner of the page. This opens a pop-up that contains information about the page you have open. It allows you to do a deep dive into any aspect of the Axeda Console, and includes search and a browsable index of Axeda topics. Make sure to research topics in the Help section while troubleshooting your assets and applications.
Integrating LDAP authentication into ThingWorx is fairly simple. Since release 5.0, the out-of-the-box (OOTB) ThingWorx authenticators already include the necessary code to validate a user's credentials against an LDAP server. These authenticators check whether an LDAP server is connected every time a user attempts a login, and then further check whether this user exists on the LDAP server. If the username does exist in LDAP, ThingWorx checks whether the password entered matches the password stored in LDAP. If the password entered does not match the password stored in LDAP, ThingWorx next checks whether it matches the password stored in ThingWorx for that user. So in order to log in to ThingWorx, a user must have a user Thing created for them within ThingWorx Composer (this can be done programmatically, see below), and a valid password that matches either the LDAP account password or the password as it is set for that user on the Thing in ThingWorx Composer.

The first thing a developer needs to do to integrate LDAP is configure their ThingWorx instance so that it can find the LDAP server and access its contents. This is done by importing an XML file which exposes a Thing that comes with the ThingWorx platform (see attached file "directoryServices.xml"). The Thing that needs configuring is called ApacheDS3, and it is a DirectoryServices Thing.

The largest task in integrating LDAP into ThingWorx is importing the LDAP users into ThingWorx. Getting the LDAP usernames out of the LDAP server will vary depending on which distribution of LDAP is in use. However, once the developer acquires this information, using it to create users in ThingWorx is simple. The developer will need to create a Thing Service which creates a dummy password and assigns the LDAP username in the parameters, then passes the parameters into the CreateUser service of the "EntityServices" resource:

var params = {
    password: "SOMETHING_COMPLICATED", // dummy password does not matter, but you don't want an accidental match, so make it something very complicated, and standard to your company's LDAP users
    name: ldap_username, // retrieve from LDAP
    description: "This user was created as part of LDAP import", // can be whatever you'd like
    tags: undefined
};
Resources["EntityServices"].CreateUser(params); // no return

Any users created in this way will be redirected to Squeal if there is no home mashup assigned, so you will have to add an additional bit of code which assigns the home mashup to each user, looping through something like this:

var params = {
    name: "dashboard" // replace this with the String name of the dashboard (must exist)
};
Users[username].SetHomeMashup(params);

For full steps on integrating LDAP and ThingWorx, including instructions on how to set up an ApacheDS test LDAP server, see the ThingWorx support article titled "Integrate LDAP Authentication and Import LDAP User Directory into Thingworx" (reference document – CS221840).
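Putting the two snippets above together, an import service might loop over an infotable of usernames pulled from LDAP. This is a minimal sketch: the ldapUsers infotable and its "username" field are assumptions standing in for however you extract the account list from your own LDAP server:

// Assumed input: ldapUsers, an infotable with one STRING field, "username",
// holding the account names exported from the LDAP server.
for each (row in ldapUsers.rows) {
    Resources["EntityServices"].CreateUser({
        password: "SOMETHING_COMPLICATED", // dummy password, as above
        name: row.username,
        description: "This user was created as part of LDAP import",
        tags: undefined
    });
    // Assign a home mashup so the new user is not redirected to Squeal
    Users[row.username].SetHomeMashup({ name: "dashboard" }); // "dashboard" must exist
}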
From the documentation, a Solr node is only needed when using DataTables. If the Solr configuration field is left blank, the extension will request an input.

Q: Are Solr nodes required or optional in order to use DSE with ThingWorx (in the hypothetical case of not using DataTables)?

A: For the functionality of ThingWorx, a Solr node is not required. However, the extension does try to validate the configuration, and hence, at this point, a Solr node is mandatory to properly configure the extension. This will be fixed in the future.

Q: When there are two entries for addresses, one for a Cassandra cluster and one for a Solr cluster, are they the same cluster or different clusters?

A: They could be either. There can be one machine with Solr enabled, using the same IP for both Cassandra and Solr. However, this is not recommended for production workloads; it would be perfectly fine for development or test environments. In a cluster, in order to have both Solr and Cassandra nodes, the use of datacenters is required. Even if a datacenter isn't explicitly defined, a default install of DSE will create two datacenters called "Cassandra" and "Solr", which is what you would see in the default "Cassandra Keyspace Settings" property in the configuration. If the user does create datacenters with specific names, they will have to update the "Cassandra Keyspace Settings" property to reflect the same, i.e. replication = {'class':'NetworkTopologyStrategy', 'Cassandra':1, 'Solr':1}. The number in front (1 being the default) represents the replication factor (https://docs.datastax.com/en/cql/3.1/cql/cql_using/update_ks_rf_t.html), depending on the number of nodes in each datacenter.
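As a worked example, if you created datacenters named DC_Cassandra and DC_Solr (hypothetical names) and your keyspace is named thingworx, the keyspace update could be sketched in CQL as below; size each factor to the node count in the corresponding datacenter:

-- one replica per datacenter; adjust names and factors to your topology
ALTER KEYSPACE thingworx
WITH replication = {'class': 'NetworkTopologyStrategy', 'DC_Cassandra': 1, 'DC_Solr': 1};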
The purpose of this post is to provide some ideas for and help with diagnosing issues in mashups. First, check whether the problem occurs at mashup runtime or in design (edit) mode.

1. Runtime: Is the issue visual, or related to improper service execution? (E.g., "my data is displaying correctly but the styling or formatting is wrong" is visual; "my data is displayed incorrectly but the styling and formatting are right" is improper service execution.)

For visual/styling/formatting issues, return to the edit mode of the mashup, and:
- Ensure the proper style definitions were set up.
- Ensure the logic behind the connections is correct.
- Check the configuration of the widget(s) involved.
- If changes were made to the styles after the mashup was saved and run the first time, try clearing the browser cache and reconnecting the dependent entity with the style involved in the issue.
- If the problem persists, contact technical support to raise a cosmetic defect ticket.

For improper service execution:
- Return to Composer and use the "Test" button on the service to execute it and validate the output.
- If the outputs are incorrect, check the code inside the service.
- If the outputs come out as expected, try reconnecting the service in the mashup design mode and clearing the browser cache.
- If the issue is related to data from the user database not displaying, ensure database connectivity and proper credentials.
- If the problem persists, reach out to technical support to raise a defect.

2. Design/edit mode: If the widgets are not displaying correctly or not appearing in the list:
- Check that the extensions involved appear under the Extension Manager. Re-upload them if needed and restart Composer.
- If the Google Maps widget is not showing in the mashup the first time it is used, allow up to 2 hours for it to load and cache.
- Submit a ticket to technical support, including screenshots of the issue.
- For other styling, formatting, or improper display issues at design time: document the observation and supply screenshots to the technical support team for investigation.

Note: See Tools and approaches used in troubleshooting Twx issues.
This document attached to this blog entry actually came out of my first exposure to using the C SDK on a Raspberry PI. I took notes on what I had to do to get my own simple edge application working and I think it is a good introduction to using the C SDK to report real, sampled data. It also demonstrates how you can use the C SDK without having to use HTTPS. It demonstrates how to turn off HTTPS support. I would appreciate any feedback on this document and what additions might be useful to anyone else who tries to do this on their own.
The ThingWorx EMS and SDK based applications follow a three-step process when connecting to the Platform:

1. Establish the physical websocket: The client opens a websocket to the Platform using the host and port that it has been configured to use. The websocket URL exposed at the Platform is /Thingworx/WS. TLS will be negotiated at this time as well.

2. Authenticate: The client sends an AUTH message to the Platform, containing either an App Key (recommended) or username/password. The AUTH message is part of the ThingWorx AlwaysOn protocol. If the client attempts to send any other message before the AUTH, the server will disconnect it. The server will also disconnect the client if it does not receive an AUTH message within 15 seconds. This time is configurable in the WSCommunicationSubsystem Configuration tab and is named "Amount of time to wait for authentication message (secs)." Once authenticated, the SDK/EMS is able to interact with the Platform according to the permissions applied to its credentials. For the EMS, this means that any client making HTTP calls to its REST interface can access Platform functionality. For this reason, the EMS only listens for HTTP connections on localhost (this can be changed using the http_server.host setting in your config.json; a sample configuration sketch appears at the end of this post). At this point, the client can make requests to the Platform and interact with it, much like an HTTP client can interact with the Platform's REST interface. However, the Platform still cannot direct requests to the edge.

3. Bind: A BIND message is another message type in the ThingWorx AlwaysOn protocol. A client can send a BIND message to the Platform containing one or more Thing names or identifiers. When the Platform receives the BIND message, it will associate those Things with the websocket over which it received the BIND message. This allows the Platform to send request messages to those Things over the websocket. It will also update the isConnected and lastConnection time properties for the newly bound Things. A client can also send an UNBIND request. This tells the Platform to remove the association between the Thing and the websocket. The Thing's isConnected property will then be updated to false.

For the EMS, edge applications can register using the /Thingworx/Things/LocalEms/Services/AddEdgeThing service (this is how the script resource registers Things). When a registration occurs, the EMS sends a BIND message to the Platform on behalf of that new resource. Edge applications can de-register (and have an UNBIND message sent) by calling /Thingworx/Things/LocalEms/RemoveEdgeThing.
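To make the steps above concrete, the sketch below shows roughly where these settings live in an EMS config.json. It is a minimal illustration rather than a complete file: the host, app key, and Thing name are placeholders, and exact option names can differ between EMS versions:

{
    "ws_servers": [ { "host": "your-platform-host", "port": 443 } ],
    "ws_connection": { "encryption": "ssl" },
    "appkey": "00000000-0000-0000-0000-000000000000",
    "http_server": { "host": "127.0.0.1", "port": 8080 },
    "auto_bind": [ { "name": "MyEdgeThing" } ]
}

With a configuration along these lines, the EMS opens the websocket to /Thingworx/WS on the listed server, authenticates with the app key, and sends a BIND for MyEdgeThing on startup.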
This is a slide deck I created while learning how to post data from an Arduino to ThingWorx using the MQTT protocol.
Everywhere in the ThingWorx Platform (even at the edge and in extensions) you see the data structure called the InfoTable. What are InfoTables? They are used to return data from services, map values in mashups, and move information around the platform. What they are is very simple; how they are set up and used is also simple, but there are a lot of ways to manipulate them. Simply put, InfoTables are JSON data, that is all. However, they use a standard structure that the platform can recognize and use.

There are two pieces to an InfoTable: the dataShape definition and the rows array. The dataShape is the definition of each row value in the rows array. This is not accessible directly in service code, but there are functions and structures to manipulate it in services if needed.

Example InfoTable definition and values:

{
    dataShape: {
        fieldDefinitions: {
            ColOneName: { name: "ColOneName", baseType: "STRING" },
            ColTwoName: { name: "ColTwoName", baseType: "NUMBER" }
        }
    },
    rows: [
        { ColOneName: "FirstValue", ColTwoName: 13 },
        { ColOneName: "SecondValue", ColTwoName: 14 }
    ]
}

So you can see that the dataShape value is made up of a group of JSON objects under the fieldDefinitions element. Each field has a "name" element, which of course defines the field name, and a "baseType" element, which is the ThingWorx primitive type of the named field. Typically this structure is created automatically by using a DataShape object that is defined in the platform. This is also the reason DataShapes need to be defined: so that fields can be defined not only for InfoTables, but also for DataTables and Streams. This is how mashups know what the structure of the data is when creating bindings to widgets, and how other parts of the platform can display data in a structured format. The other part is the "rows" element, which contains an array of JSON objects holding the actual data in the InfoTable.

Accessing the values in the rows is as simple as using standard JavaScript syntax for JSON. To access the number in the first row of the InfoTable referenced above (if the name of the InfoTable variable is "MyInfoTable"), use MyInfoTable.rows[0].ColTwoName. This returns a value of 13. As you can note, the JSON array index starts at zero.

Looping through an InfoTable in service script is also very simple. You can use the index in a standard "for loop" structure, but a slightly cleaner way is to use a "for each" loop like this:

for each (row in MyInfoTable.rows) {
    var colOneVal = row.ColOneName;
    ...
}

It is important to note that many base services in the platform return an InfoTable as output, and most of these have system-defined datashapes built into the platform (such as QueryDataTableEntries, GetImplementingThings, QueryNumberPropertyHistory and many, many more). Also, all service results from query services accessing external databases are returned in the structure of an InfoTable.

Manipulating an InfoTable in script is easy using various functions built into the platform. Many of these can be found in the "Snippets" tab of the service editor in Composer, in both the InfoTableFunctions resource and the InfoTable code snippets. Some of my favorites and most commonly used:
Create a blank InfoTable:

var params = {
    infoTableName: "MyTable"
};
var MyInfoTable = Resources["InfoTableFunctions"].CreateInfoTable(params);

Add a new field to any InfoTable:

MyInfoTable.AddField({name: "ColNameThree", baseType: "BOOLEAN"});

Delete a field:

MyInfoTable.RemoveField("ColNameThree");

Add a data row:

MyInfoTable.AddRow({ColOneName: "NewRowValue", ColTwoName: 15});

Delete one or more data rows matching the values defined (note you can define multiple fields in this statement):

// delete all rows that have a value of 13 in ColOneName
MyInfoTable.Delete({ColOneName: 13});

Create an InfoTable using a predefined DataShape:

var params = {
    infoTableName: "MyInfoTable",
    dataShapeName: "dataShapeName"
};
var MyInfoTable = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);

There are many more functions built into the platform, including ones to filter, sort and query rows (two of them are sketched after this post's closing note). These can be extremely useful when trying to return limited or more strictly structured InfoTable data. Hopefully this gives you a better understanding and use of this critical part of the ThingWorx Platform.
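As referenced above, here is a hedged sketch of the sort and query helpers. The Sort call reflects the common {t, name, ascending} parameter pattern, and the query object follows the standard ThingWorx query JSON; verify both signatures against the Snippets tab for your platform version before relying on them:

// Sort MyInfoTable by ColTwoName, ascending (assumed signature: {t, name, ascending})
var sorted = Resources["InfoTableFunctions"].Sort({
    t: MyInfoTable,
    name: "ColTwoName",
    ascending: true
});

// Keep only rows where ColTwoName > 13, using the standard query JSON structure
var filtered = Resources["InfoTableFunctions"].Query({
    t: MyInfoTable,
    query: {
        filters: { type: "GT", fieldName: "ColTwoName", value: 13 }
    }
});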
Hello everyone,

This post is meant to fill the gap that Basic Rules of ThingWorx Development leaves. You can follow these rules even before starting the development process and keep them in mind to have an organized and easy-to-maintain application. I will update this post in the future with more best practices and advice.

Best practices and suggestions:

In order to have clean and quick progress in any project, the approach should be modular. If the modular approach is implemented, the development process should also be thought of in a modular way. This will give much-needed independence to each individual developer, especially if the team works concurrently on the same instance. Some rules need to be in place in order for the project to be as smooth as possible:

1. Every developer must have their own user. This is more important when developing on the same ThingWorx instance, but it's a good practice when developing on individual instances as well.
2. Every developer will be responsible for complete modules, from the respective screens of the GUI to the functionality services and business logic.
3. If concurrent work on the same entity needs to happen, then communication between the developers and time-sharing on that entity is needed, without developers overwriting each other's code. Don't decide to go into edit mode if someone else is already editing; that will get you to a dead end.
4. For point no. 3 to work, after editing an entity each user must press the Cancel Edit button and leave that entity in View mode.
5. When searching for services or properties, developers should avoid pressing the name of the entity, which is a link that directly opens the entity in Edit mode. They should instead use the button with the magnifying glass to the left of the name, which opens the entity in View mode.
6. As a result of the modular approach, each module will have its own Utility Thing that contains the services, properties, events and subscriptions that help develop the functionality for that module.
7. Each module will have its own tags; the format could be: <Client_Name><GUI/Business><Module_Name>
8. The integration of the modules will be done in the Master either by a single person in charge of that Master or by each developer in turn.
9. Depending on the case, the Data Model could be treated as a module in its own right, or can be integrated into each module if the project permits.

How to manage multiple users working on the same code in Composer (thanks to Pai Chung):

Currently ThingWorx, within the development environment, allows you to heavily document all your work; that includes 'Save with Comment'. We encourage the use of the Documentation field and the 'Save with Comment' option. However, development is generally not isolated to one environment. ThingWorx provides several ways to back up the information:
- Backup: a true database backup that creates an additional database in ThingworxBackupStorage, and can basically be used as a restore by copying it back into ThingworxStorage.
- Export to ThingworxStorage: a full model export (with or without data) that can be triggered at any time. It can use date filters to export according to modified date. This is server side.
- Export to File: allows you to export a single entity or a group of entities/data according to a variety of filters. This is client side.
- Export to Source Controlled Entities: allows you to export to a file-folder structure or zip that can easily be checked into a source control system.
How to approach source control:
1. After some initial modeling, Export to Source Controlled Entities and check this into your source control system.
2. From this point forward, all developers have to follow a check-in/check-out process.
3. Every time an entity group security setting is made, Export to ThingworxStorage and also check that into source control, overwriting the previous export.
4. All in-use extensions should be in one zip that also resides in source control.

To do a restore or deploy:
1. Install the platform.
2. Install the extensions.
3. Import from ThingworxStorage the last export checked in.
4. Import each single entity file, in the proper order.
5. Import each single data file.
6. Clean up dead entities (if there is a reference list).

Additional steps to take to help safeguard the development:
1. Make sure the automatic backup is running.
2. Export each edited entity to a subfolder named with the date of the edit.
3. Run a full export to ThingworxStorage every day after development starts. This can be scripted and triggered by a timer or scheduler subscription (<Server>/Thingworx/ExportDatabase/?WithData=true); see the sketch at the end of this post. This way you have a backup of everything that was there before you started working each day, so you can roll back if an error occurs.

CONTINUED 7 Sep 2015

How to organize wiring needs when developing the GUI:

Starting from the idea that we can divide the GUI elements into Display Elements and Action Elements, I have created a common form to be filled with the information necessary for the wiring of each element:

- UI Element Type: Display Element / User Action Element
- Thing Name: name of the Thing where the data/service is found
- Service Name: service inside the Thing that returns the data / is the subject of the action
- Property(ies) Name: Thing property / column name (when the service returns an infotable) for Display Elements; input parameters for the service to be run for User Action Elements
- Additional Logic: additional information regarding the way the information sources change when preconditions are met. Usually this means new services or mashup logic is needed.

I suggest that an additional companion document to the GUI description document be created. This document will contain the previous form (table) for each screen/slide so that the work on a specific screen/slide can be done independently.

To be continued...
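As referenced in the safeguarding steps above, here is a minimal sketch of a scheduler subscription body that triggers the daily full export. The URL, port, and appKey value are placeholders for your environment, and the export endpoint's exact behaviour should be verified for your platform version; the snippet simply issues an HTTP GET against the export URL:

// Body of a subscription to a Scheduler Thing's ScheduledEvent (assumed setup).
var params = {
    url: "http://localhost:8080/Thingworx/ExportDatabase/?WithData=true", // placeholder host/port
    headers: { "appKey": "00000000-0000-0000-0000-000000000000" } // placeholder application key
};
// GetText performs an HTTP GET and returns the response body as a string
var result = Resources["ContentLoaderFunctions"].GetText(params);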
Joint study by Cognizant and the Economist Intelligence Unit (EIU): Making products smart can deliver game-changing innovation, enriched customer experiences and new, across-the-board levels of efficiency. From R&D and manufacturing, through distribution and after-sales support, product data is changing how products are built, sold and cared for. Our latest research reveals practical steps business leaders can take to benefit from this quickly intensifying and accelerating trend.
One of the signature features of the Axeda Platform is our alarm notification, signalling and auditing capabilities. Our dashboard offers a simplified view into assets that are in an alarm state, and provides interaction between devices and operators. For some customers the dashboard may be too extensive for their application needs. The Axeda Platform, from version 6.6 onward, provides a number of ways of interacting with alarms to allow you to present this data to remote clients (Android, iOS, etc.) or to build extended business logic around alarm processing.

If one were to create a remote management application for Android, for example, the REST APIs are available to interact with Assets and Alarms. For aggregate operations where network traffic and round-trip time can be a concern, we also have our Scripto API, which allows you to use the Custom Object functionality to deliver information on many different aggregating criteria, and allows developers to get the data needed to build the applications that solve their business requirements.

Shown below is a REST API call you might make to retrieve all alarms between a certain time and date.

POST: https://INSTANCENAME/services/v2/rest/alarm/find

<v2:AlarmCriteria xmlns:v2="http://www.axeda.com/services/v2" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <v2:date xsi:type="v2:BetweenDateQuery">
    <v2:start>2015-01-01T00:00:00.000Z</v2:start>
    <v2:end>2015-01-31T23:59:59.000Z</v2:end>
  </v2:date>
  <v2:states/>
</v2:AlarmCriteria>

In a custom object, this would look like the following:

import static com.axeda.sdk.v2.dsl.Bridges.*
import com.axeda.services.v2.*
import com.axeda.sdk.v2.exception.*

def q = new com.axeda.services.v2.BetweenDateQuery()
q.start = new Date()
q.end = new Date()
ac = new AlarmCriteria(date:q)
aresults = alarmBridge.find(ac)

Using the same API endpoint, here's how you would retrieve data by severity:

<v2:AlarmCriteria xmlns:v2="http://www.axeda.com/services/v2" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <v2:severity xsi:type="v2:GreaterThanEqualToNumericQuery">
    <v2:value>900</v2:value>
  </v2:severity>
  <v2:states/>
</v2:AlarmCriteria>

Or in a custom object:

import static com.axeda.sdk.v2.dsl.Bridges.*
import com.axeda.services.v2.*
import com.axeda.sdk.v2.exception.*

def q = new com.axeda.services.v2.GreaterThanEqualToNumericQuery()
q.value = 900
ac = new AlarmCriteria(severity:q)
aresults = alarmBridge.find(ac)

Currently the query types do not map properly in JSON objects, so use XML to perform these types of queries via the REST APIs.

References:
Axeda v2 API/Services Developer's Reference Guide 6.6
Axeda Platform Web Services Developer Reference v2 REST 6.6
Axeda v2 API/Services Developer's Reference Guide 6.8
Axeda Platform Web Services Developer Reference v2 REST 6.8
Background: Firewall-Friendly Agents can be configured for server certificate authentication in the Axeda Builder project or via the Axeda Deployment Utility. When server certificate authentication is configured, the Agent will compare the certificate chain sent by the Platform to a local copy of the CA certificate chain stored in the SSLCACert.pem file in the Agent's home directory. The certificate validation compares three things:
- Does the name of the Platform certificate match the name in the request?
- Does the CA certificate match the CA certificate that signed the Platform certificate?
- Is the Platform or CA certificate not expired?

If the answer to any of these questions is "no", the connection is refused and the Agent does not communicate further with the Platform. To determine if certificate trouble is an issue, see the Agent log: EKernel.log or xGate.log.

Recommendation: For Agent-Platform communications, we recommend always using SSL/HTTPS. If the Agent is not configured to validate the server certificate (via the trusted CA certificate), the system is vulnerable to a number of security attacks, including "man in the middle" attacks. This is critically important from a security perspective.

Note: For on-premise customers, if the Platform certificate needs to change, always update the SSLCACert.pem file on all Agents before updating the Platform certificate. (If the certificate is changed on the Platform before it is changed on the Agents, communications from the Agent will stop.)

Note: Axeda ODC automatically notifies on-demand customers about any certificate updates and renewals. At this point, though, Axeda ODC certificate updates are not scheduled for several years.

Finally, it is recommended that your Axeda Builder project always specify "Validate Server Certificate" and set the encryption level to the strongest level supported by the Web server. Axeda recommends 168-bit encryption, which will use one of the following encryption ciphers: AES256-SHA or DES-CBC3-SHA.

Need more information? For information about configuring and managing Agent certificate authentication, see the Using SSL with Axeda® Platform Guide.
Background: In the event that a Gateway/Connector Agent is offline or unable to connect to the Axeda Cloud Server, it uses an internal message queue to store information until the connection is restored. The message queue size is configured in the Axeda Builder project. By default, the queue is 200KB in size. Depending on how frequently your Agent sends data or how much data your Agent is collecting and trying to send, 200KB may be too small. If the queue is too small, the data will "overflow" the queue. The queue is kept in memory only; data is not stored to disk and will be removed in a First-In-First-Out (FIFO) manner when the queue overflows. If you see queue overflow error messages in the Agent log (either EKernel.log or xGate.log), it may be time to change the size of the outbound message queue.

The correct size setting for the Agent outbound message queue takes three variables into consideration:
- How much information are you sending?
- What is the maximum expected duration for loss of connection to the Internet (Cloud Server)?
- How much memory is available for your process?

The more information the Agent is trying to send, the larger the queue size setting should be. Consider also that if your Agents are offline (disconnected) for a long period of time, they will likely accumulate lots of data, which may overflow the outbound message queue. If this is the case, you'll need to increase the queue or risk losing data.

Recommendation: Consider how the Agent operates (offline/online data collection) and how much data may be queued. When selecting the size of the queue, it's important to maintain a balance between protecting against data loss and not occupying too much memory. If you do determine that you need to increase the outbound message queue size, it's important to note that Axeda recommends a maximum outbound message queue size of about 2MB.

Need more information? For information about specifying Agent outbound message queue size, see the online help in Axeda® Builder (Enterprise Server Settings). For information about how the Agent delivers data to the Platform (via EEnterpriseProxy/xgEnterpriseProxy), see the Agent user's guide for your Agent: either Axeda® Platform Axeda® Gateway User's Guide (PDF) or Axeda® Platform Axeda® Connector User's Guide (PDF). Axeda Support Site links: Axeda® Gateway User's Guide, Axeda® Connector User's Guide.
Background: Axeda Agents can be configured with standard drivers to collect event-driven data, which is then sent to the Platform. Axeda provides many standard event-driven data (EDD) drivers for use with the Axeda Agent (as explained in the Axeda® Agents EDD Toolkit Reference (PDF)). All EDD drivers are configured by an XML file and enabled in Axeda Builder, through the Agent Data Items configuration. You can configure an EDD driver to send important information from your process to the Agent, including data items, events and alarms. The manner in which you configure your drivers will affect the ability of your project to operate efficiently.

Recommendation: Use drivers to reduce the amount of data sent to the Platform. Instead of sending data items to the Platform, which then generates an event or alarm, it is possible to use the drivers to scan for specific data points or conditions and send an event or alarm directly.

Before you can configure your Agents, you first need to determine how often your Agent should send data to the Enterprise server. Two example workflows and recommendations:
- If you want to monitor a data item every second or two, configure the Agent to do the monitoring.
- If you want to trend information once per day, perform that logic at the Enterprise Server.

These examples may address your actual use case, or your needs may fall somewhere in between. Ultimately, time scale (how often you want to monitor or trend data) and the resulting data volume should drive how your system handles data. More data is available at the Agent, and at a higher frequency, than is needed at the Platform. Processing at the Agent ensures that only the important results are communicated to the Platform, leading to a "cleaner" experience for the Platform. Using this guidance as a best practice will help reduce network traffic for your customers as well as ensure the best experience for Enterprise users using server data in their dashboards, reports, and custom applications.

Need more information? For information about the standard EDD drivers, see the Axeda® Agents: EDD Drivers Reference (PDF).
Background: The frequency with which an Agent checks its connection to the Axeda Cloud Server is called the Agent "ping rate" (also known as heartbeat). (For Axeda IDM Agents, the ping rate is referred to as the "poll rate"; the meaning is the same.) Pings are a very important aspect of Firewall-Friendly communication. All communication between the Agent and the Cloud Server is initiated by the Agent. In addition to indicating that the Agent is still active, the ping also gives the Cloud Server an opportunity to send commands back to the Agent on the ping acknowledgement.

The ping rate effectively defines how long users must wait before they can deliver a command or request to an Agent. Typical commands may include setting a data item, starting an Access Session, or running a script. The place where ping rate is most noticeable to system users is when requesting a remote session. When a session request has been submitted by the user, the Cloud Server waits for the next Agent ping in order to send down the command to begin the session process. A longer ping rate means the remote session takes longer to get started. (Note that the same is true of any command initiated from the Axeda Cloud Server.)

Ping traffic comprises the majority of inbound traffic to the Cloud Server. The higher the ping rate, the more resources are consumed on the Server and the greater the network bandwidth requirements for the customer. Unnecessarily high ping rates will result in an increase in network traffic on your customer's network. By default, the ping rate for Firewall-Friendly Agents is 60 seconds, or every 1 minute. The Agent ping rate is set using Axeda Builder when configuring the project. The ping rate can also be set via an action from the Axeda Cloud Server. When set via an action, the new ping rate is in effect until the next Agent restart (at which time the Agent will go back to the default ping rate set in the project).

The Axeda Cloud Server also uses the Agent ping rate to determine when assets are missing. One of the model settings defines how many missed pings (or missed pings and time) will cause a device to be marked as missing. The default setting for new models is to mark assets as missing after they've missed 3 consecutive pings.

Recommendations: Make sure that your Agents' ping rates are set to reasonable frequencies. The ping rate should be set based on use case and not necessarily volume. The recommended practice is to make sure the ping rate is never set to less than 60 seconds. Where possible, the ping rate should be set to 2 minutes or higher. In the end, it is often user expectations around starting Access sessions that drive the ping rate value.

If only occasional user access is required, one recommendation is to dynamically adjust the ping rate when conditions require expedited communication with the Cloud Server. One use case is to expedite a remote session when a device is in an alarm condition or when an end user needs assistance; in this case you would temporarily increase the ping rate. This can be done using an action from the Cloud Server, by downloading a software package ping rate update, or by Agent extension using the SDK. (For information about using the Agent SDK, see the Axeda® Platform Extending Axeda® Agents PDF.)

You can configure alerts to indicate if an asset is missing. Axeda recommends that you configure the alert to a reasonable time given your resources and the expense of tracking every missing asset.
A reasonable missing-asset alert for your organization may be 1-2 days, meaning the Server generates the missing-asset alert only after the asset has been missing for one or two days, based on its ping rate; an asset should be marked as missing only after 15 missed pings or 30 minutes (whichever is less). The most common cause of a missing asset is not an issue with the device but rather the loss of Internet connectivity.

Note: Any communication from an Agent also serves the function of a ping. E.g., if the ping rate is set to 30 minutes and the device is sending a data value every 5 minutes, the effective ping rate is 5 minutes.

Need more information? For information about specifying the Agent ping rate, see the online help in Axeda® Builder (Enterprise Server Settings). If setting the ping rate from Platform actions or verifying Agent ping rates, see the online help of the Axeda® Connected Management Applications.
Background: In very rare situations, it is possible that the Firewall-Friendly Agent process may stop running. If the Agent is not running, no machine monitoring or communication with the Cloud Server is possible.

Recommendation: WatchDog is a little-known yet very helpful feature available with Firewall-Friendly Agents. This program lets you monitor whether an Agent is running; if it's not running, WatchDog can restart that Agent. If the Agent process fails, WatchDog can bring it back up! WatchDog can also be configured to watch other processes.

You can configure WatchDog to run as a service (for Windows) or a daemon (for Linux); you register WatchDog to run as a service. The WatchDog configuration file specifies the process(es) to be monitored and what to do when one exits: the options are to attempt to restart the process or to restart the system.

Note: WatchDog detects only whether a watched process exits. It will not detect or report on processes that may be "hanging".

Need more information? For information about configuring and using WatchDog, see the Agent User's Guide for your Agent: either Axeda® Platform Axeda® Gateway User's Guide (PDF) or Axeda® Platform Axeda® Connector User's Guide (PDF).
Background: Customer machines create files containing data, configuration, log data, etc. These files may contain critical information for service technicians about the asset and its data. How can you make sure you get this important information to the Cloud as soon as it's available? Use File Watcher, a standard tool available with Firewall-Friendly Agents. You can configure File Watcher to monitor a directory or file specification and automatically upload the file(s) to the Axeda Cloud. By monitoring selected directories and files on the device, the File Watcher determines which files have changed or appeared and then uploads those files to the Axeda Cloud Server.

Sometimes the size of a file being written or copied into the file watch target is large enough that the operation takes time to complete. Without the right precautions, the file upload may start before the file creation process has completed.

Recommendations: To ensure that large files don't cause problems, we recommend using a "move" or "rename" operation instead of a file copy operation to place your file into a watched directory. Unlike a "copy" operation, atomic operations like move or rename ensure that the Agent doesn't detect a file in the midst of growing and prematurely begin to upload it.

In situations where it isn't possible to use the atomic "move" or "rename" operations, you can implement a delay. Setting a delay in the File Watcher configuration will prevent the Agent from sending files to the Axeda Cloud Server before those files have been completely transferred into the watched file directory.

Finally, use file compression wisely. You need to balance the benefits of compressing files before sending against the potential adverse impact compression will have on the Agent. Compressing very large files BEFORE sending to the Platform will tax the Agent, whereas compressing smaller files can be beneficial. If the Agent's computer is of lower power, the CPU may have to work overtime to compress huge files or to send huge uncompressed files. As a general rule, compressing files before sending is recommended. However, depending on your Agent and network setup, it may prove more beneficial to stream uncompressed files rather than compress first and then send.

Need more information? For information about configuring and using file watchers, see the Axeda® Builder User's Guide (PDF) or the online help in Axeda Builder.