
IoT & Connectivity Tips

This Zip file contains the Axeda patch (axeda-jms-plugin-<version>-machine-streams) required for proper installation and configuration of an Apache ActiveMQ server to use with the Axeda Machine Streams service (which is supported for Axeda Platform v6.8 and later). Note: Information about the Axeda Machine Streams feature is provided in the Axeda Features Guide available from the Axeda Support site, http://help.axeda.com. This patch overlay needs to be applied to the v5.8.0 ActiveMQ server installed as the Axeda Machine Streams endpoint broker, so that Axeda Platform can send streamed content to that server endpoint. Complete instructions for installing and configuring an Apache ActiveMQ server for Axeda Machine Streams are provided in the reference, Axeda® Machine Streams: A Guide to Setting Up Broker Endpoints. This guide is available with all Axeda product documentation from the PTC Support site.
View full tip
Hello everyone. This post is meant to fill the gaps left by Basic Rules of ThingWorx Development. You can follow these rules even before starting the development process, and keeping them in mind will give you an organized and easy-to-maintain application. I will update this post in the future with more best practices and advice.

Best practices and suggestions:

In order to make clean and quick progress in any project, the approach should be modular. If the modular approach is adopted, the development process should also be thought of in a modular way. This gives much-needed independence to each individual developer, especially if the team works concurrently on the same instance. Some rules need to be in place for the project to run as smoothly as possible:

1. Every developer must have their own user. This is more important when developing on the same ThingWorx instance, but it is good practice when developing on individual instances as well.
2. Every developer is responsible for complete modules, from the respective screens of the GUI to the functionality services and business logic.
3. If concurrent work on the same Entity is needed, the developers must communicate and time-share that Entity so they do not overwrite each other's code. Don't go into edit mode if someone else is already editing; that will lead you to a dead end.
4. For rule 3 to work, after editing an Entity each user must press the Cancel Edit button and leave that Entity in View mode.
5. When searching for services or properties, developers should avoid clicking the name of the Entity, which is a link that opens the Entity directly in Edit mode. They should instead use the magnifying-glass button to the left of the name, which opens the Entity in View mode.
6. As a result of the modular approach, each module will have its own Utility Thing containing the services, properties, events and subscriptions that support the functionality of that module.
7. Each module will have its own tags, and the format could be: <Client_Name><GUI/Business><Module_Name>
8. The integration of the modules will be done in the Master either by a single person in charge of that master or by each developer in turn.
9. Depending on the case, the Data Model can be treated as a module in its own right or integrated into each module if the project permits.

How to manage multiple users working on the same code in Composer (thanks to Pai Chung):

ThingWorx lets you document your work heavily within the development environment, including 'Save with Comment'. We encourage the use of the Documentation field and the 'Save with Comment' option. However, development is generally not isolated to one environment, and ThingWorx provides several ways to back up the information:

- Backup - a true database backup that creates an additional database in ThingworxBackupStorage and can essentially be used as a restore by copying it back into ThingworxStorage.
- Export to ThingworxStorage - a full model export (with or without data) that can be triggered at any time. It can use date filters to export according to modified date. This is server side.
- Export to File - exports a single entity or a group of entities/data according to a variety of filters. This is client side.
- Export to Source Controlled Entities - exports to a file-folder structure or zip that can easily be checked into a source control system.

How to approach source control:

1. After some initial modeling, Export to Source Controlled Entities and check the result into your source control system.
2. From this point forward, all developers have to follow a check-in/check-out process.
3. Every time an Entity Group security setting is changed, Export to ThingworxStorage and check that into source control as well, overwriting the previous export.
4. All extensions in use should be kept in one zip that also resides in source control.

To do a restore or deploy:

1. Install the Platform.
2. Install the extensions.
3. Import from ThingworxStorage the last export checked in.
4. Import each single Entity file, in the proper order.
5. Import each single Data file.
6. Clean up dead entities (if there is a reference list).

Additional steps to take to help safeguard the development:

1. Make sure the automatic backup is running.
2. Export the Entity to a subfolder named with the date of the edit.
3. Run a full Export to ThingworxStorage every day before development starts. This can be scripted and triggered by a timer or scheduler subscription (<Server>/Thingworx/ExportDatabase/?WithData=true); one way to script the call is sketched at the end of this post. This way you have a backup of everything as it was before you started working each day, so you can roll back if an error occurs.

CONTINUED 7 Sep 2015

How to organize wiring needs when developing the GUI:

Starting from the idea that we can divide GUI elements into Display Elements and User Action Elements, I have created a common form to be filled in with the information necessary for wiring each element:

- UI Element Type: Display Element / User Action Element
- Thing Name: name of the Thing where the data or service is found
- Service Name: service inside the Thing that returns the data or is the subject of the action
- Property(ies) Name: Thing property or column name (when the service returns an infotable) for Display Elements; input parameters for the service to be run for User Action Elements
- Additional Logic: additional information about how the information sources change when preconditions are met. Usually this means new services or mashup logic is needed.

I suggest creating a companion document to the GUI description document. It will contain the form (table) above for each screen/slide, so that work on a specific screen/slide can be done independently.

To be continued...
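As referenced above, here is a minimal sketch of scripting the daily full export from outside ThingWorx. It is a standalone Groovy script, not ThingWorx service code; the server URL and credentials are placeholders, and it assumes your ThingWorx instance accepts HTTP Basic authentication for an administrative user, which you should verify against your own version and security setup before relying on it.

// Placeholder values -- substitute your own server and an account allowed to export.
def server   = "https://myserver:8443"
def user     = "Administrator"
def password = "changeme"

// Trigger the full export (with data) described above.
def conn = new URL("${server}/Thingworx/ExportDatabase/?WithData=true").openConnection()
conn.setRequestProperty("Authorization",
    "Basic " + "${user}:${password}".bytes.encodeBase64().toString())

// Reading the response code sends the request; anything other than 200 needs investigation.
println "Export request returned HTTP ${conn.responseCode}"

Scheduling this with cron (or performing the equivalent call from a ThingWorx scheduler subscription) gives you the daily pre-work snapshot described above.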
View full tip
Joint Study by Cognizant and the Economist Intelligence Unit (EIU) - Making products smart can deliver game-changing innovation, enriched customer experiences and new, across-the-board levels of efficiency. From R&D and manufacturing, through distribution and after-sales support, product data is changing how products are built, sold and cared for. Our latest research reveals practical steps business leaders can take to benefit from this quickly intensifying and accelerating trend.
View full tip
A student in the Axeda Groovy course had some good questions. Instead of answering via email, I thought I'd answer here, so we could share the knowledge. Domain Objects - How can I customize or extend Axeda provided domain objects? (Eg., I need to store whether a user is external or internal user) How can I achieve this? This is a perfect use case for the Axeda Extended Data API. Documentation on the API can be found in the Axeda Platform v1 API Developer's Reference Guide. A good, "Getting Started," topic is available on Axeda Mentor - Getting Started With the Axeda Extended Data API. To customize/extend/decorate Axeda-provided domain objects, use Extended Objects. Extended Objects can be thought of in the abstract as a collection of custom database table rows. The analogy isn't perfect, but it helps with understanding. The use case in the question is very common. Extended Objects can stand on their own, or they can be associated with an Axeda domain object with the setInternalID() method. In the following sample code, we "decorate" an Axeda User object with two new fields, isExternalUser and companyID: private void updateAdditionalFieldsForUser(User user, boolean isExternalUser, String companyID) {   // GET OBJECT TYPE - This example assumes the ExtendedObjectType has already been created   ExtendedObjectType extObjectType = extendedObjectService.findExtendedObjectTypeByClassname("com.axeda.drm.sdk.user.User")   // GET PROPERTY TYPES   PropertyType isExternalUserPropertyType = findOrCreatePropertyType(extObjectType, "IS_EXTERNAL_USER")   PropertyType companyIdPropertyType = findOrCreatePropertyType(extObjectType, "COMPANY_ID")      /* GET THE EXTENDED OBJECT   * Note - the example provides a findExtendedObject() method that searches by INTERNAL ID. This linkage via the internal ID   * associates a domain object to an ExtendedObject, allowing us to "decorate" it with custom attributes   */   ExtendedObject extObject = findExtendedObject(extObjectType, user.id.value)   if (extObject == null)   {     extObject = new ExtendedObject()     extObject.setExtendedObjectType(extObjectType)     extObject.setInternalObjectId(user.id.value)     extObject.addProperty(createExtendedProperty(isExternalUserPropertyType, isExternalUser.toString()))     extObject.addProperty(createExtendedProperty(companyIdPropertyType, companyID))     extendedObjectService.createExtendedObject(extObject)   }   else   {     addOrUpdateExtendedProperty(extObject, isExternalUserPropertyType, "IS_EXTERNAL_USER", isExternalUser.toString())     addOrUpdateExtendedProperty(extObject, companyIdPropertyType, "COMPANY_ID", companyID)     extendedObjectService.updateExtendedObject(extObject)   } }  /**  * Adds or Updates an extended property.  *  * @param extendedObject the extended object to associate with the property  * @param propertyType the type of the property  * @param propertyName the name of the property  * @param propertyValue the value of the property  */ private void addOrUpdateExtendedProperty(ExtendedObject extendedObject, PropertyType propertyType, String propertyName, String propertyValue) {   Property extendedProperty = extendedObject.getPropertyByName(propertyName)   if (extendedProperty == null)   {   extendedProperty = createExtendedProperty(propertyType, propertyValue)   extendedObject.addProperty(extendedProperty)   }   else   {   extendedProperty.setValue(propertyValue)   } }  /**  * Retrieves and returns the extended object with the given type and internal id.  
*  * @param extObjectType the type of the desired external object  * @param internalObjectId the internal id of the desired extended object  * @return the extended object matching the given details or <code>null</code> in case there's no match  */ private ExtendedObject findExtendedObject(ExtendedObjectType extObjectType, Long internalObjectId) {   ExtendedObjectSearchCriteria searchCriteria = new ExtendedObjectSearchCriteria()   searchCriteria.setExtendedObjectTypeId(extObjectType.getId())   searchCriteria.setInternalObjectId(internalObjectId)    List<ExtendedObject> extObjects = extendedObjectService.findExtendedObjects(searchCriteria, -1, 0, "name")    if (!extObjects?.isEmpty())   {    return extObjects[0]   }   return null }  /**  * Gets a property type given its name and the associated extended object type.  * If it cannot find the property type it will create it.  *  * @param extendedObjectType the associated extended object type for the property type  * @param propertyTypeName the property type name  * @return the property type  */  private PropertyType findOrCreatePropertyType(ExtendedObjectType extendedObjectType, String propertyTypeName)  {     PropertyType propertyType = extendedObjectType.getPropertyTypeByName(propertyTypeName)     if (propertyType == null)     {       propertyType = new PropertyType()       propertyType.setDataType(PropertyDataType.String)       propertyType.setName(propertyTypeName)       propertyType.setExtendedObjectType(extendedObjectType)       extendedObjectService.createPropertyType(propertyType)       extendedObjectType.addPropertyType(propertyType)     }     return propertyType  } By using Extended Objects, and associating them with Axeda domain objects via an internal ID, we can decorate/extend/customize those domain objects with use-case-specific attributes.
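To round out the example, here is a minimal sketch of reading the decoration back. It reuses the same classes, the extendedObjectService bridge and the findExtendedObject() helper shown above; the getValue() accessor on Property is an assumption on my part (the read counterpart of the setValue() call used earlier), so confirm it against the Extended Data API reference before relying on it.

/**
 * A sketch: returns true if the given user has been decorated as an external user.
 */
private boolean isExternalUser(User user)
{
  ExtendedObjectType extObjectType =
      extendedObjectService.findExtendedObjectTypeByClassname("com.axeda.drm.sdk.user.User")
  // findExtendedObject() is the helper defined in the example above.
  ExtendedObject extObject = findExtendedObject(extObjectType, user.id.value)
  if (extObject == null)
  {
    return false   // the user has not been decorated yet
  }
  Property isExternal = extObject.getPropertyByName("IS_EXTERNAL_USER")
  // getValue() is assumed to return the string written by setValue() above.
  return isExternal != null && Boolean.parseBoolean(isExternal.getValue())
}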
View full tip
In the ptc-windchill-extension-1.0.0-14.zip archive there is an extension called infotableselector_ExtensionPackage.zip​ . This extension enables the use of the Widget called Infotable Selector, which can be used to clear the selection in a grid. For how to use this widget, take a look at the picture:
View full tip
Today we're going to learn how to use the Axeda Platform SDK v2 APIs to upload a file to the platform and create a software package.  This document is a work in progress, but we're going to show you everything you need to get started.  In my case I am using the very useful and easy to use Postman REST Client app available from the Chrome Store.  I'll be using some terms below (API Object Names) that can be found in the documents listed in the bibliography at the end of this article. Assumptions (Replace these with your own versions): username:  joe, password: password1! platform instance:  axedaplatform.example.com First things first, we need to authenticate to the platform and get a session id (header x_axeda_wss_sessionid). (Note: Postman does not automatically URL encode query parameters - this can be especially important for the password) GET:  https://axedaplatform.example.com/services/v1/rest/Auth/login?principal.username=joe&password=password1! You'll receive a response like this following: <ns1:WSSessionInfo xmlns:ns1="http://type.v1.webservices.sl.axeda.com" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="ns1:WSSessionInfo">     <ns1:created>2015-06-02T15:16:49 +0000</ns1:created>     <ns1:expired>false</ns1:expired>     <ns1:sessionId>1a5XXXXX-d9aa-47f2-ac4f-28765ce5dbc5</ns1:sessionId>     <ns1:sessionTimeout>1800</ns1:sessionTimeout> </ns1:WSSessionInfo>                Excellent, now we have a session id! For the rest of the API calls (unless otherwise indicated), all of the following headers are set to the following: x_axeda_wss_sessionid: 1a5XXXXX-d9aa-47f2-ac4f-28765ce5dbc5 Content-Type: application/xml Accept: application/xml The next step is to get our ModelReference: POST:  https://axedaplatform.example.com/services/v2/rest/model/findOne <?xml version="1.0" encoding="UTF-8"?> <ModelCriteria xmlns="http://www.axeda.com/services/v2"> <modelNumber>MyModelName</modelNumber> </ModelCriteria>          Which will return output like: <v2:Model xmlns:v2="http://www.axeda.com/services/v2" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"         id="MyModelName" systemId="6141" label="managed" detail="MyModelName"         restUrl="https://sandbox.axeda.com/services/v2/rest/model/id/6141">     <v2:name>MyModelName</v2:name>     <v2:modelNumber>MyModelName</v2:modelNumber>     <v2:autoRegisterAssets>false</v2:autoRegisterAssets>     <v2:type>MANAGED</v2:type> ... </v2:Model>          The key piece of information we need from that request is the systemId. A little bit about our file (lorem-ipsum.txt): Lorem ipsum dolor sit amet, consectetur adipiscing elit. Integer nec odio. Praesent libero. Sed cursus ante dapibus diam. Sed nisi. Nulla quis sem at nibh elementum imperdiet. Duis sagittis ipsum. Praesent mauris. Fusce nec tellus sed augue semper porta. Mauris massa. Vestibulum lacinia arcu eget nulla. File-size: 307 MD5 Sum: 22b229c7ecc49cfa11255beb06c7f4fe The next step is to create a FileUploadSession and upload our file.  This will create for us the FileInfoReference we need to create our SoftwarePackage. 
PUT:  https://axedaplatform.example.com/services/v2/rest/file/session BODY: <?xml version="1.0"?> <FileUploadSession xmlns='http://www.axeda.com/services/v2'>   <files>     <file xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance' xsi:type='FileInfo'>       <filename>lorem-ipsum.txt</filename>       <md5>22b229c7ecc49cfa11255beb06c7f4fe</md5>       <filesize>307</filesize>       <contentType>application/text</contentType>     </file>   </files>   <expirationDate/>   <status/>   <updatedDate/>   <username/>   <version/> </FileUploadSession>              And our response if all goes OK (HTTP 200) looks like the following: <v2:ExecutionResult xmlns:v2="http://www.axeda.com/services/v2"         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" successful="true" totalCount="1">     <v2:succeeded>         <v2:success xsi:type="v2:FileUploadSessionSuccessfulOperation">             <v2:ref>16265</v2:ref>             <v2:id>16265</v2:id>             <v2:uploadUri>sftp://DISABLED</v2:uploadUri>             <v2:session systemId="16265" label="16265" detail="16265"                 restUrl="https://sandbox.axeda.com/services/v2/rest/file/id/16265">                 <v2:files>                     <v2:file xsi:type="v2:FileInfo" id="1068731" systemId="1068731"                         label="lorem-ipsum.txt" detail="1068731"> ... </v2:success> </v2:succeeded> </v2:ExecutionResult>         In this case, we just need the value of <v2:file systemId>, which is 1068731. TIME TO UPLOAD THE FILE CONTENTS!!! PUT: https://axedaplatform.example.com/services/v2/rest/file/1068731/content/ Extra Headers: X-File-Name: lorem-ipsum.txt X-File-Size: 307 Content-Type: multipart/form-data; boundary=----WebKitFormBoundary7MA4YWxkTrZu0gW BODY:  There needs to be a mime-part called 'file-content' that contains the contents or lorem-ipsum.txt ----WebKitFormBoundary7MA4YWxkTrZu0gW Content-Disposition: form-data; name="file-content"; filename="cfk-lorem-ipsum.txt" Content-Type: text/plain ----WebKitFormBoundary7MA4YWxkTrZu0gW         Note:  If using Postman, SoapUI or other automated tool, this will be handled automatically for you - do not specify a Content-Type header in this case. And our response, assuming an HTTP 200: <v2:ExecutionResult xmlns:v2="http://www.axeda.com/services/v2" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" successful="true" totalCount="1">     <v2:succeeded>         <v2:success>             <v2:ref>1068731</v2:ref>             <v2:id>1068731</v2:id>         </v2:success>     </v2:succeeded>     <v2:failures /> </v2:ExecutionResult>        This is just confirming our success!  Excellent.  Now we come to the SoftwarePackage.  
We need two key pieces of information, the ModelReference (6141) and the FileInfoReference (1068731): POST: https://axedaplatform.example.com/services/v2/rest/softwarePackage Headers: Our defaults, Content-Type and x_axeda_wss_sessionid BODY: <?xml version="1.0" encoding="UTF-8"?> <SoftwarePackage xmlns="http://www.axeda.com/services/v2" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">   <name>TEST-REST-PACKAGE</name>   <model systemId="6141" />   <version>1.0.0.1</version>   <primaryAgentsOnly>true</primaryAgentsOnly>   <retriesEnabled>true</retriesEnabled>   <instructions>     <instruction xsi:type="DownloadFileInstruction">       <file xsi:type="FileInfo" systemId="1068731"/>       <destinationDirectory>C:\temp</destinationDirectory>       <compressed>false</compressed>       <executable>false</executable>       <pathRelative>false</pathRelative>       <overwriteExistingEnabled>true</overwriteExistingEnabled>     </instruction>   </instructions> </SoftwarePackage>        And our results: <v2:ExecutionResult xmlns:v2="http://www.axeda.com/services/v2"          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" successful="true" totalCount="1">     <v2:succeeded>         <v2:success>             <v2:ref>TEST-REST-PACKAGE||1.0.0.1</v2:ref>             <v2:id>45863</v2:id>         </v2:success>     </v2:succeeded>     <v2:failures /> </v2:ExecutionResult>        And PROOF! I hope this helps you in your projects, and helps demystify the Axeda Platform REST API a little for you. Regards, -Chris Bibliography (Documents available from Support Portal): Axeda v2 API/Services Developer's Reference Guide_6.8 Axeda Platform Web Services Developer Reference v2REST_6.8 Change History: 2015-09-24 : Change HTTP Methods of session create and content send to PUT from POST
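If you prefer to drive the same flow from a script rather than Postman, here is a minimal Groovy sketch of the first step (authentication) using only standard Java networking APIs. The host and credentials are the placeholder values from this article, and URLEncoder handles the URL encoding of special characters called out in the note above; the remaining calls follow the same pattern with the x_axeda_wss_sessionid header added.

// Placeholder credentials from the assumptions above.
def host     = "https://axedaplatform.example.com"
def username = URLEncoder.encode("joe", "UTF-8")
def password = URLEncoder.encode("password1!", "UTF-8")   // encode special characters explicitly

def conn = new URL("${host}/services/v1/rest/Auth/login?principal.username=${username}&password=${password}")
               .openConnection()

def body = conn.inputStream.text   // the WSSessionInfo XML shown above

// Crude extraction of the sessionId element; a real script would use XmlSlurper.
def sessionId = (body =~ /<ns1:sessionId>([^<]+)<\/ns1:sessionId>/)[0][1]
println "x_axeda_wss_sessionid: ${sessionId}"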
View full tip
The Axeda Platform has long had the ability to write custom logic to retrieve, manipulate and create data. In the current versions of the Platform, there are two classes of API, Version 1 (v1) and Version 2 (v2). The v1 APIs allow a developer to work with data on the Platform, but all of those APIs are subject to the maxQueryResults configuration property, which by default limits the number of results per query to 1000. For some subsets of data, this can be inadequate. In comes the v2 API, which introduces pagination. One of the first things a new user does when exploring the v2 API is something like the following:

HistoricalDataItemValueCriteria criteria = new HistoricalDataItemValueCriteria()
criteria.assetId = '9701'
criteria.startDate = '2014-07-23T12:33:00Z'
criteria.endDate = '2014-07-23T12:44:00Z'

DataItemBridge dbridge = com.axeda.sdk.v2.dsl.Bridges.dataItemBridge
FindDataItemValueResult results = dbridge.findHistoricalValues(criteria)

And they get frustrated when they only get the same 100 rows of data. Repeat after me: v2 API invocations (find operations) are limited to batches of 100 results at a time! But that's not the end of the story. With a small change, the query above can be tuned to iterate through all results that match the search criteria:

HistoricalDataItemValueCriteria criteria = new HistoricalDataItemValueCriteria()
criteria.assetId = '9701'
criteria.startDate = '2014-07-23T12:33:00Z'
criteria.endDate = '2014-07-23T12:44:00Z'
criteria.pageNumber = 1
criteria.pageSize = 100 // Default.

DataItemBridge dbridge = com.axeda.sdk.v2.dsl.Bridges.dataItemBridge
FindDataItemValueResult results
def tcount = 0
while ((results = dbridge.findHistoricalValues(criteria)) != null && tcount < results.totalCount) {
  results.dataItems.each { res ->
    tcount++
  }
  criteria.pageNumber = criteria.pageNumber + 1
}

I currently recommend that people avoid using the count() or countDomainObjectByCriteria() functions if they are then going to call a find. Currently both the count*() and find functions compute total results, which doubles the execution time of just those two calls. Total count is only computed when running the first find() operation, so the code pattern above is so far the most efficient way I've seen to run these operations on the platform. Having covered how to do this in code (custom objects), let's turn our attention to the REST APIs - the other entry point for using these capabilities. The REST API doesn't offer a count*() function, but the first find() invocation (if using XML) brings back totalCount as part of the result set. You can use this in your application to decide how many times to call the REST endpoint to retrieve your data.
So for the example above:

POST: https://customer-sandbox.axeda.com/services/v2/dataItem/findHistoricalValues

HEADERS:
Content-Type: application/xml
Accept: application/xml

BODY:
<?xml version="1.0" encoding="UTF-8"?>
<HistoricalDataItemValueCriteria xmlns="http://www.axeda.com/services/v2" pageSize="100" pageNumber="1">
<assetId>9701</assetId>
<startDate>2014-07-23T12:33:00Z</startDate>
<endDate>2014-07-23T12:35:02Z</endDate>
</HistoricalDataItemValueCriteria>

RESULTS:
<v2:FindAssetResult totalCount="1882" xmlns:v2="http://www.axeda.com/services/v2" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
   <v2:criteria pageSize="100" pageNumber="1">
      <v2:name>*</v2:name>
      <v2:propertyNames/>
   </v2:criteria>
   <v2:assets>
   </v2:assets>
</v2:FindAssetResult>

Or JSON:

POST: https://customer-sandbox.axeda.com/services/v2/dataItem/findHistoricalValues

HEADERS:
Content-Type: application/json
Accept: application/json

BODY:
{
  "id": 9701,
  "startDate": "2014-07-23T12:33:00Z",
  "endDate": "2014-07-23T12:35:02Z",
  "pageNumber": 1,
  "pageSize": 2
}

And that's how you work around the maxQueryResults limitation of the v1 APIs. Some APIs do not currently have matching v2 bridges (e.g., MobileLocation and DataItemAssociation), in which case the limitation still applies. Creative use of the query criteria will allow you to work around these limitations as we continue to improve the v2 API. Regards, -Chris
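For reuse across custom objects, the same pattern can be wrapped in a small helper. This is a sketch along the lines of the Groovy example above, not Platform-provided code; the method name and the closure-based callback are my own, and it assumes the same bridge and criteria classes shown earlier.

// A sketch: walk every page of historical values and hand each one to a closure.
def eachHistoricalValue(String assetId, String start, String end, Closure handler) {
  def criteria = new com.axeda.services.v2.HistoricalDataItemValueCriteria()
  criteria.assetId    = assetId
  criteria.startDate  = start
  criteria.endDate    = end
  criteria.pageSize   = 100
  criteria.pageNumber = 1

  def dbridge = com.axeda.sdk.v2.dsl.Bridges.dataItemBridge
  def seen = 0
  def results
  while ((results = dbridge.findHistoricalValues(criteria)) != null && seen < results.totalCount) {
    results.dataItems.each { item ->
      handler(item)
      seen++
    }
    criteria.pageNumber++
  }
  return seen
}

// Usage: process every value in the window from the example above.
def total = eachHistoricalValue('9701', '2014-07-23T12:33:00Z', '2014-07-23T12:44:00Z') { item ->
  // do something with each data item value here
}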
View full tip
One of the signature features of the Axeda Platform is our alarm notification, signalling and auditing capabilities.   Our dashboard offers a simplified view into assets that are in an alarm state, and provides interaction between devices and operators.  For some customers the dashboard may be too extensive for their application needs.  The Axeda Platform from versions 6.6 onward provide a number of ways of interacting with Alarms to allow you to present this data to remote clients (Android, iOS, etc.) or to build extended business logic around alarm processing. If one were to create a remote management application for Android, for example, there are the REST APIs available to interact with Assets and Alarms.  For aggregate operations where network traffic and round-trip time can be a concern, we have our Scripto API also available that allows you to use the Custom Object functionality to deliver information on many different aggregating criteria, and allow developers to get the data needed to build the applications to solve their business requirements. Shown below is a REST API call you might make to retrieve all alarms between a certain time and date. POST:   https://INSTANCENAME/services/v2/rest/alarm/find <v2:AlarmCriteria xmlns:v2="http://www.axeda.com/services/v2" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">    <v2:date xsi:type="v2:BetweenDateQuery">     <v2:start>2015-01-01T00:00:00.000Z</v2:start>     <v2:end>2015-01-31T23:59:59.000Z</v2:end>   </v2:date>   <v2:states/> </v2:AlarmCriteria>   In a custom object, this would like like the following: import static com.axeda.sdk.v2.dsl.Bridges.* import com.axeda.services.v2.* import com.axeda.sdk.v2.exception.* def q = new com.axeda.services.v2.BetweenDateQuery() q.start = new Date() q.end = new Date() ac = new AlarmCriteria(date:q) aresults = alarmBridge.find(ac)   Using the same API endpoint, here's how you would retrieve data by severity: <v2:AlarmCriteria xmlns:v2="http://www.axeda.com/services/v2" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">    <v2:severity xsi:type="v2:GreaterThanEqualToNumericQuery">     <v2:value>900</v2:value>   </v2:severity>   <v2:states/> </v2:AlarmCriteria>   Or in a custom object: import static com.axeda.sdk.v2.dsl.Bridges.* import com.axeda.services.v2.* import com.axeda.sdk.v2.exception.*   def q = new com.axeda.services.v2.GreaterThanEqualToNumericQuery() q.value = 900 ac = new AlarmCriteria(severity:q) aresults = alarmBridge.find(ac)   Currently the Query Types do not map properly in JSON objects - use XML to perform these types of queries via the REST APIs. References: Axeda v2 API/Services Developer's Reference Guide 6.6 Axeda Platform Web Services Developer Reference v2 REST 6.6 Axeda v2 API/Services Developer's Reference Guide 6.8 Axeda Platform Web Services Developer Reference v2 REST 6.8
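Combining the two filters above in a single custom-object call looks like the following sketch. The field names match the examples above; the pagination attributes on the criteria and the totalCount property on the result are assumptions based on the general v2 find pattern, as is the injected logger, so confirm them against the v2 reference guides listed above.

import static com.axeda.sdk.v2.dsl.Bridges.*
import com.axeda.services.v2.*
import com.axeda.sdk.v2.exception.*

// Alarms raised in January 2015 with severity >= 900.
def dateQuery = new BetweenDateQuery()
dateQuery.start = Date.parse("yyyy-MM-dd'T'HH:mm:ss.SSSZ", "2015-01-01T00:00:00.000+0000")
dateQuery.end   = Date.parse("yyyy-MM-dd'T'HH:mm:ss.SSSZ", "2015-01-31T23:59:59.000+0000")

def severityQuery = new GreaterThanEqualToNumericQuery()
severityQuery.value = 900

def ac = new AlarmCriteria(date: dateQuery, severity: severityQuery)
ac.pageNumber = 1          // assumed pagination fields, per the v2 find pattern
ac.pageSize   = 100

def aresults = alarmBridge.find(ac)
logger.info("Matched ${aresults.totalCount} alarms")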
View full tip
Background: Firewall-Friendly Agents can be configured for server certificate authentication in the Axeda Builder project or via the Axeda Deployment Utility. When server certificate authentication is configured, the Agent will compare the certificate chain sent by the Platform to a local copy of the CA certificate chain stored in the SSLCACert.pem file in the Agent’s home directory. The certificate validation compares three things: Does the name of the Platform certificate match the name in the request? Does the CA certificate match the CA certificate that signed the Platform certificate? Is the Platform or CA certificate not expired? If the answer to any of these questions is “no”, then connection is refused and the Agent does not communicate further with the Platform. To determine if certificate trouble is an issue, see the Agent log: EKernel.log or xGate.log. Recommendation: For Agent-Platform communications, we recommend always using SSL/HTTPS. If the Agent is not configured to validate the server certificate (via the trusted CA certificate), the system is vulnerable to a number of security attacks, including “man in the middle” attacks. This is critically important from a security perspective. Note: For on-premise customers, if the Platform certificate needs to change, always update the SSLCACert.pem file on all Agents before updating the Platform certificate. (If the certificate is changed on the Platform before it is changed on the Agents, communications from the Agent will stop.) Note: Axeda ODC automatically notifies on-demand customers about any certificate updates and renewals. At this point, though, Axeda ODC certificate updates are not scheduled for several years. Finally, it is recommended that your Axeda Builder project always specify “Validate Server Certificate” and set the encryption level to the strongest level supported by the Web server. Axeda recommends 168 bit encryption, which will use one of the following encryption ciphers: AES256-SHA or DES-CBC3-SHA. Need more information? For information about configuring and managing Agent certificate authentication, see Using SSL with Axeda® Platform Guide.
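When you need to answer the three questions above by hand, it helps to look at exactly which certificate chain the Platform is presenting. The following sketch is generic Java/Groovy rather than an Axeda API, intended to be run from a workstation; the host name is a placeholder.

// Inspect the certificate chain presented by the Platform (placeholder host).
def conn = (javax.net.ssl.HttpsURLConnection) new URL("https://axedaplatform.example.com/").openConnection()
conn.connect()
conn.serverCertificates.each { cert ->
  def x509 = (java.security.cert.X509Certificate) cert
  println "Subject: ${x509.subjectDN}"     // does the name match the request?
  println "Issuer:  ${x509.issuerDN}"      // does the CA match the chain in SSLCACert.pem?
  println "Expires: ${x509.notAfter}"      // is the certificate still valid?
  println "---"
}
conn.disconnect()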
View full tip
Background: In the event that a Gateway/Connector Agent is offline or unable to connect to the Axeda Cloud Server, it uses an internal message queue to store information until the connection is restored. The message queue size is configured in the Axeda Builder project. By default, the queue is 200KB in size. Depending on how frequently your Agent sends data or how much data your Agent is collecting and trying to send, 200KB may be too small.  If the queue is too small, the data will “overflow” the queue. The queue is kept in memory only; data is not stored to disk and will be removed in a First-In-First-Out (FIFO) manner when the queue overflows. If you see queue overflow error messages in the Agent log (either EKernel.log or xGate.log), it may be time to change the size of the outbound message queue. The correct size setting for the Agent outbound message queue takes three variables into consideration: How much information are you sending? What is the maximum expected duration for loss of connection to the Internet (Cloud Server)? How much memory is available for your process? The more information the Agent is trying to send, the larger the queue size setting should be. Consider also that if your Agents are offline (disconnected) for a long period of time, they will likely accumulate lots of data, which may overflow the outbound message queue. If this is the case, you’ll need to increase the queue or risk losing data. Recommendation: Consider how the Agent operates (offline/online data collection) and how much data may be queued. When selecting the size of the queue, it’s important to maintain a balance between protecting against data loss and not occupying too much memory. If you do determine that you need to increase the outbound message queue size, note that Axeda recommends a maximum outbound message queue size of about 2MB. Need more information? For information about specifying Agent outbound message queue size, see the online help in Axeda® Builder (Enterprise Server Settings). For information about how the Agent delivers data to the Platform (via EEnterpriseProxy/xgEnterpriseProxy), see the Agent user’s guide for your Agent: either Axeda® Platform Axeda® Gateway User’s Guide (PDF) or Axeda® Platform Axeda® Connector User’s Guide (PDF). Axeda Support Site links: Axeda® Gateway User’s Guide, Axeda® Connector User’s Guide.
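A quick back-of-the-envelope calculation makes the trade-off concrete. The figures below are hypothetical, purely to illustrate how the three variables combine; substitute measurements from your own Agent.

// Hypothetical figures -- replace with measurements from your own Agent.
def bytesPerMessage   = 512     // average serialized message size
def messagesPerMinute = 6       // data items, alarms and events sent per minute
def maxOfflineMinutes = 240     // longest expected loss of connectivity (4 hours)

def requiredQueueBytes = bytesPerMessage * messagesPerMinute * maxOfflineMinutes
println "Suggested outbound queue size: ${requiredQueueBytes / 1024} KB"
// => 720 KB here: well above the 200 KB default, but under the ~2 MB recommended ceiling.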
View full tip
Background: Axeda Agents can be configured with standard drivers to collect event-driven data, which is then sent to the Platform. Axeda provides many standard event-driven data (EDD) drivers for use with the Axeda Agent (as explained in Axeda® Agents EDD Toolkit Reference (PDF)). All EDD drivers are configured by an XML file and enabled in Axeda Builder, through the Agent Data Items configuration. You can configure an EDD driver to send important information from your process to the Agent, including data items, events and alarms. The manner in which you configure your drivers will affect how efficiently your project operates. Recommendation: Use drivers to reduce the amount of data sent to the Platform. Instead of sending data items to the Platform, which then generates an event or alarm, it is possible to use the drivers to scan for specific data points or conditions and send an event or alarm. Before you can configure your Agents, you first need to determine how often your Agent will need to send data to the Enterprise server. Two example workflows and recommendations: if you want to monitor a data item every second or two, configure the Agent to do the monitoring; if you want to trend information once per day, perform that logic at the Enterprise Server. These examples may address your actual use case, or your needs may fall somewhere in between. Ultimately, the time scale (how often you want to monitor or trend data) and the resulting data volume should drive how your system handles data. More data is available at the Agent, and at a higher frequency, than is needed at the Platform. Processing at the Agent ensures that only the important results are communicated to the Platform, leading to a “cleaner” experience for the Platform. Using this guidance as a best practice will help reduce network traffic for your customers as well as ensure the best experience for Enterprise users using server data in their dashboards, reports, and custom applications. Need more information? For information about the standard EDD Drivers, see the Axeda® Agents: EDD Drivers Reference (PDF).
View full tip
Background: The frequency with which an Agent checks its connection to the Axeda Cloud Server is called the Agent “ping rate” (also known as heartbeat). (For Axeda IDM Agents, ping rate is referred to as “poll rate”; the meaning is the same.) Pings are a very important aspect of Firewall-Friendly communication. All communication between the Agent and the Cloud Server is initiated by the Agent. In addition to indicating the Agent is still active, the Ping also gives the Cloud Server an opportunity to send commands back to the Agent on the Ping acknowledgement. The ping rate effectively defines how long users must wait before they can deliver a command or request to an Agent. Typical commands may include setting a data item, starting an Access Session, or running a script. The place where Ping rate is most noticeable to system users is when requesting a remote session. When a session request has been submitted by the user, the Cloud Server waits for the next Agent ping in order to send down the command to begin the session process. A longer ping rate means the remote session takes longer to get started. (Note that the same is true of any command initiated from the Axeda Cloud Server.) Ping traffic comprises the majority of inbound traffic to the Cloud Server. The higher the ping rate, the more resources are consumed on the Server and the greater the requirements for network bandwidth for the customer. Unnecessarily high ping rates will result in an increase in network traffic on your customer's network. By default, the ping rate for Firewall-Friendly Agents is 60 seconds, or every 1 minute. The Agent ping rate is set using Axeda Builder when configuring the project. The ping rate can also be set via an action from the Axeda Cloud Server. When set via an action, the new ping rate is in effect until the next Agent restart (at which time the Agent will go back to the default ping rate set in the project). The Axeda Cloud Server also uses Agent ping rate to determine when assets are missing. One of the model settings is to define how many missed pings (or missed pings and time) will cause a device to be marked as missing. The default setting for new models is to mark assets as missing after they’ve missed 3 consecutive pings. Recommendations: Make sure that your Agents’ ping rates are set to reasonable frequencies. The ping rate should be set based on use case and not necessarily volume. The recommended practice is to make sure the ping rate is never set less than 60 seconds. Where possible the ping rate should be set to 2 minutes or higher. In the end, it is often user expectations around starting Access sessions that drives the ping rate value. If only occasional user access is required, one recommendation is to dynamically adjust the ping rate when conditions require expedited communication with the Cloud Server. One use case is to expedite a remote session when a device is in alarm condition or when an end user needs assistance. In this case you would temporarily increase the ping rate. This can be done using an action from the Cloud Server, by downloading a software package ping rate update, or by Agent extension using the SDK. (For information about using the Agent SDK, see the Axeda® Platform Extending Axeda® Agents PDF.) You can configure alerts to indicate if an asset is missing. Axeda recommends that you configure the alert to a reasonable time given your resources and the expense of tracking every missing asset. 
A reasonable missing alert for your organization may be 1-2 days, meaning the Server generates the missing asset alert only after the asset has been missing for one or two days, based on its ping rate, and an asset should be marked as missing only after 15 missed pings or 30 minutes (whichever is less). The most common cause of a missing asset is not an issue with the device but rather the loss of Internet connectivity. Note: Any communication from an Agent also serves the function of a Ping. E.g., if the ping rate is set to 30 minutes and the device is sending a data value every 5 minutes, the effective Ping rate is 5 minutes. Need more information? For information about specifying Agent ping rate, see the online help in Axeda® Builder (Enterprise Server Settings). If setting the ping rate from Platform actions or verifying Agent ping rates, see the online help of the Axeda® Connected Management Applications.
View full tip
Background: In very rare situations, it is possible that the Firewall-Friendly Agent process may stop running. If the Agent is not running, no machine monitoring or communication with the Cloud Server is possible. Recommendation: WatchDog is a little known yet very helpful feature available with Firewall-Friendly Agents. This program lets you monitor whether an Agent is running; if it’s not running, WatchDog can restart that Agent if needed. If the Agent process fails, WatchDog can bring it back up! WatchDog can also be configured to watch other processes. You can configure WatchDog to run as a service (for Windows) or daemon (for Linux). You will register the Watchdog to run as a service.  The Watchdog configuration file will specify the process(es) to be monitored and what to do when one exits. The options are to attempt to restart the process or to restart the system. Note: Watchdog detects only if a watched process exits. It will not detect or report on processes that may be “hanging”. Need more information? For information about configuring and using WatchDog, see the Agent User’s Guide for your Agent: either Axeda® Platform Axeda® Gateway User’s Guide (PDF) or Axeda® Platform Axeda® Connector User’s Guide (PDF).
View full tip
Background: Customer machines create files containing data, configuration, log data, etc. These files may contain critical information for service technicians about the asset and its data. How can you make sure you get this important information to the Cloud as soon as it’s available? Use File Watcher, a standard tool available with Firewall-Friendly Agents. You can configure File Watcher to monitor a directory or file specification and automatically upload the file(s) to the Axeda Cloud. By monitoring selected directories and files on the device, the File Watcher determines which files have changed or appeared and then uploads those files to the Axeda Cloud Server. Sometimes the file being written or copied into the file watch target is large enough that it takes time to complete. Without taking the right precautions, the file upload may try to start before the file creation process is completed. Recommendations: To ensure that large files don’t cause problems, we recommend using a “move” or “rename” operation instead of a file copy operation to place your file into a watched directory. Unlike a “copy” operation, atomic operations like move or rename ensure that the Agent doesn’t detect a file in the midst of growing and prematurely begin to upload it. In situations when it isn’t possible to use an atomic “move” or “rename” operation, you can implement a delay. Setting a delay in the File Watcher configuration will prevent the Agent from sending files to the Axeda Cloud Server before those files have transferred completely to the watched file directory. Finally, use file compression wisely. You need to balance the benefits from compressing files before sending with the potential adverse impact that compression will have on the Agent. Compressing very large files BEFORE sending to the Platform will affect the Agent; however, compressing smaller files can be beneficial. If the Agent’s computer is of lower power, the CPU may have to work overtime to compress huge files or to send huge uncompressed files. As a general rule, compressing files before sending is recommended. However, depending on your Agent and network setup, it may prove more beneficial to stream uncompressed files rather than compress first and then send. Need more information? For information about configuring and using file watchers, see the Axeda® Builder User’s Guide (PDF) or the online help in Axeda Builder.
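Here is a minimal sketch of the move-into-place recommendation using standard Java NIO from Groovy. The paths are hypothetical; the only requirement is that the staging directory live on the same filesystem as the watched directory so the move really is atomic.

import java.nio.file.*

// Stage the file outside the watched directory, then move it in atomically.
def staging = Paths.get("/data/staging/report.csv")     // hypothetical paths
def watched = Paths.get("/data/filewatch/report.csv")

// ATOMIC_MOVE means the file never appears half-written in the watched directory,
// so the File Watcher only ever sees the completed file.
Files.move(staging, watched, StandardCopyOption.ATOMIC_MOVE)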
View full tip
We live in a connected world where we can (and want to!) receive instant updates and notifications. ThingWorx leverages the power of Web 2.0 and its Always-On technology to deliver that, but our friendly SMS providers have also given us an easy and powerful way to deliver SMS notifications right to your phone: Email to Text! Set up a 'notification' Thing using our MailServer Template, set up your outgoing e-mail server, and you are ready to invoke the 'SendMessage' service on a given event. All you need now is the email address of your SMS number, which you can find by following this link: List of e-Mail to SMS addresses (http://sms411.net/how-to-send-email-to-a-phone/)
View full tip
Connecting to other databases was one of the most requested topics in our training feedback, so we've just added a video tutorial in the Wiki for connecting ThingWorx to SQL Server (or SQL Server Express). See topic 7.04 or go to the Video Appendix. In essence, connecting to other databases like MySQL or Oracle works the same way, except that you will have to change the database URL and JDBC driver reference. Also, let me take this opportunity to wish everyone a blessed holiday season!
View full tip
One commonly asked question is what the correct settings are for the Configuration Tables tab when creating/setting up a Database Thing to connect to a SQL Server (2005 or later) database. There are a couple of ways to do this, but the tried and true settings are listed below.

connectionValidationString - SELECT GetDate()
jDBCConnectionURL - jdbc:sqlserver://servername;databaseName=databasename
jDBCDriverClass - com.microsoft.sqlserver.jdbc.SQLServerDriver
Max number of connections in the pool - 5 (this can be modified based on the number of concurrent connections required)
Database User Name - databaseusername
Database Password - databaseuserpassword

The JDBC driver file sqljdbc4.jar is installed with the ThingWorx server by default. It is located in TomcatDir\webapps\Thingworx\WEB-INF\lib\
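If the Database Thing won't connect, it can help to sanity-check the same settings outside ThingWorx. The following standalone Groovy sketch uses the placeholder values above and assumes sqljdbc4.jar is on the classpath (e.g. groovy -cp sqljdbc4.jar testConnection.groovy); it is a diagnostic aid, not part of the ThingWorx configuration itself.

import java.sql.DriverManager

// Same placeholder settings as the Database Thing configuration above.
Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver")
def conn = DriverManager.getConnection(
    "jdbc:sqlserver://servername;databaseName=databasename",
    "databaseusername",
    "databaseuserpassword")

// The same validation query used as connectionValidationString.
def rs = conn.createStatement().executeQuery("SELECT GetDate()")
rs.next()
println "Connection OK, SQL Server time: ${rs.getTimestamp(1)}"
conn.close()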
View full tip
Remember that when you are calling an external URL to fetch data via an API call to another system, you must encode special characters explicitly. For example, the URL that you might type into a browser to test may look like this:

https://someserver.somwhere.com:443/apicall?parameter1=test string&parameter2=test^number

but when scripting that into a string variable you'll need to replace the space and the caret (^) with the proper encoded values (%20 and %5E):

var params = {
    username : "me",
    password : "password",
    url : "https://someserver.somwhere.com:443/apicall?parameter1=test%20string&parameter2=test%5Enumber",
    ignoreSSLErrors : false,
    timeout : 60,
    headers : headers
};
var result = Resources['ContentLoaderFunctions'].LoadXML(params);

Also note that in this instance we're making a secure connection, therefore port 443 (typically the default) was explicitly specified.
View full tip