
IoT & Connectivity Tips

Steps to navigate to different mashups on double-click of different Google Map markers in ThingWorx:

1. Create a DataShape with two fields: Location and MashupName.
2. Create a DataTable using the DataShape.
3. Add the required Location data and the corresponding MashupName to be opened to the DataTable (a minimal scripted example follows below).
4. Create a mashup.
5. Add a GoogleMap and a Navigate widget to the mashup.
6. Bind All Data from the QueryDataTableEntries service of the DataTable to the GoogleMap widget by dragging and dropping that item onto the GoogleMap and selecting the Data option.
7. In the GoogleMap widget properties, set LocationField to Location.
8. Bind MashupName from Selected Row(s) of the QueryDataTableEntries service to the Navigate widget by dragging and dropping that item onto the Mashup Link and selecting the MashupName option.
9. Bind the DoubleClick event of the GoogleMap widget to the Navigate widget by dragging and dropping that item onto the Mashup Link and selecting the Navigate option.
10. Save and view the mashup.

Copy of my article as a blog post: https://support.ptc.com/appserver/cs/view/solution.jsp?n=CS249997&lang=en_US
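For step 3, here is a minimal scripted sketch of seeding the DataTable. The Thing name LocationMashupDT, the DataShape name LocationMashupDS, the coordinates, and the mashup name are placeholders invented for the example; entries can equally be added by hand from the DataTable's data view in Composer.

// Minimal sketch: add one Location/MashupName pair to the DataTable.
// "LocationMashupDT" and the field values below are placeholders - adjust to your entities.
var values = Things["LocationMashupDT"].CreateValues();
var loc = new Object();
loc.latitude = 42.36;      // placeholder coordinates
loc.longitude = -71.06;
loc.elevation = 0;
values.Location = loc;                         // LOCATION field used by the GoogleMap widget
values.MashupName = "BostonDetailMashup";      // STRING - the mashup to open on double-click

// result: STRING (the key of the new entry)
var id = Things["LocationMashupDT"].AddDataTableEntry({
    sourceType: "Script" /* STRING */,
    values: values /* INFOTABLE (LocationMashupDS) */,
    source: "SeedLocationsScript" /* STRING */,
    tags: undefined /* TAGS */
});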
The term ‘Extension’ or ‘Plugin’ often has many different meanings. From the point of view of a Product Manager, it usually means an ‘easy’ way to add functionality to an existing piece of software. From the point of view of a Software Developer, it often means new syntax to memorize, extensive API documentation to read, and very often weeks or even months of trial and error to make that ‘easy’ addition of functionality. At ThingWorx we recognized that, in order to be a true Platform, we needed to make the creation of Extensions easier at every phase of the process. Our development team took on the challenge of understanding what normally makes this such a difficult process and of finding solutions. We looked to the open source community for a platform that could offer our Extension developers a wide array of functionality that is easily accessible and familiar. This is where the Eclipse IDE comes in. We created a plugin of our own for the Eclipse IDE that makes it easy for an Extension developer to create an Extension project, generate ThingWorx-specific code, manage all of the project configuration and build files, and package the Extension. All of this is possible without reading any API documentation or manually writing boilerplate code, leaving the Extension developer free to focus on what they are best at: adding that additional functionality we mentioned earlier.

Extensibility and the true nature of a Platform

Extensibility is a core aspect of any true Platform, as it allows users to add functionality at any time to meet new and changing requirements. The capabilities of extensions are almost endless; here are a few examples:

- Adding new UI widgets used to visualize data
- Adding custom third-party libraries to be used seamlessly in ThingWorx
- Easily accessing REST APIs outside of ThingWorx
- Creating helper Resources to be used across the entire platform
- Adding custom Entities easily to multiple ThingWorx instances
- Providing custom Authenticators and Directory Services

As you can see, it is possible to do practically anything that you or our community might find useful for the Internet of Things. This is the nature of a true ‘Platform’.

How do I get started developing an extension?

Three steps will help you dive into Extension development quickly. First, you need an instance of ThingWorx Foundation and the ability to navigate its UI, called Composer. Second, a basic understanding of the ThingWorx Model, or “Thing Model”, is necessary. Finally, you will need an installation of the Eclipse IDE with the ThingWorx Eclipse Plugin installed.

1) Getting familiar with ThingWorx Foundation

The easiest way to start playing with the ThingWorx platform is to head over to the Developer Portal and spin up a hosted ThingWorx Foundation server. This is as easy as clicking the ‘Create Foundation Server’ button; a 30-day hosted instance will be created for you to use as your own personal development playground. If you prefer to set up and work in your own environment, you can also download a Developer Trial Edition to host on your own machine. To get familiar with ThingWorx Foundation, I recommend going through our ThingWorx Foundation Quickstart Guide, which introduces you to the core building blocks of the platform and guides you through a typical scenario of creating a simple IoT application.
2) Understanding the Thing Model basics

If you are already familiar with the Thing Model and know the basics of using the ThingWorx Platform, you can probably skip this section. If you aren't, or just want a refresher, here are the basics. The Thing Model is a collection of Entities that define your solution or business model in ThingWorx. You need a Thing Model for a few reasons. For those in software development, the Thing Model and its benefits are very similar to those of the object-oriented programming model. A good model allows you to maximize reusability, maintainability, and encapsulation. A sound Thing Model means that the future of your IIoT solution will be minimally affected by things like migration, iterative changes, permission changes, and security vulnerabilities. The three most commonly used and most important Entities within your model are Things, Thing Templates, and Thing Shapes; these entity types are the main building blocks of your Thing Model. They are only a few of the Entity Types provided by ThingWorx. It is not necessary, but definitely recommended, to gain a more comprehensive understanding of the Thing Model and to work with the entire collection of Entity Types in the ThingWorx Platform by going through the ThingWorx Foundation Quickstart Guide on the Developer Portal.

3) Dive into the Eclipse Plugin and develop your own extension

Lastly, if you don't have the Eclipse IDE installed, head over to the eclipse.org download page and install it on your machine. Once you have it, you can find the Eclipse Plugin on our Marketplace. As a next step, you will want to create a new Extension Project. We added a ThingWorx Extension perspective that enables all of the custom functionality. Once in the correct perspective, the plugin is designed to be as intuitive as possible: we followed as many of the Eclipse usability standards as we could, which means that if you are familiar with the Eclipse IDE you should be able to find most of the ThingWorx functionality on your own. For example, a simple ‘File’ >> ‘New’ will show you all the options for creating a project. Creating a new ThingWorx Extension project requires a name and a ThingWorx Extension SDK (found on the ThingWorx Marketplace) matching the version of ThingWorx you are building your extension for. By utilizing the capabilities of the Eclipse IDE, we were able to automate the creation of many of the artifacts that used to slow extension developers down: the wizard lets the plugin create and manage the build file and the metadata.xml, and manage all of the project dependencies. Once you have an Extension Project, you can use the ThingWorx menus to create your Entities; these actions create the necessary Java files and manage their entries in the metadata.xml of your project. After creating an Entity, you can right-click the applicable Java file to open the ‘ThingWorx Source’ menu, which houses the options to generate the code for additional characteristics (Services, Properties, etc.), making the need to learn all of the custom annotations and method signatures much less daunting. Once you have generated some code with the plugin, it is time to start implementing your solution - this is the point where my expertise ends and yours begins!
If you are interested in getting some more in-depth information on this topic, check out these additional resources:

- Tutorial: We have created a tutorial that guides you step-by-step through the entire process of developing a ThingWorx extension.
- Webcast: Watch this 60-minute, interactive deep-dive into IIoT development and learn how to use the Eclipse Plugin to rapidly create a custom ThingWorx extension.

Head over to the Developer Portal and start bringing all of your great ideas to life!
ThingWorx actually provides some services for this, but they export the permissions to an XML file. I'm pretty sure there are people who will be able to turn this into something easily legible in a mashup. There are two services on the CollectionFunctions resource: ExportUserPermissions and ImportUserPermissions.
The System user is pivotal in securing your application, and the simplest approach is to assign the System user to ALL collections and give it run-time Service Execute permission. These collection permissions ONLY export to ThingworxStorage (vs. the File Export), so it becomes quite painful to manage this and then roll it out to a new machine. Best and fastest solution? Script the assignment. The script below does it for the System user; you can extend it to include any other collection-level permissions you might need to set, such as adding Entity Create design-time permission for the System user (see the sketch after the script).

---------------------------------------------------------
//@ThingworxExtensionApiMethod(since={6,6})
//public void AddCollectionRunTimePermission(java.lang.String collectionName,
//       java.lang.String type,
//       java.lang.String resource,
//       java.lang.String principal,
//       java.lang.String principalType,
//       java.lang.Boolean allow)
//    throws java.lang.Exception
//
//Service Category:
//    Permissions
//
//Service Description:
//    Add a run time permission.
//
//Parameters:
//    collectionName - Collection name (Things, Users, ThingShapes, etc.) - STRING
//    type - Permission type (PropertyRead PropertyWrite ServiceInvoke EventInvoke EventSubscribe) - STRING
//    resource - Resource name (* = all or enter a specific resource to override) - STRING
//    principal - Principal name (name of user or group) - STRING
//    principalType - Principal type (User or Group) - STRING
//    allow - Permission (true = allow, false = deny) - BOOLEAN
//Throws:
//    java.lang.Exception - If an error occurs
//
var params = {
    modelTags: undefined /* TAGS */,
    type: undefined /* STRING */
};
// result: INFOTABLE dataShape: EntityCount
var EntityTypeList = Subsystems["PlatformSubsystem"].GetEntityCount(params);

for each (var row in EntityTypeList.rows) {
    try {
        var params = {
            principal: "System" /* STRING */,
            allow: true /* BOOLEAN */,
            resource: "*" /* STRING */,
            type: "ServiceInvoke" /* STRING */,
            principalType: "User" /* STRING */,
            collectionName: row.name /* STRING */
        };
        // no return
        Resources["CollectionFunctions"].AddCollectionRunTimePermission(params);
    }
    catch (err) {
    }
}
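A minimal sketch of that extension is shown below. It reuses the AddCollectionRunTimePermission signature documented above and the same EntityTypeList infotable from the script, and simply grants the System user every run-time permission type instead of only ServiceInvoke. For design-time permissions such as Entity Create, use the corresponding design-time permission service on the CollectionFunctions resource (check its parameter list in Composer).

// Sketch: grant the System user all run-time permission types on every collection.
// Assumes EntityTypeList was already retrieved as in the script above.
var permissionTypes = ["PropertyRead", "PropertyWrite", "ServiceInvoke", "EventInvoke", "EventSubscribe"];

for each (var row in EntityTypeList.rows) {
    for (var i = 0; i < permissionTypes.length; i++) {
        try {
            Resources["CollectionFunctions"].AddCollectionRunTimePermission({
                principal: "System" /* STRING */,
                principalType: "User" /* STRING */,
                allow: true /* BOOLEAN */,
                resource: "*" /* STRING */,
                type: permissionTypes[i] /* STRING */,
                collectionName: row.name /* STRING */
            });
        } catch (err) {
            // ignore collections that do not support this permission type
        }
    }
}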
When we make connections to JDBC-connected databases, we want to leverage the database server's capabilities to do the querying, aggregating, and everything else for us as much as possible. So if we need to do filtering or math, add it to the query statement so it is done server side, by the database server, not the ThingWorx runtime server. Beyond that there is much more that can be done, since databases are powerful (all the good stuff like joins, unions, distinct, generated and calculated fields, and so on). The more we can do on the database server side, the better it is for ThingWorx runtime performance.

Now everyone hopefully knows the [[ ]] parameter substitution: we can easily build an SQL service with several input parameters, which will look like:

Select * from table where item1 = [[par1]] AND item2 = [[par2]]

But we can take this up a notch with the super powerful yet super dangerous << >>. Now we can write a service that just says <<sqlQuery>> and use another service to build something like:

select * from table where item1 in ('val1','val2','val3')

If you can avoid it, only use [[ ]], but if needed there is always << >>. You must make sure that you properly secure that service with the System user and validate the input in the service that invokes it, since << >> is vulnerable to SQL injection. A minimal example of that pattern follows below.
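In this sketch, the Database Thing name (MyDatabaseThing), the SQL query service name (GetItemsByIds), and its <<idList>> marker are placeholders invented for the example; the point is that the wrapper service validates every value before it reaches the raw << >> substitution.

// Assumed SQL (Query) service "GetItemsByIds" on the Database Thing "MyDatabaseThing":
//   SELECT * FROM inventory WHERE item_id IN (<<idList>>)
// <<idList>> is substituted verbatim, so the caller must sanitize it.
var ids = ["100", "200", "300"];   // e.g. collected from a mashup selection
var safeIds = [];
for (var i = 0; i < ids.length; i++) {
    // allow digits only; reject anything that could smuggle SQL into the statement
    if (/^\d+$/.test(ids[i])) {
        safeIds.push(ids[i]);
    }
}

// result: INFOTABLE returned by the SQL service
var result = Things["MyDatabaseThing"].GetItemsByIds({
    idList: safeIds.join(",")   /* STRING substituted into <<idList>> */
});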
In this video we cover the process of installing ThingWorx Analytics Server 52.1. Make sure you have reviewed the Part 1 video about the prerequisites.

Updated Link for access to this video: Installing ThingWorx Analytics Server: Part 2 of 2
In this video we review the prerequisites needed prior to installing ThingWorx Analytics Server 52.1.

Updated Link for access to this video: Installing ThingWorx Analytics Server: Part 1 - Prerequisites
ThingWorx Analytics is capable of being assembled on multiple operating systems. In this post, we will discuss common issues that have been encountered by other users.

Permissions Denied - Read/Write access to third-party components

This is encountered when executing the desired shell script to begin the creation process. On MacOS and Linux you may encounter a "Permissions Denied" error on the two components required for the creation: packer-post-processor-vhd and packer.

Error Message
This results in a Terminal message that reads "Process Completed, No Artifacts Created". It indicates that the Packer script has failed to complete the task and the desired appliance images were not created. To correct this issue, you will have to change the permissions of the packer-post-processor-vhd and packer components so that they are readable and executable by the user account that is attempting to create the appliance.

Solution
Run the following commands in the virtual machine terminal (you may need to run them as sudo or as root):

chmod +x packer-post-processor-vhd
chmod +x packer

After running the above commands, run the shell script for the desired VM appliance output. This should resolve the "Permission Denied" issue while executing the build scripts.

Error Starting Appliance in VirtualBox

Users have experienced this issue at the first run of the appliance, right after it has been assembled. This issue is unique to VirtualBox versions 5.0 and above.

Error Message - Dialog Box
If you encounter this error, check the settings of the imported OVA for any errors. This issue is the result of invalid settings in the appliance configuration. Check for invalid settings by navigating to the Settings menu for the appliance. The "Invalid settings detected" message indicates that, when the product was assembled, some configuration settings were not applied correctly by the creation tool scripts.

Solution
Hover your mouse over the settings and it will direct you to the cause; in this case it is due to the remote monitor setup. Change the settings in Display (Remote Display tab) by unchecking the "Enable Server" option. Press OK after unchecking "Enable Server", and start the appliance.
The following is a set of custom objects that will trigger from an Expression Rule and cause a file uploaded by a remote agent to be sent on to the ThingWorx platform instance of your choice once completed. The expression rule should be configured as follows:

Type: File
IF: 1 == 1
THEN: ExecuteCustomObject("SendUploadedFiles")

SendUploadedFiles.groovy:

import com.axeda.drm.sdk.data.UploadedFile
import static com.axeda.sdk.v2.dsl.Bridges.*

logger.info("Executing AsyncExecutor")

// Spawn an async thread for each upload
compressedFile.getFiles().each { UploadedFile upFile ->
    // last parameter is a timeout for the execution measured in seconds (200 minutes)
    customObjectBridge.initiateAsync("SendToThingWorx",
                                     [
                                       fileID: upFile.id,
                                       hint: parameters.hint,
                                       deviceID: context.device.id
                                     ], 200 * 60)
}

SendToThingworx.groovy:

import static com.axeda.sdk.v2.dsl.Bridges.*
import com.axeda.services.v2.*
import com.axeda.sdk.v2.exception.*
import groovyx.net.http.HTTPBuilder
import static groovyx.net.http.ContentType.*
import static groovyx.net.http.Method.*
import com.axeda.drm.sdk.data.*  // UploadedFileFinder
import com.axeda.drm.sdk.Context
import com.axeda.common.sdk.id.Identifier
import org.apache.commons.codec.binary.Base64

def retStr = "this is a sample groovy script. your username: ${parameters.username}\n"
def context = Context.getAdminContext()
def thingName = 'ExampleThing'
def thingworxHost = 'sample.cloud.thingworx.com'
def twxApplicationKey = '00000000-0000-0000-0000-00000000000'

def fileid = parameters.fileID
def finder = new UploadedFileFinder(context)
finder.setId( new Identifier( fileid.toLong() ) )
UploadedFile uf = finder.find()

def is = new FileInputStream( uf.extractFile() )
retStr += "UF: ${uf.name} ${uf.fileSize} ${uf.actualFileName}\n"
logger.info "UF: ${uf.name} ${uf.fileSize} ${uf.actualFileName}\n"

def bOut = new ByteArrayOutputStream()
int b = 0
int count = 0
while ( (b = is.read()) != -1 ) {
    count++
    bOut.write(b)
}
is?.close()

byte[] bRes = bOut.toByteArray()
logger.info "Length: ${bRes.length}"
retStr += "Count: ${count}  Length: ${bRes.length}\n"

def b64 = new Base64()
def outputStr = b64.encodeBase64String(bRes)
retStr += "Length of base64 string: ${outputStr.length()}\n"
logger.info "Length of base64 string: ${outputStr.length()}\n"
logger.info "==========================================="
logger.info outputStr
logger.info "==========================================="

def http = new HTTPBuilder("https://${thingworxHost}")
http.request(POST, JSON) {
    uri.path = "/Thingworx/Things/${thingName}/Services/SaveBinary"
    body = [path: uf.name, content: outputStr]
    headers = [appKey: twxApplicationKey,
               Accept: 'application/json',
               "content-type": "application/json"]
    response.success = { resp ->
        println "POST response status: ${resp.statusLine}"
        logger.info "POST RESPONSE status: ${resp.statusLine}"
    }
    response.failure = { resp ->
        logger.info "RequestMessage: ${resp.statusLine}"
        logger.info "Request failed: ${resp.status}"
    }
}

return retStr
Let's consider that we have two Streams, Stream1 and Stream2, with the same DataShape StreamDS. DataShape StreamDS has two fields: Id (number) and Name (string). We want to copy all the entries from Stream1 to Stream2.

Steps:
1. Open the Stream1 Stream in Composer and run the GetStreamEntriesWithData service.
2. In the popup, click the Create DataShape from Result option to create a new DataShape GetStreamEntriesDS.
3. Create a Service and use JavaScript like the code below (comments added for details):

// Create a temporary infotable to hold the output of the GetStreamEntriesWithData service
var paramsForInfotable = {
    infoTableName: "InfoTable" /* STRING */,
    dataShapeName: "GetStreamEntriesDS" /* DATASHAPENAME */
};
// result: INFOTABLE
var InfotableForCopy = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(paramsForInfotable);

// Save the output of the GetStreamEntriesWithData service to the temporary infotable InfotableForCopy
var paramsForGetStreamEntriesWithDataService = {
    oldestFirst: false /* BOOLEAN */,
    maxItems: 10000 /* NUMBER */
};
// result: INFOTABLE dataShape: "GetStreamEntriesDS"
InfotableForCopy = Things["Stream1"].GetStreamEntriesWithData(paramsForGetStreamEntriesWithDataService);

// Read the data from the infotable row by row and add it to the new Stream
var tableLength = InfotableForCopy.rows.length;
for (var x = 0; x < tableLength; x++) {
    var row = InfotableForCopy.rows[x];

    // values: INFOTABLE (DataShape: StreamDS)
    var values = Things["Stream2"].CreateValues();
    values.Id = row.Id;     // NUMBER
    values.Name = row.Name; // STRING

    var paramsForAddStreamEntryService = {
        sourceType: row.sourceType /* STRING */,
        values: values /* INFOTABLE */,
        location: row.location /* LOCATION */,
        source: row.source /* STRING */,
        timestamp: row.timestamp /* DATETIME */,
        tags: row.tags /* TAGS */
    };
    // AddStreamEntry(tags:TAGS, timestamp:DATETIME, source:STRING, values:INFOTABLE(StreamDS), location:LOCATION):NOTHING
    Things["Stream2"].AddStreamEntry(paramsForAddStreamEntryService);
}

var result = InfotableForCopy;
Putting this out there because this is a difficult problem to troubleshoot if you don't do it right. Let's say you have an application where visibility permissions are in effect, so you have the Users group removed from the Everyone Organization. Now you have a Thing "Thing1" with Properties that are being logged to a ValueStream "VS1". What do you need to make this work? Obviously the necessary permissions to write the values to Thing1 and to read the values from Thing1 (for the UI). But for visibility, what you'll need is:

- Visibility to Thing1 (makes sense)
- Visibility to the Persistence Provider of the ValueStream VS1

No, you don't need visibility to the ValueStream itself, but you DO need visibility to the Persistence Provider of that ValueStream. The way the lack of this permission showed up in the Application Log was a message about trying to provide a Null value. A scripted sketch of granting that visibility follows below.
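As a minimal sketch of scripting those two grants (assuming an Organization named MyOrg, a persistence provider named ValueStreamProvider, and the standard AddVisibilityPermission entity service; all names are placeholders to adjust to your environment):

// Grant the organization visibility to the Thing itself...
Things["Thing1"].AddVisibilityPermission({
    principal: "MyOrg" /* STRING - organization (or org unit) name */,
    principalType: "Organization" /* STRING */
});

// ...and to the persistence provider backing the ValueStream "VS1".
// "ValueStreamProvider" is a placeholder - check which persistence provider VS1 actually uses.
PersistenceProviders["ValueStreamProvider"].AddVisibilityPermission({
    principal: "MyOrg" /* STRING */,
    principalType: "Organization" /* STRING */
});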
Get MQTT (like mosquitto) operating with SSL:

1. Use http://rockingdlabs.dunmire.org/exercises-experiments/ssl-client-certs-to-secure-mqtt as your primary guide to building out your self-signed CA cert and your server cert and key. Simply follow their directions, with the one caveat of setting the IPLIST and HOSTLIST environment variables prior to executing the generate-CA.sh script. This is necessary for hosted environments like AWS where the actual IP address of the system cannot be used to access the server from the internet. Put the external-facing IP address in IPLIST and the external-facing fully qualified domain name (FQDN) in HOSTLIST. If you have multiple usable IP addresses or hostname aliases, enclose them in quotes and separate them with spaces (export IPLIST="1.2.3.4 5.6.7.8"). Complete steps 1-3 in those instructions. This is sufficient to get the MQTT traffic encrypted and use it with ThingWorx.

2. Do not proceed until you can make a mosquitto_pub and mosquitto_sub pass data using the --cafile option and get an error if you do not supply the --cafile option. Make sure you have a copy of the ca.crt file generated by the script above to reference in the commands. Note that it may be necessary to use the IP address rather than the FQDN.

mosquitto_sub --cafile path/to/ca.crt -h ipaddr -t topic
mosquitto_pub --cafile path/to/ca.crt -h ipaddr -t topic -m message

3. Create an MQTT Thing in ThingWorx based on the MQTT ThingTemplate. Create a property in the new Thing for sending messages to the MQTT broker. In the configuration page for the new MQTT Thing, set serverName and serverPort and check the useSSL checkbox. In the Property to MQTT topic mappings, create a publish entry that points to the property you created in the Thing and set the topic to the MQTT topic on which you want to publish.

4. The ca.crt file created by the above script is the certificate for a new Certificate Authority (self-signed, so not really official). Clients may have to import this certificate into their trusted CA root store in order to make the encryption work. Add the ca.crt file from the MQTT broker system to a keystore file that will become Tomcat's truststore (the list of CAs trusted by the server). See the Tomcat documentation if you need to configure HTTPS on Tomcat as well. Create a new keystore if one does not already exist as a truststore:

keytool -import -trustcacerts -file /path/to/ca/ca.crt -alias CA_ALIAS -keystore path/to/TrustStore -storepass mypassword

Replace CA_ALIAS with some identifying string like MyPrivateCertificateAuthority (it did not appear to care about the CA_ALIAS value used). Replace path/to/TrustStore to point to the file that already exists or that you want to create.

5. Add the following to the CATALINA_OPTS for starting Tomcat:

-Djavax.net.ssl.trustStore=path/to/TrustStore -Djavax.net.ssl.trustStorePassword=xxxx

Replace path/to/TrustStore with the pathname of the file you created or updated with keytool above. Replace xxxx with whatever password you used in the keytool command above.

6. Restart Tomcat. Check the MQTT Thing for its isConnected property; it should now be true. If it is not, check the log files for mosquitto and for Tomcat, looking for SSL issues. Change the property value and see it appear in the output of a (properly constructed) mosquitto_sub --cafile path/to/ca.crt -h ipaddr -t topic command.
In this blog we will have a look at the installation of the ThingWorx Analytics Builder extension. This is intended as a guideline; make sure to check the Help Center for your release, as steps do vary between versions. The installation has been divided into 3 parts:

1. Introduction and import of the extension into the ThingWorx Platform
Video Link : 1568

2. Configuration of the extension
Note: For release 8.1, the Settings menu differs from previous versions; see What's New in ThingWorx Analytics Builder 8.1 between times 00:12 and 00:40 for the up-to-date menu selection.
Video Link : 1572

3. Installation of the UploadThing module
Note: this step no longer applies as of ThingWorx Analytics 8.1.
Video Link : 1573

Useful links:
PTC Download page for ThingWorx Analytics
PTC Reference Document page for ThingWorx Analytics
How to copy files from Windows to Linux?
In this video we cover the installation of the UploadThing module. This video applies to ThingWorx Analytics 52.2 through 8.0 and is no longer applicable with ThingWorx Analytics 8.1.

Useful links: How to copy files from Windows to Linux

Updated Link for access to this video: Installing ThingWorx Analytics Builder: Part 3 of 3
In this video we cover the different configuration steps required for the ThingWorx Analytics Builder extension. This video applies to ThingWorx Analytics 52.1 through 8.1.

Note: this video uses Classic Composer; the same operations can be done using the New Composer starting with version 8.0, as illustrated in the Help Center. For release 8.1, the Settings menu differs from previous versions; see Video Link : 2079 between times 00:12 and 00:40 for the up-to-date menu selection.

Updated Link for access to this video: Installing ThingWorx Analytics Builder: Part 2 of 3
In this video we cover:
- a short introduction of ThingWorx Analytics Builder
- the import of the ThingWorx Analytics Builder extension

This video applies to ThingWorx Analytics 52.1 through 8.1.

Updated Link for access to this video: Installing ThingWorx Analytics Builder: Part 1 of 3
Sometimes M2M assets should poll the platform on demand, such as to avoid excessive data charges from chatty assets. A mechanism was developed that instructs the asset to contact (poll) the platform for actions that the asset needs to act on, such as File Uploads, Set DataItem, etc. The Shoulder Tap SMS message is the platform's way of contacting the asset - tapping it on the shoulder to let it know there's a message waiting for it. The asset responds by polling for the waiting message. This implementation in the platform provides a way to configure the Model Profile that is responsible for sending an SMS Shoulder Tap message to an M2M asset. The Model Profile contains model-wide instructions for how and when a Shoulder Tap message should be sent.

How does it work?

The M2M asset is set not to poll the Axeda Platform for a long period, but the Enterprise user has some actions that the asset needs to act upon, such as FOTA (Firmware Over-the-Air):

1. A software package is deployed to the M2M asset from the Axeda Platform and put into the egress queue.
2. The Shoulder Tap mechanism executes a Custom Object that then sends a message to the asset through a delivery method like SMS, UDP, etc.
3. The asset's SMS (or other) handler receives the message, and the asset then sends a POLL to the Platform and acts upon the action in the egress queue.

How do you make Shoulder Tap work for your M2M assets?

The first step is to create a Model Profile; the Model Profile tells assets of this model how to communicate. For example, if the Model Profile is Shoulder Tap, then the mechanism used to communicate with the asset will imply Shoulder Tap. Execute the attached custom object, createSMSModelProfile.groovy, and it will create a Model Profile named "SMSModelProfile". When you create a new Model, you will see "SMSModelProfile" appear in the Communication Profile dropdown list.

The next step is to create the Custom Object Transport script, which is responsible for sending out the SMS or other method of communication to the asset. In this example the custom object is named SMSCustomObject. The contents of this custom object are outside the scope of this article, but could be REST API calls to Twilio, Jasper, or a wireless provider's REST APIs to communicate with the remote device using an SMS message. This could also be used with the IntegrationPublisher API to send a JMS message to a server the customer controls, which could then be used to talk directly with custom libraries that are not directly compatible with the Axeda Platform.

Once the Shoulder Tap scripting has been tested and is working correctly, you can send a Shoulder Tap to the asset directly from an action or through an Extended UI Module, as shown below:

import com.axeda.platform.sdk.v1.services.ServiceFactory;

final ServiceFactory sFact = new ServiceFactory()
def assetId = (Long) parameters.get("assetId")
def stapService = ServiceFactory.getShoulderTapService()
stapService.sendShoulderTap( assetId )

See Extending the Axeda Platform UI - Custom Tabs and Modules for more about creating and configuring Extended UI Modules.

What about retries?

maxRetryCount - This built-in attribute's value defines the number of times the platform will retry sending the Shoulder Tap message before it gives up.
retryInterval - The retry interval that can be used if any message delivery needs to be retried.
Retry count and interval are configured in the Model Profile Custom Object like so:

final DeliveryMethodDescriptor dmd = new DeliveryMethodDescriptor();
dmd.setMaxRetryCount(2);
dmd.setRetryInterval(60);
One of the recurring patterns on the Axeda Platform is making requests from custom objects to other services, to be called either via Scripto or through Expression Rules that help integrate Axeda data with your custom systems or third parties such as Salesforce.com. Java developers would normally use a URLConnection to do this, but due to security requirements access to the URLConnection API is sandboxed, and the HTTPBuilder API is provided instead.

Below is a short example of GETting a payload from http://www.mocky.io/v2/57d02c05100000c201208cb5 into your custom object. One of the requirements of many services is being able to pass in API keys as part of the request headers. While in this example the API key is embedded in the code, the recommended way of storing API keys on the Axeda Platform is to use the External Credential lockbox API. This allows you to change API keys securely without needing to change code.

import groovyx.net.http.HTTPBuilder
import static groovyx.net.http.ContentType.*
import static groovyx.net.http.Method.*

def http = new HTTPBuilder('https://www.mocky.io')
http.request( GET, JSON ) {
    uri.path = '/v2/57d02c05100000c201208cb5'
    uri.headers.'appKey' = '7661392f-2372-4cba-a921-f1263c938090'
    response.success = { resp ->
        println "GET response status: ${resp.statusLine}"
        logger.info "GET RESPONSE status: ${resp.statusLine}"
        assert resp.statusLine.statusCode == 201
    }
}

An example for Salesforce might look like so:

import groovyx.net.http.HTTPBuilder
import static groovyx.net.http.ContentType.*
import static groovyx.net.http.Method.*

def xml_body = """<?xml version="1.0" encoding="utf-8" ?>
<env:Envelope xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:env="http://schemas.xmlsoap.org/soap/envelope/">
  <env:Body>
    <n1:login xmlns:n1="urn:partner.soap.sforce.com">
      <n1:username>johndoe@example.com</n1:username>
      <n1:password>Password+SECRETKEY</n1:password>
    </n1:login>
  </env:Body>
</env:Envelope>
"""

def http = new HTTPBuilder('https://login.salesforce.com/')
http.request( POST ) {
    uri.path = '/services/Soap/u/35.0'
    body = xml_body
    response.success = { resp ->
        println "POST response status: ${resp.statusLine}"
        logger.info "POST RESPONSE status: ${resp.statusLine}"
        assert resp.statusLine.statusCode == 201
    }
}

This request will give you a security token you can use in future calls to Salesforce APIs; you would use Groovy's native XmlSlurper/XmlParser to parse the response and get the session id to use in future requests. You would then use this session id as in the following example to get the available REST resources:

import groovyx.net.http.HTTPBuilder
import static groovyx.net.http.ContentType.*
import static groovyx.net.http.Method.*

def http = new HTTPBuilder('https://na1.salesforce.com/')
http.request( POST ) {
    uri.path = '/services/data/v29.0'
    uri.headers.'Authorization' = 'Bearer SESSIONID'
    response.success = { resp ->
        println "POST response status: ${resp.statusLine}"
        logger.info "POST RESPONSE status: ${resp.statusLine}"
        assert resp.statusLine.statusCode == 201
    }
}

Further reading:
HttpBuilder Wiki - https://github.com/jgritman/httpbuilder/wiki
Groovy Xml Processing - http://groovy-lang.org/processing-xml.html
Attached is a description of Ensemble Learning Techniques.
Best Practices in Data Preparation for ThingWorx Analytics

Data preparation is an important phase in the process of data analysis when using ThingWorx Analytics. Basically, it is getting your data from the raw data you might have gathered through your operational system or data warehouse into data that is ready to be analyzed. In this document we use "Talend Data Preparation Free Desktop" as a tool to illustrate some examples of the data preparation process. This tool can be downloaded from the following link: https://www.talend.com/products/data-preparation (you could also choose to use another tool). We also use the BeanPro dataset in our examples and illustrations.

Checking data formats

The analysis starts with a raw data file. The user needs to make sure that the data files can be read. Raw data files come in many different shapes and sizes. For example, spreadsheet data is formatted differently than web data or sensor-collected data, and so forth. In ThingWorx Analytics the accepted data format is CSV, so the retrieved data needs to be put into that format before it can be uploaded to TWA.

Data Example (BeanPro dataset used)

After that is done, the user needs to actually look at what each field contains. For example, a field listed as a character field could actually contain non-character data.

Verify data types

Verify the data types for each feature or field in the dataset used. All data falls into one of four categories that affect what sort of analytics can be applied to it:

- Nominal data is essentially just a name or an identifier.
- Ordinal data puts records into order from lowest to highest.
- Interval data represents values where the differences between them are comparable.
- Ratio data is like interval data except that it also allows for a value of 0.

It's important to understand which categories your data falls into before you feed it into ThingWorx Analytics. For example, when doing predictive analytics TWA will not accept a nominal data field as the goal; the goal feature must be of a numerical, non-nominal type, so this needs to be confirmed at an early stage.

Creating a data dictionary

A data dictionary is a metadata description of the features included in the dataset; when displayed it consists of a table with 3 columns:

- The first column is a label: the name of a feature, or a combination of multiple (up to 3) features which are fields in the dataset. It points to "fieldname" in the configuration json file.
- The second column is the data type attached to the label (Integer, String, Datetime, ...). It points to "dataType" in the configuration json file.
- The third column is a description of the feature related to the label used in the first column. It points to "description" in the configuration json file.

In the context of TWA this metadata is represented by a data configuration "json" file that is uploaded before the dataset itself.

Sample of the BeanPro dataset configuration file

Verify data accuracy

Once it is confirmed that the data is formatted in a way that is acceptable to TWA, the user still needs to make sure it's accurate and that it makes sense. This step requires some knowledge of the subject area that the dataset is related to. There isn't really a cut-and-dried approach to verifying data accuracy.
The basic idea is to formulate some properties that you think the data should exhibit and test the data to see whether those properties hold. Are stock prices always positive? Do all the product codes match the list of valid ones? Essentially, you're trying to figure out whether the data really is what you've been told it is.

Identifying outliers

Outliers are data points that are distant from the rest of the distribution: they are either very large or very small values compared with the rest of the dataset. Outliers are problematic because they can seriously compromise the training models that TWA generates. A single outlier can have a huge impact on the value of the mean; because the mean is supposed to represent the center of the data, in a sense this one outlier renders the mean useless. When faced with outliers, the most common strategy is to delete them.

Example of the effect of an outlier on the feature "AVG Technician Tenure" in the BeanPro dataset: dataset with no outlier vs. dataset with outlier.

Deal with missing values

Missing values are one of the most common (and annoying) data problems you will encounter. In TWA, dealing with NULL values is done by one of the methods below:

- Dropping records with missing values from your dataset. The problem with this is that missing values are frequently not just random little data glitches, so this should be considered a last option.
- Replacing the NULL values with the average of the values from the other records in the same field (see the short imputation sketch at the end of this post).

Transforming the dataset

- Selecting only certain columns to load, which would be relevant, for example, to records where salary is not present (salary = null).
- Translating coded values (e.g., if the source system codes male as "1" and female as "2", but the warehouse codes male as "M" and female as "F").
- Deriving a new calculated value (e.g., sale_amount = qty * unit_price).
- Transposing or pivoting (turning multiple columns into multiple rows or vice versa).
- Splitting a column into multiple columns (e.g., converting a comma-separated list, specified as a string in one column, into individual values in different columns).

Please note: issues with Talend should be reported to the Talend team. Data preparation is outside the scope of PTC Technical Support, so please use this article as an advisory best-practices document.
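As a minimal illustration of the mean-replacement option above, here is a sketch in ThingWorx service JavaScript. It assumes the rows already sit in an infotable named data and that the numeric field is called AvgTechnicianTenure; both names are placeholders, and the same imputation can of course be done directly in your data preparation tool.

// Compute the mean of the non-null values of a numeric field...
var fieldName = "AvgTechnicianTenure";   // placeholder field name
var sum = 0, count = 0;
for each (var row in data.rows) {
    if (row[fieldName] !== undefined && row[fieldName] !== null) {
        sum += row[fieldName];
        count++;
    }
}
var mean = (count > 0) ? (sum / count) : 0;

// ...then fill the missing values with that mean before exporting the CSV for TWA.
for each (var row in data.rows) {
    if (row[fieldName] === undefined || row[fieldName] === null) {
        row[fieldName] = mean;
    }
}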
The following is valid for ThingWorx Analytics (TWA) 52.0.2 through 8.0. For release 8.3.0 and above, see How to score new data in ThingWorx Analytics 8.3.x?

Overview

The main steps are as follows:

1. Create a dataset
2. Configure the dataset
3. Upload data to the dataset
4. Optimize the dataset
5. Create filters for training and scoring data
6. Train the model
7. Execute scoring on existing data
8. Upload new data to the dataset
9. Execute scoring on the new data

TWA models are dataset centric, which means a model created with one dataset cannot be reused with a different dataset. In order to be able to score new data, a specific feature, record purpose in the example below, is included in the dataset. This feature needs to be included from the beginning, when the data is first uploaded to TWA. A filter on that feature can then be created to isolate the desired data. When new data comes in, it is added to the original dataset but with a specific value for the filtered feature (record purpose), which makes it possible to discriminate and score only those new records.

Process

1. Create a dataset
This example uses the BeanPro demo dataset. Creating the dataset is done through a POST on the datasets REST API.

2. Configure the dataset
This is done through a POST on the <dataset>/configuration REST API.

3. Upload data

4. Optimize the dataset

5. Create filters
The dataset includes a feature named record purpose, created especially to differentiate between the rows to be used for training and the rows to be used for scoring. New data to be added will have record purpose set to scoringnew, which allows a scoring job to be executed that is limited to those filtered new rows.
- Filter for training data
- Filter for new scoring data

6. Train the model
This is done through a POST on the <dataset>/prediction API.

7. Score the training data
This is done through a POST on the <dataset>/predictive_scores API. Note the use of the filter TrainingData created earlier; this allows scoring only the rows with training as the value of the record purpose feature. Note: scoring could also be done without a filter at this stage, in which case all the data in the dataset would be scored, not just the rows with training for record purpose. Retrieving the scoring result shows all the records in the dataset.

8. Upload new data
The newly uploaded CSV file should contain only new records; these will be appended to the existing ones. Note that the new record (there could be more than one) has the value scoringnew for the record purpose feature. This allows the previously created filter ScoringNewData to be used, so that a new scoring job will only take this new record into account.

9. Score the new data
A POST on the predictive_scores API is executed, this time using the filter ScoringNewData. This results in only the newly added data being scored, and therefore a much quicker execution time too. Retrieving the scoring result shows only the new record.