
IoT Tips

This script will return, in XML format, all models for a particular user. It is designed to be called as a web service, in which case the username parameter will be supplied by the platform from the user authentication information passed in the web service call. You should define this script as a Custom Object of type Action. You can test this script in the Groovy development environment on the platform by explicitly supplying the username parameter (i.e., your email address).

import com.axeda.drm.sdk.Context;
import com.axeda.drm.sdk.user.User;
import com.axeda.drm.sdk.device.*;
import com.axeda.drm.sdk.model.*;
import com.axeda.common.sdk.jdbc.StringQuery;
import java.util.*;
import groovy.xml.MarkupBuilder
import org.custommonkey.xmlunit.*
import com.axeda.common.sdk.id.Identifier;

def writer
def xml
try {
  String username = parameters.username
  Context ctx = Context.create(username);
  ModelFinder mf = new ModelFinder(ctx);
  List dList = mf.findAll();
  Context.create();
  writer = new StringWriter()
  xml = new MarkupBuilder(writer)
  xml.Response() {
    for (d in dList) Model('name': d.getName());
  }
} catch (Exception ex) {
  writer = new StringWriter()
  xml = new MarkupBuilder(writer)
  xml.Response() {
    Fault {
      Code('Groovy Exception')
      Message(ex.getMessage())
      StringWriter sw = new StringWriter();
      PrintWriter pw = new PrintWriter(sw);
      ex.printStackTrace(pw);
      Detail(sw.toString())
    }
  }
}
//logger.info(writer.toString());
return ['Content-Type': 'text/xml', 'Content': writer.toString()]
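As a usage note, Custom Objects of type Action are typically invoked over HTTP through the platform's Scripto web service endpoint. The exact path can vary by platform version, and the script name shown here ("GetModels") is a hypothetical example, so treat this as an illustrative sketch only:

// Hypothetical invocation of this Custom Object as a web service (verify the
// Scripto endpoint on your instance; "GetModels" is an assumed script name):
// GET https://<your-instance>/services/v1/rest/Scripto/execute/GetModels
// The platform derives the username parameter from the authenticated caller,
// and the response body is the XML document built by the MarkupBuilder above.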
While I was writing my previous post about Purging ValueStream entries from Deleted Things, it occurred to me that a similar issue would affect those using InfluxDB as the persistence provider for their ValueStreams. I wanted to take the opportunity, while the logic was still fresh in my mind, to extend the utility to do the same thing with InfluxDB. I also used the opportunity to create a dynamic InfoTable, as it had been a while since I last did that. It also shows how to interact directly with InfluxDB through its API (a minimal code sketch follows at the end of this post).

The modification is based on a new thing called influxDBConnector, which has the following services:

dropMeasurements: deletes all the DB entries related to a Thing
getAllEntries: queries all entries related to a Thing; string output
getAllEntriesTable: queries all entries related to a Thing; InfoTable output
getMissingThings: queries the ThingWorx Things table and the InfluxDB measurements, filtering the result to only the measurements that are not in the Things table
showMeasurements: gets all the measurements in InfluxDB

And the following properties:

database: the Influx database name used to persist the value stream data
InfluxLink: hostname:port of the InfluxDB server

Like the previous example, I created a sample mashup (purgeInfluxDBStreamsMashup) to help execute the services. Obviously this utility expects an existing Influx persistence provider.

Once more: these services act directly on the DB and need to be used carefully, and only by Administrators. Make sure you fully test it before using it in production.

The attached XML contains the complete utility for PostgreSQL and InfluxDB.

Hope it helps
Ewerton
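For reference, here is a minimal sketch of how a service like showMeasurements might talk to the InfluxDB HTTP API directly from a ThingWorx service. It assumes InfluxDB 1.x (the /query endpoint) and uses the two properties described above; everything else is illustrative:

// Minimal sketch, assuming InfluxDB 1.x and its /query HTTP endpoint.
// me.InfluxLink is expected to hold "hostname:port" and me.database the DB name.
var params = {
    url: "http://" + me.InfluxLink + "/query?db=" + me.database +
         "&q=" + encodeURIComponent("SHOW MEASUREMENTS"),
    headers: {} /* add authentication headers here if your Influx requires them */
};

// GetJSON performs the HTTP GET and parses the JSON response
var result = Resources["ContentLoaderFunctions"].GetJSON(params);

// The measurement names come back nested under results[0].series[0].values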
Pushbullet is a lightweight notifications platform and can be a way to explore Alerts and Subscriptions. Basically, create an Alert on a property and Subscribe to that Alert.

Adding Alert to Property Humidity

Adding Subscription

The PTC-PushBulletHelper is just a generic Thing with a service called PushNotification:

var json = {
    "body": Message,
    "title": "Temperature fault",
    "type": "note"
};

var accessHeader = {
    "Access-Token": "o.Hnm2DeiABcmbwuc7FSDmfWjfadiLXx2M"
};

var params = {
    proxyScheme: undefined /* STRING */,
    headers: accessHeader /* JSON */,
    ignoreSSLErrors: undefined /* BOOLEAN */,
    useNTLM: undefined /* BOOLEAN */,
    workstation: undefined /* STRING */,
    useProxy: undefined /* BOOLEAN */,
    withCookies: undefined /* BOOLEAN */,
    proxyHost: undefined /* STRING */,
    url: 'https://api.pushbullet.com/v2/pushes' /* STRING */,
    content: json /* JSON */,
    timeout: undefined /* NUMBER */,
    proxyPort: undefined /* INTEGER */,
    password: undefined /* STRING */,
    domain: undefined /* STRING */,
    username: undefined /* STRING */
};

// result: JSON
var result = Resources["ContentLoaderFunctions"].PostJSON(params);

You can test the Helper PushNotification service. Next you can test the subscription.
First of all, wishing everyone a blessed 2017! So here is a little something that hopefully can be helpful for all you ThingWorx developers: a 'Remote Monitoring Application Starter'. Mainly this is built around best practices for security, and it provides a lot of powerful modeling and mashup techniques, plus some cool dashboard techniques. Everything is documented in the accompanying documents in the zip (sorry, it went through a few steps to get this up properly).

Install instructions: ThingWorx Remote Monitoring Starter Application – Installation Guide

Files: All files needed are in a folder called RemoteMonitoringStarter; this is an Export to ThingworxStorage.

Extensions: Not included, but the application uses the GoogleWidgetsExtension (Google Map).

Steps:
1. Import the Google Map extension.
2. Place the RemoteMonitoringStarter folder in the ThingworxStorage exports folder.
3. From ThingWorx, do an Import from ThingworxStorage – Include Data, Use Default Persistence Provider, and do NOT ignore Subsystems.
4. After the import has finished, go to Organizations and open Everyone.
5. In the Organization, remove Users from the Everyone organization unit.
6. Go to DataTables and open PTC.RemoteMonitoring.Simulation.DT.
7. Go to Services and execute SetSimulationValues.
8. Go to the UserManagementSubsystem.
9. In the Configuration section, add PTC.RemoteMonitoring.Session.TS to the Session. Note: this step may already be done. Note: screenshots are provided at the end.

Account passwords: FullAdmin/FullAdmin. All other users have a password of "password". NOTE: You may have to reset your Administrator password using the FullAdmin account. I also recommend changing the passwords after installing.
This Groovy script takes any dataitem values and writes them to properties of the same name, if they exist. The rule that calls this script needs to be a data trigger such as:

If: some condition
Then: ExecuteCustomObject("CopyParameters")

The script uses the default context that contains an asset (device) and the default parameter dataItems that contains the currently reported dataitems (from an agent).

import com.axeda.drm.sdk.user.User
import com.axeda.drm.sdk.data.DataValue
import groovy.lang.PropertyValue
import com.axeda.drm.sdk.device.DevicePropertyFinder
import com.axeda.drm.sdk.device.Property
import com.axeda.drm.sdk.device.PropertyType
import com.axeda.drm.sdk.device.DeviceProperty

logger.info "Executing groovy script for device: " + context?.device?.serialNumber

if (dataItems != null) {
  logger.info "** Data Items **"
  // show data item values
  dataItems?.each { di ->
    logger.info "dataitem: ${di.name} = ${di.value} = ${di.timestamp}"
  }

  def dataItemMap = [:]
  dataItems.each { dataItemMap[it.name] = it }

  DevicePropertyFinder dpf = new DevicePropertyFinder(context.context)
  dpf.type = PropertyType.DEVICE_TYPE
  dpf.id = context.device.id
  DeviceProperty dp = dpf.findOne()
  List<Property> props = dp.getProperties()
  props.each { Property prop ->
    if (dataItemMap.containsKey(prop.name)) {
      prop.value = dataItemMap[prop.name].value?.toString()
      //logger.info "Setting ${prop.name} to ${dataItemMap[prop.name].value?.toString()}"
    }
  }
  dp.store()
}
This code snippet finds an uploaded file associated with an asset and emails it to a destination email address. It uses a data accumulator to create a temporary file.

import org.apache.commons.codec.binary.Base64;
import java.util.Date;
import java.util.Properties;
import java.io.StringWriter
import java.io.PrintWriter
import com.axeda.drm.sdk.Context
import com.axeda.drm.sdk.data.*
import com.axeda.drm.sdk.device.*
import groovy.json.JsonSlurper
import javax.activation.DataHandler;
import javax.activation.FileDataSource;
import org.apache.axiom.attachments.ByteArrayDataSource;
import com.axeda.platform.sdk.v1.services.ServiceFactory;
import com.thoughtworks.xstream.XStream;
import javax.mail.Authenticator;
import javax.mail.Message;
import javax.mail.MessagingException;
import javax.mail.Multipart;
import javax.mail.PasswordAuthentication;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.AddressException;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeBodyPart;
import javax.mail.internet.MimeMessage;
import javax.mail.internet.MimeMultipart;

try {
    Context ctx = Context.create(parameters.username)
    DeviceFinder dfinder = new DeviceFinder(ctx)
    def bytes
    dfinder.setSerialNumber(parameters.serial_number)
    Device d = dfinder.find()
    UploadedFileFinder uff = new UploadedFileFinder(ctx)
    uff.device = d
    def ufiles = uff.findAll()
    UploadedFile ufile
    if (ufiles.size() > 0) {
        ufile = ufiles[0]
        File f = ufile.extractFile()
        def slurper = new JsonSlurper()
        def objects = slurper.parseText(f.getText())
        def bugreport = objects.objects[0].mobj_update[0].bugreport
        String from = "demo@axeda.com";
        String to = "destination@axeda.com";
        String subject = "My file";
        String mailContent = "Attaching test";
        String filename = "payload.tar.gz";
        def dataStoreIdentifier = "FILE-IO-SUB-testing"
        def daSvc = new ServiceFactory().dataAccumulatorService
        if (daSvc.doesAccumulationExist(dataStoreIdentifier, d.id.value)) {
            daSvc.deleteAccumulation(dataStoreIdentifier, d.id.value)
        }
        daSvc.writeChunk(dataStoreIdentifier, d.id.value, bugreport);
        InputStream is = daSvc.streamAccumulation(dataStoreIdentifier, d.id.value)
        Base64 base64 = new Base64()
        ByteArrayDataSource rawData = new ByteArrayDataSource(base64.decodeBase64(is.getBytes()));

        // You need to create a properties object to store mail server
        // smtp information such as the host name and the port number.
        // With these properties we create a Session object from
        // which we'll create the Message object.
        Properties properties = new Properties();
        properties.put("mail.smtp.host", "mail01.bo2.axeda.com");
        properties.put("mail.smtp.port", "25");
        properties.put("mail.smtp.auth", "true");
        Authenticator authenticator = new CustomAuthenticator();
        Session session = Session.getInstance(properties, authenticator);
        MimeMessage message = new MimeMessage(session);
        message.setFrom(new InternetAddress(from));
        message.setRecipient(Message.RecipientType.TO, new InternetAddress(to));
        message.setSubject(subject);
        message.setSentDate(new Date());

        // Set the email message text.
        MimeBodyPart messagePart = new MimeBodyPart();
        messagePart.setText(mailContent);

        // Set the email attachment file
        MimeBodyPart attachmentPart = new MimeBodyPart();
        //      FileDataSource fileDataSource = new FileDataSource(file)
        attachmentPart.setDataHandler(new DataHandler(rawData))  //fileDataSource));
        attachmentPart.setFileName(filename);

        Multipart multipart = new MimeMultipart();
        multipart.addBodyPart(messagePart);
        multipart.addBodyPart(attachmentPart);

        // Set the content
        message.setContent(multipart);

        // Send the message with attachment
        Transport.send(message);
    }
} catch (Exception e) {
    logger.info(e.message)
    StringWriter logStringWriter = new StringWriter();
    PrintWriter logPrintWriter = new PrintWriter(logStringWriter)
    e.printStackTrace(logPrintWriter)
    logger.info(logStringWriter.toString())
}

// This class is the implementation of the Authenticator,
// where you need to implement getPasswordAuthentication
// to provide the username and password
public class CustomAuthenticator extends Authenticator {
    protected PasswordAuthentication getPasswordAuthentication() {
        String username = "";
        String password = "";
        return new PasswordAuthentication(username, password);
    }
}

static byte[] getBytes(File file) throws IOException {
    return getBytes(new FileInputStream(file));
}

static byte[] getBytes(InputStream is) throws IOException {
    ByteArrayOutputStream answer = new ByteArrayOutputStream();
    // reading the content of the file within a byte buffer
    byte[] byteBuffer = new byte[8192];
    int nbByteRead /* = 0 */;
    try {
        while ((nbByteRead = is.read(byteBuffer)) != -1) {
            // appends buffer
            answer.write(byteBuffer, 0, nbByteRead);
        }
    } finally {
        is.close()
    }
    return answer.toByteArray();
}
Analytics projects typically involve using the Analytics API rather than Analytics Builder to accomplish different tasks. The attached documentation provides examples of code snippets that can be used to automate the most common analytics tasks on a project, such as:

- Creating a dataset
- Training a model
- Real-time scoring (predictive and prescriptive)
- Retrieving the validation metrics for a model
- Appending additional data to a dataset
- Retraining the model

The documentation also provides examples that are specific to time series datasets. The attached .zip file contains both the document and some entities that you need to import into ThingWorx to access the services used in the examples.
When using Value Streams to log historical data, there's a service to purge the ValueStream entries from the Thing itself. But what to do when a Thing that once logged values into a ValueStream has been deleted? Currently there's no OOTB way to delete these entries if they're no longer being used. I was recently asked this question and wanted to share the answer with the entire community. I created a utility application that queries the TWX DB directly for Things that are present in the ValueStream but no longer exist, and allows a user to purge them.

These services assume PostgreSQL as the persistence provider for the ValueStreams. The services can be modified if you're using SQL Server. They do not apply to InfluxDB persistence providers.

The twxDBConnector thing is based on the PostgreSqlServer template, which is present in the Relational Databases extension. It has 4 main services:

getEntriesToPurge: queries the TWX DB for all the entries related to a Thing. It does not consider the ValueStream id, so it will purge all the entries across all value streams. Requires a Thing name as an input.
getMissingThings: queries Things that are present in the ValueStream DB table but not in the Things table, meaning that they were deleted.
purgeThingEntries: purges the entries related to a Thing. It does not consider the ValueStream id, so it will purge all the entries across all value streams.
purgeAllEntries: purges all the entries related to Things that were deleted.

The queries can be modified to allow the selection of the value stream to be cleaned. I also added a sample mashup that leverages the services.

The twxDBConnector has a configuration table that requires the DB connection string, user and password.

You can also do it directly from the DB using PGAdmin and purge it all:

DELETE FROM value_stream
WHERE value_stream.entry_id IN
    -- Queries all entries in the value stream table that belong to an inexistent thing
    (SELECT entry_id
     FROM value_stream
     LEFT JOIN thing_model ON value_stream.source_id = thing_model.name
     WHERE thing_model.name IS NULL)

Attention: these services change the TWX DB directly, so use them carefully.

To use it:

1. Import the PostgreSQLServer extension (you might need to change the JAR in the extension depending on the TWX version you're using);
2. Import the entities from purgeVSEntries.xml.

Thanks @dsantos for the help on optimizing the queries.

Hope it helps.
Ewerton
This post is part of the series Forced Root Cause Monitoring via Mashups and Modal Popups. To not feel lost or out of context, it's recommended to read the main post first.

Testing the Mashups

Open the rcp_MashupMain in a new browser window. For this test I find it easier to have the rcp_AlertThing and the Mashup in two windows side by side. The Mashup should be completely empty right now: nothing in the historic table (Grid), the Selected Reason is blank and the Checkbox is false.

In the rcp_AlertThing, switch the trigger to true. The following will now happen:

- The new value will be automatically pushed to the Mashup
- The checkbox will switch to true
- The validator now throws the TRUE Event, as the condition is met and the trigger is indeed true
- The TRUE Event will invoke the Navigation Widget's Navigate service and the modal popup will be opened
- The user now only has the option to select one of the three states offered by the Radio Button selector; everything else will be greyed out
- After choosing any option, the SelectionChanged Event will be fired and trigger setting the selectedState as well as closing the popup
- The PopupClosed Event in our MashupMain will then be fired and populate the selectedState parameter into the textbox (just for display), and will also call the SetProperties service on our Thing, updating the selectedReason with the selectedState parameter value
- Once the property is set and persisted into the ValueStream via the SetProperties' ServiceInvokeCompleted Event, we clear the trigger (back to false) and update the Grid with the new data

In the AlertThing, refresh the properties to actually see the trigger being false and the selectedReason set to whatever the user selected.

Note: when there is a trigger state and the trigger is set to true, the popup will always be shown, even if the user refreshes the UI or the browser window. This is to avoid cheating the system by not entering a root cause for the current issue. As the popup depends purely on the trigger flag, only clearing the flag can unblock this state. The current logic does not consider closing the popup when the flag is cleared; this could however be implemented using the Validator's FALSE Event and additional logic.
This post is part of the series Forced Root Cause Monitoring via Mashups and Modal Popups. To not feel lost or out of context, it's recommended to read the main post first.

Create a Popup Mashup

Create a new Mashup called "rcp_MashupPopup" as Page and Static. Save and switch to the Design tab.

Design

Edit the Mashup Properties:
- Set "Width" to 500
- Set "Height" to 300

Add a new Label:
- Set "Text" to "Something went wrong - what happened?"
- Set "Alignment" to "Center Aligned"
- Set "Width" to 230
- Set "Top" to 55
- Set "Left" to 130

Add a new Radio Button:
- Set "Button States" to "rcp_AlertStateDefinition"
- Set "Top" to 145
- Set "Left" to 25
- Set "Width" to 450
- Set "Height" to 100

In the Workspace tab, select the "Mashup" and click on Configure Mashup Parameters:
- Add Parameter
- Name: "selectedState"
- BaseType: NUMBER
- Click Done

Save the Mashup.

Connections

Select the Radio Button:
- Drag and drop its Selected Value property onto the Mashup and bind it to the selectedState Mashup Parameter
- Drag and drop its SelectionChanged event onto the Mashup and bind it to the CloseIfPopup service

Save the Mashup.
Build an Equipment Dashboard Guide Part 2

Step 5: Display Data

Now that you have configured the visual part of your application, you need to bind the Widgets in your Mashup to a data source.

Add Services to Mashup

1. In the top-right, ensure the Data tab is selected.
2. Click the green + symbol.
3. In the Entities Filter field, search for and select MyPump.
4. In the Services Filter field, type GetPropertyValues.
5. Click the right-arrow beside GetPropertyValues. Note how GetPropertyValues was added to the right side under Selected Services.
6. Check the checkbox for Execute on Load. This causes the Service to execute when the Mashup starts.
7. In the Services Filter field, type QueryPropertyHistory.
8. Click the right-arrow beside QueryPropertyHistory.
9. Check the checkbox for Execute on Load.
10. Click Done to close the pop-up. Note how the Services have been added to the Data tab in the top-right.
11. Click Save.

Now that we have access to the backend data, we want to bind it to our Widgets.

Value Display

Configure the Value Display to display the SerialNumber of the pump.
1. Under the Data tab, expand GetPropertyValues > Returned Data > All Data.
2. Drag-and-drop GetPropertyValues > serialNumber onto the Value Display Widget in the top section.
3. On the Select Binding Target popup, select Data.

Image

We want to use an Image Widget to display a thumbnail picture of the pump for easy reference. To do that, though, you first need to upload an image to Foundation by creating a Media Entity.
1. Right-click the image below, then click "Save image as..." to download.
2. Click Browse > Visualization > Media.
3. Click + New.
4. In the Name field, type pump-thumbnail.
5. If Project is not already set, click the + in the Project text box and select the PTCDefaultProject.
6. Under Image, click Change.
7. Navigate to and select the pump-image.png file you just downloaded.
8. On the navigation pop-up, click Open to close the pop-up and confirm the image selection.
9. At the top of Foundation, click Save.

Change Image to pump

We will now update the Image Widget to display the ThingWorx Media Entity we just created.
1. Return to the pump-dashboard Mashup.
2. Click the Image Widget to select it, and ensure that the bottom-left Properties tab is active.
3. In the bottom-left Properties' Filter field, type SourceURL.
4. For the SourceURL Property, search for and select pump-thumbnail.
5. Click Save.

Line Chart

Configure the Line Chart to display Property values changing over time.
1. In the top-right Data tab, expand QueryPropertyHistory > Returned Data.
2. Drag and drop QueryPropertyHistory > All Data onto the Line Chart Widget in the bottom-right Canvas section.
3. On the Select Binding Target pop-up, select Data.
4. Ensure the Line Chart Widget is selected.
5. On the Line Chart's Property panel in the bottom-left, in the Filter field, type XAxisField.
6. For the XAxisField Property, select timestamp.
7. In the Filter field, type LegendFilter.
8. Check the checkbox for LegendFilter.
9. Click Save.

Verify Data Bindings

You can see the configuration of data sources bound to Widgets in the bottom-center Connections pane.
1. In the top-right Data tab, click GetPropertyValues.
2. Check the diagram in the bottom-center Connections window to confirm a data source is bound to the Value Display Widget.
3. Also in the top-right Data tab, click QueryPropertyHistory. Confirm that the diagram shows it is bound to the Line Chart.
4. Click Save.

Step 6: Test Application

1. Browse to your Mashup and click View Mashup to launch the application. NOTE: You may need to enable pop-ups in your browser to see the Mashup.
2. Confirm that data is being displayed in each of the sections.
3. Open the MyPump Thing, then click the Properties and Alerts tab.
4. Click Set Value on the line of the serialNumber Property.
5. Enter a value for the serial number, then click the Check-mark button.
6. Click Refresh to confirm the value has changed.
7. Refresh the browser window showing the dashboard to see the new serial number value.

Step 7: Next Steps

Congratulations! You've successfully completed the Build an Equipment Dashboard guide, and learned how to:

- Use Composer to create a Thing Shape and a Thing Template
- Make a Thing using a custom Thing Template
- Store Property change history in a Value Stream
- Create an application UI with Mashup Builder
- Display data from connected devices
- Test a sample application

If you would like to return to previous guides in the learning path, select Connect and Monitor Industrial Plant Equipment.
Help Center link on how to control file transfers from the edge client using the EdgeControlled ThingShape.

The EdgeControlled ThingShape is a default entity included with ThingWorx that allows you to manage the amount of egress being sent from the platform to the Edge.

At the time of writing this post, the available 'When Disconnected' settings for a remotely bound property in ThingWorx are 'Fold' and 'Ignore'. Setting a property to 'Fold' while using this EdgeControlled ThingShape is necessary whether the device is connected all the time or only for brief updates.

To use this ThingShape in a real-world scenario, you might code your edge client to invoke the DequeueEgress REST API function available through this ThingShape. The parameter you pass in is the number of messages you would like the client to receive; the result of this function is how many messages the platform then actually sent. (A hedged sketch of such a call follows the setup steps below.)

A quick setup:
1. Create a RemoteThing entity in ThingWorx.
2. Create an ApplicationKey entity in ThingWorx.
3. Set up an edge client to bind to that RemoteThing using the specified ApplicationKey.
4. Manage Bindings on the Properties page of the RemoteThing, and pull in a few properties you would like to send property updates to.
5. Set the 'When Disconnected' value to 'Fold' for each property you want to queue messages for. Set any other property settings you'd like (e.g. persistence, logged).
6. Save the Thing.
7. Add the EdgeControlled ThingShape to the Thing.
8. Save the Thing.
9. Update property values; exceptions will be thrown, but the values will be queued.
10. Invoke DequeueEgress on the RemoteThing, with the number of messages to send to the edge client passed in as the parameter value. Notice 'Fold' means only the last value set for a property will be sent to the edge client. There is currently no retention available for any values previously set to the property and stored as the message to be sent; those values are lost when a new value comes in before it's dequeued.
11. Verify the edge client has received the expected egress, and that the return result of the DequeueEgress function was the expected number of messages sent.
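For illustration, here is a hedged sketch of the REST call an edge client would make to dequeue its messages, written as a ThingWorx server-side test using ContentLoaderFunctions. The host, Thing name, and the parameter name maxCount are assumptions; verify the parameter name against the EdgeControlled ThingShape's service definition on your platform version:

// Hypothetical test call: ask the platform to release up to 10 queued messages.
// "myserver" and "MyRemoteThing" are placeholders; "maxCount" is an assumed
// parameter name - check the DequeueEgress service definition to confirm it.
var params = {
    url: "https://myserver/Thingworx/Things/MyRemoteThing/Services/DequeueEgress",
    content: { maxCount: 10 },
    headers: {
        "appKey": "your-application-key-here",
        "Accept": "application/json"
    }
};

var result = Resources["ContentLoaderFunctions"].PostJSON(params);
// The service's return value reports how many messages were actually sent.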
Shown below is example code that, when deployed in the appropriate container, will allow an end-user to talk to the Axeda Platform Integration Queue. A customer should supply their unique values for the following properties: queueName, user, password, url.

import java.util.Properties;
import javax.jms.*;
import javax.naming.*;

public class SampleConsumer {

    private String queueName = "com.axeda.integration.ACME.queue";
    private String user = "system";
    private String password = "manager";
    //private String url = "ssl://hostname:61616";
    private String url = "tcp://hostname:61616";

    private boolean transacted;
    private boolean isRunning = false;

    public static void main(String[] args) throws NamingException, JMSException
    {
        SampleConsumer consumer = new SampleConsumer();
        consumer.run();
    }

    public SampleConsumer()
    {
        /** For SSL connections only, add the following: **/
//        System.setProperty("javax.net.ssl.keyStore", "path/to/client.ks");
//        System.setProperty("javax.net.ssl.keyStorePassword", "password");
//        System.setProperty("javax.net.ssl.trustStore", "path/to/client.ts");
    }

    public void run() throws NamingException, JMSException
    {
        isRunning = true;

        // JNDI properties
        Properties props = new Properties();
        props.setProperty(Context.INITIAL_CONTEXT_FACTORY, "org.apache.activemq.jndi.ActiveMQInitialContextFactory");
        props.setProperty(Context.PROVIDER_URL, url);

        // specify queue property name as queue.jndiname
        props.setProperty("queue.slQueue", queueName);

        javax.naming.Context ctx = new InitialContext(props);
        ConnectionFactory connectionFactory = (ConnectionFactory) ctx.lookup("ConnectionFactory");
        Connection connection = connectionFactory.createConnection(user, password);
        connection.start();

        Session session = connection.createSession(transacted, Session.AUTO_ACKNOWLEDGE);

        Destination destination = (Destination) ctx.lookup("slQueue");
        // Using a message selector (e.g. ObjectClass = 'AlarmImpl')
        MessageConsumer consumer = session.createConsumer(destination, "ObjectClass= 'LinkedList'");

        while (isRunning)
        {
            System.out.println("Waiting for message...");
            Message message = consumer.receive(1000);
            if (message != null && message instanceof TextMessage) {
                TextMessage txtMsg = (TextMessage) message;
                System.out.println("Received: " + txtMsg.getText());
            }
        }
        System.out.println("Closing connection");
        consumer.close();
        session.close();
        connection.close();
    }
}
When an Expression Rule of type File calls a Groovy script, the script is provided with the implicit object compressedFile. This example shows how the compressedFile object can be used. This Expression Rule uses the Groovy script 'LastLine' to return the last line of the file, and sets the dataItem 'lastline' to the returned value:

Type: File
IF: some condition, e.g. File.hint=="hint"
THEN: SetDataItem("lastline", str(ExecuteCustomObject("LastLine")))

The LastLine script uses the implicit object 'compressedFile':

import com.axeda.drm.sdk.scm.CompressedFile

if (compressedFile != null) {
    // extract the first file from the compressed upload
    File file = compressedFile.getFiles()[0].extractFile()
    // eachLine's closure return value is discarded, so track the last line
    // seen in a local variable and return it after the iteration
    def lastLine = null
    file.eachLine { line -> lastLine = line }
    return lastLine
}
This post is part of the series Forced Root Cause Monitoring via Mashups and Modal Popups. To not feel lost or out of context, it's recommended to read the main post first.

Required Logic

The following logic will help us realize this particular use case:

1. The trigger property on the AlertThing switches from false to true.
2. The MashupMain will receive dynamic property updates via the AlertThing.GetProperties service. It will validate the value of the trigger property, and if it's true the MashupMain will show the MashupPopup as a modal popup. A modal popup is exclusively in the foreground, so the user cannot interact with anything else in the Mashup except the modal popup.
3. In the modal popup the user chooses one of the pre-defined AlertStateDefinitions. When a State is selected, the popup will set the State as a Mashup Parameter, pass this to the MashupMain, and close itself.
4. When the MashupPopup is closed, the MashupMain will read the Mashup Parameter.
5. The MashupMain will set the selectedReason on the AlertThing to the selected value. It will also reset the trigger property to false. This allows the property to be set to true again to trigger another forced popup.
6. On any value change the AlertThing will store the selectedReason State in a ValueStream to capture historic information on which root causes were selected at which time.
7. The ValueStream information will be displayed as a table in a GridWidget in the MashupMain once the new properties have been set.
Persistent properties are stored in the ThingWorx database, while non-persistent properties are stored in memory. This means that persistent values do not get erased or deleted if the Thing restarts or the platform restarts. Persistent properties can also be retrieved in the same way as non-persistent properties. To explain better how we can retrieve persistent data, consider the example below: I took a device group which has multiple devices and defined 2 persistent properties, one for the serial number and the other for the firmware version number. In a mashup, the user enters the serial number and the corresponding firmware version is retrieved. I achieved this by writing services for that Thing.

1. Created a Data Shape with the name DeviceData and defined two field definitions: one for the serial number and the other for the firmware version.
2. Created a Thing Template with the name DevGroup.
3. Added a property to the Thing Template with the name DeviceData, selected the base type InfoTable, and selected the previously created Data Shape (DeviceData) in the Data Shape field. Made this property persistent by selecting Is Persistent, then saved the property.
4. Created two devices with the names Device1 and Device2; both devices use the above template.
5. For each device, set the property values by clicking the Set button in the Properties link. These values will be persistent, meaning they do not change even after a refresh or when you restart ThingWorx. You can also set these properties at run time by creating a service.
6. Created a GetFirmversion service which retrieves the firmware version for a given serial number (a sketch of what that service body might look like follows below).
7. Created a mashup as shown below.

Once you select Device1, enter the serial number in the numeric entry field and click the Query button, the firmware version of the given serial number is displayed along with the data of Device1. If we enter a wrong number, no data will be displayed; you could also show an error message instead of displaying empty values. The same applies when we select Device2's data. This is one way to retrieve persistent data; you can obtain the same result in many ways.
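For reference, the body of such a GetFirmversion service could be as simple as the sketch below. The field names serialNumber and firmwareVersion and the input parameter name are assumptions based on the Data Shape described above; adjust them to match your entities:

// Minimal sketch of a GetFirmversion service body.
// Input: serialNumber (STRING); Output: STRING.
// Assumes the DeviceData InfoTable property uses fields named
// serialNumber and firmwareVersion - adjust to your Data Shape.
var result = "";
var rows = me.DeviceData.rows;
for (var i = 0; i < rows.length; i++) {
    if (rows[i].serialNumber == serialNumber) {
        result = rows[i].firmwareVersion;
        break;
    }
}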
In the process of working with a customer, I was curious about the throughput of a file sent via the Axeda Connected Content feature to one of the Axeda Agent Gateways. I took a random 50 megabyte blob of data (/dev/urandom) and sent it to one of my test Gateways via a Package deployment:

DEBUG   xgEnterpriseProxy: Enterprise Queue Empty
INFO    xgSM:  ... Download percent done = 11%
INFO    xgSM:  ... Download percent done = 21%
INFO    xgSM:  ... Download percent done = 31%
INFO    xgSM:  ... Download percent done = 41%
INFO    xgSM:  ... Download percent done = 51%
INFO    xgSM:  ... Download percent done = 61%
INFO    xgSM:  ... Download percent done = 71%
INFO    xgSM:  ... Download percent done = 81%
INFO    xgSM:  ... Download percent done = 91%
INFO    xgSM:  ... Download percent done = 100%
DEBUG   xgSM: >>  INTERNAL DEBUG MESSAGE << :  Download time is 4 seconds
DEBUG   xgSM: >>  INTERNAL DEBUG MESSAGE << :  Upgrading.  Backing up files to C:\temp\CFKGW\AxedaBackup
DEBUG   xgSM: >>  INTERNAL DEBUG MESSAGE << :  Extracting downloaded files from DefaultProject\CFKGW\Downloads\141581_143281.tar.gz to directory C:\temp\CFKGW\
DEBUG   xgSM: >>  INTERNAL DEBUG MESSAGE << :  Extraction Finished

About 12MB per second. This was a sandbox in the PTC On-Demand Center. Not bad, but not necessarily representative of a real production system: this sandbox doesn't have 1000 devices trying to get this file at once, so some benchmarking in your configuration and environment certainly needs to be done.

That done, I thought I'd up the ante: 700 megabytes this time!

DEBUG   xgEnterpriseProxy: Enterprise Queue Empty
INFO    xgSM:  ... Download percent done = 10%
INFO    xgSM:  ... Download percent done = 20%
INFO    xgSM:  ... Download percent done = 30%
INFO    xgSM:  ... Download percent done = 40%
INFO    xgSM:  ... Download percent done = 50%
INFO    xgSM:  ... Download percent done = 60%
INFO    xgSM:  ... Download percent done = 70%
INFO    xgSM:  ... Download percent done = 80%
INFO    xgSM:  ... Download percent done = 90%
INFO    xgSM:  ... Download percent done = 100%
DEBUG   xgSM: >>  INTERNAL DEBUG MESSAGE << :  Download time is 66 seconds

So about 10MB per second.

 Directory of C:\temp\cfkgw

05/17/2017  01:32 PM    <DIR>          .
05/17/2017  01:32 PM    <DIR>          ..
05/17/2017  01:03 PM         1,048,576 1mb.dat
05/17/2017  01:04 PM        52,428,800 50meg-randomdata.dat
05/17/2017  01:32 PM       734,003,200 700mb.dat
05/17/2017  01:03 PM    <DIR>          AxedaBackup
               3 File(s)    787,480,576 bytes

Not bad at all!
This script will return, in XML format, details of all alarms for a particular asset, identified by its asset id. It is designed to be called as a web service, in which case the username parameter will be supplied by the platform from the user authentication information passed in the web service call, and the id parameter will be supplied as an argument to the call. You should define this script as a Custom Object of type Action. You can test this script in the Groovy development environment on the platform by explicitly supplying the username and id parameters (i.e., your email address and a numeric asset id; note that the script parses the id as a number).

import com.axeda.drm.sdk.Context;
import com.axeda.drm.sdk.device.DeviceFinder;
import com.axeda.drm.sdk.device.Device;
import com.axeda.drm.sdk.data.AlarmFinder;
import com.axeda.drm.sdk.data.Alarm;
import com.axeda.drm.sdk.mobilelocation.MobileLocation;
import com.axeda.common.sdk.jdbc.StringQuery;
import com.axeda.common.sdk.id.Identifier;
import groovy.xml.MarkupBuilder;

try {
  logger.info "parameters: " + parameters
  if (!parameters.id) {
    throw new Exception("parameters.id required");
  }

  // operate in the context of the user calling the service
  Context ctx = Context.create(parameters.username);

  // setup the finders
  DeviceFinder df = new DeviceFinder(ctx);
  df.id = new Identifier(Long.parseLong(parameters.id));

  // find the device and its data
  logger.info "Finding asset"
  Device device = df.find();
  if (!device) {
    throw new Exception("Unable to find asset with id " + parameters.id);
  }

  AlarmFinder af = new AlarmFinder(ctx);
  af.device = device;

  // generate the XML
  writer = new StringWriter();
  xml = new MarkupBuilder(writer);
  xml.Alarms() {
    af.findAll().each() { Alarm alarm ->
      xml.Alarm('id': alarm.id) {
        xml.DataItemName(alarm.dataItemName);
        xml.DataItemValue(alarm.dataItemValue);
        xml.Date(alarm.date.time);
        xml.Description(alarm.description);
        xml.MobileLocation() {
          MobileLocation ml = alarm.mobileLocation;
          if (ml) {
            xml.BeginTimeStamp(ml.beginTimeStamp);
            xml.EndTimeStamp(ml.endTimeStamp);
            xml.Lat(ml.lat);
            xml.Lng(ml.lng);
            xml.Alt(ml.alt);
            xml.IsCurrentLocation(ml.isCurrentLocation());
          }
        }
        xml.Name(alarm.name);
        xml.Note(alarm.note);
        xml.Severity(alarm.severity);
        xml.State(alarm.state);
      }
    }
  }

  // return the results
  return ['Content-Type': 'text/xml', 'Content': writer.toString()]
} catch (Exception ex) {
  // return the exception
  writer = new StringWriter();
  xml = new MarkupBuilder(writer);
  xml.error() {
    faultcode("ErrorType.Exception");
    faultstring(ex.message);
  }
  return ['Content-Type': 'text/xml', 'Content': writer.toString()]
}
This Groovy script is called from an Expression Rule of type Alarm. For example, the Expression Rule

IF: Alarm.severity > 500
THEN: ExecuteCustomObject("SMSMe", "[numberToSMS]")

calls the script "SMSMe" with the parameter phoneNumber. ExecuteCustomObject provides a way to call from this simple two-line business rule into a modern programming environment with access to the complete Platform SDK, as well as all the features of the Groovy language. In Groovy, it's straightforward to use the built-in httpclient library to POST an HTTP request to Twilio to send an SMS to the specified cellphone number.

Groovy Scripts

The Groovy script named SMSMe is executed by the Expression Rule above and connects to the Twilio server, passing in a list of parameters. When you register with Twilio, you will be given an ACCOUNT SID (apiID) and an AUTH TOKEN (apiPass). These two strings need to be set in the Groovy script below:

String apiID = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
String apiPass = 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'

Parameters

Variable Name: phoneNumber
Display Name: phoneNumber

import org.apache.commons.httpclient.Credentials
import org.apache.commons.httpclient.HostConfiguration
import org.apache.commons.httpclient.HttpClient
import org.apache.commons.httpclient.UsernamePasswordCredentials
import org.apache.commons.httpclient.auth.AuthScope
import org.apache.commons.httpclient.methods.GetMethod
import org.apache.commons.httpclient.methods.PostMethod
import org.apache.commons.httpclient.NameValuePair

//logger.info "Calling ${parameters.phoneNumber}"

String twilioHost = 'api.twilio.com'
String apiID = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
String apiPass = 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'

HostConfiguration hc = new HostConfiguration()
hc.setHost(twilioHost, 443, "https")

def url = "/2008-08-01/Accounts/$apiID/SMS/Messages"
def client = new HttpClient()
Credentials defaultcreds = new UsernamePasswordCredentials(apiID, apiPass)
client.getState().setCredentials(null, null, defaultcreds)

PostMethod post = new PostMethod(url);
post.addParameter 'IfMachine', 'Continue'
post.addParameter 'Method', 'POST'
post.addParameter 'From', '[YourNumber]'
post.addParameter 'To', parameters.phoneNumber
post.addParameter 'Body', 'This is an SMS from Axeda'

client.executeMethod(hc, post);
//logger.info message = "Status:" + post.getStatusText()
//logger.info post.getResponseBodyAsString()
post.releaseConnection();
Calling external services from M2M applications is a critical aspect of building end-to-end solutions. Knowing how to apply network timeouts when connecting to external servers can prevent unexpected and problematic network hang-ups. Let's investigate how to create a safe networking flow using HttpClient, HttpBuilder, and Apache's FTPClient class.

Background

Custom Objects called from Expression Rules have a configurable maximum execution time. This is set by the com.axeda.drm.rules.statistics.rule-time-threshold property. Without this safeguard in place, long-running or misbehaved Custom Objects can fill internal processing queues and degrade server performance.

In Java (and Groovy) all network calls internally use InputStream.read() to establish the socket connection and to read data from the socket. It is possible for faulty external servers (such as an FTP server) to hang and not respond properly. This means that the InputStream.read() method will continuously wait for the server to respond with data, and the server will never respond. According to the Java spec, InputStream.read() may be uninterruptible while it is waiting for data. This means that even if a Custom Object has exceeded the com.axeda.drm.rules.statistics.rule-time-threshold, the Rule Sniper will still not be able to interrupt the Custom Object's execution if it is waiting on InputStream.read(). Because the Custom Object cannot be stopped, the internal processing queues will eventually fill.

Even though InputStream.read() is uninterruptible, it is still possible to set timeouts for it to be able to give up on a connection. Beyond that, we want to make sure that the connection is completely disconnected.

Types of Timeouts

There are typically two types of timeouts that should be set when making calls over the web: the Connection Timeout and the Socket Timeout. The Connection Timeout is the maximum amount of time allowed for establishing the bi-directional socket connection between the client and server. Behind the scenes, socket connection involves resolving the domain name of the server to an IP address, and then the server opening a port to connect with the client's port. The Socket Timeout limits the amount of time each socket operation is allowed to take; it limits how long InputStream.read() will listen for a server's response. If a server is faulty or overloaded it may take a long time (or forever) to respond to a request; this timeout limits how long the client will wait for the server to respond.

When making any calls from a Custom Object to an external server (either WebService calls or FTP transfers), you should always set the Connection Timeout and the Socket Timeout, and always try to keep the timeouts as reasonably small as possible. Failure to do so could unexpectedly impact your Axeda server. Consider a Custom Object that takes an average of 10 seconds to run and is called to make an external WebService call once a minute. This will not cause any issues and the system will be stable. If the external server suddenly suffers a performance degradation and the external WebService call now takes over a minute to run, the execution queue will eventually fill, causing performance degradation to the Axeda system. To protect against this scenario, set the timeouts to limit the call to one minute, and log whenever the time limit is exceeded.
Examples

Provided below are examples of properly set timeouts and thorough connection management using HttpClient, HttpBuilder, and FTPClient. All of these examples assume they are being executed from Custom Objects.

By default, set the Connection Timeout to 10 seconds. In normal circumstances, connections should not take more than 10 seconds; if they exceed this time there is a good chance of networking issues between the client and server. The Socket Timeout can vary per use case. The examples provided set the Socket Timeout to 30 seconds, which should be sufficient for typical WebService calls and small to medium sized FTP file transfers. Depending on exactly what is being done, the timeout may have to be increased. If you expect the call to go over 5 minutes, please contact Axeda Support to investigate increasing the com.axeda.drm.rules.statistics.rule-time-threshold property (which defaults to 5 minutes).

HttpClient

//HttpClient
import org.apache.http.client.HttpClient
import org.apache.http.impl.client.DefaultHttpClient
import org.apache.http.client.methods.HttpGet
import org.apache.http.HttpResponse
import org.apache.http.params.BasicHttpParams
import org.apache.http.params.HttpParams
import org.apache.http.params.HttpConnectionParams

int TENSECONDS  = 10*1000
int THIRTYSECONDS = 30*1000

final HttpParams httpParams = new BasicHttpParams()
//Establishing the connection should take <10 seconds in most circumstances
HttpConnectionParams.setConnectionTimeout(httpParams, TENSECONDS)
//The data transfer/call should take <30 seconds.  Adjust as necessary if receiving large data sets.
HttpConnectionParams.setSoTimeout(httpParams, THIRTYSECONDS)

HttpClient hc = new DefaultHttpClient(httpParams)
try {
  //Simply get the contents of http://www.axeda.com and log it to the Custom Object Log
  HttpGet get = new HttpGet("http://www.axeda.com")
  HttpResponse response = hc.execute(get)
  BufferedReader br = new BufferedReader( new InputStreamReader( response.getEntity().getContent()))
  br.readLines().each {
    logger.info it
  }
} finally {
  //Make sure to shut down the connectionManager
  hc.getConnectionManager().shutdown()
}
return true

https://gist.github.com/axeda/5189092/raw/2f7b93c5f96ed8f445df4364b885486bc6fa1feb/HttpClientTimeouts.groovy

HttpBuilder

import groovyx.net.http.HTTPBuilder
import static groovyx.net.http.ContentType.*
import static groovyx.net.http.Method.*

int TENSECONDS  = 10*1000;
int THIRTYSECONDS = 30*1000;

HTTPBuilder builder = new HTTPBuilder('http://www.axeda.com')
//HTTPBuilder has no direct methods to add timeouts.  We have to add them to the HttpParams of the underlying HttpClient
builder.getClient().getParams().setParameter("http.connection.timeout", new Integer(TENSECONDS))
builder.getClient().getParams().setParameter("http.socket.timeout", new Integer(THIRTYSECONDS))

try {
  //Simply get the contents of http://www.axeda.com and log it to the Custom Object Log
  builder.request(GET, TEXT) {
    response.success = { resp, res ->
      res.readLines().each {
        logger.info it
      }
    }
  }
} finally {
  //Make sure to always shut down the HTTPBuilder when you're done with it
  builder.shutdown()
}
return true

https://gist.github.com/axeda/5189102/raw/66bb3a4f4f096681847de1d2d38971e6293c4c6b/HttpBuilderTimeouts.groovy

FtpClient

Apache's FTPClient has a third type of timeout, the Default Timeout. The Default Timeout further ensures that socket timeouts are always used.
Note: the Default Timeout does not set a timeout for the .connect() method.

import org.apache.commons.net.ftp.*
import java.io.InputStream
import java.io.ByteArrayInputStream

String ftphost = "127.0.0.1"
String ftpuser = "test"
String ftppwd = "test"
int ftpport = 21
String ftpDir = "tmp/FTP"

int TENSECONDS  = 10*1000
int THIRTYSECONDS = 30*1000

//Declare FTP client
FTPClient ftp = new FTPClient()

try {
  ftp.setConnectTimeout(TENSECONDS)
  ftp.setDefaultTimeout(TENSECONDS)
  ftp.connect(ftphost, ftpport)

  //30 seconds to log on.  Also 30 seconds to change to working directory.
  ftp.setSoTimeout(THIRTYSECONDS)

  def reply = ftp.getReplyCode()
  if (!FTPReply.isPositiveCompletion(reply))
  {
    throw new Exception("Unable to connect to FTP server")
  }
  if (!ftp.login(ftpuser, ftppwd))
  {
    throw new Exception("Unable to login to FTP server")
  }
  if (!ftp.changeWorkingDirectory(ftpDir))
  {
    throw new Exception("Unable to change working directory on FTP server")
  }

  //Change the timeout here for a large file transfer that will take over 30 seconds
  //ftp.setSoTimeout(THIRTYSECONDS);

  ftp.setFileType(FTPClient.ASCII_FILE_TYPE)
  ftp.enterLocalPassiveMode()

  String filetxt = "Some String file content"
  InputStream is = new ByteArrayInputStream(filetxt.getBytes('US-ASCII'))
  try
  {
    if (!ftp.storeFile("myFile.txt", is))
    {
      throw new Exception("Unable to write file to FTP server")
    }
  } finally
  {
    //Make sure to always close the inputStream
    is.close()
  }
} catch (Exception e)
{
  //handle exceptions here by logging or auditing
} finally
{
  //if the IO is timed out or force disconnected, exceptions may be thrown when trying to logout/disconnect
  try
  {
    //10 seconds to log off.  Also 10 seconds to disconnect.
    ftp.setSoTimeout(TENSECONDS);
    ftp.logout();
    //depending on the state of the server the .logout() may throw an exception;
    //we want to ensure complete disconnect.
  }
  catch (Exception innerException)
  {
    //You potentially just want to log that there was a logout exception.
  }
  finally
  {
    //Make sure to always disconnect.  If not, there is a chance you will leave hanging sockets
    ftp.disconnect();
  }
}
return true

https://gist.github.com/axeda/5189120/raw/83545305a38d03b6a73a80fbf4999be3d6b3e74e/FtpClientConnectionTimeouts.groovy
Back in 2018 an interesting capability was added to ThingWorx Foundation allowing you to enable statistical calculation of service and subscription execution.

We typically advise customers to approach this with caution for production systems, as the additional overhead can be more than you want to add to the work the platform needs to handle. That said, these statistics, if used consciously, can be extremely helpful during development, testing, and troubleshooting to help ascertain which entities are executing which services and where potential system bottlenecks or areas deserving performance optimization may lie.

Although I've used the Utilization Subsystem services for statistics for some time now, I've always found that the Composer table view is not sufficient for a deeper multi-dimensional analysis. Today I took a first step in remedying this by getting these metrics into Excel, and I wanted to share it with the community, as it can give developers and architects another view into their ThingWorx application's execution, and allow them to take and compare benchmarks to ensure that operation and scaling are happening as expected once the application is in production.

Utilization Subsystem Statistics

You can enable and configure statistics calculation from the Subsystem Configuration tab. The help documentation does a good job of explaining this, so I won't repeat it here. Base guidance is not to use persisted statistics, nor percentile calculation, as both have significant performance impacts. Aggregate statistics are less resource intensive, as there are fewer counters, so this would be more appropriate for a production environment. Specific entity statistics require greater resources, and this will scale up with the number of provisioned entities that you have (i.e. 1,000 machines versus 10,000 machines), whereas aggregate statistics will remain more constant as you scale up your deployment and its load.

Utilization Subsystem Services

In the subsystem Services tab, you can select "UtilizationSubsystem" from the filter drop-down and you will see all of the relevant services to retrieve and reset the statistics. Here I'm using the GetEntityStatistics service to get entity statistics for Services and Subscriptions.

Using Postman to Save the Results to File

I have used Postman to do the same REST API call, format the results as HTML, and save them to file so that they can be imported into Excel. You need to call '/Thingworx/Subsystems/UtilizationSubsystem/Services/GetEntityStatistics' as a POST request with the Content-Type and Accept headers set to 'application/xml'. Of course you also need to add an appropriately permissioned and secured AppKey to the headers in order to authenticate and get service execution authorization. (A scripted alternative to the Postman call is sketched at the end of this post.) You'll note the Export Results > Save to a file menu over on the right to get your results saved.

Importing the HTML Results into Excel

As simple as getting a standard web-formatted file into Excel should be, it didn't turn out to be as easy as I had hoped, so I switched over to Windows to take advantage of Power Query. From the Data ribbon, select Get Data > From File > From XML, then find and select the HTML file saved in the previous step. Once it has loaded the file and done some preparation, you'll need to select the GetEntityStatistics table in the results on the left. This should display all of the statistics in a preview table on the right. Once the query has completed, you should have a table showing your statistical data, ready for... well... slicing and dicing.

The good news is that I did the hard part for you, so you can just download the attached spreadsheet and update the dataset with your fresh data to have everything parsed out into separate columns. Now you can use the column filters to search for entity or service patterns, or to select specific entities or attributes that you want to analyze. You'll need to clear the column filters later to get your whole dataset back.

Updating the Spreadsheet with Fresh Data

In order to make this data and its analysis more relevant, I went back, reset all of the statistics, and took a new sample which was exactly one hour long. This way I would get correct recent min/max execution time values as well as a better understanding of just how many executions/triggers happen in a one-hour period for my benchmark. Once I had the new HTML file saved, I went into Excel's Data ribbon, selected a cell in the data table area, and clicked "Queries & Connections", which brought up the pane on the right showing my original query. Hovering over this query, I was prompted with some options and chose "Edit". Then I clicked on the tiny little gear to the right of "Source" in the pane on the right side. Finally I was able to select the new file, and Power Query opened it up for me. I just needed to click "Close & Load" to save and refresh the query providing data to the table.

The only thing at this point was that I didn't have my nice little sparklines, as my regional decimal character is not a period, so I selected the time columns and did a "Replace All" from '.' to ',' to turn them into numbers instead of text.

Et voila! There you have it: ready to sort, filter, search, and review to help you better understand which parts of your application may be overly resource hungry, or even to spot faulty equipment that may be communicating and triggering workflows far more often than it should.

Specific vs General

Depending on the type of analysis that you're doing, you might find that the aggregate statistics are a better option. As they'll be far, far fewer than the entity-specific statistics, they'll do a better job of giving you a holistic view of the types of things that are happening with your ThingWorx application's execution.

The entity-specific dataset that I'm showing here would be a better choice for troubleshooting and diagnostics, to try to understand why certain customers/assets/machines are behaving strangely, as we can drill specifically into these stats. Keep in mind however that you should then compare these findings with the general baseline to see how this particular asset is behaving compared to the whole fleet.

As a size guideline: I did an entity-specific version of this file for a customer with 1,000 machines, and the Excel spreadsheet was 7MB compared to the 30KB of the one attached here; just opening and saving it was tough for Excel (likely due to all of my nested formulas). Keep this in mind as you use this feature, as there is memory overhead, meaning also garbage collection and associated CPU usage.
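As a scripted alternative to the Postman step above, the same statistics can be fetched from a small ThingWorx service that calls the subsystem directly server-side rather than over REST. A minimal sketch; the row field names in the loop are assumptions for illustration, so inspect the returned InfoTable's Data Shape on your version for the real ones:

// Minimal sketch: fetch entity statistics directly from the subsystem.
// GetEntityStatistics is the same service the Postman call invokes over REST.
var stats = Subsystems["UtilizationSubsystem"].GetEntityStatistics();

// Log a quick summary. The field names "name" and "totalExecutions" are
// assumptions - check stats.dataShape for the actual column names.
for (var i = 0; i < stats.rows.length; i++) {
    var row = stats.rows[i];
    logger.info(row.name + ": " + row.totalExecutions);
}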