IoT Tips
A Feature - a piece of information that is potentially useful for prediction. Any attribute can be a feature, as long as it is useful to the model.

Feature engineering is the process of transforming raw data into features that better represent the underlying problem to the predictive models, resulting in improved model accuracy on unseen data. It is a loosely defined space of tasks related to designing feature sets for machine learning applications. It has two components: first, understanding the properties of the task you are trying to solve and how they might interact with the strengths and limitations of the model you are going to use; second, experimental work where you test your expectations and find out what actually works and what doesn't.

Feature engineering as a technique has three subcategories: feature selection, dimension reduction, and feature generation.

Feature Selection: Sometimes called feature ranking or feature importance, this is the process of ranking the attributes by their value to the predictive ability of a model. Algorithms such as decision trees automatically rank the attributes in the data set; the top few nodes in a decision tree are considered the most important features from a predictive standpoint. As part of a process, feature selection using entropy-based methods like decision trees can be employed to filter out less valuable attributes before feeding the reduced dataset to another modeling algorithm. Regression-type models usually employ methods such as forward selection or backward elimination to select the final set of attributes for a model. (The original post shows a project-development decision tree as an example.)

Dimension Reduction: Sometimes called feature extraction. The classic example of dimension reduction is principal component analysis (PCA). PCA combines existing attributes into a new data frame with a much smaller number of attributes by utilizing the variance in the data. The attributes that "explain" the highest amount of variance in the data form the first few principal components, and the remaining attributes can be ignored if data dimensionality is a problem from a computational standpoint.

Feature Generation (or Feature Construction): Quite simply, this is the process of manually constructing new attributes from raw data. It involves intelligently combining or splitting existing raw attributes into new ones which have higher predictive power. For example, a date stamp may be used to generate two new attributes such as AM and PM, which may be useful in discriminating whether day or night has a higher propensity to influence the response variable. Feature construction is essentially a data transformation process.

Tips for Better Feature Engineering

Tip 1: Think about inputs you can create by rolling up existing data fields to a higher/broader level or category. As an example, a person's title can be categorized as strategic or tactical. Those with titles of "VP" and above can be coded as strategic; those with titles of "Director" and below become tactical. Strategic contacts are those that make high-level budgeting and strategic decisions for a company; tactical contacts are those in the trenches doing day-to-day work. Other roll-up examples include: collating several industries into a higher-level industry (for instance, collate oil and gas companies with utility companies and call it the energy industry, or fold the high tech and telecommunications industries into a single area called "technology"); defining "large" companies as those that make $1 billion or more and "small" companies as those that make less than $1 billion.

Tip 2: Think about ways to drill down into more detail in a single field. As an example, a contact within a company may respond to marketing campaigns, and you may have information about his or her number of responses. Drilling down, we can ask how many of these responses occurred in the past two weeks, one to three months, or more than six months in the past. This creates three additional binary (yes=1/no=0) data fields for a model. Other drill-down examples include: cadence, the number of days between consecutive marketing responses by a contact (1–7, 8–14, 15–21, 21+); a multiple-responses-on-same-day flag (multiple responses = 1, otherwise = 0).

Tip 3: Split data into separate categories, also called bins. For example, annual revenue for companies in your database may range from $50 million (M) to over $1 billion (B). Split the revenue into sequential bins: $50–$200M, $201–$500M, $501M–$1B, and $1B+. Whenever a company falls within a revenue bin it receives a one; otherwise the value is zero. There are now four new data fields created from the annual revenue field. Other examples: number of marketing responses by contact (1–5, 6–10, 10+); number of employees in the company (1–100, 101–500, 501–1,000, 1,001–5,000, 5,000+). A minimal sketch of this binning technique follows the tip list.

Tip 4: Think about ways to combine existing data fields into new ones. As an example, you may want to create a flag (0/1) that identifies whether someone is a VP or higher and has more than 10 years of experience. Other examples of combining fields include: title of director or below and in a company with fewer than 500 employees; public company located in the Midwestern United States. You can even multiply, divide, add, or subtract one data field by another to create a new input.

Tip 5: Don't reinvent the wheel – use variables that others have already fashioned.

Tip 6: Think about the problem at hand and be creative. Don't worry about creating too many variables at first, just let the brainstorming flow.
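Below is a minimal, hedged JavaScript sketch of the binning idea from Tip 3. The field names (annualRevenueM and the rev_* flags) are made up for illustration and are not part of any product API.

```javascript
// Minimal sketch: turn a numeric annualRevenueM value (in $M) into four binary bin flags.
// All names here are hypothetical; adapt them to your own dataset.
function binRevenue(annualRevenueM) {
    return {
        rev_50_200M:  (annualRevenueM >= 50  && annualRevenueM <= 200)  ? 1 : 0,
        rev_201_500M: (annualRevenueM >= 201 && annualRevenueM <= 500)  ? 1 : 0,
        rev_501M_1B:  (annualRevenueM >= 501 && annualRevenueM < 1000)  ? 1 : 0,
        rev_1B_plus:  (annualRevenueM >= 1000) ? 1 : 0
    };
}

// Example: a $750M company gets flags 0, 0, 1, 0
var flags = binRevenue(750);
```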
Sampling Strategy

This post covers the four sampling strategies that are available in ThingWorx Analytics. It explains how each strategy runs behind the scenes, when you may want to use it, and the pros and cons of each.

SAMPLE_WITH_REPLACEMENT
This strategy is not often used by professionals but may still be useful in certain circumstances. When you sample with replacement, the value that you randomly selected is returned to the sample pool, so there is a chance that the same record appears multiple times in your sample.
Example: Suppose you have a hat that contains 3 cards with different people's names on them: John, Sarah, and Tom. You make 2 random selections. On the first selection you pull out the name Tom. When you sample with replacement, you put the Tom card back into the hat and then randomly select a card again. For your second selection, you could get another name such as Sarah, or the same one you selected, Tom.
Pros: May find improved models in smaller datasets with low row counts.
Cons: The accuracy of the model may be artificially inflated due to duplicates in the sample.

SAMPLE_WITHOUT_REPLACEMENT
This is the default setting in ThingWorx Analytics and the sampling strategy most commonly used by professionals. After a value is randomly selected from the sample pool, it is not returned, which ensures that all the values selected for the sample are unique.
Example: Using the same hat with the cards John, Sarah, and Tom, you make 2 random selections. On the first selection you pull out the name Tom. When you sample without replacement, you randomly select a card from the hat again without putting the Tom card back. For your second selection, you could only get the Sarah or John card.
Pros: This is the most commonly used sampling strategy, and it delivers the best results in most cases.
Cons: May not be the best choice if the desired goal is underrepresented in the dataset.

UPSAMPLE_AND_SAMPLE_WITHOUT_REPLACEMENT
This is useful when the desired goal is underrepresented in the dataset. The records that represent the desired outcome of the goal are copied multiple times so they represent a larger share of the total dataset.
Example: Suppose you are trying to discover whether a patient is at risk of developing a rare condition, such as chronic kidney failure, that affects around 0.5% of the US population. In this case, the most "accurate" model would simply say that no one will get the condition, and according to the numbers it would be right 99.5% of the time. In reality this is not helpful at all, since you want to know whether the patient is at risk of developing the condition. To prevent this, copies are made of the records where the patient did develop the condition so they represent a larger share of the dataset. This gives ThingWorx Analytics more examples to help it generate a more accurate model.
Pros: Patterns from the original dataset remain intact.
Cons: Longer training time.

DOWNSAMPLE_AND_SAMPLE_WITHOUT_REPLACEMENT
This is also useful when the desired goal is underrepresented in the dataset. Continuing the medical example above, instead of creating copies of the desired records, some records that do not represent the desired goal outcome are removed from the dataset. This causes the records where patients did develop the condition to occupy a larger percentage of the dataset.
Pros: Shorter training time.
Cons: Patterns from the original dataset may be lost.

A minimal sketch contrasting the two basic selection modes follows.
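The hat example can be expressed as a short, hedged JavaScript sketch. This is illustration only, not ThingWorx Analytics API code; it simply shows the difference between sampling with and without replacement.

```javascript
// Illustration only: draw k names from a pool with or without replacement.
function sample(pool, k, withReplacement) {
    var source = pool.slice();          // work on a copy
    var picks = [];
    for (var i = 0; i < k && source.length > 0; i++) {
        var idx = Math.floor(Math.random() * source.length);
        picks.push(source[idx]);
        if (!withReplacement) {
            source.splice(idx, 1);      // remove the card from the hat
        }
    }
    return picks;
}

var hat = ["John", "Sarah", "Tom"];
sample(hat, 2, true);   // may contain the same name twice
sample(hat, 2, false);  // always two different names
```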
1. Add a JSON parameter. Example:

{
    "rows":[
        {
            "email":"example1@ptc.com"
        },
        {
            "name":"Qaqa",
            "email":"example2@ptc.com"
        }
    ]
}

2. Create an InfoTable with a DataShape using CreateInfoTableFromDataShape(params).

3. Using a for loop, iterate through each JSON object and add it to the InfoTable using InfoTableName.AddRow(YourRowObjectHere). Example:

var params = {
    infoTableName: "InfoTable",
    dataShapeName: "jsontest"
};
var infotabletest = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);
for (var i = 0; i < json.rows.length; i++) {
    // index into the current row; the original snippet was missing the [i]
    infotabletest.AddRow({ name: json.rows[i].name, email: json.rows[i].email });
}
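If this runs inside a ThingWorx service whose output base type is INFOTABLE, the populated table is returned to the caller by assigning it to the service's result variable. This is a small, hedged addition that assumes the service is set up that way:

```javascript
// return the populated InfoTable from the service
result = infotabletest;
```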
There are multiple approaches to improve loading performance:

- Increase the network bandwidth between the client PC and the ThingWorx server.
- Reduce unnecessary handover when the client submits requests to the ThingWorx server over the network.

Here are suggestions to reduce unnecessary handover between client and server.

Eliminate the use of proxy servers between the client and ThingWorx. The Combined.version.date.xxx.xxx.js file must be downloaded the first time a mashup page loads (TTFB and Content Download time), and loading performance suffers when proxy servers sit between the client and the ThingWorx server. (The original post shows test results from the same environment with and without the proxy server.)

Remove extensions that are not used on the ThingWorx server. After installing extensions, the size of Combined.version.date.xxx.xxx.js increases.

Avoid HTTP/HTTPS request errors. In the case observed, an HTTPS request error when calling Google Maps took more than 20 seconds.
Welcome to the ThingWorx Community area for code examples and sharing.

We have a few how-to items and basic guidelines for posting content in this space. The Jive platform that our community runs on provides some tools for posting and highlighting code in the document format that this area is based on. Please try to follow these settings to make the area easy to use, read, and follow.

- At the top of your new document, please give a brief description of the code sample that you are posting.
- Use the code formatting tool provided for all parts of code samples (including if there are multiple in one post).
- Try your best to put comments in the code to describe behavior where needed.
- You can edit documents, but note that each time you save them a new version is created. You can delete old versions if needed.
- You may add comments to others' code documents and modify your own code samples based on comments.
- If you have an alternative way to accomplish the same as an existing code sample, please post it to the comments. We encourage everyone to add alternatives noted in comments to the main post/document.

Format code: the double blue arrows allow you to select the type of code being inserted and will do keyword highlighting as well as add line numbers for reference and discussions.
1. Create a network and add all entities that implement a specific ThingShape to the network.
2. Create a ThingShape mashup as below. Note: bind the Entity parameter to the DynamicThingShapes_TracotrShape service GetProperties input EntityName. Also bind the mashup RefreshRequested event to that service.
3. Create a mashup named ContentShape, and add a Tree widget and a Contained Mashup widget to it.
4. Bind the GetNetworkConnection service's Selected Row(s) result and SelectedRowsChanged event to the Contained Mashup widget. Note: a Master can entirely replace the ThingShape mashup; using a Master is suggested for ThingWorx 6.0 and later.
Timers and schedulers can be useful tools in a ThingWorx application. Their only purpose, of course, is to create events that the platform can use to perform any number of tasks, ranging from requesting data from an edge device, to doing calculations for alerts, to running archive functions for data. Sounds like a simple enough process. Then why do most platform performance issues seem to come from these two simple templates?

It all has to do with how the event is subscribed to and how the platform needs to process events and subscriptions. The task of handling most events and their related subscription logic falls to the EventProcessingSubsystem. You can see its metrics via the Monitoring -> Subsystems menu in Composer. This shows how many events have been processed and how many events are waiting in the queue to be processed, along with some other settings. You can often identify issues with timers and schedulers here: the number of queued events climbs while the number of processed events stagnates.

But why? Shouldn't this multi-threaded processing take care of all of that? Most of the time it can, but when you suddenly flood it with transactions all trying to access the same resources at the same time, it can grind to a halt. This typically occurs when you create a timer/scheduler and subscribe to its event at a template level. To illustrate, imagine we have 1,000 edge devices that we must pull data from, and we only need this information every 5 minutes. When we retrieve it, we must look up some data mapping from a DataTable and store the data in a Stream. At the 5-minute interval the timer fires its event, and suddenly the EventProcessingSubsystem receives 1,000 events at once. This by itself is not a problem, but to be efficient it will concurrently process as many as it can. So we now have multiple transactions all trying to query a single DataTable at the same time. In order to read this table, the database (no matter which back-end persistence provider) will lock parts or all of the table (depending on the query). As you can probably guess, things begin to slow down because each transaction holds the lock while many others are trying to acquire one. This happens over and over until all 1,000 transactions are complete. In the meantime we are also running other commands in the subscription and writing Stream entries to the same database inside the same transactions. Additionally, remember that all of these transactions and the data they access must be held in memory while they are running, so you will also see a memory spike, and depending on resources you can run into a problem here as well.

Regular events can easily be part of any use case, so how should this work? The trick comes in two parts. First, any event a Thing raises can be subscribed to on that same Thing. When you do this, the subscription transaction does not go into the EventProcessingSubsystem; it executes on the threads already open in memory for that Thing. So subscribing to a timer event on the Timer Thing that raised the event will not flood the subsystem.

In the previous example, how would you go about polling all of these Things? Simple: take the exact logic you would have executed in the template subscription and move it to the timer subscription. To keep the context of the Thing, use the GetImplementingThings service of the template to retrieve the list of all 1,000 Things created from it, then loop through these Things and execute the logic. This also means that all of the DataTable queries and logic are executed sequentially, so the database locking issue goes away as well. Memory pressure decreases too, because the memory allocated for the queries is either reused or can be cleaned up during garbage collection, since the variable that held the result is reallocated on each loop iteration.

Overall it is best not to use timers and schedulers whenever possible. Use data-triggered events, UI interactions, or REST API calls to initiate transactions instead. This lowers the overall risk of flooding the system with resource demands, from processor, to memory, to threads, to the database. Sometimes, though, they are needed. Follow the basic guidance here and things should run smoothly. A minimal sketch of the timer-subscription pattern described above follows.
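A hedged sketch of that subscription, written as ThingWorx JavaScript in a subscription on the Timer Thing's own Timer event. The template name is a placeholder, and the DataTable/Stream work is shown only as a comment rather than concrete service calls:

```javascript
// Subscription body on the Timer Thing itself (event: Timer), not on the device template.
// "Devices.Template" is a hypothetical template name used for illustration.
var devices = ThingTemplates["Devices.Template"].GetImplementingThings();

for (var i = 0; i < devices.rows.length; i++) {
    var device = Things[devices.rows[i].name];

    // ... look up the mapping in the DataTable and write the reading to the Stream here ...
    // Because this loop runs sequentially, only one transaction touches those tables at a time.
}
```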
Concepts of Anomaly Detection used in ThingWatcher

ThingWatcher is based on anomaly detection with the normal distribution. What does that mean? Normally distributed metrics follow a set of probabilistic rules. Upcoming values that follow those rules are recognized as being "normal" or "usual", whereas values that break those rules are recognized as unusual.

What is a normal distribution? A normal distribution is a very common probability distribution. In real life, the normal distribution approximates many natural phenomena. A data set is known as "normally distributed" when most of the data aggregates around its mean in a symmetric way, and its extreme values become less and less likely to appear.

Example: When a factory is making 1 kg sugar bags it doesn't always produce exactly 1 kg. In reality, it is around 1 kg: most of the time very close to 1 kg and very rarely far from 1 kg. Indeed, the production of 1 kg sugar bags follows a normal distribution.

Mathematical rules: When a metric appears to be normally distributed it follows some interesting laws, as the sugar bag example does.
- The mean and the median are the same (both equal to 1000), because of the perfectly symmetric "bell shape".
- The standard deviation, called sigma (σ), defines how the normal distribution is spread around the mean. In this example σ = 20.
- 68% of all values fall between [mean-σ; mean+σ]; for the sugar bag, [980; 1020].
- 95% of all values fall between [mean-2σ; mean+2σ]; for the sugar bag, [960; 1040].
- 99.7% of all values fall between [mean-3σ; mean+3σ]; for the sugar bag, [940; 1060].
The last three rules are known as the 68-95-99.7 rule, also called the three-sigma rule of thumb.

When the rules get broken, it's an anomaly. As previously stated, when a system has been shown to be normally distributed, it follows a set of rules. Those rules become the model representing the normal behavior of the metric. Under normal conditions, upcoming values will match the normal distribution and the model will be followed. But what happens when the rules get broken? This is when things turn different, as something unusual is happening. In theory, in a normal distribution no value is impossible: if the weights of the sugar bags were truly normally distributed, we would probably find an 860 g bag of sugar once every billion products. In practice, we approximate this sugar bag example as normally distributed, and almost impossible values are approximated as impossible.

Techniques of Anomaly Detection

Technique 1: outlier values. An almost impossible value can be considered an anomaly. When the value deviates too much from the mean, say by ±4σ, then we can consider this almost impossible value an anomaly. (This limit can also be calculated using percentiles.) Sugar bags that weigh less than 920 g or more than 1080 g are considered anomalous; chances are there is a problem in the production chain. This provides a simple way to define maximum and minimum thresholds.

Technique 2: detecting change in the normal distribution. The outlier technique can detect unusual values fast, using only a few points, but it can't detect anomalies that drift from one sigma band to another in an otherwise usual manner. To detect this kind of anomaly we use a "window" of the n last elements. If the mean and standard deviation of this window change too much from the usual values, then we can deduce an anomaly. The bigger the window is, the more stable it becomes, but it also requires more time to detect the anomaly, as it needs to aggregate more values for the detection. A small sketch of both techniques follows.
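A hedged JavaScript sketch of the two techniques above. This is an illustration only, not ThingWatcher code; the 4σ and 2σ thresholds are just example choices.

```javascript
// Illustration only: simple sigma-based checks, not the ThingWatcher implementation.
function mean(values) {
    return values.reduce(function (a, b) { return a + b; }, 0) / values.length;
}

function stdDev(values) {
    var m = mean(values);
    var variance = values.reduce(function (a, b) { return a + (b - m) * (b - m); }, 0) / values.length;
    return Math.sqrt(variance);
}

// Technique 1: flag a single reading that deviates more than 4 sigma from the learned mean
function isOutlier(value, learnedMean, learnedSigma) {
    return Math.abs(value - learnedMean) > 4 * learnedSigma;
}

// Technique 2: flag a window whose own mean or spread drifts too far from the learned values
function windowDrifted(window, learnedMean, learnedSigma) {
    return Math.abs(mean(window) - learnedMean) > 2 * learnedSigma ||
           stdDev(window) > 2 * learnedSigma;
}

// Sugar bag example: learned mean 1000 g, sigma 20 g
isOutlier(915, 1000, 20);                           // true (below the 920 g limit)
windowDrifted([1045, 1050, 1048, 1052], 1000, 20);  // true (the window mean has drifted)
```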
In this particular scenario, the server is experiencing a severe performance drop. The first thing to check is the overall state of the server: CPU consumption, memory, disk I/O. Not seeing anything unusual there, the second step is to check the ThingWorx condition through the status tool available with the Tomcat manager. Per the observation:

- Despite 12 GB of memory being allocated, only 1 GB is in use.
- A large number of threads currently running on the server are experiencing long run times (up to 35 minutes).

Checking the Tomcat configuration didn't show any errors or potential causes of the problem, so, moving on to the second bullet, the threads needed to be analyzed. One thread had been running for 200,936 milliseconds, more than 3 minutes for an operation that should take less than a second. It was also noted that 93 busy threads were running concurrently.

Cause: concurrency on writing session variable values to the server. The threads are kept alive and block the system. Tracing the issue back to a piece of code in a service recently included in the application, the problem was solved by adding an IF condition so that the session variable values are updated only when needed; as a result, the update now happens only once per shift.

Conclusion: using Tomcat to view mashup-generated threads helped identify the service involved in the root cause. The modification required was a small code change to reduce the frequency of the session variable update. A tiny sketch of such a guard follows.
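A minimal, hedged sketch of that kind of guard inside a ThingWorx service script. The property name lastUpdatedShift and the variable currentShift are hypothetical, and the actual session-variable call from the original application is not shown:

```javascript
// Only touch the session variable when the value has actually changed.
// me.lastUpdatedShift is a hypothetical Thing property that remembers the last written value.
if (me.lastUpdatedShift !== currentShift) {
    me.lastUpdatedShift = currentShift;
    // ... perform the (expensive) session variable update here ...
}
```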
Hi everybody,

In this blog post I want to share my local ThingWorx installation, with some optimizations that I did for local development.

- Use -XX:+UseConcMarkSweepGC. This uses the older garbage collector from the JVM instead of the newer G1GC recommended by the ThingWorx Installation Guide since version 7.2. The advantage of ConcMarkSweepGC is that the startup time is faster and the total memory footprint of the Tomcat is far lower than with G1GC.
- Use -agentlib:jdwp=transport=dt_socket,address=1049,server=y,suspend=n. This lets your Java IDE of choice connect directly to the Tomcat server, so you can debug your extension code, or even the ThingWorx code using the Eclipse Class Decompilers, for example. Change 1049 to your port of choice for exposing the server debugging port.
- Use -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=60000 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false. This sets up the server to allow JMX monitoring. I usually use VisualVM from the JDK bin folder, but you can use any JMX monitoring tool. This uses no authentication, no SSL, and port 60000 - modify if you need to.

I usually start Tomcat manually from a folder via startup.bat, and the setenv.bat looks like:

set JAVA_HOME=C:\Program Files\Java\jdk1.8.0_102
set JRE_HOME=C:\Program Files\Java\jdk1.8.0_102
set THINGWORX_PLATFORM_SETTINGS=D:\Work\servers\apache-tomcat-8.0.33
set CATALINA_OPTS=-d64 -XX:+UseNUMA -XX:+UseConcMarkSweepGC -Dfile.encoding=UTF-8 -agentlib:jdwp=transport=dt_socket,address=1049,server=y,suspend=n -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=60000 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false

(THINGWORX_PLATFORM_SETTINGS points to the folder where the platform-settings.json file is located.)

In this mode I can look at any errors in almost real time from the console, and it makes killing the server for Java extension reload a breeze (Ctrl+C).

Please don't hesitate to provide feedback on this document, I certainly welcome it. Be warned: THESE ARE NOT PRODUCTION SETTINGS.

Best regards,
Vladimir
First of all, wishing everyone a blessed 2017!

Here is a little something that hopefully can be helpful for all you ThingWorx developers: a 'Remote Monitoring Application Starter'. It is built mainly around best practices for security and provides a lot of powerful modeling and mashup techniques, along with some cool dashboard techniques. Everything is documented in the accompanying documents, also in the zip.

Install instructions: ThingWorx Remote Monitoring Starter Application – Installation Guide

Files: all files needed are in a folder called RemoteMonitoringStarter, which is an Export to ThingworxStorage.

Extensions: not included, but the application uses the GoogleWidgetsExtension (Google Map).

Steps:
1. Import the Google Map extension.
2. Place the RemoteMonitoringStarter folder in the ThingworxStorage exports folder.
3. From ThingWorx do an Import from ThingworxStorage: include data, use the default persistence provider, and do NOT ignore subsystems.
4. After the import has finished, go to Organizations and open Everyone. In the organization, remove Users from the Everyone organization unit.
5. Go to DataTables and open PTC.RemoteMonitoring.Simulation.DT. Go to Services and execute SetSimulationValues.
6. Go to the UserManagementSubsystem. In the Configuration section add PTC.RemoteMonitoring.Session.TS to the Session. Note: this step may already be done. (Screenshots are provided at the end of the original post.)

Account passwords: FullAdmin/FullAdmin; all other users have a password of "password". NOTE: you may have to reset your Administrator password using the FullAdmin account. I also recommend changing the passwords after installing.
Since it's somewhat unclear how to set up the password reset feature through the login form, these steps might be a little more helpful.

Assuming the mail extension has already been imported into the ThingWorx platform and properly configured as a Thing - say, PassReset - (test with the SendMessage service to verify), let's create a new user, Blank, and a new organization, Test, that has that user assigned as a member. Open the configuration tab for the organization, assign the PassReset mail Thing as the mail server, assign the login image, style, and prompt (optional), and check Allow Password Reset. (The original post shows screenshots of these settings.)

On to the Email content part: it is not possible to save the organization as-is yet. Clicking on the question mark for the Email content shows the requirements, and this is where it might not be too clear. The tokens [[:user:]], [[:organization:]], and [[:url:]] can be used in the email body, and at runtime they are replaced with the actual username, organization, and reset password URL. Of those fields, only the [[:url:]] token is required, so it is sufficient to place only [[:url:]] in the body and save the organization.

Then, when going to the form login at <your thingworx host:port>/Thingworx/FormLogin/<organization name>, a password reset button is available. After filling out the user information in the reset field, the email gets sent to the specified user address and the proper message appears. Since in this example only the [[:url:]] token has been used in the email content, the received email contains just the reset link.

To troubleshoot any errors that might occur while retrieving the password reset link, it's helpful to check your browser developer tools and the ThingWorx Application Log for details.
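For illustration, here is a hedged example of an email body that uses all three tokens; only the tokens themselves come from the post above, the surrounding wording is made up:

```
Hello [[:user:]],

A password reset was requested for your [[:organization:]] account.
Click the link below to choose a new password:

[[:url:]]
```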
Putting this out because this is a difficult problem to troubleshoot if you don't do it right.

Let's say you have an application where visibility permissions are in effect, so you have the Users group removed from the Everyone organization. Now you have a Thing "Thing1" with properties that are being logged to a ValueStream "VS1". What do you need to make this work? Obviously the necessary permissions to write the values to Thing1 and read the values from Thing1 (for the UI). But for visibility what you'll need is:

- Visibility to Thing1 (makes sense)
- Visibility to the persistence provider of the ValueStream VS1

That's right: you don't need visibility to the ValueStream itself, but you DO need visibility to the persistence provider of that ValueStream. The way the lack of this permission showed up in the Application Log was a message about trying to provide a null value.
Sometimes M2M assets should poll the platform on demand, for example to avoid excessive data charges from chatty assets. A mechanism was developed that instructs the asset to contact (poll) the platform for actions that the asset needs to act on, such as file uploads, Set DataItem, etc. The Shoulder Tap SMS message is the platform's way of contacting the asset - tapping it on the shoulder to let it know there's a message waiting for it. The asset responds by polling for the waiting message. This implementation in the platform provides a way to configure the Model Profile that is responsible for sending an SMS Shoulder Tap message to an M2M asset. The Model Profile contains model-wide instructions for how and when a Shoulder Tap message should be sent.

How does it work? The M2M asset is set not to poll the Axeda Platform for a long period, but the enterprise user has actions that the asset needs to act upon, such as FOTA (Firmware Over-the-Air):
1. A software package is deployed to the M2M asset from the Axeda Platform and put into the egress queue.
2. The Shoulder Tap mechanism executes a Custom Object that sends a message to the asset through a delivery method like SMS, UDP, etc.
3. The asset's SMS (or other) handler receives the message, and the asset then sends a POLL to the platform and acts upon the action in the egress queue.

How do you make Shoulder Tap work for your M2M assets? The first step is to create a Model Profile; the Model Profile tells assets of this model how to communicate. For example, if the Model Profile is Shoulder Tap, then the mechanism used to communicate with the asset will imply Shoulder Tap. Execute the attached custom object, createSMSModelProfile.groovy, and it will create a Model Profile named "SMSModelProfile". When you create a new model, you will see "SMSModelProfile" appear in the Communication Profile dropdown list.

The next step is to create the Custom Object transport script, which is responsible for sending out the SMS or other communication to the asset. In this example the custom object is named SMSCustomObject. The contents of this custom object are outside the scope of this article, but it could make REST API calls to Twilio, Jasper, or a wireless provider's REST APIs to communicate with the remote device via SMS. It could also use the IntegrationPublisher API to send a JMS message to a server the customer controls, which could then talk directly with custom libraries that are not directly compatible with the Axeda Platform.

Once the Shoulder Tap scripting has been tested and is working correctly, you can send a Shoulder Tap to the asset directly from an action or through an Extended UI Module, as shown below:

import com.axeda.platform.sdk.v1.services.ServiceFactory;

final ServiceFactory sFact = new ServiceFactory()
def assetId = (Long) parameters.get("assetId")
def stapService = ServiceFactory.getShoulderTapService()
stapService.sendShoulderTap(assetId)

See Extending the Axeda Platform UI - Custom Tabs and Modules for more about creating and configuring Extended UI Modules.

What about retries? maxRetryCount - this built-in attribute's value defines the number of times the platform will retry sending the Shoulder Tap message before it gives up. retryInterval - the retry interval that can be used if any message delivery needs to be retried. Retry count and interval are configured in the Model Profile Custom Object like so (the original snippet declared the variable as dmd but then referenced fdmd; a single name is used here):

final DeliveryMethodDescriptor dmd = new DeliveryMethodDescriptor();
dmd.setMaxRetryCount(2);
dmd.setRetryInterval(60);
Expression rules are the heart of the Axeda Platform's processing capability. These rules have an If-Then-Else structure that's easy to create and understand; we think of them like a formula in a spreadsheet. For example, say your asset has a dataitem reading for temperature:

IF: temperature > 80
THEN: CreateAlarm("High Temp", 100)

This rule compares the temperature to 80 every time a reading is received. When the condition is true, the rule creates an alarm with name "High Temp" and severity 100. Dataitems represent readings from an asset. They are typically sensors or monitoring parameters in an application, but you can also think of dataitems as variables. The rule can be changed to

IF: temperature > threshold

so that each asset has its own threshold that can be adjusted independently.

Look at the complete list of Expression Rule:
- triggers - the events that trigger a rule to run
- variables - the information you can access in an expression
- functions - the functions that can be used within an expression
- actions - these are called in the Then or Else part of an expression to make something happen

A rule can calculate a new value. For example, if you wanted to know the maximum temperature:

IF: temperature > maxTemperature
THEN: SetDataItem("maxTemperature", temperature)

To convert a temperature in Celsius to Fahrenheit:

IF: temperature
THEN: SetDataItem("tempF", temperature*9/5 + 32)

The If simply names the variable, so any change to that variable triggers the rule to run. There may be lots of other dataitems reported for an asset, and changes to those other dataitems should not recalculate the temperature. When rules should run only when an asset is in a particular mode or state, or when there is a complex sequence to model, read about how State Machines come to the rescue.

Creating and Testing an Expression Rule

We're going to create a simple expression rule and show it running in a few steps. Above, you saw a rule that created an alarm when temperature > 80. Now we will make one that converts a temperature in Celsius to one in Fahrenheit. An expression rule consists of a few things:
- Name
- Description - an optional field to describe the rule
- Trigger - what makes this rule run? The trigger tells the rule whether it applies to Alarms, Data, Files, or many others.
- If - the logic expression for the condition to evaluate
- Then - the logic to run if the condition is true
- Else - the logic to run if the condition is false

To begin, log into an Axeda Platform instance.
1. Navigate to the Manage tab.
2. Select New, then Expression Rule.
3. Enter this expression rule information:
   Name: TempConvert
   Type: Data
   Description: (optional)
   Enabled: leave checked
   If: TempC
   Then: SetDataItem("TempF", TempC*9/5 + 32)
   If you click on functions or scroll down for actions in the Expression Tree, you will see a description in Details.
4. Click the Apply to Asset button to select models and specific assets to apply this rule to.

Now that you have an expression rule, let's try it.

Testing the Expression Rule (NEEDS UPDATING)

You can test the expression rule by simulating the TempC data using the Axeda Simulator, as instructed below. Or you can use the Expression Rules Debugger to simulate the reading and display the results. For information about using the Expression Rules Debugger, see the Expression Rules Debugger documentation in the online Help system.

Simulate a TempC reading:
1. Launch the Axeda Simulator; it will open in a new browser window.
2. Enter your registered email address and Developer Connection password, and click Login.
3. Select asset1 from the Asset dropdown.
4. Under the Data tab, enter the dataitem name TempC and a value such as 28, then click Send.

To see the result, go back to the Platform window and navigate to the Service tab: you should see that 28C = 82.4F. You have created an expression rule that triggers when a value of TempC is received and creates a new dataitem TempF with a calculated value. This rule applies to your model, but if you had many models of assets, it could apply to as many as you want. You could change the rule to do the conversion only If: TempC > 9 and simulate inputs to see the new behavior.

Further Reading: Read about how Rule Timers can trigger rules to run on a scheduled basis. (TODO)
Overview

The Axeda Platform is a secure and scalable foundation to build and deploy enterprise-grade applications for connected products, both wired and wireless. This article provides a detailed feature overview and helpful links to more in-depth articles and tutorials for a deeper dive into key areas.

Types of Connected Product Applications

M2M applications can span many vertical markets and industries; virtually every aspect of business and personal life is being transformed by ubiquitous connectivity. Some examples of M2M applications include:
- Vehicle Telematics and Fleet Management - monitor and track the location, movements, status, and behavior of a vehicle or fleet of vehicles
- Home Energy Monitoring - smart energy sensors and plugs providing homeowners with remote control and cost-saving suggestions
- Smart Television and Entertainment Delivery - an integrated set-top box providing in-view interaction with other devices: review your voicemails while watching a movie, chat with your Facebook friends, etc.
- Family Location Awareness - set geofences on teenagers, apply curfews, locate family members in real time, with vehicle speed and condition
- Supply Chain Optimization - combine status at key inspection points with logistics and present distribution managers with an interactive, real-time control panel
- Telemedicine - self-monitoring/testing, telecardiology, teleradiology

Why Use a Platform?

Have you ever built a Web application? If so, you probably didn't create the Web server as part of your project. Web servers or Web application servers like Apache HTTPd or JBoss manage TCP sockets, threads, the HTTP protocol, the compilation of scripts, and include thousands of other base services. By leveraging the work done by the dedicated individuals who produced those core services, you were able to rapidly build an application that provided value to your users. The Axeda Platform is analogous to a Web server or a Web application server, but provides services and a design paradigm that make connected product development streamlined and efficient.

Anatomy of an Axeda Connected Product

Connected products can really be anything that your product and imagination can offer, but it is helpful to pause for some common considerations that apply to most, if not all, of these types of solutions:
- Getting Connected - bring your product or equipment to the network so that it can provide information to the solution, and react to commands and configuration changes orchestrated by the application logic.
- Manage and Orchestrate - script your business logic in the cloud to tie together remote information with information from other business systems or cloud-based services, react to real-time conditions, and facilitate batch operations to synchronize, analyze, and integrate.
- Present and Report - build your user experiences, enabling people to interact with your connected product, manage workflows around business processes, or facilitate data analysis.

Let's take a look at the Axeda Platform services that help with each of these solution considerations.

Getting Connected - Wired & Wireless

Getting connected can take many shapes, depending on the environment of your product and the economics of your solution. The Axeda Platform makes no assumption about connectivity, but instead provides features and functionality to help you connect. For wireless applications, especially those which may use cellular or satellite communications, the speed and cost of communication will be an important factor. For this reason, Axeda has created the Adaptive Machine Messaging Protocol (AMMP), a cross-platform, efficient HTTP-based protocol that you can use to communicate bi-directionally with the platform. The protocol is expressive, robust, secure, and, most importantly, able to be implemented on a wide range of hardware and programming environments. When you are faced with connecting legacy products that may be communicating with a proprietary messaging protocol, the Axeda Platform can be extended with Codecs to "learn" your protocol by translating your device's communication format into a form that the platform can understand. This is a great option for retrofitting existing, deployed products to get connectivity and value today, while designing your next generation of product with AMMP support built in.

Manage and Orchestrate

The Data Model defines the information and its behavior in the Axeda Platform.

Rules form the heart of a dynamic connected application. There are three types of rules that can be leveraged for your orchestration layer:
- Expression rules run in the cloud, and are configured and associated with your assets through the Axeda Admin UI or SDK. These rules have an If-Then-Else structure that's easy to create and understand - like a formula in a spreadsheet. For example, say your asset has a dataitem reading for temperature. [TODO: Image of dataitem TEMP] This rule compares the temperature to 80 every time a reading is received; when this happens, the rule creates an alarm with name "High Temp" and severity 100. Learn more about Expression Rules.
- State Machines help organize Expression Rules into manageable groups that apply to assets when the assets are in a certain state. For example, if your asset were a refrigerated truck, and you were interested in receiving an alert when the temperature within the cargo area rose above a preset threshold, you would not want this rule to be applied when your truck asset is empty and parked in the distribution center lot. In this case, you might organize your rules into a state machine called "TruckStatus". The TruckStatus state machine would then consist of a set of states that you define, and the rules that execute when the truck is in a particular state. [TODO - show state machine image?]
  State "Parked": IF doorOpen THEN ...
  State "In Transit": IF temperature > 40 THEN ...
  State "Maintenance": <no rules>
  You can learn more about state machines in an upcoming technical article.
- Scripting: using Axeda Custom Objects, you can harness the power of the Axeda SDK to gain access to the complete set of platform data and functionality, all within a script that you customize. Custom Object scripts can be invoked in an Expression Rule to provide customized and flexible business logic for your application. Custom Object scripts are written in a powerful scripting language called Groovy, which is 100% Java-syntax compatible. Groovy also offers very modern, concise syntax options that make your code simple and easy to understand. Groovy can also implement the body of a web service method - we call this Scripto. Scripto allows you to write code and call that code by name through a REST web service, so a client application can call a set of customized web services that return exactly the information and format needed by the application. Here is a beginning tutorial on Scripto. This site includes many Scripto examples called by different Rich Internet Applications (RIA).

Location-Based Services

Knowing where something is, and where it has been, opens a world of possible application features. The Axeda Platform can keep track of an asset's current and historical location, which allows your applications to plot the current position of your assets on a map, or show a breadcrumb trail of where a particular asset has been. Geofences are virtual perimeters around geographic locations. You can define a geofence around a physical location by describing a center point and radius, or by "drawing" a polygon around an arbitrary shape. For instance, you may have a geofence around the Boston metro that is defined as the center of town with a 10-mile radius. You may then compare an asset's location to this geofence in a rule and trigger events to occur:

IF InNamedGeofence("Boston") THEN CreateAlarm(...)

You can learn more about geofences and other location-oriented rule features in an upcoming tutorial.

Integration Queue

In today's software landscape, almost no complete solution is an island unto itself. Business systems need to interoperate by sharing data and events, so that specialized systems can do their job without tight coupling. Messaging is a robust and capable pattern for bridging the gap between systems, especially those that are hosted in the cloud. The Axeda Platform provides a message queue that external systems can subscribe to in order to trigger processes and workflows in those systems, based on events that are being tracked within the platform. A simple Expression Rule can react to a condition by placing a message in the integration queue as follows:

IF Alarm.severity > 100 THEN PublishObject()

A message is then placed in the queue describing the platform event, and another business system may subscribe to these messages and react accordingly.

Web Services

Web Services are at the heart of a cloud-based API stack, and the Axeda Platform provides both comprehensiveness and flexibility. The platform exposes Web Service operations for all platform data and configuration metadata. As a result, you can determine the status of an asset, query historical data for assets, search for assets based on their current location, or even configure expression rules and other configuration settings, all through a modern Web Service API using standard SOAP and REST communication protocols.

Scripto

Web Service APIs simplify system integration in a loosely coupled, secure way, and we have a commitment to offering a comprehensive collection of standard APIs into the Axeda Platform. But we can't have an API that suits every need exactly. You may want data in a particular format, such as CSV, JSON, or XML. Or some logic should be applied and it's inefficient to query lots of data to look for the bit you want. Wouldn't you rather make the service on the other side do exactly what you want, and give it to you in exactly the format you need? That is Scripto - the bridge between the power and efficiency of the Axeda Custom Object scripting engine and a Web Service client. Using Scripto, you can code a script in the Groovy language, using the Axeda SDK and potentially mashing up results from other systems, all within the platform, and expose your script to an external consumer via a single REST-based Web Service operation. You create your own set of Web Services that do exactly what you want. This powerful combination lets you simplify your Web Service client code and gives you easy access to, and maintainability of, the scripted logic.

Present

Rich Internet Applications are a great way to build engaging, information-rich user experiences. By exposing platform data and functions via Web Services and Scripto, you can use your tool of choice for developing your front end. In fact, if you choose a technology that doesn't require a server-side rendering engine, such as HTML + AJAX, Adobe Flash, or Microsoft Silverlight, then you can upload your application UI files to the Axeda Platform and let the platform serve your URL. Additional references for using RIAs on the Axeda Platform: Axeda Sample Application: Populating A Web Page with Data Items; Extending the Axeda Platform UI - Custom Tabs and Modules.

Far Front-Ends and Other Systems

If a client-only RIA user interface is not an option for you, you can still use Web Services to integrate platform information into other server-side presentation technologies, such as a Microsoft SharePoint portal or a Java EE Web Application. You can also get lightning-fast updates for your users with Axeda Machine Streams, which uses ActiveMQ JMS technology to stream data in real time from the Axeda Platform to your custom data warehouses. While your users are viewing a drill-down on a set of assets, they can receive asynchronous notifications about real-time data changes, without the need to constantly poll Web Services.

Summary

Building connected products and applications for them is an incredibly rewarding job. Axeda provides a set of core services and a framework for bringing your application to market.
Often, to set up our environment securely, we assign entity type permissions, which is much easier than remembering to assign them to every single ThingShape, ThingTemplate, Thing, etc. However, did you know that these security settings are only exported when doing an Export to ThingworxStorage? So you either must maintain a list of these settings and re-apply them when starting on a new environment, or:

1. Set up your groups (and users, although hopefully all permissions you set up are assigned to groups as a best practice).
2. Set up your entity type permissions.
3. Create an export using Export to ThingworxStorage and export everything.

Now you have an import ready any time you need to deploy ThingWorx anew. NOTE: obviously this means you need to maintain that export any time changes are made to those permissions; unfortunately that also means another export of ALL, which can be less desirable, since it can include test objects, unfinished items, etc. As such, one may have to maintain a clean local instance to keep a clean import/export.
I know most of us very happily use the Administrator account in ThingWorx; however, this is bad practice for development and even administration of the platform. Administrator is there by default and should be used to set up your initial users, which should include your actual platform administrator (with a strong password, of course). After that, change the Administrator password and remove the account from the Administrators group. I recommend this as a best practice even in your own development environments, but especially at runtime. Your very first steps would look like:

1. Install ThingWorx.
2. Log in as Administrator.
3. Set up the new platform administrator account.
4. Remove Administrator from the Administrators group.
5. Change the Administrator password.
This tutorial applies to Axeda version 6.1.6+, with sections applicable to 6.5+ (indicated below). Custom objects (or Groovy scripts) are the backbone of Axeda custom applications. As the developer, you decide what content type to give the data returned by the script.

What this tutorial covers: it provides examples of outputting data in different formats from Groovy scripts and consuming that data via Javascript using the jQuery framework. While Javascript and jQuery are preferred by the Axeda Innovation team, any front-end technology that can consume web services can be used to build applications on the Axeda Machine Cloud. On the same note, the formats discussed in this article are only a few examples of the wide variety of content types that Groovy scripts can output via Scripto. The content types available via Scripto are limited only by their portability over the TCP protocol, a qualification which includes all text-based and downloadable binary MIME types. As of July 2013, the UDP protocol (content streaming) is not supported by the current version of the Axeda Platform.

Formats discussed in this article:
1) JSON
2) XML
3) CSV
4) Binary content, with an emphasis on image files (6.5+)

For a tutorial on how to create custom objects that work with custom applications, check out Using Google Charts API with Scripto. For a discussion of what Scripto is and how it relates to Groovy scripts and Axeda web services, take a look at Unleashing the Power of the Axeda Platform via Scripto.

Serializing Data

JSON

For those building custom applications with Javascript, serializing data from scripts into JSON is a great choice, as the data is easily consumable as native Javascript objects. The net.sf.json JSON library is available in the SDK. It offers an easy way to serialize objects on the Platform, particularly v2 SDK objects.
import net.sf.json.JSONArray import static com.axeda.sdk.v2.dsl.Bridges.* def asset = assetBridge.findById(parameters.assetId) def response = JSONArray.fromObject(asset).toString(2) return ["Content-Type": "application/json", "Content": response] Outputs: [{     "buildVersion": "",     "condition": {         "detail": "",         "id": "3",         "label": "",         "restUrl": "",         "systemId": "3"     },     "customer": {         "detail": "",         "id": "2",         "label": "Default Organization",         "restUrl": "",         "systemId": "2"     },     "dateRegistered": {         "date": 11,         "day": 1,         "hours": 18,         "minutes": 7,         "month": 2,         "seconds": 49,         "time": 1363025269253,         "timezoneOffset": 0,         "year": 113     },     "description": "",     "detail": "testasset",     "details": null,     "gateways": [],     "id": "12345",     "label": "",     "location": {         "detail": "Default Organization",         "id": "2",         "label": "Default Location",         "restUrl": "",         "systemId": "2"     },     "model": {         "detail": "testmodel",         "id": "2345",         "label": "standalone",         "restUrl": "",         "systemId": "2345"     },     "name": "testasset",     "pingRate": 0,     "properties": [         {             "detail": "",             "id": "1",             "label": "TestProperty",             "name": "TestProperty",             "parentId": "2345",             "restUrl": "",             "systemId": "1",             "value": ""         },         {             "detail": "",             "id": "4",             "label": "TestProperty0",             "name": "TestProperty0",             "parentId": "2345",             "restUrl": "",             "systemId": "4",             "value": ""         },         {             "detail": "",             "id": "3",             "label": "TestProperty1",             "name": "TestProperty1",             "parentId": "2345",             "restUrl": "",             "systemId": "3",             "value": ""         },         {             "detail": "",             "id": "2",             "label": "TestProperty2",             "name": "TestProperty2",             "parentId": "2345",             "restUrl": "",             "systemId": "2",             "value": ""         }     ],     "restUrl": "",     "serialNumber": "testasset",     "sharedKey": [],     "systemId": "12345",     "timeZone": "GMT" }] This output can be traversed as Javascript object with its nodes accessible using dot (.) notation. For example, if you set the above JSON as the content of variable "json", you can access it in the following way, without any preliminary parsing needed: assert json[0].condition.id == 3 If you use jQuery, a Javascript library, feel free to make use of axeda.js, which contains utility functions to pass data to and from the Axeda Platform.  One function in particular is used in most example custom applications found on this site, the axeda.callScripto function.  It relies on the jQuery ajax function to make the underlying call. /**   * makes a call to the enterprise platform services with the name of a script and passes   * the script any parameters provided.   
*   * default is GET if the method is unknown   *   * Notes: Added POST semantics - plombardi @ 2011-09-07   *   * original author: Zack Klink & Philip Lombardi   * added on: 2011/7/23   */ // options - localstoreoff: "yes" for no local storage, contentType: "application/json; charset=utf-8", axeda.callScripto = function (method, scriptName, scriptParams, attempts, callback, options) {   var reqUrl = axeda.host + SERVICES_PATH + 'Scripto/execute/' + scriptName + '?sessionid=' + SESSION_ID   var contentType = options.contentType ? options.contentType : "application/json; charset=utf-8"   var local   var daystring = keygen()   if (options.localstoreoff == null) {   if (localStorage) {   local = localStorage.getItem(scriptName + JSON.stringify(scriptParams))   }   if (local != null && local == daystring) {   return dfdgen(reqUrl + JSON.stringify(scriptParams))   } else {   localStorage.setItem(scriptName + JSON.stringify(scriptParams), daystring)   }   }   return $.ajax({   type: method,   url: reqUrl,   data: scriptParams,   contentType: contentType,   dataType: "text",   error: function () {   if (attempts) {   expiredSessionLogin();   setTimeout(function () {   axeda.callScripto('POST', scriptName, scriptParams, attempts - 1, callback, options)   }, 1500);   }   },   success: function (data) {   if (options.localstoreoff == null) {   localStorage.setItem(reqUrl + JSON.stringify(scriptParams), JSON.stringify([data]))   }   if (contentType.match("json")) {   callback(unwrapResponse(data))   } else {   callback(data)   }   }   }) }; Using the axeda.callScripto function: var postToPlatform = function (scriptname, callback, map) {         var options = {             localstoreoff: "yes",             contentType: "application/json; charset=utf-8"         }        // Javascript object "map" has to be stringified to post to Axeda Platform         axeda.callScripto("POST", scriptname, JSON.stringify(map), 2, function (json) {             // callback gets the JSON object output by the Groovy script             callback(json)         }, options)     } The JSON object is discussed in more detail here. Back to Top XML XML is the preferred language of integration with external applications and services. Groovy provides utilities to make XML serialization a trivial exercise. import groovy.xml.MarkupBuilder import static com.axeda.sdk.v2.dsl.Bridges.* def writer = new StringWriter() def xml = new MarkupBuilder(writer) def findAssetResult = assetBridge.find(new AssetCriteria(modelNumber: parameters.modelName)) // find operation returns AssetReference class. Contains asset id only def assets = findAssetResult.assets      xml.Response() {   Assets() {   assets.each { AssetReference assetRef ->   def asset = assetBridge.findById(assetRef.id)               // asset contains a ModelReference object instead of a Model.  ModelReference has a detail property, not a name property   Asset() {   id(asset.id)   name(asset.name)   serial_number(asset.serialNumber)   model_id(asset.model.id)   model_name(asset.model.detail)   }   }   }   } return ['Content-Type': 'text/xml', 'Content': writer.toString()] Output: <Assets>   <Asset>   <id>98765</id>   <name>testasset</name>   <serial_number>testasset</serial_number>   <model_id>4321</model_id>   <model_name>testmodel</model_name>   </Asset> </Assets Although XML is not a native Javascript object as is JSON, Javascript libraries and utilities are available for parsing XML into an object traversable in Javascript. 
For more information on parsing XML in Javascript, see W3 Schools XML Parser. For those using jQuery, check out the jQuery.parseXML function. Back to Top

Outputting Files (Binary content types)

CSV

CSV comes in handy for spreadsheet generation, as it is compatible with Microsoft Excel. The following example is suitable for Axeda version 6.1.6+ as it makes use of the Data Accumulator feature to create a downloadable file.

import com.axeda.drm.sdk.device.ModelFinder
import com.axeda.drm.sdk.Context
import com.axeda.drm.sdk.scripto.Request
import com.axeda.common.sdk.id.Identifier
import com.axeda.drm.sdk.device.Model
import com.axeda.drm.sdk.device.DataItem
import com.axeda.drm.sdk.device.DataItemValue
import com.axeda.drm.sdk.data.DataValue
import com.axeda.drm.sdk.device.DeviceFinder
import com.axeda.drm.sdk.device.Device
import com.axeda.drm.sdk.mobilelocation.MobileLocation
import com.axeda.drm.sdk.data.DataValueList
import com.axeda.drm.sdk.data.CurrentDataFinder
import com.axeda.drm.sdk.mobilelocation.CurrentMobileLocationFinder
import groovy.xml.MarkupBuilder
import com.axeda.platform.sdk.v1.services.ServiceFactory

/*
 * ExportObjectToCSV.groovy
 *
 * Creates a csv file from either all assets of a model or from a single asset, so that they can be imported back into another system.
 *
 * @param model   - (REQ):Str model name.
 * @param serial  - (OPT):Str serial number.
 *
 * @author Sara Streeter <sstreeter@axeda.com>
 */

def writer = new StringWriter()
def xml = new MarkupBuilder(writer)
InputStream is

try {
    Context CONTEXT = Context.getSDKContext()
    ModelFinder modelFinder = new ModelFinder(CONTEXT)
    modelFinder.setName(Request.parameters.model)
    Model model = modelFinder.find()
    DeviceFinder deviceFinder = new DeviceFinder(CONTEXT)
    deviceFinder.setModel(model)
    List<Device> devices = []
    def exportkey = model.name
    Device founddevice
    if (Request.parameters.serial){
        deviceFinder.setSerialNumber(Request.parameters.serial)
        founddevice = deviceFinder.find()
        logger.info(founddevice?.serialNumber)
        if (founddevice != null){
            devices.add(founddevice)
        }
        else throw new Exception("Device ${Request.parameters.serial} cannot be found.")
        exportkey += "${founddevice.serialNumber}"
    } else {
        devices = deviceFinder.findAll()
        exportkey += "all"
    }

    // use a Data Accumulator to store the information
    def dataStoreIdentifier = "FILE-CSV-export_____" + exportkey
    def daSvc = new ServiceFactory().dataAccumulatorService
    if (daSvc.doesAccumulationExist(dataStoreIdentifier, devices[0].id.value)) {
        daSvc.deleteAccumulation(dataStoreIdentifier, devices[0].id.value)
    }

    List<DataItem> dataItemList = devices[0].model.dataItems
    def firstrow = [ "model", "serial", "devicename", "conditionname", "currentlat", "currentlng" ]
    def tempfirstrow = dataItemList.inject([]){ list, dataItem ->
        list << dataItem.name;
        list
    }
    firstrow += tempfirstrow
    firstrow = firstrow.join(',')
    firstrow += '\n'
    daSvc.writeChunk(dataStoreIdentifier, devices[0].id.value, firstrow);

    CurrentMobileLocationFinder currentMobileLocationFinder = new CurrentMobileLocationFinder(CONTEXT)
    devices.each{ device ->
                CurrentDataFinder currentDataFinder = new CurrentDataFinder(CONTEXT, device)
                currentMobileLocationFinder.deviceId = device.id.value
                MobileLocation mobileLocation = currentMobileLocationFinder.find()
                def lat = 0
                def lng = 0
          if (mobileLocation){                     lat = mobileLocation?.lat                     lng = mobileLocation?.lng                 }                 def row =                 [                     device.model.name,                     device.serialNumber,                     device.name,                     device.condition?.name,                     lat,                     lng                     ]                                     def temprow = dataItemList.inject([]){ subList,dataItem ->                         DataValue value = currentDataFinder.find(dataItem.name)                                             def val = "NULL"                         val = value?.asString() != "?" ? value?.asString() : val                         subList <<  val                         subList                     }                 row += temprow                 row = row.join(',')                 row += '\n'                 daSvc.writeChunk(dataStoreIdentifier, devices[0].id.value, row);             }    // stream the data accumulator to create the file is = daSvc.streamAccumulation(dataStoreIdentifier, devices[0].id.value) def disposition = 'attachment; filename=CSVFile' + exportkey + '.csv' return ['Content-Type': 'text/csv', 'Content-Disposition':disposition, 'Content': is.text] } catch (def ex) {    xml.Response() {        Fault {            Code('Groovy Exception')            Message(ex.getMessage())            StringWriter sw = new StringWriter();            PrintWriter pw = new PrintWriter(sw);            ex.printStackTrace(pw);            Detail(sw.toString())        }    } logger.info(writer.toString()) return ['Content-Type': 'text/xml', 'Content': writer.toString()] } return ['Content-Type': 'text/xml', 'Content': writer.toString()] Back to Top Image Files (6.5+) The FileStore in Axeda version 6.5+ allows fine-grained control of uploaded and downloaded files. As Groovy scripts can return binary data via Scripto, this allows use cases such as embedding a Groovy script url as the source for an image. The following example uses the FileStore API to create an Image out of a valid image file, scales it to a smaller size and stores this smaller file. import com.axeda.drm.sdk.Context import com.axeda.drm.sdk.data.* import com.axeda.drm.sdk.device.* import com.axeda.drm.sdk.mobilelocation.CurrentMobileLocationFinder import com.axeda.drm.sdk.mobilelocation.MobileLocation import com.axeda.drm.sdk.mobilelocation.MobileLocationFinder import com.axeda.sdk.v2.bridge.FileInfoBridge import static com.axeda.sdk.v2.dsl.Bridges.* import com.axeda.services.v2.ExecutionResult import com.axeda.services.v2.FileInfo import com.axeda.services.v2.FileInfoReference import com.axeda.services.v2.FileUploadSession import net.sf.json.JSONObject import groovy.json.JsonBuilder import net.sf.json.JSONArray import com.axeda.drm.sdk.scripto.Request import org.apache.commons.io.IOUtils import org.apache.commons.lang.exception.ExceptionUtils import com.axeda.common.sdk.id.Identifier import groovy.json.* import javax.imageio.ImageIO; import java.awt.RenderingHints import java.awt.image.BufferedImage import java.io.ByteArrayOutputStream; import java.awt.* import java.awt.geom.* import javax.imageio.* import java.awt.image.* import java.awt.Graphics2D import javax.imageio.stream.ImageInputStream /*    Image-specific FileStore entry point to post and store files */ def contentType = "application/json" final def serviceName = "StoreScaledImage" // Create a JSON Builder def json = new JsonBuilder() // Global try/catch. 
Gotta have it, you never know when your code will be exceptional! try {       Context CONTEXT = Context.getSDKContext()     def filesList = []     def datestring = new Date().time     InputStream inputStream = Request.inputStream       def reqbody = Request.body     // all of our Request Parameters are available here     def params = Request.parameters     def filename = Request?.headers?.'Content-Disposition' ?     Request?.headers?.'Content-Disposition' : "file___" + datestring + ".txt"     def filelabel = Request.parameters.filelabel ?: filename     def description = Request.parameters.description ?: filename     def contType = Request.headers?."content-type" ?: "image/jpeg"     def tag = Request.parameters.tag ?: "cappimg"     def encoded = Request.parameters.encoded?.toBoolean()   def dimlimit = params.dimlimit ? params.dimlimit : 280     // host is available in the headers when the script is called with AJAX     def domain = Request.headers?.host     byte[] bytes = IOUtils.toByteArray(inputStream);     def fileext = filename.substring(filename.indexOf(".") + 1,filename.size())     def outerMap = [:]     // check that file extension matches an image type     if (fileext ==~ /([^\s]+(\.(?i)(jpg|jpeg|png|gif|bmp))$)/){         if (inputStream.available() > 0) {                 def scaledImg                               try {                     def img = ImageIO.read(inputStream)                     def width = img?.width                              def height = img?.height                     def ratio = 1.0                     def newBytes                                       if (img){                                               if (width > dimlimit || height > dimlimit){                             // shrink by the smaller side so it can still be over the limit                             def dimtochange = width > height ? 
height : width                             ratio = dimlimit / dimtochange                                                       width = Math.floor(width * ratio).toInteger()                             height = Math.floor(height * ratio).toInteger()                         }                                             newBytes = doScale(img, width, height, ratio, fileext)                      if (newBytes?.size() > 0){                         bytes = newBytes                      }                     }                 }                 catch(Exception e){                     logger.info(e.localizedMessage)                                   }                                           outerMap.byteCount = bytes.size()                    FileInfoBridge fib = fileInfoBridge                 FileInfo myImageFile = new FileInfo(filelabel: filelabel,                                                     filename: filename,                                                     filesize: bytes?.size(),                                                     description: description,                                                     tags: tag                                                     )                    myImageFile.contentType = contType                    FileUploadSession fus = new FileUploadSession();                 fus.files = [myImageFile]                    ExecutionResult fer = fileUploadSessionBridge.create(fus);                 myImageFile.sessionId = fer.succeeded.getAt(0)?.id                               ExecutionResult fileInfoResult = fib.create(myImageFile)                               if (fileInfoResult.successful) {                     outerMap.fileInfoSave = "File Info Saved"                     outerMap.sessionId = "File Upload SessionID: "+fer.succeeded.getAt(0)?.id                     outerMap.fileInfoId = "FileInfo ID: "+fileInfoResult?.succeeded.getAt(0)?.id                     ExecutionResult er = fib.saveOrUpdate(fileInfoResult.succeeded.getAt(0).id,new ByteArrayInputStream(bytes))                     def fileInfoId = fileInfoResult?.succeeded.getAt(0)?.id                     String url = "${domain}/services/v1/rest/Scripto/execute/DownloadFile?fileId=${fileInfoId}"                     if (er.successful) {                         outerMap.url = url                     } else {                         outerMap.save = "false"                         logger.info(logFailure(er,outerMap))                     }                 } else {                     logger.info(logFailure(fileInfoResult, outerMap))                 }                } else {                 outerMap.bytesAvail = "No bytes found to upload"             }         } else {             outerMap.imagetype = "Extension $fileext is not a supported image file type."         }     filesList << outerMap     // return the JSONBuilder contents     // we specify the content type, and any object as the return (even an outputstream!)     return ["Content-Type": contentType,"Content":JSONArray.fromObject(filesList).toString(2)]     // alternately you may just want to serial an Object as JSON:     // return ["Content-Type": contentType,"Content":JSONArray.fromObject(invertedMessages).toString(2)] } catch (Exception e) {     // I knew you were exceptional!     // we'll capture the output of the stack trace and return it in JSON     json.Exception(             description: "Execution Failed!!! 
An Exception was caught...",             stack: ExceptionUtils.getFullStackTrace(e)     )     // return the output     return ["Content-Type": contentType, "Content": json.toPrettyString()] } def doScale(image, width, height, ratio, fileext){     if (image){     ByteArrayOutputStream baos = new ByteArrayOutputStream();     def bytes      def scaledImg = new BufferedImage( width, height, BufferedImage.TYPE_INT_RGB )        Graphics2D g = scaledImg.createGraphics();         g.setRenderingHint(RenderingHints.KEY_INTERPOLATION, RenderingHints.VALUE_INTERPOLATION_BILINEAR);         g.scale(ratio,ratio)         g.drawImage(image, null, null);         g.dispose();              ImageIO.write( scaledImg, fileext, baos )       baos.flush()       bytes = baos.toByteArray()       baos.close()     }     else {         logger.info("image to be scaled is null")         return false     }   return bytes   } private void logFailure(ExecutionResult fileInfoResult, LinkedHashMap outerMap) {     outerMap.message = fileInfoResult.failures.getAt(0)?.message     outerMap.source = fileInfoResult.failures.getAt(0)?.sourceOfFailure     outerMap.details = fileInfoResult.failures.getAt(0)?.details?.toString()     outerMap.fileInfoSave = "false" } The next example makes use of the jQuery framework to upload an image to this script via an http POST. Note: This snippet is available as a jsFiddle at http://jsfiddle.net/LrxWF/18/ With HTML5 button: <input type="file" id="fileinput" value="Upload" /> var PLATFORM_HOST = document.URL.split('/apps/')[0]; // this is how you would retrieve the host on an Axeda instance var SESSION_ID = null // usually retrieved from login function included below /*** * Depends on jQuery 1.7+ and HTML5, assumes an HTML5 element such as the following: * <input type="file" id="fileinput" value="Upload" /> * **/ $("#fileinput").off("click.filein").on("click.filein", function () {     fileUpload() }) var fileUpload = function () {     $("#fileinput").off('change.fileinput')     $("#fileinput").on('change.fileinput', function (event) {         if (this.files && this.files.length > 0) {             handleFiles("http://" + PLATFORM_HOST, this.files)         }     }) } var handleFiles = function (host, files) {     $.each(files, function (index, file) {         var formData = new FormData();         var filename = file.name         formData.append(filename, file)         var url = host + '/services/v1/rest/Scripto/execute/StoreScaledImage?filelabel=' + filename + "&tag=myimg"         url = setSessionId(url)         jQuery.ajax(url, {             beforeSend: function (xhr) {                 xhr.setRequestHeader('Content-Disposition', filename);             },             cache: false,             cache: false,             processData: false,             type: 'POST',             contentType: false,             data: formData,             success: function (json) {                 refreshPage(json)                 console.log(json)             }         });     }) } var setSessionId = function (url) {     // you would already have this from logging in     return url + "&sessionid=" + SESSION_ID } var refreshPage = function (json) {     // here you would refresh your page with the returned JSON     return } /*** *  The following functions are not used in this demonstration, however they are necessary for a complete app and are found in axeda.js  http://gist.github.com/axeda/4340789 ***/     function login(username, password, success, failure) {         var reqUrl = host + SERVICES_PATH + 'Auth/login';     
    localStorage.clear()         return $.get(reqUrl, {             'principal.username': username,                 'password': password         }, function (xml) {             var sessionId = $(xml).find("ns1\\:sessionId, sessionId").text()             // var sessionId = $(xml).find("[nodeName='ns1:sessionId']").text(); - no longer works past jquery 1.7             if (sessionId) {                 // set the username and password vars for future logins.                         storeSession(sessionId);                 success(SESSION_ID); // return the freshly stored contents of SESSION_ID             } else {                 failure($(xml).find("faultstring").text());             }         }).error(function () {             $('#loginerror').html('Login Failed, please try again')         });     }; function storeSession(sessionId) {     var date = new Date();     date.setTime(date.getTime() + SESSION_EXPIRATION);     SESSION_ID = sessionId     document.cookie = APP_NAME + '_sessionId=' + SESSION_ID + '; expires=' + date.toGMTString() + '; path=/';     return true; }; The return JSON includes a URL that you can use as the source for images: [{   "byteCount": 14863,   "fileInfoSave": "File Info Saved",   "sessionId": "File Upload SessionID: 01234",   "fileInfoId": "FileInfo ID: 12345",   "url": "http://yourdomain.axeda.com/services/v1/rest/Scripto/execute/DownloadFile?fileId=12345" }] The DownloadFile Custom Object looks like the following: import static com.axeda.sdk.v2.dsl.Bridges.* import javax.activation.MimetypesFileTypeMap import com.axeda.services.v2.* import com.axeda.sdk.v2.exception.* import com.axeda.drm.sdk.scripto.Request def knowntypes = [          [png: 'image/png']         ,[gif: 'image/gif']         ,[jpg: 'image/jpeg']     ] def params = Request.parameters.size() > 0 ? Request.parameters : parameters def response = fileInfoBridge.getFileData(params.fileId) def fileinfo = fileInfoBridge.findById(params.fileId) def type = fileinfo.filename.substring(fileinfo.filename.indexOf('.') + 1,fileinfo.filename.size()) type = returnType(knowntypes, type) def contentType = params.type ?: (type ?: 'image/jpg') return ['Content': response, 'Content-Disposition': contentType, 'Content-Type':contentType] def returnType(knowntypes, ext){     return knowntypes.find{ it.containsKey(ext) }?."$ext" } Make sure to append a valid session id to the end of the URL when using it as the source for an image. The techniques discussed above can be applied to any type of binary file output with consideration for the type of file being processed. A Word on Streaming Content streaming such as streaming of video or audio files over UDP is not currently supported by the Axeda Platform.
View full tip
Data is NOT free. It is easy to overlook the cost of data collection, but all data incurs some cost when it is collected. Data collection in and of itself does not bring business value. If you don't know why you're collecting the data, then you probably won't use it once you have it.

For a wireless product, the cost is felt in the bytes transferred, which makes for an expensive solution but a happy telco. Even for wired installations, data transfer isn't free. Imagine a supermarket with 20 checkout lanes, only a 56K DSL line, and a connection shared with the credit card terminals; it is important to upload only the necessary data during business hours. For the end user, too much data leads to information clutter: too much information increases the time necessary to locate and access critical data.

All enterprise applications have some associated "Infrastructure Tax", and the Axeda Platform is no exception. This is the cost of maintaining the existing infrastructure, as well as of increasing capacity through the addition of new systems infrastructure. It includes:

The cost of the physical hardware
The additional software licenses
The cost of the network bandwidth
The cost of IT staff to maintain the servers
The cost of attached storage

Optimizing your data profile will maximize the performance of your existing infrastructure. Scaling decisions should be based on load, because 50,000 well-defined Assets can yield less data than 2,000 extremely "chatty" Assets.

Types of Data

To develop your data profile, first identify the types of data you're collecting:

"Actionable Data": used to drive business logic. This is your most crucial data, and tends to be "real-time"
"Informational Data": changes at a very low rate, and represents properties of your assets as opposed to status
"Historical Data": sometimes you need to step back to appreciate a work of art. Historical data is best viewed with a wide lens to identify trends
"Payload Data": data which is being packaged and shipped to an external system

Actionable Data

Actionable Data controls the flow of business logic and has three common attributes:

It tends to represent the status of the Asset
It is typically the highest-priority data you will receive
It usually has more frequent occurrences than other data

Informational Data

Informational Data is typically system or software data, examples of which include:

OS version
Firmware information
Geographical region

Historical Data

Historical Data represents the results of long-term operations and is typically used for operational review of trends. It:

May be sourced from Data Items, File uploads or Web Services operations
May feed the Axeda integrated business intelligence solution, or internal customer BI systems

Payload Data

Payload Data travels through the Cloud to your system of record. In this case, the Axeda Platform is a key actor in your system, but its presence is not directly visible to the end user.

Data Types Key Points

Understanding the nature of your data helps to inform your data collection strategy. The four primary attributes are the following:

Frequency
Quantity
Storage
Format

Knowing what to store, what to process and what to pass through for storage is the first key to optimizing your data profile. The "everything first" approach is an easy choice, but a tough one from which to realize value.
A "bottom up" or use-case driven approach will add data incrementally, and will reveal the subset of data you actually need to be collecting. Knowing your target audience for the data is the next step. A best practice for understanding who is trying to innovate, and how they are looking to do it, begins with questions such as the following:

Is Marketing looking for trends to highlight?
Is R&D looking for areas to improve the product?
Is the Service team looking to proactively troubleshoot assets in the field?
Is Sales looking to sell more consumables?
Is Finance trying to resolve a billing dispute?

Answers to these questions will help determine which data contributes to solving real business problems. Most Service technicians only access a handful of pieces of information about an Asset while troubleshooting, regardless of how many they have access to. It's important to close the information loop by finding out which data is actually being used.

In addition to understanding the correct target audience and their goals, milestone events are also opportunities to revisit your strategy, specifically times like:

New Model rollouts
Migration to the Cloud
New program launches

Once your data profile has been established, the next phase of optimization is to plan the way the data will be received.

Strategies

Data Item vs. File Upload

A decision should be made as to the best way to transfer data to the Axeda Platform, whether that is data items, events, alarms or file transfers. Here's a Best Practice approach that's fairly universal:

Choose a Data Item if: (a) you are sending Actionable Data, or (b) you are sending discrete Informational Data
Choose a File Upload if: (a) you are sending bulk data which does not need to trigger an immediate response, or (b) you intend to forward the data to an external system

Agent-Side Business Logic

Keep in mind that the Axeda Platform allows business logic to be implemented before any data is transmitted. The Agent can be configured to determine when data needs to be sent via several mechanisms:

Scripts provide the ability to trigger on-demand uploads of data, either via a human UI interaction or an automated process
The "Black Box" configuration allows for a rolling sample window, and will only upload the data in the window based on a configured condition

Agent Rules

Agent Rules allow the Agent to monitor internal data values to decide when to send data to the Cloud. Data can be continuously sampled and compared against configured thresholds to determine when a value worthy of transmission is encountered. This provides a very powerful mechanism to filter outbound data. For example, an Agent might monitor a data flow and transmit only when it reaches an Absolute-high value of 1200; a conceptual sketch of this filtering logic appears at the end of this article.

Axeda provides a versatile platform for managing the flow of data through your Asset ecosystem. It helps to cultivate an awareness not only of what the data set is, but of what it represents and to whom it has value. While data is cheap, the hidden costs of data transmission make it worthwhile to do your "data profiling homework", or risk paying a high price in the longer term.
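The Agent Rule itself is configured on the Agent rather than written as server-side code, but the filtering logic it expresses can be sketched in a few lines of Groovy. The snippet below is purely conceptual: the sample values and the 1200 threshold are hypothetical, and this is not the Agent Rule configuration syntax, only an illustration of forwarding values that reach an Absolute-high threshold.

// Conceptual illustration only -- not Agent Rule configuration syntax
def ABSOLUTE_HIGH = 1200
def samples = [850, 990, 1150, 1230, 1310, 1180]   // hypothetical raw readings sampled on the Agent

// Only readings at or above the threshold are worth transmitting to the Platform
def toTransmit = samples.findAll { it >= ABSOLUTE_HIGH }

println "Sampled ${samples.size()} values; transmitting ${toTransmit.size()}: ${toTransmit}"

Filtering in this way means only two of the six sampled values would ever leave the Agent, which is exactly the kind of reduction that keeps bandwidth and storage costs in check.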
View full tip
The Axeda Platform has a mature data model that is important to understand when planning to build applications. First, this tip introduces the existing objects and how they relate to each other.

Axeda Agents can communicate:

Model – the definition of a type of asset. The model consists of a set of dataitems (its inputs and outputs) and alarms. The platform applies logic to a model, so as the number of assets grows, the system remains scalable in terms of management.
Asset – sometimes called a Device. An asset has an identifier called a Serial Number, which must be unique within its model. Agents report information in terms of the asset, and logic is applied to data and events about that asset.
Dataitem – a named reading, such as a sensor or computer value. Dataitems are timestamped values in a sequence: for example, hourly temperatures, odometer readings, or daily usage statistics. The number of named dataitems is unlimited. Dataitems can be written as well as read, so a value can be sent to an "output". A dataitem can be a Digital (boolean), an Analog (real value) or a String.
Mobile Location – a lat/long pair, typically read from GPS. This is used to map assets as they move.
Alarms – have a name, severity, description, active flag, timestamp, and an optional embedded dataitem and value. Alarms sent from an agent may result from logic that detects a condition, or from traps, error codes, etc. An alarm indicates something that's wrong.
Files – arbitrary files can be uploaded from an agent. Files are sent with a Hint string, metadata that allows rules to process the file based on something other than the file extension. Files are often uploaded when an alarm has been raised, or on demand from a user or rule.

Axeda agents have flexible ways to send and receive this information based on time, data changes, user request, etc. The Adaptive Machine Messaging Protocol (AMMP) allows anyone to make an agent that interacts with the Axeda Platform using the same data model.

The Axeda Platform is asset-centric. An asset is an instance of a Model, and each asset is identified by its Model and Serial Number pair. Associated with an asset are:

Organization – typically the customer.
Location – the home of the asset (a street address). A location is in a Region.
Contacts – people who have a relationship to the asset. Contacts have a role, such as Owner or Service Agent.
Asset Groups – assets are members of groups, and groups can be used to grant privileges, for navigation, or to apply commands.
Properties – additional named attributes of an asset. Properties do not have a time-series history like a dataitem. The value of a property may be used to dynamically group assets.
Condition – the current condition may be good, warning, error, or needs maintenance, based on the existence of alarms, for example.
And, of course, dataitems, alarms and files.

Information is processed and organized in the context of an asset, but the processing is managed for models. The only scalable way to manage a lot of assets is to apply rules by kind of asset, not to individual assets. Rules apply logic to data as it happens: when a new dataitem is reported, a rule may check it against a threshold; when an alarm is created, a rule may create a trouble ticket or notify the user. All types of rules in the platform – Expression Rules, State Machines, and Threshold Rules – are event-based. Rules apply to models, or sometimes to all assets (such as a standard way of notifying on Alarms). The only exceptions are rules that apply to user logins and rules on a system timer.
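To make these objects concrete, here is a minimal Groovy sketch using the same v1 SDK classes that appear in the CSV export example earlier in these tips. It looks up an asset by model and serial number and reads the current value of one of its dataitems. The model name, serial number and dataitem name are hypothetical placeholders, and the sketch assumes it runs as a Custom Object on the Platform, where the SDK context and logger are available.

import com.axeda.drm.sdk.Context
import com.axeda.drm.sdk.device.ModelFinder
import com.axeda.drm.sdk.device.Model
import com.axeda.drm.sdk.device.DeviceFinder
import com.axeda.drm.sdk.device.Device
import com.axeda.drm.sdk.data.CurrentDataFinder
import com.axeda.drm.sdk.data.DataValue

// Placeholder identifiers -- substitute a real model, serial number and dataitem name
Context ctx = Context.getSDKContext()

ModelFinder modelFinder = new ModelFinder(ctx)
modelFinder.setName("testmodel")
Model model = modelFinder.find()

DeviceFinder deviceFinder = new DeviceFinder(ctx)
deviceFinder.setModel(model)
deviceFinder.setSerialNumber("testasset")
Device device = deviceFinder.find()

// Read the most recent value reported for a named dataitem on this asset
CurrentDataFinder currentDataFinder = new CurrentDataFinder(ctx, device)
DataValue value = currentDataFinder.find("Temperature")

return ['Content-Type': 'text/plain',
        'Content': "${device.serialNumber} Temperature = ${value?.asString()}"]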
Software packages are another entity in the Platform. Packages are used to distribute files, software, patches, etc., and to script some commands around their delivery. Packages often upgrade or patch software, or load a new option or help file. Packages are defined for a model, and the deployment may be automatic or manual, to one asset or to many.

User logins are members of user groups. User groups have both privileges (what they can do) and visibility (which assets they can see). User group visibility allows the group to access assets in an Asset Group, a Model or a Region.

How do solutions take advantage of this? Dataitems can be configured to store no data, the current value, or history. History is needed if you want to see the temperature plot over the last day. Many times, the current value is all that's needed to process rules and see the state of an asset. The option not to store a dataitem makes sense if the dataitem is only used to run a rule, or if it will just be sent to another application. An agent can send a dataitem string to the server, and the server puts the string on the Message Queue to deliver to another application; in this pass-through mode, the dataitem doesn't need to be stored at all. A similar situation is when a string dataitem is parsed by a rule calling a Groovy script. The script can parse the string (which may be XML or part of a log file) and use the SDK to take some action.

Alarms are almost always used to notify people that they should do something. Alarms in Axeda have a lifecycle that corresponds to how people interact with them. An alarm begins its life when it is created. From that point, the alarm can be:

Acknowledged – someone has seen it.
Escalated – the alarm condition hasn't been fixed for some time.
Closed – the end of the Alarm's life.
Suppressed – the alarm is logged in the history, but users don't see it. Set an alarm to suppressed when it's just an annoyance and doesn't require any action.
Disabled – occurrences of this alarm are thrown away; rules don't even see them.

The Suppressed and Disabled modes are applied to an alarm of a given name, because they affect all future alarms by that name.

Files are uploaded for a few reasons. Log files are typically uploaded so a service tech can diagnose a problem. Data files can be uploaded so a script or external system can process the file and take appropriate action. This can be another way of sending information that doesn't fit in a dataitem.

The configuration of an asset – both hardware and software – is called Inventory, and the inventory of assets is important in diagnostics, planning spare parts, knowing which patch to apply, and many more tasks.

Extended Objects are attributes that can be added to the objects described here, or complete objects that live on their own. Your application can read and write these objects or attributes, and query them. The use is up to you.

Resources

You can find more information on the architecture of the platform in the Introduction to the Axeda Platform. The Platform SDK and Web Services expose most of these objects for configuration as well as runtime. That means an application can provision models and assets, create rules and apply them to models, then monitor the behavior of assets, all through Web Services.
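As a small illustration of that last point, the sketch below uses the v2 SDK bridges shown in the Scripto examples above to monitor the assets of one model: it finds the assets, reads a few of the fields described in this tip, and returns them as JSON. The model number "testmodel" is a placeholder; as in the XML example above, the sketch assumes it runs as a Custom Object where the bridges and criteria classes are available.

import net.sf.json.JSONArray
import static com.axeda.sdk.v2.dsl.Bridges.*

// Placeholder model number -- substitute one of your own models
def result = assetBridge.find(new AssetCriteria(modelNumber: "testmodel"))

def summaries = result.assets.collect { ref ->
    // find() returns references; findById() returns the full asset
    def asset = assetBridge.findById(ref.id)
    [ id          : asset.id,
      name        : asset.name,
      serialNumber: asset.serialNumber,
      model       : asset.model.detail,      // ModelReference exposes detail rather than name
      condition   : asset.condition?.label ]
}

return ['Content-Type': 'application/json',
        'Content': JSONArray.fromObject(summaries).toString(2)]

The same pattern extends to provisioning: the bridges also offer create and update operations, so an external application could register new assets and then poll them with a script like this one.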
View full tip