First we need to understand the following terms:

Quantitative variable: A quantitative variable is naturally measured as a number for which meaningful arithmetic operations make sense. Examples: height, age, crop yield, GPA, salary, temperature, area, air pollution index (measured in parts per million), etc.

Categorical variable: Any variable that is not quantitative is categorical. Categorical variables take a value that is one of several possible categories. As naturally measured, categorical variables have no numerical meaning. Examples: hair color, gender, field of study, college attended, political affiliation, status of disease infection.

Ordinal variables: An ordinal variable is a categorical variable for which the possible values are ordered. Ordinal variables can be considered "in between" categorical and quantitative variables. Example: educational level might be categorized as
    1: Elementary school education
    2: High school graduate
    3: Some college
    4: College graduate
    5: Graduate degree

•    In this example (and for many ordinal variables), the quantitative differences between the categories are uneven, even though the differences between the labels are the same (e.g., the difference between 1 and 2 is four years, whereas the difference between 2 and 3 could be anything from part of a year to several years).
•    Thus it does not make sense to take a mean of the values.
•    Common mistake: treating ordinal variables like quantitative variables without thinking about whether this is appropriate in the particular situation at hand.

Ordinal regression: In statistics, ordinal regression (also called "ordinal classification") is a type of regression analysis used for predicting an ordinal variable. The Ordinal Regression procedure allows you to build models, generate predictions, and evaluate the importance of various predictor variables in cases where the dependent (target) variable is ordinal in nature.

Ordinal dependents and linear regression: When you are trying to predict ordinal responses, the usual linear regression models don't work very well. Those methods can work only by assuming that the outcome (dependent) variable is measured on an interval scale. Because this is not true for ordinal outcome variables, the simplifying assumptions on which linear regression relies are not satisfied, and thus the regression model may not accurately reflect the relationships in the data. In particular, linear regression is sensitive to the way you define categories of the target variable. With an ordinal variable, the important thing is the ordering of categories. So, if you collapse two adjacent categories into one larger category, you are making only a small change, and models built using the old and new categorizations should be very similar. Unfortunately, because linear regression is sensitive to the categorization used, a model built before merging categories could be quite different from one built after.

Below are some examples of ordered logistic regression:

Example 1: A marketing research firm wants to investigate what factors influence the size of soda (small, medium, large or extra large) that people order at a fast-food chain. These factors may include what type of sandwich is ordered (burger or chicken), whether or not fries are also ordered, and the age of the consumer. While the outcome variable, size of soda, is obviously ordered, the difference between the various sizes is not consistent. The difference between small and medium is 10 ounces, between medium and large 8, and between large and extra large 12.

Example 2: A researcher is interested in what factors influence medaling in Olympic swimming. Relevant predictors include training hours, diet, age, and popularity of swimming in the athlete's home country. The researcher believes that the distance between gold and silver is larger than the distance between silver and bronze.

Example 3: A study looks at factors that influence the decision of whether to apply to graduate school. College juniors are asked if they are unlikely, somewhat likely, or very likely to apply to graduate school. Hence, our outcome variable has three categories. Data on parental educational status, whether the undergraduate institution is public or private, and current GPA is also collected. The researchers have reason to believe that the "distances" between these three points are not equal. For example, the "distance" between "unlikely" and "somewhat likely" may be shorter than the distance between "somewhat likely" and "very likely".

How to use and get results from Ordinal Regression: click this link for the PDF. PDF source: http://www.norusis.com
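To make Example 3 concrete, here is a minimal sketch of fitting an ordered logistic (proportional odds) model in Python, assuming statsmodels 0.12+ and pandas are installed. The column names (apply, pared, public, gpa) and the toy data are hypothetical stand-ins for the graduate school study described above; a real study would need far more rows:

import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical toy data: ordinal outcome (likelihood of applying), parental
# education flag, public/private institution flag, and GPA.
df = pd.DataFrame({
    "apply": pd.Categorical(
        ["unlikely", "somewhat likely", "unlikely", "very likely",
         "somewhat likely", "unlikely", "very likely", "somewhat likely",
         "unlikely", "very likely", "somewhat likely", "unlikely"],
        categories=["unlikely", "somewhat likely", "very likely"],
        ordered=True),
    "pared":  [0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0],
    "public": [1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1],
    "gpa":    [2.8, 3.4, 2.5, 3.9, 3.1, 2.7, 3.8, 3.3, 2.6, 3.7, 3.0, 2.9],
})

# Ordered logit: the "distances" between categories are estimated as
# threshold parameters rather than assumed equal, unlike linear regression.
model = OrderedModel(df["apply"], df[["pared", "public", "gpa"]], distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())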
View full tip
Excited to announce ThingWorx 8.1 is officially available in our Support Portal. Please find the release notes below.

The following feature enhancements and bug fixes exist in ThingWorx 8.1.0:

Enhancements

Platform:
• Metrics Reporting is enabled by default, which allows usage, performance, and diagnostics data to be sent to a PTC server daily. For more information about this setting, see Platform Subsystem.
• You can add and configure Notifications in New Composer. For more information, see Adding Notifications.
• License files are now instance specific.
• Security for application keys has been enhanced. The default expiration date has been changed to 24 hours if it is not explicitly set.
• Additional capability has been added to New Composer.
• Improvements to anomaly detection accuracy have been added. As a result, data collection restart is no longer necessary after a long gap, and the H2 database that installs with the Training Microservice is stored in memory, not as a persisted file. For more information, see Anomaly Detection.
• You can now load configuration/project files from KEPServerEX instances.

Bug Fixes

Platform
• Fixed an issue where Tomcat failed to start when using SAP HANA. TW-22191
• Fixed an issue that was preventing ThingWorx from starting after the File Transfer Subsystem was disabled. TW-22177
• Fixed an issue where the change history of a Mashup was automatically updated even if no changes were made. TW-22114
• Fixed an issue that was preventing the ServiceInvokeCompleted event from working after performing an in-place upgrade. TW-21784
• Fixed an issue where alert notifications were not being sent to recipients after removing a recipient. TW-21585
• Fixed an issue where the Add button in the Services page did not display after creating a Data Table. TW-21518
• Fixed an issue with alert notifications for entities containing periods in the name. TW-21347
• Fixed an issue that was causing connected assets to display as disconnected in ThingWorx Utilities. UTL-4698
• Fixed an issue where data bind was lost after changing Read-Only settings to Read/Write in Composer. TW-23506
• Fixed an issue that was causing a MetricsReportingTask error after enabling ThingWorx Performance Advisor. TW-21141
• Fixed an issue with the ThingWorx authentication window when specifying the site while using Firefox and Internet Explorer. TW-21271

Mashup Builder
• Fixed an issue with the List widget that was causing incorrect tooltips to display. TW-24012, TW-23961, TW-23038
• Fixed an issue where Chrome was automatically retrying Remote Service calls when a timeout occurred. TW-23828
• Fixed an issue after restarting the ThingWorx web app where the Runtime or Composer's index.html was missing. TW-23984
• Fixed an issue where closing a modal dialog did not remove the disabled state from an element. TW-11217
• Fixed an issue when creating a popup with the Navigation widget. The tab sequence of the popup was dependent on the original mashup. TW-11151
• Fixed an issue with localized values of data columns when using the Data Filter widget. TW-11059

Extensions
• Fixed an issue where CSV parser extension import failed if the text file that was being imported did not include a new line character at the end of the last line of text. TW-21863
• Fixed an issue with the Advanced Grid widget where the Reset button was not localized. TW-21457
• Fixed an issue with the jQuery library used by the WebSocketTunnel_ExtensionPackage widget.
Note: If you are using the WebSocketTunnel_ExtensionPackage, you will need to upgrade to version 3.0.2 if you are upgrading to ThingWorx 8.1.0. To upgrade the extension, go to the Web Sockets Tunnel Widget and Library page of the ThingWorx Marketplace. TW-24465

End of Life Information
SQUEAL functionality has been discontinued in 8.1.

System requirements: http://support.ptc.com/WCMS/files/173583/en/ThingWorx_Core_8.1_System_Requirements_1.0.pdf
Installation guide: http://support.ptc.com/WCMS/files/173600/en/Installing_ThingWorx_8.1_1.0_.pdf

ThingWorx 8.1 Cross Platform Highlights: Security
ThingWorx 8.1 Cross Platform Highlights and Q&A: Licensing
View full tip
About

This is part of a ThingBerry related blog post series.

ThingBerry is ThingWorx installed on a Raspberry Pi, which can be used for portable demonstrations without the need of utilizing e.g. customer networks. Instead the ThingBerry provides its own custom WIFI hotspot and allows Things to connect and send / receive demo data on a small scale.

In this particular blog post we'll discuss how to connect an ESP8266 module to the ThingBerry WIFI hotspot and send data from a DHT-11 sensor via the MQTT protocol.

As the ThingBerry is a highly unsupported environment for ThingWorx, please see this blog post for all related warnings.

Install MQTT broker on the ThingBerry

To install mosquitto as a MQTT broker, log in to the ThingBerry and run

sudo apt-get install mosquitto

This will provide a basic broker installation, which is good enough for this example. MQTT clients (including ThingWorx) will connect to this broker to exchange messages. There will be no added security like encrypted traffic shown in this example; it's however good practice to secure MQTT broker / client connections.

While the ESP8266 module is publishing information, ThingWorx will subscribe to the corresponding topics to update its internal property values with what is sent by the ESP8266 module.

For more information on MQTT, how to configure it for ThingWorx, or more security relevant information also see

https://community.thingworx.com/message/5063#5063
https://community.thingworx.com/community/developers/blog/2016/08/08/securing-mqtt-connection-to-thingworx-platform?sr=tcontent

Configure the ESP8266

There are many instructions on the web already on how to initially set up the ESP8266 and use it with the Arduino IDE. I'll therefore just refer to Google, which covers the topic more extensively than I ever could.

All coding in this example is done in the Arduino IDE and is pushed to the ESP8266 (NodeMCU) via USB. For this you might need to install a CH340G USB driver for the NodeMCU.

In the Arduino IDE under Tools, I have set my environment to

Board: NodeMCU 1.0 (ESP-12E Module)
CPU Frequency: 80 MHz
Flash Size: 4M (3M SPIFFS)
Upload Speed: 115200
Port: COM3

Under Sketch > Include Library > Manage Libraries add / install the following libraries:

DHT sensor library by Adafruit
Adafruit Unified Sensor by Adafruit
PubSubClient by Nick O'Leary

These bring the libraries necessary to read data from the DHT-11 sensor and to configure the ESP8266 as an MQTT client.

Wiring the DHT-11 sensor

The following image shows the PINs on the ESP8266.

I'm using a DHT-11 sensor with cables included and already fixed to a board with 3 PINs. In case you're using a different version, there might be additional components and wiring required, like a resistor etc. Google might help here as well.

Ensure that neither board nor sensor are plugged in, and the ESP8266 is powered off.

To hook the sensor up to the ESP8266, join

( - ) to GND
( + ) to 3.3V
(out) to D3

After all the connections are made, connect the ESP8266 via USB to a computer / laptop with the Arduino IDE configured.

Coding

In the Arduino IDE use the following code - adjust the WIFI settings and the MQTT broker configuration. Ensure to rename the ESP_xx name / topic to something more meaningful, e.g. a specific device name (or just leave it as is if in doubt).

Use the ssid and wpa_passphrase from the hostapd.conf used to configure the ThingBerry as WIFI hotspot.
Copy & paste the code below (see the Code section) into the Arduino IDE, verify it, and upload it to the ESP8266.

If searching for a WIFI connection, the device's blue LED will blink. A successful connection to the broker and publishing the values will result in a static blue LED. In case the LED is off, the connection to the broker is lost or messages cannot be published.

For troubleshooting, use the Serial Monitor function (at 115200 baud) in the Arduino IDE. In case sensor data cannot be read but the wiring is correct and the code is addressing the correct PIN, verify the sensor is indeed working. It took me a long time to figure out that the first sensor I used was a defective device.

The current configuration sends updates every 10 seconds - longer intervals might make more sense, but can trigger a timeout for the MQTT broker. In this case the program will re-connect automatically and log corresponding messages in the Serial Monitor. This might seem like an error, but is indeed intended behavior by the code and the MQTT broker.

Configure MQTT Thing in ThingWorx

Create a new Thing in ThingWorx based on the MQTT Template. Add two properties:

temperature
humidity

Both set to persistent and logged, and Data Change Type to ALWAYS. Also configure a Value Stream to log a history of values.

In the configuration, add two more subscriptions. Activate the "subscribe" checkbox and map name (local property) to topic (MQTT topic), e.g.

name = temperature; topic = ESP_xx/temp
name = humidity; topic = ESP_xx/hum

Ensure the correct servernames, ports etc. are configured (an empty servername will use the localhost).

Save the configuration. Property values should now be updated from the MQTT broker, depending on what the device is sending.

Code

#include "DHT.h"
#include "PubSubClient.h"
#include "ESP8266WiFi.h"

/*
 * Configure parameters for sensor and network / MQTT connections
 */

// setup DHT 11 pin and sensor
#define DHTPin D3
#define DHTTYPE DHT11

// setup WiFi credentials
#define WLAN_SSID "mySSID"
#define WLAN_PASS "WIFIpassword"

// setup MQTT
#define MQTTBROKER "mqttbrokerhostname"
#define MQTTPORT 1883

// setup built-in blue LED
#define LED 2

/*
 * ============================================================
 * DO NOT CHANGE ANYTHING BELOW
 * (unless you know what you're doing)
 */

// initiate DHT
DHT dht(DHTPin, DHTTYPE);

// initiate MQTT client
WiFiClient wifiClient;
PubSubClient client(MQTTBROKER, MQTTPORT, wifiClient);

/*
 * setup
 */
void setup() {

  // switch off internal LED
  pinMode(LED, OUTPUT);
  digitalWrite(LED, HIGH);

  // start serial monitor
  Serial.begin(115200);

  // start DHT
  dht.begin();

  // start WiFi
  WiFi.begin(WLAN_SSID, WLAN_PASS);
}

/*
 * the loop
 */
void loop() {

  // while not connected to WiFi, print "."
  // after connection exit the loop
  // blink LED while having no WiFi signal
  boolean wifiReconnect = false;

  while (WiFi.status() != WL_CONNECTED) {
    digitalWrite(LED, LOW);
    delay(200);
    Serial.print(".");
    digitalWrite(LED, HIGH);
    delay(300);
    wifiReconnect = true;
  }

  // if WiFi has reconnected, print new connection information and turn on LED
  if (wifiReconnect == true) {

    // print connection information and local IP address, mac address
    Serial.println();
    Serial.println("WiFi connected");
    Serial.println(WiFi.localIP());
    Serial.println(WiFi.macAddress());
    Serial.println();

    // turn on built-in LED to indicate successful WiFi connection
    digitalWrite(LED, LOW);
  }

  // if MQTT client is not connected, connect again
  // turn on built-in LED to indicate a successful connection
  if (!client.connected()) {

    Serial.println("Disconnected from MQTT server... trying to connect");

    if (client.connect("ESP_xx")) {
      Serial.println("Connected to MQTT server");
      Serial.println("Topic = ESP_xx");
      digitalWrite(LED, LOW);
    } else {
      Serial.println("MQTT connection failed");
      digitalWrite(LED, HIGH);
    }

    Serial.println();
  }

  // read temperature and humidity from sensor
  float t = dht.readTemperature();
  float h = dht.readHumidity();

  if (isnan(t) || isnan(h)) {

    // if temperature or humidity is not a number, print error
    Serial.println("Failed retrieving data from DHT sensor");

  } else {

    // print temperature and humidity
    Serial.print(t);
    Serial.print("° - ");
    Serial.print(h);
    Serial.print("%");
    Serial.println();

    // only send values to MQTT broker, if client is connected
    if (client.connected()) {

      // boolean to check for errors during payload transfer
      bool isError = false;

      // create payload and publish values via MQTT client
      // use buffer to convert float to char*
      char buffer[10];

      dtostrf(t, 0, 0, buffer);
      if (client.publish("ESP_xx/temp", buffer)) {
        Serial.print("  published /temp  ");
      } else {
        Serial.print("  failed /temp  ");
        isError = true;
      }

      dtostrf(h, 0, 0, buffer);
      if (client.publish("ESP_xx/hum", buffer)) {
        Serial.print("  published /hum  ");
      } else {
        Serial.print("  failed /hum  ");
        isError = true;
      }

      Serial.println();

      // on error, turn off LED
      if (isError == true) {
        digitalWrite(LED, HIGH);
      } else {
        digitalWrite(LED, LOW);
      }
    }
  }

  // sleep for 10 seconds
  // if sleep > default mosquitto timeout: a reconnect is forced for each update cycle
  delay(10000);
}
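To confirm the values actually reach the broker before or while configuring the MQTT Thing, you can run a small test subscriber on the ThingBerry itself. Here is a minimal sketch in Python, assuming the paho-mqtt 1.x package is installed (pip install paho-mqtt); the ESP_xx topic prefix matches the Arduino sketch above:

import paho.mqtt.client as mqtt

# print every message published below the ESP_xx topic prefix
def on_message(client, userdata, msg):
    print(msg.topic, msg.payload.decode())

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883)   # the mosquitto broker installed above
client.subscribe("ESP_xx/#")        # temp and hum topics from the ESP8266
client.loop_forever()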
View full tip
This Expert Session will walk you through the components involved in the ThingWorx Studio Augmented Reality environment, a detailed architecture, supported devices, and the available resources. The session provides great insight into how ThingWorx Studio works and the technicalities involved.

For full-sized viewing, click on the YouTube link in the player controls.

Visit the Online Success Guide to access our Expert Session videos at any time, as well as additional information about ThingWorx training and services.
View full tip
Check out this new KCS article which links to all known best practice documents available for ThingWorx. This article will grow over time as more articles are published related to the dos and don'ts of building an IoT application! Do you know when to use timers, and where to implement their subscriptions? How about ensuring info tables are used at the proper time, and data tables at others? Pesky performance issues wherein ThingWorx runs slowly for no apparent reason? All of these questions and more are addressed here!
View full tip
While it is not a requirement, it is a best practice to install KEPServerEX (v6.2 or higher) before installing ThingWorx (v8.0.1 or higher). If ThingWorx is already installed, close the application and complete the install of KEPServerEX by following these install instructions: How do I download and install KEPServerEX?

Now, when you attempt to launch ThingWorx, if you are presented with a "null pointer exception" error, follow this workaround:

1. Navigate to the 'PostgreSQL\installer' directory, within the directory where the Manufacturing Apps are installed. By default this will be: <ThingWorx install path>\ThingWorxManufacturingApps\PostgreSQL\installer
2. Run the 'vcredist.exe' located there. This application should re-install the conflicting redistributables, and you should be able to launch ThingWorx again normally.
View full tip
KEPServerEX requires the 32-bit version of Java if you are using the IoT Gateway Plug-in. If you do not have the 32-bit version installed and attempt to connect the IoT Gateway, the KEPServerEX Event Log will report the following error: “IoT Gateway failed to start, 32-bit JRE required." Some of the Manufacturing Applications training content relies on this Plug-in, as well. As a best practice, it is recommended that both the 32-bit and 64-bit versions of Java be installed. This install is available for download from the Oracle website, here: Java SE Runtime Environment 8 - Downloads
View full tip
This blog addresses a few points that are related to scoring with ThingWorx Analytics. In particular, it brings a clearer understanding of the concepts behind the values of the scores that are generated when performing a scoring job.

Scoring Outputs:

It is important to note that when training an analytics model, the method is to create a generalizable model from a relatively small training dataset.

By its nature, we expect the training process to see a limited subset and not an exhaustive list of all possible values for many constraints, especially for time and practicality.

As such, these generalized models will be expected to handle unseen data in the form of new combinations or values outside of previously observed ranges (more on this below).

One common way to see scores that exceed the observed ranges in training, under the assumption that the goals are continuous, is to use prescriptive scoring.

Prescriptive scoring attempts to find optimal values for lever (meaning tunable) features in order to maximize or minimize score values. See the prescriptive scoring documentation and functionality for more information.

min/max constraints: these are constraints that are placed upon the inputs for training and expected inputs for scoring.

•          For training: If these ranges were provided as part of the upload process, then training will raise exceptions regarding invalid data. However, if the ranges are not provided, they will be inferred from the data and, as such, training will not see values outside of observed ranges.

•          For scoring: Validation of the ranges will only be performed on the inputs - not the outputs. It is very important to note that the handling of these "constraints" is dependent upon the data type.

For categorical (e.g. colors) and ordinal data (e.g. shirt sizes), the constraints are strict, and data that was not observed in training will raise exceptions during scoring.

However, for continuous values (e.g. temperature ranges) these constraints are more informational in nature. For predictive scoring, our code will accept records with values outside of those ranges.

The rule of thumb is that values slightly outside these ranges are acceptable, and that as the values stray farther from the ranges, the accuracy of the model degrades very quickly.

For prescriptive scoring, these constraints are used to determine the acceptable ranges of values to try when determining the optimal values. Values outside of these constraints will NOT be tried.
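As an illustration only - not the actual ThingWorx Analytics implementation - the following Python sketch shows the difference between strict categorical/ordinal constraints and informational continuous constraints described above; all feature names and thresholds are hypothetical:

def validate_record(record, constraints):
    """Check a scoring record against per-feature training constraints."""
    warnings = []
    for feature, value in record.items():
        c = constraints[feature]
        if c["type"] in ("categorical", "ordinal"):
            # strict: categories unseen in training are rejected outright
            if value not in c["observed"]:
                raise ValueError(f"{feature}: unseen category {value!r}")
        else:  # continuous
            # informational: out-of-range values pass with a warning, since
            # accuracy degrades quickly the farther they stray from the range
            if not (c["min"] <= value <= c["max"]):
                warnings.append(f"{feature}: {value} outside "
                                f"[{c['min']}, {c['max']}]")
    return warnings

constraints = {
    "color":       {"type": "categorical", "observed": {"red", "green", "blue"}},
    "temperature": {"type": "continuous", "min": -10.0, "max": 45.0},
}
print(validate_record({"color": "red", "temperature": 48.2}, constraints))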
View full tip
A Feature - a piece of information that is potentially useful for prediction. Any attribute could be a feature, as long as it is useful to the model.

Feature engineering - Feature engineering is the process of transforming raw data into features that better represent the underlying problem to the predictive models, resulting in improved model accuracy on unseen data. It's a vaguely agreed space of tasks related to designing feature sets for Machine Learning applications.

Components: First, understanding the properties of the task you're trying to solve and how they might interact with the strengths and limitations of the model you are going to use. Second, experimental work where you test your expectations and find out what actually works and what doesn't.

Feature engineering as a technique has three sub-categories of techniques: feature selection, dimension reduction and feature generation.

Feature Selection: Sometimes called feature ranking or feature importance, this is the process of ranking the attributes by their value to the predictive ability of a model. Algorithms such as decision trees automatically rank the attributes in the data set. The top few nodes in a decision tree are considered the most important features from a predictive standpoint. As part of a process, feature selection using entropy-based methods like decision trees can be employed to filter out less valuable attributes before feeding the reduced dataset to another modeling algorithm. Regression-type models usually employ methods such as forward selection or backward elimination to select the final set of attributes for a model. For example: a project development decision tree.

Dimension Reduction: This is sometimes called feature extraction. The most classic example of dimension reduction is principal component analysis, or PCA. PCA allows us to combine existing attributes into a new data frame consisting of a much reduced number of attributes by utilizing the variance in the data. The attributes which "explain" the highest amount of variance in the data form the first few principal components, and we can ignore the rest of the attributes if data dimensionality is a problem from a computational standpoint.

Feature Generation or Feature Construction: Quite simply, this is the process of manually constructing new attributes from raw data. It involves intelligently combining or splitting existing raw attributes into new ones which have a higher predictive power. For example, a date stamp may be used to generate 2 new attributes such as AM and PM, which may be useful in discriminating whether day or night has a higher propensity to influence the response variable. Feature construction is essentially a data transformation process.

Tips for Better Feature Engineering

Tip 1: Think about inputs you can create by rolling up existing data fields to a higher/broader level or category. As an example, a person's title can be categorized into strategic or tactical. Those with titles of "VP" and above can be coded as strategic. Those with titles "Director" and below become tactical. Strategic contacts are those that make high-level budgeting and strategic decisions for a company. Tactical are those in the trenches doing day-to-day work. Other roll-up examples include:
Collating several industries into a higher-level industry: collate oil and gas companies with utility companies, for instance, and call it the energy industry, or fold high tech and telecommunications industries into a single area called "technology."
Defining "large" companies as those that make $1 billion or more and "small" companies as those that make less than $1 billion.

Tip 2: Think about ways to drill down into more detail in a single field. As an example, a contact within a company may respond to marketing campaigns, and you may have information about his or her number of responses. Drilling down, we can ask how many of these responses occurred in the past two weeks, one to three months, or more than six months in the past. This creates three additional binary (yes=1/no=0) data fields for a model. Other drill-down examples include:
Cadence: number of days between consecutive marketing responses by a contact: 1–7, 8–14, 15–21, 21+
Multiple responses on same day flag (multiple responses = 1, otherwise = 0)

Tip 3: Split data into separate categories, also called bins. For example, annual revenue for companies in your database may range from $50 million (M) to over $1 billion (B). Split the revenue into sequential bins: $50–$200M, $201–$500M, $501M–$1B, and $1B+. Whenever a company falls within the revenue bin it receives a one; otherwise the value is zero. There are now four new data fields created from the annual revenue field (see the binning sketch after these tips). Other examples are:
Number of marketing responses by contact: 1–5, 6–10, 10+
Number of employees in company: 1–100, 101–500, 501–1,000, 1,001–5,000, 5,000+

Tip 4: Think about ways to combine existing data fields into new ones. As an example, you may want to create a flag (0/1) that identifies whether someone is a VP or higher and has more than 10 years of experience. Other examples of combining fields include:
Title of director or below and in a company with less than 500 employees
Public company and located in the Midwestern United States
You can even multiply, divide, add, or subtract one data field by another to create a new input.

Tip 5: Don't reinvent the wheel - use variables that others have already fashioned.

Tip 6: Think about the problem at hand and be creative. Don't worry about creating too many variables at first, just let the brainstorming flow.
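Here is a minimal sketch of Tip 3's revenue binning in Python, assuming pandas is installed; the column name and the toy revenue figures are hypothetical:

import pandas as pd

# hypothetical annual revenue in millions of dollars
df = pd.DataFrame({"annual_revenue": [75, 250, 800, 1500, 120]})

# split revenue into the four sequential bins from Tip 3
bins = [50, 200, 500, 1000, float("inf")]
labels = ["50-200M", "201-500M", "501M-1B", "1B+"]
df["revenue_bin"] = pd.cut(df["annual_revenue"], bins=bins, labels=labels)

# one-hot encode: each company gets a 1 in its bin and 0 elsewhere
df = pd.concat([df, pd.get_dummies(df["revenue_bin"], dtype=int)], axis=1)
print(df)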
View full tip
There are multiple approaches to improving performance:
Increase the network bandwidth between the client PC and the ThingWorx server.
Reduce unnecessary handover when clients submit requests to the ThingWorx server through the network.

Here are suggestions to reduce unnecessary handover between client and server:

Eliminate the use of proxy servers between the client and ThingWorx. The Combined.version.date.xxx.xxx.js file must be downloaded the first time a mashup page is loaded (TTFB and Content Download time). Loading performance is affected by the use of proxy servers between the client and the ThingWorx server. This is the test result with a proxy server set up. This is the test result after eliminating the proxy server from the same environment.

Cut off extensions that are not used in the ThingWorx server. After extensions are installed, the size of Combined.version.date.xxx.xxx.js increases.

Avoid HTTP/HTTPS request errors. For example, there is an HTTPS request error when calling Google Maps; it takes more than 20 seconds.
View full tip
Welcome to the ThingWorx Community area for code examples and sharing.

We have a few how-to items and basic guidelines for posting content in this space. The Jive platform that our community runs on provides some tools for posting and highlighting code in the document format that this area is based on. Please try to follow these settings to make the area easy to use, read, and follow.

At the top of your new document please give a brief description of the code sample that you are posting.
Use the code formatting tool provided for all parts of code samples (including if there are multiple in one post).
Try your best to put comments in the code to describe behavior where needed.
You can edit documents, but note each time you save them a new version is created. You can delete old versions if needed.
You may add comments to others' code documents and modify your own code samples based on comments.
If you have alternative ways to accomplish the same as an existing code sample please post it to the comments. We encourage everyone to add alternatives noted in comments to the main post/document.

Format code: The double blue arrows allow you to select the type of code being inserted and will do keyword highlighting as well as add line numbers for reference and discussions.
View full tip
Concepts of Anomaly Detection used in ThingWatcher

ThingWatcher is based on anomaly detection with the normal distribution. What does that mean? Normally distributed metrics follow a set of probabilistic rules. Upcoming values that follow those rules are recognized as being "normal" or "usual", whereas values that break those rules are recognized as being unusual.

What is a normal distribution?
A normal distribution is a very common probability distribution. In real life, the normal distribution approximates many natural phenomena. A data set is known as "normally distributed" when most of the data aggregates around its mean in a symmetric way. Also, its extreme values get less and less likely to appear.

Example
When a factory is making 1 kg sugar bags, it doesn't always produce exactly 1 kg. In reality, it is around 1 kg: most of the time very close to 1 kg and very rarely far from 1 kg. Indeed, the production of 1 kg sugar bags follows a normal distribution.

Mathematical rules
When a metric appears to be normally distributed it follows some interesting laws, as the sugar bag example does.
The mean and the median are the same; both are equal to 1000. This is because of the perfectly symmetric "bell shape".
It is the standard deviation, called sigma σ, that defines how the normal distribution is spread around the mean. In this example σ = 20.
68% of all values fall between [mean−σ; mean+σ]. For the sugar bag: [980; 1020].
95% of all values fall between [mean−2σ; mean+2σ]. For the sugar bag: [960; 1040].
99.7% of all values fall between [mean−3σ; mean+3σ]. For the sugar bag: [940; 1060].
The last 3 rules are also known as the 68–95–99.7 rule, also called the three-sigma rule of thumb.

When the rules get broken: it's an anomaly
As previously stated, when a system has been proven normally distributed, it follows a set of rules. Those rules become the model representing the normal behavior of the metric. Under normal conditions, upcoming values will match the normal distribution and the model will be followed. But what happens when the rules get broken? This is when things turn different, as something unusual is happening.
In theory, in a normal distribution, no values are impossible. If the weights of the bags of sugar were truly normally distributed, we would probably find a bag of sugar of 860 g every billion products. In reality, we approximate this sugar bag example as normally distributed, and almost impossible values are approximated as impossible.

Techniques of Anomaly Detection

Technique n°1: outlier value
An almost impossible value could be considered an anomaly. When the value deviates too much from the mean, let's say by ±4σ, then we can consider this almost impossible value an anomaly. (This limit can also be calculated using the percentile.) Sugar bags that weigh less than 920 g or more than 1080 g are considered anomalous. Chances are, there is a problem in the production chain. This provides a simple way to define maximum and minimum thresholds.

Technique n°2: detecting change in the normal distribution
Technique n°1 can detect unusual values fast, using only single points. But it can't detect anomalies where values move from one sigma σ to another in a seemingly usual manner. To detect this kind of anomaly we use a "window" of the n last elements. If the mean and standard deviation of this window change too much from the usual, then we can deduce an anomaly. Using a big window with a lot of values is more stable, but it requires more time to detect the anomaly. The bigger the window is, the more stable it becomes, but it would require more time to detect the anomaly, as it needs to aggregate more values for the detection.
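A minimal sketch of both techniques in Python using numpy; the 4σ outlier limit, the window size, and the drift thresholds are hypothetical tuning choices for illustration, not ThingWatcher's actual internals:

import numpy as np

rng = np.random.default_rng(0)
stream = rng.normal(1000, 20, 500)   # simulated sugar-bag weights
mu, sigma = 1000.0, 20.0             # parameters learned from history

# Technique 1: flag single values deviating more than 4 sigma from the mean
outliers = np.abs(stream - mu) > 4 * sigma
print("outlier count:", outliers.sum())

# Technique 2: flag drift in a sliding window's mean / standard deviation
WINDOW = 50
for i in range(WINDOW, len(stream)):
    w = stream[i - WINDOW:i]
    if abs(w.mean() - mu) > sigma or abs(w.std() - sigma) > 0.5 * sigma:
        print(f"distribution change detected at index {i}")
        break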
View full tip
Hi everybody,

In this blog post I want to share with you my local ThingWorx installation, with some optimizations that I did for local development.

- Use -XX:+UseConcMarkSweepGC. This uses the older garbage collector from the JVM, instead of the newer G1GC recommended by the ThingWorx Installation Guide since version 7.2. The advantage of ConcMarkSweepGC is that the startup time is faster and the total memory footprint of the Tomcat is far lower than with G1GC.

- Use -agentlib:jdwp=transport=dt_socket,address=1049,server=y,suspend=n. This allows using your Java IDE of choice to connect directly to the Tomcat server, then debugging your extension code, or even the ThingWorx code using the Eclipse Class Decompilers for example. Please modify the 1049 to your port of choice for exposing the server debugging port.

- Use -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=60000 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false. This sets up the server to allow JMX monitoring. I usually use VisualVM from the JDK bin folder, but you can use any JMX monitoring tool. This uses no authentication, no SSL, and uses port 60000 - modify if you need.

I usually start up Tomcat manually from a folder via startup.bat, and the setenv.bat looks like:

set JAVA_HOME=C:\Program Files\Java\jdk1.8.0_102
set JRE_HOME=C:\Program Files\Java\jdk1.8.0_102
set THINGWORX_PLATFORM_SETTINGS=D:\Work\servers\apache-tomcat-8.0.33
set CATALINA_OPTS=-d64 -XX:+UseNUMA -XX:+UseConcMarkSweepGC -Dfile.encoding=UTF-8 -agentlib:jdwp=transport=dt_socket,address=1049,server=y,suspend=n -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=60000 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false

(THINGWORX_PLATFORM_SETTINGS points to where the platform-settings.json file is located.)

In this mode I can look at any errors in almost real time from the console, and it makes killing the server for a Java extension reload a breeze -> Ctrl+C.

Please don't hesitate to provide feedback on this document, I certainly welcome it.

Be warned: THESE ARE NOT PRODUCTION SETTINGS.

Best regards,
Vladimir
View full tip
First of all, wishing everyone a blessed 2017!

So here is a little something that hopefully can be helpful for all you ThingWorx developers! This is a 'Remote Monitoring Application Starter'. Mainly this is created around best practices for security and provides a lot of powerful modeling and mashup techniques. It also has some cool dashboard techniques. Everything is documented in accompanying documents, also in the zip (sorry, it went through a few steps to get this up properly).

Install instructions: ThingWorx Remote Monitoring Starter Application – Installation Guide

Files
All files needed are in a folder called RemoteMonitoringStarter; this is an Export to ThingworxStorage.

Extensions
Not included, but the application uses the GoogleWidgetsExtension (Google Map).

Steps
1. Import the Google Map extension.
2. Place the RemoteMonitoringStarter folder in the ThingworxStorage exports folder.
3. From ThingWorx do an Import from ThingworxStorage - Include Data, Use Default Persistence Provider, do NOT ignore Subsystems.
4. After the import has finished, go to Organizations and open Everyone.
5. In the Organization remove Users from the Everyone organization unit.
6. Go to DataTables and open PTC.RemoteMonitoring.Simulation.DT.
7. Go to Services and execute SetSimulationValues.
8. Go to the UserManagementSubsystem.
9. In the Configuration section add PTC.RemoteMonitoring.Session.TS to the Session. Note: This step may already be done. Note: Screenshots provided at the end.

Account Passwords
FullAdmin/FullAdmin. All other users have a password of: password.

NOTE: You may have to reset your Administrator password using the FullAdmin account. I also recommend changing the passwords after installing.
View full tip
Sometimes M2M assets should poll the platform on demand, such as in the case of avoiding excessive data charges from chatty assets. A mechanism was developed that instructs the asset to contact (poll) the platform for actions that the asset needs to act on, such as file uploads, Set DataItem, etc. The Shoulder Tap SMS message is the platform's way of contacting the asset - tapping it on the shoulder to let it know there's a message waiting for it. The asset responds by polling for the waiting message. This implementation in the platform provides a way to configure the Model Profile that is responsible for sending an SMS Shoulder Tap message to an M2M asset. The Model Profile contains model-wide instructions for how and when a Shoulder Tap message should be sent.

How does it work?
The M2M asset is set not to poll the Axeda Platform for a long period, but the Enterprise user has some actions that the asset needs to act upon, such as FOTA (Firmware Over-the-Air):

1. A software package is deployed to the M2M asset from the Axeda Platform and put into the egress queue.
2. The Shoulder Tap mechanism executes a Custom Object that then sends a message to the asset through a delivery method like SMS, UDP, etc.
3. The asset's SMS (or other) handler receives the message, and the asset then sends a POLL to the Platform and acts upon the action in the egress queue.

How do you make Shoulder Tap work for your M2M assets?
The first step is to create a Model Profile; the Model Profile will tell assets of this model how to communicate. For example, if the Model Profile is Shoulder Tap, then the mechanism used to communicate to the asset will imply Shoulder Tap. Execute the attached custom object, createSMSModelProfile.groovy, and it will create a Model Profile named "SMSModelProfile". When you create a new Model, you will see "SMSModelProfile" appear in the Communication Profile dropdown list as follows:

The next step is to create the Custom Object Transport script which is responsible for sending out the SMS or other method of communication to the asset. In this example the custom object is named SMSCustomObject. The contents of this custom object are outside the scope of this article, but could be REST API calls to Twilio, Jasper, or to a wireless provider's REST APIs to communicate with the remote device using an SMS message. This could also be used with the IntegrationPublisher API to send a JMS message to a server the customer controls, which could then be used to talk directly with custom libraries that are not directly compatible with the Axeda Platform.

Once the Shoulder Tap scripting has been tested and is working correctly, you can directly send a Shoulder Tap to the asset from an action or through an Extended UI Module, such as shown below:

import com.axeda.platform.sdk.v1.services.ServiceFactory;

final ServiceFactory sFact = new ServiceFactory()
def assetId = (Long) parameters.get("assetId")
def stapService = sFact.getShoulderTapService()
stapService.sendShoulderTap(assetId)

See Extending the Axeda Platform UI - Custom Tabs and Modules for more about creating and configuring Extended UI Modules.

What about retries?
maxRetryCount - This built-in attribute's value defines the number of times the platform will retry to send the Shoulder Tap message before it gives up.
retryInterval - The retry interval that can be used if any message delivery needs to be retried.

Retry count and interval are configured in the Model Profile Custom Object like so:

final DeliveryMethodDescriptor dmd = new DeliveryMethodDescriptor();
dmd.setMaxRetryCount(2);
dmd.setRetryInterval(60);
View full tip
Expression rules are the heart of the Axeda Platform's processing capability. These rules have an If-Then-Else structure that's easy to create and understand. We think they're like a formula in a spreadsheet. For example, say your asset has a dataitem reading for temperature:

IF: temperature > 80
THEN: CreateAlarm("High Temp", 100)

This rule compares the temperature to 80 every time a reading is received. When the condition is true, the rule creates an alarm with name "High Temp" and severity 100. Dataitems represent readings from an asset. They are typically sensors or monitoring parameters in an application. But also think of dataitems as variables. The rule can be changed to

IF: temperature > threshold

so that each asset has its own threshold that can be adjusted independently.

Look at the complete list of Expression Rule:
triggers - the events that trigger a rule to run
variables - the information you can access in an expression
functions - the functions that can be used within an expression
actions - these are called in the Then or Else part of an expression to make something happen

A rule can calculate a new value. For example, if you wanted to know the max temperature:

IF: temperature > maxTemperature
THEN: SetDataItem("maxTemperature", temperature)

To convert a temperature in Celsius to Fahrenheit:

IF: temperature
THEN: SetDataItem("tempF", temperature*9/5 + 32)

The If simply names the variable, so any change to that variable triggers the rule to run. There may be lots of other dataitems reported for an asset, and changes to the other dataitems should not recalculate the temperature. When rules should run only when an asset is in a particular mode or state, or when there is a complex sequence to model, read about how State Machines come to the rescue.

Creating and Testing an Expression Rule

We're going to create a simple Expression Rule and show it running in a few steps. Above, you saw a rule that created an alarm when temperature > 80. Now, we will make one that converts a temperature in C to one in F.

An Expression Rule consists of a few things:
Name
Description - an optional field to describe the rule
Trigger - what makes this rule run? The trigger tells the rule if it applies to Alarms, Data, Files, or many others.
If - the logic expression for the condition to evaluate
Then - the logic to run if the condition is true
Else - the logic to run if the condition is false

To begin, log into an Axeda Platform instance.
Navigate to the Manage tab.
Select New, then Expression Rule.
Enter this Expression Rule information:
Name: TempConvert
Type: Data
Description:
Enabled: Leave checked
If: TempC
Then: SetDataItem("TempF", TempC*9/5 + 32)

If you click on functions or scroll down for actions in the Expression Tree, you will see a description in Details. Click the Apply to Asset button to select models and specific assets to apply this rule to. Now that you have an Expression Rule, let's try it.

Testing the Expression Rule (NEEDS UPDATING)

You can test the expression rule by simulating the TempC data using the Axeda Simulator, as instructed below. Or, you can use the Expression Rules Debugger to simulate the reading and display the results. For information about using the Expression Rules Debugger, see the Expression Rules Debugger documentation in the on-line Help system.

Simulate a TempC reading
Launch the Axeda Simulator. The Axeda Simulator will launch in a new browser window. Enter your registered email address, Developer Connection password, and click Login.

Select asset1 from the Asset dropdown. Under the Data tab, enter the dataitem name TempC and a value like 28, then click Send.

To see the exciting result, go back to the Platform window and navigate to the Service tab, and you should see that 28 C = 82.4 F. You created an Expression Rule that triggers when a value of TempC is received, and creates a new dataitem TempF with a calculated value. This rule applies to your model, but if you had many models of assets, it could apply to as many as you want. You could change the rule to do the conversion only If: TempC > 9 and simulate inputs to see that this is the new behavior.

Further Reading
Read about how Rule Timers can trigger rules to run on a scheduled basis. (TODO)
View full tip
Overview

The Axeda Platform is a secure and scalable foundation to build and deploy enterprise-grade applications for connected products, both wired and wireless. This article provides you with a detailed feature overview and helpful links to more in-depth articles and tutorials for a deeper dive into key areas.

Types of Connected Product Applications

M2M applications can span many vertical markets and industries. Virtually every aspect of business and personal life is being transformed by ubiquitous connectivity. Some examples of M2M applications include:

Vehicle Telematics and Fleet Management - Monitor and track the location, movements, status, and behavior of a vehicle or fleet of vehicles
Home Energy Monitoring - Smart energy sensors and plugs providing homeowners with remote control and cost-saving suggestions
Smart Television and Entertainment Delivery - Integrated set-top box providing in-view interaction with other devices - review your voicemails while watching a movie, chat with your Facebook friends, etc.
Family Location Awareness - Set geofences on teenagers, apply curfews, locate family members in real time, with vehicle speed and condition
Supply Chain Optimization - Combine status at key inspection points with logistics and present distribution managers with an interactive, real-time control panel
Telemedicine - Self-monitoring/testing, telecardiology, teleradiology

Why Use a Platform?

Have you ever built a Web application? If so, you probably didn't create the Web server as part of your project. Web servers or Web application servers like Apache HTTPd or JBoss manage TCP sockets, threads, the HTTP protocol, the compilation of scripts, and include thousands of other base services. By leveraging the work done by the dedicated individuals who produced those core services, you were able to rapidly build an application that provided value to your users. The Axeda Platform is analogous to a Web server or a Web application server, but provides services and a design paradigm that makes connected product development streamlined and efficient.

Anatomy of an Axeda Connected Product

Connected products can really be anything that your product and imagination can offer, but it is helpful to pause for some common considerations that apply to most, if not all, of these types of solutions.

Getting Connected - Bring your product or equipment to the network so that it can provide information to the solution, and react to commands and configuration changes orchestrated by the application logic.
Manage and Orchestrate - Script your business logic in the cloud to tie together remote information with information from other business systems or cloud-based services, react to real-time conditions, and facilitate batch operations to synchronize, analyze, and integrate.
Present and Report - Build your user experiences, enabling people to interact with your connected product, manage workflows around business processes, or facilitate data analysis.

Let's take a look at the Axeda Platform services that help with each of these solution considerations.

Getting Connected

Wired & Wireless

Getting connected can assume all sorts of shapes, depending on the environment of your product and the economics of your solution. The Axeda Platform makes no assumption about connectivity, but instead provides features and functionality to help you connect. For wireless applications, especially those which may use cellular or satellite communications, the speed and cost of communication will be an important factor.
For this reason, Axeda has created the Adaptive Machine Messaging Protocol (AMMP), a cross-platform, efficient HTTP-based protocol that you can use to communicate bi-directionally with the platform. The protocol is expressive, robust, secure, and most importantly, able to be implemented on a wide range of hardware and programming environments.

When you are faced with connecting legacy products that may be communicating with a proprietary messaging protocol, the Axeda Platform can be extended with Codecs to "learn" your protocol by translating your device's communication format into a form that the platform can understand. This is a great option for retrofitting existing, deployed products to get connectivity and value today, while designing your next generation of product with AMMP support built in.

Manage and Orchestrate

The Data Model defines the information and its behavior in the Axeda Platform.

Rules

Rules form the heart of a dynamic connected application. There are three types of rules that can be leveraged for your orchestration layer:

Expression rules run in the cloud, and are configured and associated with your assets through the Axeda Admin UI or SDK. These rules have an If-Then-Else structure that's easy to create and understand. They're like a formula in a spreadsheet. For example, say your asset has a dataitem reading for temperature:

IF: temperature > 80
THEN: CreateAlarm("High Temp", 100)

This rule compares the temperature to 80 every time a reading is received. When this happens, the rule creates an alarm with name "High Temp" and severity 100. Learn more about Expression Rules.

State Machines help organize Expression Rules into manageable groups that apply to assets when the assets are in a certain state. For example, if your asset were a refrigerated truck, and you were interested in receiving an alert when the temperature within the cargo area rose above a preset threshold, you would not want this rule to be applied when your truck asset is empty and parked in the distribution center lot. In this case, you might organize your rules into a state machine called "TruckStatus". The TruckStatus state machine would then consist of a set of states that you define, and the rules that execute when the truck is in a particular state.

State "Parked": IF doorOpen THEN …
State "In Transit": IF temperature > 40 THEN…
State "Maintenance": <no rules>

You can learn more about state machines in an upcoming technical article soon.

Scripting

Using Axeda Custom Objects, you can harness the power of the Axeda SDK to gain access to the complete set of platform data and functionality, all within a script that you customize. Custom Object scripts can be invoked in an Expression Rule to provide customized and flexible business logic for your application. Custom Object scripts are written in a powerful scripting language called Groovy, which is 100% Java syntax compatible. Groovy also offers very modern, concise syntax options that make your code simple and easy to understand.

Groovy can also implement the body of a web service method. We call this Scripto. Scripto allows you to write code and call that code by name through a REST web service. This allows a client application to call a set of customized web services that return exactly the information and format needed by the application. Here is a beginning tutorial on Scripto. This site includes many Scripto examples called by different Rich Internet Applications (RIA).
Location-Based Services

Knowing where something is, and where it has been, opens a world of possible application features. The Axeda Platform can keep track of an asset's current and historical location, which allows your applications to plot the current position of your assets on a map, or show a breadcrumb trail of where a particular asset has been.

Geofences are virtual perimeters around geographic locations. You can define a geofence around a physical location by describing a center point and radius, or by "drawing" a polygon around an arbitrary shape. For instance, you may have a geofence around the Boston metro that is defined as the center of town with a 10-mile radius. You may then compare an asset's location to this geofence in a rule and trigger events to occur.

IF InNamedGeofence("Boston") THEN CreateAlarm(…)

You can learn more about geofences and other location-oriented rule features in an upcoming tutorial.

Integration Queue

In today's software landscape, almost no complete solution is an island unto itself. Business systems need to interoperate by sharing data and events, so that specialized systems can do their job without tight coupling. Messaging is a robust and capable pattern for bridging the gap between systems, especially those that are hosted in the cloud. The Axeda Platform provides a message queue that can be subscribed to by external systems to trigger processes and workflows in those systems, based on events that are being tracked within the platform. A simple Expression Rule can react to a condition by placing a message in the integration queue as follows:

IF Alarm.severity > 100 THEN PublishObject()

A message is then placed in the queue describing the platform event, and another business system may subscribe to these messages and react accordingly.

Web Services

Web Services are at the heart of a cloud-based API stack, and the Axeda Platform provides both comprehensiveness and flexibility. The platform exposes Web Service operations for all platform data and configuration metadata. As a result, you can determine the status of an asset, query historical data for assets, search for assets based on their current location, or even configure expression rules and other configuration settings, all through a modern Web Service API, using standard SOAP and REST communication protocols.

Scripto

Web Service APIs simplify system integration in a loosely-coupled, secure way, and we have a commitment to offering a comprehensive collection of standard APIs into the Axeda Platform. But we can't have an API that suits every need exactly. You may want data in a particular format, such as CSV, JSON, or XML. Or some logic should be applied, and it's inefficient to query lots of data to look for the bit you want. Wouldn't you rather make the service on the other side do exactly what you want, and give it to you in exactly the format you need?

That is Scripto - the bridge between the power and efficiency of the Axeda Custom Object scripting engine, and a Web Service client. Using Scripto, you can code a script in the Groovy language, using the Axeda SDK and potentially mashing up results from other systems, all within the platform, and expose your script to an external consumer via a single, REST-based Web Service operation. You create your own set of Web Services that do exactly what you want. This powerful combination lets you simplify your Web Service client code, and gives you easy access and maintainability to the scripted logic.
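As a sketch of what calling a Scripto service can look like from a client, here is a minimal Python example using the requests library. The host, credentials, custom object name (myDashboardData), and extra parameter are hypothetical, and the execute URL pattern is an assumption based on the Scripto REST convention; check the Axeda documentation for the exact endpoint of your instance:

import requests

# Hypothetical Scripto endpoint: executes the custom object "myDashboardData"
url = "https://mycompany.axeda.com/services/v1/rest/Scripto/execute/myDashboardData"

resp = requests.get(url, params={
    "username": "myUser",      # platform credentials (hypothetical)
    "password": "myPassword",
    "assetId": "12345",        # any extra parameters are passed to the script
})
resp.raise_for_status()
print(resp.text)  # the script decides the content type: JSON, CSV, XML, ...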
Present

Rich Internet Applications are a great way to build engaging, information-rich user experiences. By exposing platform data and functions via Web Services and Scripto, you can use your tool of choice for developing your front end. In fact, if you choose a technology that doesn't require a server-side rendering engine, such as HTML + AJAX, Adobe Flash, or Microsoft Silverlight, then you can upload your application UI files to the Axeda Platform and let the platform serve your URL!

Additional references for using RIAs on the Axeda Platform:

Axeda Sample Application: Populating A Web Page with Data Items
Extending the Axeda Platform UI - Custom Tabs and Modules

Far-Front-Ends and Other Systems

If a client-side-only RIA is not an option for you, you can still use Web Services to integrate platform information into other server-side presentation technologies, such as a Microsoft SharePoint portal or a Java EE Web Application. You can also get lightning-fast updates for your users with Axeda Machine Streams, which uses ActiveMQ JMS technology to stream data in real time from the Axeda Platform to your custom data warehouses (a minimal consumer sketch follows the summary below). While your users are viewing a drill-down on a set of assets, they can receive asynchronous notifications about real-time data changes, without the need to constantly poll Web Services.

Summary

Building Connected Products, and applications for them, is an incredibly rewarding job. Axeda provides a set of core services and a framework for bringing your application to market.
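To illustrate the Machine Streams idea mentioned above, here is a hedged sketch of a consumer built on the standard JMS API and the ActiveMQ client library. The broker URL, credentials, and queue name are assumptions, not actual Axeda endpoints; your real connection details come from your Machine Streams configuration.

import javax.jms.*
import org.apache.activemq.ActiveMQConnectionFactory

// All connection details below are placeholders.
def factory = new ActiveMQConnectionFactory("ssl://yourdomain.example.com:61617")
Connection connection = factory.createConnection("streamUser", "streamPassword")
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE)
MessageConsumer consumer = session.createConsumer(session.createQueue("machine.stream.queue"))

consumer.setMessageListener({ Message msg ->
    if (msg instanceof TextMessage) {
        // Forward the streamed datapoint to your data warehouse here.
        println "Received: ${msg.text}"
    }
} as MessageListener)

connection.start()   // begin receiving messages asynchronously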
Oftentimes, to set up our environment securely, we will assign Entity Type permissions, which is much easier than remembering to assign them to every single ThingShape, ThingTemplate, Thing, etc. However, did you know that these security settings are only exported when doing an Export to ThingworxStorage? So you must either maintain a list of these settings and re-apply them when starting on a new environment, or:

1. Set up your Groups (and Users, although hopefully all permissions you set up are assigned to Groups, as a Best Practice)
2. Set up your Entity Type permissions
3. Create an export using Export to ThingworxStorage and export everything

Now you have an import ready any time you need to deploy ThingWorx anew.

NOTE: Obviously this means you need to maintain that export any time changes are made to those permissions. Unfortunately, that also means another export of ALL, which can be less desirable, since it can include test objects, unfinished items, etc. As such, one may have to maintain a separate local instance to keep the Import/Export clean.
I know most of us very happily use the Administrator account in ThingWorx; however, this is bad, bad practice for development and even administration of the platform! Administrator is there by default and should be used to set up your initial users, which should include your actual Platform Administrator (with a strong password, of course). After that, change the Administrator password and remove the account from the Administrators group. I recommend this as a Best Practice even in your own development environments, but especially in Runtime. Your very first steps would look like:

1. Install ThingWorx
2. Log in as Administrator
3. Set up the new Platform Administrator account
4. Remove Administrator from the Administrators group
5. Change the Administrator password
This tutorial applies to Axeda version 6.1.6+, with sections applicable to 6.5+ (indicated below). Custom objects (or Groovy scripts) are the backbone of Axeda custom applications. As the developer, you decide what content type to give the data returned by the script.

What this tutorial covers

This tutorial provides examples of outputting data in different formats from Groovy scripts and consuming that data via Javascript using the jQuery framework. While Javascript and jQuery are preferred by the Axeda Innovation team, any front-end technology that can consume web services can be used to build applications on the Axeda Machine Cloud. On the same note, the formats discussed in this article are only a few examples of the wide variety of content types that Groovy scripts can output via Scripto. The content types available via Scripto are limited only by their portability over the TCP protocol, a qualification which includes all text-based and downloadable binary mime types. As of July 2013, the UDP protocol (content streaming) is not supported by the current version of the Axeda Platform.

Formats discussed in this article:

1) JSON
2) XML
3) CSV
4) Binary content, with an emphasis on image files (6.5+)

For a tutorial on how to create custom objects that work with custom applications, check out Using Google Charts API with Scripto. For a discussion of what Scripto is and how it relates to Groovy scripts and Axeda web services, take a look at Unleashing the Power of the Axeda Platform via Scripto.

Serializing Data

JSON

For those building custom applications with Javascript, serializing data from scripts into JSON is a great choice, as the data is easily consumable as native Javascript objects. The net.sf.json JSON library is available to use in the SDK. It offers an easy way to serialize objects on the Platform, particularly v2 SDK objects.
import net.sf.json.JSONArray
import static com.axeda.sdk.v2.dsl.Bridges.*

def asset = assetBridge.findById(parameters.assetId)
def response = JSONArray.fromObject(asset).toString(2)

return ["Content-Type": "application/json", "Content": response]

Outputs:

[{
    "buildVersion": "",
    "condition": {
        "detail": "",
        "id": "3",
        "label": "",
        "restUrl": "",
        "systemId": "3"
    },
    "customer": {
        "detail": "",
        "id": "2",
        "label": "Default Organization",
        "restUrl": "",
        "systemId": "2"
    },
    "dateRegistered": {
        "date": 11,
        "day": 1,
        "hours": 18,
        "minutes": 7,
        "month": 2,
        "seconds": 49,
        "time": 1363025269253,
        "timezoneOffset": 0,
        "year": 113
    },
    "description": "",
    "detail": "testasset",
    "details": null,
    "gateways": [],
    "id": "12345",
    "label": "",
    "location": {
        "detail": "Default Organization",
        "id": "2",
        "label": "Default Location",
        "restUrl": "",
        "systemId": "2"
    },
    "model": {
        "detail": "testmodel",
        "id": "2345",
        "label": "standalone",
        "restUrl": "",
        "systemId": "2345"
    },
    "name": "testasset",
    "pingRate": 0,
    "properties": [
        {
            "detail": "",
            "id": "1",
            "label": "TestProperty",
            "name": "TestProperty",
            "parentId": "2345",
            "restUrl": "",
            "systemId": "1",
            "value": ""
        },
        {
            "detail": "",
            "id": "4",
            "label": "TestProperty0",
            "name": "TestProperty0",
            "parentId": "2345",
            "restUrl": "",
            "systemId": "4",
            "value": ""
        },
        {
            "detail": "",
            "id": "3",
            "label": "TestProperty1",
            "name": "TestProperty1",
            "parentId": "2345",
            "restUrl": "",
            "systemId": "3",
            "value": ""
        },
        {
            "detail": "",
            "id": "2",
            "label": "TestProperty2",
            "name": "TestProperty2",
            "parentId": "2345",
            "restUrl": "",
            "systemId": "2",
            "value": ""
        }
    ],
    "restUrl": "",
    "serialNumber": "testasset",
    "sharedKey": [],
    "systemId": "12345",
    "timeZone": "GMT"
}]

This output can be traversed as a Javascript object, with its nodes accessible using dot (.) notation. For example, if you set the above JSON as the content of the variable "json", you can access it in the following way, without any preliminary parsing needed:

assert json[0].condition.id == 3

If you use jQuery, a Javascript library, feel free to make use of axeda.js, which contains utility functions to pass data to and from the Axeda Platform. One function in particular is used in most example custom applications found on this site: the axeda.callScripto function. It relies on the jQuery ajax function to make the underlying call.

/**
 * makes a call to the enterprise platform services with the name of a script and passes
 * the script any parameters provided.
 *
 * default is GET if the method is unknown
 *
 * Notes: Added POST semantics - plombardi @ 2011-09-07
 *
 * original author: Zack Klink & Philip Lombardi
 * added on: 2011/7/23
 */
// options - localstoreoff: "yes" for no local storage, contentType: "application/json; charset=utf-8"
axeda.callScripto = function (method, scriptName, scriptParams, attempts, callback, options) {
    var reqUrl = axeda.host + SERVICES_PATH + 'Scripto/execute/' + scriptName + '?sessionid=' + SESSION_ID
    var contentType = options.contentType ? options.contentType : "application/json; charset=utf-8"
    var local
    var daystring = keygen()

    if (options.localstoreoff == null) {
        if (localStorage) {
            local = localStorage.getItem(scriptName + JSON.stringify(scriptParams))
        }
        if (local != null && local == daystring) {
            return dfdgen(reqUrl + JSON.stringify(scriptParams))
        } else {
            localStorage.setItem(scriptName + JSON.stringify(scriptParams), daystring)
        }
    }

    return $.ajax({
        type: method,
        url: reqUrl,
        data: scriptParams,
        contentType: contentType,
        dataType: "text",
        error: function () {
            if (attempts) {
                expiredSessionLogin();
                setTimeout(function () {
                    axeda.callScripto('POST', scriptName, scriptParams, attempts - 1, callback, options)
                }, 1500);
            }
        },
        success: function (data) {
            if (options.localstoreoff == null) {
                localStorage.setItem(reqUrl + JSON.stringify(scriptParams), JSON.stringify([data]))
            }
            if (contentType.match("json")) {
                callback(unwrapResponse(data))
            } else {
                callback(data)
            }
        }
    })
};

Using the axeda.callScripto function:

var postToPlatform = function (scriptname, callback, map) {
    var options = {
        localstoreoff: "yes",
        contentType: "application/json; charset=utf-8"
    }
    // Javascript object "map" has to be stringified to post to Axeda Platform
    axeda.callScripto("POST", scriptname, JSON.stringify(map), 2, function (json) {
        // callback gets the JSON object output by the Groovy script
        callback(json)
    }, options)
}

The JSON object is discussed in more detail here.

XML

XML is the preferred language of integration with external applications and services. Groovy provides utilities to make XML serialization a trivial exercise.

import groovy.xml.MarkupBuilder
import com.axeda.services.v2.*   // for AssetCriteria and AssetReference
import static com.axeda.sdk.v2.dsl.Bridges.*

def writer = new StringWriter()
def xml = new MarkupBuilder(writer)
def findAssetResult = assetBridge.find(new AssetCriteria(modelNumber: parameters.modelName))
// find operation returns AssetReference class. Contains asset id only
def assets = findAssetResult.assets

xml.Response() {
    Assets() {
        assets.each { AssetReference assetRef ->
            def asset = assetBridge.findById(assetRef.id)
            // asset contains a ModelReference object instead of a Model. ModelReference has a detail property, not a name property
            Asset() {
                id(asset.id)
                name(asset.name)
                serial_number(asset.serialNumber)
                model_id(asset.model.id)
                model_name(asset.model.detail)
            }
        }
    }
}

return ['Content-Type': 'text/xml', 'Content': writer.toString()]

Output:

<Response>
  <Assets>
    <Asset>
      <id>98765</id>
      <name>testasset</name>
      <serial_number>testasset</serial_number>
      <model_id>4321</model_id>
      <model_name>testmodel</model_name>
    </Asset>
  </Assets>
</Response>

Although XML is not a native Javascript object as JSON is, Javascript libraries and utilities are available for parsing XML into an object traversable in Javascript.
For more information on parsing XML in Javascript, see W3 Schools XML Parser. For those using jQuery, check out the jQuery.parseXML function.

Outputting Files (Binary content types)

CSV

CSV comes in handy for spreadsheet generation, as it is compatible with Microsoft Excel. The following example is suitable for Axeda version 6.1.6+, as it makes use of the Data Accumulator feature to create a downloadable file.

import com.axeda.drm.sdk.device.ModelFinder
import com.axeda.drm.sdk.Context
import com.axeda.drm.sdk.scripto.Request
import com.axeda.common.sdk.id.Identifier
import com.axeda.drm.sdk.device.Model
import com.axeda.drm.sdk.device.DataItem
import com.axeda.drm.sdk.device.DataItemValue
import com.axeda.drm.sdk.data.DataValue
import com.axeda.drm.sdk.device.DeviceFinder
import com.axeda.drm.sdk.device.Device
import com.axeda.drm.sdk.mobilelocation.MobileLocation
import com.axeda.drm.sdk.data.DataValueList
import com.axeda.drm.sdk.data.CurrentDataFinder
import com.axeda.drm.sdk.mobilelocation.CurrentMobileLocationFinder
import groovy.xml.MarkupBuilder
import com.axeda.platform.sdk.v1.services.ServiceFactory

/*
 * ExportObjectToCSV.groovy
 *
 * Creates a csv file, from either all assets of a model or a single asset, that can then be used to import them back into another system.
 *
 * @param model   - (REQ):Str model name.
 * @param serial  - (OPT):Str serial number.
 *
 * @author Sara Streeter <sstreeter@axeda.com>
 */

def writer = new StringWriter()
def xml = new MarkupBuilder(writer)
InputStream is
try {
    Context CONTEXT = Context.getSDKContext()
    ModelFinder modelFinder = new ModelFinder(CONTEXT)
    modelFinder.setName(Request.parameters.model)
    Model model = modelFinder.find()
    DeviceFinder deviceFinder = new DeviceFinder(CONTEXT)
    deviceFinder.setModel(model)
    List<Device> devices = []
    def exportkey = model.name
    Device founddevice
    if (Request.parameters.serial){
        deviceFinder.setSerialNumber(Request.parameters.serial)
        founddevice = deviceFinder.find()
        logger.info(founddevice?.serialNumber)
        if (founddevice != null){
            devices.add(founddevice)
        }
        else throw new Exception("Device ${Request.parameters.serial} cannot be found.")
        exportkey += "${founddevice.serialNumber}"
    } else {
        devices = deviceFinder.findAll()
        exportkey += "all"
    }

    // use a Data Accumulator to store the information
    def dataStoreIdentifier = "FILE-CSV-export_____" + exportkey
    def daSvc = new ServiceFactory().dataAccumulatorService
    if (daSvc.doesAccumulationExist(dataStoreIdentifier, devices[0].id.value)) {
        daSvc.deleteAccumulation(dataStoreIdentifier, devices[0].id.value)
    }

    List<DataItem> dataItemList = devices[0].model.dataItems
    def firstrow = [ "model", "serial", "devicename", "conditionname", "currentlat", "currentlng" ]
    def tempfirstrow = dataItemList.inject([]){ list, dataItem ->
        list << dataItem.name
        list
    }
    firstrow += tempfirstrow
    firstrow = firstrow.join(',')
    firstrow += '\n'
    daSvc.writeChunk(dataStoreIdentifier, devices[0].id.value, firstrow)

    CurrentMobileLocationFinder currentMobileLocationFinder = new CurrentMobileLocationFinder(CONTEXT)
    devices.each{ device ->
        CurrentDataFinder currentDataFinder = new CurrentDataFinder(CONTEXT, device)
        currentMobileLocationFinder.deviceId = device.id.value
        MobileLocation mobileLocation = currentMobileLocationFinder.find()
        def lat = 0
        def lng = 0
        if (mobileLocation){
            lat = mobileLocation?.lat
            lng = mobileLocation?.lng
        }
        def row = [
            device.model.name,
            device.serialNumber,
            device.name,
            device.condition?.name,
            lat,
            lng
        ]
        def temprow = dataItemList.inject([]){ subList, dataItem ->
            DataValue value = currentDataFinder.find(dataItem.name)
            def val = "NULL"
            val = value?.asString() != "?" ? value?.asString() : val
            subList << val
            subList
        }
        row += temprow
        row = row.join(',')
        row += '\n'
        daSvc.writeChunk(dataStoreIdentifier, devices[0].id.value, row)
    }

    // stream the data accumulator to create the file
    is = daSvc.streamAccumulation(dataStoreIdentifier, devices[0].id.value)
    def disposition = 'attachment; filename=CSVFile' + exportkey + '.csv'
    return ['Content-Type': 'text/csv', 'Content-Disposition': disposition, 'Content': is.text]
} catch (def ex) {
    xml.Response() {
        Fault {
            Code('Groovy Exception')
            Message(ex.getMessage())
            StringWriter sw = new StringWriter();
            PrintWriter pw = new PrintWriter(sw);
            ex.printStackTrace(pw);
            Detail(sw.toString())
        }
    }
    logger.info(writer.toString())
    return ['Content-Type': 'text/xml', 'Content': writer.toString()]
}
return ['Content-Type': 'text/xml', 'Content': writer.toString()]

Image Files (6.5+)

The FileStore in Axeda version 6.5+ allows fine-grained control of uploaded and downloaded files. As Groovy scripts can return binary data via Scripto, this allows use cases such as embedding a Groovy script URL as the source for an image. The following example uses the FileStore API to create an Image out of a valid image file, scales it to a smaller size, and stores the smaller file.

import com.axeda.drm.sdk.Context
import com.axeda.drm.sdk.data.*
import com.axeda.drm.sdk.device.*
import com.axeda.drm.sdk.mobilelocation.CurrentMobileLocationFinder
import com.axeda.drm.sdk.mobilelocation.MobileLocation
import com.axeda.drm.sdk.mobilelocation.MobileLocationFinder
import com.axeda.sdk.v2.bridge.FileInfoBridge
import static com.axeda.sdk.v2.dsl.Bridges.*
import com.axeda.services.v2.ExecutionResult
import com.axeda.services.v2.FileInfo
import com.axeda.services.v2.FileInfoReference
import com.axeda.services.v2.FileUploadSession
import net.sf.json.JSONObject
import groovy.json.JsonBuilder
import net.sf.json.JSONArray
import com.axeda.drm.sdk.scripto.Request
import org.apache.commons.io.IOUtils
import org.apache.commons.lang.exception.ExceptionUtils
import com.axeda.common.sdk.id.Identifier
import groovy.json.*
import javax.imageio.ImageIO
import java.awt.RenderingHints
import java.awt.image.BufferedImage
import java.io.ByteArrayOutputStream
import java.awt.*
import java.awt.geom.*
import javax.imageio.*
import java.awt.image.*
import java.awt.Graphics2D
import javax.imageio.stream.ImageInputStream

/*
   Image-specific FileStore entry point to post and store files
*/

def contentType = "application/json"
final def serviceName = "StoreScaledImage"

// Create a JSON Builder
def json = new JsonBuilder()

// Global try/catch. Gotta have it, you never know when your code will be exceptional!
try {
    Context CONTEXT = Context.getSDKContext()
    def filesList = []
    def datestring = new Date().time
    InputStream inputStream = Request.inputStream
    def reqbody = Request.body

    // all of our Request Parameters are available here
    def params = Request.parameters
    def filename = Request?.headers?.'Content-Disposition' ?
        Request?.headers?.'Content-Disposition' : "file___" + datestring + ".txt"
    def filelabel = Request.parameters.filelabel ?: filename
    def description = Request.parameters.description ?: filename
    def contType = Request.headers?."content-type" ?: "image/jpeg"
    def tag = Request.parameters.tag ?: "cappimg"
    def encoded = Request.parameters.encoded?.toBoolean()
    def dimlimit = params.dimlimit ? params.dimlimit : 280

    // host is available in the headers when the script is called with AJAX
    def domain = Request.headers?.host

    byte[] bytes = IOUtils.toByteArray(inputStream);
    def fileext = filename.substring(filename.indexOf(".") + 1, filename.size())
    def outerMap = [:]

    // check that file extension matches an image type
    if (fileext ==~ /([^\s]+(\.(?i)(jpg|jpeg|png|gif|bmp))$)/){
        if (inputStream.available() > 0) {
            def scaledImg
            try {
                def img = ImageIO.read(inputStream)
                def width = img?.width
                def height = img?.height
                def ratio = 1.0
                def newBytes
                if (img){
                    if (width > dimlimit || height > dimlimit){
                        // shrink by the smaller side so it can still be over the limit
                        def dimtochange = width > height ? height : width
                        ratio = dimlimit / dimtochange
                        width = Math.floor(width * ratio).toInteger()
                        height = Math.floor(height * ratio).toInteger()
                    }
                    newBytes = doScale(img, width, height, ratio, fileext)
                    if (newBytes?.size() > 0){
                        bytes = newBytes
                    }
                }
            }
            catch(Exception e){
                logger.info(e.localizedMessage)
            }

            outerMap.byteCount = bytes.size()

            FileInfoBridge fib = fileInfoBridge
            FileInfo myImageFile = new FileInfo(filelabel: filelabel,
                                                filename: filename,
                                                filesize: bytes?.size(),
                                                description: description,
                                                tags: tag)
            myImageFile.contentType = contType

            FileUploadSession fus = new FileUploadSession();
            fus.files = [myImageFile]
            ExecutionResult fer = fileUploadSessionBridge.create(fus);
            myImageFile.sessionId = fer.succeeded.getAt(0)?.id

            ExecutionResult fileInfoResult = fib.create(myImageFile)
            if (fileInfoResult.successful) {
                outerMap.fileInfoSave = "File Info Saved"
                outerMap.sessionId = "File Upload SessionID: " + fer.succeeded.getAt(0)?.id
                outerMap.fileInfoId = "FileInfo ID: " + fileInfoResult?.succeeded.getAt(0)?.id
                ExecutionResult er = fib.saveOrUpdate(fileInfoResult.succeeded.getAt(0).id, new ByteArrayInputStream(bytes))
                def fileInfoId = fileInfoResult?.succeeded.getAt(0)?.id
                String url = "${domain}/services/v1/rest/Scripto/execute/DownloadFile?fileId=${fileInfoId}"
                if (er.successful) {
                    outerMap.url = url
                } else {
                    outerMap.save = "false"
                    logger.info(logFailure(er, outerMap))
                }
            } else {
                logger.info(logFailure(fileInfoResult, outerMap))
            }
        } else {
            outerMap.bytesAvail = "No bytes found to upload"
        }
    } else {
        outerMap.imagetype = "Extension $fileext is not a supported image file type."
    }

    filesList << outerMap

    // return the JSONBuilder contents
    // we specify the content type, and any object as the return (even an outputstream!)
    return ["Content-Type": contentType, "Content": JSONArray.fromObject(filesList).toString(2)]

    // alternately you may just want to serialize an Object as JSON:
    // return ["Content-Type": contentType, "Content": JSONArray.fromObject(invertedMessages).toString(2)]

} catch (Exception e) {
    // I knew you were exceptional!
    // we'll capture the output of the stack trace and return it in JSON
    json.Exception(
            description: "Execution Failed!!! An Exception was caught...",
            stack: ExceptionUtils.getFullStackTrace(e)
    )
    // return the output
    return ["Content-Type": contentType, "Content": json.toPrettyString()]
}

def doScale(image, width, height, ratio, fileext){
    if (image){
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        def bytes
        def scaledImg = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB)
        Graphics2D g = scaledImg.createGraphics();
        g.setRenderingHint(RenderingHints.KEY_INTERPOLATION, RenderingHints.VALUE_INTERPOLATION_BILINEAR);
        g.scale(ratio, ratio)
        g.drawImage(image, null, null);
        g.dispose();
        ImageIO.write(scaledImg, fileext, baos)
        baos.flush()
        bytes = baos.toByteArray()
        baos.close()
        return bytes
    }
    else {
        logger.info("image to be scaled is null")
        return false
    }
}

private void logFailure(ExecutionResult fileInfoResult, LinkedHashMap outerMap) {
    outerMap.message = fileInfoResult.failures.getAt(0)?.message
    outerMap.source = fileInfoResult.failures.getAt(0)?.sourceOfFailure
    outerMap.details = fileInfoResult.failures.getAt(0)?.details?.toString()
    outerMap.fileInfoSave = "false"
}

The next example makes use of the jQuery framework to upload an image to this script via an HTTP POST.

Note: This snippet is available as a jsFiddle at http://jsfiddle.net/LrxWF/18/

With HTML5 button:

<input type="file" id="fileinput" value="Upload" />

var PLATFORM_HOST = document.URL.split('/apps/')[0]; // this is how you would retrieve the host on an Axeda instance
var SESSION_ID = null // usually retrieved from login function included below

/***
 * Depends on jQuery 1.7+ and HTML5, assumes an HTML5 element such as the following:
 * <input type="file" id="fileinput" value="Upload" />
 **/
$("#fileinput").off("click.filein").on("click.filein", function () {
    fileUpload()
})

var fileUpload = function () {
    $("#fileinput").off('change.fileinput')
    $("#fileinput").on('change.fileinput', function (event) {
        if (this.files && this.files.length > 0) {
            handleFiles("http://" + PLATFORM_HOST, this.files)
        }
    })
}

var handleFiles = function (host, files) {
    $.each(files, function (index, file) {
        var formData = new FormData();
        var filename = file.name
        formData.append(filename, file)
        var url = host + '/services/v1/rest/Scripto/execute/StoreScaledImage?filelabel=' + filename + "&tag=myimg"
        url = setSessionId(url)
        jQuery.ajax(url, {
            beforeSend: function (xhr) {
                xhr.setRequestHeader('Content-Disposition', filename);
            },
            cache: false,
            processData: false,
            type: 'POST',
            contentType: false,
            data: formData,
            success: function (json) {
                refreshPage(json)
                console.log(json)
            }
        });
    })
}

var setSessionId = function (url) {
    // you would already have this from logging in
    return url + "&sessionid=" + SESSION_ID
}

var refreshPage = function (json) {
    // here you would refresh your page with the returned JSON
    return
}

/***
 * The following functions are not used in this demonstration, however they are necessary for a complete app and are found in axeda.js http://gist.github.com/axeda/4340789
 ***/
function login(username, password, success, failure) {
    var reqUrl = host + SERVICES_PATH + 'Auth/login';
    localStorage.clear()
    return $.get(reqUrl, {
        'principal.username': username,
        'password': password
    }, function (xml) {
        var sessionId = $(xml).find("ns1\\:sessionId, sessionId").text()
        // var sessionId = $(xml).find("[nodeName='ns1:sessionId']").text(); - no longer works past jquery 1.7
        if (sessionId) {
            // set the username and password vars for future logins.
            storeSession(sessionId);
            success(SESSION_ID); // return the freshly stored contents of SESSION_ID
        } else {
            failure($(xml).find("faultstring").text());
        }
    }).error(function () {
        $('#loginerror').html('Login Failed, please try again')
    });
};

function storeSession(sessionId) {
    var date = new Date();
    date.setTime(date.getTime() + SESSION_EXPIRATION);
    SESSION_ID = sessionId
    document.cookie = APP_NAME + '_sessionId=' + SESSION_ID + '; expires=' + date.toGMTString() + '; path=/';
    return true;
};

The return JSON includes a URL that you can use as the source for images:

[{
  "byteCount": 14863,
  "fileInfoSave": "File Info Saved",
  "sessionId": "File Upload SessionID: 01234",
  "fileInfoId": "FileInfo ID: 12345",
  "url": "http://yourdomain.axeda.com/services/v1/rest/Scripto/execute/DownloadFile?fileId=12345"
}]

The DownloadFile Custom Object looks like the following:

import static com.axeda.sdk.v2.dsl.Bridges.*
import javax.activation.MimetypesFileTypeMap
import com.axeda.services.v2.*
import com.axeda.sdk.v2.exception.*
import com.axeda.drm.sdk.scripto.Request

def knowntypes = [
         [png: 'image/png']
        ,[gif: 'image/gif']
        ,[jpg: 'image/jpeg']
    ]

def params = Request.parameters.size() > 0 ? Request.parameters : parameters
def response = fileInfoBridge.getFileData(params.fileId)
def fileinfo = fileInfoBridge.findById(params.fileId)
def type = fileinfo.filename.substring(fileinfo.filename.indexOf('.') + 1, fileinfo.filename.size())
type = returnType(knowntypes, type)
def contentType = params.type ?: (type ?: 'image/jpg')

return ['Content': response, 'Content-Disposition': contentType, 'Content-Type': contentType]

def returnType(knowntypes, ext){
    return knowntypes.find{ it.containsKey(ext) }?."$ext"
}

Make sure to append a valid session id to the end of the URL when using it as the source for an image (e.g., .../DownloadFile?fileId=12345&sessionid=<valid session id>). The techniques discussed above can be applied to any type of binary file output, with consideration for the type of file being processed.

A Word on Streaming

Content streaming, such as streaming of video or audio files over UDP, is not currently supported by the Axeda Platform.
Data is NOT free. It is easy to overlook the cost of data collection, but all data incurs some cost when it is collected. Data collection in and of itself does not bring business value. If you don't know why you're collecting the data, then you probably won't use it once you have it.

For a wireless product, the cost is felt in the bytes transferred, which makes for an expensive solution, but happy Telcos. Even for wired installations, data transfer isn't free. Imagine a supermarket with 20 checkout lanes and only a 56K DSL line, where the connection is shared with the credit card terminals; it is important to upload only the necessary data during business hours. For the end user, too much data leads to information clutter. Too much information increases the time necessary to locate and access critical data.

All enterprise applications have some associated "Infrastructure Tax", and the Axeda Platform is no exception. This is the cost of maintaining the existing infrastructure, as well as increasing capacity through the addition of new systems infrastructure. This includes:

The cost of the physical hardware
The additional software licenses
The cost of the network bandwidth
The cost of IT staff to maintain the servers
The cost of attached storage

Optimizing your data profile will maximize the performance of your existing infrastructure. Scaling decisions should be based on load, because 50,000 well-defined Assets can yield less data than 2,000 extremely "chatty" Assets.

Types of Data

To develop your data profile, first identify the types of data you're collecting:

"Actionable Data": This is used to drive business logic. This is your most crucial data, and tends to be "real-time".
"Informational Data": This changes at a very low rate, and represents properties of your assets as opposed to status.
"Historical Data": Sometimes you need to step back to appreciate a work of art. Historical data is best viewed with a wide lens to identify trends.
"Payload Data": Data which is being packaged and shipped to an external system.

Actionable Data

Actionable Data controls the flow of business logic and has three common attributes:

It tends to represent the status of the Asset
It is typically the highest-priority data you will receive
It usually has more frequent occurrences than other data

Informational Data

Informational Data is typically system or software data, of which some examples include:

OS version
Firmware information
Geographical region

Historical Data

Historical Data represents the results of long-term operations and is typically used for operational review of trends.

May be sourced from Data Items, File uploads, or Web Services operations
May feed the Axeda integrated business intelligence solution, or internal customer BI systems

Payload Data

Payload data travels through the Cloud to your system of record. In this case, the Axeda Platform is a key actor in your system, but its presence is not directly visible to the end user.

Data Types Key Points

Understanding the nature of your data helps to inform your data collection strategy. The four primary attributes are the following:

Frequency
Quantity
Storage
Format

Knowing what to store, what to process, and what to pass through for storage is the first key to optimizing your data profile. The "everything first" approach is an easy choice, but a tough one from which to realize value.
A "bottom up" or use-case driven approach will add data incrementally, and will reveal the subset of data you actually need to be collecting.Knowing your target audience for the data is the next step. A best practice to better understand who is trying to innovate and how they are looking to do it begins with questions such as the following: Is marketing looking for trends to highlight? Is R&D looking for areas to improve the product? Is the Service team looking to pro-actively troubleshoot assets in the field? Is Sales looking to sell more consumables? Is Finance trying to resolve a billing dispute? Answers to these questions will help determine which data contributes to solving real business problems. Most Service technicians only access a handful of pieces of information about an Asset while troubleshooting, regardless of how many they have access to. It’s important to close the information loop when finding out which data is actually being used.In addition to understanding the correct target audience and their goals, milestone events are also opportunities to revisit your strategy, specifically times like: New Model rollouts Migration to the Cloud New program launch Once your data profile has been established, the next phase of optimization is to plan the way the data will be received. Strategies Data Item vs. File Upload A decision should be made as to the best way to transfer data to the Axeda Platform, whether that is data items, events, alarms or file transfers. Here's a Best Practice approach that's fairly universal: Choose a Data Item if: (a)You are sending Actionable Data, or (b)You are sending discreet Informational Data Choose a File Upload if: (a)You are sending bulk Data which does not need to trigger an immediate response, or (b)You intend to forward the Data to an external system Agent-Side Business Logic Keep in mind that the Axeda Platform allows business logic to be implemented before transmitting any data. The Agent can be configured to determine when Data needs to be sent via numerous mechanisms: Scripts provide the ability to trigger on-demand uploads of data, either via a human UI interaction or an automated process The "Black Box" configuration allows for a rolling sample window, and will only upload the data in the window based on a configured condition Agent Rules Agent Rules allow the Agent to monitor internal data values to decide when to send data to the Cloud. Data can be continuously sampled and compared against configured thresholds to determine when a value worthy of transmission is encountered. This provides a very powerful mechanism to filter outbound data. The example below shows a graphical representation of how an Agent might monitor a data flow and transmit only when it reaches an Absolute-high value of 1200: Axeda provides a versatile platform for managing the flow of data through your Asset ecosystem. It helps to cultivate an awareness not only of what the data set is but what it represents and to whom it has value. While data is cheap, the hidden costs of data transmission make it worthwhile to do your "data profiling homework" or risk paying a high price in the longer term.