
IoT & Connectivity Tips

Original Post Date: June 6, 2016
Description: This is a video tutorial on creating a Stream, adding a Data Shape with properties, and writing values to the Stream.
View full tip
Video Author: Mohammed Amine Chehaibi
Original Post Date: June 29, 2017
Applicable Releases: ThingWorx Analytics 8.0 to 8.1
Description: In this video you will learn how to bind a property of an existing entity to the KEPServerEX Data Feed, and how to create an Alert on that property and monitor its behavior.
View full tip
Video Author: Christophe Morfin
Original Post Date: June 9, 2017
Applicable Releases: ThingWorx Analytics 8.0
Description: In this video we go through the steps to install ThingWorx Analytics Server 8.0.
View full tip
Use Case: You've published a model from Analytics Builder to Analytics Manager, and then used the service CreateOrUpdateThingTemplateForModel on the resource TW.AnalysisServices.ModelManagementServicesAPI. A Thing created from the resulting template will have an infotable property called "data" which needs to be populated in order to trigger an Analysis Event & Job. For example, you might have been following the online documentation for Analytics Manager > Working with Thing Predictor > Demo: Using Thing Predictor, link here.

This script makes it easy to create a line of test data in the "data" field on your Thing to trigger the analysis event & job. The fields causalTechnique, goalName and importantFieldCount, which are needed for the analysis event & job, are also set programmatically. The script might also be useful as a general example of how to write to an infotable property on a Thing.

The JavaScript code is shown here and also attached as a text file to this post:

```javascript
me.causalTechnique = 'FULL_RANGE';
me.goalName = 'predict_Compressor_failure';
me.importantFieldCount = 3;

// 1 - CREATE AN INFOTABLE FROM THE MODEL'S INPUT DATA SHAPE
// ThingPredictor.test_3f1a6a31-e388-4232-9e47-284572658a4a.InputParamsdataDataShape entry object
var params = {
    infoTableName: "InfoTable",
    dataShapeName: "ThingPredictor.test-integer_afebaef3-b2cf-4347-824c-a39c11ddbb4a.InputParamsdataDataShape"
};
// CreateInfoTableFromDataShape(infoTableName:STRING("InfoTable"), dataShapeName:STRING):INFOTABLE(ThingPredictor.test_3f1a6a31-e388-4232-9e47-284572658a4a.InputParamsdataDataShape)
var myInfoTable = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);

// 2 - CREATE AN INFOTABLE ROW USING AN OBJECT
var newEntry = new Object();
newEntry._Pressure = 10.5;    // NUMBER
newEntry._Temperature = 45.1; // NUMBER
newEntry._VibrationX = 81;    // NUMBER
newEntry._VibrationY = 65;    // NUMBER
//newEntry.key = 4;           // STRING - isPrimaryKey = true

// 3 - ADD THE ROW TO THE INFOTABLE
myInfoTable.AddRow(newEntry);

// 4 - PERSIST THE INFOTABLE TO THE THING PROPERTY 'data'
me.data = myInfoTable;
```
View full tip
I have implemented an Edge Nano Server that offers the following advantages:
- Easy to set up
- Not limited to the HTTP protocol; for example, an edge device can be implemented that connects to devices via Bluetooth

Code can be found here: GitHub - cschellberg/EdgeGateway. The repository contains the EdgeNanoServer, Docker installation scripts (for installing the ThingWorx Platform), and a test client written in Python.

Don Schellberg, Consultant
View full tip
In this post we will take a look at using an existing JavaScript library. The library we will use is agGrid, which provides a very extensive grid UI component. The objectives are:
- To see how to add the library
- To use an external source to populate the grid
- To provide a click action when a user selects a row (Part 2)

(See attachments: import AAGridExtensionExample as an extension and import PTC-ExternalSources-Entities as a file.)

Previous posts for reference: Widget Extensions Introduction, Widget Extensions Click Event, Widget Extensions Date Picker, Widget Extensions Google Bounce.

We will not worry about CSS; I'm working on a post for that using the ThingWorx 8.2 CustomClass (CSS) feature. I will also assume you have worked through the Widget Extensions Introduction. The image below shows the resulting UI after grid population, with a row clicked by the user, and the following provides the high-level areas of interest.

Steps:
1. Create a working folder, for example AGGrid; as in previous posts, set up your ui folder and metadata file.
2. Think of a name for the extension. We will use aggrid, and add a folder with this name under the ui folder.
3. Create the required files as per previous posts. Note that the jslibrary folder is where agGrid resides. Below shows the jslibrary folder; the main file we care about is ag-grid.js (we could use the min file, but using the full version initially makes debugging easier).
4. Set up the metadata file.
5. Understand some of the agGrid requirements. To create a grid we need to use the agGrid function which comes from ag-grid.js:

```javascript
myGrid = new agGrid.Grid(gridContainer, gridOptions);
```

The gridContainer is where the grid will be placed in the DOM, and gridOptions is a definition object that holds all the settings for the grid before it is created. Using an init function inside runtime.js (see previous posts for runtime) we can get the gridContainer with a snippet like this:

```javascript
document.getElementById(gridElementId);
```

The gridOptions takes the form of a JSON object; note that there are many options, so please refer to the agGrid documentation for more information. Our focus will be columnDefs and rowData to start with. These two define the layout and the contents of the grid. The columnDefs takes the form of an array of JSON objects, basically headerName and field. The image below shows a hard-coded approach I took initially. To make this more generic I created a ThingWorx Data Shape and used a service script, GetColumnsDefs, to populate and output the columnDefs. The service script example uses a PTC-ExternalSourcesHelper Thing; below is the GetColumnsDefs service. The next point of focus will be the gridOptions and the rowData (a JSON array of data) based on the same definition as the columnDefs. Both the columnDefs and the GridDataAsJSON (which turns into rowData) shown below are set up in the ide.js file (see previous posts for ide). Returning to the services, we need to get some grid data from an external source. For that we will create a GetRSSFeed service and use it inside GetRSSAsJSON. The GetRSSFeed looks like this and uses the url input More Top Stories - Google News. The GetRSSAsJSON looks like this. Looking back at the code, maybe I should have changed it to result.rows when returning the GridData, but for now it works. The last thing is getting the data from the services, for which we use updateProperty (see previous posts for ide). Here we check for the property, set it, and pass the RawData to the drawaggrid function. The drawaggrid function takes in the rowData and uses the columnDefs to understand the format. The last thing the drawaggrid function does is create the actual grid. (Finally!) See the sketch after this list for how these pieces fit together.
6. Last but not least, wire it all up in a Mashup! The first step is to zip up the extension and import it (see previous posts). The next is to create a Mashup, add the PTC-ExternalSourcesHelper entity, and wire up the GetColumnsDefs and the GetRSSAsJSON to the agGrid widget. Then preview, and hopefully it all works. I will upload the extension and entities shortly. See you in Part 2, not yet created! (See attachments: import AAGridExtensionExample as an extension and import PTC-ExternalSources-Entities as a file.)
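As a rough illustration of how the runtime pieces fit together, here is a minimal sketch of the draw step described above. The function and variable names (gridElementId, drawaggrid, the column fields) are assumptions based on the post, not the exact extension code:

```javascript
// Minimal sketch (assumed names) of creating the grid in runtime.js.
// columnDefs comes from the GetColumnsDefs service, rowData from GetRSSAsJSON.
function drawaggrid(gridElementId, columnDefs, rowData) {
    // where the grid will be placed in the DOM
    var gridContainer = document.getElementById(gridElementId);

    // definition object holding all settings for the grid before creation
    var gridOptions = {
        columnDefs: columnDefs,  // e.g. [{ headerName: "Title", field: "title" }]
        rowData: rowData         // JSON array matching the columnDefs fields
    };

    // create the actual grid inside the container
    var myGrid = new agGrid.Grid(gridContainer, gridOptions);
    return myGrid;
}
```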
View full tip
Previous posts: Widget Extensions Introduction, Widget Extensions Click Event, Widget Extensions Date Picker. I was asked whether it is possible to make the Google Maps indicator bounce if a property is set to true. The answer is yes. Open up the Google Maps extension, locate googlemap.ide.js, and make the above changes. Then open up googlemap.runtime.js and search for if (showMarkers) {; after the if, add the following below. Make sure you have a property needsAttension on a returned Thing. If the value is true, the marker will bounce! After viewing the Mashup, there are 4 locations but one needs attention.
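The post's screenshots are not reproduced here, but the core of the runtime change is the standard Google Maps marker animation API. A minimal sketch, assuming the row's property is called needsAttension as in the post and that marker is the google.maps.Marker being created inside the if (showMarkers) { block:

```javascript
// Inside the showMarkers loop of googlemap.runtime.js (sketch, assumed names):
// row is the infotable row backing this marker.
if (row.needsAttension === true) {
    // google.maps.Animation.BOUNCE makes the marker bounce until cleared
    marker.setAnimation(google.maps.Animation.BOUNCE);
} else {
    // stop any previous animation
    marker.setAnimation(null);
}
```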
View full tip
Mapping previous versions of the ThingWorx Analytics API to ThingWorx Analytics 8.1 Services

Since ThingWorx Analytics 8.1, the classic server monolith has been replaced by a series of independent microservices. This new structure groups services around specific elements of functionality (data, training, results). Thus the previous API commands used to access ThingWorx Analytics functions have been replaced by ThingWorx Services. Those Services exist within specific Microservice Things accessible in the ThingWorx Platform 8.1.

The table below maps the most common API commands from version 8.0 and earlier to the related 8.1 services. It is not an exhaustive listing of either API commands or Services. The API commands below are samples which might require further information, such as headers and a body, when used; they are given here for reference purposes.

| # | Previous API Command Purpose | Sample Syntax | TWA 8.1 Service | Analytics Thing Related to Service | Service Description |
|---|---|---|---|---|---|
| 1 | Version Info | GET: http://<IP Address>:8080/1.0/about/versioninfo | VersionInfo | Available in each Microservice Thing inheriting from Analytics Server | Returns the internal version number for a specific microservice. The first two digits = ThingWorx Core version; the next three digits = version of the microservice. |
| 2 | Registering new Dataset | POST: http://<IP Address>:8080/1.0/datasets/ | CreateDataset | Data Microservice | Creates the dataset, uploads the data along with its metadata, and optimizes it automatically. |
| 3 | Checking Dataset Status | GET: http://<IP Address>:8080/1.0/datasets/<DataSet Name> | ListCreatedDatasets | Data Microservice | The old functionality is replaced by a service that lists all created datasets. |
| 4 | Creating Metadata | POST: http://<IP Address>:8080/1.0/datasets/<DataSet Name>/configuration | CreateDataset | Data Microservice | (See row 2.) |
| 5 | Checking Dataset Configuration | GET: http://<IP Address>:8080/1.0/datasets/<DataSet Name>/configuration | GetDatasetSchema | Data Microservice | Retrieves the metadata from a dataset. |
| 6 | Loading Dataset CSV | POST: http://<IP Address>:8080/1.0/datasets/<DataSet Name>/data | CreateDataset | Data Microservice | (See row 2.) |
| 7 | Checking Job Status | GET: http://<IP Address>:8080/1.0/status/<Job ID> | GetJobStatus | Available in all created Microservices inheriting from AnalyticsJob Server | Retrieves the status of a specific job. |
| 8 | Signals Job | POST: http://<IP Address>:8080/1.0/datasets/<DataSet Name>/signals | CreateJob | Signals Microservice | Creates a job to identify signals. |
| 9 | Signal Results Job | GET: http://<IP Address>:8080/1.0/datasets/<DataSet Name>/signals/<Job ID>/results | RetrieveResult | Signals Microservice | Retrieves the result of a Signals job. |
| 10 | Profile Job | POST: http://<IP Address>:8080/1.0/datasets/<DataSet Name>/profiles | CreateJob | Profiling Microservice | Creates a job to generate profiles. |
| 11 | Profile Result Job | GET: http://<IP Address>:8080/1.0/datasets/<DataSet Name>/profiles/<Job ID>/results | RetrieveResult | Profiling Microservice | Retrieves the results of a profiles job. |
| 12 | Train Model Job | POST: http://<IP Address>:8080/1.0/datasets/<DataSet Name>/prediction | CreateJob | Training Microservice | Creates a prediction model job. |
| 13 | Train Model Result Job | GET: http://<IP Address>:8080/1.0/datasets/<DataSet Name>/prediction/<Job ID>/results | RetrieveModel | Training Microservice | Retrieves only the PMML model. If a holdout for validation was specified in CreateJob, a validation job is auto-created and run. |
| 14 | Scoring Job | POST: http://<IP Address>:8080/1.0/datasets/<DataSet Name>/predictive_scores | BatchScore | Prediction Microservice | Submits a predictive scoring job. |
| 15 | Scoring Job Result | GET: http://<IP Address>:8080/1.0/datasets/<DataSet Name>/predictive_scores/<Job ID>/results | RetrieveResult | Prediction Microservice | Retrieves results from predictive scoring jobs. |
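For script migration, the 8.1 services in the table are invoked like any other ThingWorx service on the corresponding microservice Thing. The sketch below shows the general pattern only; the Thing name and service parameters are illustrative assumptions, so inspect the actual service definitions on your microservice Things in Composer:

```javascript
// Pattern sketch - Thing name and parameters are illustrative only.
var signals = Things["MySignalsMicroservice"];

// Replaces POST .../datasets/<DataSet Name>/signals
var jobId = signals.CreateJob({
    datasetUri: "dataset/myDataset"   // hypothetical parameter
});

// Replaces GET .../status/<Job ID>; GetJobStatus is available on Things
// inheriting from the AnalyticsJob Server
var status = signals.GetJobStatus({ jobId: jobId });

// Replaces GET .../signals/<Job ID>/results once the job has completed
var result = signals.RetrieveResult({ jobId: jobId });
```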
View full tip
Introduction to the ThingWorx Composer and a demonstration of how you go about building out the design plan.   For full-sized viewing, click on the YouTube link in the player controls.   Visit the Online Success Guide to access our Expert Session videos at any time as well as additional information about ThingWorx training and services.
View full tip
Aron Semle, Manufacturing Apps Solution Manager, discusses and demonstrates new capabilities in the ThingWorx Manufacturing Apps 8.1 release.
View full tip
In the last while I've seen a few things which got me thinking about how value is created or unlocked from connected data. Multiple components are required to create, send, store and manage the data created by edge devices. Doing these things enables value to be unlocked, but what does it actually take to unlock it?

It was this article which first got me interested in considering this question. In particular, it was a section near the bottom of the article where the author describes a number of creative business use cases for car manufacturers which could be enabled by connected data. If this author could come up with several creative and potentially valuable use case examples for one industry, I started to wonder what other sorts of use cases could exist in other industries. Could there be a series of use cases which, with a little variation, could be applied across different industries?

The second article which further sparked my interest in where the value originates is this one, on using a cryptocurrency with IoT. While the idea of using a blockchain-like technology with IoT is intriguing, it was the second image in the article (below) which resonated with me on value. This image is a graphical representation of the connection between the key components of a connected system, and it makes it clear that each component has a critical role to play: remove any one of the parts and the system doesn't function. The image also makes it clear that it's the "Analyze" phase which drives the action to do something, and taking an action is the system's reason for existing.

Which brings me to the third and final article, describing Industry 4.0. Like the other two articles, it wasn't the main point of the article I found most interesting; rather it was the image below, and in particular the sidebar 'Value Creation through', which brought me back to the question of where value comes from. The idea that in a manufacturing setting value can be created through product or process innovations as well as through new business models is intriguing. I think a fourth mode missing from this list is one where network effects from accumulating more and more proprietary data create a compounding effect, like with Facebook or LinkedIn. If there are at least four modes of value creation, maybe there are others?

While these articles caused me to ask some questions, none of them really answered where the value is unlocked. To answer the question I decided to restate it as "how is value unlocked from data", making the assumption that the value is derived from the data. This question is a little easier to address. The best visual representation of the answer I've seen is the data value road map (below) from the book Creating a Data-Driven Organization, which was released a couple of years ago. While I think the author is probably missing at least two boxes above 'optimization' ("new business models" and "data-driven network effects"), the graphic does a good job communicating that as the value created from data increases, the complexity of the analytic task also increases, suggesting the value is unlocked by the analytics. For me, the value from a system of connected devices is unlocked in the "analysis" phase, as seen in the first image. But performing the "analysis" requires two things.

First, asking the right high-value questions of the data (product managers / beginning with the end in mind / use cases), and second, using the right set of technologies to address those questions, which in many instances means artificial intelligence of some sort. Interestingly, although artificial intelligence is required for many high-value use cases, both parts of the analysis require distinctly human skills (choosing the right use cases and controlling the technology) to create externalized intelligence and generate value.

Creating a Data-Driven Organization: Practical Advice from the Trenches, Carl Anderson, eBook - Amazon.com
View full tip
We are pleased to announce that the Expert Sessions video series is now available in the ThingWorx Community. We are kicking off this availability with a new space dedicated to these helpful technical videos. In the first round of videos, we are highlighting two ThingWorx Foundation videos that are designed to provide foundational knowledge to get you up and running on the ThingWorx IoT platform. New Expert Sessions Available Now ThingWorx Foundation - Installation is an introduction to installing the ThingWorx platform. The video includes information on the environment, prerequisites, and configuration steps when installing ThingWorx, and includes walkthroughs of installing with H2 and PostgreSQL databases, an introduction and demonstration of the Linux installation script, solutions to common installation problems and more. ThingWorx Foundation - Scalability talks about platform sizing with dependency on the type of environment and correlated scalability options. The video educates you about federation and high availability as well as provides visual diagrams to understand the architecture of different ThingWorx solutions. What is an Expert Session? Expert Sessions are focused, technical webcasts (both recorded and live) where PTC subject matter experts share knowledge and best practices on topics related to the design, development, deployment and operation of PTC software. Expert Sessions are designed using five categories: Get Started, Design, Develop, Deploy, and Operate. Additional Expert Sessions will be highlighted here in the ThingWorx Community every few weeks. Visit the Online Success Guide to access our Expert Session videos at any time as well as additional information about ThingWorx training and services.
View full tip
Parquet Data Format used in ThingWorx Analytics

Starting with ThingWorx Analytics 8.1, data storage no longer requires the installation of a PostgreSQL database. Instead, uploaded CSV data is converted to the optimized Apache Parquet format and stored directly in the file system. This blog explains some of the features of Apache Parquet that justify this transition in ThingWorx Analytics data storage.

What is Apache Parquet?
Apache Parquet is a column-oriented data store of the Apache Hadoop ecosystem. It is compatible with most of the data processing frameworks in the Hadoop environment. It provides efficient data compression and encoding schemes with enhanced performance to handle complex data in bulk. Below is an illustration of the columnar storage model.

Apache Parquet Features and Benefits:
Apache Parquet is implemented using the record shredding and assembly algorithm, taking into account the complex data structures that can be used to store the data. Apache Parquet stores data such that the values in each column are physically stored in contiguous memory locations. Due to this columnar storage, Apache Parquet provides the following benefits:
- Column-wise compression is efficient and saves storage space
- Compression techniques specific to a type can be applied, as the column values tend to be of the same type
- Queries that fetch specific column values need not read the entire row data, thus improving performance
- Different encoding techniques can be applied to different columns

Some advantages of using Parquet for ThingWorx Analytics:
Apart from the above benefits, which amount to higher efficiency and increased performance, below are some advantages that apply specifically to ThingWorx Analytics:
- The change from using a database to using Parquet removes the limitations on the number of data columns the system can handle.
- It allows for streamlining the dataset creation process. Since the data is converted to a Parquet format, there is no need to separately optimize the dataset.
- Even when new data is appended to an existing dataset, a new partition is added and re-optimization is optional, not required. Data can be appended easily, so there is no longer a need to re-load the full dataset when new data values are added.

The illustration below shows the transition from the row-based data storage model to the columnar storage of Parquet.
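To make the row vs. column distinction concrete, here is a small illustration (plain JavaScript, not ThingWorx Analytics code) of the same three records in both layouts; in the columnar layout a query that needs one field touches a single contiguous array, which is also why per-column, type-specific compression works so well:

```javascript
// Illustration only: the same records in row-oriented vs columnar form.

// Row-oriented: each record's fields are stored together.
var rows = [
    { temp: 21.5, pressure: 101.2, status: "OK" },
    { temp: 22.1, pressure: 100.9, status: "OK" },
    { temp: 35.7, pressure:  99.4, status: "ALERT" }
];

// Columnar (Parquet-style): each column's values are stored together,
// so all values in a column chunk share one type and compress well.
var columns = {
    temp:     [21.5, 22.1, 35.7],
    pressure: [101.2, 100.9, 99.4],
    status:   ["OK", "OK", "ALERT"]
};

// Fetching one column does not read any other field's data.
var temps = columns.temp;
```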
View full tip
First we need to understand the terms below.

Quantitative variable: A quantitative variable is naturally measured as a number for which meaningful arithmetic operations make sense. Examples: height, age, crop yield, GPA, salary, temperature, area, air pollution index (measured in parts per million), etc.

Categorical variable: Any variable that is not quantitative is categorical. Categorical variables take a value that is one of several possible categories. As naturally measured, categorical variables have no numerical meaning. Examples: hair color, gender, field of study, college attended, political affiliation, status of disease infection.

Ordinal variable: An ordinal variable is a categorical variable for which the possible values are ordered. Ordinal variables can be considered "in between" categorical and quantitative variables. Example: educational level might be categorized as
1: Elementary school education
2: High school graduate
3: Some college
4: College graduate
5: Graduate degree

- In this example (and for many ordinal variables), the quantitative differences between the categories are uneven, even though the differences between the labels are the same (e.g., the difference between 1 and 2 is four years, whereas the difference between 2 and 3 could be anything from part of a year to several years).
- Thus it does not make sense to take a mean of the values.
- Common mistake: treating ordinal variables like quantitative variables without thinking about whether this is appropriate in the particular situation at hand.

Ordinal regression: In statistics, ordinal regression (also called "ordinal classification") is a type of regression analysis used for predicting an ordinal variable. The Ordinal Regression procedure allows you to build models, generate predictions, and evaluate the importance of various predictor variables in cases where the dependent (target) variable is ordinal in nature.

Ordinal dependents and linear regression: When you are trying to predict ordinal responses, the usual linear regression models don't work very well. Those methods can work only by assuming that the outcome (dependent) variable is measured on an interval scale. Because this is not true for ordinal outcome variables, the simplifying assumptions on which linear regression relies are not satisfied, and thus the regression model may not accurately reflect the relationships in the data. In particular, linear regression is sensitive to the way you define categories of the target variable. With an ordinal variable, the important thing is the ordering of categories. So, if you collapse two adjacent categories into one larger category, you are making only a small change, and models built using the old and new categorizations should be very similar. Unfortunately, because linear regression is sensitive to the categorization used, a model built before merging categories could be quite different from one built after.

Below are some examples of ordered logistic regression:

Example 1: A marketing research firm wants to investigate what factors influence the size of soda (small, medium, large or extra large) that people order at a fast-food chain. These factors may include what type of sandwich is ordered (burger or chicken), whether or not fries are also ordered, and the age of the consumer. While the outcome variable, size of soda, is obviously ordered, the difference between the various sizes is not consistent: the difference between small and medium is 10 ounces, between medium and large 8, and between large and extra large 12.

Example 2: A researcher is interested in what factors influence medaling in Olympic swimming. Relevant predictors include training hours, diet, age, and the popularity of swimming in the athlete's home country. The researcher believes that the distance between gold and silver is larger than the distance between silver and bronze.

Example 3: A study looks at factors that influence the decision of whether to apply to graduate school. College juniors are asked if they are unlikely, somewhat likely, or very likely to apply to graduate school; hence, our outcome variable has three categories. Data on parental educational status, whether the undergraduate institution is public or private, and current GPA are also collected. The researchers have reason to believe that the "distances" between these three points are not equal. For example, the "distance" between "unlikely" and "somewhat likely" may be shorter than the distance between "somewhat likely" and "very likely".

How to use and get results with Ordinal Regression: click this link for the PDF.

PDF source: http://www.norusis.com
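As a reference point (a standard formulation, not from the original post), the most common ordinal regression model is the proportional odds / cumulative logit model:

$$\log\!\left(\frac{P(Y \le j)}{P(Y > j)}\right) = \theta_j - \boldsymbol{\beta}^{\mathsf{T}}\mathbf{x}, \qquad j = 1, \dots, J-1,$$

where the thresholds $\theta_1 < \theta_2 < \dots < \theta_{J-1}$ encode the ordering of the $J$ categories and a single coefficient vector $\boldsymbol{\beta}$ is shared across all category boundaries, so only the order of the categories matters, not the (possibly uneven) distances between them.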
View full tip
Updates:

App Key defaults:
- Now stored in a secure keystore
- Newly created app keys are stored there automatically
- On upgrade, existing app keys are migrated to the secure keystore

App key default expiration time changed to 1 day:
- Changed from 100 years
- UI date picker
- If a date is not picked, now defaults to 1 day

Best Practice:
- Carefully consider expiration
- Set it to the desired value at time of creation
- Scripts should carefully choose the time
- Knowledge base article in the works

Edge SSL updates

C SDK TLS/SSL - C-SDK support for OpenSSL:
- Version 1.0.2, which supports TLS 1.2
- Tomcat 8 compatible ciphers
- EMS will follow soon

BYO SSL:
- Abstraction layer & documentation
- Path to building any SSL for supported environments
- Porting: a different version of OpenSSL is straightforward; other SSL libraries require some expertise
- Enables other SSL providers; the burden to validate is on the SDK developer
- Possibilities: axTLS, WolfSSL, Mocana

EMS improvements:
- SafeInt library: a C++ library that helps prevent integer overflows
- Better certificate loading support: EMS and the LUA Script Resource can authenticate bidirectionally
- EMS's HTTP server now defaults to requiring authentication for the LSR

Overall theme: secure by default.

Q: If an app key expires in 1 day, does a new one get created automatically?
A: No, one is not created automatically; change the expiration date when creating the app key. When it expires, you have to create a new one.
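Following the best practice above from a script means computing an explicit, short expiration at creation time rather than relying on defaults. A minimal sketch of that idea is below; the CreateApplicationKey call and its parameter names are assumptions for illustration, so check the EntityServices resource in your ThingWorx version for the exact service signature:

```javascript
// Sketch only: create an app key that expires in 1 day.
// Service and parameter names below are illustrative - verify against
// the EntityServices resource in Composer before using.
var expiration = new Date();
expiration.setDate(expiration.getDate() + 1); // expire 1 day from now

Resources["EntityServices"].CreateApplicationKey({
    name: "MyEdgeKey",          // hypothetical key name
    user: "EdgeUser",           // hypothetical user the key authenticates as
    expirationDate: expiration  // explicit, deliberately short expiration
});
```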
View full tip
If you've installed and used version 8.0.0 of the Manufacturing Apps and would like to extend your usage beyond the trial expiration time, we invite you to use the new version of the Apps (8.0.1), which now includes Production Advisor.

You will need to uninstall and reinstall the application to begin this process. You will be able to preserve your configuration data by exporting it, but you will only be able to import it in the commercial version of the application. To export your configuration, navigate to the hamburger menu at the top right of the screen and select "Export Configuration and Data".

To move to the latest release of the free edition of the application (8.0.1):
1. Uninstall the applications. Navigate to the Start Menu > ThingWorx Manufacturing Apps > Uninstall.
2. Download the latest version: https://www.ptc.com/en/thingworx/manufacturing-apps/Dashboard/Download-Apps
3. Run the installer.

If you have any questions or concerns, feel free to post a question in the Manufacturing Apps community, or in response to this post; we will be happy to assist you!

- The PTC Manufacturing Team
View full tip
About

This is part of a ThingBerry related blog post series. ThingBerry is ThingWorx installed on a Raspberry Pi, which can be used for portable demonstrations without the need to utilize e.g. customer networks. Instead, the ThingBerry provides its own custom WIFI hotspot and allows Things to connect and send / receive demo data on a small scale.

In this particular blog post we'll discuss how to connect an ESP8266 module to the ThingBerry WIFI hotspot and send data from a DHT-11 sensor via the MQTT protocol. As the ThingBerry is a highly unsupported environment for ThingWorx, please see this blog post for all related warnings.

Install MQTT broker on the ThingBerry

To install mosquitto as a MQTT broker, log in to the ThingBerry and run

sudo apt-get install mosquitto

This will provide a basic broker installation, which is good enough for this example. MQTT clients (including ThingWorx) will connect to this broker to exchange messages. There will be no added security like encrypted traffic shown in this example; it is however good practice to secure MQTT broker / client connections. While the ESP8266 module is publishing information, ThingWorx will subscribe to the corresponding topics to update its internal property values with what is sent by the ESP8266 module. For more information on MQTT, how to configure it for ThingWorx, or more security relevant information, also see

https://community.thingworx.com/message/5063#5063
https://community.thingworx.com/community/developers/blog/2016/08/08/securing-mqtt-connection-to-thingworx-platform?sr=tcontent

Configure the ESP8266

There are many instructions on the web already on how to initially set up the ESP8266 and use it with the Arduino IDE. I'll therefore just refer to Google, which covers the topic more extensively than I ever could. All coding in this example is done in the Arduino IDE and is pushed to the ESP8266 (NodeMCU) via USB. For this you might need to install a CH340g USB driver for the NodeMCU. In the Arduino IDE under Tools, I have set my environment to

Board: NodeMCU 1.0 (ESP-12E Module)
CPU Frequency: 80 MHz
Flash Size: 4M (3M SPIFFS)
Upload Speed: 115200
Port: COM3

Under Sketch > Include Library > Manage Libraries, add / install the following libraries:

DHT sensor library by Adafruit
Adafruit Unified Sensor by Adafruit
PubSubClient by Nick O'Leary

These bring the libraries necessary to read data from the DHT-11 sensor and to configure the ESP8266 as a MQTT client.

Wiring the DHT-11 sensor

The following image shows the PINs on the ESP8266. I'm using a DHT-11 sensor with cables included and already fixed to a board with 3 PINs. In case you're using a different version, there might be additional components and wiring required, like a resistor etc. Google might help here as well. Ensure that neither board nor sensor are plugged in, and the ESP8266 is powered off. To hook the sensor up to the ESP8266, join

( - ) to GND
( + ) to 3.3V
(out) to D3

After all the connections are made, connect the ESP8266 via USB to a computer / laptop with the Arduino IDE configured.

Coding

In the Arduino IDE use the following code - adjust the WIFI settings and the MQTT broker configuration. Ensure to rename the ESP_xx name / topic to something more meaningful, e.g. a specific device name (or just leave it as is if in doubt). Use the ssid and wpa_passphrase from the hostapd.conf used to configure the ThingBerry as a WIFI hotspot.
Copy and paste the code below into the Arduino IDE, verify it and upload it to the ESP8266. While searching for a WIFI connection, the device's blue LED will blink. A successful connection to the broker and publishing the values will result in a static blue LED. In case the LED is off, the connection to the broker is lost or messages cannot be published.

For troubleshooting, use the Serial Monitor function (at 115200 baud) in the Arduino IDE. In case sensor data cannot be read but the wiring is correct and the code is addressing the correct PIN, verify the sensor is indeed working. It took me a long time to figure out that the first sensor I used was a defective device.

The current configuration sends updates every 10 seconds - longer intervals might make more sense, but can trigger a timeout for the MQTT broker. In this case the program will re-connect automatically and log corresponding messages in the Serial Monitor. This might seem like an error, but is indeed intended behavior by the code and the MQTT broker.

Configure MQTT Thing in ThingWorx

Create a new Thing in ThingWorx based on the MQTT Template. Add two properties:

temperature
humidity

Set both to persistent and logged, with Data Change Type set to ALWAYS. Also configure a Value Stream to log a history of values. In the configuration, add two more subscriptions. Activate the "subscribe" checkbox and map name (local property) to topic (MQTT topic), e.g.

name = temperature; topic = ESP_xx/temp
name = humidity; topic = ESP_xx/hum

Ensure the correct servernames, ports etc. are configured (an empty servername will use the localhost). Save the configuration. Property values should now be updated from the MQTT broker, depending on what the device is sending.

Code

```cpp
#include "DHT.h"
#include "PubSubClient.h"
#include "ESP8266WiFi.h"

/*
 * Configure parameters for sensor and network / MQTT connections
 */

// setup DHT 11 pin and sensor
#define DHTPin D3
#define DHTTYPE DHT11

// setup WiFi credentials
#define WLAN_SSID "mySSID"
#define WLAN_PASS "WIFIpassword"

// setup MQTT
#define MQTTBROKER "mqttbrokerhostname"
#define MQTTPORT 1883

// setup built-in blue LED
#define LED 2

/*
 * ============================================================
 * DO NOT CHANGE ANYTHING BELOW
 * (unless you know what you're doing)
 */

// initiate DHT
DHT dht(DHTPin, DHTTYPE);

// initiate MQTT client
WiFiClient wifiClient;
PubSubClient client(MQTTBROKER, MQTTPORT, wifiClient);

/*
 * setup
 */
void setup() {
  // switch off internal LED
  pinMode(LED, OUTPUT);
  digitalWrite(LED, HIGH);

  // start serial monitor
  Serial.begin(115200);

  // start DHT
  dht.begin();

  // start WiFi
  WiFi.begin(WLAN_SSID, WLAN_PASS);
}

/*
 * the loop
 */
void loop() {
  // while not connected to WiFi, print "."
  // after connection exit the loop
  // blink LED while having no WiFi signal
  boolean wifiReconnect = false;
  while (WiFi.status() != WL_CONNECTED) {
    digitalWrite(LED, LOW);
    delay(200);
    Serial.print(".");
    digitalWrite(LED, HIGH);
    delay(300);
    wifiReconnect = true;
  }

  // if WiFi has reconnected, print new connection information and turn on LED
  if (wifiReconnect == true) {
    // print connection information and local IP address, mac address
    Serial.println();
    Serial.println("WiFi connected");
    Serial.println(WiFi.localIP());
    Serial.println(WiFi.macAddress());
    Serial.println();
    // turn on built-in LED to indicate successful WiFi connection
    digitalWrite(LED, LOW);
  }

  // if MQTT client is not connected, connect again
  // turn on built-in LED to indicate a successful connection
  if (!client.connected()) {
    Serial.println("Disconnected from MQTT server... trying to connect");
    if (client.connect("ESP_xx")) {
      Serial.println("Connected to MQTT server");
      Serial.println("Topic = ESP_xx");
      digitalWrite(LED, LOW);
    } else {
      Serial.println("MQTT connection failed");
      digitalWrite(LED, HIGH);
    }
    Serial.println();
  }

  // read temperature and humidity from sensor
  float t = dht.readTemperature();
  float h = dht.readHumidity();

  if (isnan(t) || isnan(h)) {
    // if temperature or humidity is not a number, print error
    Serial.println("Failed retrieving data from DHT sensor");
  } else {
    // print temperature and humidity
    Serial.print(t);
    Serial.print("° - ");
    Serial.print(h);
    Serial.print("%");
    Serial.println();

    // only send values to MQTT broker, if client is connected
    if (client.connected()) {
      // boolean to check for errors during payload transfer
      bool isError = false;

      // create payload and publish values via MQTT client
      // use buffer to convert float to char*
      char buffer[10];
      dtostrf(t, 0, 0, buffer);
      if (client.publish("ESP_xx/temp", buffer)) {
        Serial.print(" published /temp ");
      } else {
        Serial.print(" failed /temp ");
        isError = true;
      }
      dtostrf(h, 0, 0, buffer);
      if (client.publish("ESP_xx/hum", buffer)) {
        Serial.print(" published /hum ");
      } else {
        Serial.print(" failed /hum ");
        isError = true;
      }
      Serial.println();

      // on error, turn off LED
      if (isError == true) {
        digitalWrite(LED, HIGH);
      } else {
        digitalWrite(LED, LOW);
      }
    }
  }

  // sleep for 10 seconds
  // if sleep > default mosquitto timeout: a reconnect is forced for each update-cycle
  delay(10000);
}
```
View full tip
The accuracy of a predictive model can be boosted in two ways: either by embracing feature engineering or by applying boosting algorithms straight away. There are multiple boosting algorithms like Gradient Boosting, XGBoost, AdaBoost, Gentle Boost etc. Every algorithm has its own underlying mathematics, and a slight variation is observed while applying them.

While working with boosting algorithms, we come across two frequently occurring buzzwords: bagging and boosting.
Bagging: An approach where you take random samples of data, build learning algorithms, and take simple means to find bagging probabilities.
Boosting: Boosting is similar, however the selection of samples is made more intelligently: we subsequently give more and more weight to hard-to-classify observations.

Below are the default algorithms used in predictive models generated in ThingWorx Analytics:
- Decision Tree
- Gradient Boost
- Linear Regression
- Neural Net
- Random Forest
- Logistic Regression

Gradient boosting is a machine learning technique for regression and classification problems which produces a prediction model in the form of an ensemble of weak prediction models, typically decision trees. It builds the model in a stage-wise fashion like other boosting methods do, and it generalizes them by allowing optimization of an arbitrary differentiable loss function.

Let's begin with an easy example. Assume you are given a previous model M to improve on, and you observe that the model has an accuracy of 80% (by some metric). How do you go about improving it? One simple way is to build an entirely different model using a new set of input variables and trying better ensemble learners. On the contrary, we have a much simpler way to suggest. It goes like this:

Y = M(x) + error

What if we are able to see that the error is not white noise but has a correlation with the outcome (Y) value? What if we can develop a model on this error term, like:

error = G(x) + error2

We will probably see the error rate improve, say to 84%. Let's take another step and regress against error2:

error2 = H(x) + error3

Now we combine all these together:

Y = M(x) + G(x) + H(x) + error3

This will probably have an accuracy of even more than 84%. What if we can find optimal weights for each of the three learners:

Y = alpha * M(x) + beta * G(x) + gamma * H(x) + error4

How Gradient Boosting Works:
1. Loss Function: The loss function used depends on the type of problem being solved. It must be differentiable, but many standard loss functions are supported and you can define your own. A benefit of the gradient boosting framework is that a new boosting algorithm does not have to be derived for each loss function that may be used; instead, the framework is generic enough that any differentiable loss function can be used.
2. Weak Learner: Decision trees are used as the weak learner in gradient boosting. Specifically, regression trees are used that output real values for splits and whose outputs can be added together, allowing subsequent models' outputs to be added to "correct" the residuals in the predictions. Trees are constructed in a greedy manner, choosing the best split points based on purity scores like Gini or to minimize the loss.
3. Additive Model: Trees are added one at a time, and existing trees in the model are not changed. A gradient descent procedure is used to minimize the loss when adding trees; the weak learner sub-models are, more specifically, decision trees. After calculating the loss, to perform the gradient descent procedure, we must add a tree to the model that reduces the loss (see the formulation after this list).

Improvements to Basic Gradient Boosting:
1. Tree Constraints: It is important that the weak learners have skill but remain weak. Below are some constraints that can be imposed on the construction of decision trees:
- Number of trees: Generally, adding more trees to the model is very slow to overfit. The advice is to keep adding trees until no further improvement is observed.
- Tree depth: Deeper trees are more complex trees, and shorter trees are preferred. Generally, better results are seen with 4-8 levels.
- Number of nodes or number of leaves: Like depth, this can constrain the size of the tree, but the tree is not constrained to a symmetrical structure if other constraints are used.
- Number of observations per split: Imposes a minimum constraint on the amount of training data at a training node before a split can be considered.
- Minimum improvement to loss: A constraint on the improvement of any split added to a tree.
2. Weighted Updates: The contribution of each tree to the running sum can be weighted to slow down the learning by the algorithm. This weighting is called a shrinkage or a learning rate: each update is simply scaled by the value of the learning rate parameter ν.
3. Stochastic Gradient Boosting: At each iteration a subsample of the training data is drawn at random (without replacement) from the full training data set. The randomly selected subsample is then used, instead of the full sample, to fit the base learner.
4. Penalized Gradient Boosting: An additional regularization term helps to smooth the final learnt weights to avoid over-fitting. Intuitively, the regularized objective will tend to select a model employing simple and predictive functions.
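To make the stage-wise procedure concrete, here is the textbook formulation of the additive update with shrinkage; it is added for reference rather than taken from the original post:

$$F_m(x) = F_{m-1}(x) + \nu \, h_m(x), \qquad 0 < \nu \le 1,$$

where $h_m$ is the regression tree added at stage $m$, chosen to reduce the loss,

$$h_m \approx \arg\min_{h} \sum_{i=1}^{n} L\big(y_i,\; F_{m-1}(x_i) + h(x_i)\big),$$

with $L$ the differentiable loss function; in practice $h_m$ is fit to the negative gradient of $L$ at $F_{m-1}$, and $\nu$ is the learning rate (shrinkage) from the "Weighted Updates" section.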
View full tip
It's critical to configure the correct parameters when running your application in a production environment, or even in a development one. While the GUI makes it very user-friendly and easy to set the right values in the right fields, it's useful to know how to do the same programmatically, without the "Configure Tomcat" utility.

One way, if you're using Tomcat as a Windows service, is to adjust the JVM options by going to the bin directory and running:

tomcat8 //US//MYSERVICENAME ++JvmOptions=-Dexample.license.directory="C:\Program Files\example"

Turn the service off before you do this and restart it when you finish.

cd $CATALINA_HOME
.\bin\service.bat install tomcat
.\bin\tomcat8.exe //US//tomcat8 --JvmMs=512 --JvmMx=1024 --JvmSs=1024

Setting the --JvmXX parameters may not be enough; you may also need to specify the JVM memory values explicitly. From the command line it may look like this:

bin\tomcat8w.exe //US//tomcat8 --JavaOptions=-Xmx=1024;-Xms=512;..

Be careful not to override the other JavaOptions.

But the best and recommended way is to use setenv.sh/setenv.bat (Linux/Windows respectively). It isn't in the as-downloaded Tomcat, but if you look in catalina.sh/catalina.bat, there's a check for a file called setenv; if it's there, it's run. That's where you set JAVA_OPTS, CATALINA_OPTS, etc. We use it to set JAVA_HOME, JAVA_OPTS, CATALINA_OPTS and JPDA_ADDR. Putting all your environment variables into this file is ideal because then you don't have to change the stock startup scripts. Then, when monitoring the log, we can see the parameters taken.
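As an illustration of the recommended approach, a minimal setenv.bat might look like the sketch below; the paths and memory values are examples only, and on Linux the setenv.sh equivalent uses export statements instead of set:

```bat
rem %CATALINA_HOME%\bin\setenv.bat - picked up automatically by catalina.bat
rem Example values only; adjust to your environment.
set "JAVA_HOME=C:\Program Files\Java\jdk1.8.0"
set "JAVA_OPTS=-Xms512m -Xmx1024m"
set "CATALINA_OPTS=-Dexample.license.directory=C:\Program Files\example"
```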
View full tip
This blog is intended to help diagnose and fix the most common issues that may be encountered when working with ThingWatcher. It cannot be stressed strongly enough that you should be familiar with your data, including the average time interval between data points and the collection duration and certainty threshold you specified.

Before you start troubleshooting ThingWatcher, check that the result and training microservices are running. To test the result microservice, open a web browser and paste in the results URL: http://<IP of microservices>:<Port of results microservice>/results/models (e.g., http://localhost:8096/results/models). To test the training microservice, open a web browser and paste in the training URL: http://<IP of microservices>:<Port of training microservice>/training (e.g., http://localhost:8091/training). If you see either {"values":[],"total":0,"next":null,"previous":null} or a list of training jobs in JSON format, the result and training microservices are available.

1. Question: I haven't seen an anomaly, but I believe my 'property' is anomalous.
This can be caused by different reasons; here are the most common causes:
- The certainty is too high. If the certainty is too high, ThingWatcher is conservative in its categorization of "true positives" and therefore may emit more "false negatives". Reducing the certainty will change this behavior, but note that ThingWatcher may then categorize too many "false positives" as a result. In other words, ThingWatcher may detect the desired anomalies but also some non-anomalies.
- The 'property' was anomalous during training data collection. If ThingWatcher creates a predictive model from anomalous data, it may not be able to detect the desired anomalies during MONITORING because the data does not really appear to be anomalous, so ThingWatcher treats this pattern as 'normal'. Therefore, ensure that 'property' values are non-anomalous during training.
- There are long time gaps during the monitoring state, so ThingWatcher stays in Buffering and categorizes these data points as non-anomalous.

2. Question: ThingWatcher detects an anomaly, but my 'property' is non-anomalous.
- The certainty might be too low. In this case, ThingWatcher reports anomalies when the incoming data pattern looks even slightly different from the expected data pattern.
- ThingWatcher might need more training data. If the 'property' data has a pattern that occurs over a long time span, ThingWatcher needs to collect multiple cycles of all these patterns in order to detect a true anomaly without emitting too many false positives.

3. Question: ThingWatcher is in a FAILED state. Why?
There are many possible reasons for a failed state; here are the most likely causes:
- ThingWatcher emits a FAILED state because the training service has not been set up or is down, or, similarly, because the result service is not available. Note: messageText=Unexpected exception. {Throwable=[ConnectException: Operation timed out}]] or messageText=Unexpected exception. {Throwable=[ConnectException: Connection refused}]]. Note that ThingWatcher is still able to collect all training data, and you will only begin to see these failed states after ThingWatcher tries to post the training request.
- ThingWatcher emits a FAILED state because time gaps prevent the data collection for training. You will see this warning in the log messages: "A long time gap was detected in the data that is greater than the threshold of {n}". This means you have a long gap in the training data and ThingWatcher will recollect the data. If there are more than 3 recollections due to long time gaps, ThingWatcher transitions to a failed state and will not be able to recover. In this case you can either instruct ThingWatcher to retrain and try again, or check the data source to make sure it does not have long gaps.

4. Question: Why does ThingWatcher remain in Buffering?
There are many possible reasons, but the most likely issue is time gaps, which cause ThingWatcher to remain stuck in Buffering. If the incoming data regularly contains long time gaps, you will notice that ThingWatcher keeps alternating between the monitoring and buffering states. You may need to provide better quality data, i.e. more evenly spaced data.

Source: Alex Meng, Specialist Software Engineer
View full tip
An introduction to installing the ThingWorx platform. Information on the environment, prerequisites, and configuration steps when installing ThingWorx. Includes walkthroughs of installing with H2 and PostgreSQL databases, an introduction and demonstration of the Linux installation script, solutions to common installation problems and more.     For full-sized viewing, click on the YouTube link in the player controls.   Visit the Online Success Guide to access our Expert Session videos at any time as well as additional information about ThingWorx training and services.
View full tip