IoT & Connectivity Tips

Learning is a wide domain, and thus the field of machine learning has branched into several subfields dealing with different types of learning tasks.

Supervised vs Unsupervised Learning

Since learning involves an interaction between the learner and the environment, we can divide tasks according to the nature of that interaction. Consider the task of detecting hand-written numbers among a large set of digital numbers versus the task of anomaly detection. For the hand-written number detection task, we consider a setting in which the learner receives a set of training numbers, labeled as 'Hand-Written' or 'Digital'. On the basis of such training, the learner should figure out a rule for labeling a newly arriving number. In contrast, for the task of anomaly detection, all the learner gets as training is a large set of numbers without any labels on it, and the learner's task is to detect 'unusual' numbers.

Consider learning as a process of 'using experience to gain expertise'. Supervised learning describes a scenario in which the 'experience' (a training example) consists of significant information (the labels 'Hand-Written' or 'Digital') that is missing in the numbers arriving in the future, to which the learned expertise is to be applied. Here, the acquired expertise is aimed at predicting the missing information for the validation data. In unsupervised learning, however, there is no difference between the training data and the validation data. The learner processes input data with the goal of coming up with some summary or compressed version of the data. Clustering a data set into subsets of similar objects is a good example of such a task.

There is also an intermediate learning setting in which the training set contains more information than the validation set, and the learner is required to predict even more information for the validation set. Consider a game of chess, where a value function describes each configuration of a chess board. One may want to learn the degree by which White's position is better than Black's. The only information available to the learner at training time is positions that occurred in actual chess games and who won them. Such learning frameworks are investigated under reinforcement learning.

Source: 'Understanding Machine Learning: From Theory to Algorithms' by Shai Shalev-Shwartz and Shai Ben-David
View full tip
Persistent properties are stored in the ThingWorx database, while non-persistent properties are stored in memory. This means that persistent values do not get erased or deleted if the Thing or the platform restarts. Persistent properties can also be retrieved in the same way as non-persistent properties. To better explain how we can retrieve persistent data, consider the example below.

I took a device group which has multiple devices and defined two persistent properties: one is the serial number and the other is the firmware version number. In a mashup, the user enters the serial number and the corresponding firmware version is retrieved. I achieved this by writing services for that Thing:

1. Created a Data Shape with the name DeviceData and defined two field definitions: one is the serial number and the other is the firmware version.
2. Created a Thing Template with the name DevGroup. Added a property to the Thing Template with the name DeviceData, selected InfoTable as the base type, and selected the previously created Data Shape (DeviceData) in the Data Shape field. Made the property persistent by selecting Is Persistent, then saved the property.
3. Created two devices with the names Device1 and Device2. Both devices use the above template.
4. For each device, set the property values by clicking the Set button on the Properties page. These values are persistent, meaning they do not change even after a refresh or when you restart ThingWorx. You can also set these properties at run time by creating a service.
5. Created a GetFirmversion service which retrieves the firmware version for a given serial number (a sketch of such a service appears after this post).
6. Created a mashup as shown below. Once you select Device1, enter the serial number in the numeric entry field and click the query button, the firmware version of the given serial number is displayed along with the data of Device1. If you enter a wrong number, no data is displayed; you can also show an error message instead of displaying empty values. The same applies when you select the Device2 data.

This is one way to retrieve persistent data; the same result can be achieved in many other ways.
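As an illustration of step 5, here is a minimal sketch of what a GetFirmversion-style service could look like in ThingWorx server-side JavaScript. The service input name (serialNumber) and the field names (SerialNumber, FirmwareVersion) are assumptions for this sketch; the DeviceData property and Data Shape come from the post, and your own field names may differ.

// Minimal sketch of a GetFirmversion-style service.
// Assumptions: a STRING input named "serialNumber", a persistent InfoTable property
// named DeviceData, and a Data Shape with SerialNumber and FirmwareVersion fields.

// Build an empty result table with the same Data Shape as the property
var result = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape({
    infoTableName: "InfoTable",
    dataShapeName: "DeviceData"
});

// Scan the persistent InfoTable property and copy the matching rows to the result
var tableLength = me.DeviceData.rows.length;
for (var x = 0; x < tableLength; x++) {
    var row = me.DeviceData.rows[x];
    if (row.SerialNumber == serialNumber) {
        result.AddRow({
            SerialNumber: row.SerialNumber,
            FirmwareVersion: row.FirmwareVersion
        });
    }
}

Binding the result of a service like this to a grid in the mashup gives the lookup behavior described above.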
View full tip
I've been working with the 7.x and 8.x versions of Thingworx over the last several months doing integrations, so I have a few development instances I've been working with. I'd like to go over some of the issues I've encountered and provide some potential best practices around how you work with Thingworx in development mode and transition to production. Typically, I'll create an instance and develop the integration using the Administrator user, which is the only user created as you start up a Thingworx instance. Lately, I've been having trouble with a lot of authentication failures as I build.

Problem number 1: The new User Lockout feature. Around 7.2 a new User Lockout feature was added to Thingworx to help prevent brute-force cracking of passwords. You are now allowed only so many authentication failures in a given period of time before the user is automatically put in lock mode for a number of minutes. Unfortunately (but realistically, in a more secure manner), a lockout appears as more authentication failures. In reality, it is because the user you are trying to authenticate as has been automatically locked. I came very close to wiping out an entire instance because I just couldn't get signed in. Then I remembered the lockout, so I worked on something else for a while, tried to get back into the server, and was successful because the lock timeout had expired. To see the settings in your system, look at the User Management Subsystem configuration page. The Account Lockout Settings default to 5 failures in a 5 minute period resulting in a 15 minute lockout. One note is that setting the lockout to 0 doesn't disable it - it permanently locks out the account, and it will have to be reset by a member of the Administrators group. The screenshot below shows where these settings are configured.

Problem number 2: AppKeys. There is a new setting in the PlatformSubsystem that by default prevents the use of AppKeys in a URL. This setting is present because it is more secure. If you use an appKey as a query parameter in a URL, the appKey will be logged as clear text in your web browser's access log. This is a security risk - an AppKey, especially one not using a whitelist setting, might make it possible for someone to gain access to your system by managing to see the access log for your system (maybe via some log analysis tool you feed your logs to). You can disable this setting, but that is not recommended for production - you may have to for a while because you may have code that uses this technique that must be updated before you can enforce the policy. You should deal with this in your production systems as soon as practical. See the graphic below on where this setting shows up.

Problem number 3: REST API testing. As a Thingworx developer, you're probably aware of tools like Curl, Postman and even the web browser that let you exercise a REST API as you develop your code so you can validate your functionality. The REST guidelines specify that you should use the GET method to retrieve data, PUT to create data, POST to update data, etc. The issue is that it is easiest to test an API call if you can execute it from a web browser. However, the web browser always uses the GET method to make a request. This means that PUT and POST (along with other methods) will not work from your browser. Thingworx originally interpreted the incoming request and would internally reroute incoming requests to the POST or PUT functionality. This is also insecure because it makes it too easy to execute services from a browser.
A setting was added to the PlatformSubsystem to allow for a gradual transition to the more secure configuration. Turn this on in developer mode to simplify your testing of REST calls, but do not leave it on in production mode as it provides a potential attack vector for your server. So I have some recommendations:

1) Set up an additional administrative user upon installation. If you only have one user defined and it gets locked out, you're stuck until the lockout times out. Worse, if for some reason you set the timeout value to 0, you're locked out of Thingworx forever. Your only choice will be to hack the database to unlock the user or to wipe out the instance and start over. I just went through a situation where I did create the second user but forgot to add it to the Administrators group. So I did something else for 20 minutes to make sure the lockout had cleared. Then I added the user to the Administrators group but got distracted and never pressed the Save button, so it locked up again. Make sure you have the user created and functional immediately upon installing the instance - don't wait until you're getting locked out by some loop that's not authenticating properly. Even if you were logged in as your Administrator user, the lockout will cause a failure the next time you try to do something in Composer, like turn off the lockout checkbox!

2) Test your REST calls with Curl or Postman - not the web browser. Don't test your code in a loop until you've tested it in isolation to be sure it's not going to fail authentication for some reason (which may include violating the PlatformSubsystem settings above). Don't use the browser to do the testing - it will require disabling the secure settings. Use Curl or, even better, Postman or a similar tool to test your REST calls - it will give you better formatted output than Curl. And you can easily put the appKey in as a header (where it should be) instead of a parameter on the URL or in the body (see the sketch after this list).

3) Tighten up your appKeys where possible. Since an appKey is effectively a user/password replacement, you should protect it in the same manner - keep appKeys out of log files by not allowing them as URL parameters, and use the whitelist to keep them from being used for other purposes. If you have a server-to-server connection, whitelist the server that will be making the calls to you. What I'm not sure of is whether this is really IP addresses only or if you can use a DNS name and it will look up the IP address and ensure the request is in fact coming from the expected source. Someone else might be able to comment on this.

4) Test with the PlatformSubsystem settings off. Make sure you can run your server without the method redirect or appKey-as-parameter settings in the PlatformSubsystem. Those settings are potential security vulnerabilities. You may find some Thingworx code that requires one or the other of those settings. Please be sure to report this through PTC Tech Support so it can be fixed.
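As an illustration of recommendation 2, here is a minimal sketch of a scripted REST call that sends the appKey as a request header rather than as a URL query parameter. The host, Thing, service, appKey and input names below are placeholders, not values from the post; substitute your own, and verify the header usage against your ThingWorx version's REST API documentation.

// Minimal sketch (Node.js 18+, built-in fetch): invoking a ThingWorx service with the
// appKey supplied as a header instead of ?appKey=... in the URL. All values are placeholders.
async function callService() {
    const response = await fetch(
        "https://twx.example.com/Thingworx/Things/MyThing/Services/GetFirmversion",
        {
            method: "POST",                                            // service invocations use POST
            headers: {
                "appKey": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",      // header, so it never lands in an access log
                "Content-Type": "application/json",
                "Accept": "application/json"
            },
            body: JSON.stringify({ serialNumber: "SN-0001" })          // service input parameters
        }
    );
    console.log(response.status, await response.json());
}

callService();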
View full tip
A license is not required to download and install KEPServerEX V6. After finishing the full install, all features of the KEPServerEX Configuration are available for setup and testing, including the ThingWorx Native Interface. When either a client application makes a request of a driver or a plug-in becomes activated, a license check is performed. If a feature is not licensed, a two-hour demo countdown period begins for that feature. There are System tags that can be monitored in KEPServerEX to determine which features are licensed, which are in a time-limited demo countdown period, and which have expired. To reset the countdown timer for an unlicensed feature, the KEPServerEX Runtime service can be stopped and restarted. See the links and steps below for more information.

How do I download and install KEPServerEX?

Monitor licensed, time-limited, and expired features in KEPServerEX:
1. Open the KEPServerEX Configuration window.
2. The right-most icon in the Configuration icon toolbar is the white "QC" icon. Select this icon to launch a new OPC Quick Client window. The server project should automatically build.
3. In the left pane, locate the _System folder.
4. Within this folder, locate the following three tags (Item IDs); the corresponding information is displayed as a string in the Value column:
- _System._ExpiredFeatures: any unlicensed driver or plug-in that has timed out
- _System._LicensedFeatures: drivers or plug-ins that are currently licensed for use with this install
- _System._TimeLimitedFeatures: any unlicensed driver or plug-in that is in demo mode, with the remaining demo countdown (in seconds)

Note: The ThingWorx Native Interface does not require a KEPServerEX license.

Restart the KEPServerEX Runtime service:
The KEPServerEX Runtime service (server_runtime.exe) runs in the background as a system service. Restarting this service will reset the demo countdown for any unlicensed feature. To reset the Runtime service:
1. Right-click on the Administration tool (the green EX icon in the systray by the system clock).
2. Select "Stop Runtime Service".
3. Either the user will be prompted to restart the service, or the same Administration tool menu can be used again to "Start Runtime Service".
View full tip
Connecting Existing Things to ThingWorx Industrial Gateway for Anomaly Detection

In this video you will learn how to:
- Bind a property of an existing entity to the KEPServerEX data feed
- Create an alert on that property and monitor its behavior

Updated link for access to this video: Connecting Existing Things to ThingWorx Industrial Gateway for Anomaly Detection
View full tip
Many users of our software have submitted cases regarding the third-party components and their functions within ThingWorx Analytics. This short blog post lists the main components used by our software and explains their functionality.

ThingWorx Analytics uses the following components in its default installation:

Apache ZooKeeper
- ZooKeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services.
- ThingWorx Analytics uses ZooKeeper as the gatekeeper for API calls and processes to the Application.
- Component homepage: https://zookeeper.apache.org/

Apache Tomcat
- Apache Tomcat software is an open source implementation of the Java Servlet, JavaServer Pages, Java Expression Language and Java WebSocket technologies.
- ThingWorx Analytics uses Tomcat to handle web services and API communications. This enables the use of ThingWorx Foundation (Core) mashups with ThingWorx Analytics Server.
- Component homepage: http://tomcat.apache.org/

PostgreSQL Server
- PostgreSQL is an open source object-relational database system.
- ThingWorx Analytics uses PostgreSQL Server to store analytical results for later retrieval.
- Component homepage: https://www.postgresql.org/
View full tip
When I tried to set String property values with Chinese letters by using C SDK, I could see only broken characters in Thingworx. The cause of the problem is simple and it's encoding. In order to solve this problem, you need to convert encoding to UTF-8 in your C code. I used 'libiconv' library to do it. This guide is for Win32 version. If you want to make a library for other platforms such as Linux, you can create or get a library for your own by googling. How to Get the Source Code of libiconv At the moment, the most recent version of libiconv is 1.15. You can download the source code of libiconv from here. How to Build I used MS Visual Studio 2012, but the explanation can be applied to the earlier versions of MS Visual Studio and express editions. Step 1. Download the most recent version of libiconv and unzip the file. Step 2. Make a new Win32 Project. Let's say "libiconv" as the project name. Check to create directory for solution. Choose DLL as the application type and check Empty Project for additional options. Click the button "Finish" to generate the new project. Step 3. Copy files from the folders of libiconv to project folders. To build "libiconv", you need to compile three files "localcharset.c", "relocatable.c" and "iconv.c". That's the key! Copy three files "relocatable.h", "relocatable.c" and "iconv.c" in the folder "...\libiconv-1.15\lib\" to the project folder "...\libiconv\libiconv\". Copy "...\libiconv-1.15\libcharset\lib\localcharset.c" to the project folder "...\libiconv\libiconv\". Copy "...\libiconv-1.15\libcharset\include\localcharset.h.build.in" to the project folder "...\libiconv\libiconv\" and then, rename the copied "localcharset.h.build.in" to "localcharset.h". Copy "...\libiconv-1.15\windows\libiconv.rc" to the project folder "...\libiconv\libiconv\". Make folder "include" under the project folder "...\libiconv\". Copy "...\libiconv-1.15\include\iconv.h.build.in" to the project include folder "...\libiconv\include" and then, rename the copied "iconv.h.build.in" to "iconv.h". Copy "...\libiconv-1.15\config.h.in" to the project include folder "...\libiconv\include" and then, rename the copied "config.h.in" to "config.h". Copy all the header files (*.h) and definition files (*.def) in the folder "...\libiconv-1.15\lib" to the project include folder "...\libiconv\include". Step 4. Add existing items. Execute "project > Add Existing items..." at the main menu to add existing items to the project. Step 5. Project Settings. You can make 64-bit platform through configuration manager in order to generate libiconv.dll for 64-bit system. You can also make two other configurations "ReleaseStatic" and "DebugStatic" in order to generate libiconvStatic.lib as a static link library. At the project properties, change Output Directory as "$(SolutionDir)$(Configuration)_$(Platform)\" and Intermediate Directory as "$(SolutionDir)obj\$(ProjectName)\$(Configuration)_$(Platform)\". Change Include Directories as "..\include;$(IncludePath)": You have to add "BUILDING_LIBICONV" and "BUILDING_LIBCHARSET" to Peprocessor Definitions of all Platforms and of all configurations. You'd better set Runtime Library to "Multi-threaded" when building dynamic link library libiconv.dll. Then, the dependency on VC Runtime library can be controlled by the applications that will be built and dynamically linked with libiconv.dll because libiconv.dll does not need VC Runtime library but only the application that uses libiconv.dll may or may not need VC Runtime library. 
However, when building the static link library libiconvStatic.lib, you can choose Runtime Library option for libiconvStatic.lib depending on the application that uses libiconvStatic.lib. You have to change Precompiled Header option to "Not Using Precompiled Headers". Step 6. Tweak the source code. libiconv.rc Open libiconv.rc with text editor or the source code editor of Visual Studio IDE by double-clicking libiconv.rc in the Solution explorer and insert some code at line 4 as follows: /////////////    ADD    ///////////// #define PACKAGE_VERSION_MAJOR 1 #define PACKAGE_VERSION_MINOR 14 #define PACKAGE_VERSION_SUBMINOR 0 #define PACKAGE_VERSION_STRING "1.14" ///////////////////////////////////// You may be asked to change Line endings to "Windows (CR LF)". Then, let it do so. It will be more convenient for you if you mainly use Windows. localcharset.c Open localcharset.c and delete or comment the lines 80 - 83 as follows: //////////////////  DELETE //////////////// ///* Get LIBDIR.  */ //#ifndef LIBDIR //# include "configmake.h" //#endif /////////////////////////////////////////// iconv.c Open iconv.c and delete or comment the lines 250 - 252 and add three lines there as follows: ///////////////////////// DELETE /////////////////////// //size_t iconv (iconv_t icd, //              ICONV_CONST char* * inbuf, size_t *inbytesleft, //              char* * outbuf, size_t *outbytesleft) /////////////////////////   ADD   ////////////////////// size_t iconv (iconv_t icd,               const char* * inbuf, size_t *inbytesleft,               char* * outbuf, size_t *outbytesleft) //////////////////////////////////////////////////////// localcharset.h Open localcharset.h and delete or comment the lines 21 - 25 and add 7 lines there as follows: /////////////////////////   DELETE  //////////////////////// //#if @HAVE_VISIBILITY@ && BUILDING_LIBCHARSET //#define LIBCHARSET_DLL_EXPORTED __attribute__((__visibility__("default"))) //#else //#define LIBCHARSET_DLL_EXPORTED //#endif /////////////////////////    ADD    ////////////////////// #ifdef BUILDING_LIBCHARSET #define LIBCHARSET_DLL_EXPORTED __declspec(dllexport) #elif USING_STATIC_LIBICONV #define LIBCHARSET_DLL_EXPORTED #else #define LIBCHARSET_DLL_EXPORTED __declspec(dllimport) #endif //////////////////////////////////////////////////////////////////// config.h Open config.h in the project include folder "...\libiconv\include" and delete or comment the lines 29 - 30 as follows: ///////////////////////// DELETE /////////////////////// ///* Define as good substitute value for EILSEQ. */ //#undef EILSEQ //////////////////////////////////////////////////////// Otherwise you can redefine EILSEQ as good substitute value. 
iconv.h Open iconv.h in the project include folder "...\libiconv\include" and delete or comment the line 175 and add 1 line as follows: /////////////////////////  DELETE  /////////////////////// //#if @HAVE_WCHAR_T@ /////////////////////////    ADD   ////////////////////// #if HAVE_WCHAR_T //////////////////////////////////////////////////////////////////////////////// Delete or comment the line 128 and add 1 line as follows: /////////////////////////  DELETE  /////////////////////// //#if @USE_MBSTATE_T@ /////////////////////////   ADD   ////////////////////// #if USE_MBSTATE_T //////////////////////////////////////////////////////////////////////////////// Delete or comment the lines 107-108 and add 2 lines as follows: /////////////////////////  DELETE  /////////////////////// //#if @USE_MBSTATE_T@ //#if @BROKEN_WCHAR_H@ /////////////////////////  ADD  ////////////////////// #if USE_MBSTATE_T #if BROKEN_WCHAR_H //////////////////////////////////////////////////////////////////////////////// Delete or comment the line 89 and add 2 lines as follows: /////////////////////////  DELETE /////////////////////// //extern LIBICONV_DLL_EXPORTED size_t iconv (iconv_t cd, @ICONV_CONST@ char* * inbuf, //size_t *inbytesleft, char* * outbuf, size_t *outbytesleft); /////////////////////////    ADD   ////////////////////// extern LIBICONV_DLL_EXPORTED size_t iconv (iconv_t cd, const char* * inbuf,   size_t *inbytesleft, char* * outbuf, size_t *outbytesleft); //////////////////////////////////////////////////////////////////////////////// Delete or comment the lines 25 - 30 and add 8 lines as follows: /////////////////////////  DELETE /////////////////////// //#if @HAVE_VISIBILITY@ && BUILDING_LIBICONV //#define LIBICONV_DLL_EXPORTED __attribute__((__visibility__("default"))) //#else //#define LIBICONV_DLL_EXPORTED //#endif //extern LIBICONV_DLL_EXPORTED @DLL_VARIABLE@ int _libiconv_version; /* Likewise */ /////////////////////////    ADD   ////////////////////// #if BUILDING_LIBICONV #define LIBICONV_DLL_EXPORTED __declspec(dllexport) #elif USING_STATIC_LIBICONV #define LIBICONV_DLL_EXPORTED #else #define LIBICONV_DLL_EXPORTED __declspec(dllimport) #endif extern LIBICONV_DLL_EXPORTED int _libiconv_version; /* Likewise */ //////////////////////////////////////////////////////////////////////////////// How to Use When you use newly built libiconv, the only header file that you need is iconv.h. You will need to link either the import library libiconv.lib or the static library libiconvStatic.lib in your project property or write the code in one of your source file as follows: #pragma comment (lib, "libiconv.lib") or #pragma comment (lib, "libiconvStatic.lib") In the source of the application that uses this library either libiconv.dll or libiconvStatic.lib, if you don't define anything but only include iconv.h, your application will use libiconv.dll while it will use libiconvStatic.lib if you define USING_STATIC_LIBICONV before you include iconv.h in your application as follows: //#define USING_STATIC_LIBICONV #include <iconv.h> And in C SDK code, I used a "SteamSensorWithFileTransferAndTunneling" sample code and added codes in the "dataCollectionTask" function as below. 
void dataCollectionTask(DATETIME now, void * params) {   iconv_t ic;   char* in_buf = "Hi, 文健英";   char *to_chrset = "UTF-8";   char *from_chrset = "EUC-KR";   size_t in_size = strlen(in_buf);   size_t  out_size = sizeof(wchar_t) * in_size * 4;   char* out_buf = malloc(out_size);   // Caution: iconv's inbuf, outbuf are double pointers, so need to define separate pointers and pass addresses.   char* in_ptr = in_buf;   char* out_ptr = out_buf;   size_t out_buf_left;   size_t result;   memset(out_buf, 0x00, out_size);      ic = iconv_open(to_chrset, from_chrset);   if (ic == (iconv_t) -1)   {         printf("Not supported code \n");         exit(1);   }   printf("input len = %d, %s\n", in_size, in_buf);   out_buf_left = out_size;   iconv(ic, &in_ptr, &in_size, &out_ptr, &out_buf_left);   //printf("input len = %d, output len=%d %s\n", in_size, out_size - out_buf_left, out_buf);   iconv_close(ic);   /* TW_LOG(TW_TRACE,"dataCollectionTask: Executing"); */   properties.TotalFlow = rand()/(RAND_MAX/10.0);   properties.Pressure = 18 + rand()/(RAND_MAX/5.0);   properties.Location.latitude = properties.Location.latitude + ((double)(rand() - RAND_MAX))/RAND_MAX/5;   properties.Location.longitude = properties.Location.longitude + ((double)(rand() - RAND_MAX))/RAND_MAX/5;   properties.Temperature  = 400 + rand()/(RAND_MAX/40);   properties.BigGiantString = out_buf; // Set values for String property   /* Check for a fault.  Only do something if we haven't already */   if (properties.Temperature > properties.TemperatureLimit && properties.FaultStatus == FALSE) {   twInfoTable * faultData = 0;   char msg[140];   properties.FaultStatus = TRUE;   properties.InletValve = TRUE;   sprintf(msg,"%s Temperature %2f exceeds threshold of %2f",   thingName, properties.Temperature, properties.TemperatureLimit);   faultData = twInfoTable_CreateFromString("message", msg, TRUE);   twApi_FireEvent(TW_THING, thingName, "SteamSensorFault", faultData, -1, TRUE);   twInfoTable_Delete(faultData);   }   /* Update the properties on the server */   sendPropertyUpdate(); }
View full tip
The AddStreamEntries snippet does not offer too much information, except that it needs an InfoTable as input. It is, however, based on the InfoTable for the AddStreamEntry service.

To use AddStreamEntries, an InfoTable based on sourceType, values, location, source, timestamp and tags must be used.

In this example, I started with a new Thing based on a Stream template and the following DataShape:

This DataShape must be converted into an InfoTable which is used as the values parameter. It's important that the timestamp parameter has distinct values! Otherwise values matching the same timestamp will be overwritten!

We don't really need the sourceType, as ThingWorx will automatically determine the type by knowing the source and which kind of Entity Type it is.

I created a new MyStreamThing with a new service, filling the InfoTable and the Stream. The result is the following code, which will add 5 rows to the Stream:

// *** SET UP META DATA FOR INFO TABLE ***

// create a new InfoTable based on AddStreamEntries parameters (timestamp, location, source, sourceType, tags, values)
var myInfoTable = { dataShape: { fieldDefinitions : {} }, rows: [] };

myInfoTable.dataShape.fieldDefinitions['timestamp']  = { name: 'timestamp', baseType: 'DATETIME' };
myInfoTable.dataShape.fieldDefinitions['location']   = { name: 'location', baseType: 'LOCATION' };
myInfoTable.dataShape.fieldDefinitions['source']     = { name: 'source', baseType: 'STRING' };
myInfoTable.dataShape.fieldDefinitions['sourceType'] = { name: 'sourceType', baseType: 'STRING' };
myInfoTable.dataShape.fieldDefinitions['tags']       = { name: 'tags', baseType: 'TAGS' };
myInfoTable.dataShape.fieldDefinitions['values']     = { name: 'values', baseType: 'INFOTABLE' };

// *** SET UP ACTUAL VALUES FOR INFO TABLE ***

// create new meta data
var tags = new Array();
var timestamp = new Date();
var location = new Object();
location.latitude = 0;
location.longitude = 0;
location.elevation = 0;
location.units = "WGS84";

// add rows to InfoTable (~5 times)
for (var i = 0; i < 5; i++) {

    // create new values based on Stream DataShape
    var params = {
        infoTableName : "InfoTable",
        dataShapeName : "Cxx-DS"
    };
    var values = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);

    // add something to the values to make them unique
    // create and add new row based on Stream DataShape
    // only a single line allowed!
    var newValues = new Object();
    newValues.a = "aaa" + i; // STRING - isPrimaryKey = true
    newValues.b = "bbb" + i; // STRING
    newValues.c = "ccc" + i; // STRING

    values.AddRow(newValues);

    // create new InfoTable row based on meta data & values
    // add 10 ms to each object, to make its timestamp unique
    // otherwise entries with the same timestamp will be overwritten
    var newEntry = new Object();
    newEntry.timestamp = new Date(Date.now() + (i * 10));
    newEntry.location = location;
    newEntry.source = me.name;
    newEntry.tags = tags;
    newEntry.values = values;

    // add the new row to the InfoTable's rows array
    // (push, so each of the 5 entries is kept rather than overwriting the previous one)
    myInfoTable.rows.push(newEntry);
}

// *** ADD myInfoTable (HOLDING MULTIPLE STREAM ENTRIES) TO STREAM ***

// add stream entries in the InfoTable
var params = {
    values: myInfoTable /* INFOTABLE */
};

// no return
Things["MyStreamThing"].AddStreamEntries(params);

To verify the values have been added correctly, call the GetStreamEntriesWithData service on the MyStreamThing.
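For completeness, the verification step can also be scripted from another service. This is only a sketch: the maxItems value is an example, and the parameter can be omitted to use the service default.

// Sketch: read back the entries just written (maxItems value is an assumed example)
var entries = Things["MyStreamThing"].GetStreamEntriesWithData({
    maxItems: 10 /* NUMBER */
});
// entries is an InfoTable holding the stream entries; after the service above runs,
// it should contain the five rows that were added
var count = entries.rows.length;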
View full tip
Retraining the Model in ThingWorx Analytics When using ThingWorx Analytics Products to build Prediction Models, it is not enough to end up with models that are a Technical Success. The purpose is to ultimately have models that are a Business Success. What the user would want to achieve is to have Models that remain reliable and accurate in a potentially changing production environment. Therefore, when your environment changes, the model that you have used and relied on might no longer provide the same quality of results. Hence the need to retrain your model. Types of Models to be retrained: There are currently two types of models that are created with ThingWorx Analytics: Predictive models Anomaly Detection models Each of those models could require retraining based on the context in which they are created then used. When to retrain your Model: - Predictive models: For predictive analytics models, the main initiator for retraining would be a change in the production environment. resulting in the change of collected Dataset. This could nonetheless be caused by many factors: An overall change in the business objective: This could include a change in the granularity at which the Dataset is used. An Example, in a Company HR Dataset, could be moving from making predictions on a Department Level to making predictions on an Employee Level. The addition of either new features in the Dataset or even new values in the existing features which did not figure within the values of the training dataset. This type of change in the Dataset would require the retraining of the Model. The emergence of new trends in the marketplace: These new trends would appear in the generated Datasets. This could be detected by the degradation of the results that are provided by the existing Prediction Models. - Anomaly Detection models: In anomaly detection, the need to retrain the models originates mainly from a change in what is considered to be a Normal behavior of a certain monitored property. The could be caused by the following factors: A change in the context in which the Property values are measured then monitored: An example is monitoring the Traffic in a Street in the Working weekdays while excluding the weekends then adding the Weekend days to the monitored behavior. Here the change in the Traffic is normal however would be detected as an Anomaly unless the model is retrained. A change in the Thresholds of values accepted to be normal in a certain property. An example is the temperatures measured on a running device. The Device, previously,  never run at full power when the model was built but since it started running at full power the temperature increased beyond the usual threshold and thus the model needs to be retrained to include the new Normal temperatures. Another reason that could justify retraining the anomaly detection model is simply that when the model was trained the Property values that were used were not representing its normal state. For example, the temperature of an  Engine was being measured on a "Turned off" state when we are actually trying to build a model that would detect temperature anomalies on a running Engine. This might not be an exhaustive list of the reasons that would require either a Predictive or an Anomaly Detection Model to be retrained. As a general rule of thumb, if the model starts delivering results that are below expected or if the business context for the model is not valid, then it might be a wise decision to retrain the Analytics Model.
View full tip
Behavior of ThingWorx Analytics Anomaly Detection with Data Gaps In ThingWorx Analytics, Anomaly detection is performed through the ThingWatcher API framework. This is done by observing the Data from an Edge device, learning what the data stream should look like and then monitoring for any unexpected sequences of Data within the incoming Data stream. Ideally, for this process to work properly, there should be no Data Gaps. However, Data Gaps do occur, this blog describes how ThingWatcher deals with them in order to achieve high performance in anomaly detection. Data Gaps and phases affected: In anomaly detection, ThingWatcher goes through three consecutive phases which are Initializing, Calibrating and then Monitoring. Both the Initializing and Monitoring phases involve either collecting or monitoring streamed Data, so these two phases are sensitive to Data streaming Gaps. The Calibrating phase involves the use of already collected data to create the Anomaly Detection Model. Thus this phase is not directly affected by Data gaps. Dealing with Long and Short Data Gaps: Initializing Phase: During this phase, Data is collected and as part of the collection process, the sampling rate is imputed. So when short data gaps occur these are interpolated so that there are no missing values. However long Gaps might also occur. A Data gap is considered to be long if there is more than three missing Data points which should amount to three times the sampling rate. Basically, if the Timestamp on a data point is greater than the previous timestamp by more than three times the sampling rate that is considered to be a long gap. If a long gap occurs, ThingWatcher would restart the Data Collection process since long Data gaps are not acceptable. The data recollection process could be initiated three times when there are long gaps before failing if the gaps persist. The Data source would then no longer be considered reliable Monitoring Phase: In this phase, the Data stream is monitored to detect any unexpected behavior. In that case, if a short time gap occurs between the previous and the current TimedValue data points, the lookback buffer would be cleared. ThingWatcher will re-enter the Buffering state and will remain in this state until the lookback window buffer is completely filled. For more information on The functionalities of ThingWatcher, Please refer to the ThingWatcher Deployment Guide https://support.ptc.com/WCMS/files/173109/en/ThingWatcher-Deployment-Guide-8.0.pdf However, if the gaps are long and exceed three times the sampling rate, data filling could no longer be a valid solution and Data collection restarts. It is important to note that these imputed values decrease the accuracy of the Anomaly Detection Model. Therefore data monitored by ThingWatcher should be incremented in regular intervals. In general, persistent data gaps should be avoided by ensuring that data is streamed such that the timestamps increase in regular increments and any gaps that exist are generally incidental and small.
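To make the "long gap" rule above concrete, here is a small illustration in plain JavaScript. This is not a ThingWatcher API; the function and variable names are hypothetical, and it simply encodes the rule described in the post (a gap is long when the timestamp jump exceeds three times the sampling rate).

// Illustration only: the long-gap check described above, with hypothetical names
function isLongGap(previousTimestampMs, currentTimestampMs, samplingRateMs) {
    // a gap is "long" when the new timestamp exceeds the previous one
    // by more than three times the sampling rate
    return (currentTimestampMs - previousTimestampMs) > (3 * samplingRateMs);
}

// e.g. with a 1-second sampling rate, a 5-second jump counts as a long gap,
// which is when ThingWatcher would restart data collection
var longGap = isLongGap(0, 5000, 1000); // true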
View full tip
In this video we go through the steps to install ThingWorx Analytics Server 8.0   Updated Link for access to this video:  ThingWorx Analytics Server 8.0
View full tip
Please refer to the release notes: PTC

Here are some common questions and answers in regards to the upgrade change for extensions:

Q: The removal of dependencies was almost impossible. Do the changes allow extension updates without removal?
A: That is correct.

Q: Is an "easier" uninstall of extensions possible? For example, extensions with many dependencies result in a struggle when manually deleting all included files... it would be great to have just one uninstall function.
A: Unfortunately, that is still the case when one is uninstalling an extension.

Q: Is there an easy way to see all entity dependencies on an extension (vs. on any one template, etc.)?
A: Not currently at the extension level. That's certainly something to be considered adding.

Q: If one made a mistake when changing an extension mashup, how do they recover? Is it necessary to remove and then import a newer extension?
A: Yes, at the moment that is the only recourse.
View full tip
This blog post provides information on the technical changes in ThingWorx 8.0: New Technical Changes in ThingWorx 8.0.0

Here are some common questions and answers in regards to the licensing change:

Q: Does that mean all the extensions in the marketplace won't be free anymore?
A: It depends on the extension. The main extensions we are licensing for 8.0 are Navigate, Manufacturing and Utilities. We are not licensing the MailExtension on the marketplace, for example. Partners and customers can still import their custom apps/extensions.

Q: If ThingWorx connects to a remote platform (RP) which has its own subscription-based Flexera license (InService, for example), how does this interaction work for license validation? Does a server-to-server connection count as a user logging in directly to the PTC product?
A: License files are per ThingWorx instance. For the RP, each would have its own license files. User counts (if entitled and enforced) are generic to each system.
View full tip
This article (https://support.ptc.com/appserver/cs/view/solution.jsp?n=CS264270) and the blog post New Technical Changes in ThingWorx 8.0.0 provide information on the technical changes in ThingWorx 8.0.

Here are some common questions and answers in regards to the change:

Q: Is there any way that one could lose access to the appKey keystore (restore a backup, etc.)?
A: Yes, it is possible to lose the keystore. If one for some reason simply deletes it or renames it, the existing application keys would no longer be able to be decrypted - and thus would be unusable.

Q: Is there a way to back up the appKey store and any necessary secrets such that it can be recovered?
A: Yes, but this would be handled at the file system level, not inside of ThingWorx. We do have some best practices documented for managing the keystore.

Q: For developers who configure Edge applications and Connectors to connect to the platform, where do they copy the app key from? Are they copying the encrypted version of the app key?
A: They would do the same as before. We are not encrypting the app key in the UI, only on disk.

Q: Is there any way to get a summary of app keys that are being used via URL?
A: We do have some requirements for highlighting "repeat offenders". In 8.0, one can query the logs to find those app keys. We don't log the app key id, only the app key name.
View full tip
It usually happens that we need to copy a large file to the ThingWorx server periodically and, what's worse, the big file keeps changing (like a log file). This sample gives a simpler way to implement this. The main ideas in the sample are:
1. Lower the management burden on the ThingWorx server by putting all the work on the edge SDK side.
2. Save network bandwidth by uploading only the increment of the file and appending it to the older file on the ThingWorx server.

Java SDK version used in this sample: 6.0.1-255
View full tip
Here is a demo that uses the Repeater widget and the Smart Grid widget to display a nested table. Note: this demo is for testing use.

1. Create a DataShape for the nested data table, named DataTableStructure, and create a DataShape named InfoStructure for the Obj field.
2. Create the nested data table named DataTableTest as below, and create a service named AddObjInput (as below) on this data table (a sketch of such a service is shown after this post).
3. Create a mashup named RepeaterMsh. Add widgets such as a textbox, label, button and Smart Grid to the mashup. There are some parameters we added before; bind them to the widgets that were just added to the mashup. Add the DataTableTest service AddOrUpdateDataTableEntry to the mashup. Bind the textbox and the Smart Grid's edited table to the AddOrUpdateDataTableEntry parameters, and bind the button's Clicked event to trigger the AddOrUpdateDataTableEntry service. The Smart Grid widget needs a special attribute named TableDefinition; in this demo it is {"columns":[{"name":"PrimarySchool","type":"text","display":"PrimarySchool"},{"name":"SecondarySchool","type":"text","display":"SecondarySchool"},{"name":"HighSchool","type":"text","display":"HighSchool"}]}
4. Create the mashup which has the Repeater widget. The mashup overview is like below. The first part is used to add a new record; the second part (the Repeater widget) displays the table data and also updates it.
- In the first part there are textboxes, labels, a Smart Grid and a button widget. They are bound to the service named AddDataTableEntry and are used to add a new record to the table (the Smart Grid is used to add the nested table data). There is a button named AddRecords; bind its Clicked event to the AddDataTableEntry service, and bind the AddDataTableEntry service's ServiceInvokeCompleted event to GetDataTableEntries. The Smart Grid widget here is an empty grid, but it needs a structure at the beginning - that's why the AddObjInput service was created in step 2. It returns an empty table, but with the DataShape.
- The second part is a Repeater widget. The Repeater widget has an attribute named Mashup; bind the mashup RepeaterMsh to it. Bind GetDataTableEntries All Data to the Repeater widget. There are some parameters that need to be bound, like gender, InfoTable, Sname…
5. Test the demo.

Note: the Smart Grid widget is not ThingWorx OOTB functionality. The Smart Grid widget can be edited flexibly. Please download it with this link: https://marketplace.thingworx.com/Items/Smart%20Grid%20Widget
There is a thread that teaches how to use the Smart Grid: https://community.thingworx.com/message/53379#53379
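For step 2, here is a minimal sketch of what an AddObjInput-style service can look like. It uses CreateInfoTableFromDataShape (the same snippet used elsewhere in these tips) with the InfoStructure DataShape named in the post; the field layout of that DataShape is yours to define.

// Minimal sketch of an AddObjInput-style service: return an empty InfoTable that
// carries the nested table's DataShape, so the Smart Grid has a structure to bind
// to before any rows exist.
var result = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape({
    infoTableName: "InfoTable",
    dataShapeName: "InfoStructure"
});
// no rows are added - the empty table (with its field definitions) is the output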
View full tip
Unless created and owned by the Administrator user, by default a MySQL Database Thing will not connect to the database, as it requires certain permissions on the user. In order for a user other than an administrator to create a working Database Thing, they need three permissions (in addition to the typical subsystem and resource permissions - refer to https://www.youtube.com/watch?v=HzFqxvgHtpI&index=8&list=PLz1ppcU_kaneagUT9qgQfz3HByf6-9zTF ):
1. Visibility to the Database Thing Template.
2. Execute service permission on the EncryptPropertyValue service in the Encryption Services resource.
3. Visibility to the DatabaseThing Thing Package.

Typically, the most convenient way to track down a permissions issue is to use the browser developer tools. For example, in Chrome, developer tools can be used to view the API calls being sent by Composer and the errors sent in response.

ThingWorx Composer doesn't expose Thing Packages, so in order to set visibility to the DatabaseThing Thing Package, one would need to throw a REST API call at it. Hope this information helps in setting up a MySQL Database Thing owned by a non-administrator!

*In addition, refer to The use of System User
View full tip
Wanting to build a simple environment with ThingWorx, using a Raspbery Pi mini-computer and an Edge MicroServer (EMS), one has to bear in mind some apparently insignificant tips, that can make a huge difference in a well-connected platform.    The Raspbery Pi can have either a few sensors attached on it and/or a Sense HAT integrated with it. A Sense HAT is an add-on board for Raspbery Pi that has an 8x8 LED matrix, a five-button joystick and a sensor, gyroscope. Tips: The ThingWorx guide "Connecting ThingWorx EMS with Rasberry Pi Quickstart" can be expanded to read Sense HAT specific sensor information A Sense HAT integrated with Raspberry Pi can be accessed from anywhere, using the existing APIs to read/write Getting location information while using the Pi board (because The Sense HAT doesn't have this feature): The Raspberry Pi has several USB ports, therefore inserting a USB GPS can be used as a workaround for connecting and reading location information (or the GPIO pins could be used to connect a GPS board) To include services and properties to read Sense HAT sensors data, start from the existing LUA example.lua template and built a Pi specific one. The EMS is a pre-built application that can be installed on Windows or Linux and it provides a bridge between the remote device and the ThingWorx platform, using the REST API AlwaysOn protocol. Tips: To read data from a sensor, it is important to use the corresponding library for that sensor; if LUA scripting is used, then this library has to be easily integrated with LUA (LSR is optional with EMS; EMS can be used standalone). Using the correct library, the EMS will be able to gather the data and push it to ThingWorx. The LSR sends properties to the EMS over the HTTP(s) protocol, which converts it to the AlwaysOn Protocol, and sends it to the Platform LUA is a process manager useful if having to integrate devices from different vendors using different modes to interact To test if the LSR is working: go to Composer -> Properties tab of the remote thing, click "Manage Bindings", select the "Remote" tab, next to "Local" - here can be seen all the remote properties added to the thing by the LSR. For creating and biding more things: create the remote thing in ThingWorx and then add them to the config.lua file in the same way you have the gateway Listing any "Remote Things" that are ready to be connected, and are not associated with any Thing, yet: Monitoring; one method to try to bind them: type the name of the thing in the Identifier section of General Information of the Thing To request a local REST API call to the default EMS port 8000, from a LSR, make sure that the GET works fine (i.e for a property update:  localhost:8000/Thingworx/Things/MyThing/Properties/myString It should return you something like this: {"rows":[{"myString":""}],"fieldDefinitions":{"myString":{"name":"myString","description":"","type":"STRING","aspects":{}}}}
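The local REST check described above can also be scripted. Here is a minimal sketch that performs the same GET against the EMS's default port 8000; the Thing name (MyThing) and property (myString) come from the example URL in the post, and it assumes the EMS HTTP interface is reachable locally without extra authentication, as in that example.

// Minimal sketch (Node.js 18+): read a property through the local EMS REST interface
async function checkLocalProperty() {
    const response = await fetch("http://localhost:8000/Thingworx/Things/MyThing/Properties/myString");
    const data = await response.json();
    // the response shape matches the JSON shown above: rows[] plus fieldDefinitions
    console.log(data.rows[0].myString, data.fieldDefinitions.myString.type);
}

checkLocalProperty();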
View full tip
This blog addresses a few points that are related to scoring with ThingWorx Analytics. It, particularly, brings a clearer understanding of the concepts behind the values of the scores that are generated when performing a scoring job.   Scoring Outputs:   It is important to note that when training an analytics model, the method is to create a generalizable model from a relatively small training dataset.   By its nature, we expect the training process to see a limited subset and not an exhaustive list of all possible values for many constraints, especially for time and practicality.   As such, these generalized models will be expected to handle unseen data in the form of new combinations or values outside of previously observed ranges (more on this below).   One common way to see scores that exceed the observed ranges in training, under the assumption that the goals are continuous, is to use prescriptive scoring.   Prescriptive scoring attempts to find optimal values for a lever, meaning tunable, features in order to maximize or minimize score values. See the prescriptive scoring documentation and functionality for more information.   min/max constraints: these are constraints that are placed upon the inputs for training and expected inputs for scoring.   •          For training: If theses ranges were provided as part of the upload process, then training will raise exceptions regarding invalid data. However, if the ranges are not provided - they will be inferred from the data and, as such, training will not see values outside of observed ranges.   •          For scoring: Validation of the ranges will only be performed on the inputs - not the outputs. It is very important to note that the handling of these "constraints" is dependent upon the data type.   For categorical (e.g. colors) and ordinal data (e.g. shirt sizes), the constraints are strict and data that was not observed in training will raise exceptions during scoring.   However, for continuous values (e.g. temperature ranges) these constraints are more informational in nature. For predictive scoring, our code will accept records with values outside of those ranges.   The rule of thumb is that values slightly outside these ranges are acceptable and that as the values stray farther from the ranges, the accuracy of the model degrades very quickly.   For prescriptive scoring, these constraints are used to determine the acceptable ranges of values to try when determining the optimal values. Values outside of these constraints will NOT be tried.
View full tip
Merging InfoTables can seem to be an overwhelming task in ThingWorx. What even IS an InfoTable?? The short answer is that an InfoTable is a specifically structured JSON object. Merging InfoTables is therefore as easy as incrementing through each, adding the columns of each to another table, and then populating the new table with data. For InfoTables with the same DataShapes, the built in "Union" service can be used. Find this snippet under the InfoTableResources section. There is also a snippet for incrementing through DataShape fields if the DataShape is known. For InfoTables which don't have the same DataShapes or for which the DataShape is not known, things get a bit trickier. Thankfully, some new KCS documentation provides example code to step you through merging any number of tables together. I hope this documentation is helpful!
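For the same-DataShape case mentioned above, the merge reduces to one call to the built-in Union snippet in the InfoTableFunctions resource. The sketch below is hedged: the parameter names (t1, t2) follow the snippet as I recall it, so insert the snippet from Composer to confirm them, and firstTable/secondTable are assumed to already exist in your service.

// Sketch: merging two InfoTables that share a DataShape with the Union snippet
var merged = Resources["InfoTableFunctions"].Union({
    t1: firstTable,   /* INFOTABLE - assumed input */
    t2: secondTable   /* INFOTABLE - same DataShape as firstTable */
});

// merged now holds the rows of both tables
var totalRows = merged.rows.length;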
View full tip
This simple example creates an infotable of hierarchical data from an existing datashape that can be used in a tree or D3 Tree widget.  Attachments are provides that give a PDF document as well as the ThingWorx entities for the example.  If you were just interested in the service code, it is listed below: var params = {   infoTableName : "InfoTable",   dataShapeName : "TreeDataShape" }; // CreateInfoTableFromDataShape(infoTableName:STRING("InfoTable"), dataShapeName:STRING):INFOTABLE(TreeDataShape) var result = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params); var NewRow = {}; NewRow.Parent = "No Parent"; NewRow.Child = "Enterprise"; NewRow.ChildDescription = "Enterprise"; result.AddRow(NewRow); NewRow.Parent = "Enterprise"; NewRow.Child = "Site1"; NewRow.ChildDescription = "Wilmington Plant"; result.AddRow(NewRow); NewRow.Parent = "Site1"; NewRow.Child = "Site1.Line1"; NewRow.ChildDescription = "Blow Molding"; result.AddRow(NewRow); NewRow.Parent = "Site1.Line1"; NewRow.Child = "Site1.Line1.Asset1"; NewRow.ChildDescription = "Preform Staging"; result.AddRow(NewRow); NewRow.Parent = "Site1.Line1"; NewRow.Child = "Site1.Line1.Asset2"; NewRow.ChildDescription = "Blow Molder"; result.AddRow(NewRow); NewRow.Parent = "Site1.Line1"; NewRow.Child = "Site1.Line1.Asset3"; NewRow.ChildDescription = "Bottle Unscrambler"; result.AddRow(NewRow); NewRow.Parent = "Site1"; NewRow.Child = "Site1.Line2"; NewRow.ChildDescription = "Filling"; result.AddRow(NewRow); NewRow.Parent = "Site1.Line2"; NewRow.Child = "Site1.Line2.Asset1"; NewRow.ChildDescription = "Rinser"; result.AddRow(NewRow); NewRow.Parent = "Site1.Line2"; NewRow.Child = "Site1.Line2.Asset2"; NewRow.ChildDescription = "Filler"; result.AddRow(NewRow); NewRow.Parent = "Site1.Line2"; NewRow.Child = "Site1.Line2.Asset3"; NewRow.ChildDescription = "Capper"; result.AddRow(NewRow); NewRow.Parent = "Site1"; NewRow.Child = "Site1.Line3"; NewRow.ChildDescription = "Packaging"; result.AddRow(NewRow); NewRow.Parent = "Site1.Line3"; NewRow.Child = "Site1.Line3.Asset1"; NewRow.ChildDescription = "Printer Labeler"; result.AddRow(NewRow); NewRow.Parent = "Site1.Line3"; NewRow.Child = "Site1.Line3.Asset2"; NewRow.ChildDescription = "Packer"; result.AddRow(NewRow); NewRow.Parent = "Site1.Line3"; NewRow.Child = "Site1.Line3.Asset3"; NewRow.ChildDescription = "Palletizing"; result.AddRow(NewRow); NewRow.Parent = "Enterprise"; NewRow.Child = "Site2"; NewRow.ChildDescription = "Mobile Plant"; result.AddRow(NewRow); NewRow.Parent = "Site2"; NewRow.Child = "Site2.Line1"; NewRow.ChildDescription = "Blow Molding"; result.AddRow(NewRow); NewRow.Parent = "Site2.Line1"; NewRow.Child = "Site2.Line1.Asset1"; NewRow.ChildDescription = "Preform Staging"; result.AddRow(NewRow); NewRow.Parent = "Site2.Line1"; NewRow.Child = "Site2.Line1.Asset2"; NewRow.ChildDescription = "Blow Molder"; result.AddRow(NewRow); NewRow.Parent = "Site2.Line1"; NewRow.Child = "Site2.Line1.Asset3"; NewRow.ChildDescription = "Bottle Unscrambler"; result.AddRow(NewRow); NewRow.Parent = "Site2"; NewRow.Child = "Site2.Line2"; NewRow.ChildDescription = "Filling"; result.AddRow(NewRow); NewRow.Parent = "Site2.Line2"; NewRow.Child = "Site2.Line2.Asset1"; NewRow.ChildDescription = "Rinser"; result.AddRow(NewRow); NewRow.Parent = "Site2.Line2"; NewRow.Child = "Site2.Line2.Asset2"; NewRow.ChildDescription = "Filler"; result.AddRow(NewRow); NewRow.Parent = "Site2.Line2"; NewRow.Child = "Site2.Line2.Asset3"; NewRow.ChildDescription = "Capper"; result.AddRow(NewRow); NewRow.Parent = "Site2"; 
NewRow.Child = "Site2.Line3"; NewRow.ChildDescription = "Packaging"; result.AddRow(NewRow); NewRow.Parent = "Site2.Line3"; NewRow.Child = "Site2.Line3.Asset1"; NewRow.ChildDescription = "Printer Labeler"; result.AddRow(NewRow); NewRow.Parent = "Site2.Line3"; NewRow.Child = "Site2.Line3.Asset2"; NewRow.ChildDescription = "Packer"; result.AddRow(NewRow); NewRow.Parent = "Site2.Line3"; NewRow.Child = "Site2.Line3.Asset3"; NewRow.ChildDescription = "Palletizing"; result.AddRow(NewRow);
View full tip