IoT Tips

About

This is the third part of a ThingBerry-related blog post series. ThingBerry is ThingWorx installed on a Raspberry Pi, which can be used for portable demonstrations without the need to utilize e.g. customer networks. Instead, the ThingBerry provides its own custom WIFI hotspot and allows Things to connect and send / receive demo data on a small scale.

In this particular blog post we'll discuss how to set up the ThingBerry for encrypted communication and trust via certificates. We will create a custom Root and Intermediate Certificate Authority as well as a server specific certificate. More information on the theory of Trust & Encryption can be found here: Trust & Encryption - Theory

As the ThingBerry is a highly unsupported environment for ThingWorx, please see this blog post for all related warnings.

Preparing the environment

To get started and keep all the working materials safe and secure within the scope of the pi user account, let's start by creating a workspace directory and then install the openssl package:

cd ~
mkdir certs
chmod 700 certs
cd certs
sudo apt-get install openssl

Troubleshooting Tips

Keytool: To look at a specific certificate, use the keytool and specify the .crt file to look at:

keytool -printcert -file rootca.crt

This will show information on the certificate's configuration, such as signing authority or validity.

Timestamps: It's important that the Chain of Trust follows a timely fashion. The Root CA has the longest expiration date. Within its validity the Intermediate CA must be valid. The server specific certificate must be valid within the timeframe of the Intermediate CA's validity. As the validity is calculated based on seconds of the current (signing) action, I'm using the following validity periods:

Root CA: 999 days
Intermediate CA: 888 days
Server specific certificate: 777 days

Any other times that make sense can be used, as long as the time constraints are followed. Otherwise the certificate will not be recognized as valid and will therefore not be trusted.

Creating a Chain of Trust

For this example we'll create a Root and Intermediate Certificate Authority (CA) as well as the server specific certificate. In a first step we can create all of the required keys:

openssl genrsa -des3 -out rootca.key 4096
openssl genrsa -out intermediateca.key 4096
openssl genrsa -out server.key 4096

The -des3 option for the rootca will store a password and encrypt it within the key for more secure access. This tightens up security, so that the key for the rootca is not easily corrupted or exposed.

Root Certificate Authority

Having the keys, we now create the Root CA itself:

openssl req -new -x509 -sha256 -days 999 -key rootca.key -reqexts v3_req -extensions v3_ca -out rootca.crt

Enter the information, starting with the password for the Root CA key. Use an easy to identify name for the Common Name (CN), e.g. "My Root CA".

Intermediate Certificate Authority

As the Intermediate CA needs to be signed / approved by the Root CA, we need a Certificate Signing Request (CSR):

openssl req -new -key intermediateca.key -reqexts v3_req -extensions v3_ca -out intermediateca.csr

The v3_ca extension will mark this certificate as a CA itself. This means that we can use the Intermediate CA to sign other certificates as well. The CSR will be signed with the Root CA's key and certificate. For this the CSR must be sent to whoever owns the Root CA. In our case, we're the owner ourselves and have everything in the same directory, so there is no need to send it off to Let's Encrypt, Google or VeriSign.
For signing the Intermediate CA as an actual CA, the Root CA must have a specific configuration to allow the v3_ca extension. This needs to be created via

nano v3_ca.ext

Into this new file, copy & paste the following:

basicConstraints        = CA:TRUE
subjectKeyIdentifier    = hash
authorityKeyIdentifier  = keyid:always,issuer:always
keyUsage                = cRLSign, dataEncipherment, digitalSignature, keyCertSign, keyEncipherment, nonRepudiation

Save and exit. Finally, the Root CA signs the Intermediate CA, using the v3_ca extension, via:

openssl x509 -req -in intermediateca.csr -CA rootca.crt -CAkey rootca.key -CAcreateserial -CAserial intermediateca.srl -extfile v3_ca.ext -days 888 -sha256 -out intermediateca.crt

Certificate Authority Chain

Having now two CAs, they need to be chained together. This is required later to create the keystore used in Tomcat. Creating the chain is probably the easiest part of this blog:

cat intermediateca.crt rootca.crt > cachain.crt

Server Specific Certificate

The server specific certificate has to be signed and obtained by the Intermediate CA; for this a CSR is required:

openssl req -new -key server.key -out server.csr

The request is then signed with the Intermediate CA's key and certificate. For commercial certificates this signing process will be done by publicly trusted CAs, like Let's Encrypt, VeriSign or Google. In our case, we're the Intermediate CA ourselves and can sign the CSR:

openssl x509 -req -in server.csr -CA intermediateca.crt -CAkey intermediateca.key -CAcreateserial -CAserial server.srl -days 777 -sha256 -out server.crt

Creating Keystores

For the keystore as used in Tomcat we need the CA chain as well as the server specific certificate and the corresponding key. To generate a keystore that can be used in Tomcat, we first need to create a PKCS12 type keystore with openssl:

openssl pkcs12 -export -certfile cachain.crt -in server.crt -inkey server.key -out server.p12

This PKCS12 keystore can now be transformed into a JKS keystore, by importing the PKCS12 information into a new JKS keystore:

keytool -importkeystore -srckeystore server.p12 -srcstoretype PKCS12 -destkeystore keystore.jks -deststoretype JKS

Important: the passwords for the PKCS12 keystore and the JKS keystore must match! Tomcat requires those passwords to be the same - in case they are different, the configuration will not work and ThingWorx cannot be accessed through a secure connection.

Configuring ThingWorx

For ThingWorx, only the keystore is required. It can be copied over e.g. to a certificates area in the storage directory:

sudo cp keystore.jks /thingworx/storage/certificates
sudo chmod 640 /thingworx/storage/certificates/keystore.jks
sudo chown tomcat:tomcat /thingworx/storage/certificates/keystore.jks

To enable it, Tomcat's server.xml needs to be updated:

sudo nano $CATALINA_HOME/conf/server.xml

Find the connector for port 80 and add a new connector after it:

<Connector port="443" protocol="org.apache.coyote.http11.Http11NioProtocol"
           maxThreads="150" connectionTimeout="20000" redirectPort="8443"
           SSLEnabled="true" scheme="https" secure="true" clientAuth="false"
           sslProtocol="TLSv1.2" enableLookups="false"
           keystoreFile="/thingworx/storage/certificates/keystore.jks" keystorePass="<keystorePass>" />

Don't forget to update the password.
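Before restarting Tomcat it can be worth double-checking that the chain actually validates. A quick sanity check, assuming the file names used above:

# Verify the server certificate against the full chain
openssl verify -CAfile cachain.crt server.crt

# List the content of the JKS keystore - it should show the key entry together with the certificate chain
keytool -list -v -keystore keystore.jks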
Stop Tomcat:

sudo service tomcat stop

Ensure Tomcat is actually down with

ps -ef | grep tomcat

If it's still running, just sudo kill it, and start Tomcat again to activate the new configuration:

sudo service tomcat start

And now?

Congratulations! ThingWorx is now running on your ThingBerry using a secure channel. Access it via

https://<thingberry>/Thingworx

As HTTPS uses port 443 by default, you will automatically connect to the port configured in the server.xml. Classic HTTP connections on port 80 can still be used.

For more information about setting up ThingWorx with (self-signed) certificates also see https://support.ptc.com/appserver/cs/view/solution.jsp?n=CS193947 - there's also a note on forwarding all HTTP traffic to HTTPS. See also https://support.ptc.com/appserver/cs/view/solution.jsp?n=CS246292 for more information about using specific SSL protocols or cipher suites.
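If you also want to force plain HTTP requests over to HTTPS (the first support article above touches on this), one common approach - shown here only as a sketch, adjust to your deployment - is a security constraint in Tomcat's conf/web.xml; the redirect then uses the redirectPort of the port-80 connector, which would need to point at 443:

<security-constraint>
    <web-resource-collection>
        <web-resource-name>Everything</web-resource-name>
        <url-pattern>/*</url-pattern>
    </web-resource-collection>
    <!-- CONFIDENTIAL makes Tomcat redirect matching plain HTTP requests to the SSL connector -->
    <user-data-constraint>
        <transport-guarantee>CONFIDENTIAL</transport-guarantee>
    </user-data-constraint>
</security-constraint>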
This script is provided as is and is not officially supported by PTC. Users are welcome to edit or change this script as desired. This script automates the installation steps outlined in the ThingWorx Installation Guide. If you are not familiar with these processes or are installing ThingWorx for the first time, it is recommended to refer to the guide for additional information.

Requirements:
- 64-bit Linux: Ubuntu (all flavors) 14/15/16, Red Hat Enterprise Linux 6/7, or CentOS 6/7
- Root privileges
- Internet connection
- A ThingWorx platform installation file for either H2 or PostgreSQL (available in the PTC downloads center https://support.ptc.com/appserver/cs/software_update/swupdate.jsp)

NOTE: This script does not require a GUI and can be run in headless mode.

Minimal download option:
The script will look in the working directory for pre-downloaded Java, Tomcat, and (for RHEL and CentOS) PostgreSQL installation files. If the installer detects one of the supported files, the user will be given the option to install with that file and skip the automated download. This feature is limited to specific versions and requires one (or more) of the following files:
- jdk-8u92-linux-x64.tar.gz
- apache-tomcat-8.0.33.tar.gz
- pgdg-redhat94-9.4-3.noarch.rpm
NOTE: This reduces the amount of data downloaded by the script, but does not eliminate it entirely. An active internet connection is still required.

Support for existing SSL certificates:
If an existing SSL certificate exists and the user wishes to use it instead of automatically generating one with the script, the user can copy the '.keystore' file to the working directory before running the script. During the installation process the user will be prompted with the option to use this keystore file, and will need to provide the keystore password created with the certificate.
NOTE: This script does not support keystore passwords containing the following characters: '!/@\"#$%^&*()_+]

Note for Red Hat Enterprise Linux users:
This script was designed to work on servers with an active Red Hat subscription. It has been tested and appears to work on unsubscribed servers if the following applications are installed: gcc, make, epel repository (see This Red Hat Announcement for more information).

Download the file attached to this document.

Steps to use this script:
1. Create a new folder on your Linux server. (This will be referred to as the 'working folder')
2. Download the ThingWorx platform installer file from the PTC Software downloads page. (For example, MED-61111-CD-072_SP2_ThingWorx-Platform-Postgres-7-2-2.zip)
3. Copy the installer file to the working folder. Do not unzip.
4. Copy the installer.sh file to your working folder.
5. In the command prompt, cd to your working directory.
6. Make the installer.sh file executable:
   > chmod +x installer.sh
7. Run the installer.sh file either using sudo or logging in as root:
   > sudo bash installer.sh
   OR
   > su
   > bash installer.sh

NOTE: The 'bash' command is mandatory; running the installer with the 'sh' command will not work.
There are Four Types of Analytics:

Descriptive: What Happened?

Descriptive analytics is a preliminary stage of data processing that creates a summary of historical data to yield useful information and possibly prepare the data for further analysis. These are analytics that use data aggregation and data mining to provide insight into the past and answer: "What has happened?"

Descriptive analysis or statistics does exactly what the name implies: it "describes", or summarizes, raw data and makes it something that is interpretable by humans. These are analytics that describe the past. The past refers to any point of time that an event has occurred, whether it is one minute ago or one year ago. Descriptive analytics are useful because they allow us to learn from past behaviors, and understand how they might influence future outcomes.

The vast majority of the statistics we use fall into this category (think basic arithmetic like sums, averages, percent changes). Usually, the underlying data is a count, or aggregate of a filtered column of data, to which basic math is applied. For all practical purposes, there are an infinite number of these statistics. Descriptive statistics are useful to show things like total stock in inventory, average dollars spent per customer and year over year change in sales. Common examples of descriptive analytics are reports that provide historical insights regarding the company's production, financials, operations, sales, finance, inventory and customers.

Note: Use Descriptive Analytics when you need to understand at an aggregate level what is going on in your company, and when you want to summarize and describe different aspects of your business.

Different techniques of Descriptive Analytics:
- Sampling
- Mean
- Mode
- Median
- Standard Deviation
- Range and Variance
- Stem and Leaf Diagram
- Histogram
- Quartiles
- Frequency Distributions

Use of Descriptive Analytics in ThingWorx Analytics:

Signal Detection: When analyzing volumes of data, it is helpful to know which data is actually useful and which data is just noise. Signals are based on a correlation algorithm that examines historical data to identify the strength of a given input in predicting future outcomes. Signals can identify meaningful correlations within the data. Signals are useful during initial analysis to determine which features you want to curate in a given data-set for predictive model generation. For example, knowing the month of the year is more important to accurately predicting tomorrow's weather than knowing the day of the week. The month has a much stronger signal than the day of the week for this prediction. ThingWorx Analytics reports signal strength as a mutual information (MI) score that represents the probability of predicting the goal variable when a given feature is provided. It can effectively capture non-linear relationships. ThingWorx Analytics evaluates each feature, or combination of features, to identify the top signals.

Cluster Analysis: Cluster analysis categorizes data into groups based on similarities relative to a goal variable. Like a clique, objects in a cluster minimize intra-distances (distances within the cluster) while maximizing inter-distances (distances between clusters). Clusters are mutually exclusive, meaning that each record can belong to only one cluster. However, ThingWorx Analytics supports a user-defined cluster hierarchy that can include sub-clusters inside other clusters.
The higher the number of clusters in the data, the smaller each cluster's population will be, but the stronger the potential insights can be.

How to Access Descriptive Analysis Functionality via ThingWorx Analytics:

REST API Service — Using a REST client, you can access the Signals Service and the Clusters Service. Each service includes a series of API endpoints to submit analysis requests, retrieve results, list jobs, and more. Requires installation of the ThingWorx Analytics Server.

Analytics Builder — As part of the ThingWorx Analytics Extension, Analytics Builder provides a user interface for interacting with your data. In addition to generating and scoring predictive models in Analytics Builder, you can also run procedures to generate signals.

How to avoid mistakes - useful tips for the different techniques of Descriptive Analytics:
- Crystallize the research problem → operability of it!
- Read literature on data analysis techniques.
- Evaluate various techniques that can do similar things with respect to the research problem.
- Know what a technique does and what it doesn't.
- Consult people, especially your supervisor.
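To make the techniques listed above a little more concrete, here is a minimal JavaScript sketch (e.g. inside a ThingWorx service) that computes three of the descriptive statistics named earlier - mean, median and standard deviation - over a plain array of sensor readings; the input values are purely illustrative:

// Illustrative readings - in practice these might come from a Stream or ValueStream query
var readings = [21.4, 22.0, 21.9, 23.1, 22.5, 21.7];

// Mean: sum of all values divided by the number of values
var mean = readings.reduce(function (sum, v) { return sum + v; }, 0) / readings.length;

// Median: middle value of the sorted list (average of the two middle values for even counts)
var sorted = readings.slice().sort(function (a, b) { return a - b; });
var mid = Math.floor(sorted.length / 2);
var median = (sorted.length % 2 === 0) ? (sorted[mid - 1] + sorted[mid]) / 2 : sorted[mid];

// Standard deviation: square root of the average squared distance from the mean
var variance = readings.reduce(function (sum, v) { return sum + Math.pow(v - mean, 2); }, 0) / readings.length;
var stdDev = Math.sqrt(variance);

var result = "mean=" + mean + ", median=" + median + ", stdDev=" + stdDev;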
As per ThingWorx Documentation:

Updating Properties Automatically in a Mashup
A mashup using the GetProperties service can be configured to use websockets and receive updates to properties automatically. When creating or editing a mashup, you can configure the GetProperties service so that the properties are automatically updated by selecting the "Automatically update values when able" checkbox in the service properties panel.

So, the feature to update properties automatically in a mashup is limited to the GetProperties service only. The following steps invoke your own custom service automatically when a property changes:

1. Find all the properties in your Thing for which a data change should trigger the custom service.
2. In the mashup, add a Value Display widget (or some other widget) for each property from Step 1.
3. Bind the properties from the GetProperties service to these widgets.
4. Set the Visible property of these widgets to false so that they don't show up at runtime.
5. Now bind the ServiceInvokeCompleted event of the GetProperties service to your custom service.
6. Save and view the mashup.

Result: When any of the properties from Step 1 changes, the custom service will be invoked in the mashup.
Disclaimer: example was provided by Hatcher Chad - chad@onfarmsystems.com

//
// For this example, we'll have a Math service
// which takes two numbers, and an operation.
// The result will be that operation performed on the two inputs.

//
// We either need an Application Key,
// or user credentials to perform the reads and writes.
// App keys are a little safer.
// In this demo, we'll store it on the Entity as a Property.
var appKey = me.appKey;

//
// The service name needs to be unique and not already in use.
var serviceName = "MyMath";

//
// What are the inputs to the service?
// We'll define them nicely here, but manipulate this object later.
var parameters = {
    "op" : "STRING",
    "x" : "NUMBER",
    "y" : "NUMBER"
};

//
// What datatype does the service return?
// If it's an infotable,
// then you'll also have to specify the data shape
// as part of the resultType's aspect,
// but I won't demonstrate that here.
var output = "NUMBER";

//
// What is the actual service script?
// We'll define it here as an array of lines, and then join them together.
var serviceScript = [
    "var result = (function() {",
    " switch(op) {",
    " case \"add\": return x + y;",
    " case \"sub\": return x - y;",
    " case \"mult\": return x * y;",
    " case \"div\": return x / y;",
    " default: return op in Math ? Math[op](x, y) : 0;",
    " };",
    "})();",
].join("\n");

////////

//
// Let's convert the friendly parameter definition
// into the structure that ThingWorx uses:
var parameterDefinitions = Object.keys(parameters).reduce(function(parameterDefinitions, parameterName, index) {
    var parameterType = parameters[parameterName];
    parameterDefinitions[parameterName] = {
        "name": parameterName,
        "aspects": {},
        "description": "",
        "baseType": parameterType,
        "ordinal": index
    };
    return parameterDefinitions;
}, {});

//
// Now let's set up our service definition and implementation.
var definition = {
    "isAllowOverride": false,
    "isOpen": false,
    "sourceType": "Unknown",
    "parameterDefinitions": parameterDefinitions,
    "name": serviceName,
    "aspects": {
        "isAsync": false
    },
    "isLocalOnly": false,
    "description": "",
    "isPrivate": false,
    "sourceName": "",
    "category": "",
    "resultType": {
        "name": "result",
        "aspects": {},
        "description": "",
        "baseType": output,
        "ordinal": 0
    }
};

var implementation = {
    "name": serviceName,
    "description": "",
    "handlerName": "Script",
    "configurationTables": {
        "Script": {
            "isMultiRow": false,
            "name": "Script",
            "description": "Script",
            "rows": [{
                "code": serviceScript
            }],
            "ordinal": 0,
            "dataShape": {
                "fieldDefinitions": {
                    "code": {
                        "name": "code",
                        "aspects": {},
                        "description": "code",
                        "baseType": "STRING",
                        "ordinal": 0
                    }
                }
            }
        }
    }
};

////////

//
// Here are the URLs we'll need in order to make updates.
// You can change the thing name ('ServiceModifier' here)
// to something else.
// If you use credentials instead of an app key,
// then you can remove the appKey parameter here,
// but you'll have to add the username and password
// to the two ContentLoaderFunctions calls.
var url = {
    export : "http://127.0.0.1:8080/Thingworx/Things/ServiceModifier?Accept=application/json&appKey=" + appKey,
    import : "http://127.0.0.1:8080/Thingworx/Things/ServiceModifier?appKey=" + appKey
};

//
// We can download the entity to modify as a JSON object.
// Older versions of ThingWorx might not support this.
var config = Resources.ContentLoaderFunctions.GetJSON({
    url : url.export,
});

//
// We have to modify both the 'effectiveShape',
// as well as the 'thingShape'.
config.effectiveShape.serviceDefinitions[serviceName] = definition;
config.effectiveShape.serviceImplementations[serviceName] = implementation;

config.thingShape.serviceDefinitions[serviceName] = definition;
config.thingShape.serviceImplementations[serviceName] = implementation;

// Finally, we can push our updates back into ThingWorx.
Resources.ContentLoaderFunctions.PutText({
    url : url.export,
    content : JSON.stringify(config),
    contentType : "application/json",
});

// The end.
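Assuming the snippet above was run against a Thing called ServiceModifier, one quick way to verify the result is to call the freshly created service from another server-side script once the Thing has been reloaded/restarted:

// Invoke the new service on the modified Thing
var result = Things["ServiceModifier"].MyMath({
    op: "add",   // operation name as handled by the generated service script
    x: 2,
    y: 3
});
// result should be 5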
Sampling Strategy

This blog post will cover the 4 sampling strategies that are available in ThingWorx Analytics. It will tell you how each sampling strategy runs behind the scenes, when you may want to use it, and the pros and cons of each strategy.

SAMPLE_WITH_REPLACEMENT

This strategy is not often used by professionals but still may be useful in certain circumstances. When you sample with replacement, the value that you randomly selected is then returned to the sample pool. So there is a chance that you can have the same record multiple times in your sample.

Example: Let's say you have a hat that contains 3 cards with different people's names on them: John, Sarah, Tom. Let's say you make 2 random selections. On the first selection you pull out the name Tom. When you sample with replacement, you would put the name Tom back into the hat and then randomly select a card again. For your second selection, it is possible to get another name like Sarah, or the same one you selected, Tom.

Pros:
- May find improved models in smaller datasets with low row counts
Cons:
- The accuracy of the model may be artificially inflated due to duplicates in the sample

SAMPLE_WITHOUT_REPLACEMENT

This is the default setting in ThingWorx Analytics and the most commonly used sampling strategy by professionals. The way this strategy works is that after a value is randomly selected from the sample pool, it is not returned. This ensures that all the values that are selected for the sample are unique.

Example: Let's say you have a hat that contains 3 cards with different people's names on them: John, Sarah, Tom. Let's say you make 2 random selections. On the first selection you pull out the name Tom. When you sample without replacement, you would randomly select a card from the hat again without putting the Tom card back. For your second selection, you could only get the Sarah or John card.

Pros:
- This is the sampling strategy that is most commonly used
- It will deliver the best results in most cases
Cons:
- May not be the best choice if the desired goal is underrepresented in the dataset

UPSAMPLE_AND_SAMPLE_WITHOUT_REPLACEMENT

This is useful when the desired goal is underrepresented in the dataset. The records that represent the desired outcome of the goal are copied multiple times so they represent a larger share of the total dataset.

Example: Let's say you are trying to discover if a patient is at risk for developing a rare condition, like chronic kidney failure, that affects around 0.5% of the US population. In this case, the most accurate model that would be generated would say that no one will get this condition, and according to the numbers, it would be right 99.5% of the time. But in reality, this is not helpful at all to the use case, since you want to know if the patient is at risk of developing the condition. To avoid this from happening, copies are made of the records where the patient did develop the condition, so they represent a larger share of the dataset. Doing this will give ThingWorx Analytics more examples to help it generate a more accurate model.

Pros:
- Patterns from the original dataset remain intact
Cons:
- Longer training time

DOWNSAMPLE_AND_SAMPLE_WITHOUT_REPLACEMENT

This is also useful when the desired goal is underrepresented in the dataset. In downsample and sample without replacement, some of the records that do not represent the desired goal outcome are removed. This is done to increase the desired outcome's percentage of the dataset.

Example: Let's continue using the medical example from above.
Instead of creating copies of the desired records, undesired records are removed from the dataset. This causes the records where patients did develop the condition to occupy a larger percentage of the dataset.

Pros:
- Shorter training time
Cons:
- Patterns from the original dataset may be lost
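For intuition only (ThingWorx Analytics performs the sampling internally), here is a small JavaScript sketch contrasting the first two strategies on an array of records:

// Illustrative only - not part of ThingWorx Analytics
var pool = ["John", "Sarah", "Tom"];

// Sampling WITH replacement: the same record may be drawn more than once
function sampleWithReplacement(records, n) {
    var sample = [];
    for (var i = 0; i < n; i++) {
        sample.push(records[Math.floor(Math.random() * records.length)]);
    }
    return sample;
}

// Sampling WITHOUT replacement: every drawn record is removed from the pool,
// so all entries in the sample are unique
function sampleWithoutReplacement(records, n) {
    var remaining = records.slice();
    var sample = [];
    for (var i = 0; i < n && remaining.length > 0; i++) {
        var index = Math.floor(Math.random() * remaining.length);
        sample.push(remaining.splice(index, 1)[0]);
    }
    return sample;
}

var a = sampleWithReplacement(pool, 2);    // e.g. ["Tom", "Tom"] is possible
var b = sampleWithoutReplacement(pool, 2); // e.g. ["Tom", "Sarah"] - never a duplicate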
One of the topics that is usually of interest when entering the ThingWorx world is integration with third-party systems.

Disclaimer: the following guide is intended to be rather comprehensive and guide you in achieving the best integration with your desired system (!= quick and dirty).

For example, from my experience, customers many times ask:
- how can I connect to my hardware device?
- how can I connect to this device cloud so I can get data from it?
- how can I connect to my ERP?

With some luck, I hope that at the end of this article I will have provided a generic workflow that will help you achieve the best integration solution for your use case. We need to write down some requirements (they are not in order; depending on the use case some might not be worth answering):

0. What is the use case (detailed description)?
This is by far one of the most important aspects of any software development project. Please document your use case with all possible future uses. For example:
- I want to send information from sensors to the ThingWorx Server, and I want to do TCP tunnelling to the device and/or Remote Desktop. Or maybe only sending information from sensors and nothing else. Do I need Software Updates in the future or not?
- I want to read the Customer information from my CRM AND also update that information (read/write).

1. Write down the system specification for the hardware or software system:
- Available RAM for user apps
- Available disk space for user apps
- Does it have a TCP/IP stack?
- Operating system
- Installed runtimes (Java/.NET - which versions?)

2. Can I access the system or device directly from the ThingWorx Server?
This means answering the question: is my system directly accessible from my server, or is there a firewall which stops incoming connections? Another question to answer is: can I modify my firewall to allow incoming connections?

3. What protocol is my device or system capable of supporting for data transfer?
Examples:
- I have a device which is capable of outputting information through TCP only.
- I have a device which can only do HTTP callbacks to an HTTP server.
- I have Microsoft SQL, to which I can connect through ADO.NET or JDBC.
- I have a third party service billing provider who supports interfacing via HTTP webservices (SOAP or REST).
- I have a device supporting CoAP.
Typically all third party software systems support communication via webservices.

4. Can I configure and/or deploy new software to my device or system?
We need to have this question answered, because in some cases it might make more sense to write some logic on the system or device. For example, if I want to access data from an SQL server and my use case requires some processing for which that SQL server is better suited, it might be much more efficient to have that logic stored as a Stored Procedure there and just call it from ThingWorx. Or in the case of Windchill, it might make more sense to write an InfoEngine task to do my functionality than writing that on the ThingWorx side.
Possible example answers:
- My device is already deployed in the field and I cannot modify the configuration at all.
- My device is a new product, so I can put whatever software I want on it.
- I only have read access to my software system, so I must do all processing externally.

If you wrote down all of those, it is time to determine what the integration options are. The typical workflow that I follow is this one: I look for any out-of-the-box supported protocol (determined at step 3) and then implement the needed functionality in the language that is best suited for my use case (usually JavaScript).

The list of protocols that the platform supports is published in different places:
- PTC Help Center Link
- ThingWorx Marketplace - https://marketplace.thingworx.com/items
The key point is that the list is alive and updated by both our partners and us. Usually the preferred way to write logic is by using the JavaScript services. It makes it incredibly fast to write down your business logic without having the need to recompile.

The elements from the ThingWorx ecosystem that we can use are the following:
- the ThingWorx server itself (it has built-in support for calling external webservices)
- ThingWorx Extensions. They are pieces of code written in Java that can help you achieve your use case. To be used whenever your ThingWorx server OOTB functionality (or Marketplace Extensions) does not allow you to develop your use case. There is no actual need to write an Extension if somebody else already developed it for you and published it in the ThingWorx Marketplace https://marketplace.thingworx.com/items. A link for understanding Extensions is the following: How to rapidly develop ThingWorx IIoT Extensions
- ThingWorx Integration Connectors
- ThingWorx Edge MicroServer: https://developer.thingworx.com/ems
- ThingWorx Edge SDKs: https://developer.thingworx.com/sdks

Examples:
- I have a third party server which allows me to send SMS and voice messages through it via its REST API.
  Answer: best here is to use the OOTB webservices support from ThingWorx, which exposes the HTTP verbs, like GET, POST, PUT, via the ContentLoaderFunctions (see the sketch at the end of this post).
- I have a device which has a TCP stack that is capable only of doing HTTP calls.
  Answer: I can point that device to do calls against the REST API of ThingWorx, in order to update data directly there.
- I have a fleet of 300,000 devices which are sending their data to an MQTT server.
  Answer: In this case I can use the MQTT Extension that is offered by ThingWorx.
- I have an external SQL server that does not accept inbound connections (behind a firewall) but I must get data from it. The network will however allow outbound connections.
  Answer: use the ADO.NET Edge Client, which must be installed in a location accessible to that server. The ADO.NET Edge Client will connect to the server and then to the ThingWorx platform, allowing the use of SQL statements directly from within the platform.
- I have a device which only accepts TCP connections and I want to read data from it. It sends data only after receiving a command through TCP.
  Answer: Use the TCP Extension available in the ThingWorx Marketplace. It is built specifically for this use case.
- I have a device which has lots of RAM and disk space and I must send data from it, while allowing software updates in the future.
  Answer: depending on your preferred coding language you can use either the ThingWorx Edge MicroServer (for which you must write code in LUA) or write an implementation in one of the ThingWorx Edge SDKs.
A key point here is to understand that the coding effort is identical in theory, and is only limited by the experience you have and the functionality that may be available more easily in Java vs. LUA vs. C vs. .NET.

I appreciate feedback on this article, in the hope of being able to continuously improve it.
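As referenced in the SMS/voice example above, here is a minimal, hypothetical sketch of calling an external REST API from a ThingWorx service via the ContentLoaderFunctions resource - the URL, headers and payload are placeholders for whatever the third-party system actually expects:

// Hypothetical endpoint of an external SMS provider
var params = {
    url: "https://sms.example.com/api/v1/messages",
    content: {
        to: "+15550100",
        text: "Threshold exceeded on asset 4711"
    },
    headers: {
        "Authorization": "Bearer <token>"   // whatever auth scheme the provider requires
    },
    timeout: 60
};

// PostJSON sends the content as a JSON body and returns the parsed JSON response
var response = Resources["ContentLoaderFunctions"].PostJSON(params);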
How to input database user credentials at runtime.

This blog assumes that you have already imported the Database Extension and configured the Thing Template. If you have not done this already, please see "Steps to connecting to your Relational Database" first.

Steps:

1. Create a Database Thing Template with the correct configuration. Example configuration for a MySQL database:

jDBCDriverClass: com.mysql.jdbc.Driver
jDBCConnectionURL: jdbc:mysql://127.0.0.1:3306/<DatabaseNameHere>?allowMultiQueries=true
connectionValidationString: SELECT NOW()
maxConnections: 100
userName: <DataBaseUserNameHere>
password: <DataBasePasswordHere>

2. Create any Generic Thing and add a service to create a Thing based on the Thing Template created in Step 1. Example:

// NewDataBaseThingName is the String input for the name of the database thing to be created.
// MySqlServerUpdatedConfiguration is the Thing Template with the correct configuration.
var params = {
    name: NewDataBaseThingName /* STRING */,
    description: NewDataBaseThingName /* STRING */,
    thingTemplateName: "MySqlServerUpdatedConfiguration" /* THINGTEMPLATENAME */,
    tags: undefined /* TAGS */
};
// no return
Resources["EntityServices"].CreateThing(params);

3. Add code to enable and then restart the above Thing using the EnableThing() and RestartThing() services. Example:

Things[NewDataBaseThingName].EnableThing();
Things[NewDataBaseThingName].RestartThing();

4. Test and confirm that the Database Thing services run as expected.

5. Now create a DataShape with the following fields:

jDBCDriverClass: STRING
jDBCConnectionURL: STRING
connectionValidationString: STRING
maxConnections: NUMBER
userName: STRING
password: PASSWORD

6. Now, in the Generic Thing created in Step 2, add code to update the configuration settings of the Database Thing. Make sure the JDBC Driver Class Name is never changed - if a different database connection is required, use a different Thing Template. Also, add code to restart the Database Thing using the RestartThing() service. Example:

var datashapeParams = {
    infoTableName : "InfoTable",
    dataShapeName : "DatabaseConfigurationDS"
};
// CreateInfoTableFromDataShape(infoTableName:STRING("InfoTable"), dataShapeName:STRING):INFOTABLE(DatabaseConfigurationDS)
var config = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(datashapeParams);

var passwordParams = {
    data: "DataBasePasswordHere" /* STRING */
};

// DatabaseConfigurationDS entry object
var newEntry = new Object();
newEntry.jDBCDriverClass = "com.mysql.jdbc.Driver"; // STRING
newEntry.jDBCConnectionURL = "jdbc:mysql://127.0.0.1:3306/<DatabaseNameHere>?allowMultiQueries=true"; // STRING
newEntry.connectionValidationString = "SELECT NOW()"; // STRING
newEntry.maxConnections = 100; // NUMBER
newEntry.userName = "DataBaseUserNameHere"; // STRING
newEntry.password = Resources["EncryptionServices"].EncryptPropertyValue(passwordParams); // PASSWORD
config.AddRow(newEntry);

var configurationTableParams = {
    configurationTable: config /* INFOTABLE */,
    persistent: true /* BOOLEAN */,
    tableName: "ConnectionInfo" /* STRING */
};
// ThingNameForConfigurationUpdate is the input string for the Thing name whose configuration needs to be updated.
// no return
Things[ThingNameForConfigurationUpdate].SetConfigurationTable(configurationTableParams);
Things[ThingNameForConfigurationUpdate].RestartThing();

7. Test and confirm that the Database Thing services run as expected.
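Once the configuration table has been pushed and the Thing restarted, any SQL service defined on the database Thing should pick up the new credentials. A quick, hypothetical check (the service name is a placeholder for whatever SQL service you defined on the template):

// Hypothetical SQL (Query) service "GetAllOrders" defined on the restarted database Thing
var orders = Things[NewDataBaseThingName].GetAllOrders();
logger.info("Connection OK, rows returned: " + orders.rows.length);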
Recently, mentor.axeda.com was retired. The content has not yet been fully migrated to Thingworx Community, though a plan is in place to do this over the coming weeks. Attached, please find OneNote 2016 and PDF attachments that contain the content that was previously available on the Axeda Mentor website.
Super simple widget that embeds the HTML5 audio tag, allowing MP3 files to be played and/or triggered by another mashup event.
Does WSEMS use SSL pinning?
Yes, it does. To explain the process a bit more: SSL pinning is the act of verifying a server certificate by comparing it to the exact certificate that is expected. We "install" a certificate on the EMS (copying the cert to the EMS device, and specifying the cert in config.json); the EMS will then check all incoming certs against the cert in config.json, looking for an exact match and verifying the certificate chain.

Should it be expected that the client, WSEMS.exe, installs the client certificate in order to validate the server certificate using SSL pinning?
This does not happen automatically; the cert must be downloaded and manually added to the device. The config must also be manually updated.

If so, how do we issue the client cert for WSEMS.exe? Does it need to be issued the same way as the server certificate?
If talking about pinning, one could use either the server cert directly, or the root cert (recommended). The root cert is the public cert of the signing authority; e.g. if the server uses a VeriSign-issued cert, one can verify the authenticity of the signer by having the VeriSign public cert. If talking about client certs (2-way auth: client verifies server, server verifies client), then the process is a little different.

What happens when the client certificate expires? Do all the devices go offline when the client certificate expires?
The device won't connect, with a failure to authorize. Once the server cert expires, the server cert needs to be updated everywhere. This is the advantage of using something like the VeriSign public cert (root cert) as the installed cert on the client: the root certs usually last longer than the issued certs, but they will have to be replaced eventually when they expire.

When using Entrust certificates, obtained at https://www.entrust.com/get-support/ssl-certificate-support/root-certificate-downloads/, how do we know that the certificates have been fed to the WSEMS configuration incorrectly?
With the wrong configuration, WSEMS would error out rejecting the Entrust certificate:
Non-FIPS: Error code 20: Invalid certificate
FIPS: Error code 19: self signed certificate in certificate chain

How do we properly install an Entrust certificate to be accessed by the EMS?
The public certificate needs to be downloaded and put in a directory that is accessible to the EMS. The path then needs to be added to the cert_chain field of the configuration. Even if it's trusted, it will always need to be installed on the EMS. If a certificate is self-signed, meaning it's non-trusted by default, OpenSSL would throw errors and complain; to solve this, it needs to be installed as a trusted server. If it's signed by a non-trusted CA, that CA's certificate needs to be installed as well.

Once the certificates are obtained, this is the block of the configuration we are interested in:

"certificates": {
    "validate": true,                 // Non-validated model is not recommended
    "allow_self_signed": false,       // Self-signed model is not recommended, yet theoretically better than non-validated
    "cert_chain": " "                 // Recommended, trusted - also the only way to work with a FIPS-enabled EMS
}

You may use an SSL test tool (for example, https://www.ssllabs.com) to find all chains.
For Entrust, both the Entrust L1K and G2 certificates need to be downloaded. Because cert_chain is an array type of field, the format to install the certificate paths would be:

"cert_chain": ["path1", "path2"]

For example:

"cert_chain": ["C:\\ThingworxLocation\\MicroServerLocation\\locationofcertificates\\entrust_l1k.pem", "C:\\ThingworxLocation\\MicroServerLocation\\locationofcertificates\\entrust_g2_ca.pem"]
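For context, here is a minimal sketch of how that block can sit inside a complete EMS config.json; host, port, application key and paths are placeholders, and only the certificates block is taken from the discussion above:

{
    "ws_servers": [ { "host": "my-thingworx-server.example.com", "port": 443 } ],
    "appkey": "<application key used by this EMS>",
    "certificates": {
        "validate": true,
        "allow_self_signed": false,
        "cert_chain": ["C:\\microserver\\certs\\entrust_l1k.pem", "C:\\microserver\\certs\\entrust_g2_ca.pem"]
    }
}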
In case we would like to create an external application and we aren't sure what the best solution to use is, below are some useful tips.

Scenario:
Let's say we use a gateway in order to access the external application we want to create. We would like to implement this gateway translating the ThingWorx standard protocol to the SCADA protocol. The system administrator, who manages the grid, has their own secure system, with a standard for communication inside the SCADA system, and we want to be able to get data from our system to the system they have. Let's also consider that the data is related to the electrical field.

Tips:
- It is recommended to develop a 3rd-party application that on one side talks to ThingWorx, and on the other side talks to the SCADA system. This external ThingWorx application would have a series-edge-interface allowing it to enter our customer's Ethernet network, in order for both systems to communicate.
- JDBC is not recommended - it's mostly for connecting to the database, which in our case is not the main purpose.
- Each REST API call to the platform uses a security accreditation (appkey or user/password); depending on the permissions contained in that token, access can be allowed to certain parts of the platform.
- Reasons for using the REST API: it is simple and not dependent on any particular format for the data coming from ThingWorx. It can be used offline, online, synchronously, asynchronously, and is easy to manage from a formatting point of view.
- ThingWorx gives a lot of options, like exporting information via XML to a plain XML file, to parse it to whatever protocol we have on the other end:
  - Either our application will have to handle XML input from ThingWorx and process it towards SCADA-compatible output,
  - or we can talk directly via the REST API and read on a per-Thing basis (using the web services). The interface application just has to know how to read XML or REST API calls (which are provided with an XML-formatted response; see the example calls after this list).
- The SDK already has libraries written in C, C# and Java. The C, C# and Java SDKs use the AlwaysOn protocol (WebSocket) and are more firewall friendly. They are mostly for speed and automated processing, i.e. when we know exactly what happens, we trust the other side and there is little chance of error.
- If we go with the REST API or SDK, the application that is developed can have complete access inside ThingWorx, e.g. it can change/edit Things. If we want access in both directions - not only reading data, but also updating/deleting information, etc. - the SDK and REST API can be used, because we have the whole range of commands, like setting property values, calling services, etc. We can limit access, if we want, for security reasons.
- Otherwise, it's better to go with decoupled XML files.

Conclusion: for this particular scenario, it is better to use the SDK.
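As a minimal sketch of the REST API option discussed above, a gateway could read and write a single property value with plain HTTP calls - the Thing, property and key names below are placeholders:

# Read one property value, asking for a JSON response
curl -H "appKey: <your application key>" \
     -H "Accept: application/json" \
     "https://<thingworx-server>/Thingworx/Things/GridSensor01/Properties/Voltage"

# Write a property value (PUT with a JSON body)
curl -X PUT \
     -H "appKey: <your application key>" \
     -H "Content-Type: application/json" \
     -d '{"Voltage": 231.7}' \
     "https://<thingworx-server>/Thingworx/Things/GridSensor01/Properties/Voltage"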
Introduction

The In-Memory Column Store keeps data in columnar format, contrary to row format, allowing users to run faster analytics; the idea behind this is to push the computation as close to the data store as possible. In this post I'll configure the Oracle database to enable this feature and then populate one or more tables in the In-Memory Column Store.

This could be particularly helpful if you are using Oracle 12c as an external data store - holding data in database tables via a JDBC connection, or current/historic values from a DataTable, Stream or ValueStream - for running analytics or DMLs with lots of joins that require a lot of computation before the data is finally presented on the mashup(s). For this post I used the data generated by a temperature sensor, stored in a ValueStream, exported to CSV from the ValueStream and imported into an Oracle table.

In-Memory Column Store vs. In-Memory Database Usage

As mentioned above, Oracle 12c version 12.1.2 comes with the In-Memory Column Store feature built in. As the name suggests, it allows data to be populated in RAM, enabling high speed transactions and analytics on data without the need to traverse the HDD - and in some cases this is much faster than the buffer cache of the SGA. Without going into too much nitty-gritty, it's important to note that the In-Memory Column Store does not equate to an in-memory database. While it could be possible to move the entire schema, if it's small enough, to fit in the memory defined for the In-Memory Column Store, the idea is to speed up computation requiring analytics on one or more tables which are heavily queried by the users. If you are interested in an in-memory database as a persistence provider for ThingWorx, please refer to the documentation "Using SAP HANA as the Persistence Provider", which is one option among the other available persistence providers for ThingWorx.

What changes are required to the current Oracle 12c installation or the JDBC connection to ThingWorx?

The In-Memory Column Store feature is built into Oracle 12.1.2 and only needs to be enabled, as it is not by default. This can be done without bringing any change to:
1. The existing SQL services created within ThingWorx
2. The general application architecture accessing the tables in the Oracle database over JDBC
3. The existing Oracle 12c installation

Getting Started

What will it take to enable the In-Memory Column Store? This feature can be enabled by following a few steps:
1. Enable the feature in the Oracle 12.1.2 installation, by assigning some memory in RAM to the In-Memory Column Store
2. Adjust the SGA size for the database to incorporate the memory assigned to the In-Memory Column Store
3. Bounce the database

As mentioned above, this is a built-in feature in Oracle 12.1.2, but it is not enabled by default. We can confirm this by executing the following SQL in SQL*Plus or Oracle SQL Developer, connecting to the database for which we are enabling the feature:

SQL> show parameter INMEMORY;

Things to consider before enabling:
1. Ensure that the hardware / VM hosting the Oracle installation has sufficient RAM.
2. Ensure to bump up the SGA by the amount of memory assigned to the In-Memory Column Store; failing to do so may lead to the database failing to start, which will require recovery.

Note: The minimum memory that can be assigned to the In-Memory Column Store is 100M.

Setting it all up

For my test setup I will be assigning 5G to the In-Memory Column Store and will add this amount to the current SGA. To do this, let's start SQL*Plus with the rights that will allow me to make changes to the existing SGA, so I'm using sys@orcl as sysdba (ORCL is the test DB name I have for my database).

Step 1: Start SQL*Plus, e.g. sqlplus sys@orcl as sysdba
Step 2: ALTER SYSTEM SET INMEMORY_SIZE = 5G SCOPE=SPFILE;
Step 3: ALTER SYSTEM SET SGA_TARGET = 20G SCOPE=SPFILE;

Once done, bounce the database. And that's it! We should now be able to confirm, via SQL*Plus, that a certain amount of memory, 5G in my case, has been assigned to the In-Memory Column Store feature:

SQL> show parameter inmemory

Populating the In-Memory Column Store

The In-Memory Column Store will only populate the data from a table on first use, or if the table is marked critical, which tells Oracle to populate it as soon as the database comes online after a restart. For more detail on the commands concerning the In-Memory Column Store refer to the OTN webpage.

I'll now use the SensorHistory table, in which I have the ValueStream's exported data in CSV format. Currently this table is holding 32+ million rows. I'll populate them into the columnar architecture of the In-Memory Column Store with the following command:

SQL> ALTER TABLE SENSORHISTORY INMEMORY;  -- marks the table as eligible for the In-Memory Column Store with default parameters

Just to confirm that the data is still not populated, since we have only marked the table as eligible for the In-Memory Column Store: if I now query the dynamic view V$IM_SEGMENTS for the current usage of the in-memory area, it will confirm this.

So now let's populate the In-Memory Column Store with a query which requires a full table scan, e.g.

SQL> SELECT property_name, COUNT(*)
     FROM sensorhistory
     GROUP BY property_name;

Then recheck the dynamic view V$IM_SEGMENTS.

As mentioned above, this is completely transparent to the application layer, so if you already have an existing JDBC connection from ThingWorx to Oracle, all the existing services created for that table will continue to work as expected. If you don't have an existing JDBC connection to Oracle, it can be created with the usual steps, with no special configuration for In-Memory.

Creating the JDBC connection

I'm including this section for the purpose of completeness; if you already have a working JDBC connection to Oracle 12.1.2 you can skip to the Conclusion below. To access the above database, along with the In-Memory Column Store table, we'll now set up the JDBC connection to Oracle. For that, download and import the TW_Download_Relational Databases Connectors.zip (ThingWorx Marketplace) and unzip it to access the Oracle12Connector_Extension.zip.

Step 1: Import the extension into ThingWorx by navigating to Import/Export > Import > Extensions
Step 2: Create a Thing using the OracleDBServer12 Template, part of the extension we just imported
Step 3: Configure the Thing with a valid configuration to successfully connect to the database, ORCL in this case
Step 4: Navigate to the Properties in the Entity Information panel on the left and verify that the isConnected property value is True.
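With the JDBC Thing connected, an SQL (Query) service defined on it can run the same aggregation that was used above to warm up the column store - a sketch of such a service body (the service name and result DataShape are up to you):

-- Body of an SQL (Query) service on the OracleDBServer12-based Thing
SELECT property_name, COUNT(*) AS readings
FROM sensorhistory
GROUP BY property_name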
Conclusion

This is a very short introduction to a setup that can improve the analytics performed on stored data manifold. The data in the In-Memory Column Store is not stored in conventional row format, but rather in a large columnar format. If the need is for simple SQL queries with not too many joins, the SGA cache may be sufficient and probably faster, and you may not gain much by configuring the In-Memory Column Store. However, for queries requiring heavy computation on large data sets, having the In-Memory Column Store configured could bring a manifold increase in performance. If you need more guidelines on where you'd want to use the In-Memory Column Store, feel free to give the following good reads a try, along with real-world data use cases for reference. I will try to find some time to run my own benchmark and will try to put it out in a separate blog on performance gain.

1. Oracle Database In-Memory with Oracle Database 12c Release 2: Oracle white paper
2. When to Use Oracle Database In-Memory: Identifying Use Cases for Application Acceleration
3. Oracle Database 12c In-Memory Option
4. Testing Oracle In-Memory Column Store @ CERN
Here is a sample to run ConvertJSON, just for a test.

1. Create a DataShape.

2. There are 4 inputs for the ConvertJSON service:

fieldMap (the structure of the infotable in the JSON row)

json (the JSON content):

{
    "rows": [
        {
            "email": "example1@ptc.com"
        },
        {
            "name": "Lily",
            "email": "example2@ptc.com"
        }
    ]
}

rowPath (the JSON node that contains the table rows): rows

dataShape (the infotable DataShape): UserEmail
There are some scenarios where you don't necessarily want to connect to your corporate mail server, or a public mail server like Gmail - e.g. when testing a new function that could possibly spam the official mail servers - or the mail server is not yet available. In such a scenario it might be a good idea to use a custom, private mail server to be able to send and receive emails locally on a test or development environment.

In this post I will show how to use hMailServer and set up the ThingWorx mail extension to send emails. This post will concentrate on installing and deploying within a Windows environment - more specifically on a Windows 2012 R2 server virtual machine.

Installing hMailServer

Download and install the .NET Framework 3.5 (includes .NET 2.0 and 3.0). In Windows Server 2012 R2, open the Server Manager and in the Configuration add roles and features. Click through the "Role-based or feature-based installation" steps and install the ".NET Framework 3.5 Features" in case they are not already installed.

Download hMailServer via https://www.hmailserver.com/download. As always: the latest version is more stable, while the beta versions might provide more functionality and additional bug fixes. This post is based on version 5.6.6-B2383. Functionality and how-to clicks might change in other versions.

Note: The Microsoft .NET Framework is required for this installation. In case the .NET installation fails when installed through the hMailServer setup, it's best to cancel the installation and install the required .NET Framework manually instead of using the automatic download and installation offered by hMailServer. In case of such a failure it's best to play it safe and uninstall the mail server again, install the .NET Framework manually and then re-install hMailServer. (Any left-over directories should be deleted before re-installing.)

For the installation, choose your path and a full installation. Use the built-in database engine, set a password for the administrative user and install.

Configuring hMailServer

The hMailServer Administrator opens automatically after the installation - if not, you will find it in the Start menu. Connect to the default instance on localhost. The password is the one set up during the installation process.

Add a domain (e.g. mycompany.com) and save it. The domain will specify the domain of the mail addresses, e.g. user@domain (me@mycompany.com).

In the domain, add an account. Specify the address (e.g. noreply) and set a password (e.g. ts). Save the new account.

The default port used for SMTP is 25. For POP3 it's 110. This is configured under Settings > Advanced > TCP/IP ports. Ensure the ports for SMTP and POP3 are not blocked by a firewall in case you run into issues later on.

This setup should *usually* work. However, there might be hostname-specific SMTP issues. In case something happens - or to avoid errors in the first place - go to Settings > Protocols > SMTP > Delivery of e-mail and specify the Local host name. This should be the fully qualified hostname of the server (e.g. myserver.this.company.com).

Test hMailServer via telnet

Note: telnet needs to be installed for this test - in case it's not installed, Google can help.

Open a command line window and execute:

telnet <yourhostname> 25

This will open a connection to the SMTP port of the hMailServer. Manual commands can be sent to test if the basic send functions are working.
The following structure can be used for testing - it holds the manual input and server responses, each with a short description:

220 <HOSTNAME> ESMTP                 -- Connected to host
HELO mycompany.com                   -- Connect with the domain as defined in hMailServer
250 Hello.                           -- Connected
AUTH LOGIN                           -- Login as authenticated user
334 VXNlcm5hbWU6                     -- Base64 for "Username:"
bm9yZXBseUBteWNvbXBhbnkuY29t         -- Base64 for "noreply@mycompany.com"
334 UGFzc3dvcmQ6                     -- Base64 for "Password:"
dHM=                                 -- Base64 for "ts"
235 authenticated.                   -- Authentication successful
MAIL FROM: noreply@mycompany.com     -- Sender address
250 OK
RCPT TO: <your real mail address>    -- To address
250 OK
DATA                                 -- Body
354 OK, send.
Subject: sending mail via telnet     -- Subject line
                                     -- Blank line to indicate end of subject
just a simple test!                  -- Content
.                                    -- . indicates the end of the mail
250 Queued (10.969 seconds)          -- Mail queued and sent with duration
QUIT                                 -- Log off telnet
221 goodbye
Connection to host lost.             -- Log off confirmed

Username and password need to be Base64 encoded. See https://www.base64encode.org/ for Base64 conversions. (Tip: only text, don't add additional spaces or line breaks - otherwise the hash will be quite different!)

Configuring ThingWorx

Download and configure the mail extension

Download the MAIL EXTENSION from the ThingWorx Marketplace: https://marketplace.thingworx.com/Items/mail-extension

In ThingWorx, click Import / Export > Extensions > Import, choose the downloaded .zip file and import it. The Composer should be refreshed to reflect the changes introduced by the extension.

The extension created a new Thing Template: MailServer

Create a new Thing based on the MailServer Template. In its configuration adjust the servername and port to match the hMailServer configuration, e.g. localhost and port 25. Change the Mail User Account and Password to the authentication user (e.g. noreply@mycompany.com / ts). Save the configuration to persist the changes.

In any Thing, create a new Service to send mails and notifications. Insert a snippet based on Entities > <yourMailThing> > Send Message. Call the service manually for an initial functional test. It should look similar to this... but parameters need to be adjusted to your environment:

var params = {
    cc: undefined /* STRING */,
    bcc: undefined /* STRING */,
    subject: "sending email via ThingWorx" /* STRING */,
    from: "noreply@mycompany.com" /* STRING */,
    to: "<your real mail address>" /* STRING */,
    body: "just a simple test!" /* HTML */
};

// no return
Things["<yourMailThing>"].SendMessage(params);

Check your mailbox for incoming messages!

What next?

The mail server can also be used to receive emails. So instead of sending mails to your regular mail address and risking a ton of spam (depending on your services and frequency of sending automated emails), you could also configure a local Outlook / Thunderbird / etc. installation and send mails directly to the noreply@mycompany.com address. Those mails can then be downloaded from hMailServer via POP3.

With this, the whole send AND receive mechanism is contained within a single (virtual) machine.
Hi everyone,

As everyone knows already, the main way to define Properties inside the EMS Java SDK is to use annotations at the beginning of the VirtualThing class implementation. There are some use-cases where we need to define those properties dynamically, at runtime - for example when we use a VirtualThing to push a sensor's data from a Device Cloud to the ThingWorx server, for multiple customers. In this case the number of properties differs per customer, and due to the large number of variations we need to be able to define the Properties programmatically. The following code will do just that:

for (int i = 0; i < int_PropertiesLength; i++) {
    Node nNode = device_Properties.item(i);
    String str_NodeValue = nNode.getTextContent(); // discovered value of the node, used to guess the base type (not shown in the original snippet)
    PropertyDefinition pd;
    AspectCollection aspects = new AspectCollection();

    if (NumberUtils.isNumber(str_NodeValue)) {
        pd = new PropertyDefinition(nNode.getNodeName(), " ", BaseTypes.NUMBER);
    }
    else if ("true".equalsIgnoreCase(str_NodeValue) || "false".equalsIgnoreCase(str_NodeValue)) {
        pd = new PropertyDefinition(nNode.getNodeName(), " ", BaseTypes.BOOLEAN);
    }
    else {
        pd = new PropertyDefinition(nNode.getNodeName(), " ", BaseTypes.STRING);
    }

    //Add the dataChangeType aspect
    aspects.put(Aspects.ASPECT_DATACHANGETYPE, new StringPrimitive(DataChangeType.VALUE.name()));
    //Add the dataChangeThreshold aspect
    aspects.put(Aspects.ASPECT_DATACHANGETHRESHOLD, new NumberPrimitive(0.0));
    //Add the cacheTime aspect
    aspects.put(Aspects.ASPECT_CACHETIME, new IntegerPrimitive(0));
    //Add the isPersistent aspect
    aspects.put(Aspects.ASPECT_ISPERSISTENT, new BooleanPrimitive(false));
    //Add the isReadOnly aspect
    aspects.put(Aspects.ASPECT_ISREADONLY, new BooleanPrimitive(true));
    //Add the pushType aspect
    aspects.put("pushType", new StringPrimitive(DataChangeType.ALWAYS.name()));
    //Add the isLogged aspect
    aspects.put(Aspects.ASPECT_ISLOGGED, new BooleanPrimitive(true));
    //Add the defaultValue aspect if needed...
    //aspects.put(Aspects.ASPECT_DEFAULTVALUE, new BooleanPrimitive(true));

    pd.setAspects(aspects);
    super.defineProperty(pd);
}

//you need to comment out initializeFromAnnotations() and use initialize() instead for this to work.
//super.initializeFromAnnotations();
super.initialize();

Please put this code in the constructor of your class that extends VirtualThing. It needs to run exactly once, at instance creation. The code relies on the manual discovery of the sensor properties that you do beforehand. Depending on the implementation, you can either do the discovery of the properties inside this method (too slow) or pass the result as a parameter to the constructor (better), as sketched below.

Hope it helps!
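As a rough illustration of the constructor-parameter approach, here is a minimal sketch of a VirtualThing subclass that receives the already-discovered property definitions as a parameter. The class and parameter names are made up for this example, it assumes the SDK's three-argument VirtualThing constructor, and only defineProperty() and initialize() are taken from the snippet above:

import java.util.List;

// hypothetical example class - adjust names and the discovery logic to your own implementation
public class SensorVirtualThing extends VirtualThing {

    public SensorVirtualThing(String name, String description, ConnectedThingClient client,
                              List<PropertyDefinition> discoveredProperties) throws Exception {
        super(name, description, client);

        // register every property that was discovered before constructing this Thing
        for (PropertyDefinition pd : discoveredProperties) {
            super.defineProperty(pd);
        }

        // initialize() instead of initializeFromAnnotations(), as noted above
        super.initialize();
    }
}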
In this particular scenario, the server is experiencing a severe performance drop. The first step is to check the overall state of the server: CPU consumption, memory, disk I/O. Not seeing anything unusual there, the second step is to check the ThingWorx condition through the status tool available with the Tomcat manager. Per the observation:

- Despite 12 GB of memory being allocated, only 1 GB is in use.
- A large number of threads currently running on the server is experiencing long run times (up to 35 minutes).

Checking the Tomcat configuration didn't show any errors or potential causes of the problem, thus moving on to the second bullet: the threads need to be analyzed. One such thread had been running for 200,936 milliseconds - more than 3 minutes for an operation that should take less than a second. It was also noted that there were 93 busy threads running concurrently.

Causes: Concurrency when writing session variable values to the server. The threads are kept alive and block the system. Tracing the issue back to a piece of code in a service recently added to the application, the problem was solved by adding an IF condition so that the session variable values are only updated when needed. As a result, the update only happens once per shift.

Conclusion: Using Tomcat to view mashup-generated threads helped identify the service involved in the root cause. The modification required to resolve the issue was a small code change that reduces the frequency of the session variable update.
In this blog we will shed some light on gradient boost, which is a default algorithm in our Analytics platform. We will touch on:
1) The main purpose of gradient boost and how the technique works.
2) Its advantages and constraints.
3) Some "nice to know" tips for working with gradient boost.

Gradient boost is a machine learning technique whose main purpose is to turn weak prediction models into stronger ones. Gradient boost works by building one tree at a time, each new tree correcting errors made by the previous trees. The underlying idea is reweighting: badly predicted (misclassified) examples gain weight, while correctly classified examples lose weight. It is a similar strategy to dealing with stocks, where you balance the investment between bonds and shares. An analogy can also be made to illness: if a doctor informs you that you have a rare disease, you want to get a few more opinions from other doctors and evaluate all the information to make a better decision about how to cure yourself.

Why use gradient boost:
- Gradient boost provides the user with a powerful tool to boost/improve weak prediction models.
- Gradient boost works well with regression and classification problems, therefore decision trees can benefit from applying gradient boost.
- Gradient boost is known in the industry as one of the best techniques for model improvement.
- Gradient boost works in a stagewise fashion: each time it adds a tree, it does not go back and readjust the previous trees.

As with all machine learning algorithms, gradient boost also has some constraints:
- There is a chance of overfitting.

"Nice to know" tips:
- A natural way to reduce the risk of overfitting is to monitor and adjust the number of iterations.
- The depth of the trees might have an influence on the prediction error; observe what happens if the depth is a stump (1 level deep).
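To make the "one tree at a time, fitting the errors of the previous trees" idea concrete, here is a small illustrative sketch in Java: a 1-dimensional regression booster built from depth-1 stumps with squared error. It is a toy example only, does not reflect how ThingWorx Analytics implements the algorithm internally, and all data and parameter values are made up:

import java.util.ArrayList;
import java.util.List;

public class GradientBoostSketch {

    // a depth-1 tree: one split point and one constant prediction per side
    static class Stump {
        double threshold, leftValue, rightValue;
    }

    // fit a stump to the current residuals by trying every sample value as split point
    static Stump fitStump(double[] x, double[] residual) {
        Stump best = null;
        double bestError = Double.MAX_VALUE;
        for (double t : x) {
            double leftSum = 0, rightSum = 0;
            int leftCount = 0, rightCount = 0;
            for (int i = 0; i < x.length; i++) {
                if (x[i] <= t) { leftSum += residual[i]; leftCount++; }
                else           { rightSum += residual[i]; rightCount++; }
            }
            double leftMean = leftCount == 0 ? 0 : leftSum / leftCount;
            double rightMean = rightCount == 0 ? 0 : rightSum / rightCount;
            double error = 0;
            for (int i = 0; i < x.length; i++) {
                double p = x[i] <= t ? leftMean : rightMean;
                error += (residual[i] - p) * (residual[i] - p);
            }
            if (error < bestError) {
                bestError = error;
                best = new Stump();
                best.threshold = t;
                best.leftValue = leftMean;
                best.rightValue = rightMean;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        double[] x = {1, 2, 3, 4, 5, 6};
        double[] y = {1.2, 1.9, 3.1, 6.0, 6.2, 5.8};
        double learningRate = 0.5;
        int rounds = 10;

        double[] prediction = new double[x.length];
        List<Stump> model = new ArrayList<>();

        for (int m = 0; m < rounds; m++) {
            // the residuals are the errors made by the ensemble built so far
            double[] residual = new double[x.length];
            for (int i = 0; i < x.length; i++) residual[i] = y[i] - prediction[i];

            // the next tree is fitted to those errors, not to the raw target
            Stump stump = fitStump(x, residual);
            model.add(stump);

            // stagewise update: earlier stumps are never readjusted
            for (int i = 0; i < x.length; i++) {
                double p = x[i] <= stump.threshold ? stump.leftValue : stump.rightValue;
                prediction[i] += learningRate * p;
            }
        }

        for (int i = 0; i < x.length; i++) {
            System.out.printf("x=%.0f  actual=%.1f  predicted=%.2f%n", x[i], y[i], prediction[i]);
        }
    }
}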
You can control the Tracking Indicator that is used to mark the ThingMark position. The Tracking Indicator is a small green hexagon displayed at the tracked ThingMark. You can control the display of this indicator via the Display Tracking Indicator property of the ThingMark widget.

But you can also get fancier. Here is an example that shows the tracking indicator for 3 seconds once tracking has started and then hides it automatically. To achieve such a behavior you'll have to use a bit of JavaScript. We'll first create a function hideIn3Sec() in the JavaScript section of our view and then add it to the JavaScript handler of the Tracking Acquired event of the ThingMark widget.

Step 1: Here is the code for copy/paste convenience:

$scope.hideIn3Sec = function() {
  // The $timeout function has two arguments: the function to execute (defined inline here)
  // and the time in msec after which the function is invoked.
  $timeout(function hide() {
    // you may have to change 'thingMark-1' to the id of the ThingMark in your own experience
    $scope.app.view['Home'].wdg['thingMark-1']['trackingIndicator'] = false;
  },
  3000);
}

Step 2: Add hideIn3Sec() to the JavaScript handler of the ThingMark widget's Tracking Acquired event.

That's it. Have fun!
The ThingWorx Analytics Interactive API Guide is a great way for users to familiarize themselves with the ThingWorx Analytics API calls. It even gives users the ability to run jobs through its interface. This blog post covers how to access the ThingWorx Analytics Interactive API Guide installed on a virtual machine or standalone server.

Steps:
1. Get the IP address of the ThingWorx Analytics Server: type ip a.
2. Put that IP address into the desired web browser (your IP address will most likely differ from the example).
3. Add the port number of the server to the end of the IP address. The default port number is 8080. Make sure to put a colon ":" between the end of the IP address and the start of the port number. The port number could be different in some cases, depending on whether it was configured differently during installation.
4. Hit Enter and the main page will load.
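If the page does not come up, a quick way to verify that the server is reachable on that port is a small connectivity check, sketched here in Java; the IP address and port are placeholders and need to be replaced with your own values:

import java.net.HttpURLConnection;
import java.net.URL;

public class CheckAnalyticsServer {
    public static void main(String[] args) throws Exception {
        // replace with the IP address and port determined in the steps above
        URL url = new URL("http://192.168.0.10:8080/");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("GET");
        connection.setConnectTimeout(5000);
        System.out.println("HTTP status: " + connection.getResponseCode());
        connection.disconnect();
    }
}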
Hello,

There have been some inquiries about how one can use AngularJS for developing custom parts that run in the ThingWorx environment. To address these inquiries, I have created a document that describes the process of integrating AngularJS with ThingWorx. The attached document comes with the source code for the examples presented throughout the document and an extension for AngularJS 1.5.8 and angular-material components.

Feedback is appreciated. Thank you.