IoT Tips

Today on Ask Kaya, I have a riddle.

I was effective and trendy, but now I can be annoying. I sometimes tend to look out of place and I get in others' space. I look easy to learn, but few have truly mastered my intricacies. Am I the floss dance?

No, I'm not the floss dance. I'm the expression and validator widgets.

It's time to say goodbye to those pesky widgets that were super useful but super annoying. Yep, those widgets that littered your design canvas but were "Invisible at Runtime." I'm talking about expressions, validators, status messages and event routers. In our next release, expression and validator widgets will no longer appear on the canvas at build time.

You may remember the previous post titled "Ask the Expert: What are the top three features in ThingWorx 8.4 that I might not know about?" In that post, we discussed the concept of Data Helpers, now known as "Functions."

What do Functions do? Functions give you the ability to add custom logic and bindings to improve UI application functionality. Before we describe how to configure them, let's first explain what they are.

An expression widget runs any expression you give it. That piece of logic might be something like result = a + b. While an expression can run any type of logic—not just numbers—you must specify the output base type to be the same as the input base type.

Expressions can also be used to run a different service based on an event. For example, a user may write an expression to run a service if a value changes. They may not care about the value itself but rather just want to know that the value changed.

A validator widget is similar to an expression widget; the key difference is that a validator only outputs a Boolean. When the result is true, you bind to one service; when false, another. Unlike an expression widget, the validator widget does not have to have matching input and output base types, because the output base type will always be Boolean.

The input for a validator can be anything. You can create a scenario in which a validator widget outputs a status message that reads "the value is within the acceptable range" when the validator returns true or "the value is outside of the acceptable range" when the validator returns false.

Ready for the extra good stuff? We're introducing a new editor for you to create, add and configure expressions and validators.

How can I use Functions? Let's walk through an example using the following steps.

Create a new Mashup. You'll see a new tab called "Functions," which, by default, appears in the bottom right panel. Click the "+" arrow in the top right of the "Functions" panel. Choose "expression." Use the new Functions editor to write your expression. In this example, we'll say that result = a + b; (New Functions Editor) We'll then set the default values for a and b to be 2 and 3, respectively, to output a result of 5.

Expressions are just as powerful as they were before, but they no longer take up space on your mashup during design time, and they can now be configured in our brand new editor! (To spread the joy even more, the same holds true for validator widgets.)

Reach out with any questions or thoughts below!

Stay connected and keep floss dancing,
Kaya
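As a rough sketch of what the function bodies look like, here is what you might type into the new Functions editor; the parameter names a, b and value are illustrative assumptions, not names the editor generates for you.

// Expression Function body (assumed inputs: a and b; the output is named result)
result = a + b;

// Validator Function body (assumed input: value; the output must evaluate to a Boolean)
result = (value >= 0 && value <= 100);

You would then bind the validator's true/false outcome to the services or status messages you want to trigger, exactly as described above.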
Predicting time to failure (TTF) or remaining useful life (RUL) is a common need in the IIoT world. We are looking here at some ways to implement it. We are going to use one of the NASA datasets publicly available that simulates turbofan engine degradation (https://c3.nasa.gov/dashlink/resources/139/). The original dataset has 26 features, as below:

Column 1 – asset id
Column 2 – cycle/time of sensor data collection
Columns 3-5 – operational settings
Columns 6-26 – sensor measurements

In the training dataset the sensor measurements end when the failure occurs.

Data Collection

Since the prediction model is based on historic data, data collection is a critical point. In some cases the data will have already been collected in the past and you need to make the best out of it; see the Data Preparation chapter below. In situations where you are still collecting data, a few points are good to keep in mind (some may or may not apply depending on the type of data to be collected):

More frequent (higher frequency) collection is usually better, especially for electronic measurements.
In situations where one or more specific sensor values are known to impact the TTF, it is good to take measurements at different values of this sensor until the failure, without artificially modifying the values. For example, for a light bulb with a normal working voltage of 1.5V, it is good to take some measurements at, let's say, 1V, 1.5V, 2V, 3V and 4V - but each time, run until the failure. Do not start at 1.5V and switch to 4V after 1h; this would compromise what the model can learn.
More variation is better, as it helps the prediction model to generalize. In the same voltage example, it is best to collect data for 1V, 1.5V, 2V, 3V and 4V rather than just 1.5V, which would be the normal running condition. This also depends on the use case: for example, if we know for sure that the voltage will always be between 1.45V and 1.55V, then we could focus only on data collection in this range.
Once the failure is reached, stop collecting data. We are not interested in what happens after the failure, and collecting data after the failure will also lower the prediction model's accuracy.
Each failure run should be a separate cycle in the dataset. In other words, from a metadata standpoint, each failure run should be represented by a different ENTITY_ID.

TTF business need

Before going into data preparation and model creation, we need to understand what information is important in terms of TTF prediction for our business need. There are several ways to conceive the TTF, for example:

Exact time value when the failure might occur. This is probably going to be the most challenging to predict. However, one should consider whether it is really necessary: do we need to know that a failure might occur in 12 min as opposed to 14 min? Very often, knowing that the time to failure is less than X min is what is important, not the exact time. So the following options are often more appropriate.
Threshold. For some applications, knowing that a critical threshold is reached is all that is needed. In this case a Boolean goal, for example lessThan30min or healthy with yes/no values, can be used. This is usually much easier than predicting the exact value above.
Range. For other applications we may need a bit more insight and try to predict some ranges, for example: lessThan30min, 30to60min, 60to90min and moreThan90min. In this case we will define an ordinal goal. The caveat here is that currently ThingWorx Analytics Builder does not support ordinal goals, though ThingWorx Analytics Server does support them. It only means that the model creation needs to be done through the API. This is the option we will take with the NASA dataset.

The picture below shows the 3 different types of TTF listed above.

Data Preparation

General feature engineering
Data preparation is always a very important step for any machine learning work. It is important to present the data in the most suitable way for the algorithms to give the best results. There are a lot of practices that can be used, but they are beyond the scope of this post. The Feature Engineering post gives some starting points, and there are also a lot of resources available on the Internet, though the help of a data scientist may be necessary. As an example, in the original NASA dataset we can see that a few features have a constant value; they are therefore unlikely to impact the prediction and will be removed. This frees up computational resources and prevents confusion in the model.

Sensor data resampling
The data sampling across the different sensors should be uniform. In a real-case scenario we may have sensor data collected at different time intervals. Data transformation/extrapolation should be done so that all sensor values are at the same frequency in the uploaded dataset.

TTF feature
Since we want to predict the time to failure, we need a column in the dataset that represents this value for the data we have. In a real-case scenario we obviously cannot measure the time to failure, but we usually have sensor data up to the point of failure, which we can use to derive the TTF values. This is the case in the NASA dataset: the last cycle corresponds to the time when the failure occurred. We can therefore derive a new feature TTF in this dataset. It will start at 0 for the last cycle, when the failure occurred, and will be incremented by 1 up to the very first measurement, as shown below.

Once this TTF column is defined, we may need to transform it further depending on the path we choose for TTF prediction, as described in the TTF business need chapter. In the case of the NASA dataset we are choosing a range TTF with the values more100, 50to100, 10to50 and less10 to represent the number of remaining cycles until the predicted failure. This is the information we need to predict in order to plan a suitable maintenance action. Our transformed TTF column looks as below.

Once the data in CSV is ready, we need to create the JSON file that represents the metadata. In the case of a range TTF this will be defined as an ordinal goal, as below (see the attachment for the full metadata JSON file):

{
    "fieldName": "TTF",
    "values": ["less10",
               "10to50",
               "50to100",
               "more100"],
    "range": null,
    "dataType": "STRING",
    "opType": "ORDINAL",
    "timeSamplingInterval": null,
    "isStatic": false
}

Model creation
Once the data is ready it can be uploaded into ThingWorx Analytics and work on the prediction model can start. ThingWorx Analytics is designed to make machine learning easy and accessible to non data scientists, so these steps will be easier than with other solutions.
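The TTF derivation and bucketing can be scripted in many ways; here is a minimal ThingWorx-style JavaScript sketch, assuming the raw rows are available in an InfoTable named raw with an asset 'id' column, a 'cycle' column and a STRING 'TTF' field already present in its DataShape - all names are illustrative assumptions, not part of the original dataset preparation.

// First pass: find the last (failure) cycle per asset
var lastCycle = {};
for (var i = 0; i < raw.rows.length; i++) {
    var row = raw.rows[i];
    if (lastCycle[row.id] === undefined || row.cycle > lastCycle[row.id]) {
        lastCycle[row.id] = row.cycle;
    }
}
// Second pass: derive the remaining cycles and bucket them into the ordinal ranges
for (var j = 0; j < raw.rows.length; j++) {
    var r = raw.rows[j];
    var remaining = lastCycle[r.id] - r.cycle; // 0 at the failure cycle
    if (remaining < 10) {
        r.TTF = "less10";
    } else if (remaining < 50) {
        r.TTF = "10to50";
    } else if (remaining < 100) {
        r.TTF = "50to100";
    } else {
        r.TTF = "more100";
    }
}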
However, some trial and error is needed to refine the model, which may also involve reworking the dataset. Important considerations:

When dealing with time to failure prediction, it is usually necessary to unset Use Goal History in the Advanced parameters of the model creation wizard. If using the API, the equivalent is to set the virtualSensor parameter to true.
Tests with the Redundancy Filter enabled should be done, as this has been shown to give better results.
In a first attempt it is a good idea to keep Lookback Size at 0. This tells ThingWorx Analytics to find the best lookback size between 2, 4, 8 and 16. If you need a different value or know that a different value is better suited, you can change this accordingly. However, bear in mind the following: a larger lookback size will lead to less data being available to train on, since more data is needed to predict one goal, and larger lookbacks do lead to a significant memory increase – see https://www.ptc.com/en/support/article?n=CS294545

In the case of the NASA dataset, since we are using an ordinal goal, we need to execute the training through the API. For a more productive way of working, this can be done through a mashup and services (see How to work with ordinal and categorical data in ThingWorx Analytics ? for an example). As a test, the TrainingThing.CreateJob service can be called from the Composer directly, as shown below.

Once the model is created we can check some performance statistics in ThingWorx Analytics Builder or, in the case of an ordinal goal, via the ValidationThing.RetrieveResults service. The most relevant result in the case of an ordinal goal is the confusion matrix. Here is the confusion matrix I get:

Another validation is to compute some PVA (Predicted vs Actual) results for some validation data. ThingWorx Analytics runs validation automatically when using ThingWorx Analytics Builder and presents some useful performance metrics and graphs. In the case of an ordinal goal, we still get this automatic validation run (hence the above confusion matrix), but no PVA graph or data is available. This can be done manually if some data is kept aside and not passed to the training microservice. Once the model is completed, we can then score this validation dataset (using PredictionThing.RealTimeScore or BatchScore for an ordinal goal, or the Builder UI for other goals) and compare the prediction results with the actual values. Here is one example:

Depending on the business case this model can be deemed acceptable or may need rework, such as changing the range values, changing learners' parameters, modifying the dataset, etc. There is certainly a fair amount of experimentation before creating the optimal model, but hopefully this post gives some good starting points.

Resources:
Original dataset attached as train_FD001-original.csv
Transformed dataset attached as train_FD001-TTF-transformed.csv
JSON metadata file for the transformed dataset attached as train_FD001-ttford.json
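If you keep such a hold-out set, the Predicted vs Actual comparison itself is just an InfoTable walk; here is a minimal sketch, assuming 'scored' holds the scoring output with the predicted range in a field named TTF and 'actuals' holds the hold-out rows with the true TTF in the same order - both names and the field layout are assumptions for illustration.

// Count how many predicted ranges match the actual ranges
var total = scored.rows.length;
var correct = 0;
for (var i = 0; i < total; i++) {
    if (scored.rows[i].TTF === actuals.rows[i].TTF) {
        correct++;
    }
}
var accuracy = (total > 0) ? (correct / total) : 0;
logger.info("Predicted vs Actual: " + correct + "/" + total +
            " correct (" + (accuracy * 100).toFixed(1) + "%)");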
Timers and schedulers can be useful tools in a ThingWorx application. Their only purpose, of course, is to create events that can be used by the platform to perform any number of tasks. These can range from requesting data from an edge device, to doing calculations for alerts, to running archive functions for data. Sounds like a simple enough process. Then why do most platform performance issues seem to come from these two simple templates?

It all has to do with how the event is subscribed to and how the platform needs to process events and subscriptions. The task of handling MOST events and their related subscription logic falls to the EventProcessingSubsystem. You can see its metrics via the Monitoring -> Subsystems menu in Composer. This will show you how many events have been processed and how many events are waiting in the queue to be processed, along with some other settings. You can often identify issues with Timers and Schedulers here: you will see the number of queued events climb and the number of processed events stagnate.

But why!? Shouldn't this multi-threaded processing take care of all of that? Most times it can easily do this, but when you suddenly flood it with transactions all trying to access the same resources at the same time, it can grind to a halt.

This typically occurs when you create a timer/scheduler and subscribe to its event at a template level. To illustrate this, let's look at an example of what might occur. In this scenario, imagine we have 1,000 edge devices that we must pull data from. We only need to get this information every 5 minutes. When we retrieve it we must look up some data mapping from a DataTable and store the data in a Stream. At the 5-minute interval the timer fires its event. Suddenly, all at once, the EventProcessingSubsystem gets 1,000 events. This by itself is not a problem, but it will concurrently try to process as many as it can to be efficient. So we now have multiple transactions all trying to query a single DataTable at once. In order to read this table, the database (no matter which back-end persistence provider) will lock parts or all of the table (depending on the query). As you can probably guess, things begin to slow down, because while one transaction holds the lock, many others are trying to acquire it. This happens over and over until all 1,000 transactions are complete. In the meantime, we are also executing other commands in the subscription and writing Stream entries to the same database inside the same transactions. Additionally, remember that all of these transactions and the data they access must be held in memory while they are running. You will also see a memory spike and, depending on resources, can run into a problem here as well.

Regular events can easily be part of any use case, so how should that work? The trick here comes in two parts. First, any event a Thing raises can be subscribed to on that same Thing. When you do this, the subscription transaction does not go into the EventProcessingSubsystem. It will execute on the threads already open in memory for that Thing. So subscribing to a timer event on the Timer Thing that raised the event will not flood the subsystem.

In the previous example, how would you go about polling all of these Things? Simple: you take the exact logic you would have executed on the template subscription and move it to the timer subscription.
To keep the context of the Thing, use the GetImplementingThings service on the template to retrieve the list of all 1,000 Things created from it. Then loop through these Things and execute the logic, as in the sketch below. This also means that all of the DataTable queries and logic will be executed sequentially, so the database locking issue goes away as well. Memory issues decrease too, because the memory allocated for the queries is either reused or cleaned up during garbage collection, since the variable that holds the result is reassigned on each loop iteration.

Overall, it is best not to use Timers and Schedulers whenever possible. Use data-triggered events, UI interactions or REST API calls to initiate transactions whenever you can. This lowers the overall risk of flooding the system with resource demands, from processor, to memory, to threads, to database. Sometimes, though, they are needed. Follow the basic guidelines here and things should run smoothly!
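Here is a minimal sketch of such a timer subscription, placed on the Timer Thing itself; the template name "EdgeDeviceTemplate" and the per-device service "PollDevice" are illustrative assumptions standing in for your own entities and logic.

// Subscription on the Timer Thing (Event: Timer) - runs on the Timer's own threads,
// so the EventProcessingSubsystem is not flooded with 1,000 separate events
var implementingThings = ThingTemplates["EdgeDeviceTemplate"].GetImplementingThings();
for (var i = 0; i < implementingThings.rows.length; i++) {
    var thingName = implementingThings.rows[i].name;
    // Poll, look up the mapping and write the Stream entry sequentially, one Thing at a time
    Things[thingName].PollDevice();
}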
ThingWorx Analytics can be assembled on multiple operating systems. In this post, we will discuss common issues that have been encountered by other users.

Permissions Denied – Read/Write access to Third Party Components
This is encountered when executing the desired shell script to begin the creation process. On macOS and Linux you may encounter a "Permissions Denied" error on the two components required for the creation, the packer-post-processor-vhd and packer components.

Error Message
This will result in a Terminal message that reads "Process Completed, No Artifacts Created". This indicates that the Packer script has failed to complete the task, and the desired appliance images were not created. To correct this issue, you will have to change the permissions of the packer-post-processor-vhd and packer components so that they are readable and executable by the user account that is attempting to create the appliance.

Solution
Run the following commands in the Virtual Machine terminal (you may need to run them with sudo or as root):

chmod +x packer-post-processor-vhd
chmod +x packer

After running the above commands, run the shell script for the desired VM appliance output. This should resolve the "Permissions Denied" issue while executing the build scripts.

Error Starting Appliance in VirtualBox
Users have experienced this issue on the first run of the appliance, right after it has been assembled. This issue is unique to VirtualBox versions 5.0 and above.

Error Message – Dialog Box
If you encounter the error depicted below, please check the settings of the imported OVA for any errors. This issue is the result of invalid settings in the appliance configuration. You will need to check for invalid settings by navigating to the Settings menu for the appliance. The "Invalid settings detected" message indicates that when the product was assembled, some configuration settings were not applied correctly by the creation tool scripts.

Solution
Hover your mouse over the settings and it will direct you to the cause; in this case it is due to the remote display setup. Change the settings in Display (Remote Display tab) by unchecking the "Enable Server" option. Press OK after unchecking the "Enable Server" option, and start the appliance.
Get MQTT (like mosquitto) operating with SSL - use http://rockingdlabs.dunmire.org/exercises-experiments/ssl-client-certs-to-secure-mqtt as your primary guide to building out your self-signed CA cert and your server cert and key. Simply follow their directions, with the one caveat of setting the IPLIST and HOSTLIST environment variables prior to executing the generate-CA.sh script. This will be necessary for hosted environments like AWS where the actual IP address of the system cannot be used to access the server from the internet. Put the external-facing IP address in IPLIST and the external-facing fully qualified domain name (FQDN) into HOSTLIST. If you have multiple usable IP addresses or hostname aliases, enclose them in quotes and separate them with spaces (export IPLIST="1.2.3.4 5.6.7.8"). Complete steps 1-3 in the instructions above. This is sufficient to get the MQTT traffic encrypted and use it with ThingWorx.

Do not proceed until you can make a mosquitto_pub and mosquitto_sub pass data using the --cafile option and get an error if you do not supply the --cafile option. Make sure you have a copy of the ca.crt file generated by the script above to reference in the commands. Note that it may be necessary to use the IP address rather than the FQDN.

mosquitto_sub --cafile path/to/ca.crt -h ipaddr -t topic
mosquitto_pub --cafile path/to/ca.crt -h ipaddr -t topic -m message

Create an MQTT Thing in ThingWorx based on the MQTT ThingTemplate. Create a property in the new Thing for sending messages to the MQTT broker. In the configuration page for the new MQTT Thing, set the serverName and serverPort and check the useSSL checkbox. In the Property to MQTT topic mappings, create a publish entry that points to the property you created in the Thing and set the topic to the MQTT topic on which you want to publish.

The ca.crt file created by the above script is the certificate for a new Certificate Authority (self-signed, so not really official). Clients may have to import this certificate into their trusted CA root store in order to make the encryption work.

Add the ca.crt file from the MQTT broker system to a keystore file that will become Tomcat's truststore (the list of CAs trusted by the server). See the Tomcat documentation if you need to configure HTTPS on Tomcat as well. Create a new keystore as a truststore if one does not already exist:

keytool -import -trustcacerts -file /path/to/ca/ca.crt -alias CA_ALIAS -keystore path/to/TrustStore -storepass mypassword

Replace CA_ALIAS with some identifying string like MyPrivateCertificateAuthority (it did not appear to care about the CA_ALIAS value used). Replace path/to/TrustStore to point to the file that already exists or that you want to create.

Add the following to the CATALINA_OPTS for starting Tomcat:

-Djavax.net.ssl.trustStore=path/to/TrustStore -Djavax.net.ssl.trustStorePassword=xxxx

Replace path/to/TrustStore with the pathname of the file you created/updated with keytool above. Replace xxxx with whatever password you used in the keytool command above. Restart Tomcat.

Check the MQTT Thing for its isConnected property. It should now be true. If it is not, then check the log files for mosquitto and for Tomcat, looking for SSL issues. Change the property value and see it appear in the output of a (properly constructed) mosquitto_sub --cafile path/to/ca.crt -h ipaddr -t topic running somewhere.
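As a quick sanity check from the ThingWorx side, a minimal JavaScript sketch (the Thing name "MyMqttThing" and the property name "outboundMessage" are illustrative assumptions - use whichever Thing and publish-mapped property you created above):

// Setting the publish-mapped property sends the value to the configured MQTT topic over TLS
Things["MyMqttThing"].outboundMessage = "hello over TLS";
// isConnected should be true if the SSL connection to the broker is healthy
logger.info("MQTT broker connected: " + Things["MyMqttThing"].isConnected);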
In this blog we will have a look at the installation of the ThingWorx Analytics Builder extension. This is intended as a guideline, but make sure to check the Help Center for your release, as steps do vary with versions. The installation has been divided into 3 parts:

Introduction and import of the extension into the ThingWorx Platform
Video Link : 1568

Configuration of the extension
Note: For release 8.1, the Settings menu differs from previous versions; see What's New in ThingWorx Analytics Builder 8.1 between times 00:12 sec and 00:40 sec for the up-to-date menu selection.
Video Link : 1572

Installation of the UploadThing module
Note: this step no longer applies as of ThingWorx Analytics 8.1
Video Link : 1573

Useful links:
PTC Download page for ThingWorx Analytics
PTC Reference Document page for ThingWorx Analytics
How to copy files from Windows to Linux?
The following is valid for ThingWorx Analytics (TWA) 52.0.2 through 8.0. For release 8.3.0 and above see How to score new data in ThingWorx Analytics 8.3.x ?

Overview
The main steps are as follows:
Create a dataset
Configure the dataset
Upload data to the dataset
Optimize the dataset
Create filters for training and scoring data
Train the model
Execute scoring on existing data
Upload new data to the dataset
Execute scoring on new data

TWA models are dataset centric, which means a model created with one dataset cannot be reused with a different dataset. In order to be able to score new data, a specific feature - record purpose in the example below - is included in the dataset. This feature needs to be included from the beginning, when the data is first uploaded to TWA. A filter on that feature can then be created to isolate the desired data. When new data comes in, it is added to the original dataset but with a specific value for the filtered feature (record purpose), which makes it possible to discriminate and score only those new records.

Process
1. Create a dataset
This example uses the beanpro demo dataset. Creating the dataset is done through a POST on the datasets REST API, as below.
2. Configure the dataset
This is done through a POST on the <dataset>/configuration REST API.
3. Upload data
4. Optimize the dataset
5. Create filters
The dataset includes a feature named record purpose, created especially to differentiate between the rows to be used for training and the rows to be used for scoring. New data to be added will have record purpose set to scoringnew, which allows a scoring job to be executed that is limited to those filtered new rows.
Filter for training data:
Filter for new scoring data:
6. Train the model
This is done through a POST on the <dataset>/prediction API.
7. Score the training data
This is done through a POST on the <dataset>/predictive_scores API. Note the use of the filter TrainingData created earlier. This allows scoring of only the rows with training as the value of the record purpose feature. Note: scoring could also be done without a filter at this stage, in which case all the data in the dataset will be scored, not just the rows with training for record purpose.
Retrieving the scoring result shows all the records in the dataset.
8. Upload new data
The newly uploaded CSV file should contain only new records. These will be appended to the existing ones. Note that the new record (there could be more than one) has the value scoringnew for the record purpose feature. This allows the previously created filter ScoringNewData to be used, so that a new scoring job will only take this new record into account.
9. Score the new data
A POST on the predictive_scores API is executed, this time using the filter ScoringNewData. This results in only the newly added data being scored, and therefore a much quicker execution time too. Retrieving the scoring result shows only the new record.
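If you prefer to trigger step 9 from a ThingWorx service instead of an external REST client, a sketch along these lines could be used; the host, port, exact resource path, request body fields and credentials are all assumptions for illustration and must be checked against the TWA API documentation for your version.

// Every URL, body field and credential below is an illustrative assumption
var params = {
    url: "http://twa-server:8080/1.0/datasets/beanpro/predictive_scores",
    content: {
        filter: "ScoringNewData"  // hypothetical body field: score only the new rows
    },
    username: "twa_user",
    password: "twa_password"
};
// PostJSON returns the parsed JSON response, typically containing a job identifier
var response = Resources["ContentLoaderFunctions"].PostJSON(params);
logger.info("Scoring job submitted: " + JSON.stringify(response));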
Original Post Date: June 6, 2016

Description: This is a video tutorial on creating an InfoTable through a service, creating and adding a Data Shape, adding rows to the InfoTable through a service, adding rows to an InfoTable property, and querying the InfoTable.
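For reference, a minimal script version of the steps covered in the video; the DataShape name "PersonDS", its fields and the property name are assumptions for illustration only.

// Assumes a DataShape "PersonDS" with STRING field 'name' and NUMBER field 'age' exists
var it = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape({
    infoTableName: "InfoTable",
    dataShapeName: "PersonDS"
});
// Add rows to the InfoTable
it.AddRow({ name: "Alice", age: 34 });
it.AddRow({ name: "Bob", age: 29 });
// Assign the rows to an InfoTable property on the Thing (property name is illustrative)
// me.People = it;
// Query the InfoTable for rows matching a filter
var queried = Resources["InfoTableFunctions"].Query({
    t: it,
    query: { filters: { type: "GT", fieldName: "age", value: 30 } }
});
result = queried; // service output: rows where age > 30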
In this blog I will be testing the SAPODataConnector using the SAP Gateway - Demo Consumption System.

Overview
The SAPODataConnector enables the connection to the SAP Netweaver Gateway through the OData specification. It is a specialized implementation of the ODataConnector. See Integration Connectors for documentation.

It relies on three components:
Integration Runtime: a microservice that runs outside of ThingWorx and has to be deployed separately; it uses WebSocket to communicate with the ThingWorx platform (similar to EMS).
Integration Subsystem: available by default since 7.4 (no extension needed).
Integration Connector: SAPODataConnector, available by default in 8.0 (no extension needed).

ThingWorx can use OAuth to access SAP, but in this blog I will just use basic authentication.

SAP Netweaver Gateway Demo system registration
1. Create an account on the Gateway Demo system (credentials to be used on the connector are sent by email).
2. Verify that the account has access to the basic OData sample service: https://sapes4.sapdevcenter.com/sap/opu/odata/IWBEP/GWSAMPLE_BASIC/

Integration Runtime microservice setup
1. Follow WindchillSwaggerConnector hands-on (7.4) - Integration Runtime microservice setup.
Note: Only one Integration Runtime instance is required for all your Integration Connectors (multiple instances are supported for High Availability and scale).

SAPODataConnector setup
Use the New Composer UI (some settings, such as API maps, are not available in the ThingWorx legacy Composer).
1. Create a DataShape that is used to map the attributes being retrieved from SAP. SAPObjectDS: Id (STRING), Name (STRING), Price (NUMBER)
2. Create a Thing named TestSAPConnector that uses SAPODataConnector as its ThingTemplate.
3. Set up the SAP Netweaver Gateway connection under TestSAPConnector > Configuration.
Generic Connector Connection Settings: Authentication Type = fixed
HTTP Connector Connection Settings: Username = <SAP Gateway user>, Password = <SAP Gateway pwd>, Base URL: https://sapes4.sapdevcenter.com/sap, Relative URL: /opu/odata/IWBEP/GWSAMPLE_BASIC/, Connection URL: /opu/odata/IWBEP/GWSAMPLE_BASIC/$metadata
4. Create the API maps and service under TestSAPConnector > API Maps (New Composer only).
Mapping ID: sap
EndPoint: getProductSet
Select DataShape: SAPObjectDS (created at step 1) and map the following attributes: Name <- Name, Id <- ProductID, Price <- Price
Pick "Create a Service from this mapping"

Testing our Connector
Test the TestSAPConnector::getProductSet service (keep all the input parameters blank).
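The generated service can of course also be called from another service; here is a minimal sketch, assuming the mapping above produced a service that can be called with its input parameters left blank and that the DataShape fields are Id, Name and Price as configured.

// Call the service generated from the 'sap' API map and log the returned products
var products = Things["TestSAPConnector"].getProductSet();
for (var i = 0; i < products.rows.length; i++) {
    var p = products.rows[i];
    logger.info(p.Id + " - " + p.Name + " : " + p.Price);
}
result = products;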
Let us consider that we have two properties, Property1 and Property2, in Thing Thing1 which we want to update using the UpdatePropertyValues service. In our service we will use the system-defined DataShape NamedVTQ, which has the following field definitions:

In Thing1, create a custom service like the following:

var params1 = {
    infoTableName: "InfoTable",
    dataShapeName: "NamedVTQ"
};
// CreateInfoTableFromDataShape(infoTableName:STRING("InfoTable"), dataShapeName:STRING):INFOTABLE(NamedVTQ)
var InputInfoTable = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params1);

var now = new Date();

// Thing1 entry object
var newEntry = new Object();
newEntry.time = now; // DATETIME - isPrimaryKey = true
newEntry.quality = undefined; // STRING
newEntry.name = "Property1"; // STRING
newEntry.value = "Value1"; // STRING
InputInfoTable.AddRow(newEntry);

newEntry.name = "Property2"; // STRING
newEntry.value = "Value2"; // STRING
InputInfoTable.AddRow(newEntry);

var params2 = {
    values: InputInfoTable /* INFOTABLE */
};
me.UpdatePropertyValues(params2);
ThingWorx provides a library of InfoTable functions, one of the most powerful ones being DeriveFields (besides that I use Aggregate and Query a lot and ... getRowCount). DeriveFields can add additional columns to your InfoTable and fill them with values that can be derived from ... nearly anything! Hard coded, based on a service you call, based on a property value, based on other values within the InfoTable you are adding the column to.

Just remember for this service (as well as Aggregate): no spaces between the different column definitions, and use a , (comma) as separator.

Here are two powerful examples:

// Calling another function using DeriveFields
// Note that the value thingTemplate is the actual value in the row of column thingTemplate!
var params = {
    types: "STRING" /* STRING */,
    t: AllItems /* INFOTABLE */,
    columns: "BaseTemplate" /* STRING */,
    expressions: "Things['PTC.RemoteMonitoring.GeneralServices'].RetrieveBaseTemplate({ThingTemplateName:thingTemplate})" /* STRING */
};
// result: INFOTABLE
var AllItemsWithBase = Resources["InfoTableFunctions"].DeriveFields(params);

// Getting values from other Properties
// 'to' in this case is the value of the row in the column 'to'
// Note the use of , and no spaces
// NOTE: You can make this even more generic with something like Things[to][propName]
var params = {
    types: "NUMBER,STRING,STRING,LOCATION" /* STRING */,
    t: AllAssets /* INFOTABLE */,
    columns: "Status,StatusLabel,Description,AssetLocation" /* STRING */,
    expressions: "Things[to].Status,Things[to].StatusLabel,Things[to].description,Things[to].AssetLocation" /* STRING */
};
// result: INFOTABLE
var AllAssetsWithStatus = Resources["InfoTableFunctions"].DeriveFields(params);
In this post, I show how you can downsample time-series data on the server side using the LTTB algorithm. The export comes with a service to set up sample data and a mashup which shows the data with weak to strong downsampling.

Motivation: Users displaying time series data on mashups and dashboards (usually via a service using a QueryPropertyHistory flavor in the background) might request large amounts of data by selecting large date ranges to be visualized, or because data is recorded at high resolution. The newer chart widgets in ThingWorx can work much better with a higher number of data points to display. Some also provide their own downsampling so only the "necessary" points are drawn (e.g. no need to paint beyond the screen's resolution). See the discussion here. However, as this is done in the widgets, the data reduction happens on the client side, so data is sent over the network only to be discarded. It would be beneficial to reduce the number of points delivered to the client beforehand. This would also improve the behavior of older widgets which don't have support for downsampling.

Many methods for downsampling are available. One option is partitioning the data and averaging out each partition, as described here. A disadvantage is that this creates and displays points which are not in the original data. The approach here uses Largest-Triangle-Three-Buckets (LTTB) for two reasons: the resulting data points exist in the original data set, and the algorithm preserves the shape of the original curve very well, i.e. outliers are displayed and not averaged out. It also seems computationally not too hard on the server.

Setting it up:
Import the entities from LTTB_Entities.xml.
Navigate to the Thing LTTB.TestThing in project LTTB and run the service downsampleSetup to set up some sample data.
Open the mashup LTTB.Sampling_MU: Initially, 8000 rows are sent back. The chart widget decides how many of them are displayed. You can see the row count in the debug info. Using the button bar, you determine to how many points the result will be downsampled and sent to the client. Notice how the curve gets rougher, but the shape is preserved.

How it works:
The potentially large result of QueryPropertyHistory is downsampled by running it through LTTB. The resulting InfoTable is sent to the widget (see service LTTB.TestThing.getData). The LTTB implementation itself is in the service downsampleTimeseries.

Debug mode allows you to see how much data is sent over the network, and how much the number decreases proportionally with the downsampling.

LTTB.TestThing.getData;

The export and the widget were done with TWX 9, but it's only the widget that really needs TWX 9. I guess the code would need some more error-checking for robustness, but it's a good starting point.
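For readers who want to see the algorithm itself, here is a compact, array-based sketch of LTTB in JavaScript. The downsampleTimeseries service in the export works on InfoTables; this simplified version assumes a plain array of {x, y} points and is illustrative only, not a copy of the exported service.

function lttb(data, threshold) {
    // Nothing to do if the requested size is not smaller than the data
    if (threshold >= data.length || threshold <= 2) {
        return data;
    }
    var sampled = [data[0]]; // always keep the first point
    var bucketSize = (data.length - 2) / (threshold - 2);
    var a = 0; // index of the last selected point
    for (var i = 0; i < threshold - 2; i++) {
        // Average point of the *next* bucket - the third corner of the triangle
        var avgStart = Math.floor((i + 1) * bucketSize) + 1;
        var avgEnd = Math.min(Math.floor((i + 2) * bucketSize) + 1, data.length);
        var avgX = 0, avgY = 0;
        for (var j = avgStart; j < avgEnd; j++) {
            avgX += data[j].x;
            avgY += data[j].y;
        }
        avgX /= (avgEnd - avgStart);
        avgY /= (avgEnd - avgStart);
        // From the current bucket, keep the point forming the largest triangle
        var rangeStart = Math.floor(i * bucketSize) + 1;
        var rangeEnd = Math.floor((i + 1) * bucketSize) + 1;
        var maxArea = -1, maxIndex = rangeStart;
        for (var k = rangeStart; k < rangeEnd; k++) {
            var area = Math.abs((data[a].x - avgX) * (data[k].y - data[a].y) -
                                (data[a].x - data[k].x) * (avgY - data[a].y)) / 2;
            if (area > maxArea) {
                maxArea = area;
                maxIndex = k;
            }
        }
        sampled.push(data[maxIndex]);
        a = maxIndex;
    }
    sampled.push(data[data.length - 1]); // always keep the last point
    return sampled;
}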
ThingWorx Performance Monitoring with Grafana
authored by EDC team member Desheng Xu ( @xudesheng )

Monitoring ThingWorx performance is crucially important, both during the load testing of a newly completed application and after the deployment of new code in an existing application. Monitoring performance ensures that everything works as expected at the Enterprise level. This tutorial steps you through configuring and installing a tool which runs on the same network as the ThingWorx instance. This tool collects data from the Platform and translates it into something visual and easy to understand via Grafana.

tsample is small and customizable, and it plays a similar role to telegraf. Its focus is on gathering ThingWorx performance metrics. Historically, this tool also supported collecting OS-level performance metrics, but this is no longer supported. It is highly recommended to collect OS-level performance metrics by using telegraf, a tool designed specifically for that purpose (and not discussed here). This is not the only way to go about monitoring ThingWorx performance, but this tool uses a very good approach that has been proven effective both at customer sites and internally by PTC to monitor scale tests.

Find the most recent release here.

Recommended Deployment Architecture
tsample can be deployed on the same box where ThingWorx Tomcat is running, but it's recommended to deploy it on a separate box to minimize any performance impact caused by the collector. tsample supports export to InfluxDB and/or a local file. In this document, it is assumed that InfluxDB will be used for monitoring purposes. Please note that this is not the same instance of InfluxDB being used by ThingWorx (if configured). This article will not cover setting up InfluxDB or NGINX (if necessary), so please configure these before beginning this tutorial.

Supported Platforms
tsample has been tested on Windows 2016, macOS 10.15, Ubuntu 16.04, and Redhat 7.x. It's anticipated to work on more general Ubuntu/Redhat/Mac/Windows releases as well. Please leave a comment or contact the author, @xudesheng , if Raspberry Pi support is needed.

Configuration File
Where to Store the Configuration File
tsample will pick up the configuration file in the following sequence:

from the command line...
./tsample -c <path to configuration file>

from the environment...
Linux:
export TSAMPLE_CONFIG=<path to configuration file>
./tsample
Windows:
set TSAMPLE_CONFIG=<path to configuration file>
tsample.exe

from a default location...
tsample will try to find a file with the name "config.toml" in the same folder in which it starts.

How to Craft a Configuration File
You can use the following command to generate a sample file:
./tsample -c config.toml -e
or:
./tsample -c config.toml --export

A file with the name "config.toml" will be generated with a sample configuration. You can then adjust its content in accordance with the following.

Configuration File Content
Format
The configuration file must be in TOML format.

title and owner sections
Both sections are optional. The intention of these two sections is to support a doc tool in the future.

TestMachine section
This section is required, and it defines where this tool will run.

thingworx_servers section
This section is where you define the targeted ThingWorx applications. Multiple ThingWorx servers can be defined, with the same or different metrics to be collected.
thingworx_servers.metrics sections
Underneath each thingworx_servers section, there are several metrics. In the default example, the following metrics are included:
ValueStreamProcessingSubsystem
DataTableProcessingSubsystem
EventProcessingSubsystem
PlatformSubsystem
StreamProcessingSubsystem
WSCommunicationsSubsystem
WSExecutionProcessingSubsystem
TunnelSubsystem
AlertProcessingSubsystem
FederationSubsystem

You can add your own customized metrics, as long as the result follows the same Data Shape. The default Data Shape has 3 columns; if the output Data Shape exceeds this limit, the tool will likely not work properly.

result_export_to_db section
This section defines the target InfluxDB as a sink for the collected performance metrics.

result_export_to_file section
This section defines the target file storage for the collected performance metrics.

Grafana Configuration Example
Monitor Value Stream
Step 1. Connect Grafana to InfluxDB

Step 2. Create a New Dashboard

Step 3. Create a New Query
Depending on which metrics you defined to collect in the tsample configuration file, you will see a different choice of measurements in Grafana. Here, we will use ValueStreamProcessingSubsystem as an example.

Step 4. Choose the Right Platform and Storage Provider
Some metrics depend on the database storage provider, like Value Stream and Stream.

Step 5. Choose the Metrics Figures
Select "remove" to get rid of the default 'mean' calculation. Select "non_negative_difference" from Transformations. Using this transformation, Grafana can show us the speed of writes. Then, remove the default GROUP BY "time" clause. Assign a meaningful alias to this query.

Step 6. Add Another Query
You can add another query as 'Value Stream Queued Speed' by following the same steps.

Step 7. Assign a Panel Title

Step 8. Review the Result
Let's go back to the dashboard page and select "last 15 minutes" or "last 5 minutes" from the top right corner. It should show a result similar to the chart below.

Step 9. Save the Dashboard
Don't forget to save your dashboard before we add more panels.

Step 10. Refine the Panel
It's difficult to figure out the high-level write speed from the above panel, so let's enhance it. Add a new query with the following configuration. In the above query, there are two additional figures: 20s and 1m... How do you choose? 20s should be the same as sampling_cycle_inseconds in your tsample configuration file. If you choose a different value, then you could end up with misleading results. Larger values such as "1m" may give you a smoother result, but they could also hide system instability. Going larger than 1m is not recommended in most cases. With this new query, it's much easier to figure out what the average write speed in the current test is.

Tip: if your sampling_cycle_inseconds is 30s, then you may not need this additional query. The following image is a sample at the 30s interval time; you would not need an additional average query to get a smooth write speed. The next example is a sample at the 10s interval time; without additional queries, you may not be able to get a meaningful understanding of the write speed. Based on the above three examples, it's recommended to configure the sampling interval time at 30s, or anything larger than 20s. You can then choose whether you need additional queries based on the visualization result.

Step 11. Further Refinement
The above charts illustrate the queuing and writing speed.
However, it is possible that the Value Stream performs at a reasonable speed while the Value Stream queue keeps growing and could eventually exceed its capacity. Let's add another query to monitor this. However, this chart is difficult to read, since the new query has a different value range on the y-axis. Let's move this query to a second y-axis on the right; this makes the view much easier to read. The current queue size or remaining queue size will always move up and down; it is healthy as long as it does not continue to grow to a high level.

What Else Can Be Monitored?
The following metrics would be monitored by most customers:
Value Stream write and queue speed
Value Stream queue size
Stream write and queue speed
Stream queue size
Event performed speed (completedTaskCount)
Event submitted speed (submittedTaskCount)
Event queue size
Websocket communication
Websocket connection

ThingWorx Memory Usage Monitoring
Create a new panel and add a new query. In a running system, memory usage will always move up and down - at times sharply or quickly - when the system is busy. The system is healthy as long as memory doesn't go up continuously or stay at a maximum for a long period of time.

Conclusion
Setting up monitoring is absolutely crucial to managing the performance of an enterprise ThingWorx application. Using Grafana makes tracking and visualizing the performance much easier. Stay tuned to the EDC tag for more monitoring tips to come!
There are a lot of new and exciting changes included in ThingWorx 8.0.0, which is due out today, and no doubt you'll want to get a test environment set up to try them out. But there are a few changes that could impact your ability to get up and running with a new server that I wanted to share with everyone.

IMPORTANT – Changes to Licensing in ThingWorx 8.0.0
In ThingWorx 7.4.0, a new Licensing Subsystem was introduced, and the required license file was provided as part of the ThingWorx 7.4.0 platform download. As of ThingWorx 8.0.0, the license.bin file will no longer be provided with the platform download, but will instead need to be downloaded from the PTC Licensing Tool on the PTC eSupport Portal. As part of the operating system specific Installing ThingWorx (OS) sections in the Installing ThingWorx 8.0 guide, additional steps have been added that outline how to use the PTC Licensing Tool to download and install the ThingWorx license file for your organization. The license file downloaded from the PTC Licensing Tool must be renamed to license.bin and copied to the /ThingworxPlatform directory prior to starting ThingWorx for the first time. The server will fail to start if it cannot find this license file.
For more information: KCS Article 264374 - ThingWorx 8.0.0 Licensing

IMPORTANT – Default Administrator Password for ThingWorx Composer is Changing in 8.0.0
In order to help encourage the use of secure passwords for the Administrator account on ThingWorx servers, the default Administrator account password is changing in version 8.0 and above. The new Administrator password is a complex password containing mixed case and special characters. This will encourage administrators to change their default, fully-privileged account password to one that is more secure and conforms to their organizational security standards. Information about the new default password can be found in each of the operating system specific Installing ThingWorx (OS) sections of the Installing ThingWorx 8.0 guide. PTC strongly advises against the use of any default password on any product, and encourages administrators to immediately change any default password to a proper, complex password.
For more information: KCS Article 264270 - Unable to log into ThingWorx 8.0 Composer using the default Administrator login

Application Key Usage is Changing in 8.0.0
As part of an effort to better secure the ThingWorx Platform, the use of Application Keys as URL parameters (through the 'appkey' parameter) is being deprecated in ThingWorx 8.0.0. For example, the following URL uses the now-deprecated 'appkey' parameter to pass in an application key:

https://thingworx.server.com/ThingWorx/Things/MyThing/Services/MyService?appkey=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx

When included in the URL, the 'appkey' parameter becomes a browser-cacheable, clear-text rendition of a usable application key. A knowledgeable user could retrieve this application key from their browser history and use it to perform unauthorized actions against a ThingWorx server. For full security, GET and POST requests that are run against a ThingWorx server should be performed over an HTTPS connection and should include the required application key in the request's headers. Query string parameters end up in browser history and server logs whether HTTP or HTTPS is used; by moving the application key into the request headers, the key travels only inside the encrypted HTTPS request.
(Note that the application key in a header is still visible for plaintext HTTP connections though!) By default, new installations of ThingWorx 8.0.0 will have the Allow Application Key as URL Parameter option disabled.  Upgrades from previous versions of ThingWorx will retain the ability to use the ‘appkey’ query string parameter, but PTC strongly encourages customers to promptly update any solutions that are dependent on sending the appkey using a URL parameter to move to request headers instead.  Please note that a future release of ThingWorx may completely disable the use of application keys as URL parameters. The Allow Application Key as URL Parameter option can be enabled or disabled on the ThingWorx PlatformSubsystem’s configuration page, accessible at System > Subsystems > PlatformSubsystem > Configuration. For more information: KCS Article 264349 - ThingWorx Appkey URL Parameter is Deprecated as of ThingWorx 8.0.0 Changes to Default Visibility (Organizations) in ThingWorx 8.0.0 Starting in ThingWorx 8.0.0, the Everyone organization will no longer be granted default visibility across all entity collections.  Only users who are a member of the Administrator group will be able to see ThingWorx entities on a newly installed server; any additional visibility permissions will need to be explicitly granted by the Administrator. This is a change in behavior from the previous ThingWorx releases where the Everyone organization (of which the all-encompassing Users group was a member) was granted visibility access by default to all entity collections. Visibility permissions had to explicitly be removed by the Administrator during solution development, which could be overlooked. For more information: KCS Article 264351 - Changes to Default Visibility (Organizations) in ThingWorx 8.0.0
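For server-to-server calls made from a ThingWorx service, one way to move the key out of the URL and into a header might look like the following minimal sketch; the server name, Thing/service names and the application key value are placeholders taken from the example above, not working values.

// Placeholders only - substitute your own server, entity names and application key
var params = {
    url: "https://thingworx.server.com/ThingWorx/Things/MyThing/Services/MyService",
    content: {},
    headers: {
        appKey: "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" // sent as a header, not a query parameter
    }
};
// Over HTTPS the headers are encrypted in transit and are not cached in the URL history
var response = Resources["ContentLoaderFunctions"].PostJSON(params);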
Hello, there have been some inquiries about how one can use AngularJS for developing custom parts that can run in the ThingWorx environment. To address these inquiries I have created a document that describes the process of integrating AngularJS with ThingWorx. The attached document comes with the source code for the examples presented throughout the document and an extension for AngularJS 1.5.8 and angular-material components. Feedback is appreciated. Thank you.
Sometimes you need the values from different ThingTemplate members in ONE grid. Therefore it would be great if you could join 2 "GetImplementedThingsWithData" results into a common one. Here is a script that works generally, as long as you don't mix data types on identical column names. I'm very interested if someone can find a much easier solution. The Union function was the only one I found suited for the task, but it needs preparation of the InfoTables upfront.

Input: Table1: InfoTable, Table2: InfoTable
Output: InfoTable

Here the "snippet":

// Define params for an InfoTable to hold column names
var params = {
    infoTableName: "field" /* STRING */
};
// Define column 1
var newField = new Object();
newField.name = "field";
newField.baseType = 'STRING';
// Two 1-column InfoTables to store the field definitions
var field1 = Resources["InfoTableFunctions"].CreateInfoTable(params);
field1.AddField(newField);
var field2 = Resources["InfoTableFunctions"].CreateInfoTable(params);
field2.AddField(newField);
// Define the cell to add to the InfoTable
var myField = new Object();
myField.field = "";
myField.baseType = "STRING";
// Loop through Table1
var dataShapeFields = Table1.dataShape.fields;
for (var fieldName in dataShapeFields) {
    logger.debug('field1 name is ' + dataShapeFields[fieldName].name);
    myField.field = dataShapeFields[fieldName].name;
    field1.AddRow(myField);
}
// Loop through Table2
var dataShapeFields = Table2.dataShape.fields;
for (var fieldName in dataShapeFields) {
    logger.debug('field2 name is ' + dataShapeFields[fieldName].name);
    myField.field = dataShapeFields[fieldName].name;
    field2.AddRow(myField);
}
// Use inner join functionality to keep only the column names that exist in both tables
var params = {
    columns2: "field" /* STRING */,
    columns1: "field" /* STRING */,
    joinType: "INNER" /* STRING */,
    t1: field1 /* INFOTABLE */,
    t2: field2 /* INFOTABLE */,
    joinColumns1: "field" /* STRING */,
    joinColumns2: "field" /* STRING */
};
var commonFields = Resources["InfoTableFunctions"].Intersect(params);
// Loop over the result to build a search string
var commonColumns = "";
var tableLength = commonFields.rows.length;
for (var x = 0; x < tableLength; x++) {
    var row = commonFields.rows[x];
    commonColumns = commonColumns + row.field + ",";
}
// Reduce Table1 to match only common columns
var params = {
    t: Table1 /* INFOTABLE */,
    columns: commonColumns /* STRING */
};
var result1 = Resources["InfoTableFunctions"].Distinct(params);
// Reduce Table2 to match only common columns
var params = {
    t: Table2 /* INFOTABLE */,
    columns: commonColumns /* STRING */
};
var result2 = Resources["InfoTableFunctions"].Distinct(params);
// At the END, JOIN the tables together (does not work if columns are different)
var params = {
    t1: result1 /* INFOTABLE */,
    t2: result2 /* INFOTABLE */
};
var result = Resources["InfoTableFunctions"].Union(params);
For those of you that aren't aware - the newest version of the Eclipse Plugin for Extension Development was made available last week in the ThingWorx Marketplace here. Because of the infancy of the product, there is not an official process for supplying release notes along with the plugin. These are not official or all encompassing, but cover the main items worked on for 7.0.

New Features:
Added Configuration Table Wizard for code generation
SDK Javadocs now automatically linked to SDK resources on project creation
When creating a Service, Trace logging statements are generated inside of it (along with appropriate initializers)
ThingWorx Source actions are now available from right click menu within a .java file

Bugs:
Fixed problem where some BaseTypes are not uppercase in annotations when generating code
Fixed error when creating and importing Extension Projects when the Eclipse install has a space in the file path
Fixed inconsistent formatting in the metadata.xml when adding new Entities

We are hoping to have a more official Release Note process for the next release. Feel free to reply with questions or concerns.
Hi everyone,

Maybe you got my email; I just wanted to post this also here.

I built a CSS generator for a few widgets, 8 at the moment. This is on a cloud instance accessible by anyone.

Advantages:
- don't have to style buttons and apply style definitions over and over again
- greater flexibility in styling the widget
- you don't have to write any CSS code

The way this works: you use the configurator to style the widget as you want. Also use a class to define what that widget is, or how it's styled, for example "primary-btn", "secondary-btn", etc. Copy the generated CSS code into the CustomCSS tab in ThingWorx and, on the widget, put the CustomClass specified in the configurator.

As a best practice, I'd recommend placing all your CSS code into the Master mashup. Then all your mashups that use that Master will also get the CustomCSS, so the only thing you have to do to your widgets in the mashups is fill the CustomClass property with the desired generated style. Also, comment your differently styled widgets by separating them with /* My red button */ for example.

The mashups for this won't be released; this will only be offered as a service. As you'll see in the configurator, they are not that pretty - the main goal was functionality.

Here is the link to the configurator: https://pp-18121912279c.portal.ptc.io/Thingworx/Runtime/index.html#master=CSSMaster&mashup=ButtonVariables
User: guest
Password: guest123123

Give it a go and have fun! 🙂

NOTE: I will add more widgets to this in the future and will not take any requests for making it for a specific widget; I make these based on usage and styling capabilities.
In this post, I will use an instance of InfluxDB and Chronograf. See this post for installing both using Docker.

InfluxDB - Time Series Databases

InfluxDB is a time series database. It allows users to work with and organize time series data. The advantage of such a database system is that it comes with built-in functionality to easily aggregate and operate on data based on time intervals. Other types of databases can do this as well - but time series databases are heavily optimized for this kind of data structure, which shows in storage space and performance.

Data is stored in the database with its timestamp, its value and one or more tags.

Time                  Temperature   Humidity   Location
2019-01-24T00:00:00   23            42         Home
2019-01-24T00:01:00   22            43         Home
2019-01-24T00:02:00   21            44         Home
2019-01-24T00:03:00   23            45         Home
2019-01-24T00:04:00   24            42         Home
2019-01-24T00:05:00   25            43         Home
2019-01-24T00:06:00   23            44         Home

Values can be aggregated by intervals, i.e. "give me the temperature values within the last hour and take the average for 5 minutes". This would result in (60 / 5) = 12 results with a value that represents the average temperature within each 5-minute interval.

Example: Temperature data averaged by 4 minutes

Time                  Temperature
2019-01-24T00:00:00   (23 + 22 + 21 + 23) / 4 = 22.25
2019-01-24T00:04:00   (24 + 25 + 23) / 3 = 24

To find out more about InfluxDB see also https://www.influxdata.com/time-series-database/ and https://www.influxdata.com/time-series-platform/

InfluxDB in ThingWorx

The new ThingWorx 8.4 release comes with an option to set up InfluxDB as an additional Persistence Provider. Metadata like Entity Definitions will still be stored in PostgreSQL. Streams, Value Streams and Data Tables, however, can be stored in InfluxDB.

The InfluxDB Persistence Provider setup is delivered with the PostgreSQL installation package for ThingWorx. Currently ThingWorx does not allow any aggregation of data with its built-in InfluxDB capabilities.

Prepare InfluxDB

InfluxDB will need a user and a database. Connect via Chronograf - the graphical UI to administer InfluxDB - and create a new user via

InfluxDB Admin > Users
Default username = twadmin
Default password = password
Permissions = ALL

Create a new database via

InfluxDB Admin > Databases
Default database name = thingworx

Configure ThingWorx

Create a new Persistence Provider for InfluxDB in ThingWorx - but don't mark it as active yet!

Switch to the Configuration and change the username/password, database and hostname to match your installation.

Save the configuration, switch back to the General tab and mark the InfluxDB Persistence Provider as Active.

Save again and a "successful" message will be shown. If the save action fails, the connection settings are not correct - check for the correct ports and for any typos.

Creating Entities & Testing

Streams, Value Streams and Data Tables can now be created using the new InfluxDB Persistence Provider.

To test with a Value Stream:

Create a new Thing with some NUMBER properties, e.g. 'a', 'b' and 'c' - ensure they are marked as logged as well. Name = InfluxValueStreamThing
Create a new ValueStream and change its Persistence Provider to the InfluxDB Persistence Provider created above. Name = InfluxValueStream
Save both Entities.

Setting values for the properties will now automatically create the entries in InfluxDB - including the Entity name "InfluxValueStreamThing". Running the QueryPropertyHistory service on the Thing will return the results as an InfoTable. In Chronograf this will display like this:

ThingWorx 8.4 will be released end of January 2019. Be sure to check out and test the new Persistence Provider features!
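A minimal script version of that test (the entity names match the ones created above; the values are arbitrary):

// Write some values - each change of a logged property creates an entry in the
// InfluxDB-backed Value Stream
Things["InfluxValueStreamThing"].a = 10;
Things["InfluxValueStreamThing"].b = 20;
Things["InfluxValueStreamThing"].c = 30;
// Read the logged history back as an InfoTable
var history = Things["InfluxValueStreamThing"].QueryPropertyHistory({
    maxItems: 100,
    oldestFirst: false
});
result = history;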
Continuing our series on troubleshooting ThingWorx Analytics installations, in this IoT Tech Tip we will cover two items that have been appearing for many users.

Error 1069 Encountered with Native Windows Installation of ThingWorx Analytics 8.2

In some instances, when a user successfully installs ThingWorx Analytics (TWAS) on a Windows Server operating system, they will encounter an error where TWAS reports Error 1069: The Service did not start due to logon failure. This can occur with any individual Service that is created by the installation; the following fix should address the issue.

Primary Reason This Happens:
This error can be encountered when the user provides incorrect credentials for associating the Services with an account during installation. In TWAS 8.2, there is a utility that enables the user to change the associated user on the Services. It is important that the user provides the password for the User Account on Windows, and not the user/password combination for the ThingWorx Foundation Platform Server.

Steps to Fix Issue

Solution 1:
Open a Command Prompt as Administrator, via Start Menu > Run > type CMD, then right-click on cmd.exe and Run As Administrator.
In the elevated command prompt, change your directory to the ThingWorxAnalyticsServer/bin directory; for example, with the default installation path: cd C:\Program Files (x86)\ThingWorxAnalyticsServer\bin
Then execute changeServiceUserAccount.bat <username>, for example: changeServiceUserAccount.bat user1
You will be prompted to change the password for the user.

Solution 2:
If Solution 1 does not resolve the issue, you can alternatively change the Log On properties manually for each of the services. The changeServiceUserAccount.bat script would do this for you, but on occasion the manual change works when the script does not. Open the Control Panel and navigate to Services, for example: Control Panel > All Control Panel Items > Administrative Tools
You will have to right-click each individual service, go to Properties > Log On tab and enter the account name and password for the local account. Note: the Local System account will not resolve this issue.
This issue was resolved in the ThingWorx Analytics Server 8.3 release, where all Services are associated with the Network Service account.

More information can be found in this Knowledge Article.

Uploading of a Dataset hangs or does not complete in ThingWorx Analytics 8.3

On occasion, after a fresh installation of ThingWorx Analytics Server 8.3 on a Windows Server operating system, a dataset will not complete its upload. Typically no error message is displayed, and the upload wizard UI will just hang on the upload progress after:

Creating copy of Configuration File...
Submitting Create Dataset request...
Creating copy of Data File...

Primary Reason This Happens:
This is caused by the twas-zookeeper service being stuck in a PAUSED state. This means that in the post-installation, twas-zookeeper did not start.

Steps to Fix Issue
You will have to double-check that the JAVA_HOME variable was defined as a System Variable. In the ThingWorx Analytics Installation guide, pages 12-14 outline the steps required as prerequisites. You can change this in Control Panel > System > Advanced Settings > Environment Variables, and add a new variable named JAVA_HOME under System Variables. The value should be the location of your Java deployment; typically this is located in C:\Program Files\Java\<jre or jdk>_<version number>.

More information can be found in this Knowledge Article.