ThingWorx 8.3 covers the following areas of the product portfolio: ThingWorx Analytics, ThingWorx Utilities, and ThingWorx Foundation, which includes Connection Server and Edge capabilities.
Highlights of the release include:
Next Generation Composer:
Now the default admin and developer interface
Full feature parity with legacy Composer
New capabilities for user and group administration, authorization and permissions, export, monitoring, and logging. See the Help Center for more details.
Localization support for German and French
JQuery 3 upgrade
Grid Advanced Extension now supports Cell Editing and Footers
Active Directory (AD) Integration enhancements for larger AD forests and user extension field mapping
Upgrade in-place enhancements for Java SDK developers
Service Utilization Statistics:
Capture usage statistics, such as the time taken to execute a ThingWorx service and the number of times a service runs, using the Service Utilization Statistics functionality powered by the new, more efficient Utilization Subsystem.
ThingWorx Support Package tool:
Collect ThingWorx system data such as ESAPI configuration, ThingworxStorage logs, licensing, and JVM information to better diagnose system issues.
Administrator Password and Password Length
New installations of ThingWorx require the installer to supply an initial Administrator password of their choice. That password must be supplied via a new entry in the platform-settings.json file. After the initial installation, the Administrator password should be changed to a strong password to be used going forward.
As a step toward industry best practices, the Administrator password and all new passwords must be at least 10 characters long. When upgrading to 8.3, passwords from older versions of the platform do not need to be modified, but any newly created passwords must meet the 10-character minimum.
See the installation instructions for complete details.
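For orientation, the new platform-settings.json entry typically looks like the fragment below. The password value is a placeholder; verify the exact key names against the installation instructions for your version.

```json
{
  "PlatformSettingsConfig": {
    "AdministratorUserSettings": {
      "InitialPassword": "placeholderPassword123"
    }
  }
}
```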
New Descriptive Services
Improve users’ ability to apply logic and derive insights from streaming data without constructing complex models or accessing machine learning:
Core statistics (min, max, deviation, etc.), data distribution (binning), confidence intervals, and other useful calculations
Frequency analysis and transformation (via fast Fourier transform) for troubleshooting use cases and predictive analytics applications
Enable platform developers to easily process platform data in their applications and prepare the data for predictions.
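The kinds of calculations these services expose can be sketched in plain Python. This is an illustration only: the data and helper names are made up, and on the platform these calculations are invoked as built-in services rather than hand-written code.

```python
import statistics

# Hypothetical streaming property values (illustrative sample data).
values = [4.1, 4.3, 3.9, 4.0, 4.6, 4.2, 3.8, 4.4]

# Core statistics, mirroring what the Descriptive Services compute.
summary = {
    "min": min(values),
    "max": max(values),
    "mean": statistics.mean(values),
    "stdev": statistics.stdev(values),
}

# Simple equal-width binning (data distribution) into n_bins buckets.
def bin_counts(data, n_bins):
    lo, hi = min(data), max(data)
    width = (hi - lo) / n_bins
    counts = [0] * n_bins
    for x in data:
        # Clamp the top edge so max(data) lands in the last bin.
        idx = min(int((x - lo) / width), n_bins - 1)
        counts[idx] += 1
    return counts

print(summary["min"], summary["max"], bin_counts(values, 4))
```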
Statistical Process Control (SPC) Services
Provides industry-standard calculations that allow IoT developers to implement SPC “control chart rules” in their applications. Useful in manufacturing and in monitoring equipment and processes.
Supports a wide assortment of rules, including number of points continuously above / below a range, in and out of range, increasing or decreasing trends, or alternating directions.
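One of these rules can be sketched as follows. This is a minimal illustration; the run length, centerline, and data are hypothetical and do not reflect the platform's actual API.

```python
def run_above_center(points, center, run_length):
    """Flag indices where `run_length` consecutive points fall above the
    centerline -- one of the SPC 'control chart rules' described above."""
    violations = []
    run = 0
    for i, x in enumerate(points):
        run = run + 1 if x > center else 0  # reset on a point at/below center
        if run >= run_length:
            violations.append(i)
    return violations

data = [5.1, 5.2, 5.3, 5.4, 5.5, 5.6, 5.7, 5.8, 4.9]
print(run_above_center(data, center=5.0, run_length=8))  # [7]
```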
Bundles the two Analytics interfaces (Analytics Builder and Manager) into a new Analytics section in Composer.
Predictive Analytics Improvements
Reduces overall install and administration complexity.
Improves handling of time series data when used in predictive scoring.
Includes a new learner, Support Vector Machines, enhancing the platform’s utility in building Boolean predictions.
Includes a new ensemble method, Majority Vote, that improves generated model accuracy.
Provides redundancy filtering which can optionally remove redundant information to improve explanatory analytics (Signals) and predictive model training.
Now supports time series lookahead configuration, simplifying this type of prediction.
Replaces ThingPredictor predictive scoring in Analytics Manager with native Analytics Server scoring:
Improves scalability of concurrent jobs.
Axeda Compatibility Package
IDM Connector Support
ACP v1.1.0 introduces the IDM Connector, which enables Axeda customers to connect their Axeda IDM agents to the ThingWorx platform. The IDM Connector provides support for registration requests, property updates, faults, events, and file uploads and downloads.
Axeda ThingWorx Entity Exporter Update
ACP v1.1.0 also includes an updated version of the Axeda-ThingWorx Entity Exporter (ATEE), which now supports exporting Axeda IDM assets from the Axeda application into a format that can be imported into the ThingWorx Platform.
eMessage Connector Improvements
Additionally, ACP v1.1.0 includes support for instruction-based Software Content Management (SCM) packages for the eMessage Connector, which allows you to download files, execute instructions, and optionally restart the agent. The Axeda Compatibility Extension (ACE) has new entities to support the IDM Connector and SCM for the eMessage Connector.
Finally, updated versions of the Axeda Compatibility Extensions (ACE) and the Connection Services Extension (CSE) are included in ACP v1.1.0 and provide an improved workflow for granting permissions to the eMessage and IDM Connectors.
ThingWorx Extension Updates
Websocket Tunnel Extension Update
The Websocket Tunnel Extension was updated for 8.3 to support the upgrade to jQuery 3.
Grid Advanced 4.0.0 comes with 2 key features:
Editing - Cell editing is now supported for all basetypes; the previous version supported only boolean editing.
Footers - A footer section can now be added to the Grid to display rolled-up Grid totals. You can perform client-side calculations like count, min, max and average, and it includes support for custom functions.
Note - Grid Advanced 4.0.0 only supports ThingWorx 8.3 and above.
Custom Charts 3.0.1
12 Bug Fixes
Google Maps 3.0.1
General Bug Fixes
With the 8.3 release, ThingWorx Utilities functionality is being repackaged into ThingWorx Foundation and ThingWorx Asset Advisor. ThingWorx Workflow will now be available with Foundation, and the functionality from the Asset and Alert Management Utilities will be delivered in ThingWorx Asset Advisor. ThingWorx Software Content Management capabilities will continue to be available for customers to manage the delivery of software to their connected products. The “Utilities” name is being phased out of ThingWorx Platform packaging, but the key functionality formerly described as ThingWorx Utilities continues to be delivered with version 8.3.
ThingWorx 8.3 Reference Documents
ThingWorx Analytics 8.3 Reference Documents
ThingWorx Platform 8.3 Release Notes
ThingWorx Platform Help Center
ThingWorx Edge SDKs and WebSocket-based Edge MicroServer Help Center
ThingWorx Connection Services Help Center
ThingWorx Analytics Help Center
ThingWorx Industrial Connectivity Help Center
ThingWorx Utilities Help Center
ThingWorx Utilities Installation Guide
ThingWorx eSupport Portal
ThingWorx Developer Portal
The following items will be available for download from the PTC Software Download site on June 8, 2018.
ThingWorx Platform – Select Release 8.3
ThingWorx Utilities – Select Release 8.3
ThingWorx Analytics – Select Release 8.3
ThingWorx Extensions – Select individual extensions for download. These will be available with the next Marketplace refresh.
This video goes through the steps required to use the Creo Insight extension:
- Download and install the required extension
- Set required config.pro options
- Create provider in Analytics Manager
- Publish sensor from Creo
- Create analysis Event in Analysis Manager
- Retrieve sensor values from ThingWorx in Creo
- See https://www.ptc.com/en/support/article?n=CS277514 for a written version of these steps.
- Creo Help Center
Predictive models: Predictive modeling is one of the most effective techniques for performing predictive analytics. It involves developing models that are trained on historical data and make predictions on new data. These models are built to analyse current data records in combination with historical data.
Use of Predictive Analytics in ThingWorx Analytics and How to Access Predictive Analysis Functionality via ThingWorx Analytics
Bias and variance are the two components of imprecision in predictive models. Bias is a measure of model rigidity and inflexibility: a high-bias model is not capturing all the signal it could from the data. High bias is also known as under-fitting. Variance, on the other hand, is a measure of model inconsistency: high-variance models tend to perform very well on some data points and poorly on others. High variance is also known as over-fitting, and means that your model is too flexible for the amount of training data you have, so it ends up picking up noise in addition to the signal.
If your model performs really well on the training set but much worse on the hold-out set, it is suffering from high variance. If it performs poorly on both the training and test data sets, it is suffering from high bias.
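This diagnosis can be expressed as a toy heuristic. The threshold values below are illustrative assumptions, not fixed rules.

```python
def diagnose(train_score, holdout_score, gap_tol=0.1, low_tol=0.7):
    """Rough heuristic: high variance if the model does much better on
    training data than on the hold-out set; high bias if it does poorly
    on both. The tolerances are illustrative, not standard thresholds."""
    if train_score - holdout_score > gap_tol:
        return "high variance (over-fitting)"
    if train_score < low_tol and holdout_score < low_tol:
        return "high bias (under-fitting)"
    return "reasonable fit"

print(diagnose(0.98, 0.71))  # large train/hold-out gap -> high variance
print(diagnose(0.62, 0.60))  # poor on both sets -> high bias
```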
Techniques to improve:
Add more data: Having more data is always a good idea. It lets the data speak for itself instead of relying on assumptions and weak correlations, and it generally yields better, more accurate models. The question is when to ask for more data, since "more" cannot be quantified in the abstract: it depends on the problem you are working on and the algorithm you are implementing. For example, with time series data you should look for at least one year of data, and with neural network algorithms you need substantial training data or the model will not generalize.
Feature Engineering: Adding new features decreases bias at the expense of variance. New features can help the algorithm explain the variance in the data more effectively. When doing hypothesis generation, enough time should be spent on the features the model requires; those features can then be created from the existing data sets.
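A minimal sketch of the idea, deriving new columns from an existing record. The field names are hypothetical, not a real ThingWorx schema.

```python
from datetime import datetime

def engineer_features(record):
    """Derive new features from a raw record: time-based flags and a
    physically meaningful combination of existing fields."""
    ts = datetime.fromisoformat(record["timestamp"])
    return {
        **record,
        "hour_of_day": ts.hour,           # captures daily usage patterns
        "is_weekend": ts.weekday() >= 5,  # weekday vs weekend behavior
        "power": record["voltage"] * record["current"],  # derived quantity
    }

row = {"timestamp": "2018-06-08T14:30:00", "voltage": 230.0, "current": 2.5}
print(engineer_features(row)["power"])  # 575.0
```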
Feature Selection: This is one of the most important aspects of predictive modelling. It is always advisable to identify the important features and rebuild the model with only those significant features. For example, say we have 100 variables. A subset of those variables will drive most of the variance of the model. If we select features on a p-value basis alone, we may still end up with more than 50 variables; in that case, look at other measures, such as the contribution of each individual variable to the model. If 90% of the model's variance is explained by only 15 variables, choose just those 15 variables for the final model.
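The "keep the variables that explain 90% of the variance" idea can be sketched as follows. The contribution scores and feature names are made up for illustration.

```python
def select_by_contribution(contributions, threshold=0.9):
    """Given per-feature contribution scores (assumed normalized to sum
    to 1), keep the smallest set of features whose cumulative
    contribution reaches the threshold."""
    ranked = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)
    chosen, total = [], 0.0
    for name, score in ranked:
        chosen.append(name)
        total += score
        if total >= threshold:
            break
    return chosen

# Hypothetical contribution scores for four sensor variables.
scores = {"temp": 0.5, "pressure": 0.3, "vibration": 0.15, "humidity": 0.05}
print(select_by_contribution(scores))  # ['temp', 'pressure', 'vibration']
```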
Multiple Algorithms: Choosing the right machine learning algorithm is the ideal way to achieve higher accuracy, but some algorithms are better suited to particular types of data sets than others. Hence, we should apply all relevant models and compare their performance.
Algorithm Tuning: Machine learning algorithms are driven by parameters, which strongly influence the outcome of the learning process. The objective of parameter tuning is to find the optimum value of each parameter to improve the accuracy of the model. To tune these parameters, you must have a good understanding of their meaning and their individual impact on the model, and you can repeat the process across a number of well-performing models. For example, in a random forest we have parameters such as max_features, number_trees, random_state, and oob_score. Careful optimization of these parameter values will result in better, more accurate models.
Cross Validation: Cross validation is one of the most important concepts in data modeling: set aside a sample on which you do not train the model, and test the model on that sample before finalizing it. This method helps us build models that generalize better.
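A plain-Python sketch of k-fold cross validation. The "model" here is a toy that predicts the training mean; any real trainer and scorer could be plugged in instead.

```python
def k_fold_scores(data, k, train_and_eval):
    """Plain k-fold cross-validation: hold out one fold at a time, train
    on the rest, and collect the held-out scores. Remainder rows (when
    len(data) is not divisible by k) are ignored for simplicity."""
    fold_size = len(data) // k
    scores = []
    for i in range(k):
        holdout = data[i * fold_size:(i + 1) * fold_size]
        train = data[:i * fold_size] + data[(i + 1) * fold_size:]
        scores.append(train_and_eval(train, holdout))
    return scores

# Toy model: "predict" the training mean, score by negative mean abs error.
def eval_mean_model(train, holdout):
    pred = sum(train) / len(train)
    return -sum(abs(x - pred) for x in holdout) / len(holdout)

data = [3.0, 3.2, 2.9, 3.1, 3.3, 2.8, 3.0, 3.2]
scores = k_fold_scores(data, 4, eval_mean_model)
print(sum(scores) / len(scores))
```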
Ensemble Methods: This is the most common approach found in winning solutions of data science competitions. The technique combines the results of multiple weak models to produce a better result, and it can be achieved in several ways:
Bagging: Uses several versions of the same model, trained on slightly different samples of the training data, to reduce variance without any noticeable effect on bias. Bagging can be computationally intensive, especially in terms of memory.
Boosting: A slightly more complicated concept that relies on training several models successively, each trying to learn from the errors of the models preceding it. Boosting decreases bias while hardly affecting variance.
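Bagging with a majority vote can be sketched as follows. The threshold "model" and the data are toys for illustration; the point is the bootstrap-resample-then-vote structure.

```python
import random
from collections import Counter

def bagged_predict(train_fn, data, n_models, x, seed=0):
    """Bagging sketch: train `n_models` copies of the same model on
    bootstrap samples of the data, then combine their predictions by
    majority vote."""
    rng = random.Random(seed)
    votes = []
    for _ in range(n_models):
        sample = [rng.choice(data) for _ in data]  # bootstrap resample
        model = train_fn(sample)
        votes.append(model(x))
    return Counter(votes).most_common(1)[0][0]

# Toy "model": classify x as 1 if above the sample mean of the values.
def train_threshold(sample):
    mean = sum(v for v, _ in sample) / len(sample)
    return lambda x: 1 if x > mean else 0

# (value, label) pairs; labels unused by this toy trainer.
data = [(1.0, 0), (2.0, 0), (3.0, 1), (4.0, 1), (5.0, 1)]
print(bagged_predict(train_threshold, data, n_models=11, x=4.5))
```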