
IoT & Connectivity Tips

Error: Exception: JavaException: java.lang.Exception: Import Failed: License is not available for Product: null Feature: twx_things

Possible root cause: editing web.xml without restarting the platform afterwards. As a result, the product name is not picked up and the license path is dropped while Composer is still running.

Fix: Go to LicensingSubsystem -> Services and run the AcquireLicense service. If the issue is not resolved, please contact the Support team, attaching your license.bin to the ticket.
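If you prefer to run the fix from a script rather than the Composer UI, the same service can be invoked from server-side JavaScript (a one-line sketch; it assumes the executing user has runtime permission on the subsystem):

// Re-acquire the license via the licensing subsystem
Subsystems["LicensingSubsystem"].AcquireLicense();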
The Axeda Platform has long had the ability to write custom logic to retrieve, manipulate and create data. In the current versions of the Platform, there are two classes of API: Version 1 (v1) and Version 2 (v2). The v1 APIs allow a developer to work with data on the Platform, but all of the v1 APIs are subject to the maxQueryResults configuration property, which by default limits the number of results per query to 1000. For some subsets of data, this can be inadequate. In comes the v2 API, which introduces pagination.

One of the first things a new user does when exploring the v2 API is something like the following:

HistoricalDataItemValueCriteria criteria = new HistoricalDataItemValueCriteria()
criteria.assetId = '9701'
criteria.startDate = '2014-07-23T12:33:00Z'
criteria.endDate = '2014-07-23T12:44:00Z'
DataItemBridge dbridge = com.axeda.sdk.v2.dsl.Bridges.dataItemBridge
FindDataItemValueResult results = dbridge.findHistoricalValues(criteria)

And they get frustrated when they only get the same 100 rows of data. Repeat after me: v2 API invocations (find operations) are limited to batches of 100 results at a time! But that's not the end of the story. With a small change, the query above can be tuned to iterate through all results that match the search criteria:

HistoricalDataItemValueCriteria criteria = new HistoricalDataItemValueCriteria()
criteria.assetId = '9701'
criteria.startDate = '2014-07-23T12:33:00Z'
criteria.endDate = '2014-07-23T12:44:00Z'
criteria.pageNumber = 1
criteria.pageSize = 100 // Default.
DataItemBridge dbridge = com.axeda.sdk.v2.dsl.Bridges.dataItemBridge
FindDataItemValueResult results = dbridge.findHistoricalValues(criteria)
tcount = 0
while ( (results = dbridge.findHistoricalValues(criteria)) != null && tcount < results.totalCount ) {
  results.dataItems.each { res ->
    tcount++
  }
  criteria.pageNumber = criteria.pageNumber + 1
}

I currently recommend that people avoid using the count() or countDomainObjectByCriteria() functions if you're then going to call a find. Currently both the count*() and find functions compute total results, so calling both doubles the execution time. The total count is only computed when running the first find() operation, so the code pattern above is the most efficient way I've seen to run these operations on the platform.

Having covered how to do this in code (custom objects), let's turn our attention to the REST APIs - the other entry point for using these capabilities. The REST API doesn't offer a count*() function, but the first find() invocation (if using XML) brings back totalCount as part of the result set. You can use this in your application to decide how many times to call the REST endpoint to retrieve your data.
So for the example above:

POST: https://customer-sandbox.axeda.com/services/v2/dataItem/findHistoricalValues
HEADERS:
Content-Type: application/xml
Accept: application/xml
BODY:
<?xml version="1.0" encoding="UTF-8"?>
<HistoricalDataItemValueCriteria xmlns="http://www.axeda.com/services/v2" pageSize="100" pageNumber="1">
  <assetId>9701</assetId>
  <startDate>2014-07-23T12:33:00Z</startDate>
  <endDate>2014-07-23T12:35:02Z</endDate>
</HistoricalDataItemValueCriteria>

RESULTS (the sample below happens to come from an asset query, but the totalCount attribute appears the same way on all find results):
<v2:FindAssetResult totalCount="1882" xmlns:v2="http://www.axeda.com/services/v2" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
   <v2:criteria pageSize="100" pageNumber="1">
      <v2:name>*</v2:name>
      <v2:propertyNames/>
   </v2:criteria>
   <v2:assets>
   </v2:assets>
</v2:FindAssetResult>

Or JSON:

POST: https://customer-sandbox.axeda.com/services/v2/dataItem/findHistoricalValues
HEADERS:
Content-Type: application/json
Accept: application/json
BODY:
{
  "assetId": 9701,
  "startDate": "2014-07-23T12:33:00Z",
  "endDate": "2014-07-23T12:35:02Z",
  "pageNumber": 1,
  "pageSize": 2
}

And that's how you work around the maxQueryResults limitation of the v1 APIs. Some APIs do not currently have matching v2 bridges (e.g. MobileLocation and DataItemAssociation), in which case the limitation still applies. Creative use of the query criteria will allow you to work around these limitations as we continue to improve the v2 API. Regards, -Chris
Thing Subscription

This post is intended for novice ThingWorx users who want to understand what a Thing Subscription is and the overall purpose of using Thing Subscriptions.

What is a Thing Subscription?
A Thing Subscription is a script (JavaScript) that is called each time an event occurs. Events are property states that are of interest to end users (e.g. temperature) and are therefore indicators to kick off some functionality in a Thing Subscription when action is needed. Events can, for example, be triggered by an Alert that detects a change or an anomaly in property values. The Thing Subscription is explicitly linked to an event, and when the event fires, the data is passed to the subscriber.

Why use a Thing Subscription?
Imagine your machine is running 24 hours, 7 days a week, with supervised human interaction. If a pump temperature exceeds the accepted value, it needs to be regulated by the manufacturing department. But no one in the department knows when the temperature will exceed the accepted value or drop suddenly; therefore, the machines are always sporadically supervised in person, which leads to heavy costs for the manufacturer. With a Thing Subscription, a notification email can be sent directly to the department manager, who acts based on the email notification.

What a Thing Subscription must have
A Thing Subscription must define a rule that is executed when an event occurs. The definition of the rule may accommodate any appropriate business logic.

Example Thing Subscription process
In this scenario, a Thing Subscription uses a predictive analytics model to detect data changes or anomalous values coming through a Thing property. Based on historical data, including failure information, a predictive analytics model analyzes run-time values sent from individual Things/properties to the analytics server. The model detects patterns associated with past failures; when it predicts a failure/event based on the analyzed patterns, an action is fired via a Thing Subscription. That action could be for ThingWorx to create a service ticket or send a notification email to the service department.

Example of a simple Thing Subscription set-up using a built-in ThingWorx alert instead of an analytics model
The Thing Subscription in the example below will send a notification email when the temperature exceeds the values defined in the ThingWorx alert configuration. Prerequisite: a mail server extension must be imported into ThingWorx Composer; this enables the service department to receive the email notification when an event has occurred. The extension can be downloaded from the Marketplace.

1. Create a Thing with the MailServer[i] as the Base Thing Template.
2. Create a new Thing and add Properties, together with an alert that is triggered when the value exceeds a user-defined temperature.
3. Enable the Thing Subscription by selecting Subscriptions and clicking +Add. Make sure to mark the Enabled checkbox, then select your Event name and your Property name. On the right side of the screen you can enter the script/function that will call the ThingWorx email service to create the email notification (see the sketch below this list). Select Done and Save.
4. Enable email notification by selecting Services. Provide a name, select Me/Entities, mark Other entity, and find your Thing where the MailServer is the Thing Template.
5. Then find the SendMessage snippet/script and fill out the snippet with your personal information.
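For step 3, the subscription script could look something like the following minimal sketch. The Thing name "MailServerThing" and the addresses are examples only, and the parameter names are based on the mail extension's SendMessage service; check your extension's documentation for the exact signature:

// Runs each time the alert event fires; sends a notification email
var params = {
    from: "noreply@example.com",                     // example sender address
    to: "department.manager@example.com",            // example recipient address
    subject: "Temperature alert on " + me.name,
    body: "Temperature exceeded the configured limit at " + new Date()
};
// "MailServerThing" is assumed to be the Thing created from the MailServer template in step 1
Things["MailServerThing"].SendMessage(params);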
[i] View this blog for more information on how to install the MailServer
Connect

Connect Your Data to ThingWorx
In the world of IoT application development, connectivity refers to the infrastructure and protocols which connect devices to the cloud or network. Edge devices handle the interface between the physical world and the cloud. ThingWorx provides you with several different tools for connecting to the ThingWorx platform. Which connectivity method to pick depends on your individual use case.

Learning Paths
Connect and Configure Industrial Devices and Systems

Featured Guides
Install ThingWorx Kepware Server
Connect to an Azure OPC UA Server

REST API
Use the REST API to connect low-capability devices to ThingWorx. The ThingWorx REST API is an easy way for low-capability devices to connect with the ThingWorx platform and push data to it. Any edge device that can make an HTTP POST can read and update properties or execute services on the ThingWorx platform.
Choose a Connectivity Method
Use REST API to Access ThingWorx
Connect an Arduino Developer Board

Edge SDKs
Connect natively to ThingWorx using an AlwaysOn protocol SDK: secure, embeddable, and easily deployable communications designed for connecting sensors, devices and equipment across any network topology and any communication scenario. SDKs are available for Java, C and .NET, and allow you to connect your devices to ThingWorx with the AlwaysOn protocol. Using the Edge SDKs will give you all the flexibility you need to meet your application's requirements and build robust, secure, full-featured edge integrations and gateways for any platform.
ThingWorx Edge SDKs
SDK Reference
C SDK Tutorial
Java SDK Tutorial

Edge MicroServer
The Edge MicroServer proxies connections via AlwaysOn. Connect your devices to the ThingWorx platform with the Edge MicroServer, a pre-built application that enables devices incapable of making TLS connections to securely interact with the platform.
Connect Raspberry Pi to ThingWorx
Choose a Connectivity Method

Kepware Server
Access data from industrial machine controllers. ThingWorx Kepware Server, with 150+ industrial protocol drivers, allows you to easily connect to different types of industrial equipment. The interface provides real-time, bi-directional industrial controls data to the ThingWorx Platform via the AlwaysOn protocol.
Install ThingWorx Kepware Server

Device Cloud Connectors
Connect devices with the adapter of your choice and integrate with ThingWorx to build scalable IoT applications.
Connect Azure IoT Devices

Analyze

Analyze and Visualize IoT Data
The AI and Machine Learning technologies used in ThingWorx Analytics automate much of the complex analytical processes involved in creating data-driven insights for your IIoT application. Simulate the behavior of physical products in the digital world, use predictive analytics algorithms to find patterns in your business data and generate a prediction model, or build a real-time anomaly detection model by monitoring for data points that fall outside of an expected range.

Learning Paths
Monitor Factory Supplies and Consumables
Design and Implement Data Models to Enable Predictive Analytics

Featured Guides
Operationalize an Analytics Model
Build a Predictive Analytics Model

Perform Analytical Calculations
Embed analytics capabilities into your industrial IoT applications in order to monitor real-time data, predict future events and conditions, and optimize performance of devices and organizations.
Operationalize an Analytics Model
Build a Predictive Analytics Model
Monitor an SMT Assembly Line
Statistical Monitoring with Descriptive Analytics
Perform Statistical Calculations with Descriptive Analytics

Build

Rapid, Model-Based Application Development
Build your industrial IoT application using ThingWorx's model-based, drag-and-drop GUI development environment. Use the ThingModel to describe assets, processes, and organizational elements and how they relate to each other. Define the functional behavior, add business logic, and extend your application with pre-built plugins. With a properly constructed framework, your application will be scalable, flexible and more secure.

Learning Paths
Medical Device Service
Design and Implement Data Models to Enable Predictive Analytics

Featured Guides
Get Started with ThingWorx for IoT
Data Model Introduction

Build the Data Model
Define the properties, services, and events of Things you want to expose to your application developers. The ThingWorx Data Model is a logical representation of the physical devices, systems, and people that interact with your application.
Data Model Introduction
Monitor an SMT Assembly Line
Data Model Implementation
Design Your Data Model

Leverage the Data Model
Leverage your data model using events, subscriptions, and custom business logic.
Monitor an SMT Assembly Line
Methods for Data Storage
Bind Data to Widgets
Implement Services, Events, and Subscriptions
Create Custom Business Logic
Application Development Tips & Tricks
Create Session Parameters

Extend the Platform Capabilities
Take advantage of extensions from partners and third parties to add new functionality into your system in a seamless manner. Extensions can be service (function/method) libraries, connector templates, widgets, and more.
Create An Extension
Create A Mashup Widget Extension
Create An Authentication Extension

Manage

ThingWorx Platform Management
Efficiently manage your assets with visibility and control over your IoT solution. Install, configure and troubleshoot your application, while monitoring performance and communication with devices. Offering a comprehensive set of tools and features, ThingWorx enables remote access, file transfers, software upgrades, logging, debugging, and more.

Learning Paths
Getting Started on the ThingWorx Platform
Using an Allen-Bradley PLC with ThingWorx

Featured Guides
Deploy an Application

Manage Your Platform
Compare Persistence Providers

Manage Your Applications
Operationalize application updates, OS upgrades, patches and documentation.
Deploy an Application
Compare Persistence Providers

Experience

Design Engaging Experiences
Use the industry's first purpose-built IoT application development environment to design engaging experiences for web and mobile applications. Designed to reduce the time, cost, and risk required to build new innovative IoT applications, this layer has two distinct functions: build-time and run-time. Build-time encompasses the technology to create the things in your Industrial IoT solution, while run-time includes the operational permissions to execute and manage those things.

Learning Paths
Getting Started on the ThingWorx Platform
Customize UI and Display Options to Deploy Applications

Featured Guides
Create Your Application UI

Application Layout (UI)
Utilize the ThingWorx Mashup Builder tools to design and create engaging IoT applications.
Define Your UI Style
Add Style to Your UI with CSS
Effective UI Implementation

Charts & Graphs
Bring your IoT data to life with dynamic charts and graphs.
How to Display Data in Charts

Reusable Components
Leverage the ThingWorx widget library to create a robust user experience and enhance your application capabilities.
Object-Oriented UI Design Tips
Display Geolocation Data Using Google Maps
Organize Your UI with the Collection Widget

Secure

Securely Collect and Process Data
ThingWorx is secure by design and offers multiple authentication options to increase the security of your IoT application. From TLS-encrypted communication and role-based access controls to the distribution of security patches, ThingWorx integrates a range of security features that you can leverage in your development process.

Learning Paths
Getting Started on the ThingWorx Platform

Featured Guides
Configure Permissions

IoT Application Security
Authenticate devices on our platform. ThingWorx handles data transformation, data persistence, and business logic so you can focus on developing your application.
Configure Permissions
Enabling LDAP Authentication in ThingWorx
Create An Authentication Extension
Create An Application Key
PLEASE NOTE: DataConnect has been deprecated and is no longer in use or supported.

We are regularly asked in the community how to send data from the ThingWorx platform to ThingWorx Analytics in order to perform some analytics computation. There are different ways to achieve this, and which one to use will depend on the business needs. If the analytics need is about anomaly detection, the best way forward is to use ThingWatcher without the ThingWorx Analytics server. The ThingWatcher Help Center is an excellent place to start, and a quick start-up program can be found in this blog. If the requirement is to perform a full-blown analytics computation, then sending data to ThingWorx Analytics is required. This can be achieved by:
1. Using ThingWorx DataConnect, which is what this blog will cover, or
2. Using a custom implementation. I will be very pleased to get your feedback on your experience in implementing a custom solution, as this could give some good ideas to others too.

In this blog we will use the example of a smart tractor in ThingWorx where we collect data points on:
Speed
Distance covered since the last tyre change
Tyre pressure
Amount of gum left on the tyre
Tyre size

From an analytics perspective, the gum left on the tyre is the goal we want to analyse in order to know when the tyre needs changing.

We are going to cover the following:
Background
Workflow
DataConnect configuration
ThingWorx configuration
Data Analysis Definition configuration
Data Analysis Definition execution
Demo files

Background
For people not familiar with ThingWorx Analytics, it is important to know that ThingWorx Analytics only accepts a single data file in .csv format. Each column of the .csv file represents a feature that may have an impact on the goal to analyse. For example, in the case of tyre wear, the distance covered, the speed, the pressure and the tyre size will be our features. The goal is also included as a column in this .csv file. So any solution sending data to ThingWorx Analytics will need to prepare such a .csv file. DataConnect performs this activity, in addition to some transformation too.

Workflow
1. Decide on the properties of the Thing to be collected that are relevant to the analysis.
2. Create service(s) that collect those properties.
3. Define a Data Analysis Definition (DAD) object in ThingWorx. The DAD uses a Data Shape to define each feature that is to be collected and sends them to ThingWorx Analytics. Part of the collection process requires the use of the services created in point 2.
4. Upon execution, the DAD will create one skinny csv file per feature and send those skinny .csv files to DataConnect. In the case of our example, the DAD will create speed.csv, distance.csv, pressure.csv, gumleft.csv, tyresize.csv and id.csv.
5. DataConnect processes those skinny csv files to create a final single .csv file that contains all of these features. During the processing, DataConnect performs some transformation and synchronisation of the different skinny .csv files.
6. The resulting dataset csv file is sent to the ThingWorx Analytics Server, where it can then be used as any other dataset file.

DataConnect configuration
As seen in this workflow, a ThingWorx server, a DataConnect server and a ThingWorx Analytics server will need to be installed and configured. Thankfully, the installation of DataConnect is rather simple and well described in the ThingWorx DataConnect User's guide.
Below I have provided a sample of a working dataconnect.conf file for reference, as this is one place where syntax can cause a problem.

ThingWorx configuration
The Platform Subsystem needs to be configured to point to the DataConnect server. This is done under SYSTEM > Subsystems > PlatformSubsystem.

DAD configuration
The most critical part of the process is to properly configure the DAD, as this is what dictates the format and values filled in the skinny csv files for the specific features. The first step is to create a data shape with as many fields as features/properties collected. Note that one field must be defined as the primary key. This field is the one that uniquely identifies the Thing (more on this later). We can then create the DAD using this data shape, as shown below.

For each feature, a datasource needs to be defined to tell the DAD how to collect the values for the skinny csv files. This is where custom services are usually needed. Indeed, the Out Of The Box (OOTB) services, such as QueryNumberPropertyHistory, help to collect logged values, but the id returned by those services is continuously incremented. This does not work for the DAD. The id returned by each service needs to be what uniquely identifies the Thing. It needs to be the same for all records for this Thing across the different skinny csv files. It is indeed this field that is then used by DataConnect to merge all the skinny files into one master dataset csv file. A custom service can make use of the OOTB services; however, it will need to override the id value. For example, the service below uses QueryNumberPropertyHistory to retrieve the logged values and timestamp, but then overrides the id with the Thing's name (a minimal sketch of such a service is given at the end of this post).

The returned values of the service then need to be mapped in the DAD to indicate which output corresponds to the actual property's value, the Thing id and the timestamp (if needed). This is done through the Edit Datasource window (by clicking on the Add Datasource link, or on the datasource itself if already defined in the Define Feature window).

On the left-hand side, we define the origin of the datasource. Here we have selected the service GetHistory from the Thing Template smartTractor. On the right-hand side, we define the mapping between the service's output and the skinny .csv file columns. Circled in grey are the outputs from the service. Circled in green is what defines the columns in the .csv file. A skinny csv file will have 1 to 3 columns, as follows:
1. One column for the ID. Simply tick the radio button corresponding to the service output that represents the ID.
2. One column representing the value of the Thing property. This is indicated by selecting the link icon on the left-hand side in front of the returned data which represents the value (in our example the output data from the service is named value).
3. One column representing the timestamp. This is only needed when a property is time dependent (for example, a time series dataset). In the example, the property is Distance: the distance covered by the tyre does depend on time, whereas we would not have a timestamp for the TyreSize property, as the wheel size remains the same.

How many columns should we have (and therefore how many outputs should our service have)?
The .csv file representing the ID will have one column, and therefore the service collecting the ID returns only one output (the Thing name in our smartTractor example – not shown here but available in the download).
Properties that are not time bound will have a csv file with 2 columns: one for the ID and one for the value of the property.
Properties that are time bound will have 3 columns: one for the ID, one for the value and one for the timestamp. Therefore the service will have 3 outputs.

Additionally, the input for the service may need to be configured by clicking on the icon.

Once the datasources are configured, you should configure the Time Sampling Interval in the General Information tab. This sampling interval will be used by DataConnect to synchronize all the skinny csv files. See the Help Center for a good explanation of this.

DAD execution
Once the above configuration is done, the DAD can be executed to collect property values already logged on the ThingWorx platform. Select Execution Settings in the DAD and enter the time range for property collection. A dataset with the same name as the DAD is then created in DataConnect as well as in the ThingWorx Analytics Server. The dataset can then be processed like any other dataset inside ThingWorx Analytics.

Demo files
For convenience, I have also attached a ThingWorx entities export that can be imported into a ThingWorx platform so you can take a closer look at the setup discussed in this blog. Also attached is a small simulator to populate the properties of the Tractor_1 Thing. The usage is:
java -jar TWXTyreSimulatorClient.jar hostname_or_IP port AppKey
For example:
java -jar TWXTyreSimulatorClient.jar 192.168.56.106 8080 d82510b7-5538-449c-af13-0bb60e01a129

Again, feel free to share your experience in the comments below as it will be very interesting for all of us. Thank you
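As promised above, here is a minimal sketch of the kind of custom collection service described in the DAD configuration section. It assumes a logged Distance property; the data shape name is hypothetical, and the key point is simply that the id column is overridden with the Thing name:

// Wraps the OOTB QueryNumberPropertyHistory service and overrides the id
// so every record for this Thing carries the same stable identifier
var history = me.QueryNumberPropertyHistory({
    propertyName: "Distance",
    maxItems: 10000
});

var result = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape({
    infoTableName: "history",
    dataShapeName: "TractorHistoryShape" // hypothetical shape with id, value, timestamp fields
});

for (var i = 0; i < history.rows.length; i++) {
    result.AddRow({
        id: me.name,                          // override the incremented id with the Thing name
        value: history.rows[i].value,         // the logged property value (named "value" as in the blog)
        timestamp: history.rows[i].timestamp  // only needed for time-bound properties
    });
}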
Hi all,

ThingWorx contains lots of useful functionality for your services (at last count, 339 snippets in ThingWorx 8.5.2). These snippets are an important part of the platform's application-building capabilities, and most of them are simple enough to understand based on their name and the description that appears when hovering over them.

I have noticed, however, that in some cases platform users are not aware of their full capabilities. With this in mind, some time ago I started creating a Snippet Guide for my personal use, which I'm now sharing with the community. It contains additional explanations, documentation links and sample source code tested by me.

Please bear in mind that it was written for an earlier ThingWorx version and I did not have enough time to update it for 8.5.x, but it should work the same here as well.

This enhanced documentation is not supported by PTC, so please do not open a Tech Support ticket based on the content of this document; instead, comment on this thread if there are things I can improve.

Happy New Year!
This video concludes Module 9: Anomaly Detection of the ThingWorx Analytics training videos. It gives an overview of the Statistical Process Control (SPC) Accelerator.
PostgreSQL is a powerful, open source object-relational database system that provides unlimited database size. ThingWorx 6.5 introduces PostgreSQL as a persistence provider and supports High Availability. The main advantages of ThingWorx with PostgreSQL are:

1. Highly customizable
PostgreSQL includes a framework that allows developers to define and create their own custom data types, along with supporting functions and operators that define their behavior. Triggers and stored procedures can be written in C and loaded into the database as a library, allowing great flexibility in extending its capabilities.

2. Synchronous replication
PostgreSQL streaming replication is asynchronous by default. Synchronous replication offers the ability to confirm that all changes made by a transaction have been transferred to one synchronous standby server. This extends the standard level of durability offered by a transaction commit. The only way data can be lost is if both the primary and the standby suffer crashes at the same time.

3. Write-ahead logging for fault tolerance
The Write Ahead Log (WAL) is the feature of PostgreSQL that allows it to recover data, usually up to the point where the server stopped. As you make changes to your data, PostgreSQL aggressively writes those changes to the WAL. PostgreSQL issues a checkpoint when a buffer limit is reached. When PostgreSQL restarts, it replays the changes from the WAL since the last checkpoint, to bring the database back to the state of the last completed commit. The master node sends a live stream of data changes to the slave nodes through the WAL, and the slaves apply this data to stay up to date.

4. Point-in-time recovery
Point-in-time Recovery (PITR), also called incremental database backup, online backup or archive backup, uses the history records stored in the WAL file to roll forward changes made since the last full database backup. With PITR, database backup downtime can be totally eliminated, because backup and system access can happen at the same time. With PITR, we back up the latest archive log file since the last backup, instead of taking a full database backup every day.

ThingWorx streams data from the connected devices, and PostgreSQL handles it with greater scalability. In ThingWorx, PostgreSQL acts as a persistence provider that stores both run-time data and metadata about Things. Run-time data is the data that is persisted once the Things are composed and is used by connected devices to store their data. Streams and value streams fetch huge amounts of data; once the streaming data reaches a limit of 50 GB, Neo4j can't handle the performance. For example, a single Stream that has 50 properties gathering data from 10,000 devices will quickly hit the memory limit with the Neo persistence provider. So it is strongly recommended to choose PostgreSQL for better performance.

Overview of installing ThingWorx with PostgreSQL:
1. Install the latest version of Java and make sure environment variables are configured.
2. Follow the instructions in Installing Thingworx 6.5 to install Tomcat. Instructions/commands may vary for different Linux flavors.
3. Install PostgreSQL. For Linux/Unix environments, see the YUM Installation Guidelines.
4. Create 'ThingworxPostgresqlStorage' and 'ThingworxPlatform' folders in the root directory ( / ) and assign access permissions to the user.
5. Copy the modelproviderconfig.json file (from the ThingWorx download package) to the 'ThingworxPlatform' folder.
6. Execute the ThingworxPostgresSchemaSetup and ThingworxPostgresDBSetup scripts (.bat for Windows and .sh for Unix/Linux environments); for further instructions, follow Getting Started with PostgreSQL in the ThingWorx Administrators Guide. A sample invocation is sketched below.
7. Restart Tomcat.
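For step 6, a typical invocation on Linux might look like the following sketch. It assumes the scripts are run with their default settings from the directory where they were extracted; check the Administrators Guide for the full parameter list supported by your version:

./thingworxPostgresSchemaSetup.sh
./thingworxPostgresDBSetup.sh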
ThingWorx is great for storing large amounts of data coming from your devices, but it can also be used like a traditional, row-based database for information you would like to integrate with your Thing data. Attached to this blog entry is a short example of creating an address book database using a DataTable and a DataShape. It does not focus on creating mashups, but instead discusses the modeling and service calls you would use to create a simple database; a minimal sketch of one such call follows below.
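For a flavor of the service calls discussed in the attachment, here is a minimal sketch of adding one record to a DataTable. The Thing and field names are examples, not the ones from the attached entities:

// Build an infotable matching the DataTable's DataShape and insert one record
var values = Things["AddressBookTable"].CreateValues();
values.AddRow({
    name: "Jane Smith",                  // example fields defined by the DataShape
    phone: "555-0100",
    email: "jane.smith@example.com"
});
Things["AddressBookTable"].AddOrUpdateDataTableEntry({
    values: values,
    sourceType: "Service",
    source: "AddAddressEntry"            // hypothetical calling service name
});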
In this part of the Troubleshooting blog series, we will review how to restart the individual services essential to the ThingWorx Analytics application within the Virtual Machine Appliance.

Services have stopped, and I cannot run my Analytics jobs!
In some cases, users encounter issues where a system or process has halted and they are unable to proceed with their tasks. This can be due to a myriad of reasons, ranging from OS hanging issues to memory issues with certain components.

As we covered earlier in Part II, the ThingWorx Analytics application is installed on a CentOS (Linux) operating system. As with most Linux operating systems, you have the ability to manually check and restart processes as needed.

Steps to restart services
With how the application is installed and configured, the services should auto-start when you boot up the VM. Verify that the Appliance is functional by running your desired API call. If a system is not functioning as expected, you will receive an error in your output when you POST an API call. Some errors are very specific, and you can search the Knowledge Database for any existing Knowledge Articles that may solve the issue. For error messages that do not have an existing article, you may want to attempt the following.

Method 1:
If you are encountering issues and are unsure which process is not working correctly, we recommend a full application restart. This involves restarting the Virtual Machine Appliance via the command-line terminal. We recommend the following command, as root user or using sudo, as this is known as a "graceful restart":

sudo reboot -h now

This will restart the virtual machine safely, and once you are back up and running you can run your API calls to verify functionality. This should resolve any incremental issues you may have faced.

Method 2:
If you want to restart an individual service, there is a particular start order that needs to be followed to make sure the application operates as expected. The first step is to check which services are not running. You can use the following command to check what is running and its current status:

service --status-all

The services you are looking for are the following:
Zookeeper
PostgreSQL Server
GridWorker(s)
Tomcat

If a particular service is not on the running list, you will have to start it manually by using the service start command:

service [name of service] start
e.g. service tomcat start

You may be prompted for the root password. You can verify that the services are operating by running the status check command as described above.

If you need to restart all services, there is a specific start order. This is important to follow, as there are some dependencies, such as Postgres being required by the GridWorker(s) and Tomcat. The start order is as follows (a consolidated example is sketched below):
1. Zookeeper
2. PostgreSQL Server
3. GridWorker(s)
4. Tomcat

After completing the restart, and verifying that the services are running, run your desired API call.
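For reference, a full manual start in that order might look like the sketch below. The exact service names are assumptions and can vary between appliance versions (the PostgreSQL service in particular may carry a version suffix), so confirm them in the service --status-all output first:

service zookeeper start
service postgresql start
service gridworker start
service tomcat start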
Distributed Timer and Scheduler Execution in a ThingWorx High Availability (HA) Cluster
Written by Desheng Xu and edited by Mike Jasperson

Overview
Starting with the 9.0 release, ThingWorx supports an "active-active" high availability (or HA) configuration, with multiple nodes providing redundancy in the event of hardware failures as well as horizontal scalability for workloads that can be distributed across the cluster.

In this architecture, one of the ThingWorx nodes is elected as the "singleton" (or lead) node of the cluster. This node is responsible for managing the execution of all events triggered by timers or schedulers – they are not distributed across the cluster.

This design has proved challenging for some implementations, as it creates the potential for a ThingWorx application to generate an imbalanced workload if complex timers and schedulers are needed. However, your ThingWorx applications can overcome this limitation and still use timers and schedulers to trigger workloads that will distribute across the cluster. This article will demonstrate both how to reproduce this imbalanced workload scenario and the approach you can take to overcome it.

Demonstration Setup
For purposes of this demonstration, a two-node ThingWorx cluster was used, similar to the deployment diagram below:

Demonstrating Event Workload on the Singleton Node
Imagine this simple scenario: you have a list of vendors, and you need to process some logic for one of them at random every few seconds.

First, we will create a timer in ThingWorx to trigger an event – in this example, every 5 seconds.

Next, we will create a helper utility that has a task that will randomly select one of the vendors and process some logic for it – in this case, we will simply log the selected vendor in the ThingWorx ScriptLog.

Finally, we will subscribe to the timer event and call the helper utility:

Now, with that code in place, let's check where these services are being executed in the ScriptLog.

Look at the PlatformID column in the log… notice that the Timer and the helper utility are always running on the same node – in this case Platform2, which is the current singleton node in the cluster.

As the complexity of your helper utility increases, you can imagine how the workload will become unbalanced, with the singleton node handling the bulk of this timer-driven workload in addition to the other workloads being spread across the cluster.

This workload can be distributed across multiple cluster nodes, but a little more effort is needed to make it happen.

Timers that Distribute Tasks Across Multiple ThingWorx HA Cluster Nodes
This time let's update our subscription code – using the PostJSON service from the ContentLoader entity to send the service requests to the cluster entry point instead of running them locally.
const headers = {
    "Content-Type": "application/json",
    "Accept": "application/json",
    "appKey": "INSERT-YOUR-APPKEY-HERE"
};
const url = "https://testcluster.edc.ptc.io/Thingworx/Things/DistributeTaskDemo_HelperThing/services/TimerBackend_Service";
let result = Resources["ContentLoaderFunctions"].PostJSON({
    proxyScheme: undefined /* STRING */,
    headers: headers /* JSON */,
    ignoreSSLErrors: undefined /* BOOLEAN */,
    useNTLM: undefined /* BOOLEAN */,
    workstation: undefined /* STRING */,
    useProxy: undefined /* BOOLEAN */,
    withCookies: undefined /* BOOLEAN */,
    proxyHost: undefined /* STRING */,
    url: url /* STRING */,
    content: {} /* JSON */,
    timeout: undefined /* NUMBER */,
    proxyPort: undefined /* INTEGER */,
    password: undefined /* STRING */,
    domain: undefined /* STRING */,
    username: undefined /* STRING */
});

Note that the URL used in this example - https://testcluster.edc.ptc.io/Thingworx - is the entry point of the ThingWorx cluster. Replace this value with your cluster's entry point if you want to duplicate this in your own cluster.

Now, let's check the result again.

Notice that the helper utility TimerBackend_Service is now running on both cluster nodes, Platform1 and Platform2.

Is this Magic? No! What is Happening Here?
The timer or scheduler itself is still being executed on the singleton node, but now, instead of triggering the helper utility locally, the PostJSON service call from the subscription is being routed back to the cluster entry point – the load balancer. As a result, the request is routed (usually round-robin) to any available cluster nodes that are behind the load balancer and reporting as healthy.

Usually, the load balancer will be configured to have a cookie-based affinity - the load balancer will route the request to the node that has the same cookie value as the request. Since this PostJSON service call is a RESTful call, any cookie value associated with the response will not be attached to the next request. As a result, the cookie-based affinity will not impact the round-robin routing in this case.

Considerations to Use this Approach

Authentication: As illustrated in the demo, make sure to use an Application Key with an appropriate user assigned in the header. You could alternatively use username/password or a token to authenticate the request, but this could be less ideal from a security perspective.

App Deployment: The hostname in the URL must match the hostname of the cluster entry point. As the URL of your implementation is now part of your code, if you deploy this code from one ThingWorx instance to another, you will need to modify the hostname/port/protocol in the URL. Consider creating a variable in the helper utility which holds the hostname/port/protocol value, making it easier to modify during deployment.

Firewall Rules: If your load balancer has firewall rules which limit the traffic to specific known IP addresses, you will need to determine which IP addresses will be used when a service is invoked from each of the ThingWorx cluster nodes, and then configure the load balancer to allow the traffic from each of these public IP addresses. Alternatively, you could configure an internal IP address endpoint for the load balancer and use the local /etc/hosts name resolution of each ThingWorx node to point to the internal load balancer IP, or register this internal IP in an internal DNS as the cluster entry point.
This document is a general reference to help with configuring and troubleshooting a Google email account with the ThingWorx mail extension.

Configuration:
SMTP: smtp.gmail.com, port 587, TLS checked. If SSL is being used instead, the port should be 465.
POP3: pop.gmail.com, port 995.

To test, go to Services and click Test for the SendMessage service. A successful request will show an empty screen with a green "result" at the top.

Possible errors:

Error: "Could not connect to SMTP host: smtp.gmail.com, port: 587", with nothing else in the logs.
Fix: Check your Internet connection to ensure it's not being blocked.

Error: <hostname:port>/Thingworx/Common/locales/en-US/translation-login.json 404 (Not Found)
Fix: Check your Gmail folders for incoming messages regarding a sign-in from an unknown device. The subject will be "Someone has your password", and the email content will include the device, location, and timestamp of when the incident occurred. Make sure to check the "this was me" option to prevent further blocking. This may or may not be sufficient; sometimes it leads to another error: "Please log in via your web browser and 534-5.7.14 then try again. 534-5.7.14 Learn more at 534 5.7.14..." That error can be resolved by:
1. Turning off the "less secure apps" feature in your Gmail settings. You have to be logged in to your Gmail account to follow the link: https://www.google.com/settings/security/lesssecureapps
2. Changing your Gmail password afterwards. I don't have a valid explanation as to why, but this is a required step, and the error doesn't clear without changing the password.
In the following scenario (for Red Hat in this case), running the dbsetup script results in the error:

./thingworxPostgresDBSetup.sh
psql:./thingworx-database-setup.sql:1: ERROR:  syntax error at or near ":"
LINE 1: CREATE TABLESPACE :"tablespace" OWNER :"username" location :...
psql:./thingworx-database-setup.sql:3: ERROR:  syntax error at or near ":"
LINE 1: GRANT ALL PRIVILEGES ON TABLESPACE :"tablespace" to :"userna...
psql:./thingworx-database-setup.sql:5: ERROR:  syntax error at or near ":"
LINE 1: GRANT CREATE ON TABLESPACE :"tablespace" to public;
psql:./thingworx-database-setup.sql:14: ERROR:  syntax error at or near ":"
LINE 1: CREATE DATABASE :"database" WITH
psql:./thingworx-database-setup.sql:16: ERROR:  syntax error at or near ":"
LINE 1: GRANT ALL PRIVILEGES ON DATABASE :"database" to :"username";

Given that the installed components match the requirements guide (Tomcat 8 and PostgreSQL 9.4.5+ for ThingWorx 7.x), run the following command directly from the bin directory of the Postgres deployment:

psql -q -h localhost -U twadmin -p 5432 -v database=thingworx -v tablespace=thingworx -v tablespace_location=/app/navigate/ThingworxPostgresqlStorage -v username=twadmin

This should open the psql command-line interface. From there, run the following with the fully qualified path to the sql file on disk (replace FULLPATH with the path to the sql file):

\i ./FULLPATH/thingworx-database-setup.sql

If you are experiencing the above-mentioned syntax error, then the output will likely be:

psql: FATAL:  database "twadmin" does not exist.

Then, from the Postgres bin directory, run the following:

./psql postgres
\set

Then the second set of commands:

\q
psql -q -h localhost -U twadmin -p 5432 -v database=thingworx -v tablespace=thingworx -v tablespace_location=/app/navigate/ThingworxPostgresqlStorage -v username=twadmin
\set

We see the following outputs:

./psql postgres
Password:
psql.bin (9.4.11)
Type "help" for help.

postgres=# \set
AUTOCOMMIT = 'on'
PROMPT1 = '%/%R%# '
PROMPT2 = '%/%R%# '
PROMPT3 = '>> '
VERBOSITY = 'default'
VERSION = 'PostgreSQL 9.4.11 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-55), 64-bit'
DBNAME = 'postgres'
USER = 'postgres'
PORT = '5432'
ENCODING = 'UTF8'
postgres=# \q

-bash-4.1$ psql -q -h localhost -U twadmin -p 5432 -v database=thingworx -v tablespace=thingworx -v tablespace_location=/ThingworxPostgresqlStorage -v username=twadmin
Password for user twadmin:
twadmin=# \set
AUTOCOMMIT = 'on'
QUIET = 'on'
PROMPT1 = '%/%R%# '
PROMPT2 = '%/%R%# '
PROMPT3 = '>> '
VERBOSITY = 'default'
VERSION = 'PostgreSQL 8.4.20 on x86_64-redhat-linux-gnu, compiled by GCC gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-17), 64-bit'
database = 'thingworx'
tablespace = 'thingworx'
tablespace_location = '/ThingworxPostgresqlStorage'
username = 'twadmin'
DBNAME = 'twadmin'
USER = 'twadmin'
HOST = 'localhost'
PORT = '5432'
ENCODING = 'UTF8'

Note: even though PostgreSQL 9.4.5 has been installed by the system administrator, there are still traces of PostgreSQL 8.4.20 present in the system that cause the syntax error issue (possibly as part of the default OS packaging). Removing the 8.4.20 rpms will resolve the problem.
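Before removing anything, a quick way to confirm that an older client is being picked up first is to compare the psql on the PATH with the installed packages (standard Red Hat commands; the package names in the grep output will tell you which versions are present):

which psql
psql --version
rpm -qa | grep postgres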
Step 4: Create Extension Project

In this tutorial, you will create a ThingWorx extension that retrieves weather information using the OpenWeatherMap API.

Create Account
In this part of the lesson, you will create a free OpenWeatherMap account that provides an API key so you can access their REST API.
1. Sign up for a free account.
2. Log in to your account.
3. Create a new API Key. NOTE: We will use this generated API key as a parameter in the REST calls.

Create New Extension Project
NOTE: Make sure that you are in the ThingWorx Extension Perspective. To verify, you should see a plus icon in the menu bar. If you don't see this, you are probably in the wrong perspective. Go back to the previous step to learn how to set the perspective to ThingWorx Extension in Eclipse.
1. Go to File -> New -> Project.
2. Click ThingWorx -> ThingWorx Extension Project. Click Next. NOTE: A New ThingWorx Extension window will appear.
3. Enter the Project Name (for example, MyThingworxWeatherExtension).
4. Select Gradle or Ant as your build framework.
5. Enter the SDK location by browsing to the directory where the Extension SDK is stored. NOTE: The Eclipse Plugin accepts Extension SDK version 6.6 or higher.
6. Enter the Vendor information (for example, ThingWorx Labs).
7. Change the default package version from 1.0.0 to support extension dependency. NOTE: The information from ThingWorx Extension Properties is used to populate the metadata.xml file in your project. The metadata.xml file contains information about the extension and details for the various artifacts within the extension. The information in this file is used in the import process in ThingWorx to create and initialize the entities.
8. Set the JRE version to 1.8.
9. Click Next, then click Finish. Your newly created project is added to the Package Explorer tab.

Create New Entity
1. Select your project and click Add to create a new entity. NOTE: You can also access this from the ThingWorx menu on the menu bar.
2. Create a Thing Template for your MyThingworxWeatherExtension project. NOTE: In this guide we are using a Template, but in a real-world scenario you may consider using a Thing Shape to encapsulate extension functionality. By using Thing Shapes you give users of your extension the ability to easily add new functionality to existing Things. It is simple to add a new Thing Shape to an existing Thing Template, while using the properties or services defined by a Thing Template would require recreating all existing assets using the new Template. Since subscriptions cannot be created on Thing Shapes, you might choose to create Thing Templates that implement one or more subscriptions for convenience.
3. In the pop-up window, browse to add the source folder of your project in Source Folder. NOTE: It should default to the src directory of your project; in our case it will be MyThingworxWeatherExtension/src.
4. Browse to add the package where you want to create this new class, or simply give it a name (such as com.thingworx.weather).
5. Enter a name and description for your Thing Template (WeatherThingTemplate). NOTE: By default, the Base Thing Template is set to GenericThing.
6. Select Next. NOTE: If you want to give other users of this entity permission to edit it in ThingWorx Composer, select the entity as an editable entity. Only non-editable entities can be upgraded in place; editable entities must be deleted and recreated when your extension is updated. If you need to make it possible to customize the extension, consider using a configuration table to save user customizations.
7. Select Finish.

Verify that you have a WeatherThingTemplate class created that extends the Thing class.

@ThingworxBaseTemplateDefinition(name = "GenericThing")
public class WeatherThingTemplate extends Thing {
    public WeatherThingTemplate() {
        // TODO Auto-generated constructor stub
    }
}

NOTE: You might see a warning to add a serial version. You can add a default or generated serial value.

Step 5: Add Properties

In this section, you are going to add CurrentCity, Temperature and WeatherDescription properties to the WeatherThingTemplate. These properties are associated with the Thing Template and add the @ThingworxPropertyDefinitions annotation before the class definition in the code.

1. Right-click inside the WeatherThingTemplate class, or right-click the WeatherThingTemplate class in the Package Explorer.
2. Select ThingWorx Source -> Add Property.
3. In the popup window, create a property to store the city name: Name = CurrentCity, Base Type = STRING, Description = ''.
4. Select the Has Default Value checkbox and enter a city name (e.g. Boston). This will be the default value unless a specific value is passed.
5. Select the Persistent checkbox. This will maintain the property value if a system restart occurs. NOTE: If you select the Logged checkbox, the property value is logged to a data store. If you select the Read Only checkbox, the data will be static.
6. Select VALUE from the Data Change Type drop-down menu. NOTE: This allows any Thing in the system to subscribe to a data change event for this property. Choose one of the following Data Change Types:

Always - Fires the event to subscribers for any property value change
Never - Does not fire a change event
On - For most values, any change will trigger this
Off - Fires the event if the new value is false
Value - For numbers, if the new value has changed by more than the threshold value, fire the change event; for non-numbers, this setting behaves the same as Always

7. Select Finish.
8. Create another property called Temperature with a base type of NUMBER. You can keep the default values for the other parameters.
9. Create another property called WeatherDescription with a base type of STRING. Keep the default values for the other parameters.

Step 6: Create Configuration Table

In this part of the lesson, we will create a configuration table to store the API key that you generated from OpenWeatherMap. Configuration tables are used on Thing Templates to store values, similar to properties, that do not change often.

1. Right-click inside the WeatherThingTemplate class and select ThingWorx Source -> Add Configuration Table.
2. Create a new configuration table with the name OpenWeatherMapConfigurationTable.
3. Click Add in the Data Shape Field Definitions frame. NOTE: Configuration tables require fields (columns) with a defined table structure (DataShape).
4. Enter appid as the name, with base type STRING.
5. Select the Required checkbox.
6. Click OK, then Finish to add the Configuration Table.

To use the appid in the REST calls, you need to obtain the value from the configuration table and assign it to a field variable in the Java code. We will use the initializeThing method to obtain the appid value at runtime. NOTE: The initializeThing() method acts as an initialization hook for the Thing. Every time a Thing is created or modified, this method is executed and the value of appid is obtained from the configuration table and stored in a global field variable of the class.
initializeThing() must call super.initializeThing() to ensure it performs initialization of the Thing. Create the initializeThing() method and field variable _appid with base type STRING anywhere in the WeatherThingTemplate class.

private static Logger _logger = LogUtilities.getInstance().getApplicationLogger(WeatherThingTemplate.class);
private String _appid;

@Override
public void initializeThing() throws Exception {
    super.initializeThing();
    _appid = (String) this.getConfigurationSetting("OpenWeatherMapConfigurationTable", "appid");
}

NOTE: In the code above we used ThingWorx LogUtilities to get a reference to the ThingWorx logging system, then assigned the reference to the variable _logger. In the steps below we will use this variable to log information. There are multiple kinds of loggers and log levels used in the ThingWorx Platform, but we recommend that you use the application or script loggers for logging anything from inside extension services. If prompted to import the logger, use slf4j.

Click here to view Part 3 of this guide.
Performance and memory issues in ThingWorx often appear at the Java Virtual Machine (JVM) level. We can frequently detect these issues by monitoring JVM garbage collection logs and thread dumps. In this first part of a multi-part series, let's discuss how to analyze JVM garbage collection (GC) logs to detect memory issues in our application.

What are GC logs?
When enabled on the Apache server, the logs will show the memory usage over time and any memory allocation issues (see "How to enable GC logging" in our KB for details on enabling this logging level). Garbage collection logs capture all memory cleanup operations performed by the JVM. We can determine the proper memory configuration, and whether we have memory issues, by analyzing GC logs.

Where do I find the GC logs?
When configured as per our KB article, GC logs will be written to your Apache logs folder. Check both the gc.out and the gc.restart files for issues if present, especially if the server was restarted while experiencing problems.

How do I read GC logs?
There are a number of free tools for GC log analysis (e.g. http://gceasy.io, GCViewer), but GC logs can also be analyzed manually in a text editor for common issues. Generally, each memory cleanup operation is printed as a single log line, like the examples shown below. Additional information on specific GC operations you might see is available from Oracle. A GC analysis tool will convert the text representation of the cleanups into a more easily readable graph.

Three key scenarios indicating memory issues
Let's look at some common scenarios that might indicate a memory issue. A case should be opened with Technical Support if any of the following are observed:
1. Extremely long Full GC operations (30 seconds+)
2. Full GCs occurring in a loop
3. Memory leaks

Full GCs:
Full GCs are undesirable as they are a 'stop-the-world' operation. The longer the JVM pause, the more impact end users will see: all other active threads are stopped while the JVM attempts to make more memory available, and a Full GC takes considerable time (sometimes minutes), during which the application will appear unresponsive.
Example – this Full GC takes 46 seconds, during which time all other user activity would be stopped:
272646.952: [Full GC: 7683158K->4864182K(8482304K) 46.204661 secs]

Full GC Loop:
Full GCs occurring in quick succession are referred to as a Full GC loop. If the JVM is unable to clean up any additional memory, the loop will potentially go on indefinitely, and ThingWorx will be unavailable to users while the loop continues.
Example – four consecutive garbage collections take nearly two minutes to resolve:
16121.092: [Full GC: 7341688K->4367406K(8482304K), 38.774491 secs]
16160.11: [Full GC: 4399944K->4350426K(8482304K), 24.273553 secs]
16184.461: [Full GC: 4350427K->4350426K(8482304K), 23.465647 secs]
16207.996: [Full GC: 4350431K->4350427K(8482304K), 21.933158 secs]
Example – in a graph, the red triangles occurring at the end of the timeline are a visual indication that we're in a Full GC loop.

Memory Leaks:
Memory leaks occur in the application when an increasing number of objects are retained in memory and cannot be cleaned up, regardless of the type of garbage collection the JVM performs. When mapping the memory consumption over time, a memory leak appears as ever-increasing memory usage that's never reclaimed. The server eventually runs out of memory, regardless of the upper bounds set.
Example – a graph of memory increasing steadily despite GCs until we eventually max out.

What should we do if we see these issues occurring?
1. Full GC loops and memory leaks require a heap dump to identify the root cause with high confidence
2. The heap dump needs to be collected when the server is in a bad state
3. Thread dumps should be collected at the same time
4. A case should be opened with Support to investigate the available GC, stacktrace, or heap dump files
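For reference, the KB article mentioned above covers enabling this logging. On a Java 8 HotSpot JVM the flags typically look like the following (an illustration only; the path is a placeholder and your KB/JVM version may differ):

-Xloggc:/path/to/apache/logs/gc.out -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps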
View full tip
The ThingWorx EMS and SDK-based applications follow a three-step process when connecting to the Platform:

1. Establish the physical websocket: The client opens a websocket to the Platform using the host and port it has been configured to use. The websocket URL exposed at the Platform is /Thingworx/WS. TLS is negotiated at this time as well.

2. Authenticate: The client sends an AUTH message to the Platform, containing either an App Key (recommended) or a username/password. The AUTH message is part of the ThingWorx AlwaysOn protocol. If the client attempts to send any other message before the AUTH, the server will disconnect it. The server will also disconnect the client if it does not receive an AUTH message within 15 seconds. This time is configurable in the WSCommunicationSubsystem Configuration tab and is named "Amount of time to wait for authentication message (secs)."

Once authenticated, the SDK/EMS is able to interact with the Platform according to the permissions applied to its credentials. For the EMS, this means that any client making HTTP calls to its REST interface can access Platform functionality. For this reason, the EMS only listens for HTTP connections on localhost (this can be changed using the http_server.host setting in your config.json). At this point, the client can make requests to the Platform and interact with it, much like an HTTP client can interact with the Platform's REST interface. However, the Platform still cannot direct requests to the edge.

3. Bind: A BIND message is another message type in the ThingWorx AlwaysOn protocol. A client can send a BIND message to the Platform containing one or more Thing names or identifiers. When the Platform receives the BIND message, it associates those Things with the websocket it received the BIND message over. This allows the Platform to send request messages to those Things over the websocket. It also updates the isConnected and lastConnection time properties for the newly bound Things.

A client can also send an UNBIND request. This tells the Platform to remove the association between the Thing and the websocket. The Thing's isConnected property is then updated to false.

For the EMS, edge applications can register using the /Thingworx/Things/LocalEms/Services/AddEdgeThing service (this is how the script resource registers Things). When a registration occurs, the EMS sends a BIND message to the Platform on behalf of that new resource. Edge applications can de-register (and have an UNBIND message sent) by calling /Thingworx/Things/LocalEms/RemoveEdgeThing.
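As an illustration, the http_server section of the EMS config.json is where that localhost binding lives. A minimal sketch is shown below; the port value is illustrative, and keeping the host on localhost is recommended for the security reasons above:

{
    "http_server": {
        "host": "127.0.0.1",
        "port": 8000
    }
}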
View full tip
Based on Google's Spanner DB, CockroachDB is a distributed SQL database that scales horizontally and survives disk, machine, rack, and even datacenter failures. It is built to automatically replicate, rebalance, and recover with minimal configuration. See What is CockroachDB? for more.

Useful in use cases requiring:
- Distributed or replicated OLTP
- Multi-datacenter deployments
- Multi-region deployments
- Cloud migrations
- Cloud-native infrastructure initiatives

Note: CockroachDB in its current state isn't suitable for heavy analytics / OLAP.

Feature that makes it really attractive
As mentioned above, it scales horizontally with minimal configuration out of the box, allowing a quick setup that starts on a local laptop/machine as shown below and scales easily to a single dedicated server or a development/public cloud cluster. Thanks to the easy setup, adding new nodes is as simple as starting the cockroach utility. See the CockroachDB FAQ for more. To top it off, it uses the PostgreSQL wire protocol and PostgreSQL's SQL dialect, further reducing configuration and special JDBC driver requirements when ThingWorx is configured with PostgreSQL as its persistence provider.

Setting up a CockroachDB cluster
Download the required binary or Docker version from Install CockroachDB, available for Mac, Linux & Windows.

PS: The following setup uses the Windows binary on a VM with Windows 10 64-bit and 6 GB RAM.

Starting a cluster node
Open a command prompt, navigate to the directory where cockroach.exe is unzipped, and launch the node with the following command:

cockroach.exe start --insecure --host=10.128.13.183 --http-port=8082

This will start a node on the defined host in insecure mode, with its web-based DB administration console on port 8082 and the DB listening on the default port 26257. Note that it will log a security warning, since the node is started in insecure mode via the --insecure flag, like so:

*
* WARNING: RUNNING IN INSECURE MODE!
*
* - Your cluster is open for any client that can access 10.128.13.183.
* - Any user, even root, can log in without providing a password.
* - Any user, connecting as root, can read or write any data in your cluster.
* - There is no network encryption nor authentication, and thus no confidentiality.
*
* Check out how to secure your cluster: https://www.cockroachlabs.com/docs/stable/secure-a-cluster.html
*
CockroachDB node starting at 2018-03-16 11:52:57.164925 +0000 UTC (took 2.1s)
build: CCL v1.1.6 @ 2018/03/12 18:04:35 (go1.8.3)
admin: http://10.128.13.183:8082
sql: postgresql://root@10.128.13.183:26257?application_name=cockroach&sslmode=disable
logs: C:\CockroachDb\cockroach116\cockroach-data\cockroach-data\logs
store[0]: path=C:\CockroachDb\cockroach116\cockroach-data\cockroach-data
status: restarted pre-existing node
clusterID: 012d011e-acef-47e2-b280-3efc39f2c6e7
nodeID: 1

Ensure that secure mode is used when deploying in production.

Starting 2 additional nodes

Starting node 2:

cockroach.exe start --insecure --store=node2 --host=10.128.13.183 --port=28258 --http-port=8083 --join=10.128.13.183:26257

Starting node 3:

cockroach.exe start --insecure --store=node3 --host=10.128.13.183 --port=28259 --http-port=8084 --join=10.128.13.183:26257

Note: Both of these nodes join the cluster via 10.128.13.183:26257 (the port of node 1).

Verifying the live cluster and nodes via the web-based CockroachDB admin console
Open a web browser with any of the above nodes' http-port, e.g.
http://10.128.13.183:8084

Click on the View nodes list link on the right panel. This opens the nodes list page.

Connecting to ThingWorx as an external datastore
Good news: if your ThingWorx is running with the PostgreSQL persistence provider, no additional JDBC driver is needed, as CockroachDB uses the PostgreSQL wire protocol and its SQL dialect is that of PostgreSQL. For any other persistence provider, download and install the PostgreSQL Relational Database Connector from the ThingWorx Marketplace.

Creating a database in the cluster
Start a SQL client connecting to any of the running nodes: open a command prompt, navigate to the directory containing cockroach.exe, and use the following command:

cockroach sql --insecure --port=26257

This changes the prompt to root@<serverName/IP>:26257>. Since the command above logs in in insecure mode, no password is needed; the default admin username in CockroachDB is root. Use the following to create a database:

create database thingworx;
show databases;

root@10.128.13.183:26257/> SHOW databases;
+--------------------+
| Database |
+--------------------+
| crdb_internal |
| information_schema |
| pg_catalog |
| system |
| thingworx |
| thingworxdatastore |
+--------------------+
(6 rows)

This confirms the thingworx database was created.

Creating a user to access that database

CREATE USER cockroach WITH PASSWORD 'admin';

The following grants all rights on the thingworx database to the "cockroach" user:

grant all on database thingworx to cockroach;

Creating a Thing & connecting to CockroachDB via ThingWorx Composer
For the example below, ThingWorx is using PostgreSQL as its persistence provider. Create a Thing based on the Database Thing Template and use the following connection settings:

JDBC Driver Class Name: org.postgresql.Driver
JDBC Connection String: jdbc:postgresql://<serverIp/name>:26257/thingworx?sslmode=disable
Database User Name: cockroach
Database Password: <password>

Navigate to the Properties page to verify the connectivity.

Creating table(s)
Now that the Thing is connected to the database, DB objects can be created in the following ways:
- Via a Thing-based SQL Command
- Via CockroachDB's SQL client

The following command creates a small demo table:

CREATE TABLE demo (id INT, demovalue STRING)

Use SQLCommand as the JavaScript handler when using the above statement to create the table directly from the ThingWorx Database Thing.

Verifying the database & the table created within it via the CockroachDB web admin console
Under the left panel, click on Databases from the home page of one of the nodes' web admin consoles, e.g. http://localhost:8084. Apart from other useful information about the database, e.g. the database size and total number of tables, clicking on the table name will also show the SQL used to create it (including the defaults).

Creating a couple of Database Thing services to perform a bulk insert into the table from ThingWorx Composer
An Insert service created as a SQL Command with the following snippet; the service takes 2 inputs, of type INTEGER and STRING:

insert into demo values ([[id]], [[demoValue]])

A JavaScript service executing a bulk demo-data insert by wrapping the SQL service created above:

for (var i = 0; i < 2000; i++) {
    var params = {
        id: i /* INTEGER */,
        demoValue: 'Insert from node 0 while node 3 is down' /* STRING */
    };
    // result: NUMBER
    var result = me.InsertDemo(params);
}

At this point, different users in ThingWorx with sufficient access rights can create their DB Things in ThingWorx Composer and can use any of the node addresses to read/write data to the CockroachDB cluster.
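As a companion to the insert services above, data can be read back through the same Database Thing with a service created as a SQL Query (using SQLQuery as the JavaScript handler and an appropriate InfoTable DataShape for the result). The service and parameter names here are illustrative:

select id, demovalue from demo where id >= [[minId]] order by id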
For the purpose of the demo, one node was stopped while the other two were running, and data was written to the cluster via the test service created above. Once the third node was restarted, we can see the automatic replication happening between the nodes; this can be watched live via the web-based admin console of any of the running nodes.

As highlighted above, at a certain point in time (around 1500 hrs) all nodes were synced with the data, including node 3, which, as mentioned above, was down while the data was being inserted into the cluster. All of the above replication was done using the default configuration.
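One simple way to confirm the replication on the restarted node is to point the SQL client at that node's SQL port (28259 for node 3, as configured above) and count the rows; if replication completed, the count should match the 2000 rows inserted by the bulk service above:

cockroach sql --insecure --host=10.128.13.183 --port=28259
select count(*) from demo;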
View full tip
Key Functional Highlights

ThingWorx 8.1 covers the following areas of the product portfolio: ThingWorx Analytics, ThingWorx Utilities, and ThingWorx Foundation, which includes Core, Connection Server, and Edge capabilities. Highlights of the release include:

ThingWorx Foundation
- Next Generation Composer:
  - Embedded Mashup Builder enables codeless development of web visualizations.
  - New ability to manage and push configurations for KEPServerEX.
- Notifications:
  - Create SMS and Email notifications natively in Next Generation Composer.
  - Support for localized and dynamic content with tokens.
- Protocol Adapter Toolkit:
  - Encrypted communication between edge devices and the Connector over HTTPS or WSS.
  - Encrypted communication between the Connector and ThingWorx Core (WSS).
  - Ability to define a mapping of outbound messages from ThingWorx Core to an edge-bound message using a Codec.
  - Authentication of edge devices.
- 3rd Party Platform Connectivity:
  - Azure IoT Connector v2.0
    - Model data from Azure IoT using the Thing Model
    - Utilize data from Azure IoT as properties in the Thing Model
    - Utilize services and events through Azure IoT
    - Utilize Azure file storage
  - ThingWorx for Predix v1.0
    - Synchronize data from Predix to ThingWorx
    - Enable SSO between Predix & ThingWorx
- C SDK: Framework for custom functionality to be added to C SDK-based applications at runtime.
- License Management: Simple, automated licensing system for collection, storage, reporting, management, and auditing of licensing entitlements.
- Deprecated the SQUEAL functionality.

ThingWorx Analytics
- Categorical and Ordinal Goals: Adds use of unordered text (categorical) and ordered text (ordinal) goals to predictive analytics. Create and score models with the new goal types.
- Virtual Sensor: Adds support for time-series predictions when historical data is not available. Allows machine learning predictions to take the place of physical sensors and simplifies predictions like time to failure and probability of failure.
- Tighter platform integration: Analytics Server is more tightly integrated with ThingWorx Core, providing native control of and access to analytics programming interfaces.
- New, simplified API: The new Analytics Server 2.0 API pattern is simpler, more modern, and easier to use.
- Microservices-based Architecture: Conversion to microservices sets the stage for High Availability and improved distributed installations.
- Native Linux installer: Docker is no longer required to run on Linux-based systems.
- Analytics Manager: Several enhancements, including:
  - Simulation-driven data framework allows external providers to send data as if they were a physical Thing.
  - Time Series Data Inputs improves the ability to share time series data with external providers.
  - Thing Connect / Disconnect makes it easier to connect specific Things with external providers.
- Analytics Builder: Ease-of-use enhancements, including:
  - New UI support for time series models.
  - Easier access to and use of Signals and Profiles.
  - Simplified models for Boolean goals.
  - Easier installation; no longer requires UploadThing.

ThingWorx Utilities
- Software Content Management (SCM): Define package dependencies, where the deployment of a package requires the presence of one or more other packages.

ThingWorx Trial Edition
ThingWorx Trial Edition will be available to internal PTC resources at launch and will be made available externally on the Developer Portal shortly after launch.
Developer Enablement: Enhancements have been made to the Trial Edition installation tool, providing a native installation process of the ThingWorx platform, including:
- ThingWorx Foundation
- ThingWorx Utilities
- ThingWorx Analytics
- ThingWorx Industrial Connectivity

Documentation
- ThingWorx 8.1 Reference Documents
- ThingWorx Analytics 8.1 Reference Documents
- ThingWorx Core 8.1 Release Notes
- ThingWorx Core Help Center
- ThingWorx Edge SDKs and WebSocket-based Edge MicroServer Help Center
- ThingWorx Connection Services Help Center
- ThingWorx Industrial Connectivity Help Center
- ThingWorx Utilities Help Center
- ThingWorx Utilities Installation Guide
- ThingWorx Analytics Help Center
- ThingWorx Trial Edition User Guide

Additional information
- ThingWorx eSupport Portal
- ThingWorx Developer Portal
- ThingWorx Marketplace

Download
The following items are available for download from the PTC Software Download site:
- ThingWorx Platform – Select Release 8.1
- ThingWorx Utilities – Select Release 8.1
- ThingWorx Analytics – Select Release 8.1
View full tip
Hi Community,

I've recently had a number of questions from colleagues around architectures involving MQTT and what our preferred approach is. After some internal verification, I wanted to share an aggregate of my findings with the ThingWorx Architect and Developer Community.

PTC currently supports four methods for integrating MQTT into IoT projects:
- ThingWorx MQTT Extension
- ThingWorx Azure IoT Hub Connector
- ThingWorx Kepware Server MQTT Client driver
- ThingWorx Kepware Server IoT Gateway MQTT agent

Choice is nice, but it adds complexity and sometimes confusion. The intent of this article is to clarify and provide direction on the subject, to help others choose the path best suited for their situation.

ThingWorx MQTT Extension
The ThingWorx MQTT extension has been available on the Marketplace as an unsupported "PTC Labs" extension for a number of years. Recently its status has been upgraded to "PTC Supported", and it has received some attention from R&D in the form of bug fixes and security enhancements. Most people who have used MQTT with ThingWorx are familiar with this extension. As with anything, it has advantages and disadvantages. You can easily import the extension without having administrative access to the machine, it's easy to move around and store with projects, and it can be up and running quite quickly. However, it is also quite limited when it comes to the flexibility required when building a production application, is tied directly to the core platform, and does not get feature/functionality updates.

The MQTT extension is a good choice for PoCs, demos, benchmarks, and prototypes, as it provides MQTT integration relatively quickly and easily. As an extension which runs with the core platform, it is not a good choice as part of a client/enterprise application where MQTT communication reliability is critical.

ThingWorx Azure IoT Hub Connector
Although Azure IoT Hub is not a fully functional MQTT broker, Azure IoT does support MQTT endpoints on both IoT Hub and IoT Edge. This can be an interesting option: MQTT devices publish to Azure IoT and are integrated to ThingWorx using the Azure IoT Hub Connector, without actually requiring an MQTT broker to run and be maintained. The Azure IoT Hub Connector works similarly to the PAT and is built on the Connection Server, but adds the notion of device management and security provided by Azure IoT.

When using Azure IoT Edge configured as a transparent gateway with buffering (store and forward) enabled, this approach has the added benefit of being able to buffer MQTT device messages at a remote site, handling Internet interruptions without losing data.

This approach also has far greater integrated security capabilities, leveraging certificates and tying into Azure KeyVault, and it easily scales up the resources receiving the MQTT messages (IoT Hub and the Azure IoT Hub Connector). Considering that this approach is built on the Connection Server core, it also follows our deployment guidance of processing communications outside of the core platform (unlike the extension approach).

ThingWorx Kepware Server
As some will note, Kepware has some pretty awesome MQTT capabilities, both as north- and southbound interfaces. The MQTT Client driver allows creating an MQTT channel to devices communicating via MQTT, with auto-tag creation (from the MQTT payload). Coupled with the native ThingWorx AlwaysOn connection, you can easily connect Kepware to an on-premise MQTT broker and connect these devices to ThingWorx over AlwaysOn.
The IoT Gateway plug-in has an MQTT agent which allows publishing data from all of your Kepware-connected devices to an MQTT broker or endpoint. The MQTT agent can also receive tag updates on a different topic and write back to the controllers. We've used this MQTT agent to connect industrial control system data to ThingWorx through cloud platforms like Azure IoT, AWS, and communications providers.

ThingWorx Product Segment Direction
A key factor in deciding how to design your solution should be alignment with our product development direction. The ThingWorx Product Management and R&D teams have for years been putting their focus on scalable and enterprise-ready approaches that our partners and customers can build upon. I mention this to make it clear that not all supported approaches carry the same weight. Although we do support the MQTT extension, it is not in active development, because out-of-platform, microservices-based communication interfaces are our direction forward.

The Azure IoT Hub Connector, being built on the Connection Server, is currently the way forward for MQTT communications to ThingWorx Foundation.

Regards,

Greg Eva
View full tip
From the documentation, a Solr node is only needed when using DataTables. If the Solr configuration field is left blank, the extension requests an input. Are Solr nodes required or optional in order to use DSE with ThingWorx (in the hypothetical case of not using DataTables)?

-- As far as ThingWorx functionality is concerned, a Solr node is not required. However, the extension does try to validate the configuration, and hence, at this point, a Solr node is mandatory to properly configure the extension. This will be fixed in the future.

When there are 2 entries for addresses, one for a Cassandra cluster and one for a Solr cluster, are they the same cluster or different clusters?

-- They could be either. There can be one machine with Solr enabled, using the same IP for both Cassandra and Solr. However, this is not recommended for production workloads; it would be perfectly fine for development or test environments.

In a cluster, in order to have Solr and Cassandra nodes, the use of datacenters is required. Even if a datacenter isn't explicitly defined, a default install of DSE will create two datacenters called "Cassandra" and "Solr", which is what you would see in the default "Cassandra Keyspace Settings" property in the configuration. If the user does create datacenters with specific names, then they will have to update the "Cassandra Keyspace Settings" property to reflect the same, i.e.:

replication = {'class':'NetworkTopologyStrategy', 'Cassandra':1, 'Solr':1}

The number in front (1 being the default) represents the replication factor (https://docs.datastax.com/en/cql/3.1/cql/cql_using/update_ks_rf_t.html), which depends on the number of nodes in each datacenter.
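For reference, updating the replication factors for named datacenters uses standard CQL, per the DataStax link above. A sketch with an illustrative keyspace name and the default factors from the example above:

ALTER KEYSPACE thingworx WITH replication = {'class':'NetworkTopologyStrategy', 'Cassandra':1, 'Solr':1};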
View full tip
Anomaly Detection (also known as Outlier Detection) is a set of techniques that identify unusual occurrences in data. The premise is that such occurrences may be early indicators of future negative events (e.g. failure of assets or production lines). Data Science algorithms for Anomaly Detection include both supervised and unsupervised methods. In unsupervised Anomaly Detection, the algorithms assume that most of the data points are "normal" (e.g. normal operation of the asset) and look for the data points most dissimilar to the remainder of the dataset. Supervised Anomaly Detection requires a labeled set of anomalies, in which case predictive algorithms can be applied directly to this data.

ThingWorx employs a number of algorithms in support of Anomaly Detection:

- Simple threshold alerts. These are easy to set up on Thing properties but require a domain expert to provide the thresholds. The alert will then fire automatically when the value of the monitored property goes outside the predefined range of values, often seen as the "bad" side of the threshold.
- Statistical Process Control (SPC). This can be implemented using ThingWorx Analytics Property Transforms. Most companies use a subset of SPC charts and rules to monitor production processes. Examples include the X-bar and R charts, as well as the Western Electric rules (e.g. one point outside the average +/- three sigma range; a small code sketch of this rule appears at the end of this article). Explainable and widely accepted, SPC can also provide an earlier warning system compared to simple threshold alerts, in that it captures more complex patterns.
- Clustering. Using ThingWorx Analytics, one can build an optimal clustering of the available data points. Under the assumption that the data represents mostly normal operation, and that there is no significant pre-defined pattern of anomalies forming their own cluster, one can identify outliers by looking at the distance between points and their corresponding cluster centers. Points that are very far from their corresponding cluster center can be labeled as anomalies.
- Semi-supervised Anomaly Alerting (formerly known as ThingWatcher). This functionality identifies single-property time series behavior that is statistically different from what was seen in a finite window of "known normal operation". As such, it does not identify a "bad" event, or even a precursor to a "bad" event. Rather, it points the end user to further investigate a situation which may lead to a "bad" event. Anomaly Alerts can be set up easily, like any other Alert on a Thing property, and multiple Anomaly Alerts can be set up on the same or different properties of a Thing. Behind the scenes, the platform builds a time series neural network model of the known normal operation data, which is then applied to incoming data; if the errors are significantly different than those on known normal operation over a period of time, an Anomaly Alert is produced.

The techniques mentioned above are either unsupervised or semi-supervised. If the dataset contains labeled anomalies (e.g. asset faults, or suspicious patterns), then supervised predictive techniques (such as regression, decision trees, neural nets, or ensemble methods) are available to model the relationships between such anomalies (dependent variables) and various variables of interest (independent variables). These models can then be employed to monitor assets or production lines for upcoming anomalies.
In many real-world use cases, anomalies are relatively rare, so care needs to be taken when building such predictive models. Techniques such as up-sampling can prove beneficial in these situations.

What constitutes an anomaly depends on the observed data and the current context. If only a few data points are initially available, then it is possible that a lot of future data is predicted as anomalous despite being normal operation. Also, in terms of context, if Anomaly Detection is trained on a connected product in the winter, it is likely to flag all summer operation as anomalous. This can be tackled by implementing multiple anomaly detection alerts, one for each context of operation (e.g. season, recipe being manufactured, operation done by a robot).

Another consideration is lead time vs. explainability. For example, when a threshold alert fires, it is obvious why, but it may not be early enough to take action. As more advanced methods are employed, more complex patterns can be captured, hence more lead time, but typically at the expense of explainability. For example, semi-supervised Anomaly Alerting (formerly known as ThingWatcher) uses time windows, aggregations, and derivatives of up to the third order, resulting in significantly less explainability when an Anomaly is presented to the end user.

Choosing the appropriate Anomaly Detection technique is use-case dependent, balancing the desired lead time and explainability. If historical data on failures/anomalies is not available, a good place to start is Statistical Process Control, as it provides a balanced approach between the two dimensions, in addition to being already in use across many manufacturing companies.
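To make the SPC idea concrete, below is a minimal sketch of the first Western Electric rule mentioned above (one point outside the average +/- three sigma range), written in plain JavaScript of the kind used in ThingWorx services. The function name and input are illustrative; in practice this rule comes built in with ThingWorx Analytics Property Transforms, so this is for understanding only.

// Flags values outside mean +/- 3 sigma (Western Electric rule 1).
// 'values' is assumed to be an array of numbers, e.g. a property history.
function findSigmaOutliers(values) {
    var n = values.length;
    if (n === 0) {
        return [];
    }
    // Compute the mean
    var sum = 0;
    for (var i = 0; i < n; i++) {
        sum += values[i];
    }
    var mean = sum / n;
    // Compute the (population) standard deviation
    var sqDiff = 0;
    for (var j = 0; j < n; j++) {
        sqDiff += (values[j] - mean) * (values[j] - mean);
    }
    var sigma = Math.sqrt(sqDiff / n);
    // Collect the points outside mean +/- 3 sigma
    var outliers = [];
    for (var k = 0; k < n; k++) {
        if (Math.abs(values[k] - mean) > 3 * sigma) {
            outliers.push({ index: k, value: values[k] });
        }
    }
    return outliers;
}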
View full tip