
IoT & Connectivity Tips

Design Your Data Model Guide Part 1

Overview

This project will introduce the process of taking your IoT solution from concept to design. Following the steps in this guide, you will create a solution that doesn't need to be constantly revamped, by creating a comprehensive Data Model before starting to build and test your solution. We will teach you how to utilize a few proposed best practices for designing the ThingWorx Data Model and provide some prescriptive methods to help you generate a high-quality framework that meets your business needs.

NOTE: This guide's content aligns with ThingWorx 9.3. The estimated time to complete ALL 3 parts of this guide is 60 minutes. All content is relevant, but there are additional tools and design patterns you should be aware of. Please go to this link for more details.

Step 1: Data Model Methodology

We will start by outlining the overall process for the proposed Data Model Methodology.

Step 1 - User Stories: Identify who will use the application and what information they need. By approaching the design from a user perspective, you should be able to identify what elements are necessary for your system.
Step 2 - Data Sources: Identify the real-world objects or systems which you are trying to model. To create a solid design, you need to identify what the "things" are in your system and what data or functionality they expose.
Step 3 - Model Breakdown: Compose a representative model of modular components to enable uniformity and reuse of functionality wherever possible. Break down user requirements and data, identifying how the system will be modeled in Foundation.
Step 4 - Data Strategy: Identify the sources of data, then evaluate how many different types of data you will have, what they are, and how your data should be stored. From that, you may determine the data types and data storage requirements.
Step 5 - Business Logic Strategy: Examine the functional needs, and map them to your design for proper business logic implementation. Determine the business logic as a strategic flow of data, and make sure everything in your design fits together in logical chunks.
Step 6 - User Access Strategy: Identify each user's access and permission levels for your application. Before you start building anything, it is important to understand the strategy behind user access. Who can see or do what? And why?

NOTE: Due to the length of this subject, the ThingWorx Data Model Methodology has been divided into multiple parts. This guide focuses on the first three steps: User Stories, Data Sources, and Model Breakdown. Guides covering the last three steps are linked on the final Next Steps page.

Step 2: User Stories

With a user-based approach to design, you identify requirements for users at the outset of the process. This increases the likelihood of user satisfaction with the result. Utilizing this methodology, you consider each type of user that will be accessing your application and determine their requirements according to each of the following two categories:

Functionality: Determine what the user needs to do. This will define what kind of Services and Subscriptions will need to be in the system and which data elements and Properties must be gathered from the connected Things.
Information: What information do they need? Examine the functional requirements of the user to identify which pieces of information the users need to know in order to accomplish their responsibilities.

Factory Example

Let's revisit our Smart Factory example scenario.
The first step of the User Story phase of the design process is to identify the potential users of your system. In this example scenario, we have defined three different types of users for our solution:

- Maintenance
- Operations
- Management

Each of these users will have a different role in the system. Therefore, they will have different functional and informational needs.

Maintenance

It is the maintenance engineer's job to keep machines up and running so that the operator can assemble and deliver products. To do this well, they need access to granular data for the machine's operating status to better understand healthy operation and identify causes of failure. They also need to integrate their maintenance request management system to consolidate their efforts and to create triggers for automatic maintenance requests generated by the connected machines.

Required Functionality
- Get granular data values from all assets
- Get a list of maintenance requests
- Update maintenance requests
- Set triggers for automatic maintenance request generation
- Automatically create maintenance requests when triggers have been activated

Required Information
- Granular details for each asset to better understand healthy asset behavior
- Current alert status for each asset
- When the last maintenance was performed on an asset
- When the next maintenance is scheduled for an asset
- Maintenance request information, including creation date, due date, and progress notes

Operations

The operator's job is to keep the line running and make sure that it's producing quality products. To do this, operators must keep track of how well their line is running (both in terms of speed and quality). They also need to be able to file maintenance requests when they have issues with the assets on their line.

Required Functionality
- File maintenance requests
- Get quality data from assets on their line
- Get performance data for the whole line
- Get a prioritized list of production orders for their line
- Create maintenance requests

Required Information
- Individual asset performance metrics
- Full line performance metrics
- Product quality readings

Management

The production manager oversees the dispatch of production orders and ensures quotas are being met. Managers care about the productivity of all lines and the status of maintenance requests.

Required Functionality
- Create production orders
- Update production orders
- Cancel production orders
- Access line productivity data
- Elevate maintenance request priority

Required Information
- Production line productivity levels (OEE)
- List of open maintenance requests

Step 3: Data Sources - Thing List

Once you have identified the users' requirements, you'll need to determine what parts of your system must be connected. These will be the Things in your solution. Keep in mind that a Thing can represent many different types of connected endpoints. Here are some examples of possible Things in your system:

- Devices deployed in the field with direct connectivity or gateway-connectivity to Foundation
- Devices deployed in the field through third-party device clouds
- Remote databases
- Connections to external business systems (e.g., Salesforce.com, Weather.com, etc.)

Factory Example

In our Smart Factory example, we have already identified the users of the system and listed requirements for each of those users. The next step is to identify the Things in our solution. In our example, we are running a factory floor with multiple identical production lines.
Each of these lines has multiple different devices associated with it. Let's consider each of those items to be a connected Thing.

Things in each line:
- Conveyor belt x 2
- Pneumatic gate
- Robotic Arm
- Quality Check Camera

Let's also assume we already have both a Maintenance Request System and a Production Order System that are in use today. To add these to our solution, we want to build a connector between Foundation and each existing system. These connectors will be Things as well:
- Internal system connection Thing for the Production Order System
- Internal system connection Thing for the Maintenance Request System

NOTE: It is entirely possible to have scenarios in which you want to examine more granular-level details of your assets. For example, the arm and the hand of the assembly robot could be represented separately. There are endless possibilities, but for simplicity's sake, we will keep the list shorter and more high-level. Keep in mind that you can be as detailed as needed for this and future iterations of your solution. However, being too granular could potentially create unnecessary complexity and data overload.

Click here to view Part 2 of this guide.
---------------------------------------
It often happens that we need to copy a large file to the ThingWorx server periodically and, what's worse, the big file keeps changing (like a log file). This sample gives a simpler way to implement that. The main ideas in the sample are:
1. Lower the management burden on the ThingWorx server by instead putting all the work on the edge SDK side.
2. Save network bandwidth by uploading only the increment of the file and appending it to the older file on the ThingWorx server.

Java SDK version in this sample: 6.0.1-255
---------------------------------------
Ran into this recently and thought I'd share an approach to getting a table with a multi-column distinct while retaining all the columns of the row. If you use Distinct, you get only the columns you do the Distinct on. This isn't very helpful if you want the 'latest' or the 'first occurrences' of records in your table with a combination of fields being unique. For example, I had Process, Part, Dimension and Point for which I had multiple value and date time entries, but I only wanted the latest entries. Following is how I solved it; if you have a better way, please leave a comment! P.S.: for the query I used the awesome query builder available in the snippet section!

var q1Result = Things["MyThing"].QueryStreamEntriesWithData({maxItems: 99999, query: query1});

// Below creates a temporary measurement table to store the latest measurement values
var params = {
    infoTableName: "InfoTable",
    dataShapeName: "MyDataShape.DS"
};
// CreateInfoTableFromDataShape(infoTableName:STRING("InfoTable"), dataShapeName:STRING):INFOTABLE(MyDataShape.DS)
var tempTable1 = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);

// Extract only the latest measurements for the PART from the measurement result table 'q1Result'
// The way we are going to reduce this to unique measurements is:
// 1. Records are in reverse order of date time
// 2. Get distinct by Process, Part, Dimension, Point
// 3. Step through and match against the distinct set
// 4. The first match goes into the final set
// 5. Upon match, remove from the distinct set
// 6. If no match, then skip the record
// 7. If no more distinct match records, break the loop
var params = {
    t: q1Result /* INFOTABLE */,
    columns: 'ProcessID,PartID,Dimension,Point' /* STRING */
};
// result: INFOTABLE
var distinctResult = Resources["InfoTableFunctions"].Distinct(params);

for (var x = 0; x < q1Result.rows.length; x++) {
    var query = {
        "filters": {
            "type": "AND",
            "filters": [
                { "fieldName": "ProcessID", "type": "EQ", "value": q1Result.rows[x].ProcessID },
                { "fieldName": "PartID", "type": "EQ", "value": q1Result.rows[x].PartID },
                { "fieldName": "Dimension", "type": "EQ", "value": q1Result.rows[x].Dimension },
                { "fieldName": "Point", "type": "EQ", "value": q1Result.rows[x].Point }
            ]
        }
    };

    var params = {
        t: distinctResult /* INFOTABLE */,
        query: query /* QUERY */
    };
    // result: INFOTABLE
    var matchResult = Resources["InfoTableFunctions"].Query(params);

    if (matchResult.rows.length == 1) {
        tempTable1.AddRow(q1Result.rows[x]);

        // Remove the matched combination from the distinct set
        distinctResult = Resources["InfoTableFunctions"].DeleteQuery({
            t: distinctResult /* INFOTABLE */,
            query: query /* QUERY */
        });
        if (distinctResult.rows.length == 0) {
            break;
        }
    }
}

// I now have a tempTable1 with the full rows and the 4 fields distinct
result = tempTable1;
---------------------------------------
Preface

This guide applies to a clean installation of the CentOS 7 Minimal distribution. This is labeled as "Minimal ISO" on the CentOS.org website, and the filename of the ISO image used to install the operating system will resemble "CentOS-7-x86_64-Minimal-1611.iso." The machine used in this guide was a virtual machine created using Oracle VirtualBox, but the same steps should apply to any machine with a clean CentOS 7 Minimal install. It is, however, possible that some installations may encounter slight variations due to hardware configurations.

Before starting

Unzip the downloaded "MED-..._ThingWorx-Analytics-Server-Linux-Standalone-8-0-0.zip". Inside the unzipped directory you will find a file called "ThingWorxAnalyticsServer-8.0.0-linux-x64-installer.run". Before running step number 10, upload that file to your CentOS machine using an SFTP/SCP tool of your choice.

Configuration and installation steps

Step 1: Install Docker with the following commands (these steps are presented at https://docs.docker.com/engine/installation/linux/docker-ce/centos/#install-using-the-repository):

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum makecache fast
yum -y install docker-ce

Step 2: Create a group called docker (if this command reports that the group already exists, that is OK; you can move to the next step):

groupadd docker

Step 3: Add your non-root user to the docker group. In this example my non-root user is called "thingworx"; please replace it with the correct username:

usermod -aG docker thingworx

Step 4: Start the Docker service and enable it to auto start after reboot:

systemctl start docker
systemctl enable docker

Step 5: Verify that Docker is working:

docker ps

Step 6: After running the above command you should see a single line of output that resembles the following:

CONTAINER ID        IMAGE              COMMAND            CREATED            STATUS              PORTS              NAMES

Step 7: Disable SELinux with the two following commands. Note that by doing this, if this is a public-facing server, you will want to make sure you take appropriate security measures to lock down the system.

setenforce 0
sed -i -e 's/SELINUX=enforcing/SELINUX=disabled/' /etc/sysconfig/selinux

Step 8: Set the hostname of your machine to something other than the default, which is "localhost.localdomain". In this case I am using the name "centos"; this can be replaced with a name of your choosing:

hostname centos
echo "centos" > /etc/hostname

Step 9: Allow traffic through the default CentOS firewall. Note that in a production environment, the firewall should be configured in a more granular way, allowing incoming traffic to only the required ports (5432, 2181 and 8080). Please refer to the CentOS documentation and consult security best practices within your organization for more information. The following commands will completely disable the CentOS firewall.

systemctl disable firewalld
systemctl stop firewalld

Step 10: Ensure the ThingWorx Analytics Server installer is executable, then run the installer. You may have to change to the directory where the installer was uploaded to the machine; in this case I have it in the home directory of the user named thingworx. Please replace that path with the correct path for your machine. Note: below are 3 separate commands.
cd /home/thingworx
chmod +x ThingWorxAnalyticsServer-8.0.0-linux-x64-installer.run
./ThingWorxAnalyticsServer-8.0.0-linux-x64-installer.run

Step 11: Verify that the ThingWorx Analytics Server installation is successful. Note that it may take a few minutes for the system to become available; retry the command after a few minutes if an error is initially encountered.

curl http://127.0.0.1:8080/analytics/1.0/about/versioninfo

NOTE: The response from the above command should resemble the following:

{"implementationVersion":"8.0.0"}
---------------------------------------
Load Testing through Remote Device Simulation

Designing an enterprise-ready application requires extensive testing and quality assurance. This includes all sorts of tests, of course, from examining the user interface for flaws to verifying there is correct logic in all background services. However, no area of testing is more important than scalability. Load testing is how to test the application to ensure it still functions as desired when remote things are connected and streaming information to the Platform.

Load testing is considered a critical component of the change management process. It is mentioned numerous times throughout PTC best practice documentation. This tutorial will step you through designing a load test using Kepware as a simulator. Kepware is free to download and use in short demos, making it the perfect tool for this type of test.

Start by acquiring the latest version of Kepware from the download site. Click "Download Free Demo" if a license was not included in your PTC product package. The installation of Kepware is simple, and for details, see the Kepware Installation Guide. The tutorial shown here uses Kepware version 6.7 and ThingWorx version 8.4.4. Given that we are testing a ThingWorx application, this tutorial assumes ThingWorx is already installed and configured correctly.

Once Kepware is installed, follow these steps: (This tutorial was developed by Desheng Xu and edited by Victoria Tielebein. Exact specifications of the equipment used in both large-scale and local tests are given in step VI, which discusses the size of the simulation.)

I. Understand how to configure Kepware as a simulator
- Go to the Help menu within Kepware, and click on "Driver Help"
- Select "Simulator" in the pop-up window, and click "OK"
- Expand "Address Descriptions" and then "Simulation Functions"
- Select "Ramp Function" to review details about the function needed for this tutorial, as well as information about function syntax
- Close the window once this information has been reviewed

II. Create a new project in Kepware
- Click "File" > "New"
- In case you are connected to runtime, Kepware will allow you to choose to edit this project offline

III. Add a channel in Kepware
- Channels represent threads which Kepware will use to contact ThingWorx
- Under "Connectivity", click "Click to add a channel"
- From the drop-down list, select "Simulator"
- Use all the default settings, selecting "Next" all the way down to "Finish"

IV. Next, add one device to the channel
- Highlight the new channel and click "Click to add a device" (which will appear in the center of the screen)
- Once again, use the default settings, selecting "Next" all the way down to "Finish"

V. Add a tag to this device
- Within Kepware, tags represent properties which bind to remote things on the Platform and update with new information over time. Each device will need several tags to simulate remote property updates. The easiest way to add many tags for testing is to create one, and then copy and paste it.
- Highlight the device created in the previous step and click "Click to add static tag", which appears in the center of the screen
- For "Name", type "tag1"
- For Address, enter the Ramp function: RAMP(1000,1,2000000,1)
  - The first parameter is the update rate, given in milliseconds
  - The next two parameters are the range of values which can be sent
  - The last parameter is the increment or step
  - Together this means that every 1 second, this tag will send a new value that is 1 higher than the previous value to the Platform, starting at 1 and ending at 2 million
- Ensure the Data Type is given as "DWord" or any type which will be read as a "Number" (and NOT an "Integer") on the Platform
- Change the Scan Rate to 250
- Then click "OK"

VI. Add more devices to the test

The most basic set-up is now done: if this project connected to the Platform, one remote thing with one remote property could be used to simulate property updates. That is not very useful for load testing, however. We need many more things than this, and many more properties. The number of tags on each device should match the expected number of remote properties in the application itself. The number of devices in each channel should be large enough that when more channels are created, the number of total devices is close to the target for the application. For example, to simulate 10,000 things, each with 25 remote properties, we need 25 tags per device, 200 devices per channel, and 50 channels. This would require a lot of memory to run and should not be attempted on a local machine.

A full test of 40 channels, each with 10 devices, was performed as shown in the screenshots here. This simulates 10,000 writes per second to the Platform total, or about 400 remote device connections. This test used the following hardware specifications:
- Kepware machine running Windows 2016 64-bit, 2 cores, 8G
- ThingWorx Platform machine running Ubuntu 16.04, 4 cores, 16G
- PostgreSQL 9.6 machine running Ubuntu 16.04, 4 cores, 16G
- Influx 1.6.3 machine running Ubuntu 16.04, 4 cores, 16G

A local test was also run on Windows 10 (64-bit), using the H2 database, with Kepware and ThingWorx running side by side on the machine, 4 cores, 16G. This test made use of only 2 channels, with 10 devices each. For local tests to see how the simulation works, this is fine, but a more robust set-up like the above will be needed in a true load test. If there is not enough memory on the machine hosting Kepware, errors like this will appear in the Kepware logs:

One or more value change updates lost due to insufficient space in the connection buffer.

Once you decide on the number of tags and devices needed, follow the steps below to add them.
- To add more tags, copy and paste the existing tag (ctrl+c and ctrl+v work in Kepware for convenience) until there are as many tags as desired
- To add more devices, highlight the device in Kepware and copy and paste it as well (click on the channel before pasting)
- Then, copy and paste the entire channel until the number of channels, devices, and tags totals the desired load (be sure to click on "Connectivity" before hitting paste this time)

VII. Configure the ThingWorx connection
- Right click on Project in the left-hand navigation bar and, in the pop-up window that appears, highlight ThingWorx
- Change the "Enable" field to "Yes" to activate the other fields
- Fill in the details for "Host", "Port", "Application Key", and "Thing name"
- Note that the application key will need to be created in ThingWorx and then the value copied in here
- The certificate and encryption settings may also need to be adjusted to match your environment
- For local set-ups, it is likely that self-signed and all certificates will need to be accepted, so both of those fields will likely need to be set to "Yes" (encryption may need to be disabled as well). In production systems, this should not be the case

VIII. Save the project
- It doesn't matter too much if this project is saved as encrypted or not, so either enter a password to encrypt the save or select "No encryption"

IX. Connect to ThingWorx
- Click "Runtime" > "Connect..."
- A pop-up will appear asking if you want to load this project; click "Yes"
- The connection status should then appear in the bottom portion of the window where the logs are displayed

X. Configure in ThingWorx
- Login to the ThingWorx Platform
- Under "Industrial Connections" a thing should appear which is named as indicated in the Kepware configuration step above
- Click to open this thing and save it
- Also create a new thing, a value stream for ingesting data from Kepware

XI. Create remote things in ThingWorx
- Import the provided entity into ThingWorx (should appear as a downloadable attachment to this post)
- Open the KepwareUtil thing and go to the services tab
- Run the AutoKepwareCreate service to generate remote things on the Platform
- Give the name of the stream created above so each thing has a place to store property information
- The IgnoreTemplate flag should be set to false. This allows the service to create a thing template first, which is then passed to the remote devices. The only reason this would be set to true is if the devices need to be deleted and recreated, but the template does not (then set the flag to true). To delete the devices, use the AutoKepwareDelete service, also provided on the KepwareUtil thing
- Note that the AutoKepwareCreate service is asynchronous, so once it is executed, close the window and check the script logs to see when it completes. The logs will look like: KepwareUtil AutoKepwareCreate task finished!!!
XII. Check the status of the remote things
- Once the things are created, they should automatically connect to the Platform
- Run the TotalDeviceByTemplateWithTemplate service to see if the things are connected
- The template given here could be the one created by the AutoKepwareCreate service, or just give it RemoteThings if this is a small local set-up without many remote things on it
- The number of devices will equal the number of devices per channel times the total number of channels, which in the test shown here is 400
- isConnected will be checked if all of the devices are connected without issue
- If some of them are not connected, check the logs for any errors and resolve those before moving on

XIII. View the ingestion rate
- Once the devices are created, their tags should show as numbers (NOT integers), and they should already be updating with new values every second
- To view the ingestion rate, run the KepwareUtil service AutoKepwareRateSummary
- Give the thing template name that is created by the AutoKepwareCreate service, which will look like the name of the Kepware thing itself with a "T-" in the front
- The start time should be close to the current time, and the periodInMinutes should be large enough to include some of the test (periodInMinutes is used to calculate the end time within the service)
- Note in the results here that the Average Write Per Second is only 9975 wps, which is close but not exactly what we would expect. This means that there are properties not updating correctly, which requires us to look at the logs and restart some things
- If nothing shows up here, despite the Total Connected Things showing correctly, then look at the type of the tags on one of the remote devices. The type must be NUMBER for the query within this service to work, and not INTEGER. If the type of the tags is incorrect, then the type of the tags within Kepware was probably given as something which is not interpreted as a number in ThingWorx. Ensure DWord is used for the tags in Kepware
- Within the script log, look for any devices which show errors, as seen in the image below, and restart them to get their properties updating correctly

XIV. Once the ingestion rate equals what is expected (in the case of the test here, 10,000 wps), use the AutoKepwareIngestionStat service on the KepwareUtil thing to see details about each remote device
- The TimeGapAvg in this service represents the gap between two ingestions in milliseconds, showing any lag that may be present between Kepware and ThingWorx
- The TimeGapSTD shows the standard deviation of the time gap between two ingestions on any given thing, also indicating lag (the lower this number, the better)
- The StartTime and EndTime show the first and last timestamp observed in the ThingWorx database during the given duration
- The totalCount shows the total number of ingested records during the sampling cycle
- The StartValue and EndValue fields show the first and last value ingested into the tag during the given duration
- If the ingestion rate is working as expected, and the ramp function is actually sending an update on time (in this case, once each second), then the difference between the EndValue and StartValue should always be equal to the totalCount plus 1. If this doesn't match up, then there may be data loss or something else wrong with the property updates, which will show as a checked box in the valueException column.
It is not enough to ensure that the ingestion rate is correct, as sometimes the rate may fluctuate only by 1 or 2 wps and appear perfect, even while some data is lost. That is why it is important to ensure that there are no valueException boxes showing as checked in the test of the application. If none of these are marked as having failed, then the test was successful and this ingestion rate is acceptable for the application.

This tutorial is a very basic way to simulate many remote devices ingesting data into the Platform. For this to be a true test of the application, the remote things created in this test will need to be given business logic tasks as well. The AutoKepwareCreate service can be modified to give any template (and not just RemoteThing) to the thing template which is created and subsequently passed into the demo devices. Likewise, the template itself can be created, and then manually modified to look like the actual remote device template in the application, before the rest of the things are created (using the IgnoreTemplate flag in the creation and deletion services, as discussed above).

Ensure that events are triggered as expected and that subscriptions to property updates are in place on the thing template before creating the demo things. Make use of the subsystem monitor to ensure that the event, value stream, and stream queues do not grow so large that the Platform cannot keep up with the requests (for details about tuning the stream and value stream processing subsystems, see PTC's best practice documentation). Also be sure to load some of the mashups to see how they perform while the ingestion test is happening. This will test whether or not the ingestion rate and business logic of the application can function side by side without errors, data loss, or performance issues.
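As one way to keep an eye on those queues during the test, the performance metrics of the processing subsystems can be read from a small utility service. This is a minimal sketch only, assuming the standard GetPerformanceMetrics service on the stock subsystems; the exact metric names returned (and the name/value row fields used below) can vary by ThingWorx version, so verify the result shape on your own system:

// Sketch: log queue-related metrics for the event and stream subsystems
var subsystems = ["EventProcessingSubsystem", "ValueStreamProcessingSubsystem", "StreamProcessingSubsystem"];
for (var s = 0; s < subsystems.length; s++) {
    var metrics = Subsystems[subsystems[s]].GetPerformanceMetrics();
    for (var i = 0; i < metrics.rows.length; i++) {
        // Assumed row fields: name and value -- check the infotable returned on your version
        logger.info(subsystems[s] + ": " + metrics.rows[i].name + " = " + metrics.rows[i].value);
    }
}

Running something like this periodically (for example, from a scheduler) while the simulation is active makes it easy to spot a queue that is growing without draining.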
---------------------------------------
Official name: DataStax Enterprise (DSE), sometimes referred to as Cassandra. Note: DBA skills are required; free self-paced training can be found here: Training | DataStax. The extension package can be obtained through Technical Support.

ThingWorx 6.0 introduces DSE as a backend database scaling to a much greater byte count, as Neo4j performance limitations are hit at 50GB. Some of the main reasons to consider DSE are:

1. Elastic scalability -- Allows you to easily add capacity online to accommodate more customers and more data when needed.
2. Always-on architecture -- Contains no single point of failure (as with traditional master/slave RDBMSs and other NoSQL solutions), resulting in continuous availability for business-critical applications that can't afford to go down.
3. Fast linear-scale performance -- Enables sub-second response times with linear scalability (double the throughput with two nodes, quadruple it with four, and so on) to deliver response time speeds.
4. Flexible data storage -- Easily accommodates the full range of data formats (structured, semi-structured and unstructured) that run through today's modern applications.
5. Easy data distribution -- Read and write to any node, with all changes being automatically synchronized across a cluster, giving maximum flexibility to distribute data by replicating across multiple datacenters, cloud, and even mixed cloud/on-premise environments.

Note: Windows+DSE is currently not fully supported.

Connecting ThingWorx

Prerequisite: a fully configured DSE database.

1. Obtain the dse_persistancePackage.
2. Import it as an extension in Composer.
3. In Composer, create a new persistence provider.
4. Select the imported package as the Persistence Provider Package.
5. In the Configuration tab:
- For Cassandra Cluster Host, enter the IP address set in cassandra.yaml, or localhost if hosted locally
- Enter a new or existing Cassandra Keyspace name
- Enter the Solr Cluster URL
- Other fields can be left at default (*)
6. Go to Services and execute the TestConnectivity service to ensure a True response.
7. When creating a new Stream, Value Stream, or Data Table, set the Persistence Provider to the one created in the previous steps.

Currently all reads and writes are done through ThingWorx, and all ThingWorx data is encoded in DSE. OpsCenter still allows you to see the connected streams, data tables and value streams.

(*) SimpleStrategy can be used for a single data center, but NetworkTopologyStrategy is recommended for most deployments, because it is much easier to expand to multiple data centers when required by future expansion.

Is there a limit of data per node? 1 TB is a reasonable limit on how much data a single node can handle, but in reality, a node is not at all limited by the size of the data, only the rate of operations. A node might have only 80 GB of data on it, but if it's continuously hit with random reads and doesn't have a lot of RAM, it might not even be able to handle that number of requests at a reasonable rate. Similarly, a node might have 10 TB of data, but if it's rarely read from, or there is a small portion of data that is hot (so it could be effectively cached), it will do just fine. If the replication factor is above 1 and there are no reads at consistency level ALL, other replicas will be able to respond quickly to read requests, so there won't be a large difference in latency seen from a client perspective.
---------------------------------------
A common issue that is seen when trying to deploy, design or scale up a ThingWorx application is performance. Slow response, delayed data and the application stopping have all been seen when a performance problem either slowly grows or suddenly pops up. There are some common themes when these occur, typically around application model or design. Here are a few of the common problems and some thoughts on what to do about them or how to avoid them.

Service Execution

This covers a wide range of possibilities and is most commonly seen when trying to scale an application. Data access within a loop is one particular thing to avoid. Accessing data from a Thing, another service or a query may be fast when only testing it on 100 loops, but when the application grows and you have 1000, suddenly it's slow. Access all data in one query and use that as an in-memory reference. Writing data to a data store (Stream, DataTable or ValueStream) and then querying that same data in one service can cause problems as well. Run the query first, then use all the data you have in the service variables.

To troubleshoot service executions there are a few methods that can be used. Some of them will not be practical for a production system, since it is not always advisable to change code without testing first.

- Use browser development tools to see the execution time of a service. This is especially helpful when a mashup is slow to load or respond. It allows quickly identifying which of multiple services may be the issue.
- Add logging in a service. Once a service is identified, adding simple logging points in the service can narrow down which code in the service causes the slowdown (it may be another service call). These logging statements show up in the script logs with timestamps (you can also log the current time with the logging statements).
- Use the Test button in Composer. This is a simple one, but if the service does not have many parameters (or has defaults) it's a fast and easy way to see how long a service takes to return.
- When all else fails, you can get thread dumps from the JVM. ThingWorx Support created an extension that assists with this. You can find it on the Marketplace with instructions on how to use it. You can manually examine the output files or open a ticket with support to allow them to assist. Just be careful about doing memory dumps: they are much larger, harder to analyse and take a lot of memory. https://marketplace.thingworx.com/tools/thingworx-support-tools

Queries

These of course are services too, but of a specific type. Accessing data in ThingWorx storage structures or from external sources seems fairly straightforward but can be tricky when dealing with large data sets. When designing and dealing with internal platform storage, refer to this guide as a baseline to decide where to store data: Where Should I Store My Thingworx Data?

NEVER store historical data in infotable properties. These are held in memory (even if they are persistent), and as they grow so will the JVM memory use, until the application runs out of it. We all know what happens then. Finally, one other note that causes occasional confusion: the setting on a query service or standard ThingWorx query service that limits the number of records returned. This is how many records are returned from the service at the end of processing, not how many are processed or loaded in memory. That number may be much higher and could cause the same types of issues.
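To make the "access all data in one query" advice above concrete, here is a minimal sketch of the pattern; the Thing, stream and field names (AssetStream, deviceName) are hypothetical:

// Anti-pattern: one stream query per device inside a loop -- N queries for N devices.
// Better: a single query up front, then pure in-memory work:
var allEntries = Things["AssetStream"].QueryStreamEntriesWithData({
    maxItems: 10000,
    oldestFirst: false // newest records first
});

// Build an in-memory index of the latest record per device
var latestByDevice = {};
for (var i = 0; i < allEntries.rows.length; i++) {
    var row = allEntries.rows[i];
    if (latestByDevice[row.deviceName] === undefined) { // hypothetical field on the stream's Data Shape
        latestByDevice[row.deviceName] = row;
    }
}

// All subsequent lookups hit the in-memory map instead of issuing another query
var latest = latestByDevice["Pump_17"];
if (latest !== undefined) {
    logger.info("Pump_17 latest value: " + latest.value);
}

The same idea applies to DataTables and external databases: collect once, index the result in a variable, then loop.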
Subscriptions and Events

This is similar to services, however there is an added element: frequency. Typical events are data changes and timers/schedulers. This again is often an issue only when scaling up the number of Things or the amount of data that needs to be referenced. A general reference on timers and schedulers, which also describes some of the event processing that takes place on the platform, can be found here: Timers and Schedulers - Best Practice

For data change events, be very cautious about adding these to very rapidly changing property values. When a property is updating very quickly, for example two times each second, the subscription to that event must be able to complete in under 0.5 seconds to stay ahead of processing. Again, this may work for 5-10 Things with properties but will not work with 500, due to resources, speed and the need to briefly lock the value to get an accurate current read. In these cases any data processing should be done at the edge when possible (or in the originating system) and pushed to the platform in a separate property or service call. This allows for more parallel processing since it is de-centralized.

A good practice for allowing easier testing of these types of subscription code is to take all of the script/logic and move it to a service call. Then pass any of the needed event data to parameters in the service. This allows for easier debugging since the event does not need to fire to make the logic execute. In fact, it can essentially run stand-alone via the Test button in Composer (a short sketch of this pattern appears at the end of this article).

Mashup Performance

This one can be very tricky since additional browser elements and rendering can come into play. Sometimes service execution is the root of the issue, as reviewed above; other times it is UI elements and design that cause the slowdown. The Repeater widget is a common culprit. The biggest thing to note here is that each repeater will need to render every element that is repeated and all of the data and formatting for each of those widgets in the repeated mashup. So any complex mashup that is repeated many times may become slow to load. You can minimize this to a degree based on the Load/Unload setting of the widget and when the slowness is more acceptable (when loading or when scrolling). When a mashup is launched from Composer it comes with some debugging tools built in to see errors and execution. Using these with browser debug tools can be very helpful.

Scaling an Application

When initially modeling an application, scale must be considered from the start. It is a challenge (but not impossible) to modify an application to be very efficient after deployment or design. Many times new developers on the ThingWorx platform fall into what I call the .NET trap. Back when .NET was released, one of the quotes I recall hearing about its inefficiencies was "memory is cheap". It was more cost efficient to purchase and install more memory than to take extra development time to optimize memory use. This was absolutely true for installed applications where all of the code was compiled and stored on every system. Web-based applications are not quite as forgiving, since most processing and execution is done on the single central web server. Keep this in mind especially when creating Shapes, Templates and Subscriptions. While you may be writing one piece of code, when this code is repeated on 1,000 Things they will all be in memory and all be executing this code in parallel.
You can quickly see how competition for resources, locks on databases and clean access to in-memory structures can slow everything down (and just think when there are 10,000 pieces of that same code!!). Two specific things around this must be stated again (though they were covered in the above sections). First, data held in properties has fast access since it is in JVM memory, but it is held in memory for each individual Thing: holding 5 MB of information in one Thing seems small, but loading 10,000 Things means instant use of 50 GB of memory!! Second, consider execution of a service. When 10 Things are running it, a service execution takes 2 seconds. Slow, but not too bad, and it may not be too noticeable in the UI. Now have 10,000 Things competing for the same data structure and resources; I have seen execution time jump to 2 minutes or more.

Aside from design, the best thing you can do is TEST on a scaled-up structure. If you will have 1,000 Things next year, test your application early at that level of deployment to help identify any potential bottlenecks early. Never assume more memory will alleviate the issue. Also do NOT test scale on your development system. This introduces edits, changes and other variables which can affect actual real-world results. Have a QA system set up that mirrors a production environment and simulate data and execution load. Additional suggestions are welcome in comments, and I will likely update this as additional tool and platform updates arrive.
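Finally, returning to the subscription tip from the previous section, here is a minimal sketch of the "thin subscription, testable service" pattern; the service name and property are hypothetical:

// Inside a DataChange subscription on the "Temperature" property:
// eventData.newValue carries the new value/time/quality for the property.
me.ProcessTemperatureChange({
    newValue: eventData.newValue.value,
    changeTime: eventData.newValue.time
});

// All of the real logic lives in the ProcessTemperatureChange(newValue, changeTime)
// service, so it can be debugged stand-alone with the Test button in Composer,
// without waiting for the event to fire.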
---------------------------------------
This video concludes Module 9: Anomaly Detection of the ThingWorx Analytics Training videos. It gives an overview of the "Statistical Process Control (SPC) Accelerator".
---------------------------------------
Hi all,

ThingWorx contains lots of useful functionality for your services (last count is 339 snippets in ThingWorx 8.5.2). These snippets are an important part of the platform's application building capabilities, and most of them are simple enough to understand based on their name and the description that appears when hovering over them.

I have noticed, however, that in some cases platform users are not aware of their full capabilities. With this in mind, some time ago I started creating a Snippet Guide for my personal use, which I'm now sharing with the community. It contains additional explanations, documentation links and sample source code tested by me.

Please bear in mind that it was done for an earlier ThingWorx version and I did not have enough time to update it for 8.5.x, but it should work the same here as well.

This enhanced documentation is not supported by PTC, so please: 1. do not open a Tech Support ticket based on the content of this document and, instead, 2. comment on this thread if there are things I can improve on it.

Happy New Year!
---------------------------------------
Error: Exception: JavaException: java.lang.Exception: Import Failed: License is not available for Product: null Feature: twx_things

Possible root cause: editing web.xml without subsequently restarting the platform. As a result, the product name is not picked up and the license path is dropped while Composer is still running.

Fix: Go to LicensingSubsystem -> Services and run the AcquireLicense service. If the issue does not get resolved, please contact the Support team, attaching your license.bin to the ticket.
---------------------------------------
The Axeda Platform has long had the ability to write custom logic to retrieve, manipulate and create data. In the current versions of the Platform, there are two classes of API: Version 1 (v1) and Version 2 (v2). The v1 APIs allow a developer to work with data on the Platform, but all of the APIs are subject to the maxQueryResults configuration property, which by default limits the number of results per query to 1000. For some subsets of data, this can be inadequate to process data. In comes the v2 API, which introduces pagination.

One of the first things a new user does when exploring the v2 API is something like the following:

HistoricalDataItemValueCriteria criteria = new HistoricalDataItemValueCriteria()
criteria.assetId = '9701'
criteria.startDate = '2014-07-23T12:33:00Z'
criteria.endDate = '2014-07-23T12:44:00Z'

DataItemBridge dbridge = com.axeda.sdk.v2.dsl.Bridges.dataItemBridge
FindDataItemValueResult results = dbridge.findHistoricalValues(criteria)

And they get frustrated when they only get the same 100 rows of data. Repeat after me: v2 API invocations (find operations) are limited to batches of 100 results at a time!

But that's not the end of the story. With a small change, the query above can be tuned to iterate through all results that match the search criteria:

HistoricalDataItemValueCriteria criteria = new HistoricalDataItemValueCriteria()
criteria.assetId = '9701'
criteria.startDate = '2014-07-23T12:33:00Z'
criteria.endDate = '2014-07-23T12:44:00Z'
criteria.pageNumber = 1
criteria.pageSize = 100 // Default

DataItemBridge dbridge = com.axeda.sdk.v2.dsl.Bridges.dataItemBridge
FindDataItemValueResult results = null
def tcount = 0
while ((results = dbridge.findHistoricalValues(criteria)) != null && tcount < results.totalCount) {
    results.dataItems.each { res ->
        tcount++
    }
    criteria.pageNumber = criteria.pageNumber + 1
}

I currently recommend that people avoid using the count() or countDomainObjectByCriteria() functions if you're then going to call a find. Currently both the count*() and find functions compute total results, which doubles the execution time of just those two calls. The total count is only computed when running the first find() operation, so the code pattern above is so far the most efficient way I've seen to run these operations on the platform.

So, having covered how to do this in code (custom objects), let's turn our attention to the REST APIs, the other entry point for using these capabilities. The REST API doesn't offer a count*() function, but the first find() invocation (if using XML) brings back totalCount as part of the result set. You can use this in your application to decide how many times to call the REST endpoint to retrieve your data.
So for the example above:

POST: https://customer-sandbox.axeda.com/services/v2/dataItem/findHistoricalValues

HEADERS:
Content-Type: application/xml
Accept: application/xml

BODY:
<?xml version="1.0" encoding="UTF-8"?>
<HistoricalDataItemValueCriteria xmlns="http://www.axeda.com/services/v2" pageSize="100" pageNumber="1">
  <assetId>9701</assetId>
  <startDate>2014-07-23T12:33:00Z</startDate>
  <endDate>2014-07-23T12:35:02Z</endDate>
</HistoricalDataItemValueCriteria>

RESULTS:
<v2:FindAssetResult totalCount="1882" xmlns:v2="http://www.axeda.com/services/v2" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <v2:criteria pageSize="100" pageNumber="1">
    <v2:name>*</v2:name>
    <v2:propertyNames/>
  </v2:criteria>
  <v2:assets>
  </v2:assets>
</v2:FindAssetResult>

Or JSON:

POST: https://customer-sandbox.axeda.com/services/v2/dataItem/findHistoricalValues

HEADERS:
Content-Type: application/json
Accept: application/json

BODY:
{
  "assetId": 9701,
  "startDate": "2014-07-23T12:33:00Z",
  "endDate": "2014-07-23T12:35:02Z",
  "pageNumber": 1,
  "pageSize": 2
}

And that's how you work around the maxQueryResults limitation of the v1 APIs. Some APIs do not currently have matching v2 bridges (e.g. MobileLocation and DataItemAssociation), in which case the limitation will still apply. Creative use of the query criteria will allow you to work around these limitations as we continue to improve the v2 API.

Regards,
-Chris
---------------------------------------
PLEASE NOTE: DataConnect has now been deprecated and is no longer in use or supported.

We are regularly asked in the community how to send data from the ThingWorx platform to ThingWorx Analytics in order to perform some analytics computation. There are different ways to achieve this, and the right one will depend on the business needs. If the analytics need is about anomaly detection, the best way forward is to use ThingWatcher without the use of the ThingWorx Analytics server. The ThingWatcher Help Center is an excellent place to start, and a quick start-up program can be found in this blog. If the requirement is to perform a full-blown analytics computation, then sending data to ThingWorx Analytics is required. This can be achieved by:
- Using ThingWorx DataConnect, which is what this blog will cover
- Using a custom implementation. I will be very pleased to get your feedback on your experience in implementing a custom solution, as this could give some good ideas to others too.

In this blog we will use the example of a smart tractor in ThingWorx where we collect data points on:
- Speed
- Distance covered since last tyre change
- Tyre pressure
- Amount of gum left on the tyre
- Tyre size

From an analytics perspective, the gum left on the tyre would be the goal we want to analyse in order to know when the tyre needs changing.

We are going to cover the following:
- Background
- Workflow
- DataConnect configuration
- ThingWorx configuration
- Data Analysis Definition configuration
- Data Analysis Definition execution
- Demo files

Background

For people not familiar with ThingWorx Analytics, it is important to know that ThingWorx Analytics only accepts a single datafile in .csv format. Each column of the .csv file represents a feature that may have an impact on the goal to analyse. For example, in the case of the tyre wear, the distance covered, the speed, the pressure and the tyre size will be our features. The goal is also included as a column in this .csv file. So any solution sending data to ThingWorx Analytics will need to prepare such a .csv file. DataConnect will perform this activity, in addition to some transformation too.

Workflow

1. Decide on the properties of the Thing to be collected that are relevant to the analysis.
2. Create service(s) that collect those properties.
3. Define a Data Analysis Definition (DAD) object in ThingWorx. The DAD uses a Data Shape to define each feature that is to be collected and sends them to ThingWorx Analytics. Part of the collection process requires the use of the services created in point 2.
4. Upon execution, the DAD will create one skinny csv file per feature and send those skinny .csv files to DataConnect. In the case of our example, the DAD will create a speed.csv, distance.csv, pressure.csv, gumleft.csv, tyresize.csv and id.csv.
5. DataConnect processes those skinny csv files to create a final single .csv file that contains all these features. During the processing, DataConnect will perform some transformation and synchronisation of the different skinny .csv files.
6. The resulting dataset csv file is sent to the ThingWorx Analytics Server, where it can then be used as any other dataset file.

DataConnect configuration

As seen in this workflow, a ThingWorx server, a DataConnect server and a ThingWorx Analytics server will need to be installed and configured. Thankfully, the installation of DataConnect is rather simple and well described in the ThingWorx DataConnect User's Guide.
Below I have provided a sample of a working dataconnect.conf file for reference, as this is one place where syntax can cause a problem.

ThingWorx configuration

The Platform Subsystem needs to be configured to point to the DataConnect server. This is done under SYSTEM > Subsystems > PlatformSubsystem.

DAD configuration

The most critical part of the process is to properly configure the DAD, as this is what will dictate the format and values filled in the skinny csv files for the specific features. The first step is to create a data shape with as many fields as features/properties collected. Note that one field must be defined as the primary key. This field is the one that uniquely identifies the Thing (more on this later). We can then create the DAD using this data shape.

For each feature, a datasource needs to be defined to tell the DAD how to collect the values for the skinny csv files. This is where custom services are usually needed. Indeed, the Out Of The Box (OOTB) services, such as QueryNumberPropertyHistory, help to collect logged values, but the id returned by those services is continuously incremented. This does not work for the DAD. The id returned by each service needs to be what uniquely identifies the Thing. It needs to be the same for all records for this Thing amongst the different skinny csv files. It is indeed this field that is then used by DataConnect to merge all the skinny files into one master dataset csv file. A custom service can make use of the OOTB services, however it will need to override the id value. For example, the GetHistory service below uses QueryNumberPropertyHistory to retrieve the logged values and timestamp, but then overrides the id with the Thing's name (see the sketch below).

The returned values of the service then need to be mapped in the DAD to indicate which output corresponds to the actual property's value, the Thing id and the timestamp (if needed). This is done through the Edit Datasource window (by clicking on the Add Datasource link, or on the datasource itself if already defined in the Define Feature window). On the left-hand side, we define the origin of the datasource; here we have selected the service GetHistory from the Thing Template smartTractor. On the right-hand side, we define the mapping between the service's outputs and the skinny .csv file columns (in the screenshot, circled in grey are the outputs from the service, and circled in green are what define the columns in the .csv file). A skinny csv file will have 1 to 3 columns, as follows:
- One column for the ID. Simply tick the radio button corresponding to the service output that represents the ID.
- One column representing the value of the Thing property. This is indicated by selecting the link icon on the left-hand side in front of the returned data which represents the value (in our example the output data from the service is named value).
- One column representing the timestamp. This is only needed when a property is time dependent (for example, a time series dataset). In the example the property is Distance: the distance covered by the tyre does depend on time; however, we would not have a timestamp for the TyreSize property, as the wheel size will remain the same.

How many columns should we have (and therefore how many outputs should our service have)?
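Before answering, here is a minimal sketch of what such a GetHistory service could look like for a time-bound property like Distance. The output Data Shape name is hypothetical; the key point is that the OOTB QueryNumberPropertyHistory does the collection while the id output is overridden with the Thing name:

// Sketch of a GetHistory-style service defined on the smartTractor Thing Template.
// Assumed output Data Shape "TractorFeature.DS": id (STRING), timestamp (DATETIME), value (NUMBER)
var history = me.QueryNumberPropertyHistory({
    propertyName: "Distance",
    oldestFirst: true,
    maxItems: 100000
});

var result = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape({
    infoTableName: "InfoTable",
    dataShapeName: "TractorFeature.DS"
});

for (var i = 0; i < history.rows.length; i++) {
    result.AddRow({
        id: me.name, // override the incrementing id with the Thing name
        timestamp: history.rows[i].timestamp, // only needed for time-bound properties
        value: history.rows[i].value
    });
}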
- The .csv file representing the ID will have one column, and therefore the service collecting the ID returns only one output (the Thing name in our smartTractor example; not shown here, but available in the download).
- Properties that are not time bound will have a csv file with 2 columns: one for the ID and one for the value of the property.
- Properties that are time bound will have 3 columns: one for the ID, one for the value and one for the timestamp. Therefore the service will have 3 outputs.

Additionally, the input for the service may need to be configured by clicking on the icon.

Once the datasources are configured, you should configure the Time Sampling Interval in the General Information tab. This sampling interval will be used by DataConnect to synchronize all the skinny csv files. See the Help Center for a good explanation on this.

DAD execution

Once the above configuration is done, the DAD can be executed to collect properties' values already logged on the ThingWorx platform. Select Execution Settings in the DAD and enter the time range for property collection. A dataset with the same name as the DAD is then created in DataConnect as well as in the ThingWorx Analytics Server. The dataset can then be processed as any other dataset inside ThingWorx Analytics.

Demo files

For convenience I have also attached a ThingWorx entities export that can be imported into a ThingWorx platform for you to take a closer look at the setup discussed in this blog. Attached is also a small simulator to populate the properties of the Tractor_1 Thing. The usage is:

java -jar TWXTyreSimulatorClient.jar hostname_or_IP port AppKey

For example: java -jar TWXTyreSimulatorClient.jar 192.168.56.106 8080 d82510b7-5538-449c-af13-0bb60e01a129

Again, feel free to share your experience in the comments below, as it will be very interesting for all of us. Thank you
---------------------------------------
ThingWorx is great for storing large amounts of data coming from your devices, but it can also be used like a traditional, row-based database for information you would like to integrate with your thing data. Attached to this blog entry is a short example of creating an address book database using a DataTable and a DataShape. It does not focus on creating mashups but sticks with discussing the modeling and service calls you would use to create a simple database.
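For readers who just want the shape of the service calls, here is a minimal sketch; it assumes a Thing named AddressBook that implements the DataTable Thing Template, with a Data Shape whose primary key is the email field (all names are hypothetical):

// Add (or update) a contact: values is a one-row infotable matching the Data Shape
var row = Things["AddressBook"].CreateValues(); // empty infotable with the table's Data Shape
row.AddRow({ email: "jdoe@example.com", name: "Jane Doe", phone: "555-0100" });
Things["AddressBook"].AddOrUpdateDataTableEntry({
    values: row,
    sourceType: "Thing",
    source: me.name
});

// Look a contact up by its primary key
var keys = Things["AddressBook"].CreateValues();
keys.AddRow({ email: "jdoe@example.com" });
var match = Things["AddressBook"].FindDataTableEntry({ values: keys });

The same pattern extends to QueryDataTableEntries for filtered searches and DeleteDataTableEntry for removals.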
Thing Subscription This post is intended for novice ThingWorx users who want to understand what a Thing Subscription is and the overall purpose of using Thing Subscriptions.   What is a Thing Subscription? A Thing Subscription is a script (JavaScript) that is called each time an event occurs. Events are property states of interest to end users (e.g. temperature) and therefore serve as triggers to kick off functionality in a Thing Subscription when action is needed. Events can, for example, be triggered by an Alert that detects a change or an anomaly in property values. The Thing Subscription is explicitly linked to an event, and when the event fires, the event data is passed to the subscriber.    Why use a Thing Subscription? Imagine a machine that runs 24 hours a day, 7 days a week, under supervised human interaction. If a pump temperature exceeds the accepted value, it needs to be regulated by the manufacturing department. But no one in the department knows when the temperature will exceed the accepted value or drop suddenly; therefore, the machines are always sporadically, physically supervised by humans, which leads to heavy costs for the manufacturer. With a Thing Subscription, a notification email can be sent directly to the department manager, who acts based on the email notification.   What a Thing Subscription must have A Thing Subscription must define a rule that gets executed when an event occurs. The definition of the rule may accommodate any appropriate business logic.   Example Thing Subscription process In this scenario, a Thing Subscription uses a predictive analytics model to detect a data change or any anomalous values passing through a Thing property. Based on historical data, including failure information, a predictive analytics model analyzes run-time values sent from individual Things/properties to the analytics server. The model detects patterns matching past failures, and when it predicts a failure/event based on the analyzed patterns, an action is fired via a Thing Subscription. That action could be for ThingWorx to create a service ticket or to send a notification email to the service department.   Example of a simple Thing Subscription set-up using a built-in ThingWorx alert instead of an analytics model The example Thing Subscription below will send a notification email when the temperature exceeds the values defined in the ThingWorx alert configuration. Prerequisite: it is necessary to have a mail server extension imported into the ThingWorx Composer; this enables the service department to receive the email notification when an event has occurred. The extension can be downloaded from the Marketplace. 1. Create a Thing with the MailServer[i] as the Base Thing Template.     2. Create a new Thing and add Properties, together with an alert that is triggered when the value exceeds a user-defined temperature.   3. Enable the Thing Subscription by selecting Subscriptions and clicking +Add: Make sure to mark the checkbox Enabled Select your Event name and your Property name On the right side of the screen, enter the script/function that calls the ThingWorx email service to create the email notification (a sketch is shown after these steps) Select Done and Save   4. Enable email notification by selecting Services: Provide a name Select Me/Entities Mark Other entity Find your Thing where the MailServer is the Thing Template   5. Then find the SendMessage snippet/script and fill out the snippet with your personal information.   
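For reference, a minimal sketch of what the subscription script in step 3 might look like, assuming a DataChange event subscription, a mail Thing named MailServerThing, and placeholder email addresses (all assumptions):

// Subscription body for a DataChange event on the Temperature property.
// "MailServerThing" and the addresses below are placeholders.
var temperature = eventData.newValue.value;  // new property value carried by the event

Things["MailServerThing"].SendMessage({
    from: "noreply@mycompany.example" /* STRING */,
    to: "service.department@mycompany.example" /* STRING */,
    subject: "Temperature alert on " + me.name /* STRING */,
    body: "Temperature has reached " + temperature + ". Please investigate." /* HTML */
});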
[i] View this blog for more information on how to install the MailServer
PostgreSQL is a powerful, open source object-relational database system that places no practical limit on database size. ThingWorx 6.5 introduces PostgreSQL as a persistence provider and supports High Availability. The main advantages of ThingWorx with PostgreSQL are: 1. Highly customizable. PostgreSQL includes a framework that allows developers to define and create their own custom data types, along with supporting functions and operators that define their behavior. Triggers and stored procedures can be written in C and loaded into the database as a library, allowing great flexibility in extending its capabilities. 2. Synchronous replication. PostgreSQL streaming replication is asynchronous by default. Synchronous replication offers the ability to confirm that all changes made by a transaction have been transferred to one synchronous standby server. This extends the standard level of durability offered by a transaction commit. The only way data can be lost is if both the primary and the standby suffer crashes at the same time. 3. Write-ahead logging for fault tolerance. The Write Ahead Log (WAL) is the feature of PostgreSQL that allows it to recover data, usually up to the point where the server stopped. As you make changes to your data, PostgreSQL aggressively writes those changes to the WAL, and it issues a checkpoint when a buffer limit is reached. When PostgreSQL restarts, it replays the changes from the WAL since the last checkpoint to bring the database back to the state of the last completed commit. The master node sends a live stream of data changes to the slave nodes through the WAL, and the slaves apply this data to stay up to date. 4. Point-in-time recovery. Point-in-time Recovery (PITR), also called incremental database backup, online backup, or archive backup, uses the history records stored in the WAL files to roll forward changes made since the last full database backup. With PITR, database backup downtime can be totally eliminated, because backups and system access can happen at the same time; instead of a full database backup every day, only the archive log files written since the last backup are backed up. ThingWorx streams data from the connected devices, and PostgreSQL handles it with greater scalability. In ThingWorx, PostgreSQL acts as a persistence provider that stores both run-time data and metadata about Things. Run-time data is the data that is persisted once the Things are composed and is used by connected devices to store their data. Streams and value streams accumulate huge amounts of data; once the stored data approaches a limit of 50 GB, Neo4j cannot sustain the performance. For example, a single stream with 50 properties gathering data from 10,000 devices will quickly hit the memory limit with the Neo4j persistence provider, so it is strongly recommended to choose PostgreSQL to avoid performance issues. Overview of installing ThingWorx with PostgreSQL: Install the latest version of Java and make sure the environment variables are configured. Follow the instructions in Installing Thingworx 6.5​ to install Tomcat; instructions/commands may vary for different Linux flavors. Install PostgreSQL (for Linux/Unix environments, see the YUM Installation Guidelines). Create the 'ThingworxPostgresqlStorage' and 'ThingworxPlatform' folders in the root directory ( / ) and assign access permissions to the user. Copy the modelproviderconfig.json file (from the ThingWorx download package) to the 'ThingworxPlatform' folder. 
Execute the ThingworxPostgresSchemaSetup and ThingworxPostgresDBSetup scripts (.bat for Windows and .sh for Unix/Linux environments); for further instructions, follow Getting Started with PostgreSQL in the ThingWorx Administrators Guide. Restart Tomcat.
In this part of the Troubleshooting blog series, we will review how to restart the individual services essential to the ThingWorx Analytics Application within the Virtual Machine Appliance.   Services have stopped, and I cannot run my Analytics jobs! In some cases, users have encountered issues where a system or process has halted and they are unable to proceed with their tasks. This can be due to a myriad of reasons, ranging from OS hangs to memory issues with certain components.   As we covered earlier in Part II, the ThingWorx Analytics Application is installed on a CentOS (Linux) Operating System. As with most Linux Operating Systems, you have the ability to manually check and restart processes as needed.   Steps to Restart Services   With how the Application is installed and configured, the services should auto-start when you boot up the VM. You can verify that the Appliance is functional by running your desired API call.   If a system is not functioning as expected, you will receive an error in your output when you POST an API call. Some errors are very specific, and you can search the Knowledge Database for any existing Knowledge Articles that may solve the issue.   For error messages that do not have an existing article, you may want to attempt the following.   Method 1:   If you are encountering issues and are unsure which process is not working correctly, we recommend a full Application restart. This involves restarting the Virtual Machine Appliance via the command-line terminal.   We recommend the following command, run as the root user or using sudo, as it performs a graceful restart: sudo shutdown -r now   This will restart the virtual machine safely, and once you are back up and running you can run your API calls to verify functionality. This should resolve any incremental issues you may have faced.   Method 2:   If you want to restart an individual service, a particular start order needs to be followed to make sure the Application operates as expected.   The first step is to check which services are not running. You can use the following command to list all services and their current status: service --status-all   The services you are looking for are the following: Zookeeper PostgreSQL Server GridWorker(s) Tomcat   If a particular service is not on the running list, you will have to start it manually with the service start command: service [name of service] start e.g. service tomcat start You may be prompted for the root password.   You can verify that the services are operating by running the status check command described above.   If you need to restart all services, there is a specific start order; this is important because of dependencies, such as the GridWorker(s) and Tomcat requiring PostgreSQL.   The start order is as follows: Zookeeper PostgreSQL Server GridWorker(s) Tomcat   After completing the restart and verifying that the services are running, run your desired API call.
Based on Google's Spanner design, CockroachDB is a distributed SQL database that scales horizontally and survives disk, machine, rack, and even datacenter failures. It is built to automatically replicate, rebalance, and recover with minimal configuration. See What is CockroachDB? for more.   It is useful in use cases requiring: Distributed or replicated OLTP Multi-datacenter deployments Multi-region deployments Cloud migrations Cloud-native infrastructure initiatives Note: CockroachDB in its current state isn't suitable for heavy analytics / OLAP.   The feature that makes it really attractive: as mentioned above, it scales horizontally with minimal configuration out of the box, allowing a quick setup that starts on a local laptop/machine, as shown below, and scales easily to a single dedicated server or a development/public cloud cluster. Because setup is easy, adding new nodes is as simple as starting the cockroach utility. See the CockroachDB FAQ for more. To top it off, it uses the PostgreSQL wire protocol and PostgreSQL's SQL dialect, further reducing configuration and removing the need for a special JDBC driver when ThingWorx is configured with PostgreSQL as its persistence provider.   Setting up a CockroachDB cluster Download the required binary or Docker version from Install CockroachDB, available for Mac, Linux & Windows.   Note: The following setup uses the Windows binary on a VM with Windows 10 64-bit and 6 GB RAM.     Starting the first cluster node Open a command prompt, navigate to the directory where cockroach.exe is unzipped, and launch the node with the following command:     cockroach.exe start --insecure --host=10.128.13.183 --http-port=8082     This will start a node on the defined host in insecure mode, with its web-based DB administration console on port 8082 and the DB listening on the default port 26257. Note that it will log a security warning, since the node is started in insecure mode via the --insecure flag, like so:     * * WARNING: RUNNING IN INSECURE MODE! * * - Your cluster is open for any client that can access 10.128.13.183. * - Any user, even root, can log in without providing a password. * - Any user, connecting as root, can read or write any data in your cluster. * - There is no network encryption nor authentication, and thus no confidentiality. * * Check out how to secure your cluster: https://www.cockroachlabs.com/docs/stable/secure-a-cluster.html * CockroachDB node starting at 2018-03-16 11:52:57.164925 +0000 UTC (took 2.1s) build: CCL v1.1.6 @ 2018/03/12 18:04:35 (go1.8.3) admin: http://10.128.13.183:8082 sql: postgresql://root@10.128.13.183:26257?application_name=cockroach&sslmode=disable logs: C:\CockroachDb\cockroach116\cockroach-data\cockroach-data\logs store[0]: path=C:\CockroachDb\cockroach116\cockroach-data\cockroach-data status: restarted pre-existing node clusterID: 012d011e-acef-47e2-b280-3efc39f2c6e7 nodeID: 1     Ensure that secure mode is used when deploying in production.   Starting 2 additional nodes   Starting node 2: cockroach.exe start --insecure --store=node2 --host=10.128.13.183 --port=28258 --http-port=8083 --join=10.128.13.183:26257   Starting node 3:   cockroach.exe start --insecure --store=node3 --host=10.128.13.183 --port=28259 --http-port=8084 --join=10.128.13.183:26257     Note: Both of these nodes join the cluster via 10.128.13.183:26257 (the port of node 1).   Verifying the live cluster and its nodes via the web-based CockroachDB admin console Open a web browser against any of the above nodes' http-port, e.g. 
http://10.128.13.183:8084, and click on View nodes list in the right panel. This will open the nodes list page.   Connecting to ThingWorx as an external datastore Good news: if your ThingWorx is running with the PostgreSQL persistence provider, no additional JDBC driver is needed, as CockroachDB uses the PostgreSQL wire protocol and its SQL dialect is that of PostgreSQL. For any other persistence provider, download and install the PostgreSQL Relational Database Connector from the ThingWorx Marketplace.   Creating a database in the cluster Start the SQL client against any of the running nodes: open a command prompt, navigate to the directory containing cockroach.exe and use the following command:   cockroach sql --insecure --port=26257 This will change the prompt to root@<serverName/IP>:26257> Since the above command logs in insecure mode, no password is needed; the default admin username in CockroachDB is root. Use the following to create a database:   create database thingworx; show databases; root@10.128.13.183:26257/> SHOW databases; +--------------------+ | Database | +--------------------+ | crdb_internal | | information_schema | | pg_catalog | | system | | thingworx | | thingworxdatastore | +--------------------+ (6 rows)   This confirms the thingworx database was created. Creating a user to access that database: CREATE USER cockroach WITH PASSWORD 'admin'; The following grants all rights on the thingworx database to the "cockroach" user:   grant all on database thingworx to cockroach; Creating a Thing & connecting to CockroachDB via ThingWorx Composer For the example below, ThingWorx is using PostgreSQL as its persistence provider. Create a Thing based on the Database Thing Template and use the following connection settings:   JDBC Driver Class Name : org.postgresql.Driver JDBC Connection String : jdbc:postgresql://<serverIp/name>:26257/thingworx?sslmode=disable Database User Name : cockroach Database Password : <password>   Navigate to the Properties page to verify the connectivity.   Creating table(s) Now that the Thing is connected to the database, DB objects can be created in the following ways: via a Thing-based SQL Command, or via CockroachDB's SQL client. The following command will create a small demo table: CREATE TABLE demo ( id INT, demovalue STRING) Use SQLCommand as the JavaScript handler when using the above statement to create the table directly from ThingWorx's Database Thing. Verify the database, and the table created within it, via the web admin console of CockroachDB: in the left panel, click on Databases from the home page of one of the nodes' web admin consoles, e.g. http://localhost:8084     Apart from other useful information about the database, e.g. the database size and total number of tables, clicking on the table name will also show the SQL used to create it (including the defaults).   Creating a couple of Database Thing services to perform a bulk insert into the table from ThingWorx Composer An Insert service created as an SQL Command with the following snippet; the service takes 2 inputs, of type INTEGER and STRING:   insert into demo values ([[id]], [[demoValue]]) A JavaScript service executing a bulk demo-data insert by wrapping the SQL service created above:   for (var i = 0; i < 2000; i++) { var params = { id: i /* INTEGER */, demoValue: 'Insert from node 0 while node 3 is down' /* STRING */ }; // result: NUMBER var result = me.InsertDemo(params); }   At this point, different users in ThingWorx with sufficient access rights can create their own DB Things in ThingWorx Composer and can use any of the node addresses to read/write data to the CockroachDB cluster; a simple query service, like the sketch below, can read the data back. 
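As an illustration only, reading the demo rows back could be wrapped the same way. QueryDemo and its minId input are assumptions for this sketch, standing in for an SQL Query service defined on the same Database Thing using the same [[...]] parameter syntax as the insert service above:

// Hypothetical wrapper around an assumed SQL Query service "QueryDemo"
// (body: SELECT * FROM demo WHERE id >= [[minId]]) on this Database Thing.
var rows = me.QueryDemo({
    minId: 0 /* INTEGER */
});
// Log how many rows the cluster currently returns, as a quick sanity check.
logger.info("demo table currently holds " + rows.rows.length + " rows");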
For the purposes of the demo, one node was stopped while the other two kept running, and data was written to the cluster via the test service created above. Once the 3rd node was restarted, we can see the automatic replication happening between the nodes; this can be watched live via the web-based admin console of any of the running nodes.   As highlighted above, at a certain point in time (after roughly 15:00) all nodes were synced with the data, including node 3, which, as mentioned above, was down while data was being inserted into the cluster. All of the above replication was done using the default configuration.  
In the following scenario (on Red Hat in this case), running the dbsetup script results in the error: ./thingworxPostgresDBSetup.sh psql:./thingworx-database-setup.sql:1: ERROR:  syntax error at or near ":" LINE 1: CREATE TABLESPACE :"tablespace" OWNER :"username" location :... ^ psql:./thingworx-database-setup.sql:3: ERROR:  syntax error at or near ":" LINE 1: GRANT ALL PRIVILEGES ON TABLESPACE :"tablespace" to :"userna... ^ psql:./thingworx-database-setup.sql:5: ERROR:  syntax error at or near ":" LINE 1: GRANT CREATE ON TABLESPACE :"tablespace" to public; ^ psql:./thingworx-database-setup.sql:14: ERROR:  syntax error at or near ":" LINE 1: CREATE DATABASE :"database" WITH ^ psql:./thingworx-database-setup.sql:16: ERROR:  syntax error at or near ":" LINE 1: GRANT ALL PRIVILEGES ON DATABASE :"database" to :"username"; Given that the installed components match the requirements guide (Tomcat 8, PostgreSQL 9.4.5+ for ThingWorx 7.x), run the following command directly from the bin directory of the Postgres deployment: psql -q -h localhost -U twadmin -p 5432 -v database=thingworx -v tablespace=thingworx -v tablespace_location=/app/navigate/ThingworxPostgresqlStorage -v username=twadmin This should drop you into the psql command-line interface. From there, run the following with the fully qualified path to the sql file on disk (replace FULLPATH with the path to the sql file): \i ./FULLPATH/thingworx-database-setup.sql If you are experiencing the above-mentioned syntax error, the output will likely be: psql: FATAL:  database "twadmin" does not exist. Then, from the postgres bin directory, run the following: ./psql postgres \set Then the second commands: \q psql -q -h localhost -U twadmin -p 5432 -v database=thingworx -v tablespace=thingworx -v tablespace_location=/app/navigate/ThingworxPostgresqlStorage -v username=twadmin \set   We see the following outputs: ./psql postgres Password: psql.bin (9.4.11) Type "help" for help. postgres=# \set AUTOCOMMIT = 'on' PROMPT1 = '%/%R%# ' PROMPT2 = '%/%R%# ' PROMPT3 = '>> ' VERBOSITY = 'default' VERSION = 'PostgreSQL 9.4.11 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-55), 64-bit' DBNAME = 'postgres' USER = 'postgres' PORT = '5432' ENCODING = 'UTF8' postgres=# \q -bash-4.1$ psql -q -h localhost -U twadmin -p 5432 -v database=thingworx -v tablespace=thingworx -v tablespace_location=/ThingworxPostgresqlStorage -v username=twadmin Password for user twadmin: twadmin=# \set AUTOCOMMIT = 'on' QUIET = 'on' PROMPT1 = '%/%R%# ' PROMPT2 = '%/%R%# ' PROMPT3 = '>> ' VERBOSITY = 'default' VERSION = 'PostgreSQL 8.4.20 on x86_64-redhat-linux-gnu, compiled by GCC gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-17), 64-bit' database = 'thingworx' tablespace = 'thingworx' tablespace_location = '/ThingworxPostgresqlStorage' username = 'twadmin' DBNAME = 'twadmin' USER = 'twadmin' HOST = 'localhost' PORT = '5432' ENCODING = 'UTF8' Note that even though PostgreSQL 9.4.5+ has been installed by the system administrator, there are still traces of PostgreSQL 8.4.20 present in the system (possibly from the default OS packaging), and these cause the syntax error: the :"variable" substitution syntax used by the setup script is not supported by the old 8.4 psql client. Removing the 8.4.20 rpms will resolve the problem.
This document is a general reference for configuring and troubleshooting a Google email account with the ThingWorx mail extension. To start with the configuration: SMTP: smtp.gmail.com, port 587, TLS checked. If SSL is being used instead, the port should be 465. POP3: pop.gmail.com, port 995. To test, go to "Services" and click on "test" for the SendMessage service. A successful request will show an empty screen with a green "result" at the top. Possible errors: "Could not connect to SMTP host: smtp.gmail.com, port: 587" with nothing else in the logs. Check your Internet connection to ensure it's not being blocked. "<hostname:port>/Thingworx/Common/locales/en-US/translation-login.json 404 (Not Found)". Check your Gmail folders for incoming messages regarding a sign-in from an unknown device. The subject will be "Someone has your password", and the email content will include the device, location, and timestamp of when the incident occurred. Make sure to check the "this was me" option to prevent further blocking. This may or may not be sufficient; sometimes this leads to another error: "Please log in via your web browser and 534-5.7.14 then try again. 534-5.7.14 Learn more at 534 5.7.14..." That error can be resolved by: Turning off the "less secure apps" feature in your Gmail settings. You have to be logged in to your Gmail account to follow the link: https://www.google.com/settings/security/lesssecureapps​ Changing your Gmail password afterwards. I don't have a valid explanation as to why, but this is a required step, and the error doesn't clear without changing the password.
    Step 4: Create Extension Project   In this tutorial, you will create a ThingWorx extension that retrieves weather information using the OpenWeatherMap API. Create Account In this part of the lesson, you will create a free OpenWeatherMap account and generate an API key so you can access their REST API. Sign up for a free account. Log in to your account. Create a new API key.   NOTE: We will use this generated API key as a parameter in the REST calls.   Create New Extension Project   NOTE: Make sure that you are in the ThingWorx Extension Perspective. To verify, you should see a plus icon in the menu bar. If you don't see this, you are probably in the wrong perspective. Go back to the previous step to learn how to set the perspective to ThingWorx Extension in Eclipse.   Go to File->New->Project. Click ThingWorx->ThingWorx Extension Project.  Click Next. NOTE: A New ThingWorx Extension window will appear. Enter the Project Name (for example, MyThingworxWeatherExtension). Select Gradle or Ant as your build framework. Enter the SDK location by browsing to the directory where the Extension SDK is stored.   NOTE: The Eclipse Plugin accepts Extension SDK version 6.6 or higher. Enter the Vendor information (for example, ThingWorx Labs). Change the default package version from 1.0.0 if needed to support extension dependencies. NOTE: The information from ThingWorx Extension Properties is used to populate the metadata.xml file in your project. The metadata.xml file contains information about the extension and details of the various artifacts within the extension. The information in this file is used by the import process in ThingWorx to create and initialize the entities. Set the JRE version to 1.8. Click Next, then click Finish. Your newly created project is added to the Package Explorer tab.   Create New Entity   Select your project and click Add to create a new entity. NOTE: You can also access this from the ThingWorx menu on the menu bar. Create a Thing Template for your MyThingWorxWeatherExtension project.   NOTE: In this guide we are using a Thing Template, but in a real-world scenario you might consider using a Thing Shape to encapsulate extension functionality. By using Thing Shapes you give users of your extension the ability to easily add new functionality to existing Things. It is simple to add a new Thing Shape to an existing Thing Template, while using the properties or services defined by a Thing Template would require recreating all existing assets with the new Template. Since subscriptions cannot be created on Thing Shapes, you might choose to create Thing Templates that implement one or more subscriptions for convenience. In the pop-up window, browse to the source folder of your project in Source Folder. NOTE: It should default to the src directory of your project, in our case MyThingworxWeatherExtension/src. Browse to the package where you want to create this new class, or simply give it a name (such as com.thingworx.weather). Enter a name and description for your Thing Template (WeatherThingTemplate).  NOTE: By default, the Base Thing Template is set to GenericThing. Select Next. NOTE: If you want to give other users of this entity permission to edit it in ThingWorx Composer, mark the entity as editable. Only non-editable entities can be upgraded in place; editable entities must be deleted and recreated when your extension is updated. If you need to make it possible to customize the extension, consider using a configuration table to save user customizations. 
Select Finish.   Verify that you have a WeatherThingTemplate class created that extends the Thing class. @ThingworxBaseTemplateDefinition(name = "GenericThing") public class WeatherThingTemplate extends Thing { public WeatherThingTemplate() { // TODO Auto-generated constructor stub } } NOTE: You might see a warning to add a serial version. You can add a default or generated serial value.   Step 5: Add Properties    In this section, you are going to add CurrentCity, Temperature and WeatherDescription properties to the WeatherThingTemplate. These properties are associated with the Thing Template and add the @ThingworxPropertyDefinitions annotation before the class definition in the code.   Right-click inside the WeatherThingTemplate class, or right-click on the WeatherThingTemplate class in the Package Explorer. Select ThingWorx Source-> Add Property. In the pop-up window, create a property to store the city name: Name = CurrentCity, Base Type = STRING, Description = ''. Select the Has Default Value checkbox and enter a city name (e.g. Boston). This will be the default value unless a specific value is passed. Select the Persistent checkbox. This will maintain the property value if a system restart occurs. NOTE: If you select the Logged checkbox, the property value is logged to a data store. If you select the Read Only checkbox, the data will be static. Select VALUE from the Data Change Type drop-down menu.       NOTE: This allows any Thing in the system to subscribe to a data change event for this property. Choose one of the following Data Change Types:   Always: Fires the event to subscribers for any property value change. Never: Does not fire a change event. On: For most values, any change will trigger this. Off: Fires the event if the new value is false. Value: For numbers, if the new value has changed by more than the threshold value, fires the change event. For non-numbers, this setting behaves the same as Always. Select Finish. Create another property called Temperature with a base type of NUMBER. You can keep the default values for the other parameters. Create another property called WeatherDescription with a base type of STRING. Keep the default values for the other parameters.   Step 6: Create Configuration Table   In this part of the lesson, we will create a configuration table to store the API key that you generated from OpenWeatherMap. Configuration tables are used in Thing Templates to store values, similar to properties, that do not change often.   Right-click inside the WeatherThingTemplate class and select ThingWorx Source->Add Configuration Table. Create a new configuration table with the name OpenWeatherMapConfigurationTable. Click Add in the Data Shape Field Definitions frame. NOTE: Configuration tables require fields (columns) with a defined table structure (DataShape). Enter appid as the name, with base type STRING. Select the Required checkbox. Click OK, then Finish to add the configuration table. To use the appid in the REST calls, you need to obtain the value from the configuration table and assign it to a field variable in the Java code. We will use the initializeThing method to obtain the appid value at runtime. NOTE: The initializeThing() method acts as an initialization hook for the Thing. Every time a Thing is created or modified, this method is executed, and the value of appid is obtained from the configuration table and stored in a global field variable of the class. 
initializeThing() must call super.initializeThing() to ensure it performs the initialization of the Thing. Create the initializeThing() method and a field variable _appid of type String anywhere in the WeatherThingTemplate class. private static Logger _logger = LogUtilities.getInstance().getApplicationLogger(WeatherThingTemplate.class); private String _appid; @Override public void initializeThing() throws Exception { super.initializeThing(); _appid = (String) this.getConfigurationSetting("OpenWeatherMapConfigurationTable", "appid"); } NOTE: In the code above we used ThingWorx LogUtilities to get a reference to the ThingWorx logging system, then assigned the reference to the variable _logger. In the steps below we will use this variable to log information. There are multiple kinds of loggers and log levels used in the ThingWorx Platform, but we recommend that you use the application or script loggers for logging anything from inside extension services. If prompted to import the logger, use slf4j.     Click here to view Part 3 of this guide.
Hi Community,   I've recently had a number of questions from colleagues around architectures involving MQTT and what our preferred approach is.  After some internal verification, I wanted to share an aggregate of my findings with the ThingWorx Architect and Developer Community.   PTC currently supports four methods for integrating with MQTT for IoT projects: ThingWorx Azure IoT Hub Connector ThingWorx MQTT Extension ThingWorx Kepware Server ThingWorx Protocol Adapter Toolkit (PAT) Choice is nice, but it adds complexity and sometimes confusion.  The intent of this article is to clarify and provide direction on the subject to help others choose the path best suited for their situation.   ThingWorx MQTT Extension The ThingWorx MQTT extension has been available on the Marketplace as an unsupported "PTC Labs" extension for a number of years.  Recently its status was upgraded to "PTC Supported", and it has received some attention from R&D, getting bug fixes and security enhancements.  Most people who have used MQTT with ThingWorx are familiar with this extension.  As with anything, it has advantages and disadvantages.  You can easily import the extension without having administrative access to the machine, it's easy to move around and store with projects, and it can be up and running quite quickly.  However, it is also quite limited when it comes to the flexibility required for building a production application, is tied directly to the core platform, and does not get feature/functionality updates.   The MQTT extension is a good choice for PoCs, demos, benchmarks, and prototypes, as it provides MQTT integration relatively quickly and easily.  As an extension that runs with the core platform, it is not a good choice for a client/enterprise application where MQTT communication reliability is critical.   ThingWorx Azure IoT Hub Connector Although Azure IoT Hub is not a fully functional MQTT broker, Azure IoT does support MQTT endpoints on both IoT Hub and IoT Edge.  This can be an interesting option: MQTT devices publish to Azure IoT and are integrated with ThingWorx using the Azure IoT Hub Connector, without actually requiring an MQTT broker to run and be maintained.  The Azure IoT Hub Connector works similarly to the PAT and is built on the Connection Server, but adds the notion of device management and security provided by Azure IoT.   When using Azure IoT Edge configured as a transparent gateway with buffering (store and forward) enabled, this approach has the added benefit of being able to buffer MQTT device messages at a remote site, handling Internet interruptions without losing data.   This approach also offers far greater integrated security capabilities, leveraging certificates and tying into Azure Key Vault, as well as easily scaling up the resources receiving the MQTT messages (IoT Hub and the Azure IoT Hub Connector).  Considering that this approach is built on the Connection Server core, it also follows our deployment guidance of processing communications outside of the core platform (unlike the extension approach).   ThingWorx Kepware Server As some will note, Kepware has some pretty awesome MQTT capabilities, both as northbound and southbound interfaces.  The MQTT Client driver allows creating an MQTT channel to devices communicating via MQTT, with auto-tag creation (from the MQTT payload).  Coupled with the native ThingWorx AlwaysOn connection, you can easily connect Kepware to an on-premises MQTT broker and connect these devices to ThingWorx over AlwaysOn. 
The IoT Gateway plug-in has an MQTT agent that allows publishing data from all of your Kepware-connected devices to an MQTT broker or endpoint.  The MQTT agent can also receive tag updates on a different topic and write back to the controllers.  We've used this MQTT agent to connect industrial control system data to ThingWorx through cloud platforms like Azure IoT, AWS, and communications providers.   ThingWorx Product Segment Direction A key factor in deciding how to design your solution should be alignment with our product development direction.  The ThingWorx Product Management and R&D teams have for years been putting their focus on scalable and enterprise-ready approaches that our partners and customers can build upon.  I mention this to make it clear that not all supported approaches carry the same weight.  Although we do support the MQTT extension, it is not in active development, because out-of-platform, microservices-based communication interfaces are our direction forward.   The Azure IoT Hub Connector, being built on the Connection Server, is currently the way forward for MQTT communications to the ThingWorx Foundation.   Regards,   Greg Eva