
IoT Tips

Embedded databases come with the installation of the ThingWorx Platform; no additional installation or configuration is required for them. Read about the various benefits and pitfalls of embedded versus external databases below.

Database Options

H2
- RDBMS (relational database management system), written in Java
- Small memory footprint; embedded into ThingWorx for easy installation
- Not as robust as other database options
- Not scalable in production environments (unless used alongside a separate, external database for stream, value stream, and other data); see KCS Article CS243975 for further reading on the use of external databases
- Meant to be used for quick deployments and testing environments

PostgreSQL
- ORDBMS (object-relational database management system), written in C
- PostgreSQL is the ThingWorx-recommended database for production systems

More robust:
- External database installed separately from ThingWorx; beneficial because external databases can be specifically configured for use in production, while embedded databases cannot
- Able to efficiently handle larger amounts of data and store more data without affecting ThingWorx system performance

Greater stability:
- Recover from data corruption more easily by accessing the database from an external application (separate from ThingWorx) using simple SQL statements
- Easier to back up the database in case of issues (further reading in KCS Article CS246598)
- Less risky and simpler upgrade procedure, which occurs "in-place": instead of exporting and importing data and entities, a simple schema update allows these to automatically persist into the new version
- If the ThingworxStorage folder is accidentally deleted, entities and data are secure in the external database

More secure:
- HA (High Availability) allows for multiple server instances at different locations in the network; this assists at time of failover (if one server fails, the other can immediately take over), securing the data and preventing further data loss in the event of a failure
- Customizable security settings and complex password requirements
- Fewer security vulnerabilities than other databases

Installation notes:
- Because PostgreSQL is an external database, it can be harder to install; follow the steps in the installation guide closely
- See KCS Articles CS235937 and CS230085 for troubleshooting and help with installation and configuration

HANA
- RDBMS (relational database management system)
- In-memory, column-based data storage
- For more information on this database, please see the Getting Started with SAP HANA Guide

Neo4j
- GDBMS (graph database management system), written in Java
- Data is not easily accessed by external applications, and CQL must be used instead of SQL, making recovery from corruption very difficult
- Embedded database with limited configuration options
- Known to have issues with deadlocks
- Deprecated in version 7.0 (related KCS Article: CS228537)

For full installation steps for H2 and PostgreSQL, see the ThingWorx Installation Guide.
This blog presents a simple Java utility to validate the deployment of ThingWatcher. It is important to note that the utility is not a real-life application; the intent was to keep it as simple as possible in order to achieve its aim: validation of the deployment. Familiarity with a Java IDE (such as Eclipse) is necessary in order to run the utility with the relevant dependency and classpath setup; those details are beyond the scope of this posting.

We will cover the following points:
- Pre-requisites
- Using the sample utility
- Code walk-through
- Validate training job creation
- Validate model job creation
- Update for ThingWorx Analytics 8.0

Pre-requisites
Strict adherence to the ThingWatcher deployment guide is recommended, both to deploy the training and model microservices and to familiarize yourself with the ThingWatcher APIs. Prior to testing ThingWatcher, both the training and model microservices should be up and running. The media for ThingWatcher (including the model and training microservices) should be downloaded from the PTC Software Download page. The commands to deploy the microservices vary depending on the platform used and are presented in the ThingWatcher deployment guide. As a reference example, on Windows the steps are similar to the following:

1. Start Docker: Start > Program > Docker > Docker Quick Start Terminal
2. Load the model microservice tar file:
$ docker load < "D:\PTC\MED-61147-CD-522_F000_ThingWorx-Analytics-ThingWatcher-52-2\components\ModelService\ModelService\model-service.tar"
3. Install the model service:
$ docker run -d -p 8080:8080 -v '/d/TWatcherStorage/model:/data/models' -v '/d/TWatcherStorage/db:/tmp/' twxml/model-service:1.0 -Dfile.storage.path=/data/models -jar maven/model-1.0.jar server maven/standalone-evaluator.yml
4. Load the training microservice tar file:
$ docker load < "D:\PTC\MED-61147-CD-522_F000_ThingWorx-Analytics-ThingWatcher-52-2\components\TrainingService\TrainingService\training-service.tar"
5. Install the training service:
$ docker run -d -p 8090:8080 twxml/training-service:1.0.0 -Dmodel.destination.uri=model://192.168.99.100:8080/models -jar maven/training-standalone-1.0.0-bin.jar server /maven/training-standalone-single.yml
Note: -Dmodel.destination.uri points to the model microservice host. To find the IP address, enter docker-machine ip on the model microservice docker machine.
6. Validate the microservices deployment: execute docker ps and confirm that both services are up, as in the following example:
CONTAINER ID   IMAGE                          COMMAND                  CREATED       STATUS          PORTS                              NAMES
5b6a29b95611   twxml/training-service:1.0.0   "java -Dmodel.destina"   13 days ago   Up 44 minutes   8081/tcp, 0.0.0.0:8090->8080/tcp   modest_albattani
8c13c0bc910e   twxml/model-service:1.0        "java -Dfile.storage."   2 weeks ago    Up 44 minutes   0.0.0.0:8080->8080/tcp, 8081/tcp   thirsty_ptolemy

Using the sample utility
1. Download the attachment Main.java.
2. Import Main.java into Eclipse (or the IDE of your choice) with the ThingWatcher dependencies added to the classpath.
3. Update the trainingBaseURI (see below) to point to the training microservice.
4. The utility should then be ready to execute.
Code walk-through
The code declares a ThingWatcher in the following snippet:

ThingWatcher thingwatcher = new ThingWatcherBuilder()
    .certainty(90.0)
    .trainingDataDuration(60)
    .trainingDataDurationUnit(DurationUnit.SECOND)
    .trainingBaseURI("http://192.168.99.100:8090/training")
    .getThingWatcher();

In the above code it is important to update the trainingBaseURI argument with the correct IP address and port of the training microservice host. The code then loops 10000 times and sends a new value, which simulates the sensor data, at a simulated 100 ms interval. The value is computed as Math.sin(i) for the whole calibrating phase and most of the monitoring phase too. We artificially introduce an anomaly by sending a value of Math.incrementExact(i) between the 9000th and 9900th iterations. During the monitoring phase, the code logs the value, the anomalous status, and the ThingWatcher state. It is advised to save the output to a file in order to review the logging once the utility has run. In Eclipse this can be done by right-clicking Main.java > Run As… > Run Configuration > Common, ticking Output File under Standard Input and Output, and specifying a location for the output file.

A review of the output log file will show that somewhere between timestamp 900000 and 990000, isAnomalousValue is true. Note that this does not start and end exactly at 900000 and 990000, as ThingWatcher needs a few occurrences before reporting an anomaly. Sample output indicating an anomalous state:

[main] INFO com.thingworx.analytics.demo.Main - Value = 901700,9017.0,-9016.403802019577
[main] INFO com.thingworx.analytics.demo.Main - isAnomalousValue = true
[main] INFO com.thingworx.analytics.demo.Main - ThingWatcherStat = MONITORING

As part of validating the successful deployment of ThingWatcher, it is recommended to validate the correct creation of a training job and a model job.

Validate training job creation
In order to validate the successful creation of a training job, execute a GET request to the training microservice: http://192.168.99.100:8090/training (update the IP address to the one on your system). This should return a COMPLETED job.

Validate model job creation
In order to validate the successful creation of a model job, execute a GET request to http://192.168.99.100:8080/models (update the IP address to the one on your system) to see all the models that have been created. Alternatively, use the URI reported in the training job output, here http://192.168.99.100:8080/models/6/pmml.xml, to see the complete model definition. When this sample test runs correctly, the ThingWatcher deployment has been validated.

Update for ThingWorx Analytics 8.0
For deploying the microservices, see Video Link : 1937. For updated Java code, see the community post "Does anyone know how to use java api to achieve anomaly detection with Thingwatcher8.0?"

To note: the utility provided is for testing purposes only. The code does not represent any kind of best practice and is not meant to be a perfect Java coding example. It is provided as is, with no guarantee.
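On the subject of the validation GETs above: if you prefer to run those checks from a ThingWorx service rather than a browser, here is a minimal hedged sketch using the platform's ContentLoaderFunctions resource (adjust the IP addresses and ports to your own system):

// Query the training microservice for its jobs
var trainingJobs = Resources["ContentLoaderFunctions"].GetJSON({
    url: "http://192.168.99.100:8090/training"
});
// Query the model microservice for the created models
var models = Resources["ContentLoaderFunctions"].GetJSON({
    url: "http://192.168.99.100:8080/models"
});
// Inspect the responses, e.g. confirm a COMPLETED training job
logger.info("Training jobs: " + JSON.stringify(trainingJobs));
logger.info("Models: " + JSON.stringify(models));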
Introduction
The Oracle 12c release introduced the concept of multi-tenant architecture for housing several databases running as services under a single database. This post addresses the connectivity and configuration required to connect ThingWorx to one of the pluggable databases running in a multi-tenant architecture as an external data source.

What is multi-tenant database architecture?
Running multiple databases under a single database installation. Oracle 12c allows the user to create one database called the Container Database (CDB) and then spawn several databases called Pluggable Databases (PDB) running as services under it.

Why use multi-tenant architecture?
Such a setup allows users to spawn a new PDB as and when needed with limited resource requirements, and to easily administer several PDBs just by administering the container database. Since all the PDBs are contained within a single database's tablespace structure, individual PDBs can be started and stopped independently, leading to a lower cost of maintaining different databases, as resource management is limited to one CDB.

When to use multi-tenant architecture?
In scenarios like creating PoCs, setting up different test environments requiring external data storage, or maintaining different versions of a dataset, running in the multi-tenant architecture can help save time, money, and effort.

Create a Container Database (CDB)
Creation of a Container Database (CDB) is not very different from creating a non-container database; use the attached guide Installing Oracle Database Software and Creating a Database.pdf (also accessible online).

Create a Pluggable Database (PDB)
Use the attached guide Multitenant: Create and Configure a Pluggable Database (PDB) in Oracle Database 12c (also accessible online) to create a pluggable database and plug it into the Container Database created in the previous step. Using the above guide, I created a number of pluggable databases; I'll be using TW724 to connect to the ThingWorx server as an external data source in the following example.

Connect to a Pluggable Database (PDB) as an external data source for ThingWorx
1. Download and unzip the Relational Databases Connectors Extension from the ThingWorx Marketplace and extract Oracle12Connector_Extension.
2. Import Oracle12Connector_Extension into ThingWorx using Extension -> Import.
3. Create a Thing using the OracleDBServer12 Thing Template, e.g. TW724_PDB_Thing.
4. Navigate to the Configuration for TW724_PDB_Thing and update the default configuration:
   JDBC Driver Class Name: oracle.jdbc.OracleDriver
   JDBC Connection String: jdbc:oracle:thin:@//oravm.ptcnet.ptc.com:1521/tw724.ptcnet.ptc.com
   Database Username: <UserName>
   Database Password: <password>
5. Once done, save the entity.

Note: a PDB in a container database can be reached only as a service, not via the CDB's SID. In the above configuration, TW724 is a PDB which can be connected to via its service name, i.e. TW724.PTCNET.PTC.COM. Let's head to the Services tab for TW724_PDB_Thing to query and access the PDB data.
Creating services to access the PDB as an external database source for ThingWorx
Once the configuration is done, TW724_PDB_Thing is ready for use. The queries remain the same as any other SQL query needed to access data in Oracle.

Service for creating a table
On the Services tab for TW724_PDB_Thing, click Add My Service and select the service handler SQL Command, then use the following script to create testTable1 in the PDB:

create table testTable1 (
    id NUMBER GENERATED ALWAYS AS IDENTITY primary key,
    col1 varchar2(100),
    col2 number
)

Note: the GENERATED ALWAYS AS IDENTITY option is Oracle 12c specific; I included it here because with Oracle 12c the ability to auto-generate values is built in, simplifying sequence generation compared with older Oracle versions such as Oracle 11g. The user creating the table will need rights to create tables and sequences; check out the Oracle documentation on Identity for more on this.

Service for getting all the data from the table
Add another service with the script Select * from testTable1 to get all the data from the table.

Service for inserting data into the table
Adding another service with the script insert into testTable1 (col1, col2) values ('TextValue', 123) will insert the data into the table created above.

Service for getting all tables from the PDB, i.e. TW724
Using Select * from tab lists all the available tables in the TW724 PDB.

Summary
Since this is a scalable setup - given a platform with enough resources - it is possible to create up to 252 PDBs under a CDB; therefore 252 PDBs could be created and configured to as many Things extending the OracleDBServer12 Thing Template.

Edit: Common Connection Troubleshooting
If you observe an error like the following:

Unable to Invoke Service GetAllPDBTables on TW724_PDB_Thing : ORA-01033: ORACLE initialization or shutdown in progress

ensure that the pluggable database - in this error TW724, since this is what I created and used above in my services - is open and accessible. If it is not open, log in to the CDB (here, ORCL) as sys/system (with admin rights) via SQL*Plus, SQL Developer, or any SQL utility of your choice capable of connecting to an Oracle DB, and open the pluggable database using the command:

alter pluggable database tw724 open;
The system user can be a vital tool for properly yet conveniently securing your application. From the ThingWorx Help Center:

The system user is a system object in ThingWorx. With the system user, if a service is called from within a service or subscription (a wrapped service call), and the system user is permitted to execute the service, the service will be permitted to execute regardless of the user who initially triggered the sequence of events, scripts, or services. http://support.ptc.com/cs/help/thingworx_hc/thingworx_7.0_hc/index.jspx?id=SystemUser&action=show

A few important notes to remember:
- It is not possible to log in as the system user
- Adding the system user to the Administrators group will not grant it administrator permissions
- Adding the system user to the Everyone organization will not grant it the same visibility

As an option, one of the posts on our community provides a script to assign all of the permissions to the system user in a one-time setup: https://community.thingworx.com/community/developers/blog/2016/10/28/assigning-the-system-user-through-script

Example (a sketch of the wrapped service follows at the end of this post):
1. Create a new template T1 and several things: Thing1, Thing2, Thing3.
2. Create a new thing NewThing and a new user BlankUser.
3. Create a service within NewThing that uses ThingTemplates["T1"].GetImplementingThings() and give all the permissions on it to the new non-admin user, BlankUser.

Now the service on template T1 can be accessed through NewThing without explicit permissions for BlankUser - the wrapped call executes through the system user.

When manipulating data (involving read/write and access to a persistence provider), BlankUser would require more than just visibility permissions. For example, for a Stream, the following permissions would need to be set up:
1. Visibility on the Stream template, StreamProcessingSubsystem, and PersistenceProvider.
2. Read/write permission on the Stream thing in the use case, created with the Stream template.
Similarly, for other sources of data, the things, templates, and resources involved need visibility and, depending on the scenario, read/write permissions on the specific template.
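For reference, a minimal sketch of the wrapped service on NewThing, using the entity names from the example above (the logging is illustrative):

// Service on NewThing, executed by BlankUser.
// The GetImplementingThings() call on template T1 is a wrapped call:
// it runs under the system user, so BlankUser does not need explicit
// runtime permissions on T1 itself, only on this service.
var implementingThings = ThingTemplates["T1"].GetImplementingThings();
logger.info("Found " + implementingThings.rows.length + " things implementing T1");
var result = implementingThings;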
ThingWorx provides a library of InfoTable functions, one of the most powerful being DeriveFields (besides that, I use Aggregate and Query a lot... and getRowCount). DeriveFields can generate additional columns in your InfoTable and fill them with values that can be derived from... nearly anything! Hard-coded, based on a service you call, based on a property value, based on other values within the InfoTable you are adding the column to. Just remember, for this service (as well as Aggregate): no spaces between different column definitions, and use a , (comma) as separator. Here are two powerful examples:

//Calling another function using DeriveFields
//Note that the value thingTemplate is the actual value in the row of column thingTemplate!
var params = {
    types: "STRING" /* STRING */,
    t: AllItems /* INFOTABLE */,
    columns: "BaseTemplate" /* STRING */,
    expressions: "Things['PTC.RemoteMonitoring.GeneralServices'].RetrieveBaseTemplate({ThingTemplateName:thingTemplate})" /* STRING */
};
// result: INFOTABLE
var AllItemsWithBase = Resources["InfoTableFunctions"].DeriveFields(params);

//Getting values from other Properties
//'to' in this case is the value of the row in the column 'to'
//Note the use of , and no spaces
//NOTE: You can make this even more generic with something like Things[to][propName]
var params = {
    types: "NUMBER,STRING,STRING,LOCATION" /* STRING */,
    t: AllAssets /* INFOTABLE */,
    columns: "Status,StatusLabel,Description,AssetLocation" /* STRING */,
    expressions: "Things[to].Status,Things[to].StatusLabel,Things[to].description,Things[to].AssetLocation" /* STRING */
};
// result: INFOTABLE
var AllAssetsWithStatus = Resources["InfoTableFunctions"].DeriveFields(params);
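Since Aggregate was mentioned as well, here is a hedged sketch of it in the same style (the column names are illustrative and build on the result above; as with DeriveFields, keep the comma-separated lists free of spaces):

//Aggregating the derived result per StatusLabel
//aggregates supports functions like MIN, MAX, AVERAGE, COUNT, SUM
var params = {
    t: AllAssetsWithStatus /* INFOTABLE */,
    columns: "Status" /* STRING */,
    aggregates: "AVERAGE" /* STRING */,
    groupByColumns: "StatusLabel" /* STRING */
};
// result: INFOTABLE with one row per StatusLabel
var StatusSummary = Resources["InfoTableFunctions"].Aggregate(params);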
Since it is somewhat unclear how to set up the reset-password feature through the login form, these steps should be a little more helpful. Assuming the mail extension has already been imported into the ThingWorx platform and properly configured - say, as PassReset (test with the SendMessage service to verify) - let's create a new user, Blank, and a new organization, Test, that has that user assigned as a member. Open the configuration tab for the organization, assign the PassReset mail thing as the mail server, assign the login image, style, and prompt (optional), and check Allow Password Reset.

On the Email content part, it is not possible to save the organization as-is at this point; clicking on the question mark for the Email content lists the requirements. This is where it might not be too clear: the tokens [[:user:]], [[:organization:]], and [[:url:]] can be used in the email body and at runtime will be replaced with the actual username, organization, and reset-password URL. Of those fields, only the [[:url:]] token is required, so it is sufficient to place only [[:url:]] in the body and save the organization.

Then, when going to the FormLogin at <your thingworx host:port>/Thingworx/FormLogin/<organization name>, a password reset button is available. Filling out the user information in the reset field sends the email to the user address specified and displays the proper message. Since in this example only the [[:url:]] token has been used in the email content, the received email will contain just the reset link.

To troubleshoot any errors that might be seen in the process of retrieving the password reset link, it is helpful to check your browser developer tools and the ThingWorx application log for details.
The following is valid for ThingWorx Analytics (TWA) 52.0.2 through 8.0. For release 8.3.0 and above, see How to score new data in ThingWorx Analytics 8.3.x?

Overview
The main steps are as follows:
1. Create a dataset
2. Configure the dataset
3. Upload data to the dataset
4. Optimize the dataset
5. Create filters for training and scoring data
6. Train the model
7. Execute scoring on existing data
8. Upload new data to the dataset
9. Execute scoring on the new data

TWA models are dataset-centric, which means a model created with one dataset cannot be reused with a different dataset. In order to be able to score new data, a specific feature - record purpose in the example below - is included in the dataset. This feature needs to be included from the beginning, when the data is first uploaded to TWA. A filter on that feature can then be created to isolate the desired data. When new data comes in, it is added to the original dataset but with a specific value for the filtered feature (record purpose), which makes it possible to discriminate and score only those new records.

Process
1. Create a dataset. This example uses the beanpro demo dataset. Creating the dataset is done through a POST on the datasets REST API.
2. Configure the dataset. This is done through a POST on the <dataset>/configuration REST API.
3. Upload data.
4. Optimize the dataset.
5. Create filters. The dataset includes a feature named record purpose, created especially to differentiate between the rows to be used for training and the rows to be used for scoring. New data to be added will have record purpose set to scoringnew, which allows a scoring job limited to those filtered new rows: create one filter for the training data (TrainingData) and one for the new scoring data (ScoringNewData).
6. Train the model. This is done through a POST on the <dataset>/prediction API.
7. Score the training data. This is done through a POST on the <dataset>/predictive_scores API. Note the use of the filter TrainingData created earlier, which restricts scoring to the rows with training as the value of the record purpose feature. (Scoring could also be done without a filter at this stage, in which case all the data in the dataset would be scored, not just the rows with training for record purpose.) Retrieving the scoring result shows all the records in the dataset.
8. Upload new data. The newly uploaded CSV file should contain only new records; these are appended to the existing ones. Note that the new records (there could be more than one) have the value scoringnew for the record purpose feature. This allows the previously created filter ScoringNewData to ensure that a new scoring job only takes the new data into account.
9. Score the new data. A POST on the predictive_scores API is executed, this time using the filter ScoringNewData. This results in only the newly added data being scored, and therefore a much quicker execution time too. Retrieving the scoring result shows only the new records.
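As an illustration of what such a REST call can look like when driven from a ThingWorx service, here is a hedged sketch using ContentLoaderFunctions. The host, port, dataset path, and request body here are placeholders and must be matched to your TWA version's actual API contract (see the TWA REST API documentation):

// Score only the new records by referencing the ScoringNewData filter
var scoringJob = Resources["ContentLoaderFunctions"].PostJSON({
    url: "http://twa-server:8080/datasets/beanpro/predictive_scores", // placeholder path
    content: { filterName: "ScoringNewData" },                        // placeholder body
    headers: { "Content-Type": "application/json" }
});
logger.info("Scoring job submitted: " + JSON.stringify(scoringJob));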
I imagine a lot of people facing this problem might be using Session Parameters, but there is a secret lost Ninja art that allows you to do it with Mashup parameters, which is much more contextual and direct. The key is to have Mashup parameters with the SAME NAME.

End result: starting on my main mashup, you can see the Tree Data in the Grid below. Clicking on the next node shows the new mashup and the TO field inside; that TO value was passed in using a mashup parameter. Clicking the next node, you can see it is actually a different mashup, but I am still passing the TO value.

How is it done? Here is my mashup with the Tree and Contained Mashup; the bindings are in place already, but how did I do it, since the Contained Mashup is empty? First, create the new mashups with a mashup parameter named the SAME - in this case EntityName. Here in Mashup2 you can see the mashup parameter with the same name, EntityName, bound to one of the Value Displays.

Now how do I bind from my main mashup? What you need to do is temporarily assign one of the mashups to the Contained Mashup - here I assigned Mashup1. This will now allow you to bind not just the Mashup Name, but also a value to the mashup parameter in that mashup. Just drag your selected row values onto the Contained Mashup. The parameter now shows up as a property; I dropped my value on the Contained Mashup and I can bind to Name (the name of the mashup to show) and EntityName (the value I want to pass to the mashup parameter).

Now just remove the assigned mashup from the Contained Mashup and you'll note that the bindings stay intact. That's it!
In this post, I show how you can downsample time-series data on the server side using the LTTB algorithm. The export comes with a service to set up sample data and a mashup which shows the data with weak to strong downsampling.

Motivation: users displaying time-series data on mashups and dashboards (usually via a service using a QueryPropertyHistory flavor in the background) might request large amounts of data, by selecting large date ranges to be visualized or because data is recorded in high resolution. The newer chart widgets in ThingWorx can work much better with a higher number of data points to display. Some also provide their own downsampling so only the "necessary" points are drawn (e.g. no need to paint beyond the screen's resolution); see the discussion here. However, as this is done in the widgets, the data reduction happens on the client side, so data is sent over the network only to be discarded. It would be beneficial to reduce the number of points delivered to the client beforehand. This would also improve the behavior of older widgets which don't have support for downsampling.

Many methods for downsampling are available. One option is partitioning the data and averaging out each partition, as described here. A disadvantage is that this creates and displays points which are not in the original data. The approach here uses Largest-Triangle-Three-Buckets (LTTB) for two reasons: the resulting data points exist in the original data set, and the algorithm preserves the shape of the original curve very well, i.e. outliers are displayed and not averaged out. It also seems computationally not too hard on the server.

Setting it up:
1. Import entities from LTTB_Entities.xml.
2. Navigate to thing LTTB.TestThing in project LTTB and run the service downsampleSetup to set up some sample data.
3. Open mashup LTTB.Sampling_MU. Initially, 8000 rows are sent back; the chart widget decides how many of them are displayed, and you can see the row count in the debug info. Using the button bar, you determine to how many points the result will be downsampled and sent to the client. Notice how the curve gets rougher, but the shape is preserved.

How it works: the potentially large result of QueryPropertyHistory is downsampled by running it through LTTB, and the resulting infotable is sent to the widget (see service LTTB.TestThing.getData). The LTTB implementation itself is in the service downsampleTimeseries. Debug mode allows you to see how much data is sent over the network, and how much that number decreases proportionally with the downsampling.

The export and the widget were done with TWX 9, but it's only the widget that really needs TWX 9. I guess the code would need some more error-checking for robustness, but it's a good starting point.
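The actual implementation lives in the downsampleTimeseries service of the export. As a point of reference, here is a minimal, generic LTTB sketch in plain JavaScript operating on an array of {x, y} points (the service in the export works on infotables instead, so the details differ):

function lttb(data, threshold) {
    var n = data.length;
    if (threshold >= n || threshold < 3) {
        return data; // nothing to downsample
    }
    var sampled = [data[0]]; // always keep the first point
    var bucketSize = (n - 2) / (threshold - 2);
    var a = 0; // index of the last selected point
    for (var i = 0; i < threshold - 2; i++) {
        // average point of the *next* bucket
        var nextStart = Math.floor((i + 1) * bucketSize) + 1;
        var nextEnd = Math.min(Math.floor((i + 2) * bucketSize) + 1, n);
        var avgX = 0, avgY = 0;
        for (var j = nextStart; j < nextEnd; j++) {
            avgX += data[j].x;
            avgY += data[j].y;
        }
        avgX /= (nextEnd - nextStart);
        avgY /= (nextEnd - nextStart);
        // pick the point in the *current* bucket forming the largest
        // triangle with the last selected point and the next bucket average
        var curStart = Math.floor(i * bucketSize) + 1;
        var curEnd = Math.floor((i + 1) * bucketSize) + 1;
        var maxArea = -1, maxIndex = curStart;
        for (var k = curStart; k < curEnd; k++) {
            var area = Math.abs(
                (data[a].x - avgX) * (data[k].y - data[a].y) -
                (data[a].x - data[k].x) * (avgY - data[a].y)) / 2;
            if (area > maxArea) {
                maxArea = area;
                maxIndex = k;
            }
        }
        sampled.push(data[maxIndex]);
        a = maxIndex;
    }
    sampled.push(data[n - 1]); // always keep the last point
    return sampled;
}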
Announcing: ThingWorx Solution Central 3.1.0 and its New API
Written by: Tori Firewind of the IoT EDC

Solution Central 3.1.0
ThingWorx Solution Central (SC) is the solution management tool for ThingWorx and Digital Performance Management (DPM), the latest version of which (DPM 1.1) can now be deployed directly from the PTC Solutions menu of SC. Streamlining packaging strategies and ensuring efficient solution deployment can now be done for all kinds of ThingWorx solutions, even those with heavy customization. The new API allows updated building blocks within the PTC Solutions menu to be easily discovered and deployed right alongside custom solutions. Even the most advanced developers can now house their deployment management process within SC.

As discussed in a previous article, Solution Central forms a necessary part of a mature DevOps pipeline, usually as a set of services within Foundation which finalize and publish the solution to the Solution Central servers. The recommendation to utilize Solution Central from within Foundation remains a best practice for ThingWorx DevOps, because the vast majority of solutions benefit from using the ThingWorx APIs, which scan and check for dependencies and proper XML formatting on each included entity.

Packaging and publishing the solution from within ThingWorx is the easiest and most straightforward way recommended by PTC, but it is now possible to publish to Solution Central using a standard API for those who need to publish from Jenkins or other build jobs. If there are legacy extensions, third-party tool dependencies, or other customizations within the ThingWorx application, then it may be beneficial to use this new API instead.

The new API also allows for editable extensions and entities within a published solution, though PTC still recommends avoiding this as a general practice. It is usually better for maintainability and ease of upgrades to simply publish the solution again (with an incremented version) each time any changes are made.

How to Use the API
Within Solution Central, a new menu option has been added to review the API, with information about the different types of requests and their parameters and responses. To access this within the Solution Central UI, open the help menu and navigate to "Public APIs". To see a sample response, select a request type and scroll down to the "Responses" section. Examples of error responses are also provided, and it's important to ensure that whatever makes these requests can properly report or log any errors for troubleshooting and maintenance of the DevOps process.

The general steps for making use of this API are as follows (a hedged sketch of the checksum and upload steps appears at the end of this post):
1. Create the solution resource with a POST.
2. Add the files to that solution with PUTs:
   a. First, create the solution archive with the right Solution Identifiers; this should contain at least one project XML and all of the entities belonging to that ThingWorx project.
   b. Next, compute MD5 (using a tool like DigestUtils) on the contents of the archive; this checksum is required by Solution Central.
   c. Compute the SHA hash on the archive and save it; this will need to be provided along with the archive in the PUT requests.
   d. Compute MD5 on the hash file also.
   e. Finally, make the two PUT requests, one for the archive and one for its hash; for example cURL requests, see the Help Center.
3. Publish the solution using a PUT.

So, with a little more work, it is now possible to make use of SC in a more custom DevOps process.
It is now possible to build JSON or XML solutions using development tools outside of ThingWorx and still publish these customizations to Solution Central. The process of DevOps management just became more versatile, and with the ease of deployment of DPM and other PTC building blocks as well, ThingWorx is now more accessible and easier to use than ever before.
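To make the checksum and upload steps concrete, here is a hedged Node.js sketch. The endpoint path, header names, and auth mechanism here are assumptions for illustration only; the authoritative contract is the Public APIs help within Solution Central itself:

// Run inside an async function; Node 18+ provides a global fetch.
const crypto = require('crypto');
const fs = require('fs');

const archive = fs.readFileSync('my-solution-1.0.0.zip');   // placeholder archive name
const archiveMd5 = crypto.createHash('md5').update(archive).digest('base64');
const sha = crypto.createHash('sha256').update(archive).digest('hex');
fs.writeFileSync('my-solution-1.0.0.sha', sha);             // hash file uploaded alongside
const shaFile = fs.readFileSync('my-solution-1.0.0.sha');
const shaMd5 = crypto.createHash('md5').update(shaFile).digest('base64');

// PUT the archive (a second, analogous PUT uploads the hash file).
// baseUrl, solutionId, token, and the path segments are placeholders.
const res = await fetch(baseUrl + '/solutions/' + solutionId + '/versions/1.0.0/files/solution.zip', {
    method: 'PUT',
    headers: { 'Content-MD5': archiveMd5, Authorization: 'Bearer ' + token },
    body: archive
});
console.log('Upload status:', res.status);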
Persistent vs. Logged Properties By Mike Jasperson, VP of IoT EDC   Executive Summary ThingWorx provides several different “aspects” (or storage options) for how property values are saved.  These options each have different implications for performance and scalability.  Understanding those implications is important for designing a scalable IOT solution.   Persistent Properties are best used for non-telemetry data which will change infrequently (for example only a few times in a day) and where historical values are not required.  When overused, Persistent properties can put significant pressure on the database layer of your ThingWorx implementation, leading to poor performance of your IOT application.  As the number of Things in your IOT application scales up, the quantity or frequency of persistent properties per Thing needs to be carefully considered.   Logged Properties are best used for telemetry data where historical values need to be retained, but also for any other value that is expected to change frequently.  Logged properties can create some additional requirements: a process for handling null/default values after restarts, more disk space, and a data retention policy. There are benefits as well, though, like more flexibility and scalability for the ingestion of larger volumes of data.   Persistent + Logged Properties perform database operations of both aspects.  Combined use should be very limited – only properties that update infrequently (a few times a day), and that must be in-memory in the event of a ThingWorx restart.   In-Memory Only Properties are neither persistent nor logged – they are not stored to the database.  These properties can greatly improve scale for values that need to be available for the application to drive UIs or compute other derived values that will be stored.  However, high-frequency updates of in-memory properties can create scale challenges in HA (high availability) ThingWorx configurations where memory state needs to be constantly shared between multiple ThingWorx nodes.     Find a complete summary as well as example cases in the document attached.
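As a quick illustration of how these aspects are chosen per property, here is a hedged sketch using the AddPropertyDefinition service on a Thing. The property names are illustrative, and the persistent/logged parameter names should be verified against the service definition in your platform version:

// Infrequently-changing configuration value that must survive a restart
me.AddPropertyDefinition({ name: "setpoint", type: "NUMBER", persistent: true, logged: false });
// High-frequency telemetry where history matters
me.AddPropertyDefinition({ name: "temperature", type: "NUMBER", persistent: false, logged: true });
// Derived value only needed to drive UIs -- in-memory only
me.AddPropertyDefinition({ name: "displayState", type: "STRING", persistent: false, logged: false });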
Is your team operating an effective DevOps pipeline? DevOps is an important part of a mature, enterprise-ready application, but the process isn't simple.

This expert session will focus on what a full DevOps pipeline looks like and how PTC can help to build a seamless pipeline. Join us for our upcoming Expert Session to learn how to create a Docker image, integrate Azure with Docker and Git, and set up a seamless DevOps pipeline.

When? Thursday, September 30th, 2021 | 11 AM EST
Host: Tori Firewind, Senior Engineer in the PTC IoT Enterprise Deployment Center
Registration link: https://www.ptc.com/en/resources/iiot/webcast/devops-pipeline-thingworx
Remote Timeouts

Format of the entries below:
- Units: unit of measure for the timeout or limit (seconds, milliseconds, cycles, etc.)
- Description: describes the timeout
- Outcome: describes the default behavior if the timeout or limit is reached
- Related timeouts: lists other timeouts that are closely related to the timeout in question, meaning they should be configured together because one timeout will affect another

Notes:
- This guide is heavily focused on the C SDK; certain timeouts may have different names in other SDKs or agents
- There are no descriptions of any imposed delays or timeouts related to thread pools on the ThingWorx Platform
- Local timeouts (not related to remote requests) were intentionally not included
- There are far too many applications to provide detail about every situation introduced by every timeout, but this should provide a good starting point for custom timeout configuration

Edge

socket_read_timeout
- Units: milliseconds
- Description: used to free the socket mutex, allowing another service to read on the socket. Increasing this value is beneficial in low-resource systems but could lead to slower performance.
- Outcome: socket read retry
- Related timeouts: ssl_read_timeout

ssl_read_timeout
- Units: milliseconds
- Description: if a partial record is read but not saved, it is possible to remove part of an SSL record that would otherwise have been essential in decrypting the entire record. This timeout prevents that situation; it allows a function to re-acquire the socket mutex in the event that a partial SSL record was captured but the socket_read_timeout was reached.
- Outcome: websocket read retry
- Related timeouts: socket_read_timeout

frame_read_timeout
- Units: milliseconds
- Description: essentially an idle socket timeout. If an edge device requests a message from the ThingWorx Platform and nothing is read after the request for the specified time (not even request headers/SSL header), the websocket is assumed to be experiencing an error and the connection is closed.
- Outcome: websocket disconnect
- Related timeouts: message_timeout

message_timeout
- Units: milliseconds
- Description: the maximum overall time that the edge will wait for a full response during a particular request to the ThingWorx Platform. This timeout can be overridden by the frame read timeout if there is no activity on the socket for a given expected response period.
- Outcome: websocket disconnect
- Related timeouts: frame_read_timeout, pingpong_timeout, Message Response Timeout (WSCommunicationSubsystem)

pingpong_timeout
- Units: milliseconds
- Description: the ping and pong messages are the heartbeat of the AlwaysOn protocol. If a pong is not received within <pingpong_timeout> ms after the ping is sent, the websocket will disconnect even if there are successful messages during the ping/pong period. If a pong is received DURING the read loop of another service, the pongs will be routed to the pong manager and recorded to prevent a pong timeout.
- Outcome: websocket disconnect
- Related timeouts: message_timeout

connect_timeout
- Units: milliseconds
- Description: when attempting to connect to the ThingWorx Platform, the connection and authentication will wait on an idle socket for the specified time before closing the connection and retrying.
- Outcome: close socket and attempt to reconnect
- Related timeouts: connect_retries, Auth Message Timeout (WSCommunicationSubsystem)

connect_retries
- Units: integer (number of tries, not actually a time measurement)
- Description: not coupled to an explicit time value; sets the maximum number of reconnect timeouts that the edge will tolerate before giving up. In certain SDKs, -1 corresponds to an infinite number of retries.
- Outcome: stop attempting to reconnect
- Related timeouts: connect_timeout, Auth Message Timeout (WSCommunicationSubsystem)

file_xfer_timeout
- Units: milliseconds
- Description: if a file transfer to an edge device becomes idle for too long, this timeout triggers an error and frees the memory associated with the file in the program (it does not delete files on disk, in case they are to be resumed later).
- Outcome: the file transfer is stopped, an error is reported, and associated file transfer memory objects are freed
- Related timeouts: File Transfer Idle Timeout, Copy (Service) Timeout

Related services
- Units: milliseconds
- Description: some timeouts are passed in as a parameter to a service on an edge device (SendFile, twApi_InvokeService, etc.). These timeouts act similarly to the WSCommunicationSubsystem message timeout, but they are driven from the edge device instead of the ThingWorx Platform.
- Outcome: timeout error reported
- Related timeouts: message_timeout, frame_read_timeout

Remote Thing (ThingWorx Platform)

Service Timeout
- Units: seconds
- Description: set explicitly in Composer when editing remote services. These values override the default message timeout set in the WSCommunicationSubsystem.
- Outcome: service execution error
- Related timeouts: Property Timeout

Property Timeout
- Units: seconds
- Description: set explicitly in Composer when editing remote properties. These values override the default message timeout set in the WSCommunicationSubsystem.
- Outcome: property get/set error
- Related timeouts: Service Timeout

ThingWorx Platform Subsystems

WSCommunicationSubsystem

Idle Connection Timeout
- Units: seconds
- Description: if a particular websocket connection has not received or sent a message in the specified time, the connection is assumed to be invalid. The ThingWorx Platform will unbind any related things, then disconnect the websocket. This should be set higher than the pingpong_timeout value.
- Outcome: websocket disconnect
- Related timeouts: pingpong_timeout

Auth Message Timeout
- Units: seconds
- Description: when a websocket first connects (before binding), the connection is allowed to stay open for the specified time interval without authenticating. Increasing this value will accommodate high-latency devices, but the ThingWorx Platform will be more vulnerable to saturating its own connections with unauthorized websockets.
- Outcome: websocket disconnect
- Related timeouts: connect_timeout

Message Response Timeout
- Units: seconds
- Description: the maximum amount of time allowed during an edge request before claiming the service result as a failure.
- Outcome: property get/set or service execution error
- Related timeouts: message_timeout (edge)

TunnelSubsystem

Startup Tunnel Timeout
- Units: seconds
- Description: once a remote tunnel is opened, it is given the specified time interval to establish an end-to-end connection before closing - for example, an SSH tunnel is opened but no client attaches to the endpoint.
- Outcome: close tunnel, report error
- Related timeouts: n/a

Idle Tunnel Timeout
- Units: seconds
- Description: once a tunnel and an end-to-end connection are established, this monitors the activity on the socket and reports a timeout if there is no read/write activity for the specified time interval.
- Outcome: close tunnel, report error
- Related timeouts: n/a

FileTransferSubsystem

File Transfer Idle Timeout
- Units: seconds
- Description: when the File Transfer Subsystem's Copy service is executed, a series of secondary remote services is executed to complete the transfer. The File Transfer Idle Timeout monitors the activity of each secondary service and stops the entire Copy service if any one secondary service records no activity for the specified time interval.
- Outcome: transfer stopped, error reported
- Related timeouts: Copy (Service) Timeout

Copy (Service) Timeout
- Units: seconds
- Description: the number of seconds that the File Transfer Subsystem waits for the completion of a file transfer. This is set every time a transfer is executed.
- Outcome: transfer stopped, error reported
- Related timeouts: File Transfer Idle Timeout
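For reference, here is a hedged sketch of driving a transfer from the ThingWorx Platform with a per-call timeout. The thing names are placeholders, and the timeout parameter corresponds to the Copy (Service) Timeout described above; verify the parameter names against your platform version's service definition:

// Copy a file from a remote agent into a file repository,
// failing the transfer if it does not complete within 600 seconds
Subsystems["FileTransferSubsystem"].Copy({
    sourceThing: "RemoteAgentThing",   // remote thing with file transfer enabled
    sourcePath: "/logs",
    sourceFile: "device.log",
    targetThing: "SystemRepository",   // a file repository thing
    targetPath: "/incoming",
    targetFile: "device.log",
    async: false,
    timeout: 600                       // seconds before this transfer is failed
});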
With ThingWorx, we can already use univariate anomaly alerts (on a single sensor value). However, in many situations the readings from an individual sensor may not tell you much about the overall issue, and a multivariate anomaly detector can be more useful. This post is intended to provide an overview of the Azure Anomaly Detector and how it can be integrated with ThingWorx. The attachment contains:
- A document with detailed instructions about the setup
- A .csv file with the multivariate time-series dataset
- A .twx file with some entities that need to be imported into ThingWorx, as well as the CSVParser extension that needs to be installed
- A .zip file that will need to be uploaded to an Azure Blob Container at some point in the setup
In our interactions with PTC customers, we often learn they have previously performed analytics modeling in Python, Matlab, or R, or even built home-grown analyses in languages such as Java or C++. As expected, when adopting an Industrial Innovation Platform such as ThingWorx, which also has its own ThingWorx Analytics module, customers do not want to reimplement everything from scratch and would rather integrate their previous work into the smart applications built in ThingWorx, leveraging a combination of their existing toolset together with ThingWorx Analytics modeling. That is certainly possible, and there are multiple ways to do it. In this article we focus on several general approaches, but it is important to keep in mind that language-specific approaches are also possible, and we are happy to discuss those in the specific context of the customer.

Here are five different ways to bring existing analytics into ThingWorx:
1. If the task is to reuse an existing predictive model developed in a language such as Python/R/Matlab, typically one can export that model to PMML (Predictive Model Markup Language), an XML format, and import it into ThingWorx Analytics using the AnalyticsServer_ResultsThing -> UploadModel service. Libraries such as sklearn2pmml and r2pmml can be utilized toward that goal. The imported model can then be used in the same fashion as a model developed in ThingWorx Analytics to power smart applications built in ThingWorx. If the analysis involves more complex tasks than predictive modeling, such as custom data normalizations, non-standard machine learning models, or home-grown algorithms, one can use the options below.
2. Call the REST web API exposed by ThingWorx from Python/Matlab/R/Java/JavaScript (a hedged sketch of this option follows after the list). Every service in ThingWorx can be called that way, and the API can also be used to push analysis results into ThingWorx for further consumption, perhaps together with other sources of data such as sensor readings, in the smart applications built there. The documentation for the ThingWorx REST API can be found here.
3. Expose the existing analytics via a thin layer of REST web services. For example, in Python this can be done using Flask with a few lines of code. The orchestration can then happen from ThingWorx by calling the exposed web service and weaving the results back into smart applications.
4. Often our customers' current architecture involves a relational database (e.g. SQL Server, Oracle, etc.) that powers the existing analytics and stores the end results (predictions, correlations, etc.). In this scenario, we can connect ThingWorx directly to that database to read these results.
5. Finally, in the case of complex analytics where a tighter integration with ThingWorx is desired, existing analytics/algorithms can be wrapped into a ThingWorx Extension or an Analytics Provider using the corresponding PTC SDKs.

When choosing an integration option, customers need to carefully balance complexity of integration, constraints of their architecture, analytics modeling complexity, and end-user consumption requirements.
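As a small illustration of option 2, here is a hedged JavaScript (Node.js) sketch that calls the ThingWorx REST API to push an externally computed prediction into the platform. The host, Thing, service, and parameter names are hypothetical placeholders:

// Run inside an async function; Node 18+ provides a global fetch.
const headers = {
    appKey: '<application key>',        // authenticate with a ThingWorx application key
    'Content-Type': 'application/json',
    Accept: 'application/json'
};

// Invoke a service on a Thing: POST /Thingworx/Things/<thing>/Services/<service>
const response = await fetch('https://twx-host/Thingworx/Things/AnalyticsGateway/Services/StorePrediction', {
    method: 'POST',
    headers,
    body: JSON.stringify({ assetId: 'pump-7', score: 0.93 }) // service inputs
});
console.log(response.status); // 200 on success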
JavaMelody is an open source (LGPL) application that measures and calculates statistical information based on application usage. The resulting data can be viewed in a variety of formats, including evolution charts, which track various operations and server attributes over time. There are also robust reporting options that allow data to be exported in either HTML or PDF formats.

Installation
Installation is fairly simple and can be done in just a few minutes. Download the distribution from the JavaMelody Wiki and extract javamelody.jar, available at https://github.com/javamelody/javamelody/releases

Step 1: Download the JavaMelody file (in Unix, use the following command; check the link above for the latest available version and adjust the version accordingly):
wget javamelody.googlecode.com/files/javamelody-1.49.0.zip

Step 2: Extract the zip file (in Unix, using the following command; note the version from step 1):
unzip javamelody-1.49.0.zip

Step 3: Copy javamelody.jar and jrobin-x.jar from the JavaMelody distribution to the WEB-INF/lib directory of the war file deployed in Tomcat, using the following command in Unix:
cp -pr javamelody-1.49.0 jrobin-x.jar /opt/tomcat/server/webapps/<application name>/WEB-INF/lib

Step 4: Edit the web.xml file in the WEB-INF directory of the war file deployed in Tomcat and add the following lines before the servlet descriptions, i.e. near the beginning of the web.xml file:

<filter>
    <filter-name>monitoring</filter-name>
    <filter-class>net.bull.javamelody.MonitoringFilter</filter-class>
</filter>
<filter-mapping>
    <filter-name>monitoring</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>
<listener>
    <listener-class>net.bull.javamelody.SessionListener</listener-class>
</listener>

Step 5: Restart the Tomcat server after editing web.xml and access the JavaMelody page using the following URL pattern (the URL can be customized in the configuration file):
http://<hostname on which tomcat is configured>:<port number on which the application is accessed>/<application name>/monitoring

Reports can be viewed in weekly, daily, or monthly formats. They can also be downloaded or sent over email in PDF format; the iText library for web apps and Java's Mail and Activation libraries are required on the server in order to use the mail session. The report provides the same information found on the monitoring web page, with both high-level and detailed views: CPU and memory usage, detailed SQL information, SQL statistics, server requests, system threads, caches, and data caches.

System Overhead
On the JavaMelody Wiki, https://github.com/javamelody/javamelody/wiki/Overhead one can find a healthy discussion about system overhead. The general consensus is that the overhead cost caused by JavaMelody is very low and that the feature is safe to enable full-time in a QA environment:
- JavaMelody records only statistics, not events, so the memory overhead is quite minimal.
- No I/O on the wire and minimal I/O on disk.
If no problems arise, enabling JavaMelody in the production environment can be considered as well. Using a tool like JavaMelody can lead to valuable insights on how to optimize servers or uncover otherwise hidden issues, providing value that exceeds the overhead cost.
For those of you who aren't aware - the newest version of the Eclipse Plugin for Extension Development was made available last week in the ThingWorx Marketplace here. Because of the infancy of the product, there is not yet an official process for supplying release notes along with the plugin. These are not official or all-encompassing, but cover the main items worked on for 7.0.

New features:
- Added Configuration Table wizard for code generation
- SDK Javadocs are now automatically linked to SDK resources on project creation
- When creating a service, trace logging statements are generated inside it (along with appropriate initializers)
- ThingWorx Source actions are now available from the right-click menu within a .java file

Bugs:
- Fixed problem where some BaseTypes were not uppercase in annotations when generating code
- Fixed error when creating and importing extension projects when the Eclipse install has a space in its file path
- Fixed inconsistent formatting in metadata.xml when adding new entities

We are hoping to have a more official release-note process for the next release. Feel free to reply with questions or concerns.
The New and Improved DGIS Guide to ThingWorx Development Written by Victoria Firewind of the IoT EDC   The classic Developing Great IoT Solutions guide has been reskinned and revamped for newer versions of ThingWorx! The same information on how to build a quality IoT application is now available for versions of ThingWorx 9.1+, and now, a complete sample application is included to demonstrate these ideas.    Find within the attached archive a PDF with high-level overview information on development and application design geared towards managers and business users, so that everyone can understand the necessary requirements, common terms, and key tips on how to ensure an application is scalable and maintainable right from the very start. Reduce your chances of running into issues between PoC and Go Live by reviewing this information today!   Also find within this PDF a series of tutorials which teach not just how to use the ThingWorx software, but which also educate on how to make good application design choices. A basic rules engine for sending real-time notifications is included here, as well as a complete demo application which illustrates each concept in a real-world use case. This Coffee Machine Demo App relies upon the tutorial entities, which can also now be imported directly using the other XML files provided here. This ensures that anyone can review these concepts, regardless of how much time one can commit or how much knowledge one already has on the subject.   This is a complex guide, and any issues, questions, or bugs found within can be reported right here on this thread. Happy developing from the IoT EDC!
Hello everyone,

If you're like me, you're always looking for the optimal or most efficient way to do something. Today, I'll share a quick trick and two tips to help you develop your awesome IoT solutions with ThingWorx.

#1. Trick: Finding Dependency References
We are targeting a new "Where Used" Composer feature in an upcoming release of the platform to help you find your references of bindings, properties, mashups, and services. In the meantime, did you know you can get some of that information yourself today with a quick service call?

As of ThingWorx 8.5, a new service is present on Project entities; the service crawls the contents of your project and highlights the full external dependency list to help you find references. On any Project entity, ListExternalDependencies() shows output like this in 9.0 (figure: ListExternalDependencies() output).

For each entity ("A") in the project, the service calls out any entities ("B") that it references, plus the referenced dependency's extension package if present. It will only find dependencies external to the project and will not currently list dependencies within the project. Notice also that the last column of the infotable output, "where used," even lists the type of reference (e.g. coded in JavaScript, Mashup Data, Resource, Property binding, etc.). Pretty handy! (Figure: code reference from "Where Used" service output.)

Click this link for additional help content that explains the service output and usage. Again, it only searches for entity references outside of your current project scope. Also, this service will stop crawling the dependency hierarchy when it finds items in a project, since its current purpose is packaging. Consider: you have Thing T1 in Project P1, which uses ThingTemplate TT2, which is not in a project. TT2, in turn, uses ThingShape TS3, which is also not in a project. Calling ListExternalDependencies() on Project P1 will find both TT2 and TS3. If, however, we then put TT2 in a Project P2 and call the List() service on Project P1, the scan will stop at TT2 and NOT identify TS3. The reason is that the service assumes that when you package P2, it will find the orphan TS3.

We know this doesn't cover all "where used" type use cases, so there is still a planned feature to really complete this concept on the platform. But even in the 8.5 or 9.0 releases, if you want to see entity references (inside and outside of its project) for a single Thing A, you can quickly assign Thing A to a new project and run ListExternalDependencies() to find all of its references, then assign Thing A back to its original project once you've found what you are looking for. Moving entities into projects just for searching is not something I would recommend doing often, but it can work in a pinch!

#2. Tip: JavaScript looping
When iterating through data from infotables, use a .forEach() loop! Comparing four code options and their average performance on the Rhino engine (figure: infotable looping performance), the .forEach() syntax is very clearly the most performant and, in my opinion, the cleanest to read. Try it out in your app! We plan to update our help documentation with more of these ThingWorx JavaScript best practices in 9.1. We also plan to provide some updates to our Code Snippets features in an upcoming Composer release so we can recommend these good practices right from the start.
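The four-way comparison itself was shown as a chart that is not reproduced here, but a hedged sketch of the recommended pattern looks like this (assuming an infotable result with a numeric value column, and that rows exposes toArray() as in the platform's Rhino engine):

// result: INFOTABLE with a numeric 'value' column (illustrative)
var total = 0;
result.rows.toArray().forEach(function (row) {
    total += row.value; // each row behaves like an object keyed by column name
});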
#3. Tip: Code optimizations
As with many performance bottlenecks, it is those pesky loops that can really amplify degradation. Here are two ThingWorx patterns for your consideration (a reconstruction of both appears at the end of this tip).

Wrong way: we set up the property names we are looking for and then loop through them to build a logger message. While creating each logger message, we make an API call to query all Things for a Thing named me.name and execute a service call GetMetadataAsJSON() on that Thing, which walks the hierarchy to build a JSON representation of itself. In this trivial example, we make the same two API calls for each item in the propertyNames list, even though the Thing reference and JSON definitions never change. Pretty expensive.

Correct way: declare not only the propertyNames but also the propertyDefinitions outside of the loop. This will significantly improve performance and reduce the number of API calls and round trips to the application server. Again, this is a trivial example, but it can pay off in larger and more complex code areas.
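A hedged reconstruction of the two patterns described above (the original code screenshots are not reproduced; the property names are illustrative):

// Wrong way: the Thing lookup and GetMetadataAsJSON() call are repeated
// on every iteration even though their result never changes.
var propertyNames = ["temperature", "pressure", "vibration"];
for (var i = 0; i < propertyNames.length; i++) {
    var definitions = Things[me.name].GetMetadataAsJSON().propertyDefinitions;
    logger.warn("Definition for " + propertyNames[i] + ": " + definitions[propertyNames[i]]);
}

// Correct way: declare the invariant lookups once, outside the loop.
var propertyNames = ["temperature", "pressure", "vibration"];
var definitions = Things[me.name].GetMetadataAsJSON().propertyDefinitions;
propertyNames.forEach(function (name) {
    logger.warn("Definition for " + name + ": " + definitions[name]);
});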
If you like these quick tips, check out more best practices here! Got a tip of your own? Have a question on how to tackle something? As always, just Ask Kaya!

Stay connected!
Kaya
PostgreSQL is a powerful, open source object-relational database system that provides unlimited database size. ThingWorx 6.5 introduces PostgreSQL as a persistence provider and supports High Availability. The main advantages of ThingWorx with PostgreSQL are:

1. Highly customizable
PostgreSQL includes a framework that allows developers to define and create their own custom data types, along with supporting functions and operators that define their behavior. Triggers and stored procedures can be written in C and loaded into the database as a library, allowing great flexibility in extending its capabilities.

2. Synchronous replication
PostgreSQL streaming replication is asynchronous by default. Synchronous replication offers the ability to confirm that all changes made by a transaction have been transferred to one synchronous standby server. This extends the standard level of durability offered by a transaction commit. The only way data can be lost is if both the primary and the standby suffer crashes at the same time.

3. Write-ahead logging for fault tolerance
The Write-Ahead Log (WAL) is the feature of PostgreSQL that allows it to recover data, usually up to the point where the server stopped. As you make changes to your data, PostgreSQL aggressively writes those changes to the WAL. PostgreSQL issues a checkpoint when a buffer limit is reached. When PostgreSQL restarts, it replays the changes from the WAL since the last checkpoint, bringing the database back to the state of the last completed commit. The master node sends a live stream of data changes to the slave nodes through the WAL, and the slaves apply this data to stay up to date.

4. Point-in-time recovery
Point-in-time recovery (PITR), also called incremental database backup, online backup, or archive backup, uses the history records stored in the WAL files to roll forward changes made since the last full database backup. With PITR, database backup downtime can be eliminated entirely, because backup and system access can happen at the same time: we back up the latest archive log files since the last backup instead of performing a full database backup every day.

ThingWorx streams data from the connected devices, and PostgreSQL handles it with greater scalability. In ThingWorx, PostgreSQL acts as a persistence provider that stores both run-time data and metadata about Things. Run-time data is the data that is persisted once the Things are composed and is used by connected devices to store their data. Streams and value streams ingest huge amounts of data; once the stored data approaches a limit of around 50 GB, Neo4j cannot sustain the same performance. For example, a single stream that has 50 properties gathering data from 10000 devices will quickly hit the memory limit with the Neo persistence provider. It is therefore strongly recommended to choose PostgreSQL to avoid performance issues at scale.

Overview of installing ThingWorx with PostgreSQL:
1. Install the latest version of Java and make sure the environment variables are configured.
2. Follow the instructions in Installing ThingWorx 6.5 to install Tomcat. Instructions/commands may vary for different Linux flavors.
3. Install PostgreSQL. For Linux/Unix environments, see the YUM installation guidelines.
4. Create ThingworxPostgresqlStorage and ThingworxPlatform folders in the root directory ( / ) and assign access permissions to the user.
5. Copy the modelproviderconfig.json file (from the ThingWorx download package) to the ThingworxPlatform folder.
6. Execute the ThingworxPostgresSchemaSetup and ThingworxPostgresDBSetup scripts (.bat for Windows and .sh for Unix/Linux environments); for further instructions, follow Getting Started with PostgreSQL in the ThingWorx Administrators Guide.
7. Restart Tomcat.
Hi all,

Here is the recording of the expert session hosted on September 3rd. For full-sized viewing, click on the YouTube link in the player controls.

Your feedback is very important to us! After watching the recording, please take two minutes to complete this survey.