IoT Tips

Recently I needed to parse and handle XML data natively inside a ThingWorx script, and this XML file happened to have a SOAP namespace as well. I learned a few things along the way that I couldn't find much documentation on, so I am sharing them here.

Lessons Learned

The biggest lesson I learned is that ThingWorx uses "E4X" XML handling. This is a language that Mozilla created as a way for JavaScript to handle XML (the full name is "ECMAScript for XML"). While Mozilla deprecated the language in 2014, Rhino, the JavaScript engine that ThingWorx uses on the server, still supports it, so ThingWorx does too. Here's a tutorial on E4X: https://developer.mozilla.org/en-US/docs/Archive/Web/E4X_tutorial. The built-in linter in ThingWorx will complain about E4X syntax, but it still works. I learned how to get to the data I wanted and loop through it to create an InfoTable. Hopefully this is what you want to do as well.

Selecting an Element and Iterating

My data came inside of a SOAP envelope, which was meaningless information to me; I wanted to get down a few layers. Here's a sample of my data, with made-up information in place of the customer's original data:

<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" headers="">
    <SOAP-ENV:Body>
        <get_part_schResponse xmlns="urn:schemas-iwaysoftware-com:iwse">
            <get_part_schResult>
                <get_part_schRow>
                    <PART_NO>123456</PART_NO>
                    <ORD_PROC_DIV_CD>E</ORD_PROC_DIV_CD>
                    <MFG_DIV_CD>E</MFG_DIV_CD>
                    <SCHED_DT>2020-01-01</SCHED_DT>
                </get_part_schRow>
                <get_part_schRow>
                    <PART_NO>789456</PART_NO>
                    <ORD_PROC_DIV_CD>E</ORD_PROC_DIV_CD>
                    <MFG_DIV_CD>E</MFG_DIV_CD>
                    <SCHED_DT>2020-01-01</SCHED_DT>
                </get_part_schRow>
            </get_part_schResult>
        </get_part_schResponse>
    </SOAP-ENV:Body>
</SOAP-ENV:Envelope>

To get to the schRow data, I need to get past SOAP and into a few layers of XML. To do that, I make a new variable and use E4X selections to get there:

var data = resultXML.*::Body.*::get_part_schResponse.*::get_part_schResult.*;

Note a few things:

- resultXML is a variable in the service that contains the XML data.
- I skipped the Envelope tag since that's the root.
- The .* syntax does not mean "all the following"; it means "all namespaces". You can define and specify the namespaces instead of using .*, but I didn't find value in that. I found some sample code that theoretically should work on a VMware forum: https://communities.vmware.com/thread/592000.

This gives me schRow as an XML List that I can iterate through. You can see what you have at this point by converting the data to a String and outputting it:

var debugOutput = String(data);

Now that I am down to the schRow data, I can use a for each loop to add rows to an InfoTable (result is an InfoTable built from a matching Data Shape; see the P.S. at the end of this tip for a sketch of creating it):

for each (var row in data) {
    result.AddRow({
        PartNumber: row.*::PART_NO,
        OrderProcessingDivCD: row.*::ORD_PROC_DIV_CD,
        ManufacturingDivCD: row.*::MFG_DIV_CD,
        ScheduledDate: row.*::SCHED_DT
    });
}

Whew! That's it! Data into an InfoTable! Next time, I'll ask for a JSON API. 😊
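P.S. For completeness, here is a minimal sketch of how the result InfoTable could be created before the loop, assuming a Data Shape with the four fields used above (the Data Shape name "PartScheduleShape" is illustrative):

// Sketch: create the result InfoTable from a matching Data Shape
// "PartScheduleShape" is an illustrative name; it would define the fields
// PartNumber, OrderProcessingDivCD, ManufacturingDivCD and ScheduledDate
var result = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape({
    infoTableName: "InfoTable",
    dataShapeName: "PartScheduleShape"
});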
This might be a well-known topic for some, but I recently had a need that Event Routers fit perfectly, and I wanted to share. If you have some neat applications for Event Routers on mashups, feel free to reply!

What?

Event Routers are a function on mashups that lets you connect multiple inputs to a single output. For my use case, this was extremely helpful in letting two different service outputs go to the same widget. They are a really simple tool that can save a lot of headache.

How?

Event Routers work by funneling the latest data through to a single output. This is particularly useful for user-activated actions with the output tied to a widget or another service. The Event Router activates automatically when any one of its inputs changes.

Example

I have two services that generate HTML from different sources, but I want to display just the latest one the user activated, in a single HTML Text Area widget. The two services are activated with two different buttons. But how do I show these two outputs in a single widget? Create an Event Router with two HTML inputs!

Now I just tie each service output to the Inputs and tie the Output to the HTML Text Area's Text property (note: the icon for Input2 is incorrect; it should be HTML as well. This system is running 8.5.1, so perhaps it's an issue in that release).

Now when the user clicks either button, the correct service's HTML is sent to the HTML Text Area. Ta-da!

P.S. I noticed in some older posts that Event Routers used to be a widget or extension that came and went. Now (8.5+) it is baked into the Functions tab on the far right side of Mashup Builder.
While I was writing my previous post about purging ValueStream entries from deleted Things, it occurred to me that a similar issue would affect those using InfluxDB as the persistence provider for their ValueStreams.

I wanted to take the opportunity, while the logic was still fresh in my mind, to extend the utility to do the same thing with InfluxDB. I also used the opportunity to create a dynamic InfoTable, as it had been a while since I did that. It also shows how to interact directly with InfluxDB through its API.

The modification is based on a new Thing called influxDBConnector, which has the following services:

- dropMeasurements: deletes all the DB entries related to a Thing;
- getAllEntries: queries all entries related to a Thing, with a string output;
- getAllEntriesTable: queries all entries related to a Thing, with an InfoTable output;
- getMissingThings: queries the ThingWorx Things table and the InfluxDB measurements, filtering the result to only the measurements that are not in the Things table;
- showMeasurements: gets all the measurements in InfluxDB.

It also has the following properties:

- database: the Influx database name used to persist the value stream data;
- InfluxLink: hostname:port of the InfluxDB server.

Like the previous example, I created a sample mashup (purgeInfluxDBStreamsMashup) to help execute the services. Obviously this utility expects an existing Influx persistence provider.

Once more: these services affect the DB directly and need to be used carefully and only by Administrators. Make sure you fully test them before using them in production.

The attached XML contains the complete utility for PostgreSQL and InfluxDB.

Hope it helps,
Ewerton
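P.S. As an illustration of the direct API interaction, a dropMeasurements-style call could be issued from a service script along these lines. This is a minimal sketch assuming the InfluxDB 1.x /query endpoint and the database/InfluxLink properties described above; the measurement name is illustrative:

// Minimal sketch: drop the InfluxDB measurement belonging to a deleted Thing
// Assumes InfluxDB 1.x HTTP API, me.InfluxLink = "hostname:port", me.database set
var query = 'DROP MEASUREMENT "MyDeletedThing"'; // illustrative measurement name
var response = Resources["ContentLoaderFunctions"].PostText({
    url: "http://" + me.InfluxLink + "/query?db=" + me.database + "&q=" + encodeURIComponent(query),
    content: "" // data-modifying queries must be sent via POST in InfluxDB 1.x
});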
When using Value Streams to log historical data, there's a service to purge the ValueStream entries from the Thing itself. But what do you do when a Thing that once logged values into a ValueStream has been deleted? Currently, there's no OOTB way to delete these entries if they're not being used anymore. I was recently asked this question and wanted to share the answer with the entire community. I created a utility application that queries the ThingWorx DB directly for Things that are present in the ValueStream but don't exist anymore, and allows a user to purge them.

These services assume PostgreSQL as the persistence provider for the ValueStreams. The services can be modified if you're using SQL Server. They do not apply to InfluxDB persistence providers.

The twxDBConnector Thing is based on the PostgreSqlServer template, which is part of the Relational Databases extension. It has 4 main services:

- getEntriesToPurge: queries the ThingWorx DB for all the entries related to a Thing. It does not consider the ValueStream id, so it will purge all the entries across all value streams. Requires a Thing name as an input;
- getMissingThings: queries Things that are present in the ValueStream DB table but not in the Things table, meaning that they were deleted;
- purgeThingEntries: purges the entries related to a Thing. It does not consider the ValueStream id, so it will purge all the entries across all value streams;
- purgeAllEntries: purges all the entries related to Things that were deleted.

The queries can be modified to allow the selection of the value stream to be cleaned. I also added a sample mashup that leverages the services.

The twxDBConnector has a configuration table that requires the DB connection string, user and password.

You can also do it directly from the DB using PGAdmin and purge it all:

DELETE FROM value_stream
WHERE value_stream.entry_id IN
    -- Queries all entries in the value_stream table that belong to a nonexistent Thing
    (SELECT entry_id
     FROM value_stream
     LEFT JOIN thing_model ON value_stream.source_id = thing_model.name
     WHERE thing_model.name IS NULL)

Attention: these services change the ThingWorx DB directly, so use them carefully.

To use it:

- Import the PostgreSQLServer extension (you might need to change the JAR in the extension depending on the ThingWorx version you're using);
- Import the entities from the purgeVSEntries.xml.

Thanks @dsantos for the help on optimizing the queries.

Hope it helps.
Ewerton
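P.S. For reference, the per-Thing variant behind a service like purgeThingEntries might look like the sketch below, assuming ThingWorx SQL-service parameter substitution via [[...]] and the same value_stream table (the parameter name is illustrative):

-- Purge all value stream entries for a single Thing
-- thingName is an illustrative SQL-service string parameter
DELETE FROM value_stream
WHERE value_stream.source_id = [[thingName]]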
Hi all,

ThingWorx contains lots of useful functionality for your services (the last count is 339 snippets in ThingWorx 8.5.2). These snippets are an important part of the platform's application-building capabilities, and most of them are simple enough to understand based on their name and the description that appears when hovering over them.

I have noticed, however, that in some cases platform users are not aware of their full capabilities. With this in mind, some time ago I started creating a Snippet Guide for my personal use, which I'm now sharing with the community. It contains additional explanations, documentation links and sample source code tested by me.

Please bear in mind that it was written for an earlier ThingWorx version and I did not have enough time to update it for 8.5.x, but it should work the same there as well.

This enhanced documentation is not supported by PTC, so please: 1. do not open a Tech Support ticket based on the content of this document and, instead, 2. comment on this thread if there are things I can improve.

Happy New Year!
In this post I show how to use Federation in ThingWorx to execute services on a different ThingWorx platform instance. In the use case below I set up one ThingWorx instance in the factory and another instance in the cloud, whereby the latter executes a service that actually runs on the former.

Please find the document attached.

HTH,
Alessio Marchetti
Users of ThingWorx Analytics (TWA) may choose to create a predictive model using TWA or import a predictive model that was created using other software. When importing into or exporting out of TWA, this predictive model must be in PMML (Predictive Model Markup Language) version 4.3+ format. This post describes how to complete the import and export processes.

Exporting

The user may create a model in two main ways inside of TWA: using the Builder user interface, or by using the 'Create Job' service that exists on the Training Thing. Whichever method is used, a model Job ID is created automatically by TWA for that model. It is this model Job ID that is used to identify the model inside of TWA, regardless of what is being done with that model.

If a model is trained using Builder, the user may highlight that model, click 'Job Details', and then copy the Job ID.

Next, the user will navigate to Browse --> Things --> …TrainingThing. This is the Training Microservice inside of TWA where all the functionality involved with training a model exists. Within the …TrainingThing, the user will execute the 'RetrieveModel' service under Services. When executing the service, the user will paste the model Job ID (e.g. 49704f1a-7fcd-4e38-ab53-84ef46517d0a) copied earlier, and press 'Execute'. The resulting text can then be highlighted, copied to Notepad or some other text editor, and saved in .pmml format (e.g. 'ModelExport.pmml').

Importing Through the Results Microservice

To import a model that has been saved in PMML 4.3+ format into TWA using the Results Microservice, the user will navigate to Manage --> Repositories (e.g. AnalyticsUploadStorage) --> Actions --> Upload, and choose the PMML file. The user will then navigate to Browse --> Things --> …ResultsThing. This is the Results Microservice inside of TWA where all the functionality related to previously trained models exists. Within the …ResultsThing, the user will execute the 'UploadModel' service under Services. Alternatively, the user can upload the model from any repository using the 'UploadModelFromRepository' service.

To create a model from the uploaded PMML inside of TWA, the user will fill out the filePath and name, then execute the service. Note: this model will not show up in Builder, as that would require model validation information that is not part of the imported PMML file.

The resulting Job ID can be used to make predictions, such as by using the …PredictionThing's BatchScore or RealtimeScore services. At this point, the uploaded model acts the same way as if it had been created inside of that TWA environment.

Importing Through Analytics Manager

To import a model that has been saved in PMML 4.3+ format into TWA using the Analytics Manager, the user will navigate to Analytics --> Analytics Manager --> Analysis Models, and click the green "New" button. Next the user will choose the provider name (or create a new one by navigating to Analytics --> Analytics Manager --> Analysis Providers). The user will also check the box to "Upload Model", and click the grey "Choose File" button to find the PMML file. Finally, the user will click the black "Upload" button, then the green "Save" button.

At this point, the model is uploaded into ThingWorx Analytics, and the user may progress through the subsequent steps to set up "Analysis Events" and "Analysis Jobs" that will be powered by the imported model.
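P.S. If you prefer to script the export instead of copying text by hand, the same service could be called along the lines of the sketch below. The Thing name and the input parameter name are assumptions; check the actual inputs on your …TrainingThing:

// Sketch: retrieve a trained model's PMML via the Training Microservice
// The Thing name and the "jobId" parameter name are assumptions; verify on your instance
var pmml = Things["AnalyticsServer_TrainingThing"].RetrieveModel({
    jobId: "49704f1a-7fcd-4e38-ab53-84ef46517d0a" // model Job ID copied from Builder
});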
OpenDJ is a directory server which is also the base for WindchillDS. It can be used for centralized user management, and ThingWorx can be configured to log in with users from this directory service.

Before we start

Prerequisites:

- Docker on Ubuntu
- A JKS keystore with a valid certificate
- The JKS keystore is stored in /docker/certificates on the machine that runs the Docker environments
- The certificate is generated with a Subject Alternative Name (SAN) extension for the hostname, fully qualified hostname and IP address of the opendj (Docker) server
- Change the blue phrases to the correct passwords, machine names, etc. when following the instructions
- If possible, use a more secure password than "Password123456"... the one I use is really bad

Related Links

- https://hub.docker.com/r/openidentityplatform/opendj/
- https://backstage.forgerock.com/docs/opendj/2.6/admin-guide/#chap-change-certs
- https://backstage.forgerock.com/knowledge/kb/article/a43576900

Configuration

Generate the PKCS12 certificate. Assume this is our working directory on the Docker machine (with the JKS certificate in it):

cd /docker/certificates

Create a .pin file containing the keystore password:

echo "Password123456" > keystore.pin

Convert the existing JKS keystore into a new PKCS12 keystore:

keytool -importkeystore -srcalias muc-twx-docker -destalias server-cert -srckeystore muc-twx-docker.jks -srcstoretype JKS -srcstorepass `cat keystore.pin` -destkeystore keystore -deststoretype PKCS12 -deststorepass `cat keystore.pin` -destkeypass `cat keystore.pin`

Export the keystore and import it into the truststore:

keytool -export -alias server-cert -keystore keystore -storepass `cat keystore.pin` -file server-cert.crt
keytool -import -alias server-cert -keystore truststore -storepass `cat keystore.pin` -file server-cert.crt

Docker Image & Container

Download and run:

sudo docker pull openidentityplatform/opendj
sudo docker run -d --name opendj --restart=always -p 389:1389 -p 636:1636 -p 4444:4444 -e BASE_DN=o=opendj -e ROOT_USER_DN=cn=Manager -e ROOT_PASSWORD=Password123456 -e SECRET_VOLUME=/var/secrets/opendj -v /docker/certificates:/var/secrets/opendj:ro openidentityplatform/opendj

After building the container, it MUST be restarted immediately in order for the new certificates to be recognized:

sudo docker restart opendj

Verify that the certificate is the correct one; execute this on the machine that runs the Docker environments:

openssl s_client -connect localhost:636 -showcerts

Load the .ldif

Use e.g. JXplorer and connect. Select the opendj node, then LDIF > Import File (my demo breakingbad.ldif is attached to this post). Skip any warnings and messages and continue to import the file.

ThingWorx Tomcat

If ThingWorx runs in Docker as well, a pre-defined keystore could be copied during image creation. Otherwise connect to the container via the command line:

sudo docker exec -it <ThingworxImageName> /bin/sh

Tomcat configuration:

cd /usr/local/openjdk-8/jre/lib/security
openssl s_client -connect 10.164.132.9:636 -showcerts

Copy the certificate between BEGIN CERTIFICATE and END CERTIFICATE of the above output into opendj.pem, e.g.

echo "<cert_goes_here>" > opendj.pem

Import the certificate:

keytool -keystore cacerts -import -alias opendj -file opendj.pem -storepass changeit

ThingWorx Composer

As the IP address is used (the hostname is not mapped in the Docker container), the certificate must have a SAN containing the IP address.

This only works with the TWLDAPExample Directory Service, not with ADDS1, because ADDS1 uses hard-coded Active Directory queries and structures and therefore does not work with OpenDJ. The user ID (cn) must be pre-created in ThingWorx so the user can log in; there is no automatic user creation by the Directory Service. Make sure the Thing is Enabled under General Information.

Appendix

LDAP structure for breakingbad.ldif: cn=Manager / Password123456. All users have the password Password123456.
In case it's useful for anyone, I successfully used the ThingWorx Importer to import an entity using version 8.4.1. The import command was in a cURL script (which can also be run using e.g. Cygwin on Windows) and the entity data was contained in an XML file. The XML file is attached to this post, and the cURL script is copied at the bottom of the post body.

Notes on using the script:

- Copy the cURL code into import.sh
- You might need to change the line endings to Unix, e.g. via the Notepad++ menu option Edit > EOL Conversion > Unix
- Give permissions to run import.sh (chmod +x import.sh)
- ./import.sh

#!/bin/bash
APPKEY="2e6704c0-XXXX-XXXX-XXXX-a1589e387d1a"
TWX_HTTP_PORT="8018"
FILE_PATH_TWX="Things_testImport.xml"
PROTOCOL="http://"
IP="localhost"
URL="$PROTOCOL""$IP"":""$TWX_HTTP_PORT""/Thingworx/Importer?purpose=import"

curl -X POST -H 'appKey: '$APPKEY \
-H 'Content-Type: multipart/form-data' \
-H 'Accept: application/json' \
-H 'x-thingworx-session: true' \
-H 'X-XSRF-TOKEN: TWX-XSRF-TOKEN-VALUE' \
-F 'upload=@'$FILE_PATH_TWX \
$URL

The ThingWorx Importer is also explained in this support article.
Please find here a LabVIEW implementation for connecting to ThingWorx via REST calls. Have fun using it; any feedback is appreciated. https://github.com/Seppel1985/LabVIEW_TWX_RestAPI
When predicting a Boolean goal, such as failure in the next hour or any other goal that has a yes or no answer, ThingWorx Analytics (TWXA) models will output a 'risk' of the event occurring. TWXA will intelligently pick a threshold beyond which that risk warrants attention.

1. In Analytics Builder, click on the export button.
2. This will export a PMML model and download it for you.
3. Open up the PMML model; in the output section, you will find a condition that explains the threshold that was selected by TWX Analytics.

In this example case, TWXA chose 0.5 as the best threshold.

Note: The export button is only available in Builder for TWXA 8.4+.
Contents:

- Introduction
- Prerequisites
- Installing Java
- Installing PostgreSQL
- Running the Installer
- Post Installation Steps
- Troubleshooting Tips

Introduction:

Starting with ThingWorx 8.4, PTC released a new way to install a fresh ThingWorx environment. This installer takes care of all the permissions, database scripts, credential encryption, and Tomcat options that previously needed to be done manually. More information on the installer can be found in the ThingWorx Help Center.

NOTE: This is different from the Docker installer available in earlier releases.

As of right now, the installation guide has very basic instructions for the installer. The purpose of this post is to show you from start to finish what the process looks like. For this example, I chose to deploy PostgreSQL 10 on the local system to keep things simple.

Prerequisites:

- Download the latest Java 8 SE JDK RPM for RHEL
- Get your database ready: if you're accessing a remote PostgreSQL instance, make sure PSQL is installed and working on your ThingWorx server
- Download the appropriate installer from support.ptc.com
- Ensure the RHEL user that will be executing the installer has sudo privileges

NOTE: There are pieces of the manual installation guide that I had to reference in order to get Java and PostgreSQL properly configured.

Installing Java:

Per page 83, I downloaded the latest Linux x64 RPM for Java 8 SE JDK (201) and followed steps 2-8 to configure Java. For step 5, I needed to use the -f parameter listed in the guide under NOTE. For step 7, make sure you don't accidentally select OpenJDK if it was preinstalled.

Installing PostgreSQL:

I'm following along with the version 10 download instructions found on https://www.postgresql.org/download/linux/redhat/

NOTE: this needs root access, so run all the commands with sudo.

- Install the client packages postgresql10
- Install the optional server packages postgresql10-server (I did, since this is a local PostgreSQL instance)
- Complete step 7 to enable automatic start

We need to set the postgres password so our ThingWorx installer is able to create our thingworx user and the database. This can be done with the following command (NOTE: since this is the master user for your database, it is highly recommended to use a password that has a combination of case, numbers, letters, and symbols):

sudo passwd postgres

Although this may be redundant, I also run the following command to update the password used in PostgreSQL:

sudo -u postgres psql -c "ALTER ROLE postgres WITH password '<password from above>'"

Navigate to /var/lib/pgsql/10/data and open pg_hba.conf for editing. Review page 91 of the installation guide to determine which setting best applies to your business needs. In the same directory, open postgresql.conf, scroll down to the "listen_addresses" line and un-comment it. This is the place to make changes if you expect remote connections to access the database; if it is local, the default of localhost is fine.

Restart PostgreSQL to apply these changes:

sudo service postgresql-10 restart

Running the Installer:

Everything should be in place now to run our installer. Extract the ThingWorxFoundationPostgres-1.2.0-SNAPSHOT.run file to the ~ (home) directory and execute the .run file.

NOTE: If it doesn't let you execute the file, it may not have extracted as an executable.
Run the command below to make it executable, then try again:

chmod +x ThingWorxFoundationPostgres-1.2.0-SNAPSHOT.run
sudo ./ThingWorxFoundationPostgres-1.2.0-SNAPSHOT.run

At this point you'll be going through text prompts to set up your installation settings. I'll briefly list them in order below:

- Terms and conditions, and whether you agree
- Where you want ThingWorx deployed (/opt by default). NOTE: this folder will contain ThingworxStorage/, ThingworxPlatform/, tomcat/, etc.
- Installation configuration user (twxfoundation by default). This step creates a user in RHEL that will have ownership of Tomcat, various ThingWorx directories, etc.
- ThingWorx Administrator password, used to log in to ThingWorx Composer. WRITE THIS DOWN SOMEWHERE! You cannot retrieve this password, and forgetting it will most likely require a fresh installation.
- Tomcat HTTP port (8080)
- Tomcat SSL port (8443)
- Use SSL: for simplicity, I chose not to use it for this exercise
- PostgreSQL information:
  - Host name: mine is local, so localhost
  - Port (5432)
  - Administrator username: use postgres here, since that's the DB user whose password we updated above
  - Admin password: use the postgres password
  - ThingWorx database login username (twadmin). This user will be created in PostgreSQL and tied to our ThingWorx database.
  - ThingWorx database login password. NOTE: there's no place to re-enter your password, so make sure you write this down.

Unexpected issue:

For this particular install, I kept running into a failure saying "Warning: Failed to validate the PostgreSQL connection. Check the information you entered". I opened another PuTTY connection and, as root, navigated to /var/lib/pgsql/10/data/log and opened the PostgreSQL log to find the following:

2019-02-28 17:10:30.678 UTC [93377] LOG:  could not connect to Ident server at address "::1", port 113: Connection refused
2019-02-28 17:10:30.678 UTC [93377] FATAL:  Ident authentication failed for user "postgres"
2019-02-28 17:10:30.678 UTC [93377] DETAIL:  Connection matched pg_hba.conf line 84: "host    all             all             ::1/128                 ident"

The solution for me was to go into pg_hba.conf and change the IPv6 local connections from ident to md5. Again, make sure you are reading through the PostgreSQL documentation and adjusting these properties in a way that meets both your security and business needs.

Once the change was made, I restarted PostgreSQL and switched back over to my PuTTY instance that had the installer going.

A summary pops up for a few items, and then it asks if you're ready to continue.

NOTE: The progress bar goes to 100% pretty quickly and doesn't appear to move.
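For reference, after that fix, the changed pg_hba.conf line (line 84 from the log above) would read:

host    all             all             ::1/128                 md5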
Just let it sit for a few minutes while it finishes up.

- Copy the ThingWorx Device ID for future reference.
- To check if ThingWorx is running, run 'sudo service Thingworx-Foundation status' on the command line.
- If it is active (running), try to access it with a remote browser: http://<thingworxurl>:<tomcatport>/Thingworx
- NOTE: If it just hangs, check your firewall to make sure the port is open for external communication. More information around the firewalld command can be found here.

Post Installation Steps:

Licensing:

- Navigate to /opt/ThingWorxPostgres-1.2.0-SNAPSHOT/licensingconfigurator and run twx-licensing-configurator.run as sudo.
- Choose whether you want PTC to store your credentials and download the license for you, or whether you want to manually download the license yourself from http://support.ptc.com -> Manage Licenses (bottom right).
- For this example, I manually downloaded the license and moved the license file over to the ThingWorx server. Since you're running the licensingconfigurator as sudo, don't put this file into your user's home directory; instead, put it into /tmp.
- NOTE: Change the downloaded filename to license_capability_response.bin. Otherwise the file will not be recognized.
- It will then ask for your ThingWorx Administrator password. This appears to be used for verification after the license is in place, to check that it can successfully log into your system.
- Once it has completed, and assuming it says "Setup has finished configuring licensing for ThingWorx", open up a web browser, log in as Administrator, go to Monitor -> Subsystems -> Licensing Subsystem, and verify that your licensing information looks correct.

Extensions:

Extra security has been added as of 8.4 around importing extensions. More details can be found in the Help Center. In short, adding extensions is disabled by default, and you need to add some lines to your /ThingworxPlatform/platform-settings.json under the "PlatformSettingsConfig" section.
For example, here is what I added:

"PlatformSettingsConfig": {
    "BasicSettings": {
        "BackupStorage": "/opt/ThingWorxPostgres-1.2.0-SNAPSHOT/ThingworxBackupStorage",
        "DatabaseLogRetentionPolicy": 7,
        "EnableBackup": true,
        "EnableHA": false,
        "EnableSystemLogging": true,
        "HTTPRequestHeaderMaxLength": 2000,
        "HTTPRequestParameterMaxLength": 2000,
        "InternalAesCryptographicKeyLength": 128,
        "Storage": "/opt/ThingWorxPostgres-1.2.0-SNAPSHOT/ThingworxStorage"
    },
    "ExtensionPackageImportPolicy": {
        "importEnabled": true,
        "allowJarResources": true,
        "allowJavascriptResources": false,
        "allowCSSResources": false,
        "allowJSONResources": false,
        "allowWebAppResources": false,
        "allowEntities": true,
        "allowExtensibleEntities": false
    }
}

Make sure you set the appropriate items above to true based on what your extensions require.

Troubleshooting:

If things backfire, depending on where you are in the setup process, the following logs should be looked at for clues on the failure:

Installation:
- /tmp/bitrock_installer.logs
- I believe the installation directory (default /opt/ThingWorxPostgres-1.2.0-SNAPSHOT) will contain a log file if the installer fails
- /opt/ThingWorxPostgres-1.2.0-SNAPSHOT/ThingworxStorage/logs/ (needs root access)
- /opt/ThingWorxPostgres-1.2.0-SNAPSHOT/tomcat/apache-tomcat-<version>/logs

PostgreSQL (requires root):
- /var/lib/pgsql/10/data/log

LicensingConfigurator:
- /opt/ThingWorxPostgres-1.2.0-SNAPSHOT/licensingconfigurator
Attached (as PDF) are some steps to quickly get started with the ThingWorx MQTT Extension so that you can subscribe to and publish topics.
Here is a tutorial explaining the process of uploading a PMML file from an external system to ThingWorx Analytics. The tutorial steps are explained in the attached PDF, and all referenced files can be found in the attached ZIP.
In this post, I will use an instance of InfluxDB and Chronograf. See this post for installing both using Docker.

InfluxDB - Time Series Databases

InfluxDB is a time series database. It allows users to work with and organize time series data. The advantage of such a database system is that it comes with built-in functionality to easily aggregate and operate on data based on time intervals. Other types of databases can do this as well, but time series databases are heavily optimized for this kind of data structure, which shows in storage space and performance.

Data is stored in the database with its timestamp, its value and one or more tags:

Time                  Temperature  Humidity  Location
2019-01-24T00:00:00   23           42        Home
2019-01-24T00:01:00   22           43        Home
2019-01-24T00:02:00   21           44        Home
2019-01-24T00:03:00   23           45        Home
2019-01-24T00:04:00   24           42        Home
2019-01-24T00:05:00   25           43        Home
2019-01-24T00:06:00   23           44        Home

Values can be aggregated by intervals, e.g. "give me the temperature values within the last hour and take the average for 5 minutes". This would result in (60 / 5) = 12 results, each with a value representing the average temperature within its 5 minute interval.

Example: temperature data averaged by 4 minutes

Time                  Temperature
2019-01-24T00:00:00   (23 + 22 + 21 + 23) / 4 = 22.25
2019-01-24T00:04:00   (24 + 25 + 23) / 3 = 24

To find out more about InfluxDB, see also https://www.influxdata.com/time-series-database/ and https://www.influxdata.com/time-series-platform/

InfluxDB in ThingWorx

The new ThingWorx 8.4 release comes with an option to set up InfluxDB as an additional persistence provider. Metadata like entity definitions will still be stored in PostgreSQL; Streams, Value Streams and Data Tables, however, can be stored in InfluxDB.

The InfluxDB persistence provider setup is delivered with the PostgreSQL installation package for ThingWorx. Currently ThingWorx does not allow any aggregation of data with its built-in InfluxDB capabilities.

Prepare InfluxDB

InfluxDB will need a user and a database. Connect via Chronograf (the graphical UI to administer InfluxDB) and create a new user via

InfluxDB Admin > Users
- Default username = twadmin
- Default password = password
- Permissions = ALL

Create a new database via

InfluxDB Admin > Databases
- Default database name = thingworx

Configure ThingWorx

Create a new Persistence Provider for InfluxDB in ThingWorx, but don't mark it as Active yet!

Switch to the Configuration tab and change the username, password, database and hostname to match your installation.

Save the configuration, switch back to the General tab and mark the InfluxDB Persistence Provider as Active. Save again and a "successful" message will be shown. If the save action fails, the connection settings are not correct; check for the correct ports and for any typos.

Creating Entities & Testing

Streams, Value Streams and Data Tables can now be created using the new InfluxDB Persistence Provider.

To test with a Value Stream:

- Create a new Thing with some NUMBER properties, e.g. 'a', 'b' and 'c', and ensure they are marked as logged as well. Name = InfluxValueStreamThing
- Create a new Value Stream and change its Persistence Provider to the InfluxDB provider created above. Name = InfluxValueStream
- Save both entities.

Setting values for the properties will now automatically create the entries in InfluxDB, including the entity name "InfluxValueStreamThing". Running the QueryPropertyHistory service on the Thing will return the results as an InfoTable. The same entries can also be inspected in Chronograf.

ThingWorx 8.4 will be released end of January 2019. Be sure to check out and test the new Persistence Provider features!
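A quick way to check the data from the ThingWorx side is to read the history back in a service. A minimal sketch, assuming the Thing name used above and the standard QueryPropertyHistory inputs:

// Sketch: read logged history back from the InfluxDB-backed Value Stream
var history = Things["InfluxValueStreamThing"].QueryPropertyHistory({
    maxItems: 100,      // limit the number of returned rows
    oldestFirst: false  // newest entries first
});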
Installing an Open Source Time Series Platform

For testing InfluxDB and its graphical user interface, Chronograf, I'm using Docker images for easy deployment. For this post I assume you have worked with Docker before.

In this setup, InfluxDB and Chronograf will share an internal Docker network to exchange data. InfluxDB can be accessed, e.g. by ThingWorx, via its exposed port 8086. Chronograf can be accessed for administrative purposes via its port 8888. The following commands can be used to create an InfluxDB environment.

Pull the images:

sudo docker pull influxdb:latest
sudo docker pull chronograf:latest

Create a virtual network:

sudo docker network create influxdb

Start the containers:

sudo docker run -d --name=influxdb -p 8086:8086 --net=influxdb --restart=always influxdb
sudo docker run -d --name=chronograf -p 8888:8888 --net=influxdb --restart=always chronograf --influxdb-url=http://influxdb:8086

InfluxDB should now be reachable and will also restart automatically when Docker (or the operating system) is restarted.
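To verify that the InfluxDB container is actually up, the InfluxDB 1.x HTTP API's ping endpoint can be used; a 204 No Content response indicates a healthy instance:

curl -i http://localhost:8086/ping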
This post covers how to build and operationalize a time series model using ThingWorx Analytics. A lookback window is used to read multiple previous rows before the current one, and the prediction is based on those lookback rows. In this example we use time series data to predict water flow for different water pumps in a system. There is a full explanation of the method attached, and all necessary resources are included in the attached files.
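To illustrate the lookback idea itself (independent of TWA internals), here is a minimal sketch that turns a series of flow readings into training rows, where each row carries the previous N values as features; the window size and field names are illustrative:

// Sketch: build lookback rows from a value series (illustrative, not TWA's internal format)
var readings = [10, 12, 11, 14, 15, 13]; // e.g. water flow values, oldest first
var lookback = 3;                        // window size: number of previous rows used as features
var rows = [];
for (var i = lookback; i < readings.length; i++) {
    rows.push({
        features: readings.slice(i - lookback, i), // the N previous values
        goal: readings[i]                          // the value to predict
    });
}
// rows[0] = { features: [10, 12, 11], goal: 14 }, and so on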
This post adds to my previous post: Deploying H2 Docker versions quickly.

In addition to configuring the basic Docker images and containers, it's also possible to deploy them with a TLS / SSL certificate and access the instances via the HTTPS protocol.

For this, a valid certificate is required inside a .jks keystore. I'm using a self-signed certificate, but commercial ones are even better! The certificate must be in the name of the machine which runs Docker and which is accessed by the users via browser. In my case this is "mne-docker". The password for the keystore and the private key must be the same; this is a Tomcat limitation. In my case it's super secret and "Password123456".

I have the following directory structure on my operating system:

/home/ts/docker/
    certificates
        mne-docker.jks
    twx.8.2.x.h2
        Dockerfile
        settings
            platform-settings.json
            <license_file>
        storage
        Thingworx.war

The Recipe File

In the recipe file I make sure to create a new Connector on port 8443, removing the old one on port 8080. I do this by replacing it via the sed command, also introducing options for content compression. I only replace the first line of the XML node, as it holds all the information I need to change.

Changes to the original version I posted are in green.

FROM tomcat:latest
MAINTAINER mneumann@ptc.com
LABEL version = "8.2.0"
LABEL database = "H2"
RUN mkdir -p /cert
RUN mkdir -p /ThingworxPlatform
RUN mkdir -p /ThingworxStorage
RUN mkdir -p /ThingworxBackupStorage
ENV LANG=C.UTF-8
ENV JAVA_OPTS="-server -d64 -Djava.awt.headless=true -Djava.net.preferIPv4Stack=true -Dfile.encoding=UTF-8 -Duser.timezone=GMT -XX:+UseNUMA -XX:+UseG1GC -Djava.library.path=/usr/local/tomcat/webapps/Thingworx/WEB-INF/extensions"
RUN sed -i 's/<Connector port="8080" protocol="HTTP\/1.1"/<Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol" maxThreads="150" SSLEnabled="true" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" enableLookups="false" keystoreFile="\/cert\/mne-docker.jks" keystorePass="Password123456" ciphers="TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA256, TLS_DHE_RSA_WITH_AES_128_GCM_SHA256, TLS_DHE_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_RSA_WITH_AES_256_CBC_SHA256, TLS_DHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA" compression="on" compressableMimeType="text\/html,text\/xml,text\/plain,text\/css,text\/javascript,application\/javascript,application\/json"/g' /usr/local/tomcat/conf/server.xml
COPY Thingworx.war /usr/local/tomcat/webapps
VOLUME ["/ThingworxPlatform", "/ThingworxStorage", "/cert"]
EXPOSE 8443

Note that I also map the /cert directory to the outside, so all of my containers can access the same certificate. I will access it read-only.

Deploying

sudo docker build -t twx.8.2.x.h2 .
sudo docker run -d --name=twx.8.2.x.h2 -p 88:8443 -v /home/ts/docker/twx.8.2.x.h2/storage:/ThingworxStorage -v /home/ts/docker/twx.8.2.x.h2/settings:/ThingworxPlatform -v /home/ts/docker/certificates:/cert:ro twx.8.2.x.h2

Mapping to the 8443 port ensures that only HTTPS connections are allowed. The :ro in the directory mapping ensures read-only access.

What next

Go ahead! Only secure stuff is kind of secure 😉 For more information on how to import the certificate into the Windows Certificate Manager so browsers recognize it, see also the "Trusting the Root CA" chapter in Trust & Encryption - Hands On.
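To confirm the container actually serves the certificate, the same kind of openssl check can be pointed at the mapped port (hostname and port here match my setup and may differ in yours):

openssl s_client -connect mne-docker:88 -showcerts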
ThingWorx offers Docker based installations utilizing existing PostgreSQL databases. In newer releases the ThingWorx Docker installers also offer using other databases.

Personally I'm using a certain method of deployment where I can easily exchange some files, create new images and have an H2 based environment running for some quick tests.

As H2 is a built-in database, I will not dive into setting up the platform-settings.json for other connectivity. However, other databases can be connected to by adjusting the platform-settings.json. This might also require an internal Docker network structure, which I will not elaborate on here.

Note: the following procedure is not fully supported, as it's not using the deployment methods provided by the installers!

Create the Directory Structure

My directory structure looks like the following (expanded for the 8.2.x branch):

/home/ts/docker/
    twx.8.0.x.h2
    twx.8.1.x.h2
    twx.8.2.x.h2
        Dockerfile
        settings
            platform-settings.json
            <license_file>
        storage
        Thingworx.war
    twx.8.3.x.h2

I have a directory for every version I want to test with. In each directory there's the Dockerfile, the recipe file I'm using. There's also the version specific Thingworx.war file as well as two directories, settings and storage, which I will map to the ThingWorx directories inside the image later.

The Recipe File

FROM tomcat:latest
MAINTAINER me@somewhere.com
LABEL version = "8.2.0"
LABEL database = "H2"
RUN mkdir -p /ThingworxPlatform
RUN mkdir -p /ThingworxStorage
RUN mkdir -p /ThingworxBackupStorage
ENV LANG=C.UTF-8
ENV JAVA_OPTS="-server -d64 -Djava.awt.headless=true -Djava.net.preferIPv4Stack=true -Dfile.encoding=UTF-8 -Duser.timezone=GMT -XX:+UseNUMA -XX:+UseG1GC -Djava.library.path=/usr/local/tomcat/webapps/Thingworx/WEB-INF/extensions"
COPY Thingworx.war /usr/local/tomcat/webapps
VOLUME ["/ThingworxPlatform", "/ThingworxStorage"]
EXPOSE 8080

I change the version label to keep track of the versions for each recipe.

Deploying

Build the Docker image by navigating to the directory where the recipe file is based:

sudo docker build -t twx.8.2.x.h2 .

Create a Docker container and start it:

sudo docker run -d --name=twx.8.2.x.h2 -p 82:8080 -v /home/ts/docker/twx.8.2.x.h2/storage:/ThingworxStorage -v /home/ts/docker/twx.8.2.x.h2/settings:/ThingworxPlatform twx.8.2.x.h2

I change the name of the image and the container as well as the external port to distinguish all the different versions. The -v option maps the paths in my operating system to the paths in the Docker container, so I can browse the ThingworxStorage and ThingworxPlatform folders without connecting inside the container. That's quite handy for checking the logs or placing the license file.

Starting and Stopping

I can fire up and shut down the containers I need with the following commands:

sudo docker start twx.8.2.x.h2
sudo docker stop twx.8.2.x.h2

What next

That's just my basic setup. Usually I copy & paste a working directory for deploying another version and adjust what needs to be changed. You could use this as a basis for quick and easy deployment where even additional features could be added, i.e. HTTPS configuration or auto-deploying certain ThingWorx Extensions via a REST API call.

To ensure starting with a clean image, for building new images I delete the contents of the storage folder and only leave the platform-settings.json in the settings folder (I copy the license later, after generating it with my new Device ID).
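P.S. For the extension auto-deployment idea mentioned above, an upload via REST might look like the sketch below. Treat the form field name and header details as assumptions to verify against the REST API documentation of your ThingWorx version:

# Sketch: upload an extension ZIP via REST; the "file" field name is an assumption
curl -X POST "http://localhost:82/Thingworx/ExtensionPackageUploader?purpose=import" \
  -H "appKey: <your-app-key>" \
  -H "X-XSRF-TOKEN: TWX-XSRF-TOKEN-VALUE" \
  -F "file=@MyExtension.zip"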
There are times when the raw sensor readings are not directly useful for monitoring conditions on a machine. The raw data may need to be transformed before it can provide value within your monitoring applications. For example, instead of monitoring individual pressure readings reported each second, you may only be concerned with the maximum pressure reading each minute. Or maybe you want to monitor the median value of the electrical current pulled by a machine every five seconds to smooth out the noise of raw sub-second sensor readings. Or maybe you want to monitor whether the average hourly temperature of a machine exceeds a control limit in 2 of the past 3 hours.

Let's take the example of monitoring the max pressure of a valve reading over the past 45 seconds for your performance dashboard. How do you do it? Today, you might add a new property (e.g. "MaxPressure") to your valve Thing. Then, you might add a subscription that triggers when the Pressure property value changes and calls a service FindMax() to return the maximum pressure for that time interval. Lastly, you might write that maximum result value to the new property MaxPressure to store it and visualize it in the dashboard. Admittedly not the worst process, but also not the most efficient.

Coming in 8.4, we will now offer Property Transforms, which enable you to automatically execute common statistical calculations (like min, max, average, median, mode and standard deviation, as well as SPC calculations) directly within a property itself. These transforms are configurable to run at certain intervals of time or points collected, and can also be used with our alerting subsystem to drive behavior and user action where necessary. There is no longer a need to create an elaborate subscription-based logic flow just to do simple calculations! This is just another way that ThingWorx 8.4 offers a more productive environment for IoT developers than ever before.

Ready to see it in action? Check out the video below by our product manager Mark!

Comment your thoughts below!

Stay connected,
Kaya
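For those curious, the pre-8.4 subscription approach described above might look roughly like the sketch below inside a Pressure data-change subscription. FindMax() is the hypothetical helper service from the example, and its parameter names are illustrative:

// Sketch of the old approach: on Pressure change, compute and store the max
// FindMax() and its parameters are hypothetical, as in the example above
var maxResult = me.FindMax({
    propertyName: "Pressure",   // property to aggregate
    intervalSeconds: 45         // lookback window for the dashboard
});
me.MaxPressure = maxResult;     // persist for visualization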
Let's assume I collect time series data from two temperature sensors located next to each other. This is done for redundancy and to ensure the quality of measures. Each of the sensors is logged into its own property in ThingWorx, and I can create a time series for each individual sensor. However, I would like to create a combined InfoTable that holds information for both sensors but averages out their values.

Instead of reading values from a stream, I just create some custom data for both InfoTables. After this, I use the UNION function to combine the two tables and sort them. Once they are sorted, the INTERPOLATE function allows grouping the InfoTable by timestamp.

With this, I have combined the two sensor results into one result set. Taking the average of the numbers will give results closer to the real value (as both sensors might not be 100% accurate). In case one sensor does not have data for a given point in time, it will still be considered in the final output.

InfoTable1:

2018-12-18 00:00:00.000    2
2018-12-19 00:00:00.000    3
2018-12-20 00:00:00.000    5
2018-12-21 00:00:00.000    7

InfoTable2:

2018-12-18 00:00:00.000    1
2018-12-19 12:00:00.000    2
2018-12-20 00:00:00.000    3
2018-12-21 00:00:00.000    4

Combined result:

2018-12-18 00:00:00.000    1.5
2018-12-19 00:00:00.000    3
2018-12-19 12:00:00.000    2
2018-12-20 00:00:00.000    4
2018-12-21 00:00:00.000    5.5

This can be done with the following code:

// Required DataShape "myInfoTableShape": "timestamp" = DATETIME, "value" = NUMBER
// The Service Output is an InfoTable based on the same DataShape
var params = {
    infoTableName: "InfoTable",
    dataShapeName: "myInfoTableShape"
};

// Create two InfoTables, representing the data of each sensor
var infoTable1 = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);
var infoTable2 = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);

var newEntry = new Object();

// Create custom data for InfoTable1
newEntry.timestamp = 1545091200000;
newEntry.value = 2;
infoTable1.AddRow(newEntry);

newEntry.timestamp = 1545177600000;
newEntry.value = 3;
infoTable1.AddRow(newEntry);

newEntry.timestamp = 1545264000000;
newEntry.value = 5;
infoTable1.AddRow(newEntry);

newEntry.timestamp = 1545350400000;
newEntry.value = 7;
infoTable1.AddRow(newEntry);

// Create custom data for InfoTable2
newEntry.timestamp = 1545091200000;
newEntry.value = 1;
infoTable2.AddRow(newEntry);

newEntry.timestamp = 1545220800000;
newEntry.value = 2;
infoTable2.AddRow(newEntry);

newEntry.timestamp = 1545264000000;
newEntry.value = 3;
infoTable2.AddRow(newEntry);

newEntry.timestamp = 1545350400000;
newEntry.value = 4;
infoTable2.AddRow(newEntry);

// Combine the two InfoTables via the UNION function
var unionTable = Resources["InfoTableFunctions"].Union({
    t1: infoTable1,
    t2: infoTable2
});

// Optional: Sort the table by timestamp
var sortedTable = Resources["InfoTableFunctions"].Sort({
    sortColumn: "timestamp",
    t: unionTable,
    ascending: true
});

// Interpolate the (sorted) table by interval and take average values to build the result
var result = Resources["InfoTableFunctions"].Interpolate({
    mode: "INTERVAL",
    timeColumn: "timestamp",
    t: sortedTable,
    ignoreMissingData: undefined,
    stats: "AVG",
    endDate: 1545609600000,
    columns: "value",
    count: undefined,
    startDate: 1545004800000
});