IoT Tips

In this session, we pick up where we left off with the mashup we worked on in part 1 of our Advanced Mashup Expert Session series. Specifically, we will explore the concepts of master mashups, session variables, and media entities, using each to further enhance the look and feel of our mashup. For full-sized viewing, click on the YouTube link in the player controls. Visit the Online Success Guide to access our Expert Session videos at any time, as well as additional information about ThingWorx training and services.
View full tip
This video builds upon the mashup created in the basic session, and strives to create a more polished, user-friendly interface that is ready for deployment. In part 1, we’ll take a look at advanced layout designs and include a more varied set of widgets to help draw attention to some of the more pertinent properties being captured within the mashup.   For full-sized viewing, click on the YouTube link in the player controls. Visit the Online Success Guide to access our Expert Session videos at any time as well as additional information about ThingWorx training and services.  
View full tip
Database backups are vital when it comes to ensuring data integrity and data safety. PostgreSQL offers simple tools to generate a backup of an existing ThingWorx database instance and to restore it when needed.

Please note that backups do not replace a proper, well-defined disaster recovery plan. Export and import are part of such a strategy, but they do not constitute the complete strategy. The commands used in this post are for Windows, but can be adjusted to work on Linux-based systems as well.

Backup

To create a backup, the export / dump functionality of PostgreSQL can be used:

pg_dump -U postgres -C thingworx > thingworxDump.sql

The -C option includes the statement to create the database in the .sql file and maps it to the existing tablespace and user (e.g. 'thingworx' and 'twadmin'). The tablespace and user can be seen in the .sql file in the line "ALTER DATABASE <dbname> OWNER TO <user>;"

In the above example, we back up the thingworx database schema and dump it into the thingworxDump.sql file. Tablespace, username and password are also included in platform_settings.json.

Restore

To restore the database, we assume an empty PostgreSQL installation. First, create the database user via the following command line:

psql -U postgres -c "CREATE USER twadmin WITH PASSWORD 'ts';"

With the user created, we can now re-create the tablespace and grant permissions to the twadmin user:

psql -U postgres -c "CREATE TABLESPACE thingworx OWNER twadmin LOCATION 'C:\ThingWorx\ThingworxPostgresqlStorage';"
psql -U postgres -c "GRANT ALL PRIVILEGES ON TABLESPACE thingworx TO twadmin;"

Finally, the database itself can be restored with the following command line:

psql -U postgres < thingworxDump.sql

This will create the database, populate it with tables, functions and sequences, and restore the data from the .sql file.

It is important that the database name, username and tablespace match the original system - otherwise granting permissions and re-creating data might fail on recovery. User and tablespace can also be reused, so only the database has to be deleted before restoring it.

Part of the strategy

Part of the backup and recovery strategy should be to actually test the backup as well as the recovery procedure. It should be well tested on a test environment before being relied on for any production environment. Take backups on a regular basis and run a disaster recovery test once or twice a year to ensure the procedure is still valid.

Data is the most important asset in your application - protect it!
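To make regular backups less error-prone, the dump can be scripted and scheduled. The following is a minimal sketch for a Linux host, assuming pg_dump is on the PATH and connection credentials are supplied via ~/.pgpass or environment variables; the script name, paths and retention period are placeholders to adapt:

  #!/bin/bash
  # nightly_thingworx_backup.sh - dump the thingworx database to a timestamped file
  set -euo pipefail

  BACKUP_DIR=/var/backups/thingworx        # placeholder backup location
  STAMP=$(date +%Y-%m-%d_%H%M%S)

  mkdir -p "$BACKUP_DIR"
  pg_dump -U postgres -C thingworx > "$BACKUP_DIR/thingworxDump_$STAMP.sql"

  # remove dumps older than 14 days
  find "$BACKUP_DIR" -name 'thingworxDump_*.sql' -mtime +14 -delete

A crontab entry such as 0 2 * * * /opt/scripts/nightly_thingworx_backup.sh would run it at 02:00 every night. Whatever schedule you pick, restore one of these dumps on a test system from time to time to confirm the files are actually usable.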
View full tip
The purpose of this document is to show how you can set up an MXChip IoT DevKit and how to send the readings from this board to ThingWorx through an Azure cloud server. You will also learn how to view the values that are being sent.
View full tip
The Help Center describes how to control file transfers from the edge client using the EdgeControlled ThingShape.

The EdgeControlled ThingShape is a default entity included with ThingWorx that allows you to manage the amount of egress being sent from the platform to the edge.

At the time of writing this post, the available 'When Disconnected' settings for a remotely bound property in ThingWorx are 'Fold' and 'Ignore'. Setting a property to 'Fold' is necessary when using the EdgeControlled ThingShape, whether the device is connected all the time or only for brief updates.

To use this ThingShape in a real-world scenario, you might code your edge client to invoke the DequeueEgress REST API function available through this ThingShape. The parameter you pass in is the number of messages you would like the client to receive; the result of the function is how many messages the platform actually sent.

A quick setup:

1. Create a RemoteThing entity in ThingWorx.
2. Create an ApplicationKey entity in ThingWorx.
3. Set up an edge client to bind to that RemoteThing using the specified ApplicationKey.
4. Manage Bindings on the Properties page of the RemoteThing and pull in a few properties to which you would like to send property updates.
5. Set the 'When Disconnected' value to 'Fold' for each property you want to queue messages for.
   5a. Set any other settings on the properties you'd like, e.g. persistent, logged.
6. Save the Thing.
7. Add the EdgeControlled ThingShape to the Thing.
8. Save the Thing.
9. Update property values; exceptions may be thrown, but the values will be queued.
10. Invoke DequeueEgress on the RemoteThing, with the number of messages to send to the edge client passed in as the parameter value.
    10a. Note that 'Fold' means only the last value set for a property will be sent to the edge client. There is currently no retention of any values previously set on the property and stored as the message to be sent; those values are lost when a new value comes in before the previous one is dequeued.
11. Verify that the edge client has received the expected egress, and that the return result of the DequeueEgress function is the expected number of messages sent.
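For illustration, a minimal way to trigger step 10 from outside Composer is the generic ThingWorx REST endpoint for invoking a service on a Thing. The host, Thing name, application key and the input parameter name (maxCount) below are assumptions - check the service definition on the EdgeControlled ThingShape in Composer for the exact parameter name before relying on it:

  # Ask the platform to release up to 10 queued messages to the edge client
  curl -X POST \
    -H "appKey: <your-application-key>" \
    -H "Content-Type: application/json" \
    -H "Accept: application/json" \
    -d '{"maxCount": 10}' \
    "https://twx-server/Thingworx/Things/MyRemoteThing/Services/DequeueEgress"

The response reports the service result - i.e. how many messages the platform sent - which you can compare against what the edge client actually received (step 11).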
View full tip
Previously:
- Installing & Connecting C SDK to Federated ThingWorx with VNC Tunneling to the Edge device
- Installing and configuring Web Socket Tunnel Extension on ThingWorx Platform

Overview

Using the C SDK edge client configuration we did earlier, we'll now create the setup illustrated above. The C SDK client pushes data to the ThingWorx publisher (server name: TW802Neo), which federates it to the ThingWorx subscriber (server name: TW81). Notice that SteamSensor2 on the publisher server is the Thing bound to the C SDK client, and FederatedSteamThing on the subscriber only receives data from SteamSensor2. Let's crack on!

Content

- Configure ThingWorx to publish
- Configure ThingWorx to subscribe
- Publish entities from the publisher to the subscriber
- Create a mashup to view data published to the subscriber

Pre-requisite

The minimum requirement is to have two ThingWorx servers running. Note that both ThingWorx systems can be publisher and subscriber at the same time.

Configure ThingWorx Publisher

Configuring the Federation Subsystem

1. Navigate to ThingWorx Composer > Subsystems > Federation Subsystem and configure the highlighted sections. Essentially you need to configure the server identification, i.e. My Server Name (FQDN / IP) and My Server Description (optional), as well as the federation subscribers this server publishes to, i.e. all the servers you want to publish to from this server. Refer to the Federation Subsystem documentation in the Help Center for a detailed description of each configurable parameter.
2. Save the Federation Subsystem.

Configuring a Thing to be published

1. Navigate back to the Composer home page and select the entity you'd like to publish.
2. In this case I'm using SteamSensor2, which was created to connect to the C SDK client.
3. To publish, edit the entity and select the Publish checkbox.
4. Save the entity.

Configure ThingWorx Subscriber

Configuring the Federation Subsystem

1. Navigate to ThingWorx Composer > Subsystems > Federation Subsystem and configure the highlighted sections. Essentially you need to configure the server identification, i.e. My Server Name (FQDN / IP) and My Server Description (optional). Refer to the Help Center documentation on the Federation Subsystem should you need more detail on the configurable parameters. If you only want to use this server as a subscriber of entities from the publishing ThingWorx server, you don't have to configure the section "Federation subscribers this server publishes to"; I've configured it here anyway to show that both servers can work as publishers and subscribers.
2. Save the Federation Subsystem.

Configuring a Thing to subscribe to a published Thing

1. Subscribing to an entity is fairly straightforward. I'll demonstrate by utilizing the C SDK client, which is currently pushing values to my remote Thing called SteamSensor2 on the server https://tw802neo:443/Thingworx.
2. I have already published SteamSensor2 (see the section "Configuring a Thing to be published" above).
3. Create a Thing called FederatedSteamThing with RemoteThingWithTunnels as the ThingTemplate.
4. Browse for the Identifier and select the required entity to bind to.
5. Navigate to the Properties section for the entity and click Manage Bindings to search for the remote properties and add them to this Thing.
6. Save the entity; we can then see the values that were pushed from the C SDK client.
7. Finally, we can also quickly see the values pulled via a mashup created on the subscriber ThingWorx. Below is a simple mashup with a grid widget pulling values using the QueryPropertyHistory service.
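As a quick check that values are actually arriving on the subscriber before building a mashup (a sketch; the host, Thing name and application key are placeholders), the federated Thing's current property values can also be read over the ThingWorx REST API:

  # List the current property values of the federated Thing on the subscriber
  curl -H "appKey: <subscriber-application-key>" \
       -H "Accept: application/json" \
       "https://tw81/Thingworx/Things/FederatedSteamThing/Properties"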
View full tip
Disclaimer: this post does of course not express any political views.

Pie Chart Coloring

In ThingWorx, Pie Charts use a default color schema based on the DefaultChartStyle Definitions. These schemas use a fixed numbering and coloring system, e.g. 1 is blue, 2 is green, 3 is red and so on. All Pie Charts will be rendered with these colors in the same order, no matter which data the chart is using. Visualizing data with the default colors will not necessarily produce an easy-to-read chart.

Just take a look at the following example with the default color schema. Let's take political parties - as they are usually associated with a distinct color - to illustrate how the default color schema can fail depending on the data displayed.

In the first example, just by sheer coincidence, the colors perfectly match the parties. When introducing a new party to the pool, suddenly the blues are rendered green and the yellows rendered light blue, etc. This can be quite confusing, especially on election night 😉

Custom Color Schema

PoliticalParties Thing

To test a custom color schema, we first need to create a new Thing: PoliticalParties as a GenericThing. Add a dataset property using the PoliticalParties DataShape shown below.

Save the Thing and set the InfoTable to:

Key - Value
The Purples - 20
The Blues - 20
The Greens - 20
The Reds - 20
The Yellows - 20
Others - 20

The number values don't actually matter too much, as the Pie Chart will automatically distribute them according to their percentage.

PoliticalParties Mashup

Create a new Mashup and add a PieChart to the canvas. Bind the PoliticalParties > GetPropertyValues > dataset to the Data input of the widget. Ensure that the LabelField is set to key and the ValueField to value for a correct mapping.

Save the Mashup and preview it. It should show a non-matching color for each party listed in the InfoTable.

Custom Styles and States

Create new custom Style Definitions for each political party. As the Pie Chart only uses the Background Color, the other properties can stay at their defaults. I chose to go with a more muted version of the colors to make the chart easier to look at.

With the newly defined colors we can now create a new State Definition. The States evaluate the key strings in the Thing's InfoTable and assign a Style Definition depending on the actual value. In this definition we map a color schema based on the InfoTable's key value to create a 1:1 mapping for the strings. This means that no matter where a certain party is positioned in the chart, it will be tinted with its associated color.

Refining the Mashup

Back in the Mashup, select the PieChart. In the ColorFormat property choose the newly created State Definition. Save the Mashup and preview it. With the States and Styles applied, the colors are now displayed correctly.

Even when changing positions and numbers in the original InfoTable of the PoliticalParties Thing, the chart still considers the mapping of strings and displays the colors correctly.
View full tip
Saw this great question in the Developers forum: https://community.ptc.com/t5/ThingWorx-Developers/Thingworx-Permission-Hierarchy/m-p/556829#M29312. Answered it there, copying it to here.

Question

Hi, I have a few questions regarding the permissions model in ThingWorx. I can't find any documentation that explains it clearly. Hoping someone can help, or point me in the right direction for more in-depth documentation.

My understanding is that permissions can be set at a number of different levels:
- Collection level
- Template level
- Instance level
- Thing level

My questions are:
1. How do these levels interact with one another? Do they all get 'AND'ed together, or do those at the lower levels supersede the ones set at higher levels? E.g. if I set some visibility at Collection level, would this be overridden by setting a different visibility at, say, the Template level, or would both visibility permissions be valid?
2. At each level there is the ability to override (e.g. for a particular property or service). How does that fit into the hierarchy?
3. I have read that in ThingWorx 'deny' always supersedes an 'allow' permission. Is this still the case if I set deny at Collection level and then gave 'allow' permissions at a lower level - would the deny take precedence?
4. As far as I can tell, 'Create' permissions can only be set at Collection level. Does this mean that I am unable to restrict one set of users to create Things of one template, and a different set of users to create another type of Thing?

Thanks in advance for any replies.

Answer

Great question.

Thing/Entity level permissions always take precedence. So if you set permissions on the Collection, then on the Template, then on the Entity, it will first look at the Entity, then fill in from the Template and then from the Collection. For example:
- The Collection says: no service execution at all.
- The Template says: can execute Service 1 but not Service 2.
- The Entity says: can execute Service 2 and leaves Service 1 as inherited.
The end result is that the user can execute Service 1 and Service 2.

On the Template and the Entity you can find the Override ability, that is, to specifically allow or disallow the execution of a service or the read/write of a property.

What is a BEST PRACTICE?
1. Give the System user all service execute permissions on the Collection level.
2. Give User Groups 'blanket' permissions for Property Read/Write on the ThingTemplate level.
3. Give User Groups only Override permission to execute services on the ThingTemplate level.
4. Override User Group permission to DENY property read on potential properties they are not supposed to read on the ThingTemplate level.

Generally most properties can be fully accessed by all users, so the blanket permission on a ThingTemplate is fine. It is very BAD to give user groups blanket permission to service execute - that should always be done by Override.

The Entity hierarchy overrides the Allow/Deny hierarchy, but within a single level (Collection / Template / Entity) Deny wins over Allow.

Create is indeed only set on the Collection level; however, the way to secure this is to give the System user the Create ability and create wrapper services that use the CreateThing service, which you can then secure for specific groups. So you could create a CreateNewThingType1 and a CreateNewThingType2 service, for example, and give User Group 1 permission to Type 1 creation and User Group 2 permission to Type 2 creation.

Hope that helps.
View full tip
This is a follow-up post to my initial document about Edge Microserver (EMS) and Lua Script Resource (LSR) security. While the first part deals with the fundamentals of secure configurations, this second part gives some more practical tips and tricks on how to implement these security measures.

For more information it's also recommended to read through the Setting Up Secure Communications for WS EMS and LSR chapter in the ThingWorx Help Center. See also Trust & Encryption Theory and Hands On for more information and examples - especially around the concept of the Chain of Trust, which is an important factor for this post as well.

In this post I will only reference the High Security options for both the EMS and the LSR. Note that all commands and directories are Linux based - Windows equivalents might differ slightly.

Note - some of the configuration options are color coded for easy recognition: LSR resources / EMS resources.

Password Encryption

It's recommended to encrypt all passwords and keys, so that they are not stored as cleartext in the config.lua / config.json files. And of course it's also recommended to use a more meaningful password than the one I use as an example - which also means: do not use any password I mention here on your systems, they might be too easy to guess now 🙂

The luaScriptResource script can be used for encryption:

./luaScriptResource -encrypt "pword123"
############
Encrypted String
AES:A26fBYKHJq+eMu0Fm2FlDw==
############

The wsems script can be used for encryption:

./wsems -encrypt "pword123"
############
Encrypted String
AES:A26fBYKHJq+eMu0Fm2FlDw==
############

Note that both scripts produce the same encrypted string, so either the wsems or the luaScriptResource script can be used to retrieve the same result.

The string to encrypt can be provided with or without quotation marks. It is however recommended to quote the string, especially when the string contains blanks or spaces; otherwise unexpected results might occur, as blanks are treated as delimiters.

LSR Configuration

In the config.lua there are two sections to be configured:

scripts.script_resource, which deals with the configuration of the LSR itself
scripts.rap, which deals with the connection to the EMS

HTTP Server Authentication

HTTP Server Authentication will require a username and password for accessing the LSR REST API.

scripts.script_resource_authenticate = true
scripts.script_resource_userid = "luauser"
scripts.script_resource_password = "pword123"

The password should be encrypted (see above) and the configuration should then be updated to

scripts.script_resource_password = "AES:A26fBYKHJq+eMu0Fm2FlDw=="

HTTP Server TLS Configuration

HTTP Server TLS configuration will enable TLS and https for secure and encrypted communication channels from and to the LSR. To enable TLS and https, the following configuration is required:

scripts.script_resource_ssl = true
scripts.script_resource_certificate_chain = "/pathToLSR/lsrcertificate.pem"
scripts.script_resource_private_key = "/pathToLSR/key.pem"
scripts.script_resource_passphrase = "keyForLSR"

It's also encouraged to not use the default certificate, but custom certificates instead.
To explicitly set this, the following configuration can be added:

scripts.script_resource_use_default_certificate = false

Certificates, keys and encryption

The passphrase for the private key should be encrypted (see above) and the configuration should then be updated to

scripts.script_resource_passphrase = "AES:A+Uv/xvRWENWUzourErTZQ=="

The private_key should be available as a .pem file and starts and ends with the following lines:

-----BEGIN ENCRYPTED PRIVATE KEY-----
-----END ENCRYPTED PRIVATE KEY-----

As it's highly recommended to encrypt the private_key, the LSR needs to know the password required to decrypt and use the key. This is done via the passphrase configuration. Naturally the passphrase should itself be encrypted in config.lua so that the actual cleartext passphrase is not exposed.

The certificate_chain holds the Chain of Trust of the LSR Server Certificate in a .pem file. It holds multiple entries for the Root, Intermediate and Server Specific certificates, with each individual certificate and Certificate Authority (CA) starting and ending with the following lines:

-----BEGIN CERTIFICATE-----
-----END CERTIFICATE-----

After configuring TLS and https, the LSR REST API has to be called via https://lsrserver:8001 (instead of http).

Connection to the EMS

Authentication

To secure the connection to the EMS, the LSR must know the certificates and authentication details for the EMS:

scripts.rap_server_authenticate = true
scripts.rap_userid = "emsuser"
scripts.rap_password = "AES:A26fBYKHJq+eMu0Fm2FlDw=="

Supply the authentication credentials as defined in the EMS's config.json - as with any other configuration value, the password can be used in cleartext or encrypted. It's recommended to encrypt it here as well.

HTTPS and TLS

Use the following configuration to establish the https connection using certificates:

scripts.rap_ssl = true
scripts.rap_cert_file = "/pathToLSR/emscertificate.pem"
scripts.rap_deny_selfsigned = true
scripts.rap_validate = true

This forces the certificate to be validated and also denies selfsigned certificates. In case selfsigned certificates are used, you might want to adjust the above values.

The cert_file is the full Chain of Trust as configured in the EMS's config.json http_server.certificate option. It needs to match exactly, so that the LSR can actually verify and trust the connections from and to the EMS.

EMS Configuration

In the config.json there are two sections to be configured:

http_server, which enables the HTTP Server capabilities for the EMS
certificates, which holds all certificates that the EMS must verify in order to communicate with other servers (ThingWorx Platform, LSR)

HTTP Server Authentication and TLS Configuration

HTTP Server Authentication will require a username and password for accessing the EMS REST API. HTTP Server TLS configuration will enable TLS and https for secure and encrypted communication channels from and to the EMS.
To enable both, the following configuration can be used:

"http_server": {
    "host": "<emsHostName>",
    "port": 8000,
    "ssl": true,
    "certificate": "/pathToEMS/emscertificate.pem",
    "private_key": "/pathToEMS/key.pem",
    "passphrase": "keyForEMS",
    "authenticate": true,
    "user": "emsuser",
    "password": "pword123"
}

The passphrase as well as the password should be encrypted (see above) and the configuration should then be updated to

"passphrase": "AES:D6sgxAEwWWdD5ZCcDwq4eg==",
"password": "AES:A26fBYKHJq+eMu0Fm2FlDw=="

See the LSR configuration for comments on the certificate and the private_key; the same principles apply here. Note that the certificate must hold the full Chain of Trust in a .pem file for the server hosting the EMS.

After configuring TLS and https, the EMS REST API has to be called via https://emsserver:8000 (instead of http).

Certificates Configuration

The certificates configuration holds all certificates that the EMS will need to validate. If ThingWorx is configured for HTTPS and ws_connection.encryption is set to "ssl", the Chain of Trust for the ThingWorx Platform Server Certificate must be present in the .pem file. If the LSR is configured for HTTPS, the Chain of Trust for the LSR Server Certificate must be present in the .pem file.

"certificates": {
    "validate": true,
    "allow_self_signed": false,
    "cert_chain": "/pathToEMS/listOfCertificates.pem"
}

The listOfCertificates.pem is basically a copy of the lsrcertificate.pem with the ThingWorx certificates and CAs added.

Note that all certificates to be validated, as well as their full Chain of Trust, must be present in this one .pem file. Multiple files cannot be configured.

Binding to the LSR

When binding to the LSR via the auto_bind configuration, the following settings must be configured:

"auto_bind": [{
    "name": "<ThingName>",
    "host": "<lsrHostName>",
    "port": 8001,
    "protocol": "https",
    "user": "luauser",
    "password": "AES:A26fBYKHJq+eMu0Fm2FlDw=="
}]

This will ensure that the EMS connects to the LSR via https with proper authentication.

Tips

Do not use quotation marks (") as part of the strings to be encrypted. This could result in unexpected behavior when running the encryption script.
Do not use a colon (:) as part of any username. Authentication tokens are passed from browsers as "username:password", and a colon in a username could result in unexpected authentication behavior leading to failed authentication requests.
In the Server Specific certificates, the CN must match the actual server name and also must match the name of the http_server.host (EMS) or script_resource_host (LSR).
In the .pem files, first store the Server Specific certificates, then all required Intermediate CAs and finally all required Root CAs - any other order could affect the consistency of the files and the certificate might not be fully readable by the scripts and processes.
If the EMS is configured with certificates, the LSR must connect via a secure channel as well and needs to be configured to do so.
If the LSR is configured with certificates, the EMS must connect via a secure channel as well and needs to be configured to do so.
For testing REST API calls with resources that require encryption and authentication, see also: How to run REST API calls with Postman on the Edge Microserver (EMS) and Lua Script Resource (LSR).

Export PEM data from KeyStore Explorer

To generate a .pem file I usually use KeyStore Explorer for Windows, in which I have created my certificates and manage my keystores.
1. In the keystore, select a certificate and view its details. Each certificate and CA in the chain can be viewed: Root, Intermediate and Server Specific.
2. Select each certificate and CA and use the "PEM" button at the bottom of the interface to view the actual PEM content.
3. Copy to clipboard and paste into a .pem file.
4. To generate a .pem file for the private key, right-click the certificate > Export > Export Private Key.
5. Choose "PKCS #8".
6. Check "Encrypted" and use the default algorithm; define an "Encryption Password"; check the "PEM" checkbox and export it as a .pkcs8 file.
7. The .pkcs8 file can then be renamed and used as a .pem file.
8. The password set during the export process will be the scripts.script_resource_passphrase (LSR) or the http_server.passphrase (EMS).
9. After generating the .pem files I copy them over to my Linux systems, where they will need 644 permissions (-rw-r--r--).
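Before wiring the .pem files into config.lua / config.json, it can save time to sanity-check them from the command line. This is only a sketch assuming the OpenSSL tools are installed; the file names are the placeholders used earlier in this post:

  # List every certificate contained in the chain file (subject and issuer)
  openssl crl2pkcs7 -nocrl -certfile lsrcertificate.pem | openssl pkcs7 -print_certs -noout

  # Verify the server certificate against a CA bundle (the CA file name is a placeholder)
  openssl verify -CAfile rootAndIntermediateCAs.pem lsrcertificate.pem

  # Confirm the encrypted private key can be opened with the chosen passphrase
  openssl pkey -in key.pem -noout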
View full tip
Overview

This document covers basic PostgreSQL monitoring and health-check related system objects such as tables, views, etc. It allows simple monitoring of the PostgreSQL database from the ThingWorx Composer itself via some custom services, which are attached at the end of this document. I'll also give brief details on some of the services included with the Thing PostgreSQLHealthCheck, which implements the Database ThingTemplate.

Pre-requisite

The document assumes that the user already has ThingWorx running with PostgreSQL as the Persistence Provider.

How to install

Usage is fairly straightforward: import the Entities.twx (attached with this blog) and it will create the required Thing, which implements the Database ThingTemplate, and some DataShapes. Each service under the Thing PostgreSQLHealthCheck has its own DataShape. Feel free to edit these services / DataShapes if you are looking to use their output as part of your mashup(s).

Make sure to edit the PostgreSQL JDBC Connection String, username and password under the configuration section of the PostgreSQLHealthCheck Thing in order to connect to your PostgreSQL instance.

Note: these services can also be used to query non-ThingWorx databases created with PostgreSQL as part of the external JDBC connection.

Reviewing Services from Thing: PostgreSQLHealthCheck

1. DescribeTableStructure - Takes two inputs, Table Name and the Schema Name in which the ThingWorx database tables exist; both inputs have default values that can be modified to match your PostgreSQL schema setup and required table name. Provides information on a table's structure.

2. GetAllPSQLConfig - Provides runtime details on all the configurations done in postgresql.conf which are in effect. For detail on pg_settings see the PostgreSQL 9.4 documentation.

3. GetAllPSQLConfigLimited - Similar to GetAllPSQLConfig, however with limited information.

4. GetAllPSQLRoles - Lists all the database roles/users, together with their access rights and OID. Helpful in identifying whether a role is active/inactive or carries any limitation on DB connections.

5. GetPG_Stat_Activity - Part of the Statistics Collector subsystem for the PostgreSQL DB. Shows the current state of the schema, e.g. connections, queries, etc. For more detail on the output refer to the PostgreSQL 9.4 documentation.

6. GetPSQLDBLocksInformation - Shows the kinds of locks in effect, on which database and on which relation (table). Particularly useful in identifying the relations and which lock mode is enabled on them.

7. GetPSQLDBStat - Shows database-wide statistics, like commits, reads, block reads, tuples (rows) fetched, inserted, deadlocks, etc. For more detail refer to the PostgreSQL 9.4 documentation.

8. GetPSQLLogDesitnation - Checks where the PostgreSQL server logs are directed to, i.e. stderr, csvlog or syslog. The default is stderr.

9. GetPSQLLogFileName - Fetches the PostgreSQL log file name and the filename format, e.g. postgresql-%Y-%m-%d_%H%M%S.log.

10. GetPSQLLoggingLocation - Fetches the location where the PostgreSQL logs are stored, e.g. pg_log, which is also the default location. The desired location for the logs can be set in the postgresql.conf file.

11. GetPSQLRelationIndexes - Gets information on the indexes, like index size, number of rows, and the table names on which each index is created.
12. GetPSQLReplicationStat - Shows information related to replication on the PostgreSQL DB. Applicable to PostgreSQL DBs where replication is enabled.

13. GetPSQLTablespaceInfo - Takes a tablespace name as input (String DataType); the service defaults to 'thingworx', modify if needed. Fetches information like the owner OID and tablespace ACL.

14. GetPSQLUserIndexIO - Fetches indexes that are created only on user-created DB objects. Shows relation (table) vs. index relation IDs (index on table), together with their names. Also shows additional info like the number of disk blocks read from the index and the number of buffer hits in the index.

15. GetPSQLUserSequencesIOStats - Fetches information on sequence objects used on user-defined relations (tables): the number of disk blocks read from the sequence and the buffer hits in the sequence.

16. GetPSQLUserTableIOStat - Fetches disk I/O information on the user-created tables.

17. GetPSQLUserTables - Fetches all the user-created tables, together with their name, OID, disk I/O and last (auto) vacuum times. Also lists the number of rows each relation (table) has.

Finally

The attached entity has some additional services not yet covered in this blog; as they are minor services, I've left them out for brevity. Feel free to explore or enhance them. I will continue to look for additional services and will enhance this document and its entities accordingly.

If you are looking to enhance this, feel free to fork twxPostgreSqlHealthCheck on GitHub.
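For reference, the kind of information these services expose can also be pulled directly with psql against the same statistics views - a sketch, not the exact SQL used inside the attached services; the connection parameters are placeholders:

  # Current connections and running queries (roughly what GetPG_Stat_Activity returns)
  psql -U postgres -d thingworx -c "SELECT pid, usename, state, query FROM pg_stat_activity;"

  # Database-wide statistics such as commits, rollbacks and deadlocks (GetPSQLDBStat)
  psql -U postgres -d thingworx -c "SELECT datname, xact_commit, xact_rollback, deadlocks FROM pg_stat_database;"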
View full tip
I always find it difficult to remember which version of software is supported with which version of ThingWorx, so I created a table for my reference. I hope this is also helpful to other people.

Columns: Oracle JDK | Tomcat | PostgreSQL | Neo4J | H2 | Microsoft SQL Server | SAP HANA | DataStax Enterprise Edition | Memo

ThingWorx 6.5: 1.8.0 (64-bit) | 8.0.23 (64-bit) | 9.4.4 | embedded | N/A | N/A | N/A | N/A | IE 10
ThingWorx 6.6: 1.8.0 (64-bit) | 8.0.23 (64-bit) | 9.4.4 | embedded | N/A | N/A | N/A | N/A |
ThingWorx 7.0: 1.8.0 (64-bit) | 8.0.23 (64-bit) | 9.4.? | embedded | N/A | N/A | N/A | N/A |
ThingWorx 7.1: 1.8.0_92-b14 (64-bit) | 8.0.33 (64-bit) | 9.4.5 | embedded | N/A | N/A | N/A | 4.6.3 |
ThingWorx 7.2: 1.8.0_92-b14 (64-bit) | 8.0.33 (64-bit) | 9.4.5 | embedded | embedded | N/A | N/A | 4.6.3 | IE 11 and later
ThingWorx 7.3: 1.8.0_92-b14 (64-bit) | 8.0.38 (64-bit) | 9.4.5 | embedded | embedded | N/A | SPS 11, 12 | 4.6.3, 5 |
ThingWorx 7.4: 1.8.0_92-b14 (64-bit) | 8.0.38 (64-bit) | 9.4.5 | embedded | embedded | 2014 and later | SPS 11, 12 | 4.6.3, 5 |
ThingWorx 8.0: 1.8.0_92-b14 (64-bit) | 8.0.44 (64-bit), 8.5.13 (64-bit) | 9.4.5 | embedded | embedded | 2014 and later | SPS 11, 12 | 4.6.3, 5 |
ThingWorx 8.1: 1.8.0_92-b14 (64-bit) | 8.0.44 (64-bit), 8.5.13 (64-bit) | 9.4.5 | embedded | embedded | 2014 and later | SPS 11, 12 | 4.6.3, 5 |
ThingWorx 8.2: 1.8.0_92-b14 (64-bit) | 8.0.47 (64-bit), 8.5.23 (64-bit) | 9.4.5 | embedded | embedded | 2014 and later | SPS 11, 12 | 4.6.3, 5 |
ThingWorx 8.3: (not filled in yet)
View full tip
Large files can cause slow response times. In some cases large queries might produce extremely large response files, e.g. calling a ThingWorx service that returns a very large result set as a JSON file.

Those massive files have to be transferred over the network and require additional bandwidth - for each and every call. The more bandwidth is used, the more time is spent on the network and the bigger the potential impact on performance. Imagine transferring tens or hundreds of MB per service call - over and over again.

To reduce the bandwidth, compression can be activated. Instead of transferring MBs per service call, the server only has to transfer a couple of KB per call (best-case scenario). This needs to be configured on Tomcat level. There is some information available in the official Tomcat documentation at https://tomcat.apache.org/tomcat-8.5-doc/config/http.html - search for the "compression" attribute.

Gzip compression

Usually Tomcat compresses content with gzip. To verify whether a certain response is in fact compressed or not, the browser Development Tools or Fiddler can be used. The Response Headers usually mention the compression type if the content is compressed (left: no compression; right: compression on Tomcat level).

Not so straightforward - the network vs. compression time trade-off

There's however a pitfall with compression on the Tomcat side. Each response will add additional strain on time and resources (like CPU) to compress on the server and decompress the content on the client. Especially for small files this might be an unnecessary overhead, as compressing might take longer than just transferring a couple of uncompressed KB.

In the end it's a trade-off between network speed and the speed of compressing and decompressing response files on server and client. With the compressionMinSize attribute a threshold size can be set to find the best balance between compression and bandwidth.

This trade-off can be clearly seen (for small content) in the measurements: while the size of the content shrinks, the time increases. For larger content files the time will also increase slightly due to the compression overhead, whereas the size can potentially be reduced by a massive factor - especially for text-based files.

The above test was performed on a local virtual machine, which basically neglects most of the network-related traffic problems that cause performance issues - therefore the overhead in time is only a couple of milliseconds for the compression / decompression.

The default for compressionMinSize is 2048 bytes.

High potential performance improvement

Looking at the Combined.js, the content size can be reduced significantly from 4.3 MB to only 886 KB. For my simple Mashup showing a chart with Temperature and Humidity this also decreases total load time from 32 to 2 seconds - and decreases the content size from 6.1 MB to 1.2 MB!

This shrinks load time and size by factors of 16x and 5x - and the total time until the page finished rendering decreased by a factor of almost 22x (for this particular use case).

Configuration

To configure compression, open Tomcat's server.xml and add the following to the <Connector> definitions:

compression="on"
compressibleMimeType="text/html,text/xml,text/plain,text/css,text/javascript,application/javascript,application/json"

This will use the default compressionMinSize of 2048 bytes.
In addition to the default MIME types I've also added application/json to compress ThingWorx service call results.

This needs to be configured for all Connectors that users will access - e.g. for the HTTP and HTTPS connectors. For testing purposes I have an HTTPS connector with compression while HTTP is running without it.

Conclusion

If possible, enable compression to speed up content download for the client. However, there are some scenarios where compression is actually not a good idea - e.g. when using a WAN accelerator or other network components that bring their own content compression. This not only adds unnecessary overhead but also compresses twice, which might lead to errors on the client side when decompressing the content.

Compression helps most when dealing with large responses. As compressing and decompressing adds some overhead, the compressionMinSize limit can be experimented with to find the optimal compromise in the network vs. compression time trade-off.
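After restarting Tomcat you can verify the behaviour without a browser. The sketch below assumes curl is available; the host, port and resource path are placeholders - any text-based resource larger than compressionMinSize will do:

  # Print the response headers for a request that allows gzip
  curl -sk -o /dev/null -D - -H "Accept-Encoding: gzip" \
       "https://twx-server:8443/Thingworx/<large-text-resource>" | grep -i "content-encoding"
  # Expected when compression is active: Content-Encoding: gzip

  # Compare transferred sizes with and without compression
  curl -sk -o /dev/null -w "compressed: %{size_download} bytes\n" -H "Accept-Encoding: gzip" \
       "https://twx-server:8443/Thingworx/<large-text-resource>"
  curl -sk -o /dev/null -w "uncompressed: %{size_download} bytes\n" \
       "https://twx-server:8443/Thingworx/<large-text-resource>"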
View full tip
Go beyond functional application development and learn how you can dress up your UI to enhance the user experience. In this webinar, we'll show you how to implement common design elements throughout your UI, including custom logos, color schemes, and styles.

In this video, UI expert Tsveta Saul will demonstrate valuable tips and tricks for making a sophisticated IoT application with ThingWorx, including:
- Masters and Menus - enhance your app framework with personalized menu titles, backgrounds, and custom headers
- Layout and Labels - create consistency in your application by combining commonly used widgets
- State and Style Definitions - define widget styles to illustrate your brand
- Images - integrate visuals to describe thousands of words' worth of design and development specifications

Watch the recording above, and download this sample Mashup containing all the data and entities shared in the video.

Q&A

We didn't have time to get to all of the questions during the live webcast, but we've answered them here on our blog. Have any additional questions? Please leave us a comment.

WHERE ELSE CAN STATE DEFINITIONS BE APPLIED?
State Definitions can be applied to the Map Widget. Not only can you add color-coded pins, but also images that describe certain types of assets, for example. You can also add State Definitions to your Time Series Chart. For instance, you can color-code values that exceed a certain threshold.

ARE THERE ANY IMPORTANT OR HIDDEN PROPERTIES THAT I SHOULD KNOW ABOUT?
One feature to take note of is the Z-index, which helps you control the position of a widget in relation to another widget. This option can control whether users see a specific widget or not. ThingWorx also has a robust system that handles permissions - from users and groups to organizations. There's a lot of control over who can see what, either through the Model or through specific services and conditions inside the Mashup.

HOW CAN I DISABLE A USER FROM SEEING SOMETHING OR DOING A CERTAIN ACTION?
Again, that is handled through the security system of ThingWorx. There are multiple ways of authenticating users from other systems. You can create services that are related to your Things or user session parameters, and define whether a user can see those parameters or not.

HOW CAN WE CHANGE THE DATE FORMAT FOR THE X-AXIS IN THE TIME SERIES CHART?
This is controlled by a property on the Time Series Chart Widget. It uses a standard date format pattern - you can add or remove format tokens in the property.

CAN LAYOUT SPACING BE A PERCENTAGE?
No - the spacing within the layout is limited to a fixed set of values (none, 5px, 10px, 15px, or 20px).
View full tip
Based on real use cases and industry-leading solutions, this webinar looks at how developers can deliver valuable analytical outputs through experiences that ensure easy consumption of trusted analyses. By taking a deep dive into a range of the analytics capabilities within ThingWorx, we will demonstrate how you can visualize complex analytic outputs to help your users understand what really matters in your data - and ultimately make quick, insightful business decisions.

Q&A

We didn't have time to get to all of the questions during the live webcast, but we've answered them here on our blog. Have any additional questions? Please leave us a comment.

THIS SESSION TALKED ABOUT CONNECTIVITY WITH OTHER ANALYTICS PROGRAMS, SUCH AS R. HOW WOULD THINGWORX INTERACT WITH THE R PLATFORM?
There are multiple reasons for using R and other analytics tools. Say you're using R to build a predictive model, for instance - you can interact with data and load that information into our Analytics Server, even outside Analytics Builder. You can also interface in script within ThingWorx to run certain calculations driven by R. Moreover, even if you are building a model in R or any other tool, you can use Analytics Manager to operationalize data coming in from a Thing model that's being scored against variables created elsewhere. Our ultimate goal is not to migrate you away from the tools you're already using, but rather to augment the development experience with ThingWorx integration.

HOW WOULD A DEVELOPER GO ABOUT CREATING & OBTAINING JSON AND CSV DATA FOR ANALYSIS?
In terms of creating a historical data view, there are two separate methods. There's the descriptive view that you use to create the model, and then there is operationalizing the model. On the operational side, it's data coming from the Thing model being scored against the actual predictive model that's been created. For the descriptive view, the tool of choice ultimately boils down to organizational preference. How you load the represented data into the Analytics Server is completely based on the tools with which you work.

CAN YOU EXPLAIN IN DETAIL THE STEPS FOR CONNECTING A MACHINE TO THE THINGWORX ANALYTICAL MODEL, INCLUDING HOW TO DEFINE THE DATA COMING FROM THE MACHINE TO CREATE THE MODEL?
Absolutely. I'd encourage you to check out our new Operationalize an Analytics Model developer guide available on the ThingWorx Developer Portal. In just 30 minutes, you'll learn how to use Analytics Manager and ThingPredictor to automatically perform analytical calculations.

OUR ORGANIZATION SEES FEATURE ENGINEERING AS A KEY PART OF THE DATA ANALYSIS PROCESS. DO THINGWORX ALGORITHMS HANDLE FEATURE ENGINEERING INTERNALLY?
Yes. There's feature engineering in terms of getting a dataset ready for consumption. The technology ThingWorx provides is the ability to automatically sift through the data and use various features to guide the selection of algorithms. Feature enrichment is what's really powerful about our supervised machine learning capabilities.

HOW DO I INCORPORATE ADVANCED STYLING IN MY UI, LIKE ANIMATIONS AND RESPONSIVE BEHAVIORS?
The standard way of achieving advanced styling in ThingWorx is to leverage Media Entities, Style and State Definitions. Many widgets, such as the Value Display and the Shape Widget, have the out-of-the-box ability to take a State Definition and apply advanced styling for things like severity of risks, etc. Watch our IoT Application Makeover webcast for more information about this topic.
DO YOU HAVE ANY RECOMMENDATIONS FOR GUARANTEEING DESIGN CONSISTENCY IN MY IOT APPLICATION? For non-designers: to keep your design clean and consistent, it is important to properly manage your Style Definitions. Define styles and stick to re-using them; don’t have five different styles for showing model accuracy, for instance. I would recommend creating 5-10 styles for text, and then from there choosing a color scheme for things like buttons, labels, charts, grid headers, and other elements where you need a vibrant color.
View full tip
Create compelling, modern application user interfaces in ThingWorx with the latest enhancements to our Mashup visualization platform - Collection and Custom CSS.

In this webinar with IoT application designer Gabriel Bucur, we'll show how the new Collection widget makes it easy to replicate visual content in your UI for menu systems, dashboards, tables, and more. You'll learn about several of the 60+ configuration properties available for collections, many of which offer input/output bindings for dynamic flexibility. Gabriel will also demonstrate the styling and UX power of the latest feature in the Next Gen Composer, which allows you to write classes and CSS for your Mashups, masters, and widgets.

Watch the recording above, and download this sample Mashup containing all the data and entities shared in the video.

Q&A

We didn't have time to get to all of the questions during the live webcast, but we've answered them here on our blog. Have any additional questions? Please leave us a comment.

WILL PTC CONTINUE SUPPORT FOR THE REPEATER WIDGET IN THINGWORX 8.2, OR WILL IT BE REMOVED?
The Repeater Widget will not be removed. However, due to limited performance in various browsers, switching to the Collection is highly recommended.

WHAT'S THE DIFFERENCE BETWEEN REPEATER AND COLLECTION, AND ARE THERE PROS AND CONS FOR EACH WIDGET?
The Collection is an advanced widget that allows you to contain a series of repeated Mashups within a collection. Its functionality is similar to the Repeater Widget, but it contains more properties that provide additional options and better performance, especially when handling large amounts of data.

IS IT POSSIBLE TO ADD A DRAG AND DROP ACTION TO LISTS OR REPEATERS, E.G. DRAGGING AN ELEMENT FROM ONE CONTAINER TO ANOTHER?
Drag and drop functionality is not available in the Collection Widget at this time. It is, however, under consideration for future ThingWorx releases.

IN THE EVENT I HAVE MORE THAN ONE MASHUP (FOR EXAMPLE, MASHUP A AND MASHUP B), CAN I BIND DIFFERENT PROPERTIES TO THE SAME MASHUP PARAMETER ACCORDING TO THE MASHUP NAME?
The MashupName row goes to the MashupNameField in the Collection, where you'll have a dropdown after you populate it with the InfoTable that contains the MashupName. You can put all the bindings there, even if you don't use them in all the Mashups. For example: {"valueA":"MashupA","valueB":"MashupB"}

IS IT POSSIBLE TO ORDER SECTIONS HORIZONTALLY IN THE COLLECTION?
Sections can only be ordered vertically at this time.

WHAT IS THE DIFFERENCE BETWEEN GLOBAL PROPERTIES AND SESSION VALUES?
Global Properties are only available within the Collection. They offer a way for other widgets displayed alongside the Collection to control what the Collection is displaying.

IF THERE ARE MULTIPLE COLLECTIONS, DO THEY SHARE THE SAME SET OF GLOBALPARAMETERS?
No. If you defined a Boolean on your Collection, when you drag the Boolean output from a checkbox on the Collection you will see that you can bind it to that defined Boolean in the GlobalParameters.

WHEN USING CUSTOM CSS, DO YOU HAVE TO DEFINE STYLING FOR EACH ELEMENT, OR CAN YOU CREATE A STYLE THING WITH CSS?
Widgets differ in functionality, so the same class might not apply to every widget. However, if you define a style in CSS for a button, for example, you can apply that class to any button you want.

DOES CUSTOM CSS ALWAYS OVERRIDE THE WIDGET STYLES?
Yes. That is the essential purpose of custom CSS integration - to rewrite styles.
IF YOU HAVE TWO CONFLICTING STYLES – ONE IN CSS AND THE OTHER IN A STYLE DEFINITION – WHICH ONE TAKES PRECEDENCE? CSS will typically rewrite the ThingWorx styles; however, it depends on the specificity of the CSS target definition. For example: “.button-element" will be overwritten by ".default-button .button-element". Visit https://developer.mozilla.org/en-US/docs/Web/CSS/Specificity for more details regarding this topic.   CAN I RESIZE MY WIDGETS DURING RUNTIME? The size of the widget is determined by the CSS, and how it renders in ThingWorx. While you can bind different sizing classes to the CustomClass property of the widget, resizing with your mouse is not available at Runtime.
View full tip
  The data in your industrial IoT application is only valuable if it tells a story. As a developer, you need to consider the logical connections between graphical elements and business data and determine how users want to consume the information. With the Mashup Builder in ThingWorx, you can rapidly create a custom visualization that displays data from your connected devices. These easy-to-configure widgets deliver real-time data functionality at your fingertips - streamlining, processing, and displaying valuable information for your application users.   In this video, you'll learn how to create immersive, interactive visualizations by utilizing dynamic charts and graphs in your GUI. ThingWorx technical engineer Jason Wyatt demonstrates how to: Create a UI with ThingWorx Composer and Mashup Builder Incorporate visual displays that highlight business data requirements Supply data to components in your Mashup leveraging pre-built widgets and services Implement the Time Series Chart, Pie Chart, Open Street Map, Location Picker, Auto Refresh, Textbox, and Button widgets Watch the recording above, and download this sample Mashup containing all the data and entities shared in the video.   Q&A   We didn’t have time to get to all of the questions during the live webcast, but we’ve answered them here on our blog. Have any additional questions? Please leave us a comment. THIS UI CAPABILITY CLEARLY IS USEFUL FOR PROTOTYPING. AT WHAT SCALE AND / OR COMPLEXITY DO YOU RECOMMEND USING TRADITIONAL PROGRAMMING METHODS? The ThingWorx platform does not require users to utilize the Mashup Builder to create a frontend for their application. Some PTC customers use ThingWorx for the backend and rely on a traditional HTML team to design a custom UI. In that scenario, ThingWorx provides the Edge connectivity and backend storage/organization/business-logic. But, that said, the Mashup Builder can be used to develop production-level sites, especially if you customize State and Style definitions and create a Master mashup. WHAT IS THE DIFFERENCE BETWEEN GETIMPLEMENTINGTHINGS VS. GETIMPLEMENTINGTHINGSWITHDATA VS. QUERYPROPERTYHISTORY? GetImplementingThings provides a list of all Things instantiated from a particular Thing Template. GetImplementingThingsWithData provides the current property values associated with the Things. QueryPropertyHistory provides historical Property values. WHAT ARE THE POSSIBLE SCENARIOS WHERE SERVICEINVOKECOMPLETED CAN BE TRIGGERED? ServiceInvokeCompleted is an Event that occurs when a Mashup Data Service completes. We recommend you use it in any situation where timing is important for proper application execution. For instance, in the application created for the webinar, I wanted QueryPropertyHistory to run after SetProperties was complete. Therefore, I used ServiceInvokeCompleted to trigger the second Service. IS THERE ANY WAY TO DISPLAY TWO INFO TABLE DATA IN SINGLE GRID? This isn't possible by default, but as a workaround you could make a custom Service to combine two InfoTables into one; and then you could apply the combined data to the Grid Widget. CAN I RESIZE MY WIDGETS DURING RUNTIME? You can change any Widget Property at runtime that accepts an incoming data-bind. You can tell which Widget Properties accept dynamic data by looking at the Property itself in the bottom-left section of the Mashup Builder. If the Property has a left arrow pointing at the Property name, then you can bind it to dynamic data and change it during runtime. 
For Widget Properties where this isn't possible (such as some Widgets' Width and Height), you can apply custom CSS. IS THERE AN UNDO FEATURE AVAILABLE ON MASHUP BUILDER? Currently, no, but I believe Undo is a feature request on the R&D radar. HOW DO YOU GET MASHUP INFORMATION UP TO THE MASTER MASHUP? You can pass information between two Mashups when one Mashup pushes a change down to a Thing, then the other Mashup may read that data from that same Thing. Additionally, Mashup Parameters and Session Variables can act like Global Variables that are accessible across multiple Mashups. DOES THE OPAQUE OR MAKING BG COLOR TRANSPARENT WORK? You may set certain Widgets' style in such a way that the Widget itself is visible, but the background of the Widget is transparent. CAN YOU SHOW AGAIN HOW YOU ADDED THE TEXTBOX TO THE PROPERTIES OF SERVICES? You can view the recording of the webinar to see how I made an invisible TextBox set the MaxItems Property of the QueryPropertyHistory Mashup Data Service. I used an invisible TextBox to get a static number, then applied that number to the MaxItems Parameter of QueryPropertyHistory in order to change the Service's functionality. WHAT IS THE PURPOSE FOR SETPROPERTIES? SetProperties is one of several Mashup Data Services that sends information from the Mashup to the backend. DO WE HAVE FILTERS ON THE PIE CHART FOR CHANGING THE VIEW WHEN A VALUE IS SELECTED IN THE FILTER? Yes, there are several operations you may perform when a section of a Pie Chart is selected. I had originally intended to show how you can change the displayed color when you select a particular section of the Pie Chart, but I unfortunately ran out of time. In addition, all three of the display Widgets were tied together: when I selected a section of the Pie Chart, the same piece of data was selected on the Map and Time Series (and vice versa for clicking on the other two). You can import the sample Mashup into your Composer to view the configuration settings. IS THERE WAY TO SET PROPERTIES AND GET DATA THROUGH EMAIL USING THINGWORX? Yes, there are several ways to push information to the ThingWorx backend. For instance, you could have an entirely external process which strips data from an e-mail and then makes a REST call to ThingWorx to archive the data or trigger some Service. And, yes, ThingWorx supports sending and receiving e-mail through an Extension which you may download for free from the ThingWorx Marketplace. CAN THE MAXITEMS OF THE QUERYPROPERTYHISTORY BE APPLIED TO A COMBO BOX / LIST, WHICH CAN BE SELECTED FROM THE MASHUP SCREEN ONLINE? If I'm understanding correctly, you're asking whether or not the MaxItems Parameter of the QueryPropertyHistory Mashup Data Service can be set dynamically. The answer is yes. For instance, that TextBox which I made invisible could have been left visible and given a Label of "Max Items to Display". It would still be tied to the QueryPropertyHistory Service in the same way. But when the TextBox has a new value entered, then that would change the Parameter configuration of QueryPropertyHistory to show whatever had been entered. You would still have to figure out how to call the QueryPropertyHistory Service again. Changing a Parameter just sets up how the Service will behave the next time it is called. DO YOU HAVE TIPS FOR MAKING A PRINT-FRIENDLY MASHUP? 
My only real recommendation would be to use a Static Mashup with a specific resolution (to ensure that everything fit on the page while still looking good), while also setting the Style of various Widgets such that everything was in gray-scale (or something similar) that would be easy on your printer's ink cartridge. HOW WELL DO THE STYLE DEFINITIONS AND CSS WORK WITH TWITTER'S BOOTSTRAP? I'm unfamiliar with Twitter's Bootstrap, but I do know that with Style Definitions and the new CSS functionality, you have a lot of control over exactly how your Mashup looks. So you should be able to configure your Mashup to comply if Twitter has some particular requirements. WHY DO WIDGETS NOT STICK TO THE MASHUP WHEN YOU DRAG THEM INTO A BUSY UI? I HAVE HAD WIDGETS 'FLY' BACK TO THE WIDGET LIST AND HAVE BEEN UNABLE TO GET THEM TO STICK. Unfortunately, that's a known issue. It has something to do with the fact that you're not allowed to drag-and-drop a new Widget on top of an existing Widget. Which is a little strange considering that you explicitly *ARE* allowed to stack Widgets on top of one another after they've been placed in the central Canvas areas. I believe that R&D is investigating. CAN YOU EXPLAIN THE DIFFERENT OPTIONS FOR MASHUP? I believe that this question has something to do with the options in the pop-up when you first create a new Mashup. A Responsive Mashup grows and shrinks to match the viewing-resolution, while Static stays at the resolution you specify. The other options, (Page vs. Template vs. Shape), have to do with a specific setup where you have a Mashup-in-Mashup design. For instance, you could subdivide a Responsive Mashup just as I did in the webinar, and then have one of those sub-sections be an entirely different Mashup. Template and Shape Mashups are used when you have the scenario I describe above with Mashup-in-Mashup, but you want the sub-Mashup to change based off some other metric, such as the selection of a Thing listed in a Grid Widget. You could use some Mashup Data Service like GetImplementingThings. That would return a list of every Thing instantiated from a Thing Template. You could then have a Grid which displays a list of every Thing returned by the GetImplementingThings Service. You could then have a Template Mashup stored within every Thing instantiated from that Template. Whenever a Thing is selected from the Grid, it displays the Template Mashup for that specific Thing. HOW DO YOU HIDE TOOLBAR WHICH ALLOW USER TO SELECT "SHOW/HIDE LOG", "SHOW/HIDE LOG", "RELOAD", ETC. ON YOUR MASHUP SCREEN? Many Widgets have a ""Visible"" Property which may be dynamically set via some other criteria. For instance, you could have a Checkbox Widget, which is great for Boolean values like the Visible Property. You could tie the State Property of the Checkbox Widget to a different Widget's Visible Property. When the Checkbox is checked, the other Widget is visible. When the Checkbox is unchecked, then the other Widget becomes invisible. I WOULD LIKE TO KNOW THE CHALLENGES IN RESPONSIVE MASHUPS VS. STATIC MASHUP DEVELOPMENT. ALSO NEED THE LIST OF WIDGETS NOT SUPPORTED BY THE RESPONSIVE MASHUPS. WHAT IS NEW IN THINGWORX 8.2 FOR RESPONSIVE MASHUPS? The main challenge of a Responsive Mashup is that it's almost necessary to test the Mashup at each resolution that you believe your users may be viewing the page. 
Responsive does a good job of stretching and shrinking… but this can also lead to undesirable situations where you have scroll bars because the viewing-resolution is too small for everything to fit. The main challenge of a Static Mashup is that it really only works at the specific resolution you set. If you have a 300x200 px Static Mashup, it will essentially be unreadable on a 4k display. As for a list, some Widgets *AREN'T* Responsive. The TextBox Widget will not grow and shrink as the viewing-resolution changes, for instance, but you can still use a TextBox in a Responsive Mashup. The big new item for Mashups in 8.2 was the inclusion of CSS and the Collections Widget. Either or both may be used in any Mashup, regardless of whether it's Responsive or Static.
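As a companion to the Template/Shape Mashup answer above, here is a minimal, hedged sketch of the service-side piece: GetImplementingThings returns an InfoTable of every Thing created from a Thing Template, which a Grid Widget can then consume to drive the selected sub-Mashup. The Template name "LineDeviceTemplate" is an assumption for illustration only, not an entity that ships with ThingWorx.

// Custom service body (JavaScript), output base type INFOTABLE
// "LineDeviceTemplate" is a hypothetical Thing Template name
var result = ThingTemplates["LineDeviceTemplate"].GetImplementingThings();
// Bind "result" to a Grid Widget; the Grid's selected row can then drive a Contained Mashup
// that renders each Thing's Template Mashup.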
View full tip
Predictive models:
A predictive model is one of the main techniques used to perform predictive analytics: the development of models that are trained on historical data and make predictions on new data. These models are built in order to analyse current data records in combination with some historical data.

Use of Predictive Analytics in Thingworx Analytics and How to Access Predictive Analysis Functionality via Thingworx Analytics

Bias and variance are the two components of imprecision in predictive models. Bias is a measure of model rigidity and inflexibility, and means that your model is not capturing all the signal it could from the data; bias is also known as under-fitting. Variance, on the other hand, is a measure of model inconsistency: high-variance models tend to perform very well on some data points and very badly on others. This is also known as over-fitting and means that your model is too flexible for the amount of training data you have and ends up picking up noise in addition to the signal.

If your model is performing really well on the training set but much more poorly on the hold-out set, it is suffering from high variance. If it is performing poorly on both the training and test data sets, it is suffering from high bias. (The standard decomposition that formalizes this trade-off is shown after the list of techniques below.)

Techniques to improve:

Add more data: Having more data is always a good idea. It allows the data to speak for itself, instead of relying on assumptions and weak correlations, and generally results in better and more accurate models. The question is when to ask for more data, and that cannot be quantified in general; it depends on the problem you are working on and the algorithm you are implementing. For example, when working with time series data you should look for at least one year of data, and when working with neural network algorithms you are advised to get more training data, otherwise the model won't generalize.

Feature Engineering: Adding new features decreases bias at the expense of variance. New features can help the algorithm explain the variance of the model in a more effective way. When doing hypothesis generation, enough time should be spent on the features required for the model; those features can then be created from the existing data sets.

Feature Selection: This is one of the most important aspects of predictive modelling. It is always advisable to identify the important features and rebuild the model using only the important and significant ones. For example, let's say we have 100 variables. Some of those variables will drive most of the variance of the model. If we select features only on a p-value basis, we may still end up with more than 50 variables; in that case, look at other measures, such as the contribution of each individual variable to the model. If 90% of the model's variance is explained by only 15 variables, then choose only those 15 variables for the final model.

Multiple Algorithms: Choosing the right machine learning algorithm is the ideal approach to achieve higher accuracy, but some algorithms are better suited to particular types of data sets than others. Hence, we should apply all relevant models and compare their performance.

Algorithm Tuning: Machine learning algorithms are driven by parameters, and these parameters strongly influence the outcome of the learning process. The objective of parameter tuning is to find the optimum value for each parameter to improve the accuracy of the model.
To tune these parameters, you must have a good understanding of their meaning and their individual impact on the model. You can repeat this process with a number of well-performing models. For example, in random forest we have various parameters like max_features, number_trees, random_state, oob_score and others; careful optimization of these parameter values will result in better and more accurate models.

Cross Validation: Cross validation is one of the most important concepts in data modeling. The idea is to hold out a sample on which you do not train the model and to test the model on this sample before finalizing it. This method helps achieve more generalized relationships.

Ensemble Methods: This is the most common approach found in the winning solutions of data science competitions. The technique simply combines the results of multiple weak models to produce better results, and it can be achieved in many ways.

Bagging: uses several versions of the same model trained on slightly different samples of the training data to reduce variance without any noticeable effect on bias. Bagging can be computationally intensive, especially in terms of memory.

Boosting: is a slightly more complicated concept and relies on training several models successively, each trying to learn from the errors of the models preceding it. Boosting decreases bias and hardly affects variance.
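For reference, the bias/variance trade-off described above is usually formalized through the standard decomposition of expected squared prediction error, stated here for a fitted model \hat{f} and noisy target y, with \sigma^2 denoting the irreducible noise in the data:

E\big[(y - \hat{f}(x))^2\big] = \mathrm{Bias}\big[\hat{f}(x)\big]^2 + \mathrm{Var}\big[\hat{f}(x)\big] + \sigma^2

High-bias (under-fit) models are dominated by the first term and high-variance (over-fit) models by the second; most of the techniques listed above work by trading one term against the other.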
View full tip
The Squeal functionality has been discontinued with ThingWorx 8.1, see the ThingWorx 8.1.0 Release Notes.

There might be scenarios where it should be disabled in earlier versions as well. This can be achieved e.g. with Tomcat Security Constraints. To add such a constraint, open <Tomcat>\webapps\Thingworx\WEB-INF\web.xml and, at the end of the file, add a new constraint just before the closing </web-app> tag:

<security-constraint>
  <web-resource-collection>
    <web-resource-name>Forbidden</web-resource-name>
    <url-pattern>/Squeal/*</url-pattern>
  </web-resource-collection>
  <auth-constraint/>
  <user-data-constraint>
    <transport-guarantee>CONFIDENTIAL</transport-guarantee>
  </user-data-constraint>
</security-constraint>

Save the file and restart Tomcat. Accessing the /Thingworx/Squeal resource now results in an error message:

HTTP Status 403 - Access to the requested resource has been denied

One scenario to be aware of is when web.xml changes, e.g. due to a ThingWorx update or other manual edits. In such a case, ensure that the security constraint is still present in the file and taken into account.
View full tip
DataShape

Simply put, a DataShape represents the data in your model, giving your application a built-in sense of how to represent that data in different scenarios. A DataShape is defined with a set of field definitions and related metadata, e.g. the DataType of each field. DataShapes are a must-have (except for ValueStreams) when creating entities that deal with data storage, i.e. DataTables and Streams (see the storage sketch at the end of this tip). For more detail on DataShapes and the DataTypes, see DataShapes in the ThingWorx Help Center.

Note: See ThingShape : Nuances, Tips & Tricks for ThingShape vs DataShape.

Ways to create a DataShape

Via the ThingWorx Composer
1. Navigate to ThingWorx Composer and click New > DataShape.
2. Provide a unique name for the DataShape entity. (DataShape creation in ThingWorx Composer)
3. Navigate to the Field Definitions section and add the required Field Definitions. (Defining Fields for the DataShape)

Via a custom service in ThingWorx
1. Navigate to an entity under which the service is to be created, e.g. a Thing.
2. Switch to the Services section for the Thing and click Add to create a new service.
3. The OOTB service CreateDataShape can be used from Resources > EntityServices:

// snippet creating an InfoTable based on an existing DataShape
var params = {
    infoTableName : "InfoTable",
    dataShapeName : "DemoDataShape"
};
// CreateInfoTableFromDataShape(infoTableName:STRING("InfoTable"), dataShapeName:STRING):INFOTABLE(DemoDataShape)
var result = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);

// snippet creating the DataShape using the InfoTable queried above,
// which carries the fields and the metadata on those fields;
// DSName is a STRING variable holding the name of the new DataShape
Resources["EntityServices"].CreateDataShape({
    name: DSName /* STRING */,
    description: "Custom created DataShape" /* STRING */,
    fields: result /* INFOTABLE */,
    tags: undefined /* TAGS */
});

Here's how it'd appear in the Service editor. (DataShape creation with JavaScript service in ThingWorx)

Via the ThingWorx Extension SDK

The following example snippet shows the creation and usage of a DataShape when building a custom extension with the Extension SDK:

@ThingworxConfigurationTableDefinitions(tables = {
    @ThingworxConfigurationTableDefinition(name = "ConfigTableExample1", description = "Example 1 config table", isMultiRow = false,
        dataShape = @ThingworxDataShapeDefinition(fields = {
            @ThingworxFieldDefinition(name = "field1", description = "", baseType = "STRING"),
            @ThingworxFieldDefinition(name = "field2", description = "", baseType = "NUMBER")
        }))
})

Note: Refer to ThingShape : Nuances, Tips & Tricks for related tips and tricks.

Other related reads
How are DataShape used while storing the data in ThingWorx
How to pass a DataShape as parameter
Can two DataShapes have the same service name if used on the same thing in ThingWorx?
DataShape in ThingWorx Help Center
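To round out the storage use case mentioned above, here is a minimal, hedged sketch of how a DataShape is consumed when writing to a DataTable. The DataTable name "MyDataTable" is hypothetical, and it is assumed to have been created with a DataShape containing a STRING field "field1" and a NUMBER field "field2"; substitute the field names from your own DataShape.

// CreateValues() returns an empty InfoTable whose columns match the DataTable's DataShape
var row = Things["MyDataTable"].CreateValues();
row.AddRow({ field1: "sample", field2: 42 });

// AddDataTableEntry persists the row; the DataShape tells the platform how each field is typed and stored
var key = Things["MyDataTable"].AddDataTableEntry({
    values: row /* INFOTABLE matching the DataTable's DataShape */,
    sourceType: "Service" /* STRING */,
    source: "DataShapeDemo" /* STRING */,
    tags: undefined /* TAGS */
});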
View full tip
Connect and Monitor Industrial Plant Equipment Learning Path

Learn how to connect and monitor equipment that is used at a processing plant or on a factory floor.

NOTE: Complete the following guides in sequential order. The estimated time to complete this learning path is 180 minutes.

1. Create An Application Key
2. Install ThingWorx Kepware Server
3. Connect Kepware Server to ThingWorx Foundation (Part 1, Part 2)
4. Create Industrial Equipment Model
5. Build an Equipment Dashboard (Part 1, Part 2)
View full tip
Design Your Data Model Guide Part 2

Step 4: Data Sources – Component Breakout

Once you have a full list of Things in your system (as well as requirements for each user), the next step is to identify the information needed from each Thing (based on the user's requirements). This involves evaluating the available data and functionality for each Thing. You then align the data and functionality with the user's requirements to determine exactly what you need, while eliminating what you do not. This is important, as there can be cost and security benefits to collecting only the data you need and leaving out what you don't.

NOTE: Remember from the Data Model Introduction that a Thing's Components include Properties, Services, Events, and Subscriptions.

Factory Example

Using the Smart Factory example, let's go through the different Things and break down each Thing's Components that are needed for each of our users.

Conveyor Belts

The conveyor belt is simple in operation but could potentially have a lot of available data.

Maintenance Engineer - needs to know granular data for the belt and if it has any alerts
- emergency shutdown (service)
- machine state (on/off) (property)
- serial number (property)
- last maintenance date (property)
- next scheduled maintenance date (property)
- power consumption (property)
- belt speed (property)
- belt motor temp (property)
- belt motor rpm (property)
- error notification (event)
- auto-generated maintenance requests (subscription)

Operator - needs to know if the belt is working as intended
- belt speed (property)
- alert status (event)

Production Manager - wants access to the data the Operator can see but otherwise has no new requirements

Robotic Arm

The robotic arm has 3 axes of rotation as well as a clamp hand.

Maintenance Engineer - needs to know granular data for the arm and if it has any alerts
- time since last pickup (property): how long it has been since the last part was picked up by this hand
- product count (property): how many products the hand has completed
- emergency shutdown (service)
- machine state (on/off) (property)
- serial number (property)
- last maintenance date (property)
- next scheduled maintenance date (property)
- power consumption (property)
- arm rotation axis 1 (property)
- arm rotation axis 2 (property)
- arm rotation axis 3 (property)
- clamp pressure (property)
- clamp status (open/closed) (property)
- error notification (event)
- auto-generated maintenance requests (subscription)

Operator - needs to know if the robotic arm is working as intended
- clamp status (open/closed) (property)
- error notification (event)
- product count (property): how many products the hand has completed

Production Manager - wants access to the data the Operator can see but otherwise has no new requirements

Pneumatic Gate

The pneumatic gate has two states, open and closed.

Maintenance Engineer - needs to know granular data for the gate and if it has any alerts
- emergency shutdown (service)
- machine state (on/off) (property)
- serial number (property)
- last maintenance date (property)
- next scheduled maintenance date (property)
- power consumption (property)
- gate status (open/closed) (property)
- error notification (event)
- auto-generated maintenance requests (subscription)

Operator - needs to know if the pneumatic gate is working as intended
- gate status (open/closed) (property)
- error notification (event)

Production Manager - wants access to the data the Operator can see but otherwise has no new requirements

Quality Control Camera

The QC camera uses visual checks to make sure a product has been constructed properly.

Maintenance Engineer - needs to know granular data for the camera and if it has any alerts
- machine state (property): on/off
- serial number (property)
- last maintenance date (property)
- next scheduled maintenance date (property)
- power consumption (property)
- current product quality reading (property)
- images being read (property)
- settings for production quality assessment (property)
- error notification (event)
- auto-generated maintenance requests (subscription)
- product count (property): how many products the camera has seen

Operator - needs to keep track of the quality check results and of any problems with the camera setup
- settings for production quality assessment (property)
- error notification (event)
- bad quality flag (event)
- product count (property): how many products the camera has seen

Production Manager - wants access to the data the Operator can see but otherwise has no new requirements

Maintenance Request System Connector

Determining the data needed from the Maintenance Request System is more complex than for the physical components, as it will be much more actively used by all of our users. It is important to note that the required functionality already exists in our system as is, but bridges need to be created to connect it to a centralized system.

Maintenance Engineer - needs to receive and update maintenance requests
- maintenance engineer credentials (property): authentication with the maintenance system
- endpoint configuration for connecting to the system (property)
- get unfiltered list of maintenance requests (service)
- update description of maintenance request (service)
- close maintenance request (service)

Operator - needs to create and track maintenance requests
- operator credentials (property): authentication with the maintenance system
- endpoint configuration for connecting to the system (property)
- create maintenance request (service)
- get filtered list of maintenance requests for this operator (service)

Production Manager - needs to monitor the entire system (both the creation and tracking of maintenance requests) and to prioritize maintenance requests to keep operations flowing smoothly
- production manager credentials (property): authentication with the maintenance system
- endpoint configuration for connecting to the system (property)
- create maintenance request (service)
- get unfiltered list of maintenance requests (service)
- update priority of maintenance request (service)

Production Order System Connector

Working with the Production Order System is also more complex than working with the physical components of the lines, as it will be more actively used by two of the three users. It is important to note that the required functionality already exists in our production order system as is, but bridges need to be created to connect it to a centralized system.
Maintenance Engineer - will not need to know anything about production orders, as they are outside the scope of their job needs

Operator - needs to know which production orders have been set up for the line, and needs to mark orders as started or completed
- operator credentials (property): authentication with the production order system
- endpoint configuration for connecting to the system (property)
- mark themselves as working a specific production line (service)
- get a filtered list of production orders for their line (service)
- update production orders as started/completed (service)

Production Manager - needs to view the status of all production orders and who is working on which line
- production manager credentials (property): authentication with the production order system
- endpoint configuration for connecting to the system (property)
- get a list of production lines with who is working them (service)
- get the list of production orders with filtering options (service)
- create new production orders (service)
- update existing production orders for quantity and priority (service)
- assign a production order to a production line (service)
- delete production orders (service)

Step 5: Data Sources – Thing-Component Matrix

Now that you have identified the Components necessary to build your solution (as well as the Things involved in enabling said Components), you are almost ready to create your Data Model design. Before moving on to the design, however, it is very helpful to get a good picture of how these Components interact with different parts of your solution. To do that, we recommend using a Thing-Component Matrix: a grid in which you list Things in rows and Components in columns. This allows you to identify where there are overlaps between Components, and from there you can break those Components down into reusable Groups. Really, all you're doing in this step is taking the list of individual Things and their corresponding Components and organizing them. Instead of thinking of each item's individually-required functionality, you are now thinking of how those Components might interact and/or be reused across multiple Things.

Sample Thing-Component Matrix

As a generic example, look at the chart presented here. You have a series of Things down the rows and a series of Components (i.e. Properties, Services, Events, and Subscriptions) across the columns. This allows you to visually identify how some of those Components are common across multiple Things (which is very important in determining our recommendations for when to use Thing Templates vs. Thing Shapes vs. directly-instantiated Things). If we were to apply this idea to our Smart Factory example, we would create two sections of our Thing-Component Matrix: the Overlapping versus the Unique Components.

NOTE: It is not necessary to divide your Thing-Component Matrix between Overlapping and Unique if you don't wish to do so. It is done here largely for the sake of readability.

Overlapping Matrix

This matrix represents all the overlapping Components that are shared by multiple types of Things in our system:

Unique Matrix

This matrix represents the Components unique to each type of Thing:

Step 6: Model Breakdown

Breaking down your use case into a Data Model is the most important part of the design process for ThingWorx. It creates the basis over which every other aspect of your solution is overlaid. To do it effectively, we will use a multi-step approach.
This will allow us to identify parts we can group and separate, leading to a more modular design.

Entity Relationship Diagram

To standardize the representation of Data Models, it is important to have a unified view of what a representation might look like. For this example, we have developed an Entity Relationship Diagram schematic used for Data Model representation. We will use this representation to examine how to build a Data Model.

Breakdown Process

ThingWorx recommends following an orderly system when building the specifics of your Data Model. You've examined your users and their needs. You've determined the real-world objects and systems you want to model. You've broken down those real-world items by their Component functionality. Now, you will follow these steps to build a specific Data Model for your application:

Step 1: Prioritize the Groups of Components from your Thing-Component Matrix by each Group's Component quantity.
Step 2: Create a base Thing Template for the largest Group.
Step 3: Iterate over each Group, deciding which entity type to create.
Step 4: Validate the design through instantiation.

In the next several pages, we'll examine each of these steps in depth; a brief instantiation sketch for Step 4 is also included at the end of this article.

Click here to view Part 3 of this guide.
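As a quick illustration of Step 4 (validating the design through instantiation), the sketch below shows what a programmatic smoke test might look like once the Templates exist. It is only a hedged example: the Template name "ConveyorBeltTemplate", the Thing name "ConveyorBelt_101", and the Property "beltSpeed" are hypothetical names taken from the factory example above, not entities that ship with ThingWorx.

// Create a test Thing from the candidate Template, then enable and restart it
Resources["EntityServices"].CreateThing({
    name: "ConveyorBelt_101" /* STRING */,
    description: "Design-validation instance" /* STRING */,
    thingTemplateName: "ConveyorBeltTemplate" /* THINGTEMPLATENAME */,
    tags: undefined /* TAGS */
});
Things["ConveyorBelt_101"].EnableThing();
Things["ConveyorBelt_101"].RestartThing();

// Spot-check that a Property from the Component breakout is actually inherited
var speed = Things["ConveyorBelt_101"].beltSpeed;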
View full tip