IoT Tips

This post is part of the series Forced Root Cause Monitoring via Mashups and Modal Popups. To not feel lost or out of context, it's recommended to read the main post first.

Create Entities

AlertStateDefinition

Create a new StateDefinition called "rcp_AlertStateDefinition". In the State Information tab, select Apply State: Numeric from the list on the right hand side, then:

- Create a new State: Less than or equal to "1"; Display Name: "Something good"; Style: a new custom style with text color #f5b83d (orange)
- Create a new State: Less than or equal to "2"; Display Name: "Something bad"; Style: a new custom style with text color #f55c3d (red)
- Create a new State: Less than or equal to "3"; Display Name: "Something ugly"; Style: a new custom style with text color #ad1f1f (red) with a Font Bold
- Edit the "Default" State and set its Style to a new custom style with text color #36ad1f (green). We will not use this style, but in case we need a default configuration it will blend into the color schema.

Save the StateDefinition.

ValueStream

Create a new ValueStream called "rcp_ValueStream" (choose a default ValueStream, not a RemoteValueStream). Save the ValueStream.

AlertThing

Create a new Thing called "rcp_AlertThing", based on the Generic Thing Base Thing Template and using the rcp_ValueStream Value Stream. In the Properties and Alerts tab create the following Properties:

- Name: "trigger"; Base Type: BOOLEAN; with a Default Value of "false"; check the "Persistent" checkbox
- Name: "selectedReason"; Base Type: NUMBER; check the "Persistent" and "Logged" checkboxes; Advanced Settings: Data Change Type: ALWAYS

In the Services tab create a new Service:

- Name: "clearTrigger"; no Inputs and no Outputs
- Service code: me.trigger = false;

When this service is executed, it will set the trigger Property to false. Click Done to complete the Service creation, then save the Thing.
This post is part of the series Forced Root Cause Monitoring via Mashups and Modal Popups. To not feel lost or out of context, it's recommended to read the main post first.

Before we start

Create a new Project called "RootCausePopups" and save it. In the New Composer set the Project Context (top left box) to the "RootCausePopups" project. This will automatically add all of our new Entities into our project. Otherwise we would have to add each Entity manually on creation.
This post is part of the series Forced Root Cause Monitoring via Mashups and Modal Popups. To not feel lost or out of context, it's recommended to read the main post first.

Required Logic

The following logic realizes this particular use case:

1. The trigger property on the AlertThing switches from false to true.
2. The MashupMain receives dynamic Property updates via the AlertThing.GetProperties service. It validates the value of the trigger Property and, if it's true, the MashupMain shows the MashupPopup as a modal popup. A modal popup is exclusively in the foreground, so the user cannot interact with anything else in the Mashup except the modal popup.
3. In the modal popup the user chooses one of the pre-defined AlertStateDefinitions. When a State is selected, the popup sets the State as a Mashup Parameter, passes it to the MashupMain and closes itself.
4. When the MashupPopup is closed, the MashupMain reads the Mashup Parameter.
5. The MashupMain sets the selectedReason in the AlertThing to the selected value. It also resets the trigger property to false, which allows the property to be set to true again to trigger another forced popup (see the sketch below).
6. On any value change the AlertThing stores the selectedReason State in a ValueStream to capture historic information on which root causes were selected at which time.
7. The ValueStream information is displayed as a table in a GridWidget in the MashupMain once the new properties have been set.
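As a minimal sketch of step 5, the two property writes could also be combined into a single helper service on rcp_AlertThing - note that the service name recordReason and its input are illustrative assumptions; the original series does both writes via Mashup bindings instead:

// Hypothetical service "recordReason" on rcp_AlertThing
// Input: reason (NUMBER) - the root cause selected in the modal popup
me.selectedReason = reason; // Logged property: each change is written to rcp_ValueStream
me.trigger = false;         // Reset the trigger so the next alert can force the popup again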
This post is part of the series Forced Root Cause Monitoring via Mashups and Modal Popups. To not feel lost or out of context, it's recommended to read the main post first.

Required Entities

In this simplified example we'll just use a Thing to set a status triggering the popup. This Thing will have two properties and one service:

Properties
- trigger (Boolean) - indicates if an error status is present; if so, the popup is triggered
- selectedReason (Number) - indicates the selected reason / root cause chosen in the modal popup

Service
- clearTrigger - resets the trigger to "false" once a reason has been selected

The selectedReason will be logged into a ValueStream. In addition to the Thing and the ValueStream we will need a StateDefinition to pre-define potential root causes to be displayed in the popup. We will use three states in a traffic-light fashion to indicate the severity of the issue in a custom color schema. To display the monitoring Mashup and the popup we will need two Mashups.
Warning

This post is quite long, has various chapters and you might get bored reading it. If you just want a summary, read the "Use case" and "Conclusion" chapters - and maybe the "Required Logic" chapter, because I made a cool graph for it. The rest is all about implementation...

Introduction

I recently had the opportunity to deliver a ThingWorx training for Saint Gobain. One of the use cases for their ThingWorx application is monitoring machine errors and outages on the production line. If an outage or error status is triggered, the machine operator will see a popup on the monitoring screen where he is forced to select a root cause. This root cause will then be persisted in ThingWorx for further data transformation, analytics and reporting - like cost analysis or optimization opportunities. During the training we also discussed how such forced root cause monitoring can be implemented via Mashups and the usage of modal popups. I've compiled the details into this post as it might also interest other developers. The ThingWorx Entities I'm using in this example can be downloaded from here.

Note: I'm using the word "Alert" here - but not in the context of a ThingWorx Property Alert... just beware to not be confused by the wording.

Use Case

One of the requirements for Saint Gobain's IoT Solution was interactive alert monitoring directly in the factory on the production machines. Let's say the machine has stopped - the root cause should be recorded. For this an interactive popup will be displayed on the machine's monitoring display and an employee has to choose the root cause from a pre-defined list. This could be a planned outage, e.g. for maintenance, or an unplanned outage, e.g. material jam. The root cause will then be recorded and a history of outage causes can be stored in a ThingWorx ValueStream. This can later be analyzed, e.g. with ThingWorx Analytics capabilities, to understand and optimize the machine's production capabilities and efficiency. As the root cause must be entered, the popup will be forced to display when a certain condition / criteria is met - and it will only disappear when a root cause is chosen. The user should not be able to interact with any other elements of the Mashup or to just close the popup. The popup will close itself and reset the initial condition once the root cause has been identified and chosen.

Requirements

- Required Entities
- Required Logic

Note: Just to make it easier to manage and export Entities, I will add all of the created elements to a new Project called RootCausePopups. All of the elements will have an "rcp_" prefix in their name - just to make it easier for me to find and identify them.

Implementation

- Before we start - set a Project Context
- Create Entities
- Create a Popup Mashup
- Create the Main Mashup
- Testing the Mashups

Conclusion

Certain conditions (like the state of a checkbox) can be used to trigger modal popups. A modal popup forces a user interaction and will not offer any other option until a choice is made. With these parameters it's easy to get a mandatory reaction from users when it's important to capture data which relies on the analysis of an engineer or a user - e.g. reasons for machine outages. Using this technique there's not much training required for staff, other than pushing a button with an option of their choice - this saves quite some time compared to capturing data in any other way (e.g. updating Excel files or manual pen-and-paper techniques). As this data is now part of the ThingWorx instance, it can be used for further transformation, analysis or just for monitoring purposes.

There are of course more possibilities when it comes to states and formatting which would exhaust the context of this post - but feel free to explore...

- In the example we wouldn't need the textbox, but it's there to demonstrate whether the correct values are persisted or not.
- In the example we could of course also set the visibility of the checkbox to false, so that we would only see the popup and the Grid holding historic information.
- We could also create different StateDefinitions to color-format / text-format the input differently from the output in the Grid.

If you found this interesting (and actually made it to the end of this post) - feel free to play with this concept a bit more... The dependencies might seem a bit difficult, but it should be easy to implement and to adjust to your own ideas and requirements.
As it is not available on support.ptc.com, please provide Creo View and Thing View Widget documentation or a guide on how to view a 3D object in a custom mashup/UI outside of the ThingWorx Navigate app.

I am posting this request to the community - not to the ThingWorx developers portal - after discussing it with PTC technical support. Please refer to article CS291582.

LeighPTC, I have no option but to post to the community again, but this has happened: the post "Creo View and Thing View Widget Documentation to view 3D Object in custom mashup/UI" was moved by LeighPTC. Please don't move this request to the ThingWorx developers portal, so that PTC customers can have Creo View and Thing View Widget documentation to view 3D objects in a custom mashup/UI, as it is not available.

Many thanks, Rahul
Exciting news! ThingWorx now has improved support for Docker containers to help you manage CI/CD, improve development efficiency in your organization and save costs. Check out these FAQs below and, as always, reach out to me if you have any additional questions.

Stay connected, Kaya

FAQs: ThingWorx Docker Containers

What are Docker containers?
From Docker.com: "a Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings". Learn more here.

What's the difference between Docker containers and VMs?
Containers are an abstraction at the app layer that packages code and dependencies together, whereas Virtual Machines (VMs) are an abstraction of physical hardware turning one server into many servers. There are some great discussions on this on Stack Overflow.

How can I build ThingWorx Docker images?
Check out the Building ThingWorx 8.3 Docker Images Guide or watch the accompanying video on how to build and test Docker containers.

How does PTC support building ThingWorx Docker images?
PTC provides the ability for customers and partners to build ThingWorx Docker images. A customer can download the Dockerfiles and scripts packaged as a zip folder from the PTC Software Downloads Portal under "ThingWorx Platform," then "Release 8.3," then "ThingWorx Dockerfiles." (Please note that you must be logged in for the link to function properly.) The zip folder contains the Dockerfiles, template jar, and scripts to fetch Tomcat and the ThingWorx WAR files using the CLI. Java must be downloaded manually from the vendor's website. We also provide an instructional guide called "Build ThingWorx Docker Images" available on the Reference Documents page on the Support Portal.

How are ThingWorx Docker images different from the usual delivery media of WAR files?
The WAR file delivery is typically accompanied by an installation guide that contains the manual steps for creating the VM or bare-metal environment. That guide includes instructions for the administrator to manually install the prerequisites, including Tomcat, Java, and ThingWorx platform settings files. To deploy and run the WAR file, the administrator follows the guide to create the runtime environment on an OS. In contrast, the Dockerfile build in this delivery automates the creation of a Docker image once supplied with the prerequisites.

Do you have any reference deployment and guidance?
Yes, you can refer to our blog post to learn how to deploy and run ThingWorx Docker containers on your existing Kubernetes environment.

Is there any recommendation on which Container Orchestrator as a Service (CaaS) a customer should run ThingWorx Foundation Docker container images on?
You can use Docker Compose for testing, but it is generally not suggested for production deployment use cases. In a production environment, customers should use container orchestrators such as Kubernetes, OpenShift, Azure Kubernetes Service (AKS), or Amazon Elastic Container Service for Kubernetes (Amazon EKS) to deploy and manage ThingWorx Docker images.

What are the skill sets required?
Familiarity with the OS CLI and Docker tools is required to build the ThingWorx Docker images. Familiarity with Docker Compose is needed to run the resulting containers and test the builds. We don't recommend Docker Compose for production use, but when using it for local testing and demo purposes, users can rapidly install ThingWorx and get it up and running in minutes. We expect PTC partners and customers who want to run ThingWorx containerized instances in their production environment to possess the required skill sets within their DevOps team.

How is ThingWorx licensing handled with the Docker images?
By default, the container created from these Docker images starts up in a limited mode with no license supplied. You can configure your username and password for the PTC licensing portal to automatically load a license via environment variables passed into the container on startup. Additionally, you can mount a volume to the /ThingworxPlatform directory, which contains your license file, or to retrieve a license request. To keep your Host ID consistent, ensure that the /ThingworxStorage and /ThingworxPlatform directories are persisted and not removed with individual container restarts. More detailed instructions can be found in the build guide or in a Kubernetes blog post.

Is Docker free? What version of Docker does PTC support for ThingWorx?
Docker is open-source and licensed under the Apache 2 license. Information on Docker licensing can be found here. The following Docker versions are required:
- Docker Community Edition (docker-ce): version 18.05.0-ce is recommended. To install the Docker Community Edition on your system, follow the instructions for your operating system on the Docker website.
- Docker Compose (docker-compose): version 1.17.1 is recommended. To install Docker Compose on your system, follow the instructions for your operating system on the Docker website.

What persistence providers are currently supported?
PTC provides the ability to build ThingWorx Foundation containers for the following supported persistence providers:
- H2
- Microsoft SQL Server
- PostgreSQL
Additional persistence providers will be added to the Docker build delivery as the ThingWorx Foundation Platform releases support for those new databases in future releases.

What are some of the security best practices?
For production use, customers are strongly advised to secure their Docker environments by following all the recommendations provided by Docker. Review and implement the best practices detailed at https://docs.docker.com/engine/security/security/.

Can we build Docker images for ThingWorx High Availability (HA) architecture?
Yes. ThingWorx Dockerfiles are provided for both the basic ThingWorx deployment architecture and the HA ThingWorx deployment architecture.

How easy is the rehosting and upgrading of ThingWorx releases on Docker with existing data?
In a Kubernetes environment, data is kept in a separate volume and can be attached to different containers. When one container dies, the data can be attached to a different container and the container should start without issue. For more information, please refer to the upgrade section of the Building ThingWorx 8.3 Docker Images Guide.

Is it okay to use Docker exec and access the bash shell to make config changes, or should I always rebuild the image and re-deploy?
Although using Docker exec to gain access to the container internals is useful for testing and troubleshooting issues, any changes made will not be saved after a container is stopped. To configure a container's environment, variables are passed in during the start process. This can be done with Docker start commands, using compose files with environment variables defined, or with Helm charts. More detailed instructions can be found in the build guide or in this blog post.

What if there are issues? Should I call PTC Technical Support?
We are providing the scripts and reference documents solely to empower our community to build ThingWorx Docker images. We believe that customers using Docker in their production processes have the expertise to manage running Docker containers themselves. If there are any issues or questions regarding the build scripts provided in the PTC official downloads portal, customers can contact PTC Technical Support at 1-800-477-6435 or visit us online at http://support.ptc.com. PTC does not provide support for orchestration troubleshooting.

What can you share about future roadmap plans?
As we are enabling our customers and partners to build ThingWorx Foundation Platform Docker images, we plan to do the same for upcoming products such as ThingWorx Integration & Orchestration and ThingWorx Analytics, and for upcoming persistence providers such as InfluxDB. We also plan to provide additional reference architecture examples and use cases to help developers understand how to use Docker containers in their DevOps and production environments.

Where can I learn more about Docker containers and container orchestrators?
See these resources for additional information:
- https://training.docker.com/
- https://kubernetes.io/docs/tutorials/online-training/overview/
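As an illustrative sketch of running a locally built image with the license and storage directories persisted (the image name and tag are placeholders for your own build; the flags are standard Docker CLI):

# Persist /ThingworxPlatform and /ThingworxStorage in named volumes so the Host ID stays consistent
docker run -d --name thingworx \
  -p 8080:8080 \
  -v twx-platform:/ThingworxPlatform \
  -v twx-storage:/ThingworxStorage \
  my-thingworx-foundation:8.3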
I got this excellent question and I thought it worthwhile to put my answer here as well.

There are two ways to segregate information between clients. By default we use a 'software' approach to segregation by using Organizations. This allows you to assign a Client to an Organization / Organizational node and give those nodes 'visibility' to specific entities within the software. This means that - through software logic - users can only see what they've been given visibility to see. This mainly applies to all the client's equipment (ThingWorx Things): they can only see their own equipment. This also applies to a specific set of their data, namely ValueStream data, because that can only be retrieved from the perspective of a ThingWorx Thing.

Blogs/Wikis/DataTables/Streams can store data across clients and do not utilize visibility on a row basis; in this case appropriate queries need to be created to allow retrieval for only a specific client. Here we use a security construct that utilizes what we call the 'system user' and wrapper routines that work off the CurrentUser context. This allows you to create enforced, validated queries against the data so that a user can only retrieve their specific data.

In regards to the data itself, if you need to, you can provide actual 'physical' segregation by using multiple persistence providers and mapping Blogs/Wikis/DataTables/Streams/ValueStreams to different persistence providers. Persistence providers are basically additional database schemas (in one and the same database or in different databases) of the ThingWorx data storage schema, allowing you to completely separate the location of where data is stored between clients. Note that just creating unique Blogs/Wikis/DataTables/Streams/ValueStreams per client and using visibility is still only a logical / software way of providing segregation, because the data will be stored in one and the same database schema, also known as the ThingWorx data persistence provider.
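A minimal sketch of such a wrapper routine, assuming a DataTable with a clientId field and some way to resolve the caller's client (the Thing names, the field name and the mapping helper are illustrative assumptions):

// Wrapper service: returns only the rows belonging to the calling user's client
var userName = Resources["CurrentSessionInfo"].GetCurrentUser();
// Hypothetical helper that maps a user to their client identifier
var clientId = Things["ClientMapping"].GetClientForUser({ user: userName });
// Enforced query: the caller never controls the clientId filter
var query = {
    filters: { type: "EQ", fieldName: "clientId", value: clientId }
};
var result = Things["ClientDataTable"].QueryDataTableEntries({
    maxItems: 1000,
    query: query
});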
In this session, we pick up where we left off with the mashup which was worked on in part 1 of our Advanced Mashup Expert Session series. Specifically, we will explore the concepts of master mashups, session variables, and media entities, using each to further enhance the look and feel of our mashup.     For full-sized viewing, click on the YouTube link in the player controls. Visit the Online Success Guide to access our Expert Session videos at any time as well as additional information about ThingWorx training and services.
This video builds upon the mashup created in the basic session, and strives to create a more polished, user-friendly interface that is ready for deployment. In part 1, we’ll take a look at advanced layout designs and include a more varied set of widgets to help draw attention to some of the more pertinent properties being captured within the mashup.   For full-sized viewing, click on the YouTube link in the player controls. Visit the Online Success Guide to access our Expert Session videos at any time as well as additional information about ThingWorx training and services.  
Database backups are vital when it comes to ensuring data integrity and data safety. PostgreSQL offers simple solutions to generate backups of an existing ThingWorx database instance and to recover them when needed.

Please note that this does not replace a proper and well-defined disaster recovery plan. Export and import are part of such a strategy, but do not reflect the complete strategy. The commands used in this post are for Windows, but can be adjusted to work on Linux-based systems as well.

Backup

To create a backup, the export / dump functionality of PostgreSQL can be used:

pg_dump -U postgres -C thingworx > thingworxDump.sql

The -C option will include the statement to create the database in the .sql file and map it to the existing tablespace and user (e.g. 'thingworx' and 'twadmin'). The tablespace and user can be seen in the .sql file in the line with "ALTER DATABASE <dbname> OWNER TO <user>;"

In the above example, we're backing up the thingworx database schema and dumping it into the thingworxDump.sql file. Tablespace, username & password are also included in the platform_settings.json.

Restore

To restore the database, we assume an empty PostgreSQL installation. We need to create the DB schema user first via the following command line:

psql -U postgres -c "CREATE USER twadmin WITH PASSWORD 'ts';"

With the user created, we can now re-generate the tablespace and grant permissions to the twadmin user:

psql -U postgres -c "CREATE TABLESPACE thingworx OWNER twadmin LOCATION 'C:\ThingWorx\ThingworxPostgresqlStorage';"
psql -U postgres -c "GRANT ALL PRIVILEGES ON TABLESPACE thingworx TO twadmin;"

Finally the database itself can be restored by using the following command line:

psql -U postgres < thingworxDump.sql

This will create the database and populate it with tables, functions and sequences, and will also restore the data from the .sql file.

It is important that database, username and tablespace match the original system - otherwise granting permissions and re-creating data might fail on recovery. User and tablespace can also be reused, so only the database has to be deleted before restoring it.

Part of the strategy

Part of the backup and recovery strategy should be to actually test the backup as well as the recovery part of the procedure. It should be well tested on a test environment before being deployed to any production environment. Take backups on a regular basis and test for disaster recovery once or twice a year to ensure the procedure is still valid.

Data is the most important resource in your application - protect it!
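As a sketch of automating the export on a Linux host (the schedule, the target path and the ~/.pgpass setup for passwordless authentication are assumptions, not part of the original post):

# crontab entry: nightly dump at 02:00 with a dated file name (% must be escaped in cron)
0 2 * * * pg_dump -U postgres -C thingworx > /backups/thingworxDump_$(date +\%F).sql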
The purpose of this document is to show how you can set up an MXChip IoT DevKit and how to send the readings of this microprocessor to ThingWorx through an Azure cloud server. You will also learn how to view the values that are being sent.
A fresh look at getting started with ThingWorx in a relevant context that outlines the DevOps needed to kick-start your programming.     For full-sized viewing, click on the YouTube link in the player controls. Visit the Online Success Guide to access our Expert Session videos at any time as well as additional information about ThingWorx training and services.
The Help Center describes how to control file transfers from the edge client using the EdgeControlled ThingShape.

The EdgeControlled ThingShape is a default entity included with ThingWorx that allows you to manage the amount of egress being sent from the platform to the Edge.

At the time of writing this post, the available 'When Disconnected' settings for a remotely bound property in ThingWorx are 'Fold' and 'Ignore'. Setting a property to 'Fold' while using this EdgeControlled ThingShape is necessary whether the device is connected all the time or only for brief updates.

To use this ThingShape in a real world scenario you might code your edge client to invoke the DequeueEgress REST API function available through this ThingShape. The parameter you pass in is the number of messages you would like the client to receive. The result of this function is how many messages the platform then actually sent.

A quick setup:
1. Create a RemoteThing entity in ThingWorx
2. Create an ApplicationKey entity in ThingWorx
3. Set up an edge client to bind to that RemoteThing using the specified ApplicationKey
4. Manage Bindings on the Properties page of the RemoteThing, and pull in a few properties you would like to send property updates to
5. Set the 'When Disconnected' value to 'Fold' for each property you want to queue messages for
5a. Set any other settings on the properties you'd like, i.e. persistence, logged
6. Save the Thing
7. Add the EdgeControlled ThingShape to the Thing
8. Save the Thing
9. Update property values; exceptions are thrown, but the value will be queued
10. Invoke DequeueEgress on the RemoteThing, with the number of messages to send to the edge client passed in as the parameter value (see the sketch below)
10a. Notice 'Fold' means only the last value set for a property will be sent to the edge client. There is currently no retention available for any values previously set to the property and stored as the message to be sent. Those values are lost upon a new value coming in before it's dequeued.
11. Verify the edge client has received the expected egress, and that the return result of the DequeueEgress function was the expected number of messages sent.
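For step 10, a minimal sketch of invoking the service from a server-side test Service (the Thing name is a placeholder, and the input name maxCount is an assumption - check the actual service definition on the EdgeControlled ThingShape):

// Ask the platform to release up to 5 queued (folded) property updates to the edge client
var sent = Things["MyRemoteThing"].DequeueEgress({ maxCount: 5 });
logger.info("Messages sent to the edge client: " + sent);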
Previously:
- Installing & Connecting C SDK to Federated ThingWorx with VNC Tunneling to the Edge device
- Installing and configuring Web Socket Tunnel Extension on ThingWorx Platform

Overview

Using the C SDK Edge client configuration we did earlier, we'll now create the setup illustrated above. In this C SDK client we'll push the data to the ThingWorx publisher with server name TW802Neo, and on to the ThingWorx subscriber with server name TW81. Notice that the SteamSensor2 on the publisher server is the one binding to the C SDK client, and the FederatedSteamThing on the subscriber is only getting data from SteamSensor2. Let's crack on!

Content
- Configure ThingWorx to publish
- Configure ThingWorx to subscribe
- Publish entities from the publisher to the subscriber
- Create a Mashup to view data published to the subscriber

Pre-requisite

The minimum requirement is to have two ThingWorx servers running. Note that both ThingWorx systems can be publisher & subscriber at the same time.

Configure ThingWorx Publisher

Configuring the Federation Subsystem

1. Navigate to ThingWorx Composer > Subsystems > Federation Subsystem and configure the following highlighted sections. Essentially it's required to configure:
- the Server Identification, i.e. My Server name (FQDN / IP) and My Server Description (optional)
- the Federation subscribers this server publishes to, i.e. all the servers you want to publish to from this server.
Refer to the Federation Subsystem doc in the Help Center for a detailed description of each configurable parameter.
2. Save the Federation Subsystem.

Configuring a Thing to be published

1. Navigate back to the Composer home page and select the entity which you'd like to publish.
2. In this case I'm using SteamSensor2, which is created to connect to the C SDK client.
3. To publish, edit the entity and click on the Publish checkbox.
4. Save the entity.

Configure ThingWorx Subscriber

Configuring the Federation Subsystem

1. Navigate to ThingWorx Composer > Subsystems > Federation Subsystem and configure the following highlighted sections. Essentially it's required to configure the Server Identification, i.e. My Server name (FQDN / IP) and My Server Description (optional). Refer to the Help Center doc on the Federation Subsystem should you need more detail on the configurable parameters. If you only want to use this server as a subscriber of entities from the publishing ThingWorx server, you don't have to configure the section Federation subscribers this server publishes to - I've configured it here anyway to show that both servers can work as publishers and subscribers.
2. Save the Federation Subsystem.

Configuring a Thing to subscribe to a published Thing

1. Subscribing to an entity is fairly straightforward; I'll demonstrate by utilizing the C SDK client which is currently pushing values to my remote thing called SteamSensor2 on server https://tw802neo:443/Thingworx.
2. I have already published SteamSensor2, see the above section Configuring a Thing to be published.
3. Create a Thing called FederatedSteamThing with RemoteThingWithTunnels as its ThingTemplate.
4. Browse for the Identifier and select the required entity to which binding must be done.
5. Navigate to the Properties section for the entity and click Manage Bindings to search for the remote properties and add them to this Thing.
6. Save the entity, and then we can see the values that were pushed from the C SDK client.
7. Finally, we can also quickly see the values pulled via a Mashup created in the subscriber ThingWorx; below is a simple mashup with a grid widget pulling values using the QueryPropertyHistory service.
Disclaimer: This post does of course not express any political views.

Pie Chart Coloring

In ThingWorx, Pie Charts use a default color schema based on the DefaultChartStyle Definitions. These schemas use a fixed numbering and coloring system, e.g. 1 is blue, 2 is green, 3 is red and so on. All Pie Charts will be rendered with these colors in the same order, no matter which data the chart is using. Visualization of data with the default colors might not necessarily help in creating an easy to read chart.

Just take a look at the following example with the default color schema. Let's just take political parties - as they are usually associated with a distinct color - to illustrate how the default color schema will fail depending on the data displayed. In the first example, just by sheer coincidence, the colors are perfectly matching the parties. When introducing a new party to the pool, suddenly the blues are rendered green and the yellows rendered light-blue etc. This can be quite confusing, especially on election night 😉

Custom Color Schema

PoliticalParties Thing

To test a custom color schema, we first need to create a new Thing: PoliticalParties as a GenericThing. Add a dataset property with the PoliticalParties DataShape. Save the Thing and set the InfoTable to:

Key | Value
The Purples | 20
The Blues | 20
The Greens | 20
The Reds | 20
The Yellows | 20
Others | 20

The number values don't actually matter too much, as the Pie Chart will automatically distribute them according to their percentage.

PoliticalParties Mashup

Create a new Mashup and add a PieChart to the canvas. Bind the PoliticalParties > GetPropertyValues > dataset to the Data input of the Widget. Ensure to set the LabelField to key and the ValueField to value for a correct mapping. Save the Mashup and preview it. It should show a non-matching color for each party listed in the InfoTable.

Custom Styles and States

Create new custom Style Definitions for each political party. As the Pie Chart is only using the Background Color, other properties can stay on the default. I chose to go with a more muted version of the colors to make the chart easier to look at.

With the newly defined colors we can now generate a new State Definition. The States allow evaluating the key-Strings in the Thing's InfoTable and assigning a Style Definition depending on the actual value. In this definition we map a color schema based on the InfoTable's key-value to create a 1:1 mapping for the Strings. This means, no matter where a certain party is positioned in the chart, it will be tinted with its associated color.

Refining the Mashup

Back in the Mashup, select the PieChart. In the ColorFormat property choose the newly created State Definition. Save the Mashup and preview it. With the States and Styles applied, colors are now displayed correctly. Even when changing positions and numbers in the original InfoTable of the PoliticalParties Thing, the chart now considers the mapping of Strings and still displays the colors correctly.
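As a side note, the dataset property could also be filled from a small Service on the Thing instead of editing the InfoTable manually - a minimal sketch using the DataShape from this post (the service itself is not part of the original example):

// Build the InfoTable for the dataset property from the PoliticalParties DataShape
var parties = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape({
    infoTableName: "PoliticalParties",
    dataShapeName: "PoliticalParties"
});
// Values only matter relative to each other - the Pie Chart renders percentages
parties.AddRow({ key: "The Purples", value: 20 });
parties.AddRow({ key: "The Blues", value: 20 });
parties.AddRow({ key: "The Greens", value: 20 });
parties.AddRow({ key: "The Reds", value: 20 });
parties.AddRow({ key: "The Yellows", value: 20 });
parties.AddRow({ key: "Others", value: 20 });
me.dataset = parties;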
Saw this great question in the Developers forum https://community.ptc.com/t5/ThingWorx-Developers/Thingworx-Permission-Hierarchy/m-p/556829#M29312. Answered it there, copying it to here:

Question

Hi, I have a few questions regarding the permissions model in ThingWorx. I can't find any documentation that explains it clearly. Hoping someone can help, or point me in the right direction for more in-depth documentation. My understanding is that permissions can be set at a number of different levels:
- Collection Level
- Template Level
- Instance Level
- Thing Level

My questions are: How do these levels interact with one another? Do they all get 'AND'ed together, or do those at the lower levels supersede the ones set at higher levels? E.g. if I set some visibility at Collection level, would this be overridden by me setting a different visibility at, say, the Template instance level, or would both visibility permissions be valid? At each level there is the ability to override (e.g. for a particular property or service) - how does that fit in the hierarchy? I have read that in ThingWorx 'deny' always supersedes an 'allow' permission. Is this still the case if I set deny at Collection level and then at a lower level gave 'allow' permissions - would the deny take precedence? As far as I can tell, 'Create' permissions can only be set at Collection level. Does this mean that I am unable to restrict one set of users to create Things of one Template, and a different set of users to create another type of Thing? Thanks in advance for any replies.

Answer

Great question. Thing/Entity level permissions always take precedence. So if you set permissions on Collection, then on Template, then on Entity, it will first look at the Entity and then fill in with the Template and Collection permissions. For example:
- Collection says: can't do Service Execute
- Template says: can execute Service 1 but not Service 2
- Entity says: can execute Service 2 and leaves Service 1 as inherited
The end result is that the user can execute Services 1 and 2.

On Template and Entity level you can find the Override ability, that is, to specifically allow or disallow the execution of a Service or the read/write of a Property.

What is a BEST PRACTICE?
1. Give the System user all Service Execute permissions on Collection level.
2. Give User Groups 'blanket' permissions to Property Read/Write on ThingTemplate level.
3. Give User Groups only Override permissions to execute Services on ThingTemplate level.
4. Override User Group permissions to DENY Property Read on potential properties they are not supposed to read on ThingTemplate level.

Generally most properties can be fully accessed by all users, so a blanket permission on a ThingTemplate is fine. It is very BAD to give User Groups blanket permission to Service Execute - this should always be done by Override.

The Entity hierarchy overrides the Allow/Deny hierarchy, but within a single level (Collection / Template / Entity) Deny wins over Allow.

Create is indeed only set on Collection level. The way to secure this is to give the System user the Create ability and create wrapper services that use the CreateThing service, which you can then secure for specific Groups. So you could create a CreateNewThingType1 and a CreateNewThingType2 service, for example, and give User Group 1 permission to Type 1 creation and User Group 2 permission to Type 2 creation; a sketch follows below.

Hope that helps.
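A minimal sketch of such a wrapper service, e.g. CreateNewThingType1 on a utility Thing (the input name thingName and the template name are illustrative placeholders):

// Wrapper around the generic CreateThing resource service so that the Create
// permission stays with the System user while groups only get Service Execute
Resources["EntityServices"].CreateThing({
    name: thingName, // Service input (STRING)
    description: "Created via secured wrapper service",
    thingTemplateName: "Type1Template" // fixed template enforced by this wrapper
});
// New Things must be enabled and restarted before use
Things[thingName].EnableThing();
Things[thingName].RestartThing();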
This is a follow-up post on my initial document about Edge Microserver (EMS) and Lua Script Resource (LSR) security. While the first part deals with the fundamentals of secure configurations, this second part will give some more practical tips and tricks on how to implement these security measures.

For more information it's also recommended to read through the Setting Up Secure Communications for WS EMS and LSR chapter in the ThingWorx Help Center. See also Trust & Encryption Theory and Hands On for more information and examples - especially around the concept of the Chain of Trust, which will be an important factor for this post as well.

In this post I will only reference the High Security options for both the EMS and the LSR. Note that all commands and directories are Linux based - Windows equivalents might slightly differ.

Note - some of the configuration options are color coded for easy recognition: LSR resources / EMS resources.

Password Encryption

It's recommended to encrypt all passwords and keys, so that they are not stored as cleartext in the config.lua / config.json files. And of course it's also recommended to use a more meaningful password than what I use as an example - which also means: do not use any password I mention here for your systems, they might be too easy to guess now 🙂

The luaScriptResource script can be used for encryption:

./luaScriptResource -encrypt "pword123"
############ Encrypted String
AES:A26fBYKHJq+eMu0Fm2FlDw==
############

The wsems script can be used for encryption:

./wsems -encrypt "pword123"
############ Encrypted String
AES:A26fBYKHJq+eMu0Fm2FlDw==
############

Note that the encryption with both scripts will result in the same encrypted string. This means either the wsems or the luaScriptResource script can be used to retrieve the same result.

The string to encrypt can be provided with or without quotation marks. It is however recommended to quote the string, especially when the string contains blanks or spaces. Otherwise unexpected results might occur, as blanks will be considered delimiter symbols.

LSR Configuration

In the config.lua there are two sections to be configured:

- scripts.script_resource, which deals with the configuration of the LSR itself
- scripts.rap, which deals with the connection to the EMS

HTTP Server Authentication

HTTP Server Authentication will require a username and password for accessing the LSR REST API:

scripts.script_resource_authenticate = true
scripts.script_resource_userid = "luauser"
scripts.script_resource_password = "pword123"

The password should be encrypted (see above) and the configuration should then be updated to

scripts.script_resource_password = "AES:A26fBYKHJq+eMu0Fm2FlDw=="

HTTP Server TLS Configuration

HTTP Server TLS configuration will enable TLS and https for secure and encrypted communication channels from and to the LSR. To enable TLS and https, the following configuration is required:

scripts.script_resource_ssl = true
scripts.script_resource_certificate_chain = "/pathToLSR/lsrcertificate.pem"
scripts.script_resource_private_key = "/pathToLSR/key.pem"
scripts.script_resource_passphrase = "keyForLSR"

It's also encouraged to not use the default certificate, but custom certificates instead.
To explicitly set this, the following configuration can be added:

scripts.script_resource_use_default_certificate = false

Certificates, keys and encryption

The passphrase for the private key should be encrypted (see above) and the configuration should then be updated to

scripts.script_resource_passphrase = "AES:A+Uv/xvRWENWUzourErTZQ=="

The private_key should be available as a .pem file and starts and ends with the following lines:

-----BEGIN ENCRYPTED PRIVATE KEY-----
-----END ENCRYPTED PRIVATE KEY-----

As it's highly recommended to encrypt the private_key, the LSR needs to know the password for how to decrypt and use the key. This is done via the passphrase configuration. Naturally the passphrase should be encrypted in the config.lua to not allow spoofing the actual cleartext passphrase.

The certificate_chain holds the Chain of Trust of the LSR Server Certificate in a .pem file. It holds multiple entries for the Root, Intermediate and Server Specific certificates, starting and ending with the following lines for each individual certificate and Certificate Authority (CA):

-----BEGIN CERTIFICATE-----
-----END CERTIFICATE-----

After configuring TLS and https, the LSR REST API has to be called via https://lsrserver:8001 (instead of http).

Connection to the EMS

Authentication

To secure the connection to the EMS, the LSR must know the certificates and authentication details for the EMS:

scripts.rap_server_authenticate = true
scripts.rap_userid = "emsuser"
scripts.rap_password = "AES:A26fBYKHJq+eMu0Fm2FlDw=="

Supply the authentication credentials as defined in the EMS's config.json - as for any other configuration the password can be used in cleartext or encrypted. It's recommended to encrypt it here as well.

HTTPS and TLS

Use the following configuration to establish the https connection using certificates:

scripts.rap_ssl = true
scripts.rap_cert_file = "/pathToLSR/emscertificate.pem"
scripts.rap_deny_selfsigned = true
scripts.rap_validate = true

This forces the certificate to be validated and also denies self-signed certificates. In case self-signed certificates are used, you might want to adjust the above values.

The cert_file is the full Chain of Trust as configured in the EMS' config.json http_server.certificate options. It needs to match exactly, so that the LSR can actually verify and trust the connections from and to the EMS.

EMS Configuration

In the config.json there are two sections to be configured:

- http_server, which enables the HTTP Server capabilities for the EMS
- certificates, which holds all certificates that the EMS must verify in order to communicate with other servers (ThingWorx Platform, LSR)

HTTP Server Authentication and TLS Configuration

HTTP Server Authentication will require a username and password for accessing the EMS REST API. HTTP Server TLS configuration will enable TLS and https for secure and encrypted communication channels from and to the EMS.
To enable both, the following configuration can be used:

"http_server": {
  "host": "<emsHostName>",
  "port": 8000,
  "ssl": true,
  "certificate": "/pathToEMS/emscertificate.pem",
  "private_key": "/pathToEMS/key.pem",
  "passphrase": "keyForEMS",
  "authenticate": true,
  "user": "emsuser",
  "password": "pword123"
}

The passphrase as well as the password should be encrypted (see above) and the configuration should then be updated to

"passphrase": "AES:D6sgxAEwWWdD5ZCcDwq4eg==",
"password": "AES:A26fBYKHJq+eMu0Fm2FlDw=="

See the LSR configuration for comments on the certificate and the private_key. The same principles apply here. Note that the certificate must hold the full Chain of Trust in a .pem file for the server hosting the EMS.

After configuring TLS and https, the EMS REST API has to be called via https://emsserver:8000 (instead of http).

Certificates Configuration

The certificates configuration holds all certificates that the EMS will need to validate. If ThingWorx is configured for HTTPS and the ws_connection.encryption is set to "ssl", the Chain of Trust for the ThingWorx Platform Server Certificate must be present in the .pem file. If the LSR is configured for HTTPS, the Chain of Trust for the LSR Server Certificate must be present in the .pem file.

"certificates": {
  "validate": true,
  "allow_self_signed": false,
  "cert_chain": "/pathToEMS/listOfCertificates.pem"
}

The listOfCertificates.pem is basically a copy of the lsrcertificate.pem with the ThingWorx certificates and CAs added.

Note that all certificates to be validated, as well as their full Chain of Trust, must be present in this one .pem file. Multiple files cannot be configured.

Binding to the LSR

When binding to the LSR via the auto_bind configuration, the following settings must be configured:

"auto_bind": [{
  "name": "<ThingName>",
  "host": "<lsrHostName>",
  "port": 8001,
  "protocol": "https",
  "user": "luauser",
  "password": "AES:A26fBYKHJq+eMu0Fm2FlDw=="
}]

This will ensure that the EMS connects to the LSR via https with proper authentication.

Tips

- Do not use quotation marks (") as part of the strings to be encrypted. This could result in unexpected behavior when running the encryption script.
- Do not use a colon (:) as part of any username. Authentication tokens are passed from browsers as "username:password" and a colon in a username could result in unexpected authentication behavior leading to failed authentication requests.
- In the Server Specific certificates, the CN must match the actual server name and also must match the name of the http_server.host (EMS) or script_resource_host (LSR).
- In the .pem files first store the Server Specific certificates, then all required Intermediate CAs and finally all required Root CAs - any other order could affect the consistency of the files and the certificate might not be fully readable by the scripts and processes.
- If the EMS is configured with certificates, the LSR must connect via a secure channel as well and needs to be configured to do so.
- If the LSR is configured with certificates, the EMS must connect via a secure channel as well and needs to be configured to do so.
- For testing REST API calls with resources that require encryption and authentication, see also How to run REST API calls with Postman on the Edge Microserver (EMS) and Lua Script Resource (LSR).

Export PEM data from KeyStore Explorer

To generate a .pem file I usually use the KeyStore Explorer for Windows, in which I have created my certificates and manage my keystores.
1. In the keystore, select a certificate and view its details. Each certificate and CA in the chain can be viewed: Root, Intermediate and Server Specific.
2. Select each certificate and CA and use the "PEM" button on the bottom of the interface to view the actual PEM content.
3. Copy to clipboard and paste into the .pem file.
4. To generate a .pem file for the private key, right-click the certificate > Export > Export Private Key.
5. Choose "PKCS #8".
6. Check "Encrypted" and use the default algorithm; define an "Encryption Password"; check the "PEM" checkbox and export it as a .pkcs8 file.
7. The .pkcs8 file can then be renamed and used as a .pem file.

The password set during the export process will be the scripts.script_resource_passphrase (LSR) or the http_server.passphrase (EMS). After generating the .pem files I copy them over to my Linux systems, where they will need 644 permissions (-rw-r--r--).
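On Linux the same chain file can also be assembled by simple concatenation (the file names are placeholders), keeping the Server Specific > Intermediate > Root order recommended in the Tips above:

# Order matters: server certificate first, then intermediate CAs, then root CAs
cat server.crt intermediate-ca.crt root-ca.crt > listOfCertificates.pem
chmod 644 listOfCertificates.pem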
Overview

This document covers basic PostgreSQL monitoring and health-check-related system objects like tables, views, etc. It allows simple monitoring of a PostgreSQL database via some custom services, attached at the end of this document, from the ThingWorx Composer itself. I'll also cover short details on some of the services included with the Thing PostgreSQLHealthCheck, which implements the Database ThingTemplate.

Pre-requisite

The document assumes that the user already has ThingWorx running with PostgreSQL as a Persistence Provider.

How to install

Usage is fairly straightforward: import the Entities.twx (attached with this blog) and it will create the required Thing, which implements the Database ThingTemplate, and some DataShapes. Each Service under the Thing PostgreSQLHealthCheck has its own DataShape. Feel free to edit these services / DataShapes if you are looking to use the output of these services as part of your mashup(s).

Make sure to edit the PostgreSQL JDBC Connection String, Username & Password under the Configuration section of the Thing PostgreSQLHealthCheck in order to connect to your PostgreSQL instance.

Note: Users can use these services to query non-ThingWorx related databases created with PostgreSQL as part of the external JDBC connection.

Reviewing Services from Thing: PostgreSQLHealthCheck

1. DescribeTableStructure
- Takes two inputs, Table Name and the Schema Name in which the ThingWorx database tables exist; both inputs have default values that can be modified to match your PostgreSQL schema setup and required table name
- Provides information on a table's structure

2. GetAllPSQLConfig
- Provides runtime details on all the configurations done in postgresql.conf which are in effect
- For detail on pg_settings see the PostgreSQL 9.4 doc

3. GetAllPSQLConfigLimited
- Similar to GetAllPSQLConfig, however with limited information

4. GetAllPSQLRoles
- Lists all the database roles/users
- Also lists their access rights permissions together with the OID
- Helpful in identifying if a role is active/inactive or carries any limitation on DB connections

5. GetPG_Stat_Activity
- Part of the Statistics Collector subsystem for the PostgreSQL DB
- Shows the current state of the schema, e.g. connections, queries, etc.
- For more detail on the output refer to the PostgreSQL 9.4 doc

6. GetPSQLDBLocksInformation
- Shows the kind of locks in effect, on which database and on which relation (table)
- Particularly useful in identifying the relations and what lock mode is enabled on them

7. GetPSQLDBStat
- Shows database-wide statistics
- Like commits, reads, block reads, tuples (rows) fetched, inserted, deadlocks, etc.
- For more detail refer to the PostgreSQL 9.4 doc

8. GetPSQLLogDesitnation
- Checks where the PostgreSQL server logs are directed to, i.e. stderr, csvlog or syslog
- Default is stderr

9. GetPSQLLogFileName
- Fetches the PostgreSQL log file name and the filename format
- E.g. postgresql-%Y-%m-%d_%H%M%S.log

10. GetPSQLLoggingLocation
- Fetches the location where the logs are stored for PostgreSQL, e.g. pg_log, which is also the default location
- The desired location for the logs can be set in the postgresql.conf file

11. GetPSQLRelationIndexes
- Gets information on the indexes
- Information like index size, number of rows, and the table names on which the index is created

12. GetPSQLReplicationStat
- Shows information related to replication on the PostgreSQL DB
- Applicable to PostgreSQL DBs where replication is enabled

13. GetPSQLTablespaceInfo
- Takes the tablespace name as input (String DataType); the service defaults to 'thingworx' - modify if needed
- Fetches information like the owner OID and the tablespace ACL

14. GetPSQLUserIndexIO
- Fetches indexes that are created only on the user-created DB objects
- Shows relations (tables) vs. index relation IDs (index on table), together with their names
- Also shows additional info like the number of disk blocks read from an index and the number of buffer hits in an index

15. GetPSQLUserSequencesIOStats
- Fetches information on sequence objects used on user-defined relations (tables)
- Number of disk blocks read from a sequence and buffer hits in a sequence

16. GetPSQLUserTableIOStat
- Fetches disk I/O information on the user-created tables

17. GetPSQLUserTables
- Fetches all the user-created tables, together with their name, OID, disk I/O, and last auto vacuum / vacuum
- Also lists the number of rows each relation (table) has

Finally

The attached entity has some additional services not yet covered in this blog; as they are minor services, for brevity I've left them out for now - feel free to explore or enhance this. I will continue to look for additional services and will enhance this document and the entities belonging to it. If you are looking to enhance this, feel free to fork from twxPostgreSqlHealthCheck on GitHub.
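As a usage sketch, these services can also be called from other server-side scripts - for example, scanning the result of GetPG_Stat_Activity for active sessions (the field names follow the pg_stat_activity columns; the exact DataShape fields in the attached entity may differ):

// Hypothetical check built on top of the Thing's GetPG_Stat_Activity service
var activity = me.GetPG_Stat_Activity();
for (var i = 0; i < activity.rows.length; i++) {
    var row = activity.rows[i];
    if (row.state === "active") {
        logger.warn("Active query on " + row.datname + ": " + row.query);
    }
}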
I always find it difficult to remember which version of software is supported with which version of ThingWorx, so I created a table for my reference. I hope this is also helpful to other people.

| ThingWorx | Oracle JDK | Tomcat | PostgreSQL | Neo4J | H2 | Microsoft SQL Server | SAP HANA | DataStax Enterprise Edition | Memo |
|---|---|---|---|---|---|---|---|---|---|
| 6.5 | 1.8.0 (64-bit) | 8.0.23 (64-bit) | 9.4.4 | embedded | N/A | N/A | N/A | N/A | IE 10 |
| 6.6 | 1.8.0 (64-bit) | 8.0.23 (64-bit) | 9.4.4 | embedded | N/A | N/A | N/A | N/A | |
| 7.0 | 1.8.0 (64-bit) | 8.0.23 (64-bit) | 9.4.? | embedded | N/A | N/A | N/A | N/A | |
| 7.1 | 1.8.0_92-b14 (64-bit) | 8.0.33 (64-bit) | 9.4.5 | embedded | N/A | N/A | N/A | 4.6.3 | |
| 7.2 | 1.8.0_92-b14 (64-bit) | 8.0.33 (64-bit) | 9.4.5 | embedded | embedded | N/A | N/A | 4.6.3 | IE 11 and later |
| 7.3 | 1.8.0_92-b14 (64-bit) | 8.0.38 (64-bit) | 9.4.5 | embedded | embedded | N/A | SPS 11, 12 | 4.6.3, 5 | |
| 7.4 | 1.8.0_92-b14 (64-bit) | 8.0.38 (64-bit) | 9.4.5 | embedded | embedded | 2014 and later | SPS 11, 12 | 4.6.3, 5 | |
| 8.0 | 1.8.0_92-b14 (64-bit) | 8.0.44 (64-bit), 8.5.13 (64-bit) | 9.4.5 | embedded | embedded | 2014 and later | SPS 11, 12 | 4.6.3, 5 | |
| 8.1 | 1.8.0_92-b14 (64-bit) | 8.0.44 (64-bit), 8.5.13 (64-bit) | 9.4.5 | embedded | embedded | 2014 and later | SPS 11, 12 | 4.6.3, 5 | |
| 8.2 | 1.8.0_92-b14 (64-bit) | 8.0.47 (64-bit), 8.5.23 (64-bit) | 9.4.5 | embedded | embedded | 2014 and later | SPS 11, 12 | 4.6.3, 5 | |
| 8.3 | | | | | | | | | |
Large files can cause slow response times. In some cases large queries might produce extensively large response files, e.g. calling a ThingWorx service that returns an extensively large result set as a JSON file.

Those massive files have to be transferred over the network and require additional bandwidth - for each and every call. The more bandwidth is used, the more time is taken on the network and the bigger the impact on performance could be. Imagine transferring tens or hundreds of MB for service calls for each and every call - over and over again.

To reduce the bandwidth, compression can be activated. Instead of transferring MBs per service call, the server only has to transfer a couple of KB per call (best case scenario). This needs to be configured on Tomcat level. There is some information available in the official Tomcat documentation at https://tomcat.apache.org/tomcat-8.5-doc/config/http.html - search for the "compression" attribute.

Gzip compression

Usually Tomcat compresses content with gzip. To verify if a certain response is in fact compressed or not, the Development Tools or Fiddler can be used. The Response Headers usually mention the compression type if the content is compressed (left: no compression; right: compression on Tomcat level).

Not so straightforward - network vs. compression time trade-off

There's however a pitfall with compression on the Tomcat side. Each response will add additional strain on time and resources (like CPU) to compress on the server and decompress the content on the client. Especially for small files this might be an unnecessary overhead, as the time and resources to compress might take longer than just transferring a couple of uncompressed KB.

In the end it's a trade-off between network speed and the speed of compressing and decompressing response files on server and client. With the compressionMinSize attribute a compromise size can be set to find the best balance between compression and bandwidth.

This trade-off can be clearly seen for small content: while the size of the content shrinks, the time increases. For larger content files however the time will only slightly increase due to the compression overhead, whereas the size can potentially drop by a massive factor - especially for text-based files.

The above test has been performed on a local virtual machine, which basically neglects most of the network-related traffic problems resulting in performance issues - therefore the overhead in time is only a couple of milliseconds for the compression / decompression.

The default for compressionMinSize is 2048 bytes.

High potential performance improvement

Looking at the Combined.js, the content size can be reduced significantly from 4.3 MB to only 886 KB. For my simple Mashup showing a chart with Temperature and Humidity this also decreases the total load time from 32 to 2 seconds - and decreases the content size from 6.1 MB to 1.2 MB! This reduces load time and size by a factor of 16x and 5x - and the total time until the page finishes rendering has been decreased by a factor of almost 22x (for this particular use case).

Configuration

To configure compression, open Tomcat's server.xml. In the <Connector> definitions add the following:

compression="on" compressibleMimeType="text/html,text/xml,text/plain,text/css,text/javascript,application/javascript,application/json"

This will use the default compressionMinSize of 2048 bytes. In addition to the default MIME types I've also added application/json to compress ThingWorx service call results.

This needs to be configured for all Connectors that users should access - e.g. for HTTP and HTTPS connectors. For testing purposes I have an HTTPS connector with compression while HTTP is running without it; a complete example follows below.

Conclusion

If possible, enable compression to speed up content download for the client. However there are some scenarios where compression is actually not a good idea - e.g. when using a WAN accelerator or other network components that bring their own content compression. This not only adds unnecessary overhead but compresses twice, which might lead to errors on the client side when decompressing the content.

Especially when dealing with large responses, compression can help decrease the impact on performance. As compressing and decompressing adds some overhead, the minimum size limit can be experimented with to find the optimal compromise between the network and compression time trade-off.
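For reference, a minimal sketch of a complete HTTPS Connector with compression enabled - the port, protocol and keystore attributes are placeholders from a typical server.xml and not part of the original post:

<Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol"
           SSLEnabled="true" scheme="https" secure="true"
           keystoreFile="/path/to/keystore.jks" keystorePass="changeit"
           compression="on"
           compressionMinSize="2048"
           compressibleMimeType="text/html,text/xml,text/plain,text/css,text/javascript,application/javascript,application/json" />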