IoT Tips

Background: Customer machines create files containing data, configuration, log data, etc. These files may contain critical information for service technicians about the asset and its data. How can you make sure you get this important information to the Cloud as soon as it’s available? Use File Watcher, a standard tool available with Firewall-Friendly Agents. You can configure File Watcher to monitor a directory or file specification and automatically upload the file(s) to the Axeda Cloud. By monitoring selected directories and files on the device, the File Watcher determines which files have changed or appeared and then uploads those files to the Axeda Cloud Server. Sometimes the file being written or copied into the file watch target is large enough that it takes time to complete. Without the right precautions, the file upload may start before the file creation process has completed.

Recommendations: To ensure that large files don’t cause problems, we recommend using a “move” or “rename” operation instead of a file copy operation to place your file into a watched directory. Unlike a “copy” operation, atomic operations like move or rename ensure that the Agent doesn’t detect a file in the midst of growing and prematurely begin to upload it. In situations where it isn’t possible to use the atomic “move” or “rename” operations, you can implement a delay. Setting a delay in the File Watcher configuration will prevent the Agent from sending files to the Axeda Cloud Server before those files have been transferred completely to the watched directory.

Finally, use file compression wisely. You need to balance the benefits of compressing files before sending against the potential adverse impact that compression will have on the Agent. Compressing smaller files can be beneficial, but if the Agent’s computer is low-powered, the CPU may have to work overtime to compress huge files, or to send huge uncompressed files. As a general rule, compressing files before sending is recommended; however, depending on your Agent and network setup, it may prove more beneficial to stream uncompressed files rather than compress first and then send.

Need more information? For information about configuring and using file watchers, see the Axeda® Builder User’s Guide (PDF) or the online help in Axeda Builder.
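A rename within the same filesystem is atomic, so the file only becomes visible in the watched directory once it is complete. A minimal sketch of this pattern (the paths and file names here are illustrative assumptions, not Axeda defaults):

# Stage the file outside the watched directory, on the same filesystem
cp /data/export/asset_log.csv /watched/.staging/asset_log.csv
# The atomic rename makes the finished file appear to the File Watcher in one step
mv /watched/.staging/asset_log.csv /watched/asset_log.csv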
View full tip
OpenDJ is a directory server which is also the base for WindchillDS. It can be used for centralized user management, and ThingWorx can be configured to log in with users from this Directory Service.

Before we start

Prerequisites: Docker on Ubuntu, and a JKS keystore with a valid certificate. The JKS keystore is stored in /docker/certificates on the machine that runs the Docker environments. The certificate is generated with a Subject Alternative Name (SAN) extension for the hostname, fully qualified hostname and IP address of the opendj (Docker) server. Change the blue phrases to the correct passwords, machine names, etc. when following the instructions. If possible, use a more secure password than "Password123456"... the one I use is really bad.

Related Links
https://hub.docker.com/r/openidentityplatform/opendj/
https://backstage.forgerock.com/docs/opendj/2.6/admin-guide/#chap-change-certs
https://backstage.forgerock.com/knowledge/kb/article/a43576900

Configuration

Generate the PKCS12 certificate. Assume this is our working directory on the Docker machine (with the JKS certificate in it):

cd /docker/certificates

Create a .pin file containing the keystore password:

echo "Password123456" > keystore.pin

Convert the existing JKS keystore into a new PKCS12 keystore:

keytool -importkeystore -srcalias muc-twx-docker -destalias server-cert -srckeystore muc-twx-docker.jks -srcstoretype JKS -srcstorepass `cat keystore.pin` -destkeystore keystore -deststoretype PKCS12 -deststorepass `cat keystore.pin` -destkeypass `cat keystore.pin`

Export the certificate from the keystore and import it into the truststore:

keytool -export -alias server-cert -keystore keystore -storepass `cat keystore.pin` -file server-cert.crt
keytool -import -alias server-cert -keystore truststore -storepass `cat keystore.pin` -file server-cert.crt

Docker Image & Container

Download and run:

sudo docker pull openidentityplatform/opendj
sudo docker run -d --name opendj --restart=always -p 389:1389 -p 636:1636 -p 4444:4444 -e BASE_DN=o=opendj -e ROOT_USER_DN=cn=Manager -e ROOT_PASSWORD=Password123456 -e SECRET_VOLUME=/var/secrets/opendj -v /docker/certificates:/var/secrets/opendj:ro openidentityplatform/opendj

After building the container, it MUST be restarted immediately so that it recognizes the new certificates:

sudo docker restart opendj

Verify that the certificate is the correct one; execute on the machine that runs the Docker environments:

openssl s_client -connect localhost:636 -showcerts

Load the .ldif

Use e.g. JXplorer and connect. Select the opendj node, then LDIF > Import File (my demo breakingbad.ldif is attached to this post). Skip any warnings and messages and continue to import the file.

ThingWorx Tomcat

If ThingWorx runs in Docker as well, a pre-defined keystore could be copied during image creation. Otherwise connect to the container via the command line:

sudo docker exec -it <ThingworxImageName> /bin/sh

Tomcat configuration:

cd /usr/local/openjdk-8/jre/lib/security
openssl s_client -connect 10.164.132.9:636 -showcerts

Copy the certificate between BEGIN CERTIFICATE and END CERTIFICATE of the above output into opendj.pem, e.g.

echo "<cert_goes_here>" > opendj.pem

Import the certificate:

keytool -keystore cacerts -import -alias opendj -file opendj.pem -storepass changeit

ThingWorx Composer

As the IP address is used (the hostname is not mapped in the Docker container), the certificate must have a SAN containing the IP address.

This only works with the TWLDAPExample Directory Service, not the ADDS1, because ADDS1 uses hard-coded Active Directory queries and structures and therefore does not work with OpenDJ. The user ID (cn) must be pre-created in ThingWorx so the user can log in; there is no automatic user creation by the Directory Service. Make sure the Thing is Enabled under General Information.

Appendix

LDAP structure for breakingbad.ldif: cn=Manager / Password123456. All users with password Password123456.
View full tip
Setting Up the Azure Load Balancer with a ThingWorx High Availability Deployment

Purpose
In this post, one of PTC’s most experienced ThingWorx deployment architects, Desheng Xu, explains the steps to configure Azure Load Balancer with ThingWorx when deployed in a High Availability architectural model.

This approach has been used successfully on customer implementations for several ThingWorx 7.x and 8.x versions. However, with some of the improvements planned for ThingWorx High Availability architecture in the next major release, this best practice will likely change (so keep an eye out for updates to come).

Azure Load Balancer
The overview article What is Azure Load Balancer? from Microsoft will give you a high-level understanding of load balancers in general, as well as the capabilities and limitations of Azure Load Balancer itself. For our purposes, we will leverage Azure Load Balancer's capability to manage incoming internet traffic to ThingWorx Platform virtual machine (VM) instances. This configuration is known as a Public Load Balancer.

Important Note: Different load balancers operate at different “layers” of the OSI Model. Azure Load Balancer operates at Layer 4 (Transport Layer) – it is indifferent to the specific TCP payload. As a result, you must either configure both the front-end and back-end to work on SSL, or configure both of them to work on non-SSL communications. “SSL Termination” or “TLS Offload” is not supported by Azure Load Balancer.

Azure offers multiple different load balancing solutions. If you need some guidance on choosing the right one for you, I highly recommend reviewing the Microsoft DevBlog post Azure Load Balancing Solutions: A guide to help you choose the correct option.

High-Level Diagram: ThingWorx High Availability with Azure Load Balancer
To keep this article focused, we will not go into the setup of ThingWorx in a High Availability architecture. It will be assumed that ThingWorx is working correctly and the ZooKeeper cluster is managing failover for the Platform instances as expected. For more details on setting up this configuration, the best place to start would be the High Availability Administrator’s Guide.

Planning
In this installation, let's assume we have the following plan (you will likely need to change these values for your own implementation):
Azure Load Balancer will have a public-facing domain name: edc.ptc.io
Azure Load Balancer will have a public IP: 41.35.22.33
ThingWorx Platform VM instance 1 has a local computer name, like: vm1
ThingWorx Platform VM instance 2 has a local computer name, like: vm2

ThingWorx Preparation
By default, the ThingWorx Platform provides a health check endpoint at /Thingworx/Admin/HA/LeaderCheck , which can only be accessed with a credential configured in platform-settings.json : "HASettings": { "LoadBalancerBase64EncodedCredentials":"QWRtaW5pc3RyYXRvcjphZG1pbg==" } However, with current versions of ThingWorx, Azure Load Balancer cannot perform this health check because it does not support sending a credential with the probe. As a workaround, you can create a pings.jsp (using the attached JSP example code) in the Tomcat folder $CATALINA_HOME/webapps/docs . This workaround will no longer be needed in ThingWorx 8.5 and newer releases.

There are two lines that likely need to be modified to meet your situation: The hostname in final String probeURL (line 10) must match your end point domain name. It's edc.ptc.io in our example, don’t forget to replace this with your real hostname!
You also need to add a line in your local hosts file and point this domain name to 127.0.0.1 . For example: 127.0.0.1 edc.ptc.io
The credential in final String base64EncodedCredential (line 14) must match the credential configured in platform-settings.json.
Additionally:
Don't forget to make the JSP file accessible and executable by the user who starts the Tomcat service for ThingWorx.
These changes must be applied to both ThingWorx Platform VM instances.
Tomcat needs to be configured to support SSL on a specific port. In this example, SSL will be enabled on port 8443. Please make sure a similar configuration is included in $CATALINA_HOME/conf/server.xml :

<Connector protocol="org.apache.coyote.http11.Http11NioProtocol" port="8443" maxThreads="200" scheme="https" secure="true" SSLEnabled="true" keystoreFile="/opt/yourcertificate.pfx" keystorePass="dontguess" clientAuth="false" sslProtocol="TLS" keystoreType="PKCS12"/>

The values in keystoreFile and keystorePass will need to be changed for your implementation. While PKCS12 format is used in the above example, you can use a different certificate format, as long as it is supported by Tomcat (for example, JKS). All other parameters, like maxThreads , are just examples - you should adjust them to meet your requirements.

How to Verify
Before configuring the load balancer, verify that the health check workaround is working as expected on both ThingWorx Platform instances. You can use the following command to verify:
curl -I https://edc.ptc.io:8443/docs/pings.jsp
The expected result from the active node should look like:
HTTP/1.1 200
There will be three or more lines in the output, depending on your instance configuration, but you should be able to see the keyword: HTTP/1.1 200.

The expected result from the passive node should look like:
HTTP/1.1 503

Load Balancer Configuration
Step 1: Select SKU
Search for “load balancer” in the Azure Marketplace and select Load Balancer from Microsoft. Verify the correct vendor before you create a Load Balancer.

Step 2: Create load balancer
To create a proper load balancer, make sure to read Microsoft’s What is Azure Load Balancer? overview to understand the differences between “basic” and “standard” SKU offerings. If your IT policy only requires SSL communication to the outside but doesn't require SSL communication for the health probe, then the “basic” SKU should be adequate (not considering zone redundancy). You have to decide on the following parameters:
Region
Type (public or internal)
SKU (basic or standard)
IP address
Public IP address name
Availability zone
PTC cannot provide specific recommendations for these parameters – you will need to choose them based on your specific business needs, or consult Microsoft for available offerings in your region.

Step 3: Start to configure
Once a load balancer is successfully created by Azure, you should be able to see:

Step 4: Confirm frontend IP
Click frontend IP configuration on the left side and you should be able to see the public IP address configuration. Please make sure to register this IP with your domain name ( edc.ptc.io in our example) in your Domain Name Server (DNS). If you are unfamiliar with DNS configuration, you should consult with the administrator of your DNS server. If you are using Azure DNS, this Quickstart article on creating Azure DNS Zones and records may help.

Step 5: Configure Backend pools
Click Backend pools and click “Add” to add a backend pool definition. Select a name for your Backend pool (using ThingworxBackend in our example). The next step is to choose the Virtual network.
Once you select the Virtual network, you can choose which VM (or VMs) you want to put behind this load balancer. The VM should be the ThingWorx VM instance. In a high availability architecture, you will typically need to choose two instances to put behind this load balancer.

Please Note: The “Virtual machine status” column in this table only shows VM status, not ThingWorx status. The ThingWorx running status will be determined by the health probe configured in the next step.

Step 6: Configure Health Probe
The health probe will be used to determine the ThingWorx Platform’s running status. When a ThingWorx Platform instance is running as the leader, it will return HTTP status code 200 during a health probe. The Azure Load Balancer will rely on this status code to determine if the platform is running properly. When a ThingWorx Platform VM is not responding, offline, or not the leader in a High Availability setup, the health probe will respond with an HTTP status code other than 200.

For the health probe, select HTTPS for the protocol. In our example port 8443 is used, though another port can be selected if necessary. Then, provide the “/docs/pings.jsp” we created earlier as the probe’s path. You may need to change this path value if you put this file in a different location.

Step 7: Configure Load balancing rules
Select “Load balancing rules” from the left side and click “Add”. Select TCP as the protocol; in our example we are using 443 as the front-end port and 8443 as the back-end port. You can choose other port numbers if necessary.

Reminder: Azure Load Balancer is a Layer 4 (Transport Layer) router – it cannot differentiate between HTTP or HTTPS requests. It will simply forward requests from front-end to back-end, based on the port-forwarding rules defined.

Session persistence is not critical for current versions of ThingWorx, as only one active node is currently permitted in a High Availability architecture. In the future, selecting Client IP may be required to support active-active architectures.

Step 8: Verify health probe
Once you complete this configuration, you can go to the $CATALINA_HOME/logs folder and monitor the latest local_access log. You should see entries similar to those pictured below - HTTP 200 responses should be observed from the ThingWorx leader node, and HTTP 503 responses should be observed from the ThingWorx passive node. In the example below, 168.63.129.16 is the internal IP address of the load balancer in the current region.

Step 9: Network Security Group rules to access Azure Load Balancer
On its own, Azure Load Balancer does not have a network access policy – it simply forwards all requests to the back-end pool. Therefore, the appropriate Network Security Group associated with the backend resources within the resource group should have a policy to allow access to the destination port of the backend ThingWorx Foundation server (shown as 8443 here, for example). The following image displays an inbound security rule that will accept traffic from any source and direct it to port 443 of the IP address for the Azure Load Balancer.

Enjoy!! With the above settings, you should be able to access ThingWorx via: https://edc.ptc.io/Thingworx (replacing edc.ptc.io with the hostname you have selected).

Q&A
Can I configure the health probe to run on a port other than the traffic port (8443 in this case)?
Yes – if desired you can use a different port for the health probe configuration.

Can I use a different protocol other than HTTPS for the health probe?
Yes – you can use a different protocol in the health probe configuration, but you will need to develop your own functional equivalent to the pings.jsp example in this article for the protocol you choose.

Can I configure ZooKeeper to support the health probe?
No – the purpose of the health probe is to inform the Load Balancer which node is providing service (the leader), not to select a leader. In a High Availability architecture, ZooKeeper determines which VM is the leader and talking with the database. This approach will change in future releases where multiple ThingWorx instances are actively processing requests.

How well does Azure Load Balancer scale?
This question is best answered by Microsoft – as a starting point, we recommend reading the DevBlog post: Azure Load Balancing Solutions: A guide to help you choose the correct option.

How do I access logs for Azure Load Balancer?
This question is best answered by Microsoft – as a starting point, we recommend reviewing the Microsoft article Azure Monitor logs for public Basic Load Balancer.

Do I need to configure anything specifically for WebSocket and/or AlwaysOn communication?
No – Azure Load Balancer is a Layer 4 (Transport Layer) router - it only handles TCP traffic forwarding.

Can I leverage this load balancer to access all VMs behind it via ssh?
Yes – you could configure Inbound NAT rules for this. If you require specific help in configuring this, the question is best answered by Microsoft. As a starting point, we recommend reviewing the Microsoft tutorial Configure port forwarding in Azure Load Balancer using the portal.

Can I view the current health probe status on a portal?
No – unfortunately there is no current approach to do this with Azure Load Balancer.
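As an alternative to the portal steps above, the same load balancer resources can also be created with the Azure CLI. The sketch below uses assumed names (myRG, twxLB, twxPublicIP, and so on); note that an HTTPS health probe requires the Standard SKU:

# Create the load balancer with a front-end IP and backend pool (names are assumptions)
az network lb create --resource-group myRG --name twxLB --sku Standard --public-ip-address twxPublicIP --frontend-ip-name twxFrontend --backend-pool-name ThingworxBackend
# Health probe against the pings.jsp workaround described earlier
az network lb probe create --resource-group myRG --lb-name twxLB --name twxProbe --protocol Https --port 8443 --path /docs/pings.jsp
# Layer 4 rule forwarding front-end port 443 to back-end port 8443, tied to the probe
az network lb rule create --resource-group myRG --lb-name twxLB --name twxRule --protocol Tcp --frontend-port 443 --backend-port 8443 --frontend-ip-name twxFrontend --backend-pool-name ThingworxBackend --probe-name twxProbe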
View full tip
The IoT Enterprise Deployment Center’s goal is to create and share knowledge around the best practices for architecting, designing, and deploying successful, enterprise-scale ThingWorx IoT solutions.

To accomplish this goal, the EDC team takes a “real world” approach, using simulated IoT assets and users to benchmark the capabilities of different ThingWorx deployment configurations. First, each implementation is pushed to its limit in an effort to establish real-world baselines, metrics which can be used to help customers determine which architecture choices will work for their custom needs. Then, each implementation is pushed beyond its limits, providing useful insight into where and why things fail, and illuminating potential implementation changes which could push the boundaries further.

Through the simulation testing to come, the EDC will be publishing the resulting benchmarks for all to see! These benchmarks will include details on implementation goals and performance metrics for different stages of deployment. Additionally, best-practice articles which illustrate how to deploy the different architectural components (those referenced within the benchmarks) will also be posted, highlighting the optimal approach to integrating everything into the ThingWorx platform.

Stay tuned to see more about just how versatile the ThingWorx Platform can be! We look forward to discussing these findings as they are published right here on the PTC Community.
View full tip
Since the marketplace extension is no longer supported and the drivers may be outdated, you may build your own JDBC package/extension:

Download the Extension Metadata file Here
Download the appropriate JDBC driver
Build the extension structure by creating the directory lib/common
Place the JAR file in this directory location: lib/common/<JDBC driver jar file>
Modify the name attribute of the ExtensionPackage entity in the metadata.xml file as needed
Point the file attribute of the FileResource entity to the name of the JDBC JAR file
The metadata also contains a ThingTemplate; the name is set to MySqlServer, but it can be modified as needed
Select the lib folder and metadata.xml file and send them to a zip archive
Tip: The name of the zip archive should match the name given in the name attribute of the ExtensionPackage entity in the metadata.xml file
Import the newly created extension as usual

To use the JDBC extension, simply create a new Thing and assign it the new ThingTemplate that was imported with the JDBC extension.

Configuration Field Explanation:
JDBC Driver Class Name: depends on the driver being used; refer to the driver documentation
JDBC Connection String: defines the information needed to establish a connection with the database; connection string examples can be found in the ThingWorx Help Center
ConnectionValidationString: a simple query, executed to verify return values from the database, that will work regardless of table names

Alternatively, you may download the JDBC connector creator from the marketplace here: https://marketplace.thingworx.com/Items/jdbc-connector-extension
Then you may just view the mashup and use it to package your JDBC JAR into an extension (which can later be imported into ThingWorx).
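For reference, the packaging steps above can be scripted from the command line. A sketch, assuming a MySQL driver JAR and "MySqlServerExtension" as the name used in the metadata.xml name attribute:

# Build the expected extension layout
mkdir -p MySqlServerExtension/lib/common
cp mysql-connector-java-8.0.28.jar MySqlServerExtension/lib/common/
cp metadata.xml MySqlServerExtension/
# Zip it - the archive name should match the ExtensionPackage name attribute
cd MySqlServerExtension && zip -r ../MySqlServerExtension.zip metadata.xml lib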
View full tip
We are excited to announce ThingWorx 8.4 is now available for download!

Key functional highlights
ThingWorx 8.4 covers the following areas of the product portfolio: ThingWorx Analytics and ThingWorx Foundation, which includes Connection Server and Edge capabilities.

ThingWorx Foundation

Next Generation Composer:
File Repository Editor added for application file management
New entity Config Table Editor to enable application configurability and customization
Localization support for new languages: Italian, Japanese, Korean, Spanish, Russian, Chinese/Taiwan, Chinese/Simplified

Mashup Builder:
Responsive Layout with new Layout Editor
13 new and updated widgets (beta)
Theming Editor (beta)
New Functions Editor
New Personalized Workspace

Platform:
Added support for AzureSQL, a relational database-as-a-service (DBaaS), as a new persistence provider – a PaaS database that is always running on the latest stable version of SQL Server Database Engine and patched OS, with 99.99% availability
Added support for InfluxData, a leading time series storage platform, as a new ThingWorx persistence provider – supports ingesting large amounts of IoT data and offers high availability with a clustering setup
New extension for Remote Access and Control – supports VNC and RDP desktop sharing for any remote device, with HTTP and SSH connectivity supported
Query Microservice – an optional microservice to offload the ThingWorx server by allowing query execution to occur in a separate process on the same or on a different physical machine
Click-and-go installers for Windows and Linux (RHEL), covering the Postgres, AzureSQL, and InfluxDB versions of ThingWorx
Thing Presence feature introduced, which indicates whether the connection of a thing is “normal” based on the expected behavior of the device

Security:
Major investments include updating 3rd-party libraries, handling of data to address cross-site scripting (XSS) issues, and enhancements to the password policy, including a password blacklist. A significant number of security issues have been fixed in this release. It is recommended that customers upgrade as soon as possible to take advantage of these important improvements.

Docker Support:
Added Dockerfile as a distribution medium for ThingWorx Foundation and Analytics
Allows building a Docker container image that unlocks the potential of Dev and Ops

Note: Legacy Composer has been removed and replaced with the New Composer.

Documentation: ThingWorx 8.4 Reference Documents, ThingWorx Platform 8.4 Release Notes, ThingWorx Platform Help Center, ThingWorx Analytics Help Center, ThingWorx Connection Services Help Center
View full tip
This post adds to my previous post: Deploying H2 Docker versions quickly

In addition to configuring the basic Docker Images and Containers, it's also possible to deploy them with a TLS / SSL certificate and access the instances via the HTTPS protocol.

For this a valid certificate is required inside a .jks keystore. I'm using a self-signed certificate, but commercial ones are even better! The certificate must be in the name of the machine which runs Docker and which is accessed by the users via browser. In my case this is "mne-docker". The password for the keystore and the private key must be the same - this is a Tomcat limitation. In my case it's super secret and "Password123456".

I have the following directory structure on my Operating System:

/home/ts/docker/
  certificates
    mne-docker.jks
  twx.8.2.x.h2
    Dockerfile
    settings
      platform-settings.json
      <license_file>
    storage
    Thingworx.war

The Recipe File

In the Recipe File I make sure that I create a new Connector on port 8443, removing the old one on port 8080. I do this by just replacing via the sed command - also introducing options for content compression. I'm only replacing the first line of the xml node as it holds all the information I need to change.

Changes to the original version I posted are in green.

FROM tomcat:latest
MAINTAINER mneumann@ptc.com
LABEL version = "8.2.0"
LABEL database = "H2"
RUN mkdir -p /cert
RUN mkdir -p /ThingworxPlatform
RUN mkdir -p /ThingworxStorage
RUN mkdir -p /ThingworxBackupStorage
ENV LANG=C.UTF-8
ENV JAVA_OPTS="-server -d64 -Djava.awt.headless=true -Djava.net.preferIPv4Stack=true -Dfile.encoding=UTF-8 -Duser.timezone=GMT -XX:+UseNUMA -XX:+UseG1GC -Djava.library.path=/usr/local/tomcat/webapps/Thingworx/WEB-INF/extensions"
RUN sed -i 's/<Connector port="8080" protocol="HTTP\/1.1"/<Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol" maxThreads="150" SSLEnabled="true" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" enableLookups="false" keystoreFile="\/cert\/mne-docker.jks" keystorePass="Password123456" ciphers="TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA256, TLS_DHE_RSA_WITH_AES_128_GCM_SHA256, TLS_DHE_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_RSA_WITH_AES_256_CBC_SHA256, TLS_DHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA" compression="on" compressableMimeType="text\/html,text\/xml,text\/plain,text\/css,text\/javascript,application\/javascript,application\/json"/g' /usr/local/tomcat/conf/server.xml
COPY Thingworx.war /usr/local/tomcat/webapps
VOLUME ["/ThingworxPlatform", "/ThingworxStorage", "/cert"]
EXPOSE 8443

Note that I also map the /cert directory to the outside, so all of my Containers can access the same certificate. I will access it read-only.

Deploying

sudo docker build -t twx.8.2.x.h2 .
sudo docker run -d --name=twx.8.2.x.h2 -p 88:8443 -v /home/ts/docker/twx.8.2.x.h2/storage:/ThingworxStorage -v /home/ts/docker/twx.8.2.x.h2/settings:/ThingworxPlatform -v /home/ts/docker/certificates:/cert:ro twx.8.2.x.h2

Mapping only the 8443 port ensures that just HTTPS connections are allowed. The :ro in the directory mapping ensures read-only access.

What next

Go ahead! Only secure stuff is kind of secure 😉 For more information on how to import the certificate into the Windows Certificate Manager so browsers recognize it, see also the Trusting the Root CA chapter in Trust & Encryption - Hands On.
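Once the container is up, the TLS setup can be verified from any machine that can reach the Docker host. A quick check, assuming the hostname and port mapping from the example above:

# -k accepts the self-signed certificate; -I fetches only the response headers
curl -kI https://mne-docker:88/Thingworx/
# Alternatively, inspect the certificate chain presented on the mapped port
openssl s_client -connect mne-docker:88 -showcerts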
View full tip
This post is part of the series Forced Root Cause Monitoring via Mashups and Modal Popups. To not feel lost or out of context, it's recommended to read the main post first.

Testing the Mashups

Open the rcp_MashupMain in a new browser window. For this test I find it easier to have the rcp_AlertThing and the Mashup in two windows side by side. The Mashup should be completely empty right now: nothing in the historic table (Grid), the Selected Reason is blank and the Checkbox is false.

In the rcp_AlertThing switch the trigger to true. The following will now happen:

The new value will be automatically pushed to the Mashup and the checkbox will switch to true.
The validator now throws the TRUE Event, as the condition is met and the trigger is indeed true.
The TRUE Event will invoke the Navigation Widget's Navigate service and the modal popup will be opened.
The user now only has the option to select one of the three states offered by the Radio Button selector; everything else will be greyed out.
After choosing any option, the SelectionChanged Event will be fired and trigger setting the selectedState as well as closing the popup.
The PopupClosed Event in our MashupMain will then be fired and populate the selectedState parameter into the textbox (just for display) and will also call the SetProperties service on our Thing, updating the selectedReason with the selectedState parameter value.
Once the property is set and persisted into the ValueStream via the SetProperties' ServiceInvokeCompleted Event, we clear the trigger (back to false) and update the Grid with the new data.

In the AlertThing, refresh the properties to actually see the trigger being false and the selectedReason set to whatever the user selected.

Note: When there is a trigger state and the trigger is set to true, the popup will always be shown, even if the user refreshes the UI or the browser window. This is to avoid cheating the system by not entering a root cause for the current issue. As the popup purely depends on the trigger flag, only clearing the flag can unblock this state. The current logic does not consider closing the popup when the flag is cleared - this could however be implemented using the Validator's FALSE Event and adding additional logic.
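Instead of editing the property in Composer, the trigger can also be flipped through the ThingWorx REST API, which is closer to how a real device or service would raise the error state. A sketch, assuming a local instance and an application key stored in $APPKEY:

# Set the trigger property to true and watch the popup appear in the running Mashup
curl -X PUT "http://localhost:8080/Thingworx/Things/rcp_AlertThing/Properties/trigger" -H "Content-Type: application/json" -H "appKey: $APPKEY" -d '{"trigger": true}'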
View full tip
This post is part of the series Forced Root Cause Monitoring via Mashups and Modal Popups. To not feel lost or out of context, it's recommended to read the main post first.

Create the Main Mashup

Create a new Mashup called "rcp_MashupMain" as Page and Responsive. Save and switch to the Design tab.

Design

Add a Layout with two Columns.
In the right Column add another Layout (vertical) with a Header and one Row.
Add a Grid to the Row.
Add a Panel to the Header.
Add a Panel into the Panel (we will use a Panel-In-Panel technique for a better design experience):
Set "Width" to 200
Set "Height" to 50
Set "Horizontal Anchor" to "Center"
Set "Vertical Anchor" to "Middle"
Delete its current "Style" and add a new custom style - all values to default (this will create a transparent border around the panel)
Add a Label to the inner Panel:
Set "Text" to "Historic data of what went wrong"
Set "Alignment" to "Center Aligned"
Set "Width" to 200
Set "Top" to 14
Add a Panel to the left Column.
Add a Navigation Widget to the Panel. This will call the Popup Window when its Navigate service is invoked (by a Validator):
Set "MashupName" to "rcp_MashupPopup"
Set "TargetWindow" to "Modal Popup"
Set "ShowCloseButton" to false
Set "ModalPopupOpacity" to 0.8 (to make the background darker and give more visual focus to the popup)
Set "FixedPopupWidth" to 500
Set "FixedPopupHeight" to 300
Set "PopupScrolling" to "Off"
Set "Visible" to false, so it will not be shown to the user during runtime
Add a Textbox to the Panel. This will show the numeric value corresponding to the State selected in the modal popup. It will just be used for display, with no other functionality - so that we can verify the actual values chosen:
Set "Read Only" to true
Set "Label" to "Selected Reason (numeric value)"
Add a Checkbox to the Panel. This will be used as an input for the Validator to determine if an error state is present or not:
Set "Prompt" to "Set this box to 'true' to trigger the popup. Set the value via the Thing to simulate a service. Once the value is set, the trigger is set to 'false' as the popup has been dealt with. A new historic entry will be created."
Set "Disabled" to true
Set "Width" to 250
Add a Validator to the Panel. This will determine if the checkbox (based on the trigger / error state) is true or false. If the checkbox switches to true, the validator will call the Navigate service on the Navigation Widget; otherwise it will do nothing.
Click on Configure Validator:
Add Parameter
Name: "Input"
Base Type: BOOLEAN
Click Done
Set "Expression" to "Input" (the Parameter we just created)
Set "AutoEvaluate" to true
Save the Mashup.

Data

In the Data panel on the right hand side, click on Add entity. Choose the "rcp_AlertThing" and select the following services:
GetProperties (execute when Mashup is loaded)
SetProperties
QueryPropertyHistory (execute when Mashup is loaded)
clearTrigger
Click Done and the services will appear in the Data panel.

Connections

After configuring the UI elements and the Data Sources, we now have to connect them to implement the logic we decided on earlier.

GetProperties service:
Drag and drop the trigger property to the Checkbox and bind it to State.
Set Automatically update values when able to true.

SetProperties service:
From the Navigation Widget drag and drop the selectedState property and bind it to the SetProperties service selectedReason property.
From the Navigation Widget drag and drop the PopupClosed event and bind it to the SetProperties service.
From the SetProperties service drag and drop the ServiceInvokeCompleted event and bind it to the clearTrigger service.
From the SetProperties service drag and drop the ServiceInvokeCompleted event and bind it to the QueryPropertyHistory service.

QueryPropertyHistory service:
Drag and drop the Returned Data's All Data to the Grid and bind it to Data.
On the Grid click on Configure Grid Columns:
Switch the position of the timestamp and selectedReason fields with their drag and drop handles.
For the selectedReason:
Set the "Column Title" to "Reason for Outage"
Switch to the Column Renderer & State Formatting tab
Change the format from "0.00" to "0" (as we're only using Integer values anyway)
Choose the State-based Formatting
Set "Dependent Field" to "selectedReason"
Set "State Definition" to "rcp_AlertStateDefinition"
Click Done

clearTrigger service:
There's nothing more to configure for this service. As the properties will automatically be pushed via the GetProperties service, there's no special action required after the service invoke for the clearTrigger service has been completed.

Validator Widget:
Drag and drop the Validator's TRUE event to the Navigation Widget and bind it to the Navigate service.
Drag and drop the Checkbox State to the Validator and bind it to the Input parameter.

Navigation Widget:
Drag and drop the Navigation Widget's selectedState to the Textbox and bind it to the Text property.

Save the Mashup.
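The data feeding the Grid can also be inspected outside the Mashup by invoking the same service through the ThingWorx REST API. A sketch, assuming a local instance and an application key stored in $APPKEY:

# POST invokes the service; the Accept header returns the property history as JSON
curl -X POST "http://localhost:8080/Thingworx/Things/rcp_AlertThing/Services/QueryPropertyHistory" -H "Content-Type: application/json" -H "Accept: application/json" -H "appKey: $APPKEY" -d '{}'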
View full tip
This post is part of the series Forced Root Cause Monitoring via Mashups and Modal Popups. To not feel lost or out of context, it's recommended to read the main post first.

Create a Popup Mashup

Create a new Mashup called "rcp_MashupPopup" as Page and Static. Save and switch to the Design tab.

Design

Edit the Mashup Properties:
Set "Width" to 500
Set "Height" to 300
Add a new Label:
Set "Text" to "Something went wrong - what happened?"
Set "Alignment" to "Center Aligned"
Set "Width" to 230
Set "Top" to 55
Set "Left" to 130
Add a new Radio Button:
Set "Button States" to "rcp_AlertStateDefinition"
Set "Top" to 145
Set "Left" to 25
Set "Width" to 450
Set "Height" to 100
In the Workspace tab, select the "Mashup":
Click on Configure Mashup Parameters
Add Parameter
Name: "selectedState"
BaseType: NUMBER
Click Done
Save the Mashup.

Connections

Select the Radio Button:
Drag and drop its Selected Value property to the Mashup and bind it to the selectedState Mashup Parameter.
Drag and drop its SelectionChanged event to the Mashup and bind it to the CloseIfPopup service.
Save the Mashup.
View full tip
This post is part of the series Forced Root Cause Monitoring via Mashups and Modal Popups. To not feel lost or out of context, it's recommended to read the main post first.

Create Entities

AlertStateDefinition

Create a new StateDefinition called "rcp_AlertStateDefinition". In the State Information tab, select Apply State: Numeric from the list on the right hand side.
Create a new State: Less than or equal to "1"
Display Name: "Something good"
Style: a new custom style with text color #f5b83d (orange)
Create a new State: Less than or equal to "2"
Display Name: "Something bad"
Style: a new custom style with text color #f55c3d (red)
Create a new State: Less than or equal to "3"
Display Name: "Something ugly"
Style: a new custom style with text color #ad1f1f (dark red) and a bold font
Edit the "Default" State:
Set the Style: a new custom style with text color #36ad1f (green)
We will not use this style, but in case we need a default configuration it will blend into the color schema.
Save the StateDefinition.

ValueStream

Create a new ValueStream called "rcp_ValueStream" (choose a default ValueStream, not a RemoteValueStream). Save the ValueStream.

AlertThing

Create a new Thing called "rcp_AlertThing", based on the Generic Thing Base Thing Template and using the rcp_ValueStream Value Stream.
In the Properties and Alerts tab create the following Properties:
Name: "trigger"
Base Type: BOOLEAN
With a Default Value of "false"
Check the "Persistent" checkbox
Name: "selectedReason"
Base Type: NUMBER
Check the "Persistent" checkbox
Check the "Logged" checkbox
Advanced Settings: Data Change Type: ALWAYS
In the Services tab create a new Service:
Name: "clearTrigger"
No Inputs and no Outputs
Service code: me.trigger = false;
When this service is executed, it will set the trigger Property to false.
Click Done to complete the Service creation.
Save the Thing.
View full tip
This post is part of the series Forced Root Cause Monitoring via Mashups and Modal Popups. To not feel lost or out of context, it's recommended to read the main post first.

Before we start

Create a new Project called "RootCausePopups" and save it. In the New Composer set the Project Context (top left box) to the "RootCausePopups" project. This will automatically add all of our new Entities into our project. Otherwise we would have to add each Entity manually on creation.
View full tip
This post is part of the series Forced Root Cause Monitoring via Mashups and Modal Popups. To not feel lost or out of context, it's recommended to read the main post first.

Required Logic

The following logic will help us realize this particular use case:

The trigger property on the AlertThing switches from false to true.
The MashupMain will receive dynamic Property updates via the AlertThing.GetProperties service. It will validate the value of the trigger Property and, if it's true, the MashupMain will show the MashupPopup as a modal popup. A modal popup will be exclusively in the foreground, so the user cannot interact with anything else in the Mashup except the modal popup.
In the modal popup the user chooses one of the pre-defined AlertStateDefinitions. When a State is selected, the popup will set the State as a Mashup Parameter, pass this to the MashupMain and close itself.
When the MashupPopup is closed, the MashupMain will read the Mashup Parameter.
The MashupMain will set the selectedReason in the AlertThing to the selected value. It will also reset the trigger property to false. This allows the property to be set to true again to trigger another forced popup.
On any value change the AlertThing will store the selectedReason State in a ValueStream to capture historic information on which root causes were selected at which time.
The ValueStream information will be displayed as a table in a GridWidget in the MashupMain once the new properties have been set.
View full tip
This post is part of the series Forced Root Cause Monitoring via Mashups and Modal Popups. To not feel lost or out of context, it's recommended to read the main post first.

Required Entities

In this simplified example we'll just use a Thing to set a status triggering the popup. This Thing will have two properties and one service:

Properties:
trigger (Boolean) - to indicate if an error status is present or not; if so, trigger the popup
selectedReason (Number) - to indicate the selected reason / root cause chosen in the modal popup

Service:
clearTrigger - to reset the trigger to "false" once a reason has been selected

The selectedReason will be logged into a ValueStream. In addition to the Thing and the ValueStream we will need a StateDefinition to pre-define potential root causes to be displayed in the popup. We will use three states in a traffic-light fashion to indicate the severity of the issue in a custom color schema. To display the monitoring Mashup and the popup we will need two Mashups.
View full tip
As it is not available on support.ptc.com, please provide Creo View and ThingView Widget documentation or a guide to viewing a 3D object in a custom mashup/UI outside of the ThingWorx Navigate app.

I am posting this request to the community, not to the ThingWorx developers portal, after discussing it with PTC technical support. Please refer to article CS291582.

LeighPTC, I had no option but to move to the community again, but this had happened. The post "Creo View and Thing View Widget Documentation to view 3D Object in custom mashup/UI" was moved by LeighPTC.

Please don't move this request to the ThingWorx developers portal, so that PTC customers can have Creo View and ThingView Widget documentation to view 3D objects in a custom mashup/UI, as it is not available.

Many thanks, Rahul
View full tip
I got this excellent question and I thought it worthwhile to put my answer here as well.

There are two ways to segregate information between clients. By default we use a ‘software’ approach to segregation by using Organizations. This allows you to designate a client to an Organization / Organizational nodes and give those nodes ‘visibility’ to specific entities within the software. This means that, through software logic, users can only see what they’ve been given visibility to see. This mainly applies to all the client’s equipment (ThingWorx Things): they can only see their own equipment. This would also apply to a specific set of their data, namely ValueStream data, because that can only be retrieved from the perspective of a ThingWorx Thing.

Blog/Wiki/DataTable/Stream entities can store data across clients and do not utilize visibility on a row basis; in this case appropriate queries would need to be created to allow retrieval for only a specific client. Here we use a construct for security that utilizes what we call the ‘system user’ and wrapper routines that work off the CurrentUser context, allowing you to create enforced, validated queries against the data so that a user can only retrieve their specific data.

In regards to the data itself, if you need to, you can provide actual ‘physical’ segregation by using multiple persistence providers and mapping Blog/Wiki/DataTable/Stream/ValueStream entities to different persistence providers. Persistence providers are basically additional database schemas (in the same database or in a different one) of the ThingWorx data storage schema, allowing you to completely separate the location where data is stored between clients. Note that just creating unique Blog/Wiki/DataTable/Stream/ValueStream entities per client and using visibility is still only a logical / software way of providing segregation, because the data will be stored in one and the same database schema, also known as the ThingWorx data persistence provider.
View full tip
In this session, we pick up where we left off with the mashup which was worked on in part 1 of our Advanced Mashup Expert Session series. Specifically, we will explore the concepts of master mashups, session variables, and media entities, using each to further enhance the look and feel of our mashup.     For full-sized viewing, click on the YouTube link in the player controls. Visit the Online Success Guide to access our Expert Session videos at any time as well as additional information about ThingWorx training and services.
View full tip
This video builds upon the mashup created in the basic session, and strives to create a more polished, user-friendly interface that is ready for deployment. In part 1, we’ll take a look at advanced layout designs and include a more varied set of widgets to help draw attention to some of the more pertinent properties being captured within the mashup.   For full-sized viewing, click on the YouTube link in the player controls. Visit the Online Success Guide to access our Expert Session videos at any time as well as additional information about ThingWorx training and services.  
View full tip
Database backups are vital when it comes to ensuring data integrity and data safety. PostgreSQL offers simple solutions to generate backups of the existing ThingWorx database instance and recover them when needed.

Please note that this does not replace a proper and well-defined disaster recovery plan. Export and import are part of such a strategy, but do not constitute the complete strategy. The commands used in this post are for Windows, but can be adjusted to work on Linux-based systems as well.

Backup

To create a backup, the export / dump functionality of PostgreSQL can be used.

pg_dump -U postgres -C thingworx > thingworxDump.sql

The -C option will include the statement to create the database in the .sql file and map it to the existing tablespace and user (e.g. 'thingworx' and 'twadmin'). The tablespace and user can be seen in the .sql file in the line with "ALTER DATABASE <dbname> OWNER TO <user>;"

In the above example, we're backing up the thingworx database schema and dumping it into the thingworxDump.sql file. Tablespace, username & password are also included in the platform-settings.json.

Restore

To restore the database, we assume an empty PostgreSQL installation. We need to create the DB schema user first via the following command line:

psql -U postgres -c "CREATE USER twadmin WITH PASSWORD 'ts';"

With the user created, we can now re-generate the tablespace and grant permissions to the twadmin user:

psql -U postgres -c "CREATE TABLESPACE thingworx OWNER twadmin LOCATION 'C:\ThingWorx\ThingworxPostgresqlStorage';"
psql -U postgres -c "GRANT ALL PRIVILEGES ON TABLESPACE thingworx TO twadmin;"

Finally the database itself can be restored by using the following command line:

psql -U postgres < thingworxDump.sql

This will create the database and populate it with tables, functions and sequences, and will also restore the data from the .sql file.

It is important to have the database, username and tablespace match the original system - otherwise granting permissions and re-creating data might fail on recovery. User and tablespace can also be reused, so only the database has to be deleted before restoring it.

Part of the strategy

Part of the backup and recovery strategy should be to actually test both the backup and the recovery part of the procedure. It should be well tested on a test environment before being deployed to any production environments. Take backups on a regular basis and test disaster recovery once or twice a year, to ensure the procedure is still valid.

Data is the most important resource in your application - protect it!
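To take backups on a regular basis, the dump can be scheduled. A minimal cron sketch for a Linux host, assuming a /backups directory and password-less access for the postgres user via a .pgpass file (note that % must be escaped in crontab entries):

# Nightly dump at 02:00 with a date-stamped file name; prune dumps older than 14 days
0 2 * * * pg_dump -U postgres -C thingworx > /backups/thingworxDump_$(date +\%F).sql && find /backups -name 'thingworxDump_*.sql' -mtime +14 -delete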
View full tip
The purpose of this document is to show how you can set up an MXChip IoT DevKit and send the readings of this microprocessor to ThingWorx through an Azure cloud server. You will also learn how to view the values that are being sent.
View full tip
A fresh look at getting started with ThingWorx in a relevant context that outlines the DevOps needed to kick-start your programming.

For full-sized viewing, click on the YouTube link in the player controls. Visit the Online Success Guide to access our Expert Session videos at any time as well as additional information about ThingWorx training and services.
View full tip