IoT & Connectivity Tips

Background: Customer machines create files containing data, configuration, log data, etc. These files may contain critical information for service technicians about the asset and its data. How can you make sure you get this important information to the Cloud as soon as it's available? Use File Watcher, a standard tool available with Firewall-Friendly Agents. You can configure File Watcher to monitor a directory or file specification and automatically upload the file(s) to the Axeda Cloud. By monitoring selected directories and files on the device, the File Watcher determines which files have changed or appeared and then uploads those files to the Axeda Cloud Server. Sometimes the file being written or copied into the file watch target is large enough that the operation takes time to complete. Without taking the right precautions, the file upload may start before the file creation process has finished.

Recommendations: To ensure that large files don't cause problems, we recommend using a "move" or "rename" operation instead of a file copy operation to place your file into a watched directory. Unlike a "copy" operation, an atomic operation like "move" or "rename" ensures that the Agent doesn't detect a file in the midst of growing and prematurely begin to upload it. In situations when it isn't possible to use an atomic "move" or "rename" operation, you can implement a delay. Setting a delay in the File Watcher configuration will prevent the Agent from sending files to the Axeda Cloud Server before those files have transferred completely to the watched directory. Finally, use file compression wisely. You need to balance the benefits of compressing files before sending against the potential adverse impact that compression will have on the Agent. Compressing very large files BEFORE sending to the Platform will affect the Agent; however, compressing smaller files can be beneficial. If the Agent's computer is of lower power, the CPU may have to work overtime to compress huge files or to send huge uncompressed files. As a general rule, compressing files before sending is recommended. However, depending on your Agent and network setup, it may prove more beneficial to stream uncompressed files rather than compress first and then send.

Need more information? For information about configuring and using file watchers, see the Axeda® Builder User's Guide (PDF) or the online help in Axeda Builder.
View full tip
One commonly asked question is what the correct settings are for the Configuration Tables tab when creating/setting up a Database Thing to connect to a SQL Server (2005 or later) database. There are a couple of ways to do this, but the tried and true settings are listed below.
connectionValidationString - SELECT GetDate()
jDBCConnectionURL - jdbc:sqlserver://servername;databaseName=databasename
jDBCDriverClass - com.microsoft.sqlserver.jdbc.SQLServerDriver
Max number of connections in the pool - 5 (this can be modified based on the number of concurrent connections required)
Database User Name - databaseusername
Database Password - databaseuserpassword
The JDBC driver file sqljdbc4.jar is installed by default with the ThingWorx server. It is located in TomcatDir\webapps\Thingworx\WEB-INF\lib\
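Once the Database Thing connects, you typically add your own SQL (Query) or SQL (Command) services to it and call them like any other ThingWorx service. A minimal sketch, assuming a Database Thing named MySqlServerThing with a user-defined SQL (Query) service called GetOpenWorkOrders (both names are hypothetical):

// Hedged sketch - Thing and service names are illustrative assumptions
var workOrders = Things["MySqlServerThing"].GetOpenWorkOrders(); // returns an infotable
logger.info("Open work orders found: " + workOrders.rows.length);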
View full tip
Hi everyone,

Today I'm here with a very exciting guest—Janie! Janie recently joined PTC after spending the last two years at a global professional services firm where she held many roles, including that of a lead ThingWorx developer. While Janie did do some front-end development work and UI creation, the bulk of her time was spent on the backend writing services that enabled the application's functionality. In this role, she not only wrote code herself, she also played a large part in the in-platform architectural decisions, determining the best way to utilize the platform in order to satisfy complex business requirements.

Throughout her last two years, she worked on various IIoT projects in the manufacturing industry, developing a real-time asset health monitoring and OEE/production tracking application to monitor key metrics like downtime and output. For each customer engagement, an assessment of IoT platforms was often undertaken to weigh strengths and weaknesses against the business requirements of the customer, and while she spent time looking into a variety of platforms, ThingWorx always came out on top.

That said, her experience with ThingWorx showed her how much and how quickly the technology can change the way a manufacturing company operates. She decided that instead of doing implementations of the technology, she wanted to have an impact on the technology itself and have a hand in the direction it goes and the way it continues to change the businesses we know today. This desire is what led her to product management and the role she currently holds at PTC today.

I recently sat down with Janie to hear how it has been moving from a developer on the platform to a PM for it, and to learn about her past ThingWorx experiences and how those will influence the work she does at PTC.

Here's how our convo went.

Kaya: In your previous role as an IoT developer at another company, what made you choose ThingWorx over other IoT solution platforms?
Janie: We chose ThingWorx over other IoT platforms due to its strong ability to connect such a wide variety of machines using ThingWorx Industrial Connectivity via Kepware, its industry-leading standing in the market and the fact that it did not require its users to have extensive development knowledge given its low-code environment, which enabled non-developers to be successful.

Kaya: Why do you think ThingWorx saved you time over other development platforms/products?
Janie: There were three key ways the platform helped to save me time. First, I didn't have to write code from scratch. For example, when needing to manipulate arrays or parse through uploaded csv files, there were pieces of code readily available for me to use without having to know how to do it by myself. Second, due to the widgets ThingWorx already had available, I didn't have to spend time making custom front-end UIs or widgets. And, thirdly, I saved a ton of time during the connectivity phase because connectivity to devices was supported OOTB through Kepware's extensive suite of drivers, including some of the key ones we utilized such as Allen-Bradley ControlLogix, Modbus, Siemens, and user-configurable drivers.

Kaya: Along the lines of saving time, how would your experience with the platform have differed had you leveraged one of our manufacturing apps like Production KPIs?
Janie: As I talked about previously, having access to a pre-built solution would have clearly saved a ton of development time and accelerated time to value.
It also would have reduced the complexity of our completed app, which would have made it more scalable. While I understand the manufacturing apps are not 100% ready to go out of the box, but rather configurable stepping stones into a larger and more holistic solution, if we could have had Production KPIs as our development starting point, we would have had a more sound and already proven way of tackling production and OEE tracking, and could have added even more value on top of that.

Kaya: What were some of your favorite aspects about the platform?
Janie: The code snippets to get started were a big favorite, as were the OOTB widgets that allowed for quick visualization of important data. The OOTB industrial connectivity with seamless integration to ThingWorx was huge—we were able to connect multiple devices and stream information in real time with little difficulty, which enabled us to derive even more value from the platform and really focus on becoming a fully smart and connected operation. Finally, the drag-and-drop UI was simple and intuitive.

Kaya: What do you think the top misconception is about IoT?
Janie: I would say repeatability. Scaling an IoT solution across an enterprise is not as simple as it is often represented. I know we're releasing new functionality to help users more easily deploy their solutions, so I'm excited about that.

Kaya: So, now that you work as a PM for the platform, what are a few things you wish you had known as a developer working on ThingWorx?
Janie: Seeing everything we're working on for the platform now is very exciting; I wish I could have been more aware of what's on the roadmap so I knew what was coming as I was developing on the platform.

Kaya: On a similar note, now that you are working on the ThingWorx PM team, what items are you hoping to drive/change based on your experience?
Janie: First, I'd like to drive stronger interaction with our system integrators (SIs) and partners by providing them insights into our roadmaps and allowing them to provide feedback in a more seamless way. Next, I'd like to improve upon our essential platform developer capabilities, such as CI/CD, debugging, versioning, testing, etc. That said, Solution Central and PTC's commitment to and integration with Microsoft products are very exciting roadmap features for me.

Kaya: I completely agree. Those are some great points. As I'm sure you're aware (though our readers might not be), we are working within the platform itself, and on a new feature, Solution Central, to provide more of that basic functionality so that, in addition to building blocks, low-code development and a drag-and-drop UI, we can continue to help you accelerate your time to value.

Readers, I hope you enjoyed hearing from Janie! We're excited to have her on the ThingWorx team and we look forward to making ThingWorx even stronger.

Let me know what you think in the comments below.

Stay connected,
Kaya
View full tip
Raise your hand if you're ready for seamless, rapid deployment of your ThingWorx applications with visibility into your various environments! It's time to say goodbye to error-prone deployments with manual dependency tracking and hello to Solution Central!

Releasing this fall as part of 8.5, Solution Central is a brand-new cloud service coming to the ThingWorx platform to enable you to efficiently manage your ThingWorx applications across the enterprise.

With Solution Central, you'll no longer be caught chasing missing dependencies (like ThingShapes, Mashups, templates or library extensions). Solution Central automatically identifies and packages up the dependencies required for your application. No more manual dependency madness!

Whether you're managing many apps deployed to a few environments or a single app deployed to hundreds of environments, Solution Central allows you to accelerate your deployment through an intuitive UI or powerful APIs for automation.

Here's how it works:
1. Begin by creating your application in Composer with a project.
2. Let Solution Central automatically package up all the artifacts and dependencies required for your application.
3. Allow Solution Central to publish your solution package to the cloud.
4. Deploy your application to your various environments (local servers, data centers, cloud systems) directly from Solution Central. It's like your company has its own private app store.

Here's a sneak peek of the Solution Central UI!

Keep an eye out for the release of ThingWorx 8.5 at the end of Sept 2019 and begin accelerating your app deployment! Check out the presentation and demo my fellow PM Chris Baldwin and I delivered at LiveWorx19—and be sure to attend LiveWorx20! To navigate to our session recording, search for "Introducing Solution Central: Your Gateway to Accelerated IIoT Value Across the Enterprise" here.

Sound interesting? Message me directly to discover how you can become part of the Solution Central Private Preview Program!

-Kaya
View full tip
ThingWorx Analytics is offered through a user interface called Analytics Builder with some pre-configured functionality. However, should you want to create your own jobs and mashups, all features from Analytics Builder, and some more, are available through the ThingWorx services. Running most functionality requires that you provide some data to the Analytics services. This is where the datasetRef parameter is required.

Data uploaded through Analytics Builder
Any dataset uploaded through Builder will have a datasetUri, and the format will be parquet (all lower case). The datasetUri can be obtained from the list of datasets in Builder.

Passing data as an in-body dataset
If data isn't uploaded through Analytics Builder, data can be supplied as an infotable in the data parameter of the datasetRef. Metadata will also need to be supplied if a new dataset is being created (create job of the AnalyticsServer_DataThing). If this data is being supplied for a scoring job, as long as the column names match up to what the model is expecting, TWX Analytics will run inference on them appropriately.
The filter parameter is for parquet datasets already uploaded into TWXA and takes an ANSI SQL statement format to add conditions that reduce the number of rows.
Exclusions is a single-column infotable listing the columns you wish to remove from the job you are trying to submit. Example: if you want Profiles to run on only 5 out of 10 columns, you would list the 5 columns you don't want included in this exclusions infotable.
Data may also be supplied as a csv file in the file repository in some cases, in which case you would give the datasetUri parameter the location of the file in the TWX file repository (of the format thingworx://UseCaseFileRepo/tempdata.csv) and the format, which would be csv.
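For illustration, here is a minimal sketch of what a datasetRef could look like when assembled in a ThingWorx service. The URI placeholder and column name are assumptions for the example; check the service signatures on your Analytics server Things for the exact parameter names.

// Variant 1: reference a dataset already uploaded through Analytics Builder (parquet)
var datasetRef = {
    datasetUri: "<datasetUri copied from the dataset list in Builder>", // hypothetical placeholder
    format: "parquet",              // all lower case
    filter: "temperature > 100"     // optional ANSI SQL condition to reduce rows (column name is an assumption)
};

// Variant 2: reference a csv file stored in a ThingWorx file repository
var csvDatasetRef = {
    datasetUri: "thingworx://UseCaseFileRepo/tempdata.csv",
    format: "csv"
};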
View full tip
OpenDJ is a directory server which is also the base for WindchillDS. It can be used for centralized user management, and ThingWorx can be configured to log in with users from this Directory Service.

Before we start
Prerequisites:
Docker on Ubuntu
JKS keystore with a valid certificate
The JKS keystore is stored in /docker/certificates - on the machine that runs the Docker environments
The certificate is generated with a Subject Alternative Name (SAN) extension for the hostname, fully qualified hostname and IP address of the opendj (Docker) server
Change the placeholder values (passwords, machine names, etc.) to your own when following the instructions
If possible, use a more secure password than "Password123456"... the one I use is really bad

Related Links
https://hub.docker.com/r/openidentityplatform/opendj/
https://backstage.forgerock.com/docs/opendj/2.6/admin-guide/#chap-change-certs
https://backstage.forgerock.com/knowledge/kb/article/a43576900

Configuration
Generate the PKCS12 certificate
Assume this is our working directory on the Docker machine (with the JKS certificate in it):
cd /docker/certificates
Create a .pin file containing the keystore password:
echo "Password123456" > keystore.pin
Convert the existing JKS keystore into a new PKCS12 keystore:
keytool -importkeystore -srcalias muc-twx-docker -destalias server-cert -srckeystore muc-twx-docker.jks -srcstoretype JKS -srcstorepass `cat keystore.pin` -destkeystore keystore -deststoretype PKCS12 -deststorepass `cat keystore.pin` -destkeypass `cat keystore.pin`
Export the keystore and import it into the truststore:
keytool -export -alias server-cert -keystore keystore -storepass `cat keystore.pin` -file server-cert.crt
keytool -import -alias server-cert -keystore truststore -storepass `cat keystore.pin` -file server-cert.crt

Docker Image & Container
Download and run:
sudo docker pull openidentityplatform/opendj
sudo docker run -d --name opendj --restart=always -p 389:1389 -p 636:1636 -p 4444:4444 -e BASE_DN=o=opendj -e ROOT_USER_DN=cn=Manager -e ROOT_PASSWORD=Password123456 -e SECRET_VOLUME=/var/secrets/opendj -v /docker/certificates:/var/secrets/opendj:ro openidentityplatform/opendj
After building the container, it MUST be restarted immediately so that it recognizes the new certificates:
sudo docker restart opendj
Verify that the certificate is the correct one; execute on the machine that runs the Docker environments:
openssl s_client -connect localhost:636 -showcerts

Load the .ldif
Use e.g. JXplorer and connect. Select the opendj node, then LDIF > Import File (my demo breakingbad.ldif is attached to this post). Skip any warnings and messages and continue to import the file.

ThingWorx Tomcat
If ThingWorx runs in Docker as well, a pre-defined keystore could be copied during image creation. Otherwise connect to the container via the command line:
sudo docker exec -it <ThingworxImageName> /bin/sh
Tomcat configuration:
cd /usr/local/openjdk-8/jre/lib/security
openssl s_client -connect 10.164.132.9:636 -showcerts
Copy the certificate between BEGIN CERTIFICATE and END CERTIFICATE of the above output into opendj.pem, e.g.
echo "<cert_goes_here>" > opendj.pem Import the certificate keytool -keystore cacerts -import -alias opendj -file opendj.pem -storepass changeit   ThingWorx Composer As the IP address is used (the hostname is not mapped in Docker container) the certificate must have a SAN containing the IP address     Only works with the TWLDAPExample Directory Service not the ADDS1, because ADDS1 uses hard coded Active Directory queries and structures and therefore does not work with OpenDJ. User ID (cn) must be pre-created in ThingWorx, so the user can login. There is no automatic user creation by the Directory Service. Make sure the Thing is Enabled under General Information   Appendix LDAP Structure for breakingbad.ldif cn=Manager / Password123456 All users with password Password123456    
View full tip
Setting Up the Azure Load Balancer with a ThingWorx High Availability Deployment

Purpose
In this post, one of PTC's most experienced ThingWorx deployment architects, Desheng Xu, explains the steps to configure Azure Load Balancer with ThingWorx when deployed in a High Availability architectural model. This approach has been used successfully on customer implementations for several ThingWorx 7.x and 8.x versions. However, with some of the improvements planned for ThingWorx High Availability architecture in the next major release, this best practice will likely change (so keep an eye out for updates to come).

Azure Load Balancer
The overview article What is Azure Load Balancer? from Microsoft will give you a high-level understanding of load balancers in general, as well as the capabilities and limitations of Azure Load Balancer itself. For our purposes, we will leverage Azure Load Balancer's capability to manage incoming internet traffic to ThingWorx Platform virtual machine (VM) instances. This configuration is known as a Public Load Balancer.

Important Note: Different load balancers operate at different "layers" of the OSI Model. Azure Load Balancer operates at Layer 4 (Transport Layer) – it is indifferent to the specific TCP payload. As a result, you must either configure both the front-end and back-end to work on SSL, or configure both of them to work on non-SSL communications. "SSL Termination" or "TLS Offload" is not supported by Azure Load Balancer.

Azure offers multiple different load balancing solutions. If you need some guidance on choosing the right one for you, I highly recommend reviewing the Microsoft DevBlog post Azure Load Balancing Solutions: A guide to help you choose the correct option.

High-Level Diagram: ThingWorx High Availability with Azure Load Balancer
To keep this article focused, we will not go into the setup of ThingWorx in a High Availability architecture. It will be assumed that ThingWorx is working correctly and the ZooKeeper cluster is managing failover for the Platform instances as expected. For more details on setting up this configuration, the best place to start would be the High Availability Administrator's Guide.

Planning
In this installation, let's assume we have the following plan (you will likely need to change these values for your own implementation):
Azure Load Balancer will have a public facing domain name: edc.ptc.io
Azure Load Balancer will have a public IP: 41.35.22.33
ThingWorx Platform VM instance 1 has a local computer name, like: vm1
ThingWorx Platform VM instance 2 has a local computer name, like: vm2

ThingWorx Preparation
By default, the ThingWorx Platform provides a health check endpoint at /Thingworx/Admin/HA/LeaderCheck , which can only be accessed with a credential configured in platform-settings.json :
"HASettings": { "LoadBalancerBase64EncodedCredentials":"QWRtaW5pc3RyYXRvcjphZG1pbg==" }
However, with current versions of ThingWorx, Azure Load Balancer cannot perform this health check with a credential. As a workaround, you can create a pings.jsp (using the attached JSP example code) in the Tomcat folder $CATALINA_HOME/webapps/docs . This workaround will no longer be needed in ThingWorx 8.5 and newer releases.

There are two lines that likely need to be modified to meet your situation:
The hostname in final String probeURL (line 10) must match your endpoint domain name. It's edc.ptc.io in our example; don't forget to replace this with your real hostname!
You also need to add a line in your local hosts file and point this domain name to 127.0.0.1 . For example: 127.0.0.1 edc.ptc.io
The credential in final String base64EncodedCredential (line 14) must match the credential configured in platform-settings.json.
Additionally: Don't forget to make the JSP file accessible and executable by the user who starts the Tomcat service for ThingWorx. These changes must be applied to both ThingWorx Platform VM instances.
Tomcat needs to be configured to support SSL on a specific port. In this example, SSL will be enabled on port 8443. Please make sure a similar configuration is included in $CATALINA_HOME/conf/server.xml
<Connector protocol="org.apache.coyote.http11.Http11NioProtocol" port="8443" maxThreads="200" scheme="https" secure="true" SSLEnabled="true" keystoreFile="/opt/yourcertificate.pfx" keystorePass="dontguess" clientAuth="false" sslProtocol="TLS" keystoreType="PKCS12"/>
The values in keystoreFile and keyStorePass will need to be changed for your implementation. While pkcs12 format is used in the above example, you can use a different certificate format, as long as it is supported by Tomcat (example: jks format). All other parameters, like maxThreads , are just examples - you should adjust them to meet your requirements.

How to Verify
Before configuring the load balancer, verify that the health check workaround is working as expected on both ThingWorx Platform instances. You can use the following command to verify:
curl -I https://edc.ptc.io:8443/docs/pings.jsp
The expected result from the active node should look like: HTTP/1.1 200
There will be three or more lines in the output, depending on your instance configuration, but you should be able to see the keyword: HTTP/1.1 200.
The expected result from the passive node should look like: HTTP/1.1 503

Load Balancer Configuration
Step 1: Select SKU
Search for "load balancer" in the Azure marketplace and select Load Balancer from Microsoft. Verify the correct vendor before you create a Load Balancer.

Step 2: Create load balancer
To create a proper load balancer, make sure to read Microsoft's What is Azure Load Balancer? overview to understand the differences between "basic" and "standard" SKU offerings. If your IT policy only requires SSL communication to the outside but doesn't require SSL communication in a health probe, then the "basic" SKU should be adequate (not considering zone redundancy). You have to decide on the following parameters:
Region
Type (public or internal)
SKU (basic or standard)
IP address
Public IP address name
Availability zone
PTC cannot provide specific recommendations for these parameters – you will need to choose them based on your specific business needs, or consult Microsoft for available offerings in your region.

Step 3: Start to configure
Once a load balancer is successfully created by Azure, you should be able to see the new resource.

Step 4: Confirm frontend IP
Click frontend IP configuration on the left side and you should be able to see the public IP address configuration. Please make sure to register this IP with your domain name ( edc.ptc.io in our example) in your Domain Name Server (DNS). If you are unfamiliar with DNS configuration, you should consult with the administrator of your DNS server. If you are using Azure DNS, this Quickstart article on creating Azure DNS Zones and records may help.

Step 5: Configure Backend pools
Click Backend pools and click "Add" to add a backend pool definition. Select a name for your Backend pool (using ThingworxBackend in our example). The next step is to choose the Virtual network.
Once you select the Virtual network, you can choose which VM (or VMs) you want to put behind this load balancer. The VM should be the ThingWorx VM instance. In a high availability architecture, you will typically need to choose two instances to put behind this load balancer.
Please Note: The "Virtual machine status" column in this table only shows VM status, not ThingWorx status. The ThingWorx running status will be determined by the health probe configured in the next step.

Step 6: Configure Health Probe
The Health Probe will be used to determine the ThingWorx Platform's running status. When a ThingWorx Platform instance is running as the leader, it will return HTTP status code 200 during a health probe. The Azure Load Balancer will rely on this status code to determine if the platform is running properly. When a ThingWorx Platform VM is not responding, offline, or not the leader in a High Availability setup, the health probe will return an HTTP status code other than 200.
For the health probe, select HTTPS for the protocol. In our example port 8443 is used, though another port can be selected if necessary. Then, provide the "/docs/pings.jsp" we created earlier as the probe's path. You may need to change this path value if you put this file in a different location.

Step 7: Configure Load balancing rules
Select "Load balancing rules" from the left side and click "Add". Select TCP as the protocol; in our example we are using 443 as the front-end port and 8443 as the back-end port. You can choose other port numbers if necessary.
Reminder: Azure Load Balancer is a Layer 4 (Transport Layer) router – it cannot differentiate between HTTP or HTTPS requests. It will simply forward requests from front-end to back-end, based on the port-forwarding rules defined.
Session persistence is not critical for current versions of ThingWorx as only one active node is currently permitted in a High Availability architecture. In the future, selecting Client IP may be required to support active-active architectures.

Step 8: Verify health probe
Once you complete this configuration, you can go to the $CATALINA_HOME/logs folder and monitor the latest local_access log. You should see entries similar to those pictured below - HTTP 200 responses should be observed from the ThingWorx leader node, and HTTP 503 responses should be observed from the ThingWorx passive node. In the example below, 168.63.129.16 is the internal IP address of the load balancer in the current region.

Step 9: Network Security Group rules to access Azure Load Balancer
On its own, Azure Load Balancer does not have a network access policy – it simply forwards all requests to the back-end pool. Therefore, the appropriate Network Security Group associated with the backend resources within the resource group should have a policy to allow access to the destination port of the backend ThingWorx Foundation server (shown as 8443 here, for example). The following image displays an inbound security rule that will accept traffic from any source and direct it to port 443 of the IP address for the Azure Load Balancer.

Enjoy!! With the above settings, you should be able to access ThingWorx via: https://edc.ptc.io/Thingworx (replacing edc.ptc.io with the hostname you have selected).

Q&A
Can I configure the health probe to run on a port other than the traffic port (8443 in this case)?
Yes – if desired you can use a different port for the health probe configuration.

Can I use a protocol other than HTTPS for the health probe?
Yes – you can use a different protocol in the health probe configuration, but you will need to develop your own functional equivalent to the pings.jsp example in this article for the protocol you choose.

Can I configure ZooKeeper to support the health probe?
No – the purpose of the health probe is to inform the Load Balancer which node is providing service (the leader), not to select a leader. In a High Availability architecture, ZooKeeper determines which VM is the leader and talking with the database. This approach will change in future releases where multiple ThingWorx instances are actively processing requests.

How well does Azure Load Balancer scale?
This question is best answered by Microsoft – as a starting point, we recommend reading the DevBlog post: Azure Load Balancing Solutions: A guide to help you choose the correct option.

How do I access logs for Azure Load Balancer?
This question is best answered by Microsoft – as a starting point, we recommend reviewing the Microsoft article Azure Monitor logs for public Basic Load Balancer.

Do I need to configure anything specifically for WebSocket and/or AlwaysOn communication?
No – Azure Load Balancer is a Layer 4 (Transport Layer) router - it only handles TCP traffic forwarding.

Can I leverage this load balancer to access all VMs behind it via ssh?
Yes – you could configure Inbound NAT rules for this. If you require specific help in configuring this, the question is best answered by Microsoft. As a starting point, we recommend reviewing the Microsoft tutorial Configure port forwarding in Azure Load Balancer using the portal.

Can I view the current health probe status on a portal?
No – unfortunately, there is no current way to do this with Azure Load Balancer.
View full tip
Energy. Innovation. Passion. That's how I would describe LiveWorx19 in three words.

From beginning to end, LiveWorx truly was a one-of-a-kind digital transformation event. Whether it was the top-notch demos, the delicious lunch from food trucks, or the unique integration of our products with our partners' technologies, everywhere I turned, inspiration and excitement were booming.

Here are my top three highlights:

1. Delivering a breakout session
Chris Baldwin and I shared exciting new functionality that we're releasing in ThingWorx to simplify and accelerate deployment of applications across your enterprise. We introduced Solution Central, a new portal in the cloud for application management and deployment (more info to come in a future post, but in the meantime, reach out with any questions).

2. Assisting with Xtropolis, our Demo Floor
Speaking directly with you and hearing how you're using ThingWorx to transform industries like medical devices, beer brewing or furniture manufacturing was pretty impressive. It was also quite rewarding and inspiring to hear your clever questions about your unique IoT use cases while being surrounded by folks performing service on a giant John Deere tractor, attendees having virtual rides on a Polaris vehicle and a team of brewers using ThingWorx and Vuforia to brew some tasty Trillium beer.

3. Participating in Usability Tests
Thank you to everyone who participated in the UX Lab. Hearing your direct feedback in usability tests on what you like and what you want to see changed or added to the product helps make it feel like you are right there with us designing ThingWorx—because, well, you are. We take the feedback you share at LiveWorx right back to the office and begin iterating. So, a quick shoutout to everyone who participated—thanks for your feedback.

If you weren't able to attend LiveWorx, check out our CEO, Jim Heppelmann, delivering the opening keynote above and hear him discuss how PTC is interweaving AR, IoT, generative design, PLM and AI to create a digital thread that transforms industrial enterprises across the globe.

If, after hearing about our overall strategy, you're looking for more details on our product roadmap, watch the roadmap session above, delivered by our EVP of Products, Kathleen Mitford.

One of the most rewarding parts of the show for me was meeting some of you—ThingWorx developers and Ask Kaya readers—in person!

Stay connected,
Kaya

P.S. If you want to hear what others thought of LiveWorx, check out what Forbes, Automation World or Diginomica had to say.

P.P.S. If you had as much fun as I did or you're looking to attend for the first time, tickets for LiveWorx 2020 are already available—check them out here!
View full tip
We are counting down the days for you—developers, technologists, futurists—to witness the unparalleled power of PTC's technology. Hosted by PTC, LiveWorx is the world's leading digital transformation event, designed to equip you with the knowledge, power and tools you need to begin or accelerate your company's digital transformation.

I'm excited to share that I will be presenting a breakout session on June 11th at 1:15pm EST around brand-new functionality we're offering to improve your ability to manage and deploy your ThingWorx applications.

Want to learn more? Attend LiveWorx 2019 or learn about our livestream options. However you choose to attend, it's an event that I'm pretty amped for and I can't wait for you to be, too.

Hope to see you there—it'd be great to meet you in person!

Stay connected,
Kaya
View full tip
In case it's useful for anyone, I successfully used the Thingworx Importer to import an Entity using version 8.4.1. The import command was in a CURL script (can also be run using e.g. Cygwin on Windows) and the entity data was contained in an XML file. The XML file is attached to this post, and the CURL script is copied into the bottom of the post body.   Notes on using the script: Copy CURL code into import.sh You might need to change the line endings to UNIX, e.g. in Notepad++ menu option Edit > EOL Conversion > Unix Give permissions to run import.sh (chmod +x import.sh) ./import.sh >>>>>>>>>>>> #!/bin/bash APPKEY="2e6704c0-XXXX-XXXX-XXXX-a1589e387d1a" TWX_HTTP_PORT="8018" FILE_PATH_TWX="Things_testImport.xml" PROTOCOL="http://" IP="localhost" URL="$PROTOCOL""$IP"":""$TWX_HTTP_PORT""/Thingworx/Importer?purpose=import" curl -X POST -H 'appKey: '$APPKEY \ -H 'Content-Type: multipart/form-data' \ -H 'Accept: application/json' \ -H 'x-thingworx-session: true' \ -H 'X-XSRF-TOKEN: TWX-XSRF-TOKEN-VALUE' \ -F 'upload=@'$FILE_PATH_TWX \ $URL <<<<<<<<<<<<   Also TWX Importer is explained in this support article.
View full tip
Helloooooo ThingWorx users,

Ever wanted to see the coolest technology in action? Ever wished you could surround yourself with awesome ThingWorx developers? Maybe you've even wished you could meet the ThingWorx product management team!

If that's the case, you're in luck! Join me for LiveWorx 2019 from June 10 – 13 in Boston this summer to discover how you can make digital transformation a reality for your organization. See what all the hype's about here!

I'll be presenting a rockin' session on some exciting new functionality coming in ThingWorx to help with enterprise-wide app deployment. See me present with Chris Baldwin on Tues, June 11, @ 1:15pm in our session titled Introducing Solution Central: Your Gateway to Accelerated IIoT Value Across the Enterprise!

For a sneak peek of what's to come at LiveWorx, here are seven sensational sessions our developers can't miss! (Note: Dates and times are subject to change.)
It's Electric: How Caterpillar Develops Compelling IIoT Apps That Resonate With Customers & Dealers - Mon, June 10, @ 4:30 (45 min)
ThingWorx and Microsoft Azure from A to Z - Tues, June 11, @ 4:00pm (45 min)
It's All About The Apps: Introducing ThingWorx Mashup Builder 2.0 and More! - Wed, June 12, @ 9:00am (45 min)
Connecting Asset Advisor to Azure - Wed, June 12, @ 3:00pm (45 min)
ThingWorx for Scalability with InfluxDB and Beyond! - Wed, June 12, @ 3:00pm (45 min)
From Pilot to Production: Tips & Tricks for Vuforia Studio - Thurs, June 13, @ 12:00pm (45 min)

Hope to see you all there and meet you in person!

Stay connected,
Kaya
View full tip
When predicting a Boolean goal such as "failure in the next hour," or any other goal that has a yes or no answer, ThingWorx Analytics (TWXA) models will output a 'risk' of the event occurring. TWXA will intelligently pick a threshold beyond which that risk warrants attention.
1. In Analytics Builder, click the export button.
2. This will export a PMML model and download it for you.
3. Open up the PMML model; in the output section, you will find a condition that explains the threshold that was selected by TWX Analytics.
In this example case, TWXA chose 0.5 as the best threshold.
Note: The export button is only available in Builder for TWXA 8.4+.
View full tip
Objective
Use Influx as a database to store data coming from the Kepware ThingWorx Industrial Connectivity server.

Prerequisite
Configure the ThingWorx connection to Kepware's KEPServerEX and bind tags that exist in KEPServerEX to Things in the ThingWorx model, as referenced in the Industrial Connections Example.

Configuration Steps
1. Create a database in Influx for ThingWorx. Connect:
influx -precision rfc3339
> SHOW DATABASES
> CREATE DATABASE thingworx
2. Create an Influx Persistence Provider and configure it.
3. In the Industrial Thing where the Remote Properties are bound, define a Value Stream, and make sure the Persistence Provider is set to Influx and the Value Stream is set to Active.
4. In the Value Stream Properties and Alerts, define the mappings using Manage Bindings to specify which properties are to be stored in this value stream.
5. Save it and test it to make sure properties are stored in Influx:
> use thingworx
> show measurements
name
----
Channel1.Device1
> show field keys on thingworx from "Channel1.Device1"
name: Channel1.Device1
fieldKey fieldType
-------- ---------
Channel1_Device1_Tag10 integer
Channel1_Device1_Tag11 integer
Channel1_Device1_Tag12 integer
Channel1_Device1_Tag13 integer
Channel1_Device1_Tag14 integer
Channel1_Device1_Tag15 integer
Channel1_Device1_Tag16 integer
Channel1_Device1_Tag17 integer
Channel1_Device1_Tag18 integer
Channel1_Device1_Tag19 integer
Channel1_Device1_Tag2 integer
Channel1_Device1_Tag20 integer
Channel1_Device1_Tag21 integer
Channel1_Device1_Tag3 integer
Channel1_Device1_Tag4 integer
Channel1_Device1_Tag5 integer
Channel1_Device1_Tag6 integer
Channel1_Device1_Tag7 integer
Channel1_Device1_Tag8 integer
Channel1_Device1_Tag9 integer
The following shows data stored in Channel1_Device1_Tag2:
> select Channel1_Device1_Tag2 from "Channel1.Device1"
2019-02-20T16:26:13.699Z 8043
2019-02-20T16:26:14.715Z 8044
2019-02-20T16:26:15.728Z 8045
2019-02-20T16:26:16.728Z 8046
2019-02-20T16:26:17.727Z 8047
2019-02-20T16:26:18.725Z 8048
2019-02-20T16:26:19.724Z 8049
2019-02-20T16:26:20.722Z 8050
2019-02-20T16:26:21.723Z 8051
2019-02-20T16:26:22.722Z 8052
View full tip
  Question: What is the best way to use Git with ThingWorx? Disclaimer: Please note that, while the ThingWorx Git Backup Extension is a very useful tool, it is not a PTC product, nor is it supported by PTC.   After the release of ThingWorx 8.4 two weeks ago, are you looking for even more? Can’t get enough of ThingWorx? Good thing—because we’ve got you covered.   We have just released Version 2.0 of the ThingWorx Git Backup Extension! Reach out if you'd like to learn how to obtain access to it.    In the newest version, you’ll find: Major UX improvements and UI restyling. The extension now includes a new page called GitBackup.Main.Mashup, which offers access to all the functionality previously available in the Home Mashup (see below). GitBackup.Main.Mashup is now the single interface for all the GitBackupThings in the system; you’ll no longer need to go to Composer to manage them individually. New ThingWorx Git Backup Extension 2.0 Support for querying and selecting the Bitbucket repositories that you, as a user, have access to. An updated ExtensionExportExtension with bugfixes.   If you’re looking for guidance on how to configure Git with ThingWorx, check out one of my earlier posts that explains how you can use Git to achieve continuous integration with ThingWorx or view the updated Git Backup Extension User Guide attached (see the “Attachments” section to the right).   Shoutouts to Vladimir, Gabriel, Bogdan, Moritz and Pierre for making this available.   Let me know what you think of Version 2.0 in the comments below!   The open-source Git Backup Extension can be found here.    Stay connected, Kaya
View full tip
This video shows the steps to install ThingWorx Analytics Server 8.4 as well as the ThingWorx Analytics Extension.
View full tip
Installing an Open Source Time Series Platform
For testing InfluxDB and its graphical user interface, Chronograf, I'm using Docker images for easy deployment. For this post I assume you have worked with Docker before.
In this setup, InfluxDB and Chronograf will share an internal Docker network to exchange data.
InfluxDB can be accessed, e.g. by ThingWorx, via its exposed port 8086. Chronograf can be accessed for administrative purposes via its port 8888. The following commands can be used to create an InfluxDB environment.
Pull images
sudo docker pull influxdb:latest
sudo docker pull chronograf:latest
Create a virtual network
sudo docker network create influxdb
Start the containers
sudo docker run -d --name=influxdb -p 8086:8086 --net=influxdb --restart=always influxdb
sudo docker run -d --name=chronograf -p 8888:8888 --net=influxdb --restart=always chronograf --influxdb-url=http://influxdb:8086
InfluxDB should now be reachable and will also restart automatically when Docker (or the operating system) is restarted.
View full tip
Today on Ask Kaya, I have a riddle.

I was effective and trendy, but now I can be annoying. I sometimes tend to look out of place and I get in others' space. I look easy to learn, but few have truly mastered my intricacies. Am I the floss dance?

No, I'm not the floss dance. I'm the expression and validator widgets.

It's time to say goodbye to those pesky widgets that were super useful but super annoying. Yep, those widgets that littered your design canvas but were "Invisible at Runtime."

I'm talking about expressions, validators, status messages and event routers. In our next release, expression and validator widgets will no longer appear on the canvas at build time.

You may remember from the previous post titled "Ask the Expert: What are the top three features in ThingWorx 8.4 that I might not know about?" In the post, we discussed the concept of Data Helpers, now known as "Functions."

What do Functions do?
Functions give you the ability to add custom logic and bindings to improve UI application functionality. Before we describe how to configure them, let's first explain what they are.

An expression widget runs any expression you give it. That piece of logic might be something like result = a + b. While an expression can run any type of logic—not just numbers—you must specify the output base type to be the same as the input base type.

Expressions can also be used to run a different service based on an event. For example, a user may write an expression to run a service if a value changes. They may not care about the value itself but rather just want to know that the value changed.

A validator widget is similar to an expression widget; the key difference is that a validator only outputs a Boolean. When the result is true, you bind to one service; when false, another. Unlike an expression widget, the validator widget does not have to have matching input and output datatypes because the output datatype will always be Boolean. (See the short validator sketch at the end of this post.)

The input for a validator can be anything. You can create a scenario in which a validator widget outputs a status message that reads "the value is within the acceptable range" when the validator returns true or "the value is outside of the acceptable range" when the validator returns false.

Ready for the extra good stuff? We're introducing a new editor for you to create, add and configure expressions and validators.

How can I use Functions?
Let's walk through an example using the following steps.
Create a new Mashup. You'll see a new tab called "Functions," which, by default, appears in the bottom right panel.
Click the "+" arrow in the top right of the "Functions" panel.
Choose "expression."
Use the new Functions editor to write your expression. In this example, we'll say that result = a + b; (New Functions Editor)
We'll then set the default values for a and b to be 2 and 3, respectively, to output a result of 5.

Expressions are just as powerful as they were before, but they no longer take up space on your mashup during design time, and they can now be configured in our brand new editor! (To spread the joy even more, the same holds true for validator widgets.)

Reach out with any questions or thoughts below!

Stay connected and keep floss dancing,
Kaya
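To round out the expression example above, here is a minimal validator sketch. The input names (value, lowerLimit, upperLimit) are assumptions; you would define them as parameters of the validator Function and bind them in your mashup.

// Hedged validator sketch - outputs a Boolean, so the true/false results
// can each trigger a different service or status message.
result = (value >= lowerLimit) && (value <= upperLimit);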
View full tip
Hello readers,

This week @ Ask Kaya we thought we would give someone else the keyboard for a different point of view on the platform. I'm Chris, a Product Manager here at PTC working on the ThingWorx platform. Instead of telling you what is coming in our next release, or interviewing one of our awesome PTC experts, I thought I would take a moment to reflect on the platform's success and dream about where it could be. After visiting with customers and partners at PTC Forum Europe this week, it looks like many of you share in our vision. This is a bit of a fun post and by no means an exact look into ThingWorx 2019, but see what you think.

When I think about the next generation of ThingWorx, here is what I see:
I see Mashups that generate themselves with suggested visualizations based on your input for style, user persona, and navigation
I see Thing Models that populate, based on your use case, your equipment and your connectivity
I see a self-learning platform with understanding of all industrial data sources, presenting options of integration to extend knowledge or informing you of correlations
I see applications that automatically master individual pieces of equipment, small processes, and handfuls of KPIs and will command larger fleets, networks, and multi-site operations
I see a platform without installation or setup, but one that is there when you need it
I see test code and harnesses that are created based on what you build with our tools, and tests that run automatically when things are changed
I see developers being notified when things are changed by other developers, or when modules from PTC have new versions
I see a central place to manage solutions, with push-button access for administrators to deploy to sites
I see upgrades happening seamlessly, confidently, with no penalty for failure and with the speed of iterative development
I see a self-aware system that monitors and scales itself cost effectively

Readers, what do you see? Sound off in the comments!

Chris
View full tip
Warning
This post is quite long, has various chapters and you might get bored reading it. If you just want a summary, read the "Use case" and "Conclusion" chapters - and maybe the "Required Logic" chapter, because I made a cool graph for it. The rest is all about implementation...

Introduction
I recently had the opportunity to deliver a ThingWorx training for Saint Gobain. One of the use cases for their ThingWorx application is monitoring machine errors and outages on the production line. If an outage or error status is triggered, the machine operator will see a popup on the monitoring screen where he is forced to select a root cause. This root cause will then be persisted in ThingWorx for more data transformation, analytics and reporting - like cost analysis or optimization opportunities. During the training we were also discussing how such forced root cause monitoring can be implemented via Mashups and the usage of modal popups. I've compiled the details into this post as it might also interest other developers. The ThingWorx Entities I'm using in this example can be downloaded from here.
Note: I'm using the word "Alert" here - but not in the context of a ThingWorx Property Alert... just be aware of this so the wording doesn't confuse you.

Use Case
One of the requirements for Saint Gobain's IoT Solution was interactive alert monitoring directly in the factory on the production machines. Let's say the machine has stopped; the root cause should be recorded. For this, an interactive popup will be displayed on the machine's monitoring display and an employee has to choose the root cause from a pre-defined list. This could be a planned outage, e.g. for maintenance, or an unplanned outage, e.g. a material jam. The root cause will then be recorded and a history of outage causes can be stored in a ThingWorx value stream. This can then be analyzed later with e.g. ThingWorx Analytics capabilities to understand and optimize the machine's production capabilities and efficiency. As the root cause must be entered, the popup will be forced to be displayed when a certain condition / criteria is met - and it will only disappear when a root cause is chosen. The user should not be able to interact with any other elements of the Mashup and not be able to just close the popup. The popup will close itself and reset the initial condition once the root cause has been identified and chosen.

Requirements
Required Entities
Required Logic
Note: Just to make it easier to manage and export Entities, I will add all of the created elements to a new Project called RootCausePopups. All of the elements will have "rcp_" added in front of their name - just to make it easier for me to find and identify them.

Implementation
Before we start - set a Project Context
Create Entities
Create a Popup Mashup
Create the Main Mashup
Testing the Mashups

Conclusion
Certain conditions (like the state of a checkbox) can be used to trigger modal popups. A modal popup forces a user interaction and will not offer any other option until a choice is made. With these parameters it's easy to require a mandatory reaction from users when it's important to capture data which relies on the analysis of an engineer or a user - e.g. reasons for machine outages. Using this technique there's not much training required for staff, other than pushing a button with an option of their choice - this saves quite some time compared to capturing data in any other way (e.g. updating Excel files or manual pen-and-paper techniques).
As this data is now part of the ThingWorx instance, it can be used for further transformation, analysis or just for monitoring purposes. There are of course more possibilities when it comes to states and formatting, which would go beyond the scope of this post - but feel free to explore...
In the example we wouldn't need the textbox, but it's there to demonstrate whether the correct values are persisted or not.
In the example we could of course also set the visibility of the checkbox to false, so that we would only see the popup and the Grid holding historic information.
We could also create different StateDefinitions to color-format / text-format the input differently from the output in the Grid.
If you found this interesting (and actually made it to the end of this post) - feel free to play with this concept a bit more... The dependencies might seem a bit difficult, but it should be easy to implement and to adjust to your own ideas and requirements.
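As a small illustration of the trigger condition described in the conclusion, here is a hedged sketch of an expression Function that could force the popup open. The property names (machineState, selectedRootCause) are assumptions for the example, not names from the downloadable entities.

// Hedged sketch - force the modal popup while the machine is stopped
// and no root cause has been selected yet; bind 'result' to the logic
// that opens the popup and re-evaluate it when the underlying data changes.
result = (machineState === "STOPPED") && (selectedRootCause === "");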
View full tip
  Meet Anthony. Anthony came to PTC from a large industrial company as a user of Axeda, a leading device connectivity company that PTC acquired in 2014. With a background in aerospace engineering and experience in a variety of industries including life safety, healthcare, nuclear power, and oil and gas, Anthony has been working to create new value around innovation for customers transitioning to ThingWorx. When he’s not working on IIoT, he’s playing music on Cape Cod, photographing Hawaiian landscapes or bringing awesome inflatable chairs into the office.    You may recall from last week's post that Thing Presence is one of the top three features in 8.4 that you might not know about. This week, I spoke with Anthony for a deeper dive on Thing Presence. Check out how it went.   Kaya: What was the challenge users were facing that led us to create Thing Presence? Anthony: Axeda customers transitioning to ThingWorx were struggling with the connectivity use case and kept asking, ‘why is my asset always offline?’ When we evaluated it, we discovered that it was tied to the difference between AlwaysOn and Axeda eMessage protocol architectures. IsConnected will always report false for polling and duty cycle devices.   The use case of Thing Presence is to know that the asset is reporting into the network and is ready to provide information (push) or be accessed to retrieve information or do a remote desktop support (pull). This use case is relevant for any asset in ThingWorx that uses duty cycle.   In ThingWorx 8.4 (coming in early 2019), the new IsReporting state will inform the user when a polling device is communicating on a regular basis. If it is, then IsReporting will be true. The IsReporting state resolves the discrepancy wherein devices that are on duty cycle appear disconnected due to the IsConnected state reporting false.  New "IsReporting" state improves visibility of an asset's communication state Kaya: How exactly does Thing Presence work? Anthony: You can think of it in terms of having teenagers. You tell them they need to check in with you on a regular basis through text message. If a text is missed, all of a sudden you take action.   Now, imagine the teenager is a device. If a device was supposed to check in every five minutes and it misses one poll, I want to flag that as a problem. The challenge with that from a service perspective is that sometimes your service personnel will go out and work on a device and may need to take it offline for a bit of time; we need to factor that in. We certainly don’t want to deploy someone or try to fix something when a service technician is already there.   You might decide, ‘my average service visit is an hour, so, if I miss a couple of pings, I’m okay; but, if I’m offline for more than an hour, then I’d like to know about it because I’d like to take action.’ Thing Presence allows you to define that window.   Kaya: You’ve mentioned ‘duty cycle’ and ‘polling cycle.’ Can you explain these terms? Anthony: Duty cycle and polling cycle are the same thing. It means that a device has a time for which it is expected to check in, and, provided that it checks in within that timeframe, all is good with the connection.   Connected services rely on a connection. As soon as the connection is broken, I no longer have the ability to service the asset.   Kaya: Given everything we’ve discussed, where do you see Thing Presence headed? Anthony: The next piece of the equation for us is to provide information on the health of the connection. 
When you look at servicing a remote asset, you need to a) know that it is communicating, and b) know that the connection is healthy before you try anything. I wouldn’t want to try a software update if I am losing connection with my asset on a regular basis.   What do we mean by health? We mean: is the device checking in when it should be? If it’s not, is there a pattern to that connection, and are those patterns tied to applications? For example, is it only on during working hours? Does it turn off during holidays? If the device is in a school, does it turn off during summer maintenance work? This allows us to garner insights on how and when the equipment is being used, not just operating status. At the end of the day, does this mean I can apply analytics and AI tools to it? Absolutely. Is it the first place I would apply it? Probably not.   Kaya: In your mind, what is the next big thing coming in ThingWorx that you’re particularly excited about? Anthony: Mashup 2.0 and Asset Advisor 8.4. (Double mic drop.)   Kaya: That’s awesome. My last question is related to you. Can you tell me what your favorite aspect is about working at PTC? Anthony: The chaos. Very often it’s chaos that breeds innovation. What I mean by that is that, if you try to create something because you sit down and you say, ‘I am going to innovate,’ very often that is a failure because the majority of the time it’s the influence of a deadline, a customer need or an application at hand that makes the environment trying and sometimes hectic. But, it is in these challenging environments where you can be the most creative and innovative as an engineer.
View full tip
If your mashups are feeling cramped and crowded, we've got some good news for you. Coming in our next release, ThingWorx will offer truly responsive layouts for your mashups, allowing you to adjust the size of your viewport or present a mashup on a variety of displays—laptop, tablet, phone, etc.—without compromising presentation or responsiveness. This new layout capability is based on Flexbox. Check out this article to learn more.

Here's a sneak peek of our new layout editor. It's still in development, but I hope you can start to see how you can use this in your UI development.

Sneak Peek: Responsive Mashup Layout
Notice the ability to split the layout into containers. Each of the containers allows you to lay out widgets horizontally or vertically, distribute them from left or right, align them from top or bottom, justify space between widgets, etc. This gives you as the developer ultimate control to define the responsiveness of the mashup.

Like what you see? Any feedback? Let me know below!

Stay connected,
Kaya
View full tip