
IoT & Connectivity Tips

Recently, mentor.axeda.com was retired. The content has not yet been fully migrated to the ThingWorx Community, though a plan is in place to do so over the coming weeks. Attached are OneNote 2016 and PDF files containing the content that was previously available on the Axeda Mentor website.
A user can make a direct REST call to the ThingWorx Platform, but when a website tries to make the same REST call, the platform server blocks the request because it is a cross-origin request. To allow cross-origin requests from all (or specific) websites, a CORS filter must be added to the server. The CORS (Cross-Origin Resource Sharing) specification enables cross-origin requests from websites deployed on a different server. With the CORS filter enabled, a third-party tool or website can retrieve data from a ThingWorx instance.

Follow the steps below to configure the CORS filter:

1. Update the web.xml file (located in $CATALINA_HOME/conf/web.xml).

For a minimal configuration, add the following:

```xml
<filter>
  <filter-name>CorsFilter</filter-name>
  <filter-class>org.apache.catalina.filters.CorsFilter</filter-class>
</filter>
<filter-mapping>
  <filter-name>CorsFilter</filter-name>
  <!-- "/*" opens the platform to all URL patterns; limiting the pattern is recommended. -->
  <url-pattern>/*</url-pattern>
</filter-mapping>
```

NOTE: the url-pattern /* opens the ThingWorx application to every domain.

For an advanced configuration, use the following:

```xml
<filter>
  <filter-name>CorsFilter</filter-name>
  <filter-class>org.apache.catalina.filters.CorsFilter</filter-class>
  <init-param>
    <param-name>cors.allowed.origins</param-name>
    <param-value>http://www.customerwebaddress.com</param-value>
  </init-param>
  <init-param>
    <param-name>cors.allowed.methods</param-name>
    <param-value>GET,POST,HEAD,OPTIONS,PUT</param-value>
  </init-param>
  <init-param>
    <param-name>cors.allowed.headers</param-name>
    <param-value>Content-Type,X-Requested-With,accept,Origin,Access-Control-Request-Method,Access-Control-Request-Headers</param-value>
  </init-param>
  <init-param>
    <param-name>cors.exposed.headers</param-name>
    <param-value>Access-Control-Allow-Origin,Access-Control-Allow-Credentials</param-value>
  </init-param>
  <init-param>
    <param-name>cors.support.credentials</param-name>
    <param-value>true</param-value>
  </init-param>
  <init-param>
    <param-name>cors.preflight.maxage</param-name>
    <param-value>10</param-value>
  </init-param>
</filter>
<filter-mapping>
  <filter-name>CorsFilter</filter-name>
  <!-- "/*" opens the platform to all URL patterns; limiting the pattern is recommended. -->
  <url-pattern>/*</url-pattern>
</filter-mapping>
```

NOTE: update the cors.allowed.origins parameter with the desired web address.

2. Save the web.xml file.
3. Restart Tomcat.

For additional information, see the official Tomcat reference document: http://tomcat.apache.org/tomcat-7.0-doc/config/filter.html#CORS_Filter

This was tested with an online JavaScript editor (JSFiddle) by executing the script below:

```javascript
var data = null;
var xhr = new XMLHttpRequest();
xhr.open("GET", "http://localhost:8080/Thingworx/Things", true);
xhr.withCredentials = true;
xhr.send();
```

The request was successful and the list of Things was returned.
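For a quick sanity check from another origin, the same request can also be issued with the newer fetch API. This is a minimal sketch, assuming the same local ThingWorx instance used above and an already-authenticated browser session:

```javascript
// Minimal CORS check with fetch, run from a browser console on another origin.
// Assumes a local ThingWorx instance at localhost:8080 and an existing
// authenticated session ("include" sends the session cookie cross-origin).
fetch("http://localhost:8080/Thingworx/Things", {
    method: "GET",
    credentials: "include",
    headers: { "Accept": "application/json" }
})
    .then(function (response) {
        // If the CORS filter is misconfigured, the browser blocks the response
        // before this handler ever runs and logs a CORS error instead.
        console.log("Status: " + response.status);
        return response.text();
    })
    .then(function (body) { console.log(body); })
    .catch(function (err) { console.error("CORS or network error:", err); });
```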
Hi, everyone!

We're actively working towards the ThingWorx 9.0 release, and we're ready to provide a sneak peek into the biggest feature of 9.0: Active-Active Clustering for High Availability configuration.

You may be wondering: doesn't ThingWorx already offer High Availability?

Yes, ThingWorx already supports a High Availability configuration. Previous versions of ThingWorx, such as the ThingWorx 8.X releases, support an Active-Passive configuration, where one "active" ThingWorx server performs all processing and maintains the live connections to other systems such as databases and connected assets. Meanwhile, in parallel, there is a second "passive" ThingWorx server that is a mirror image and regularly updated with data, but does not maintain active connections to any of the other systems. If the "active" ThingWorx server fails, the "passive" ThingWorx server is made the primary server, but it can take a few minutes for it to establish connections to the other systems.

So, how is ThingWorx Active-Active different?

Active-Active configuration differs from Active-Passive in that all the ThingWorx servers in the cluster are "active." Not only is data mirrored across all ThingWorx servers, but all of the servers, instead of only one, maintain live connections with the other systems. This way, if any of the ThingWorx servers fail, the other ThingWorx servers take over instantaneously with no recovery time.

Since all ThingWorx servers are active, they process in parallel and, as a result, the cluster can process more data than a single server or a cluster in an Active-Passive configuration. Simply put, multiple servers working together outperform a single server. This allows customers to scale their deployment by simply adding more ThingWorx servers to the cluster (horizontal scaling), which avoids the scale limitations of increasing the performance of a single server (vertical scaling).

What does that mean for me?
- Higher Availability: you can avoid single points of failure and configure the ThingWorx Foundation platform in an Active-Active cluster mode to achieve the highest availability for your IIoT systems and applications.
- Increased Scalability: you can now horizontally scale from one to many ThingWorx servers to easily manage large amounts of your IIoT data at scale more smoothly than ever before.

Stay tuned! We'll be posting more information on Active-Active Clustering: how it's achieved in ThingWorx, architectural component overviews, and what it means for your ThingWorx deployment!

In the meantime, we're running the Active-Active Clustering Beta Program. Interested in participating? Reach out to Ryan Servais (rservais@ptc.com) or Ayush Tiwari (atiwari@ptc.com) to learn more about participation!

Stay connected!
Kaya
In this blog post I'm covering the practical aspects of setting up a Chain of Trust and configuring ThingWorx to use it. There are already some examples available on how to do this via the command line using a self-signed certificate, e.g. https://support.ptc.com/appserver/cs/view/solution.jsp?n=CS193947. In this example I'll be using KeyStore Explorer, a graphical tool for managing keystores: http://keystore-explorer.org/. To learn more about the theory behind trust and encryption, see also Trust & Encryption - Theory.

The Chain of Trust

Creating a Root CA

Open KeyStore Explorer and create a new JKS keystore via File > New > JKS. Via Tools > Generate Key Pair, create a new key pair. Use the RSA algorithm with a key size of 4096; this size is more than sufficient. Set a validity period of e.g. 302 days. It's important that all subsequent certificates have a lower validity period than the original signer. Leave the other defaults and fill out the Name (address-book icon). Only the Common Name is required; usually I never use the Email field, as I want to avoid receiving more spam than necessary. Click OK and enter an alias for the Root CA (leave the default). A new password needs to be set for the private-key part of the certificate. The new Root CA will now show up in the list of certificates.

Creating an Intermediate CA

The Intermediate CA is signed by the Root CA and is used to sign the server-specific certificate. This is a common approach: in case the CA signing the server-specific certificate gets corrupted, not all of the created certificates are affected, only the ones signed by the corrupted CA. In the list, right-click the Root CA and choose Sign > Sign New Key Pair. Use the RSA algorithm with a key size of 4096 again. Ensure the validity period is less than that of the Root CA, e.g. 301 days (because every second counts). Fill out the Name again, this time as the Intermediate Certificate. An Intermediate Certificate is a Certification Authority (CA) in itself. To reflect this, a Basic Constraints extension must be added via Add Extensions; mark the checkbox for Subject is a CA. Confirm with OK, set the alias (or leave the default) and set the password for the private-key part. The new Intermediate CA will now show up in the list of certificates.

Creating a Server-Specific Certificate

In the list, right-click the Intermediate CA and choose Sign > Sign New Key Pair. Use the RSA algorithm with a key size of 4096 again. Ensure the validity period is less than that of the Intermediate CA, e.g. 300 days (because still, every second counts). Fill out the Name again; this time it's important to use the actual public server name, so that browsers can match the server identity against the name in the browser's address bar. If the server name does not match exactly, the certificates and establishing trust will not work! In addition, a Subject Alternative Name must be defined, otherwise the latest Chrome versions consider the certificate invalid. Click on Add Extensions and the + sign, choose Subject Alternative Name, and add a new name with the + sign; make it the same as the Common Name (CN) above. As before, confirm everything with OK, then enter an alias and a password for the private-key part.

Saving the Keystore

All certificates should now show with their corresponding properties in the certificate list. Notice the expiry dates: the higher up the chain, the sooner the expiry date. If this is not the case, the certificates and establishing trust will not work! Save the keystore via File > Save As and set a password, this time for the keystore itself. Because of a Tomcat restriction, the password for the keystore must be the same as the password for the private key of the server-specific certificate. It's highly recommended, and hopefully obvious, that the passwords for the other private keys should be different, and that the password for the keystore should be completely different as well. The keystore password will be shared with e.g. Tomcat; the private-key passwords should never be shared with a source you do not trust!

Configuring ThingWorx

In ThingWorx, configure <Tomcat>/conf/server.xml as mentioned, e.g., in https://support.ptc.com/appserver/cs/view/solution.jsp?n=CS193947. I'm using a connector on port 443 (as it's the default HTTPS port):

```xml
<Connector port="443" protocol="org.apache.coyote.http11.Http11NioProtocol" maxThreads="150"
           SSLEnabled="true" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS"
           enableLookups="false" connectionTimeout="20000"
           keystoreFile="C:\ThingWorx\myKeystore.jks" keystorePass="changeme" redirectPort="8443" />
```

Restart Tomcat to pick up the new configuration. ThingWorx can now be called via https://<myserver.com>/Thingworx. It's not yet trusted by the client, so it shows as potentially unsafe (and for me it shows in German as well, sorry!). However, it's already using encrypted traffic, as every HTTPS connection uses TLS. In this case we're communicating in a secure way, but we do not know if the server (and its providers) can be trusted or not. Sharing credit card information or other personal details could be dodgy in such a scenario.

Trusting the Root CA

To trust our Root CA, we first have to export it. Every browser allows you to view the certificate; in Chrome, for example, this is done via Developer Tools > Security. As we're using KeyStore Explorer, we can use it to export the Root CA: right-click it in the certificate list and choose Export > Export Certificate Chain. Leave the defaults and export it as a .cer file. Open the Certificate Manager by typing certmgr into the Windows search field. Open Trusted Root Certification Authorities, right-click the Certificates folder and choose All Tasks > Import. Choose the file you've just exported for the Root CA. The Root CA is now a trusted certificate authority. As the ThingWorx server-specific certificate chains up to the Root CA, it's now also recognized by the browser, and the connection is considered secure and trustworthy. Congratulations!
The term 'Extension' or 'Plugin' often has many different meanings. For example, from the point of view of a Product Manager it often means an 'easy' way to add additional functionality to an existing piece of software. In contrast, from the point of view of a Software Developer it often means new syntax to memorize, extensive amounts of API documentation to read, and very often weeks or even months of trial and error to make this 'easy' addition of functionality.

We at ThingWorx recognized that in order to be a true Platform we needed to take action to make the creation of Extensions easier at all phases of the process. Our development team took on the challenge of understanding what it is that normally makes this such a difficult process, and of finding solutions. We took to the open-source community looking for a platform that could offer our Extension developers a wide array of functionality that was easily accessible and familiar. This is where the Eclipse IDE comes in. We were able to create a plugin of our own for the Eclipse IDE that makes it easy for an Extension developer to create an Extension project, generate ThingWorx-specific code, manage all the project configuration and build files, and also package the Extension. We can do all of this without having a developer read any API documentation or manually write any code, leaving the Extension developer to focus on what they are best at, which is adding that additional functionality we mentioned earlier.

Extensibility and the true nature of a Platform

Extensibility is a core aspect of any true Platform, as it allows users to add functionality at any time to meet new and changing requirements. The capabilities of extensions are almost endless, but here are a few examples:
- Adding new UI Widgets to be used to visualize data
- Adding any custom third-party libraries to be used seamlessly in ThingWorx
- Easily accessing REST APIs outside of ThingWorx
- Creating helper Resources to be used across the entire platform
- Adding custom Entities easily to multiple ThingWorx instances
- Providing custom Authenticators and Directory Services

As you can see, it is possible to do practically anything that you or our community might find useful for the Internet of Things. This is the nature of a true 'Platform'.

How do I get started developing an extension?

There are three steps that will help you dive into Extension development quickly. First, you need an instance of ThingWorx Foundation and the ability to navigate its UI, called Composer. Second, a basic understanding of the ThingWorx Model, or "Thing Model", is necessary. Finally, you will need an installation of the Eclipse IDE with the ThingWorx Eclipse Plugin installed to get started developing your extension.

1) Getting familiar with ThingWorx Foundation

The easiest way to get started playing with the ThingWorx platform is to head over to the Developer Portal and spin up a hosted ThingWorx Foundation server. This is as easy as clicking the 'Create Foundation Server' button, and a 30-day hosted instance will be created for you to use as your own personal development playground. If you prefer to set up and work in your own environment, you can also download a Developer Trial Edition to host on your own machine. In order to get familiar with ThingWorx Foundation, I recommend going through our ThingWorx Foundation Quickstart Guide, which introduces you to the core building blocks of the platform and guides you through a typical scenario of creating a simple IoT application.

2) Understanding the Thing Model basics

If you are already familiar with the Thing Model and know the basics of using the ThingWorx Platform, then you can probably skip over this section. If you aren't, or just want a refresher, I'll go over the basics here. The Thing Model is a collection of Entities that define your solution or business model in ThingWorx. You need a Thing Model for a few reasons. For those in software development, the Thing Model and its benefits are very similar to those of the object-oriented programming model. A good model allows you to maximize reusability, maintainability, and encapsulation. Having a sound Thing Model means that the future of your IIoT solution will be minimally affected by things like migration, iterative changes, permission changes and security vulnerabilities.

The three most commonly used and most important Entities within your model are Things, Thing Templates, and Thing Shapes. These entity types will be the main building blocks for your Thing Model, and they are only a few of the Entity Types provided by ThingWorx. It is not necessary, but definitely recommended, to gain a more comprehensive understanding of the Thing Model and to work with the entire collection of Entity Types within the ThingWorx Platform by going through the ThingWorx Foundation Quickstart Guide on the Developer Portal.

3) Dive into the Eclipse Plugin and develop your own extension

Lastly, if you don't have the Eclipse IDE installed, head over to the eclipse.org download page and get it installed on your machine. Once you have that, you can find the Eclipse Plugin on our Marketplace. The next step is to create a new Extension project; we added a ThingWorx Extension perspective that enables all of the custom functionality.

Once in the correct perspective, we tried to make our plugin as intuitive as possible by following as many of the Eclipse usability standards as we could, which means that if you are familiar with the Eclipse IDE you should be able to find most of the ThingWorx functionality on your own. For example, a simple 'File' >> 'New' will show you all the options for creating a project. Creating a new ThingWorx Extension project requires a name and a ThingWorx Extension SDK (found on the ThingWorx Marketplace) for the version of ThingWorx that you are building your extension for.

By utilizing the capabilities of the Eclipse IDE, we were able to automate the creation of many of the artifacts that had slowed extension developers down. The wizard allows the plugin to handle creating and managing a build file and a metadata.xml, as well as manage all of the project dependencies. Once you have an Extension project, you can use the ThingWorx menus to create your Entities. These actions will create the necessary Java files and manage their entries within the metadata.xml of your project.

After creating your Entity, you can right-click the applicable Java file, which will show you the 'ThingWorx Source' menu. This houses the options to generate the code for additional characteristics (Services, Properties, etc.), making the need to learn all of the custom annotations and method signatures a much less daunting process.

Once you have generated some code with the plugin, it is time to get started implementing your solution. This is the point where my expertise ends and yours begins!

If you are interested in getting more in-depth information on this topic, check out these additional resources:
- Tutorial: we have created a tutorial that guides you step-by-step through the entire process of developing a ThingWorx extension.
- Webcast: watch this 60-minute, interactive deep dive into IIoT development and learn how to use the Eclipse Plugin to rapidly create a custom ThingWorx extension.

Head over to the Developer Portal and start bringing all of your great ideas to life!
Let us consider that we have two properties, Property1 and Property2, on a Thing named Thing1, which we want to update using the UpdatePropertyValues service. In our service we will use the system-defined Data Shape NamedVTQ, whose field definitions (name, value, time, quality) can be seen in the code comments below. On Thing1, create a custom service like the following:

```javascript
// Create an empty InfoTable based on the NamedVTQ Data Shape
var params1 = {
    infoTableName: "InfoTable",
    dataShapeName: "NamedVTQ"
};

// CreateInfoTableFromDataShape(infoTableName:STRING("InfoTable"), dataShapeName:STRING):INFOTABLE(NamedVTQ)
var InputInfoTable = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params1);

var now = new Date();

// Thing1 entry object
var newEntry = new Object();
newEntry.time = now;            // DATETIME - isPrimaryKey = true
newEntry.quality = undefined;   // STRING
newEntry.name = "Property1";    // STRING
newEntry.value = "Value1";      // STRING
InputInfoTable.AddRow(newEntry);

newEntry.name = "Property2";    // STRING
newEntry.value = "Value2";      // STRING
InputInfoTable.AddRow(newEntry);

var params2 = {
    values: InputInfoTable /* INFOTABLE */
};
me.UpdatePropertyValues(params2);
```
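To confirm that the update took effect, a minimal read-back sketch can be run as another service on Thing1. Property access via me.<PropertyName> and the logger object are standard in ThingWorx service scripts; the expected values come from the example above:

```javascript
// Read the two properties back after UpdatePropertyValues has run.
// Expected output (from the example above): Property1 = Value1, Property2 = Value2
var readBack = "Property1 = " + me.Property1 + ", Property2 = " + me.Property2;
logger.info(readBack); // written to the ThingWorx ScriptLog
```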
Recently I needed to parse and handle XML data natively inside of a ThingWorx script, and this XML file happened to have a SOAP namespace as well. I learned a few things along the way that I couldn't find a lot of documentation on, so I am sharing them here.

Lessons Learned

The biggest lesson I learned is that ThingWorx uses "E4X" XML handling. This is a language that Mozilla created as a way for JavaScript to handle XML (the full name is "ECMAScript for XML"). While Mozilla deprecated the language in 2014, Rhino, the JavaScript engine that ThingWorx uses on the server, still supports it, so ThingWorx does too. Here's a tutorial on E4X: https://developer.mozilla.org/en-US/docs/Archive/Web/E4X_tutorial. The built-in linter in ThingWorx will complain about E4X syntax, but it still works. I learned how to get to the data I wanted and loop through it to create an InfoTable. Hopefully this is what you want to do as well.

Selecting an Element and Iterating

My data came inside of a SOAP envelope, which was meaningless information to me; I wanted to get down a few layers. Here's a sample of my data, with made-up information in place of the customer's original data:

```xml
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" headers="">
    <SOAP-ENV:Body>
        <get_part_schResponse xmlns="urn:schemas-iwaysoftware-com:iwse">
            <get_part_schResult>
                <get_part_schRow>
                    <PART_NO>123456</PART_NO>
                    <ORD_PROC_DIV_CD>E</ORD_PROC_DIV_CD>
                    <MFG_DIV_CD>E</MFG_DIV_CD>
                    <SCHED_DT>2020-01-01</SCHED_DT>
                </get_part_schRow>
                <get_part_schRow>
                    <PART_NO>789456</PART_NO>
                    <ORD_PROC_DIV_CD>E</ORD_PROC_DIV_CD>
                    <MFG_DIV_CD>E</MFG_DIV_CD>
                    <SCHED_DT>2020-01-01</SCHED_DT>
                </get_part_schRow>
            </get_part_schResult>
        </get_part_schResponse>
    </SOAP-ENV:Body>
</SOAP-ENV:Envelope>
```

To get to the schRow data, I need to get past SOAP and into a few layers of XML. To do that, I make a new variable and use the E4X selectors to get there:

```javascript
var data = resultXML.*::Body.*::get_part_schResponse.*::get_part_schResult.*;
```

Note a few things:
- resultXML is a variable in the service that contains the XML data.
- I skipped the Envelope tag since that's the root.
- The .* syntax does not mean "all the following"; it means "all namespaces". You can define and specify the namespaces instead of using .*, but I didn't find value in that. I found some sample code that theoretically should work on a VMware forum: https://communities.vmware.com/thread/592000.

This gives me schRow as an XML List that I can iterate through. You can see what you have at this point by converting the data to a String and outputting it:

```javascript
var result = String(data);
```

Now that I am down to the schRow data, I can use a for loop to add rows to an InfoTable:

```javascript
for each (var row in data) {
    result.AddRow({
        PartNumber: row.*::PART_NO,
        OrderProcessingDivCD: row.*::ORD_PROC_DIV_CD,
        ManufacturingDivCD: row.*::MFG_DIV_CD,
        ScheduledDate: row.*::SCHED_DT
    });
}
```

Shoo! That's it! Data into an InfoTable! Next time, I'll ask for a JSON API. 😊
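Putting those fragments together, here is a minimal, self-contained sketch of the whole service body. The Data Shape name PartScheduleShape and the explicit InfoTable creation step are illustrative assumptions (they are not in the original post), while the E4X selection and loop are exactly as shown above:

```javascript
// Assumption: "PartScheduleShape" is a hypothetical Data Shape with the fields
// PartNumber, OrderProcessingDivCD, ManufacturingDivCD and ScheduledDate.
// Assumption: resultXML is an XML-typed variable holding the SOAP payload above.
var result = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape({
    infoTableName: "InfoTable",
    dataShapeName: "PartScheduleShape"
});

// E4X selection: '*::' matches an element regardless of its namespace.
var data = resultXML.*::Body.*::get_part_schResponse.*::get_part_schResult.*;

// E4X 'for each' iterates over the XML List of get_part_schRow elements.
for each (var row in data) {
    result.AddRow({
        PartNumber: String(row.*::PART_NO),
        OrderProcessingDivCD: String(row.*::ORD_PROC_DIV_CD),
        ManufacturingDivCD: String(row.*::MFG_DIV_CD),
        ScheduledDate: String(row.*::SCHED_DT)
    });
}
// 'result' then serves as the service's INFOTABLE output.
```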
Hi, ThingWorx users! We're excited to share that we have partnered with InfluxData to make time series analysis in ThingWorx even easier. InfluxDB is a database by InfluxData that is "built specifically for metrics and events that empower developers to build next-generation IIoT, analytics and monitoring applications."

Why InfluxData?

Today, application developers expect robust querying capabilities, fast response times, easy ways of aggregating and pivoting on data, and the ability to leverage results for reporting and visualization. IT and devops administrators also expect cost-effective storage, easy ways of aging data through archiving, and the ability to keep large amounts of historical data to satisfy analysis requirements.

That's why we've partnered with InfluxData to make it easier for developers to store, analyze and act on IIoT data in real time. With InfluxData, developers can build connected IIoT applications more quickly while still incorporating the following capabilities:
- monitoring
- real-time alerting
- predictive maintenance
- streaming data
- anomaly and event detection
- visual and report-based analysis

We considered a few technologies for the purpose of improving ThingWorx time series analysis. Here are a few reasons we chose InfluxData:
- high compression of data (~45x)
- ability to handle millions of writes per second*
- ability to read thousands of rows in milliseconds*
- support for the standard time series functions of sampling, interpolation, time bucketing, aggregation, selector, transformation, predictor, etc.

* Query and write times will vary based on an individual ThingWorx application's implementation with Influx. For example, as the number of concurrent reads increases, the query speed decreases. With the upcoming 8.4 release, the ThingWorx Sizing Guide will be updated to reflect representative performance for ThingWorx developers.

In addition to improved query capability, ThingWorx time series with Influx can now use less memory and CPU, giving your platform servers a bit of a break.

New Features

The new ThingWorx Influx Persistence Provider will make query services like the Value Stream's QueryPropertyHistory and the Stream's QueryStreamData and QueryStreamEntries even better. Simply create a new instance of the persistence provider, configure it to use your InfluxDB instance, create a new value stream (or stream) from the new persistence provider, and you'll be writing, reading and analyzing your time series data like never before.

We're also introducing an enhancement to improve InfoTable support with time series data, including the ability to use a driver property. The driver property can be specified with the QueryPropertyHistoryWithDriverProperty service for time alignment and filling backward/forward in your stream queries.

Let's walk through a driver property example where you have the properties temperature, speed and battery level:

| Timestamp | Temperature | Speed | Battery Level |
|---|---|---|---|
| 1480589076592000000 | 80.003 | 5012 | 79 |
| 1480589077537000000 | 80.010 | 5011 | 79 |
| 1480589077550000000 | 80.010 | 5009 | 79 |
| 1480589077562000000 | 80.030 | 5011 | 78 |

Let's say temperature is the key driver for your analysis. In other words, you are not concerned if speed or battery level changes; you only care about when temperature changes. We can specify temperature as the driver property for that particular time and only return stream values for temperature, speed and battery level when temperature changes. If speed or battery level changes (but temperature does not), the rows associated with those changes are not included in the result set, because neither speed nor battery level is a driver property. See the table below:

| Timestamp | Temperature (driver) | Speed | Battery Level |
|---|---|---|---|
| 1480589076592000000 | 80.003 | 5012 | 79 |
| 1480589077537000000 | 80.010 | 5011 | 79 |
| 1480589077562000000 | 80.030 | 5011 | 78 |

Note that only three of the four rows are returned, because one entry in the original table did not have a change in temperature.

Stay Tuned

Look out for these time series improvements and the InfluxData integration in our upcoming 8.4 release. I'll be sure to keep you updated on additional new features coming in our next release (like Orchestration and Mashup Builder 2.0), so check back shortly or subscribe to this Community so we can stay in touch. As always, if you have any questions, just ask Kaya!

Stay connected,
Kaya
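P.S. For developers wondering what the driver-property query might look like in script form, here is a hedged sketch. Only the service name comes from this post; the Thing name and the parameter names are assumptions until the 8.4 documentation is final:

```javascript
// Hedged sketch only: "MyDeviceThing" is a hypothetical Thing logging
// temperature, speed and battery level to an Influx-backed value stream,
// and the parameter names below are assumptions, not a confirmed API.
var history = Things["MyDeviceThing"].QueryPropertyHistoryWithDriverProperty({
    driverProperty: "temperature",              // assumed parameter name
    startDate: new Date(Date.now() - 86400000), // last 24 hours
    endDate: new Date(),
    maxItems: 500,
    oldestFirst: true
});
// Each returned row would align speed and battery level to the timestamps
// at which temperature changed, as in the table above.
```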
ThingWorx Monitoring and Alerting, Part 1
Using Prometheus and Grafana
By Tori Firewind, IoT EDC

Introduction and Getting Started

As ThingWorx has become a more mature product during the lifetime of the IoT EDC, so too have our dev ops recommendations. As we've stated throughout many posts now, testing is a key part of ensuring enterprise readiness, and it occurs at every stage of the process: from unit testing to preserve individual service logic, to integration tests which preserve the functionality of the application as a whole, to user and edge load testing and user experience testing, which ensure enterprise readiness. So testing is a critical component, but the process of dev ops never stops. In order to effectively test the system, a comprehensive monitoring solution is also required.

Once the application is tested and the changes pushed into production, there is no knowing with certainty that everything will run smoothly indefinitely. Random spikes in usage, server bandwidth or availability: unforeseeable factors like these can come along and cause issues for a system. If these issues aren't detected and addressed early, they can very rapidly morph into much larger problems: outages, data loss, inflated data tables which are hard to revert due to their size. It is critical to detect performance issues on a system as early as possible, and to have as much information as is necessary to figure out where the problem is heading and what may have started it. Monitoring is key to a healthy system.

CI/CD stands for "Continuous Integration/Continuous Deployment", a never-ending cycle of improvement. Testing just once before the initial go-live isn't enough. Each system should have automated tests that run continuously, as well as monitors and alerts which reveal problems sooner. Diagnostic tools play a role as well, being the bridge from the end of the dev ops process cycle back to the beginning (monitoring into planning). A good CI/CD dev ops process will ensure that problems are found earlier, fixed more rapidly, and fixed for everyone using the system.

In a fully mature dev ops pipeline, issues are anticipated, discovered and researched before they become production outages or critical issues. These investigations or testing follow-ups produce development tasks (usually bugs, but also features at times) which then start the dev ops cycle all over again. This is why a good, efficient dev ops pipeline is needed, one which allows changes to quickly and safely go from development to production.

This is also why diagnostic tools play a role in the monitoring piece of the dev ops process. They are the bridge between monitoring and planning. Tools like Dynatrace can be configured to provide call stacks and take thread dumps when issues start to occur, before the system is performing so poorly that it needs a restart, which happens automatically in a cluster and can clear out any trace of the issue.

Thread dumps are often necessary for diagnosing the root cause of an issue (to permanently fix it), and taking them quickly ensures application stability and availability. That is, after all, the purpose of the dev ops process. Diagnostics is therefore an equally important piece of the dev ops Figure-8-shaped pie, and one which deserves its own spotlight in an article to come.

Every piece of the dev ops process must be viewed as equally important in its own way, lest the dev ops cycle get hung up on bottlenecks of its own. A safe and stable system is not one which never experiences issues; it is one which has a good, efficient plan in place to handle recovery and prevent repetition. A wholesome dev ops process is a happy dev ops process.

The Monitoring Stack

There are many monitoring options available, but in our experience one of the easiest and most effective monitoring stacks to use with ThingWorx is Prometheus for metrics gathering with Grafana for metrics analysis and review. In a mature monitoring stack, Telegraf is also commonly installed on each VM/host to gather the system metrics (like CPU and memory usage, things we've stated are good indicators of system performance and stability in past articles on scale and size testing) and output them in Prometheus format.

Prometheus is a highly scalable open-source monitoring framework that contains out-of-the-box monitoring and alert capabilities for Kubernetes-based deployments (not covered in this article). Using Prometheus is very simple because the ThingWorx application exposes a metrics endpoint which is formatted directly for use by Prometheus. There is also built-in alerting in Prometheus, but not the ability to build dashboards for reviewing data or screenshotting it for documentation purposes. That's where Grafana comes into play. Grafana has a preconfigured Prometheus-type data source and many preconfigured dashboard templates for various applications and services. Telegraf is also easily imported into Grafana, as shown in the section below.

For each Prometheus target, some tool exports the data in a syntax which Prometheus can scrape. For VMs this can be Telegraf; for Kubernetes, the Node Exporter. The JVM has a JMX Exporter, and other tools like CX Server use Graphite. Many apps already have a Prometheus endpoint built in, like ThingWorx and Zookeeper. Telegraf is not strictly necessary; the Node Exporter can also be used on VMs, but Telegraf is the more common choice since it is a more mature dev ops tool.

Once Prometheus is scraping the targets, alerting on them can be done with out-of-the-box Prometheus functionality, and dashboards for monitoring can be made easily in Grafana (with built-in support as well). This stack does not include the diagnostics piece, something which triggers thread dumps or the like when issues do occur. There are too many ways to conduct a successful diagnostic piece to cover here.

How to Get Started

Getting started monitoring a ThingWorx application is incredibly easy in the latest versions. Simply open up a browser and type in the ThingWorx URL followed by "/Metrics". At this endpoint there is a specially formatted response, containing subsystem and service data, that can automatically be read by the Prometheus monitoring software. In addition to the application metrics, Prometheus can be configured to collect metrics from a node exporter at the (virtualized) operating system or container (Kubernetes) level as well.

If you haven't already, install Grafana, install Telegraf as a service, and install Docker Desktop. These are the tools required (in addition to ThingWorx, of course) to set up a simple sandbox system for familiarization with the monitoring stack recommended by PTC. The easiest way to try Prometheus on a local Windows instance is to use Docker. The command for that is given below, but first open up Docker Desktop to set the contextual parameters that the command line will need.

Then, modify the configuration file for Telegraf, or create one (called telegraf.conf in the same folder as the exe file), and put the following into the file (or uncomment it; the default config file has thousands of lines, so just search for "prometheus"):

```toml
# Output plugin
[[outputs.prometheus_client]]
  listen = "0.0.0.0:9125"
```

Alternatively, install the Prometheus Node Exporter tool, which will likely require some additions to the Prometheus config file (not covered here) which we are about to create.

Then, create a configuration file (called prom_config_localhost_scraper.yml in the command to come), and add the following (assuming a standard localhost installation of ThingWorx):

```yaml
# my global config
global:
  scrape_interval: 45s
  evaluation_interval: 30s
  scrape_timeout: 30s
  # scrape_timeout is set to the global default (10s).

rule_files:
  - prom_config_rules.yml

scrape_configs:
  - job_name: thingworx
    static_configs:
      - targets: ['host.docker.internal:8080']
    basic_auth:
      username: "Administrator"
      password: "admin!123456789"
    metrics_path: /Thingworx/Metrics
    scheme: http
    params:
      x-thingworx-session:
        - "false"

  - job_name: prometheus
    static_configs:
      - targets: ['localhost:9090']

  - job_name: Telegraf
    # If telegraf is installed, grab stats about the local
    # machine by default.
    static_configs:
      - targets: ['host.docker.internal:9125']
```

This example config uses host.docker.internal instead of localhost for the ThingWorx server target because ThingWorx is running outside of the Docker container which contains Prometheus. This yml file configures Prometheus to monitor both ThingWorx and itself, as well as the server metrics coming from Telegraf (as long as they are configured to push). It's a sandbox-only configuration, really, as you wouldn't want to use the Administrator user or have the password printed in plain text in the config file in a real system. Also note the need for the x-thingworx-session parameter: runaway sessions which spawn every 30 s or so (whatever the scrape interval is) will result in memory issues over time, so we don't want to use sessions here.

The rules file referenced here (prom_config_rules.yml) needs to be created separately. This is where all of the alert rules will be defined. This determines whether an alert state is occurring, but without configuring the alert manager there won't be any notification. That isn't covered here but is covered extensively in the Grafana docs. Here is an alert example:

```yaml
groups:
  - name: alert.rules
    rules:
      # Alert when memory usage on the local machine is high.
      - alert: HighMemory
        expr: mem_used > 14000000
        for: 1s
        labels:
          severity: page
        annotations:
          summary: "High Memory"
          description: "Localhost Memory Usage is High"
```

Now, save these files and use PowerShell to run the Docker container:

```
docker run -p 9090:9090 -v C:\<path_to_document>\prom_config_localhost_scraper.yml:/etc/prometheus/prometheus.yml prom/prometheus
```

It will download Prometheus and install it in that container (if this is the first time), allowing you to very rapidly deploy it to an endpoint of localhost:9090 by default. If the command returns an error instead, that usually means you forgot to start Docker Desktop (the application) before opening PowerShell. Docker Desktop sets system parameters required for containers to run in a command line (in Linux, it should work as long as Docker is installed for use by the command line, simple as that).

The localhost endpoints are accessible in a browser: ThingWorx defaults to localhost:8080, Prometheus to localhost:9090, and Telegraf is on port 9125. Open any of these in a browser tab to see the full monitoring stack. You can easily check whether Prometheus is working by clicking Status > Targets at localhost:9090. If all of the targets appear as blue and show a "last scrape" time stamp, then they're working as expected. If they don't, ensure you have the right ports, that there aren't any firewall issues (if things aren't all on localhost), and that everything is running without errors.

The last step in the process here is to install a dashboard tool like Grafana. Once this is installed and running on localhost:3000 (by default), you can display the data from Prometheus with a few configuration steps in the Grafana UI. Hover over the settings icon in the bottom left of the screen, then click on "Data sources". Select the "Add data source" button, then click on Prometheus. You have to type the URL again (localhost:9090), but most of the defaults will be fine here, and all you have to do is click "Save and test".

Now both targets should appear within Grafana, with their metrics showing up throughout the Grafana UI. This data source is what allows for the building of monitoring dashboards.
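As one extra sanity check (an aside, not from the original article), the metrics endpoint can also be pulled programmatically, for example from a browser console. This sketch assumes the sandbox Administrator credentials from the scraper configuration above; never hard-code real credentials like this outside a sandbox:

```javascript
// Fetch the ThingWorx metrics endpoint the same way Prometheus does.
// The x-thingworx-session=false query parameter mirrors the scraper config
// and prevents a new server session from being spawned per request.
var url = "http://localhost:8080/Thingworx/Metrics?x-thingworx-session=false";
fetch(url, {
    headers: { "Authorization": "Basic " + btoa("Administrator:admin!123456789") }
})
    .then(function (r) { return r.text(); })
    .then(function (body) {
        // Print only the first few lines of the Prometheus-formatted output.
        console.log(body.split("\n").slice(0, 20).join("\n"));
    })
    .catch(function (err) { console.error("Metrics endpoint unreachable:", err); });
```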
This video is Module 11: ThingWorx Analytics Mashup Exercise of the ThingWorx Analytics Training videos. It shows you how to create a ThingWorx project and populate it with entities that collectively comprise a functioning application. 
The Protocol Adapter Toolkit (PAT) is an SDK that allows developers to write a custom Connector that enables edge devices to connect to and communicate with the ThingWorx Platform. In this blog, I will be dabbling with the MQTT Sample Project that uses the MQTT Channel introduced in PAT 1.1.2.

Preamble

All the PAT sample projects are documented in detail in their respective README.md. This post is an illustrated walk-through for the MQTT Sample project; please review its README.md for in-depth information. More reading is available in the Protocol Adapter Toolkit (PAT) overview. PAT 1.1.2 is supported with ThingWorx Platform 8.0 and 8.1; it is not yet fully supported with 8.2.

MQTT Connector features

The MQTT Sample project provides a Codec implementation that services MQTT requests, and a command-line MQTT client to test the Connector. The sample MQTT Codec handles the following Edge-initiated requests:
- read a property from the ThingWorx Platform
- write a property to the ThingWorx Platform
- execute a service on the ThingWorx Platform
- send an event to the ThingWorx Platform (also uses a ServiceEntityNameMapper to map an edgeId to an entityName)

All these actions require a security token that will be validated by a Platform service via an InvokeServiceAuthenticator.

This Codec also handles the following Platform-initiated requests (egress messages):
- write a property to the Edge device
- execute a service without response on the Edge device

Components and Terminology

MQTT messages originating from the Edge Device (inbound) are published to the sample MQTT topic. MQTT messages originating from the Connector (outbound) are published to the mqtt/outbound MQTT topic.

- Codec: a pluggable component that interprets Edge Messages and converts them to ThingWorx Platform Messages to enable interoperability between an Edge Device and the ThingWorx Platform.
- Connector: a running instance of a Protocol Adapter created using the Protocol Adapter Toolkit.
- Edge Device: the device that exists external to the Connector, which may produce and/or consume Edge Messages.
- (MQTT) Edge Message: a data structure used for communication, defined by the Edge Protocol. An Edge Message may include routing information for the Channel and a payload for the Codec. Edge Messages originate from the Edge Device (inbound) as well as the Codec (outbound).
- (MQTT) Channel: the specific mechanism used to transmit Edge Messages to and from Edge Devices. The Protocol Adapter Toolkit currently includes support for HTTP, WebSocket, MQTT, and custom Channels. A Channel takes the data off of the network and produces an Edge Message for consumption by the Codec, and takes Edge Messages produced by the Codec and places the message payload data back onto the network.
- Platform Connection: the connection layer between a Connector and the ThingWorx core.
- Platform Message: the abstract representation of a message destined for or coming from the ThingWorx Platform (e.g. WriteProperty, InvokeService). Platform Messages are produced by the Codec for incoming messages and provided to the Codec for outgoing messages/responses.
Installation and Build

Protocol Adapter Toolkit installation

The media is available from PTC Software Downloads: ThingWorx Connection Server > Release 8.2 > ThingWorx Protocol Adapter Toolkit. Just unzip the media on your filesystem; there is no installer. The MQTT Sample Project is available in <protocol-adapter-toolkit>\samples\mqtt.

Eclipse project setup

Prerequisites: the Eclipse IDE (I'm using the Neon.3 release) and an Eclipse Gradle extension, for example the Gradle IDE Pack available in the Eclipse Marketplace. Import the MQTT project via File > Import > Gradle (STS) > Gradle (STS) Project. Browse to <protocol-adapter-toolkit>\samples\mqtt, then [Build Model] and select the mqtt project.

Review the sample MQTT codec and test client

Connector: mqtt > src/main/java > com.thingworx.connector.sdk.samples.codec.MqttSampleCodec
- decode: converts an MqttEdgeMessage to a PlatformRequest
- encode (three flavors): converts a PlatformMessage, an InfoTable, or a Throwable to an MqttEdgeMessage

Note that most of the conversion logic is common to all sample projects (WebSocket, REST, MQTT) and is done in a helper class: SampleProtocol. The SampleProtocol sources are available in the <protocol-adapter-toolkit>\samples\connector-api-sample-protocol project; it can be imported into Eclipse the same way as the mqtt project. SampleTokenAuthenticator and SampleEntityNameMapper are also defined in the <protocol-adapter-toolkit>\samples\connector-api-sample-protocol project.

Client: mqtt > src/client/java > com.thingworx.connector.sdk.samples.MqttClient, a command-line MQTT client based on Eclipse Paho that allows you to test Edge-initiated and Platform-initiated requests.

Build the sample MQTT Connector and test client

Select the mqtt project, then RMB > Gradle (STS) > Task Quick Launcher > type "clean build" + [Enter]. This creates a distributable archive (zip + tar) in <protocol-adapter-toolkit>\samples\mqtt\build\distributions that packages the sample MQTT connector, some startup scripts, an XML file with sample entities to import on the Platform, and a sample connector.conf. Note that I will test the connector and the client directly from Eclipse, and will not use this package.

Runtime configuration and setup

MQTT broker

I'm just using a Mosquitto broker Docker image from Docker Hub:

```
docker run -d -p 1883:1883 --name mqtt ncarlier/mqtt
```

ThingWorx Platform appKey and ConnectionServicesExtension

From the ThingWorx Composer:
1. Create an Application Key for your Connector (remember to increase the expiration date; to keep it simple, I bind it to Administrator).
2. Import the ConnectionServicesExtension-x.y.z.zip and pat-extension-x.y.z.zip extensions available in <protocol-adapter-toolkit>\requiredExtensions.

Connector configuration

Edit <protocol-adapter-toolkit>\samples\mqtt\src\main\dist\connector.conf and update the broker-urls, app-key and platforms entries below to match your configuration:

```
include "application"

cx-server {
  connector {
    active-channel = "mqtt"
    bind-on-first-communication = true
    channel.mqtt {
      broker-urls = [ "tcp://localhost:1883" ]
      // at least one subscription must be defined
      subscriptions {
        "sample": [ "com.thingworx.connector.sdk.samples.codec.MqttSampleCodec", 1 ]
      }
      outbound-codec-class = "com.thingworx.connector.sdk.samples.codec.MqttSampleCodec"
    }
  }

  transport.websockets {
    app-key = "00000000-0000-0000-0000-000000000000"
    platforms = "wss://thingWorxServer:8443/Thingworx/WS"
  }

  // The health-check service default port (9009) was in use on my machine,
  // so I added the following block to change it.
  health-check {
    port = 9010
  }
}
```

Start the Connector

Run the Connector directly from Eclipse using the Gradle task: RMB > Run As ... > Gradle (STS) Build.

(Alternate technique) Debug as a Java application from Eclipse: select the mqtt project, then Run > Debug Configurations ...
- Name: mqtt-connector
- Main class: com.thingworx.connectionserver.ConnectionServer
- On the Arguments tab, add a VM argument: -Dconfig.file=<protocol-adapter-toolkit>\samples\mqtt\src\main\dist\connector.conf
- Select [Debug]

Verify the connection to the Platform

From the ThingWorx Composer, go to Monitoring > Connection Servers and verify that a Connection Server named protocol-adapter-cxserver-<uuid> is listed.

Testing

Import the ThingWorx Platform sample Things

From the ThingWorx Composer, Import/Export > From File: <protocol-adapter-toolkit>\samples\mqtt\src\main\dist\SampleEntities.xml. Verify that WeatherThing, EntityNameConverter and EdgeTokenAuthenticator have been imported.
- WeatherThing: the RemoteThing that is used to test our Connector
- EdgeTokenAuthenticator: holds a sample service (ValidateToken) used to validate the security token provided by the Edge device
- EntityNameConverter: holds a sample service (GetEntityName) used to map an edgeId to an entityName

Start the test MQTT client

I will run the test client directly from Eclipse: select the mqtt project, then Run > Run Configurations ...
- Name: mqtt-client
- Main class: com.thingworx.connector.sdk.samples.MqttClient
- On the Arguments tab, add a Program argument: tcp://<mqtt_broker_host>:1883
- Select [Run]

Type the client commands in the Eclipse Console.

Test Edge-initiated requests

Read a property from the ThingWorx Platform. In the MQTT client console, enter: readProperty WeatherThing temp

```
Sending message: {"propertyName":"temp","requestId":1,"authToken":"token1234","action":"readProperty","deviceId":"WeatherThing"} to topic: sample
Received message: {"temp":56.3,"requestId":1} from topic: mqtt/outbound
```

Notes:
- An authToken is sent with the request; it is validated by a Platform service using the SampleTokenAuthenticator (this authenticator is common to all the PAT samples and is defined in <protocol-adapter-toolkit>\samples\connector-api-sample-protocol).
- The EntityNameMapper is not used by readProperty (no special reason for that).
- The PlatformRequest message built by the codec is ReadPropertyMessage.

Write a property to the ThingWorx Platform. In the MQTT client console, enter: writeProperty WeatherThing temp 20

```
Sending message: {"temp":"20","propertyName":"temp","requestId":2,"authToken":"token1234","action":"writeProperty","deviceId":"WeatherThing"} to topic: sample
```

Notes:
- An authToken is sent with the request; it is validated by a Platform service using the SampleTokenAuthenticator.
- The EntityNameMapper is not used by writeProperty.
- The PlatformRequest message built by the codec is WritePropertyMessage.
- No Edge message is sent back to the device.

Send an event to the ThingWorx Platform. In the MQTT client console, enter: fireEvent Weather WeatherEvent SomeDescription

```
Sending message: {"requestId":5,"authToken":"token1234","action":"fireEvent","eventName":"WeatherEvent","message":"Some description","deviceId":"Weather"} to topic: sample
```

Notes:
- An authToken is sent with the request; it is validated by a Platform service using the SampleTokenAuthenticator.
- fireEvent uses an EntityNameMapper (SampleEntityNameMapper) to map the deviceId (Weather) to a Thing name (WeatherThing); the mapping is done by a Platform service.
- The PlatformRequest message built by the codec is FireEventMessage.
- No Edge message is sent back to the device.

Execute a service on the ThingWorx Platform: this can be tested with GetAverageTemperature on WeatherThing.

Test Platform-initiated requests

Write a property to the Edge device. The MQTT Connector must be configured to bind the Thing with the Platform when the first message is received for the Thing; this was done by setting bind-on-first-communication=true in connector.conf. When a Thing is bound, remote egress messages are forwarded to the Connector. The Edge-initiated requests above should have done the binding, but if the Connector was restarted since, just bind again with: readProperty WeatherThing isConnected. From the ThingWorx Composer, update the temp property value on WeatherThing to 30. An egress message is logged in the MQTT client console:

```
Received message: {"egressMessages":[{"propertyName":"temp","propertyValue":30,"type":"PROPERTY"}]} from topic: mqtt/outbound
```

Execute a service on the Edge device: this can be tested with the SetNtpService on WeatherThing.
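As a hedged aside, the same egress message can be triggered from a server-side script instead of editing the property in the Composer. This one-liner assumes the WeatherThing entity imported above is still bound via the Connector:

```javascript
// Setting a bound RemoteThing property from any ThingWorx service pushes the
// new value to the edge, producing the same egress message shown above.
Things["WeatherThing"].temp = 30;
```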
This blog presents a simple Java utility to validate the deployment of ThingWatcher. It is important to note that the utility is not a real-life scenario; the intent was to keep it as simple as possible in order to achieve its aim: validation of the deployment. An understanding of a Java IDE (such as Eclipse) is necessary in order to run the utility with the relevant dependency and classpath setup; those are beyond the scope of this posting.

We will cover the following points:
- Pre-requisites
- Using the sample utility
- Code walk-through
- Validate training job creation
- Validate model job creation
- Update for ThingWorx Analytics 8.0

Pre-requisites

Strict adherence to the ThingWatcher deployment guide is recommended in order to first deploy the training and model microservices, as well as to familiarize yourself with the ThingWatcher APIs. Prior to testing ThingWatcher, both the training and model microservices should be up and running. The media for ThingWatcher (including the model and training microservices) should be downloaded from the PTC Software Download page. The commands to deploy the microservices vary depending on the platform used and are presented in the ThingWatcher deployment guide. As a reference example, on Windows the commands will be similar to the following:

1. Start Docker: Start > Program > Docker > Docker Quick Start Terminal

2. Load the model microservice tar:

```
$ docker load < "D:\PTC\MED-61147-CD-522_F000_ThingWorx-Analytics-ThingWatcher-52-2\components\ModelService\ModelService\model-service.tar"
```

3. Install the model service:

```
$ docker run -d -p 8080:8080 -v '/d/TWatcherStorage/model:/data/models' -v '/d/TWatcherStorage/db:/tmp/' twxml/model-service:1.0 -Dfile.storage.path=/data/models -jar maven/model-1.0.jar server maven/standalone-evaluator.yml
```

4. Load the training microservice tar file:

```
$ docker load < "D:\PTC\MED-61147-CD-522_F000_ThingWorx-Analytics-ThingWatcher-52-2\components\TrainingService\TrainingService\training-service.tar"
```

5. Install the training service:

```
$ docker run -d -p 8090:8080 twxml/training-service:1.0.0 -Dmodel.destination.uri=model://192.168.99.100:8080/models -jar maven/training-standalone-1.0.0-bin.jar server /maven/training-standalone-single.yml
```

Note: -Dmodel.destination.uri points here to the model microservice host. To find the IP address, enter docker-machine ip on the model microservice Docker machine.

6. Validate the microservices deployment: execute docker ps and confirm that both services are up, as in the following example:

```
CONTAINER ID        IMAGE                          COMMAND                  CREATED             STATUS              PORTS                              NAMES
5b6a29b95611        twxml/training-service:1.0.0   "java -Dmodel.destina"   13 days ago         Up 44 minutes       8081/tcp, 0.0.0.0:8090->8080/tcp   modest_albattani
8c13c0bc910e        twxml/model-service:1.0        "java -Dfile.storage."   2 weeks ago         Up 44 minutes       0.0.0.0:8080->8080/tcp, 8081/tcp   thirsty_ptolemy
```

Using the sample utility

1. Download the attached Main.java.
2. Import Main.java into Eclipse (or your IDE of choice) with the ThingWatcher dependencies added to the classpath.
3. Update the trainingBaseURI (see below) to point to the training microservice.
4. The utility should be ready to execute.

Code walk-through

The code declares a ThingWatcher in the following snippet:

```java
ThingWatcher thingwatcher = new ThingWatcherBuilder()
    .certainty(90.0)
    .trainingDataDuration(60)
    .trainingDataDurationUnit(DurationUnit.SECOND)
    .trainingBaseURI("http://192.168.99.100:8090/training")
    .getThingWatcher();
```

In the above code it is important to update the trainingBaseURI argument with the correct IP address and port for the training microservice host. The code then loops 10,000 times, sending a new value, which simulates the sensor data, at a simulated 100 ms interval. The value is computed as Math.sin(i) for the whole calibrating phase and most of the monitoring phase too. We artificially introduce an anomaly by sending a value of Math.incrementExact(i) between the 9000th and 9900th iterations. During the monitoring phase, the code logs the value, the anomalous status and the ThingWatcher state.

It is advised to save the output to a file in order to review the logging once the utility has run. In Eclipse this can be done by selecting Main.java with the right mouse button > Run As... > Run Configuration > Common, ticking Output File under Standard Input and Output, and specifying a location for the output file.

A review of the output log file will show that somewhere between timestamps 900000 and 990000 the isAnomalousValue is true. Note that this does not start and end exactly at 900000 and 990000, as ThingWatcher needs a few occurrences before reporting an anomaly. Sample output indicating an anomalous state:

```
[main] INFO com.thingworx.analytics.demo.Main - Value = 901700,9017.0,-9016.403802019577
[main] INFO com.thingworx.analytics.demo.Main - isAnomalousValue = true
[main] INFO com.thingworx.analytics.demo.Main - ThingWatcherStat = MONITORING
```

As part of validating the successful deployment of ThingWatcher, it is recommended to validate the correct creation of a training job and a model job.

Validate training job creation

In order to validate the successful creation of a training job, execute a GET request to the training microservice: http://192.168.99.100:8090/training (update the IP address to the one on your system). This should return a COMPLETED job.

Validate model job creation

In order to validate the successful creation of a model job, execute a GET request to http://192.168.99.100:8080/models (update the IP address to the one on your system) to see all the models that have been created. Alternatively, click (or use) the URI reported in the training job output, here http://192.168.99.100:8080/models/6/pmml.xml, to see the complete model definition. When this sample test runs correctly, the ThingWatcher deployment has been validated.

Update for ThingWorx Analytics 8.0

For deploying the microservices, see Video Link: 1937. For updated Java code, see "Does anyone know how to use java api to achieve anomaly detection with Thingwatcher8.0?"

To note: the utility provided is for testing purposes only. The code does not represent any kind of best practice and is not meant to be a perfect Java coding example. It is provided as-is with no guarantee.
View full tip
Build a Predictive Analytics Model Guide Part 1

Overview
This project will introduce ThingWorx Analytics Builder. Following the steps in this guide, you will create an analytical model and then refine it based on further information from the Analytics platform. We will teach you how to determine whether or not a model is accurate and how you can optimize both your data inputs and the model itself.
NOTE: This guide's content aligns with ThingWorx 9.3. The estimated time to complete both parts of this guide is 60 minutes.

Step 1: Scenario
MotorCo manufactures, sells, and services commercial motors. Recently, MotorCo has been developing a new motor, and they already have a working prototype. However, they've noticed that the motor has a chance to FAIL CATASTROPHICALLY if it's not properly serviced to replace lost grease on a key moving part. In order to prevent this type of failure in the field, MotorCo has decided to instrument their motors with sensors which record vibration. The hope is that these sensors can detect certain vibrations which indicate required maintenance before a failure occurs. MotorCo has decided to utilize ThingWorx Analytics to scan their prototype data for any insights they can gain to address this issue.
Challenges:
There is a known failure mode of the prototype, but it is not currently possible to predict when the failure might happen
Until this failure mode is mitigated, the prototype cannot move into full production
While connected data is being monitored, what to do with that data is not currently known
The additional sensors are adding to the overall cost of the product

Step 2: Settings
If you manually installed the Analytics extension, then you need to complete the steps below to finalize the connection.
1. On the ThingWorx Composer Analytics tab, click ANALYTICS BUILDER > Settings.
2. In the Analytics Server field, search for and select your Server Thing Entity, such as AnalyticsServer_Thing or localhost-AnalyticsServer.
3. Click Verify Configuration.
4. Click Save.
WARNING: You MUST CLICK SAVE or the configuration will be lost when you move to Data in the next step.

Step 3: Upload Data
We will now load the data that ThingWorx Analytics will use to generate a model.
1. Download the attached analytics_vibration.zip to your computer.
2. Unzip the analytics_vibration.zip file to access the vibration_data_and_header.csv and vibration_metadata.json files.
3. On the left, click ANALYTICS BUILDER > Data.
4. Under Datasets, click New.... The New Dataset pop-up will open.
5. In the Dataset Name field, enter vibration_dataset.
6. In the File Containing Dataset Data section, search for and select vibration_data_and_header.csv.
7. In the File Containing Dataset Field Configuration section, search for and select vibration_metadata.json.
8. Click Submit. Note that the data upload will take a variable amount of time to complete, based on the size of your dataset.

Step 4: Signals
The Signals section of ThingWorx Analytics looks for the single field in the dataset that is most statistically correlated with your selected goal. This doesn't necessarily indicate that the field is the cause of your goal, whether maximizing or minimizing; it just means that, in this dataset, this single field happens to correlate with the goal that you desire.
1. On the left, click ANALYTICS BUILDER > Signals.
2. At the top, click New…. A New Signals pop-up will open.
3. In the Signal Name field, enter vibration_signal.
4. In the Dataset field, select vibration_dataset.
5. Leave the Goal field set to the default of low_grease.
6. Leave the Filter field set to the default of all_data.
7. Leave the Excluded Fields from Signal field set to the default of empty.
8. Click Submit.
9. Wait ~30 seconds for the Signal State to change to COMPLETED. The results will be displayed at the bottom.
The results show that the five Frequency Bands for Sensor 1 are the most highly correlated with determining our goal of detecting a low grease condition. For Sensor 2, only bands one and four seem to be at all related, and bands two, three, and five are hardly related at all.

Click here to view Part 2 of this guide.
View full tip
The following is valid for ThingWorx Analytics (TWA) 52.0.2 through 8.0. For release 8.3.0 and above, see How to score new data in ThingWorx Analytics 8.3.x?

Overview
The main steps are as follows:
Create a dataset
Configure the dataset
Upload data to the dataset
Optimize the dataset
Create filters for training and scoring data
Train the model
Execute scoring on existing data
Upload new data to the dataset
Execute scoring on the new data
TWA models are dataset-centric, which means a model created with one dataset cannot be reused with a different dataset. In order to be able to score new data, a specific feature, record purpose in the example below, is included in the dataset. This feature needs to be included from the beginning, when the data is first uploaded to TWA. A filter on that feature can then be created to isolate the desired data. When new data comes in, it is added to the original dataset but with a specific value for the filtered feature (record purpose), which makes it possible to discriminate and score only those new records.

Process
1. Create a dataset
This example uses the beanpro demo dataset. Creating the dataset is done through a POST on the datasets REST API, as below:
2. Configure the dataset
This is done through a POST on the <dataset>/configuration REST API.
3. Upload data
4. Optimize the dataset
5. Create filters
The dataset includes a feature named record purpose, created specifically to differentiate between the rows to be used for training and the rows to be used for scoring. New data to be added will have record purpose set to scoringnew, which will allow a scoring job limited to those filtered new rows.
Filter for training data:
Filter for new scoring data:
6. Train the model
This is done through a POST on the <dataset>/prediction API.
7. Score the training data
This is done through a POST on the <dataset>/predictive_scores API. Note the use of the filter TrainingData created earlier. This allows scoring only the rows with training as the value of the record purpose feature. Note: scoring could also be done without a filter at this stage, in which case all the data in the dataset will be scored, not just the rows with training for record purpose.
Retrieving the scoring result shows all the records in the dataset:
8. Upload new data
The newly uploaded CSV file should only contain new records. These will be appended to the existing ones. Note that the new record (there could be more than one) has the value scoringnew for the record purpose feature. This allows the previously created filter ScoringNewData to be used, so that a new scoring job will only take this new record into account.
9. Score the new data
A POST on the predictive_scores API is executed, this time using the filter ScoringNewData. This results in only the newly added data being scored, and therefore a much quicker execution time too. Retrieving the scoring result shows only the new record:
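The same sequence can be driven from a small script. Below is a minimal Python sketch of the train-then-score-new-data portion of the flow, assuming a hypothetical base URI of http://twa-host:8080/1.0/datasets and the endpoint names described above. The exact paths, request payloads, and authentication depend on your TWA version and deployment, so treat this as an outline rather than a definitive client.

import requests

BASE = "http://twa-host:8080/1.0/datasets"  # hypothetical base URI; adjust to your deployment
DATASET = "beanpro"

# Train the model on the rows flagged as training data (POST on <dataset>/prediction);
# the payload shape here is an assumption for illustration
requests.post(f"{BASE}/{DATASET}/prediction", json={"filter": "TrainingData"}).raise_for_status()

# Score only the newly uploaded rows (POST on <dataset>/predictive_scores with the ScoringNewData filter)
job = requests.post(f"{BASE}/{DATASET}/predictive_scores", json={"filter": "ScoringNewData"})
job.raise_for_status()
print(job.json())  # job reference; retrieve the result once the job completes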
View full tip
Load Testing through C SDK Remote Device Simulation in ThingWorx

As discussed in the EDC's previous article, load or stress testing a ThingWorx application is very important to the application development process and comes highly recommended by PTC best practices. This article shows how to do stress testing using the ThingWorx C SDK at the Edge side. Attached to this article is a download containing a generic C SDK application and accompanying simulator software written in Python. This article discusses how to unpack everything and move it to the right location on a Linux machine (Ubuntu 16.04 was used in this tutorial, and sudo privileges will be necessary). To make this a true test of the Edge software, modify the C SDK code provided or substitute in any custom code used in the Edge devices which connect to the actual application.
It is assumed that ThingWorx is already installed and configured correctly. Anaconda will be downloaded and installed as a part of this tutorial. Note that the simulator only logs at the "error" level on the SDK side, and the data log has been disabled entirely to save resources. For any questions on this tutorial, reach out to the author Desheng Xu from the EDC team (@DeShengXu).

Background:
Within ThingWorx, most things represent remote devices located at the Edge. These are pieces of physical equipment which are out in the field and which connect and transmit information to the ThingWorx Platform. Each remote device can have many properties, which can be bound to local properties. In the image below, the example property "Pressure" is bound to the local property "Pressure". The last column indicates whether the property value should be stored in a time series database when the value changes. Only "Pressure" and "TotalFlow" are stored in this way.
A good stress test will have many properties receiving updates simultaneously, so for this test, more properties will be added. The example shown here has 5 integers, 3 numbers, 2 strings, and 1 sin signal property.

Installation:
1. Download Python 3 if it isn't already installed
2. Download Anaconda version 5.2
Managing multiple Python environments can be hard on Linux, especially in Ubuntu and when using an Azure VM; Anaconda is a very convenient way to manage them. Some commands which may help to download Anaconda are provided here, but this is not a comprehensive tutorial for Anaconda installation and configuration.
Download Anaconda:
curl -O https://repo.anaconda.com/archive/Anaconda3-5.2.0-Linux-x86_64.sh
Install Anaconda (this may take 10+ minutes, depending on the hardware and network specifications):
bash Anaconda3-5.2.0-Linux-x86_64.sh
To activate the Anaconda installation, load the new PATH environment variable which was added by the Anaconda installer into the current shell session with the following command:
source ~/.bashrc
Create an environment for stress testing; let's name this environment "stress":
conda create -n stress python=3.7
Activate the "stress" environment every time you need to use simulator.py:
source activate stress
3. Install the required Python modules
Certain modules are needed in the Python environment in order to run the simulator.py file: psutil and requests.
Use the following commands to install these (if using Anaconda as installed above):
conda install -n stress -c anaconda psutil
conda install -n stress -c anaconda requests
4. Unpack the download attached here, called csim.zip
Unzip csim.zip and move it into the /opt folder (if another folder is used, remember to change the path in the simulator.json file later). Assign your current user full access to this folder (this command assumes the current user is called ubuntu):
sudo chown -R ubuntu:ubuntu /opt/csim
5. Move the C SDK shared library to the lib folder
Use the following command:
sudo mv /opt/csim/csdkbuild/libtwCSdk.so.2.2.4 /usr/lib
You may also have to grant a+x permissions to all files in this folder.
6. Update the configuration file for the simulator
Open /opt/csim/simulator.json (or whatever path is used instead) and edit this file to meet your environment needs, based on the information below.
7. Familiarize yourself with the simulator.py file and its options
Use the following command to get option information:
python simulator.py --help

Set-Up Test Scenario:
1. Plan your test
Each simulator instance will have 8 remote properties by default (as shown in the picture in the Background section). More properties can be added for stress test purposes in the simulator.json file. For the simulator to run 1k writes per second to a time series database, use the following configuration information (note that for this test, a machine with 4 cores and 16G of memory was used; greater hardware specifications may be required for a larger test):
Forget about the default 8 properties, which have random update patterns and would make the results difficult to check later. Instead, create "canary properties" for each thing (where "canary" refers to the way canaries were once used in mines to warn miners of danger).
Add 25 properties for each thing:
10 integer properties
5 number properties
5 string properties
5 sin properties (signals)
Set the scan rate to 5000 ms, so that each of these 25 properties will update every 5 seconds.
To get a write rate of 1k per second, we therefore need 200 devices in total, which is specified by the start and end number lines of the configuration file. The simulator.json file should look like this:
Canary_Int: 10
Canary_Num: 5
Canary_Str: 5
Canary_Sin: 5
Start_Number: 1
End_Number: 200
2. Run the simulator
Enter the /opt/csim folder and execute the following command:
python simulator.py ./simulator.json -i
You should be able to see a screen like this:
Go to ThingWorx to check that there is a dummy thing (under Remote Things in the Monitoring section); this indicates that the simulator is running correctly and connected to ThingWorx.
3. Create a Value Stream and point it at the target database
4. Create a new thing and call it "SimulatorDummyThing"
Once this is created successfully and saved, a message should pop up to say that the device was successfully connected.
5. Bind the remote properties to the new thing
Click the "Properties and Alerts" tab
Click "Manage Bindings"
Click "Add all properties"
Click "Done" and then "Save"
The properties should begin updating immediately (every 5 seconds), so click "Refresh" to check.
6. Create a Thing Template from this thing
Click the "More" drop-down and select "Create ThingTemplate"
Give the template a name (ensure it matches what is defined in the simulator.json file) and save it
Go back and delete the dummy thing created in Step 4, as we no longer need it
7. Clean up the simulator
Use the following command:
python simulator.py ./simulator.json -k
Output will look like this:
8. Create 200 things in ThingWorx for the stress test
Verify that the information in the simulator.json file (especially the start and end numbers) is correct, then use the following command to create all things:
python simulator.py ./simulator.json -c
The output will look like this:
Verify that the things have also been created in ThingWorx:
Now you are ready for the stress test.

Run Stress Test:
Use the following command to start your test:
python simulator.py ./simulator.json -l
or
python simulator.py ./simulator.json --launch
The output in the simulator will look like this:
Monitor the Value Stream writing status in the Monitoring section of ThingWorx:

Stop and Clean Up:
Use the following command to stop running all instances:
python simulator.py ./simulator.json -k
If you want to clean up all created dummy things, then use this command:
python simulator.py ./simulator.json -d
To re-initiate the test at a later date, just repeat the steps in the "Run Stress Test" section above, or re-configure the test by reviewing the steps in the "Set-Up Test Scenario" section.

That concludes the tutorial on how to use the C SDK in a stress or load test of a ThingWorx application. Be sure to modify the created Thing Template (created in step 6 of the "Set-Up Test Scenario" section) with any business logic required, for instance events and alerts, to ensure a proper test of the application.
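As a quick sanity check on the sizing used in this test, the target write rate can be computed directly from the configuration values. This small Python sketch reproduces the arithmetic (25 stored properties per thing, a 5000 ms scan rate, and things numbered 1 through 200):

# Expected time-series write rate for the stress test configuration
properties_per_thing = 10 + 5 + 5 + 5    # Canary_Int + Canary_Num + Canary_Str + Canary_Sin
scan_rate_seconds = 5000 / 1000          # scan rate of 5000 ms
things = 200 - 1 + 1                     # Start_Number=1 through End_Number=200

writes_per_second = things * properties_per_thing / scan_rate_seconds
print(writes_per_second)                 # 1000.0, i.e. the 1k writes/second target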
View full tip
We are excited to announce that ThingWorx 8.4 is now available for download!

Key functional highlights
ThingWorx 8.4 covers the following areas of the product portfolio: ThingWorx Analytics and ThingWorx Foundation, which includes Connection Server and Edge capabilities.

ThingWorx Foundation
Next Generation Composer:
File Repository Editor added for application file management
New entity Config Table Editor to enable application configurability and customization
Localization support for new languages: Italian, Japanese, Korean, Spanish, Russian, Chinese/Taiwan, Chinese/Simplified
Mashup Builder:
Responsive Layout with new Layout Editor
13 new and updated widgets (beta)
Theming Editor (beta)
New Functions Editor
New Personalized Workspace
Platform:
Added support for AzureSQL, a relational database-as-a-service (DBaaS), as a new persistence provider: a PaaS database that is always running on the latest stable version of the SQL Server Database Engine and a patched OS, with 99.99% availability
Added support for InfluxData, a leading time series storage platform, as a new ThingWorx persistence provider: supports ingesting large amounts of IoT data and offers high availability with a clustering setup
New extension for Remote Access and Control: supports VNC and RDP desktop sharing for any remote device, with HTTP and SSH connectivity supported
Query Microservice: an optional microservice to offload the ThingWorx server by allowing query execution to occur in a separate process on the same or on a different physical machine
Click-and-go installers for Windows and Linux (RHEL) covering the PostgreSQL, AzureSQL, and InfluxDB versions of ThingWorx
Thing Presence feature introduced, which indicates whether the connection of a thing is "normal" based on the expected behavior of the device
Security: Major investments include updating 3rd-party libraries, handling of data to address cross-site scripting (XSS) issues, and enhancements to the password policy, including a password blacklist. A significant number of security issues have been fixed in this release. It is recommended that customers upgrade as soon as possible to take advantage of these important improvements.
Docker Support: Added a Dockerfile as a distribution medium for ThingWorx Foundation and Analytics, allowing you to build a Docker container image that unlocks the potential of Dev and Ops
Note: Legacy Composer has been removed and replaced with the New Composer.

Documentation: ThingWorx 8.4 Reference Documents
ThingWorx Platform 8.4 Release Notes
ThingWorx Platform Help Center
ThingWorx Analytics Help Center
ThingWorx Connection Services Help Center
View full tip
Just like the perfect sandwich, we know that you have specific preferences and requirements for your ThingWorx deployment. Whether you like to keep things simple with a classic grilled cheese or you like to spice things up with a more elaborate chipotle mayo BLT, we’ve got you covered. Our ThingWorx Deployment Architecture Guide explains what you’ll need to deploy ThingWorx in three different scenarios: production, enterprise and high-availability (pictured below).   Deployment Architecture for ThingWorx on Azure in High-Availability We’ve recently published Version 1.1 of the ThingWorx Deployment Architecture Guide. In it, you can find updated deployment architecture diagrams to more distinctly show the data and application layers within a ThingWorx environment. Our team has also added a new section on what you’ll need to deploy ThingWorx on Microsoft Azure, PTC’s preferred cloud platform.   Check it out here or in the attachment section on the right.   Stay connected, Kaya
View full tip
Embedded databases come with the installation of the ThingWorx Platform; no additional installation or configuration is required for them. Read about the various benefits and pitfalls of embedded versus external databases below.

Database Options

H2
RDBMS (relational database management system), written in Java
Has a small memory footprint
Embedded into ThingWorx for easy installation
Not as robust as other database options
Not scalable in production environments (unless used alongside a separate, external database for stream, value stream, and other data); see KCS Article CS243975 for further reading on the use of external databases
Meant to be used for quick deployments and testing environments

PostgreSQL
ORDBMS (object-relational database management system), written in C
PostgreSQL is the ThingWorx recommended database for production systems
More robust: an external database installed separately from ThingWorx, which is beneficial because external databases can be specifically configured for use in production, while embedded databases cannot; able to efficiently handle larger amounts of data and store more data without affecting ThingWorx system performance
Greater stability: recover from data corruption more easily by accessing the database from an external application (separate from ThingWorx) using simple SQL statements; easier to back up the database in case of issues (further reading in KCS Article CS246598); less risky and simpler upgrade procedure, which occurs "in-place" (instead of exporting and importing data and entities, a simple schema update allows these to automatically persist into the new version); if the ThingworxStorage folder is accidentally deleted, entities and data are secure in the external database
More secure: HA (High Availability) allows for multiple server instances at different locations in the network, which assists at time of failover (i.e. if one server fails, the other can immediately take over), securing the data and preventing further data loss in the event of a failure; customizable security settings and complex password requirements; fewer security vulnerabilities than other databases
Because PostgreSQL is an external database, it can be harder to install; follow the steps in the installation guide closely, and see KCS Articles CS235937 and CS230085 for troubleshooting and help with installation and configuration

Hana
RDBMS (relational database management system)
In-memory, column-based data storage
For more information on this database, please see the Getting Started with SAP HANA Guide

Neo4j
GDBMS (graph database management system), written in Java
Data is not easily accessed by external applications, and CQL must be used instead of SQL, making recovery from corruption very difficult
Embedded database with limited configuration options
Known to have issues with deadlocks
Deprecated in version 7.0 (related KCS Article: CS228537)

For full installation steps for H2 and PostgreSQL, see the ThingWorx Installation Guide.
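To illustrate the point about external access, here is a minimal Python sketch of querying a PostgreSQL persistence database directly from a separate application, for example when verifying data during a recovery. The connection parameters and the table name are hypothetical placeholders (the actual ThingWorx schema depends on your version), so adapt the query to your own installation.

import psycopg2  # PostgreSQL driver; install with: pip install psycopg2-binary

# Hypothetical connection details; replace with your own host, database, and credentials
conn = psycopg2.connect(host="localhost", dbname="thingworx", user="twadmin", password="password")
try:
    with conn.cursor() as cur:
        # Illustrative query only: "value_stream" is a placeholder table name,
        # not the actual ThingWorx schema
        cur.execute("SELECT COUNT(*) FROM value_stream;")
        print("Rows in value_stream:", cur.fetchone()[0])
finally:
    conn.close()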
View full tip
  The latest release of ThingWorx, version 9.7, brings powerful updates across performance, scalability, security, and developer productivity. Designed to meet the evolving demands of IoT and AR solutions, this version equips businesses with the tools to build smarter applications, improve operational efficiency, and stay ahead of the curve.   Performance & Scalability ThingWorx 9.7 introduces key updates that enhance performance and scalability for large-scale operations. The addition of partitioned value streams enables faster queries and purging of massive data sets, streamlining data management. Lazy loading for grid components optimizes application responsiveness, ensuring a seamless user experience, even in data-intensive environments. Additionally, the new Data Ordering feature ensures that ingested data is processed in the exact order it is received, even across asynchronous systems. This capability guarantees accurate execution of application logic and precise end calculations, addressing critical needs in industries requiring serialized data processing.   Reliability Enhancements Reliability is critical in enterprise applications, and ThingWorx 9.7 delivers with automated disaster recovery, providing quick failover mechanisms for deployments on the PTC Cloud. Enhanced diagnostics and tools for high-availability environments address challenges such as large cache collections, helping maintain stability and preventing bottlenecks in mission-critical systems.   Developer Productivity ThingWorx 9.7 offers developers a range of productivity-boosting features. The Mashup Builder enhancements include binding filtering, container zooming, and intuitive design tools that make application development faster and more efficient. Grid customizations, like inline editing and bulk selection, further streamline workflows. Developers also benefit from the latest Eclipse plugin, ensuring compatibility with modern tools and technologies.   Security & Compliance Security continues to be our highest priority in this release. ThingWorx 9.7 fully supports Java 21, delivering enhanced memory management and access to the latest security patches. TLS 1.3 Phase-2 integration brings advanced encryption protocols to secure communication across platforms. Additional security measures, such as granular file permissions, give administrators more control over sensitive data and operations.   Industry-Specific Solutions ThingWorx 9.7 introduces several innovative features tailored for specific industries: Route Versioning in Connected Work Cells: Manufacturers can now revise and track work cell routes with precision, creating complete traceability for serial numbers, routes, and associated work instructions. This feature streamlines workflows, enhances traceability, and boosts operational efficiency. Enhanced Operational Insights: The release includes new key performance indicators (KPIs) and reporting tools to deliver actionable insights at the factory line, site, and enterprise levels. These updates help organizations measure efficiency, reduce downtime, and improve overall productivity. Asset Muting for Chatty Devices: In scenarios where connected devices become overly chatty—sending excessive or unnecessary data—administrators can now mute these assets. This prevents data overload and mitigates risks such as distributed denial-of-service (DDoS) attacks, optimizing platform performance and ensuring uninterrupted operations. 
Improved Deployment Reliability: Enhancements in software content management ensure reliable and scalable management of large-scale deployments, reducing downtime and improving device performance.

Third-Party Library Upgrades
ThingWorx 9.7 includes key updates to its underlying technology stack, ensuring compatibility with modern enterprise systems, improved performance, and robust security. To stay future-ready, we have updated the core technology stack, including Tomcat 9.0.95, Ignite 2.16, and RabbitMQ 3.13.7, to highlight a few.

Why Upgrade to ThingWorx 9.7?
ThingWorx 9.7 sets a new benchmark for IoT platforms, offering enhanced scalability, state-of-the-art security, and intuitive tools for developers. The updates cater to both enterprise-scale deployments and industry-specific challenges, ensuring that businesses can innovate and operate with confidence in an increasingly complex digital landscape.

Upgrade to ThingWorx 9.7 today and unlock new possibilities for innovation and success! View the release notes here for more details on the exact updates, and be sure to upgrade to 9.7!
Vineet Khokhar
Principal Product Manager, IoT Security
View full tip
To maintain a cleaner look of your ThingWorx server access URL, whether for production or convenience reasons, you may look into setting up a redirect. This is a quick example of how to redirect <your.main.url.com> to <your.main.url.com/Thingworx>.
Go to /<apache-tomcat-directory>/webapps/ROOT/ and find the "index.jsp" file. Copy that file for backup purposes and replace it with a new one, containing the following:
<!DOCTYPE html>
<html>
<head>
<title>Redirecting....</title>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<script type="text/javascript">
         function Redirect() {
               window.location="/Thingworx/";
        }
        setTimeout('Redirect()', 0);
</script>
</head>
<body></body>
</html>
Please note that once the URL is hit, the browser will still show the full /Thingworx path in the address bar (i.e. it keeps the "redirected to" address). You may also use this approach to have your <main.url.com> redirect to one of your mashups. This way a user, with the proper permissions in place, may access the mashup directly, bypassing the Composer.
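Once the new index.jsp is in place, the redirect can be verified without a browser. This is a minimal Python check, assuming a hypothetical host of http://your.main.url.com; the script simply requests the root URL and confirms the JavaScript redirect snippet is being served.

import requests

# Hypothetical server address; replace with your own
resp = requests.get("http://your.main.url.com/", timeout=10)
resp.raise_for_status()

# The JavaScript redirect happens client-side, so check that the
# redirect snippet is present in the returned page
assert 'window.location="/Thingworx/"' in resp.text
print("Redirect page is being served correctly")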
View full tip
A confusion matrix is a technique for summarizing the performance of a classification algorithm. Classification accuracy alone can be misleading if you have an unequal number of observations in each class or if you have more than two classes in your data set. Calculating a confusion matrix can give you a better idea of what your classification model is getting right and what types of errors it is making.

Classification Accuracy and its Limitations:
Classification Accuracy = Correct Predictions / Total Predictions
The main problem with classification accuracy is that it hides the detail you need to better understand the performance of your classification model. Below are two examples:
1. When your data has more than 2 classes. With 3 or more classes you may get a classification accuracy of 80%, but you don't know if that is because all classes are being predicted equally well or whether one or two classes are being neglected by the model.
2. When your data does not have an even number of observations in each class. You may achieve an accuracy of 90% or more, but this is not a good score if 90 records out of every 100 belong to one class, because you can achieve this score by always predicting the most common class value.
Classification accuracy can hide the detail you need to diagnose the performance of your model. Thankfully, we can tease apart this detail by using a confusion matrix.

Confusion Matrix Terminology:
A confusion matrix is a table that is often used to describe the performance of a classification model on a set of test data for which the true values are known. Let's start with an example for a binary classifier:

N = 165          Predicted no:   Predicted yes:
Actual no:       50              10
Actual yes:      5               100

What can we learn from this confusion matrix?
There are two possible predicted classes: "yes" and "no". If we were predicting the presence of a disease, for example, "yes" would mean they have the disease, and "no" would mean they don't have the disease.
The classifier made a total of 165 predictions (e.g., 165 patients were tested for the presence of the disease).
Out of those 165 cases, the classifier predicted "yes" 110 times and "no" 55 times.
In reality, 105 patients in the sample have the disease, and 60 patients do not.
Let's now define the most basic terms, which are whole numbers (not rates):
True positives (TP): cases in which we predicted yes (they have the disease), and they do have the disease.
True negatives (TN): we predicted no, and they don't have the disease.
False positives (FP): we predicted yes, but they don't actually have the disease. (Also known as a "Type I error.")
False negatives (FN): we predicted no, but they actually do have the disease. (Also known as a "Type II error.")

N = 165          Predicted no:   Predicted yes:
Actual no:       TN = 50         FP = 10          60
Actual yes:      FN = 5          TP = 100         105
                 55              110

This is a list of rates that are often computed from a confusion matrix for a binary classifier:
Accuracy: Overall, how often is the classifier correct?
(TP+TN)/total = (100+50)/165 = 0.91
Misclassification Rate: Overall, how often is it wrong?
(FP+FN)/total = (10+5)/165 = 0.09
Equivalent to 1 minus Accuracy; also known as "Error Rate"
True Positive Rate: When it's actually yes, how often does it predict yes?
TP/actual yes = 100/105 = 0.95
Also known as "Sensitivity" or "Recall"
False Positive Rate: When it's actually no, how often does it predict yes?
FP/actual no = 10/60 = 0.17
Specificity: When it's actually no, how often does it predict no?
TN/actual no = 50/60 = 0.83
Equivalent to 1 minus the False Positive Rate
Precision: When it predicts yes, how often is it correct?
TP/predicted yes = 100/110 = 0.91
Prevalence: How often does the yes condition actually occur in our sample?
Actual yes/total = 105/165 = 0.64
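A short sketch in Python makes these definitions concrete; it computes each of the rates above from the four counts in the example matrix.

# Counts from the example confusion matrix
TP, TN, FP, FN = 100, 50, 10, 5
total = TP + TN + FP + FN             # 165 predictions in total

accuracy = (TP + TN) / total          # 0.91
misclassification = (FP + FN) / total # 0.09, equal to 1 - accuracy
true_positive_rate = TP / (TP + FN)   # 0.95 (sensitivity / recall)
false_positive_rate = FP / (FP + TN)  # 0.17
specificity = TN / (TN + FP)          # 0.83, equal to 1 - false_positive_rate
precision = TP / (TP + FP)            # 0.91
prevalence = (TP + FN) / total        # 0.64

print(f"accuracy={accuracy:.2f}, recall={true_positive_rate:.2f}, precision={precision:.2f}")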
View full tip