IoT Tips

ThingWorx is great for storing large amounts of data coming from your devices, but it can also be used like a traditional, row-based database for information you would like to integrate with your thing data. Attached to this blog entry is a short example of creating an address book database using a DataTable and a DataShape. It does not focus on creating mashups; instead, it covers the modeling and service calls you would use to create a simple database.
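As a minimal sketch of the pattern the attached example describes (the DataShape name, field names, and DataTable thing name below are illustrative assumptions, not the exact entities in the attachment): create a DataShape with the address fields, create a DataTable thing that uses it, and then add rows through the DataTable's AddDataTableEntry service — check the exact parameter list on your own DataTable thing.

// Build an infotable row matching the address book DataShape (hypothetical name: AddressBookDS)
var values = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape({
    infoTableName: "InfoTable",
    dataShapeName: "AddressBookDS"
});
var newEntry = new Object();
newEntry.name = "Jane Smith";               // STRING (primary key field)
newEntry.phone = "555-0100";                // STRING
newEntry.email = "jane.smith@example.com";  // STRING
values.AddRow(newEntry);

// Insert the row into the DataTable thing (hypothetical name: AddressBookDataTable)
Things["AddressBookDataTable"].AddDataTableEntry({
    values: values /* INFOTABLE */,
    sourceType: "Thing" /* STRING */,
    source: "AddressBookExample" /* STRING */,
    tags: undefined /* TAGS */
});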
View full tip
Anomaly Detection (also known as Outlier Detection) is a set of techniques that identify unusual occurrences in data. The premise is that such occurrences may be early indicators of future negative events (e.g. failure of assets or production lines). Data Science algorithms for Anomaly Detection include both Supervised and Unsupervised methods. In Unsupervised Anomaly Detection, the algorithms assume that most of the data points are "normal" (e.g. normal operation of the asset) and look for data points that are most dissimilar to the remainder of the dataset. Supervised Anomaly Detection requires a labeled set of Anomalies, in which case predictive algorithms can be applied directly to this data.

ThingWorx employs a number of algorithms in support of Anomaly Detection:

- Simple threshold alerts. These are easy to set up on Thing properties but require a domain expert to provide the thresholds. The alert then fires automatically when the value of the monitored property goes outside the predefined range of values, often seen as the "bad" side of the threshold.
- Statistical Process Control (SPC). This can be implemented using ThingWorx Analytics Property Transforms. Most companies use a subset of SPC charts and rules to monitor production processes. Examples include the X-bar and R charts, as well as the Western Electric rules (e.g. one point outside the average +/- three sigma range; a minimal sketch of this rule follows this list). Explainable and widely accepted, SPC can also provide an earlier warning system compared to simple threshold alerts, in that it captures more complex patterns.
- Clustering. Using ThingWorx Analytics, one can build an optimal clustering for the available data points. Under the assumption that the data represents mostly normal operation and that there is no significant pre-defined pattern of anomalies forming their own cluster, one can identify outliers by looking at the distance between points and their corresponding cluster centers. Points that are very far from their corresponding cluster center can be labeled as anomalies.
- Semi-supervised Anomaly Alerting (formerly known as ThingWatcher). This functionality identifies single-property time series behavior that is statistically different from what was seen in a finite window of "known normal operation". As such, it does not identify a "bad" event, or even a precursor to a "bad" event. Rather, it points the end user to further investigate a situation which may lead to a "bad" event. Anomaly Alerts can be set up easily, like any other Alert on a Thing property, and multiple Anomaly Alerts can be set up on the same or different properties of a Thing. Behind the scenes, the platform builds a time series neural network model for the known normal operation data, which is then applied to incoming data; if the errors are significantly different from those on known normal operation over a period of time, an Anomaly Alert is produced.

The techniques mentioned above are either unsupervised or semi-supervised. If the dataset contains labeled anomalies (e.g. asset faults, or suspicious patterns), then supervised predictive techniques (such as regression, decision trees, neural nets, or ensemble methods) are available to model the relationships between such anomalies (dependent variables) and various variables of interest (independent variables). These models can then be employed to monitor assets or production lines for upcoming anomalies.
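As a hedged, minimal sketch of the simplest Western Electric rule mentioned above (one point outside the average +/- three sigma), written as plain JavaScript over an array of recent readings; the function names and the way you collect the baseline values are illustrative assumptions, not ThingWorx Analytics APIs:

// Control limits from a window of known-normal readings (basic SPC-style check):
// flag any new reading outside mean +/- 3 * sigma of that baseline.
function threeSigmaLimits(baseline) {
    var n = baseline.length;
    var mean = baseline.reduce(function (a, b) { return a + b; }, 0) / n;
    var variance = baseline.reduce(function (a, b) { return a + (b - mean) * (b - mean); }, 0) / (n - 1);
    var sigma = Math.sqrt(variance);
    return { lower: mean - 3 * sigma, upper: mean + 3 * sigma };
}

function isOutOfControl(value, limits) {
    return value < limits.lower || value > limits.upper;
}

// Example: limits built from normal operation, then applied to new readings
var baseline = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.1, 9.7, 10.4, 10.0];
var limits = threeSigmaLimits(baseline);
isOutOfControl(10.2, limits);   // false - inside the control band
isOutOfControl(42.0, limits);   // true  - well outside mean +/- 3 sigma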
In many real-world use cases, anomalies are relatively rare, so care needs to be taken when building such predictive models. Techniques such as up-sampling can prove beneficial in these situations.

What constitutes an Anomaly depends on the observed data and the current context. If only a few data points are initially available, it is possible that a lot of future data will be predicted as an Anomaly despite being normal operation. Also, in terms of context, if Anomaly Detection is trained on a connected product in the winter, it is likely to flag all summer operation as anomalous. This can be tackled by implementing multiple anomaly detection alerts, one for each context of operation (e.g. season, recipe being manufactured, operation done by a robot).

Another consideration is lead time vs. explainability. For example, when a threshold alert fires, it is obvious why, but it may not be early enough to take action. As more advanced methods are employed, more complex patterns can be captured, hence more lead time, but typically at the expense of explainability. For example, semi-supervised Anomaly Alerting (formerly known as ThingWatcher) uses time windows, aggregations, and derivatives of up to the third order, resulting in significantly less explainability when an Anomaly is presented to the end user.

Choosing the appropriate Anomaly Detection technique is use-case dependent, balancing the desired lead time and explainability. If historical data on failures/anomalies is not available, a good place to start is Statistical Process Control, as it provides a balanced approach between the two dimensions, in addition to already being in use across many manufacturing companies.
View full tip
Performance and memory issues in ThingWorx often appear at the Java Virtual Machine (JVM) level, and we can frequently detect them by monitoring JVM garbage collection logs and thread dumps. In this first part of a multi-part series, let's discuss how to analyze JVM garbage collection (GC) logs to detect memory issues in our application.

What are GC logs?
Garbage Collection logs capture all memory cleanup operations performed by the JVM. When enabled on the Apache server, the logs show memory usage over time and any memory allocation issues (see "How to enable GC logging" in our KB for details on enabling this logging level). By analyzing GC logs we can determine the proper memory configuration and whether we have memory issues.

Where do I find the GC logs?
When configured as per our KB article, GC logs are written to your Apache logs folder. Check both the gc.out and gc.restart files for issues if present, especially if the server was restarted while experiencing problems.

How do I read GC logs?
There are a number of free tools for GC log analysis (e.g. http://gceasy.io, GCViewer), but GC logs can also be analyzed manually in a text editor for common issues. Each memory cleanup operation is printed as a single line recording when the collection happened, how much heap was in use before and after, and how long the pause took (see the Full GC examples below). Additional information on the specific GC operations you might see is available from Oracle. A GC analysis tool will convert the text representation of the cleanups into a more easily readable graph.

Three key scenarios indicating memory issues
Let's look at some common scenarios that might indicate a memory issue. A case should be opened with Technical Support if any of the following are observed:
- Extremely long Full GC operations (30 seconds+)
- Full GCs occurring in a loop
- Memory leaks

Full GCs
Full GCs are undesirable because they are a 'stop-the-world' operation, and the longer the JVM pause, the more impact end users will see:
- All other active threads are stopped while the JVM attempts to make more memory available
- Full GCs take considerable time (sometimes minutes), during which the application will appear unresponsive
Example – this full GC takes 46 seconds, during which time all other user activity would be stopped:
272646.952: [Full GC: 7683158K->4864182K(8482304K) 46.204661 secs]

Full GC loop
- Full GCs occurring in quick succession are referred to as a Full GC loop
- If the JVM is unable to clean up any additional memory, the loop can potentially go on indefinitely
- ThingWorx will be unavailable to users while the loop continues
Example – four consecutive garbage collections take nearly two minutes to resolve:
16121.092: [Full GC: 7341688K->4367406K(8482304K), 38.774491 secs]
16160.11: [Full GC: 4399944K->4350426K(8482304K), 24.273553 secs]
16184.461: [Full GC: 4350427K->4350426K(8482304K), 23.465647 secs]
16207.996: [Full GC: 4350431K->4350427K(8482304K), 21.933158 secs]
(In a GC analysis graph, a run of red Full GC markers at the end of the chart is the visual indication of a full GC loop.)

Memory leaks
Memory leaks occur in the application when an increasing number of objects are retained in memory and cannot be cleaned up, regardless of the type of garbage collection the JVM performs:
- When mapping memory consumption over time, a memory leak appears as ever-increasing memory usage that is never reclaimed
- The server eventually runs out of memory, regardless of the upper bounds set
(In a graph, this appears as memory increasing steadily despite GCs until the heap eventually maxes out.)

What should we do if we see these issues occurring?
- Full GC loops and memory leaks require a heap dump to identify the root cause with high confidence
- The heap dump needs to be collected when the server is in a bad state
- Thread dumps should be collected at the same time
- A case should be opened with Support to investigate the available GC, stack trace, or heap dump files
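For manual inspection, a small script can pull the numbers out of the Full GC lines shown above. This is a hedged sketch (plain JavaScript, run outside ThingWorx, e.g. with Node.js); the exact log format varies by JVM version and GC flags, so treat the regular expression as an assumption to adapt to your own gc.out file:

// Parse lines like: 272646.952: [Full GC: 7683158K->4864182K(8482304K) 46.204661 secs]
var fullGcPattern = /^(\d+\.\d+): \[Full GC:? (\d+)K->(\d+)K\((\d+)K\),? (\d+\.\d+) secs\]/;

function parseFullGcLine(line) {
    var m = fullGcPattern.exec(line);
    if (!m) return null;
    return {
        uptimeSec: parseFloat(m[1]),    // seconds since JVM start
        beforeKb:  parseInt(m[2], 10),  // heap used before the collection
        afterKb:   parseInt(m[3], 10),  // heap used after the collection
        totalKb:   parseInt(m[4], 10),  // total heap size
        pauseSec:  parseFloat(m[5])     // stop-the-world pause duration
    };
}

var entry = parseFullGcLine("272646.952: [Full GC: 7683158K->4864182K(8482304K) 46.204661 secs]");
// entry.pauseSec > 30 would be a long Full GC worth reporting;
// afterKb staying close to beforeKb across consecutive entries suggests a Full GC loop or a leak.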
View full tip
We live in a connected world where we can (and want to!) receive instant updates and notifications. ThingWorx leverages the power of Web 2.0 and its Always-On technology to deliver that, but our friendly SMS providers have also given us an easy and powerful way to deliver SMS notifications right to your phone: email to text! Set up a 'notification' Thing using our MailServer Template, configure your outgoing e-mail server, and you are ready to invoke the 'SendMessage' service on a given event. All you need now is the email address of your SMS number, which you can find by following this link: List of e-Mail to SMS addresses (http://sms411.net/how-to-send-email-to-a-phone/)
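A minimal sketch of that call from a Subscription or Service, assuming a mail Thing named NotificationMailServer created from the MailServer Template and an illustrative carrier email-to-SMS address; the parameter names follow the SendMessage service of the Mail extension, so verify them against the service definition in your installed version:

// Send a text message by emailing the carrier's email-to-SMS gateway
// (5551234567@txt.example.com is a placeholder - look up your carrier's address)
Things["NotificationMailServer"].SendMessage({
    from: "alerts@mycompany.com" /* STRING */,
    to: "5551234567@txt.example.com" /* STRING */,
    cc: undefined /* STRING */,
    bcc: undefined /* STRING */,
    subject: "Freezer temperature alert" /* STRING */,
    body: "Temperature exceeded threshold at " + new Date() /* HTML */
});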
View full tip
Pgweb is a lightweight, cross-platform client for PostgreSQL databases written in Go. It can connect to both local and remote PostgreSQL instances behind a firewall using native SSH tunneling, with either a password or SSH keys. Because it has no dependency on any other external library or utility, all that is needed to run pgweb is to download it and run it against a working PostgreSQL server. It also works on a Raspberry Pi.

Note: It can also be used to connect to CockroachDB instances, as detailed in the post Cockroach DB : An open source distributed SQL Database as external data store for ThingWorx

Getting Access
Download pre-compiled binaries for different platforms from the pgweb releases page on GitHub.

Building pgweb from source
Use https://github.com/sosedoff/pgweb.git to clone the source and build it.

Additional Reads
- Pgweb Wiki
- Features List
- Usage

Installing & Testing it
After downloading pgweb_windows_amd64.exe for the Windows platform (used for testing in this blog), start the pgweb utility using:
pgweb_windows_amd64.exe
Starting the pgweb utility from the command prompt shows the port on which pgweb is accessible when no flag is used and all default options apply. The pgweb UI can then be accessed at localhost:8081 via a web browser.

Connecting
Pgweb allows connecting to the PostgreSQL DB in a variety of ways, e.g. using a connection URL such as postgres://user:password@host:port/db?sslmode=mode. Additionally, users can connect using a standard hostname, port, username and password connection without SSL, or via SSH tunneling.

Navigating around the pgweb UI
The UI design is straightforward and clean, highlighting the database name, all the tables available under that database, and the selected table's information on the left. To switch to a different database running under the connected server, click on the database name to see a drop-down menu listing the names. The selected table is queried for all its data automatically, displaying all its rows together with additional tabs for quickly reviewing metadata about the table.

Query Tab
This is particularly interesting, especially for getting quick information on how a SQL query is going to perform. It can be used to analyze and generate the query plan, showing preliminary data on what it will cost to run a particular query, how many times it is going to loop over the complete table, what the execution time will be, etc. To get this information, simply type the SQL statement under the Query tab and hit the Explain Query button; for larger tables it could take a while to generate. The queried data can then be exported in JSON, CSV and XML formats.

Exporting table data out of the tables
This is a handy feature allowing quick export of an entire table's data set by right-clicking on the table name and selecting one of the desired formats, which include JSON, CSV, etc.
View full tip
Connect and Monitor Industrial Plant Equipment Learning Path

Learn how to connect and monitor equipment that is used at a processing plant or on a factory floor.

NOTE: Complete the following guides in sequential order. The estimated time to complete this learning path is 180 minutes.

1. Create An Application Key
2. Install ThingWorx Kepware Server
3. Connect Kepware Server to ThingWorx Foundation (Part 1, Part 2)
4. Create Industrial Equipment Model
5. Build an Equipment Dashboard (Part 1, Part 2)
View full tip
Meet Neal. When Neal joined PTC five years ago, he immediately hit the ground running on IoT initiatives, working in multiple areas ranging from pre-sales to partner relations. Today, he is a Worldwide ThingWorx Center of Excellence Principal Lead at PTC, and his biggest focus is supporting the go-to-market for the Microsoft partnership.

I sat down with Neal recently to hear the details on exactly how Azure and ThingWorx can be used to develop world-class IIoT applications.

Kaya: Can you explain how Azure and ThingWorx work together?
Neal: Yes, Azure provides the cloud infrastructure that our customers need in order to deploy ThingWorx. By having Azure as our preferred cloud platform, we're able to focus our R&D efforts on utilizing functionality that is available in Azure, rather than having to reinvent the wheel ourselves for each cloud platform in an attempt to remain cloud-agnostic. By leveraging a single, already quite powerful cloud platform in Azure, we're able to maximize our development efforts.

Kaya: What was the major problem that led to us investigating cloud options?
Neal: There were two issues that our users had. The first was that we often had complicated installs and setup procedures because we were genericizing the whole process, so the initial setup and run was complicated and expensive. For example, we were requiring them to set up additional VMs and components, and we were giving them generic instructions because we were being very agnostic, when they had already chosen, outside of us, to use one of the cloud platforms. We knew our customers wanted to move fast, so we had to make it easier and faster for them to see results.

The other issue we ran into with customers was confusion in the offerings. For ThingWorx, ingest is just one aspect of IoT. ThingWorx is particularly strong in areas like enabling mixed reality and augmented reality as well as application enablement. And, while we also have ingest capabilities, Microsoft is especially powerful when it comes to ingest and security. We decided that the smartest solution was to leverage Microsoft's expertise in data ingestion to make ThingWorx even more powerful, so we made the Azure IoT Hub Connector. By partnering with Microsoft, we have a joint architecture where Microsoft provides key features and ThingWorx runs on top of those features to get you to market faster when developing your application.

Below is an example of a high-availability deployment of ThingWorx on Azure that utilizes ThingWorx Azure IoT Connectors to access an Azure IoT Hub.
High-Availability Deployment of ThingWorx on Azure

Kaya: Why did we partner with Azure? What specific benefits does Azure offer over other cloud services providers?
Neal: When we started to look at what our customers were using for cloud services, we noticed that a lot were using Microsoft. When we join forces with Microsoft, we have a much more holistic offering around digital transformation. With the partnership, PTC and Microsoft are able to leverage each partner's respective strengths to provide even more powerful IIoT solutions. You have Windchill and Microsoft's business application strategies; you have Vuforia and Microsoft's mixed reality and augmented reality strategies; and you have ThingWorx on the Microsoft Azure cloud. Overall, you have a much more complete and powerful offering together.

Kaya: What is your favorite aspect about working at PTC?
Neal: The growth.
There have been a lot of changes over the last five years that I've been here. PTC has been able to leverage its strengths and long-time experience in the CAD and PLM markets to enter and ultimately become a leader in the IIoT market, according to reports by research firms like Gartner and Forrester. In short: the growing IIoT market and PTC's leadership in the industry.

Note to Readers: You've likely heard about our major strategic partnership with Microsoft to leverage our respective IIoT and cloud technologies to optimize the creation, deployment, management and overall use of your IIoT applications. If you haven't heard about the partnership, check out the press release here. If you're curious about more aspects of PTC's partnership with Microsoft, check out this site devoted to sharing how ThingWorx and Azure are better together.
View full tip
This video goes through the steps required to use the Creo Insight extension:
- Download and install the required extension
- Set required config.pro options
- Create a provider in Analytics Manager
- Publish a sensor from Creo
- Create an analysis Event in Analysis Manager
- Retrieve sensor values from ThingWorx in Creo

See also:
- https://www.ptc.com/en/support/article?n=CS277514 for a written version of these steps.
- Creo Help Center
View full tip
How to input Database User Credentials at runtime. This blog assumes that you have already imported the Database Extension and configured the Thing Template. If you have not done this already, please see Steps to connecting to your Relational Database first.

Steps:

1. Create a Database Thing Template with the correct configuration. Example configuration for a MySQL database:
jDBCDriverClass: com.mysql.jdbc.Driver
jDBCConnectionURL: jdbc:mysql://127.0.0.1:3306/<DatabaseNameHere>?allowMultiQueries=true
connectionValidationString: SELECT NOW()
maxConnections: 100
userName: <DataBaseUserNameHere>
password: <DataBasePasswordHere>

2. Create any Generic Thing and add a service to create a Thing based on the Thing Template created in Step 1. Example:
// NewDataBaseThingName is the String input naming the database thing to be created.
// MySqlServerUpdatedConfiguration is the Thing Template with the correct configuration
var params = {
    name: NewDataBaseThingName /* STRING */,
    description: NewDataBaseThingName /* STRING */,
    thingTemplateName: "MySqlServerUpdatedConfiguration" /* THINGTEMPLATENAME */,
    tags: undefined /* TAGS */
};
// no return
Resources["EntityServices"].CreateThing(params);

3. Add code to enable and then restart the above Thing using the EnableThing() and RestartThing() services. Example:
Things[NewDataBaseThingName].EnableThing();
Things[NewDataBaseThingName].RestartThing();

4. Test and confirm that the Database Thing services run as expected.

5. Create a DataShape with the following fields:
jDBCDriverClass: STRING
jDBCConnectionURL: STRING
connectionValidationString: STRING
maxConnections: NUMBER
userName: STRING
password: PASSWORD

6. In the Generic Thing created in Step 2, add code to update the configuration settings of the Database Thing. Make sure the JDBC Driver Class name is never changed; if a different database connection is required, use a different Thing Template. Also add code to restart the Database Thing using the RestartThing() service. Example:
var datashapeParams = {
    infoTableName: "InfoTable",
    dataShapeName: "DatabaseConfigurationDS"
};
// CreateInfoTableFromDataShape(infoTableName:STRING("InfoTable"), dataShapeName:STRING):INFOTABLE(DatabaseConfigurationDS)
var config = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(datashapeParams);
var passwordParams = {
    data: "DataBasePasswordHere" /* STRING */
};
// DatabaseConfigurationDS entry object
var newEntry = new Object();
newEntry.jDBCDriverClass = "com.mysql.jdbc.Driver"; // STRING
newEntry.jDBCConnectionURL = "jdbc:mysql://127.0.0.1:3306/<DatabaseNameHere>?allowMultiQueries=true"; // STRING
newEntry.connectionValidationString = "SELECT NOW()"; // STRING
newEntry.maxConnections = 100; // NUMBER
newEntry.userName = "DataBaseUserNameHere"; // STRING
newEntry.password = Resources["EncryptionServices"].EncryptPropertyValue(passwordParams); // PASSWORD
config.AddRow(newEntry);
var configurationTableParams = {
    configurationTable: config /* INFOTABLE */,
    persistent: true /* BOOLEAN */,
    tableName: "ConnectionInfo" /* STRING */
};
// ThingNameForConfigurationUpdate is the input string for the Thing name whose configuration needs to be updated.
// no return
Things[ThingNameForConfigurationUpdate].SetConfigurationTable(configurationTableParams);
Things[ThingNameForConfigurationUpdate].RestartThing();

7. Test and confirm that the Database Thing services run as expected.
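To sanity-check that the new credentials were applied, you can read the configuration table back after the restart. This is a hedged sketch: GetConfigurationTable is a standard Thing service, but the table name "ConnectionInfo" and the field names follow the example above, and the SQL service at the end is a placeholder, so adjust them to your own Database Thing Template:

// Read back the connection settings that were just written (password stays encrypted)
var currentConfig = Things[ThingNameForConfigurationUpdate].GetConfigurationTable({
    tableName: "ConnectionInfo" /* STRING */
});
var row = currentConfig.rows[0];
logger.info("Database thing " + ThingNameForConfigurationUpdate +
            " now points at " + row.jDBCConnectionURL +
            " as user " + row.userName);

// A quick end-to-end check: call one of the SQL services you defined on the
// Database Thing (GetAllRecords is a placeholder name for illustration only)
// var result = Things[ThingNameForConfigurationUpdate].GetAllRecords();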
View full tip
The .NET SDK can be configured to emit very detailed debugging and diagnostic information to a log file during execution. The .NET SDK uses the standard .NET System.Diagnostics infrastructure for logging; as such, all configuration of the .NET SDK logger is done via the standard .NET logging configuration system. By default, logging is configured via the standard .NET "App.config" file. Log messages can be routed to any standard .NET TraceListener. Optionally, ThingWorx provides a FixedFieldTraceListener which can be used to output log messages to a file; use of the ThingWorx-provided FixedFieldTraceListener is recommended. When configured, the FixedFieldTraceListener automatically creates a "logs" directory in the same location as (a sibling to) the running executable file (.exe). This "logs" directory will contain the log files. Every .NET class can be configured as a specific "Trace Source" which emits log messages. It is recommended to add at least the following Trace Sources to your App.config file to receive the most useful amount of information:
- com.thingworx.communications.client.BaseClient
- com.thingworx.communications.client.ConnectedThingClient
- com.thingworx.communications.client.things.VirtualThing
- com.thingworx.communications.client.TwApiWrapper
- com.thingworx.communications.client.things.filetransfer.FileTransferVirtualThing
- com.thingworx.communications.client.things.contentloader.ContentLoaderVirtualThing
The amount of information emitted can range from very low-level Trace messages (the Verbose setting) to nothing at all (the Off setting). The "SourceLevels Enumeration" can be used to control how much information is written out to the log file; for reference, this is the <add name="SourceSwitch" value="Information" /> element in the sample below. Below is a sample App.config file.
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.diagnostics>
    <sources>
      <source name="com.thingworx.common.utils.JSONUtilities" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch">
        <listeners>
          <add name="file" />
        </listeners>
      </source>
      <source name="com.thingworx.communications.client.TwApiWrapper" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch">
        <listeners>
          <add name="file" />
        </listeners>
      </source>
      <source name="com.thingworx.communications.client.BaseClient" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch">
        <listeners>
          <add name="file" />
        </listeners>
      </source>
      <source name="com.thingworx.communications.client.ConnectedThingClient" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch">
        <listeners>
          <add name="file" />
        </listeners>
      </source>
      <source name="com.thingworx.communications.client.things.contentloader.ContentLoaderVirtualThing" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch">
        <listeners>
          <add name="file" />
        </listeners>
      </source>
      <source name="com.thingworx.communications.client.things.filetransfer.FileTransferVirtualThing" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch">
        <listeners>
          <add name="file" />
        </listeners>
      </source>
      <source name="com.thingworx.communications.client.things.VirtualThing" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch">
        <listeners>
          <add name="file" />
        </listeners>
      </source>
      <source name="com.thingworx.metadata.annotations.MetadataAnnotationParser" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch">
        <listeners>
          <add name="file" />
        </listeners>
      </source>
    </sources>
    <switches>
      <add name="SourceSwitch" value="Information" />
    </switches>
    <sharedListeners>
      <add name="file" type="com.thingworx.common.logging.FixedFieldTraceListener, thingworx-dotnet-common, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" initializeData="false"/>
    </sharedListeners>
    <trace autoflush="true" indentsize="4" />
  </system.diagnostics>
</configuration>
View full tip
JavaMelody is an open source (LGPL) application that measures and calculates statistical information based on application usage. The resulting data can be viewed in a variety of formats including evolution charts, which track various operations and server attributes over time. There are also robust reporting options that allow data to be exported in either HTML or PDF formats.

Installation
Installation is fairly simple and can be done in just a few minutes. Download the distribution from the JavaMelody Wiki and extract javamelody.jar, available at https://github.com/javamelody/javamelody/releases

Step 1: Download the JavaMelody file (on Unix, use the following command):
wget javamelody.googlecode.com/files/javamelody-1.49.0.zip
Note: Check the link provided above for the latest version before executing this command, and modify the version accordingly.

Step 2: Extract the zip file (using the following command on Unix; note the version from Step 1):
unzip javamelody-1.49.0.zip

Step 3: Copy javamelody.jar and jrobin-x.jar from the JavaMelody installable to the WEB-INF/lib directory of the war file deployed in Tomcat, using the following command on Unix:
cp -pr javamelody-1.49.0 jrobin-x.jar /opt/tomcat/server/webapps/<application name>/WEB-INF/lib

Step 4: Edit the web.xml file in the WEB-INF directory of the war file deployed in Tomcat and add the following lines before the servlet descriptions, i.e. near the beginning of the web.xml file:
<filter>
    <filter-name>monitoring</filter-name>
    <filter-class>net.bull.javamelody.MonitoringFilter</filter-class>
</filter>
<filter-mapping>
    <filter-name>monitoring</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>
<listener>
    <listener-class>net.bull.javamelody.SessionListener</listener-class>
</listener>

Step 5: Restart the Tomcat server after editing web.xml and access the JavaMelody page using the following URL pattern:
http://<hostname on which tomcat is configured>:<Port number on which the application is accessed>/<application name>/monitoring
The URL can be customized in the configuration file.

Reports can be viewed in weekly, daily, or monthly formats. They can also be downloaded or sent over email in PDF format. The iText library for web apps and Java's Mail and Activation libraries are required on the server in order to use the mail session. The report provides the same information found on the monitoring web page, with both high-level and detailed views: CPU and memory usage, detailed SQL information, SQL statistics, server requests, system threads, and data caches.

System Overhead
On the JavaMelody Wiki, https://github.com/javamelody/javamelody/wiki/Overhead, one can find a healthy discussion about system overhead. The general consensus is that the overhead caused by JavaMelody is very low and that the feature is safe to enable full-time in a QA environment:
- JavaMelody records only statistics and not events, so the memory overhead is quite minimal.
- There is no I/O on the wire and minimal I/O on disk.
If no problems arise, it can be considered for the production environment as well. Using a tool like JavaMelody can lead to valuable insights on how to optimize servers or uncover otherwise hidden issues, providing value that exceeds the overhead cost.
View full tip
Get Started with ThingWorx for IoT Guide Part 5

Step 13: Extend Your Model

Modify the application model, enhance your UI, and add features to the house monitoring application to simulate a request as it might come from an end user. For this step, we do not provide explicit instructions, so you can use critical thinking to apply your skills. After completing the previous steps, your Mashup should look like this:

In this part of the lesson, you'll have an opportunity to:
- Complete an application enhancement in Mashup Builder
- Compare your work with that of a ThingWorx engineer
- Import and examine ThingWorx entities provided for download that satisfy the requirements
- Understand the implications of ThingWorx modeling options

Task Analysis
Add a garage to the previously-created house monitoring web application and include a way to display information about the garage in the UI. You will need to model the garage using Composer and add to the web application UI using Mashup Builder.
- What useful information could a web application for a garage provide?
- How could information about a garage be represented in ThingWorx?
- What is the clearest way to display information about a garage?

Tips and Hints
See below for some tips and hints that will help you think about what to consider when modifying the application in ThingWorx. Modify your current house monitoring application by adding a garage:
- Extend your model to include information about a garage using Composer.
- Add a display of the garage information to your web application UI using Mashup Builder.

Best Practices
Keep application development manageable by using ThingWorx features that help organize the entities you create.

Modeling
The most important feature of a garage is the status of the door. In addition to its current status, a user might be interested in knowing when the garage door went up or down. Most garages are not heated, so a user may or may not be interested in the garage temperature.

Display
The current status of the garage door should be easily visible. Complete the task in your Composer before moving forward. The Answer Key below reveals how we accomplished this challenge so you can compare your results to that of a ThingWorx engineer.

Answer Key
Confirm you configured your Mashup to meet the enhancement requirements when extending your web application. Use the information below to check your work.

Create New Thing
Creating a new Thing is one way to model the garage door. We explain other methods, including their pros and cons, in the Solution discussion below.
- Did you create a new Thing using the Building Template?
- Did you apply a Tag to the new Thing you created?
Review detailed steps for how to Create a Thing in Part 2 of this guide.

Add Property
Any modeling strategy requires the addition of a new Property to your model. We explore options for selecting an appropriate base type for the garage Property in the Solution discussion on the next step.
- Did you add a Property to represent the garage door?
- Did you use the Boolean type?
- Did you check the Logged? check-box to save the history of changes?
Review detailed steps for how to Add a Property in Part 1, Step 3 of this guide.

Add Widget
In order to display the garage door status, you must add a Widget to your Mashup. We used a check-box in our implementation. We introduce alternative display options in the Solution discussion on the next step.
- Did you add a Widget to your Mashup representing the garage door status?
Review detailed steps for how to Create an Application UI in Part 3, Step 8 of this guide.

Add Data Source
If you created a new Thing, you must add a new data source. This step is not required if you added a Property to the existing Thing representing a house.
- Did you add a data source from the garage door Property of your new Thing?
- Did you check the Execute on Load check-box?
Review detailed steps for how to Add a Data Source to a Mashup in Part 4, Step 10 of this guide.

Bind Data Source to Widget
You must bind the new garage door Property to a Widget in order to affect the visualization.
- Did you bind the data source to the Widget you added to your Mashup?
Review detailed steps for how to Bind a Data Source to a Widget in Part 4, Step 10 of this guide.

Solution
If you want to inspect the entities as configured by a ThingWorx engineer, import this file into your Composer. Download the attached example solution: FoundationChallengeSolution.xml. Import the xml file, then open the MyHouseAndGarage Thing. See below for some options to consider in your application development.

Modeling
There are several ways the garage door property could be added to your existing model. The list below discusses the different implementations we considered. We chose to model the status of the garage door as a Property of a new Thing created using the Building Template.
- Add Property to BuildingTemplate. Pros: the garage Property will be added to the existing house Thing. Cons: all future Things using the Building Template will have a garage door property.
- Add Property to existing house Thing. Pros: house and garage are linked. Cons: no separate temperature and watts Property for the garage.
- Add Property to new Thing created with BuildingTemplate. Pros: all Building features are available. Cons: no logical link between house and garage.

Property Base Type
We chose to represent the status of the door with a simple Boolean Property named 'garageDoorOpen'. Thoughtful property naming ensures that the values, true and false, have a clear meaning. Using a Boolean type also makes it easy to bind the value directly to a Widget. The list below explains a few Base Type options.
- Boolean. Pros: easy to bind to a Widget. Cons: information between open and closed is lost.
- Number. Pros: precise door status. Cons: direction information is lost.
- String. Pros: any number of states can be represented. Cons: an unexpected String could break the UI.

Visualization
We chose a simple Check-box Widget to show the garage door status, but there are many other Widgets you could choose depending on how you want to display the data. For example, a more professional implementation might display a custom image for each state.

Logging
We recommend that you check the Logged option so you can record the history of the garage door status.

Step 14: Next Steps

Congratulations! You've successfully completed the Get Started with ThingWorx for IoT tutorial, and learned how to:
- Use Composer to create a Thing based on Thing Shapes and Thing Templates
- Store Property change history in a Value Stream
- Define application logic using custom defined Services and Subscriptions
- Create an application UI with Mashup Builder
- Display data from connected devices
- Test a sample application
The next guide in the Getting Started on the ThingWorx Platform learning path is Data Model Introduction.
View full tip
This expert session provides an overview of the patch and upgrade process for the ThingWorx platform. It covers how to perform a patch upgrade of the platform as well as an extensions upgrade, and explains when an in-place upgrade is applicable. It can be used as a quick reference when upgrading your system.

For full-sized viewing, click on the YouTube link in the player controls.

Visit the Online Success Guide to access our Expert Session videos at any time, as well as additional information about ThingWorx training and services.
View full tip
The following PowerPoint contains some reference slides for getting started with a DSE/ThingWorx integration. Start with understanding the DSE architecture and, specifically, the differences compared to regular relational databases. Free online courses offered by DataStax Academy:
–https://academy.datastax.com/courses/understanding-cassandra-architecture
–https://academy.datastax.com/courses/installing-and-configuring-cassandra

The following section will guide you through some of the specifics: http://datastax.com/documentation/cassandra/2.0/cassandra/architecture/architecturePlanningAbout_c.html
View full tip
The KEPServerEX ThingWorx Native Interface was originally released with KEPServerEX v5.21 to enable connectivity with v6.6 of the ThingWorx Platform. Compatibility with the ThingWorx NextGen Composer (v7.4) was implemented with the release of KEPServerEX v6.1, while still allowing support of older ThingWorx Composer versions through the Native Interface "Legacy Mode" setting.

When connecting with ThingWorx v7.4 and newer, an Industrial Gateway Thing Template is configured in the NextGen Composer. The instructions for connecting KEPServerEX v6.1 (and newer) with ThingWorx v7.4 (and newer) can be found in the ThingWorx Help Center here:
KEPServerEX / ThingWorx Industrial Connectivity Connection Example (expand ThingWorx Model Definition and Composer > Industrial Connections > Industrial Connections Example)

When connecting with versions of ThingWorx older than v7.4 (or if an older version of ThingWorx Composer will be used), it will be necessary to download and import the KEPServerEX Extension from the ThingWorx Marketplace. The "RemoteKEPServerEXThing" Thing Template will then be used to allow connection with KEPServerEX. Here is a link to the extension download: ThingWorx IoT Marketplace

To enable Legacy Mode in KEPServerEX (v6.1 or newer, when connecting with versions of ThingWorx older than v7.4):
1. Open the KEPServerEX Configuration window
2. In the top-left Project view pane, right-click on "Project" and select "Properties"
3. Select the ThingWorx Property Group
4. Change "Enable" to "Yes" and change "Legacy Mode" to "Enable"

Version compatibility summary:
- KEPServerEX v5.21: ThingWorx pre-7.4, using RemoteKEPServerEXThing
- KEPServerEX v6.0: ThingWorx pre-7.4, using RemoteKEPServerEXThing
- KEPServerEX v6.1 or higher with Legacy Mode enabled: ThingWorx pre-7.4, using RemoteKEPServerEXThing
- KEPServerEX v6.1 or higher with Legacy Mode disabled (default): ThingWorx v7.4 or higher, using Industrial Gateway
View full tip
Applicable Releases: ThingWorx Platform 7.0 to 8.5

Description:

Introduction to ThingWorx Extension development, covering the following topics:
- What is an Extension
- Why build an Extension
- Prerequisites
- Installing the Eclipse plugin and features
- Creating entities with the plugin and including exported Entities in an Extension Project
- Upgrading or updating an existing Extension in ThingWorx
- Building with Gradle and Ant

ThingWorx Extension Development Guide
View full tip
This video builds upon the mashup created in the basic session, and strives to create a more polished, user-friendly interface that is ready for deployment. In part 1, we’ll take a look at advanced layout designs and include a more varied set of widgets to help draw attention to some of the more pertinent properties being captured within the mashup.   For full-sized viewing, click on the YouTube link in the player controls. Visit the Online Success Guide to access our Expert Session videos at any time as well as additional information about ThingWorx training and services.  
View full tip
The System user is pivotal in securing your application, and the simplest approach is to assign the System user to ALL Collections and give it Runtime Service Execute. These Collection Permissions only export to ThingworxStorage (not via the File Export), so it becomes quite painful to manage them and then roll them out to a new machine. The best and fastest solution? Script the assignment. You can take this script, which does it for the System user, and extend it to include any other Collection-level permissions you might need to set, like adding Entity Create Design Time for the System user.
---------------------------------------------------------
//@ThingworxExtensionApiMethod(since={6,6})
//public void AddCollectionRunTimePermission(java.lang.String collectionName,
//       java.lang.String type,
//       java.lang.String resource,
//       java.lang.String principal,
//       java.lang.String principalType,
//       java.lang.Boolean allow)
//    throws java.lang.Exception
//
//Service Category:
//    Permissions
//
//Service Description:
//    Add a run time permission.
//
//Parameters:
//    collectionName - Collection name (Things, Users, ThingShapes, etc.) - STRING
//    type - Permission type (PropertyRead PropertyWrite ServiceInvoke EventInvoke EventSubscribe) - STRING
//    resource - Resource name (* = all or enter a specific resource to override) - STRING
//    principal - Principal name (name of user or group) - STRING
//    principalType - Principal type (User or Group) - STRING
//    allow - Permission (true = allow, false = deny) - BOOLEAN
//Throws:
//    java.lang.Exception - If an error occurs
//
var params = {
    modelTags: undefined /* TAGS */,
    type: undefined /* STRING */
};
// result: INFOTABLE dataShape: EntityCount
var EntityTypeList = Subsystems["PlatformSubsystem"].GetEntityCount(params);
for each (var row in EntityTypeList.rows) {
    try {
        var params = {
            principal: "System" /* STRING */,
            allow: true /* BOOLEAN */,
            resource: "*" /* STRING */,
            type: "ServiceInvoke" /* STRING */,
            principalType: "User" /* STRING */,
            collectionName: row.name /* STRING */
        };
        // no return
        Resources["CollectionFunctions"].AddCollectionRunTimePermission(params);
    }
    catch(err) {
    }
}
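Following the suggestion above to extend the script, here is a hedged sketch of the same loop granting the System user Entity Create at design time. It assumes the CollectionFunctions resource on your platform version also exposes an AddCollectionDesignTimePermission service with a parameter set analogous to the run-time one, so verify the service name and signature in your Composer before using it:

// Hypothetical extension of the loop above: grant Create design-time permission
// on every collection to the System user (verify AddCollectionDesignTimePermission
// exists on your platform version and check its exact parameters).
for each (var row in EntityTypeList.rows) {
    try {
        var designTimeParams = {
            principal: "System" /* STRING */,
            allow: true /* BOOLEAN */,
            type: "Create" /* STRING */,
            principalType: "User" /* STRING */,
            collectionName: row.name /* STRING */
        };
        // no return
        Resources["CollectionFunctions"].AddCollectionDesignTimePermission(designTimeParams);
    }
    catch (err) {
        // ignore collections that do not support design-time permissions
    }
}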
View full tip
Dive back into the mashup builder and learn about advanced widgets and layout options.   For full-sized viewing, click on the YouTube link in the player controls.   Visit the Online Success Guide to access our Expert Session videos at any time as well as additional information about ThingWorx training and services.
View full tip
This video will provide you with a brief introduction to the New Composer interface, which has been made available in the 7.4 and later releases of the ThingWorx Platform.  For complete details on what functionality is available within this next generation composer interface, and to also see what lies ahead on our road map, please refer to the following post in the ThingWorx Community: NG Composer feature availability
View full tip
Best Practices in Data Preparation for ThingWorx Analytics

Data preparation is an important phase in the process of data analysis when using ThingWorx Analytics. Basically, it is taking your data from the raw form you might have gathered through your operational system or data warehouse to a form that is ready to be analyzed. In this document we will be using "Talend Data Preparation Free Desktop" as a tool to illustrate some examples of the data preparation process. The tool can be downloaded from the following link: https://www.talend.com/products/data-preparation (you could also choose to use another tool). We also use the BeanPro dataset in our examples and illustrations.

Checking data formats
The analysis starts with a raw data file, and the user needs to make sure the data files can be read. Raw data files come in many different shapes and sizes; for example, spreadsheet data is formatted differently from web data or sensor-collected data, and so forth. In ThingWorx Analytics the accepted data format is CSV, so the retrieved data needs to be put into that format before it can be uploaded (data example: BeanPro dataset). After that is done, the user needs to actually look at what each field contains; for example, a field listed as a character field could actually contain non-character data.

Verify data types
Verify the data types for each feature or field in the dataset used. All data falls into one of four categories that affect what sort of analytics can be applied to it:
- Nominal data is essentially just a name or an identifier.
- Ordinal data puts records into order from lowest to highest.
- Interval data represents values where the differences between them are comparable.
- Ratio data is like interval data except that it also allows for a value of 0.
It's important to understand which categories your data falls into before you feed it into ThingWorx Analytics. For example, when doing predictive analytics, ThingWorx Analytics will not accept a nominal data field as the goal; the goal feature data has to be of a numerical, non-nominal type, so this needs to be confirmed at an early stage.

Creating a Data Dictionary
A data dictionary is a metadata description of the features included in the dataset. When displayed, it consists of a table with 3 columns:
- The first column represents a label: that is, the name of a feature, or a combination of multiple (up to 3) features which are fields in the dataset. It points to "fieldname" in the configuration json file.
- The second column is the data type attached to the label (Integer, String, Datetime, ...). It points to "dataType" in the configuration json file.
- The third column is a description of the feature related to the label used in the first column. It points to "description" in the configuration json file.
In the context of ThingWorx Analytics, this metadata is represented by a data configuration "json" file that is uploaded before the dataset itself (a sample of the BeanPro dataset configuration file is shown in the original post).

Verify data accuracy
Once it is confirmed that the data is formatted in a way that ThingWorx Analytics accepts, the user still needs to make sure it is accurate and that it makes sense. This step requires some knowledge of the subject area the dataset relates to. There isn't really a cut-and-dried approach to verifying data accuracy.
The basic idea is to formulate some properties that you think the data should exhibit and test the data to see if those properties hold. Are stock prices always positive? Do all the product codes match the list of valid ones? Essentially, you're trying to figure out whether the data really is what you've been told it is.

Identifying outliers
Outliers are data points that are distant from the rest of the distribution: either very large or very small values compared with the rest of the dataset. Outliers are problematic because they can seriously compromise the training models that ThingWorx Analytics generates. A single outlier can have a huge impact on the value of the mean, and because the mean is supposed to represent the center of the data, in a sense this one outlier renders the mean useless. When faced with outliers, the most common strategy is to delete them. (The original post illustrates the effect of an outlier on the "AVG Technician Tenure" feature in the BeanPro dataset, comparing the dataset with and without the outlier.)

Deal with missing values
Missing values are one of the most common (and annoying) data problems you will encounter. In ThingWorx Analytics, dealing with null values is done by one of the methods below (a minimal sketch of average-based imputation appears at the end of this post):
- Dropping records with missing values from your dataset. The problem with this is that missing values are frequently not just random little data glitches, so this should be considered the last option.
- Replacing the null values with the average of the values from the other records of the same field.

Transforming the Dataset
- Selecting only certain columns or records to load that are relevant to the analysis (e.g. only records where salary is not null)
- Translating coded values (e.g., if the source system codes male as "1" and female as "2", but the warehouse codes male as "M" and female as "F")
- Deriving a new calculated value (e.g., sale_amount = qty * unit_price)
- Transposing or pivoting (turning multiple columns into multiple rows or vice versa)
- Splitting a column into multiple columns (e.g., converting a comma-separated list, specified as a string in one column, into individual values in different columns)

Please note: issues with Talend should be reported to the Talend team. Data preparation is outside the scope of PTC Technical Support, so please use this article as an advisory best-practices document.
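As referenced above, a minimal sketch of mean imputation for a single numeric field, written as plain JavaScript over an array of records; the field name avgTechnicianTenure is only an illustrative stand-in for whichever column has missing values in your own dataset:

// Replace null/undefined values in one numeric field with the mean of the non-missing values
function imputeWithMean(records, fieldName) {
    var present = records.filter(function (r) {
        return r[fieldName] !== null && r[fieldName] !== undefined;
    });
    if (present.length === 0) return records;   // nothing to base the mean on
    var mean = present.reduce(function (sum, r) { return sum + r[fieldName]; }, 0) / present.length;
    records.forEach(function (r) {
        if (r[fieldName] === null || r[fieldName] === undefined) {
            r[fieldName] = mean;
        }
    });
    return records;
}

// Example: the second record's missing tenure becomes the mean of 4 and 6, i.e. 5
var rows = [
    { assetId: "A1", avgTechnicianTenure: 4 },
    { assetId: "A2", avgTechnicianTenure: null },
    { assetId: "A3", avgTechnicianTenure: 6 }
];
imputeWithMean(rows, "avgTechnicianTenure");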
View full tip