
IoT & Connectivity Tips

The .NET SDK can be configured to emit very detailed debugging and diagnostic information to a log file during execution. The .NET SDK uses the standard .NET System.Diagnostics infrastructure for logging; as such, all configuration of the .NET SDK logger is done via the standard .NET logging configuration system. By default, logging is configured via the standard .NET "App.config" file. Log messages can be routed to any standard .NET TraceListener. Optionally, ThingWorx provides a FixedFieldTraceListener which can be used to output log messages to a file; use of the ThingWorx-provided FixedFieldTraceListener is recommended. When configured, the FixedFieldTraceListener automatically creates a "logs" directory in the same location as (a sibling to) the running executable file (.exe). This "logs" directory will contain the log files.

Every .NET class can be configured as a specific "Trace Source" which emits log messages. It is recommended to add at least the following Trace Sources to your App.config file to receive the most useful amount of information:

com.thingworx.communications.client.BaseClient
com.thingworx.communications.client.ConnectedThingClient
com.thingworx.communications.client.things.VirtualThing
com.thingworx.communications.client.TwApiWrapper
com.thingworx.communications.client.things.filetransfer.FileTransferVirtualThing
com.thingworx.communications.client.things.contentloader.ContentLoaderVirtualThing

The amount of information emitted can range from very low-level Trace messages (the Verbose setting) to nothing at all (the Off setting). The "SourceLevels Enumeration" can be used to control how much information is written out to the log file. For reference, this is the <add name="SourceSwitch" value="Information" /> element in the sample below. Below is a sample App.config file.
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.diagnostics>
    <sources>
      <source name="com.thingworx.common.utils.JSONUtilities" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch">
        <listeners>
          <add name="file" />
        </listeners>
      </source>
      <source name="com.thingworx.communications.client.TwApiWrapper" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch">
        <listeners>
          <add name="file" />
        </listeners>
      </source>
      <source name="com.thingworx.communications.client.BaseClient" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch">
        <listeners>
          <add name="file" />
        </listeners>
      </source>
      <source name="com.thingworx.communications.client.ConnectedThingClient" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch">
        <listeners>
          <add name="file" />
        </listeners>
      </source>
      <source name="com.thingworx.communications.client.things.contentloader.ContentLoaderVirtualThing" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch">
        <listeners>
          <add name="file" />
        </listeners>
      </source>
      <source name="com.thingworx.communications.client.things.filetransfer.FileTransferVirtualThing" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch">
        <listeners>
          <add name="file" />
        </listeners>
      </source>
      <source name="com.thingworx.communications.client.things.VirtualThing" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch">
        <listeners>
          <add name="file" />
        </listeners>
      </source>
      <source name="com.thingworx.metadata.annotations.MetadataAnnotationParser" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch">
        <listeners>
          <add name="file" />
        </listeners>
      </source>
    </sources>
    <switches>
      <add name="SourceSwitch" value="Information" />
    </switches>
    <sharedListeners>
      <add name="file" type="com.thingworx.common.logging.FixedFieldTraceListener, thingworx-dotnet-common, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" initializeData="false"/>
    </sharedListeners>
    <trace autoflush="true" indentsize="4" />
  </system.diagnostics>
</configuration>
View full tip
How to input database user credentials at runtime. This blog assumes that you have already imported the Database Extension and configured the Thing Template. If you have not done this already, please see Steps to connecting to your Relational Database first.

Steps:

1. Create a Database Thing Template with the correct configuration. Example configuration for a MySQL database:

jDBCDriverClass: com.mysql.jdbc.Driver
jDBCConnectionURL: jdbc:mysql://127.0.0.1:3306/<DatabaseNameHere>?allowMultiQueries=true
connectionValidationString: SELECT NOW()
maxConnections: 100
userName: <DataBaseUserNameHere>
password: <DataBasePasswordHere>

2. Create any Generic Thing and add a service to create a Thing based on the Thing Template created in Step 1. Example:

// NewDataBaseThingName is the String input naming the database Thing to be created.
// MySqlServerUpdatedConfiguration is the Thing Template with the correct configuration.
var params = {
    name: NewDataBaseThingName /* STRING */,
    description: NewDataBaseThingName /* STRING */,
    thingTemplateName: "MySqlServerUpdatedConfiguration" /* THINGTEMPLATENAME */,
    tags: undefined /* TAGS */
};
// no return
Resources["EntityServices"].CreateThing(params);

3. Add code to enable and then restart the above Thing using the EnableThing() and RestartThing() services. Example:

Things[NewDataBaseThingName].EnableThing();
Things[NewDataBaseThingName].RestartThing();

4. Test and confirm that the Database Thing services run as expected.

5. Now create a DataShape with the following fields:

jDBCDriverClass: STRING
jDBCConnectionURL: STRING
connectionValidationString: STRING
maxConnections: NUMBER
userName: STRING
password: PASSWORD

6. In the Generic Thing created in Step 2, add code to update the configuration settings of the Database Thing. Note that the JDBC driver class name should never be changed; if a different database connection is required, use a different Thing Template. Also add code to restart the Database Thing using the RestartThing() service. Example:

var datashapeParams = {
    infoTableName : "InfoTable",
    dataShapeName : "DatabaseConfigurationDS"
};
// CreateInfoTableFromDataShape(infoTableName:STRING("InfoTable"), dataShapeName:STRING):INFOTABLE(DatabaseConfigurationDS)
var config = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(datashapeParams);
var passwordParams = {
    data: "DataBasePasswordHere" /* STRING */
};
// DatabaseConfigurationDS entry object
var newEntry = new Object();
newEntry.jDBCDriverClass = "com.mysql.jdbc.Driver"; // STRING
newEntry.jDBCConnectionURL = "jdbc:mysql://127.0.0.1:3306/<DatabaseNameHere>?allowMultiQueries=true"; // STRING
newEntry.connectionValidationString = "SELECT NOW()"; // STRING
newEntry.maxConnections = 100; // NUMBER
newEntry.userName = "DataBaseUserNameHere"; // STRING
newEntry.password = Resources["EncryptionServices"].EncryptPropertyValue(passwordParams); // PASSWORD
config.AddRow(newEntry);
var configurationTableParams = {
    configurationTable: config /* INFOTABLE */,
    persistent: true /* BOOLEAN */,
    tableName: "ConnectionInfo" /* STRING */
};
// ThingNameForConfigurationUpdate is the String input for the Thing name whose configuration needs to be updated.
// no return
Things[ThingNameForConfigurationUpdate].SetConfigurationTable(configurationTableParams);
Things[ThingNameForConfigurationUpdate].RestartThing();

7. Test and confirm that the Database Thing services run as expected.
View full tip
You can control the Tracking Indicator that is used to mark the ThingMark position. The Tracking Indicator is a green hexagon; in the screenshot below, the red arrow points to it. You can control the display of this tracking indicator via the Display Tracking Indicator property of the ThingMark widget.

But you can also get fancier. Here is an example that shows the tracking indicator for 3 seconds when tracking has started and then hides it automatically. To achieve this behavior you'll have to use a bit of JavaScript. We'll first create a function hideIn3Sec() in the JavaScript section of our view and then add it to the JavaScript handler of the Tracking Acquired event of the ThingMark widget.

Step 1: Here is the code for copy/paste convenience:

$scope.hideIn3Sec = function() {
  // The $timeout function has two arguments: the function to execute (defined inline here)
  // and the time in msec after which the function is invoked.
  $timeout( function hide() {
    // you may have to change 'thingMark-1' to the id of the ThingMark in your own experience
    $scope.app.view['Home'].wdg['thingMark-1']['trackingIndicator'] = false;
  },
  3000);
}

Step 2: That's it. Have fun!
View full tip
The KEPServerEX ThingWorx Native Interface was originally released with KEPServerEX v5.21 to enable connectivity with v6.6 of the ThingWorx Platform. Compatibility with the ThingWorx NextGen Composer (v7.4) was implemented with the release of KEPServerEX v6.1, while still allowing support of older ThingWorx Composer versions through the Native Interface "Legacy Mode" setting.

When connecting with ThingWorx v7.4 and newer, an Industrial Gateway Thing Template is configured in the NextGen Composer. The instructions for connecting KEPServerEX v6.1 (and newer) with ThingWorx v7.4 (and newer) can be found in the ThingWorx Help Center here: KEPServerEX / ThingWorx Industrial Connectivity Connection Example (expand ThingWorx Model Definition and Composer > Industrial Connections > Industrial Connections Example).

When connecting with versions of ThingWorx that are older than v7.4, or if the older version of ThingWorx Composer will be used, it will be necessary to download and import the KEPServerEX Extension from the ThingWorx Marketplace. The "RemoteKEPServerEXThing" Thing Template will then be used to allow connection with KEPServerEX. Here is a link to the extension download: ThingWorx IoT Marketplace

To enable Legacy Mode in KEPServerEX (v6.1 or newer, when connecting with versions of ThingWorx older than v7.4):
1. Open the KEPServerEX Configuration window
2. In the top-left Project view pane, right-click on "Project" and select "Properties"
3. Select the ThingWorx Property Group
4. Change "Enable" to "Yes", and change "Legacy Mode" to "Enable"

See the chart below for a version compatibility summary:

KEPServerEX version | ThingWorx version
v5.21 | pre-7.4, using RemoteKEPServerEXThing
v6.0 | pre-7.4, using RemoteKEPServerEXThing
v6.1 or higher w/ Legacy Mode enabled | pre-7.4, using RemoteKEPServerEXThing
v6.1 or higher w/ Legacy Mode disabled (default) | v7.4 or higher, using Industrial Gateway
View full tip
ThingWorx DevOps with Azure: The Comprehensive DevOps Guide
Written by Tori Firewind, IoT EDC

As promised in a previous post, attached here is a comprehensive guide to DevOps in ThingWorx, including tutorials and instructions for creating a continuous integration, continuous deployment (CI/CD) process for application development. There are also updated scripts and entities attached, including an entire sample application for importing, exporting, and testing an application in ThingWorx. From Docker and GitHub to Azure DevOps and Solution Central, this guide has it all. Learn how to perform your role in the DevOps process, whether as an administrator or a developer, automate your deployments and testing, and create a more efficient process for publishing changes to production. A complete DevOps process like this really does facilitate faster and easier updates with fewer risks, fewer delays, and a better path to success.
View full tip
We live in a connected world where we can (and want to!) receive instant updates and notifications. ThingWorx leverages the power of Web 2.0 and its Always-On technology to deliver that, but our friendly SMS providers have also given us an easy and powerful way to deliver SMS notifications right to your phone: Email to Text!

Set up a 'notification' Thing using our MailServer Template, set up your outgoing e-mail server, and you are now ready to invoke the 'SendMessage' service on a given event. All you need now is the email address of your SMS number, which you can find by following this link: List of e-Mail to SMS addresses (http://sms411.net/how-to-send-email-to-a-phone/)
View full tip
Original Post Date:     June 6, 2016 Description: This is a video tutorial on creating a Value Stream, adding the Value Stream to a Remote Thing, adding, binding and querying the (remote) properties.      
View full tip
ThingWorx Monitoring and Alerting, Part 2
Using Prometheus and Grafana
By Tori Firewind, IoT EDC

Building Dashboards

To add a panel which monitors some component of the ThingWorx application to a dashboard in Grafana, click to add a new panel. Under "Metrics" in the box at the bottom of the screen, select which ThingWorx metrics you wish to monitor (type "thingworx" in the search box to see them all). For example, select the Platform Subsystem memory in use.

Label filters aren't necessary, though you may want to sort by instance if you are monitoring multiple instances with the same dashboard. You may also want to take some time to format the Y axis, which by default will show in bytes. Go to the formatting panel on the right side and scroll down to the section called "Standard options". For the Unit dropdown, start typing "data" and then select "bytes (SI)". This will automatically determine whether the bytes you've provided should really print as MB or GB based on how large the numbers are.

Rename the panel, modify it in any other way desired, and then click Apply (the panel defaults to showing the last 5 minutes).

Once you add the panel, you can watch the memory usage as it is scraped by selecting the refresh option (10s or 30s, whatever makes sense based on your scrape interval).

The viewing window is stored in the URL, so you can generate a report for a specific interval (like when a test was occurring) and then store that result or share it in a more compact way using absolute timestamps: http://localhost:3000/d/nleucPv4k/thingworx-monitoring?orgId=1&from=1668528038732&to=1668536503953

Dashboards are just collections of panels which report on all of the various metrics of performance and stability that exist for single components of a system. This is because there can be quite a few metrics worth watching for each individual component. Most of the third-party tools come with their own dashboards, but the ThingWorx component is one which, for now, requires some thought and creativity.

Consider your use case carefully and look over the various subsystems contained within ThingWorx. Each part of an application is localized to specific subsystems, and some are more business critical than others. What will go at the top of your dashboard? Add rows, add panels per row, and see what the many choices are for watching your system.

Don't forget that with Telegraf running, VM or machine usage metrics are also available for display on a dashboard. Things like overall CPU and memory usage are critical to determining the health of a system, as we have demonstrated in our own reasoning in past benchmarks and scale tests. You can create a panel to monitor mem_used versus mem_total.

Another metric from Telegraf worth adding is the CPU usage, which should be given "percentage" for the units and which needs a label filter of cpu = cpu-total. If we do some resizing and drag-and-dropping, then we now have the first row of a dashboard.

See how the Platform usage climbs steadily and is purged in a cycle? That is the Java Garbage Collection mechanism, and it's important to remember to leave room for spikes on top of those peaks. Data can also be calculated or processed in some way to make it more useful for determining system health and stability.

The data in the picture below uses the formula submitted = completed + number queued + number failed. It shows the current queue on the left Y-axis and the max queue on the right (since the two numbers usually are drastically different).
It looks pretty, but it doesn't really tell us much about the system in this format, so let's do some math and find a representation that is a bit more helpful.

Performing a "non-negative derivative" calculation over the submitted and the completed queue counts over time allows us to look at the status of the queue as a velocity. When the "completed" speed sits behind the "submitted" speed for too long at a time, that means the queue is filling up and will eventually result in data loss.

If we take this one step further and calculate the average of the submitted minus the completed over time, then we can actually predict approximately when the queue will fill up. This can then be displayed on a dashboard in Grafana, or used as the basis for an alert (a rough PromQL sketch of these queries appears at the end of this section).

What to Monitor

In addition to monitoring the system which ThingWorx runs upon, ThingWorx itself can easily be monitored down to the subsystem level by Prometheus thanks to the Metrics endpoint. Many applications have support built into the way they format the data for scraping, including the JVM (which exposes Prometheus-formatted metrics with the JMX Exporter) and the OS (which can use the Node Exporter or Telegraf for the same purpose). For these more generic components, there are popular community dashboards which can be downloaded and used in Grafana for data analysis and review.

For ThingWorx, there are different kinds of data to track: subsystem data and non-subsystem data, queue-based versus non-queue data. These different metrics can collectively characterize the overall health of the application, depending on the use case.

For instance, if this is a system with very many connected devices, one metric which may be important to track is the number of total devices defined on the Foundation server vs. the number of devices which are currently connected. If there are relays involved, then many devices suddenly going offline can mean a relay has failed. Another example is if the system sizing depends on an assumption that there will only ever be a fraction of the total number of devices connected at a time. Use cases like these can be monitored easily by keeping track of the total vs. the number of connected devices.

Other common indicators of a healthy ThingWorx application include the value stream and stream queues. These queues should fluctuate over time as the data is ingested and processed, but they should never be growing in size. If the stream queues are growing, then the data is writing to the queues faster than the queues can write to the database. Eventually, when the system runs out of resources to keep track of the queues, data will be lost. Having the stream information displayed in a chart can make it very easy to spot an upward trend in resource usage early on, which can catch a blockage or bottleneck that needs attention before it starts to affect the larger system in catastrophic ways later.

Memory usage information from the various subsystems is also worth tracking, as well as the event queue. These can indicate that the business logic is functioning with room to handle spikes, and that the server has enough memory to service all three dimensions of an IoT application: the ingestion, the business logic and thing-based alerting, and the user experience and UI.
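As a concrete illustration of the queue-velocity idea above, the Grafana queries behind such a panel might look roughly like the following PromQL. The metric names here are placeholders, not the actual ThingWorx metric names; check what your Metrics endpoint actually exposes (Prometheus's target page lists every scraped series) and substitute accordingly.

# Rate of entries submitted to the value stream queue (per second, averaged over 5m)
rate(thingworx_valuestream_queue_submitted_total[5m])

# Rate of entries the platform finishes writing to the database
rate(thingworx_valuestream_queue_completed_total[5m])

# If "completed" stays below "submitted" for long, the queue is falling behind;
# the difference approximates how fast the queue is growing
rate(thingworx_valuestream_queue_submitted_total[5m])
  - rate(thingworx_valuestream_queue_completed_total[5m])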
If file transfers are a key part of the use case, then the number of concurrent transfers, their average speed, and the size of the files can all be tracked and charted in Grafana by making use of the ThingWorx metrics which automatically show up there once you import the Prometheus data source.

A mature dashboard used for a production environment combines many such panels into a single view. For further reading about subsystem monitoring, check out the Help Center.

How to Alert

The alerting mechanism built into Prometheus is incredibly easy to configure, so it might be tempting to generate tons of alert rules. However, remember that the more noise a system makes, the harder it is for those monitoring that system to know when action is really required. Playbooks which document how to respond to alerts, who to contact, how to act, and all the information necessary to handle an alert should be created as an ongoing part of the DevOps process.

Alerts should fire with the right severity in the subject line, as well as all of the information about the issue that is currently known, presented in a concise way, so that whoever receives the alert starts thinking about the root cause sooner and recovers the system faster. Those who receive the alert should have the ability to facilitate its resolution, and know who is expected to react to any alerts which come in.

In the ThingWorx monitoring stack, Prometheus handles the alert rules and the generation of alerts, but alert filtering and delivery is managed in an external alerts manager.

Generally, you want your alerts to follow a curve. If the current queue size exceeds 50% of the maximum, perhaps that isn't a huge deal, if the application catches up quickly. How long are spikes in queue processing expected to last? Perhaps if the queue size is over half-full for 10 or 30 seconds, that means the queue is falling behind and not catching up; this might be a warning-level alert. When does this become an error? Let's say the queue exceeds 90% of the max queue size. This might warrant alerting the moment it hits the mark, since farther along the curve it will not take long before data gets lost.

As the severity of the situation increases, the threshold for alerting should increase as well. That way, when errors do alert, it is a sure thing that they require an immediate response. The alerts are then pushed into the "Alerts Manager" for delivery based on your management rules. The Alerts Manager may decide to withhold warnings altogether, or send them to a much smaller mailing list; whatever filtering helps to ensure the right people receive the right alerts, right when they need them.

In Conclusion, A Healthy Application...

Has stable memory usage that fluctuates predictably and doesn't grow over time. In a system experiencing mild issues, the memory starts to trend upward instead.

If left unattended, systems like this may eventually experience outages. Finding the issue this early means there is even time to do some digging, debugging, taking of stack traces, and other such troubleshooting steps before the system must be restarted or recovered. That can really make the difference in identifying and resolving issues before there are real problems.

One metric which makes for good alerting is the total number of failed stream entries, which can indicate there is an issue writing to the database even before the queue has started to fill up.
Other alerts may include warnings and errors based around percentages of memory used or queues filled, which depend on how long the queues take to fill up and how long the state has been at its increased usage.     Prometheus has all of the tools necessary to make this possible across a variety of infrastructures and use cases. Set it up on a local machine and poke around at what ThingWorx metrics are available to meet your monitoring needs.
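To make the warning/error curve described above concrete, a Prometheus alerting-rule sketch might look like the following. The metric names are placeholders (substitute whatever queue metrics your Metrics endpoint actually exposes), and the thresholds and "for:" durations should be tuned to how quickly your queues really fill in practice.

groups:
  - name: thingworx-queue-alerts
    rules:
      # Warning: queue more than half full for 30 seconds
      - alert: ValueStreamQueueFillingUp
        expr: thingworx_valuestream_queue_size / thingworx_valuestream_queue_max > 0.5
        for: 30s
        labels:
          severity: warning
        annotations:
          summary: "Value stream queue above 50% on {{ $labels.instance }}"
          description: "Queue has been more than half full for 30s; the platform may be falling behind on database writes."

      # Error: queue nearly full, fire immediately
      - alert: ValueStreamQueueNearCapacity
        expr: thingworx_valuestream_queue_size / thingworx_valuestream_queue_max > 0.9
        labels:
          severity: critical
        annotations:
          summary: "Value stream queue above 90% on {{ $labels.instance }}"
          description: "Data loss is likely once the queue is exhausted; investigate database throughput immediately."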
View full tip
This might be a well-known topic for some, but I recently had a need that Event Routers fit into perfectly and wanted to share. If you have some neat applications for Event Routers on mashups, feel free to reply!   What? Event Routers are a function on Mashups that let you connect multiple inputs to a single output. For my use case, this was extremely helpful to let me have two different Service Outputs go to the same Widget. They are a really simple tool that can save a lot of headache.   How? Event Routers work by funneling the latest data through to a single output. This is particularly useful for  user-activated actions with the output tied to a widget or another service. The Event Router automatically activates when any one of the Inputs changes.   Example I have two services that generate HTML from different sources, but I want to display just the latest one that the user had activated in a single HTML Text Area widget. The two different services are activated with two different buttons. But how do I show these two outputs in a single widget? Create an Event Router with two HTML inputs!         Now I just tie each service output to the Inputs and tie the Output to the HTML Text Area Text (note: the icon for Input2 is incorrect—it should be HTML as well; this system is running 8.5.1, perhaps it's an issue in that release).         Now when the user clicks on either button, the correct service’s HTML is sent to the HTML Text Area. Ta-da!   P.S. I noticed in some older posts that Event Routers used to be a widget or extension that came and went. Now (8.5+) it is baked into the Functions on the far right side of Mashup Builder.
View full tip
The following code snippet will retrieve a month's worth of data from the system and return it as a CSV document suitable for import into your spreadsheet or reporting tool of choice.

import static com.axeda.sdk.v2.dsl.Bridges.*
import com.axeda.drm.sdk.Context
import com.axeda.common.sdk.id.Identifier
import com.axeda.services.v2.*
import com.axeda.sdk.v2.exception.*

def ac = new AuditCriteria()
ac.fromDate = Date.parse('yyyy-MM-dd', '2017-03-01')
ac.toDate   = Date.parse('yyyy-MM-dd', '2017-03-31')

def retString = ''
tcount = 0
// page through the audit results until every record has been appended
while ( (results = auditBridge.find(ac)) != null && tcount < results.totalCount) {
  results.audits.each { res ->
    retString += "${res?.user?.id},${res?.asset?.serialNumber},${res?.category},${res.message},${res.date}\n"
    tcount++
  }
  ac.pageNumber = ac.pageNumber + 1
}

return retString
View full tip
Meet Neal. When Neal joined PTC five years ago, he immediately hit the ground running on IoT initiatives, working in multiple areas ranging from pre-sales to partner relations. Today, he is a Worldwide ThingWorx Center of Excellence Principal Lead at PTC, and his biggest focus is supporting the go-to-market for the Microsoft partnership.

I sat down with Neal recently to hear the details on exactly how Azure and ThingWorx can be used to develop world-class IIoT applications.

Kaya: Can you explain how Azure and ThingWorx work together?
Neal: Yes, Azure provides the cloud infrastructure that our customers need in order to deploy ThingWorx.

By having Azure as our preferred cloud platform, we're able to specialize our R&D efforts into utilizing functionality that is available in Azure, rather than having to reinvent the wheel ourselves for each cloud platform in an attempt to remain cloud-agnostic. By leveraging a single, already quite powerful cloud platform through Azure, we're able to maximize our development efforts.

Kaya: What was the major problem that led to us investigating cloud options?
Neal: There were two issues that our users had. The first was that we often had complicated installs and setup procedures because we were genericizing the whole process, so the initial setup and run was complicated and expensive. For example, we were requiring them to set up additional VMs and components, and we were giving them generic instructions because we were being very agnostic, when they had already chosen, outside of us, to use one of the cloud platforms. We knew our customers wanted to move fast, so we had to make it easier and faster for them to see results.

The other issue we ran into with customers was the confusion in the offerings. For ThingWorx, ingest is just one aspect of IoT. ThingWorx is particularly strong in areas like enabling mixed reality and augmented reality as well as application enablement. And, while we also have the ability to perform ingest capabilities, Microsoft is especially powerful when it comes to ingest capabilities and security. We decided that the smartest solution was to leverage Microsoft's expertise in data ingestion to make ThingWorx even more powerful; so, we made the Azure IoT Hub Connector. By partnering with Microsoft, we have a joint architecture where you can see how Microsoft provides key features and ThingWorx runs on top of those features to get you to market faster when developing the application.

Below is an example of a high-availability deployment of ThingWorx on Azure that utilizes ThingWorx Azure IoT Connectors to access an Azure IoT Hub.

High-Availability Deployment of ThingWorx on Azure

Kaya: Why did we partner with Azure? What specific benefits does Azure offer over other cloud services providers?
Neal: When we started to look at what our customers were using for cloud services, we noticed that a lot were using Microsoft. When we join forces with Microsoft, we have a much more holistic offering around digital transformation. With the partnership, PTC and Microsoft are able to leverage each partner's respective strengths to provide even more powerful IIoT solutions. You have Windchill and Microsoft's business application strategies; you have Vuforia and Microsoft's mixed reality and augmented reality strategies; and you have ThingWorx on the Microsoft Azure cloud. Overall, you have a much more complete and powerful offering together.

Kaya: What is your favorite aspect about working at PTC?
Neal: The growth.
There’s been a lot of changes over the last five years that I’ve been here. PTC has been able to leverage its strengths and long-time experience in the CAD and PLM markets to enter and ultimately become a leader in the IIoT market, according to reports by research firms like Gartner and Forrester. In short, the growing IIoT market and PTC’s leadership in the industry.   Note to Readers: You’ve likely heard about our major strategic partnership with Microsoft to leverage our respective IIoT and cloud technologies to optimize the creation, deployment, management and overall use of your IIoT applications. If you haven’t heard about the partnership, check out the press release here. If you’re curious about more aspects of PTC’s partnership with Microsoft, check out this site devoted to sharing how ThingWorx and Azure are better together.
View full tip
This video goes through the steps required to use the Creo Insight extension:
- Download and install the required extension
- Set the required config.pro options
- Create a provider in Analytics Manager
- Publish a sensor from Creo
- Create an analysis Event in Analytics Manager
- Retrieve sensor values from ThingWorx in Creo

See also:
- https://www.ptc.com/en/support/article?n=CS277514 for a written version of these steps
- Creo Help Center
View full tip
Use the Collection Widget to organize visual elements of your application.

GUIDE CONCEPT

This project will introduce the Collection Widget.

Following the steps in this guide, you will learn to display different values from a single dataset in real-time.

We will teach you how to utilize the Collection Widget to generate a series of repeated Mashups for every row of data in a table.

YOU'LL LEARN HOW TO

Create a Datashape to define columns of a table
Create a Thing with an Infotable Property
Create a base Mashup to display data
Utilize a Collection Widget to display data from multiple rows of a table

NOTE: This guide's content aligns with ThingWorx 9.3. The estimated time to complete ALL 3 parts of this guide is 60 minutes.

Step 1: Create a Datashape

The Collection Widget uses a Service to dynamically define visual content.

The data must be in a tabular format (for example: Data Table, Info Table, or external Database connection) in order for the Collection Widget to access it.

In this part of the lesson, we'll create a Data Shape to define the columns of the table.

In a subsequent step, we'll create an Info Table Property within a Thing in order to store the information.

1. In the ThingWorx Composer Browse tab, click Modeling > Data Shapes, + New.
2. In the Name field, enter cwht_datashape.
3. If Project is not already set, search for and select PTCDefaultProject.
4. At the top, click Field Definitions.

First Definition:

1. Click + Add to open the New Field Definition slide-out.
2. In the Name field, enter first_number.
3. Change the Base Type to NUMBER.
4. Check the Is Primary Key box. All Datashapes must have a single Primary Key, and the first Field is as acceptable as any other for our purposes here.
5. At the top-right, click the "Check with a +" button for Done and Add.

Second Definition:

1. In the Name field, enter second_number.
2. Change the Base Type to NUMBER.
3. At the top-right, click the "Check with a +" button for Done and Add.

Third Definition:

1. In the Name field, enter third_number.
2. Change the Base Type to NUMBER.
3. At the top-right, click the "Check" button for Done.
4. At the top, click Save.

Step 2: Create a Thing

In the previous step, we created a Data Shape to define the Info Table Property columns.

Now, we will create a Thing, add an Info Table Property, format the Info Table Property with the Data Shape, and set some default values.

1. On the ThingWorx Composer Browse tab, click Modeling > Things, + New.
2. In the Name field, enter cwht_thing.
3. If Project is not already set, search for and select PTCDefaultProject.
4. Select GenericThing in the Base Thing Template field.

Add InfoTable Property

1. At the top, click Properties and Alerts.
2. Click the + Add button to open the New Property slide-out.
3. In the Name field, enter infotable_property.
4. Change the Base Type to INFOTABLE.
5. In the Data Shape field, select cwht_datashape.
6. Check the Persistent checkbox.

First Default

1. Check the Has Default Value checkbox. A cwht_datashape icon will appear underneath.
2. Under Has Default Value, click the cwht_datashape button to open the pop-up menu which sets the default values.
3. Click the + Add icon.
4. Enter values in each number field, such as 1, 2, and 3.
5. At the bottom-right, click the green Add button.

Second Default

1. Click the + Add icon.
2. Enter values in each number field, such as 10, 20, and 30.
3. At the bottom-right, click the green Add button.

Third Default

1. Click the + Add icon.
2. Enter values in each number field, such as 100, 200, and 300.
3. At the bottom-right, click the green Add button.
4. At the bottom-right, click Save to close the pop-up menu.
5. At the top-right, click the "Check" button for Done.
6. At the top, click Save.

Click here to view Part 2 of this guide.
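As an optional aside: the same three default rows can also be built from a ThingWorx service instead of the Composer pop-up. The sketch below is only a rough equivalent, reusing the InfoTable services shown elsewhere in this thread and the cwht_datashape / infotable_property names created above; it assumes it runs as a service on cwht_thing.

// Rough sketch: rebuild the default rows of infotable_property in code.
// Assumes this runs as a service on cwht_thing, with cwht_datashape already created.
var table = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape({
    infoTableName: "InfoTable",
    dataShapeName: "cwht_datashape"
});

// The same three rows entered through the Composer pop-up above
table.AddRow({ first_number: 1,   second_number: 2,   third_number: 3   });
table.AddRow({ first_number: 10,  second_number: 20,  third_number: 30  });
table.AddRow({ first_number: 100, second_number: 200, third_number: 300 });

// Write the rows into the Thing's InfoTable property
me.infotable_property = table;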
View full tip
This expert session provides an overview of the patch and upgrade process for the ThingWorx platform. It explains how to perform a patch upgrade for the platform as well as for extensions, and when an in-place upgrade is applicable. It can be used as a quick reference when upgrading your system.

For full-sized viewing, click on the YouTube link in the player controls.

Visit the Online Success Guide to access our Expert Session videos at any time, as well as additional information about ThingWorx training and services.
View full tip
Video Author:                     Christophe Morfin Original Post Date:            September 13, 2016 Applicable Releases:        ThingWorx Analytics 52.2 to 8.0   Description: In this video we cover the installation of the UploadThing Module.   Useful Links: How to copy files from Windows to Linux  
View full tip
About

This is part of a ThingBerry-related blog post series. ThingBerry is ThingWorx installed on a Raspberry Pi, which can be used for portable demonstrations without the need to utilize e.g. customer networks. Instead, the ThingBerry provides its own custom WIFI hotspot and allows Things to connect and send / receive demo data on a small scale.

In this particular blog post we'll discuss how to connect an ESP8266 module to the ThingBerry WIFI hotspot and send data from a DHT-11 sensor via the MQTT protocol.

As the ThingBerry is a highly unsupported environment for ThingWorx, please see this blog post for all related warnings.

Install MQTT broker on the ThingBerry

To install mosquitto as a MQTT broker, log in to the ThingBerry and run

sudo apt-get install mosquitto

This will provide a basic broker installation, which is good enough for this example. MQTT clients (including ThingWorx) will connect to this broker to exchange messages. There will be no added security like encrypted traffic shown in this example; it is however good practice to secure MQTT broker / client connections.

While the ESP8266 module is publishing information, ThingWorx will subscribe to the corresponding topics to update its internal property values with what is sent by the ESP8266 module.

For more information on MQTT, how to configure it for ThingWorx, or more security-relevant information, also see

https://community.thingworx.com/message/5063#5063
https://community.thingworx.com/community/developers/blog/2016/08/08/securing-mqtt-connection-to-thingworx-platform?sr=tcontent

Configure the ESP8266

There are already many instructions on the web on how to initially set up the ESP8266 and use it with the Arduino IDE. I'll therefore just refer to Google, which covers the topic more extensively than I ever could.

All coding in this example is done in the Arduino IDE and is pushed to the ESP8266 (NodeMCU) via USB. For this you might need to install a CH340g USB driver for the NodeMCU.

In the Arduino IDE under Tools, I have set my environment to

Board: NodeMCU 1.0 (ESP-12E Module)
CPU Frequency: 80 MHz
Flash Size: 4M (3M SPIFFS)
Upload Speed: 115200
Port: COM3

Under Sketch > Include Library > Manage Libraries, add / install the following libraries:

DHT sensor library by Adafruit
Adafruit Unified Sensor by Adafruit
PubSubClient by Nick O'Leary

These bring in the libraries necessary to read data from the DHT-11 sensor and to configure the ESP8266 as an MQTT client.

Wiring the DHT-11 sensor

The original post includes an image showing the PINs on the ESP8266.

I'm using a DHT-11 sensor with cables included and already fixed to a board with 3 PINs. In case you're using a different version, there might be additional components and wiring required, like a resistor etc. Google might help here as well.

Ensure that neither board nor sensor are plugged in, and the ESP8266 is powered off.

To hook the sensor up to the ESP8266, join

( - ) to GND
( + ) to 3.3V
(out) to D3

After all the connections are made, connect the ESP8266 via USB to a computer / laptop with the Arduino IDE configured.

Coding

In the Arduino IDE use the following code, adjusting the WIFI settings and the MQTT broker configuration. Ensure to rename the ESP_xx name / topic to something more meaningful, e.g. a specific device name (or just leave it as is if in doubt).

Use the ssid and wpa_passphrase from the hostapd.conf used to configure the ThingBerry as WIFI hotspot.
Copy & paste the code below into the Arduino IDE, verify it and upload it to the ESP8266.

While searching for a WIFI connection, the device's blue LED will blink. A successful connection to the broker and publishing the values will result in a static blue LED. In case the LED is off, the connection to the broker is lost or messages cannot be published.

For troubleshooting, use the Serial Monitor function (at 115200 baud) in the Arduino IDE. In case sensor data cannot be read but the wiring is correct and the code addresses the correct PIN, verify the sensor is indeed working. It took me a long time to figure out that the first sensor I used was a defective device.

The current configuration sends updates every 10 seconds; longer intervals might make more sense, but can trigger a timeout for the MQTT broker. In this case the program will re-connect automatically and log corresponding messages in the Serial Monitor. This might seem like an error, but is indeed intended behavior of the code and the MQTT broker.

Configure MQTT Thing in ThingWorx

Create a new Thing in ThingWorx based on the MQTT Template. Add two properties:

temperature
humidity

Both are set to persistent and logged, with Data Change Type set to ALWAYS. Also configure a Value Stream to log a history of values.

In the configuration, add two more subscriptions. Activate the "subscribe" checkbox and map name (local property) to topic (MQTT topic), e.g.

name = temperature; topic = ESP_xx/temp
name = humidity; topic = ESP_xx/hum

Ensure the correct servernames, ports etc. are configured (an empty servername will use the localhost).

Save the configuration. Property values should now be updated from the MQTT broker, depending on what the device is sending.

Code

#include "DHT.h"
#include "PubSubClient.h"
#include "ESP8266WiFi.h"

/*
 * Configure parameters for sensor and network / MQTT connections
 */

// setup DHT 11 pin and sensor
#define DHTPin D3
#define DHTTYPE DHT11

// setup WiFi credentials
#define WLAN_SSID "mySSID"
#define WLAN_PASS "WIFIpassword"

// setup MQTT
#define MQTTBROKER "mqttbrokerhostname"
#define MQTTPORT 1883

// setup built-in blue LED
#define LED 2

/*
 * ============================================================
 * DO NOT CHANGE ANYTHING BELOW
 * (unless you know what you're doing)
 */

// initiate DHT
DHT dht(DHTPin, DHTTYPE);

// initiate MQTT client
WiFiClient wifiClient;
PubSubClient client(MQTTBROKER, MQTTPORT, wifiClient);

/*
 * setup
 */
void setup() {
  // switch off internal LED
  pinMode(LED, OUTPUT);
  digitalWrite(LED, HIGH);

  // start serial monitor
  Serial.begin(115200);

  // start DHT
  dht.begin();

  // start WiFi
  WiFi.begin(WLAN_SSID, WLAN_PASS);
}

/*
 * the loop
 */
void loop() {
  // while not connected to WiFi, print "."
  // after connection exit the loop
  // blink LED while having no WiFi signal
  boolean wifiReconnect = false;
  while (WiFi.status() != WL_CONNECTED) {
    digitalWrite(LED, LOW);
    delay(200);
    Serial.print(".");
    digitalWrite(LED, HIGH);
    delay(300);
    wifiReconnect = true;
  }

  // if WiFi has reconnected, print new connection information and turn on LED
  if (wifiReconnect == true) {
    // print connection information and local IP address, mac address
    Serial.println();
    Serial.println("WiFi connected");
    Serial.println(WiFi.localIP());
    Serial.println(WiFi.macAddress());
    Serial.println();

    // turn on built-in LED to indicate successful WiFi connection
    digitalWrite(LED, LOW);
  }

  // if MQTT client is not connected, connect again
  // turn on built-in LED to indicate a successful connection
  if (!client.connected()) {
    Serial.println("Disconnected from MQTT server... trying to connect");
    if (client.connect("ESP_xx")) {
      Serial.println("Connected to MQTT server");
      Serial.println("Topic = ESP_xx");
      digitalWrite(LED, LOW);
    } else {
      Serial.println("MQTT connection failed");
      digitalWrite(LED, HIGH);
    }
    Serial.println();
  }

  // read temperature and humidity from sensor
  float t = dht.readTemperature();
  float h = dht.readHumidity();

  if (isnan(t) || isnan(h)) {
    // if temperature or humidity is not a number, print error
    Serial.println("Failed retrieving data from DHT sensor");
  } else {
    // print temperature and humidity
    Serial.print(t);
    Serial.print("° - ");
    Serial.print(h);
    Serial.print("%");
    Serial.println();

    // only send values to MQTT broker, if client is connected
    if (client.connected()) {
      // boolean to check for errors during payload transfer
      bool isError = false;

      // create payload and publish values via MQTT client
      // use buffer to convert float to char*
      char buffer[10];

      dtostrf(t, 0, 0, buffer);
      if (client.publish("ESP_xx/temp", buffer)) {
        Serial.print(" published /temp ");
      } else {
        Serial.print(" failed /temp ");
        isError = true;
      }

      dtostrf(h, 0, 0, buffer);
      if (client.publish("ESP_xx/hum", buffer)) {
        Serial.print(" published /hum ");
      } else {
        Serial.print(" failed /hum ");
        isError = true;
      }

      Serial.println();

      // on error, turn off LED
      if (isError == true) {
        digitalWrite(LED, HIGH);
      } else {
        digitalWrite(LED, LOW);
      }
    }
  }

  // sleep for 10 seconds
  // if sleep > default mosquitto timeout : a reconnect is forced for each update-cycle
  delay(10000);
}
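Before (or instead of) wiring up the ThingWorx subscriptions, it can help to confirm that the broker is actually receiving the ESP8266's messages. Assuming the mosquitto command-line clients are installed on the ThingBerry (on Debian/Raspbian they ship in the separate mosquitto-clients package), a quick check might look like this:

# Install the command-line MQTT clients (package name assumed for Debian/Raspbian)
sudo apt-get install mosquitto-clients

# Subscribe to everything the ESP8266 publishes under its topic prefix
# -h: broker host (localhost when run on the ThingBerry), -t: topic filter, -v: print topic with payload
mosquitto_sub -h localhost -t "ESP_xx/#" -v

If temperature and humidity values scroll by every 10 seconds, the device side is working and any remaining issue is in the ThingWorx MQTT Thing configuration.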
View full tip
This project was developed out of curiosity about how ThingWorx communicates with sensors and vice versa. A Smart Parking system idea immediately struck us, and I started working on it. While heading from home to the office I always worry about finding a car parking space at the office, especially in the rainy season. This project helps users find an empty parking space.

This project has 4 sections, as follows:

1) Smart Parking system: An application developed in ThingWorx guides the user to an empty car parking space. Sensors placed at each parking slot sense the presence of a car. A program running on a Raspberry Pi board collects the sensor information and sends it to the Smart Car Parking System application in ThingWorx. The data received through the sensors is displayed on a ThingWorx dashboard/mashup.

2) Live Traffic: This embeds a Google Map and shows the traffic around the user's current location.

3) Traffic Blog: If users are visiting a place and have questions regarding parking, traffic conditions etc., they can post their questions here and people around that area can answer them. Questions are not restricted to parking; they can also cover the best places to visit in an area, restaurants, shops etc.

4) Automobile Wiki: This page provides documented help on anything related to automobiles, e.g. how to change car tyres, how to change car wipers, etc.
View full tip
April 22, 2025

Hello ThingWorx community members! We are excited to announce the preview release of ThingWorx 10.0, the latest evolution of our IIoT platform for all your Industrial Data Management needs. This release focuses on delivering a powerful, secure, and intelligent foundation for industrial innovation, empowering businesses to achieve more with their IoT solutions.

Powerful & Secure: A New Standard in IoT Platforms

ThingWorx 10.0 sets a new benchmark for scalability and security in IoT with features like IoT Streams to enhance enterprise industrial data access and reliability, caching improvements to increase server scale and response times, and security updates for TLS, Tomcat, Java, and others to ensure top-tier performance and protection. These advancements make ThingWorx 10.0 the most mature and secure platform yet, giving businesses the confidence to scale their IoT deployments while safeguarding their data.

Industrial Solutions: Ready to Drive Performance

ThingWorx 10.0 enhances our industrial solutions, advancing connected worker, manufacturing efficiency, and quality use cases. Windchill Navigate View Work Instructions, a powerful, feature-rich app, launches with ThingWorx 10.0. Built on ThingWorx and integrated with Windchill PLM, this task-based solution delivers real-time work instructions, enhancing enterprise collaboration and boosting worker productivity with seamless, intuitive guidance. Alongside Connected Work Cell (CWC), both applications strengthen the connected worker experience in manufacturing with real-time instructions and data. Additionally, enhancements to Real-Time Production Performance Monitoring (RTPPM) and Digital Performance Management (DPM) improve manufacturing performance by optimizing workflows, enhancing service quality, and providing operators with clear, data-driven insights.

Data Insights: Unlocking Intelligence from Edge to Cloud

ThingWorx 10.0 empowers businesses to harness data-driven intelligence like never before. With advanced analytics and integration with third-party generative AI tools, the platform enables seamless management of industrial data from edge to cloud. Unlock actionable insights and make smarter decisions to stay ahead in a competitive landscape.

Get started today

The preview release of ThingWorx 10.0 is now available for evaluation. Discover how a powerful, secure, and intelligent IoT platform can transform your industrial operations. Please reach out to your account reps or customer success team to get preview access for this release. Alternatively, drop a comment on this post or submit a Tech Support ticket, and we'll get in touch with you to discuss onboarding to the ThingWorx 10.0 Private Preview Program.

Lastly, watch this space as we roll out additional details about the ThingWorx 10.0 release and other announcements!

Cheers!
Ayush Tiwari
Director Product Management
ThingWorx, a PTC Technology
View full tip
The System user is pivotal in securing your application, and the simplest approach is to assign the System user to ALL Collections and give it Runtime Service Execute. These collection permissions only export to ThingworxStorage (not via the file export), so it becomes quite painful to manage them and then roll them out to a new machine. Best and fastest solution? Script the assignment. You can take this script, which does it for the System user, and extend it to include any other collection-level permissions you might need to set, like adding Entity Create Design Time for the System user.

---------------------------------------------------------

//@ThingworxExtensionApiMethod(since={6,6})
//public void AddCollectionRunTimePermission(java.lang.String collectionName,
//       java.lang.String type,
//       java.lang.String resource,
//       java.lang.String principal,
//       java.lang.String principalType,
//       java.lang.Boolean allow)
//    throws java.lang.Exception
//
//Service Category:
//    Permissions
//
//Service Description:
//    Add a run time permission.
//
//Parameters:
//    collectionName - Collection name (Things, Users, ThingShapes, etc.) - STRING
//    type - Permission type (PropertyRead PropertyWrite ServiceInvoke EventInvoke EventSubscribe) - STRING
//    resource - Resource name (* = all or enter a specific resource to override) - STRING
//    principal - Principal name (name of user or group) - STRING
//    principalType - Principal type (User or Group) - STRING
//    allow - Permission (true = allow, false = deny) - BOOLEAN
//Throws:
//    java.lang.Exception - If an error occurs

var params = {
    modelTags: undefined /* TAGS */,
    type: undefined /* STRING */
};
// result: INFOTABLE dataShape: EntityCount
var EntityTypeList = Subsystems["PlatformSubsystem"].GetEntityCount(params);

// loop over every collection type and grant the System user ServiceInvoke at runtime
for each (var row in EntityTypeList.rows) {
    try {
        var params = {
            principal: "System" /* STRING */,
            allow: true /* BOOLEAN */,
            resource: "*" /* STRING */,
            type: "ServiceInvoke" /* STRING */,
            principalType: "User" /* STRING */,
            collectionName: row.name /* STRING */
        };
        // no return
        Resources["CollectionFunctions"].AddCollectionRunTimePermission(params);
    }
    catch(err) {
    }
}
View full tip
One of the recurring patterns on the Axeda Platform is making requests from custom objects to other services, to be called either via Scripto or through Expression Rules that help integrate Axeda data with your custom systems or third parties such as Salesforce.com. Java developers would normally use a URLConnection to do this, but due to security requirements, access to the URLConnection API is sandboxed, and the HTTPBuilder API is provided instead.

Below is a short example of GETting a payload from http://www.mocky.io/v2/57d02c05100000c201208cb5 into your custom object. One of the requirements of many services is being able to pass in API keys as part of the request headers. While in this example the API key is embedded in the code, the recommended way of storing API keys on the Axeda Platform is to use the External Credential lockbox API. This allows you to change the API keys securely without needing to change code.

import groovyx.net.http.HTTPBuilder
import static groovyx.net.http.ContentType.*
import static groovyx.net.http.Method.*

def http = new HTTPBuilder('https://www.mocky.io')
http.request( GET, JSON ) {
    uri.path = '/v2/57d02c05100000c201208cb5'
    uri.headers.'appKey' = '7661392f-2372-4cba-a921-f1263c938090'
    response.success = { resp ->
        println "GET response status: ${resp.statusLine}"
        logger.info "GET RESPONSE status: ${resp.statusLine}"
        assert resp.statusLine.statusCode == 201
    }
}

An example for Salesforce might look like so:

import groovyx.net.http.HTTPBuilder
import static groovyx.net.http.ContentType.*
import static groovyx.net.http.Method.*

def xml_body = """<?xml version="1.0" encoding="utf-8" ?>
<env:Envelope xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:env="http://schemas.xmlsoap.org/soap/envelope/">
  <env:Body>
    <n1:login xmlns:n1="urn:partner.soap.sforce.com">
      <n1:username>johndoe@example.com</n1:username>
      <n1:password>Password+SECRETKEY</n1:password>
    </n1:login>
  </env:Body>
</env:Envelope>
"""

def http = new HTTPBuilder('https://login.salesforce.com/')
http.request( POST ) {
    uri.path = '/services/Soap/u/35.0'
    body = xml_body
    response.success = { resp ->
        println "POST response status: ${resp.statusLine}"
        logger.info "POST RESPONSE status: ${resp.statusLine}"
        assert resp.statusLine.statusCode == 201
    }
}

This request will give you a security token you can use in future calls to Salesforce APIs; you would use Groovy's native XmlSlurper/XmlParser to parse the response and get the session id to use in future requests. You would then use this session id as in the following example to get the available REST resources:

import groovyx.net.http.HTTPBuilder
import static groovyx.net.http.ContentType.*
import static groovyx.net.http.Method.*

def http = new HTTPBuilder('https://na1.salesforce.com/')
http.request( POST ) {
    uri.path = '/services/data/v29.0'
    uri.headers.'Authorization' = 'Bearer SESSIONID'
    response.success = { resp ->
        println "POST response status: ${resp.statusLine}"
        logger.info "POST RESPONSE status: ${resp.statusLine}"
        assert resp.statusLine.statusCode == 201
    }
}

Further reading:
HttpBuilder Wiki - https://github.com/jgritman/httpbuilder/wiki
Groovy Xml Processing - http://groovy-lang.org/processing-xml.html
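The post mentions using XmlSlurper to pull the session id out of the SOAP login response but does not show it. A minimal Groovy sketch might look like the following; it assumes responseXml holds the raw XML text returned by the login call above, and that the response follows the usual Salesforce Envelope/Body/loginResponse/result/sessionId structure.

// Hedged sketch: extract the sessionId from a Salesforce SOAP login response.
// 'responseXml' is an assumed variable holding the XML text from the login call.
def envelope = new XmlSlurper().parseText(responseXml)

// Search by local element name so the SOAP namespaces don't get in the way
def sessionId = envelope.'**'.find { it.name() == 'sessionId' }?.text()

if (sessionId) {
    logger.info "Got Salesforce session id: ${sessionId}"
} else {
    logger.info "No sessionId found in login response"
}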
View full tip
Best Practices in Data Preparation for ThingWorx Analytics

Data preparation is an important phase in the process of data analysis when using ThingWorx Analytics. Essentially, it is the work of turning the raw data you might have gathered from your operational system or data warehouse into data that is ready to be analyzed.

In this document we will be using "Talend Data Preparation Free Desktop" as a tool to illustrate some examples of the data preparation process. This tool can be downloaded from the following link: https://www.talend.com/products/data-preparation (you could also choose to use another tool). We will also use the BeanPro dataset in our examples and illustrations.

Checking data formats

The analysis starts with a raw data file. The user needs to make sure that the data files can be read. Raw data files come in many different shapes and sizes. For example, spreadsheet data is formatted differently than web data or sensor-collected data, and so forth. In ThingWorx Analytics the accepted data format is CSV, so the data retrieved needs to be converted into that format before it can be uploaded to TWA.

Data example (BeanPro dataset used)

After that is done, the user needs to actually look at what each field contains. For example, a field that is listed as a character field could actually contain non-character data.

Verify data types

Verify the data types for each feature or field in the dataset used. All data falls into one of four categories that affect what sort of analytics can be applied to it:

- Nominal data is essentially just a name or an identifier.
- Ordinal data puts records into order from lowest to highest.
- Interval data represents values where the differences between them are comparable.
- Ratio data is like interval data except that it also allows for a value of 0.

It's important to understand which categories your data falls into before you feed it into ThingWorx Analytics. For example, when doing predictive analytics, TWA will not accept a nominal data field as the Goal; the Goal feature data has to be of a numerical, non-nominal type, so this needs to be confirmed at an early stage.

Creating a Data Dictionary

A data dictionary is a metadata description of the features included in the dataset. When displayed, it consists of a table with 3 columns:

- The first column represents a label: that is, the name of a feature, or a combination of multiple (up to 3) features which are fields in the dataset used. It points to "fieldname" in the configuration json file.
- The second column is the datatype value attached to the label (Integer, String, Datetime, etc.). It points to "dataType" in the configuration json file.
- The third column is a description of the feature related to the label used in the first column. It points to "description" in the configuration json file.

In the context of TWA this metadata is represented by a data configuration "json" file that is uploaded before even uploading the dataset itself. (A sample of the BeanPro dataset configuration file is shown in the original post; a generic sketch of the format appears at the end of this article.)

Verify data accuracy

Once it is confirmed that the data is formatted in a way that is acceptable to TWA, the user still needs to make sure it's accurate and that it makes sense. This step requires some knowledge of the subject area that the dataset is related to. There isn't really a cut-and-dried approach to verifying data accuracy. The basic idea is to formulate some properties that you think the data should exhibit and test the data to see if those properties hold. Are stock prices always positive? Do all the product codes match the list of valid ones? Essentially, you're trying to figure out whether the data really is what you've been told it is.

Identifying outliers

Outliers are data points that are distant from the rest of the distribution. They are either very large or very small values compared with the rest of the dataset. Outliers are problematic because they can seriously compromise the training models that TWA generates. A single outlier can have a huge impact on the value of the mean. Because the mean is supposed to represent the center of the data, in a sense this one outlier renders the mean useless. When faced with outliers, the most common strategy is to delete them.

Example of the effect of an outlier in the feature "AVG Technician Tenure" in the BeanPro dataset (the original post shows the dataset with and without the outlier).

Deal with missing values

Missing values are one of the most common (and annoying) data problems you will encounter. In TWA, dealing with null values is done by one of the methods below:

- Dropping records with missing values from your dataset. The problem with this is that missing values are frequently not just random little data glitches, so this should be considered a last option.
- Replacing the null values with the average of the values from the other records in the same field.

Transforming the Dataset

- Selecting only certain columns to load, or ignoring records where a value is not present (e.g. salary = null).
- Translating coded values (e.g., if the source system codes male as "1" and female as "2", but the warehouse codes male as "M" and female as "F").
- Deriving a new calculated value (e.g., sale_amount = qty * unit_price).
- Transposing or pivoting (turning multiple columns into multiple rows or vice versa).
- Splitting a column into multiple columns (e.g., converting a comma-separated list, specified as a string in one column, into individual values in different columns).

Please note: Issues with Talend should be reported to the Talend team. Data preparation is outside the scope of PTC Technical Support, so please use this article as an advisory best-practices document.
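For readers without access to the original screenshots, here is a rough, illustrative sketch of what such a configuration JSON might look like. It is only an assumption built from the three keys named in the Data Dictionary section ("fieldname", "dataType", "description"); the exact schema, wrapper structure, and datatype names should be checked against the ThingWorx Analytics documentation for your version, and every field name other than "AVG Technician Tenure" (which appears in this article) is hypothetical.

[
  { "fieldname": "AVG Technician Tenure", "dataType": "DOUBLE", "description": "Average tenure of the technicians at the site" },
  { "fieldname": "Region",                "dataType": "STRING", "description": "Nominal identifier for the site's region" },
  { "fieldname": "YieldPercent",          "dataType": "DOUBLE", "description": "Goal feature: numeric, non-nominal" }
]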
View full tip