IoT Tips

The system user can become a vital point for properly yet conveniently securing your application. From the ThingWorx Help Center: The system user is a system object in ThingWorx. With the System User, if a service is called from within a service or subscription (a wrapped service call), and the System User is permitted to execute the service, the service will be permitted to execute regardless of the User who initially triggered the sequence of events, scripts, or services. http://support.ptc.com/cs/help/thingworx_hc/thingworx_7.0_hc/index.jspx?id=SystemUser&action=show

A few important notes to remember:
- It is not possible to log in as the system user.
- Adding the system user to the Administrators group will not grant it administrator permissions.
- Adding the system user to the Everyone organization will not grant it the same visibility.

As an option, one of the posts on our community provides a script to assign all of the permissions to the system user in a one-time setup: https://community.thingworx.com/community/developers/blog/2016/10/28/assigning-the-system-user-through-script

Example:
1. Create a new template T1 and several things: Thing1, Thing2, Thing3.
2. Create a new thing NewThing and a new user BlankUser.
3. Create a service within NewThing that uses ThingTemplates["T1"].GetImplementingThings() and give all the permissions on it to the new non-admin user, BlankUser.

Now the service on the template T1 can be accessed through NewThing without explicit permissions for BlankUser, but rather through the system user.

When manipulating data (involving read/write access to the persistence provider), BlankUser would require more than just visibility permissions. For example, for a Stream, the following permissions would need to be set up:
1. Visibility on the Stream template, StreamProcessingSubsystem, and PersistenceProvider.
2. Read/write permission on the Stream thing in the use case, created with the Stream template.

Similarly, for other sources of data, the things, templates, and resources involved need visibility and, depending on the scenario, read/write permissions on the specific template.
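To illustrate the wrapped call, here is a minimal sketch of the service from step 3 above. The iteration and the GetStatus call are assumptions added for illustration; GetImplementingThings() itself is the service named in the example.

// Service on NewThing, executed by BlankUser.
// The call into ThingTemplates["T1"] is a wrapped service call: it runs under
// the System User, so BlankUser needs no explicit runtime permission on T1.
var implementingThings = ThingTemplates["T1"].GetImplementingThings();

// Iterate the implementing things and call a service on each one
// (GetStatus is a hypothetical service, used only for illustration).
for each (row in implementingThings.rows) {
    var status = Things[row.name].GetStatus();
    logger.info(row.name + " status: " + status);
}

// Return the infotable of implementing things to the caller
var result = implementingThings;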
View full tip
Create a new Thing using the Scheduler Thing Template. The Scheduler Thing will fire a ScheduledEvent Event when the configured schedule fires. The event is automatically present and does not need to be added manually.

Configuration

The Scheduler configuration is quite straightforward and allows for an exact setup of the schedule based on units of time, e.g. seconds, minutes, hours, days of week, etc. It can be accessed via the Thing's Entity Configuration.

Configuration allows for:
- Changing the runAsUser context, in which the Events will be handled. The user will need visibility and permissions on, e.g., executing Services or dependent Things required to run the Service triggered by the Event.
- Changing the Schedule, i.e. at which times the Events will be fired (by default every minute). The schedule is displayed in CRON String notation and can be changed and viewed in detail by clicking on "More". The CRON String will be generated automatically based on the inputs.
- Schedules can be configured in Manual mode, allowing for full configuration of each and every time-based attribute.
- Schedules can be configured for a specific time Type, allowing for configuration based only on seconds, minutes, hours, days, weeks, months, or years.

The screenshots below show schedules running every minute and every Saturday / Sunday at 12:00 ("Every Weekend Day").

Services

Scheduler Things inherit two Services by default from the Thing Template:
- DisableScheduler
- EnableScheduler

These will activate / de-activate the Scheduler and allow / disallow firing Events once a scheduled time is reached. Whether a Scheduler is currently enabled or disabled can be seen in its properties.
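As a minimal sketch, a Subscription to the ScheduledEvent Event could look like the following. The target Thing "MyDataCollector" and its CollectReadings service are assumptions for illustration; eventTime is a standard subscription input.

// Subscription on a Scheduler Thing, listening to the ScheduledEvent Event.
// This body runs every time the configured CRON schedule fires.
logger.info("Schedule fired at " + eventTime);

// Typical pattern: keep the subscription thin and delegate the real work
// to a service on another Thing (hypothetical names below).
Things["MyDataCollector"].CollectReadings();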
View full tip
A license is not required to download and install KEPServerEX V6. After finishing the full install, all features of the KEPServerEX Configuration are available for setup and testing, including the ThingWorx Native Interface. When either a Client application makes a request of a Driver, or a Plug-in becomes activated, a license check is performed. If a feature is not licensed, a two-hour demo countdown period will begin for that feature. There are System tags that can be monitored in KEPServerEX to determine which features are licensed, which are in a time-limited demo countdown period, and which have expired. To reset the countdown timer for an unlicensed feature, the KEPServerEX Runtime service can be stopped/restarted. See the links and steps below for more information.

How do I download and install KEPServerEX?

Monitor licensed, time-limited, and expired features in KEPServerEX:
1. Open the KEPServerEX Configuration window.
2. The right-most icon in the Configuration icon toolbar is the white "QC" icon. Select this icon to launch a new OPC Quick Client window. The server project should build automatically.
3. In the left pane, locate the _System folder.
4. Within this folder, locate the following three Tags, with the corresponding information displayed in the Value column:

Tag (Item ID) | Information displayed as a String in the Value column
_System._ExpiredFeatures | Any unlicensed driver or plug-in that has timed out
_System._LicensedFeatures | Drivers or plug-ins that are currently licensed for use with this install
_System._TimeLimitedFeatures | Any unlicensed driver or plug-in that is in demo mode, with the remaining demo timer countdown (in seconds)

Note: The ThingWorx Native Interface does not require a KEPServerEX license.

Restart the KEPServerEX Runtime service

The KEPServerEX Runtime service (server_runtime.exe) runs in the background as a system service. Resetting this service will reset the demo countdown for any unlicensed feature. To reset the Runtime service:
1. Right-click on the Administration tool (the green EX icon in the Systray by the System Clock).
2. Select "Stop Runtime Service".
3. Either the user will be prompted to restart this service, or the same Administration tool menu can be used again to "Start Runtime Service".
View full tip
This is the simplest structure for looping through an infotable. It avoids having to use row indexes and cleans up the code for readability as well.

// Assume an incoming Infotable parameter named "thingList"
for each (row in thingList.rows) {
    // Each row is already assigned to the row variable in the loop
    var thingName = row.name;
}

You can also nest these loops (just use a different variable from "row"). It is also important not to add or remove row entries of the Infotable inside the loop; in that case you may end up skipping or repeating rows, since the indexes will be changed.
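As a sketch of nesting, the outer loop below iterates the incoming things, and an inner loop iterates an infotable fetched for each row. GetAlertSummary and its output fields are assumptions for illustration; substitute any service on your things that returns an infotable.

// Outer loop over the incoming infotable, inner loop over a per-thing infotable
for each (thingRow in thingList.rows) {
    // Hypothetical service call returning an infotable for this thing
    var alerts = Things[thingRow.name].GetAlertSummary();
    for each (alertRow in alerts.rows) {
        logger.info(thingRow.name + ": " + alertRow.name);
    }
}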
View full tip
This document is designed to help troubleshoot some commonly seen issues while installing or upgrading the ThingWorx application, prior to (or instead of) contacting Tech Support. This is not a defined template for a guaranteed solution, but rather a reference guide that provides an opportunity to eliminate some of the possible root causes. While following the installation guide and matching the system requirements is sufficient to get a successfully running instance of ThingWorx, some issues could still occur upon launching the app for the first time. Generally, those issues arise from minor environmental details and can be easily fixed by aligning with the proper installation process. Currently, the majority of the installation hiccups come from the PostgreSQL side. That being said, the very first thing to note, whether you are a new user trying out the platform or a returning one switching the database to PostgreSQL: the PostgreSQL database must be installed, configured, and running prior to the further ThingWorx installation.

ThingWorx 7.0+: Installation errors out with 'failed to succeed more than the maximum number of allowed acquisition attempts', 'Platform being shut down because System Ownership cannot be acquired' error, or ERROR: relation "system_version" does not exist

Resolution: Generally, this type of error points to a security/permission issue. As all of the installation operations should be performed by a root/Administrator role, the following points should be verified:
- Ensure both the Tomcat and ThingworxPlatform folders have the relevant read/write permissions.
- The title and contents of the configuration file in the ThingworxPlatform folder changed from 6.x to 7.x; check that the right configuration file is in the folder.
- Verify that the name and password provided in this configuration file match the ones set in the Postgres DB.
- Run the database cleanup script, and then set up the database again. Verify by checking the thingworx table space (about 53 tables should be created).

ThingWorx Application: Blank screen, no errors in the logs, "waiting for <url>" gears spinning but never actually loading, eventually timing out.

Resolution: Ensure that Java in Tomcat is pointing to the right path; it should be something like: C:\Program Files\Java\jre1.8.0_101\bin\server\jvm.dll

6.5+ Postgres:

1. Error when executing thingworxpostgresDBSetup.bat: psql:./thingworx-database-setup.sql:1: ERROR: could not set permissions on directory "D:/ThingworxPostgresqlStorage": Permission denied

Resolution: The error means that the postgres user was not able to create a directory in the 'ThingworxPostgresqlStorage' directory. As it is related to security/permissions, the following steps can be taken to clear out the error: assign read/write permissions to the Everyone user group to fix the script execution, and then execute the batch file. Right-click on the 'ThingworxPostgresqlStorage' directory -> Share with -> Specific people. Select the drop-down, add the Everyone group, and update the permission level to Read/Write. Click Share. Then execute the batch file as admin.

2. Installation error message "relation root_entity_collection does not exist" is displayed with the PostgreSQL version of the ThingWorx platform.

Resolution: Such an error message is displayed only if the schema parameter passed to the thingworxPostgresSchemaSetup.sh script is different from $USER or PUBLIC. To clear out the error, edit the PostgreSQL configuration file, postgresql.conf, and add your own schema to the SEARCH_PATH item.
Other common errors upon launching the application: two of the most commonly seen errors are 404 and 401. While there can be numerous reasons for those errors, here are the root causes that fall under the "very likely" category:

404 Application not found during a new install:
- Ensure Thingworx.war was deployed: check the hard drive directory of Tomcat/webapps and ensure Thingworx.war and the Thingworx folder are present, as well as ThingworxStorage in the root (or custom selected location).
- Ensure the Thingworx.war is not corrupted (you may re-download it from the support site and compare the size).

401 Application cannot be accessed during a new install or upgrade:
- For PostgreSQL, ensure the database is running and can be connected to; also see the Basic Troubleshooting points below.
- Verify that the Tomcat, Java, and database (in the case of PostgreSQL) versions match the system requirements guide for the appropriate platform version.
- Ensure the upgrade was performed according to the guide and the necessary folders were removed (after copying them as a preventative measure).
- Ensure the correct port is specified in platform-settings.json (for PostgreSQL); by default the connection string is jdbc:postgresql://localhost:5432/thingworx (see the platform-settings.json sketch at the end of this post).

Again, it should be kept in mind that while the symptoms are common and can generally be resolved with the same solution, every system environment is unique and may require an individual approach for a guaranteed resolution.

Basic troubleshooting points for:
- Validating the PostgreSQL installation
- Postgres install troubleshooting
- java.lang.NullPointerException error during PostgreSQL installation
- ***CRITICAL ERROR: permission denied for relation root_entity_collection
- Error while running scripts: Could not set permissions on directory "/ThingworxPostgresqlStorage": Permission Denied
- Acquisition Attempt Failed error

Resolution:
- Ensure the 'ThingworxStorage', 'ThingworxPlatform' and 'ThingworxPostgresqlStorage' folders are created. The folders have to be present in the root directory unless specifically changed in any configurations.
- It is recommended to grant sufficient privileges (if not all) to the database user (twadmin). Note: While running the script in order to create a database, if a schema name other than 'public' is used, the "search_path" in "postgresql.conf" must be changed to reflect 'NewSchemaName, public'.
- Grant the user permission to access the root folders containing 'ThingworxPostgresqlStorage' and 'ThingworxPlatform'.
- The password set for the default 'twadmin' user in the pgAdmin III tool must match the password set in the configuration file under the ThingworxPlatform folder.
- Ensure the THINGWORX_PLATFORM_SETTINGS variable is set up.

Error:
psql:./thingworx-database-setup.sql:14: ERROR: could not create directory "pg_tblspc/16419/PG_9.4_201409291/16420": No such file or directory
psql:./thingworx-database-setup.sql:16: ERROR: database "thingworx" does not exist

Resolution: Replace /ThingworxPostgresqlStorage in the .bat file with C:\ThingworxPostgresqlStorage and omit the -l option in the command window. Also note the following article: Troubleshooting Syntax Error when running postgresql set up scripts
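For reference, a minimal sketch of the PostgreSQL persistence section of platform-settings.json is shown below. The key names shown are typical for PostgreSQL installs but should be verified against the installation guide for your platform version; the username and password values are placeholders.

{
  "PersistenceProviderPackageConfigs": {
    "PostgresPersistenceProviderPackage": {
      "ConnectionInformation": {
        "jdbcUrl": "jdbc:postgresql://localhost:5432/thingworx",
        "username": "twadmin",
        "password": "<twadmin password>"
      }
    }
  }
}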
View full tip
Developing Great IoT Solutions Brought to you once again by your EDC team, find attached here a brand-new, comprehensive overview of ThingWorx best practices! This guide was crafted by combining all available feedback, from support cases to PTC Community threads, and by tapping all internal resources. Let this guide serve to bridge the knowledge gaps ThingWorx developers most commonly see.

The Developing Great IoT Solutions (DGIS) Guide is a great way to inform both business and technically minded folks about the capabilities of the ThingWorx Platform. Learn how to design good solutions from a high level, in an overview designed specifically with the business audience in mind. Or, learn how to implement good IoT designs through a series of technical examples. Start from very little knowledge of the Platform and end up understanding data structures and aggregation, how to use the collection widget, and how to build a fully functional rules engine for sending and acknowledging alerts in ThingWorx.

For the more advanced among us, check out the Appendix. Find there a handy list of do's and don'ts surrounding ThingWorx best practice in development, with links to KCS, Help Center, and Community content.

Reinforce your understanding of the capabilities of the ThingWorx Platform with this guide, today!

A big thanks to all who were involved on this project! Happy developing!
View full tip
How to score new data with ThingWorx Analytics?

The following is valid starting with ThingWorx Analytics (TWA) 8.3.0.

Overview

Once a training model has been created, one of the main objectives is to score new data to predict the value of the goal. ThingWorx Analytics can score new data in two ways:
- Batch scoring
- Real-time scoring

Batch scoring

Batch scoring is used when a large amount of data needs to be scored. To perform a batch scoring we usually follow steps similar to the ones below:
1. Upload the historic data.
2. Create a new model with this historic data.
3. Upload the new data, i.e. the data to be scored.
4. Perform a prediction job to score the new data.
5. Retrieve the prediction job result.

Uploading the new data can be done in different ways. For a large amount of data, it can be easier to upload it via a csv file, in a similar way to the historic data. This is the way used in ThingWorx Analytics Builder. If the amount of data is more limited, it can be sent in the body of the scoring request. The post Analytics: Prediction Methods Mashup shows a good example of how to do this using the PredictionThing.BatchScore service. We focus below on ThingWorx Analytics Builder, that is, uploading new data via a csv file.

In order to perform the scoring job only on the new data in step 4 above, we need to be able to filter the added data. If the dataset already has a suitable column/feature, such as a timestamp, we can use it to score only new data where timestamp > newdate, assuming all data are in chronological order. If the dataset has no such feature, we will have to add one beforehand, when we first upload the historic data in step 1 above. We often use a new column/feature named record_purpose to this effect. The initial data can take a value of "training" for this record_purpose feature, since it is used to create the initial model. New data to be scored can then get any value that identifies those rows only. It is important to note that this record_purpose feature needs to be set with the optType INFORMATIONAL so as to not be taken into account by the learning algorithms.

The video below shows those steps while using ThingWorx Analytics Builder.

Real-time scoring

Real-time scoring is better suited for small amounts of data. It can be done either via the Analytics Server PredictionThing RealTimeScore service or using the Analytics Manager framework. The posts How to work with ordinal and categorical data in ThingWorx Analytics and Analytics: Prediction Methods Mashup give examples of the use of the RealTimeScore service.

We concentrate below on the Analytics Manager. The process involves the following steps:

In Analytics Manager:
1. Create an Analysis Provider that uses the AnalyticsServerConnector connector.
2. Publish the model created in ThingWorx Analytics Builder to Analytics Manager.
3. Enable the published model.
4. Create an Analysis Event.
5. Map the properties to the datashape fields.
6. Enable the Event.

In ThingWorx Composer:
1. Relevant properties of the Thing used in the Analysis Event are updated in some way.
2. This triggers the analysis job to be executed.
3. The scoring result is populated into the result property mapped in the Analysis Event.

The Help Center has more detail about this process, and the following video shows those steps. These articles can also be of interest for this topic: How to use ThingPredictor in release 8.3 of ThingWorx Analytics Server?
Publish model from Analytics Builder into Analytics Manager using TW.AnalysisServices.AnalyticsServer.AnalyticsServerConnector Creating Template For Thing, And Configure Analysis Event For Real-Time Scoring via Analytics Manager Note that the AnalyticsServerConnector connector in release 8.3 replaces the ThingPredictor connector from previous releases.
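As a rough sketch only, a real-time scoring call from a ThingWorx service might look like the following. The Thing name, parameter names, and model URI format are all assumptions for illustration; check the RealTimeScore service definition on your Analytics Server for the exact signature.

// Hypothetical call to the Analytics Server PredictionThing (names assumed)
var scores = Things["AnalyticsServer_PredictionThing"].RealTimeScore({
    modelUri: "results:/models/" + predictionJobId, // trained model URI (assumed format)
    data: rowsToScore // INFOTABLE of new records matching the dataset's field layout
});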
View full tip
Connectors allow clients to establish a connection to Tomcat via the HTTP / HTTPS protocol. Tomcat allows for configuring multiple connectors so that users or devices can connect either via HTTP or HTTPS.

Usually users like you and me access websites by just typing the URL in the browser's address bar, e.g. "www.google.com". By default browsers assume that the connection should be established with the HTTP protocol. For HTTPS connections, the protocol has to be specified explicitly, e.g. "https://www.google.com"

However, Google automatically forwards HTTP connections as HTTPS connections, so that all connections use certificates over a secure channel, and you will end up on "https://www.google.com" anyway.

To configure ThingWorx to only allow secure connections there are two options.

1) Remove HTTP access

If HTTP access is removed, users can no longer connect on port 80 or 8080. ThingWorx will only be accessible on port 443 (or 8443).

If connecting to port 8080, clients will not be redirected. The redirectPort in the Connector only forwards requests internally in Tomcat; it does not switch protocols and ports and does not require a certificate when being used. The redirect port does not reflect in the client's connection but only manages internal port-forwarding in Tomcat.

By removing the HTTP ports for access, any connection on port 80 (or 8080) will end up in an error message that the client cannot connect on this port.

To remove the HTTP ports, edit <Tomcat>\conf\server.xml and comment out sections like

<!-- commented out to disallow connections on port 80
<Connector port="80" protocol="org.apache.coyote.http11.Http11NioProtocol" connectionTimeout="20000" redirectPort="443" />
-->

Save and restart Tomcat. If opening Tomcat (and ThingWorx) in a browser via http://myServer/ the connection will fail with a "This site can't be reached", "ERR_CONNECTION_REFUSED" error.

2) Forcing insecure connections through secure ports

Alternatively, ports 80 and 8080 can be kept open to still allow users and devices to connect. But instead of only internally forwarding the port, Tomcat can be set up to also forward the client to the new secure port. With this, users and devices can still use e.g. old bookmarks and do not have to explicitly set the HTTPS protocol in the address.

To configure this, edit <Tomcat>\conf\web.xml and add the following section just before the closing </web-app> tag at the end of the file:

<security-constraint>
    <web-resource-collection>
        <web-resource-name>HTTPSOnly</web-resource-name>
        <url-pattern>/*</url-pattern>
    </web-resource-collection>
    <user-data-constraint>
        <transport-guarantee>CONFIDENTIAL</transport-guarantee>
    </user-data-constraint>
</security-constraint>

In <Tomcat>\conf\server.xml, ensure that all HTTP Connectors (ports 80 and 8080) have their redirect port set to the secure HTTPS Connector (usually port 443 or port 8443).

Save and restart Tomcat. If opening Tomcat (and ThingWorx) in a browser via http://myServer/ the connection will now be forwarded to the secure port. The browser will now show the connection as https://myServer/ instead, and connections are secure and using certificates.

What next?

Configuring Tomcat to force insecure connections onto actually secure HTTPS connections just requires a simple configuration change.
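Both options above assume an HTTPS Connector already exists in <Tomcat>\conf\server.xml. As a minimal sketch, it could look like the following; the keystore file and password are placeholders, and attribute details vary by Tomcat version, so consult the Tomcat SSL/TLS documentation for your install.

<Connector port="443" protocol="org.apache.coyote.http11.Http11NioProtocol"
           SSLEnabled="true" scheme="https" secure="true"
           keystoreFile="conf/thingworx.jks" keystorePass="<keystore password>"
           sslProtocol="TLS" />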
If you want to read more about certificates, encryption and how to setup ThingWorx for HTTPS in the first place, be sure to also have a look at   Trust & Encryption - Theory Trust & Encryption - Hands On
View full tip
The KEPServerEX installer download is available from the My Kepware web portal: https://my.kepware.com/mykepware/ It will be necessary to create a new My Kepware account if you have not already done so. When you log in to your My Kepware account, you will be at the My Kepware Home page. In the left column, you will see a link to download the latest release. You can access all of the features of KEPServerEX with this installation, including the ThingWorx Native Interface. For more information on the installation of KEPServerEX, see this guide: https://www.kepware.com/en-us/products/kepserverex/documents/installation-guide.pdf
View full tip
Business logic and actions in a ThingWorx application are driven by events. Events are interesting or critical property states that a Thing publishes to subscribers. Events are defined at the thing, thing template, or thing shape level, and can be as simple as a new data value from a device, or complex events built from many data points. An event that is created at the thing shape or thing template level is inherited down into the entities that implement the shape or template that defines the event.

Types of Events:
- Built-in
- Custom

Built-in Events

Every entity in ThingWorx can use several built-in events. These events are automatically triggered when a prerequisite condition is met. In some cases, you must provide information that determines the specifics of that prerequisite.

Common built-in events include:
- Data Change
- Alert
- Timer

In ThingWorx, there are standard events and related data packets (defined by Data Shapes). The most common type of event is a Data Change related to a Thing property. The Data Change fires when a property value changes. The specifics of this event can be configured from the Data Change Info section of the Property Creation page. When you define a property, there are many configuration aspects.

Example: Using DataChangeEvent, you have a few options for setting an event to fire when there is new data for a property:
- only if the data has changed
- only if the data evaluates true or false
- only if the new value changed beyond a defined threshold

The Alert event is also attached to a property and is set up from the Property Definition page. An Alert event fires when the value of the property reaches a certain threshold or value. Depending on the base type, you may specify the condition of this event from the Manage Alerts button.

Timer events can be used to run jobs or fire events on a regular basis. Things themselves can fire specific events such as Thing Start, and "special" things, like a Timer type thing, contain special event(s).

How to Set up and Configure Timers

Creating a Custom Event

To create a new custom event:
1. In ThingWorx Composer, open for edit the Thing, Thing Template, or Thing Shape where the new Event will be added.
2. From the sidebar menu under Entity Information, select Events.
3. Click Add My Event.
4. Provide the event with a name and a data shape that describes the data being sent to a subscriber.
5. Click Done and Save.

Once created, the event appears on the Subscriptions page for subscribing. Since the event is custom built, you must specify when it triggers. This is often accomplished from within a service (see the sketch at the end of this post).

Best Practices

There should be a subscriber to the event in the model. The subscriber is sent a data packet and the subscription is initiated. If no one is subscribed to the event (no one is listening), nothing happens.
DataChange events hit the event processing subsystem (see Event Processing Subsystem below), which has fewer threads allocated to it than some other subsystems:
- If queries within subscriptions to data change events take too long, this will cause a back-up in the event processing subsystem.
- Too much activity in this subsystem may result in poor system performance, and even server crashes.
- Be cautious what types of calculations are performed in data change event subscriptions.
- Refer to the following knowledge base article regarding tricky data change event use cases, the implementation of rules for alerts, and conditional operations: Complicated Event Calculations in ThingWorx
- Load Test all applications in Sandbox before committing changes to Production to ensure mashups can handle all design choices on large scales.

Event Processing Subsystem

The Event Processing Subsystem manages event processing for external subscriptions (Things subscribing to other Things) throughout ThingWorx.

Event Queue Processing Settings:

Setting | Base Type | Default | Notes
Min Threads Allocated to Event Processing Pool | NUMBER | 16 |
Max Threads Allocated to Event Processing Pool | NUMBER | 500 |
Max Queue Entries Before Adding New Working Thread | NUMBER | 200000 | Maximum number of entries to queue before adding a thread to the pool

Additional Information
- PTC ThingWorx Help Center - Thing Events
- Event Types
- Creating and Triggering Custom Events in ThingWorx
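As mentioned in the Creating a Custom Event section above, a custom event is typically fired from a service. Below is a minimal sketch, assuming a custom event named HighVibration whose Data Shape has reading and threshold fields; all names here are assumptions for illustration.

// Build an infotable matching the event's Data Shape
var eventData = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape({
    infoTableName: "InfoTable",
    dataShapeName: "HighVibrationEventShape" // hypothetical Data Shape
});
eventData.AddRow({
    reading: me.vibration, // hypothetical property on this Thing
    threshold: 0.8
});

// Fire the custom event; subscribers receive the eventData packet
me.HighVibration(eventData);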
View full tip
Hello everyone,

Following a recent experience, I felt it was important to share my insights with you. The core of this article is to demonstrate how you can format a Flux request in ThingWorx and post it to InfluxDB, with the aim of delegating performance-intensive calculations to InfluxDB. The context of the following example is renewable energy. This article is not about Kepware, nor about connecting to InfluxDB. As a prerequisite, you may like to read this article: Using Influx to store Value Stream properties from... - PTC Community

Introduction

The following InfluxDB usage has been developed for an electricity energy provider.

Technical Context
- Kepware is used as a source of data. A simulation for Wind assets based on an Excel file is configured, delivering data in real time.
- A SQL database also gathers the same data as the simulation in Kepware. It is used to load historical data into InfluxDB, addressing cases of temporary data loss. Once back online, SQL helps to record the lost data in InfluxDB and compute the KPIs.
- InfluxDB is used to store data over time as well as calculated KPIs.
- An invoicing third-party system is simulated to get the electricity price according to the time of day.

Orchestration of InfluxDB operations with ThingWorx (ThingWorx v9.4.4):
- Set the numeric property to log
- Maintain control over execution logic
- Format the Flux request with dynamic inputs to send to InfluxDB

InfluxDB Cloud v2:
- Store the logged property
- Enable quick data reads
- Execute calculations

Note: the free InfluxDB version is slower in write and read, and has only 30 days of data retention max.

ThingWorx model and services

ThingWorx context: Because relevant numeric properties are logged over time, new KPIs are calculated based on the logged data. In the following example, each Wind asset triggers a calculation each minute to get the monetary gain based on the current power produced and the current electricity price. The request is formatted in ThingWorx, then pushed to and executed in InfluxDB. Thus, ThingWorx server memory is not used for this calculation.

Services breakdown

CalculateMonetaryKPIs: Entry point service to calculate monetary KPIs. It uses the two following services: it triggers the FormatFlux service, then injects the result into the Post service. Inputs: none. Output: NOTHING

FormatFlux_CalculateMonetaryKPI: Formats the request for the monetary KPI calculation, respecting the Flux syntax used by InfluxDB. Inputs: bucketName (STRING), thingName (STRING). Output: TEXT

PostTextToInflux: Generic service to post the request to InfluxDB, whatever the request is. Inputs: FluxQuery (TEXT), influxToken (STRING), influxUrl (STRING), influxOrgName (STRING), influxBucket (STRING), thingName (STRING). Output: INFOTABLE

Highlights - CalculateMonetaryKPIs: Find the full script in the attachment "CalculateMonetaryKPIs script.docx". The URL, token, organization and bucket are configured in the Persistence Provider used by the ValueStream. We dynamically get them from the ValueStream attached to this thing. From here, we can reuse them to set the inputs of the two other services using "MyConfig".

Highlights - FormatFlux_CalculateMonetaryKPI: Find the full script in the attachment "FormatFlux_CalculateMonetaryKPI script.docx". The major part of this script is a text, in Flux syntax, into which we inject dynamic values. The service gets the last values of ElectricityPrice, Power and Capacity to calculate ImmediateMonetaryGain, PotentialMaxMonetaryGain and PotentialMonetaryLoss.
Flux logic might not be easy for beginners, so let's break down the intermediate variables created on the fly in the Flux request. Let's take the example of the existing data in the bucket (with only two sets of values):

_time | _measurement | _field | _value
2024-07-03T14:00:00Z | WindAsset1 | ElectricityPrice | 0.12
2024-07-03T14:00:00Z | WindAsset1 | Power | 100
2024-07-03T14:00:00Z | WindAsset1 | Capacity | 150
2024-07-03T15:00:00Z | WindAsset1 | ElectricityPrice | 0.15
2024-07-03T15:00:00Z | WindAsset1 | Power | 120
2024-07-03T15:00:00Z | WindAsset1 | Capacity | 160

The request articulates with the following steps:

1. Get the source values.
Get the last price, store it in priceData:
_time | ElectricityPrice
2024-07-03T15:00:00Z | 0.15
Get the last power, store it in powerData:
_time | Power
2024-07-03T15:00:00Z | 120
Get the last capacity, store it in capacityData:
_time | Capacity
2024-07-03T15:00:00Z | 160

2. Join the three *Data tables on the same time. The last values of price, power and capacity may not be set at the same time, so the final joinedData may be empty:
_time | ElectricityPrice | Power | Capacity
2024-07-03T15:00:00Z | 0.15 | 120 | 160

3. Perform the calculations.
gainData stores the result of ElectricityPrice * Power:
_time | _measurement | _field | _value
2024-07-03T15:00:00Z | WindAsset1 | ImmediateMonetaryGain | 18
maxGainData stores the result of ElectricityPrice * Capacity. lossData stores the result of ElectricityPrice * (Capacity - Power).

4. Add the results to the original bucket.

Highlights - PostTextToInflux: Find the full script in the attachment "PostTextToInflux script.docx". It is a pretty straightforward script; the idea is to have a generic service to post a request. The header is quite original with the vnd.flux content type, and the URL needs to be formatted according to the InfluxDB API (see the sketch at the end of this post).

Well done!

Thanks to these steps, calculated values are stored in InfluxDB. Other services can be created to retrieve relevant InfluxDB data and visualize it in a mashup.

Last comment

It was the first time I was in touch with Flux script, so I wasn't comfortable, and I am still far from proficient. After spending more than a week browsing through the InfluxDB documentation and running multiple tests, I achieved limited success but nothing substantial as a final outcome. As a last resort, I turned to ChatGPT. Through a few interactions, I quickly obtained convincing results. Within a day, I had a satisfactory outcome, which I fine-tuned for relevant use.

Here are two examples of two consecutive ChatGPT prompts and answers; the output might need to be fine-tuned after the first answer. Right after, I asked it to convert the result to a ThingWorx script format.

In this last example, the script won't work: the fluxQuery is not well formatted for ThingWorx. Please refer to the provided script "FormatFlux_CalculateMonetaryKPI script.docx" to see how to format the Flux query and insert variables inside. Despite mistakes, ChatGPT still mainly provides relevant code structure for beginners in Flux and is an undeniable boost for writing code.
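As a rough sketch of the generic posting service described above (not the attached script itself), the HTTP call could be made with the ContentLoaderFunctions resource. The endpoint follows the InfluxDB v2 query API; the parameter names of PostText should be checked against your ThingWorx version.

// Hedged sketch of PostTextToInflux: POST a Flux query to the InfluxDB v2 API
var result = Resources["ContentLoaderFunctions"].PostText({
    url: influxUrl + "/api/v2/query?org=" + encodeURIComponent(influxOrgName),
    content: FluxQuery, // the Flux request built by the formatting service
    contentType: "application/vnd.flux",
    headers: { "Authorization": "Token " + influxToken }
});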
View full tip
Predictive models:

Predictive modeling is one of the primary techniques of predictive analytics: the development of models that are trained on historical data and make predictions on new data. These models are built in order to analyse current data records in combination with some historical data.

Use of Predictive Analytics in ThingWorx Analytics and How to Access Predictive Analysis Functionality via ThingWorx Analytics

Bias and variance are the two components of imprecision in predictive models. Bias in predictive models is a measure of model rigidity and inflexibility, and means that your model is not capturing all the signal it could from the data. Bias is also known as under-fitting. Variance, on the other hand, is a measure of model inconsistency: high variance models tend to perform very well on some data points and really badly on others. This is also known as over-fitting and means that your model is too flexible for the amount of training data you have and ends up picking up noise in addition to the signal.

If your model is performing really well on the training set, but much more poorly on the hold-out set, then it is suffering from high variance. On the other hand, if your model is performing poorly on both training and test data sets, it is suffering from high bias.

Techniques to improve:

Add more data: Having more data is always a good idea. It allows the "data to speak for itself," instead of relying on assumptions and weak correlations. The presence of more data results in better and more accurate models. The question is when we should ask for more data; we cannot quantify it in advance. It depends on the problem you are working on and the algorithm you are implementing. For example, when working with time series data, we should look for at least one year of data; and whenever you are dealing with neural network algorithms, you are advised to get more data for training, otherwise the model won't generalize.

Feature Engineering: Adding new features decreases bias at the expense of the variance of the model. New features can help algorithms explain the variance of the model in a more effective way. When we do hypothesis generation, there should be enough time spent on the features required for the model. Then we should create those features from the existing data sets.

Feature Selection: This is one of the most important aspects of predictive modelling. It is always advisable to choose the important features and rebuild the model with only the important and significant features. For example, let's say we have 100 variables. There will be variables which drive most of the variance of the model. If we select features only on a p-value basis, we may still have more than 50 variables. In that case, you should look for other measures, like the contribution of individual variables to the model. If 90% of the variance of the model is explained by only 15 variables, then choose only those 15 variables in the final model.

Multiple Algorithms: Hitting on the right machine learning algorithm is the ideal approach to achieve higher accuracy. Some algorithms are better suited to particular types of data sets than others. Hence, we should apply all relevant models and check their performance.

Algorithm Tuning: We know that machine learning algorithms are driven by parameters. These parameters majorly influence the outcome of the learning process. The objective of parameter tuning is to find the optimum value for each parameter to improve the accuracy of the model.
To tune these parameters, you must have a good understanding of their meaning and their individual impact on the model. You can repeat this process with a number of well-performing models. For example, in a random forest we have various parameters like max_features, number_trees, random_state, oob_score and others. Intuitive optimization of these parameter values will result in better and more accurate models.

Cross Validation: Cross Validation is one of the most important concepts in data modeling. The idea: try to leave out a sample on which you do not train the model, and test the model on this sample before finalizing it. This method helps us to achieve more generalized relationships.

Ensemble Methods: This is the most common approach, found in the majority of winning solutions of data science competitions. This technique simply combines the results of multiple weak models to produce better results. It can be achieved in many ways:
- Bagging: uses several versions of the same model trained on slightly different samples of the training data to reduce variance without any noticeable effect on bias. Bagging can be computationally intensive, especially in terms of memory.
- Boosting: a slightly more complicated concept that relies on training several models successively, each trying to learn from the errors of the models preceding it. Boosting decreases bias and hardly affects variance.
View full tip
Learn how to use the DBConnection building block to create your own DB tables in ThingWorx.
View full tip
There are four types of Analytics:

Prescriptive analytics: What should I do about it?

Prescriptive analytics is about using data and analytics to improve decisions and therefore the effectiveness of actions. Prescriptive analytics is related to both Descriptive and Predictive analytics. While Descriptive analytics aims to provide insight into what has happened, and Predictive analytics helps model and forecast what might happen, Prescriptive analytics seeks to determine the best solution or outcome among various choices, given the known parameters: "Any combination of analytics, math, experiments, simulation, and/or artificial intelligence used to improve the effectiveness of decisions made by humans or by decision logic embedded in applications."

These analytics go beyond descriptive and predictive analytics by recommending one or more possible courses of action. Essentially they predict multiple futures and allow companies to assess a number of possible outcomes based upon their actions. Prescriptive analytics uses a combination of techniques and tools such as business rules, algorithms, machine learning and computational modelling procedures. Prescriptive analytics can also suggest decision options for how to take advantage of a future opportunity or mitigate a future risk, and illustrate the implications of each decision option. In practice, prescriptive analytics can continually and automatically process new data to improve the accuracy of predictions and provide better decision options.

Prescriptive analytics can be used in two ways:
- Inform decision logic with analytics: Decision logic needs data as an input to make the decision. The veracity and timeliness of data ensure that the decision logic will operate as expected. It doesn't matter if the decision logic is that of a person or embedded in an application; in both cases, prescriptive analytics provides the input to the process. Prescriptive analytics can be as simple as aggregate analytics about how much a customer spent on products last month, or as sophisticated as a predictive model that predicts the next best offer to a customer. The decision logic may even include an optimization model to determine how much, if any, discount to offer to the customer.
- Evolve decision logic: Decision logic must evolve to improve or maintain its effectiveness. In some cases, decision logic itself may be flawed or may degrade over time. Measuring and analyzing the effectiveness or ineffectiveness of enterprise decisions allows developers to refine or redo decision logic to make it even better. It can be as simple as marketing managers reviewing email conversion rates and adjusting the decision logic to target an additional audience. Alternatively, it can be as sophisticated as embedding a machine learning model in the decision logic for an email marketing campaign to automatically adjust what content is sent to target audiences.

Different technologies of Prescriptive analytics to create action:
- Search and knowledge discovery: Information leads to insights, and insights lead to knowledge. That knowledge enables employees to become smarter about the decisions they make for the benefit of the enterprise. Developers can embed search technology in decision logic to find knowledge used to make decisions in large pools of unstructured big data.
- Simulation: Simulation imitates a real-world process or system over time using a computer model.
Because digital simulation relies on a model of the real world, the usefulness and accuracy of simulation to improve decisions depends a lot on the fidelity of the model. Simulation has long been used in multiple industries to test new ideas, or how modifications will affect an existing process or system.
- Mathematical optimization: Mathematical optimization is the process of finding the optimal solution to a problem that has numerically expressed constraints.
- Machine learning: "Learning" means that the algorithms analyze sets of data to look for patterns and/or correlations that result in insights. Those insights can become deeper and more accurate as the algorithms analyze new data sets. The models created and continuously updated by machine learning can be used as input to decision logic or to improve the decision logic automatically.
- Pragmatic AI: Enterprises can use AI to program machines to continuously learn from new information, build knowledge, and then use that knowledge to make decisions and interact with people and/or other machines.

Use of Prescriptive Analytics in ThingWorx Analytics:

Thing Optimizer: Thing Optimizer functionality provides the prescriptive scoring and optimization capabilities of ThingWorx Analytics. While predictive scoring allows you to make predictions about future outcomes, prescriptive scoring allows you to see how certain changes might affect future outcomes. After you have generated a prediction model (also called training a model), you can modify the prescriptive attributes in your data (those attributes marked as levers) to alter the predictions. The prescriptive scoring process evaluates each lever attribute and returns an optimal value for that feature, depending on whether you want to minimize or maximize the goal variable. Prescriptive scoring results include both an original score (the score before any lever attributes are changed) and an optimized score (the score after optimal values are applied to the lever attributes). In addition, for each attribute identified in your data as a lever, original and optimal values are included in the prescriptive scoring results.

How to Access Thing Optimizer Functionality: ThingWorx Analytics prescriptive scoring can only be accessed via the REST API Service. Using a REST client, you can access the Scoring service, which includes a series of API endpoints to submit scoring requests, retrieve results, list jobs, and more. This requires installation of the ThingWorx Analytics Server.

How to avoid mistakes: below are some common mistakes made while doing Prescriptive analytics:
- Starting digital analytics without a clear goal
- Ignoring core metrics
- Choosing overkill analytics tools
- Creating beautiful reports with little business value
- Failing to detect tracking errors

Image source: Wikipedia; Content: go.forrester.com (partially)
View full tip
I imagine a lot of people that face this problem might be using Session Parameters, but there is a secret lost Ninja art that allows you to do it with Mashup parameters, which is much more contextual and direct. The key is to have Mashup parameters with the same name.

End Result

Starting out I am on my main mashup; you can see the Tree Data in the Grid below. Clicking on the next node now shows the new mashup and the TO field inside. That TO value was passed in using a mashup parameter. Clicking the next node, you can see it is actually a different mashup, but I am still passing the TO value.

How is it done?

Here is my mashup with the Tree and Contained Mashup; you can see the bindings are in place already, but how did I do it, since the Contained Mashup is empty?

First, create the new mashups with a Mashup parameter named the SAME, in this case EntityName. Here is Mashup2, and you can see the Mashup parameter with the same name, EntityName, bound to one of the Value Displays.

Now how do I bind from my main mashup? What you need to do is temporarily assign one of the Mashups to the Contained Mashup; here I am showing Mashup1 assigned. This will now allow you to bind not just the Mashup Name, but also a value to the Mashup Parameter in that Mashup. Just drag your selected row values onto the contained mashup. Here you can see the parameter showing as a property; I just dropped my value on the contained mashup and I can bind to Name (the name of the mashup to show) and EntityName (the value I want to pass to the mashup parameter).

Now just remove the assigned mashup from the Contained Mashup and you'll note that the bindings stay intact. That's it!
View full tip
Welcome to the ThingWorx Manufacturing Apps Community! The ThingWorx Manufacturing Apps are easy to deploy, pre-configured role-based starter apps that are built on PTC’s industry-leading IoT platform, ThingWorx. These Apps provide manufacturers with real-time visibility into operational information, improved decision making, accelerated time to value, and unmatched flexibility to drive factory performance.   This Community page is open to all users-- including licensed ThingWorx users, Express (“freemium”) users, or anyone interested in trying the Apps. Tech Support community advocates serve users on this site, and are here to answer your questions about downloading, installing, and configuring the ThingWorx Manufacturing Apps.     A. Sign up: ThingWorx Manufacturing Apps Community: PTC account credentials are needed to participate in the ThingWorx Community. If you have not yet registered a PTC eSupport account, start with the Basic Account Creation page.   Manufacturing Apps Web portal: Register a login for the ThingWorx Manufacturing Apps web portal, where you can download the free trial and navigate to the additional resources discussed below.     B. Download: Choose a download/packaging option to get started.   i. Express/Freemium Installer (best for users who are new to ThingWorx): If you want to quickly install ThingWorx Manufacturing Apps (including ThingWorx) use the following installer: Download the Express/Freemium Installer   ii. 30-day Developer Kit trial: To experience the capabilities of the ThingWorx Platform with the Manufacturing Apps and create your own Apps: Download the 30-day Developer Kit trial   iii. Import as a ThingWorx Extension (for users with a Manufacturing Apps entitlement-- including ThingWorx commercial customers, PTC employees, and PTC Partners): ThingWorx Manufacturing apps can be imported as ThingWorx extensions into an existing ThingWorx Platform install (v8.1.0). To locate the download, open the PTC Software Download Page and expand the following folders:   ThingWorx Platform | Release 8.x | ThingWorx Manufacturing Apps Extension | Most Recent Datacode     C. Learn After downloading the installer or extensions, begin with Installation and Configuration.   Follow the steps laid out in the ThingWorx Manufacturing Apps Setup and Configuration Guide 8.2   Find helpful getting-started guides and videos available within the 'Get Started' section of the ThingWorx Manufacturing Apps Portal.     D. Customize Once you have successfully downloaded, installed, and configured the Manufacturing Apps, begin to explore the deeper potential of the Apps and the ThingWorx Platform.   Follow along with the discussion and steps contained in the ThingWorx Manufacturing Apps and Service Apps Customization Guide  8.2   Also contained within the the 'Get Started' page of the ThingWorx Manufacturing Apps Portal, find the "Evolve and Expand" section, featuring: -Custom Plant Layout application -Custom Asset Advisor application -Global Plant View application -Thingworx Manufacturing Apps Technical Lab with Sigma Tile (Raspberry Pi application) -Configuring the Apps with demo data set and simulator -Additional Advanced Documentation     E. Get help / give feedback / interact Use the ThingWorx Manufacturing Apps Community page as a resource to find documentation, peruse past forum threads, or post a question to start a discussion! For advanced troubleshooting, licensed users are encouraged to submit support tickets to the PTC My eSupport portal.
View full tip
Step 11: Build Extension

You can use either Gradle or Ant to build your ThingWorx Extension Project. Ant is the preferred method.

Build Extension with Gradle

Right-click on your project -> Gradle (STS) -> Tasks Quick Launcher.

NOTE: This opens up a new window.

Set Project from the drop-down menu to your project name, type Tasks as build, and press Enter.

NOTE: This will build the project and any error will be indicated in the console window. Your console window will display BUILD SUCCESSFUL. This means that your extension was created and stored as a zip file in the your_project -> build -> distributions folder.

Build Extension with Ant

Go to the Package Explorer -> your_project. Right-click on build-extension.xml -> Run As -> Ant Build.

Your console output will indicate BUILD SUCCESSFUL.

NOTE: This will build your project and create the extension zip in the your_project -> build -> distributions folder of your project.

Step 12: Import Extension

If you have valuable data on your ThingWorx server, save the current state before importing an untested extension by duplicating and renaming the ThingworxStorage directory. This will save all current entities, and a new, empty ThingworxStorage directory will be generated when Tomcat is restarted. To restore your saved state, rename the duplicate directory back to ThingworxStorage. Alternatively, if you do not back up your storage, make sure that any entities you want to save are exported into xml format. This way you will be able to restore your ThingWorx server to its initial state by deleting the storage directory before importing the saved entities.

Import Extension

1. In the lower left corner, click Import/Export, then select Import. NOTE: The build produces a zip file in the ProjectName -> build -> distributions folder. This zip file will be required for importing the extension.
2. For the Import Option, select Extension.
3. Click Browse and choose the zip file in the distributions folder (located in the Eclipse project's build directory).
4. Click Import.

Create a Thing

1. Create a Thing using the ThingWorx Composer with the Thing Template set to the WeatherThingTemplate.
2. Open the ConfigurationTable tab and add the appid from the OpenWeatherMap.org site.
3. Open the WeatherAppMashup Mashup by searching for WeatherAppMashup in the Search bar.
4. Click View Mashup in the WeatherAppMashup Mashup window. Type the name of a city (e.g. Boston) and click go.

NOTE: You can now see the current temperature reading and weather description of your city in the Mashup.

Troubleshooting

If your import did not get through with the two green checks, you may want to modify your metadata.xml or Java code to fix it, depending on the error shown in the logs.

Issue | Solution
JAR conflict arises between two similar JARs | JAR conflicts arise when a similar JAR is already present in the Composer database. Try to remove the respective JAR resources from the metadata.xml. Add these JARs explicitly to the twx-lib folder in the project folder inside the workspace directory. Now build the project and import the extension in ThingWorx Composer once again.
JAR is missing | Add the respective JAR resource in metadata.xml using ThingWorx -> New Jar Resource. Now build the project and import the extension in ThingWorx Composer once again.
Minimum ThingWorx Version [7.2.1] requirements are not met because current version is: 7.1.3 | The version of the SDK you used to build your extension is higher than the version of the ThingWorx Composer you are testing against.
You can manually edit the configfiles -> metadata.xml file to change the Minimum ThingWorx version to your ThingWorx Composer version.

Step 13: Next Steps

Congratulations! You've successfully completed the Create an Extension tutorial, and learned how to:
- Install the Eclipse Plugin and Extension SDK
- Create and configure an Extension project
- Create Services, Events and Subscriptions
- Add Composer entities
- Build and import an Extension

Learn More

We recommend the following resources to continue your learning experience:
Capability | Guide
Build | Application Development Tips & Tricks

Additional Resources

If you have questions, issues, or need additional information, refer to:
Resource | Link
Community | Developer Community Forum
Support | Extension Development Guide
View full tip
Connect a Raspberry Pi to ThingWorx using the Edge Micro Server (EMS).

Guide Concept

This project will introduce you to the Edge MicroServer (EMS) and how to connect your Raspberry Pi device to your ThingWorx server.

Following the steps in this guide, you will be able to connect to the ThingWorx platform with your Raspberry Pi. The coding will be simple and the steps will be very straightforward.

We will teach you how to utilize the EMS for your Edge device needs. The EMS comes with the Lua Script Resource, which serves as an optional process manager, enabling you to create Properties, Services, Events, and Subscriptions for a remote device on the ThingWorx platform.

You'll learn how to:
- Set up a Raspberry Pi
- Install, configure and launch the EMS
- Connect a remote device to ThingWorx

NOTE: This guide's content aligns with ThingWorx 9.3. The estimated time to complete ALL parts of this guide is 30 minutes.

Step 1: Setup Raspberry Pi

1. Follow the setup instructions to get your Raspberry Pi up and running with the Raspberry Pi OS operating system.
2. Ensure that your Pi has a valid Ethernet or Wifi connection.
3. If your Pi is connected to a monitor/keyboard, run ifconfig from the Command Line Interface (CLI) to determine the IP address. If you are connecting remotely, probe your local network to find your Pi using one of these methods to determine the IP address.
4. Log into your Raspberry Pi using the userid/password combination pi/raspberry.

Step 2: Install the EMS

1. Download the MED-61060-CD-054_SP10_Microserver-Linux-arm-hwfpu-openssl-5-4-10-1509.zip attached here directly to the Raspberry Pi, or transfer it using an SFTP application such as WinSCP.
2. After downloading the EMS zip file, unzip the archive in a suitable location on the Pi using the command below. Use the Tab key to automatically complete file names.
unzip /MED-61060-CD-054_SP9_Microserver-Linux-arm-hwfpu-openssl-5-4-10-1509.zip
3. After unzipping the distribution, a sub-directory named /microserver will be created inside the parent directory. Verify that the microserver directory was created with the command ls -l
4. Switch into the microserver directory with the command cd microserver

The microserver directory includes the following files:

File Name | Description
wsems | An executable file that runs the Edge MicroServer
luaScriptResource | The Lua utility that is used to run Lua scripts, configure remote things, and integrate with the host system

Step 3: Create Application Key

In this step, you will be using the ThingWorx Composer to generate an Application Key. The Application Key will be used to identify the Edge Agent. The Application Key is tied to a user and has the same entitlements on the server.

Using the Application Key for the default User (Administrator) is not recommended. If administrative access is absolutely necessary, create a User and place the user as a member of the SecurityAdministrators and Administrators User Groups.

1. Create the User the Application Key will be assigned to.
2. On the Home screen of Composer, click + New.
3. In the dropdown list, click Application Key.
4. Give your Application Key a name (e.g., MyAppKey).
5. Set the User Name Reference to a User you created.
6. Update the Expiration Date field, otherwise it will default to 1 day.
7. Click Save.

Step 4: Configure the EMS

The EMS consists of two distinct components that do slightly different things and communicate with each other. The first is the EMS itself, which creates an AlwaysOn™ connection to the ThingWorx server.
Step 4: Configure the EMS

The EMS consists of two distinct components that do slightly different things and communicate with each other.

The first is the EMS itself, which creates an AlwaysOn™ connection to the ThingWorx server. It binds things to the platform and automatically provides features like file transfer and tunneling.

The second is the Lua Script Resource (LSR). It provides a scripting environment so that you can add properties, services, and events to the things that you create in the EMS. The LSR communicates with your sensors or devices. The LSR can be installed on the same device as the EMS or on a separate device. For example, one LSR can act as a gateway and send data from several different things to a single EMS.

Open a terminal emulator on the Raspberry Pi and change directory to microserver/etc:

    cd microserver/etc

Create a config.json file. The EMS comes with two sample config files that can be used as a reference for creating your config.json: config.json.minimal provides the minimum, basic options for getting started, while config.json.complete provides all of the configuration options.

Create the config.json file in the etc folder:

    sudo nano config.json

Edit the config.json file:

ws_servers - host and port of the server hosting the ThingWorx platform. If you are using a Developer Portal hosted server, your server hostname is listed on the dashboard.
    {"host":"<TwX Server IP>", "port":443}

http_server - host and port of the machine running the LSR. In this case it is your localhost running on the Raspberry Pi.
    {"host":"127.0.0.1", "port":8080, "use_default_certificate": true, "ssl": false, "authenticate": false}

appKey - the Application Key generated on the ThingWorx server. Use the keyId generated in the previous step, "Create Application Key".
    "appKey":"<insert keyId>"

logger - sets the logging level. Leave it at "INFO" for normal operation, or set it to "DEBUG" for more verbose output while troubleshooting.
    {"level":"INFO"}

certificates - for establishing a secure WebSocket connection between the ThingWorx server and the EMS. A valid certificate should be used in a production environment, but for debugging purposes you can turn off validation and allow self-signed certificates.
    {"validate":false, "disable_hostname_validation": true}

NOTE: To ensure a secure connection, use valid certificates, encryption and the HTTPS protocol (port 443) when establishing the WebSocket connection between the EMS and the ThingWorx platform.

Exit and save with Ctrl+X.

Sample config.json File

Replace host and appKey with values from your hosted server.

    {
        "ws_servers": [{
            "host": "pp-2007011431nt.devportal.ptc.io",
            "port": 443
        }],
        "appKey": "2d4e9440-3e51-452f-a057-b55d45289264",
        "http_server": {
            "host": "127.0.0.1",
            "port": 8080,
            "use_default_certificate": true,
            "ssl": false,
            "authenticate": false
        },
        "logger": {
            "level": "INFO"
        },
        "certificates": {
            "validate": false,
            "disable_hostname_validation": true
        }
    }

Click here to view Part 2 of this guide.
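As a final sanity check before launching wsems in Part 2: a malformed config.json is a common cause of EMS startup failures. A quick way to validate the JSON syntax (a sketch assuming python3 is available, as it is on Raspberry Pi OS by default):

    # Run from microserver/etc; prints the parsed JSON on success,
    # or reports the line and column of the first syntax error.
    python3 -m json.tool config.json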
View full tip
Sometimes it's necessary to delete an existing PostgreSQL database, especially if a different major version was installed by mistake (for example, 9.6 in place of the supported 9.4). It is then essential to ensure the database is fully deleted and that no database "ghosts" remain. The proper way to uninstall is to go to the PostgreSQL server installation directory, find the uninstall-postgresql file, and double-click it to run the uninstaller. In case the uninstall wasn't performed correctly, below are the manual steps to clean it up. One sign of an existing "ghost" database is randomly seeing a second PostgreSQL server in pgAdmin III, or experiencing problems without any visible errors when running the ThingWorx installation scripts.

To uninstall manually (this example uses 9.6 as the version to delete - replace with your own where needed):

1. Remove the PostgreSQL server installation directory, assuming the default location:
    rd /s /q "C:\Program Files\PostgreSQL\9.6"
2. Delete the user 'postgres':
    net user postgres /delete
3. Remove the Registry entries:
    HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Installations\postgresql-9.6
    HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Services\postgresql-9.6
4. Remove the postgresql-9.6 service:
    sc delete postgresql-9.6
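After the cleanup, it's worth confirming that nothing survived. A quick verification sketch from an elevated Command Prompt, assuming the same 9.6 version and default paths as above - each command should report that the item no longer exists:

    REM Service should no longer be registered
    sc query postgresql-9.6
    REM Registry branch should be gone
    reg query "HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Installations\postgresql-9.6"
    REM Local user 'postgres' should no longer exist
    net user postgres
    REM Installation directory should be absent or empty
    dir "C:\Program Files\PostgreSQL"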
View full tip
The document attached to this blog entry came out of my first exposure to using the C SDK on a Raspberry Pi. I took notes on what I had to do to get my own simple edge application working, and I think it is a good introduction to using the C SDK to report real, sampled data. It also demonstrates how you can use the C SDK without HTTPS, including how to turn off HTTPS support. I would appreciate any feedback on this document and what additions might be useful to anyone else who tries this on their own.
View full tip
In the absence of an OOTB Software Content Management (SCM) system within ThingWorx, it has been challenging to track and maintain code changes in different entities, or to roll back in case an entity is wrongly edited or removed. There is some possibility to compare differences, i.e. when importing the entities back from the Source Control repository in ThingworxStorage, but that approach has its own limitations.

Note: this is not an official document, rather a trick to help work around this challenge.

I'll be using the following components in this blog to help set up a workable environment:

Git
VSCode
ThingWorx Scheduler Thing

The first two components, Git and VSCode, are my preference due to their ease of use and my familiarity with them. However, you are free to choose your own versioning platform and IDE to review branches, commits, diffs, etc., or simply use Git Bash to review the branches and all related code changes.

This blog is divided into the following sections:

1. Installing & setting up code versioning software
2. Installing IDE
3. Creating a Scheduler Thing in ThingWorx
4. Reviewing the changes

1. Installing & setting up code versioning software

As mentioned, you can use any code versioning platform of your choice; in this blog I'll be using Git. To download and install Git, refer to Git - Installing Git.

Setting up the Git repository

Navigate to \\ThingworxStorage, the folder which contains the \repository folder, which in turn holds the SystemRepository folder.

Note: more custom repositories could of course be created if you prefer to keep them separate from the SystemRepository (available OOTB).

Once Git is installed, let's initialize the repository. If you are initializing the repository in ThingworxStorage and would like only the repository folder to be tracked, be sure to create a .gitignore file and add the following to it:

    # folders to ignore
    database
    esapi
    exports
    extensions
    logs
    reports
    certificates
    # keystore to ignore
    keystore.jks

Note: simply remove a folder/file from .gitignore if you'd like it to be tracked in the Git repository.

The following commands are issued in Git Bash, which can be started from Windows Start > Git Bash. From Git Bash, navigate to the ThingworxStorage/repository folder.

Initialize the repository:

    $ git init

Check the status of tracked/untracked files and folders:

    $ git status

This may or may not return lists of tracked/untracked files and folders. When issued for the first time, it will show that the repository content, i.e. the SystemRepository folder, is not yet tracked (untracked file/folder names are highlighted in red).

Configure the user, then stage the required files/folders for tracking:

    $ git config --global user.name "<your name>"
    $ git config --global user.email "<your email>"
    $ git add .

git add . stages all folders/files that are not excluded via the .gitignore created above.

    $ git commit -m "<commit message>"

This performs the first commit to the master branch, which is the default branch after the initial setup of the Git repository.

    $ git branch -a

This lists all available branches within the repository.

    $ git branch <branch name>

e.g. $ git branch newfeatureA creates a new branch with that name; to switch to that branch, use the checkout command:

    $ git checkout newfeatureA

Note that this is reflected in the Git Bash prompt, e.g. it switches from
MINGW64 /c/ThingworxStorage/repository (master)
to
MINGW64 /c/ThingworxStorage/repository (newfeatureA)

If there is a warning with files/folders highlighted in red, it may mean those files/folders are not yet staged (not yet tracked under the new branch and not ready to be committed). To stage them:

    $ git add .

To commit them under this branch:

    $ git commit -m "Initial commit under branch newfeatureA"

The command above commits with the message defined via the -m parameter.
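At this point the repository history can already be reviewed straight from Git Bash, before any IDE is involved. A small sketch using the branch names from above:

    # Compact graph of commits across all branches
    git log --oneline --all --graph --decorate
    # Full diff of newfeatureA against master
    git diff master newfeatureA
    # Only the entity files that differ between the two branches
    git diff --name-status master newfeatureA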
2. Installing IDE

Now that Git is installed and configured for use, there are several options here, e.g. using an IDE like VSCode or Atom (or any other IDE for that matter), using Git Bash to review branches and commit code via commands, or using the Git GUI. I'll be using VSCode, because apart from tracking Git repositories I can also debug HTML/CSS and XML with minimal setup time, and these are likely the languages mostly used when working with ThingWorx entities. I will also install certain extensions to simplify accessing and reviewing the branches containing code changes, new entities being created or committed by different users, etc.

To install VSCode, refer to Setting up Visual Studio Code; this covers all the platforms you might be working on.

Here is the list of extensions I have installed within VS Code to work with the Git repository:

Required extensions
Git Extension Pack

Optional extensions
Markdown All in One
XML Tools
HTML CSS Support

Once done installing all the above-mentioned extensions, navigate to File > Open Folder > \\ThingworxStorage\repository in VSCode. This opens the SystemRepository folder and populates the GitLens section, highlighting the list of branches, the active branch (marked with a check icon), and the uncommitted changes remaining in each branch.

To view the history of all branches, press F1 and search for Git: View History (git log) > Select all branches; this provides an overview of commits across all branches.

3. Creating a Scheduler Thing in ThingWorx

Now that we have the playground set up, we can either:

Export ThingWorx entities manually by navigating to ThingWorx Composer > Import/Export > Export > Source Control Entities, or
Invoke the ExportSourceControlledEntities service (available under Resources > SourceControlFunctions) automatically, based on a Scheduler's CRON job.

To invoke this service automatically, create a subscription to the Scheduler Thing's Event that executes ExportSourceControlledEntities periodically. I'm using this service with the following input parameters:

    var params = {
        path: "/" /* STRING */,
        endDate: undefined /* DATETIME */,
        includeDependents: undefined /* BOOLEAN */,
        collection: undefined /* STRING */,
        repositoryName: "SystemRepository" /* THINGNAME */,
        projectName: undefined /* PROJECTNAME */,
        startDate: undefined /* DATETIME */,
        tags: undefined /* TAGS */
    };

    // no return
    Resources["SourceControlFunctions"].ExportSourceControlledEntities(params);

The service here only takes path & repositoryName as input. Of course, this can be adjusted based on the type of entities, datetime, collection type, etc. that you want to track in your code versioning system.
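If you don't want to wait for the next scheduled run, the same service can also be triggered on demand via the REST API. A sketch, assuming an Application Key whose User is permitted to execute services on the SourceControlFunctions resource (host and keyId are placeholders):

    # Trigger an export of all source-controlled entities into SystemRepository
    curl -k -X POST \
         -H "appKey: <your keyId>" \
         -H "Content-Type: application/json" \
         -d '{"path":"/","repositoryName":"SystemRepository"}' \
         "https://<TwX Server>/Thingworx/Resources/SourceControlFunctions/Services/ExportSourceControlledEntities"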
4. Reviewing the changes

With the help of the Git toolkit installed in VS Code (covered in section 2, Installing IDE), we can now track changed or newly created entities immediately (as soon as they are exported to the repository) in the Source Control section (also accessible via the key combination Ctrl + Shift + G).

Based on the scheduled job, the entities are exported to the specified repository periodically, which in turn shows up in the branch under which that repository is being tracked. Since I'm using VS Code with the Git extension, I can track these changes under the GitLens tab.

Additionally, for quick access to all newly created or modified entities, check the Source Control section. Changes marked with "U" are new entities added to the repository that are still untracked; changes marked with "M" are entities that have been modified compared to their last committed state on the specific branch.

To quickly view the differences/modifications in an entity, simply click the modified file on the left; the differences are then highlighted on the right side.
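The same comparison also works from the command line, which can be handy on a headless server. A sketch, assuming a Thing was exported as an XML file (the path below is hypothetical - adjust it to match your repository layout):

    # Compare the latest exported version of an entity against the previous commit
    git diff HEAD~1 -- "Things/MyThing.xml"
    # Compare a fresh, not-yet-committed export against the last commit
    git diff -- "Things/MyThing.xml"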
View full tip