
IoT Tips

I imagine a lot of people who face this problem are using Session Parameters, but there is a secret lost Ninja art that lets you do it with Mashup parameters instead, which is much more contextual and direct. The key is to give the mashups parameters with the same name.

End result: starting on my main mashup, you can see the Tree data in the grid below. Clicking the next node shows the new mashup with the TO field inside; that TO value was passed in using a mashup parameter. Clicking the next node again, you can see it is actually a different mashup, but I am still passing the TO value.

How is it done? Here is my mashup with the Tree and a Contained Mashup widget. The bindings are already in place, but how did I do it, since the Contained Mashup is empty?

First, create the new mashups, each with a mashup parameter named the same; in this case, EntityName. Here is Mashup2, and you can see its mashup parameter EntityName bound to one of the Value Displays.

Now, how do I bind from my main mashup? What you need to do is temporarily assign one of the mashups to the Contained Mashup widget; here I am showing Mashup1 assigned. This now allows you to bind not just the mashup Name, but also a value to the mashup parameter in that mashup. Just drag your selected row values onto the Contained Mashup. Here you can see the parameter showing as a property: I dropped my value on the Contained Mashup and can bind to Name (the name of the mashup to show) and EntityName (the value I want to pass to the mashup parameter).

Now just remove the assigned mashup from the Contained Mashup and you'll note that the bindings stay intact. That's it!
View full tip
Among the four types of analytics, this tip focuses on Prescriptive analytics: What should I do about it?

Prescriptive analytics is about using data and analytics to improve decisions and therefore the effectiveness of actions. It is related to both Descriptive and Predictive analytics: while Descriptive analytics aims to provide insight into what has happened, and Predictive analytics helps model and forecast what might happen, Prescriptive analytics seeks to determine the best solution or outcome among various choices, given the known parameters. "Any combination of analytics, math, experiments, simulation, and/or artificial intelligence used to improve the effectiveness of decisions made by humans or by decision logic embedded in applications."

These analytics go beyond descriptive and predictive analytics by recommending one or more possible courses of action. Essentially they predict multiple futures and allow companies to assess a number of possible outcomes based upon their actions. Prescriptive analytics uses a combination of techniques and tools such as business rules, algorithms, machine learning, and computational modelling procedures. It can also suggest decision options for how to take advantage of a future opportunity or mitigate a future risk, and illustrate the implications of each decision option. In practice, prescriptive analytics can continually and automatically process new data to improve the accuracy of predictions and provide better decision options.

Prescriptive analytics can be used in two ways:

Inform decision logic with analytics: Decision logic needs data as an input to make the decision. The veracity and timeliness of data will ensure that the decision logic operates as expected. It doesn't matter if the decision logic is that of a person or embedded in an application — in both cases, prescriptive analytics provides the input to the process. Prescriptive analytics can be as simple as aggregate analytics about how much a customer spent on products last month, or as sophisticated as a predictive model that predicts the next best offer to a customer. The decision logic may even include an optimization model to determine how much, if any, discount to offer to the customer.

Evolve decision logic: Decision logic must evolve to improve or maintain its effectiveness. In some cases, decision logic itself may be flawed or degrade over time. Measuring and analyzing the effectiveness or ineffectiveness of enterprise decisions allows developers to refine or redo decision logic to make it even better. It can be as simple as marketing managers reviewing email conversion rates and adjusting the decision logic to target an additional audience. Alternatively, it can be as sophisticated as embedding a machine learning model in the decision logic for an email marketing campaign to automatically adjust what content is sent to target audiences.

Different technologies of Prescriptive analytics to create action:

Search and knowledge discovery: Information leads to insights, and insights lead to knowledge. That knowledge enables employees to become smarter about the decisions they make for the benefit of the enterprise. Developers can embed search technology in decision logic to find knowledge used to make decisions in large pools of unstructured big data.

Simulation: Simulation imitates a real-world process or system over time using a computer model. Because digital simulation relies on a model of the real world, the usefulness and accuracy of simulation to improve decisions depends a lot on the fidelity of the model. Simulation has long been used in multiple industries to test new ideas or how modifications will affect an existing process or system.

Mathematical optimization: Mathematical optimization is the process of finding the optimal solution to a problem that has numerically expressed constraints.

Machine learning: "Learning" means that the algorithms analyze sets of data to look for patterns and/or correlations that result in insights. Those insights can become deeper and more accurate as the algorithms analyze new data sets. The models created and continuously updated by machine learning can be used as input to decision logic or to improve the decision logic automatically.

Pragmatic AI: Enterprises can use AI to program machines to continuously learn from new information, build knowledge, and then use that knowledge to make decisions and interact with people and/or other machines.

Use of Prescriptive Analytics in ThingWorx Analytics:

Thing Optimizer: Thing Optimizer functionality provides the prescriptive scoring and optimization capabilities of ThingWorx Analytics. While predictive scoring allows you to make predictions about future outcomes, prescriptive scoring allows you to see how certain changes might affect future outcomes. After you have generated a prediction model (also called training a model), you can modify the prescriptive attributes in your data (those attributes marked as levers) to alter the predictions. The prescriptive scoring process evaluates each lever attribute and returns an optimal value for that feature, depending on whether you want to minimize or maximize the goal variable. Prescriptive scoring results include both an original score (the score before any lever attributes are changed) and an optimized score (the score after optimal values are applied to the lever attributes). In addition, for each attribute identified in your data as a lever, original and optimal values are included in the prescriptive scoring results.

How to Access Thing Optimizer Functionality: ThingWorx Analytics prescriptive scoring can only be accessed via the REST API Service. Using a REST client, you can access the Scoring service, which includes a series of API endpoints to submit scoring requests, retrieve results, list jobs, and more. This requires installation of the ThingWorx Analytics Server.

How to avoid mistakes — below are some common mistakes made when doing Prescriptive analytics:

Starting digital analytics without a clear goal
Ignoring core metrics
Choosing overkill analytics tools
Creating beautiful reports with little business value
Failing to detect tracking errors

Image source: Wikipedia. Content partially from go.forrester.com.
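As a companion to the "How to Access Thing Optimizer Functionality" note above, here is a minimal sketch of submitting a scoring request from a ThingWorx server-side script using the built-in ContentLoaderFunctions resource. The URL path and request body below are placeholders, not the documented Analytics contract; check the ThingWorx Analytics REST API documentation for the exact endpoints and payload fields.

// hypothetical request payload, for illustration only
var request = {
    datasetUri: "dataset/myDataset",
    modelUri: "model/myModel",
    goalField: "myGoal"
};

var params = {
    url: "http://analytics-server:8080/1.0/prescriptive" /* placeholder endpoint */,
    content: request /* JSON */,
    username: "twa_user" /* STRING - placeholder */,
    password: "twa_password" /* STRING - placeholder */
};

// POST the scoring request and capture the JSON response (e.g. a job id)
var result = Resources["ContentLoaderFunctions"].PostJSON(params);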
View full tip
Hello!

We will host a live Expert Session: "Top 5 items to check for ThingWorx Performance Troubleshooting" on Sept 3rd at 09:00 AM EST.

Please find below the description of the expert session as well as the link to register.

Expert Session: Top 5 items to check for ThingWorx Performance Troubleshooting
Date and Time: Thursday, Sept 3rd, 2020, 09:00 am EST
Duration: 1 hour
Description: How do you troubleshoot performance issues in a ThingWorx environment? Here we will cover the top 5 investigation steps that will help you understand the source of your environment issues and allow better communication with PTC Technical Support.
Registration: here

Existing recorded sessions can be found on the support portal using the keyword 'Expert Sessions'.

You can also suggest topics for upcoming sessions using this small form.
View full tip
The KEPServerEX installer download is available from the My Kepware web portal: https://my.kepware.com/mykepware/ It will be necessary to create a new My Kepware account if you have not already done so. When you log in to your My Kepware account, you will be at the My Kepware Home page. In the left column, you will see a link to download the latest release. You can access all of the features of KEPServerEX with this installation, including the Thingworx Native Interface. For more information on installation of KEPServerEX, see this guide: https://www.kepware.com/en-us/products/kepserverex/documents/installation-guide.pdf
View full tip
Welcome to the ThingWorx Manufacturing Apps Community!

The ThingWorx Manufacturing Apps are easy-to-deploy, pre-configured, role-based starter apps that are built on PTC's industry-leading IoT platform, ThingWorx. These Apps provide manufacturers with real-time visibility into operational information, improved decision making, accelerated time to value, and unmatched flexibility to drive factory performance.

This Community page is open to all users, including licensed ThingWorx users, Express ("freemium") users, or anyone interested in trying the Apps. Tech Support community advocates serve users on this site, and are here to answer your questions about downloading, installing, and configuring the ThingWorx Manufacturing Apps.

A. Sign up:

ThingWorx Manufacturing Apps Community: PTC account credentials are needed to participate in the ThingWorx Community. If you have not yet registered a PTC eSupport account, start with the Basic Account Creation page.

Manufacturing Apps Web portal: Register a login for the ThingWorx Manufacturing Apps web portal, where you can download the free trial and navigate to the additional resources discussed below.

B. Download: Choose a download/packaging option to get started.

i. Express/Freemium Installer (best for users who are new to ThingWorx): If you want to quickly install the ThingWorx Manufacturing Apps (including ThingWorx), use the following installer: Download the Express/Freemium Installer

ii. 30-day Developer Kit trial: To experience the capabilities of the ThingWorx Platform with the Manufacturing Apps and create your own Apps: Download the 30-day Developer Kit trial

iii. Import as a ThingWorx Extension (for users with a Manufacturing Apps entitlement, including ThingWorx commercial customers, PTC employees, and PTC Partners): The ThingWorx Manufacturing Apps can be imported as ThingWorx extensions into an existing ThingWorx Platform install (v8.1.0). To locate the download, open the PTC Software Download Page and expand the following folders:

ThingWorx Platform | Release 8.x | ThingWorx Manufacturing Apps Extension | Most Recent Datacode

C. Learn: After downloading the installer or extensions, begin with Installation and Configuration.

Follow the steps laid out in the ThingWorx Manufacturing Apps Setup and Configuration Guide 8.2

Find helpful getting-started guides and videos available within the 'Get Started' section of the ThingWorx Manufacturing Apps Portal.

D. Customize: Once you have successfully downloaded, installed, and configured the Manufacturing Apps, begin to explore the deeper potential of the Apps and the ThingWorx Platform.

Follow along with the discussion and steps contained in the ThingWorx Manufacturing Apps and Service Apps Customization Guide 8.2

Also contained within the 'Get Started' page of the ThingWorx Manufacturing Apps Portal, find the "Evolve and Expand" section, featuring:

-Custom Plant Layout application
-Custom Asset Advisor application
-Global Plant View application
-ThingWorx Manufacturing Apps Technical Lab with Sigma Tile (Raspberry Pi application)
-Configuring the Apps with demo data set and simulator
-Additional Advanced Documentation

E. Get help / give feedback / interact: Use the ThingWorx Manufacturing Apps Community page as a resource to find documentation, peruse past forum threads, or post a question to start a discussion! For advanced troubleshooting, licensed users are encouraged to submit support tickets to the PTC My eSupport portal.
View full tip
Hello!

We will host a live Expert Session: "What's new in Navigate 9.0" on August 18th at 01:00 PM EST. Please find below the description of the expert session as well as the link to register.

Expert Session: What's new in Navigate 9.0
Date and Time: Tuesday, August 18th, 2020, 01:00 pm EST
Duration: 1 hour
Registration link: https://www.ptc.com/en/special-event/thingworx-navigate
Description: This session is the intro of a series that will cover the new capabilities of the recent Navigate 9 release and the value that each can bring to your implementation. Further sessions will then cover some of them in detail.

You can also suggest topics for upcoming sessions using this small form.
View full tip
It's been challenging, in the absence of an OOTB Software Content Management (SCM) system within ThingWorx, to track and maintain code changes in different entities, or to roll back in case an entity is wrongly edited or removed. There is some possibility to compare differences, i.e. when importing entities back from the Source Control repository in ThingworxStorage; however, this approach has its own limitations.

Note: This is not an official document, rather a trick to help work around this challenge.

I'll be using the following components in this blog to help set up a workable environment:

1. Git
2. VSCode
3. ThingWorx Scheduler Thing

The first two components, Git & VSCode, are something I preferred due to their ease of use and my familiarity with them. However, you are free to choose your own versioning platform and IDE to review the branches, commits, diffs, etc., or simply use Git Bash to review the branches and all related changes to the code.

This blog is divided into the following sections:

1. Installing & setting up code versioning software
2. Installing an IDE
3. Creating a Scheduler Thing in ThingWorx
4. Reviewing the changes

1. Installing & setting up code versioning software

As mentioned, you can use any code versioning platform of your choice; in this blog I'll be using Git. To download and install Git, refer to Git - Installing Git.

Setting up the Git repository

Navigate to \\ThingworxStorage, as this is the folder which contains the \repository folder, which in turn holds the SystemRepository folder.

Note: Of course, more custom repositories could be created if the preference is to separate them from the SystemRepository (available OOTB).

Once Git is installed, let's initialize the repository. If you are initializing the repository in ThingworxStorage and would like only the repository folder to be tracked, be sure to create a .gitignore file and add the following to it:

# folders to ignore
database
esapi
exports
extensions
logs
reports
certificates

# keystore to ignore
keystore.jks

Note: Simply remove a folder/file from the .gitignore file if you'd like that file/folder to be tracked within the Git repository.

The following commands are issued in Git Bash, which can be started from Windows Start > Git Bash. From Git Bash, navigate to the ThingworxStorage/repository folder.

Git command to initialize the repository:

$ git init

Git command to check the status of tracked/un-tracked files/folders:

$ git status

This may or may not return list(s) of files/folders that are tracked/un-tracked. When issuing this command for the first time, it'll show that the repository and its content, i.e. the SystemRepository folder, are not tracked (file/folder names will likely be highlighted in red).

Git commands to configure the user and then add required files/folders for tracking:

$ git config --global user.name "<user name>"
$ git config --global user.email "<user email>"

$ git add .

This will add all the folders/files that are not ignored in the .gitignore file we created above.

$ git commit -m "<commit message>"

This will perform the first commit to the master branch, which is the default branch after the initial setup of the Git repository.

$ git branch -a

This will list all available branches within this repository.

$ git branch <branchName>

e.g.

$ git branch newfeatureA

This will create a new branch with that name; however, to switch to that branch, the checkout command needs to be used:

$ git checkout newfeatureA

Note this will reflect the change in the command prompt, e.g. it'll switch from MINGW64 /c/ThingworxStorage/repository (master) to MINGW64 /c/ThingworxStorage/repository (newfeatureA).

If there's a warning with files/folders highlighted in red, it may mean that the files/folders are not yet staged (not yet ready for being committed under the new branch and not tracked under the new branch). To stage them:

$ git add .

To commit them for tracking under this branch:

$ git commit -m "Initial commit under branch newfeatureA"

The above command will commit with the message defined with the "-m" parameter.

2. Installing an IDE

Now that Git is installed and configured for use, there are several options here: using an IDE like VSCode or Atom or any other IDE for that matter, using Git Bash to review branches and commits via commands, or using the Git GUI. I'll be using VSCode (because apart from tracking Git repos I can also debug HTML/CSS and XML with minimal setup time, and these are likely the languages mostly used when working with ThingWorx entities) and will install certain extensions to simplify accessing and reviewing the branches containing code changes, new entities being created or committed by different users, etc.

To install VSCode, refer to Setting up Visual Studio Code. This covers all the platforms you might be working on.

Here is the list of extensions that I have installed within VSCode to work with the Git repository:

Required extensions:
- Git Extension Pack

Optional extensions:
- Markdown All in One
- XML Tools
- HTML CSS Support

Once done with installing all the above mentioned extensions, simply navigate to the VSCode application's File > Open Folder > \\ThingworxStorage\repository. This will open the SystemRepository folder, which will also populate the GitLens section highlighting the list of branches, which one is the active branch (marked with a check icon), and what uncommitted changes remain in which branch; see the following screenshot.

To view the history of all the branches, we can use the extension that was installed with the Git Extension Pack by pressing the F1 key and then searching for Git: View History (git log) > Select all branches; this provides an overview such as this.

3. Creating a Scheduler Thing in ThingWorx

Now that we have the playground set up, we can either:

- Export ThingWorx entities manually by navigating to ThingWorx Composer > Import/Export > Export > Source Control Entities, or
- Invoke the ExportSourceControlledEntities service automatically (based on a Scheduler's CRON job), available under Resources > SourceControlFunctions

To invoke this service automatically, a subscription can be created to the Scheduler Thing's event which invokes the ExportSourceControlledEntities service periodically (see the sketch at the end of this post). I'm using this service with the following input parameters:

var params = {
    path: "/" /* STRING */,
    endDate: undefined /* DATETIME */,
    includeDependents: undefined /* BOOLEAN */,
    collection: undefined /* STRING */,
    repositoryName: "SystemRepository" /* THINGNAME */,
    projectName: undefined /* PROJECTNAME */,
    startDate: undefined /* DATETIME */,
    tags: undefined /* TAGS */
};

// no return
Resources["SourceControlFunctions"].ExportSourceControlledEntities(params);

The service is only taking path & repositoryName as input. Of course, this can be updated based on the type of entities, datetime, collection type, etc. that you would want to track in your code versioning system.

4. Reviewing the changes

With the help of the Git toolkit installed in VSCode (covered in section 2, Installing an IDE) we can now track changed / newly created entities immediately (as soon as they are exported to the repository) in the Source Control section (also accessible via the key combination Ctrl + Shift + G).

Based on the scheduled job, the entities will be exported to the specified repository periodically, which in turn will show up in the branch under which that repository is being tracked. Since I'm using VSCode with the Git extension, I can track these changes under the GitLens tab.

Additionally, for quick access to all the new entities created or existing ones modified, the Source Control section can be checked. Changes marked with "U" are new entities that were added to the repository and are still un-tracked, and changes marked with "M" are entities that were modified compared to their last committed state in the specific branch.

To quickly view the differences/modifications in an entity, simply click on the modified file on the left, which will then highlight the differences on the right side, like so.
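For reference, here is a minimal sketch of the subscription body mentioned in section 3, assuming the subscription is created on the Scheduler Thing itself and listens to its ScheduledEvent event:

// Runs on every ScheduledEvent of the Scheduler Thing: exports all
// source-controlled entities to the root of the SystemRepository,
// where the Git repository initialized in section 1 will pick them up
var params = {
    path: "/" /* STRING */,
    repositoryName: "SystemRepository" /* THINGNAME */
};

Resources["SourceControlFunctions"].ExportSourceControlledEntities(params);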
View full tip
Sometimes it's necessary to delete an existing PostgreSQL database, especially if a different major version was installed at first by mistake (for example, 9.6 in place of the supported 9.4). It is then absolutely necessary to ensure the database is fully deleted and that no database "ghosts" remain. One sign of an existing ghost database is randomly seeing a second PostgreSQL server in pgAdmin III, or experiencing error-less problems when running the ThingWorx installation scripts.

The proper way to uninstall is to go to the PostgreSQL server installation directory and find the uninstall-postgresql file. Double-click the uninstall-postgresql file to run the un-installer; it will uninstall PostgreSQL.

In case the uninstall wasn't performed correctly, below are the manual steps to clean it up. In this example we will use 9.6 as the version to delete; please replace with your own where needed:

1. Remove the PostgreSQL server installation directory (assuming the default location):
rd /s /q "C:\Program Files\PostgreSQL\9.6"
2. Delete the user 'postgres':
net user postgres /delete
3. Remove the Registry entries:
HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Installations\postgresql-9.6
HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Services\postgresql-9.6
4. Remove the postgresql-9.6 service:
sc delete postgresql-9.6
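To confirm the cleanup worked, a quick check from an elevated command prompt (the same version placeholder applies); the service query should report that the service does not exist, and the registry query should no longer list the removed installation:

sc query postgresql-9.6
reg query "HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Installations"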
View full tip
Connectors allow clients to establish a connection to Tomcat via the HTTP / HTTPS protocol. Tomcat allows for configuring multiple Connectors so that users or devices can connect via either HTTP or HTTPS.

Usually users like you and me access websites by just typing the URL in the browser's address bar, e.g. "www.google.com". By default, browsers assume that the connection should be established with the HTTP protocol. For HTTPS connections, the protocol has to be specified explicitly, e.g. "https://www.google.com"

However, Google automatically forwards HTTP connections as HTTPS connections, so that all connections use certificates and a secure channel, and you will end up on "https://www.google.com" anyway.

To configure ThingWorx to only allow secure connections there are two options...

1) Remove HTTP access

If HTTP access is removed, users can no longer connect to port 80 or 8080. ThingWorx will only be accessible on port 443 (or 8443).

If connecting to port 8080, clients will not be redirected. The redirectPort in the Connector only forwards requests internally in Tomcat; it does not switch protocols and ports, and does not require a certificate when being used. The redirected port does not reflect in the client's connection but only manages internal port-forwarding in Tomcat.

By removing the HTTP ports for access, any connection on port 80 (or 8080) will end up in an error message that the client cannot connect on this port.

To remove the HTTP ports, edit <Tomcat>\conf\server.xml and comment out sections like

<!-- commented out to disallow connections on port 80
<Connector port="80" protocol="org.apache.coyote.http11.Http11NioProtocol"
           connectionTimeout="20000" redirectPort="443" />
-->

Save and restart Tomcat. If opening Tomcat (and ThingWorx) in a browser via http://myServer/, the connection will fail with a "This site can’t be reached", "ERR_CONNECTION_REFUSED" error.

2) Forcing insecure connections through secure ports

Alternatively, ports 80 and 8080 can be kept open to still allow users and devices to connect. But instead of only internally forwarding the port, Tomcat can be set up to also forward the client to the new secure port. With this, users and devices can still use e.g. old bookmarks and do not have to explicitly set the HTTPS protocol in the address.

To configure this, edit <Tomcat>\conf\web.xml and add the following section just before the closing </web-app> tag at the end of the file:

<security-constraint>
    <web-resource-collection>
        <web-resource-name>HTTPSOnly</web-resource-name>
        <url-pattern>/*</url-pattern>
    </web-resource-collection>
    <user-data-constraint>
        <transport-guarantee>CONFIDENTIAL</transport-guarantee>
    </user-data-constraint>
</security-constraint>

In <Tomcat>\conf\server.xml, ensure that all HTTP Connectors (port 80 and 8080) have their redirectPort set to the secure HTTPS Connector (usually port 443 or port 8443).

Save and restart Tomcat. If opening Tomcat (and ThingWorx) in a browser via http://myServer/, the connection will now be forwarded to the secure port. The browser will show the connection as https://myServer/ instead, and connections are secure and using certificates.

What next?

Configuring Tomcat to force insecure connections to an actually secure HTTPS connection just requires a simple configuration change. If you want to read more about certificates, encryption and how to set up ThingWorx for HTTPS in the first place, be sure to also have a look at:

Trust & Encryption - Theory
Trust & Encryption - Hands On
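For reference, option 2 assumes a working HTTPS Connector in <Tomcat>\conf\server.xml for the HTTP Connectors to redirect to. Below is a minimal sketch of such a Connector; the keystore file and password are placeholders, and the full certificate setup is covered in the articles above:

<Connector port="443" protocol="org.apache.coyote.http11.Http11NioProtocol"
           SSLEnabled="true" scheme="https" secure="true"
           keystoreFile="conf/keystore.jks" keystorePass="changeit"
           sslProtocol="TLS" />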
View full tip
Persistent vs. Logged Properties
By Mike Jasperson, VP of IoT EDC

Executive Summary

ThingWorx provides several different “aspects” (or storage options) for how property values are saved. These options each have different implications for performance and scalability. Understanding those implications is important for designing a scalable IoT solution.

Persistent Properties are best used for non-telemetry data which will change infrequently (for example, only a few times in a day) and where historical values are not required. When overused, persistent properties can put significant pressure on the database layer of your ThingWorx implementation, leading to poor performance of your IoT application. As the number of Things in your IoT application scales up, the quantity or frequency of persistent properties per Thing needs to be carefully considered.

Logged Properties are best used for telemetry data where historical values need to be retained, but also for any other value that is expected to change frequently. Logged properties can create some additional requirements: a process for handling null/default values after restarts, more disk space, and a data retention policy. There are benefits as well, though, like more flexibility and scalability for the ingestion of larger volumes of data.

Persistent + Logged Properties perform the database operations of both aspects. Combined use should be very limited – only properties that update infrequently (a few times a day), and that must be in-memory in the event of a ThingWorx restart.

In-Memory Only Properties are neither persistent nor logged – they are not stored to the database. These properties can greatly improve scale for values that need to be available for the application to drive UIs or compute other derived values that will be stored. However, high-frequency updates of in-memory properties can create scale challenges in HA (high availability) ThingWorx configurations where memory state needs to be constantly shared between multiple ThingWorx nodes.

Find a complete summary as well as example cases in the document attached.
View full tip
Put together a quick example mashup in support of a couple of analytics projects to demonstrate use of the new TWX Analytics 8.1 APIs within the TWX mashup builder. The intention here is for use in POCs, to provide a quick way of demonstrating customer-facing analytics outputs along with the more detailed view available in Analytics Builder.

Required pre-requisites are:
- ThingWorx 8.1 + Analytics Extensions
- ThingWorx Analytics Server 8.1
- Carousel TWX UI widget (attached) imported into TWX
- Data set(s) loaded with signals / profiles generated

The demo can be installed by importing the attached entities file into TWX Composer, then launching the mashup 'EMEA.Analytics.CustomerInsightMashUp'.

A quick run-through of the functionality: On launching the mashup, data sets and models are displayed for selection on the left-hand side. On selecting a dataset and model, signals are presented in two tabs - first an overview of all signals. The list on the left can be expanded by changing the value for 'Top <n> Contributing Features'. On selecting a signal from the list, the 'Selected Signal Details' tab displays additional charting for value ranges, average goal, etc. The number of 'bins' to display can be edited. Similarly, profiles can be viewed from the 'Profiles' tab - each profile can be selected by dragging the upper carousel.

This is all done using the Analytics 8.1 "Things" in TWX, along with an additional custom Thing with some scripted services (EMEA.Analytics.Helper).

Thanks to Arian Van Huelsen & Tanveer Saifee at PTC for their support; all comments / feedback welcome.
View full tip
Original Post Date:     June 6, 2016   Description: This is a video tutorial on creating a DataTable with a DataShape, and adding and retrieving an entry.  
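As a quick reference to go along with the video, here is a minimal sketch of adding and retrieving a DataTable entry from a server-side script. The DataTable name ("MyDataTable"), DataShape name ("MyDataShape"), and its fields (a primary key "id" and a STRING field "value") are assumptions for illustration:

// Build a one-row InfoTable matching the DataTable's DataShape
var row = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape({
    infoTableName: "InfoTable",
    dataShapeName: "MyDataShape" /* hypothetical DataShape name */
});
row.AddRow({ id: "entry-1", value: "hello" });

// Add (or update) the entry in the DataTable
Things["MyDataTable"].AddOrUpdateDataTableEntry({
    values: row /* INFOTABLE */,
    sourceType: "Thing" /* STRING */,
    source: me.name /* STRING */
});

// Retrieve the entry back by its primary key
var entry = Things["MyDataTable"].GetDataTableEntryByKey({
    values: row /* INFOTABLE with the key field(s) set */
});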
View full tip
The document attached to this blog entry came out of my first exposure to using the C SDK on a Raspberry Pi. I took notes on what I had to do to get my own simple edge application working, and I think it is a good introduction to using the C SDK to report real, sampled data. It also demonstrates how you can use the C SDK without having to use HTTPS, including how to turn off HTTPS support. I would appreciate any feedback on this document and what additions might be useful to anyone else who tries to do this on their own.
View full tip
Introduction

The Oracle 12c release introduced the concept of multi-tenant architecture for housing several databases running as services under a single database. I'll try to address the connectivity and required configuration to connect to one of the Pluggable Databases running in the multi-tenant architecture, in scope of using it as a ThingWorx external data source.

What is multi-tenant database architecture?

Running multiple databases under a single database installation. Oracle 12c allows the user to create one database called the Container Database (CDB) and then spawn several databases called Pluggable Databases (PDB) running as services under it.

Why use multi-tenant architecture?

Such a setup allows users to spawn a new PDB as and when needed with limited resource requirements; to easily administer several PDBs just by administering the container database, since all the PDBs are contained within a single database's tablespace structure; and to start and stop individual PDBs, leading to a lower cost of maintaining different databases, as resource management is limited to one CDB.

When to use multi-tenant architecture?

In scenarios like creating PoCs, different test environments requiring external data storage, or maintaining different versions of a dataset, having these run in the multi-tenant architecture can help save time, money and effort.

Create Container Database (CDB)

Creation of a Container Database (CDB) is not very different from creating a non-container database; use the attached guide "Installing Oracle Database Software and Creating a Database.pdf" (same is accessible online).

Create Pluggable Database (PDB)

Use the attached "Multitenant: Create and Configure a Pluggable Database (PDB) in Oracle Database 12c" PDF guide to create and plug a Pluggable Database into the Container Database created in the previous step (same is accessible online). Using the above guide I have a bunch of pluggable databases, as can be seen below. I'll be using TW724 for connecting to the ThingWorx server as an external data source in the following example.

Connect to a Pluggable Database (PDB) as an external data source for ThingWorx

1. Download and unzip the Relational Databases Connectors Extension from the ThingWorx Marketplace and extract Oracle12Connector_Extension
2. Import Oracle12Connector_Extension into ThingWorx using Extension -> Import
3. Create a Thing using the OracleDBServer12 Thing Template, e.g. TW724_PDB_Thing
4. Navigate to the Configuration for TW724_PDB_Thing to update the default configuration:
   JDBC Driver Class Name: oracle.jdbc.OracleDriver
   JDBC Connection String: jdbc:oracle:thin:@//oravm.ptcnet.ptc.com:1521/tw724.ptcnet.ptc.com
   Database Username: <UserName>
   Database Password: <password>
5. Once done, save the entity

Note: A PDB in a container database can be reached only as a service, not via the CDB's SID. In the above configuration TW724 is a PDB which can be connected to via its service name, i.e. TW724.PTCNET.PTC.COM.

Let's head to the Services tab for TW724_PDB_Thing to query and access the PDB data.

Creating services to access the PDB as an external database source for ThingWorx

Once the configuration is done, TW724_PDB_Thing is ready for use. The queries remain the same as any other SQL query needed to access data from Oracle.

Service for creating a table: Once on the Services tab for TW724_PDB_Thing, click Add My Service and select the service handler SQL Command, using the following script to create testTable1 in the PDB:

create table testTable1 (
    id NUMBER GENERATED ALWAYS AS IDENTITY primary key,
    col1 varchar2(100),
    col2 number
)

Note: The GENERATED ALWAYS AS IDENTITY option is Oracle 12c specific, and I included it here because with Oracle 12c the possibility to auto-generate the key is now built in with that option, simplifying sequence generation compared with older Oracle versions such as Oracle 11g. The user creating the table will need access rights for creating tables and sequences; check out the Oracle documentation on Identity for more on this.

Service for getting all the data from the table: Add another service with the script

Select * from testTable1

Service for inserting data into the table: Add another service with the script

insert into testTable1 (col1, col2) values ('TextValue', 123)

Service for getting all tables from the PDB, i.e. TW724: Using

Select * from tab

lists all the available tables in the TW724 PDB.

Summary

Just a quick wrap-up on how this looks visually; refer to the following image. Since this is a scalable setup (given the platform has enough resources, it's possible to create up to 252 PDBs under a CDB), up to 252 PDBs could be created and configured to as many Things extending the OracleDBServer12 Thing Template.

______________________________________________________________________________________________________________________________________________

Edit: Common Connection Troubleshooting

If you observe an error something like this:

Unable to Invoke Service GetAllPDBTables on TW724_PDB_Thing : ORA-01033: ORACLE initialization or shutdown in progress

Ensure that the pluggable database (in this error TW724, since this is what I created and used above in my services) is opened and accessible. If it's not opened, log in as sys/system (with admin rights) to the CDB, which is ORCL in my case, via SQL*Plus, SQL Developer, or any SQL utility of your choice capable of connecting to Oracle DB, and open the pluggable database using the command:

alter pluggable database tw724 open;
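Worth noting: once created, these SQL services behave like any other Thing service and can be invoked from other server-side scripts. A quick sketch, using the service name that appears in the troubleshooting example above:

// List the tables available in the TW724 PDB (service created above);
// the result is an InfoTable that can be returned, iterated,
// or bound to a mashup grid
var tables = Things["TW724_PDB_Thing"].GetAllPDBTables();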
View full tip
ThingWorx provides multiple ways to deliver your data to the server. You can choose from the C-based EMS, your own C application that uses the C SDK, or SDKs for many popular languages. But what can you do if the device you want to collect data on is so small that it needs a very lightweight data delivery method? Normally you would consider using the REST web service interface and writing your own custom client to post your data, but there is an alternative: MQTT. MQTT is a lightweight protocol that can be used from an Arduino with an Ethernet Shield and can stream real-time data directly to ThingWorx by installing the MQTT Marketplace Extension on your server. To learn more about how this kind of solution works, I created this slide deck while building a hardware example: DeliveringArduinoDataToThingworx.pdf. Hopefully it can help others who want to create this kind of solution as well.
View full tip
The AddStreamEntries snippet does not offer too much information, except that it needs an InfoTable as input. It is, however, based on the InfoTable for the AddStreamEntry service.

To use AddStreamEntries, an InfoTable based on sourceType, values, location, source, timestamp and tags must be used.

In this example, I started with a new Thing based on a Stream template and the following DataShape:

This DataShape must be converted into an InfoTable which is used as the values parameter. It's important that the timestamp parameter has distinct values! Otherwise values matching the same timestamp will be overwritten!

We don't really need the sourceType, as ThingWorx will automatically determine the type by knowing the source and which kind of Entity Type it is.

I created a new MyStreamThing with a new service, filling the InfoTable and the Stream. The result is the following code, which will add 5 rows to the Stream:

// *** SET UP META DATA FOR INFO TABLE ***

// create a new InfoTable based on AddStreamEntries parameters
// (timestamp, location, source, sourceType, tags, values)

var myInfoTable = { dataShape: { fieldDefinitions : {} }, rows: [] };

myInfoTable.dataShape.fieldDefinitions['timestamp']  = { name: 'timestamp', baseType: 'DATETIME' };
myInfoTable.dataShape.fieldDefinitions['location']   = { name: 'location', baseType: 'LOCATION' };
myInfoTable.dataShape.fieldDefinitions['source']     = { name: 'source', baseType: 'STRING' };
myInfoTable.dataShape.fieldDefinitions['sourceType'] = { name: 'sourceType', baseType: 'STRING' };
myInfoTable.dataShape.fieldDefinitions['tags']       = { name: 'tags', baseType: 'TAGS' };
myInfoTable.dataShape.fieldDefinitions['values']     = { name: 'values', baseType: 'INFOTABLE' };

// *** SET UP ACTUAL VALUES FOR INFO TABLE ***

// create new meta data

var tags = new Array();
var timestamp = new Date();
var location = new Object();
location.latitude = 0;
location.longitude = 0;
location.elevation = 0;
location.units = "WGS84";

// add rows to InfoTable (~5 times)

for (var i = 0; i < 5; i++) {

    // create new values based on Stream DataShape

    var params = {
        infoTableName : "InfoTable",
        dataShapeName : "Cxx-DS"
    };

    var values = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);

    // add something to the values to make them unique
    // create and add new row based on Stream DataShape
    // only a single line allowed!

    var newValues = new Object();
    newValues.a = "aaa" + i; // STRING - isPrimaryKey = true
    newValues.b = "bbb" + i; // STRING
    newValues.c = "ccc" + i; // STRING

    values.AddRow(newValues);

    // create new InfoTable row based on meta data & values
    // add 10 ms to each object, to make its timestamp unique
    // otherwise entries with the same timestamp will be overwritten

    var newEntry = new Object();
    newEntry.timestamp = new Date(Date.now() + (i * 10));
    newEntry.location = location;
    newEntry.source = me.name;
    newEntry.tags = tags;
    newEntry.values = values;

    // add new Info Table row to Info Table

    myInfoTable.rows[i] = newEntry;

}

// *** ADD myInfoTable (HOLDING MULTIPLE STREAM ENTRIES) TO STREAM ***

// add stream entries in the InfoTable

var params = {
    values: myInfoTable /* INFOTABLE */
};

// no return

Things["MyStreamThing"].AddStreamEntries(params);

To verify the values have been added correctly, call the GetStreamEntriesWithData service on the MyStreamThing.
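A minimal sketch of that verification step, reading the entries back:

// read back up to 10 entries from the stream, including their values
var params = {
    maxItems: 10 /* NUMBER */
};

var entries = Things["MyStreamThing"].GetStreamEntriesWithData(params);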
View full tip
Sometimes you need the values from different ThingTemplate members in ONE grid. Therefore it would be great if you could join two "GetImplementedThingsWithData" results into a common one. Here is a script that works generally, as long as you don't mess with datatypes on same-named columns. I'm very interested if someone can find a much easier solution. The Union function was the only one I found suited for the task, but it needs preparation of the InfoTables upfront.

Input:
Table1: InfoTable
Table2: InfoTable

Output: InfoTable

Here is the snippet:

// Define params for an InfoTable to hold column names
var params = {
    infoTableName: "field" /* STRING */
};

// Define column 1
var newField = new Object();
newField.name = "field";
newField.baseType = 'STRING';

// Two 1-column InfoTables to store the field definitions
var field1 = Resources["InfoTableFunctions"].CreateInfoTable(params);
field1.AddField(newField);
var field2 = Resources["InfoTableFunctions"].CreateInfoTable(params);
field2.AddField(newField);

// Define the cell to add to the InfoTables
var myField = new Object();
myField.field = "";
myField.baseType = "STRING";

// Loop through Table1
var dataShapeFields = Table1.dataShape.fields;
for (var fieldName in dataShapeFields) {
    logger.debug('field1 name is ' + dataShapeFields[fieldName].name);
    myField.field = dataShapeFields[fieldName].name;
    field1.AddRow(myField);
}

// Loop through Table2
var dataShapeFields = Table2.dataShape.fields;
for (var fieldName in dataShapeFields) {
    logger.debug('field2 name is ' + dataShapeFields[fieldName].name);
    myField.field = dataShapeFields[fieldName].name;
    field2.AddRow(myField);
}

// Using inner join functionality to keep only the column names that exist in both
var params = {
    columns2: "field" /* STRING */,
    columns1: "field" /* STRING */,
    joinType: "INNER" /* STRING */,
    t1: field1 /* INFOTABLE */,
    t2: field2 /* INFOTABLE */,
    joinColumns1: "field" /* STRING */,
    joinColumns2: "field" /* STRING */
};
var commonFields = Resources["InfoTableFunctions"].Intersect(params);

// Loop over the result to build a search string
var commonColumns = "";
var tableLength = commonFields.rows.length;
for (var x = 0; x < tableLength; x++) {
    var row = commonFields.rows[x];
    commonColumns = commonColumns + row.field + ",";
}

// Reduce Table1 to match only common columns
var params = {
    t: Table1 /* INFOTABLE */,
    columns: commonColumns /* STRING */
};
var result1 = Resources["InfoTableFunctions"].Distinct(params);

// Reduce Table2 to match only common columns
var params = {
    t: Table2 /* INFOTABLE */,
    columns: commonColumns /* STRING */
};
var result2 = Resources["InfoTableFunctions"].Distinct(params);

// At the END, JOIN the tables together (does not work if columns are different)
var params = {
    t1: result1 /* INFOTABLE */,
    t2: result2 /* INFOTABLE */
};
var result = Resources["InfoTableFunctions"].Union(params);
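A hypothetical usage sketch, feeding the snippet with the data of two different ThingTemplates (the template names are placeholders):

// gather the data of all Things implementing each template
var Table1 = ThingTemplates["TemplateA"].GetImplementedThingsWithData();
var Table2 = ThingTemplates["TemplateB"].GetImplementedThingsWithData();

// ... run the snippet above; 'result' then holds the combined rows,
// restricted to the columns both templates have in common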
View full tip
This video shows the steps to install ThingWorx Analytics release 8.3  
View full tip
Modbus is a commonly used communications protocol that allows data transfer between computers and PLCs. This is intended to be a simple guide on setting up and using a Modbus PLC Simulator with ThingWorx. ThingWorx provides Modbus packages for Windows, Linux and Linux ARM. The Modbus package contains libraries and Lua files intended to be used along with the Edge Microserver.

Note: The Modbus package is not intended as an out-of-the-box solution.

Requirements:
- ThingWorx Platform
- Edge Microserver
- Modbus package
- Modbus PLC Simulator

In this guide, a free Modbus PLC Simulator is used. Here is the direct download link for their v8.20 binary release.

Configuring the EMS:

The first step is to configure the EMS as a gateway. This is done via adding an auto_bind section in the config.json:

"auto_bind": [ {
    "name": "ModbusGateway",
    "gateway": true
}]

This creates an ephemeral Thing that only exists when the EMS is running. The next step is to modify the config.lua to include the Modbus configuration. Copy the contents of the etc folder of the Modbus package over to the etc folder of the EMS. A sample config_modbus.lua is provided in the Modbus package as a reference. The following code defines a Thing called MyPLC (which is a Remote Thing created on the Platform):

scripts.MyPLC = {
    file = "thing.lua",
    template = "modbusExample",
    identifier = "plc",
    updateRate = 2000
}

scripts.Thingworx = {
    file = "thingworx.lua"
}

scripts.modbus_handler = {
    file = "modbus_handler.lua",
    name = "modbus_handler",
    host = "localhost"
}

Adding 'modbusExample' to the above script enables the usage of the template located at /etc/custom/templates/. 'modbusExample' is a reference point for creating a script to add the registers of the PLC. The given template has examples for different base types. The different types of available registers are noted and referenced in the modbus.lua file available under /etc/thingworx/lua/.

Setting up the PLC Simulator:

Extract mod_RSsim to a folder and run the executable. Since we are 'simulating' a PLC connection, set the protocol to Modbus TCP/IP. Change the I/O to Holding Registers (or any other relevant option), with the Address set to Dec. In the Simulation menu, select 'No animation' if you want to enter values manually, or use 'Increment BYTES' to automatically generate values. This PLC Simulator will run at port 502.

The Connection:

With the EMS & luaScriptResource running, the PLC Simulator should have a connection to the platform, with activity in the received/sent section. Now if you open the Remote Thing 'MyPLC' in the platform, the isConnected property (under the Properties section) should be true. (If not, go back to General Information, click on Browse in the Identifier section and select 'plc'.)

Go back to the Properties section, and click on Manage Bindings. Click on the Remote tab and the list of defined properties should appear. For example, the following line from modbusExample.lua:

properties.Int16HoldRegExample = {key="holding_register/1/40001?format=Int16", handler="modbus_handler", basetype="NUMBER"}

denotes a property named Int16HoldRegExample at register 40001. The value at address 40001 in the PLC Simulator should correspond with the value on the platform once this property is added and the Thing saved.

If you are running into any errors when connecting with a Raspberry Pi, please take a look at Duan Gauche's follow-up document/guide: Using your Raspberry Pi with the Edge Microserver and Modbus
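Once the binding is in place and isConnected is true, the register value can be read server-side like any other Thing property; a one-line sketch using the property defined above:

// current value of holding register 40001, as bound via modbusExample.lua
var currentValue = Things["MyPLC"].Int16HoldRegExample;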
View full tip
Alerts are a special type of event. Alerts allow you to define rules for firing events. Like events, you must define a subscription to handle a change in state. All properties in a Thing Shape, Thing Template, or Thing can have one or more alert conditions defined. You can even define several of the same type of alert. When an alert condition is met, ThingWorx throws an event. You can subscribe to the event and define the response to the alert using JavaScript. Events also fire when a property alert is acknowledged and when it goes out of alert condition.

Alert Types

Alerts have conditions which describe when the alert is triggered. The types of conditions available depend upon the property type. For example, string alerts may be triggered when the string matches pre-set text, while a number alert may be set to trigger when the value of the number is within a range.

EqualTo: Alert is triggered when the defined Value is reached. Applies to Boolean, DateTime, Infotable (in regard to number of rows), Integer, Long, Location, Number, and String base types.

NotEqualTo: Alert is triggered when the defined Value is not reached. Applies to Boolean, DateTime, Infotable (in regard to number of rows), Integer, Long, Location, Number, and String base types.

Above: Alert is triggered when the defined Limit is exceeded or met (if the Limit is included). By default, the Limit is included. Applies to DateTime, Infotable, Integer, Long, and Number base types.

Below: Alert is triggered when the alert value is below the defined Limit or meets it (if the Limit is included). By default, the Limit is included. Applies to DateTime, Infotable, Integer, Long, and Number base types.

InRange: Alert is triggered when a value is between a defined range. By default, the minimum value is included, but the maximum can be included as well. Applies to DateTime, Integer, Long, and Number base types.

OutofRange: Alert is triggered when a value is outside a defined range. By default, the minimum value is included, but the maximum can be included as well. Applies to DateTime, Integer, Long, and Number base types.

DeviationAbove: Alert is triggered when the property value minus the alert Value is greater than the alert Limit ((property value - alert value) > alert Limit). If the Limit is included, the alert is triggered when the property value minus the alert Value is greater than or equal to the alert Limit ((property value - alert value) >= alert Limit). By default, the Limit is included. Applies to DateTime, Integer, Long, Location, and Number base types.

DeviationBelow: Alert is triggered when the property value minus the alert Value is less than the alert Limit ((property value - alert value) < alert Limit). If the Limit is included, the alert is triggered when the property value minus the alert Value is less than or equal to the alert Limit ((property value - alert value) <= alert Limit). By default, the Limit is included. Applies to DateTime, Integer, Long, Location, and Number base types.

Anomaly: Alert is triggered when the property value falls outside of an expected pattern as defined by a predictive model. Applies to Integer, Long, and Number base types.

Alert types are specific to the data type of the property. Properties configured as the following base types can be used for alerts: Boolean, DateTime, Infotable, Integer, Location, Number, and String.

Creating an Alert

When creating an alert:

- You can set it to be enabled or disabled
- Alerts must have a ThingWorx-compatible name and can optionally contain a description
- You must set the limit(s) to determine when the event fires
- If an Include Limit is included, the event fires when the Limit Condition is met
- Not including the Limit causes the event to fire when the Limit Condition is surpassed
- The priority is a metadata field that enables the addition of a priority. It does not impact the Event/Subscription handling or sequence, because the system fires events off asynchronously.

Steps to create or modify an Alert:

1. Select an existing Property or create a new Property for a Thing, Thing Shape, or Thing Template for which to create/update the Alert
2. Click Manage Alerts
3. Click the New Alert drop-down and select the appropriate Alert Type (note: the available fields will vary depending on the data type of the Property)
4. Deselect Enabled if you do not wish to enable the Alert at the present time (the Alert is enabled by default)
5. Provide a Name and optional Description for the Alert
6. Enter a Limit (numeric properties)
7. Select Include Limit? if the value entered in the Limit field should trigger the Alert
8. Select the appropriate Priority. (The Priority is a metadata field for searching and categorization only. It does not affect the order of processing, CPU or memory usage.)
9. After defining an Alert, you can click New Alert to add additional alerts of either the same or a different condition. You can also click Add New to add additional alerts of the same condition.
10. When all Alerts have been created, click Update
11. Click Done
12. Once all Properties have been updated as needed, click Save

Once Alerts are defined, they appear on the Properties page (while in Edit mode).

After an Alert is defined, a Subscription to that Alert can be configured to launch the appropriate business logic, such as notifying a user of an Event through email or text message.

Monitoring Alerts

When an Alert condition is met, ThingWorx fires off an Alert. You can create a Subscription to the Alert so that you are automatically notified when an Alert is triggered. Alerts are written to the alert history file and can be viewed through the Alert Summary and Alert History mashups. The system tracks acknowledged and unacknowledged alerts.

Alerts do not fire redundant events. For example, if a numeric property has a rule defined that generates an alert when the value is greater than 50, and a value = 51, an alert is generated and an alert event will fire. If another value comes in at 53 before the original alert is acknowledged, another event will not be fired because the current state is still greater than 50.

The Alert History and Alert Summary streams provide functionality to monitor alerts in the system. Alert History is a comprehensive log that records all information recorded into the alert stream, where the data is stored until manually removed. The Alert Summary provides the ability to filter by all alerts, unacknowledged alerts, or acknowledged alerts. You can also acknowledge alerts on a selected property or all alerts from a particular source (thing). This information can be retrieved using scripts as well, so you can create your own Alert Summary and History mashups.

From the ThingWorx header, choose Monitoring > Alert History. All Alerts are listed here. Click the Alert Summary, then click the Unacknowledged tab to view alerts that have not been acknowledged. Choose to acknowledge an alert on a property or on the source, type a message in the corresponding field, and click Acknowledge.

For each alert, the following displays:

- Property name
- Source thing – lists the thing that contains this property with the alert
- Timestamp – indicates when the alert was triggered
- Name and type of alert
- Duration – details how long the alert has been active
- AckBy – indicates if the alert has been acknowledged and, if so, by whom and when
- Message – defaults to the condition but is overwritten with the acknowledge message if one exists
- Alert description

The Alert History screen displays all Alerts that were once in an alert condition but have moved out of that alert condition. A Data Filter is provided at the top of the mashup to more easily find a particular Source, Property, or Alert. The Alert History report is a ThingWorx mashup created using standard ThingWorx functionality. This means that any developer has the ability to re-create this report or a modification of it.

Acknowledging Alerts

An acknowledgement (ack) is an indication that someone has seen the alert and is dealing with it (for example, low helium in an MRI machine and someone is filling it). Alert History shows when alerts were acknowledged and any comments.

You can acknowledge an alert on a property or on the source. A source acknowledgment acknowledges all alerts on the source Thing for the selected alert in Monitoring > Alert Summary. A property acknowledgment (ack) only acknowledges the alerts on the property for the selected alert in Alert Summary.

For example, you create a Thing with two properties that have alerts set up. You put both properties in their alert states. View Alert Summary and select the Unacknowledged tab. You should see two alerts. Select one, and do a property acknowledgement. The alert you selected moves to the Acknowledged tab and is removed from the Unacknowledged tab. Put both properties in their alert states again, select one of the alerts on the Unacknowledged tab, and this time do a source acknowledgement. In this case, both alerts move to the Acknowledged tab, even though you only selected one of them.

For more information about Alerts, click here. To view a tutorial video on alerts, click here. Refer to this article for best practices affecting alerts.
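Acknowledging from a script is also possible. A minimal sketch, assuming a Thing named "MyThing" with an alert configured on its "Temperature" property (both names are placeholders):

// acknowledge all alerts on one property of the Thing
Things["MyThing"].AcknowledgeAlert({
    property: "Temperature" /* STRING */,
    message: "Investigating high temperature" /* STRING */
});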
View full tip
Hello! I have just written a tutorial on how to set up Lua to be run from the command line. As many already know, there is no good way to debug Lua scripts as they are used in package deployment in Software Content Manager, and building such a debugger is a vast and difficult undertaking. As an alternative, running small portions of code on the command line to ensure they will work as expected is one way to verify the validity of the Lua syntax prior to attempting a deployment.

Here are the steps to set up Lua as a command line tool:

1. Grab the GCC compiler called TDM-GCC
2. Run the exe file and follow the install instructions
   - Remember the install directory, as the attached install script will need to be configured in a later step (the default location in the install file is "C:\Program Files\TDM-GCC\")
   - Note: leaving "Add to PATH" selected will also allow you to compile C code on the command line by typing "gcc" (this is not required for this set-up)
3. Download the Lua source code
   - This comes as a ".tar.gz" file, which can be tricky to extract in Windows; download 7zip for freeware which can unzip this type of archive
   - Extract the Lua source code and navigate to the top-level directory, which should just contain one folder named like "lua-x.x.x", where the x's refer to the version
4. Download and extract the attached zip file containing the build file
5. Copy the "build.cmd" file from this to the top-level directory of the Lua source code
6. Modify the configuration as needed:
   - Version number default is 5.3.4 (parameter is called lua_version)
   - GCC install path default is "C:\Program Files\TDM-GCC\bin" (parameter is called lua_build_dir)
7. Double-click the "build.cmd" file
   - A console window will appear with installation details; if it worked successfully, the "lua\" directory will be created in the same folder as the "build.cmd" file
8. Copy the "lua\" directory to "C:\Program Files\"
9. Open "Computer" > "System Properties" > "Advanced system settings"
10. Click "Environment Variables" > "New..."
11. Call the variable "LUA" and make the value "C:\Program Files\lua\bin"
12. Find the "Path" variable and click "Edit..."
13. At the end of what is already there (do NOT delete anything that is already there), add "%LUA%" (make sure there is a ";" between the previous path entry and this one)
14. Click "Ok"
15. Open a new command prompt (it has to be new to load the new path) and type "lua" to see if it works

An example syntax test from the Lua command line is shown in the attached screenshot.

I hope this is helpful to people! Let me know if you have any questions!
View full tip