
IoT & Connectivity Tips

This video is the first part of a series of 3 videos walking you through how to set up ThingWatcher for Anomaly Detection. In this first video you will learn the basics of how to create connectivity between KEPServer and the ThingWorx Platform. Updated link for access to this video: Anomaly Detection 8.0 - Part 1: Connecting KEPServer to ThingWorx: Part 1 of 3
View full tip
The purpose of this document is to show how you can set up an MXChip IoT DevKit and how to send the readings of this microprocessor to ThingWorx through an Azure cloud server. You will also learn how to view the values that are being sent.
View full tip
We ran a test against Connection Server 7.2. With a 4-core CPU and 8 GB of memory, we sent 1,000 HTTP requests per second and about 5% of the requests were lost. After changing the connection server configuration, the loss rate dropped to 0.86%. Here are some suggestions to improve connection server performance:
1. Adjust the parameters in the connection server configuration file cxserver.conf (..\conf\cxserver.conf), in particular max-connection-pool-size and max-wait-queue-size.
2. Change the default JVM settings (increase the JVM memory appropriately). In this case, I created a new file named startMyConnectionServer.bat with the following content:
SET CONNECTION_SERVER_HOME=C:\connection-server-7.2.0.2095
SET JAVA_OPTS=-Xms2G -Xmx2G
%CONNECTION_SERVER_HOME%\bin\connection-server.bat
3. Increase the connection server's hardware (memory, CPU cores). The minimum system requirement is 16 GB of memory and 4 CPU cores; refer to ThingWorx Core 8.0 System Requirements for more hardware information.
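As a rough sketch of suggestion 1 (the exact section where these keys live inside cxserver.conf depends on your connection server version, and the values below are placeholders to tune and verify against your own file and load):
# inside cxserver.conf (HOCON format) - placeholder values, adjust for your expected request rate
max-connection-pool-size = 200   # allow more pooled connections to the ThingWorx platform
max-wait-queue-size = 200        # allow more requests to queue before being rejected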
View full tip
In this video we cover the different configuration steps required for the ThingWorx Analytics Builder extension. This video applies to ThingWorx Analytics 52.1 through 8.1.   Note: although this video uses Classic Composer, the same operations can be done using the New Composer starting with version 8.0, as illustrated in the Help Center. For release 8.1, the Settings menu differs from previous versions; see Video Link: 2079 between 00:12 and 00:40 for the up-to-date menu selection.   Updated link for access to this video: Installing ThingWorx Analytics Builder: Part 2 of 3
View full tip
Concepts of Anomaly Detection used in ThingWatcher
ThingWatcher is based on anomaly detection with the normal distribution. What does that mean? Normally distributed metrics follow a set of probabilistic rules. Upcoming values that follow those rules are recognized as being "normal" or "usual", whereas values that break those rules are recognized as being unusual.

What is a normal distribution?
A normal distribution is a very common probability distribution. In real life, the normal distribution approximates many natural phenomena. A data set is known as "normally distributed" when most of the data aggregates around its mean in a symmetric way, and its extreme values get less and less likely to appear.

Example
When a factory makes 1 kg sugar bags, it doesn't always produce exactly 1 kg. In reality, the weight is around 1 kg: most of the time very close to 1 kg and very rarely far from it. Indeed, the production of 1 kg sugar bags follows a normal distribution.

Mathematical rules
When a metric is normally distributed it follows some interesting laws, as the sugar bag example does:
- The mean and the median are the same; both are equal to 1000 g, because of the perfectly symmetric "bell shape".
- The standard deviation, called sigma (σ), defines how the normal distribution is spread around the mean. In this example σ = 20.
- 68% of all values fall within [mean - σ; mean + σ]; for the sugar bags, [980; 1020].
- 95% of all values fall within [mean - 2σ; mean + 2σ]; for the sugar bags, [960; 1040].
- 99.7% of all values fall within [mean - 3σ; mean + 3σ]; for the sugar bags, [940; 1060].
The last three rules are also known as the 68-95-99.7 rule, also called the three-sigma rule of thumb.

When the rules get broken: it's an anomaly
As previously stated, when a metric has been shown to be normally distributed, it follows a set of rules. Those rules become the model representing the normal behavior of the metric. Under normal conditions, upcoming values will match the normal distribution and the model will be followed. But what happens when the rules get broken? This is when things turn different, as something unusual is happening. In theory, in a normal distribution no value is impossible: if the weights of the sugar bags were truly normally distributed, we would probably find a bag of 860 g once every billion products. In reality, we approximate the sugar bag example as normally distributed, and such almost-impossible values are treated as impossible.

Techniques of Anomaly Detection
Technique 1: outlier values
An almost-impossible value can be considered an anomaly. When a value deviates too much from the mean, say by more than ±4σ, we can treat it as an anomaly (this limit can also be calculated using percentiles). Sugar bags that weigh less than 920 g or more than 1080 g are considered anomalous; chances are there is a problem in the production chain. This provides a simple way to define maximum and minimum thresholds.

Technique 2: detecting change in the normal distribution
Technique 1 can detect unusual values quickly, using only a few points, but it cannot detect anomalies that drift from one sigma band to another in an otherwise usual-looking manner. To detect this kind of anomaly we use a "window" of the n most recent values. If the mean and standard deviation of this window change too much from their usual values, we can infer an anomaly. The bigger the window, the more stable the detection becomes, but the longer it takes to detect an anomaly, since more values must be aggregated before detection.
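To make the two techniques concrete, here is a minimal JavaScript sketch (not ThingWatcher's actual implementation; the 4σ limit, the window size, and the drift thresholds are arbitrary placeholders):
// Baseline statistics learned from "normal" data
function stats(values) {
    var mean = values.reduce(function (a, b) { return a + b; }, 0) / values.length;
    var variance = values.reduce(function (a, v) { return a + Math.pow(v - mean, 2); }, 0) / values.length;
    return { mean: mean, sigma: Math.sqrt(variance) };
}
// Technique 1: flag a single value that deviates more than 4 sigma from the mean
function isOutlier(value, baseline) {
    return Math.abs(value - baseline.mean) > 4 * baseline.sigma;
}
// Technique 2: flag a window of recent values whose mean or spread drifted too far from the baseline
function windowDrifted(windowValues, baseline) {
    var w = stats(windowValues);
    var meanShift = Math.abs(w.mean - baseline.mean) > 2 * baseline.sigma;      // placeholder threshold
    var sigmaShift = Math.abs(w.sigma - baseline.sigma) > 0.5 * baseline.sigma; // placeholder threshold
    return meanShift || sigmaShift;
}
// Sugar bag example: mean 1000 g, sigma 20 g
var baseline = { mean: 1000, sigma: 20 };
console.log(isOutlier(915, baseline));                                // true: beyond 1000 +/- 80 g
console.log(windowDrifted([1045, 1048, 1042, 1050, 1046], baseline)); // true: window mean and spread both drifted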
View full tip
In the 8.2 release, we have upgraded our Mashup Runtime to jQuery 3.2. This gives the platform a much-needed upgrade to its core visualization library, bringing bug fixes, better performance, security enhancements, HTML5 compatibility, and support for current browsers. We will continue to upgrade all of our libraries across the platform with each ThingWorx release to ensure we stay current and optimized.

We are releasing this functionality in 8.2 as an early preview and to enable regression testing of your ThingWorx applications. In the Next Gen Composer, simply click on User Preferences and look for this setting:
Enabling the jQuery 3 runtime

jQuery 3 does introduce some breaking changes (not from PTC!) that may affect existing apps. We recommend turning on the jQuery 3 option and testing your apps as soon as possible so there is time to fix any issues. Please let PTC know if you find issues through our support site, and our support staff will coach you through the upgrade process. In 8.3, the jQuery 3 library will be the default for Mashup design and runtime. This means you will need to address any compatibility issues that jQuery 3 introduces (if any) in your widgets/applications before upgrading to the 8.4 release, where jQuery 3 will be the only available option. We hope this dual-mode early access will help everyone through the transition and produce the best IoT applications possible!

You can also use these guides for reference:
https://jquery.com/upgrade-guide/3.0/
https://github.com/jquery/jquery-migrate#migrate-older-jquery-code-to-jquery-30
https://blog.jquery.com/2017/03/16/jquery-3-2-0-is-out/
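As a quick illustration of the kind of breaking change the upgrade guides cover (both removals below are documented jQuery 3.0 changes; the selectors are just placeholders, and whether they affect you depends on your own widgets):
// .size() was removed in jQuery 3; use the .length property instead
var widgetCount = $(".my-widget").length;      // before jQuery 3: $(".my-widget").size();

// the .load()/.unload()/.error() event shorthands were removed; bind with .on() instead
$("#logo").on("load", function () {            // before jQuery 3: $("#logo").load(function () { ... });
    console.log("Logo image loaded");
});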
View full tip
This is an example of setting up remote desktop and file transfer for an asset in ThingWorx Utilities using the Java Edge SDK.
Step 1. EMS configuration
ClientConfigurator config = new ClientConfigurator();
// application key RemoteAccessThingKey
String appKey = "s2ad46d04-5907-4182-88c2-0aad284f902c";
config.setAppKey(appKey);
// ThingWorx server URI
config.setUri("wss://10.128.49.63:8445//Thingworx/WS");
config.ignoreSSLErrors(true);
config.setReconnectInterval(15);
SecurityClaims claims = SecurityClaims.fromAppKey(appKey);
config.setSecurityClaims(claims);
// enable tunnels for the EMS
config.tunnelsEnabled(true);
// initialize a virtual thing with identifier PTCDemoRemoteAccessThing
VirtualThing myThing = new VirtualThing(ThingName, "PTCDemoRemoteAccessThing", "PTCDemoRemoteAccessThing", client);
// for the file transfer functionality, use a FileTransferVirtualThing instead and add a virtual directory:
FileTransferVirtualThing myThing = new FileTransferVirtualThing(ThingName, "PTCDemoRemoteAccessThing", "PTCDemoRemoteAccessThing", client);
myThing.addVirtualDirectory("AssetRepo", "E:/AssetRepo");
Step 2. Install TightVNC on the asset (http://www.tightvnc.com/download.php)
Step 3. In ThingWorx Composer, search for the Thing PTCDemoRemoteAccessThing:
a. pair PTCDemoRemoteAccessThing with the identifier PTCDemoRemoteAccessThing
b. go to PTCDemoRemoteAccessThing Configuration and add a tunnel (name: vnc, host: asset IP, port: 5900, the default port for VNC servers; it can be changed)
c. go to PTCDemoRemoteAccessThing Properties and set the vncPassword
Troubleshooting
If the VNC server is on the same machine as the ThingWorx server, check "Allow loopback connections" on the Access Control tab in the TightVNC Server Configuration.
View full tip
Connect a Raspberry Pi to ThingWorx using the Edge MicroServer (EMS).

GUIDE CONCEPT
This project will utilize the Edge MicroServer (EMS) to connect ThingWorx Foundation to a Raspberry Pi.

YOU'LL LEARN HOW TO
- Set up a Raspberry Pi
- Install, configure, and launch the Edge MicroServer (EMS)
- Connect a remote device to ThingWorx Foundation

NOTE: The estimated time to complete all parts of this guide is 60 minutes.

Step 1: Introduction
A Raspberry Pi is a small, single-board computer that utilizes an ARM processor and typically runs a variant of Linux. Due to its small size, relatively affordable cost, and ability to run a full operating system, the Pi is a near-ideal device to utilize as a Proof-of-Concept (PoC) IoT Edge device. In addition, there is a version of the ThingWorx Edge MicroServer (EMS) built to work on ARM processors. Therefore, this guide will explore getting the EMS running on a Raspberry Pi to connect to ThingWorx Foundation.

As stated in the Overview, you may purchase a Pi directly from the Raspberry Pi web site or from a distribution partner such as Digi-Key or RS. You will also need an SD card (8+ GB mandatory, 16+ GB recommended) with the Raspbian operating system installed, though this guide will instruct you on installing the Raspbian OS on a microSDHC flash card if you prefer to purchase an SD card separately. You may alternately wish to purchase a "Pi Canakit". Canakits, depending on the version, typically include a Pi, an SD card with a version of Raspbian pre-installed, and various other items like sensors, a case, an HDMI cable, and other accessories. To make this guide as straightforward as possible, we'll assume a monitor, USB keyboard, USB mouse, and WiFi connectivity to interact with the Pi. Note that the Pi has an HDMI port, so you may also need an HDMI-to-DVI converter or similar if your monitor doesn't natively support HDMI.

Step 2: Format MicroSDHC Card
The microSDHC flash card which the Pi accepts may (or may not) come pre-installed with the Raspbian OS. If Raspbian is pre-installed and working, you may skip this step. However, these flash cards are susceptible to corruption, especially if proper static-control precautions are not followed or if the Pi is powered down without going through a proper shutdown procedure. As such, the steps immediately below assume that you are installing (or re-installing) Raspbian on your microSDHC card. Depending on your PC's ports, you may also require a microSDHC adapter to insert the flash card into your computer.
Locate your microSDHC card. Remember that 8+ GB is mandatory, but 16+ GB is recommended to ensure that the Pi has enough swap space.
Locate your flash card adapter. Note that you may have a different type of adapter; simply ensure that your PC can recognize the microSDHC card.
Insert the microSDHC card into the adapter.
Insert the adapter with the microSDHC card into your PC.
Assuming a Windows PC and either a pre-installed or corrupted flash card, you will receive a pop-up stating that it needs to be formatted prior to use; click Format disk.
On the following Format SDHC Card pop-up, click Start.
On the following Format Confirmation pop-up, click OK.
On the following Format Complete pop-up, click OK.
On the previous Format pop-up which is still open, click Close.
You now have a formatted microSDHC card which Windows can recognize.
Step 3: Flash MicroSDHC Card
Now that the flash card is accessible to Windows, you want to install the Raspbian OS on it. Once again, this step assumes that you are installing (or re-installing) the Raspbian OS. If your microSDHC card came pre-installed with Raspbian, then you may skip this step.
1. Download the Raspbian OS .zip file.
2. Navigate to the download location and locate the Raspbian .zip file.
3. Right-click on the file and select Extract All....
4. On the Extract pop-up, click Extract.
5. Download the balenaEtcher "flasher" software.
6. Navigate to the download location and locate the balenaEtcher .exe file.
7. Double-click on the balenaEtcher .exe to begin the installation process.
8. On the balenaEtcher installer pop-up, click I Agree. After the installation completes, balenaEtcher will automatically open.
9. Click Select image and navigate to the previously-extracted Raspbian OS .img file.
10. Select the .img file and click Open. Assuming the only microSDHC card currently inserted into your PC is the one for the Pi, the SD SCSI Disk Device will be pre-selected; otherwise, choose the correct flash disk.
11. Click Flash!, and accept allowing the etcher to make changes to your computer.
12. Wait for balenaEtcher to complete the flashing process; this may take ~5-10 minutes.
13. Remove the microSDHC card and adapter from your PC.

Click here to view Part 2 of this guide.
View full tip
The past few years have seen a tremendous explosion in the number of devices with Internet connectivity, and the Axeda product has continued to evolve to meet the requirements of these devices, which are often memory and CPU constrained. Beginning with the Axeda Platform 6.6 release, the Axeda Adaptive Machine Messaging Protocol (AMMP) has been available; it allows any device that can make an HTTP GET/POST request to send data to the Axeda Machine Cloud. AMMP uses standard JSON messages sent inside HTTP requests.

How to get AMMP
AMMP is a separately available product that can be added to an Axeda installation. Customers should discuss with their Account Managers to get access to the Axeda AMMP product offering. AMMP is a device codec that requires the installation of the Axeda AnyDevice Codec Server (ACS) component. This configuration presents a new endpoint for customer installations: an https://example.axeda.com instance will have an ACS instance available at https://example-connect.axeda.com. Self-hosted customers can create their own hosting/URL infrastructure, but this is the default available to Axeda On-Demand Center customers.

How to set up assets to talk AMMP
Before you can start deploying devices that talk AMMP to an Axeda instance, a Model Communication Profile must be created for any Model that is expected to communicate with the AMMP protocol. This is required in order to configure the proper egress mechanism so that the device can get access to messages waiting for it on the Axeda Platform. Attached to this document is a file called AMMP_CreateCommProfile.groovy.zip. This file contains a Groovy-based script that is copied into the Platform as a Custom Object and then executed via Scripto; it needs to be run for each device Model that requires access via AMMP.

First Requests
Since all requests are made via HTTP, it is useful to acquire a client that can perform REST API requests. One highly recommended client is Postman (available in Chrome and standalone versions). The first time a device connects to the Axeda Machine Cloud it should send a registration message. A registration message must contain a model and serial number and can optionally include a ping rate. The ping rate is how often the server should expect to hear from the device; the server will mark a device as off-line if it does not get a message before a configurable number of ping times pass. A device should also send a registration message whenever it powers up.
Using Postman, select POST from the drop-down next to the URL. Enter the URL as shown below to register a new device (customers should use their own instances in the following examples):
https://example-connect.axeda.com/ammp/assets/1
Click Headers to add a header named 'Content-Type' with a value of 'application/json'.
Click the Body button and enter the JSON below. The "mn" field should be an already registered and configured model.
{
    "id": { "mn": "ExampleModel", "sn": "ExampleDevice-001", "tn": 0 },
    "pingRate": 60
}
Click the Send button; an HTTP 200 OK response is the expected result. Logging into the platform and searching for 'ExampleDevice-001' should show that it is registered.
Now that a device has been registered, it is possible to send live information to the Axeda Machine Cloud. Sending information is also done with a POST to a URL; instead of the asset's resource, we will be sending to the data resource.
Using Postman, the HTTP POST request will be structured as follows:
http://example-connect.axeda.com/ammp/data/1/ExampleModel!ExampleDevice-001
Change the body as follows:
{"alarms":[{"name":"over_temp","description":"freezer hot"}]}
Click the 'Send' button, and after the response returns, the alarm can be seen on the Asset Status Page in the Axeda Machine Cloud.
Four different types of information can be sent to the data resource, and any or all of them can be included in one POST message. All four types can have optional time and priority fields; if no time is specified, the time on the server at arrival will be used.

Alarm fields
- name (String, required): identifies the alarm. Format: valid string characters, length 0 <= N <= ?.
- description (String, optional): describes the alarm. Default: none.
- severity (Integer, optional): describes the severity. Range: 0 <= N <= 1000. Default: 0.
- cause (String, optional): identifies the cause. Default: none.
- reason (String, optional): describes the cause. Default: none.
- time (ISO-8601 string or Unix epoch timestamp, optional): when the alarm occurred. Server time is used if omitted.
- priority (Integer, optional): how much priority the server should give to processing. Range: 1 <= N <= 100. Default: 1.
Alarm example JSON:
{
    "alarms": [
        {
            "name": "RadiationLeak",
            "description": "A radiation leak has been detected",
            "severity": 1000,
            "cause": "CoolantPipeBurst",
            "reason": "The main coolant pipe exploded",
            "time": 1364443200000,
            "priority": 100
        }
    ]
}
Once alarms reach the Axeda Machine Cloud, they will be in the "Started" state. Once an alarm is received, it can be "Acknowledged", "Escalated", or "Closed".

Event fields
- name (String, required): identifies the event. Format: valid string characters, length 0 <= N <= ?.
- description (String, optional): describes the event. Default: none.
- time (Integer epoch timestamp or ISO-8601 string, optional): when the event occurred. Server time is used if omitted.
- priority (Integer, optional): how much priority the server should give to processing. Range: 1 <= N <= 100. Default: 1.
Event example JSON:
{
    "events": [
        {
            "name": "RadiationLeak",
            "description": "A radiation leak has been detected",
            "time": 1364443200000,
            "priority": 100
        }
    ]
}

Mobile Location fields
- latitude (Float, required): latitude. Range: -90 <= N <= +90.
- longitude (Float, required): longitude. Range: -180 <= N <= +180.
- altitude (Float, optional): altitude/elevation. Unbounded.
- time (Integer epoch timestamp or ISO-8601 string, optional): when the location was recorded. Server time is used if omitted.
- priority (Integer, optional): how much priority the server should give to processing. Range: 1 <= N <= 100. Default: 1.
Location example JSON:
{
    "locations": [
        {
            "latitude": 42.034061,
            "longitude": -71.237472,
            "altitude": 0.0,
            "time": 1364443200000,
            "priority": 100
        }
    ]
}
Note: The platform records the history of all the mobile locations a device has reported. This has implications for the positions displayed in the Asset Status Map charting components.

Data Item Set fields
- dataItems (Object<String, JSON type>, required): a collection of key/value pairs. Unbounded.
- time (Integer epoch timestamp or ISO-8601 string, optional): when the data items were sampled. Server time is used if omitted.
- priority (Integer, optional): how much priority the server should give to processing. Range: 1 <= N <= 100. Default: 1.
Data Item Set example JSON:
{
    "data": [
        {
            "dataItems": {
                "CurrentSong": "Comfortably Numb",
                "PreviousSong": "Rain When I Die",
                "NextSong": "Whole Lotta Love",
                "FreeMemory": 1237.24,
                "DebugModeEnabled": true
            },
            "time": 1364443200000,
            "priority": 100
        },
        {
            "dataItems": {
                "bar": "camp",
                "pot1": 23.3
            },
            "time": 1364443234000,
            "priority": 100
        }
    ]
}
Data items are sent as sets that share a common recording time and priority. Data item values follow JSON representation standards and can be string, numeric, or Boolean.
Below is an example message showing all four information types being sent at once:
{
    "alarms": [
        {
            "name": "RadiationLeak",
            "description": "A radiation leak has been detected",
            "time": 1364443200000
        }
    ],
    "events": [
        {
            "name": "Foo",
            "description": "A Foo occurred"
        }
    ],
    "data": [
        {
            "dataItems": {
                "bar": "camp",
                "pot1": 23.3
            },
            "time": 1364443200000
        }
    ],
    "locations": [
        {
            "latitude": 32.00,
            "longitude": -78.00
        }
    ]
}

Polling
Axeda AMMP is designed to provide a minimalist communication protocol for devices. Each request returns any pending egress items in the HTTP response body, no matter what type of data was sent in the request body, so extra requests to retrieve egress do not have to be made. But if a device has no updates to make, it can still make periodic polling requests to the Codec Server to get any egress items available to it. This request is simply an HTTP POST with an empty request body to the example URL:
https://example-connect.axeda.com/ammp/assets/1/ExampleModel!ExampleDevice-001

Next steps and caveats
Keep in mind that egress data can be returned in ANY HTTP response body. If a device has a programming error, or a power failure occurs before the response is processed, then that egress can be lost permanently: the Axeda Platform does not replay egress items once they have been delivered to the device. Additional logical facilities are available on the Axeda Platform to provide replay/retry communications to the device.

Bibliography
Using Axeda Scripto
Axeda AMMP Technical Reference (1.2.0, Dec 2014)
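If you would rather script these requests than use Postman, the sketch below shows the same calls (assuming a Node.js 18+ runtime with the global fetch API; the host, model, and serial number are the example values from above, and error handling is omitted):
var base = "https://example-connect.axeda.com/ammp";
var asset = "ExampleModel!ExampleDevice-001";

// send a data item set; any egress items come back in the response body
async function sendData() {
    var response = await fetch(base + "/data/1/" + asset, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ data: [{ dataItems: { pot1: 23.3 }, time: Date.now() }] })
    });
    console.log(await response.text());   // inspect any egress returned by the platform
}

// poll for egress with an empty request body when there is nothing to report
async function poll() {
    var response = await fetch(base + "/assets/1/" + asset, {
        method: "POST",
        headers: { "Content-Type": "application/json" }
    });
    console.log(await response.text());
}

sendData().then(poll);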
View full tip
  Hi, everyone!     Today, we’re launching an exciting new series called “PTC Community Spotlights.” Each post in the series explores a community member’s experience with ThingWorx—how they’re using it, what their favorite part about ThingWorx is, and any tips or tricks they may have to share with the PTC Community.   For the first installment, I spoke with @nmilleson of EAC. Check out our conversation below. Our first PTC Community Spotlight Speaker -- Nick Milleson of EAC Product Development Systems. @Kaya: Hi, @nmilleson, welcome! Thank you for taking the time to meet with me and volunteering to be our first ThingWorx Community spotlight!   @nmilleson: Of course, @Kaya. Happy to be here.   @Kaya: To start, can you tell me a little about yourself?   @nmilleson: Absolutely. My name is Nick Milleson.  I work as an IoT Solution Architect at EAC Product Development Systems (a PTC Partner). I’m located in Apple Valley, Minnesota, which is a suburb of the Twin Cities.   @Kaya: Nice! We always love hearing from our partners about the awesome work they do. As a PTC Partner, what industries do you typically work in?   @nmilleson: I consult for many, many different industries, including defense, transportation, medical devices, construction & aerospace.   @Kaya: Wow, so what PTC products are you most familiar with?   @nmilleson:  My schooling is in mechanical engineering, so I’ve also used Creo, Windchill, and MathCAD.  I have been working with the ThingWorx application and helping clients get the most out of ThingWorx for approximately 7 years.   @Kaya:  Seven years—that’s a while! Do you have any “ThingWorx” stories from over the years you can share with your community peers?   @nmilleson:  Sure thing. I think the coolest thing that I’ve done with ThingWorx was create a custom SVG infographic that featured animations, click events, zoom-ins, and heatmaps based on temperature deltas.  It was a custom widget and it worked really well in ThingWorx.  When I first started learning to use ThingWorx, I took apart an old RC car and hooked up an Arduino to the motors and steering.  I was then able to control it using a ThingWorx mashup.  Pretty fun! I’ll be sure to share a visual so people can check it out.   Nick's awesome custom SVG infographic featuring a ton of neat functionality like zoom-ins & heatmaps. @Kaya:  That’s awesome! Sounds like a fun time indeed. I saw that one of your first publications about ThingWorx for EAC was from 2015 and titled “Updating ThingWorx Using an Arduino Uno and a Serial Connection.”  The ThingWorx platform has certainly evolved since then.  What would you say is your favorite thing about ThingWorx today?   @nmilleson:  It sure has evolved. I would say my favorite thing is that it’s flexible enough to allow you the freedom to design all sorts of applications, while also providing you with all these great tools that make it easy to use as well.   @Kaya:  Thanks for that. I can see that you have been a member of the PTC Community for five years.  Thank you for providing such great contributions.  What do you enjoy most about the PTC Community?   @nmilleson:  I enjoy this Community because everyone seems very willing to help each other out, regardless of the complexity of the issue.  I stick mostly with the IoT Developers section, but I’ll meander into the Manufacturing Apps and ThingWorx Ideas once in a while as well.   @Kaya:  Love to hear it. 
Now, so the PTC Community can learn a little more about you, how do you spend your time when you aren’t playing with ThingWorx or engaging on the PTC Community?   @nmilleson: Great question. I have been a professional piano player for almost 20 years, so I’m often at a piano bar making music when I’m not doing software development with EAC.   @Kaya: Awesome. Well those are all the questions I have for today. Thank you for sharing your experience with ThingWorx! Truly appreciate it.   @nmilleson: Of course. Happy to be a part of it!   Kaya, here. We love hearing from community members like @nmilleson about how ThingWorx creates value for them amongst a variety of use cases. If you’re active on the community and interested in being featured on the PTC Community Spotlight series, send me a direct message and we’ll get the ball rollin’.   For now, we’ll let Nick “play us” out. Until next time, stay connected!   -Kaya
View full tip
As per the ThingWorx documentation (Updating Properties Automatically in a Mashup): a mashup using the GetProperties service can be configured to use websockets and receive updates to properties automatically. When creating or editing a mashup, you can configure the GetProperties service so that the properties are automatically updated by selecting the Automatically update values when able checkbox in the service properties panel. So, the ability to update properties automatically in a mashup is limited to the GetProperties service. Following are the steps to invoke your own custom service automatically when a property changes:
1. Find all the properties in your Thing for which a data change should trigger the custom service.
2. In the mashup, add a Value Display widget (or some other widget) for each property from Step 1.
3. Bind the properties from the GetProperties service to these widgets.
4. Set the Visible property of these widgets to false so that they don't show up at runtime.
5. Now bind the ServiceInvokeCompleted event of the GetProperties service to your custom service.
6. Save and view the mashup.
Result: When any of the properties from Step 1 changes, the custom service will be invoked in your mashup.
View full tip
Introduction to the platform extensibility structures and options. Includes an overview of setting up the Eclipse plugin and build process, as well as install considerations and best practices.
For full-sized viewing, click on the YouTube link in the player controls.
Visit the Online Success Guide to access our Expert Session videos at any time, as well as additional information about ThingWorx training and services.
View full tip
Metrics for model evaluation used in ThingWorx Analytics
In ThingWorx Analytics, we consider different kinds of metrics to evaluate our models. The choice of metric depends entirely on the type of model and the implementation plan for the model. After you have finished building your model, these three metrics will help you evaluate its accuracy. Below are further explanations of the three metrics used.

1. The ROC curve
To understand the ROC (Receiver Operating Characteristic) curve, let's look at the confusion matrix below. We observe that for a probabilistic model, we get a different value for each metric; hence, for each sensitivity, we get a different specificity. The ROC curve is the plot of sensitivity against (1 - specificity). (1 - specificity) is also known as the false positive rate, and sensitivity is also known as the true positive rate. Following is the ROC curve for the case in hand. Let's take an example of threshold = 0.5 (refer to the confusion matrix). As you can see, the sensitivity at this threshold is 99.6% and (1 - specificity) is ~60%. This coordinate becomes one point on our ROC curve. To bring this curve down to a single number, we find the area under the curve (AUC). Note that the area of the entire square is 1*1 = 1, so the AUC is the ratio of the area under the curve to the total area. For the case in hand, we get an AUC ROC of 96.4%. A few rules of thumb:
.90-1 = excellent (A)
.80-.90 = good (B)
.70-.80 = fair (C)
.60-.70 = poor (D)
.50-.60 = fail (F)
We see that we fall in the excellent band for the current model, but this might simply be over-fitting. In such cases, it becomes very important to have in-time and out-of-time validations.
Points to remember: A model which gives a class as an output will be represented as a single point in the ROC plot. Such models cannot be compared with each other, as the judgment needs to be made on a single metric rather than multiple metrics. For instance, a model with parameters (0.2, 0.8) and a model with parameters (0.8, 0.2) can come out of the same model, so these metrics should not be compared directly.

2. Root Mean Squared Error (RMSE)
RMSE is the most popular evaluation metric used in regression problems. It assumes that errors are unbiased and follow a normal distribution. Here are the key points to consider about RMSE:
The 'square root' empowers this metric to show large deviations.
The 'squared' nature of this metric helps deliver more robust results by preventing positive and negative error values from canceling each other out. In other words, this metric aptly displays the plausible magnitude of the error term.
It avoids the use of absolute error values, which is highly undesirable in mathematical calculations.
When we have more samples, reconstructing the error distribution using RMSE is considered more reliable.
RMSE is highly affected by outlier values. Hence, make sure you have removed outliers from your data set prior to using this metric.
Compared to mean absolute error, RMSE gives higher weight to, and punishes, large errors.

3. Pearson Correlation Coefficient
This metric measures how highly correlated two variables are and ranges from -1 to +1. A Pearson Correlation Coefficient of +1 indicates that the data objects are perfectly positively correlated, -1 indicates that they are perfectly negatively correlated, and a score of 0 means they are not correlated. In other words, the Pearson Correlation score quantifies how well two data objects fit a line.
There are several benefits to using this type of metric. The first is that the score remains accurate even when the data is not normalized; as a result, this metric can be used when quantities (i.e., scores) vary. Another benefit is that the Pearson Correlation score can correct for any scaling within an attribute while the final score is still being tabulated. Thus, objects that describe the same data but use different values can still be compared. The figure below demonstrates how the Pearson Correlation score may appear if graphed: the axes are the scores given by the labeled critics, and the chart shows the similarity of the scores given by both critics for certain items. In essence, the Pearson Correlation score finds the ratio between the covariance and the product of the standard deviations of both objects. In mathematical form, with (x, y) referring to the data objects and N the total number of attributes, the score can be described as:
r = Σ(xi - x̄)(yi - ȳ) / sqrt( Σ(xi - x̄)² * Σ(yi - ȳ)² ), where the sums run over the N attributes and x̄, ȳ are the attribute means.
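As a small illustration of how the last two metrics are computed (a plain JavaScript sketch, not the ThingWorx Analytics implementation; the sample arrays are made up):
// Root Mean Squared Error between actual and predicted values
function rmse(actual, predicted) {
    var sumSq = 0;
    for (var i = 0; i < actual.length; i++) {
        sumSq += Math.pow(actual[i] - predicted[i], 2);
    }
    return Math.sqrt(sumSq / actual.length);
}

// Pearson Correlation Coefficient between two data objects x and y
function pearson(x, y) {
    var n = x.length;
    var meanX = x.reduce(function (a, b) { return a + b; }, 0) / n;
    var meanY = y.reduce(function (a, b) { return a + b; }, 0) / n;
    var cov = 0, varX = 0, varY = 0;
    for (var i = 0; i < n; i++) {
        cov += (x[i] - meanX) * (y[i] - meanY);
        varX += Math.pow(x[i] - meanX, 2);
        varY += Math.pow(y[i] - meanY, 2);
    }
    return cov / Math.sqrt(varX * varY);   // covariance over the product of standard deviations
}

console.log(rmse([3, 5, 7], [2.5, 5, 8]));        // ~0.645
console.log(pearson([1, 2, 3, 4], [2, 4, 6, 8])); // 1 (perfectly positively correlated)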
View full tip
Having trouble remembering how to get into Flow? How about making /Flow the URL?

Since the Flow environment uses NGINX to front-end the various components that make up Flow, there is a very sophisticated set of rewrites and proxy_pass directives in the NGINX configuration. All you have to do is add another 'location' fragment to the vhost-flow.conf file that will push /Flow over to /Thingworx/Composer/apps/flow:

location /Flow {
    rewrite ^/Flow$ $proxy_scheme://$server_host/Thingworx/Composer/apps/flow permanent;
}

On Linux, the file should be at /etc/nginx/conf.d/vhost-flow.conf
On Windows, the file should be at c:\Program Files\nginx-[version]\conf\conf.d\vhost-flow.conf

Test the updated config file with (nginx may not exist in your normal path):
nginx -t

Restart the NGINX service:
Linux (one of these will work depending upon your Linux version):
systemctl restart nginx
service nginx restart
Windows:
Net stop ThingWorxOrchestrationNginx
Net start ThingWorxOrchestrationNginx
-or- use the Services app to restart the service

From this point forward, https://yourserver/Flow will take you to ThingWorx Flow's home page.
View full tip
The use of the term “SSO” means different things to different people. Among Navigate Admins, it became shorthand for using PingFederate to provide both authentication with a single sign-on component, as well as authorization (checking permissions for access to files). In Navigate 1.5, this was the only option for configuring a production system, and many people were not ready for it. That was the origin of the “must have SSO” statement. Beginning with Navigate 1.6, PTC added a scenario called “Windchill Authentication”, that is suitable for Production and uses your Enterprise LDAP to authenticate users. It will issue a token so you get some of the benefits of single sign-on, but not all the bells and whistles that come with PingFederate. It’s also easier to configure. People have begun referring to Windchill Authentication as “non-SSO”, to distinguish it from PingFederate, even though Windchill Authentication has some SSO functions.   In the install manual, there are three scenarios: Fixed Authentication, Windchill Authentication, and Single Sign-On with PingFederate. People usually begin with Fixed Authentication (the easiest to configure, but not secure so it’s only good for Proof of Concept demonstrations), then do Windchill Authentication before tackling PingFederate. Windchill Authentication can take a couple of days while Webexing with us to get working, but for PingFederate we plan several Webexes over a period of 8 days for a typical install. During that time you will be coordinating with other administrators (such as the AD admin) and waiting for emails etc. to get remote admin tasks done as part of the install. Be prepared, timewise.
View full tip
Usually we want to retrieve the full list of users in ThingWorx with the GetEntityList service. However, it only returns limited information. To see more details (such as user extension information) and to add more search conditions, we can wrap it in a newly created service. Below is example code:
Input: emailAddress (STRING)
Output: INFOTABLE

// Code starts here
// Step 1: get the full user list
var params = {
    maxItems: undefined /* NUMBER */,
    nameMask: undefined /* STRING */,
    type: "User" /* STRING */,
    tags: undefined /* TAGS */
};
var users = Resources["EntityServices"].GetEntityList(params);

// Step 2: collect the additional properties for each user
var params = {
    infoTableName: "infotable" /* STRING */,
    dataShapeName: "userPropertiesDS" /* DATASHAPENAME */
};
var infotable = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);

for (var v = 0; v < users.rows.length; v++) {
    var userName = users.rows[v].name;
    var row = new Object();
    row.name = userName;
    row.EmailAddress = Users[userName].emailAddress;
    row.fullName = Users[userName].fullName;
    // ... add any other user properties you want to display
    infotable.AddRow(row);
}

// Step 3: filter the user list with the search conditions
// You can add as many filters as you like
var query = {
    "filters": {
        "type": "AND",
        "filters": [
            {
                "fieldName": "EmailAddress",
                "type": "EQ",
                "value": emailAddress
            },
            {
                "fieldName": "name",
                "type": "EQ",
                "value": "user1"
            }
        ]
    }
};
var params = {
    t: infotable /* INFOTABLE */,
    query: query /* QUERY */
};
// result: INFOTABLE
var result = Resources["InfoTableFunctions"].Query(params);

Besides, creating a query is also a one-step operation in ThingWorx (see the query snippets in the service editor), so you do not need to write it by hand.
View full tip
An introduction to Java SDK, Java SDK and Eclipse, VirtualThing and ConnectedThingClient classes, how to establish communication, and additional features of the SDK.   For full-sized viewing, click on the YouTube link in the player controls.   Visit the Online Success Guide to access our Expert Session videos at any time as well as additional information about ThingWorx training and services.
View full tip
In recent times, one of the frequent questions regarding PostgreSQL is which tools work well with it. With the growing functionality of PostgreSQL, more and more vendors are producing tools for it; there are a lot of tools for management, development, and data visualization, and the list is growing. Here I'm listing a few tools that might be of interest to ThingWorx users.

psql terminal: The psql client is a command-line client distributed with PostgreSQL, often called the interactive terminal. psql is a simple yet powerful tool with which you can interface directly with the PostgreSQL server. The psql client comes with the PostgreSQL database by default.
Key features:
- Issue queries either through commands or from a file.
- Provides shell-like features to automate tasks.
For more information, refer to http://www.postgresql.org/docs/9.5/static/app-psql.html

pgAdmin III: pgAdmin III is a GUI-based administration and development tool for the PostgreSQL database. It covers the needs of both administrators and normal users, from writing simple SQL queries to developing complex databases.
Key features:
- Open source and cross-platform support.
- No additional drivers are required.
- Supports more than 30 different languages.
Note: pgAdmin III comes with the PostgreSQL 9.4 installer by default.
For more information, refer to http://www.pgadmin.org/download/

phpPgAdmin: phpPgAdmin is a web-based client for managing PostgreSQL databases. It provides the user with a convenient way to create databases, create tables, alter tables, and query the data using SQL.
Key features:
- Open source and supports PostgreSQL 9.x.
- Requires a web server.
- Administers multiple servers.
- Supports the Slony master-slave replication engine.
For the phpPgAdmin download: http://phppgadmin.sourceforge.net/doku.php?id=download

TeamPostgreSQL: TeamPostgreSQL is a browser-based tool for PostgreSQL administration. Using TeamPostgreSQL, database objects can be accessed from anywhere in the web browser.
Key features:
- Open source and cross-platform support.
- Supports SSH for both the web interface and the database connections.
- GUI with tabbed SQL editors.
For the TeamPostgreSQL download: http://www.teampostgresql.com/download.jsp

Monitoring Tools

pgBadger: pgBadger is a PostgreSQL log analyzer for generating reports from PostgreSQL log files. It is built in Perl and uses JavaScript and Bootstrap libraries. It is often seen as a replacement for the pgFouine log analyzer.
Key features:
- Open source community project.
- Autodetects PostgreSQL log file formats (stderr, syslog, or csvlog).
- Provides SQL query-related reports and statistics.
- Can also be limited to reporting only errors.
- Generates pie charts and time-based charts.
For more information, refer to http://dalibo.github.io/pgbadger/. Git download: https://github.com/dalibo/pgbadger/releases

PostgreStats: Postgrestats provides automated scripts to easily view statistics such as commits, rollbacks, and user inserts, updates, and deletes over time-based intervals. Postgrestats is installed and executed on the database server and customizes the main conf file. Postgrestats also provides an enterprise application for replication mode and high availability.
Key features:
- Open source and easy-to-set-up installation.
- Takes snapshot reports based on time intervals.
- Optional email-on-update.
- Text file data storage.
- Also available as an enterprise application, PostgreStats Enterprise.
For more information, refer to http://www.postgrestats.com/subs/docs.html

Slemma: Slemma is a collaborative data visualization tool for the PostgreSQL database. Slemma allows database connections with near one-click integration and can generate a dashboard from files. Slemma comes with a commercial license priced at $29 per user per month.
Key features:
- Create charts and interactive dashboards by selecting tables.
- Non-developers can easily create visualizations (with no coding).
- Email dashboards automatically to clients or your entire team.
For more information, refer to https://slemma.com/

Ubiq: Ubiq is a web-based business intelligence and reporting tool for the PostgreSQL server. Ubiq creates reports and online dashboards and provides the ability to export in multiple formats. Ubiq is distributed with a commercial license.
Key features:
- Drag & drop interface to create interactive charts, dashboards, and reports.
- Apply powerful filters and functions to the data.
- Share your work and schedule email reports.
For more information, refer to http://ubiq.co/tour
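For example, with the psql terminal described above you can run a query directly from the command line or execute a script file (the host, user, database, and script names below are placeholders for your own environment):
psql -h localhost -U twadmin -d thingworx -c "SELECT version();"
psql -h localhost -U twadmin -d thingworx -f maintenance.sql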
View full tip