IoT & Connectivity Tips

Hello community,

I'm happy to announce that I released GitBackup Extension 4.1.0 with some nice help from the community (special thanks to Tanguy Parmentier, who provided all the localization export functionality). Many thanks to the people who provided feedback, allowing these features to be prioritized.

Version 4.1.0 brings several new features to the table:
- (Finally) allows the user to specify a subset of Entities for export.
- Allows importing a single Entity from the Workspace, so it's now easier to check out a specific commit from the past and import that file version for testing.
- Adds the capability to export localization tokens filtered by a specific prefix, overridable by you at export time.
- Adds a Log screen that contains log entries for some of the most used methods that previously failed silently.
- Closed remote branches are auto-pruned.
- Further cleans the ThingWorx XML source files by removing the ModelPersistenceProviderPackage.
- The main Mashup UI is slightly redesigned: there's a new Manage tab which holds the Settings, Delete Git Thing and more.
- The Export Mashup UX is improved: export buttons are no longer visible if you don't select a project.
- Supports ThingWorx 9.0, with two separate releases: one for 8.4/8.5 and one for 9.0.

The documentation was updated, and I suggest reading the release notes, specifically the ones regarding the new Log and prune capabilities. In addition, the mechanism that cleans the source code is extensible, and most of the entities are editable, allowing you to tweak it to your own use case.

The Extension source code is available here: https://github.com/PTCInc/thingworx-gitbackup-extension
The importable Extension zip files are available here: https://github.com/PTCInc/thingworx-gitbackup-extension/releases

A special note for people using ThingWorx 9.0: in this version some internal ThingWorx SDK Java methods changed their signature, which required me to build a separate release for 9.0. The extension built for 9.0 won't run correctly on 8.4/8.5, and vice versa. As such, you will always see two releases for each GitBackup version: one for 9.0 and one for 8.4/8.5. My ask for you is the following:
- Don't click on the latest release GitHub shows - that will always send you to the latest release, which might not be compatible with the ThingWorx version you are using.
- Always use the link above to choose the Extension compatible with your ThingWorx version.
- Read carefully which release you download. The title contains the ThingWorx version compatible with that release.

This Extension is licensed under the MIT License and is provided as-is, without warranty or support. It is not part of the PTC product suite. Taking into consideration the statement above:
- Please read the documentation first if you encounter any problems, search the closed issues, and if none matches your problem, raise a new issue in GitHub's issue system: https://github.com/PTCInc/thingworx-gitbackup-extension/issues
- Do not open PTC Tech Support tickets for this Extension.

For OOTB Git support in the ThingWorx platform, please raise a ThingWorx Idea in the PTC Community here: https://community.ptc.com/t5/ThingWorx-Ideas/idb-p/thingworxideas

Thank you!
View full tip
Video Author: Stefan Taka
Original Post Date: June 6, 2016

Description: In part 1 of the ThingWorx REST API tutorial, we will introduce you to the ThingWorx API structure, and also demonstrate how the API can be invoked.

Blog post with text examples found here: REST API Overview and Examples
View full tip
The Squeal functionality has been discontinued with ThingWorx 8.1; see the ThingWorx 8.1.0 Release Notes.

There might be scenarios where it should be disabled in earlier versions as well. This can be achieved, for example, with Tomcat Security Constraints. To add such a constraint, open <Tomcat>\webapps\Thingworx\WEB-INF\web.xml. At the end of the file, add a new constraint just before the closing </web-app> tag:

<security-constraint>
  <web-resource-collection>
    <web-resource-name>Forbidden</web-resource-name>
    <url-pattern>/Squeal/*</url-pattern>
  </web-resource-collection>
  <auth-constraint/>
  <user-data-constraint>
    <transport-guarantee>CONFIDENTIAL</transport-guarantee>
  </user-data-constraint>
</security-constraint>

Save the file and restart Tomcat. Accessing the /Thingworx/Squeal resource will now result in an error message:

HTTP Status 403 - Access to the requested resource has been denied

One scenario to be aware of is when the web.xml changes, e.g. due to updating ThingWorx or other manual changes. In such a case, ensure that the constraint is still present in the file and taken into account.
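As a quick sanity check (assuming Tomcat is listening locally on port 8080 - adjust host and port to your setup), requesting the resource from the command line should now return the 403:

curl -i http://localhost:8080/Thingworx/Squeal/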
View full tip
Use Case: You've published a model from Analytics Builder to Analytics Manager, and then used the service CreateOrUpdateThingTemplateForModel on the resource TW.AnalysisServices.ModelManagementServicesAPI. A Thing created from the resulting template will have an infotable called "data" which needs to be populated in order to trigger an Analysis Event & Job. For example, you might have been following the online documentation for Analytics Manager > Working with Thing Predictor > Demo: Using Thing Predictor, link here.

This script makes it easy to write a line of test data into the field "data" on your Thing to trigger the Analysis Event & Job. The fields causalTechnique, goalName and importantFieldCount are also set programmatically, as these are needed for the Analysis Event & Job. This script may also be useful as a general example of how to write to an infotable property on a Thing. The JavaScript code is shown here and is also attached as a text file to this post:

// 1 - SET FIELDS NEEDED FOR THE ANALYSIS EVENT & JOB
me.causalTechnique = 'FULL_RANGE';
me.goalName = 'predict_Compressor_failure';
me.importantFieldCount = 3;

// 2 - CREATE AN EMPTY INFOTABLE FROM THE INPUT DATA SHAPE
var params = {
    infoTableName: "InfoTable",
    dataShapeName: "ThingPredictor.test-integer_afebaef3-b2cf-4347-824c-a39c11ddbb4a.InputParamsdataDataShape"
};
var myInfoTable = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);

// 3 - CREATE AN INFOTABLE ROW USING AN OBJECT
var newEntry = new Object();
newEntry._Pressure = 10.5;    // NUMBER
newEntry._Temperature = 45.1; // NUMBER
newEntry._VibrationX = 81;    // NUMBER
newEntry._VibrationY = 65;    // NUMBER

// 4 - ADD THE ROW TO THE INFOTABLE
myInfoTable.AddRow(newEntry);

// 5 - PERSIST THE INFOTABLE TO THE THING PROPERTY 'data'
me.data = myInfoTable;
View full tip
Sampling Strategies

This blog post will cover the four sampling strategies that are available in ThingWorx Analytics. It will explain how each sampling strategy runs behind the scenes, when you may want to use it, and the pros and cons of each.

SAMPLE_WITH_REPLACEMENT

This strategy is not often used by professionals, but it may still be useful in certain circumstances. When you sample with replacement, the value that you randomly selected is returned to the sample pool, so there is a chance that the same record appears multiple times in your sample.

Example: Let's say you have a hat containing 3 cards with different people's names on them: John, Sarah, and Tom. You make 2 random selections, and on the first selection you pull out the name Tom. When you sample with replacement, you put the name Tom back into the hat and then randomly select a card again. For your second selection, you could get another name, like Sarah, or the same one you selected before, Tom.

Pros:
- May find improved models in smaller datasets with low row counts.

Cons:
- The accuracy of the model may be artificially inflated due to duplicates in the sample.

SAMPLE_WITHOUT_REPLACEMENT

This is the default setting in ThingWorx Analytics and the sampling strategy most commonly used by professionals. With this strategy, after a value is randomly selected from the sample pool, it is not returned. This ensures that all the values selected for the sample are unique.

Example: Take the same hat with the cards John, Sarah, and Tom, and make 2 random selections. On the first selection you pull out the name Tom. When you sample without replacement, you randomly select a card from the hat again without putting the Tom card back. For your second selection, you could only get the Sarah or John card.

Pros:
- This is the most commonly used sampling strategy.
- It delivers the best results in most cases.

Cons:
- May not be the best choice if the desired goal is underrepresented in the dataset.

UPSAMPLE_AND_SAMPLE_WITHOUT_REPLACEMENT

This is useful when the desired goal is underrepresented in the dataset. The records that represent the desired outcome of the goal are copied multiple times so they represent a larger share of the total dataset.

Example: Let's say you are trying to discover whether a patient is at risk of developing a rare condition, like chronic kidney failure, that affects around 0.5% of the US population. In this case, the most "accurate" model that could be generated would say that no one will get this condition, and according to the numbers, it would be right 99.5% of the time. But in reality, this is not helpful at all to the use case, since you want to know if the patient is at risk of developing the condition. To prevent this, copies are made of the records where the patient did develop the condition, so they represent a larger share of the dataset. Doing this gives ThingWorx Analytics more examples to help it generate a more accurate model.

Pros:
- Patterns from the original dataset remain intact.

Cons:
- Longer training time.

DOWNSAMPLE_AND_SAMPLE_WITHOUT_REPLACEMENT

This is also useful when the desired goal is underrepresented in the dataset. In downsample and sample without replacement, some records that do not represent the desired goal outcome are removed. This is done to increase the desired records' percentage of the dataset.

Example: Continuing the medical example from above, instead of creating copies of the desired records, undesired records are removed from the dataset. This causes the records where patients did develop the condition to occupy a larger percentage of the dataset.

Pros:
- Shorter training time.

Cons:
- Patterns from the original dataset may be lost.
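To make the two basic strategies concrete, here is a minimal JavaScript sketch (illustrative only, not how ThingWorx Analytics implements them) that draws a sample from an array with and without replacement:

// Draw n items with replacement: picked items stay in the pool, so duplicates can occur
function sampleWithReplacement(pool, n) {
    var sample = [];
    for (var i = 0; i < n; i++) {
        var idx = Math.floor(Math.random() * pool.length);
        sample.push(pool[idx]);
    }
    return sample;
}

// Draw n items without replacement: each pick is removed from a working copy of the pool
function sampleWithoutReplacement(pool, n) {
    var copy = pool.slice();
    var sample = [];
    for (var i = 0; i < n && copy.length > 0; i++) {
        var idx = Math.floor(Math.random() * copy.length);
        sample.push(copy.splice(idx, 1)[0]);
    }
    return sample;
}

// Example: two draws from the "hat" used above
var hat = ["John", "Sarah", "Tom"];
sampleWithReplacement(hat, 2);    // e.g. ["Tom", "Tom"] is possible
sampleWithoutReplacement(hat, 2); // e.g. ["Tom", "Sarah"]; never the same card twice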
View full tip
Expression rules are the heart of the Axeda Platform's processing capability. These rules have an If-Then-Else structure that's easy to create and understand - think of them like a formula in a spreadsheet. For example, say your asset has a dataitem reading for temperature:

IF: temperature > 80
THEN: CreateAlarm("High Temp", 100)

This rule compares the temperature to 80 every time a reading is received. When the condition is true, the rule creates an alarm with the name "High Temp" and severity 100.

Dataitems represent readings from an asset. They are typically sensors or monitoring parameters in an application, but you can also think of dataitems as variables. The rule can be changed to

IF: temperature > threshold

so that each asset has its own threshold that can be adjusted independently.

Look at the complete list of Expression Rule:
- triggers - the events that trigger a rule to run
- variables - the information you can access in an expression
- functions - the functions that can be used within an expression
- actions - these are called in the Then or Else part of an expression to make something happen

A rule can also calculate a new value. For example, if you wanted to know the maximum temperature:

IF: temperature > maxTemperature
THEN: SetDataItem("maxTemperature", temperature)

To convert a temperature in Celsius to Fahrenheit:

IF: temperature
THEN: SetDataItem("tempF", temperature*9/5 + 32)

The If simply names the variable, so any change to that variable triggers the rule to run. There may be many other dataitems reported for an asset, and changes to those other dataitems should not recalculate the temperature.

When rules should run only when an asset is in a particular mode or state, or when there is a complex sequence to model, read about how State Machines come to the rescue.

Creating and Testing an Expression Rule

We're going to create a simple Expression Rule and show it running in a few steps. Above, you saw a rule that created an alarm when temperature > 80. Now, we will make one that converts a temperature in Celsius to one in Fahrenheit.

An Expression Rule consists of a few things:
- Name
- Description - an optional field to describe the rule
- Trigger - what makes this rule run? The trigger tells the rule whether it applies to Alarms, Data, Files, or many others.
- If - the logic expression for the condition to evaluate
- Then - the logic to run if the condition is true
- Else - the logic to run if the condition is false

To begin, log into an Axeda Platform instance.
1. Navigate to the Manage tab.
2. Select New, then Expression Rule.
3. Enter this Expression Rule information:
   Name: TempConvert
   Type: Data
   Description:
   Enabled: leave checked
   If: TempC
   Then: SetDataItem("TempF", TempC*9/5 + 32)
   If you click on functions or scroll down for actions in the Expression Tree, you will see a description in Details.
4. Click the Apply to Asset button to select models and specific assets to apply this rule to.

Now that you have an Expression Rule, let's try it.

Testing the Expression Rule (NEEDS UPDATING)

You can test the expression rule by simulating the TempC data using the Axeda Simulator, as instructed below. Or, you can use the Expression Rules Debugger to simulate the reading and display the results. For information about using the Expression Rules Debugger, see the Expression Rules Debugger documentation in the online Help system.

Simulate a TempC reading:
1. Launch the Axeda Simulator. The Axeda Simulator will launch in a new browser window.
2. Enter your registered email address, Developer Connection password, and click Login.
3. Select asset1 from the Asset dropdown.
4. Under the Data tab, enter the dataitem name TempC and a value like 28, then click Send.

To see the result, go back to the Platform window and navigate to the Service tab, and you should see that 28C = 82.4F.

You created an Expression Rule that triggers when a value of TempC is received, and creates a new dataitem TempF with a calculated value. This rule applies to your model, but if you had many models of assets, it could apply to as many as you want. You could change the rule to do the conversion only If: TempC > 9, and simulate inputs to see the new behavior.

Further Reading

Read about how Rule Timers can trigger rules to run on a scheduled basis. (TODO)
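Since the rules have a full If-Then-Else structure, you can also combine the actions shown above in a single rule. A hedged example (the tempOK dataitem name is illustrative):

IF: TempC > 80
THEN: CreateAlarm("High Temp", 100)
ELSE: SetDataItem("tempOK", 1)

Every TempC reading then either raises the alarm or records that the asset is back in a normal range.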
View full tip
Utilize the power of ThingWorx to secure, connect, log, analyze, and see what is happening in your highly structured and secure system.

NOTE: Complete the following guides in sequential order. The estimated time to complete this learning path is 240 minutes.

1. Securing Resources and Private Data (Part 1, Part 2)
2. Connecting External Databases and Model (Part 1, Part 2, Part 3)
3. Low Level Device Connection (Part 1, Part 2, Part 3)
4. Tracking Activities and Statistics
View full tip
The following Expert Session videos are now available for viewing within the ThingWorx Community:

ThingWorx Analytics Installation - This Expert Session walks you through the complete installation of ThingWorx Analytics, from the prerequisites to confirming the installation is successful, and all steps in between. The first half of the video gives a breakdown of the components and the process of the installation, with the second half being an actual demo of the installation.

ThingWorx Analytics API Overview - This Expert Session is designed to help beginners get up and running with ThingWorx Analytics. It covers basic concepts like what APIs are and how to configure the metadata file, and includes a live demo that shows you how to interact with and use ThingWorx Analytics in real time. This Expert Session is also useful for experienced users who need a refresher course.

Decision Tree, ThingWorx Analytics Builder - This Expert Session reviews the concept of decision trees and the functionality that is available in ThingWorx Analytics Builder. First, you will learn how to create and upload a dataset in ThingWorx Analytics Builder. After that, it shows you how to train a model and score against the model that was just generated. It then goes into detail on how the "Decision Tree" prediction learner operates and classifies inputs.

Use Case Identification - This Expert Session goes over ways to identify and develop a successful use case for ThingWorx Analytics. The example use case presented here is about employee retention in a fictional company, with the goal of maximizing retention. This presentation will provide you with the fundamentals you need to develop your own ThingWorx Analytics use cases from the ground up.

ThingWorx Analytics Signals - This Expert Session provides an in-depth explanation of how Signals are calculated in ThingWorx Analytics, what purpose they serve, and why we use them. Some basic mathematical concepts are discussed, so viewers will have a better idea of how ThingWorx Analytics operates behind the scenes.

Related Links

For more information, you can visit a new space dedicated to these helpful technical videos. Additional Expert Sessions will be highlighted here in the ThingWorx Community every few weeks. Visit the Online Success Guide to access additional information about ThingWorx training and services.
View full tip
Attached is a description of Ensemble Learning Techniques.
View full tip


Background: Axeda Agents can be configured with standard drivers to collect event-driven data, which is then sent to the Platform. Axeda provides many standard event-driven data (EDD) drivers for use with the Axeda Agent (as explained in the Axeda® Agents EDD Toolkit Reference (PDF)). All EDD drivers are configured by an XML file and enabled in Axeda Builder, through the Agent Data Items configuration. You can configure an EDD driver to send important information from your process to the Agent, including data items, events and alarms. The manner in which you configure your drivers will affect the ability of your project to operate efficiently.

Recommendation: Use drivers to reduce the amount of data sent to the Platform. Instead of sending data items to the Platform, which then generates an event or alarm, it is possible to use the drivers to scan for specific data points or conditions and send an event or alarm directly. Before you can configure your agents, you first need to determine how often your agent needs to send data to the Enterprise server. Two example workflows and recommendations:

- If you want to monitor a data item every second or two, configure the Agent to do the monitoring.
- If you want to trend information once per day, perform that logic at the Enterprise server.

These examples may address your actual use case, or your needs may fall somewhere in between. Ultimately, consider that time scale (how often you want to monitor or trend data) and the resulting data volume should drive how your system handles data. More data is available at the Agent, and at a higher frequency, than is needed at the Platform. Processing at the Agent ensures that only the important results are communicated to the Platform, leading to a "cleaner" experience for the Platform. Using this guidance as a best practice will help reduce network traffic for your customers, as well as ensure the best experience for Enterprise users consuming server data in their dashboards, reports, and custom applications.

Need more information? For information about the standard EDD drivers, see the Axeda® Agents: EDD Drivers Reference (PDF).
View full tip
Several times in the past few months I was hit by a quick need to extract some data about Assets for a customer, and found myself continually hand-writing the code to do so. Rather than repeat myself any more, I figured I'd share my work - maybe PTC customers can benefit from the same effort.

import static com.axeda.sdk.v2.dsl.Bridges.*
import com.axeda.drm.sdk.Context
import com.axeda.common.sdk.id.Identifier
import com.axeda.services.v2.*
import com.axeda.sdk.v2.exception.*

def retStr = "Device and Location Data\n"

// Build a map of model systemId -> modelNumber, paging through all models
def modellist = [:]
ModelCriteria mc = new ModelCriteria()
mc.modelNumber = "*"
tcount = 0
def mresults = modelBridge.find(mc)
while ( (mresults = modelBridge.find(mc)) != null && tcount < mresults.totalCount) {
  mresults.models.each { res ->
    modellist[res.systemId] = res.modelNumber
    tcount++
  }
  mc.pageNumber = mc.pageNumber + 1
}

// Build a map of location systemId -> name, paging through all locations
locationList = [:]
LocationCriteria lc = new LocationCriteria()
lc.name = "*"
tcount = 0
def lresults = locationBridge.find(lc)
while ( (lresults = locationBridge.find(lc)) != null && tcount < lresults.totalCount) {
  lresults.locations.each { res ->
    locationList[res.systemId] = res.name
    tcount++
  }
  lc.pageNumber = lc.pageNumber + 1
}

// Walk all assets and print their model and location details
AssetCriteria ac = new AssetCriteria()
ac.includeDetails = true
ac.name = "*"
tcount = 0
def results = assetBridge.find(ac)
while ( (results = assetBridge.find(ac)) != null && tcount < results.totalCount) {
  results.assets.each { res ->
    retStr += "ID: ${res.systemId} MN: ${res.model.systemId},${modellist[res.model.systemId]} SN: ${res.serialNumber} Name: ${res.name} : Location ${res.location.systemId},${locationList[res.location.systemId]}\n"
    tcount++
  }
  ac.pageNumber = ac.pageNumber + 1
}

return ["Content-Type": "application/text", "Content": retStr]

This will output data like so:

ID: 31342 MN: 14682,CKGW SN: Axeda-CK-Windows10VBox Name: Axeda-CK-Windows10VBox : Location 1,Foxboro
ID: 26248 MN: 14682,CKGW SN: CK-CKAMINSKI0L1 Name: CK-CKAMINSKI0L1 : Location 1,Foxboro
ID: 30082 MN: 14682,CKGW SN: CK-GW1 Name: CK-GW1 : Location 1,Foxboro
ID: 26247 MN: 14681,CKGW-ManagedModel1 SN: CK-MM01 Name: CK-MM01 : Location 1,Foxboro

This lets me compare the internal systemId of the Asset, the internal systemId of the Model, and the internal systemId of the Location of the device. This was to help me attempt to isolate an issue with orphaned devices not being returned in a report - exposing some duplicate locations and devices that needed corrections.

You may find yourself needing to do similar things when building logic for Axeda, or eventually integrating or migrating to ThingWorx. Our v2 API bridges help "bridge" the gap.
View full tip
We're back with a ThingWorx 8.5 highlight! Today, we'll cover brand-new functionality within ThingWorx Flow - the Azure Connector!

Imagine you're on the regal shores of ThingWorx, trying to reach the plentiful utopia of Azure IoT services and capabilities. How do you get there? Through the new Azure Connector for ThingWorx Flow!

The Azure Connector for ThingWorx Flow is an OOTB connector that enables you to leverage the power and breadth of Azure services directly in ThingWorx applications with ThingWorx Flow. If you have intermittent needs for high-scale processing, for example, the Azure Connector can help optimize costs: you only pay for the processing capacity when you actually need it. Azure Functions are also useful when you want to leverage special code written in a language like Python, for example to pre-process information collected from a device.

With the Azure Connector, you're able to:
- Support execution of Logic Apps to access a large number of OOTB system connectors and customer investments in apps.
- Enable execution of Azure Functions wherever elastic scale is needed to cost-effectively support use cases that have intermittent high-scale needs, such as executing an analysis of device sensors leading up to and following a key alert or incident.
- Enable use of Azure Cognitive Services to easily leverage a rich set of capabilities, from simple text-to-speech or speech-to-text to computer vision, in end-to-end IoT use cases. A few examples include the ability to:
  - Convert alert text to speech to deliver to an operator to take urgent action.
  - Convert a verbal description of an assembly problem to a textual description in an automatically generated problem report.
  - Use computer vision for a quality check to automatically pass or fail a part based on a visual image and confidence level.

Here are a few of the many different services you can leverage with the Azure Connector in ThingWorx Flow: Anomaly Detector, Bing Search, Computer Vision, Custom Vision, Execute Function, Execute Logic App, Face Recognition, LUIS Prediction, QnA Prediction, Speaker Recognition, Speech Service, and Text Analytics.

Check out this demo video to see an example in action. In the video, you'll see us:
1. Create an Azure Function app.
2. Create a ThingWorx Flow that triggers from an alert on a ThingWorx Thing.
3. Within the flow, consume the alert information and send it to the Azure Function app, which processes it and returns a response.

While simple, this demo shows the ability to connect ThingWorx and Azure together for an incredibly flexible and powerful distributed IoT system.

Finally, if you're looking for assistance on how to use the new connector, see our Help Center here.

Stay connected and go with the (ThingWorx) Flow!
Kaya
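For a sense of what the Azure side of that demo might look like, here is a minimal sketch of an HTTP-triggered Azure Function in JavaScript (the payload fields are hypothetical - your flow defines what it actually sends):

module.exports = async function (context, req) {
    // Hypothetical alert payload sent by the ThingWorx Flow action
    const alertName = (req.body && req.body.alertName) || "unknown";
    const value = (req.body && req.body.value) !== undefined ? req.body.value : null;

    // Stand-in for real processing (scoring, enrichment, etc.)
    const severity = value !== null && value > 100 ? "critical" : "warning";

    // The response body is returned to the flow for downstream actions
    context.res = {
        status: 200,
        body: { alertName: alertName, severity: severity }
    };
};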
View full tip
The Axeda Platform has a mature data model that's important to understand when planning to build applications. First, this post will introduce the existing objects and how they relate to each other.

Axeda Agents can communicate:
- Model - the definition of a type of asset. The model consists of a set of dataitems (its inputs and outputs) and alarms. The Platform applies logic to a model, so as the number of assets grows, the system remains scalable in terms of management.
- Asset - sometimes called a Device. An asset has an identifier called a Serial Number, which must be unique within its model. Agents report information in terms of the asset, and logic is applied to data and events about that asset.
- Dataitem - a named reading, such as a sensor or computer value. Dataitems are timestamped values in a sequence - for example, hourly temperatures, odometer readings, or daily usage statistics. The number of named dataitems is unlimited. Dataitems can be written as well as read, so a value can be sent to an "output". A dataitem can be a Digital (boolean), Analog (real value) or a String.
- Mobile Location - a lat/long pair, typically read from GPS. This is used to map assets as they move.
- Alarms - have a name, severity, description, active flag, timestamp, and an optional embedded dataitem and value. Alarms sent from an agent may result from logic that detects a condition, or from traps, error codes, etc. An alarm indicates something that's wrong.
- Files - arbitrary files can be uploaded from an agent. Files are sent with a Hint string; this is metadata that allows rules to process the file based on something other than the file extension. Files are often uploaded when an alarm has been raised, or on demand from a user or rule.

Axeda agents have flexible ways to send and receive this information based on time, data changes, user request, etc. The Adaptive Machine Messaging Protocol (AMMP) allows anyone to make an agent that interacts with the Axeda Platform using the same data model.

Axeda Platform is asset-centric. An asset is an instance of a Model, and each asset is identified by its Model and Serial Number pair. Associated with an asset are:
- Organization (typically the customer).
- Location, or home of the asset (a street address). A location is in a Region.
- Contacts - people who have a relationship to the asset. Contacts have a role, such as Owner or Service Agent.
- Asset Groups - assets are members of groups, and groups can be used to grant privileges, for navigation, or to apply commands.
- Properties - additional named attributes of an asset. Properties do not have a time-series history like a dataitem. The value of a property may be used to dynamically group assets.
- Condition - the current condition may be good, warning, error, or needs maintenance, based on the existence of alarms, for example.
- And, of course, dataitems, alarms and files.

Information is processed and organized in the context of an asset, but the processing is managed for models. The only scalable way to manage a lot of assets is to apply rules by kind of asset, not to individual assets. Rules apply logic to data as it happens: when a new dataitem is reported, a rule may check it against its threshold; when an alarm is created, a rule may create a trouble ticket or notify the user. All types of rules in the Platform - Expression Rules, State Machines, and Threshold Rules - are event-based. Rules apply to models, or sometimes to all assets (such as a standard way of notification on Alarms). The only exceptions are rules that apply to user logins and rules on a system timer.
Software packages are another entity in the Platform. Packages are used to distribute files, software, patches, etc., and to script some commands around their delivery. Packages often upgrade or patch software, or load a new option or help file. Packages are defined for a model, and the deployment may be automatic or manual, to one asset or many.

User logins are members of user groups. User groups have both privileges (what they can do) and visibility (which assets they can see). User group visibility allows the group to access assets in an Asset Group, a Model, or a Region.

How do solutions take advantage of this?

Dataitems can be configured to store no data, the current value, or history. History is needed if you want to see the temperature plot over the last day. Many times, the current value is all that's needed to process rules and see the state of an asset. The option not to store a dataitem makes sense if the dataitem is only used to run a rule, or if it will just be sent to another application. An agent can send a dataitem string to the server, and the server puts the string on the Message Queue to deliver to another application; in this pass-through mode, the dataitem doesn't need to be stored at all. A similar situation is when a string dataitem is parsed by a rule calling a Groovy script. The script can parse the string (which may be XML or part of a log file) and use the SDK to take some action.

Alarms are almost always used to notify people that they should do something. Alarms in Axeda have a lifecycle that corresponds to how people interact with them. An alarm begins its life when it's created. From that point, the alarm can be:
- Acknowledged - someone has seen it.
- Escalated - the alarm condition hasn't been fixed for some time.
- Closed - the end of the alarm's life.
- Suppressed - the alarm is logged in the history, but users don't see it. Set an alarm to suppressed when it's just an annoyance and doesn't require any action.
- Disabled - occurrences of this alarm are thrown away. Rules don't even see them.

The Suppressed and Disabled modes are applied to an alarm of a given name, because they affect all future alarms by that name.

Files are uploaded for a few reasons. Log files are typically uploaded so a service tech can diagnose a problem. Data files can be uploaded so a script or external system can process the file and take appropriate action; this can be another way of sending information that doesn't fit in a dataitem.

The configuration of an asset - both hardware and software - is called Inventory, and the inventory of assets is important in diagnostics, planning spare parts, knowing what patch to apply, and much more.

Extended Objects are attributes that can be added to the objects described here, or can be complete objects that live on their own. Your application can read and write these objects or attributes, and query them. The use is up to you.

Resources

You can find more information on the architecture of the platform in the Introduction to the Axeda Platform. The Platform SDK and Web Services expose most of these objects for configuration as well as runtime. That means an application can provision models and assets, create the rules and apply them to models, then monitor the behavior of assets, all through Web Services.
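As a small taste of that, here is a hedged Groovy sketch, patterned on the asset-extraction script earlier in this digest and assuming the same v2 SDK bridges, that counts the assets reported under each model:

import static com.axeda.sdk.v2.dsl.Bridges.*
import com.axeda.services.v2.*

// Map of model systemId -> modelNumber, as in the extraction script above
def models = [:]
ModelCriteria mc = new ModelCriteria()
mc.modelNumber = "*"
modelBridge.find(mc).models.each { m -> models[m.systemId] = m.modelNumber }

// Count assets per model (first page only, for brevity; page through results as in the script above)
def counts = [:]
AssetCriteria ac = new AssetCriteria()
ac.name = "*"
assetBridge.find(ac).assets.each { a ->
    def mn = models[a.model.systemId]
    counts[mn] = (counts[mn] ?: 0) + 1
}

return ["Content-Type": "application/text", "Content": counts.toString()]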
View full tip
Step 8: Configure Template File (Service)

Services are implemented as Lua functions. In our Lua script, a Service is divided into two pieces. The first is the Service definition, which consists of a Service name, inputs and output. The second part is the Service code, which is run when you execute the Service.

Create Service Definition

1. Open the PiTemplate.lua file.
2. Append the service definition to the file. Create a service named GetSystemProperties that gets the system properties (CPU temperature, clock frequency, voltage) from your Raspberry Pi and updates the respective properties on the ThingWorx platform. Specify your output type but not the name, because the name of every output from a ThingWorx service is always result.

serviceDefinitions.GetSystemProperties(
    output { baseType="BOOLEAN", description="" },
    description { "updates properties" }
)

NOTE: This service has no input parameters and an output that results in true if the properties were successfully updated on ThingWorx.

Create Service Code

The Service code is run when you execute the Service. Functions in Lua are variables, therefore to define the Service code you will create a variable. The name of the Service has to match the name you specified in the Service definition.

Copy the service code below, with comments explaining the logic, and append it to your template file, or download and unzip the full PiTemplate.zip attached here.

services.GetSystemProperties = function(me, headers, query, data)
    log.trace("[PiTemplate]","########### in GetSystemProperties#############")
    queryHardware()
    -- if properties are successfully updated, return HTTP 200 code with a true service return value
    return 200, true
end

function queryHardware()
    -- use the vcgencmd shell command to get raspberry pi system values and assign to variables
    -- measure_temp returns value in Celsius
    -- measure_clock arm returns value in Hertz
    -- measure_volts returns value in Volts
    local tempCmd = io.popen("vcgencmd measure_temp")
    local freqCmd = io.popen("vcgencmd measure_clock arm")
    local voltCmd = io.popen("vcgencmd measure_volts core")

    -- set property temperature
    local s = tempCmd:read("*a")
    s = string.match(s,"temp=(%d+\.%d+)");
    log.debug("[PiTemplate]",string.format("temp %.1f",s))
    properties.cpu_temperature.value = s

    -- set property frequency
    s = freqCmd:read("*a")
    log.debug("[PiTemplate]",string.format("raw freq %s",s))
    s = string.match(s,"frequency%(..%)=(%d+)");
    s = s/1000000
    log.debug("[PiTemplate]",string.format("scaled freq %d",s))
    properties.cpu_freq.value = s

    -- set property volts
    s = voltCmd:read("*a")
    log.debug("[PiTemplate]",string.format("raw volts %s", s))
    s = string.match(s,"volt=(%d+\.%d+)");
    log.debug("[PiTemplate]",string.format("scaled volts %.1f", s))
    properties.cpu_volt.value = s
end

tasks.refreshProperties = function(me)
    log.trace("[PiTemplate]","~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In tasks.refreshProperties~~~~~~~~~~~~~ ")
    queryHardware()
end

3. Save the PiTemplate.lua file (Ctrl+X, then confirm the save).

Step 9: Run LSR

1. Navigate from the installation directory to the microserver directory:

cd microserver

2. Ensure that wsems is running in a separate terminal session before you start running LuaScriptResource.
3. Ensure that you have ownership of the executable luaScriptResource and executable privileges. To check ownership:

ls -la
-rwxrwxr-x 1 pi pi 769058 Jun 9 17:46 luaScriptResource

NOTE: The owner of luaScriptResource should be the user you are logged in as on the Raspberry Pi.
4. Confirm you have executable privileges by running the following command:

sudo chmod 775 luaScriptResource

5. Run LuaScriptResource by executing the following command:

sudo ./luaScriptResource

6. The output will show an error until we create the corresponding Thing in the next step.

Step 10: Bind Remote Thing Properties

Now we need to register a Thing so your Raspberry Pi can bind to its Properties on the ThingWorx platform.

1. Create a Thing named PiThing that will bind the scripts.PiThing created in config.lua:
   - Open your Composer screen.
   - Click Things in the left navigation, then the + symbol.
   - Enter PiThing in the Name field and select RemoteThing in the Thing Template field.
   - Click Save.
2. Ensure that the remote Thing is connected:
   - Click Properties in the left-hand navigation.
   - Verify that the isConnected Property has a value of true. This means that your Raspberry Pi is still connected and now bound to this Thing on the ThingWorx platform.
3. Bind the remote Thing Properties:
   - Make sure the Properties tab is selected and click Edit at the top of the PiThing.
   - Click Manage Bindings.
   - Select the Remote tab at the top.
   - Click Add All Above Properties, or drag and drop the ones you need.
   - Click Done, then Save.
4. Verify that the Properties were updated with readings from the Raspberry Pi. Both the Value and Default Value for the three Properties will be set to the current reading from the Raspberry Pi. Cover the Raspberry Pi and wait about a minute, then select the Properties tab and click Refresh. You will see the cpu_temperature value change.

NOTE: The system properties from your Raspberry Pi are now being passed to the server every 30 seconds. Wait a couple of cycles to see if the values from the Raspberry Pi change. If you are impatient, manually change the value of the property using the Set button in Composer, then click Refresh to see the updated value. The value will be temporarily updated for about 30 seconds, until the Raspberry Pi reports the current live value.

Troubleshooting Tips

Tip #1: If the properties are not updating, try stopping and restarting both the wsems and luaScriptResource services. Type quit in each service's terminal to stop it, then restart with:

sudo ./wsems or sudo ./luaScriptResource

Tip #2: If wsems and/or luaScriptResource is not shut down gracefully, sometimes the service is still running, which can cause issues. You can search for and kill any wsems/luaScriptResource processes using the following commands, then re-run GetSystemProperties to test whether this fixed the issue:

ps -efl
kill -9 <id#>

Step 11: View Data from Devices

In order to demonstrate how ThingWorx can render a visualization of data from connected devices, at this point in the lesson you will import a pre-configured Mashup.

1. On the ThingWorx server that the EMS is connected to, start on the Home tab of Composer.
2. Import the pre-built Mashup:
   - Download and save the pre-built Mashup XML file attached here: Mashups_PiThingMashup-v91.xml.
   - In Composer, click the Import/Export drop-down at the bottom-left.
   - Click Import.
   - Leave all default values and click Browse to select the Mashups_PiThingMashup-v91.xml file that you just downloaded.
   - Click Open, then Import, and once you see the success message, click Close.
3. View the Mashup displaying live data:
   - Select the home icon in the top-left of Composer, then click Mashups in the left navigation panel.
   - Click Mashups_PiThingMashup-v91 and you'll see the design view of the Mashup.
   - Click View Mashup, and you'll see the live Mashup.
TIP: You will need to allow pop-ups in your browser for the Mashup to be displayed.     Click here to view Part 4 of this guide. 
View full tip
ThingWorx Manufacturing Tips & Tricks Webinar is a weekly opportunity to hear PTC Subject Matter Experts present on various topics related to the manufacturing space and applications.

Agenda for this week's recorded session:
- Overview & Application Demo - Aron Semle
- Architecture Overview - Varathan Ranganathan
- Q&A
View full tip
ThingWorx Analytics Interactive API Guide is a great way for users to familiarize themselves with ThingWorx Analytics API calls. It even gives users the ability to run jobs through its interface. This blog post will cover how to access the ThingWorx Analytics Interactive API Guide installed on a virtual machine or standalone server.

Steps

1. Get the IP address of the ThingWorx Analytics Server by typing ip a.
2. Put that IP address into the desired web browser. (Your IP address may be different from the one in the picture above.)
3. Add the port number of the server to the end of the IP address, with a colon ":" between the end of the IP address and the start of the port number - for example, 192.168.1.50:8080. The default port number is 8080, though it could be different if it was configured differently during installation.
4. Hit Enter and the main page will load.
View full tip
5 Common Mistakes to Developing Scalable IoT Applications
by Tori Firewind and the IoT EDC Team

Introduction

To build scalable applications, it's necessary to identify common mistakes and avoid them at the early stages of development. In an expert session this past month, the PTC Enterprise Deployment Team elaborated on why scalability is important and how to avoid the common development pitfalls in IoT. That video presentation has been adapted here for visual consumption of the content as well.

What is Scalability and Why Does it Matter

Enterprise-ready applications can scale and be easily maintained, which is important even from day 1, because scalability concerns are the largest cause of delays to go-lives. Applications balance many competing requirements, and performance testing is crucial to ensure an application is ready for go-live. However, don't just test how many remote assets can connect at once; also test any metrics that are expected to increase over time, like the number of remote properties per Thing, the frequency of reporting from those properties, or the number of users accessing the system at once. Also consider how connecting more assets will affect the user experience and business logic, and not just the ability to ingest data.

Common Mistake 1: Edge Property Updates

Because ThingWorx is always listening for updates pushed from the Edge, and those resources are always in use, pulling updates from the Foundation side wastes resources. A fetch from remote on every read is essentially a round trip, so it's slower and more memory intensive, but there are reasons to do it - for example, if the quality tag is needed, since the cache doesn't store it. Say a property is pushed at 11:01, and then there's a network issue at 11:02. If the property is pulled from the cache, it will return the value sent at 11:01 without any indication that there is a more recent value on the Edge device. Most people will use the default options here: read from the server cache (which relies on the Edge to push updates) and the VALUE push type. Configuring a threshold is a good idea as well, so that only those property updates which are truly necessary are sent to the Foundation server. Details on property aspects can be found in KCS Article 252792.

This is well documented in another PTC Community post. This approach is necessary and considered a best practice if there is event logic which depends on multiple properties at once. Sending all of the necessary properties to determine whether an event should fire in one Infotable ensures there is no need to query the database each time a property update comes in from the Edge, which keeps the business logic independent and reduces the load on the database, improving ingestion performance. This is a very broad topic, and future articles will address it more specifically.

The When Disconnected property aspect is a good way to configure what happens with Edge property values in a mass disconnect scenario. If revenue depends on uptime, consider losing any data that changes while a device is disconnected. All of the updates can be folded into a single value if the changes themselves aren't needed but an updated value is needed to populate remote properties upon reconnect. Many customers will want to keep all of their data, even when a device is offline, and use data stores. In this case, consider how much data each Edge device can store (due to memory limitations on the devices themselves), and therefore how long an outage can last before data is lost anyway.
Also consider whether Foundation can handle massive spikes in activity when this data comes streaming in. Usually, a Connection Server isn't enough. Remember that the more data needs to be kept, the greater the potential for a thundering herd scenario.

Handling a thundering herd scenario goes beyond sizing considerations. It is absolutely crucial to randomize the delay each device will wait before attempting to reconnect. It should be considered a requirement to have the devices connect slowly and "ramp up" over time, for multiple reasons. One is that too much data coming in too fast could overwhelm the ingestion queue and result in data loss. Another is that the business logic could demand so many system resources that the Foundation server crashes again and again and cannot be recovered. Turning off the business logic isn't possible if the downtime is unexpected, so rely instead on randomized reconnection times for Edge devices.

Common Mistake 2: Overlooking Differences in HA

To accommodate a shared thing model across many servers, changes had to be made in how the thing model is stored and how the model tree is walked by the Foundation servers. Model information is no longer cached at the Thing level, and the model tree is therefore walked every time model information is needed, so the number of times a Thing is directly referenced within each service should be limited (see the Help Center for details).

It's best to store whatever information is needed from a Thing in an Infotable, making the Things[thingName] reference a single time, outside of any loops. Storing the property definitions outside of the loop prevents repetitious Thing references within the service, which would otherwise occur twice for each property (once for the name and once for the description), and then again for every single property on the Thing - a runtime nightmare.

Certain states previously held in memory are now shared across the cluster, like property values, Thing states, and connection statuses. Improvements have been made to minimize the effects of latency on queries, like how they now only return property values on associated Thing Shapes or Thing Templates. Filtering for properties on implementing Things is still possible, but now there is a specific service to do it, called GetThingPropertyValues (covered in detail in the Help Center).

In the optimized pattern, the first step is a query to get the names of all implementing Things of a particular Thing Shape. This is done outside of any loops, so once per service call. Then, an Infotable is built to store what would have been a direct reference to each Thing in a traditional loop. This is a very quick loop that doesn't add much runtime, since it is all in memory, with no references to the thing model or the database, instead using the results of the first query to build the Infotable. Finally, this Thing-reference Infotable is passed into the new service GetThingPropertyValues to retrieve all of the property info for all of these Things at once, thereby walking the thing model only once (a hedged sketch of this pattern follows below). The easiest mistake to make here is to do a direct Thing reference inside of a loop, using code like Things[thingName].Get() over and over again, thereby traversing the thing model repeatedly and adding a lot of runtime. QueryImplementingThingsOptimized is another new service, with new parameters for advanced configuration.
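As a rough reconstruction of that pattern (the Thing Shape and property names are hypothetical, and the exact GetThingPropertyValues signature is documented in the Help Center rather than reproduced here):

// BAD: walks the thing model on every iteration
// for (var i = 0; i < thingNames.rows.length; i++) {
//     var t = Things[thingNames.rows[i].name]; // direct model reference inside a loop
// }

// BETTER: query once, build an in-memory Infotable of thing references
var implementers = ThingShapes["ConnectedDeviceShape"].QueryImplementingThings({ maxItems: 1000 });

var refs = Resources["InfoTableFunctions"].CreateInfoTable({ infoTableName: "ThingRefs" });
refs.AddField({ name: "name", baseType: "THINGNAME" });
for (var i = 0; i < implementers.rows.length; i++) {
    refs.AddRow({ name: implementers.rows[i].name }); // quick in-memory loop, no model walk
}

// Finally, pass the reference Infotable to GetThingPropertyValues to fetch all
// property values in a single model walk (see the Help Center for its parameters).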
With QueryImplementingThingsOptimized, searches can now be done on particular networks or to particular depths, and there's an offset parameter that allows a maximum number of items to be returned starting at any place in the list of Things - where previously, if you needed the Things at the end of the list, you had to return all of the Things. All of these options are detailed in the Help Center, as well as the restrictions listed in the image above.

Common Mistake 3: Async Service Misuse

Async services are sometimes required - say, if a user has to trigger many updates on many remote Things at once at the click of a button on a mashup that should not be locked up waiting for service completion. Too many async service calls, though, result in spikes in activity and competition for resources. To avoid this mistake, do not use async unless strictly necessary, and avoid launching too many async threads in parallel. A thread dump will show how many threads there are and what they are doing.

Common Mistake 4: Thread Pool Overload

Adding more threads to the pool may be beneficial in certain circumstances - for instance, if the threads are waiting on other resources to complete their tasks, looking things up in the database (I/O), or waiting to unlock data that can only be accessed one thread at a time (property writes). In these cases, threads are waiting on other resources, not the CPU, so adding more threads to the pool can improve performance. With too many threads, however, performance degradation will occur due to increased contention, wasted CPU cycles, and context switches.

To check whether there are too many or not enough threads in the pool, take thread dumps and time the completion of requests in the system. Also watch the subsystem memory usage, and note that the size of the queue should never approach the max. Consider monitoring the overall performance of the system (CPU and memory) with a tool like Grafana, and remember that a good performance test properly exercises all of the business logic and induces threads in a way similar to real-world expectations.

Common Mistake 5: Stream Etiquette

Upserts, or updates to database tables, are expensive operations that can interfere with ingestion if they are performed on the wrong tables. This is why Value Stream and Stream data should never be updated by end users of the application. As described in the DGIS document on best practices, aggregation is the key to unlocking optimal performance, because it reduces the size of the database tables that require upserts. Each data structure discussed here has an optimal use in a well-designed ThingWorx application.

Data Tables are great for storing overview information on all of the Things in one view, and queries on this data source are the fastest. Update this data source as often as possible (by timer), allowing enough time for updates to be gathered and any necessary calculations made. Data Tables can also be updated by end users directly, because rows lock one at a time during updates. Data Tables should be kept as small as possible to improve performance on mashups - for instance, consider using one to show all Things per region if there are millions of Things. Roll-up information is best stored here to avoid calculations upon mashup load, and while a real-time view of many thousands of Things at once is practically impossible, this option allows for a frequently updated overview of many Things, which can also drill down to other mashup views that are real-time for one Thing at a time.
Value Streams are best used for data ingestion, and queries on them should be kept to a minimum, largely performed by the roll-up logic that populates the Data Tables mentioned above. Queries that chart all of the data coming in are best utilized on individual Thing views, so that only a handful of users are querying the same data sources at a time. Also be sure to use start and end dates, and make use of the "source" field, to improve query performance and create a better user experience. Due to the massive size of the corresponding database tables, it's best to avoid updating Value Streams outside of the data ingestion process altogether.

Streams are similar, but better for storing aggregated, historical data. Usually once per day or per week (outside of business hours if possible), Value Stream data is smoothed or reduced into fewer data points and then stored into Streams. This allows data to be stored for longer periods of time on the server without using up as much memory or hurting query performance. The high-volume ingested data sources can then be purged frequently, as discussed below.

Infotables are the most memory intensive and are really designed to hold only a small number of rows at a time, usually to facilitate the business logic. Sometimes they will be stored in Streams or Data Tables if they aren't expected to grow larger (see the DGIS Coffee Machine App for an example). Infotables should never be logged; if they are used to transmit Edge property updates (like in the Property Set Approach), they should be processed into other logged (usually local) properties.

Referring to the properties themselves is how to get real-time information on a mashup, say by using the GetProperties service and its auto-update option, which relies on internal websockets. This should be done on individual Thing views only, and sizing considerations need to be made if there will be many of these websockets open at once, say if many end users are all viewing real-time data at a time.

In the newer versions of ThingWorx, the stream processing settings cannot be updated directly, so find the system object called ThingWorxPersistenceProvider and use the service UpdateStreamDataProcessingSettings. ThingWorx Foundation processes data received from remote devices in batches in order to manage the data flow and reduce database churn. All of these settings configure how large those batches are and how frequently they are flushed to the database (detailed in full in KCS Article 240607). This is very advanced configuration that heavily depends on use case and infrastructure, but some guidance applies to most people: adjusting the scan rate is usually not beneficial; a healthy queue should never approach its max limit; and defaults differ by database, because databases function differently. InfluxDB generally works better when there are fewer processing threads and higher numbers of Things per thread, while PostgreSQL can have a lot of threads, preferably with fewer Things per thread. That's why the default values shown here are given as the same number of threads (and this can be changed), but Influx has a larger block size and size threshold, because it can handle more items per thread.

Value Streams ingest all data into the Foundation server, and so the database tables that correspond with these data sources grow very large very quickly and need to be purged often, outside of business hours - usually once a day or once per week.
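A sketch of what that call might look like from a service follows - the parameter names and values here are assumptions based on the settings described above, so confirm them against KCS Article 240607 and your version's Help Center before use:

// Hypothetical invocation; verify parameter names and safe values for your installation
PersistenceProviders["ThingWorxPersistenceProvider"].UpdateStreamDataProcessingSettings({
    maximumBlockSize: 2500,       // max items written per batch (assumed name)
    maximumQueueSize: 250000,     // queue cap; a healthy queue never approaches this (assumed name)
    maximumWaitTime: 10000,       // ms before a partial batch is flushed (assumed name)
    numberOfProcessingThreads: 5, // threads flushing batches to the database (assumed name)
    scanRate: 5,                  // queue scan interval; rarely worth changing (assumed name)
    sizeThreshold: 1000           // batch size that triggers an immediate flush (assumed name)
});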
That's why it's important to reduce the data down to fewer points and push them into Streams for historical reference. For a span of years, a single point a day might be enough; for a span of hours, consider a data point a minute. Push aggregated data into Streams, and purge the rest as soon as it is no longer needed.

In Conclusion
View full tip
Hi,

I launched version 2.1.0 of the ThingWorx GitBackup Extension on GitHub, in the following repository: https://github.com/vrosu/thingworx-gitbackup-extension. This release contains a few UI fixes and experimental support for proxies.

The GitBackup extension is updated on a best-effort basis and accepts community fixes, which are best submitted through GitHub. Please report any troubles via GitHub's issue system.

Important notice: As you might be aware, we have just launched ThingWorx 8.5 and with it a new solution called Solution Central, a brand-new cloud-based service to help you package, store, deploy, and manage your ThingWorx applications.

This service is supported by PTC, and it contains features that may seem similar to the GitBackup extension, but they are not. They are different types of tools: Git is a source versioning tool, while Solution Central is an artifact repository. When deploying applications between ThingWorx instances you should use Solution Central, while if you want to version your entities during development you can use the GitBackup extension.
View full tip