

IoT Tips

Sometimes you need to do something on a schedule. Axeda Platform is primarily focused on processing events as they happen, but when some action needs to take place periodically, there are Rule Timers. A Rule Timer has a schedule on which it runs and a list of rules it's associated with.

Rule Timer Schedules

A schedule is defined by a string using the cron syntax. This syntax is extremely flexible and powerful, but can be hard to understand. The fields are as follows:

Seconds (0-59)
Minutes (0-59)
Hours (0-23)
Day-of-Month (1-31)
Month (1-12) OR jan, feb, mar, apr ...
Day-of-Week (0-7) (Sunday is 0 or 7) OR sun, mon, tue, wed, thu, fri, sat
Year (optional field)

Some examples:

"0 0 12 ? * WED" - every Wednesday at 12:00 pm
"0 0/5 * * * ?" - fire every 5 minutes
"0 0 2 * * ?" - fire at 2am every day

Note: Rule Timer schedules are in GMT/UTC.

Associated Expression Rules

A Rule Timer has no other purpose but to run Expression Rules. These rules can be System or Asset rules. SystemTimer means an Expression Rule with its type set to SystemTimer; this rule is run once per scheduled time. A system timer can run a script to export data every day, or enable some rules at the end of a beta program. The other type is an Expression Rule with type AssetTimer; this rule is run for all associated assets at the scheduled time.

Say you want to save the max speed every day for a fleet of vehicles. You have an Expression Rule that's associated with the model Vehicle, and a Rule Timer set to run every day:

Rule SetMax
  Type: Data
  If: Speed > MaxSpeed
  Then: SetDataItem("MaxSpeed", Speed)

Rule DailyMax
  Type: AssetTimer
  If: MaxSpeed > 0
  Then: SetDataItem("DailyMax", MaxSpeed) && SetDataItem("MaxSpeed", 0)
  Associated to: Vehicles

RuleTimer NightlyUpdate
  Schedule: "0 0 9 * * ?"
  Associated rule: DailyMax

The SetMax rule stores the max speed in a data item for each Vehicle asset. The NightlyUpdate timer runs at 9am UTC (4am US Eastern Standard Time, 1am Pacific Standard Time). It writes the max into a DailyMax data item and resets MaxSpeed to 0, ready for another day. The effect of the timer is that each Vehicle asset processes an event and runs any rules that apply to it.

Note! If you create the MaxSpeed data item through the model wizard, it doesn't have an initial value, so the comparison in IF Speed > MaxSpeed will not be evaluated until the first day the timer sets its value to 0.
View full tip
Mapping previous versions of ThingWorx Analytics API to ThingWorx Analytics 8.1 Services

Since ThingWorx Analytics 8.1, the classic server monolith has been replaced by a series of independent microservices. This new structure groups services around specific elements of functionality (data, training, results). The previous API commands for accessing ThingWorx Analytics functions have therefore been replaced by ThingWorx Services, which live on specific microservice Things accessible in ThingWorx Platform 8.1.

The mapping below covers the most common API commands from version 8.0 and earlier and their version 8.1 related services. It is not an exhaustive listing of either API commands or Services. The sample API commands might require further information, such as headers and a body, when used; they are shown here for reference purposes only.

1. Version Info
Sample syntax: GET: http://<IP Address>:8080/1.0/about/versioninfo
TWA 8.1 Service: VersionInfo - available on each microservice Thing inheriting from Analytics Server.
Returns the internal version number for a specific microservice. The first two digits = ThingWorx Core version; the next three digits = version of the microservice.

2. Registering a new Dataset
Sample syntax: POST: http://<IP Address>:8080/1.0/datasets/
TWA 8.1 Service: CreateDataset - Data Microservice.
Creates the dataset, uploads the data along with its metadata, and optimizes it automatically.

3. Checking Dataset Status
Sample syntax: GET: http://<IP Address>:8080/1.0/datasets/<DataSet Name>
TWA 8.1 Service: ListCreatedDatasets - Data Microservice.
This old functionality is replaced by a service that lists all the created datasets.

4. Creating Metadata
Sample syntax: POST: http://<IP Address>:8080/1.0/datasets/<DataSet Name>/configuration
TWA 8.1 Service: CreateDataset - Data Microservice (see item 2).

5. Checking Dataset Configuration
Sample syntax: GET: http://<IP Address>:8080/1.0/datasets/<DataSet Name>/configuration
TWA 8.1 Service: GetDatasetSchema - Data Microservice.
Retrieves the metadata from a dataset.

6. Loading Dataset CSV
Sample syntax: POST: http://<IP Address>:8080/1.0/datasets/<DataSet Name>/data
TWA 8.1 Service: CreateDataset - Data Microservice (see item 2).

7. Checking Job Status
Sample syntax: GET: http://<IP Address>:8080/1.0/status/<Job ID>
TWA 8.1 Service: GetJobStatus - available on all microservice Things inheriting from AnalyticsJob Server.
Retrieves the status of a specific job.

8. Signals Job
Sample syntax: POST: http://<IP Address>:8080/1.0/datasets/<DataSet Name>/signals
TWA 8.1 Service: CreateJob - Signals Microservice.
Creates a job to identify signals.

9. Signals Result Job
Sample syntax: GET: http://<IP Address>:8080/1.0/datasets/<DataSet Name>/signals/<Job ID>/results
TWA 8.1 Service: RetrieveResult - Signals Microservice.
Retrieves the result of a Signals job.

10. Profile Job
Sample syntax: POST: http://<IP Address>:8080/1.0/datasets/<DataSet Name>/profiles
TWA 8.1 Service: CreateJob - Profiling Microservice.
Creates a job to generate profiles.

11. Profile Result Job
Sample syntax: GET: http://<IP Address>:8080/1.0/datasets/<DataSet Name>/profiles/<Job ID>/results
TWA 8.1 Service: RetrieveResult - Profiling Microservice.
Retrieves the results of a profiles job.

12. Train Model Job
Sample syntax: POST: http://<IP Address>:8080/1.0/datasets/<DataSet Name>/prediction
TWA 8.1 Service: CreateJob - Training Microservice.
Creates a prediction model job.

13. Train Model Result Job
Sample syntax: GET: http://<IP Address>:8080/1.0/datasets/<DataSet Name>/prediction/<Job ID>/results
TWA 8.1 Service: RetrieveModel - Training Microservice.
Only retrieves the PMML model. However, if a holdout for validation was specified in the CreateJob, a validation job is auto-created and run.

14. Scoring Job
Sample syntax: POST: http://<IP Address>:8080/1.0/datasets/<DataSet Name>/predictive_scores
TWA 8.1 Service: BatchScore - Prediction Microservice.
Submits a predictive scoring job.

15. Scoring Job Result
Sample syntax: GET: http://<IP Address>:8080/1.0/datasets/<DataSet Name>/predictive_scores/<Job ID>/results
TWA 8.1 Service: RetrieveResult - Prediction Microservice.
Retrieves results from predictive scoring jobs.
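To make the mapping concrete, here is a minimal sketch of how the old create/poll/retrieve REST pattern translates into service calls from a ThingWorx script. The Thing name "AnalyticsServer_SignalsThing" and the parameter names are illustrative assumptions, not fixed platform names -- check the microservice Things actually deployed on your 8.1 platform and their service definitions:

// Hypothetical Thing name and parameter names -- verify against your deployed microservice Things
var jobId = Things["AnalyticsServer_SignalsThing"].CreateJob({ /* dataset reference and job options go here */ });

// GetJobStatus is inherited from the Analytics job server and takes the job identifier
var status = Things["AnalyticsServer_SignalsThing"].GetJobStatus({ jobId: jobId });

// once the job has completed, RetrieveResult returns the Signals output
var signals = Things["AnalyticsServer_SignalsThing"].RetrieveResult({ jobId: jobId });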
View full tip
This Expert Session provides a general overview of multitenancy and platform security. It discusses the available security levels and the necessary basic resources, provides information on the system user, and includes several how-to examples. It's assumed that the audience is familiar with the Composer and its navigation.   For full-sized viewing, click on the YouTube link in the player controls.   Visit the Online Success Guide to access our Expert Session videos at any time as well as additional information about ThingWorx training and services.
View full tip
This Expert Session provides an in-depth explanation of how Signals are calculated in ThingWorx Analytics, what purpose they serve, and why we use them. Some basic mathematical concepts are discussed so viewers will have a better idea of how ThingWorx Analytics operates behind the scenes.   For full-sized viewing, click on the YouTube link in the player controls.   Visit the Online Success Guide to access our Expert Session videos at any time as well as additional information about ThingWorx training and services.
View full tip
This Expert Session goes over ways to identify and develop a successful use case for ThingWorx Analytics. The example use case presented here is employee retention in a fictional company, with the goal of maximizing employee retention. This presentation provides all the fundamentals you need to develop your own ThingWorx Analytics use cases from the ground up.   For full-sized viewing, click on the YouTube link in the player controls.   Visit the Online Success Guide to access our Expert Session videos at any time as well as additional information about ThingWorx training and services.
View full tip
First we need to understand the terms below.

Quantitative variable: A quantitative variable is naturally measured as a number for which meaningful arithmetic operations make sense. Examples: height, age, crop yield, GPA, salary, temperature, area, air pollution index (measured in parts per million), etc.

Categorical variable: Any variable that is not quantitative is categorical. Categorical variables take a value that is one of several possible categories. As naturally measured, categorical variables have no numerical meaning. Examples: hair color, gender, field of study, college attended, political affiliation, status of disease infection.

Ordinal variables: An ordinal variable is a categorical variable for which the possible values are ordered. Ordinal variables can be considered "in between" categorical and quantitative variables. Example: educational level might be categorized as

1: Elementary school education
2: High school graduate
3: Some college
4: College graduate
5: Graduate degree

• In this example (and for many ordinal variables), the quantitative differences between the categories are uneven, even though the differences between the labels are the same (e.g., the difference between 1 and 2 is four years, whereas the difference between 2 and 3 could be anything from part of a year to several years).
• Thus it does not make sense to take a mean of the values.
• Common mistake: treating ordinal variables like quantitative variables without thinking about whether this is appropriate in the particular situation at hand.

Ordinal regression: In statistics, ordinal regression (also called "ordinal classification") is a type of regression analysis used for predicting an ordinal variable. The Ordinal Regression procedure allows you to build models, generate predictions, and evaluate the importance of various predictor variables in cases where the dependent (target) variable is ordinal in nature.

Ordinal dependents and linear regression: When you are trying to predict ordinal responses, the usual linear regression models don't work very well. Those methods work only by assuming that the outcome (dependent) variable is measured on an interval scale. Because this is not true for ordinal outcome variables, the simplifying assumptions on which linear regression relies are not satisfied, and thus the regression model may not accurately reflect the relationships in the data. In particular, linear regression is sensitive to the way you define categories of the target variable. With an ordinal variable, the important thing is the ordering of categories. So, if you collapse two adjacent categories into one larger category, you are making only a small change, and models built using the old and new categorizations should be very similar. Unfortunately, because linear regression is sensitive to the categorization used, a model built before merging categories could be quite different from one built after.

Below are some examples of ordered logistic regression.

Example 1: A marketing research firm wants to investigate what factors influence the size of soda (small, medium, large or extra large) that people order at a fast-food chain. These factors may include what type of sandwich is ordered (burger or chicken), whether or not fries are also ordered, and the age of the consumer. While the outcome variable, size of soda, is obviously ordered, the difference between the various sizes is not consistent: the difference between small and medium is 10 ounces, between medium and large 8, and between large and extra large 12.

Example 2: A researcher is interested in what factors influence medaling in Olympic swimming. Relevant predictors include training hours, diet, age, and popularity of swimming in the athlete's home country. The researcher believes that the distance between gold and silver is larger than the distance between silver and bronze.

Example 3: A study looks at factors that influence the decision of whether to apply to graduate school. College juniors are asked if they are unlikely, somewhat likely, or very likely to apply to graduate school. Hence, our outcome variable has three categories. Data on parental educational status, whether the undergraduate institution is public or private, and current GPA is also collected. The researchers have reason to believe that the "distances" between these three points are not equal. For example, the "distance" between "unlikely" and "somewhat likely" may be shorter than the distance between "somewhat likely" and "very likely".

How to use and get results with Ordinal Regression: click this link for the PDF.

PDF source: http://www.norusis.com
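As background, the most common formulation behind ordinal regression is the proportional-odds (cumulative logit) model; a given tool may use a different link function, but the idea is the same. It models the cumulative probabilities of the ordered categories:

\log\frac{P(Y \le j)}{P(Y > j)} = \theta_j - \beta^\top x, \qquad j = 1, \dots, J-1

with one threshold \theta_j per boundary between adjacent categories and a single coefficient vector \beta shared across all boundaries, so the predictors shift the whole set of cumulative odds without assuming equal spacing between the category labels.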
View full tip
Ran into this recently and thought I'd share an approach to getting a table that is distinct on multiple columns yet retains all the columns of the row. If you use Distinct, you get only the columns you ran Distinct on. This isn't very helpful if you want the 'latest' or the 'first occurrences' of records in your table with a combination of fields being unique. For example, I had Process, Part, Dimension and Point for which I had multiple value and date-time entries, but I only wanted the latest entries. Following is how I solved it; if you have a better way please leave a comment!

P.S.: for the query I used the awesome query builder available in the snippet section!

---------------------------------------

var q1Result = Things["MyThing"].QueryStreamEntriesWithData({maxItems: 99999, query: query1});

// Below creates a temporary measurement table to store the latest measurement values
var params = {
    infoTableName: "InfoTable",
    dataShapeName: "MyDatashape.DS"
};
// CreateInfoTableFromDataShape(infoTableName:STRING("InfoTable"), dataShapeName:STRING):INFOTABLE(MyDataShape.DS)
var tempTable1 = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);

// Extract only the latest measurements for the PART from the measurement result table 'q1Result'.
// The way we are going to reduce this to unique measurements is:
// 1. records are in reverse order of date time
// 2. get distinct by Process, Part, Dimension, Point
// 3. step through and match against the distinct set
// 4. first match goes into the final set
// 5. upon match, remove it from the distinct set
// 6. if no match, skip the record
// 7. if no more distinct records to match, break the loop
var params = {
    t: q1Result /* INFOTABLE */,
    columns: 'ProcessID,PartID,Dimension,Point' /* STRING */
};
// result: INFOTABLE
var distinctResult = Resources["InfoTableFunctions"].Distinct(params);

for (var x = 0; x < q1Result.rows.length; x++) {
    var query = {
        "filters": {
            "type": "AND",
            "filters": [
                { "fieldName": "ProcessID", "type": "EQ", "value": q1Result.rows[x].ProcessID },
                { "fieldName": "PartID", "type": "EQ", "value": q1Result.rows[x].PartID },
                { "fieldName": "Dimension", "type": "EQ", "value": q1Result.rows[x].Dimension },
                { "fieldName": "Point", "type": "EQ", "value": q1Result.rows[x].Point }
            ]
        }
    };

    var params = {
        t: distinctResult /* INFOTABLE */,
        query: query /* QUERY */
    };
    // result: INFOTABLE
    var matchResult = Resources["InfoTableFunctions"].Query(params);

    if (matchResult.rows.length == 1) {
        tempTable1.AddRow(q1Result.rows[x]);

        var params = {
            t: distinctResult /* INFOTABLE */,
            query: query /* QUERY */
        };
        // result: INFOTABLE
        distinctResult = Resources["InfoTableFunctions"].DeleteQuery(params);
        if (distinctResult.rows.length == 0) {
            break;
        }
    }
}

// I now have a tempTable1 with the full rows, distinct on the 4 fields
result = tempTable1;
View full tip
This is a useful trick for rolling up metrics in ThingWorx across various levels of a hierarchy by using Networks, ThingShapes, and recursive service definitions. Say that you have a hierarchy of Things in your model such as a Global view, a Region view, and a Store view -- this could be a Mfg Plant, a Building, an Asset, etc.; whatever the core metric-producing Thing is in your model -- where your Store has KPIs that you want to roll up across regions and globally.

First, create a template for each of your hierarchical levels. In my case it is a GlobalTemplate, RegionalTemplate, and StoreTemplate. Add a property to your StoreTemplate that will be the KPI. Now, create a Thing for the Globe and for each of your Regions and Stores, and add them to a hierarchical Network.

Next, we need to create a ThingShape to aggregate our KPIs and apply it to the Global, Regional, and Store templates. We will define a recursive function on our ThingShape called GetKPI with the following:

// define our base case: when the thing template we are on is the lowest level of our hierarchy, in this case the StoreTemplate
if (me.thingTemplate == "StoreTemplate") {
    // in our base case, the result is just the property for the metric we want to aggregate
    result = me.someMetric;
} else {
    // otherwise, we are at some other level in the hierarchy and we need to get our child connections from the network
    // this gets all the things below us in the network
    var params = {
        name: me.name /* STRING */
    };
    // result: INFOTABLE dataShape: NetworkConnection
    var network = Networks["Network"].GetChildConnections(params);

    // loop through each of the things below us in the hierarchy and recursively add the result of GetKPI() to our result
    result = 0;
    for each (var row in network.rows) {
        result += Things[row.to].GetKPI();
    }
}

This is a simple case of just summing up a single property, but we can take this further using the Union and Aggregate snippets provided by ThingWorx to do other kinds of summarization. First add a new property called someAvgMetric to our StoreTemplate, and define a new service GetKPIProperties, with an InfoTable result, on the StoreTemplate:

var params = {
    propertyNames: { "items": ["someMetric", "someAvgMetric"] } /* JSON */
};
// result: INFOTABLE dataShape: "undefined"
var result = me.GetNamedProperties(params);

Now, define a new service on our ThingShape to utilize this service as our base case and aggregate the resulting InfoTable when necessary. We'll call this service GetKPIAggregates:

// define our base case
if (me.thingTemplate == "StoreTemplate") {
    // this function will be on the StoreTemplate, and returns the base infotable
    result = me.GetKPIProperties();
} else {
    // grab our network
    var params = {
        name: me.name /* STRING */
    };
    // result: INFOTABLE dataShape: NetworkConnection
    var network = Networks["Network"].GetChildConnections(params);

    // need to create an empty infotable to union into. I glossed over this, but you'll need a datashape here
    // create empty infotable
    var result = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape({ infoTableName: "InfoTable", dataShapeName: "KPIDataShape" });

    // loop through and union each of our results into our new infotable
    for each (var row in network.rows) {
        var params = {
            t1: result /* INFOTABLE */,
            t2: Things[row.to].GetKPIAggregates() /* INFOTABLE */
        };
        result = Resources["InfoTableFunctions"].Union(params);
    }

    // aggregate each of our fields
    var params = {
        t: result /* INFOTABLE */,
        columns: "someMetric,someAvgMetric" /* STRING */,
        aggregates: "SUM,AVERAGE" /* STRING */,
        groupByColumns: undefined /* STRING */
    };
    // result: INFOTABLE
    result = Resources["InfoTableFunctions"].Aggregate(params);

    // need to loop through each of our field names and make them match our base infotable
    // (Aggregate prefixes each column with the aggregate name, hence the split on "_")
    var dataShapeFields = result.dataShape.fields;
    for (var fieldName in dataShapeFields) {
        var stringName = dataShapeFields[fieldName].name;
        var params = {
            t: result /* INFOTABLE */,
            from: stringName /* STRING */,
            to: stringName.split("_")[1] /* STRING */
        };
        // result: INFOTABLE
        result = Resources["InfoTableFunctions"].RenameField(params);
    }
}

Now, in our mashups, we can use a DynamicThingShape and call our GetKPI service at any level in our network, and our data will be aggregated correctly for whatever level we are at in the hierarchy!
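With the ThingShape applied at each level, any node in the Network can be queried directly; a minimal usage sketch (the Thing names "Globe" and "RegionEast" are stand-ins for whatever Things you created above):

// roll up across the whole hierarchy
var globalTotal = Things["Globe"].GetKPI();

// roll up only the Stores under one Region
var regionTotal = Things["RegionEast"].GetKPI();

// same idea for the multi-column variant
var regionKPIs = Things["RegionEast"].GetKPIAggregates();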
View full tip
In this blog I will be testing the SAPODataConnector using the SAP Gateway - Demo Consumption System.

Overview

The SAPODataConnector enables the connection to the SAP Netweaver Gateway through the OData specification. It is a specialized implementation of the ODataConnector. See Integration Connectors for documentation.

It relies on three components:
Integration Runtime: a microservice that runs outside of ThingWorx and has to be deployed separately; it uses WebSocket to communicate with the ThingWorx platform (similar to EMS).
Integration Subsystem: available by default since 7.4 (no extension needed).
Integration Connector: SAPODataConnector, available by default in 8.0 (no extension needed).

ThingWorx can use OAuth to access SAP, but in this blog I will just use basic authentication.

SAP Netweaver Gateway Demo system registration

1. Create an account on the Gateway Demo system (credentials to be used on the connector are sent by email).
2. Verify that the account has access to the basic OData sample service: https://sapes4.sapdevcenter.com/sap/opu/odata/IWBEP/GWSAMPLE_BASIC/

Integration Runtime microservice setup

1. Follow WindchillSwaggerConnector hands-on (7.4) - Integration Runtime microservice setup.
Note: only one Integration Runtime instance is required for all your Integration Connectors (multiple instances are supported for high availability and scale).

SAPODataConnector setup

Use the New Composer UI (some settings, such as API maps, are not available in the ThingWorx legacy Composer).

1. Create a DataShape that is used to map the attributes being retrieved from SAP: SAPObjectDS with Id (STRING), Name (STRING), Price (NUMBER).
2. Create a Thing named TestSAPConnector that uses SAPODataConnector as its thing template.
3. Set up the SAP Netweaver Gateway connection under TestSAPConnector > Configuration:
   Generic Connector Connection Settings: Authentication Type = fixed
   HTTP Connector Connection Settings:
     Username = <SAP Gateway user>
     Password = <SAP Gateway pwd>
     Base URL: https://sapes4.sapdevcenter.com/sap
     Relative URL: /opu/odata/IWBEP/GWSAMPLE_BASIC/
     Connection URL: /opu/odata/IWBEP/GWSAMPLE_BASIC/$metadata
4. Create the API maps and service under TestSAPConnector > API Maps (New Composer only):
   Mapping ID: sap
   EndPoint: getProductSet
   Select DataShape: SAPObjectDS (created at step 1) and map the following attributes:
     Name <- Name
     Id <- ProductID
     Price <- Price
   Pick "Create a Service from this mapping".

Testing our Connector

Test the TestSAPConnector::getProductSet service (keep all the input parameters blank).
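As a quick sanity check, the generated service can also be invoked from another ThingWorx service; a minimal sketch using the Thing and mapping created above, with no input parameters to match the blank-parameter test:

// returns an InfoTable shaped by SAPObjectDS (Id, Name, Price)
var products = Things["TestSAPConnector"].getProductSet();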
View full tip
This example provides the ability to generate a simple entity structure and some historical data for each entity. The historical data is run through a ThingWorx service to generate histogram data for display in a bar chart. The attached ThingWorx entities and PDF document contain the example as well as its documentation.
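For readers who want the flavor of such a service without downloading the example, here is a minimal, hedged sketch of histogram bucketing in a ThingWorx service -- not the packaged implementation. It assumes an InfoTable input named data with a NUMBER field value, and a hypothetical DataShape HistogramDS with label (STRING) and count (NUMBER) fields:

// find the numeric range of the data
var min, max;
for (var i = 0; i < data.rows.length; i++) {
    var v = data.rows[i].value;
    if (min === undefined || v < min) min = v;
    if (max === undefined || v > max) max = v;
}

// split the range into fixed-width bins and count rows per bin
var binCount = 10;
var width = (max - min) / binCount || 1; // guard against a zero-width range
var counts = [];
for (var b = 0; b < binCount; b++) counts[b] = 0;
for (var i = 0; i < data.rows.length; i++) {
    var idx = Math.min(Math.floor((data.rows[i].value - min) / width), binCount - 1);
    counts[idx]++;
}

// build the result InfoTable for the bar chart
var result = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape({
    infoTableName: "InfoTable",
    dataShapeName: "HistogramDS" // hypothetical DataShape: label (STRING), count (NUMBER)
});
for (var b = 0; b < binCount; b++) {
    result.AddRow({
        label: (min + b * width).toFixed(1) + " - " + (min + (b + 1) * width).toFixed(1),
        count: counts[b]
    });
}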
View full tip
Persistent properties are stored in the ThingWorx database, while non-persistent properties are stored in memory. This means that persistent values do not get erased or deleted if the Thing or the platform restarts, and they can be retrieved in the same way as non-persistent properties. To explain how we can retrieve persistent data, consider the below example.

I took a device group which has multiple devices and defined two persistent properties: one is the serial number and the other is the firmware version number. In a mashup, the user gives the serial number and the corresponding firmware version is retrieved. I achieved this by writing services for that Thing:

1. Created a DataShape with the name DeviceData and defined two field definitions: one for the serial number and the other for the firmware version.
2. Created a thing template with the name DevGroup.
3. Added a property to the thing template with the name DeviceData, selected the base type InfoTable, and selected the previously created DataShape (DeviceData) in the data shape field. Made the property persistent by selecting Is Persistent, then saved it.
4. Created two devices with the names Device1 and Device2; both devices use the above template.
5. For each device, set the property values by clicking the Set button in the Properties link. These values will be persistent, meaning they do not change even after a refresh or when you restart ThingWorx. You can also set these properties at run time from a service.
6. Created a GetFirmversion service which retrieves the firmware version for a given serial number.
7. Created a mashup: once you select Device1, enter the serial number in the numeric entry field, and click the query button, the firmware version for the given serial number is displayed along with the data of Device1. If you enter a wrong number, no data is displayed; you could also show an error message instead of empty values. The same applies when you select Device2's data.

This is one way to retrieve persistent data; the same can be achieved in many other ways.
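For illustration, a minimal sketch of what such a GetFirmversion service could look like. The field names SerialNumber and FirmwareVersion inside the DeviceData InfoTable are assumptions based on the description above, and serialNumber is the service's input parameter:

// scan the persistent InfoTable property for a matching serial number
var result = ""; // STRING output: the firmware version, empty if not found
var rows = me.DeviceData.rows;
for (var i = 0; i < rows.length; i++) {
    if (rows[i].SerialNumber == serialNumber) { // field names assumed from the DataShape described above
        result = rows[i].FirmwareVersion;
        break;
    }
}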
View full tip
Welcome to the ThingWorx Manufacturing Apps Community! The ThingWorx Manufacturing Apps are easy-to-deploy, pre-configured, role-based starter apps that are built on PTC's industry-leading IoT platform, ThingWorx. These Apps provide manufacturers with real-time visibility into operational information, improved decision making, accelerated time to value, and unmatched flexibility to drive factory performance.

This Community page is open to all users -- including licensed ThingWorx users, Express ("freemium") users, or anyone interested in trying the Apps. Tech Support community advocates serve users on this site, and are here to answer your questions about downloading, installing, and configuring the ThingWorx Manufacturing Apps.

A. Sign up:
ThingWorx Manufacturing Apps Community: PTC account credentials are needed to participate in the ThingWorx Community. If you have not yet registered a PTC eSupport account, start with the Basic Account Creation page.
Manufacturing Apps Web portal: Register a login for the ThingWorx Manufacturing Apps web portal, where you can download the free trial and navigate to the additional resources discussed below.

B. Download: Choose a download/packaging option to get started.
i. Express/Freemium Installer (best for users who are new to ThingWorx): If you want to quickly install the ThingWorx Manufacturing Apps (including ThingWorx), use the following installer: Download the Express/Freemium Installer
ii. 30-day Developer Kit trial: To experience the capabilities of the ThingWorx Platform with the Manufacturing Apps and create your own Apps: Download the 30-day Developer Kit trial
iii. Import as a ThingWorx Extension (for users with a Manufacturing Apps entitlement -- including ThingWorx commercial customers, PTC employees, and PTC Partners): The ThingWorx Manufacturing Apps can be imported as ThingWorx extensions into an existing ThingWorx Platform install (v8.1.0). To locate the download, open the PTC Software Download Page and expand the following folders:
ThingWorx Platform | Release 8.x | ThingWorx Manufacturing Apps Extension | Most Recent Datacode

C. Learn: After downloading the installer or extensions, begin with Installation and Configuration.
Follow the steps laid out in the ThingWorx Manufacturing Apps Setup and Configuration Guide 8.2.
Find helpful getting-started guides and videos within the 'Get Started' section of the ThingWorx Manufacturing Apps Portal.

D. Customize: Once you have successfully downloaded, installed, and configured the Manufacturing Apps, begin to explore the deeper potential of the Apps and the ThingWorx Platform.
Follow along with the discussion and steps contained in the ThingWorx Manufacturing Apps and Service Apps Customization Guide 8.2.
Also contained within the 'Get Started' page of the ThingWorx Manufacturing Apps Portal, find the "Evolve and Expand" section, featuring:
- Custom Plant Layout application
- Custom Asset Advisor application
- Global Plant View application
- ThingWorx Manufacturing Apps Technical Lab with Sigma Tile (Raspberry Pi application)
- Configuring the Apps with a demo data set and simulator
- Additional Advanced Documentation

E. Get help / give feedback / interact:
Use the ThingWorx Manufacturing Apps Community page as a resource to find documentation, peruse past forum threads, or post a question to start a discussion! For advanced troubleshooting, licensed users are encouraged to submit support tickets to the PTC My eSupport portal.
View full tip
/* Define a DataShape used in an InfoTable parameter for this service call */
twDataShape* sampleInfoTableAsParameterDs = twDataShape_Create(twDataShapeEntry_Create("ColumnA", NO_DESCRIPTION, TW_STRING));
twDataShape_AddEntry(sampleInfoTableAsParameterDs, twDataShapeEntry_Create("ColumnB", NO_DESCRIPTION, TW_NUMBER));
twDataShape_AddEntry(sampleInfoTableAsParameterDs, twDataShapeEntry_Create("ColumnC", NO_DESCRIPTION, TW_BOOLEAN));
twDataShape_SetName(sampleInfoTableAsParameterDs, "SampleInfoTableAsParameterDataShape");

/* Define an input parameter that is an InfoTable of shape SampleInfoTableAsParameterDataShape */
twDataShapeEntry* infoTableDsEntry = twDataShapeEntry_Create("itParam", NULL, TW_INFOTABLE);
twDataShapeEntry_AddAspect(infoTableDsEntry, "dataShape", twPrimitive_CreateFromString("SampleInfoTableAsParameterDataShape", TRUE));

twDataShape* inputParametersDefinitionDs = twDataShape_Create(infoTableDsEntry);

/* Register the remote function */
twApi_RegisterService(TW_THING, SERVICE_INTEGRATION_THINGNAME, "testMultiRowInfotable", NO_DESCRIPTION,
    inputParametersDefinitionDs, TW_NOTHING, NULL, PlatformCallsServiceWithMultiRowInfoTableServiceImpl, NULL);

/* Note that you will have to manually create the DataShape in ThingWorx before attempting to add this remote service to your Thing. */
View full tip
The AddStreamEntries snippet does not offer much information, except that it needs an InfoTable as input. It is however based on the InfoTable for the AddStreamEntry service.

To use AddStreamEntries, an InfoTable based on sourceType, values, location, source, timestamp and tags must be used.

In this example, I started with a new Thing based on a Stream template and a DataShape with three STRING fields: a (the primary key), b and c. This DataShape must be converted into an InfoTable which is used as the values parameter. It's important that the timestamp parameter has distinct values! Otherwise values matching the same timestamp will be overwritten!

We don't really need the sourceType, as ThingWorx will automatically determine the type by knowing the source and which kind of entity type it is.

I created a new MyStreamThing with a new service, filling the InfoTable and the Stream. The result is the following code, which will add 5 rows to the Stream:

// *** SET UP META DATA FOR INFO TABLE ***

// create a new InfoTable based on AddStreamEntries parameters (timestamp, location, source, sourceType, tags, values)
var myInfoTable = { dataShape: { fieldDefinitions: {} }, rows: [] };

myInfoTable.dataShape.fieldDefinitions['timestamp']  = { name: 'timestamp', baseType: 'DATETIME' };
myInfoTable.dataShape.fieldDefinitions['location']   = { name: 'location', baseType: 'LOCATION' };
myInfoTable.dataShape.fieldDefinitions['source']     = { name: 'source', baseType: 'STRING' };
myInfoTable.dataShape.fieldDefinitions['sourceType'] = { name: 'sourceType', baseType: 'STRING' };
myInfoTable.dataShape.fieldDefinitions['tags']       = { name: 'tags', baseType: 'TAGS' };
myInfoTable.dataShape.fieldDefinitions['values']     = { name: 'values', baseType: 'INFOTABLE' };

// *** SET UP ACTUAL VALUES FOR INFO TABLE ***

// create new meta data
var tags = new Array();
var timestamp = new Date();
var location = new Object();
location.latitude = 0;
location.longitude = 0;
location.elevation = 0;
location.units = "WGS84";

// add rows to the InfoTable (5 times)
for (var i = 0; i < 5; i++) {

    // create new values based on the Stream DataShape
    var params = {
        infoTableName: "InfoTable",
        dataShapeName: "Cxx-DS"
    };
    var values = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);

    // add something to the values to make them unique
    // create and add a new row based on the Stream DataShape
    // only a single line allowed!
    var newValues = new Object();
    newValues.a = "aaa" + i; // STRING - isPrimaryKey = true
    newValues.b = "bbb" + i; // STRING
    newValues.c = "ccc" + i; // STRING
    values.AddRow(newValues);

    // create a new InfoTable row based on meta data & values
    // add 10 ms to each object, to make its timestamp unique
    // otherwise entries with the same timestamp will be overwritten
    var newEntry = new Object();
    newEntry.timestamp = new Date(Date.now() + (i * 10));
    newEntry.location = location;
    newEntry.source = me.name;
    newEntry.tags = tags;
    newEntry.values = values;

    // add the new row to the InfoTable
    myInfoTable.rows[i] = newEntry;
}

// *** ADD myInfoTable (HOLDING MULTIPLE STREAM ENTRIES) TO THE STREAM ***

// add stream entries in the InfoTable
var params = {
    values: myInfoTable /* INFOTABLE */
};

// no return
Things["MyStreamThing"].AddStreamEntries(params);

To verify the values have been added correctly, call the GetStreamEntriesWithData service on MyStreamThing.
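For a quick check from a script rather than Composer, the read-back can be done inline as well; a minimal sketch, where maxItems simply caps the number of entries returned:

// read back the entries just written, including their values
var entries = Things["MyStreamThing"].GetStreamEntriesWithData({ maxItems: 10 });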
View full tip
This video is the 2nd part of a series of 3 videos walking you through how to set up ThingWatcher for Anomaly Detection. In this video you will learn how to use "Discover UI" from the "New Composer" to bind simulated data coming through KEPServer for Anomaly Detection.   Updated link for access to this video: Anomaly Detection 8.0: Configuring Anomaly Alerts: Part 2 of 3
View full tip
One recurring question that comes up after a customer has been using the scripting capabilities of the Axeda Platform for some time is how to create function libraries that are reusable, in order to reduce the amount of copying, pasting, and testing that is done to create new functionality. Below I demonstrate a mechanism for how to accomplish this. Some things that are typically included in such a library are:

- Customized ExtendedObject access methods (CRUD) - ExtendedObjects must be created before they can be used, so this can be encapsulated per customer requirements
- DataItem manipulation
- Gathering lists of Assets based on criteria
- Accessing the ExternalCredentialsBridge

For those unfamiliar with Custom Objects, I suggest some resources at the end of this document to get started. The first thing we want to do is create a script that is going to be our "Library of Functions":

FunctionLibrary.groovy:

class GroovyChild {
    String hello() {
        return "Hello"
    }
    String world() {
        return "World"
    }
}

return new GroovyChild()

This can then be subsequently called like this:

FunctionCaller.groovy:

import static com.axeda.sdk.v2.dsl.Bridges.*
import com.axeda.services.v2.CustomObjectCriteria
import com.axeda.services.v2.CustomObjectType
import com.axeda.services.v2.CustomObject

CustomObjectCriteria cOC1 = new CustomObjectCriteria()
cOC1.setName 'FunctionLibrary'
def co = customObjectBridge.findOne cOC1
result = customObjectBridge.execute(co.label)

return ['Content-Type': 'application/text', 'Content': result.hello() + ' ' + result.world()]

A developer would be wise to add null checking on some of the function returns, and to report an error if the FunctionLibrary cannot be found and executed. This is only meant as a way to start users on the path of building their own reusable content libraries.

Regards,
Chris Kaminski
PTC/Axeda Customer Support

References:
Axeda® Platform Web Services Developer's Reference, v2 REST 6.8.3, August 2015
Axeda® v2 API/Services Developer's Reference, Version 6.8.3, August 2015
Axeda® v1 API Developer's Reference Guide, Version 6.8, August 2014
Documentation Map for Axeda® 6.8.2, January 2015
View full tip
In the process of working with a customer, I was curious as to the throughput of a file sent via the Axeda Connected Content feature to one of the Axeda Agent Gateways. I took a random 50 megabyte blob of data (/dev/urandom) and sent it to one of my test Gateways via a Package deployment:

DEBUG   xgEnterpriseProxy: Enterprise Queue Empty
INFO    xgSM:  ... Download percent done = 11%
INFO    xgSM:  ... Download percent done = 21%
INFO    xgSM:  ... Download percent done = 31%
INFO    xgSM:  ... Download percent done = 41%
INFO    xgSM:  ... Download percent done = 51%
INFO    xgSM:  ... Download percent done = 61%
INFO    xgSM:  ... Download percent done = 71%
INFO    xgSM:  ... Download percent done = 81%
INFO    xgSM:  ... Download percent done = 91%
INFO    xgSM:  ... Download percent done = 100%
DEBUG   xgSM: >>  INTERNAL DEBUG MESSAGE << :  Download time is 4 seconds
DEBUG   xgSM: >>  INTERNAL DEBUG MESSAGE << :  Upgrading.  Backing up files to C:\temp\CFKGW\AxedaBackup
DEBUG   xgSM: >>  INTERNAL DEBUG MESSAGE << :  Extracting downloaded files from DefaultProject\CFKGW\Downloads\141581_143281.tar.gz to directory C:\temp\CFKGW\
DEBUG   xgSM: >>  INTERNAL DEBUG MESSAGE << :  Extraction Finished

About 12.5MB per second. This was a sandbox in the PTC On-Demand Center. Not bad, but not necessarily representative of a real production system: this sandbox doesn't have 1000 devices trying to get this file at once, so some benchmarking in your own configuration and environment certainly needs to be done.

That done, I thought I'd up the ante - 700 megabytes this time!

DEBUG   xgEnterpriseProxy: Enterprise Queue Empty
INFO    xgSM:  ... Download percent done = 10%
INFO    xgSM:  ... Download percent done = 20%
INFO    xgSM:  ... Download percent done = 30%
INFO    xgSM:  ... Download percent done = 40%
INFO    xgSM:  ... Download percent done = 50%
INFO    xgSM:  ... Download percent done = 60%
INFO    xgSM:  ... Download percent done = 70%
INFO    xgSM:  ... Download percent done = 80%
INFO    xgSM:  ... Download percent done = 90%
INFO    xgSM:  ... Download percent done = 100%
DEBUG   xgSM: >>  INTERNAL DEBUG MESSAGE << :  Download time is 66 seconds

So roughly 10.6MB per second.

Directory of C:\temp\cfkgw

05/17/2017  01:32 PM    <DIR>          .
05/17/2017  01:32 PM    <DIR>          ..
05/17/2017  01:03 PM         1,048,576 1mb.dat
05/17/2017  01:04 PM        52,428,800 50meg-randomdata.dat
05/17/2017  01:32 PM       734,003,200 700mb.dat
05/17/2017  01:03 PM    <DIR>          AxedaBackup
               3 File(s)    787,480,576 bytes

Not bad at all!
View full tip
If you have ever tested mashup rendering on mobile phones, you have probably experienced that the mashup does not size to fit your mobile display. This "MobileHeader" extension enables the mashup to auto-adapt to mobile displays.

It adds the following parameters to the HTML header:

<meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=0">
<meta name="apple-mobile-web-app-capable" content="yes">
<meta name="apple-mobile-web-app-status-bar-style" content="black-translucent">

In the Composer, just drop the "MobileHeader" extension into a section of the mashup.

This extension was tested up to version 7.4.
View full tip
It often happens that we need to copy a large file to the ThingWorx server periodically, and what's worse, the big file keeps changing (like a log file). This sample gives a simpler way to implement this. The main ideas in the sample are:

1. Lower the management burden on the ThingWorx server by putting all the work on the edge SDK side.
2. Save network bandwidth by uploading only the file increment and appending it to the existing file on the ThingWorx server.

Java SDK version in this sample: 6.0.1-255
View full tip