

IoT & Connectivity Tips

/* Define a DataShape used in an InfoTable Parameter for this service call */
twDataShape* sampleInfoTableAsParameterDs = twDataShape_Create(twDataShapeEntry_Create("ColumnA", NO_DESCRIPTION, TW_STRING));
twDataShape_AddEntry(sampleInfoTableAsParameterDs, twDataShapeEntry_Create("ColumnB", NO_DESCRIPTION, TW_NUMBER));
twDataShape_AddEntry(sampleInfoTableAsParameterDs, twDataShapeEntry_Create("ColumnC", NO_DESCRIPTION, TW_BOOLEAN));
twDataShape_SetName(sampleInfoTableAsParameterDs, "SampleInfoTableAsParameterDataShape");

/* Define an input parameter that is an InfoTable of shape SampleInfoTableAsParameterDataShape */
twDataShapeEntry* infoTableDsEntry = twDataShapeEntry_Create("itParam", NULL, TW_INFOTABLE);
twDataShapeEntry_AddAspect(infoTableDsEntry, "dataShape", twPrimitive_CreateFromString("SampleInfoTableAsParameterDataShape", TRUE));
twDataShape* inputParametersDefinitionDs = twDataShape_Create(infoTableDsEntry);

/* Register the remote function */
twApi_RegisterService(TW_THING, SERVICE_INTEGRATION_THINGNAME, "testMultiRowInfotable", NO_DESCRIPTION,
                      inputParametersDefinitionDs, TW_NOTHING, NULL, PlatformCallsServiceWithMultiRowInfoTableServiceImpl, NULL);

/* Note that you will have to manually create the DataShape in ThingWorx before attempting to add this remote service to your Thing. */
View full tip
This has been moved to its new home in the Augmented Reality Category in the PTC Community.
View full tip
When you install ThingWorx with PostgreSQL, you can't import the "PostgreSQL" extension because of a library file conflict. So, here is a sample "MetaData.xml" file. You can zip this file and simply import it into your ThingWorx instance so that you have a thing template for the "PostgreSQL" database.

<Entities>
    <ExtensionPackages>
        <ExtensionPackage name="PostgreSQL_ExtensionPackage"
                          description="PostgreSQL JDBC Extension"
                          vendor="ThingWorx Customer Service"
                          packageVersion="1.0"
                          minimumThingWorxVersion="4.0.0">
        </ExtensionPackage>
    </ExtensionPackages>
    <ThingTemplates>
        <ThingTemplate baseThingTemplate="Database" description="PostgreSQL Server" documentationContent="" effectiveThingPackage="" homeMashup="" lastModifiedDate="2015-11-28T11:40:35.355-05:00" name="PostgreSqlServer" tags="" thingPackage="">
            <ThingShape description="" documentationContent="" lastModifiedDate="2015-11-28T11:40:35.355-05:00" name="" tags="">
                <PropertyDefinitions/>
                <ServiceDefinitions/>
                <EventDefinitions/>
                <ServiceImplementations/>
                <ServiceMappings/>
                <Subscriptions/>
            </ThingShape>
            <ImplementedShapes/>
            <ConfigurationTables>
                <ConfigurationTable description="" isMultiRow="false" name="ConnectionInfo">
                    <DataShape>
                        <FieldDefinitions>
                            <FieldDefinition aspect.defaultValue="5.0" baseType="NUMBER" description="Maximum number of connections in the pool" name="maxConnections" ordinal="0"/>
                            <FieldDefinition aspect.defaultValue="jdbc" baseType="STRING" description="jDBCConnectionURL" name="jDBCConnectionURL" ordinal="0"/>
                            <FieldDefinition aspect.defaultValue="SELECT NOW()" baseType="STRING" description="Connection validation string" name="connectionValidationString" ordinal="0"/>
                            <FieldDefinition aspect.defaultValue="org.postgresql.Driver" baseType="STRING" description="jDBCDriverClass" name="jDBCDriverClass" ordinal="0"/>
                            <FieldDefinition baseType="STRING" description="Database user name" name="userName" ordinal="0"/>
                            <FieldDefinition baseType="PASSWORD" description="Database password" name="password" ordinal="0"/>
                        </FieldDefinitions>
                    </DataShape>
                    <Rows>
                        <Row>
                            <jDBCConnectionURL><![CDATA[jdbc:postgresql://localhost:5432/demo]]></jDBCConnectionURL>
                            <maxConnections>100.0</maxConnections>
                            <connectionValidationString><![CDATA[SELECT NOW()]]></connectionValidationString>
                            <jDBCDriverClass><![CDATA[org.postgresql.Driver]]></jDBCDriverClass>
                            <userName />
                            <password />
                        </Row>
                    </Rows>
                </ConfigurationTable>
                <ConfigurationTable description="" isMultiRow="false" name="ConnectionMonitoring">
                    <DataShape>
                        <FieldDefinitions>
                            <FieldDefinition aspect.defaultValue="1.0" baseType="NUMBER" description="Number of retries" name="numberOfRetries" ordinal="0"/>
                            <FieldDefinition aspect.defaultValue="2000.0" baseType="NUMBER" description="Retry delay in milliseconds" name="retryDelay" ordinal="0"/>
                            <FieldDefinition aspect.defaultValue="false" baseType="BOOLEAN" description="Enable connection monitoring" name="enableMonitor" ordinal="0"/>
                            <FieldDefinition aspect.defaultValue="30000.0" baseType="NUMBER" description="Monitor rate in milliseconds" name="connectionMonitorRate" ordinal="0"/>
                        </FieldDefinitions>
                    </DataShape>
                    <Rows>
                        <Row>
                            <numberOfRetries>1.0</numberOfRetries>
                            <retryDelay>2000.0</retryDelay>
                            <enableMonitor>false</enableMonitor>
                            <connectionMonitorRate>3000.0</connectionMonitorRate>
                        </Row>
                    </Rows>
                </ConfigurationTable>
            </ConfigurationTables>
            <avatar/>
            <DesignTimePermissions>
                <Create/>
                <Read/>
                <Update/>
                <Delete/>
                <Metadata/>
            </DesignTimePermissions>
            <RunTimePermissions/>
            <InstanceDesignTimePermissions>
                <Create/>
                <Read/>
                <Update/>
                <Delete/>
                <Metadata/>
            </InstanceDesignTimePermissions>
            <InstanceRunTimePermissions/>
        </ThingTemplate>
    </ThingTemplates>
</Entities>
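The packaging step itself is just zipping the XML before importing it through Composer's extension import. A minimal sketch from a shell, where the archive name is only a placeholder:

# Package the sample MetaData.xml so it can be imported as an extension package
zip PostgreSQL_ExtensionPackage.zip MetaData.xml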
View full tip
In this video we show:
- how to deploy the microservices via jar files
- how to set up ThingWorx to use those microservices for anomaly detection

Updated Link for access to this video: ThingWorx Analytics: Deploying Training and Result Microservices via jar files for Anomaly Detection
View full tip
This video is the 3rd part of a series of 3 videos walking you through how to set up ThingWatcher for Anomaly Detection. In this video we will use the Anomaly Mashup to visualize data received from my remote device.   Updated Link for access to this video: Anomaly Detection 8.0: Viewing Data via Anomaly Mashup: Part 3 of 3
View full tip
This video is the 2nd part of a series of 3 videos walking you through how to set up ThingWatcher for Anomaly Detection. In this video you will learn how to use “Discover UI” from the “New Composer” to bind simulated data coming through KEPServer for Anomaly Detection.   Updated Link for access to this video: Anomaly Detection 8.0: Configuring Anomaly Alerts: Part 2 of 3
View full tip
This video is the 1st part of a series of 3 videos walking you through how to set up ThingWatcher for Anomaly Detection. In this first video you will learn the basics of how to create connectivity between KEPServer and the ThingWorx Platform.   Updated Link for access to this video: Anomaly Detection 8.0 - Part 1: Connecting KEPServer to ThingWorx: Part 1 of 3
View full tip
In this video we show a simple mashup and services used to display ThingPredictor's real-time scoring results. This video applies to ThingWorx Analytics 7.4 to 8.1.   Updated Link for access to this video: Showing ThingWorx Analytics Manager's results (ThingPredictor) in a Mashup
View full tip
In this blog I will be inspecting the setup below, using the REST APIs exposed by the different components (log-analysis-free guaranteed).

The components are started in stages, and I will do some exploration between each stage:
- EMS only
- EMS + LSR 1
- EMS + LSR 1 + LSR 2

The REST APIs

Edge MicroServer (EMS) REST API
This API is very similar to the ThingWorx platform REST API; see "REST APIs Supported by WS EMS" for the specifics. I will be monitoring the EMS using the LocalEms virtual Thing (EMS only).

ThingWorx Platform REST API
This is the well-known ThingWorx Core REST API - see "REST API Core Concepts". I will be monitoring the EMS, from the platform, using an EMSGateway thing. The EMSGateway exposes on the platform some of the LocalEms services (like GetEdgeThings). I'm also inspecting the remote properties / services / events exposed on the things using the RemoteThing::GetRemoteMetadata service.

Lua Script Resource (LSR) REST API
This API largely differs from the ones above; the API documentation is served by the LSR itself (e.g. http://localhost:8001/). I will use it to list the scripts loaded by the LSR process.

Configuration

See the diagram above - the platform is listening on port 8084, no encryption, EMS and LSRs on the same host.

Platform configuration: for each edge thing, a corresponding remote thing was manually created on the platform:
- EMSGateway1 as an EMSGateway Thing Template
- EMSOnlyThing as a RemoteThingWithFileTransfer
- LUAThing2 as a RemoteThing
- LUAThing1 as a RemoteThing
- LUAOnlyThing as a RemoteThing

EMS configuration: /etc/config.json - listening on default port 8000

{
    "ws_servers": [{
            "host": "tws74neo",
            "port": 8084
        }
    ],
    "appKey": "xxxxxx-5417-4248-bc01-yyyyyyy",
    "logger": {
        "level": "TRACE"
    },
    "ws_connection": {
        "encryption": "none"
    },
    "auto_bind": [{
            "name": "EMSGateway1",
            "gateway": true
        }, {
            "name": "EMSOnlyThing",
            "gateway": false
        }, {
            "name": "LUAThing2",
            "host": "localhost",
            "port": 8002,
            "gateway": false
        }, {
            "name": "LUAThing1",
            "gateway": false
        }
    ],
    "file": {
        "virtual_dirs": [{
                "emsrepository": "E:\\ptc\\ThingWorx\\EMS-5-3-2\\repositories\\data"
            }
        ],
        "staging_dir": "E:\\ptc\\ThingWorx\\EMS-5-3-2\\repositories\\staging"
    }
}

LSR process (1): /etc/config.lua - listening on default port 8001 (using the out-of-the-box sample Lua scripts)

scripts.log_level = "INFO"

scripts.LUAThing1 = {
    file = "thing.lua",
    template = "example",
}

scripts.sample = {
    file = "sample.lua"
}

LSR process (2): /etc/config2.lua - listening on port 8002 (using the out-of-the-box sample Lua scripts). This LSR process is started with the command "luaScriptResource.exe -cfg .\etc\config2.lua".

scripts.log_level = "INFO"
scripts.script_resource_port = 8002

scripts.LUAThing2 = {
    file = "thing.lua",
    template = "example",
}

scripts.LUAOnlyThing = {
    file = "thing.lua",
    template = "example",
}

Stage 1: EMS only

ThingWorx REST API

Request: call the GetEdgeThings service on the EMSGateway1 thing
POST twx74neo:8084/Thingworx/Things/EMSGateway1/Services/GetEdgeThings
Response: as expected, only the remote things flagged as auto_bind are listed

name          host       port    path  keepalive  timeout  proto  user  accept
EMSGateway1              8001.0  /     60000.0    30000.0  http         application/json
EMSOnlyThing             8001.0  /     60000.0    30000.0  http         application/json
LUAThing1                8001.0  /     60000.0    30000.0  http         application/json
LUAThing2     localhost  8002.0  /     60000.0    30000.0  http         application/json

Request: call the GetRemoteMetadata service on the LUAThing1 thing
POST twx74neo:8084/Thingworx/Things/LUAThing1/Services/GetRemoteMetadata
Response: as expected, remote properties / services and events are not available, since the LSR associated with this thing is off.

Unable to Invoke Service GetRemoteMetadata on LUAThing1 : null

EMS REST API

Request: call the GetEdgeThings service on the LocalEms virtual thing
POST localhost:8000/Thingworx/Things/LocalEms/Services/GetEdgeThings
Response: output is identical to the gateway thing on the platform

{ "name": "EMSGateway1", "host": "", "port": 8001, "path": "/", "keepalive": 60000, "timeout": 30000, "proto": "http", "user": "", "accept": "application/json" },
{ "name": "EMSOnlyThing", "host": "", "port": 8001, "path": "/", "keepalive": 60000, "timeout": 30000, "proto": "http", "user": "", "accept": "application/json" },
{ "name": "LUAThing1", "host": "", "port": 8001, "path": "/", "keepalive": 60000, "timeout": 30000, "proto": "http", "user": "", "accept": "application/json" },
{ "name": "LUAThing2", "host": "localhost", "port": 8002, "path": "/", "keepalive": 60000, "timeout": 30000, "proto": "http", "user": "", "accept": "application/json" }

LSR REST API
N/A - no LSR process started yet.

Stage 2: EMS + LSR 1 (8001)

ThingWorx REST API

Request: call the GetEdgeThings service on the EMSGateway1 thing
POST twx74neo:8084/Thingworx/Things/EMSGateway1/Services/GetEdgeThings
Response: LUAThing1 is now associated with a Lua script

name          host       port    path                keepalive  timeout  proto  user  accept
EMSGateway1              8001.0  /                   60000.0    30000.0  http         application/json
EMSOnlyThing             8001.0  /                   60000.0    30000.0  http         application/json
LUAThing1     localhost  8001.0  /scripts/Thingworx  60000.0    15000.0  http         application/json
LUAThing2     localhost  8002.0  /                   60000.0    30000.0  http         application/json

Request: call the GetRemoteMetadata service on the LUAThing1 thing
POST twx74neo:8084/Thingworx/Things/LUAThing1/Services/GetRemoteMetadata
Response: now that the Lua script for LUAThing1 is running, remote properties / services and events are available

{"isSystemObject":false,"propertyDefinitions":{"Script_Pushed_Datetime":{"sourceType":"ThingShape","aspects":{"isReadOnly":false,"dataChangeThreshold":0,"defaultValue":1495619610000,"isPersistent":false,"pushThreshold":0,"dataChangeType":"VALUE","cacheTime":0,"pushType":"ALWAYS"},"name":"Script_Pushed_Datetime","description":"","category":"","tags":[],"baseType":"DATETIME","ordinal":0},"Pushed_InMemory_Boolean":{"sourceType":"ThingShape","aspects":....

EMS REST API

LocalEms::GetEdgeThings returns the same output as EMSGateway::GetEdgeThings.

LSR REST API (port 8001)

Request: list all the scripts running in the first LSR
GET localhost:8001/scripts?format=text/html
Response: we find our sample script and the script associated with LUAThing1 (the Thingworx script is part of the infrastructure and is always there)

Name       Status   Result  File
LUAThing1  Running          E:\ptc\ThingWorx\EMS-5-3-2\etc\thingworx\scripts\thing.lua
sample     Running          sample.lua
Thingworx  Running          E:\ptc\ThingWorx\EMS-5-3-2\etc\thingworx\scripts\thingworx.lu

Stage 3: EMS + LSR 1 (8001) + LSR 2 (8002)

ThingWorx REST API

Request: call the GetEdgeThings service on the EMSGateway1 thing
POST twx74neo:8084/Thingworx/Things/EMSGateway1/Services/GetEdgeThings
Response: LUAOnlyThing is now listed and LUAThing2 is associated with a Lua script

name          host       port    path                keepalive  timeout  proto  user  accept
EMSGateway1              8001.0  /                   60000.0    30000.0  http         application/json
EMSOnlyThing             8001.0  /                   60000.0    30000.0  http         application/json
LUAOnlyThing  localhost  8002.0  /scripts/Thingworx  60000.0    15000.0  http         application/json
LUAThing1     localhost  8001.0  /scripts/Thingworx  60000.0    15000.0  http         application/json
LUAThing2     localhost  8002.0  /scripts/Thingworx  60000.0    15000.0  http         application/json

EMS REST API

LocalEms::GetEdgeThings returns the same output as EMSGateway::GetEdgeThings.

LSR REST API (port 8002)

Request: list all the scripts running in the second LSR
GET localhost:8002/scripts?format=text/html
Response: returns the status of all the scripts currently loaded

Name          Status   Result  File
LUAOnlyThing  Running          .\etc\thingworx\scripts\thing.lua
LUAThing2     Running          .\etc\thingworx\scripts\thing.lua
Thingworx     Running          .\etc\thingworx\scripts\thingworx.lua
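If you want to reproduce these calls from a shell rather than a REST client, a minimal sketch with curl is shown below. The host names and ports are the ones from this setup, the app key value is a placeholder for a valid application key, and depending on your EMS/LSR configuration the local calls may also require credentials:

# Platform REST API - list the edge things known to the gateway
curl -X POST "http://twx74neo:8084/Thingworx/Things/EMSGateway1/Services/GetEdgeThings" \
     -H "appKey: <your-app-key>" -H "Accept: application/json" \
     -H "Content-Type: application/json" -d "{}"

# EMS REST API - same call against the LocalEms virtual thing
curl -X POST "http://localhost:8000/Thingworx/Things/LocalEms/Services/GetEdgeThings" \
     -H "Accept: application/json" -H "Content-Type: application/json" -d "{}"

# LSR REST API - list the scripts loaded by the first Lua Script Resource
curl "http://localhost:8001/scripts?format=text/html"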
View full tip
Error: Exception: JavaException: java.lang.Exception: Import Failed: License is not available for Product: null Feature: twx_things
Possible root cause: editing web.xml without restarting the platform afterwards. As a result, the product name is not picked up and the license path is dropped while Composer is still running.
Fix: Go to LicensingSubsystem -> Services and run the AcquireLicense service. If the issue is not resolved, please contact the Support team and attach your license.bin to the ticket.
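The same service can also be invoked through the ThingWorx REST API instead of Composer. A minimal sketch with curl, where the host, port and app key are placeholders for your own environment and the key must belong to a user with permission on the subsystem:

curl -X POST "http://<twx-host>:8080/Thingworx/Subsystems/LicensingSubsystem/Services/AcquireLicense" \
     -H "appKey: <your-app-key>" -H "Accept: application/json" \
     -H "Content-Type: application/json" -d "{}"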
View full tip
One recurring question that comes up after a customer has been using the scripting capabilities of the Axeda Platform for some time is how to create function libraries that are reusable, in order to reduce the amount of copying, pasting (and testing) that is done to create new functionality. Below I demonstrate a mechanism for accomplishing this. Some things that are typically included in such a library are:
- Customized ExtendedObject access methods (CRUD) - ExtendedObjects must be created before they can be used, so this can be encapsulated per customer requirements
- DataItem manipulation
- Gathering lists of Assets based on criteria
- Accessing the ExternalCredentialsBridge

For those unfamiliar with Custom Objects, I suggest the resources at the end of this document to get started. The first thing we want to do is create a script that is going to be our "Library of Functions":

FunctionLibrary.groovy:

class GroovyChild {
    String hello() {
        return "Hello"
    }
    String world() {
        return "World"
    }
}
return new GroovyChild()

This can then be called like this:

FunctionCaller.groovy:

import static com.axeda.sdk.v2.dsl.Bridges.*
import com.axeda.services.v2.CustomObjectCriteria
import com.axeda.services.v2.CustomObjectType
import com.axeda.services.v2.CustomObject

CustomObjectCriteria cOC1 = new CustomObjectCriteria()
cOC1.setName 'FunctionLibrary'
def co = customObjectBridge.findOne cOC1
result = customObjectBridge.execute(co.label)

return ['Content-Type': 'application/text', 'Content': result.hello() + ' ' + result.world()]

A developer would be wise to add null checking on some of the function returns and do some error reporting if the FunctionLibrary cannot be found and executed. This is only meant as a means to start users on the path of building their own reusable content libraries.

Regards,
Chris Kaminski
PTC/Axeda Customer Support

References:
- Axeda® Platform Web Services Developer's Reference, v2 REST, 6.8.3, August 2015
- Axeda® v2 API/Services Developer's Reference, Version 6.8.3, August 2015
- Axeda® v1 API Developer's Reference Guide, Version 6.8, August 2014
- Documentation Map for Axeda® 6.8.2, January 2015
View full tip
In the process of working with a customer, I was curious as to the throughput of a file sent via the Axeda Connected Content feature to one of the Axeda Agent Gateways. I took a random 50 megabyte blob of data (/dev/urandom) and sent it to one of my test Gateways via a Package deployment:

DEBUG   xgEnterpriseProxy: Enterprise Queue Empty
INFO    xgSM:  ... Download percent done = 11%
INFO    xgSM:  ... Download percent done = 21%
INFO    xgSM:  ... Download percent done = 31%
INFO    xgSM:  ... Download percent done = 41%
INFO    xgSM:  ... Download percent done = 51%
INFO    xgSM:  ... Download percent done = 61%
INFO    xgSM:  ... Download percent done = 71%
INFO    xgSM:  ... Download percent done = 81%
INFO    xgSM:  ... Download percent done = 91%
INFO    xgSM:  ... Download percent done = 100%
DEBUG   xgSM: >>  INTERNAL DEBUG MESSAGE << :  Download time is 4 seconds
DEBUG   xgSM: >>  INTERNAL DEBUG MESSAGE << :  Upgrading.  Backing up files to C:\temp\CFKGW\AxedaBackup
DEBUG   xgSM: >>  INTERNAL DEBUG MESSAGE << :  Extracting downloaded files from DefaultProject\CFKGW\Downloads\141581_143281.tar.gz to directory C:\temp\CFKGW\
DEBUG   xgSM: >>  INTERNAL DEBUG MESSAGE << :  Extraction Finished

About 12MB per second. This was a sandbox in the PTC On-Demand Center. Not bad, but not necessarily representative of a real production system. This sandbox doesn't have 1000 devices trying to get this file at once, so some benchmarking in your configuration and environment certainly needs to be done.

That done, I thought I'd up the ante - 700 megabytes this time!

DEBUG   xgEnterpriseProxy: Enterprise Queue Empty
INFO    xgSM:  ... Download percent done = 10%
INFO    xgSM:  ... Download percent done = 20%
INFO    xgSM:  ... Download percent done = 30%
INFO    xgSM:  ... Download percent done = 40%
INFO    xgSM:  ... Download percent done = 50%
INFO    xgSM:  ... Download percent done = 60%
INFO    xgSM:  ... Download percent done = 70%
INFO    xgSM:  ... Download percent done = 80%
INFO    xgSM:  ... Download percent done = 90%
INFO    xgSM:  ... Download percent done = 100%
DEBUG   xgSM: >>  INTERNAL DEBUG MESSAGE << :  Download time is 66 seconds

So 10MB per second.

 Directory of C:\temp\cfkgw

05/17/2017  01:32 PM    <DIR>          .
05/17/2017  01:32 PM    <DIR>          ..
05/17/2017  01:03 PM         1,048,576 1mb.dat
05/17/2017  01:04 PM        52,428,800 50meg-randomdata.dat
05/17/2017  01:32 PM       734,003,200 700mb.dat
05/17/2017  01:03 PM    <DIR>          AxedaBackup
               3 File(s)    787,480,576 bytes

Not bad at all!
View full tip
This is part 3 of 3 videos on Getting Started with ThingWorx Analytics. During this video you will learn:
- Executing a "Signals" job
- Getting the results of the "Signals" job
- Executing a "Training Model" job
- Getting the results of the "Training Model" job

Updated Link for access to this video: Getting Started with ThingWorx Analytics: Part 3 of 3
View full tip
Hi,

I have just been playing around with the Edge MicroServer on a couple of Raspberry Pis. I obviously want them connected to ThingWorx, however they stop when the SSH session closes, which isn't ideal. I thought about doing something really quick and dirty using 'nohup', but this could have led to many running processes, and still would not have started the EMS automatically when the Pi booted. So, I did it right instead, using init.d daemon configurations.

There are two files; one for the EMS and one for the Lua Script Resource (a sketch of what the EMS one can look like is shown at the end of this post). You need to put these in /etc/init.d, then make sure they are owned by root.

sudo chown root /etc/init.d/thingworx*
sudo chgrp root /etc/init.d/thingworx*

You'll need to modify the paths in the first lines of these files to match where you have your microserver folder. Then you need to update the init.d daemon configs and enable the services.

sudo update-rc.d thingworx-ems defaults
sudo update-rc.d thingworx-ems enable
sudo update-rc.d thingworx-lsr defaults
sudo update-rc.d thingworx-lsr enable

You can then start (stop, etc.) like this:

sudo service thingworx-ems start
sudo service thingworx-lsr start

They should both also start automatically after a reboot.

Regards,

Greg
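Since the actual init files are attached to the original post rather than shown inline, here is a minimal sketch of what /etc/init.d/thingworx-ems could look like. The install directory (/home/pi/microserver), the wsems binary name and the pid file location are assumptions; adjust them to your installation and mirror the same pattern for the Lua Script Resource service.

#!/bin/sh
### BEGIN INIT INFO
# Provides:          thingworx-ems
# Required-Start:    $network $remote_fs
# Required-Stop:     $network $remote_fs
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: ThingWorx Edge MicroServer
### END INIT INFO

# Assumed install location and binary name -- change both to match your setup.
EMS_DIR=/home/pi/microserver
EMS_BIN=$EMS_DIR/wsems
PIDFILE=/var/run/thingworx-ems.pid

case "$1" in
  start)
    echo "Starting ThingWorx EMS"
    start-stop-daemon --start --quiet --background --make-pidfile \
      --pidfile $PIDFILE --chdir $EMS_DIR --exec $EMS_BIN
    ;;
  stop)
    echo "Stopping ThingWorx EMS"
    start-stop-daemon --stop --quiet --pidfile $PIDFILE
    rm -f $PIDFILE
    ;;
  restart)
    $0 stop
    $0 start
    ;;
  *)
    echo "Usage: /etc/init.d/thingworx-ems {start|stop|restart}"
    exit 1
    ;;
esac
exit 0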
View full tip
This example shows how to update objects in Windchill from ThingWorx through the Windchill extension. It is hard to find resources on the Windchill extension's services and how to take advantage of them, so I wrote a simple example that updates objects in Windchill from ThingWorx.

Three data shapes are needed to do this. The first is "PTC.PLM.WindchillPartUfids", which has only a "value" field (String). The second is "PTC.PLM.WindchillPartCheckedOutDS", which has a "ufid" field (String). The last one is "PTC.PLM.WindchillPartPropertyDS", which has a "ufid" field (String) plus fields for the attributes to update. For an instance of the last data shape, there might be three fields such as "ufid", "partPrice" and "quantity" to update parts. In this example, the data shape has two fields, "ufid" and "almProjectId".

The service needs two input parameters: ufid (String) and almProjectId (String). If you need to update multiple objects at once, you can use an InfoTable type for the "ufid" input parameter instead of String.

Note that this is example code and exceptions need to be handled as required.

// To
var params = {
    infoTableName : "InfoTable",
    dataShapeName : "PTC.PLM.WindchillPartUfids"
};

// CreateInfoTableFromDataShape(infoTableName:STRING("InfoTable"), dataShapeName:STRING):INFOTABLE(PTC.PLM.WindchillPartUfids)
var ufids = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);

// PTC.PLM.WindchillPartUfids entry object
var newValue = new Object();
newValue.value = ufid; // STRING

ufids.AddRow(newValue);

// Check out
var params = {
    ufids: ufids /* INFOTABLE */,
    comment: undefined /* STRING */,
    dataShape: "PTC.PLM.WindchillPartCheckedOutDS" /* DATASHAPENAME */
};

// checkedOutObjs: INFOTABLE dataShape: "undefined"
var checkedOutObjsFromService = me.CheckOut(params);

var params = {
    infoTableName : "InfoTable",
    dataShapeName : "PTC.PLM.WindchillPartUfids"
};

// CreateInfoTableFromDataShape(infoTableName:STRING("InfoTable"), dataShapeName:STRING):INFOTABLE(PTC.PLM.WindchillPartUfids)
var checkedOutObjs = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);

try {
    var tableLength = checkedOutObjsFromService.rows.length;

    for (var x = 0; x < tableLength; x++) {
        var row = checkedOutObjsFromService.rows[x];

        // PTC.PLM.WindchillPartUfids entry object
        var checkedOutObj = new Object();
        checkedOutObj.value = row.ufid.substring(0, row.ufid.lastIndexOf(":")); // STRING

        //logger.warn("UFID : " + checkedOutObj.value);
        checkedOutObjs.AddRow(checkedOutObj);

        /* Update Objects in Windchill */
        var params = {
            infoTableName : "InfoTable",
            dataShapeName : "PTC.PLM.WindchillPartPropertyDS"
        };

        // CreateInfoTableFromDataShape(infoTableName:STRING("InfoTable"), dataShapeName:STRING):INFOTABLE(PTC.ALM.WindchillPartPropertyDS)
        var wcInfoTable = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);

        // PTC.ALM.WindchillPartPropertyDS entry object
        var newEntry = new Object();
        newEntry.ufid = checkedOutObj.value; // STRING
        newEntry.almProjectId = almProjectId; // STRING

        wcInfoTable.AddRow(newEntry);

        var params = {
            objects: wcInfoTable /* INFOTABLE */,
            modification: "REPLACE" /* STRING */,
            dataShape: "PTC.PLM.WindchillPartCheckedOutDS" /* DATASHAPENAME */
        };

        // result: INFOTABLE dataShape: "undefined"
        var result = me.Update(params);
    }

} catch (err) {
    logger.warn("ERROR Catched");
    var params = {
        ufids: ufids /* INFOTABLE */,
        dataShape: "PTC.PLM.WindchillPartCheckedOutDS" /* DATASHAPENAME */
    };

    // result: INFOTABLE dataShape: "undefined"
    var result = me.CancelCheckOut(params);
}

var params = {
    ufids: checkedOutObjs /* INFOTABLE */,
    comment: undefined /* STRING */,
    dataShape: "PTC.PLM.WindchillPartCheckedOutDS" /* DATASHAPENAME */
};

// result: INFOTABLE dataShape: "undefined"
var result = me.CheckIn(params);
View full tip
This video is the 2nd part of a series of two videos walking you through the configuration of Analysis Event, which is applied for Real-Time Scoring. This part 2 video will walk you through the configuration of Analysis Event for Real-Time Scoring, and validate that a predictions job has been executed based on new input data.   Updated Link for access to this video: Analytics Manager 7.4: Configure Analysis Event & Real Time Scoring Part 2 of 2
View full tip
This video walks you through the use of Analysis Replay to execute analysis events on historic data. This video applies to ThingWorx Analytics 7.4 to 8.1   Updated Link for access to this video:  Analytics Manager : Using Analysis Replay
View full tip
This video is the 1st part of a series of two videos walking you through the configuration of Analysis Event, which is applied for Real-Time Scoring. This 1st video demonstrates how to create a Template and Thing which allows the prediction model to score in real time. Note that this video is just for demo purposes; customers who already have ThingWorx will, of course, already have their properties set up. They just need to configure Analysis Event, which is demonstrated in the part 2 video.   Updated Link for access to this video: Analytics Manager 7.4: Create a Template & Thing for Real-Time Scoring Part 1 of 2
View full tip
This video will walk you through the first steps of how to set up Analytics Manager for Real-Time Scoring. More specifically, this video demonstrates how to share your predictive model from Analytics Builder into Analytics Manager and test the shared model.   Updated Link for access to this video: ThingWorx Analytics Manager: Publish & Test a Predictive Model
View full tip
This video will walk you through the first steps of how to set up Analytics Manager for Real-Time Scoring. More specifically, this video demonstrates how to create an Analysis Provider and start the ThingPredictor Agent. NOTE: For version 8.1 the startup command for the Agent has changed; see the command in the PTC Help Center.   Updated Link for access to this video: ThingWorx Analytics Manager: Create an Analysis Provider & Start the ThingPredictor Agent
View full tip
In this part of the Troubleshooting blog series, we will review the process for restarting the individual services essential to the ThingWorx Analytics Application within the Virtual Machine Appliance.

Services have stopped, and I cannot run my Analytics jobs!

In some cases, users encounter issues where a system or process has halted and they are unable to proceed with their tasks. This can be due to a myriad of reasons, ranging from OS hanging issues to memory issues with certain components.

As we covered earlier in Part II, the ThingWorx Analytics Application is installed on a CentOS (Linux) operating system. As with most Linux operating systems, you have the ability to manually check and restart processes as needed.

Steps to Restart Services

With how the Application is installed and configured, the services should auto-start when you boot up the VM. Verify that the Appliance is functional by running your desired API call.

If a system is not functioning as expected, you will receive an error in your output when you POST an API call. Some errors are very specific, and you can search the Knowledge Database for any existing Knowledge Articles that may solve the issue. For error messages that do not have an existing article, you may want to attempt the following.

Method 1:

If you are encountering issues and are unsure which process is not working correctly, we recommend a full Application restart. This involves restarting the Virtual Machine Appliance via the command line terminal.

We recommend the following command, run as the root user or using sudo, as this is known as a "graceful restart":

sudo reboot -h now

This will restart the virtual machine safely, and once you are back up and running you can run your API calls to verify functionality. This should resolve any incremental issues you may have faced.

Method 2:

If you want to restart an individual service, there is a particular start order that needs to be followed to make sure the Application operates as expected.

The first step is to check which services are not running. You can use the following command to check what is running and its current status:

service --status-all

The services you are looking for are the following:
- Zookeeper
- PostgreSQL Server
- GridWorker(s)
- Tomcat

If a particular service is not on the running list, you will have to start it manually with the service start command (you may be prompted for the root password):

service [name of service] start
e.g. service tomcat start

You can verify that the services are operating by running the status check command described above.

If you need to restart all services, there is a specific start order. This is important because there are dependencies, such as Postgres being required by the GridWorker(s) and Tomcat.

The start order is as follows (a scripted version of this sequence is sketched at the end of this post):
1. Zookeeper
2. PostgreSQL Server
3. GridWorker(s)
4. Tomcat

After completing the restart, and verifying that the services are running, run your desired API call.
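As a convenience, the restart sequence above can be scripted. A minimal sketch, assuming the service names registered on the appliance are zookeeper, postgresql, gridworker and tomcat; check the exact names reported by service --status-all on your appliance and adjust accordingly:

#!/bin/sh
# Show the current status of all registered services
service --status-all

# Start the Analytics services in the required order (run as root or via sudo).
# The names below are assumptions -- use the exact names listed by the status command.
for svc in zookeeper postgresql gridworker tomcat; do
    service "$svc" start || echo "Failed to start $svc"
done

# Confirm everything is running before re-running your API call
service --status-all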
View full tip