IoT & Connectivity Tips
Solution Central and Azure Active Directory
Written by: Tori Firewind, IoT EDC

As we've said in a previous post, Solution Central (SC) is a crucial part of any mature DevOps pipeline. In its latest version (3.1.0), it manages custom solutions even more easily thanks to the SC API. However, this comes with a few requirements that can be a little tricky.

One of the more complex configuration pieces for using this new API involves a cloud-hosted Active Directory (AD) application within Azure AD. In order to make use of the new API, your organization's users must exist on an Azure AD tenant that is separate from the PTC tenant. Most PTC customers are placed on the PTC-owned tenant by default, so additional configuration may be required to set up the AD instance within Azure before the SC API can be used. The type of tenant has to be Azure, of course, as that is what PTC uses: a multi-tenancy Azure infrastructure for all authentication.

In this usage, "tenant" is a cloud-hosting infrastructure term which essentially refers to an Azure VM hosting an Active Directory server, as well as managing the many AD components that would otherwise require a lot of oversight. Users within that AD server are grouped so that only those solutions published by their own organizations are visible within Solution Central. In this way, the term "tenant" can also refer to a partition of data: a Solution Central tenant includes the users and the solutions, and is essentially a sub-tenant of the larger PTC infrastructure. PTC uses a global Solution Central tenant for deployment of its own solutions, like DPM (Digital Performance Management), which is therefore available to every user in the PTC Azure AD tenant or in a tenant which has been connected using the tutorial below.

There are good reasons to integrate your own Azure AD with Solution Central even if you never use the SC API. For one thing, it allows for direct control over the users that have access; otherwise, a ticket to PTC support is necessary every time a user needs access granted or removed. The tutorial below is a useful reference for anyone using Solution Central, with steps 1-4 offering an easy guide for integrating your own Azure AD and simplifying your user management.

To use the SC API, however, additional steps are required to create an application for retrieving an OAuth token. This application needs the "solution-publisher" role, or else Bad Request/Forbidden errors will pop up. This role is automatically available in your Azure AD tenant once it is linked to the PTC AD tenant, but it does have to be manually assigned to the application, whose sole function is to request the access token used to authenticate requests against the SC server.

The Azure application you need to create is essentially a plugin with permission to publish solutions against the SC API. It functions a little like a "login" in database terminology, authenticating to an endpoint without being tied to a user or any other kind of identity. This application must be able to request an access token successfully and return it to the scripts which call upon it in order to perform the project creation and publication requests. The tutorial below steps you through how to create this application and request all of your solutions via the SC API.
Tutorial: Create an Azure AD Application to Access SC via the API
1. Create a new tenant in Azure AD for users who will have access to the SC API.
2. Create a ticket with PTC to provide the tenant ID and begin the onboarding process.
3. Once that process completes, you will receive an email with a custom link to log in to the Solution Central portal for your organization, where only your solutions exist.
4. Users will then need to be added as "Custom Global Administrators" on the SC enterprise application within Azure Portal to grant them access to log in to Solution Central in a browser.
5. In order to use the SC API, however, more work needs to be done: an application that requests an OAuth token must be created in Azure Portal. Navigate to "App Registrations" from the Azure AD page in Azure Portal and then click "New registration".
6. Enter the application name (ours is called "gradle-plugin-tokenfetcher" in the examples shown here), and select the "single tenant" radio button.
7. Enter a URL for the redirect URI, and for "Select a platform" select "Web".
8. Click "Register", and wait for the application-created notification.
9. Now add permissions so the application can see Solution Central:
   - Open the app from under "App registrations" and select the "API permissions" tab.
   - Click "Add a permission" and, in the pop-up window that appears, select the "APIs my organization uses" tab, which should have PTC Solution Central listed if this tenant has been linked to the PTC tenant per step 2 above.
   - Select "Application Permissions" and then check the "solution-publisher" role.
   - Click "Add permissions". The application now has access to Solution Central as a publisher, able to send solution publications over the API and not just via the Platform interface.
10. The application is ready now, so create a request to test that it has access to SC. Using Windows PowerShell or something similar, create a script to make a couple of cURL requests, first against Azure AD and then against the Solution Central API (attached as well; a hedged sketch of the same flow follows this list).
   - The tenant ID and the directory ID are the same value (listed on the tenant under "Overview"), and the client ID and application ID are the same as well (listed on the application), so don't be confused by the terminology.
   - The client app secret is the "Value" provided by the system when a new client secret is created under "Certificates & Secrets" (copy this as soon as it is made and before clicking away, as after that it will no longer be visible).
   - The Solution Central app ID can be found under "App registrations", listed within the details for the "solution-publisher" role.
   - This access token request must be made before every request to the SC API, so the secret should be kept in some kind of password management tool (or a global environment variable in GitLab) so that it isn't found anywhere in the source code.
   - If the script pings the SC API successfully, then a list of solutions will be returned.
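For illustration only, here is a minimal Python sketch of the same two-step flow (the attached reference script uses PowerShell and cURL instead). The Azure AD token endpoint and the client-credentials grant shown are standard; the Solution Central base URL, the solutions path, and the exact scope string are placeholders and assumptions that must be replaced with the values from your own onboarding details.

import requests

TENANT_ID = "<azure-ad-tenant-id>"            # same value as the "directory ID"
CLIENT_ID = "<token-fetcher-application-id>"  # same value as the "client ID"
CLIENT_SECRET = "<client-secret-value>"       # from "Certificates & Secrets"
SC_APP_ID = "<ptc-solution-central-app-id>"   # from the solution-publisher role details
SC_BASE_URL = "https://<your-sc-host>"        # placeholder, organization-specific

# Step 1: request an OAuth access token from Azure AD (client credentials grant).
# The scope is assumed to be the SC application's ID with /.default appended;
# confirm the exact scope string for your tenant.
token_resp = requests.post(
    f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "scope": f"{SC_APP_ID}/.default",
    },
)
token_resp.raise_for_status()
access_token = token_resp.json()["access_token"]

# Step 2: call the Solution Central API with the bearer token; a successful call
# returns the list of solutions visible to your tenant. The /solutions path below
# is a placeholder -- use the path documented for your SC version.
solutions_resp = requests.get(
    f"{SC_BASE_URL}/solutions",
    headers={"Authorization": f"Bearer {access_token}"},
)
solutions_resp.raise_for_status()
print(solutions_resp.json())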
View full tip
The DPM User Experience
Written by Tori Firewind, IoT EDC Team

As discussed in a previous post, DPM is a tool designed to be beneficial at all levels of a company, from the operators monitoring automated data on production events from the factory machines themselves, to the production supervisors who need to establish, task out, and track machine maintenance and improvement measures. DPM also engages continuous improvement and plant leadership by providing a standardized way to monitor performance that ultimately rolls up to the executive level. The end users of DPM are therefore diverse both in how they access DPM and in how they make use of its various features.

One of the perks of building DPM on top of ThingWorx Foundation is that many of the webpages (called "mashups") within ThingWorx are already responsive, and any which aren't responsive out of the box can be modified and custom designed for different screen sizes to ensure that, if necessary, end users can access DPM from a variety of locations and devices. Most of the time, end users will be accessing mashups from hard-wired dashboards mounted on the actual devices, or from wireless laptops which have standard size screens with standard resolutions. For use cases involving phones or tablets, however, it may be necessary to see how DPM will perform across a variety of bandwidth and latency conditions. Often, a cellular or satellite connection is a must to facilitate field team cooperation, and 5G networks often result in worsened performance.

So, to demonstrate the influence of bandwidth and latency on the responsiveness of DPM, the Production Dashboard was loaded in the Google Chrome browser repeatedly under varying conditions. This dashboard is the webpage most operators and field users would access to log event information and production details, so it is widely used by end users. This provides a sort of benchmark of the DPM solution, something which indicates what can be expected and tells us a few things about how DPM should be deployed and configured.

Latency was introduced by hosting the servers involved in the test in different regions (all Azure cloud hosted servers: one in US East, one in US West, and one in Japan East). Bandwidth limitations were introduced using a tool on the PC, with the connection either unrestricted or capped at 4 megabits/second.

Browser caching was turned on and off as well, to simulate the difference between new and return users; new users would not have the webpage cached, so their load times are expected to be longer. Tomcat compression was also configured in half of the runs to demonstrate the importance of compression for optimal performance.

Each of these 24 scenarios was then tested 10 times from each location, and the actual data can be found in the attached benchmark document (a working solution benchmark, which is not designed to be referenced directly, as matters of infrastructure may influence the exact performance of the solution). Even with bandwidth limitations, every region sees better performance for return users versus new users, which may be important to note. However, because DPM field users most commonly access DPM often, the return user time is a better indicator of adoption, and those numbers look great in our simulations. Notice the top line, which shows the very worst of mobile performance: what happens over bandwidth-limited networks when Tomcat compression is not enabled.
Load times vary only slightly on regular networks when Tomcat compression is enabled, and compression vastly improves performance across regions and on mobile networks, so it is highly recommended (instructions on how to enable it are below).

Key Takeaways
Latency and bandwidth impact DPM performance in exactly the way one would expect of a web application. While any DPM server can be accessed from any region, regions with more latency will experience delays proportional to the amount of latency. In the chart here, the three regions are represented three times by three different colors (different from the charts above):
- The three different shades of each color represent the different regions.
- Green represents the optimal configuration settings (Tomcat compression enabled, caching turned on) for returning users with bandwidth limitations (i.e. mobile networks like 5G).
- Blue shows first-time page visitors with no bandwidth limitations.
- Purple shows first-time visitors that do have bandwidth limitations.
- The uncompressed first-time load for mobile users (those with bandwidth limitations imposed) within the same region is also given to demonstrate the importance of enabling Tomcat compression (load times only get worse without compression the farther away the region).

Notice how the green series has lower load times across the board than the blue one, meaning that return users, even with bandwidth limitations, have better performance across every region than new users. Also notice how the gap between lighter and darker colors grows, where the darker the color, the farther the region from the DPM servers. This indicates that network latency has a more significant influence on performance than bandwidth, with only longer-running transactions like file uploads seeing a significant performance hit on a network with bandwidth limitations. Find out how to enable Tomcat compression and review the full solution benchmark in the document attached.
View full tip
This script creates a csv file from the audit log filtered by the User Access category, so dates of when users logged in or logged out. *** see update below *** Note:  The csv file has the same name as the Groovy script and does NOT have the .csv extension . To get the .csv extension, the Groovy script has to be renamed to AuditEntryToCSV.csv.groovy .  Suggestions on how to improve this are welcome. *** Update ***: The download works without the renamed groovy script by returning text instead of an input stream.  The script has been modified to illustrate this. Parameters: days - the number of days past to fetch audit logs model_name - the model name of the asset serial_number - the serial number of the asset import com.axeda.drm.sdk.device.ModelFinder import com.axeda.drm.sdk.Context import com.axeda.common.sdk.id.Identifier import com.axeda.drm.sdk.device.Model import com.axeda.drm.sdk.device.DeviceFinder import com.axeda.drm.sdk.device.Device import com.axeda.drm.sdk.audit.AuditCategoryList import com.axeda.drm.sdk.audit.AuditCategory import com.axeda.drm.sdk.audit.AuditEntryFinder import com.axeda.drm.sdk.audit.SortType import com.axeda.drm.sdk.audit.AuditEntry import groovy.xml.MarkupBuilder import com.axeda.platform.sdk.v1.services.ServiceFactory /* * AuditEntryToCSV.groovy * * Creates a csv file from the audit log filtered by the User Access category, so dates of when users logged in or logged out. * * @param days        -   (REQ):Str number of days to search. * @param model_name        -   (REQ):Str name of the model. * @param serial_number        -   (REQ):Str serial number of the device. * * @note - the csv file has the same name as the Groovy script and does NOT have the .csv extension . To get * the .csv extension, the Groovy script has to be renamed to AuditEntryToCSV.csv.groovy . * * @author Sara Streeter <sstreeter@axeda.com> */ def writer = new StringWriter() def xml = new MarkupBuilder(writer) try {    def ctx = Context.getUserContext()    ModelFinder modelFinder = new ModelFinder(ctx)    modelFinder.setName(parameters.model_name)    Model model = modelFinder.find()    DeviceFinder deviceFinder = new DeviceFinder(ctx)    deviceFinder.setSerialNumber(parameters.serial_number)    Device device = deviceFinder.find()    AuditCategoryList acl = new AuditCategoryList()    acl.add(AuditCategory.USER_ACCESS)    long now = System.currentTimeMillis()    Date today = new Date(now)    def paramdays = parameters.days ? 
parameters.days: 5    long days = 1000 * 60 * 60 * 24 * Integer.valueOf(paramdays)    AuditEntryFinder aef = new AuditEntryFinder(ctx)    aef.setCategories(acl)    aef.setToDate(today)    aef.setFromDate(new Date(now - (days)))    aef.setSortType(SortType.DATE)    aef.sortDescending()    List<AuditEntry> audits = aef.findAll() // use a Data Accumulator to store the information def dataStoreIdentifier = "FILE-CSV-audit_log" def daSvc = new ServiceFactory().dataAccumulatorService if (daSvc.doesAccumulationExist(dataStoreIdentifier, device.id.value)) {     daSvc.deleteAccumulation(dataStoreIdentifier, device.id.value) } // assemble the response    audits.each { AuditEntry audit ->            def row = [                audit?.id.value,                audit?.user?.username,                audit?.date,                audit?.category?.bundleKey,                audit?.message            ]         row = row.join(',')         row += '\n'         daSvc.writeChunk(dataStoreIdentifier, device.id.value, row);        } // stream the data accumulator to create the file    InputStream is = daSvc.streamAccumulation(dataStoreIdentifier, device.id.value) return ['Content-Type': 'text/csv', 'Content-Disposition':'attachment; filename=AuditEntryCSVFile.csv', 'Content': is.text] } catch (def ex) {    xml.Response() {        Fault {            Code('Groovy Exception')            Message(ex.getMessage())            StringWriter sw = new StringWriter();            PrintWriter pw = new PrintWriter(sw);            ex.printStackTrace(pw);            Detail(sw.toString())        }    } logger.info(writer.toString()) }
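As a usage illustration (not part of the original script), the export can be triggered over Axeda's Scripto REST endpoint. The URL pattern below is borrowed from the Scripto example later in this collection; the host, credentials, and asset identifiers are placeholders to substitute with your own values.

import requests

# Hypothetical call to the Scripto execute endpoint for this Groovy script;
# adjust the host, credentials, and parameters for your environment.
resp = requests.get(
    "https://yourhost.axeda.com/services/v1/rest/Scripto/execute/AuditEntryToCSV",
    params={
        "username": "your-user",
        "password": "your-password",
        "days": 5,                     # number of days of audit history to fetch
        "model_name": "YourModel",     # placeholder model name
        "serial_number": "YourSerial", # placeholder device serial number
    },
)
resp.raise_for_status()

# The updated script returns CSV text directly, so it can be written to a file.
with open("AuditEntryCSVFile.csv", "w") as f:
    f.write(resp.text)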
View full tip
I recently had a customer who wanted to run services on ThingWorx from Power BI to retrieve existing operational data, and we were a bit stumped on how to pass the API key over in the headers, so I did a bit of Googling and pieced together the solution. It's not quite intuitive on the Power BI side, so I thought it would be helpful to share. If you have any other experience with integrating ThingWorx with Power BI, feel free to add a comment.    Prepare ThingWorx Create an Application Key that has Run Time execution access to the services you need. Understand the inputs needed for the service you would like. I'll have examples of none, one, an InfoTable, and multiple inputs.   Power BI Following the following steps in Power BI: 1. In Power BI, create a new blank query   2. On the left, right click on Query1 and go to the Advanced Editor:   3. Replace all of the body content with the following, replacing your API key, appropriate end point, and base URL as needed (this is an example with NO input parameters, I'll follow with examples of other parameters):     let appKey = "your-application-key-here", endpoint = "Things/YourThingNameHere/Services/YourServiceNameHere", baseUrl = "https://YourServerNameHere/Thingworx/", url = Text.Combine({baseUrl,endpoint}), body = "", request = Web.Contents( url, [ Headers = [ appKey = appKey, #"Content-Type" = "application/json", Accept = "application/json" ], Content = Text.ToBinary(body) ] ), Source = Json.Document(request) in Source       4. Click "Done", and now you'll have a warning about how to connect. Click the "Edit Credentials" button. 5. Leave it on Anonymous and click "Connect":   6. You should now see the return data coming from ThingWorx.   Note that I had a little trouble with this authentication initially and it saved the wrong method. To clear that out, go to the ribbon bar item "Data source settings" and select the server and clear it out.   Other Examples Here is an example for sending a single string parameter:   let appKey = "your-application-key-here", endpoint = "Things/YourThingNameHere/Services/YourServiceNameHere", baseUrl = "https://YourServerNameHere/Thingworx/", url = Text.Combine({baseUrl,endpoint}), body = "{""InputParameter"": ""InputValue""}", request = Web.Contents( url, [ Headers = [ appKey = appKey, #"Content-Type" = "application/json", Accept = "application/json" ], Content = Text.ToBinary(body) ] ), Source = Json.Document(request) in Source     Here's an example of sending a string and an integer: let appKey = "your-application-key-here", endpoint = "Things/YourThingNameHere/Services/YourServiceNameHere", baseUrl = "https://YourServerNameHere/Thingworx/", url = Text.Combine({baseUrl,endpoint}), body = "{""InputString"": ""Hello, world!"", ""InputNumber"" : 42}", request = Web.Contents( url, [ Headers = [ appKey = appKey, #"Content-Type" = "application/json", Accept = "application/json" ], Content = Text.ToBinary(body) ] ), Source = Json.Document(request) in Source   Here is an example for sending an InfoTable. Note that you must supply the dataShape with fieldDefinitions. If you're using an existing Data Shape, you can get the JSON by using the service GetDataShapeMetadataAsJSON() that is on the data shape.     
let appKey = "your-application-key-here", endpoint = "Things/YourThingNameHere/Services/YourServiceNameHere", baseUrl = "https://YourServerNameHere/Thingworx/", url = Text.Combine({baseUrl,endpoint}), body = "{""propertyNames"": { ""rows"": [ { ""name"": ""FirstEntityName"", ""description"": ""The first entity"" }, { ""name"": ""SecondEntityName"", ""description"": ""The second entity"" }], ""dataShape"": { ""fieldDefinitions"": { ""name"": { ""name"": ""name"", ""aspects"": { ""isPrimaryKey"": true }, ""description"": ""Entity name"", ""baseType"": ""STRING"", ""ordinal"": 0 }, ""description"": { ""name"": ""description"", ""aspects"": {}, ""description"": ""Entity description"", ""baseType"": ""STRING"", ""ordinal"": 0 } } } }}", request = Web.Contents( url, [ Headers = [ appKey = appKey, #"Content-Type" = "application/json", Accept = "application/json" ], Content = Text.ToBinary(body) ] ), Source = Json.Document(request) in Source       If I find any more interesting ways to use Power BI with ThingWorx services, I'll add them on here.  
View full tip
Announcing: ThingWorx Solution Central 3.1.0 and its New API
Written by: Tori Firewind of the IoT EDC

Solution Central 3.1.0
ThingWorx Solution Central (SC) is the solution management tool for ThingWorx and Digital Performance Management (DPM), the latest version of which (DPM 1.1) can now be deployed directly from the PTC Solutions menu of SC. Streamlining packaging strategies and ensuring efficient solution deployment can now be done for all kinds of ThingWorx solutions, even those with heavy customization. The new API allows updated building blocks from within the PTC Solutions menu to be easily discovered and deployed right alongside custom solutions. Even the most advanced developers can now house their deployment management process within SC.

As discussed in a previous article, Solution Central forms a necessary part of a mature DevOps pipeline, usually as a set of services within Foundation which finalize and publish the solution to the Solution Central servers. The recommendation to utilize Solution Central from within Foundation remains a best practice for ThingWorx DevOps, because the vast majority of solutions benefit from using the ThingWorx APIs, which scan and check for dependencies and proper XML formatting on each included entity.

Packaging and publishing the solution from within ThingWorx is the easiest and most straightforward way recommended by PTC, but it is now possible to publish to Solution Central using a standard API for those who need to publish from Jenkins or other build jobs. If there are legacy extensions, third-party tool dependencies, or other customizations within the ThingWorx application, then it may be beneficial to use this new API instead.

The new API also allows for editable extensions and entities within a published solution, though PTC still recommends avoiding this as a general practice. For maintainability and ease of upgrades, it is usually better to publish the solution again (with an incremented version) each time any changes are made.

How to Use the API
Within Solution Central, a new menu option has been added to review the API, with information about the different types of requests and their parameters and responses. To access this within the Solution Central UI, open the help menu and navigate to "Public APIs" (see the image on the right). To see a sample response, select a request type and scroll down to the "Responses" section (shown below). Examples of error responses are also provided, and it's important to ensure that whatever makes these requests can properly report or log any errors for troubleshooting and maintenance of the DevOps process.

The general steps for making use of this API are as follows (a hedged outline in code follows below):
1. Create the solution resource with a POST.
2. Add some files to that solution with PUTs:
   - First, create the solution archive with the right solution identifiers; this should contain at least one project XML and all of the entities belonging to that ThingWorx project.
   - Next, compute MD5 (using a tool like DigestUtils) on the contents of the archive; this checksum is required by Solution Central.
   - Compute the SHA hash on the archive and save it; this will need to be provided along with the archive in the PUT requests.
   - Compute MD5 on the hash file also.
   - Finally, make the two PUT requests, one for the archive and one for its hash; for example cURL requests, see the Help Center.
3. Publish the solution using a PUT.

So, with a little more work it is now possible to make use of SC in a more custom DevOps process.
It is now possible to build JSON or XML solutions using development tools outside of ThingWorx and still publish these customizations to Solution Central. The process of DevOps Management just became more versatile, and with the ease of deployment of DPM and other PTC building blocks as well, ThingWorx is now more accessible and easy to use than ever before.
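To make the steps above concrete, here is a hedged outline in Python. The hashing calls are standard, but every endpoint path, header name, and response field shown is a placeholder rather than the documented Solution Central API; consult the Public APIs section of the SC Help Center for the real request formats and example cURL commands.

import base64
import hashlib
import requests

SC_BASE = "https://<your-sc-host>"                          # placeholder base URL
HEADERS = {"Authorization": "Bearer <oauth-access-token>"}  # token from the Azure AD flow

# 1. Create the solution resource with a POST (identifiers are illustrative).
created = requests.post(
    f"{SC_BASE}/solutions",
    json={"groupId": "com.example", "artifactId": "my-solution", "version": "1.0.0"},
    headers=HEADERS,
)
created.raise_for_status()
solution_id = created.json()["id"]  # assumed response field

# 2. Compute MD5 on the archive (required by SC), the SHA hash of the archive,
#    and MD5 of the hash file itself. SHA-256 is assumed here.
with open("my-solution-1.0.0.zip", "rb") as f:
    archive = f.read()
archive_md5 = base64.b64encode(hashlib.md5(archive).digest()).decode()
sha_file = hashlib.sha256(archive).hexdigest().encode()
sha_md5 = base64.b64encode(hashlib.md5(sha_file).digest()).decode()

# 3. Upload the archive and its hash file with two PUT requests.
requests.put(f"{SC_BASE}/solutions/{solution_id}/files/archive.zip",
             data=archive, headers={**HEADERS, "Content-MD5": archive_md5}).raise_for_status()
requests.put(f"{SC_BASE}/solutions/{solution_id}/files/archive.zip.sha",
             data=sha_file, headers={**HEADERS, "Content-MD5": sha_md5}).raise_for_status()

# 4. Publish the solution with a final PUT.
requests.put(f"{SC_BASE}/solutions/{solution_id}/publish", headers=HEADERS).raise_for_status()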
View full tip
This script finds all the data items both current and historical on all the assets of a model and outputs them as XML. Parameters: model_name from_time to_time import com.axeda.drm.sdk.Context import com.axeda.drm.sdk.device.ModelFinder import com.axeda.drm.sdk.device.Model import com.axeda.drm.sdk.device.DeviceFinder import com.axeda.drm.sdk.data.CurrentDataFinder import com.axeda.drm.sdk.device.Device import com.axeda.drm.sdk.data.HistoricalDataFinder import groovy.xml.MarkupBuilder /* * AllDataItems2XML.groovy * * Find all the historical and current data items for all assets in a given model. * * @param model_name        -   (REQ):Str name of the model. * @param from_time         -   (REQ):Long millisecond timestamp to begin query from. * @param to_time           -   (REQ):Long millisecond timestamp to end query at. * * @note from_time and to_time should be provided because it limits the query size. * * @author Sara Streeter <sstreeter@axeda.com> */ def response = [:] def writer = new StringWriter() def xml = new MarkupBuilder(writer) // measure the script run time def timeProfiles = [:] def scriptStartTime = new Date() try { // getUserContext is supported as of release 6.1.5 and higher     final def CONTEXT = Context.getUserContext() // confirm that required parameters have been provided     validateParameters(actual: parameters, expected: ["model_name", "from_time", "to_time"]) // find the model     def modelFinder = new ModelFinder(CONTEXT)     modelFinder.setName(parameters.model_name)     Model model = modelFinder.findOne() // throw exception if no model found     if (!model) {         throw new Exception("No model found for ${parameters.model_name}.")     } // find all assets of that model     def assetFinder = new DeviceFinder(CONTEXT)     assetFinder.setModel(model)     def assets = assetFinder.findAll() // find the current and historical data values for each asset //note: since device will be set on the datafinders going forward, a dummy device is set on instantiation which is not actually stored     def currentDataFinder = new CurrentDataFinder(CONTEXT, new Device(CONTEXT, "placeholder", model))     def historicalDataFinder = new HistoricalDataFinder(CONTEXT, new Device(CONTEXT, "placeholder", model))     historicalDataFinder.startDate = new Date(parameters.from_time as Long)     historicalDataFinder.endDate = new Date(parameters.to_time as Long) // assemble the response     xml.Response(){         assets.each { Device asset ->             currentDataFinder.device = asset             def currentValueList = currentDataFinder.find()             historicalDataFinder.device = asset             def valueList = historicalDataFinder.find()             Asset(){                     id(asset.id.value)                     name( asset.name)                     serial_number(asset.serialNumber)                     model_id( asset.model.id.value)                     model_name(asset.model.name)                     current_data(){                         currentValueList.each{ data ->                         timestamp( data?.getTimestamp()?.format("yyyyMMdd HH:mm"))                          name(data?.dataItem?.name)                          value( data?.asString())                     }}                     historical_data(){                         valueList.each { data ->                         timestamp( data?.getTimestamp()?.format("yyyyMMdd HH:mm"))                          name(data?.dataItem?.name)                          value( data?.asString())                     }}             
}         }     } } catch (def ex) {       xml.Response() {     Fault {           Code('Groovy Exception')           Message(ex.getMessage())           StringWriter sw = new StringWriter();           PrintWriter pw = new PrintWriter(sw);           ex.printStackTrace(pw);           Detail(sw.toString())         }       } } return ['Content-Type': 'text/xml', 'Content': writer.toString()] private Map createTimeProfile(String label, Date startTime, Date endTime) {     [             (label): [                     startTime: [timestamp: startTime.time, readable: startTime.toString()],                     endTime: [timestamp: endTime.time, readable: endTime.toString()],                     profile: [                             elapsed_millis: endTime.time - startTime.time,                             elapsed_secs: (endTime.time - startTime.time) / 1000                     ]             ]     ] } private validateParameters(Map args) {     if (!args.containsKey("actual")) {         throw new Exception("validateParameters(args) requires 'actual' key.")     }     if (!args.containsKey("expected")) {         throw new Exception("validateParameters(args) requires 'expected' key.")     }     def config = [             require_username: false     ]     Map actualParameters = args.actual.clone() as Map     List expectedParameters = args.expected     config.each { key, value ->         if (args.options?.containsKey(key)) {             config[key] = args.options[key]         }     }     if (!config.require_username) { actualParameters.remove("username") }     expectedParameters.each { paramName ->         if (!actualParameters.containsKey(paramName) || !actualParameters[paramName]) {             throw new IllegalArgumentException(                     "Parameter '${paramName}' was not found in the query; '${paramName}' is a reqd. 
parameter.")         }     } } Sample Output: <Response>   <Asset>   <id>2864</id>   <name>keg24</name>   <serial_number>keg24</serial_number>   <model_id>1081</model_id>   <model_name>Kegerator</model_name>   <current_data>   <timestamp>20111103 14:44</timestamp>   <name>currKegPercentage</name>   <value>34.0</value>   <timestamp>20111103 14:38</timestamp>   <name>currTempF</name>   <value>43.0</value>   </current_data>   <historical_data />   </Asset>   <Asset>   <id>2861</id>   <name>keg28</name>   <serial_number>keg28</serial_number>   <model_id>1081</model_id>   <model_name>Kegerator</model_name>   <current_data>   <timestamp />   <name>currKegPercentage</name>   <value>?</value>   <timestamp>20111103 14:21</timestamp>   <name>currTempF</name>   <value>43.0</value>   </current_data>   <historical_data />   </Asset>   <Asset>   <id>2863</id>   <name>keg21</name>   <serial_number>keg21</serial_number>   <model_id>1081</model_id>   <model_name>Kegerator</model_name>   <current_data>   <timestamp />   <name>currKegPercentage</name>   <value>?</value>   <timestamp>20111103 14:39</timestamp>   <name>currTempF</name>   <value>42.0</value>   </current_data>   <historical_data />   </Asset>   <Asset>   <id>2862</id>   <name>keg25</name>   <serial_number>keg25</serial_number>   <model_id>1081</model_id>   <model_name>Kegerator</model_name>   <current_data>   <timestamp>20111103 14:36</timestamp>   <name>currKegPercentage</name>   <value>34.0</value>   <timestamp />   <name>currTempF</name>   <value>?</value>   </current_data>   <historical_data />   </Asset>   <Asset>   <id>2867</id>   <name>keg29</name>   <serial_number>keg29</serial_number>   <model_id>1081</model_id>   <model_name>Kegerator</model_name>   <current_data>   <timestamp>20111103 14:48</timestamp>   <name>currKegPercentage</name>   <value>35.0</value>   <timestamp />   <name>currTempF</name>   <value>?</value>   </current_data>   <historical_data />   </Asset>   <Asset>   <id>2865</id>   <name>keg27</name>   <serial_number>keg27</serial_number>   <model_id>1081</model_id>   <model_name>Kegerator</model_name>   <current_data>   <timestamp>20111103 14:39</timestamp>   <name>currKegPercentage</name>   <value>34.0</value>   <timestamp>20111103 14:44</timestamp>   <name>currTempF</name>   <value>42.0</value>   </current_data>   <historical_data />   </Asset>   <Asset>   <id>2866</id>   <name>keg23</name>   <serial_number>keg23</serial_number>   <model_id>1081</model_id>   <model_name>Kegerator</model_name>   <current_data>   <timestamp>20111103 14:46</timestamp>   <name>currKegPercentage</name>   <value>34.0</value>   <timestamp />   <name>currTempF</name>   <value>?</value>   </current_data>   <historical_data />   </Asset> </Response>
View full tip
MachNation Podcast Replay: Enterprise-Specific Implementation Testing
A podcast, by Mike Jasperson, VP of the IoT EDC

MachNation, a company exclusively dedicated to testing and benchmarking Internet of Things (IoT) platforms, end-to-end solutions, and services, has conducted a recent podcast series featuring our very own Mike Jasperson, Vice President of the IoT Enterprise Deployment Center here at PTC. Performance IoT is a podcast series that brings together experts who make IoT performance testing and high-resiliency IoT part of their IoT journey. Mike Jasperson's podcast is episode 5 in the series, titled "Enterprise-Specific Implementation Testing". Enjoy!
View full tip
This is an example of an advanced apc-metadata.xml file for use with Axeda Artisan that includes examples of how to create different data structures on the platform, including Expression Rules and System Timers. Step 1 - In the upload.xml make sure the artisan-installer is 1.2     <dependencies>         <dependency>             <groupId>com.axeda.community</groupId>             <artifactId>artisan-installer</artifactId>             <version>1.2</version>         </dependency>     </dependencies> Step 2 - <?xml version="1.0" encoding="UTF-8"?> <!--         apc-metadata.xml --> <apcmetadata>     <models>         <model>             <name>DemoModel</name>             <!--standalone or gateway-->             <type>gateway</type>             <DataItems>                 <DataItem>                     <name>PendingMessage</name>                     <!--string or digital or analog-->                     <type>string</type>                     <visible>false</visible>                     <stored>true</stored>                     <!--0 = no history,1 = Stored,2 = no storage,3 = on change-->                     <storageOption>2</storageOption>                     <readOnly>false</readOnly>                 </DataItem>                 <DataItem>                     <name>ConsumableData</name>                     <!--string or digital or analog-->                     <type>string</type>                     <visible>false</visible>                     <stored>true</stored>                     <!--0 = no history,1 = Stored,2 = no storage,3 = on change-->                     <storageOption>2</storageOption>                     <readOnly>false</readOnly>                 </DataItem>             </DataItems>         </model>     </models>     <ruleTimers>           <ruletimer>               <name>SFTP Retry</name>               <description></description>               <!--midnight gmt-->               <schedule>0 0 0 * * ?</schedule>               <rules>                   <rule>SFTP Retry</rule>               </rules>           </ruletimer>     </ruleTimers>     <expressionRules>                 <rule>             <name>SFTP Retry</name>             <description></description>             <enabled>true</enabled>             <applyToAll>true</applyToAll>             <type>SystemTimer</type>             <ifExpression><![CDATA[true]]></ifExpression>             <thenExpression>                 <![CDATA[ExecuteCustomObject("SFTPRetry")]]></thenExpression>             <elseExpression></elseExpression>             <consecutive>true</consecutive>             <models>                 <model>DemoMOdel</model>             </models>         </rule>     </expressionRules>     <customobjects>         <customobject>             <name>GetChartData</name>             <type>Action</type>             <sourcefile>GetChartData.groovy</sourcefile>             <params>                 <param name="username" description="(REQUIRED) The name of the calling user"/>             </params>         </customobject>         <customobject>             <name>GetChartData_rss</name>             <type>Action</type>             <sourcefile>GetChartData_rss.groovy</sourcefile>             <params>                 <param name="username" description="(REQUIRED) The name of the calling user"/>             </params>         </customobject>         <customobject>             <name>GetAddress</name>             <type>Action</type>             <sourcefile>GetAddress.groovy</sourcefile>             <params>                 <!--<param name="username" 
description="(REQUIRED) The name of the calling user"/>-->             </params>         </customobject>     </customobjects>     <applications>         <application>             <description>Chart Example</description>             <applicationId>chartexample</applicationId>             <indexFile>index.html</indexFile>             <!--<zipFile></zipFile>-->             <sourcePath>artisan-starter-html/src/main/webapp</sourcePath>         </application>     </applications> </apcmetadata>
View full tip
Now that ThingWorx 9.3 is live, let's take a closer look at some of the new features we released for Composer and Mashup Builder.

Referenced By
Find where entities, code references, and project dependencies are used in your existing projects using the new "Referenced by" report feature. This feature is not automatically enabled, because it is intended to be used during development: it can call upon all of the entities in your project and can impact your load times in production. That being said, this is your friendly reminder to turn off this feature in production.

How to enable: Go to the relationship subsystem and tick the check box to enable during development.

How it works: The "Referenced By" feature finds any entity that is referenced in your ThingWorx environment based on the supplied search characteristics. You can run this "Referenced By" report in Composer or via a ThingWorx service, "GetWhereUsed" (a hedged invocation sketch appears below, after the Style Migration section). When you use the "Referenced By" feature on an entity, you can find all of the references in the system for that entity in Mashup bindings, service script references, thing property bindings, and more.

Grid Widget
The new grid widget component, which was available for preview in ThingWorx 9.2, is now complete, so it's time to get your grid on! We have improved styling and performance capabilities with this new widget, including greater support for inline editing, auto-creation of columns based on infotables, and new footer areas. You can also configure the grid using the property editor in Mashup Builder, where previously we had plain text entry.

Style Migration
The new Style Migration is a game changer. It allows you to retain the same look and feel of your Mashups as you upgrade from previous versions of ThingWorx to the newest web components and features available in 9.3. Improved from our previous migrator, this allows you to move to the latest platform version and capabilities without having to re-implement or redesign your applications and widgets.

How it works: When you upgrade to the latest version of ThingWorx, a pop-up window will appear if you have any legacy widgets or layouts in your Mashup. The window will give you the option to apply one of three style themes to your Mashup: the PTC Convergence Theme (the new ThingWorx default theme), the Legacy Styles Theme (the old ThingWorx theme, from version 8.0 and earlier), or a Custom Theme (choose from custom themes you defined using the Theme Editor; a Style Theme selector will appear when Custom Theme is selected in the pop-up). Depending on how you already styled or would like to style your Mashups, select an option and click migrate. This migrator maintains previous coloring, spacing, and other design properties better than previous migrators. You, of course, have the option to not upgrade your Mashup, but we recommend that you migrate, especially where we have new widgets available to replace legacy versions. If there are any issues with your migration, you can always click "Undo" in the toolbar.

Things to consider: This migration will work best with ThingWorx default styling, out-of-the-box styling, and Mashups with widgets that we now have replacements for (these are marked legacy in the builder). Always review your Mashups to make sure bindings and properties remained. Note that custom CSS will not be migrated, and custom widgets developed outside the standard platform installation will remain the same on the new Mashup.
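Returning to the Referenced By feature above: as a rough sketch, and assuming GetWhereUsed can be called over the standard ThingWorx REST pattern for entity services (the same pattern used in the Power BI article elsewhere in this collection), an invocation might look like the following. The entity it is called on, and whether it takes parameters, are assumptions to verify against the 9.3 Help Center.

import requests

TWX_BASE = "https://YourServerNameHere/Thingworx"
APP_KEY = "your-application-key-here"  # needs runtime permission on the entity

# Hedged sketch: ask the platform where a given Thing is referenced.
resp = requests.post(
    f"{TWX_BASE}/Things/YourThingNameHere/Services/GetWhereUsed",
    headers={
        "appKey": APP_KEY,
        "Content-Type": "application/json",
        "Accept": "application/json",
    },
    json={},  # add service parameters here if your version requires them
)
resp.raise_for_status()

# The result should list mashup bindings, service scripts, property bindings,
# and other places that reference the entity.
print(resp.json())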
Other Bonus Features
That's not all we rolled out in ThingWorx 9.3. You will also see Composer enhancements for test execution on ThingTemplates and dynamic use of Master Mashups, allowing Masters to be swapped out at run time based on predefined conditions for your users. Plus, we now have truncation support for the breadcrumb component and tabs component, which uses an ellipsis pattern for long text to keep applications user-friendly. With enhancements to our charts, you can now show/hide legends and format axes in new ways. We also support localization for our new web component widgets.

How has your experience been building solutions with the latest updates to Composer and Mashup Builder? How can we continue to build upon these enhancements? Let us know what you think.

Stay connected,
Rachel
View full tip
Shown below is example code that when deployed in the appropriate container, will allow an end-user to talk to the Axeda Platform Integration Queue. A customer should supply their unique values for the following properties: queueName user password url import java.util.Properties; import javax.jms.*; import javax.naming.*; public class SampleConsumer {     private String queueName = "com.axeda.integration.ACME.queue";     private String user = "system";     private String password = "manager"; //private String url = "ssl://hostname:61616";   private String url = "tcp://hostname:61616";     private boolean transacted;     private boolean isRunning = false;     public static void main(String[] args) throws NamingException, JMSException     {         SampleConsumer consumer = new SampleConsumer();         consumer.run();     }     public SampleConsumer()     {         /** For SSL connections only, add the following: **/ //        System.setProperty("javax.net.ssl.keyStore", "path/to/client.ks"); //        System.setProperty("javax.net.ssl.keyStorePassword", "password"); //        System.setProperty("javax.net.ssl.trustStore", "path/to/client.ts");     }     public void run() throws NamingException, JMSException     {           isRunning = true;            //JNDI properties         Properties props = new Properties();         props.setProperty(Context.INITIAL_CONTEXT_FACTORY, "org.apache.activemq.jndi.ActiveMQInitialContextFactory");         props.setProperty(Context.PROVIDER_URL, url);            //specify queue propertyname as queue.jndiname         props.setProperty("queue.slQueue", queueName);            javax.naming.Context ctx = new InitialContext(props);         ConnectionFactory connectionFactory = (ConnectionFactory)ctx.lookup("ConnectionFactory");         Connection connection = connectionFactory.createConnection(user, password);         connection.start();            Session session = connection.createSession(transacted, Session.AUTO_ACKNOWLEDGE);            Destination destination = (Destination)ctx.lookup("slQueue");         //Using Message selector ObjectClass = ‘AlarmImpl’         MessageConsumer consumer = session.createConsumer(destination, "ObjectClass= 'LinkedList'");            while (isRunning)         {             System.out.println("Waiting for message...");             Message message = consumer.receive(1000);             if (message != null && message instanceof TextMessage) {                 TextMessage txtMsg = (TextMessage)message;                 System.out.println("Received: " + txtMsg.getText());             }         }         System.out.println("Closing connection");         consumer.close();         session.close();         connection.close();     } }
View full tip
Introduction to Digital Performance Management (DPM)
Written by: Tori Firewind, IoT EDC

"Digital Performance Management (DPM) is a closed-loop, problem solving solution that helps manufacturers identify, prioritize, and solve their biggest loss challenges, resulting in reduced cost, increased revenue, and improved service levels." – DPM Help Center

What is DPM?
Digital Performance Management (DPM) is an application which improves factory efficiency across a variety of different areas, namely "the four P's" of Digital Transformation: products, processes, places, and people. Each performance issue in a factory can be mapped to at least one of these improvement categories in a new strategy for Continuous Improvement (CI) founded by PTC.

Figure 1 – Each performance issue in a factory can be mapped to at least one of four fundamental improvement categories: products, processes, places, and people. PTC's new, industry-leading strategy for continuous improvement (CI) in factories is a "best practice" approach, taking the collective knowledge of many customers to form a focused, prescriptive path for success. (See "Closing the Loop Across Products, Processes, People, and Places", Manufacturing Leadership Journal.)

At PTC, CI in factories is driven by a "best practice" approach, with years of experience in manufacturing solutions combining with the collective knowledge of the many diverse use cases PTC has encountered to generate a focused, prescriptive path for improvement in any individual factory.

Figure 2 – DPM is a closed loop for continuous improvement, a strategy built around industry standard best practices and years of experience.

PTC is also defining new industry standards for OEE analysis by using time as a currency within DPM. This standardization technique improves intuitive impact assessment and allows for direct comparison of metrics (see the Help Center for details on how each metric is calculated).

DPM creates a closed loop for CI, from the monitoring phase performed both automatically and through manual operator input, to the prioritization and analysis phases performed by plant managers. DPM helps plant managers by tracking metrics of factory performance that often go overlooked by other systems. With Analytics, DPM can also do much of the analysis automatically, finding the root causes much more rapidly.

Figure 3 – All levels of the company are involved in solving the same problems effectively and efficiently with DPM. Instead of 100 people working on 100 different problems, some of which might not significantly improve OEE anyway, these same 100 people can tackle the top few problems one at a time, knocking out barriers to continuous improvement together.

Production supervisors who manage the entire production line then know which less-than-effective components on the line need help. They can quickly design and redesign solutions for specific production issues. Task management within DPM helps both the production manager and the maintenance engineer to complete the improvement process. Using other PTC tools like Creo and Vuforia makes the path to improvement even faster and easier, requiring less expert knowledge from the front-line workers and empowering every level of participation in the digital transformation process to make a direct, measurable impact on physical production.

How Does DPM Work?
DPM, as an IoT application, sits on top of the ThingWorx Foundation server, a platform for IoT development that is extensible and customizable. Manufacturers therefore find they rarely have to rip and replace existing systems and assets to reap the benefits of DPM, which gathers, aggregates, and stores production data (both automatically and through manual input on the Production Dashboard) so that it can be analyzed using time as a currency. DPM also manages the process of implementing improvements (using the Action Tracker) based on the collected data, and provides an easy way to confirm that the improvements make a real difference in the overall OEE (through the Performance Analysis Dashboard). Because the analysis occurs before and after the steps to improve are taken, manufacturers can rest assured that resources invested in improvements aren't spent in vain; DPM is a predictive and prescriptive analysis process.

DPM makes use of an external SQL Server to run queries against collected data and perform aggregation and analysis tasks in the background, in a separate server location from the thing model and ingestion database. This ensures that use cases involving real-time alerts and events, high-capacity ingestion, or others are still possible on the ThingWorx Foundation server.

The IoT EDC is focusing on DPM for a series of technical briefs which provide insight and expert-level recommendations regarding DPM usage and configuration. Stay tuned to the PTC Community for more updates to come.
View full tip
Background
Getting a performance benchmark of your running application is an important thing to do when deploying and scaling up an application in production. This not only helps focus in on performance issues quickly, but also allows for safely planning for scaling up and resource sizing based on real, concrete data.

I recently created a tool and made a post about capturing and analysing ThingWorx utilisation statistics to do such an analysis, as well as identifying potential performance bottlenecks. Although they are rich and precise, utilisation statistics fall short in a number of areas however - specifically being able to count and time specific service executions, as well as identifying and sorting based on the host executing the service.

Tomcat Access Log Analysis
As ThingWorx is a Tomcat web application, Tomcat logs details of the requests being made to the application server and ThingWorx REST API. The default settings include the host (IP address), date/timestamp, and request URI, which can be decoded to reveal relevant details like the calling entities and service executions.

Adding 3 key additional variables (%s %B %D) to the server.xml access log value also gives us the HTTP response code, bytes returned, and service execution time from Tomcat. This is super useful, as we can now determine the exact time of service executions and run statistics on their execution totals and execution time.

Once you have an access log file looking like the one above, you can attempt to load it into the access_log sheet in the analysis Excel workbook that I created. You do this by clicking on the access_log table, then selecting "Data > Get Data > Data Source Settings". You'll then be prompted with the following or similar pop-up allowing you to navigate to your access_log file to select and then load.

It should be noted that you'll have to Refresh the table after selecting the new access_log.txt file so that it is read in and populates the table. You can do this by right-clicking on the table and selecting Refresh, or using the Data > Refresh button.

This workbook relies on a number of formulas to slice and dice the timestamp, and during my attempts at importing I had significant issues with this due to some of the ways that Excel does things automatically without any manual options. You really need to make sure that the timestamps are imported and converted correctly, or something in the workbook will likely not work as intended. One thing that I had to do was to add 1 second to round up 00:00:00 for the first entries, as these were being imported as a date without the time part, and then the next lines imported as a date/time.

Depending on how many lines your file is, you'll likely also have to "Fill Down" the formulas on the right side of the sheet, which may be empty in the table after importing your new data set. I had the best results by selecting the cells in question on the last row, then going down to the bottom corner, pushing and holding Shift, clicking on the last cell bottom right, and then selecting Home > Fill > Down to pull the formulas down from the top.

Once the data is loaded, you'll be able to start poking around. The filters and sorting by the named columns are really helpful, as you can start out by doing things like removing a particular host, sorting by longest execution times, selecting execution times greater than 4 seconds, or only showing activity aimed at a particular entity or service.

You really need to make sure that the imported data is fine and looks perfect, as the next steps will totally break if not. With the data loaded, you can now go to the Summary Data table, right-click on one of the tables, and select Refresh. This will reload the data into the pivot tables and re-run their calculations.

Once the refresh is complete, you should see the table summary like shown here; there are Day, Hour, and Minute expand/collapse buttons. You should also see the Day, Hour, Month fields showing in the Field Definitions on the right. This is the part that is painful -- if the dates are in the wrong format and Excel is unable to auto-detect everything in the same way, then you will not get these automatically created fields.

With the data reloaded and Pivot Tables re-built, you should be able to go over to the Dashboard sheet to start looking at and analysing the graphs. This one is showing the Top 10 services organised into hourly buckets with cumulative service execution times.

I'm not going to go into all of the workbook's features, but you can also individually select a set of key services that you want to have a look at together across both the execution count and execution time dimensions.

Next you can see the coordinated view of both total service execution time over number of service executions. This is helpful for looking for patterns where a service may be executing longer but being triggered the same amount of times, compared to both being executed more and taking more time. I've created a YouTube video (see bottom) which goes through using all of the features as well as providing other pointers to using it.

Getting into a finer level of detail, this "bonus" sheet provides a Pivot Table and Pivot Chart which allow for exploring minimum, maximum, and average execution time for a specific service. Comparing this with the utilisation subsystem metrics taken during the same period now provides much deeper insight, as we can pinpoint where the peaks were, how long they lasted, and where the slow executions were in relation to other services being executed at that time (for example, identifying many queries or data processing jobs occurring simultaneously).

Without further ado, you can download and play with my ThingWorx Tomcat Access Log Analysis Excel Workbook, and check out the recorded demonstration and explanation for more details on loading and analysis use. [YouTube] ThingWorx Tomcat Access Logs - Service Performance Analysis
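As a small illustration of what the extended log line contains, the sketch below parses a single made-up access log entry in Python, assuming a pattern along the lines of %h %l %u %t "%r" %s %B %D (host, identity, user, timestamp, request, status, bytes, time in milliseconds); adjust the regular expression to whatever pattern your server.xml actually uses.

import re

# Made-up example line: default fields plus status, bytes, and execution time (ms).
LINE = ('10.10.1.15 - - [03/Feb/2022:14:21:07 +0000] '
        '"POST /Thingworx/Things/MyThing/Services/QueryPropertyHistory HTTP/1.1" 200 5123 187')

ACCESS_LOG = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<uri>\S+) [^"]*" '
    r'(?P<status>\d{3}) (?P<bytes>\S+) (?P<millis>\d+)'
)

match = ACCESS_LOG.match(LINE)
if match:
    fields = match.groupdict()
    # Decode the REST URI into the calling entity and service where possible.
    uri = re.search(r'/Thingworx/\w+/([^/]+)/Services/([^/?]+)', fields["uri"])
    if uri:
        fields["entity"], fields["service"] = uri.group(1), uri.group(2)
    print(fields)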
View full tip
Since the release of version 6.1.6, we've been able to access the body of a POST request in a Groovy script. This frees us from the tyranny of predefined parameters and opens up all sorts of JavaScript object-passing possibilities. To keep this example as simple as possible, there are only two files:

postbody.html
TestPostBody.groovy

postbody.html

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
            "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
    <title>Scripto Post Body Demo</title>
    <meta http-equiv="content-type" content="text/html; charset=utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no"/>
    <meta name="apple-mobile-web-app-capable" content="yes"/>
    <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
    <meta http-equiv="refresh" content="3600">
</head>
<body>
<div id="wrapper">
<div id="header">
<h1>Scripto Post Body Demo</h1>
</div>
<p>
Username:<input type="text" id="username" /><br />
Password:<input type="password" id="password" /><br /><br />
Enter some valid JSON (validate it <a href="http://jsonlint.com/" alt="jsonlint">here</a> if you're not sure):
<br /><textarea id="jsoninput" rows=10 cols=20></textarea><br />
Enter arbitrary Text:
<br /><textarea id="textinput" rows=10 cols=20></textarea>
<input type="submit" value="Go" id="submitbtn" onclick="poststuff();"/>
</p>
<div id="response"></div>
</div>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js"></script>
<script src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.16/jquery-ui.min.js"></script>
<script type="text/javascript">
        var poststuff = function(){
            var data = {}
            var temp
            if ($("#jsoninput").val() != ""){
                try {
                    temp = $.parseJSON($("#jsoninput").val())
                }
                catch (e){
                    temp = ""
                }
                if (temp && temp != ""){
                    data = JSON.stringify(temp)
                }
            }
            else if ($("#textinput").val() != ""){
                data.text = $("#textinput").val()
                data = JSON.stringify(data)
            }
            else data = JSON.stringify({"testing":"hello"})
            if ($("#username").val() != "" && $("#password").val() != ""){
                // you need contentType in order for the POST to succeed
                var promise = $.ajax({
                    type: "POST",
                    url: "http://dev6.axeda.com/services/v1/rest/Scripto/execute/TestPostBody?username=" + $("#username").val() + "&password=" + $("#password").val(),
                    data: data,
                    contentType: "application/json; charset=utf-8",
                    dataType: "text"
                })
                $.when(promise).then(function(json){
                    $("#response").html("<p>" + JSON.stringify(json) + "</p><br />Check your console for the object.<br />")
                    console.log($.parseJSON(json))
                    $("#jsoninput").val("")
                    $("#textinput").val("")
                })
            }
        }
</script>
</body>
</html>

TestPostBody.groovy

import net.sf.json.JSONObject
import groovy.json.JsonSlurper
import static com.axeda.drm.sdk.scripto.Request.*;

try {
    // just get the string body content
    response = [body: body]
    response.element = []
    // parse the text into a JSON Object or JSONArray suitable for traversing in Groovy
    // this assumes the body is stringified JSON
    def slurper = new JsonSlurper()
    def result = slurper.parseText(body)
    result.each{ response.element << it }
}
catch (Exception e) {
    response = [
        faultcode: 'Groovy Exception',
        faultstring: e.message
    ];
}
return ["Content-Type": "application/json", "Content": JSONObject.fromObject(response).toString(2)]

The "body" variable is passed in as a standalone implicit object of type String. The key here is that to process the string as a JSON object in Groovy, we send stringified JSON from the JavaScript rather than the raw JSON object.

FYI: If you happen to be using Scripto Editor, you might like to know that importing the Request class disables the sidebar input of parameters. You can enter the parameters in the sidebar, but if this import is included the parameters will not be visible to the script.

To access the POST body through the Request Object, you can also refer to: Using Axeda Scripto
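For clarity on what JsonSlurper hands back, here is a minimal, standalone Groovy sketch (no Axeda SDK involved; the sample bodies are made up for illustration). parseText returns a Map for a JSON object and a List for a JSON array, which is why the each loop in TestPostBody.groovy works for either shape:

import groovy.json.JsonSlurper

def slurper = new JsonSlurper()

// a JSON object body parses to a Map
def objectBody = slurper.parseText('{"testing":"hello","count":3}')
assert objectBody instanceof Map
objectBody.each { key, value -> println "$key -> $value" }

// a JSON array body parses to a List
def arrayBody = slurper.parseText('[{"id":1},{"id":2}]')
assert arrayBody instanceof List
arrayBody.each { item -> println item.id }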
As of Dec. 23, 2021, ThingWorx 9.3.0 is available for download, and on Jan. 7, 2022, it will be available for Cloud Services.

What's new in ThingWorx 9.3?

Composer and Mashup Builder
- Enhanced ability to find entities, code references, and project dependencies with a new Referenced by report feature.
- New Grid widget allows for improved visualization of row/columnar data, with improved performance, styling, and configuration experiences.
- Composer enhancements allow for faster configuration and application management in the areas of test execution and dynamic use of Master Mashups.

Analytics
- New configuration parameters (binningStrategy and allowOverlap) available for profiles to tailor application of the algorithm to the needs of domain use cases.
- New option to use K-Fold cross validation, an industry-standard technique, to improve the quality of predictive models created on limited data sets.
- Automated treatment of missing values to reduce data preparation before applying advanced analytics algorithms.

Foundation
- Query performance improvements: indexing enables quick "search" queries across thousands of connected assets and pre-filtering of query result sets for better overall query performance.
- Upgrade script improvements for logging, script execution, and error handling, which simplify the upgrade process.

Remote Access and Control
- Improvements to Remote Access and Control include Auto Launch and Click to Launch capabilities that streamline the remote access startup process by starting the support application you need for you.
- New connectivity options enable user control and selection of the local IP and port addresses used in session creation.

Software Content Management
- Instructions can now be configured to execute either synchronously or asynchronously for a streamlined execution process.
- Added support for migrating older audit information from the audit data table into the format used by the new audit database.

View release notes here and be sure to upgrade to 9.3!

Stay Connected,
Rachel
Requirements: 6.1.2+

Geofences are geometric shapes drawn virtually over a geographical area, representing a fence that can be crossed by a device. The Axeda Platform has built-in support for mobile locations and geofences, which can be linked to the rules engine to enable notifications based on geofence crossings.

What this tutorial covers
This tutorial demonstrates the workflow from creating a geofence through creating the expression rules with notifications, then shows how a mobile location can trigger the rules.
1) Creating the Geofence
2) Creating the Expression Rule

There is currently no user interface built into the Axeda Applications Console that interacts with geofences. For a sample application with a geofence user interface, see Sample Project: Traxeda. For a single Custom Object that includes all of the functionality described below, see the end of this document.

Creating the Geofence
The properties of a geofence are a name, a description, and a series of coordinates based on Well-Known Text (WKT) syntax (see the OpenGIS Simple Features Specification).

def addGeofence(CONTEXT, map){
    Geofence myGeofence = new Geofence(CONTEXT)
    myGeofence.name = map.name
    if(map.type != "polygon" && map.type != "circle")
    {
        throw new Exception("Invalid type: need 'polygon' or 'circle', not '$map.type'")
    }
    else if(map.type == "polygon")
    {
        def geo = map.locs.loc.inject("POLYGON ((") { str, item ->
            def lng = item.lng
            def lat = item.lat
            str += "$lng $lat,"
            str
        }
        // the first location also has to be the last location
        myGeofence.geometry = geo + map.locs.loc[0].lng + " " + map.locs.loc[0].lat + "))"
        // Something like this is built:
        // POLYGON ((-71.082118 42.383892,-70.867198 42.540923,-71.203654 42.495374,-71.284678 42.349394,-71.163829 42.221382,-71.003154 42.266114,-71.082118 42.383892))
    }
    else if(map.type == "circle")
    {
        def lng = map.locs.loc[0].lng
        def lat = map.locs.loc[0].lat
        myGeofence.geometry = "POINT ($lng $lat)"
        // POINT (-71.082118 42.383892)
        myGeofence.buffer = map.radius.toDouble()
    }
    myGeofence.description = "ALERT:::$map.alertType:::$map.alert"
    try {
        myGeofence.store()
    }
    catch (e){
        logger.info e.localizedMessage
        return null
    }
    myGeofence
}

The geofence itself does not interact with devices in any way. Rather, it is the Expression Rule that is applied to models and devices and that invokes the geofence when a mobile location is passed in.

Creating the Expression Rule
The Expression Rule for the geofence is built as follows:
TYPE: MobileLocation
IF: Expression set to "InNamedGeofence" for entering and "!InNamedGeofence" for exiting.
The following function creates this expression rule:

/* Sample call
createGeofenceExpressionRule(CONTEXT, "My Geofence", "rule_MyGeofence", "in", "You entered the geofence!", "SDK Generated Geofence Rule", 100)
*/
def createGeofenceExpressionRule(com.axeda.drm.sdk.Context CONTEXT, String geofencename, String rulename, String alertType, String alertMessage, String ruledescription, int severity){
    ExpressionRuleFinder erf = new ExpressionRuleFinder(CONTEXT)
    erf.setName(rulename)
    ExpressionRule expressionRule1 = erf.findOne()
    expressionRule1?.delete()
    def expressionRule = new ExpressionRule(CONTEXT)
    expressionRule.setName(rulename)
    expressionRule.setDescription(ruledescription)
    expressionRule.setTriggerName("MobileLocation")
    def ifExpStr = "InNamedGeofence(\"$geofencename\", Location.location)"
    if(alertType == "out"){
        ifExpStr = "!" + ifExpStr
    }
    expressionRule.setIfExpression(new Expression(ifExpStr))
    // interpolate severity so the Then expression contains a numeric literal
    expressionRule.setThenExpression(new Expression("CreateAlarm(\"$alertMessage\", $severity)"))
    expressionRule.setEnabled(true)
    expressionRule.setConsecutive(false)
    expressionRule.store()
    expressionRule
}

Then the rule associations must be created to apply the rule to a model or device.

/* Sample call
findOrCreateRuleAssociations(CONTEXT, myModel, expressionRule, "EXPRESSION_RULE", "MODEL")
Where expressionRule is the rule created in the above example
*/
def findOrCreateRuleAssociations(Context CONTEXT, Object entity, Object rule, String ruleType, String entityType){
    // ruleType indicates whether this is an expression rule
    ruleType = ruleType ?: "EXPRESSION_RULE"
    entityType = entityType ?: "DEVICE_INCLUDE"
    RuleAssociationFinder ruleAssociationFinder = new RuleAssociationFinder(CONTEXT)
    ruleAssociationFinder.setRuleId(rule.id.value)
    ruleAssociationFinder.setRuleType(RuleType.valueOf(ruleType))
    ruleAssociationFinder.setEntityId(entity.id.value)
    ruleAssociationFinder.setEntityType(EntityType.valueOf(entityType))
    def ruleAssociations = ruleAssociationFinder.findAll()
    if (!ruleAssociations || ruleAssociations?.size() == 0){
        def ruleAssociation = new RuleAssociation(CONTEXT)
        ruleAssociation.entityId = entity.id.value
        ruleAssociation.entityType = EntityType.valueOf(entityType)
        ruleAssociation.ruleType = RuleType.valueOf(ruleType)
        ruleAssociation.setRuleId(rule.id.value)
        ruleAssociation.store()
        ruleAssociations = [ruleAssociation]
    }
    return ruleAssociations
}

The rule will now be triggered when any device of the applied model sends a mobile location within the geofence, which in turn will create an alarm.
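Before moving on, it may help to see the WKT construction in isolation. The sketch below is a minimal, standalone Groovy helper (not part of the Axeda SDK; the method name buildPolygonWkt and the sample points are illustrative assumptions) that turns a list of lng/lat points into the closed POLYGON string that addGeofence builds above:

// minimal sketch: build a closed WKT POLYGON string from a list of points
// (pure Groovy, no SDK dependencies; names here are illustrative only)
def buildPolygonWkt(List points) {
    def coords = points.collect { "${it.lng} ${it.lat}" }
    // WKT requires the ring to be closed, so repeat the first point at the end
    coords << coords[0]
    return "POLYGON ((" + coords.join(",") + "))"
}

def wkt = buildPolygonWkt([
    [lng: -71.082118, lat: 42.383892],
    [lng: -70.867198, lat: 42.540923],
    [lng: -71.203654, lat: 42.495374]
])
println wkt
// POLYGON ((-71.082118 42.383892,-70.867198 42.540923,-71.203654 42.495374,-71.082118 42.383892))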
Here is a custom object with the complete geofence functionality:

import com.axeda.drm.sdk.Context
import com.axeda.drm.sdk.geofence.Geofence
import com.axeda.drm.sdk.geofence.GeofenceFinder
import com.axeda.drm.sdk.rules.engine.Expression
import com.axeda.drm.sdk.rules.engine.ExpressionRule
import com.axeda.drm.sdk.rules.engine.ExpressionRuleFinder
import com.axeda.drm.sdk.rules.engine.RuleAssociation
import com.axeda.drm.sdk.rules.engine.RuleAssociationFinder
import com.axeda.drm.sdk.rules.engine.RuleType
import com.axeda.drm.sdk.common.EntityType
import com.axeda.drm.sdk.device.Model
import com.axeda.drm.sdk.device.ModelFinder

try {
    def Context CONTEXT = Context.getSDKContext()
    def model = findOrCreateModel(CONTEXT, "FooModel")
    def sampleCircle = [
        "name": "My Circle",
        "alert": "My Geofence Alert Text",
        "type": "circle",
        "alertType": "in",
        "radius": "65.76",
        "locs": [
            ["loc": [ "lat": "42.60970621339408", "lng": "-73.201904296875" ]]
        ]
    ]
    def samplePolygon = [
        "name": "My Polygon",
        "alert": "My Geofence Alert Text",
        "type": "polygon",
        "alertType": "out",
        "locs": [
            ["loc": [ "lng": -71.2604999542236, "lat": 42.3384903145478 ]],
            ["loc": [ "lng": -71.4218616485596, "lat": 42.3242772020001 ]],
            ["loc": [ "lng": -71.5585041046143, "lat": 42.2653600946699 ]],
            ["loc": [ "lng": -71.5413379669189, "lat": 42.1885837119108 ]],
            ["loc": [ "lng": -71.4719867706299, "lat": 42.1137514551207 ]],
            ["loc": [ "lng": -71.3737964630127, "lat": 42.0398506628541 ]],
            ["loc": [ "lng": -71.2508869171143, "lat": 42.0311807962068 ]],
            ["loc": [ "lng": -71.1355304718018, "lat": 42.2084223174036 ]],
            ["loc": [ "lng": -71.2604999542236, "lat": 42.3384903145478 ]]
        ]
    ]
    // find geofence if it exists
    def circle = findGeofenceByName(CONTEXT, sampleCircle.name)
    // create circular geofence
    if (!circle){
        circle = addGeofence(CONTEXT, sampleCircle)
    }
    // create rule for circular geofence
    def circleRule = createGeofenceExpressionRule(CONTEXT, circle.name, "${circle.name}__Rule",
                                                  sampleCircle.alertType, sampleCircle.alert, "SDK Generated Geofence Rule", 100)
    // apply rule to new Model
    findOrCreateRuleAssociations(CONTEXT, model, circleRule, "EXPRESSION_RULE", "MODEL")
    def polygon = findGeofenceByName(CONTEXT, samplePolygon.name)
    if (!polygon){
        polygon = addGeofence(CONTEXT, samplePolygon)
    }
    def polygonRule = createGeofenceExpressionRule(CONTEXT, polygon.name, "${polygon.name}__Rule",
                                                   samplePolygon.alertType, samplePolygon.alert, "SDK Generated Geofence Rule", 100)
    // apply rule to new Model
    findOrCreateRuleAssociations(CONTEXT, model, polygonRule, "EXPRESSION_RULE", "MODEL")
}
catch (Exception e) {
    logger.info(e.localizedMessage)
}
return true

def findGeofenceByName(CONTEXT, name){
    GeofenceFinder geofenceFinder = new GeofenceFinder(CONTEXT)
    geofenceFinder.setName(name)
    def geofence = geofenceFinder.find()
    geofence
}

def addGeofence(CONTEXT, map){
    Geofence myGeofence = new Geofence(CONTEXT)
    myGeofence.name = map.name
    if(map.type != "polygon" && map.type != "circle") {
        throw new Exception("Invalid type: need 'polygon' or 'circle', not '$map.type'")
    } else if(map.type == "polygon") {
        def geo = map.locs.loc.inject("POLYGON ((") { str, item ->
            def lng = item.lng
            def lat = item.lat
            str += "$lng $lat,"
            str
        }
        // the first location also has to be the last location
        myGeofence.geometry = geo + map.locs.loc[0].lng + " " + map.locs.loc[0].lat + "))"
        // Something like this is built:
        // POLYGON ((-71.082118 42.383892,-70.867198 42.540923,-71.203654 42.495374,-71.284678 42.349394,-71.163829 42.221382,-71.003154 42.266114,-71.082118 42.383892))
    } else if(map.type == "circle") {
        def lng = map.locs.loc[0].lng
        def lat = map.locs.loc[0].lat
        myGeofence.geometry = "POINT ($lng $lat)"
        // POINT (-71.082118 42.383892)
        myGeofence.buffer = map.radius.toDouble()
    }
    myGeofence.description = "ALERT:::$map.alertType:::$map.alert"
    try {
        myGeofence.store()
    } catch (e) {
        logger.info e.localizedMessage
        return null
    }
    myGeofence
}

def createGeofenceExpressionRule(com.axeda.drm.sdk.Context CONTEXT, String geofencename, String rulename,
                                 String alertType, String alertMessage, String ruledescription, int severity) {
    ExpressionRuleFinder erf = new ExpressionRuleFinder(CONTEXT)
    erf.setName(rulename)
    ExpressionRule expressionRule1 = erf.findOne()
    expressionRule1?.delete()
    def expressionRule = new ExpressionRule(CONTEXT)
    expressionRule.setName(rulename)
    expressionRule.setDescription(ruledescription)
    expressionRule.setTriggerName("MobileLocation")
    def ifExpStr = "InNamedGeofence(\"$geofencename\", Location.location)"
    if(alertType == "out"){
        ifExpStr = "!" + ifExpStr
    }
    expressionRule.setIfExpression(new Expression(ifExpStr))
    // interpolate severity so the Then expression contains a numeric literal
    expressionRule.setThenExpression(new Expression("CreateAlarm(\"$alertMessage\", $severity)"))
    expressionRule.setEnabled(true)
    expressionRule.setConsecutive(false)
    expressionRule.store()
    expressionRule
}

def findOrCreateRuleAssociations(Context CONTEXT, Object entity, Object rule, String ruleType, String entityType) {
    // ruleType indicates whether this is an expression rule
    ruleType = ruleType ?: "EXPRESSION_RULE"
    entityType = entityType ?: "DEVICE_INCLUDE"
    RuleAssociationFinder ruleAssociationFinder = new RuleAssociationFinder(CONTEXT)
    ruleAssociationFinder.setRuleId(rule.id.value)
    ruleAssociationFinder.setRuleType(RuleType.valueOf(ruleType))
    ruleAssociationFinder.setEntityId(entity.id.value)
    ruleAssociationFinder.setEntityType(EntityType.valueOf(entityType))
    def ruleAssociations = ruleAssociationFinder.findAll()
    if (!ruleAssociations || ruleAssociations?.size() == 0){
        def ruleAssociation = new RuleAssociation(CONTEXT)
        ruleAssociation.entityId = entity.id.value
        ruleAssociation.entityType = EntityType.valueOf(entityType)
        ruleAssociation.ruleType = RuleType.valueOf(ruleType)
        ruleAssociation.setRuleId(rule.id.value)
        ruleAssociation.store()
        ruleAssociations = [ruleAssociation]
    }
    return ruleAssociations
}

def findOrCreateModel(Context CONTEXT, String modelName) {
    ModelFinder modelFinder = new ModelFinder(CONTEXT)
    modelFinder.setName(modelName)
    def model = modelFinder.find()
    if (!model){
        model = new Model(CONTEXT, modelName);
        model.store();
    }
    return model
}

https://gist.github.com/axeda/6529288/raw/5ffca58c3c48256b81287d6a6f2d2db63cd5cd2b/AddGeofence.groovy
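As a closing note, the geofence description is used above as a lightweight carrier for the alert type and message ("ALERT:::<alertType>:::<message>"). The following standalone Groovy sketch (illustrative only; the helper name parseGeofenceDescription is not part of the SDK) shows how such a description could be split back into its parts, for example in another custom object that reads the geofence later:

// minimal sketch: unpack the "ALERT:::<alertType>:::<message>" description format used above
// (pure Groovy; the helper name and sample input are illustrative assumptions)
def parseGeofenceDescription(String description) {
    def parts = description.split(":::")
    if (parts.size() < 3 || parts[0] != "ALERT") {
        return null   // not in the expected format
    }
    return [alertType: parts[1], message: parts[2]]
}

def parsed = parseGeofenceDescription("ALERT:::in:::My Geofence Alert Text")
assert parsed.alertType == "in"
assert parsed.message == "My Geofence Alert Text"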
In a recent post, I gave an overview of the types of Building Blocks that are available with the ThingWorx Platform. As a reminder, Building Blocks are a collection of entities packaged together for modular software development. They are intended to be reusable, repeatable, and scalable, and they are the fastest way to either build your own solution or customize a pre-made PTC solution, like ThingWorx Digital Performance Management. There are four types of Building Blocks we will talk about for the development of IIoT applications and solutions on the ThingWorx platform: Connectors, Domain Models, Business Logic, and UI. In this post, we are going to do a deep dive on Connectors, which improve application performance and the transfer of data from disparate devices and systems.   What does a Connector look like in ThingWorx? All ThingWorx Building Blocks follow the same naming convention of CompanyName.BuildingBlockName, so any PTC-created Connectors will appear as PTC.Connector. Connectors in ThingWorx are external integrations that can come in through an industrial system, like an MES that could be connected to with ThingWorx Kepware, or business system, like a CRM that could be connected to via ThingWorx Flow or REST APIs. It could also be a connection to an external database. These are your data connections, so their structure will be somewhat dependent upon your database and assets.   What does a Connector look like in use in a PTC Solution? If we use the example of Digital Performance Management (DPM), one of the connectors we use is a Database Manager(ptc.DBConnection.Manager). It pulls information from the database that is being used from the implementation of DPM. If you think of Building Blocks like bricks, Connectors are the foundation. In this case, the Database Manager sits at the bottom layer of bricks to connect the asset data to the next layer of bricks (Domain Models, which I will cover in the next post) and allows you to pull any information you need.   How can you use a Connector in your solutions? As mentioned above, a Connector is the foundation building block for most solutions. It is what aggregates and transfers your solution-related data into the ThingWorx platform for use. The Connectors we currently have available on the ThingWorx platform will “talk” to your database and the other building blocks you use in your solutions, so for your own solutions, a Connector will be the entry point of your data into your solution.   How can you adapt a Connector for your own solutions? Because all PTC building blocks are built with JavaScript in the ThingWorx Mashup Builder, you can leverage existing Connectors on the ThingWorx platform and extend these same Connectors for your unique use case or build your own. You can view the code we used to create Connectors, so if they don’t pull data into your solution the way you want it to flow, you can override the Connector’s functions with your own capabilities.   The ThingWorx PM team is here to listen to your thoughts and feedback, so tell us: What questions do you have about Connectors and how they can improve your experience building solutions in the ThingWorx platform? Or, if you are waiting for the full deep dive into Building Blocks, keep an eye out for our next post on Domain Models, where we will cover the next “layer up” of the types of Building Blocks for use in ThingWorx.   Stay Connected, Rachel  
The Axeda Platform provides a few mechanisms for putting user-defined pages or UI modules into the dashboards, or for allowing end users to host AJAX-based applications from the same instance their data is retrieved from. This simple application illustrates the use of jQuery to call Scripto and return a JSON-formatted array of current data for an Axeda asset.

Prerequisites:
- First steps taken with Axeda Artisan
- Basic understanding of HTML, JavaScript and jQuery
- Axeda Platform v6.5 or greater (Axeda Customers and Partners)
- Artisan project attached to this article

Features:
- Authentication from a web app
- Use of the CurrentDataFinder API
- Scripto from jQuery

Files of Note (locations are from the root of the Artisan project):
- index.html – main HTML index page
  ..\artisan-starter-html\src\main\webapp\index.html
- app.js – JavaScript code to build the application and call Scripto
  ..\artisan-starter-html\src\main\webapp\scripts\app.js
- axeda.js – Axeda web services JavaScript code
  ..\artisan-starter-html\src\main\webapp\scripts\axeda.js
- DataItemsWithScripto.groovy – custom object on the Axeda Platform
  ..\artisan-starter-scripts\src\main\groovy\DataItemsWithScripto.groovy

Further Reading
Developing with Axeda Artisan
Extending the Axeda Platform UI - Custom Tabs and Modules
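For readers curious what the server side of such an app can look like, here is a minimal Scripto-style sketch in the spirit of DataItemsWithScripto.groovy (this is not the attached script itself; the parameter name assetSerial and the hard-coded values are illustrative assumptions). It shows the basic contract the jQuery app relies on: read the implicit parameters map, build a map of results, and return it as a JSON response. The real script would use the CurrentDataFinder API mentioned above to populate the data items.

import net.sf.json.JSONObject

// a minimal Scripto-style response: echo the requested serial number and a
// placeholder data item list as JSON (the real script would look these values
// up with the CurrentDataFinder API instead of hard-coding them)
def response = [:]
try {
    def serial = parameters.assetSerial      // "assetSerial" is an assumed parameter name
    response.assetSerial = serial
    response.dataItems = [
        [name: "temperature", value: 72.5],  // placeholder values for illustration
        [name: "pressure", value: 101.3]
    ]
}
catch (Exception e) {
    response = [faultcode: 'Groovy Exception', faultstring: e.message]
}
return ["Content-Type": "application/json", "Content": JSONObject.fromObject(response).toString(2)]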
ThingWorx DevOps with Azure: The Comprehensive DevOps Guide
Written by Tori Firewind, IoT EDC

As promised in a previous post, attached here is a comprehensive guide to DevOps in ThingWorx, including tutorials and instructions for creating a continuous integration, continuous deployment (CI/CD) process for application development. Also attached are updated scripts and entities, including an entire sample application for importing, exporting, and testing an application in ThingWorx. From Docker and GitHub to Azure DevOps and Solution Central, this guide has it all. Learn how to perform your role in the DevOps process, whether as an administrator or a developer; automate your deployments and testing; and create a more efficient process for publishing changes to production. A complete DevOps process like this really does facilitate faster and easier updates with fewer risks, fewer delays, and a better pathway to success.
One of the killer features of the Axeda Platform is the Axeda Console, a browser-based online portal where developers and business users alike can browse information in an out-of-the-box graphical user interface.  The Axeda Console is functional, re-brandable and extensible, and can easily form the foundation for a customized connected product experience. Let's take a tour of the Axeda Console and explore what it means to have a full-featured connected app right at the start of your development. What this tutorial covers This tutorial discusses the landscape of the online browser-based suite of tools accessible to Axeda customers.  It does not do a deep dive into each of the available applications, but rather serves as an introduction to the user interface. Sections of the Axeda Applications Console that are discussed: Landing Page (Home) User Preferences Asset Dashboard Axeda Help Note: This article features screenshots from Axeda 6.5 which is the current release as of July 1, 2013.  In prior versions the Axeda Applications Console has also been referred to as ServiceLink.  Stay tuned for Axeda 6.6! What can I do from here? From the landing page for the Axeda Console, you can access recent assets in your right sidebar or search for assets in the left sidebar.  Each of the links in the main Welcome text corresponds to a main tab. Troubleshoot, Monitor, and Service Assets - (Service tab) An Overview of the status of assets, filterable by a search on fields such as serial number, model, organization, etc. Access and Control Remote Assets - (Access tab) If you are familiar with Windows Remote Desktop, this will seem familiar.  This allows you to log into and control an asset as if you were typing from a physical keyboard directly into it without having to be on the same network or in the same location.  This is particularly useful when the asset is behind a firewall or other controlled network. Install and Deploy Software Updates - (Software tab) This tool provides the ability to create, view, configure, delete and deploy software packages (like a file that contains an update) to assets. View Usage Data and Asset Charts - (Usage tab) You can use the Axeda Usage application to track and analyze asset usage. Add New Assets, Organizations and Models - (Configuration tab) Find tools here for creating, updating and deleting domain objects. Administrator Users, Groups and Assets - (Administration tab) Manage users, groups, auditing, and system-setup tasks The remaining tabs that are not linked from the Home page are either custom tabs or less frequently used tabs (depending on use case). The custom tabs are examples of custom applications that are not distributed out of the box with an Axeda instance. Wireless - (custom tab) an integration with the Jasper API that allows the user to monitor SIMs activated in their assets Maintenance - track information about the operation of machines against service cycles Case - manage the resolutions of asset issues Report - (requires an additional license) provides a suite of standard reports, custom reports may also be added Dashboard - allows you to create a landing page that displays information that is interesting to you Simulator - (custom tab) an app that allows you to set data items, alarms, mobile locations, and geofences on an asset For more details on Custom Tabs and the Extended UI, please take a look at [Extending the UI - Custom Tabs and Modules]. 
(Coming soon) User Preferences Each user in an Axeda instance has a certain set of privileges and visibility, which determine what actions she can take and what information she can see.  A user also has control over certain aspects of her own use of the Axeda Console, which are configurable from the yellow Preferences link, located in the top right corner of the page. This opens up the User Preferences page. The User Preferences link allows you to set defaults for your user only.  From here you can change the following settings: User Attributes (email and password) Locale - Change the locale which also sets the display language Time Zone - Change the time zone as displayed in the Applications Console (note that this does NOT affect individual asset time zone.  Asset time zone is reported by the agent) Notification Styles - Specify which contact methods are appropriate for you and for what severity of triggered alert Default Application - Set which tab should open up when you log into the Axeda Console Items Per Page (Long Table) - For longer listings of items, how many rows should be displayed Items Per Page (Short Table) - For shorter listings of items, how many rows should be displayed Asset Dashboard As the asset is the center of the Axeda universe, so the Asset Dashboard could be considered the central feature of the Axeda Console.  You can open up the dashboard for any particular asset by clicking it in the Service tab or in the Recent Assets shortcuts. You can also add modules within the Asset Dashboard that are either a custom application or the output of an Extended UI Module type custom object.  From the Asset Dashboard you have an at-a-glance view of this asset's current data, alarms, uploaded files, and location to name a few. The Asset Dashboard is built for viewing information about the asset.  To perform create/read/update/delete functions on the asset, you will need to search using the Configuration tool instead. To view a list of models or any domain object available for configuration, click the drop down arrow next to the View sub-tab and select the object name. Once you have the list of models displayed, click the Preferences link on the model to access a Model Preferences Dashboard that allows you to configure the model image, the modules displayed, and other features of the Asset Dashboard. Axeda Help As part of learning more about the Axeda Console, make use of the documentation available to you by clicking the Help link in the top right corner of the page. This will open a pop-up which contains information about the page you have open.  It allows you to do a deep dive into any aspect of the Axeda Console, and includes search and a browsable index on Axeda topics. Make sure to research topics in the Help section while troubleshooting your assets and applications.
Hey Community Members – I have exciting news to share! Last week, PTC announced that ThingWorx was recognized as a frontrunner IIoT platform in four leading analyst research reports on the industrial software market.

In alphabetical order, the reports PTC/ThingWorx was named in were:
- ABI Research's Smart Manufacturing Platforms Competitive Ranking
- The Forrester Wave™: Industrial IoT Software Platforms, Q3 2021
- The Gartner® Magic Quadrant™ for Industrial IoT Platforms
- IDC MarketScape for Industrial IoT Platforms and Applications for Manufacturing

Not only is PTC the only company included in all four reports, but it also placed as a leader in all four, which hopefully makes you feel confident in your choice to use ThingWorx. The research criteria the analyst firms use to make selections are unique to each firm, but to us, all of them measure the value ThingWorx can bring to an industrial business.

What value does ThingWorx bring to you?

Stay connected,
Rachel

Gartner, Magic Quadrant for Industrial IoT Platforms, Alfonso Velosa, Ted Friedman, Katell Thielemann, Emil Berthelsen, Peter Havart-Simkin, Eric Goodness, Matthew Flatley, Lloyd Jones, Kevin Quinn, 18 October 2021. Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose. Gartner and Magic Quadrant are registered trademarks of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.
The long-awaited manufacturing solution,  ThingWorx Digital Performance Management (DPM), has arrived! Announced at PTC’s  Manufacturing Live event, DPM provides key use cases around overall equipment effectiveness and real-time performance monitoring, while delivering insights with analytics and automated bottleneck identification tools. DPM gives customers clear insight into where and what to fix to drive efficiencies. Composed of modular building blocks with a foundation on the ThingWorx platform, DPM is easily configurable and customizable for closed-loop problem solving that drives productivity.   Let’s take a deeper look into what DPM is and how you can implement it to ensure your investment in the ThingWorx platform and digital transformation delivers business impact.   Monitor in real-time with Production Dashboard The Production Dashboard allows for automated or manual data entry of reason codes with a simple interface for limited disruption. Rather than providing front-line workers with the typical, difficult to understand, percentage based KPIs, Production Dashboard standardizes all losses, so operators can proactively resolve issues during production. You can configure this dashboard to collect granular data and allow opportunities for continuous improvement in process tracking.     Focus with Bottleneck Analysis Bottleneck analysis automatically identifies bottlenecks across the manufacturing process. Identifying bottlenecks can help you prioritize the highest-impact opportunities in the business process. This saves you having to manually identify and analyze potential issues and frees you up to work on other projects.   Prioritize with Time Loss Waterfall and Analyze with Loss Reason Pareto Monitor and analyze performance with data visualizations that help you pinpoint root causes and suggest improvements. Bring together your siloed data into one system and create a standard for how performance is measured and reported.   Improve with Action Tracker Action Tracker allows you to create continuous improvement actions tied to real production losses, to ensure your actions are having positive impact and return.  Create a digital workspace for teams to collaborate and learn from each other. Plus, you can track the improvements delivered through each individual action, so you can drill down and create transparency of work being done.   Confirm value delivered with Scorecard (Available in Later Versions) With the Scorecard feature, you can leverage a standard scorecard for enterprise wide KPIs to summarize factory health and compare similar factory operations. Use the scorecard to create trending and reporting that can be filtered based on the audience you are presenting data to. The scorecard gives you a consistent view that measures performance across the network and drives visibility and accountability across your business.   How do you plan to leverage DPM or the building blocks that make it up? We’d love to hear your thoughts on the first manufacturing solution from PTC.   Stay connected, Rachel   