IoT Tips

The IoT Building Block Design Framework
By Victoria Firewind and Ward Bowman, Sr. Director of the IoT EDC

Building Block Overview
As detailed quite extensively on its own designated Help Center page, building blocks are the future of scalable and maintainable IoT architecture. They are a way to organize development and customization of ThingWorx solutions into modular, well-defined components or packages. Each building block serves a specific purpose and exists as independently as possible from other modules. Some blocks facilitate external data integration, some provide user interface features, and others handle the manipulation or management of different kinds of equipment. There are really no limits to how custom a ThingWorx solution can be, and customizations are often a major hurdle to a well-oiled dev ops pipeline. It is therefore crucial to use a standard framework, to ensure that each piece of customization is insular, easy to replace, and much more maintainable. This is the foundation of good IoT application design.

PTC's Building Block Framework
At PTC, building blocks are broken down in two ways: by category and by type. The category of a building block refers primarily to its visibility and availability for use by the greater ThingWorx community. We use our own framework here at PTC, so our solution offerings are based around solution-specific building blocks, which we provide as complete SaaS solutions. These solution-specific building blocks combine into single solutions like DPM: offerings which require a license, but which can then easily be deployed to any number of systems. PTC solutions provide many complex, out-of-the-box (OOTB) features, like the Production Dashboard of DPM and the OperationKPI building block.

Anyone doing any sort of development with ThingWorx, however, still has access to many other building blocks, included in the domain-specific building block category. These are the pieces used to build the solution building blocks like DPM, and they can be used to build other, more custom solutions as well. Take the OperationKPI block, which can vary greatly from one customer to the next in how it is calculated or analyzed. The pieces used to build the version that ships with DPM are right there as well, for instance Shift and ReasonCode. They are designed to have minimal dependencies themselves, meant to be used as dependencies by custom blocks which implement custom versions of the module logic found within the PTC solution-specific blocks. Then there are the common blocks, and these are used even more widely for things like user management and database connectivity.

The type of a building block refers to where that building block falls in the greater design. A UI building block consumes data and displays it, so that is the View of a classic MVC design pattern. However, sometimes user input is needed, so perhaps that UI building block will depend on another block. This other block could have a utility entity that fuels the UI logic, and the benefit of having that be a separate block is the ease with which it can be swapped out from one ThingWorx instance to the next.

Let's say there are regional differences in how people make use of technologies. If the differences are largely driven by what data is available from physical devices at a particular site, then perhaps these differences require different services to process user input or queries on the same UI mashups used across every site. In this case, having the UI block stand alone is smart, because then the Model and Controller blocks can be abstracted out and instantiated differently at different sites.

The two types that largely define the Model and Controller of the classic MVC are called Abstract and Implementation building blocks, and their purposes are intertwined but distinct: abstract building blocks expose the common API endpoints which allow the implementation blocks to vary so readily. The implementation blocks are then those which actually alter the data model, which is what happens when the InitializeSolution service is called. In that service, they are given everything they need to generate their data tables and data constraints, so that they are ready to be used once the devices are connected.

Occasionally, UI differences will also be necessary when factories or regions have different ways of doing things. When this happens, even mashups can be abstracted using abstract building blocks. Modular mashups which can be combined into larger ones can be provided in an abstract building block, and these can be used to form custom mashups from common parts across many sites, in different implementation blocks all based on the same abstract one.

The last type of building block is the standard building block, which is the most generic. This one is not intended to be overridden, often serving as the combination of other building blocks into final solutions. It is also the most basic combination of components necessary to adhere to the building block design framework and interface with the shared deployment infrastructure. The necessary components include a project entity, which contains all of the entities for the building block; an entry point, which contains the metadata (name, description, version, list of dependencies, etc.) and overrides the service for automatic model generation (DeployComponent); and the building block manager. (In the accompanying diagram, dashed arrows indicate that one entity implements another, and solid arrows indicate extension.) The manager is the primary service layer for the building block, and it makes most of the implementation decisions. It also has all of the information required to configure menus and select which contained mashups to use when combining modular mashups into larger views. The manager consists of three entities: a thing template with the properties and configuration tables, a thing which makes use of these, and a thing shape which defines all of the services (which can often be overridden) that the manager thing may make use of.

Most building blocks will also contain security entities which handle user permissions, like groups to which users can be added. Anyone requiring access to the contents of a particular building block then simply needs to be added to the right groups later on (see the example at the end of this section). These, together with model logic entities like thing shapes, can be combined with organizational security for visibility controls on individual equipment, allowing some users to see some machines and not others, and so on.

All of the Managers must be registered in the DefaultGlobalManagerConfiguration table on the PTC.BaseManager thing, or in the ManagerConfiguration table of any entity that implements the PTC.Base.ConfigManagement_TS thing shape. Naming conventions should also be kept standard across blocks, and details on those best practices can be found in the Building Block Help Center.
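To make the security piece concrete, here is a minimal sketch of adding a user to one of a building block's groups over the ThingWorx REST API, in the same cURL style used for the Solution Central examples later on this page. The group name (PTC.SampleBlock.Users), user name, server URL, and application key are hypothetical, and the request assumes the standard /Thingworx/Groups/<group>/Services/<service> endpoint pattern and the built-in AddMember service on group entities; confirm the service and its parameter names against your own ThingWorx version before relying on this.

$RESULT=$(curl -s --location --request POST "https://<your_twx_url>/Thingworx/Groups/PTC.SampleBlock.Users/Services/AddMember" `
 --header "appKey: $APP_KEY" `
 --header 'Content-Type: application/json' `
 --header 'Accept: application/json' `
 -d '{\"name\": \"vfirewind\", \"type\": \"User\"}' )

As with the Solution Central examples below, whether the escape characters around the double quotes are needed depends on your shell. The same endpoint pattern applies to other entity services, which is how this kind of user management can be scripted as part of a deployment pipeline.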
Extending the Data Model Using Building Blocks
Customizing the data model using building blocks is fairly straightforward, but there are some design considerations to be made. One way to customize the data model is to add custom properties to existing data model entities. For instance, let's say you need an additional field to keep track of the location of a job order, so you add a City field to the PTC.JobOrder.JobOrder_AP data shape (a sketch of doing this over the REST API appears at the end of this article). However, doing this also requires substantial modification to the PTC.JobOrder.Manager thing, if the new data shape field is meant to interface with the database.

So this method is not as straightforward as it may seem, but it is the easiest solution in an "object-oriented" design pattern, one where the data varies very little from site to site, but the logic that handles that data does vary. In this design pattern, there will be a thing shape for each implementing building block that handles the same data shape the abstract building block uses, just in different ways. A common reason this happens is that the UI components vary from site to site or region to region, and the logic that powers them must also vary, but the data source is relatively consistent.

Another way to extend the data model is to add custom data shapes and custom managers to go with them. This requires you to create a new custom building block, one which extends the base entry point as needed for the type of block. This method may seem more complicated than just extending a data shape, but it is also easier to do programmatically: all you have to do is create a series of entities (thing templates, thing shapes, etc.) which implement base entities. A fresh data shape can be created with complete deliberation for the use case, and then services which already exist can be overridden to handle the new data shape instead. It is a cleaner, more automation-friendly approach.

However, the database will still need to be updated, and this time CRUD services are necessary as well: those for creating and managing instances of the data shape. So, this option is not really less effort in the short term. In the long term, though, scripts that automate much of the process of building block generation can be used to quickly and easily allow for development of new modules in a more complex ThingWorx solution. The ideal is to use abstract blocks defined by the nature of the data shape which each of their implementing blocks will use to hook the view into the data model.

This "data-driven" design pattern is one which involves creating a brand-new block for each customization to the data model. The functionality of each of these blocks centers around what the data table and constraints must look like for that data store. The logic to handle the different data types should vary within each implementation block, but override the common abstract block interface, so that any data source can be plugged into the thing model and ThingWorx will know what to do with it.

Incoming requests can then contain whatever information they contain (as with MQTT), and the logic chosen to process that data is selected based on what information the ThingWorx solution receives with each message. Data shapes can be abstracted in this way, so that only a single subscription needs to exist on the data shape used in the abstract building block, and its logic knows how to call whatever functions are necessary based on whatever data it receives.
This allows for very maintainable API creation for both data ingestion and event processing.
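As a concrete illustration of the first approach above, here is a minimal sketch of adding the hypothetical City field to the job order data shape over the ThingWorx REST API, using the same cURL conventions as the Solution Central examples later on this page. It assumes the standard /Thingworx/DataShapes/<name>/Services/<service> endpoint pattern and the built-in AddFieldDefinition service on data shape entities; the server URL and application key are placeholders, the exact parameter names may differ slightly by ThingWorx version, and remember that the corresponding manager thing still needs to be updated before the new field will reach the database.

$RESULT=$(curl -s --location --request POST "https://<your_twx_url>/Thingworx/DataShapes/PTC.JobOrder.JobOrder_AP/Services/AddFieldDefinition" `
 --header "appKey: $APP_KEY" `
 --header 'Content-Type: application/json' `
 --header 'Accept: application/json' `
 -d '{\"name\": \"City\", \"type\": \"STRING\", \"description\": \"Location of the job order\"}' )

In a real pipeline, a request like this would live in the same scripts that create the rest of the custom building block's entities, so that the whole customization can be regenerated on a fresh instance.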
View full tip
Using the Solution Central API: Pitfalls to Avoid
by Victoria Firewind, IoT EDC

Introduction
The Solution Central API provides a new process for publishing ThingWorx solutions that are developed or modified outside of the ThingWorx Platform. For those building extensions, using third-party libraries, or who are simply more comfortable developing in an IDE external to ThingWorx, the SC API makes it simple to still utilize Solution Central for all solution management and deployment needs, according to ThingWorx dev ops best practices. This article hones in on some pitfalls that may arise while setting up the infrastructure to use the SC API, and assumes that there is already AD integration and an oauth token fetcher application configured for these requests.

cURL
One of the easiest ways to interface with the SC API is via cURL. In this way, publishing solutions to Solution Central really involves a series of cURL requests which can be scripted and automated as part of a mature dev ops process. In previous posts, the process of acquiring an oauth token is demonstrated. This oauth token is good for a few moments, for any number of requests, so the easiest thing to do is to request a token once before each step of the process.

1. GET info about a solution (shown) or all solutions (by leaving off everything after "solutions" in the URL)

$RESULT=$(curl -s -o test.zip --location --request GET "https://<your_sc_url>/sc/api/solutions/org.ptc:somethingoriginal12345:1.0.0/files/SampleTwxExtension.zip" `
 --header "Authorization: Bearer $ACCESS_TOKEN" `
 --header 'Content-Type: application/json' `
)

Shown here in the URL is the GAV ID (Group:Artifact_ID:Version). This is shown throughout the Swagger UI (found under Help within your Solution Central portal) as {ID}, and it includes the colons. To query for solutions, see the different parameter options available in the Swagger UI found under Help in the SC Portal (cURL syntax for providing such parameters is shown in the next example).

Potential Pitfall: if your solution is not published yet, then you can get the information about it, where it exists in the SC repo, and what files it contains, but none of the files will be downloadable until it is published. Any attempt to retrieve unpublished files will result in a 404.

2. Create a new solution using POST

$RESULT=$(curl -s --location --request POST "https://<your_sc_url>/sc/api/solutions" `
 --header "Authorization: Bearer $ACCESS_TOKEN" `
 --header 'Content-Type: application/json' `
 -d '"{\"groupId\": \"org.ptc\", \"artifactId\": \"somethingelseoriginal12345\", \"version\": \"1.0.0\", \"displayName\": \"SampleExtProject\", \"packageType\": \"thingworx-extension\", \"packageMetadata\": {}, \"targetPlatform\": \"ThingWorx\", \"targetPlatformMinVersion\": \"9.3.1\", \"description\": \"\", \"createdBy\": \"vfirewind\"}"'
)

It will depend on your Powershell or Bash settings whether or not the escape characters are needed for the double quotes, and exact syntax may vary. If you get a 201 response, this was successful.

Potential Pitfalls: the group ID and artifact ID syntax are very particular, and despite other sources, the artifact ID often cannot contain capital letters. The artifact ID also has to be unique among previously published solutions, unless those solutions are first deleted in the SC portal. The createdBy field does not need to be a valid ThingWorx username, and most of the parameters given here are required fields.
3. PUT the files into the project

$RESULT=$(curl -L -v --location --request PUT "https://<your_sc_url>/sc/api/solutions/org.ptc:somethingelseoriginal12345:1.0.0/files" `
 --header "Authorization: Bearer $ACCESS_TOKEN" `
 --header 'Accept: application/json' `
 --header 'x-sc-primary-file:true' `
 --header 'Content-MD5:08a0e49172859144cb61c57f0d844c93' `
 --header 'x-sc-filename:SampleTwxExtension.zip' `
 -d "@SampleTwxExtension.zip"
)
$RESULT=$(curl -L --location --request PUT "https://<your_sc_url>/sc/api/solutions/org.ptc:somethingelseoriginal12345:1.0.0/files" `
 --header "Authorization: Bearer $ACCESS_TOKEN" `
 --header 'Accept: application/json' `
 --header 'Content-MD5:fa1269ea0d8c8723b5734305e48f7d46' `
 --header 'x-sc-filename:SampleTwxExtension.sha' `
 -d "@SampleTwxExtension.sha"
)

This is really TWO requests, because both the archive of source files and its hash have to be sent to Solution Central for verifying authenticity. In addition to the hash file being sent separately, the MD5 checksum of both the source file archive and the hash file has to be provided, as shown here with the header parameter "Content-MD5". This will be a unique hex string that represents the contents of the file, and it will be calculated by Azure as well to ensure the file contains what it should.

There are a few ways to calculate the MD5 checksums and the hash. Scripts can be created which use built-in Windows tools like certutil to run a few commands and manually save the hash string to a file:

certutil -hashfile SampleTwxExtension.zip MD5
certutil -hashfile SampleTwxExtension.zip SHA256
# By some means, save this SHA value to a file named SampleTwxExtension.sha
# (one way to automate this in Powershell is sketched at the end of this article)
certutil -hashfile SampleTwxExtension.sha MD5

Another way is to use Java to generate the SHA file and calculate the MD5 values:

import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileWriter;
import java.io.IOException;
import org.apache.commons.codec.digest.DigestUtils;

public class Main {
    private static String pathToProject = "C:\\Users\\vfirewind\\eclipse-workspace\\SampleTwxExtension\\build\\distributions";
    private static String fileName = "SampleTwxExtension";

    public static void main(String[] args) throws FileNotFoundException {
        String zip_filename = pathToProject + "\\" + fileName + ".zip";
        String sha_filename = pathToProject + "\\" + fileName + ".sha";
        File zip_file = new File(zip_filename);

        try (FileInputStream zip_is = new FileInputStream(zip_file)) {
            // Calculate the MD5 of the zip file (a hex string, as Solution Central expects)
            String md5_zip = DigestUtils.md5Hex(zip_is);
            System.out.println("------------------------------------");
            System.out.println("Zip file MD5: " + md5_zip);
            System.out.println("------------------------------------");
        } catch (IOException e) {
            System.out.println("[ERROR] Could not calculate MD5 on zip file named: " + zip_filename + "; " + e.getMessage());
            e.printStackTrace();
        }

        // A fresh input stream is needed here: the MD5 calculation above already consumed the first one
        try (FileInputStream zip_is = new FileInputStream(zip_file)) {
            // Calculate the SHA-256 hash of the zip and write it to a file
            String sha = DigestUtils.sha256Hex(zip_is);
            File sha_output = new File(sha_filename);
            FileWriter fout = new FileWriter(sha_output);
            fout.write(sha);
            fout.close();
            System.out.println("[INFO] SHA: " + sha + "; written to file: " + fileName + ".sha");

            // Now calculate MD5 on the hash file
            FileInputStream sha_is = new FileInputStream(sha_output);
            String md5_sha = DigestUtils.md5Hex(sha_is);
            sha_is.close();
            System.out.println("------------------------------------");
            System.out.println("SHA file MD5: " + md5_sha);
            System.out.println("------------------------------------");
        } catch (IOException e) {
            System.out.println("[ERROR] Could not calculate MD5 on file named: " + sha_filename + "; " + e.getMessage());
            e.printStackTrace();
        }
    }
}

This method requires the use of a third-party library, Apache Commons Codec. Be sure to add this not just to the class path for the Java project, but, if building as part of a ThingWorx extension, to the build.gradle file as well:

repositories {
    mavenCentral()
}
dependencies {
    compile fileTree(dir:'twx-lib', include:'*.jar')
    compile fileTree(dir:'lib', include:'*.jar')
    compile 'commons-codec:commons-codec:1.15'
}

Potential Pitfalls: Solution Central will only accept MD5 values provided in hex, and not base64. The file paths are not shown here, as the archive file and associated hash file were in the same folder as the cURL scripts. The @ syntax in Powershell is very particular: it refers to reading the contents of the file and uploading them to SC, not just the string value that is the name of the file. Every time the source files are rebuilt, the MD5 and SHA values need to be recalculated, which is why scripting this process is recommended.

4. Do another PUT request to publish the project

$RESULT=$(curl -L --location --request PUT "https://<your_sc_url>/sc/api/solutions/org.ptc:somethingelseoriginal12345:1.0.0/publish" `
 --header "Authorization: Bearer $ACCESS_TOKEN" `
 --header 'Accept: application/json' `
 --header 'Content-Type: application/json' `
 -d '"{\"publishedBy\": \"vfirewind\"}"'
)

The publishedBy parameter is necessary here, but it does not have to be a valid ThingWorx user for the request to work. If this request is successful, then the solution will show up as published in the SC Portal.

Other Pitfalls
Remember that for this process to work, the extensions within the source file archive must contain certain identifiers. The group ID, artifact ID, and version have to be consistent across a couple of files in each extension: the metadata.xml file for the extension and the project.xml file which specifies which projects the extensions belong to within ThingWorx. If any of this information is incorrect, the final PUT to publish the solution will fail.

Example Metadata File:

<?xml version="1.0" encoding="UTF-8"?>
<Entities>
  <ExtensionPackages>
    <ExtensionPackage artifactId="somethingoriginal12345" dependsOn="" description="" groupId="org.ptc" haCompatible="false" minimumThingWorxVersion="9.3.0" name="SampleTwxExtension" packageVersion="1.0.0" vendor="">
      <JarResources>
        <FileResource description="" file="sampletwxextension.jar" type="JAR"></FileResource>
      </JarResources>
    </ExtensionPackage>
  </ExtensionPackages>
  <ThingPackages>
    <ThingPackage className="SampleTT" description="" name="SampleTTPackage"></ThingPackage>
  </ThingPackages>
  <ThingTemplates>
    <ThingTemplate aspect.isEditableExtensionObject="false" description="" name="SampleTT" thingPackage="SampleTTPackage"></ThingTemplate>
  </ThingTemplates>
</Entities>

Example Projects XML File:

<?xml version="1.0" encoding="UTF-8"?>
<Entities>
  <Projects>
    <Project artifactId="somethingoriginal12345" dependsOn="{&quot;extensions&quot;:&quot;&quot;,&quot;projects&quot;:&quot;&quot;}" description="" documentationContent="" groupId="org.ptc" homeMashup="" minPlatformVersion="" name="SampleExtProject" packageVersion="1.0.0" projectName="SampleExtProject" publishResult="" state="DRAFT" tags="">
    </Project>
  </Projects>
</Entities>

Another large issue that may come up is that requests often fail with a 500 error and without any message. There are often more details in the server logs, which can be reviewed internally by PTC if a support case is opened.
Common causes of 500 errors include missing required parameter values, invalid characters in the parameter strings, and using an API URL which is not the correct endpoint for the type of request. Another common cause of 500 errors is providing MD5 or hash values that are not valid at all (a mismatch between otherwise valid values shows up differently, as described below).

Another common error is the 400 error, which happens if any of the code that SC uses to parse the request breaks. A 400 error will also occur if the files are not being opened or uploaded correctly due to some issue with the @ syntax (mentioned above). Another common 400 error is a mismatch between the provided MD5 value for the zip or SHA file and the one calculated by Azure ("message: Md5Mismatch"), which can indicate that there has been some corruption in the content of the upload, or simply that the MD5 values aren't being calculated correctly. Note that cURL will often report a file as 100% uploaded even when the upload is incomplete, so watch for errors in the console or for an uploaded file size that is smaller than expected.

Conclusion
Debugging with cURL can be a challenge. Note that adding "-v" to a cURL command provides additional information, such as the number of bytes in each request and a reprint of the parameters to ensure they were read correctly. Even so, it isn't always possible for SC to indicate what the real cause of an issue is. There are many things that can go wrong in this process, but when it goes right, it goes very right. The SC API can be entirely scripted and automated, allowing for seamless inclusion of externally-developed tools into a mature dev ops process.
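Since the MD5 and SHA-256 values must be recalculated on every rebuild, it helps to script that step. Below is a minimal Powershell sketch, assuming the archive sits in the working directory and is named SampleTwxExtension.zip as in the examples above; it uses the built-in Get-FileHash cmdlet rather than certutil, and the file names are placeholders for whatever your build actually produces.

# Compute the MD5 of the extension archive (Get-FileHash returns uppercase hex, lowercased here to match certutil output)
$zipMd5 = (Get-FileHash -Path .\SampleTwxExtension.zip -Algorithm MD5).Hash.ToLower()

# Compute the SHA-256 of the archive and write it to the .sha file with no trailing newline
$sha = (Get-FileHash -Path .\SampleTwxExtension.zip -Algorithm SHA256).Hash.ToLower()
Set-Content -Path .\SampleTwxExtension.sha -Value $sha -NoNewline

# Compute the MD5 of the .sha file itself, which is needed for the second PUT request
$shaMd5 = (Get-FileHash -Path .\SampleTwxExtension.sha -Algorithm MD5).Hash.ToLower()

Write-Output "Zip MD5: $zipMd5"
Write-Output "SHA file MD5: $shaMd5"

The two MD5 strings can then be substituted into the Content-MD5 headers of the PUT requests in step 3, either manually or by the same script that issues the cURL calls.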
View full tip
Solution Central and Azure Active Directory
Written by: Tori Firewind, IoT EDC

As we've said in a previous post, Solution Central (SC) is a crucial part of any mature dev ops pipeline. In its latest version (3.1.0), it manages custom solutions even more easily thanks to the SC API. However, this comes with a few requirements that can be a little tricky.

One of the more complex configuration pieces for using this new API involves a cloud-hosted Active Directory (AD) application within Azure AD. In order to make use of the new API, your organization's users must exist on an Azure AD tenant that is separate from the PTC tenant. Most PTC customers are placed on this PTC-owned tenant by default, so additional configuration may be required to set up the AD instance within Azure before the SC API can be used. The type of tenant has to be Azure, of course, as that is what PTC uses: a multi-tenancy Azure infrastructure for all authentication.

In this usage, "tenant" is a cloud-hosting, infrastructure term which essentially refers to an Azure VM hosting an Active Directory server, as well as managing the many AD components that would otherwise require a lot of oversight. Users within that AD server are grouped so that only those solutions published by their own organizations are visible within Solution Central. In this way, the term "tenant" can also refer to a partition of data of some kind; a Solution Central tenant includes the users and the solutions, and is something like a sub-tenant of the larger PTC infrastructure. PTC uses a global Solution Central tenant for deployment of its own solutions, like DPM (Digital Performance Management), which is therefore available to every user in the PTC Azure AD tenant or in a tenant which has been connected using the tutorial below.

There are good reasons to want your own Azure AD integrated into Solution Central that do not involve using the SC API. For one thing, it allows for direct control over the users that have access; otherwise, a ticket to PTC support is necessary every time a user needs access granted or removed. The tutorial below is a great reference for anyone using Solution Central, with steps 1-4 offering an easy guide for integrating your own Azure AD and simplifying your user management.

To use the SC API, however, there are additional steps required to create an application for retrieving an oauth token. This application needs the "solution-publisher" role, or else bad request/forbidden errors will pop up. This role is automatically available in your Azure AD tenant once it is linked to the PTC AD tenant, but it does have to be manually assigned to the application, whose sole function is to request the access token for authenticating requests against the SC server.

The Azure application you need to create is essentially a plugin with permission to publish solutions against the SC API. It functions a little like a "login" in database terminology, authenticating to an endpoint without being tied to a user or any other kind of identity. This application must be able to request an access token successfully and return it to the scripts which call upon it in order to perform the project creation and publication requests. The tutorial below steps you through how to create this application and request all of your solutions via the SC API.
Tutorial: Create an Azure AD Application to Access SC via the API
1. Create a new tenant in Azure AD for users who will have access to the SC API.
2. Create a ticket with PTC to provide the tenant ID and begin the onboarding process.
3. Once that process completes, you will receive an email with a custom link to log in to the Solution Central portal for your organization, where only your solutions exist.
4. Users will then need to be added as "Custom Global Administrators" on the SC enterprise application within Azure Portal to grant them access to log in to Solution Central in a browser.
5. In order to use the SC API, however, more work needs to be done; an application to request an oauth token for requests must be created in Azure Portal:
   - Navigate to "App Registrations" from the Azure AD page in Azure Portal and then click "New registration".
   - Enter the application name (ours is called "gradle-plugin-tokenfetcher" in the examples shown here), and select the "single tenant" radio button.
   - Enter a URL for the redirect URI and, for "Select a platform", select "Web".
   - Click "Register", and wait for the application created notification.
6. Now we need to add permissions to see Solution Central:
   - Open the app from under "App registrations" and select the "API permissions" tab.
   - Click "Add a permission" and, in the pop-up window that appears, select the "APIs my organization uses" tab, which should have PTC Solution Central listed if this tenant has been linked to the PTC tenant per step 2 above.
   - Select "Application Permissions" and then check the "solution-publisher" role.
   - Click "Add permissions". The application now has access to Solution Central as a publisher, able to send solution publications over the API and not just via the Platform interface.
7. The application is ready now, so we can create a request to test that it has access to SC. Using Windows Powershell or something similar, create a script to make a couple of cURL requests, first against Azure AD and then against the Solution Central API (attached as well, and sketched below this list):
   - The tenant ID and the directory ID are the same value (listed on the tenant under "Overview"), and the client ID and application ID are the same as well (listed on the application), so don't be confused by terminology.
   - The client app secret is the "Value" provided by the system when a new client secret is created under "Certificates & Secrets" (remember to copy this as soon as it is made and before clicking away, as after that it will no longer be visible).
   - The Solution Central app ID can be found under "App registrations", listed within the details for the "solution-publisher" role.
   - This access token request must be made before every request to the SC API, so this secret should be kept in some kind of password management tool (or a global environment variable in GitLab) so that it isn't found anywhere in the source code.
   - If the script pings the SC API successfully, then a list of solutions will be returned.
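For reference, here is a minimal sketch of such a script. It assumes the Azure AD v2.0 client credentials flow and that the scope string is the Solution Central application ID URI with /.default appended; the tenant ID, client ID, client secret, and SC URL are placeholders, and the exact scope value should be confirmed against what your Azure AD tenant and SC portal actually expose.

# 1. Request an access token from Azure AD using the client credentials flow
$RESPONSE=$(curl -s --location --request POST "https://login.microsoftonline.com/<your_tenant_id>/oauth2/v2.0/token" `
 --header 'Content-Type: application/x-www-form-urlencoded' `
 --data-urlencode "grant_type=client_credentials" `
 --data-urlencode "client_id=<your_client_app_id>" `
 --data-urlencode "client_secret=<your_client_app_secret>" `
 --data-urlencode "scope=<solution_central_app_id_uri>/.default" )

# Pull the token out of the JSON response
$ACCESS_TOKEN=($RESPONSE | ConvertFrom-Json).access_token

# 2. Use the token to list all solutions visible to your tenant
$RESULT=$(curl -s --location --request GET "https://<your_sc_url>/sc/api/solutions" `
 --header "Authorization: Bearer $ACCESS_TOKEN" `
 --header 'Accept: application/json' )
$RESULT

If the token request fails, double-check the secret value and the scope; if the token request succeeds but the solutions request returns a forbidden error, verify that the "solution-publisher" role was actually granted to the application.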
View full tip
MachNation Podcast Replay: Enterprise-Specific Implementation Testing
A podcast featuring Mike Jasperson, VP of the IoT EDC

MachNation, a company exclusively dedicated to testing and benchmarking Internet of Things (IoT) platforms, end-to-end solutions, and services, has conducted a recent podcast series featuring our very own Mike Jasperson, Vice President of the IoT Enterprise Deployment Center here at PTC. Performance IoT is a podcast series that brings together experts who make IoT performance testing and high-resiliency IoT part of their IoT journey. Mike Jasperson's episode is the fifth in the series, titled "Enterprise-Specific Implementation Testing". Enjoy!
View full tip
Introduction to Digital Performance Management (DPM)
Written by: Tori Firewind, IoT EDC

"Digital Performance Management (DPM) is a closed-loop, problem solving solution that helps manufacturers identify, prioritize, and solve their biggest loss challenges, resulting in reduced cost, increased revenue, and improved service levels." – DPM Help Center

What is DPM?
Digital Performance Management (DPM) is an application which improves factory efficiency across a variety of areas, namely "the four P's" of Digital Transformation: products, processes, places, and people. Each performance issue in a factory can be mapped to at least one of these improvement categories in a new strategy for Continuous Improvement (CI) pioneered by PTC.

Figure 1 – Each performance issue in a factory can be mapped to at least one of 4 fundamental improvement categories: products, processes, places, and people. PTC's new, industry-leading strategy for continuous improvement (CI) in factories is a "best practice" approach, taking the collective knowledge of many customers to form a focused, prescriptive path for success. [1]

[1] Closing the Loop Across Products, Processes, People, and Places, Manufacturing Leadership Journal

At PTC, CI in factories is driven by a "best practice" approach, with years of experience in manufacturing solutions combining with the collective knowledge of the many diverse use cases PTC has encountered, to generate a focused, prescriptive path for improvement in any individual factory.

Figure 2 – DPM is a closed loop for continuous improvement, a strategy built around industry-standard best practices and years of experience.

PTC is also defining new industry standards for OEE analysis by using time as a currency within DPM. This standardization technique makes impact assessment more intuitive and allows for direct comparison of metrics (see the Help Center for details on how each metric is calculated).

DPM creates a closed loop for CI, from the monitoring phase, performed both automatically and through manual operator input, to the prioritization and analysis phases performed by plant managers. DPM helps plant managers by tracking metrics of factory performance that often go overlooked by other systems. With Analytics, DPM can also do much of the analysis automatically, finding the root causes much more rapidly.

Figure 3 – All levels of the company are involved in solving the same problems effectively and efficiently with DPM. Instead of 100 people working on 100 different problems, some of which might not significantly improve OEE anyway, these same 100 people can tackle the top few problems one at a time, knocking out barriers to continuous improvement together.

Production supervisors who manage the entire production line then know which less-than-effective components on the line need help. They can quickly design and redesign solutions for specific production issues. Task management within DPM helps both the production manager and the maintenance engineer to complete the improvement process. Using other PTC tools like Creo and Vuforia makes the path to improvement even faster and easier, requiring less expert knowledge from the front-line workers and empowering every level of participation in the digital transformation process to make a direct, measurable impact on physical production.

How Does DPM Work?
DPM as an IoT application sits on top of the ThingWorx Foundation server, a platform for IoT development that is extensible and customizable. Manufacturers therefore find they rarely have to rip and replace existing systems and assets to reap the benefits of DPM, which gathers, aggregates, and stores production data (both automatically and through manual input on the Production Dashboard) so that it can be analyzed using time as a currency. DPM also manages the process of implementing improvements (using the Action Tracker) based on the collected data, and provides an easy way to confirm that the improvements make a real difference in the overall OEE (through the Performance Analysis Dashboard). Because the analysis occurs before and after the steps to improve are taken, manufacturers can rest assured that any resources invested in the improvements aren't spent in vain; DPM is a predictive and prescriptive analysis process.

DPM makes use of an external SQL Server to run queries against collected data and perform aggregation and analysis tasks in the background, in a separate server location from the thing model and ingestion database. This ensures that use cases involving real-time alerts and events, high-capacity ingestion, and others are still possible on the ThingWorx Foundation server.

The IoT EDC is focusing on DPM for a series of technical briefs which provide insight and expert-level recommendations regarding DPM usage and configuration. Stay tuned to the PTC Community for more updates to come.
View full tip
Unlocking the Power of Industrial Data
Presentation by Mike Jasperson, VP of the IoT Enterprise Deployment Center

This video presentation was given at the Digital Transformations in Manufacturing conference of 2021, hosted by Enterprise Digital. In this presentation, Mike Jasperson goes over the benefits of modernizing and consolidating access to time-stamped data that is ingested from equipment and sensors into a central location like ThingWorx. Moving away from monolithic, legacy, and siloed systems, and towards more agile solutions, has never been more critical in order to increase machine, operational, and business efficiencies while also opening up visibility into data systems and infrastructure deployments.

This presentation, given in partnership with InfluxData, helps customers extract value from IoT data systems, maximizing both the performance and the operational capabilities of their monitoring systems. To stay competitive in the IoT market, it's important to review the best practices for scaling and testing your industrial metrics solutions, as well as how to get the best performance out of your digital data solutions by using time-series-optimized databases like InfluxDB. Open-source technologies discussed here are a great way to create modular and upgradable solutions and accelerate IoT innovation.
View full tip
Hi All,

Our expert session, ThingWorx Flow Overview, is tomorrow! Click the link below to register, and remember to mention it to colleagues who might benefit from its content.

Expert Session: ThingWorx Flow Overview
Date and Time: December 10th, 8:00 EST
Duration: 1 hour
Host: Antony Moffa; Vinay Vaidya - ThingWorx IoT Platform Senior Directors
Registration Here: https://www.ptc.com/en/customer-success/expert-sessions-for-thingworx-foundation-webcasts

See you there!

Here are other upcoming sessions that might be of interest:

Upgrade to ThingWorx 9 – How to Plan / Evaluate Impacts
This session will highlight the key points you should evaluate to properly plan your upgrade to ThingWorx 9.
Register Here

Active-Active Clustering
This session will cover the main aspects of the High Availability Clustering feature launched with the ThingWorx 9.0 release.
Register Here
View full tip
Update to Connected Factories Benchmark

Scenario Three: One Kepware Server in ThingWorx 9.0
The goal of this scenario is to confirm the same performance in ThingWorx 9.0 as seen in scenario one, where one Kepware Server represented a single factory in version 8.5.

Matrix 1 - Slow (15s slow properties, 1s fast)
The lower-frequency tests performed the same in 9.0. Even the 10k ingestion test, which lies very close to the boundary for a single Kepware Server, passed with no errors.

Matrix 2 - Fast (5s slow properties, 500ms fast)
These showed similar results, but the 500 thing, 50-10 property test had data loss in 9.0. However, that write rate is much higher than PTC recommends for a single Kepware Server anyway (see the rough calculation at the end of this post).

Matrix 3 - Faster (1s slow properties, 200ms fast)
The fastest tests had similar results as well. The larger tests ran with more success with two Kepware Servers (data not shown here).

Conclusions
ThingWorx 9.0 is similarly capable of ingesting data using Kepware Server. A single instance can still achieve up to 10k writes per second (wps). Future scenarios will now make use of ThingWorx 9.0.

Download the updated draft here!
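To put the write rates in perspective, here is a rough back-of-the-envelope calculation in Powershell. It assumes the "500 thing, 50-10 property" label means 500 things, each with 50 slow properties and 10 fast properties; that reading of the label is an assumption for illustration, not something stated in the benchmark itself.

# Matrix 2 rates: slow properties written every 5 s, fast properties every 0.5 s
$things = 500
$slowProps = 50
$fastProps = 10
$wps = $things * ($slowProps / 5 + $fastProps / 0.5)
Write-Output "Approximate write rate: $wps wps"   # roughly 15,000 wps, well above the ~10k recommended for one Kepware Server

Under those assumptions, the data loss in that test is unsurprising, since the configuration pushes a single Kepware Server well past the recommended ingestion rate.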
View full tip
Leveraging Dell and VMware for Asset Monitoring in Connected Factories

As an extension of the Connected Factory Reference Benchmark performed on Microsoft Azure, PTC partnered with Dell Technologies in producing this document, a baseline which illustrates the effectiveness of ThingWorx and Kepware when combined with Dell and VMware technologies to create solutions for on-premises and hybrid Connected Factory implementations. Please join us in thanking Bhagyashree Angadi, Brian Anzaldua, Todd Edmunds, Mike Hayes, and the Dell Customer Solution Center team in Limerick, Ireland for working with the IoT Enterprise Deployment Center on this benchmark!

This benchmark is of a very similar design to a previous publication, but was designed specifically with Dell Technologies in mind. In a Dell/VMware architecture, the close proximity of Kepware Server and ThingWorx Foundation provides ideal conditions for network throughput between these components. Combined with the ability to easily monitor and resize virtual machines as your business needs evolve, these hardware configurations can be very effective in on-premises or hybrid deployment scenarios.
View full tip
ThingWorx and Azure IoT Hub Benchmark This Azure IoT Hub Reference Benchmark showcases the capabilities of ThingWorx and Microsoft Azure IoT Hub, a cloud-hosted solution backend that facilitates secure and reliable communication between an IoT application such as ThingWorx and the devices it manages. By making use of this third party tool, remote monitoring with ThingWorx has never been simpler.   In this benchmark, PTC verifies the reliability and scalability of ingesting data through the Azure IoT Hub into the Azure IoT Hub Connector(s) and ThingWorx Foundation. The preliminary version of this document focuses primarily on how the Azure IoT Hub’s capabilities modify and/or enhance the data ingestion and device management capabilities of ThingWorx.   Find the benchmark document attached here, and stay tuned for more reference benchmarks to come!
View full tip
ThingWorx 8.5 Architecture Deployment Guide The ThingWorx 8.5 Architecture Deployment Guide has recently been updated with a few bug fixes and semantic corrections. Note that older guides are still available as well. Look forward to the 9.0 Deployment Guide coming soon.   Happy developing!
View full tip