IoT Tips

Predicting time to failure (TTF) or remaining useful life (RUL) is a common need in the IIoT world. We are looking here at some ways to implement it, using one of the publicly available NASA datasets that simulates turbofan engine degradation (https://c3.nasa.gov/dashlink/resources/139/). The original dataset has 26 features:

Column 1 – asset id
Column 2 – cycle/time of sensor data collection
Columns 3-5 – operational settings
Columns 6-26 – sensor measurements

In the training dataset the sensor measurements end when the failure occurs.

Data Collection

Since the prediction model is based on historic data, data collection is a critical point. In some cases the data will already have been collected in the past and you need to make the best of it; see the Data Preparation chapter below. If you are still collecting data, a few points are worth keeping in mind (some may or may not apply depending on the type of data to be collected):

- More frequent (higher-frequency) collection is usually better, especially for electronic measurements.
- Where one or more specific sensor values are known to impact the TTF, it is good to take measurements at different values of this sensor, each time running until failure, without artificially modifying the values mid-run. For example, for a light bulb with a normal working voltage of 1.5V, take measurements at, say, 1V, 1.5V, 2V, 3V and 4V, but each time run until the failure. Do not start at 1.5V and switch to 4V after an hour; this would compromise what the model can learn.
- More variation is better, as it helps the prediction model generalize. In the same voltage example it is best to collect data at 1V, 1.5V, 2V, 3V and 4V rather than just at 1.5V, the normal running condition. This also depends on the use case: if we know for sure that the voltage will always be between 1.45V and 1.55V, then we could focus data collection on that range only.
- Once the failure is reached, stop collecting data. We are not interested in what happens after the failure, and collecting data past the failure will also lower the prediction model's accuracy.
- Each failure run should be a separate cycle in the dataset. In other words, from a metadata standpoint, each failure run should be represented by a different ENTITY_ID.

TTF Business Need

Before going into data preparation and model creation we need to understand what information is important in terms of TTF prediction for our business need. There are several ways to frame the TTF, for example:

- Exact time value when failure might occur. This is probably the most challenging to predict, and one should consider whether it is really necessary. Do we need to know that a failure might occur in 12 minutes as opposed to 14 minutes? Very often knowing that the time to failure is less than X minutes is what matters, not the exact time, so the following options are often more appropriate.
- Threshold. For some applications, knowing that a critical threshold has been reached is all that is needed. In this case a Boolean goal, for example lessThan30min or healthy with yes/no values, can be used. This is usually much easier to predict than the exact value above.
- Range. For other applications we may need a bit more insight and try to predict ranges, for example lessThan30min, 30to60min, 60to90min and moreThan90min. In this case we define an ordinal goal. The caveat is that ThingWorx Analytics Builder does not currently support ordinal goals, though ThingWorx Analytics Server does; it only means that the model creation needs to be done through the API. This is the option we will take with the NASA dataset.

The picture below shows the 3 different types of TTF listed above.

Data Preparation

General feature engineering

Data preparation is always a very important step in any machine learning work: it is important to present the data in the way best suited for the algorithms to give the best results. There are many practices that can be applied, but they are beyond the scope of this post. The Feature Engineering post gives a starting point, and many resources are available on the Internet, though the help of a data scientist may be necessary. As an example, a few features in the original NASA dataset have a constant value; they are therefore unlikely to impact the prediction and will be removed. This frees computational resources and prevents confusing the model.

Sensor data resampling

The data sampling across the different sensors should be uniform. In a real-world scenario sensor data may be collected at different time intervals, so data transformation/extrapolation should be done so that all sensor values are at the same frequency in the uploaded dataset.

TTF feature

Since we want to predict the time to failure, we need a column in the dataset that represents this value for the data we have. In a real-world scenario we obviously cannot measure the time to failure, but we usually have sensor data up to the point of failure, from which we can derive the TTF values. This is the case in the NASA dataset: the last cycle corresponds to the time when the failure occurred. We can therefore derive a new TTF feature in this dataset. It starts at 0 for the last cycle, when failure occurred, and is incremented by 1 back to the very first measurement, as shown below.

Once this TTF column is defined, we may need to transform it further depending on the path chosen for TTF prediction, as described in the TTF Business Need chapter. For the NASA dataset we choose a range TTF with values of more100, 50to100, 10to50 and less10 to represent the number of cycles remaining until the predicted failure. This is the information we need to predict in order to plan a suitable maintenance action. Our transformed TTF column looks as below (a pandas sketch of this derivation follows the metadata example).

Once the csv data is ready, we need to create the json file that represents the metadata. For a range TTF this is defined as an ordinal goal, as below (see the attachment for the full metadata json file):

{
    "fieldName": "TTF",
    "values": ["less10",
               "10to50",
               "50to100",
               "more100"],
    "range": null,
    "dataType": "STRING",
    "opType": "ORDINAL",
    "timeSamplingInterval": null,
    "isStatic": false
}
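Below is a minimal sketch, in Python with pandas, of how the TTF derivation and range binning described above could be done. It assumes the attached csv has no header row; the column names and the exact boundary conditions of the ranges are illustrative assumptions, not part of the original dataset.

import pandas as pd

# Illustrative column names for the 26 NASA features described above:
# asset id, cycle, 3 operational settings, 21 sensor measurements.
cols = (["asset_id", "cycle"]
        + ["setting%d" % i for i in range(1, 4)]
        + ["sensor%d" % i for i in range(1, 22)])
df = pd.read_csv("train_FD001-original.csv", header=None, names=cols)

# TTF = cycles remaining before the failure that ends each asset's run:
# 0 at the last recorded cycle, counting up toward the first measurement.
df["TTF"] = df.groupby("asset_id")["cycle"].transform("max") - df["cycle"]

# Bin the raw TTF into the ordinal ranges used for the goal
# (boundary conditions assumed for illustration).
def to_range(ttf):
    if ttf < 10:
        return "less10"
    if ttf < 50:
        return "10to50"
    if ttf < 100:
        return "50to100"
    return "more100"

df["TTF"] = df["TTF"].apply(to_range)
df.to_csv("train_FD001-TTF-transformed.csv", index=False)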
Model Creation

Once the data is ready it can be uploaded into ThingWorx Analytics and work on the prediction model can start. ThingWorx Analytics is designed to make machine learning easy and accessible to non-data scientists, so this step is easier than with many other solutions. However, some trial and error is needed to refine the model, which may also involve reworking the dataset.

Important considerations:

- When dealing with time-to-failure prediction, it is usually necessary to unset Use Goal History in the Advanced parameters of the model creation wizard. If using the API, the equivalent is to set the virtualSensor parameter to true.
- Tests with the Redundancy Filter enabled should be done, as this has been shown to give better results.
- On a first attempt it is a good idea to keep Lookback Size at 0. This tells ThingWorx Analytics to find the best lookback size among 2, 4, 8 and 16. If you need a different value or know that a different value is better suited, you can change it accordingly. However, bear in mind the following: a larger lookback size leads to less data being available for training, since more data is needed to predict one goal, and a larger lookback does lead to a significant memory increase – see https://www.ptc.com/en/support/article?n=CS294545

In the case of the NASA dataset, since we are using an ordinal goal, we need to execute the job through the API. For a more productive way this can be done through mashups and services (see How to work with ordinal and categorical data in ThingWorx Analytics ? for an example). As a test, the TrainingThing.CreateJob service can be called from the Composer directly, as shown below.

Once the model is created we can check some performance statistics in ThingWorx Analytics Builder or, in the case of an ordinal goal, via the ValidationThing.RetrieveResults service. The most relevant metric for an ordinal goal is the confusion matrix. Here is the confusion matrix I get.

Another validation is to compute some PVA (Predicted vs Actual) results for validation data. ThingWorx Analytics runs validation automatically when using ThingWorx Analytics Builder and presents useful performance metrics and graphs. In the case of an ordinal goal we still get this automatic validation run (hence the confusion matrix above), but no PVA graph or data is available. This can be done manually if some data is kept aside and not passed to the training microservice. Once the model is completed, we can score this validation dataset (using PredictionThing.RealTimeScore or BatchScore for an ordinal goal, or the Builder UI for other goals) and compare the prediction results with the actual values; here is one example. A hedged sketch of such a manual PVA check follows the resource list below.

Depending on the business case this model can be deemed acceptable or may need rework, such as changing the range values, changing learner parameters, or modifying the dataset. There is certainly a fair amount of experimentation before arriving at the optimal model, but hopefully this post gives some good starting points.

Resources:

Original dataset attached as train_FD001-original.csv
Transformed dataset attached as train_FD001-TTF-transformed.csv
json metadata file for the transformed dataset attached as train_FD001-ttford.json
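As a companion to the manual PVA check described above, here is a hedged pandas sketch comparing scored results against held-out actuals. The file names, and the assumption that both files share row order and a TTF column, are illustrative.

import pandas as pd

actual = pd.read_csv("validation_actuals.csv")   # held-out rows with the true TTF range
scored = pd.read_csv("validation_scored.csv")    # the same rows after scoring

pva = pd.DataFrame({"actual": actual["TTF"], "predicted": scored["TTF"]})

# A confusion matrix over the ordinal ranges, similar in spirit to the one
# returned by ValidationThing.RetrieveResults.
print(pd.crosstab(pva["actual"], pva["predicted"]))

# Simple overall accuracy across the ranges.
print("accuracy:", (pva["actual"] == pva["predicted"]).mean())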
Applicable releases: 8.3 and above

Description: In this video we cover the basic concepts of SVM (support vector machines) and walk through some scenarios for a better understanding. A minimal code illustration follows below.
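As a companion to the video, here is a minimal, hedged scikit-learn sketch of the basic SVM concept: fit a maximum-margin classifier on labeled data and score it on unseen points. The dataset and parameter choices are illustrative only, not taken from the video.

from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = datasets.load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# kernel and C control the shape of the decision boundary and the
# trade-off between margin width and misclassification.
clf = SVC(kernel="rbf", C=1.0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))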
Meet Anthony. Anthony came to PTC from a large industrial company as a user of Axeda, a leading device connectivity company that PTC acquired in 2014. With a background in aerospace engineering and experience in a variety of industries including life safety, healthcare, nuclear power, and oil and gas, Anthony has been working to create new value around innovation for customers transitioning to ThingWorx. When he’s not working on IIoT, he’s playing music on Cape Cod, photographing Hawaiian landscapes or bringing awesome inflatable chairs into the office.

You may recall from last week's post that Thing Presence is one of the top three features in 8.4 that you might not know about. This week, I spoke with Anthony for a deeper dive on Thing Presence. Check out how it went.

Kaya: What was the challenge users were facing that led us to create Thing Presence?
Anthony: Axeda customers transitioning to ThingWorx were struggling with the connectivity use case and kept asking, ‘why is my asset always offline?’ When we evaluated it, we discovered that it was tied to the difference between AlwaysOn and Axeda eMessage protocol architectures. IsConnected will always report false for polling and duty cycle devices.

The use case of Thing Presence is to know that the asset is reporting into the network and is ready to provide information (push) or be accessed to retrieve information or do remote desktop support (pull). This use case is relevant for any asset in ThingWorx that uses duty cycle.

In ThingWorx 8.4 (coming in early 2019), the new IsReporting state will inform the user when a polling device is communicating on a regular basis. If it is, then IsReporting will be true. The IsReporting state resolves the discrepancy wherein devices that are on duty cycle appear disconnected due to the IsConnected state reporting false.

New "IsReporting" state improves visibility of an asset's communication state

Kaya: How exactly does Thing Presence work?
Anthony: You can think of it in terms of having teenagers. You tell them they need to check in with you on a regular basis through text message. If a text is missed, all of a sudden you take action.

Now, imagine the teenager is a device. If a device was supposed to check in every five minutes and it misses one poll, I want to flag that as a problem. The challenge with that from a service perspective is that sometimes your service personnel will go out and work on a device and may need to take it offline for a bit of time; we need to factor that in. We certainly don’t want to deploy someone or try to fix something when a service technician is already there.

You might decide, ‘my average service visit is an hour, so, if I miss a couple of pings, I’m okay; but, if I’m offline for more than an hour, then I’d like to know about it because I’d like to take action.’ Thing Presence allows you to define that window.

Kaya: You’ve mentioned ‘duty cycle’ and ‘polling cycle.’ Can you explain these terms?
Anthony: Duty cycle and polling cycle are the same thing. It means that a device has a time by which it is expected to check in, and, provided that it checks in within that timeframe, all is good with the connection.

Connected services rely on a connection. As soon as the connection is broken, I no longer have the ability to service the asset.

Kaya: Given everything we’ve discussed, where do you see Thing Presence headed?
Anthony: The next piece of the equation for us is to provide information on the health of the connection.
When you look at servicing a remote asset, you need to a) know that it is communicating, and b) know that the connection is healthy before you try anything. I wouldn’t want to try a software update if I am losing connection with my asset on a regular basis.   What do we mean by health? We mean: is the device checking in when it should be? If it’s not, is there a pattern to that connection, and are those patterns tied to applications? For example, is it only on during working hours? Does it turn off during holidays? If the device is in a school, does it turn off during summer maintenance work? This allows us to garner insights on how and when the equipment is being used, not just operating status. At the end of the day, does this mean I can apply analytics and AI tools to it? Absolutely. Is it the first place I would apply it? Probably not.   Kaya: In your mind, what is the next big thing coming in ThingWorx that you’re particularly excited about? Anthony: Mashup 2.0 and Asset Advisor 8.4. (Double mic drop.)   Kaya: That’s awesome. My last question is related to you. Can you tell me what your favorite aspect is about working at PTC? Anthony: The chaos. Very often it’s chaos that breeds innovation. What I mean by that is that, if you try to create something because you sit down and you say, ‘I am going to innovate,’ very often that is a failure because the majority of the time it’s the influence of a deadline, a customer need or an application at hand that makes the environment trying and sometimes hectic. But, it is in these challenging environments where you can be the most creative and innovative as an engineer.
Push updates: what are they? A push update is a mechanism supported by GetProperties that lets the server push a value to a client mashup. This allows you to see values update in your mashup in real time without needing the Refresh widget.

Another great way to use push updates is to propagate events that tie to specific content fetches. Say your mashup has three areas: KPIs, Alerts and Live Values. Using some server-side logic you can set up a 'tracker' Thing with properties that indicate that one of those areas has updated data. Bring these notifications into the mashup as property values using GetProperties; as the server pushes updates to the mashup runtime, you can map each one to a Validator or Expression widget (set to auto-evaluate), which in turn can run the necessary service to fetch the updated information for that specific area.
Since it is not available on support.ptc.com, please provide Creo View and Thing View widget documentation or a guide to viewing a 3D object in a custom mashup/UI outside of the ThingWorx Navigate app.

I am posting this request to the community, not to the ThingWorx developers portal, after discussing it with PTC technical support. Please refer to article CS291582.

LeighPTC, I have no option but to post to the community again, but this had happened: the post "Creo View and Thing View Widget Documentation to view 3D Object in custom mashup/UI" was moved by LeighPTC. Please don't move this request to the ThingWorx developers portal, so that PTC customers can have Creo View and Thing View widget documentation for viewing 3D objects in a custom mashup/UI, as it is not available.

Many thanks, Rahul
If you’ve ever wished you could see into the future, you’ve come to the right place! Put your reflective suits and sunglasses on to prepare for a glimpse into the future of our upcoming ThingWorx 8.4 release! Here are sneak peeks of the top three features you may not have known are coming in ThingWorx 8.4.

1. Thing Presence

While it sounds like something from an episode of Ghost Hunters, Thing Presence provides insight into the communication state of polling or duty cycle Things (those that check in and out on a periodic basis). We’re introducing a new IsReporting state, which would be set to true when polling assets check in on time and are considered “present in the network.” This helps to bridge the gap where the traditional ThingWorx IsConnected state reports offline and does not coincide with the actual network presence of the device.

Thing Presence: New "IsReporting" State

2. Data Helpers

You may not know what Data Helpers are, but if you’re a longstanding ThingWorx developer you likely know about Expression and Validator widgets. These widgets were handy because they allowed you to write conditional logic or input validation to drive behaviors in the UI, but were super frustrating to use. They took up lots of room on the visual layout canvas and only had a very little textbox to edit them. In the 8.4 release, we are happy to announce that these two widgets will no longer be placed on the layout canvas. Instead, they will have a dedicated editor to work from with plenty of room for code development, parameter configuration and event definition and binding. We’re wrapping all of this functionality into a nice little feature called…Data Helpers.

Data Helpers: Expression and Validator Widgets No Longer in Layout Canvas

3. ThingWorx Flow

In case Thing Presence and Data Helpers aren’t exciting enough, we’re also introducing ThingWorx Flow, a neat new feature set that dramatically speeds development of connected applications through integrations with business systems like Salesforce and SAP. Imagine that, when a certain alert triggers, you want to automatically create a Salesforce service ticket and even send an emergency text to an operator to prevent damage to a device. A large set of out-of-the-box system connectors (PTC Windchill, Office 365, Google Docs, Slack, Jira and more) are included, which you can drag and drop onto a canvas to visually define a workflow. In the example below, a ThingWorx-connected device element, a Salesforce “create case” action and a Twilio text message connector were dropped into the canvas to create a visual workflow.

Orchestration: Example Workflow that Creates Salesforce Cases and Alerts Operators

Thing Presence, Data Helpers & Flow—get ready for these and more in ThingWorx 8.4!

Stay tuned for future posts that go into greater depth about each of these features and comment your thoughts below!

Stay connected, Kaya
Exciting news! ThingWorx now has improved support for Docker containers to help you manage CI/CD, improve development efficiency in your organization and save costs. Check out these FAQs below and, as always, reach out to me if you have any additional questions.

Stay connected, Kaya

FAQs: ThingWorx Docker Containers

What are Docker containers?
From Docker.com: “a Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings”. Learn more here.

What's the difference between Docker containers and VMs?
Containers are an abstraction at the app layer that packages code and dependencies together, whereas Virtual Machines (VMs) are an abstraction of physical hardware turning one server into many servers. Here are some great discussions on it on Stack Overflow.

Containers vs. VMs

How can I build ThingWorx Docker images?
Check out the Building ThingWorx 8.3 Docker Images Guide or watch this video for instructions on how to build and test Docker containers.

How does PTC support building ThingWorx Docker images?
PTC provides the ability for customers and partners to build ThingWorx Docker images. A customer can download the Dockerfiles and scripts, packaged as a zip folder, from the PTC Software Downloads Portal under “ThingWorx Platform,” then “Release 8.3,” then “ThingWorx Dockerfiles.” (Please note that you must be logged in for the link to function properly.)

PTC Software Downloads Portal

The zip folder contains the Dockerfiles, a template jar, and scripts to fetch Tomcat and the ThingWorx WAR files via CLI. Java must be downloaded manually from the vendor's website. We also provide an instructional guide called “Build ThingWorx Docker Images,” available on the Reference Documents page on the Support Portal.

How are ThingWorx Docker images different from the usual delivery media of WAR files?
The WAR file delivery is typically accompanied by an installation guide that contains the manual steps for creating the VM or bare-metal environment. That guide includes instructions for the administrator to manually install the prerequisites, including Tomcat, Java, and ThingWorx platform settings files. To deploy and run the WAR file, the administrator follows the guide to create the runtime environment on an OS. In contrast, the Dockerfile build in this delivery automates the creation of a Docker image once supplied with the prerequisites.

Do you have any reference deployment and guidance?
Yes, you can refer to our blog post to learn how to deploy and run ThingWorx Docker containers on your existing Kubernetes environment.

Is there any recommendation on which Container Orchestrator as a Service (CaaS) a customer should run ThingWorx Foundation Docker container images on?
You can use Docker-Compose for testing, but it is generally not suggested for production deployment use cases. In a production environment, customers should use container orchestrators such as Kubernetes, OpenShift, Azure Kubernetes Service (AKS), or Amazon Elastic Container Service for Kubernetes (Amazon EKS) to deploy and manage ThingWorx Docker images.

What are the skill sets required?
Familiarity with the OS CLI and Docker tools is required to build the ThingWorx Docker images. Familiarity with Docker-Compose is needed to run the resulting containers and test the builds.
We don’t recommend Docker-Compose for production use, but when using it for local testing and demo purposes, users can rapidly install ThingWorx and get it up and running in minutes. We expect PTC partners and customers who want to run ThingWorx containerized instances in their production environment to possess the required skill sets within their DevOps team.

How is ThingWorx licensing handled with the Docker images?
By default, the container created from these Docker images starts up in a limited mode with no license supplied. You can configure your username and password for the PTC licensing portal to automatically load a license via environment variables passed into the container on startup. Additionally, you can mount a volume to the /ThingworxPlatform directory, which contains your license file, or to retrieve a license request. To keep your Host ID consistent, ensure that the /ThingworxStorage and /ThingworxPlatform directories are persisted and not removed with individual container restarts (a hedged Python sketch at the end of this post shows these volume mounts). More detailed instructions can be found in the build guide or in a Kubernetes blog post.

Is Docker free? What version of Docker does PTC support for ThingWorx?
Docker is open source and licensed under the Apache 2 license. Information on Docker licensing can be found here. The following Docker versions are required:

- Docker Community Edition (docker-ce): version 18.05.0-ce is recommended. To install the Docker Community Edition on your system, follow the instructions for your operating system on the Docker website.
- Docker Compose (docker-compose): version 1.17.1 is recommended. To install Docker Compose on your system, follow the instructions for your operating system on the Docker website.

What persistence providers are currently supported?
PTC provides the ability to build ThingWorx Foundation containers for the following supported persistence providers:

- H2
- Microsoft SQL Server
- PostgreSQL

Additional persistence providers will be added to the Docker build delivery as the ThingWorx Foundation Platform releases support for those new databases in future releases.

What are some of the security best practices?
For production use, customers are strongly advised to secure their Docker environments by following all the recommendations provided by Docker. Review and implement the best practices detailed at https://docs.docker.com/engine/security/security/.

Can we build Docker images for ThingWorx High Availability (HA) architecture?
Yes. ThingWorx Dockerfiles are provided for both the basic ThingWorx deployment architecture and the HA ThingWorx deployment architecture.

How easy is the rehosting and upgrading of ThingWorx releases on Docker with existing data?
In a Kubernetes environment, data is kept in a separate volume and can be attached to different containers. When one container dies, the data can be attached to a different container and the container should start without issue. For more information, please refer to the upgrade section of the Building ThingWorx 8.3 Docker Images Guide.

Is it okay to use Docker exec and access the bash shell to make config changes, or should I always rebuild the image and redeploy?
Although using Docker exec to gain access to the container internals is useful for testing and troubleshooting issues, any changes made will not be saved after a container is stopped. To configure a container's environment, variables are passed in during the start process.
This can be done with Docker start commands, using compose files with environment variables defined, or with helm charts. More detailed instructions can be found in the build guide or in this blog post.

What if there are issues? Should I call PTC Technical Support?
We are providing the scripts and reference documents solely to empower our community to build ThingWorx Docker images. We believe that customers using Docker in their production processes have the expertise to manage running Docker containers themselves. If there are any issues or questions regarding the build scripts provided in the PTC official downloads portal, customers can contact PTC Technical Support at 1-800-477-6435 or visit us online at http://support.ptc.com. PTC does not provide support for orchestration troubleshooting.

What can you share about future roadmap plans?
As we are enabling our customers and partners to build ThingWorx Foundation Platform Docker images, we plan to do the same for upcoming products such as ThingWorx Integration & Orchestration, ThingWorx Analytics, upcoming persistence providers such as InfluxDB, and many more. We also plan to provide additional reference architecture examples and use cases to help developers understand how to use Docker containers in their DevOps and production environments.

Where can I learn more about Docker containers and container orchestrators?
See these resources for additional information:

https://training.docker.com/
https://kubernetes.io/docs/tutorials/online-training/overview/
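As a complement to the FAQ above, here is a hedged sketch using the Docker SDK for Python (docker-py) to start a locally built ThingWorx image with the /ThingworxPlatform and /ThingworxStorage volumes persisted, as recommended in the licensing answer. The image tag and host paths are illustrative assumptions; you build your own image from the PTC-provided Dockerfiles.

import docker

client = docker.from_env()
container = client.containers.run(
    "thingworx/foundation-h2:8.3",   # hypothetical tag for a locally built image
    detach=True,
    name="twx-foundation",
    ports={"8080/tcp": 8080},
    # Persist these directories so the Host ID (and therefore the license)
    # stays consistent across container restarts.
    volumes={
        "/opt/twx/ThingworxPlatform": {"bind": "/ThingworxPlatform", "mode": "rw"},
        "/opt/twx/ThingworxStorage": {"bind": "/ThingworxStorage", "mode": "rw"},
    },
)
print(container.name, container.status)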
I have created a mashup which allows you to easily use and test the Prescriptions functionality in ThingWorx Analytics (TWA). This is where you choose one or more fields for optimization, and TWA tells you how to adjust those fields to get an optimal outcome.

The functionality is based on a public sample dataset for concrete mixtures; full details are included in the attached documentation.
If your mashups are feeling cramped and crowded, we’ve got some good news for you. Coming in our next release, ThingWorx will offer truly responsive layouts for your mashups, allowing you to adjust the size of your viewport or present a mashup on a variety of displays—laptop, tablet, phone, etc.—without compromising presentation or responsiveness. This new layout capability is based on Flexbox. Check out this article to learn more.

Here’s a sneak peek of our new layout editor. It’s still in development, but I hope you can start to see how you can use this in your UI development.

Sneak Peek: Responsive Mashup Layout

Notice the ability to split the layout into containers. Each container allows you to lay out widgets horizontally or vertically, distribute them from left or right, align them from top or bottom, justify space between them, etc. This gives you as the developer ultimate control to define the responsiveness of the mashup.

Like what you see? Any feedback? Let me know below!

Stay connected, Kaya
Several times in the past few months I was hit by a quick need to extract some data about Assets for a customer, and found myself continually hand-writing the code to do so. Rather than repeat myself any more, I figured I would share my work - maybe PTC customers can benefit from the same effort.

import static com.axeda.sdk.v2.dsl.Bridges.*
import com.axeda.drm.sdk.Context
import com.axeda.common.sdk.id.Identifier
import com.axeda.services.v2.*
import com.axeda.sdk.v2.exception.*

def retStr = "Device and Location Data\n"

// Page through all models, caching systemId -> modelNumber
def modellist = [:]
ModelCriteria mc = new ModelCriteria()
mc.modelNumber = "*"
tcount = 0
def mresults = modelBridge.find(mc)
while ( (mresults = modelBridge.find(mc)) != null && tcount < mresults.totalCount) {
    mresults.models.each { res ->
        modellist[res.systemId] = res.modelNumber
        tcount++
    }
    mc.pageNumber = mc.pageNumber + 1
}

// Page through all locations, caching systemId -> name
locationList = [:]
LocationCriteria lc = new LocationCriteria()
lc.name = "*"
tcount = 0
def lresults = locationBridge.find(lc)
while ( (lresults = locationBridge.find(lc)) != null && tcount < lresults.totalCount) {
    lresults.locations.each { res ->
        locationList[res.systemId] = res.name
        tcount++
    }
    lc.pageNumber = lc.pageNumber + 1
}

// Page through all assets, joining in the cached model and location data
AssetCriteria ac = new AssetCriteria()
ac.includeDetails = true
ac.name = "*"
tcount = 0
def results = assetBridge.find(ac)
while ( (results = assetBridge.find(ac)) != null && tcount < results.totalCount) {
    results.assets.each { res ->
        retStr += "ID: ${res.systemId} MN: ${res.model.systemId},${modellist[res.model.systemId]} SN: ${res.serialNumber} Name: ${res.name} : Location ${res.location.systemId},${locationList[res.location.systemId]}\n"
        tcount++
    }
    ac.pageNumber = ac.pageNumber + 1
}

return ["Content-Type": "application/text", "Content": retStr]

This will output data like so:

ID: 31342 MN: 14682,CKGW SN: Axeda-CK-Windows10VBox Name: Axeda-CK-Windows10VBox : Location 1,Foxboro
ID: 26248 MN: 14682,CKGW SN: CK-CKAMINSKI0L1 Name: CK-CKAMINSKI0L1 : Location 1,Foxboro
ID: 30082 MN: 14682,CKGW SN: CK-GW1 Name: CK-GW1 : Location 1,Foxboro
ID: 26247 MN: 14681,CKGW-ManagedModel1 SN: CK-MM01 Name: CK-MM01 : Location 1,Foxboro

This lets me compare the internal systemId of the Asset, the internal systemId of the Model, and the internal systemId of the Location of the device. It helped me isolate an issue with orphaned devices not being returned in a report - exposing some duplicate locations and devices that needed corrections.

You may find yourself needing to do similar things when building logic for Axeda, or eventually integrating or migrating to ThingWorx. Our v2 API bridges help "bridge" the gap.
How to score new data with ThingWorx Analytics?

The following is valid starting with ThingWorx Analytics (TWA) 8.3.0.

Overview

Once a training model has been created, one of the main objectives is to score new data to predict the value of the goal. ThingWorx Analytics can score new data in two ways: batch scoring and real-time scoring.

Batch scoring

Batch scoring is used when a large amount of data needs to be scored. To perform a batch scoring we usually follow steps similar to these:

1. Upload the historic data
2. Create a new model with this historic data
3. Upload new data – the data to be scored
4. Perform a prediction job to score this new data
5. Retrieve the prediction job result

Uploading the new data can be done in different ways. If using a large amount of data, it can be easier to upload the data via a csv file, in a similar way to the historic data; this is the approach used in ThingWorx Analytics Builder. If the amount of data is more limited, it can be sent in the body of the scoring request. The post Analytics: Prediction Methods Mashup shows a good example of how to do this using the PredictionThing.BatchScore service.

We focus below on ThingWorx Analytics Builder, that is, uploading new data via a csv file. In order to perform the scoring job only on the new data in step 4 above, we need to be able to filter the added data. If the dataset already has a suitable column/feature, such as a timestamp, we can use it to score only new data where timestamp > newdate, assuming all data is in chronological order. If the dataset has no such feature, we have to add one beforehand, when we first upload the historic data in step 1 above. We often use a new column/feature named record_purpose to this effect: initial data can take a value of training for this record_purpose feature, since it is used to create the initial model, and newly added data to be scored can take any value that identifies those rows only. It is important to note that this record_purpose feature needs to be set with the optType INFORMATIONAL so as not to be taken into account by the learning algorithms.

The video below shows those steps while using ThingWorx Analytics Builder.

Real-time scoring

Real-time scoring is better suited to small amounts of data. Real-time scoring can be done either via the Analytics Server PredictionThing RealTimeScore service or using the Analytics Manager framework. The posts How to work with ordinal and categorical data in ThingWorx Analytics and Analytics: Prediction Methods Mashup give examples of the use of the RealTimeScore service.

We concentrate below on Analytics Manager. The process involves the following steps:

In Analytics Manager:
1. Create an Analysis Provider that uses the AnalyticsServerConnector connector
2. Publish the model created in ThingWorx Analytics Builder to Analytics Manager
3. Enable the published model
4. Create an Analysis Event
5. Map the properties to the datashape fields
6. Enable the Event

In ThingWorx Composer:
1. The relevant properties of the Thing used in the Analysis Event are updated in some way
2. This triggers the analysis job to be executed
3. The scoring result is populated into the result property mapped in the Analysis Event

The Help Center has more details about this process, and the following video shows those steps. These articles can also be of interest for this topic:

How to use ThingPredictor in release 8.3 of ThingWorx Analytics Server?
Publish model from Analytics Builder into Analytics Manager using TW.AnalysisServices.AnalyticsServer.AnalyticsServerConnector
Creating Template For Thing, And Configure Analysis Event For Real-Time Scoring via Analytics Manager

Note that the AnalyticsServerConnector connector in release 8.3 replaces the ThingPredictor connector from previous releases. A hedged REST sketch for invoking the RealTimeScore service directly follows below.
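As a hedged illustration of calling the real-time scoring service over REST, the sketch below follows the standard /Thingworx/Things/<Thing>/Services/<Service> invocation pattern used elsewhere in these tips. The Thing name, application key and request body fields are placeholders: the exact PredictionThing name and parameter shape depend on your Analytics deployment, so treat this as a template rather than the definitive API.

import requests

# Placeholder Thing name; check your own Analytics Server installation.
url = ("http://localhost:8080/Thingworx/Things/"
       "<YourPredictionThing>/Services/RealTimeScore")

headers = {
    "appKey": "<your application key>",
    "Content-Type": "application/json",
    "Accept": "application/json",
}

# Hypothetical body: a model reference and a single row of feature values.
payload = {
    "modelUri": "<your model reference>",
    "data": [{"sensor2": 642.58, "sensor3": 1581.22}],
}

resp = requests.post(url, json=payload, headers=headers)
print(resp.status_code, resp.text)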
To help explain some of the different ways in which a prediction can be triggered from a ThingWorx Analytics model, I've built a mashup which allows you to easily trigger these types of prediction:

- API real-time prediction
- Analytics Manager: Event
- API batch prediction

For information on setting up this environment to use the mashup with some sample data, please see the attached instructions document: Prediction-Methods-Mashup.pdf. The referenced resource files can be found inside resources.zip.

For more information on prediction scoring please see this related post: How to score new data with ThingWorx Analytics 8.3.x
Disclaimer: Please note that, while the ThingWorx Git Backup Extension is a very useful tool, it is not a PTC product, nor is it supported by PTC.

Hi ThingWorx users,

Trying to manage your ThingWorx application artifacts in a CI process? Wondering who changed that line of code in your Thing service? Trying to see what your mashup looked like last release? Time to Git excited! Introducing the Git Backup Extension, an open-source tool available here to offer a stronger integration with the Git source repository. This Git feature can push or pull code and artifacts (like entities, data exports or extension dependencies) to your Git repository.

Here are some highlights of how this works within ThingWorx:

First, configure your Git repo to work with ThingWorx by creating a Git Backup Thing. Then, simply open your new Thing, navigate to the Configuration editor and enter information like your Git URL, your Git username and password, your repo and branch names, etc. See the example below.

Configuring your Git repo

With this configuration in place, you can now use the Home Mashup of this new Git Thing to browse the repository and pull down contents to your local ThingWorx instance. For new projects, you can also push new entities to the repo as you work on your application.

As you and your team are working, you’ll want to see the differences in the files you are editing and working on collaboratively. The Git extension makes this easy. Just as you can see diffs clearly delineated for a file with your Git client, you can see the same with this Git integration in ThingWorx. Similar to the git status command, the Git ThingWorx extension will show you the list of files you have changed that are available to push, as well as their diffs. See the example below.

Checking the Git status

While working, if you want to switch branches or pull down a new project, you can check out a specific version and see all commits available on that branch (see below).

Checking out a specific commit

Want to learn more or try it for yourself? Find the open-source Git Backup Extension here and check out the Git Backup Extension User Guide for guidance.

Stay connected, Kaya

P.S. What do you think? Comment your thoughts below!
Meet Neal. When Neal joined PTC five years ago, he immediately hit the ground running on IoT initiatives, working in multiple areas ranging from pre-sales to partner relations. Today, he is a Worldwide ThingWorx Center of Excellence Principal Lead at PTC, and his biggest focus is supporting the go-to-market for the Microsoft partnership.

I sat down with Neal recently to hear the details on exactly how Azure and ThingWorx can be used to develop world-class IIoT applications.

Kaya: Can you explain how Azure and ThingWorx work together?
Neal: Yes, Azure provides the cloud infrastructure that our customers need in order to deploy ThingWorx.

By having Azure as our preferred cloud platform, we’re able to specialize our R&D efforts into utilizing functionality that is available in Azure, rather than having to reinvent the wheel ourselves for each cloud platform in the attempt to remain cloud-agnostic. By leveraging a single, already quite powerful, cloud platform through Azure, we’re able to maximize our development efforts.

Kaya: What was the major problem that led to us investigating cloud options?
Neal: There were two issues that our users had. The first was that we often had complicated installs and setup procedures because we were genericizing the whole process, so the initial setup and run was complicated and expensive. For example, we were requiring them to set up additional VMs and components, and we were giving them generic instructions because we were being very agnostic, when they had already chosen, outside of us, to use one of the cloud platforms. We knew our customers wanted to move fast, so we had to make it easier and faster for them to see results.

The other issue we ran into with customers was confusion in the offerings. For ThingWorx, ingest is just one aspect of IoT. ThingWorx is particularly strong in areas like enabling mixed reality and augmented reality as well as application enablement. And, while we also have ingest capabilities, Microsoft is especially powerful when it comes to ingest and security. We decided that the smartest solution was to leverage Microsoft’s expertise in data ingestion to make ThingWorx even more powerful, so we made the Azure IoT Hub Connector. By partnering with Microsoft, we have a joint architecture where Microsoft provides key features and ThingWorx runs on top of them, getting you to market faster when developing applications.

Below is an example of a high-availability deployment of ThingWorx on Azure that utilizes ThingWorx Azure IoT Connectors to access an Azure IoT Hub.

High-Availability Deployment of ThingWorx on Azure

Kaya: Why did we partner with Azure? What specific benefits does Azure offer over other cloud services providers?
Neal: When we started to look at what our customers were using for cloud services, we noticed that a lot were using Microsoft. When we join forces with Microsoft, we have a much more holistic offering around digital transformation. With the partnership, PTC and Microsoft are able to leverage each partner’s respective strengths to provide even more powerful IIoT solutions. You have Windchill and Microsoft’s business application strategies; you have Vuforia and Microsoft’s mixed reality and augmented reality strategies; and you have ThingWorx on the Microsoft Azure cloud. Overall, you have a much more complete and powerful offering together.

Kaya: What is your favorite aspect about working at PTC?
Neal: The growth.
There’s been a lot of changes over the last five years that I’ve been here. PTC has been able to leverage its strengths and long-time experience in the CAD and PLM markets to enter and ultimately become a leader in the IIoT market, according to reports by research firms like Gartner and Forrester. In short, the growing IIoT market and PTC’s leadership in the industry.   Note to Readers: You’ve likely heard about our major strategic partnership with Microsoft to leverage our respective IIoT and cloud technologies to optimize the creation, deployment, management and overall use of your IIoT applications. If you haven’t heard about the partnership, check out the press release here. If you’re curious about more aspects of PTC’s partnership with Microsoft, check out this site devoted to sharing how ThingWorx and Azure are better together.
Just like the perfect sandwich, we know that you have specific preferences and requirements for your ThingWorx deployment. Whether you like to keep things simple with a classic grilled cheese or you like to spice things up with a more elaborate chipotle mayo BLT, we’ve got you covered. Our ThingWorx Deployment Architecture Guide explains what you’ll need to deploy ThingWorx in three different scenarios: production, enterprise and high-availability (pictured below).

Deployment Architecture for ThingWorx on Azure in High-Availability

We’ve recently published Version 1.1 of the ThingWorx Deployment Architecture Guide. In it, you can find updated deployment architecture diagrams to more distinctly show the data and application layers within a ThingWorx environment. Our team has also added a new section on what you’ll need to deploy ThingWorx on Microsoft Azure, PTC’s preferred cloud platform.

Check it out here or in the attachment section on the right.

Stay connected, Kaya
I got this excellent question and I thought it worthwhile to put my answer here as well.

There are two ways to segregate information between clients. By default we use a ‘software’ approach to segregation by using Organizations. This allows you to assign a client to an Organization/Organizational node and give those nodes ‘visibility’ to specific entities within the software. This means that, through software logic, users can only see what they’ve been given visibility to see. It mainly applies to all the client’s equipment (ThingWorx Things): they can only see their own equipment. It also applies to a specific set of their data, namely ValueStream data, because that can only be retrieved from the perspective of a ThingWorx Thing.

Blogs/Wikis/DataTables/Streams can store data across clients and do not utilize visibility on a row basis; in this case appropriate queries need to be created to allow retrieval of only a specific client's data. Here we use a security construct that utilizes what we call the ‘system user’ and wrapper routines that work off the CurrentUser context. This allows you to create enforced, validated queries against the data so that a user can only retrieve their own data.

Regarding the data itself, if you need to, you can provide actual ‘physical’ segregation by using multiple persistence providers and mapping Blogs/Wikis/DataTables/Streams/ValueStreams to different persistence providers. Persistence providers are basically additional database schemas (in one and the same database or in different databases) of the ThingWorx data storage schema, allowing you to completely separate where data is stored for each client. Note that just creating unique Blogs/Wikis/DataTables/Streams/ValueStreams per client and using visibility is still only a logical / software way of providing segregation, because the data will be stored in one and the same database schema, also known as the ThingWorx data persistence provider.
Hi, ThingWorx users! We’re excited to share that we have partnered with InfluxData to make time series analysis in ThingWorx even easier. InfluxDB is a database by InfluxData that is “built specifically for metrics and events that empower developers to build next-generation IIoT, analytics and monitoring applications.”

Why InfluxData?

Today, application developers expect robust querying capabilities, fast response times, easy ways of aggregating and pivoting on data and leveraging results for reporting and visualization. IT and devops administrators also expect cost-effective storage, easy ways of aging data through archiving and the ability to keep large amounts of historical data to satisfy analysis requirements.

That’s why we’ve partnered with InfluxData to make it easier for developers to store, analyze and act on IIoT data in real time. With InfluxData, developers can build connected IIoT applications more quickly while still incorporating the following capabilities:

- monitoring
- real-time alerting
- predictive maintenance
- streaming data
- anomaly and event detection
- visual and report-based analysis

We considered a few technologies for the purpose of improving ThingWorx time series analysis. Here are a few reasons we chose InfluxData:

- high compression of data, ~45x
- ability to handle millions of writes per second*
- ability to read thousands of rows in milliseconds*
- support for the standard time series functions of sampling, interpolation, time bucketing, aggregation, selector, transformation, predictor, etc.

* Query and write times will vary based on an individual ThingWorx application’s implementation with Influx. For example, as the number of concurrent reads increases, the query speed decreases. With the upcoming 8.4 release, the ThingWorx Sizing Guide will be updated to reflect representative performance for ThingWorx developers.

In addition to improved query capability, ThingWorx time series with Influx can now use less memory and CPU, giving your platform servers a bit of a break.

To start strategizing on how InfluxData can help you in your ThingWorx journey, here is a sneak preview of what it will look like.

New Features

The new ThingWorx Influx Persistence Provider will make query services like ValueStream Thing QueryPropertyHistory, Stream Thing QueryStreamData and QueryStreamEntries even better. Simply create a new instance of the persistence provider, configure it to use your InfluxDB instance, create a new value stream (or stream) from the new persistence provider, and you’ll be writing, reading and analyzing your time series data like never before.

We’re also introducing an enhancement to improve InfoTable support with time series data, including the ability to use a driver property. The driver property can be specified with the QueryPropertyHistoryWithDriverProperty service for time alignment and filling backward/forward in your stream queries.

Let’s walk through a driver property example where you have the properties of temperature, speed and battery level:

Timestamp | Temperature | Speed | Battery Level
1480589076592000000 | 80.003 | 5012 | 79
1480589077537000000 | 80.010 | 5011 | 79
1480589077550000000 | 80.010 | 5009 | 79
1480589077562000000 | 80.030 | 5011 | 78

Let’s say temperature is the key driver for your analysis. In other words, you are not concerned if speed or battery level changes—you only care about when temperature changes.
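Before looking at the filtered result, here is a hedged pandas analogue of the driver-property behaviour: keep a row only when the driver column (Temperature) actually changes. This is an illustration of the filtering semantics only, not how the service is implemented.

import pandas as pd

rows = pd.DataFrame({
    "Timestamp": [1480589076592000000, 1480589077537000000,
                  1480589077550000000, 1480589077562000000],
    "Temperature": [80.003, 80.010, 80.010, 80.030],
    "Speed": [5012, 5011, 5009, 5011],
    "BatteryLevel": [79, 79, 79, 78],
})

driver = "Temperature"
changed = rows[driver].ne(rows[driver].shift())  # True where the driver value moved
print(rows[changed])  # drops the third row, matching the result table below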
We can specify temperature to be the driver property for that particular time and only return stream values for temperature, speed and battery level when temperature changes. If speed or battery level changes (but temperature does not), the rows associated with those changes are not included in the result set, because neither speed nor battery level is a driver property. See the table below:

Timestamp | Temperature (driver) | Speed | Battery Level
1480589076592000000 | 80.003 | 5012 | 79
1480589077537000000 | 80.010 | 5011 | 79
1480589077562000000 | 80.030 | 5011 | 78

Note that only three of the four rows are returned above, because one entry in the original table did not have a change in temperature.

Stay Tuned

Look out for these time series improvements and InfluxData integration in our upcoming 8.4 release. I’ll be sure to keep you updated on additional new features coming in our next release (like Orchestration and Mashup Builder 2.0), so check back shortly or subscribe to this Community so we can stay in touch. As always, if you have any questions, just ask Kaya!

Stay connected, Kaya
GOBOT Framework

GOBOT is a framework written in the Go programming language, useful for connecting robotic components and a variety of hardware & IoT devices. The framework consists of:

- Robots -> virtual entities representing rovers, drones, sensors, etc.
- Adaptors -> allow connectivity to the hardware, e.g. connection to an Arduino is done using the Firmata Adaptor, which defines how to talk to it
- Drivers -> define specific functionality to support on specific hardware devices, e.g. buttons, sensors, etc.
- API -> provides a RESTful API to query Robot status

There are additional core features of the framework that I recommend having a look at, especially Events and Commands, which allow subscribing to / publishing events on the device; for more, refer to the doc. There's already a long list of platforms for which drivers and adaptors are available. For this blog I will be working with an Arduino + Garmin LidarLite v3. There are cheaper options available for distance measurement; however, if you are looking for a high-performance, high-precision optical distance measurement sensor, then this is it.

Prerequisites

- Install Go (see doc)
- Install Gort
- Install Gobot
- Wire up the LidarLite sensor with the Arduino

How to connect

For the current setup I have the Arduino connected to Ubuntu 16 over the serial port; see here if you are looking for a different platform. For Ubuntu you just need the following 3 commands to connect and upload Firmata as our Adaptor, preparing the Arduino for connectivity:

// Look for the connected serial devices
$ gort scan serial

// install avrdude to upload firmata to the Arduino
$ gort arduino install

// upload the firmata to the serial port found via the first scan command; mine was found at /dev/ttyACM0
$ gort arduino upload firmata /dev/ttyACM0

Reading sensor data

Since there is an available driver for the LidarLite, I will be using it in the Go code below, in a file called main.go, which connects and reads the sensor data. For connecting and reading the sensor data we need the adaptor, the driver & the task / work that the robot is supposed to perform.

Adaptor

firmataAdaptor := firmata.NewAdaptor("/dev/ttyACM0") // the port on which my Arduino is connected

Driver

As previously mentioned, Gobot provides several drivers, one of them being LidarLite. We use it like so:

d := i2c.NewLIDARLiteDriver(firmataAdaptor)

Work

Now that we have the adaptor & the driver set up, let's assign the work this robot needs to do, which is to read the distance:

work := func() {
    gobot.Every(1*time.Second, func() {
        dist, err := d.Distance()
        if err != nil {
            log.Fatalln("failed to get dist")
        }
        fmt.Println("Fetching the dist", dist, "cms")
    })
}

Notice the Every function provided by gobot, used to perform a certain action repeatedly as time lapses; here we are gathering the distance every second.

Note: the distance returned by the LidarLite sensor is in cm & the max range of the sensor is 40m.

Robot

Now we create the robot representing our entity, which in this case is simple - it's just the sensor itself:

lidarRobot := gobot.NewRobot("lidarBot",
    []gobot.Connection{firmataAdaptor},
    []gobot.Device{d},
    work)

This defines the virtual representation of the entity: the connection, the driver and the work this robot needs to do. Here's the complete code. Before running this package, make sure to build it, as you will likely have to execute the binary with sudo.
To build, simply navigate in the shell to the folder where main.go exists and execute

$ go build

This will create a runnable file with the package name; execute it with sudo if needed, like so:

$ sudo ./GarminLidarLite

If everything is done as required, the following output will appear, with sensor readings printed out every second:

2018/08/05 22:46:54 Initializing connections...
2018/08/05 22:46:54 Initializing connection Firmata-634725A2E59CBD50 ...
2018/08/05 22:46:54 Initializing devices...
2018/08/05 22:46:54 Initializing device LIDARLite-5D4F0034ECE4D0EB ...
2018/08/05 22:46:54 Robot lidarBot initialized.
2018/08/05 22:46:54 Starting Robot lidarBot ...
2018/08/05 22:46:54 Starting connections...
2018/08/05 22:46:54 Starting connection Firmata-634725A2E59CBD50 on port /dev/ttyACM0...
2018/08/05 22:46:58 Starting devices...
2018/08/05 22:46:58 Starting device LIDARLite-5D4F0034ECE4D0EB...
2018/08/05 22:46:58 Starting work...
Fetching the dist 166 cms
Fetching the dist 165 cms
Fetching the dist 165 cms

Here's the complete code for reference:

package main

import (
    "fmt"
    "log"
    "time"

    "gobot.io/x/gobot"
    "gobot.io/x/gobot/drivers/i2c"
    "gobot.io/x/gobot/platforms/firmata"
)

func main() {
    lidarLibTest()
}

// reading Garmin LidarLite data
func lidarLibTest() {
    firmataAdaptor := firmata.NewAdaptor("/dev/ttyACM0")
    d := i2c.NewLIDARLiteDriver(firmataAdaptor)
    work := func() {
        gobot.Every(1*time.Second, func() {
            dist, err := d.Distance()
            if err != nil {
                log.Fatalln("failed to get dist")
            }
            fmt.Println("Fetching the dist", dist, "cms")
        })
    }
    lidarRobot := gobot.NewRobot("lidarBot",
        []gobot.Connection{firmataAdaptor},
        []gobot.Device{d},
        work)
    lidarRobot.Start()
}
Prerequisites

- Install Go
- Install VSCode or another IDE for writing Go code, e.g. GoLand (commercial license required, 30-day trial)
- Install the Go extension for VSCode (if you are working with VSCode)

Content

1. Building a GET request
2. Building a PUT request
3. Building a POST request

Building a GET request

I'll be using the net/http package from Go to perform the GET request to the ThingWorx server, by importing it:

import (
    "net/http"
)

Next, we use NewRequest(), which takes a method, a URL & a body. Since I'm sending a GET request, my method will be GET, the URL is the ThingWorx server & there is no body, so we leave it nil:

url := myurl
req, _ := http.NewRequest("GET", url, nil)

We are ignoring the error that NewRequest returns, as it's already handled within NewRequest() for us. Use Header to add the request headers to be received by the ThingWorx server; note Header is of type map[string][]string (key : value pairs):

req.Header.Add("appKey", appkey) // passing the appKey from the ThingWorx server for authentication
req.Header.Add("Accept", "application/json") // accept json as the response format
req.Header.Add("Cache-Control", "no-cache") // do not use cache when fetching data

Now we invoke the DefaultClient to perform the request & handle the error:

res, err := http.DefaultClient.Do(req)
if err != nil {
    log.Println("Failed to get all entity list from the server", err)
}

We need to close the body once we have received it, and then we try to read the body returned by our request:

defer res.Body.Close()
body, _ := ioutil.ReadAll(res.Body)

Here's the complete function, accepting a URL & application key as strings. Notice I am starting the function name with a capital letter, which denotes an exported function; see Exported/Unexported Identifiers In Go for more:

func GetTwxServerEntities(myurl string, appkey string) {
    url := myurl
    req, _ := http.NewRequest("GET", url, nil)
    req.Header.Add("appKey", appkey)
    req.Header.Add("Accept", "application/json")
    req.Header.Add("Cache-Control", "no-cache")
    res, err := http.DefaultClient.Do(req)
    if err != nil {
        log.Println("Failed to get all entity list from the server", err)
    }
    defer res.Body.Close()
    body, _ := ioutil.ReadAll(res.Body)
    //fmt.Println(res)
    fmt.Println(string(body))
}

Building a PUT request

To send property updates to the ThingWorx server, I'll create a NewReader to read the string, which is JSON in this example:

payload := strings.NewReader("{\"Prop1\" : \"Demo 101\",\"Prop2\" : 1001}")

Like the GET request, NewRequest is invoked to perform the PUT request, like so:

req, _ := http.NewRequest("PUT", url, payload)

Adding the header details:

req.Header.Add("appKey", appkey)
req.Header.Add("Content-Type", "application/json")
req.Header.Add("Cache-Control", "no-cache")

Invoke the client to perform the request:

res, err := http.DefaultClient.Do(req)
if err != nil {
    log.Println("Failed to Put the value to the ThingWorx server", err)
}

Here's the complete function, which takes a URL and appKey and then updates 2 property values for a Thing on the ThingWorx server, e.g.
Building PUT Request

To send property updates to the ThingWorx server, I'll create a NewReader over the payload string, which is JSON in this example:

payload := strings.NewReader("{\"Prop1\" : \"Demo 101\",\"Prop2\" : 1001}")

Like the GET request, NewRequest() is invoked, this time to perform a PUT:

req, _ := http.NewRequest("PUT", url, payload)

Adding the header details:

req.Header.Add("appKey", appkey)
req.Header.Add("Content-Type", "application/json")
req.Header.Add("Cache-Control", "no-cache")

Invoke the client to perform the request:

res, err := http.DefaultClient.Do(req)
if err != nil {
    log.Println("Failed to Put the value to the ThingWorx server", err)
}

Here's the complete function, which takes a URL and an appKey and then updates two property values for a Thing on the ThingWorx server, e.g.

myurl = http://tw831psql:8080/Thingworx/Things/RESTThing/Properties/*

func TwxPut(myurl string, appkey string) {
    url := myurl
    payload := strings.NewReader("{\"Prop1\" : \"Demo 101\",\"Prop2\" : 1001}")
    req, _ := http.NewRequest("PUT", url, payload)
    req.Header.Add("appKey", appkey)
    req.Header.Add("Content-Type", "application/json")
    req.Header.Add("Cache-Control", "no-cache")
    res, err := http.DefaultClient.Do(req)
    if err != nil {
        log.Println("Failed to Put the value to the ThingWorx server", err)
    }
    fmt.Println(res)
}

And I can now verify that the properties have been updated for the Thing called RESTThing.

Building POST Request

Similar to GET and PUT, we create a new request, this time with the POST method, to invoke a service. For this I have already created a service that counts up a numeric value stored in the CountUpProp property, which already exists under the RESTThing entity.

req, _ := http.NewRequest("POST", url, nil)

Adding the headers to the request:

req.Header.Add("appKey", appKey)
req.Header.Add("Content-Type", "application/json")
req.Header.Add("Cache-Control", "no-cache")

Handling the response and the error in case of an issue:

res, err := http.DefaultClient.Do(req)
if err != nil {
    log.Println("Posting to Thingworx server failed with error", err)
}
fmt.Println(res)

Here's the complete function:

func TwxPost(myurl string, appKey string) {
    // e.g. http://tw831psql:8080/Thingworx/Things/RESTThing/Services/CountUpService
    url := myurl
    req, _ := http.NewRequest("POST", url, nil)
    req.Header.Add("appKey", appKey)
    req.Header.Add("Content-Type", "application/json")
    req.Header.Add("Cache-Control", "no-cache")
    res, err := http.DefaultClient.Do(req)
    if err != nil {
        log.Println("Posting to Thingworx server failed with error", err)
    }
    fmt.Println(res)
}

The property update can then be verified after the service invoke.

All the above functions can now be called, for example, in a main():

func main() {
    var myurl string
    var appkey string
    // Provide URL for ThingWorx
    fmt.Println("Enter URL, e.g. http://localhost:8080/Thingworx/Server") // accepting URL at runtime
    fmt.Scanln(&myurl)
    // Provide appKey from the ThingWorx platform
    fmt.Println("Enter valid ThingWorx Application Key") // accepting appKey at runtime
    fmt.Scanln(&appkey)
    GetTwxServerEntities(myurl, appkey)
    TwxPut(myurl, appkey)
    TwxPost(myurl, appkey)
}
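One refinement worth considering as a sketch: http.DefaultClient has no timeout, so a slow or unreachable ThingWorx server can block these calls indefinitely. A dedicated client with a timeout avoids that; the 10-second value is an assumption to tune for your environment.

import (
    "net/http"
    "time"
)

// twxClient is a shared HTTP client with a request timeout;
// the 10-second value is an assumption, tune it for your environment.
var twxClient = &http.Client{Timeout: 10 * time.Second}

Each http.DefaultClient.Do(req) call in the functions above would then become twxClient.Do(req).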
The use of the term "SSO" means different things to different people. Among Navigate admins, it became shorthand for using PingFederate to provide both authentication with a single sign-on component and authorization (checking permissions for access to files). In Navigate 1.5, this was the only option for configuring a production system, and many people were not ready for it. That was the origin of the "must have SSO" statement. Beginning with Navigate 1.6, PTC added a scenario called "Windchill Authentication" that is suitable for production and uses your enterprise LDAP to authenticate users. It issues a token, so you get some of the benefits of single sign-on, but not all the bells and whistles that come with PingFederate. It's also easier to configure. People have begun referring to Windchill Authentication as "non-SSO" to distinguish it from PingFederate, even though Windchill Authentication has some SSO functions.   In the install manual, there are three scenarios: Fixed Authentication, Windchill Authentication, and Single Sign-On with PingFederate. People usually begin with Fixed Authentication (the easiest to configure, but not secure, so it's only suitable for proof-of-concept demonstrations), then do Windchill Authentication before tackling PingFederate. Windchill Authentication can take a couple of days of WebEx sessions with us to get working, but for PingFederate we plan several WebEx sessions over a period of eight days for a typical install. During that time you will be coordinating with other administrators (such as the AD admin) and waiting for emails etc. to get remote admin tasks done as part of the install. Be prepared, time-wise.
In this session, we pick up where we left off with the mashup we worked on in part 1 of our Advanced Mashup Expert Session series. Specifically, we will explore the concepts of master mashups, session variables, and media entities, using each to further enhance the look and feel of our mashup.   For full-sized viewing, click on the YouTube link in the player controls. Visit the Online Success Guide to access our Expert Session videos at any time, as well as additional information about ThingWorx training and services.