IoT Tips

Applicable releases: 8.3 and above. Description: In this video, we will see the basic concepts of SVM and some scenarios for a better understanding.
View full tip
Meet Anthony. Anthony came to PTC from a large industrial company as a user of Axeda, a leading device connectivity company that PTC acquired in 2014. With a background in aerospace engineering and experience in a variety of industries including life safety, healthcare, nuclear power, and oil and gas, Anthony has been working to create new value around innovation for customers transitioning to ThingWorx. When he's not working on IIoT, he's playing music on Cape Cod, photographing Hawaiian landscapes or bringing awesome inflatable chairs into the office.

You may recall from last week's post that Thing Presence is one of the top three features in 8.4 that you might not know about. This week, I spoke with Anthony for a deeper dive on Thing Presence. Check out how it went.

Kaya: What was the challenge users were facing that led us to create Thing Presence?
Anthony: Axeda customers transitioning to ThingWorx were struggling with the connectivity use case and kept asking, 'why is my asset always offline?' When we evaluated it, we discovered that it was tied to the difference between the AlwaysOn and Axeda eMessage protocol architectures. IsConnected will always report false for polling and duty cycle devices.

The use case of Thing Presence is to know that the asset is reporting into the network and is ready to provide information (push) or be accessed to retrieve information or do remote desktop support (pull). This use case is relevant for any asset in ThingWorx that uses duty cycle.

In ThingWorx 8.4 (coming in early 2019), the new IsReporting state will inform the user when a polling device is communicating on a regular basis. If it is, then IsReporting will be true. The IsReporting state resolves the discrepancy wherein devices that are on duty cycle appear disconnected because the IsConnected state reports false.

[Image: New "IsReporting" state improves visibility of an asset's communication state]

Kaya: How exactly does Thing Presence work?
Anthony: You can think of it in terms of having teenagers. You tell them they need to check in with you on a regular basis through text message. If a text is missed, all of a sudden you take action.

Now, imagine the teenager is a device. If a device was supposed to check in every five minutes and it misses one poll, I want to flag that as a problem. The challenge with that from a service perspective is that sometimes your service personnel will go out and work on a device and may need to take it offline for a bit of time; we need to factor that in. We certainly don't want to deploy someone or try to fix something when a service technician is already there.

You might decide, 'my average service visit is an hour, so, if I miss a couple of pings, I'm okay; but, if I'm offline for more than an hour, then I'd like to know about it because I'd like to take action.' Thing Presence allows you to define that window.

Kaya: You've mentioned 'duty cycle' and 'polling cycle.' Can you explain these terms?
Anthony: Duty cycle and polling cycle are the same thing. It means that a device has a time by which it is expected to check in, and, provided that it checks in within that timeframe, all is good with the connection.

Connected services rely on a connection. As soon as the connection is broken, I no longer have the ability to service the asset.

Kaya: Given everything we've discussed, where do you see Thing Presence headed?
Anthony: The next piece of the equation for us is to provide information on the health of the connection. When you look at servicing a remote asset, you need to a) know that it is communicating, and b) know that the connection is healthy before you try anything. I wouldn't want to try a software update if I am losing connection with my asset on a regular basis.

What do we mean by health? We mean: is the device checking in when it should be? If it's not, is there a pattern to that connection, and are those patterns tied to applications? For example, is it only on during working hours? Does it turn off during holidays? If the device is in a school, does it turn off during summer maintenance work? This allows us to garner insights on how and when the equipment is being used, not just its operating status. At the end of the day, does this mean I can apply analytics and AI tools to it? Absolutely. Is it the first place I would apply it? Probably not.

Kaya: In your mind, what is the next big thing coming in ThingWorx that you're particularly excited about?
Anthony: Mashup 2.0 and Asset Advisor 8.4. (Double mic drop.)

Kaya: That's awesome. My last question is related to you. Can you tell me what your favorite aspect is about working at PTC?
Anthony: The chaos. Very often it's chaos that breeds innovation. What I mean by that is that, if you try to create something because you sit down and say, 'I am going to innovate,' very often that is a failure, because the majority of the time it's the influence of a deadline, a customer need or an application at hand that makes the environment trying and sometimes hectic. But it is in these challenging environments that you can be the most creative and innovative as an engineer.
View full tip
Push update: what is it? It is a mechanism that GetProperties supports so that the server can push a value to a client mashup. This allows you to see values update in your mashup in real time without needing the refresh widget. Another great way to use push updates is to propagate events that are tied to specific content fetches. Let's say your mashup has three areas: KPIs, Alerts, and Live Values. Using some server-side logic, you can set up a 'tracker' Thing with properties that indicate that one of those areas has updated data. Bring these notifications into the mashup as property values using GetProperties; as the server pushes updates to the mashup runtime, you can map them to a Validator or Expression widget (set to auto-evaluate), which in turn runs the necessary service to fetch the updated information for the specific area.
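As a minimal illustration (not part of the original tip), the tracker Thing's notification property can be flipped by any logic that writes a property, including a REST PUT from outside the platform in the same style as the Go examples further down this page. The Thing name TrackerThing, the property AlertsUpdated, the server URL and the application key below are placeholders.

package main

import (
    "fmt"
    "log"
    "net/http"
    "strings"
)

// Flips a notification property on a hypothetical "TrackerThing" so that
// mashups bound to it through GetProperties (with push updates enabled)
// can react and re-fetch the content area it represents.
func updateTrackerFlag(serverURL, appKey string) {
    // Thing and property names are examples only; substitute your own.
    url := serverURL + "/Things/TrackerThing/Properties/AlertsUpdated"
    payload := strings.NewReader(`{"AlertsUpdated": true}`)

    req, _ := http.NewRequest("PUT", url, payload)
    req.Header.Add("appKey", appKey)
    req.Header.Add("Content-Type", "application/json")

    res, err := http.DefaultClient.Do(req)
    if err != nil {
        log.Println("Failed to update tracker property", err)
        return
    }
    defer res.Body.Close()
    fmt.Println("Tracker update returned HTTP", res.StatusCode)
}

func main() {
    // e.g. http://localhost:8080/Thingworx and a valid application key
    updateTrackerFlag("http://localhost:8080/Thingworx", "<your-app-key>")
}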
View full tip
Since it is not available on support.ptc.com, please provide Creo View and Thing View widget documentation or a guide for viewing a 3D object in a custom mashup/UI outside of the ThingWorx Navigate app.

I am posting this request to the community, not to the ThingWorx developers portal, after discussing it with PTC technical support. Please refer to article CS291582.

LeighPTC, I had no option but to post this to the community again, but this has happened: the post "Creo View and Thing View Widget Documentation to view 3D Object in custom mashup/UI" was moved by LeighPTC.

Please don't move this request to the ThingWorx developers portal, so that PTC customers can have Creo View and Thing View widget documentation for viewing 3D objects in a custom mashup/UI, as it is not available today.

Many thanks,
Rahul
View full tip
If you've ever wished you could see into the future, you've come to the right place! Put your reflective suits and sunglasses on to prepare for a glimpse into the future of our upcoming ThingWorx 8.4 release! Here are sneak peeks of the top three features you may not have known are coming in ThingWorx 8.4.

1. Thing Presence
While it sounds like something from an episode of Ghost Hunters, Thing Presence provides insight into the communication state of polling or duty cycle Things (those that check in and out on a periodic basis). We're introducing a new IsReporting state, which is set to true when polling assets check in on time and are considered "present in the network." This helps to bridge the gap where the traditional ThingWorx IsConnected state reports offline and does not coincide with the actual network presence of the device.

[Image: Thing Presence: New "IsReporting" State]

2. Data Helpers
You may not know what Data Helpers are, but if you're a longstanding ThingWorx developer you likely know about Expression and Validator widgets. These widgets were handy because they allowed you to write conditional logic or input validation to drive behaviors in the UI, but they were frustrating to use: they took up lots of room on the visual layout canvas and offered only a tiny textbox for editing. In the 8.4 release, we are happy to announce that these two widgets will no longer be placed on the layout canvas. Instead, they will have a dedicated editor with plenty of room for code development, parameter configuration, and event definition and binding. We're wrapping all of this functionality into a nice little feature called...Data Helpers.

[Image: Data Helpers: Expression and Validator Widgets No Longer in Layout Canvas]

3. ThingWorx Flow
In case Thing Presence and Data Helpers aren't exciting enough, we're also introducing ThingWorx Flow, a neat new feature set that dramatically speeds development of connected applications through integrations with business systems like Salesforce and SAP. Imagine that, when a certain alert triggers, you want to automatically create a Salesforce service ticket and even send an emergency text to an operator to prevent damage to a device. A large set of out-of-the-box system connectors (PTC Windchill, Office 365, Google Docs, Slack, Jira and more) are included, which you can drag and drop onto a canvas to visually define a workflow. In the example below, a ThingWorx-connected device element, a Salesforce "create case" action and a Twilio text message connector were dropped onto the canvas to create a visual workflow.

[Image: Orchestration: Example Workflow that Creates Salesforce Cases and Alerts Operators]

Thing Presence, Data Helpers & Flow—get ready for these and more in ThingWorx 8.4!

Stay tuned for future posts that go into greater depth about each of these features and comment your thoughts below!

Stay connected,
Kaya
View full tip
If your mashups are feeling cramped and crowded, we've got some good news for you. Coming in our next release, ThingWorx will offer truly responsive layouts for your mashups, allowing you to adjust the size of your viewport or present a mashup on a variety of displays—laptop, tablet, phone, etc.—without compromising presentation or responsiveness. This new layout capability is based on Flexbox. Check out this article to learn more.

Here's a sneak peek of our new layout editor. It's still in development, but I hope you can start to see how you can use this in your UI development.

[Image: Sneak Peek: Responsive Mashup Layout]

Notice the ability to split the layout into containers. Each container allows you to lay out horizontally or vertically, distribute from left or right, align from top or bottom, justify space between widgets, etc. This gives you as the developer ultimate control to define the responsiveness of the mashup.

Like what you see? Any feedback? Let me know below!

Stay connected,
Kaya
View full tip
Several times in the past few months I was hit by a quick need to extract some data about Assets for a customer, and found myself continually hand-writing the code to do so. Rather than repeat myself any more, I figured I would share my work - maybe PTC customers can benefit from the same effort.

import static com.axeda.sdk.v2.dsl.Bridges.*
import com.axeda.drm.sdk.Context
import com.axeda.common.sdk.id.Identifier
import com.axeda.services.v2.*
import com.axeda.sdk.v2.exception.*

def retStr = "Device and Location Data\n"

def modellist = [:]
ModelCriteria mc = new ModelCriteria()
mc.modelNumber = "*"
tcount = 0
def mresults = modelBridge.find(mc)
while ( (mresults = modelBridge.find(mc)) != null && tcount < mresults.totalCount) {
    mresults.models.each { res ->
        modellist[res.systemId] = res.modelNumber
        tcount++
    }
    mc.pageNumber = mc.pageNumber + 1
}

locationList = [:]
LocationCriteria lc = new LocationCriteria()
lc.name = "*"
tcount = 0
def lresults = locationBridge.find(lc)
while ( (lresults = locationBridge.find(lc)) != null && tcount < lresults.totalCount) {
    lresults.locations.each { res ->
        locationList[res.systemId] = res.name
        tcount++
    }
    lc.pageNumber = lc.pageNumber + 1
}

AssetCriteria ac = new AssetCriteria()
ac.includeDetails = true
ac.name = "*"
tcount = 0
def results = assetBridge.find(ac)
while ( (results = assetBridge.find(ac)) != null && tcount < results.totalCount) {
    results.assets.each { res ->
        retStr += "ID: ${res.systemId} MN: ${res.model.systemId},${modellist[res.model.systemId]} SN: ${res.serialNumber} Name: ${res.name} : Location ${res.location.systemId},${locationList[res.location.systemId]}\n"
        tcount++
    }
    ac.pageNumber = ac.pageNumber + 1
}

return ["Content-Type": "application/text", "Content": retStr]

This will output data like so:

ID: 31342 MN: 14682,CKGW SN: Axeda-CK-Windows10VBox Name: Axeda-CK-Windows10VBox : Location 1,Foxboro
ID: 26248 MN: 14682,CKGW SN: CK-CKAMINSKI0L1 Name: CK-CKAMINSKI0L1 : Location 1,Foxboro
ID: 30082 MN: 14682,CKGW SN: CK-GW1 Name: CK-GW1 : Location 1,Foxboro
ID: 26247 MN: 14681,CKGW-ManagedModel1 SN: CK-MM01 Name: CK-MM01 : Location 1,Foxboro

This lets me compare the internal systemId of the Asset, the internal systemId of the Model, and the internal systemId of the Location of the device. This was to help me attempt to isolate an issue with orphaned devices not being returned in a report - exposing some duplicate locations and devices that needed corrections.

You may find yourself needing to do similar things when building logic for Axeda, or eventually integrating or migrating to ThingWorx. Our v2 API bridges help "bridge" the gap.
View full tip
Meet Neal. When Neal joined PTC five years ago, he immediately hit the ground running on IoT initiatives, working in multiple areas ranging from pre-sales to partner relations. Today, he is a Worldwide ThingWorx Center of Excellence Principal Lead at PTC, and his biggest focus is supporting the go-to-market for the Microsoft partnership.

I sat down with Neal recently to hear the details on exactly how Azure and ThingWorx can be used to develop world-class IIoT applications.

Kaya: Can you explain how Azure and ThingWorx work together?
Neal: Yes, so Azure provides the cloud infrastructure that our customers need in order to deploy ThingWorx.

By having Azure as our preferred cloud platform, we're able to focus our R&D efforts on utilizing functionality that is available in Azure, rather than having to reinvent the wheel ourselves for each cloud platform in an attempt to remain cloud-agnostic. By leveraging a single, already quite powerful, cloud platform through Azure, we're able to maximize our development efforts.

Kaya: What was the major problem that led to us investigating cloud options?
Neal: There were two issues that our users had. The first was that we often had complicated installs and setup procedures because we were genericizing the whole process, so the initial setup and run was complicated and expensive. For example, we were requiring them to set up additional VMs and components, and we were giving them generic instructions because we were being very agnostic, when they had already chosen, outside of us, to use one of the cloud platforms. We knew our customers wanted to move fast, so we had to make it easier and faster for them to see results.

The other issue we ran into with customers was confusion in the offerings. For ThingWorx, ingest is just one aspect of IoT. ThingWorx is particularly strong in areas like enabling mixed reality and augmented reality as well as application enablement. And, while we also have ingest capabilities, Microsoft is especially powerful when it comes to ingest and security. We decided that the smartest solution was to leverage Microsoft's expertise in data ingestion to make ThingWorx even more powerful; so, we made the Azure IoT Hub Connector. By partnering with Microsoft, we have a joint architecture where Microsoft provides key features and ThingWorx runs on top of those features to get you to market faster when developing your application.

Below is an example of a high-availability deployment of ThingWorx on Azure that utilizes ThingWorx Azure IoT Connectors to access an Azure IoT Hub.

[Image: High-Availability Deployment of ThingWorx on Azure]

Kaya: Why did we partner with Azure? What specific benefits does Azure offer over other cloud services providers?
Neal: When we started to look at what our customers were using for cloud services, we noticed that a lot were using Microsoft. When we join forces with Microsoft, we have a much more holistic offering around digital transformation. With the partnership, PTC and Microsoft are able to leverage each partner's respective strengths to provide even more powerful IIoT solutions. You have Windchill and Microsoft's business application strategies; you have Vuforia and Microsoft's mixed reality and augmented reality strategies; and you have ThingWorx on the Microsoft Azure cloud. Overall, you have a much more complete and powerful offering together.

Kaya: What is your favorite aspect about working at PTC?
Neal: The growth. There have been a lot of changes over the last five years that I've been here. PTC has been able to leverage its strengths and long-time experience in the CAD and PLM markets to enter and ultimately become a leader in the IIoT market, according to reports by research firms like Gartner and Forrester. In short, the growing IIoT market and PTC's leadership in the industry.

Note to Readers: You've likely heard about our major strategic partnership with Microsoft to leverage our respective IIoT and cloud technologies to optimize the creation, deployment, management and overall use of your IIoT applications. If you haven't heard about the partnership, check out the press release here. If you're curious about more aspects of PTC's partnership with Microsoft, check out this site devoted to sharing how ThingWorx and Azure are better together.
View full tip
Just like the perfect sandwich, we know that you have specific preferences and requirements for your ThingWorx deployment. Whether you like to keep things simple with a classic grilled cheese or you like to spice things up with a more elaborate chipotle mayo BLT, we've got you covered. Our ThingWorx Deployment Architecture Guide explains what you'll need to deploy ThingWorx in three different scenarios: production, enterprise and high-availability (pictured below).

[Image: Deployment Architecture for ThingWorx on Azure in High-Availability]

We've recently published Version 1.1 of the ThingWorx Deployment Architecture Guide. In it, you can find updated deployment architecture diagrams to more distinctly show the data and application layers within a ThingWorx environment. Our team has also added a new section on what you'll need to deploy ThingWorx on Microsoft Azure, PTC's preferred cloud platform.

Check it out here or in the attachment section on the right.

Stay connected,
Kaya
View full tip
I got this excellent question and I thought it worthwhile to put my answer here as well.

There are two ways to segregate information between clients. By default we use a 'software' approach to segregation by using Organizations. This allows you to assign a client to an Organization or Organizational nodes and give those nodes 'visibility' to specific entities within the software. This means that, through software logic, users can only see what they've been given visibility to see. This mainly applies to all of the client's equipment (ThingWorx Things): they can only see their own equipment. It also applies to a specific set of their data, namely ValueStream data, because that can only be retrieved from the perspective of a ThingWorx Thing.

Blogs/Wikis/DataTables/Streams can store data across clients and do not utilize visibility on a row basis; in this case appropriate queries need to be created to allow retrieval of only a specific client's data. Here we use a security construct that utilizes what we call the 'system user' together with wrapper routines that work off the CurrentUser context; this allows you to create enforced, validated queries against the data so that a user can only retrieve their specific data.

Regarding the data itself, if you need to, you can provide actual 'physical' segregation by using multiple persistence providers and mapping Blogs/Wikis/DataTables/Streams/ValueStreams to different persistence providers. Persistence providers are essentially additional database schemas (in the same database or a different one) of the ThingWorx data storage schema, allowing you to completely separate where data is stored for each client. Note that just creating unique Blogs/Wikis/DataTables/Streams/ValueStreams per client and using visibility is still only a logical/software way of providing segregation, because the data will be stored in one and the same database schema, also known as the ThingWorx data persistence provider.
View full tip
GOBOT Framework

GOBOT is a framework written in the Go programming language, useful for connecting robotic components, a variety of hardware and IoT devices.

The framework consists of:
Robots -> Virtual entities representing rovers, drones, sensors, etc.
Adaptors -> Allow connectivity to the hardware, e.g. connection to an Arduino is done using the Firmata Adaptor, defining how to talk to it
Drivers -> Define the specific functionality supported on specific hardware devices, e.g. buttons, sensors, etc.
API -> Provides a RESTful API to query Robot status

There are additional core features of the framework that I recommend having a look at, especially Events and Commands, which allow subscribing to / publishing events to the device; for more, refer to the doc. There's already a long list of Platforms for which drivers and adaptors are available. For this blog I will be working with an Arduino + Garmin LidarLite v3. There are cheaper sensors available for distance measurement; however, if you are looking for a high-performance, high-precision optical distance measurement sensor, then this is it.

Pre-requisite
Install Go (see doc)
Install Gort
Install Gobot
Wire up the LidarLite sensor with the Arduino

How to connect
For our current setup I have the Arduino connected to Ubuntu 16 over the serial port; see here if you are looking for a different platform. For Ubuntu you just need the following 3 commands to connect and upload Firmata as our Adaptor to prepare the Arduino for connectivity:

// Look for the connected serial devices
$ gort scan serial
// install avrdude to upload firmata to the Arduino
$ gort arduino install
// upload the firmata to the serial port found via the first scan command, mine was found at /dev/ttyACM0
$ gort arduino upload firmata /dev/ttyACM0

Reading sensor data
Since there is an available driver for the LidarLite, I will be using it in the Go code below, in a file called main.go, which connects and reads the sensor data. For connecting and reading the sensor data we need the driver, the connection object and the task/work that the robot is supposed to perform.

Adaptor

firmataAdaptor := firmata.NewAdaptor("/dev/ttyACM0") // this is the port on which my Arduino is connected

Driver
As previously mentioned, Gobot provides several drivers; one of them is LidarLite and we will be using it like so:

d := i2c.NewLIDARLiteDriver(firmataAdaptor)

Work
Now that we have the adaptor and the driver set up, let's assign the work this robot needs to do, which is to read the distance:

work := func() {
    gobot.Every(1*time.Second, func() {
        dist, err := d.Distance()
        if err != nil {
            log.Fatalln("failed to get dist")
        }
        fmt.Println("Fetching the dist", dist, "cms")
    })
}

Notice the Every function provided by gobot, used to define that we want to perform a certain action as time lapses; here we are gathering the distance. Note: the distance returned by the LidarLite sensor is in cm and the max range of the sensor is 40 m.

Robot
Now we create the robot representing our entity, which in this case is simple - it's just the sensor itself:

lidarRobot := gobot.NewRobot("lidarBot",
    []gobot.Connection{firmataAdaptor},
    []gobot.Device{d},
    work)

This defines the virtual representation of the entity, plus the driver and the work this robot needs to do. The complete code is below. Before running this package, make sure to build it, as you will likely have to execute the runnable with sudo.

To build, simply navigate in the shell to the folder where main.go exists and execute

$ go build

This will create a runnable file with the package name; execute it with sudo if needed, like so

$ sudo ./GarminLidarLite

If everything is done as required, the following output will appear, with sensor readings printed out every second:

2018/08/05 22:46:54 Initializing connections...
2018/08/05 22:46:54 Initializing connection Firmata-634725A2E59CBD50 ...
2018/08/05 22:46:54 Initializing devices...
2018/08/05 22:46:54 Initializing device LIDARLite-5D4F0034ECE4D0EB ...
2018/08/05 22:46:54 Robot lidarBot initialized.
2018/08/05 22:46:54 Starting Robot lidarBot ...
2018/08/05 22:46:54 Starting connections...
2018/08/05 22:46:54 Starting connection Firmata-634725A2E59CBD50 on port /dev/ttyACM0...
2018/08/05 22:46:58 Starting devices...
2018/08/05 22:46:58 Starting device LIDARLite-5D4F0034ECE4D0EB...
2018/08/05 22:46:58 Starting work...
Fetching the dist 166 cms
Fetching the dist 165 cms
Fetching the dist 165 cms

Here's the complete code for reference:

package main

import (
    "fmt"
    "log"
    "time"

    "gobot.io/x/gobot"
    "gobot.io/x/gobot/drivers/i2c"
    "gobot.io/x/gobot/platforms/firmata"
)

func main() {
    lidarLibTest()
}

// reading Garmin LidarLite data
func lidarLibTest() {
    firmataAdaptor := firmata.NewAdaptor("/dev/ttyACM0")
    d := i2c.NewLIDARLiteDriver(firmataAdaptor)

    work := func() {
        gobot.Every(1*time.Second, func() {
            dist, err := d.Distance()
            if err != nil {
                log.Fatalln("failed to get dist")
            }
            fmt.Println("Fetching the dist", dist, "cms")
        })
    }

    lidarRobot := gobot.NewRobot("lidarBot",
        []gobot.Connection{firmataAdaptor},
        []gobot.Device{d},
        work)

    lidarRobot.Start()
}
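The framework overview above also mentions gobot's RESTful API, which is not used in this example. As an optional, minimal sketch (assuming the same Arduino/LidarLite wiring, and with port 3000 chosen arbitrarily), the same robot can be registered with a gobot Master and exposed through the api package so its status can be queried over HTTP:

package main

import (
    "fmt"
    "log"
    "time"

    "gobot.io/x/gobot"
    "gobot.io/x/gobot/api"
    "gobot.io/x/gobot/drivers/i2c"
    "gobot.io/x/gobot/platforms/firmata"
)

func main() {
    // A Master groups robots and is what the RESTful API exposes.
    master := gobot.NewMaster()

    // Start the built-in API server; port 3000 is an arbitrary choice.
    a := api.NewAPI(master)
    a.Port = "3000"
    a.Start()

    // Same adaptor/driver/work pattern as the lidar example above.
    firmataAdaptor := firmata.NewAdaptor("/dev/ttyACM0")
    d := i2c.NewLIDARLiteDriver(firmataAdaptor)

    work := func() {
        gobot.Every(1*time.Second, func() {
            dist, err := d.Distance()
            if err != nil {
                log.Println("failed to get dist", err)
                return
            }
            fmt.Println("Fetching the dist", dist, "cms")
        })
    }

    lidarRobot := gobot.NewRobot("lidarBot",
        []gobot.Connection{firmataAdaptor},
        []gobot.Device{d},
        work)

    master.AddRobot(lidarRobot)

    // Blocks and runs all registered robots; robot status should then be
    // queryable at endpoints such as http://localhost:3000/api/robots
    master.Start()
}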
View full tip
Prerequisite
Install Go
Install VSCode or a desired IDE to write Go code, e.g. GoLand (commercial license required, 30-day trial)
Install the Go extension for VSCode (if you are working with VSCode)

Content
Building a GET Request
Building a PUT Request
Building a POST Request

Building a GET Request

I'll be using the net/http package from Go to perform the GET request to the ThingWorx server by importing it:

import (
    "net/http"
)

Next, we use NewRequest(), which takes a method, URL and body. Since I'm sending a GET request, my method will be GET, the URL points to the ThingWorx server, and there is no body, so we leave it as nil.

url := myurl
req, _ := http.NewRequest("GET", url, nil)

We are ignoring the error that NewRequest returns, as it is already handled within NewRequest() for us. Use Header to add the request headers to be received by the ThingWorx server; note that Header is of type map[string][]string (a key : value pair).

req.Header.Add("appKey", appkey)             // passing the appKey from the ThingWorx server for authentication
req.Header.Add("Accept", "application/json") // accept JSON as response
req.Header.Add("Cache-Control", "no-cache")  // do not use cache to fetch data

Now we invoke the DefaultClient to perform the request and handle the error:

res, err := http.DefaultClient.Do(req)
if err != nil {
    log.Println("Failed to get all entity list from the server", err)
}

We need to close the body once we have received it, and then we try to read the body returned by our request:

defer res.Body.Close()
body, _ := ioutil.ReadAll(res.Body)

Here's the complete function, accepting the URL and application key as strings. Notice I am starting the function name with a capital letter, which denotes that I am making this an exported function. See Exported/Unexported Identifiers In Go for more.

func GetTwxServerEntities(myurl string, appkey string) {
    url := myurl
    req, _ := http.NewRequest("GET", url, nil)
    req.Header.Add("appKey", appkey)
    req.Header.Add("Accept", "application/json")
    req.Header.Add("Cache-Control", "no-cache")

    res, err := http.DefaultClient.Do(req)
    if err != nil {
        log.Println("Failed to get all entity list from the server", err)
    }

    defer res.Body.Close()
    body, _ := ioutil.ReadAll(res.Body)
    //fmt.Println(res)
    fmt.Println(string(body))
}

Building a PUT Request

To send property updates to the ThingWorx server, I'll create a NewReader to read the string, which is JSON in this example:

payload := strings.NewReader("{\"Prop1\" : \"Demo 101\",\"Prop2\" : 1001}")

Like the GET request, NewRequest is invoked to perform the PUT request, like so:

req, _ := http.NewRequest("PUT", url, payload)

Adding the header details:

req.Header.Add("appKey", appkey)
req.Header.Add("Content-Type", "application/json")
req.Header.Add("Cache-Control", "no-cache")

Invoke the client to perform the request:

res, err := http.DefaultClient.Do(req)
if err != nil {
    log.Println("Failed to Put the value to the ThingWorx server", err)
}

Here's the complete function, which takes a URL and appKey and then updates 2 property values for a Thing on the ThingWorx server, e.g.
myurl = http://tw831psql:8080/Thingworx/Things/RESTThing/Properties/*

func TwxPut(myurl string, appkey string) {
    url := myurl
    payload := strings.NewReader("{\"Prop1\" : \"Demo 101\",\"Prop2\" : 1001}")
    req, _ := http.NewRequest("PUT", url, payload)
    req.Header.Add("appKey", appkey)
    req.Header.Add("Content-Type", "application/json")
    req.Header.Add("Cache-Control", "no-cache")

    res, err := http.DefaultClient.Do(req)
    if err != nil {
        log.Println("Failed to Put the value to the ThingWorx server", err)
    }
    fmt.Println(res)
}

And I can now verify that the property has been updated for the Thing called RESTThing.

Building a POST Request

Similar to GET and PUT, we have to create a new Request with method POST to invoke a Service. For this example, I have already created a service that counts up a numeric property value stored in the CountUpProp property, which already exists under the RESTThing entity.

req, _ := http.NewRequest("POST", url, nil)

Adding the headers to the req:

req.Header.Add("appKey", appKey)
req.Header.Add("Content-Type", "application/json")
req.Header.Add("Cache-Control", "no-cache")

Handling the response and the error in case of an issue:

res, err := http.DefaultClient.Do(req)
if err != nil {
    log.Println("Posting to Thingworx server failed with error", err)
}
fmt.Println(res)

Here's the complete function:

func TwxPost(myurl string, appKey string) {
    // e.g. http://tw831psql:8080/Thingworx/Things/RESTThing/Services/CountUpService
    url := myurl
    req, _ := http.NewRequest("POST", url, nil)
    req.Header.Add("appKey", appKey)
    req.Header.Add("Content-Type", "application/json")
    req.Header.Add("Cache-Control", "no-cache")

    res, err := http.DefaultClient.Do(req)
    if err != nil {
        log.Println("Posting to Thingworx server failed with error", err)
    }
    fmt.Println(res)
}

Verifying the property update after the service invoke.

All the above functions can now be called, for example, in a main():

func main() {
    var myurl string
    var appkey string

    // Provide URL for ThingWorx
    fmt.Println("Enter URL, e.g. http://localhost:8080/Thingworx/Server") // accepting URL at runtime
    fmt.Scanln(&myurl)

    // Provide appKey from the ThingWorx platform
    fmt.Println("Enter valid ThingWorx Application Key") // accepting appKey at runtime
    fmt.Scanln(&appkey)

    GetTwxServerEntities(myurl, appkey)
    TwxPut(myurl, appkey)
    TwxPost(myurl, appkey)
}
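One small addition worth considering (not part of the original walkthrough): the functions above ignore the HTTP status code, so a failed authentication or a wrong URL still prints a response without flagging an error. A helper along these lines, dropped into the same main.go (it reuses the fmt, io/ioutil and net/http imports already present), would surface those cases:

// checkResponse treats any non-2xx status from the ThingWorx server as an
// error and returns the response body either way so it can be logged.
// It assumes the caller passes a non-nil *http.Response.
func checkResponse(res *http.Response) ([]byte, error) {
    defer res.Body.Close()
    body, err := ioutil.ReadAll(res.Body)
    if err != nil {
        return nil, err
    }
    if res.StatusCode < 200 || res.StatusCode > 299 {
        return body, fmt.Errorf("ThingWorx returned HTTP %d: %s", res.StatusCode, body)
    }
    return body, nil
}

A call like body, err := checkResponse(res) can then replace the manual defer/ReadAll lines in each of the functions above.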
View full tip
The use of the term "SSO" means different things to different people. Among Navigate admins, it became shorthand for using PingFederate to provide both authentication with a single sign-on component and authorization (checking permissions for access to files). In Navigate 1.5, this was the only option for configuring a production system, and many people were not ready for it. That was the origin of the "must have SSO" statement. Beginning with Navigate 1.6, PTC added a scenario called "Windchill Authentication" that is suitable for production and uses your enterprise LDAP to authenticate users. It will issue a token, so you get some of the benefits of single sign-on, but not all the bells and whistles that come with PingFederate. It's also easier to configure. People have begun referring to Windchill Authentication as "non-SSO" to distinguish it from PingFederate, even though Windchill Authentication has some SSO functions.

In the install manual, there are three scenarios: Fixed Authentication, Windchill Authentication, and Single Sign-On with PingFederate. People usually begin with Fixed Authentication (the easiest to configure, but not secure, so it's only good for proof-of-concept demonstrations), then do Windchill Authentication before tackling PingFederate. Windchill Authentication can take a couple of days of Webexing with us to get working, but for PingFederate we plan several Webexes over a period of 8 days for a typical install. During that time you will be coordinating with other administrators (such as the AD admin) and waiting for emails etc. to get remote admin tasks done as part of the install. Be prepared, timewise.
View full tip
In this session, we pick up where we left off with the mashup which was worked on in part 1 of our Advanced Mashup Expert Session series. Specifically, we will explore the concepts of master mashups, session variables, and media entities, using each to further enhance the look and feel of our mashup.     For full-sized viewing, click on the YouTube link in the player controls. Visit the Online Success Guide to access our Expert Session videos at any time as well as additional information about ThingWorx training and services.
View full tip
This video builds upon the mashup created in the basic session, and strives to create a more polished, user-friendly interface that is ready for deployment. In part 1, we’ll take a look at advanced layout designs and include a more varied set of widgets to help draw attention to some of the more pertinent properties being captured within the mashup.   For full-sized viewing, click on the YouTube link in the player controls. Visit the Online Success Guide to access our Expert Session videos at any time as well as additional information about ThingWorx training and services.  
View full tip
Database backups are vital when it comes to ensuring data integrity and data safety. PostgreSQL offers simple solutions to generate backups of an existing ThingWorx database instance and recover them when needed.

Please note that this does not replace a proper and well-defined disaster recovery plan. Export and import are part of such a strategy, but do not reflect the complete strategy. The commands used in this post are for Windows, but can be adjusted to work on Linux-based systems as well.

Backup

To create a backup, the export / dump functionality of PostgreSQL can be used:

pg_dump -U postgres -C thingworx > thingworxDump.sql

The -C option will include the statement to create the database in the .sql file and map it to the existing tablespace and user (e.g. 'thingworx' and 'twadmin'). The tablespace and user can be seen in the .sql file in the line with "ALTER DATABASE <dbname> OWNER TO <user>;"

In the above example, we're backing up the thingworx database schema and dumping it into the thingworxDump.sql file. The tablespace, username and password are also included in the platform_settings.json.

Restore

To restore the database, we assume an empty PostgreSQL installation. We need to create the DB schema user first via the following command line:

psql -U postgres -c "CREATE USER twadmin WITH PASSWORD 'ts';"

With the user created, we can now re-create the tablespace and grant permissions to the twadmin user:

psql -U postgres -c "CREATE TABLESPACE thingworx OWNER twadmin LOCATION 'C:\ThingWorx\ThingworxPostgresqlStorage';"
psql -U postgres -c "GRANT ALL PRIVILEGES ON TABLESPACE thingworx TO twadmin;"

Finally the database itself can be restored by using the following command line:

psql -U postgres < thingworxDump.sql

This will create the database and populate it with tables, functions and sequences, and will also restore the data from the .sql file.

It is important that the database, username and tablespace match the original system - otherwise granting permissions and re-creating data might fail on recovery. The user and tablespace can also be reused, so only the database has to be deleted before restoring it.

Part of the strategy

Part of the backup and recovery strategy should be to actually test both the backup and the recovery side of the procedure. It should be well tested in a test environment before being deployed to any production environment. Take backups on a regular basis and test disaster recovery once or twice a year to ensure the procedure is still valid.

Data is the most important resource in your application - protect it!
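Since regular backups are recommended, the export command lends itself to scheduling. As one illustration only (a sketch, not part of the original procedure), a small Go program can wrap the same pg_dump call and write a timestamped dump file, so it can be run from Task Scheduler or cron. Password handling (the PGPASSWORD environment variable or a .pgpass file) and retention of old dumps are deliberately left out.

package main

import (
    "fmt"
    "log"
    "os"
    "os/exec"
    "time"
)

// Runs pg_dump with the same options as above and redirects its output to a
// timestamped .sql file. Database name, user and working folder are
// assumptions; adjust them to your environment.
func main() {
    outName := fmt.Sprintf("thingworxDump_%s.sql", time.Now().Format("2006-01-02_150405"))

    outFile, err := os.Create(outName)
    if err != nil {
        log.Fatalln("could not create backup file:", err)
    }
    defer outFile.Close()

    cmd := exec.Command("pg_dump", "-U", "postgres", "-C", "thingworx")
    cmd.Stdout = outFile  // equivalent of "> thingworxDump.sql"
    cmd.Stderr = os.Stderr

    if err := cmd.Run(); err != nil {
        log.Fatalln("pg_dump failed:", err)
    }
    log.Println("backup written to", outName)
}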
View full tip
The purpose of this document is to show how you can set up an MXChip IoT DevKit and how to send the readings of this microprocessor to ThingWorx through an Azure cloud server. You will also learn how to view the values that are being sent.
View full tip
A fresh look at getting started with ThingWorx in a relevant context that outlines the DevOps needed to kick-start your programming.     For full-sized viewing, click on the YouTube link in the player controls. Visit the Online Success Guide to access our Expert Session videos at any time as well as additional information about ThingWorx training and services.
View full tip
Help Center link on how to control file transfers from the edge client using the EdgeControlled ThingShape.

The EdgeControlled ThingShape is a default entity included with ThingWorx that allows you to manage the amount of egress being sent from the platform to the edge.

At the time of writing this post, the available 'When Disconnected' settings for a remotely bound property in ThingWorx are 'Fold' and 'Ignore'. Setting a property to 'Fold' while using the EdgeControlled ThingShape is necessary whether the device is connected all the time or only for brief updates.

To use this ThingShape in a real-world scenario, you might code your edge client to invoke the DequeueEgress REST API function available through this ThingShape. The parameter you pass in is the number of messages you would like the client to receive; the result of the function is how many messages the platform then actually sent (see the sketch after the setup steps below).

A quick setup:
1. Create a RemoteThing entity in ThingWorx
2. Create an ApplicationKey entity in ThingWorx
3. Set up an edge client to bind to that RemoteThing using the specified ApplicationKey
4. Manage Bindings on the Properties page of the RemoteThing, and pull in a few properties you would like to send property updates to
5. Set the 'When Disconnected' value to 'Fold' for each property you want to queue messages for
  5a. Set any other settings on the properties you'd like, i.e. persistence, logged
6. Save the Thing
7. Add the EdgeControlled ThingShape to the Thing
8. Save the Thing
9. Update property values; exceptions are thrown, but the values will be queued
10. Invoke DequeueEgress on the RemoteThing, with the number of messages to send to the edge client passed in as the parameter value
  10a. Note that 'Fold' means only the last value set for a property will be sent to the edge client. There is currently no retention of values previously set to the property and stored as the message to be sent; those values are lost if a new value comes in before it is dequeued.
11. Verify that the edge client has received the expected egress, and that the return result of the DequeueEgress function was the expected number of messages sent.
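For illustration only, here is a minimal Go sketch of step 10, invoking DequeueEgress over REST in the same style as the Go examples elsewhere on this page. In practice the edge client would typically call the service through its SDK; the server URL, Thing name and especially the input parameter name ("maxCount") are assumptions, so check the DequeueEgress service definition on the EdgeControlled ThingShape in Composer for the actual parameter name.

package main

import (
    "fmt"
    "io/ioutil"
    "log"
    "net/http"
    "strings"
)

// Asks the platform to release up to maxMessages queued ("folded") property
// updates to the edge client bound to the given RemoteThing.
func dequeueEgress(serverURL, appKey, thingName string, maxMessages int) {
    url := fmt.Sprintf("%s/Things/%s/Services/DequeueEgress", serverURL, thingName)
    // "maxCount" is an assumed parameter name - verify it in Composer.
    payload := strings.NewReader(fmt.Sprintf(`{"maxCount": %d}`, maxMessages))

    req, _ := http.NewRequest("POST", url, payload)
    req.Header.Add("appKey", appKey)
    req.Header.Add("Content-Type", "application/json")
    req.Header.Add("Accept", "application/json")

    res, err := http.DefaultClient.Do(req)
    if err != nil {
        log.Println("DequeueEgress call failed:", err)
        return
    }
    defer res.Body.Close()
    body, _ := ioutil.ReadAll(res.Body)

    // The service result is the number of messages actually sent to the edge.
    fmt.Println("DequeueEgress returned:", string(body))
}

func main() {
    // e.g. http://localhost:8080/Thingworx, a valid application key,
    // and the RemoteThing created in step 1
    dequeueEgress("http://localhost:8080/Thingworx", "<your-app-key>", "MyRemoteThing", 10)
}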
View full tip
The video below is about ThingWorx Analytics and walks through the following functions: uploading a dataset, creating a training model, and creating a scoring job.
View full tip
Previously: Installing & Connecting C SDK to Federated ThingWorx with VNC Tunneling to the Edge device

Pre-requisite
Download and install the Web Sockets Tunnels Widget and Library Extension from the PTC Marketplace

Configuring the Tunnel Subsystem

1. Log on to ThingWorx Composer > System > Tunnel Subsystem > Configuration
2. The "Public host name used for tunnels" and "Public port used for tunnels" parameters require the publicly addressable FQDN or IP of the instance running the ThingWorx server and the port on which it is listening.
3. In the screenshot above, TW802Neo is the name of the server and 443 is the port configured for ThingWorx to listen on.
4. Navigate back to the RemoteThing we created above to connect our C SDK client to on the ThingWorx platform and ensure that Enable Tunneling is turned on.
5. Click on Configuration and click the Add My Tunnel button to configure where the tunnel should be opened to.
6. In the above example, the Host and Port parameters are those of the Ubuntu machine where my C SDK client is running together with the VNC server. If you are looking for more detail on how to configure these topics, refer to the Simple diagnostic utility to analyse tunneling performance in ThingWorx.
7. Once done, save the entity.

Configuring the Remote Access & Web Socket Tunnel widgets in a Mashup

1. Navigate to ThingWorx Composer > Visualization > Mashups > New to create a Mashup.
2. From the list of widgets, drag and drop the following two widgets that are added as part of the Web Socket Tunnel Widget and Library extension: Remote Access and Web Socket Tunnel. Also set AcceptSelfSignedCert if you are configuring ThingWorx with a self-signed certificate.
4. Since I have my C SDK client binding to SteamSensor2 (remember, it was created with the RemoteThingWithTunnel ThingTemplate), I have selected that as the RemoteThingName.
5. TunnelName is the vnc tunnel as configured in the SteamSensor2 configuration.
6. For the Web Socket Tunnel widget, the following configuration is required: RemoteThingName, TunnelName, VNCPassword.
7. The first two parameters (RemoteThingName and TunnelName) are exactly the same as those already set for the Remote Access widget above; the VNCPassword is the same password with which you configured your VNC server.
8. Once done, save the mashup and view it.
9. Click on Remote Access to download the websocket adapter plugin. Once downloaded, click on it to initiate the websocket; you will be prompted with the following: Accept and Run. This will open the remote tunnel with the following confirmation. Don't click on OK, or it will close. You can now utilize this tunnel to perform the required action. Note that if there is no connection through the opened tunnel, it will time out and close automatically.
10. To remote desktop to the edge device, I will use the Remote Device button, which will open a new browser window like so.
11. Click on Connect to initiate the remote desktop session, like so.
12. I can now start the terminal on the edge device and navigate through.

Up Next: Configuring ThingWorx Federation for fetching data from the C SDK client, from the publisher to the subscriber ThingWorx entity
View full tip