

IoT Tips

I created this recently for another group - it might be useful: Video Link to Create a Database Connection to Your Postgres ThingWorx Database
View full tip
Since the marketplace extension is no longer supported and the drivers may be outdated, you can build your own JDBC package/extension:

1. Download the Extension Metadata file here
2. Download the appropriate JDBC driver
3. Build the extension structure by creating the directory lib/common
4. Place the JAR file in this directory location: lib/common/<JDBC driver jar file>
5. Modify the name attribute of the ExtensionPackage entity in the metadata.xml file as needed
6. Point the file attribute of the FileResource entity to the name of the JDBC JAR file
7. The metadata also contains a ThingTemplate; its name is set to MySqlServer, but it can be modified as needed
8. Select the lib folder and the metadata.xml file and send them to a zip archive
   Tip: The name of the zip archive should match the name given in the name attribute of the ExtensionPackage entity in the metadata.xml file
9. Import the newly created extension as usual

To use the JDBC extension, simply create a new Thing and assign it the new ThingTemplate that was imported with the JDBC extension.

Configuration Field Explanation:
- JDBC Driver Class Name: depends on the driver being used; refer to the driver documentation
- JDBC Connection String: defines the information needed to establish a connection with the database; connection string examples can be found in the ThingWorx Help Center
- ConnectionValidationString: a simple query that will work regardless of table names, executed to verify return values from the database

Alternatively, you may download the JDBC Connector Creator from the marketplace here: https://marketplace.thingworx.com/Items/jdbc-connector-extension
Then you can simply view the mashup and use it to package your JDBC JAR into an extension (which can later be imported into ThingWorx).
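For orientation, below is a stripped-down sketch of how the metadata.xml described above fits together. The element names follow the steps in this tip, but every attribute value shown here is a placeholder - always start from the metadata file downloaded in step 1 rather than from this sketch, since the real file carries additional attributes and the full ThingTemplate definition.

<Entities>
   <ExtensionPackages>
      <!-- name should match the name of the zip archive (steps 5 and 8) -->
      <ExtensionPackage name="MyJdbcExtension" packageVersion="1.0.0" vendor="MyCompany" />
   </ExtensionPackages>
   <JarResources>
      <!-- file points to the JAR placed under lib/common (steps 4 and 6) -->
      <FileResource type="JAR" file="my-jdbc-driver.jar" />
   </JarResources>
   <ThingTemplates>
      <!-- shipped as MySqlServer; rename as needed (step 7) -->
      <ThingTemplate name="MySqlServer" />
   </ThingTemplates>
</Entities>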
View full tip
Attached (as a PDF) are some steps to quickly get started with the ThingWorx MQTT Extension so that you can subscribe to and publish topics.
View full tip
Objective
Use Influx as a database to store data coming from the Kepware ThingWorx Industrial Connectivity server.

Prerequisite
Configure the ThingWorx connection to Kepware's KEPServerEX and bind tags that exist in KEPServerEX to Things in the ThingWorx model, as referenced in the Industrial Connections Example.

Configuration Steps

1. Create a database in Influx for ThingWorx. Connect:

   influx -precision rfc3339
   > SHOW DATABASES
   > CREATE DATABASE thingworx

2. Create an Influx Persistence Provider and configure it.

3. In the Industrial Thing where the Remote Properties are bound, define a Value Stream and make sure its Persistence Provider is set to Influx and that it is set to Active.

4. In the Value Stream's Properties and Alerts, define the mappings using Manage Bindings to specify which properties are to be stored in this value stream.

5. Save it and test it to make sure properties are stored in Influx:

   > use thingworx
   > show measurements
   name
   ----
   Channel1.Device1

   > show field keys on thingworx from "Channel1.Device1"
   name: Channel1.Device1
   fieldKey fieldType
   -------- ---------
   Channel1_Device1_Tag10 integer
   Channel1_Device1_Tag11 integer
   Channel1_Device1_Tag12 integer
   Channel1_Device1_Tag13 integer
   Channel1_Device1_Tag14 integer
   Channel1_Device1_Tag15 integer
   Channel1_Device1_Tag16 integer
   Channel1_Device1_Tag17 integer
   Channel1_Device1_Tag18 integer
   Channel1_Device1_Tag19 integer
   Channel1_Device1_Tag2 integer
   Channel1_Device1_Tag20 integer
   Channel1_Device1_Tag21 integer
   Channel1_Device1_Tag3 integer
   Channel1_Device1_Tag4 integer
   Channel1_Device1_Tag5 integer
   Channel1_Device1_Tag6 integer
   Channel1_Device1_Tag7 integer
   Channel1_Device1_Tag8 integer
   Channel1_Device1_Tag9 integer

   The following shows the data stored in Channel1_Device1_Tag2:

   > select Channel1_Device1_Tag2 from "Channel1.Device1"
   2019-02-20T16:26:13.699Z 8043
   2019-02-20T16:26:14.715Z 8044
   2019-02-20T16:26:15.728Z 8045
   2019-02-20T16:26:16.728Z 8046
   2019-02-20T16:26:17.727Z 8047
   2019-02-20T16:26:18.725Z 8048
   2019-02-20T16:26:19.724Z 8049
   2019-02-20T16:26:20.722Z 8050
   2019-02-20T16:26:21.723Z 8051
   2019-02-20T16:26:22.722Z 8052
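Once data has been flowing for a while, it can help to limit the verification query to a recent window. A small InfluxQL variation of the query above (assuming the same measurement and field names as in this example) would be:

   > select Channel1_Device1_Tag2 from "Channel1.Device1" where time > now() - 1h limit 10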
View full tip
Meet Chris. Chris leads the ThingWorx Flow portfolio, which will be releasing in 8.4. Chris has "a long history with workflow and a lot of interest in it." He was originally hired as a product manager by Jim Heppelmann [PTC's CEO] about 20 years ago to build a workflow engine for Windchill, our PLM solution. As a matter of fact, this feature is still a part of Windchill today. Pretty neat, right? We were able to leverage Chris' workflow expertise and experience to create ThingWorx Flow. Check out more below.

Kaya: Why did we create ThingWorx Flow? What challenge does it solve?
Chris: In order for customers to realize the full business value of connected solutions, it typically isn't enough to just surface an alert or show a dashboard of health. Customers want to automatically do things like create service tickets or automatically order consumables as they run low. To do that—to realize that automated value—you need a couple of things. First, you need to be able to easily connect to enterprise systems (like SAP, Salesforce or Microsoft Dynamics) to create service cases. Second, you need to be able to define and execute your business process logic.

Kaya: Can you give me an example?
Chris: Definitely. The business process may involve the need to replace a filter based on a contamination alert. After I receive the alert, I need to look in the bill of materials to find out which filter is applicable to that particular device or product; if it's not in stock onsite, I need to order it. When it arrives, I then need to schedule the service, knowing that it's going to be there at a particular time, which entails working with another service system like Salesforce.

So, you have a number of these different enterprise systems that you need to connect to, and you need to support that orchestrated business process to realize the value of a fully automated flow. That's what has really led us to do it—because, again, the compelling business value in these connected device solutions is often around an automated approach to service or maintenance issues. The value is compelling because automated processes minimize delays, support business growth and scale without hiring as many additional people, and eliminate human errors and variability, which leads to improved quality at lower costs or stronger margins.

Sample ThingWorx Orchestrate Workflow: Send RFQ to Supplier and Add Part to List If State Changes

Kaya: So, who ultimately benefits from Flow?
Chris: The benefits are realized by an organization as a whole in terms of increased responsiveness, flexibility, reduced costs and higher availability. The ThingWorx solution is unique in that it allows an organization to quickly realize these benefits by optimizing the participation of several key roles. As a result, no role becomes a bottleneck to addressing an urgent business need.

In addition to developers, the many business analysts across the organization are now empowered to define and modify their business processes themselves through Flow. They do this by using the system connections (or subflows) created by their system experts and the ThingWorx models that were defined by developers. All of this work is achieved using our visual modeling tool without the need for extensive training.

The system experts can define portions of flows, or subflows, that get, create or edit information in the systems they understand most within the Flow editor.
This enables each role to focus on using their most valuable skills and ultimately eliminates the bottlenecks that reduce business responsiveness when everything must be done by a few highly specialized experts. Now, developers can leverage the outputs of flows to drive behavior in the application and visualize key KPIs and overall health.

Kaya: So, I see that we're not only connecting business systems, but we're also connecting people. We enable them to leverage each other's different perspectives and areas of expertise. I know we gave developers a sneak peek of ThingWorx Flow in one of our latest posts. Do you have a more detailed demo that we could share this week?
Chris: Sure thing. Check this out. (view in My Videos)

Kaya: Wow! I can definitely see how Flow saves immense time in workflow creation. Now, final question: what is your favorite aspect about working at PTC?
Chris: The most interesting thing to me is that I get to work with so many different customers in so many different verticals that have such a diverse set of challenging and interesting problems. It's fun to work together with these customers to understand their problems and their unique environments, and then to build solutions with them using a combination of our products and their existing systems and tooling—that's what I think is most fascinating.

I learn something new about the way things are made, manufactured, built and serviced every day, and it just makes my life much more interesting. I don't feel like I'm doing the same thing every day. Working with customers to understand and address their challenges and help them realize their business goals is really rewarding and, again, the diversity of those requests is what makes the job particularly interesting and fascinating.
View full tip
Meet Anthony. Anthony came to PTC from a large industrial company as a user of Axeda, a leading device connectivity company that PTC acquired in 2014. With a background in aerospace engineering and experience in a variety of industries including life safety, healthcare, nuclear power, and oil and gas, Anthony has been working to create new value around innovation for customers transitioning to ThingWorx. When he's not working on IIoT, he's playing music on Cape Cod, photographing Hawaiian landscapes or bringing awesome inflatable chairs into the office.

You may recall from last week's post that Thing Presence is one of the top three features in 8.4 that you might not know about. This week, I spoke with Anthony for a deeper dive on Thing Presence. Check out how it went.

Kaya: What was the challenge users were facing that led us to create Thing Presence?
Anthony: Axeda customers transitioning to ThingWorx were struggling with the connectivity use case and kept asking, 'why is my asset always offline?' When we evaluated it, we discovered that it was tied to the difference between the AlwaysOn and Axeda eMessage protocol architectures. IsConnected will always report false for polling and duty cycle devices.

The use case of Thing Presence is to know that the asset is reporting into the network and is ready to provide information (push) or be accessed to retrieve information or do remote desktop support (pull). This use case is relevant for any asset in ThingWorx that uses duty cycle.

In ThingWorx 8.4 (coming in early 2019), the new IsReporting state will inform the user when a polling device is communicating on a regular basis. If it is, then IsReporting will be true. The IsReporting state resolves the discrepancy wherein devices that are on duty cycle appear disconnected due to the IsConnected state reporting false.

New "IsReporting" state improves visibility of an asset's communication state

Kaya: How exactly does Thing Presence work?
Anthony: You can think of it in terms of having teenagers. You tell them they need to check in with you on a regular basis through text message. If a text is missed, all of a sudden you take action.

Now, imagine the teenager is a device. If a device was supposed to check in every five minutes and it misses one poll, I want to flag that as a problem. The challenge with that from a service perspective is that sometimes your service personnel will go out and work on a device and may need to take it offline for a bit of time; we need to factor that in. We certainly don't want to deploy someone or try to fix something when a service technician is already there.

You might decide, 'my average service visit is an hour, so, if I miss a couple of pings, I'm okay; but, if I'm offline for more than an hour, then I'd like to know about it because I'd like to take action.' Thing Presence allows you to define that window.

Kaya: You've mentioned 'duty cycle' and 'polling cycle.' Can you explain these terms?
Anthony: Duty cycle and polling cycle are the same thing. It means that a device has a time at which it is expected to check in, and, provided that it checks in within that timeframe, all is good with the connection.

Connected services rely on a connection. As soon as the connection is broken, I no longer have the ability to service the asset.

Kaya: Given everything we've discussed, where do you see Thing Presence headed?
Anthony: The next piece of the equation for us is to provide information on the health of the connection.
When you look at servicing a remote asset, you need to a) know that it is communicating, and b) know that the connection is healthy before you try anything. I wouldn't want to try a software update if I am losing connection with my asset on a regular basis.

What do we mean by health? We mean: is the device checking in when it should be? If it's not, is there a pattern to that connection, and are those patterns tied to applications? For example, is it only on during working hours? Does it turn off during holidays? If the device is in a school, does it turn off during summer maintenance work? This allows us to garner insights on how and when the equipment is being used, not just operating status. At the end of the day, does this mean I can apply analytics and AI tools to it? Absolutely. Is it the first place I would apply it? Probably not.

Kaya: In your mind, what is the next big thing coming in ThingWorx that you're particularly excited about?
Anthony: Mashup 2.0 and Asset Advisor 8.4. (Double mic drop.)

Kaya: That's awesome. My last question is related to you. Can you tell me what your favorite aspect is about working at PTC?
Anthony: The chaos. Very often it's chaos that breeds innovation. What I mean by that is that, if you try to create something because you sit down and you say, 'I am going to innovate,' very often that is a failure, because the majority of the time it's the influence of a deadline, a customer need or an application at hand that makes the environment trying and sometimes hectic. But it is in these challenging environments where you can be the most creative and innovative as an engineer.
View full tip
Meet Neal. When Neal joined PTC five years ago, he immediately hit the ground running on IoT initiatives, working in multiple areas ranging from pre-sales to partner relations. Today, he is a Worldwide ThingWorx Center of Excellence Principal Lead at PTC, and his biggest focus is supporting the go-to-market for the Microsoft partnership.

I sat down with Neal recently to hear the details on exactly how Azure and ThingWorx can be used to develop world-class IIoT applications.

Kaya: Can you explain how Azure and ThingWorx work together?
Neal: Yes, so Azure provides the cloud infrastructure that our customers need in order to deploy ThingWorx.

By having Azure as our preferred cloud platform, we're able to specialize our R&D efforts into utilizing functionality that is available in Azure, rather than having to reinvent the wheel ourselves for each cloud platform in an attempt to remain cloud-agnostic. By leveraging a single, already quite powerful, cloud platform through Azure, we're able to maximize our development efforts.

Kaya: What was the major problem that led to us investigating cloud options?
Neal: There were two issues that our users had. The first was that we often had complicated installs and setup procedures because we were genericizing the whole process, so the initial setup and run was complicated and expensive. For example, we were requiring them to set up additional VMs and components, and we were giving them generic instructions because we were being very agnostic, when they had already chosen, outside of us, to use one of the cloud platforms. We knew our customers wanted to move fast, so we had to make it easier and faster for them to see results.

The other issue we ran into with customers was the confusion in the offerings. For ThingWorx, ingest is just one aspect of IoT. ThingWorx is particularly strong in areas like enabling mixed reality and augmented reality as well as application enablement. And, while we also have the ability to perform ingest capabilities, Microsoft is especially powerful when it comes to ingest capabilities and security. We decided that the smartest solution was to leverage Microsoft's expertise in data ingestion to make ThingWorx even more powerful; so, we made the Azure IoT Hub Connector. By partnering with Microsoft, we have a joint architecture where you can see how Microsoft provides key features and ThingWorx runs on top of those features and gets you to market faster to develop the application.

Below is an example of a high-availability deployment of ThingWorx on Azure that utilizes ThingWorx Azure IoT Connectors to access an Azure IoT Hub.

High-Availability Deployment of ThingWorx on Azure

Kaya: Why did we partner with Azure? What specific benefits does Azure offer over other cloud services providers?
Neal: When we started to look at what our customers were using for cloud services, we noticed that a lot were using Microsoft. When we join forces with Microsoft, we have a much more holistic offering around digital transformation. With the partnership, PTC and Microsoft are able to leverage each partner's respective strengths to provide even more powerful IIoT solutions. You have Windchill and Microsoft's business application strategies; you have Vuforia and Microsoft's mixed reality and augmented reality strategies; and you have ThingWorx on the Microsoft Azure cloud. Overall, you have a much more complete and powerful offering together.

Kaya: What is your favorite aspect about working at PTC?
Neal: The growth.
There have been a lot of changes over the last five years that I've been here. PTC has been able to leverage its strengths and long-time experience in the CAD and PLM markets to enter and ultimately become a leader in the IIoT market, according to reports by research firms like Gartner and Forrester. In short, the growing IIoT market and PTC's leadership in the industry.

Note to Readers: You've likely heard about our major strategic partnership with Microsoft to leverage our respective IIoT and cloud technologies to optimize the creation, deployment, management and overall use of your IIoT applications. If you haven't heard about the partnership, check out the press release here. If you're curious about more aspects of PTC's partnership with Microsoft, check out this site devoted to sharing how ThingWorx and Azure are better together.
View full tip
Just like the perfect sandwich, we know that you have specific preferences and requirements for your ThingWorx deployment. Whether you like to keep things simple with a classic grilled cheese or you like to spice things up with a more elaborate chipotle mayo BLT, we've got you covered. Our ThingWorx Deployment Architecture Guide explains what you'll need to deploy ThingWorx in three different scenarios: production, enterprise and high-availability (pictured below).

Deployment Architecture for ThingWorx on Azure in High-Availability

We've recently published Version 1.1 of the ThingWorx Deployment Architecture Guide. In it, you can find updated deployment architecture diagrams to more distinctly show the data and application layers within a ThingWorx environment. Our team has also added a new section on what you'll need to deploy ThingWorx on Microsoft Azure, PTC's preferred cloud platform.

Check it out here or in the attachment section on the right.

Stay connected,
Kaya
View full tip
GOBOT Framework

GOBOT is a framework written in the Go programming language, useful for connecting robotic components and a variety of hardware & IoT devices.

The framework consists of:
- Robots -> virtual entities representing rovers, drones, sensors etc.
- Adaptors -> allow connectivity to the hardware, e.g. connection to an Arduino is done using the Firmata Adaptor, defining how to talk to it
- Drivers -> define specific functionality to support specific hardware devices, e.g. buttons, sensors, etc.
- API -> provides a RESTful API to query Robot status

There are additional core features of the framework that I recommend having a look at, especially Events and Commands, which allow subscribing to / publishing events for the device; for more, refer to the doc. There's already a long list of Platforms for which the drivers and adaptors are available.

For this blog I will be working with an Arduino + Garmin LidarLite v3. There are cheaper versions available for distance measurement, however if you are looking for a high performance, high precision optical distance measurement sensor, then this is it.

Pre-requisites
- Install Go, see doc
- Install Gort
- Install Gobot
- Wire up the LidarLite sensor with the Arduino

How to connect

For our current setup I have the Arduino connected to Ubuntu 16 over the serial port; see here if you are looking for a different platform. For Ubuntu you just need the following 3 commands to connect and upload Firmata as our Adaptor to prepare the Arduino for connectivity:

// Look for the connected serial devices
$ gort scan serial
// install avrdude to upload firmata to the Arduino
$ gort arduino install
// upload the firmata to the serial port found via the first scan command, mine was found at /dev/ttyACM0
$ gort arduino upload firmata /dev/ttyACM0

Reading sensor data

Since there is an available driver for the LidarLite, I will be using it in the Go code below, in a file called main.go, which connects and reads the sensor data. For connecting and reading the sensor data we need the driver, the connection object & the task / work that the robot is supposed to perform.

Adaptor

firmataAdaptor := firmata.NewAdaptor("/dev/ttyACM0") // this is the port on which, for me, the Arduino is connected

Driver

As previously mentioned, Gobot provides several drivers, one of them being LidarLite; we will be using it like so:

d := i2c.NewLIDARLiteDriver(firmataAdaptor)

Work

Now that we have the adaptor & the driver set up, let's assign the work this robot needs to do, which is to read the distance:

work := func() {
    gobot.Every(1*time.Second, func() {
        dist, err := d.Distance()
        if err != nil {
            log.Fatalln("failed to get dist")
        }
        fmt.Println("Fetching the dist", dist, "cms")
    })
}

Notice the Every function provided by Gobot to define that we want to perform a certain action as time elapses; here we are gathering the distance.

Note: The distance returned by the LidarLite sensor is in cm & the max range for the sensor is 40 m.

Robot

Now we create the robot representing our entity, which in this case is simple, it's just the sensor itself:

lidarRobot := gobot.NewRobot("lidarBot",
    []gobot.Connection{firmataAdaptor},
    []gobot.Device{d},
    work)

This defines the virtual representation of the entity and the driver + the work this robot needs to do. Here's the complete code. Before running this package make sure to build it, as you likely will have to execute the runnable with sudo.
To build, simply navigate in the shell to the folder where main.go exists and execute:

$ go build

This will create a runnable file with the package name. Execute it with sudo if needed, like so:

$ sudo ./GarminLidarLite

If everything is done as required, the following output will appear, with sensor readings printed out every second:

2018/08/05 22:46:54 Initializing connections...
2018/08/05 22:46:54 Initializing connection Firmata-634725A2E59CBD50 ...
2018/08/05 22:46:54 Initializing devices...
2018/08/05 22:46:54 Initializing device LIDARLite-5D4F0034ECE4D0EB ...
2018/08/05 22:46:54 Robot lidarBot initialized.
2018/08/05 22:46:54 Starting Robot lidarBot ...
2018/08/05 22:46:54 Starting connections...
2018/08/05 22:46:54 Starting connection Firmata-634725A2E59CBD50 on port /dev/ttyACM0...
2018/08/05 22:46:58 Starting devices...
2018/08/05 22:46:58 Starting device LIDARLite-5D4F0034ECE4D0EB...
2018/08/05 22:46:58 Starting work...
Fetching the dist 166 cms
Fetching the dist 165 cms
Fetching the dist 165 cms

Here's the complete code for reference:

package main

import (
    "fmt"
    "log"
    "time"

    "gobot.io/x/gobot"
    "gobot.io/x/gobot/drivers/i2c"
    "gobot.io/x/gobot/platforms/firmata"
)

func main() {
    lidarLibTest()
}

// reading Garmin LidarLite data
func lidarLibTest() {
    firmataAdaptor := firmata.NewAdaptor("/dev/ttyACM0")
    d := i2c.NewLIDARLiteDriver(firmataAdaptor)

    work := func() {
        gobot.Every(1*time.Second, func() {
            dist, err := d.Distance()
            if err != nil {
                log.Fatalln("failed to get dist")
            }
            fmt.Println("Fetching the dist", dist, "cms")
        })
    }

    lidarRobot := gobot.NewRobot("lidarBot",
        []gobot.Connection{firmataAdaptor},
        []gobot.Device{d},
        work)

    lidarRobot.Start()
}
View full tip
The purpose of this document is to show how you can set up an MXChip IoT DevKit and how to send the readings from this microprocessor to ThingWorx through an Azure cloud server. You will also learn how to view the values that are being sent.
View full tip
Previously
- Installing & Connecting C SDK to Federated ThingWorx with VNC Tunneling to the Edge device
- Installing and configuring Web Socket Tunnel Extension on ThingWorx Platform

Overview

Using the C SDK Edge client configuration we did earlier, we'll now create the setup illustrated above. In this C SDK client we'll push the data from the ThingWorx Publisher with server name TW802Neo to the ThingWorx Subscriber with server name TW81. Notice that the SteamSensor2 on the publisher server is the one binding to the C SDK client, and the FederatedSteamThing on the subscriber is only getting data from the SteamSensor2. Let's crack on!

Content
- Configure ThingWorx to publish
- Configure ThingWorx to subscribe
- Publish entities from the Publisher to the Subscriber
- Create a Mashup to view data published to the subscriber

Pre-requisite

The minimum requirement is to have two ThingWorx servers running. Note that both ThingWorx systems can be publisher & subscriber at the same time.

Configure ThingWorx Publisher

Configuring the Federation Subsystem

1. Navigate to ThingWorx Composer > Subsystems > Federation Subsystem and configure the following highlighted sections. Essentially it's required to configure:
   - Server Identification, i.e. My Server name (FQDN / IP), My Server Description (optional)
   - Federation subscribers this server publishes to, i.e. all the servers you want to publish to from this server
   Refer to the Federation Subsystem doc in the Help Center for a detailed description of each configurable parameter.
2. Save the Federation Subsystem.

Configuring a Thing to be published

1. Navigate back to the Composer home page and select the entity which you'd like to publish.
2. In this case I'm using SteamSensor2, which is created to connect to the C SDK client.
3. To publish, edit the entity and click the Publish checkbox, like so.
4. Save the entity.

Configure ThingWorx Subscriber

Configuring the Federation Subsystem

1. Navigate to ThingWorx Composer > Subsystems > Federation Subsystem and configure the following highlighted sections. Essentially it's required to configure:
   - Server Identification, i.e. My Server name (FQDN / IP), My Server Description (optional)
   Refer to the Help Center doc on the Federation Subsystem should you need more detail on the configurable parameters. If you only want to use this server as a subscriber of entities from the publishing ThingWorx server, you don't have to configure the section Federation subscribers this server publishes to; I've configured it here anyway to show that both servers can work as publishers and subscribers.
2. Save the Federation Subsystem.

Configuring a Thing to subscribe to a published Thing

1. Subscribing to an entity is fairly straightforward; I'll demonstrate by utilizing the C SDK client which is currently pushing values to my remote thing called SteamSensor2 on the server https://tw802neo:443/Thingworx.
2. I have already published the SteamSensor2, see the section Configuring a Thing to be published above.
3. Create a Thing called FederatedSteamThing with RemoteThingWithTunnels as its ThingTemplate.
4. Browse for the Identifier and select the required entity to which binding must be done, like so.
5. Navigate to the Properties section for the entity and click Manage Bindings to search for the remote properties and add them to this Thing.
6. Save the entity, and then we can see the values that were pushed from the C SDK client.
7. Finally, we can also quickly see the values pulled via a Mashup created on the subscriber ThingWorx; below is a simple mashup with a Grid widget pulling values using the QueryPropertyHistory service.
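As a side note, the same QueryPropertyHistory service can also be exercised directly over the ThingWorx REST API to confirm that the subscriber is receiving values. A rough sketch (the subscriber server name, Thing name and application key are placeholders for your own environment):

curl -X POST \
     -H "appKey: <application key>" \
     -H "Content-Type: application/json" \
     -H "Accept: application/json" \
     -d '{"maxItems": 10}' \
     https://<subscriber-server>/Thingworx/Things/FederatedSteamThing/Services/QueryPropertyHistory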
View full tip
This is a follow-up post on my initial document about Edge Microserver (EMS) and Lua Script Resource (LSR) security. While the first part deals with the fundamentals of secure configurations, this second part will give some more practical tips and tricks on how to implement these security measures.

For more information it's also recommended to read through the Setting Up Secure Communications for WS EMS and LSR chapter in the ThingWorx Help Center. See also Trust & Encryption Theory and Hands On for more information and examples - especially around the concept of the Chain of Trust, which will be an important factor for this post as well.

In this post I will only reference the High Security options for both the EMS and the LSR. Note that all commands and directories are Linux based - Windows equivalents might differ slightly.

Note - some of the configuration options are color coded for easy recognition: LSR resources / EMS resources

Password Encryption

It's recommended to encrypt all passwords and keys, so that they are not stored as cleartext in the config.lua / config.json files.

And of course it's also recommended to use a more meaningful password than what I use as an example - which also means: do not use any password I mention here for your systems, they might be too easy to guess now 🙂

The luaScriptResource script can be used for encryption:

./luaScriptResource -encrypt "pword123"
############
Encrypted String
AES:A26fBYKHJq+eMu0Fm2FlDw==
############

The wsems script can be used for encryption:

./wsems -encrypt "pword123"
############
Encrypted String
AES:A26fBYKHJq+eMu0Fm2FlDw==
############

Note that the encryption for both scripts will result in the same encrypted string. This means either the wsems or the luaScriptResource script can be used to retrieve the same results.

The string to encrypt can be provided with or without quotation marks. It is however recommended to quote the string, especially when the string contains blanks or spaces. Otherwise unexpected results might occur as blanks will be considered as delimiter symbols.

LSR Configuration

In the config.lua there are two sections to be configured:
- scripts.script_resource, which deals with the configuration of the LSR itself
- scripts.rap, which deals with the connection to the EMS

HTTP Server Authentication

HTTP Server Authentication will require a username and password for accessing the LSR REST API.

scripts.script_resource_authenticate = true
scripts.script_resource_userid = "luauser"
scripts.script_resource_password = "pword123"

The password should be encrypted (see above) and the configuration should then be updated to

scripts.script_resource_password = "AES:A26fBYKHJq+eMu0Fm2FlDw=="

HTTP Server TLS Configuration

Configuration

HTTP Server TLS configuration will enable TLS and HTTPS for secure and encrypted communication channels from and to the LSR. To enable TLS and HTTPS, the following configuration is required:

scripts.script_resource_ssl = true
scripts.script_resource_certificate_chain = "/pathToLSR/lsrcertificate.pem"
scripts.script_resource_private_key = "/pathToLSR/key.pem"
scripts.script_resource_passphrase = "keyForLSR"

It's also encouraged to not use the default certificate, but custom certificates instead.
To explicitly set this, the following configuration can be added:

scripts.script_resource_use_default_certificate = false

Certificates, keys and encryption

The passphrase for the private key should be encrypted (see above) and the configuration should then be updated to

scripts.script_resource_passphrase = "AES:A+Uv/xvRWENWUzourErTZQ=="

The private_key should be available as a .pem file and starts and ends with the following lines:

-----BEGIN ENCRYPTED PRIVATE KEY-----
-----END ENCRYPTED PRIVATE KEY-----

As it's highly recommended to encrypt the private_key, the LSR needs to know the password for how to decrypt and use the key. This is done via the passphrase configuration. Naturally the passphrase should be encrypted in the config.lua to not allow spoofing of the actual cleartext passphrase.

The certificate_chain holds the Chain of Trust of the LSR Server Certificate in a .pem file. It holds multiple entries for the Root, Intermediate and Server Specific certificates, starting and ending with the following lines for each individual certificate and Certificate Authority (CA):

-----BEGIN CERTIFICATE-----
-----END CERTIFICATE-----

After configuring TLS and HTTPS, the LSR REST API has to be called via https://lsrserver:8001 (instead of http).

Connection to the EMS

Authentication

To secure the connection to the EMS, the LSR must know the certificates and authentication details for the EMS:

scripts.rap_server_authenticate = true
scripts.rap_userid = "emsuser"
scripts.rap_password = "AES:A26fBYKHJq+eMu0Fm2FlDw=="

Supply the authentication credentials as defined in the EMS's config.json - as for any other configuration, the password can be used in cleartext or encrypted. It's recommended to encrypt it here as well.

HTTPS and TLS

Use the following configuration to establish the HTTPS connection and use certificates:

scripts.rap_ssl = true
scripts.rap_cert_file = "/pathToLSR/emscertificate.pem"
scripts.rap_deny_selfsigned = true
scripts.rap_validate = true

This forces the certificate to be validated and also denies self-signed certificates. In case self-signed certificates are used, you might want to adjust the above values.

The cert_file is the full Chain of Trust as configured in the EMS's config.json http_server.certificate option. It needs to match exactly, so that the LSR can actually verify and trust the connections from and to the EMS.

EMS Configuration

In the config.json there are two sections to be configured:
- http_server, which enables the HTTP Server capabilities for the EMS
- certificates, which holds all certificates that the EMS must verify in order to communicate with other servers (ThingWorx Platform, LSR)

HTTP Server Authentication and TLS Configuration

HTTP Server Authentication will require a username and password for accessing the EMS REST API. HTTP Server TLS configuration will enable TLS and HTTPS for secure and encrypted communication channels from and to the EMS.
To enable both, the following configuration can be used:

"http_server": {
    "host": "<emsHostName>",
    "port": 8000,
    "ssl": true,
    "certificate": "/pathToEMS/emscertificate.pem",
    "private_key": "/pathToEMS/key.pem",
    "passphrase": "keyForEMS",
    "authenticate": true,
    "user": "emsuser",
    "password": "pword123"
}

The passphrase as well as the password should be encrypted (see above) and the configuration should then be updated to

"passphrase": "AES:D6sgxAEwWWdD5ZCcDwq4eg==",
"password": "AES:A26fBYKHJq+eMu0Fm2FlDw=="

See the LSR configuration for comments on the certificate and the private_key. The same principles apply here. Note that the certificate must hold the full Chain of Trust in a .pem file for the server hosting the EMS.

After configuring TLS and HTTPS, the EMS REST API has to be called via https://emsserver:8000 (instead of http).

Certificates Configuration

The certificates configuration holds all certificates that the EMS will need to validate. If ThingWorx is configured for HTTPS and the ws_connection.encryption is set to "ssl", the Chain of Trust for the ThingWorx Platform Server Certificate must be present in the .pem file. If the LSR is configured for HTTPS, the Chain of Trust for the LSR Server Certificate must be present in the .pem file.

"certificates": {
    "validate": true,
    "allow_self_signed": false,
    "cert_chain": "/pathToEMS/listOfCertificates.pem"
}

The listOfCertificates.pem is basically a copy of the lsrcertificate.pem with the added ThingWorx certificates and CAs.

Note that all certificates to be validated, as well as their full Chains of Trust, must be present in this one .pem file. Multiple files cannot be configured.

Binding to the LSR

When binding to the LSR via the auto_bind configuration, the following settings must be configured:

"auto_bind": [{
    "name": "<ThingName>",
    "host": "<lsrHostName>",
    "port": 8001,
    "protocol": "https",
    "user": "luauser",
    "password": "AES:A26fBYKHJq+eMu0Fm2FlDw=="
}]

This will ensure that the EMS connects to the LSR via HTTPS and with proper authentication.

Tips

- Do not use quotation marks (") as part of the strings to be encrypted. This could result in unexpected behavior when running the encryption script.
- Do not use a colon (:) as part of any username. Authentication tokens are passed from browsers as "username:password" and a colon in a username could result in unexpected authentication behavior leading to failed authentication requests.
- In the Server Specific certificates, the CN must match the actual server name and also must match the name of the http_server.host (EMS) or script_resource_host (LSR).
- In the .pem files, first store the Server Specific certificates, then all required Intermediate CAs and finally all required Root CAs - any other order could affect the consistency of the files and the certificate might not be fully readable by the scripts and processes.
- If the EMS is configured with certificates, the LSR must connect via a secure channel as well and needs to be configured to do so.
- If the LSR is configured with certificates, the EMS must connect via a secure channel as well and needs to be configured to do so.
- For testing REST API calls with resources that require encryption and authentication, see also How to run REST API calls with Postman on the Edge Microserver (EMS) and Lua Script Resource (LSR)

Export PEM data from KeyStore Explorer

To generate a .pem file I usually use the KeyStore Explorer for Windows - in which I have created my certificates and manage my keystores.
- In the keystore, select a certificate and view its details.
- Each certificate and CA in the chain can be viewed: Root, Intermediate and Server Specific.
- Select each certificate and CA and use the "PEM" button on the bottom of the interface to view the actual PEM content.
- Copy to clipboard and paste into a .pem file.

To generate a .pem file for the private key:

- Right-click the certificate > Export > Export Private Key
- Choose "PKCS #8"
- Check "Encrypted" and use the default algorithm; define an "Encryption Password"; check the "PEM" checkbox and export it as a .pkcs8 file
- The .pkcs8 file can then be renamed and used as a .pem file
- The password set during the export process will be the scripts.script_resource_passphrase (LSR) or the http_server.passphrase (EMS)

After generating the .pem files I copy them over to my Linux systems where they will need 644 permissions (-rw-r--r--).
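Once everything is in place, a quick sanity check of the secured REST endpoints can be done from the command line. The sketch below is only an illustration - it reuses the placeholder host names, users, passwords and certificate chain files from this post, and the exact resource path depends on which API you want to call:

# verify the EMS HTTPS endpoint (validates the server against its Chain of Trust)
curl --cacert /pathToEMS/emscertificate.pem -u emsuser:pword123 https://emsserver:8000/

# verify the LSR HTTPS endpoint
curl --cacert /pathToLSR/lsrcertificate.pem -u luauser:pword123 https://lsrserver:8001/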
View full tip
Key Functional Highlights
- Add connectivity to National Instruments TestStand
- Make it easier to edit the apps
- Easier to find mashups and things in Composer
- Support for Asset sub-types
- Open up the tag picker to allow adding any connection types through Composer

General App Improvements
- Enhance tag picker to improve speed of configuration
- Make it easier to add additional properties to assets
- Make app configuration more intuitive by centralizing the configuration

Controls Advisor
- Merge the Server and Connection status fields

Asset Advisor
- Performance improvement when displaying pages
- Add support for CFS/ServiceMax integration
- Added trial support for Service

Compatibility
- ThingWorx 8.2.x
- KEPServerEX 6.2 and later
- KEPServerEX V6.1 and older as well as different OPC Servers (with Kepware OPC aggregator)
- National Instruments TestStand 2016 SP1 and later
- Support upgrade from 8.0.1 and later

Documentation
- What's New in ThingWorx Manufacturing Apps
- ThingWorx Manufacturing Apps Setup and Configuration Guide
- What's New in ThingWorx Service Apps
- ThingWorx Service Apps Setup and Configuration Guide
- ThingWorx Manufacturing and Service Apps Customization Guide

Download
- ThingWorx Manufacturing Apps Freemium portal
- ThingWorx Manufacturing and Service Apps Extensions
View full tip
Connectors allow clients to establish a connection to Tomcat via the HTTP / HTTPS protocol. Tomcat allows for configuring multiple Connectors so that users or devices can connect either via HTTP or HTTPS.

Usually users like you and me access websites by just typing the URL in the browser's address bar, e.g. "www.google.com". By default browsers assume that the connection should be established with the HTTP protocol. For HTTPS connections, the protocol has to be specified explicitly, e.g. "https://www.google.com"

However - Google automatically forwards HTTP connections as HTTPS connections, so that all connections use certificates and a secure channel, and you will end up on "https://www.google.com" anyway.

To configure ThingWorx to only allow secure connections there are two options...

1) Remove HTTP access

If HTTP access is removed, users can no longer connect to the 80 or 8080 port. ThingWorx will only be accessible on port 443 (or 8443).

If connecting to port 8080, clients will not be redirected. The redirectPort in the Connector only forwards requests internally in Tomcat, not switching protocols and ports and not requiring a certificate when being used. The redirect port does not reflect in the client's connection but only manages internal port-forwarding in Tomcat.

By removing the HTTP ports for access, any connection on port 80 (or 8080) will end up in an error message that the client cannot connect on this port.

To remove the HTTP ports, edit the <Tomcat>\conf\server.xml and comment out sections like

<!-- commented out to disallow connections on port 80
<Connector port="80" protocol="org.apache.coyote.http11.Http11NioProtocol" connectionTimeout="20000" redirectPort="443" />
-->

Save and restart Tomcat. If opening Tomcat (and ThingWorx) in a browser via http://myServer/ the connection will fail with a "This site can't be reached", "ERR_CONNECTION_REFUSED" error.

2) Forcing insecure connections through secure ports

Alternatively, port 80 and 8080 can be kept open to still allow users and devices to connect. But instead of only internally forwarding the port, Tomcat can be set up to also forward the client to the new secure port. With this, users and devices can still use e.g. old bookmarks and do not have to explicitly set the HTTPS protocol in the address.

To configure this, edit the <Tomcat>\conf\web.xml and add the following section just before the closing </web-app> tag at the end of the file:

<security-constraint>
    <web-resource-collection>
        <web-resource-name>HTTPSOnly</web-resource-name>
        <url-pattern>/*</url-pattern>
    </web-resource-collection>
    <user-data-constraint>
        <transport-guarantee>CONFIDENTIAL</transport-guarantee>
    </user-data-constraint>
</security-constraint>

In <Tomcat>\conf\server.xml ensure that all HTTP Connectors (port 80 and 8080) have their redirect port set to the secure HTTPS Connector (usually port 443 or port 8443).

Save and restart Tomcat. If opening Tomcat (and ThingWorx) in a browser via http://myServer/ the connection will now be forwarded to the secure port. The browser will now show the connection as https://myServer/ instead and connections are secure and using certificates.

What next?

Configuring Tomcat to force insecure connections onto the actually secure HTTPS connection just requires a simple configuration change.
If you want to read more about certificates, encryption and how to set up ThingWorx for HTTPS in the first place, be sure to also have a look at

Trust & Encryption - Theory
Trust & Encryption - Hands On
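Both options above assume that an HTTPS Connector already exists on the secure port. For reference only, a minimal sketch of such a Connector in <Tomcat>\conf\server.xml is shown below - the keystore file and password are placeholders, and your actual TLS setup (keystore type, protocols, ciphers) may differ:

<Connector port="443" protocol="org.apache.coyote.http11.Http11NioProtocol"
           SSLEnabled="true" scheme="https" secure="true"
           keystoreFile="/path/to/keystore.jks" keystorePass="changeit"
           sslProtocol="TLS" />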
View full tip
The Protocol Adapter Toolkit (PAT) is an SDK that allows developers to write a custom Connector that enables edge devices to connect to and communicate with the ThingWorx Platform.

In this blog, I will be dabbling with the MQTT Sample Project that uses the MQTT Channel introduced in PAT 1.1.2.

Preamble
- All the PAT sample projects are documented in detail in their respective README.md. This post is an illustrated walk-through of the MQTT Sample project; please review its README.md for in-depth information.
- More reading in Protocol Adapter Toolkit (PAT) overview
- PAT 1.1.2 is supported with ThingWorx Platform 8.0 and 8.1 - not fully supported with 8.2 yet.

MQTT Connector features

The MQTT Sample project provides a Codec implementation that services MQTT requests and a command line MQTT client to test the Connector.

The sample MQTT Codec handles Edge initiated requests:
- read a property from the ThingWorx Platform
- write a property to the ThingWorx Platform
- execute a service on the ThingWorx Platform
- send an event to the ThingWorx Platform (also uses a ServiceEntityNameMapper to map an edgeId to an entityName)
All these actions require a security token that will be validated by a Platform service via an InvokeServiceAuthenticator.

This Codec also handles Platform initiated requests (egress messages):
- write a property to the Edge device
- execute a service without response on the Edge device

Components and Terminology

- MQTT messages originating from the Edge Device (inbound) are published to the sample MQTT topic
- MQTT messages originating from the Connector (outbound) are published to the mqtt/outbound MQTT topic

Codec: A pluggable component that interprets Edge Messages and converts them to ThingWorx Platform Messages to enable interoperability between an Edge Device and the ThingWorx Platform.
Connector: A running instance of a Protocol Adapter created using the Protocol Adapter Toolkit.
Edge Device: The device that exists external to the Connector which may produce and/or consume Edge Messages.
(mqtt) Edge Message: A data structure used for communication defined by the Edge Protocol. An Edge Message may include routing information for the Channel and a payload for the Codec. Edge Messages originate from the Edge Device (inbound) as well as the Codec (outbound).
(mqtt) Channel: The specific mechanism used to transmit Edge Messages to and from Edge Devices. The Protocol Adapter Toolkit currently includes support for HTTP, WebSocket, MQTT, and custom Channels. A Channel takes the data off of the network and produces an Edge Message for consumption by the Codec, and takes Edge Messages produced by the Codec and places the message payload data back onto the network.
Platform Connection: The connection layer between a Connector and the ThingWorx core.
Platform Message: The abstract representation of a message destined for and coming from the ThingWorx Platform (e.g. WriteProperty, InvokeService). Platform Messages are produced by the Codec for incoming messages and provided to the Codec for outgoing messages/responses.
Installation and Build

Protocol Adapter Toolkit installation
- The media is available from PTC Software Downloads: ThingWorx Connection Server > Release 8.2 > ThingWorx Protocol Adapter Toolkit
- Just unzip the media on your filesystem; there is no installer
- The MQTT Sample Project is available in <protocol-adapter-toolkit>\samples\mqtt

Eclipse Project setup
- Prerequisite: Eclipse IDE (I'm using the Neon.3 release)
- Eclipse Gradle extension - for example the Gradle IDE Pack available in the Eclipse Marketplace
- Import the MQTT Project: File > Import > Gradle (STS) > Gradle (STS) Project
- Browse to <protocol-adapter-toolkit>\samples\mqtt, then [Build Model] and select the mqtt project

Review the sample MQTT codec and test client

Connector: mqtt > src/main/java > com.thingworx.connector.sdk.samples.codec.MqttSampleCodec
- decode: converts an MqttEdgeMessage to a PlatformRequest
- encode (3 flavors): converts a PlatformMessage, an InfoTable or a Throwable to an MqttEdgeMessage
- Note that most of the conversion logic is common to all sample projects (websocket, rest, mqtt) and is done in a helper class: SampleProtocol. The SampleProtocol sources are available in the <protocol-adapter-toolkit>\samples\connector-api-sample-protocol project - it can be imported in Eclipse the same way as the mqtt project. SampleTokenAuthenticator and SampleEntityNameMapper are also defined in the <protocol-adapter-toolkit>\samples\connector-api-sample-protocol project.

Client: mqtt > src/client/java > com.thingworx.connector.sdk.samples.MqttClient
- A command line MQTT client based on Eclipse Paho that allows testing edge initiated and platform initiated requests.

Build the sample MQTT Connector and test client
- Select the mqtt project then RMB > Gradle (STS) > Task Quick Launcher > type Clean build + [enter]
- This creates a distributable archive (zip+tar) in <protocol-adapter-toolkit>\samples\mqtt\build\distributions that packages the sample mqtt connector, some startup scripts, an xml with sample entities to import on the platform and a sample connector.conf.
- Note that I will test the connector and the client directly from Eclipse, and will not use this package.

Runtime configuration and setup

MQTT broker

I'm just using a Mosquitto broker Docker image from Docker Hub:

docker run -d -p 1883:1883 --name mqtt ncarlier/mqtt

ThingWorx Platform appKey and ConnectionServicesExtension

From the ThingWorx Composer:
- Create an Application Key for your Connector (remember to increase the expiration date - to make it simple I bind it to Administrator)
- Import the ConnectionServicesExtension-x.y.z.zip and pat-extension-x.y.z.zip extensions available in <protocol-adapter-toolkit>\requiredExtensions

Connector configuration

Edit <protocol-adapter-toolkit>\samples\mqtt\src\main\dist\connector.conf and update the highlighted entries below to match your configuration:

include "application"
cx-server {
  connector {
    active-channel = "mqtt"
    bind-on-first-communication = true
    channel.mqtt {
      broker-urls = [ "tcp://localhost:1883" ]
      // at least one subscription must be defined
      subscriptions {
        "sample": [ "com.thingworx.connector.sdk.samples.codec.MqttSampleCodec", 1 ]
      }
      outbound-codec-class = "com.thingworx.connector.sdk.samples.codec.MqttSampleCodec"
    }
  }
  transport.websockets {
    app-key = "00000000-0000-0000-0000-000000000000"
    platforms = "wss://thingWorxServer:8443/Thingworx/WS"
  }
  // Health check service default port (9009) was in use on my machine.
  // Added the following block to change it.
  health-check {
    port = 9010
  }
}

Start the Connector

Run the Connector directly from Eclipse using the Gradle Task: RMB > Run As ... > Gradle (STS) Build

(Alternate technique) Debug as Java Application from Eclipse:
- Select the mqtt project, then Run > Debug Configurations ....
- Name: mqtt-connector
- Main class: com.thingworx.connectionserver.ConnectionServer
- On the Arguments tab add a VM argument: -Dconfig.file=<protocol-adapter-toolkit>\samples\mqtt\src\main\dist\connector.conf
- Select [Debug]

Verify connection to the Platform

From the ThingWorx Composer, Monitoring > Connection Servers: verify that a Connection Server with the name protocol-adapter-cxserver-<uuid> is listed.

Testing

Import the ThingWorx Platform sample Things

From the ThingWorx Composer, Import/Export > From File: <protocol-adapter-toolkit>\samples\mqtt\src\main\dist\SampleEntities.xml
Verify that WeatherThing, EntityNameConverter and EdgeTokenAuthenticator have been imported.
- WeatherThing: RemoteThing that is used to test our Connector
- EdgeTokenAuthenticator: holds a sample service (ValidateToken) used to validate the security token provided by the Edge device
- EntityNameConverter: holds a sample service (GetEntityName) used to map an edgeId to an entityName

Start the test MQTT client

I will run the test client directly from Eclipse:
- Select the mqtt project, then Run > Run Configurations ....
- Name: mqtt-client
- Main class: com.thingworx.connector.sdk.samples.MqttClient
- On the Arguments tab add a Program argument: tcp://<mqtt_broker_host>:1883
- Select [Run]
- Type the client commands in the Eclipse Console

Test Edge initiated requests

Read a property from the ThingWorx Platform

In the MQTT client console enter: readProperty WeatherThing temp

Sending message: {"propertyName":"temp","requestId":1,"authToken":"token1234","action":"readProperty","deviceId":"WeatherThing"} to topic: sample
Received message: {"temp":56.3,"requestId":1} from topic: mqtt/outbound

Notes:
- An authToken is sent with the request; it is validated by a platform service using the SampleTokenAuthenticator (this authenticator is common to all the PAT samples and is defined in <protocol-adapter-toolkit>\samples\connector-api-sample-protocol)
- The EntityNameMapper is not used by readProperty (no special reason for that)
- The PlatformRequest message built by the codec is ReadPropertyMessage

Write a property to the ThingWorx Platform

In the MQTT client console enter: writeProperty WeatherThing temp 20

Sending message: {"temp":"20","propertyName":"temp","requestId":2,"authToken":"token1234","action":"writeProperty","deviceId":"WeatherThing"} to topic: sample

Notes:
- An authToken is sent with the request; it is validated by a platform service using the SampleTokenAuthenticator
- The EntityNameMapper is not used by writeProperty
- The PlatformRequest message built by the codec is WritePropertyMessage
- No Edge message is sent back to the device

Send an event to the ThingWorx Platform

In the MQTT client console enter: fireEvent Weather WeatherEvent SomeDescription

Sending message: {"requestId":5,"authToken":"token1234","action":"fireEvent","eventName":"WeatherEvent","message":"Some description","deviceId":"Weather"} to topic: sample

Notes:
- An authToken is sent with the request; it is validated by a platform service using the SampleTokenAuthenticator
- fireEvent uses an EntityNameMapper (SampleEntityNameMapper) to map the deviceId (Weather) to a Thing name (WeatherThing); the mapping is done by a platform service
- The PlatformRequest message built by the codec is FireEventMessage
- No Edge message is sent back to the device

Execute a service on the ThingWorx Platform

... can be tested with the GetAverageTemperature service on WeatherThing ...

Test Platform initiated requests

Write a property to the Edge device

- The MQTT Connector must be configured to bind the Thing with the Platform when the first message is received for the Thing. This was done by setting bind-on-first-communication=true in connector.conf.
- When a Thing is bound, the remote egress messages will be forwarded to the Connector.
- The Edge initiated requests above should have done the binding, but if the Connector was restarted since, just bind again with: readProperty WeatherThing isConnected
- From the ThingWorx Composer, update the temp property value on WeatherThing to 30.
- An egress message is logged in the MQTT client console:

Received message: {"egressMessages":[{"propertyName":"temp","propertyValue":30,"type":"PROPERTY"}]} from topic: mqtt/outbound

Execute a service on the Edge device

... can be tested with the SetNtpService service on WeatherThing ...
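As a side note, the bundled test client is not mandatory: any MQTT tool can publish the same JSON payloads to the sample topic and watch mqtt/outbound. A hedged sketch using the Mosquitto command line utilities (broker host and payload taken from the readProperty example above):

# publish an edge-initiated readProperty request to the "sample" topic
mosquitto_pub -h localhost -p 1883 -t sample -m '{"propertyName":"temp","requestId":1,"authToken":"token1234","action":"readProperty","deviceId":"WeatherThing"}'

# watch the Connector's responses / egress messages
mosquitto_sub -h localhost -p 1883 -t mqtt/outbound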
View full tip
Protocol Adapter Toolkit (PAT) is an SDK that allows developers to write a custom Connector that enables edge devices (without native AlwaysOn support) to connect to and communicate with the ThingWorx Platform. A typical use case is edge communication using a protocol that can't be changed (e.g. MQTT). Prior to PAT, developers had to use the ThingWorx (Edge or Platform) SDKs, or the ThingWorx REST interface, to enable the edge devices to communicate with ThingWorx.

Overview

PAT provides three main components: the Channel, the Codec, and the ThingWorx Platform Connection.
- The Channel implements a network protocol to communicate directly with the Edge Device. Its responsibilities include reading data from an Edge Device, writing data to an Edge Device, and routing data to the correct Codec. You can implement your own custom Channel or use one of the out-of-the-box Channels provided by PTC: WebSocket, HTTP (1.0.x) and MQTT (1.1.x).
- The Codec translates messages from your edge devices into messages that the ThingWorx Platform can process (property read/write, service call, events), and provides a means to take the results of those actions and turn them back into messages for the device. You must implement the Codec.
- The Platform Connection layer sends and receives messages with the ThingWorx Platform.

Note: The PAT Connector capabilities depend on the edge protocol and the Channel implementation.

Installation

The PAT installation media contains:
- README.md - start here
- SDK (Java API) and runtime libraries
- PAT skeleton project (Gradle)
- Sample codec implementations for the WebSocket, HTTP, and MQTT channels (Gradle)
- Sample custom Channel implementation (basic TCP protocol adapter) (Gradle)
- Required extensions to be installed on the platform: ConnectionServicesExtension and pat-extension

Reference Documents
- ThingWorx Protocol Adapter Toolkit Developers Guide 1.0.0
- README.md in various levels of the installation folders
- ThingWorx Connection Services and Compatibility Matrix 1.0.0

Related Knowledge
- Protocol Adapter Toolkit - MQTT Sample Project hands-on (1.1.x)
View full tip
Video Author:                      Stefan Taka Original Post Date:            June 6, 2016   Description: In part 2 of the ThingWorx REST API tutorial, we will demonstrate various common uses of the ThingWorx REST API.      
View full tip
Video Author:                     Stefan Taka Original Post Date:            June 6, 2016   Description: In part 1 of the ThingWorx REST API tutorial, we will introduce you to the ThingWorx API structure, and also demonstrate how the API can be invoked.    Blog post with text examples found here:  REST API Overview and Examples    
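The linked blog post contains the worked text examples. Purely as an illustration of the invocation style covered in the video (server name, Thing name, property and application key below are placeholders), a property read over the ThingWorx REST API typically looks like this:

curl -H "appKey: <your application key>" \
     -H "Accept: application/json" \
     https://<server>/Thingworx/Things/MyThing/Properties/MyProperty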
View full tip
Video Author:                     Stefan Tatka Original Post Date:            June 6, 2016   Description: This ThingWorx Tutorial will demonstrate how to configure and initiate remote file transfers using the .NET SDK.      
View full tip
Video Author:                      Asia Garrouj Original Post Date:            June 13, 2017 Applicable Releases:        ThingWorx Analytics 8.0   Description: This video is the first of a 3-part series walking you through how to set up ThingWatcher for Anomaly Detection. In this first video you will learn the basics of how to establish connectivity between KEPServer and the ThingWorx Platform.
View full tip
Video Author:                     John Greiner Original Post Date:            January 3, 2017 Applicable Releases:        ThingWorx Analytics 52.0 to 8.0   Description: This video walks you through how to access the ThingWorx Analytics Interactive API Guide.

1. Get the IP address of the ThingWorx Analytics Server. Type: ip a
2. Put that IP address into the desired web browser. (Your IP address may be different from the one in the picture above.)
3. Add the port number of the server to the end of the IP address. The default port number is 8080. Make sure to put a colon ":" between the end of the IP address and the start of the port number. The port number could be different in some cases, depending on whether it was configured differently during installation.
4. Hit Enter and the main page will load.
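As a purely illustrative example (the address below is a placeholder - use the address reported by the ip a command on your own server), with the default port the guide would be opened at:

http://192.168.0.10:8080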
View full tip