
IoT Tips

Part I – Securing the connection from a remote device to the ThingWorx platform

The goal of this first part is to set up a certificate authority (CA) and sign the certificates used to authenticate MQTT clients. At the end of this first part the MQTT broker will only accept clients with a valid certificate. A note on terminology: TLS (Transport Layer Security) is the new name for SSL (Secure Sockets Layer).

Requirements
The certificates will be generated with openssl (check whether it is already installed by your distribution). Demonstrations will be done with the open source MQTT broker, mosquitto. To install it, use the apt-get commands:
$ sudo apt-get install mosquitto
$ sudo apt-get install mosquitto-clients

Procedure
NOTE: This procedure assumes all the steps will be performed on the same system.

1. Set up a protected workspace
Warning: the keys for the certificates are not protected with a password. Create and use a directory that does not grant access to other users.
$ mkdir myCA
$ chmod 700 myCA
$ cd myCA

2. Set up a CA and generate the server certificates
Download and run the generate-CA.sh script to create the certificate authority (CA) files, generate the server certificates and use the CA to sign them.
NOTE: Open the script to customize it at your convenience.
$ wget https://github.com/owntracks/tools/raw/master/TLS/generate-CA.sh .
$ bash ./generate-CA.sh
The script produces six files: ca.crt, ca.key, ca.srl, myhost.crt, myhost.csr, and myhost.key. These are certificates (.crt), keys (.key), a certificate signing request (.csr), and a serial number record file (.srl) used in the signing process. Note that the myhost files will have different names on your system (ubuntu in my case).
Three of them get copied to the /etc/mosquitto/ directories:
$ sudo cp ca.crt /etc/mosquitto/ca_certificates/
$ sudo cp myhost.crt myhost.key /etc/mosquitto/certs/
They are referenced in the /etc/mosquitto/mosquitto.conf file (a sample of these settings is sketched at the end of this part). After copying the files and modifying the mosquitto.conf file, restart the server:
$ sudo service mosquitto restart

3. Checkpoint
To validate the setup at this point, use the mosquitto_sub client (install it if it is not already present). Change to the ca_certificates folder and run the subscription command (see the sketch at the end of this part). The topics are updated every 10 seconds. If debugging is needed you can add the -d flag to mosquitto_sub and/or look at /var/log/mosquitto/mosquitto.log.

4. Generate client certificates
The following openssl commands create the certificates:
$ openssl genrsa -out client.key 2048
$ openssl req -new -out client.csr -key client.key -subj "/CN=client/O=example.com"
$ openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAserial ./ca.srl -out client.crt -days 3650 -addtrust clientAuth
The argument -addtrust clientAuth makes the resulting signed certificate suitable for use with a client.

5. Reconfigure
Add the require_certificate line to the end of the /etc/mosquitto/mosquitto.conf file (see the sketch at the end of this part), then restart the server:
$ sudo service mosquitto restart

6. Test
The mosquitto_sub command we used above now fails. Adding the --cert and --key arguments satisfies the server:
$ mosquitto_sub -t \$SYS/broker/bytes/\# -v --cafile ca.crt --cert client.crt --key client.key
To obtain the corresponding certificates and key for my server (named ubuntu), the same syntax and commands apply with the server's file names.

Conclusion
This first part established a secure connection from a remote thing to the MQTT broker.
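For reference, the mosquitto.conf settings referenced in steps 2 and 5, and the checkpoint subscription from step 3, might look like the following. This is a sketch only: the listener port 8883, the file paths and the $SYS topic are reasonable assumptions based on the steps above, not content reproduced from the original post.

/etc/mosquitto/mosquitto.conf (TLS listener; require_certificate is the line added in step 5):
listener 8883
cafile /etc/mosquitto/ca_certificates/ca.crt
certfile /etc/mosquitto/certs/myhost.crt
keyfile /etc/mosquitto/certs/myhost.key
require_certificate true

Checkpoint (step 3), run from the folder containing ca.crt:
$ mosquitto_sub -h myhost -p 8883 -t '$SYS/broker/bytes/#' -v --cafile ca.crt

Test (step 6), once require_certificate is enabled the client certificate and key must be presented:
$ mosquitto_sub -h myhost -p 8883 -t '$SYS/broker/bytes/#' -v --cafile ca.crt --cert client.crt --key client.key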
In the next part we will restrict this connection to TLS 1.2 clients only and allow the websocket connection.
View full tip
Just like the perfect sandwich, we know that you have specific preferences and requirements for your ThingWorx deployment. Whether you like to keep things simple with a classic grilled cheese or you like to spice things up with a more elaborate chipotle mayo BLT, we’ve got you covered. Our ThingWorx Deployment Architecture Guide explains what you’ll need to deploy ThingWorx in three different scenarios: production, enterprise and high-availability (pictured below).

[Figure: Deployment Architecture for ThingWorx on Azure in High-Availability]

We’ve recently published Version 1.1 of the ThingWorx Deployment Architecture Guide. In it, you can find updated deployment architecture diagrams that more distinctly show the data and application layers within a ThingWorx environment. Our team has also added a new section on what you’ll need to deploy ThingWorx on Microsoft Azure, PTC’s preferred cloud platform.

Check it out here or in the attachment section on the right.

Stay connected,
Kaya
View full tip
Official name: DataStax Enterprise (DSE), sometimes referred to as Cassandra.
Note: DBA skills are required; free self-paced training can be found here: Training | DataStax. The extension package can be obtained through Technical Support.
ThingWorx 6.0 introduces DSE as a backend database that scales to much greater data volumes, as Neo4j performance limitations are hit at around 50 GB. Some of the main reasons to consider DSE are:
1. Elastic scalability -- Allows you to easily add capacity online to accommodate more customers and more data when needed.
2. Always-on architecture -- Contains no single point of failure (as with traditional master/slave RDBMSs and other NoSQL solutions), resulting in continuous availability for business-critical applications that can't afford to go down.
3. Fast linear-scale performance -- Enables sub-second response times with linear scalability (double the throughput with two nodes, quadruple it with four, and so on).
4. Flexible data storage -- Easily accommodates the full range of data formats -- structured, semi-structured and unstructured -- that run through today's modern applications.
5. Easy data distribution -- Read and write to any node, with all changes automatically synchronized across the cluster, giving maximum flexibility to distribute data by replicating across multiple datacenters, cloud, and even mixed cloud/on-premise environments.
Note: Windows+DSE is currently not fully supported.
Connecting ThingWorx (prerequisite: a fully configured DSE database):
1. Obtain the dse_persistancePackage.
2. Import it as an extension in Composer.
3. In Composer, create a new persistence provider.
4. Select the imported package as the Persistence Provider Package.
5. In the Configuration tab:
   - For Cassandra Cluster Host, enter the IP address set in cassandra.yaml, or localhost if hosted locally
   - Enter a new or existing Cassandra Keyspace name
   - Enter the Solr Cluster URL
   - Other fields can be left at default (*)
6. Go to Services and execute the TestConnectivity service to ensure a True response.
7. When creating a new Stream, Value Stream, or Data Table, set Persistence Provider to the one created in the previous steps.
Currently all reads and writes are done through ThingWorx and all ThingWorx data is encoded in DSE. OpsCenter still allows you to see the connected streams, data tables and value streams.
(*) SimpleStrategy can be used for a single data center, but NetworkTopologyStrategy is recommended for most deployments, because it is much easier to expand to multiple data centers when required by future expansion (a CQL sketch follows at the end of this tip).
Is there a limit of data per node? 1 TB is a reasonable limit on how much data a single node can handle, but in reality a node is not at all limited by the size of the data, only by the rate of operations. A node might have only 80 GB of data on it, but if it's continuously hit with random reads and doesn't have a lot of RAM, it might not even be able to handle that number of requests at a reasonable rate. Similarly, a node might have 10 TB of data, but if it's rarely read from, or there is a small portion of data that is hot (so it can be effectively cached), it will do just fine. If the replication factor is above 1 and there are no reads at consistency level ALL, other replicas will be able to respond quickly to read requests, so there won't be a large difference in latency seen from a client perspective.
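The replication-strategy footnote above maps to how the Cassandra keyspace is defined. As a sketch only -- the keyspace name, data center name and replication factors below are placeholders, not values from this article:

CREATE KEYSPACE IF NOT EXISTS thingworx
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};

-- or, for deployments that may grow to multiple data centers:
CREATE KEYSPACE IF NOT EXISTS thingworx
  WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3};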
View full tip
The Axeda Platform has long had the ability to write custom logic to retrieve, manipulate and create data.  In the current versions of the Platform, there are two classes of API, Version 1 (v1) and Version 2 (v2).  The v1 APIs allow a developer to work with data on the Platform, but all of them are subject to the maxQueryResults configuration property, which by default limits the number of results per query to 1000. For some subsets of data this can be inadequate.  In comes the v2 API, which introduces pagination.

One of the first things a new user does when exploring the v2 API is something like the following:

HistoricalDataItemValueCriteria criteria = new HistoricalDataItemValueCriteria()
criteria.assetId = '9701'
criteria.startDate = '2014-07-23T12:33:00Z'
criteria.endDate = '2014-07-23T12:44:00Z'
DataItemBridge dbridge = com.axeda.sdk.v2.dsl.Bridges.dataItemBridge
FindDataItemValueResult results = dbridge.findHistoricalValues(criteria)

And they get frustrated when they only get the same 100 rows of data.  Repeat after me: v2 API invocations (find operations) are limited to batches of 100 results at a time! But that's not the end of the story.  With a small change, the query above can be tuned to iterate through all results that match the search criteria:

HistoricalDataItemValueCriteria criteria = new HistoricalDataItemValueCriteria()
criteria.assetId = '9701'
criteria.startDate = '2014-07-23T12:33:00Z'
criteria.endDate = '2014-07-23T12:44:00Z'
criteria.pageNumber = 1
criteria.pageSize = 100 // Default.
DataItemBridge dbridge = com.axeda.sdk.v2.dsl.Bridges.dataItemBridge
FindDataItemValueResult results = dbridge.findHistoricalValues(criteria)
tcount = 0
while ( (results = dbridge.findHistoricalValues(criteria)) != null && tcount < results.totalCount) {
  results.dataItems.each { res ->
    tcount++
  }
  criteria.pageNumber = criteria.pageNumber + 1
}

I currently recommend that people avoid using the count() or countDomainObjectByCriteria() functions if you're then going to call a find.  Currently both the count*() and find functions compute the total result count, which doubles the execution time of just those two calls.  The total count is only computed when running the first find() operation, so the code pattern above is so far the most efficient way I've seen to run these operations on the platform.

So having covered how to do this in code (custom objects), let's turn our attention to the REST APIs - the other entry point for using these capabilities.  The REST API doesn't offer a count*() function, but the first find() invocation (if using XML) brings back totalCount as part of the result set.  You can use this in your application to decide how many times to call the REST end-point to retrieve your data.
So for the example above:

POST:  https://customer-sandbox.axeda.com/services/v2/dataItem/findHistoricalValues
HEADERS:
Content-Type: application/xml
Accept: application/xml
BODY:
<?xml version="1.0" encoding="UTF-8"?>
<HistoricalDataItemValueCriteria xmlns="http://www.axeda.com/services/v2" pageSize="100" pageNumber="1">
<assetId>9701</assetId>
<startDate>2014-07-23T12:33:00Z</startDate>
<endDate>2014-07-23T12:35:02Z</endDate>
</HistoricalDataItemValueCriteria>

RESULTS (note the totalCount attribute on the result envelope):
<v2:FindAssetResult totalCount="1882" xmlns:v2="http://www.axeda.com/services/v2" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
   <v2:criteria pageSize="100" pageNumber="1">
      <v2:name>*</v2:name>
      <v2:propertyNames/>
   </v2:criteria>
   <v2:assets>
   </v2:assets>
</v2:FindAssetResult>

Or JSON:

POST:  https://customer-sandbox.axeda.com/services/v2/dataItem/findHistoricalValues
HEADERS:
Content-Type: application/json
Accept: application/json
BODY:
{
  "id":  9701,
  "startDate": "2014-07-23T12:33:00Z",
  "endDate": "2014-07-23T12:35:02Z",
  "pageNumber": 1,
  "pageSize": 2
}

(A command-line sketch of paging through these results follows below.) And that's how you work around the maxQueryResults limitation of the v1 APIs.  Some APIs do not currently have matching v2 Bridges (e.g. MobileLocation and DataItemAssociation), in which case the limitation will still apply.  Creative use of the query Criteria will allow you to work around these limitations as we continue to improve the v2 API.
Regards,
-Chris
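The same paging can be driven from the command line. A sketch using curl: the body is exactly the JSON example above and only pageNumber changes between calls; keep incrementing it until pageNumber * pageSize exceeds the totalCount returned in the first response. Authentication is shown here as HTTP Basic via curl's -u flag purely as an assumption; substitute whatever mechanism your instance uses.

$ curl -s -u '<username>:<password>' -X POST \
    -H 'Content-Type: application/json' -H 'Accept: application/json' \
    -d '{"id": 9701, "startDate": "2014-07-23T12:33:00Z", "endDate": "2014-07-23T12:35:02Z", "pageNumber": 1, "pageSize": 100}' \
    https://customer-sandbox.axeda.com/services/v2/dataItem/findHistoricalValues

$ curl -s -u '<username>:<password>' -X POST \
    -H 'Content-Type: application/json' -H 'Accept: application/json' \
    -d '{"id": 9701, "startDate": "2014-07-23T12:33:00Z", "endDate": "2014-07-23T12:35:02Z", "pageNumber": 2, "pageSize": 100}' \
    https://customer-sandbox.axeda.com/services/v2/dataItem/findHistoricalValues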
View full tip
Connect and Monitor Industrial Plant Equipment Learning Path   Learn how to connect and monitor equipment that is used at a processing plant or on a factory floor.   NOTE: Complete the following guides in sequential order. The estimated time to complete this learning path is 180 minutes.   Create An Application Key  Install ThingWorx Kepware Server Connect Kepware Server to ThingWorx Foundation Part 1 Part 2 Create Industrial Equipment Model Build an Equipment Dashboard Part 1 Part 2
View full tip
This document is a general reference to help with configuring and troubleshooting a Google email account with the ThingWorx mail extension.

To start with the configuration:
SMTP: smtp.gmail.com, port 587, TLS checked. If SSL is being used instead, the port should be 465.
POP3: pop.gmail.com, port 995.

To test, go to "Services" and click on "Test" for the SendMessage service. A successful request will show an empty screen with a green "result" at the top.

Possible errors:
- "Could not connect to SMTP host: smtp.gmail.com, port: 587" with nothing else in the logs. Check your Internet connection to ensure the traffic is not being blocked (a quick connectivity check is sketched at the end of this article).
- <hostname:port>/Thingworx/Common/locales/en-US/translation-login.json 404 (Not Found). Check your Gmail folders for incoming messages regarding a sign-in from an unknown device. The subject will be "Someone has your password", and the email content will include the device, location, and timestamp of when the incident occurred. Be sure to choose the "This was me" option to prevent further blocking. This may or may not be sufficient; sometimes it leads to another error - "Please log in via your web browser and 534-5.7.14 then try again. 534-5.7.14 Learn more at 534 5.7.14..."

That error can be resolved by:
1. Turning off the blocking of "less secure apps" in your Gmail settings (i.e., allowing less secure apps). You have to be logged in to your Gmail account to follow the link: https://www.google.com/settings/security/lesssecureapps
2. Changing your Gmail password afterwards. I don't have a valid explanation as to why, but this is a required step, and the error doesn't clear without changing the password.
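If the "Could not connect to SMTP host" error persists, a quick way to confirm that the Gmail SMTP endpoint is reachable from the ThingWorx host, independently of the mail extension, is to use openssl from the command line (assuming openssl is installed on that machine):

$ openssl s_client -connect smtp.gmail.com:587 -starttls smtp
$ openssl s_client -connect smtp.gmail.com:465

A completed TLS handshake (certificate chain printed and a session established) indicates that the network path and port are open; if these commands hang or fail, the problem is with firewalls or proxies rather than with the mail extension configuration.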
View full tip
The following power point contains some reference slides to start up with DSE/ThingWorx integration. Start with understanding DSE architecture and specifically, the differences compared to regular Relational Databases. Free online courses offered by DataStaxAcademy: –https://academy.datastax.com/courses/understanding-cassandra-architecture –https://academy.datastax.com/courses/installing-and-configuring-cassandra   The following section will guide you through some of the specifics: http://datastax.com/documentation/cassandra/2.0/cassandra/architecture/architecturePlanningAbout_c.html
View full tip
The ThingWorx EMS and SDK based applications follow a three step process when connecting to the Platform: Establish the physical websocket:  The client opens a websocket to the Platform using the host and port that it has been configured to use.  The websocket URL exposed at the Platform is /Thingworx/WS.  TLS will be negotiated at this time as well. Authenticate:  The client sends a AUTH message to the platform, containing either an App Key (recommended) or username/password.  The AUTH message is part of the Thingworx AlwaysOn protocol.  If the client attempts to send any other message before the AUTH, the server will disconnect it.  The server will also disconnect the client if it does not receive an AUTH message within 15 seconds.  This time is configurable in the WSCommunicationSubsystem Configuration tab and is named "Amount of time to wait for authentication message (secs)." Once authenticated the SDK/EMS is able to interact with the Platform according to the permissions applied to its credentials.  For the EMS, this means that any client making HTTP calls to its REST interface can access Platform functionality.  For this reason, the EMS only listens for HTTP connections on localhost (this can be changed using the http_server.host setting in your config.json). At this point, the client can make requests to the platform and interact with it, much like a HTTP client can interact with the Platform's REST interface.  However, the Platform can still not direct requests to the edge. Bind:  A BIND message is another message type in the ThingWorx AlwaysOn protocol.  A client can send a BIND message to the Platform containing one or more Thing names or identifiers.  When the Platform receives the BIND message, it will associate those Things with the websocket it received the BIND message over.  This will allow the Platform to send request messages to those Things, over the websocket.  It will also update the isConnected and lastConnection time properties for the newly bound Things. A client can also send an UNBIND request.  This tells the Platform to remove the association between the Thing and the websocket.  The Thing's isConnected property will then be updated to false. For the EMS, edge applications can register using the /Thingworx/Things/LocalEms/Services/AddEdgeThing service (this is how the script resource registers Things).  When a registration occurs, the EMS will send a BIND message to the Platform on behalf of that new resource.  Edge applications can de-register (and have an UNBIND message sent) by calling /Thingworx/Things/LocalEms/RemoveEdgeThing.
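For reference, here is how those three steps look from an edge application built with the ThingWorx Edge Java SDK. This is a minimal sketch, not taken from this article: the host, App Key and Thing name are placeholders, and the class and method names are the ones commonly used in the Edge Java SDK, so treat the exact signatures as assumptions and check them against the SDK version you are using.

import com.thingworx.communications.client.ClientConfigurator;
import com.thingworx.communications.client.ConnectedThingClient;
import com.thingworx.communications.client.things.VirtualThing;

public class EdgeConnectionExample {
    public static void main(String[] args) throws Exception {
        // Step 1: physical websocket -- point the client at the platform's /Thingworx/WS endpoint
        ClientConfigurator config = new ClientConfigurator();
        config.setUri("wss://myplatform.example.com:443/Thingworx/WS");  // placeholder host
        // Step 2: authenticate -- the SDK sends the AUTH message using this App Key
        config.setAppKey("<application key>");                           // placeholder key

        ConnectedThingClient client = new ConnectedThingClient(config);

        // Step 3: bind -- associate a VirtualThing with the websocket so the
        // platform can route requests to it and update isConnected/lastConnection
        VirtualThing thing = new VirtualThing("MyRemoteThing", "Example remote thing", client);
        client.bindThing(thing);

        client.start();                  // opens the websocket and authenticates
        if (client.waitForConnection(30000)) {
            System.out.println("Connected and bound: " + thing.getName());
        }

        // ... interact with the platform ...

        client.shutdown();               // closes the websocket; the platform marks the Thing as disconnected
    }
}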
View full tip
We live in a connected world where we can (and want to!) receive instant updates and notifications. ThingWorx leverages the power of Web 2.0 and its Always-On technology to deliver that, but our friendly SMS providers have also provided an easy and powerful way to deliver SMS notifications right to your phone: email to text!

Set up a 'notification' Thing using our MailServer Template, set up your outgoing e-mail server, and you are now ready to invoke the 'SendMessage' service on a given event. All you need now is the email address of your SMS number, which you can find by following this link: List of e-Mail to SMS addresses (http://sms411.net/how-to-send-email-to-a-phone/)
View full tip
Meet Neal. When Neal joined PTC five years ago, he immediately hit the ground running on IoT initiatives, working in multiple areas ranging from pre-sales to partner relations. Today, he is a Worldwide ThingWorx Center of Excellence Principal Lead at PTC, and his biggest focus is supporting the go-to-market for the Microsoft partnership.   I sat down with Neal recently to hear the details on exactly how Azure and ThingWorx can be used to develop world-class IIoT applications.   Kaya: Can you explain how Azure and ThingWorx work together? Neal: Yes, so Azure provides the cloud infrastructure that our customers need in order to deploy ThingWorx.   By having Azure as our preferred cloud platform, we’re able to specialize our R&D efforts into utilizing functionality that is available in Azure, rather than having to reinvent the wheel ourselves for each cloud platform in the attempt to remain cloud-agnostic. By leveraging a single, already quite powerful, cloud platform through Azure, we’re able to maximize our development efforts.   Kaya: What was the major problem that led to us investigating cloud options? Neal: There were two issues that our users had. The first was we often had complicated installs and setup procedures because we were genericizing the whole process, so the initial setup and run was complicated and expensive. For example, we were requiring them to setup additional VMs and components, and we were giving them generic instructions because we were being very agnostic, when they had already chosen outside of us to use one of the cloud platforms. We knew our customers wanted to move fast, so we had to make it easier and faster for them to see results.   The other issue we ran into with customers was the confusion in the offerings. For ThingWorx, ingest is just one aspect of IoT. ThingWorx is particularly strong in areas like enabling mixed reality and augmented reality as well as application enablement. And, while we also have the ability to perform ingest capabilities, Microsoft is especially powerful when it comes to ingest capabilities and security. We decided that the smartest solution was to leverage Microsoft’s expertise in data ingestion to make ThingWorx even more powerful; so, we made the Azure IoT Hub Connector. By partnering with Microsoft, we have joint architecture where you can see how Microsoft provides key features and ThingWorx will run on top of those features and get you faster to the market to develop the application.   Below is an example of a high-availability deployment of ThingWorx on Azure that utilizes ThingWorx Azure IoT Connectors to access an Azure IoT Hub.  High-Availability Deployment of ThingWorx on Azure Kaya: Why did we partner with Azure? What specific benefits does Azure offer over other cloud services providers? Neal: When we started to look at what our customers were using for cloud services, we noticed that a lot were using Microsoft. When we join forces with Microsoft, we have a much more wholistic offering around digital transformation. With the partnership, PTC and Microsoft are able to leverage each partner’s respective strengths to provide even more powerful IIoT solutions. You have Windchill and Microsoft’s business application strategies; you have Vuforia and Microsoft’s mixed reality and augmented reality strategies; and, you have ThingWorx on the Microsoft Azure cloud. Overall, you have a much more complete and powerful offering together.   Kaya: What is your favorite aspect about working at PTC? Neal: The growth. 
There’s been a lot of changes over the last five years that I’ve been here. PTC has been able to leverage its strengths and long-time experience in the CAD and PLM markets to enter and ultimately become a leader in the IIoT market, according to reports by research firms like Gartner and Forrester. In short, the growing IIoT market and PTC’s leadership in the industry.   Note to Readers: You’ve likely heard about our major strategic partnership with Microsoft to leverage our respective IIoT and cloud technologies to optimize the creation, deployment, management and overall use of your IIoT applications. If you haven’t heard about the partnership, check out the press release here. If you’re curious about more aspects of PTC’s partnership with Microsoft, check out this site devoted to sharing how ThingWorx and Azure are better together.
View full tip
The .net-sdk can be configured to emit very detailed debugging and diagnostic information to a log file during execution. The .net-sdk uses the standard .NET System.Diagnostic infrastructure for Logging, as such, all configuration of the .net-sdk logger is done via the standard .NET Logging configuration system. By default, Logging is configured via the standard .NET “App.config” file. Log messages can be routed to any standard .NET TraceListener. Optionally, ThingWorx provides a FixedFieldTraceListener which can be used to output log messages to a file. The use of the ThingWorx provided FixedFieldTraceListener is recommended. The FixedFieldTraceListener when configured will automatically create a "logs" directory in the same location as (a sibling to) the running executable file (.exe). This "logs" directory will contain the log files. Every .NET Class can be configured as a specific “Trace Source” which emits log messages. It is recommended to add at least the following Trace Sources to your App.config file to receive the most useful amount of information: com.thingworx.communications.client.BaseClient com.thingworx.communications.client.ConnectedThingClient com.thingworx.communications.client.things.VirtualThing com.thingworx.communications.client.TwApiWrapper com.thingworx.communications.client.things.filetransfer.FileTransferVirtualThing com.thingworx.communications.client.things.contentloader.ContentLoaderVirtualThing The amount of information emitted can range from very low level Trace messages (the Verbose setting) to nothing at all (the Off setting). The “SourceLevels Enumeration” can be used to control how much information is written out to the log file. For reference, this is the <add name="SourceSwitch" value="Information" /> element in the sample below. Below is sample App.config file. 
<?xml version="1.0" encoding="utf-8"?> <configuration>     <system.diagnostics>       <sources>         <source name="com.thingworx.common.utils.JSONUtilities" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch" >           <listeners>             <add name="file" />           </listeners>         </source>         <source name="com.thingworx.communications.client.TwApiWrapper" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch" >           <listeners>             <add name="file" />           </listeners>         </source>         <source name="com.thingworx.communications.client.BaseClient" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch" >           <listeners>             <add name="file" />           </listeners>         </source>         <source name="com.thingworx.communications.client.ConnectedThingClient" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch" >           <listeners>             <add name="file" />           </listeners>         </source>         <source name="com.thingworx.communications.client.things.contentloader.ContentLoaderVirtualThing" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch" >           <listeners>             <add name="file" />           </listeners>         </source>         <source name="com.thingworx.communications.client.things.filetransfer.FileTransferVirtualThing" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch" >           <listeners>             <add name="file" />           </listeners>         </source>         <source name="com.thingworx.communications.client.things.VirtualThing" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch" >           <listeners>             <add name="file" />           </listeners>         </source>         <source name="com.thingworx.metadata.annotations.MetadataAnnotationParser" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch" >           <listeners>             <add name="file" />           </listeners>         </source>       </sources>       <switches>         <add name="SourceSwitch" value="Information" />       </switches>       <sharedListeners>         <add name="file" type="com.thingworx.common.logging.FixedFieldTraceListener, thingworx-dotnet-common, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" initializeData="false"/>       </sharedListeners>       <trace autoflush="true" indentsize="4" />   </system.diagnostics> </configuration>
View full tip
Scripto provides a RESTful endpoint for Groovy Custom Objects on the Axeda Platform.  Custom Objects exposed via Scripto can be accessed via a GET or a POST, and the script will have access to request parameters or body contents. Any Custom Object of the "Action" type will automatically be exposed via Scripto. The URL for a Scripto service is currently defined by the name of the Custom Object: GET: http://{{YourHostName}}/services/v1/rest/Scripto/execute/<customObjectName> Scripto enables the creation of "Domain Specific Services". This allows implementers to take the Axeda Domain Objects (Assets, Models, DataItems, Alarms) and expose them via a service that models the real-world domain directly (trucks, ATMs, MRI Machines, sensor readings). This is especially useful when creating a domain-specific UI, or when integrating with another application that will push or pull data. Authentication There are several ways to test your Scripto scripts, as well as several different authentication methods. The following authentication methods can be used: Request Parameter credentials: ?username=<yourUserName>&password=<yourPassword> Request Parameter sessionId (retrieved from the Auth service): ?sessionid=<sessionId> Basic Authentication (challenge): From a browser or CURL, simply browse to the URL to receive an HTTP Basic challenge. Request Parameters You can access the parameters to the Groovy script via two Objects, Call and Request. Request is actually just a sub-class of Call, so the values will always be the same regardless of which Object you use.  Although parameters may be accessed off of either object, Call is preferable when Chaining Custom Objects (TODO LINK) together.  Call also includes a reference to the logger which can be used to log debug messages. GET:  http://{{YourHostName}}/services/v1/rest/Scripto/execute/<Your Script Name>?sessionid=<Session Id>&serial_number=mySerialNumber Accessing Parameters through the Request Object import com.axeda.drm.sdk.scripto.Request // Request.parameters is a map of strings def serial_number = Request.parameters.serial_number assert serial_number == "mySerialNumber"       Accessing Parameters through the Call Object import com.axeda.drm.sdk.customobject.Call // Call.parameters is a map of strings def serial_number = Call.parameters.serial_number assert serial_number == "mySerialNumber"       Accessing the POST Body through the Request Object The content from a POST request to Scripto is accessible as a string via the body field in the Request object.  Use Slurpers for XML or JSON to parse it into an object. POST:  http://{{YourHostName}}/services/v1/rest/Scripto/execute/<Your Script Name>?sessionid=<Session Id> Response: { "serial_number":"mySerialNumber"} import com.axeda.drm.sdk.scripto.Request def body = Request.body def slurper = new JsonSlurper() def result = slurper.parseText(body) assert result.serial_number == "mySerialNumber"       Returning Plain Text Groovy custom objects must return some content.  The format of that content is flexible and can be returned as plain text, JSON, XML, or even binary files. The follow example simply returns plain text. GET:  http://{{YourHostName}}/services/v1/rest/Scripto/execute/<Your Script Name> // Outputs:  hello return ["Content-Type":"text/plain","Content":"hello"]       Returning JSON We use the JSONObject Class to format our Map-based content into a JSON structure. 
This eliminates the need for any concern around formatting: you just build up Maps of Maps and they will be properly formatted by the fromObject() utility method.

GET:  http://{{YourHostName}}/services/v1/rest/Scripto/execute/<Your Script Name>

import net.sf.json.JSONObject

root = [
  items: [
    num_1: "one",
    num_2: "two"
  ]
]

/** Outputs
{
  "items": {
    "num_1": "one",
    "num_2": "two"
  }
}
**/

return ['Content-Type': 'application/json', 'Content': JSONObject.fromObject(root).toString(2)]

Link to JSONObject documentation

Returning XML
To return XML, we use the MarkupBuilder to build the XML response. This allows us to create code that follows the format of the XML that is being generated.

GET:  http://{{YourHostName}}/services/v1/rest/Scripto/execute/<Your Script Name>?sessionid=<Session Id>

import groovy.xml.MarkupBuilder

def writer = new StringWriter()
def xml = new MarkupBuilder(writer)
xml.root() {
    items() {
        num_1("one")
        num_2("two")
    }
}

/** Outputs
<root>
  <items>
    <num_1>one</num_1>
    <num_2>two</num_2>
  </items>
</root>
**/

return ['Content-Type': 'text/xml', 'Content': writer.toString()]

Link to Groovy MarkupBuilder documentation

Returning Binary Content
To return binary content, you will typically use the fileStore API to upload a file that you can then download using Scripto.  See the fileInfo section to learn more. In this example we connect the InputStream associated with the getFileData() method directly to the output of the Scripto script. This causes the bytes available in the stream to be forwarded directly to the client as the body of the response.

GET:  http://{{Your Host Name}}/services/v1/rest/Scripto/execute/{{Your Script Name}}?sessionid={{Session Id}}&fileId=123

import static com.axeda.sdk.v2.dsl.Bridges.*
import com.axeda.services.v2.*
import com.axeda.sdk.v2.exception.*

def contentType = parameters.type ?: 'image/jpg'
return ['Content': fileInfoBridge.getFileData(parameters.fileId), 'Content-Type': contentType]

The Auth Service - Authentication via AJAX
Groovy scripts are accessible to AJAX-powered HTML apps with Axeda instance credentials.  To obtain a session from an Axeda server, you should make a GET call to the Authentication service. The service is located at the following example URL:
https://{{YourHostName}}/services/v1/rest/Auth/login
This service accepts a valid username/password combination in the incoming Request parameters and returns a SessionID. The parameter names it expects to see are as follows:
- principal.username: The username for the valid Axeda credential.
- password: The password for the supplied credential.
A sample request to the Auth Service: GET: https://{{YourHostName}}/services/v1/rest/Auth/login?principal.username=YOURUSER&password=YOURPASS Would yield this response (at this time the response is always in XML): <ns1:WSSessionInfo xsi:type="ns1:WSSessionInfo" xmlns:ns1="http://type.v1.webservices.sl.axeda.com" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">   <ns1:created>2013-08-12T13:19:37 +0000</ns1:created>   <ns1:expired>false</ns1:expired>   <ns1:sessionId>19c33190-dded-4655-b2c0-921528f7b873</ns1:sessionId> <ns1:sessionTimeout> 1800 </ns1:sessionTimeout> </ns1:WSSessionInfo>       The response fields are as follows: Field Name Description created The timestamp for the date the session was created expired A boolean indicating whether or not this session is expired (should be false) sessionId The ID of the session which you will use in subsequent requests sessionTimeout The time (in seconds) that this session will remain active for The Auth Service is frequently invoked from JavaScript as part of Custom Applications. The following code demonstrates this style of invocation. function authenticate(host, username, password) {             try {                 netscape.security.PrivilegeManager.enablePrivilege("UniversalBrowserRead");             } catch (e) {                 // must be IE             }             var xmlHttpReq = false;             var self = this;             // Mozilla/Safari             if (window.XMLHttpRequest) {                 self.xmlHttpReq = new XMLHttpRequest();             }             // IE             else if (window.ActiveXObject) {                 self.xmlHttpReq = new ActiveXObject("Microsoft.XMLHTTP");             }             var SERVICES_PATH = "/services/v1/rest/"             var url = host + SERVICES_PATH + "Auth/login?principal.username=" + username + "&password=" + password;             self.xmlHttpReq.open('GET', url, true);             self.xmlHttpReq.onreadystatechange = function() {                 if (self.xmlHttpReq.readyState == 4) {                     getSessionId(self.xmlHttpReq.responseXML);                 }             }             self.xmlHttpReq.send() } function getSessionId(xml) {             var value             if (window.ActiveXObject) {                 // xml traversing with IE                 var objXML = new ActiveXObject("MSXML2.DOMDocument.6.0");                 objXML.async = false;                 var xmldoc = objXML.loadXML(xml);                 objXML.setProperty("SelectionNamespaces", "xmlns:ns1='http://type.v1.webservices.sl.axeda.com'");                 objXML.setProperty("SelectionLanguage","XPath");                 value =  objXML.selectSingleNode("//ns1:sessionId").childNodes[0].nodeValue;             } else {                 // xml traversing in non-IE browsers                 var node = xml.getElementsByTagNameNS("*", "sessionId")                 value = node[0].textContent             }             return value } authenticate ("http://mydomain.axeda.com", "myUsername", "myPassword")       Calling Scripto via AJAX Once you have obtained a session id through authentication via AJAX, you can use that session id in Scripto calls. The following is a utility function which is frequently used to wrap Scripto invocations from a UI. 
function callScripto(host, scriptName, sessionId, parameter) {             try {                 netscape.security.PrivilegeManager.enablePrivilege("UniversalBrowserRead");             } catch (e) {                 // must be IE             }             var xmlHttpReq = false;             var self = this;             // Mozilla/Safari             if (window.XMLHttpRequest) {                 self.xmlHttpReq = new XMLHttpRequest();             }             // IE             else if (window.ActiveXObject) {                 self.xmlHttpReq = new ActiveXObject("Microsoft.XMLHTTP");             }             var url = host + SERVICES_PATH + "Scripto/execute/" + scriptName + "?sessionid=" + sessionId;             self.xmlHttpReq.open('GET', url, true);             self.xmlHttpReq.onreadystatechange = function() {                 if (self.xmlHttpReq.readyState == 4) {                     updatepage(div, self.xmlHttpReq.responseText);                 }             }             self.xmlHttpReq.send(parameter); } function updatepage(div, str) {             document.getElementById(div).innerHTML = str; } callScripto("http://mydomain.axeda.com", "myGroovyScriptName", "mySessionId", "myparameter=foo")       A more modern jQuery-based example might look like the following: function callScripto(host, scriptName, sessionId, parameter) {     var url = host + '/services/v1/rest/Scripto/execute/' + scriptName + '?sessionid=' + sessionId     if ( parameter != null ) url += '&' + parameter     $.ajax({url: url,               success:  function(response) {  updatepage(div, response); }           }); } function updatepage(div, str) {     $("#" + div).innerHTML = str } callScripto("http://mydomain.axeda.com", "myGroovyScriptName", "mySessionId", "myparameter=foo") In Conclusion As shown above, Scripto offers a number of ways to interact with the platform.  On each version of the Axeda Platform, all supported v1 and v2 APIs are available for Scripto to interact with the Axeda domain objects and implement business logic to solve real-world customer problems. Bibliography ​(PTC.net account required)     Axeda v2 API/Services Developer's Reference Version 6.8.3 August 2015     Axeda® v1 API Developer’s Reference Guide Version 6.8 August 2014     Documentation Map for Axeda® 6.8.2 January 2015
View full tip
In this blog I will be covering the initial setup of ThingWorx Android SDK with a sample app (supplied with the Android SDK) and set it up with a web based revision control system like Bitbucket's (free account plan). I'll also be covering quick information on how to enable the HTTPS connection for the ThingWorx server on Windows platform. This will allow for secured connection to the ThingWorx server from the Android application. Do note this is only a reference guide for setting it up with revision control system, you’re free to choose to setup the Android Studio without Bitbucket or with any other revision control system. It’ll be just fine to have a local Git/SVN/Perforce etc. to manage the code repository, setting Android Studio with Bitbucket is officially not supported. Pre-requisite: 1. Download and unzip the ThingWorx Android SDK from https://support.ptc.com 2. An account with Bitbucket is required 3. Download and install Android Studio Project Setup 1. Unzip the downloaded Android SDK on a local drive 2. Start the Android Studio > Import project > navigate to the sample applications location provided with the ThingWorx Android SDK e.g. Thingworx-Android-SDK-X.x.x\samples 3. Select one of the sample application e.g. androidShell, with ThingWorx Android SDK X.x.x there are 3 sample applications currently available when the Android SDK is downloaded: a. Android File Brwoser b. Android Shell c. Android Steam Thing 4. For this blog I'll be setting up the androidShell Android Application 5. Do note that all the sample projects are built using Gradle, so while importing if required select Gradle as the build system for the sample application 6. Once imported successfully in Android Studio you should be able to see the Android project and its file structure like so 7. We'll need an account with Bitbucket, create one if you don't have already 8. Logon to Bitbucket and create a team and new repository under that team 9. Navigate to the repository created in Bitbucket 10. Create a local GIT code repo if you don't already have one, and copy over all the contents from the Android SDK x.x.x.zip to that location 11. On your local machine open a command prompt and navigate to the drive where the local GIT's code repository resides, i.e. the folder where you unzipped the android SDK and execute the commands as mentioned: a. git remote add origin https://<accountName@bitbucket.org>/<teamName>/<projectName>.git b. git push -u origin master Note: This will add the contents of your local GIT repository to the empty code repository you’ve created under the team on Bitbucket. 12. You can also use SourceTree UI application on windows for creating, GIT and Mercury based code repository and connect it to your Bitbucket account 13. On successful commit following will be logged in the command prompt Enabling HTTPS on Tomcat and connecting to Android Application Securing Tomcat on Windows You can skip this section if you already have Tomcat running ThingWorx configured for HTTPS connection. 1. Execute the command in a command prompt C:\>"%JAVA_HOME%\bin\keytool" -genkey -alias tomcat -keyalg RSA -keystore C:\KeystoreTomcat\tomcat.keystore Note: Executing above step will require you to add additional information to the keystore like, Org name, full name location, etc. 2. 
Edit the <tomcatInstallation>\conf\server.xml file   <Connector   port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol"               maxThreads="200" SSLEnabled="true" scheme="https" secure="true" keystoreFile="C:\KeystoreTomcat\tomcat.keystore" keystorePass="<giveYourPasswordHere>" clientAuth="false" sslProtocol="TLS">       <!-- <SSLHostConfig>             <Certificate certificateKeystoreFile="conf/localhost-rsa.jks"                         type="RSA" />         </SSLHostConfig>-->     </Connector> 3. Restart the tomcat service Note: For detailed information on securing the Tomcat on Windows refer to SSL/TLS Configuration How-TO . Note HTTPS setup is only for the Tomcat where ThingWorx is deployed and does not involve certificate setup on the Android Application side Finally,to test if the HTTPS setup was successful or not navigate to https://<serverName/IP>:8443/Thingworx in a web browser. Port 8443 is the default HTTPS port. Starting and connecting Android Application to ThingWorx Now that we have a working secured Tomcat and Android Studio setup with the sample Android Application, androidShell. Let's build and run the application using an emulated Android device in Android Studio 1. Navigate to https://localhost:8443/Thingworx > Import/Export > Import from file and import the Thing entity which will connect to the SampleThing when you’ll run the Android application, for e.g. I imported the Thingworx-Android-SDK-X.X.X\samples\android-shell\entity\Things_AndroidSampleThing.xml because I will be running the androidShell application 2. Attempting to run the Android application without the entity created in ThingWorx first, Thing will show as unbound in RemoteThing. 3. Create an AppKey in ThingWorx > Security > Application Keys with sufficient rights 4. Go to the Android Studio's toolbar and click on AVD Manager icon 5. This will open the Android Virtual Device Manager and lets you create a virtual Android device 6. You can of course use your own actual device to connect over USB and install and test the application on actual hardware 7. If you already have a Virtual or actual device connected to the system, click on Make Project icon in the toolbar 8. Once the Make finishes run the sample application, in my case androidShell application with the Run icon, like so 9. You may now be prompted with the options to select a device virtual or actual 10. Select as desired and click OK 11. This will now launch the application on the selected device, I have selected to launch on the virtual device which will start and emulated Android Device 12. When initiating/running the application for the first time you will be directed to the Settings screen allowing you to enter the connection URI and the Application Key to connect to the ThingWorx server 13. You have to follow one of the following two URI schemes while attempting to connect to a ThingWorx Server a. For HTTP connection use : ws://<machineIP/Name>:8080/Thingworx/WS b. For HTTPS connection use : wss://<machineIP/Name>:8443/Thingworx/WS Note: Ports may differ as these are the default ports, if you are running ThingWorx on different port enhance the URI accordingly 14. Since my ThingWorx is reachable on HTTPS connection i'll use the HTTPS connectino URI scheme and the application key that I have already created in the ThingWorx, which is bound to Administrator user 15. Once done press the back button on the screen to initiate the connection attempt 16. 
If all's set as it should be you will be able to see the Connected to Server option checked and the Property count for the Serial Number Property being updated every second For more detail on ThingWorx Android SDK refer to the ThingWorx Edge SDKs and WebScocket based Edge MicroServer (WS EMS) Help Center
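If the HTTPS check at https://<serverName/IP>:8443/Thingworx does not come up as expected, two quick ways to verify the keystore and the TLS listener are sketched below. These assume the keystore path used in the keytool step above and that openssl is available on the machine; adjust the names as needed.

C:\> "%JAVA_HOME%\bin\keytool" -list -v -keystore C:\KeystoreTomcat\tomcat.keystore
(lists the certificates in the keystore; you will be prompted for the keystore password)

C:\> openssl s_client -connect <serverName/IP>:8443
(should print the certificate chain and complete a TLS handshake if the Connector is configured correctly)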
View full tip
Hi Community,

I've recently had a number of questions from colleagues around architectures involving MQTT and what our preferred approach was.  After some internal verification, I wanted to share an aggregate of my findings with the ThingWorx Architect and Developer Community.

PTC currently supports four methods for integrating with MQTT for IoT projects:
- ThingWorx Azure IoT Hub Connector
- ThingWorx MQTT Extension
- ThingWorx Kepware Server (which provides both a southbound MQTT Client driver and a northbound IoT Gateway MQTT agent, described below)

Choice is nice, but it adds complexity and sometimes confusion.  The intent of this article is to clarify and provide direction on the subject to help others choose the path best suited for their situation.

ThingWorx MQTT Extension
The ThingWorx MQTT extension has been available on the marketplace as an unsupported "PTC Labs" extension for a number of years.  Recently its status has been upgraded to "PTC Supported" and it has received some attention from R&D, getting some bug fixes and security enhancements.  Most people who have used MQTT with ThingWorx are familiar with this extension.  As with anything, it has advantages and disadvantages.  You can easily import the extension without having administrative access to the machine, it is easy to move around and store with projects, and it can be up and running quite quickly.  However, it is also quite limited when it comes to the flexibility required when building a production application, is tied directly to the core platform, and does not get feature/functionality updates.

The MQTT extension is a good choice for PoCs, demos, benchmarks, and prototypes as it provides MQTT integration relatively quickly and easily.  As an extension which runs with the core platform, it is not a good choice as a part of a client/enterprise application where MQTT communication reliability is critical.

ThingWorx Azure IoT Hub Connector
Although Azure IoT Hub is not a fully functional MQTT broker, Azure IoT does support MQTT endpoints on both IoT Hub and IoT Edge.  This can be an interesting option to have MQTT devices publish to Azure IoT and be integrated to ThingWorx using the Azure IoT Hub Connector, without actually requiring an MQTT broker to run and be maintained.  The Azure IoT Hub Connector works similarly to the PAT and is built on the Connection Server, but adds the notion of device management and security provided by Azure IoT.

When using Azure IoT Edge configured as a transparent gateway with buffering (store and forward) enabled, this approach has the added benefit of being able to buffer MQTT device messages at a remote site with the ability to handle Internet interruptions without losing data.

This approach also offers far greater integrated security capabilities by leveraging certificates and tying into Azure Key Vault, as well as easily scaling up the resources receiving the MQTT messages (IoT Hub and Azure IoT Hub Connector).  Considering that this approach is built on the Connection Server core, it also follows our deployment guidance for processing communications outside of the core platform (unlike the extension approach).

ThingWorx Kepware Server
As some will note, Kepware has some pretty awesome MQTT capabilities: both as north- and southbound interfaces.  The MQTT Client driver allows creating an MQTT channel to devices communicating via MQTT, with auto-tag creation (from the MQTT payload).  Coupled with the native ThingWorx AlwaysOn connection, you can easily connect Kepware to an on-premise MQTT broker and connect these devices to ThingWorx over AlwaysOn.
The IoT Gateway plug-in has an MQTT agent which allows publishing data from all of your KepWare connected devices to an MQTT broker or endpoint.  The MQTT agent can also receive tag updates on a different topic and write back to the controllers.  We’ve used this MQTT agent to connect industrial control system data to ThingWorx through cloud platforms like Azure IoT, AWS, and communications providers.   ThingWorx Product Segment Direction A key factor in deciding how to design your solution should be aligned with our product development direction.  The ThingWorx Product Management and R&D teams have for years been putting their focus on scalable and enterprise-ready approaches that our partners and customers can build upon.  I mention this to make it clear that not all supported approaches carry the same weight.  Although we do support the MQTT extension, it is not in active development due to the fact that out-of-platform microservices-based communication interfaces are our direction forward.   The Azure IoT Hub Connector, being built on the Connection Server is currently the way forward for MQTT communications to the ThingWorx Foundation.   Regards,   Greg Eva
View full tip
This project was developed out of curiosity about how ThingWorx communicates with sensors and vice versa. A Smart Parking system idea immediately came to mind, and I started working on it. While heading from home to the office I always worry about car parking space at the office, especially in the rainy season. This project helps the user find a parking space. The project has 4 sections, as follows:
1) Smart Parking system: A system application developed in ThingWorx guides the user to an empty car parking space. Sensors placed at each parking slot sense the presence of a car. A program running on a Raspberry Pi board collects the sensor information and sends it to the Smart Car Parking System application in ThingWorx. The data received through the sensors is displayed on a ThingWorx dashboard/mashup.
2) Live Traffic: This embeds a Google Map and shows the traffic around the user's current location.
3) Traffic Blog: If users are visiting a place and have questions regarding parking, traffic conditions etc., they can post their questions here and people around that area can answer them. Questions are not restricted to parking; they can cover the best places to visit in an area, restaurants, shops etc.
4) Automobile Wiki: This page provides documented help on anything related to automobiles, e.g. how to change car tyres, how to change car wipers, etc.
View full tip
This is a basic troubleshooting guide for ThingWorx. It goes over the importance, types and levels of logs, getting started on troubleshooting the Composer, Mashup and Remote Connectivity.     For full-sized viewing, click on the YouTube link in the player controls.   Visit the Online Success Guide to access our Expert Session videos at any time as well as additional information about ThingWorx training and services.
View full tip
    About   This is part of a ThingBerry related blog post series.         ThingBerry is ThingWorx installed on a RaspBerry Pi, which can be used for portable demonstrations without the need of utilizing e.g. customer networks. Instead the ThingBerry provides its own custom WIFI hotspot and allows Things to connect and send / receive demo data on a small scale.   In this particual blog post we'll discuss on how to connect a ESP8266 module to the ThingBerry WIFI hotspot and send data from a DHT-11 sensor via the MQTT protocol.   As the ThingBerry is a highly unsupported environment for ThingWorx, please see this blog post for all related warnings.   Install MQTT broker on the ThingBerry     To install mosquitto as a MQTT broker, log in to the ThingBerry and run     sudo apt-get install mosquitto   This will provide a basic broker installation, which is good enough for this example. MQTT clients (including ThingWorx) will connect to this broker to exchange messages. There will be no added security like encrypted traffic shown in this example, it's however good practise to secure MQTT broker / client connections.   While the ESP8266 module is publishing information, ThingWorx will subscribe to the corresponding topics to update its internal property values with what is sent by the ESP8266 module.   For more information on MQTT, how to configure it for ThingWorx or more security relevant information also see   https://community.thingworx.com/message/5063#5063 https://community.thingworx.com/community/developers/blog/2016/08/08/securing-mqtt-connection-to-thingworx-platform?sr=tcontent   Configure the ESP8266     There are too many instructions on the web already on how to initially setup the ESP8266 and use it with the Arduino IDE. I'll therefore just refer to Google which covers the topic more extensively than I ever could.   All coding in this example is done in the Arduino IDE and is pushed to the ESP8266 (NodeMCU) via USB. For this you might need to install a CH340g USB driver for the NodeMCU.   In the Arduino IDE under Tools, I have set my environment to   Board: NodeMCU 1.0 (ESP-12E Module) CPU Frequency: 80 MHz Flash Size: 4M (3M SPIFFS) Upload Speed: 115200 Port: COM3   Under Sketch > Include Library > Manage Libraries add / install the following libraries:   DHT sensor library by Adafruit Adafruit Unified Sensor by Adafruit PubSubClient by Nick O'Leary   These bring the libraries necessary to read data from the DHT-11 sensor and to configure the ESP8266 as MQTT client.     Wiring the DHT-11 sensor     The following image shows the PINs on the ESP8266     I'm using a DHT-11 sensor with cables included and already fixed to a board with 3 PINs. In case you're using a different version, there might be additional components and wiring required, like a resistor etc. Google might help here as well.     Ensure that neither board nor sensor are plugged in, and the ESP8266 is powered off.   To hook the sensor up to the ESP8266, join   ( - ) to GND ( + ) to 3.3V (out) to D3   After all the connections are made, connect the ESP8266 via USB to a computer / laptop with the Arudino IDE configured.   Coding   In the Arduino IDE use the following code - adjust the WIFI settings and the MQTT broker configuration. Ensure to rename the ESP_xx name / topic to something more meaningful, e.g. a specific device name (or just leave it as is if in doubt).   Use the ssid and wpa_passphrase from the hostapd.conf used to configure the ThingBerry as WIFI hotspot.   
Copy & paste the code below into the Arduino IDE, verify it and upload it to the ESP8266.

While searching for a WIFI connection, the device's blue LED will blink. A successful connection to the broker and publishing the values will result in a steady blue LED. In case the LED is off, the connection to the broker is lost or messages cannot be published.

For troubleshooting, use the Serial Monitor function (at 115200 baud) in the Arduino IDE. In case sensor data cannot be read, but the wiring is correct and the code is addressing the correct PIN, verify the sensor is indeed working. It took me a long time to figure out that the first sensor I used was a defective device.

The current configuration sends updates every 10 seconds - longer intervals might make more sense, but can trigger a timeout for the MQTT broker. In this case the program will re-connect automatically and log corresponding messages in the Serial Monitor. This might seem like an error, but is indeed intended behavior of the code and the MQTT broker.

Configure MQTT Thing in ThingWorx

Create a new Thing in ThingWorx based on the MQTT Template. Add two properties:

temperature
humidity

Set both to persistent and logged and set the Data Change Type to ALWAYS. Also configure a Value Stream to log a history of values.

In the configuration, add two more subscriptions. Activate the "subscribe" checkbox and map name (local property) to topic (MQTT topic), e.g.

name = temperature; topic = ESP_xx/temp
name = humidity; topic = ESP_xx/hum

Ensure the correct servernames, ports etc. are configured (an empty servername will use the localhost).

Save the configuration. Property values should now be updated from the MQTT broker, depending on what the device is sending.

Code

#include "DHT.h"
#include "PubSubClient.h"
#include "ESP8266WiFi.h"

/*
 * Configure parameters for sensor and network / MQTT connections
 */

// setup DHT 11 pin and sensor
#define DHTPin D3
#define DHTTYPE DHT11

// setup WiFi credentials
#define WLAN_SSID "mySSID"
#define WLAN_PASS "WIFIpassword"

// setup MQTT
#define MQTTBROKER "mqttbrokerhostname"
#define MQTTPORT 1883

// setup built-in blue LED
#define LED 2

/*
 * ============================================================
 * DO NOT CHANGE ANYTHING BELOW
 * (unless you know what you're doing)
 */

// initiate DHT
DHT dht(DHTPin, DHTTYPE);

// initiate MQTT client
WiFiClient wifiClient;
PubSubClient client(MQTTBROKER, MQTTPORT, wifiClient);

/*
 * setup
 */

void setup() {

  // switch off internal LED
  pinMode(LED, OUTPUT);
  digitalWrite(LED, HIGH);

  // start serial monitor
  Serial.begin(115200);

  // start DHT
  dht.begin();

  // start WiFi
  WiFi.begin(WLAN_SSID, WLAN_PASS);

}

/*
 * the loop
 */

void loop() {

  // while not connected to WiFi, print "."
  // after connection exit the loop
  // blink LED while having no WiFi signal

  boolean wifiReconnect = false;

  while (WiFi.status() != WL_CONNECTED) {
    digitalWrite(LED, LOW);
    delay(200);
    Serial.print(".");
    digitalWrite(LED, HIGH);
    delay(300);
    wifiReconnect = true;
  }

  // if WiFi has reconnected, print new connection information and turn on LED
  if (wifiReconnect == true) {

    // print connection information and local IP address, mac address
    Serial.println();
    Serial.println("WiFi connected");
    Serial.println(WiFi.localIP());
    Serial.println(WiFi.macAddress());
    Serial.println();

    // turn on built-in LED to indicate successful WiFi connection
    digitalWrite(LED, LOW);
  }

  // if MQTT client is not connected, connect again
  // turn on built-in LED to indicate a successful connection
  if (!client.connected()) {

    Serial.println("Disconnected from MQTT server... trying to connect");

    if (client.connect("ESP_xx")) {
      Serial.println("Connected to MQTT server");
      Serial.println("Topic = ESP_xx");
      digitalWrite(LED, LOW);
    } else {
      Serial.println("MQTT connection failed");
      digitalWrite(LED, HIGH);
    }

    Serial.println();
  }

  // read temperature and humidity from sensor
  float t = dht.readTemperature();
  float h = dht.readHumidity();

  if (isnan(t) || isnan(h)) {

    // if temperature or humidity is not a number, print error
    Serial.println("Failed retrieving data from DHT sensor");

  } else {

    // print temperature and humidity
    Serial.print(t);
    Serial.print("° - ");
    Serial.print(h);
    Serial.print("%");
    Serial.println();

    // only send values to MQTT broker, if client is connected
    if (client.connected()) {

      // boolean to check for errors during payload transfer
      bool isError = false;

      // create payload and publish values via MQTT client
      // use buffer to convert float to char*
      char buffer[10];
      dtostrf(t, 0, 0, buffer);

      if (client.publish("ESP_xx/temp", buffer)) {
        Serial.print("  published /temp  ");
      } else {
        Serial.print("  failed /temp  ");
        isError = true;
      }

      dtostrf(h, 0, 0, buffer);

      if (client.publish("ESP_xx/hum", buffer)) {
        Serial.print("  published /hum  ");
      } else {
        Serial.print("  failed /hum  ");
        isError = true;
      }

      Serial.println();

      // on error, turn off LED
      if (isError == true) {
        digitalWrite(LED, HIGH);
      } else {
        digitalWrite(LED, LOW);
      }
    }
  }

  // sleep for 10 seconds
  // if sleep > default mosquitto timeout : a reconnect is forced for each update-cycle
  delay(10000);
}
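Once the sketch is running, a quick way to check what actually arrives at the broker - before involving ThingWorx at all - is to subscribe to the device topics from the ThingBerry. This is a small sketch under the same assumptions as above (mosquitto clients installed, default broker port, topic prefix left at ESP_xx):

# print every message the ESP8266 publishes under its topic prefix
mosquitto_sub -h localhost -p 1883 -t 'ESP_xx/#' -v

A new line for ESP_xx/temp and ESP_xx/hum should appear roughly every 10 seconds; if nothing shows up here, the MQTT Thing configuration in ThingWorx is not the problem.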
In this blog I will be inspecting the setup below, using the REST APIs exposed by the different components (log-analysis-free guaranteed).

The components are started in stages, and I will do some exploration between each stage:

EMS only
EMS + LSR 1
EMS + LSR 1 + LSR 2

The REST APIs

Edge MicroServer (EMS) REST API
This API is very similar to the ThingWorx platform REST API, see REST APIs Supported by WS EMS for the specifics. I will be monitoring the EMS using the LocalEms virtual Thing (EMS only).

ThingWorx Platform REST API
This is the well known ThingWorx Core REST API - see REST API Core Concepts. I will be monitoring the EMS, from the platform, using an EMSGateway thing. The EMSGateway exposes on the platform some of the LocalEms services (like GetEdgeThings). I'm also inspecting the remote properties / services / events exposed on the things using the RemoteThing::GetRemoteMetadata service.

Lua Script Resource (LSR) REST API
This API largely differs from the ones above; the API documentation is served by the LSR itself (e.g. http://localhost:8001/). I will use it to list the scripts loaded by the LSR process.

Configuration

See above diagram - the platform is listening on port 8084, no encryption, EMS and LSRs on the same host.

Platform configuration: for each edge thing, a corresponding remote thing was manually created on the platform

EMSGateway1 as an EMSGateway Thing Template
EMSOnlyThing as a RemoteThingWithFileTransfer
LUAThing2 as a RemoteThing
LUAThing1 as a RemoteThing
LUAOnlyThing as a RemoteThing

EMS configuration: /etc/config.json - listening on default port 8000

{
    "ws_servers": [{
            "host": "tws74neo",
            "port": 8084
        }
    ],
    "appKey": "xxxxxx-5417-4248-bc01-yyyyyyy",
    "logger": {
        "level": "TRACE"
    },
    "ws_connection": {
        "encryption": "none"
    },
    "auto_bind": [{
            "name": "EMSGateway1",
            "gateway": true
        }, {
            "name": "EMSOnlyThing",
            "gateway": false
        }, {
            "name": "LUAThing2",
            "host": "localhost",
            "port": 8002,
            "gateway": false
        }, {
            "name": "LUAThing1",
            "gateway": false
        }
    ],
    "file": {
        "virtual_dirs": [{
                "emsrepository": "E:\\ptc\\ThingWorx\\EMS-5-3-2\\repositories\\data"
            }
        ],
        "staging_dir": "E:\\ptc\\ThingWorx\\EMS-5-3-2\\repositories\\staging"
    }
}

LSR process (1): /etc/config.lua - listening on default port 8001 (using the out of the box sample Lua scripts)

scripts.log_level = "INFO"

scripts.LUAThing1 = {
    file = "thing.lua",
    template = "example",
}

scripts.sample = {
    file = "sample.lua"
}

LSR process (2): /etc/config2.lua - listening on port 8002 (using the out of the box sample Lua scripts)
This LSR process is started with the command "luaScriptResource.exe -cfg .\etc\config2.lua"

scripts.log_level = "INFO"
scripts.script_resource_port = 8002

scripts.LUAThing2 = {
    file = "thing.lua",
    template = "example",
}

scripts.LUAOnlyThing = {
    file = "thing.lua",
    template = "example",
}

Stage 1 : EMS only

ThingWorx REST API

Request: Call the GetEdgeThings service on the EMSGateway1 thing
POST  twx74neo:8084/Thingworx/Things/EMSGateway1/Services/GetEdgeThings
Response: As expected, only the remote things flagged as auto_bind are listed

name          host       port    path  keepalive  timeout  proto  user  accept
EMSGateway1              8001.0  /     60000.0    30000.0  http         application/json
EMSOnlyThing             8001.0  /     60000.0    30000.0  http         application/json
LUAThing1                8001.0  /     60000.0    30000.0  http         application/json
LUAThing2     localhost  8002.0  /     60000.0    30000.0  http         application/json

Request: Call the GetRemoteMetadata service on the LUAThing1 thing
POST  twx74neo:8084/Thingworx/Things/LUAThing1/Services/GetRemoteMetadata
Response: As expected, remote properties / services and events are not available since the LSR associated with this thing is off.

Unable to Invoke Service GetRemoteMetadata on LUAThing1 : null

EMS REST API

Request: Call the GetEdgeThings service on the LocalEms virtual thing
POST  localhost:8000/Thingworx/Things/LocalEms/Services/GetEdgeThings
Response: output is identical to the gateway thing on the platform

{ "name": "EMSGateway1", "host": "", "port": 8001, "path": "/", "keepalive": 60000, "timeout": 30000, "proto": "http", "user": "", "accept": "application/json" },
{ "name": "EMSOnlyThing", "host": "", "port": 8001, "path": "/", "keepalive": 60000, "timeout": 30000, "proto": "http", "user": "", "accept": "application/json" },
{ "name": "LUAThing1", "host": "", "port": 8001, "path": "/", "keepalive": 60000, "timeout": 30000, "proto": "http", "user": "", "accept": "application/json" },
{ "name": "LUAThing2", "host": "localhost", "port": 8002, "path": "/", "keepalive": 60000, "timeout": 30000, "proto": "http", "user": "", "accept": "application/json" }

LSR REST API
N/A - no LSR process started yet.

Stage 2 : EMS + LSR 1 (8001)

ThingWorx REST API

Request: Call the GetEdgeThings service on the EMSGateway1 thing
POST  twx74neo:8084/Thingworx/Things/EMSGateway1/Services/GetEdgeThings
Response: LUAThing1 is now associated with a LUA script

name          host       port    path                keepalive  timeout  proto  user  accept
EMSGateway1              8001.0  /                   60000.0    30000.0  http         application/json
EMSOnlyThing             8001.0  /                   60000.0    30000.0  http         application/json
LUAThing1     localhost  8001.0  /scripts/Thingworx  60000.0    15000.0  http         application/json
LUAThing2     localhost  8002.0  /                   60000.0    30000.0  http         application/json

Request: Call the GetRemoteMetadata service on the LUAThing1 thing
POST  twx74neo:8084/Thingworx/Things/LUAThing1/Services/GetRemoteMetadata
Response: Now that the Lua script for LUAThing1 is running, remote properties / services and events are available

{"isSystemObject":false,"propertyDefinitions":{"Script_Pushed_Datetime":{"sourceType":"ThingShape","aspects":{"isReadOnly":false,"dataChangeThreshold":0,"defaultValue":1495619610000,"isPersistent":false,"pushThreshold":0,"dataChangeType":"VALUE","cacheTime":0,"pushType":"ALWAYS"},"name":"Script_Pushed_Datetime","description":"","category":"","tags":[],"baseType":"DATETIME","ordinal":0},"Pushed_InMemory_Boolean":{"sourceType":"ThingShape","aspects":....
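For reference, the platform calls above can also be made outside of Composer, e.g. with curl. This is only a hedged sketch: it assumes the application key from the EMS config.json has permission to invoke services on these things, and that the platform listens on twx74neo:8084 without encryption as described above.

# invoke GetEdgeThings on the gateway thing (result is returned as JSON)
curl -X POST \
     -H "appKey: xxxxxx-5417-4248-bc01-yyyyyyy" \
     -H "Content-Type: application/json" \
     -H "Accept: application/json" \
     http://twx74neo:8084/Thingworx/Things/EMSGateway1/Services/GetEdgeThings

# same pattern for the remote metadata of a single thing
curl -X POST \
     -H "appKey: xxxxxx-5417-4248-bc01-yyyyyyy" \
     -H "Content-Type: application/json" \
     -H "Accept: application/json" \
     http://twx74neo:8084/Thingworx/Things/LUAThing1/Services/GetRemoteMetadata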
EMS REST API

LocalEms::GetEdgeThings returns the same output as EMSGateway::GetEdgeThings.

LSR REST API (port 8001)

Request: List all the scripts running in the first LSR
GET  localhost:8001/scripts?format=text/html
Response: We find our sample script and the script associated with LUAThing1 (the Thingworx script is part of the infrastructure and is always there)

Name       Status   Result  File
LUAThing1  Running          E:\ptc\ThingWorx\EMS-5-3-2\etc\thingworx\scripts\thing.lua
sample     Running          sample.lua
Thingworx  Running          E:\ptc\ThingWorx\EMS-5-3-2\etc\thingworx\scripts\thingworx.lu

Stage 3 : EMS + LSR 1 (8001) + LSR 2 (8002)

ThingWorx REST API

Request: Call the GetEdgeThings service on the EMSGateway1 thing
POST  twx74neo:8084/Thingworx/Things/EMSGateway1/Services/GetEdgeThings
Response: LUAOnlyThing is now listed and LUAThing2 is associated with a LUA script

name          host       port    path                keepalive  timeout  proto  user  accept
EMSGateway1              8001.0  /                   60000.0    30000.0  http         application/json
EMSOnlyThing             8001.0  /                   60000.0    30000.0  http         application/json
LUAOnlyThing  localhost  8002.0  /scripts/Thingworx  60000.0    15000.0  http         application/json
LUAThing1     localhost  8001.0  /scripts/Thingworx  60000.0    15000.0  http         application/json
LUAThing2     localhost  8002.0  /scripts/Thingworx  60000.0    15000.0  http         application/json

EMS REST API

LocalEms::GetEdgeThings returns the same output as EMSGateway::GetEdgeThings.

LSR REST API (port 8002)

Request: List all the scripts running in the second LSR
GET  localhost:8002/scripts?format=text/html
Response: Returns the status of all the scripts currently loaded

Name          Status   Result  File
LUAOnlyThing  Running          .\etc\thingworx\scripts\thing.lua
LUAThing2     Running          .\etc\thingworx\scripts\thing.lua
Thingworx     Running          .\etc\thingworx\scripts\thingworx.lua
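The EMS and LSR endpoints used in the stages above can be scripted the same way. A minimal sketch, assuming the default EMS REST port 8000 and the second LSR on port 8002 as configured earlier (no authentication is configured on either in this setup):

# EMS REST API: list the edge things currently known to the EMS
curl -X POST \
     -H "Content-Type: application/json" \
     -H "Accept: application/json" \
     http://localhost:8000/Thingworx/Things/LocalEms/Services/GetEdgeThings

# LSR REST API: list the scripts loaded by the second Lua Script Resource
curl "http://localhost:8002/scripts?format=text/html"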
From the documentation, a SOLR node is only needed when using DataTables. If the SOLR configuration field is left blank, the extension will ask for an input. Are SOLR nodes required or optional in order to use DSE with ThingWorx (in the hypothetical case of not using DataTables)?

-- As far as ThingWorx functionality is concerned, a Solr node is not required. However, the extension does try to validate the configuration, and hence, at this point, a SOLR node is mandatory to properly configure the extension. This will be fixed in the future.

When there are 2 entries for addresses, one for a Cassandra Cluster and one for a Solr Cluster, are they the same Cluster or different Clusters?

-- They could be either. There can be one machine with SOLR enabled, using the same IP for both Cassandra and Solr. However, this is not recommended for production workloads; it would be perfectly fine for development or test environments.

In a Cluster, in order to have Solr and Cassandra nodes, the use of Datacenters is required. Even if a Datacenter isn't explicitly defined, a default install of DSE will create two data centers called "Cassandra" and "Solr", which is what you would see in the default "Cassandra Keyspace Settings" property in the configuration. If the user does create Datacenters with specific names, then they will have to update the "Cassandra Keyspace Settings" property to reflect the same, e.g.

replication = {'class':'NetworkTopologyStrategy', 'Cassandra':1, 'Solr':1}

The number next to each data center name (1 being the default) represents the replication factor (https://docs.datastax.com/en/cql/3.1/cql/cql_using/update_ks_rf_t.html) and should be chosen based on the number of nodes in each data center.
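To illustrate the last point, the replication map can be adjusted from the command line with the DataStax tools. This is only a sketch: the keyspace name (here thingworx) and the data center names are assumptions and must match your actual DSE setup, and the replication factors should reflect the number of nodes in each data center.

# show the nodes and the data center each one belongs to
nodetool status

# update the keyspace replication to match the data center names used above
cqlsh -e "ALTER KEYSPACE thingworx WITH replication = {'class': 'NetworkTopologyStrategy', 'Cassandra': 1, 'Solr': 1};"

After changing a replication factor, a repair (nodetool repair) is typically required so that existing data is redistributed accordingly.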
Meet Chris. Chris leads the ThingWorx Flow portfolio, which will be released in 8.4. Chris has "a long history with workflow and a lot of interest in it." He was originally hired as a product manager by Jim Heppelmann [PTC's CEO] about 20 years ago to build a workflow engine for Windchill, our PLM solution. As a matter of fact, this feature is still a part of Windchill today. Pretty neat, right? We were able to leverage Chris' workflow expertise and experience to create ThingWorx Flow. Check out more below.

Kaya: Why did we create ThingWorx Flow? What challenge does it solve?

Chris: In order for customers to realize the full business value of connected solutions, it typically isn't enough to just surface an alert or show a dashboard of health. Customers want to automatically do things like create service tickets or automatically order consumables as they run low. To do that, to realize that automated value, you need a couple of things. First, you need to be able to easily connect to enterprise systems (like SAP, Salesforce or Microsoft Dynamics) to create service cases. Second, you need to be able to define and execute your business process logic.

Kaya: Can you give me an example?

Chris: Definitely. The business process may involve the need to replace a filter based on a contamination alert. After I receive the alert, I need to look up in the bill of materials which filter is applicable to that particular device or product; if it's not in stock onsite, I need to order it. When it arrives, I then need to schedule the service, knowing that it's going to be there at a particular time, which entails working with another service system like Salesforce.

So, you have a number of these different enterprise systems that you need to connect to, and you need to support that orchestrated business process to realize the value of a fully automated flow. That's what has really led us to do it, because, again, the compelling business value in these connected device solutions is often around an automated approach to service or maintenance issues. The value is compelling because automated processes minimize delays, support business growth and scale without hiring as many additional people, and eliminate human errors and variability, which leads to improved quality at lower costs or stronger margins.

Sample ThingWorx Orchestrate Workflow: Send RFQ to Supplier and Add Part to List If State Changes

Kaya: So, who ultimately benefits from Flow?

Chris: The benefits are realized by an organization as a whole in terms of increased responsiveness, flexibility, reduced costs and higher availability. The ThingWorx solution is unique in that it allows an organization to quickly realize these benefits by optimizing the participation of several key roles. As a result, no role becomes a bottleneck to addressing an urgent business need.

In addition to developers, the many business analysts across the organization are now empowered to define and modify their business processes themselves through Flow. They do this by using the system connections (or subflows) created by their system experts and the ThingWorx models that were defined by developers. All of this work is achieved using our visual modeling tool without the need for extensive training.

Within the Flow editor, the system experts can define portions of flows, or subflows, that get, create or edit information in the systems they understand best.
This enables each role to focus on using their most valuable skills and ultimately eliminates the bottlenecks that reduce business responsiveness when everything must be done by a few highly-specialized experts. Now, developers can leverage the outputs of flows to drive behavior in the application and visualize key KPIs and overall health.

Kaya: So, I see that we're not only connecting business systems, but we're also connecting people. We enable them to leverage each other's different perspectives and areas of expertise. I know we gave developers a sneak peek of ThingWorx Flow in one of our latest posts. Do you have a more detailed demo that we could share this week?

Chris: Sure thing. Check this out.

Kaya: Wow! I can definitely see how Flow saves immense time in workflow creation. Now, final question: what is your favorite aspect about working at PTC?

Chris: The most interesting thing to me is that I get to work with so many different customers in so many different verticals that have such a diverse set of challenging and interesting problems. It's fun to work together with these customers to understand their problems and their unique environments, and then to build solutions with them using a combination of our products and their existing systems and tooling - that's what I think is most fascinating.

I learn something new about the way things are made, manufactured, built and serviced every day, and it just makes my life much more interesting. I don't feel like I'm doing the same thing every day. Working with customers to understand and address their challenges and help them realize their business goals is really rewarding and, again, the diversity of those requests is what makes the job particularly interesting and fascinating.
About

This is the second part of a ThingBerry related blog post series.

ThingBerry is ThingWorx installed on a RaspBerry Pi, which can be used for portable demonstrations without the need of utilizing e.g. customer networks. Instead the ThingBerry provides its own custom WIFI hotspot and allows Things to connect and send / receive demo data on a small scale.

In this particular blog post we'll discuss how to set up the ThingBerry as a WIFI hotspot to directly connect with mobile devices or other Raspberry Pis.

As the ThingBerry is a highly unsupported environment for ThingWorx, please see this blog post for all related warnings.

In case this guide looks familiar, it's probably because it's based on https://frillip.com/using-your-raspberry-pi-3-as-a-wifi-access-point-with-hostapd/

WIFI Hot Spot

As the ThingBerry is currently connected via ethernet, we can utilize the Raspberry Pi's WIFI interface to create a private network that wireless devices can connect to, e.g. another Raspberry Pi or an ESP8266.

First we need to install dnsmasq and hostapd. These will help set up the access point and provide a private DNS / DHCP server to dynamically assign IP addresses to connecting devices.

sudo apt-get install dnsmasq hostapd

Interfaces

We will need to configure the wlan0 interface with a static IP. For this, dhcpcd needs to ignore the wlan0 interface.

sudo nano /etc/dhcpcd.conf

Paste the following content at the end of the file. This must be ABOVE any interface lines you may have added earlier!

denyinterfaces wlan0

Save and exit. Let's now configure the static IP.

sudo nano /etc/network/interfaces

Comment out ALL lines for the wlan* configurations (e.g. wlan0, wlan1). By default there are three lines which need to be commented out by adding a # at the beginning of the line. After this, the wlan0 configuration can be pasted in:

allow-hotplug wlan0
iface wlan0 inet static
  address 192.168.0.1
  netmask 255.255.255.0
  network 192.168.0.0
  broadcast 192.168.0.255

Save and exit. Now restart the dhcpcd service and reload the wlan0 configuration with

sudo service dhcpcd restart
sudo ifdown wlan0
sudo ifup wlan0

Hostapd

Hostapd is used to configure the actual WIFI hotspot, e.g. the SSID and the WIFI password (wpa_passphrase) that is required to connect to this network.

sudo nano /etc/hostapd/hostapd.conf

Paste the following content:

interface=wlan0
driver=nl80211
ssid=thingberry
hw_mode=g
channel=6
wmm_enabled=1
ieee80211n=1
country_code=DE
macaddr_acl=0
ht_capab=[HT40][SHORT-GI-20][DSSS_CCK-40]
auth_algs=1
ignore_broadcast_ssid=0
wpa=2
wpa_key_mgmt=WPA-PSK
wpa_passphrase=changeme
rsn_pairwise=CCMP

If you prefer another SSID or a more secure password, please ensure you update the above configuration! Save and exit.

Check if the configuration is working via

sudo /usr/sbin/hostapd /etc/hostapd/hostapd.conf

It should start without any errors and finally show "wlan0: AP-ENABLED". With this you can now connect to the "thingberry" SSID. However, there is no IP address assigned automatically yet - this can be improved...

Stop hostapd with CTRL+C and configure it to start on boot.

sudo nano /etc/default/hostapd

At the end of the file, paste the following content:

DAEMON_CONF="/etc/hostapd/hostapd.conf"

Save and exit.

DNSMASQ

Dnsmasq assigns dynamic IP addresses to connecting devices. Let's backup the original configuration file and create a new one.
sudo mv /etc/dnsmasq.conf /etc/dnsmasq.conf.orig
sudo nano /etc/dnsmasq.conf

Paste the following content:

interface=wlan0
listen-address=192.168.0.1
bind-interfaces
dhcp-range=192.168.0.100,192.168.0.199,255.255.255.0,12h

Save and exit. This will make the DNS service listen on 192.168.0.1 and assign IP addresses between 192.168.0.100 and 192.168.0.199 with a 12 hour lease.

The next step is to set up IPv4 forwarding for the wlan0 interface.

sudo nano /etc/sysctl.conf

Uncomment the following line (you can search in nano with CTRL+W):

net.ipv4.ip_forward=1

Save and exit.

Hostname translation

To be able to call the ThingBerry by its actual hostname, the hostname needs to be mapped in the host configuration.

sudo nano /etc/hosts

Search for the line with your hostname and update it to the local IP address (as configured in the listen-address above), e.g.

192.168.0.1 thingberry

Save and exit. Please note that this is the hostname and not the SSID of the network!

Finalizing the Configuration

Run the following commands to enable the forwarding and reboot.

sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
sudo reboot

Optional: Internet Access

The ThingBerry is independent of any internet traffic. However, if your connected devices or the ThingBerry itself need to reach the internet, the WIFI connection needs to route those packets to and from the (plugged-in) ethernet device. This can be done through iptables:

sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
sudo iptables -A FORWARD -i eth0 -o wlan0 -m state --state RELATED,ESTABLISHED -j ACCEPT
sudo iptables -A FORWARD -i wlan0 -o eth0 -j ACCEPT

All internet related traffic will be forwarded between eth0 and wlan0 and vice versa. To load this configuration every time the ThingBerry is booted, it needs to be saved via

sudo sh -c "iptables-save > /etc/iptables.ipv4.nat"

To restore this file on boot, edit

sudo nano /etc/rc.local

Above the line exit 0 add the following line:

iptables-restore < /etc/iptables.ipv4.nat

Save and exit.

Verification

Connect to the new WIFI hotspot via the SSID and the password configured earlier through any WIFI capable device. When connecting to your new access point, check your device's IP settings (in iOS, Android or a laptop / desktop device). It should show a 192.168.0.x IP address and should be pingable from the ThingBerry console.

Connect to http://192.168.0.1/Thingworx or http://<servername>/Thingworx to open the Composer.
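For a few additional checks from the ThingBerry console itself, the following commands can be used. This is a sketch under the assumptions made above (wlan0 on 192.168.0.1, dnsmasq with its default lease file location, iptables rules loaded):

# static IP on the WIFI interface - should show inet 192.168.0.1/24
ip addr show wlan0

# IPv4 forwarding - should print 1 after the reboot
cat /proc/sys/net/ipv4/ip_forward

# DHCP leases handed out by dnsmasq to connected clients
cat /var/lib/misc/dnsmasq.leases

# NAT rule for the optional internet access - should list MASQUERADE on eth0
sudo iptables -t nat -L POSTROUTING -v -n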