IoT Tips

This video walks you through the first steps of setting up Analytics Manager for Real-Time Scoring. More specifically, it demonstrates how to create an Analysis Provider and start the ThingPredictor Agent. NOTE: For version 8.1 the startup command for the Agent has changed; see the command in the PTC Help Center. Updated link for access to this video: ThingWorx Analytics Manager: Create an Analysis Provider & Start the ThingPredictor Agent
About This is the second part of a ThingBerry related blog post series. ThingBerry is ThingWorx installed on a Raspberry Pi, which can be used for portable demonstrations without the need of utilizing e.g. customer networks. Instead the ThingBerry provides its own custom WIFI hotspot and allows Things to connect and send / receive demo data on a small scale. In this particular blog post we'll discuss how to set up the ThingBerry as a WIFI hotspot to directly connect with mobile devices or other Raspberry Pis. As the ThingBerry is a highly unsupported environment for ThingWorx, please see this blog post for all related warnings. In case this guide looks familiar, it's probably because it's based on https://frillip.com/using-your-raspberry-pi-3-as-a-wifi-access-point-with-hostapd/ WIFI Hot Spot As the ThingBerry is currently connected via ethernet, we can utilize the Raspberry Pi's WIFI connection to create a private network that all the wireless devices can connect to, e.g. another Raspberry Pi or an ESP8266. First we need to install dnsmasq and hostapd. These will help set up the access point and create a private DNS server to dynamically assign IP addresses to connecting devices. sudo apt-get install dnsmasq hostapd Interfaces We will need to configure the wlan0 interface with a static IP. For this, dhcpcd needs to ignore the wlan0 interface. sudo nano /etc/dhcpcd.conf Paste the following content at the end of the file. This must be ABOVE any interface lines you may have added earlier! denyinterfaces wlan0 Save and exit. Let's now configure the static IP. sudo nano /etc/network/interfaces Comment out ALL lines for the wlan* configurations (e.g. wlan0, wlan1). By default there are three lines which need to be commented out by adding a # at the beginning of the line. After this the wlan0 configuration can be pasted in: allow-hotplug wlan0 iface wlan0 inet static   address 192.168.0.1 netmask 255.255.255.0 network 192.168.0.0 broadcast 192.168.0.255 Save and exit.
Now restart the dhcpcd service and reload the wlan0 configuration with sudo service dhcpcd restart sudo ifdown wlan0 sudo ifup wlan0 Hostapd Hostapd is used to configure the actual WIFI hot spot, e.g. the SSID and the WIFI password (wpa_passphrase) that's required to connect to this network. sudo nano /etc/hostapd/hostapd.conf Paste the following content: interface=wlan0 driver=nl80211 ssid=thingberry hw_mode=g channel=6 wmm_enabled=1 ieee80211n=1 country_code=DE macaddr_acl=0 ht_capab=[HT40][SHORT-GI-20][DSSS_CCK-40] auth_algs=1 ignore_broadcast_ssid=0 wpa=2 wpa_key_mgmt=WPA-PSK wpa_passphrase=changeme rsn_pairwise=CCMP If you prefer another SSID or a more secure password, please make sure to update the configuration above! Save and exit. Check if the configuration is working via sudo /usr/sbin/hostapd /etc/hostapd/hostapd.conf It should return without any errors and finally show "wlan0: AP-ENABLED". With this you can now connect to the "thingberry" SSID. However, no IP address is assigned automatically yet - this can be improved... Stop hostapd with CTRL+C and let's start it on boot. sudo nano /etc/default/hostapd At the end of the file, paste the following content: DAEMON_CONF="/etc/hostapd/hostapd.conf" Save and exit. DNSMASQ Dnsmasq assigns the dynamic IP addresses. Let's back up the original configuration file and create a new one. sudo mv /etc/dnsmasq.conf /etc/dnsmasq.conf.orig  sudo nano /etc/dnsmasq.conf Paste the following content: interface=wlan0 listen-address=192.168.0.1 bind-interfaces dhcp-range=192.168.0.100,192.168.0.199,255.255.255.0,12h Save and exit. This will make the DNS service listen on 192.168.0.1 and assign IP addresses between 192.168.0.100 and 192.168.0.199 with a 12 hour lease. The next step is to set up IPv4 forwarding for the wlan0 interface. sudo nano /etc/sysctl.conf Uncomment the following line (you can search in nano with CTRL+W): net.ipv4.ip_forward=1 Save and exit.
Hostname translation To be able to call the ThingBerry by its actual hostname, the hostname needs to be mapped in the host configuration. sudo nano /etc/hosts Search for the line with your hostname and update it to the local IP address (as configured in the listen-address above), e.g. 192.168.0.1 thingberry Save and exit. Please note that this is the hostname and not the SSID of the network! Finalizing the Configuration Run the following command to enable the forwarding and reboot. sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward" sudo reboot Optional: Internet Access The ThingBerry is independent of any internet traffic. However, if your connected devices or the ThingBerry itself need to contact the internet, the WIFI connection needs to route those packages to and from the (plugged-in) ethernet device. This can be done through iptables: sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE  sudo iptables -A FORWARD -i eth0 -o wlan0 -m state --state RELATED,ESTABLISHED -j ACCEPT  sudo iptables -A FORWARD -i wlan0 -o eth0 -j ACCEPT All internet-related traffic will be forwarded between eth0 and wlan0 and vice versa. To load this configuration every time the ThingBerry is booted, it needs to be saved via sudo sh -c "iptables-save > /etc/iptables.ipv4.nat" Load this file on boot by editing sudo nano /etc/rc.local Above the line exit 0 add the following line: iptables-restore < /etc/iptables.ipv4.nat Save and exit. Verification Connect to the new WIFI hotspot via the SSID and the password configured earlier through any WIFI capable device. When connecting to your new access point, check your device's IP settings (in iOS, Android or a laptop / desktop device). It should show a 192.168.0.x IP address which should be pingable from the ThingBerry console. Connect to http://192.168.0.1/Thingworx or http://<servername>/Thingworx to open the Composer.
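As a quick sanity check after the reboot, the following commands - run on the ThingBerry itself - show whether the access point and DHCP are actually working. This is a sketch: the lease file location assumes a default Raspbian/Debian dnsmasq install, and 192.168.0.100 stands in for whatever address your client actually received.

```
sudo service hostapd status             # should report the service as running
sudo service dnsmasq status             # should report the service as running
iw dev wlan0 info                       # wlan0 should report "type AP"
cat /var/lib/misc/dnsmasq.leases        # IP leases handed out to connected devices
ping -c 3 192.168.0.100                 # ping one of the leased client addresses
```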
ThingWorx 7.4 covers the following areas of the product portfolio: ThingWorx Analytics, ThingWorx Utilities and ThingWorx Foundation, which includes Core, Connection Server and Edge capabilities. Key Functional Highlights Highlights of the release include: Source Integration: Improved integration framework making it easy to connect with external systems, including a standard Windchill connector, to retrieve data on demand. Industrial Connectivity: New Industrial Gateway and Discover navigation tool simplifying the mapping of tags to properties, including performance enhancements for data updates. Edge/CSDK: Build process improvements, Subscribed Property Manager (SPM) enhancements, asynchronous service requests and TLS updates to increase developer productivity, improve application performance and strengthen security. AWS IoT Connector: The latest version 1.2 of the Connector allows customers to more fully leverage their investment in AWS IoT. It features improved deployment automation via CloudFormation and automatic extension installation, the ThingWorx Edge JavaScript SDK for use with the Connector with support for properties, services and events, and just-in-time certificate registrations. Contextualize - Next Generation Composer: Re-imagined Composer using modern browser concepts to improve developer efficiency, including enhanced functionality, an updated user interface and optimized workflows. Engage (these features will be available at the end of March 2017) - Advanced Grid: New grid widget with improved design, context menu, multi-column sorting, global search and many more common grid-related features. Tree Grid: New tree-grid widget with the same features as the advanced grid plus the ability to pre-load tree levels, dynamically load child data and auto-expand all nodes to build more powerful mashups. Administer & Manage - MS SQL Server: New persistence provider for model and run-time data providing customers an alternative to PostgreSQL.
Security: ThingWorx worked with industry standard security scanning and auditing tools to identify and correct all non-trivial, and many trivial, vulnerabilities to ensure secure software best practices. Licensing: Link ThingWorx to PTC systems of record to manage user entitlement and provide usage information and auditing capability critical to TWX, PTC and its partners.  Documentation ThingWorx 7.4 Reference Documents ThingWorx Core 7.4 Release Notes ThingWorx Core Help Center ThingWorx Edge SDKs and WebSocket-based Edge MicroServer Help Center ThingWorx Connection Services Help Center ThingWorx Utilities Help Center Additional information ThingWorx eSupport Portal ThingWorx Developer Portal ThingWorx Marketplace Download ThingWorx Platform – Select Release 7.4 ThingWorx Edge C SDK 1.4 – Select Most Recent Datecode, C-SDK-1-4-0 ThingWorx AWS IoT Connector 1.2 – Select Release 7.4 ThingWorx Utilities – Select Release 7.4
Does WSEMS use SSL pinning? Yes, it does. To explain the process a bit more: SSL pinning is the act of verifying a server certificate by comparing it to the exact certificate that is expected. We "install" a certificate on the EMS (copying the cert to the EMS device and specifying the cert in config.json); the EMS will then check all incoming certs against the cert in config.json, looking for an exact match and verifying the certificate chain. Should it be expected that the client, WSEMS.exe, installs the client certificate in order to validate the server certificate using SSL pinning? This does not happen automatically; the cert must be downloaded and manually added to the device. The config must also be manually updated. If so, how do we issue the client cert for WSEMS.exe? Does it need to be issued the same way as the server certificate? If talking about pinning, one could use either the server cert directly, or the root cert (recommended). The root cert is the public cert of the signing authority, e.g. if the server uses a Verisign-issued cert, one can verify the authenticity of the signer by having the Verisign public cert. If talking about client certs (2-way auth: client verifies server, server verifies client) then the process is a little different. What happens when the client certificate expires? Do all the devices go offline when the client certificate expires? The device won't connect, with a failure to authorize. Once the server cert expires, the server cert needs to be updated everywhere. This is the advantage of using something like the Verisign public cert (root cert) as the installed cert on the client. The root certs usually last longer than the issued certs, but they will have to be replaced eventually when they expire. When using Entrust certificates, obtained at https://www.entrust.com/get-support/ssl-certificate-support/root-certificate-downloads/ , how do we know if the certificates were fed to the WSEMS configuration incorrectly?
With the wrong configuration, WSEMS would error out rejecting the Entrust certificate: Non-FIPS: Error code 20: invalid certificate FIPS: Error code 19: self signed certificate in certificate chain How do we properly install an Entrust certificate to be accessed by the EMS? The public certificate needs to be downloaded and put in a directory that is accessible to the EMS, and the path added to the cert_chain field of the configuration. Even if it's trusted, it will always need to be installed on the EMS. If a certificate is self-signed, meaning it's non-trusted by default, OpenSSL would throw errors and complain. To solve this, it needs to be installed as a trusted server. If it's signed by a non-trusted CA, that CA's certificate needs to be installed as well. Once the certificates are obtained, this is the block of the configuration we are interested in: "certificates": { "validate": true, // the non-validated model is not recommended "allow_self_signed": false, // the self-signed model is not recommended, yet theoretically better than non-validated "cert_chain": "" } // validated and trusted is recommended - also the only way to work with a FIPS enabled EMS You may use an SSL test tool (for example, https://www.ssllabs.com ) to find all chains. For Entrust, both Entrust L1K and G2 need to be downloaded. Because cert_chain is an array-type field, the format to install the certificate paths would be: "cert_chain": ["path1", "path2"] e.g. "cert_chain": ["C:\\ThingworxLocation\\MicroServerLocation\\locationofcertificates\\entrust_l1k.pem","C:\\ThingworxLocation\\MicroServerLocation\\locationofcertificates\\entrust_g2_ca.pem"]
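Because the most common failure here is a malformed certificates block (e.g. cert_chain written as a string or as two separate arrays instead of one array of paths), a small shape-check over config.json can catch the mistake before starting the EMS. This is a minimal sketch, not part of the EMS itself; the Windows paths in the sample are placeholders.

```python
import json

def check_cert_config(cfg):
    """Return a list of problems found in the 'certificates' block of an EMS config.json."""
    problems = []
    certs = cfg.get("certificates", {})
    if certs.get("validate") is not True:
        problems.append("validation disabled: 'validate' should be true")
    if certs.get("allow_self_signed") is True:
        problems.append("'allow_self_signed' is true: the self-signed model is not recommended")
    chain = certs.get("cert_chain")
    if not isinstance(chain, list) or not all(isinstance(p, str) for p in chain):
        problems.append("'cert_chain' must be a single array of path strings")
    return problems

# Hypothetical config snippet; the certificate paths are placeholders.
sample = json.loads("""
{
  "certificates": {
    "validate": true,
    "allow_self_signed": false,
    "cert_chain": ["C:\\\\certs\\\\entrust_l1k.pem", "C:\\\\certs\\\\entrust_g2_ca.pem"]
  }
}
""")
print(check_cert_config(sample))  # -> []
```

Running this against a config where cert_chain is a plain string (or split into two arrays, which parses as invalid JSON anyway) reports the problem immediately instead of leaving you to decode error code 20 at connect time.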
In case we would like to create an external application and we aren't sure what's the best solution to use, below are some useful tips. Scenario: Let's say we use a gateway in order to access the external application we want to create. We would like to implement this gateway translating the ThingWorx standard protocol to the SCADA protocol. The system administrator, who manages the grid, has their own secure system, with a standard for communication inside the SCADA system, and we want to be able to get data from our system to the system they have. Let's also consider that the data concerns the electrical field. Tips: It is recommended to develop a 3rd-party application that on one side talks to ThingWorx, and on the other side talks to the SCADA system. This external ThingWorx application that we want to implement would have an edge interface allowing it to enter our customer's Ethernet network, in order for both systems to communicate. JDBC is not recommended - it's mostly for connecting to a database, which in our case is not the main purpose. Each REST API call to the platform uses a security accreditation (appkey or user/password); depending on the permissions contained in that token, access can be allowed to certain parts of the platform. Reasons for using REST API: the REST API is simple and not dependent on any format that the data comes from ThingWorx in. It can be used offline, online, synchronously, asynchronously, and is easy to manage from a formatting point of view. ThingWorx gives a lot of options, like exporting information via xml to a plain xml file, to parse it to whatever protocol we have on the other end. Either our application will have to handle xml input from ThingWorx and process it towards SCADA-compatible output, or we can talk directly via REST API and read on a per-thing basis (using the web services).
The interface application just has to know how to read xmls or REST API calls (which are provided with an xml formatted response). The SDK already has libraries written in C, C# and Java. The C, C# and Java SDKs use the AlwaysOn protocol (web socket) and are more firewall friendly. They are mostly suited for speed and automated processing - when we know exactly what happens, we trust the other side and we know there is little chance for errors. If we go with the REST API or an SDK, the application that is developed will have complete access inside ThingWorx, e.g. it can change/edit Things. If we want access in both directions - not only reading data, but also updating/deleting information, etc. - the SDK and REST API can be used, because we have the whole range of commands, like setting property values, calling services, etc. We can limit access, if we want, for security reasons. The SDK offers access to the same services as the REST API, but in a different way. Otherwise, it's better to go with decoupled xml files. Conclusion: for this particular scenario, it's better to use the SDK.
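To make the REST API option above concrete, here is a minimal sketch of building (not sending) a request that reads a single property on a per-Thing basis. The host, Thing name, property name and appKey are all placeholders, not values from a real system; the appKey header is the security accreditation mentioned above.

```python
import urllib.request

# Hypothetical server and appKey -- replace with your own.
BASE = "https://twx.example.com/Thingworx"
APP_KEY = "00000000-0000-0000-0000-000000000000"

def build_property_request(thing, prop):
    """Build a REST request reading one property of one Thing.

    Sending it with urllib.request.urlopen(req) would return the value;
    here we only construct it so the URL/header scheme is visible.
    """
    url = "{}/Things/{}/Properties/{}".format(BASE, thing, prop)
    req = urllib.request.Request(url, method="GET")
    req.add_header("appKey", APP_KEY)            # security token for the call
    req.add_header("Accept", "application/json") # could also request XML
    return req

req = build_property_request("MyScadaGateway", "voltage")
print(req.full_url)
```

Writing to a property or invoking a service follows the same pattern with PUT/POST to `/Things/<name>/Properties/<prop>` or `/Things/<name>/Services/<service>`, which is why the per-thing REST approach is easy to wrap in a gateway application.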
Introduction The In-Memory Column Store keeps data in columnar format, contrary to row format, allowing users to run faster analytics; the idea behind this is to push the computation as close to the data store as possible. In this post I'll configure the Oracle database to enable this feature and then populate one or more tables in the In-Memory Column Store. This could be particularly helpful if you are using Oracle 12c as an external data store - storing data in database tables via a JDBC connection, or current/historic values from a DataTable, Stream or ValueStream - for running analytics or DMLs with lots of joins that require a lot of computation before the data is finally presented on the Mashup(s). For this post I used the data generated by a temperature sensor getting stored in a ValueStream, exported to CSV from the ValueStream and imported into an Oracle table. In-Memory Column Store vs In-Memory database Usage As mentioned above, Oracle 12c version 12.1.2 comes with a built-in In-Memory Column Store feature. As the name suggests, it allows data to be populated in RAM, enabling high-speed transactions and analytics on data without the need to traverse the HDD, and in some cases this is much faster than the buffer cache of the SGA. Without going into too much nitty-gritty, it's important to note that the In-Memory Column Store does not equate to an In-Memory database. While it could be possible to move an entire schema, if it's small enough, to fit in the memory defined for the In-Memory Column Store, the idea is to speed up computation requiring analytics on one or more table(s) which are heavily queried by the users. If you are interested in an In-Memory Database as persistence provider for ThingWorx, please refer to the documentation Using SAP HANA as the Persistence Provider, which is one option among the other available Persistence Providers for ThingWorx. What changes are required to the current Oracle 12c installation or JDBC connection to ThingWorx?
The In-Memory Column Store feature is built into Oracle 12.1.2 and only needs to be enabled, as it is not by default. This can be enabled without any change to the following: 1. The existing SQL services created within ThingWorx 2. The general application architecture accessing the tables in the Oracle database over JDBC 3. The existing Oracle 12c installation Getting Started What will it take to enable the In-Memory Column Store? This feature can be enabled in a few steps: 1. Enable the feature in the Oracle 12.1.2 installation, by assigning some RAM to the In-Memory Column Store 2. Adjust the SGA size for the database to incorporate the memory assigned to the In-Memory Column Store 3. Bounce the database As mentioned above, though this is an inbuilt feature of Oracle 12.1.2, it is not enabled by default; we can confirm this by executing the following SQL in SQL*Plus or Oracle SQL Developer, connecting to the database for which we are enabling the feature: SQL> show parameter INMEMORY; Things to consider before enabling 1. Ensure that the hardware / VM hosting the Oracle installation has sufficient RAM 2. Ensure to bump up the SGA by the amount of memory assigned to the In-Memory Column Store; failing to do so may lead to the database failing to start and will require recovery Note: The minimum memory that can be assigned to the In-Memory Column Store is 100M Setting it all up For my test setup I will be assigning 5G to the In-Memory Column Store and will add this amount to the current SGA. To do this, let's start SQL*Plus with the rights that will allow me to make changes to the existing SGA, so I'm using sys@orcl as sysdba (ORCL is the test DB name I have for my database) Step 1: Start SQL*Plus, e.g. sqlplus sys@orcl as sysdba Step 2: ALTER SYSTEM SET INMEMORY_SIZE = 5G SCOPE=SPFILE; Step 3: ALTER SYSTEM SET SGA_TARGET = 20G SCOPE=SPFILE; Once done, bounce the database. And that's it!
We should now be able to confirm, via SQL*Plus, that a certain amount of memory, 5G in my case, has been assigned to the In-Memory Column Store feature: SQL> show parameter inmemory Populating the In-Memory Column Store The In-Memory Column Store will only populate the data from a table on first use, or if the table is marked critical, which tells Oracle to populate it as soon as the database comes online after a restart. For more detail on the commands concerning the In-Memory Column Store refer to the OTN webpage. I'll now use the SensorHistory table in which I have the ValueStream's exported data in CSV format; currently this table is holding ~32 million+ rows. I'll populate it in the columnar architecture of the In-Memory Column Store with the following command: SQL> ALTER TABLE SENSORHISTORY INMEMORY; // marking the table as eligible for the In-Memory Column Store with default parameters Just to confirm that the data is still not populated, since we have only marked the table as eligible for the In-Memory Column Store, we can query the dynamic view V$IM_SEGMENTS for current usage of the In-Memory store. Now let's populate the In-Memory store with a query which requires a full table scan, e.g. SQL> select property_name, count(*) from sensorhistory Group by property_name; Let's recheck the dynamic view V$IM_SEGMENTS. As mentioned above, this is completely transparent to the application layer, so if you already have an existing JDBC connection in ThingWorx to Oracle, all the existing services created for that table will continue to work as expected. If you don't have an existing JDBC connection to Oracle, it can be created with the usual steps, with no special configuration for In-Memory.
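The population check against V$IM_SEGMENTS mentioned above can be run as a plain query. This is a sketch; the column set follows Oracle's documented V$IM_SEGMENTS view, and an empty result simply means nothing has been populated yet.

```sql
-- Shows which segments are (being) populated into the In-Memory Column Store.
SELECT segment_name,
       populate_status,
       inmemory_size,
       bytes_not_populated
FROM   v$im_segments;
```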
Creating a JDBC connection I'm including this section for the purpose of completeness; if you already have a working JDBC connection to Oracle 12.1.2 you can skip to the Conclusion below. For accessing the above database, along with the In-Memory Column Store table, we'll now set up the JDBC connection to Oracle. For that, download and import the TW_Download_Relational Databases Connectors.zip (ThingWorx Marketplace) and unzip it to access the Oracle12Connector_Extension.zip Step 1: Import the extension into ThingWorx by navigating to Import/Export > Import > Extensions Step 2: Create a Thing using the OracleDBServer12 Template, part of the extension we just imported Step 3: Configure the Thing to successfully connect to the database, ORCL in this case Step 4: Navigate to the Properties in the Entity Information panel on the left and verify that the isConnected property value is True. Conclusion This is a very short introduction to what could be a setup for greatly improving the analytics performed on the stored data. The data in the In-Memory Column Store is not stored in conventional row format, but rather in large columnar format. If the need is for simple SQL queries with few joins, the SGA cache may be sufficient and probably faster, and you may not gain much by configuring the In-Memory Column Store. However, for queries requiring heavy computation on large data sets, having the In-Memory Column Store configured could bring a manifold increase in performance. Therefore, if you need more guidelines on where you'd want to use the In-Memory Column Store, feel free to give the following good reads a try, along with real-world data use cases for reference. I will try to find some time to run my own benchmark and will try to put it out in a separate blog on performance gains. 1. Oracle Database In-Memory with Oracle Database 12c Release 2: Oracle white paper 2. When to Use Oracle Database In-Memory: Identifying Use Cases for Application Acceleration 3. Oracle Database 12c In-Memory Option 4. Testing Oracle In-Memory Column Store @ CERN
The ThingWorx Platform is fully exposed using the REST API including every property, service, subsystem, and function.  This means that a remote device can integrate with ThingWorx by sending correctly formatted Hyper Text Transfer Protocol (HTTP) requests. Such an application could alter thing properties, execute services, and more. To help you get started using the REST API for connecting your edge devices to ThingWorx, our ThingWorx developers put together a few resources on the Developer Portal: New to developing with ThingWorx? Use our REST API Quickstart guide that explains how to: create your first Thing, add a property to your Thing, then send and retrieve data. Advanced ThingWorx user? This new REST API how-to series features instructions on how to use REST API for many common tasks, incl. a troubleshooting section. Use ThingWorx frequently but haven’t learned the syntax by heart? We got you covered. The REST API cheat sheet gives details of the most frequently used REST API commands.
One of the issues we encountered recently is the fact that we could not establish a VNC remote session. The edge was located outside of the internal network where Tomcat was hosted, and all access to the instance was through an Apache reverse proxy. The EMS was able to connect successfully to the server, because Apache had the Websocket forwarding correctly set up through the following directive: ProxyPass "/Thingworx/WS/"  "wss://192.168.0.2/Thingworx/WS" However, we saw that tunnels immediately closed after creation and as a result (or so we thought), we could not connect from the HTML5 VNC viewer. More diagnostics revealed that you need to have ProxyPass directives for the following: - the EMS will make calls to another WS endpoint, called WSTunnelServer. After you set this up, the EMS will be able to create tunnels to the server. - the HTML5 VNC page will make a "websocket" call to yet another WS endpoint, called WSTunnelClient. Only after this step do you have the ability to successfully use tunnels through a reverse proxy. Hope it helps!
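Putting the three endpoints together, the Apache configuration would look roughly like the sketch below. The backend address is taken from the directive quoted above; the exact paths and trailing slashes may need adjusting for your environment.

```apache
# Main WebSocket endpoint used by the EMS AlwaysOn connection
ProxyPass "/Thingworx/WS/"             "wss://192.168.0.2/Thingworx/WS"
# WebSocket endpoint the EMS calls to create tunnels
ProxyPass "/Thingworx/WSTunnelServer/" "wss://192.168.0.2/Thingworx/WSTunnelServer"
# WebSocket endpoint the HTML5 VNC viewer connects to
ProxyPass "/Thingworx/WSTunnelClient/" "wss://192.168.0.2/Thingworx/WSTunnelClient"
```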
Hello, Since there have been discussions regarding SNMP capabilities when using the ThingWorx platform, I have made a guide on how you can manage SNMP content with the help of an EMS. Library used: SNMP4J - http://www.snmp4j.org/ Purpose: trapping SNMP events from an edge network through a Java SDK EdgeMicroServer implementation, then forwarding the trap information to the ThingWorx server. Background: There are devices, like network elements (routers, switches), that raise SNMP traps in external networks. Usually there are third-party systems that collect this information which you can then query, but if you want to catch the trap directly, you can use this starter-kit implementation. Attached to this blog post you can find an archive containing the source code files and the entities that you will need for running the example, and a document with information on how to set up and run the project and the thought process behind it. Regards, Andrei Valeanu
Hi everyone, As everyone knows already, the main way to define Properties inside the EMS Java SDK is to use annotations at the beginning of the VirtualThing class implementation. There are some use-cases where we need to define those properties dynamically, at runtime - for example when we use a VirtualThing to push a sensor's data from a Device Cloud to the ThingWorx server, for multiple customers. In this case the number of properties differs per customer, and due to the large number of variations we need to be able to define the Properties programmatically. The following code will do just that:

for (int i = 0; i < int_PropertiesLength; i++) {
    Node nNode = device_Properties.item(i);
    // The node's value determines the property's base type
    String str_NodeValue = nNode.getTextContent();
    PropertyDefinition pd;
    AspectCollection aspects = new AspectCollection();
    if (NumberUtils.isNumber(str_NodeValue)) {
        pd = new PropertyDefinition(nNode.getNodeName(), " ", BaseTypes.NUMBER);
    } else if ("true".equals(str_NodeValue) || "false".equals(str_NodeValue)) {
        pd = new PropertyDefinition(nNode.getNodeName(), " ", BaseTypes.BOOLEAN);
    } else {
        pd = new PropertyDefinition(nNode.getNodeName(), " ", BaseTypes.STRING);
    }
    //Add the dataChangeType aspect
    aspects.put(Aspects.ASPECT_DATACHANGETYPE, new StringPrimitive(DataChangeType.VALUE.name()));
    //Add the dataChangeThreshold aspect
    aspects.put(Aspects.ASPECT_DATACHANGETHRESHOLD, new NumberPrimitive(0.0));
    //Add the cacheTime aspect
    aspects.put(Aspects.ASPECT_CACHETIME, new IntegerPrimitive(0));
    //Add the isPersistent aspect
    aspects.put(Aspects.ASPECT_ISPERSISTENT, new BooleanPrimitive(false));
    //Add the isReadOnly aspect
    aspects.put(Aspects.ASPECT_ISREADONLY, new BooleanPrimitive(true));
    //Add the pushType aspect
    aspects.put("pushType", new StringPrimitive(DataChangeType.ALWAYS.name()));
    //Add the isLogged aspect
    aspects.put(Aspects.ASPECT_ISLOGGED, new BooleanPrimitive(true));
    //Add the defaultValue aspect if needed...
    //aspects.put(Aspects.ASPECT_DEFAULTVALUE, new BooleanPrimitive(true));
    pd.setAspects(aspects);
    super.defineProperty(pd);
}
//You need to comment out initializeFromAnnotations() and use initialize() instead for this to work.
//super.initializeFromAnnotations();
super.initialize();

Please put this code in the constructor of your VirtualThing-extending implementation. It needs to run exactly once, at instance creation. This code relies on the manual discovery of the sensor properties that you do beforehand. Depending on the implementation, you can either do the discovery of the properties inside this method (too slow), or pass the properties as a parameter to the constructor (better). Hope it helps!
This is an example of setting up remote desktop and file transfer for an asset in ThingWorx Utilities using the Java Edge SDK. Step 1. EMS Configuration ClientConfigurator config = new ClientConfigurator(); // application key RemoteAccessThingKey String appKey = "s2ad46d04-5907-4182-88c2-0aad284f902c"; config.setAppKey(appKey); // Thingworx server Uri config.setUri("wss://10.128.49.63:8445//Thingworx/WS"); config.ignoreSSLErrors(true); config.setReconnectInterval(15); SecurityClaims claims = SecurityClaims.fromAppKey(appKey); config.setSecurityClaims(claims); // enable tunnels for the EMS config.tunnelsEnabled(true); // initialize a virtual thing with identifier PTCDemoRemoteAccessThing VirtualThing myThing = new VirtualThing(ThingName, "PTCDemoRemoteAccessThing", "PTCDemoRemoteAccessThing", client); // for the file transfer functionality, use FileTransferVirtualThing instead: FileTransferVirtualThing myThing = new FileTransferVirtualThing(ThingName, "PTCDemoRemoteAccessThing", "PTCDemoRemoteAccessThing", client); myThing.addVirtualDirectory("AssetRepo", "E:/AssetRepo"); Step 2. Install TightVNC on the asset ( http://www.tightvnc.com/download.php ) Step 3. Go to ThingWorx Composer and search for the thing PTCDemoRemoteAccessThing a. pair PTCDemoRemoteAccessThing with identifier PTCDemoRemoteAccessThing b. go to PTCDemoRemoteAccessThing Configuration and add a tunnel - name: vnc, host: asset IP, port: 5900 (this is the default port for VNC servers; it can be changed) c. go to PTCDemoRemoteAccessThing Properties and set the vncPassword Troubleshooting If the VNC server is on the same machine as the ThingWorx server, check "Allow loopback connections" in the Access Control tab of the TightVNC Server Configuration.
Introduction The Oracle 12c release introduced the concept of a multi-tenant architecture for housing several databases running as services under a single database. I'll try to address the connectivity and required configuration to connect to one of the Pluggable Databases running in the multi-tenant architecture. Multi-tenant database architecture in scope of ThingWorx External Data Source What is multi-tenant database architecture? Running multiple databases under a single database installation. Oracle 12c allows the user to create one database called the Container Database (CDB) and then spawn several databases called Pluggable Databases (PDB) running as services under it. Why use multi-tenant architecture? Such a setup allows users to spawn a new PDB as and when needed with limited resource requirements, easily administer several PDBs just by administering the container database (since all the PDBs are contained within a single database's tablespace structure), and start and stop individual PDBs, leading to a lower cost of maintaining different databases, as the resource management is limited to one CDB. When to use multi-tenant architecture? In scenarios like creating PoCs, different test environments requiring external data storage, or maintaining different versions of a dataset, having these run in the multi-tenant architecture could help save time, money and effort. Create Container Database (CDB) Creation of a Container Database (CDB) is not very different from creating a non-Container Database; use the attached guide Installing Oracle Database Software and Creating a Database.pdf, also accessible online. Create Pluggable Database (PDB) Use the attached Multitenant: Create and Configure a Pluggable Database (PDB) in Oracle Database 12c PDF guide to create and plug a Pluggable Database into the Container Database created in the previous step, also accessible online. Using the above guides I have a bunch of pluggable databases, as can be seen below.
I'll be using TW724 for connecting to the ThingWorx server as an external data source in the following example.

Connect to a Pluggable Database (PDB) as an external data source for ThingWorx

1. Download the Relational Databases Connectors Extension from the ThingWorx Marketplace, unzip it, and extract Oracle12Connector_Extension
2. Import Oracle12Connector_Extension into ThingWorx using Extension -> Import
3. Create a Thing using the OracleDBServer12 Thing Template, e.g. TW724_PDB_Thing
4. Navigate to the Configuration for TW724_PDB_Thing to update the default configuration:
   JDBC Driver Class Name: oracle.jdbc.OracleDriver
   JDBC Connection String: jdbc:oracle:thin:@//oravm.ptcnet.ptc.com:1521/tw724.ptcnet.ptc.com
   Database Username: <UserName>
   Database Password: <password>
5. Once done, save the entity

Note: A PDB in a container database can be reached only as a service, not via the CDB's SID. In the above configuration TW724 is a PDB which can be connected to via its service name, i.e. TW724.PTCNET.PTC.COM.

Let's head to the Services tab for TW724_PDB_Thing to query and access the PDB data.

Creating services to access the PDB as an external database source for ThingWorx

Once the configuration is done, TW724_PDB_Thing is ready for use. The queries remain the same as any other SQL query needed to access data from Oracle.

Service for creating a table

On the Services tab for TW724_PDB_Thing, click Add My Service, select the service handler SQL Command, and use the following script to create testTable1 in the PDB:

create table testTable1 (
    id NUMBER GENERATED ALWAYS AS IDENTITY primary key,
    col1 varchar2(100),
    col2 number
)

Note: The GENERATED ALWAYS AS IDENTITY option is Oracle 12c specific; I included it here because with Oracle 12c the ability to auto-generate IDs is built in, simplifying sequence generation compared with older Oracle versions such as Oracle 11g.
The user creating the table will need the rights to create tables and sequences; check the Oracle documentation on Identity columns for more on this.

Service for getting all the data from the table

Add another service with the script select * from testTable1 to get all the data from the table.

Service for inserting data into the table

Adding another service with the script insert into testTable1 (col1, col2) values ('TextValue', 123) will insert the data into the table created above.

Service for getting all tables from the PDB, i.e. TW724

Using select * from tab lists all the available tables in the TW724 PDB.

Summary

For a quick wrap-up of how this looks visually, refer to the following image. Since this is a scalable setup - given the platform has enough resources - it's possible to create up to 252 PDBs under a CDB, so 252 PDBs could be created and configured to as many Things extending the OracleDBServer12 Thing Template.
______________________________________________________________________________________________________________________________________________
Edit: Common connection troubleshooting

If you observe an error like this:

Unable to Invoke Service GetAllPDBTables on TW724_PDB_Thing : ORA-01033: ORACLE initialization or shutdown in progress

ensure that the pluggable database - in this error TW724, since that is what I created and used above in my services - is open and accessible. If it is not open, log in to the CDB (ORCL in this example) as sys/system (with admin rights) via SQL*Plus, SQL Developer, or any SQL utility of your choice capable of connecting to Oracle DB, and open the pluggable database using the command:

alter pluggable database tw724 open;
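The service-name distinction called out in the note earlier in this article is visible in the connection string itself. The small Python sketch below (illustrative only, not part of the ThingWorx configuration) assembles the thin-driver URL used above: a PDB is addressed as //host:port/service_name, never by the CDB's SID.

```python
def pdb_jdbc_url(host, port, service_name):
    """Thin-driver JDBC URL for a PDB, which is reachable only as a service:
    jdbc:oracle:thin:@//host:port/service_name"""
    return "jdbc:oracle:thin:@//%s:%d/%s" % (host, port, service_name)

# the values from this article's configuration
print(pdb_jdbc_url("oravm.ptcnet.ptc.com", 1521, "tw724.ptcnet.ptc.com"))
# jdbc:oracle:thin:@//oravm.ptcnet.ptc.com:1521/tw724.ptcnet.ptc.com
```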
This document is a general reference for configuring and troubleshooting a Google email account with the ThingWorx mail extension.

Configuration:
SMTP: smtp.gmail.com, port 587, TLS checked. If SSL is being used instead, the port should be 465.
POP3: pop.gmail.com, port 995

To test, go to Services and click Test for the SendMessage service. A successful request will show an empty screen with a green "result" at the top.

Possible errors:

"Could not connect to SMTP host: smtp.gmail.com, port: 587" with nothing else in the logs. Check your Internet connection to ensure it's not being blocked.

"<hostname:port>/Thingworx/Common/locales/en-US/translation-login.json 404 (Not Found)". Check your Gmail folders for incoming messages regarding a sign-in from an unknown device. The subject will be "Someone has your password", and the email content will include the device, location, and timestamp of when the incident occurred. Make sure to check the "this was me" option to prevent further blocking. This may or may not be sufficient; sometimes it leads to another error - "Please log in via your web browser and 534-5.7.14 then try again. 534-5.7.14 Learn more at 534 5.7.14..."

That error can be resolved by:
1. Turning off the "less secure apps" feature in your Gmail settings. You have to be logged in to your Gmail account to follow the link: https://www.google.com/settings/security/lesssecureapps
2. Changing your Gmail password afterwards. I don't have a valid explanation as to why, but this is a required step, and the error doesn't clear without changing the password.
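The same SMTP settings can be exercised from any client, which is a quick way to rule out the extension when connections fail. Below is a minimal Python sketch (account values are hypothetical placeholders) that connects the way the extension's 587/TLS configuration does: STARTTLS on port 587, or implicit SSL via smtplib.SMTP_SSL on port 465. The call at the bottom is commented out because it needs real credentials.

```python
import smtplib
from email.message import EmailMessage

def send_via_gmail(sender, password, recipient, subject, body,
                   host="smtp.gmail.com", port=587):
    """Send one message over STARTTLS, mirroring the SMTP 587/TLS settings
    above (for SSL, use smtplib.SMTP_SSL with port 465 instead)."""
    msg = EmailMessage()
    msg["From"], msg["To"], msg["Subject"] = sender, recipient, subject
    msg.set_content(body)
    with smtplib.SMTP(host, port, timeout=10) as smtp:
        smtp.starttls()               # upgrade the connection to TLS
        smtp.login(sender, password)  # Gmail account credentials
        smtp.send_message(msg)

# send_via_gmail("me@gmail.com", "secret", "you@example.com", "test", "hello")
# (uncomment with real credentials to actually send)
```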
When we make connections to JDBC-connected databases, we want to leverage the database server's capabilities as much as possible to do the querying, aggregating, and everything else for us. So if we need to do filtering or math, add it to the query statement so it is done server side - by the database server, not the ThingWorx runtime server. Beyond that, there is much more that can be done, since databases are powerful (all the good stuff like joins, unions, distinct, generated and calculated fields, and what not). The more we can do on the database server side, the better it is for ThingWorx runtime performance.

Hopefully everyone knows the [[ ]] parameter substitution. With it we can easily build an SQL service that has several input parameters; it will look like:

select * from table where item1 = [[par1]] AND item2 = [[par2]]

But we can take this up a notch with the super powerful yet super dangerous << >>. Now we can create a service whose body is just <<sqlQuery>> and use another service to build something like:

select * from table where item1 in ('val1', 'val2', 'val3')

If you can avoid it, only use [[ ]]. If needed, there is always << >>, but you must make sure that you properly secure that service with the system user and validate inputs in the service that invokes it, since << >> is vulnerable to SQL injection.
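The difference between the two substitution styles can be demonstrated outside ThingWorx. The sketch below (plain Python with SQLite, purely illustrative) contrasts a parameterized query - the analogue of [[ ]], where the driver binds the value as data - with raw text splicing, the analogue of << >>, where a crafted input changes the meaning of the statement.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (item1 TEXT, secret TEXT)")
conn.execute("INSERT INTO items VALUES ('a', 'visible'), ('b', 'hidden')")

user_input = "a' OR '1'='1"  # a classic injection payload

# Analogue of [[par1]]: the value is bound by the driver, input stays data
safe = conn.execute(
    "SELECT secret FROM items WHERE item1 = ?", (user_input,)).fetchall()

# Analogue of <<sqlQuery>>: raw text splicing, input becomes part of the SQL
unsafe = conn.execute(
    "SELECT secret FROM items WHERE item1 = '%s'" % user_input).fetchall()

print(safe)    # [] - no row is literally named "a' OR '1'='1"
print(unsafe)  # [('visible',), ('hidden',)] - every row leaks
```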
Part I - Securing the connection from a remote device to the ThingWorx platform

The goal of this first part is to set up a certificate authority (CA) and sign the certificates used to authenticate MQTT clients. At the end of this first part the MQTT broker will only accept clients with a valid certificate.

A note on terminology: TLS (Transport Layer Security) is the new name for SSL (Secure Sockets Layer).

Requirements

The certificates will be generated with openssl (check whether it is already installed by your distribution). Demonstrations will be done with the open source MQTT broker mosquitto. To install it, use the apt-get command:

$ sudo apt-get install mosquitto
$ sudo apt-get install mosquitto-clients

Procedure

NOTE: This procedure assumes all the steps will be performed on the same system.

1. Set up a protected workspace

Warning: the keys for the certificates are not protected with a password. Create and use a directory that does not grant access to other users.

$ mkdir myCA
$ chmod 700 myCA
$ cd myCA

2. Set up a CA and generate the server certificates

Download and run the generate-CA.sh script to create the certificate authority (CA) files, generate server certificates, and use the CA to sign the certificates.

NOTE: Open the script to customize it at your convenience.

$ wget https://github.com/owntracks/tools/raw/master/TLS/generate-CA.sh .
$ bash ./generate-CA.sh

The script produces six files: ca.crt, ca.key, ca.srl, myhost.crt, myhost.csr, and myhost.key. These are: certificates (.crt), keys (.key), a signing request (.csr), and a serial number record file (.srl) used in the signing process.
Note that the myhost files will have different names on your system (ubuntu in my case).

Three of them get copied to the /etc/mosquitto/ directories:

$ sudo cp ca.crt /etc/mosquitto/ca_certificates/
$ sudo cp myhost.crt myhost.key /etc/mosquitto/certs/

They are then referenced in the /etc/mosquitto/mosquitto.conf file (the original post shows the cafile, certfile, and keyfile entries in a screenshot). After copying the files and modifying the mosquitto.conf file, restart the server:

$ sudo service mosquitto restart

3. Checkpoint

To validate the setup at this point, use the mosquitto_sub client (install the mosquitto-clients package if it is not already present). Change to the ca_certificates folder and subscribe to a $SYS topic, passing --cafile ca.crt. The topics are updated every 10 seconds. If debugging is needed you can add the -d flag to mosquitto_sub and/or look at /var/log/mosquitto/mosquitto.log.

4. Generate client certificates

The following openssl commands create the certificates:

$ openssl genrsa -out client.key 2048
$ openssl req -new -out client.csr -key client.key -subj "/CN=client/O=example.com"
$ openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAserial ./ca.srl -out client.crt -days 3650 -addtrust clientAuth

The argument -addtrust clientAuth makes the resulting signed certificate suitable for use with a client.

5. Reconfigure

Change the mosquitto configuration by adding the require_certificate line to the end of the /etc/mosquitto/mosquitto.conf file. Then restart the server:

$ sudo service mosquitto restart

6. Test

The mosquitto_sub command we used above now fails. Adding the --cert and --key arguments satisfies the server:

$ mosquitto_sub -t \$SYS/broker/bytes/\# -v --cafile ca.crt --cert client.crt --key client.key

To obtain the corresponding certificates and key for your own server (named ubuntu in my case), substitute its certificate and key files in the same command.

Conclusion

This first part establishes a secure connection from a remote thing to the MQTT broker.
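The CA-plus-client-certificate steps above can be checked end to end without a broker. The shell sketch below repeats the commands in a throwaway directory, using a self-made test CA instead of generate-CA.sh, -CAcreateserial instead of a pre-existing ca.srl, and 1-day validity, then asks openssl to verify the signed client certificate against the CA.

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"
# throwaway CA (generate-CA.sh produces this for real deployments)
openssl genrsa -out ca.key 2048
openssl req -x509 -new -key ca.key -days 1 -out ca.crt -subj "/CN=testCA"
# client key, signing request, and CA-signed certificate, as in step 4
openssl genrsa -out client.key 2048
openssl req -new -out client.csr -key client.key -subj "/CN=client/O=example.com"
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -out client.crt -days 1
# the chain should validate against the CA
openssl verify -CAfile ca.crt client.crt
```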
In the next part we will restrict this connection to TLS 1.2 clients only and allow the websocket connection.
The ThingWorx Android SDK was designed to load without modification into Android Studio, but due to recent changes in Android Studio, the ThingWorx Always On Android SDK 1.0 needs minor modifications made to its build files before it will load or build. These changes will be made in the 1.1 release in July 2016, but until then they will have to be applied after the user downloads the SDK.

Android Studio changed the minimum required Android build tools for gradle, which in turn forces a new version of gradle. The following changes must be made to each set of SDK project build files.

The tw-android-sdk/gradle/wrapper/gradle-wrapper.properties file (there is only one in the entire SDK) must have this line changed from:

distributionUrl=https\://services.gradle.org/distributions/gradle-2.4-all.zip

to:

distributionUrl=https\://services.gradle.org/distributions/gradle-2.10-all.zip

In all build.gradle files change:

buildToolsVersion "19.1.0"

to:

buildToolsVersion "21.0.0"

Also in all build.gradle files change:

dependencies { classpath 'com.android.tools.build:gradle:1.3.0' }

to:

dependencies { classpath 'com.android.tools.build:gradle:2.1.0' }

Now you should be able to import and build all examples again.
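Taken together, the build.gradle edits above would leave blocks that look roughly like the following sketch (your files will contain other settings as well; only the two changed values are shown):

```groovy
buildscript {
    dependencies {
        // was 'com.android.tools.build:gradle:1.3.0'
        classpath 'com.android.tools.build:gradle:2.1.0'
    }
}

android {
    // was "19.1.0"
    buildToolsVersion "21.0.0"
}
```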
Attached to this article is my slide deck from LiveWorx 2016 titled "Getting Mobile with ThingWorx and Android". It is an overview of the ThingWorx Android SDK, which had its 1.0 release this past April. Also attached is the LightBlue Bean Bluetooth integration example I demoed at the show.
The following videos are provided to help users get started with ThingWorx:

ThingWorx Installation
- Installing ThingWorx (Neo4j) in Windows
- ThingWorx PostgreSQL Setup for Windows
- ThingWorx PostgreSQL for RHEL

ThingWorx Data Storage
- Introduction to Streams
- Introduction to Value Streams
- Introduction to DataTables
- Introduction to InfoTables

ThingWorx Concepts & Functionality
- Introduction to Media Entities
- Using State Formatting in a Mashup
- Configuring Properties

ThingWorx REST API
- REST API (Part 1)
- REST API (Part 2)

ThingWorx Edge SDK
- Configuring File Transfer with the .NET SDK

ThingWorx Analytics *new*
- Getting Started with ThingWorx Analytics Part 1
- Getting Started with ThingWorx Analytics Part 2
- Installing ThingWorx Analytics Builder Part 1 of 3
- Installing ThingWorx Analytics Builder Part 2 of 3
- Installing ThingWorx Analytics Builder Part 3 of 3
- Creating Signals in the Analytics Builder
- How to Access the ThingWorx Analytics Interactive API Guide

ThingWorx Widgets
- How to Create and Configure the Auto Refresh Widget
- How to Create and Define a Blog Widget
- How to Create and Configure a Button Widget
- How to Use the Divider and Shape Widgets
- How to Create and Configure a Chart Widget
- How to Use a Contained Mashup
- How to Use the Data Filter Widget
- How to Use an Expression Widget
- How to Create and Configure a Gauge Widget
- How to Create and Configure a Checkbox Widget
- How to Use a Contained Mashup Widget
- How to Use a Data Export Widget
- How to Use the DateTime Picker Widget
- How to Use the Editable Grid Widget
- Using Fieldset and Panel Widgets
- How to Use the File Upload Widget
- How to Use the Folding Panel Widget
- How to Use the Google Location Picker
- How to Use the Google Map Widget
- How to Use a Grid Widget
- How to Use an HTML TextArea Widget
- How to Use the List Widget
- How to Use a Label Widget
- How to Use the Layout Widget
- How to Use the LED Display Widget
- How to Use the Masked Textbox Widget
- Navigation in ThingWorx: Using Menus, the Navigation Widget, Link Widget, and Contained Mashups
- How to Use the Numeric Entry Widget
- How to Use the Pie Chart Widget
- How to Use the Property Display Widget
- How to Use the Radio Button Widget
- How to Use the Repeater Widget
- How to Use the Slider Widget
- How to Use the SQUEAL Search Widget
- How to Use the Responsive Tab Widget
- How to Use the Tag Cloud Widget
- How to Use the Tag Picker Widget
- How to Use the TextArea and TextBox Widgets
- How to Use the Time Selector Widget
- How to Use the Tree Widget
- How to Use the Value Display Widget
- How to Use the Web Frame Widget
- How to Create and Define a Wiki
- How to Use the XY Chart

Quick note: this thread will be updated as more videos are added.
Objective

Learn how the Scripto Web Service helps you present platform information in your HTML with JavaScript and dynamic page updating. After this tutorial, you will know how to create Axeda Custom Objects that return formatted results to JavaScript using XmlHttpRequest, and how a very simple page can incorporate platform data into your browser-based user interface.

Part 1 - Simple Scripto

In Using Scripto, you learned how Scripto can be called from very simple clients, even the most basic HTTP tools. This tutorial builds on the examples in that tutorial. The following HelloWorld script accepts a parameter named "foo". This means that the caller of the script may supply a value for this parameter, and the script simply returns a message that includes the value supplied.

import static com.axeda.sdk.v2.dsl.Bridges.*
import com.axeda.services.v2.*
import com.axeda.sdk.v2.exception.*

return "Hello world, ${parameters.foo}"

In the first part of this tutorial, we'll be creating an HTML page with some JavaScript that simply calls the HelloWorld script and puts the result on the page.

Create an HTML File

Open up your favorite text editor and create a blank document. Paste in this simple scaffold, which includes a very simple form with fields for your developer platform email and password, and the "foo" parameter.

<html>
<head>
<title>Axeda Developer Connection Simple Ajax HelloWorld Example</title>
</head>
<body>
<form name="f1">
        Platform email (login): <input name="username" type="text"><br/>
        Password: <input name="password" type="password"><br/>
        foo: <input name="foo" type="text"><br/>
<input value="Go" type="button" onclick='JavaScript: callScripto()'/>
<div id="result"></div>
</form>
</body>
</html>

Pretty basic HTML that you've seen lots of times. Notice that the button's onclick refers to a JavaScript function. We'll be adding that next.
Add the JavaScript

Directly under the <title> tag, add the following:

<script language="JavaScript">
var scriptoURL = "http://dev6.axeda.com/services/v1/rest/Scripto/execute/";
var scriptName = "HelloWorld2";
</script>

This defines our JavaScript block, and a couple of constants to tell our script where the server's Scripto REST endpoint is and the name of the script we will be running. Let's add in our callScripto() function. Paste the following directly under the scriptName variable declaration:

function callScripto() {
    try {
        netscape.security.PrivilegeManager.enablePrivilege("UniversalBrowserRead");
    } catch (e) {
        // must be IE
    }
    var xmlHttpReq = false;
    var self = this;
    // Mozilla/Safari
    if (window.XMLHttpRequest) {
        self.xmlHttpReq = new XMLHttpRequest();
    }
    // IE
    else if (window.ActiveXObject) {
        self.xmlHttpReq = new ActiveXObject("Microsoft.XMLHTTP");
    }
    var form = document.forms['f1'];
    var username = form.username.value;
    var password = form.password.value;
    var url = scriptoURL + scriptName + "?username=" + username + "&password=" + password;
    self.xmlHttpReq.open('POST', url, true);
    self.xmlHttpReq.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
    self.xmlHttpReq.onreadystatechange = function() {
        if (self.xmlHttpReq.readyState == 4) {
            updatepage(self.xmlHttpReq.responseText);
        }
    };
    var foo = form.foo.value;
    var qstr = 'foo=' + escape(foo);
    self.xmlHttpReq.send(qstr);
}

That was a lot to process in one chunk, so let's examine each piece. The first piece just tells the browser that we'll be wanting to make some Ajax calls to a remote server. We'll be running the example right off a local file system (at first), so this is necessary to ask for permission.
try {
    netscape.security.PrivilegeManager.enablePrivilege("UniversalBrowserRead");
} catch (e) {
    // must be IE
}

This part creates an XmlHttpRequest object, which is a standard object available in browsers via JavaScript. Because of slight browser differences, this code creates the correct object based on the browser type.

var xmlHttpReq = false;
var self = this;
// Mozilla/Safari
if (window.XMLHttpRequest) {
    self.xmlHttpReq = new XMLHttpRequest();
}
// IE
else if (window.ActiveXObject) {
    self.xmlHttpReq = new ActiveXObject("Microsoft.XMLHTTP");
}

Next we create the URL that will be used to make the HTTP call. This simply combines our scriptoURL, scriptName, and platform credentials.

var form = document.forms['f1'];
var username = form.username.value;
var password = form.password.value;
var url = scriptoURL + scriptName + "?username=" + username + "&password=" + password;

Now let's tell the xmlHttpReq object what we want from it. We'll also reference the name of another JavaScript function, which will be invoked when the operation completes.

self.xmlHttpReq.open('POST', url, true);
self.xmlHttpReq.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
self.xmlHttpReq.onreadystatechange = function() {
    if (self.xmlHttpReq.readyState == 4) {
        updatepage(self.xmlHttpReq.responseText);
    }
};

Finally, for this function, we'll grab the "foo" parameter from the form and tell the prepped xmlHttpReq object to post it.

var foo = form.foo.value;
var qstr = 'foo=' + escape(foo);
self.xmlHttpReq.send(qstr);

Almost done. We just need to supply the updatepage function that we referenced above. Add this code directly before the </script> close tag:

function updatepage(str) {
    document.getElementById("result").innerHTML = str;
}

Try it out

Save your file as helloworld.html and open it in a browser by starting your browser and choosing "Open File". You can also download a zip with the file prepared for you at the end of this page.
If you are using Internet Explorer, IE will pop a bar asking you if it is OK for the script inside this page to execute a script. Choose "Allow Blocked Content". Type in your platform email address (the address you registered for the developer connection with) and your password. Enter any text that you like for "foo". When you click "Go", the area below the button will display the result of the Scripto call. Note that if you are using Mozilla Firefox, you will be warned about the script wanting to access a remote server. Click "Allow". Congratulations! You have learned how to call a Custom Object-backed Scripto service to display dynamic platform content inside a very simple HTML page. Next Steps Be sure to check out the tutorial on Hosting Custom Applications to learn how you can make this page get directly served from your platform account, with its very own URL. Also explore code samples that show more sophisticated HTML+AJAX examples using Google Charts and other presentation tools.
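As the tutorial notes, Scripto can be called from any HTTP-capable client, not just a browser. The Python sketch below builds the same request the JavaScript above sends: the base URL and script name mirror the tutorial's values, while the credentials are placeholders, and the actual network call is commented out because it needs a live platform account.

```python
from urllib.parse import urlencode
from urllib.request import Request, urlopen

SCRIPTO_URL = "http://dev6.axeda.com/services/v1/rest/Scripto/execute/"

def build_scripto_request(script_name, username, password, **params):
    """Build a POST matching the tutorial's scheme:
    <scriptoURL><scriptName>?username=...&password=... with a
    form-encoded body carrying the script parameters (e.g. foo)."""
    url = "%s%s?%s" % (SCRIPTO_URL, script_name,
                       urlencode({"username": username, "password": password}))
    body = urlencode(params).encode("utf-8")
    return Request(url, data=body,
                   headers={"Content-Type": "application/x-www-form-urlencoded"})

req = build_scripto_request("HelloWorld2", "me@example.com", "secret", foo="bar")
print(req.get_full_url())
# urlopen(req).read()  # uncomment with real platform credentials
```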
Adaptive Machine Messaging Protocol

The Adaptive Machine Messaging Protocol (AMMP) is a simple, byte-efficient, lightweight messaging protocol used to facilitate Internet of Things (IoT) communications and to build IoT connectivity into your product. Using a RESTful API, AMMP provides a semantic structure for IoT information exchange and leverages HTTPS as the means for sending and receiving messages between an edge device and the Axeda® Machine Cloud®. AMMP uses JavaScript Object Notation (JSON), allowing any device that is capable of making an HTTP transmission to interact with the Axeda Platform.

Utilizing a common network transport that is friendly to local network proxies and firewalls, and at the same time using JSON for a compact, human-readable, language-independent, and easily constructed data representation, AMMP simplifies device communication and reduces the work needed to connect to the Axeda Machine Cloud. For complete information about the protocol, refer to the Adaptive Machine Messaging Protocol (AMMP) Technical Reference.

AMMP Toolkits

The AMMP Toolkits are libraries that allow you to connect your devices to the Axeda Platform using AMMP. The AMMP Toolkits support transmission of data, alarms, events, and locations; error handling and reporting; and exchanging files with the Axeda Platform.

AMMP Android-Based Toolkit
The AMMP Android-Based Toolkit library conforms to the AMMP Protocol Version 1.1.
- AMMP Android Toolkit
- AMMP Android Toolkit Developers Reference
- AMMP Protocol v1.1 Technical Reference

AMMP Java-Based Toolkit
The AMMP Java-Based Toolkit library conforms to the AMMP Protocol Version 1.1.
- AMMP Java Toolkit
- AMMP Java Toolkit Developers Reference
- AMMP Protocol v1.1 Technical Reference

AMMP C-Based Toolkit
The AMMP C-Based Toolkit library conforms to the AMMP Protocol Version 1.1.
- AMMP C Toolkit
- AMMP C Toolkit Developers Reference
- AMMP Protocol v1.1 Technical Reference

The above resources may be found at the PTC Support Portal.
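Because AMMP is just JSON carried over HTTPS, any language with an HTTP client can produce a message without a toolkit. The Python sketch below assembles a hypothetical data payload and the POST request that would carry it. Note the field names and the endpoint URL are illustrative assumptions only; the real message schema and endpoints are defined in the AMMP Technical Reference.

```python
import json
from urllib.request import Request

def build_message_request(url, payload):
    """Wrap a JSON payload in an HTTP POST, the transport AMMP relies on."""
    body = json.dumps(payload).encode("utf-8")
    return Request(url, data=body,
                   headers={"Content-Type": "application/json"})

# illustrative payload only -- NOT the actual AMMP message schema
payload = {"model": "pump", "serial": "PX-100",
           "data": [{"temperature": 71.3}]}
req = build_message_request("https://example.com/ammp", payload)  # hypothetical endpoint
print(req.get_method(), len(req.data))
```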
Introduction

The Edge MicroServer (EMS) and Lua Script Resource (LSR) are Edge software components that can be used to connect remote devices to the ThingWorx platform. Using a Gateway is beneficial because it allows you to run one instance of the EMS on a server and then many instances of the LSR on different devices all over the world. All communication to the platform is handled by this one EMS Gateway server.

The EMS Gateway can be set up in two different types of scenarios: self-identifying Remote Things and explicitly defined Remote Things. The scenario I'm going to discuss below involves explicitly defined Remote Things, a ThingWorx server, an EMS, and an LSR. We will need at least one server to run the ThingWorx platform and EMS, but these can always be on separate servers as well. We will also need some other machine or device that will run the LSR.

Visit the support downloads page to find the latest EMS releases. The LSR is contained within the EMS download. You can also navigate to the Edge Support site to read more about the EMS and LSR if this is the first time you have ever configured one. The "ThingWorx WebSocket-based Edge MicroServer Developer's Guide", provided inside the zip file that contains the EMS, has further information.

Setting up the EMS

Once we have obtained the EMS download from the support site (see the links above), we can begin creating our config.json file. The image below is a working config.json file for using the EMS as a Gateway. The settings in it are particular to my personal IP addresses and Application Key, but the concept remains the same, and I will go into further detail on the necessary sections below.
ws_servers
- The host and port parameters are always set to the IP address and port that the ThingWorx platform is being hosted on
- When the EMS and ThingWorx platform are on the same server, "localhost" can be used instead of an IP address

appKey
- The appKey section is the value of an Application Key in the ThingWorx platform that should be used for the authentication of the EMS to the platform
- An Application Key will need to be created and assigned a user with proper privileges prior to authenticating

certificates
- The certificates section should be validating and pointing to proper certificates, but in the example above I am not validating any certificates for the sake of simplicity
- More can be read about the certificates section here

logger
- The logging section is out of scope of this article, but further reading on logger configurations can be found here
- The section in the example above will work for basic logging needs

http_server
- The http_server section configuration parameters tell the EMS what host and port to spin up a server on and whether authentication is necessary for any LSRs trying to connect
- The LSR has settings that explicitly call out whatever values are set for the host and port in this section, so make sure to set these to an open port that is not in use or blocked behind a firewall
- Further reading on the http_server section can be found here

auto_bind
- You can see above that there are two objects defined in the auto_bind section.
- One of these binds the EMS to an EMSGateway Thing in the platform called "EdgeGateway"; the other is defined in the config.lua file for the LSR
- The gateway parameter is set to true only in the object "EdgeGateway", which the EMS itself binds to
- The host and port defined for "OtherEdgeThing" should point to the port and IP address that the LSR is running on in the other device
- By default, the LSR runs on port 8001, but you can always double-check the listening port by finding the Process Identification (PID) number of luaScriptResource.exe and then matching the PID to the corresponding line item in the output of the netstat -ao command in a console window
- The protocol can be set to "http" in an example application, but make sure to use "https" when security is of concern

All further reading on the sections of the config.json file can be found in the config.json.complete file included with the EMS download, and in the Edge Help Center under the "Creating a Configuration File" and "Viewing All Options" sections.

Setting up the LSR

In this example, the LSR is going to run on a separate server and point to the EMS server.
Two very important additions (rap_host and rap_port) must be made to the default config.lua file, alongside the other settings described here:

rap_host
- The rap_host field should be set to the IP address where the EMS is hosted

rap_port
- The rap_port field should be set to the port parameter defined in the config.json http_server section

script_resource_host
- The script_resource_host field must be set so that the EMS knows what IP address to communicate with the LSR at

scripts.OtherEdgeThing
- This line identifies the name of the LSR Thing that will register with the EMS to bind to the platform
- "OtherEdgeThing" can be changed to anything, but make sure that the auto_bind section in the config.json aligns with what you've defined in the config.lua file on this line

Running the EMS and LSR

Now that we have configured the LSR and EMS to point to each other and to the platform, we can run both applications to make sure we are successful.

1. Make sure the ThingWorx platform is running
2. Create a RemoteThing with the name given in the auto_bind section for the LSR we are connecting
3. Create an EMSGateway with the name given in the auto_bind section for the EMS to bind to as a Gateway
4. Start the EMS
   - This can be done by double-clicking wsems.exe in Windows, running it as a service, or running it directly from the command line
5. Start the LSR
   - This can be done by double-clicking luaScriptResource.exe in Windows, running it as a service, or running it directly from the command line
6. Navigate to the ThingWorx platform and make sure that the Things you have created are connected
   - Do this by navigating to the Properties menu option and refreshing the isConnected property

You should be able to browse remote properties and services for each bound RemoteThing, which means you have successfully set up the EMS as a Gateway device for external LSR applications running on remote devices. Any further questions about browsing remote properties or other configuration settings in the config files are most likely addressed in the Edge Help Center under the EMS section, and if not, feel free to comment directly on this document.
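Neither the config.json image nor the config.lua screenshot referenced above survives in this copy of the article. The fragments below are reconstruction sketches based only on the sections described here; every host, port, key, file, and template value is a placeholder to replace with your own environment's values.

```json
{
    "ws_servers": [ { "host": "203.0.113.10", "port": 443 } ],
    "appKey": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    "certificates": { "validate": false },
    "logger": { "level": "INFO" },
    "http_server": { "host": "203.0.113.20", "port": 8080 },
    "auto_bind": [
        { "name": "EdgeGateway", "gateway": true },
        { "name": "OtherEdgeThing", "host": "203.0.113.30", "port": 8001, "protocol": "http" }
    ]
}
```

And the corresponding additions to the LSR's config.lua:

```lua
-- additions to the default config.lua (addresses are placeholders)
scripts.rap_host = "203.0.113.20"              -- IP address where the EMS is hosted
scripts.rap_port = 8080                        -- port from config.json's http_server section
scripts.script_resource_host = "203.0.113.30"  -- IP the EMS should use to reach this LSR
-- the name must match the second auto_bind entry in config.json;
-- the file/template values here are illustrative only
scripts.OtherEdgeThing = { file = "thing.lua", template = "example" }
```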