
IoT & Connectivity Tips

This video is the second part of a two-part series walking you through the configuration of an Analysis Event used for real-time scoring. This part 2 video walks you through the configuration of the Analysis Event and validates that a prediction job has been executed based on new input data.   Updated link for access to this video: Analytics Manager 7.4: Configure Analysis Event & Real Time Scoring Part 2 of 2
This video walks you through the use of Analysis Replay to execute analysis events on historic data. It applies to ThingWorx Analytics 7.4 through 8.1.   Updated link for access to this video: Analytics Manager: Using Analysis Replay
This video is the first part of a two-part series walking you through the configuration of an Analysis Event used for real-time scoring. This first video demonstrates how to create a Template and Thing that allow the prediction model to score in real time. Note that this video is for demo purposes only; customers who already have ThingWorx will have their properties set up and only need to configure the Analysis Event, which is demonstrated in the part 2 video.   Updated link for access to this video: Analytics Manager 7.4: Create a Template & Thing for Real-Time Scoring Part 1 of 2
This video walks you through the first steps of setting up Analytics Manager for real-time scoring. More specifically, it demonstrates how to create an Analysis Provider and start the ThingPredictor Agent. NOTE: For version 8.1 the startup command for the Agent has changed; see the command in the PTC Help Center.   Updated link for access to this video: ThingWorx Analytics Manager: Create an Analysis Provider & Start the ThingPredictor Agent
In this part of the Troubleshooting blog series, we will review how to restart the individual services essential to the ThingWorx Analytics Application within the Virtual Machine Appliance.

Services have stopped, and I cannot run my Analytics jobs! In some cases users encounter issues where a system or process has halted and they are unable to proceed with their tasks. This can happen for a myriad of reasons, ranging from OS hangs to memory issues with certain components.

As covered earlier in Part II, the ThingWorx Analytics Application is installed on a CentOS (Linux) operating system. As with most Linux operating systems, you have the ability to manually check and restart processes as needed.

Steps to Restart Services

With how the Application is installed and configured, the services should start automatically when you boot up the VM. Verify that the Appliance is functional by running your desired API call. If a system is not functioning as expected, you will receive an error in the output when you POST an API call. Some errors are very specific, and you can search the Knowledge Database for existing Knowledge Articles that may solve the issue. For error messages that do not have an existing article, you may want to attempt the following.

Method 1: If you are encountering issues and are unsure which process is not working correctly, we recommend a full Application restart. This involves restarting the Virtual Machine Appliance from the command line terminal. Run the following command as the root user (or via sudo) for a graceful restart: sudo shutdown -r now This restarts the virtual machine safely; once it is back up, run your API calls to verify functionality. This should resolve any intermittent issues you may have faced.
Method 2: If you want to restart an individual service, a particular start order must be followed to make sure the Application operates as expected.

The first step is to check which services are not running. You can use the following command to see what is running and its current status: service --status-all

The services you are looking for are: Zookeeper, PostgreSQL Server, GridWorker(s) and Tomcat. If a particular service is not in the running list, start it manually with the service start command: service [name of service] start e.g. service tomcat start (you may be prompted for the root password). You can verify that the services are operating by running the status check command described above.

If you need to restart all services, the start order is important, as there are dependencies such as PostgreSQL being required by the GridWorker(s) and Tomcat. The start order is: Zookeeper, PostgreSQL Server, GridWorker(s), Tomcat. After completing the restart and verifying that the services are running, run your desired API call.
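The start order above can be sketched as a small script. This is a dry-run sketch: it echoes the commands rather than executing them, and the lowercase service names are assumptions - check the actual names on your appliance with service --status-all before using it for real.

```shell
# Dry-run sketch of the start order described above.
# Replace the echo with the real command (e.g. sudo service "$svc" start)
# on the appliance; the lowercase service names are assumptions.
order=""
for svc in zookeeper postgresql gridworker tomcat; do
  echo "service $svc start"
  order="$order $svc"
done
echo "started order:$order"
```

On the appliance you would follow this with service --status-all to confirm everything came up.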
About: This is the second part of a ThingBerry related blog post series. ThingBerry is ThingWorx installed on a Raspberry Pi, which can be used for portable demonstrations without the need to utilize e.g. customer networks. Instead the ThingBerry provides its own custom WIFI hotspot and allows Things to connect and send / receive demo data on a small scale. In this particular blog post we'll discuss how to set up the ThingBerry as a WIFI hotspot to directly connect with mobile devices or other Raspberry Pis. As the ThingBerry is a highly unsupported environment for ThingWorx, please see this blog post for all related warnings. In case this guide looks familiar, it's probably because it's based on https://frillip.com/using-your-raspberry-pi-3-as-a-wifi-access-point-with-hostapd/

WIFI Hot Spot: As the ThingBerry is currently connected via ethernet, we can utilize the Raspberry Pi's WIFI interface to create a private network to which wireless devices can connect, e.g. another Raspberry Pi or an ESP8266. First we need to install dnsmasq and hostapd. These will help set up the access point and create a private DNS server to dynamically assign IP addresses to connecting devices. sudo apt-get install dnsmasq hostapd

Interfaces: We will need to configure the wlan0 interface with a static IP. For this the dhcpcd needs to ignore the wlan0 interface. sudo nano /etc/dhcpcd.conf Paste the following content at the end of the file. This must be ABOVE any interface lines you may have added earlier! denyinterfaces wlan0 Save and exit. Let's now configure the static IP. sudo nano /etc/network/interfaces Comment out ALL lines for the wlan* configurations (e.g. wlan0, wlan1). By default there are three lines which need to be commented out by adding a # at the beginning of the line. After this the wlan0 configuration can be pasted in: allow-hotplug wlan0 iface wlan0 inet static   address 192.168.0.1 netmask 255.255.255.0 network 192.168.0.0 broadcast 192.168.0.255 Save and exit.
Now restart the dhcpcd service and reload the wlan0 configuration with sudo service dhcpcd restart sudo ifdown wlan0 sudo ifup wlan0

Hostapd: Hostapd is used to configure the actual WIFI hotspot, e.g. the SSID and the WIFI password (wpa_passphrase) required to connect to this network. sudo nano /etc/hostapd/hostapd.conf Paste the following content: interface=wlan0 driver=nl80211 ssid=thingberry hw_mode=g channel=6 wmm_enabled=1 ieee80211n=1 country_code=DE macaddr_acl=0 ht_capab=[HT40][SHORT-GI-20][DSSS_CCK-40] auth_algs=1 ignore_broadcast_ssid=0 wpa=2 wpa_key_mgmt=WPA-PSK wpa_passphrase=changeme rsn_pairwise=CCMP If you prefer another SSID or a more secure password, be sure to update the configuration above! Save and exit. Check that the configuration is working via sudo /usr/sbin/hostapd /etc/hostapd/hostapd.conf It should return without any errors and finally show "wlan0: AP-ENABLED". With this you can now connect to the "thingberry" SSID. However, no IP address is assigned automatically yet - this will be improved next. Stop hostapd with CTRL+C and let's start it on boot. sudo nano /etc/default/hostapd At the end of the file, paste the following content: DAEMON_CONF="/etc/hostapd/hostapd.conf" Save and exit.

DNSMASQ: Dnsmasq assigns dynamic IP addresses to connecting devices. Let's back up the original configuration file and create a new one. sudo mv /etc/dnsmasq.conf /etc/dnsmasq.conf.orig  sudo nano /etc/dnsmasq.conf Paste the following content: interface=wlan0 listen-address=192.168.0.1 bind-interfaces dhcp-range=192.168.0.100,192.168.0.199,255.255.255.0,12h Save and exit. This will make the DNS service listen on 192.168.0.1 and assign IP addresses between 192.168.0.100 and 192.168.0.199 with a 12 hour lease. The next step is to set up IPv4 forwarding for the wlan0 interface. sudo nano /etc/sysctl.conf Uncomment the following line (you can search in nano with CTRL+W): net.ipv4.ip_forward=1 Save and exit.
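The sysctl.conf edit can also be done non-interactively with sed instead of nano. The sketch below operates on a throwaway copy in /tmp so it can be tried safely; on the ThingBerry the real file is /etc/sysctl.conf.

```shell
# Uncomment net.ipv4.ip_forward=1 in a demo copy of sysctl.conf.
# Point CONF at /etc/sysctl.conf (with sudo) on the real device.
CONF=/tmp/sysctl.conf.demo
printf '# Uncomment the next line to enable packet forwarding for IPv4\n#net.ipv4.ip_forward=1\n' > "$CONF"
sed -i 's|^#net.ipv4.ip_forward=1|net.ipv4.ip_forward=1|' "$CONF"
cat "$CONF"
```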
Hostname translation: To be able to call the ThingBerry by its actual hostname, the hostname needs to be mapped in the host configuration. sudo nano /etc/hosts Find the line with your hostname and update it to the local IP address (as configured in the listen-address above), e.g. 192.168.0.1 thingberry Save and exit. Please note that this is the hostname and not the SSID of the network!

Finalizing the Configuration: Run the following commands to enable the forwarding and reboot. sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward" sudo reboot

Optional: Internet Access: The ThingBerry is independent of any internet traffic. However, if your connected devices or the ThingBerry itself need to contact the internet, the WIFI connection needs to route those packets to and from the (plugged-in) ethernet device. This can be done through iptables: sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE  sudo iptables -A FORWARD -i eth0 -o wlan0 -m state --state RELATED,ESTABLISHED -j ACCEPT  sudo iptables -A FORWARD -i wlan0 -o eth0 -j ACCEPT All internet-related traffic will be forwarded between eth0 and wlan0 and vice versa. To load this configuration every time the ThingBerry is booted, it needs to be saved via sudo sh -c "iptables-save > /etc/iptables.ipv4.nat" To restore it at boot, edit sudo nano /etc/rc.local and above the line exit 0 add the following line: iptables-restore < /etc/iptables.ipv4.nat Save and exit.

Verification: Connect to the new WIFI hotspot via the SSID and the password configured earlier through any WIFI capable device. When connected to your new access point, check your device's IP settings (in iOS, Android or a laptop / desktop device). It should show a 192.168.0.x IP address, which should be pingable from the ThingBerry console. Connect to http://192.168.0.1/Thingworx or http://<servername>/Thingworx to open the Composer.
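The rc.local edit from the Optional section above (inserting the iptables-restore line directly above exit 0) can also be scripted. The sketch below works on a throwaway copy so it can be tried safely; on the ThingBerry the real file is /etc/rc.local.

```shell
# Insert the iptables-restore line directly above "exit 0" in rc.local.
# Uses a temp copy; point RC at /etc/rc.local (with sudo) on the device.
RC=/tmp/rc.local.demo
printf '#!/bin/sh -e\nexit 0\n' > "$RC"
sed -i 's|^exit 0$|iptables-restore < /etc/iptables.ipv4.nat\nexit 0|' "$RC"
cat "$RC"
```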
There are multiple approaches to improve performance: Increase the network bandwidth between the client PC and the ThingWorx Server Reduce unnecessary handovers when the client submits requests to the ThingWorx server through the network Here are suggestions to reduce unnecessary handovers between client and server: Eliminate the use of proxy servers between the client and ThingWorx The Combined.version.date.xxx.xxx.js file must be downloaded the first time a mashup page is loaded (TTFB and content download time), and loading performance is affected by the use of proxy servers between the client and the ThingWorx Server. This is the test result with a proxy server set up This is the test result after eliminating the proxy server from the same environment Remove extensions that are not used on the ThingWorx server After extensions are installed, the size of Combined.version.date.xxx.xxx.js increases Avoid HTTP/HTTPS request errors For example, an HTTPS request error when calling Google Maps can take more than 20 seconds
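One way to quantify the TTFB and content download time mentioned above is curl's timing variables. The command below is only echoed, not executed, since it needs a live server; the host name and file name in the URL are placeholders you would replace with your own.

```shell
# Build a curl command that reports TTFB and total download time
# for the combined JS file; the URL is a placeholder.
URL="https://your-twx-server/Thingworx/Common/Combined.version.date.xxx.xxx.js"
FMT="TTFB: %{time_starttransfer}s total: %{time_total}s size: %{size_download}B"
CMD="curl -o /dev/null -s -w '$FMT' '$URL'"
echo "$CMD"
```

Running the echoed command before and after removing the proxy (or an unused extension) gives comparable numbers for the two test results described above.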
ThingWorx 7.4 covers the following areas of the product portfolio: ThingWorx Analytics, ThingWorx Utilities and ThingWorx Foundation, which includes Core, Connection Server and Edge capabilities.

Key Functional Highlights

Source
- Integration: Improved integration framework making it easy to connect with external systems, including a standard Windchill connector, to retrieve data on demand.
- Industrial Connectivity: New Industrial Gateway and Discover navigation tool simplifying the mapping of tags to properties, including performance enhancements for data updates.
- Edge/CSDK: Build process improvements, Subscribed Property Manager (SPM) enhancements, asynchronous service requests and TLS updates to increase developer productivity, improve application performance and strengthen security.
- AWS IoT Connector: The latest version 1.2 of the Connector allows customers to more fully leverage their investment in AWS IoT. It features improved deployment automation via CloudFormation and automatic extension installation, the ThingWorx Edge JavaScript SDK for use with the Connector with support for properties, services and events, and just-in-time certificate registrations.

Contextualize
- Next Generation Composer: Re-imagined Composer using modern browser concepts to improve developer efficiency, including enhanced functionality, updated user interface and optimized workflows.

Engage (these features will be available at the end of March 2017)
- Advanced Grid: New grid widget with improved design, context menu, multi-column sorting, global search and many more common grid-related features.
- Tree Grid: New tree-grid widget with the same features as the advanced grid plus the ability to pre-load tree levels, dynamically load child data and auto-expand all nodes to build more powerful mashups.

Administer & Manage
- MS SQL Server: New persistence provider for model and run-time data providing customers an alternative to PostgreSQL.
- Security: ThingWorx worked with industry standard security scanning and auditing tools to identify and correct all non-trivial, and many trivial, vulnerabilities to ensure secure software best practices.
- Licensing: Link ThingWorx to PTC systems of record to manage user entitlement and provide usage information and auditing capability critical to TWX, PTC and its partners.

Documentation
- ThingWorx 7.4 Reference Documents
- ThingWorx Core 7.4 Release Notes
- ThingWorx Core Help Center
- ThingWorx Edge SDKs and WebSocket-based Edge MicroServer Help Center
- ThingWorx Connection Services Help Center
- ThingWorx Utilities Help Center

Additional information
- ThingWorx eSupport Portal
- ThingWorx Developer Portal
- ThingWorx Marketplace

Download
- ThingWorx Platform – Select Release 7.4
- ThingWorx Edge C SDK 1.4 – Select Most Recent Datecode, C-SDK-1-4-0
- ThingWorx AWS IoT Connector 1.2 – Select Release 7.4
- ThingWorx Utilities – Select Release 7.4
Recently, mentor.axeda.com was retired. The content has not yet been fully migrated to Thingworx Community, though a plan is in place to do this over the coming weeks. Attached, please find OneNote 2016 and PDF attachments that contain the content that was previously available on the Axeda Mentor website.
This is just a quick reference on how to install pgAdmin 3 if the automatic install with yum (sudo yum install pgadmin3) fails. Two routes are available. 1. Try running yum list pgadmin* If you see the package listed, it is available in the highlighted repository and you just need to add the repository: rpm -Uvh http://yum.postgresql.org/9.4/redhat/rhel-7-x86_64/pgdg-redhat94-9.4-3.noarch.rpm After that, try sudo yum install pgadmin3_94 (insert your actual version) 2. If you would like a different version (or the latest one) of pgAdmin, you can grab the .tar.gz file from https://www.postgresql.org/ftp/pgadmin3/release/ and install it manually. For example, for version 1.22.2 (the version is just for demo purposes – I grabbed the top one available in the list): mv pgadmin3-1.22.2.tar.gz /usr/local/src cd /usr/local/src tar -zxvf pgadmin3-1.22.2.tar.gz cd pgadmin3-1.22.2 ./configure make make install Then you need to configure your server to allow remote user access to the database using pgAdmin. Two config files need to be modified: Open the postgresql.conf configuration file. Search for the phrase 'listen_addresses'. To open access up to all IP addresses, change the value to *. The default is 'localhost', which does not allow connections from remote computers. Next open the pg_hba.conf configuration file. Scroll down to the bottom of the file to the section marked # IPv4 local connections. Add the following on its own line, just underneath the 127.0.0.1/32 line necessary for 'localhost': host all all youripaddress/32          trust This allows access to the database server from the computer with the IP address you specified. To add additional remote computers, simply add a new line with their appropriate IP address. Now that your configuration is complete, restart the PostgreSQL database server for the changes to take effect.
su -
su postgres
pg_ctl restart -D /usr/local/pgsql/data
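The two configuration edits can also be applied non-interactively. The sketch below works on throwaway copies in /tmp so it can be tried safely; on a real server you would point the variables at postgresql.conf and pg_hba.conf in the PostgreSQL data directory and then restart as shown above. The remote IP address is an example.

```shell
# Demo copies of the two config files; point these at the real
# postgresql.conf and pg_hba.conf on your server.
PGCONF=/tmp/postgresql.conf.demo
HBA=/tmp/pg_hba.conf.demo
printf "listen_addresses = 'localhost'\n" > "$PGCONF"
printf "host    all    all    127.0.0.1/32    trust\n" > "$HBA"
# Open listening to all addresses:
sed -i "s/listen_addresses = 'localhost'/listen_addresses = '*'/" "$PGCONF"
# Allow one remote computer (example IP):
echo "host    all    all    192.168.1.50/32    trust" >> "$HBA"
cat "$PGCONF" "$HBA"
```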
Welcome to the ThingWorx Community area for code examples and sharing.   We have a few how-to items and basic guidelines for posting content in this space. The Jive platform that our community runs on provides some tools for posting and highlighting code in the document format this area is based on. Please try to follow these settings to make the area easy to use, read and follow.   At the top of your new document, please give a brief description of the code sample you are posting. Use the code formatting tool provided for all parts of code samples (including if there are multiple in one post). Try your best to put comments in the code to describe behavior where needed. You can edit documents, but note that each time you save them a new version is created; you can delete old versions if needed. You may add comments to others' code documents and modify your own code samples based on comments. If you have an alternative way to accomplish the same as an existing code sample, please post it in the comments. We encourage everyone to add alternatives noted in comments to the main post/document.   Formatting code: the double blue arrows allow you to select the type of code being inserted and will do keyword highlighting as well as add line numbers for reference and discussions.
This has been moved to its new home in the Augmented Reality Category in the PTC Community.
Does WSEMS use SSL pinning? Yes, it does. To explain the process a bit more: SSL pinning is the act of verifying a server certificate by comparing it to the exact certificate that is expected. We "install" a certificate on the EMS (copying the cert to the EMS device and specifying the cert in config.json); the EMS will then check all incoming certs against the cert in config.json, looking for an exact match and verifying the certificate chain. Should it be expected that the client, WSEMS.exe, installs the client certificate in order to validate the server certificate using SSL pinning? This does not happen automatically; the cert must be downloaded and manually added to the device, and the config must also be manually updated. If so, how do we issue the client cert for WSEMS.exe? Does it need to be issued the same way as the server certificate? If talking about pinning, one could use either the server cert directly or the root cert (recommended). The root cert is the public cert of the signing authority; e.g. if the server uses a Verisign-issued cert, one can verify the authenticity of the signer by having the Verisign public cert. If talking about client certs (2-way auth: the client verifies the server, the server verifies the client), then the process is a little different. What happens when the client certificate expires? Do all the devices go offline when the client certificate expires? The device won't connect, failing to authorize. Once the server cert expires, the server cert needs to be updated everywhere. This is the advantage of using something like the Verisign public cert (root cert) as the installed cert on the client: root certs usually last longer than the issued certs, but they too will eventually have to be replaced when they expire. When using Entrust certificates, obtained at https://www.entrust.com/get-support/ssl-certificate-support/root-certificate-downloads/ , how do we know if the certificates are fed to the WSEMS configuration incorrectly?
With the wrong configuration, WSEMS errors out rejecting the Entrust certificate: Non-FIPS: Error code 20: Invalid certificate FIPS: Error code 19: self signed certificate in certificate chain How to properly install an Entrust certificate to be accessed by the EMS? The public certificate needs to be downloaded and put in a directory that is accessible to the EMS, and the path added to the cert_chain field of the configuration. Even if it's trusted, it will always need to be installed on the EMS. If a certificate is self-signed, meaning it's non-trusted by default, OpenSSL will throw errors and complain. To solve this, it needs to be installed as a trusted server certificate. If it's signed by a non-trusted CA, that CA's certificate needs to be installed as well. Once the certificates are obtained, this is the block of the configuration we are interested in: "certificates":    {                   "validate":          true, //Non-validated model is not recommended                       "allow_self_signed":  false, //Self-signed model is not recommended, yet theoretically better than non-validated                       "cert_chain":      " " } //recommended, trusted - also the only way to work with a FIPS enabled EMS You may use an SSL test tool (for example, https://www.ssllabs.com ) to find all chains. For Entrust, both the Entrust L1K and G2 certificates need to be downloaded. Because cert_chain is an array type of field, the format to install the certificate paths would be: "cert_chain": ["path1", "path2"] e.g. "cert_chain": ["C:\\ThingworxLocation\\MicroServerLocation\\locationofcertificates\\entrust_l1k.pem","C:\\ThingworxLocation\\MicroServerLocation\\locationofcertificates\\entrust_g2_ca.pem"]
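A simple pre-flight check before starting the EMS is to verify that every path listed in cert_chain actually exists on disk, since a missing file will also cause a certificate error. The sketch below uses placeholder files in /tmp standing in for the real paths from config.json (real certificates come from Entrust).

```shell
# Check that each cert_chain path exists before starting WSEMS.
# The /tmp paths are placeholders for the paths in your config.json.
mkdir -p /tmp/certs_demo
touch /tmp/certs_demo/entrust_l1k.pem /tmp/certs_demo/entrust_g2_ca.pem
missing=0
for c in /tmp/certs_demo/entrust_l1k.pem /tmp/certs_demo/entrust_g2_ca.pem; do
  [ -f "$c" ] || { echo "MISSING: $c"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all certificates present"
```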
In case we would like to create an external application and we aren't sure what's the best solution to use, below are some useful tips. Scenario: Let's say we use a gateway in order to access the external application we want to create. We would like to implement this gateway translating the ThingWorx standard protocol to the SCADA protocol. The system administrator, who manages the grid, has their own secure system, with a standard for communication inside the SCADA system, and we want to be able to get data from our system to theirs. Let's also consider that the data concerns the electrical field. Tips: It is recommended to develop a 3rd-party application that on one side talks to ThingWorx, and on the other side talks to the SCADA system. This external ThingWorx application would have an edge interface allowing it to enter our customer's Ethernet network, so that both systems can communicate. JDBC is not recommended - it's mostly for connecting to the database, which in our case is not the main purpose. Each REST API call to the platform uses a security accreditation (appKey or user/password); depending on the permissions contained in that token, access can be allowed to certain parts of the platform. Reasons for using the REST API: The REST API is simple and not dependent on the format the data comes in from ThingWorx. It can be used offline, online, synchronously, asynchronously, and is easy to manage from a formatting point of view. ThingWorx gives a lot of options, like exporting information via XML to a plain XML file, to parse it to whatever protocol we have on the other end: Either our application will have to handle XML input from ThingWorx and process it towards SCADA-compatible output, or we can talk directly via the REST API and read on a per-Thing basis (using the web services).
The interface application just has to know how to read XML or REST API calls (which are provided with an XML-formatted response). SDKs are already available in C, C# and Java. The C, C# and Java SDKs use the AlwaysOn protocol (WebSocket) and are more firewall friendly. They are mostly suited for speed and automated processing, i.e. when we know exactly what happens, we trust the other side and we know there is little chance of errors. If we go with the REST API or an SDK, the application that is developed will have complete access inside ThingWorx, e.g. it can change/edit Things. If we want access in both directions - not only reading data, but also updating/deleting information, etc. - the SDKs and REST API can be used, because we have the whole range of commands, like setting property values, calling services, etc. We can limit access, if we want, for security reasons. The SDKs offer access to the same services as the REST API, but in a different way. Otherwise, it's better to go with decoupled XML files. Conclusion: for this particular scenario, it is better to use an SDK.
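As an illustration of the REST API option described above, a property read is a single authenticated HTTP call. The server, Thing and property names and the appKey below are made-up examples; the command is echoed rather than executed, since it would need a live platform.

```shell
# Sketch of a REST API property read using an application key.
# All names below are placeholders, not real entities.
TWX="https://your-twx-server/Thingworx"
APPKEY="00000000-0000-0000-0000-000000000000"  # placeholder appKey
CMD="curl -s -H 'appKey: $APPKEY' -H 'Accept: application/json' $TWX/Things/GridMeter1/Properties/Voltage"
echo "$CMD"
```

The same endpoint accepts Accept: text/xml if the gateway prefers to consume XML, which matches the XML-parsing approach discussed above.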
Introduction The In-Memory Column Store keeps data in columnar format, contrary to row format, allowing users to run faster analytics; the idea behind it is to push the computation as close to the data store as possible. In this post I'll configure the Oracle database to enable this feature and then populate one or more tables into the In-Memory Column Store. This could be particularly helpful if you are using Oracle 12c as an external data store, e.g. for storing data in database tables via a JDBC connection or current/historic values from a DataTable, Stream or ValueStream, and you run analytics or DMLs with lots of joins that require a lot of computation before the data is finally presented on the Mashup(s). For this post I used data generated by a temperature sensor stored in a ValueStream, exported it to CSV from the ValueStream and imported it into an Oracle table. In-Memory Column Store vs In-Memory database As mentioned above, Oracle 12c version 12.1.2 comes with the built-in In-Memory Column Store feature. As the name suggests, it allows data to be populated in RAM, enabling high-speed transactions and analytics on data without the need to traverse the HDD; in some cases this is much faster than the buffer cache of the SGA. Without going into too much nitty-gritty, it's important to note that the In-Memory Column Store does not equate to an in-memory database. While it could be possible to move the entire schema, if it's small enough, to fit in the memory defined for the In-Memory Column Store, the idea is to speed up computation requiring analytics on one or more table(s) that are heavily queried by users. If you are interested in an in-memory database as the persistence provider for ThingWorx, please refer to the documentation Using SAP HANA as the Persistence Provider, which is one option among the other available Persistence Providers for ThingWorx. What changes are required to the current Oracle 12c installation or the JDBC connection to ThingWorx?
The In-Memory Column Store is a built-in feature in Oracle 12.1.2 and only needs to be enabled, as it is not enabled by default. This can be done without any changes to: 1. The existing SQL services created within ThingWorx 2. The general application architecture accessing the tables in the Oracle database over JDBC 3. The existing Oracle 12c installation Getting Started What will it take to enable the In-Memory Column Store? The feature can be enabled in a few steps: 1. Enable the feature in the Oracle 12.1.2 installation by assigning some RAM to the In-Memory Column Store 2. Adjust the SGA size for the database to incorporate the memory assigned to the In-Memory Column Store 3. Bounce the database As mentioned, though this is a built-in feature of Oracle 12.1.2, it is not enabled by default, and we can confirm this by executing the following SQL in SQL*Plus or Oracle SQL Developer while connected to the database for which we are enabling the feature: SQL> show parameter INMEMORY; Things to consider before enabling 1. Ensure that the hardware/VM hosting the Oracle installation has sufficient RAM 2. Ensure that the SGA is bumped up by the amount of memory assigned to the In-Memory Column Store; failing to do so may lead to the database failing to start and requiring recovery Note: The minimum memory that can be assigned to the In-Memory Column Store is 100M Setting it all up For my test setup I will assign 5G to the In-Memory Column Store and add this amount to the current SGA. To do this, start SQL*Plus with rights that allow changes to the existing SGA, so I'm using sys@orcl as sysdba (ORCL is the name of my test database) Step 1: Start SQL*Plus, e.g. sqlplus sys@orcl as sysdba Step 2: ALTER SYSTEM SET INMEMORY_SIZE = 5G SCOPE=SPFILE; Step 3: ALTER SYSTEM SET SGA_TARGET = 20G SCOPE=SPFILE; Once done, bounce the database. And that's it!
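The three steps above can be captured in a small SQL*Plus script. The sketch below only generates the script in /tmp (it does not touch a database); you would then run it with sqlplus sys@orcl as sysdba @/tmp/inmemory_setup.sql. The 5G/20G sizes follow the example in the text and should be adjusted to your hardware.

```shell
# Generate the setup script for enabling the In-Memory Column Store.
cat > /tmp/inmemory_setup.sql <<'EOF'
ALTER SYSTEM SET INMEMORY_SIZE = 5G SCOPE=SPFILE;
ALTER SYSTEM SET SGA_TARGET = 20G SCOPE=SPFILE;
-- bounce the database so the SPFILE changes take effect
SHUTDOWN IMMEDIATE
STARTUP
EOF
cat /tmp/inmemory_setup.sql
```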
We should now be able to confirm via SQL*Plus that a certain amount of memory, 5G in my case, has been assigned to the In-Memory Column Store feature: SQL> show parameter inmemory Populating the In-Memory Column Store The In-Memory Column Store will only populate the data from a table on first use or, if the table is marked critical, as soon as the database comes online after a restart. For more detail on the commands concerning the In-Memory Column Store refer to the OTN webpage. I'll now use the SensorHistory table, which holds the ValueStream's exported data in CSV format; currently this table holds ~32 million+ rows. I mark it for the columnar architecture of the In-Memory Column Store with the following command: SQL> ALTER TABLE SENSORHISTORY INMEMORY; -- marks the table as eligible for the In-Memory Column Store with default parameters Just to confirm that the data is still not populated, since we have only marked the table as eligible: querying the dynamic view V$IM_SEGMENTS for current In-Memory usage will show nothing yet. So now let's populate the In-Memory store with a query that requires a full table scan, e.g. SQL> select property_name, count(*) from sensorhistory group by property_name; Let's recheck the dynamic view V$IM_SEGMENTS. As mentioned above, this is completely transparent to the application layer, so if you already have an existing JDBC connection from ThingWorx to Oracle, all the existing services created for that table will continue to work as expected. If you don't have an existing JDBC connection to Oracle, it can be created with the usual steps, with no special configuration for In-Memory.
Creating the JDBC connection I'm including this section for completeness; if you already have a working JDBC connection to Oracle 12.1.2 you can skip to the Conclusion below. To access the database, including the In-Memory Column Store table, we'll now set up the JDBC connection to Oracle. Download and import TW_Download_Relational Databases Connectors.zip (ThingWorx Marketplace), then unzip it to access Oracle12Connector_Extension.zip Step 1: Import the extension into ThingWorx by navigating to Import/Export > Import > Extensions Step 2: Create a Thing using the OracleDBServer12 Template, part of the extension we just imported Step 3: Configure the Thing to successfully connect to the database, ORCL in this case Step 4: Navigate to the Properties in the Entity Information panel on the left and verify that the isConnected property value is True. Conclusion This is a very short introduction to a setup that can improve the analytics performed on stored data many times over. The data in the In-Memory Column Store is not stored in conventional row format, but in a large columnar format. If the need is for simple SQL queries without many joins, the SGA cache may be sufficient, and probably faster, and you may not gain much by configuring the In-Memory Column Store. However, for queries requiring heavy computation on large data sets, having the In-Memory Column Store configured could bring a manifold increase in performance. Therefore, if you need more guidelines on where you'd want to use the In-Memory Column Store, give the good reads listed below a try, along with real-world data use cases for reference. I will try to find some time to run my own benchmark and will try to publish it in a separate blog on performance gains. 1. Oracle Database In-Memory with Oracle Database 12c Release 2: Oracle white paper 2.
When to Use Oracle Database In-Memory :  Identifying Use Cases for Application Acceleration 3. Oracle Database 12c In-Memory Option 4. Testing Oracle In-Memory Column Store @ CERN
This has been moved to its new home in the Augmented Reality Category in the PTC Community.
PLEASE NOTE: DataConnect has now been deprecated, is no longer in use, and is not supported.

We are regularly asked in the community how to send data from the ThingWorx platform to ThingWorx Analytics in order to perform some analytics computation. There are different ways to achieve this, and the right one depends on the business needs. If the analytics need is anomaly detection, the best way forward is to use ThingWatcher, without the ThingWorx Analytics server. The ThingWatcher Help Center is an excellent place to start, and a quick start-up program can be found in this blog. If the requirement is to perform a full-blown analytics computation, then sending data to ThingWorx Analytics is required. This can be achieved by:
- Using ThingWorx DataConnect, which is what this blog will cover
- Using a custom implementation

I will be very pleased to get your feedback on your experience in implementing a custom solution, as this could give some good ideas to others too.

In this blog we will use the example of a smart Tractor in ThingWorx, where we collect data points on:
- Speed
- Distance covered since the last tyre change
- Tyre pressure
- Amount of gum left on the tyre
- Tyre size

From an Analytics perspective, the gum left on the tyre is the goal we want to analyse in order to know when the tyre needs changing.

We are going to cover the following:
- Background
- Workflow
- DataConnect configuration
- ThingWorx configuration
- Data Analysis Definition configuration
- Data Analysis Definition execution
- Demo files

Background

For people not familiar with ThingWorx Analytics, it is important to know that ThingWorx Analytics only accepts a single data file in .csv format. Each column of the .csv file represents a feature that may have an impact on the goal to analyse. For example, in the case of tyre wear, the distance covered, the speed, the pressure and the tyre size are our features. The goal is also included as a column in this .csv file.
So any solution sending data to ThingWorx Analytics needs to prepare such a .csv file. DataConnect performs this activity, in addition to some transformation.

Workflow

1. Decide on the properties of the Thing to be collected that are relevant to the analysis.
2. Create service(s) that collect those properties.
3. Define a Data Analysis Definition (DAD) object in ThingWorx. The DAD uses a Data Shape to define each feature that is to be collected and sends them to ThingWorx Analytics. Part of the collection process requires the use of the services created in step 2.
4. Upon execution, the DAD creates one skinny .csv file per feature and sends those skinny files to DataConnect. In the case of our example, the DAD creates speed.csv, distance.csv, pressure.csv, gumleft.csv, tyresize.csv and id.csv.
5. DataConnect processes those skinny .csv files to create a final single .csv file that contains all these features. During the processing, DataConnect performs some transformation and synchronisation of the different skinny .csv files.
6. The resulting dataset .csv file is sent to the ThingWorx Analytics Server, where it can then be used like any other dataset file.

DataConnect configuration

As seen in this workflow, a ThingWorx server, a DataConnect server and a ThingWorx Analytics server need to be installed and configured. Thankfully, the installation of DataConnect is rather simple and well described in the ThingWorx DataConnect User's Guide. Below I have provided a sample of a working dataconnect.conf file for reference, as this is one place where syntax can cause a problem:

ThingWorx Configuration

The Platform Subsystem needs to be configured to point to the DataConnect server. This is done under SYSTEM > Subsystems > PlatformSubsystem:

DAD Configuration

The most critical part of the process is to properly configure the DAD, as this is what dictates the format and values filled into the skinny .csv files for the specific features.
The first step is to create a Data Shape with as many fields as there are features/properties to collect. Note that one field must be defined as the primary key. This field is the one that uniquely identifies the Thing (more on this later). We can then create the DAD using this Data Shape, as shown below.

For each feature, a datasource needs to be defined to tell the DAD how to collect the values for the skinny .csv files. This is where custom services are usually needed. Indeed, the Out Of The Box (OOTB) services, such as QueryNumberPropertyHistory, help to collect logged values, but the id returned by those services is continuously incremented. This does not work for the DAD: the id returned by each service needs to be what uniquely identifies the Thing, and it needs to be the same for all records for this Thing amongst the different skinny .csv files, because it is this field that DataConnect uses to merge all the skinny files into one master dataset .csv file. A custom service can make use of the OOTB services, but it needs to override the id value. For example, the service below uses QueryNumberPropertyHistory to retrieve the logged values and timestamps, but then overrides the id with the Thing's name.

The returned values of the service then need to be mapped in the DAD to indicate which output corresponds to the actual property's value, the Thing id and the timestamp (if needed). This is done through the Edit Datasource window (by clicking on the Add Datasource link, or on the datasource itself if already defined in the Define Feature window). On the left-hand side, we define the origin of the datasource; here we have selected the service GetHistory from the Thing Template smartTractor. On the right-hand side, we define the mapping between the service's outputs and the skinny .csv file columns. Circled in grey are the outputs from the service; circled in green are what define the columns in the .csv file.
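The id-override idea behind that custom service can be sketched outside ThingWorx in plain JavaScript. Everything here is simulated: in the real service the history rows would come from QueryNumberPropertyHistory and the id from the Thing's own name.

```javascript
// Sketch of the id-override pattern used by the custom GetHistory service.
// Every row keeps the same id (the Thing name) instead of an auto-incremented
// one, so DataConnect can later merge the per-feature files on that id.
function buildSkinnyRows(thingName, history) {
  return history.map(function (entry) {
    return { id: thingName, value: entry.value, timestamp: entry.timestamp };
  });
}

// Simulated logged values for the Distance property of Tractor_1:
var history = [
  { timestamp: "2017-01-01T00:00:00Z", value: 120.5 },
  { timestamp: "2017-01-01T01:00:00Z", value: 130.2 }
];
var rows = buildSkinnyRows("Tractor_1", history);
// Every row now carries "Tractor_1" as its id.
```

The same constant id then appears in each skinny file for this Thing, which is exactly the property the DAD needs.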
A skinny .csv file has 1 to 3 columns, as follows:
- One column for the ID. Simply tick the radio button corresponding to the service output that represents the ID.
- One column representing the value of the Thing property. This is indicated by selecting the link icon on the left-hand side in front of the returned data that represents the value (in our example, the output data from the service is named value).
- One column representing the timestamp. This is only needed when a property is time dependent (for example, in a time series dataset). In our example, the Distance property (the distance covered by the tyre) does depend on time; however, we would not have a timestamp for the TyreSize property, as the wheel size remains the same.

How many columns should we have (and therefore how many outputs should our service have)?
- The .csv file representing the ID has one column, and therefore the service collecting the ID returns only one output (the Thing name in our smartTractor example; not shown here but available in the download).
- Properties that are not time bound have a .csv file with 2 columns: one for the ID and one for the value of the property.
- Properties that are time bound have 3 columns: one for the ID, one for the value and one for the timestamp. Therefore the service has 3 outputs.

Additionally, the input for the service may need to be configured by clicking on the icon.

Once the datasources are configured, you should configure the Time Sampling Interval in the General Information tab. This sampling interval is used by DataConnect to synchronise all the skinny .csv files. See the Help Center for a good explanation of this.

DAD Execution

Once the above configuration is done, the DAD can be executed to collect property values already logged on the ThingWorx platform.
Select Execution Settings in the DAD and enter the time range for the property collection.

A dataset with the same name as the DAD is then created in DataConnect as well as in the ThingWorx Analytics Server.

DataConnect:

ThingWorx Analytics:

The dataset can then be processed like any other dataset inside ThingWorx Analytics.

Demo files

For convenience I have also attached a ThingWorx entities export that can be imported into a ThingWorx platform, so you can take a closer look at the setup discussed in this blog. Also attached is a small simulator to populate the properties of the Tractor_1 Thing. The usage is:

java -jar TWXTyreSimulatorClient.jar hostname_or_IP port AppKey

For example:

java -jar TWXTyreSimulatorClient.jar 192.168.56.106 8080 d82510b7-5538-449c-af13-0bb60e01a129

Again, feel free to share your experience in the comments below, as it will be very interesting for all of us. Thank you
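As a footnote to the workflow above, the merge DataConnect performs on the skinny files can be illustrated in a very simplified form with plain JavaScript. This is only a sketch of the id-based join; the real DataConnect also synchronises time-bound rows on the sampling interval, which is omitted here.

```javascript
// Simplified illustration of merging skinny per-feature tables into one
// master dataset keyed on the Thing id. Real DataConnect also synchronises
// rows on the Time Sampling Interval; this sketch only joins on id.
function mergeSkinnyTables(tables) {
  var merged = {};
  Object.keys(tables).forEach(function (feature) {
    tables[feature].forEach(function (row) {
      if (!merged[row.id]) merged[row.id] = { id: row.id };
      merged[row.id][feature] = row.value;
    });
  });
  // Return one record per Thing id, with one column per feature.
  return Object.keys(merged).map(function (id) { return merged[id]; });
}

// Two non time-bound skinny tables for our example:
var dataset = mergeSkinnyTables({
  tyresize: [{ id: "Tractor_1", value: 42 }],
  gumleft:  [{ id: "Tractor_1", value: 0.8 }]
});
// dataset: [{ id: "Tractor_1", tyresize: 42, gumleft: 0.8 }]
```

This is why the id column must be identical across all the skinny files: it is the join key for the final dataset.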
Validator widgets provide an easy way to evaluate simple expressions and show users different outcomes in a mashup. Using a validator is fairly intuitive for simple expressions, such as "is my field blank?". But if we need to evaluate a more complex scenario based on multiple parameters, we can pair the validator with a backing service that performs the more complex logic. To show how services can work in conjunction with a validator widget, let's consider a slightly more complicated scenario: a web form needs to validate that the zip or postal code entered by the user is consistent with the country the user selected.

Let's go through the steps of validating our form:

1. Create a service with two input parameters. Our service will use regular expressions to validate a postal code based on the user's country. Here's some sample code we could use on an example Thing:

// Input parameters: Country and PostalCode (strings)
// Country-based regular expressions:
var reCAD = /^[ABCEGHJKLMNPRSTVXY]{1}\d{1}[A-Z]{1} *\d{1}[A-Z]{1}\d{1}$/;
var reUS = /^\d{5}(-\d{4})?$/;
var reUK = /^[A-Za-z]{1,2}[\d]{1,2}([A-Za-z])?\s?[\d][A-Za-z]{2}$/;
var search = null;
// Validate based on Country:
if (Country === "CAD")
    search = reCAD.exec(PostalCode);
else if (Country === "US")
    search = reUS.exec(PostalCode);
else if (Country === "UK")
    search = reUK.exec(PostalCode);
// result is true only when the postal code matched the country's pattern
// (search is initialised to null so an unknown country yields false)
var result = (search !== null);

2. Set up a simple mashup to collect the parameters and pass them into our service.
3. Add a validator widget.
4. Configure the validator widget to parse the output of the service. The ServiceInvokeComplete event on the service should trigger the evaluation and pass the result of the service into a new parameter on the widget called ServiceResult (Boolean). The expression for the evaluator is then: ServiceResult ? true : false
5. Based on the output of the validator, display a message confirming whether the postal code is valid or invalid.
6. Add a button to activate the service and the validator evaluation.

Of course, in addition to providing a message, we can also use the results of the validator to activate additional services (such as writing the results of the form to the database). For an example you can import into your ThingWorx environment, please see the attached .zip file, which contains a sample mashup and a test Thing with a validator service.
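The regex logic of the service can also be exercised as standalone JavaScript outside ThingWorx. Here the service's Country and PostalCode input parameters become plain function arguments:

```javascript
// Standalone version of the validator service logic: returns true when the
// postal code matches the selected country's pattern, false otherwise
// (including when the country is unknown).
function validatePostalCode(country, postalCode) {
  var patterns = {
    CAD: /^[ABCEGHJKLMNPRSTVXY]{1}\d{1}[A-Z]{1} *\d{1}[A-Z]{1}\d{1}$/,
    US:  /^\d{5}(-\d{4})?$/,
    UK:  /^[A-Za-z]{1,2}[\d]{1,2}([A-Za-z])?\s?[\d][A-Za-z]{2}$/
  };
  var re = patterns[country];
  return re ? re.test(postalCode) : false;
}

validatePostalCode("US", "12345");     // → true
validatePostalCode("US", "1234");      // → false
validatePostalCode("CAD", "K1A 0B1");  // → true
validatePostalCode("UK", "SW1A 1AA");  // → true
```

Keeping the patterns in a lookup object also makes it easy to add more countries without another if/else branch.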
Here is a sample for testing the ConvertJSON service.

1. Create a DataShape.
2. Provide the 4 inputs for the ConvertJSON service:

fieldMap (the structure of the infotable in the JSON row)

json (the JSON content):

{
  "rows": [
    {
      "email": "example1@ptc.com"
    },
    {
      "name": "Lily",
      "email": "example2@ptc.com"
    }
  ]
}

rowPath (the JSON key whose rows hold the table information): rows

dataShape (the infotable DataShape): UserEmail
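What ConvertJSON does with these inputs can be approximated in plain JavaScript: pick the array found at rowPath and keep the fields declared by the data shape, leaving absent fields empty. This is only a rough sketch under those assumptions; the real service returns a proper infotable and its rowPath/fieldMap handling is richer.

```javascript
// Rough approximation of ConvertJSON: extract the rows at rowPath and keep
// only the fields declared by the data shape. The real service returns an
// infotable; this sketch returns an array of plain objects.
function convertJson(json, rowPath, fieldNames) {
  var rows = json[rowPath] || [];
  return rows.map(function (row) {
    var out = {};
    fieldNames.forEach(function (name) {
      // Fields missing from a row stay undefined, mirroring an empty cell.
      out[name] = row[name];
    });
    return out;
  });
}

// The sample payload from this post, with a UserEmail-like shape of name/email:
var sample = {
  rows: [
    { email: "example1@ptc.com" },
    { name: "Lily", email: "example2@ptc.com" }
  ]
};
var table = convertJson(sample, "rows", ["name", "email"]);
// table[0].email → "example1@ptc.com", table[1].name → "Lily"
```

Note how the first row has no name field; ConvertJSON likewise produces an empty cell rather than failing when a row omits a field.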