
IoT & Connectivity Tips

This video will walk you through the first steps of how to set up Analytics Manager for Real-Time Scoring. More specifically, this video demonstrates how to share your predictive model from Analytics Builder into Analytics Manager and test the shared model.   Updated link for access to this video: ThingWorx Analytics Manager: Publish & Test a Predictive Model
View full tip
This video will walk you through the first steps of how to set up Analytics Manager for Real-Time Scoring. More specifically, this video demonstrates how to create an Analysis Provider and start the ThingPredictor Agent. NOTE: For version 8.1 the startup command for the Agent has changed; view the command in the PTC Help Center.   Updated link for access to this video: ThingWorx Analytics Manager: Create an Analysis Provider & Start the ThingPredictor Agent
View full tip
In this part of the Troubleshooting blog series, we will review how to restart individual services essential to the ThingWorx Analytics application within the Virtual Machine Appliance.

Services have stopped, and I cannot run my Analytics jobs!

In some cases, users have encountered issues where a system or process has halted and they are unable to proceed with their tasks. This can be due to a myriad of reasons, ranging from OS hanging issues to memory issues with certain components.

As we covered earlier in Part II, the ThingWorx Analytics application is installed on a CentOS (Linux) operating system. As with most Linux operating systems, you have the ability to manually check and restart processes as needed.

Steps to Restart Services

With how the application is installed and configured, the services should auto-start when you boot up the VM. Verify that the appliance is functional by running your desired API call.

If a system is not functioning as expected, you will receive an error in your output when you POST an API call. Some errors are very specific, and you can search the Knowledge Database for any existing Knowledge Articles that may solve the issue. For error messages that do not have an existing article, you may want to attempt the following.

Method 1:

If you are encountering issues and are unsure which process is not working correctly, we recommend a full application restart, i.e. restarting the Virtual Machine Appliance from the command line terminal. Run the following command as the root user or via sudo, as this performs a graceful restart:

sudo shutdown -r now

This will restart the virtual machine safely. Once it is back up and running, you can run your API calls to verify functionality. This should resolve any incremental issues you may have faced.

Method 2:

If you want to restart an individual service, a particular start order needs to be followed to make sure the application operates as expected.

The first step is to check which services are not running. Use the following command to see what is running and its current status:

service --status-all

The services you are looking for are:
Zookeeper
PostgreSQL Server
GridWorker(s)
Tomcat

If a particular service is not on the running list, start it manually with the service start command (you may be prompted for the root password):

service [name of service] start
e.g. service tomcat start

You can verify that the services are operating by running the status check command described above.

If you need to restart all services, there is a specific start order. This is important because of dependencies: the GridWorker(s) and Tomcat, for example, rely on PostgreSQL. The start order is as follows:

1. Zookeeper
2. PostgreSQL Server
3. GridWorker(s)
4. Tomcat

After completing the restart and verifying that the services are running, run your desired API call.
View full tip
About

This is the second part of a ThingBerry related blog post series. ThingBerry is ThingWorx installed on a Raspberry Pi, which can be used for portable demonstrations without the need of utilizing e.g. customer networks. Instead, the ThingBerry provides its own custom WIFI hotspot and allows Things to connect and send / receive demo data on a small scale. In this particular blog post we'll discuss how to set up the ThingBerry as a WIFI hotspot to directly connect with mobile devices or other Raspberry Pis.

As the ThingBerry is a highly unsupported environment for ThingWorx, please see this blog post for all related warnings. In case this guide looks familiar, it's probably because it's based on https://frillip.com/using-your-raspberry-pi-3-as-a-wifi-access-point-with-hostapd/

WIFI Hot Spot

As the ThingBerry is currently connected via ethernet, we can utilize the Raspberry Pi's WIFI connection to create a private network that all the wireless devices can connect to, e.g. another Raspberry Pi or an ESP8266.

First we need to install dnsmasq and hostapd. Those will help set up the access point and create a private DNS server to dynamically assign IP addresses to connecting devices.

sudo apt-get install dnsmasq hostapd

Interfaces

We will need to configure the wlan0 interface with a static IP. For this, dhcpcd needs to ignore the wlan0 interface.

sudo nano /etc/dhcpcd.conf

Paste the following content at the end of the file. This must be ABOVE any interface lines you may have added earlier!

denyinterfaces wlan0

Save and exit. Let's now configure the static IP.

sudo nano /etc/network/interfaces

Comment out ALL lines for the wlan* configurations (e.g. wlan0, wlan1). By default there are three lines which need to be commented out by adding a # at the beginning of the line. After this, the wlan0 configuration can be pasted in:

allow-hotplug wlan0
iface wlan0 inet static
  address 192.168.0.1
  netmask 255.255.255.0
  network 192.168.0.0
  broadcast 192.168.0.255

Save and exit. Now restart the dhcpcd service and reload the wlan0 configuration with

sudo service dhcpcd restart
sudo ifdown wlan0
sudo ifup wlan0

Hostapd

Hostapd is used to configure the actual WIFI hot spot, e.g. the SSID and the WIFI password (wpa_passphrase) that's required to connect to this network.

sudo nano /etc/hostapd/hostapd.conf

Paste the following content:

interface=wlan0
driver=nl80211
ssid=thingberry
hw_mode=g
channel=6
wmm_enabled=1
ieee80211n=1
country_code=DE
macaddr_acl=0
ht_capab=[HT40][SHORT-GI-20][DSSS_CCK-40]
auth_algs=1
ignore_broadcast_ssid=0
wpa=2
wpa_key_mgmt=WPA-PSK
wpa_passphrase=changeme
rsn_pairwise=CCMP

If you prefer another SSID or a more secure password, please ensure you update the above configuration! Save and exit.

Check if the configuration is working via

sudo /usr/sbin/hostapd /etc/hostapd/hostapd.conf

It should return correctly, without any errors, and finally show "wlan0: AP-ENABLED". With this you can now connect to the "thingberry" SSID. However, there's no IP assigned automatically yet; that can be improved. Stop hostapd with CTRL+C and let's start it on boot.

sudo nano /etc/default/hostapd

At the end of the file, paste the following content:

DAEMON_CONF="/etc/hostapd/hostapd.conf"

Save and exit.

DNSMASQ

Dnsmasq allows us to assign dynamic IP addresses. Let's back up the original configuration file and create a new one.

sudo mv /etc/dnsmasq.conf /etc/dnsmasq.conf.orig
sudo nano /etc/dnsmasq.conf

Paste the following content:

interface=wlan0
listen-address=192.168.0.1
bind-interfaces
dhcp-range=192.168.0.100,192.168.0.199,255.255.255.0,12h

Save and exit. This will make the DNS service listen on 192.168.0.1 and assign IP addresses between 192.168.0.100 and 192.168.0.199 with a 12 hour lease.

Next step is to set up IPv4 forwarding for the wlan0 interface.

sudo nano /etc/sysctl.conf

Uncomment the following line (you can search in nano with CTRL+W):

net.ipv4.ip_forward=1

Save and exit.

Hostname translation

To be able to call the ThingBerry by its actual hostname, the hostname needs to be mapped in the host configuration.

sudo nano /etc/hosts

Search for the line with your hostname and update it to the local IP address (as configured in the listen-address above), e.g.

192.168.0.1 thingberry

Save and exit. Please note that this is the hostname and not the SSID of the network!

Finalizing the Configuration

Run the following commands to enable the forwarding and reboot.

sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
sudo reboot

Optional: Internet Access

The ThingBerry is independent of any internet traffic. However, if your connected devices or the ThingBerry itself need to contact the internet, the WIFI connection needs to route those packages to and from the (plugged-in) ethernet device. This can be done through iptables:

sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
sudo iptables -A FORWARD -i eth0 -o wlan0 -m state --state RELATED,ESTABLISHED -j ACCEPT
sudo iptables -A FORWARD -i wlan0 -o eth0 -j ACCEPT

All internet-related traffic will be forwarded between eth0 and wlan0 and vice versa. To load this configuration every time the ThingBerry is booted, it needs to be saved via

sudo sh -c "iptables-save > /etc/iptables.ipv4.nat"

To restore this file at boot, edit

sudo nano /etc/rc.local

Above the line "exit 0" add the following line:

iptables-restore < /etc/iptables.ipv4.nat

Save and exit.

Verification

Connect to the new WIFI hotspot via the SSID and the password configured earlier through any WIFI capable device. When connecting to your new access point, check your device's IP settings (in iOS, Android or a laptop / desktop device). It should show a 192.168.0.x IP address and should be pingable from the ThingBerry console.

Connect to http://192.168.0.1/Thingworx or http://<servername>/Thingworx to open the Composer.
View full tip
ThingWorx 7.4 covers the following areas of the product portfolio: ThingWorx Analytics, ThingWorx Utilities and ThingWorx Foundation, which includes Core, Connection Server and Edge capabilities.

Key Functional Highlights

Highlights of the release include:

Source Integration: Improved integration framework making it easy to connect with external systems, including a standard Windchill connector, to retrieve data on demand.
Industrial Connectivity: New Industrial Gateway and Discover navigation tool simplifying the mapping of tags to properties, including performance enhancements for data updates.
Edge/CSDK: Build process improvements, Subscribed Property Manager (SPM) enhancements, asynchronous service requests and TLS updates to increase developer productivity, improve application performance and strengthen security.
AWS IoT Connector: The latest version 1.2 of the Connector allows customers to more fully leverage their investment in AWS IoT. It features improved deployment automation via CloudFormation and automatic extension installation, a ThingWorx Edge JavaScript SDK for use with the Connector with support for properties, services and events, and just-in-time certificate registrations.

Contextualize

Next Generation Composer: Re-imagined Composer using modern browser concepts to improve developer efficiency, including enhanced functionality, an updated user interface and optimized workflows.

Engage

These features will be available at the end of March 2017.
Advanced Grid: New grid widget with improved design, context menu, multi-column sorting, global search and many more common grid-related features.
Tree Grid: New tree-grid widget with the same features as the advanced grid, plus the ability to pre-load tree levels, dynamically load child data and auto-expand all nodes to build more powerful mashups.

Administer & Manage

MS SQL Server: New persistence provider for model and run-time data, providing customers an alternative to PostgreSQL.
Security: ThingWorx worked with industry standard security scanning and auditing tools to identify and correct all non-trivial, and many trivial, vulnerabilities to ensure secure software best practices.
Licensing: Link ThingWorx to PTC systems of record to manage user entitlement and provide usage information and auditing capability critical to TWX, PTC and its partners.

Documentation

ThingWorx 7.4 Reference Documents
ThingWorx Core 7.4 Release Notes
ThingWorx Core Help Center
ThingWorx Edge SDKs and WebSocket-based Edge MicroServer Help Center
ThingWorx Connection Services Help Center
ThingWorx Utilities Help Center

Additional information

ThingWorx eSupport Portal
ThingWorx Developer Portal
ThingWorx Marketplace

Download

ThingWorx Platform – Select Release 7.4
ThingWorx Edge C SDK 1.4 – Select Most Recent Datecode, C-SDK-1-4-0
ThingWorx AWS IoT Connector 1.2 – Select Release 7.4
ThingWorx Utilities – Select Release 7.4
View full tip
ThingWorx 7.4 introduces a new licensing system. A license file (license.bin) needs to be placed in the ThingworxPlatform folder. A new license file is also required if you upgrade from 7.4 to a major or minor release (not service pack-level releases). For example:
• If you are using version 7.3, a license is not required.
• If you upgrade from version 7.4.1 to version 7.4.2, a license upgrade is not required.
• If you upgrade from version 7.4.3 to version 7.5.2, a license upgrade is required.
Refer to the Installing ThingWorx 7.4 guide or the Upgrading ThingWorx 7.4 guide for detailed process steps.

Paid customers have unlimited use of entities for 7.4.0. As the license file is currently locked to a version rather than to an SCN/host, and is part of the download package on PTC Support, customers can use the same downloadable for multiple instances.

The Developer Trial Edition provides a constrained license file (5 users, 100 things, 120 days); its license file is part of the on-premise download package on the Dev Portal. The Developer Trial Edition for Manufacturing (Kinex) provides a constrained license file (5 users, 100 things, no Composer access); its license file is part of the download package on the Kepware Portal.

A new Licensing Subsystem is now available. Licensing Subsystem services include:
- AcquireLicense - allows retrieval of the feature entitlements in license.bin; used when a new license is dropped in the folder (no instance restart needed)
- GetCurrentLicenseInfo - returns info on the current license file
- GetRemainingDaysInLicense - used for trial editions
- GetLicenseUsageData - returns information about users' license usage
- PurgeLicenseUsageData - deletes license usage data that is two years old and older
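As a quick illustration, these services can be invoked from a server-side script through the Subsystems collection. A minimal sketch, assuming the subsystem is named LicensingSubsystem and the executing user has permission on it:

// Hedged sketch: exact service result shapes may vary by release
var licenseInfo = Subsystems["LicensingSubsystem"].GetCurrentLicenseInfo();
var daysLeft = Subsystems["LicensingSubsystem"].GetRemainingDaysInLicense();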
View full tip
The following code snippet will retrieve a month's worth of data from the system and return it as a CSV document suitable for import into your spreadsheet or reporting tool of choice.

import static com.axeda.sdk.v2.dsl.Bridges.*
import com.axeda.drm.sdk.Context
import com.axeda.common.sdk.id.Identifier
import com.axeda.services.v2.*
import com.axeda.sdk.v2.exception.*

// Restrict the audit search to March 2017
def ac = new AuditCriteria()
ac.fromDate = Date.parse('yyyy-MM-dd', '2017-03-01')
ac.toDate   = Date.parse('yyyy-MM-dd', '2017-03-31')

def retString = ''
tcount = 0

// Page through the results until all matching audit entries have been read
while ( (results = auditBridge.find(ac)) != null && tcount < results.totalCount) {
  results.audits.each { res ->
    // Append one CSV row per audit entry
    retString += "${res?.user?.id},${res?.asset?.serialNumber},${res?.category},${res.message},${res.date}\n"
    tcount++
  }
  ac.pageNumber = ac.pageNumber + 1
}

return retString
View full tip
This is the simplest structure for looping through an infotable. It's simple, but it avoids having to use row indexes and cleans up the code for readability as well.

// Assume an incoming infotable parameter named "thingList"
for each (row in thingList.rows) {
    // Each row is already assigned to the row variable in the loop
    var thingName = row.name;
}

You can also nest these loops (just use a different variable from "row"), as shown in the sketch below. It is also important not to add or remove row entries of the infotable inside the loop; if you do, you may end up skipping or repeating rows, since the indexes will be changed.
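A sketch of such a nested loop, assuming a second infotable parameter named "propertyList" exists alongside "thingList" (that parameter and its thingName field are placeholders):

// Outer loop over things, inner loop over a second infotable
for each (row in thingList.rows) {
    // Use a different variable name for the inner loop
    for each (propRow in propertyList.rows) {
        if (propRow.thingName == row.name) {
            // work with the matching pair here
        }
    }
}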
View full tip
Recently, mentor.axeda.com was retired. The content has not yet been fully migrated to the ThingWorx Community, though a plan is in place to do this over the coming weeks. Attached, please find OneNote 2016 and PDF attachments that contain the content that was previously available on the Axeda Mentor website.
View full tip
This is just a quick reference on how to install pgAdmin 3 if the auto-install with the yum command (sudo yum install pgadmin3) fails. Two routes are available.

1. Try running:

yum list pgadmin*

If the output shows a pgadmin3 package available from one of the PostgreSQL repositories, you just need to add that repository:

rpm -Uvh http://yum.postgresql.org/9.4/redhat/rhel-7-x86_64/pgdg-redhat94-9.4-3.noarch.rpm

After that, try sudo yum install pgadmin3_94 (insert your actual version).

2. If you would like a different version (or the latest one) of pgAdmin, you can grab the .tar.gz file from https://www.postgresql.org/ftp/pgadmin3/release/ and install it manually. For example, for version 1.22.2 (the version is just for demoing purposes – I grabbed the top one available in the list):

mv pgadmin3-1.22.2.tar.gz /usr/local/src
cd /usr/local/src
tar -zxvf pgadmin3-1.22.2.tar.gz
cd pgadmin3-1.22.2
./configure
make
make install

Then you would need to configure your server to allow remote user access of the database using pgAdmin. Two config files need to be modified:

Open up the postgresql.conf configuration file. Search for the phrase 'listen_addresses' (without quotes). In order to open access up to all IP addresses, change the value to a *. The default is set to 'localhost', which does not allow connections from remote computers.

Next, open the pg_hba.conf configuration file. Scroll down to the bottom of the file to the section marked "# IPv4 local connections". Add the following on its own line, just underneath the 127.0.0.1/32 line necessary for 'localhost':

host all all youripaddress/32 trust

This will allow local access of the database server to the computer with the IP address you specified. To add additional remote computers, simply add a new line with their appropriate IP address.

Now that your configuration is complete, restart the PostgreSQL database server for the changes to take effect:

su -
su postgres
pg_ctl restart -D /usr/local/pgsql/data
View full tip
Welcome to the ThingWorx Community area for code examples and sharing.

We have a few how-to items and basic guidelines for posting content in this space. The Jive platform that our community runs on provides some tools for posting and highlighting code in the document format that this area is based on. Please try to follow these settings to make the area easy to use, read and follow.

At the top of your new document, please give a brief description of the code sample that you are posting.
Use the code formatting tool provided for all parts of code samples (including if there are multiple in one post).
Try your best to put comments in the code to describe behavior where needed.
You can edit documents, but note that each time you save them a new version is created. You can delete old versions if needed.
You may add comments to others' code documents and modify your own code samples based on comments.
If you have alternative ways to accomplish the same as an existing code sample, please post it to the comments. We encourage everyone to add alternatives noted in comments to the main post/document.

Format code: The double blue arrows allow you to select the type of code being inserted and will do keyword highlighting as well as add line numbers for reference and discussions.
View full tip
This has been moved to its new home in the Augmented Reality Category in the PTC Community.
View full tip
This has been moved to its new home in the Augmented Reality Category in the PTC Community.
View full tip
The following code is best practice when creating any "entity" in a ThingWorx service script. When a new entity is created (like a Thing), it is loaded into JVM memory immediately, but is not committed to disk until the transaction (service) successfully completes. For this reason, ALL code in such a service must be in a try/catch block to handle exceptions. In order to roll back the create call, the catch must call a delete for any entity created. In-line comments give further detail.

try {
    var params = {
        name: "NewThingName",
        description: "This Is A New Thing",
        thingTemplateName: "GenericThing"
    };
    Resources["EntityServices"].CreateThing(params);

    // Always enable and restart a new thing to make it active on the Platform
    Things["NewThingName"].Enable();
    Things["NewThingName"].Restart();

    // Now create an Organization for the new Thing
    var params = {
        topOUName: "NewOrgName",
        name: "NewOrgName",
        description: "New Organization for new Thing",
        topOUDescription: "New Org Main"
    };
    Resources["EntityServices"].CreateOrganization(params);

    // Any code that could potentially cause an exception should
    // also be included in the try-catch block.
} catch (err) {
    // If an exception is caught, we need to attempt to delete everything
    // that was created to roll back the entire transaction.
    // If we do not do this, a "ghost" entity will remain in memory.
    // We must do this in reverse order of creation so there are no dependency conflicts.
    // We also do not know where it failed, so we must attempt to remove all of them,
    // but also handle exceptions in case they were not created.

    try {
        var params = {name: "NewOrgName"};
        Resources["EntityServices"].DeleteOrganization(params);
    } catch (ex2) {
        // Org was not created
    }

    try {
        var params = {name: "NewThingName"};
        Resources["EntityServices"].DeleteThing(params);
    } catch (ex2) {
        // Thing was not created
    }
}
View full tip
1. Create a network and add all entities that implement a specific ThingShape to the network.
2. Create a ThingShape mashup as below. Note: Bind the Entity parameter to DynamicThingShapes_TracotrShape's service GetProperties input EntityName. Also bind the mashup's RefreshRequested event to that service.
3. Create a mashup named ContentShape, and add a Tree widget and a ContainedMashup widget to it.
4. Bind the GetNetworkConnection service's Selected Row(s) result and its SelectedRowsChanged event to the ContainedMashup widget.
Note: A Master can totally replace the ThingShape mashup. We suggest using a Master after ThingWorx 6.0.
View full tip
Does WSEMS use SSL pinning?
Yes, it does. To explain the process a bit more, SSL pinning is the act of verifying a server certificate by comparing it to the exact certificate that is expected. We "install" a certificate on the EMS (copying the cert to the EMS device and specifying the cert in config.json); the EMS will then check all incoming certs against the cert in config.json, looking for an exact match and verifying the certificate chain.

Should it be expected that the client, WSEMS.exe, installs the client certificate in order to validate the server certificate using SSL pinning?
This does not happen automatically; the cert must be downloaded and manually added to the device. The config must also be manually updated.

If so, how do we issue the client cert for WSEMS.exe? Does it need to be issued the same way as the server certificate?
If talking about pinning, one could use either the server cert directly, or the root cert (recommended). The root cert is the public cert of the signing authority; e.g. if the server uses a Verisign-issued cert, one can verify the authenticity of the signer by having the Verisign public cert. If talking about client certs (2-way auth: the client verifies the server, and the server verifies the client), then the process is a little different.

What happens when the client certificate expires? Do all the devices go offline when the client certificate expires?
The device won't connect, with a failure to authorize. Once the server cert expires, the server cert needs to be updated everywhere. This is the advantage of using something like the Verisign public cert (root cert) as the installed cert on the client: root certs usually last longer than issued certs, but they too will have to be replaced eventually when they expire.

When using Entrust certificates, obtained at https://www.entrust.com/get-support/ssl-certificate-support/root-certificate-downloads/, how do we know that the certificates are fed to the WSEMS configuration incorrectly?
With the wrong configuration, WSEMS errors out rejecting the Entrust certificate:
Non-FIPS - Error code 20: Invalid certificate
FIPS - Error code 19: self signed certificate in certificate chain

How do we properly install an Entrust certificate to be accessed by the EMS?
The public certificate needs to be downloaded and put in a directory that is accessible to the EMS, and the path added to the cert chain field of the configuration. Even if it's trusted, it always needs to be installed on the EMS. If a certificate is self-signed, meaning it's non-trusted by default, OpenSSL will throw errors and complain; to solve this, it needs to be installed as a trusted server. If it's signed by a non-trusted CA, that CA's certificate needs to be installed as well.

Once the certificates are obtained, this is the block of the configuration we are interested in:

"certificates": {
    "validate": true,               // the non-validated model is not recommended
    "allow_self_signed": false,     // the self-signed model is not recommended, yet theoretically better than non-validated
    "cert_chain": " "               // recommended, trusted - also the only way to work with a FIPS-enabled EMS
}

You may use an SSL test tool (for example, https://www.ssllabs.com) to find all chains.

For Entrust, both the Entrust L1K and G2 certificates need to be downloaded. Because cert_chain is an array-type field, the format to install the certificate paths would be:

"cert_chain": ["path1", "path2"]

e.g.

"cert_chain": ["C:\\ThingworxLocation\\MicroServerLocation\\locationofcertificates\\entrust_l1k.pem","C:\\ThingworxLocation\\MicroServerLocation\\locationofcertificates\\entrust_g2_ca.pem"]
View full tip
In case we would like to create an external application and we aren't sure what's the best solution to use, below are some useful tips.

Scenario:
Let's say we use a gateway in order to access the external application we want to create. We would like to implement this gateway translating the ThingWorx standard protocol to the SCADA protocol. The system administrator, who manages the grid, has their own secure system, with a standard for communication inside the SCADA system, and we want to be able to get data from our system to the system they have. Let's also consider that the data is connected to the electrical field.

Tips:
It is recommended to develop a 3rd party application that on one side talks to ThingWorx, and on the other side talks to the SCADA system. This external ThingWorx application would have a series-edge-interface allowing it to enter our customer's Ethernet network, so that both systems can communicate.

JDBC is not recommended - it's mostly for connecting to the database, which in our case is not the main purpose.

Each REST API call to the platform uses a security accreditation (appkey or user/password); depending on the permissions contained in that token, access can be allowed to certain parts of the platform.

Reasons for using REST API:
REST API is simple and not dependent on any particular format for the data coming from ThingWorx. It can be used offline, online, synchronously, asynchronously, and is easy to manage from a formatting point of view. ThingWorx gives a lot of options: it can export information via XML to a plain XML file, to be parsed into whatever protocol we have on the other end, so our application has to handle XML input from ThingWorx and process it into SCADA-compatible output; or we can talk directly via REST API and read on a per-thing basis (using the web services). The interface application just has to know how to read XML or REST API calls (which are provided with an XML-formatted response).

The SDK already has libraries written in C, C# and Java. The C, C# and Java SDKs use the AlwaysOn protocol (web socket) and are more firewall friendly. They are mostly for speed and automated processing, for when we know exactly what happens, we trust the other side, and there is little chance of errors.

If we go with REST API or an SDK, the application that is developed will have complete access inside ThingWorx, e.g. to change/edit Things. If we want access in both directions - not only reading data, but also updating/deleting information, etc. - both the SDKs and REST API can be used, because we have the whole range of commands, like setting property values, calling services, etc. We can limit access, if we want, for security reasons. The SDKs offer access to the same services as REST API, but in a different way. Otherwise, it's better to go with decoupled XML files.

Conclusion: for this particular scenario, better use the SDK.
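For reference, here is what the REST option discussed above could look like from an external application: a minimal Node.js sketch using the built-in https module, where the hostname, app key, Thing and property names are placeholder assumptions, not part of the original scenario.

const https = require("https");

// Placeholder connection details - replace with real values
const options = {
    hostname: "thingworx.example.com",
    port: 443,
    path: "/Thingworx/Things/GridSensorThing/Properties/Voltage",
    method: "GET",
    headers: {
        "appKey": "your-app-key-here", // the security accreditation discussed above
        "Accept": "application/json"
    }
};

https.request(options, (res) => {
    let body = "";
    res.on("data", (chunk) => { body += chunk; });
    res.on("end", () => {
        // ThingWorx returns the value wrapped in an infotable-style JSON structure
        console.log(JSON.parse(body));
    });
}).end();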
View full tip
This has been moved to its new home in the Augmented Reality Category in the PTC Community.
View full tip
Introduction

The In-Memory Column Store keeps data in columnar format, contrary to row format, allowing users to run faster analytics; the idea behind this is to push the computation as close to the data store as possible. In this post I'll configure an Oracle database to enable this feature and then populate one or more tables into the In-Memory Column Store. This could be particularly helpful if you are using Oracle 12c as an external data store - storing data in database tables via a JDBC connection, or current/historic values from a DataTable, Stream or ValueStream - for running analytics or DML with lots of joins that require a lot of computation before the data is finally presented on a mashup. For this post I used the data generated by a temperature sensor stored in a ValueStream, exported it from the ValueStream to CSV, and imported it into an Oracle table.

In-Memory Column Store vs In-Memory Database

As mentioned above, Oracle 12c version 12.1.2 comes with the built-in In-Memory Column Store feature. As the name suggests, it allows data to be populated in RAM, enabling high speed transactions and analytics on data without the need to traverse the HDD; in some cases this is much faster than the buffer cache of the SGA. Without going into too much nitty-gritty, it's important to note that the In-Memory Column Store does not equate to an In-Memory database. While it could be possible to move an entire schema into the memory defined for the In-Memory Column Store, if it's small enough to fit, the idea is to speed up computation requiring analytics on one or more tables which are heavily queried by users. If you are interested in an In-Memory database as a persistence provider for ThingWorx, please refer to the documentation Using SAP HANA as the Persistence Provider, which is one option among the available persistence providers for ThingWorx.

What changes are required to the current Oracle 12c installation or the JDBC connection to ThingWorx?

The In-Memory Column Store is built into Oracle 12.1.2 and only needs to be enabled, as it is not by default. This can be done without any change to:
1. The existing SQL services created within ThingWorx
2. The general application architecture accessing the tables in the Oracle database over JDBC
3. The existing Oracle 12c installation

Getting Started

What will it take to enable the In-Memory Column Store? The feature can be enabled in a few steps:
1. Enable the feature in the Oracle 12.1.2 installation by assigning some RAM to the In-Memory Column Store
2. Adjust the SGA size for the database to incorporate the memory assigned to the In-Memory Column Store
3. Bounce the database

As mentioned above, though this is a built-in feature in Oracle 12.1.2, it is not enabled by default. We can confirm this by executing the following SQL in SQL*Plus or Oracle SQL Developer while connected to the database for which we are enabling the feature:

SQL> show parameter INMEMORY;

Things to consider before enabling
1. Ensure that the hardware / VM hosting the Oracle installation has sufficient RAM
2. Ensure you bump up the SGA by the amount of memory assigned to the In-Memory Column Store; failing to do so may lead to the database failing to start, requiring recovery

Note: The minimum memory that can be assigned to the In-Memory Column Store is 100M.

Setting it all up

For my test setup I will assign 5G to the In-Memory Column Store and add this amount to the current SGA. To do this, let's start SQL*Plus with rights that allow changes to the existing SGA; I'm using sys@orcl as sysdba (ORCL is the name of my test database).

Step 1: Start SQL*Plus, e.g. sqlplus sys@orcl as sysdba
Step 2: ALTER SYSTEM SET INMEMORY_SIZE = 5G SCOPE=SPFILE;
Step 3: ALTER SYSTEM SET SGA_TARGET = 20G SCOPE=SPFILE;

Once done, bounce the database. And that's it! We can now confirm via SQL*Plus that a certain amount of memory, 5G in my case, has been assigned to the In-Memory Column Store feature:

SQL> show parameter inmemory

Populating the In-Memory Column Store

The In-Memory Column Store only populates the data from a table on first use, or if the table is marked critical, which tells Oracle to populate it as soon as the database comes online after a restart. For more detail on the commands concerning the In-Memory Column Store, refer to the OTN webpage.

I'll now use the SensorHistory table, which holds the ValueStream's exported data (currently 32 million+ rows), and mark it for population in the columnar architecture of the In-Memory Column Store with the following command:

SQL> ALTER TABLE SENSORHISTORY INMEMORY;
-- marks the table as eligible for the In-Memory Column Store with default parameters

At this point the data is still not populated, since we have only marked the table as eligible; querying the dynamic view V$IM_SEGMENTS for current In-Memory usage confirms this. So let's populate the In-Memory store with a query that requires a full table scan, e.g.

SQL> SELECT property_name, COUNT(*)
     FROM sensorhistory
     GROUP BY property_name;

Rechecking the dynamic view V$IM_SEGMENTS now shows the table populated.

As mentioned above, this is completely transparent to the application layer: if you already have an existing JDBC connection from ThingWorx to Oracle, all the existing services created for that table will continue to work as expected. If you don't have an existing JDBC connection to Oracle, it can be created with the usual steps, with no special configuration for In-Memory.

Creating the JDBC connection

I'm including this section for the purpose of completeness; if you already have a working JDBC connection to Oracle 12.1.2, you can skip to the Conclusion below. To access the database along with the In-Memory Column Store table, we'll now set up the JDBC connection to Oracle. For that, download and import the TW_Download_Relational Databases Connectors.zip (ThingWorx Marketplace) and unzip it to access Oracle12Connector_Extension.zip.

Step 1: Import the extension into ThingWorx by navigating to Import/Export > Import > Extensions
Step 2: Create a Thing using the OracleDBServer12 Template, part of the extension we just imported
Step 3: Configure the Thing to connect to the database, ORCL in this case
Step 4: Navigate to the Properties in the Entity Information panel on the left and verify that the isConnected property value is True.
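Once isConnected is True, the database Thing behaves like any other Thing. As a hedged sketch (the Thing and service names below are assumptions for illustration, not part of the extension), a custom SQL query service defined on it can be invoked from any ThingWorx service:

// "OracleDB12Thing" is the Thing created in Step 2; "GetSensorCountsByProperty"
// is a hypothetical SQL query service defined on that Thing
var counts = Things["OracleDB12Thing"].GetSensorCountsByProperty();
// counts is an infotable, e.g. one row per PROPERTY_NAME with its row count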
Conclusion

This is a very short introduction to a setup that can improve the performance of analytics on stored data manifold. The data in the In-Memory Column Store is not stored in conventional row format, but rather in a large columnar format. If the need is for simple SQL queries without many joins, the SGA cache may be sufficient and probably faster, and you may not gain much by configuring the In-Memory Column Store. However, for queries requiring heavy computation on large data sets, having the In-Memory Column Store configured could bring a manifold increase in performance. If you need more guidelines on where you'd want to use the In-Memory Column Store, feel free to give the following good reads a try, along with real-world data use cases for reference. I will try to find some time to run my own benchmark and put it out in a separate blog on performance gain.

1. Oracle Database In-Memory with Oracle Database 12c Release 2: Oracle white paper
2. When to Use Oracle Database In-Memory: Identifying Use Cases for Application Acceleration
3. Oracle Database 12c In-Memory Option
4. Testing Oracle In-Memory Column Store @ CERN
View full tip
1. Use Postman or any other software for making a REST API call to ThingWorx.
2. Create a request in Postman with the following parameters:
Type: POST
URL: https://<IP>:<PORT>/Thingworx/Users/<UserName>/Services/AssignNewPassword
<IP>: IP of the server where ThingWorx is installed.
<PORT>: Port on which ThingWorx is running (if required).
<UserName>: User name of the user whose password is to be reset.
Headers:
appKey: Your Administrator app key, or the app key of a user having permission to run the AssignNewPassword service for that user.
Content-Type: application/json
Body:
{
    "newPassword": "NewPasswordHere",
    "newPasswordConfirm": "NewPasswordHere"
}
3. Send the request.
4. Log in using the new password.
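The same reset can also be done from a server-side ThingWorx script, as a minimal sketch; "UserName" is a placeholder, and the executing user needs permission to run AssignNewPassword on the target user:

var params = {
    newPassword: "NewPasswordHere",
    newPasswordConfirm: "NewPasswordHere"
};
// "UserName" is the placeholder name of the user whose password is reset
Users["UserName"].AssignNewPassword(params);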
View full tip
This has been moved to its new home in the Augmented Reality Category in the PTC Community.
View full tip