
IoT Tips

Troubleshooting platform issues is generally done using a layered approach, similar to a simplified OSI model. From bottom to top, the following layers represent the areas to analyze at each step:

1. Physical (server, power, wired connections): check the server status and condition, CPU and memory levels.
2. Software (operating system, Tomcat, Java versions, compatibility, and configuration): refer to the compatibility matrix to ensure the requirements are met; verify the Tomcat and Java configuration. Note: the Tomcat manager's server status page conveniently provides this information in one place.
3. Network: ensure proper connectivity, port availability, firewall configuration, and any additional security, if applicable.
4. Application.

This blog post concentrates on step 4. Since the ThingWorx application is driven by Tomcat, the first tool available out of the box is the built-in Tomcat manager app. Clicking on "Server Status" provides information on versions, memory usage, processes, times, and thread counts. Keep in mind that the default Tomcat maximum thread count is 200.

Some additional tools that can assist in troubleshooting Java applications and gathering performance metrics are JavaMelody, New Relic, and Profiler4j. These have to be obtained, installed, and configured separately.

JavaMelody: a free and lightweight monitoring tool that does not do any profiling, so it is safe to use in production environments. It comes with a series of plug-ins, including for Grails, Jenkins, and Jira.
New Relic: real-time Java application monitoring, featuring code deployment reports, transaction tracing across different tiers, and the ability to create alerts. A subscription fee applies.
Profiler4j: a free, open-source tool for profiling in Java. It is enabled by passing an argument at start-up with a path to the Profiler4j .jar file. It comes with several graphs and charts showing a call graph with method details, a call tree, a memory monitor, a class list, and thread monitoring.

From the application perspective, ThingWorx Composer provides a PlatformSubsystem and a LoggingSubsystem. The PlatformSubsystem contains services such as GetPerformanceMetrics, GetSummaryInformation, and GetThingworxVersion, which provide fundamental information for any troubleshooting scenario. The LoggingSubsystem contains the logs, log settings, and other monitoring values.

Recommended tools for troubleshooting all layers:
Wireshark: monitors network traffic.
jstack: captures thread dumps (stack traces) of a running JVM.
Dynatrace: system performance and web application performance.
JConsole: system or application performance.
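The subsystem services mentioned above can also be invoked over the ThingWorx REST API, which is convenient when gathering metrics from a remote machine or on a schedule. Below is a minimal sketch using Python's requests library; the server URL and application key are placeholders, and the key must have execute permission on the PlatformSubsystem.

import requests

# Placeholders -- substitute your own server URL and an application key with the right permissions
BASE = "https://your-thingworx-server/Thingworx"
HEADERS = {
    "appKey": "your-app-key",
    "Accept": "application/json",
    "Content-Type": "application/json",
}

# Subsystem services are invoked with a POST, even when they take no input parameters
for service in ("GetThingworxVersion", "GetSummaryInformation", "GetPerformanceMetrics"):
    url = "{}/Subsystems/PlatformSubsystem/Services/{}".format(BASE, service)
    resp = requests.post(url, headers=HEADERS, json={})
    print(service, resp.status_code)
    print(resp.text)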
Scripto provides a RESTful endpoint for Groovy Custom Objects on the Axeda Platform. Custom Objects exposed via Scripto can be accessed via a GET or a POST, and the script will have access to request parameters or body contents. Any Custom Object of the "Action" type will automatically be exposed via Scripto. The URL for a Scripto service is currently defined by the name of the Custom Object:

GET: http://{{YourHostName}}/services/v1/rest/Scripto/execute/<customObjectName>

Scripto enables the creation of "Domain Specific Services". This allows implementers to take the Axeda Domain Objects (Assets, Models, DataItems, Alarms) and expose them via a service that models the real-world domain directly (trucks, ATMs, MRI machines, sensor readings). This is especially useful when creating a domain-specific UI, or when integrating with another application that will push or pull data.

Authentication
There are several ways to test your Scripto scripts, as well as several different authentication methods. The following authentication methods can be used:
Request Parameter credentials: ?username=<yourUserName>&password=<yourPassword>
Request Parameter sessionId (retrieved from the Auth service): ?sessionid=<sessionId>
Basic Authentication (challenge): from a browser or cURL, simply browse to the URL to receive an HTTP Basic challenge.

Request Parameters
You can access the parameters to the Groovy script via two objects, Call and Request. Request is actually just a sub-class of Call, so the values will always be the same regardless of which object you use. Although parameters may be accessed off of either object, Call is preferable when chaining Custom Objects together. Call also includes a reference to the logger, which can be used to log debug messages.

GET: http://{{YourHostName}}/services/v1/rest/Scripto/execute/<Your Script Name>?sessionid=<Session Id>&serial_number=mySerialNumber

Accessing Parameters through the Request Object
import com.axeda.drm.sdk.scripto.Request
// Request.parameters is a map of strings
def serial_number = Request.parameters.serial_number
assert serial_number == "mySerialNumber"

Accessing Parameters through the Call Object
import com.axeda.drm.sdk.customobject.Call
// Call.parameters is a map of strings
def serial_number = Call.parameters.serial_number
assert serial_number == "mySerialNumber"

Accessing the POST Body through the Request Object
The content from a POST request to Scripto is accessible as a string via the body field in the Request object. Use Slurpers for XML or JSON to parse it into an object.

POST: http://{{YourHostName}}/services/v1/rest/Scripto/execute/<Your Script Name>?sessionid=<Session Id>
Request body: { "serial_number":"mySerialNumber"}

import com.axeda.drm.sdk.scripto.Request
import groovy.json.JsonSlurper
def body = Request.body
def slurper = new JsonSlurper()
def result = slurper.parseText(body)
assert result.serial_number == "mySerialNumber"

Returning Plain Text
Groovy custom objects must return some content. The format of that content is flexible; it can be returned as plain text, JSON, XML, or even binary files. The following example simply returns plain text.

GET: http://{{YourHostName}}/services/v1/rest/Scripto/execute/<Your Script Name>

// Outputs: hello
return ["Content-Type":"text/plain","Content":"hello"]

Returning JSON
We use the JSONObject class to format our Map-based content into a JSON structure.
This eliminates the need for any concern around formatting: you just build up Maps of Maps, and the structure will be properly formatted by the fromObject() utility method.

GET: http://{{YourHostName}}/services/v1/rest/Scripto/execute/<Your Script Name>

import net.sf.json.JSONObject
root = [
  items:[
    num_1: "one",
    num_2: "two"
  ]
]
/** Outputs
{
  "items": { "num_1": "one", "num_2": "two" }
}
**/
return ['Content-Type': 'application/json', 'Content': JSONObject.fromObject(root).toString(2)]

Link to JSONObject documentation

Returning XML
To return XML, we use the MarkupBuilder to build the XML response. This allows us to create code that follows the format of the XML being generated.

GET: http://{{YourHostName}}/services/v1/rest/Scripto/execute/<Your Script Name>?sessionid=<Session Id>

import groovy.xml.MarkupBuilder
def writer = new StringWriter()
def xml = new MarkupBuilder(writer)
xml.root(){
    items(){
        num_1("one")
        num_2("two")
    }
}
/** Outputs
<root>
  <items>
    <num_1>one</num_1>
    <num_2>two</num_2>
  </items>
</root>
**/
return ['Content-Type': 'text/xml', 'Content': writer.toString()]

Link to Groovy MarkupBuilder documentation

Returning Binary Content
To return binary content, you typically will use the fileStore API to upload a file that you can then download using Scripto. See the fileInfo section to learn more. In this example we connect the InputStream associated with the getFileData() method directly to the output of the Scripto script. This causes the bytes available in the stream to be forwarded directly to the client as the body of the response.

GET: http://{{Your Host Name}}/services/v1/rest/Scripto/execute/{{Your Script Name}}?sessionid={{Session Id}}&fileId=123

import static com.axeda.sdk.v2.dsl.Bridges.*
import com.axeda.services.v2.*
import com.axeda.sdk.v2.exception.*
def contentType = parameters.type ?: 'image/jpg'
return ['Content':fileInfoBridge.getFileData(parameters.fileId), 'Content-Type':contentType]

The Auth Service - Authentication via AJAX
Groovy scripts are accessible to AJAX-powered HTML apps with Axeda instance credentials. To obtain a session from an Axeda server, make a GET call to the Authentication service, located at the following example URL:

https://{{YourHostName}}/services/v1/rest/Auth/login

This service accepts a valid username/password combination in the incoming request parameters and returns a SessionID. The parameter names it expects are as follows:
principal.username - the username for the valid Axeda credential.
password - the password for the supplied credential.
A sample request to the Auth Service: GET: https://{{YourHostName}}/services/v1/rest/Auth/login?principal.username=YOURUSER&password=YOURPASS Would yield this response (at this time the response is always in XML): <ns1:WSSessionInfo xsi:type="ns1:WSSessionInfo" xmlns:ns1="http://type.v1.webservices.sl.axeda.com" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">   <ns1:created>2013-08-12T13:19:37 +0000</ns1:created>   <ns1:expired>false</ns1:expired>   <ns1:sessionId>19c33190-dded-4655-b2c0-921528f7b873</ns1:sessionId> <ns1:sessionTimeout> 1800 </ns1:sessionTimeout> </ns1:WSSessionInfo>       The response fields are as follows: Field Name Description created The timestamp for the date the session was created expired A boolean indicating whether or not this session is expired (should be false) sessionId The ID of the session which you will use in subsequent requests sessionTimeout The time (in seconds) that this session will remain active for The Auth Service is frequently invoked from JavaScript as part of Custom Applications. The following code demonstrates this style of invocation. function authenticate(host, username, password) {             try {                 netscape.security.PrivilegeManager.enablePrivilege("UniversalBrowserRead");             } catch (e) {                 // must be IE             }             var xmlHttpReq = false;             var self = this;             // Mozilla/Safari             if (window.XMLHttpRequest) {                 self.xmlHttpReq = new XMLHttpRequest();             }             // IE             else if (window.ActiveXObject) {                 self.xmlHttpReq = new ActiveXObject("Microsoft.XMLHTTP");             }             var SERVICES_PATH = "/services/v1/rest/"             var url = host + SERVICES_PATH + "Auth/login?principal.username=" + username + "&password=" + password;             self.xmlHttpReq.open('GET', url, true);             self.xmlHttpReq.onreadystatechange = function() {                 if (self.xmlHttpReq.readyState == 4) {                     getSessionId(self.xmlHttpReq.responseXML);                 }             }             self.xmlHttpReq.send() } function getSessionId(xml) {             var value             if (window.ActiveXObject) {                 // xml traversing with IE                 var objXML = new ActiveXObject("MSXML2.DOMDocument.6.0");                 objXML.async = false;                 var xmldoc = objXML.loadXML(xml);                 objXML.setProperty("SelectionNamespaces", "xmlns:ns1='http://type.v1.webservices.sl.axeda.com'");                 objXML.setProperty("SelectionLanguage","XPath");                 value =  objXML.selectSingleNode("//ns1:sessionId").childNodes[0].nodeValue;             } else {                 // xml traversing in non-IE browsers                 var node = xml.getElementsByTagNameNS("*", "sessionId")                 value = node[0].textContent             }             return value } authenticate ("http://mydomain.axeda.com", "myUsername", "myPassword")       Calling Scripto via AJAX Once you have obtained a session id through authentication via AJAX, you can use that session id in Scripto calls. The following is a utility function which is frequently used to wrap Scripto invocations from a UI. 
function callScripto(host, scriptName, sessionId, parameter) {             try {                 netscape.security.PrivilegeManager.enablePrivilege("UniversalBrowserRead");             } catch (e) {                 // must be IE             }             var xmlHttpReq = false;             var self = this;             // Mozilla/Safari             if (window.XMLHttpRequest) {                 self.xmlHttpReq = new XMLHttpRequest();             }             // IE             else if (window.ActiveXObject) {                 self.xmlHttpReq = new ActiveXObject("Microsoft.XMLHTTP");             }             var url = host + SERVICES_PATH + "Scripto/execute/" + scriptName + "?sessionid=" + sessionId;             self.xmlHttpReq.open('GET', url, true);             self.xmlHttpReq.onreadystatechange = function() {                 if (self.xmlHttpReq.readyState == 4) {                     updatepage(div, self.xmlHttpReq.responseText);                 }             }             self.xmlHttpReq.send(parameter); } function updatepage(div, str) {             document.getElementById(div).innerHTML = str; } callScripto("http://mydomain.axeda.com", "myGroovyScriptName", "mySessionId", "myparameter=foo")       A more modern jQuery-based example might look like the following: function callScripto(host, scriptName, sessionId, parameter) {     var url = host + '/services/v1/rest/Scripto/execute/' + scriptName + '?sessionid=' + sessionId     if ( parameter != null ) url += '&' + parameter     $.ajax({url: url,               success:  function(response) {  updatepage(div, response); }           }); } function updatepage(div, str) {     $("#" + div).innerHTML = str } callScripto("http://mydomain.axeda.com", "myGroovyScriptName", "mySessionId", "myparameter=foo") In Conclusion As shown above, Scripto offers a number of ways to interact with the platform.  On each version of the Axeda Platform, all supported v1 and v2 APIs are available for Scripto to interact with the Axeda domain objects and implement business logic to solve real-world customer problems. Bibliography ​(PTC.net account required)     Axeda v2 API/Services Developer's Reference Version 6.8.3 August 2015     Axeda® v1 API Developer’s Reference Guide Version 6.8 August 2014     Documentation Map for Axeda® 6.8.2 January 2015
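The same two-step flow (authenticate, then call Scripto) can also be scripted outside the browser, which is handy for testing services without building a UI. Below is a minimal sketch using Python's requests library; the host, credentials, script name, and parameter are the same placeholder values used in the examples above.

import xml.etree.ElementTree as ET
import requests

HOST = "https://mydomain.axeda.com"       # placeholder host
SCRIPT_NAME = "myGroovyScriptName"        # placeholder Custom Object name

# Step 1: obtain a session id from the Auth service (the response is XML)
auth = requests.get(HOST + "/services/v1/rest/Auth/login",
                    params={"principal.username": "myUsername", "password": "myPassword"})
ns = {"ns1": "http://type.v1.webservices.sl.axeda.com"}
session_id = ET.fromstring(auth.text).find("ns1:sessionId", ns).text

# Step 2: call the Scripto service with the session id and a request parameter
resp = requests.get(HOST + "/services/v1/rest/Scripto/execute/" + SCRIPT_NAME,
                    params={"sessionid": session_id, "myparameter": "foo"})
print(resp.status_code, resp.text)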
One of the frequent questions regarding PostgreSQL is which tools work well with it. With PostgreSQL's growing functionality, more vendors are willing to produce tools for it. There are a lot of tools for management, development, and data visualization, and the list is growing. Here, I'm listing a few tools that might be of interest to ThingWorx users.

psql terminal: The psql client is a command-line client distributed with PostgreSQL, often called the interactive terminal. psql is a simple yet powerful tool with which you can directly interface with the PostgreSQL server. The psql client ships with the PostgreSQL database. Key features: Issue queries either through commands or from a file. Provides shell-like features to automate tasks. For more information, refer to http://www.postgresql.org/docs/9.5/static/app-psql.html

pgAdmin III: pgAdmin III is a GUI-based administration and development tool for the PostgreSQL database. It addresses the needs of both administrators and regular users, from writing simple SQL queries to developing complex databases. Key features: Open source and cross-platform support. No additional drivers are required. Supports more than 30 different languages. Note: pgAdmin III ships with the PostgreSQL 9.4 installer. For more information, refer to http://www.pgadmin.org/download/

phpPgAdmin: phpPgAdmin is a web-based client for managing PostgreSQL databases. It provides the user with a convenient way to create databases, create tables, alter tables, and query the data using SQL. Key features: Open source and supports PostgreSQL 9.x. Requires a web server. Administers multiple servers. Supports the Slony master-slave replication engine. For the phpPgAdmin download: http://phppgadmin.sourceforge.net/doku.php?id=download

TeamPostgreSQL: TeamPostgreSQL is a browser-based tool for PostgreSQL administration. Using TeamPostgreSQL, database objects can be accessed from anywhere in the web browser. Key features: Open source and cross-platform support. Supports SSH for both the web interface and the database connections. GUI with tabbed SQL editors. For the TeamPostgreSQL download: http://www.teampostgresql.com/download.jsp

Monitoring Tools

pgBadger: pgBadger is a PostgreSQL log analyzer for generating reports from PostgreSQL log files. It is written in Perl and uses JavaScript and Bootstrap libraries. It is often seen as a replacement for the pgFouine log analyzer. Key features: Open source community project. Autodetects PostgreSQL log file formats (stderr, syslog, or csvlog). Provides SQL-query-related reports and statistics. Can also set limits to report only errors. Generates pie charts and time-based charts. For more information, refer to http://dalibo.github.io/pgbadger/. Git download: https://github.com/dalibo/pgbadger/releases

PostgreStats: PostgreStats is software with automated scripts to easily view statistics such as commits, rollbacks, and user inserts, updates, and deletes at time-based intervals. PostgreStats is installed and executed on the database server and customizes the main conf file. PostgreStats also provides an enterprise application for replication mode and high availability. Key features: Open source and easy-to-set-up installation. Takes snapshot reports based on time intervals. Optional email-on-update. Text-file data storage. Also provides an enterprise application, PostgreStats Enterprise.
For more information, refer to http://www.postgrestats.com/subs/docs.html

Slemma: Slemma is a collaborative data visualization tool for the PostgreSQL database. Slemma allows database connections with near one-click integration and can generate a dashboard from files. Slemma comes with a commercial license priced at $29 per user per month. Key features: Create charts and interactive dashboards by selecting tables. Non-developers can easily create visualizations (with no coding). Email dashboards automatically to clients or your entire team. For more information, refer to https://slemma.com/

Ubiq: Ubiq is a web-based business intelligence and reporting tool for the PostgreSQL server. Ubiq creates reports and online dashboards and can export them in multiple formats. Ubiq is distributed with a commercial license. Key features: Drag-and-drop interface to create interactive charts, dashboards, and reports. Apply powerful filters and functions to the data. Share your work and schedule email reports. For more information, refer to http://ubiq.co/tour
Q: From the documentation, a Solr node is only needed when using DataTables. If the Solr configuration field is left blank, the extension asks for an input. Are Solr nodes required or optional in order to use DSE with ThingWorx (in the hypothetical case of not using DataTables)?
A: As far as ThingWorx functionality is concerned, a Solr node is not required. However, the extension does try to validate the configuration, and hence, at this point, a Solr node is mandatory to properly configure the extension. This will be fixed in the future.

Q: When there are two entries for addresses, one for a Cassandra cluster and one for a Solr cluster, are they the same cluster, or different clusters?
A: They could be either. There can be one machine with Solr enabled, using the same IP for both Cassandra and Solr. However, that is not recommended for production workloads; it would be perfectly fine for development or test environments.

In a cluster, in order to have Solr and Cassandra nodes, the use of datacenters is required. Even if a datacenter isn't explicitly defined, a default install of DSE will create two data centers called "Cassandra" and "Solr", which is what would be seen in the default "Cassandra Keyspace Settings" property in the configuration. If the user does create datacenters with specific names, then they will have to update the "Cassandra Keyspace Settings" property to reflect the same, i.e. replication = {'class':'NetworkTopologyStrategy', 'Cassandra':1, 'Solr':1}. The number in front (1 being the default) represents the replication factor (https://docs.datastax.com/en/cql/3.1/cql/cql_using/update_ks_rf_t.html), depending on the number of nodes in each data center.
The purpose of this post is to provide some ideas to help diagnose issues in mashups. First, check whether the problem occurs at mashup runtime or in design (edit) mode.

1. Runtime: Is the issue visual, or related to improper service execution? (e.g., "my data is displaying correctly but the styling or formatting is wrong" -- visual; "my data is displayed incorrectly but the styling and formatting is right" -- improper service execution)
For visual/styling/formatting issues: return to the edit mode of the mashup and ensure the proper style definitions were set up. Ensure the logic behind the connections is correct. Check the configuration of the widget(s) involved. Were there any changes made to the styles after the mashup was saved and run the first time? If so, try clearing the browser cache and reconnecting the dependent entity with the style involved in the issue. If the problem persists, contact technical support to raise a cosmetic defect ticket.
For improper service execution: return to Composer and use the "Test" button on the service to execute it and validate the output. If the outputs are incorrect, check the code inside the service. If the outputs come out as expected, try reconnecting the service in the mashup design mode and clearing the browser cache. If the issue is related to data from the user database not displaying, verify the database connectivity and credentials. If the problem persists, reach out to technical support to raise a defect.

2. Design/edit mode: If the widgets are not displaying correctly or not appearing in the list, check that the extensions involved appear under the Extension Manager; re-upload if needed and restart Composer. If the Google Maps widget is not showing in the mashup the first time it is used, allow up to 2 hours for it to load and cache. Submit a ticket to technical support, including screenshots of the issue. For other styling, formatting, or improper display issues at design time, document the observation and supply screenshots to the technical support team for investigation.

Note: See Tools and approaches used in troubleshooting Twx issues.
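If you prefer to validate a service outside of Composer, the same check can be scripted against the REST API. The sketch below uses Python's requests library; the server URL, application key, and the Thing and service names are placeholders to substitute with your own.

import requests

# Placeholders -- substitute your own server, application key, Thing, and service names
BASE = "https://your-thingworx-server/Thingworx"
HEADERS = {
    "appKey": "your-app-key",
    "Accept": "application/json",
    "Content-Type": "application/json",
}

# Invoke the same service the mashup widget is bound to and inspect the raw result
url = BASE + "/Things/DashboardThing/Services/GetDashboardData"
resp = requests.post(url, headers=HEADERS, json={})   # service input parameters go in the JSON body
print(resp.status_code)
print(resp.json())   # the rows returned here are what the mashup binding receives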
This document attached to this blog entry actually came out of my first exposure to using the C SDK on a Raspberry PI. I took notes on what I had to do to get my own simple edge application working and I think it is a good introduction to using the C SDK to report real, sampled data. It also demonstrates how you can use the C SDK without having to use HTTPS. It demonstrates how to turn off HTTPS support. I would appreciate any feedback on this document and what additions might be useful to anyone else who tries to do this on their own.
This document provides API information for all 51.0 releases of ThingWorx Machine Learning.
Here are some tips on how to submit a ticket to the ThingWorx technical support team and what to expect. Providing a minimum set of information up front is always good practice: it reduces questions and unnecessary back-and-forth communication before the actual investigation of the problem begins. Open a new ticket for each separate issue; we track every technical issue that comes in.

If the ticket is being submitted for troubleshooting: Please provide the versions of ThingWorx, Tomcat, and Java, plus the operating system and specs. Attach the list of the extensions used. Include a detailed description of the problem; if applicable, include screenshots. Evaluate the business impact caused by the issue. Optional: state the preferred method of contact (phone or email) and time, if applicable. Expect a support engineer (SE) to establish first contact via email, letting you know who owns the case and how the investigation will proceed.

If the ticket is being submitted for an enhancement request or improvement: Please provide a clear description of the feature, use case(s), expectations, and any additional details that might play a role in prioritizing the request. Once the ticket has been created, it will be assigned to a support engineer (SE), who will then place a request (Jira) to R&D and provide the Jira number to the point of contact in the support ticket. Enhancement requests and improvements are always considered; however, delivery is not guaranteed. Once an SE provides the case contact with the Jira number, the support ticket will be closed, and the point of contact may reach out to the SE at any time to check on the status of the Jira.

If the ticket is being submitted for a bug or a defect: Please provide the versions of ThingWorx, Tomcat, and Java, plus the operating system and specs. Include a clear description of the problem, the expected result, and the current result. Evaluate the business impact. If reproducible, include the steps. Optional: include the entities and data (.xml, .json if applicable) to demonstrate the issue. Once the ticket has been created, it will be assigned to a support engineer (SE), who will then place a request (Jira) to R&D and provide the Jira number to the point of contact in the support ticket (assuming no further information is required). R&D will provide an estimated release after the issue is evaluated. Upon sending the ETA to the case contact, the SE will close the support ticket.
PostgreSQL is a powerful, open-source object-relational database system that places no practical limit on database size. ThingWorx 6.5 introduces PostgreSQL as a persistence provider and supports High Availability. The main advantages of ThingWorx with PostgreSQL are:

1. Highly customizable: PostgreSQL includes a framework that allows developers to define and create their own custom data types, along with supporting functions and operators that define their behavior. Triggers and stored procedures can be written in C and loaded into the database as a library, allowing great flexibility in extending its capabilities.
2. Synchronous replication: PostgreSQL streaming replication is asynchronous by default. Synchronous replication offers the ability to confirm that all changes made by a transaction have been transferred to one synchronous standby server. This extends the standard level of durability offered by a transaction commit. The only possibility of data loss is if both the primary and the standby suffer crashes at the same time.
3. Write-ahead logging for fault tolerance: The write-ahead log (WAL) is the feature of PostgreSQL that allows it to recover data, usually up to the point where the server stopped. As you make changes to your data, PostgreSQL aggressively writes those changes to the WAL. PostgreSQL issues a checkpoint when a buffer limit is reached. When PostgreSQL restarts, it replays the changes from the WAL since the last checkpoint, bringing the database back to the state of the last completed commit. The master node sends a live stream of data changes to the slave nodes through the WAL, and the slaves apply this data to stay up to date.
4. Point-in-time recovery: Point-in-time recovery (PITR) is also called incremental database backup, online backup, or archive backup. This mechanism uses the history records stored in the WAL to roll forward changes made since the last full database backup. With PITR, database backup downtime can be totally eliminated, because backup and system access can happen at the same time. With PITR, we back up the latest archive log files since the last backup instead of taking a full database backup every day.

ThingWorx streams data from connected devices, and PostgreSQL handles it with greater scalability. In ThingWorx, PostgreSQL acts as a persistence provider that stores both run-time data and metadata about Things. Run-time data is the data that is persisted once the Things are composed and is used by connected devices to store their data. Streams and value streams accumulate huge amounts of data; once the stored data reaches a limit of around 50 GB, Neo4j can no longer keep up. For example, a single stream with 50 properties gathering data from 10,000 devices will quickly hit the memory limit with the Neo4j persistence provider, so it is strongly recommended to choose PostgreSQL to avoid these performance issues.

Overview of installing ThingWorx with PostgreSQL: Install the latest version of Java and make sure the environment variables are configured. Follow the instructions in Installing Thingworx 6.5 to install Tomcat; instructions/commands may vary for different Linux flavors. Install PostgreSQL; for Linux/Unix environments, see the YUM Installation Guidelines. Create 'ThingworxPostgresqlStorage' and 'ThingworxPlatform' folders in the root directory ( / ) and assign access permissions to the user. Copy the modelproviderconfig.json file (from the ThingWorx download package) to the 'ThingworxPlatform' folder.
Execute the ThingworxPostgresSchemaSetup and ThingworxPostgresDBSetup scripts (.bat for Windows and .sh for Unix/Linux environments); for further instructions, follow Getting Started with PostgreSQL in the ThingWorx Administrators Guide. Restart Tomcat.
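Once the setup scripts have run, it can be handy to confirm that the database is reachable with the credentials you configured before starting Tomcat. A minimal sketch using Python's psycopg2 driver is shown below; the host, port, database name, user, and password are assumptions and must match whatever you passed to the setup scripts.

import psycopg2   # pip install psycopg2-binary

# All connection values below are assumptions -- use the ones passed to the setup scripts
conn = psycopg2.connect(host="localhost", port=5432,
                        dbname="thingworx", user="twadmin", password="password")
cur = conn.cursor()
cur.execute("SELECT version();")
print(cur.fetchone()[0])   # confirms the server is reachable and the credentials work
cur.close()
conn.close()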
Below is where I will discuss the simple implementation of constructing a POST request in Java. I have embedded the entire source at the bottom of this post for easy copy and paste. To start you will want to define the URL you are trying to POST to: String url = "http://127.0.0.1:80/Thingworx/Things/Thing_Name/Services/​Service_to_Post_to​"; Breaking down this url String: ​http://​ - a non-SSL connection is being used in this example 127.0.0.1:80 -- the address and port that ThingWorx is hosted on /Thingworx -- this bit is necessary because we are talking to ThingWorx /Things -- Things is used as an example here because the service I am posting to is on a Thing Some alternatives to substitute in are ThingTemplates, ThingShapes, Resources, and Subsystems /​Thing_Name​ -- Substitute in the name of your Thing where the service is located /Services -- We are calling a service on the Thing, so this is how you drill down to it /​Service_to_Post_to​ -- Substitute in the name of the service you are trying to invoke Create a URL object: URL obj = new URL(url); Class URL, included in the java.net.URL import, represents a Uniform Resource Locator, a pointer to a "resource" on the Internet. Adding the port is optional, but if it is omitted port 80 will be used by default. Define a HttpURLConnection object to later open a single connection to the URL specified: HttpURLConnection con = (HttpURLConnection) obj.openConnection(); Class HttpURLConnection, included in the java.net.HttpURLConnection import, provides a single instance to connect to the URL specified. The method openConnection is called to create a new instance of a connection, but there is no connection actually made at this point. Set the type of request and the header values to pass: con.setRequestMethod("POST"); con.setRequestProperty("Accept", "application/json"); con.setRequestProperty("Content-Type", "application/json"); con.setRequestProperty("appKey", "80aab639-ad99-43c8-a482-2e1e5dc86a2d"); You can see that we are performing a POST request, passing in an Accept header, a Content-Type header, and a ThingWorx specific appKey header. Pass true into the setDoOutput method because we are performing a POST request; when sending a PUT request we would pass in true as well. When there is no request body being sent false can be passed in to denote there is no "output" and we are making a GET request.         con.setDoOutput(true); Create a DataOutputStream object that wraps around the con object's output stream. We will call the flush method on the DataOutputStream object to push the REST request from the stream to the url defined for POSTing. We immediately close the DataOutputStream object because we are done making a request.         DataOutputStream wr = new DataOutputStream(con.getOutputStream());     wr.flush();     wr.close();           The DataOutputStream class lets the Java SDK write primitive Java data types to the ​con​ object's output stream. The next line returns the HTTP status code returned from the request. This will be something like 200 for success or 401 for unauthorized.         int responseCode = con.getResponseCode(); The final block of this code uses a BufferedReader that wraps an InputStreamReader that wraps the con object's input stream (the byte response from the server). This BufferedReader object is then used to iterate through each line in the response and append it to a StringBuilder object. Once that has completed we close the BufferedReader object and print the response we just retrieved.         
BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()));     String inputLine;     StringBuilder response = new StringBuilder();     while((inputLine = in.readLine()) != null) {       response.append(inputLine);     }     in.close();     System.out.println(response.toString());    The InputStreamReader decodes bytes to character streams using a specified charset.         The BufferedReader provides a more efficient way to read characters from an InputStreamReader object.         The StringBuilder object is an unsynchronized method of creating a String representation of the content residing in the BufferedReader object. StringBuffer can be used instead in a case where multi-threaded synchronization is necessary.      Below is the block of code in it's entirety from the discussion above: public void sendPost() throws Exception {   String url = "http://127.0.0.1:80/Thingworx/Things/Thing_Name/Services/Service_to_Post_to";   URL obj = new URL(url);   HttpURLConnection con = (HttpURLConnection) obj.openConnection();   //add request header   con.setRequestMethod("POST");   con.setRequestProperty("Accept", "application/json");   con.setRequestProperty("Content-Type", "application/json");   con.setRequestProperty("appKey", "80aab639-ad99-43c8-a482-2e1e5dc86a2d");   // Send post request   con.setDoOutput(true);   DataOutputStream wr = new DataOutputStream(con.getOutputStream());   wr.flush();   wr.close();   int responseCode = con.getResponseCode();   System.out.println("\nSending 'POST' request to URL : " + url);   System.out.println("Response Code : " + responseCode);   BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()));   String inputLine;   StringBuilder response = new StringBuilder();   while((inputLine = in.readLine()) != null) {   response.append(inputLine);   }   in.close();   //print result   System.out.println(response.toString());   }
Modbus is a commonly used communications protocol that allows data transfer between computers and PLCs. This is intended to be a simple guide to setting up and using a Modbus PLC simulator with ThingWorx. ThingWorx provides Modbus packages for Windows, Linux, and Linux ARM. The Modbus package contains libraries and Lua files intended to be used along with the Edge MicroServer. Note: the Modbus package is not intended as an out-of-the-box solution.

Requirements: ThingWorx Platform, Edge MicroServer, Modbus package, and a Modbus PLC simulator. In this guide, a free Modbus PLC simulator is used; here is the direct download link for their v8.20 binary release.

Configuring the EMS: The first step is to configure the EMS as a gateway. This is done by adding an auto_bind section to the config.json:

"auto_bind": [ {
    "name": "ModbusGateway",
    "gateway": true
}]

This creates an ephemeral Thing that only exists while the EMS is running. The next step is to modify the config.lua to include the Modbus configuration. Copy the contents of the etc folder of the Modbus package to the etc folder of the EMS. A sample config_modbus.lua is provided in the Modbus package as a reference. The following code defines a Thing called MyPLC (which is a Remote Thing created on the platform):

scripts.MyPLC = {
    file = "thing.lua",
    template = "modbusExample",
    identifier = "plc",
    updateRate = 2000
}
scripts.Thingworx = {
    file = "thingworx.lua"
}
scripts.modbus_handler = {
    file = "modbus_handler.lua",
    name = "modbus_handler",
    host = "localhost"
}

Adding 'modbusExample' to the above script enables the use of the template of the same name located under /etc/custom/templates/. 'modbusExample' is a reference point for creating a script to add the registers of the PLC. The given template has examples for different base types. The different types of available registers are noted and referenced in the modbus.lua file available under /etc/thingworx/lua/.

Setting up the PLC simulator: Extract mod_RSsim to a folder and run the executable. Since we are 'simulating' a PLC connection, set the protocol to Modbus TCP/IP. Change the I/O to Holding Registers (or any other relevant option), with the Address set to Dec. In the Simulation menu, select 'No animation' if you want to enter values manually, or use 'Increment BYTES' to automatically generate values. This PLC simulator runs on port 502.

The connection: With the EMS and luaScriptResource running, the PLC simulator should show a connection to the platform, with activity in the received/sent section. Now, if you open the Remote Thing 'MyPLC' in the platform, the isConnected property (under the Properties section) should be true. (If not, go back to General Information, click Browse in the Identifier section, and select 'plc'.) Go back to the Properties section and click Manage Bindings. Click on the Remote tab, and the list of defined properties should appear. For example, the following line from modbusExample.lua:

properties.Int16HoldRegExample = {key="holding_register/1/40001?format=Int16", handler="modbus_handler", basetype="NUMBER"}

denotes a property named Int16HoldRegExample at register 40001. The value at address 40001 in the PLC simulator should correspond with the value on the platform once this property is added and the Thing saved. If you are running into any errors when connecting with a Raspberry Pi, please take a look at Duan Gauche's follow-up guide, Using your Raspberry Pi with the Edge Microserver and Modbus.
Every edge component that connects to the ThingWorx platform requires an Application Key. This 'AppKey' provides both authentication and authorization control. When an edge component connects, it steps through a connection process. The second step of that process is to send the AppKey to the platform. The platform inspects the key and ensures that it is valid. It also creates a session for that edge connection and associates the AppKey with the session. Any future requests sent over that AlwaysOn connection will execute under the security context configured for the user associated with the AppKey.

In order for edge applications to interact with the platform, they require a certain set of permissions. It is a best practice not to associate the Administrator user with an Application Key. Doing so would allow an edge application to invoke any and all services on the platform and to modify the property values of any Thing. The permissions applied to an edge component's AppKey should be the minimum set required for your application to function.

The AppKey associated with an edge component is typically associated with a single Thing or a collection of Things, usually of the same ThingTemplate. Identify the Thing(s) or ThingTemplate(s) that your application will interact with. There are four types of interactions for edge components: property reads, property writes, service invocations, and event executions (edge components do not subscribe to events). These four types of interactions match the runtime permissions that can be configured on a Thing, or the 'run time instance' permissions for a ThingTemplate.

If an edge application will be reading or writing all properties of a particular Thing, then applying the 'read property' and 'write property' permissions is appropriate. If only a select set of a Thing's properties will be read or written, then read and/or write permission should be disabled, and only the selected properties should be enabled using overrides. Since every Thing has a number of generic services, the 'service execute' permission should be disabled, and overrides should be configured for the selected services that the edge needs access to. In addition, overrides should be configured for the 'UpdateSubscribedPropertyValues' service and for the 'ProcessRemoteEvents' service; edge components often use these services to update a collection of properties or to fire a set of events. Finally, if your edge application triggers events on a Thing, overrides should be used to provide execute permission for those events.

In summary, the safest path to configuring edge permissions is to create a new user and AppKey with no permissions applied, and to then selectively apply permissions for that user only on the Thing or Things that your edge components will interact with.
Prerequisite Download the .NET SDK from the PTC Support Portal and set up the SteamSensor Example according the directions found in the ThingWorx Help Center SDK Steam Sensor Example In ThingWorx Create a Remote thing using the RemoteThingWithFileTransfer template (SteamSensor1 in example) Create a file repository and execute the CreateFolder service to create a folder in the repository folder in ThingworxStorage (MyRepository in example) In SteamThing.cs At the top of the file, import the file transfer class using com.thingworx.communications.client.things.filetransfer;” Create a virtual thing that extends FileTransferVirtualThing E.g. using steam sensor Thing public class SteamThing : FileTransferVirtualThing Edit SteamThing as follows {               public SteamThing(string name, string description, string identifier, ConnectedThingClient client, Dictionary<string, string> virtualDirectories)             : base(name, description, client, virtualDirectories) } In Client.cs Create a new Dictionary above the Steam Things. Select any name you wish as the virtual directory name and set the directory path. In this example, it is named EdgeDirectory and set to the root of the C Drive. Dictionary<string, string> virtualDirectories = new Dictionary<string, string>()             {                 {"EdgeDirectory", "C:\\"}             }; Modify the SteamThing to include your newly created virtual directories in the SteamThing parameters // Create two Virtual Things SteamThing sensor1 = new SteamThing("SteamSensor1", "1st Floor Steam Sensor", "SN0001", client, virtualDirectories); SteamThing sensor2 = new SteamThing("SteamSensor2", "2nd Floor Steam Sensor", "SN0002", client, virtualDirectories); To send or receive a file from the server, it is recommended that the built in GetFile and Send File are used. Create a remote service in the SDK containing either GetFile or SendFile GetFile — Get a file from the Server. sourceRepo — The entityName to get the file from. sourcePath — The path to the file to get. sourceFile — Name of the file to get. targetPath — The local VIRTUAL path of the resulting file (not including the file name). targetFile — Name of the resulting file in the target directory. timeout — Timeout, in seconds, for the transfer. A zero will use the systems default timeout. async — If true return immediately and call a callback function when the transfer is complete if false, block until the transfer is complete. Note that the file callback function will be called in any case. E.g. GetFile("MyRepository", "/", "test.txt", "EdgeDirectory", "movedFile.txt", 10000, true); SendFile — Sends a file to the Server. This method takes the following parameters: sourcePath — The VIRTUAL path to the file to send (not including the file name). sourceFile — Name of the file to send. targetRepo — Target repostiory of the file. targetPath — Path of the resulting file in the target repo (not including the file name). targetFile — Name of the resulting file in the target directory. timeout — Timeout, in seconds, for the transfer. A zero will use the systems default timeout. async — If true return immediately and call a callback function when the transfer is complete if false, block until the transfer is complete. Note that the file callback function will be called in any case. E.g. SendFile("/EdgeDirectory", "test.txt", "MyRepository", "/", "movedFile.txt",  10000,  true); From Composer, bring in the Remote Service on the SteamSensor thing and execute it. Files can now be transferred to or from the .NET SDK
Hello Developer Community, We are pleased to announce pre-release availability of the ThingWorx Edge SDK for Android! The Android SDK beta is built off the Java SDK code base, but replaces the Netty websocket client with Autobahn for compatibility with Android OS.  Those familiar with the Java SDK API will feel very much at home in the Android SDK.  We recommend beginning with the included sample application.  Please watch this thread for upcoming beta releases.  b4 adds file transfers between the ThingWorx platform and Android devices and an example application. We welcome your questions and comments in the thread below!  Happy coding! Regards, ThingWorx Edge Products Team
Official name: DataStax Enterprise (DSE), sometimes referred to as Cassandra. Note: DBA skills are required; free self-paced training can be found here: Training | DataStax. The extension package can be obtained through Technical Support. ThingWorx 6.0 introduces DSE as a backend database that scales to a much greater data volume, as Neo4j performance limitations hit at around 50 GB. Some of the main reasons to consider DSE are:

1. Elastic scalability -- allows you to easily add capacity online to accommodate more customers and more data when needed.
2. Always-on architecture -- contains no single point of failure (as with traditional master/slave RDBMSs and other NoSQL solutions), resulting in continuous availability for business-critical applications that can't afford to go down.
3. Fast linear-scale performance -- enables sub-second response times with linear scalability (double the throughput with two nodes, quadruple it with four, and so on).
4. Flexible data storage -- easily accommodates the full range of data formats (structured, semi-structured, and unstructured) that run through today's modern applications.
5. Easy data distribution -- read and write to any node, with all changes automatically synchronized across the cluster, giving maximum flexibility to distribute data by replicating across multiple datacenters, the cloud, and even mixed cloud/on-premise environments.

Note: Windows+DSE is currently not fully supported.

Connecting ThingWorx (prerequisite: a fully configured DSE database):
1. Obtain the dse_persistancePackage.
2. Import it as an extension in Composer.
3. In Composer, create a new persistence provider.
4. Select the imported package as the Persistence Provider Package.
5. In the Configuration tab: for Cassandra Cluster Host, enter the IP address set in cassandra.yaml, or localhost if hosted locally; enter a new or existing Cassandra keyspace name; enter the Solr Cluster URL; other fields can be left at their defaults (*).
6. Go to Services and execute the TestConnectivity service to ensure a True response.
7. When creating a new Stream, Value Stream, or Data Table, set the Persistence Provider to the one created in the previous steps.

Currently, all reads and writes are done through ThingWorx, and all ThingWorx data is encoded in DSE. OpsCenter still allows you to see connected streams, data tables, and value streams.

* SimpleStrategy can be used for a single data center, but NetworkTopologyStrategy is recommended for most deployments, because it is much easier to expand to multiple data centers when required by future growth.

Is there a limit of data per node? 1 TB is a reasonable limit on how much data a single node can handle, but in reality a node is not limited by the size of the data, only by the rate of operations. A node might have only 80 GB of data on it, but if it's continuously hit with random reads and doesn't have a lot of RAM, it might not even be able to handle that number of requests at a reasonable rate. Similarly, a node might have 10 TB of data, but if it's rarely read from, or if only a small portion of the data is hot (so it can be effectively cached), it will do just fine. If the replication factor is above 1 and there are no reads at consistency level ALL, other replicas will be able to respond quickly to read requests, so there won't be a large difference in latency seen from a client perspective.
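Before configuring the persistence provider in Composer, it can help to verify that the DSE cluster itself is reachable from the ThingWorx host. Below is a minimal sketch using the Python cassandra-driver; the contact point is an assumption and should match the Cassandra Cluster Host used in the configuration.

from cassandra.cluster import Cluster   # pip install cassandra-driver

# The contact point is an assumption -- use the Cassandra Cluster Host from your configuration
cluster = Cluster(["127.0.0.1"])
session = cluster.connect()
row = session.execute("SELECT release_version FROM system.local").one()
print("Connected, Cassandra release:", row.release_version)
cluster.shutdown()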
The ThingWorx EMS and SDK based applications follow a three step process when connecting to the Platform: Establish the physical websocket:  The client opens a websocket to the Platform using the host and port that it has been configured to use.  The websocket URL exposed at the Platform is /Thingworx/WS.  TLS will be negotiated at this time as well. Authenticate:  The client sends a AUTH message to the platform, containing either an App Key (recommended) or username/password.  The AUTH message is part of the Thingworx AlwaysOn protocol.  If the client attempts to send any other message before the AUTH, the server will disconnect it.  The server will also disconnect the client if it does not receive an AUTH message within 15 seconds.  This time is configurable in the WSCommunicationSubsystem Configuration tab and is named "Amount of time to wait for authentication message (secs)." Once authenticated the SDK/EMS is able to interact with the Platform according to the permissions applied to its credentials.  For the EMS, this means that any client making HTTP calls to its REST interface can access Platform functionality.  For this reason, the EMS only listens for HTTP connections on localhost (this can be changed using the http_server.host setting in your config.json). At this point, the client can make requests to the platform and interact with it, much like a HTTP client can interact with the Platform's REST interface.  However, the Platform can still not direct requests to the edge. Bind:  A BIND message is another message type in the ThingWorx AlwaysOn protocol.  A client can send a BIND message to the Platform containing one or more Thing names or identifiers.  When the Platform receives the BIND message, it will associate those Things with the websocket it received the BIND message over.  This will allow the Platform to send request messages to those Things, over the websocket.  It will also update the isConnected and lastConnection time properties for the newly bound Things. A client can also send an UNBIND request.  This tells the Platform to remove the association between the Thing and the websocket.  The Thing's isConnected property will then be updated to false. For the EMS, edge applications can register using the /Thingworx/Things/LocalEms/Services/AddEdgeThing service (this is how the script resource registers Things).  When a registration occurs, the EMS will send a BIND message to the Platform on behalf of that new resource.  Edge applications can de-register (and have an UNBIND message sent) by calling /Thingworx/Things/LocalEms/RemoveEdgeThing.
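As a concrete illustration of the bind step, an edge application running next to the EMS can ask the EMS to register (and later de-register) a Thing through its local REST interface. The sketch below uses Python's requests library; the port, the URL scheme, and the 'name' parameter are assumptions based on a typical default config.json and should be checked against your own EMS configuration.

import requests

# Assumed local EMS REST endpoint (http_server on 127.0.0.1:8000 in config.json)
EMS = "http://127.0.0.1:8000/Thingworx"

# Register a Thing with the EMS; the EMS then sends a BIND for it to the platform
resp = requests.post(EMS + "/Things/LocalEms/Services/AddEdgeThing",
                     headers={"Content-Type": "application/json"},
                     json={"name": "MyEdgeThing"})   # the Thing should already exist on the platform
print("AddEdgeThing:", resp.status_code)

# De-register the Thing; the EMS sends an UNBIND and isConnected goes back to false
resp = requests.post(EMS + "/Things/LocalEms/Services/RemoveEdgeThing",
                     headers={"Content-Type": "application/json"},
                     json={"name": "MyEdgeThing"})
print("RemoveEdgeThing:", resp.status_code)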
The .net-sdk can be configured to emit very detailed debugging and diagnostic information to a log file during execution. The .net-sdk uses the standard .NET System.Diagnostic infrastructure for Logging, as such, all configuration of the .net-sdk logger is done via the standard .NET Logging configuration system. By default, Logging is configured via the standard .NET “App.config” file. Log messages can be routed to any standard .NET TraceListener. Optionally, ThingWorx provides a FixedFieldTraceListener which can be used to output log messages to a file. The use of the ThingWorx provided FixedFieldTraceListener is recommended. The FixedFieldTraceListener when configured will automatically create a "logs" directory in the same location as (a sibling to) the running executable file (.exe). This "logs" directory will contain the log files. Every .NET Class can be configured as a specific “Trace Source” which emits log messages. It is recommended to add at least the following Trace Sources to your App.config file to receive the most useful amount of information: com.thingworx.communications.client.BaseClient com.thingworx.communications.client.ConnectedThingClient com.thingworx.communications.client.things.VirtualThing com.thingworx.communications.client.TwApiWrapper com.thingworx.communications.client.things.filetransfer.FileTransferVirtualThing com.thingworx.communications.client.things.contentloader.ContentLoaderVirtualThing The amount of information emitted can range from very low level Trace messages (the Verbose setting) to nothing at all (the Off setting). The “SourceLevels Enumeration” can be used to control how much information is written out to the log file. For reference, this is the <add name="SourceSwitch" value="Information" /> element in the sample below. Below is sample App.config file. 
<?xml version="1.0" encoding="utf-8"?> <configuration>     <system.diagnostics>       <sources>         <source name="com.thingworx.common.utils.JSONUtilities" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch" >           <listeners>             <add name="file" />           </listeners>         </source>         <source name="com.thingworx.communications.client.TwApiWrapper" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch" >           <listeners>             <add name="file" />           </listeners>         </source>         <source name="com.thingworx.communications.client.BaseClient" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch" >           <listeners>             <add name="file" />           </listeners>         </source>         <source name="com.thingworx.communications.client.ConnectedThingClient" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch" >           <listeners>             <add name="file" />           </listeners>         </source>         <source name="com.thingworx.communications.client.things.contentloader.ContentLoaderVirtualThing" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch" >           <listeners>             <add name="file" />           </listeners>         </source>         <source name="com.thingworx.communications.client.things.filetransfer.FileTransferVirtualThing" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch" >           <listeners>             <add name="file" />           </listeners>         </source>         <source name="com.thingworx.communications.client.things.VirtualThing" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch" >           <listeners>             <add name="file" />           </listeners>         </source>         <source name="com.thingworx.metadata.annotations.MetadataAnnotationParser" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch" >           <listeners>             <add name="file" />           </listeners>         </source>       </sources>       <switches>         <add name="SourceSwitch" value="Information" />       </switches>       <sharedListeners>         <add name="file" type="com.thingworx.common.logging.FixedFieldTraceListener, thingworx-dotnet-common, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" initializeData="false"/>       </sharedListeners>       <trace autoflush="true" indentsize="4" />   </system.diagnostics> </configuration>
Recently a customer from the ThingWorx Academic Program sent in a sample program they were having problems with. They were trying to post data from a Raspberry PI using Python to their ThingWorx server. It turns out that their program did work just fine and was also a great example of posting data from a PI using REST. Here is how to set up this example. 1. Import the attached "Things_TempAndHumidityThing.xml" entity file. 2. from the PI run 'sudo pip install requests' 3. from the PI run 'sudo pip install logging' 4. from the PI run 'sudo pip install http_client' 5. Create a python file call test.py that contains this example code: #!/usr/bin/python import requests import json import logging import sys # These two lines enable debugging at httplib level (requests->urllib3->http.client) # You will see the REQUEST, including HEADERS and DATA, and RESPONSE with HEADERS but without DATA. # The only thing missing will be the response.body which is not logged. try:     import http.client as http_client except ImportError:     # Python 2     import httplib as http_client http_client.HTTPConnection.debuglevel = 1 # You must initialize logging, otherwise you'll not see debug output. logging.basicConfig() logging.getLogger().setLevel(logging.DEBUG) requests_log = logging.getLogger("requests.packages.urllib3") requests_log.setLevel(logging.DEBUG) requests_log.propagate = True #NYP Webserver URL in Thingworx NYP_Webhost = sys.argv[1] App_Key = sys.argv[2] ThingName = 'TempAndHumidityThing' headers = { 'Content-Type': 'application/json', 'appKey': App_Key } payload = { 'Prop_Temperature': 45, 'Prop_Humidity': 33 } response = requests.put(NYP_Webhost + '/Thingworx/Things/' + ThingName + '/Properties/*', headers=headers, json=payload, verify=False) 6. From the command line run, './test.py http://twhome:8080 e9274d87-58aa-4d60-b27f-e67962f3e5c4' except substitute your server and your app key. 7. A successful response should look like: INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): twhome send: 'PUT /Thingworx/Things/TempAndHumidityThing/Properties/* HTTP/1.1\r\nHost: twhome:8080\r\nappKey: e9274d87-58aa-4d60-b27f-e67962f3e5c4\r\nContent-Length: 45\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nUser-Agent: python-requests/2.8.1\r\nConnection: keep-alive\r\nContent-Type: application/json\r\n\r\n{"Prop_Temperature": 45, "Prop_Humidity": 33}' reply: 'HTTP/1.1 200 OK\r\n' header: Server: Apache-Coyote/1.1 header: Set-Cookie: JSESSIONID=E7436D2E6AE81C84EC197D406E7E365A; Path=/Thingworx/; HttpOnly header: Expires: 0 header: Cache-Control: no-store, no-cache header: Cache-Control: post-check=0, pre-check=0 header: Pragma: no-cache header: Content-Type: text/html;charset=UTF-8 header: Transfer-Encoding: chunked header: Date: Mon, 09 Nov 2015 12:39:24 GMT DEBUG:requests.packages.urllib3.connectionpool:"PUT /Thingworx/Things/TempAndHumidityThing/Properties/* HTTP/1.1" 200 None My thanks to the customer who sent in the simple example.
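As a quick sanity check after the PUT, the property can be read back over REST. This companion snippet is not part of the original example; it reuses the same server URL and appKey arguments and assumes the same Thing and property names.

#!/usr/bin/python
import sys
import requests

NYP_Webhost = sys.argv[1]
App_Key = sys.argv[2]
ThingName = 'TempAndHumidityThing'

headers = {'Accept': 'application/json', 'appKey': App_Key}
# Reading a single property returns an InfoTable serialized as JSON
response = requests.get(NYP_Webhost + '/Thingworx/Things/' + ThingName + '/Properties/Prop_Temperature',
                        headers=headers, verify=False)
print(response.status_code)
print(response.json())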
ThingWorx provides multiple ways to deliver your data to the server. You can choose from the C-based EMS, your own C application that uses the C SDK, or SDKs for many popular languages, but what can you do if the device you want to collect data on is so small that it needs a very lightweight data delivery method? Normally you would consider using the REST web service interface and writing your own custom client to post your data, but there is an alternative: MQTT. MQTT is a lightweight protocol that can be used from an Arduino with an Ethernet Shield to stream real-time data directly to ThingWorx once the MQTT Marketplace Extension is installed on your server. To learn more about how this kind of solution works, I created this slide deck while building a hardware example: DeliveringArduinoDataToThingworx.pdf. Hopefully, it can help others who want to create this kind of solution as well.
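While the deck walks through an Arduino sketch, the same publish can be prototyped from a desktop before the hardware is ready. Below is a minimal sketch using the Python paho-mqtt client; the broker address, port, topic, and payload fields are assumptions that must match how the MQTT extension (or an intermediate broker) is configured on your server.

import json
import paho.mqtt.client as mqtt   # pip install paho-mqtt (1.x API; 2.x also needs a callback_api_version argument)

# Broker address, port, and topic are assumptions -- match them to your MQTT extension setup
client = mqtt.Client()
client.connect("mqtt-broker.example.com", 1883, 60)

payload = json.dumps({"temperature": 21.5, "humidity": 48})
client.publish("sensors/arduino1/environment", payload, qos=0)
client.disconnect()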
This is a slide deck I created while learning how to post data from an Arduino to ThingWorx using MQTT protocol.