IoT & Connectivity Tips

The System user is pivotal in securing your application, and the simplest approach is to assign the System user to ALL collections and give it Runtime Service Execute permission. These collection permissions only export to ThingworxStorage (not through the File Export), so managing them and rolling them out to a new machine quickly becomes painful. The best and fastest solution? Script the assignment. The script below does it for the System user; you can extend it to cover any other collection-level permissions you might need to set, such as adding Entity Create design time permission for the System user.

//@ThingworxExtensionApiMethod(since={6,6})
//public void AddCollectionRunTimePermission(java.lang.String collectionName,
//       java.lang.String type,
//       java.lang.String resource,
//       java.lang.String principal,
//       java.lang.String principalType,
//       java.lang.Boolean allow)
//    throws java.lang.Exception
//
//Service Category:
//    Permissions
//
//Service Description:
//    Add a run time permission.
//
//Parameters:
//    collectionName - Collection name (Things, Users, ThingShapes, etc.) - STRING
//    type - Permission type (PropertyRead PropertyWrite ServiceInvoke EventInvoke EventSubscribe) - STRING
//    resource - Resource name (* = all or enter a specific resource to override) - STRING
//    principal - Principal name (name of user or group) - STRING
//    principalType - Principal type (User or Group) - STRING
//    allow - Permission (true = allow, false = deny) - BOOLEAN
//Throws:
//    java.lang.Exception - If an error occurs

var params = {
    modelTags: undefined /* TAGS */,
    type: undefined /* STRING */
};
// result: INFOTABLE dataShape: EntityCount
var EntityTypeList = Subsystems["PlatformSubsystem"].GetEntityCount(params);
for each (var row in EntityTypeList.rows) {
    try {
        var params = {
            principal: "System" /* STRING */,
            allow: true /* BOOLEAN */,
            resource: "*" /* STRING */,
            type: "ServiceInvoke" /* STRING */,
            principalType: "User" /* STRING */,
            collectionName: row.name /* STRING */
        };
        // no return
        Resources["CollectionFunctions"].AddCollectionRunTimePermission(params);
    }
    catch(err) {
    }
}
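If you also want the System user to hold the other run-time permission types, a minimal variation of the same loop is sketched below. It only uses the AddCollectionRunTimePermission service documented above, and the list of permission types comes straight from that service's parameter description; trim the list to what your application actually needs.

var permissionTypes = ["ServiceInvoke", "PropertyRead", "PropertyWrite", "EventInvoke", "EventSubscribe"];
var countParams = {
    modelTags: undefined /* TAGS */,
    type: undefined /* STRING */
};
// result: INFOTABLE dataShape: EntityCount
var EntityTypeList = Subsystems["PlatformSubsystem"].GetEntityCount(countParams);
for each (var row in EntityTypeList.rows) {
    for (var i = 0; i < permissionTypes.length; i++) {
        try {
            Resources["CollectionFunctions"].AddCollectionRunTimePermission({
                collectionName: row.name /* STRING */,
                type: permissionTypes[i] /* STRING */,
                resource: "*" /* STRING */,
                principal: "System" /* STRING */,
                principalType: "User" /* STRING */,
                allow: true /* BOOLEAN */
            });
        } catch (err) {
            // some permission types do not apply to every collection; ignore and continue
        }
    }
}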
When we connect to JDBC databases, we want to leverage the database server's capabilities as much as possible for querying, aggregating and everything else. If we need to do filtering or math, add it to the query statement so it is done server side, by the database server, not by the ThingWorx runtime server. Beyond that, there is much more that can be done, since databases are powerful (joins, unions, distinct, generated and calculated fields, and so on). The more we can push to the database server, the better it is for ThingWorx runtime performance. Hopefully everyone knows the [[ ]] parameter substitution: we can easily build an SQL service with several input parameters, and it will look like

Select * from table where item1 = [[par1]] AND item2 = [[par2]]

But we can take this up a notch with the super powerful yet super dangerous << >>. Now we can write a service that just says <<sqlQuery>> and use another service to build something like

select * from table where item1 in ('val1', 'val2', 'val3')

If you can avoid it, only use [[ ]], but if needed there is always << >>. You must make sure that you properly secure such a service with the System user and validate the input in the service that invokes it, since << >> is vulnerable to SQL injection.
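To make that validation step concrete, here is a minimal sketch of the wrapper-service idea. The database Thing name MyDatabaseThing and its service RunDynamicQuery (whose body would be just <<sqlQuery>>) are placeholders for whatever you have built; the point is to whitelist each value before assembling the IN clause so nothing unexpected reaches the dynamic SQL.

// Input parameter: csvValues (STRING), e.g. "val1,val2,val3"
var safeValues = [];
var parts = csvValues.split(",");
for (var i = 0; i < parts.length; i++) {
    var value = parts[i].replace(/^\s+|\s+$/g, ""); // trim whitespace
    // reject anything that is not plain alphanumeric (adjust the pattern to your data)
    if (!/^[A-Za-z0-9_\-]+$/.test(value)) {
        throw "Rejected value in list: " + value;
    }
    safeValues.push("'" + value + "'");
}
var statement = "SELECT * FROM table1 WHERE item1 IN (" + safeValues.join(",") + ")";
// RunDynamicQuery is the placeholder service that exposes the <<sqlQuery>> token
var result = Things["MyDatabaseThing"].RunDynamicQuery({ sqlQuery: statement });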
In this video we cover the process of installing ThingWorx Analytics Server 52.1. Make sure you have reviewed the Part 1 video about the prerequisites. Updated link for access to this video: Installing ThingWorx Analytics Server: Part 2 of 2
In this video we review the prerequisites needed prior to installing ThingWorx Analytics Server 52.1. Updated link for access to this video: Installing ThingWorx Analytics Server: Part 1 - Prerequisites
The ThingWorx Analytics appliance can be built on multiple operating systems. In this post, we discuss common issues that other users have encountered.

Permission Denied: read/write access to third-party components
This is encountered when executing the shell script that begins the build process. On macOS and Linux you may encounter a "Permission Denied" error on the two components required for the build, packer-post-processor-vhd and packer.

Error message: the Terminal will report "Process Completed, No Artifacts Created". This indicates that the Packer script failed to complete the task and the desired appliance images were not created. To correct this issue, change the permissions on the packer-post-processor-vhd and packer components so that they are readable and executable by the user account that is building the appliance.

Solution: run the following commands in the virtual machine terminal (you may need to run them with sudo or as root):

chmod +x packer-post-processor-vhd
chmod +x packer

After running the commands above, rerun the shell script for the desired VM appliance output. This should resolve the "Permission Denied" error while executing the build scripts.

Error starting the appliance in VirtualBox
Users have experienced this issue on the first run of the appliance, right after it has been assembled. This issue is unique to VirtualBox versions 5.0 and above.

Error message (dialog box): if you encounter this error, check the settings of the imported OVA. The issue is the result of invalid settings in the appliance configuration. Navigate to the Settings menu for the appliance; the "Invalid settings detected" notice indicates that when the product was assembled, some configuration settings were not applied correctly by the creation tool scripts.

Solution: hover your mouse over the settings and VirtualBox will point you to the cause; in this case it is the remote display setup. In the Display settings (Remote Display tab), uncheck the "Enable Server" option, press OK, and start the appliance.
The following is a set of custom objects that will trigger from an Expression Rule and cause a file uploaded by a remote agent to be sent on to the ThingWorx platform instance of your choice once completed. The Expression Rule should be configured as follows:

Type: File
IF: 1 == 1
THEN: ExecuteCustomObject("SendUploadedFiles")

SendUploadedFiles.groovy:

import com.axeda.drm.sdk.data.UploadedFile
import static com.axeda.sdk.v2.dsl.Bridges.*

logger.info("Executing AsyncExecutor")

// Spawn an async thread for each upload
compressedFile.getFiles().each { UploadedFile upFile ->
    // last parameter is a timeout for the execution measured in seconds (200 minutes)
    customObjectBridge.initiateAsync("SendToThingWorx",
                                     [
                                         fileID: upFile.id,
                                         hint: parameters.hint,
                                         deviceID: context.device.id
                                     ], 200 * 60)
}

SendToThingworx.groovy:

import static com.axeda.sdk.v2.dsl.Bridges.*
import com.axeda.services.v2.*
import com.axeda.sdk.v2.exception.*
import groovyx.net.http.HTTPBuilder
import static groovyx.net.http.ContentType.*
import static groovyx.net.http.Method.*
import com.axeda.drm.sdk.data.*  // UploadedFileFinder
import com.axeda.drm.sdk.Context
import com.axeda.common.sdk.id.Identifier
import org.apache.commons.codec.binary.Base64

def retStr = "this is a sample groovy script. your username: ${parameters.username}\n"
def context = Context.getAdminContext()
def thingName = 'ExampleThing'
def thingworxHost = 'sample.cloud.thingworx.com'
def twxApplicationKey = '00000000-0000-0000-0000-00000000000'
def fileid = parameters.fileID

def finder = new UploadedFileFinder(context)
finder.setId( new Identifier( fileid.toLong()) )
UploadedFile uf = finder.find()
def is = new FileInputStream( uf.extractFile() )

retStr += "UF: ${uf.name} ${uf.fileSize} ${uf.actualFileName}\n"
logger.info "UF: ${uf.name} ${uf.fileSize} ${uf.actualFileName}\n"

def bOut = new ByteArrayOutputStream()
int b = 0
int count = 0
while ( (b = is.read()) != -1 ) {
    count++
    bOut.write(b)
}
is?.close()

byte[] bRes = bOut.toByteArray()
logger.info "Length: ${bRes.length}"
retStr += "Count: ${count}  Length: ${bRes.length}\n"

def b64 = new Base64()
def outputStr = b64.encodeBase64String(bRes)
retStr += "Length of base64 string: ${outputStr.length()}\n"
logger.info "Length of base64 string: ${outputStr.length()}\n"
logger.info "==========================================="
logger.info outputStr
logger.info "==========================================="

def http = new HTTPBuilder("https://${thingworxHost}")
http.request(POST, JSON) {
    uri.path = "/Thingworx/Things/${thingName}/Services/SaveBinary"
    body = [path: uf.name, content: outputStr]
    headers = [appKey: twxApplicationKey,
               Accept: 'application/json',
               "content-type": "application/json"]
    response.success = { resp ->
        println "POST response status: ${resp.statusLine}"
        logger.info "POST RESPONSE status: ${resp.statusLine}"
    }
    response.failure = { resp ->
        logger.info "RequestMessage: ${resp.statusLine}"
        logger.info "Request failed: ${resp.status}"
    }
}

return retStr
Let's consider that we have two Streams, Stream1 and Stream2, with the same DataShape StreamDS. The DataShape StreamDS has two fields, Id (NUMBER) and Name (STRING). We want to copy all the entries from Stream1 to Stream2.

Steps:
1. Open the Stream1 Stream in Composer and run the GetStreamEntriesWithData service.
2. In the popup, click the Create DataShape from Result option to create a new DataShape, GetStreamEntriesDS.
3. Create a Service and use JavaScript like the following (comments added for details):

// Create a temporary infotable to hold the output of the GetStreamEntriesWithData service
var paramsForInfotable = {
    infoTableName: "InfoTable" /* STRING */,
    dataShapeName: "GetStreamEntriesDS" /* DATASHAPENAME */
};
// result: INFOTABLE
var InfotableForCopy = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(paramsForInfotable);

// Save the output of the GetStreamEntriesWithData service into the temporary infotable InfotableForCopy
var paramsForGetStreamEntriesWithDataService = {
    oldestFirst: false /* BOOLEAN */,
    maxItems: 10000 /* NUMBER */
};
// result: INFOTABLE dataShape: "GetStreamEntriesDS"
InfotableForCopy = Things["Stream1"].GetStreamEntriesWithData(paramsForGetStreamEntriesWithDataService);

// Read the data from the infotable row by row and add it to the new Stream
var tableLength = InfotableForCopy.rows.length;
for (var x = 0; x < tableLength; x++) {
    var row = InfotableForCopy.rows[x];
    // values: INFOTABLE (DataShape: StreamDS)
    var values = Things["Stream2"].CreateValues();
    values.Id = row.Id; // NUMBER
    values.Name = row.Name; // STRING
    var paramsForAddStreamEntryService = {
        sourceType: row.sourceType /* STRING */,
        values: values /* INFOTABLE */,
        location: row.location /* LOCATION */,
        source: row.source /* STRING */,
        timestamp: row.timestamp /* DATETIME */,
        tags: row.tags /* TAGS */
    };
    // AddStreamEntry(tags:TAGS, timestamp:DATETIME, source:STRING, values:INFOTABLE(StreamDS), location:LOCATION):NOTHING
    Things["Stream2"].AddStreamEntry(paramsForAddStreamEntryService);
}
var result = InfotableForCopy;
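The same logic can also be wrapped into a reusable service that takes the stream names as inputs. Below is a sketch using only the services already shown above; sourceStreamName and targetStreamName are illustrative STRING input parameters, and both streams are assumed to share the StreamDS DataShape.

// Input parameters: sourceStreamName (STRING), targetStreamName (STRING)
var entries = Things[sourceStreamName].GetStreamEntriesWithData({
    oldestFirst: false /* BOOLEAN */,
    maxItems: 10000 /* NUMBER */
});
for (var x = 0; x < entries.rows.length; x++) {
    var row = entries.rows[x];
    // values: INFOTABLE (DataShape: StreamDS)
    var values = Things[targetStreamName].CreateValues();
    values.Id = row.Id; // NUMBER
    values.Name = row.Name; // STRING
    Things[targetStreamName].AddStreamEntry({
        sourceType: row.sourceType /* STRING */,
        values: values /* INFOTABLE */,
        location: row.location /* LOCATION */,
        source: row.source /* STRING */,
        timestamp: row.timestamp /* DATETIME */,
        tags: row.tags /* TAGS */
    });
}
var result = entries;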
Putting this out there because this is a difficult problem to troubleshoot if you don't know where to look. Let's say you have an application where visibility permissions are in effect, so you have the Users group removed from the Everyone Organization. Now you have a Thing "Thing1" with properties that are being logged to a ValueStream "VS1". What do you need to make this work? Obviously the necessary permissions to write the values to Thing1 and to read the values from Thing1 (for the UI). But for visibility, what you'll need is:
- Visibility to Thing1 (makes sense)
- Visibility to the Persistence Provider of the ValueStream VS1 (!)
No, you don't need visibility to the ValueStream itself, but you DO need visibility to the Persistence Provider of that ValueStream. The way the lack of this permission showed up in the Application Log was a message about trying to provide a Null value.
Get MQTT (like mosquitto) operating with SSL:

1. Use http://rockingdlabs.dunmire.org/exercises-experiments/ssl-client-certs-to-secure-mqtt as your primary guide to building out your self-signed CA cert and your server cert and key. Simply follow their directions, with the one caveat of setting the IPLIST and HOSTLIST environment variables prior to executing the generate-CA.sh script. This is necessary for hosted environments like AWS where the actual IP address of the system cannot be used to access the server from the internet. Put the external-facing IP address in IPLIST and the external-facing fully qualified domain name (FQDN) in HOSTLIST. If you have multiple usable IP addresses or hostname aliases, enclose them in quotes and separate them with spaces (export IPLIST="1.2.3.4 5.6.7.8"). Complete steps 1-3 in those instructions. This is sufficient to get the MQTT traffic encrypted and use it with ThingWorx.

2. Do not proceed until you can make a mosquitto_pub and mosquitto_sub pass data using the --cafile option and get an error if you do not supply the --cafile option. Make sure you have a copy of the ca.crt file generated by the script above to reference in the commands. Note that it may be necessary to use the IP address rather than the FQDN.

mosquitto_sub --cafile path/to/ca.crt -h ipaddr -t topic
mosquitto_pub --cafile path/to/ca.crt -h ipaddr -t topic -m message

3. Create an MQTT Thing in ThingWorx based on the MQTT ThingTemplate. Create a property in the new Thing for sending messages to the MQTT broker. In the configuration page for the new MQTT Thing, set the serverName and serverPort and check the useSSL checkbox. In the Property to MQTT topic mappings, create a publish entry that points to the property you created in the Thing and set the topic to the MQTT topic on which you want to publish.

4. The ca.crt file created by the above script is the certificate for a new Certificate Authority (self-signed, so not really official). Clients may have to import this certificate into their trusted CA root store in order to make the encryption work. Add the ca.crt file from the MQTT broker system to a keystore file that will become Tomcat's truststore (the list of CAs trusted by the server). See the Tomcat documentation if you need to configure HTTPS on Tomcat as well. Create a new keystore if one does not already exist as a truststore:

keytool -import -trustcacerts -file /path/to/ca/ca.crt -alias CA_ALIAS -keystore path/to/TrustStore -storepass mypassword

Replace CA_ALIAS with some identifying string like MyPrivateCertificateAuthority (it did not appear to care about the CA_ALIAS value used). Replace path/to/TrustStore to point to the file that already exists or that you want to create.

5. Add the following to the CATALINA_OPTS for starting Tomcat:

-Djavax.net.ssl.trustStore=path/to/TrustStore -Djavax.net.ssl.trustStorePassword=xxxx

Replace path/to/TrustStore with the pathname of the file you created or updated with keytool above. Replace xxxx with whatever password you used in the keytool command above.

6. Restart Tomcat. Check the MQTT Thing for its isConnected property; it should now be true. If it is not, check the log files for mosquitto and for Tomcat, looking for SSL issues. Change the property value and see it appear in the output of a (properly constructed) mosquitto_sub --cafile path/to/ca.crt -t topic running somewhere.
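For the final verification step, the publish can also be driven from a ThingWorx service or subscription rather than by hand: writing to the mapped property is what triggers the MQTT publish. A minimal sketch follows; the Thing name MyMqttThing and the property name outboundMessage are placeholders for whatever you created in the steps above.

// MyMqttThing is the Thing based on the MQTT ThingTemplate created above;
// outboundMessage is the property mapped to a publish topic in its configuration.
Things["MyMqttThing"].outboundMessage = "test message " + new Date();
// If the broker-side subscriber (mosquitto_sub --cafile ...) prints the message,
// the SSL chain between ThingWorx, Tomcat's truststore and the broker is working.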
In this blog we will have a look at the installation of the ThingWorx Analytics Builder extension. This is intended as a guideline; make sure to check the Help Center for your release, as steps do vary between versions. The installation has been divided into 3 parts:

1. Introduction and import of the extension into the ThingWorx platform (Video Link : 1568)
2. Configuration of the extension. Note: for release 8.1, the Settings menu differs from previous versions; see What's New in ThingWorx Analytics Builder 8.1 between times 00:12 and 00:40 for the up-to-date menu selection. (Video Link : 1572)
3. Installation of the UploadThing module. Note: this step no longer applies as of ThingWorx Analytics 8.1. (Video Link : 1573)

Useful links:
- PTC Download page for ThingWorx Analytics
- PTC Reference Document page for ThingWorx Analytics
- How to copy files from Windows to Linux?
In this video we cover the installation of the UploadThing module. This video applies to ThingWorx Analytics 52.2 through 8.0; it is no longer applicable with ThingWorx Analytics 8.1. Useful links: How to copy files from Windows to Linux. Updated link for access to this video: Installing ThingWorx Analytics Builder: Part 3 of 3
In this video we cover the different configuration steps required for the ThingWorx Analytics Builder extension. This video applies to ThingWorx Analytics 52.1 through 8.1. Notes:
- This video uses Classic Composer; the same operations can be done using the New Composer starting with version 8.0, as illustrated in the Help Center.
- For release 8.1, the Settings menu differs from previous versions; see Video Link : 2079 between times 00:12 and 00:40 for the up-to-date menu selection.
Updated link for access to this video: Installing ThingWorx Analytics Builder: Part 2 of 3
In this video we cover a short introduction of ThingWorx Analytics Builder and the import of the ThingWorx Analytics Builder extension. This video applies to ThingWorx Analytics 52.1 through 8.1. Updated link for access to this video: Installing ThingWorx Analytics Builder: Part 1 of 3
Sometimes M2M assets should poll the platform on demand, for example to avoid excessive data charges from chatty assets. A mechanism was developed that instructs the asset to contact (poll) the platform for actions that the asset needs to act on, such as file uploads, Set DataItem, etc. The Shoulder Tap SMS message is the platform's way of contacting the asset – tapping it on the shoulder to let it know there's a message waiting for it. The asset responds by polling for the waiting message. This implementation in the platform provides a way to configure the Model Profile that is responsible for sending an SMS Shoulder Tap message to an M2M asset. The Model Profile contains model-wide instructions for how and when a Shoulder Tap message should be sent.

How does it work? The M2M asset is set not to poll the Axeda Platform for a long period, but the Enterprise user has actions that the asset needs to act upon, such as FOTA (Firmware Over-the-Air):
1. A software package is deployed to the M2M asset from the Axeda Platform and put into the egress queue.
2. The Shoulder Tap mechanism executes a Custom Object that sends a message to the asset through a delivery method like SMS, UDP, etc.
3. The asset's SMS (or other) handler receives the message, and the asset then sends a POLL to the platform and acts upon the action in the egress queue.

How do you make Shoulder Tap work for your M2M assets? The first step is to create a Model Profile; the Model Profile tells assets of this model how to communicate. For example, if the Model Profile is Shoulder Tap, then the mechanism used to communicate with the asset will imply Shoulder Tap. Execute the attached custom object, createSMSModelProfile.groovy, and it will create a Model Profile named "SMSModelProfile". When you create a new Model, you will see "SMSModelProfile" appear in the Communication Profile dropdown list.

The next step is to create the Custom Object Transport script, which is responsible for sending out the SMS or other method of communication to the asset. In this example the custom object is named SMSCustomObject. The contents of this custom object are outside the scope of this article, but could be REST API calls to Twilio, Jasper, or to a wireless provider's REST APIs to communicate with the remote device using an SMS message. This could also be used with the IntegrationPublisher API to send a JMS message to a server the customer controls, which could then be used to talk directly with custom libraries that are not directly compatible with the Axeda Platform.

Once the Shoulder Tap scripting has been tested and is working correctly, you can directly send a Shoulder Tap to the asset from an action or through an Extended UI Module, such as shown below:

import com.axeda.platform.sdk.v1.services.ServiceFactory;

final ServiceFactory sFact = new ServiceFactory()
def assetId = (Long)parameters.get("assetId")
def stapService = ServiceFactory.getShoulderTapService()
stapService.sendShoulderTap( assetId )

See Extending the Axeda Platform UI - Custom Tabs and Modules for more about creating and configuring Extended UI Modules.

What about retries?
maxRetryCount - This built-in attribute's value defines the number of times the platform will retry sending the Shoulder Tap message before it gives up.
retryInterval - The retry interval that can be used if the message delivery needs to be retried.
Retry count and interval are configured in the Model Profile Custom Object like so:

final DeliveryMethodDescriptor dmd = new DeliveryMethodDescriptor();
dmd.setMaxRetryCount(2);
dmd.setRetryInterval(60);
One of the recurring patterns on the Axeda Platform is making requests from custom objects to other services, to be called either via Scripto or through Expression Rules that help integrate Axeda data with your custom systems or third parties such as Salesforce.com. Java developers would normally use a URLConnection to do this, but due to security requirements, access to the URLConnection API is sandboxed and the HTTPBuilder API is provided instead.

Below is a short example of GETting a payload from http://www.mocky.io/v2/57d02c05100000c201208cb5 into your custom object. One of the requirements of many services is being able to pass in API keys as part of the request header. While in this example the API key is embedded in the code, the recommended way of storing API keys on the Axeda Platform is to use the External Credential lockbox API. This allows you to change the API keys securely without needing to change code.

import groovyx.net.http.HTTPBuilder
import static groovyx.net.http.ContentType.*
import static groovyx.net.http.Method.*

def http = new HTTPBuilder('https://www.mocky.io')
http.request( GET, JSON ) {
    uri.path = '/v2/57d02c05100000c201208cb5'
    uri.headers.'appKey' = '7661392f-2372-4cba-a921-f1263c938090'
    response.success = { resp ->
        println "POST response status: ${resp.statusLine}"
        logger.info "POST RESPONSE status: ${resp.statusLine}"
        assert resp.statusLine.statusCode == 201
    }
}

An example for Salesforce might look like this:

import groovyx.net.http.HTTPBuilder
import static groovyx.net.http.ContentType.*
import static groovyx.net.http.Method.*

def xml_body = """<?xml version="1.0" encoding="utf-8" ?>
<env:Envelope xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:env="http://schemas.xmlsoap.org/soap/envelope/">
  <env:Body>
    <n1:login xmlns:n1="urn:partner.soap.sforce.com">
      <n1:username>johndoe@example.com</n1:username>
      <n1:password>Password+SECRETKEY</n1:password>
    </n1:login>
  </env:Body>
</env:Envelope>
"""

def http = new HTTPBuilder('https://login.salesforce.com/')
http.request( POST ) {
    uri.path = '/services/Soap/u/35.0'
    body = xml_body
    response.success = { resp ->
        println "POST response status: ${resp.statusLine}"
        logger.info "POST RESPONSE status: ${resp.statusLine}"
        assert resp.statusLine.statusCode == 201
    }
}

This request will give you a security token you can use in future calls to Salesforce APIs; you would use Groovy's native XmlSlurper/XmlParser to parse the response and get the session id to use in future requests. You would then use this session id as in the following example to get the available REST resources:

import groovyx.net.http.HTTPBuilder
import static groovyx.net.http.ContentType.*
import static groovyx.net.http.Method.*

def http = new HTTPBuilder('https://na1.salesforce.com/')
http.request( POST ) {
    uri.path = '/services/data/v29.0'
    uri.headers.'Authorization' = 'Bearer SESSIONID'
    response.success = { resp ->
        println "POST response status: ${resp.statusLine}"
        logger.info "POST RESPONSE status: ${resp.statusLine}"
        assert resp.statusLine.statusCode == 201
    }
}

Further reading:
HttpBuilder Wiki - https://github.com/jgritman/httpbuilder/wiki
Groovy Xml Processing - http://groovy-lang.org/processing-xml.html
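For comparison, the same kind of header-authenticated GET issued from the ThingWorx side is typically done with the ContentLoaderFunctions resource rather than HTTPBuilder. Below is a minimal sketch only; the exact parameter list of GetJSON varies by ThingWorx version, so treat the names here as the common pattern and verify them against your platform's service definition. The appKey value is the same placeholder used in the example above.

var result = Resources["ContentLoaderFunctions"].GetJSON({
    url: "http://www.mocky.io/v2/57d02c05100000c201208cb5" /* STRING */,
    headers: { "appKey": "7661392f-2372-4cba-a921-f1263c938090" } /* JSON; placeholder key */,
    ignoreSSLErrors: false /* BOOLEAN */
});
// Log the parsed response for inspection
logger.info("GET response: " + JSON.stringify(result));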
Attached is a description of Ensemble Learning Techniques.
Best Practices in Data Preparation for ThingWorx Analytics Data Preparation is an important phase in the process of Data Analysis when using ThingWorx Analytics. Basically, it is getting your Data from being Raw Data that you might have gathered through your Operational system or from your Data warehouse to the kind of Data ready to be analyzed. In this Document we will be using “Talend Data Preparation Free Desktop” as a Tool to illustrate some examples of the Data Preparations process. This tool could be downloaded under the following Link: https://www.talend.com/products/data-preparation (You could also choose to use another tool) We would also use the Beanpro Dataset in our Examples and illustrations. Checking data formats The analysis starts with a raw data file. The user needs to make sure that the data files can be read. Raw data files come in many different shapes and sizes. For example, spreadsheet data is formatted differently than web data or Sensors collected data and so forth. In ThingWorx Analytics the Data Format acceptable are CSV. So the Data retrieved needs to be inputted into that format before it could be uploaded to TWA Data Example (BeanPro dataset used): After that is done the user needs to actually look at what each field contains. For example, a field is listed as a character field could actually contains none character data. Verify data types Verifying the data types for each feature or field in the Dataset used.  All data falls into one of four categories that affect what sort of analytics that could be applied to it: Nominal data is essentially just a name or an identifier. Ordinal data puts records into order from lowest to highest. Interval data represents values where the differences between them are comparable. Ratio data is like interval data except that it also allows for a value of 0. It's important to understand which categories your data falls into before you feed it into ThingWorx Analytics. For example when doing Predictive Analytics TWA would not accept a Nominal Data Field as Goal. The Goal feature data would have to be of a numerical non nominal type so this needs to be confirmed in an early stage.                 Creating a Data Dictionary A data dictionary is a metadata description of the features included in the Dataset when displayed it consists of a table with 3 columns: - The first column represents a label: that is, the name of a feature, or a combination of multiple (up to 3) features which are fields in the used Dataset. It points to “fieldname” in the configuration json file. - The second column is the Datatype value attached to the label. (Integer, String, Datetime…). It points to “dataType” in the configuration json file. - The third column is a description of the Feature related to the label used in the first column. It points to “description” in the configuration json file. In the context of TWA this Metadata is represented by a Data configuration “json” file that would be uploaded before even uploading the Dataset itself. Sample of BeanPro dataset configuration file below: Verify data accuracy Once it is confirmed that the data is formatted the way that is acceptable by TWA, the user still need to make sure it's accurate and that it makes sense. This step requires some knowledge of the subject area that the Dataset is related to. There isn't really a cut-and-dried approach to verifying data accuracy. 
The basic idea is to formulate some properties that you think the data should exhibit and test the data to see if those properties hold. Are stock prices always positive? Do all the product codes match the list of valid ones? Essentially, you're trying to figure out whether the data really is what you've been told it is. Identifying outliers Outliers are data points that are distant from the rest of the distribution. They are either very large or very small values compared with the rest of the dataset. Outliers are problematic because they can seriously compromise the Training Models that TWA generates. A single outlier can have a huge impact on the value of the mean. Because the mean is supposed to represent the center of the data, in a sense, this one outlier renders the mean useless. When faced with outliers, the most common strategy is to delete them. Example of the effect of an Outlier in the Feature “AVG Technician Tenure” in BeanPro Dataset:   Dataset with No Outlier: Dataset with Outlier: Deal with missing values Missing values are one of the most common (and annoying) data problems you will encounter. In TWA dealing with the Null values is done by one of the below methods: - Dropping records with missing values from your Dataset. The problem with this is that missing values are frequently not just random little data glitches so this would consider as the last option. - Replacing the NULL values with average values of the responses from the other records of the same field to fill in the missing value Transforming the Dataset - Selecting only certain columns to load which would be relevant to records where salary is not present (salary = null). - Translating coded values: (e.g., if the source system codes male as "1" and female as "2", but the warehouse codes male as "M" and female as "F") - Deriving a new calculated value: (e.g., sale_amount = qty * unit_price) - Transposing or pivoting (turning multiple columns into multiple rows or vice versa) - Splitting a column into multiple columns (e.g., converting a comma-separated list, specified as a string in one column, into individual values in different columns) Please note that: Issue with Talend should be reported to the Talend Team Data preparation is outside the scope of PTC Technical Support so please use this article as an advisable Best Practices document
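As one concrete illustration of the missing-value strategy described above, the sketch below fills nulls in a numeric column of an infotable with the average of the non-null values. The infotable input parameter name data and the field name Salary are placeholders; in practice you would perform this kind of transformation in your preparation tool before uploading the CSV to ThingWorx Analytics.

// data: INFOTABLE input parameter containing the raw records
var sum = 0;
var count = 0;
for (var i = 0; i < data.rows.length; i++) {
    var v = data.rows[i].Salary;
    if (v !== null && v !== undefined) {
        sum += v;
        count++;
    }
}
var average = (count > 0) ? (sum / count) : 0;
for (var j = 0; j < data.rows.length; j++) {
    if (data.rows[j].Salary === null || data.rows[j].Salary === undefined) {
        data.rows[j].Salary = average; // replace the missing value with the column mean
    }
}
var result = data;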
The following is valid for ThingWorx Analytics (TWA) 52.0.2 through 8.0. For release 8.3.0 and above, see How to score new data in ThingWorx Analytics 8.3.x?

Overview
The main steps are as follows:
1. Create a dataset
2. Configure the dataset
3. Upload data to the dataset
4. Optimize the dataset
5. Create filters for training and scoring data
6. Train the model
7. Execute scoring on existing data
8. Upload new data to the dataset
9. Execute scoring on the new data

TWA models are dataset centric, which means a model created with one dataset cannot be reused with a different dataset. In order to be able to score new data, a specific feature, record purpose in the example below, is included in the dataset. This feature needs to be included from the beginning, when the data is first uploaded to TWA. A filter on that feature can then be created to isolate the desired data. When new data comes in, it is added to the original dataset but with a specific value for the filtered feature (record purpose), which makes it possible to discriminate and score only those new records.

Process
1. Create a dataset. This example uses the BeanPro demo dataset. Creating the dataset is done through a POST on the datasets REST API.
2. Configure the dataset. This is done through a POST on the <dataset>/configuration REST API.
3. Upload data.
4. Optimize the dataset.
5. Create filters. The dataset includes a feature named record purpose created especially to differentiate between the rows to be used for training and the rows to be used for scoring. New data to be added will have record purpose set to scoringnew, which allows a scoring job to be limited to those filtered new rows. Create one filter for the training data and one for the new scoring data.
6. Train the model. This is done through a POST on the <dataset>/prediction API.
7. Score the training data. This is done through a POST on the <dataset>/predictive_scores API. Note the use of the filter TrainingData created earlier, which allows scoring of only the rows with training as the value of the record purpose feature. Note: scoring could also be done without a filter at this stage, in which case all the data in the dataset would be scored, not just the rows with training for record purpose. Retrieving the scoring result shows all the records in the dataset.
8. Upload new data. The newly uploaded CSV file should only contain new records; these are appended to the existing ones. Note that the new record (there could be more than one) has the value scoringnew for the record purpose feature. This allows the previously created filter ScoringNewData to be used so that a new scoring job only takes this new record into account.
9. Score the new data. A POST on the predictive_scores API is executed, this time using the filter ScoringNewData. This results in only the newly added data being scored, and therefore a much quicker execution time too. Retrieving the scoring result shows only the new record.
This Guide contains all the Linux commands that you may have to use for ThingWorx Analytics Installation or day to day use. Command/Category Description Network/port ip a List the ips of all of the network interfaces ssh How to jump from one machine to another ping Send packets to a remote machine.  useful for testing connectivity netstat –anp Check active port cat < /dev/tcp/localhost/8080 Test connection to a port Replace localhost with desired hostname or ip, replace 8080 with desired port number (/dev/tcp/host/port) exit Exit my current sign in.  this lets one disconnect from remove ssh sessions or if one has changed one's user e.g. switched to root scp Retrieve something via ssh Resource Usage free -m Check memory -m is for output in Mb Mpstat -P ALL CPU usage top Process usage jvmtop Collect cpu usage of jvm and its thread https://github.com/patric-r/jvmtop (requires jdk to be installed) File Interaction cp / mv Copy and move respectively. mv just deletes the source file.  Usage: cp /source/location/file /output/location/file cat Mostly used to just print the contents of a file to the command line.  can also print multiple files at once:  cat /var/log/gridworker/warning.log /var/log/gridworker/error.log vim / vi A command line text editor. Not the most user friendly (none of them are) but really useful. Here's a cheat sheet for the commands https://www.fprintf.net/vimCheatSheet.html rm Remove files chmod Change the access permissions of files chown Change the user or group ownership of files grep A text based filtering.  Useful for making a larger list smaller and more targeted.  Almost always used after a pipe (see pipe below) less Generally used to view the contents of a file with more friendly scrolling locate Find a file by name Directory ls What’s in the directory.  Will do the current directory but you can also pass the directory e.g. ls /var/log/tomcat. Black writing is files Blue writing is directories Red writing is compressed file pwd Tell me which directory I'm currently in cd Change directory.  provide the directory to change to or just use cd to return to the user's home directory clear Clears the screen Terminal clears all provided commands mkdir Creates Directories Running Processes ps Query what services are running. usually use ps -aux to get a full, sorted list.  using grep with this is helpful systemctl The correct way to interact with services that are running Package installation yum install <packageName> Install a package. More useful commands: https://www.centos.org/docs/5/html/5.1/Deployment_Guide/s1-yum-useful-commands.html yum list installed List installed packages yum list <package> List available packages yum --showduplicates list java-1.7.0-openjdk-devel Use --showduplicates to see all versions Can use * for package name: *openjdk* rpm -ql <packagename> Find where package are installed Note: works if package installed with yum Yumdownloader --urls <packageName> Find URL where a package is downloaded from. Note: need to install yum-utils package Repoquery --requires <packageName> Find dependencies of a package Note: need to install yum-utils package repoquery --qf=%{name} -g --list --grouppkgs=all [groups] | xargs repotrack -a x86_64 -p /repos/Packages Download a package with all its dependencies. 
Need to install yum-utils package From  <http://unix.stackexchange.com/questions/50642/download-all-dependencies-with-yumdownloader-even-if-already-installed> Other Commands curl http://localhost:8080/1.0/about/versioninfo Send REST call via command line Use -X POST (default GET) for a POST (see man page - https://curl.haxx.se/docs/manual.html for example) See also http://www.codingpedia.org/ama/how-to-test-a-rest-api-from-command-line-with-curl/ Find / -type f -exec grep -I mystring {} \; Search string in files Sudo -u user command Execute a command as different user The below helpers are not commands themselves, but can be used in conjunction with the above commands. Helper Description 'pipe' The | character.  lets one chain commands.  e.g. ps -aux | grep java ./ The shorthand way to refer to this directory explicitly ../ The shorthand way to refer to the parent directory 'tab completion' Pressing tab will let linux guess what command/option best fits what's currently written.  very useful for navigating directories and long-named files (NOTE: not necessarily tab based upon one's keyboard layout/language) 'ctrl-r' Look up the mostly likely command that matches what one typing.  so if one earlier used ps -aux | grep java | less and the hit ctrl-r and typed -aux it would likely pull that command or at least the most recent one that matches
Best Practices in Data Preparation for ThingWorx Analytics
Internationalization and Localization Internationalization (often abbreviated I18N – from "I" + 18 more letters + "n") is the process of developing software that supports many languages, including those with non-Latin character sets. Localization (L10N) refers to developing applications that can be delivered in many languages, relying on the underlying architecture of I18N. This how-to article focuses mostly on localization, since the infrastructure is in place and stable. Create a Localization Table You create a Localization Table entity when you need to add support for another language to the application you're developing. Someone from Sales has said "There's an opportunity if we can deliver the Spiffy application in Estonian." This suggests that an Estonian-speaking end user should be able to run Spiffy and see all of its labels, messages, prompts, dialogs, and so on in Estonian. Most of the cost of adding Estonian language support is in a (usually contracted) service that does the English-to-Estonian (or whatever target language) translations. Such services employ native speakers who can get the nuances of translation correct. See Tips for translators below for suggestions on improving the accuracy of the translation. In Composer, view the Localization Tables list. Begin by duplicating an existing table (e.g. check Default or another language and click Duplicate) or by clicking New. A new tab will open with a New Localization Table in edit mode. The fields shown are: Locale (required). This is the official language tag of the new language. Language tags are defined by an Internet standard, IETF BCP 47. Briefly, they consist of a standard abbreviation for a language (e.g. en for English, de for German), followed optionally by a script subtag (e.g. Cyrl for Cyrilic), followed optionally by a region code (a country code, such as CH for Switzerland or HK for Hong Kong, or a U.N. region number), followed optionally by other qualifiers such as dialect. A simple example is es, Spanish. A complex one is sl-Latn-IT-nedis, Slovenian rendered in Latin characters as spoken in Italy in the Natisone dialect. Software rarely needs such highly specific language tags; the most specific practical examples are the various scripts and regions for Chinese (e.g. zh-Hans-CN, zh-Hant-TW). Language Name (Native) (required). This is the name of the language as written in that language, such that it would be readable by a native speaker. For example, 日本語 for Japanese, ਪੰਜਾਬੀ ਦੇ for Punjabi, or Deutsche for German. Language Name (Common). This is the name of the language as written in a common administrative language. For an application delivered internationally, English is probably a safe choice. Administrators at a customer site might change these to be in the language of the headquarters country. Description. Free form text describing the language. This will appear to end-users as a tooltip as they hover over language choices. Tags. Standard ThingWorx entity tags. Home Mashup. Does not apply. Avatar. An icon for this language. The default is . No other icons are delivered as standard, but language selection interfaces in many products use national flags to help distinguish choices, and those could be supplied here. Avatars are 48x48px images. There may be political implications in choosing a flag or other symbol for a language; use caution. Note that subtags of a language tag are separated by a hyphen, as in zh-Hans-SG. 
Using underscore is a Java convention that does not conform to BCP 47.A complete properties definition for Czech might look like this: Once the table has been created and saved, you can edit the translated text in Composer. Under Entity Information, select Localization Tokens. A grid similar to this will appear: The columns shown are: Token Name. This is the symbol used by mashup developers to insert a localized string into a certain place in a widget. For example, no matter how the phrase "Add New Page" is rendered (Neue Seite hinzufügen, Adicionar nova página, 새 페이지 추가...) the application developer is only concerned that the token addNewPage appears on the proper widget. See How tokens are resolved below for more information. This Language. How the text is to be represented in this language, that is, the language of the Localization Table currently being viewed or edited. Language. How the text has already been represented in any other language currently defined on the system. This is simply for reference purposes, to compare one translation with another. Usage. Can be set to Label, Message, or left unspecified. This is a guide to translators, who have to be concerned about the size of translated text. Usage Label suggests that the text needs to fit in a confined space, such as in a column header or on the face of a button. Usage Message suggests that the text is meant for a popup, error message, help, or somewhere that full sentences can be accommodated. Context. This is a free-form text field to provide instructions, advice, context, or other explanatory material to the translator. For the token book, for example, the context field can distinguish between the senses of book (something to read), book a table, book a sale, or book a prisoner, which may all have different translations. Translations can be entered in Composer. However, it's also likely that a third-party translator will do the work without using this editor. See Tips for translators below. Define language preferences for a user The reason for localization is to present user interfaces in the best language for a given user. To support this, each ThingWorx user is associated with one or more languages – those that that user can read comfortably. Some applications might offer just one language or a few, some many, and the supported languages may or may not overlap. So each user defines an ordered preference list, saying in effect: my best language is Catalan, but I'm decent in Spanish, and if those aren't available I did spend a few years in Hungary, and as a last resort there was some French in school. This would be represented in ThingWorx as: ca,es,hu,fr. A user from Scotland might have language preference en-UK,en, meaning that English with United Kingdom spellings and vocabulary is best (tyre, windscreen), but if not available then any English will do (tire, windshield). (It is not necessary to spell out related preferences of this type – see How tokens are resolved.) Any application then interacts with a given user in the best language that the application and user have in common.To define the language preference(s) for a user, open the Users list in Composer: Then choose an existing user to edit, or click New to create a new account. The only localization related information here is the Languages field. An administrator who knows the names of available languages may edit or paste an ordered, comma-separated list into the Languages field (e.g.  ca,es,hu,fr-CA). Clicking the Edit... 
button brings up a drag-and-drop preferences editor: The column on the left shows available (unselected) languages. The column on the right shows this user's languages, with the top entry being the most preferred language. Dragging a language from left to right adds it to the user's list; from right to left removes it; dragging rows up and down on the right changes the preference order. As language entries are dragged, a highlight appears to show where they might be dropped: A user with no language preference set will have all tokens resolved from the Default and System tables. Language Preferences can be set programmatically, as detailed in KCS Article CS243270. Localize Mashups The job of the application developer is to keep hard-coded natural language strings out of applications. To support this, widgets define an attribute isLocalizable: true for widget properties that can contain text. This shows up in the Mashup editor as a globe icon next to each localizable property. In this example, both the Text and ToolTipField properties are localizable: Clicking the globe icon changes the property from static to localized. The appearance in the Mashup editor changes accordingly: Clicking the magic wand icon opens the localization token picker: The list of tokens on the right corresponds to the Token Name column in the Localization Table editor. This is the key that is common to the meaning of a word or phrase, independent of its translation into natural languages. Select one from the list, or click to create a new one. Enter the token name and its Default (usually English) value: Note that, complying with best practices for extension developers, the token name has been namespaced: this token belongs to Acme Inc.'s Spiffy application. The rest of the name is descriptive and may reflect other development standards.When a new token is created, it becomes available to edit in every configured Localization Table. If these are not updated, then the default (English) value will be shown wherever the token occurs. How tokens are resolved What happens at run time when the UI needs to display the value of a localization token? The answer is determined by the current user's language preferences the set of Localization Tables configured on the system the presence or absence of a translation for a given token in a given table To visualize this, picture the user's language preferences as a stack, with the most preferred language on top and the least one sitting on the floor – where the floor consists of the Default and System Localization Tables: The user's language preference is fr,pt,ru,hi (French, Portuguese, Russian, Hindi, with French most preferred). The system is configured with Localization Tables, which have no order, for it (Italian), fr-CA (Canadian French), ru (Russian), pt-BR(Brazilian Portuguese), es (Spanish), and the default (likely Engish). Now the UI needs to present this user with the best value for the token com.acme.spiffy.labelAssembly. To resolve this, we start at the top of the stack. Is there a fr Localization Table? There is. Does it contain a translation for com.acme.spiffy.labelAssembly? For the sake of illustration, assume that it does not – perhaps other applications have French support, but the Spiffy application doesn't, so there aren't any com.acme.spiffy.* tokens in the French Localization Table. So we still need a value. Continuing down through the user's preferences, the next acceptable language is pt. Is there a pt localization table? No. 
There is a Brazilian Portuguese translation, but that won't help a user from Portugal. Still looking, we move to the next language, ru. Is there a ru Localization Table? There is. Does it contain a translation forcom.acme.spiffy.labelAssembly? It does: Ассамблея – so the token has a value, and that is what gets displayed in the UI. Suppose that the user's preferences were more specific, something like this: The users's language preference is fr-CA,pt-BR,ru-Cyrl-RU,sl-Latn-IT-nedis (Canadian French, Brazilian Portuguese, Russian in Cyrillic characters as used in Russia, Slovenian in Latin characters as used in Italy where the Natisone River dialect prevails). ThingWorx treats this by internally expanding the stack to include acceptable fall-back languages. In effect, it looks like: Of the four languages that the user can accept and that the system defines (fr-CA, fr, pt-BR, ru) the first one containing the desired token determines its value in the UI. Token and translation management for applications While it's possible to edit localized values using the Localization Table editor in Composer, translations are usually done in bulk by subject-matter experts. While workflow will vary among organizations and projects, the following example illustrates the basic process. ACME, Inc. is developing a ThingWorx application called Cambot for controlling security cameras. ACME's developer begins by constructing a mashup: This is the first draft. There is an area for the video widget, to be added later, and some button and label widgets for choosing and controlling a camera. The widgets have been given static labels: As shown here, the text for the pan left button has been entered simply as "Pan Left." But the Cambot app needs to be localized, and delivered in English, French, and Spanish. The next step for the developer is to replace all of the static text with localization tokens. Clicking the globe icon to the left of the label property changes the text from static to tokenized: and adds a magic picker for localization tokens. This is a new application, and will need its own set of localization tokens. To create the one for "Pan Left," click the magic wand to open the tokens picker: and then click "+ Localization Token" to add a new one. A dialog opens prompting for the token name and its default (English) value: Note that the token name has been namespaced for two reasons: to prevent conflicts with tokens from other sources, and to allow the developer and translators to work only with application-specific tokens. On clicking "Add Localization Token," the token is created and the default value saved. The mashup builder now shows: . After all of the tokens needed by the application have been defined, they and their values may be seen on the Localization Tokens editor for the Default Localization Table. By entering the namespace prefix in the filter textbox, the display can be restricted to the tokens for this application: As application development continues, and more tokens are required, this process is repeated. When tokens are defined, the developer should edit the Default Localization Table to supply Usage and Context information for each one: Finally, it's time to do the translations for French and Spanish. First, create the localization tables for those languages, as described above in "Create a Localization Table." From the Import/Export menu, select EXPORT / To File: Then, depending on the file format desired, choose either the Entities or Single Entity tab. 
For Entities, set the Collections value to Localization Tables, enter the namespace in the Token Prefix field, and choose XML as the Export Type: This will produce a single output file, containing a Localization Table element for every language defined on the system – in this example, English, French, and Spanish -- but including only the com.acme.cambot tokens. For Single Entity, choose the language to export, specify the prefix, and choose XML: This must be repeated, once for each language, and creates a separate XML file for each. In either case, the translator should be supplied with the Default XML and the file for the language to be added. (Or, the tokens and values may be converted to and from other formats, depending on the requirements of the translation service. In any case, the translated values must be in the same XML format before they can be imported.) The Default export file will contain a <Rows> element like this: < Rows >     < Row >         < usage > <![CDATA[label]]> </ usage >         < name > <![CDATA[com.acme.cambot.buttonnext]]> </ name >         < context > <![CDATA[Button to switch view to next camera]]> </ context >         < value > <![CDATA[Next Camera]]> </ value >     </ Row >     < Row >         < usage > <![CDATA[label]]> </ usage >         < name > <![CDATA[com.acme.cambot.buttonpanleft]]> </ name >         < context > <![CDATA[Button to pan view to the left]]> </ context >         < value > <![CDATA[Pan Left]]> </ value >     </ Row >     < Row >         < usage > <![CDATA[label]]> </ usage >         < name > <![CDATA[com.acme.cambot.buttonpanright]]> </ name >         < context > <![CDATA[Button to pan view to the right]]> </ context >         < value > <![CDATA[Pan Right]]> </ value >     </ Row >     < Row >         < usage > <![CDATA[label]]> </ usage >         < name > <![CDATA[com.acme.cambot.buttonprev]]> </ name >         < context > <![CDATA[Button to switch view to previous camera]]> </ context >         < value > <![CDATA[Prev. 
Camera]]> </ value >     </ Row >     < Row >         < usage > <![CDATA[label]]> </ usage >         < name > <![CDATA[com.acme.cambot.buttontiltdown]]> </ name >         < context > <![CDATA[Button to tilt view down]]> </ context >         < value > <![CDATA[Tilt Down]]> </ value >     </ Row >     < Row >         < usage > <![CDATA[label]]> </ usage >         < name > <![CDATA[com.acme.cambot.buttontiltup]]> </ name >         < context > <![CDATA[Button to tilt view up]]> </ context >         < value > <![CDATA[Tilt Up]]> </ value >     </ Row >     < Row >         < usage > <![CDATA[label]]> </ usage >         < name > <![CDATA[com.acme.cambot.buttonzoomin]]> </ name >         < context > <![CDATA[Button to view more detail]]> </ context >         < value > <![CDATA[Zoom In]]> </ value >     </ Row >     < Row >         < usage > <![CDATA[label]]> </ usage >         < name > <![CDATA[com.acme.cambot.buttonzoomout]]> </ name >         < context > <![CDATA[Button to expand view]]> </ context >         < value > <![CDATA[Zoom Out]]> </ value >     </ Row >     < Row >         < usage > <![CDATA[label]]> </ usage >         < name > <![CDATA[com.acme.cambot.labelcamera]]> </ name >         < context > <![CDATA[Label for current camera name]]> </ context >         < value > <![CDATA[Camera:]]> </ value >     </ Row >     < Row >         < usage > <![CDATA[label]]> </ usage >         < name > <![CDATA[com.acme.cambot.labelrecording]]> </ name >         < context > <![CDATA[Notice displayed when camera is recording]]> </ context >         < value > <![CDATA[Recording]]> </ value >     </ Row > </ Rows > Whereas the French and Spanish export files will contain an empty <Rows/> element. This is where the new translations should be added. When the translations are ready, check that the <LocalizationTable> attributes (name, description, languageCommon, languageNative) are correct. Then import the new languages and inspect the results using the Localization Table editor. Localization tables for an application may be bundled into an extension .zip file as other entities are handled; on import, the tokens for the application will be merged with existing localization tables for the same language. In the case that a brand new language is being introduced, note that many widgets use tokens from the System localization table. These will need to be translated as well – however, there is no easy way to restrict the set of tokens to those actually used. At present this is a manual filtering step. For existing languages, check to see if the System tokens have already been translated. Important note on character encoding In handling the export, transmission and editing of XML files, it's important to ensure that UTF-8 encoding is maintained throughout. Encoding problems can show up either as errors when the file is re-imported, or as localized strings with question marks or other unexpected characters in place of accented letters. ThingWorx must run with UTF-8 as the default file encoding. Specify the Java option -Dfile.encoding=UTF-8 on launch. Windows In %CATALINA_HOME%\bin\setenv.bat, include this command:     set CATALINA_OPTS=-Dfile.encoding=UTF-8 Tips for translators Each token in an exported Localization Table XML file is defined by four fields: name, value, usage, and context. While name might be suggestive, it is actually arbitrary and should not be relied on. Value contains the natural language value for the token in another language (as agreed upon). 
Translating from this language into the target language is the object.

Usage hints at constraints on the size of the translated text. ThingWorx widgets do not, in general, resize to fit their contents, so a button label, column heading, or field label may be more difficult to translate; and because the default language is likely to be English, a particularly compact language, the application may have been designed with narrow constraints. Such space-constrained tokens are marked with a usage value of Label. Tokens with a usage of Message are for strings in more adaptable spaces, such as a text area or a warning message.

Context allows the application developer to provide translation hints. This may disambiguate synonyms, explain usage, discuss space constraints, specify tone of voice, or anything else applicable.

The interesting section of a language's XML representation is contained in the <Rows> element. For example:

 1  <Rows>
 2      <Row>
 3          <usage/>
 4          <name><![CDATA[com.acme.spiffy.labelPart]]></name>
 5          <context/>
 6          <value><![CDATA[Part]]></value>
 7      </Row>
 8      <Row>
 9          <usage><![CDATA[Label]]></usage>
10          <name><![CDATA[com.acme.spiffy.labelAssembly]]></name>
11          <context><![CDATA[Label identifying the name of the assembly being edited, appears as Assembly: external_name]]></context>
12          <value><![CDATA[Assembly]]></value>
13      </Row>
14      <Row>
15          <usage><![CDATA[Message]]></usage>
16          <name><![CDATA[com.acme.spiffy.warningIncomplete]]></name>
17          <context><![CDATA[Pop-up warning message on Save]]></context>
18          <value><![CDATA[A referenced part is missing, undefined, or not allowed in this assembly.]]></value>
19      </Row>
20  </Rows>

In this example, the token defined in lines 2 through 7 is missing the translation cues usage and context. The translator's only option is to intuit the sense of "Part" (is it a noun or a verb?) and attempt a reasonable guess; access to a running example of the application would clearly be helpful. Lines 8 through 13 identify a label and describe how it is used; lines 14 through 19 do the same for a message. The translator would know that space for the translation of "Assembly" might be limited, but that the warning message can be expressed naturally.
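Before importing a file that comes back from translation (such as the French example shown next), it can save time to confirm that every token in the Default export actually received a value. The snippet below is not part of the original workflow; it is a minimal sketch in plain JavaScript that compares the two <Rows> documents using simple regular expressions rather than a real XML parser, so treat it only as a quick sanity check.

// Minimal sanity check (plain JavaScript, no ThingWorx APIs assumed):
// compare a translated <Rows> file against the Default export and list
// any token names that are missing or still have an empty <value>.
// Regular expressions keep the sketch short; an XML parser is safer.
function readTokens(xml) {
    var tokens = {};
    var rowPattern = /<Row>([\s\S]*?)<\/Row>/g;
    var namePattern = /<name>\s*<!\[CDATA\[([\s\S]*?)\]\]>\s*<\/name>/;
    var valuePattern = /<value>\s*<!\[CDATA\[([\s\S]*?)\]\]>\s*<\/value>/;
    var row;
    while ((row = rowPattern.exec(xml)) !== null) {
        var name = namePattern.exec(row[1]);
        var value = valuePattern.exec(row[1]);
        if (name) {
            tokens[name[1]] = value ? value[1] : "";
        }
    }
    return tokens;
}

function findUntranslated(defaultXml, translatedXml) {
    var expected = readTokens(defaultXml);
    var translated = readTokens(translatedXml);
    var problems = [];
    for (var name in expected) {
        if (!translated[name] || translated[name].length === 0) {
            problems.push(name);
        }
    }
    return problems; // token names still needing a translated <value>
}

For example, findUntranslated(defaultXml, frenchXml) returns an array of token names whose French value is still missing; it can be run in any JavaScript environment (such as a browser console) against the two exported files.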
A translator working on French might then edit this file as follows (again, only the <Rows> element is illustrated):

 1  <Rows>
 2      <Row>
 3          <usage/>
 4          <name><![CDATA[com.acme.spiffy.labelPart]]></name>
 5          <context/>
 6          <value><![CDATA[Partie]]></value>
 7      </Row>
 8      <Row>
 9          <usage><![CDATA[Label]]></usage>
10          <name><![CDATA[com.acme.spiffy.labelAssembly]]></name>
11          <context><![CDATA[Label identifying the name of the assembly being edited, appears as Assembly: external_name]]></context>
12          <value><![CDATA[Assemblée]]></value>
13      </Row>
14      <Row>
15          <usage><![CDATA[Message]]></usage>
16          <name><![CDATA[com.acme.spiffy.warningIncomplete]]></name>
17          <context><![CDATA[Pop-up warning message on Save]]></context>
18          <value><![CDATA[Une partie référencé est manquant, indéfini, ou non autorisés dans cette assemblée.]]></value>
19      </Row>
20  </Rows>

Note that only the <value> elements need to be translated; the context and usage fields are hints for the translator.

System tokens for international data formats

Several tokens used for formatting are also subject to localization (token, default value, and notes below):

datepickerDayNamesMin (default: Su,Mo,Tu,We,Th,Fr,Sa): Day-of-week abbreviations used in the calendar heading.
datepickerFirstDay (default: 0): First day of the week; 0 for Sunday, 1 for Monday, and so on.
datepickerMonthNames (default: January,February,March,April,May,June,July,August,September,October,November,December): Month names used in the calendar heading.
dateTimeFormat_Default (default: yyyy-MM-dd HH:mm:ss): Date and time format; the format codes for this and the following dateTimeFormat_* tokens are defined by the moment.js library.
dateTimeFormat_FullDateTime (default: LLLL)
dateTimeFormat_LongDate (default: LL)
dateTimeFormat_LongDateTime (default: LLL)
dateTimeFormat_MediumDate (default: ll)
dateTimeFormat_ShortDate (default: l)
dateTimeFormat_TimeOnly (default: LT)
shortDateFormat (default: mm/DD/yyyy)

See also KCS Article CS241828 for details about numeric localization.

Allowing users to set their own language preferences

It may not be practical for the Administrator to set the language preferences for each user. An application may instead expose the preferences editor to the end user, so that each user may select from the available languages those that are useful. To support this, ThingWorx Composer offers a Preferences widget in the Mashup Builder. The widget may be inserted into any application wherever the designer chooses: it may be tied to a button or menu item, or simply appear in a layout with other widgets, perhaps alongside application-specific preferences and other settings.

To use the Preferences widget, design a mashup for it to appear in; the minimal case would be a responsive page mashup containing nothing but the Preferences widget. Add the widget by dragging it into place, and a placeholder for it appears in the mashup. The widget may then be customized by setting various properties. These properties are specific to the Preferences widget:

ShowClearRecent: Check this to include the option for the user to clear the Most Recently Used history. You may specify a localized tooltip.
ShowRestoreTabs: Check this to include the option for the user to set tab restoration to ask, always, or never. You may specify a localized tooltip.
ShowLanguages: Check this to include the option for the user to edit language preferences. You may specify a localized tooltip.
ShowUserName: Check this to label the preferences widget with the user's name.
ShowUserAvatar: Check this to label the preferences widget with the user's avatar, if one is defined.
Style: Style the preferences widget itself.
ButtonStyle: Style the Clear Recent and Edit buttons; these should probably be set to the application's primary button style.

After adding the Preferences widget to a mashup, provide some way for the user to navigate to it, consistent with the application's UI design. The mashup may be tied to a menu entry, assigned to a Navigation widget, or included in a page within the application's workflow, whatever suits the application design. Here is an example of providing access to preferences through a button in the application's title area:

1) The Navigation widget is placed in the page header.
2) The MashupName property is set to the mashup containing a Preferences widget.
3) The TargetWindow property is set to Modal Popup.
4) For a more interesting UI, the button label is bound from the user's name.

At runtime, the user's name appears as a button in the title area, and clicking it opens the preferences mashup in a modal pop-up. Note that, in this example, there is also a menu item leading to the mashup with the Preferences widget.
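Once users can choose their own languages, it is useful to confirm which token values a given user actually resolves at runtime. The sketch below is not from the original article: it assumes a platform resource named RuntimeLocalizationFunctions with a GetEffectiveTokens service returning an InfoTable of resolved tokens, and assumes that table has name and value columns. These names are assumptions to verify in Composer for your ThingWorx version before relying on them.

// Hedged sketch of a ThingWorx service body for checking resolved tokens.
// Assumption: Resources["RuntimeLocalizationFunctions"].GetEffectiveTokens()
// exists and returns the tokens effective for the calling user's language
// preferences, with "name" and "value" columns. Verify in Composer.
var tokens = Resources["RuntimeLocalizationFunctions"].GetEffectiveTokens();

// Log only the application's own tokens (the com.acme.cambot namespace
// used throughout this example).
for each (var row in tokens.rows) {
    if (row.name.indexOf("com.acme.cambot.") === 0) {
        logger.info(row.name + " = " + row.value);
    }
}

Running a test service like this while switching a user's language preference should make it easy to spot tokens that are missing from a newly imported localization table.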