
IoT & Connectivity Tips

Below I will discuss a simple implementation of constructing a POST request in Java. The entire source is embedded at the bottom of this post for easy copy and paste.

To start, define the URL you are trying to POST to:

    String url = "http://127.0.0.1:80/Thingworx/Things/Thing_Name/Services/Service_to_Post_to";

Breaking down this URL string:

- http:// -- a non-SSL connection is being used in this example
- 127.0.0.1:80 -- the address and port that ThingWorx is hosted on
- /Thingworx -- this bit is necessary because we are talking to ThingWorx
- /Things -- Things is used as an example here because the service I am posting to is on a Thing; some alternatives to substitute in are ThingTemplates, ThingShapes, Resources, and Subsystems
- /Thing_Name -- substitute in the name of your Thing where the service is located
- /Services -- we are calling a service on the Thing, so this is how you drill down to it
- /Service_to_Post_to -- substitute in the name of the service you are trying to invoke

Create a URL object:

    URL obj = new URL(url);

Class URL, included in the java.net.URL import, represents a Uniform Resource Locator, a pointer to a "resource" on the Internet. Adding the port is optional; if it is omitted, port 80 will be used by default.

Define an HttpURLConnection object to later open a single connection to the URL specified:

    HttpURLConnection con = (HttpURLConnection) obj.openConnection();

Class HttpURLConnection, included in the java.net.HttpURLConnection import, provides a single instance to connect to the URL specified. The method openConnection is called to create a new instance of a connection, but no connection is actually made at this point.

Set the type of request and the header values to pass:

    con.setRequestMethod("POST");
    con.setRequestProperty("Accept", "application/json");
    con.setRequestProperty("Content-Type", "application/json");
    con.setRequestProperty("appKey", "80aab639-ad99-43c8-a482-2e1e5dc86a2d");

You can see that we are performing a POST request, passing in an Accept header, a Content-Type header, and a ThingWorx-specific appKey header.

Pass true into the setDoOutput method because we are performing a POST request; when sending a PUT request we would pass in true as well. When there is no request body being sent, false can be passed in to denote there is no "output", as when making a GET request:

    con.setDoOutput(true);

Create a DataOutputStream object that wraps the con object's output stream. We call the flush method on the DataOutputStream object to push the REST request from the stream to the URL defined for POSTing, and immediately close the DataOutputStream object because we are done making the request:

    DataOutputStream wr = new DataOutputStream(con.getOutputStream());
    wr.flush();
    wr.close();

The DataOutputStream class lets a Java application write primitive Java data types to the con object's output stream.

The next line returns the HTTP status code from the request. This will be something like 200 for success or 401 for unauthorized:

    int responseCode = con.getResponseCode();

The final block of this code uses a BufferedReader that wraps an InputStreamReader that wraps the con object's input stream (the byte response from the server). This BufferedReader object is then used to iterate through each line in the response and append it to a StringBuilder object. Once that has completed, we close the BufferedReader object and print the response we just retrieved:

    BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()));
    String inputLine;
    StringBuilder response = new StringBuilder();
    while ((inputLine = in.readLine()) != null) {
        response.append(inputLine);
    }
    in.close();
    System.out.println(response.toString());

The InputStreamReader decodes bytes to character streams using a specified charset. The BufferedReader provides a more efficient way to read characters from an InputStreamReader object. The StringBuilder object is an unsynchronized way of building a String representation of the content residing in the BufferedReader object; StringBuffer can be used instead in a case where multi-threaded synchronization is necessary.

Below is the block of code in its entirety from the discussion above:

    public void sendPost() throws Exception {
        String url = "http://127.0.0.1:80/Thingworx/Things/Thing_Name/Services/Service_to_Post_to";
        URL obj = new URL(url);
        HttpURLConnection con = (HttpURLConnection) obj.openConnection();
        // add request headers
        con.setRequestMethod("POST");
        con.setRequestProperty("Accept", "application/json");
        con.setRequestProperty("Content-Type", "application/json");
        con.setRequestProperty("appKey", "80aab639-ad99-43c8-a482-2e1e5dc86a2d");
        // send POST request
        con.setDoOutput(true);
        DataOutputStream wr = new DataOutputStream(con.getOutputStream());
        wr.flush();
        wr.close();
        int responseCode = con.getResponseCode();
        System.out.println("\nSending 'POST' request to URL : " + url);
        System.out.println("Response Code : " + responseCode);
        BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()));
        String inputLine;
        StringBuilder response = new StringBuilder();
        while ((inputLine = in.readLine()) != null) {
            response.append(inputLine);
        }
        in.close();
        // print result
        System.out.println(response.toString());
    }
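One note on the empty body: the request above sends no payload, which is fine for services without required inputs. If the target service takes parameters, write a JSON body to the output stream before flushing it. A minimal sketch, where "inputParam" is a hypothetical service input name, not something from the code above:

    DataOutputStream wr = new DataOutputStream(con.getOutputStream());
    // hypothetical parameter name; match the input names of your service
    String body = "{\"inputParam\": \"someValue\"}";
    wr.write(body.getBytes("UTF-8"));
    wr.flush();
    wr.close();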
Hello everyone,

Following a recent experience, I felt it was important to share my insights with you. The core of this article is to demonstrate how you can format a Flux request in ThingWorx and post it to InfluxDB, with the aim of offloading calculation-heavy work to InfluxDB. The context is renewable energy. This article is not about Kepware, nor about connecting to InfluxDB. As a prerequisite, you may like to read this article: Using Influx to store Value Stream properties from... - PTC Community

Introduction

The following InfluxDB usage has been developed for an electricity energy provider.

Technical context:
- Kepware is used as the source of data. A simulation for wind assets based on an Excel file is configured, delivering data in real time.
- A SQL database also gathers the same data as the simulation in Kepware. It is used to load historical data into InfluxDB, addressing cases of temporary data loss: once back online, SQL helps record the lost data in InfluxDB and compute the KPIs.
- InfluxDB is used to store data over time as well as calculated KPIs.
- An invoicing third-party system is simulated to get the electricity price according to the time of day.

Orchestration of InfluxDB operations with ThingWorx (ThingWorx v9.4.4):
- Set the numeric property to log
- Maintain control over execution logic
- Format the Flux request with dynamic inputs to send to InfluxDB

InfluxDB Cloud v2:
- Store the logged properties
- Enable quick data reads
- Execute the calculations

Note: the free InfluxDB version is slower in write and read, and offers a maximum of 30 days of data retention.

ThingWorx model and services

ThingWorx context: because the relevant numeric properties are logged over time, new KPIs are calculated based on the logged data. In the following example, each wind asset triggers, each minute, a calculation to get the monetary gain based on the current power produced and the current electricity price. The request is formatted in ThingWorx, then pushed to and executed in InfluxDB. Thus, ThingWorx server memory is not used for this calculation.

Services breakdown:

CalculateMonetaryKPIs
- Entry point service to calculate monetary KPIs. It triggers the FormatFlux service, then injects the result into the Post service.
- Inputs: none
- Output: NOTHING

FormatFlux_CalculateMonetaryKPI
- Formats the request, in Flux syntax, for the monetary KPI calculation, respecting the Flux syntax used by InfluxDB.
- Inputs: bucketName (STRING), thingName (STRING)
- Output: TEXT

PostTextToInflux
- Generic service to post the request to InfluxDB, whatever the request is.
- Inputs: FluxQuery (TEXT), influxToken (STRING), influxUrl (STRING), influxOrgName (STRING), influxBucket (STRING), thingName (STRING)
- Output: INFOTABLE

Highlights - CalculateMonetaryKPIs

Find the full script in the attachment "CalculateMonetaryKPIs script.docx". Url, token, organization and bucket are configured in the Persistence Provider used by the ValueStream. We dynamically get them from the ValueStream attached to this Thing. From here, we can reuse them to set the inputs of the two other services using "MyConfig".

Highlights - FormatFlux_CalculateMonetaryKPI

Find the full script in the attachment "FormatFlux_CalculateMonetaryKPI script.docx". The major part of this script is text, in Flux syntax, into which we inject dynamic values. The service gets the last values of ElectricityPrice, Power and Capacity to calculate ImmediateMonetaryGain, PotentialMaxMonetaryGain and PotentialMonetaryLoss.
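For illustration only, here is a minimal sketch of what such a formatting service can look like; the Flux below follows the example data in this article, and the exact query in the attached script differs:

    // bucketName and thingName are the STRING inputs of the service
    var fluxQuery = 'from(bucket: "' + bucketName + '")' +
        ' |> range(start: -1h)' +
        ' |> filter(fn: (r) => r._measurement == "' + thingName + '")' +
        ' |> filter(fn: (r) => r._field == "Power")' +
        ' |> last()';
    // TEXT output, later handed to PostTextToInflux
    var result = fluxQuery;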
Flux logic might not be easy for beginners, so let's break down the intermediate variables created on the fly in the Flux request. Let's take the example of the existing data in the bucket (with only two minutes of values):

    _time                 | _measurement | _field           | _value
    2024-07-03T14:00:00Z  | WindAsset1   | ElectricityPrice | 0.12
    2024-07-03T14:00:00Z  | WindAsset1   | Power            | 100
    2024-07-03T14:00:00Z  | WindAsset1   | Capacity         | 150
    2024-07-03T15:00:00Z  | WindAsset1   | ElectricityPrice | 0.15
    2024-07-03T15:00:00Z  | WindAsset1   | Power            | 120
    2024-07-03T15:00:00Z  | WindAsset1   | Capacity         | 160

The request is articulated in the following steps:

1. Get the source values:
   - Get the last price and store it in priceData: _time 2024-07-03T15:00:00Z, ElectricityPrice 0.15
   - Get the last power and store it in powerData: _time 2024-07-03T15:00:00Z, Power 120
   - Get the last capacity and store it in capacityData: _time 2024-07-03T15:00:00Z, Capacity 160
2. Join the three *Data tables on the same time. The last values of price, power and capacity may not have been set at the same time, so the final joinedData may be empty:
   _time 2024-07-03T15:00:00Z, ElectricityPrice 0.15, Power 120, Capacity 160
3. Perform the calculations:
   - gainData stores the result of ElectricityPrice * Power: _time 2024-07-03T15:00:00Z, _measurement WindAsset1, _field ImmediateMonetaryGain, _value 18
   - maxGainData stores the result of ElectricityPrice * Capacity
   - lossData stores the result of ElectricityPrice * (Capacity - Power)
4. Add the results to the original bucket.

Highlights - PostTextToInflux

Find the full script in the attachment "PostTextToInflux script.docx". It is a pretty straightforward script; the idea is to have a generic service to post a request. The header is a bit unusual, with the vnd.flux content type, and the URL needs to be formatted according to the InfluxDB API.

Well done!

Thanks to these steps, calculated values are stored in InfluxDB. Other services can be created to retrieve relevant InfluxDB data and visualize it in a mashup.

Last comment

It was the first time I was in touch with Flux script, so I wasn't comfortable, and I am still far from proficient. After spending more than a week browsing through InfluxDB documentation and running multiple tests, I achieved limited success but nothing substantial as a final outcome. As a last resort, I turned to ChatGPT. Through a few interactions, I quickly obtained convincing results. Within a day, I had a satisfactory outcome, which I fine-tuned for relevant use.

Here are two examples of two consecutive ChatGPT prompts and answers; the result might need to be fine-tuned after the first answer. Right after, I asked to convert it to a ThingWorx script format. In this last picture, the script won't work: the fluxQuery is not well formatted for ThingWorx. Please refer to the provided script "FormatFlux_CalculateMonetaryKPI script.docx" to see how to format the Flux query and insert variables inside. Despite mistakes, ChatGPT still mainly provides relevant code structure for beginners in Flux and is an undeniable boost for writing code.
The system user can become a vital point for properly yet conveniently securing your application. From the ThingWorx Help Center:

"The system user is a system object in ThingWorx. With the System User, if a service is called from within a service or subscription (a wrapped service call), and the System User is permitted to execute the service, the service will be permitted to execute regardless of the User who initially triggered the sequence of events, scripts, or services."

http://support.ptc.com/cs/help/thingworx_hc/thingworx_7.0_hc/index.jspx?id=SystemUser&action=show

A few important notes to remember:
- It is not possible to log in as the system user.
- Adding the system user to the Administrators group will not grant it administrator permissions.
- Adding the system user to the Everyone organization will not grant it the same visibility.

As an option, one of the posts on our community provides a script to assign all of the permissions to the system user in a one-time setup: https://community.thingworx.com/community/developers/blog/2016/10/28/assigning-the-system-user-through-script

Example:
1. Create a new template T1 and several Things: Thing1, Thing2, Thing3.
2. Create a new Thing NewThing and a new user BlankUser.
3. Create a service within NewThing that uses ThingTemplates["T1"].GetImplementingThings() and give all the permissions on NewThing to the new non-admin user, BlankUser (see the sketch below).

Now the service on the template T1 can be accessed through NewThing without explicit permissions for BlankUser, but rather through the system user.

When manipulating data (involving read/write and access to a persistence provider), BlankUser would require more than just visibility permissions. For example, for a Stream, the following permissions would need to be set up:
1. Visibility on the Stream template, StreamProcessingSubsystem and PersistenceProvider.
2. Read/write permission on the Stream Thing in the use case, created with the Stream template.

Similarly, for other sources of data, the Things, templates and resources involved need visibility and, depending on the scenario, read/write permissions on the specific template.
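As a minimal sketch of step 3, the wrapped service on NewThing (the service name and result handling are illustrative) could be as simple as:

    // Service on NewThing with an INFOTABLE result.
    // Because this is a wrapped call made from inside a service,
    // it executes with the system user's permissions rather than
    // those of the caller (BlankUser).
    var result = ThingTemplates["T1"].GetImplementingThings();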
In this post, I show how you can downsample time-series data on the server side using the LTTB algorithm. The export comes with a service to set up sample data and a mashup which shows the data with weak to strong downsampling.

Motivation: Users displaying time-series data on mashups and dashboards (usually by a service using a QueryPropertyHistory flavor in the background) might request large amounts of data, by selecting large date ranges to be visualized or because data is recorded in high resolution. The newer chart widgets in ThingWorx can work much better with a higher number of data points to display. Some also provide their own downsampling, so only the "necessary" points are drawn (e.g. no need to paint beyond the screen's resolution); see the discussion here. However, as this is done in the widgets, the data reduction happens on the client side, so data is sent over the network only to be discarded. It would be beneficial to reduce the number of points delivered to the client beforehand. This would also improve the behavior of older widgets which don't have support for downsampling.

Many methods for downsampling are available. One option is partitioning the data and averaging out each partition, as described here. A disadvantage is that this creates and displays points which are not in the original data. The approach here uses Largest-Triangle-Three-Buckets (LTTB) for two reasons: the resulting data points exist in the original data set, and the algorithm preserves the shape of the original curve very well, i.e. outliers are displayed and not averaged out. It also seems computationally not too hard on the server.

Setting it up:
1. Import the entities from LTTB_Entities.xml.
2. Navigate to Thing LTTB.TestThing in project LTTB and run the service downsampleSetup to set up some sample data.
3. Open mashup LTTB.Sampling_MU. Initially, 8000 rows are sent back; the chart widget decides how many of them are displayed. You can see the row count in the debug info. Using the button bar, you determine to how many points the result will be downsampled and sent to the client. Notice how the curve gets rougher, but the shape is preserved.

How it works: The potentially large result of QueryPropertyHistory is downsampled by running it through LTTB. The resulting InfoTable is sent to the widget (see service LTTB.TestThing.getData). The LTTB implementation itself is in service downsampleTimeseries. Debug mode allows you to see how much data is sent over the network, and how the number decreases proportionally with the downsampling.

The export and the widget are done with ThingWorx 9, but it's only the widget that really needs ThingWorx 9. I guess the code would need some more error-checking for robustness, but it's a good starting point.
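For reference, here is a minimal standalone sketch of the LTTB idea, not the downsampleTimeseries service from the export; it assumes an array of {x, y} points sorted by x and a target point count of at least 3:

    // Largest-Triangle-Three-Buckets: keep 'threshold' points out of 'data'
    function lttb(data, threshold) {
        if (threshold >= data.length || threshold < 3) {
            return data; // nothing to downsample
        }
        var sampled = [data[0]];                         // always keep first point
        var bucketSize = (data.length - 2) / (threshold - 2);
        var a = 0;                                       // index of last selected point

        for (var i = 0; i < threshold - 2; i++) {
            // average of the NEXT bucket, used as third triangle corner
            var avgStart = Math.floor((i + 1) * bucketSize) + 1;
            var avgEnd = Math.min(Math.floor((i + 2) * bucketSize) + 1, data.length);
            var avgX = 0, avgY = 0;
            for (var j = avgStart; j < avgEnd; j++) {
                avgX += data[j].x;
                avgY += data[j].y;
            }
            avgX /= (avgEnd - avgStart);
            avgY /= (avgEnd - avgStart);

            // pick the point in the CURRENT bucket with the largest triangle area
            var rangeStart = Math.floor(i * bucketSize) + 1;
            var rangeEnd = Math.floor((i + 1) * bucketSize) + 1;
            var maxArea = -1, maxIndex = rangeStart;
            for (var k = rangeStart; k < rangeEnd; k++) {
                var area = Math.abs(
                    (data[a].x - avgX) * (data[k].y - data[a].y) -
                    (data[a].x - data[k].x) * (avgY - data[a].y)) / 2;
                if (area > maxArea) {
                    maxArea = area;
                    maxIndex = k;
                }
            }
            sampled.push(data[maxIndex]);
            a = maxIndex;
        }
        sampled.push(data[data.length - 1]);             // always keep last point
        return sampled;
    }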
Installing an Open Source Time Series Platform

For testing InfluxDB and its graphical user interface, Chronograf, I'm using Docker images for easy deployment. For this post I assume you have worked with Docker before.

In this setup, InfluxDB and Chronograf will share an internal Docker network to exchange data. InfluxDB can be accessed, e.g. by ThingWorx, via its exposed port 8086. Chronograf can be accessed for administrative purposes via its port 8888. The following commands can be used to create an InfluxDB environment.

Pull the images:

    sudo docker pull influxdb:latest
    sudo docker pull chronograf:latest

Create a virtual network:

    sudo docker network create influxdb

Start the containers:

    sudo docker run -d --name=influxdb -p 8086:8086 --net=influxdb --restart=always influxdb
    sudo docker run -d --name=chronograf -p 8888:8888 --net=influxdb --restart=always chronograf --influxdb-url=http://influxdb:8086

InfluxDB should now be reachable and will also restart automatically when Docker (or the operating system) is restarted.
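To quickly verify that the InfluxDB container is reachable, its ping endpoint can be queried (assuming Docker runs on the local machine; adjust the host otherwise):

    curl -i http://localhost:8086/ping

A "204 No Content" response indicates the database is up.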
Key Functional Highlights

ThingWorx 8.0 covers the following areas of the product portfolio: ThingWorx Analytics, ThingWorx Utilities and ThingWorx Foundation, which includes Core, Connection Server and Edge capabilities. Highlights of the release include:

ThingWorx Foundation
- Native Industrial Connectivity: Enhancements to ThingWorx allow users to seamlessly map data from ThingWorx Industrial Connectivity to the ThingModel. With over 150 protocols supporting thousands of devices, ThingWorx Industrial Connectivity allows users to connect, monitor, and manage diverse automation devices from ThingWorx. With this new capability, users can quickly integrate industrial operations data in IoT solutions for smart, connected operations.
- Native AWS IoT and Azure IoT Cloud Support: ThingWorx 8 now has deeper, native integration with AWS IoT and Azure IoT Hub clouds so you can gain cost efficiencies and standardize on the device cloud provider of your choice. This support strengthens the connection between leading cloud providers and ThingWorx.
- Next Generation Composer: Re-imagined Composer using modern browser concepts to improve developer efficiency, including enhanced functionality, an updated user interface and optimized workflows.
- Product Installers: New, Docker-based product installers for Foundation and Analytics make it easy and fast for customers to get the core platform and analytics server running.
- Single Sign On (SSO): Provides the ability to log in once and access all PTC apps and enterprise systems.
- License Management: Simple, automated licensing system for collection, storage, reporting, management and auditing of licensing entitlements.
- Integration Connectors: Integration Connectors allow ThingWorx developers and administrators quick and easy access to the data stored on external ERP, PLM, manufacturing and other systems to quickly develop applications providing improved contextualization and analysis. ThingWorx 8.0 delivers 'OData' and 'SAP OData' connectors plus the ability to connect to generic web services, supplementing the 'Swagger' and 'Windchill Swagger' connectors released in ThingWorx 7.4. An improved mapping tool allows business administrators to quickly and easily transform retrieved data into a standard ThingWorx format for easy consumption. Includes single-sign-on support for improved user experience.

ThingWorx Analytics
- Native Anomaly Detection: ThingWorx 8 features more tightly integrated analytics capabilities, including the ability to configure anomaly alerts on properties directly from the ThingWorx Composer. ThingWatcher technology is utilized to increase machine monitoring capabilities by automatically learning normal behavior, continuously monitoring data streams and raising alerts when abnormal conditions are identified.

ThingWorx Utilities
- Software Content Management (SCM) – Auto Retry: Provides the ability to automatically retry delivery of patches to devices if interrupted. This ensures the ability to successfully update devices.

ThingWorx Trial Edition

ThingWorx Trial Edition will be available to internal PTC resources at launch and will be made available externally on the Developer Portal shortly after launch.

Developer Enablement: Enhancements have been made to the Trial Edition installation tool, providing a native installation process of the ThingWorx platform, including:
- ThingWorx Foundation
- ThingWorx Utilities
- ThingWorx Analytics
- ThingWorx Industrial Connectivity

Documentation
- ThingWorx 8.0 Reference Documents
- ThingWorx Analytics 8.0 Reference Documents
- ThingWorx Core 8.0 Release Notes
- ThingWorx Core Help Center
- ThingWorx Edge SDKs and WebSocket-based Edge MicroServer Help Center
- ThingWorx Connection Services Help Center
- ThingWorx Industrial Connectivity Help Center
- ThingWorx Utilities Help Center
- ThingWorx Utilities Installation Guide
- ThingWorx Analytics Help Center
- ThingWorx Trial Edition User Guide

Additional information
- ThingWorx eSupport Portal
- ThingWorx Developer Portal
- ThingWorx Marketplace

Download

The following items are available for download from the PTC Software Download site:
- ThingWorx Platform – Select Release 8.0
- ThingWorx Utilities – Select Release 8.0
- ThingWorx Analytics – Select Release 8.0

You can also read this post in the Developer Community from Jeremy Little about the technical changes in ThingWorx 8.0.
Learn how to use the DBConnection building block to create your own DB tables in ThingWorx.
Internationalization and Localization

Internationalization (often abbreviated I18N – from "I" + 18 more letters + "n") is the process of developing software that supports many languages, including those with non-Latin character sets. Localization (L10N) refers to developing applications that can be delivered in many languages, relying on the underlying architecture of I18N. This how-to article focuses mostly on localization, since the infrastructure is in place and stable.

Create a Localization Table

You create a Localization Table entity when you need to add support for another language to the application you're developing. Someone from Sales has said "There's an opportunity if we can deliver the Spiffy application in Estonian." This suggests that an Estonian-speaking end user should be able to run Spiffy and see all of its labels, messages, prompts, dialogs, and so on in Estonian. Most of the cost of adding Estonian language support is in a (usually contracted) service that does the English-to-Estonian (or whatever target language) translations. Such services employ native speakers who can get the nuances of translation correct. See "Tips for translators" below for suggestions on improving the accuracy of the translation.

In Composer, view the Localization Tables list. Begin by duplicating an existing table (e.g. check Default or another language and click Duplicate) or by clicking New. A new tab will open with a New Localization Table in edit mode. The fields shown are:

- Locale (required). This is the official language tag of the new language. Language tags are defined by an Internet standard, IETF BCP 47. Briefly, they consist of a standard abbreviation for a language (e.g. en for English, de for German), followed optionally by a script subtag (e.g. Cyrl for Cyrillic), followed optionally by a region code (a country code, such as CH for Switzerland or HK for Hong Kong, or a U.N. region number), followed optionally by other qualifiers such as dialect. A simple example is es, Spanish. A complex one is sl-Latn-IT-nedis, Slovenian rendered in Latin characters as spoken in Italy in the Natisone dialect. Software rarely needs such highly specific language tags; the most specific practical examples are the various scripts and regions for Chinese (e.g. zh-Hans-CN, zh-Hant-TW). Note that subtags of a language tag are separated by a hyphen, as in zh-Hans-SG; using an underscore is a Java convention that does not conform to BCP 47.
- Language Name (Native) (required). This is the name of the language as written in that language, such that it would be readable by a native speaker. For example, 日本語 for Japanese, ਪੰਜਾਬੀ ਦੇ for Punjabi, or Deutsche for German.
- Language Name (Common). This is the name of the language as written in a common administrative language. For an application delivered internationally, English is probably a safe choice. Administrators at a customer site might change these to be in the language of the headquarters country.
- Description. Free-form text describing the language. This will appear to end users as a tooltip as they hover over language choices.
- Tags. Standard ThingWorx entity tags.
- Home Mashup. Does not apply.
- Avatar. An icon for this language. Only a default icon is delivered as standard, but language selection interfaces in many products use national flags to help distinguish choices, and those could be supplied here. Avatars are 48x48px images. There may be political implications in choosing a flag or other symbol for a language; use caution.

Once the table has been created and saved, you can edit the translated text in Composer. Under Entity Information, select Localization Tokens. A grid will appear with the following columns:

- Token Name. This is the symbol used by mashup developers to insert a localized string into a certain place in a widget. For example, no matter how the phrase "Add New Page" is rendered (Neue Seite hinzufügen, Adicionar nova página, 새 페이지 추가...), the application developer is only concerned that the token addNewPage appears on the proper widget. See "How tokens are resolved" below for more information.
- This Language. How the text is to be represented in this language, that is, the language of the Localization Table currently being viewed or edited.
- Language. How the text has already been represented in any other language currently defined on the system. This is simply for reference purposes, to compare one translation with another.
- Usage. Can be set to Label, Message, or left unspecified. This is a guide to translators, who have to be concerned about the size of translated text. Usage Label suggests that the text needs to fit in a confined space, such as in a column header or on the face of a button. Usage Message suggests that the text is meant for a popup, error message, help, or somewhere that full sentences can be accommodated.
- Context. This is a free-form text field to provide instructions, advice, context, or other explanatory material to the translator. For the token book, for example, the context field can distinguish between the senses of book (something to read), book a table, book a sale, or book a prisoner, which may all have different translations.

Translations can be entered in Composer. However, it's also likely that a third-party translator will do the work without using this editor. See "Tips for translators" below.

Define language preferences for a user

The reason for localization is to present user interfaces in the best language for a given user. To support this, each ThingWorx user is associated with one or more languages – those that that user can read comfortably. Some applications might offer just one language or a few, some many, and the supported languages may or may not overlap. So each user defines an ordered preference list, saying in effect: my best language is Catalan, but I'm decent in Spanish; if those aren't available, I did spend a few years in Hungary, and as a last resort there was some French in school. This would be represented in ThingWorx as: ca,es,hu,fr. A user from Scotland might have the language preference en-UK,en, meaning that English with United Kingdom spellings and vocabulary is best (tyre, windscreen), but if not available then any English will do (tire, windshield). (It is not necessary to spell out related preferences of this type – see "How tokens are resolved".) Any application then interacts with a given user in the best language that the application and user have in common.

To define the language preference(s) for a user, open the Users list in Composer. Then choose an existing user to edit, or click New to create a new account. The only localization-related information here is the Languages field. An administrator who knows the names of available languages may edit or paste an ordered, comma-separated list into the Languages field (e.g. ca,es,hu,fr-CA). Clicking the Edit... button brings up a drag-and-drop preferences editor: the column on the left shows available (unselected) languages; the column on the right shows this user's languages, with the top entry being the most preferred language. Dragging a language from left to right adds it to the user's list; from right to left removes it; dragging rows up and down on the right changes the preference order. As language entries are dragged, a highlight appears to show where they might be dropped.

A user with no language preference set will have all tokens resolved from the Default and System tables. Language preferences can be set programmatically, as detailed in KCS Article CS243270.

Localize Mashups

The job of the application developer is to keep hard-coded natural-language strings out of applications. To support this, widgets define an attribute isLocalizable: true for widget properties that can contain text. This shows up in the Mashup editor as a globe icon next to each localizable property; in this example, both the Text and ToolTipField properties are localizable. Clicking the globe icon changes the property from static to localized, and the appearance in the Mashup editor changes accordingly. Clicking the magic wand icon opens the localization token picker. The list of tokens in the picker corresponds to the Token Name column in the Localization Table editor. This is the key that is common to the meaning of a word or phrase, independent of its translation into natural languages. Select one from the list, or click to create a new one, entering the token name and its Default (usually English) value.

Note that, complying with best practices for extension developers, the token name has been namespaced: this token belongs to Acme Inc.'s Spiffy application. The rest of the name is descriptive and may reflect other development standards. When a new token is created, it becomes available to edit in every configured Localization Table. If these are not updated, then the default (English) value will be shown wherever the token occurs.

How tokens are resolved

What happens at run time when the UI needs to display the value of a localization token? The answer is determined by:
- the current user's language preferences,
- the set of Localization Tables configured on the system, and
- the presence or absence of a translation for a given token in a given table.

To visualize this, picture the user's language preferences as a stack, with the most preferred language on top and the least preferred sitting on the floor – where the floor consists of the Default and System Localization Tables. Say the user's language preference is fr,pt,ru,hi (French, Portuguese, Russian, Hindi, with French most preferred), and the system is configured with Localization Tables, which have no order, for it (Italian), fr-CA (Canadian French), ru (Russian), pt-BR (Brazilian Portuguese), es (Spanish), and the default (likely English).

Now the UI needs to present this user with the best value for the token com.acme.spiffy.labelAssembly. To resolve this, we start at the top of the stack. Is there a fr Localization Table? There is. Does it contain a translation for com.acme.spiffy.labelAssembly? For the sake of illustration, assume that it does not – perhaps other applications have French support, but the Spiffy application doesn't, so there aren't any com.acme.spiffy.* tokens in the French Localization Table. So we still need a value. Continuing down through the user's preferences, the next acceptable language is pt. Is there a pt Localization Table? No. There is a Brazilian Portuguese translation, but that won't help a user from Portugal. Still looking, we move to the next language, ru. Is there a ru Localization Table? There is. Does it contain a translation for com.acme.spiffy.labelAssembly? It does: Ассамблея – so the token has a value, and that is what gets displayed in the UI.

Suppose that the user's preferences were more specific, something like fr-CA,pt-BR,ru-Cyrl-RU,sl-Latn-IT-nedis (Canadian French, Brazilian Portuguese, Russian in Cyrillic characters as used in Russia, Slovenian in Latin characters as used in Italy where the Natisone River dialect prevails). ThingWorx treats this by internally expanding the stack to include acceptable fall-back languages. Of the four languages that the user can accept and that the system defines (fr-CA, fr, pt-BR, ru), the first one containing the desired token determines its value in the UI.
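To make the resolution order concrete, here is an illustrative sketch; this is not the platform's actual code, just the lookup logic described above expressed as a function:

    // tables maps language tags to token dictionaries, e.g.
    // { ru: { "com.acme.spiffy.labelAssembly": "Ассамблея" }, ... }
    // preferences is the user's expanded list, e.g. ["fr", "pt", "ru", "hi"]
    function resolveToken(tokenName, preferences, tables) {
        for (var i = 0; i < preferences.length; i++) {
            var table = tables[preferences[i]];
            if (table && table[tokenName] !== undefined) {
                return table[tokenName];   // first match wins
            }
        }
        // the "floor": fall back to the Default table
        return tables["Default"] ? tables["Default"][tokenName] : undefined;
    }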
Token and translation management for applications

While it's possible to edit localized values using the Localization Table editor in Composer, translations are usually done in bulk by subject-matter experts. While workflow will vary among organizations and projects, the following example illustrates the basic process.

ACME, Inc. is developing a ThingWorx application called Cambot for controlling security cameras. ACME's developer begins by constructing a mashup. In the first draft, there is an area for the video widget, to be added later, and some button and label widgets for choosing and controlling a camera. The widgets have been given static labels; for example, the text for the pan-left button has been entered simply as "Pan Left." But the Cambot app needs to be localized and delivered in English, French, and Spanish.

The next step for the developer is to replace all of the static text with localization tokens. Clicking the globe icon to the left of the label property changes the text from static to tokenized and adds a magic picker for localization tokens. This is a new application and will need its own set of localization tokens. To create the one for "Pan Left," click the magic wand to open the token picker, then click "+ Localization Token" to add a new one. A dialog opens prompting for the token name and its default (English) value.

Note that the token name has been namespaced for two reasons: to prevent conflicts with tokens from other sources, and to allow the developer and translators to work only with application-specific tokens. On clicking "Add Localization Token," the token is created and the default value saved. After all of the tokens needed by the application have been defined, they and their values may be seen in the Localization Tokens editor for the Default Localization Table; by entering the namespace prefix in the filter textbox, the display can be restricted to the tokens for this application. As application development continues and more tokens are required, this process is repeated. When tokens are defined, the developer should edit the Default Localization Table to supply Usage and Context information for each one.

Finally, it's time to do the translations for French and Spanish. First, create the Localization Tables for those languages, as described above in "Create a Localization Table." From the Import/Export menu, select EXPORT / To File. Then, depending on the file format desired, choose either the Entities or Single Entity tab.
For Entities, set the Collections value to Localization Tables, enter the namespace in the Token Prefix field, and choose XML as the Export Type. This will produce a single output file, containing a Localization Table element for every language defined on the system – in this example English, French, and Spanish – but including only the com.acme.cambot tokens.

For Single Entity, choose the language to export, specify the prefix, and choose XML. This must be repeated, once for each language, and creates a separate XML file for each.

In either case, the translator should be supplied with the Default XML and the file for the language to be added. (Alternatively, the tokens and values may be converted to and from other formats, depending on the requirements of the translation service. In any case, the translated values must be in the same XML format before they can be imported.)

The Default export file will contain a <Rows> element like this:

    <Rows>
        <Row>
            <usage><![CDATA[label]]></usage>
            <name><![CDATA[com.acme.cambot.buttonnext]]></name>
            <context><![CDATA[Button to switch view to next camera]]></context>
            <value><![CDATA[Next Camera]]></value>
        </Row>
        <Row>
            <usage><![CDATA[label]]></usage>
            <name><![CDATA[com.acme.cambot.buttonpanleft]]></name>
            <context><![CDATA[Button to pan view to the left]]></context>
            <value><![CDATA[Pan Left]]></value>
        </Row>
        <Row>
            <usage><![CDATA[label]]></usage>
            <name><![CDATA[com.acme.cambot.buttonpanright]]></name>
            <context><![CDATA[Button to pan view to the right]]></context>
            <value><![CDATA[Pan Right]]></value>
        </Row>
        <Row>
            <usage><![CDATA[label]]></usage>
            <name><![CDATA[com.acme.cambot.buttonprev]]></name>
            <context><![CDATA[Button to switch view to previous camera]]></context>
            <value><![CDATA[Prev. Camera]]></value>
        </Row>
        <Row>
            <usage><![CDATA[label]]></usage>
            <name><![CDATA[com.acme.cambot.buttontiltdown]]></name>
            <context><![CDATA[Button to tilt view down]]></context>
            <value><![CDATA[Tilt Down]]></value>
        </Row>
        <Row>
            <usage><![CDATA[label]]></usage>
            <name><![CDATA[com.acme.cambot.buttontiltup]]></name>
            <context><![CDATA[Button to tilt view up]]></context>
            <value><![CDATA[Tilt Up]]></value>
        </Row>
        <Row>
            <usage><![CDATA[label]]></usage>
            <name><![CDATA[com.acme.cambot.buttonzoomin]]></name>
            <context><![CDATA[Button to view more detail]]></context>
            <value><![CDATA[Zoom In]]></value>
        </Row>
        <Row>
            <usage><![CDATA[label]]></usage>
            <name><![CDATA[com.acme.cambot.buttonzoomout]]></name>
            <context><![CDATA[Button to expand view]]></context>
            <value><![CDATA[Zoom Out]]></value>
        </Row>
        <Row>
            <usage><![CDATA[label]]></usage>
            <name><![CDATA[com.acme.cambot.labelcamera]]></name>
            <context><![CDATA[Label for current camera name]]></context>
            <value><![CDATA[Camera:]]></value>
        </Row>
        <Row>
            <usage><![CDATA[label]]></usage>
            <name><![CDATA[com.acme.cambot.labelrecording]]></name>
            <context><![CDATA[Notice displayed when camera is recording]]></context>
            <value><![CDATA[Recording]]></value>
        </Row>
    </Rows>

Whereas the French and Spanish export files will contain an empty <Rows/> element. This is where the new translations should be added. When the translations are ready, check that the <LocalizationTable> attributes (name, description, languageCommon, languageNative) are correct. Then import the new languages and inspect the results using the Localization Table editor.

Localization tables for an application may be bundled into an extension .zip file as other entities are handled; on import, the tokens for the application will be merged with existing Localization Tables for the same language. In the case that a brand-new language is being introduced, note that many widgets use tokens from the System Localization Table. These will need to be translated as well – however, there is no easy way to restrict the set of tokens to those actually used; at present this is a manual filtering step. For existing languages, check to see if the System tokens have already been translated.

Important note on character encoding

In handling the export, transmission and editing of XML files, it's important to ensure that UTF-8 encoding is maintained throughout. Encoding problems can show up either as errors when the file is re-imported, or as localized strings with question marks or other unexpected characters in place of accented letters. ThingWorx must run with UTF-8 as the default file encoding: specify the Java option -Dfile.encoding=UTF-8 on launch.

On Windows, in %CATALINA_HOME%\bin\setenv.bat, include this command:

    set CATALINA_OPTS=-Dfile.encoding=UTF-8
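On Linux, assuming a standard Tomcat layout, the equivalent setting goes in $CATALINA_HOME/bin/setenv.sh:

    export CATALINA_OPTS="-Dfile.encoding=UTF-8"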
Tips for translators

Each token in an exported Localization Table XML file is defined by four fields: name, value, usage, and context.

- Name might be suggestive, but it is actually arbitrary and should not be relied on.
- Value contains the natural-language value for the token in another language (as agreed upon). Translating from this language into the target language is the object.
- Usage hints at constraints on the size of the translated text. ThingWorx widgets do not in general resize to fit contents, so a button label, column heading, field label, etc. may be more difficult to translate. Because the default language is likely to be English, and English is a particularly compact language, the application may have been designed with narrow constraints. Such tokens should be marked as tricky by having a usage value of Label. Tokens with a usage of Message are for strings in more adaptable spaces, such as a textarea, warning message, etc.
- Context allows the application developer to provide translation hints. This may disambiguate synonyms, explain usage, discuss space constraints, specify tone of voice, or anything else applicable.

The interesting section of a language's XML representation is contained in the <Rows> element. For example:

     1  <Rows>
     2      <Row>
     3          <usage/>
     4          <name><![CDATA[com.acme.spiffy.labelPart]]></name>
     5          <context/>
     6          <value><![CDATA[Part]]></value>
     7      </Row>
     8      <Row>
     9          <usage><![CDATA[Label]]></usage>
    10          <name><![CDATA[com.acme.spiffy.labelAssembly]]></name>
    11          <context><![CDATA[Label identifying the name of the assembly being edited, appears as Assembly: external_name]]></context>
    12          <value><![CDATA[Assembly]]></value>
    13      </Row>
    14      <Row>
    15          <usage><![CDATA[Message]]></usage>
    16          <name><![CDATA[com.acme.spiffy.warningIncomplete]]></name>
    17          <context><![CDATA[Pop-up warning message on Save]]></context>
    18          <value><![CDATA[A referenced part is missing, undefined, or not allowed in this assembly.]]></value>
    19      </Row>
    20  </Rows>

In this example, the token defined in lines 2 through 7 is missing the translation cues usage and context. The translator's only option is to intuit the sense of "Part" – is it a noun or a verb? – and attempt a reasonable guess. Access to a running example of the application would clearly be helpful. Lines 8 through 13 identify a label and describe how it is used; lines 14 through 19 do the same for a message. The translator would know that space for the translation of "Assembly" might be limited but that the warning message can be expressed naturally.
A translator working on French might then edit this file as follows (again, only the <Rows> element is illustrated):

    <Rows>
        <Row>
            <usage/>
            <name><![CDATA[com.acme.spiffy.labelPart]]></name>
            <context/>
            <value><![CDATA[Partie]]></value>
        </Row>
        <Row>
            <usage><![CDATA[Label]]></usage>
            <name><![CDATA[com.acme.spiffy.labelAssembly]]></name>
            <context><![CDATA[Label identifying the name of the assembly being edited, appears as Assembly: external_name]]></context>
            <value><![CDATA[Assemblée]]></value>
        </Row>
        <Row>
            <usage><![CDATA[Message]]></usage>
            <name><![CDATA[com.acme.spiffy.warningIncomplete]]></name>
            <context><![CDATA[Pop-up warning message on Save]]></context>
            <value><![CDATA[Une partie référencé est manquant, indéfini, ou non autorisés dans cette assemblée.]]></value>
        </Row>
    </Rows>

Note that only the <value> elements need to be translated – the context and usage are hints for the translator.

System tokens for international data formats

There are several tokens used for formatting that are also subject to localization:

- datepickerDayNamesMin – default: Su,Mo,Tu,We,Th,Fr,Sa – day-of-week abbreviations used in the calendar heading.
- datepickerFirstDay – default: 0 – first day of the week; 0 for Sunday, 1 for Monday, and so on.
- datepickerMonthNames – default: January,February,March,April,May,June,July,August,September,October,November,December – month names used in the calendar heading.
- dateTimeFormat_Default – default: yyyy-MM-dd HH:mm:ss – date and time format codes are defined by the moment.js library.
- dateTimeFormat_FullDateTime – default: LLLL
- dateTimeFormat_LongDate – default: LL
- dateTimeFormat_LongDateTime – default: LLL
- dateTimeFormat_MediumDate – default: ll
- dateTimeFormat_ShortDate – default: l
- dateTimeFormat_TimeOnly – default: LT
- shortDateFormat – default: mm/DD/yyyy

See also KCS Article CS241828 for details about numeric localization.

Allowing users to set their own language preferences

It may not be practical for the Administrator to set the language preferences for each user. An application may elect to expose the preferences editor to the end user, so that each user may select from the available languages those that are useful. To support this, ThingWorx Composer offers a Preferences widget in the Mashup builder. The widget may be inserted into any application wherever the designer chooses. It may be tied to a button or menu item, or simply appear in a layout with other widgets – perhaps along with application-specific preferences and other settings.

To use the Preferences widget, design a mashup for it to appear in; the minimal case would be a responsive page mashup containing nothing but the Preferences widget. Add the Preferences widget by dragging it into place; a placeholder for the widget appears in the mashup. The widget may be customized by setting various properties. These properties are specific to the Preferences widget:

- ShowClearRecent: Check this to include the option for the user to clear the Most Recently Used history. You may specify a localized tooltip.
- ShowRestoreTabs: Check this to include the option for the user to set tab restoration to ask, always, or never. You may specify a localized tooltip.
- ShowLanguages: Check this to include the option for the user to edit language preferences. You may specify a localized tooltip.
- ShowUserName: Check this to label the Preferences widget with the user's name.
- ShowUserAvatar: Check this to label the Preferences widget with the user's avatar, if one is defined.
- Style: Style the Preferences widget itself.
- ButtonStyle: Style the Clear Recent and Edit buttons. These should probably be set to the application's primary button style.

After adding the Preferences widget to a mashup, provide some way for the user to navigate to it, consistent with the application's UI design. The mashup may be tied to a menu entry, or assigned to a Navigation widget, or included in a page within the application's workflow – whatever suits the application design. Here is an example of providing access to preferences through a button in the application's title area:

1. The Navigation widget is placed in the page header.
2. The MashupName property is set to the mashup containing a Preferences widget.
3. The TargetWindow property is set to Modal Popup.
4. For a more interesting UI, the button label is bound from the user's name.

At runtime, the button appears in the title area; note that there is also a menu item leading to the mashup with the Preferences widget.
Connectors allow clients to establish a connection to Tomcat via the HTTP / HTTPS protocol. Tomcat allows for configuring multiple connectors so that users or devices can connect either via HTTP or HTTPS.

Usually users like you and me access websites by just typing the URL in the browser's address bar, e.g. "www.google.com". By default, browsers assume that the connection should be established with the HTTP protocol. For HTTPS connections, the protocol has to be specified explicitly, e.g. "https://www.google.com"

However, Google automatically forwards HTTP connections as HTTPS connections, so that all connections use certificates over a secure channel, and you will end up on "https://www.google.com" anyway.

To configure ThingWorx to only allow secure connections there are two options...

1) Remove HTTP access

If HTTP access is removed, users can no longer connect to the 80 or 8080 port. ThingWorx will only be accessible on port 443 (or 8443).

If connecting to port 8080, clients will not be redirected. The redirectPort in the Connector only forwards requests internally in Tomcat; it does not switch protocols and ports, and does not require a certificate when being used. The redirect port does not reflect in the client's connection but only manages internal port-forwarding in Tomcat.

By removing the HTTP ports for access, any connection on port 80 (or 8080) will end up in an error message that the client cannot connect on this port.

To remove the HTTP ports, edit <Tomcat>\conf\server.xml and comment out sections like

    <!-- commented out to disallow connections on port 80
    <Connector port="80" protocol="org.apache.coyote.http11.Http11NioProtocol"
               connectionTimeout="20000"
               redirectPort="443" />
    -->

Save and restart Tomcat. If opening Tomcat (and ThingWorx) in a browser via http://myServer/ the connection will fail with a "This site can't be reached", "ERR_CONNECTION_REFUSED" error.

2) Forcing insecure connections through secure ports

Alternatively, ports 80 and 8080 can be kept open to still allow users and devices to connect. But instead of only internally forwarding the port, Tomcat can be set up to also forward the client to the new secure port. With this, users and devices can still use e.g. old bookmarks and do not have to explicitly set the HTTPS protocol in the address.

To configure this, edit <Tomcat>\conf\web.xml and add the following section just before the closing </web-app> tag at the end of the file:

    <security-constraint>
        <web-resource-collection>
            <web-resource-name>HTTPSOnly</web-resource-name>
            <url-pattern>/*</url-pattern>
        </web-resource-collection>
        <user-data-constraint>
            <transport-guarantee>CONFIDENTIAL</transport-guarantee>
        </user-data-constraint>
    </security-constraint>

In <Tomcat>\conf\server.xml, ensure that all HTTP Connectors (port 80 and 8080) have their redirectPort set to the secure HTTPS Connector (usually port 443 or port 8443).

Save and restart Tomcat. If opening Tomcat (and ThingWorx) in a browser via http://myServer/ the connection will now be forwarded to the secure port. The browser will show the connection as https://myServer/ instead, and connections are secure and using certificates.
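Both options assume that an HTTPS Connector is already configured in <Tomcat>\conf\server.xml. As a rough sketch (the keystore file, password and port are placeholders to adapt to your environment):

    <Connector port="443" protocol="org.apache.coyote.http11.Http11NioProtocol"
               SSLEnabled="true" scheme="https" secure="true"
               keystoreFile="conf/keystore.jks" keystorePass="changeit"
               clientAuth="false" sslProtocol="TLS" />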
What next?

Configuring Tomcat to force insecure connections onto the actually secure HTTPS connection just requires a simple configuration change. If you want to read more about certificates, encryption and how to set up ThingWorx for HTTPS in the first place, be sure to also have a look at

Trust & Encryption - Theory
Trust & Encryption - Hands On
1. Add a JSON parameter. Example:

    {
        "rows": [
            {
                "email": "example1@ptc.com"
            },
            {
                "name": "Qaqa",
                "email": "example2@ptc.com"
            }
        ]
    }

2. Create an InfoTable with a DataShape using CreateInfoTableFromDataShape(params).

3. Using a for loop, iterate through each JSON object and add it to the InfoTable using InfoTableName.AddRow(YourRowObjectHere). Example:

    var params = {
        infoTableName: "InfoTable",
        dataShapeName: "jsontest"
    };
    var infotabletest = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);
    for (var i = 0; i < json.rows.length; i++) {
        infotabletest.AddRow({
            name: json.rows[i].name,
            email: json.rows[i].email
        });
    }
​​​There are four types of Analytics:                                                                 Prescriptive analytics: What should I do about it? Prescriptive analytics is about using data and analytics to improve decisions and therefore the effectiveness of actions.Prescriptive analytics is related to both Descriptive and Predictive analytics. While Descriptive analytics aims to provide insight into what has happened and Predictive analytics helps model and forecast what might happen, Prescriptive analytics seeks to determine the best solution or outcome among various choices, given the known parameters. “Any combination of analytics, math, experiments, simulation, and/or artificial intelligence used to improve the effectiveness of decisions made by humans or by decision logic embedded in applications.”These analytics go beyond descriptive and predictive analytics by recommending one or more possible courses of action. Essentially they predict multiple futures and allow companies to assess a number of possible outcomes based upon their actions. Prescriptive analytics use a combination of techniques and tools such as business rules, algorithms, machine learning and computational modelling procedures. Prescriptive analytics can also suggest decision options for how to take advantage of a future opportunity or mitigate a future risk, and illustrate the implications of each decision option. In practice, prescriptive analytics can continually and automatically process new data to improve the accuracy of predictions and provide better decision options. Prescriptive analytics can be used in two ways: Inform decision logic with analytics: Decision logic needs data as an input to make the decision. The veracity and timeliness of data will insure that the decision logic will operate as expected. It doesn’t matter if the decision logic is that of a person or embedded in an application — in both cases, prescriptive analytics provides the input to the process. Prescriptive analytics can be as simple as aggregate analytics about how much a customer spent on products last month or as sophisticated as a predictive model that predicts the next best offer to a customer. The decision logic may even include an optimization model to determine how much, if any, discount to offer to the customer. Evolve decision logic: Decision logic must evolve to improve or maintain its effectiveness. In some cases, decision logic itself may be flawed or degrade over time. Measuring and analyzing the effectiveness or ineffectiveness of enterprises decisions allows developers to refine or redo decision logic to make it even better. It can be as simple as marketing managers reviewing email conversion rates and adjusting the decision logic to target an additional audience. Alternatively, it can be as sophisticated as embedding a machine learning model in the decision logic for an email marketing campaign to automatically adjust what content is sent to target audiences. Different technologies of Prescriptive analytics to create action: Search and knowledge discovery: Information leads to insights, and insights lead to knowledge. That knowledge enables employees to become smarter about the decisions they make for the benefit of the enterprise. But developers can embed search technology in decision logic to find knowledge used to make decisions in large pools of unstructured big data. Simulation: ​Simulation imitates a real-world process or system over time using a computer model. 
Because digital simulation relies on a model of the real world, the usefulness and accuracy of simulation for improving decisions depends heavily on the fidelity of the model. Simulation has long been used in multiple industries to test new ideas, or to test how modifications will affect an existing process or system.

Mathematical optimization: Mathematical optimization is the process of finding the optimal solution to a problem that has numerically expressed constraints.

Machine learning: "Learning" means that the algorithms analyze sets of data to look for patterns and/or correlations that result in insights. Those insights can become deeper and more accurate as the algorithms analyze new data sets. The models created and continuously updated by machine learning can be used as input to decision logic or to improve the decision logic automatically.

Pragmatic AI: Enterprises can use AI to program machines to continuously learn from new information, build knowledge, and then use that knowledge to make decisions and interact with people and/or other machines.

Use of Prescriptive Analytics in ThingWorx Analytics:

Thing Optimizer: Thing Optimizer functionality provides the prescriptive scoring and optimization capabilities of ThingWorx Analytics. While predictive scoring allows you to make predictions about future outcomes, prescriptive scoring allows you to see how certain changes might affect future outcomes. After you have generated a prediction model (also called training a model), you can modify the prescriptive attributes in your data (those attributes marked as levers) to alter the predictions. The prescriptive scoring process evaluates each lever attribute and returns an optimal value for that feature, depending on whether you want to minimize or maximize the goal variable. Prescriptive scoring results include both an original score (the score before any lever attributes are changed) and an optimized score (the score after optimal values are applied to the lever attributes). In addition, for each attribute identified in your data as a lever, original and optimal values are included in the prescriptive scoring results.

How to Access Thing Optimizer Functionality: ThingWorx Analytics prescriptive scoring can only be accessed via the REST API service, which requires installation of the ThingWorx Analytics Server. Using a REST client, you can access the Scoring service, which includes a series of API endpoints to submit scoring requests, retrieve results, list jobs, and more (a sketch of such a call follows at the end of this post).

How to avoid mistakes - below are some common mistakes made while doing Prescriptive analytics:
Starting digital analytics without a clear goal
Ignoring core metrics
Choosing overkill analytics tools
Creating beautiful reports with little business value
Failing to detect tracking errors

Image source: Wikipedia; content: go.forrester.com (partially)
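Because the Scoring service is plain REST, it can be called from a ThingWorx service as well as from an external REST client. Below is a minimal sketch using the platform's ContentLoaderFunctions resource; the host, endpoint path, and request body fields are hypothetical placeholders (not the documented Analytics API), so check the Analytics Server REST API documentation for your release before relying on this:

// Hypothetical sketch only: the URL and body fields below are placeholders
var params = {
    url: "http://analytics-server:8080/analytics/scoring/prescriptive",  // hypothetical endpoint
    content: {
        modelUri: "results/models/mymodel",   // hypothetical reference to a trained model
        goalField: "goal",                    // the goal variable to optimize
        maximize: true                        // whether to maximize or minimize the goal
    },
    headers: { "Content-Type": "application/json" },
    timeout: 60
};
// ContentLoaderFunctions is the standard ThingWorx resource for outbound REST calls
var response = Resources["ContentLoaderFunctions"].PostJSON(params);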
Everywhere in the ThingWorx Platform (even the edge and extensions) you see the data structure called InfoTables. What are they? They are used to return data from services, map values in mashups and move information around the platform. What they are is very simple; how they are set up and used is also simple, but there are a lot of ways to manipulate them. Simply put, InfoTables are JSON data, that is all. However, they use a standard structure that the platform can recognize and use.

There are two pieces to an InfoTable: the DataShape definition and the rows array. The DataShape is the definition of each row value in the rows array. This is not accessible directly in service code, but there are functions and structures to manipulate it in services if needed.

Example InfoTable definition and values:

{
    "dataShape": {
        "fieldDefinitions": {
            "ColOneName": { "name": "ColOneName", "baseType": "STRING" },
            "ColTwoName": { "name": "ColTwoName", "baseType": "NUMBER" }
        }
    },
    "rows": [
        { "ColOneName": "FirstValue", "ColTwoName": 13 },
        { "ColOneName": "SecondValue", "ColTwoName": 14 }
    ]
}

So you can see that the dataShape value is made up of a group of JSON objects under the fieldDefinitions element. Each field has a "name" element, which of course defines the field name, and a "baseType" element, which is the ThingWorx primitive type of the named field. Typically this structure is created automatically by using a DataShape object that is defined in the platform. This is also the reason DataShapes need to be defined: so that fields can be defined not only for InfoTables, but also for DataTables and Streams. This is how Mashups know what the structure of the data is when creating bindings to widgets, and how other parts of the platform can display data in a structured format.

The other part is the "rows" element, which contains an array of JSON objects holding the actual data in the InfoTable.

Accessing the values in the rows is as simple as using standard JavaScript syntax for JSON. To access the number in the first row of the InfoTable referenced above (if the name of the InfoTable variable is "MyInfoTable"), use MyInfoTable.rows[0].ColTwoName. This returns a value of 13. As you can note, the JSON array index starts at zero.

Looping through an InfoTable in service script is also very simple. You can use the index in a standard "for loop" structure, but a slightly cleaner way is to use a "for each loop" like this:

for each (row in MyInfoTable.rows) {
    var colOneVal = row.ColOneName;
    ...
}

It is important to note that many base services in the platform return an InfoTable, and most of these have system-defined DataShapes built into the platform (such as QueryDataTableEntries, GetImplementingThings, QueryNumberPropertyHistory and many, many more). Also, all service results from query services accessing external databases are returned in the structure of an InfoTable.

Manipulating an InfoTable in script is easy using various functions built into the platform. Many of these can be found in the "Snippets" tab of the service editor in Composer, in both the InfoTableFunctions Resource and InfoTable Code Snippets. Some of my favorites and most commonly used...
Create a blank InfoTable:

var params = {
    infoTableName: "MyTable"
};
var MyInfoTable = Resources["InfoTableFunctions"].CreateInfoTable(params);

Add a new field to any InfoTable:

MyInfoTable.AddField({name: "ColNameThree", baseType: "BOOLEAN"});

Delete a field:

MyInfoTable.RemoveField("ColNameThree");

Add a data row:

MyInfoTable.AddRow({ColOneName: "NewRowValue", ColTwoName: 15});

Delete one or more data rows matching the values defined (note you can define multiple fields in this statement):

//delete all rows that have a value of 13 in ColOneName
MyInfoTable.Delete({ColOneName: 13});

Create an InfoTable using a predefined DataShape:

var params = {
    infoTableName: "MyInfoTable",
    dataShapeName: "dataShapeName"
};
var MyInfoTable = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);

There are many more functions built into the platform, including ones to filter, sort and query rows (a few are sketched below). These can be extremely useful when trying to return limited or more strictly structured InfoTable data. Hopefully this gives you a better understanding and use of this critical part of the ThingWorx Platform.
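A few of the row-manipulation functions mentioned above are also available directly as methods on the InfoTable object itself. A minimal sketch, reusing the MyInfoTable variable from the snippets above (method availability can vary slightly by platform version, so verify against the Snippets tab in Composer):

// Sort the rows in place by a named field
MyInfoTable.Sort({ name: "ColTwoName", ascending: true });

// Return the first row matching the given field values (undefined if no match)
var match = MyInfoTable.Find({ ColOneName: "NewRowValue" });

// Filter the rows in place, keeping only those matching the given field values
MyInfoTable.Filter({ ColTwoName: 15 });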
A license is not required to download and install KEPServerEX V6. After finishing the full install, all features of the KEPServerEX Configuration are available for setup and testing, including the ThingWorx Native Interface. When either a Client application makes a request of a Driver, or a Plug-in becomes activated, a license check is performed. If a feature is not licensed, a two-hour demo countdown period begins for that feature. There are System tags that can be monitored in KEPServerEX to determine which features are licensed, which are in a time-limited demo countdown period, and which have expired. To reset the countdown timer for an unlicensed feature, the KEPServerEX Runtime service can be stopped and restarted. See the links and steps below for more information.

How do I download and install KEPServerEX?

Monitor licensed, time-limited, and expired features in KEPServerEX:
1. Open the KEPServerEX Configuration window
2. The right-most icon in the Configuration icon toolbar is the white "QC" icon. Select this icon to launch a new OPC Quick Client window. The server project should automatically build
3. In the left pane, locate the _System folder
4. Within this folder locate the following three tags, with the corresponding information displaying in the Value column:

Tag (Item ID) -- Information displayed as a String in the Value column
_System._ExpiredFeatures -- Any unlicensed driver or plug-in that has timed out
_System._LicensedFeatures -- Drivers or plug-ins that are currently licensed for use with this install
_System._TimeLimitedFeatures -- Any unlicensed driver or plug-in that is in demo mode, with the remaining demo timer countdown (in seconds)

Note: The ThingWorx Native Interface does not require a KEPServerEX license.

Restart the KEPServerEX Runtime service:
The KEPServerEX Runtime service (server_runtime.exe) runs in the background as a system service. Resetting this service will reset the demo countdown for any unlicensed feature. To reset the Runtime service:
1. Right-click on the Administration tool (the green EX icon in the Systray by the System Clock)
2. Select "Stop Runtime Service"
3. Either the user will be prompted to restart this service, or the same Administration tool menu can be used again to "Start Runtime Service"
Create a new Thing using the Scheduler Thing Template. The Scheduler Thing will fire a ScheduledEvent Event when the configured schedule fires. The Event is automatically present and does not need to be added manually.

Configuration

The Scheduler configuration is quite straightforward and allows for an exact setup of the schedule based on units of time, e.g. seconds, minutes, hours, days of week etc. It can be accessed via the Thing's Entity Configuration.

Configuration allows for:

Changing the runAsUser context - the user context in which the Events will be handled. The user will need visibility and permission on e.g. executing Services or dependent Things which are required to run the Service triggered by the Event.

Changing the Schedule - the times at which the Events will be fired (by default every minute). The schedule is displayed in CRON String notation and can be changed and viewed in detail by clicking on "More". The CRON String will be generated automatically based on the inputs. Schedules can be configured in Manual mode - allowing for full configuration of each and every time-based attribute. Schedules can also be configured for a specific time Type - allowing for configuration based only on seconds, minutes, hours, days, weeks, months or years. The screenshots below show schedules running every minute and every Saturday / Sunday at 12:00 ("Every Weekend Day").

Services

Scheduler Things inherit two Services by default from the Thing Template:

DisableScheduler
EnableScheduler

These will activate / de-activate the Scheduler and allow / disallow firing Events once a scheduled time is reached (see the sketch below). Whether a Scheduler is currently enabled or disabled can be seen in its properties.
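For reference, the two inherited Services can also be called from another service to pause and resume a schedule programmatically. A minimal sketch, assuming a Scheduler Thing named MyScheduler (the Thing name is an example; the CRON strings in the comments reflect the two schedules shown above and should be treated as illustrative):

// Stop the Scheduler from firing ScheduledEvent
Things["MyScheduler"].DisableScheduler();

// Re-enable it; the next Event fires at the next matching schedule time
Things["MyScheduler"].EnableScheduler();

// Example CRON strings as generated by the configuration dialog:
//   every minute:                0 0/1 * 1/1 * ? *
//   every weekend day at 12:00:  0 0 12 ? * SAT,SUN *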
This document is designed to help troubleshoot some commonly seen issues while installing or upgrading the ThingWorx application, prior to or instead of contacting Tech Support. This is not a defined template for a guaranteed solution, but rather a reference guide that provides an opportunity to eliminate some of the possible root causes. While following the installation guide and matching the system requirements is sufficient to get a successfully running instance of ThingWorx, some issues could still occur upon launching the app for the first time. Generally, those issues arise from minor environmental details and can be easily fixed by aligning with the proper installation process. Currently, the majority of the installation hiccups come from the PostgreSQL side. That being said, the very first thing to note, whether it's a new user trying out the platform or a returning one switching the database to PostgreSQL, is that the PostgreSQL database must be installed, configured, and running prior to the ThingWorx installation.

ThingWorx 7.0+: Installation errors out with 'failed to succeed more than the maximum number of allowed acquisition attempts', a 'Platform being shut down because System Ownership cannot be acquired' error, or 'ERROR: relation "system_version" does not exist'

Resolution: Generally, this type of error points at a security/permission issue. As all of the installation operations should be performed by a root/Administrator role, the following points should be verified:
Ensure both the Tomcat and ThingworxPlatform folders have the relevant read/write permissions
The title and contents of the configuration file in the ThingworxPlatform folder changed from 6.x to 7.x; check that the right configuration file is in the folder
Verify that the name and password provided in this configuration file match the ones set in the Postgres DB
Run the database cleanup script, and then set up the database again. Verify by checking the thingworx tablespace (about 53 tables should be created)

ThingWorx application: Blank screen, no errors in the logs, "waiting for <url>" with gears running but never actually loading, eventually times out

Resolution: Ensure that Java in Tomcat is pointing to the right path, which should be something like: C:\Program Files\Java\jre1.8.0_101\bin\server\jvm.dll

6.5+ Postgres:

Error when executing thingworxpostgresDBSetup.bat: psql:./thingworx-database-setup.sql:1: ERROR: could not set permissions on directory "D:/ThingworxPostgresqlStorage": Permission denied

Resolution: The error means that the postgres user was not able to create a directory in the 'ThingworxPostgresqlStorage' directory. As this is related to security/permissions, the following steps can be taken to clear the error: assign read/write permissions to the Everyone user group to fix the script execution, and then execute the batch file as admin. To do so, right-click on the 'ThingworxPostgresqlStorage' directory -> Share with -> Specific people; select the drop-down, add the Everyone group and update the permission level to Read/Write; click Share.

Installation error message "relation root_entity_collection does not exist" is displayed with the PostgreSQL version of the ThingWorx platform.

Resolution: Such an error message is displayed only if the schema parameter passed to the thingworxPostgresSchemaSetup.sh script is different from $USER or PUBLIC. To clear the error, edit the PostgreSQL configuration file, postgresql.conf, and add your own schema to the SEARCH_PATH item.
Other common errors upon launching the application: two of the most commonly seen errors are 404 and 401. While there can be numerous reasons to see those errors, here are the root causes that fall under the "very likely" category:

404 Application not found during a new install:
Ensure Thingworx.war was deployed -- check the hard drive directory of Tomcat/webapps and ensure Thingworx.war and the Thingworx folder are present, as well as ThingworxStorage in the root (or custom selected location)
Ensure the Thingworx.war is not corrupted (you may re-download it from support and compare the size)

401 Application cannot be accessed during a new install or upgrade:
For PostgreSQL, ensure the database is running and can be connected to; also see the basic troubleshooting points below
Verify that the Tomcat, Java, and database (in case of PostgreSQL) versions match the system requirements guide for the appropriate platform version
Ensure the upgrade was performed according to the guide and the necessary folders were removed (after copying them as a preventative measure)
Ensure the correct port is specified in platform-settings.json (for PostgreSQL); by default the connection string is jdbc:postgresql://localhost:5432/thingworx (see the sketch at the end of this post)

Again, it should be kept in mind that while the symptoms are common and can generally be resolved with the same solution, every system environment is unique and may require an individual approach for a guaranteed resolution.

Basic troubleshooting points for:
Validating PostgreSQL installation
Postgres install troubleshooting
java.lang.NullPointerException error during PostgreSQL installation
***CRITICAL ERROR: permission denied for relation root_entity_collection
Error while running scripts: Could not set permissions on directory "/ThingworxPostgresqlStorage": Permission Denied
Acquisition Attempt Failed error

Resolution:
Ensure the 'ThingworxStorage', 'ThingworxPlatform' and 'ThingworxPostgresqlStorage' folders are created; the folders have to be present in the root directory unless specifically changed in any configurations
It is recommended to grant sufficient privileges (if not all) to the database user (twadmin)
Note: While running the script in order to create a database, if a schema name other than 'public' is used, the "search_path" in "postgresql.conf" must be changed to reflect 'NewSchemaName, public'
Grant the user permission to access the root folders containing 'ThingworxPostgresqlStorage' and 'ThingworxPlatform'
The password set for the default 'twadmin' user in the pgAdmin III tool must match the password set in the configuration file under the ThingworxPlatform folder
Ensure the THINGWORX_PLATFORM_SETTINGS variable is set up

Error:
psql:./thingworx-database-setup.sql:14: ERROR: could not create directory "pg_tblspc/16419/PG_9.4_201409291/16420": No such file or directory
psql:./thingworx-database-setup.sql:16: ERROR: database "thingworx" does not exist

Resolution: Replace /ThingworxPostgresqlStorage in the .bat file with C:\ThingworxPostgresqlStorage and omit the -l option in the command window. Also note the following article: Troubleshooting Syntax Error when running postgresql set up scripts
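For the platform-settings.json check above, the PostgreSQL connection block typically looks something like the sketch below; the exact key names can differ between platform versions, so treat this as an illustration and compare it against the installation guide for your release:

{
    "PersistenceProviderPackageConfigs": {
        "PostgresPersistenceProviderPackage": {
            "ConnectionInformation": {
                "jdbcUrl": "jdbc:postgresql://localhost:5432/thingworx",
                "username": "twadmin",
                "password": "<your-password>"
            }
        }
    }
}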
The following code is best practice when creating any "entity" in ThingWorx service script. When a new entity is created (like a Thing) it is loaded into the JVM memory immediately, but is not committed to disk until a transaction (service) successfully completes. For this reason ALL code in a service must be in a try/catch block to handle exceptions. In order to roll back the create call, the catch must call a delete for any entity created. Inline comments give further detail.

try {
    var params = {
        name: "NewThingName",
        description: "This Is A New Thing",
        thingTemplateName: "GenericThing"
    };
    Resources["EntityServices"].CreateThing(params);

    // Always enable and restart a new thing to make it active on the Platform
    Things["NewThingName"].Enable();
    Things["NewThingName"].Restart();

    // Now create an Organization for the new Thing
    var params = {
        topOUName: "NewOrgName",
        name: "NewOrgName",
        description: "New Organization for new Thing",
        topOUDescription: "New Org Main"
    };
    Resources["EntityServices"].CreateOrganization(params);

    // Any code that could potentially cause an exception should
    // also be included in the try-catch block.
} catch (err) {
    // If an exception is caught, we need to attempt to delete everything
    // that was created to roll back the entire transaction.
    // If we do not do this a "ghost" entity will remain in memory.
    // We must do this in reverse order of creation so there are no dependency conflicts.
    // We also do not know where it failed, so we must attempt to remove all of them,
    // but also handle exceptions in case they were not created.

    try {
        var params = {name: "NewOrgName"};
        Resources["EntityServices"].DeleteOrganization(params);
    }
    catch (ex2) {
        // Org was not created
    }

    try {
        var params = {name: "NewThingName"};
        Resources["EntityServices"].DeleteThing(params);
    }
    catch (ex2) {
        // Thing was not created
    }
}
This is the simplest structure for looping through an InfoTable. It's simple, but it avoids having to use row indexes and cleans up the code for readability as well.

//Assume an incoming InfoTable parameter named "thingList"
for each (row in thingList.rows) {
    // Each row is already assigned to the row variable in the loop
    var thingName = row.name;
}

You can also nest these loops (just use a different variable from "row"; see the sketch below). It is also important not to add or remove row entries of the InfoTable inside the loop. Doing so may skip or repeat rows, since the indexes will be changed.
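A short sketch of the nested form mentioned above, assuming a second incoming InfoTable parameter named propertyList (both parameter names are examples); the inner loop just needs its own variable name so the outer row stays reachable:

for each (row in thingList.rows) {
    for each (propRow in propertyList.rows) {
        // Both loop variables are in scope here
        logger.info(row.name + " / " + propRow.name);
    }
}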
Developing Great IoT Solutions Brought to you once again by your EDC team, find attached here a brand-new, comprehensive overview of ThingWorx best practices! This guide was crafted by combining all available feedback, from support cases to PTC Community threads, and tapping all internal resources. Let this guide serve to bridge the knowledge gaps ThingWorx developers most commonly see.    The Developing Great IoT Solutions (DGIS) Guide is a great way to inform both business and technically minded folks about the capabilities of the ThingWorx Platform. Learn how to design good solutions from a high-level, an overview designed specifically with the business audience in mind. Or, learn how to implement good IoT designs through a series of technical examples. Start from very little knowledge of the Platform and end up understanding data structures and aggregation, how to use the collection widget, and how to build a fully functional rules engine for sending and acknowledging alerts in ThingWorx.   For the more advanced among us, check out the Appendix. Find here a handy list of do's and don'ts surrounding ThingWorx best practice in development, with links to KCS, Help Center, and Community content.   Reinforce your understanding of the capabilities of the ThingWorx Platform with this guide, today!   A big thanks to all who were involved on this project! Happy developing!
How to score new data with ThingWorx Analytics?

The following is valid starting with ThingWorx Analytics (TWA) 8.3.0.

Overview

Once a training model has been created, one of the main objectives is to score new data to predict the value of the goal variable. ThingWorx Analytics can score new data in two ways: batch scoring and real-time scoring.

Batch scoring

Batch scoring is used when a large amount of data needs to be scored. To perform a batch scoring we usually follow steps similar to these:
1. Upload the historic data
2. Create a new model with this historic data
3. Upload new data - the data to be scored
4. Perform a prediction job to score the new data
5. Retrieve the prediction job result

Uploading the new data can be done in different ways. When using a large amount of data, it can be easier to upload the data via a csv file, in a similar way as the historic data. This is the way used in ThingWorx Analytics Builder. If the amount of data is more limited, it can be sent in the body of the scoring request. The post Analytics: Prediction Methods Mashup shows a good example of how to do this using the PredictionThing.BatchScore service.

We focus below on ThingWorx Analytics Builder, that is, uploading new data via a csv file. In order to perform the scoring job only on the new data in step 4 above, we need to be able to filter out those added data (see the sketch at the end of this post). If the dataset already has a suitable column/feature, such as a timestamp, we can use it to score only new data with timestamp > newdate, assuming all data are in chronological order. If the dataset has no such feature, we have to add one beforehand, when we first upload the historic data in step 1 above. We often use a new column/feature named record_purpose to this effect. Initial data can take a value of training for this record_purpose feature, since they are used to create the initial model. New data to be scored can then get any value that identifies those rows only. It is important to note that this record_purpose feature needs to be set with the optType INFORMATIONAL so as not to be taken into account by the learning algorithms.

The video below shows those steps while using ThingWorx Analytics Builder.

Real-time scoring

Real-time scoring is better suited for small amounts of data. The process for real-time scoring can be done either via the Analytics Server PredictionThing RealTimeScore service or using the Analytics Manager framework. The posts How to work with ordinal and categorical data in ThingWorx Analytics and Analytics: Prediction Methods Mashup give examples of the use of the RealTimeScore service.

We concentrate below on the Analytics Manager. The process involves the following steps:

In Analytics Manager:
1. Create an Analysis Provider that uses the AnalyticsServerConnector connector
2. Publish the model created in ThingWorx Analytics Builder to Analytics Manager
3. Enable the published model
4. Create an Analysis Event
5. Map the properties to the datashape fields
6. Enable the Event

In ThingWorx Composer:
1. Relevant properties of the Thing used in the Analysis Event are updated in some way
2. This triggers the analysis job to be executed
3. The scoring result is populated into the result property mapped in the Analysis Event

The Help Center has more details about this process, and the following video shows those steps.

The following articles can also be of interest for this topic:
How to use ThingPredictor in release 8.3 of ThingWorx Analytics Server?
Publish model from Analytics Builder into Analytics Manager using TW.AnalysisServices.AnalyticsServer.AnalyticsServerConnector
Creating Template For Thing, And Configure Analysis Event For Real-Time Scoring via Analytics Manager

Note that the AnalyticsServerConnector connector in release 8.3 replaces the ThingPredictor connector from previous releases.
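To illustrate the record_purpose filtering idea from the batch scoring section above, the sketch below reduces an uploaded dataset to just the rows that still need scoring. The InfoTable variable name and tag values are assumptions for illustration:

// Assume 'dataset' is an InfoTable of uploaded records with a record_purpose field;
// rows used to train the model were tagged "training", anything else is new data
var toScore = Resources["InfoTableFunctions"].Clone({ t1: dataset });
// Delete removes matching rows in place, leaving only the rows to score
toScore.Delete({ record_purpose: "training" });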
Welcome to the ThingWorx Manufacturing Apps Community! The ThingWorx Manufacturing Apps are easy to deploy, pre-configured role-based starter apps that are built on PTC’s industry-leading IoT platform, ThingWorx. These Apps provide manufacturers with real-time visibility into operational information, improved decision making, accelerated time to value, and unmatched flexibility to drive factory performance.   This Community page is open to all users-- including licensed ThingWorx users, Express (“freemium”) users, or anyone interested in trying the Apps. Tech Support community advocates serve users on this site, and are here to answer your questions about downloading, installing, and configuring the ThingWorx Manufacturing Apps.     A. Sign up: ThingWorx Manufacturing Apps Community: PTC account credentials are needed to participate in the ThingWorx Community. If you have not yet registered a PTC eSupport account, start with the Basic Account Creation page.   Manufacturing Apps Web portal: Register a login for the ThingWorx Manufacturing Apps web portal, where you can download the free trial and navigate to the additional resources discussed below.     B. Download: Choose a download/packaging option to get started.   i. Express/Freemium Installer (best for users who are new to ThingWorx): If you want to quickly install ThingWorx Manufacturing Apps (including ThingWorx) use the following installer: Download the Express/Freemium Installer   ii. 30-day Developer Kit trial: To experience the capabilities of the ThingWorx Platform with the Manufacturing Apps and create your own Apps: Download the 30-day Developer Kit trial   iii. Import as a ThingWorx Extension (for users with a Manufacturing Apps entitlement-- including ThingWorx commercial customers, PTC employees, and PTC Partners): ThingWorx Manufacturing apps can be imported as ThingWorx extensions into an existing ThingWorx Platform install (v8.1.0). To locate the download, open the PTC Software Download Page and expand the following folders:   ThingWorx Platform | Release 8.x | ThingWorx Manufacturing Apps Extension | Most Recent Datacode     C. Learn After downloading the installer or extensions, begin with Installation and Configuration.   Follow the steps laid out in the ThingWorx Manufacturing Apps Setup and Configuration Guide 8.2   Find helpful getting-started guides and videos available within the 'Get Started' section of the ThingWorx Manufacturing Apps Portal.     D. Customize Once you have successfully downloaded, installed, and configured the Manufacturing Apps, begin to explore the deeper potential of the Apps and the ThingWorx Platform.   Follow along with the discussion and steps contained in the ThingWorx Manufacturing Apps and Service Apps Customization Guide  8.2   Also contained within the the 'Get Started' page of the ThingWorx Manufacturing Apps Portal, find the "Evolve and Expand" section, featuring: -Custom Plant Layout application -Custom Asset Advisor application -Global Plant View application -Thingworx Manufacturing Apps Technical Lab with Sigma Tile (Raspberry Pi application) -Configuring the Apps with demo data set and simulator -Additional Advanced Documentation     E. Get help / give feedback / interact Use the ThingWorx Manufacturing Apps Community page as a resource to find documentation, peruse past forum threads, or post a question to start a discussion! For advanced troubleshooting, licensed users are encouraged to submit support tickets to the PTC My eSupport portal.
Introduction

The Oracle 12c release introduced the concept of a multi-tenant architecture for housing several databases running as services under a single database. I'll try to address the connectivity and required configuration to connect to one of the pluggable databases running in the multi-tenant architecture, in the scope of using it as a ThingWorx external data source.

What is multi-tenant database architecture? Running multiple databases under a single database installation. Oracle 12c allows the user to create one database called the Container Database (CDB) and then spawn several databases called Pluggable Databases (PDB) running as services under it.

Why use multi-tenant architecture? Such a setup allows users to spawn a new PDB as and when needed with limited resource requirements, to easily administer several PDBs just by administering the container database (since all the PDBs are contained within a single database's tablespace structure), and to start and stop individual PDBs, leading to a lower cost of maintaining different databases, as the resource management is limited to one CDB.

When to use multi-tenant architecture? In scenarios like creating PoCs, different test environments requiring external data storage, or maintaining different versions of a dataset, running these in the multi-tenant architecture can help save time, money and effort.

Create Container Database (CDB)
Creation of a Container Database (CDB) is not very different from creating a non-container database; use the attached guide "Installing Oracle Database Software and Creating a Database.pdf", which is also accessible online.

Create Pluggable Database (PDB)
Use the attached "Multitenant: Create and Configure a Pluggable Database (PDB) in Oracle Database 12c" PDF guide to create and plug a Pluggable Database into the Container Database created in the previous step; it is also accessible online. Using the above guide, I have a bunch of pluggable databases, as can be seen below. I'll be using TW724 for connecting to the ThingWorx server as an external data source in the following example.

Connect to a Pluggable Database (PDB) as external data source for ThingWorx
1. Download and unzip the Relational Databases Connectors Extension from the ThingWorx Marketplace and extract Oracle12Connector_Extension
2. Import Oracle12Connector_Extension to ThingWorx using Extension -> Import
3. Create a Thing using the OracleDBServer12 Thing Template, e.g. TW724_PDB_Thing
4. Navigate to the Configuration for TW724_PDB_Thing to update the default configuration:
    JDBC Driver Class Name: oracle.jdbc.OracleDriver
    JDBC Connection String: jdbc:oracle:thin:@//oravm.ptcnet.ptc.com:1521/tw724.ptcnet.ptc.com
    Database Username: <UserName>
    Database Password: <password>
5. Once done, save the entity

Note: A PDB in a container database can be reached only as a service, not using the CDB's SID. In the above configuration TW724 is a PDB which can be connected to via its service name, i.e. TW724.PTCNET.PTC.COM. Let's head to the Services tab for TW724_PDB_Thing to query and access the PDB data.

Creating Services to access the PDB as an external database source for ThingWorx
Once the configuration is done, the TW724_PDB_Thing is ready for use. The queries remain the same as any other SQL query needed to access data from Oracle.
Service for creating a table
Once on the Services tab for the TW724_PDB_Thing, click on Add My Service and select the service handler SQL Command, using the following script to create a testTable1 in the PDB:

create table testTable1 (
    id NUMBER GENERATED ALWAYS AS IDENTITY primary key,
    col1 varchar2(100),
    col2 number
)

Note: The GENERATED ALWAYS AS IDENTITY option is Oracle 12c specific; I included it here because with Oracle 12c the ability to auto-generate values is now built in with that option, simplifying sequence generation when compared with older Oracle versions such as Oracle 11g. The user creating the table will need access rights for creating tables and sequences; check out the Oracle documentation on Identity for more on this.

Service for getting all the data from the table
Add another service with the script Select * from testTable1 to get all the data from the table.

Service for inserting data into the table
Adding another service with the script insert into testTable1 (col1, col2) values ('TextValue', 123) will insert the data into the table created above.

Service for getting all tables from the PDB, i.e. TW724
Using Select * from tab lists all the available tables in the TW724 PDB.

Summary
Just to wrap up how this looks visually, refer to the following image. Since this is a scalable setup, given the platform has enough resources, it is possible to create up to 252 PDBs under a CDB; therefore up to 252 PDBs could be created and configured to as many Things extending the OracleDBServer12 Thing Template.

______________________________________________________________________________________________________________________________________________

Edit: Common connection troubleshooting

If you observe an error like this:

Unable to Invoke Service GetAllPDBTables on TW724_PDB_Thing : ORA-01033: ORACLE initialization or shutdown in progress

Ensure that the pluggable database, in this error TW724 (since this is what I created and used above in my services), is opened and accessible. If it's not opened, log in as sys/system (with admin rights) to the CDB, which is ORCL here, via SQL*Plus, SQL Developer or any SQL utility of your choice capable of connecting to an Oracle DB, and open the pluggable database using the command:

alter pluggable database tw724 open;
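Once those SQL services exist on TW724_PDB_Thing, any other service on the platform can call them like regular Thing services, with query results coming back as InfoTables. A minimal usage sketch; the service names InsertTestData and GetAllTestData are hypothetical placeholders for the services created above:

// Insert a row via the SQL command service (hypothetical name)
Things["TW724_PDB_Thing"].InsertTestData();

// Query services return their result set as an InfoTable (hypothetical name)
var allRows = Things["TW724_PDB_Thing"].GetAllTestData();
for each (row in allRows.rows) {
    // Oracle typically returns upper-case column names
    logger.info("id=" + row.ID + ", col1=" + row.COL1 + ", col2=" + row.COL2);
}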