
IoT Tips

1. Add a JSON parameter. Example:

{
    "rows":[
        {
            "email":"example1@ptc.com"
        },
        {
            "name":"Qaqa",
            "email":"example2@ptc.com"
        }
    ]
}

2. Create an InfoTable with a DataShape using CreateInfoTableFromDataShape(params).

3. Using a for loop, iterate through each JSON object and add it to the InfoTable using InfoTableName.AddRow(YourRowObjectHere). Example:

var params = {
    infoTableName: "InfoTable",
    dataShapeName: "jsontest"
};
var infotabletest = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);
for (var i = 0; i < json.rows.length; i++) {
    infotabletest.AddRow({name: json.rows[i].name, email: json.rows[i].email});
}
View full tip
This is the simplest structure for looping through an infotable. It's simple, but it avoids having to use row indexes and cleans up the code for readability as well.

// Assume an incoming Infotable parameter named "thingList"
for each (row in thingList.rows) {
    // Each row is already assigned to the row variable in the loop
    var thingName = row.name;
}

You can also nest these loops (just use a different variable from "row"). It is also important not to add or remove row entries of the Infotable inside the loop; if you do, you may end up skipping or repeating rows, since the indexes will be changed.
View full tip
There are now three new places where you can get and/or share ThingWorx code examples in the ThingWorx Community: ThingWorx Platform Services, ThingWorx Extensions and Widgets, and ThingWorx Edge and Edge SDKs. We encourage you to share your own relevant code examples in the appropriate space. Be sure to read the how-to and guidelines for posting to the Code Examples Libraries before you create your document. Any official code from ThingWorx Support Services will be marked with an official designation at the top of the document. Keep an eye out for more code examples as we ramp up these libraries, and don't forget to share your own examples!
View full tip
The following code is best practice when creating any "entity" in a ThingWorx service script. When a new entity is created (like a Thing), it is loaded into JVM memory immediately, but it is not committed to disk until the transaction (service) completes successfully. For this reason ALL code in the service must be in a try/catch block to handle exceptions. In order to roll back the create call, the catch must call a delete for any entity created. In-line comments give further detail.

try {
    var params = {
        name: "NewThingName",
        description: "This Is A New Thing",
        thingTemplateName: "GenericThing"
    };
    Resources["EntityServices"].CreateThing(params);

    // Always enable and restart a new thing to make it active on the Platform
    Things["NewThingName"].Enable();
    Things["NewThingName"].Restart();

    // Now create an Organization for the new Thing
    var params = {
        topOUName: "NewOrgName",
        name: "NewOrgName",
        description: "New Organization for new Thing",
        topOUDescription: "New Org Main"
    };
    Resources["EntityServices"].CreateOrganization(params);

    // Any code that could potentially cause an exception should
    // also be included in the try-catch block.
} catch (err) {
    // If an exception is caught, we need to attempt to delete everything
    // that was created to roll back the entire transaction.
    // If we do not do this, a "ghost" entity will remain in memory.
    // We must do this in reverse order of creation so there are no dependency conflicts.
    // We also do not know where it failed, so we must attempt to remove all of them,
    // but also handle exceptions in case they were not created.

    try {
        var params = {name: "NewOrgName"};
        Resources["EntityServices"].DeleteOrganization(params);
    }
    catch(ex2) {
        // Org was not created
    }

    try {
        var params = {name: "NewThingName"};
        Resources["EntityServices"].DeleteThing(params);
    }
    catch(ex2) {
        // Thing was not created
    }
}
View full tip
Introduction

The Oracle 12c release introduced a multi-tenant architecture for housing several databases, running as services, under a single database installation. I'll try to address the connectivity and configuration required to connect to one of the Pluggable Databases running in this architecture, in the scope of a ThingWorx external data source.

What is the multi-tenant database architecture?
Running multiple databases under a single database installation. Oracle 12c allows the user to create one database, called the Container Database (CDB), and then spawn several databases, called Pluggable Databases (PDB), that run as services under it.

Why use the multi-tenant architecture?
Such a setup allows users to spawn a new PDB as and when needed with limited resource requirements, and to easily administer several PDBs just by administering the container database, since all the PDBs are contained within a single database's tablespace structure. Individual PDBs can be started and stopped independently, which lowers the cost of maintaining different databases because resource management is limited to one CDB.

When to use the multi-tenant architecture?
In scenarios like creating PoCs, setting up different test environments that require external data storage, or maintaining different versions of a dataset, running in the multi-tenant architecture can save time, money, and effort.

Create a Container Database (CDB)
Creating a Container Database (CDB) is not very different from creating a non-container database; use the attached guide "Installing Oracle Database Software and Creating a Database.pdf" (also accessible online).

Create a Pluggable Database (PDB)
Use the attached guide "Multitenant: Create and Configure a Pluggable Database (PDB) in Oracle Database 12c" to create a Pluggable Database and plug it into the Container Database created in the previous step (also accessible online). Using the above guide I created a number of pluggable databases, as shown in the original post. I'll be using TW724 for connecting to the ThingWorx server as an external data source in the following example.

Connect to a Pluggable Database (PDB) as an external data source for ThingWorx
1. Download and unzip the Relational Databases Connectors Extension from the ThingWorx Marketplace and extract Oracle12Connector_Extension.
2. Import Oracle12Connector_Extension into ThingWorx using Extension -> Import.
3. Create a Thing using the OracleDBServer12 Thing Template, e.g. TW724_PDB_Thing.
4. Navigate to the Configuration for TW724_PDB_Thing and update the default configuration:
   JDBC Driver Class Name: oracle.jdbc.OracleDriver
   JDBC Connection String: jdbc:oracle:thin:@//oravm.ptcnet.ptc.com:1521/tw724.ptcnet.ptc.com
   Database Username: <UserName>
   Database Password: <password>
5. Once done, save the entity.

Note: A PDB in a container database can be reached only as a service, not via the CDB's SID. In the configuration above, TW724 is a PDB that must be connected to via its service name, i.e. TW724.PTCNET.PTC.COM.

Now let's head to the Services tab for TW724_PDB_Thing to query and access the PDB data.

Creating services to access the PDB as an external database source for ThingWorx
Once the configuration is done, TW724_PDB_Thing is ready for use. The queries remain the same as any other SQL query needed to access data from Oracle.
Service for creating a table
Once on the Services tab for TW724_PDB_Thing, click Add My Service, select the service handler SQL Command, and use the following script to create testTable1 in the PDB:

create table testTable1 (
    id NUMBER GENERATED ALWAYS AS IDENTITY primary key,
    col1 varchar2(100),
    col2 number
)

Note: The GENERATED ALWAYS AS IDENTITY option is Oracle 12c specific; I included it here because with Oracle 12c the ability to auto-generate values is now built in with that option, simplifying sequence generation compared with older Oracle versions such as Oracle 11g. The user creating the table will need rights to create tables and sequences; check the Oracle documentation on Identity for more on this.

Service for getting all the data from the table
Add another service with the script Select * from testTable1 to get all the data from the table.

Service for inserting data into the table
Add another service with the script insert into testTable1 (col1, col2) values ('TextValue', 123) to insert data into the table created above.

Service for getting all tables from the PDB, i.e. TW724
Select * from tab lists all the available tables in the TW724 PDB.

Summary
For a quick visual wrap-up, refer to the image in the original post. Since this is a scalable setup, given a platform with enough resources it's possible to create up to 252 PDBs under a CDB; each of those PDBs could be configured to its own Thing extending the OracleDBServer12 Thing Template.
______________________________________________________________________________________________________________________________________________
Edit: Common connection troubleshooting
If you observe an error like this:

Unable to Invoke Service GetAllPDBTables on TW724_PDB_Thing : ORA-01033: ORACLE initialization or shutdown in progress

ensure that the pluggable database (in this error TW724, since that is what I created and used above in my services) is opened and accessible. If it's not open, log in to the CDB (ORCL in my case) as sys/system (with admin rights) via SQL*Plus, SQL Developer, or any SQL utility of your choice capable of connecting to Oracle DB, and open the pluggable database using the command:

alter pluggable database tw724 open;
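If you are not sure whether the PDB is open in the first place, a quick check is to query Oracle's v$pdbs view while connected to the CDB as a privileged user. This is standard Oracle SQL run from SQL*Plus or SQL Developer, not from ThingWorx, and is offered here only as a convenience:

select name, open_mode from v$pdbs;

An open PDB reports an OPEN_MODE of READ WRITE; a value of MOUNTED means it still needs to be opened with the alter pluggable database command above.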
View full tip
In this video we cover the process of installing ThingWorx Analytics Server 52.1. Make sure to have reviewed the part 1 video about the prerequisites. Updated link for access to this video: Installing ThingWorx Analytics Server: Part 2 of 2
View full tip
Putting this out because this is a difficult problem to troubleshoot if you don't do it right. Let's say you have an application where visibility permissions are in effect, so you have the Users group removed from the Everyone Organization. Now you have a Thing "Thing1" with Properties that are being logged to a ValueStream "VS1". What do you need to make this work? Obviously the necessary permissions to write the values to Thing1 and read the values from Thing1 (for the UI). But for visibility what you'll need is: Visibility to Thing1 (makes sense), and Visibility to the Persistence Provider of the ValueStream VS1!!!! No, you don't need Visibility to the ValueStream itself, but you DO need Visibility to the Persistence Provider of that ValueStream. The way the lack of this permission showed up in the Application Log was a message about trying to provide a Null value.
View full tip
Internationalization and Localization

Internationalization (often abbreviated I18N – from "I" + 18 more letters + "n") is the process of developing software that supports many languages, including those with non-Latin character sets. Localization (L10N) refers to developing applications that can be delivered in many languages, relying on the underlying architecture of I18N. This how-to article focuses mostly on localization, since the infrastructure is in place and stable.

Create a Localization Table

You create a Localization Table entity when you need to add support for another language to the application you're developing. Someone from Sales has said "There's an opportunity if we can deliver the Spiffy application in Estonian." This suggests that an Estonian-speaking end user should be able to run Spiffy and see all of its labels, messages, prompts, dialogs, and so on in Estonian. Most of the cost of adding Estonian language support is in a (usually contracted) service that does the English-to-Estonian (or whatever target language) translations. Such services employ native speakers who can get the nuances of translation correct. See Tips for translators below for suggestions on improving the accuracy of the translation.

In Composer, view the Localization Tables list. Begin by duplicating an existing table (e.g. check Default or another language and click Duplicate) or by clicking New. A new tab will open with a New Localization Table in edit mode. The fields shown are:

Locale (required). This is the official language tag of the new language. Language tags are defined by an Internet standard, IETF BCP 47. Briefly, they consist of a standard abbreviation for a language (e.g. en for English, de for German), followed optionally by a script subtag (e.g. Cyrl for Cyrillic), followed optionally by a region code (a country code, such as CH for Switzerland or HK for Hong Kong, or a U.N. region number), followed optionally by other qualifiers such as dialect. A simple example is es, Spanish. A complex one is sl-Latn-IT-nedis, Slovenian rendered in Latin characters as spoken in Italy in the Natisone dialect. Software rarely needs such highly specific language tags; the most specific practical examples are the various scripts and regions for Chinese (e.g. zh-Hans-CN, zh-Hant-TW).

Language Name (Native) (required). This is the name of the language as written in that language, such that it would be readable by a native speaker. For example, 日本語 for Japanese, ਪੰਜਾਬੀ ਦੇ for Punjabi, or Deutsche for German.

Language Name (Common). This is the name of the language as written in a common administrative language. For an application delivered internationally, English is probably a safe choice. Administrators at a customer site might change these to be in the language of the headquarters country.

Description. Free-form text describing the language. This will appear to end users as a tooltip as they hover over language choices.

Tags. Standard ThingWorx entity tags.

Home Mashup. Does not apply.

Avatar. An icon for this language (a default image is provided). No other icons are delivered as standard, but language selection interfaces in many products use national flags to help distinguish choices, and those could be supplied here. Avatars are 48x48px images. There may be political implications in choosing a flag or other symbol for a language; use caution.

Note that subtags of a language tag are separated by a hyphen, as in zh-Hans-SG.
Using an underscore is a Java convention that does not conform to BCP 47. A complete properties definition for Czech might look like the example shown in the original post.

Once the table has been created and saved, you can edit the translated text in Composer. Under Entity Information, select Localization Tokens. A grid will appear with the following columns:

Token Name. This is the symbol used by mashup developers to insert a localized string into a certain place in a widget. For example, no matter how the phrase "Add New Page" is rendered (Neue Seite hinzufügen, Adicionar nova página, 새 페이지 추가...), the application developer is only concerned that the token addNewPage appears on the proper widget. See How tokens are resolved below for more information.

This Language. How the text is to be represented in this language, that is, the language of the Localization Table currently being viewed or edited.

Language. How the text has already been represented in any other language currently defined on the system. This is simply for reference purposes, to compare one translation with another.

Usage. Can be set to Label, Message, or left unspecified. This is a guide to translators, who have to be concerned about the size of translated text. Usage Label suggests that the text needs to fit in a confined space, such as in a column header or on the face of a button. Usage Message suggests that the text is meant for a popup, error message, help, or somewhere that full sentences can be accommodated.

Context. This is a free-form text field to provide instructions, advice, context, or other explanatory material to the translator. For the token book, for example, the context field can distinguish between the senses of book (something to read), book a table, book a sale, or book a prisoner, which may all have different translations.

Translations can be entered in Composer. However, it's also likely that a third-party translator will do the work without using this editor. See Tips for translators below.

Define language preferences for a user

The reason for localization is to present user interfaces in the best language for a given user. To support this, each ThingWorx user is associated with one or more languages – those that that user can read comfortably. Some applications might offer just one language or a few, some many, and the supported languages may or may not overlap. So each user defines an ordered preference list, saying in effect: my best language is Catalan, but I'm decent in Spanish, and if those aren't available I did spend a few years in Hungary, and as a last resort there was some French in school. This would be represented in ThingWorx as: ca,es,hu,fr. A user from Scotland might have language preference en-UK,en, meaning that English with United Kingdom spellings and vocabulary is best (tyre, windscreen), but if not available then any English will do (tire, windshield). (It is not necessary to spell out related preferences of this type – see How tokens are resolved.) Any application then interacts with a given user in the best language that the application and user have in common.

To define the language preference(s) for a user, open the Users list in Composer. Then choose an existing user to edit, or click New to create a new account. The only localization-related information here is the Languages field. An administrator who knows the names of available languages may edit or paste an ordered, comma-separated list into the Languages field (e.g. ca,es,hu,fr-CA). Clicking the Edit... button brings up a drag-and-drop preferences editor. The column on the left shows available (unselected) languages. The column on the right shows this user's languages, with the top entry being the most preferred language. Dragging a language from left to right adds it to the user's list; from right to left removes it; dragging rows up and down on the right changes the preference order. As language entries are dragged, a highlight appears to show where they might be dropped.

A user with no language preference set will have all tokens resolved from the Default and System tables. Language preferences can also be set programmatically, as detailed in KCS Article CS243270.

Localize Mashups

The job of the application developer is to keep hard-coded natural language strings out of applications. To support this, widgets define an attribute isLocalizable: true for widget properties that can contain text. This shows up in the Mashup editor as a globe icon next to each localizable property. In the example from the original post, both the Text and ToolTipField properties are localizable. Clicking the globe icon changes the property from static to localized, and the appearance in the Mashup editor changes accordingly. Clicking the magic wand icon opens the localization token picker. The list of tokens on the right corresponds to the Token Name column in the Localization Table editor. This is the key that is common to the meaning of a word or phrase, independent of its translation into natural languages. Select one from the list, or click to create a new one. Enter the token name and its Default (usually English) value.

Note that, complying with best practices for extension developers, the token name has been namespaced: this token belongs to Acme Inc.'s Spiffy application. The rest of the name is descriptive and may reflect other development standards. When a new token is created, it becomes available to edit in every configured Localization Table. If these are not updated, then the default (English) value will be shown wherever the token occurs.

How tokens are resolved

What happens at run time when the UI needs to display the value of a localization token? The answer is determined by the current user's language preferences, the set of Localization Tables configured on the system, and the presence or absence of a translation for a given token in a given table.

To visualize this, picture the user's language preferences as a stack, with the most preferred language on top and the least preferred sitting on the floor – where the floor consists of the Default and System Localization Tables. Say the user's language preference is fr,pt,ru,hi (French, Portuguese, Russian, Hindi, with French most preferred). The system is configured with Localization Tables, which have no order, for it (Italian), fr-CA (Canadian French), ru (Russian), pt-BR (Brazilian Portuguese), es (Spanish), and the default (likely English).

Now the UI needs to present this user with the best value for the token com.acme.spiffy.labelAssembly. To resolve this, we start at the top of the stack. Is there a fr Localization Table? There is. Does it contain a translation for com.acme.spiffy.labelAssembly? For the sake of illustration, assume that it does not – perhaps other applications have French support, but the Spiffy application doesn't, so there aren't any com.acme.spiffy.* tokens in the French Localization Table. So we still need a value. Continuing down through the user's preferences, the next acceptable language is pt. Is there a pt localization table? No.
There is a Brazilian Portuguese translation, but that won't help a user from Portugal. Still looking, we move to the next language, ru. Is there a ru Localization Table? There is. Does it contain a translation for com.acme.spiffy.labelAssembly? It does: Ассамблея – so the token has a value, and that is what gets displayed in the UI.

Suppose that the user's preferences were more specific: fr-CA,pt-BR,ru-Cyrl-RU,sl-Latn-IT-nedis (Canadian French, Brazilian Portuguese, Russian in Cyrillic characters as used in Russia, Slovenian in Latin characters as used in Italy where the Natisone River dialect prevails). ThingWorx treats this by internally expanding the stack to include acceptable fall-back languages; in effect, each specific tag also falls back to its more general forms (fr-CA also tries fr, ru-Cyrl-RU also tries ru, and so on). Of the four languages that the user can accept and that the system defines (fr-CA, fr, pt-BR, ru), the first one containing the desired token determines its value in the UI.
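To make the lookup order concrete, here is a minimal sketch in plain JavaScript. This is not ThingWorx API code, and the table contents are invented for illustration; it simply mirrors the resolution just described: expand each preferred tag with its fallbacks, then return the value from the first table that defines the token, falling back to the Default table.

// resolveToken: "tables" maps a language tag to its token dictionary,
// "preferences" is the user's ordered preference list.
function resolveToken(tokenName, preferences, tables, defaultTable) {
    // Expand each preference with its fallbacks, e.g. "fr-CA" also tries "fr"
    var expanded = [];
    preferences.forEach(function (tag) {
        expanded.push(tag);
        var parts = tag.split("-");
        for (var i = parts.length - 1; i >= 1; i--) {
            expanded.push(parts.slice(0, i).join("-"));
        }
    });
    // The first table that defines the token wins
    for (var i = 0; i < expanded.length; i++) {
        var table = tables[expanded[i]];
        if (table && table[tokenName] !== undefined) {
            return table[tokenName];
        }
    }
    return defaultTable[tokenName]; // fall back to the Default table
}

// Example mirroring the walkthrough above: fr exists but has no Spiffy tokens, ru does
var tables = {
    "fr": {},
    "ru": { "com.acme.spiffy.labelAssembly": "Ассамблея" },
    "pt-BR": {}, "it": {}, "es": {}
};
resolveToken("com.acme.spiffy.labelAssembly", ["fr", "pt", "ru", "hi"], tables, {}); // returns "Ассамблея"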
Token and translation management for applications

While it's possible to edit localized values using the Localization Table editor in Composer, translations are usually done in bulk by subject-matter experts. While workflow will vary among organizations and projects, the following example illustrates the basic process.

ACME, Inc. is developing a ThingWorx application called Cambot for controlling security cameras. ACME's developer begins by constructing a mashup. In the first draft there is an area for the video widget, to be added later, and some button and label widgets for choosing and controlling a camera. The widgets have been given static labels: the text for the pan left button, for instance, has been entered simply as "Pan Left." But the Cambot app needs to be localized, and delivered in English, French, and Spanish.

The next step for the developer is to replace all of the static text with localization tokens. Clicking the globe icon to the left of the label property changes the text from static to tokenized, and adds a magic picker for localization tokens. This is a new application, and will need its own set of localization tokens. To create the one for "Pan Left," click the magic wand to open the tokens picker, and then click "+ Localization Token" to add a new one. A dialog opens prompting for the token name and its default (English) value.

Note that the token name has been namespaced for two reasons: to prevent conflicts with tokens from other sources, and to allow the developer and translators to work only with application-specific tokens. On clicking "Add Localization Token," the token is created and the default value saved.

After all of the tokens needed by the application have been defined, they and their values may be seen on the Localization Tokens editor for the Default Localization Table. By entering the namespace prefix in the filter textbox, the display can be restricted to the tokens for this application. As application development continues, and more tokens are required, this process is repeated. When tokens are defined, the developer should edit the Default Localization Table to supply Usage and Context information for each one.

Finally, it's time to do the translations for French and Spanish. First, create the localization tables for those languages, as described above in "Create a Localization Table." From the Import/Export menu, select EXPORT / To File. Then, depending on the file format desired, choose either the Entities or Single Entity tab.

For Entities, set the Collections value to Localization Tables, enter the namespace in the Token Prefix field, and choose XML as the Export Type. This will produce a single output file, containing a Localization Table element for every language defined on the system – in this example, English, French, and Spanish – but including only the com.acme.cambot tokens.

For Single Entity, choose the language to export, specify the prefix, and choose XML. This must be repeated, once for each language, and creates a separate XML file for each.

In either case, the translator should be supplied with the Default XML and the file for the language to be added. (Or, the tokens and values may be converted to and from other formats, depending on the requirements of the translation service. In any case, the translated values must be in the same XML format before they can be imported.) The Default export file will contain a <Rows> element like this:

<Rows>
    <Row>
        <usage><![CDATA[label]]></usage>
        <name><![CDATA[com.acme.cambot.buttonnext]]></name>
        <context><![CDATA[Button to switch view to next camera]]></context>
        <value><![CDATA[Next Camera]]></value>
    </Row>
    <Row>
        <usage><![CDATA[label]]></usage>
        <name><![CDATA[com.acme.cambot.buttonpanleft]]></name>
        <context><![CDATA[Button to pan view to the left]]></context>
        <value><![CDATA[Pan Left]]></value>
    </Row>
    <Row>
        <usage><![CDATA[label]]></usage>
        <name><![CDATA[com.acme.cambot.buttonpanright]]></name>
        <context><![CDATA[Button to pan view to the right]]></context>
        <value><![CDATA[Pan Right]]></value>
    </Row>
    <Row>
        <usage><![CDATA[label]]></usage>
        <name><![CDATA[com.acme.cambot.buttonprev]]></name>
        <context><![CDATA[Button to switch view to previous camera]]></context>
        <value><![CDATA[Prev. Camera]]></value>
    </Row>
    <Row>
        <usage><![CDATA[label]]></usage>
        <name><![CDATA[com.acme.cambot.buttontiltdown]]></name>
        <context><![CDATA[Button to tilt view down]]></context>
        <value><![CDATA[Tilt Down]]></value>
    </Row>
    <Row>
        <usage><![CDATA[label]]></usage>
        <name><![CDATA[com.acme.cambot.buttontiltup]]></name>
        <context><![CDATA[Button to tilt view up]]></context>
        <value><![CDATA[Tilt Up]]></value>
    </Row>
    <Row>
        <usage><![CDATA[label]]></usage>
        <name><![CDATA[com.acme.cambot.buttonzoomin]]></name>
        <context><![CDATA[Button to view more detail]]></context>
        <value><![CDATA[Zoom In]]></value>
    </Row>
    <Row>
        <usage><![CDATA[label]]></usage>
        <name><![CDATA[com.acme.cambot.buttonzoomout]]></name>
        <context><![CDATA[Button to expand view]]></context>
        <value><![CDATA[Zoom Out]]></value>
    </Row>
    <Row>
        <usage><![CDATA[label]]></usage>
        <name><![CDATA[com.acme.cambot.labelcamera]]></name>
        <context><![CDATA[Label for current camera name]]></context>
        <value><![CDATA[Camera:]]></value>
    </Row>
    <Row>
        <usage><![CDATA[label]]></usage>
        <name><![CDATA[com.acme.cambot.labelrecording]]></name>
        <context><![CDATA[Notice displayed when camera is recording]]></context>
        <value><![CDATA[Recording]]></value>
    </Row>
</Rows>

Whereas the French and Spanish export files will contain an empty <Rows/> element. This is where the new translations should be added. When the translations are ready, check that the <LocalizationTable> attributes (name, description, languageCommon, languageNative) are correct. Then import the new languages and inspect the results using the Localization Table editor.

Localization tables for an application may be bundled into an extension .zip file as other entities are handled; on import, the tokens for the application will be merged with existing localization tables for the same language. In the case that a brand new language is being introduced, note that many widgets use tokens from the System localization table. These will need to be translated as well – however, there is no easy way to restrict the set of tokens to those actually used. At present this is a manual filtering step. For existing languages, check to see if the System tokens have already been translated.

Important note on character encoding

In handling the export, transmission, and editing of XML files, it's important to ensure that UTF-8 encoding is maintained throughout. Encoding problems can show up either as errors when the file is re-imported, or as localized strings with question marks or other unexpected characters in place of accented letters. ThingWorx must run with UTF-8 as the default file encoding. Specify the Java option -Dfile.encoding=UTF-8 on launch. On Windows, in %CATALINA_HOME%\bin\setenv.bat, include this command:

    set CATALINA_OPTS=-Dfile.encoding=UTF-8
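On Linux, the equivalent (assuming a standard Tomcat layout; adjust the path to your installation) is to add the same option in $CATALINA_HOME/bin/setenv.sh:

    CATALINA_OPTS="$CATALINA_OPTS -Dfile.encoding=UTF-8"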
Tips for translators

Each token in an exported Localization Table XML file is defined by four fields: name, value, usage, and context. While name might be suggestive, it is actually arbitrary and should not be relied on. Value contains the natural language value for the token in another language (as agreed upon); translating from this language into the target language is the object. Usage hints at constraints on the size of the translated text. ThingWorx widgets do not in general resize to fit contents, so a button label, column heading, field label, etc. may be more difficult to translate. Because the default language is likely to be English, and English is a particularly compact language, the application may have been designed with narrow constraints. Such tokens should be marked as tricky by having a usage value of Label. Tokens with a usage of Message are for strings in more adaptable spaces, such as a textarea, warning message, etc. Context allows the application developer to provide translation hints. This may disambiguate synonyms, explain usage, discuss space constraints, specify tone of voice, or anything else applicable.

The interesting section of a language's XML representation is contained in the <Rows> element. For example:

 1  <Rows>
 2      <Row>
 3          <usage/>
 4          <name><![CDATA[com.acme.spiffy.labelPart]]></name>
 5          <context/>
 6          <value><![CDATA[Part]]></value>
 7      </Row>
 8      <Row>
 9          <usage><![CDATA[Label]]></usage>
10          <name><![CDATA[com.acme.spiffy.labelAssembly]]></name>
11          <context><![CDATA[Label identifying the name of the assembly being edited, appears as Assembly: external_name]]></context>
12          <value><![CDATA[Assembly]]></value>
13      </Row>
14      <Row>
15          <usage><![CDATA[Message]]></usage>
16          <name><![CDATA[com.acme.spiffy.warningIncomplete]]></name>
17          <context><![CDATA[Pop-up warning message on Save]]></context>
18          <value><![CDATA[A referenced part is missing, undefined, or not allowed in this assembly.]]></value>
19      </Row>
20  </Rows>

In this example, the token defined in lines 2 through 7 is missing the translation cues usage and context. The translator's only option is to intuit the sense of "Part" – is it a noun or a verb? – and attempt a reasonable guess. Access to a running example of the application would clearly be helpful. Lines 8 through 13 identify a label and describe how it is used; lines 14 through 19 do the same for a message. The translator would know that space for the translation of "Assembly" might be limited, but that the warning message can be expressed naturally.
A translator working on French might then edit this file as follows (again, only the <Rows> element is illustrated):

 1  <Rows>
 2      <Row>
 3          <usage/>
 4          <name><![CDATA[com.acme.spiffy.labelPart]]></name>
 5          <context/>
 6          <value><![CDATA[Partie]]></value>
 7      </Row>
 8      <Row>
 9          <usage><![CDATA[Label]]></usage>
10          <name><![CDATA[com.acme.spiffy.labelAssembly]]></name>
11          <context><![CDATA[Label identifying the name of the assembly being edited, appears as Assembly: external_name]]></context>
12          <value><![CDATA[Assemblée]]></value>
13      </Row>
14      <Row>
15          <usage><![CDATA[Message]]></usage>
16          <name><![CDATA[com.acme.spiffy.warningIncomplete]]></name>
17          <context><![CDATA[Pop-up warning message on Save]]></context>
18          <value><![CDATA[Une partie référencé est manquant, indéfini, ou non autorisés dans cette assemblée.]]></value>
19      </Row>
20  </Rows>

Note that only the <value> elements need to be translated – the context and usage are hints for the translator.

System tokens for international data formats

There are several tokens used for formatting that are also subject to localization:

Token | Default value | Notes
datepickerDayNamesMin | Su,Mo,Tu,We,Th,Fr,Sa | Day-of-week abbreviations used in the calendar heading.
datepickerFirstDay | 0 | First day of the week, 0 for Sunday, 1 for Monday...
datepickerMonthNames | January,February,March,April,May,June,July,August,September,October,November,December | Month names used in the calendar heading.
dateTimeFormat_Default | yyyy-MM-dd HH:mm:ss | Date and time format codes are defined by the moment.js library.
dateTimeFormat_FullDateTime | LLLL |
dateTimeFormat_LongDate | LL |
dateTimeFormat_LongDateTime | LLL |
dateTimeFormat_MediumDate | ll |
dateTimeFormat_ShortDate | l |
dateTimeFormat_TimeOnly | LT |
shortDateFormat | mm/DD/yyyy |

See also KCS Article CS241828 for details about numeric localization.

Allowing users to set their own language preferences

It may not be practical for the Administrator to set the language preferences for each user. An application may elect to expose the preferences editor to the end user, so that each user may select from the available languages those that are useful. To support this, ThingWorx Composer offers a Preferences widget in the Mashup builder. The widget may be inserted into any application wherever the designer chooses. It may be tied to a button or menu item, or simply appear in a layout with other widgets – perhaps along with application-specific preferences and other settings.

To use the Preferences widget, design a mashup for it to appear in. The minimal case would be a responsive page mashup containing nothing but the preferences widget. Add the Preferences widget by dragging it into place; a placeholder for the widget appears in the mashup. The widget may be customized by setting various properties. These properties are specific to the Preferences widget:

ShowClearRecent: Check this to include the option for the user to clear the Most Recently Used history. You may specify a localized tooltip.
ShowRestoreTabs: Check this to include the option for the user to set tab restoration to ask, always, or never. You may specify a localized tooltip.
ShowLanguages: Check this to include the option for the user to edit language preferences. You may specify a localized tooltip.
ShowUserName: Check this to label the preferences widget with the user's name.
ShowUserAvatar: Check this to label the preferences widget with the user's avatar, if one is defined.
Style: Style the preferences widget itself.
ButtonStyle: Style the Clear Recent and Edit buttons. These should probably be set to the application's primary button style.

After adding the Preferences widget to a mashup, provide some way for the user to navigate to it, consistent with the application's UI design. The mashup may be tied to a menu entry, or assigned to a Navigation widget, or included in a page within the application's workflow – whatever suits the application design. Here is an example of providing access to preferences through a button in the application's title area:

1) The Navigation widget is placed in the page header.
2) The MashupName property is set to the mashup containing a Preferences widget.
3) The TargetWindow property is set to Modal Popup.
4) For a more interesting UI, the button label is bound from the user's name.

At runtime, the example looks like the screenshot in the original post; note that there is also a menu item leading to the mashup with the Preferences widget.
View full tip
I had just finished writing an integration test that needed to update a Thing on a ThingWorx server using only classes in the Java JDK, with as few dependencies as possible, and before I moved on I thought I would blog about this example since it makes a great starting point for posting data to ThingWorx. ThingWorx has a Java SDK, which uses the WebSocket protocol and can be downloaded from the ThingWorx IoT Marketplace; it offers great performance and far more capabilities than this example. If, however, you are looking for the simplest, minimum-dependency example of delivering data to ThingWorx, this is it.

This example uses the REST interface to your ThingWorx server. It requires only classes already found in your JDK (JDK 7) and optionally includes the JSON Simple jar. References to this jar can be removed if you want to create your property update JSON object yourself. Below is the Java class.

package com.thingworx.rest;

import org.json.simple.JSONObject;

import javax.net.ssl.HostnameVerifier;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSession;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.net.HttpURLConnection;
import java.net.URL;
import java.security.KeyManagementException;
import java.security.NoSuchAlgorithmException;
import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;

/**
 * Author: bill.reichardt@thingworx.com
 * Date: 4/22/16
 */
public class SimpleThingworxRestPropertyUpdater {

    static {
        // Disable all SSL security checking (not for production!)
        try {
            disableSSLCertificateChecking();
        } catch (Exception e) {
            e.printStackTrace();
        }
        HttpsURLConnection.setDefaultHostnameVerifier(new HostnameVerifier() {
            public boolean verify(String hostname, SSLSession session) { return true; }
        });
    }

    public static void main(String[] args) {
        // like http://localhost:8080 or https://localhost:443
        String serverUrl = args[0];
        // Generate one of these from the Composer under Application Keys
        String appKey = args[1];
        String thingName = args[2];

        // You don't have to use the Simple JSON class, just pass a JSON string to restUpdateProperties()
        // This Thing has three properties, a (NUMBER), b (STRING) and c (BOOLEAN)
        JSONObject properties = new JSONObject();
        properties.put("a", new Integer(100));
        properties.put("b", "My New String Value");
        properties.put("c", true);
        String payload = properties.toJSONString();

        try {
            int response = restUpdateProperties(serverUrl, appKey, thingName, payload);
            System.out.println("Response Status=" + response);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public static int restUpdateProperties(String serverUrl, String appKey, String thingName, String payload) throws IOException {
        String httpUrlString = serverUrl + "/Thingworx/Things/" + thingName + "/Properties/*";
        System.out.println("Performing HTTP PUT request to " + httpUrlString);
        System.out.println("Payload is " + payload);

        URL url = new URL(httpUrlString);
        HttpURLConnection httpURLConnection = (HttpURLConnection) url.openConnection();
        httpURLConnection.setUseCaches(false);
        httpURLConnection.setDoOutput(true);
        httpURLConnection.setRequestMethod("PUT");
        httpURLConnection.setRequestProperty("Content-Type", "application/json");
        httpURLConnection.setRequestProperty("appKey", appKey);

        OutputStreamWriter out = new OutputStreamWriter(httpURLConnection.getOutputStream());
        out.write(payload);
        out.close();

        httpURLConnection.getInputStream();
        return httpURLConnection.getResponseCode();
    }

    /**
     * Disables SSL certificate checking for new instances of HttpsURLConnection.
     * This has been created to aid testing on a local box, not for use in production.
     */
    private static void disableSSLCertificateChecking() throws KeyManagementException, NoSuchAlgorithmException {
        TrustManager[] trustAllCerts = new TrustManager[] { new X509TrustManager() {
            public X509Certificate[] getAcceptedIssuers() {
                return null;
            }
            public void checkClientTrusted(X509Certificate[] arg0, String arg1) throws CertificateException {}
            public void checkServerTrusted(X509Certificate[] arg0, String arg1) throws CertificateException {}
        } };
        SSLContext sc = SSLContext.getInstance("TLS");
        sc.init(null, trustAllCerts, new java.security.SecureRandom());
        HttpsURLConnection.setDefaultSSLSocketFactory(sc.getSocketFactory());
    }
}

When run, it prints the request URL, the JSON payload, and the response status:

Performing HTTP PUT request to https://localhost:443/Thingworx/Things/SimpleThing/Properties/*
Payload is {"a":100,"b":"My New String Value","c":true}
Response Status=200

I have attached the full Gradle project that builds and runs this example class as a zip file to this article. When you download it, if you have Java JDK 7 already installed and on your path, you can run the example with the command:

On Linux or OSX: ./gradlew simplerest
Windows: gradlew.bat simplerest

Don't forget to edit the build.gradle file to use your server's URL and application key. You will also find the Thing used in this example in the entities folder of this project, and you can import it on your server to test it out. It is a Thing based on GenericThing with three properties, a (NUMBER), b (STRING) and c (BOOLEAN).
View full tip
A user can make a direct REST call to the ThingWorx platform, but when a website tries to make the same REST call, the platform server blocks the request because it is a cross-origin request. To allow this, the platform server needs to accept cross-origin requests from all (or specific) websites, which can be done by adding a CORS filter to the server. The CORS (Cross-Origin Resource Sharing) specification enables cross-origin requests from websites deployed on a different server. By enabling the CORS filter, a 3rd-party tool or website can retrieve data from a ThingWorx instance.

Follow the steps below to update the CORS filter:

1. Update the web.xml file (located in $CATALINA_HOME/conf/web.xml).

For a minimal configuration, add the code below:

<filter>
  <filter-name>CorsFilter</filter-name>
  <filter-class>org.apache.catalina.filters.CorsFilter</filter-class>
</filter>
<filter-mapping>
  <filter-name>CorsFilter</filter-name>
  <!-- "/*" opens the platform to all URL patterns; it is recommended to use limited patterns -->
  <url-pattern>/*</url-pattern>
</filter-mapping>

NOTE: the url-pattern /* opens the ThingWorx application to every domain.

For an advanced configuration, use the code below:

<filter>
  <filter-name>CorsFilter</filter-name>
  <filter-class>org.apache.catalina.filters.CorsFilter</filter-class>
  <init-param>
    <param-name>cors.allowed.origins</param-name>
    <param-value>http://www.customerwebaddress.com</param-value>
  </init-param>
  <init-param>
    <param-name>cors.allowed.methods</param-name>
    <param-value>GET,POST,HEAD,OPTIONS,PUT</param-value>
  </init-param>
  <init-param>
    <param-name>cors.allowed.headers</param-name>
    <param-value>Content-Type,X-Requested-With,accept,Origin,Access-Control-Request-Method,Access-Control-Request-Headers</param-value>
  </init-param>
  <init-param>
    <param-name>cors.exposed.headers</param-name>
    <param-value>Access-Control-Allow-Origin,Access-Control-Allow-Credentials</param-value>
  </init-param>
  <init-param>
    <param-name>cors.support.credentials</param-name>
    <param-value>true</param-value>
  </init-param>
  <init-param>
    <param-name>cors.preflight.maxage</param-name>
    <param-value>10</param-value>
  </init-param>
</filter>
<filter-mapping>
  <filter-name>CorsFilter</filter-name>
  <!-- "/*" opens the platform to all URL patterns; it is recommended to use limited patterns -->
  <url-pattern>/*</url-pattern>
</filter-mapping>

NOTE: update the cors.allowed.origins parameter with the desired web address.

2. Save the web.xml file.
3. Restart Tomcat.

For additional information, please follow the official Tomcat reference document: http://tomcat.apache.org/tomcat-7.0-doc/config/filter.html#CORS_Filter

I tested this using an online JavaScript editor (jsfiddle) and executing the script below:

<script>
var data = null;
var xhr = new XMLHttpRequest();
xhr.open("GET", "http://localhost:8080/Thingworx/Things", true);
xhr.withCredentials = true;
xhr.send();
</script>

The request was successful and the list of things was returned.
View full tip
By Tim Atwood and Dave Bernbeck, Edited by Tori Firewind
Adapted from the March 2021 Expert Session
Produced by the IoT Enterprise Deployment Center

The primary purpose of monitoring is to determine when your application may be exhausting the available resources. Knowledge of the infrastructure limits helps establish these monitoring boundaries, determining straightforward thresholds that indicate an app has gone too far. The four main areas to monitor in this way are CPU, Memory, Networking, and Disk.

For the CPU, we want to know how many cores are available to the application and potentially what the temperature is for each, or other indicators of overtaxation. For Memory, we want to know how much RAM is available for the application. For Networking, we want to know the network throughput, the available bandwidth, and how capable the network cards are in general. For Disk, we keep track of the read and write rates of the disks used by the application as well as how much space remains on them.

There are several major infrastructure categories which reflect common modes of operation for ThingWorx applications. One is Bare Metal, which relies upon the traditional use of hardware to connect directly between operating system and hardware, with no intermediary. Limits of the hardware in this case can be found in manufacturing specifications, within the operating system settings, and are normally listed somewhere within the IT department. The IT team is a great resource for obtaining these limits in general, and they also keep track of such things in VMware and virtualized infrastructure models.

VMware is an intermediary between the operating system and the hardware, and often its limits are determined based on the sizing of the application and set by the IT team when the infrastructure is established. These can often be resized as needed, and the IT team will be well aware of the limits here, often monitoring some of the performance themselves already. This is especially so if Cloud Providers are used, given that these are scaled-up virtualizations which are configured in easy-to-use cloud portals. These two infrastructure models can also be resized as needed.

Lastly, Containers can be used to designate operating system resources as needed, in a much more specific way that better supports the sharing of resources across multiple systems. Here the limits are defined in the configuration files or charts that define the container.

The difficulties here center around learning what the limits are, especially in the case of network and disk usage. Network bandwidth can fluctuate, and increased latency and network congestion can occur at random times for seemingly no reason. Most monitoring scenarios can therefore make do with collecting network send and receive rates, as well as disk read and write rates, measured on the server.

Cloud providers like Azure provide VM and disk sizing options that allow you to select exactly what you need, but for network throughput or network IO, the choices are not as varied. Network IO tends to increase with the size of the VM, proportional to the number of CPU cores and the amount of Memory, so this may mean that a VM has to be oversized for the user load, for the bulk of the application, in order to accommodate a large or noisy edge fleet. The next sections list the operating metrics and common thresholds used for each.
We often use these thresholds in our own simulations here at PTC, but note that each use case is different, and each situation should be analyzed individually before determining set limits of performance.

Generally, you will want to monitor: % utilization of all CPU cores, leaving plenty of room for spikes in activity; and total and used memory, ensuring total memory remains constant throughout and used memory remains below a reasonable percentage of the total, which for smaller systems (16 GB and lower) means leaving around 20% of memory for the OS, and for larger systems, usually around 3-4 GB.

For disks, monitor the read and write rates and ensure there is ample free space to absorb spikes and avoid any situation that might result in system downtime. For networking, monitor the send and receive rates, which should stay below 70% or so, again to leave room for spikes.

In any monitoring situation, consistently high utilization should trigger concern and an investigation into what's happening. Were new assets added? Has any recent change caused regression or other issues? Any recent changes should be inspected, and the infrastructure sizing should be considered as well. For ThingWorx-specific monitoring, we look at max queue sizes, entries performed, pool sizes, alerts, submitted task counts, and anything that might indicate some kind of data loss. We want the queues to be consistently cleared out to reduce the risk of losing data in the case of an interruption, and to ensure there is no reason for resource use to build up and cause issues over time.

In order for a monitoring set-up to be truly helpful, it needs to make certain information easily accessible to administrative users of the application. Any metric that is applicable to performance needs to be processed and recorded in a location that can be accessed quickly and easily from wherever the admins are. They should quickly and easily know the health of the application at a glance, without needing to drill down a lot to be made aware of issues. Likewise, the alerts that happen should be meaningful, with minimal false alarms, and it is best if this is configurable by the admins from within the application via some sort of rules engine (see the DGIS guide, soon to be released in version 9.1). The monitoring tool should also be able to save the system history and export it for further analysis, all in the name of reducing future downtime and creating a stable, enterprise system.

The dashboard shown in the original post is a good example of how to roll up a number of performance criteria into health indicators for various aspects of the application. There, a Green-Yellow-Red color-coding system flags issues like web requests taking longer than 30 seconds, 3 minutes, or more to respond.

Grafana is another application used for monitoring internally by our team. The easy dashboard creation feature and built-in chart modes make this tool super easy to get started with, and certainly easy to refer to from a central location over time. Setting this up is helpful for load testing and making an application ready, but it is also beneficial for continued monitoring post-go-live, which is why it is a worthy investment. Our team usually builds a link based on the start and end time of tests for each simulation performed, with all of the various servers being monitored by one Grafana server, one reference point.

Consider using PTC Performance Advisor (also called DynaTrace) to help monitor these kinds of things more easily.
When most administrators think of monitoring, they think of reading and reacting to dashboards, alerts, and reports. Rarely does the idea of benchmarking come to mind as a monitoring activity, and yet, having successful benchmarks of system performance can be a crucial part of knowing if an application is functioning as expected before there are major issues. Benchmarks also look at the response time of the server and can better enable  tracking of actual end user experience. The best  option is to automate such tests using JMeter or other applications, producing a daily snapshot of user performance that can anticipate future issues and create a more reliable experience for end users over time.   Another tool to make use of is JMeter, which has the option to build custom reports. JMeter is good for simulating the user load, which often makes up most of the server load of a ThingWorx application, especially considering that ingestion is typically optimized independently and given the most thought. The most unexpected issues tend to pop up within the application itself, after the project has gone live.   Shown here (right) is an example benchmark from a Windchill application, one which is published by PTC to facilitate comparison between optimized test systems and real life performance. Likewise, DynaTrace is depicted here, showing an automated baseline (using Smart URL Detection) on Response Time (Median and 90th percentile) as well as Failure Rate. We can also look at Throughput and compare it with the expected value range based on historical throughput data. Monitoring typically increases system performance  and availability, but its other advantage is to provide faster, more effective troubleshooting. Establish a systematic process or checklist to step through when problems occur, something that is organized to be done quickly, but still takes the time to find and fix the underlying problems. This will prevent issues from happening again and again and polish the system periodically as problems occur, so that the stability and integrity of the system only improves over time. Push for real solutions if possible, not band-aids, even if more downtime is needed up front; it is always better to have planned downtime up front than unplanned downtime down the line. Close any monitoring gaps when issues do occur, which is the valid RCA response if not enough information was captured to actually diagnose or resolve the issue.   PTC Tech Support developed a diagnostic data gathering query for Oracle that customers can use, found in our knowledgebase. This is an example of RCA troubleshooting that looks at different database factors, reporting on which queries perform the worst  based on inputted criteria. Another example of troubleshooting is for the Java JVM, where we look at all of the things listed here (below) in an automated, documented process that then generates a report for easy end user consumption.   Don’t hesitate to reach out to PTC Technical Support in advance to go over your RCA processes, to review benchmark discrepancies between what PTC publishes and what your real-life systems show, and to ensure your monitoring is adequate to maintain system stability and availability at all times.  
View full tip
Hello everyone,

If you're like me, you're always looking for the optimal or most efficient way to do something. Today, I'll share a quick trick and two tips to help you develop your awesome IoT solutions with ThingWorx.

#1. Trick: Finding Dependency References

We are targeting a new "Where Used" Composer feature in an upcoming release of the platform to help you find your references of bindings, properties, mashups, and services. In the meantime, did you know you can get some of that information yourself today with a quick service call?

As of ThingWorx 8.5, a new service is present on Project entities; the service crawls the contents of your project and highlights the full external dependency list to help you find references. On any Project entity, ListExternalDependencies() shows output like this in 9.0:

(Screenshot: ListExternalDependencies() output)

For each entity ("A") in the project, the service calls out any entities ("B") that it is referencing and the referenced dependency's extension package if present. It will only find external dependencies to the project and will not currently list dependencies within the project. Notice also in the infotable output, the last column, "where used," even lists the type of reference (e.g. coded in JavaScript, Mashup Data, Resource, Property binding, etc.). Pretty handy!

(Screenshot: Code reference from "Where Used" service output)

Click this link for additional help content that explains the service output and usage. Again, it only searches for entity references outside of your current project scope. Also, this service will stop crawling the dependency hierarchy when it finds items in a project, since its current purpose is packaging. Consider if you have Thing T1 in Project P1, which uses ThingTemplate TT2 that is not in a Project. TT2, in turn, uses ThingShape TS3, which is also not in a Project. Calling ListExternalDependencies() on Project P1 will find both TT2 and TS3. If, however, we then put TT2 in a Project P2, then call the List() service on Project P1, the scan will stop at TT2 and NOT identify TS3. The reason for this is that the service assumes that when you package P2, it will find the orphan TS3.

We know this doesn't cover all "where used" type use cases, so there is still a planned feature to really complete this concept on the platform. But even in the 8.5 or 9.0 releases, if you wanted to see entity references (inside and outside of its project) for a single Thing A, you could quickly assign Thing A to a new project and run the ListExternalDependencies() service to find all of its references, and then assign Thing A back to its original project once you've found what you are looking for. Moving entities into projects just for searching is not something I would recommend doing often, but it can work in a pinch!

#2. Tip: JavaScript looping

When iterating through data from infotables, use a .forEach() loop! The original post compares four code options and their average performance on the Rhino engine (chart: Infotable looping performance). Very clearly, the .forEach() syntax is the most performant and, in my opinion, the cleanest to read. Try it out in your app! We plan to update our help documentation with more of these ThingWorx JavaScript best practices in 9.1. We also plan to provide some updates to our Code Snippets features in an upcoming Composer release so we can recommend these good practices right from the start.
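As a quick illustration of the pattern recommended in tip #2, here is a minimal sketch of a service snippet. It assumes a hypothetical INFOTABLE input named myInfoTable with a "name" field, and that your platform version supports calling .forEach() directly on the rows collection, as the tip implies; if yours does not, convert the rows to a JavaScript array first.

// Minimal sketch: collect the "name" field of every row (myInfoTable is a hypothetical INFOTABLE parameter)
var names = [];
myInfoTable.rows.forEach(function (row) {
    names.push(row.name);
});
logger.debug("Found " + names.length + " names");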
#3. Tip: Code optimizations
As with many performance bottlenecks, it is those pesky loops that can really amplify degradation. Here are two ThingWorx patterns for your consideration:

Wrong Way:

In this block of code, we set up the property names we are looking for, and then loop through them to build a logger message. While creating each logger message, we make an API call to query all Things for a Thing named me.name and execute a service call, GetMetadataAsJSON(), on that Thing, which walks the hierarchy to build a JSON representation of itself. In this trivial example, we make these same two API calls for each item in the propertyNames list, even though the Thing reference and JSON definitions never change. Pretty expensive.

Correct Way:

Notice in this example, we declare not only the propertyNames outside of the loop, but also the propertyDefinitions. This will significantly improve performance and reduce the number of API calls and round trips to the application server. Again, this is a trivial example, but the same idea pays off in larger and more complex code areas. (A rough, script-form sketch of both patterns appears at the end of this post.)

If you like these quick tips, check out more best practices here! Got a tip of your own? Have a question on how to tackle something? As always, just Ask Kaya!

Stay connected!
Kaya
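For reference, here is a rough, hypothetical sketch of the two patterns described in tip #3. The property names and log messages are placeholders for illustration; only me.name and GetMetadataAsJSON() come from the tip itself, and both snippets are alternative versions of the same service body:

// Wrong way (sketch): the Thing lookup and the metadata call are repeated on every iteration
var propertyNames = ["TemperatureF", "Humidity", "Pressure"];
for (var i = 0; i < propertyNames.length; i++) {
    var thing = Things[me.name];
    var propertyDefinitions = thing.GetMetadataAsJSON();
    // ... use propertyDefinitions here; for the sketch we just log the property name
    logger.info("Building message for " + propertyNames[i] + " on " + me.name);
}

// Correct way (sketch): the Thing reference and metadata are resolved once, outside the loop
var propertyNames = ["TemperatureF", "Humidity", "Pressure"];
var thing = Things[me.name];
var propertyDefinitions = thing.GetMetadataAsJSON();
propertyNames.forEach(function (propertyName) {
    logger.info("Building message for " + propertyName + " on " + me.name);
});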
View full tip
Below is where I will discuss the simple implementation of constructing a POST request in Java. I have embedded the entire source at the bottom of this post for easy copy and paste.

To start, you will want to define the URL you are trying to POST to:

String url = "http://127.0.0.1:80/Thingworx/Things/Thing_Name/Services/Service_to_Post_to";

Breaking down this url String:
http:// -- a non-SSL connection is being used in this example
127.0.0.1:80 -- the address and port that ThingWorx is hosted on
/Thingworx -- this bit is necessary because we are talking to ThingWorx
/Things -- Things is used as an example here because the service I am posting to is on a Thing. Some alternatives to substitute in are ThingTemplates, ThingShapes, Resources, and Subsystems
/Thing_Name -- substitute in the name of your Thing where the service is located
/Services -- we are calling a service on the Thing, so this is how you drill down to it
/Service_to_Post_to -- substitute in the name of the service you are trying to invoke

Create a URL object:

URL obj = new URL(url);

Class URL, included in the java.net.URL import, represents a Uniform Resource Locator, a pointer to a "resource" on the Internet. Adding the port is optional, but if it is omitted port 80 will be used by default.

Define an HttpURLConnection object to later open a single connection to the URL specified:

HttpURLConnection con = (HttpURLConnection) obj.openConnection();

Class HttpURLConnection, included in the java.net.HttpURLConnection import, provides a single instance to connect to the URL specified. The openConnection method creates a new connection instance, but no connection is actually made at this point.

Set the type of request and the header values to pass:

con.setRequestMethod("POST");
con.setRequestProperty("Accept", "application/json");
con.setRequestProperty("Content-Type", "application/json");
con.setRequestProperty("appKey", "80aab639-ad99-43c8-a482-2e1e5dc86a2d");

You can see that we are performing a POST request, passing in an Accept header, a Content-Type header, and a ThingWorx-specific appKey header.

Pass true into the setDoOutput method because we are performing a POST request; when sending a PUT request we would pass in true as well. When there is no request body being sent, false can be passed in to denote there is no "output" and we are making a GET request.

con.setDoOutput(true);

Create a DataOutputStream object that wraps around the con object's output stream. We call the flush method on the DataOutputStream object to push the REST request from the stream to the url defined for POSTing, and immediately close the DataOutputStream object because we are done making the request.

DataOutputStream wr = new DataOutputStream(con.getOutputStream());
wr.flush();
wr.close();

The DataOutputStream class lets the Java SDK write primitive Java data types to the con object's output stream.

The next line returns the HTTP status code from the request. This will be something like 200 for success or 401 for unauthorized.

int responseCode = con.getResponseCode();

The final block of this code uses a BufferedReader that wraps an InputStreamReader, which in turn wraps the con object's input stream (the byte response from the server). The BufferedReader object is used to iterate through each line in the response and append it to a StringBuilder object. Once that has completed, we close the BufferedReader object and print the response we just retrieved.
BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()));
String inputLine;
StringBuilder response = new StringBuilder();
while ((inputLine = in.readLine()) != null) {
    response.append(inputLine);
}
in.close();
System.out.println(response.toString());

The InputStreamReader decodes bytes to character streams using a specified charset. The BufferedReader provides a more efficient way to read characters from an InputStreamReader object. The StringBuilder object is an unsynchronized way of building a String representation of the content read from the BufferedReader object; StringBuffer can be used instead in cases where multi-threaded synchronization is necessary.

Below is the block of code in its entirety from the discussion above:

public void sendPost() throws Exception {
    String url = "http://127.0.0.1:80/Thingworx/Things/Thing_Name/Services/Service_to_Post_to";
    URL obj = new URL(url);
    HttpURLConnection con = (HttpURLConnection) obj.openConnection();

    // add request headers
    con.setRequestMethod("POST");
    con.setRequestProperty("Accept", "application/json");
    con.setRequestProperty("Content-Type", "application/json");
    con.setRequestProperty("appKey", "80aab639-ad99-43c8-a482-2e1e5dc86a2d");

    // send POST request
    con.setDoOutput(true);
    DataOutputStream wr = new DataOutputStream(con.getOutputStream());
    wr.flush();
    wr.close();

    int responseCode = con.getResponseCode();
    System.out.println("\nSending 'POST' request to URL : " + url);
    System.out.println("Response Code : " + responseCode);

    // read the response body
    BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()));
    String inputLine;
    StringBuilder response = new StringBuilder();
    while ((inputLine = in.readLine()) != null) {
        response.append(inputLine);
    }
    in.close();

    // print result
    System.out.println(response.toString());
}
View full tip
The App URI in the ThingWorx Remote Thing Tunnel configuration specifies the endpoint of the tunnel. The default value (/Thingworx/tunnel/vnc.jsp) points to the built-in ThingWorx VNC client that can be downloaded through the Remote Access Widget in a Mashup to provide VNC remote desktop access. Leaving the App URI blank will result in the tunnel being connected to the listen port on the user's machine as specified in the Remote Access Widget. In this case the user must supply the application client (e.g. an SSH client) in order to connect to the tunnel endpoint.
View full tip
Hi all,

ThingWorx contains lots of useful functionality for your services (last count is 339 Snippets in ThingWorx 8.5.2). These snippets are an important part of the platform's application-building capabilities, and most of them are simple enough to understand based on their name and the description that appears when hovering over them.

I have noticed, however, that in some cases platform users are not aware of their full capabilities. With this in mind, some time ago I started creating a Snippet Guide for my personal use, which I'm now sharing with the community. It contains additional explanations, documentation links and sample source code tested by me.

Please bear in mind that it was written for an earlier ThingWorx version and I did not have enough time to update it for 8.5.x, but it should work the same here as well.

This enhanced documentation is not supported by PTC, so please: 1. do not open a Tech Support ticket based on the content of this document; instead, 2. comment on this thread if there are things I can improve.

Happy New Year!
View full tip
The Axeda Platform has long had the ability to write custom logic to retrieve, manipulate and create data. In the current versions of the Platform, there are two classes of API, Version 1 (v1) and Version 2 (v2). The v1 APIs allow a developer to work with data on the Platform, but all of them are subject to the maxQueryResults configuration property, which by default limits the number of results per query to 1000. For some data sets this is inadequate. Enter the v2 API, which introduces pagination.

One of the first things a new user does when exploring the v2 API is something like the following:

HistoricalDataItemValueCriteria criteria = new HistoricalDataItemValueCriteria()
criteria.assetId = '9701'
criteria.startDate = '2014-07-23T12:33:00Z'
criteria.endDate = '2014-07-23T12:44:00Z'

DataItemBridge dbridge = com.axeda.sdk.v2.dsl.Bridges.dataItemBridge
FindDataItemValueResult results = dbridge.findHistoricalValues(criteria)

And they get frustrated when they only get the same 100 rows of data. Repeat after me: v2 API invocations (find operations) are limited to batches of 100 results at a time! But that's not the end of the story. With a small change, the query above can be tuned to iterate through all results that match the search criteria:

HistoricalDataItemValueCriteria criteria = new HistoricalDataItemValueCriteria()
criteria.assetId = '9701'
criteria.startDate = '2014-07-23T12:33:00Z'
criteria.endDate = '2014-07-23T12:44:00Z'
criteria.pageNumber = 1
criteria.pageSize = 100 // Default.

DataItemBridge dbridge = com.axeda.sdk.v2.dsl.Bridges.dataItemBridge
FindDataItemValueResult results = null
def tcount = 0

// Keep fetching pages (advancing pageNumber each time) until every matching result has been counted
while ( (results = dbridge.findHistoricalValues(criteria)) != null && tcount < results.totalCount) {
  results.dataItems.each { res ->
    tcount++
  }
  criteria.pageNumber = criteria.pageNumber + 1
}

I currently recommend that people avoid using the count() or countDomainObjectByCriteria() functions if you're then going to call a find. Currently both the count*() and find functions compute total results, which roughly doubles the execution time of those two calls. Total count is only computed when running the first find() operation, so the code pattern above is so far the most efficient way I've seen to run these operations on the platform.

So having covered how to do this in code (custom objects), let's turn our attention to the REST APIs - the other entry point for using these capabilities. The REST API doesn't offer a count*() function, but the first find() invocation (if using XML) brings back totalCount as part of the result set. You can use this in your application to decide how many times to call the REST endpoint to retrieve your data.
So for the example above:

POST: https://customer-sandbox.axeda.com/services/v2/dataItem/findHistoricalValues

HEADERS:
Content-Type: application/xml
Accept: application/xml

BODY:
<?xml version="1.0" encoding="UTF-8"?>
<HistoricalDataItemValueCriteria xmlns="http://www.axeda.com/services/v2" pageSize="100" pageNumber="1">
  <assetId>9701</assetId>
  <startDate>2014-07-23T12:33:00Z</startDate>
  <endDate>2014-07-23T12:35:02Z</endDate>
</HistoricalDataItemValueCriteria>

RESULTS:
<v2:FindAssetResult totalCount="1882" xmlns:v2="http://www.axeda.com/services/v2" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <v2:criteria pageSize="100" pageNumber="1">
    <v2:name>*</v2:name>
    <v2:propertyNames/>
  </v2:criteria>
  <v2:assets>
  </v2:assets>
</v2:FindAssetResult>

Or JSON:

POST: https://customer-sandbox.axeda.com/services/v2/dataItem/findHistoricalValues

HEADERS:
Content-Type: application/json
Accept: application/json

BODY:
{
  "id": 9701,
  "startDate": "2014-07-23T12:33:00Z",
  "endDate": "2014-07-23T12:35:02Z",
  "pageNumber": 1,
  "pageSize": 2
}

And that's how you work around the maxQueryResults limitation of the v1 APIs. Some APIs do not currently have matching v2 Bridges (e.g. MobileLocation and DataItemAssociation), in which case the limitation will still apply. Creative use of the query Criteria will allow you to work around these limitations as we continue to improve the v2 API.

Regards,
-Chris
View full tip
ThingWorx 8.4 is here!

We know you’ve been patient, as we’ve released sneak peeks on Ask Kaya of various new or updated features, including:
• InfluxDB as New Time Series Data Persistence Provider
• Responsive Mashup Layout with New Layout Editor
• ThingPresence to Address Assets that Always Appear Offline
• Functions to Allow Expression & Validator Widgets to No Longer Crowd Canvases at Design Time
• Property Transforms to Do Statistical Transforms for Property Values

No longer are you forced to sit idly as we give you glimpses of the new functionality without the ability to play with it. Now that it’s available, go run with the wind!

To discover even more features and details, check out the release notes.

ThingWorx 8.4 can be downloaded here.

Let us know what you think of the new release below!

- Kaya
View full tip
Style theming is a Beta feature that allows you to customize the look of your mashups and widgets.

A style theme is a set of styling properties for elements such as text, colors, and lines that you can apply to a mashup. You can manage styles for multiple mashups more easily by using style themes. Style themes apply at the mashup level, unlike style definitions, which apply at the widget level. When you apply a style theme to a mashup, all embedded widgets and mashups derive styling properties from the style theme of the top-level mashup.

You can perform the following tasks:
• Create and modify style themes.
• Apply a style theme to one or more mashups.
• Reuse a style theme by using Import/Export.
• Define custom CSS for a style theme. CSS rules are applied to all mashups that use the style theme.

Style theme support is limited to the following types of widgets:
• New widgets — You can only apply a style theme to these widgets.
• Hybrid widgets — You can use style definitions or a style theme to style these widgets.

NOTE: You can enable or disable style themes for hybrid widgets by using the (BETA) UseThemeForHybrids property in the mashup properties panel. However, you cannot use style definitions with web component widgets.

To read more about the Base Theme and about creating and modifying style themes, refer to the ThingWorx Help Center.
View full tip
Exciting news! ThingWorx now has improved support for Docker containers to help you manage CI/CD, improve development efficiency in your organization and save costs. Check out these FAQs below and, as always, reach out to me if you have any additional questions.

Stay connected,
Kaya

FAQs: ThingWorx Docker Containers

What are Docker Containers?
From Docker.com: “a Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings”. Learn more here.

What's the difference between Docker containers and VMs?
Containers are an abstraction at the app layer that packages code and dependencies together, whereas Virtual Machines (VMs) are an abstraction of physical hardware, turning one server into many servers. There are some great discussions on this on Stack Overflow (Containers vs. VMs).

How can I build ThingWorx Docker images?
Check out the Building ThingWorx 8.3 Docker Images Guide or watch this video on how to build and test Docker containers.

How does PTC support building ThingWorx Docker images?
PTC provides the ability for customers and partners to build ThingWorx Docker images. A customer can download the Dockerfiles and scripts packaged as a zip folder from the PTC Software Downloads Portal under “ThingWorx Platform,” then “Release 8.3,” then “ThingWorx Dockerfiles.” (Please note that you must be logged in for the link to function properly.) The zip folder contains the Dockerfiles, a template jar, and scripts to fetch Tomcat and ThingWorx WAR files using the CLI. Java must be downloaded manually from the vendor's website. We also provide an instructional guide called “Build ThingWorx Docker Images,” available on the Reference Documents page on the Support Portal.

How are ThingWorx Docker images different from the usual delivery media of WAR files?
The WAR file delivery is typically accompanied by an installation guide that contains the manual steps for creating the VM or bare-metal environment. That guide includes instructions for the administrator to manually install the prerequisites, including Tomcat, Java, and ThingWorx platform settings files. To deploy and run the WAR file, the administrator follows the guide to create the runtime environment on an OS. In contrast, the Dockerfile build in this delivery automates the creation of a Docker image once supplied with the prerequisites.

Do you have any reference deployment and guidance?
Yes, you can refer to our blog post to learn how to deploy and run ThingWorx Docker containers on your existing Kubernetes environment.

Is there any recommendation on which Container Orchestrator as a Service (CaaS) a customer should run ThingWorx Foundation Docker container images on?
You can use Docker Compose for testing, but it is generally not suggested for production deployment use cases. In a production environment, customers should use container orchestrators such as Kubernetes, OpenShift, Azure Kubernetes Service (AKS), or Amazon Elastic Container Service for Kubernetes (Amazon EKS) to deploy and manage ThingWorx Docker images.

What are the skill sets required?
Familiarity with the OS CLI and Docker tools is required to build the ThingWorx Docker images. Familiarity with Docker Compose is needed to run the resulting containers and test the builds.
We don’t recommend Docker Compose for production use, but when using it for local testing and demo purposes, users can rapidly install ThingWorx and get it up and running in minutes. We expect PTC partners and customers who want to run ThingWorx containerized instances in their production environment to possess the required skill sets within their DevOps team.

How is ThingWorx licensing handled with the Docker images?
By default, a container created from these Docker images starts up in a limited mode with no license supplied. You can configure your username and password for the PTC licensing portal to automatically load a license via environment variables passed into the container on startup. Additionally, you can mount a volume to the /ThingworxPlatform directory, which contains your license file, or use it to retrieve a license request. To keep your Host ID consistent, ensure that the /ThingworxStorage and /ThingworxPlatform directories are persisted and not removed with individual container restarts. More detailed instructions can be found in the build guide or in a Kubernetes blog post.

Is Docker free? What version of Docker does PTC support for ThingWorx?
Docker is open-source and licensed under the Apache 2 license. Information on Docker licensing can be found here. The following Docker versions are required:
• Docker Community Edition (docker-ce): Version 18.05.0-ce is recommended. To install the Docker Community Edition on your system, follow the instructions for your operating system on the Docker website here.
• Docker Compose (docker-compose): Version 1.17.1 is recommended. To install Docker Compose on your system, follow the instructions for your operating system on the Docker website here.

What persistence providers are currently supported?
PTC provides the ability to build ThingWorx Foundation containers for the following supported persistence providers:
• H2
• Microsoft SQL Server
• PostgreSQL
Additional persistence providers will be added to the Docker build delivery as the ThingWorx Foundation Platform adds support for new databases in future releases.

What are some of the security best practices?
For production use, customers are strongly advised to secure their Docker environments by following all the recommendations provided by Docker. Review and implement the best practices detailed at https://docs.docker.com/engine/security/security/.

Can we build Docker images for ThingWorx High Availability (HA) architecture?
Yes. ThingWorx Dockerfiles are provided for both the basic ThingWorx deployment architecture and the HA ThingWorx deployment architecture.

How easy is the rehosting and upgrading of ThingWorx releases on Docker with existing data?
In a Kubernetes environment, data is kept in a separate volume and can be attached to different containers. When one container dies, the data can be attached to a different container and the container should start without issue. For more information, please refer to the upgrade section of the Building ThingWorx 8.3 Docker Images Guide.

Is it okay to use Docker exec and access the bash shell to make config changes, or should I always rebuild the image and re-deploy?
Although using Docker exec to gain access to the container internals is useful for testing and troubleshooting issues, any changes made will not be saved after a container is stopped. To configure a container's environment, variables are passed in during the start process.
This can be done with Docker start commands, using compose files with environment variables defined, or with Helm charts. More detailed instructions can be found in the build guide or in this blog post.

What if there are issues? Should I call PTC Technical Support?
We are providing the scripts and reference documents solely to empower our community to build ThingWorx Docker images. We believe that customers using Docker in their production processes would have the expertise to manage running Docker containers themselves. If there are any issues or questions regarding the build scripts provided in the PTC official downloads portal, customers can contact PTC Technical Support at 1-800-477-6435 or visit us online at http://support.ptc.com. PTC does not provide support for orchestration troubleshooting.

What can you share about future roadmap plans?
As we are enabling our customers and partners to build ThingWorx Foundation Platform Docker images, we plan to do the same for upcoming products such as ThingWorx Integration & Orchestration, ThingWorx Analytics, upcoming persistence providers such as InfluxDB, and many more. We also plan to provide additional reference architecture examples and use cases to help developers understand how to use Docker containers in their DevOps and production environments.

Where can I learn more about Docker containers and container orchestrators?
See these resources below for additional information:
https://training.docker.com/
https://kubernetes.io/docs/tutorials/online-training/overview/
View full tip
The video below walks through how to publish a Model from Analytics Builder into Analytics Manager using the connector named TW.AnalysisServices.AnalyticsServer.AnalyticsServerConnector.
View full tip
This video gives an introduction to the Descriptive Services: what they are, how to install them, how to configure them, and how to use them.
View full tip